- The Royal Wolverhampton NHS Trust (RWT) will partner with British healthtech company Medopad, whose platform will serve as the Trust's official remote patient management solution, to improve patient outcomes, adherence, and operational efficiency across primary, secondary and community care. Medopad's remote patient management platform for RWT will launch this spring with projects in cardiology, hypertension, diabetes and other areas. This will be the first deployment of Medopad in a primary care setting and will form part of a population health initiative to help patients diagnosed with hypertension better manage their condition.
- The NLEstimate macro allows you to estimate one or more linear or nonlinear combinations of parameters from any model for which you can save the model parameters and their variance-covariance matrix. Most modeling procedures that offer ESTIMATE, CONTRAST, or LSMEANS statements provide only for estimating or testing linear combinations of model parameters. However, common estimation problems often involve nonlinear combinations, particularly in generalized models with nonidentity link functions such as logistic and Poisson models.
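The computation behind estimating a nonlinear combination of parameters is typically the delta method. A minimal Python sketch (not SAS) of the idea, using a made-up logistic-regression slope and variance to estimate an odds ratio:

```python
import math

# Hypothetical saved results from a fitted logistic model:
# a slope estimate and its variance (both values made up).
beta = 0.9          # estimated log odds ratio
var_beta = 0.04     # variance of the estimate

# Nonlinear combination of interest: the odds ratio, g(beta) = exp(beta).
odds_ratio = math.exp(beta)

# Delta method: Var[g(beta)] ~= g'(beta)^2 * Var[beta],
# and here g'(beta) = exp(beta).
se_or = math.exp(beta) * math.sqrt(var_beta)

# Wald 95% confidence interval on the odds-ratio scale.
lo, hi = odds_ratio - 1.96 * se_or, odds_ratio + 1.96 * se_or
print(round(odds_ratio, 3), round(se_or, 3))
```

The same pattern extends to any differentiable combination of several parameters, with the gradient vector and full variance-covariance matrix replacing the scalar derivative and variance.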
- This sample combines macro programming with PROC FREQ and DATA step logic to count the number of missing and non-missing values for every variable in a data set, storing the results in a data set. Two methods for structuring the resulting data set are shown.
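The counting logic itself is simple; a Python stand-in (not the SAS sample) with a made-up data set, producing one record per variable:

```python
# A small stand-in for a SAS data set: a list of records where
# missing values are represented as None. Names and values are made up.
rows = [
    {"age": 34, "bmi": None, "sex": "F"},
    {"age": None, "bmi": 22.5, "sex": "M"},
    {"age": 51, "bmi": 27.1, "sex": None},
    {"age": 29, "bmi": None, "sex": "F"},
]

# Count missing and non-missing values for every variable,
# storing the result as one record per variable.
counts = [
    {
        "variable": v,
        "n_missing": sum(1 for r in rows if r[v] is None),
        "n_nonmissing": sum(1 for r in rows if r[v] is not None),
    }
    for v in rows[0]
]
for rec in counts:
    print(rec)
```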
- The %VARTEST macro provides a one-tailed test of the null hypothesis that the variance of normally distributed data equals a specified nonzero constant, along with point and confidence interval estimates. NOTE: The CIBASIC option in PROC UNIVARIATE provides one- and two-sided confidence intervals for the standard deviation and variance. PROC TTEST provides a confidence interval for the standard deviation using either of two methods. PURPOSE: The %VARTEST macro tests the null hypothesis that the variance (or standard deviation) of a set of independent and identically normally distributed values is equal to a specified constant against the alternative that the variance (or standard deviation) exceeds the constant. The macro also provides point and confidence interval estimates for the variance and standard deviation.
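The underlying test statistic is the standard chi-square variance test. A Python sketch (not the macro itself), with a made-up sample and null value:

```python
import statistics

# Hypothetical sample assumed to be i.i.d. normal (values made up).
x = [10.2, 9.8, 11.1, 10.5, 9.4, 10.9, 10.0, 10.7]
sigma0_sq = 0.25          # null-hypothesis variance

n = len(x)
s_sq = statistics.variance(x)           # sample variance (n-1 divisor)

# Test statistic: (n-1) * s^2 / sigma0^2, which follows a chi-square
# distribution with n-1 degrees of freedom under the null hypothesis.
chi_sq = (n - 1) * s_sq / sigma0_sq

# The p-value and confidence limits require chi-square tail
# probabilities/quantiles, which the standard library does not provide,
# so only the statistic and its degrees of freedom are computed here.
print(n - 1, round(chi_sq, 3))
```

A large value of the statistic relative to the upper chi-square quantile supports the one-sided alternative that the variance exceeds the constant.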
- NOTE: Beginning in SAS 9.2, the QIC statistic is produced by PROC GENMOD. Beginning in SAS 9.4 TS1M2, QIC is available in PROC GEE. PURPOSE: The %QIC macro computes the QIC and QICu statistics proposed by Pan (2001) for GEE (generalized estimating equations) models. These statistics allow comparisons of GEE models (model selection) and selection of a correlation structure.
- The SELECT macro performs model selection methods for categorical-response models that can be fit in PROC LOGISTIC. These include models using the logit, probit, cloglog, cumulative logit, or generalized logit links. The macro supports binary as well as ordinal and nominal multinomial models. Standard model selection is done by choosing candidate effects for entry to or removal from the model according to their significance levels. After completion, the set of models selected at each step of this process is sorted by the selected criterion: AUC, R-square, max-rescaled R-square, AIC, or BIC. The requested number of best models according to that criterion is then displayed.
- Many of us are presented with SAS data sets where codes such as 9999 are intermingled with real data values. Sometimes these codes represent missing values; sometimes they represent other non-data values. If you run SAS procedures on numeric variables in such a data set, you will, obviously, produce nonsense. What we present here is a macro, called FIND_VALUE, that automatically checks all the numeric variables in a SAS data set for a specific data value and produces a report showing which variables contain this special value and how many times it appeared. You can download this macro and many other useful macros from the SAS Companion Web Site: support.sas.com/publishing. Search for my book, Cody's Data Cleaning Techniques, Second Edition, and then click on the link to download the programs and data files from the book.
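A FIND_VALUE-style scan is easy to sketch outside SAS. A Python illustration with made-up variable names and a 9999 sentinel code:

```python
# Check every numeric variable in a small made-up data set for a
# sentinel code (9999 here) and report, for each variable that
# contains it, how many times it appears.
rows = [
    {"weight": 150, "height": 9999, "score": 85},
    {"weight": 9999, "height": 68, "score": 9999},
    {"weight": 182, "height": 9999, "score": 91},
]
special = 9999

report = {}
for var in rows[0]:
    hits = sum(1 for r in rows if r[var] == special)
    if hits:                    # report only variables containing the code
        report[var] = hits
print(report)
```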
- NOTE: Beginning in SAS 9, you can use the ODS GRAPHICS ON; statement and the PLOTS=SCATTER(ELLIPSE=MEAN) or PLOTS=SCATTER(ELLIPSE=PREDICTED) option in the PROC CORR statement to get confidence ellipse plots about the mean or individual values. PURPOSE: The %CONELIP macro generates confidence ellipses for bivariate normal data. It can either create ellipses for the data or ellipses about the mean.
- NOTE: This macro is obsolete beginning with SAS 8.0. Use the STDIZE procedure in SAS/STAT software beginning in that release. PURPOSE: The %STDIZE macro standardizes one or more numeric variables in a SAS data set by subtracting a location measure and dividing by a scale measure. A variety of location and scale measures are provided, including estimates that are resistant to outliers and clustering.
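The location/scale pattern is easy to illustrate. A Python sketch (made-up data) showing the classic mean/standard-deviation standardization alongside an outlier-resistant median/MAD variant:

```python
import statistics

# Standardize a numeric variable by subtracting a location measure
# and dividing by a scale measure.
x = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]

# Classic z-score: location = mean, scale = (population) std deviation.
mu, sd = statistics.mean(x), statistics.pstdev(x)
z = [(v - mu) / sd for v in x]

# Robust variant: location = median, scale = median absolute deviation,
# which is far less sensitive to outliers than the standard deviation.
med = statistics.median(x)
mad = statistics.median(abs(v - med) for v in x)
robust = [(v - med) / mad for v in x]

print(round(mu, 2), round(sd, 2), med, mad)
```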
- NOTE: The MVN macro is obsolete. Beginning in SAS 9.2, use the RANDNORMAL function in SAS/IML software or PROC SIMNORMAL in SAS/STAT software to generate multivariate normal data. PURPOSE: The %MVN macro generates multivariate normal data using the Cholesky root of the variance-covariance matrix. Bivariate normal data can be generated using the DATA step.
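The Cholesky-root construction can be shown in a few lines for the bivariate case. A Python sketch (made-up mean and covariance), with the 2x2 Cholesky factor computed by hand:

```python
import math
import random

# Generate bivariate normal draws from a target covariance matrix
# using its Cholesky root, mirroring what the macro does in higher
# dimensions with a general Cholesky decomposition.
mean = (1.0, -2.0)
cov = [[4.0, 2.0],
       [2.0, 3.0]]

# Lower-triangular Cholesky factor L of a 2x2 matrix: L @ L.T == cov.
l11 = math.sqrt(cov[0][0])
l21 = cov[1][0] / l11
l22 = math.sqrt(cov[1][1] - l21 ** 2)

rng = random.Random(1)

def draw():
    # Transform independent standard normals by L and shift by the mean.
    z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
    return (mean[0] + l11 * z1, mean[1] + l21 * z1 + l22 * z2)

sample = [draw() for _ in range(5)]
print(l11, l21, round(l22, 4))
```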
- This sample shows one way of computing Mahalanobis distance in each of the following scenarios: from each observation to the mean; from each observation to a specific observation; and from each observation to all other observations (all possible pairs).
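The first scenario, distance from each observation to the mean, can be sketched in Python (made-up bivariate data, with the 2x2 covariance matrix inverted by hand; larger problems would need a linear-algebra library):

```python
import math

# Mahalanobis distance from each observation to the mean vector,
# using the sample covariance matrix (n-1 divisor).
data = [(2.0, 2.0), (2.0, 5.0), (6.0, 5.0), (7.0, 3.0), (4.0, 7.0),
        (6.0, 4.0), (5.0, 3.0), (4.0, 6.0), (2.0, 5.0), (1.0, 3.0)]

n = len(data)
mx = sum(p[0] for p in data) / n
my = sum(p[1] for p in data) / n

# Sample covariance matrix entries.
sxx = sum((p[0] - mx) ** 2 for p in data) / (n - 1)
syy = sum((p[1] - my) ** 2 for p in data) / (n - 1)
sxy = sum((p[0] - mx) * (p[1] - my) for p in data) / (n - 1)

# Inverse of the 2x2 covariance matrix.
det = sxx * syy - sxy ** 2
ixx, iyy, ixy = syy / det, sxx / det, -sxy / det

def mahalanobis(p):
    # sqrt of (p - mean)' * inv(S) * (p - mean)
    dx, dy = p[0] - mx, p[1] - my
    return math.sqrt(ixx * dx * dx + 2 * ixy * dx * dy + iyy * dy * dy)

dists = [mahalanobis(p) for p in data]
```

Distance between two observations uses the same quadratic form with the difference of the two points in place of the deviation from the mean.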
- The GLMPI macro computes asymptotic 100(1-α)% confidence and prediction intervals that are symmetric about the predicted mean using the delta method.
- These macros compute nonparametric survival curve estimates from interval-censored data. NOTE: Beginning with SAS/STAT 13.1 in SAS 9.4 TS1M1, the functionality of these macros has been updated and added to the ICLIFETEST procedure. For details, see the ICLIFETEST documentation. PURPOSE: These macros compute nonparametric maximum likelihood estimates (NPMLEs) of survival curves from interval-censored data. Confidence intervals for survival curves and log-rank tests comparing survival curves from several groups are also provided.
- NOTE: Beginning in SAS 9.4, this macro is no longer needed. Use the OUTPLC= option in Base SAS PROC CORR to save a matrix of polychoric (or tetrachoric) correlations. PURPOSE: The %POLYCHOR macro creates a SAS data set containing a correlation matrix of polychoric correlations or a distance matrix based on polychoric correlations.
- The %CLUSTERGROUPS macro enhances dendrograms produced in SAS by creating a custom template that combines a dendrogram and a block plot, highlighting each cluster with a different color. You specify the number of clusters desired as input to the macro.
- The %JACK macro does jackknife analyses for simple random samples, computing approximate standard errors, bias-corrected estimates, and confidence intervals assuming a normal sampling distribution. The %BOOT macro does elementary nonparametric bootstrap analyses of the same quantities for simple random samples and, for regression models, can resample either observations or residuals. The %BOOTCI macro computes several varieties of confidence intervals that are suitable for sampling distributions that are not normal.
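An elementary bootstrap for a simple random sample can be sketched in Python. This illustrates only the percentile interval, one of the simpler constructions (data and resample count are made up; seeding keeps it repeatable):

```python
import random
import statistics

# Nonparametric bootstrap for the mean: resample with replacement,
# recompute the statistic, and take percentiles of the bootstrap
# distribution as a confidence interval.
data = [12.1, 9.4, 10.8, 14.2, 11.5, 8.9, 13.0, 10.1, 12.7, 11.9]
rng = random.Random(42)
B = 2000                       # number of bootstrap resamples

boot_means = sorted(
    statistics.mean(rng.choices(data, k=len(data))) for _ in range(B)
)

# Percentile 95% interval: the 2.5th and 97.5th percentiles.
lo = boot_means[int(0.025 * B)]
hi = boot_means[int(0.975 * B)]
est = statistics.mean(data)
print(round(est, 2), round(lo, 2), round(hi, 2))
```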
- R2 is a popular measure of fit for ordinary regression models. The RsquareV macro provides the R_V^2 statistic proposed by Zhang (2016) for use with any model based on a distribution with a well-defined variance function. This includes the class of generalized linear models and generalized additive models based on distributions such as the binomial (for logistic models), Poisson, gamma, and others, as well as models based on quasi-likelihood functions for which only the mean and variance functions are defined. A partial R2 is provided when comparing a full model to a nested, reduced model; partial R can be obtained from this when the difference between the full and reduced models is a single parameter. A penalized R2 is also available, adjusting for the additional parameters in the full model.
- This sample creates four adverse event with relative risk plots. An adverse event with relative risk plot is a two-panel display of the most frequently occurring adverse events in a clinical study, sorted by relative risk. The sample requires a macro that can be downloaded from the Downloads tab; after downloading it, the sample code on the Full Code tab can be submitted from your SAS session.
- The %MULTNORM macro provides tests and plots of multivariate normality. A test of univariate normality is also given for each of the variables. A chi-square quantile-quantile plot of the observations' squared Mahalanobis distances can be obtained allowing a visual assessment of multivariate normality. Univariate histograms with overlaid normal curves are also available.
- The %ITEM macro computes descriptive statistics for analysis of data from a multiple-choice test. Each observation contains the answers from one subject to a set of questions ("items"). The data are compared to an answer key to determine which answers are correct. The score for each subject is computed as the number of correct answers. The output is very similar to that from the ITEM procedure in the SUGI Supplemental library, but several incorrect statistics have been fixed.
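The scoring step the macro performs is straightforward. A Python sketch with a made-up answer key and responses, computing each subject's score and a per-item difficulty (proportion correct):

```python
# Score a small multiple-choice test against an answer key:
# one record per subject, score = number of correct answers.
key = "ABDCA"
responses = {
    "subj1": "ABDCA",
    "subj2": "ABCCA",
    "subj3": "CBDAA",
}

scores = {
    subj: sum(ans == k for ans, k in zip(answers, key))
    for subj, answers in responses.items()
}

# Item difficulty: proportion of subjects answering each item correctly,
# one of the descriptive statistics used in item analysis.
n = len(responses)
difficulty = [
    sum(ans[i] == key[i] for ans in responses.values()) / n
    for i in range(len(key))
]
print(scores, difficulty)
```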
