In 2020, the largest cities in Finland's regions are spending an average of six euros per inhabitant on library acquisitions. According to the statistics database maintained by Finland's public libraries, acquisition appropriations have not decreased significantly since the 2015 state subsidy reform.
According to the statistics, roughly 23 million euros have been spent on acquisitions annually since 2014. The combined sum spent on acquisitions in 2018 rose to 24.2 million euros.
It should be noted, however, that in some places acquisition appropriations had already been cut sharply before the reform. Even though acquisition appropriations rose nationwide for 2018, there was still less money than in 2010.
By Barry Saxifrage in Opinion | National Observer, July 31st 2019,
Global fossil burning keeps rising relentlessly as the world sprints away from climate safety. Here are ten charts from the latest data to show you what's happening and who's doing it.
PROVIDENCE, R.I. [Brown University], Nov. 13, 2019 — Nearly two decades after New York’s Twin Towers fell on 9/11, the estimated cost of America’s counterterrorism efforts stands at $6.4 trillion.
That’s according to a Nov. 13 report released by the Costs of War project based at the Watson Institute for International and Public Affairs at Brown University.
According to the report, since late 2001, the United States has appropriated and is obligated to spend $6.4 trillion on counterterrorism efforts through the end of 2020. An estimated $5.4 trillion of that total has funded, and will continue to fund, counterterrorism wars and smaller operations in more than 80 countries; an additional minimum of $1 trillion will provide care for veterans of those wars through the next several decades.
All quantities are presented in units of gigatonnes of carbon (GtC, 10^15 gC), which is the same as petagrams of carbon (PgC; Table 1). Units of gigatonnes of CO2 (or billion tonnes of CO2) used in policy are equal to 3.664 multiplied by the value in units of GtC.
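The unit conversion above is a single multiplication; a minimal sketch (the factor 3.664 is the CO2-to-C mass ratio quoted in the text):

```python
# Convert a carbon mass in GtC (= PgC) to GtCO2 using the
# CO2-to-C mass ratio quoted in the text.
GTC_TO_GTCO2 = 3.664

def gtc_to_gtco2(gtc):
    """Gigatonnes of carbon -> gigatonnes of CO2."""
    return gtc * GTC_TO_GTCO2

# e.g. a 10 GtC/yr emission rate expressed as CO2 is about 36.64 GtCO2/yr
co2 = gtc_to_gtco2(10.0)
```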
ASA's statement on the use of p-values. This statement focuses on the widespread use of p = 0.05 and its inherent problems. It addresses recent criticism and urges data scientists to back up their p-values.
SWARM is a cloud platform aimed at improving analytical reasoning in intelligence work. SWARM tries to improve analytical reasoning by improving collaboration within groups of analysts rather than by trying to structure their thinking in any particular way.
- Robust and stochastic optimization
- Convex analysis
- Linear programming
- Monte Carlo simulation
- Model-based estimation
- Matrix algebra review
- Probability and statistics basics
This article is divided into three parts: the first part explains the definition of the economically dependent self-employed and proposes ideas for improving it. The second part is dedicated to the working conditions of the self-employed, while the last part compares the job satisfaction of the self-employed, employees, and family workers.
The Costs of War Project is a team of 35 scholars, legal experts, human rights practitioners, and physicians, which began its work in 2011. We use research and a public website to facilitate debate about the costs of the post-9/11 wars in Iraq, Afghanistan, and Pakistan.
The NLEstimate macro allows you to estimate one or more linear or nonlinear combinations of parameters from any model for which you can save the model parameters and their variance-covariance matrix. Most modeling procedures that offer ESTIMATE, CONTRAST, or LSMEANS statements only provide for estimating or testing linear combinations of model parameters. However, common estimation problems often involve nonlinear combinations, particularly in generalized models with nonidentity link functions such as logistic and Poisson models.
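The macro's SAS code is not shown here, but the underlying idea — the delta method, which propagates the parameter covariance through the nonlinear combination via its gradient — can be sketched in Python. The function names are illustrative, not part of the macro:

```python
import numpy as np

def delta_method(g, beta, cov, eps=1e-6):
    """Estimate g(beta) and its standard error via the delta method.

    g    : function of the parameter vector (the nonlinear combination)
    beta : estimated model parameters
    cov  : their variance-covariance matrix
    """
    beta = np.asarray(beta, dtype=float)
    # Numerical gradient of g at beta (central differences)
    grad = np.array([
        (g(beta + eps * e) - g(beta - eps * e)) / (2 * eps)
        for e in np.eye(len(beta))
    ])
    est = g(beta)
    se = np.sqrt(grad @ np.asarray(cov) @ grad)
    return est, se

# Example: an odds ratio exp(b1) from a fitted logistic model
# (made-up parameter estimates and covariance, for illustration only)
est, se = delta_method(lambda b: np.exp(b[1]),
                       beta=[-0.5, 0.7],
                       cov=[[0.04, 0.0], [0.0, 0.09]])
```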
This sample combines macro programming with PROC FREQ and DATA Step logic to count the number of missing and non-missing values for every variable in a data set. The results are stored in a data set.
This sample illustrates one method of counting the number of missing and non-missing values for each variable in a data set. Two methods for structuring the resulting data set are shown.
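The SAS code itself lives with the sample; the same bookkeeping can be sketched in Python with pandas (the column names here are made up for illustration):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "height": [170, np.nan, 182, 165],
    "weight": [np.nan, np.nan, 80, 62],
})

# One row per variable, with counts of missing and non-missing values,
# mirroring the structure of the sample's output data set.
counts = pd.DataFrame({
    "n_missing": df.isna().sum(),
    "n_nonmissing": df.notna().sum(),
})
```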
The %VARTEST macro provides a one-tailed test of the null hypothesis that the variance equals a non-zero constant for normally distributed data. It also provides point- and confidence interval estimates.
NOTE: The CIBASIC option in PROC UNIVARIATE provides one- and two-sided confidence intervals for the standard deviation and variance. PROC TTEST provides a confidence interval for the standard deviation using either of two methods.
PURPOSE:
The %VARTEST macro tests the null hypothesis that the variance (or standard deviation) of a set of independent and identically normally distributed values is equal to a specified constant against an alternative that the variance (or standard deviation) exceeds the constant. The macro also provides point- and confidence interval estimates for the variance and standard deviation.
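The test behind the macro is the classical one: for n i.i.d. normal values, (n − 1)s²/σ₀² follows a chi-square distribution with n − 1 degrees of freedom under the null. A rough Python analogue (not the macro itself):

```python
import numpy as np
from scipy import stats

def var_test(x, sigma2_0, conf=0.95):
    """One-sided chi-square test of H0: variance == sigma2_0
    against H1: variance > sigma2_0, plus a one-sided lower
    confidence bound for the variance."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s2 = x.var(ddof=1)                      # point estimate
    chi2 = (n - 1) * s2 / sigma2_0          # test statistic
    p = stats.chi2.sf(chi2, df=n - 1)       # upper-tail p-value
    lower = (n - 1) * s2 / stats.chi2.ppf(conf, df=n - 1)
    return s2, p, lower

s2, p, lower = var_test([1.0, 2.0, 3.0, 4.0, 5.0], sigma2_0=1.0)
```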
NOTE: Beginning in SAS 9.2, the QIC statistic is produced by PROC GENMOD. Beginning in SAS 9.4 TS1M2, QIC is available in PROC GEE.
PURPOSE:
The %QIC macro computes the QIC and QICu statistics proposed by Pan (2001) for GEE (generalized estimating equations) models. These statistics allow comparisons of GEE models (model selection) and selection of a correlation structure.
The SELECT macro performs model selection methods for categorical-response models that can be fit in PROC LOGISTIC. These include models using the logit, probit, cloglog, cumulative logit, or generalized logit links. The macro supports binary as well as ordinal and nominal multinomial models.
Standard model selection is done by choosing candidate effects for entry to or removal from the model according to their significance levels. After completion, the set of models selected at each step of this process is sorted on the selected criterion: AUC, R-square, max-rescaled R-square, AIC, or BIC. The requested number of best models on the selected criterion is displayed.
What we present here is a macro that will automatically check all the numeric variables in a SAS data set for a specific data value, and produce a report showing which variables contain this special value and how many times it appeared. The macro is called FIND_VALUE.
Many of us are presented with SAS data sets where codes such as 9999 are intermingled with real data values. Sometimes these codes represent missing values; sometimes they represent other non-data values.
If you run SAS procedures on numeric variables in such a data set, you will, obviously, produce nonsense. What we present here is a macro that will automatically check all the numeric variables in a SAS data set for a specific data value, and produce a report showing which variables contain this special value and how many times it appeared.
The macro is called FIND_VALUE and is presented below. You can download this macro and many other useful macros from the SAS Companion Web Site: support.sas.com/publishing. Search for my book, Cody's Data Cleaning Techniques, Second Edition, and then click on the link to download the programs and data files from the book.
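For readers outside SAS, the same check is easy to sketch in Python with pandas; this illustrates the idea only and is not Cody's macro:

```python
import numpy as np
import pandas as pd

def find_value(df, special=9999):
    """Report which numeric columns contain `special` and how often."""
    numeric = df.select_dtypes(include=[np.number])
    counts = (numeric == special).sum()
    return counts[counts > 0]        # only variables where it appears

# Made-up example data with 9999 intermingled with real values
df = pd.DataFrame({"age": [34, 9999, 28],
                   "score": [9999, 9999, 50],
                   "name": ["a", "b", "c"]})
report = find_value(df)
```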
NOTE: Beginning in SAS 9, you can use the ODS GRAPHICS ON; statement and the PLOTS=SCATTER(ELLIPSE=MEAN) or PLOTS=SCATTER(ELLIPSE=PREDICTED) option in the PROC CORR statement to get confidence ellipse plots about the mean or individual values.
PURPOSE:
The %CONELIP macro generates confidence ellipses for bivariate normal data. It can either create ellipses for the data or ellipses about the mean.
NOTE: This macro is obsolete beginning with SAS 8.0. Use the STDIZE procedure in SAS/STAT software beginning in that release.
PURPOSE:
The %STDIZE macro standardizes one or more numeric variables in a SAS data set by subtracting a location measure and dividing by a scale measure. A variety of location and scale measures are provided, including estimates that are resistant to outliers and clustering.
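The location-and-scale idea can be sketched in Python; the two methods shown (mean/standard deviation and the outlier-resistant median/MAD) stand in for the larger menu the macro offers:

```python
import numpy as np

def stdize(x, method="std"):
    """Standardize values as (x - location) / scale.

    method="std"    : mean / standard deviation (classic z-scores)
    method="median" : median / median absolute deviation, resistant
                      to outliers
    """
    x = np.asarray(x, dtype=float)
    if method == "std":
        loc, scale = x.mean(), x.std(ddof=1)
    elif method == "median":
        loc = np.median(x)
        scale = np.median(np.abs(x - loc))   # MAD
    else:
        raise ValueError(f"unknown method: {method}")
    return (x - loc) / scale
```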
NOTE: The MVN macro is obsolete. Beginning in SAS 9.2, use the RANDNORMAL function in SAS/IML software or PROC SIMNORMAL in SAS/STAT software to generate multivariate normal data.
PURPOSE:
The %MVN macro generates multivariate normal data using the Cholesky root of the variance-covariance matrix. Bivariate normal data can be generated using the DATA step.
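The Cholesky construction the macro uses is simple to sketch in Python (this is the same trick, not the macro's code): if z ~ N(0, I) and L is the Cholesky root with LL' = Σ, then μ + Lz ~ N(μ, Σ).

```python
import numpy as np

def mvn_sample(mean, cov, n, seed=None):
    """Draw n multivariate normal vectors via the Cholesky root."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(np.asarray(cov, dtype=float))
    z = rng.standard_normal((n, len(mean)))   # independent N(0, 1) draws
    return np.asarray(mean) + z @ L.T         # correlate and shift

# Example: bivariate normal with correlation 0.8
x = mvn_sample([0.0, 0.0], [[1.0, 0.8], [0.8, 1.0]], n=50_000, seed=42)
```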
Overview
This sample shows one way of computing Mahalanobis distance in each of the following scenarios:
from each observation to the mean
from each observation to a specific observation
from each observation to all other observations (all possible pairs)
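As a rough Python analogue of the first scenario (distance from each observation to the mean; the other two just swap in a different reference point), under the usual definition d² = (x − μ)' Σ⁻¹ (x − μ):

```python
import numpy as np

def mahalanobis_to_mean(X):
    """Mahalanobis distance from each row of X to the column means."""
    X = np.asarray(X, dtype=float)
    diff = X - X.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(X, rowvar=False))
    # d_i = sqrt( diff_i' * inv_cov * diff_i ) for each row i
    return np.sqrt(np.einsum("ij,jk,ik->i", diff, inv_cov, diff))

# Small made-up example (mean is (1, 1), sample covariance is identity)
X = np.array([[0, 0], [1, 1], [2, 2], [0, 2], [2, 0]])
d = mahalanobis_to_mean(X)
```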
These macros compute nonparametric survival curve estimates from interval-censored data. Confidence intervals for survival curves and log-rank tests comparing survival curves from several groups are also provided.
NOTE: Beginning with SAS/STAT 13.1 in SAS 9.4 TS1M1, the functionality of these macros has been updated and added to the ICLIFETEST procedure. For details, see the ICLIFETEST documentation.
PURPOSE:
These macros compute nonparametric maximum likelihood estimates (NPMLEs) of survival curves from interval-censored data. Confidence intervals for survival curves and log-rank tests comparing survival curves from several groups are also provided.
NOTE: Beginning in SAS 9.4, this macro is no longer needed. Use the OUTPLC= option in Base SAS PROC CORR to save a matrix of polychoric (or tetrachoric) correlations.
PURPOSE:
The %POLYCHOR macro creates a SAS data set containing a correlation matrix of polychoric correlations or a distance matrix based on polychoric correlations.
The %CLUSTERGROUPS macro creates a custom template that combines a dendrogram and a blockplot to highlight each of the specified number of clusters with a different color.
The %CLUSTERGROUPS macro enhances dendrograms produced in SAS by adding color to highlight the clusters. You specify the number of clusters desired as input to the macro.
The %JACK and %BOOT macros do jackknife and bootstrap analyses for simple random samples, computing approximate standard errors, bias-corrected estimates, and confidence intervals assuming a normal sampling distribution.
The %JACK macro does jackknife analyses for simple random samples, computing approximate standard errors, bias-corrected estimates, and confidence intervals assuming a normal sampling distribution.
The %BOOT macro does elementary nonparametric bootstrap analyses for simple random samples, computing approximate standard errors, bias-corrected estimates, and confidence intervals assuming a normal sampling distribution. Also, for regression models, the %BOOT macro can resample either observations or residuals.
The %BOOTCI macro computes several varieties of confidence intervals that are suitable for sampling distributions that are not normal.
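A minimal Python sketch of the elementary bootstrap idea behind these macros — resample with replacement, recompute the statistic, read off a standard error and a percentile interval; nothing here is the SAS macros' actual code:

```python
import numpy as np

def boot(x, stat=np.mean, n_boot=2000, conf=0.95, seed=0):
    """Bootstrap standard error and percentile CI for stat(x)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    # Recompute the statistic on n_boot resamples drawn with replacement
    reps = np.array([stat(rng.choice(x, size=len(x), replace=True))
                     for _ in range(n_boot)])
    alpha = (1 - conf) / 2
    lo, hi = np.quantile(reps, [alpha, 1 - alpha])
    return reps.std(ddof=1), (lo, hi)

# Example: bootstrap the mean of the values 1..10 (true mean 5.5)
se, (lo, hi) = boot(np.arange(1, 11))
```

The percentile interval here is the simplest of the varieties %BOOTCI offers; it makes no normality assumption, which is exactly why it suits skewed sampling distributions.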
The RsquareV macro provides an R-square measure for models with a well-defined variance function such as generalized linear and generalized additive models.
R2 is a popular measure of fit used for ordinary regression models. The RsquareV macro provides the R_V^2 statistic proposed by Zhang (2016) for use with any model based on a distribution with a well-defined variance function. This includes the class of generalized linear models and generalized additive models based on distributions such as the binomial for logistic models, Poisson, gamma, and others. It also includes models based on quasi-likelihood functions for which only the mean and variance functions are defined. A partial R2 is provided when comparing a full model to a nested, reduced model. Partial R can be obtained from this when the difference between the full and reduced model is a single parameter. A penalized R2 is also available adjusting for the additional parameters in the full model.
This sample creates four "adverse event with relative risk" plots. Such a plot is a two-panel display of the most frequently occurring adverse events in a clinical study, sorted by relative risk.
The sample requires a macro that can be downloaded from the Downloads tab. After downloading the program, the sample code on the Full Code tab can be submitted from your SAS session.
The %MULTNORM macro provides tests and plots of multivariate normality. A test of univariate normality is also given for each of the variables. A chi-square quantile-quantile plot of the observations' squared Mahalanobis distances can be obtained allowing a visual assessment of multivariate normality. Univariate histograms with overlaid normal curves are also available.
The %ITEM macro computes descriptive statistics for analysis of data from a multiple-choice test. Each observation contains the answers from one subject to a set of questions ("items"). The data are compared to an answer key to determine which answers are correct. The score for each subject is computed as the number of correct answers. The output is very similar to that from the ITEM procedure in the SUGI Supplemental library, but several incorrect statistics have been fixed.
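The scoring step the macro performs can be sketched generically in Python; this mirrors the idea (compare answers to a key, count matches per subject, and get a per-item proportion correct), not the macro's full output:

```python
import numpy as np

def score_test(responses, key):
    """Score a multiple-choice test.

    responses : one row per subject, one column per item
    key       : the correct answer for each item
    """
    responses = np.asarray(responses)
    correct = responses == np.asarray(key)   # subjects x items booleans
    scores = correct.sum(axis=1)             # number correct per subject
    difficulty = correct.mean(axis=0)        # proportion correct per item
    return scores, difficulty

# Made-up responses from three subjects on a three-item test
scores, difficulty = score_test(
    [["A", "B", "C"], ["A", "C", "C"], ["D", "B", "C"]],
    key=["A", "B", "C"])
```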