Latecomer political science communities have faced multiple challenges in recent decades, including the very establishment of their professional identities. Based on a case study of Hungary, this article argues that publication performance is a substantial component of the identity of the political science profession. Hungary is a notable example within Central and East European (CEE) political science academia in the sense that both the initial take-off of the profession and its subsequent, mounting challenges are typical of the CEE region. Using an inclusive approach that encompasses all authors published in the field between 1990 and 2018, together with their publication records, the analysis demonstrates that political science has undergone major expansion, quality growth and internationalisation, but that these performance gains are unevenly spread. These patterns reflect important aspects of the profession’s identity. This agency- and performance-based approach to identity formation could equally be used to map identity features elsewhere, including in a comparative manner.
Chapter 4 introduces asymmetry properties of the error distributions of competing causal models. In a way similar to the distributional features of observed variables, the distributional characteristics of model errors offer valuable information about the underlying causal flow between variables. The chapter presents marginal and joint measures that use higher-than-second moments of the errors, discusses methods of statistical inference, and summarizes decision guidelines for causal model selection. Further, results of a Monte Carlo simulation are presented to characterize the behavior of the discussed DDA measures. Simulated as well as real-world data examples illustrate causal model selection based on third- and fourth-moment properties of model errors.
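A minimal numeric sketch of the error-based idea (our own illustration, not the book's code): when the true model runs x → y with a normal error and a non-normal predictor, the residuals of the correctly oriented regression are approximately normal, while the reverse regression's residuals inherit skewness and kurtosis. Assumes only numpy and scipy; all names are illustrative.

```python
# Error-based DDA sketch: compare higher moments of OLS residuals from
# the two competing causal orientations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 5000
x = rng.exponential(size=n)           # non-normal "cause"
y = 0.5 * x + rng.normal(size=n)      # true model x -> y, normal error

def residuals(a, b):
    """OLS residuals of the simple regression b ~ a."""
    slope, intercept = np.polyfit(a, b, 1)
    return b - (intercept + slope * a)

for label, r in [("x -> y", residuals(x, y)), ("y -> x", residuals(y, x))]:
    print(label, "residual skewness %.3f, excess kurtosis %.3f"
          % (stats.skew(r), stats.kurtosis(r)))
# The model fitted in the true direction leaves approximately normal
# residuals; the mis-specified reverse model's residuals are non-normal.
```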
Chapter 3 introduces asymmetry properties of the linear regression model that emerge from the distributional properties of non-Gaussian variables. Foundations, significance procedures, and decision guidelines of third- and fourth-moment-based DDA measures are discussed for two scenarios. In the first scenario, we assume that the error term of the “true” model follows a Gaussian distribution. In the second scenario, we relax this distributional assumption and discuss direction of dependence based on the distributional characteristics of observed variables when the “true” error term deviates from normality. Further, power characteristics of the DDA measures are presented based on a Monte Carlo simulation experiment, and synthetic as well as real-world data examples are given to illustrate DDA model selection based on observed variable distributions.
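As a sketch of the observed-variable scenario (our own illustration, assuming a linear model with an independent, normal error, for which skew(y) = cor(x, y)³ · skew(x) holds): the putative outcome is systematically less skewed than the cause, which motivates the decision rule based on observed distributions.

```python
# Observed-variable DDA sketch: the outcome of a linear model with
# normal error is attenuated in skewness relative to the cause.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.exponential(size=100_000)         # skewed cause
y = 0.8 * x + rng.normal(size=x.size)     # outcome: cause plus normal noise

r = np.corrcoef(x, y)[0, 1]
print("skew(x)       = %.3f" % stats.skew(x))
print("skew(y)       = %.3f" % stats.skew(y))
print("r^3 * skew(x) = %.3f  (predicted skew of y)" % (r**3 * stats.skew(x)))
# |skew(y)| < |skew(x)| points to x -> y as the causal direction.
```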
The basic principle of any version of insurance is the paradigm that exchanging risk by sharing it in a pool is beneficial for the participants. In the case of independent risks with a finite mean, this holds for risk-averse decision-makers. The situation may be very different for infinite-mean models, where it is known that risk sharing may have a negative effect, sometimes called the nondiversification trap. This phenomenon is well known for infinite-mean stable distributions, and a series of recent papers has obtained similar results for infinite-mean Pareto and Fréchet distributions. We investigate this property further by showing that many of these results arise as special cases of a simple result: the phenomenon occurs for any distribution that is more skewed than a Cauchy distribution. We also relate this to the situation of deadly catastrophic risks, where we assume a positive probability of an infinite value; that case gives a very simple intuition for why the phenomenon can occur for such risks. We also mention several open problems and conjectures in this context.
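A hedged Monte Carlo sketch of the nondiversification trap (our illustration, not from the paper): for a Pareto distribution with tail index below one, an equal share in a pool has heavier upper quantiles than keeping a single risk, so pooling makes every participant worse off.

```python
# Nondiversification trap for an infinite-mean Pareto distribution.
import numpy as np

rng = np.random.default_rng(3)
alpha, n, pool = 0.8, 1_000_000, 10        # tail index < 1: infinite mean

x = rng.pareto(alpha, size=(n, pool)) + 1.0   # classical Pareto(alpha), x_m = 1
standalone = x[:, 0]                           # keep one full risk
diversified = x.mean(axis=1)                   # equal share in a pool of 10

for q in (0.95, 0.99):
    print("VaR_%.2f: standalone %.1f, pooled %.1f"
          % (q, np.quantile(standalone, q), np.quantile(diversified, q)))
# With alpha < 1 the pooled position shows *larger* high quantiles:
# sharing such risks increases, rather than reduces, the risk held.
```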
P-value functions are modern statistical tools that unify effect estimation and hypothesis testing and can provide alternative point and interval estimates compared to standard meta-analysis methods, using any of the many p-value combination procedures available (Xie et al., 2011, JASA). We provide a systematic comparison of different combination procedures, both from a theoretical perspective and through simulation. We show that many prominent p-value combination methods (e.g. Fisher’s method) are not invariant to the orientation of the underlying one-sided p-values. Only Edgington’s method, a lesser-known combination method based on the sum of p-values, is orientation-invariant and still provides confidence intervals not restricted to be symmetric around the point estimate. Adjustments for heterogeneity can also be made and results from a simulation study indicate that Edgington’s method can compete with more standard meta-analytic methods.
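A minimal sketch of the orientation issue (our own implementation, assuming only scipy; the Edgington combined p-value uses a normal approximation to the exact Irwin-Hall null distribution): flipping each one-sided p-value to 1 − p should simply complement the combined p-value, which holds for Edgington's sum rule but not for Fisher's product rule.

```python
# Fisher vs. Edgington p-value combination and orientation behaviour.
import numpy as np
from scipy import stats

def fisher(p):
    """Fisher: -2 * sum(log p) ~ chi^2 with 2k df under H0."""
    p = np.asarray(p)
    return stats.chi2.sf(-2 * np.log(p).sum(), df=2 * p.size)

def edgington(p):
    """Edgington: sum of p-values; normal approximation to Irwin-Hall."""
    p = np.asarray(p)
    k = p.size
    return stats.norm.cdf(p.sum(), loc=k / 2, scale=np.sqrt(k / 12))

p = np.array([0.02, 0.40, 0.15])
for name, f in [("Fisher", fisher), ("Edgington", edgington)]:
    print(name, "combined: p %.4f, flipped %.4f" % (f(p), f(1 - p)))
# Edgington satisfies f(1 - p) = 1 - f(p) exactly (the Irwin-Hall null is
# symmetric about k/2); Fisher's combination obeys no such symmetry.
```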
Many models of investor behavior predict that investors prefer assets that they believe to have positively skewed return distributions. We elicit detailed return expectations for a broad index fund and a single stock in a representative sample of the Dutch population. The data show substantial heterogeneity in individuals’ skewness expectations of which only very little is captured by sociodemographics. Across assets, most respondents expect a higher variance and skewness for the individual stock compared to the index fund. Portfolio allocations increase with the skewness of respondents’ return expectations for the respective asset, controlling for other moments of a respondent’s expectations.
This paper studies the asymptotic distributions of three reliability coefficient estimates: the sample coefficient alpha, the reliability estimate of a composite score following a factor analysis, and the estimate of the maximal reliability of a linear combination of item scores following a factor analysis. Results indicate that the asymptotic distribution of each estimate, obtained under a normal sampling distribution, remains valid within a large class of nonnormal distributions. Therefore, a formula for calculating the standard error of the sample coefficient alpha, recently obtained by van Zyl, Neudecker and Nel, applies to the other reliability coefficients and can still be used with the skewed and kurtotic data typical of the social and behavioral sciences.
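For readers unfamiliar with the first of these coefficients, a short sketch of the sample coefficient alpha itself (standard formula; the van Zyl-Neudecker-Nel standard error is not reproduced here, and the data-generating model below is our own illustration):

```python
# Sample coefficient alpha for a k-item scale.
import numpy as np

def cronbach_alpha(items):
    """items: (n_subjects, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the composite
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(4)
common = rng.normal(size=(500, 1))                  # shared factor
items = common + 0.8 * rng.normal(size=(500, 5))    # 5 congeneric items
print("alpha = %.3f" % cronbach_alpha(items))
```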
In many practical applications, one is interested only in the average or expected value of flow quantities, such as aerodynamic forces and heat transfer. Governing equations for these mean flow quantities may be derived by averaging the Navier-Stokes and temperature or scalar transport equations. Reynolds averaging introduces additional unknowns owing to the nonlinearity of the equations, which is known as the closure problem in the turbulence literature. Turbulence models for the unclosed terms in the averaged equations are a way to manage the closure problem, for they close the equations with phenomenological models that relate the unknown terms to the solution variables. It is important that these models do not alter the conservation and invariance properties of the original equations of motion. We take a closer look at the equations of motion to understand these fundamental qualities in more depth. We describe averaging operators for canonical turbulent flows at the core of basic turbulence research and modeling efforts, and discuss homogeneity and stationarity. We also examine the Galilean invariance of the equations of motion and the role of vorticity in turbulence dynamics.
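For concreteness, the Reynolds decomposition and the unclosed term it produces can be written as follows (standard notation for incompressible flow; this display is our addition, not quoted from the chapter):

```latex
% Reynolds decomposition of the velocity into mean and fluctuation
\[
  u_i = \overline{u}_i + u_i', \qquad \overline{u_i'} = 0,
\]
% Averaged momentum equation: the divergence of the Reynolds stresses
% \overline{u_i' u_j'} is the additional unknown that must be modeled.
\[
  \frac{\partial \overline{u}_i}{\partial t}
  + \overline{u}_j\,\frac{\partial \overline{u}_i}{\partial x_j}
  = -\frac{1}{\rho}\,\frac{\partial \overline{p}}{\partial x_i}
  + \nu\,\nabla^2 \overline{u}_i
  - \frac{\partial \overline{u_i' u_j'}}{\partial x_j}.
\]
```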
Chapter 2 discusses the different graphic techniques for describing data, including bar graphs, histograms, and frequency polygons, as well as the shapes and patterns of distributions. Raw scores are often arranged into frequency distributions: orderly arrangements of intervals of a given size, each containing a count of the scores that fall within it. Frequency distributions organize data in ways that can be viewed and interpreted by investigators, and constructing intervals of a specified size to encompass the scores in a distribution enables investigators to capture features that suggest particular statistical techniques for data analysis. Large amounts of data collected from questionnaires, observations, or experiments can be summarized by a simple chart or graph, and the meaningfulness and interpretability that such graphic representation lends to data can hardly be overstated.
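A small sketch of building such a frequency distribution (our own example with invented scores, assuming only numpy):

```python
# Group raw scores into intervals of a given width and tally frequencies.
import numpy as np

scores = np.array([62, 71, 55, 88, 74, 90, 67, 73, 81, 59, 78, 84, 66, 92, 70])
width = 10
edges = np.arange(50, 101, width)        # intervals 50-59, 60-69, ..., 90-99
counts, _ = np.histogram(scores, bins=edges)

for lo, c in zip(edges[:-1], counts):
    print(f"{lo}-{lo + width - 1}: {'*' * c} ({c})")   # text histogram
```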
Chapter 5 examines the normal distribution, its relationship to z-scores, and its applicability to probability theory and statistical inference. z-scores, or standardized scores, are values depicting how far a particular score lies from the mean in standard deviation units. Different proportions of the normal curve area are associated with different z-scores. The conversions of raw scores to z-scores and of z-scores back to raw scores are illustrated. Distributions that differ markedly from normal curve characteristics are also described. The importance of the normal curve as a probability distribution, along with a brief introduction to probability, is discussed.
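The two conversions are one-line formulas; a quick sketch (our illustration, with an invented mean and standard deviation):

```python
# Raw-score <-> z-score conversions.
mu, sigma = 100.0, 15.0            # illustrative population mean and SD

def to_z(x):                       # z = (x - mu) / sigma
    return (x - mu) / sigma

def to_raw(z):                     # x = mu + z * sigma
    return mu + z * sigma

print(to_z(130))                   # 2.0: two SDs above the mean
print(to_raw(-1.5))                # 77.5
```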
In Chapter 7, the author introduces both content analysis and basic statistical analysis to help evaluate the effectiveness of assessments. The focus of the chapter is on guidelines for creating and evaluating reading and listening inputs and selected response item types, particularly multiple-choice items that accompany these inputs. The author guides readers through detailed evaluations of reading passages and accompanying multiple-choice items that need major revisions. The author discusses generative artificial intelligence as an aid for drafting inputs and creating items and includes an appendix which guides readers through the use of ChatGPT for this purpose. The author also introduces test-level statistics, including minimum, maximum, range, mean, variance, standard deviation, skewness, and kurtosis. The author shows how to calculate these statistics for an actual grammar tense test and includes an appendix with detailed guidelines for conducting these analyses using Excel software.
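The chapter demonstrates these test-level statistics in Excel; the following Python sketch is our own analogue with invented item-total scores, assuming numpy and scipy:

```python
# Test-level descriptive statistics for a set of test scores.
import numpy as np
from scipy import stats

scores = np.array([12, 15, 9, 18, 14, 11, 16, 13, 17, 10, 19, 8])  # invented

print("minimum ", scores.min())
print("maximum ", scores.max())
print("range   ", scores.max() - scores.min())
print("mean     %.2f" % scores.mean())
print("variance %.2f" % scores.var(ddof=1))
print("std dev  %.2f" % scores.std(ddof=1))
print("skewness %.2f" % stats.skew(scores))
print("kurtosis %.2f" % stats.kurtosis(scores))   # excess kurtosis
```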
Dispersion describes the spread of the data, or how it varies around its mean. The chapter begins with the calculation of the variance and then the more widely used standard deviation, along with their interpretation. Students learn how to calculate these measures by hand and in the R Commander. Measures of distributional shape, skewness and kurtosis, are also described. The range and interquartile range are also calculated for ordinal variables using the R Commander software.
Compositional data for 464 clay minerals (2:1 type) were analyzed by statistical techniques. The objective was to understand the similarities and differences between the groups and subgroups and to evaluate statistically the classification of clay minerals in terms of chemical parameters. The statistical properties of the distributions of total layer charge (TLC), K, octahedrally coordinated Al and Mg (VIAl, VIMg), octahedral charge (OC) and tetrahedral charge (TC) were evaluated first. Critical-difference (P = 1%) comparisons of individual characteristics show that all the clay micas (illite, glauconite and celadonite) differ significantly from all the smectites (montmorillonite, beidellite, nontronite and saponite) only in their TLC and K levels; they cannot be distinguished by their VIAl, VIMg, TC or OC values, which reveal no significant differences between several of the minerals.
Linear discriminant analysis with equal priors was therefore performed to analyze the combined effect of all the chemical parameters. Using six parameters [TLC, K, VIAl, VIMg, TC and OC], eight mineral groups could be derived, corresponding to the three clay micas, the four smectites mentioned above, and vermiculite. The fit between predicted and experimental values was 88.1%. Discriminant analysis using two parameters (TLC and K) resulted in classification into three broad groups corresponding to the clay micas, smectites and vermiculites (87.7% fit). Further analysis using the remaining four parameters resulted in subgroup-level classification with an 85–95% fit between predicted and experimental results. The three analyses yielded Mahalanobis D2 distances, which quantify the chemical similarities and differences between the broad groups, among members of a subgroup and between the subgroups. The classification functions derived here can be used as an aid for the classification of 2:1 minerals.
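A hypothetical sketch of this workflow (not the paper's dataset or code; the placeholder data, group labels, and column ordering are invented for illustration), using scikit-learn's linear discriminant analysis with equal priors:

```python
# Equal-prior LDA on the six chemical parameters, as in the study design.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# X: rows = clay mineral analyses; columns = TLC, K, VI-Al, VI-Mg, TC, OC
X = np.random.default_rng(5).normal(size=(40, 6))   # placeholder data only
y = np.repeat(np.arange(8), 5)                      # 8 mineral groups

lda = LinearDiscriminantAnalysis(priors=np.full(8, 1 / 8))  # equal priors
lda.fit(X, y)
print("apparent fit: %.1f%%" % (100 * lda.score(X, y)))
```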
Chapter 3 covers measures of location, spread and skewness and includes the following specific topics, among others: mode, median, mean, weighted mean, range, interquartile range, variance, standard deviation, and skewness.
The threefold purpose of this initial chapter is to provide a review of the historical origins and the methodological beauty of sampling approaches to judgment and decision-making, to illuminate the most prominent recent developments, and to provide a preview of all chapters included in this volume. Accordingly, the chapter is organized into three parts. The historical review in the first part highlights the progress from purely intra-psychic to cognitive-ecological perspectives on adaptive cognition, conceived as a genuine interaction between environmental constraints and adaptive agents’ sampling strategies. The review of novel trends in the second part testifies to the fertility of sampling approaches and the impressive amount of progress they have inspired, both in terms of functional-level applications in various areas and in terms of mechanistic and computational modeling. A preview of all 22 chapters of the present volume in the final part is organized into six sections, covering the full spectrum of these innovative developments in rationality research over the last 15 years.
Knowledge of the genetic architecture and inheritance of traits contributing to tomato shelf life in different genetic backgrounds is a key issue for shelf-life improvement. An investigation was undertaken to estimate the nature and magnitude of variability, trait inter-relationships, and third- and fourth-degree statistics in order to unravel the genetics of 18 fruit quality and yield traits governing shelf life in the F2 population of the ‘Arka Vikas’ × ‘Red ball’ cross. The wide standardized ranges and high estimates of the phenotypic coefficient of variation indicated adequate variability for fruit quality and yield traits. Fruit firmness and pericarp thickness ranged from 1.20–3.44 kg/cm2 and 2.44–5.31 mm, respectively. Pulp content and shelf life ranged from 58.59–94.70% and 10.60–26.40 days, respectively. Significant positive correlations with direct effects on fruit shelf life were exhibited by fruit firmness, pericarp thickness, TSS, titratable acidity, pulp content, fruit length and locule number. Positive skewness with a platykurtic distribution was recorded for TSS, lycopene, ascorbic acid, titratable acidity, fruit length, weight, pericarp thickness, plant height and number of branches. Negative skewness with a platykurtic distribution was observed for pH, fruit diameter, firmness, pulp content, locule number, shelf life and number of clusters, signifying duplicate epistasis of dominant genes in the inheritance of these traits. The transgressive segregants for fruit quality traits indicated complementary effects of allele combinations dispersed between the parents. Additive and dominance components could be exploited in advanced segregating populations by evaluating a large number of families. In addition to additive effects, the predominance of dominance gene effects is important in the inheritance of the fruit quality traits governing shelf life.
This paper investigates the ordering properties of largest claim amounts in heterogeneous insurance portfolios in the sense of some transform orders, including the convex transform order and the star order. It is shown that the largest claim amount from a set of independent and heterogeneous exponential claims is more skewed than that from a set of independent and homogeneous exponential claims in the sense of the convex transform order. As a result, a lower bound for the coefficient of variation of the largest claim amount is established without any restrictions on the parameters of the distributions of claim severities. Furthermore, sufficient conditions are presented to compare the skewness of the largest claim amounts from two sets of independent multiple-outlier scaled claims according to the star order. Some comparison results are also developed for the multiple-outlier proportional hazard rates claims. Numerical examples are presented to illustrate these theoretical results.
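A small numerical sketch of the coefficient-of-variation bound (our own simulation with invented rate parameters, not taken from the paper): since the convex transform order implies an ordering of coefficients of variation, the homogeneous maximum provides a lower bound for the heterogeneous one.

```python
# CV of the largest claim: heterogeneous vs. homogeneous exponentials.
import numpy as np

rng = np.random.default_rng(6)
n_sim = 200_000

# Three claims per portfolio; heterogeneous scales are illustrative.
het = rng.exponential(scale=[1.0, 2.0, 5.0], size=(n_sim, 3)).max(axis=1)
hom = rng.exponential(scale=1.0, size=(n_sim, 3)).max(axis=1)

def cv(z):
    return z.std() / z.mean()

print("CV of max, heterogeneous: %.3f" % cv(het))
print("CV of max, homogeneous:   %.3f" % cv(hom))
# Consistent with the convex transform order result, the homogeneous CV
# acts as a lower bound, regardless of the heterogeneous parameters.
```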
The chapter on visual models discusses basic ways that scientists create visual representations of their data, including charts and graphs, in order to understand their data better. Like all models, visual models are a simplified version of reality. Two of the visual models discussed in this chapter are the frequency table and the histogram. The histogram, in particular, is useful for showing the shape of the distribution of data, its skewness and kurtosis, and the number of peaks. Other visual models in the social sciences include frequency polygons, bar graphs, stem-and-leaf plots, line graphs, pie charts, and scatterplots. All of these visual models help researchers understand their data in different ways, though none is perfect for all situations. Modern technology has led to new ways of visualizing data; these methods are more complex, but they provide data analysts with new insights into their data. The incorporation of geographic data, animations, and interactive tools gives people more options than existed in previous eras.