The Q-matrix of a cognitively diagnostic test is said to be complete if it allows for the identification of all possible proficiency classes among examinees. Completeness of the Q-matrix is therefore a key requirement for any cognitively diagnostic test. However, completeness of the Q-matrix is often difficult to establish, especially for tests with a large number of items involving multiple attributes. As an additional complication, completeness is not an intrinsic property of the Q-matrix, but can only be assessed in reference to a specific cognitive diagnosis model (CDM) supposed to underlie the data—that is, the Q-matrix of a given test can be complete for one model but incomplete for another. In this article, a method is presented for assessing whether a given Q-matrix is complete for a given CDM. The proposed procedure relies on the theoretical framework of general CDMs and is therefore legitimate for CDMs that can be reparameterized as a general CDM.
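The completeness criterion—distinct proficiency classes must induce distinct expected response patterns—can be made concrete for the special case of the DINA model, where a brute-force check over all 2^K attribute profiles is straightforward. The sketch below is a hypothetical Python illustration of that special case, not the article's general procedure:

```python
from itertools import product

def dina_ideal_response(alpha, q_row):
    # DINA ideal response: 1 iff the examinee masters every attribute the item requires
    return int(all(a >= q for a, q in zip(alpha, q_row)))

def is_complete_for_dina(Q):
    # Q (items x attributes, 0/1 entries) is complete for DINA iff distinct
    # attribute profiles always yield distinct ideal response vectors
    K = len(Q[0])
    seen = set()
    for alpha in product([0, 1], repeat=K):
        eta = tuple(dina_ideal_response(alpha, row) for row in Q)
        if eta in seen:
            return False
        seen.add(eta)
    return True
```

For DINA this check agrees with the known result that a Q-matrix is complete exactly when it contains (up to row order) one single-attribute item per attribute.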
Cognitive diagnosis models (CDMs) for educational assessment are constrained latent class models. Examinees are assigned to classes of intellectual proficiency defined in terms of cognitive skills called attributes, which an examinee may or may not have mastered. The Reduced Reparameterized Unified Model (Reduced RUM) has received considerable attention among psychometricians. Markov Chain Monte Carlo (MCMC) or Expectation Maximization (EM) are typically used for estimating the Reduced RUM. Commercial implementations of the EM algorithm are available in the latent class analysis (LCA) routines of Latent GOLD and Mplus, for example. Fitting the Reduced RUM with an LCA routine requires that it be reparameterized as a logit model, with constraints imposed on the parameters. For models involving two attributes, these have been worked out. However, for models involving more than two attributes, the parameterization and the constraints are nontrivial and currently unknown. In this article, the general parameterization of the Reduced RUM as a logit model involving any number of attributes and the associated parameter constraints are derived. As a practical illustration, the LCA routine in Mplus is used for fitting the Reduced RUM to two synthetic data sets and to a real-world data set; for comparison, the results obtained by using the MCMC implementation in OpenBUGS are also provided.
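The item response function being reparameterized here is the standard Reduced RUM form P(X_j = 1 | α) = π*_j ∏_k (r*_jk)^{q_jk(1 − α_k)}: π*_j is the correct-response probability for an examinee mastering all required attributes, and each required but non-mastered attribute multiplies in a penalty r*_jk. A minimal Python sketch of evaluating this function (hypothetical parameter names, not the article's Mplus or OpenBUGS code):

```python
def reduced_rum_prob(alpha, q_row, pi_star, r_star):
    # P(X = 1 | alpha) = pi* * prod_k r*_k ** (q_k * (1 - alpha_k))
    p = pi_star
    for a, q, r in zip(alpha, q_row, r_star):
        if q == 1 and a == 0:  # required attribute not mastered: apply penalty
            p *= r
    return p
```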
The monotone homogeneity model (MHM—also known as the unidimensional monotone latent variable model) is a nonparametric IRT formulation that provides the underpinning for partitioning a collection of dichotomous items to form scales. Ellis (Psychometrika 79:303–316, 2014, doi:10.1007/s11336-013-9341-5) has recently derived inequalities that are implied by the MHM, yet require only the bivariate (inter-item) correlations. In this paper, we incorporate these inequalities within a mathematical programming formulation for partitioning a set of dichotomous scale items. The objective criterion of the partitioning model is to produce clusters of maximum cardinality. The formulation is a binary integer linear program that can be solved exactly using commercial mathematical programming software. However, we have also developed a standalone branch-and-bound algorithm that produces globally optimal solutions. Simulation results and a numerical example are provided to demonstrate the proposed method.
Dynamic programming methods for matrix permutation problems in combinatorial data analysis can produce globally-optimal solutions for matrices up to size 30×30, but are computationally infeasible for larger matrices because of enormous computer memory requirements. Branch-and-bound methods also guarantee globally-optimal solutions, but computation time considerations generally limit their applicability to matrix sizes no greater than 35×35. Accordingly, a variety of heuristic methods have been proposed for larger matrices, including iterative quadratic assignment, tabu search, simulated annealing, and variable neighborhood search. Although these heuristics can produce exceptional results, they are prone to converge to local optima where the permutation is difficult to dislodge via traditional neighborhood moves (e.g., pairwise interchanges, object-block relocations, object-block reversals, etc.). We show that a heuristic implementation of dynamic programming yields an efficient procedure for escaping local optima. Specifically, we propose applying dynamic programming to reasonably-sized subsequences of consecutive objects in the locally-optimal permutation, identified by simulated annealing, to further improve the value of the objective function. Experimental results are provided for three classic matrix permutation problems in the combinatorial data analysis literature: (a) maximizing a dominance index for an asymmetric proximity matrix; (b) least-squares unidimensional scaling of a symmetric dissimilarity matrix; and (c) approximating an anti-Robinson structure for a symmetric dissimilarity matrix.
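The subsequence idea can be sketched for the dominance-index problem: reordering a window of consecutive objects leaves their relative order to all outside objects unchanged, so only within-window pairs affect the objective, and a subset-based dynamic program can reorder the window exactly. The following is a hypothetical Python sketch of that building block, not the authors' implementation:

```python
def best_window_order(A, window):
    # Subset DP: g(S) = best within-window dominance sum over orderings of S.
    # Placing object `pos` last in S adds A[i][pos] for every other i in S.
    n = len(window)
    full = (1 << n) - 1
    g = [float('-inf')] * (1 << n)
    g[0] = 0.0
    choice = [None] * (1 << n)
    for S in range(1, 1 << n):
        for pos in range(n):
            if not S & (1 << pos):
                continue
            rest = S ^ (1 << pos)
            gain = sum(A[window[i]][window[pos]]
                       for i in range(n) if rest & (1 << i))
            if g[rest] + gain > g[S]:
                g[S], choice[S] = g[rest] + gain, pos
    # reconstruct the optimal within-window order
    order, S = [], full
    while S:
        pos = choice[S]
        order.append(window[pos])
        S ^= 1 << pos
    order.reverse()
    return order, g[full]
```

Applied to windows of, say, 15–20 consecutive objects of a locally optimal permutation, this exact reordering step can escape optima that pairwise interchanges cannot.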
Several authors have touted the p-median model as a plausible alternative to within-cluster sums of squares (i.e., K-means) partitioning. Purported advantages of the p-median model include the provision of “exemplars” as cluster centers, robustness with respect to outliers, and the accommodation of a diverse range of similarity data. We developed a new simulated annealing heuristic for the p-median problem and completed a thorough investigation of its computational performance. The salient findings from our experiments are that our new method substantially outperforms a previous implementation of simulated annealing and is competitive with the most effective metaheuristics for the p-median problem.
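A generic simulated annealing scheme for the p-median problem—swap one selected center for a non-center, accepting uphill moves with the Metropolis criterion—can be sketched as follows. This is a hypothetical minimal Python illustration, not the authors' heuristic:

```python
import math
import random

def pmedian_cost(D, centers):
    # each object is assigned to its nearest selected center
    return sum(min(D[i][c] for c in centers) for i in range(len(D)))

def pmedian_sa(D, p, temp=1.0, cooling=0.95, iters_per_temp=50,
               t_min=1e-3, seed=0):
    rng = random.Random(seed)
    n = len(D)
    centers = set(rng.sample(range(n), p))
    cost = pmedian_cost(D, centers)
    best, best_cost = set(centers), cost
    while temp > t_min:
        for _ in range(iters_per_temp):
            out = rng.choice(sorted(centers))
            inn = rng.choice([i for i in range(n) if i not in centers])
            cand = (centers - {out}) | {inn}
            delta = pmedian_cost(D, cand) - cost
            # accept improvements always, deteriorations with Metropolis probability
            if delta < 0 or rng.random() < math.exp(-delta / temp):
                centers, cost = cand, cost + delta
                if cost < best_cost:
                    best, best_cost = set(centers), cost
        temp *= cooling
    return best, best_cost
```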
Parametric likelihood estimation is the prevailing method for fitting cognitive diagnosis models—also called diagnostic classification models (DCMs). Nonparametric concepts and methods that do not rely on a parametric statistical model have been proposed for cognitive diagnosis. These methods are particularly useful when sample sizes are small. The general nonparametric classification (GNPC) method for assigning examinees to proficiency classes can accommodate assessment data conforming to any diagnostic classification model that describes the probability of a correct item response as an increasing function of the number of required attributes mastered by an examinee (known as the “monotonicity assumption”). Hence, the GNPC method can be used with any model that can be represented as a general DCM. However, the statistical properties of the estimator of examinees’ proficiency class are currently unknown. In this article, the consistency theory of the GNPC proficiency-class estimator is developed and its statistical consistency is proven.
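The simpler special case that the GNPC method generalizes—nonparametric classification by minimum Hamming distance to conjunctive ideal response vectors—can be sketched as follows (hypothetical Python; the GNPC method itself uses weighted ideal responses estimated from the data):

```python
from itertools import product

def ideal_response(alpha, q_row):
    # conjunctive (DINA-type) ideal response: correct iff all required attributes mastered
    return int(all(a >= q for a, q in zip(alpha, q_row)))

def npc_classify(x, Q):
    # assign the attribute profile whose ideal response vector is
    # closest to the observed responses x in Hamming distance
    K = len(Q[0])
    best_alpha, best_d = None, None
    for alpha in product([0, 1], repeat=K):
        eta = [ideal_response(alpha, row) for row in Q]
        d = sum(xi != ei for xi, ei in zip(x, eta))
        if best_d is None or d < best_d:
            best_alpha, best_d = alpha, d
    return best_alpha, best_d
```

No item parameters are estimated, which is why such classifiers remain usable when samples are too small for parametric model fitting.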
Although the K-means algorithm for minimizing the within-cluster sums of squared deviations from cluster centroids is perhaps the most common method for applied cluster analyses, a variety of other criteria are available. The p-median model is an especially well-studied clustering problem that requires the selection of p objects to serve as cluster centers. The objective is to choose the cluster centers such that the sum of the Euclidean distances (or some other dissimilarity measure) of objects assigned to each center is minimized. Using 12 data sets from the literature, we demonstrate that a three-stage procedure consisting of a greedy heuristic, Lagrangian relaxation, and a branch-and-bound algorithm can produce globally optimal solutions for p-median problems of nontrivial size (several hundred objects, five or more variables, and up to 10 clusters). We also report the results of an application of the p-median model to an empirical data set from the telecommunications industry.
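The greedy first stage of such a procedure—repeatedly adding the center that most reduces the total assignment distance—can be sketched in a few lines. This is a hypothetical Python illustration of a standard greedy heuristic, not the authors' code:

```python
def greedy_pmedian(D, p):
    # greedily add the center giving the largest reduction in total distance
    n = len(D)
    centers = []
    assign_cost = [float('inf')] * n  # distance of each object to its nearest center so far
    for _ in range(p):
        best_j, best_total = None, None
        for j in range(n):
            if j in centers:
                continue
            total = sum(min(assign_cost[i], D[i][j]) for i in range(n))
            if best_total is None or total < best_total:
                best_j, best_total = j, total
        centers.append(best_j)
        assign_cost = [min(assign_cost[i], D[i][best_j]) for i in range(n)]
    return centers, sum(assign_cost)
```

In the three-stage scheme, a solution like this seeds Lagrangian relaxation, whose bounds in turn tighten the final branch-and-bound search.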
Joint maximum likelihood estimation (JMLE) is developed for diagnostic classification models (DCMs). JMLE has been barely used in Psychometrics because JMLE parameter estimators typically lack statistical consistency. The JMLE procedure presented here resolves the consistency issue by incorporating an external, statistically consistent estimator of examinees’ proficiency class membership into the joint likelihood function, which subsequently allows for the construction of item parameter estimators that also have the consistency property. Consistency of the JMLE parameter estimators is established within the framework of general DCMs: The JMLE parameter estimators are derived for the Loglinear Cognitive Diagnosis Model (LCDM). Two consistency theorems are proven for the LCDM. Using the framework of general DCMs makes the results and proofs also applicable to DCMs that can be expressed as submodels of the LCDM. Simulation studies are reported for evaluating the performance of JMLE when used with tests of varying length and different numbers of attributes. As a practical application, JMLE is also used with “real world” educational data collected with a language proficiency test.
The Polytomous Local Independence Model (PoLIM) of Stefanutti, de Chiusole, Anselmi, and Spoto is an extension of the Basic Local Independence Model (BLIM) to accommodate polytomous items. BLIM, a model for analyzing responses to binary items, is based on Knowledge Space Theory, a framework developed by cognitive scientists and mathematical psychologists for modeling human knowledge acquisition and representation. The purpose of this commentary is to show that PoLIM is simply a paraphrase of a DINA model in cognitive diagnosis for polytomous items. Specifically, BLIM is shown to be equivalent to the DINA model when the BLIM-items are conceived as binary single-attribute items, each with a distinct attribute; thus, PoLIM is equivalent to the DINA for polytomous single-attribute items, each with a distinct attribute.
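The DINA model referred to here has the textbook item response function P(X_j = 1 | α) = (1 − s_j)^{η_j} g_j^{1 − η_j}, where η_j indicates whether all attributes required by item j are mastered, s_j is the slip probability, and g_j the guessing probability. A minimal Python sketch:

```python
def dina_prob(alpha, q_row, slip, guess):
    # eta = 1 iff the examinee masters every attribute the item requires
    eta = all(a >= q for a, q in zip(alpha, q_row))
    # masters answer correctly unless they slip; non-masters succeed only by guessing
    return 1 - slip if eta else guess
```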
The asymptotic classification theory of cognitive diagnosis (ACTCD) provided the theoretical foundation for using clustering methods that do not rely on a parametric statistical model for assigning examinees to proficiency classes. Like general diagnostic classification models, clustering methods can be useful in situations where the true diagnostic classification model (DCM) underlying the data is unknown and possibly misspecified, or the items of a test conform to a mix of multiple DCMs. Clustering methods can also be an option when fitting advanced and complex DCMs encounters computational difficulties. These can range from the use of excessive CPU times to plain computational infeasibility. However, the propositions of the ACTCD have only been proven for the Deterministic Input Noisy Output “AND” gate (DINA) model and the Deterministic Input Noisy Output “OR” gate (DINO) model. For other DCMs, there does not exist a theoretical justification to use clustering for assigning examinees to proficiency classes. But if clustering is to be used legitimately, then the ACTCD must cover a larger number of DCMs than just the DINA model and the DINO model. Thus, the purpose of this article is to prove the theoretical propositions of the ACTCD for two other important DCMs, the Reduced Reparameterized Unified Model and the General Diagnostic Model.
The clique partitioning problem (CPP) requires the establishment of an equivalence relation for the vertices of a graph such that the sum of the edge costs associated with the relation is minimized. The CPP has important applications for the social sciences because it provides a framework for clustering objects measured on a collection of nominal or ordinal attributes. In such instances, the CPP incorporates edge costs obtained from an aggregation of binary equivalence relations among the attributes. We review existing theory and methods for the CPP and propose two versions of a new neighborhood search algorithm for efficient solution. The first version (NS-R) uses a relocation algorithm in the search for improved solutions, whereas the second (NS-TS) uses an embedded tabu search routine. The new algorithms are compared to simulated annealing (SA) and tabu search (TS) algorithms from the CPP literature. Although the heuristics yielded comparable results for some test problems, the neighborhood search algorithms generally yielded the best performances for large and difficult instances of the CPP.
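The relocation step that NS-R builds on—moving each vertex to whichever cluster (or a new singleton) lowers the sum of within-cluster edge costs—can be sketched as follows. This is a hypothetical Python illustration of the generic relocation idea, not the authors' algorithm:

```python
def cpp_cost(C, labels):
    # sum of edge costs over pairs of vertices placed in the same cluster
    n = len(labels)
    return sum(C[i][j] for i in range(n) for j in range(i + 1, n)
               if labels[i] == labels[j])

def relocation_pass(C, labels):
    # one pass of vertex relocations; returns True if any move reduced the cost
    n = len(C)
    improved = False
    for v in range(n):
        # total edge cost from v to each existing cluster
        cost_to = {}
        for u in range(n):
            if u != v:
                cost_to[labels[u]] = cost_to.get(labels[u], 0) + C[v][u]
        current = cost_to.get(labels[v], 0)
        # candidate moves: any existing cluster, or a brand-new singleton (cost 0)
        best_lab, best_cost = max(labels) + 1, 0
        for lab, c in cost_to.items():
            if c < best_cost:
                best_lab, best_cost = lab, c
        if best_cost < current:
            labels[v] = best_lab
            improved = True
    return improved
```

Repeating passes until no move helps (`while relocation_pass(C, labels): pass`) yields a local optimum; the tabu search embedded in NS-TS is one way to escape such optima.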
The meaning and implications of the word colonialism and of the closely connected terms of empire and imperialism have undergone a profound transformation in the last decades. Until the end of the nineteenth century the word empire or imperialism was generally used in a laudatory and not a pejorative meaning. The Roman Empire had been the model for Western political thought for one thousand years. The Americans at the end of the eighteenth century proudly and hopefully spoke of their empire. The French revolutionaries proclaimed the imperial expansion of their leadership. Modern Western civilization was regarded as superior to other more stagnant civilizations, and to bring higher civilization to less developed countries was considered a praiseworthy enterprise, in spite of the fact that like so many human efforts this too was inextricably mingled with all kinds of corruption and greed. Empire and colonialism always implied dominion and power; and power, whether exercised by “native” or “alien” governments, has a potency for abuse as probably no other relationship has. Yet liberal alien governments—and liberalism means primarily restraint upon, and limitations of, governmental authority—will be more easily controlled by public opinion against abuse of power than illiberal “native” governments.
Among the German nationalists of the Napoleonic period none had a stronger influence on the practical manifestations of German nationalism than Friedrich Ludwig Jahn, Father Jahn as he was called. He was born in 1778, in the Mark Brandenburg; in the spirit of a new Prussian nationalism, he wrote in 1799 a book on the promotion of patriotism in the Prussian realm, which he opened with the characteristic quotation:
In the 1820's and 1830's the romantic movement was at the height of its influence throughout Europe. In its origins, among the Lake poets in England and in the circle around the brothers Schlegel in Germany, romanticism opposed the libertarian and rationalist tendencies of the French Revolution. In its later stage romanticism presented a more complex attitude, its imagination turned simultaneously to the fascination of the past and the Middle Ages and to the appeal of the future happiness of free men. Under the quiet surface of the Biedermeier the unrest of the Napoleonic wars continued to arouse in the educated young generation a longing for change, for activity, for a new sense of self-fulfillment.
Machiavelli was a man of the Italian Renaissance; yet Mazzini shows no trace of Machiavellian thought, and Mussolini's attempt to revive Machiavelli in his homeland failed lamentably and ignominiously. Italy lacked the power and the hardness of character which Machiavellianism presupposes. Piedmont was an imitation Prussia, but only an imitation Prussia. Machiavelli's ideas bore real fruit in nineteenth century Germany. The German inclination to force ideas in the “free realm of the mind” to their logical and absurd conclusions without regard for the limitations of reality and common sense, combined with Prussia's power and hardness of character to implant Machiavellianism firmly in Germany. While German statesmen like Frederick II and Bismarck were its ablest disciples, its noblest teacher and prophet was Heinrich von Treitschke.
Romanticism though in its beginning little concerned with politics or the state, prepared the rise of German nationalism after 1800. It was an aesthetic revolution, a resort to imagination, almost feminine in its sensibility; it was poetry more deeply indebted to the spirit of music than the poetry of the eighteenth century had been, rich in emotional depth, more potent in magic evocation. But German romanticism was and wished to be more than poetry. It was an interpretation of life, nature and history—and this philosophic character distinguished it from romanticism in other lands. It was sharply opposed to the rationalism of the eighteenth century; it mobilized the fascination of the past to fight against the principles of 1789. In that indirect way romanticism came to concern itself with political and social life and with the state. It never developed a program for a modern German nation-state, but with its emphasis on the peculiarity of the German mind it helped the growth of a consciousness of German uniqueness.