Bayesian least squares techniques are adapted to estimation of stimulus-response curves, rather broadly conceived. Illustrative examples deal with estimation of person characteristic curves and item characteristic curves in the context of mental testing, and estimation of a stimulus-response curve using data from a psychophysical experiment.
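The flavor of the Bayesian least-squares approach can be sketched with a conjugate linear model on a basis expansion of the stimulus axis. This is a generic illustration, not the paper's method: the polynomial basis, prior precision `tau`, noise variance `sigma2`, and the logistic "true" curve are all illustrative assumptions.

```python
import numpy as np

# Hedged sketch: conjugate Bayesian least-squares fit of a smooth
# stimulus-response curve. All parameter values are illustrative.
rng = np.random.default_rng(0)

stimulus = np.linspace(0.0, 1.0, 40)
true_curve = 1.0 / (1.0 + np.exp(-10 * (stimulus - 0.5)))   # assumed shape
response = true_curve + rng.normal(scale=0.05, size=stimulus.size)

X = np.vander(stimulus, N=4, increasing=True)  # cubic polynomial basis
tau, sigma2 = 1.0, 0.05 ** 2                   # prior precision, noise variance

# Posterior mean of the weights under a N(0, tau^{-1} I) prior:
# ridge-like shrinkage of the least-squares solution.
A = X.T @ X / sigma2 + tau * np.eye(X.shape[1])
posterior_mean = np.linalg.solve(A, X.T @ response / sigma2)
fitted = X @ posterior_mean
```

The prior shrinks the curve estimate toward zero, stabilizing it when the number of stimulus levels is small relative to the basis size.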
On distractor-identification tests students mark as many distractors as possible on each test item. A grading scale is developed for this type of testing. The scale is optimal in that it is the unique scale giving an unbiased estimate of the student's "true score", i.e., the score that would result if no guessing occurred. If the test is administered as a usual multiple choice test and graded using the usual correction-for-guessing scale, the expected item score is the same as for distractor-identification testing using the optimal grading scale. However, the variance of the item score is shown to be less for distractor-identification testing than for usual multiple choice testing under certain conditions.
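The unbiasedness of the usual correction-for-guessing scale, which the optimal scale matches in expectation, can be checked by simulation. The formula used here is the classical one, R − W/(k − 1), where R is the number right, W the number wrong, and k the options per item; the item counts and the blind-uniform-guessing assumption are illustrative, not the paper's design.

```python
import numpy as np

# Hedged sketch: under blind uniform guessing on unknown items, the
# corrected score R - W/(k-1) is unbiased for the number of items
# truly known. Parameter values are illustrative assumptions.
rng = np.random.default_rng(1)

n_items, k, true_known = 50, 4, 30
n_reps = 20_000

scores = np.empty(n_reps)
for i in range(n_reps):
    # Guess uniformly on the n_items - true_known unknown items.
    guesses_right = rng.random(n_items - true_known) < 1.0 / k
    right = true_known + guesses_right.sum()
    wrong = n_items - right
    scores[i] = right - wrong / (k - 1)

mean_score = scores.mean()   # should sit near true_known = 30
```

The simulated mean converges to the true score of 30, while the per-replication spread illustrates the guessing variance that distractor-identification scoring is shown to reduce.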
This paper attempts to clarify the nature of redundancy analysis and its relationships to canonical correlation and multivariate multiple linear regression. Stewart and Love introduced redundancy analysis to provide non-symmetric measures of the dependence of one set of variables on the other, as channeled through the canonical variates. Van den Wollenberg derived sets of variates which directly maximize the between set redundancy. Multivariate multiple linear regression on component scores (such as principal components) is considered. The problem is extended to include an orthogonal rotation of the components. The solution is shown to be identical to van den Wollenberg's maximum redundancy solution.
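As a concrete anchor for the redundancy index: the total Stewart and Love redundancy of a set Y given a set X equals the average squared multiple correlation of each Y variable regressed on all the X variables. A minimal sketch with illustrative random data (dimensions, seed, and the generating model are assumptions, not from the paper):

```python
import numpy as np

# Hedged sketch: total redundancy of Y given X computed as the mean
# R^2 of each Y variable on the full X set. Data are illustrative.
rng = np.random.default_rng(2)

n, p, q = 200, 3, 4
X = rng.normal(size=(n, p))
Y = X @ rng.normal(size=(p, q)) + rng.normal(size=(n, q))

Xc = X - X.mean(axis=0)   # center both sets
Yc = Y - Y.mean(axis=0)

beta, *_ = np.linalg.lstsq(Xc, Yc, rcond=None)
fitted = Xc @ beta
r_squared = fitted.var(axis=0) / Yc.var(axis=0)   # R^2 per Y variable
redundancy = r_squared.mean()                     # total redundancy index
```

Note the asymmetry the paper emphasizes: the redundancy of Y given X computed this way generally differs from the redundancy of X given Y.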
A simple stochastic model is formulated in order to determine the optimal time between the first test and the second test when the test-retest method of assessing reliability is used. A forgetting process and a change in true score process are postulated. The optimal time between tests is derived by maximizing the probability that the respondent has not remembered the response on the first test and has not had a change in true score. The resulting test-retest correlation is then found to be a linear function of the true reliability of the test, where the slope of this function is the key probability of not remembering and having no change in true score. Some numerical examples and suggestions for using the results in empirical studies are given. Specific recommendations are presented for improved design and analysis of intentions data.
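The optimization described above can be made concrete under simple parametric assumptions that are mine, not the paper's: exponential forgetting, so the probability of not remembering by time t is 1 − exp(−λt), and an exponential true-score change process, so the probability of no change by t is exp(−μt). Maximizing their product gives a closed-form optimal delay t* = ln((λ + μ)/μ)/λ.

```python
import numpy as np

# Hedged sketch: optimal test-retest delay under assumed exponential
# forgetting (rate lam) and true-score change (rate mu) processes.
lam, mu = 0.5, 0.05   # illustrative rates, e.g. per week

t = np.linspace(0.01, 40.0, 4000)
# P(not remembered by t) * P(no true-score change by t)
joint = (1.0 - np.exp(-lam * t)) * np.exp(-mu * t)

t_grid = t[np.argmax(joint)]                 # grid-search maximizer
t_closed = np.log((lam + mu) / mu) / lam     # closed-form maximizer
```

Setting the derivative of the product to zero yields exp(−λt)(λ + μ) = μ, hence the closed form; the grid search confirms it numerically.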
Confirmatory factor analysis is considered from a Bayesian viewpoint, in which prior information on the parameters is incorporated in the analysis. An iterative algorithm is developed to obtain the Bayes estimates. A numerical example based on longitudinal data is presented. A simulation study is designed to compare the Bayesian approach with the maximum likelihood method.
In predicting $\tilde y$ scores from $p > 1$ observed scores $(\tilde x)$ in a sample of size $\tilde n$, the optimal strategy (minimum expected loss), under certain assumptions, is shown to be based upon the least squares regression weights $(\hat \beta)$ computed from a previous sample. Letting $\tilde r(\hat \beta)$ represent the correlation between $\tilde y$ and the predicted values $(\hat \beta '\tilde x)$, and letting $\tilde r(w)$ represent the correlation between $\tilde y$ and a different set of predicted values $(w'\tilde x)$, where $w$ is any weighting system which is not a function of $\tilde y$, it is shown that the probability of $\tilde r(\hat \beta)$ being less than $\tilde r(w)$ cannot exceed .50. The relationship of this result to previous research and practical implications are discussed.
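The claim can be probed by Monte Carlo: fit regression weights on a training sample, then compare the cross-sample correlations obtained from those weights and from a fixed a priori weighting (unit weights here). The true coefficient vector, sample sizes, and choice of unit weights are illustrative assumptions.

```python
import numpy as np

# Hedged Monte Carlo sketch: sample regression weights beta_hat should
# yield a lower cross-sample correlation than a fixed weighting w in at
# most half of replications. Setup values are illustrative.
rng = np.random.default_rng(3)

beta_true = np.array([1.0, -0.5, 0.2])   # assumed population weights
n_train, n_new, n_reps = 100, 50, 2000
w = np.ones(3)   # a priori weighting, not a function of y

wins_for_w = 0
for _ in range(n_reps):
    X_tr = rng.normal(size=(n_train, 3))
    y_tr = X_tr @ beta_true + rng.normal(size=n_train)
    beta_hat, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)

    X_new = rng.normal(size=(n_new, 3))
    y_new = X_new @ beta_true + rng.normal(size=n_new)

    r_beta = np.corrcoef(y_new, X_new @ beta_hat)[0, 1]
    r_w = np.corrcoef(y_new, X_new @ w)[0, 1]
    wins_for_w += r_beta < r_w

proportion = wins_for_w / n_reps   # should not exceed .50
```

With a true coefficient vector far from the unit weights, as here, the estimated weights win nearly always; the .50 bound bites when the fixed weights happen to be close to optimal.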
Pairwise preference data are represented as a monotone integral transformation of difference on the underlying stimulus-object or utility scale. The class of monotone transformations considered is that in which the kernel of the integral is a linear combination of B-splines. Two types of data are analyzed: binary and continuous. The parameters of the transformation and the underlying scale values or utilities are estimated by maximum likelihood with inequality constraints on the transformation parameters. Various hypothesis tests and interval estimates are developed. Examples of artificial and real data are presented.
A Monte Carlo evaluation of thirty internal criterion measures for cluster analysis was conducted. Artificial data sets were constructed with clusters which exhibited the properties of internal cohesion and external isolation. The data sets were analyzed by four hierarchical clustering methods. The resulting values of the internal criteria were compared with two external criterion indices which determined the degree of recovery of correct cluster structure by the algorithms. The results indicated that a subset of internal criterion measures could be identified which appear to be valid indices of correct cluster recovery. Indices from this subset could form the basis of a permutation test for the existence of cluster structure or a clustering algorithm.
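One simple internal criterion of the kind evaluated in such studies is the ratio of mean within-cluster distance to mean between-cluster distance, which rewards internal cohesion and external isolation jointly. The two-cluster Gaussian data and the random-label comparison below are illustrative, not the paper's artificial data sets.

```python
import numpy as np

# Hedged sketch: a within/between distance ratio as an internal
# criterion; lower values indicate cohesive, well-isolated clusters.
rng = np.random.default_rng(4)

def within_between_ratio(points, labels):
    # Full pairwise Euclidean distance matrix.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    same = labels[:, None] == labels[None, :]
    diff = ~same                      # between-cluster pairs
    np.fill_diagonal(same, False)     # drop zero self-distances
    return d[same].mean() / d[diff].mean()

# Two well-separated Gaussian clusters, scored with the correct
# partition and with a random permutation of the labels.
pts = np.vstack([rng.normal(0.0, 1.0, (30, 2)),
                 rng.normal(8.0, 1.0, (30, 2))])
good_labels = np.repeat([0, 1], 30)
bad_labels = rng.permutation(good_labels)

ratio_good = within_between_ratio(pts, good_labels)
ratio_bad = within_between_ratio(pts, bad_labels)
```

The correct partition yields a markedly smaller ratio than the permuted labels, which is the behavior a valid index of cluster recovery must show.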
In the last decade several algorithms have been developed for computing the greatest lower bound to reliability, or the constrained minimum-trace communality solution in factor analysis. In this paper the convergence properties of these methods are examined. Instead of using Lagrange multipliers, a new theorem is applied that gives a sufficient condition for a symmetric matrix to be Gramian. Whereas computational pitfalls can be constructed for two methods suggested by Woodhouse and Jackson, it is shown that a slightly modified version of one method suggested by Bentler and Woodward can safely be applied to any set of data. A uniqueness proof for the desired solution is offered.
Bond criticized the base-free measure of change proposed by Tucker, Damarin, and Messick by pointing to an incorrect derivation which is here viewed instead as a correct derivation entailing an inadequately specified tacit assumption. Bond's revision leads to estimates of the correlation between initial position and change which are negatively biased by correlated errors, whereas the original approach, with the tacit assumption properly denoted, leads to unbiased values.
The Triangular Constant Method was designed for the measurement of discriminability between sensory stimuli. Its original model assumes a steady excitatory detection state. The purpose of this paper is to elaborate on the consequences of assuming a variable excitatory state and to formulate the concomitant model.
A new look at latent trait models is proposed. The event of an item being solved by a person is related to the event that the momentary value of a person-specific random component is at least as large as the corresponding value of an item-specific random component. The Birnbaum logistic test model is shown to be generated by a bivariate extreme value distribution for the components. Some consequences of this interpretation are outlined.
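The extreme-value interpretation can be illustrated in its independent special case: if the person component and the item component are independent Gumbel variates with locations θ and b, their difference is logistic, so the probability that the person component is at least as large as the item component follows the one-parameter (Rasch) logistic curve 1/(1 + exp(−(θ − b))). The bivariate dependence that generates Birnbaum's discrimination parameter is not modeled here, and the parameter values are illustrative.

```python
import numpy as np

# Hedged sketch: difference of independent Gumbel variates is logistic,
# so P(person component >= item component) is the Rasch item response
# function. theta and b are illustrative assumptions.
rng = np.random.default_rng(5)

theta, b = 1.0, 0.3   # person ability, item difficulty
n = 200_000

person = rng.gumbel(loc=theta, size=n)
item = rng.gumbel(loc=b, size=n)

empirical = np.mean(person >= item)
logistic = 1.0 / (1.0 + np.exp(-(theta - b)))   # Rasch probability
```

The simulated solving probability matches the logistic curve closely, making the "person component exceeds item component" reading of the model concrete.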