Behavioral measurement is the hallmark of research in the field of computational social science. We are witnessing innovative and clever use of existing and novel commercial or research-grade “sensors” to measure various aspects of human behavior and well-being. Passive sensing, a form of measurement in which data are gathered and tracked unobtrusively using pervasive and ubiquitous sensors, is increasingly recognized and utilized in organizational science research. This chapter presents an overview of where passive sensing has been successful in workplace measurement, ranging from assessing worker personality, productivity, and well-being to understanding the overall organizational pulse. A range of passive sensing infrastructures are described (e.g., smartphones, wearable devices, social media) and several machine-learning-based predictive approaches are noted in this body of research. The chapter then highlights outstanding challenges as this field matures, which include issues of limited generalizability in computational measurement of workplace behaviors, gaps and limitations of gold-standard assessment, tradeoffs between model simplicity and sophistication, and, importantly, privacy risks. The chapter concludes with recommendations on important areas that need further or altogether new investment, so as to fully realize the potential of passive sensing technologies for more accurate, actionable, and ethical workplace measurement.
The ubiquity of mobile devices allows researchers to assess people’s real-life behaviors objectively, unobtrusively, and with high temporal resolution. As a result, psychological mobile sensing research has grown rapidly. However, only very few cross-cultural mobile sensing studies have been conducted to date. In addition, existing multi-country studies often fail to acknowledge or examine possible cross-cultural differences. In this chapter, we illustrate biases that can occur when conducting cross-cultural mobile sensing studies. Such biases can relate to measurement, construct, sample, device type, user practices, and environmental factors. We also propose mitigation strategies to minimize these biases, such as the use of informants with expertise in local culture, the development of cross-culturally comparable instruments, the use of culture-specific recruiting strategies and incentives, and rigorous reporting standards regarding the generalizability of research findings. We hope to inspire rigorous comparative research to establish and refine mobile sensing methodologies for cross-cultural psychology.
The quality of psychological assessment processes in talent management is influenced by our choices about which measurement technologies to use. Technology with relevance to assessing talent is also advancing at great speed in many domains, including processing power and speed, human–computer interaction research, and machine learning and artificial intelligence. Given these rapid developments, it is an appropriate time to pause and take stock of how emerging assessment approaches (e.g., game-based assessment) that leverage these new developments are used, relative to more traditional approaches such as questionnaires and interviews. To achieve this objective, we report here on a survey of European assessment practitioners. We asked about the technology they use for psychological assessment, the constructs they measure with those approaches, and the organisational levels at which they are used. We also asked how traditional approaches are being enhanced with technology, and about practitioner perceptions of the reliability, validity, adverse impact, and privacy of their technological choices.
In this essay, we review the Technology and Measurement around the Globe chapters with an eye toward integration and synthesis. We primarily focus on implications for testing, and then make connections to the broader world of nontest assessment. We identify themes of privacy, fairness, workplace applications, and emerging technologies, and offer a research agenda for future investigations that seek to understand culture, technology, and measurement.
An overview of testing and measurement in North America is provided, covering topics related to privacy laws and regulations, online proctoring, artificial intelligence, accommodations, accessibility, and the “opt out of testing” movement that are currently defining measurement in North America. This is not to say that these challenges are unique to North America; in fact, the challenges related to these topics are being faced all over the world in varying degrees, and the same opportunities exist, but these topics are of particular importance when it comes to measurement and assessment in North America. Building on these observations, we discuss how advances in technology and computing power provide an opportunity to challenge the status quo of assessment; these advances will allow skills to be assessed in more authentic ways that provide better insight into someone’s knowledge, skills, and abilities. The question we should be asking and attempting to answer is “How can assessment developers leverage the power of the cloud and technology to measure skills more accurately and create higher fidelity in the assessment process?”
This chapter provides an overview of the common machine learning algorithms used in psychological measurement (to measure human attributes). They include algorithms used to measure personality from interview videos; job satisfaction from open-ended text responses; and group-level emotions from social media posts and internet search trends. These algorithms enable effective and scalable measures of human psychology and behavior, driving technological advancements in measurement. The chapter consists of three parts. We first discuss machine learning and its unique contribution to measurement. We then provide an overview of the common machine learning algorithms used in measurement and their example applications. Finally, we provide recommendations and resources for using machine learning algorithms in measurement.
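As a minimal, hypothetical sketch of the measurement idea described above (scoring a construct such as job satisfaction from open-ended text), the following stands a hand-weighted lexicon in for a trained model; the words and weights are invented for illustration and are not taken from the chapter.

```python
# Illustrative sketch: scoring open-ended text for a construct
# (e.g., job satisfaction) with a bag-of-words linear model.
# The lexicon and its weights are hypothetical, not from the chapter;
# in practice these weights would be learned from labelled data.

import re

LEXICON = {
    "enjoy": 1.0, "love": 1.0, "supportive": 0.8, "growth": 0.5,
    "stress": -0.8, "overworked": -1.0, "quit": -1.0, "boring": -0.6,
}

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

def satisfaction_score(text):
    """Average lexicon weight over matched tokens; 0.0 if none match."""
    hits = [LEXICON[t] for t in tokenize(text) if t in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

print(satisfaction_score("I enjoy my team, very supportive"))   # positive
print(satisfaction_score("I feel overworked and want to quit")) # negative
```

Replacing the fixed lexicon with coefficients fit to rated responses turns this into the kind of scalable text-based measure the chapter surveys.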
When used appropriately, technology-enabled measurement can help researchers and practitioners better assess various constructs and phenomena of interest, and hence better understand, predict, and influence them in order to address social and behavioral issues. This article examines key issues and experiences in Singapore associated with digital transformation and the data society, including challenges and opportunities in technology-enabled measurement that may apply in other cities and countries as well. Using Singapore’s digital transformation journey, which applies technology systematically and extensively to improve the lives of its people, as an example, critical issues of contexts, changes, and collaborations in research, policy, and practice involving technology-enabled measurement of psychological constructs and processes are discussed.
This chapter reviews the potential of technological innovations to advance the assessment of psychological variables in education and the labour market in South America. We discuss in more detail SENNA kids, an electronic assessment tool developed in Brazil to facilitate the formative assessment of social-emotional skills in young children. For the labour market, we describe an employee-experience tool developed by BONDI-X, a South American start-up, to track employees’ experience and foster communication between employees and organizations. We discuss how features of these two systems and their technologies can be integrated to contribute to a technology-supported, self-directed experience system that puts individuals in the driver’s seat of their personal development across their educational and employment careers.
The current chapter provides an overview of technology and measurement in Asia. In the first half of the chapter, we summarize the current use of technology in research, as well as related regulations and legal environments. In the second half of the chapter, we compare the existing technological applications in Asia with the rest of the world, discuss factors influencing the applications in Asia, and highlight potential developmental areas.
There have been tremendous advancements in technology-based assessment, including new modes of data collection and the use of artificial intelligence. Traditional assessment techniques in the fields of psychology, business, education, and health need to be reconsidered. Yet, while technology is pervasive, its spread is not consistent, owing to national differences in economics and culture. Given these trends, this book offers an integrative consolidation of how technology is changing the face of assessment across different regions of the world. The book has three major sections: in the foundations section, core issues of computational models, passively sensed data, and privacy concerns are discussed; in the global perspectives section, the book identifies ways technology has changed how we assess human attributes across the world; and in the regional focus section, the book surveys how different regions around the world have adopted technology-based assessments for their unique cultural and societal contexts.
In 2003, Bohman, Frieze, and Martin initiated the study of randomly perturbed graphs and digraphs. For digraphs, they showed that for every $\alpha \gt 0$, there exists a constant $C$ such that for every $n$-vertex digraph of minimum semi-degree at least $\alpha n$, if one adds $Cn$ random edges then asymptotically almost surely the resulting digraph contains a consistently oriented Hamilton cycle. We generalize their result, showing that the hypothesis of this theorem actually asymptotically almost surely ensures the existence of every orientation of a cycle of every possible length, simultaneously. Moreover, we prove that we can relax the minimum semi-degree condition to a minimum total degree condition when considering orientations of a cycle that do not contain a large number of vertices of indegree $1$. Our proofs make use of a variant of an absorbing method of Montgomery.
Since the 1960s, Mastermind has been studied for the combinatorial and information-theoretical interest the game has to offer. Many results have been discovered, beginning with Erdős and Rényi, who determined the optimal number of queries needed for two colours. For $k$ colours and $n$ positions, Chvátal found asymptotically optimal bounds when $k \le n^{1-\varepsilon }$. Following a sequence of gradual improvements for $k\geq n$ colours, the central open question is to resolve the gap between $\Omega (n)$ and $\mathcal{O}(n\log \log n)$ for $k=n$. In this paper, we resolve this gap by presenting the first algorithm for solving $k=n$ Mastermind with a linear number of queries. As a consequence, we are able to determine the query complexity of Mastermind for any parameters $k$ and $n$.
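For readers unfamiliar with the game, the feedback a codebreaker receives per query can be sketched as follows; the list encoding of codes over $k$ colours and $n$ positions is an assumption for illustration.

```python
# Illustrative sketch of the Mastermind feedback rule: a query is
# scored against the hidden code as (black, white), where black counts
# exact positional matches and white counts colour matches in the
# wrong position.

from collections import Counter

def feedback(secret, query):
    black = sum(s == q for s, q in zip(secret, query))
    # The multiset intersection counts positional matches too,
    # so subtract black to get the misplaced-colour count.
    overlap = sum((Counter(secret) & Counter(query)).values())
    return black, overlap - black

print(feedback([1, 2, 3, 4], [1, 3, 2, 4]))  # (2, 2)
```

The query-complexity results above count how many such (black, white) answers are needed, in the worst case, to pin down the secret.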
The switch process alternates independently between 1 and $-1$, with the first switch to 1 occurring at the origin. The expected value function of this process is defined uniquely by the distribution of switching times. The relation between the two is implicitly described through the Laplace transform, which is difficult to use for determining if a given function is the expected value function of some switch process. We derive an explicit relation under the assumption of monotonicity of the expected value function. It is shown that geometric divisible switching time distributions correspond to a non-negative decreasing expected value function. Moreover, an explicit relation between the expected value of a switch process and the autocovariance function of the switch process stationary counterpart is obtained, leading to a new interpretation of the classical Pólya criterion for positive-definiteness.
Choices made by individuals have widespread impacts: for instance, people choose between political candidates to vote for, between social media posts to share, and between brands to purchase. Moreover, data on these choices are increasingly abundant. Discrete choice models are a key tool for learning individual preferences from such data. Additionally, social factors like conformity and contagion influence individual choice. Traditional methods for incorporating these factors into choice models do not account for the entire social network and require hand-crafted features. To overcome these limitations, we use graph learning to study choice in networked contexts. We identify three ways in which graph learning techniques can be used for discrete choice: learning chooser representations, regularizing choice model parameters, and directly constructing predictions from a network. We design methods in each category and test them on real-world choice datasets, including county-level 2016 US election results and Android app installation and usage data. We show that incorporating social network structure can improve the predictions of the standard econometric choice model, the multinomial logit. We provide evidence that app installations are influenced by social context, but we find no such effect on app usage among the same participants, which instead is habit-driven. In the election data, we highlight the additional insights a discrete choice framework provides over classification or regression, the typical approaches. On synthetic data, we demonstrate the sample complexity benefit of using social information in choice models.
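The multinomial logit referenced above assigns choice probabilities via a softmax over per-option utilities. A minimal sketch, with hypothetical utility values standing in for fitted linear scores:

```python
# Illustrative sketch of multinomial logit choice probabilities.
# Utilities are hypothetical; in a fitted model each would be a
# linear score (coefficients times option/chooser features).

import math

def mnl_probabilities(utilities):
    """Softmax over option utilities; shift by the max for stability."""
    m = max(utilities)
    exps = [math.exp(u - m) for u in utilities]
    z = sum(exps)
    return [e / z for e in exps]

probs = mnl_probabilities([1.0, 0.5, 0.0])
print(probs)       # highest-utility option gets the largest probability
print(sum(probs))  # probabilities sum to 1
```

The graph-learning variants described in the abstract keep this probabilistic core but, for example, tie each chooser's utility parameters to those of their network neighbours through regularization.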
Skin and soft tissue infections (SSTIs) are common bacterial infections. We hypothesized that SSTI rates would decrease significantly during the COVID-19 pandemic, owing to directives to avoid unneeded care and to attenuated SSTI risk behaviours. We retrospectively examined all patients with an SSTI ICD-10 diagnosis code in the Los Angeles County Department of Health Services, the second largest U.S. safety net healthcare system, between 16 March 2017 and 15 March 2022. We then compared pre-pandemic with intra-pandemic SSTI rates using an interrupted time series analysis. We found 72,118 SSTIs: 46,206 during the pre-pandemic period and 25,912 during the intra-pandemic period. The pre-pandemic SSTI rate was significantly higher than the intra-pandemic rate (3.27 vs. 2.31 cases per 1,000 empanelled patient-months, P < 0.0001). Monthly SSTI cases decreased by 1.19 SSTIs/1,000 empanelled patient-months between the pre- and intra-pandemic periods (P = 0.0003). All SSTI subgroups (inpatient, observation unit, emergency department, and outpatient clinics) had significant SSTI decreases between the two time periods (P < 0.05), except for the observation unit (P = 0.50). Compared to the pre-pandemic period, medically attended SSTI rates in our large U.S. safety net healthcare system decreased significantly, by nearly 30%. Whether these findings reflect true SSTI decreases or decreased health system utilization for SSTIs requires further examination.
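The headline "nearly 30%" figure can be checked arithmetically from the reported rates; note that this simple pre/intra rate difference (0.96) is smaller than the model-adjusted interrupted-time-series estimate of 1.19 reported above, since the latter accounts for the underlying time trend.

```python
# Arithmetic check of the headline result: the relative decrease in
# medically attended SSTI rates between the pre- and intra-pandemic
# periods, using the reported rates (cases per 1,000 empanelled
# patient-months).

pre_rate = 3.27
intra_rate = 2.31

absolute_drop = pre_rate - intra_rate     # 0.96 per 1,000 patient-months
relative_drop = absolute_drop / pre_rate  # ~0.29, i.e. "nearly 30%"

print(f"{relative_drop:.1%}")  # 29.4%
```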
This paper proposes a subsampling inference method for extreme conditional quantiles based on a self-normalized version of a local estimator for conditional quantiles, such as the local linear quantile regression estimator. The proposed method circumvents the difficulty of estimating nuisance parameters in the limiting distribution of the local estimator. A simulation study and an empirical example illustrate the usefulness of our subsampling inference for investigating extremal phenomena.
Given a graph $H$, let us denote by $f_\chi (H)$ and $f_\ell (H)$, respectively, the maximum chromatic number and the maximum list chromatic number of $H$-minor-free graphs. Hadwiger’s famous colouring conjecture from 1943 states that $f_\chi (K_t)=t-1$ for every $t \ge 2$. A closely related problem that has received significant attention in the past concerns $f_\ell (K_t)$, for which it is known that $2t-o(t) \le f_\ell (K_t) \le O(t (\!\log \log t)^6)$. Thus, $f_\ell (K_t)$ is bounded away from the conjectured value $t-1$ for $f_\chi (K_t)$ by at least a constant factor. The so-called $H$-Hadwiger’s conjecture, proposed by Seymour, asks to prove that $f_\chi (H)={\textrm{v}}(H)-1$ for a given graph $H$ (which would be implied by Hadwiger’s conjecture).
In this paper, we prove several new lower bounds on $f_\ell (H)$, thus exploring the limits of a list colouring extension of $H$-Hadwiger’s conjecture. Our main results are:
For every $\varepsilon \gt 0$ and all sufficiently large graphs $H$ we have $f_\ell (H)\ge (1-\varepsilon )({\textrm{v}}(H)+\kappa (H))$, where $\kappa (H)$ denotes the vertex-connectivity of $H$.
For every $\varepsilon \gt 0$ there exists $C=C(\varepsilon )\gt 0$ such that asymptotically almost every $n$-vertex graph $H$ with $\left \lceil C n\log n\right \rceil$ edges satisfies $f_\ell (H)\ge (2-\varepsilon )n$.
The first result generalizes recent results on complete and complete bipartite graphs and shows that the list chromatic number of $H$-minor-free graphs is separated from the desired value of $({\textrm{v}}(H)-1)$ by a constant factor for all large graphs $H$ of linear connectivity. The second result tells us that for almost all graphs $H$ with superlogarithmic average degree $f_\ell (H)$ is separated from $({\textrm{v}}(H)-1)$ by a constant factor arbitrarily close to $2$. Conceptually these results indicate that the graphs $H$ for which $f_\ell (H)$ is close to the conjectured value $({\textrm{v}}(H)-1)$ for $f_\chi (H)$ are typically rather sparse.
In the present work, neural networks are applied to formulate parametrized hyperelastic constitutive models. The models fulfill all common mechanical conditions of hyperelasticity by construction. In particular, partially input convex neural network (pICNN) architectures are applied, based on feed-forward neural networks. Receiving two different sets of input arguments, pICNNs are convex in one of them, while the other may enter through arbitrary, not necessarily convex, relationships. In this way, the model can fulfill convexity conditions stemming from mechanical considerations without being overly restrictive on the functional relationship in the additional parameters. Two different models are introduced: one can represent arbitrary functional relationships in the additional parameters, while the other is monotonic in the additional parameters. As a first proof of concept, the model is calibrated to data generated with two differently parametrized analytical potentials, whereby three different pICNN architectures are investigated. In all cases, the proposed model shows excellent performance.
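A minimal sketch of the partial-convexity idea (not the authors' implementation): the output is kept convex in one input by combining convex, non-decreasing activations with non-negative second-layer weights, while a second, parameter-like input may enter arbitrarily, here through the hidden biases. All weights below are hypothetical.

```python
# Illustrative sketch of a tiny partially input convex network.
# Output is convex in x because each hidden unit relu(affine in x)
# is convex, and a non-negative combination of convex functions is
# convex. The dependence on the extra parameter p is unconstrained.

import math

def relu(v):
    return max(0.0, v)

# Hypothetical fixed weights for a scalar input x and scalar parameter p.
W0 = [1.5, -2.0, 0.7]   # first layer: any sign allowed
W1 = [0.9, 1.1, 0.4]    # second layer: must be >= 0 for convexity in x

def bias(p, i):
    """Arbitrary (non-convex) dependence on the parameter p."""
    return math.sin(p + i)

def picnn(x, p):
    hidden = [relu(W0[i] * x + bias(p, i)) for i in range(3)]
    return sum(W1[i] * hidden[i] for i in range(3))

# Midpoint check of convexity in x for a fixed parameter p:
p = 0.3
a, b = -1.0, 2.0
assert picnn((a + b) / 2, p) <= (picnn(a, p) + picnn(b, p)) / 2
```

In the constitutive-modelling setting described above, the convex argument would play the role of the deformation measure, and the unconstrained argument the role of the material parameters.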