DeGAUSS is a privacy-preserving application that ingests addresses and generates output that includes latitude, longitude, census tract, deprivation index and drive times to major hospitals. The application uses a complex command line interface and a container management platform to perform analysis. Our objective was to develop a user-friendly DeGAUSS-based application that simplifies place-based analysis. We also enabled automated geomarker generation by providing an API modality to DeGAUSS.
Methods:
We developed a self-service platform based on the DeGAUSS application. The application was linked to user authentication platforms. The self-service application can be implemented as an API, enabling high-volume geocoding transactions. We surveyed active users for feedback.
Results:
The self-service geomarker application was deployed at Children’s Mercy and the University of Kansas Medical Center. During the period evaluated, more than 2 million addresses were geocoded for 24 users through the user interface and more than 15 million addresses through the API. Users expressed high satisfaction with the system. All respondents used the core geocoding and the census block group feature. Most respondents (60%) used the deprivation index, and 30% used the drive time feature. Population health and social determinants of health were the most common uses (80% each), followed by health equity analyses (70%).
Conclusion:
Population health and social determinants of health research require access to precise geographic information about patients or research subjects. The self-service geomarker capability enables users who may not be comfortable with a command line interface to generate geocoded addresses in support of their research and analysis.
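The abstract does not document the API’s interface, so the following is only a rough sketch of how a batch request to a self-hosted geomarker service of this kind might look; the endpoint URL, payload fields, authentication scheme, and response keys are assumptions rather than the actual DeGAUSS-based API.

```python
# Hypothetical sketch: submitting a batch of addresses to a self-hosted
# geomarker API and reading back geocoded results. The endpoint path,
# payload fields, and response keys are illustrative assumptions; they
# are not documented parts of the DeGAUSS-based service described above.
import requests

ADDRESSES = [
    "3333 Burnet Ave, Cincinnati, OH 45229",
    "2401 Gillham Rd, Kansas City, MO 64108",
]

resp = requests.post(
    "https://geomarker.example.org/api/v1/geocode",  # assumed endpoint
    json={"addresses": ADDRESSES,
          "geomarkers": ["census_tract", "deprivation_index"]},
    headers={"Authorization": "Bearer <token>"},     # assumed auth scheme
    timeout=60,
)
resp.raise_for_status()

for row in resp.json()["results"]:                   # assumed response shape
    print(row["address"], row["lat"], row["lon"], row["census_tract"])
```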
Luminescence dating researchers benefit from many community-led software packages. These packages assist with data reduction, statistical modeling, calculation of dosimetric values, and plot production. Yet few resources are simultaneously intuitive, designed to simulate the reduction and growth of luminescence signals, and accessible to non-specialists. The Luminescence Sample Simulator (LuSS) is an application with a graphical user interface that simulates how apparent age and fractional saturation respond to three key scenarios in luminescence dating: sunlight exposure, heat exposure, and burial. Users can simulate these scenarios for an individual cobble or sand grain, or for a population of 100 sand grains. The underlying kinetic parameters can be adjusted manually or taken from a built-in library of published values. Plots of apparent age histograms, luminescence depth profiles, or fractional saturation and apparent age histories are visualized and can be exported. LuSS is written in MATLAB and can operate as a free-to-use standalone application or as an app within an existing MATLAB installation. A typical user workflow and three worked examples show how LuSS can model luminescence signal evolution in response to geologic scenarios. Limitations of LuSS include its inability to capture athermal fading or between-grain variability in geologic dose rate or sensitivity.
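For readers unfamiliar with the quantities LuSS tracks, a minimal sketch of the kind of first-order kinetics such a simulator typically implements is given below; the symbols are generic, and LuSS’s actual equations and parameterization may differ.

```latex
% Illustrative first-order kinetics (not necessarily LuSS's exact equations):
% trap filling during burial at dose rate \dot{D}, and optical eviction
% (bleaching) under photon flux \varphi with photoionisation cross-section \sigma.
\[
  \frac{dn}{dt} \;=\; \frac{\dot{D}}{D_0}\,\bigl(N - n\bigr) \;-\; \sigma\varphi\, n ,
  \qquad
  \text{fractional saturation} \;=\; \frac{n}{N}.
\]
% With no light (\varphi = 0) this yields the familiar saturating-exponential
% growth n/N = 1 - \exp(-D/D_0), from which an apparent age follows as
% (equivalent dose)/(environmental dose rate).
```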
We introduce a web application, the Case Selector (http://und.edu/faculty/brian.urlacher), that facilitates comparative case study research designs by creating an exhaustive comparison of cases from a dataset on the dependent, independent, and control variables specified by the user. This application was created to aid in systematic and transparent case selection so that researchers can better address the charge that cases are ‘cherry picked.’ An examination of case selection in a prominent study of rebel behaviour in civil war is then used to illustrate different applications of the Case Selector.
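As a toy illustration of the enumeration the Case Selector automates, the sketch below keeps case pairs that match on the control variables but differ on the dependent variable (a most-similar design); the variable names and matching rule are hypothetical, not taken from the application.

```python
# Toy sketch of exhaustive pairwise case comparison for a most-similar design:
# keep pairs that match on all control variables but differ on the dependent
# variable. Field names and the matching rule are illustrative assumptions.
from itertools import combinations

cases = [
    {"name": "A", "dv": 1, "iv": 0, "controls": (1, 0)},
    {"name": "B", "dv": 0, "iv": 1, "controls": (1, 0)},
    {"name": "C", "dv": 1, "iv": 1, "controls": (0, 1)},
]

pairs = [
    (a["name"], b["name"])
    for a, b in combinations(cases, 2)
    if a["controls"] == b["controls"] and a["dv"] != b["dv"]
]
print(pairs)  # [('A', 'B')]
```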
This paper presents maxent.ot, an R package for phonological analysis using Maximum Entropy Optimality Theory (MaxEnt OT). R has become the de facto standard for statistical analysis in linguistic research, and this package allows phonologists to create and disseminate MaxEnt OT analyses in R. A central goal of the package is to support reproducible research and to allow the crucial components of a MaxEnt analysis to be performed conveniently and with only a basic knowledge of R programming. The paper first presents a tutorial on MaxEnt constraint grammars and how to use maxent.ot to perform a simple analysis. We then turn to more advanced features of the package, including model comparison, regularization, and cross-validation.
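For reference, MaxEnt constraint grammars use the standard log-linear model sketched below; the notation is generic and does not correspond to the package’s argument names.

```latex
% Standard MaxEnt OT (log-linear) model: for input x, the probability of
% candidate y with constraint violation counts f_i(y; x) and non-negative
% constraint weights w_i is
\[
  P(y \mid x) \;=\;
  \frac{\exp\!\bigl(-\sum_i w_i\, f_i(y;\,x)\bigr)}
       {\sum_{y' \in \mathcal{Y}(x)} \exp\!\bigl(-\sum_i w_i\, f_i(y';\,x)\bigr)},
\]
% where \mathcal{Y}(x) is the candidate set for x; weights are typically fit
% by (regularized) maximum likelihood.
```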
This article by Alex Robinson explores the ever-expanding landscape of AI tools available to law firms and offers practical strategies for their successful adoption. It emphasises the importance of upskilling teams, identifying business-specific needs, and implementing structured frameworks to evaluate and integrate AI solutions effectively. Drawing on real-world examples and industry insights, the article provides a roadmap for navigating the AI market strategically, ensuring law firms invest in the right tools at the right time to deliver meaningful results.
Horizon scanning (HS) is a methodology that aims to capture signals and trends that highlight future opportunities and challenges. The National Institute for Health and Care Research (NIHR) Innovation Observatory routinely scans for medical technologies and therapeutics to inform policy and practice for healthcare in the United Kingdom (UK). To date, there is no standardized terminology for horizon scanning in healthcare. Here, we discuss the development of a data glossary and the IOAtlas web app.
Methods
We extracted data points from 4 years’ worth of NIHR Innovation Observatory HS projects and collated them by technology type and descriptive family. A source repository was established by extracting a list of all sources used in NIHR Innovation Observatory briefing notes between 2017 and 2021. The repository was validated by external HS organizations and experts, and sources were then mapped to the appropriate time horizons. The glossary and repository were converted to an SQLite database format and connected to a free web app, IOAtlas.
Results
After de-duplication and consolidation, a total of 148 data points were included in the glossary. The source repository consists of 149 sources, 99 percent of which can be searched for two or more technology types. The final SQLite database contained 35 tables with 36 relationships.
Conclusions
We present a data glossary to provide global standardization of the terminology used in HS projects. The glossary can be accessed through the IOAtlas web app. Furthermore, we provide users with an interface to generate downloadable data extraction templates within IOAtlas.
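The abstract does not publish the IOAtlas schema; purely as an illustration of how sources mapped to time horizons and technology types could be related in SQLite, a toy two-table sketch follows (table and column names are assumptions, not part of the actual 35-table database).

```python
# Toy illustration only: a minimal SQLite layout relating horizon-scanning
# sources to technology types. Table and column names are assumptions and
# do not reflect the actual IOAtlas database.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE source (
    source_id    INTEGER PRIMARY KEY,
    name         TEXT NOT NULL,
    time_horizon TEXT NOT NULL          -- e.g. 'short', 'medium', 'long'
);
CREATE TABLE source_technology_type (
    source_id       INTEGER REFERENCES source(source_id),
    technology_type TEXT NOT NULL       -- e.g. 'therapeutic', 'device'
);
""")
con.execute("INSERT INTO source VALUES (1, 'Trial registry X', 'medium')")
con.execute("INSERT INTO source_technology_type VALUES (1, 'therapeutic')")
print(con.execute("SELECT name, technology_type FROM source "
                  "JOIN source_technology_type USING (source_id)").fetchall())
```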
This study, authored by Dr Fahimeh Abedi, Prof. Tim Miller and Prof. Atif Ahmad, explores the skills gaps lawyers face when advising on emerging technologies in an increasingly complex digital landscape. Using an exploratory sequential mixed methods approach, the authors conducted qualitative interviews with 26 in-house lawyers and a broader quantitative survey revealed key challenges, including complex legislation, unclear regulatory frameworks and ethical concerns in data use. Findings highlight a significant gap in technological literacy within the legal profession, emphasising the need for improved knowledge, skills and ethical awareness. This research provides a roadmap for equipping legal professionals for responsible leadership in a technology-driven future, offering significant insights for policymakers and regulators.
Artificial intelligence (AI) is becoming increasingly important in our daily lives, and so is academic research on its impact on various legal domains. One of the fields that has attracted much attention is extra-contractual or tort liability, as AI will inevitably cause damage; accidents involving autonomous vehicles are a case in point. In this chapter, we will discuss some major and general challenges that arise in this context. We will thereby illustrate the remaining importance of national law in tackling these challenges and focus on procedural elements, including disclosure requirements and rebuttable presumptions. We will also illustrate how existing tort law concepts are being challenged by AI characteristics and provide an overview of regulatory answers.
Cap is a software package (citeware) for economic experiments that enables experimenters to analyze subjects’ emotional states using z-Tree and FaceReader™. Cap is able to create videos of subjects on client computers based on stimuli shown on screen and to restrict recording material to relevant time frames. Another feature of Cap is the creation of time stamps in csv format at prespecified screens (or at prespecified points in time) during the experiment, measured on the client computer. The software makes it possible to import these markers into FaceReader™ easily. Cap is the first program that significantly simplifies the process of connecting z-Tree and FaceReader™, with the additional benefit of extremely high precision. This paper describes the usage and underlying principles of Cap, as well as its advantages and limitations. Furthermore, we give a brief outlook on how Cap can be beneficial in other contexts.
This paper discusses aspects of recruiting subjects for economic laboratory experiments, and shows how the Online Recruitment System for Economic Experiments can help. The software package provides experimenters with a free, convenient, and very powerful tool to organize their experiments and sessions.
stratEst is a software package for strategy frequency estimation in the freely available statistical computing environment R (R Development Core Team in R Foundation for Statistical Computing, Vienna, 2022). The package aims at minimizing the start-up costs of running the modern strategy frequency estimation techniques used in experimental economics. Strategy frequency estimation (Stahl and Wilson in J Econ Behav Organ 25: 309–327, 1994; Stahl and Wilson in Games Econ Behav, 10: 218–254, 1995) models the choices of participants in an economic experiment as a finite-mixture of individual decision strategies. The parameters of the model describe the associated behavior of each strategy and its frequency in the data. stratEst provides a convenient and flexible framework for strategy frequency estimation, allowing the user to customize, store and reuse sets of candidate strategies. The package includes useful functions for data processing and simulation, strategy programming, model estimation, parameter testing, model checking, and model selection.
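In generic notation (not the package’s), strategy frequency estimation maximizes a finite-mixture likelihood of roughly the following form, in which each participant’s choice sequence is attributed probabilistically to one of K candidate strategies:

```latex
% Generic finite-mixture likelihood for strategy frequency estimation:
% p_k are the strategy shares (summing to one) and Pr(c_{it} | s_k, \theta_k)
% is the probability that strategy k, with behavioral parameters \theta_k
% (e.g. choice and tremble probabilities), produces participant i's choice
% c_{it} in period t.
\[
  L(p, \theta)
  \;=\;
  \prod_{i=1}^{N} \; \sum_{k=1}^{K} p_k \prod_{t=1}^{T_i}
  \Pr\!\bigl(c_{it} \mid s_k, \theta_k\bigr),
  \qquad \sum_{k=1}^{K} p_k = 1 .
\]
```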
We introduce the network model as a formal psychometric model, conceptualizing the covariance between psychometric indicators as resulting from pairwise interactions between observable variables in a network structure. This contrasts with standard psychometric models, in which the covariance between test items arises from the influence of one or more common latent variables. Here, we present two generalizations of the network model that encompass latent variable structures, establishing network modeling as part of the more general framework of structural equation modeling (SEM). In the first generalization, we model the covariance structure of latent variables as a network. We term this framework latent network modeling (LNM) and show that, with LNM, a unique structure of conditional independence relationships between latent variables can be obtained in an explorative manner. In the second generalization, the residual variance–covariance structure of indicators is modeled as a network. We term this generalization residual network modeling (RNM) and show that, within this framework, identifiable models can be obtained in which local independence is structurally violated. These generalizations allow for a general modeling framework that can be used to fit, and compare, SEM models, network models, and the RNM and LNM generalizations. This methodology has been implemented in the free-to-use software package lvnet, which contains confirmatory model testing as well as two exploratory search algorithms: stepwise search algorithms for low-dimensional datasets and penalized maximum likelihood estimation for larger datasets. We show in simulation studies that these search algorithms perform adequately in identifying the structure of the relevant residual or latent networks. We further demonstrate the utility of these generalizations in an empirical example on a personality inventory dataset.
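For orientation, the pairwise-interaction network for continuous indicators is typically a Gaussian graphical model, in which edges are partial correlations derived from the precision matrix; the notation below is generic rather than lvnet’s own parameterization.

```latex
% Gaussian graphical model: with precision matrix K = \Sigma^{-1}, the edge
% between variables i and j is their partial correlation given all others;
% a zero entry K_{ij} = 0 means conditional independence.
\[
  \omega_{ij} \;=\; -\,\frac{K_{ij}}{\sqrt{K_{ii}\,K_{jj}}},
  \qquad K = \Sigma^{-1}.
\]
```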
MTVE is an open-source software tool (citeware) that can be applied in laboratory and online experiments to implement video communication. The tool enables researchers to gather video data from these experiments in a way that allows the videos to be used later for automatic analysis through machine learning techniques. The browser-based tool comes with an easy-to-use interface and can be readily integrated into z-Tree, oTree, and other experimental or survey tools. It provides experimenters with control over several communication parameters (e.g., number of participants, resolution), produces high-quality video data, and circumvents the Cocktail Party Problem (i.e., the problem of separating speakers solely based on audio input) by producing separate files. Using recommended voice-to-text AI services, experimenters can transcribe the individual files, and MTVE can merge these individual transcriptions into one conversation.
OpenMx is free, full-featured, open-source structural equation modeling (SEM) software. OpenMx runs within the R statistical programming environment on Windows, Mac OS X, and Linux computers. The rationale for developing OpenMx is discussed, along with the philosophy behind the user interface. The OpenMx data structures are introduced; these novel structures define the user interface framework and provide new opportunities for model specification. Two short example scripts for the specification and fitting of a confirmatory factor model are presented next. We end with an abbreviated list of modeling applications available in OpenMx 1.0 and a discussion of directions for future development.
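For orientation, a confirmatory factor model of the kind fitted in those example scripts has the standard SEM covariance structure shown below (generic notation, not OpenMx syntax).

```latex
% Standard confirmatory factor model: observed variables x load on latent
% factors \eta via \Lambda, with factor covariance \Psi and residual
% covariance \Theta, giving the model-implied covariance matrix
\[
  x \;=\; \Lambda \eta + \varepsilon,
  \qquad
  \Sigma(\vartheta) \;=\; \Lambda \Psi \Lambda^{\top} + \Theta .
\]
```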
Text is a major medium of contemporary interpersonal communication but is difficult for social scientists to study unless they have significant resources or the skills to build their own research platform. In this paper, we introduce a cloud-based software solution to this problem: ReChat, an online research platform for conducting experimental and observational studies of live text conversations. We demonstrate ReChat by applying it to a specific phenomenon of interest to political scientists: conversations among co-partisans. We present results from two studies, focusing on (1) self-selection factors that make chat participants systematically unrepresentative and (2) a pre-registered analysis of loquaciousness that finds a significant association between speakers’ ideological extremity and the amount they write in the chat. We conclude by discussing practical implications and advice for future practitioners of chat studies.
One pedagogical finding that has gained recent attention is the utility of active, effortful retrieval practice in effective learning. Essentially, humans learn best when they are asked to actively generate/recall knowledge for themselves, rather than receiving knowledge passively. In this paper, we (a) provide a framework for both practice and assessment within which students can organically develop active study habits, (b) share resources we have built to help implement such a framework in the linguistics classroom, and (c) provide some examples and evaluation of their success in the context of an introductory phonetics/phonology course.
Chapter 7 shows how in the 1980s patent law came to view computer-related subject matter through the lens of ‘abstractness’ and the role that materiality played in determining the fate of that subject matter. The chapter also looks at how as a result of changes in technology, patent law gradually shifted away from the materiality of the subject matter to look at its specificity and in so doing how the subject matter was dematerialised.
Chapter 5 begins by looking at how software was created and consumed in the 1960s and how, as this changed, questions arose about the role intellectual property might play in the emerging software industry. It then turns to the contrasting ways that patentable subject matter was seen within the information technology industry and how these views were received within the law.
Chapter 6 looks at the problems patent law experienced in the 1960s and 1970s in attempting to reconcile the conflicting views of the industry about what the subject matter was and how it should be interpreted.
WinClbclas, a Microsoft® Visual Basic program, has been developed to calculate the chemical formulae of columbite-supergroup minerals from wet-chemical and electron-microprobe analyses, using the current nomenclature scheme adopted for these minerals by the Commission on New Minerals, Nomenclature and Classification (CNMNC) of the International Mineralogical Association (IMA). The program evaluates 36 IMA-approved species, three species that are questionable in terms of their unit-cell parameters, four insufficiently studied questionable species and one ungrouped species, all according to the dominant valency and constituent status in five mineral groups: ixiolite (MO2), wolframite (M1M2O4), samarskite (ABM2O8), columbite (M1M2O6) and wodginite (M1M2M32O8). Mineral compositions of the columbite supergroup are calculated on the basis of 24 oxygen atoms per formula unit. However, the formulae of the five groups, from ixiolite to wodginite, can also be estimated by the program on the basis of the cation and anion values in their typical mineral formulae (e.g. 4 cations and 8 oxygens for the wodginite group), with normalisation procedures. The Fe3+ and Fe2+ contents are estimated from microprobe-derived total FeO (wt.%) by stoichiometric constraints. WinClbclas allows users to: (1) enter up to 47 input variables for mineral compositions; (2) type and load multiple columbite-supergroup mineral compositions in the data entry section; (3) edit and load the Microsoft® Excel files used in calculating, classifying, and naming the columbite-supergroup minerals, together with the total monovalent to hexavalent ion contents; and (4) store all the calculated parameters in a Microsoft® Excel output file for further data evaluation. The program is distributed as a self-extracting setup file that includes the necessary support files used by the program, a help file and representative sample data files.
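Recalculating a microprobe analysis to a fixed number of oxygen atoms per formula unit follows the standard mineral-formula procedure, sketched below for a generic oxide list; the oxides, molar masses, and function shown are illustrative assumptions and not the program’s actual input variables or code.

```python
# Generic sketch of recalculating an oxide analysis (wt.%) to atoms per
# formula unit on a fixed oxygen basis (24 O used here, as for the
# columbite supergroup). Only a few oxides are shown for illustration.
OXIDES = {  # molar mass (g/mol), cations per oxide, oxygens per oxide
    "Nb2O5": (265.81, 2, 5),
    "Ta2O5": (441.89, 2, 5),
    "FeO":   (71.84, 1, 1),
    "MnO":   (70.94, 1, 1),
}

def apfu(wt_percent: dict, oxygen_basis: float = 24.0) -> dict:
    """Atoms per formula unit normalized to a fixed oxygen basis."""
    moles_o, moles_cat = 0.0, {}
    for oxide, wt in wt_percent.items():
        mm, n_cat, n_o = OXIDES[oxide]
        mol = wt / mm              # moles of oxide per 100 g of sample
        moles_o += mol * n_o       # oxygen contributed by this oxide
        moles_cat[oxide] = mol * n_cat
    scale = oxygen_basis / moles_o # normalize to the chosen oxygen basis
    return {ox: round(m * scale, 3) for ox, m in moles_cat.items()}

print(apfu({"Nb2O5": 60.0, "Ta2O5": 15.0, "FeO": 12.0, "MnO": 8.0}))
```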