We consider an experiment that yields, as data, a sample of independent and identically distributed (real-valued) random variables with a common distribution on the real line. The estimation of the underlying mean and median is discussed at length, and bootstrap confidence intervals are constructed. Tests comparing the underlying distribution to a given distribution (e.g., the standard normal distribution) or a family of distributions (e.g., the normal family of distributions) are introduced. Censoring, which is very common in some clinical trials, is briefly discussed.
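As a concrete illustration of the bootstrap intervals discussed above, the following minimal sketch (not the chapter's own code; the simulated sample, confidence level, and replication count are assumptions) builds percentile bootstrap confidence intervals for the mean and the median of an i.i.d. sample:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=200)  # assumed i.i.d. sample

def percentile_ci(data, stat, n_boot=10_000, alpha=0.05, rng=rng):
    """Percentile bootstrap CI: resample with replacement, take quantiles."""
    boot = np.array([stat(rng.choice(data, size=data.size, replace=True))
                     for _ in range(n_boot)])
    return np.quantile(boot, [alpha / 2, 1 - alpha / 2])

print("mean   95% CI:", percentile_ci(x, np.mean))
print("median 95% CI:", percentile_ci(x, np.median))
```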
The creamatocrit is a simple technique for estimating the lipid content of milk, widely adopted for clinical and research purposes. We evaluated the effect of long-term cryogenic storage on the creamatocrit for human milk.
Methods
Frozen and thawed milk specimens (n = 18) were subjected to the creamatocrit technique. The specimens were reanalyzed after long-term cryogenic storage (10 years at −70°C). The correlation between pre- and post-storage values was tested, and their differences were analyzed using the Bland–Altman plot.
Results
The pre- and post-storage values were highly correlated (r = 0.960, p < .0001). The Bland–Altman plot revealed a positive association between their differences and means (Pitman’s test r = 0.743, p < .001), suggesting the presence of nonconstant bias across the creamatocrit range. Long-term storage of human milk may introduce subtle bias to the creamatocrit in replicating pre-storage values. Further research should evaluate whether this bias is statistically correctable.
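To make the analysis concrete, here is a minimal sketch of a Bland–Altman comparison with a Pitman-style test of proportional bias, implemented as the correlation between paired differences and means; the data values are invented and are not the study's measurements:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
pre = rng.uniform(2.0, 12.0, size=18)         # hypothetical pre-storage creamatocrit (%)
post = pre * 1.05 + rng.normal(0.0, 0.4, 18)  # hypothetical post-storage values

diff = post - pre          # Bland-Altman differences
mean = (post + pre) / 2    # Bland-Altman means

# Bias and 95% limits of agreement: mean difference +/- 1.96 SD.
bias, sd = diff.mean(), diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)

# Pitman-style test of proportional bias: correlate differences with means.
r, p = pearsonr(diff, mean)
print(f"bias={bias:.2f}, LoA=({loa[0]:.2f}, {loa[1]:.2f}), r={r:.3f}, p={p:.4f}")
```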
During military operations, soldiers are required to successfully complete numerous physical and cognitive tasks concurrently. Understanding the typical variance of the research tools used to provide insight into the interrelationship between physical and cognitive performance is therefore highly important. This study assessed the inter-day variability of two military-specific cognitive assessments, a Military-Specific Auditory N-Back Task (MSANT) and a Shoot-/Don't-Shoot Task (SDST), in 28 participants. Limits of agreement with ±95% confidence intervals, the standard error of the mean, and the smallest detectable change were calculated to quantify the typical variance in task performance. All parameters within the MSANT and SDST demonstrated no mean difference between trial visits in either the seated or walking condition, with equivalency demonstrated for the majority of comparisons. Collectively, these data provide an indication of the typical variance in MSANT and SDST performance, while demonstrating that both assessments can be used in seated and walking conditions.
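For readers unfamiliar with these reliability statistics, the following hedged sketch computes limits of agreement, standard errors, and the smallest detectable change using the conventional formulas (LoA = bias ± 1.96 SD of the differences; SDC = 1.96 · √2 · SEM); the visit scores are simulated, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(2)
visit1 = rng.normal(500.0, 50.0, size=28)         # hypothetical scores, visit 1
visit2 = visit1 + rng.normal(0.0, 20.0, size=28)  # hypothetical scores, visit 2

diff = visit2 - visit1
bias, sd_diff = diff.mean(), diff.std(ddof=1)

# 95% limits of agreement (Bland-Altman).
loa = (bias - 1.96 * sd_diff, bias + 1.96 * sd_diff)

# Standard error of the mean difference, and of measurement.
sem_mean = sd_diff / np.sqrt(diff.size)  # SE of the mean difference
sem_meas = sd_diff / np.sqrt(2)          # SE of measurement

# Smallest detectable change at the 95% level.
sdc = 1.96 * np.sqrt(2) * sem_meas

print(f"bias={bias:.1f}, LoA=({loa[0]:.1f}, {loa[1]:.1f}), "
      f"SEM={sem_mean:.1f}, SDC={sdc:.1f}")
```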
The response to the COVID-19 pandemic has, from the outset, been characterized by a strong focus on real-time data intelligence and the use of data-driven technologies. Against this backdrop, this article investigates the impacts of the pandemic on Scottish local government's data practices and, in turn, whether the crisis acted as a driver for digital transformation. Mobilizing the literatures on digital government transformation and on the impacts of crises on public administrations, the article provides insights into the dynamics of digital transformation during a period of acute demands on the public sector. The research evidences an intensification of public sector data use and sharing in Scottish local authorities, with a focus on health-related data and the integration of existing datasets to gather local intelligence. The research reveals significant changes related to the technical and social systems of local government organizations. These include the repurposing and adoption of information systems, the acceleration of inter- and intra-organizational data sharing processes, and changes in ways of working and in attitudes toward data sharing and collaboration. Drawing on these findings, the article highlights the importance of identifying and articulating specific data needs in relation to concrete policy questions in order to render digital transformation relevant and effective. The article also points to the need to address the persistent systemic challenges underlying public sector data engagement through, on the one hand, sustained investment in data capabilities and infrastructures and, on the other, support for cross-organizational collaborative spaces and networks.
Text-to-scene conversion systems map natural language text to the formal representations required for visual scenes. The difficulty of this mapping is one of the most critical challenges in developing such systems. The current study maps Persian natural language text, as the first system to do so, to a conceptual scene model. This conceptual scene model is an intermediate semantic representation between natural language and the visual scene and contains descriptions of the scene's visual elements. In this ongoing study, it will be used to produce meaningful animation from an input story. The mapping task was modeled as a sequential labeling problem, and a conditional random field (CRF) model was trained and tested for sequential labeling of scene model elements. To the best of the authors' knowledge, no dataset for this task exists; thus, the required dataset was collected. The lack of off-the-shelf natural language processing modules and a significant error rate in the available corpora were important challenges for dataset collection. Some features of the dataset were manually annotated. The results were evaluated using standard text classification metrics, and a satisfactory average accuracy of 85.7% was obtained.
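As an illustration of the sequential-labeling formulation, the following sketch trains a CRF with the off-the-shelf sklearn-crfsuite package; the English toy tokens, feature set, and scene-element labels (CHARACTER, ACTION, LOCATION) are invented stand-ins, not the paper's Persian data or label inventory:

```python
# pip install sklearn-crfsuite
import sklearn_crfsuite

def features(sent, i):
    """Toy per-token features; a real feature set would be much richer."""
    return {
        "word": sent[i],
        "is_first": i == 0,
        "prev": sent[i - 1] if i > 0 else "<BOS>",
        "next": sent[i + 1] if i < len(sent) - 1 else "<EOS>",
    }

# Hypothetical training pair: sentence tokens and scene-element labels.
sents = [["the", "boy", "ran", "into", "the", "garden"]]
labels = [["O", "CHARACTER", "ACTION", "O", "O", "LOCATION"]]

X = [[features(s, i) for i in range(len(s))] for s in sents]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit(X, labels)
print(crf.predict(X))
```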
Networks are convenient mathematical models to represent the structure of complex systems, from cells to societies. In the last decade, multilayer network science – the branch of the field dealing with units interacting in multiple distinct ways simultaneously – has been demonstrated to be an effective modeling and analytical framework for a wide spectrum of empirical systems, from biopolymer networks (such as interactomes and metabolomes) to neuronal networks (such as connectomes), and from social networks to urban and transportation networks. In this Element, a decade after one of the most seminal papers on the topic, the authors review the most salient features of multilayer network science, covering both theoretical aspects and direct applications to real-world coupled/interdependent systems, from the point of view of multilayer structure, dynamics, and function. The authors discuss potential frontiers for the topic and the corresponding challenges in the field for the near future.
The theory of stable models is the mathematical basis of answer set programming. Several results in that theory refer to the concept of the positive dependency graph of a logic program. We describe a modification of that concept and show that the new understanding of positive dependency makes it possible to strengthen some of these results.
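For orientation, this sketch builds the standard positive dependency graph of a normal logic program (an edge from each rule's head to every atom occurring positively in its body); the paper's modified definition is not reproduced here:

```python
from collections import defaultdict

# Rules as (head, positive_body, negative_body); a toy normal logic program.
rules = [
    ("p", ["q"], []),     # p :- q.
    ("q", ["r"], ["s"]),  # q :- r, not s.
    ("r", [], []),        # r.
]

def positive_dependency_graph(rules):
    """Edge head -> b for every atom b occurring positively in a body."""
    graph = defaultdict(set)
    for head, pos_body, _neg_body in rules:
        graph[head].update(pos_body)
    return graph

print(dict(positive_dependency_graph(rules)))
# {'p': {'q'}, 'q': {'r'}, 'r': set()}
```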
In recent years, co-creation and collaboration platforms for creating and delivering new products and services have taken a step forward; this has led to a new form of active involvement of users, who have gone from co-designers to independent, even if non-expert, designers. Co-design is dynamic and provides the tools to generate democratic design processes guided by the users themselves. The democratization of design tools is the premise for a new paradigm that Manzini (2015) defines as 'Diffuse Design'. This contribution explores the approaches of open design and open production, with particular attention to the field of visual communication and the production of motion design artifacts. After an introduction to the co-design framework, the main open-production visual communication platforms are presented to offer an overview of the topic. Next, the potential of online platforms to enable non-designers to produce animated artifacts is explored by examining student projects in a university motion design course. The most significant outputs of the student experience are then described and critically analyzed. Finally, the conclusions investigate different perspectives for reading the democratization of tools for creating visual artifacts and lay the foundations for future lines of research.
This article illustrates the use of unsupervised probabilistic learning techniques for the analysis of planetary reentry trajectories. A three-degree-of-freedom model was employed to generate the optimal trajectories that comprise the training datasets. The algorithm first extracts the intrinsic structure in the data via a diffusion map approach. We find that the data reside on manifolds of much lower dimensionality than the high-dimensional state space that describes each trajectory. Using the diffusion coordinates on the graph of training samples, the probabilistic framework then augments the original data with samples that are statistically consistent with the original set. The augmented samples are used to construct conditional statistics that are ultimately assembled in a path planning algorithm. In this framework, the controls are determined stage by stage during the flight to adapt to changing mission objectives in real time.
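As a rough sketch of the diffusion map step (a Gaussian kernel on pairwise distances, row normalization to a Markov matrix, then the leading non-trivial eigenvectors as diffusion coordinates), using stand-in data and a median bandwidth heuristic rather than actual trajectories:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 10))  # stand-in for flattened trajectory samples

# Gaussian kernel on pairwise squared distances.
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
eps = np.median(d2)             # a common bandwidth heuristic
K = np.exp(-d2 / eps)

# Row-normalise to a Markov transition matrix.
P = K / K.sum(axis=1, keepdims=True)

# Diffusion coordinates: leading non-trivial eigenvectors of P.
vals, vecs = np.linalg.eig(P)
order = np.argsort(-vals.real)
diffusion_coords = vecs.real[:, order[1:4]]  # skip the trivial eigenvector
print(diffusion_coords.shape)                # (200, 3)
```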
Idea selection is crucial in design as it impacts the outcome of a project. A collaborative design activity can be considered a social process in which interactions and individual states (such as importance in the team and self-efficacy level) affect decision-making. It is often seen in design teams that some individuals, referred to as 'influencers' in this article, have more capacity to influence than others; hence, they govern the team process for better or worse. Given the limited past work studying the effect of these influencers on design outcomes, this work aims to increase understanding by presenting insights from an agent-based simulation. The simulation results show how different influencer team compositions affect design outcomes in terms of quality and exploration of the solutions. Idea selection starts with agents who are ready with a solution in their 'mind'. The work presented in this article describes a framework for simulating decision-making during idea selection by considering the influencer and majority effects. The empirical study presented in the article verifies the model logic, that is, the presence of the influencer and the majority during idea selection, and supports the assumption that individuals' agreement with solutions proposed by other team members depends on the degree of influence and past agreement. The simulation results show that teams with well-defined influencers produced solutions with higher variety and more uniform contributions from team members, but also produced solutions of lower quality.
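A toy rendering of the stated model logic, in which the probability of agreeing with a proposal grows with the proposer's influence and the voter's history of past agreement, might look as follows; all parameters and weights are invented for illustration and do not reproduce the paper's framework:

```python
import random

random.seed(4)

class Agent:
    def __init__(self, influence):
        self.influence = influence  # capacity to sway others
        self.past_agreement = 0.5   # running history of agreeing

agents = [Agent(0.9)] + [Agent(0.3) for _ in range(4)]  # one influencer

def accepts(voter, proposer):
    """Agreement probability grows with influence and past agreement."""
    p = 0.5 * proposer.influence + 0.5 * voter.past_agreement
    agreed = random.random() < p
    voter.past_agreement = 0.9 * voter.past_agreement + 0.1 * agreed
    return agreed

proposer = agents[0]
votes = [accepts(a, proposer) for a in agents if a is not proposer]
print("proposal adopted:", sum(votes) > len(votes) / 2)  # majority rule
```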
For diagnostic and therapeutic applications in spacious regions of the gastrointestinal (GI) tract, the clinically applied single-rigid-body capsule struggles to realize fixed-point posture adjustment when manipulated by an external permanent magnet system under static balance control, because posture alignment and locomotion interfere with each other. To realize this function more easily, the dual hemisphere capsule robot (DHCR) is proposed, based on the tracking effect: the axis of the DHCR keeps tracking the normal orientation of the spatial universal rotating magnetic vector (SURMV). Since the tracking effect employs dynamic balance control, the dynamic stability of the DHCR system affects posture alignment performance. This paper focuses on posture alignment dynamic modeling and on how the magnetic flux density and angular velocity of the SURMV, along with the damping coefficient of the GI tract surface, influence stability, obtaining stability domains for these parameters. Furthermore, to reduce errors due to uncertainties in the complex GI tract environment, a sliding mode controller based on the nominal model is proposed to achieve more accurate dynamic tracking, and the Lyapunov theorem is employed to assess the controller's stability. Finally, the tracking effect is verified through simulations and experiments, indicating that fixed-point posture adjustment can be realized with high accuracy and efficiency.
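For readers unfamiliar with sliding mode control, the following generic sketch tracks a reference on a double integrator using a sliding surface and a boundary-layer switching law; it is not the paper's DHCR model, and the gains, disturbance, and reference signal are assumptions:

```python
import numpy as np

# Generic sliding-mode tracking on a double integrator x_ddot = u + d(t).
lam, k, phi, dt = 2.0, 5.0, 0.05, 1e-3  # surface slope, gain, boundary layer
x, v = 0.0, 0.0
for step in range(5000):
    t = step * dt
    xd, vd, ad = np.sin(t), np.cos(t), -np.sin(t)  # reference trajectory
    e, e_dot = x - xd, v - vd
    s = e_dot + lam * e                   # sliding surface
    sat = np.clip(s / phi, -1.0, 1.0)     # boundary layer to limit chattering
    u = ad - lam * e_dot - k * sat        # equivalent + switching control
    d = 0.3 * np.sin(5 * t)               # bounded disturbance stand-in
    v += (u + d) * dt
    x += v * dt
print(f"final tracking error: {x - np.sin(5.0):.4f}")
```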
The quality of annotations in manually annotated hate speech datasets is crucial for automatic hate speech detection. This contribution focuses on the positive effects of manually annotating online comments for hate speech within the context in which the comments occur. We quantify the impact of context availability with a carefully designed experiment: two annotation rounds are performed, one in-context and one out-of-context, on the same English YouTube data (more than 10,000 comments), using the same annotation schema and platform and the same highly trained annotators, with annotation quality quantified through inter-annotator agreement. Our results show that the presence of context has a significant positive impact on the quality of the manual annotations. This positive impact is more noticeable among replies than among comments, although the former are harder to annotate consistently overall. Previous research reporting that out-of-context annotations favour assigning non-hate-speech labels is also corroborated; we show further that this tendency is especially present among comments inciting violence, a highly relevant category for hate speech research and society overall. We believe that this work will improve future annotation campaigns even beyond hate speech and motivate further research on the highly relevant questions of data annotation methodology in natural language processing, especially in light of the current expansion of its scope of application.
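The abstract does not specify which agreement statistic was used; as one common choice, Cohen's kappa can be computed as follows (the annotator labels are invented for illustration):

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical labels from two annotators on the same six comments.
ann_a = ["hate", "ok", "ok", "hate", "ok", "hate"]
ann_b = ["hate", "ok", "hate", "hate", "ok", "ok"]

# Cohen's kappa corrects raw agreement for chance agreement.
print(f"kappa = {cohen_kappa_score(ann_a, ann_b):.3f}")
```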
We implement extraction of Coq programs to functional languages based on MetaCoq's certified erasure. We extend the MetaCoq erasure output language with typing information and use it as an intermediate representation, which we call ${\lambda^T_\square}$. We complement the extraction functionality with a full pipeline that includes several standard transformations (e.g., eta-expansion and inlining) implemented in a proof-generating manner, along with a verified optimisation pass removing unused arguments. We prove the pass correct with respect to a conventional call-by-value operational semantics of functional languages. From the optimised ${\lambda^T_\square}$ representation, we obtain code in two functional smart contract languages, Liquidity and CameLIGO, the functional language Elm, and a subset of Rust, a multi-paradigm language for systems programming. Rust is currently gaining popularity as a language for smart contracts, and we demonstrate how our extraction can be used to extract smart contract code for the Concordium network. The development is done in the context of the ConCert framework, which enables smart contract verification. We contribute two verified real-world smart contracts (boardroom voting and escrow), which we use, among other examples, to exemplify the applicability of the pipeline. In addition, we develop a verified web application and extract it to fully functional Elm code. In total, this gives us a way to write dependently typed programs in Coq, verify them, and then extract them to several target languages while retaining a small trusted computing base of only MetaCoq and the pretty-printers into these languages.
Effective search for target objects in indoor scenes is essential for household robots performing daily tasks. Once a precise map is established, the robot can navigate to a fixed static target. However, it is difficult for mobile robots to find movable objects such as cups. To address this problem, we establish an object search framework that combines a navigation map, a semantic map, and a scene graph. The robot updates the scene graph to achieve long-term target search. Considering the different start positions of the robot, we weigh the distance the robot travels against the probability of finding the object to achieve global path planning. The robot continuously updates the scene graph in a dynamic environment to memorize the positional relations of objects in the scene. This method has been realized in both simulation and real-world environments, and the experimental results show its feasibility and effectiveness.
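One simple way to weigh travel distance against detection probability, purely as an illustration and not the paper's actual planner, is a scalar trade-off score over candidate locations:

```python
# Toy scoring rule weighing path cost against the chance of finding the
# object at each candidate location; all weights and values are invented.
candidates = {
    "kitchen_table": {"distance": 4.0, "p_find": 0.60},
    "living_room":   {"distance": 9.0, "p_find": 0.80},
    "bedroom":       {"distance": 2.0, "p_find": 0.15},
}

w = 0.1  # trade-off weight between travel cost and detection probability

def score(c):
    return c["p_find"] - w * c["distance"]

best = max(candidates, key=lambda k: score(candidates[k]))
print("search first:", best)  # -> kitchen_table under these values
```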
Computational thinking is increasingly gaining importance in modern biology, due to the unprecedented scale at which data is nowadays produced. Bridging the cultural gap between the biological and computational sciences, this book serves as an accessible introduction to computational concepts for students in the life sciences. It focuses on teaching algorithmic and logical thinking, rather than just the use of existing bioinformatics tools or programming. Topics are presented from a biological point of view, to demonstrate how computational approaches can be used to solve problems in biology such as biological image processing, regulatory networks, and sequence analysis. The book contains a range of pedagogical features to aid understanding, including real-world examples, in-text exercises, end-of-chapter problems, colour-coded Python code, and 'code explained' boxes. User-friendly throughout, Computational Thinking for Life Scientists promotes the thinking skills and self-efficacy required for any modern biologist to adopt computational approaches in their research with confidence.
Intended for researchers and practitioners in interaction design, this book shows how Bayesian models can be brought to bear on problems of interface design and user modelling. It introduces and motivates Bayesian modelling and illustrates how powerful these ideas can be in thinking about human-computer interaction, especially in representing and manipulating uncertainty. Bayesian methods are increasingly practical as computational tools to implement them become more widely available, and offer a principled foundation to reason about interaction design. The book opens with a self-contained tutorial on Bayesian concepts and their practical implementation, tailored for the background and needs of interaction designers. The contributed chapters cover the use of Bayesian probabilistic modelling in a diverse set of applications, including improving pointing-based interfaces; efficient text entry using modern language models; advanced interface design using cutting-edge techniques in Bayesian optimisation; and Bayesian approaches to modelling the cognitive processes of users.