We present the design and deployment of a capsule endoscope actuated via external electromagnets for locomotion in large volumes, alongside its digital twin implementation based on interval type-2 fuzzy logic systems (IT2-FLSs). To perform locomotion, we developed an external mechanism comprising five external electromagnets on a two-dimensional translational platform, to be placed underneath the patient's bed, and integrated multiple neodymium magnets into the capsule. The interaction between the central bottom external electromagnet and the internal magnet forms a fixed body frame at the capsule center, about which the capsule can rotate; rotation itself is driven by the interaction between the external electromagnets and the two internal magnets. Elevation of the capsule is accomplished through the interaction between the upper external electromagnet and the internal magnets. Through simulations, we model the capsule rotation as a function of torque and drive voltages. We validated the proposed locomotion approach experimentally and observed that the results are highly nonlinear and uncertain. Thus, we define a regression problem in which IT2-FLSs, capable of representing nonlinearity and uncertainty, are learned. To verify the proposed locomotion approach and test the IT2-FLSs, we extend our experimental effort to a stomach phantom and finally to an ex vivo bovine stomach. The experimental results validate the locomotion capability and show that the IT2-FLSs can capture uncertainties while achieving satisfactory prediction performance. To showcase the benefit in a clinical scenario, we present a digital twin implementation of the proposed approach in a virtual environment that links the physical and virtual worlds in real time.
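The regression step described above can be illustrated with a minimal interval type-2 fuzzy inference sketch. This is not the authors' trained system: the rule centers, the uncertain membership widths, and the use of the simplified Nie-Tan type reduction (an alternative to the more common Karnik-Mendel procedure) are all illustrative assumptions.

```python
import numpy as np

def gauss(x, c, s):
    """Gaussian membership value of x for a rule centered at c with width s."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def it2_predict(x, centers, sigma_lo, sigma_hi, y_rule):
    """Predict with a single-input interval type-2 TSK-style fuzzy system.

    Each rule's Gaussian membership function has an uncertain width
    (sigma_lo < sigma_hi), yielding a lower and an upper firing strength.
    The Nie-Tan direct defuzzifier averages the two strengths, avoiding
    iterative type reduction.
    """
    f_lo = gauss(x, centers, sigma_lo)   # lower firing strengths
    f_hi = gauss(x, centers, sigma_hi)   # upper firing strengths
    f = 0.5 * (f_lo + f_hi)              # Nie-Tan type reduction
    return float(np.dot(f, y_rule) / np.sum(f))
```

In a learned system, the centers, widths, and rule consequents `y_rule` would be fitted to the experimental torque/voltage data; the interval between the lower and upper memberships is what lets the model absorb the observed uncertainty.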
This explanatory mixed-method study seeks to understand the relationships between second language (L2) motivation (including the ideal L2 self and the ought-to L2 self) and students’ informal digital learning of English (IDLE), and whether such relationships are mediated by the most prominent positive emotion – enjoyment. A total of 391 Chinese university students participated in the survey, and 15 of them were interviewed later. Quantitative analysis revealed a strong positive relationship between the ideal L2 self and participants’ IDLE, which was partially mediated by foreign language enjoyment (FLE), while the hypotheses that the ought-to L2 self significantly predicted FLE and IDLE were rejected. The qualitative data added detail on how a vivid and elaborate L2 vision contributed to enhanced English learning enjoyment and served as the most influential motivator for IDLE practices. Meanwhile, the external and instrumental motives could not predict Chinese university students’ enjoyment, discouraging them from investing in productive language learning practices in the informal and digitalized environment. The discussion of these findings and pedagogical implications helps to chart the path for harnessing the power of the ideal L2 self to engage Chinese university students with the extramural and digitalized language learning ecology.
Bidirectional transformations (BXs) are a mechanism for maintaining consistency between multiple representations of related data. The lens framework, which usually constructs BXs from lens combinators, has become the mainstream approach to BX programming because of its modularity and correctness by construction. However, the involved bidirectional behaviors of lenses make equational reasoning about them, and their optimization, much harder than for unidirectional programs. We propose a novel approach to deriving efficient lenses from clear specifications via program calculation, a correct-by-construction approach to reasoning about functional programs by algebraic laws. To support bidirectional program calculation, we propose contract lenses, which extend conventional lenses with a pair of predicates to enable safe and modular composition of partial lenses. We define several contract-lens combinators capturing common computation patterns including $\textit{fold}$, $\textit{filter}$, $\textit{map}$, and $\textit{scan}$, and develop several bidirectional calculation laws to reason about and optimize contract lenses. We demonstrate the effectiveness of our new calculation framework based on contract lenses with nontrivial examples.
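The core idea of a lens guarded by contracts can be sketched in a few lines. This is a simplified illustration, not the paper's formal framework: the `ContractLens` record, the `fst` example lens, and the `check_laws` helper are all hypothetical names, and the well-behavedness laws (GetPut, PutGet) are checked dynamically here rather than proved by calculation.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class ContractLens:
    """A lens (get, put) guarded by predicates on sources and views.

    The predicates make partiality explicit: composing two lenses is safe
    whenever the first lens's view contract implies the second's source
    contract.
    """
    get: Callable[[Any], Any]
    put: Callable[[Any, Any], Any]
    c_src: Callable[[Any], bool]   # contract on sources
    c_view: Callable[[Any], bool]  # contract on views

# A partial lens focusing on the head of a list; its source contract
# rules out the empty list, where get/put would be undefined.
fst = ContractLens(
    get=lambda s: s[0],
    put=lambda s, v: [v] + s[1:],
    c_src=lambda s: len(s) > 0,
    c_view=lambda v: True,
)

def check_laws(lens, s, v):
    """Check the round-trip lens laws on contract-satisfying inputs."""
    assert lens.c_src(s) and lens.c_view(v)
    assert lens.put(s, lens.get(s)) == s   # GetPut
    assert lens.get(lens.put(s, v)) == v   # PutGet
```

The contracts play the role the abstract ascribes to them: they delimit the domain on which the lens laws are expected to hold, so partial lenses can be composed without silently leaving that domain.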
Given a graph $H$, let us denote by $f_\chi (H)$ and $f_\ell (H)$, respectively, the maximum chromatic number and the maximum list chromatic number of $H$-minor-free graphs. Hadwiger’s famous colouring conjecture from 1943 states that $f_\chi (K_t)=t-1$ for every $t \ge 2$. A closely related problem that has received significant attention in the past concerns $f_\ell (K_t)$, for which it is known that $2t-o(t) \le f_\ell (K_t) \le O(t (\!\log \log t)^6)$. Thus, $f_\ell (K_t)$ is bounded away from the conjectured value $t-1$ for $f_\chi (K_t)$ by at least a constant factor. The so-called $H$-Hadwiger’s conjecture, proposed by Seymour, asks to prove that $f_\chi (H)={\textrm{v}}(H)-1$ for a given graph $H$ (which would be implied by Hadwiger’s conjecture).
In this paper, we prove several new lower bounds on $f_\ell (H)$, thus exploring the limits of a list colouring extension of $H$-Hadwiger’s conjecture. Our main results are:
For every $\varepsilon \gt 0$ and all sufficiently large graphs $H$ we have $f_\ell (H)\ge (1-\varepsilon )({\textrm{v}}(H)+\kappa (H))$, where $\kappa (H)$ denotes the vertex-connectivity of $H$.
For every $\varepsilon \gt 0$ there exists $C=C(\varepsilon )\gt 0$ such that asymptotically almost every $n$-vertex graph $H$ with $\left \lceil C n\log n\right \rceil$ edges satisfies $f_\ell (H)\ge (2-\varepsilon )n$.
The first result generalizes recent results on complete and complete bipartite graphs and shows that the list chromatic number of $H$-minor-free graphs is separated from the desired value of $({\textrm{v}}(H)-1)$ by a constant factor for all large graphs $H$ of linear connectivity. The second result tells us that for almost all graphs $H$ with superlogarithmic average degree $f_\ell (H)$ is separated from $({\textrm{v}}(H)-1)$ by a constant factor arbitrarily close to $2$. Conceptually these results indicate that the graphs $H$ for which $f_\ell (H)$ is close to the conjectured value $({\textrm{v}}(H)-1)$ for $f_\chi (H)$ are typically rather sparse.
In the present work, neural networks are applied to formulate parametrized hyperelastic constitutive models. The models fulfill all common mechanical conditions of hyperelasticity by construction. In particular, partially input convex neural network (pICNN) architectures are applied based on feed-forward neural networks. Receiving two different sets of input arguments, pICNNs are convex in one of them, while for the other, they represent arbitrary relationships which are not necessarily convex. In this way, the model can fulfill convexity conditions stemming from mechanical considerations without being too restrictive on the functional relationship in additional parameters, which may not necessarily be convex. Two different models are introduced, where one can represent arbitrary functional relationships in the additional parameters, while the other is monotonic in the additional parameters. As a first proof of concept, the model is calibrated to data generated with two differently parametrized analytical potentials, whereby three different pICNN architectures are investigated. In all cases, the proposed model shows excellent performance.
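The constraint that makes a network convex in one input set but unrestricted in the other can be shown in a minimal numerical sketch. This is an illustrative toy, not the paper's architecture: the layer sizes, the `tanh` path for the parameters, and the single hidden layer are assumptions.

```python
import numpy as np

def softplus(z):
    """Convex, non-decreasing activation."""
    return np.log1p(np.exp(z))

def picnn(x, p, params):
    """Minimal partially input convex network: convex in x, arbitrary in p.

    Convexity in x holds by construction: x enters through an affine map
    followed by a convex, non-decreasing activation, and the output layer
    combines the hidden units with non-negative weights. The parameters p
    only shift the hidden pre-activations through an unconstrained
    sub-network, so no convexity in p is imposed.
    """
    Wx, Wp, wz = params
    h = np.tanh(Wp @ p)           # unconstrained path in p
    z = softplus(Wx @ x + h)      # convex and non-decreasing in x
    return float(np.abs(wz) @ z)  # non-negative weights preserve convexity
```

A quick numerical check of convexity in `x` (Jensen's inequality along a chord) confirms the construction; in the hyperelasticity setting, the convex input would be the deformation measure and `p` the additional material parameters.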
Product data sharing is fundamental for collaborative product design and development. Although the STandard for Exchange of Product model data (STEP) enables this by providing a unified data definition and description, it lacks the ability to provide a more semantically enriched product data model. Many researchers suggest converting STEP models to ontology models and propose rules for mapping EXPRESS, the descriptive language of STEP, to the Web Ontology Language (OWL). In most research, this mapping is a manual process which is time-consuming and prone to misunderstandings. To support this conversion, this research proposes an automatic method based on natural language processing (NLP) techniques. The similarities of language elements in the reference manuals of EXPRESS and OWL have been analyzed in terms of three aspects: heading semantics, text semantics, and heading hierarchy. The paper focuses on translating between language elements, but the same approach has also been applied to the definition of the data models. Two forms of semantic analysis with NLP are proposed: a combination of Random Walks (RW) and Global Vectors for Word Representation (GloVe) for heading semantic similarity, and a Decoding-enhanced BERT with disentangled attention (DeBERTa) ensemble model for text semantic similarity. The evaluation shows the feasibility of the proposed method. The results not only cover most language elements mapped by current research, but also identify mappings of elements that have not previously been included. It also indicates the potential to identify the OWL segments for the EXPRESS declarations.
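The matching step underlying this kind of mapping can be sketched as nearest-neighbour search over embedding vectors. This is a schematic illustration only: the real pipeline uses GloVe/RW and DeBERTa embeddings, whereas the toy vectors and the `best_match` helper below are invented for the example.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def best_match(query_vec, candidates):
    """Map a source-language element to the target element whose
    embedding is most similar (highest cosine score)."""
    return max(candidates, key=lambda name: cosine(query_vec, candidates[name]))
```

With real embeddings, the query would be the vector for an EXPRESS manual heading (e.g. an entity declaration) and the candidates the vectors for OWL manual headings (e.g. `owl:Class`), so the highest-scoring candidate proposes the mapping.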
There have been consistent calls for more research on managing teams and embedding processes in data science innovations. Widely used frameworks (e.g., the cross-industry standard process for data mining) provide a standardized approach to data science but are limited in features such as role clarity, skills, and cross-team collaboration that are essential for developing organizational capabilities in data science. In this study, we introduce a data workflow method (DWM) as a new approach to break organizational silos and create a multi-disciplinary team to develop, implement, and embed data science. Different from current data science process workflows, the DWM is managed at the system level, shaping the business operating model for continuous improvement, rather than as a function of a particular project, a single business unit, or isolated individuals. To further operationalize the DWM approach, we investigated an embedded data workflow at a mining operation that has been using geological data in a machine-learning model to stabilize daily mill production for the last two years. Based on the findings in this study, we propose that the DWM approach derives its capability from three aspects: (a) a systemic data workflow; (b) multi-disciplinary networks of collaboration and responsibility; and (c) clearly identified data roles and the associated skills and expertise. This study suggests a whole-of-organization approach and pathway to develop data science capability.
Modeling complex dynamical systems with only partial knowledge of their physical mechanisms is a crucial problem across all scientific and engineering disciplines. Purely data-driven approaches, which only make use of an artificial neural network and data, often fail to accurately simulate the evolution of the system dynamics over a sufficiently long time and in a physically consistent manner. Therefore, we propose a hybrid approach that uses a neural network model in combination with an incomplete partial differential equations (PDEs) solver that provides known, but incomplete, physical information. In this study, we demonstrate that the results obtained from the incomplete PDEs can be efficiently corrected at every time step by the proposed hybrid neural network-PDE solver model, so that the effect of the unknown physics present in the system is correctly accounted for. For validation purposes, the obtained simulations of the hybrid model are successfully compared against results coming from the complete set of PDEs describing the full physics of the considered system. We demonstrate the validity of the proposed approach on a reactive flow, an archetypal multi-physics system that combines fluid mechanics and chemistry, the latter being the physics considered unknown. Experiments are made on planar and Bunsen-type flames at various operating conditions. The hybrid neural network-PDE approach correctly models the flame evolution of the cases under study for significantly long time windows, yields improved generalization, and allows for larger simulation time steps.
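The per-step correction structure described above can be sketched generically. This is not the paper's reactive-flow solver: the 1D periodic diffusion step stands in for the "known but incomplete" physics, and the `correction` callable stands in for the trained network that would supply the missing terms (in the paper, chemistry).

```python
import numpy as np

def pde_step(u, dt=0.1, dx=1.0, nu=0.5):
    """One explicit step of the incomplete physics (here: 1D diffusion
    on a periodic grid), playing the role of the known PDE solver."""
    lap = np.roll(u, -1) - 2 * u + np.roll(u, 1)  # periodic Laplacian
    return u + dt * nu * lap / dx**2

def hybrid_step(u, correction):
    """Advance with the known physics, then add the learned correction
    that accounts for the unknown physics at every time step."""
    return pde_step(u) + correction(u)
```

With a zero correction the scheme reduces to the incomplete solver alone; in the hybrid setting, the correction network is trained so that repeated `hybrid_step` calls track the full-physics reference trajectory.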
Dialogues are turn-taking games which model debates about the satisfaction of logical formulas. A novel variant played over first-order structures gives rise to a notion of first-order satisfaction. We study the induced notion of validity for classical and intuitionistic first-order logic in the constructive setting of the calculus of inductive constructions. We prove that such material dialogue semantics for classical first-order logic admits constructive soundness and completeness proofs, setting it apart from standard model-theoretic semantics of first-order logic. Furthermore, we prove that completeness with regard to intuitionistic material dialogues fails in both constructive and classical settings. As an alternative, we propose material dialogues played over Kripke structures. These Kripke material dialogues exhibit constructive completeness when restricting to the negative fragment. The results concerning classical material dialogues have been mechanized using the Coq interactive theorem prover.
Cyberspace is essential for socializing, learning, shopping, and just about everything in modern life. Yet, there is also a dark side to cyberspace: sub-national, transnational, and international actors are challenging the ability of sovereign governments to provide a secure environment for their citizens. Criminal groups hold businesses and local governments hostage through ransomware, foreign intelligence services steal intellectual property and conduct influence operations, governments attempt to rewrite Internet protocols to facilitate censorship, and militaries prepare to use cyberspace operations in wars. Security in the Cyber Age breaks down how cyberspace works, analyzes how state and non-state actors exploit vulnerabilities in cyberspace, and provides ways to improve cybersecurity. Written by a computer scientist and national security scholar-practitioner, the book offers technological, policy, and ethical ways to protect cyberspace. Its interdisciplinary approach and engaging style make the book accessible to the lay audience as well as computer science and political science students.
This chapter presents an overview of VR systems, from hardware (Section 2.1) and software (Section 2.2), including the Virtual World Generator (VWG), which maintains the geometry and physics of the virtual world, to human perception (Section 2.3). The purpose is to quickly provide a sweeping perspective so that the detailed subjects in the remaining chapters will be understood within the larger context.
This chapter transitions from the physiology of human vision to perception. How do our brains interpret the world around us so effectively in spite of our limited biological hardware? To understand how we may be fooled by visual stimuli presented by a display, you must first understand how we perceive or interpret the real world under normal circumstances. It is not always clear what we will perceive. We have already seen several optical illusions. VR itself can be considered as a grand optical illusion. Under what conditions will it succeed or fail? Section 6.1 covers perception of the distance of objects from our eyes, which is also related to the perception of object scale. Section 6.2 explains how we perceive motion. An important part of this is the illusion of motion that we perceive from videos, which are merely a sequence of pictures. Section 6.3 covers the perception of color, which may help explain why displays use only three colors (red, green, and blue) to simulate the entire spectral power distribution of light. Finally, Section 6.4 presents a statistically based model of how information is combined from multiple sources to produce a perceptual experience.
This chapter introduces interaction mechanisms that may not have a counterpart in the physical world. Section 10.1 introduces general motor learning and control concepts. The most important concept is remapping, in which a motion in the real world may be mapped into a substantially different motion in the virtual world. This enables many powerful interaction mechanisms. The task is to develop ones that are easy to learn, easy to use, effective for the task, and provide a comfortable user experience. Section 10.2 discusses how the user may move himself in the virtual world, while remaining fixed in the real world. Section 10.3 presents ways in which the user may interact with other objects in the virtual world. Section 10.4 discusses social interaction mechanisms, which allow users to interact directly with each other. Section 10.5 briefly considers some additional interaction mechanisms, such as editing text, designing 3D structures, and Web browsing.
In the real world, audio is crucial to art, entertainment, and oral communication. Audio recording and reproduction can be considered a VR experience by itself, with both a CAVE-like version (surround sound) and a headset version (wearing headphones). When combined consistently with the visual component, audio helps provide a compelling and comfortable VR experience. Each section of this chapter is the auditory (or audio) complement to one of Chapters 4 through 7. The progression again goes from physics to physiology, and then from perception to rendering. Section 11.1 explains the physics of sound in terms of waves, propagation, and frequency analysis. Section 11.2 describes the parts of the human ear and their function. This naturally leads to auditory perception, which is the subject of Section 11.3. Section 11.4 concludes by presenting auditory rendering, which can produce sounds synthetically from models or reproduce captured sounds.
We now want to model motions more accurately because the physics of both real and virtual worlds impact VR experiences. The accelerations and velocities of moving bodies impact simulations in the VWG and tracking methods used to capture user motions in the physical world. Section 8.1 introduces fundamental concepts from math and physics, including velocities, accelerations, and the movement of rigid bodies. Section 8.2 presents the physiology and perceptual issues from the human vestibular system, which senses velocities and accelerations. Section 8.3 then describes how motions are described and produced in a VWG. This includes numerical integration and collision detection. Section 8.4 focuses on vection, which is a source of VR sickness that arises due to sensory conflict between the visual and vestibular systems: the eyes may perceive motion while the vestibular system is not fooled. This can be considered as competition between the physics of the real and virtual worlds.
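The numerical integration mentioned in Section 8.3 can be illustrated with a minimal stepping scheme. The choice of semi-implicit Euler here is an illustrative assumption, not the book's prescribed method; it is a common default in real-time physics loops of the kind a VWG runs.

```python
def euler_step(pos, vel, acc, dt):
    """One semi-implicit (symplectic) Euler step: update velocity first,
    then advance position with the *new* velocity. This ordering is
    favored in real-time simulation for its stability over many steps."""
    vel = vel + acc * dt
    pos = pos + vel * dt
    return pos, vel
```

A VWG would call such a step once per simulation tick for every moving body, with `acc` derived from forces, before handing the updated poses to collision detection and rendering.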
This chapter covers the geometry part of the Virtual World Generator (VWG), which is needed to make models and move them around. The models could include the walls of a building, furniture, clouds in the sky, the user’s avatar, and so on. Section 3.1 covers the basics of how to define consistent, useful models. Section 3.2 explains how to apply mathematical transforms that move them around in the virtual world. This involves two components: translation (changing position) and rotation (changing orientation). Section 3.3 presents the best ways to express and manipulate 3D rotations, which are the most complicated part of moving models. Section 3.4 then covers how the virtual world appears if we try to “look” at it from a particular perspective. This is the geometric component of visual rendering. Finally, Section 3.5 puts all of the transformations together so that you can see how to go from defining a model to having it appear in the right place on the display.
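Since Section 3.3 singles out 3D rotations as the most complicated part of moving models, a small sketch of the standard quaternion-based rotation may help. This is generic background rather than the chapter's own code; the function name is invented for the example.

```python
import numpy as np

def quat_rotate(q, v):
    """Rotate vector v by the unit quaternion q = (w, x, y, z),
    using the efficient identity v' = v + 2 u x (u x v + w v),
    where u is the vector part of q."""
    w, x, y, z = q
    u = np.array([x, y, z])
    return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)
```

Quaternions are preferred over Euler angles in VR pipelines because they compose cheaply, interpolate smoothly, and avoid gimbal lock; for example, a 90-degree rotation about the z-axis is the quaternion (cos 45°, 0, 0, sin 45°).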
This chapter considers what VR means in a way that captures the most crucial aspects in spite of rapidly changing technology. Relevant terminology is introduced. The subsequent discussion covers what VR is considered to be today and what we envision for its future. The chapter starts with two thought-provoking examples: (1) a human having an experience of flying over virtual San Francisco by flapping his own wings, and (2) a gerbil running on a freely rotating ball while exploring a virtual maze that appears on a projection screen around it.
This chapter addresses visual rendering, which specifies what the visual display should show through an interface to the virtual world generator (VWG). Sections 7.1 and 7.2 cover basic concepts at the core of computer graphics, and VR-specific issues. They mainly address the case of rendering for virtual worlds that are formed synthetically. Section 7.1 explains how to determine the light that should appear at a pixel based on light sources and the reflectance properties of materials that exist purely in the virtual world. Section 7.2 explains rasterization methods, which efficiently solve the rendering problem and are widely used in specialized graphics hardware, called GPUs. Section 7.3 addresses VR-specific problems that arise from imperfections in the optical system. Section 7.4 focuses on latency reduction, which is critical to VR, so that virtual objects appear in the right place at the right time. Otherwise, side effects could arise, such as VR sickness, fatigue, adaptation to flaws, or an unconvincing experience. Section 7.5 explains rendering for captured rather than synthetic virtual worlds. This covers VR experiences that are formed from panoramic photos and videos.