Computer scientists often need to learn new programming languages quickly. The best way to prepare for this is to understand the foundational principles that underlie even the most complicated industrial languages. This text for an undergraduate programming languages course distills great languages and their design principles down to easy-to-learn 'bridge' languages implemented by interpreters whose key parts are explained in the text. The book goes deep into the roots of both functional and object-oriented programming, and it shows how types and modules, including generics/polymorphism, contribute to effective programming. The book is not just about programming languages; it is also about programming. Through concepts, examples, and more than 300 practice exercises that exploit the interpreter, students learn not only what programming-language features are but also how to do things with them. Substantial implementation projects include Milner's type inference, both copying and mark-and-sweep garbage collection, and arithmetic on arbitrary-precision integers.
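One of the implementation projects named above, arithmetic on arbitrary-precision integers, boils down to schoolbook digit arithmetic. A minimal Python sketch of the addition step (the book implements this in its bridge languages; the little-endian digit-list representation here is an illustrative assumption, not the book's):

```python
def add_bignum(a, b, base=10):
    """Schoolbook addition of arbitrary-precision naturals represented
    as little-endian digit lists, e.g. 123 -> [3, 2, 1]."""
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        da = a[i] if i < len(a) else 0   # missing digits count as 0
        db = b[i] if i < len(b) else 0
        carry, digit = divmod(da + db + carry, base)
        result.append(digit)
    if carry:
        result.append(carry)             # final carry-out
    return result

# 999 + 1 = 1000
print(add_bignum([9, 9, 9], [1]))   # [0, 0, 0, 1]
```

The same carry-propagation loop, generalized to subtraction and multiplication, is the heart of such a project.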
Archimedes, the founder of statics and hydrostatics, created methods in his mathematical and physical studies that were connected to his inventions of new machines; for example, the method of mechanical theorems was based on his invention of the lever. He also used the principles of decomposition and replication underlying his heat-ray invention, and these two principles permeate his work. Analysis of Archimedes’ work shows that he was perhaps the first to apply a methodical strategy to solving diverse complex problems. In this article, we use the term Archimedes Code for the way Archimedes approached problems, including those two principles. Archimedes was perhaps the first design theorist and the first to think systematically about how to address design challenges. Furthermore, his work demonstrates the fundamental role of engineering practice in advancing science. These new insights into the Archimedes Code and its value in design practice may inspire both design researchers and practitioners.
In this study, we consider a class of multiple-drawing opposite-reinforcing urns with time-dependent replacement rules. The class has the symmetric property of a Friedman-type urn. We divide the class into a small-increment regime and a large-increment regime. For small-increment schemes, we prove almost-sure convergence and a central limit theorem for the proportion of white balls by stochastic approximation. For large-increment schemes, by assuming the affinity condition, we show almost-sure convergence of the proportion of white balls by martingale theory and present a way to identify the limit distribution of the proportion of white balls.
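The opposite-reinforcing dynamics can be illustrated with a toy simulation. The sketch below assumes the simplest constant replacement rule (draw one ball, add one ball of the opposite colour), not the paper's time-dependent multiple-drawing schemes; it only shows the balancing effect that drives the almost-sure convergence:

```python
import random

def simulate_opposite_reinforcing_urn(draws=20000, add=1, seed=0):
    """Simulate a simple opposite-reinforcing (Friedman-type) urn:
    each drawn ball is returned together with `add` balls of the
    OPPOSITE colour.  Returns the final proportion of white balls."""
    rng = random.Random(seed)
    white, black = 1, 1
    for _ in range(draws):
        if rng.random() < white / (white + black):
            black += add          # drew white -> reinforce black
        else:
            white += add          # drew black -> reinforce white
    return white / (white + black)

# The opposite reinforcement balances the urn, so the proportion
# of white balls converges almost surely to 1/2.
print(simulate_opposite_reinforcing_urn())
```

Under this constant rule the proportion concentrates near 1/2; the paper's regimes differ in how the (time-dependent) increments scale, which is what separates the CLT-type behaviour from the martingale analysis.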
Retinal surgery is widely considered a complicated and challenging task, even for specialists. Image-guided robot-assisted intervention is among the novel and promising solutions that may enhance human capabilities therein. In this paper, we demonstrate the possibility of using spotlights for 5D guidance of a microsurgical instrument. The theoretical basis of instrument localization from the projection of a single spotlight is analyzed to deduce the position and orientation of the spotlight source. The use of multiple spotlights is also proposed to explore further improvements to the performance boundaries. The proposed method is verified in a high-fidelity simulation environment built with the 3D creation suite Blender. Experimental results show average positioning errors of 0.029 mm using a single spotlight and 0.025 mm with three spotlights, with corresponding rotational errors of 0.124 and 0.101, which shows the approach to be promising for instrument localization in retinal surgery.
We deal with a challenging scheduling problem on parallel machines with sequence-dependent setup times and release dates from a real-world application in semiconductor workshop production. There, jobs can only be processed by dedicated machines, so a few machines can determine the makespan almost regardless of how jobs are scheduled on the remaining ones. This causes problems when machines fail and jobs need to be rescheduled. Instead of optimising only the makespan, we put the individual machine spans in non-ascending order and lexicographically minimise the resulting tuples. This ensures that all machines complete as early as possible and increases the robustness of the schedule. We study the application of answer-set programming (ASP) to this problem. While ASP eases modelling, the combination of timing constraints and the considered objective function challenges current solving technology. The former issue is addressed by an extension of ASP with difference logic; for the latter, we devise different algorithms that use multi-shot solving. To tackle industrial-sized instances, we study various approximations and heuristics. Our experimental results show that ASP is indeed a promising knowledge representation and reasoning (KRR) paradigm for this problem and is competitive with state-of-the-art constraint programming (CP) and mixed-integer programming (MIP) solvers.
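The lexicographic objective described above is easy to state concretely. A small Python sketch (the function name and the example spans are illustrative, not from the paper):

```python
def lex_span_key(machine_spans):
    """Objective on a schedule's machine spans: sort them in
    non-ascending order; schedules are then compared
    lexicographically on the resulting tuple, smaller being better."""
    return tuple(sorted(machine_spans, reverse=True))

# Two hypothetical 3-machine schedules sharing the same makespan (10):
a = [10, 9, 2]   # machine spans of schedule A
b = [10, 4, 7]   # machine spans of schedule B
# B wins: after the tied makespan, its second-largest span (7) beats
# A's (9), so B's non-bottleneck machines finish earlier.
best = min([a, b], key=lex_span_key)
```

This is why the objective is more robust than the plain makespan: it keeps discriminating between schedules even when the bottleneck machine fixes the makespan.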
In this paper, a Pythagorean-Hodograph (PH) curve-based trajectory planning method for pick-and-place operations of Delta parallel robots is proposed, enabling flexible control of pick-and-place operations to meet the requirements of various practical scenarios. First, pick-and-place operations are classified according to the geometric relationship of the pick-and-place path. Trajectory planning is then carried out for each case, and in each case the different polynomial motion laws adopted by the linear and curved motion segments are solved. Trajectory optimization is performed with the motion period as the objective. The proposed method is easy to implement while satisfying the robot’s requirements for safety, optimality, mobility, and stability; that is, it realizes obstacle avoidance, time optimality, flexible control of the robot trajectory, and stable motion. Simulations and experiments verify the effectiveness of the proposed method. It can support fast, accurate, and safe operation in intelligent manufacturing tasks such as rapid classification, palletizing, grasping, and warehousing, and its research route can also serve as a reference for trajectory planning of intelligent vehicles in logistics systems.
This self-contained introduction to machine learning, designed from the start with engineers in mind, will equip students with everything they need to start applying machine learning principles and algorithms to real-world engineering problems. With a consistent emphasis on the connections between estimation, detection, information theory, and optimization, it includes: an accessible overview of the relationships between machine learning and signal processing, providing a solid foundation for further study; clear explanations of the differences between state-of-the-art techniques and more classical methods, equipping students with all the understanding they need to make informed technique choices; demonstration of the links between information-theoretical concepts and their practical engineering relevance; reproducible examples using Matlab, enabling hands-on student experimentation. Assuming only a basic understanding of probability and linear algebra, and accompanied by lecture slides and solutions for instructors, this is the ideal introduction to machine learning for engineering students of all disciplines.
Bayesian optimization is a methodology for optimizing expensive objective functions that has proven success in the sciences, engineering, and beyond. This timely text provides a self-contained and comprehensive introduction to the subject, starting from scratch and carefully developing all the key ideas along the way. This bottom-up approach illuminates unifying themes in the design of Bayesian optimization algorithms and builds a solid theoretical foundation for approaching novel situations. The core of the book is divided into three main parts, covering theoretical and practical aspects of Gaussian process modeling, the Bayesian approach to sequential decision making, and the realization and computation of practical and effective optimization policies. Following this foundational material, the book provides an overview of theoretical convergence results, a survey of notable extensions, a comprehensive history of Bayesian optimization, and an extensive annotated bibliography of applications.
Data assimilation of flow measurements is an essential tool for extracting information in fluid dynamics problems. Recent works have shown that physics-informed neural networks (PINNs) enable the reconstruction of unsteady fluid flows governed by the Navier–Stokes equations, provided the network is given enough flow measurements appropriately distributed in time and space. In many practical applications, however, experimental measurements involve only time-averaged quantities or their higher-order statistics, which are governed by the under-determined Reynolds-averaged Navier–Stokes (RANS) equations. In this study, we perform PINN-based reconstruction of time-averaged quantities of an unsteady flow from sparse velocity data. The technique leverages the time-averaged velocity data to infer unknown closure quantities (the curl of the unsteady RANS forcing) and to interpolate the fields from sparse measurements. The method is further extended to the assimilation of Reynolds stresses, where PINNs successfully interpolate the data to complete the velocity and stress fields and gain insight into the pressure field of the investigated flow.
David Gibson’s (2008) examination of research on conversational interaction highlighted methodological and theoretical gaps in current understanding – particularly around the localized construction of interaction and the reproduction of social structures. This paper extends extant formal models used by group process researchers to explain how exogenous status structures shape local interaction by incorporating insights from qualitative work examining the local production of conversational interaction. Relational events serve as a bridge between conversation analytic understandings of the deep structure of conversation and expectation states formal models of permeation. We propose a theoretical integration of the status organizing process (permeation) and local turn-taking rules (deep structure) as a more complete model of conversational behavior in task groups. We test a formalized construction of this preliminary theory by examining turn-taking using data from 55 task groups whose members vary in gender, authority, and legitimacy of that authority. This integrated model offers substantial improvements in prediction accuracy over using status information alone. We then propose ways to expand the integrated theoretical framework to advance current understandings of action and events in conversation. Finally, we offer suggestions for insights from group processes theories that could be incorporated into network models of interaction outside of this theoretical framework.
The COVID‑19 pandemic has increased the popularity of online shopping, and companies are looking for ways to provide consumers with experiences that online shopping cannot, such as touching products and imagining them in use. In this context, the importance of haptic imagery for products showcased online is increasing. This study replicated and extended Peck et al.’s (2013, Journal of Consumer Psychology, 23, 189–196) finding that physical control and psychological ownership mediate the influence of haptic imagery on purchase intention. This study showed that imagining touching a product, compared with not imagining, increased purchase intention through the mediation of physical control and psychological ownership, conceptually replicating Peck et al.’s study. This study also examined product involvement and found that it played no moderating role. The findings have practical applications in marketing, such as encouraging consumers to imagine touching the product.
This paper introduces a class of automata and associated languages suitable for modelling a computational paradigm of fuzzy systems in which both vagueness and simultaneity are taken as first-class citizens. This requires a weighted semantics for transitions and a precise notion of synchronous product to enforce the simultaneous occurrence of actions. The usual relationships between automata and languages are revisited in this setting, including a specific Kleene theorem.
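The synchronous product mentioned above can be sketched concretely. The snippet below assumes a transition-map representation and min as the weight-combining t-norm; both are illustrative assumptions, not the paper's formalism:

```python
def sync_product(t1, t2):
    """Synchronous product of two fuzzy (weighted) automata given as
    transition maps {(state, action): (next_state, weight)} with
    weights in [0, 1].  Both components must fire the SAME action at
    the same time; the joint weight is combined with min (a common
    t-norm), so a joint move is only as credible as its weakest part."""
    prod = {}
    for (s1, a1), (n1, w1) in t1.items():
        for (s2, a2), (n2, w2) in t2.items():
            if a1 == a2:  # simultaneity: only shared actions survive
                prod[((s1, s2), a1)] = ((n1, n2), min(w1, w2))
    return prod

# Illustrative two-state components sharing the action 'go':
t1 = {('p', 'go'): ('q', 0.8)}
t2 = {('x', 'go'): ('y', 0.5), ('x', 'stop'): ('x', 1.0)}
# Only 'go' is shared, with weight min(0.8, 0.5) = 0.5.
print(sync_product(t1, t2))
```

The unmatched 'stop' transition is dropped, which is exactly how the product enforces simultaneous occurrence of actions.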
Contemporary proof assistants such as Coq require that recursive functions be terminating and corecursive functions be productive to maintain logical consistency of their type theories, and some ensure these properties using syntactic checks. However, being syntactic, they are inherently delicate and restrictive, preventing users from easily writing obviously terminating or productive functions at their whim.
Meanwhile, there exist many sized type theories that perform type-based termination and productivity checking, including theories based on the Calculus of (Co)Inductive Constructions (CIC), the core calculus underlying Coq. These theories are more robust and compositional in comparison. So why haven’t they been adapted to Coq?
In this paper, we venture to answer this question with CIC$\widehat{\ast}$, a sized type theory based on CIC. It extends past work on sized types in CIC with additional Coq features such as global and local definitions. We also present a corresponding size inference algorithm and implement it within Coq’s kernel; for maximal backward compatibility with existing Coq developments, it requires no additional annotations from the user.
In our evaluation of the implementation, we find a severe performance degradation when compiling parts of the Coq standard library, inherent to the algorithm itself. We conclude that if we wish to maintain backward compatibility, using size inference as a replacement for syntactic checking is impractical in terms of performance.
Numerous models have been proposed to generate random graphs preserving the properties of real-life large-scale networks. However, many real networks are better represented by hypergraphs. Few models for generating random hypergraphs exist, and only a few of them preserve both a power-law degree distribution and the high modularity indicating the presence of communities. We present a dynamic preferential-attachment hypergraph model that features partition into communities. We prove that its degree distribution follows a power law, and we give theoretical lower bounds on its modularity. We compare its characteristics with a real-life co-authorship network and show that our model performs well. We believe that this hypergraph model will be a useful tool in many research domains for better reflecting real-life phenomena.
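The preferential-attachment mechanism at the heart of such models can be illustrated with a toy sketch. The code below is a drastic simplification of the model described above: it omits communities entirely and simply grows hyperedges by degree-proportional sampling:

```python
import random

def pa_hypergraph(steps=2000, edge_size=3, seed=1):
    """Drastically simplified preferential-attachment hypergraph
    sketch (NOT the paper's community-structured model): each step
    adds one new vertex plus one hyperedge joining it to
    `edge_size - 1` existing vertices sampled proportionally to
    their current degree."""
    rng = random.Random(seed)
    degree = [1, 1]          # two seed vertices
    edges = [(0, 1)]         # one seed hyperedge
    for _ in range(steps):
        picked = set()
        while len(picked) < min(edge_size - 1, len(degree)):
            # degree-proportional (preferential) sampling
            picked.add(rng.choices(range(len(degree)), weights=degree)[0])
        new_v = len(degree)
        degree.append(0)
        edge = tuple(sorted(picked)) + (new_v,)
        edges.append(edge)
        for v in edge:
            degree[v] += 1
    return degree, edges

degree, edges = pa_hypergraph()
# Rich-get-richer: early vertices accumulate far more degree than
# typical vertices, the mechanism behind a power-law tail.
```

A full model would additionally assign vertices to communities and bias edge formation within them, which is what yields the modularity guarantees.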
Packed with real-world examples, industry insights and practical activities, this textbook is designed to teach machine learning in a way that is easy to understand and apply. It assumes only a basic knowledge of technology, making it an ideal resource for students and professionals, including those who are new to computer science. All the necessary topics are covered, including supervised and unsupervised learning, neural networks, reinforcement learning, cloud-based services, and the ethical issues still posing problems within the industry. While Python is used as the primary language, many exercises will also have the solutions provided in R for greater versatility. A suite of online resources is available to support teaching across a range of different courses, including example syllabi, a solutions manual, and lecture slides. Datasets and code are also available online for students, giving them everything they need to practice the examples and problems in the book.
In this paper, we examine the contribution of the journal Network Science to the network science discipline. We do so from two perspectives. First, expanding the existing taxonomy of article contributions, we examine trends in theory testing, theory building, and new method development within the journal’s articles. We find that the journal demands a high level of theoretical contribution and methodological rigor, and that high levels of theoretical and methodological contribution are significant predictors of article citation rates. Second, we look at the composition of the studies in Network Science and determine that the journal has already established a solid “hard core” for the new discipline.
We consider a two-stage service system with two types of servers, namely subordinates who perform the first-stage service and supervisors who have their own responsibilities in addition to collaborating with the subordinates on the second-stage service. Rewards are earned when first- or second-stage service is completed and when supervisors finish one of their own responsibilities. Costs are incurred when impatient customers abandon without completing the second-stage service. Our problem is to determine how the supervisors should distribute their time between their joint work with the subordinates and their own responsibilities. Under the assumptions that service times at both stages are exponentially distributed and that the customers waiting for second-stage service abandon after an exponential amount of time, we prove that one of two policies will maximize the long-run average profit. Namely, it is optimal for supervisors to start collaborating with subordinates either when subordinates can no longer serve new customers or as soon as there is a customer ready for second-stage service. Furthermore, we show that the optimality condition is a simple threshold on the system parameters. We conclude by proving that pooling supervisors (and their associated subordinates) improves system performance, but with limited returns as more supervisors are pooled.
A stream of research on co-authorship, used as a proxy for scholars’ collaborative behavior, focuses on members of a given scientific community defined on a disciplinary and/or national basis, for which co-authorship data have to be retrieved. Recent literature has pointed out that international digital libraries provide only partial coverage of scholars’ entire scientific production as well as under-coverage of the scholars in the community. Bias in retrieving co-authorship data for the community of interest can affect network construction and network measures in several ways, providing a partial picture of the real collaboration among scholars in writing papers. In this contribution, we collected bibliographic records of Italian academic statisticians from an online platform (IRIS) available at most universities. Although it guarantees a high coverage rate of our population and its scientific production, some data quality issues must be addressed. Thus, we propose a web scraping procedure based on a semi-automatic tool to retrieve publication metadata, together with data management tools to detect duplicate records and to reconcile authors. Our procedure shows that collaboration is an active and increasing practice among Italian academic statisticians, with some differences according to scholars’ gender, academic rank, and university location. The heuristic procedure for addressing data quality issues in the IRIS platform can serve as a working case report, adaptable to other bibliographic archives with similar characteristics.
To support and facilitate the rehabilitation of patients with physical limitations and to aid the therapist, several robotic structures are being studied. Among these, cable-driven robots stand out: actuated by cables, they have the advantages of being flexible and reconfigurable for each patient. The objective of this paper is to develop a theoretical model of knee flexion/extension force and moment using a cable-driven robot. The proposed model is needed to establish a reference against which diagnoses can be made and patient improvement evaluated. The theoretical model was validated through experiments with twelve sedentary, healthy volunteers. The first procedure tested ten subjects at three thigh angles for knee flexion motion; the second tested two subjects in flexion and extension at the same thigh angle. The results show the validity of the model for 88.58% of the tests in an ANOVA analysis with a 99% confidence interval. The similarity of the data across genders, ages, and intrinsic factors implies that the model is representative and independent of the subject’s individuality. Differences between flexion and extension values were observed, which need to be studied in the future.