In this article, the performance analysis and multiobjective structural optimization of a 4RRR parallel mechanism are carried out. First, the 4RRR pure-rotation parallel mechanism and its design route are introduced. Second, the Jacobian matrices in the 2-DoF and 3-DoF pure-rotation modes are derived from the motion equations of the mechanism. Next, the singularity, kinematic dexterity, dynamic dexterity, and stiffness analyses of the mechanism are carried out, and it is proved that the mechanism has no singularities within its workspace. Since the dexterity performance expression is a nonlinear piecewise function, the kinematic local comprehensive dexterity index and the dynamic local comprehensive dexterity index are proposed as the objects of analysis. Furthermore, the kinematic global comprehensive dexterity index, the dynamic global comprehensive dexterity index, and the global comprehensive stiffness index are proposed for the multiobjective structural optimization. Finally, NSGA-III is used to complete the optimization, yielding a comprehensively optimal set of structural dimensions.
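Local dexterity indices of the kind named in this abstract are conventionally built from the singular values of the Jacobian, and the corresponding global index averages them over sampled workspace poses. A minimal sketch, assuming the standard reciprocal-condition-number definition (the paper's exact piecewise weighting is not reproduced here, and the function names are illustrative):

```python
import numpy as np

def local_dexterity(J):
    """Reciprocal condition number of the Jacobian:
    1.0 means isotropic motion transmission, 0.0 means singular."""
    s = np.linalg.svd(J, compute_uv=False)
    return s.min() / s.max()

def global_dexterity(jacobians, weights=None):
    """A global index: (weighted) average of the local dexterity
    over Jacobians sampled across the workspace."""
    vals = np.array([local_dexterity(J) for J in jacobians])
    if weights is None:
        weights = np.full(len(vals), 1.0 / len(vals))
    return float(np.dot(weights, vals))
```

Such scalar indices are convenient objectives for a multiobjective optimizer like NSGA-III, since each candidate structure size maps to one dexterity value.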
Humanoid robots are highly redundant, and finding whole-body optimal trajectories for various tasks is very complex. This paper proposes a method to find an energy-optimal, dynamically balanced, and collision-free trajectory for a 20-degrees-of-freedom humanoid robot in a pick-and-place application. The pick-and-place task is divided into three subtasks using the pseudoinverse Jacobian method of redundancy resolution: the end-effector trajectory, represented by $\mathcal {T}_1$; the hip trajectory, represented by $\mathcal {T}_2$; and manipulability maximization, represented by $\mathcal {T}_3$. The pseudoinverse Jacobian method is coupled with particle swarm optimization (PSO) to find the optimal trajectories. The main contribution of this paper is the decomposition of the whole-body task of the humanoid robot into three distinct subtasks to find energy-optimal, dynamically balanced, and obstacle-free trajectories. The concept of a virtual surface is used to avoid dragging objects along the table surface. Simulations were conducted to pick up and place objects from a table and from constrained spaces such as a drawer. The results show that the robot can pick and place objects from defined locations on the table.
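The pseudoinverse Jacobian method of redundancy resolution mentioned here typically drives the primary task through the pseudoinverse and projects a secondary objective (such as a manipulability gradient) into the Jacobian's null space, where it cannot disturb the primary task. A minimal sketch under those standard assumptions (the paper's exact task stacking and gains are not given here):

```python
import numpy as np

def redundancy_resolution(J, x_dot, z):
    """Joint velocities: primary task x_dot via the pseudoinverse,
    secondary objective z projected into the null space of J."""
    J_pinv = np.linalg.pinv(J)
    n = J.shape[1]
    null_proj = np.eye(n) - J_pinv @ J   # annihilated by J
    return J_pinv @ x_dot + null_proj @ z

def manipulability(J):
    """Yoshikawa manipulability measure sqrt(det(J J^T));
    its gradient is a common choice for the secondary objective z."""
    return np.sqrt(np.linalg.det(J @ J.T))
```

The null-space projection is what makes the three-subtask decomposition possible: lower-priority motions are filtered so they produce zero end-effector velocity.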
Collaborative robotics in manufacturing introduces a new era of seamless human–robot collaboration (HRC), enhancing production line efficiency and adaptability. However, guaranteeing safe interaction while maintaining performance objectives presents significant challenges. Integrating safety with optimal robot performance is paramount to minimizing task time and ensuring task completion. Our work introduces an architecture for safety in confined human–robot workspaces by integrating existing safety and productivity methods into a unified framework specifically designed for constrained environments. By employing an improved artificial potential field, we optimize paths based on length and bending energy and compare them against baseline algorithms such as gradient descent and the rapidly exploring random tree (RRT*). We propose an evaluation metric for system performance that objectively maps to the system’s safety and efficiency in diverse collaborative scenarios. Additionally, the architecture supports multimodal interaction, including gesture-based inputs, for intuitive control and improved operator experience. Safety measures address static and dynamic obstacles using potential fields and safety zones, with a real-time safety evaluation module adjusting trajectories under specified constraints. A performance recovery algorithm facilitates swift resumption of high-speed operations after safety interventions. Validation includes comparing the algorithmic performance through simulations and experiments using the 6-degrees-of-freedom UR5 robot by Universal Robots to identify the most suitable algorithm. Results demonstrate an 83.87% improvement in system performance compared to ideal-case scenarios, validating the effectiveness of the proposed architecture, evaluation metric, and multimodal interaction in enhancing safety and productivity.
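The artificial potential field underlying this kind of planner combines an attractive potential toward the goal with a repulsive potential near obstacles, and the path is found by gradient descent on their sum. A minimal sketch of the classic formulation (not the paper's improved variant; the gains `k_att`, `k_rep`, and influence radius `rho0` are illustrative):

```python
import numpy as np

def apf_step(q, goal, obstacles, k_att=1.0, k_rep=100.0, rho0=0.5, lr=0.01):
    """One gradient-descent step on the attractive + repulsive potential.
    q, goal, obstacles are points in the workspace (NumPy arrays)."""
    grad = k_att * (q - goal)                       # attractive gradient
    for obs in obstacles:
        d = np.linalg.norm(q - obs)
        if 1e-9 < d < rho0:                         # inside influence radius
            # gradient of 0.5*k_rep*(1/d - 1/rho0)^2, pointing toward the
            # obstacle, so the descent direction pushes away from it
            grad += k_rep * (1.0 / rho0 - 1.0 / d) / d**3 * (q - obs)
    return q - lr * grad
```

Iterating `apf_step` traces a path whose length and bending energy can then be scored, which is how potential-field paths are typically compared against RRT* baselines.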
In this chapter, the author reflects on his experience as the founding director of IDSS. The reflections examine the challenges of establishing new entities within academia, offer insights into the process, and conclude with a discussion on how this journey has impacted the author’s thinking and research agenda.
The theoretical foundation of functional programming is the Curry-Howard correspondence, also known as the propositions-as-types paradigm. Types in the simply typed lambda calculus correspond to propositions in intuitionistic logic: function types correspond to logical implications, and product types correspond to logical conjunctions. Moreover, programs correspond to proofs, and computation corresponds to a procedure of cut elimination, or proof normalisation, in which proofs are progressively simplified. The Curry-Howard view has proved to be robust and general and has been extended to varied and more powerful type systems and logics. In one of these extensions the language is a form of pi calculus and the logic is linear logic, with its propositions interpreted as session types. In this chapter we present this system and its key results.
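The correspondence in this abstract can be made concrete in any typed language. In the sketch below (Python with type hints, standing in for the simply typed lambda calculus), a function type `Callable[[A], B]` plays the role of the implication A → B and a pair `tuple[A, B]` plays the role of the conjunction A ∧ B; each well-typed program is then a proof of the proposition its type expresses. The function names are illustrative:

```python
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

def compose(f: Callable[[A], B], g: Callable[[B], C]) -> Callable[[A], C]:
    """A proof of (A -> B) -> (B -> C) -> (A -> C):
    implication is transitive, witnessed by function composition."""
    return lambda a: g(f(a))

def swap(p: tuple[A, B]) -> tuple[B, A]:
    """A proof of A /\\ B -> B /\\ A:
    conjunction is commutative, witnessed by swapping the pair."""
    return (p[1], p[0])
```

Running a program such as `compose(f, g)(a)` then corresponds to normalising the proof: the redex is simplified away by actually applying the functions.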
This chapter introduces a variant of the pi calculus defined in Chapter 2. We drop choice and channel closing, retaining only message passing. To compensate, messages may now carry values other than channel endpoints: we introduce record and variant values, both of which are standard in functional languages. We further introduce new processes to eliminate the new values. We play the same game at the type level: from input/output, external/internal choice and the two end types, we retain only input/output types. In return we incorporate record and variant types, again standard from functional languages. Unlike the input and output types in all preceding chapters, those in this chapter have no continuation. These changes lead to a linear pi calculus with record and variant types. The interesting characteristic of this calculus is that it allows us to faithfully encode the pi calculus with session types, even though it has no specific support for session types. We present an encoding based on work by Dardha, Giachino and Sangiorgi.
In this chapter, we aim to contribute to the ongoing discussions involving legal entities, big tech, and governments by introducing several key topics and questions related to data privacy, decision-making, and regulation. We explore the balance between mathematical logic and social justice, the challenge of eliminating persistent biases through programming, and the extent of control and accountability humans should maintain over generative systems. We also consider whether machines should be held to the same ethical standards as humans and contemplate the role of the free market in shaping societal outcomes.
The chapter concludes with an examination of how data is monetized through ad markets, its role in perpetuating bias, and the need to define personal data. Through these discussions, we hope to provide a foundation for deeper exploration and understanding of the complex issues surrounding data privacy, decision-making, and regulation.
This chapter delves into the complexities and challenges of data science, emphasizing the potential pitfalls and ethical considerations inherent in decision-making based on data. It explores the intricate nature of data, which can be multifaceted, noisy, temporally and spatially disjointed, and often a result of the interplay among numerous interconnected components. This complexity poses significant difficulties in drawing causal inferences and making informed decisions.
A central theme of the chapter is the compromise of privacy that individuals may face in the quest for data-driven insights, which raises ethical concerns regarding the use of personal data. The discussion extends to the concept of algorithmic fairness, particularly in the context of racial bias, shedding light on the need for mitigating biases in data-driven decision-making processes.
Through a series of examples, the chapter illustrates the challenges and potential pitfalls associated with data science, underscoring the importance of robust methodologies and ethical considerations. It concludes with a thought-provoking examination of income inequality as a controversial example of data science in practice. The example highlights the nuanced interplay between data, decisions, and societal impacts.
The previous chapters present a linear world in which each channel endpoint is possessed by exactly one thread. Linearity allows precise control of interference, which can be useful. If two processes engage in a series of consecutive message exchanges, then we can be sure that messages are not sent to or received from foreign processes. Although linearity is often convenient, it can also be detrimental in concurrent systems. Purely linear systems cannot describe e-commerce interactions where a server (or a pool of servers) is expected to serve a much larger number of clients. Linear systems also fail to describe an auction, where, at the end, only one bidder gets the item for sale. This chapter introduces a type discipline that provides for channels that can be shared, and consequently for processes that compete for resources, thus creating race conditions and nondeterminism. We do this in such a way that we do not need to revisit process formation or process reduction, and process typing only requires mild changes. We define the language and type system as extensions of those in Chapter 3.
This chapter asserts that the evolution of AI over the past seven decades has been closely intertwined with advancements in computational power. It identifies four key computing developments – mainframes, personal computers, wireless communication and the internet, and embedded systems – that have significantly influenced the field of data science and AI.
Starting from the early concepts of Turing machines, the chapter traces the parallel evolution of AI through milestones such as the invention of the perceptron, the development of machine learning techniques, and the current state of AI systems. It highlights key moments in AI history, from the first computer to play checkers to the algorithmic triumph of Deep Blue over a chess champion, as well as the recent achievements of AlphaGo.
By placing these advances in the context of broader computing history, the chapter argues that contemporary AI capabilities are the culmination of deliberate and iterative technological progress. It concludes by examining the profound impact of computing and AI on political institutions, citing examples such as the Arab Spring and the Cambridge Analytica scandal.
This chapter introduces Data, Systems, and Society (DSS), a new transdiscipline bridging statistics, information and decision systems, and social and institutional behavior. It emphasizes the value of transdisciplinarity over multidisciplinarity and interdisciplinarity and advocates for integrating DSS across domains where data and systems are pivotal (e.g., engineering, sciences, social sciences, and management). The chapter concludes by illustrating how DSS training has been instrumental in tackling different facets of the COVID-19 pandemic, including testing, vaccination strategies, and evaluating regional policies.
Machine-learning (ML) methods have shown great potential for weather downscaling. These data-driven approaches provide a more efficient alternative for producing high-resolution weather datasets and forecasts compared to physics-based numerical simulations. Neural operators, which learn solution operators for a family of partial differential equations, have shown great success in scientific ML applications involving physics-driven datasets. Neural operators are grid-resolution-invariant and are often evaluated on higher grid resolutions than they are trained on, i.e., zero-shot super-resolution. Given their promising zero-shot super-resolution performance on dynamical systems emulation, we present a critical investigation of their zero-shot weather downscaling capabilities, in which models are tasked with producing high-resolution outputs using larger upsampling factors than those seen during training. To this end, we create two realistic downscaling experiments with challenging upsampling factors (e.g., 8x and 15x) across data from different simulations: the European Centre for Medium-Range Weather Forecasts Reanalysis version 5 (ERA5) and the Wind Integration National Dataset Toolkit. While neural operator-based downscaling models perform better than interpolation and a simple convolutional baseline, we show the surprising performance of an approach that combines a powerful transformer-based model with parameter-free interpolation at zero-shot weather downscaling. We find that this Swin-Transformer-based approach mostly outperforms models with neural operator layers in terms of average error metrics, whereas an Enhanced Super-Resolution Generative Adversarial Network-based approach is better than most models in terms of capturing the physics of the ground-truth data. We suggest these approaches as strong baselines for future work.
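The combination described in this abstract, a learned super-resolution model bridged to a larger upsampling factor by parameter-free interpolation, can be sketched as follows. This is only a schematic under stated assumptions: the function names are hypothetical, nearest-neighbour upsampling stands in for the interpolation scheme, and `learned_sr` is a placeholder for the trained Swin-Transformer model:

```python
import numpy as np

def nearest_upsample(field, factor):
    """Parameter-free nearest-neighbour upsampling of a 2D field."""
    return np.kron(field, np.ones((factor, factor)))

def combined_downscaler(learned_sr, coarse, train_factor, eval_factor):
    """Zero-shot setting: learned_sr was trained only at train_factor,
    but eval_factor > train_factor is requested at test time. The learned
    model handles its training factor; interpolation bridges the rest."""
    assert eval_factor > train_factor and eval_factor % train_factor == 0
    mid = learned_sr(coarse)                       # coarse -> train_factor x
    return nearest_upsample(mid, eval_factor // train_factor)
```

The appeal of this arrangement is that the learned component is never asked to extrapolate beyond the resolution gap it was trained on, which is one plausible reading of why it compares favourably with neural-operator layers evaluated zero-shot.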