While initially conceived for human motion augmentation, wearable robots have gradually evolved into technological aids for motion assistance and rehabilitation, with a growing number of real-world applications in industrial and medical scenarios. Despite the progress made on wearable robotic systems, e.g. robotic prostheses and exoskeletons, several challenges remain in kinematics and actuation solutions, dynamic analysis and control of human-robot systems, neuro-control and human-robot interfaces, and ergonomics and human-in-the-loop optimization. Meanwhile, real-world deployments in industrial and medical scenarios still face difficulties in demonstrating effectiveness.
One way to address the low sample efficiency of reinforcement learning (RL) is to employ human expert demonstrations to speed up the RL process (RL from demonstration, or RLfD). The research so far has focused on demonstrations from a single expert. However, little attention has been given to the case where demonstrations are collected from multiple experts, whose expertise may vary on different aspects of the task. In such scenarios, it is likely that the demonstrations will contain conflicting advice in many parts of the state space. We propose a two-level Q-learning algorithm, in which the RL agent not only learns the policy of deciding on the optimal action but also learns to select the most trustworthy expert according to the current state. Thus, our approach removes the traditional assumption that demonstrations come from a single source and are mostly conflict-free. We evaluate our technique on three different domains, and the results show that the state-of-the-art RLfD baseline fails to converge or performs similarly to conventional Q-learning. In contrast, the performance of our algorithm increases as more experts are involved in the learning process, and the proposed approach handles demonstration conflicts well.
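The two-level idea described in this abstract can be sketched in tabular form: one Q-table over actions and a second Q-table over experts, so the agent learns per state which demonstrator to trust. The class, method names, and update rule below are illustrative assumptions, not the authors' implementation:

```python
import random
from collections import defaultdict

class TwoLevelQLearner:
    """Hypothetical sketch of a two-level Q-learner: a low-level Q-table
    over actions plus a high-level Q-table over experts, so the agent
    learns, state by state, whose advice to trust."""

    def __init__(self, actions, experts, alpha=0.1, gamma=0.99, epsilon=0.1):
        self.actions = actions              # available primitive actions
        self.experts = experts              # expert policies: state -> suggested action
        self.q_action = defaultdict(float)  # Q(s, a): value of taking action a
        self.q_expert = defaultdict(float)  # Q(s, e): trust in expert e at state s
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def select_action(self, state):
        if random.random() < self.epsilon:
            # Explore by following the currently most trusted expert's advice.
            e = max(range(len(self.experts)),
                    key=lambda i: self.q_expert[(state, i)])
            return self.experts[e](state), e
        # Otherwise exploit the agent's own action-value estimates.
        return max(self.actions, key=lambda a: self.q_action[(state, a)]), None

    def update(self, state, action, expert, reward, next_state):
        # Standard Q-learning backup at the action level.
        best_next = max(self.q_action[(next_state, a)] for a in self.actions)
        td = reward + self.gamma * best_next - self.q_action[(state, action)]
        self.q_action[(state, action)] += self.alpha * td
        # If an expert was consulted, credit or discredit that expert too.
        if expert is not None:
            best_next_e = max(self.q_expert[(next_state, i)]
                              for i in range(len(self.experts)))
            td_e = reward + self.gamma * best_next_e - self.q_expert[(state, expert)]
            self.q_expert[(state, expert)] += self.alpha * td_e
```

Because trust is indexed by state, two experts giving conflicting advice in different regions of the state space can each dominate where they are competent, which is the conflict-handling behavior the abstract claims.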
In order to enable robots to interact with humans in a natural way, they need to be able to autonomously learn new tasks. The most natural way for humans to tell another agent, which can be a human or robot, to perform a task is via natural language. Thus, natural human–robot interactions also require robots to understand natural language, i.e. extract the meaning of words and phrases. To do this, words and phrases need to be linked to their corresponding percepts through grounding. Afterward, agents can learn the optimal micro-action patterns to reach the goal states of the desired tasks. Most previous studies investigated only learning of actions or grounding of words, but not both. Additionally, they often used only a small set of tasks as well as very short and unnaturally simplified utterances. In this paper, we introduce a framework that uses reinforcement learning to learn actions for several tasks and cross-situational learning to ground actions, object shapes and colors, and prepositions. The proposed framework is evaluated through a simulated interaction experiment between a human tutor and a robot. The results show that the employed framework can be used for both action learning and grounding.
This paper presents an exercise on theory building to characterise design ideation. It starts by examining how early ideas are defined and evaluated in the literature. An essentialist view is identified that explains the creativity of a final design solution by the creative qualities of early ideas attributed by external judges. Criteria for a theory of ideation that does not rely on the primacy of essence are enumerated. Advanced professional practice is examined to understand evaluation of early ideas ‘in the wild’. Accretion is then introduced as an analogical model to imaginatively drive definitions and conjectures about idea formation in the co-evolution of problem and design spaces. Vignettes from ideation episodes are used to illustrate an accretion theory of ideation. An accretion theory supports new ways to think about ideation as a complex formation process where creative solutions emerge from the synthesis of a multitude of fragmentary and partial ideas – or ‘ideasimals’. An accretion theory of ideation helps to explain the creative value of a final design solution without relying on early ideas having a creative essence, because the creativity of a solution is viewed as emergent rather than present in early versions. An accretion lens is used to suggest new ideation metrics to study the qualities of idea fragments and the process of idea formation. Definitions and relevant assessment regimes for different stages of ideation are discussed. The paper concludes with a discussion on entailments of an accretion theory and next steps for this theory building enterprise.
Despite the potential to lead to enhanced environmental performance, the extent to which eco-design leads to success is still unclear. In order to lay bare the effects of eco-design implementation, this paper focuses on understanding the correlations between specific eco-design principles and success through an exploratory study. A sample of 178 products, characterized in terms of their success levels and implemented eco-design principles, was statistically analyzed. The results indicate a number of positive correlations for principles that tend to favor success (e.g. intensified use and product/service systems) and negative correlations for principles that tend to moderate the chances of success (e.g. minimize packaging). Although the mechanisms that cause this phenomenon should be further investigated, the findings can provide designers with additional recommendations for the selection of eco-design principles.
Engineered system architectures leveraging collaboration among multiple actors across organizational boundaries are envisioned to be more flexible, robust, or efficient than independent alternatives but also carry significant downside risks from the new interdependencies added between constituents. This paper transitions the concept of risk dominance from equilibrium selection in game theory to engineering design as a strategic measure of collective stability for systems of systems. The proposed method characterizes system design as a bi-level problem with two or more asymmetric decision-makers. A measure of risk dominance assesses strategic dynamics with respect to the stability of joint or collaborative architectures relative to independent alternatives, using a novel linearization technique to approximate linear incentives among actors. An illustrative example case for an asymmetric three-player design scenario shows how strategic risk dominance can identify and mitigate architectures with unstable risk-reward dynamics.
Network models, which abstractly are given by lax symmetric monoidal functors, are used to construct operads for modeling and designing complex networks. Many common types of networks can be modeled with simple graphs with edges weighted by a monoid. A feature of the ordinary construction of network models is that it imposes commutativity relations between all edge components. Because of this, it cannot be used to model networks with bounded degree. In this paper, we construct the free network model on a given monoid, which can model networks with bounded degree. To do this, we generalize Green’s graph products of groups to pointed categories which are finitely complete and cocomplete.
Due to the considerable growth of the volume of text documents on the Internet and in digital libraries, manual analysis of these documents is no longer feasible. Having efficient approaches to keyword extraction in order to retrieve the ‘key’ elements of the studied documents is now a necessity. Keyword extraction has been an active research field for many years, covering various applications in Text Mining, Information Retrieval, and Natural Language Processing, and meeting different requirements. However, it is not a unified domain of research. In spite of the existence of many approaches in the field, there is no single approach that effectively extracts keywords from different data sources. This shows the importance of having a comprehensive review, which discusses the complexity of the task and categorizes the main approaches of the field based on the features and methods of extraction that they use. This paper presents a general introduction to the field of keyword/keyphrase extraction. Unlike the existing surveys, different aspects of the problem along with the main challenges in the field are discussed. This mainly includes the unclear definition of ‘keyness’, complexities of targeting proper features for capturing desired keyness properties and selecting efficient extraction methods, and also the evaluation issues. By classifying a broad range of state-of-the-art approaches and analysing the benefits and drawbacks of different features and methods, we provide a clearer picture of them. This review is intended to help readers find their way around all the works related to keyword extraction and guide them in choosing or designing a method that is appropriate for the application they are targeting.
The theory ACUI of an associative, commutative, and idempotent binary function symbol + with unit 0 was one of the first equational theories for which the complexity of testing solvability of unification problems was investigated in detail. In this paper, we investigate two extensions of ACUI. On the one hand, we consider approximate ACUI-unification, where we use appropriate measures to express how close a substitution is to being a unifier. On the other hand, we extend ACUI-unification to ACUIG-unification, that is, unification in equational theories that are obtained from ACUI by adding a finite set G of ground identities. Finally, we combine the two extensions, that is, consider approximate ACUIG-unification. In all cases we are able to determine the exact worst-case complexity of the unification problem.
Our world and the people within it are increasingly interpreted and classified by automated systems. At the same time, automated classifications influence what happens in the physical world. These entanglements change what it means to interact with governance, and shift what elements of our identity are knowable and meaningful. In this cyber-physical world, or 'world state', what is the role for law? Specifically, how should law address the claim that computational systems know us better than we know ourselves? Monitoring Laws traces the history of government profiling from the invention of photography through to emerging applications of computer vision for personality and behavioral analysis. It asks what dimensions of profiling have provoked legal intervention in the past, and what is different about contemporary profiling that requires updating our legal tools. This work should be read by anyone interested in how computation is changing society and governance, and what it is about people that law should protect in a computational world.
Conceptual design is a key stage of product design and has received increasing attention in recent years. However, this stage is characterized by limited information, large uncertainty, and multidisciplinary aspects. Thus, increased workload and time cost are associated with conceptual design information acquisition; it can be difficult to develop novel solutions, and the feasibility of solutions derived from such limited and uncertain information is difficult to guarantee. Genetics-based design (GBD) is an effective approach to developing novel solutions and improving the reuse of knowledge, which is consistent with the goal of the conceptual design process. Product-gene acquisition is the premise and basis of GBD. At present, there are few reported studies in this area; most of the existing works are constrained by the structural aspects of the acquisition process, and there are limited studies on specific implementation techniques. To explore specific implementation technologies for product-gene acquisition, an intelligent acquisition method based on K-means clustering and a mutual information-based feature selection algorithm is proposed in this paper. The product genes defined in this paper are key pieces of product information that determine the nature of the product and influence the conceptual design process. Thus, solutions obtained from them are more feasible than those based on limited and uncertain information. An illustrative example is presented. The results show that the proposed method can achieve intelligent acquisition of product genes to a certain extent. Further, the proposed method will allow designers to quickly search for the corresponding product genes when performing similar functional design tasks.
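As a rough illustration of the kind of pipeline this abstract describes, the sketch below clusters product records with a minimal K-means and then ranks discretized candidate features by mutual information with the cluster labels, keeping the highest-scoring ones as "product genes". All function names, the deterministic initialization, and the toy scoring scheme are simplifying assumptions, not the authors' method:

```python
import math
from collections import Counter

def kmeans(points, k, iters=20):
    """Minimal K-means with a naive deterministic init (first k points);
    production code would use random restarts or k-means++ seeding."""
    centers = [list(p) for p in points[:k]]
    def nearest(p):
        return min(range(k),
                   key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[nearest(p)].append(p)
        # Move each center to the mean of its cluster (keep it if empty).
        centers = [[sum(d) / len(c) for d in zip(*c)] if c else centers[i]
                   for i, c in enumerate(clusters)]
    return [nearest(p) for p in points]

def mutual_information(xs, ys):
    """I(X;Y) in nats between two discrete (discretized) sequences."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log((c / n) / (px[x] * py[y] / n / n))
               for (x, y), c in pxy.items())

def select_genes(feature_columns, labels, top=1):
    """Rank named feature columns by MI with the cluster labels and
    return the top ones as candidate 'product genes'."""
    ranked = sorted(feature_columns,
                    key=lambda f: mutual_information(feature_columns[f], labels),
                    reverse=True)
    return ranked[:top]
```

A feature that tracks the cluster structure scores near log 2 nats for two balanced clusters, while a constant or irrelevant feature scores near zero, so the ranking separates informative product attributes from noise.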
Advances in technologies that were unimaginable a century ago have helped in establishing the current high standards of living. Undoubtedly, the oil and gas industry has played a pivotal role in this respect. Thanks to the advent of the petroleum industry, the use of oil and gas has created new factories and revolutionized industries such as transportation and power generation for more than a century. Liquid fuels have impacted transportation and have made various communities closer. The reliance on liquid and gaseous fuels has affected the lives of every person in the world with the invention of air transportation and personal vehicles.
In the 1990s, British writers began using “transparency” as a portmanteau word to describe that desirable state of organizational management and governance characterized by candor, openness, honesty, clarity, legal compliance, and full disclosure (Handy, 1990). At first, the word didn’t take hold on this side of the Atlantic, perhaps because it was too vague and philosophical for American tastes in managerial buzz words (which tend to run more to the precise and practical).
Given the rapid rate of technological innovation and a desire to be proactive in addressing potential ethical challenges that arise in contexts of innovation, engineers must learn to engage in value-sensitive design – design that is responsive to a broad range of values that are implicated in the research, development, and application of technologies. One widely-used tool is Life Cycle Assessment (LCA). Physical products, as with organisms, have a life cycle, starting with extraction of raw materials, and including refining, transport, manufacturing, use, and finally end-of-life treatment and disposal. LCA is a quantitative modeling framework that can estimate emissions that occur throughout a product’s life cycle, as well as any harmful effects that these emissions have on the environment and/or public health. Importantly, LCA tools allow engineers to evaluate multiple types of environmental and health impacts simultaneously and are not limited to a single endpoint or score. However, LCA is only useful to the extent that its models accurately include the full range of values implicated in the use of a technology, and to the extent that stakeholders, from designers to decisionmakers, understand and are able to communicate these values and how they are assigned. Effective LCA requires good ethical training to understand these values.
There was a time when we were all six-sigma-ing. We did so because Jack Welch had bought into the six-sigma phenomenon and he had created a phenomenally performing General Electric (GE). Then we moved along from good to great to the search for excellence to becoming great by choice to whatever superlative Jim Collins told us was the way to a company that was built to last. Then someone moved our cheese. We had no time for that because we were just one-minute managers. We smoothed earnings, incentivized employees, and created three tiers of employees – including getting rid of the bottom tier of employees, whether they deserved termination or kudos. We all wanted to be part of the Fortune 100, the Fortune Most Admired Companies, even as we were led by Fortune CEOs and CFOs of the year – many of whom ended up doing time.
Modern engineering and technology have allowed us to connect with each other and even to reach the moon. But technology has also polluted vast areas of the planet and empowered surveillance and authoritarian governments with dangerous tools. There are numerous cases where engineers and other stakeholders routinely ask what they are capable of inventing, and what they actually should invent. Nuclear weapons and biotechnology are two examples. But when analyzing the transformations arising from less controversial modern socio-technological tools – like the Internet, smartphones, and connected devices, which augment and define our work and social practices – two very distinct areas of responsibility become apparent. On the one hand, a question arises around the values and practices of the engineers who create the technologies. What values should guide their endeavors and how can society promote good conduct? On the other hand, there are questions regarding the effects of people using these technologies. While engineering and design choices can either promote or hinder commendable social behavior and appropriate use, this chapter will focus on the first question.
As technology becomes more powerful, intelligent, and autonomous, its usage also creates unintended consequences and ethical challenges for a vast array of stakeholders. The ethical implications of technology on society, for example, range from job losses (such as potential loss of truck driver jobs due to automation) to lying and deception about a product that may occur within a technology firm or on user-generated content platforms. The challenges around ethical technology design are so multifaceted that there is an essential need for each stakeholder to accept responsibility. Even policymakers who are charged with providing the appropriate regulatory framework and legislation about technologies have an obligation to learn about the pros and cons of proposed options.