The chapter discusses the critical role of predictive uncertainty and diversity in enhancing the robustness and generalizability of embodied AI and robot learning. It explores the need for robots to efficiently learn and act in the unpredictable physical world by considering diverse scenarios and their consequences. The chapter highlights the importance of distinguishing between evaluative and generative paradigms of uncertainty, emphasizing the need to balance accuracy, uncertainty, and computational complexity in robot models. It examines various sources of uncertainty, including physical and model limitations, partial observability, environment dynamics, and domain shifts. Additionally, it outlines techniques for quantifying uncertainty, such as variance, entropy, and Bayesian methods, and underscores the significance of leveraging uncertainty in decision-making, exploration, and learning robust models. By addressing uncertainty in perception, representation, planning, and control, the chapter aims to improve the reliability and safety of robotic systems in diverse and dynamic environments.
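As a concrete illustration of the quantification techniques the chapter surveys (variance, entropy, and Bayesian methods), the sketch below estimates predictive variance and entropy from a deep ensemble, a common practical stand-in for full Bayesian inference. The function and example values are hypothetical, not taken from the chapter.

```python
import numpy as np

def ensemble_uncertainty(member_probs):
    """Estimate predictive uncertainty from an ensemble of classifiers.

    member_probs: array of shape (n_members, n_classes), each row a
    softmax output for the same input from one ensemble member.
    """
    member_probs = np.asarray(member_probs)
    mean_probs = member_probs.mean(axis=0)   # averaged prediction
    variance = member_probs.var(axis=0)      # per-class disagreement across members
    entropy = -np.sum(mean_probs * np.log(mean_probs + 1e-12))  # total predictive uncertainty
    return mean_probs, variance, entropy

# Example: three ensemble members disagreeing on a 3-class problem.
probs = [[0.7, 0.2, 0.1],
         [0.4, 0.5, 0.1],
         [0.6, 0.3, 0.1]]
mean_p, var, ent = ensemble_uncertainty(probs)
print(mean_p, var, ent)
```

High entropy of the averaged prediction combined with high per-class variance signals epistemic disagreement among members, which is the kind of signal a robot can use to defer, explore, or fall back to a safe behavior.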
Classification of movement trajectories has many applications in transportation, and supervised neural models represent the current state of the art. Recent security applications require this task to be rapidly deployed in environments that may differ from those in which such models were trained and for which there is little training data. We provide a neuro-symbolic, rule-based framework for error detection and correction in these models to support their eventual deployment in security applications.
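As a rough illustration of what rule-based error detection over a neural trajectory classifier can look like (a minimal sketch under invented conditions; the rules, class names, and thresholds below are illustrative assumptions, not the framework described in the chapter):

```python
# Minimal sketch: symbolic rules flag classifier predictions that
# violate physical plausibility constraints for the predicted class.

def max_speed(trajectory):
    """Largest displacement per time step in a list of (t, x, y) points."""
    speeds = []
    for (t0, x0, y0), (t1, x1, y1) in zip(trajectory, trajectory[1:]):
        dt = t1 - t0
        if dt > 0:
            speeds.append(((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / dt)
    return max(speeds, default=0.0)

# Each rule says: if the condition holds, the model's label is suspect.
ERROR_RULES = {
    "pedestrian": lambda traj: max_speed(traj) > 4.0,   # too fast on foot
    "vehicle":    lambda traj: max_speed(traj) < 0.2,   # implausibly slow
}

def flag_prediction(label, trajectory):
    """Return True if a rule flags the classifier's label as a likely error."""
    rule = ERROR_RULES.get(label)
    return rule(trajectory) if rule else False

traj = [(0, 0.0, 0.0), (1, 10.0, 0.0)]      # 10 m/s: too fast for walking
print(flag_prediction("pedestrian", traj))  # -> True
```

Flagged predictions can then be corrected, for example by rerouting them to the next most plausible class or to a human analyst, which is what makes such rules useful in low-data deployment settings.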
The long game of AI aims at developing agents that are progressively more human-like in an ever-growing number of facets. Such agents must be able to explain the causes and effects of events and the attitudes of agents in their world, including their own attitudes. This state of affairs can only be brought about if the agents are endowed with metacognitive abilities. In this chapter, we highlight the importance of metacognition for modeling the phenomenon of trust. Specifically, we present the case for the interdependence of metacognition and mutual trust between members of human-AI teams. We also argue that metacognition based on causality and contentful explanations requires knowledge support that models human semantic and episodic memories, as well as knowledge of language. We illustrate this point with examples from systems developed using the OntoAgent cognitive architecture.
Currently, there is a gap in the literature regarding effective post-deployment interventions for LLMs. Existing methods like few-shot or zero-shot prompting show promise but offer no certainty about post-prompting performance and rely heavily on human expertise for error detection and prompt crafting. Against this backdrop, we trifurcate the challenges of LLM intervention. First, the “black-box” nature of LLMs obscures the source of a malfunction within the multitude of parameters, complicating targeted intervention. Second, rectification typically depends on domain experts to identify errors, hindering scalability and automation. Third, the architectural complexity and sheer size of LLMs render pinpointed intervention an overwhelmingly daunting task.
Here, we call for a novel paradigm for LLM intervention inspired by cognitive science principles. This paradigm aims to equip LLMs with self-awareness in error identification and correction, emulating human cognitive efficiency. It would enable LLMs to form transparent decision-making pathways guided by human-comprehensible concepts, allowing for precise model intervention.
Metacognition is the concept of reasoning about an agent’s own internal processes and was originally introduced in the field of developmental psychology. In this position chapter, we examine the concept of applying metacognition to artificial intelligence (AI). We introduce a framework for understanding metacognitive AI that we call TRAP: transparency, reasoning, adaptation, and perception.
The integration of AI into information systems will affect the way users interface with these systems. This exploration of the interaction and collaboration between humans and AI reveals its potential and challenges, covering issues such as data privacy, credibility of results, misinformation, and search interactions. Later chapters delve into application domains such as healthcare and scientific discovery. In addition to providing new perspectives on and methods for developing AI technology and designing more humane and efficient artificial intelligence systems, the book reveals the shortcomings of artificial intelligence technologies through case studies and puts forward corresponding countermeasures and suggestions. This book is ideal for researchers, students, and industry practitioners interested in enhancing human-centered AI systems, and it offers insights for future research.
This groundbreaking volume is designed to meet the burgeoning needs of the research community and industry. It delves into the critical aspects of AI's self-assessment and decision-making processes, addressing the imperative for safe and reliable AI systems in high-stakes domains such as autonomous driving, aerospace, manufacturing, and military applications. Featuring contributions from leading experts, the book provides comprehensive insights into the integration of metacognition within AI architectures, bridging symbolic reasoning with neural networks, and evaluating learning agents' competency. Key chapters explore assured machine learning, handling AI failures through metacognitive strategies, and practical applications across various sectors. Covering theoretical foundations and offering numerous practical examples, this volume serves as an invaluable resource for researchers, educators, and industry professionals interested in fostering transparency and enhancing the reliability of AI systems.
After acquiring sufficient vocabulary in a foreign language, learners start understanding parts of conversations in that language. Speaking, in contrast, is a harder task. Forming grammatical sentences requires choosing the right tenses and following syntax rules. Every beginner EFL speaker makes grammar errors – and the types of errors they make can reveal hints about their native language. For instance, Russian speakers tend to omit the determiner “the” because Russian doesn’t use such modifying words. One linguistic phenomenon that is actually easier in English than in many other languages is grammatical gender. English doesn’t assign gender to inanimate nouns such as “table” or “cup.” A few years ago, the differences in grammatical gender between languages helped reveal societal gender bias in automatic translation: translation systems that were shown gender-neutral statements in Turkish about doctors and nurses assumed that the doctor was male while the nurse was female.
At what time does the afternoon start, at 1 p.m. or 3 p.m.? Language understanding requires the ability to correctly match statements to their real-world meaning. This mapping process is a function of the context, which includes factors such as location and time as well as the speaker’s and listeners’ backgrounds. For example, an utterance like “It is hot today” would mean different things were it expressed in Death Valley versus Alaska. Based on their backgrounds and experiences, people interpret time expressions, color descriptions, geographic expressions, qualities, relative expressions, and more in different ways. This ability to map language to real-world meaning is also required of the language technology tools we use. For example, translating a recipe that contains instructions to “preheat the oven to 180 degrees” requires a translation system to understand the implicit scale (e.g. Celsius versus Fahrenheit) based on the source language and the user’s location. To date, no automatic translation system can do this, and there is little “grounding” in any widely used language technology tool.
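To make the oven example concrete, here is a toy heuristic (an illustrative sketch; the language-to-scale mapping and function name are assumptions for this sketch, not a description of any deployed system) that guesses the implicit temperature scale from the source language and converts for the reader's locale:

```python
# Toy illustration of grounding an implicit unit: guess the temperature
# scale from the source language, then convert for a US reader. The
# language-to-scale mapping below is a simplifying assumption.

CELSIUS_LANGS = {"fr", "de", "he", "es", "it"}  # languages whose recipes
                                                # typically use Celsius

def ground_oven_temp(value, source_lang, target_locale="en-US"):
    """Return the temperature string a target-locale reader expects."""
    if source_lang in CELSIUS_LANGS and target_locale == "en-US":
        fahrenheit = value * 9 / 5 + 32
        return f"{round(fahrenheit)} degrees Fahrenheit"
    return f"{value} degrees"

print(ground_oven_temp(180, "fr"))  # -> "356 degrees Fahrenheit"
```

Even this trivial case shows why grounding is hard: the right behavior depends on context (source language, reader location) that sits entirely outside the sentence being translated.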
Non-compositional phrases such as “by and large” are expressions whose meaning cannot be unlocked by simply translating the combination of words that constitute them. In particular, figurative expressions – such as idioms, similes, and metaphors – are ubiquitous in English. Among other reasons, figurative expressions are acquired late in the language learning journey because they often capture cultural conventions and social norms associated with the people speaking the language. Figurative expressions are especially prevalent in creative writing, acting as the spice that adds flavor to the text. Artificial intelligence (AI) writing assistants such as ChatGPT are now capable of editing raw drafts into well-written pieces, to the advantage of native and non-native speakers alike. These AI tools, which have gained their writing skills from exposure to vast amounts of online text, are extremely adept at generating text similar to what they have been exposed to. Unfortunately, they have demonstrated shortcomings in creative writing that requires deviating from the norm.