AI is evolving rapidly and is poised to have far-reaching societal and global impacts, including in the military domain. AI offers cognitive reasoning and learning about problem domains – processing large quantities of data to develop situational awareness, generate solution goals, recommend courses of action, and provide robotic systems with the means for sense-making, guidance, action, and autonomy. This chapter explores metacognition – an emerging and potentially revolutionary technology that enables AI to become self-aware, that is, to think and reason about its own cognition. The chapter examines metacognition applications in the military domain, focusing on four areas: (1) improving human interaction with AI systems, (2) ensuring safe and ethical AI behavior, (3) enabling autonomous systems, and (4) improving automated decision aids. The chapter begins with an overview of foundational AI and metacognition concepts, followed by a discussion of the potential contribution of metacognition to improving military operations. The chapter concludes with speculations concerning the more distant future of metacognition and its implications for AI systems and warfare.
Understanding the intricate evolution of urban territories requires combining urban data from different or concurrent instants of time to provide stakeholders with more complete views of a city's possible evolutions. Geospatial rules have been proposed in the past to validate 3D semantic city models; however, there is a lack of research on validating multiple, concurrent and successive, scenarios of urban evolution. Using Semantic Web ontologies and logical rules, we present a novel standards-based methodology for validating integrated city models. Using this methodology, we propose interoperable rules for validating integrated open 3D city snapshots that represent multiple scenarios of evolution. We also implement a reproducible proof-of-concept test suite for applying the proposed rules. To illustrate how these contributions can be used in a real-world data validation use case, we provide example queries on the validated data. These queries are used to construct a 3D web application for visualizing and analysing urban changes across multiple scenarios of evolution of a selected zone of interest.
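To make the idea of validation rules over successive city snapshots concrete, the sketch below checks two illustrative integrity rules in plain Python. This is a simplified stand-in: the abstract's actual rules operate on Semantic Web ontologies and 3D city models, and the `Snapshot` class, rule choices, and building identifiers here are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Snapshot:
    scenario: str  # evolution scenario this snapshot belongs to
    time: int      # reference year of the snapshot
    buildings: set = field(default_factory=set)  # building identifiers

def validate_scenario(snapshots):
    """Check two example integrity rules over one scenario's snapshots:
    1. snapshots must carry distinct timestamps;
    2. a building removed in one snapshot must not silently reappear in a
       later snapshot of the same scenario."""
    errors = []
    ordered = sorted(snapshots, key=lambda s: s.time)
    for earlier, later in zip(ordered, ordered[1:]):
        if later.time == earlier.time:
            errors.append(f"duplicate timestamp {later.time}")
    seen_removed = set()
    for earlier, later in zip(ordered, ordered[1:]):
        removed = earlier.buildings - later.buildings
        reappeared = seen_removed & later.buildings
        for b in sorted(reappeared):
            errors.append(f"building {b} reappears at {later.time}")
        seen_removed |= removed
    return errors

# Example: building "B" is demolished by 2030 but reappears in 2040,
# which the second rule flags as an inconsistency.
issues = validate_scenario([
    Snapshot("S1", 2020, {"A", "B"}),
    Snapshot("S1", 2030, {"A"}),
    Snapshot("S1", 2040, {"A", "B"}),
])
```

In an actual Semantic Web setting, rules of this shape would instead be expressed as SPARQL queries or SHACL constraints over the integrated graph.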
We study the performance of a commercially available large language model (LLM), ChatGPT, on math word problems (MWPs) from the DRAW-1K dataset. To our knowledge, this is the first independent evaluation of ChatGPT. We found that ChatGPT's performance changes dramatically depending on whether it is required to show its work: it fails $20\%$ of the time when it provides work, compared with $84\%$ when it does not. Further, several features of MWPs, such as the number of unknowns and the number of operations, are associated with a higher probability of failure relative to the prior; in particular, across all experiments, the probability of failure increases linearly with the number of addition and subtraction operations. We have also released the dataset of ChatGPT's responses to the MWPs to support further work on characterizing LLM performance, and we present baseline machine learning models that predict whether ChatGPT can correctly answer an MWP.
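The reported linear relationship between failure probability and operation count suggests a simple baseline of the kind the abstract mentions. The sketch below fits a one-variable linear probability model by ordinary least squares; the data points are invented for illustration and do not come from DRAW-1K.

```python
def linear_probability_fit(xs, ys):
    """Ordinary least squares for y ≈ a + b*x, with y a 0/1 failure flag."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    b = cov / var          # slope: change in failure probability per op
    a = mean_y - b * mean_x  # intercept: baseline failure probability
    return a, b

# Hypothetical per-problem data: number of +/- operations, and whether
# the model failed (1) or succeeded (0) on that problem.
ops    = [0, 0, 1, 1, 2, 2, 3, 3]
failed = [0, 0, 0, 1, 0, 1, 1, 1]
intercept, slope = linear_probability_fit(ops, failed)
assert slope > 0  # failure probability rises with +/- operation count
```

A logistic model would be the more standard choice for binary outcomes; the linear fit is used here only because it directly mirrors the "increases linearly" finding.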
This chapter examines conservative attacks on social media and their validity. Conservatives have long accused the major social media platforms of left-leaning bias, claiming that platform content moderation policies unfairly target conservative content for blocking, labeling, and deamplification. They point in particular to events during the COVID-19 lockdowns, as well as President Trump’s deplatforming, as proof of such bias. In 2021, these accusations led both Florida and Texas to adopt laws regulating platform content moderation in order to combat the alleged bias. But a closer examination of the evidence raises serious doubts about whether such bias actually exists. An equally plausible explanation for why conservatives perceive bias is that social media content moderation policies, in particular against medical disinformation and hate speech, are more likely to affect conservative content than other content. For this reason, claims of platform bias remain unproven. Furthermore, modern conservative attacks on social media are strikingly inconsistent with the general conservative preference not to interfere with private businesses.
Biomechanical intervention on lower limb joints using exoskeletons to reduce joint loads and provide walking assistance has become a research hotspot in the fields of rehabilitation and elderly care. To address the challenges of human-exoskeleton (H-E) kinematic compatibility and knee joint unloading demands, this study proposes a novel rhombus linkage exoskeleton mechanism capable of adaptive knee motion without requiring precise alignment with the human knee axis. The exoskeleton is driven by a Bowden cable system to provide thigh support, thereby achieving effective knee joint unloading. Based on screw theory, the degrees of freedom (DOF) of the exoskeleton mechanism (DOF = 3) and the H-E closed-loop mechanism (DOF = 1) were analyzed, and the kinematic model of the exoskeleton and the H-E closed-loop kinematic model were established, respectively. A mechanical model of the driving system was developed, and a simulation was conducted to validate the accuracy of the model. The output characteristics of the cable-driven system were investigated under varying bending angles and bending times. A prototype was fabricated and tested in wearable scenarios. The experimental results demonstrate that the exoskeleton system exhibits excellent biocompatibility and weight-bearing support capability. Compatibility tests confirm that the exoskeleton does not interfere with human motion. Through human-in-the-loop optimization, the optimal Bowden cable output force profile was obtained, which minimizes gait impact while achieving a peak support force of 195.8 N. Further validation from wear trials with five subjects confirms the system’s low interference with natural human motion (maximum lower-limb joint angle deviation of only $8^\circ$).
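The chapter's DOF analysis is screw-theoretic, but the flavor of a mobility count can be illustrated with the simpler planar Grübler–Kutzbach criterion. The link and joint counts below are hypothetical, chosen only to mirror the reported DOF values (3 for the exoskeleton alone, 1 for the H-E closed loop); they are not the chapter's actual mechanism topology.

```python
def grubler_planar(n_links, j_lower, j_higher=0):
    """Planar Grübler–Kutzbach mobility: M = 3(n - 1) - 2*j1 - j2,
    where n_links counts all links including the ground/frame, j_lower
    counts 1-DOF joints (revolute/prismatic), j_higher counts 2-DOF joints."""
    return 3 * (n_links - 1) - 2 * j_lower - j_higher

# Sanity check: a classic planar four-bar (4 links incl. frame, 4 revolute
# joints) has a single degree of freedom.
assert grubler_planar(4, 4) == 1

# Hypothetical counts mirroring the chapter's reported values: closing the
# kinematic chain through the wearer's limb adds constraints and reduces
# the mobility of the combined H-E mechanism.
assert grubler_planar(6, 6) == 3    # stand-alone mechanism: 3 DOF
assert grubler_planar(8, 10) == 1   # closed loop through the limb: 1 DOF
```

Screw theory is needed in the actual analysis precisely because Grübler-style counting can miss special geometries (such as parallel or intersecting joint axes) that change the true mobility.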
Section 230 of the Communications Decency Act is often called "The Twenty-Six Words That Created the Internet." This 1996 law grants platforms broad legal immunity against claims arising from both third-party content that they host and good-faith content moderation decisions that they make. Most observers agree that without Section 230 immunity, or some variant of it, the modern internet and social media could not exist. Nonetheless, Section 230 has been subject to vociferous criticism, with both Presidents Biden and Trump having called for its repeal. Critics claim that Section 230 lets platforms have it both ways, leaving them free to host harmful content but also to block any content they object to. This chapter argues that criticisms of Section 230 are largely unwarranted. The diversity of the modern internet, and the ability of ordinary individuals to reach broad audiences on the internet, would be impossible without platform immunity. As such, calls for repeal of or major amendments to Section 230 are deeply unwise. The chapter concludes by pointing to important limits on Section 230 immunity and identifying some narrow amendments to Section 230 that may be warranted.
As Chapter 1 discusses, one of the most consistent conservative critiques of social media platforms is that social media is biased against conservative content. A common policy proposal to address this is to regulate such platforms as common carriers. Doing so would require social media platforms to host, on a nondiscriminatory basis, all legal user content and to permit all users to access platforms on equal terms. While this seems an attractive idea – after all, who could object to nondiscrimination – it is not. For one thing, the Supreme Court has now recognized that social media platforms possess "editorial rights" under the First Amendment to control what content they carry, block, and emphasize in their feeds. So, regulating platforms as common carriers, as Texas and Florida have sought to do, is unconstitutional. It is also a terrible idea. Requiring platforms to carry all content on a nondiscriminatory basis, even if limited to legal content (which would be hard to do), would flood user feeds with such lawful-but-awful content as pornography, hate speech, and terrorist propaganda. This in turn would destroy social media as a usable medium, to the detriment of everyone.
This chapter introduces the concept of metacognition from a cognitive perspective, where it refers to knowledge and mental processes that operate on one’s own cognition. We review different forms of metacognition that involve distinct types of explicit reasoning and automatic processes, as well as various measures and functional benefits. We articulate four conjectures regarding the nature of metacognition in the specific context of the ACT-R cognitive architecture: (1) it involves extracting information about processes in cognitive modules; (2) the information is quantitative and approximate rather than symbolic; (3) the metacognitive information is available in working memory for cognitive processing; and (4) general cognitive processes are sufficient to respond to a situation detected by metacognitive monitoring. We illustrate these principles with examples of past work involving neuro-symbolic models of perception and introspection into declarative models of decision-making. Finally, we situate this approach within the context of theories such as predictive coding and the Common Model of Cognition encompassing other cognitive architectures.
Metacognitive AI is closely connected to certifiable AI and trustworthy AI, two areas focused on equipping AI with trustworthy guarantees in high-stakes domains. This chapter provides a systematic overview, tutorial, and discussion of certified approaches in trustworthy deep learning. The chapter introduces essential terminologies, core methodologies, and representative applications of certified approaches. We believe that certified approaches, as a prerequisite for deploying AI in high-stakes and safety-critical applications, will be an essential tool in metacognitive AI, and we hope that this chapter can inspire readers to further advance the field of certifiable trustworthiness for metacognitive AI.
This chapter presents a metacognitive AI approach via formal verification and repair of neural networks (NNs). We observe that neural network repair is a form of metacognition, in which trained AI systems relearn until their specifications hold. We detail Veritex, a tool for reachability analysis and repair of deep NNs (DNNs). Veritex includes methods for exact and over-approximative reachability analysis of DNNs. The exact methods can compute the exact output reachable domain, as well as the exact unsafe input space that causes safety violations of DNNs. Based on the exact unsafe input–output reachable domain, Veritex can repair unsafe DNNs with respect to multiple safety properties, with negligible performance degradation, by updating the DNN parameters via retraining. Veritex primarily addresses the synthesis of provably safe DNNs, a problem not yet significantly addressed in the literature. Veritex is evaluated on safety verification and DNN repair. Benchmarks for verification include ACAS Xu, and benchmarks for repair include an unsafe ACAS Xu network and an unsafe agent trained by deep reinforcement learning (DRL); in both cases, Veritex modifies the NNs until safety is proven.
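The over-approximative side of reachability analysis can be sketched with interval bound propagation on a tiny ReLU network. This is far simpler than Veritex's exact star-set methods, and the weights and safety threshold below are invented for illustration: an input box is pushed layer by layer, and the resulting output box either certifies the property or is inconclusive.

```python
def interval_affine(lo, hi, W, b):
    """Propagate an input box [lo, hi] through y = W x + b: for each output,
    a positive weight takes the input's lower bound for the lower sum and
    the upper bound for the upper sum; a negative weight swaps them."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        out_lo.append(bias + sum(w * (lo[i] if w >= 0 else hi[i])
                                 for i, w in enumerate(row)))
        out_hi.append(bias + sum(w * (hi[i] if w >= 0 else lo[i])
                                 for i, w in enumerate(row)))
    return out_lo, out_hi

def interval_relu(lo, hi):
    """ReLU is monotone, so it maps a box to a box."""
    return [max(0.0, v) for v in lo], [max(0.0, v) for v in hi]

# Two-layer ReLU network with hypothetical weights; the (hypothetical)
# safety property requires the output to stay below 3.5.
W1, b1 = [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]
W2, b2 = [[1.0, 1.0]], [0.0]

lo, hi = [-1.0, -1.0], [1.0, 1.0]  # input box: x1, x2 in [-1, 1]
lo, hi = interval_relu(*interval_affine(lo, hi, W1, b1))
lo, hi = interval_affine(lo, hi, W2, b2)
safe = hi[0] < 3.5  # certified: the over-approximated output box is safe
```

Because the output box over-approximates the true reachable set, a bound that exceeds the threshold would be inconclusive rather than a proof of violation; exact methods like those in Veritex close that gap, and additionally recover the unsafe input region needed to drive repair.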
In this chapter, we use task failure as a trigger to engage in metacognitive processes. We present a procedure by which an agent may exploit failure in the zero-shot outputs of LLMs as a trigger to investigate alternative solutions to the problem using object interactions and knowledge of object semantics. We additionally propose a method through which knowledge gained from the object interactions can be distilled back into the LLM, and we outline avenues for future research.