We investigate the incorporation of metacognitive capabilities into Machine Learning Integrated with Network (MLIN) systems and develop a machine Learning Integrated with Knowledge (mLINK) stratum. This stratum integrates knowledge obtained from multiple MLIN elements and reflects on ML application performance outcomes in order to provide feedback on metacognitive actions aimed at ensuring performance and improving ML application robustness to Data Quality (DQ) variations. We discuss multiple use cases to show how knowledge of the interrelationships between MLIN components, DQ, and ML application performance can be generated and employed by mLINK. We elaborate on how this knowledge is integrated into mLINK to produce metaknowledge, expressed as recommendations on the adaptation actions or strategies needed. We define the process by which mLINK employs these recommendations as metacognition and describe several examples of applying these metacognitive strategies in practice, such as optimizing data collection; reflecting on DQ; DQ assurance; enhanced transfer learning; and Federated Learning for enhancing security, privacy, collaboration, and communication in MLIN.
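As a concrete illustration of this feedback loop, the following minimal Python sketch maps simple data-quality observations and performance outcomes to adaptation recommendations. All names (`DQReport`, `recommend_action`) and thresholds are illustrative assumptions rather than the chapter's actual interfaces.

```python
# Minimal, hypothetical sketch of an mLINK-style metacognitive loop.
# Metric names, thresholds, and action labels are illustrative assumptions.
from dataclasses import dataclass
import numpy as np

@dataclass
class DQReport:
    missing_rate: float  # fraction of missing values in the batch
    drift_score: float   # mean feature shift relative to a reference

def assess_dq(batch, reference_means):
    """Compute simple data-quality metrics for one batch of features."""
    x = np.asarray(batch, dtype=float)
    missing = float(np.isnan(x).mean())
    drift = float(np.nanmean(np.abs(np.nanmean(x, axis=0) - reference_means)))
    return DQReport(missing_rate=missing, drift_score=drift)

def recommend_action(dq: DQReport, accuracy: float) -> str:
    """Metaknowledge: map (DQ, performance) observations to an adaptation."""
    if dq.missing_rate > 0.2:
        return "trigger-dq-assurance"       # impute or re-collect data
    if dq.drift_score > 1.0 and accuracy < 0.8:
        return "trigger-transfer-learning"  # adapt to the shifted distribution
    if accuracy < 0.6:
        return "trigger-retraining"
    return "no-action"

report = assess_dq([[1.0, np.nan], [1.2, 0.9]], reference_means=[1.1, 1.0])
print(recommend_action(report, accuracy=0.85))  # -> trigger-dq-assurance
```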
To enhance understanding and collaboration with autonomous agents, it is crucial to construct a representation of their task strategies that integrates interpretability, monitoring, and formal reasoning. This dual-purpose representation fosters human comprehension and enables automated analytical processes. We achieve this balance by formalizing task strategies through temporal logic formulas. Recent trends emphasize inferring temporal logic formulas from data to explain system behaviors and assess autonomous agents’ competencies. Our methodology relies on positive and negative examples from system observations to construct a concise temporal logic formula consistent with the data. However, existing approaches often overlook real-world data’s noise and uncertainties, limiting practical deployment. Addressing this, we analyze labeled trajectories and aim to infer interpretable formulas that minimize misclassification loss. To tackle data uncertainties, we focus on labeled interval trajectories. Our algorithm maximizes the worst-case robustness margin, enhancing formula robustness and ensuring the adaptability and reliability of temporal logic inference in real-world applications.
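As a minimal illustration of the approach, the sketch below infers the threshold of a one-parameter template G(x <= c) from labeled interval trajectories by maximizing the worst-case robustness margin over a candidate grid; the fixed template and the grid search are deliberate simplifications of the general inference algorithm.

```python
# Hedged sketch: threshold inference for G(x <= c) from labeled interval
# trajectories, maximizing the worst-case robustness margin.
import numpy as np

def robustness_G_le(traj_lo, traj_hi, c, positive):
    """Worst-case robustness of G(x <= c) over an interval trajectory.

    A satisfying (positive) trajectory is scored against its upper
    envelope; a violating (negative) one against its lower envelope.
    """
    if positive:
        return float(np.min(c - np.asarray(traj_hi)))  # must hold at every step
    return float(np.max(np.asarray(traj_lo) - c))      # must fail at some step

def best_threshold(examples, candidates):
    """Pick c maximizing the minimum (worst-case) margin over all examples."""
    def margin(c):
        return min(robustness_G_le(lo, hi, c, lab) for lo, hi, lab in examples)
    return max(candidates, key=margin)

# Toy data: positive runs stay below ~1.0; negative runs exceed it.
pos = ([0.1, 0.3], [0.4, 0.6], True)   # (lower envelope, upper envelope, label)
neg = ([1.2, 1.5], [1.6, 1.9], False)
c_star = best_threshold([pos, neg], candidates=np.linspace(0.0, 2.0, 81))
print(round(float(c_star), 3))         # ~1.05, midway between the envelopes
```

Scoring positives against their upper envelopes and negatives against their lower envelopes means the selected threshold retains the largest possible margin against the interval uncertainty in the data.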
One of the central aspects of metacognitive AI is the AI agent’s ability to reason about its own behavior. In particular, for AI systems to be deployed in real-world applications with high impact, it is crucial that we can reason about and guarantee their fairness and robustness. Here, we provide a probabilistic reasoning framework to audit and enforce fairness of automated decision-making systems, using classifiers as the main example, while being robust to uncertainties and noise in the distribution.
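As a minimal sketch of such an audit, the code below checks demographic parity of a classifier's decisions and hedges the estimate against distributional noise by assuming every distribution within total-variation distance `eps` of the empirical one is plausible; this perturbation model is our illustrative assumption, not necessarily the chapter's exact framework.

```python
# Hedged sketch of a distribution-robust demographic-parity audit.
# The eps-ball perturbation model is an illustrative assumption.
import numpy as np

def demographic_parity_gap(preds, groups):
    """|P(Y_hat=1 | A=0) - P(Y_hat=1 | A=1)| from audit samples."""
    preds, groups = np.asarray(preds), np.asarray(groups)
    rate_a = preds[groups == 0].mean()
    rate_b = preds[groups == 1].mean()
    return abs(rate_a - rate_b)

def robust_audit(preds, groups, delta=0.1, eps=0.02):
    """Certify the gap stays below delta for every distribution within
    total-variation distance eps of the empirical one: each group's
    acceptance rate can move by at most eps, so the gap by 2 * eps."""
    worst_case_gap = demographic_parity_gap(preds, groups) + 2 * eps
    return worst_case_gap, worst_case_gap <= delta

preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(robust_audit(preds, groups))  # (gap + noise allowance, pass/fail)
```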
This brief conclusion summarizes the main thesis of the book, noting that both conservative and progressive critiques of social media lack strong empirical justifications, and that many if not most of the regulatory proposals directed at social media are not only likely to be found unconstitutional, but are also wrong-headed. It then argues that it is time we all accept that the old, pre-social media world of gatekeepers is over; and further, that this development has important, positive implications for the democratization of public discourse in ways that free speech theory supports. Finally, the Conclusion analogizes the modern hysteria over the growth of social media to earlier panics over changes in communications technology, such as the inventions of the printing press and of moving pictures. As with those earlier panics, this one too is overblown and ignores the positive potential impacts of technological change.
Critics from across the political spectrum attack social media platforms for invading personal privacy. Social media firms famously suck in huge amounts of information about individuals who use their services (and sometimes others as well), and then monetize this data, primarily by selling targeted advertising. Many privacy advocates object to the very collection and use of this personal data by platforms, even if not shared with third parties. In addition, there is the ongoing (and reasonable) concern that the very existence of Big Data creates a risk of leaks. Further, aside from the problem of Big Data, the very existence of social media enables private individuals to invade the privacy of others by widely disseminating personal information. That social media firms’ business practices compromise privacy cannot be seriously doubted. But it is also true that Big Data lies at the heart of social media firms’ business models, permitting them to provide users with free services in exchange for data which they can monetize via targeted advertising. So unless regulators want to take free services away, they must tread cautiously in regulating privacy.
By integrating hard constraints into neural network outputs, we not only improve the reliability of AI systems but also pave the way for meta-cognitive capabilities that ensure the alignment of predictions with domain-specific knowledge.
This topic has received a lot of attention; however, existing methods either impose the constraints in a “weak” form at training time, with no guarantees at inference time, or fail to provide a general framework that supports different tasks and constraint types.
We tackle this open problem from a neuro-symbolic perspective, developing a pipeline that enhances a conventional neural predictor with two components: a symbolic reasoning module capable of correcting structured prediction errors, and a neural attention module that learns to direct the reasoning effort toward potential prediction errors while keeping other outputs unchanged.
This framework offers an appealing trade-off between the efficiency of constraint-free neural inference and the prohibitive cost of exhaustive reasoning at inference time, while satisfying the rigorous demands of meta-cognitive assurance.
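The control flow of such a pipeline can be sketched on a toy multi-label task with a single implication constraint (“car” implies “vehicle”); the attention scoring and the repair rule below are illustrative stand-ins for the learned attention module and the symbolic reasoner described above.

```python
# Hedged sketch of neuro-symbolic correction: threshold neural outputs,
# flag uncertain ones, and repair constraint violations only there.
import numpy as np

CAR, VEHICLE = 0, 1  # hypothetical label indices

def violates(y):
    """Hard constraint: predicting 'car' without 'vehicle' is inconsistent."""
    return y[CAR] == 1 and y[VEHICLE] == 0

def attention_scores(probs):
    """Stand-in for the neural attention module: outputs near the 0.5
    decision boundary are treated as the most likely violation sources."""
    return 1.0 - 2.0 * np.abs(np.asarray(probs) - 0.5)

def correct(probs, threshold=0.6):
    """Threshold, then run symbolic repair only on flagged outputs."""
    y = (np.asarray(probs) >= 0.5).astype(int)
    flagged = attention_scores(probs) >= threshold
    if violates(y):
        # minimal repair restricted to flagged coordinates
        if flagged[VEHICLE]:
            y[VEHICLE] = 1   # promote the implied label
        elif flagged[CAR]:
            y[CAR] = 0       # retract the implying label
    return y

print(correct([0.9, 0.45]))  # 'car' confident, 'vehicle' uncertain -> [1 1]
```

Only outputs near the decision boundary are eligible for repair, which mirrors the intended trade-off: confident, consistent predictions pass through untouched, and reasoning effort is spent where violations are likely.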
Text-to-image (T2I) diffusion models require large-scale training data to achieve their impressive performance. Still, they seem to lack a common understanding of semantics such as spatial composition, and they exhibit spurious correlations that raise ethical concerns. Increasing data and model size does not yield better semantics; instead, scaling seems to hurt the model. Recent works have shown the few-shot concept-learning abilities of T2I models on simple concepts like cat or dog. Following this line of research, this chapter introduces the use of Concept Algebra for learning new concepts in a resource-efficient way.
To do so, we present three works on concept learning that demonstrate its effectiveness: (1) creating a benchmark for large-scale evaluation of concept-learning methodologies, (2) reducing ethical biases through few-shot concept learning with Concept Algebra, and (3) learning spatial relationships via few-shot concept adaptation. Through this research, we describe the efforts to create few-shot synthetic data that is robust and reduces the biases present in various forms.
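To make the underlying operation concrete, the sketch below applies the linear-projection idea commonly associated with Concept Algebra: assuming a concept occupies an approximately linear subspace of a representation, its component can be swapped out while the rest of the embedding is preserved. The embeddings and subspace here are dummy values, not outputs of a real T2I model.

```python
# Hedged sketch of concept editing by subspace projection.
# Dimensions, basis, and embeddings are illustrative dummies.
import numpy as np

def project_onto(v, basis):
    """Orthogonal projection of v onto the subspace spanned by `basis`
    (rows assumed linearly independent)."""
    B = np.asarray(basis, dtype=float)
    coeffs = np.linalg.lstsq(B.T, v, rcond=None)[0]
    return B.T @ coeffs

def edit_concept(embedding, concept_basis, target_in_concept):
    """Replace the embedding's component in the concept subspace with the
    target concept's component, leaving the complement unchanged."""
    e = np.asarray(embedding, dtype=float)
    return e - project_onto(e, concept_basis) + target_in_concept

# Toy usage in R^3: the concept subspace is the first axis.
basis = [[1.0, 0.0, 0.0]]
e = np.array([0.8, 0.2, -0.1])         # dummy prompt embedding
target = np.array([-0.5, 0.0, 0.0])    # desired concept component
print(edit_concept(e, basis, target))  # -> [-0.5  0.2 -0.1]
```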
AI systems have struggled to be deployed in safety-critical applications where the consequences of incorrect predictions are severe. In complex applications and environments, like autonomous driving, it is often impossible or impractical to curate a dataset or simulator that sufficiently spans the entire input space, making it improbable that a perfect agent can be trained offline. Metacognitive AI represents an approach to design agents that continue safely learning and adapting as they encounter new or uncertain scenarios in the environment, which improves their performance over time. A key component to achieve this behavior is quantifying the AI agent’s prediction uncertainty to enable the agent to understand when it is operating in a previously unseen scenario. In this chapter, we discuss a framework for creating a metacognitive agent and delve deeper into Meta Modeling, a method for augmenting existing neural networks with uncertainty quantification. Our approach provides a first step toward realizing a metacognitive AI agent.
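As one simple way to realize such uncertainty quantification, the sketch below augments an existing network with Monte Carlo dropout; this is a stand-in for illustration, and the chapter's Meta Modeling method may differ in its specifics. High predictive entropy flags inputs the agent should treat as previously unseen.

```python
# Hedged sketch: MC dropout as a stand-in for augmenting a network with
# uncertainty estimates; architecture and threshold are illustrative.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 3),
)

@torch.no_grad()
def predict_with_uncertainty(x, n_samples=30):
    net.train()  # keep dropout active at inference time
    probs = torch.stack(
        [torch.softmax(net(x), dim=-1) for _ in range(n_samples)]
    )
    mean = probs.mean(dim=0)
    entropy = -(mean * mean.clamp_min(1e-12).log()).sum(dim=-1)
    return mean, entropy

x = torch.randn(1, 16)
mean, entropy = predict_with_uncertainty(x)
if entropy.item() > 0.9:  # hypothetical threshold
    print("unfamiliar input: defer to a safe fallback / trigger adaptation")
```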
The chapter discusses the critical role of predictive uncertainty and diversity in enhancing the robustness and generalizability of embodied AI and robot learning. It explores the need for robots to efficiently learn and act in the unpredictable physical world by considering diverse scenarios and their consequences. The chapter highlights the importance of distinguishing between evaluative and generative paradigms of uncertainty, emphasizing the need to balance accuracy, uncertainty, and computational complexity in robot models. It examines various sources of uncertainty, including physical and model limitations, partial observability, environment dynamics, and domain shifts. Additionally, it outlines techniques for quantifying uncertainty, such as variance, entropy, and Bayesian methods, and underscores the significance of leveraging uncertainty in decision-making, exploration, and learning robust models. By addressing uncertainty in perception, representation, planning, and control, the chapter aims to improve the reliability and safety of robotic systems in diverse and dynamic environments.
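The ensemble-based measures surveyed here can be computed in a few lines; the sketch below decomposes predictive entropy into an aleatoric proxy (expected per-member entropy) and an epistemic proxy (the mutual information between predictions and ensemble members). The ensemble probabilities are dummy values.

```python
# Hedged sketch of ensemble uncertainty decomposition over 4 classes.
import numpy as np

def entropy(p, axis=-1):
    return -(p * np.log(np.clip(p, 1e-12, None))).sum(axis=axis)

def uncertainty_decomposition(member_probs):
    """member_probs: (n_members, n_classes) class probabilities."""
    p = np.asarray(member_probs, dtype=float)
    mean = p.mean(axis=0)
    total = entropy(mean)          # predictive entropy
    aleatoric = entropy(p).mean()  # expected per-member entropy
    epistemic = total - aleatoric  # mutual information (disagreement)
    return total, aleatoric, epistemic

ensemble = [[0.7, 0.1, 0.1, 0.1],  # members disagree -> high epistemic term
            [0.1, 0.7, 0.1, 0.1],
            [0.1, 0.1, 0.7, 0.1]]
print(uncertainty_decomposition(ensemble))
```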
Having argued against most current regulatory reform proposals directed at social media, this final chapter considers some regulatory initiatives worthy of consideration. It begins, however, with a call for caution. The principle of “First, do no harm” in medical ethics is highly relevant here. Social media is too new, and too rapidly evolving, for regulators to be able to confidently predict either the current impact of regulation or its long-term effects, so regulators must act with humility. That said, social media also is not a law-free zone. Long-standing bodies of law, such as antitrust, contract, tort, and even family law, can and should be applied to social media firms in the same way as to other private actors. Furthermore, even Section 230 in its current form should not be sacrosanct, and there is also room to consider granting platform users modest procedural protections against arbitrary content moderation decisions. Finally, there are strong arguments for a federal data privacy law, not directed at social media in particular but certainly applicable to it. In short, social media should not be above the law – but nor should it be the target of lawfare.
In contrast to conservatives, progressives argue that platforms don’t block enough content. In particular, progressive critics point to the prevalence of allegedly harmful content on social media platforms, including politically manipulative content, mis- and disinformation (especially about medical issues), harassment and doxing, and hate speech. They argue that social media algorithms actively promote such content to increase engagement, resulting in many forms of social harm, including greater political polarization. And they argue (along with conservatives) that social media platforms have been especially guilty of permitting materials harmful to children to remain accessible. As with conservative attacks, however, the progressive war on social media is rife with exaggerations and rests on shaky empirical grounds. In particular, there is very little proof that platform algorithms increase political polarization, or even that social media harms children. Moreover, while not all progressive attacks on social media lack a foundation, they are all rooted in an entirely unrealistic expectation that perfect content moderation is possible.