Classification of movement trajectories has many applications in transportation, and supervised neural models represent the current state of the art. Recent security applications, however, require this task to be deployed rapidly in environments that may differ from those on which such models were trained and for which little training data is available. We provide a neuro-symbolic, rule-based framework for detecting and correcting the errors of these models, to support their eventual deployment in security applications.
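A minimal sketch of the rule-based idea, assuming illustrative class names, trajectory features, and thresholds (none taken from the paper): symbolic rules flag neural predictions that violate domain constraints, and a simple fallback picks a class consistent with the rules.

```python
# Hypothetical sketch of rule-based error detection and correction over a
# neural trajectory classifier's outputs. Classes, features, and thresholds
# are illustrative assumptions, not the paper's rules.
from dataclasses import dataclass

@dataclass
class Trajectory:
    mean_speed_kmh: float  # summary features of the trajectory
    max_speed_kmh: float

# Each rule is a symbolic consistency check for a predicted class.
RULES = {
    "pedestrian": lambda t: t.max_speed_kmh <= 15.0,
    "bicycle":    lambda t: t.max_speed_kmh <= 45.0,
    "car":        lambda t: t.mean_speed_kmh >= 5.0,
}

def detect_error(pred_class: str, traj: Trajectory) -> bool:
    """Flag a neural prediction whose symbolic rule is violated."""
    rule = RULES.get(pred_class)
    return rule is not None and not rule(traj)

def correct(pred_class: str, traj: Trajectory) -> str:
    """Fall back to the first class whose rule the trajectory satisfies."""
    if not detect_error(pred_class, traj):
        return pred_class
    for cls, rule in RULES.items():
        if rule(traj):
            return cls
    return pred_class  # no rule fits; keep the neural prediction

# Example: a 'pedestrian' prediction at 60 km/h is flagged and corrected.
t = Trajectory(mean_speed_kmh=40.0, max_speed_kmh=60.0)
print(detect_error("pedestrian", t), correct("pedestrian", t))  # True car
```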
There is growing attention to personalised digital health interventions such as health apps. These often depend on the collection of sensitive personal data, over which users generally have limited control. This work explores perspectives on data sharing and health apps in two different policy contexts, London and Hong Kong. Through this study, our goal is to generate insight into what digital health futures should look like and what needs to be done to achieve them. Using a survey based on a hypothetical health app, we considered a range of behavioural influences on personal health data sharing with the Capability, Opportunity, Motivation model of Behaviour (COM-B) to explore some of the key factors affecting the acceptability of data sharing. Results indicate that willingness to use health apps is influenced by users’ data literacy and control, comfort with sharing health and location data, existing health concerns, access to personalised health advice from a trusted source, and willingness to provide data access to specific parties. Gender is a statistically significant factor, with men more willing to use health apps. Survey respondents in London are statistically more willing to use health apps than respondents in Hong Kong. Finally, we propose several policy approaches to address these factors, including the co-creation of standards for using artificial intelligence (AI) to generate health advice, innovating app design and governance models that allow users to carefully control their data, and addressing concerns about gender-specific privacy risks and public trust in institutions dealing with data.
The long game of AI aims at developing agents that are progressively more human-like in an ever-growing number of facets. Such agents must be able to explain the causes and effects of events and of agents’ attitudes in their world, including their own attitudes. This state of affairs can be brought about only if the agents are endowed with metacognitive abilities. In this chapter, we highlight the importance of metacognition for modeling the phenomenon of trust. Specifically, we present the case for the interdependence of metacognition and mutual trust between members of human-AI teams. We also argue that metacognition based on causality and contentful explanations requires knowledge support: models of human semantic and episodic memory as well as knowledge of language. We illustrate these points with examples from systems developed using the OntoAgent cognitive architecture.
The primary progressive model for curing the perceived ills of social media – the failure to block harmful content – is to encourage or require social media platforms to act as gatekeepers. On this view, the institutional media, such as newspapers, radio, and television, historically ensured that the flow of information to citizens and consumers was “clean,” meaning cleansed of falsehoods and malicious content. This in turn permitted a basic consensus to exist on facts and basic values, something essential for functional democracies. The rise of social media, however, destroyed the ability of institutional media to act as gatekeepers, and so, it is argued, it is incumbent on platforms to step into that role. This chapter argues that this view is misguided. Traditional gatekeepers shared two key characteristics: scarcity and objectivity. Neither, however, characterizes the online world. And in any event, social media platforms lack both the economic incentives and the expertise to be effective gatekeepers of information. Finally, and most fundamentally, the entire model of elite gatekeepers of knowledge is inconsistent with basic First Amendment principles and should be abandoned.
The area in which social media has undoubtedly been most actively regulated is its data and privacy practices. While no serious critic has proposed a flat ban on data collection and use (since that would destroy the algorithms that drive social media), a number of important jurisdictions, including the European Union and California, have imposed significant restrictions on how websites (including social media) collect, process, and disclose data. Some privacy regulations are clearly justified, but insofar as data privacy laws become so strict as to threaten advertising-driven business models, the result will be that social media (and search and many other basic internet features) will stop being free, to the detriment of most users. In addition, privacy laws (and related rules such as the “right to be forgotten”) by definition restrict the flow of information and so burden free expression. Sometimes that burden is justified, but especially when applied to information about public figures, suppressing unfavorable information undermines democracy. The chapter concludes by arguing that one area where stricter regulation is needed is the protection of children’s data.
This brief introduction argues that the current, swirling debates over the ills of social media are largely a reflection of larger forces in our society. Social media is accused of creating political polarization, yet polarization long predates social media and pervades every aspect of our society. Social media is accused of a liberal bias and “wokeness,” but in fact conservative commentators accuse every major institution of our society, including academia, the press, and corporate America, of the same sin. Social media is said to be causing psychological harm to young people, especially young women. But our society’s tendency to impose image-consciousness on girls and young women, and to sexualize girls at ever younger ages, pervades not just social but also mainstream media, the clothing industry, and our culture more generally. And as with polarization, this phenomenon long predates the advent of social media. In short, the supposed ills of social media are in fact the ills of our broader culture. It is just that the pervasiveness of social media makes it the primary mirror in which we see ourselves; and apparently, we do not much like what we see.
Currently, there is a gap in the literature regarding effective post-deployment interventions for LLMs. Existing methods like few-shot or zero-shot prompting show promise but offer no certainty about post-prompting performance and rely heavily on human expertise for error detection and prompt crafting. Against this backdrop, we divide the challenges of LLM intervention into three. First, the “black-box” nature of LLMs obscures the source of a malfunction within the multitude of parameters, complicating targeted intervention. Second, rectification typically depends on domain experts to identify errors, hindering scalability and automation. Third, the architectural complexity and sheer size of LLMs render pinpointed intervention an overwhelmingly daunting task.
Here, we call for a novel paradigm for LLM intervention inspired by cognitive science principles. This paradigm aims to equip LLMs with self-awareness in error identification and correction, emulating human cognitive efficiency. It would enable LLMs to form transparent decision-making pathways guided by human-comprehensible concepts, allowing for precise model intervention.
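As a toy illustration of such a pathway, the sketch below (PyTorch) routes an LLM hidden state through a concept-bottleneck layer whose scores a human can read and overwrite. The concept names, dimensions, and module structure are assumptions made for illustration; they are not an implementation of the paradigm proposed above.

```python
# Minimal concept-bottleneck sketch: decisions flow through a small set of
# human-comprehensible concept scores, so an expert can intervene on one
# concept without touching the LLM's opaque parameters. All names and sizes
# here are hypothetical.
import torch
import torch.nn as nn

CONCEPTS = ["sentiment", "toxicity", "factuality"]  # hypothetical concepts

class ConceptBottleneck(nn.Module):
    def __init__(self, hidden_dim: int = 768):
        super().__init__()
        self.to_concepts = nn.Linear(hidden_dim, len(CONCEPTS))  # h -> c
        self.to_output = nn.Linear(len(CONCEPTS), 2)             # c -> y

    def forward(self, h, intervene=None):
        c = torch.sigmoid(self.to_concepts(h))  # human-readable scores in [0, 1]
        if intervene:  # an expert overwrites a mispredicted concept score
            c = c.clone()
            for name, value in intervene.items():
                c[..., CONCEPTS.index(name)] = value
        return c, self.to_output(c)

model = ConceptBottleneck()
h = torch.randn(1, 768)                           # stand-in LLM hidden state
concepts, logits = model(h)                       # transparent pathway
_, fixed = model(h, intervene={"toxicity": 0.0})  # targeted intervention
```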
The functionality and aesthetics of 3D-printed components can be compromised if visible defects appear on their external surfaces. Traditionally, CNC machines were adopted for the necessary milling operations; more recently, industrial robots have been demonstrated to be a valid alternative. This study presents a robotic workstation developed for the contour machining of 3D thermoplastic components printed using material extrusion technology. The workstation adopts a collaborative robot with a novel, custom-designed, low-cost end-effector consisting of a powered contouring tool integrated with three load cells that measure the cutting forces along three perpendicular directions. Tool path planning follows a procedure proposed and validated in this work: using a vision algorithm and a touch-stop operation, the tool path derived from the 3D CAD model is adapted to the current position and orientation of the workpiece. The experimental activity for determining the optimal set of contour-machining parameters (rotational speed, depth of cut, and feed rate) and for measuring cutting forces confirms the feasibility of the cobot-based solution for this application and suggests potential improvements for future work.
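A minimal sketch of the pose-adaptation step, assuming the vision algorithm supplies an in-plane pose (x, y, yaw) and the touch-stop operation supplies a z offset; the function names and numbers are illustrative, not taken from the study.

```python
# Hypothetical sketch: re-express CAD-frame contour waypoints in the robot
# base frame using the measured workpiece pose. Values are illustrative.
import numpy as np

def pose_to_transform(x: float, y: float, z: float, yaw: float) -> np.ndarray:
    """Homogeneous transform from the CAD frame to the robot base frame."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0, x],
                     [s,  c, 0.0, y],
                     [0.0, 0.0, 1.0, z],
                     [0.0, 0.0, 0.0, 1.0]])

def adapt_path(cad_path: np.ndarray, T: np.ndarray) -> np.ndarray:
    """Map Nx3 CAD-frame waypoints (metres) into the robot base frame."""
    homogeneous = np.hstack([cad_path, np.ones((len(cad_path), 1))])
    return (T @ homogeneous.T).T[:, :3]

# Contour waypoints from the 3D CAD model, then adapted to the workpiece
# pose detected by vision (x, y, yaw) and touch-stop (z).
cad_path = np.array([[0.00, 0.00, 0.01],
                     [0.05, 0.00, 0.01],
                     [0.05, 0.05, 0.01]])
T = pose_to_transform(x=0.40, y=0.10, z=0.02, yaw=np.deg2rad(15))
print(adapt_path(cad_path, T))
```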
Despite their widespread use, purely data-driven methods often suffer from overfitting, a lack of physical consistency, and high data dependency, particularly when physical constraints are not incorporated. This study introduces a novel data assimilation approach that integrates Graph Neural Networks (GNNs) with optimization techniques to enhance the accuracy of mean flow reconstruction, using the Reynolds-averaged Navier–Stokes (RANS) equations as a baseline. The method leverages the adjoint approach, incorporating RANS-derived gradients as optimization terms during GNN training, which ensures that the learned model adheres to physical laws and maintains consistency. Additionally, the GNN framework is well suited to handling unstructured data, which is common in the complex geometries encountered in computational fluid dynamics; the GNN is interfaced with the finite element method for numerical simulations, enabling accurate modeling in unstructured domains. We consider the reconstruction of mean flow past bluff bodies at low Reynolds numbers as a test case, addressing tasks such as sparse data recovery, denoising, and inpainting of missing flow data. The key strength of the approach lies in the integration of physical constraints into the GNN training process, which yields accurate predictions from limited data and makes the method particularly valuable when data are scarce or corrupted. Results demonstrate significant improvements in the accuracy of mean flow reconstruction, even with limited training data, compared with analogous purely data-driven models.
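A minimal sketch of the assimilation idea under stated assumptions: a small network standing in for the GNN is trained by combining the gradient of a sparse-data misfit with a physics gradient that an adjoint RANS solver would supply. The placeholder rans_adjoint_gradient, the network, and all shapes are illustrative, not the study’s implementation.

```python
# Sketch: data-fitting gradient plus an adjoint-derived RANS gradient in one
# training step. `rans_adjoint_gradient` is a hypothetical stand-in for the
# external finite-element adjoint solver interfaced with the network.
import torch
import torch.nn as nn

class TinyFlowNet(nn.Module):
    """Placeholder for a GNN predicting mean-flow fields at mesh nodes."""
    def __init__(self, in_dim: int = 3, out_dim: int = 2):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, out_dim))

    def forward(self, nodes: torch.Tensor) -> torch.Tensor:
        return self.mlp(nodes)

def rans_adjoint_gradient(u_pred: torch.Tensor) -> torch.Tensor:
    """Assumed interface: d(RANS residual norm)/d(u_pred) as the adjoint
    solver would return it. Illustrative placeholder only."""
    return 1e-3 * u_pred

model = TinyFlowNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

nodes = torch.randn(100, 3)                # unstructured-mesh node features
mask = torch.zeros(100, dtype=torch.bool)  # sparse sensor locations
mask[::10] = True
u_obs = torch.randn(100, 2)                # sparse/noisy observations

opt.zero_grad()
u_pred = model(nodes)
data_loss = ((u_pred[mask] - u_obs[mask]) ** 2).mean()
data_loss.backward()                       # gradient of the data misfit
# Physics term: backpropagate the adjoint-supplied gradient through the net.
u_pred2 = model(nodes)
u_pred2.backward(rans_adjoint_gradient(u_pred2.detach()))
opt.step()
```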