This chapter investigates the role of Artificial Intelligence (AI) in augmenting search interactions to enhance users’ understanding across various domains. The chapter begins by examining the limitations of traditional search interfaces in meeting diverse user needs and cognitive capacities. It then discusses how AI-driven enhancements can transform search experiences by providing tailored, contextually relevant information and facilitating intuitive interactions. Through case studies and empirical analysis, the effectiveness of AI-supported search interaction in improving users’ understanding is evaluated across different scenarios. This chapter contributes to the literature on AI and human–computer interaction by highlighting the transformative potential of AI in optimizing search experiences, leading to enhanced comprehension and decision-making. It concludes with implications for research and practice, emphasizing the importance of human-centered design principles in developing AI-driven search systems.
AI-supported crowdsourcing for knowledge sharing is a collaborative approach that leverages artificial intelligence (AI) technologies to facilitate the gathering, organizing, and sharing of information or expertise among a large group of people, known as crowd workers. Despite the growing body of research on motivations in crowdsourcing, the impact of AI-supported crowdsourcing on workers’ motives remains unclear, as does the extent to which their participation can effectively address societal challenges. A systematic review is first conducted to identify trends and gaps in AI-supported crowdsourcing. The chapter then presents a case study of a crowdsourcing platform for locating missing children, demonstrating the pivotal role of AI in crowdsourcing for managing a major societal challenge. Emerging trends and technologies shaping motivations in AI-supported crowdsourcing are discussed. Additionally, we offer recommendations for practitioners and researchers on integrating AI into crowdsourcing projects to address societal challenges.
Use Case 4 in Chapter 7 explores the regulation of MDTs in the context of employment monitoring under the General Data Protection Regulation (GDPR), the Equality Acquis, the Platform Work Directive (PWD), and the Artificial Intelligence Act (AIA). Article 88 GDPR serves as a useful foundation, supported by valuable guidance aimed at protecting employees from unlawful monitoring practices. In theory, most MDT-based practices discussed in this book are already prohibited under the GDPR. Additionally, the EU’s robust equality acquis can effectively address many forms of discrimination in this sector. The AIA reiterates some existing prohibitions related to MDT-based monitoring practices in the workplace. However, a core challenge in employment monitoring lies in ensuring transparency and enforcement. There has long been a call for a lex specialis for data protection in the employment context, which should include a blacklist of prohibited practices or processing operations, akin to the one found in the PWD. Notably, processing and inferring mind data should be included among the practices on this blacklist.
Chapter 3 examines the regulatory approaches outlined in the Artificial Intelligence Act (AIA) concerning Emotion Recognition Systems (ERS). As the first legislation specifically addressing ERS, the EU’s AI Act employs a multilayered framework that classifies these systems as both limited and high-risk AI technologies. By categorising all ERS as limited risk, the AIA aims to eliminate the practice of inferring emotions or intentions from individuals without their awareness. Additionally, all ERS must adhere to the stringent requirements set for high-risk AI systems. The use of AI systems for inferring emotions in workplace and educational settings is classified as an unacceptable risk and thus prohibited. Considering the broader context, the regulation of ERS represents a nuanced effort by legislators to balance the promotion of innovation with the necessity of imposing rigorous safeguards. However, this book contends that the AIA should not be seen as the ultimate regulation of MDTs. Instead, it serves as a general framework or baseline that requires further legal measures, including additional restrictions or prohibitions through sector-specific legislation.
This chapter aims to provide a comprehensive overview of the current state of credibility research in human–generative AI interactions by analyzing literature from various disciplines. It begins by exploring the key dimensions of credibility assessment and provides an overview of two main measurement methods: user-oriented and technology-oriented. The chapter then examines the factors that influence human perceptions of AI-generated content (AIGC), including attributes related to data, systems, algorithms, and user-specific factors. Additionally, it investigates the challenges and ethical considerations involved in assessing credibility in human–generative AI interactions, scrutinizing the potential consequences of misplaced trust in AIGC. These risks include concerns over security, privacy, power dynamics, responsibility, cognitive biases, and the erosion of human autonomy. Emerging approaches and technological solutions aimed at improving credibility assessment in AI systems are also discussed, alongside a focus on domains where AI credibility assessments are critical. Finally, the chapter proposes several directions for future research on AIGC credibility assessments.
This chapter is devoted to design principles of multiprocessor systems, focusing on two architectural styles: shared-memory and message-passing. Both styles use multiple processors to achieve a linear speedup of computational power with the number of processors, but they differ in the method of data exchange. Processors in shared-memory multiprocessors share the same address space and can exchange data through shared-memory locations using regular load and store instructions. This chapter reviews the programming model abstractions for shared-memory and message-passing multiprocessors, then the semantics of message-passing primitives, the protocols needed, and architectural support to accelerate message processing. It covers support of a shared-memory model abstraction by reviewing the concept of cache coherence, the design space of snoopy-cache coherence protocols, the classification of communication events, and translation-lookaside buffer consistency strategies. Scalable models of shared memory are treated, with an emphasis on the design of cache coherence solutions that can be applied at a large scale, as well as the software techniques for page mappings that exploit locality.
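The contrast between the two styles can be sketched in a few lines, using Python threads as stand-in "processors" (an illustration of the programming-model abstraction only, not of the hardware mechanisms the chapter covers):

```python
import threading
import queue

# Shared-memory style: workers communicate through a shared location,
# guarded by a lock (the software analogue of coherent loads/stores).
total = 0
lock = threading.Lock()

def shared_memory_worker(values):
    global total
    for v in values:
        with lock:           # ordinary reads/writes to a shared address
            total += v

# Message-passing style: workers hold no shared state and exchange
# data only via explicit send/receive primitives.
def message_passing_worker(values, outbox):
    outbox.put(sum(values))  # "send" a partial result

data = [list(range(0, 50)), list(range(50, 100))]

# Run the shared-memory version.
threads = [threading.Thread(target=shared_memory_worker, args=(d,)) for d in data]
for t in threads: t.start()
for t in threads: t.join()

# Run the message-passing version.
outbox = queue.Queue()
threads = [threading.Thread(target=message_passing_worker, args=(d, outbox)) for d in data]
for t in threads: t.start()
for t in threads: t.join()
mp_total = outbox.get() + outbox.get()  # "receive" and combine

assert total == mp_total == sum(range(100))
```

Both versions compute the same sum; the difference is where the coordination cost lives — in synchronised access to shared locations versus in explicit message transfer.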
In today’s data-driven world, the demand for advanced intelligent systems to automate and enhance complex tasks is growing. However, developing effective artificial intelligence (AI) often depends on extensive, high-quality training data, which can be costly and time-consuming to obtain. This chapter highlights the potential of human–AI collaboration by integrating human expertise into machine learning workflows to address data limitations and enhance model performance. We explore foundational concepts such as Human-in-the-Loop systems, Active Learning, Crowdsourcing, and Interactive Machine Learning, outlining their interconnections as key paradigms. Through practical applications in diverse domains such as healthcare, finance, and agriculture, along with real-world case studies in education and law, we demonstrate how strategically incorporating human expertise into machine learning workflows can significantly enhance AI performance. From an information science perspective, this chapter emphasizes the powerful human–AI partnership that can drive the next generation of AI systems, enabling continuous learning from human experts and advancing capability and performance.
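The Human-in-the-Loop and Active Learning paradigms mentioned above can be illustrated with a minimal uncertainty-sampling loop. The one-dimensional threshold "model" and the oracle standing in for the human expert are illustrative assumptions, not a system from the chapter:

```python
def oracle(x):
    """Stand-in for the human expert: the true label is x >= 0.6."""
    return int(x >= 0.6)

def uncertainty(x, threshold):
    """Points nearest the decision boundary are most worth labelling."""
    return -abs(x - threshold)

pool = [0.05, 0.2, 0.35, 0.5, 0.55, 0.62, 0.7, 0.9]  # unlabelled data
labelled = []
threshold = 0.5  # initial model

for _ in range(4):  # query budget: only 4 human labels
    # Active learning step: pick the most uncertain unlabelled point.
    x = max(pool, key=lambda p: uncertainty(p, threshold))
    pool.remove(x)
    labelled.append((x, oracle(x)))  # ask the human only about x
    # Refit: place the boundary midway between the highest-labelled
    # negative and the lowest-labelled positive seen so far.
    zeros = [v for v, y in labelled if y == 0]
    ones = [v for v, y in labelled if y == 1]
    if zeros and ones:
        threshold = (max(zeros) + min(ones)) / 2

print(f"learned threshold: {threshold:.3f}")  # close to the true 0.6
```

With only four queries the model converges near the true boundary, which is the core economy of human–AI collaboration the chapter describes: spend scarce expert labels where the model is least certain.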
In the Introduction, the key considerations, scope, and structure of the book are outlined. The chapter sets the stage for a comprehensive exploration of Mind Datafying Technologies (MDTs) and their regulatory landscape. The primary themes and objectives are introduced, providing readers with an understanding of what to expect in the subsequent chapters.
The analysis of MDT regulation across specific use cases – particularly in mental health and well-being, commercial advertising, political advertising, and employment monitoring – suggests that MDTs, especially neurotechnologies, do not necessarily present entirely new legal questions or phenomena. Rather, they occasionally highlight existing deficiencies and weaknesses that have long been recognised, albeit sometimes exacerbating their effects. By strategically adapting and utilising existing laws and legal instruments, substantial improvements can be made to the legal framework governing MDTs. In some cases, stricter regulations are urgently needed, while in others, compliance and enforcement present significant challenges. Although recent legislation has created important opportunities to address these shortcomings, a political consensus has yet to be reached on all necessary aspects. Throughout the book, alternative approaches and adaptations de lege ferenda within both established and newly adopted laws have been proposed as sources of inspiration. The concluding remarks reiterate key legislative adaptations.
Use Case 2 in Chapter 5 examines the regulation of MDTs in the context of commercial advertising under the General Data Protection Regulation (GDPR), the Unfair Commercial Practices Directive (UCPD), and the Audiovisual Media Services Directive (AVMSD). An analysis under the Digital Services Act (DSA) and the Artificial Intelligence Act (AIA) follows in Chapter 6, alongside a use case focused on political advertising. In the realm of commercial advertising, MDTs intensify long-standing concerns from consumer perspectives. The UCPD serves as a crucial reference point for related laws. Including the processing and inference of mind data in the blacklist outlined in Annex I of the UCPD would have significant implications, akin to the proposed introduction of a sui generis special category of mind data within the GDPR. Importantly, a blanket ban on the processing and inference of mind data for commercial practices under Annex I UCPD would automatically prohibit these practices under the DSA.