This chapter investigates the role of Artificial Intelligence (AI) in augmenting search interactions to enhance users’ understanding across various domains. The chapter begins by examining the limitations of traditional search interfaces in meeting diverse user needs and cognitive capacities. It then discusses how AI-driven enhancements can transform search experiences by providing tailored, contextually relevant information and facilitating intuitive interactions. Through case studies and empirical analysis, the effectiveness of AI-supported search interaction in improving users’ understanding is evaluated across different scenarios. This chapter contributes to the literature on AI and human–computer interaction by highlighting the transformative potential of AI in optimizing search experiences, leading to enhanced comprehension and decision-making. It concludes with implications for research and practice, emphasizing the importance of human-centered design principles in developing AI-driven search systems.
AI-supported crowdsourcing for knowledge sharing is a collaborative approach that leverages artificial intelligence (AI) technologies to facilitate the gathering, organizing, and sharing of information or expertise among a large group of people, known as crowd workers. Despite the growing body of research on motivations in crowdsourcing, the impact of AI-supported crowdsourcing on workers’ motives remains unclear, as does the extent to which their participation can effectively address societal challenges. This chapter first conducts a systematic review to identify trends and gaps in AI-supported crowdsourcing. It then presents a case study of a crowdsourcing platform used to search for missing children, demonstrating the pivotal role of AI-supported crowdsourcing in managing a major societal challenge. Emerging trends and technologies shaping motivations in AI-supported crowdsourcing are discussed. Additionally, we offer recommendations for practitioners and researchers on integrating AI into crowdsourcing projects to address societal challenges.
Use Case 4 in Chapter 7 explores the regulation of MDTs in the context of employment monitoring under the General Data Protection Regulation (GDPR), the Equality Acquis, the Platform Work Directive (PWD), and the Artificial Intelligence Act (AIA). Article 88 GDPR serves as a useful foundation, supported by valuable guidance aimed at protecting employees from unlawful monitoring practices. In theory, most MDT-based practices discussed in this book are already prohibited under the GDPR. Additionally, the EU’s robust equality acquis can effectively address many forms of discrimination in this sector. The AIA reiterates some existing prohibitions related to MDT-based monitoring practices in the workplace. However, a core challenge in employment monitoring lies in ensuring transparency and enforcement. There has long been a call for a lex specialis for data protection in the employment context, which should include a blacklist of prohibited practices or processing operations, akin to the one found in the PWD. Notably, processing and inferring mind data should be included among the practices on this blacklist.
Chapter 3 examines the regulatory approaches outlined in the Artificial Intelligence Act (AIA) concerning Emotion Recognition Systems (ERS). As the first legislation specifically addressing ERS, the EU’s AI Act employs a multilayered framework that classifies these systems as both limited and high-risk AI technologies. By categorising all ERS as limited risk, the AIA aims to eliminate the practice of inferring emotions or intentions from individuals without their awareness. Additionally, all ERS must adhere to the stringent requirements set for high-risk AI systems. The use of AI systems for inferring emotions in workplace and educational settings is classified as an unacceptable risk and thus prohibited. Considering the broader context, the regulation of ERS represents a nuanced effort by legislators to balance the promotion of innovation with the necessity of imposing rigorous safeguards. However, this book contends that the AIA should not be seen as the ultimate regulation of MDTs. Instead, it serves as a general framework or baseline that requires further legal measures, including additional restrictions or prohibitions through sector-specific legislation.
This chapter aims to provide a comprehensive overview of the current state of credibility research in human–generative AI interactions by analyzing literature from various disciplines. It begins by exploring the key dimensions of credibility assessment and provides an overview of two main measurement methods: user-oriented and technology-oriented. The chapter then examines the factors that influence human perceptions of AI-generated content (AIGC), including attributes related to data, systems, algorithms, and user-specific factors. Additionally, it investigates the challenges and ethical considerations involved in assessing credibility in human–generative AI interactions, scrutinizing the potential consequences of misplaced trust in AIGC. These risks include concerns over security, privacy, power dynamics, responsibility, cognitive biases, and the erosion of human autonomy. Emerging approaches and technological solutions aimed at improving credibility assessment in AI systems are also discussed, alongside a focus on domains where AI credibility assessments are critical. Finally, the chapter proposes several directions for future research on AIGC credibility assessments.
In today’s data-driven world, the demand for advanced intelligent systems to automate and enhance complex tasks is growing. However, developing effective artificial intelligence (AI) often depends on extensive, high-quality training data, which can be costly and time-consuming to obtain. This chapter highlights the potential of human–AI collaboration by integrating human expertise into machine learning workflows to address data limitations and enhance model performance. We explore foundational concepts such as Human-in-the-Loop systems, Active Learning, Crowdsourcing, and Interactive Machine Learning, outlining their interconnections as key paradigms. Through practical applications in diverse domains such as healthcare, finance, and agriculture, along with real-world case studies in education and law, we demonstrate how strategically incorporating human expertise into machine learning workflows can significantly enhance AI performance. From an information science perspective, this chapter emphasizes the powerful human–AI partnership that can drive the next generation of AI systems, enabling continuous learning from human experts and advancing capability and performance.
In the Introduction, the key considerations, scope, and structure of the book are outlined. The chapter sets the stage for a comprehensive exploration of Mind Datafying Technologies (MDTs) and their regulatory landscape. The primary themes and objectives are introduced, providing readers with an understanding of what to expect in the subsequent chapters.
The analysis of MDT regulation across specific use cases – particularly in mental health and well-being, commercial advertising, political advertising, and employment monitoring – suggests that MDTs, especially neurotechnologies, do not necessarily present entirely new legal questions or phenomena. Rather, they occasionally highlight existing deficiencies and weaknesses that have long been recognised, albeit sometimes exacerbating their effects. By strategically adapting and utilising existing laws and legal instruments, substantial improvements can be made to the legal framework governing MDTs. In some cases, stricter regulations are urgently needed, while in others, compliance and enforcement present significant challenges. Although recent legislation has created important opportunities to address these shortcomings, a political consensus has yet to be reached on all necessary aspects. Throughout the book, alternative approaches and adaptations de lege ferenda within both established and newly adopted laws have been proposed as sources of inspiration. The concluding remarks reiterate key legislative adaptations.
Use Case 2 in Chapter 5 examines the regulation of MDTs in the context of commercial advertising under the General Data Protection Regulation (GDPR), the Unfair Commercial Practices Directive (UCPD), and the Audiovisual Media Services Directive (AVMSD). An analysis under the Digital Services Act (DSA) and the Artificial Intelligence Act (AIA) follows in Chapter 6, alongside a use case focused on political advertising. In the realm of commercial advertising, MDTs intensify long-standing concerns from consumer perspectives. The UCPD serves as a crucial reference point for related laws. Including the processing and inference of mind data in the blacklist outlined in Annex I of the UCPD would have significant implications, akin to the proposed introduction of a sui generis special category of mind data within the GDPR. Importantly, a blanket ban on the processing and inference of mind data for commercial practices under Annex I UCPD would automatically prohibit these practices under the DSA.
Use Case 3 in Chapter 6 examines the regulation of MDTs in the context of political advertising under the General Data Protection Regulation (GDPR), the Regulation on Transparency and Targeting of Political Advertising (TTPA), the Digital Services Act (DSA), and the Artificial Intelligence Act (AIA). The prohibition on advertising based on profiling with special category data in both the DSA and the TTPA does not adequately reflect the capabilities of modern data analytics. Both the DSA and the TTPA fall short in addressing MDTs as stand-alone techniques or as complements to online behavioural advertising and political microtargeting. The AIA’s prohibition of subliminal, manipulative, and deceptive techniques requires a complex set of criteria to be met, with the outcome still uncertain.
Chapter 1 lays the groundwork for the subsequent legal analysis. Following the fundamentals, the chapter highlights ongoing global policy discussions and initial regulatory efforts, with particular emphasis on the latest developments within international organisations such as UNESCO, the OECD, the Council of Europe, and the EU. It also addresses relevant legal scholarship, ensuring a comprehensive understanding of the evolving regulatory debate surrounding these technologies.
Artificial intelligence (AI) is becoming a powerful tool for processing the huge volumes of data generated in scientific research and extracting insights that drive further exploration. The recent trend of human-in-the-loop AI has prompted a paradigm shift in scientific research by enabling interactive collaboration between AI models and human experts. Inspired by these advancements, this chapter explores the transformative role of AI in accelerating scientific discovery across disciplines such as mathematics, physics, chemistry, and the life sciences. It provides a comprehensive overview of how AI is reshaping scientific research – enabling more efficient data analysis, enhancing predictive modeling, and automating experimental processes. Through the examination of case studies and recent developments, this chapter underscores AI’s potential to revolutionize scientific discovery, providing insights into current applications and future directions. It also addresses the ethical challenges associated with AI in science. Through this comprehensive analysis, the chapter aims to provide a nuanced understanding of how AI is facilitating scientific discovery and its potential to accelerate innovation while maintaining rigorous ethical standards.