In recent years, speech recognition devices have become central to our everyday lives. Systems such as Siri, Alexa, speech-to-text, and automated telephone services are built by people applying expertise in sound structure and natural language processing to generate computer programmes that can recognise and understand speech. These advances have led to rapid growth in speech technology courses being added to linguistics programmes; however, there has so far been a lack of material serving the needs of students who have limited or no background in computer science or mathematics. This textbook addresses that need by providing an accessible introduction to the fundamentals of computer speech synthesis and automatic speech recognition technology, covering both neural and non-neural approaches. It explains the basic concepts in non-technical language, with step-by-step explanations of each formula, practical activities, and ready-made code for students to use, also available on an accompanying website.
Knowledge-infused learning directly confronts the opacity of current 'black-box' AI models by combining data-driven machine learning techniques with the structured insights of symbolic AI. This guidebook introduces the pioneering techniques of neurosymbolic AI, which blends statistical models with symbolic knowledge to make AI safer and explainable to users. This is critical for high-stakes AI applications in healthcare, law, finance, and crisis management. The book brings readers up to speed on advancements in statistical AI, including transformer models such as BERT and GPT, and provides a comprehensive overview of weakly supervised, distantly supervised, and unsupervised learning methods alongside their knowledge-enhanced variants. Other topics include active learning, zero-shot learning, and model fusion. Beyond theory, the book presents practical considerations and applications of neurosymbolic AI in conversational systems, mental health, crisis management systems, and the social and behavioral sciences, making it a pragmatic reference for AI system designers in academia and industry.
Artificial intelligence is transforming industries and society, but its high energy demands challenge global sustainability goals. Biological intelligence, in contrast, offers both good performance and exceptional energy efficiency. Neuromorphic computing, a growing field inspired by the structure and function of the brain, aims to create energy-efficient algorithms and hardware by integrating insights from biology, physics, computer science, and electrical engineering. This concise and accessible book delves into the principles, mechanisms, and properties of neuromorphic systems. It opens with a primer on biological intelligence, describing learning mechanisms in both simple and complex organisms, then turns to the application of these principles and mechanisms in the development of artificial synapses and neurons, circuits, and architectures. The text also covers neuromorphic algorithm design and the unique challenges faced by algorithmic researchers working in this area. The book concludes with a selection of practice problems, with solutions available to instructors online.
Designed for educators, researchers, and policymakers, this insightful book equips readers with practical strategies, critical perspectives, and ethical insights into integrating AI in education. First published in Swedish in 2023, and here translated, updated, and adapted for an English-speaking international audience, it provides a user-friendly guide to the digital and AI-related challenges and opportunities in today's education systems. Drawing upon cutting-edge research, Thomas Nygren outlines how technology can be usefully integrated into education, not as a replacement for humans, but as a tool that supports and reinforces students' learning. Written in accessible language, topics covered include AI literacy, source awareness, and subject-specific opportunities. The central role of the teacher is emphasized throughout, as is the importance of thoughtful engagement with technology. By guiding the reader through the fast-evolving digital transformation in education globally, it ultimately enables students to become informed participants in the digital world.
Integrating human-centered design concepts at the core of both algorithmic research and application development is of great importance. To do so, it is essential to understand human–computer interaction and collaboration from the perspective of the user. To address this need, this chapter first describes the process of human–AI interaction and collaboration, and then proposes a theoretical framework for it. In accordance with this framework, the current research hotspots are identified in terms of interaction quality and interaction mode. Among these topics, user mental modeling, interpretable AI, trust, and anthropomorphism are currently the subject of academic interest with regard to interaction quality. The level of interaction mode encompasses a range of topics, including interaction paradigms, role assignment, interaction boundaries, and interaction ethics. To further advance the related research, this chapter identifies three areas for future exploration: cognitive frameworks for human–AI interaction, adaptive learning, and the complementary strengths of humans and AI.
In the technological wave of the twenty-first century, artificial intelligence (AI), as a transformative technology, is rapidly reshaping our society, economy, and daily life. Since the concept of AI was first proposed, the field has experienced many technological innovations and application expansions. Artificial intelligence has experienced three booms over the past half century and has developed rapidly. The first boom came in the 1960s, marked by the Turing test and the application of knowledge-reasoning systems and related technologies. Computer scientists of that era began to explore how to let computers simulate human intelligence, and early AI research focused on rule systems and logical reasoning. The rise of expert systems and artificial neural networks brought a second wave of enthusiasm (McDermott, 1982). The third boom is marked by deep learning and big data, especially the widespread application of AI-generated content represented by ChatGPT. During this period, AI technology shifted from traditional rule systems to methods that rely on algorithms to learn patterns from data. The rise of deep learning enabled AI to achieve significant breakthroughs in areas such as image recognition and natural language processing.
This chapter mainly investigates the role of Artificial Intelligence (AI) in augmenting search interactions to enhance users’ understanding across various domains. The chapter begins by examining the current limitations of traditional search interfaces in meeting diverse user needs and cognitive capacities. It then discusses how AI-driven enhancements can revolutionize search experiences by providing tailored, contextually relevant information and facilitating intuitive interactions. Through case studies and empirical analysis, the effectiveness of AI-supported search interaction in improving users’ understanding is evaluated in different scenarios. This chapter contributes to the literature on AI and human–computer interaction by highlighting the transformative potential of AI in optimizing search experiences for users, leading to enhanced comprehension and decision-making. It concludes with implications for research and practice, emphasizing the importance of human-centered design principles in developing AI-driven search systems.
AI-supported crowdsourcing for knowledge sharing is a collaborative approach that leverages artificial intelligence (AI) technologies to facilitate the gathering, organizing, and sharing of information or expertise among a large group of people, known as crowd workers. Despite the growing body of research on motivations in crowdsourcing, the impact of AI-supported crowdsourcing on workers’ motives remains unclear, as does the extent to which their participation can effectively address societal challenges. A systematic review is first conducted to identify trends and gaps in AI-supported crowdsourcing. The chapter then presents a case study of a crowdsourcing platform for locating missing children to demonstrate the pivotal role of AI in crowdsourcing when managing a major societal challenge. Emerging trends and technologies shaping motivations in AI-supported crowdsourcing are then discussed. Additionally, we offer recommendations for practitioners and researchers on integrating AI into crowdsourcing projects to address societal challenges.
This chapter aims to provide a comprehensive overview of the current state of credibility research in human–generative AI interactions by analyzing literature from various disciplines. It begins by exploring the key dimensions of credibility assessment and provides an overview of two main measurement methods: user-oriented and technology-oriented. The chapter then examines the factors that influence human perceptions of AI-generated content (AIGC), including attributes related to data, systems, algorithms, and user-specific factors. Additionally, it investigates the challenges and ethical considerations involved in assessing credibility in human–generative AI interactions, scrutinizing the potential consequences of misplaced trust in AIGC. These risks include concerns over security, privacy, power dynamics, responsibility, cognitive biases, and the erosion of human autonomy. Emerging approaches and technological solutions aimed at improving credibility assessment in AI systems are also discussed, alongside a focus on domains where AI credibility assessments are critical. Finally, the chapter proposes several directions for future research on AIGC credibility assessments.
In today’s data-driven world, the demand for advanced intelligent systems to automate and enhance complex tasks is growing. However, developing effective artificial intelligence (AI) often depends on extensive, high-quality training data, which can be costly and time-consuming to obtain. This chapter highlights the potential of human–AI collaboration by integrating human expertise into machine learning workflows to address data limitations and enhance model performance. We explore foundational concepts such as Human-in-the-Loop systems, Active Learning, Crowdsourcing, and Interactive Machine Learning, outlining their interconnections as key paradigms. Through practical applications in diverse domains such as healthcare, finance, and agriculture, along with real-world case studies in education and law, we demonstrate how strategically incorporating human expertise into machine learning workflows can significantly enhance AI performance. From an information science perspective, this chapter emphasizes the powerful human–AI partnership that can drive the next generation of AI systems, enabling continuous learning from human experts and advancing capability and performance.
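The paradigms named above can be made concrete with a small sketch. The following is a minimal, illustrative uncertainty-sampling active-learning loop: a toy one-dimensional threshold "classifier" stands in for a real model, and an `oracle` function stands in for the human expert who labels queried points. All names and data here are invented for illustration, not taken from the chapter.

```python
# Minimal sketch of uncertainty-sampling active learning.
# A toy 1-D "classifier": the decision threshold is the midpoint between
# the mean of labeled positives and the mean of labeled negatives.

def train(labeled):
    """Fit the threshold classifier on (x, y) pairs with y in {0, 1}."""
    pos = [x for x, y in labeled if y == 1]
    neg = [x for x, y in labeled if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def uncertainty(threshold, x):
    """The closer x is to the threshold, the less certain the model is."""
    return -abs(x - threshold)

def active_learning(pool, oracle, seed_labels, budget):
    labeled = list(seed_labels)
    pool = list(pool)
    for _ in range(budget):
        t = train(labeled)
        # Query the unlabeled point the model is least certain about.
        x = max(pool, key=lambda p: uncertainty(t, p))
        pool.remove(x)
        labeled.append((x, oracle(x)))  # the human expert labels it
    return train(labeled)

# Toy task: the true boundary is at 5.0; the oracle plays the human annotator.
oracle = lambda x: 1 if x >= 5.0 else 0
pool = [1.0, 2.0, 3.0, 4.0, 4.6, 5.4, 6.0, 7.0, 8.0]
seeds = [(0.0, 0), (10.0, 1)]
final_threshold = active_learning(pool, oracle, seeds, budget=4)
print(round(final_threshold, 2))
```

With only four queries, the loop spends its labeling budget near the boundary rather than on easy points far from it, which is the core economy that Human-in-the-Loop and Active Learning share.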
Nowadays, artificial intelligence (AI) is becoming a powerful tool to process the huge volumes of data generated in scientific research and extract enlightening insights to drive further exploration. The recent trend of human-in-the-loop AI has driven a paradigm shift in scientific research by enabling interactive collaboration between AI models and human experts. Inspired by these advancements, this chapter explores the transformative role of AI in accelerating scientific discovery across various disciplines such as mathematics, physics, chemistry, and the life sciences. It provides a comprehensive overview of how AI is reshaping scientific research: enabling more efficient data analysis, enhancing predictive modeling, and automating experimental processes. Through the examination of case studies and recent developments, this chapter underscores AI’s potential to revolutionize scientific discovery, providing insights into current applications and future directions. It also addresses the ethical challenges associated with AI in science. Through this comprehensive analysis, the chapter aims to provide a nuanced understanding of how AI is facilitating scientific discovery and its potential to accelerate innovation while maintaining rigorous ethical standards.
With the rapid development of artificial intelligence technology, human–AI interaction and collaboration have become important topics in the field of contemporary technology. The capabilities of AI have gradually expanded from basic task automation to complex decision support, content creation, and intelligent collaboration in high-risk scenarios. This technological evolution has provided unprecedented opportunities for industries in different fields, but has also brought challenges, such as privacy protection, credibility issues, and the ethical and legal relationship between AI and humans. This book explores the role and potential of AI in human–AI interaction and collaboration from multiple dimensions, and its chapters analyze AI’s performance in privacy and credibility, knowledge sharing, search interaction, false-information processing, and high-risk application scenarios in detail.
Informal caregivers such as family members or friends provide a great deal of care to people with physical or cognitive impairment. To address challenges in care, caregivers often seek information online via social media platforms for their health information wants (HIWs), the types of care-related information that caregivers wish to have. Some efforts have been made to use Artificial Intelligence (AI) to understand caregivers’ information behaviors on social media. In this chapter, we present the achievements of research taking a human–AI collaboration approach to identifying caregivers’ HIWs, focusing on dementia caregivers as one example. Through this collaboration, AI techniques such as large language models (LLMs) can be used to extract health-related domain knowledge for building classification models, while human experts can benefit from the help of AI to further understand caregivers’ HIWs. Our approach has implications for the caregiving of various groups. The outcomes of human–AI collaboration can provide smart interventions to help caregivers and patients.
Misinformation on social media is a recognized threat to societies. Research has shown that social media users play an important role in the spread of misinformation. It is crucial to understand how misinformation affects user online interaction behavior and the factors that contribute to it. In this study, we employ an AI deep learning model to analyze emotions in user online social media conversations about misinformation during the COVID-19 pandemic. We further apply the Stimulus–Organism–Response framework to examine the relationship between the presence of misinformation, emotions, and social bonding behavior. Our findings highlight the usefulness of AI deep learning models for analyzing emotions in social media posts and enhance the understanding of online social bonding behavior around health-related misinformation.
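The pipeline described above (post → emotion label → aggregate analysis) can be sketched in miniature. The chapter's deep-learning model is not reproduced here; as a deliberately simple stand-in, a lexicon lookup plays the classifier so the shape of the pipeline is visible. The lexicon and posts below are invented for illustration.

```python
from collections import Counter

# Illustrative stand-in for an emotion classifier: a tiny hand-made lexicon.
# A real system would use a trained deep-learning model in this role.
EMOTION_LEXICON = {
    "fear": {"scared", "afraid", "worried", "panic"},
    "anger": {"angry", "outraged", "furious", "lies"},
    "joy": {"relieved", "grateful", "happy", "safe"},
}

def classify_emotion(post: str) -> str:
    """Label a post with the emotion whose lexicon it overlaps most."""
    tokens = set(post.lower().split())
    scores = {e: len(tokens & words) for e, words in EMOTION_LEXICON.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "neutral"

# Invented example conversation around a piece of misinformation.
posts = [
    "I am scared and worried about these claims",
    "These are lies and I am furious",
    "So grateful the facts are out, I feel safe now",
]
distribution = Counter(classify_emotion(p) for p in posts)
print(distribution)
```

In a Stimulus–Organism–Response analysis, the misinformation post is the stimulus, the per-post emotion labels approximate the organism's internal state, and the aggregated distribution can then be related to downstream bonding behaviors such as replies and shares.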
Generative AI based on large language models (LLMs) currently faces serious privacy leakage issues due to the models' vast numbers of parameters and diverse data sources. When using generative AI, users inevitably share data with the system. Personal data collected by generative AI may be used for model training and leaked in future outputs. The risk of private-information leakage is closely related to the inherent operating mechanism of generative AI, and this indirect leakage is difficult for users to detect because that mechanism is highly complex. By focusing on the private information exchanged during interactions between users and generative AI, we identify the privacy dimensions involved and develop a model of privacy types in human–generative AI interactions. This can provide a reference for generative AI systems to avoid training on private data and help them clearly explain relevant content for the types of privacy users are concerned about.