Static analysis of logic programs by abstract interpretation requires designing abstract operators which mimic the concrete ones, such as unification, renaming, and projection. In the case of goal-driven analysis, where goal-dependent semantics are used, we also need a backward-unification operator, typically implemented through matching. In this paper, we study the problem of deriving optimal abstract matching operators for sharing and linearity properties. We provide an optimal operator for matching in the domain $\mathtt{ShLin}^{\omega }$, which can be easily instantiated to derive optimal operators for the domains $\mathtt{ShLin}^2$ by Andy King and the reduced product $\mathtt{Sharing} \times \mathtt{Lin}$.
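As background for the abstract above (this is the classic $\mathtt{Sharing}$ domain, not the paper's optimal matching operator), a sharing abstraction can be represented as a set of groups of variables that may be bound to terms with a common free variable. A minimal, hypothetical Python sketch of the projection operation on such abstractions:

```python
# Illustrative sketch only: the classic Sharing domain, where an
# abstraction is a set of variable groups that may share. The names
# `project` and `scope` are hypothetical, not taken from the paper.

def project(sharing, scope):
    """Restrict a sharing abstraction to the variables in `scope`,
    discarding groups that become empty."""
    scope = frozenset(scope)
    return {g & scope for g in sharing} - {frozenset()}

# Example: X and Y may share a variable; Z is independent of both.
sh = {frozenset({"X", "Y"}), frozenset({"Z"})}
```

Projecting `sh` onto `{"X", "Y"}` keeps the group `{X, Y}` and drops `Z`; the matching operators studied in the paper combine such projections with (backward) unification over linearity information.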
Because the F2 ionospheric layer reflects radio waves, its critical frequency foF2 is an essential parameter: sudden irregularities can disrupt communication and navigation systems and degrade the accuracy of space weather forecasts. This paper aims to develop accurate foF2 critical frequency predictions up to 24 hours ahead, focusing on mid and high latitudes, using a long short-term memory (LSTM) model covering the 24th solar cycle from 2008 to 2019. To evaluate the effectiveness of the proposed model, a comparative analysis is conducted against commonly referenced machine learning techniques, including linear regression, decision tree (DT) algorithms, and the multilayer perceptron (MLP), using Taylor diagrams and error plots. The study involved five monitoring stations, years of both minimum and maximum solar activity, and several prediction timeframes. Through extensive experimentation, a comprehensive set of outcomes is evaluated across diverse metrics. The findings conclusively establish that the LSTM model outperforms the other models across all stations and years. On average, LSTM is 1.2 times better than the second-best model (DT), 1.6 times as effective as the MLP, and three times more accurate than linear regression. These results hold promise for increasing the precision of foF2 prediction, with potential implications for enhancing communication systems and space weather forecasting capabilities.
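Multi-hour-ahead forecasting of this kind is typically framed as supervised learning over sliding windows of past observations. A minimal sketch of that framing (the function and parameter names are hypothetical, not the paper's), independent of which model is trained on the result:

```python
# Hypothetical sketch: turn an hourly foF2 series into (window, target)
# pairs for h-hours-ahead prediction. `lookback` is the input window
# length; `horizon` is how far ahead the target lies (e.g. 24 hours).

def make_windows(series, lookback, horizon):
    X, y = [], []
    for i in range(len(series) - lookback - horizon + 1):
        X.append(series[i:i + lookback])          # past observations
        y.append(series[i + lookback + horizon - 1])  # value `horizon` steps ahead
    return X, y
```

Each `(X[i], y[i])` pair can then be fed to an LSTM, decision tree, MLP, or linear regression for the comparison the abstract describes.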
The latest version of 'Programming in Ada' covers the full details of the core language Ada 2022 as approved by ISO in 2023, including new features that aid program proof and the efficient use of multicore architectures. The book is arranged in four parts. The first part introduces the key ideas to the newcomer, with a working example illustrating the basics. The algorithmic features, structural features such as OOP and multitasking, and details of the standard library and interaction with the external environment are covered in the subsequent parts. This comprehensive guide includes numerous working examples and is enhanced by a range of supplementary online materials, including a dozen complete executable programs, five of which illustrate important new features. 'Programming in Ada' is a must-have for anyone looking to learn the Ada programming language, and will serve as a definitive reference for years to come.
Web3 is a new frontier of internet architecture emphasizing decentralization and user control. This text for MBA students and industry professionals explores key Web3 concepts, starting from foundational principles and moving to advanced topics like blockchain, smart contracts, tokenomics, and DeFi. The book takes a clear, practical approach to demystify the tech behind NFTs and DAOs as well as the complex regulatory landscape. It confronts challenges of blockchain scalability, a barrier to mainstream adoption of this transformative technology, and examines smart contracts and the growing ecosystem leveraging their potential. The book also explains the nuances of tokenomics, a vital element underpinning Web3's new economic model. This book is ideal for readers seeking to stay on top of emerging trends in the digital economy.
This guide illuminates the intricate relationship between data management, computer architecture, and system software. It traces the evolution of computing to today's data-centric focus and underscores the importance of hardware-software co-design in achieving efficient data processing systems with high throughput and low latency. The thorough coverage includes topics such as logical data formats, memory architecture, GPU programming, and the innovative use of ray tracing in computational tasks. Special emphasis is placed on minimizing data movement within memory hierarchies and optimizing data storage and retrieval. Tailored for professionals and students in computer science, this book combines theoretical foundations with practical applications, making it an indispensable resource for anyone wanting to master the synergies between data management and computing infrastructure.
This chapter introduces social scientific perspectives and methods applicable to observing the relationship between artificial intelligence (AI) and religion. It discusses the contributions that anthropological and sociological approaches can make to this entanglement of two modern social phenomena, while also drawing attention to the inherent biases and perspectives that both fields bring with them due to their histories. Examples of research on religion and AI are highlighted, especially where they demonstrate agile new methodologies for engaging with AI in its many applications, including but not limited to online worlds, multimedia formats, games, social media, and the new spaces created by technological innovation, such as the platforms underpinning the gig economy. All these AI-enabled spaces can be entangled with religious and spiritual conceptions of the world. The chapter also aims to expand upon the relationship between AI and religion as it is perceived as a general concept or object within human society and civilisation. It explains how both anthropology and sociology can provide frameworks for conceptualising that relationship and give us ways to account for our narratives of secularisation – informed by AI development – that see religion as a remnant of a prior, less rational stage of human civilisation.
The use of care robots can reduce the manpower demands of long-term care facilities. Further, care robots serve the needs of both the elders residing in such facilities and their staff. This chapter considers the following issues for care robots. Should long-term care robots be required to meet the high standards that current regulations impose on medical devices? How should standards of use be developed for care robots based on the robots' characteristics? On this question, I note that in Japan a public–private partnership has shown success in the regulation of care robots. In addition, how should we protect the privacy of elders, and of the relatives and friends who come into contact with care robots, given that the elderly may have reduced cognitive ability? And lastly, what legal and ethical concerns apply to the design of the interfaces between care robots and elders?
When a robot harms humans, are there any grounds for holding it criminally liable for its misconduct? Yes, provided that the robot has the ability to form, act on, and explain its moral decisions. If such a robot falls short of the basic moral standards expected by society, labeling it as a criminal can serve criminal law’s function of censuring wrongful conduct and ease the emotional harm suffered by human victims. Moreover, imposing criminal liability on robots could have significant instrumental value in certain cases, such as in identifying culpable humans. However, this does not exempt the manufacturers, trainers, or owners of the robots from any potential criminal liability.
Even the most market-oriented approaches to regulating AI-enabled robots presume some governmental regulator to collaborate in setting outcome goals. And the more advanced an AI-enabled robot becomes, the greater the need for oversight. For the past several decades, regulatory oversight boards have grown in use to promote the quality, transparency, and accountability of regulatory rules and policy. Recently, leading administrative law voices in the United States and the European Union have called for the creation of an oversight board to monitor regulator use of AI entities. How do we determine if these boards are worth the risks they create? To answer this question, this chapter uses the context of AI-enabled robots, which are increasingly prominent in homes, businesses, and education, to explain both when regulatory oversight boards are valuable and when they can frustrate society's efforts to reduce the ill effects of emerging smart robots. Regulatory oversight boards create value in this context by conducting impact assessments of regulatory policies with an eye to the technological advancements and social context relevant to AI technologies such as robots, and oversight boards can promote regulatory foresight. However, oversight boards themselves pose risks. Because they influence the methodological approach used by regulators, errors made by oversight boards can have outsized impacts. To determine whether any given oversight board is worth the risk it creates, this chapter sets out a simple cost-based approach for comparing the risks and benefits of regulatory oversight boards. This approach is then applied to emerging regulatory oversight boards concerned with robots entering society.
The chapter examines a classic subject of HRI, social robotics, and the law: the design, manufacture, and use of humanoid AI robots for healthcare. The aim is to illustrate a new set of legal challenges that are unique to these systems when deployed in outer space. Such challenges may require the adoption of new legal standards, in particular either sui generis standards of space law, or stricter or more flexible standards for HRI in space missions, down to a new "principle of equality" between human standards and robotic standards in outer space. The chapter complements current discussions and initiatives on the development of standards for the use of humanoid AI robots in health law, consumer law, and cybersecurity by considering the realignment of terrestrial standards that we may expect with the increasing use of humanoid AI systems on space journeys. The assumption is that breathtaking advancements in AI and robotics, current trends toward the privatization of space, and the evolution of current regulatory frameworks, in space law and beyond, will put the development of these new legal standards in the spotlight.
This chapter explores the intersection of Hindu philosophy and practice with the development of artificial intelligence (AI). The chapter first introduces aspects of technological growth in Hindu contexts, including the reception of 'Western' ideas about AI in Hindu communities, before describing key elements of the Hindu traditions. It then shows how AI technologies can be conceived of from a Hindu perspective, and moves from there to the philosophical contributions Hinduism offers for global reflection on AI. Specifically, the chapter describes openings and contentions for AI in Hindu rituals. The focus is the use of robotics and/or AI in Hindu pūjā (worship of gods) and the key practice of darśan (mutual seeing) with the divine. Subsequently, the chapter investigates how Hindu philosophers have engaged with the distinctive qualities of human beings and their investigations into bodies, minds, and consciousness/awareness. The chapter concludes by raising questions for future research.
This chapter introduces the construct of anthropomorphism and highlights its relevance for human–robot interaction (HRI) research. It reviews existing definitions of anthropomorphism and distinguishes it from anthropomorphization. It further discusses established theoretical models of anthropomorphism and their respective empirical support (or lack thereof). Moreover, we address the consequences of anthropomorphism, especially for HRI. We shed light on different ways to measure anthropomorphism in HRI, discussing the advantages and disadvantages of each measurement approach. Finally, the present overview offers reflections on the added value of taking anthropomorphism and anthropomorphization into account in HRI research.
Online reviews have a significant impact on the purchasing decisions of potential consumers. Positive reviews often sway buyers, even in the face of higher prices. This phenomenon has given rise to a deceptive industry dedicated to crafting counterfeit reviews. Companies frequently procure bulk fake reviews, employing them to tarnish rivals' reputations or artificially bolster their own credibility. These spurious reviews are produced by automated systems or compensated individuals. Detecting fake reviews is thus becoming increasingly important, as their deceptive nature makes them extremely difficult for humans to identify. To address this issue, current work has focused on machine learning and deep learning techniques for identifying fake reviews. However, these approaches have several limitations, including a lack of sufficient training data, inconsistent accuracy across different datasets, concept drift, and an inability to keep pace with the new methods for creating fake reviews that evolve over time. The objective of this review paper is to find the gaps in existing research on fake review detection and provide future directions. The paper provides an up-to-date, comprehensive overview and analysis of research efforts, covering the techniques employed so far, the distinguishing characteristics utilized, and the existing datasets used.
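As a toy illustration of one signal such detectors commonly exploit (this sketch is not from any of the surveyed systems), bulk review farms often produce near-duplicate text, which a simple Jaccard similarity check over word sets can surface:

```python
# Hypothetical sketch: flag near-duplicate review pairs, a common
# symptom of bulk-produced fake reviews. Real systems combine many
# such signals with learned text features.

def jaccard(a, b):
    """Word-set Jaccard similarity between two review texts."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if (wa | wb) else 0.0

def near_duplicates(reviews, threshold=0.8):
    """Return index pairs of reviews whose similarity meets `threshold`."""
    pairs = []
    for i in range(len(reviews)):
        for j in range(i + 1, len(reviews)):
            if jaccard(reviews[i], reviews[j]) >= threshold:
                pairs.append((i, j))
    return pairs
```

The O(n²) pairwise scan is only for illustration; at scale, techniques such as MinHash-based locality-sensitive hashing are used instead.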
In recent years, the embodiment of AI in the form of a robot has brought forth new challenges in privacy and transparency. A cognitive robot must be able to integrate multiple tasks in its performance, which requires collecting a great amount of data and using various AI techniques. As robotics technology continues to advance, socially assistive humanoid robots will play a more central role in interacting with humans. However, can we trust robots in social contexts? How can we design embedded AI robots so that they are more transparent and trustworthy? And what technical, legal, and ethical frameworks might we adopt to build a trust relationship with robots? This chapter discusses these fundamental questions concerning privacy and transparency in human–robot interaction. It then proposes possible ethical and regulatory responses to address them.