The importance of automating pavement maintenance tasks for highway systems has garnered interest from both industry and academia. Despite significant research efforts and promising demonstrations devoted to reaching a level of semi-automation featuring digital sensing and inspection, on-site maintenance work still relies on manual processes using special vehicles and equipment, reflecting a clear gap on the path to fully autonomous maintenance. This paper reviews current progress in pavement maintenance automation in terms of inspection and repair operations, followed by a discussion of three key technical challenges related to robotic sensing, control, and actuation. To address these challenges, we propose a conceptual solution we term the Autonomous Maintenance Plant (AMP), consisting mainly of five modules for sensing, actuation, control, power supply, and mobility. The AMP concept is part of the “Digital Roads” project’s cyber-physical platform, in which a road digital twin (DT) is created from its physical counterpart to enable real-time condition monitoring, sensory data processing, maintenance decision making, and repair operation execution. In this platform, the AMP conducts high-resolution surveys and autonomous repair operations enabled (instructed) by the road DT. The process is unmanned and fully autonomous, with the aim of creating a completely robotized highway pavement maintenance system.
It’s less than a year since OpenAI’s board voted to fire Sam Altman as CEO, in a palace coup that lasted just a weekend before Altman was reinstated. That weekend and subsequent events in OpenAI’s storyline provide all the ingredients for a soap opera. So, just in case Netflix is interested, here’s a stab at a synopsis of what might be just the first of many seasons of ‘The Generative AI Wars’.
This article outlines a human-centered approach to developing digital patient stories that share patients’ experiences of health care while preserving the privacy of patients and others. Employing a research-through-design approach, the study proposes a design solution that uses visualization and digital storytelling to document patients’ and families’ experiences and emotions, as well as their interactions with healthcare professionals in the postnatal unit. By transforming selected observational data into animated stories, this approach has the potential to elicit empathy, stimulate stakeholder engagement, and serve as a practical training tool for clinicians. The work was conducted as part of a broader study that aims to advance our understanding of stakeholder needs in birthing facilities and through postpartum discharge. This study focuses primarily on strategies for developing digital stories and summarizes the factors that contributed to their production in the context of sensitive data. It may serve as a valuable resource for students, researchers and practitioners interested in using digital stories to encourage discussion and education and, ultimately, to enhance health care systems grounded in respect, equity and support.
Data mining and techniques for analyzing big data play a crucial role in various practical fields, including financial markets. However, only a few quantitative studies have focused on predicting daily stock market returns, and the data mining methods used in previous studies are either incomplete or inefficient. This study applied the FPC clustering algorithm, alongside prominent clustering algorithms such as K-means, IPC, FDPC, and GOPC, to cluster stock market data. The data comprise records of cement companies listed on the Tehran Stock Exchange; their capital returns and price fluctuations are examined and analyzed to guide investment decisions. The analysis extracts the stock market data of these companies over the past two years. The companies are then categorized on two criteria, profitability percentage and short-term and long-term price fluctuations, using the FPC clustering algorithm and the algorithms mentioned above. The results of the clustering analyses are compared against each other using standard, recognized evaluation criteria to assess clustering quality. The findings indicate that the FPC algorithm provides more favorable results than the other algorithms. Based on the results, companies demonstrating profitability, stability, or loss within short-term (weekly and monthly) and long-term (three-month, six-month, and one-year) time frames are placed within their respective clusters and introduced accordingly.
Under the umbrella concepts of upscaling and emerging technology, a wide variety of phenomena related to technology development and deployment in society are examined to meet societal imperatives (e.g., environment, safety, social justice). The design literature does not provide an explicit common theoretical and practical framework clarifying how to assess “an” upscaling. In this nebulous context, designers struggle to identify the characteristics needed to anticipate the consequences of emerging technology upscaling. This article therefore first proposes a structuring framework to analyze the literature across a wide range of industrial sectors (energy, chemistry, building, etc.). This characterization brought to light five prevalent archetypes that clarify the concepts of upscaling and emerging technology. A synthesis of invariants and methodological requirements for designers is then proposed to address upscaling assessment for each archetype, based on a literature review of existing design methods. The review showed a disparity in how some archetypes are treated depending on the industrial sector. A discussion is consequently proposed in the conclusion to guide design practices.
To improve understanding of prototyping practice at the fuzzy front end of the design process, this article presents an analysis of a prototyping dataset captured during the IDEA challenge – a 4-day virtually hosted hackathon – using Pro2booth, a web-based prototype capture tool. The dataset comprised 203 prototypes created by four independent teams working in university labs across Europe supported by interviews carried out with each of the teams after the event. The results of the study provide nine key findings about prototyping at hackathons. These include elucidation of the purposes of prototypes in physical, digital and sketch domains and characterisation of teams’ prototyping practices and strategies. The most successful strategy focused on learning about the problem or solution space, often via physical prototypes rather than following more prescriptive ‘theoretical’ methodologies. Recommendations on prototyping strategies in hackathons or similar scenarios are provided, highlighting the importance of practical strategies that prioritise learning and adaptation. The results of this study raise the broader question to the wider research community of how design research and teaching should balance high-level strategic approaches with more hands-on ‘operational’ prototyping.
While governments have long discussed the promise of delegating important decisions to machines, actual use often lags. Consequently, we know little about the variation in the deployment of such delegations in large numbers of similar governmental organizations. Using data from crime laboratories in the United States, we examine the uneven distribution over time of a specific, well-known expert system for ballistics imaging for a large sample of local and regional public agencies; an expert system is an inference engine joined with a knowledge base. Our statistical model is informed by the push-pull-capability theory of innovation in the public sector. We test hypotheses about the probability of deployment and provide evidence that the use of this expert system varies with the pull of agency task environments and the enabling support of organizational resources—and that the impacts of those factors have changed over time. Within this context, we also present evidence that general knowledge of the use of expert systems has supported the use of this specific expert system in many agencies. This empirical case and this theory of innovation provide broad evidence about the historical utilization of expert systems as algorithms in public sector applications.
This book is meant for the serious practitioner-to-be of constructing intelligent machines: machines that are aware of the world around them, that have goals to achieve, and that have the ability to imagine the future and make appropriate choices to achieve those goals. It is an introduction to a fundamental building block of artificial intelligence (AI). As the book shows, search is central to intelligence.
Clearly AI is not one monolithic algorithm but a collection of processes working in tandem, an idea espoused by Marvin Minsky in his book The Society of Mind (1986). Human problem solving has three critical components: the ability to make use of experiences stored in memory; the ability to reason and make inferences from what one knows; and the ability to search through the space of possibilities. This book focuses on the last of these. In the real world we sense our surroundings using vision, sound, touch, and smell, and an autonomous agent will need to be able to do so as well. Language, and the written word, is perhaps a distinguishing feature of the human species. It is the key to communication, which means that human knowledge becomes pervasive and is shared with future generations. The development of the mathematical sciences has sharpened our understanding of the world and allows us to compute probabilities over choices to take calculated risks. All these abilities and more are needed by an autonomous agent.
Can one massive neural network be the embodiment of AI? Certainly, the human brain as a seat of intelligence suggests that. Everything we humans do has its origin in activity in our brains, which we call the mind. Perched on the banks of a stream in the mountains we perceive the world around us and derive a sense of joy and well-being. In a fit of contented creativity, we may pen an essay or a poem using our faculty of language. We may call a friend on the phone and describe the scene around us, allowing the friend to visualize the serene surroundings. She may reflect upon her own experiences and recall a holiday she had on the beach. You might start humming your favourite song and then be suddenly jolted out of your reverie remembering that friends are coming over for dinner. You get up and head towards your home with cooking plans brewing in your head.
Having introduced the machinery needed for search in the last chapter, we look at approaches to informed search. The algorithms introduced in the last chapter were blind, or uninformed, taking no cognizance at all of the actual problem instance to be solved and behaving in the same bureaucratic manner wherever the goal might be. In this chapter we introduce the idea of heuristic search, which uses domain specific knowledge to guide exploration. This is done by devising a heuristic function that estimates the distance to the goal for each candidate in OPEN.
When heuristic functions are not very accurate, search complexity is still exponential, as revealed by experiments. We then investigate local search methods that do not maintain an OPEN list, and study gradient based methods to optimize the heuristic value.
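The following is a minimal sketch of how such a heuristic-guided best first search might be organized in Python. It is illustrative only: the `neighbours` and `heuristic` functions are assumed to be supplied by the user, and nodes are assumed to be hashable values such as strings or tuples.

```python
import heapq
from itertools import count

def best_first_search(start, goal, neighbours, heuristic):
    """Greedy best first search: always expand the node on OPEN whose
    heuristic estimate h(n) of the distance to the goal is smallest.
    `neighbours(node)` and `heuristic(node)` are user-supplied."""
    ties = count()                            # tiebreaker so nodes are never compared directly
    open_list = [(heuristic(start), next(ties), start)]
    parent = {start: None}                    # also serves as the set of seen nodes
    while open_list:
        _, _, node = heapq.heappop(open_list)
        if node == goal:                      # reconstruct the path that was found
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return list(reversed(path))
        for nbr in neighbours(node):
            if nbr not in parent:             # generate each node at most once
                parent[nbr] = node
                heapq.heappush(open_list, (heuristic(nbr), next(ties), nbr))
    return None                               # goal not reachable
```

Because the priority here is h(n) alone, this greedy variant follows whichever node looks closest to the goal; it is often fast but is not guaranteed to find an optimal solution.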
Knowledge is necessary for intelligence. Without knowledge, problem solving with search is blind. We saw this in the last chapter. In general, knowledge is that sword in the armoury of a problem solver that can cut through the complexity. Knowledge accrues over time, either distilled from our own experiences or assimilated from interaction with others – parents, teachers, authors, coaches, and friends. Knowledge is the outcome of learning and exists in diverse forms, varying from tacit to explicit. When we learn to ride a bicycle, we know it but are unable to articulate our knowledge. We are concerned with explicit knowledge. Most textbook knowledge is explicit, for example, knowing how to implement a leftist heap data structure.
In a well known incident from ancient Greece, it is said that Archimedes, considered by many to be the greatest scientist of the third century BC, ran naked onto the streets of Syracuse. King Hieron II suspected that a goldsmith had cheated him by adulterating a bar of gold given to him for making a crown. He asked Archimedes to investigate without damaging the crown. Stepping into his bathtub, Archimedes noticed the water spilling out and realized in a flash that if the gold had been adulterated with silver, the crown would displace more water, since silver is less dense. This was his epiphany moment, when he discovered what we now know as the Archimedes principle, and he ran onto the streets shouting ‘Eureka, eureka!’ We now call such an enlightening moment a Eureka moment!
Within Holocaust studies, there has been an increasingly uncritical acceptance that, by engaging with social media, Holocaust memory has shifted from the ‘era of the witness’ to the ‘era of the user’ (Hogervorst 2020). This paper starts by problematising this proposition. The claim to a paradigmatic shift implies that (1) the user somehow replaces the witness as an authority of memory, which neglects the wealth of digital recordings of witnesses now circulating in digital spaces, and (2) agency online is solely human-centric, a position that ignores the complex negotiations between corporations, individuals, and computational logics that shape our digital experiences. This article proposes instead that we take a posthumanist approach to understanding Holocaust memory on, and with, social media. Adapting Barad's (2007) work on entanglement to memory studies, we analyse two case studies on TikTok, the #WeRemember campaign and the docuseries How To: Never Forget, to demonstrate (1) the usefulness of reading Holocaust memory on social media through the lens of entanglement, which offers a methodology that accounts for the complex network of human and non-human actants involved in producing this phenomenon while simultaneously being shaped by it, and (2) that professional memory institutions and organisations increasingly acknowledge the use of social media for the sake of Holocaust memory. Nevertheless, we observe that in practice the significance of technical actancy is still undervalued in this context.
Design occurs in complex socio-technical contexts with conflicting stakeholder goals, requirements and other constraints. These limit solution options and create trade-offs where improvements relative to one goal come at the expense of performance on another. Little is known about how the design context influences trade-offs, or how designers interact with context to manage trade-offs. This article reports on an exploratory qualitative interview study investigating design trade-offs in relation to their socio-technical context. We identified nine themes reflecting engineering designers’ perceptions of the influence of the design context on their ability to resolve trade-offs. Findings suggest that the design context is both a source of trade-offs, and of knowledge and information that helps designers clarify ambiguous requirements to navigate and resolve trade-offs. The results provide insight into how designers interact with the design context to learn about the structure of their design problems and the degrees of freedom available to resolve trade-offs. The findings also contribute to understanding the effects of path dependencies in trade-off situations, and how the sequential distribution of design decisions over time constrains trade-off resolution. We discuss some of the goals and challenges of conducting rigorous qualitative research in design and identify potential directions for further research.
We will adopt the overall goal of artificial intelligence (AI) to be ‘to build machines with minds, in the full and literal sense’, as prescribed by the American philosopher John Haugeland (1985).
The aim is not to create machines with a clever imitation of human-like intelligence, or machines that exhibit behaviours that would be considered intelligent if done by humans, but to build machines that reason.
This book focuses on search methods for problem solving. We expect the user to define the goals to be achieved and the domain description, including the moves available to the machine. The machine then finds a solution employing first principles methods based on search, a process of trial and error. The ability to explore different options is fundamental to thinking.
As we describe subsequently, such methods are only some of the many in the armoury of an intelligent agent. Understanding and representing the world, learning from past experiences, and communicating in natural language are other equally important abilities, but they are beyond the scope of this book. We also do not assume that the agent has the meta-level abilities of being self-aware and having goals of its own. While these have philosophical value, our goal is to make machines do something useful, with as general a problem solving approach as possible.
This and other definitions of what AI is do not prescribe how to test if a machine is intelligent. In fact, there is no clear-cut universally accepted definition of intelligence. To put an end to the endless debates on machine intelligence that ensued, the brilliant scientist Alan Turing proposed a behavioural test.
Can Machines Think?
Ever since the possibility of building intelligent machines arose, there have been raging debates on whether machine intelligence is possible or not. All kinds of arguments have been put forth both for and against the possibility. It was perhaps to put an end to these arguments that Alan Turing (1950) proposed his famous imitation game, which we now call the Turing Test. The test is simply this: if a machine interacts with a human using text messages and can fool human judges, a sufficiently large fraction of the time, into believing that they are chatting with another human, then we can say that the machine has passed the test and is intelligent.
Finding a solution is one aspect of problem solving. Executing it is another. In certain applications the cost of executing the solution is important: for example, maintaining supplies to the International Space Station, a repetitive task, or sending a rocket to Jupiter, an infrequent activity. Coming down to Earth, the manufacturing industry needs to manage its supplies, inventory, scheduling, and shipping of products. At home, juggling the morning activities of cooking, sending the kids off to school, and heading for the office after grabbing a coffee and a bite could do with optimized processes.
In this chapter we look at the algorithm A* for finding optimal solutions. It is a heuristic search algorithm that guarantees an optimal solution. It does so by combining the goal seeking of best first search with a tendency to keep as close to the source as possible. We begin by looking at the algorithm branch & bound that focuses only on the latter, before incorporating the heuristic function.
We revert to graph search for the study of algorithms that guarantee optimal solutions. The task is to find a shortest path in a graph from a start node to a goal node. We have already studied the algorithms BFS and DFID in Chapter 3, where the key idea was to extend the partial path that was the shortest. We begin with the same strategy, except that now we add weights to the edges of the graph. Without edge weights, the optimal or shortest path is the one with the fewest edges; with edge weights added, we modify this notion to the sum of the weights on the edges of the path.
The common theme continuing in our search algorithms is as follows:
Pick the best node from OPEN and extend it, till you pick the goal node.
The question that remains is the definition of ‘best’. In DFS, the deepest node is the best node. In BestFirstSearch, the node that appears to be closest to the goal is the best. In BFS, the node closest to the start node is the best. We begin by extending the idea behind breadth first search.
We can generalize our common theme as follows. With every node N on OPEN, we associate a number that stands for the estimated cost of the final solution.
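As a concrete, hedged reading of this generalization, the Python sketch below attaches to each node the estimate f(n) = g(n) + h(n), the cost accrued from the start plus a heuristic estimate of the cost remaining to the goal, and always picks the node with the smallest f from OPEN. The functions `neighbours`, `edge_cost`, and `heuristic` are assumed to be user-supplied; setting the heuristic to zero everywhere recovers the purely cost-driven behaviour of branch & bound mentioned above.

```python
import heapq
from itertools import count

def a_star(start, goal, neighbours, edge_cost, heuristic):
    """Pick from OPEN the node with the smallest f(n) = g(n) + h(n):
    cost accrued so far plus the estimated cost to go."""
    g = {start: 0}                                  # cheapest known cost from the start
    parent = {start: None}
    ties = count()
    open_list = [(heuristic(start), next(ties), start)]
    while open_list:
        _, _, node = heapq.heappop(open_list)
        if node == goal:                            # first time the goal is picked
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return g[goal], list(reversed(path))
        for nbr in neighbours(node):
            tentative = g[node] + edge_cost(node, nbr)
            if nbr not in g or tentative < g[nbr]:  # found a cheaper path to nbr
                g[nbr] = tentative
                parent[nbr] = node
                heapq.heappush(open_list, (tentative + heuristic(nbr), next(ties), nbr))
    return None
```

When the heuristic never overestimates the remaining cost, the path returned the first time the goal is picked is an optimal one.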
Search spaces can be huge. The number of choices faced by a search algorithm can grow exponentially. We have named this combinatorial explosion, the principal adversary of search, CombEx. In Chapter 4 we looked at one strategy to battle CombEx, the use of knowledge in the form of heuristic functions – knowledge that would point towards the goal node. Yet, for many problems, such heuristics are hard to acquire and often inadequate, and algorithms continue to demand exponential time.
In this chapter we introduce stochastic moves to add an element of randomness to search. Exploiting the gradient deterministically has its drawbacks when heuristic functions are imperfect, as they often are: the steepest gradient can lead to the nearest local optimum and end there. We therefore add a tendency to explore, which can drag the search away from the path to a local optimum.
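One well-known way to realize such stochastic moves is a simulated-annealing-style acceptance rule. The sketch below is illustrative rather than a statement of this chapter's exact algorithms: the `value` and `random_neighbour` functions, the temperature schedule, and the parameter values are all assumptions made for the example.

```python
import math
import random

def anneal(start, value, random_neighbour, temperature=10.0, cooling=0.99, steps=10000):
    """Stochastic local search (maximization): always accept uphill moves,
    and accept downhill moves with a probability that shrinks as the
    temperature decreases, allowing escape from local optima."""
    current, best = start, start
    t = temperature
    for _ in range(steps):
        candidate = random_neighbour(current)
        delta = value(candidate) - value(current)
        if delta > 0 or random.random() < math.exp(delta / t):
            current = candidate
            if value(current) > value(best):
                best = current
        t *= cooling                         # gradually reduce exploration
    return best
```

Uphill moves are always accepted, while downhill moves are accepted with a probability that decays as the temperature falls, which is what allows the search to wander away from local optima early on.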
We also look at the power of many for problem solving, as opposed to a sole crusader. Population based methods have given a new dimension to solving optimization problems.
Douglas Hofstadter says that humans are not known to have a head for numbers (Hofstadter, 1996). For most of us, the numbers 3.2 billion and 5.3 million seem vaguely similar and big. A very popular book (Gamow, 1947) was titled One, Two, Three … Infinity. The author, George Gamow, talks about the Hottentot tribes, who had only the numbers one, two, and three in their vocabulary, and beyond that used the word many. Bill Gates is famously reputed to have said, ‘Most people overestimate what they can do in one year and underestimate what they can do in ten years.’
So, how big is big? Why are computer scientists wary of combinatorial growth? In Table 2.1 we looked at the exponential function 2^N and the factorial N!, which are respectively the sizes of the search spaces for SAT and TSP with N variables or cities. How long will it take to inspect all the states when N = 50?
For a SAT problem with 50 variables, 2^50 = 1,125,899,906,842,624. How big is that? Let us say we can inspect a million, or 10^6, nodes a second. We would then need 1,125,899,906.8 seconds, which is about 35.7 years! For TSP, there are N! = 3.041409320 × 10^64 non-distinct tours of 50 cities (each distinct tour has 2N representations).
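The arithmetic is easy to check with a few lines of Python; the snippet below simply reproduces the figures quoted above, assuming the same inspection rate of a million nodes per second.

```python
import math

states_sat = 2 ** 50                 # SAT search space with 50 variables
print(states_sat)                    # 1125899906842624

rate = 10 ** 6                       # nodes inspected per second
seconds = states_sat / rate
print(seconds / (365 * 24 * 3600))   # roughly 35.7 years

tours_tsp = math.factorial(50)       # non-distinct tours of 50 cities
print(f"{tours_tsp:.9e}")            # about 3.041409320e+64
```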
In this chapter we lay the foundations of problem solving using first principles. The first principles approach requires that the agent represent the domain in some way and investigate the consequences of its actions by simulating the actions on these representations. The representations are often referred to as models of the domain and the simulations as search. This approach is also known as model based reasoning, as opposed to problem solving using memory or knowledge, which, incidentally, has its own requirements of searching over representations, but at the level of retrieval rather than problem solving.
We begin with the notion of a state space and then look at the notion of search spaces from the perspective of search algorithms. We characterize problems as planning problems and configuration problems, and the corresponding search spaces that are natural to them. We also present two iconic problems, the Boolean satisfiability problem (SAT) and the travelling salesman problem (TSP), among others.
In this chapter we lay the foundations of the search spaces that an agent would explore.
First, we imagine the space of possibilities. Next, we look at a mechanism to navigate this space. And then in the chapters that follow we figure out what search strategy an algorithm can use to do so efficiently.
Our focus is on creating domain independent solvers, or agents, which can be used to solve a variety of problems. We expect that the users of our solvers will implement some domain specific functions in a specified form that will create the domain specific search space for our domain independent algorithms to search in. In effect, these domain specific functions create the space, which our algorithm will view as a graph over which to search. But the graph is not supplied to the search algorithm upfront. Rather, it is constructed on the fly during search. This is done by the user supplied neighbourhood function that links a node in this graph to its neighbours, generating them when invoked. The neighbourhood function takes a node as an input and computes, or returns, the set of neighbours in the abstract graph for the search algorithm to search in.
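As a hedged illustration of this contract, the Python snippet below sketches what a user-supplied neighbourhood function might look like for SAT, with a state encoded as a tuple of truth values and each neighbour obtained by flipping one variable. The encoding and the function name are assumptions made for the example, not an interface prescribed by the book.

```python
def sat_neighbours(state):
    """Neighbourhood function for SAT: a state is a tuple of truth values,
    and each neighbour is obtained by flipping exactly one variable.
    The search algorithm calls this to generate the graph on the fly."""
    return [state[:i] + (not state[i],) + state[i + 1:] for i in range(len(state))]

# Example: the 3-variable assignment (True, False, True) has 3 neighbours.
print(sat_neighbours((True, False, True)))
```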
The earliest programs were entirely hand coded. Both the algorithm and the knowledge that the algorithm embodied were created manually. Machines that learn were always on the wish list, though. One of the earliest reported learning programs was the checkers playing program by Arthur Samuel, which went on to beat its creator, evoking the spectre of Frankenstein's monster, a fear that still echoes today among some. Since then machine learning (ML) has steadily advanced due to three factors: first, the availability of vast amounts of data that the internet has made possible; second, the tremendous increase in computing power available; and third, a continuous evolution of algorithms. But the core of ML is to process data using first principles and incrementally build models of the domain that the data comes from. In this chapter we look at this process.
The computer is ideally suited to learning: it can never forget. The key is to incorporate a ratchet mechanism à la natural selection – a mechanism to encapsulate the lessons learnt into a usable form, a model. Robustness demands that one build in the ability to withstand occasional mistakes, because the outlier must not become the norm.
Children, doctors, and machines – they all learn. A toddler touches a piece of burning firewood and is forced to withdraw her hand immediately. She learns to curb her curiosity and pay heed to adult supervision. As she grows up, she picks up motor skills like cycling and learns new languages. Doctors learn from their experience and become experts at their job – in fact, the words ‘expert’ and ‘experience’ are derived from the same root. The smartphone you hold in your hand learns to recognize your voice and handwriting and also tracks your preferences for recommending books, movies, and food outlets in ways that often leave you pleasantly surprised. This chapter is about how we can make machines learn. We also illustrate how such learning is intimately related to the broader class of search methods explored in the rest of this book.
Let us consider a simple example: the task of classifying an email as spam or non-spam. Given the ill-defined nature of the problem, it is hard for us to arrive at a comprehensive set of rules that can do this discrimination.
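To make the contrast with hand-written rules concrete, here is a deliberately small sketch of a classifier that learns word frequencies from labelled examples instead of relying on rules. The tiny training set, the tokenization, and the add-one smoothing are all assumptions made for the illustration, not a method prescribed by the text.

```python
from collections import Counter
import math

def train(examples):
    """Count word occurrences per class from (text, label) pairs."""
    counts = {"spam": Counter(), "ham": Counter()}
    labels = Counter()
    for text, label in examples:
        labels[label] += 1
        counts[label].update(text.lower().split())
    return counts, labels

def classify(text, counts, labels):
    """Naive-Bayes-style scoring with add-one smoothing."""
    vocab = set(counts["spam"]) | set(counts["ham"])
    best, best_score = None, float("-inf")
    for label in counts:
        score = math.log(labels[label] / sum(labels.values()))
        total = sum(counts[label].values())
        for word in text.lower().split():
            score += math.log((counts[label][word] + 1) / (total + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

examples = [("win a free prize now", "spam"),
            ("meeting agenda for monday", "ham"),
            ("free offer claim your prize", "spam"),
            ("lunch on monday?", "ham")]
counts, labels = train(examples)
print(classify("claim your free prize", counts, labels))   # likely 'spam'
```

Even this toy learner adapts as more labelled examples arrive, which is exactly what a hand-crafted rule set struggles to do.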