Some of AI's recent achievements stand as extraordinary milestones of its progress, and others have insinuated themselves almost invisibly into our daily routines. In between these extremes, AI programs have become important tools in science and commerce. These three categories provide a useful way of organizing the state of AI today. First, I'll look at some of the headline-making systems appearing just before and just after the beginning of the twenty-first century, beginning with AI game-playing programs.
Games
Although getting computers to excel at games, such as chess and checkers, is thought by some to be a somewhat frivolous diversion from more serious work, computer game-playing has served as a laboratory for exploring new AI techniques – especially in heuristic search and in learning. In a previous chapter, for example, I explained how reinforcement learning methods were used to develop a championship-level backgammon program. From the earliest days of AI, people worked on programs to play chess and checkers, and now, mainly by using massive amounts of heuristically guided computation, computers are able to play these and other games better than humans can.
Chess
The big news in 1997 was the defeat of the world chess champion, Garry Kasparov, by IBM's “Deep Blue” chess-playing computer. (See Fig. 32.1.) The first time Kasparov played Deep Blue, in February 1996, Deep Blue won the first game but lost the match. But on May 11, 1997, a hardware-enhanced 1997 version (unofficially nicknamed “Deeper Blue”) won a six-game match (under regular chess tournament time controls) by two wins to one with three draws.
Just prior to the Dartmouth workshop, Newell, Shaw, and Simon had programmed a version of LT on a computer at the RAND Corporation called the JOHNNIAC (named in honor of John von Neumann). Later papers described how it proved some of the theorems in symbolic logic that were proved by Russell and Whitehead in Volume I of their classic work, Principia Mathematica. LT worked by performing transformations on Russell and Whitehead's five axioms of propositional logic, represented for the computer by “symbol structures,” until a structure was produced that corresponded to the theorem to be proved. Because there are so many different transformations that could be performed, finding the appropriate ones for proving the given theorem involves what computer science people call a “search process.”
To describe how LT and other symbolic AI programs work, I need to explain first what is meant by a “symbol structure” and what is meant by “transforming” them. In a computer, symbols can be combined in lists, such as (A, 7, Q). Symbols and lists of symbols are the simplest kinds of symbol structures. More complex structures are composed of lists of lists of symbols, such as ((B, 3), (A, 7, Q)), and lists of lists of lists of symbols, and so on. Because such lists of lists can be quite complex, they are called “structures.” Computer programs can be written that transform symbol structures into other symbol structures. For example, with a suitable program the structure “(the sum of seven and five)” could be transformed into the structure “(7 + 5),” which could further be transformed into the symbol “12.”
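The transformation of symbol structures described above can be sketched in a few lines of code. This is my own minimal illustration (not LT's actual implementation): structures are represented as nested tuples, and a small rewriting function carries "(the sum of seven and five)" through "(7 + 5)" to "12".

```python
# A minimal sketch of transforming symbol structures, represented here
# as nested tuples. The function and structure names are illustrative,
# not taken from the Logic Theorist itself.

def transform(structure):
    """Apply one rewriting step to a symbol structure."""
    if isinstance(structure, tuple) and structure:
        # ("sum", x, y) -> ("+", x, y): verbal form becomes operator form.
        if structure[0] == "sum":
            return ("+", structure[1], structure[2])
        # ("+", x, y) -> the single symbol for their numeric sum.
        if structure[0] == "+":
            return structure[1] + structure[2]
    return structure  # already as simple as it gets

s = ("sum", 7, 5)   # corresponds to "(the sum of seven and five)"
s = transform(s)    # -> ("+", 7, 5), corresponding to "(7 + 5)"
s = transform(s)    # -> 12
print(s)
```

A search process such as LT's amounts to applying many such transformations, in many possible orders, until a structure matching the goal appears.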
By the early 1980s expert systems and other AI technologies, such as image and speech understanding and natural language processing, were showing great promise. Also, there was dramatic progress in communications technology, computer networks and architectures, and computer storage and processing technologies. Robert Kahn (1938–; Fig. 23.1), who had become Director of DARPA's Information Processing Techniques Office (IPTO) in 1979, began thinking that DARPA should sponsor a major research and development program that would integrate efforts in all of these areas to create much more powerful computer systems. At the same time, there was concern that the Japanese FGCS program could threaten U.S. leadership in computer technology. With these factors as background, Kahn began planning what would come to be called the “Strategic Computing” (SC) program.
Kahn had been a professor at MIT and an engineer at BBN before he joined DARPA's IPTO as a program manager in late 1972. There he initiated and ran DARPA's internetting program, linking the Arpanet along with the Packet Radio and Packet Satellite Nets to form the first version of today's Internet. He and Vinton Cerf, then at Stanford, collaborated on the development of what was to become the basic architecture of the Internet and its “Transmission Control Protocol” (TCP). (TCP was later modularized and became TCP/IP, with IP standing for Internet Protocol.) Cerf joined DARPA in 1976 and led the internetting program until 1982. For their work, Kahn and Cerf shared the 2004 Turing Award of the Association for Computer Machinery.
There have been naysayers from the earliest days of artificial intelligence. Alan Turing anticipated (and dealt with) some of their objections in his 1950 paper. In this chapter, I'll recount some of the controversies surrounding AI – including some not foreseen by Turing. I'll also describe some formidable technical difficulties confronting the field. By the mid-1980s or so, these difficulties had caused some to be rather dismissive about progress up to that time and pessimistic about the possibility of further progress. For example, in wondering about the need for a special issue of the journal Dædalus devoted to AI in 1988, the philosopher Hilary Putnam wrote “What's all the fuss about now? Why a whole issue of Dædalus? Why don't we wait until AI achieves something and then have an issue?”
The attacks and expressions of disappointment from outside the field helped precipitate what some have called an “AI winter.”
Opinions from Various Onlookers
The Mind Is Not a Machine
In the introduction to his edited volume of essays titled Minds and Machines, the philosopher Alan Ross Anderson mentions the following two extreme opinions regarding whether or not the mind is a machine:
(1) We might say that human beings are merely very elaborate bits of clockwork, and that our having “minds” is simply a consequence of the fact that the clockwork is very elaborate, or
(2) we might say that any machine is merely a product of human ingenuity (in principle nothing more than a shovel), and that though we have minds, we cannot impart that peculiar feature of ours to anything except our offspring: no machine can acquire this uniquely human characteristic.
Accompanying the technical progress in artificial intelligence during this period, new conferences and workshops were begun, textbooks were written, and financial support for basic research grew and then waned a bit.
The first large conference devoted exclusively to artificial intelligence was held in Washington, DC, in May 1969. Organized by Donald E. Walker (1928–1993) of the MITRE Corporation and Alistair Holden (1930–1999) of the University of Washington, it was called the International Joint Conference on Artificial Intelligence (IJCAI). It was sponsored by sixteen different technical societies (along with some of their subgroups) from the United States, Europe, and Japan. About 600 people attended the conference, and sixty-three papers were presented by authors from nine different countries. The papers were collected in a proceedings volume, which was made available at the conference to all of the attendees.
Because of the success of this first conference, it was decided to hold a second one in London in 1971. During the early years, organization of the conferences was rather informal, decisions about future conferences being made by a core group of some of the leaders of the field who happened to show up at organizing meetings. At the 1971 meeting in London, I left the room for a moment while people were discussing where and when to hold the next conference.
In September 1948, an interdisciplinary conference was held at the California Institute of Technology (Caltech) in Pasadena, California, on the topics of how the nervous system controls behavior and how the brain might be compared to a computer. It was called the Hixon Symposium on Cerebral Mechanisms in Behavior. Several luminaries attended and gave papers, among them Warren McCulloch, John von Neumann, and Karl Lashley (1890–1958), a prominent psychologist. Lashley gave what some thought was the most important talk at the symposium. He faulted behaviorism for its static view of brain function and claimed that to explain human abilities for planning and language, psychologists would have to begin considering dynamic, hierarchical structures. Lashley's talk laid out the foundations for what would become cognitive science.
The emergence of artificial intelligence as a full-fledged field of research coincided with (and was launched by) three important meetings – one in 1955, one in 1956, and one in 1958. In 1955, a “Session on Learning Machines” was held in conjunction with the 1955 Western Joint Computer Conference in Los Angeles. In 1956, a “Summer Research Project on Artificial Intelligence” was convened at Dartmouth College. And in 1958, a symposium on the “Mechanization of Thought Processes” was sponsored by the National Physical Laboratory in the United Kingdom.
If machines are to become intelligent, they must, at the very least, be able to do the thinking-related things that humans can do. The first steps then in the quest for artificial intelligence involved identifying some specific tasks thought to require intelligence and figuring out how to get machines to do them. Solving puzzles, playing games such as chess and checkers, proving theorems, answering simple questions, and classifying visual images were among some of the problems tackled by the early pioneers during the 1950s and early 1960s. Although most of these were laboratory-style, sometimes called “toy,” problems, some real-world problems of commercial importance, such as automatic reading of highly stylized magnetic characters on bank checks and language translation, were also being attacked. (As far as I know, Seymour Papert was the first to use the phrase “toy problem.” At a 1967 AI workshop I attended in Athens, Georgia, he distinguished among tau or “toy” problems, rho or real-world problems, and theta or “theory” problems in artificial intelligence. This distinction still serves us well today.)
In this part, I'll describe some of the first real efforts to build intelligent machines. Some of these were discussed or reported on at conferences and symposia – making these meetings important milestones in the birth of AI. I'll also do my best to explain the underlying workings of some of these early AI programs.
The architectures and mechanisms underlying language processing form one important part of the general structure of cognition. This book, written by leading experts in the field, brings together linguistic, psychological and computational perspectives on some of the fundamental issues. Several general introductory chapters offer overviews on important psycholinguistic research frameworks and highlight both shared assumptions and controversial issues. Subsequent chapters explore syntactic and lexical mechanisms; statistical and connectionist models of language understanding; the crucial importance of linguistic representations in explaining behavioural phenomena; evidence from a variety of studies and methodologies concerning the interaction of syntax and semantics; and the implications for cognitive architecture. The book concludes with a set of contributions on select issues of interpretation, including quantification, focus and anaphora in language understanding. Architectures and Mechanisms for Language Processing will appeal to students and scholars alike as a comprehensive and timely survey of recent work in this interdisciplinary area.
Lexical semantics has become a major research area within computational linguistics, drawing from psycholinguistics, knowledge representation, computer algorithms and architecture. Research programmes whose goal is the definition of large lexicons are asking what the appropriate representation structure is for different facets of lexical information. Among these facets, semantic information is probably the most complex and the least explored. Computational Lexical Semantics is one of the first volumes to provide models for the creation of various kinds of computerized lexicons for the automatic treatment of natural language, with applications to machine translation, automatic indexing, database front-ends, and knowledge extraction, among other things. It focuses on semantic issues, as seen by linguists, psychologists and computer scientists. Besides describing academic research, it also covers ongoing industrial projects.
In this book, Michael Arbib, a researcher in artificial intelligence and brain theory, joins forces with Mary Hesse, a philosopher of science, to present an integrated account of how humans 'construct' reality through interaction with the social and physical world around them. The book is a major expansion of the Gifford Lectures delivered by the authors at the University of Edinburgh in the autumn of 1983. The authors reconcile a theory of the individual's construction of reality as a network of schemas 'in the head' with an account of the social construction of language, science, ideology and religion to provide an integrated schema-theoretic view of human knowledge. The authors still find scope for lively debate, particularly in their discussion of free will and of the reality of God. The book integrates an accessible exposition of background information with a cumulative marshalling of evidence to address fundamental questions concerning human action in the world and the nature of ultimate reality.
In recent years, the Internet has come to dominate our lives. E-mail, instant messaging and chat are rapidly replacing conventional forms of correspondence, and the Web has become the first port of call for both information enquiry and leisure activity. How is this affecting language? There is a widespread view that as 'technospeak' comes to rule, standards will be lost. In this book, David Crystal argues the reverse: that the Internet has encouraged a dramatic expansion in the variety and creativity of language. Covering a range of Internet genres, including e-mail, chat, and the Web, this is a revealing account of how the Internet is radically changing the way we use language. This second edition has been thoroughly updated to account for more recent phenomena, with a brand new chapter on blogging and instant messaging. Engaging and accessible, it will continue to fascinate anyone who has ever used the Internet.
This is a book about a gambling system that works. It tells the story of how the author used computer simulations and mathematical modeling techniques to predict the outcome of jai-alai matches and bet on them successfully - increasing his initial stake by over 500% in one year! His results can work for anyone: at the end of the book he tells the best way to watch jai-alai, and how to bet on it. With humour and enthusiasm, Skiena details a life-long fascination with computer predictions and sporting events. Along the way, he discusses other gambling systems, both successful and unsuccessful, for such games as lotto, roulette, blackjack, and the stock market. Indeed, he shows how his jai-alai system functions just like a miniature stock trading system. Do you want to learn about program trading systems, the future of Internet gambling, and the real reason brokerage houses don't offer mutual funds that invest at racetracks and frontons? How mathematical models are used in political polling? The difference between correlation and causation? If you are curious about gambling and mathematics, odds are this book is for you!
Mark Davison examines several legal models designed to protect databases, considering in particular the EU Directive, the history of its adoption and its transposition into national laws. He compares the Directive with a range of American legislative proposals, as well as the principles of misappropriation that underpin them. In addition, the book also contains a commentary on the appropriateness of the various models in the context of moves for an international agreement on the topic. This book will be of interest to academics and practitioners, including those involved with databases and other forms of new media.
The brute force algorithm for an optimization problem is to simply compute the cost or value of each of the exponential number of possible solutions and return the best. A key problem with this algorithm is that it takes exponential time. Another (not obviously trivial) problem is how to write code that enumerates over all possible solutions. Often the easiest way to do this is recursive backtracking. The idea is to design a recurrence relation that says how to find an optimal solution for one instance of the problem from optimal solutions for some number of smaller instances of the same problem. The optimal solutions for these smaller instances are found by recursing. After unwinding the recursion tree, one sees that recursive backtracking effectively enumerates all options. Though the technique may seem confusing at first, once you get the hang of recursion, it really is the simplest way of writing code to accomplish this task. Moreover, with a little insight one can significantly improve the running time by pruning off entire branches of the recursion tree. In practice, if the instance that one needs to solve is sufficiently small and has enough structure that a lot of pruning is possible, then an optimal solution can be found for the instance reasonably quickly. For some problems, the set of subinstances that get solved in the recursion tree is sufficiently small and predictable that the recursive backtracking algorithm can be mechanically converted into a quick dynamic programming algorithm. See Chapter 18. In general, however, for most optimization problems, for large worst case instances, the running time is still exponential.
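The recursive backtracking idea above can be made concrete with a small example. The following sketch (problem choice and names are my own, not from the text) enumerates subsets of items to maximize total value under a weight limit; the feasibility check prunes an entire branch of the recursion tree rather than recursing into solutions that cannot fit.

```python
# Recursive backtracking for a toy optimization problem: choose a
# subset of (weight, value) items maximizing total value within a
# weight capacity. Each call solves a smaller instance (items[i:]),
# exactly as the recurrence-relation view describes.

def best_value(items, capacity, i=0):
    """Best achievable value using items[i:] with the remaining capacity."""
    if i == len(items):
        return 0  # no items left: the empty solution has value 0
    weight, value = items[i]
    # Option 1: leave item i out and recurse on the smaller instance.
    best = best_value(items, capacity, i + 1)
    # Option 2: take item i -- but prune this whole branch when the
    # item cannot fit, skipping every infeasible completion at once.
    if weight <= capacity:
        best = max(best, value + best_value(items, capacity - weight, i + 1))
    return best

items = [(3, 4), (4, 5), (2, 3)]       # (weight, value) pairs
print(best_value(items, capacity=6))   # -> 8 (items of weight 4 and 2)
```

Because the same subinstance (a suffix of the items plus a remaining capacity) can recur in the tree, this is also the kind of backtracking that converts mechanically into a dynamic programming algorithm, as noted above.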