Accompanying the technical progress in artificial intelligence during this period, new conferences and workshops were begun, textbooks were written, and financial support for basic research grew and then waned a bit.
The first large conference devoted exclusively to artificial intelligence was held in Washington, DC, in May 1969. Organized by Donald E. Walker (1928–1993) of the MITRE Corporation and Alistair Holden (1930–1999) of the University of Washington, it was called the International Joint Conference on Artificial Intelligence (IJCAI). It was sponsored by sixteen different technical societies (along with some of their subgroups) from the United States, Europe, and Japan. About 600 people attended the conference, and sixty-three papers were presented by authors from nine different countries. The papers were collected in a proceedings volume, which was made available at the conference to all of the attendees.
Because of the success of this first conference, it was decided to hold a second one in London in 1971. During the early years, organization of the conferences was rather informal, with decisions about future conferences made by a core group of leaders of the field who happened to show up at organizing meetings. At the 1971 meeting in London, I left the room for a moment while people were discussing where and when to hold the next conference.
In September 1948, an interdisciplinary conference was held at the California Institute of Technology (Caltech) in Pasadena, California, on the topics of how the nervous system controls behavior and how the brain might be compared to a computer. It was called the Hixon Symposium on Cerebral Mechanisms in Behavior. Several luminaries attended and gave papers, among them Warren McCulloch, John von Neumann, and Karl Lashley (1890–1958), a prominent psychologist. Lashley gave what some thought was the most important talk at the symposium. He faulted behaviorism for its static view of brain function and claimed that to explain human abilities for planning and language, psychologists would have to begin considering dynamic, hierarchical structures. Lashley's talk laid out the foundations for what would become cognitive science.
The emergence of artificial intelligence as a full-fledged field of research coincided with (and was launched by) three important meetings – one in 1955, one in 1956, and one in 1958. In 1955, a “Session on Learning Machines” was held in conjunction with the 1955 Western Joint Computer Conference in Los Angeles. In 1956, a “Summer Research Project on Artificial Intelligence” was convened at Dartmouth College. And in 1958, a symposium on the “Mechanization of Thought Processes” was sponsored by the National Physical Laboratory in the United Kingdom.
If machines are to become intelligent, they must, at the very least, be able to do the thinking-related things that humans can do. The first steps, then, in the quest for artificial intelligence involved identifying some specific tasks thought to require intelligence and figuring out how to get machines to do them. Solving puzzles, playing games such as chess and checkers, proving theorems, answering simple questions, and classifying visual images were among the problems tackled by the early pioneers during the 1950s and early 1960s. Although most of these were laboratory-style, sometimes called “toy,” problems, some real-world problems of commercial importance, such as automatic reading of highly stylized magnetic characters on bank checks and language translation, were also being attacked. (As far as I know, Seymour Papert was the first to use the phrase “toy problem.” At a 1967 AI workshop I attended in Athens, Georgia, he distinguished among tau or “toy” problems, rho or real-world problems, and theta or “theory” problems in artificial intelligence. This distinction still serves us well today.)
In this part, I'll describe some of the first real efforts to build intelligent machines. Some of these were discussed or reported on at conferences and symposia – making these meetings important milestones in the birth of AI. I'll also do my best to explain the underlying workings of some of these early AI programs.
The architectures and mechanisms underlying language processing form one important part of the general structure of cognition. This book, written by leading experts in the field, brings together linguistic, psychological and computational perspectives on some of the fundamental issues. Several general introductory chapters offer overviews on important psycholinguistic research frameworks and highlight both shared assumptions and controversial issues. Subsequent chapters explore syntactic and lexical mechanisms; statistical and connectionist models of language understanding; the crucial importance of linguistic representations in explaining behavioural phenomena; evidence from a variety of studies and methodologies concerning the interaction of syntax and semantics; and the implications for cognitive architecture. The book concludes with a set of contributions on select issues of interpretation, including quantification, focus and anaphora in language understanding. Architectures and Mechanisms for Language Processing will appeal to students and scholars alike as a comprehensive and timely survey of recent work in this interdisciplinary area.
Lexical semantics has become a major research area within computational linguistics, drawing from psycholinguistics, knowledge representation, computer algorithms and architecture. Research programmes whose goal is the definition of large lexicons are asking what the appropriate representation structure is for different facets of lexical information. Among these facets, semantic information is probably the most complex and the least explored. Computational Lexical Semantics is one of the first volumes to provide models for the creation of various kinds of computerized lexicons for the automatic treatment of natural language, with applications to machine translation, automatic indexing, database front-ends, and knowledge extraction, among other things. It focuses on semantic issues, as seen by linguists, psychologists and computer scientists. Besides describing academic research, it also covers ongoing industrial projects.
In this book, Michael Arbib, a researcher in artificial intelligence and brain theory, joins forces with Mary Hesse, a philosopher of science, to present an integrated account of how humans 'construct' reality through interaction with the social and physical world around them. The book is a major expansion of the Gifford Lectures delivered by the authors at the University of Edinburgh in the autumn of 1983. The authors reconcile a theory of the individual's construction of reality as a network of schemas 'in the head' with an account of the social construction of language, science, ideology and religion to provide an integrated schema-theoretic view of human knowledge. The authors still find scope for lively debate, particularly in their discussion of free will and of the reality of God. The book integrates an accessible exposition of background information with a cumulative marshalling of evidence to address fundamental questions concerning human action in the world and the nature of ultimate reality.
In recent years, the Internet has come to dominate our lives. E-mail, instant messaging and chat are rapidly replacing conventional forms of correspondence, and the Web has become the first port of call for both information enquiry and leisure activity. How is this affecting language? There is a widespread view that as 'technospeak' comes to rule, standards will be lost. In this book, David Crystal argues the reverse: that the Internet has encouraged a dramatic expansion in the variety and creativity of language. Covering a range of Internet genres, including e-mail, chat, and the Web, this is a revealing account of how the Internet is radically changing the way we use language. This second edition has been thoroughly updated to account for more recent phenomena, with a brand new chapter on blogging and instant messaging. Engaging and accessible, it will continue to fascinate anyone who has ever used the Internet.
This is a book about a gambling system that works. It tells the story of how the author used computer simulations and mathematical modeling techniques to predict the outcome of jai-alai matches and bet on them successfully - increasing his initial stake by over 500% in one year! His results can work for anyone: at the end of the book he tells the best way to watch jai-alai, and how to bet on it. With humour and enthusiasm, Skiena details a life-long fascination with computer predictions and sporting events. Along the way, he discusses other gambling systems, both successful and unsuccessful, for such games as lotto, roulette, blackjack, and the stock market. Indeed, he shows how his jai-alai system functions just like a miniature stock trading system. Do you want to learn about program trading systems, the future of Internet gambling, and the real reason brokerage houses don't offer mutual funds that invest at racetracks and frontons? How mathematical models are used in political polling? The difference between correlation and causation? If you are curious about gambling and mathematics, odds are this book is for you!
Mark Davison examines several legal models designed to protect databases, considering in particular the EU Directive, the history of its adoption and its transposition into national laws. He compares the Directive with a range of American legislative proposals, as well as the principles of misappropriation that underpin them. In addition, the book also contains a commentary on the appropriateness of the various models in the context of moves for an international agreement on the topic. This book will be of interest to academics and practitioners, including those involved with databases and other forms of new media.
The brute force algorithm for an optimization problem is to simply compute the cost or value of each of the exponential number of possible solutions and return the best. A key problem with this algorithm is that it takes exponential time. Another, less obvious, problem is how to write code that enumerates all possible solutions. Often the easiest way to do this is recursive backtracking. The idea is to design a recurrence relation that says how to find an optimal solution for one instance of the problem from optimal solutions for some number of smaller instances of the same problem. The optimal solutions for these smaller instances are found by recursing. After unwinding the recursion tree, one sees that recursive backtracking effectively enumerates all options. Though the technique may seem confusing at first, once you get the hang of recursion, it really is the simplest way of writing code to accomplish this task. Moreover, with a little insight one can significantly improve the running time by pruning off entire branches of the recursion tree. In practice, if the instance that one needs to solve is sufficiently small and has enough structure that a lot of pruning is possible, then an optimal solution can be found for the instance reasonably quickly. For some problems, the set of subinstances that get solved in the recursion tree is sufficiently small and predictable that the recursive backtracking algorithm can be mechanically converted into a quick dynamic programming algorithm. See Chapter 18. In general, however, for most optimization problems, the running time on large worst-case instances is still exponential.
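To make the take-or-skip recursion and the pruning idea concrete, here is a minimal Python sketch for a small 0/1 knapsack-style instance; the example data, the function name, and the optimistic bound used for pruning are my own illustration and are not taken from the text.

```python
# A minimal recursive-backtracking sketch with pruning for a small 0/1
# knapsack instance (an illustration under my own assumptions, not code
# from the text). Each call branches on "take item i" vs. "skip item i",
# and a branch is abandoned when even taking every remaining item could
# not beat the best solution found so far.

def knapsack(values, weights, capacity):
    n = len(values)
    best = 0

    # suffix_value[i] = total value of items i..n-1, used as an
    # optimistic bound for pruning.
    suffix_value = [0] * (n + 1)
    for i in range(n - 1, -1, -1):
        suffix_value[i] = suffix_value[i + 1] + values[i]

    def solve(i, remaining_capacity, value_so_far):
        nonlocal best
        # Prune: even taking all remaining items cannot beat the best so far.
        if value_so_far + suffix_value[i] <= best:
            return
        if i == n:
            best = value_so_far  # reached a leaf with a new best value
            return
        if weights[i] <= remaining_capacity:
            # Branch 1: take item i.
            solve(i + 1, remaining_capacity - weights[i], value_so_far + values[i])
        # Branch 2: skip item i.
        solve(i + 1, remaining_capacity, value_so_far)

    solve(0, capacity, 0)
    return best

print(knapsack(values=[6, 10, 12], weights=[1, 2, 3], capacity=5))  # prints 22
```

Without the pruning test, the recursion simply enumerates all 2^n subsets of items; with it, whole branches that cannot beat the best solution found so far are abandoned early, though the worst-case running time remains exponential.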
It is important to classify algorithms based on whether they solve a given computational problem and, if so, how quickly. Similarly, it is important to classify computational problems based on whether they can be solved and, if so, how quickly.
The Time (and Space) Complexity of an Algorithm
Purpose
Estimate Duration: To estimate how long an algorithm or program will run.
Estimate Input Size: To estimate the largest input that can reasonably be given to the program.
Compare Algorithms: To compare the efficiency of different algorithms for solving the same problem.
Parts of Code: To help you focus your attention on the parts of the code that are executed the largest number of times. This is the code you need to improve to reduce the running time.
Choose Algorithm: To choose an algorithm for an application (see the sketch after this list):
If the input size won't be larger than six, don't waste your time writing an extremely efficient algorithm.
If the input size is a thousand, then be sure the program runs in polynomial, not exponential, time.
If you are working on the Human Genome Project and the input size is a billion, then be sure the program runs in linear time.
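To make these rules of thumb concrete, the back-of-the-envelope sketch below (my own illustration, assuming a hypothetical machine that executes roughly one billion simple operations per second) estimates how long linear-, quadratic-, and exponential-time algorithms would take at the three input sizes mentioned above.

```python
# A back-of-the-envelope sketch (illustrative assumption: the machine
# performs about 1e9 simple operations per second) of how running time
# scales with input size for linear, quadratic, and exponential algorithms.

OPS_PER_SECOND = 1e9

def seconds(operation_count):
    return operation_count / OPS_PER_SECOND

for n in (6, 1_000, 1_000_000_000):
    linear = seconds(n)
    quadratic = seconds(n ** 2)
    # 2**n is astronomically large long before n reaches 1,000, so it is
    # only evaluated here for small n.
    exponential = seconds(2 ** n) if n <= 60 else float("inf")
    print(f"n = {n:>13,}: linear ~{linear:.1e} s, "
          f"quadratic ~{quadratic:.1e} s, exponential ~{exponential:.1e} s")
```

Under these assumptions everything finishes instantly at n = 6, an exponential-time program is already hopeless at n = 1,000, and at an input size of a billion even a quadratic-time program needs on the order of a billion seconds (decades), leaving only roughly linear-time programs practical.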
Time and Space Complexities Are Functions, T(n) and S(n): The time complexity of an algorithm is not a single number, but is a function indicating how the running time depends on the size of the input. We often denote this by T(n), giving the number of operations executed on the worst case input instance of size n. An example would be T(n) = 3n² + 7n + 23. Similarly, S(n) gives the size of the rewritable memory the algorithm requires.
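As a concrete illustration (a sketch of my own, not the book's example), the toy function below counts its own "basic operations"; the total it returns has exactly the quadratic-plus-linear-plus-constant shape of a T(n) such as the one above, with constants that depend on the arbitrary choice of what to count as a single operation.

```python
# A sketch showing why a worst-case operation count often has the shape
# T(n) = a*n^2 + b*n + c: a doubly nested loop contributes the n^2 term,
# a single loop the n term, and fixed setup work the constant. What counts
# as one "operation" is a modelling choice, so the constants are
# illustrative only.

def count_operations(n):
    count = 0

    count += 5                # constant-time setup, modelled as 5 operations

    for _ in range(n):        # a single loop contributes the linear term
        count += 2            # 2 operations per iteration

    for _ in range(n):        # a doubly nested loop contributes the quadratic term
        for _ in range(n):
            count += 3        # 3 operations per inner iteration

    return count              # total: 3*n^2 + 2*n + 5

for n in (1, 10, 100, 1_000):
    assert count_operations(n) == 3 * n * n + 2 * n + 5
```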