In September 1948, an interdisciplinary conference was held at the California Institute of Technology (Caltech) in Pasadena, California, on the topics of how the nervous system controls behavior and how the brain might be compared to a computer. It was called the Hixon Symposium on Cerebral Mechanisms in Behavior. Several luminaries attended and gave papers, among them Warren McCulloch, John von Neumann, and Karl Lashley (1890–1958), a prominent psychologist. Lashley gave what some thought was the most important talk at the symposium. He faulted behaviorism for its static view of brain function and claimed that to explain human abilities for planning and language, psychologists would have to begin considering dynamic, hierarchical structures. Lashley's talk laid out the foundations for what would become cognitive science.
The emergence of artificial intelligence as a full-fledged field of research coincided with (and was launched by) three important meetings – one in 1955, one in 1956, and one in 1958. In 1955, a “Session on Learning Machines” was held in conjunction with the Western Joint Computer Conference in Los Angeles. In 1956, a “Summer Research Project on Artificial Intelligence” was convened at Dartmouth College. And in 1958, a symposium on the “Mechanization of Thought Processes” was sponsored by the National Physical Laboratory in the United Kingdom.
If machines are to become intelligent, they must, at the very least, be able to do the thinking-related things that humans can do. The first steps then in the quest for artificial intelligence involved identifying some specific tasks thought to require intelligence and figuring out how to get machines to do them. Solving puzzles, playing games such as chess and checkers, proving theorems, answering simple questions, and classifying visual images were among some of the problems tackled by the early pioneers during the 1950s and early 1960s. Although most of these were laboratory-style, sometimes called “toy,” problems, some real-world problems of commercial importance, such as automatic reading of highly stylized magnetic characters on bank checks and language translation, were also being attacked. (As far as I know, Seymour Papert was the first to use the phrase “toy problem.” At a 1967 AI workshop I attended in Athens, Georgia, he distinguished among tau or “toy” problems, rho or real-world problems, and theta or “theory” problems in artificial intelligence. This distinction still serves us well today.)
In this part, I'll describe some of the first real efforts to build intelligent machines. Some of these were discussed or reported on at conferences and symposia – making these meetings important milestones in the birth of AI. I'll also do my best to explain the underlying workings of some of these early AI programs.
The architectures and mechanisms underlying language processing form one important part of the general structure of cognition. This book, written by leading experts in the field, brings together linguistic, psychological and computational perspectives on some of the fundamental issues. Several general introductory chapters offer overviews on important psycholinguistic research frameworks and highlight both shared assumptions and controversial issues. Subsequent chapters explore syntactic and lexical mechanisms; statistical and connectionist models of language understanding; the crucial importance of linguistic representations in explaining behavioural phenomena; evidence from a variety of studies and methodologies concerning the interaction of syntax and semantics; and the implications for cognitive architecture. The book concludes with a set of contributions on select issues of interpretation, including quantification, focus and anaphora in language understanding. Architectures and Mechanisms for Language Processing will appeal to students and scholars alike as a comprehensive and timely survey of recent work in this interdisciplinary area.
Lexical semantics has become a major research area within computational linguistics, drawing from psycholinguistics, knowledge representation, computer algorithms and architecture. Research programmes whose goal is the definition of large lexicons are asking what the appropriate representation structure is for different facets of lexical information. Among these facets, semantic information is probably the most complex and the least explored. Computational Lexical Semantics is one of the first volumes to provide models for the creation of various kinds of computerized lexicons for the automatic treatment of natural language, with applications to machine translation, automatic indexing, database front-ends, and knowledge extraction, among other things. It focuses on semantic issues, as seen by linguists, psychologists and computer scientists. Besides describing academic research, it also covers ongoing industrial projects.
In this book, Michael Arbib, a researcher in artificial intelligence and brain theory, joins forces with Mary Hesse, a philosopher of science, to present an integrated account of how humans 'construct' reality through interaction with the social and physical world around them. The book is a major expansion of the Gifford Lectures delivered by the authors at the University of Edinburgh in the autumn of 1983. The authors reconcile a theory of the individual's construction of reality as a network of schemas 'in the head' with an account of the social construction of language, science, ideology and religion to provide an integrated schema-theoretic view of human knowledge. The authors still find scope for lively debate, particularly in their discussion of free will and of the reality of God. The book integrates an accessible exposition of background information with a cumulative marshalling of evidence to address fundamental questions concerning human action in the world and the nature of ultimate reality.
In recent years, the Internet has come to dominate our lives. E-mail, instant messaging and chat are rapidly replacing conventional forms of correspondence, and the Web has become the first port of call for both information enquiry and leisure activity. How is this affecting language? There is a widespread view that as 'technospeak' comes to rule, standards will be lost. In this book, David Crystal argues the reverse: that the Internet has encouraged a dramatic expansion in the variety and creativity of language. Covering a range of Internet genres, including e-mail, chat, and the Web, this is a revealing account of how the Internet is radically changing the way we use language. This second edition has been thoroughly updated to account for more recent phenomena, with a brand new chapter on blogging and instant messaging. Engaging and accessible, it will continue to fascinate anyone who has ever used the Internet.
This is a book about a gambling system that works. It tells the story of how the author used computer simulations and mathematical modeling techniques to predict the outcome of jai-alai matches and bet on them successfully - increasing his initial stake by over 500% in one year! His results can work for anyone: at the end of the book he tells the best way to watch jai-alai, and how to bet on it. With humour and enthusiasm, Skiena details a life-long fascination with computer predictions and sporting events. Along the way, he discusses other gambling systems, both successful and unsuccessful, for such games as lotto, roulette, blackjack, and the stock market. Indeed, he shows how his jai-alai system functions just like a miniature stock trading system. Do you want to learn about program trading systems, the future of Internet gambling, and the real reason brokerage houses don't offer mutual funds that invest at racetracks and frontons? How mathematical models are used in political polling? The difference between correlation and causation? If you are curious about gambling and mathematics, odds are this book is for you!
Mark Davison examines several legal models designed to protect databases, considering in particular the EU Directive, the history of its adoption and its transposition into national laws. He compares the Directive with a range of American legislative proposals, as well as the principles of misappropriation that underpin them. In addition, the book also contains a commentary on the appropriateness of the various models in the context of moves for an international agreement on the topic. This book will be of interest to academics and practitioners, including those involved with databases and other forms of new media.
The brute force algorithm for an optimization problem is to simply compute the cost or value of each of the exponential number of possible solutions and return the best. A key problem with this algorithm is that it takes exponential time. Another (not obviously trivial) problem is how to write code that enumerates over all possible solutions. Often the easiest way to do this is recursive backtracking. The idea is to design a recurrence relation that says how to find an optimal solution for one instance of the problem from optimal solutions for some number of smaller instances of the same problem. The optimal solutions for these smaller instances are found by recursing. After unwinding the recursion tree, one sees that recursive backtracking effectively enumerates all options. Though the technique may seem confusing at first, once you get the hang of recursion, it really is the simplest way of writing code to accomplish this task. Moreover, with a little insight one can significantly improve the running time by pruning off entire branches of the recursion tree. In practice, if the instance that one needs to solve is sufficiently small and has enough structure that a lot of pruning is possible, then an optimal solution can be found for the instance reasonably quickly. For some problems, the set of subinstances that get solved in the recursion tree is sufficiently small and predictable that the recursive backtracking algorithm can be mechanically converted into a quick dynamic programming algorithm. See Chapter 18. In general, however, for most optimization problems, for large worst case instances, the running time is still exponential.
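To make the recursive backtracking idea concrete, here is a minimal sketch in Python using the 0/1 knapsack problem as a stand-in (the choice of problem and all names are mine, not the text's). Each call decides whether to include the current item and recurses on a smaller instance; a branch is pruned as soon as even the most optimistic completion of it cannot beat the best solution found so far.

```python
def knapsack(values, weights, capacity):
    """Return the largest total value of items that fit within capacity."""
    n = len(values)
    best = 0  # best total value found on any complete branch so far

    def solve(i, remaining, value_so_far):
        nonlocal best
        # Prune: even taking every remaining item cannot beat the current best.
        if value_so_far + sum(values[i:]) <= best:
            return
        if i == n:                    # a complete solution; it beat the old best
            best = value_so_far
            return
        if weights[i] <= remaining:   # option 1: include item i (smaller instance)
            solve(i + 1, remaining - weights[i], value_so_far + values[i])
        solve(i + 1, remaining, value_so_far)  # option 2: exclude item i

    solve(0, capacity, 0)
    return best

print(knapsack([60, 100, 120], [10, 20, 30], 50))  # 220
```

Without the pruning test, the recursion enumerates all 2^n subsets of items; with it, many branches are cut off early, although the worst-case running time is still exponential.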
It is important to classify algorithms based on whether they solve a given computational problem and, if so, how quickly. Similarly, it is important to classify computational problems based on whether they can be solved and, if so, how quickly.
The Time (and Space) Complexity of an Algorithm
Purpose
Estimate Duration: To estimate how long an algorithm or program will run.
Estimate Input Size: To estimate the largest input that can reasonably be given to the program.
Compare Algorithms: To compare the efficiency of different algorithms for solving the same problem.
Parts of Code: To help you focus your attention on the parts of the code that are executed the largest number of times. This is the code you need to improve to reduce the running time.
Choose Algorithm: To choose an algorithm for an application:
If the input size won't be larger than six, don't waste your time writing an extremely efficient algorithm.
If the input size is a thousand, then be sure the program runs in polynomial, not exponential, time.
If you are working on the Genome project and the input size is a billion, then be sure the program runs in linear time.
Time and Space Complexities Are Functions, T(n) and S(n): The time complexity of an algorithm is not a single number, but is a function indicating how the running time depends on the size of the input. We often denote this by T(n), giving the number of operations executed on the worst case input instance of size n. An example would be T(n) = 3n² + 7n + 23. Similarly, S(n) gives the size of the rewritable memory the algorithm requires.
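As an illustrative example (mine, not the text's) of these functions, the following sketch counts the basic comparisons performed by selection sort and compares the count to the closed form n(n − 1)/2, which is the same for every input of size n: a T(n) that is Θ(n²), while the extra rewritable memory S(n) is just a few variables, i.e. Θ(1).

```python
def selection_sort_comparisons(a):
    """Sort a in place and return how many element comparisons were made."""
    comparisons = 0
    n = len(a)
    for i in range(n):
        smallest = i
        for j in range(i + 1, n):
            comparisons += 1              # the basic operation being counted
            if a[j] < a[smallest]:
                smallest = j
        a[i], a[smallest] = a[smallest], a[i]
    return comparisons

for n in [10, 100, 1000]:
    print(n, selection_sort_comparisons(list(range(n, 0, -1))), n * (n - 1) // 2)
```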
A giraffe with its long neck is a very different beast than a mouse, which is different than a snake. However, Darwin and gang observed that the first two have some key similarities, both being social, nursing their young, and having hair. The third is completely different in these ways. Studying similarities and differences between things can reveal subtle and deep understandings of their underlying nature that would not have been noticed by studying them one at a time. Sometimes things that at first appear to be completely different, when viewed in another way, turn out to be the same except for superficial, cosmetic differences. This section will teach how to use reductions to discover these similarities between different optimization problems.
Reduction P1 ≤poly P2: We say that we can reduce problem P1 to problem P2 if we can write a polynomial-time (n^Θ(1)) algorithm for P1 using a supposed algorithm for P2 as a subroutine. (Note we may or may not actually have an algorithm for P2.) The standard notation for this is P1 ≤poly P2.
Why Reduce? A reduction lets us compare the time complexities and underlying structures of the two problems. Reduction is useful in providing algorithms for new problems (upper bounds), for giving evidence that there are no fast algorithms for certain problems (lower bounds), and for classifying problems according to their difficulty.
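As a toy illustration of the definition (my example, not the book's), take P1 = “does a list contain a duplicate value?” and P2 = sorting. The wrapper below solves P1 using one call to a supposed sorting subroutine plus a single linear scan, so P1 ≤poly P2.

```python
def has_duplicate(a, sort):
    """Solve P1 (duplicate detection) using a supposed algorithm `sort` for P2."""
    b = sort(a)                            # one call to the P2 subroutine
    return any(b[i] == b[i + 1] for i in range(len(b) - 1))

print(has_duplicate([3, 1, 4, 1, 5], sorted))  # True
print(has_duplicate([2, 7, 1, 8], sorted))     # False
```

If someone hands us a fast sorting algorithm, this gives us a fast duplicate detector (an upper bound); conversely, if duplicate detection were known to be hard, sorting could not be easy.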
From determining the cheapest way to make a hot dog to monitoring the workings of a factory, there are many complex computational problems to be solved. Before executable code can be produced, computer scientists need to be able to design the algorithms that lie behind the code, be able to understand and describe such algorithms abstractly, and be confident that they work correctly and efficiently. These are the goals of computer scientists.
A Computational Problem: A specification of a computational problem uses preconditions and postconditions to describe, for each legal input instance that the computation might receive, what the required output or actions are. It may be a function mapping each input instance to the required output. It may be an optimization problem that requires outputting a solution that is “optimal” from among a huge set of possible solutions for the given input instance. It may also be an ongoing system or data structure that responds appropriately to a constant stream of input.
Example: The sorting problem is defined as follows:
Preconditions: The input is a list of n values, including possible repetitions.
Postconditions: The output is a list consisting of the same n values in non-decreasing order.
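To make this specification concrete, here is a small sketch (the helper name is mine) that checks whether a candidate output satisfies the postcondition: it must be a non-decreasing rearrangement of exactly the same n values, repetitions included.

```python
from collections import Counter

def satisfies_sorting_spec(input_list, output_list):
    """Check the sorting postcondition for a given input/output pair."""
    same_values = Counter(input_list) == Counter(output_list)   # same multiset of n values
    non_decreasing = all(output_list[i] <= output_list[i + 1]
                         for i in range(len(output_list) - 1))
    return same_values and non_decreasing

print(satisfies_sorting_spec([3, 1, 2, 1], [1, 1, 2, 3]))  # True
print(satisfies_sorting_spec([3, 1, 2, 1], [1, 2, 3]))     # False: a value was lost
```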
An Algorithm: An algorithm is a step-by-step procedure which, starting with an input instance, produces a suitable output. It is described at the level of detail and abstraction best suited to the human audience that must understand it. In contrast, code is an implementation of an algorithm that can be executed by a computer. Pseudocode lies between these two.