We should work toward a universal linked information system, in which generality and portability are more important than fancy graphics techniques and complex extra facilities. The aim would be to allow a place to be found for any information or reference which one felt was important, and a way of finding it afterwards. The result should be sufficiently attractive to use that the information contained would grow past a critical threshold.
Tim Berners-Lee
The hypertext visionaries
Vannevar Bush (B.11.1), creator of the “Differential Analyzer” machine, wrote the very influential paper “As We May Think” in 1945, reflecting on the wartime explosion of scientific information and the increasing specialization of science into subdisciplines:
There is a growing mountain of research. But there is increased evidence that we are being bogged down today as specialization extends. The investigator is staggered by the findings and conclusions of thousands of other workers – conclusions which he cannot find time to grasp, much less remember, as they appear. Yet specialization becomes increasingly necessary for progress, and the effort to bridge between disciplines is correspondingly superficial.
Bush concluded that methods for scholarly communication had become “totally inadequate for their purpose.” He argued for the need to extend the powers of the mind, rather than just the powers of the body, and to provide some automated support to navigate the expanding world of information and to manage this information overload.
Computer programs are the most complicated things that humans have ever created.
Donald Knuth
Software and hardware
Butler Lampson, recipient of the Turing Award for his contributions to computer science, relates the following anecdote about software:
There’s a story about some people who were writing the software for an early avionics computer. One day they get a visit from the weight control officer, who is responsible for the total weight of the plane.
“You’re building software?”
“Yes.”
“How much does it weigh?”
“It doesn’t weigh anything.”
“Come on, you can’t fool me. They all say that.”
“No, it really doesn’t weigh anything.”
After half an hour of back and forth he gives it up. But two days later he comes back and says, “I’ve got you guys pinned to the wall. I came in last night, and the janitor showed me where you keep your software.”
He opens a closet door, and there are boxes and boxes of punch cards.
“You can’t tell me those don’t weigh anything!”
After a short pause, they explain to him, very gently, that the software is in the holes.
It is amazing how these holes (Fig. 3.1) have become a multibillion-dollar business and arguably one of the main driving forces of modern civilization.
The major cause of the software crisis is that the machines have become several orders of magnitude more powerful! To put it quite bluntly: as long as there were no machines, programming was no problem at all; when we had a few weak computers, programming became a mild problem, and now we have gigantic computers, programming has become an equally gigantic problem.
Edsger Dijkstra
The software crisis
The term software engineering originated in the early 1960s, and the North Atlantic Treaty Organization sponsored the first conference on the “software crisis” in 1968 in Garmisch-Partenkirchen, West Germany. It was this conference that brought the term software engineering into widespread use. The conference reflected on the sad fact that many large software projects ran over budget or were delivered late, if they were delivered at all. Tony Hoare, a recipient of the Turing Award for his contributions to computing, ruefully remembers his experience of a failed software project:
There was no escape: The entire Elliott 503 Mark II software project had to be abandoned, and with it, over thirty man-years of programming effort, equivalent to nearly one man’s active working life, and I was responsible, both as designer and as manager, for wasting it.
In his classic book The Mythical Man-Month, Fred Brooks (B.4.1) of IBM draws on his experience developing the operating system for IBM’s massive System/360 project. Brooks makes some sobering reflections on software engineering, saying, “It is a very humbling experience to make a multimillion-dollar mistake, but it is also very memorable.”
It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers.… They would be able to converse with each other to sharpen their wits. At some stage therefore, we should have to expect the machines to take control.
Alan Turing
The hope is that, in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought and process data in a new way not approached by the information-handling machines we know today.
J. C. R. Licklider
Artificial intelligence is the science of making machines do things that would require intelligence if done by men.
Marvin Minsky
Computer science also differs from physics in that it is not actually a science. It does not study natural objects. Neither is it, as you might think, mathematics. Rather, computer science is about getting something to do something....
Richard Feynman
What is computer science?
It is commonplace to say that we are in the midst of a computing revolution. Computers are impacting almost every aspect of our lives. And this is just the beginning. The Internet and the Web have revolutionized our access to information and to other people. We see computers not only adding intelligence to the safety and performance of such things as cars and airplanes, but also leading the way in mobile communications, with present-day smart phones having more computing power than leading-edge computers of only a decade ago. This book tells the story of how this all came about, from the early days of computers in the mid-1900s to the Internet and the Web as we know them today, and looks at where we are likely to be in the future.
The academic field of study that encompasses these topics draws on multiple disciplines, such as mathematics and electronics, and is usually known as computer science. As Nobel Prize recipient and physicist Richard Feynman says in the quotation that introduces this chapter, computer science is not a science in the sense of physics, which is all about the study of natural systems; rather, it is more akin to engineering, since it concerns the study of man-made systems and ultimately is about getting computers to do useful things. Three early computing pioneers, Allen Newell, Alan Perlis, and Herbert Simon, were happy to use the word science to describe what they did, but put forward a definition similar to Feynman’s: computer science is the study of computers. As we shall see, computer science has much to do with the management of complexity, because modern-day computers contain many billions of active components. How can such complex systems be designed and built? By relying on the principles of hierarchical abstraction and universality, the two main themes that underlie our discussion of computers.
When he later connected the same laptop to the Internet, the worm broke free and began replicating itself, a step its designers never anticipated.
David E. Sanger
Black hats and white hats
As we have seen in Chapter 10, the Internet was invented by the academic research community and originally connected only a relatively small number of university computers. What is remarkable is that this research project has turned into a global infrastructure that has scaled from thousands of researchers to billions of people with no technical background. However, some of the problems that plague today’s Internet originate from decisions taken by the original Internet Engineering Task Force (IETF), a small group of researchers who debated and decided Internet standards in a truly collegial and academic fashion. For a network connecting a community of like-minded friends, with a culture of trust between the universities, this was an acceptable process. However, as the Internet has grown to include many different types of communities and cultures, it is now clear that such a trusting approach was misplaced.
One example is the IETF’s definition of the Simple Mail Transfer Protocol (SMTP) for sending and receiving email over the Internet. Unfortunately, the original SMTP protocol did not check that the sender’s actual Internet address matched the one claimed in the email’s header. This opens the door to spoofing: the creation of Internet Protocol (IP) packets with either a forged source address or an unauthorized IP address. Such spoofing is now widely used to mask the source of cyberattacks over the Internet, by criminal gangs as well as by governments.
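To see how little the original protocol checks, here is a minimal sketch in Python of sending a message whose sender address is simply asserted by the sending program; the mail server name mail.example.org and all addresses are placeholders, not real systems. Classic SMTP accepts whatever “From” the client supplies, which is exactly the gap that spoofers exploit and that later extensions such as SPF, DKIM, and DMARC try to close.

```python
# A minimal sketch (standard-library Python) of why classic SMTP is easy to
# spoof: the protocol simply accepts whatever sender address the client claims.
# The server name "mail.example.org" and all addresses here are placeholders.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "ceo@big-company.example"   # forged: SMTP itself never verifies this
msg["To"] = "someone@example.org"
msg["Subject"] = "A note"
msg.set_content("This message can claim to come from anyone.")

with smtplib.SMTP("mail.example.org") as server:
    # The envelope sender is also just an unverified string supplied by the client.
    server.send_message(msg, from_addr="ceo@big-company.example",
                        to_addrs=["someone@example.org"])
```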
I think it’s fair to say that personal computers have become the most empowering tool we’ve ever created. They’re tools of communication, they’re tools of creativity, and they can be shaped by their user.
Bill Gates
The beginnings of interactive computing
In the early days of computing, computers were expensive and scarce. They were built for solving serious computational problems – and certainly not for frivolous activities like playing games! The microprocessor and Moore’s law have changed this perspective – computing hardware is now incredibly cheap, and it is the human effort of producing software and managing computers that is expensive. Some of the ideas of interactive and personal computing can be traced back to an MIT professor called J. C. R. Licklider. Lick – as he was universally known – was a psychologist and one of the first researchers to take an interest in the problem of human-computer interaction. During the Cold War in the 1950s, he had worked at MIT’s Lincoln Labs on the Semi-Automatic Ground Environment (SAGE) system, designed to give early warning of an airborne attack on the United States. This system used computers to continuously track aircraft using radar data. It was this experience of interactive computing that convinced Lick of the need to use computers to analyze data as the data arrived – for “real time” computing.
Another type of interactive computing was being developed at around the same time. Engineers at MIT’s Lincoln Labs had developed the TX-0 in 1956 – one of the first transistorized computers. Wesley Clark and Ken Olsen had specifically designed and built the TX-0 to be interactive and exciting, the exact opposite of sedate batch processing on a big mainframe computer. Olsen recalled:
Then we had a light pen, which was what we used in the air-defense system and which was the equivalent of the mouse or joystick we use today. With that you could draw, play games, be creative – it was very close to being the modern personal computer.
Will it be possible to remove the heat generated by tens of thousands of components in a single silicon chip?
Gordon Moore
Nanotechnology
In 1959, at a meeting of the American Physical Society in Pasadena, California, physicist Richard Feynman set out a vision of the future in a remarkable after-dinner speech titled “There’s Plenty of Room at the Bottom.” The talk had the subtitle “An Invitation to Enter a New Field of Physics,” and it marked the beginning of the field of research that is now known as nanotechnology. Nanotechnology is concerned with the manipulation of matter at the scale of nanometers. Atoms are typically a few tenths of a nanometer in size. Feynman emphasizes that such an endeavor does not need new physics:
I am not inventing anti-gravity, which is possible someday only if the laws are not what we think. I am telling you what could be done if the laws are what we think; we are not doing it simply because we haven’t yet gotten around to it.
During his talk, Feynman challenged his audience by offering two $1,000 prizes: one “to the first guy who makes an operating electric motor which is only 1/64 inch cube,” and the second prize “to the first guy who can take the information on the page of a book and put it on an area 1/25,000 smaller.” He had to pay out on both prizes – the first less than a year later, to Bill McLellan, an electrical engineer and Caltech alumnus (Fig. 15.1). Feynman knew that McLellan was serious when he brought a microscope with him to show Feynman his miniature motor, capable of generating a millionth of a horsepower. Although Feynman paid McLellan the prize money, the motor was a disappointment to him because it did not require any technical advances (Fig. 15.2). He had not made the challenge hard enough. In an updated version of his talk given twenty years later, Feynman speculated that, with modern technology, it should be possible to mass-produce motors that are 1/40 the size, on a side, of McLellan’s original motor. To produce such micromachines, Feynman envisaged the creation of a chain of “slave” machines, each producing tools and machines at one-fourth of their own scale.
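As a back-of-the-envelope illustration of how quickly such a chain would shrink things, the short calculation below (a sketch only, with illustrative numbers) counts how many one-quarter-scale generations it would take to get from a roughly 1/64-inch machine like McLellan’s motor down to the nanometer scale.

```python
# Rough sketch: how many generations of Feynman's quarter-scale "slave" machines
# would be needed to shrink from ~1/64 inch (about 0.4 mm) to the nanometer scale.
# Illustrative numbers only.
import math

start_size_m = 0.4e-3          # ~1/64 inch, expressed in meters
target_size_m = 1e-9           # one nanometer
scale_per_generation = 1 / 4   # each generation builds machines a quarter its own size

generations = math.log(target_size_m / start_size_m) / math.log(scale_per_generation)
print(f"about {math.ceil(generations)} generations")   # prints: about 10
```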
It is not my aim to shock you – if indeed that were possible in an age of nuclear fission and prospective interplanetary travel. But the simplest way I can summarize the situation is to say that there are now in the world machines that think, that learn and that create. Moreover, their ability to do these things is going to increase rapidly until – in a visible future – the range of problems they can handle will be coextensive with the range to which the human mind has been applied.
Herbert Simon and Allen Newell
Cybernetics and the Turing Test
One of the major figures at MIT before World War II was the mathematician Norbert Wiener (B.13.1). In 1918, Wiener had worked at the U.S. Army’s Aberdeen Proving Ground, where the army tested weapons. Wiener calculated artillery trajectories by hand, the same problem that led to the construction of the ENIAC nearly thirty years later. After World War II, Wiener used to hold a series of “supper seminars” at MIT, where scientists and engineers from a variety of fields would gather to eat dinner and discuss scientific questions. J. C. R. Licklider usually attended. At some of these seminars, Wiener put forward his vision of the future, arguing that the technologies of the twentieth century could respond to their environment and modify their actions:
The machines of which we are now speaking are not the dream of the sensationalist nor the hope of some future time. They already exist as thermostats, automatic gyrocompass ship-steering systems, self-propelled missiles – especially such as seek their target – anti-aircraft fire-control systems, automatically controlled oil-cracking stills, ultra-rapid computing machines, and the like.…
As soon as an Analytical Engine exists, it will necessarily guide the future course of the science. Whenever any result is sought by its aid, the question will then arise – by what course of calculation can these results be arrived at by the machine in the shortest time?
Charles Babbage
Beginnings
What is an algorithm? The word is derived from the name of the Persian scholar Mohammad Al-Khowarizmi (see B.5.1 and Fig. 5.1). In the introduction to his classic book Algorithmics: The Spirit of Computing, computer scientist David Harel gives the following definition:
An algorithm is an abstract recipe, prescribing a process that might be carried out by a human, by a computer, or by other means. It thus represents a very general concept, with numerous applications. Its principal interest and use, however, is in those cases where the process is to be carried out by a computer.
Thus an algorithm can be regarded as a “recipe” detailing the mathematical steps to follow to perform a particular task. This could be a numerical algorithm for solving a differential equation or an algorithm for a more abstract task, such as sorting a list of items according to some specified property. The word algorithmics was introduced by J. F. Traub in a textbook in 1964 and popularized as a key field of study in computer science by Donald Knuth (B.5.2) and David Harel (B.5.3). Once the steps of an algorithm for a particular task have been identified, the programmer chooses a programming language to express the algorithm in a form that the computer can understand.
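To make this concrete, here is one possible rendering (a sketch in Python, just one of many languages that could be chosen) of a familiar sorting algorithm, insertion sort, written out as steps a computer can execute: take each item in turn and slide it to the left until it sits just after the last smaller item.

```python
# Insertion sort: a simple sorting algorithm expressed in Python.
def insertion_sort(items):
    result = list(items)                 # work on a copy of the input list
    for i in range(1, len(result)):
        current = result[i]
        j = i - 1
        while j >= 0 and result[j] > current:
            result[j + 1] = result[j]    # shift larger items one place to the right
            j -= 1
        result[j + 1] = current          # drop the current item into its slot
    return result

print(insertion_sort([5, 2, 9, 1, 7]))   # prints: [1, 2, 5, 7, 9]
```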
In 2012 the U.S. National Research Council published the report “Continuing Innovation in Information Technology.” The report contained an updated version of the Tire Tracks figure, first published in 1995. Figure A.2 gives examples of how computer science research, in universities and in industry, has directly led to the introduction of entirely new categories of products that have ultimately provided the basis for new billion-dollar industries. Most of the university-based research has been federally funded.
The bottom row of the figure shows specific computer science research areas where major investments have resulted in the different information technology industries and companies shown at the top of the figure. The vertical red tracks represent university-based research and the blue tracks represent industry research and development. The dashed black lines indicate periods following the introduction of significant commercial products resulting from this research, and the green lines represent the establishment of billion-dollar industries with the thick green lines showing the achievement of multibillion-dollar markets by some of these industries.
No one saw these mice coming. No one, that is, in my field, writing science fictions. Oh, a few novels were written about those Big Brains, a few New Yorker cartoons were drawn showing those immense electric craniums that needed whole warehouses to THINK in. But no one in all of future writing foresaw those big brutes dieted down to fingernail earplug size so you could shove Moby Dick in one ear and pull Job and Ecclesiastes out the other.
Ray Bradbury
Early visions
The British science fiction writer Brian Aldiss traces the origin of science fiction to Mary Shelley’s Frankenstein in 1818. In her book, the unwise scientist Victor Frankenstein deliberately makes use of his knowledge of anatomy, chemistry, electricity, and physiology to create a living creature. An alternative starting point dates back to the second half of the nineteenth century, with the writing of Jules Verne (B.17.1) and Herbert George (H. G.) Wells (B.17.2). This was a very exciting time for science – in 1859 Charles Darwin had published the Origin of Species; in 1864 James Clerk Maxwell had unified the theories of electricity and magnetism; in 1869 Mendeleev had brought some order to chemistry with his Periodic Table of the Elements; and Joule and Kelvin were laying the foundations of thermodynamics. Verne had the idea of combining modern science with an adventure story to create a new type of fiction. After publishing his first such story, “Five Weeks in a Balloon,” in 1863, he wrote:
I have just finished a novel in a new form, a new form – do you understand? If it succeeds, it will be a gold mine.
There are many “popular” books on science that provide accessible accounts of the recent developments of modern science for the general reader. However, there are very few popular books about computer science – arguably the “science” that has changed the world the most in the last half century. This book is an attempt to address this imbalance and to present an accessible account of the origins and foundations of computer science. In brief, the goal of this book is to explain how computers work, how we arrived at where we are now, and where we are likely to be going in the future.
The key inspiration for writing this book came from Physics Nobel Prize recipient Richard Feynman. In his lifetime, Feynman was one of the few physicists well known to the general public. There were three main reasons for this recognition. First, there were some wonderful British television programs of Feynman talking about his love for physics. Second, there was his best-selling book “Surely You’re Joking, Mr. Feynman!”: Adventures of a Curious Character, an entertaining collection of stories about his life in physics – from his experiences at Los Alamos and the Manhattan atomic bomb project, to his days as a professor at Cornell and Caltech. And third, there was his participation, while he was battling the cancer that eventually took his life, in the enquiry following the Challenger space shuttle disaster. His live demonstration, at a televised press conference, of the effects of freezing water on the rubber O-rings of the space shuttle booster rockets was a wonderfully understandable explanation of the origin of the disaster.
Video games are bad for you? That’s what they said about rock and roll.
Shigeru Miyamoto
The first computer games
Since the earliest days, computers have been used for serious purposes and for fun. When computing resources were scarce and expensive, using computers for games was frowned upon and was typically an illicit occupation of graduate students late at night. Yet from these first clandestine experiments, computer video games are now big business. In 2012, global video game sales grew by more than 10 percent to more than $65 billion. In the United States, a 2011 survey found that more than 90 percent of children aged between two and seventeen played video games. In addition, the Entertainment Software Association in the United States estimated that 40 percent of all game players are now women and that women over the age of eighteen make up a third of the total game-playing population. In this chapter we take a look at how this multibillion-dollar industry began and how video games have evolved from male-dominated “shoot ’em up” arcade games to more family-friendly casual games on smart phones and tablets.
One of the first computer games was written for the EDSAC computer at Cambridge University in 1952. Graduate student Alexander Douglas used a computer game as an illustration for his PhD dissertation on human-computer interaction. It was based on the game known as tic-tac-toe in the United States and noughts and crosses in the United Kingdom. Although Douglas did not name his game, computer historian Martin Campbell-Kelly saved it in a file called OXO for his simulator program, and this name now seems to have escaped into the wild. The player competed against the computer, and the output appeared on the computer’s cathode ray tube (CRT), used as a display screen. The source code was short and, predictably, the computer could play a perfect game of tic-tac-toe (Fig. 9.1).
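One modern way to see how a program can achieve such perfect play is a short minimax search: the program tries every available move, assumes both sides then play as well as possible, and keeps the move with the best guaranteed outcome. The Python sketch below illustrates that idea; it is not a reconstruction of Douglas’s EDSAC code.

```python
# Perfect tic-tac-toe play via minimax search (an illustration, not OXO itself).
# The board is a list of nine cells, each 'X', 'O', or ' '.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),    # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),    # columns
         (0, 4, 8), (2, 4, 6)]               # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (value, move) for the side to move: +1 means X wins, -1 means O wins."""
    won = winner(board)
    if won:
        return (1 if won == 'X' else -1), None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None                       # board full: a draw
    best = None
    for move in moves:
        board[move] = player                 # try the move...
        value, _ = minimax(board, 'O' if player == 'X' else 'X')
        board[move] = ' '                    # ...and undo it
        if best is None or (player == 'X' and value > best[0]) \
                        or (player == 'O' and value < best[0]):
            best = (value, move)
    return best

value, move = minimax([' '] * 9, 'X')
print("best opening move:", move, "| game value with perfect play:", value)  # value 0 means a draw
```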
It seems reasonable to envision, for a time 10 or 15 years hence, a “thinking center” that will incorporate the functions of present-day libraries together with anticipated advances in information storage and retrieval.... The picture readily enlarges itself into a network of such centers, connected to one another by wide-band communication lines and to individual users by leased-wire services. In such a system, the speed of the computers would be balanced, and the cost of gigantic memories and the sophisticated programs would be divided by the number of users.
J. C. R. Licklider
The network is the computer
Today, with the Internet and the World Wide Web, it seems obvious that computers become much more powerful in all sorts of ways if they are connected together. In the 1970s this was far from obvious. This chapter is about how the Internet of today came about. As we can see from Licklider’s (B.10.1) quotation beginning this chapter, in addition to arguing for the importance of interactive computing in his 1960 paper “Man-Computer Symbiosis,” Lick also envisaged linking computers together, a practice we now call computer networking. Larry Roberts, Bob Taylor’s hand-picked successor at the Department of Defense’s Advanced Research Projects Agency (ARPA), was the person responsible for funding and overseeing the construction of the ARPANET, the first North American wide area network (WAN). A WAN links together computers over a large geographic area, such as a state or country, enabling the linked computers to share resources and exchange information.
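At its simplest, “exchanging information” between networked computers still looks much as it did then: one machine listens for a connection, the other connects and sends bytes. The sketch below (standard-library Python, run on a single machine with the placeholder address 127.0.0.1 and port 9999) shows that basic pattern; across a real wide area network only the address would change.

```python
# Minimal sketch of two programs exchanging a message over a network connection.
# Both ends run on one machine; 127.0.0.1 and port 9999 are placeholders.
import socket
import threading

listener = socket.create_server(("127.0.0.1", 9999))   # set up the listening end first

def serve_once():
    conn, _ = listener.accept()                         # wait for one incoming connection
    with conn:
        print("received:", conn.recv(1024).decode())

server_thread = threading.Thread(target=serve_once)
server_thread.start()

with socket.create_connection(("127.0.0.1", 9999)) as client:
    client.sendall(b"hello from another computer")      # send a message across the link

server_thread.join()
listener.close()
```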