When I use the term “hacker” I mean someone who enjoys programming and is good at it. Hackers in my experience tend to be an opinionated and individualistic lot, and they tend to appreciate strong opinions and independence of thought in others. The slogan of the Perl language is “There's more than one way to do it”, and most hackers are motivated to explore different ways of doing the same job. That said, if adopting a standard way of doing something provides leverage for building better software, then most hackers will agree to adopt the standard (after much dickering about the details, of course).
If you write code because you want other people to use it, it behooves you to use a language that others are familiar with, adopt standard conventions for input and output so that others can interact with your code without learning some new set of conventions, and provide your code in a format so that others can use it as a component of a larger project without having to understand all the details of your implementation. The struggle to meet these basic criteria requires the hacker to negotiate, make concessions and, generally, work within a community of potential users to produce, adopt and adhere to reasonable standards.
Building great software requires a great deal of discipline and interpersonal skill – in sharp contrast with the stereotype of a hacker as an unkempt, uncommunicative obsessive-compulsive lacking basic hygiene and addicted to highly caffeinated drinks.
Most physicists believe that the speed of light is a fundamental limit on how quickly we can move through space. This claim is based on the predictions of mathematical theories and the results of experiments that appear to support them. According to theory, it doesn't matter whether you move through space with a pogo stick or an anti-matter drive, you're still subject to the rules governing all matter in the universe and thus unable to exceed the speed of light.
What if there are limits on what you can compute? Pharmaceutical companies simulate interactions at the atomic level in searching for molecules to cure diseases. There could be viruses for which it will take years to find a vaccine – there is simply no way to speed up the necessary computations. Software developers who write the programs that keep airplanes flying and emergency rooms functioning would like to prove that their code won't malfunction and put lives at risk. But maybe it's impossible to provide such assurances.
In some cases, computational limitations can work to our advantage. Some programs exploit the difficulty of computing answers to particular problems; for example, the most popular encryption schemes for transferring information securely on the World Wide Web rely on the difficulty of computing the prime factors of large composite integers. Of course, if someone figures out how to factor large numbers efficiently, our privacy will be seriously threatened.
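The asymmetry the passage describes – multiplying primes is easy, recovering them is hard – can be seen even at toy scale. The sketch below (my own illustration, not an example from the text) factors a small semiprime by brute-force trial division; the point is that the work grows with the size of the number, and real cryptographic moduli are hundreds of digits long, far beyond this approach.

```python
# Toy illustration of the factoring asymmetry behind RSA-style encryption:
# multiplying two primes is instant, but undoing the multiplication by
# brute force takes on the order of sqrt(n) trial divisions.

def trial_division(n):
    """Return the smallest prime factor of n by brute-force search."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f
        f += 1
    return n  # n itself is prime

# Multiplying is one machine operation...
p, q = 104729, 1299709      # two modest primes
n = p * q
# ...but recovering p from n means hundreds of thousands of divisions
# even for this small example.
print(trial_division(n))    # prints 104729
```

For numbers of cryptographic size, no known classical algorithm does this efficiently – which is exactly the difficulty the encryption schemes in the text rely on.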
The first computers were used primarily to manage information for large companies and perform numerical calculations for the military. Only a few visionaries saw computing as something for everyone or imagined it could become a basic service like the telephone or electric power. This failure of imagination was due in large part to the fact that the people who controlled computing in the early years weren't the ones actually programming computers. If you worked for a large corporation or industrial laboratory, then you might have strictly limited access to a computer, but otherwise you were pretty much out of luck.
In the early years of computing, users submitted their programs to computer operators to be run in batches. You would hand the operator a stack of cards or a roll of paper tape punched full of holes that encoded your program. An operator would schedule your program to be run (possibly in the middle of the night) with a batch of other programs and at some point thereafter you would be handed a printout of the output generated by your program. You didn't interact directly with the computer and if your program crashed and produced no output, you'd have very little idea what had gone wrong.
The people who ran computer facilities were horrified at the idea of having users interact directly with their precious computers.
Programming languages come in all shapes and sizes and some of them hardly seem like programming languages at all. Of course, that depends on what you count as a programming language; as far as I'm concerned, a programming language is a language for specifying computations. But that's pretty broad and maybe we should narrow our definition to include only languages used for specifying computations to machines, that is, languages for talking with computers. Remember, though, that programmers often communicate with one another by sharing code and the programming language used to write that code can significantly influence what can or can't be easily communicated.
C, Java and Scheme are so-called general-purpose, high-level programming languages. Plenty of other programming languages were designed to suit particular purposes, among them the languages built into mathematical programming packages like Maple, Matlab and Mathematica. There are also special-purpose languages called scripting languages built into most word-processing and desktop-publishing programs that make it easier to perform repetitious tasks like personalizing invitations or making formatting changes throughout a set of documents.
Lots of computer users find themselves constantly doing routine housecleaning tasks like identifying and removing old files and searching for documents containing specific pieces of information. Modern operating systems generally provide nice graphical user interfaces to make such housecleaning easier, but many repetitive tasks are easy to specify but tedious to carry out with these fancy interfaces.
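A script is the natural fit for this kind of "easy to specify, tedious to click through" task. The sketch below (my own illustration, not from the text) finds files under a directory that haven't been modified in a given number of days – the sort of thing a cleanup script would then archive or delete.

```python
# A sketch of a housecleaning script: walk a directory tree and report
# files whose last-modified time is older than a given number of days.

import os
import time

def stale_files(root, days):
    """Yield paths under root not modified in the last `days` days."""
    cutoff = time.time() - days * 86400   # 86400 seconds per day
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) < cutoff:
                yield path

# Example: list files untouched for over a year in your home directory.
# for path in stale_files(os.path.expanduser("~"), 365):
#     print(path)
```

Ten lines of code replace an afternoon of clicking through folder windows – which is the leverage scripting languages offer over graphical interfaces for repetitive work.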
Programming languages, like natural languages, have a vocabulary (lexicon) and rules of syntax (grammar) that you have to learn in order to communicate. Just as unfamiliar grammatical conventions can make learning a new natural language difficult, unfamiliar programming-language syntax can make learning to program difficult. English speakers learning Japanese have to get used to the fact that Japanese verbs generally come at the end of the sentence. With computer languages, the problem is made worse by the fact that computers are much less adept at handling lexically and syntactically mangled programs than humans are at grasping the meaning of garbled speech.
If you want to talk with computers, however, you're going to have to learn a programming language. Just as you learn new natural languages to communicate with other people and experience other cultures, you learn a programming language to communicate with computers and other programmers and to express computational ideas concisely and clearly. The good news is that learning one programming language makes it a lot easier to learn others.
When you start learning to program, you may find yourself consumed with sorting out the lexical and syntactic minutiae of the programming language. You'll have to look up the names of functions and operators and memorize the particular syntax required to invoke them correctly. You may end up spending obscene amounts of time tracking down obscure bugs caused by misplaced commas or missing parentheses.
While writing the previous chapter, I got to thinking about concepts in computer science that connect the microscopic, bit-level world of logic gates and machine language to the macroscopic world of procedures and processes we've been concerned with so far. In listing concepts that might be worth mentioning, I noticed that I was moving from computer architecture, the subdiscipline of computer science concerned with the logical design of computer hardware, to operating systems, the area dealing with the software that mediates between the user and the hardware.
In compiling my list, I was also struck by how many “computerese” terms and phrases have slipped into the vernacular. Interrupt handling (responding to an unexpected event while doing something else) and multitasking (the concurrent performance of several tasks) are prime examples. The common use of these terms concerns not computers but human information processing. I don't know what you'd call the jargon used by psychologists and cognitive scientists to describe how humans think. The word “mentalese” is already taken: the philosopher Jerry Fodor postulates that humans represent the external world in a “language of thought” that is sometimes called “mentalese.” Fodor's mentalese is more like machine language for minds. I'm interested in the language we use to describe how we think, how our thought processes work – a metalanguage for talking about thinking.
One consequence of inexpensive computer memory and storage devices is that much less gets thrown out. People who normally wouldn't characterize themselves as packrats find themselves accumulating megabytes of old email messages, news articles, personal financial data, digital images, digital music in various formats and, increasingly, animations, movies and other multimedia presentations. For many of us, digital memory serves to supplement the neural hardware we were born with for keeping track of things; the computer becomes a sort of neural prosthetic or memory amplifier.
However reassuring it may be to know that every aspect of your digital lifestyle is stored on your computer's hard drive, storing information doesn't do much good if you can't get at what you need when you need it. How do you recall the name of the restaurant your friend from Seattle mentioned in email a couple of years back when she told you about her new job? Or perhaps you're trying to find the recommendation for a compact digital camera that someone sent you in email or you saved from a news article. It's tough remembering where you put things and you'd rather not look through all your files each time you want to recall a piece of information.
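The brute-force alternative to remembering where you put things is to scan everything – which is exactly what you'd rather not do by hand. The sketch below (my own illustration, not from the text) searches a folder of saved messages for lines mentioning a keyword; real desktop-search tools avoid rescanning every file on every query by building an index in advance.

```python
# Brute-force content search: scan every file under a directory for
# lines containing a keyword (case-insensitive).

import os

def grep_files(root, keyword):
    """Return (path, line) pairs for lines containing keyword."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as f:
                    for line in f:
                        if keyword.lower() in line.lower():
                            hits.append((path, line.strip()))
            except OSError:
                continue   # skip unreadable files
    return hits

# grep_files("mail/archive", "restaurant") would surface every saved
# message line mentioning a restaurant -- including the one from your
# friend in Seattle.
```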
In 1999, when NASA launched the first of its Earth Observing System (EOS) satellites, they knew they would have to do something with the terabytes (a terabyte is a trillion bytes) of data streaming down from these orbiting observers.
With all my mumbo-jumbo about conjuring up spirits and casting spells, it's easy to lose track of the fact that computers are real and there is a very precise and concrete connection between the programs and fragments of code you run on your computer and the various electrical devices that make up the hardware of your machine. Interacting with the computer makes the notions of computing and computation very real, but you're still likely to feel shielded from the hardware – as indeed you are – and to be left with the impression that the connection to the hardware is all very difficult to comprehend.
For some of you, grabbing a soldering iron and a handful of logic chips and discrete components is the best path to enlightenment. I used to love tinkering with switching devices scavenged from the local telephone company, probing circuit boards to figure out what they could do and then making them do something other than what they were designed for. Nowadays, it's easier than ever to “interface” sensors and motors to computers, but it still helps to know a little about electronics even if you're mainly interested in the software side of things.
I think it's a good experience for every computer scientist to learn a little about analog circuits (for example, build a simple solid-state switch using a transistor and a couple of resistors) and integrated circuits for memory, logic and timing (build a circuit to add two binary numbers out of primitive logic gates).
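The adder exercise can also be mimicked in software before you pick up a soldering iron. The sketch below (my own illustration, not an exercise from the text) wires a ripple-carry adder out of primitive AND/OR/XOR gate functions, mirroring how the hardware circuit would be composed.

```python
# Simulating the classic exercise: build an adder for binary numbers
# out of primitive logic gates.

def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b

def full_adder(a, b, cin):
    """One-bit full adder built from gates: returns (sum, carry-out)."""
    s1 = XOR(a, b)
    s = XOR(s1, cin)
    cout = OR(AND(a, b), AND(s1, cin))
    return s, cout

def add_binary(x_bits, y_bits):
    """Add two equal-length bit lists, least-significant bit first."""
    carry, out = 0, []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    out.append(carry)   # final carry becomes the high-order bit
    return out

# 3 is [1,1,0] LSB-first and 6 is [0,1,1]; their sum 9 is [1,0,0,1].
print(add_binary([1, 1, 0], [0, 1, 1]))  # prints [1, 0, 0, 1]
```

Each Python function here corresponds to a physical gate, so the same wiring diagram works whether you build it from chips or from code.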
How ‘tightly’ can we pack a given number of $r$-sets of an $n$-set? To be a little more precise, let $X=[n]=\{ 1,\ldots,n \}$, and let $X^r=\{ A\subset X : |A|=r \}$. For a set system $\mathcal{A}\subset X^r $, the neighbourhood of $\mathcal{A}$ is $N(\mathcal{A})=\{ B \in X^r: |B \bigtriangleup A|\le 2 \hbox{ for some }A \in \mathcal{A} \}$. In other words, $N(\mathcal{A})$ consists of those $r$-sets that are either in $\mathcal{A}$ or are ‘adjacent’ to it, in the sense that they are at minimal Hamming distance (i.e., distance 2) from some point of it. Given $|\mathcal{A}|$, how small can $|N(\mathcal{A})|$ be?
Sampling formulas describe probability laws of exchangeable combinatorial structures like partitions and compositions. We give a brief account of two known parametric families of sampling formulas for compositions and add a new family to the list.
We give a quantitative proof that, for sufficiently large $N$, every subset of $[N]^2$ of size at least $\delta N^2$ contains a square, i.e., four points with coordinates $\{(a,b),(a+d,b),(a,b+d),(a+d,b+d)\}$.
Baranyai's partition theorem states that the edges of the complete $r$-graph on $n$ vertices can be partitioned into $1$-factors provided that $r$ divides $n$. Fon-der-Flaass has conjectured that for $r=3$ such a partitioning exists with the property that any two $1$-factors are ‘far apart’ in some natural sense.
Our aim in this note is to prove that the Fon-der-Flaass conjecture is not always true: it fails for $n=12$. Our methods are based on some new ‘auxiliary’ hypergraphs.
Given a set $L$ of $n$ lines in ${\mathbb R}^3$, joints are points in ${\mathbb R}^3$ that are incident to at least three non-coplanar lines in $L$. We show that there are at most $O(n^{5/3})$ incidences between $L$ and the set of its joints.
This result leads to related questions about incidences between $L$ and a set $P$ of $m$ points in ${\mathbb R}^3$. First, we associate with every point $p \in P$ the minimum number of planes it takes to cover all lines incident to $p$. Then the sum of these numbers is at most \[ O\big(m^{4/7}n^{5/7}+m+n\big).\] Second, if each line forms a fixed given non-zero angle with the $xy$-plane – we say the lines are equally inclined – then the number of (real) incidences is at most \[ O\big(\min\big\{m^{3/4}n^{1/2}\kappa(m),\ m^{4/7}n^{5/7}\big\} + m + n\big) , \] where $\kappa(m) = (\log m)^{O(\alpha^2(m))}$, and $\alpha(m)$ is the slowly growing inverse Ackermann function. These bounds are smaller than the tight Szemerédi–Trotter bound for point–line incidences in ${\mathbb R}^2$, unless both bounds are linear. They are the first results of this type on incidences between points and $1$-dimensional objects in ${\mathbb R}^3$. This research was stimulated by a question raised by G. Elekes.
I show that the zeros of the chromatic polynomials $P_G(q)$ for the generalized theta graphs $\Theta^{(s,p)}$ are, taken together, dense in the whole complex plane with the possible exception of the disc $|q-1| < 1$. The same holds for their dichromatic polynomials (alias Tutte polynomials, alias Potts-model partition functions) $Z_G(q,v)$ outside the disc $|q+v| < |v|$. An immediate corollary is that the chromatic roots of not-necessarily-planar graphs are dense in the whole complex plane. The main technical tool in the proof of these results is the Beraha–Kahane–Weiss theorem on the limit sets of zeros for certain sequences of analytic functions, for which I give a new and simpler proof.
Bollobás and Riordan introduce a Tutte polynomial for coloured graphs and matroids in [3]. We observe that this polynomial has an expansion as a sum indexed by the subsets of the ground-set of a coloured matroid, generalizing the subset expansion of the Tutte polynomial. We also discuss similar expansions of other contraction–deletion invariants of graphs and matroids.
It is shown that the hard-core model on ${\mathbb Z}^d$ exhibits a phase transition at activities above some function $\lambda(d)$ which tends to zero as $d\rightarrow \infty$. More precisely, consider the usual nearest neighbour graph on ${\mathbb Z}^d$, and write ${\cal E}$ and ${\cal O}$ for the sets of even and odd vertices (defined in the obvious way). Set $$\Lambda_M=\Lambda_M^d =\{z\in{\mathbb Z}^d:\|z\|_{\infty}\leq M\},\quad \partial^{\star} \Lambda_M =\{z\in{\mathbb Z}^d:\|z\|_{\infty}= M\},$$ and write ${\cal I}(\Lambda_M)$ for the collection of independent sets (sets of vertices spanning no edges) in $\Lambda_M$. For $\lambda>0$ let ${\bf I}$ be chosen from ${\cal I}(\Lambda_M)$ with $\Pr({\bf I}=I) \propto \lambda^{|I|}$.
Theorem. There is a constant $C$ such that if $\lambda > Cd^{-1/4}\log^{3/4}d$, then $$\lim_{M\rightarrow\infty}\Pr(\underline{0}\in{\bf I}\mid{\bf I}\supseteq \partial^{\star} \Lambda_M\cap {\cal E}) > \lim_{M\rightarrow\infty}\Pr(\underline{0}\in{\bf I}\mid {\bf I}\supseteq \partial^{\star} \Lambda_M\cap {\cal O}).$$ Thus, roughly speaking, the influence of the boundary on behaviour at the origin persists as the boundary recedes.
We consider two interrelated tasks in a synchronous $n$-node ring: distributed constant colouring and local communication. We investigate the impact of the amount of knowledge available to nodes on the time of completing these tasks. Every node knows the labels of nodes up to a distance $r$ from it, called the knowledge radius. In distributed constant colouring every node has to assign itself one out of a constant number of colours, so that adjacent nodes get different colours. In local communication every node has to communicate a message to both of its neighbours. We study these problems in two popular communication models: the one-way model, in which, in any round, each node can only either transmit to one neighbour or receive from one neighbour, and the radio model, in which simultaneous receiving from two neighbours results in interference noise. Hence the main problem in fast execution of the above tasks is breaking symmetry with restricted knowledge of the ring.
We show that distributed constant colouring and local communication are tightly related and one can be used to accomplish the other. Also, in most situations the optimal time is the same for both of them, and it strongly depends on knowledge radius. For knowledge radius $r=0$, i.e., when each node knows only its own label, our bounds on time for both tasks are tight in both models: the optimal time in the one-way model is $\Theta(n)$, while in the radio model it is $\Theta(\log n)$. For knowledge radius $r=1$ both tasks can be accomplished in time $O(\log \log n)$ in the one-way model, if the ring is oriented. For $2 \leq r \leq c \log ^* n$, where $c < 1/2$, the upper bounds on time are $O(\log^{(2r)} n)$ in the one-way model and $O(\log ^{(2\lfloor r/2 \rfloor)} n)$ in the radio model; the lower bound is $\Omega (\log^* n)$, in both models. For $r \geq (\log^*n)/2$ both tasks can be completed in constant time, in the one-way model, and distributed constant colouring also in the radio model. Finally, if $r \geq \log^*n$ then constant time is also enough for local communication in the radio model.