We saw in Chapter 5 that the one-time pad is a cryptosystem that provides perfect secrecy, so why not use it? The obvious reason is that the key needs to be as long as the message and the users need to decide on this secret key in advance using a secure channel.
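As a concrete illustration (a minimal Python sketch, not production cryptography), the one-time pad is just a byte-wise XOR with a key that is uniformly random, as long as the message, and never reused:

```python
import secrets

def otp_xor(data: bytes, key: bytes) -> bytes:
    """XOR data with a key of at least the same length.

    Perfect secrecy requires the key to be uniformly random,
    as long as the message, and used only once.
    """
    assert len(key) >= len(data)
    return bytes(d ^ k for d, k in zip(data, key))

message = b"attack at dawn"
key = secrets.token_bytes(len(message))     # fresh random key, one per message
ciphertext = otp_xor(message, key)
assert otp_xor(ciphertext, key) == message  # decryption is the same XOR
```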
Having introduced public key cryptography in Chapter 7, one might wonder why anyone would still want to use a symmetric cryptosystem. Why not simply use RSA or some other public key cryptosystem and dispense with the need to exchange secret keys once and for all?
The problem with this approach is speed: symmetric cryptosystems are generally much faster than public key ones. For example, in 1996 DES was around 1000 times faster than RSA. In situations where a large amount of data needs to be encrypted quickly, or the users are computationally limited, symmetric cryptosystems still play an important role. A major problem they face is how to agree a common secret key to enable communications to begin.
This basic ‘key exchange problem’ becomes ever more severe as communication networks grow in size and more and more users wish to communicate securely. Indeed, while one could imagine Alice and Bob finding a way to exchange a secret key securely, the same may not be true of a network with 1000 users.
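The scale of the problem is easy to quantify: with pairwise shared keys, a network of $n$ users needs $n(n-1)/2$ distinct keys. A two-line sketch:

```python
def pairwise_keys(n: int) -> int:
    """Keys needed if every pair of n users shares its own secret key."""
    return n * (n - 1) // 2

print(pairwise_keys(2))     # 1 key for Alice and Bob alone
print(pairwise_keys(1000))  # 499500 keys for a 1000-user network
```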
Now, one of the peculiar characteristics of the savage in his domestic hours, is his wonderful patience of industry. An ancient Hawaiian war-club or spear-paddle, in its full multiplicity and elaboration of carving, is as great a trophy of human perseverance as a Latin lexicon. For, with but a bit of broken sea-shell or a shark's tooth, that miraculous intricacy of wooden network has been achieved; and it has cost steady years of steady application.
Neuronal function involves the interaction of electrical and chemical signals that are distributed in time and space. The mechanisms that generate these signals and regulate their interactions are marked by a wide diversity of properties, differing across neuronal cell class, developmental stage, and species (e.g. Chapter 7 in (Johnston and Wu 1995); also see (McCormick 1998)). To be useful in research, a simulation environment must provide a flexible and powerful means for incorporating new biophysical mechanisms in models. It must also help the user remain focused on the model instead of programming.
Such a means is provided to NEURON by NMODL, a high-level language that was originally implemented for NEURON by Michael Hines and later extended by him and Upinder Bhalla to generate code suitable for linking with GENESIS (Wilson and Bower 1989).
… so, entering, the first thing I did was to stumble over an ash-box in the porch. Ha! thought I, ha, as the flying particles almost choked me, are these ashes from that destroyed city, Gomorrah?
Modeling and understanding
Modeling can have many uses, but its principal benefit is to improve understanding. The chief question that it addresses is whether what is known about a system can account for the behavior of the system. An indispensable step in modeling is to postulate a conceptual model that expresses what we know, or think we know, about a system, while omitting unnecessary details. This requires considerable judgment and is always vulnerable to hindsight and revision, but it is important to keep things as simple as possible. The choice of what to include and what to leave out depends strongly on the hypothesis that we are studying. The issue of how to make such decisions is outside the primary focus of this book, although from time to time we may return to it briefly.
The task of building a computational model should only begin after a conceptual model has been proposed. In building a computational model we struggle to establish a match between the conceptual model and its computational representation, always asking the question: would the conceptual model behave like the simulation? If not, where are the errors? If so, how can we use NEURON to help understand why the conceptual model implies that behavior?
Introducing NEURON
NEURON is a simulation environment for models of individual neurons and networks of neurons that are closely linked to experimental data.
Having considered classical symmetric cryptography in the previous chapter, we now introduce the modern complexity theoretic approach to cryptographic security.
Recall our two characters Alice and Bob, who wish to communicate securely. They would like to use a cryptosystem in which encryption (by Alice) and decryption (by Bob, using his secret key) are computationally easy, but in which the problem of decryption for Eve (who does not know Bob's secret key) is as computationally intractable as possible.
This complexity theoretic gap between the easy problems faced by Alice and Bob and the hopefully impossible problems faced by Eve is the basis of modern cryptography. In order for such a gap to exist there must be a limit to the computational capabilities of Eve. Moreover it would be unrealistic to suppose that any limits on the computational capabilities of Eve did not also apply to Alice and Bob. This leads to our first assumption:
Alice, Bob and Eve can only perform probabilistic polynomial time computations.
So for Alice and Bob to be able to encrypt and decrypt easily means that there should be (possibly probabilistic) polynomial time algorithms for both procedures.
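As a toy illustration (with invented parameters, and with the discrete logarithm standing in for whichever hard problem a given cryptosystem rests on), the forward direction below is a single fast modular exponentiation, while the only attack sketched is a brute-force search whose running time grows exponentially in the bit length of the secret exponent:

```python
# Toy parameters only: 2**31 - 1 is prime and 7 is a primitive root of it,
# but both are far too small for real cryptography.
p = 2**31 - 1
g = 7

x = 123_456_789        # the secret exponent
y = pow(g, x, p)       # the easy direction: fast modular exponentiation

def brute_force_dlog(g: int, y: int, p: int) -> int:
    """Eve's naive attack: try exponents one by one until g**e == y (mod p).
    The running time is exponential in the bit length of the exponent."""
    e, acc = 0, 1
    while acc != y:
        acc = acc * g % p
        e += 1
    return e

# pow(g, x, p) above takes microseconds; brute_force_dlog(g, y, p) would take
# roughly x iterations, and at realistic key sizes it becomes hopeless.
```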
But exactly how should we formalise the idea that Eve must face a computationally intractable problem when she tries to decrypt an intercepted cryptogram without Bob's secret key?
In spite of the difficulty of defining the syllable unequivocally, and controversy over its role in theories of spoken and written language processing, the syllable is a potentially useful unit in several practical tasks that arise in computational linguistics and speech technology. For instance, syllable structure might embody valuable information for building word models in automatic speech recognition, and concatenative speech synthesis might use syllables or demisyllables as basic units. In this paper, we first present an algorithm for determining syllable boundaries in the orthographic form of unknown words that works by analogical reasoning from a database or corpus of known syllabifications. We call this syllabification by analogy (SbA). It is motivated similarly to our existing pronunciation by analogy (PbA), which predicts pronunciations for unknown words (specified by their spellings) by inference from a dictionary of known word spellings and corresponding pronunciations. We show that including perfect (according to the corpus) syllable boundary information in the orthographic input can dramatically improve the performance of pronunciation by analogy of English words, but such information would not be available to a practical system. So we next investigate combining automatically inferred syllabification and pronunciation in two different ways: the series model, in which syllabification is followed sequentially by pronunciation generation; and the parallel model, in which syllabification and pronunciation are simultaneously inferred. Unfortunately, neither improves performance over PbA without syllabification. Possible reasons for this failure are explored via an analysis of syllabification and pronunciation errors.
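To make the analogical idea concrete, the following deliberately crude Python sketch (not the SbA algorithm of the paper; the mini-corpus and the bigram heuristic are invented for illustration) places a syllable boundary between two letters when that letter pair usually spans a boundary in the corpus:

```python
from collections import Counter

# Invented mini-corpus of hyphen-marked syllabifications.
corpus = ["win-dow", "nap-kin", "bot-tle", "pic-ture"]

boundary, total = Counter(), Counter()
for word in corpus:
    chars = list(word)
    i = 0
    while i < len(chars) - 1:
        if chars[i + 1] == "-":              # a boundary splits this letter pair
            pair = (chars[i], chars[i + 2])
            boundary[pair] += 1
            total[pair] += 1
            i += 2
        else:
            total[(chars[i], chars[i + 1])] += 1
            i += 1

def syllabify(word: str) -> str:
    """Insert a boundary where the letter pair usually spans one in the corpus."""
    out = [word[0]]
    for a, b in zip(word, word[1:]):
        if total[(a, b)] and boundary[(a, b)] / total[(a, b)] > 0.5:
            out.append("-")
        out.append(b)
    return "".join(out)

print(syllabify("window"))  # -> win-dow, by analogy with the corpus
```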
This paper describes in detail an algorithm for the unsupervised learning of natural language morphology, with emphasis on challenges that are encountered in languages typologically similar to European languages. It utilizes the Minimum Description Length analysis described in Goldsmith (2001), and has been implemented in software that is available for downloading and testing.
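The following toy Python sketch illustrates the MDL intuition only (it is not Goldsmith's actual description-length formula): a stem-plus-suffix analysis is preferred when it describes the word list more compactly than listing each word whole. Character counts stand in for bits, and the flat pointer cost is an assumption:

```python
words = ["walk", "walks", "walked", "walking",
         "jump", "jumps", "jumped", "jumping"]

def dl_unanalyzed(words):
    """Description length (in characters) of listing every word whole."""
    return sum(len(w) for w in words)

def dl_analyzed(stems, suffixes):
    """Description length of the stems, the suffixes, and one (stem, suffix)
    pointer pair per generated word, at an assumed flat cost of 2 per word."""
    pointer_cost = 2
    return (sum(map(len, stems)) + sum(map(len, suffixes))
            + pointer_cost * len(stems) * len(suffixes))

print(dl_unanalyzed(words))                                   # 44
print(dl_analyzed({"walk", "jump"}, {"", "s", "ed", "ing"}))  # 30: analysis wins
```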
For a random permutation of $n$ objects, as $n \to \infty$, the process giving the proportion of elements in the longest cycle, the second-longest cycle, and so on, converges in distribution to the Poisson–Dirichlet process with parameter 1. This was proved in 1977 by Kingman and by Vershik and Schmidt. For soft reasons, this is equivalent to the statement that the random permutations and the Poisson–Dirichlet process can be coupled so that zero is the limit of the expected $\ell_1$ distance between the process of cycle length proportions and the Poisson–Dirichlet process. We investigate how rapid this metric convergence can be, and in doing so, give two new proofs of the distributional convergence.
One of the couplings we consider has an analogue for the prime factorizations of a uniformly distributed random integer, and these couplings rely on the ‘scale-invariant spacing lemma’ for the scale-invariant Poisson processes, proved in this paper.
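The distributional convergence is easy to observe empirically. The following Python sketch (an illustration of the theorem's statement, not of the couplings constructed in the paper) samples a uniform random permutation and reports its sorted cycle length proportions, whose first coordinates approximate a Poisson–Dirichlet(1) sample for large $n$:

```python
import random

def cycle_length_proportions(n: int) -> list[float]:
    """Sample a uniform random permutation of n objects and return its
    cycle lengths as proportions of n, sorted in decreasing order."""
    perm = list(range(n))
    random.shuffle(perm)
    seen, lengths = [False] * n, []
    for start in range(n):
        if not seen[start]:
            length, i = 0, start
            while not seen[i]:
                seen[i] = True
                i = perm[i]
                length += 1
            lengths.append(length)
    return sorted((l / n for l in lengths), reverse=True)

# The longest cycle occupies about 62% of the elements on average.
print(cycle_length_proportions(10**5)[:5])
```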
We put the final piece into a puzzle first introduced by Bollobás, Erdős and Szemerédi in 1975. For arbitrary positive integers $n$ and $r$ we determine the largest integer $\Delta=\Delta(r,n)$ for which any $r$-partite graph with partite sets of size $n$ and of maximum degree less than $\Delta$ has an independent transversal. This value was known for all even $r$. Here we determine the value for odd $r$ and find that $\Delta(r,n)=\Delta(r-1,n)$. Informally, this means that adding an odd-numbered partite set does not make it any harder to guarantee an independent transversal.
In the proof we establish structural theorems which could be of independent interest. They work for all $r \geq 7$, and specify the structure of slightly sub-optimal graphs for even $r \geq 8$.
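For readers unfamiliar with the notion: an independent transversal picks one vertex from each partite set so that no two chosen vertices are adjacent. The brute-force Python check below (exponential in $r$, with an invented toy instance) merely illustrates the definition, not the structural arguments of the paper:

```python
from itertools import product

def has_independent_transversal(parts, edges):
    """Check every choice of one vertex per part for pairwise non-adjacency.
    Exponential in the number of parts; small toy instances only."""
    edge_set = {frozenset(e) for e in edges}
    for choice in product(*parts):
        if all(frozenset((u, v)) not in edge_set
               for i, u in enumerate(choice) for v in choice[i + 1:]):
            return True
    return False

# Toy instance: 3 parts of size 2; edges chosen by hand for illustration.
parts = [("a1", "a2"), ("b1", "b2"), ("c1", "c2")]
edges = [("a1", "b1"), ("b2", "c1"), ("a2", "c2")]
print(has_independent_transversal(parts, edges))  # True, e.g. via (a1, b2, c2)
```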
We show that a maximum cut of a random graph below the giant-component threshold can be found in linear space and linear expected time by a simple algorithm. In fact, the algorithm solves a more general class of problems, namely binary 2-variable constraint satisfaction problems. In addition to Max Cut, such Max 2-CSPs encompass Max Dicut, Max 2-Lin, Max 2-Sat, Max-Ones-2-Sat, maximum independent set, and minimum vertex cover. We show that if a Max 2-CSP instance has an ‘underlying’ graph which is a random graph $G \in \mathcal{G}(n,c/n)$, then the instance is solved in linear expected time if $c \leq 1$. Moreover, for arbitrary values (or functions) $c>1$ an instance is solved in expected time $n \exp(O(1+(c-1)^3 n))$; in the ‘scaling window’ $c=1+\lambda n^{-1/3}$ with $\lambda$ fixed, this expected time remains linear.
Our method is to show, first, that if a Max 2-CSP has a connected underlying graph with $n$ vertices and $m$ edges, then $O(n 2^{(m-n)/2})$ is a deterministic upper bound on the solution time. Then, analysing the tails of the distribution of this quantity for a component of a random graph yields our result. Towards this end we derive some useful properties of binomial distributions and simple random walks.
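The componentwise strategy is easy to illustrate. The Python sketch below (a naive exhaustive search, not the paper's $O(n 2^{(m-n)/2})$ algorithm) finds a maximum cut of a sub-critical random graph by solving each connected component separately; below the threshold the components are small, which is what keeps such an approach fast:

```python
import random
from itertools import combinations

def components(n, edges):
    """Connected components of a graph on vertices 0..n-1 (depth-first search)."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, comps = [False] * n, []
    for s in range(n):
        if seen[s]:
            continue
        stack, comp = [s], set()
        seen[s] = True
        while stack:
            u = stack.pop()
            comp.add(u)
            for w in adj[u]:
                if not seen[w]:
                    seen[w] = True
                    stack.append(w)
        comps.append(comp)
    return comps

def max_cut(vs, edges):
    """Exhaustive search over 2^(|vs|-1) bipartitions of one component."""
    vs = sorted(vs)
    best = 0
    for mask in range(1 << (len(vs) - 1)):
        side = {v: (mask >> i) & 1 for i, v in enumerate(vs[:-1])}
        side[vs[-1]] = 0                    # fixing one vertex halves the work
        best = max(best, sum(side[u] != side[v] for u, v in edges))
    return best

n, c = 30, 0.7                              # kept small and sub-critical (c < 1)
rng = random.Random(1)
edges = [e for e in combinations(range(n), 2) if rng.random() < c / n]
total = sum(max_cut(comp, [e for e in edges if e[0] in comp])
            for comp in components(n, edges))
print(total)
```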
We study relational structures (especially graphs and posets) which satisfy the analogue of homogeneity but for homomorphisms rather than isomorphisms. The picture is rather different. Our main results are partial characterizations of countable graphs and posets with this property; an analogue of Fraïssé's theorem; and representations of monoids as endomorphism monoids of such structures.
In this paper, we give sharp upper bounds on the maximum number of edges in very unbalanced bipartite graphs containing no cycle of length 6. To prove this, we give a rough estimate of the sum of the sizes of the hyperedges in triangle-free multi-hypergraphs.
The adaptation of combinatorial duality to infinite graphs has been hampered by the fact that while cuts (or cocycles) can be infinite, cycles are finite. We show that these obstructions fall away when duality is reinterpreted on the basis of a ‘singular’ approach to graph homology, whose cycles are defined topologically in a space formed by the graph together with its ends, and can be infinite. Our approach enables us to complete Thomassen's results about ‘finitary’ duality for infinite graphs to full duality, including his extensions of Whitney's theorem.