The two most important objectives for network operation are:
(i) capacity minimization;
(ii) revenue maximization.
For capacity minimization, there are three operational phases in survivable WDM network operation: (i) initial call setup, (ii) short-/medium-term reconfiguration, and (iii) long-term reconfiguration. Each of the three optimization problems can be modeled separately as an integer linear program (ILP). This chapter presents a single ILP formulation that incorporates all three phases of network operation; this common framework also takes service disruption into consideration. Typically, design problems in optical networks have considered a static traffic demand and have tried to optimize the network cost under various cost models and survivability paradigms, with fast restoration as a key feature of the designs. Once the network is provisioned, the critical issue is how to operate it so that performance is optimized under dynamic traffic.
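To give a concrete flavor of such a formulation (a minimal sketch, not the chapter's actual ILP), the following Python fragment encodes a toy capacity-minimization problem: each demand must be routed on exactly one of its precomputed candidate paths, and the objective minimizes the total number of wavelengths provisioned across all links. The four-node ring, the demands, and the candidate paths are hypothetical, and the open-source PuLP package is assumed as the solver.

    from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

    def norm(e):
        return tuple(sorted(e))        # undirected links, stored in canonical order

    links = [norm(e) for e in [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]]
    # Each demand: a list of candidate paths; a path is a list of links.
    demands = {
        "d1": [[("a", "b"), ("b", "c")], [("a", "d"), ("c", "d")]],
        "d2": [[("b", "c")], [("a", "b"), ("a", "d"), ("c", "d")]],
    }

    prob = LpProblem("capacity_min", LpMinimize)
    # x[d][p] = 1 if demand d is routed on its p-th candidate path.
    x = {d: [LpVariable(f"x_{d}_{p}", cat=LpBinary) for p in range(len(paths))]
         for d, paths in demands.items()}
    # cap[l] = number of wavelengths provisioned on link l.
    cap = {l: LpVariable(f"cap_{l[0]}{l[1]}", lowBound=0, cat="Integer") for l in links}

    prob += lpSum(cap.values())            # objective: total provisioned capacity

    for d, paths in demands.items():
        prob += lpSum(x[d]) == 1           # every demand takes exactly one path

    for l in links:                        # provisioned capacity covers the load
        prob += lpSum(x[d][p]
                      for d, paths in demands.items()
                      for p, path in enumerate(paths)
                      if l in {norm(e) for e in path}) <= cap[l]

    prob.solve()
    for d, paths in demands.items():
        chosen = next(p for p in range(len(paths)) if x[d][p].value() > 0.5)
        print(d, "takes candidate path", chosen)

Wavelength-continuity, protection, and service-disruption constraints would sit on top of this skeleton in a full formulation.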
The framework for revenue maximization is modified to include a service differentiation model based on lightpath protection. A multi-stage solution methodology is developed to solve the individual service classes sequentially and to combine them into a feasible solution. Comparing the revenue obtained for the various service classes against a base case in which demands are accepted without any protection shows the gains in planning and operational efficiency.
Capacity minimization
Among the three phases of capacity minimization, the initial call-setup phase is a static optimization problem in which the network capacity is optimized for the given topology and the traffic matrix to be provisioned on the network.
Optical components are devices that transmit, shape, amplify, switch, transport, or detect light signals. The improvements in optical component technologies over the past few decades have been the key enabler in the evolution and commercialization of optical networks. In this appendix, the basic principles behind the functioning of the various components are briefly reviewed. In general, there are three groups of optical components.
(i) Active components: devices that are electrically powered, such as lasers, wavelength shifters, and modulators.
(ii) Passive components: devices that are not electrically powered and that do not generate light of their own, such as fibers, multiplexers, demultiplexers, couplers, isolators, attenuators, and circulators.
(iii) Optical modules: devices that are a collection of active and/or passive optical elements used to perform specific tasks. This group includes transceivers, erbium-doped fiber amplifiers, optical switches, and optical add/drop multiplexers.
Fiber optic cables
The backbone that connects all of the nodes and systems together is the optical fiber. The fiber allows signals of enormous frequency range (25 THz) to be transmitted over long distances without significant distortion in the information content. While there are losses in the fiber due to reflection, refraction, scattering, dispersion, and absorption, the bandwidth available in this medium is orders of magnitude more than that provided by other conventional media such as copper cables. As will be explained below, the bandwidth available in the fiber is limited only by the attenuation characteristics of the medium at low frequencies and its dispersion characteristics at high frequencies.
Optical technology involves research into components, such as couplers, amplifiers, switches, etc., that form the building blocks of the networks. Some of the main components used in optical networking are described in Appendix A1. With the help of these components, one designs a network and operates it. Issues in network design include minimizing the total network cost, the ability of the network to tolerate failures, the scalability of the network to meet future demands based on projected traffic volumes, etc. The operational part of the network involves monitoring the network for proper functionality, routing traffic, handling dynamic traffic in the network, reconfiguring the network in the case of failure, etc. In this chapter, these issues are introduced in brief, followed by a discussion of two main issues in network operation: survivability and traffic grooming, i.e. the management of smaller traffic streams.
Network design
Network design involves assigning sufficient resources in the network to meet the projected traffic demand. Typically, network design problems consider a static traffic matrix and aim to design a network that is optimized based on certain performance metrics. Network design problems employing a static traffic matrix are typically formulated as optimization problems. If the traffic pattern in the network is dynamic, i.e. the specific traffic is not known a priori, the design problem involves assigning resources based on certain projected traffic distributions. In the case of dynamic traffic the network designer attempts to quantify certain network performance metrics based on the distribution of the traffic. The most commonly used metric in evaluating a network under dynamic traffic patterns is the blocking probability.
A network is represented by a graph G = (V, E), where V is a finite set of elements called nodes or vertices, and E is a set of unordered pairs of nodes called edges or arcs. This is an undirected graph. A directed graph is defined similarly, except that the arcs or edges are ordered pairs. For both directed and undirected graphs, an arc or an edge from a node i to a node j is represented using the notation (i, j). Examples of five-node directed and undirected graphs are shown in Fig. A3.1. In an undirected graph, an edge (i, j) can carry data traffic in both directions (i.e. from node i to node j and from node j to node i), whereas in a directed graph, an arc (i, j) carries traffic only from node i to node j.
Graph representations. A graph is stored either as an adjacency matrix or an incidence matrix, as shown in Fig. A3.2. For a graph with N nodes, the adjacency matrix is an N × N 0−1 matrix in which element (i, j) is 1 if node i has a link to node j, and 0 otherwise. An incidence matrix, on the other hand, is an N × M matrix, where M is the number of links, numbered from 0 to M − 1. Element (i, j) records whether link j is incident on node i. Thus, the incidence matrix carries information about exactly which links are incident on each node.
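As a small illustration (the five-node ring and the function names are hypothetical), the Python sketch below builds both representations of the same undirected graph; note that the adjacency matrix costs O(N²) storage while the incidence matrix costs O(NM).

    def adjacency_matrix(n_nodes, edges):
        """N x N 0-1 matrix: entry (i, j) is 1 iff an edge joins nodes i and j."""
        adj = [[0] * n_nodes for _ in range(n_nodes)]
        for i, j in edges:
            adj[i][j] = 1
            adj[j][i] = 1              # undirected: store both directions
        return adj

    def incidence_matrix(n_nodes, edges):
        """N x M 0-1 matrix: entry (i, k) is 1 iff link k is incident on node i."""
        inc = [[0] * len(edges) for _ in range(n_nodes)]
        for k, (i, j) in enumerate(edges):   # links numbered 0 .. M-1
            inc[i][k] = 1
            inc[j][k] = 1
        return inc

    edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]   # a 5-node ring
    print(adjacency_matrix(5, edges))
    print(incidence_matrix(5, edges))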
One of the important performance metrics by which a wide-area network is evaluated is the success ratio: the fraction of requests that are accepted by the network. This metric is usually posed in its alternate form as the blocking probability, which refers to the fraction of requests that are rejected. The smaller the rejection ratio, the better the network performance. Other performance metrics exist, such as the effective traffic carried in the network and the fairness of request rejections with respect to requests with different capacity requirements or different path lengths; nevertheless, the most meaningful way to measure the performance of a wide-area network is through its blocking performance. To some extent, the other performance metrics described above can be obtained as functions of the blocking performance.
Analytical models that evaluate the blocking performance of wide-area circuit-switched networks are employed during the design phase of a network. In the design phase these models are typically employed as an elimination test rather than as an acceptance test. In other words, the analytical models serve as back-of-the-envelope calculations to evaluate a network design, rejecting designs that fall below a certain performance threshold.
Blocking model
The following assumptions are made to develop an analytical model for evaluating the blocking performance of a TSN.
The network has N nodes.
Call arrivals at every node follow a Poisson process with rate λn. The choice of Poisson traffic keeps the analysis tractable (a single-link illustration follows below).
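As a hedged, single-link illustration of how such assumptions are used (the model developed for a TSN is multi-node and more elaborate; the parameter values here are hypothetical), the classical Erlang B formula gives the blocking probability on one link with Poisson arrivals, exponential holding times, and m circuits, via the numerically stable recursion B(E, 0) = 1, B(E, k) = E·B(E, k−1) / (k + E·B(E, k−1)):

    def erlang_b(offered_load, circuits):
        """Erlang B blocking probability via the standard recursion."""
        b = 1.0
        for k in range(1, circuits + 1):
            b = offered_load * b / (k + offered_load * b)
        return b

    lam, mu, m = 8.0, 1.0, 10       # hypothetical arrival rate, service rate, circuits
    print(erlang_b(lam / mu, m))    # blocking probability, about 0.12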
… can you not tell water from air? My dear sir, in this world it is not so easy to settle these plain things. I have ever found your plain things the knottiest of all.
This and the following chapter deal with concepts that are not NEURON-specific but instead pertain equally well to any tools used for neural modeling.
Why model?
In order to achieve the ultimate goal of understanding how nervous systems work, it will be necessary to know many different kinds of information:
The anatomy of individual neurons and classes of cells, pathways, nuclei, and higher levels of organization.
The pharmacology of ion channels, transmitters, modulators, and receptors.
The biochemistry and molecular biology of enzymes, growth factors, and genes that participate in brain development and maintenance, perception and behavior, learning and forgetting, health and disease.
But while this knowledge will be necessary for an understanding of brain function, it isn't sufficient. This is because the moment-to-moment processing of information in the brain is carried out by the spread and interaction of electrical and chemical signals that are distributed in space and time. These signals are generated and regulated by mechanisms that are kinetically complex, highly nonlinear, and arranged in intricate anatomical structures.
How should Peggy prove to Victor that she is who she claims to be?
There is no simple answer to this question; it depends on the situation. For example, if Peggy and Victor meet in person, she may show him her passport (hopefully issued by an authority that he trusts). Alternatively, she could present him with a fingerprint or other biometric information, which he could then check against a central database. In either case it should be possible for Peggy to convince Victor that she really is Peggy. This is the first requirement of any identification scheme: honest parties should be able to prove and verify identities correctly.
A second requirement is that a dishonest third party, say Oscar, should be unable to impersonate Peggy. For example, two crucial properties of any passport are that it is unforgeable and that its issuing authority can be trusted not to issue a fake one. In the case of biometrics, Victor needs to know that the central database is correct.
A special and rather important case of this second requirement arises when Victor is himself dishonest. After asking Peggy to prove her identity, Victor should not be able to impersonate her to someone else.
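A toy challenge-response protocol makes these requirements concrete (a sketch only, with hypothetical names, using Python's standard hmac and secrets modules rather than a real identification scheme). It meets the first two requirements, but deliberately fails the third: after verifying Peggy once, a dishonest Victor holds the same shared key and can answer anyone else's challenge exactly as Peggy would.

    import hmac, hashlib, secrets

    KEY = secrets.token_bytes(32)      # secret shared by Peggy and the verifier

    def respond(key, challenge):
        """Peggy's response: a MAC of the verifier's random challenge."""
        return hmac.new(key, challenge, hashlib.sha256).digest()

    # Victor picks a fresh random challenge, so old responses cannot be replayed.
    challenge = secrets.token_bytes(16)
    response = respond(KEY, challenge)

    # Victor recomputes the MAC and compares in constant time.
    assert hmac.compare_digest(response, respond(KEY, challenge))

    # The flaw relative to the third requirement: Victor knows KEY, so he
    # could now impersonate Peggy to someone else.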
But what it was that inscrutable Ahab said to that tiger – yellow crew of his – these were words best omitted here; for you live under the blessed light of the evangelical land. Only the infidel sharks in the audacious seas may give ear to such words, when, with tornado brow, and eyes of red murder, and foam-glued lips, Ahab leaped after his prey.
Much of the flexibility of NEURON is due to its use of a built-in interpreter, called hoc (pronounced “hoak”), for defining the anatomical and biophysical properties of models of neurons and neuronal networks, controlling simulations, and creating a graphical user interface. In this chapter we present a survey of hoc and how it is used in NEURON. Readers who seek the most up-to-date list of hoc keywords and documentation of syntax are referred to the online Programmer's Reference (see link at http://www.neuron.yale.edu/neuron/docs/docs.html). This can also be downloaded as a pkzip archive for convenient offline viewing with any WWW browser. The standard distribution for MSWindows includes a copy of the Programmer's Reference which is current as of the date of the NEURON executable that it accompanies (see the “Documentation” item in the NEURON program group).
NEURON's hoc is based on the floating point calculator by the same name that was developed by Kernighan and Pike (1984).
Finally: It was stated at the outset, that this system would not be here, and at once, perfected. You cannot but plainly see that I have kept my word. But I now leave my cetological system standing thus unfinished, even as the great Cathedral of Cologne was left, with the crane still standing upon the top of the uncompleted tower. For small erections may be finished by their first architects; grand ones, true ones, ever leave the copestone to posterity. God keep me from ever completing anything. This whole book is but a draught–nay, but the draught of a draught. Oh Time, Strength, Cash, and Patience!
Until relatively recently, cryptosystems were always symmetric: they relied on the use of a shared secret key known to both sender and receiver.
This all changed in the 1970s. Public key cryptosystems, as they are now called, revolutionised the theory and practice of cryptography by relying for their impenetrability on the existence of a special type of one-way function known as a trapdoor function. Using these, the need for a shared secret key was removed. Hence James Ellis and Clifford Cocks of the Government Communication Headquarters (GCHQ), Cheltenham, in the UK, who first discovered this technique, named it ‘non-secret encryption’.
For a fascinating account of how this discovery was made see Chapter 5 of Singh (2000). He recounts how key distribution was a major problem for the UK military in the late 1960s. In 1969 Ellis came up with the idea of what we now call a ‘trapdoor function’. Informally this is a one-way function which can be inverted easily by anyone in possession of a special piece of information: the trapdoor.
This was exactly the same idea that Diffie, Hellman and Merkle came up with several years later but, like them, Ellis was unable to find a way of implementing it.
It was three years later in November 1973 that Cocks, a young recruit to GCHQ, came up with the very simple solution (essentially the RSA cryptosystem) which was rediscovered several years later by Rivest, Shamir and Adleman (1978).
We have seen two possible methods for secure encryption so far, but both had serious problems.
The one-time pad in Chapter 5 offered the incredibly strong guarantee of perfect secrecy: the cryptogram reveals no new information about the message. The drawback was that it required a secret shared random key as long as the message. This really presents two distinct problems: first, the users need to generate a large number of independent random bits to form the pad; second, they need to share these bits securely.
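A few lines of Python make both the mechanism and the drawback concrete (a toy sketch: the message is hypothetical, and XOR over bytes stands in for addition modulo 2):

    import secrets

    message = b"attack at dawn"
    pad = secrets.token_bytes(len(message))    # the key is as long as the message

    cipher = bytes(m ^ k for m, k in zip(message, pad))
    recovered = bytes(c ^ k for c, k in zip(cipher, pad))
    assert recovered == message

    # Reusing the pad leaks information: the XOR of two cryptograms under
    # the same pad equals the XOR of the two messages, pad cancelled out.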
The public key systems built on families of trapdoor functions in Chapter 7 provided an ingenious solution to the problem of sharing a secret key. They also offered a reasonable level of security under various plausible intractability assumptions. However, this security was framed in terms of the difficulty Eve would face in recovering a message from a cryptogram. This is significantly weaker than perfect secrecy. It is extremely easy for Eve to gain some information about the message from the cryptogram in a system such as RSA. For instance if the same message is sent twice then Eve can spot this immediately.
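A textbook RSA sketch shows this concretely (toy parameters, completely insecure; pow(e, -1, phi) requires Python 3.8+): because encryption is deterministic, identical messages yield identical cryptograms, which Eve can match without decrypting anything.

    # Toy RSA with artificially small primes -- for illustration only.
    p, q = 61, 53
    n, phi = p * q, (p - 1) * (q - 1)
    e = 17                          # public exponent, coprime to phi
    d = pow(e, -1, phi)             # private exponent (modular inverse)

    def encrypt(m):
        return pow(m, e, n)

    def decrypt(c):
        return pow(c, d, n)

    m = 42
    c1, c2 = encrypt(m), encrypt(m)
    assert decrypt(c1) == m
    # Determinism: the same message always yields the same cryptogram,
    # so Eve sees immediately that the two transmissions are equal.
    assert c1 == c2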
So saying he procured the plane; and with his old silk handkerchief first dusting the bench, vigorously set to planing away at my bed, the while grinning like an ape. The shavings flew right and left; till at last the plane-iron came bump against an indestructible knot.
Information processing in the nervous system involves the spread and interaction of electrical and chemical signals within and between neurons and glia. From the perspective of the experimentalist working at the level of cells and networks, these signals are continuous variables. They are described by the diffusion equation and the closely related cable equation (Rall 1977; Crank 1979), in which potential (voltage, concentration) and flux (current, movement of solute) are smooth functions of time and space. But everything in a digital computer is inherently discontinuous: memory addresses, data, and instructions are all specified in terms of finite sequences of 0s and 1s, and there are finite limits on the precision with which numbers can be represented. Thus there is no direct parallel between the continuous world of biology and what exists in digital computers, so special effort is required to implement digital computer models of biological neural systems. The aim of this chapter is to show how the NEURON simulation environment makes it easier to bridge this gap.
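As a minimal illustration of that special effort (a sketch under simplifying assumptions, not NEURON's actual integration method), the fragment below discretizes the dimensionless passive cable equation ∂V/∂T = ∂²V/∂X² − V on a grid with explicit Euler time steps, turning smooth functions of time and space into finite arrays of numbers:

    nx, dx, dt = 51, 0.1, 0.004          # grid spacing and step; dt < dx*dx/2 for stability
    v = [0.0] * nx
    v[nx // 2] = 1.0                     # initial charge at the middle of the cable

    for step in range(250):              # advance to T = 1
        vxx = [0.0] * nx
        for i in range(1, nx - 1):       # second difference approximates d2V/dX2
            vxx[i] = (v[i - 1] - 2.0 * v[i] + v[i + 1]) / (dx * dx)
        vxx[0] = 2.0 * (v[1] - v[0]) / (dx * dx)         # sealed (zero-flux) ends
        vxx[-1] = 2.0 * (v[-2] - v[-1]) / (dx * dx)
        v = [v[i] + dt * (vxx[i] - v[i]) for i in range(nx)]

    print(max(v))                        # the initial peak has decayed and spread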
But this critical act is not always unattended with the saddest and most fatal casualties.
Computational neuronal modeling usually focuses on voltage and current in excitable cells, but it is often necessary to represent other processes such as chemical reactions, diffusion, and the behavior of electronic instrumentation. These phenomena seem quite different from each other, and each has evolved its own distinct “notational shorthand.” As these specialized notations have particular advantages for addressing domain-specific problems, NEURON has provisions that allow users to employ each of them as appropriate (see Chapter 9). Apparent differences notwithstanding, there are fundamental parallels among these notations that can be exploited at the computational level: all are equivalent to sets of algebraic and differential equations. In this chapter, we will explore these parallels by examining the mathematical representations of chemical reactions, electrical circuits, and cables.
Chemical reactions
A natural first step in thinking about voltage-dependent or ligand-gated channel models, or elaborate cartoons of dynamic processes, is to express them with chemical reaction notation, i.e. kinetic schemes (Fig. 3.1). Kinetic schemes focus attention on conservation of material (in a closed set of reactions, material is neither created nor destroyed) and on the flow of material from one state to another.
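A minimal sketch (with hypothetical rate constants) shows what this equivalence looks like computationally: the two-state scheme A ⇌ B is just the pair of differential equations dA/dt = −kf·A + kb·B = −dB/dt, and the total amount of material stays constant throughout the integration.

    kf, kb = 2.0, 1.0                # hypothetical forward/backward rate constants
    a, b = 1.0, 0.0                  # initial amounts in states A and B
    dt = 0.001

    for step in range(5000):         # integrate to t = 5 with explicit Euler
        flux = kf * a - kb * b       # net flow of material from A to B
        a -= dt * flux
        b += dt * flux

    print(a, b, a + b)               # approaches kb/(kf+kb), kf/(kf+kb); total conserved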
I promise nothing complete; because any human thing supposed to be complete, must for that very reason infallibly be faulty.
Who should read this book?
This book is about how to use the NEURON simulation environment to construct and apply empirically based models of neurons and neural networks. It is written primarily for neuroscience investigators, teachers, and students, but readers with a background in the physical sciences or mathematics who have some knowledge about brain cells and circuits and are interested in computational modeling will also find it helpful. The emphasis is on the most productive use of NEURON as a means for testing hypotheses that are founded on experimental observations, and for exploring ideas that may lead to the design of new experiments. Therefore the book uses a problem-solving approach, with many working examples that readers can try for themselves.
What this book is, and is not, about
Formulating a conceptual model is an attempt to capture the essential features that underlie some particular function. This necessarily involves simplification and abstraction of real-world complexities. Even so, one may not necessarily understand all the implications of the conceptual model. To evaluate a conceptual model, it is often necessary to devise a test in which the behavior of the model is compared against a prediction.