Intuition tells us that steganographic capacity should perhaps be defined as the largest payload that Alice can embed in her cover image using a specific embedding method without introducing artifacts detectable by Eve. After all, knowledge of this secure payload appears to be fundamental for the prisoners to maintain the security of communication. Unfortunately, determining the secure payload for digital images is very difficult even for the simplest steganographic methods, such as LSB embedding. The reason is the lack of accurate statistical models for real images. Moreover, it is even a valid question whether capacity can be meaningfully defined for an individual image and a specific steganographic method. Indeed, capacity of noisy communication channels depends only on the channel and not on any specific communication scheme.
This chapter has two sections, each devoted to a different capacity concept. In Section 13.1, we study the steganographic capacity of perfectly secure stegosystems. Here, we are interested in the maximal relative payload (or rate) that can be securely embedded in the limit as the number of pixels in the image approaches infinity. Capacity defined in this way is a function only of the physical communication channel and the cover source rather than the steganographic scheme itself. It is the maximal relative payload that Alice can communicate if she uses the best possible stegosystem. The significant advantage of this definition is that we can leverage powerful tools and constructions previously developed for the study of robust watermarking systems.
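To fix the notion of relative payload used below, the following minimal Python/NumPy sketch of LSB replacement may help; the function lsb_embed, the random cover, and the message length are assumptions made purely for illustration, and nothing in the sketch addresses whether the chosen payload would be secure against Eve.

```python
import numpy as np

def lsb_embed(cover, bits):
    """LSB replacement: write one message bit into the least significant bit
    of each of the first len(bits) pixels of the cover (scanned row by row)."""
    stego = cover.copy().ravel()
    bits = np.asarray(bits, dtype=np.uint8)
    stego[:bits.size] = (stego[:bits.size] & np.uint8(0xFE)) | bits
    return stego.reshape(cover.shape)

# Relative payload: message bits per cover pixel (here 1024 / 4096 = 0.25 bpp).
cover = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
message = np.random.randint(0, 2, size=1024)
stego = lsb_embed(cover, message)
print(message.size / cover.size, np.mean(stego != cover))  # payload, fraction of changed pixels
```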
Presenting a thorough overview of the theoretical foundations of non-parametric system identification for nonlinear block-oriented systems, this book shows that non-parametric regression can be successfully applied to system identification, and it highlights the achievements in doing so. With emphasis on Hammerstein and Wiener systems and their multidimensional extensions, the authors show how to identify nonlinear subsystems and their characteristics when limited information exists. Algorithms using trigonometric, Legendre, Laguerre, and Hermite series are investigated, and the kernel algorithm, its semirecursive versions, and fully recursive modifications are covered. The theories of modern non-parametric regression, approximation, and orthogonal expansions, along with new approaches to system identification (including semiparametric identification), are presented. Detailed information about all the tools used is given in the appendices. This book is for researchers and practitioners in systems theory, signal processing, and communications, and will appeal to researchers in fields such as mechanics, economics, and biology, where experimental data are used to obtain models of systems.
Stochastic resonance has been observed in many forms of systems and has been hotly debated by scientists for over 30 years. Applications incorporating aspects of stochastic resonance may yet prove revolutionary in fields such as distributed sensor networks, nano-electronics, and biomedical prosthetics. Ideal for researchers in fields ranging from computational neuroscience to electronic engineering, this book addresses in detail various theoretical aspects of stochastic quantization in the context of the suprathreshold stochastic resonance effect. Initial chapters review stochastic resonance and outline some of the controversies and debates that have surrounded it. The book then discusses suprathreshold stochastic resonance and its extension to more general models of stochastic signal quantization. Finally, it considers various constraints and tradeoffs in the performance of stochastic quantizers, before culminating with a chapter on the application of suprathreshold stochastic resonance to the design of cochlear implants.
In Chapter 1 we briefly introduced an information channel as a model of a communication link or a related system where the input is a message and the output is an imperfect reproduction of it. In particular, we also use this concept as a model of a storage system, where input and output are separated in time rather than in space. In our presentation we do not refer to the underlying physical medium or discuss whether it is fundamentally continuous or quantized. The process of transmitting and receiving (writing and reading) is assumed to use finite alphabets, which may well be different, and it is understood that these alphabets represent a digital implementation of processes that make efficient use of the physical medium under the current technological and economic constraints. In this chapter we introduce the fundamentally important concept of channel capacity. It is defined in a straightforward way as the maximum of mutual information; however, its significance becomes clear only when we show that this is indeed the amount of information that can be reliably transmitted through the channel. Reliable communication at rates approaching capacity requires the use of coding. For this reason we have chosen to present the basic concepts of channel coding in the same chapter and to emphasize the relation between codes and the information-theoretic quantities. In reality the codes that are used are matched to a few special channels, and other real channels are converted to or approximated by one of these types.
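To make the definition of capacity as a maximum of mutual information concrete, the following is a small illustrative Python/NumPy sketch that estimates the capacity of a discrete memoryless channel by the standard Blahut-Arimoto iteration; the function name channel_capacity and the binary symmetric channel chosen as the test case are assumptions of the example rather than constructions taken from this chapter.

```python
import numpy as np

def channel_capacity(W, iters=1000):
    """Blahut-Arimoto estimate of the capacity (bits per channel use) of a
    discrete memoryless channel with transition matrix W[x, y] = P(y | x)."""
    eps = 1e-300                                  # guards logs/divisions at zero entries
    n_in = W.shape[0]
    r = np.full(n_in, 1.0 / n_in)                 # input distribution, start uniform
    for _ in range(iters):
        joint = r[:, None] * W                    # P(x, y)
        post = joint / (joint.sum(axis=0) + eps)  # P(x | y)
        r = np.exp((W * np.log(post + eps)).sum(axis=1))
        r /= r.sum()                              # re-normalize the input distribution
    p_y = r @ W                                   # output distribution P(y)
    return float((r[:, None] * W * np.log2((W + eps) / (p_y + eps))).sum())

# Binary symmetric channel with crossover probability 0.1:
# the closed form is 1 - H2(0.1), roughly 0.531 bits per channel use.
bsc = np.array([[0.9, 0.1],
                [0.1, 0.9]])
print(channel_capacity(bsc))
```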
Typical data sources have complex structure, or we say that they exhibit memory. In this chapter we study some of the basic tools for describing sources with memory, and we extend the concept of entropy from the memoryless case discussed in Chapter 1.
Initially we describe the sources in terms of vectors or patterns that occur. Since the number of messages possible under a set of constraints is often much smaller than the total number of symbol combinations, the amount of information is significantly reduced. This point of view is reflected in the notion of combinatorial entropy. In addition to the structural constraints the sources can be characterized by probability distributions, and the probabilistic definition of entropy is extended to sources with memory.
We are particularly interested in models of two-dimensional (2-D) data, and some of the methods commonly used for one-dimensional (1-D) sources can be generalized to this case. However, the analysis of 2-D fields is in general much more complex. Information theory is relevant for understanding the possibilities and limitations of many aspects of 2-D media, but many problems are intractable or even not computable.
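As an illustration of combinatorial entropy (a sketch constructed for this discussion, not an example from the text), the following Python snippet counts the binary strings of length n that contain no two adjacent 1s, a simple runlength constraint, and reports (1/n) log2 of that count.

```python
import math

def count_no_adjacent_ones(n):
    """Number of binary strings of length n with no two adjacent 1s;
    the counts follow the Fibonacci recursion (2, 3, 5, 8, ... for n = 1, 2, 3, 4)."""
    a, b = 1, 2                      # counts for lengths 0 and 1
    for _ in range(n - 1):
        a, b = b, a + b
    return b if n >= 1 else a

# Combinatorial entropy per symbol: (1/n) log2(number of admissible messages).
# As n grows this approaches log2((1 + sqrt(5)) / 2), about 0.694 bits/symbol,
# well below the 1 bit/symbol of unconstrained binary strings.
n = 60
print(math.log2(count_no_adjacent_ones(n)) / n)
```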
Finite-state sources
The source memory is described by distinguishing several states that summarize the influence of the past. We consider only the cases in which a finite number of states is sufficient.
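The probabilistic counterpart can be illustrated with a two-state Markov chain; the sketch below, with the assumed function name entropy_rate and an arbitrarily chosen transition matrix, computes the entropy rate as the expected per-symbol entropy under the stationary distribution.

```python
import numpy as np

def entropy_rate(P):
    """Entropy rate (bits per symbol) of a stationary finite-state Markov source
    with transition matrix P[i, j] = Prob(next state j | current state i)."""
    evals, evecs = np.linalg.eig(P.T)                 # stationary pi solves pi P = pi
    pi = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
    pi = pi / pi.sum()
    logP = np.log2(np.where(P > 0, P, 1.0))           # log2(1) = 0 where P is zero
    return float(-np.sum(pi[:, None] * P * logP))

# Two states: state 0 may emit 0 or 1, state 1 (just emitted a 1) must emit 0,
# so no two 1s are ever adjacent; here the 0/1 choice in state 0 is fair.
P = np.array([[0.5, 0.5],
              [1.0, 0.0]])
print(entropy_rate(P))   # about 0.667 bits/symbol
```

With the fair coin used in state 0, the rate of about 0.667 bits/symbol falls below the combinatorial value of about 0.694 bits/symbol obtained above; the latter is attained only by the maxentropic choice of transition probabilities for this constraint.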
In this chapter we present some of the basic concepts of information theory. The situations we have in mind involve the exchange of information through transmission and storage media designed for that purpose. The information is represented in digital formats using symbols and alphabets taken in a very broad sense. The deliberate choice of the way information is represented, often referred to as coding, is an essential aspect of the theory, and for many results it will be assumed that the user is free to choose a suitable code.
We present the classical results of information theory, which were originally developed as a model of communication as it takes place over a telephone circuit or a similar connection. However, we shall pay particular attention to two-dimensional (2-D) applications, and many examples are chosen from these areas. A huge amount of information is exchanged in formats that are essentially 2-D, namely web pages, graphic material, etc. Such forms of communication typically have an extremely complex structure. The term media is often used to indicate the structure of the message as well as the surrounding organization. Information theory is relevant for understanding the possibilities and limitations of many aspects of 2-D media, but one should not expect to be able to model and analyze all aspects within a single approach.
Entropy of discrete sources
A discrete information source is a device or process that outputs symbols at discrete instants from some finite alphabet A = {x1, x2, …, xr}.
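As a simple illustration (a sketch with an arbitrarily chosen distribution, not an example from the text), the Python snippet below evaluates the entropy H(X) = −Σ pi log2 pi of such a source when the symbol probabilities are known.

```python
import math

def entropy(probs):
    """Entropy H(X) = -sum_i p_i log2 p_i (bits/symbol) of a memoryless source
    whose symbols have probabilities probs (assumed to sum to 1)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A four-symbol alphabet with probabilities 1/2, 1/4, 1/8, 1/8:
print(entropy([0.5, 0.25, 0.125, 0.125]))   # 1.75 bits/symbol
```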
This Appendix contains the following graph tables:
A1. The spectra and characteristic polynomials of the adjacency matrix, Seidel matrix, Laplacian and signless Laplacian for connected graphs with at most 5 vertices;
A2. The eigenvalues, angles and main angles of connected graphs with 2 to 5 vertices;
A3. The spectra and characteristic polynomials of the adjacency matrix for connected graphs with 6 vertices;
A4. The spectra and characteristic polynomials of the adjacency matrix for trees with at most 9 vertices;
A5. The spectra and characteristic polynomials of the adjacency matrix for cubic graphs with at most 12 vertices.
In Tables A1 and A2, the graphs are given in the same order as in Table 1 in the Appendix of [CvDSa]. In Table A1, the spectra and the coefficients of the characteristic polynomials with respect to the adjacency matrix, Laplacian, signless Laplacian and Seidel matrix appear in consecutive lines. Table A2, which is taken from [CvPe2], was also published in [CvRS3]. This table contains, for each graph, the eigenvalues (first line), the main angles (second line) and the vertex angle sequences, with vertices labelled as in the diagrams alongside. Vertices of graphs in Table A2 are ordered in such a way that the corresponding vertex angle sequences are in lexicographical order. Since similar vertices have the same angle sequence, just one sequence is given for each orbit.
In Chapters 3 and 4 we have concentrated on the relation between the structure and spectrum of a graph. Here we discuss the connection between structure and a single eigenvalue, and for this the central notion is that of a star complement. In Section 5.1 we define star complements both geometrically and algebraically, and note their basic properties. In Section 5.2 we illustrate a technique for constructing and characterizing graphs by star complements. In Section 5.3 we use star complements to obtain sharp upper bounds on the multiplicity of an eigenvalue different from −1 or 0 in an arbitrary graph, and in a regular graph. In Section 5.4 we describe how star complements can be used to determine the graphs with least eigenvalue −2, and in Section 5.5 we investigate the role of certain star complements in generalized line graphs.
Star complements
Let G be a graph with vertex set V(G) = {1, …, n} and adjacency matrix A, and let μ be an eigenvalue of A. Let {e1, …, en} be the standard orthonormal basis of ℝn and let P be the matrix which represents the orthogonal projection of ℝn onto the eigenspace ε(μ) of A with respect to {e1, …, en}. Since ε(μ) is spanned by the vectors P ej (j = 1, …, n), there exists X ⊆ V(G) such that the vectors P ej (j ∈ X) form a basis for ε(μ). Such a subset X of V(G) is called a star set for μ in G.
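The definition lends itself to direct computation; the following Python/NumPy sketch (the function name star_set and the choice of the complete graph K4 are assumptions of the example, not constructions from the text) forms the orthogonal projection onto the eigenspace of μ and searches for a set X of vertices whose projected basis vectors P ej are linearly independent.

```python
import itertools
import numpy as np

def star_set(A, mu, tol=1e-8):
    """Search for a star set X for eigenvalue mu of the adjacency matrix A:
    a set of k = multiplicity(mu) vertices whose projections P e_j (j in X)
    form a basis of the eigenspace of mu."""
    n = A.shape[0]
    evals, evecs = np.linalg.eigh(A)
    U = evecs[:, np.abs(evals - mu) < tol]      # orthonormal basis of the eigenspace
    k = U.shape[1]                              # multiplicity of mu
    P = U @ U.T                                 # orthogonal projection onto the eigenspace
    for X in itertools.combinations(range(n), k):
        if np.linalg.matrix_rank(P[:, list(X)], tol=tol) == k:
            return list(X)                      # the vectors P e_j (j in X) are independent
    return None

# Complete graph K4: the eigenvalue -1 has multiplicity 3, so a star set for -1
# consists of any 3 of the 4 vertices and the star complement is a single vertex.
A = np.ones((4, 4)) - np.eye(4)
print(star_set(A, -1))                          # e.g. [0, 1, 2]
```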