The main goal of steganography is to communicate secret messages without making it apparent that a secret is being communicated. This can be achieved by hiding messages in ordinary-looking objects, which are then sent in an overt manner through some communication channel. In this chapter, we look at the individual elements that define steganographic communication.
Before Alice and Bob can start communicating secretly, they must agree on some basic communication protocol they will follow in the future. First, they need to select the type of cover objects they will use for sending secrets. Second, they need to design the message-hiding and message-extraction algorithms. For increased security, the prisoners should make both algorithms dependent on a secret key so that no one else besides them will be able to read their messages. Besides the type of covers and the inner workings of the steganographic algorithm, Eve's ability to detect that the prisoners are communicating secretly will also depend on the size of the messages that Alice and Bob will communicate. Finally, the prisoners will send their messages through a channel that is under the control of the warden, who may or may not interfere with the communication.
We recognize the following five basic elements of every steganographic channel (see Figure 4.1):
• Source of covers,
• Data-embedding and -extraction algorithms,
• Source of stego keys driving the embedding/extraction algorithms,
• Source of messages,
• Channel used to exchange data between Alice and Bob.
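The interplay of these elements, in particular a stego key driving both embedding and extraction, can be illustrated with a toy LSB sketch. This is purely illustrative: the pixel values, the key, and the function names are invented here, and a real stegosystem would need far more care to resist detection.

```python
import random

def embed_lsb(cover, bits, key):
    """Embed message bits in the LSBs of pseudo-randomly selected pixels."""
    stego = list(cover)
    rng = random.Random(key)                  # the stego key seeds the pixel path
    path = rng.sample(range(len(cover)), len(bits))
    for pos, bit in zip(path, bits):
        stego[pos] = (stego[pos] & ~1) | bit  # overwrite the least-significant bit
    return stego

def extract_lsb(stego, nbits, key):
    """The same key regenerates the same pseudo-random path, recovering the bits."""
    rng = random.Random(key)
    path = rng.sample(range(len(stego)), nbits)
    return [stego[pos] & 1 for pos in path]

cover = [137, 200, 54, 91, 18, 245, 76, 33]   # toy 8-pixel "image"
message = [1, 0, 1, 1]
stego = embed_lsb(cover, message, key=42)
assert extract_lsb(stego, len(message), key=42) == message
```

Without the key, an observer does not know which pixels carry payload; with it, extraction is deterministic.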
Digital images are commonly represented in four basic formats – raster, palette, transform, and vector. Each representation has its advantages and is suitable for certain types of visual information. Likewise, when Alice and Bob design their steganographic method, they need to consider the unique properties of each individual format. This chapter explains how visual data is represented and stored in several common image formats, including raster and palette formats, and the most popular format in use today, the JPEG. The material included in this chapter was chosen for its relevance to applications in steganography and is thus necessarily somewhat limited. The topics covered here form the minimal knowledge base the reader needs to become familiar with. Those with sufficient background may skip this chapter entirely and return to it later on an as-needed basis. An excellent and detailed exposition of the theory of color models and their properties can be found in [74]. A comprehensive description of image formats appears in [32].
In Section 2.1, the reader is first introduced to the basic concept of color as perceived by humans and then learns how to represent color quantitatively using several different color models. Section 2.2 provides details of the processing needed to represent a natural image in the raster (BMP, TIFF) and palette formats (GIF, PNG). Section 2.3 is devoted to the popular transform-domain format JPEG, which is the most common representation of natural images today. For all three formats, the reader is instructed how to work with such images in Matlab.
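As a small taste of the color-model material, the sketch below implements the standard RGB-to-YCbCr transform (the ITU-R BT.601 coefficients used in the JFIF/JPEG pipeline), which separates luminance from chrominance before compression. The function name and sample pixel are our own choices for illustration.

```python
def rgb_to_ycbcr(r, g, b):
    """BT.601 color transform applied when preparing an image for JPEG."""
    y  =  0.299    * r + 0.587    * g + 0.114    * b        # luminance
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128  # blue chrominance
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128  # red chrominance
    return y, cb, cr

# A pure-gray pixel carries no color information:
# its chrominance sits at the neutral offset value 128.
y, cb, cr = rgb_to_ycbcr(100, 100, 100)
```

Because the human eye is less sensitive to chrominance than to luminance, JPEG typically subsamples the Cb and Cr channels, one of the format properties a steganographer must take into account.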
Intuition tells us that steganographic capacity should perhaps be defined as the largest payload that Alice can embed in her cover image using a specific embedding method without introducing artifacts detectable by Eve. After all, knowledge of this secure payload appears to be fundamental for the prisoners to maintain the security of communication. Unfortunately, determining the secure payload for digital images is very difficult even for the simplest steganographic methods, such as LSB embedding. The reason is the lack of accurate statistical models for real images. Moreover, it is even a valid question whether capacity can be meaningfully defined for an individual image and a specific steganographic method. Indeed, capacity of noisy communication channels depends only on the channel and not on any specific communication scheme.
This chapter has two sections, each devoted to a different capacity concept. In Section 13.1, we study the steganographic capacity of perfectly secure stegosystems. Here, we are interested in the maximal relative payload (or rate) that can be securely embedded in the limit as the number of pixels in the image approaches infinity. Capacity defined in this way is a function of only the physical communication channel and the cover source rather than the steganographic scheme itself. It is the maximal relative payload that Alice can communicate if she uses the best possible stegosystem. The significant advantage of this definition is that we can leverage powerful tools and constructions previously developed for the study of robust watermarking systems.
Presenting a thorough overview of the theoretical foundations of non-parametric system identification for nonlinear block-oriented systems, this book shows that non-parametric regression can be successfully applied to system identification, and it highlights the achievements in doing so. With emphasis on Hammerstein, Wiener systems, and their multidimensional extensions, the authors show how to identify nonlinear subsystems and their characteristics when limited information exists. Algorithms using trigonometric, Legendre, Laguerre, and Hermite series are investigated, and the kernel algorithm, its semirecursive versions, and fully recursive modifications are covered. The theories of modern non-parametric regression, approximation, and orthogonal expansions, along with new approaches to system identification (including semiparametric identification), are provided. Detailed information about all tools used is provided in the appendices. This book is for researchers and practitioners in systems theory, signal processing, and communications and will appeal to researchers in fields like mechanics, economics, and biology, where experimental data are used to obtain models of systems.
Stochastic resonance has been observed in many kinds of systems, and has been hotly debated by scientists for over 30 years. Applications incorporating aspects of stochastic resonance may yet prove revolutionary in fields such as distributed sensor networks, nano-electronics, and biomedical prosthetics. Ideal for researchers in fields ranging from computational neuroscience through to electronic engineering, this book addresses in detail various theoretical aspects of stochastic quantization, in the context of the suprathreshold stochastic resonance effect. Initial chapters review stochastic resonance and outline some of the controversies and debates that have surrounded it. The book then discusses suprathreshold stochastic resonance, and its extension to more general models of stochastic signal quantization. Finally, it considers various constraints and tradeoffs in the performance of stochastic quantizers, before culminating with a chapter on the application of suprathreshold stochastic resonance to the design of cochlear implants.
In Chapter 1 we briefly introduced an information channel as a model of a communication link or a related system where the input is a message and the output is an imperfect reproduction of it. In particular we also use this concept as a model of a storage system where input and output are separated in time rather than in space. In our presentation we do not refer to the underlying physical medium or discuss whether it is fundamentally continuous or quantized. The process of transmitting and receiving (writing and reading) is assumed to use finite alphabets, which may well be different, and it is understood that these alphabets represent a digital implementation of processes that make efficient use of the physical medium under the current technological and economic constraints. In this chapter we introduce the fundamentally important concept of channel capacity. It is defined in a straightforward way as the maximum of mutual information; however, the significance becomes clear only as we show how this is actually the amount of information that can be reliably transmitted through the channel. Reliable communication at rates approaching capacity requires the use of coding. For this reason we have chosen to present the basic concepts of channel coding in the same chapter and to emphasize the relation between codes and the information-theoretic quantities. In reality the codes that are used are matched to a few special channels, and other real channels are converted to or approximated by one of these types.
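The definition of capacity as the maximum of mutual information can be made concrete with the textbook case of the binary symmetric channel, whose capacity C = 1 - H(p) is achieved by a uniform input distribution. The sketch below is a minimal illustration; the function names are ours.

```python
from math import log2

def h2(p):
    """Binary entropy function H(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def bsc_capacity(p):
    """Capacity of the binary symmetric channel with crossover probability p,
    C = 1 - H(p), in bits per channel use."""
    return 1.0 - h2(p)

# A noiseless channel carries one full bit per use;
# at p = 0.5 the output is independent of the input and nothing gets through.
assert bsc_capacity(0.0) == 1.0
assert bsc_capacity(0.5) == 0.0
```

Shannon's channel coding theorem then says that rates below bsc_capacity(p) are achievable with arbitrarily small error probability, which is exactly why coding is introduced alongside capacity in this chapter.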
Typical data sources have complex structure, or we say that they exhibit memory. In this chapter we study some of the basic tools for describing sources with memory, and we extend the concept of entropy from the memoryless case discussed in Chapter 1.
Initially, we describe the sources in terms of the vectors or patterns that occur. Since the number of messages possible under a set of constraints is often much smaller than the total number of symbol combinations, the amount of information is significantly reduced. This point of view is reflected in the notion of combinatorial entropy. In addition to the structural constraints, the sources can be characterized by probability distributions, and the probabilistic definition of entropy is extended to sources with memory.
We are particularly interested in models of two-dimensional (2-D) data, and some of the methods commonly used for one-dimensional (1-D) sources can be generalized to this case. However, the analysis of 2-D fields is in general much more complex. Information theory is relevant for understanding the possibilities and limitations of many aspects of 2-D media, but many problems are either intractable or even not computable.
Finite-state sources
The source memory is described by distinguishing several states that summarize the influence of the past. We consider only the cases in which a finite number of states is sufficient.
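For a stationary finite-state source, the entropy rate is the state-occupancy-weighted average of the per-state entropies. The two-state sketch below (parameter values chosen arbitrarily for illustration) computes this and shows that memory lowers the entropy below that of a memoryless source with the same symbol frequencies.

```python
from math import log2

def h2(p):
    """Binary entropy function H(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

# Two-state source: in state 0 emit symbol 0 and move to state 1 with prob. a;
# in state 1 emit symbol 1 and move back to state 0 with prob. b.
a, b = 0.1, 0.3
pi0 = b / (a + b)                          # stationary state probabilities
pi1 = a / (a + b)
entropy_rate = pi0 * h2(a) + pi1 * h2(b)   # bits per symbol

# A memoryless source with the same symbol frequencies has entropy h2(pi1);
# the state structure makes the source more predictable than that.
memoryless = h2(pi1)
```

Here entropy_rate is about 0.57 bits/symbol versus about 0.81 bits/symbol for the memoryless comparison, quantifying how much the finite-state memory constrains the source.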