This concise overview of digital signal generation will introduce you to powerful, flexible and practical digital waveform generation techniques. These techniques, based on phase accumulation and phase–amplitude mapping, will enable you to generate sinusoidal and arbitrary real-time digital waveforms to fit your desired waveshape, frequency, phase offset and amplitude, and to design bespoke digital waveform generation systems from scratch. The book includes a review of key definitions, a brief explanatory introduction to classical analogue waveform generation and its basic conceptual and mathematical foundations, coverage of recursion, DDS, the IDFT and dynamic waveshape and spectrum control, and a chapter dedicated to detailed examples of hardware design. Accompanied by downloadable Mathcad models created to help you explore 'what if?' design scenarios, it is essential reading for practitioners in the digital signal processing community, and for students who want to understand and apply digital waveform synthesis techniques.
Most electronic design engineers, whether of the ‘analogue’ or ‘digital’ variety, are occasionally faced with the task of designing an oscillatory signal generator with particular implementation constraints, control and performance requirements. These requirements might include extremely low distortion, an unusual ‘application specific’ waveshape, a wide frequency tuning range, low temperature drift, and so on. Historically, such a task would have been tackled with a wholly analogue design, possibly augmented by digital control, and extremely high levels of performance are evident in some cases. Taking high-end audio test instrumentation as an example, the now legendary Hewlett Packard HP8903B and Audio Precision AP1 audio test sets both use digitally controlled analogue state variable oscillators to generate extremely low distortion sine waves. The state variable analogue oscillator is effectively an analogue computer model designed to compute solutions of a second-order differential equation. A specific class of solution (under certain parametric conditions) is a continuous sinusoidal oscillation. These generators are outstanding examples of what can be achieved with innovative analogue design. However, the world is becoming increasingly digital and very high levels of digital processing power can be implemented at relatively low cost. Various ‘all digital’ waveform generation techniques are therefore now practicable; and when all of their advantages are weighed against the disadvantages (digital processing is not necessarily a panacea that guarantees ideal performance), they nearly always represent the best solution. This approach is reinforced, if not driven, by the ever-improving performance of commercial digital-to-analogue convertor (DAC) devices as measured by their spurious-free dynamic range (i.e. distortion) and bandwidth.
It is not unreasonable to state that the integrated DAC is the foremost enabling technology for nearly all applications of digital waveform generation. Exceptions to this observation apply to purely digital signals, which exist as a discrete-time sequence of binary numbers representing the signal waveform samples.
We begin this chapter by reviewing some important mathematical principles that underpin digital waveform generation. We then proceed to introduce and develop a concept which is central to this book – sampling a tabulated signal – and an associated concept – the wavetable. After introducing the wavetable as a fundamental building block, we consider several methods for specifying an arbitrary waveform function that is tabulated within it.
Section 2.4 introduces phase accumulation frequency synthesis and phase–amplitude mapping based upon wavetable lookup as the foundations of what we call generalised direct digital synthesis (DDS). This section also outlines some important error mechanisms that are fundamental to the technique, and whose mitigation is the topic of later chapters.
We conclude this chapter by reviewing the principal control parameters of a digital waveform generation system against their ideal characteristics. Finally, we define some qualitative performance metrics that we use in later chapters to investigate the effects of design and control parameter changes using computational simulation of a mathematical model. These metrics are also used to compare different waveform generation algorithms under identical control parameter conditions.
Mathematical preliminaries
In this section we briefly review some important mathematical concepts which underpin the generation of electronic signals by digital means, particularly those based upon phase accumulation and phase–amplitude mapping (i.e. DDS). Our objective is to provide a sufficiently detailed review to enable an understanding of the concepts presented in later chapters. We begin with a review of continuous and discrete-time signals.
In this chapter we investigate DDS sine wave generation as an introduction to a general discussion of DDS arbitrary waveform generation in Chapter 5. We begin by reviewing phase accumulation frequency synthesis, discuss considerations for demonstrating sinusoidal DDS behaviour through computer simulation and finally review several important sinusoidal phase–amplitude mapping techniques. Relative performance is illustrated using simulated SNR, SFDR and amplitude spectra as a function of key control and design parameters. We focus on phase truncated wavetable indexing and introduce linear phase interpolation as an error reduction mechanism that gives near-optimal performance in most practicable applications (i.e. SNR and SFDR comparable to or better than amplitude quantisation noise).
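The phase accumulation and phase-truncated wavetable lookup described above can be sketched in a few lines. This is an illustrative model only (function and parameter names are our own, not from the text): an `acc_bits`-wide accumulator advances by a fixed phase increment each sample, and its most significant `table_bits` bits index a one-cycle sine wavetable.

```python
import math

def dds_sine(phase_increment, n_samples, acc_bits=32, table_bits=10):
    """Sketch of sinusoidal DDS: an acc_bits phase accumulator whose
    top table_bits index a one-cycle sine wavetable (phase truncation)."""
    table_len = 1 << table_bits
    wavetable = [math.sin(2 * math.pi * k / table_len) for k in range(table_len)]
    modulus = 1 << acc_bits
    phase = 0
    out = []
    for _ in range(n_samples):
        index = phase >> (acc_bits - table_bits)   # discard the low (fractional) bits
        out.append(wavetable[index])
        phase = (phase + phase_increment) % modulus
    return out

# Output frequency is f_out = phase_increment * f_s / 2**acc_bits.
# For example, approximately 1 kHz at f_s = 48 kHz:
fs = 48000.0
phi = round(1000.0 / fs * 2**32)
samples = dds_sine(phi, 48)
```

The truncated (discarded) phase bits are the source of the phase truncation error discussed in the text; increasing `table_bits` or interpolating between table entries reduces it.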
Optimal sinusoidal phase–amplitude mapping with practicable wavetable lengths is accomplished with a technique that we call trigonometric identity phase interpolation. This technique uses the trigonometric angle summation identity to compute a phase–amplitude mapping whose SNR and SFDR are bound only by quantisation noise. Although computationally more costly than linear interpolation, this technique is easily adapted to generate exactly quadrature sinusoids with optimal SNR and SFDR. A reduced multiplication implementation is also possible that trades multiplication for addition operations and is presented in Chapter 8. The principal utility of this technique is in applications which require optimal SNR and SFDR performance simultaneous with phase offset control precision bounded by the phase accumulator resolution.
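As a rough illustration of the angle-summation idea (names and table partitioning are our assumptions, not the book's implementation), the phase can be split into a coarse part A and a fine residual B, with sin(A + B) = sin A cos B + cos A sin B computed exactly from four small tables instead of one impractically long one:

```python
import math

def trig_identity_sine(phase_frac, coarse_bits=8, fine_bits=8):
    """Sketch of trigonometric-identity phase-amplitude mapping.
    phase_frac in [0, 1) is split into a coarse index A and a fine
    residual B; sin(A + B) = sin(A)cos(B) + cos(A)sin(B) is then formed
    from four small lookup tables and two multiplications."""
    coarse_len = 1 << coarse_bits
    fine_len = 1 << fine_bits
    # Coarse tables cover one full cycle; fine tables cover one coarse step.
    sinA = [math.sin(2 * math.pi * k / coarse_len) for k in range(coarse_len)]
    cosA = [math.cos(2 * math.pi * k / coarse_len) for k in range(coarse_len)]
    sinB = [math.sin(2 * math.pi * k / (coarse_len * fine_len)) for k in range(fine_len)]
    cosB = [math.cos(2 * math.pi * k / (coarse_len * fine_len)) for k in range(fine_len)]
    total = coarse_len * fine_len
    idx = int(phase_frac * total) % total
    a, b = idx >> fine_bits, idx & (fine_len - 1)
    return sinA[a] * cosB[b] + cosA[a] * sinB[b]
```

Because the identity is exact, the mapping error is bounded only by table quantisation, and swapping the sine and cosine coarse tables yields the exact-quadrature output mentioned above.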
This chapter investigates sinusoidal oscillators based upon recursive algorithms. Recursive oscillators are essentially discrete-time simulations of physical (e.g. mass–spring) oscillatory systems having simple harmonic motion with zero damping as their solution. Accordingly, this type of oscillating system can only produce sinusoidal waveforms. The principal advantage of all recursive oscillators is their computational simplicity, which enables low cost implementation. However, there are also several distinct shortcomings whose importance depends upon the application, for example non-linear frequency control, oscillation amplitude instability and arithmetic round-off noise growth over time.
There are many recursive oscillator algorithms reported in the literature, each with its own advantages and disadvantages. It is also evident that there is no single oscillator algorithm that is optimal and satisfies all requirements. As fundamentally closed-loop systems, all recursive oscillators are bound by the discrete-time Barkhausen criteria that must be satisfied to ensure sustained, stable oscillation. The classical continuous-time Barkhausen criteria require that the total loop gain of an oscillating system be exactly unity and the total loop phase shift be an integer multiple of 2π radians. In Section 3.1 we summarise the discrete-time form where we generalise the recursive oscillator difference equations using a matrix representation, as reported by [1]. In some recursive algorithms, quantised data representation and arithmetic rounding errors often lead to violation of these criteria, causing oscillation amplitude instability over time.
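The best-known example of this family is the direct-form ("biquad") recursion, shown below as a minimal sketch (one of many published variants, not a specific algorithm from the text). One multiplication and one subtraction per sample produce y[n] = sin(nω), with amplitude and starting phase set entirely by the initial conditions:

```python
import math

def biquad_oscillator(freq, fs, n_samples, amplitude=1.0):
    """Sketch of the direct-form recursive sine oscillator
    y[n] = 2*cos(w)*y[n-1] - y[n-2], with w = 2*pi*freq/fs.
    The initial conditions y[-1], y[-2] fix amplitude and phase."""
    w = 2 * math.pi * freq / fs
    k = 2 * math.cos(w)          # the single frequency-setting coefficient
    y1 = amplitude * math.sin(-w)       # y[-1]
    y2 = amplitude * math.sin(-2 * w)   # y[-2]
    out = []
    for _ in range(n_samples):
        y = k * y1 - y2
        out.append(y)
        y2, y1 = y1, y
    return out

out = biquad_oscillator(1000.0, 48000.0, 48)   # 1 kHz at 48 kHz sample rate
```

The sketch also exposes the shortcomings noted above: frequency enters through cos(ω), so control is non-linear, and any rounding of the coefficient or state perturbs the pole off the unit circle, causing amplitude drift over long run times.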
In this chapter we investigate several techniques for generating waveforms with time-varying waveshape and corresponding spectrum. We consider three distinct techniques which are inherently compatible with the DDS model:
paged wavetable access – where a contiguous sequence of waveform functions is tabulated in memory and accessed sequentially according to a time-varying waveshape (or spectrum) parameter;
linear wavetable combination – where multiple wavetables that each tabulate different waveform functions are linearly combined according to a time-varying parameter;
modulation – where the frequency, phase or amplitude of a typically sinusoidal carrier waveform is modulated by a modulator waveform.
Paged wavetable access requires a paged memory structure and provides time-varying waveshape (or corresponding spectrum) by selecting waveform ‘waypoints’ from a set of predefined wavetables. This is analogous to replaying a sequence of video frames where each frame represents a distinct waveform. In its most rudimentary form, the resolution of this technique is bound by the amount of physical memory available for wavetable storage and hence the number of distinct waveshape waypoints that may be included in a set. We also present an enhanced form called paged wavetable interpolation, which effectively interpolates waveforms that lie between the predefined waypoints, thus increasing the waveshape or spectrum control resolution. Waveshape may now be controlled in a piecewise-linear manner with arbitrarily fine control resolution according to a fractional page address.
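The fractional-page interpolation idea can be sketched as follows (an illustrative model with our own names; real designs would index pages and samples with integer phase arithmetic). The integer part of the page address selects two adjacent waypoint wavetables and the fractional part linearly cross-fades between them:

```python
def paged_wavetable_lookup(pages, page_addr, sample_index):
    """Sketch of paged wavetable interpolation: 'pages' is a list of
    equal-length wavetables ordered by waveshape; the fractional part of
    page_addr linearly interpolates between the two adjacent pages,
    giving piecewise-linear waveshape control."""
    p = int(page_addr)
    frac = page_addr - p
    p2 = min(p + 1, len(pages) - 1)    # clamp at the last waypoint
    a = pages[p][sample_index]
    b = pages[p2][sample_index]
    return a + frac * (b - a)

# Two 'waypoint' pages: a ramp and its negation; page address 0.5 blends them.
ramp = [k / 4.0 for k in range(4)]
neg = [-x for x in ramp]
mid = [paged_wavetable_lookup([ramp, neg], 0.5, i) for i in range(4)]
```

Sweeping `page_addr` over time morphs the output waveshape smoothly between the tabulated waypoints, which is exactly the piecewise-linear control described above.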
In Chapter 4 we introduced the idea of phase domain processing and outlined the permissible arithmetic operations that may be applied in a DDS context. In this chapter we apply simple multiplicative scaling of a phase sequence to combine DDS frequency synthesis (i.e. phase accumulation) with waveform synthesis based upon the inverse discrete Fourier transform (IDFT). This, in turn, enables computationally feasible real-time execution of the IDFT at any fundamental frequency and represents a powerful waveform generation technique. Waveform synthesis using the IDFT is fundamentally a frequency domain or spectrum specification method requiring multiple harmonic amplitude and phase parameters to specify the waveform. In a similar manner to the wavetable methods discussed earlier, we may also apply ‘spectral shaping’ (e.g. the Lanczos sigma function) to mitigate waveform ringing artefacts due to the Gibbs phenomenon. The fundamental frequency of the synthesised waveform may be programmed with all the advantageous attributes of DDS (e.g. phase continuity, linearity and arbitrarily fine frequency control). We now have a DDS arbitrary waveform generator with a fully parameterised IDFT phase–amplitude mapping algorithm.
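A simplified sketch of the phase-scaling idea follows (an illustrative form, not the book's exact algorithm): because harmonic k's instantaneous phase is just k times the fundamental phase, one phase accumulator output can drive every harmonic of the IDFT sum coherently, with per-harmonic amplitude and phase parameters specifying the spectrum.

```python
import math

def idft_sample(phase_frac, amps, phases):
    """Sketch of DDS + IDFT phase-amplitude mapping: multiplicative
    scaling of the accumulator phase by the harmonic number k gives each
    harmonic's phase, so the whole spectrum tracks one accumulator."""
    theta = 2 * math.pi * phase_frac
    return sum(a * math.sin(k * theta + p)
               for k, (a, p) in enumerate(zip(amps, phases), start=1))

# A square-wave-like spectrum: odd harmonics with amplitude 1/k.
amps = [1.0 if k % 2 == 1 else 0.0 for k in range(1, 10)]
amps = [a / k for k, a in enumerate(amps, start=1)]
phases = [0.0] * len(amps)
y = [idft_sample(n / 64.0, amps, phases) for n in range(64)]
```

Spectral shaping such as the Lanczos sigma function mentioned above amounts to weighting each entry of `amps` before the sum is formed.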
Another application of phase domain processing, that we investigate further in Chapter 8, exploits the properties of a phase sequence formed by the addition of two separate sequences from coherent phase accumulators clocked at different sample frequencies with a radix-2 ratio. By appropriate partitioning of the input phase increment between the two phase accumulators, the frequency control resolution of the summed sequence is determined solely by the accumulator with the lower clock frequency. This technique may be used to significantly reduce the amount of fast logic needed in very high frequency phase accumulator designs, thereby optimising power consumption, heat dissipation and cost [1].
Systematic generation of periodic signals with electronically controlled frequency, phase, amplitude and waveform shape (or waveshape) is ubiquitous in nearly every electronic system. The sinusoidal local oscillator in a super-heterodyne radio receiver is a simple example of a signal source whose controllable frequency tunes the receiver. Another example is a step input waveform (e.g. a square wave) that allows us to measure the step response of a closed-loop control system (e.g. rise time, fall time, overshoot and settling time) under controlled excitation conditions. A more complex ‘staircase’ input waveform allows us to measure step response at particular points over the system's dynamic range and is useful for investigating non-linear behaviour.
The progressive migration towards ‘software defined’ systems across all application domains is driving the development of high performance bespoke digital signal generation technology that is embeddable within a host system. This embedding can take the form of software or a ‘programmable logic’ (e.g. FPGA) implementation, depending on speed requirements, with both implementations satisfying the software definable criterion. Today, applications as diverse as instrumentation, communications, radar, electronic warfare, sonar and medical imaging systems require embedded, digitally controlled signal sources, often with challenging performance and control requirements. Furthermore, many of these applications now require signal sources that generate non-sinusoidal waveforms that are specified according to a precisely defined waveshape or spectrum function that is peculiar to the application. Moreover, in addition to conventional frequency, phase and amplitude control, these signal sources can have vastly increased utility by providing parametric and thereby dynamic control of waveshape or corresponding spectrum. As we will see, there are several digital waveform generation techniques that provide this functionality.
In this chapter we investigate hardware implementation of the DDS, sinusoidal and arbitrary waveform generation techniques presented in earlier chapters. We do not concern ourselves with specific target technologies such as FPGAs, but restrict our signal flow descriptions to the ‘register transfer level’ (RTL). The exact implementation technology (e.g. FPGA, ASIC or even hardwired logic) and the partitioning between hardware and embedded software are left to the suitably skilled reader and his or her application-specific requirements. For the most part, implementation of these algorithms, particularly in wide bandwidth applications, is best handled in high speed FPGA or ASIC logic. It is intended that this chapter will impart sufficient architectural detail to enable adaptation to any particular implementation technology.
There are several processing strategies that underpin high speed DSP hardware implementation. These comprise arithmetic pipelining, time division multiplexing and parallel processing. We begin Section 8.1 by reviewing these techniques. We then discuss high speed pipelined implementation of the digital accumulator and its constituent adder which are fundamental building blocks in both DDS and the IDFT. Given its fundamental importance, we investigate wavetable memory architectures and introduce the idea of a ‘vector memory’ that produces a vector of consecutive data samples relative to a single base address in only one memory access cycle. This architecture employs a combination of parallel processing and pipelining. As we recall from Chapter 5, phase interpolated wavetable indexing requires multiple wavetable samples that surround the sample indexed by the integer part of the fractional phase index. The number of samples, and hence the length of the vector, are determined by the order of the interpolation polynomial.
In this chapter we discuss conversion of discrete-time digital signals to the continuous-time or analogue domain. So far we have only investigated generation of digitally represented signals that are sampled in time and quantised in amplitude to a specific number of bits b. Accordingly, in an ideal convertor there are 2^b equally spaced quantisation levels that follow an exact linear relationship with the input code. We call this process digital to analogue conversion, and it comprises several distinct sequentially connected processing functions that we consider in this chapter:
the digital to analogue convertor (DAC);
a glitch reduction stage, or ‘deglitcher’ (if required);
a typically low-pass reconstruction filter;
analogue post-processing (e.g. switched attenuation, DC offset control and output line driving).
Fundamentally, a DAC takes a b-bit digital input word and together with a reference voltage (or current) V_ref computes a corresponding output voltage V_out or current i_out, depending on its architecture. In effect, a DAC multiplies the reference voltage by a factor determined from the input word as a fraction of full-scale. Accordingly, the often overlooked DAC reference is a critical design consideration and for our present discussion we subsume it into the DAC function. There are many DAC architectures reported in the literature and available in ‘single chip form’ from commercial suppliers such as Analog Devices, Linear Technology and Texas Instruments.
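The multiply-by-a-fraction-of-full-scale behaviour described above reduces, for an ideal unipolar convertor, to a one-line transfer function (this sketch ignores offset, gain and linearity errors of real devices):

```python
def ideal_dac(code, bits, v_ref):
    """Ideal unipolar DAC transfer function: the b-bit input code scales
    the reference as a fraction of full scale, V_out = V_ref * code / 2**bits."""
    assert 0 <= code < (1 << bits)
    return v_ref * code / (1 << bits)

# A 12-bit DAC with a 2.5 V reference: mid-scale code gives half the reference.
v = ideal_dac(2048, 12, 2.5)   # 1.25 V
```

One least significant bit corresponds to V_ref / 2^b, which makes plain why reference noise and drift appear directly, and proportionally, at the output.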
In Chapter 4 we investigated sinusoidal DDS and developed the concept of phase–amplitude mapping using a wavetable. In sinusoidal DDS, the wavetable is a lookup table that tabulates one cycle of a sine function and translates from the phase domain to the amplitude domain. In this chapter we extend this idea to the generation of non-sinusoidal waveforms where a single-cycle, periodic arbitrary waveform function is now tabulated in the wavetable. We call this method DDS arbitrary waveform generation or DDS AWG. DDS AWG is a generalisation of sinusoidal DDS that generates arbitrary waveforms with fixed waveshape and independently controlled frequency, phase offset and amplitude. Furthermore, DDS also allows independent dynamic modulation of these parameters according to a modulation waveform. However, the signal processing structure of DDS AWG is easily modified to provide smooth, parametrically controlled (i.e. time-varying) waveshape and corresponding spectrum. We consider dynamic waveshape control further in Chapter 6.
Before proceeding, several fundamental problems become evident when we move from generation of sinusoidal to arbitrary waveforms using DDS principles. These may be summarised as:
specification and tabulation of an arbitrary waveform function that is compatible with DDS requirements, as introduced in Chapter 2;
an increase in the magnitude of the amplitude error signal εa(n) as a function of waveform harmonic content, the amount of phase truncation (i.e. the number of F bits) and the phase increment φ;
the additional computational complexity of linear and higher-order phase interpolation that is required to reduce the magnitude of εa(n) and hence increase waveform SNR and SFDR;
the susceptibility to harmonic alias images in the Nyquist band when the upper waveform harmonics exceed the Nyquist frequency;
the necessity for pre-tabulation band-limiting of a wavetable function specified in the time domain (i.e. by shape) to mitigate harmonic aliasing caused by the high frequency content of any waveform discontinuities.
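The last point, pre-tabulation band-limiting, can be illustrated with a sawtooth example (our own illustrative sketch): instead of tabulating the ideal discontinuous shape, the wavetable is filled from a truncated Fourier series whose highest harmonic is chosen to stay below the Nyquist frequency, with optional Lanczos sigma weighting to suppress Gibbs ringing.

```python
import math

def bandlimited_sawtooth_table(table_len, n_harmonics, sigma=True):
    """Sketch of pre-tabulation band-limiting: tabulate one cycle of a
    sawtooth from a truncated Fourier series (harmonics 1..n_harmonics),
    optionally weighting each term by the Lanczos sigma factor to
    mitigate Gibbs-phenomenon ringing at the discontinuity."""
    table = []
    for i in range(table_len):
        theta = 2 * math.pi * i / table_len
        y = 0.0
        for k in range(1, n_harmonics + 1):
            w = 1.0
            if sigma:
                x = math.pi * k / (n_harmonics + 1)
                w = math.sin(x) / x          # Lanczos sigma weighting
            y += w * math.sin(k * theta) / k
        table.append((2 / math.pi) * y)
    return table

table = bandlimited_sawtooth_table(1024, 16)
```

Choosing `n_harmonics` so that n_harmonics × f_out stays below f_s/2 at the highest intended fundamental frequency prevents the harmonic alias images listed above.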
Modern blockbuster movies seamlessly introduce impossible characters and action into real-world settings using digital visual effects. These effects are made possible by research from the field of computer vision, the study of how to automatically understand images. Computer Vision for Visual Effects will educate students, engineers and researchers about the fundamental computer vision principles and state-of-the-art algorithms used to create cutting-edge visual effects for movies and television. The author describes classical computer vision algorithms used on a regular basis in Hollywood (such as blue screen matting, structure from motion, optical flow and feature tracking) and exciting recent developments that form the basis for future effects (such as natural image matting, multi-image compositing, image retargeting and view synthesis). He also discusses the technologies behind motion capture and three-dimensional data acquisition. More than 200 original images demonstrating principles, algorithms and results, along with in-depth interviews with Hollywood visual effects artists, tie the mathematical concepts to real-world filmmaking.
A key responsibility of a visual effects supervisor on a movie set is to collect three-dimensional measurements of structures, since the set may be broken down quickly after filming is complete. These measurements are critical for guiding the later insertion of 3D computer-generated elements. In this chapter, we focus on the most common tools and techniques for acquiring accurate 3D data.
Visual effects personnel use several of the same tools as professional surveyors to acquire 3D measurements. For example, to acquire accurate distances to a small set of 3D points, they may use a total station. The user centers the scene point to be measured in the crosshairs of a telescope-like sight, and the two spherical angles defining the heading are electronically measured with high accuracy. Then an electronic distance measuring device uses the time of flight of an infrared or microwave beam that reflects off of the scene point to accurately determine the distance to the target. However, acquiring more than a few 3D distance measurements in this way is tedious and time-consuming.
It's recently become common to automatically survey entire filming locations using laser range-finding techniques, which we discuss in Section 8.1. The result is a cloud of hundreds of thousands of 3D points visible along lines of sight emanating from the laser scanner. These techniques, collectively called Light Detection and Ranging or LiDAR, are highly accurate and allow the scanning of objects tens to hundreds of meters away.
43 of the top 50 films of all time are visual effects driven. Today, visual effects are the “movie stars” of studio tent-pole pictures – that is, visual effects make contemporary movies box office hits in the same way that big name actors ensured the success of films in the past. It is very difficult to imagine a modern feature film or TV program without visual effects.
The Visual Effects Society, 2011
Neo fends off dozens of Agent Smith clones in a city park. Kevin Flynn confronts a thirty-years-younger avatar of himself in the Grid. Captain America's sidekick rolls under a speeding truck in the nick of time to plant a bomb. Nightcrawler “bamfs” in and out of rooms, leaving behind a puff of smoke. James Bond skydives at high speed out of a burning airplane. Harry Potter grapples with Nagini in a ramshackle cottage. Robert Neville stalks a deer in an overgrown, abandoned Times Square. Autobots and Decepticons battle it out in the streets of Chicago. Today's blockbuster movies so seamlessly introduce impossible characters and action into real-world settings that it's easy for the audience to suspend its disbelief. These compelling action scenes are made possible by modern visual effects.
Visual effects, the manipulation and fusion of live and synthetic images, have been a part of moviemaking since the first short films were made in the 1900s. For example, beginning in the 1920s, fantastic sets and environments were created using huge, detailed paintings on panes of glass placed between the camera and the actors. Miniature buildings or monsters were combined with footage of live actors using forced perspective to create photo-realistic composites. Superheroes flew across the screen using rear-projection and blue-screen replacement technology.
Separating a foreground element of an image from its background for later compositing into a new scene is one of the most basic and common tasks in visual effects production. This problem is typically called matting or pulling a matte when applied to film, or keying when applied to video. At its humblest level, local news stations insert weather maps behind meteorologists who are in fact standing in front of a green screen. At its most difficult, an actor with curly or wispy hair filmed in a complex real-world environment may need to be digitally removed from every frame of a long sequence.
Image matting is probably the oldest visual effects problem in filmmaking, and the search for a reliable automatic matting system has been ongoing since the early 1900s [393]. In fact, the main goal of Lucasfilm's original Computer Division (part of which later spun off to become Pixar) was to create a general-purpose image processing computer that natively understood mattes and facilitated complex compositing [375]. A major research milestone was a family of effective techniques for matting against a blue background developed in the Hollywood effects industry throughout the 1960s and 1970s. Such techniques have matured to the point that blue- and green-screen matting is involved in almost every mass-market TV show or movie, even hospital shows and period dramas.
On the other hand, putting an actor in front of a green screen to achieve an effect isn't always practical or compelling, and situations abound in which the foreground must be separated from the background in a natural image. For example, movie credits are often inserted into real scenes so that actors and foreground objects seem to pass in front of them, a combination of image matting, compositing, and matchmoving. The computer vision and computer graphics communities have only recently proposed methods for semi-automatic matting with complex foregrounds and real-world backgrounds.
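The matting problem described above is conventionally stated per pixel as C = αF + (1 − α)B, where F is the foreground colour, B the background colour, and α ∈ [0, 1] the matte; pulling a matte means estimating α (and F) from C. A minimal per-pixel compositing sketch:

```python
def composite(alpha, fg, bg):
    """The standard per-pixel compositing equation used in matting:
    C = alpha * F + (1 - alpha) * B, where alpha in [0, 1] is the matte
    value (1 = pure foreground, 0 = pure background)."""
    return tuple(alpha * f + (1 - alpha) * b for f, b in zip(fg, bg))

# A half-transparent green foreground pixel over a blue background:
c = composite(0.5, (0, 255, 0), (0, 0, 255))
```

Compositing is the easy direction; matting is its ill-posed inverse, since one observed colour C must be explained by three unknowns (α, F, B) at every pixel, which is why known blue or green backgrounds make the problem so much more tractable.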