2 - The foundations of digital waveform generation
- Pete Symons
-
- Book:
- Digital Waveform Generation
- Published online:
- 05 November 2013
- Print publication:
- 17 October 2013, pp 49-89
-
Summary
We begin this chapter by reviewing some important mathematical principles that underpin digital waveform generation. We then proceed to introduce and develop a concept which is central to this book – sampling a tabulated signal – and an associated concept – the wavetable. After introducing the wavetable as a fundamental building block, we consider several methods for specifying an arbitrary waveform function that is tabulated within it.
Section 2.4 introduces phase accumulation frequency synthesis and phase–amplitude mapping based upon wavetable lookup as the foundations of what we call generalised direct digital synthesis (DDS). This section also outlines some important error mechanisms that are fundamental to the technique, and whose mitigation is the topic of later chapters.
We conclude this chapter by reviewing the principal control parameters of a digital waveform generation system against their ideal characteristics. Finally, we define some qualitative performance metrics that we use in later chapters to investigate the effects of design and control parameter changes using computational simulation of a mathematical model. These metrics are also used to compare different waveform generation algorithms under identical control parameter conditions.
Mathematical preliminaries
In this section we briefly review some important mathematical concepts which underpin the generation of electronic signals by digital means, particularly those based upon phase accumulation and phase–amplitude mapping (i.e. DDS). Our objective is to provide a sufficiently detailed review to enable an understanding of the concepts presented in later chapters. We begin with a review of continuous and discrete-time signals.
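The phase accumulation and phase–amplitude mapping scheme introduced above can be sketched in a few lines. The following is an illustrative Python model only (the book's downloadable models are Mathcad); the function name, word lengths and table size are our own choices, not the author's.

```python
import math

def dds_sine(phase_increment, n_samples, acc_bits=32, table_bits=10):
    """Sketch of phase-accumulation DDS: an acc_bits-wide accumulator
    wraps modulo 2**acc_bits each sample; its top table_bits index a
    one-cycle sine wavetable (phase-amplitude mapping by truncated lookup)."""
    table_len = 1 << table_bits
    wavetable = [math.sin(2 * math.pi * k / table_len) for k in range(table_len)]
    acc, out = 0, []
    for _ in range(n_samples):
        index = acc >> (acc_bits - table_bits)            # phase truncation
        out.append(wavetable[index])
        acc = (acc + phase_increment) & ((1 << acc_bits) - 1)  # modulo-2^N accumulate
    return out
```

The output frequency follows the usual DDS relation f_out = phase_increment × f_s / 2^acc_bits, so frequency control resolution is f_s / 2^acc_bits.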
4 - DDS sine wave generation
- Pete Symons
-
- Book:
- Digital Waveform Generation
- Published online:
- 05 November 2013
- Print publication:
- 17 October 2013, pp 109-161
-
Summary
In this chapter we investigate DDS sine wave generation as an introduction to a general discussion of DDS arbitrary waveform generation in Chapter 5. We begin by reviewing phase accumulation frequency synthesis, then discuss considerations for demonstrating sinusoidal DDS behaviour through computer simulation, and finally review several important sinusoidal phase–amplitude mapping techniques. Relative performance is illustrated using simulated SNR, SFDR and amplitude spectra as a function of key control and design parameters. We focus on phase truncated wavetable indexing and introduce linear phase interpolation as an error reduction mechanism that gives near-optimal performance in most practicable applications (i.e. SNR and SFDR comparable to or better than amplitude quantisation noise).
Optimal sinusoidal phase–amplitude mapping with practicable wavetable lengths is accomplished with a technique that we call trigonometric identity phase interpolation. This technique uses the trigonometric angle summation identity to compute a phase–amplitude mapping whose SNR and SFDR are bound only by quantisation noise. Although computationally more costly than linear interpolation, this technique is easily adapted to generate exactly quadrature sinusoids with optimal SNR and SFDR. A reduced multiplication implementation is also possible that trades multiplication for addition operations and is presented in Chapter 8. The principal utility of this technique is in applications which require optimal SNR and SFDR performance simultaneous with phase offset control precision bounded by the phase accumulator resolution.
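As a sketch of the idea: the angle summation identity sin(c + f) = sin(c)cos(f) + cos(c)sin(f) lets one short coarse table (spanning the full cycle) and two short fine tables (spanning one coarse step) reproduce the sine of the full phase word exactly, up to floating-point rounding. The Python below is our own illustrative model, not the book's implementation; the table sizes are arbitrary, and a hardware version would precompute the tables once.

```python
import math

def trig_identity_sin(phase, coarse_bits=8, fine_bits=8):
    """Sketch of trigonometric identity phase interpolation: split a
    (coarse_bits + fine_bits)-bit phase word into coarse and fine parts
    and apply sin(c+f) = sin(c)cos(f) + cos(c)sin(f) with four tables."""
    total_bits = coarse_bits + fine_bits
    two_pi = 2 * math.pi
    # Coarse tables span the full cycle; fine tables span one coarse step.
    sin_c = [math.sin(two_pi * k / (1 << coarse_bits)) for k in range(1 << coarse_bits)]
    cos_c = [math.cos(two_pi * k / (1 << coarse_bits)) for k in range(1 << coarse_bits)]
    sin_f = [math.sin(two_pi * k / (1 << total_bits)) for k in range(1 << fine_bits)]
    cos_f = [math.cos(two_pi * k / (1 << total_bits)) for k in range(1 << fine_bits)]
    c = phase >> fine_bits              # coarse phase index
    f = phase & ((1 << fine_bits) - 1)  # fine phase index
    return sin_c[c] * cos_f[f] + cos_c[c] * sin_f[f]
```

Swapping the identity to cos(c)cos(f) − sin(c)sin(f) yields the exactly quadrature output from the same four tables, which is the property noted above.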
3 - Recursive sine wave oscillators
- Pete Symons
-
- Book:
- Digital Waveform Generation
- Published online:
- 05 November 2013
- Print publication:
- 17 October 2013, pp 90-108
-
Summary
This chapter investigates sinusoidal oscillators based upon recursive algorithms. Recursive oscillators are essentially discrete-time simulations of physical (e.g. mass-spring) oscillatory systems having a simple harmonic motion with zero damping as their solution. Accordingly, this type of oscillating system can only produce sinusoidal waveforms. The principal advantage of all recursive oscillators is their computational simplicity, which enables low cost implementation. However, there are also several distinct shortcomings whose importance depends upon the application, for example non-linear frequency control, oscillation amplitude instability and arithmetic round-off noise growth over time.
There are many recursive oscillator algorithms reported in the literature, each with its own advantages and disadvantages. It is also evident that there is no single oscillator algorithm that is optimal and satisfies all requirements. As fundamentally closed-loop systems, all recursive oscillators are bound by the discrete-time Barkhausen criteria that must be satisfied to ensure sustained, stable oscillation. The classical continuous-time Barkhausen criteria require that the total loop gain of an oscillating system be exactly unity and the total loop phase shift be an integer multiple of 2π radians. In Section 3.1 we summarise the discrete-time form where we generalise the recursive oscillator difference equations using a matrix representation, as reported by [1]. In some recursive algorithms, quantised data representation and arithmetic rounding errors often lead to violation of these criteria, causing oscillation amplitude instability over time.
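A minimal example of such a recursive oscillator is the coupled (rotation matrix) form, in which a two-element state vector is rotated by θ = 2πf0/fs each sample. This Python sketch is our own illustration, not an algorithm from the text: in exact arithmetic the rotation matrix has unit determinant, so the loop-gain condition holds and the amplitude is constant; quantising cos θ and sin θ is precisely the perturbation that causes the amplitude drift discussed above.

```python
import math

def recursive_oscillator(f0, fs, n_samples):
    """Sketch of a coupled-form recursive oscillator: the state (x, y) is
    rotated by theta = 2*pi*f0/fs per sample,
        x(n+1) = x cos(theta) - y sin(theta)
        y(n+1) = x sin(theta) + y cos(theta),
    so x(n) = cos(n*theta) and y(n) = sin(n*theta) in exact arithmetic."""
    theta = 2 * math.pi * f0 / fs
    c, s = math.cos(theta), math.sin(theta)
    x, y = 1.0, 0.0          # initial conditions set amplitude and phase
    out = []
    for _ in range(n_samples):
        out.append(x)
        x, y = c * x - s * y, s * x + c * y   # 2x2 rotation, det = 1
    return out
```

Note the non-linear frequency control mentioned above: changing f0 requires recomputing cos θ and sin θ, unlike the linear phase-increment control of DDS.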
Digital Waveform Generation
- Pete Symons
-
- Published online:
- 05 November 2013
- Print publication:
- 17 October 2013
-
This concise overview of digital signal generation will introduce you to powerful, flexible and practical digital waveform generation techniques. These techniques, based on phase-accumulation and phase-amplitude mapping, will enable you to generate sinusoidal and arbitrary real-time digital waveforms to fit your desired waveshape, frequency, phase offset and amplitude, and to design bespoke digital waveform generation systems from scratch. Including a review of key definitions, a brief explanatory introduction to classical analogue waveform generation and its basic conceptual and mathematical foundations, coverage of recursion, DDS, IDFT and dynamic waveshape and spectrum control, a chapter dedicated to detailed examples of hardware design, and accompanied by downloadable Mathcad models created to help you explore 'what if?' design scenarios, this is essential reading for practitioners in the digital signal processing community, and for students who want to understand and apply digital waveform synthesis techniques.
6 - Dynamic waveshape and spectrum control
- Pete Symons
-
- Book:
- Digital Waveform Generation
- Published online:
- 05 November 2013
- Print publication:
- 17 October 2013, pp 228-248
-
Summary
In this chapter we investigate several techniques for generating waveforms with time-varying waveshape and corresponding spectrum. We consider three distinct techniques which are inherently compatible with the DDS model:
paged wavetable access – where a contiguous sequence of waveform functions is tabulated in memory and accessed sequentially according to a time-varying waveshape (or spectrum) parameter;
linear wavetable combination – where multiple wavetables that each tabulate different waveform functions are linearly combined according to a time-varying parameter;
modulation – where the frequency, phase or amplitude of a typically sinusoidal carrier waveform is modulated by a modulator waveform.
Paged wavetable access requires a paged memory structure and provides time-varying waveshape (or corresponding spectrum) by selecting waveform ‘waypoints’ from a set of predefined wavetables. This is analogous to replaying a sequence of video frames where each frame represents a distinct waveform. In its most rudimentary form, the resolution of this technique is bound by the amount of physical memory available for wavetable storage and hence the number of distinct waveshape waypoints that may be included in a set. We also present an enhanced form called paged wavetable interpolation, which effectively interpolates waveforms that lie between the predefined waypoints, thus increasing the waveshape or spectrum control resolution. Waveshape may now be controlled in a piecewise-linear manner with arbitrarily fine control resolution according to a fractional page address.
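The fractional page address mechanism can be sketched as a per-sample cross-fade between two adjacent wavetable pages. The Python below is a hypothetical illustration; the function name, list-based memory model and clamping at the last waypoint are our own assumptions.

```python
def paged_wavetable_lookup(pages, page_address, index):
    """Sketch of paged wavetable interpolation: 'pages' is a list of
    equal-length wavetables tabulating waveshape waypoints. The integer
    part of the fractional page_address selects adjacent pages; the
    fractional part linearly cross-fades between them."""
    p = int(page_address)
    frac = page_address - p
    p1 = min(p + 1, len(pages) - 1)   # clamp at the final waypoint page
    a, b = pages[p][index], pages[p1][index]
    return (1.0 - frac) * a + frac * b
```

Sweeping page_address over time gives the piecewise-linear waveshape trajectory described above, with resolution limited only by the fractional address word length rather than by the number of stored pages.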
7 - Phase domain processing – DDS and the IDFT
- Pete Symons
-
- Book:
- Digital Waveform Generation
- Published online:
- 05 November 2013
- Print publication:
- 17 October 2013, pp 249-265
-
Summary
In Chapter 4 we introduced the idea of phase domain processing and outlined the permissible arithmetic operations that may be applied in a DDS context. In this chapter we apply simple multiplicative scaling of a phase sequence to combine DDS frequency synthesis (i.e. phase accumulation) with waveform synthesis based upon the inverse discrete Fourier transform (IDFT). This, in turn, enables computationally feasible real-time execution of the IDFT at any fundamental frequency and represents a powerful waveform generation technique. Waveform synthesis using the IDFT is fundamentally a frequency domain or spectrum specification method requiring multiple harmonic amplitude and phase parameters to specify the waveform. In a similar manner to the wavetable methods discussed earlier, we may also apply ‘spectral shaping’ (e.g. the Lanczos sigma function) to mitigate waveform ringing artefacts due to the Gibbs phenomenon. The fundamental frequency of the synthesised waveform may be programmed with all the advantageous attributes of DDS (e.g. phase continuity, linearity and arbitrarily fine frequency control). We now have a DDS arbitrary waveform generator with a fully parameterised IDFT phase–amplitude mapping algorithm.
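The multiplicative phase scaling described above can be sketched directly: each harmonic k reuses the accumulator phase scaled by k, so no wavetable is required and every harmonic amplitude and phase is an independent parameter. This Python model is illustrative only (the book's models are Mathcad); the parameter layout and a floating-point phase accumulator are our simplifications.

```python
import math

def idft_mapping(phase, amps, phases):
    """Sketch of IDFT phase-amplitude mapping: multiplicative scaling of
    the accumulator phase by harmonic number k replaces wavetable lookup.
    amps[k-1] and phases[k-1] specify harmonic k."""
    return sum(a * math.cos(k * phase + th)
               for k, (a, th) in enumerate(zip(amps, phases), start=1))

def synthesise(phase_increment, amps, phases, n_samples):
    """Phase accumulation (in radians, modulo 2*pi) feeding the mapping,
    so the fundamental inherits DDS frequency and phase control."""
    acc, out = 0.0, []
    for _ in range(n_samples):
        out.append(idft_mapping(acc, amps, phases))
        acc = (acc + phase_increment) % (2 * math.pi)
    return out
```

Spectral shaping such as the Lanczos sigma function mentioned above amounts to pre-multiplying each amps[k-1] by a harmonic-dependent factor before synthesis.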
Another application of phase domain processing, that we investigate further in Chapter 8, exploits the properties of a phase sequence formed by the addition of two separate sequences from coherent phase accumulators clocked at different sample frequencies with a radix-2 ratio. By appropriate partitioning of the input phase increment between the two phase accumulators, the frequency control resolution of the summed sequence is determined solely by the accumulator with the lower clock frequency. This technique may be used to significantly reduce the amount of fast logic needed in very high frequency phase accumulator designs, thereby optimising power consumption, heat dissipation and cost [1].
1 - Introduction to waveform generation
- Pete Symons
-
- Book:
- Digital Waveform Generation
- Published online:
- 05 November 2013
- Print publication:
- 17 October 2013, pp 1-48
-
Summary
Systematic generation of periodic signals with electronically controlled frequency, phase, amplitude and waveform shape (or waveshape) is ubiquitous in nearly every electronic system. The sinusoidal local oscillator in a super-heterodyne radio receiver is a simple example of a signal source whose controllable frequency tunes the receiver. Another example is a step input waveform (e.g. a square wave) that allows us to measure the step response of a closed-loop control system (e.g. rise time, fall time, overshoot and settling time) under controlled excitation conditions. A more complex ‘staircase’ input waveform allows us to measure step response at particular points over the system's dynamic range and is useful for investigating non-linear behaviour.
The progressive migration towards ‘software defined’ systems across all application domains is driving the development of high performance bespoke digital signal generation technology that is embeddable within a host system. This embedding can take the form of a software implementation or a ‘programmable logic’ (e.g. FPGA) implementation, depending on speed requirements, with both satisfying the software definable criterion. Today, applications as diverse as instrumentation, communications, radar, electronic warfare, sonar and medical imaging systems require embedded, digitally controlled signal sources, often with challenging performance and control requirements. Furthermore, many of these applications now require signal sources that generate non-sinusoidal waveforms that are specified according to a precisely defined waveshape or spectrum function that is peculiar to the application. Moreover, in addition to conventional frequency, phase and amplitude control, these signal sources can have vastly increased utility by providing parametric and thereby dynamic control of waveshape or corresponding spectrum. As we will see, there are several digital waveform generation techniques that provide this functionality.
Glossary of terms
- Pete Symons
-
- Book:
- Digital Waveform Generation
- Published online:
- 05 November 2013
- Print publication:
- 17 October 2013, pp xvii-xviii
-
8 - Hardware implementation architectures
- Pete Symons
-
- Book:
- Digital Waveform Generation
- Published online:
- 05 November 2013
- Print publication:
- 17 October 2013, pp 266-306
-
Summary
In this chapter we investigate hardware implementation of the DDS, sinusoidal and arbitrary waveform generation techniques presented in earlier chapters. We do not concern ourselves with specific target technologies such as FPGAs, but restrict our signal flow descriptions to the ‘register transfer level’ (RTL). The exact implementation technology (e.g. FPGA, ASIC or even hardwired logic) and the partitioning between hardware and embedded software are left to the suitably skilled reader and his or her application-specific requirements. For the most part, implementation of these algorithms, particularly in wide bandwidth applications, is best handled in high speed FPGA or ASIC logic. It is intended that this chapter will impart sufficient architectural detail to enable adaptation to any particular implementation technology.
There are several processing strategies that underpin high speed DSP hardware implementation. These comprise arithmetic pipelining, time division multiplexing and parallel processing. We begin Section 8.1 by reviewing these techniques. We then discuss high speed pipelined implementation of the digital accumulator and its constituent adder which are fundamental building blocks in both DDS and the IDFT. Given its fundamental importance, we investigate wavetable memory architectures and introduce the idea of a ‘vector memory’ that produces a vector of consecutive data samples relative to a single base address in only one memory access cycle. This architecture employs a combination of parallel processing and pipelining. As we recall from Chapter 5, phase interpolated wavetable indexing requires multiple wavetable samples that surround the sample indexed by the integer part of the fractional phase index. The number of samples, and hence the length of the vector, is determined by the order of the interpolation polynomial.
9 - Digital to analogue conversion
- Pete Symons
-
- Book:
- Digital Waveform Generation
- Published online:
- 05 November 2013
- Print publication:
- 17 October 2013, pp 307-340
-
Summary
In this chapter we discuss conversion of discrete-time digital signals to the continuous-time or analogue domain. So far we have only investigated generation of digitally represented signals that are sampled in time and quantised in amplitude to a specific number of bits b. Accordingly, in an ideal convertor there are 2^b equally spaced quantisation levels that follow an exact linear relationship with the input code. We call this process digital to analogue conversion, and it comprises several distinct sequentially connected processing functions that we consider in this chapter:
the digital to analogue convertor (DAC);
a glitch reduction stage, or ‘deglitcher’ (if required);
a typically low-pass reconstruction filter;
analogue post-processing (e.g. switched attenuation, DC offset control and output line driving).
Fundamentally, a DAC takes a b-bit digital input word and, together with a reference voltage (or current) Vref, computes a corresponding output voltage Vout or current iout, depending on its architecture. In effect, a DAC multiplies the reference voltage by a factor determined from the input word as a fraction of full-scale. Accordingly, the often overlooked DAC reference is a critical design consideration, and for our present discussion we subsume it into the DAC function. There are many DAC architectures reported in the literature and available in ‘single chip form’ from commercial suppliers such as Analog Devices, Linear Technology and Texas Instruments.
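The ideal transfer function just described, output as a code-weighted fraction of the reference, can be written down directly. A minimal sketch, assuming a unipolar convertor with straight binary coding (bipolar and offset-binary variants differ only in the code mapping):

```python
def ideal_dac(code, b, v_ref):
    """Sketch of an ideal unipolar b-bit DAC: the output is the reference
    scaled by the input code as a fraction of full scale, giving 2**b
    equally spaced levels from 0 to v_ref * (2**b - 1) / 2**b."""
    if not 0 <= code < (1 << b):
        raise ValueError("code out of range for b bits")
    return v_ref * code / (1 << b)
```

One quantisation step is v_ref / 2^b, which is why the reference accuracy and noise limit the convertor just as surely as its bit count.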
5 - DDS arbitrary waveform generation
- Pete Symons
-
- Book:
- Digital Waveform Generation
- Published online:
- 05 November 2013
- Print publication:
- 17 October 2013, pp 162-227
-
Summary
From sinusoidal to arbitrary waveforms
In Chapter 4 we investigated sinusoidal DDS and developed the concept of phase–amplitude mapping using a wavetable. In sinusoidal DDS, the wavetable is a lookup table that tabulates one cycle of a sine function and translates from the phase domain to the amplitude domain. In this chapter we extend this idea to the generation of non-sinusoidal waveforms where a single-cycle, periodic arbitrary waveform function is now tabulated in the wavetable. We call this method DDS arbitrary waveform generation or DDS AWG. DDS AWG is a generalisation of sinusoidal DDS that generates arbitrary waveforms with fixed waveshape and independently controlled frequency, phase offset and amplitude. Furthermore, DDS also allows independent dynamic modulation of these parameters according to a modulation waveform. However, the signal processing structure of DDS AWG is easily modified to provide smooth, parametrically controlled (i.e. time-varying) waveshape and corresponding spectrum. We consider dynamic waveshape control further in Chapter 6.
Before proceeding, several fundamental problems become evident when we move from generation of sinusoidal to arbitrary waveforms using DDS principles. These may be summarised as:
specification and tabulation of an arbitrary waveform function that is compatible with DDS requirements, as introduced in Chapter 2;
an increase in the magnitude of the amplitude error signal εa(n) as a function of waveform harmonic content, the amount of phase truncation (i.e. the number of truncated phase bits F) and the phase increment φ;
the additional computational complexity of linear and higher-order phase interpolation that is required to reduce the magnitude of εa(n) and hence increase waveform SNR and SFDR;
the susceptibility to harmonic alias images in the Nyquist band when the upper waveform harmonics exceed the Nyquist frequency;
the necessity for pre-tabulation band-limiting of a wavetable function specified in the time domain (i.e. by shape) to mitigate harmonic aliasing caused by the high frequency content of any waveform discontinuities.
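The last point, band-limiting the tabulated function before use, can be sketched as a DFT round trip that zeroes every harmonic above a chosen limit. The Python below is our own illustration (a direct O(N²) DFT for clarity; a practical implementation would use an FFT), not the book's procedure:

```python
import cmath, math

def band_limit(samples, max_harmonic):
    """Sketch of pre-tabulation band-limiting: forward DFT of the
    single-cycle waveform, zero all harmonics above max_harmonic,
    inverse DFT back to the time domain."""
    n = len(samples)
    spectrum = [sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)) for k in range(n)]
    for k in range(n):
        h = min(k, n - k)            # harmonic number in the two-sided spectrum
        if h > max_harmonic:
            spectrum[k] = 0
    return [sum(spectrum[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]
```

Choosing max_harmonic so that max_harmonic × f_out stays below the Nyquist frequency at the highest intended phase increment prevents the harmonic alias images listed above.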
8 - Three-Dimensional Data Acquisition
- Richard J. Radke, Rensselaer Polytechnic Institute, New York
-
- Book:
- Computer Vision for Visual Effects
- Published online:
- 05 December 2012
- Print publication:
- 19 November 2012, pp 300-352
-
Summary
A key responsibility of a visual effects supervisor on a movie set is to collect three-dimensional measurements of structures, since the set may be broken down quickly after filming is complete. These measurements are critical for guiding the later insertion of 3D computer-generated elements. In this chapter, we focus on the most common tools and techniques for acquiring accurate 3D data.
Visual effects personnel use several of the same tools as professional surveyors to acquire 3D measurements. For example, to acquire accurate distances to a small set of 3D points, they may use a total station. The user centers the scene point to be measured in the crosshairs of a telescope-like sight, and the two spherical angles defining the heading are electronically measured with high accuracy. Then an electronic distance measuring device uses the time of flight of an infrared or microwave beam that reflects off of the scene point to accurately determine the distance to the target. However, acquiring more than a few 3D distance measurements in this way is tedious and time-consuming.
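Turning a total-station reading into a 3D point is a spherical-to-Cartesian change of coordinates. A minimal sketch, assuming the two measured angles are azimuth in the ground plane and elevation above it (instrument angle conventions vary, so treat this as illustrative):

```python
import math

def total_station_point(azimuth, elevation, distance):
    """Sketch: convert a total-station measurement (two spherical angles
    plus time-of-flight distance) into a Cartesian 3D point relative to
    the instrument. Angles are in radians."""
    x = distance * math.cos(elevation) * math.cos(azimuth)
    y = distance * math.cos(elevation) * math.sin(azimuth)
    z = distance * math.sin(elevation)
    return (x, y, z)
```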
It's recently become common to automatically survey entire filming locations using laser range-finding techniques, which we discuss in Section 8.1. The result is a cloud of hundreds of thousands of 3D points visible along lines of sight emanating from the laser scanner. These techniques, collectively called Light Detection and Ranging or LiDAR, are highly accurate and allow the scanning of objects tens to hundreds of meters away.
1 - Introduction
- Richard J. Radke, Rensselaer Polytechnic Institute, New York
-
- Book:
- Computer Vision for Visual Effects
- Published online:
- 05 December 2012
- Print publication:
- 19 November 2012, pp 1-8
-
Summary
43 of the top 50 films of all time are visual effects driven. Today, visual effects are the “movie stars” of studio tent-pole pictures – that is, visual effects make contemporary movies box office hits in the same way that big name actors ensured the success of films in the past. It is very difficult to imagine a modern feature film or TV program without visual effects.
The Visual Effects Society, 2011
Neo fends off dozens of Agent Smith clones in a city park. Kevin Flynn confronts a thirty-years-younger avatar of himself in the Grid. Captain America's sidekick rolls under a speeding truck in the nick of time to plant a bomb. Nightcrawler “bamfs” in and out of rooms, leaving behind a puff of smoke. James Bond skydives at high speed out of a burning airplane. Harry Potter grapples with Nagini in a ramshackle cottage. Robert Neville stalks a deer in an overgrown, abandoned Times Square. Autobots and Decepticons battle it out in the streets of Chicago. Today's blockbuster movies so seamlessly introduce impossible characters and action into real-world settings that it's easy for the audience to suspend its disbelief. These compelling action scenes are made possible by modern visual effects.
Visual effects, the manipulation and fusion of live and synthetic images, have been a part of moviemaking since the first short films were made in the 1900s. For example, beginning in the 1920s, fantastic sets and environments were created using huge, detailed paintings on panes of glass placed between the camera and the actors. Miniature buildings or monsters were combined with footage of live actors using forced perspective to create photo-realistic composites. Superheroes flew across the screen using rear-projection and blue-screen replacement technology.
2 - Image Matting
- Richard J. Radke, Rensselaer Polytechnic Institute, New York
-
- Book:
- Computer Vision for Visual Effects
- Published online:
- 05 December 2012
- Print publication:
- 19 November 2012, pp 9-54
-
Summary
Separating a foreground element of an image from its background for later compositing into a new scene is one of the most basic and common tasks in visual effects production. This problem is typically called matting or pulling a matte when applied to film, or keying when applied to video. At its humblest level, local news stations insert weather maps behind meteorologists who are in fact standing in front of a green screen. At its most difficult, an actor with curly or wispy hair filmed in a complex real-world environment may need to be digitally removed from every frame of a long sequence.
Image matting is probably the oldest visual effects problem in filmmaking, and the search for a reliable automatic matting system has been ongoing since the early 1900s [393]. In fact, the main goal of Lucasfilm's original Computer Division (part of which later spun off to become Pixar) was to create a general-purpose image processing computer that natively understood mattes and facilitated complex compositing [375]. A major research milestone was a family of effective techniques for matting against a blue background developed in the Hollywood effects industry throughout the 1960s and 1970s. Such techniques have matured to the point that blue- and green-screen matting is involved in almost every mass-market TV show or movie, even hospital shows and period dramas.
On the other hand, putting an actor in front of a green screen to achieve an effect isn't always practical or compelling, and situations abound in which the foreground must be separated from the background in a natural image. For example, movie credits are often inserted into real scenes so that actors and foreground objects seem to pass in front of them, a combination of image matting, compositing, and matchmoving. The computer vision and computer graphics communities have only recently proposed methods for semi-automatic matting with complex foregrounds and real-world backgrounds.
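The model underlying all of these matting techniques is the compositing equation I = αF + (1 − α)B, applied per pixel and per channel, where α is the matte. A minimal Python sketch, using plain lists of RGB triples purely for illustration (production code operates on image arrays):

```python
def composite(foreground, background, alpha):
    """Sketch of the matting/compositing equation I = alpha*F + (1-alpha)*B,
    applied per pixel and per colour channel. alpha[i] is the matte value
    in [0, 1] for pixel i; foreground/background are RGB triples."""
    return [[a * f + (1.0 - a) * b for f, b in zip(fp, bp)]
            for fp, bp, a in zip(foreground, background, alpha)]
```

Pulling a matte is the inverse, and much harder, problem: given only I, estimate α, F and B, which is underconstrained (seven unknowns per pixel from three observations) unless the background is controlled, as with a green screen.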
5 - Dense Correspondence and Its Applications
- Richard J. Radke, Rensselaer Polytechnic Institute, New York
-
- Book:
- Computer Vision for Visual Effects
- Published online:
- 05 December 2012
- Print publication:
- 19 November 2012, pp 148-206
-
Summary
In the last chapter we focused on detecting and matching distinctive features. Typically, features are sparsely distributed – that is, not every pixel location has a feature centered at it. However, for several visual effects applications, we require a dense correspondence between pixels in two images, even in relatively flat or featureless areas. One of the most common applications of dense correspondence in filmmaking is for slowing down or speeding up a shot after it's been filmed for dramatic effect. To create the appropriate intermediate frames, we need to estimate the trajectory of every pixel in the video sequence over the course of a shot, not just a few pixels near features.
More mathematically, we want to compute a vector field (u(x,y), v(x,y)) over the pixels of the first image I1, so that the vector at each pixel (x,y) points to a corresponding location in the second image I2. That is, the pixels I1(x,y) and I2(x + u(x,y), y + v(x,y)) correspond. We usually abbreviate the vector field as (u,v) with the understanding that both elements are functions of x and y.
Defining what constitutes a correspondence in this context can be tricky. As in feature matching, our intuition is that a correspondence implies that both pixels arise from the same point on the surface of some object in the physical world. The vector (u,v) is induced by the motion of the camera and/or the object in the interval between taking the two pictures.
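The correspondence definition above can be turned into the residual that dense-matching methods minimise: compare I1(x, y) against I2 sampled at (x + u, y + v). A deliberately simplified Python sketch using nearest-neighbour sampling and border clamping (real methods interpolate sub-pixel positions and add smoothness terms):

```python
def warp_error(I1, I2, u, v):
    """Sketch of the dense-correspondence (brightness-constancy) residual:
    for each pixel (x, y) of I1, compare I1(x, y) with
    I2(x + u(x,y), y + v(x,y)), rounding to the nearest pixel and
    clamping at the image border. Images and flow are 2D lists."""
    h, w = len(I1), len(I1[0])
    total = 0.0
    for y in range(h):
        for x in range(w):
            x2 = min(max(int(round(x + u[y][x])), 0), w - 1)
            y2 = min(max(int(round(y + v[y][x])), 0), h - 1)
            total += (I1[y][x] - I2[y2][x2]) ** 2   # squared residual
    return total
```

A flow field that correctly tracks the scene drives this residual toward zero, which is exactly the data term optical flow algorithms balance against smoothness of (u, v).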
B - Figure Acknowledgments
- Richard J. Radke, Rensselaer Polytechnic Institute, New York
-
- Book:
- Computer Vision for Visual Effects
- Published online:
- 05 December 2012
- Print publication:
- 19 November 2012, pp 364-366
-
Contents
- Richard J. Radke, Rensselaer Polytechnic Institute, New York
-
- Book:
- Computer Vision for Visual Effects
- Published online:
- 05 December 2012
- Print publication:
- 19 November 2012, pp vii-x
-
Computer Vision for Visual Effects
- Richard J. Radke
-
- Published online:
- 05 December 2012
- Print publication:
- 19 November 2012
-
Modern blockbuster movies seamlessly introduce impossible characters and action into real-world settings using digital visual effects. These effects are made possible by research from the field of computer vision, the study of how to automatically understand images. Computer Vision for Visual Effects will educate students, engineers and researchers about the fundamental computer vision principles and state-of-the-art algorithms used to create cutting-edge visual effects for movies and television. The author describes classical computer vision algorithms used on a regular basis in Hollywood (such as blue screen matting, structure from motion, optical flow and feature tracking) and exciting recent developments that form the basis for future effects (such as natural image matting, multi-image compositing, image retargeting and view synthesis). He also discusses the technologies behind motion capture and three-dimensional data acquisition. More than 200 original images demonstrating principles, algorithms and results, along with in-depth interviews with Hollywood visual effects artists, tie the mathematical concepts to real-world filmmaking.
3 - Image Compositing and Editing
- Richard J. Radke, Rensselaer Polytechnic Institute, New York
-
- Book:
- Computer Vision for Visual Effects
- Published online:
- 05 December 2012
- Print publication:
- 19 November 2012, pp 55-106
-
Summary
In this chapter, we discuss image compositing and editing, the manipulation of a single image or the combination of elements from multiple sources to make a convincing final image. Like image matting, image compositing and editing are pervasive in modern TV and filmmaking. Virtually every frame of a blockbuster movie is a combination of multiple elements. We can think of compositing as the inverse of matting: putting images together instead of pulling them apart. Consequently, the problems we consider are generally easier to solve and require less human intervention.
In the simplest case, we may just want to place a foreground object extracted by matting onto a different background image. As we saw in Chapter 2, obtaining high-quality mattes is possible using a variety of algorithms, and new images made using the compositing equation (2.3) generally look very good. On the other hand, a fair amount of user interaction is often required to obtain these mattes – for example, heuristically combining different color channels, painting an intricate trimap, or scribbling and rescribbling to refine a matte. The algorithms in the first half of this chapter take a different approach: the user roughly outlines an object in a source image to be removed and recomposited into a target image, and the algorithm automatically estimates a good blend between the object and its new background without explicitly requiring a matte. These “drag-and-drop”-style algorithms could potentially save a lot of manual effort.
7 - Motion Capture
- Richard J. Radke, Rensselaer Polytechnic Institute, New York
-
- Book:
- Computer Vision for Visual Effects
- Published online:
- 05 December 2012
- Print publication:
- 19 November 2012, pp 255-299
-
Summary
Motion capture (often abbreviated as mocap) is probably the application of computer vision to visual effects most familiar to the average filmgoer. As illustrated in Figure 7.1, motion capture uses several synchronized cameras to track the motion of special markers carefully placed on the body of a performer. The images of each marker are triangulated and processed to obtain a time series of 3D positions. These positions are used to infer the time-varying positions and angles of the joints of an underlying skeleton, which can ultimately help animate a digital character that has the same mannerisms as the performer. While the Gollum character from the Lord of the Rings trilogy launched motion capture into the public consciousness, the technology already had many years of use in the visual effects industry (e.g., to animate synthetic passengers in wide shots for Titanic). Today, motion capture is almost taken for granted as a tool to help map an actor's performance onto a digital character, and has achieved great success in recent films like Avatar.
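The triangulation step mentioned above can be illustrated with the midpoint method: each camera contributes a ray through the imaged marker, and the 3D estimate is the midpoint of the shortest segment joining the two rays. This Python sketch assumes the camera centres and ray directions are already known (a real pipeline derives them from calibrated projection matrices and typically uses more than two views):

```python
def triangulate_midpoint(c1, d1, c2, d2):
    """Sketch of two-view marker triangulation by the midpoint method:
    rays c1 + t*d1 and c2 + s*d2 generally miss each other, so solve the
    2x2 least-squares system for the closest points and average them."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    r = [b - a for a, b in zip(c1, c2)]          # vector c2 - c1
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    denom = a * c - b * b                        # zero when rays are parallel
    t = (dot(r, d1) * c - dot(r, d2) * b) / denom
    s = (dot(r, d1) * b - dot(r, d2) * a) / denom
    p = [x + t * dx for x, dx in zip(c1, d1)]    # closest point on ray 1
    q = [x + s * dx for x, dx in zip(c2, d2)]    # closest point on ray 2
    return [(u + v) / 2 for u, v in zip(p, q)]
```

The residual distance between p and q is also a useful quality measure for rejecting marker mislabellings.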
In addition to creating computer-generated characters for feature films, motion capture is pervasive in the video game industry, especially for sports and action games. The distinctive mannerisms of golf and football players, martial artists, and soldiers are recorded by video game developers and strung together in real time by game engines to create dynamic, reactive character animations. In non-entertainment contexts, motion capture is used in orthopedics applications to analyze a patient's joint motion over the course of treatment, and in sports medicine applications to improve an athlete's performance.