This chapter discusses issues in intersection testing and describes algorithms for intersecting rays with geometric objects. Intersection testing is one of the fundamental algorithms used in ray tracing, which usually requires intersecting a large number of rays against a large set of geometric objects. For this reason, the bulk of the computational time required for ray tracing is often spent on intersection testing. Intersection testing has other applications besides ray tracing, for example, in motion planning and collision detection.
Intersection testing is a large and diverse field. In this chapter, we discuss only some of the first topics, namely, intersecting rays against simple geometric objects. We only briefly discuss methods such as bounding boxes and bounding spheres and global pruning methods such as octrees and binary space partitioning. More information on intersection testing can be found in the book of Möller and Haines (1999), which has two chapters devoted to this topic, including numerous references to the literature. The Graphics Gems volumes also contain many articles on intersection testing. For additional reading suggestions, see the end of this chapter.
Two important goals need to be kept in mind when designing software for intersection testing. The first design goal concerns accuracy and robustness. Clearly, it is important that the algorithms be accurate; even more important, the algorithms need to be free of occasional errors that may create visible artifacts, such as holes, or cause an occasional inconsistency in intersection status.
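To make these robustness concerns concrete, consider the ray-sphere test, the simplest intersection problem treated in this chapter. The following sketch is illustrative code, not the package's implementation; it uses the half-coefficient form of the quadratic and explicitly handles a ray origin inside the sphere, one of the cases where a careless implementation reports an inconsistent intersection status.

```cpp
#include <cmath>
#include <optional>

struct Vec3 {
    double x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    double dot(const Vec3& o) const { return x * o.x + y * o.y + z * o.z; }
};

// Smallest nonnegative t at which origin + t*dir (dir of unit length)
// hits the sphere, or nothing if the ray misses it.
std::optional<double> raySphere(const Vec3& origin, const Vec3& dir,
                                const Vec3& center, double radius) {
    Vec3 oc = origin - center;
    double b = oc.dot(dir);                  // half the usual b coefficient
    double c = oc.dot(oc) - radius * radius;
    double disc = b * b - c;                 // one quarter of the discriminant
    if (disc < 0.0) return std::nullopt;     // ray misses the sphere
    double s = std::sqrt(disc);
    double t = -b - s;                       // try the nearer root first
    if (t < 0.0) t = -b + s;                 // origin inside the sphere
    if (t < 0.0) return std::nullopt;        // sphere entirely behind the ray
    return t;
}
```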
I have written a ray tracing package that implements basic recursive ray tracing. This software and its source code are freely available and can be downloaded from this book's Web site.
The ray tracing software uses an object-oriented approach to rendering and ray tracing. The object-oriented design includes base classes for materials, for geometric shapes, for lights, and for textures. This provides considerable flexibility in adding features since it allows the addition of new geometric shapes without affecting the functionality of older code; similarly, new kinds of lighting models, new kinds of textures, and so on, can be added without needing to change the structure of the software.
The material and lighting classes supported by the software include the usual material properties such as ambient, diffuse, and specular color and specular exponents. In addition, the material classes include reflection and transmission color coefficients for use in recursive ray tracing. The complete Phong model for local lighting is supported, including all the OpenGL-type features such as spotlights and attenuation. A version of the Cook–Torrance model is also supported.
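In one common formulation of the Phong model (the notation here is generic; the text develops its own), the local illumination at a point is
$$ I \;=\; \rho_a I_a \;+\; \sum_{i}\big(\rho_d\,(\ell_i \cdot n)\,I_{i,d} \;+\; \rho_s\,(r_i \cdot v)^f\,I_{i,s}\big), $$
where the sum ranges over the lights; $\rho_a$, $\rho_d$, and $\rho_s$ are the material's ambient, diffuse, and specular reflectivity coefficients; $f$ is the specular exponent; $n$ is the unit surface normal; $\ell_i$ and $r_i$ are the unit vectors toward light $i$ and its mirror reflection; and $v$ points toward the viewer. The dot products are clamped to zero when negative.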
The ray tracing software supports a range of geometric shapes, including spheres, triangles, parallelograms, cones, cylinders, tori, ellipsoids, parallelepipeds, and Bézier patches. Collectively, these geometric shapes are called viewable objects. The viewable object classes are responsible for detecting intersections of rays against a particular geometric shape. The viewable object classes also calculate normals and keep track of the material of an object.
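A hypothetical C++ interface in the spirit of this design is sketched below; the class and method names in the distributed source may differ.

```cpp
struct Vec3;     // 3-vector type, as in the earlier ray-sphere sketch
class Material;  // material description (colors, coefficients, etc.)

class ViewableObject {
public:
    virtual ~ViewableObject() = default;

    // Report the nearest intersection of the ray origin + t*dir with this
    // shape for t in (0, maxT); fill in hitT and the surface normal there.
    virtual bool findIntersection(const Vec3& origin, const Vec3& dir,
                                  double maxT, double& hitT,
                                  Vec3& hitNormal) const = 0;

    // Each viewable object keeps track of its material.
    virtual const Material& material() const = 0;
};
```

New shapes are added by deriving from the base class, so the rest of the ray tracer never needs to know which concrete shapes exist.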
This chapter discusses the mathematics of linear, affine, and perspective transformations and their uses in OpenGL. The basic purpose of these transformations is to provide methods of changing the shape and position of objects, but the use of these transformations is pervasive throughout computer graphics. In fact, affine transformations are arguably the most fundamental mathematical tool for computer graphics.
An obvious use of transformations is to help simplify the task of geometric modeling. For example, suppose an artist is designing a computerized geometric model of a Ferris wheel. A Ferris wheel has considerable symmetry and includes many repeated elements such as multiple cars and struts. The artist could design a single model of the car and then place multiple instances of the car around the Ferris wheel attached at the proper points. Similarly, the artist could build the main structure of the Ferris wheel by designing one radial “slice” of the wheel and using multiple rotated copies of this slice to form the entire structure. Affine transformations are used to describe how the parts are placed and oriented.
A second important use of transformations is to describe animation. Continuing with the Ferris wheel example, if the Ferris wheel is animated, then the positions and orientations of its individual geometric components are constantly changing. Thus, for animation, it is necessary to compute time-varying affine transformations to simulate the motion of the Ferris wheel.
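As a minimal illustration of such a time-varying transformation (the names and values here are invented for the example), the position of each car can be computed by rotating its attachment point about the hub and then translating to the hub's location:

```cpp
#include <cmath>

struct Point2 { double x, y; };

const double PI = 3.14159265358979323846;

// Position of car i of numCars at time t: the wheel turns at radPerSec
// about its hub, and each car is attached at distance radius from the hub.
Point2 carPosition(int i, int numCars, double t,
                   Point2 hub, double radius, double radPerSec) {
    double theta = radPerSec * t + 2.0 * PI * i / numCars;  // rotate ...
    return { hub.x + radius * std::cos(theta),              // ... then translate
             hub.y + radius * std::sin(theta) };
}
```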
This chapter takes up the subject of interpolation. For the purposes of the present chapter, the term “interpolation” means the process of finding intermediate values of a function by averaging its values at extreme points. Interpolation was already studied in Section II.4, where it was used for Gouraud and Phong interpolation to average colors or normals to create smooth lighting and shading effects. In Chapter V, interpolation is used to apply texture maps. More sophisticated kinds of interpolation will be important in the study of Bézier curves and B-splines in Chapters VII and VIII. Interpolation is also very important for animation, where both positions and orientations of objects may need to be interpolated.
The first three sections below address the simplest forms of interpolation, namely, linear interpolation on lines and triangles. This includes studying weighted averages, affine combinations, extrapolation, and barycentric coordinates. Then we turn to the topics of bilinear and trilinear interpolation with an emphasis on bilinear interpolation, including an algorithm for inverting bilinear interpolation. The next section has a short, abstract discussion on convex sets, convex hulls, and the definition of convex hulls in terms of weighted averages. After that, we take up the topic of weighted averages performed on points represented in homogeneous coordinates. It is shown that the effect of the homogeneous coordinate is similar to an extra weighting coefficient, and as a corollary, we derive the formulas for hyperbolic interpolation that are important for accurate interpolation in screen-space coordinates.
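For a first taste of these constructions, bilinear interpolation on the unit square reduces to three linear interpolations, two in one direction and one in the other (a minimal sketch, with values given at the four corners):

```cpp
// Linear interpolation between a and b with weight t in [0,1].
double lerp(double a, double b, double t) { return a + t * (b - a); }

// Bilinear interpolation: f00, f10, f01, f11 are the values at the corners
// (0,0), (1,0), (0,1), (1,1) of the unit square; (u,v) lies in [0,1]^2.
double bilerp(double f00, double f10, double f01, double f11,
              double u, double v) {
    return lerp(lerp(f00, f10, u),   // interpolate along the bottom edge
                lerp(f01, f11, u),   // interpolate along the top edge
                v);                  // then interpolate vertically
}
```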
Texture mapping, in its simplest form, consists of applying a graphics image, a picture, or a pattern to a surface. A texture map can, for example, apply an actual picture to a surface such as a label on a can or a picture on a billboard or can apply semirepetitive patterns such as wood grain or stone surfaces. More generally, a texture map can hold any kind of information that affects the appearance of a surface: the texture map serves as a precomputed table, and the texture mapping then consists simply of table lookup to retrieve the information affecting a particular point on the surface as it is rendered. If you do not use texture maps, your surfaces will either be rendered as very smooth, uniform surfaces or will need to be rendered with very small polygons so that you can explicitly specify surface properties on a fine scale.
Texture maps are often used to very good effect in real-time rendering settings such as computer games since they give good results with a minimum of computational load. In addition, texture maps are widely supported by graphics hardware such as graphics boards for PCs so that they can be used without needing much computation from a central processor.
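The table-lookup view of texture mapping can be made concrete with a small sketch (illustrative types and names, using the crudest nearest-texel lookup; real systems filter the result, for instance bilinearly):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Color { float r, g, b; };

// A texture map as a precomputed table of colors, indexed by texture
// coordinates (s, t) in [0,1]^2.  Assumes a non-empty texture.
struct TextureMap {
    std::size_t width = 0, height = 0;
    std::vector<Color> texels;           // row-major, width*height entries

    Color lookup(double s, double t) const {
        s = std::clamp(s, 0.0, 1.0);
        t = std::clamp(t, 0.0, 1.0);
        std::size_t x = std::min<std::size_t>(width - 1,  s * width);
        std::size_t y = std::min<std::size_t>(height - 1, t * height);
        return texels[y * width + x];    // nearest texel, no filtering
    }
};
```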
Texture maps can be applied at essentially three different points in the graphics rendering process, which we list more or less in order of increasing generality and flexibility:
A texture map can hold colors that are applied to a surface in “replace” or “decal” mode: the texture map colors just overwrite whatever surface colors are otherwise present. In this case, no lighting calculations should be performed, as the results of the lighting calculations would just be overwritten.
A spline curve is a smooth curve specified succinctly in terms of a few points. These two aspects of splines, that they are smooth and that they are specified succinctly in terms of only a few points, are both important. First, the ability to specify a curve with only a few points reduces storage requirements. In addition, it facilitates the computer-aided design of curves and surfaces because the designer or artist can control an entire curve by varying only a few points. Second, the commonly used methods for generating splines give curves with good smoothness properties and without undesired oscillations. Furthermore, these splines also allow for isolated points where the curve is not smooth, such as points where the spline has a “corner.” A third important property of splines is that there are simple algorithms for finding points on the spline curve or surface and simple criteria for deciding how finely a spline must be approximated by linear segments to obtain a sufficiently faithful representation of the spline. The main classes of splines discussed in this book are the Bézier curves and the B-spline curves. Bézier curves and patches are covered in this chapter, and B-splines in the next chapter.
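One example of such a simple algorithm is de Casteljau's method, which evaluates a point on a degree-three Bézier curve by repeated linear interpolation of its four control points (a minimal sketch, not the book's code):

```cpp
struct Pt { double x, y; };

// Linear interpolation of two points with weight t in [0,1].
Pt lerpPt(Pt a, Pt b, double t) {
    return { a.x + t * (b.x - a.x), a.y + t * (b.y - a.y) };
}

// Point at parameter t on the cubic Bezier curve with control points
// p0..p3, computed by three rounds of linear interpolation.
Pt bezierCubic(Pt p0, Pt p1, Pt p2, Pt p3, double t) {
    Pt a = lerpPt(p0, p1, t), b = lerpPt(p1, p2, t), c = lerpPt(p2, p3, t);
    Pt d = lerpPt(a, b, t),   e = lerpPt(b, c, t);
    return lerpPt(d, e, t);
}
```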
Historically, splines were specified mechanically by systems such as flexible strips of wood or metal that were tied into position to record a desired curve. These mechanical systems were awkward and difficult to work with, and they could not be used to give a permanent, reproducible description of a curve.
Radiosity is a global lighting method that tracks the spread of diffuse light around a scene. As a global lighting method, it attempts to simulate the effect of multiple light reflection. Unlike basic ray tracing, which tracks only the specular transport of light, radiosity tracks only the diffuse transport of light.
The goal of a radiosity algorithm is to calculate the illumination levels and brightness of every surface in a scene. As an example, consider a scene of a classroom with fluorescent light fixtures, painted walls, a nonshiny tile floor, and desks and other furniture. We assume that there are no shiny surfaces, and thus no significant amount of specular light reflection. All the light in the room emanates originally from the ceiling lights; it then reflects diffusely from objects in the room, especially from the walls and floor, providing indirect illumination of the entire room. For instance, portions of the floor underneath the desk may have no direct illumination from any of the lights; however, these parts of the floor are only partly shadowed. Likewise, the ceiling of the room receives little direct illumination from the overhead lights but still is not dark. As a more extreme case, the bottom sides of the desk tops are partly shadowed but are certainly not completely dark: they are illuminated by light reflecting off the floor.
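In the standard formulation, the scene is divided into patches, and the radiosity $B_i$ of patch $i$ satisfies
$$ B_i \;=\; E_i \;+\; \rho_i \sum_j F_{ij} B_j, $$
where $E_i$ is the light emitted by the patch, $\rho_i$ is its diffuse reflectivity, and the form factor $F_{ij}$ measures the fraction of light leaving patch $i$ that reaches patch $j$. Solving this system of linear equations for the $B_i$ yields the brightness of every surface, including the indirectly lit floor, ceiling, and desk bottoms of the example.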
This paper presents a novel Cobotic system with a differential CVT (continuously variable transmission). The new system is significantly cheaper, simpler to control, and more efficient than Cobots with S-CVTs. Both path-guidance and power-assist functions can be realized simply with the new system. Basic structures, kinematic and dynamic models, and control algorithms, which are essential for the design, control synthesis, and control of the system, are briefly presented in the paper.
The contributions to this issue aim to provide the robotics and, more generally, the automatic control community with results of research and applications focused on the cost-effectiveness of automation systems.
Low Cost Automation, or Cost Effective Automation, promotes cost-oriented reference architectures and development approaches that properly integrate human skill and technical solutions. It includes decentralized process control strategies and addresses automation integrated with information processing, as well as automation of non-sophisticated, easily handled operations for production maintenance.
Low Cost Automation is not an oxymoron like “military intelligence” or “jumbo shrimp.” It opposes the rising cost of sophisticated automation and promotes the use of innovative and intelligent solutions at an affordable cost. The concept can be regarded as a collection of methodologies that exploit tolerance of imprecision or uncertainty to achieve tractability, robustness, and, in the end, low-cost solutions. Mathematically elegant designs of automation systems are often not feasible because they neglect real-world problems; that is, they are failure-prone and therefore often very expensive for their users.
Low Cost Automation does not mean basic or poor-performance control. The design of automation systems considers their life cycle with respect to their costs. For example, machine vision, despite its in some cases costly components, can reduce overall cost when properly applied. It is used to guide field robots, to identify and assemble parts, and to sort agricultural products.
Execution monitoring is a proven tool for securing program execution and enforcing safety properties, on applets and mobile code in particular. Inlining monitoring tools perform their task by inserting certain run-time checks into the monitored application before executing it. For efficiency reasons, they attempt to insert as few checks as possible, using techniques ranging from simple ad hoc optimizations to theorem proving. Partial evaluation is a powerful tool for specifying and implementing program transformations. The present work demonstrates that standard partial evaluation techniques are sufficient to transform an interpreter equipped with monitoring code into a non-standard compiler. This compiler generates application code that contains the inlined monitoring code. If the monitor enforces a security policy, then the result is secured application code. If the policy is defined using a security automaton, then the transformation can elide many run-time checks by using abstract interpretation. Our approach relies on proper staging of the monitoring interpreter. The transformation runs in linear time, produces code linear in the size of the original program, and is guaranteed not to duplicate incoming code.
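As a small illustration of the kind of check such a tool inlines (this example is ours, not from the paper; the policy, states, and names are invented), a security automaton can be compiled to a transition function plus a guard that halts the program in the dead state:

```cpp
#include <cstdio>
#include <cstdlib>

// A two-state security automaton for the policy "no network send after
// reading a file".  States, events, and the policy are illustrative only.
enum class State { Start, FileRead, Dead };
enum class Event { ReadFile, SendNet };

State advance(State s, Event e) {
    if (s == State::Start    && e == Event::ReadFile) return State::FileRead;
    if (s == State::FileRead && e == Event::SendNet)  return State::Dead;
    return s;                    // other events leave the state unchanged
}

State monitor = State::Start;

void check(Event e) {            // the run-time check inserted by the inliner
    monitor = advance(monitor, e);
    if (monitor == State::Dead) {
        std::fprintf(stderr, "security policy violation\n");
        std::abort();            // enforce the policy by halting
    }
}
```

A call such as check(Event::SendNet) would be inserted immediately before each network operation; the point of the optimizations mentioned above is to elide those calls whose outcome abstract interpretation can predict.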
Cyclone is a type-safe programming language that provides explicit run-time code generation. The Cyclone compiler uses a template-based strategy for run-time code generation in which pre-compiled code fragments are stitched together at run time. This strategy keeps the cost of code generation low, but it requires that optimizations, such as register allocation and code motion, are applied to templates at compile time. This paper describes a principled approach to implementing such optimizations. In particular, we generalize standard flow-graph intermediate representations to support templates, define a mapping from (a subset of) Cyclone to this representation, and describe a dataflow-analysis framework that supports standard optimizations across template boundaries.
This work deals with the implementation of real-time robot control. An algorithm for solving the inverse dynamic problem, based on the Gibbs-Appell equations, is proposed and verified. It is developed mainly using vectorial variables, the equations are expressed in recursive form, and it has a computational complexity of O(n). This algorithm is compared with one based on the Newton-Euler equations of motion, formulated in a similar way and likewise using mainly vectors in its recursive formulation. The algorithm was implemented on an industrial PUMA robot, for which a new, open, PC-based control architecture has been implemented. The architecture used has two main advantages: first, it provides a totally open control architecture, and second, it is inexpensive. Because the controller is PC-based, any control technique can be programmed and implemented, and in this way the PUMA can work on high-level tasks such as automatic trajectory generation, task planning, and control by artificial vision.
This paper considers human-centered and socially appropriate robots, as well as automation systems, within the context of their cost-effectiveness. The usual objection of system designers is that approaches to human-centered and socio-technical design result in systems that are more expensive than those built by traditional methods and are therefore not truly affordable, in particular for small and medium-sized enterprises (SMEs). This widespread opinion is challenged in the paper by arguments supporting the forecast that human-centered and socio-technical design will soon prove justifiable in terms of tangible (economic) as well as intangible benefits for all partners involved, including society at large.
Representing defeasibility is an important issue in common sense reasoning. In reasoning about action and change, this issue becomes more difficult because domain- and action-related defeasible information may conflict with general inertia rules. Furthermore, different types of defeasible information may also interfere with each other during the reasoning. In this paper, we develop a prioritized logic programming approach to handling defeasibilities in reasoning about action. In particular, we propose three action languages ${\cal AT}^{0}$, ${\cal AT}^{1}$, and ${\cal AT}^{2}$, which handle three types of defeasibilities in action domains, namely defeasible constraints, defeasible observations, and actions with defeasible and abnormal effects, respectively. Each language with a higher superscript can be viewed as an extension of the language with a lower superscript. These action languages inherit the simple syntax of the ${\cal A}$ language, but their semantics is developed in terms of transition systems whose transition functions are defined via prioritized logic programs. Through various examples, we show that our approach provides a powerful mechanism for handling various defeasibilities in temporal prediction and postdiction. We also investigate the semantic properties of these three action languages and characterize classes of action domains that admit more desirable solutions in reasoning about action within the underlying action languages.