While 802.11n was a revolutionary enhancement over 802.11a/g, and necessitated an entire book for proper presentation of the new technology, 802.11ac is more of an evolutionary improvement over 802.11n, providing wider bandwidth channels and multi-user MIMO. As such, we felt that the new 802.11ac features could be covered by a few extra chapters in an update of our original 802.11n book, now an 802.11n/ac book.
The new single-user Very High Throughput physical layer packet structure is described in a new Chapter 7, including a description of the 80 MHz and 160 MHz waveforms. The new downlink multi-user MIMO mechanism in 802.11ac is presented in a new Chapter 14. Enhancements to channel access for 802.11ac have been added to Chapter 11, along with several new basic service set and clear channel assessment rules to manage 80 MHz and 160 MHz operation and coexistence. Modifications to the Doppler component of the 802.11n channel model are given in Chapters 3 and 13. Furthermore, we discuss the simplification of single-user transmit beamforming in 802.11ac in Chapter 13.
The first version of the 802.11 standard was ratified in 1997 after seven long years of development. However, initial adoption of this new technology was slow, partly because of the low penetration of devices that needed the “freedom of wireless.”
The real opportunity for 802.11 came with the increased popularity of laptop computers just a few years later. This popularity brought a rapidly growing user base wanting network connectivity not only while connected to an Ethernet cable at home or at work, but also in between: in hotels, airports, conference centers, restaurants, parks, etc. 802.11 provided a cheap and easy way to make laptop mobility a reality for anyone who wanted it.
However, technology by itself is rarely sufficient, particularly in the networking space, where interoperability of devices from multiple vendors is almost always the key to market success. Having been formed as WECA in 1999, the Wi-Fi Alliance was ready to provide certification of multi-vendor interoperability.
With the right technology from the IEEE 802.11 Working Group, certified interoperability from the Wi-Fi Alliance, and a real market need based on a growing installed base of laptops, the conditions were ripe for the Wi-Fi market to take off, and indeed it did. By 2007, virtually every new laptop contained Wi-Fi as standard equipment. More importantly, and unlike some other “successful” wireless technologies, many of these devices are used regularly. With this wide use came a growing understanding of the power of cheap, easy-to-deploy, and easy-to-manage interoperable Wi-Fi networks.
This paper presents a dynamic-level control algorithm for redundant robotic systems that simultaneously meets multiple desired tasks based on allocated priorities. It is shown that this algorithm can be treated as a general framework for achieving control over the whole body of the robot. The control law is an extension of the well-known acceleration-based control to redundant robots, and it also considers possible interactions with the environment occurring at any point of the robot body. The stability of this algorithm is shown, and some previously developed results are formulated using this approach. To handle interaction on the robot body, null-space impedance control is developed within the multi-priority framework. The effectiveness of the proposed approaches is evaluated by means of computer simulation.
This work presents the experimental validation and updating of a flexible multibody model conceived to take into account the installation conditions of industrial serial planar manipulators without resorting to cumbersome modeling. The flexibility of the frame to which the manipulator is fixed is modeled in addition to the flexibility of the joints, which is introduced as lumped stiffness. In particular, the flexible frame is included in the model by using the Component Mode Synthesis methodology, in which only the natural modes of vibration and the static constraint modes are accounted for. The flexible multibody model has been developed because these commercial machines are mainly used to perform low-speed tasks and are designed by taking into account their flexibility at most in the joints. Unfortunately, there are particular installation conditions in which even low-speed tasks can generate low-frequency vibrations that strongly interfere with the task. This aspect is considered here, and how to manage the problem is explained by using the developed multibody model. The model is validated through experimental measurements. The experimental tests consist of several modal analyses, together with acceleration and laser Doppler measurements in operational conditions. This methodology takes into account the installation conditions through the model of the flexible frame, and provides a tool for studying ad hoc solutions that prevent the occurrence of unwanted low-frequency vibrations.
In this paper we consider the problem of planning paths for articulated bodies operating in workspaces containing obstacles and regions with preferences expressed as degrees of desirability. Degrees of desirability can specify danger zones and desire zones. A planned path should not collide with the obstacles and should maximize the degrees of desirability. Region desirability can also convey search-control strategies guiding the exploration of the search space. To handle desirability specifications, we introduce the notion of the flexible probabilistic roadmap (flexible PRM) as an extension of the traditional PRM. Each edge in a flexible PRM is assigned a desirability degree. We show that flexible PRM planning can be achieved very efficiently with a simple sampling strategy for the configuration space, defined as a trade-off between traditional sampling oriented toward coverage of the configuration space and a heuristic optimization of the path desirability degree. For path planning problems in dynamic environments, where obstacles and region desirability can change in real time, we use dynamic and anytime search exploration strategies. The dynamic strategy allows the planner to replan efficiently by exploiting results from previous planning phases. The anytime strategy starts with a quickly computed path with a potentially low desirability degree, which is then incrementally improved depending on the available planning time.
For building large systems, it is essential to build them from components and use module abstraction, that is, abstraction of both data and control in modules or components, to separate what functionalities are provided by a component from how the functionalities are implemented inside the component. Module abstraction is best supported as abstract data types. An abstract data type is an interface for a certain set of operations on a certain kind of data, that is, the “what”, shielding users from having to know how the data are represented and how the operations are implemented, that is, the “how”. It is a fundamental concept in modern high-level programming languages, particularly object-oriented languages.
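As a minimal sketch of this separation (the Bag type and its names below are our own illustration, not taken from the text), a Python class can expose what operations a multiset supports while hiding how its elements are represented:

```python
# Illustrative sketch: an abstract data type exposes *what* operations are
# available while hiding *how* the data is represented.

class Bag:
    """A multiset of items: users see add/count, not the representation."""

    def __init__(self):
        self._counts = {}          # hidden representation: item -> multiplicity

    def add(self, item):
        """Add one occurrence of item to the bag."""
        self._counts[item] = self._counts.get(item, 0) + 1

    def count(self, item):
        """Return how many times item occurs in the bag."""
        return self._counts.get(item, 0)

# Client code depends only on the interface; the representation could be
# changed (say, to a sorted list) without touching this code.
bag = Bag()
bag.add("apple")
bag.add("apple")
assert bag.count("apple") == 2
```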
Unfortunately, clear and modular implementations of the modules or components result in poor performance when nontrivial query operations are frequently performed and values of query parameters are gradually updated. At the same time, efficient implementations that incrementally maintain the query results with respect to updates to parameter values are much more difficult to develop and to understand, because the code grows significantly and is no longer clear or modular. Because the definitions and uses of queries and updates can cross multiple components, transforming clear implementations of the queries into efficient incremental implementations requires incrementalization across module abstraction.
We have shown a systematic method that succeeds in designing efficient implementations for many problems in many application domains starting with clear specifications of these problems using high-level language constructs. The method is systematic by being based on the language constructs used in the specifications and being guided by the cost considerations taken from the application domains.
The method, even though consisting of Steps Iterate, Incrementalize, and Implement in order, is driven by the middle step—Step Incrementalize. Because efficient computations on nontrivial input must proceed repeatedly on incremented input, Step Incrementalize aims to make the computation on each incremented input efficient by storing and reusing values computed on the previous input. Steps Iterate and Implement are enabling mechanisms: to maximize reuse by Step Incrementalize, Step Iterate determines a minimum input increment to take repeatedly; to support efficient access of the stored values by Step Incrementalize, Step Implement designs a combination of linked and indexed data structures to hold the values.
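As a rough sketch of how the three steps fit together (the problem and names below are ours, chosen only for illustration), consider maintaining the sum of a collection as elements arrive one at a time:

```python
# Sketch: maintain sum(s) under the increment "add one element x".
# Step Iterate:        process one element at a time (the minimum increment).
# Step Incrementalize: update the stored sum instead of recomputing it.
# Step Implement:      a single variable suffices to hold the stored value.

def sum_clear(s):
    """Clear specification: recompute from scratch, O(|s|) per query."""
    return sum(s)

class SumIncremental:
    """Efficient implementation: O(1) update per added element."""
    def __init__(self):
        self.total = 0             # stored result of the query
    def add(self, x):              # the input increment
        self.total += x            # incremental maintenance

acc = SumIncremental()
for x in [3, 1, 4, 1, 5]:
    acc.add(x)
assert acc.total == sum_clear([3, 1, 4, 1, 5])
```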
We first take a deeper look at incrementalization, showing how to systematically exploit the previous result, intermediate results, and auxiliary values for incremental computation. This will use simple examples specified using recursive functions, followed by several different sorting programs as additional examples. We then discuss abstractions in general, focusing on the importance of and principles for not only building up, but also breaking through, abstractions. The latter may sound surprising at first, but it is natural when query functions are incrementalized with respect to updates.
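For a first taste of exploiting the previous result (a toy stand-in example, not necessarily one used in the chapter), consider a recursively defined function evaluated on inputs that grow by one:

```python
# Toy sketch: factorial specified recursively, then incrementalized with
# respect to the increment n -> n + 1 by reusing the previous result.

def fact(n):
    """Clear recursive specification."""
    return 1 if n == 0 else n * fact(n - 1)

def fact_stream(limit):
    """Incremental version: each value is computed from the previous one."""
    prev = 1                       # fact(0), the stored previous result
    yield prev
    for n in range(1, limit + 1):
        prev = n * prev            # reuse instead of recomputing from scratch
        yield prev

assert list(fact_stream(5)) == [fact(n) for n in range(6)]
```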
Many complex computational problems are most clearly and easily specified using logic rules. Logic rules state that if certain hypotheses hold then certain conclusions hold. These rules can be used to infer new facts from given facts. Example applications include queries in databases, analysis of computer programs, and reasoning about security policies. Datalog, which stands for Database logic, is an important rule-based language for specifying how new facts can be inferred from existing facts. Its fixed-point semantics allows the computation of the set of all facts that can be inferred from a given set of facts. It is sufficiently powerful for expressing many practical analysis problems.
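For illustration only, the two standard transitive-closure rules shown in the comments below (a common textbook example, not necessarily the one used later) can be evaluated naively to a fixed point in Python:

```python
# Sketch: naive fixed-point evaluation of the standard transitive-closure rules
#   path(X,Y) :- edge(X,Y).
#   path(X,Y) :- edge(X,Z), path(Z,Y).

def infer_paths(edges):
    """Return all (x, y) facts derivable from the given edge facts."""
    path = set(edges)                         # rule 1: every edge is a path
    changed = True
    while changed:                            # iterate until no new facts appear
        changed = False
        new = {(x, y) for (x, z) in edges for (z2, y) in path if z == z2}
        if not new <= path:
            path |= new
            changed = True
    return path

assert infer_paths({("a", "b"), ("b", "c")}) == {("a", "b"), ("b", "c"), ("a", "c")}
```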
While a Datalog program can be easily implemented using a logic programming system, such as a Prolog system, evaluated using various evaluation methods, such as well-established tabling methods, or rewritten using transformation methods for more efficient evaluation, such as using the well-known magic-sets method, these implementations are typically for fast prototyping. The running times of Datalog programs implemented using these methods can vary dramatically depending on the order of rules and the order of hypotheses in a rule. Even less is known about the space usage. Developing and implementing efficient algorithms specialized for any given set of rules and with time and space guarantees is a nontrivial, recurring task.
We first look at computational problems programmed using loops over numbers and arrays. A loop is a command for repeating a sequence of operations. A number is an integer or a real number with an internal representation in a computer. An array is an arrangement of computer memory elements in one or more dimensions. Problems involving arithmetic on numbers were the first problems for which computing devices were built. Problems involving operations on arrays were at the center of many subsequent larger computer applications. Because nontrivial computations involve performing operations iteratively, loops are among the most commonly used and most important constructs in programming solutions to problems.
Clear and straightforward problem solutions tend to have expensive computations in loops, where the values that these computations depend on are updated slightly in each iteration. To improve efficiency, the results of these computations can be stored and incrementally maintained with respect to updates to the values that they depend on. The transformation of programs to achieve this is called incrementalization.
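A minimal sketch of the idea (our own example, with illustrative names): a loop that recomputes a prefix sum in every iteration versus one that maintains the sum incrementally:

```python
# Sketch: the expensive computation sum(a[:i+1]) inside the loop depends on i,
# which is updated slightly (by 1) in each iteration.

def prefix_averages_clear(a):
    """Clear version: O(n^2), recomputes the prefix sum every iteration."""
    return [sum(a[:i + 1]) / (i + 1) for i in range(len(a))]

def prefix_averages_incremental(a):
    """Incrementalized version: O(n), maintains the prefix sum across iterations."""
    result, running = [], 0
    for i, x in enumerate(a):
        running += x                   # incremental update of the stored sum
        result.append(running / (i + 1))
    return result

data = [2.0, 4.0, 6.0]
assert prefix_averages_clear(data) == prefix_averages_incremental(data)
```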
We will use a small example and several variations to explain the basic ideas of incrementalization. We then describe two larger examples: one in hardware design, to show additional loop optimizations enabled by incrementalization, and the other in image processing, to show handling of nested loops and arrays in incrementalization. We discuss the need for higher-level languages at the end.
Many combinatorics and optimization problems can be solved straightforwardly by combining solutions to subproblems, where the subproblems may overlap in complicated manners. Such ways of solving the problems can be easily specified using recursive functions—functions whose definitions involve calls to the functions themselves, called recursive calls. The recursive calls are used for solving subproblems. Applications include many kinds of data analysis and decision-making problems, such as the great many kinds of analysis and manipulations needed on sequences, be they sequences in biological computing, document processing, financial analysis, or sensor data analysis.
Straightforward evaluation of recursive functions may be inefficient, and may in fact be extremely inefficient when they are used to solve overlapping subproblems, because subproblems may share subsubproblems, and common subsubproblems may be solved repeatedly. An efficient algorithm solves every subsubproblem just once, saves the result appropriately, and reuses the result when the subsubproblem is encountered again. Such algorithms are called dynamic programming algorithms. To arrive at such efficient algorithms, we must determine how efficient computations should proceed, and how computed results should be saved and reused. These correspond to Steps Iterate and Incrementalize. Step Implement just determines a straightforward way of storing the saved results in appropriate data structures.
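As a sketch using a standard stand-in example (Fibonacci numbers, not necessarily an example from this chapter), saving and reusing the result of each shared subproblem turns an exponential recursion into a linear computation:

```python
# Sketch: naive recursion re-solves shared subproblems exponentially often;
# caching each result once gives the dynamic-programming behavior described above.

def fib_naive(n):
    """Straightforward recursive specification: exponential time."""
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

def fib_dp(n, cache=None):
    """Each subproblem is solved once, saved, and reused: linear time."""
    if cache is None:
        cache = {}
    if n in cache:
        return cache[n]                    # reuse a previously saved result
    cache[n] = n if n < 2 else fib_dp(n - 1, cache) + fib_dp(n - 2, cache)
    return cache[n]

assert fib_dp(30) == fib_naive(30) == 832040
```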
Many English noun pairs suggest an almost limitless array of semantic interpretations. A fruit bowl might be described as a bowl for fruit, a bowl that contains fruit, a bowl for holding fruit, or even (perhaps in a modern sculpture class) a bowl made out of fruit. These interpretations vary in syntax, semantic denotation, plausibility, and level of semantic detail. For example, a headache pill is usually a pill for preventing headaches, but might, perhaps in the context of a list of side effects, be a pill that can cause headaches (Levi, J. N. 1978. The Syntax and Semantics of Complex Nominals. New York: Academic Press). In addition to lexical ambiguity, both relational ambiguity and relational vagueness make automatic semantic interpretation of these combinations difficult. While humans parse these possibilities with ease, computational systems are only recently gaining the ability to deal with the complexity of lexical expressions of semantic relations. In this paper, we describe techniques for paraphrasing the semantic relations that can hold between nouns in a noun compound, using a semi-supervised probabilistic method to rank candidate paraphrases of semantic relations, and a new method for selecting plausible relational paraphrases at arbitrary levels of semantic specification. These methods are motivated by the observation that existing semantic relation classification schemes often exhibit a highly skewed class distribution, and that lexical paraphrases of semantic relations vary widely in semantic precision.
Design may refer to both the process of creating a plan, a scheme, or generally an organization of elements, for accomplishing a goal, and the result of that process. Wikipedia states that design is usually considered in the context of applied arts, engineering, architecture, and other creative endeavors, and normally requires considering aesthetic, functional, and many other aspects of an object or a process [319]. In the context of this book in the computing world, design refers to the creation of computer programs, including algorithmic steps and data representations, that satisfy given requirements.
Design can be exciting because it is linked to problem solving, creation, accomplishments, and so on. It may also be frustrating because it is also linked to details, restrictions, retries, and the like. In the computing world, the creation of a computer program to accomplish a computation task clearly requires problem solving; the sense of excitement in it is easy to perceive by anyone who ever did it. At the same time, one needs to mind computation details and obey given restrictions in often repeated trials; the sense of frustration in the process is also hard to miss.
Systematic design refers to step-by-step processes to go from problem descriptions to desired results, in contrast to ad hoc techniques. For program design, it refers to step-wise procedures to go from specifications prescribing what to compute to implementations realizing how to compute. The systematic nature is important for reproducing, automating, and enhancing the creation or development processes. Clarity of the specifications is important for understanding, deploying, and evolving the programs. Efficiency of the implementations is important for their acceptance, usage, and survival.
Most computer applications must handle collections of data. A set is a collection of distinct elements. Operations on sets, such as union and difference of the elements of two sets, are higher-level than operations on arrays, which are assignment and access of elements at indexed positions. The largest class of computer applications that involve collections is database applications, which typically handle large collections of data and provide many kinds of queries and other functionalities. Because of their higher-level nature, sets and set operations can be used to express problem solutions more clearly and easily, and have been used increasingly in programming languages, though typically only in applications that are not performance critical.
Problem solutions programmed using high-level set operations typically have performance problems, because high-level set operations typically involve many elements and are often repeatedly performed as sets are updated. To improve performance of applications programmed using sets, expensive high-level operations on sets must be transformed into efficient incremental operations, and sets must be implemented using data structures that support efficient incremental operations. These correspond to Steps Incrementalize and Implement, respectively. Step Iterate just determines a simple way of iteration that adds one element at a time to a set.
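As a small sketch (our own toy query, with illustrative names), consider a query over two sets recomputed from scratch versus maintained incrementally as one of the sets grows:

```python
# Sketch: the query len(s & t) is expensive if recomputed after every update
# to s; it can be maintained incrementally under the update "add x to s".

def overlap_clear(s, t):
    """Clear specification: recompute the intersection size from scratch."""
    return len(s & t)

class OverlapIncremental:
    """Maintain len(s & t) under additions to s in O(1) per update."""
    def __init__(self, t):
        self.t = set(t)
        self.s = set()
        self.size = 0                  # stored query result
    def add(self, x):
        if x not in self.s:
            self.s.add(x)
            if x in self.t:            # incremental maintenance of the result
                self.size += 1

q = OverlapIncremental({1, 2, 3})
for x in [2, 5, 3, 2]:
    q.add(x)
assert q.size == overlap_clear({2, 5, 3}, {1, 2, 3}) == 2
```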
Because graph problems are typically specified using sets, we first describe the method using the graph reachability problem as an example, showing how efficient graph algorithms can be derived systematically.
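To give the flavor of such a derivation (a standard worklist formulation, not necessarily the one derived in the chapter), reachability can be specified over sets and implemented so that each edge out of a reached vertex is processed only once:

```python
# Sketch: vertices reachable from a set of sources, computed with a worklist
# so that each edge out of a reached vertex is considered exactly once.

def reachable(sources, edges):
    """edges: dict mapping each vertex to the set of its successors."""
    reached = set(sources)            # the maintained result set
    work = list(sources)              # vertices whose successors are unprocessed
    while work:
        v = work.pop()
        for w in edges.get(v, ()):    # incremental step: follow one edge
            if w not in reached:
                reached.add(w)
                work.append(w)
    return reached

g = {"a": {"b"}, "b": {"c"}, "d": {"a"}}
assert reachable({"a"}, g) == {"a", "b", "c"}
```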