Before we dive into the technical details of digital system design, it is useful to take a high-level look at the way systems are designed in industry today. This will allow us to put the design techniques we learn in subsequent chapters into the proper context. This chapter examines four aspects of contemporary digital system design practice: the design process, implementation technology, computer-aided design tools, and technology scaling.
We start in Section 2.1 by describing the design process – how a design starts with a specification and proceeds through the phases of concept development, feasibility studies, detailed design, and verification. Except for the last few steps, most of the design work is done using English-language documents. A key aspect of any design process is a systematic – and usually quantitative – process of managing technical risk.
Digital designs are implemented on very-large-scale integrated (VLSI) circuits (often called chips) and packaged on printed-circuit boards (PCBs). Section 2.2 discusses the capabilities of contemporary implementation technology.
The design of highly complex VLSI chips and boards is made possible by sophisticated computer-aided design (CAD) tools. These tools, described in Section 2.3, amplify the capability of the designer by performing much of the work associated with capturing a design, synthesizing the logic and physical layout, and verifying that the design is both functionally correct and meets timing.
Approximately every two years, the number of transistors that can be economically fabricated on an integrated-circuit chip doubles. We discuss this growth rate, known as Moore's law, and its implications for digital systems design in Section 2.4.
THE DESIGN PROCESS
As in other fields of engineering, the digital design process begins with a specification. The design then proceeds through phases of concept development, feasibility, partitioning, and detailed design. Most texts, like this one, deal with only the last two steps of this process. To put the design and analysis techniques we will learn into perspective, we will briefly examine the other steps here. Figure 2.1 gives an overview of the design process.
After you have developed several programs for some related tasks, you will find it convenient to group them together and make them available as a cohesive whole. Packages are designed to make it easy to distribute your programs to others, but they also provide a framework for you to write programs that integrate with Mathematica seamlessly.
A package is simply a text file containing Mathematica code. Typically you put related functions in a package. So there might be a computational geometry package or a random walks package that includes functions in support of those tasks. The package framework includes a name-localizing construct, analogous to Module, but for entire files of definitions. The idea is to allow you, the programmer, to define a collection of functions for export. These exported functions are what the users of your package will work with and are often referred to as public functions. Other functions, those that are not for export, are auxiliary, or private, functions and are not intended to be accessible to users. The package framework, and contexts specifically, provide a convenient way to declare some functions public and others private. In this chapter we will describe this framework and show how to write, install, and use the packages developed with it.
Working with packages
Loading and using packages
Upon starting a Mathematica session, the built-in functions are immediately available for you to use. There are, however, many more functions that you can access that reside in files supplied with Mathematica. The definitions in those files are placed in special structures called packages. Indeed, these files themselves are often called “packages” instead of “files.”
Mathematica packages have been written for many different domains. They are provided with each version of Mathematica and are referred to as the Standard Extra Packages. Their documentation is available in the Documentation Center (under the Help menu), and they provide a good set of examples for learning about package creation and usage.
The idea of representing an idealized version of an object with something called a pattern is central to mathematics and computer science. For the purposes of search, patterns provide a template with which to compare expressions. They can be used to filter data by selecting only those parts that match the pattern. Because of the wide applicability of patterns, many modern programming languages have extensive pattern-matching capabilities that enable them to identify objects that meet some criteria in order to classify, select, or transform those objects through the use of rules.
Pattern matching in Mathematica is done through a powerful yet flexible pattern language. Used in rules to transform expressions from one form to another, patterns can be applied to broad classes of expressions or limited to very narrowly defined objects through the use of conditional and structured pattern matching. Pattern matching is the key to identifying which rules should be applied to expressions that you wish to transform.
If you have used regular expressions in languages such as Perl or Ruby, or via libraries in Java, Python, or C++, then you are already familiar with pattern matching on strings (discussed further in Chapter 7). Mathematica's pattern language generalizes this to arbitrary objects and expressions. Although the syntax may be new to you, with practice it becomes natural, providing a direct connection between the statement of a problem and its expression in a program. This chapter starts with an introduction to patterns and pattern matching and then proceeds to a discussion of transformation rules in which patterns are used to identify the parts of an expression that are to be transformed. The chapter concludes with several concrete examples that make use of pattern matching and transformation rules to show their application to some common programming tasks.
Lists are the key data structure used in Mathematica to group objects together. They share some features with arrays in other languages such as C and Java, but they are more general and can be used to represent a wide range of objects: vectors, matrices, tensors, iterator and parameter specifications, and much more.
Because lists are so fundamental, an extensive set of built-in functions is available to manipulate them in a variety of ways. In this chapter, we start by looking at the structure and syntax of lists before moving on to constructing, measuring, and testing lists. We then introduce some of the built-in functionality used to manipulate lists such as sorting and partitioning. Finally, we will discuss associations, a feature first introduced in Mathematica 10. Associations provide a framework for efficient representation and lookup of large data structures such as associative arrays (for example, a large database of article and book references or a music library).
Many of the things you might wish to do with a list or association can be accomplished using built-in functions and the programming concepts in this book. And most of these operations extend to arbitrary expressions in a fairly natural way, as we will see in later chapters. As such, it is important to have a solid understanding of these functions before going further, since a key to efficient programming in Mathematica is to use the built-in functions whenever possible to manipulate lists and associations as well as general expressions.
Creating and displaying lists
List structure and syntax
The standard input form of a list is a sequence of elements separated by commas and enclosed in curly braces:
{e1, e2, …, en}
Internally, lists are stored in functional form using the List function with an arbitrary number of arguments; for example, FullForm[{1, 2, 3}] displays as List[1, 2, 3].
Mobile device data rates are increasing at effectively Moore's law rates [1]: mobile devices are implemented in silicon and so benefit from the shrinking geometries and the growth in functionality and transistor count per die. The current cellular approach of using large outdoor base station towers to provide mobile broadband via wireless communications, however, does not scale efficiently to cope with the 13-fold increase in traffic forecast by 2017 [2]; a new approach is therefore required. As we will discuss in this chapter, small cell deployments have the potential to provide a scalable solution to this demand, and they are beginning to change the network topology into a so-called heterogeneous network (HetNet) containing a mix of different cell sizes and cell power levels, as shown in Figure 8.1. This leads to a mix of macro cells, micro cells, pico cells, and femto cells. Such a deployment is not as uniform as one of outdoor macro cells: the cell sizes differ and a much more irregular layout results.
As we will show in this chapter, small cell network (SCN) deployments provide, for the first time, a low-cost, efficient, and scalable architecture to meet the expected demand. This technology was first deployed at large scale when femto cells (residential small cells) were rolled out by a number of leading wireless operators (including Vodafone and AT&T). The solution was enabled by two key developments, namely:
• low-cost chip-sets (so-called “systems-on-chip”), which included the entire signal processing and most of the radio software stack, and
• the availability of high-speed internet access (greater than Mbps), which provided the so-called “backhaul” (the connectivity into the network) for these access points.
These developments mean that, with volume, the effective capital costs of small cells are negligible compared with deployment and operating costs.
Small cell deployments have a range of topologies and configurations that depend on the location and the demand requirements. For example, a downtown city area will typically use a combination of small cells deployed:
• on outdoor light poles to serve “hotspot” traffic needs around cafes and other places where users congregate,
• in city buildings to serve enterprise customers with high demands on throughput and reliability, and
• in apartments or residential homes to serve private users' needs.
Strings are used across many disciplines to represent data, filenames, and other objects: linguists study the representation, classification, and patterns involved in audio and text usage; biologists, who work with genomic data as strings, are interested in sequence structure and assembly and perform extensive statistical analysis on strings; and programmers operate on string data for such tasks as text search, file manipulation, and text processing. Strings are so ubiquitous that almost every modern programming language has a string data type and dozens of functions for operating on strings.
In this chapter we will introduce the tools available for working with strings, starting with a look at their structure and syntax, then moving on to a discussion of the many high-level functions that are optimized for string manipulation. Many of the functions introduced in Chapter 3 for operating on lists have analogs with strings, and you would do well to first make sure you understand those before tackling the string functions.
String patterns follow on the discussion of more general patterns introduced in Chapter 4. We will introduce an alternative syntax – regular expressions – that provides a compact and efficient mechanism for working with strings. The chapter closes with several applied examples from bioinformatics (creating random strings, partitioning strings, analyzing GC content, displaying sequences in tables) and linguistics (text processing, corpus analysis, n-grams, collocation, word and sentence length frequency).
Visualization is a means to organize, model, and ultimately make sense of information. Functions, numerical and abstract data, text, and many other objects are commonly analyzed and studied using visual representations. Sometimes the representation is fixed spatially, as with mathematical functions or geometric objects; other times, as with information visualization, a spatial representation is not given and must be created; and sometimes the information is ordered temporally and so time itself becomes a visualization parameter. In any of these domains, the idea is to find a representation that best conveys the information and relationships under study.
Although built-in graphics functions are often sufficient for your visualizations, you will periodically need to create your own customized code to visualize the objects under study. Sometimes it is more efficient to build upon existing functions, modifying them as needed. Other times you will find it best to create such visualizations from scratch, using the graphics building blocks that make up Mathematica's symbolic graphics language.
This chapter covers how to construct functions for visualizing different kinds of data and objects using the basic building blocks of graphical expressions in Mathematica – primitives, directives, and options. We also look at ways to make dynamic graphics, including constructs for changing graphics dynamically using a pointing device such as a mouse. We then address ways to make your graphics more efficient by looking at the internal representation of graphics objects, as well as the use of multi-objects and an alternative representation that results in a compressed graphics object, GraphicsComplex. Finally, the chapter closes with several problems in bioinformatics, chemistry, geometry, and computer science, in which we use built-in graphics functionality together with the graphics language to create visualizations for some nontrivial problems.
A relatively small number of modules – decoders, multiplexers, encoders, etc. – are used repeatedly in digital designs. These building blocks are the idioms of modern digital design. Often, we design a module by composing a number of these building blocks to realize the desired function, rather than writing its truth table and directly synthesizing a logical implementation.
In the 1970s and 1980s most digital systems were built from small integrated circuits that each contained one of these building block functions. The popular 7400 series of TTL logic [106], for example, contained many multiplexers and decoders. During that period, the art of digital design largely consisted of selecting the right building blocks from the TTL databook and assembling them into modules. Today, with most logic implemented as ASICs or FPGAs, we are not constrained by what building blocks are available in the TTL databook. However, the basic building blocks are still quite useful elements from which to build a system.
MULTI-BIT NOTATION
Throughout this book we use bus notation in figures to denote multi-bit signals with a single line. For example, in Figure 8.1, we represent the eight-bit signal b7:0 with a single line. The diagonal slash across the signal indicates that this line represents a multi-bit signal. The number “8” below the slash indicates that the width of the bus is eight bits.
Single bits and subfields are selected from a multi-bit signal using diagonal connectors, as shown for bits b7 and b5, and the three-bit subfield b5:3. Each diagonal connector is labeled with the bits being selected. The bit selections may overlap – as is the case with b5 and b5:3. The subfield b5:3 is itself a multi-bit signal and is labeled accordingly – with a slash and the number “3.”
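As an illustration (not taken from the book's figures), the same bus, bit, and subfield selections might be expressed in VHDL roughly as follows; the entity and port names here are assumptions made for this sketch.

library ieee;
use ieee.std_logic_1164.all;

-- Sketch: declaring an eight-bit bus and selecting single bits and a
-- subfield from it, mirroring the b7:0 notation used in the figures.
entity bus_select is
  port(
    b    : in  std_logic_vector(7 downto 0);  -- the eight-bit bus b7:0
    b7   : out std_logic;                     -- single bit b7
    b5   : out std_logic;                     -- single bit b5
    b5_3 : out std_logic_vector(2 downto 0)   -- three-bit subfield b5:3
  );
end entity bus_select;

architecture impl of bus_select is
begin
  b7   <= b(7);            -- select bit 7
  b5   <= b(5);            -- select bit 5 (overlaps with the subfield below)
  b5_3 <= b(5 downto 3);   -- select the three-bit subfield b5:3
end architecture impl;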
DECODERS
In general, a decoder converts symbols from one code to another. We have already seen an example of a binary to seven-segment decoder in Section 7.3. When used by itself, however, the term decoder means a binary to one-hot decoder: a circuit that converts a symbol from a binary code (in which each bit pattern represents a symbol) to a one-hot code (in which at most one bit can be high at a time and each bit represents a symbol). In Section 8.4 we will discuss encoders, which reverse this process; that is, they are one-hot to binary decoders.
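As a hedged sketch of the idea (not the book's implementation), a 3-to-8 binary to one-hot decoder with an enable input might be written in VHDL along these lines; the entity name and ports are assumed for illustration.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Sketch: a 3-to-8 binary to one-hot decoder with an enable input.
-- When en is low, no output bit is asserted (the all-zeros case of
-- "at most one bit high at a time").
entity dec_3to8 is
  port(
    en  : in  std_logic;                      -- enable
    bin : in  std_logic_vector(2 downto 0);   -- binary-coded input symbol
    oh  : out std_logic_vector(7 downto 0)    -- one-hot output
  );
end entity dec_3to8;

architecture impl of dec_3to8 is
begin
  process(en, bin)
  begin
    oh <= (others => '0');                    -- default: no bit high
    if en = '1' then
      oh(to_integer(unsigned(bin))) <= '1';   -- assert the selected bit
    end if;
  end process;
end architecture impl;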
This chapter introduces the concept of temporary cognitive small cell networks (TCSCN) as a supplementary infrastructure to LTE-Advanced macro networks, and examines how cognitive capabilities can enable the rapid and temporary nature of such deployments. Temporary networks are suitable for disaster-recovery scenarios where the nominal macro network is severely affected or completely paralyzed. In addition, such temporary networks can address a sudden increase in wireless traffic in certain geographic areas due to public events. The cognitive capabilities are realized by exploiting the latest LTE-Advanced HetNet features, as well as by presenting novel techniques for intelligently mitigating interference between the macro network base stations and the introduced temporary infrastructure. Simulation results are presented to show the enhancement of the wireless service when such temporary networks are deployed together with the proposed cognitive capabilities. The chapter ends with an overview of open research directions that are fundamental to the further realization of temporary cognitive small cell networks.
Introduction
The recent developments in broadband wireless networks have added an unprecedented level of reliability and bandwidth efficiency to cellular communications, leading to the emergence of new user applications that would not be possible without such enhancements. In fact, commercial wireless networks are nowadays an irreplaceable part of modern economies and societies, used not only for voice and video communications but also as a carrier for many mission-critical businesses and applications, such as wireless video monitoring, transportation signals, logistics tracking, and automatic power meter reading.
With this increasing dependency on telecommunication networks, the impact on the economic system and public services would be massive if such networks were disrupted by a natural disaster such as a flood, earthquake, or tsunami, or by a man-made attack. Among the critical users of telecommunication networks are public safety agencies, which increasingly depend on broadband telecommunication, making the provision of a reliable broadband wireless service one of the top priorities for both commercial and public safety network operators.
Temporary small cells are seen as one of the viable solutions to assist network operators in tackling a sudden disruption of the wireless service, or a sudden increase in service demand generated by public events. Such events may involve large crowds on the move, generating heavy demand on cellular network capacity.
Multi-layer heterogeneous network (HetNet) deployments including small cell base stations (BSs) are considered to be the key to further enhancements of the spectral efficiency achieved in mobile communication networks [1]. Besides the capacity enhancement due to frequency reuse, inter-cell interference has been identified as a limiting factor in HetNets. The 3rd Generation Partnership Project (3GPP) discussed inter-cell interference coordination (ICIC) mechanisms in Long Term Evolution (LTE) Release 8/9 [2]. LTE Release 8/9 ICIC techniques were introduced primarily to protect cell-edge user equipments (UEs). They are based on limited frequency-domain interference information exchange via the X2 interface, whereby the ICIC-related X2 messages are defined in the 3GPP standard [3]. In LTE Release 8/9 ICIC, a BS provides information about the set of frequency resources in which it is likely to schedule downlink (DL) transmissions to cell-edge UEs, for the benefit of a neighboring BS. The neighboring BS in turn avoids scheduling its UEs on these frequency resources.
With the growing demand for data services and the introduction of HetNets it has become increasingly difficult to meet a UE's quality of service (QoS) requirements with these mechanisms. To cope with the QoS requirements and growing demand for data services, enhanced ICIC (e-ICIC) solutions have been proposed in LTE Release 10 and further e-ICIC (Fe-ICIC) solutions to reduce cell reference signal (CRS) interference in e-ICIC techniques are discussed in LTE Release 11 [4].
In LTE Release 10 e-ICIC techniques, the focus is on time-domain, frequency-domain, and power-control techniques. In time-domain techniques, the transmissions of the victim UEs are coordinated in time-domain resources, whereas in frequency-domain techniques e-ICIC is mainly achieved by frequency-domain orthogonalization. Power-control techniques have been discussed intensively in 3GPP; here, power control is performed by the aggressor cell to reduce inter-cell interference to victim UEs. In 3GPP studies, e-ICIC mechanisms with adaptive resource partitioning, cell range expansion (CRE), and interference coordination/cancellation take center stage [5].
In the following, the inter-cell interference problem in HetNets is introduced and time- and frequency-domain e-ICIC techniques are discussed based on 3GPP specifications. In addition, single- and multi-flow transmission techniques for e-ICIC and system capacity improvement are described.
Inter-cell interference in HetNets
One of the major features extensively studied for LTE Release 10, also known as LTE-Advanced, is the HetNet coverage and capacity optimization, e.g., through the use of cell-range expansion (CRE) techniques.
Heterogeneous networks (HetNets) are being deployed as a feasible and cost-effective solution to address the recent data explosion caused by smart phones and tablets. In a co-channel HetNet deployment, several low-power small cells are overlaid on the same carrier as the existing macro network. While this is the most spectrally efficient approach, the coverage areas of the small cells can be significantly smaller due to their lower transmit powers, which can limit the volume of data offload. Extending the range of pico cells to increase traffic offload by increasing the number of users associated with these cells is known as cell range extension (CRE). On the flip side, CRE results in interference issues that have been resolved via standards-based solutions in 3GPP, known as the Release 10 enhanced inter-cell interference coordination (eICIC) capability. In this chapter, we address the problem of ensuring connected-state mobility, or handover, performance in co-channel HetNets. HetNets with and without range extension are considered. We show how the aforesaid interference coordination techniques can also be leveraged to improve mobility performance. Furthermore, we discuss how the handover decisions and handover parameters can be further optimized based on user speed. We show that the handover failure rate can be significantly reduced using mobile-speed-dependent handover parameter adaptation and CRE with subframe blanking, although at the cost of an increase in the short time-of-stay (SToS) rate. Finally, other aspects such as radio link failure recovery, small cell discovery, and related enhancements are discussed.
Introduction
As a result of rapid penetration of smart phones and tablets, mobile users have started to use more and more data services, in addition to the conventional voice service, on their devices. Due to this trend, demand for network capacity has been growing significantly. It is observed that the capacity demand normally originates unevenly in the cellular coverage area. In other words, the demand is concentrated in some smaller geographical areas, for example shopping malls, stadiums, and high-rise buildings. The conventional homogeneous cellular networks are intended to provide uniform coverage and services with base stations having the same transmit powers, antenna parameters, backhaul connectivity, etc., across a wide geographical area. To serve spatially concentrated data demand, HetNets are a viable and cost-effective solution.
In Chapter 14 we saw how a finite-state machine can be synthesized from a state diagram by writing down a table for the next-state function and synthesizing the logic that realizes this table. For many sequential functions, however, the next-state function can be more simply described by an expression rather than by a table. Such functions are more efficiently described and realized as datapaths, where the next state is computed as a logical function, often involving arithmetic circuits, multiplexers, and other building block circuits.
COUNTERS
A simpler counter
Suppose you want to build a finite-state machine with the state diagram shown in Figure 16.1. This circuit is forced to state 0 whenever input r is true. Whenever input r is false, the machine counts through the states from 0 to 31 and then cycles back to 0. Because of this counting behavior, we refer to this finite-state machine as a counter.
We could design the counter employing the methodology developed in Chapter 14. A VHDL description taking this approach for a three-bit counter (eight states) is shown in Figure 16.2. A three-bit wide bank of flip-flops holds the current state, count, and updates it from the next state, nxt, on each rising edge of the clock. The matching case statement captures the state table, specifying the next state for each input and current state combination.
While this method for generating counters works, it is verbose and inefficient. The lines of the state table are repetitive. The behavior of the machine can be captured entirely by the single line
nxt <= (others => '0') when rst else count+1;
We use array aggregate notation, “(others => '0')”, to specify a std_logic_vector value with all elements equal to '0'. This is a datapath description of the finite-state machine in which we specify the next state as a function of the current state and inputs.
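A hedged sketch of what a complete datapath-style three-bit counter built around this expression might look like is given below; the entity and signal names (counter3, cnt, nxt, rst, clk) are assumptions, and an unsigned internal signal is used for the arithmetic rather than the std_logic_vector of the book's figures.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Sketch: a three-bit counter described in datapath style.
-- The next state nxt is computed by an expression rather than a state table.
entity counter3 is
  port(
    clk, rst : in  std_logic;
    count    : out std_logic_vector(2 downto 0)
  );
end entity counter3;

architecture impl of counter3 is
  signal cnt, nxt : unsigned(2 downto 0);
begin
  -- datapath: next state is 0 on reset, otherwise current state plus one
  -- (unsigned arithmetic wraps from 7 back to 0, giving the cyclic count)
  nxt <= (others => '0') when rst = '1' else cnt + 1;

  -- state: a three-bit bank of flip-flops updated on the rising clock edge
  process(clk)
  begin
    if rising_edge(clk) then
      cnt <= nxt;
    end if;
  end process;

  count <= std_logic_vector(cnt);
end architecture impl;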
What happens when we violate the setup- and hold-time constraints of a flip-flop? Until now, we have considered only the normal behavior of a flip-flop when these constraints are satisfied. In this chapter we investigate the abnormal behavior that occurs when we violate these constraints. We will see that violating setup and hold times may result in the flip-flop entering a metastable state in which its state variable is neither a 1 nor a 0. It may stay in this metastable state for an indefinite amount of time before arriving at one of the two stable states (0 or 1). This synchronization failure can lead to serious problems in digital systems.
To stretch an analogy, flip-flops are a lot like people. If you treat them well, they will behave well. If you mistreat them, they behave poorly. In the case of flip-flops, you treat them well by observing their setup and hold constraints. As long as they are well treated, flip-flops will function properly, never missing a bit. If, however, you mistreat your flip-flop by violating the setup and hold constraints, it may react by misbehaving – staying indefinitely in a metastable state. This chapter explores what happens when these good flip-flops go bad.
SYNCHRONIZATION FAILURE
When we violate the setup- or hold-time constraints of a D flip-flop, we can put the internal state of the flip-flop into an illegal state. That is, the internal nodes of the flip-flop can be left at a voltage that is neither a 0 nor a 1. If the output of the flip-flop is sampled while it is in this state, the result is indeterminate and possibly inconsistent. Some gates may see the flip-flop output as a 0, while others may see it as a 1, and still others may propagate the indeterminate state.
Consider the following experiment with a D flip-flop. Initially both d and clk are low. During our experiment, they both rise. If signal d rises at least ts (the setup time) before clk, the output q will be 1 at the end of the experiment. If signal clk rises at least th (the hold time) before d, the output q will be 0 at the end of the experiment.
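To make the experiment concrete, here is a minimal VHDL testbench sketch (not from the book; it assumes a flip-flop entity named dff with ports d, clk, and q exists in the work library). Note that an ordinary behavioral simulation model will simply sample d at the clock edge or report a timing-check violation; reproducing true metastable behavior requires circuit-level simulation or measurement on real hardware.

library ieee;
use ieee.std_logic_1164.all;

-- Sketch: drive d and clk so that d rises a programmable time before clk.
-- Sweeping t_offset across the setup/hold window is how the experiment
-- would be run; a behavioral model only shows a clean 0 or 1.
entity dff_tb is
end entity dff_tb;

architecture sim of dff_tb is
  signal d, clk, q : std_logic := '0';       -- initially both d and clk are low
  constant t_offset : time := 1 ns;          -- how long d rises before clk
begin
  dut : entity work.dff port map(d => d, clk => clk, q => q);  -- assumed DFF entity

  stimulus : process
  begin
    wait for 10 ns;
    d   <= '1';                              -- d rises first...
    wait for t_offset;
    clk <= '1';                              -- ...then clk rises t_offset later
    wait for 10 ns;
    report "q = " & std_logic'image(q);      -- observe the captured value
    wait;                                    -- end of experiment
  end process;
end architecture sim;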
The Third Generation Partnership Project (3GPP) Long Term Evolution (LTE) standard and its evolved version, LTE-Advanced, are currently the most prominent and advanced mobile communication systems. When LTE was designed and standardized, starting in 2004 and first released in 2008 as Release 8, it was targeted mainly at macro base station networks with uniform, well-planned deployments of high-power nodes, where coverage, mobility, and the provision of high throughput across large areas are at the heart of the requirements. As a consequence, small cell (or low-power node) deployments alongside high-power macro base stations were not considered in the original designs. The specific issues raised by the introduction of small cells therefore cannot be addressed sufficiently without additional features to deal with interference, mobility, traffic load management, etc., related to such deployments.
The major problems from deploying small cells, especially together with macro base stations, include:
More severe interference conditions. Although interference is always a key issue for cellular communication and handling interference is built into the core of LTE designs, the coexistence of network nodes with different power levels, especially in the co-channel scenario where the same frequency channel (known as the component carrier) is used for both macro base station cells and small cells, results in much worse interference conditions than before. The LTE system generally has a very robust physical layer design to ensure that each physical channel can be reliably received at a fairly low signal to interference-plus-noise ratio (SINR). However, in order to fully utilize the potential of small cells to offload traffic, small cells sometimes need to serve users at even lower SINR, which requires a mechanism to either avoid or cope with strong interference.
Mobility management and traffic load balancing. The introduction of small cells with low transmission power creates cells with a small footprint within the system, which increases the frequency of handovers between cells due to user mobility. This results in dramatically more handover failures as well as increased backhaul signaling. Furthermore, again because of the much smaller coverage areas, the traffic loads between cells are more likely to be unbalanced and time-varying, so a more efficient load-balancing and load-shifting mechanism is needed.
In this chapter we work through several examples of combinational circuits to reinforce the concepts in the preceding chapters. A multiple-of-3 circuit is another example of an iterative circuit. The tomorrow circuit from Section 1.4 is an example of a counter circuit with subcircuits for modularity. A priority arbiter is an example of a building-block circuit – built using design entities described in preceding chapters. Finally, a circuit designed to play tic-tac-toe gives a complex example combining many concepts.
MULTIPLE-OF-3 CIRCUIT
In this section we develop a circuit that determines whether an input number is a multiple of 3. We implement this function using an iterative circuit (like the magnitude comparator of Section 8.6). A block diagram of an iterative multiple-of-3 circuit is shown in Figure 9.1. Each stage performs, in binary, a step of long division of the input number by 3, passing along the remainder but discarding the quotient. The circuit checks the input number one bit at a time starting at the MSB. At each bit, we compute the remainder so far (0, 1, or 2). At the LSB we check whether the overall remainder is 0. Each bit cell takes the remainder so far to its left, and one bit of the input, and computes the remainder so far to its right.
The VHDL design entity for the bit cell of our iterative multiple-of-3 circuit is shown in Figure 9.2. The remainder input remin represents the remainder from the neighboring bit to the left, and hence has a weight of 2 relative to the current bit position. In the neighboring bit this signal represented a remainder of 0, 1, or 2; in the present bit, this value is shifted to the left by one bit position, so it represents a value of 0, 2, or 4. Hence we can concatenate remin with the current bit of the input, input, to form a three-bit binary number and then take the remainder (mod 3) of this number. A case statement is used to compute the new remainder.
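As a hedged sketch of such a bit cell (the book's actual entity is Figure 9.2, which is not reproduced here; the names used below and the choice of a selected signal assignment instead of a case statement are assumptions for illustration):

library ieee;
use ieee.std_logic_1164.all;

-- Sketch: one bit cell of the iterative multiple-of-3 checker.
-- remin (weight 2 relative to this bit) concatenated with the current input
-- bit forms a three-bit value 0..5 whose remainder mod 3 is passed rightward.
entity mod3_cell is
  port(
    remin  : in  std_logic_vector(1 downto 0);  -- remainder so far (0, 1, or 2)
    inbit  : in  std_logic;                     -- current bit of the input number
    remout : out std_logic_vector(1 downto 0)   -- new remainder (0, 1, or 2)
  );
end entity mod3_cell;

architecture impl of mod3_cell is
  signal val : std_logic_vector(2 downto 0);
begin
  val <= remin & inbit;        -- value is 2*remin + inbit, i.e. 0..5

  with val select
    remout <= "00" when "000", -- 0 mod 3 = 0
              "01" when "001", -- 1 mod 3 = 1
              "10" when "010", -- 2 mod 3 = 2
              "00" when "011", -- 3 mod 3 = 0
              "01" when "100", -- 4 mod 3 = 1
              "10" when "101", -- 5 mod 3 = 2
              "00" when others; -- remin = 3 cannot occur
end architecture impl;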