Mathematica is a large system used across an astonishing array of disciplines – physics, bioinformatics, geo-science, linguistics, network analysis, optics, risk management, software engineering, and many more. It integrates tools for importing, analyzing, simulating, visualizing, reporting, and connecting to other programs. Underlying all of these tools is a modern programming language with which you can extend the things you can do with Mathematica almost limitlessly.
This book focuses on the programming language, but before we dive into details of syntax and structure, it will be helpful to look at some of the essential elements of actual programs to give you a better sense of what it means to program with Mathematica. So, to start, let us walk through the creation of a short program so you can see how a Mathematica programmer might solve a problem. We will not attempt to write the most efficient or shortest code here, but will instead concentrate on the process of programming – things like rewording the problem, finding the right tools and approach, and testing your program. Hopefully, this prelude will give you a better sense of the breadth and depth of the Mathematica language and help you answer the question, “Why use Mathematica to write programs?”
The second aim of this chapter is to help you become familiar with the Mathematica environment, showing you how to get started, how to work with the interface, documentation, and programming tools, and how to start putting them together to do interesting things. The basic syntax used by Mathematica is introduced, including functions and lists and several alternate syntaxes that you can use to input expressions. This is followed by information on how to get help when you get stuck or have trouble understanding an error that has occurred. If you are already familiar with these aspects of Mathematica, feel free to skim or skip these topics and return to them when the need arises.
The interconnect between modules is as important a component of most systems as the modules being connected. As described in Section 5.6, wires account for a large fraction of the delay and power in a typical system. A wire of just 3 μm in length has the same capacitance (and hence dissipates the same power) as a minimum-sized inverter. A wire of about 100 μm in length dissipates about the same power as one bit of a fast adder.
Whereas simple systems are connected with direct point-to-point connections between modules, larger and more complex systems are better organized with a bus or a network. Consider an analogy to a telephone or intercom system. If you need to talk to only two or three people, you might use a direct line to each person you need to talk to. However, if you need to talk to hundreds of people, you would use a switching system, allowing you to dial any of your correspondents over a shared interconnect.
ABSTRACT INTERCONNECT
Figure 24.1 shows a high-level view of a system using a general interconnect (e.g., a bus or a network). A number of clients are connected to the network by a pair of links to and from the interconnect. The links may be serialized (Section 22.3), and flow control is required on at least the link into the interconnect – to back-pressure the client in the event of contention.
To communicate, client S (the source client) transmits a packet over the link iS into the interconnect. The packet includes, at minimum, a destination address, D, and a payload, P, which may be of arbitrary (or even variable) length. The interconnect, possibly with some delay due to contention, delivers P to client D over link oD out of the interconnect. The payload P may contain a request type (e.g., read or write), a local address within D, and data or other arguments for a remote operation. Because the interconnect is addressed, any client A can communicate with any client B while requiring only a single pair of unidirectional links on each client module.
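As a minimal sketch of this abstraction (the class and field names here are illustrative, not from the text), an addressed interconnect can be modeled as a set of per-client output queues: any client can reach any other through a single send/receive link pair.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    dest: int        # destination client address D
    payload: bytes   # payload P (may encode a request type, a local address, and arguments)

class Interconnect:
    """Toy addressed interconnect: any client reaches any other client
    over one pair of unidirectional links (send into, receive out of)."""
    def __init__(self, num_clients):
        self.queues = [[] for _ in range(num_clients)]  # output link oD per client

    def send(self, packet):
        # deliver P to client D (a real interconnect may delay this under contention)
        self.queues[packet.dest].append(packet.payload)

    def receive(self, client):
        # the client drains its output link
        q, self.queues[client] = self.queues[client], []
        return q

net = Interconnect(4)
net.send(Packet(dest=2, payload=b"read addr=0x10"))
print(net.receive(2))  # [b'read addr=0x10']
```

A real interconnect would add flow control on the input links to back-pressure a client under contention; that is omitted here.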
In this chapter, we look at three methods for improving the speed of arithmetic circuits, and in particular multipliers. We start in Section 12.1 by revisiting binary adders and see how to reduce their delay from O(n) to O(log(n)) by using hierarchical carry-look-ahead circuits. This technique can be applied directly to build fast adders and is also used to accelerate the summation of partial products in multipliers. In Section 12.2 we see how the number of partial products that need to be summed in a multiplier can be greatly reduced by recoding one of the inputs as a sequence of higher-radix, signed digits. Finally, in Section 12.3 we see how the partial products can be accumulated with O(log(n)) delay by using a tree of full adders. The combination of these three techniques into a fast multiplier is left as Exercises 12.17 to 12.20.
CARRY LOOK-AHEAD
Recall that the adder developed in Section 10.2 is called a ripple-carry adder because a transition on the carry signal must ripple from bit to bit to affect the final value of the MSB of the sum. This ripple-carry results in an adder delay that increases linearly with the number of bits in the adder. For large adders, this linear delay becomes prohibitive.
We can build an adder with a delay that increases logarithmically, rather than linearly, with the width of the adder by using a dual-tree structure as shown in Figure 12.1. This circuit works by computing carry propagate and carry generate across groups of bits in the upper tree and then using these signals to generate the carry signal into each bit in the lower tree. The propagate signal pij is true if a carry into bit i will propagate through bits i to j and produce a carry out of bit j. The generate signal gij is true if a carry is generated out of bit j regardless of the carry into bit i.
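The group-combining rules behind this structure can be sketched in a few lines: for a single bit, p = a XOR b and g = a AND b; two adjacent groups (low bits, high bits) combine as P = P_lo AND P_hi and G = G_hi OR (G_lo AND P_hi). The sketch below folds the prefix sequentially for clarity; hardware combines the groups in a balanced tree to achieve O(log n) delay.

```python
def pg_bit(a, b):
    # (propagate, generate) for one bit position
    return (a ^ b, a & b)

def pg_combine(lo, hi):
    # merge two adjacent groups into one wider group
    p_lo, g_lo = lo
    p_hi, g_hi = hi
    return (p_lo & p_hi, g_hi | (g_lo & p_hi))

def cla_add(a_bits, b_bits, c_in=0):
    """Add two little-endian bit lists using look-ahead carries:
    each carry comes from a (P, G) prefix, not from a ripple chain."""
    pg = [pg_bit(a, b) for a, b in zip(a_bits, b_bits)]
    carries = [c_in]
    prefix = None
    for term in pg:
        prefix = term if prefix is None else pg_combine(prefix, term)
        p, g = prefix
        carries.append(g | (p & c_in))  # carry out of the prefix group
    s = [a ^ b ^ c for (a, b), c in zip(zip(a_bits, b_bits), carries)]
    return s, carries[-1]

# 5 + 3 = 8, bits little-endian
s, c_out = cla_add([1, 0, 1, 0], [1, 1, 0, 0])
print(s, c_out)  # [0, 0, 0, 1] 0
```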
In recent years the use of data services in mobile networks has increased notably, requiring higher quality of service and data throughput capacity from operators. These requirements become much more demanding in indoor environments, where communications suffer greater degradation due to wall-penetration losses. As a solution, short-range base stations (BSs), known as femto cells [1], have been proposed. Femto cells are installed by the end consumer and communicate with the macro cell system through the internet by means of a digital subscriber line (DSL), fiber, or cable connection. Because of this deployment model, the number and location of femto cells are unknown to the operators, so centralized network planning is not possible. In the case of co-channel operation, which is the more rewarding option for operators in terms of spectral efficiency, an aggregated interference problem may arise from multiple, simultaneous, and uncoordinated femto cell transmissions. On the other hand, the macro cell network can also cause significant interference to the femto cell system, since the operator has no control over the position of the femto nodes and their users. In this chapter we therefore focus on a challenging problem in the area of small cell networks: inter-cell interference coordination among the different layers of the network.
We propose two different self-organizing solutions, based on two smart techniques that operate on the most appropriate configuration parameters of the network for each situation. In particular, we address the femto–macro and macro–femto problems. On the one hand, for the femto–macro case, we propose in Section 16.1 a machine learning (ML) approach to optimize the transmission power levels by modeling the multiple femto BSs as a multi-agent system [2], where each femto cell learns a transmission power policy such that the interference it generates, added to that of the whole femto cell network, does not jeopardize the macro cell system performance. To do this, we propose a reinforcement learning (RL) category of solutions [3], known as temporal-difference learning. On the other hand, for the macro–femto problem, we propose in Section 16.2 a solution based on interference minimization through self-organization of antenna parameters. In homogeneous macro cell networks, BS antenna parameters are configured in the planning and deployment phase and left unchanged for a long time.
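As a hedged illustration of the temporal-difference idea only (the power levels, reward model, and hyperparameters below are invented for the example and are not taken from the chapter), a single femto BS agent can learn a power policy with tabular Q-learning: it is rewarded for its own throughput but penalized for the interference it inflicts on the macro cell.

```python
import random

POWER_LEVELS = [5, 10, 15, 20]   # dBm, illustrative action set
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

def reward(power_dbm):
    # toy trade-off (stand-in, not the chapter's model): throughput grows
    # with power, but the macro-interference penalty grows faster
    return power_dbm - 0.08 * power_dbm ** 2

q = {p: 0.0 for p in POWER_LEVELS}   # single-state Q-table for one femto BS

random.seed(0)
for episode in range(2000):
    # epsilon-greedy action selection
    if random.random() < EPS:
        a = random.choice(POWER_LEVELS)
    else:
        a = max(q, key=q.get)
    r = reward(a)
    # temporal-difference (Q-learning) update, single-state case
    q[a] += ALPHA * (r + GAMMA * max(q.values()) - q[a])

print(max(q, key=q.get))  # power level the agent settles on → 5
```

In the multi-agent setting described in the text, each femto BS would run such an update in parallel, with the reward reflecting the aggregate interference seen at the macro cell.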
A small cell is generally defined as a low-powered radio access node operating in licensed or unlicensed spectrum, with a coverage range of ten meters to one or two kilometers, compared with a mobile macro cell's range of a few tens of kilometers. With the introduction of this new concept, a heterogeneous network (HetNet) constructed from different layers of small cells and large cells can deliver increased bandwidth, reduced latency, and higher uplink (UL) and downlink (DL) throughput to end users. Since 2009, small-cell-related topics have been studied in the standards evolution of 3GPP (The 3rd Generation Partnership Project) LTE (long-term evolution) and LTE-Advanced. The following sections in this chapter introduce the standardization progress of LTE and LTE-Advanced for small cells.
Definition of small cells in 3GPP LTE-Advanced
In 3GPP LTE and LTE-Advanced, small cells can generally be characterized as relay nodes, pico cells (also referred to as hotzone cells) controlled by a pico eNodeB, or femto cells controlled by a Home evolved NodeB (HeNB). The common features among relays, pico cells, and femto cells are low transmission power and independent eNB functionality, while their typical distinguishing features can be summarized as follows:
1. Relay node [1, 2]. A relay node (RN) is a network node connected wirelessly to a source eNodeB, called the donor eNodeB. Depending on how the relay node is implemented in the wireless network, the roles it plays also differ.
2. Pico cell. A pico cell is controlled by an evolved Node B (abbreviated eNodeB or eNB) and is usually planned and deployed by the network operator in a similar way as the macro cells [3]. The pico cell is usually open to all users (open subscriber group (OSG)) [4].
Flip-flops are among the most critical circuits in a modern digital system. As we have seen in previous chapters, flip-flops are central to all synchronous sequential logic. Registers (built from flip-flops) hold the state (both control and data state) of all of our finite-state machines. In addition to this central role in logic design, flip-flops also consume a large fraction of the die area, power, and cycle time of a typical digital system.
Until now, we have considered a flip-flop as a black box. In this chapter, we study the internal workings of the flip-flop. We derive the logic design of a typical D flip-flop and show how the timing properties introduced in Chapter 15 follow from this design.
We first develop the flip-flop design informally – following an intuitive argument. We start by developing the latch. The implementation of a latch follows directly from its specification. From the implementation we can then derive the setup, hold, and delay times of the latch. We then see how to build a flip-flop by combining two latches in a master–slave arrangement. The timing properties of the flip-flop can then be derived from its implementation.
Following this informal development, we then derive the design of a latch and flip-flop using flow-table synthesis. This serves both to reinforce the properties of these storage elements and to give a good example of flow-table synthesis. We introduce the concept of state equivalence during this derivation. This formal derivation can be skipped by a casual reader.
INSIDE A LATCH
A schematic symbol for a latch is shown in Figure 27.1(a), and waveforms illustrating its behavior and timing are shown in Figure 27.1(b). A latch has two inputs, data d and enable g, and one output, q. When the enable input is high, the output follows the input. When the enable input is low, the output holds its current state.
As shown in Figure 27.1(b), a latch, like a flip-flop, has a setup time ts and a hold time th. An input must be set up ts before the enable falls and held for th after the enable has fallen in order for the input value to be correctly stored. Latch delay is characterized by both the delay from the enable rising to the output changing, tdGQ, and the delay from the data input changing to the output changing, tdDQ.
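The behavior described above, together with the master–slave construction of a flip-flop from two latches, can be sketched behaviorally (timing parameters such as ts, th, tdGQ, and tdDQ are not modeled here):

```python
class Latch:
    """Behavioral model of a level-sensitive (transparent-high) latch.
    When g is high the output follows d; when g is low the output holds."""
    def __init__(self):
        self.q = 0
    def eval(self, d, g):
        if g:              # transparent: q follows d
            self.q = d
        return self.q      # opaque: q holds its last value

class FlipFlop:
    """Master-slave D flip-flop from two latches on opposite clock
    phases: the master is open while clk is low, the slave while clk
    is high, so q updates only around the rising edge of clk."""
    def __init__(self):
        self.master, self.slave = Latch(), Latch()
    def eval(self, d, clk):
        m = self.master.eval(d, not clk)
        return self.slave.eval(m, clk)

ff = FlipFlop()
ff.eval(1, 0)          # clk low: master samples d=1, slave holds
print(ff.eval(1, 1))   # rising edge: slave drives the captured 1 → 1
print(ff.eval(0, 1))   # d changes while clk is high: q unchanged → 1
```

The last call shows the edge-triggered behavior: once the clock is high, changes on d no longer reach q until the next rising edge.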
Combined with a technology upgrade to LTE/LTE-A, small cells are presented by many industry players (from operators to equipment vendors to analysts) as the most cost-effective solution to the known increase in mobile data demand [1–3]. As small cells (e.g., femto and pico) become the broadly adopted solution for adding capacity to modern, smart phone-dominated cellular networks, their numbers, and the areas where they will be deployed, will increase dramatically (Figure 17.1).
This reduction in cell size and the growth in cell numbers have required many new approaches to be developed to ensure that the next generation of networks is built to exploit costly, limited spectrum resources while maximizing capacity.
Such new methods consider the network design process in a holistic manner and ensure that sufficient computational power is available to remove the accuracy compromises inherent in traditional design processes. We call the set of techniques used to accomplish this large-scale network design, or L-SND for short.
The US cellular market has many good examples of planned small cell deployments that are to occur at a national level [1]. Results and data from such small cell designs are included in this chapter to illustrate the L-SND accuracy and scalability difficulties that have been overcome when compared to the limitations found with traditional methods.
Large-scale network design
Large-scale network design combines advanced radio-planning algorithms and technical solutions to evaluate vast numbers of sites without overloading engineering resources. Big data, cloud computing, and optimized metaheuristic algorithms are basic parts of the pool of components that must be combined to solve the small cell challenge.
The methodology described in this chapter has been implemented in Overture and the results shown have been shared by Keima Technologies.
Following model-based control theory [4, 5], L-SND aims to reduce the gap between business-case analysis and deployment. By utilizing the advantages of big data and cloud computing it is possible to seamlessly link marketing, planning, and deployment information. As the analysis evolves, robustness is built into the study by integration of feedback and new or updated big data components to review the objectives. In a real-life scenario, such systems will utilize practical and realistic information to evaluate outcomes and to maximize the advantages of economies of scale through a quicker (but more detailed) design phase.
Traditional mobile cellular networks are often designed so that a base station (BS) is kept working without interruption, disregarding the dynamic nature of user traffic, which results in inefficient use of energy. How to improve system energy efficiency in order to achieve green networking is the major concern of this chapter. Beginning with a comprehensive review of the related literature, we introduce a self-organized BS virtual small networking (VSN) protocol to adaptively manage BSs' working states based on the heterogeneity of user traffic in space and time. Motivated by the fact that low-traffic areas can apply a more aggressive BS-off strategy than hotspots, the proposed method divides BSs into groups according to similarity measurements so that the BS-off strategy can be performed more efficiently. Numerical results show that our proposals can substantially reduce the energy consumption of the entire cellular network.
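As an illustrative sketch only (this is not the chapter's VSN protocol; the traffic profiles, similarity measure, and threshold below are invented), grouping BSs by the similarity of their traffic profiles and then switching off redundant BSs in the low-traffic group might look like:

```python
def mean(xs):
    return sum(xs) / len(xs)

# toy 4-sample traffic profiles (load in [0, 1]) for six BSs
traffic = {
    "BS1": [0.9, 0.8, 0.9, 0.7],    # hotspot
    "BS2": [0.8, 0.9, 0.8, 0.8],
    "BS3": [0.2, 0.1, 0.2, 0.1],    # low traffic
    "BS4": [0.1, 0.2, 0.1, 0.2],
    "BS5": [0.15, 0.1, 0.2, 0.1],
    "BS6": [0.85, 0.9, 0.8, 0.9],
}

THRESHOLD = 0.5   # illustrative load threshold separating the groups

groups = {"hotspot": [], "low_traffic": []}
for bs, profile in traffic.items():
    key = "hotspot" if mean(profile) >= THRESHOLD else "low_traffic"
    groups[key].append(bs)

# low-traffic group: aggressive BS-off, keeping one BS awake for coverage
keep_on = groups["low_traffic"][:1]
switched_off = groups["low_traffic"][1:]
print(groups["low_traffic"])  # ['BS3', 'BS4', 'BS5']
print(switched_off)           # ['BS4', 'BS5']
```

A real grouping scheme would compare whole profiles (not just their means) and coordinate coverage hand-over before switching a BS off.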
Introduction
As demand increases for more energy-efficient technologies in wireless networks, to tackle critical issues such as the rising cost of power consumption and excessive greenhouse gas emissions, the concept of green networking has drawn great attention in recent years. In fact, during the last decades, the carbon footprint of the telecommunications industry has been growing exponentially due to the explosive rise in service requirements and subscribers' demands. The concern with reducing power consumption stems from both environmental and economic reasons. With respect to the environment, the information and communications technology (ICT) industry is responsible for approximately 2% of current global electricity demand, with 6% yearly growth in ICT-related carbon dioxide emissions (CO2-e) forecast until 2020 [1]. With respect to economics, the power consumption of operating a typical base station (BS), which needs to be connected to the electrical grid, may cost approximately $3,000/year, while off-grid BSs, generally running on diesel power generators in remote areas, may cost ten times more [2]. As more than 120,000 new BSs are deployed annually [3], there is still no end in sight for the growth of mobile communications, with large numbers of new subscribers and a constant desire to upgrade user equipment from 2G to 3G, and then to 4G.
Combinational logic circuits implement logical functions on a set of inputs. Used for control, arithmetic, and data steering, combinational circuits are the heart of digital systems. Sequential logic circuits (see Chapter 14) use combinational circuits to generate their next state functions.
In this chapter we introduce combinational logic circuits and describe a procedure to design these circuits given a specification. At one time, before the mid 1980s, such manual synthesis of combinational circuits was a major part of digital design practice. Today, however, designers write the specification of logic circuits in a hardware description language (like VHDL) and the synthesis is performed automatically by a computer-aided design (CAD) program.
We describe the manual synthesis process here because every digital designer should understand how to generate a logic circuit from a specification. Understanding this process allows the designer to better use the CAD tools that perform this function in practice, and, on rare occasions, to generate critical pieces of logic manually.
COMBINATIONAL LOGIC
As illustrated in Figure 6.1, a combinational logic circuit generates a set of outputs whose state depends only on the current state of the inputs. Of course, when an input changes state, some time is required for an output to reflect this change. However, except for this delay the outputs do not reflect the history of the circuit. With a combinational circuit, a given input state will always produce the same output state regardless of the sequence of previous input states. A circuit whose output depends on previous input states is called a sequential circuit (see Chapter 14).
For example, a majority circuit, a logic circuit that accepts n inputs and outputs a 1 if at least ⌊n/2+1⌋ of the inputs are 1, is a combinational circuit. The output depends only on the number of 1s in the present input state. Previous input states do not affect the output.
On the other hand, a circuit that outputs a 1 if the number of 1s in the n inputs is greater than the previous input state is sequential (not combinational). A given input state, e.g., ik = 011, can result in o = 1 if the previous input was ik−1 = 010, or it can result in o = 0 if the previous input was ik−1 = 111.
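The majority circuit and the combinational property are easy to demonstrate in a few lines: the function below has no hidden state, so a given input state always maps to the same output state, no matter what inputs came before.

```python
from itertools import product

def majority(inputs):
    """Combinational majority: output 1 iff at least floor(n/2)+1 inputs are 1."""
    n = len(inputs)
    return 1 if sum(inputs) >= n // 2 + 1 else 0

# Combinational property: exhaustively check that the same input state
# always yields the same output state (there is no internal state).
for bits in product([0, 1], repeat=3):
    assert majority(bits) == majority(bits)

print(majority((0, 1, 1)))  # 2 of 3 inputs high → 1
print(majority((0, 1, 0)))  # 1 of 3 inputs high → 0
```

A sequential circuit could not be written this way: it would need a stored value (the previous input state) carried between calls, as in the counting example above.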
In contrast to voice traffic, wireless data traffic is mostly asymmetric and time-variant, requiring a technique that dynamically divides the uplink (UL) and downlink (DL) resources. In typical cellular systems, the amounts of UL and DL resource are predetermined. In a typical frequency-division duplex (FDD) system, UL and DL transmissions use distinct frequency bands, which is especially efficient for symmetric traffic because possible interference between UL and DL transmissions is avoided. However, an FDD system has difficulty adjusting its UL and DL resources under asymmetric traffic, since the resource division is fixed by the duplexer hardware. A typical time-division duplex (TDD) system is capable of adjusting the UL and DL transmissions in the time domain. However, because synchronization is required to eliminate interference, the UL and DL resource split is still fixed. To support asymmetric and time-variant traffic, LTE provides small cell base stations (BSs) with dynamic TDD through seven TDD UL/DL configurations, enabling the BSs to dynamically change the ratio of UL and DL resources to handle time-variant traffic. Nevertheless, such a scheme also induces two types of interference: BS–BS interference and MS–MS interference. In this chapter the interference issues and several interference mitigation methods are discussed extensively.
Dynamic TDD system overview
Introduction
To divide the UL and DL traffic resources, some typical communication systems apply FDD, where different frequency bands are used for transmitting and receiving; the benefit is that no interference is incurred between UL and DL signals. For symmetric UL and DL traffic (e.g., voice service), the FDD system is suitable since the BS is assigned the same amount of radio resource in the UL and DL. For wireless data services, however, FDD is not flexible enough to handle the dynamic UL/DL traffic, because in these cases the UL and DL traffic is asymmetric and time-variant.
Compared to FDD, TDD divides the UL and DL resources in the time domain, where the division can be adjusted easily, giving it greater flexibility in handling dynamic UL/DL traffic. In a TDD system, the boundary between the UL and DL duty cycles is adaptively adjustable according to service requirements.
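As an illustration, the seven LTE TDD UL/DL configurations (3GPP TS 36.211) can be written as ten-subframe patterns (D = downlink, U = uplink, S = special subframe), and a BS can pick the configuration whose DL share best matches its measured traffic asymmetry. The selection rule below is illustrative, not from the standard.

```python
# The seven LTE TDD UL/DL configurations, one letter per subframe:
# D = downlink, U = uplink, S = special subframe.
TDD_CONFIGS = {
    0: "DSUUUDSUUU",
    1: "DSUUDDSUUD",
    2: "DSUDDDSUDD",
    3: "DSUUUDDDDD",
    4: "DSUUDDDDDD",
    5: "DSUDDDDDDD",
    6: "DSUUUDSUUD",
}

def pick_config(dl_fraction):
    """Illustrative rule: choose the configuration whose share of DL
    subframes is closest to the measured DL fraction of offered traffic."""
    def dl_share(cfg):
        return TDD_CONFIGS[cfg].count("D") / 10
    return min(TDD_CONFIGS, key=lambda c: abs(dl_share(c) - dl_fraction))

print(pick_config(0.4))  # mostly uplink traffic  → 1
print(pick_config(0.9))  # heavily downlink traffic → 5
```

In practice the choice is also constrained by the BS–BS and MS–MS interference issues discussed in this chapter, since neighboring cells on different configurations transmit and receive in conflicting subframes.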