Triadic closure has been conceptualized and measured in a variety of ways, most famously the clustering coefficient. Existing extensions to affiliation networks, however, are sensitive to repeat group attendance, which does not reflect common interpersonal interpretations of triadic closure. This paper proposes a measure of triadic closure in affiliation networks designed to control for this factor, which manifests in bipartite models as biclique proliferation. To avoid arbitrariness, the paper introduces a triadic framework for affiliation networks, within which a range of measures can be defined; it then presents a set of basic axioms that suffice to narrow this range to a single measure. An instrumental assessment compares the proposed measure and two existing measures for reliability, validity, redundancy, and practicality. All three measures are then used in an investigation of three empirical social networks, which illustrates their differences.
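For readers unfamiliar with the baseline notion, the sketch below computes the classical global clustering coefficient on the one-mode projection of a toy affiliation network. It is only meant to make the underlying concept concrete: it is written in Python, the data are invented, and it does not reproduce the paper's proposed measure or the existing bipartite measures it compares against; it merely illustrates the classical coefficient that those measures generalize.

```python
from itertools import combinations

# Toy affiliation (actor x group) data; all names are hypothetical.
memberships = {
    "alice": {"committee_A", "committee_B"},
    "bob":   {"committee_A"},
    "carol": {"committee_A", "committee_B"},
    "dave":  {"committee_B", "committee_C"},
    "erin":  {"committee_C"},
}

# One-mode projection: two actors are tied if they share at least one group.
actors = list(memberships)
ties = {a: set() for a in actors}
for a, b in combinations(actors, 2):
    if memberships[a] & memberships[b]:
        ties[a].add(b)
        ties[b].add(a)

# Classical global clustering coefficient (transitivity) on the projection:
# the fraction of wedges (paths a-b-c centered at b) that are closed.
wedges = closed = 0
for b in actors:
    for a, c in combinations(sorted(ties[b]), 2):
        wedges += 1
        if c in ties[a]:
            closed += 1

print("global clustering coefficient:", closed / wedges if wedges else float("nan"))
```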
For positive integers $n$ and $q$ and a monotone graph property $\mathcal{A}$, we consider the two-player, perfect information game WC$(n, q, \mathcal{A})$, which is defined as follows. The game proceeds in rounds. In each round, the first player, called Waiter, offers the second player, called Client, $q + 1$ edges of the complete graph $K_n$ which have not been offered previously. Client then chooses one of these edges which he keeps and the remaining $q$ edges go back to Waiter. If, at the end of the game, the graph which consists of the edges chosen by Client satisfies the property $\mathcal{A}$, then Waiter is declared the winner; otherwise Client wins the game. In this paper we study such games (also known as Picker–Chooser games) for a variety of natural graph-theoretic parameters, such as the size of a largest component or the length of a longest cycle. In particular, we describe a phase transition type phenomenon which occurs when the parameter $q$ is close to $n$ and is reminiscent of phase transition phenomena in random graphs. Namely, we prove that if $q \geq (1 + \epsilon)n$, then Client can avoid components of order $c\epsilon^{-2} \ln n$ for some absolute constant $c > 0$, whereas for $q \leq (1 - \epsilon)n$, Waiter can force a giant, linearly sized component in Client's graph. In the second part of the paper, we prove that Waiter can force Client's graph to be pancyclic for every $q \leq cn$, where $c > 0$ is an appropriate constant. Note that this behaviour is in stark contrast to the threshold for pancyclicity and Hamiltonicity of random graphs.
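To make the rules concrete, here is a minimal, hypothetical simulation of the game mechanics in Python with random, non-strategic players; the paper's results, of course, concern optimal play by Waiter and Client, and the example property below is invented purely for illustration.

```python
import random
from itertools import combinations

def waiter_client_game(n, q, graph_property, seed=0):
    """Play one WC(n, q, A) game with random, non-strategic players.

    Each round Waiter offers q + 1 previously unoffered edges of K_n and
    Client keeps exactly one; the other q edges are simply discarded.
    Returns True if Waiter wins, i.e. if Client's final graph satisfies
    the property.  (A final round with fewer than q + 1 unoffered edges
    is skipped in this toy sketch.)
    """
    rng = random.Random(seed)
    unoffered = list(combinations(range(n), 2))  # edge set of K_n
    rng.shuffle(unoffered)
    client_edges = []
    while len(unoffered) >= q + 1:
        offer = [unoffered.pop() for _ in range(q + 1)]  # Waiter's (random) offer
        client_edges.append(rng.choice(offer))           # Client's (random) choice
    return graph_property(client_edges)

# Toy property, invented for illustration: "Client's graph has at least 10 edges".
print(waiter_client_game(n=20, q=3, graph_property=lambda edges: len(edges) >= 10))
```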
How many strict local maxima can a real quadratic function on $\{0, 1\}^n$ have? Holzman conjectured a maximum of $\binom{n}{\lfloor n/2 \rfloor}$. The aim of this paper is to prove this conjecture. Our approach is via a generalization of Sperner's theorem that may be of independent interest.
The objective of this note is to study the distribution of the running maximum of the level in a level-dependent quasi-birth-death process. By considering this running maximum at an exponentially distributed “killing epoch” T, we devise a technique to accomplish this, relying on elementary arguments only; importantly, it yields the distribution of the running maximum jointly with the level and phase at the killing epoch. We also point out how our procedure can be adapted to facilitate the computation of the distribution of the running maximum at a deterministic (rather than an exponential) epoch.
Utilising both key mathematical tools and state-of-the-art research results, this text explores the principles underpinning large-scale information processing over networks and examines the crucial interaction between big data and its associated communication, social and biological networks. Written by experts in the diverse fields of machine learning, optimisation, statistics, signal processing, networking, communications, sociology and biology, this book employs two complementary approaches: first, analysing how the underlying network constrains upper-layer collaborative big data processing, and second, examining how big data processing may boost performance in various networks. Unifying the broad scope of the book is the rigorous mathematical treatment of the subjects, which is enriched by in-depth discussion of future directions and numerous open-ended problems that conclude each chapter. Readers will be able to master the fundamental principles for dealing with big data over large systems, making it essential reading for graduate students, scientific researchers and industry practitioners alike.
The ever-increasing use of smartphones, multimedia applications, and social networking, along with the demand for higher data rates, ubiquitous coverage, and better quality of service, poses new challenges to the traditional mobile wireless network paradigm, which depends on macro cells for service delivery. Small cell networks (SCNs) have emerged as an attractive paradigm and hold great promise for future wireless communication systems (5G systems). SCNs encompass a broad variety of cell types, such as micro, pico, and femto cells, as well as advanced wireless relays and distributed antenna systems. SCNs co-exist with the macro cellular network and bring the network closer to the user equipment. SCNs require low power, incur low cost, and provide increased spatial reuse. Data traffic offloading eases the load on the expensive macro cells, with significant savings expected for network operators using small cells.
As the demand for increased bandwidth rages on, SCNs first emerged in dense urban areas, mainly to provide coverage and capacity. They have now gained momentum and are expected to dominate in the coming years, with large-scale rollout – either planned or ad hoc – and the development of 5G systems with many small cell components. Already, the number of “small cells” in the world exceeds the total number of traditional mobile base stations. SCNs are also envisioned to pave the way for new services. However, there are many challenges in the design and deployment of small cell networks that must be addressed for them to be technically and commercially successful. This book presents various concepts in the design, analysis, optimization, and deployment of small cell networks, using a treatment suitable for both pedagogical and practical purposes.
This book is an excellent source for understanding small cell network concepts, associated problems, and potential solutions in next-generation wireless networks. It covers topics ranging from fundamentals to advanced material, including deployment issues, environmental concerns, optimized solutions, and standards activities in emerging small cell networks. New trends, challenges, and research results are also provided. Written by leading experts in the field from academia and industry around the world, it is a valuable resource that addresses both core and specialized issues in these areas. It offers wide coverage of topics, while balancing the treatment to suit the needs of both first-time learners of the concepts and specialists in the field.
We present a set of small cell field trial experiments conducted at 2.6 GHz and focused on coverage and capacity within multi-floor office buildings. LTE pico cells deployed indoors as well as LTE small cells deployed outdoors are considered. The latter rely on low emission power levels coupled with intelligent ways of generating transmission beams with various directivity levels by means of adaptive antenna arrays. Furthermore, we introduce an analytical three-dimensional (3D) performance prediction framework, which we calibrate and validate against field measurements. The framework provides detailed performance levels at any point of interest within a building; it allows us to determine the minimum number of small cells required to deliver desirable coverage and capacity levels, their most desirable locations subject to deployment constraints, transmission power levels, antenna characteristics (beam shapes), and antenna orientations (azimuth, tilt) to serve a targeted geographical area. In addition, we present specialized solutions for LTE small cell deployment within hotspot traffic venues, such as stadiums, through design and deployment feasibility analysis.
Introduction
Small cells are low-cost, low-power base stations designed to improve the coverage and capacity of wireless networks. By deploying small cells on top of, and in complement to, the traditional macro cellular networks, operators are in a much better position to provide end users with a more uniform and improved quality of experience (QoE). Small cell deployment is subject to service delivery requirements, as well as to the actual constraints specific to the targeted areas. For good uniformity of service in populated areas, where the presence of buildings is the main cause of significant radio signal attenuation, small cells may need to be closely spaced, e.g., within a couple of hundred meters of each other. Naturally, the performance of small cells is highly dependent on environment-specific characteristics, such as the materials used for building construction, their specific propagation properties, and the surroundings. It is therefore particularly important to have a proper characterization of the environment where small cells are deployed.
This chapter focuses on in-building performance and feasibility of LTE small cells through measurements, taking as reference both outdoor small cell and indoor pico cell deployments. We created scenarios where wireless connectivity within a target building is offered either by small cells located on the exterior of other buildings (small cells with outdoor characteristics) or simply by small cells located within the target building (pico cells with indoor characteristics).
Factoring a state machine is the process of splitting the machine into two or more simpler machines. Factoring can greatly simplify the design of a state machine by separating orthogonal aspects of the machine into separate FSMs where they can be handled independently. The separate FSMs communicate via logic signals. One FSM provides input control signals to another FSM and senses its output status signals. Such factoring, if done properly, makes the machine simpler and also makes it easier to understand and maintain – by separating issues.
In a factored FSM, the state of each sub-machine represents one dimension of a multidimensional state space. Collectively the states of all of the sub-machines define the state of the overall machine – a single point in this state space. The combined machine has a number of states that is equal to the product of the number of states of the individual sub-machines – the number of points in the state space. With individual sub-machines having a few tens of states, it is not unusual for the overall machine to have thousands to millions of states. It would be impractical to handle such a large number of states without factoring.
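As a rough, language-agnostic illustration of this state-space arithmetic (a Python sketch with made-up sub-machines, not an example from the book), consider two small sub-FSMs and the size of their product state space:

```python
from itertools import product

# Two hypothetical sub-machines of a factored FSM, written as
# {state: {input: next_state}}; the machines and input names are made up
# purely to illustrate the state-space arithmetic.
mode_fsm = {                               # 3 states
    "IDLE":  {"go": "RUN"},
    "RUN":   {"stop": "IDLE", "fault": "ERROR"},
    "ERROR": {"reset": "IDLE"},
}
timer_fsm = {                              # 4 states
    "T0": {"tick": "T1"}, "T1": {"tick": "T2"},
    "T2": {"tick": "T3"}, "T3": {"tick": "T0"},
}

# The equivalent flat (unfactored) machine lives in the product state space.
flat_states = list(product(mode_fsm, timer_fsm))
print(len(mode_fsm), "x", len(timer_fsm), "=", len(flat_states))  # 3 x 4 = 12

# With five sub-machines of 20 states each, a flat machine would need
# 20**5 = 3,200,000 states -- hence the need for factoring.
print(20 ** 5)
```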
We have already seen one form of factoring in Section 16.3 where we developed a state machine with a datapath component and a control component. In effect, we factored the total state of the machine into a datapath portion and a control portion. Here we generalize this concept by showing how the control portion itself can be factored.
In this chapter, we illustrate factoring by working two examples. In the first example, we start with a flat FSM and factor it into multiple simpler FSMs. In the second example we derive a factored FSM directly from the specification, without bothering with the flat FSM. Most real FSMs are designed using the latter method. A factoring is usually a natural outgrowth of the specification of a machine. It is rarely applied to an already flat machine.
Functional programming – the use and evaluation of functions as a programming paradigm – has a long and productive history in the world of programming languages. Lisp came about in the 1950s in the search for a convenient language to represent mathematical concepts in programs, borrowing from the lambda calculus of the logician Alonzo Church. More recent languages have in turn embraced many aspects of Lisp – in addition to Lisp's offspring such as Scheme and Haskell, you will find elements of functional constructs in Java, Python, Ruby, and Perl. Mathematica itself has clear bloodlines to Lisp, including the ability to operate on data structures such as lists as single objects and in its representation of mathematical properties through rules.
Mathematica functions, unlike those in many other languages, are considered “first-class” objects, meaning that they can be used as arguments to other functions, they can be returned as values, and they can be part of many other kinds of data objects such as arrays. In addition, you can create and use functions at runtime, that is, when you evaluate an expression. This functional style of programming distinguishes Mathematica from traditional procedural languages like C and Fortran. Facility with functional programming is therefore essential for taking full advantage of the Mathematica language to solve your computational tasks.
We start with some of the most useful functional programming constructs – higher-order functions such as Map, Apply, Thread, Select, and Outer. We then introduce iteration, a mechanism by which the output of one computation is fed as input into the next.
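The constructs named above are Mathematica built-ins; for readers coming from other languages, the following Python snippet sketches rough analogues (an informal correspondence, not an exact translation) to convey the flavor of these higher-order operations and of iteration:

```python
from functools import reduce

xs = [1, 2, 3, 4, 5]

# Map[f, list]: apply f to every element.
squares = [x * x for x in xs]                       # [1, 4, 9, 16, 25]

# Apply[Plus, list]: hand the whole list to a function as its arguments.
total = reduce(lambda a, b: a + b, xs)              # 15

# Select[list, predicate]: keep the elements that pass a test.
evens = [x for x in xs if x % 2 == 0]               # [2, 4]

# Thread[f[list1, list2]]: combine corresponding elements of two lists.
pair_sums = [a + b for a, b in zip(xs, squares)]    # [2, 6, 12, 20, 30]

# Outer[f, list1, list2]: apply f to every pair (a generalized outer product).
times_table = [[a * b for b in xs] for a in xs]

# Iteration in the sense described above: feed each output back in as input.
def nest_list(f, x, n):
    results = [x]
    for _ in range(n):
        results.append(f(results[-1]))
    return results

print(nest_list(lambda v: v * v, 2, 4))             # [2, 4, 16, 256, 65536]
```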
The tremendous increase in bandwidth-craving mobile applications (e.g., video streaming, video chatting, and online gaming) has posed enormous challenges to the design of future wireless networks. Deploying small cells (e.g., pico, micro, and femto) has been shown to be an efficient and cost-effective solution to support this constantly rising demand, since the smaller cell size can provide higher link quality and more efficient spatial reuse [1]. Small cells can also deliver other benefits, such as offloading macro network traffic and providing service to coverage holes and regions with poor signal reception (e.g., macro cell edges). Following this trend, the evolving 5G networks [2] are expected to be composed of hundreds of interconnected heterogeneous small cells.
Figure 11.1 gives an illustration of a heterogeneous network (HetNet) where a macro cell is underlaid with different types of small cells. Unlike the carefully planned traditional network, the architecture of a HetNet is more random and unpredictable due to the increased density of small cells and their impromptu manner of deployment. In this case, the manual intervention and centralized control used in traditional network management will be highly inefficient, time consuming, and expensive, and therefore will not be applicable to dense heterogeneous small cell networks. Instead, self-organization has been proposed as an essential feature of future small cell networks [3, 4].
The motivations for enabling self-organization in small cell networks are explained below.
• Numerous network devices with different characteristics are expected to be interconnected in future wireless networks. These devices are also expected to have “plug and play” capability. Therefore, the initial pre-operational configuration has to be done with minimal expert involvement.
• With the emergence of small cells, the spatio-temporal dynamics of the network have become more unpredictable than in legacy systems due to the unplanned nature of small cell deployment. Therefore, intelligent adaptation of the network nodes is necessary. That is, self-organizing small cells need to learn from the environment and adapt to the network dynamics to achieve the desired performance.
After reading to this point in the book, you now have the skills to design complex combinational and sequential logic modules. However, if someone were to ask you to design a DVD player, a computer system, or an Internet router, you would realize that each of these is not a single finite-state machine (or even a single datapath with an associated finite-state controller). Rather, a typical system is a collection of modules, each of which may include several datapaths and finite-state controllers. These systems must first be decomposed into simple modules before the design and analysis skills you have learned in the previous chapters can be applied. The problem then becomes how to partition the system to the level where the design becomes manageable. This system-level design is one of the most interesting and challenging aspects of digital systems.
SYSTEM DESIGN PROCESS
The design of a system involves the following steps.
Specification The most important step in designing any system is deciding – and clearly specifying in writing – what you are going to build. We discuss specifications in more detail in Section 21.2.
Partitioning Once the system has been specified, the main task in system design is dividing the system into manageable subsystems or modules. This is a process of divide and conquer. The overall system is divided into subsystems that can then be designed (conquered) separately. At each stage, the subsystems should be specified to the same level of detail as the overall system was during our first step. As described in Section 21.3, we can partition a system by state, task, or interface.
Interface specification It is particularly important that the interfaces between subsystems be described in detail. With good interface specifications, individual modules can be developed and verified independently. When possible, interfaces should be independent of module internals – allowing modules to be modified without affecting the interface, or the design of neighboring modules.
Timing design Early in the design of a system, it is important to describe the timing and sequencing of operations. In particular, as work flows between modules, the sequencing of which module does a particular task on a particular cycle must be worked out to ensure that the right data come together at the correct place and time. This timing design also drives the performance tuning step described below.
In its brief history, the world of programming has undergone a remarkable evolution. Those of us old enough to remember boxes of punch cards and batch jobs couldn't be happier about some of these changes. One could argue that the limitations, physical and conceptual, of the early programming environments helped to focus that world in a very singular manner. Eventually, efforts to overcome those limitations led to a very visible and broad transformation of the world of computer programming. We now have a plethora of languages, paradigms, and environments to choose from. At times this embarrassment of riches can be a bit overwhelming, but I think most would agree that we are fortunate to have such variety in programming languages with which to do our work.
I learned about Mathematica as I suspect many people have – after I had used several languages over the years, a colleague introduced me to a new and very different tool, Mathematica. I soon realized that it was going to help me in my work in ways that previous languages could not. Perhaps the most notable feature was how quickly I could translate the statement of a problem into a working program. This was no doubt due to having a functional style of programming at my fingertips, but also to being able to think in terms of rules and patterns, which seemed to fit well with my background in mathematics.
Well, Mathematica is no longer a young upstart in the programming world. It has been around now for over 25 years, making it, if not an elder statesman, certainly a mature and familiar player. And one that is used by people in fields as varied as linguistics, bioinformatics, engineering, and information theory. Like myself, many people are first introduced to it in an academic setting. Many more are introduced through a colleague at work. Still others have seen it mentioned in various media and are curious as to what it is all about. After using it to do basic or more advanced computation, most users soon find the need to extend the default set of tools that come with Mathematica. Programming is the ticket.
So what makes Mathematica such a useful programming tool? First, it is a well-designed language, one whose internal logic will be quite apparent as you get to know it.
This book is intended to teach an undergraduate student to understand and design digital systems. It teaches the skills needed for current industrial digital system design using a hardware description language (VHDL) and modern CAD tools. Particular attention is paid to system-level issues, including factoring and partitioning digital systems, interface design, and interface timing. Topics needed for a deep understanding of digital circuits, such as timing analysis, metastability, and synchronization, are also covered. Of course, we cover the manual design of combinational and sequential logic circuits. However, we do not dwell on these topics because there is far more to digital system design than designing such simple modules.
Upon completion of a course using this book, students should be prepared to practice digital design in industry. They will lack experience, but they will have all of the tools they need for contemporary practice of this noble art. The experience will come with time.
This book has grown out of more than 25 years of teaching digital design to undergraduates (CS181 at Caltech, 6.004 at MIT, EE121 and EE108A at Stanford). It is also motivated by 35 years of experience designing digital systems in industry (Bell Labs, Digital Equipment, Cray, Avici, Velio Communications, Stream Processors, and NVIDIA). It combines these two experiences to teach what students need to know to function in industry in a manner that has been proven to work on generations of students. The VHDL guide in Appendix B is informed by nearly a decade of teaching VHDL to undergraduates at UBC (EECE 353 and EECE 259).
We wrote this book because we were unable to find a book that covered the system-level aspects of digital design. The vast majority of textbooks on this topic teach the manual design of combinational and sequential logic circuits and stop. While most texts today use a hardware description language, the vast majority teach a TTL-esque design style that, while appropriate in the era of 7400 quad NAND gate parts (the 1970s), does not prepare a student to work on the design of a three-billion-transistor GPU. Today's students need to understand how to factor a state machine, partition a design, and construct an interface with correct timing. We cover these topics in a simple way that conveys insight without getting bogged down in details.
Memory is widely used in digital systems for many different purposes. In a processor, DDR SDRAM chips are used for main memory, and SRAM arrays are used to implement caches, translation lookaside buffers, branch prediction tables, and other internal storage. In an Internet router (Figure 23.3(b)), memory is used for packet buffers, for routing tables, to hold per-flow data, and to collect statistics. In a cellphone SoC, memory is used to buffer video and audio streams.
A memory is characterized by three key parameters: its capacity, its latency, and its throughput. Capacity is the amount of data stored, latency is the amount of time taken to access data, and throughput is the number of accesses that can be done in a fixed amount of time.
A memory in a system, e.g., the packet buffer in a router, is often composed of multiple memory primitives: on-chip SRAM arrays or external DRAM chips. The number of primitives needed to realize a memory is governed by its capacity and its throughput. If one primitive does not have sufficient capacity to realize the memory, multiple primitives must be used – with just one primitive accessed at a time. Similarly, if one primitive does not have sufficient bandwidth to provide the required throughput, multiple primitives must be used in parallel – via duplication or interleaving.
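A back-of-the-envelope sizing calculation along these lines might look as follows; the per-primitive capacity and access-rate figures below are assumptions chosen for illustration, not vendor specifications.

```python
from math import ceil

def primitives_needed(capacity_bits, accesses_per_sec,
                      prim_capacity_bits, prim_accesses_per_sec):
    """Rough count of memory primitives needed to realize a memory.

    Capacity sets one lower bound (extra primitives with only one accessed
    at a time); throughput sets another (primitives operated in parallel by
    duplication or interleaving).  The memory needs at least the larger.
    """
    for_capacity = ceil(capacity_bits / prim_capacity_bits)
    for_throughput = ceil(accesses_per_sec / prim_accesses_per_sec)
    return max(for_capacity, for_throughput)

# Hypothetical packet buffer: 8 Gb of storage at 200 M accesses/s, built from
# DRAM chips assumed to hold 4 Gb and sustain 50 M random accesses/s each.
print(primitives_needed(8e9, 200e6, 4e9, 50e6))  # -> 4 (throughput-limited)
```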
MEMORY PRIMITIVES
The vast majority of memories in digital systems are implemented from two basic primitives: on-chip SRAM arrays and external DDR SDRAM chips. Here we will consider these memory primitives as black boxes, discussing their properties and how to interface to them. It is beyond the scope of this book to look inside the box and study their implementation.
SRAM arrays
On-chip SRAM arrays are useful for building small, fast, dedicated memories integrated near the logic that produces and consumes the data they store. While the total capacity of the SRAM that can be realized on one chip (about 400 Mb) is small compared with a single 4 Gb DRAM chip, these arrays can be accessed in a single clock cycle, compared with 25 cycles or more for a DRAM access.
In this appendix we present a few guidelines for VHDL coding style. These guidelines grew out of experience with both the VHDL and Verilog description languages, and they were developed over many years of teaching students digital design and managing design projects in industry, where they have been proven to reduce effort and to produce better final designs that are more readable and maintainable. The many examples of VHDL throughout this book serve as examples of this style. The style presented here is intended for synthesizable designs – VHDL design entities that ultimately map to real hardware. A very different style is used in testbenches. This appendix is not intended to be a reference manual for VHDL; the following appendix provides a brief summary of the VHDL syntax used in this book, and many other references are available online. Rather, this appendix gives a set of principles and style rules that help designers write correct, maintainable code. A reference manual explains what legal VHDL code is; this appendix explains what good VHDL code is. We give examples of good code and bad code, all of which are legal.
BASIC PRINCIPLES
We start with a few basic principles on which our VHDL style is based. The style presented is essentially a VHDL-2008 equivalent to a set of Verilog style guidelines based upon our experience over many years of teaching students digital design and managing design projects in industry combined with nearly a decade of experience teaching earlier versions of VHDL.
Know where your state is Every bit of state in your design should be explicitly declared. In our style, all state is in explicit flip-flop or register components, and all other portions are purely combinational. This approach avoids a host of problems that arise when writing sequential statements directly within an “if rising_edge(clk) then” statement inside a process. It also makes it much easier to detect inferred latches that occur when not all signals are assigned in all branches of a conditional statement.
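Although the rule is stated for VHDL, the principle is language-independent. As a loose illustration only (a Python sketch, not the book's VHDL style), the following keeps every bit of state in one explicitly declared record and expresses the next-state and output logic as pure functions of that state and the inputs:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class State:                 # every bit of state, declared explicitly
    running: bool = False
    count: int = 0

def next_state(s: State, start: bool, stop: bool) -> State:
    """Purely combinational: computes the next State from inputs and state."""
    running = (s.running or start) and not stop
    count = s.count + 1 if running else 0
    return State(running=running, count=count)

def output(s: State) -> bool:
    """Purely combinational (Moore-style) output: no hidden state here."""
    return s.running and s.count >= 3

# The register update is the only place the state changes ("the clock edge").
s = State()
for start, stop in [(True, False), (False, False), (False, False),
                    (False, False), (False, True)]:
    s = next_state(s, start, stop)
    print(s, output(s))
```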
Understand what your design entities will synthesize to When you write a design entity, you should have a good idea what logic will be generated. If your design entity is described structurally, wiring together other components, the outcome is very predictable. Small behavioral design entities and arithmetic blocks are also very predictable.
Verification and test are engineering processes that complement design. Verification is the task of ensuring that a design meets its specification. On a typical digital systems project, more effort is expended on verification than on the design itself. Because of the high cost and long delays involved in fabricating a chip, thorough verification is essential to ensure that the chip works the first time. A design error that is not caught during verification would result in costly delays and retooling.
Testing is performed to ensure that a particular instantiation of a design functions properly. When a chip is fabricated, some transistors, wires, or contacts may be faulty. A manufacturing test is performed to detect these faults so the device can be repaired or discarded.
DESIGN VERIFICATION
Simulation is the primary tool used to verify that a design meets its specification. The design is simulated using a number of tests that provide stimulus to the unit being tested and check that the design produces correct outputs. The VHDL testbenches we have seen throughout this book are examples of such tests.
Verification coverage
The verification challenge amounts to ensuring that the set of test patterns, the test suite, written to verify a design is complete. We measure the degree of completion of a test suite by its coverage of the specification and of the implementation. We typically insist on 100% coverage of both specification features and implementation lines or edges to consider the design verified.
The specification coverage of a set of tests is measured by determining the fraction of features in the specification that are exercised and checked by the tests. For example, suppose you have developed a digital clock chip that includes a day/date and an alarm function. Table 20.1 gives a partial list of features to be tested. Even for something as simple as a digital clock, the list of features can easily run into the hundreds. For a complex chip it is not unusual to have $10^5$ or more features. Each test verifies one or more features. As tests are written, the features covered by each test are checked off.
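The bookkeeping itself is simple; a hypothetical sketch (in Python, with invented feature names loosely echoing the digital-clock example, not the book's tooling) might track specification coverage like this:

```python
# Hypothetical feature list and test suite; all names are invented for
# illustration of the coverage arithmetic.
features = {
    "set_time", "set_date", "leap_year_rollover",
    "alarm_set", "alarm_trigger", "alarm_snooze",
}

tests = {
    "test_basic_time": {"set_time"},
    "test_calendar":   {"set_date", "leap_year_rollover"},
    "test_alarm":      {"alarm_set", "alarm_trigger"},
}

covered = set().union(*tests.values())
coverage = len(covered & features) / len(features)
print(f"specification coverage: {coverage:.0%}")   # 5 of 6 features, ~83%
print("still uncovered:", features - covered)      # {'alarm_snooze'}
```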
Teaching functional programming as a second programming paradigm is often difficult as students can have strong preconceptions about programming. When most of these preconceived ideas fail to be confirmed, functional programming may be seen as an unnecessarily difficult topic. A typical topic that causes such difficulties is the language of types employed by many modern functional languages. In this paper, we focus on addressing this difficulty through the use of step-by-step calculations of type expressions. The outcome of the study is an elaboration of a worked example format and a methodical approach for teaching types to beginner functional programmers.
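As an illustration of the kind of step-by-step calculation the paper advocates (our own example in Haskell-like notation, not one drawn from the paper), one can compute the type of the expression map not as follows:

\begin{align*}
\texttt{map} &:: (a \to b) \to [a] \to [b]\\
\texttt{not} &:: \mathrm{Bool} \to \mathrm{Bool}\\
\text{unifying } a \to b \text{ with } \mathrm{Bool} \to \mathrm{Bool}
  &\;\Longrightarrow\; a \mapsto \mathrm{Bool},\ b \mapsto \mathrm{Bool}\\
\texttt{map not} &:: [\mathrm{Bool}] \to [\mathrm{Bool}]
\end{align*}

Applying map, whose first argument has type $a \to b$, to not, of type $\mathrm{Bool} \to \mathrm{Bool}$, forces both type variables to $\mathrm{Bool}$, so the partial application takes a list of Booleans to a list of Booleans.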