The specification for a digital system typically includes not only its function, but also the delay and power (or energy) of the system. For example, a specification for an adder describes (i) the function, that the output is to be the sum of the two inputs; (ii) the delay, that the output must be valid within 1 ns after the inputs are stable; and (iii) its energy, that each add consumes no more than 2 pJ. In this chapter we shall derive simple methods to estimate the delay and power of CMOS logic circuits.
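As an illustrative back-of-the-envelope check of such numbers (assuming, for this example only, that the adder actually performs one add every 1 ns, i.e. operates at 1 GHz, which the specification permits but does not require), the corresponding average power would be

\[
P = E_{\text{op}} \times f = (2\,\text{pJ}) \times (10^{9}\,\text{s}^{-1}) = 2\,\text{mW}.
\]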
DELAY OF STATIC CMOS GATES
As illustrated in Figure 5.1, the delay of a logic gate, tp, is the time from when the input of the gate crosses the 50% point between V0 and V1 to when the output of the gate crosses the same point. Specifying delay in this manner allows us to compute the delay of a chain of logic gates by simply summing the delays of the individual gates. For example, in Figure 5.1 the delay from a to c is the sum of the delay of the two gates. The 50% point on the output of the first inverter is also the 50% point on the input of the second inverter.
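Stated as an equation (an illustrative restatement of Figure 5.1, writing tp1 and tp2 for the delays of the two gates and tp,i for the i-th gate of a chain of N gates):

\[
t_{p,a\to c} = t_{p1} + t_{p2}, \qquad t_{p,\text{chain}} = \sum_{i=1}^{N} t_{p,i}.
\]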
Because the resistance of the PFET pull-up network may be different from that of the NFET pull-down network, a CMOS gate may have a rising delay that differs from its falling delay. When the two delays differ, we denote the rising delay, the delay from a falling input to a rising output, as tpr, and the falling delay as tpf, as shown in Figure 5.1.
We can use the simple switch model derived in Section 4.2 to estimate tpr and tpf by calculating the RC time constant of the circuit formed by the output resistance of the driving gate and the input capacitance of its load(s). Because this time constant depends in equal parts on the driving and receiving gates, we cannot specify the delay of a gate by itself, but only as a function of output load.
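As a first-order sketch of this estimate (the exact constant depends on the convention adopted with the switch model of Section 4.2; a factor of ln 2 ≈ 0.69 appears if the 50% crossing of an ideal RC exponential is used):

\[
t_{pf} \approx R_{N} C_{L}, \qquad t_{pr} \approx R_{P} C_{L},
\]

where R_N and R_P are the effective pull-down and pull-up resistances of the driving gate and C_L is the total input capacitance of its load(s).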
Digital systems are pervasive in modern society. Some uses of digital technology are obvious – such as a personal computer or a network switch. However, there are also many other applications of digital technology. When you speak on the phone, in almost all cases your voice is being digitized and transmitted via digital communications equipment. When you listen to an audio file, the music, recorded in digital form, is processed by digital logic to correct errors and improve the audio quality. When you watch TV, the image is transmitted in a digital format and processed by digital electronics. If you have a DVR (digital video recorder) you are recording video in digital form. DVDs are compressed digital video recordings. When you play a DVD or stream a movie, you are digitally decompressing and processing the video. Most communication radios, such as cell phones and wireless networks, use digital signal processing to implement their modems. The list goes on.
Most modern electronics uses analog circuitry only at the edge – to interface to a physical sensor or actuator. As quickly as possible, signals from a sensor (e.g., a microphone) are converted into digital form. All real processing, storage, and transmission of information is done digitally. The signals are converted back to analog form only at the output – to drive an actuator (e.g., a speaker) or control other analog systems.
Not so long ago, the world was not as digital. In the 1960s digital logic was found only in expensive computer systems and a few other niche applications. All TVs, radios, music recordings, and telephones were analog.
The shift to digital was enabled by the scaling of integrated circuits. As integrated circuits became more complex, more sophisticated signal processing became possible. Complex techniques such as modulation, error correction, and compression were not feasible in analog technology. Only digital logic, with its ability to perform computations without accumulating noise and its ability to represent signals with arbitrary precision, could implement these signal processing algorithms.
In this book we will look at how the digital systems that form such a large part of our lives function and how they are designed.
Answer set programming is a declarative programming paradigm oriented towards difficult combinatorial search problems. A fundamental task in answer set programming is to compute stable models, i.e., solutions of logic programs. Answer set solvers are the programs that perform this task. The problem of deciding whether a disjunctive program has a stable model is Σ^P_2-complete. Because of this high complexity of reasoning within disjunctive logic programming, only a few solvers are capable of dealing with such programs, namely dlv, gnt, cmodels, clasp, and wasp. In this paper, we show that the transition systems introduced by Nieuwenhuis, Oliveras, and Tinelli to model and analyze satisfiability solvers can be adapted for disjunctive answer set solvers. Transition systems give a unifying perspective and bring clarity to the description and comparison of solvers. They can be used effectively for analyzing, comparing, and proving correctness of search algorithms, as well as for inspiring new ideas in the design of disjunctive answer set solvers. In this light, we introduce a general template which accounts for the major techniques implemented in disjunctive solvers. We then illustrate how this general template captures the solvers dlv, gnt, and cmodels. We also show how this framework provides a convenient tool for designing new solving algorithms by combining techniques employed in different solvers.
This chapter gives some additional examples of sequential circuits. We start with a simple FSM that reduces the number of 1s on its input by a factor of 3, to review how to draw a state diagram from a specification and how to implement a simple FSM in VHDL. We then implement an SOS detector to review factoring of state machines. Next, we revisit our tic-tac-toe game from Section 9.4 and build a datapath sequential circuit that plays a game against itself using the combinational move generator we previously developed. We illustrate the use of table-driven sequential circuits and composing circuits from sequential building blocks like counters and shift registers by building a Huffman encoder and decoder. The encoder uses table lookup along with a counter and shift register, while the decoder traverses a tree data structure stored in a table.
DIVIDE-BY-3 COUNTER
In this section we will design a finite-state machine that outputs a high signal on its output for one cycle for every three cycles in which the input has been high. More specifically, our FSM has a single input called input and a single output called output. When input is detected high for the third cycle (and the sixth, ninth, etc.), output goes high for exactly one cycle. This FSM divides the number of pulses on the input by 3. It does not divide the binary number represented by the input by 3.
A state diagram for this machine is shown in Figure 19.1. At first it may seem that we can implement this machine with three states; however, four are required. We need states A to D to distinguish having seen the input high for zero, one, two, or three cycles so far. The machine resets to state A. It sits in this state until the input is high on a rising clock edge, at which time it advances to state B. The second high input takes the machine to C, and the third high input takes the machine to D, where the output goes high for one cycle. We can't simply have this third high input take us back to A because we need to distinguish having seen three cycles of high input – in which case the output goes high – from having seen zero cycles of high input.
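A minimal VHDL sketch of this four-state machine is given below, assuming a synchronous reset and a Moore-style output; the entity name, signal names, and state encoding are illustrative and not taken from the text.

library ieee;
use ieee.std_logic_1164.all;

-- Hypothetical divide-by-3 pulse FSM: states A..D record how many high input
-- cycles have been seen; output is asserted for exactly one cycle in state D.
entity div3_fsm is
  port ( clk, rst, input : in  std_logic;
         output          : out std_logic );
end entity div3_fsm;

architecture rtl of div3_fsm is
  type state_t is (A, B, C, D);
  signal state : state_t := A;
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if rst = '1' then
        state <= A;                 -- reset to state A
      elsif input = '1' then
        case state is
          when A => state <= B;     -- first high cycle seen
          when B => state <= C;     -- second high cycle seen
          when C => state <= D;     -- third high cycle: output goes high
          when D => state <= B;     -- this high cycle starts the next group
        end case;
      elsif state = D then
        state <= A;                 -- output pulse ends; no high input seen yet
      end if;
    end if;
  end process;

  output <= '1' when state = D else '0';  -- Moore output, high only in state D
end architecture rtl;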
Social network analysis is the study of how links between a set of actors are formed. Typically, it is believed that links are formed in a structured manner, which may be due to, for example, political or material incentives, and which often may not be directly observable. The stochastic blockmodel represents this structure using latent groups which exhibit different connective properties, so that, conditional on the group memberships of two actors, the probability of a link being formed between them is given by a connectivity matrix. The mixed membership stochastic blockmodel extends this model by allowing an actor's group membership to vary with the interaction in question, providing further flexibility.
Attribute information can also play an important role in explaining network formation. Network models which do not explicitly incorporate covariate information require the analyst to compare fitted network models to additional attributes in a post-hoc manner. We introduce the mixed membership of experts stochastic blockmodel, an extension to the mixed membership stochastic blockmodel which incorporates covariate actor information into the existing model. The method is illustrated with application to the Lazega Lawyers dataset. Model and variable selection methods are also discussed.
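For reference, the blockmodels described above are commonly written as follows (standard notation, not quoted from the abstract): with latent group indicators z_i in {1, ..., K} and connectivity matrix B, the stochastic blockmodel posits

\[
\Pr(Y_{ij} = 1 \mid z_i = k,\ z_j = l) = B_{kl},
\]

while in the mixed membership variant each actor i carries a membership vector \pi_i, the pair (i, j) draws interaction-specific indicators z_{i\to j} \sim \mathrm{Multinomial}(\pi_i) and z_{j\gets i} \sim \mathrm{Multinomial}(\pi_j), and the link is Y_{ij} \sim \mathrm{Bernoulli}(B_{z_{i\to j},\, z_{j\gets i}}).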
Where do new markets come from? I construct a network model in which national markets are nodes and flows of recorded music between them are links and conduct a longitudinal analysis of the global pattern of trade in the period 1976–2010. I hypothesize that new export markets are developed through a process of transitive closure in the network of international trade. When two countries' markets experience the same social influences, it brings them close enough together for new homophilous ties to be formed. The implication is that consumption of foreign products helps, not hurts, home-market producers develop overseas markets, but only in those countries that have a history of consuming the same foreign products that were consumed in the home market. Selling in a market changes what is valued in that market, and new market formation is a consequence of having social influences in common.
Innovation Engineering (IE) is an educational training program that presents tools and advice on product innovation in three main categories: Create, idea generation; Communicate, communicating ideas; and Commercialize, selecting ideas to invest in further. The concepts taught in IE include common suggestions for early-stage product innovation. This paper addresses a challenge of implementing the IE program, specifically that it does not provide peer-reviewed sources or adequate data to substantiate its approach. This lack of substantiation limits effective implementation at companies. This paper also takes a step in examining IE’s claims that it is ‘a new science’ and a ‘new field of academic study’, a topic motivated by the Design Science Journal’s aim to serve as the archival venue of science-based design knowledge across multiple disciplines. This paper provides a compilation of academic literature that has tested the tools and advice espoused by IE. Almost all included papers contain test-versus-control experimental evidence. A mix of supporting and refuting evidence was found. Overall, the work provides a useful compilation of evidence-of-effectiveness related to common innovation and design practices that spans different design stages and is applicable for multiple disciplines and industries. This evidence comes from a variety of sources, including design, engineering education, psychology, marketing, and management. The work can also serve as an approach to evaluate overarching approaches to design in general, specifically, testing the foundations by vetting related test-versus-control experimental studies.
This paper deals with the mean residual life function (MRLF) and its monotonicity in the case of additive and multiplicative hazard rate models. It is shown that an additive (multiplicative) hazard rate does not imply a reduced (proportional) MRLF, and vice versa. Necessary and sufficient conditions are obtained for the two models to hold simultaneously. In the case of non-monotonic failure rates, the location of the turning points of the MRLF is investigated for both models. The case of a random additive and multiplicative hazard rate is also studied. The monotonicity of the mean residual life is studied along with the location of the turning points. Examples are provided to illustrate the results.
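For orientation, the quantities involved are standard (definitions recalled here for convenience, not quoted from the paper): for a lifetime X with survival function S(t), the mean residual life function is

\[
m(t) = E[X - t \mid X > t] = \frac{1}{S(t)} \int_t^{\infty} S(u)\, du,
\]

and the additive and multiplicative (proportional) hazard rate models modify a baseline hazard \lambda(t) as \lambda^*(t) = \lambda(t) + c and \lambda^*(t) = \theta\,\lambda(t), respectively.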
Processes are often viewed as coalgebras, with the structure maps specifying the state transitions. In the simplest case, the state spaces are discrete, and the structure map simply takes each state to the next states. But the coalgebraic view is also quite effective for studying processes over structured state spaces, e.g. measurable or continuous ones. In the present paper, we consider coalgebras over manifolds. This means that the captured processes evolve over state spaces that are not just continuous, but also locally homeomorphic to normed vector spaces, and thus carry a differential structure. Both dynamical systems and differential forms arise as coalgebras over such state spaces, for two different endofunctors over manifolds. A duality induced by these two endofunctors provides a formal underpinning for the informal geometric intuitions linking differential forms and dynamical systems in various practical applications, e.g. in physics. This joint functorial reconstruction of tangent bundles and cotangent bundles uncovers the universal properties and a high-level view of these fundamental structures, which are implemented rather intricately in their standard form. The succinct coalgebraic presentation provides unexpected insights even about situations as familiar as Newton's laws.
Finding translations for technical terms is an important problem in machine translation. In particular, in highly specialized domains such as biology or medicine, it is difficult to find bilingual experts to annotate sufficient cross-lingual texts in order to train machine translation systems. Moreover, new terms are constantly being generated in the biomedical community, which makes it difficult to keep the translation dictionaries up to date for all language pairs of interest. Given a biomedical term in one language (source language), we propose a method for detecting its translations in a different language (target language). Specifically, we train a binary classifier to determine whether two biomedical terms written in two languages are translations. Training such a classifier is often complicated due to the lack of common features between the source and target languages. We propose several feature space concatenation methods to successfully overcome this problem. Moreover, we study the effectiveness of contextual and character n-gram features for detecting term translations. Experiments conducted using a standard dataset for biomedical term translation show that the proposed method outperforms several competitive baseline methods in terms of mean average precision and top-k translation accuracy.
The problem of finding a densest segment of a list is similar to the well-known maximum segment sum problem, but its solution is surprisingly challenging. We give a general specification of such problems, and formally develop a linear-time online solution, using a sliding-window style algorithm. The development highlights some elegant properties of densities, involving partitions that are decreasing and all right-skew.
We revisit the random m-ary search tree and study a finer profile of its node outdegrees with the purpose of exploring possibilities of data structure compression. The analysis is done via Pólya urns. The analysis shows that the number of nodes of each individual node outdegree has a phase transition: Up to m = 26, the number of nodes of outdegree k, for k = 0, 1, …, m, is asymptotically normal; that behavior changes at m = 27. Based on the analysis, we propose a compact m-ary tree that offers significant space saving.
This paper presents a novel motion planning approach, inspired by Dynamic Programming (DP), that is applicable to multi-degree-of-freedom robots (mobile or stationary) and autonomous vehicles. The proposed discrete-time algorithm enables a robot to reach its destination through an arbitrary obstacle field in the fewest number of time steps possible while minimizing a secondary objective function. Furthermore, the resulting trajectory is guaranteed to be globally optimal while incorporating state constraints such as velocity, acceleration, and jerk limits. The optimal trajectories furnished by the algorithm may be further updated in real time to accommodate changes in the obstacle field and/or cost function. The algorithm is proven to terminate in a finite number of steps, and its computational complexity does not increase with the type or number of obstacles. The effectiveness of the global and replanning algorithms is demonstrated on a planar mobile robot with three degrees of freedom subject to velocity and acceleration limits. The computational complexity of the two algorithms is also compared to that of an A*-type search.
This paper introduces a solution to the problem of steering an aerodynamic system with non-holonomic constraints superimposed on its dynamic equations of motion. The proposed approach reduces the dimensionality of the Optimal Control Problem (OCP) with heavy path constraints so that it can be solved by a Rapidly-exploring Random Tree (RRT) algorithm. In this research, we formulated and solved the OCP using the Euler–Lagrange formulation in order to find the time-optimal trajectory. The RRT constructs a collision-free path in a static, high-density obstacle environment (i.e. under heavy path constraints). In simulations based on a real-world aircraft model, the method found a collision-free path, and the optimized Hamiltonian-based model improved time and fuel consumption over the original non-optimized model.
There was an error in the spelling of the author's affiliation. Where the affiliation read “Department of mechanical engineering, Mashad Branch, Islamic Azad University, Mashad, Iran” it should instead have read “Department of mechanical engineering, Mashhad Branch, Islamic Azad University, Mashhad, Iran”.
Evidence is emerging that the role of protein structure in disease needs to be rethought. Sequence mutations in proteins are often found to affect the rate at which a protein switches between structures. Modeling structural transitions in wildtype and variant proteins is central to understanding the molecular basis of disease. This paper investigates an efficient algorithmic realization of the stochastic roadmap simulation framework to model structural transitions in wildtype and variants of proteins implicated in human disorders. Our results indicate that the algorithm is able to extract useful information on the impact of mutations on protein structure and function.
Many students complete PhDs in functional programming each year. As a service to the community, the Journal of Functional Programming publishes the abstracts from PhD dissertations completed during the previous year.
In this study, a bilateral teleoperation control algorithm is developed in which the model-mediation method is integrated with an impedance controller. The model-mediation method is also extended to three-degree-of-freedom teleoperation. The aim of this controller is to compensate for instability and excessive forcing applied to the slave environment stemming from communication time delays. The proposed control method is experimentally tested with two haptic desktop devices. Test results indicate that the stability and passivity of the bilateral teleoperation system are preserved under variable communication time delays. It is also observed that safer interaction of the slave system with its environment can be achieved by using the extended model-mediation method together with an impedance controller.
We prove that the relation of bisimilarity between countable labelled transition systems (LTS) is Σ^1_1-complete (hence not Borel), by reducing the set of non-well-orders over the natural numbers continuously to it.
This has an impact on the theory of probabilistic and non-deterministic processes over uncountable spaces, since logical characterizations of bisimilarity (as, for instance, those based on the unique structure theorem for analytic spaces) require a countable logic whose formulas have measurable semantics. Our reduction shows that such a logic does not exist in the case of image-infinite processes.
We present a systematic study of bisimulation-up-to techniques for coalgebras. This enhances the bisimulation proof method for a large class of state-based systems, including labelled transition systems, but also stream systems and weighted automata. Our approach allows for compositional reasoning about the soundness of enhancements. Applications include the soundness of bisimulation up to bisimilarity, up to equivalence, and up to congruence. All in all, this gives a powerful and modular framework for simplified coinductive proofs of equivalence.