Wireless sensor networks must be designed to meet a number of challenging requirements including extended lifetime in the face of energy constraints, robustness, scalability, and autonomous operation. The many design concepts and protocols described in the preceding chapters address these challenges in different aspects of network operation. While much additional work remains to be done to realize the potential of tomorrow's systems, even these partial solutions offer reason for optimism.
Perhaps the most important lesson to take away from the studies described in this book is that the fundamental challenges must be tackled by making appropriate design choices and optimizations across multiple layers.
Consider energy efficiency, which is perhaps the most fundamental concern due to limited battery resources. In many applications, the most significant source of energy consumption is radio communication. At deployment time, energy efficiency concerns can inform the selection of an appropriate mixture of heterogeneous nodes and their placement. Localization and synchronization techniques can be designed to incur low communication overhead. At the physical/link layers, parameters such as the choice of modulation scheme, transmit power settings, packet size, and error control techniques can provide energy savings. Medium access techniques for sensor networks use sleep modes to minimize idle radio energy consumption. Topology control techniques suitable for over-deployed networks also put redundant nodes to sleep until they are needed to provide coverage and connectivity. At the network layer, routing techniques can incorporate energy-awareness and in-network compression to minimize energy usage.
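The dominance of radio communication in the energy budget can be made concrete with the commonly used first-order radio energy model: a fixed per-bit electronics cost plus a distance-dependent amplifier cost. This is a standard textbook model, not one defined in this chapter, and the constants below are illustrative assumptions:

```python
# First-order radio energy model (illustrative constants, not from this text).
E_ELEC = 50e-9      # J/bit: electronics cost, paid by both transmitter and receiver
EPS_AMP = 100e-12   # J/bit/m^2: amplifier cost, grows with distance

def tx_energy(bits, distance, path_loss_exp=2):
    """Energy to transmit `bits` over `distance` meters."""
    return bits * (E_ELEC + EPS_AMP * distance ** path_loss_exp)

def rx_energy(bits):
    """Energy to receive `bits` (no distance dependence)."""
    return bits * E_ELEC

# Relaying through a midpoint halves each hop's distance, so the amplifier
# term drops 4x per hop -- at the cost of a full receive + retransmit at the relay.
direct = tx_energy(1000, 100)
relayed = tx_energy(1000, 50) + rx_energy(1000) + tx_energy(1000, 50)
```

With these particular constants the relayed path is cheaper, which is why energy-aware routing often prefers several short hops over one long one; with different constants or path-loss exponents the balance can flip.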
Wireless communication is both a blessing and a curse for sensor networks. On the one hand, it is key to their flexible and low-cost deployment. On the other hand, it imposes considerable challenges because wireless communication is expensive and wireless link conditions are often harsh and vary considerably in both space and time due to multi-path propagation effects.
Wireless communications have been studied in depth for several decades and entire books are devoted to the subject [171, 207]. The goal of this chapter is by no means to survey all that is known about wireless communications. Rather, we will focus on three sets of simple models that are useful in understanding and analyzing higher-layer networking protocols for WSNs:
Link quality model: a realistic model showing how packet reception rate varies statistically with distance. This incorporates both an RF propagation model and a radio reception model.
Energy model: a realistic model for energy costs of radio transmissions, receptions, and idle listening.
Interference model: a realistic model that incorporates the capture effect whereby packets from high-power transmitters can be successfully received even in the presence of simultaneous traffic.
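The capture condition in the third model can be sketched as a signal-to-interference-plus-noise (SINR) test; the threshold and power values below are illustrative assumptions, not figures from this chapter:

```python
def received(signal_mw, interferers_mw, noise_mw=1e-9, sinr_threshold=4.0):
    """Capture-effect reception test: the packet is decoded iff its power
    exceeds the combined interference-plus-noise by the SINR threshold.
    All powers in milliwatts; threshold is an illustrative assumption."""
    sinr = signal_mw / (noise_mw + sum(interferers_mw))
    return sinr >= sinr_threshold

# A strong transmitter "captures" the receiver despite one concurrent sender,
# but loses out once the aggregate interference grows.
```

Under this test, simultaneous traffic does not automatically destroy a packet; reception depends on the power ratio, which is the essence of the capture effect.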
Wireless link quality
The following is the basic ideal model of a wireless link: two nodes have a perfect link (with 100% packet reception rate) if they are within communication range R, and a non-existent link (0% packet reception rate) if they are outside this range.
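This ideal disk model amounts to a one-line threshold on distance (R is a free parameter here):

```python
import math

def disk_model_prr(pos_a, pos_b, comm_range):
    """Ideal disk model: packet reception rate is 1.0 within
    communication range R and 0.0 outside it."""
    d = math.dist(pos_a, pos_b)  # Euclidean distance (Python 3.8+)
    return 1.0 if d <= comm_range else 0.0
```

Measured links do not behave this way: they show a wide transitional region where reception rates vary statistically, which is what the link quality model above is meant to capture.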
The objective of transport layer protocols is to provide reliability and other quality of service (QoS) services and guarantees for information transfer over inherently unreliable and resource-constrained networks. The following are the interrelated guarantees and services that may be needed in wireless sensor networks:
Reliable delivery guarantee: For some critical data, it may be necessary to ensure that the data arrive from origin to destination without loss.
Priority delivery: The data generated within the WSN may be of different priorities; e.g., the data corresponding to an unusual event detection may have much higher priority than periodic background readings. If the network is congested, it is important to ensure that at least the high-priority data get through, even if the low-priority data have to be dropped or suppressed.
Delay guarantee: In critical applications, particularly those where the sensor data are used to initiate some form of actuation or response, the data packets generated by sensor sources may have strict requirements for delivery to the destination within a specified time.
Energy-efficient delivery: Energy wastage during times of network congestion must be minimized, for instance by forcing any necessary packet drops to occur as close to the source as possible.
Fairness: Different notions of fairness may be relevant, depending on the application. These range from ensuring that all nodes in the network provide equal amounts of data (e.g. in a simple data-gathering application), to max–min fairness, to proportional fairness.
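Priority delivery under congestion can be sketched with a bounded transmit queue that suppresses the least important packet on overflow; the capacity and priority values are illustrative assumptions, not a protocol from this text:

```python
class PriorityTxQueue:
    """Bounded transmit queue sketch: on overflow, the lowest-priority packet
    is dropped; transmission takes the highest-priority packet first.
    Larger number = higher priority; capacity is an illustrative bound."""
    def __init__(self, capacity):
        self.capacity = capacity
        self._q = []   # list of (priority, packet)

    def enqueue(self, priority, packet):
        self._q.append((priority, packet))
        if len(self._q) > self.capacity:
            # Congestion: suppress the least important packet.
            self._q.remove(min(self._q, key=lambda e: e[0]))

    def dequeue(self):
        best = max(self._q, key=lambda e: e[0])
        self._q.remove(best)
        return best[1]

q = PriorityTxQueue(capacity=2)
q.enqueue(1, 'background reading')
q.enqueue(9, 'event detection')
q.enqueue(1, 'background reading 2')   # overflow: a priority-1 packet is dropped
```

This captures the guarantee above in miniature: when the network (here, the queue) is congested, high-priority event data still get through while low-priority background readings are sacrificed.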
A fundamental innovation in the area of wireless sensor networks has been the concept of data-centric networking. In a nutshell, the idea is this: routing, storage, and querying techniques for sensor networks can all be made more efficient if communication is based directly on application-specific data content instead of the traditional IP-style addressing [74].
Consider the World Wide Web. When one searches for information on a popular search site, it is possible to enter a query directly for the content of interest, find a hit, and then click to view that content. While this process is quite fast, it involves several levels of indirection and names: from high-level names, like the query string itself, to domain names, to IP (Internet Protocol) addresses, and MAC addresses. The routing mechanism that supports the whole search process is based on the hierarchical IP addressing scheme, and does not directly take into account the content that is being requested. This is advantageous because IP is designed to support a huge range of applications, not just web searching. However, it comes with increased indirection overhead in the form of the communication and processing necessary for binding: for instance, the search engine must go through its index to return web page locations as the response to the query string, and domain names must be translated to IP addresses through DNS. This tradeoff is still quite acceptable, since the Internet is not resource constrained.
Wireless sensor networks, however, are qualitatively different.
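A sketch of the data-centric idea: instead of addressing a node, a sink expresses an interest over attribute values (in the style of directed diffusion), and any node whose data match responds. The attribute names and operators below are hypothetical, chosen only to illustrate the contrast with address-based routing:

```python
def matches(interest, reading):
    """True if a sensor reading satisfies every predicate in the interest.
    Each predicate is (operator, value); attribute names are illustrative."""
    ops = {'==': lambda a, b: a == b,
           '>':  lambda a, b: a > b,
           '<':  lambda a, b: a < b}
    return all(attr in reading and ops[op](reading[attr], val)
               for attr, (op, val) in interest.items())

# "Report temperature readings above 40 in the north region" -- note that no
# node address appears anywhere in the query itself.
interest = {'type': ('==', 'temperature'), 'value': ('>', 40), 'region': ('==', 'north')}
reading  = {'type': 'temperature', 'value': 45, 'region': 'north', 'node_id': 17}
```

The node identifier is carried along as payload but plays no role in matching; communication is driven entirely by the application-specific content.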
A part-of-speech tagger is a fundamental and indispensable tool in computational linguistics, typically employed at the critical early stages of processing. Although widely available taggers achieve high accuracy in very general domains, they perform considerably worse when applied to novel specialized domains, and this is especially true of biological text. We present a stochastic tagger that achieves 97.44% accuracy on MEDLINE abstracts. A primary component of the tagger is its lexicon, which enumerates the permitted parts of speech for the 10,000 words occurring most frequently in MEDLINE. We present evidence that the lexicon is as vital to tagger accuracy as a training corpus, and more important than previously thought.
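The role of such a lexicon can be sketched in a few lines: it does not score tags itself, but restricts which tags a stochastic model may even consider for a known word. The entries and tag sets below are illustrative assumptions, not taken from the actual lexicon:

```python
# Toy sketch of a tagger lexicon constraining tag candidates.
# Entries and tags are illustrative, not from the MEDLINE lexicon.
LEXICON = {
    'protein':   {'NN'},
    'expressed': {'VBD', 'VBN', 'JJ'},
    'that':      {'IN', 'DT', 'WDT'},
}
OPEN_CLASS_TAGS = {'NN', 'NNS', 'JJ', 'VB', 'VBD', 'VBN'}

def candidate_tags(word):
    """Known words get only their lexicon-permitted tags; unknown words
    (common in biological text) fall back to the full open-class set."""
    return LEXICON.get(word.lower(), OPEN_CLASS_TAGS)
```

Pruning the tag space this way is one mechanism by which a lexicon can matter as much as a training corpus: the stochastic model never has to distinguish alternatives the lexicon rules out.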
Consider a discrete-time insurance risk model with risky investments. Under the assumption that the loss distribution belongs to a certain subclass of the subexponential class, Tang and Tsitsiashvili (Stochastic Processes and Their Applications 108(2): 299–325 (2003)) established a precise estimate for the finite time ruin probability. This article extends the result both to the whole subexponential class and to a nonstandard case with associated discount factors.
Compared with the many univariate distributions known for continuous data, relatively few are available for discrete data. In this article, we derive a collection of 16 flexible discrete distributions by means of conditional Poisson processes. The calculations involve the use of several special functions and their properties.
Numerical inversion of Laplace transforms is a powerful tool in computational probability. It greatly enhances the applicability of stochastic models in many fields. In this article we present a simple Laplace transform inversion algorithm that can compute the desired function values for a much larger class of Laplace transforms than can be inverted with the known methods in the literature. The algorithm can invert Laplace transforms of functions with discontinuities and singularities, even if we do not know the locations of these discontinuities and singularities a priori. The algorithm needs only numerical values of the Laplace transform, is extremely fast, and its results are of almost machine precision. We also present a two-dimensional variant of the inversion algorithm. We illustrate the accuracy and robustness of the algorithms with various numerical examples.
In September 2000, the Brazilian system dispatch and spot prices were calculated twice, using different inflow forecasts for that month, because in the last five days of August the inflows to the reservoirs in the South and Southeast regions changed by 200%. The first run used a smaller forecasted energy inflow and the second used a higher one. Contrary to expectations, the spot price in the second run, with the higher energy inflow, was higher than the one found in the first run. This paper describes the problem, presents the special features of the PAR(p) model that allow this behavior, and shows the solution adopted to avoid the problem.
This article deals with estimations of probabilities of rare events using fast simulation based on the splitting method. In this technique, the sample paths are split into multiple copies at various stages in the simulation. Our aim is to optimize the algorithm and to obtain a precise confidence interval of the estimator using branching processes. The numerical results presented suggest that the method is reasonably efficient.
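A minimal two-stage instance of the splitting idea: to estimate the probability that a downward-biased random walk reaches an upper level before a lower one, every path that survives to an intermediate level is split into several copies that continue independently. All parameters are illustrative, not from the article:

```python
import random

def hit_upper(start, upper, lower, p_up, rng):
    """Run a +/-1 random walk until it hits `upper` (True) or `lower` (False)."""
    x = start
    while lower < x < upper:
        x += 1 if rng.random() < p_up else -1
    return x == upper

def splitting_estimate(n_paths, n_split, rng):
    """Two-stage splitting: paths reaching the intermediate level 5 are
    split into n_split copies that continue toward level 10; by the Markov
    property the estimator successes / (n_paths * n_split) is unbiased."""
    successes = 0
    for _ in range(n_paths):
        if hit_upper(0, 5, -5, 0.45, rng):        # stage 1: reach level 5
            for _ in range(n_split):              # stage 2: split the survivor
                if hit_upper(5, 10, -5, 0.45, rng):
                    successes += 1
    return successes / (n_paths * n_split)

rng = random.Random(1)
est = splitting_estimate(5000, 5, rng)
```

For these parameters the gambler's-ruin formula gives the exact probability (about 0.0896), so the estimate can be checked directly; the gain of splitting is that the rare second stage is explored only along paths that have already survived the first.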
The control and operation of an electric power system are based on the ability to determine the state of the system in real time. State estimation (SE) was introduced in the 1960s to achieve this objective. The initial implementation was based on single-phase measurements and a power system model assumed to operate under single-frequency, balanced conditions with a symmetric system model. These assumptions are still prevalent today. The single-frequency, balanced, and symmetric system assumptions have simplified the implementation but have generated practical problems. Experience shows that the SE problem does not have 100% performance; that is, there are cases and time periods for which the SE algorithm will not converge. There are practical and theoretical reasons for this, and they are explained in the paper. Recent mergers and mandated regional transmission organizations (RTOs), as well as recent announcements of the formation of mega-RTOs, will result in the application of SE to systems of unprecedented size. We believe that these practical and theoretical issues will become of greater importance. Some scientists believe that the SE problem is scalable, meaning that it will work for the mega-RTOs the same way that it performs now for medium to large systems; others believe that this is not true. The fact is that no one has investigated the problem, let alone performed numerical experiments to prove or disprove any claims. This paper identifies a number of issues relative to the SE of mega-RTOs and provides some preliminary results from numerical experiments on the relation between SE algorithm performance and power system size.
Motivated by various applications in queuing theory, this article is devoted to the monotonicity and convexity of some functions associated with discrete-time or continuous-time denumerable Markov chains. For the discrete-time case, conditions for the monotonicity and convexity of the functions are obtained by using the properties of stochastic dominance and monotone matrices. For the continuous-time case, similar results are obtained by using the uniformization technique. As an application, the results are applied to analyze the monotonicity and convexity of functions associated with the queue length of some queuing systems.
This article presents two methods for predicting weather-related overhead distribution feeder failures. The first model is based on linear regression, which uses a regression function to determine the correlation between the weather factors and overhead feeder failures. The second method is based on a one-layer Bayesian network, which uses conditional probabilities to model the correlation. Both methods are discussed and followed by tests to assess their performance. The results obtained using these methods are discussed and compared.
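The first method can be sketched with ordinary least squares on a single weather factor; the data below are synthetic, chosen purely to show the shape of the computation, not results from the article:

```python
def fit_line(xs, ys):
    """Closed-form least-squares fit of ys = a + b*xs (one weather factor)."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x
    return a, b

# Synthetic example: daily peak wind speed (km/h) vs. overhead feeder failures.
wind     = [10, 20, 30, 40, 50, 60]
failures = [1,  1,  2,  4,  5,  7]
a, b = fit_line(wind, failures)
predicted = a + b * 45   # expected failures at 45 km/h under the fitted line
```

A positive slope b quantifies the correlation between the weather factor and failures; the Bayesian-network method instead encodes the same dependence as conditional probability tables over discretized weather states.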
Optimal levels of preventive maintenance performed on any system ensure cost-effective and reliable operation of that system. In this paper, a component with deterioration and random failure is modeled using Markov processes while incorporating the concept of minor and major preventive maintenance. The optimal mean times to preventive maintenance (both minor and major) of the component are determined by maximizing its availability with respect to the mean time to preventive maintenance. The mathematical optimization packages Maple 7 and Lingo 7 are used to find the optimal solution, which is illustrated with a numerical example. Further, an optimal maintenance policy is obtained using Markov decision processes (MDPs); linear programming (LP) is utilized to solve the MDP problem.
This paper investigates the effects of measurement and parameter errors on the calculation of power transfer distribution factors (PTDFs). Calculation of PTDFs depends mainly on two factors: the operating conditions and the network topology. Both of these factors change in real time and are monitored through the use of a state estimator. The role of the state estimator in providing not only the state information but also the real-time network model is shown to influence power market operations via PTDF-based decisions such as congestion management and pricing. The IEEE 14 bus test system is used for the illustration of these influences.
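The dependence of PTDFs on the network model can be illustrated with the standard DC power-flow calculation on a 3-bus triangle network; the topology and reactances below are illustrative assumptions, not the IEEE 14-bus system used in the paper:

```python
# DC power-flow PTDF sketch for a 3-bus triangle with equal line reactances.
# Bus 1 is the slack/reference bus; all values are illustrative.
X = 1.0                                   # per-unit reactance of every line

# Reduced susceptance matrix over buses 2 and 3 (slack removed) is
# B = [[2, -1], [-1, 2]] / X; its inverse is written out for this 2x2 case.
B_INV = [[2/3 * X, 1/3 * X],
         [1/3 * X, 2/3 * X]]

def ptdf(inject_bus, line):
    """PTDF of a unit transfer from `inject_bus` (2 or 3) to the slack bus
    onto `line`, given as an ordered pair of buses, e.g. (2, 3)."""
    theta = {1: 0.0,
             2: B_INV[0][inject_bus - 2],
             3: B_INV[1][inject_bus - 2]}
    i, j = line
    return (theta[i] - theta[j]) / X

# A transfer from bus 2 to bus 1 splits 2/3 over the direct line (2, 1)
# and 1/3 over the longer path through bus 3.
```

Because B_INV is built from the line reactances and topology, any error in the state estimator's real-time network model propagates directly into these factors, and hence into congestion-management and pricing decisions based on them.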
This paper values generation assets within deregulated electricity markets. A new framework for modeling electricity markets with a Markov chain model is proposed. The Markov chain model captures the fundamental economic forces underlying electricity markets, such as the demand for electricity and the online generation capacity supplied. Based on this new model, a real options analysis is adopted to value generation assets. The Markov chain model is combined with a binomial tree to approximate the stochastic movement of the prices of both electric energy and ancillary services, which are driven by the market forces. A detailed example is presented. This method is shown to provide optimal operation policies and market values of generation assets. It also provides a means to analyze the impacts of demand growth patterns, competitors' strategies, and other key economic forces.
Suppose that there are n families with children attending a certain school and that the numbers of children in these families are independent and identically distributed random variables, each with probability mass function $P\{X = j\} = p_j$, $j \ge 1$, and finite mean $\mu = \sum_{j \ge 1} j p_j$. If a child is selected at random from the school and $X_I$ is the number of children in the family to which that child belongs, it is known that $\lim_{n \to \infty} P\{X_I = j\} = j p_j / \mu$, $j \ge 1$. Here, asymptotic expansions for $P\{X_I = j\}$ are developed under the condition $E|X|^3 < \infty$.
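The limiting size-biased law $j p_j / \mu$ is easy to check numerically; the truncated geometric family-size distribution below is an illustrative choice, not one from the article:

```python
# Numerically check the size-biased limit law q_j = j * p_j / mu for a
# truncated geometric pmf (an illustrative choice, not from the article).
p = {j: 0.5 ** j for j in range(1, 30)}
total = sum(p.values())
p = {j: v / total for j, v in p.items()}          # normalize to a proper pmf

mu = sum(j * pj for j, pj in p.items())           # mean family size, ~2
q = {j: j * pj / mu for j, pj in p.items()}       # size-biased pmf

# The size-biased law is again a proper pmf, with a larger mean (~3 here):
# a randomly chosen child tends to come from a larger-than-average family.
mean_q = sum(j * qj for j, qj in q.items())
```

The inflated mean is the familiar inspection-paradox effect; the article's contribution is the finer asymptotic expansion of $P\{X_I = j\}$ around this limit for finite n.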
We present a modular correspondence between various categorical structures and their internal languages in terms of extensional dependent type theories à la Martin-Löf. Starting from lex categories, through regular ones, we provide internal languages of pretopoi and topoi and some variations of them, such as, for example, Heyting pretopoi.
With respect to the internal languages already known for some of these categories, such as topoi, the novelty of these calculi is that formulas corresponding to subobjects can be regained as particular types that are equipped with proof-terms according to the isomorphism ‘propositions as mono types’, which was invisible in previously described internal languages.
This paper axiomatises the structure of bigraphs, and proves that the resulting theory is complete. Bigraphs are graphs with double structure, representing locality and connectivity. They have been shown to represent dynamic theories for the $\pi$-calculus, mobile ambients and Petri nets in a way that is faithful to each of those models of discrete behaviour. While the main purpose of bigraphs is to understand mobile systems, a prerequisite for this understanding is a well-behaved theory of the structure of states in such systems. The algebra of bigraph structure is surprisingly simple, as this paper demonstrates; this is because bigraphs treat locality and connectivity orthogonally.