This chapter provides an introduction to uncertainty relations underlying sparse signal recovery. We start with the seminal work by Donoho and Stark (1989), which defines uncertainty relations as upper bounds on the operator norm of the band-limitation operator followed by the time-limitation operator, generalize this theory to arbitrary pairs of operators, and then develop, out of this generalization, the coherence-based uncertainty relations due to Elad and Bruckstein (2002), as well as uncertainty relations expressed in terms of concentration of the 1-norm or 2-norm. The theory is completed with set-theoretic uncertainty relations, which lead to best possible recovery thresholds in terms of a general measure of parsimony, the Minkowski dimension. We also elaborate on the remarkable connection between uncertainty relations and the "large sieve," a family of inequalities developed in analytic number theory. We show how uncertainty relations allow one to establish fundamental limits of practical signal recovery problems such as inpainting, declipping, super-resolution, and denoising of signals corrupted by impulse noise or narrowband interference.
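As a minimal numerical sketch (not taken from the chapter), the coherence-based uncertainty relation of Elad and Bruckstein can be checked for the classic spike/sinusoid pair: the mutual coherence of the identity and unitary DFT bases is 1/√n, and a nonzero signal with n_t nonzero time samples and n_f nonzero DFT coefficients must satisfy n_t · n_f ≥ 1/μ². The Dirac comb attains this bound with equality:

```python
import numpy as np

n = 64
I = np.eye(n)                              # time (spike) basis
F = np.fft.fft(np.eye(n)) / np.sqrt(n)     # unitary DFT (frequency) basis

# Mutual coherence: largest inner product magnitude between columns
# of the two bases; for spikes/sinusoids it equals 1/sqrt(n).
mu = np.max(np.abs(I.conj().T @ F))

# Dirac comb with spacing 8: n_t = n_f = 8 = sqrt(n), so
# n_t * n_f = 64 = 1/mu^2, attaining the bound with equality.
x = np.zeros(n)
x[::8] = 1.0
n_t = np.count_nonzero(x)
n_f = int(np.sum(np.abs(np.fft.fft(x)) > 1e-9))
```

The comb is the standard extremal example: sparse in time and in frequency simultaneously, but no sparser than the uncertainty relation allows.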
When uncontrollable resources fluctuate, optimal power flow (OPF), routinely used by the electric power industry to redispatch hourly controllable generation (coal, gas, and hydro plants) over control areas of transmission networks, can result in grid instability and, potentially, cascading outages. This risk arises because OPF dispatch is computed without awareness of major sources of uncertainty, in particular fluctuations in renewable output. As a result, grid operation under OPF with variable renewables can frequently produce conditions in which power-line flow ratings are significantly exceeded. Such conditions are considered undesirable in power-engineering practice and, in the worst case, can compromise grid stability through line tripping. A chance-constrained (CC) OPF approach is developed that corrects this problem and mitigates dangerous renewable fluctuations with minimal changes to current operational procedure. Assuming the availability of a reliable wind forecast parameterizing the distribution of the uncertain generation, the CC-OPF approach satisfies all constraints with high probability while minimizing the cost of economic redispatch.
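To make the chance-constrained idea concrete, a single line-flow constraint under a Gaussian wind-deviation model reduces to a deterministic limit tightened by a quantile margin. The sketch below is a hypothetical one-line example (all numbers invented, and a single affine flow model assumed), not the chapter's full CC-OPF formulation:

```python
from statistics import NormalDist

# Hypothetical single-line model: flow f = f0 + a*w, with wind deviation
# w ~ N(0, s^2) around the forecast. The chance constraint
#     P(f <= f_max) >= 1 - eps
# is equivalent to the deterministic tightened constraint
#     f0 + a * s * z_{1-eps} <= f_max.
f0, a, s, f_max, eps = 75.0, 0.5, 20.0, 100.0, 0.01

z = NormalDist().inv_cdf(1 - eps)   # standard-normal quantile, ~2.33 at 1%
margin = a * s * z                  # capacity reserved for fluctuations
feasible = f0 + margin <= f_max
```

The same quantile-reformulation trick, applied line by line, is what lets a CC-OPF stay a tractable convex program while guaranteeing low overload probability.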
In compressed sensing (CS), a signal x ∈ R^n is measured as y = Ax + z, where A ∈ R^{m×n} (m < n) is the sensing matrix and z ∈ R^m is the measurement noise. The goal is to recover x from the measurements y even though m < n. CS is possible because the signals we typically want to capture are highly structured, and recovery algorithms exploit this structure to solve the under-determined system of linear equations. Data-compression codes likewise exploit a signal's structure to encode it efficiently, but the structures they use are much more elaborate than those used by CS algorithms. Using more complex structures in CS, like those employed by data-compression codes, can therefore lead to more efficient recovery methods that require fewer linear measurements or deliver better reconstruction quality. We establish connections between data compression and CS, deriving CS recovery methods based on compression codes that indirectly take advantage of all the structures those codes exploit. This elevates the class of structures used by CS algorithms to that used by compression codes, leading to more efficient CS recovery methods.
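For readers new to the setup y = Ax + z with m < n, the sketch below recovers a synthetic sparse signal with ISTA (iterative soft-thresholding), one standard CS algorithm exploiting only plain sparsity; the compression-code-based decoders discussed in the chapter exploit far richer structure. All dimensions and parameters here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 80, 5                  # ambient dim, measurements, sparsity

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                         # noiseless measurements, m < n

# ISTA for the lasso  min_x ||y - Ax||^2 / 2 + lam * ||x||_1 :
# a gradient step on the quadratic followed by soft-thresholding.
lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / largest singular value^2
x = np.zeros(n)
for _ in range(2000):
    g = x + step * (A.T @ (y - A @ x))
    x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

With m = 80 random measurements of a 5-sparse signal in R^256, the under-determined system is resolved essentially exactly, which is the phenomenon the chapter builds on.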
The large amount of synchrophasor data obtained by phasor measurement units (PMUs) provides dynamic visibility into power systems. Because the data are collected from geographically distant locations over computer networks, data quality can be compromised by data losses, bad data, and cyber attacks; data privacy is also an increasing concern. This chapter describes a common framework of methods for data recovery, error correction, detection and correction of cyber attacks, and data-privacy enhancement, all exploiting the intrinsic low-dimensional structures in the high-dimensional spatio-temporal blocks of PMU data. The developed data-driven approaches are computationally efficient and come with provable analytical guarantees. For instance, the data-recovery method can recover the ground-truth data even when simultaneous and consecutive data losses and errors occur across all PMU channels for some time. The same framework can identify PMU channels under false-data-injection attacks by locating abnormal dynamics in the data, and random noise and quantization can be applied to the measurements before transmission to compress the data and enhance privacy.
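The low-dimensionality premise can be illustrated with a toy low-rank completion experiment: a synthetic rank-3 "PMU block" loses 30% of its entries, and hard-impute (alternating truncated-SVD projection with re-insertion of the observed entries) fills the gaps. This is one standard low-rank recovery heuristic, not necessarily the chapter's algorithm, and the data here are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
channels, samples, r = 30, 200, 3      # PMU channels, time samples, rank

# Synthetic low-rank spatio-temporal block of synchrophasor data.
M = rng.standard_normal((channels, r)) @ rng.standard_normal((r, samples))

mask = rng.random(M.shape) > 0.3       # ~30% of entries lost in transit
X = np.where(mask, M, 0.0)

# Hard-impute: project onto rank-r matrices, then restore known entries.
for _ in range(100):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    L = (U[:, :r] * s[:r]) @ Vt[:r]    # best rank-r approximation
    X = np.where(mask, M, L)           # keep observed entries, fill gaps

rel_err = np.linalg.norm(L - M) / np.linalg.norm(M)
```

Even with a third of the block missing, the low-rank structure pins down the lost entries almost exactly, which is why the chapter's recovery guarantees are possible at all.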
Identifying arbitrary topologies of power networks in real time is computationally hard because the number of hypotheses grows exponentially with the network size. Recovering the topology of a grid using only publicly available data (e.g., market data) offers an effective approach to learning the topology from dynamically changing, up-to-date information, enabling changes in the topology to be learned and tracked in a timely fashion. A major advantage of this method is that the labeled data used for training and inference can be generated quickly, in arbitrarily large amounts, and at very little cost. As a result, the power of offline training is fully exploited to learn very complex classifiers for effective real-time topology identification.
This chapter introduces the fundamental elements of random matrix theory and highlights key applications in line-outage detection using actual data from existing power systems around the globe. The key mathematical component is a novel concept referred to as the mean spectral radius (MSR) of non-Hermitian random matrices. By analyzing changes in the MSR of random matrices formed from the measurements, grid failures can be detected reliably. Several case studies and simulations illustrate the performance of this new theoretical approach to line-outage detection.
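A minimal sketch of the underlying random-matrix fact, taking the MSR to be the average eigenvalue modulus (the chapter's precise construction from measurement data may differ): for a non-Hermitian matrix with i.i.d. entries scaled by 1/√n, the circular law says the eigenvalues fill the unit disk uniformly, so the mean modulus concentrates near 2/3. Deviations from this baseline are what signal anomalies such as a line outage:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400

# Non-Hermitian random matrix with i.i.d. entries, scaled so the
# circular law applies: eigenvalues ~ uniform on the unit disk.
X = rng.standard_normal((n, n)) / np.sqrt(n)
lam = np.linalg.eigvals(X)

# Mean spectral radius: average eigenvalue modulus. For the uniform
# disk, E|lambda| = integral_0^1 r * 2r dr = 2/3.
msr = float(np.mean(np.abs(lam)))
```

Under normal operating data the empirical MSR stays near this theoretical value; a structural change in the grid shifts the spectrum, and hence the MSR, away from it.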
Smart grids (SGs) promise to deliver dramatic improvements compared to traditional power grids, thanks primarily to the large amount of data being exchanged and processed within the grid, which enables the grid to be monitored more accurately and at a much faster pace. The smart meter (SM) is one of the key devices enabling the SG concept: it monitors a household's electricity consumption and reports it to the utility provider (UP), i.e., the entity that sells energy to customers, or to the distribution system operator (DSO), i.e., the entity that operates and manages the grid. However, the very availability of rich, high-frequency household electricity consumption data, which enables highly efficient power grid management, also raises unprecedented data-security and privacy challenges. To counter these threats, it is necessary to develop techniques that keep SM data private, and, for this reason, SM privacy has become a very active research area. The aim of this chapter is to provide an overview of the most significant privacy-preserving techniques for SM data, highlighting their main benefits and disadvantages.
This chapter provides a survey of the common techniques for determining the sharp statistical and computational limits in high-dimensional statistical problems with planted structures, using community detection and submatrix detection problems as illustrative examples. We discuss tools including the first- and second-moment methods for analyzing the maximum-likelihood estimator, information-theoretic methods for proving impossibility results using mutual information and rate-distortion theory, and methods originating from statistical physics such as the interpolation method. To investigate computational limits, we describe a common recipe to construct a randomized polynomial-time reduction scheme that approximately maps instances of the planted clique problem to the problem of interest in total variation distance.
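To make the planted-structure detection setup concrete, the toy experiment below plants an elevated-mean k×k submatrix in Gaussian noise and applies the total-sum test, the simplest polynomial-time detector in this literature; the chapter's interest is in when such simple tests succeed or fail relative to statistically optimal (but possibly intractable) ones. All sizes and the detector threshold here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n, k, mu = 200, 30, 1.0        # matrix size, planted-block size, mean shift

def sample(planted: bool) -> np.ndarray:
    """n-by-n Gaussian noise, optionally with a planted k-by-k block."""
    A = rng.standard_normal((n, n))
    if planted:
        A[:k, :k] += mu        # elevated-mean submatrix (planted structure)
    return A

def detect(A: np.ndarray) -> bool:
    """Total-sum test: null sum ~ N(0, n^2), planted sum shifts by mu*k^2."""
    return A.sum() > 0.5 * mu * k * k

null_rate = float(np.mean([detect(sample(False)) for _ in range(50)]))
alt_rate = float(np.mean([detect(sample(True)) for _ in range(50)]))
```

Here the mean shift mu·k² = 900 is large against the null standard deviation n = 200, so even this linear statistic separates the hypotheses; the sharp thresholds surveyed in the chapter characterize exactly when it, and better detectors, stop working.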
This chapter presents a game-theoretic solution to several challenges in electricity markets, e.g., intermittent generation; high levels of average prices; price volatility; and fundamental aspects concerning the environment, reliability, and affordability. It proposes a stochastic bi-level optimization model to find the optimal nodal storage capacities required to achieve a certain price volatility level in a highly volatile energy-only electricity market. The decision on storage capacities is made in the upper-level problem and the operation of strategic/regulated generation, storage, and transmission players is modeled in the lower-level problem using an extended stochastic (Bayesian) Cournot-based game.
Deep learning (DL) has seen tremendous recent successes in many areas of artificial intelligence and has sparked great interest in its potential use in power systems. However, success in applying DL to power systems has not come easily. Even with the continuing proliferation of data collected in power systems from, e.g., synchrophasors and smart meters, how to use these data effectively, especially with DL techniques, remains a largely open problem. This chapter shows that the great power of DL can be unleashed in solving many fundamentally hard high-dimensional real-time inference problems in power systems. In particular, DL, if used appropriately, can effectively exploit both the intricate knowledge embedded in nonlinear power system models and the expressive power of DL predictor models. This chapter also shows the great promise of DL in significantly improving the stability, resilience, and security of power systems.
We study compression for computing functions of sources, observed at the nodes of a network, at one or more receivers. The rate region of this problem has previously been characterized only under restrictive assumptions, and we present results that significantly relax them. For a one-stage tree network, we characterize a rate region by a necessary and sufficient condition for any achievable coloring-based coding scheme, the coloring connectivity condition. We propose a modularized coding scheme based on graph colorings that performs arbitrarily closely to the derived rate lower bounds. For a general tree network, we provide a rate lower bound based on graph entropies and show that it is tight for independent sources. We show that, in a general tree network with independent sources, achieving the rate lower bound requires intermediate nodes to perform computations; however, for a family of functions and random variables that we call chain-rule proper sets, it suffices to have no computations at intermediate nodes to perform arbitrarily closely to the bound. We also consider the practicalities of coloring-based coding schemes and propose an efficient algorithm to compute a minimum-entropy coloring of a characteristic graph.
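As a toy illustration (a hypothetical example, not from the chapter) of the characteristic-graph machinery: take X, Y uniform on {0, 1, 2, 3} with all pairs possible and f(x, y) = (x + y) mod 2, which depends only on the parity of x. The characteristic graph of X connects two symbols whenever some y makes them yield different function values, and a coloring of that graph is what X actually needs to transmit:

```python
from itertools import combinations

X = range(4)
Y = range(4)
f = lambda x, y: (x + y) % 2

# Characteristic graph of X: edge (x1, x2) iff some y distinguishes them,
# i.e., f(x1, y) != f(x2, y) for a jointly possible y.
edges = {(x1, x2) for x1, x2 in combinations(X, 2)
         if any(f(x1, y) != f(x2, y) for y in Y)}

# Greedy coloring: color classes are the messages X must distinguish.
color = {}
for x in X:
    nbrs = {u for (u, v) in edges if v == x} | {v for (u, v) in edges if u == x}
    used = {color[n] for n in nbrs if n in color}
    color[x] = min(c for c in range(len(X)) if c not in used)

num_colors = len(set(color.values()))
```

Only two colors (the parity classes {0, 2} and {1, 3}) are needed, so the transmitter can send one bit per symbol instead of two; minimum-entropy colorings generalize this saving to arbitrary functions and source distributions.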
Clustering is a general term for techniques that, given a set of objects, aim to group together those that are closer to one another than to the rest, according to a chosen notion of closeness. It is an unsupervised-learning problem, since objects are not externally labeled by category. Much effort has been expended on finding natural mathematical definitions of closeness and then developing and evaluating algorithms in these terms. Many have argued that there is no domain-independent mathematical notion of similarity, only context-dependent ones; categories are perhaps natural in the sense that people can recognize them when they see them. Some have dismissed the problem of unsupervised learning in favor of supervised learning, arguing that it is not a powerful natural phenomenon. Yet most learning is unsupervised: we largely learn how to think through categories by observing the world in its unlabeled state. Drawing on universal information theory, we ask whether there are universal approaches to unsupervised clustering. In particular, we consider instances wherein the ground-truth clusters are defined by the unknown statistics governing the data to be clustered.