Age of Information (AoI) characterizes the freshness of information and is critical for information monitoring, tracking, and control, which are required in many network applications such as autonomous vehicles, virtual/augmented reality, and the Internet of Things (IoT). Both the inter-arrival times and the delays of packets affect AoI performance, so traditional delay-efficient algorithms do not necessarily achieve low AoI. This calls for "age-efficient" algorithm design in communication networks, which forms the focus of this chapter. In particular, we first discuss recent advances in age-efficient algorithm design for three common types of network traffic: (i) elastic traffic (cf. Section 1.1), where packets may be delivered without any deadline constraints; (ii) inelastic traffic (cf. Section 1.2), where packets are dropped if they are not delivered within a specific deadline; and (iii) heterogeneous traffic (cf. Section 1.3), where different packets may have different sizes. To facilitate our discussion, we explicitly consider the discrete-time model and emphasize the difference between the age-efficient and delay-efficient algorithm design paradigms. We then examine "fresh" scheduling design for remote estimation, with the goal of optimally balancing the trade-off between estimation accuracy and communication cost (cf. Section 1.4).
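To make the distinction between delay and AoI concrete, the following is a minimal sketch (not the chapter's algorithm; all function and variable names are illustrative assumptions) of computing the time-average AoI in a discrete-time model. At each slot, the AoI is the current time minus the generation time of the freshest packet delivered so far, so both packet delay and inter-arrival spacing shape the result.

```python
def average_aoi(generation_times, delivery_times, horizon):
    """Time-average AoI over slots 0..horizon-1.

    generation_times[i] and delivery_times[i] describe packet i.
    AoI at slot t is t minus the generation time of the freshest
    packet delivered by slot t; slots before the first delivery
    (where AoI is undefined) are excluded from the average.
    """
    total, count = 0, 0
    freshest = None  # generation time of freshest delivered packet
    deliveries = sorted(zip(delivery_times, generation_times))
    idx = 0
    for t in range(horizon):
        # absorb all packets delivered by slot t
        while idx < len(deliveries) and deliveries[idx][0] <= t:
            g = deliveries[idx][1]
            freshest = g if freshest is None else max(freshest, g)
            idx += 1
        if freshest is not None:
            total += t - freshest
            count += 1
    return total / count if count else float("inf")
```

For example, packets generated at slots 0 and 3 and delivered at slots 2 and 5 yield a time-average AoI of 3.0 over an 8-slot horizon; note that a schedule with identical per-packet delays but more uneven generation spacing would give a higher average age, which is why delay-efficient designs are not automatically age-efficient.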
Although proponents of online dispute resolution systems proclaim that their innovations will expand access to justice for so-called "simple cases," evidence of how the technology actually operates and who is benefitting from it demonstrates just the opposite. Resolution of some disputes may be more expeditious and the user interface more intuitive. But in order to achieve this, parties generally do not receive meaningful information about their rights and defenses. The opacity of the technology (ODR code is not public, and unlike court appearances its proceedings are private) means that due process defects and systemic biases are difficult to identify and address. Worse still, the "simple cases" argument for ODR assumes that the dollar value of a dispute is a reasonable proxy for its complexity and significance to the parties. This assumption is contradicted by well-established research on procedural justice. Moreover, recent empirical studies show that low money value cases, which dominate state court dockets, are for the most part debt collection proceedings brought by well-represented private creditors or public creditors (including courts themselves, which increasingly depend on fines and fees for their operating budgets). Defendants in these proceedings are overwhelmingly unrepresented individuals. What ODR offers in these settings is not access to justice for ordinary people, but rather a powerful accelerated collection and compliance technology for private creditors and the state. This chapter examines the design features of ODR and connects them to the ideology of tech evangelism that drives deregulation and market capture, the aspirations of the alternative dispute resolution movement, and hostility to the adversary system that has made strange bedfellows of traditional proponents of access to justice and tech profiteers.
The chapter closes with an analysis of front-end standards for courts and bar regulators to consider to ensure that technology marketed in the name of access to justice actually serves the legal needs of ordinary people.
Pittsburgh is arguably one of the great twentieth-century urban success stories, but in the twenty-first century, Pittsburgh is unexceptional. That makes Pittsburgh a good case for examining governance of smart city technology, because Pittsburgh is neither behind some imaginary urban technology curve nor ahead of it. Like many cities, it doesn’t aspire to be celebrated as a “smart city”; instead, it merely hopes to do well, even to thrive. Pittsburgh has steadily accumulated and deployed a broad range of technology systems as part of its public administration practice, celebrating its advances as often and as much as it might. The case study documents what might be referred to as “ordinary” or “normal” governance of smart city technology and governance via smart city technology. The chapter offers a broad historical take on ICTs and smart technologies in Pittsburgh. It also dives more deeply into some specific examples. Its research and presentation are pluralistic in tone, style, and method.
Bayesian optimization is a methodology for optimizing expensive objective functions that has proven success in the sciences, engineering, and beyond. This timely text provides a self-contained and comprehensive introduction to the subject, starting from scratch and carefully developing all the key ideas along the way. This bottom-up approach illuminates unifying themes in the design of Bayesian optimization algorithms and builds a solid theoretical foundation for approaching novel situations.
The core of the book is divided into three main parts, covering theoretical and practical aspects of Gaussian process modeling, the Bayesian approach to sequential decision making, and the realization and computation of practical and effective optimization policies.
Following this foundational material, the book provides an overview of theoretical convergence results, a survey of notable extensions, a comprehensive history of Bayesian optimization, and an extensive annotated bibliography of applications.
Smart cities require much more than smart tech. Cities need trusted governance and engaged citizens. Integrating surveillance, AI, automation, and smart tech within basic infrastructure, as well as public and private services and spaces, raises a complex set of ethical, economic, political, social, and technological questions that requires systematic study and careful deliberation. Throughout this book, authors have asked contextual research questions and explored compelling but often distinct answers guided by the shared structure of the GKC framework. The Conclusion discusses some of the key themes across chapters in this volume, considering lessons learned and implications for future research.
In this chapter, we consider a joint sampling and scheduling problem for optimizing data freshness in multisource systems. Data freshness is measured by a nondecreasing penalty function of Age of Information, where all sources have the same age-penalty function. Sources take turns to generate update samples and forward them to their destinations one by one through a shared channel with random delay. There is a scheduler that chooses the update order of the sources, and a sampler that determines when a source should generate a new sample in its turn. We aim to find the optimal scheduler–sampler pairs that minimize the total-average age-penalty (Ta-AP). We start the chapter by providing a brief explanation of the sampling problem in the context of single-source networks, as well as some useful insights into and applications of age of information and its penalty functions. Then, we move on to multisource networks, where the problem becomes more challenging. We provide a detailed explanation of the model and the solution in this case. Finally, we conclude this chapter by presenting an open question in this area and its inherent challenges.
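To illustrate the objective being minimized, here is a small sketch (an assumed formulation, not the chapter's solution; all names are hypothetical) of evaluating the total-average age-penalty on a recorded sample path: at each slot, a common nondecreasing penalty function is applied to every source's current age, the per-source penalties are summed, and the sum is averaged over time.

```python
def total_average_age_penalty(ages_per_slot, penalty):
    """Ta-AP of a multisource system on one sample path.

    ages_per_slot: list over time slots; each entry lists the
    current AoI of every source in that slot.
    penalty: a common nondecreasing age-penalty function.
    Returns the time-average of the summed per-source penalties.
    """
    total = 0.0
    for ages in ages_per_slot:
        total += sum(penalty(age) for age in ages)
    return total / len(ages_per_slot)
```

For instance, with a quadratic penalty `lambda a: a * a` and two sources whose ages over three slots are [1, 2], [2, 1], [3, 2], the Ta-AP is (5 + 5 + 13) / 3. A convex penalty like this weights stale updates disproportionately, which is one reason the optimal sampler may deliberately wait before generating a new sample rather than sampling as fast as possible.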
To further explore the issues discussed in previous chapters, this chapter uses the city of Bloomington, Indiana, and its open data portal as a case study. As open data portals are considered to be an instantiation of digital commons, it is assumed that its design and governance would support cooperation and community participation and at least some forms of communal ownership, co-creation, and use. To test these assumptions, the GKC framework and its concepts and guiding questions are applied to this specific case to understand the actions around the portal and their patterns and outcomes.
Smart city technology has its value and its place; it isn’t automatically or universally harmful. Urban challenges and opportunities addressed via smart technology demand systematic study, examining general patterns and local variations as smart city practices unfold around the world. Smart cities are complex blends of community governance institutions, social dilemmas that cities face, and dynamic relationships among information and data, technology, and human lives. Some of those blends are more typical and common. Some are more nuanced in specific contexts. This volume uses the Governing Knowledge Commons (GKC) framework to sort out relevant and important distinctions. The framework grounds a series of case studies examining smart technology deployment and use in different cities. This chapter briefly explains what that framework is, why and how it is a critical and useful tool for studying smart city practices, and what the key elements of the framework are. The GKC framework is useful both here and in additional smart city case studies in the future.
America’s market for legal technology presents a puzzle. On the one hand, America’s market for legal services is among the most tightly regulated in the world, suggesting infertile ground for a legal technology revolution. On the other side of this puzzle is America’s advanced and free-wheeling market for legal tech, which is likely the most robust in the world. This chapter explains this seeming puzzle and then uses that explanation to make some predictions about where legal technology will continue to flourish in America and where legacy players—lawyers, law schools, and judges—will instead stymie its development. In order to predict the future we first must understand the present and the past, so the chapter presents a brief overview of lawyer regulation, the structure of the American market for lawyers and legal services, and the current state of legal tech. This more granular view of the innovation ecosystem can explain why some tech sectors are booming, while others remain stubbornly behind, and also where we’ll see continued and even accelerated legal tech growth and where we won’t.
Should the justice system sustain remote operations in a post-pandemic world? Commentators are skeptical, particularly regarding online jury trials. Some of this skepticism stems from empirical concerns. This paper explores two oft-expressed concerns for sustaining remote jury trials: first, that using video as a communication medium will dehumanize parties to a case, reducing the human connection from in-person interactions and making way for less humane decision-making; and second, that video trials will diminish the ability of jurors to detect witness deception or mistake. Our review of relevant literature suggests that both concerns are likely misplaced. Although there is reason to exercise caution and to include strong evaluation with any migration online, available research suggests that video will neither materially affect juror perceptions of parties nor alter the jurors’ (nearly nonexistent) ability to discern truthful from deceptive or mistaken testimony. On the first point, the most credible studies from the most analogous situations suggest video interactions cause little or no effect on human decisions. On the second point, a well-developed body of social science research shows a consensus that human detection accuracy is only slightly above chance levels, and that such accuracy is the same whether the interaction is in person or virtual.