Multidistrict litigations (MDLs) rely, for legitimacy, on the notion that the individual litigant calls the shots. That fact justifies a system that affords MDL litigants few, if any, safeguards, even while furnishing class members in class actions with elaborate procedural protections. In this Chapter, we zero in on litigant autonomy in MDLs. We explain why autonomy matters, dissect its components, and evaluate how much autonomy MDL litigants seem to have in practice. We then turn to a necessary component of that autonomy: information. We review data from a recent survey indicating that litigants felt confused and uninformed regarding their suits. In light of that evidence, we assess what transferee courts are doing to keep litigants up to date and well informed. We then furnish the results of our own empirical analysis of court-run MDL websites, which are often extolled, including by judges, as a key venue for client-court communication. Unfortunately, our analysis reveals deep and pervasive deficits with respect to usability and relevance. If this is where case-related communication is supposed to be happening, then litigant confusion is unsurprising. We close with recommendations for courts seeking to harness simple technology to promote better communication. Improved MDL websites aren’t a panacea. But they might promote the autonomy interests of litigants—and light a path for future reform.
Barriers to suicide cluster detection and monitoring include the need for advanced software and statistical knowledge. We tested the face validity of a simple method using readily accessible household software, Excel 3D Maps, to identify suicide clusters in this county during the years 2014–2019. For spatial and temporal clusters, respectively, we defined meaningful thresholds of suicide density as 1.39/km² and 33.9 per yearly quarter, each defined as the 95th percentile of the log-scale and normal-scale distributions of suicide density per area in each ZIP Code Tabulation Area and across 24 yearly quarters spanning all years. We generated heat maps showing suicide densities per 2.5 km viewing diameter, along with a one-dimensional temporal map of meaningful three-month cluster(s). We identified 21 total-population spatial clusters and one temporal cluster. For greater accessibility, we propose this Excel 3D Maps method as an alternative to traditional scan statistics, one that may prove broadly advantageous for detecting, monitoring, and intervening in suicide clusters.
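To make the thresholding step concrete, here is a minimal sketch in Python rather than Excel. The inputs (`zcta_density`, `quarterly_counts`) are hypothetical stand-in data of our own invention; only the 95th-percentile logic mirrors the method described above.

```python
# A minimal sketch of the thresholding step described above, assuming
# hypothetical inputs: `zcta_density` (suicides per km^2 for each ZCTA)
# and `quarterly_counts` (suicides per yearly quarter, 24 quarters).
import numpy as np

rng = np.random.default_rng(0)
zcta_density = rng.lognormal(mean=-2.0, sigma=1.0, size=50)   # stand-in data
quarterly_counts = rng.normal(loc=25, scale=5, size=24)       # stand-in data

# Spatial threshold: 95th percentile of a normal fit to the log densities,
# mapped back to the original scale (the study reports 1.39/km^2).
log_d = np.log(zcta_density)
spatial_threshold = np.exp(log_d.mean() + 1.645 * log_d.std(ddof=1))

# Temporal threshold: 95th percentile of a normal fit to quarterly counts
# (the study reports 33.9 per yearly quarter).
temporal_threshold = quarterly_counts.mean() + 1.645 * quarterly_counts.std(ddof=1)

print(f"spatial: {spatial_threshold:.2f}/km^2, "
      f"temporal: {temporal_threshold:.1f}/quarter")
```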
Court-connected online dispute resolution (ODR) has already shown itself capable of dramatically improving access to justice by eliminating barriers rooted in the fact that courts traditionally resolve disputes only during certain hours, in particular physical places, and only through face-to-face proceedings. Given the centrality of courthouses to our system of justice, too many Americans have discovered that their rights are too difficult or costly to exercise. As court-connected ODR systems spread, offering more inclusive types of dispute resolution services, people will soon find themselves with the law and the courts at their fingertips. But robust access to justice requires more than just raw, low-cost opportunities to resolve disputes. Existing ODR platforms seek to replicate in-person procedures, simplifying and clarifying steps where possible, but litigants without representation still proceed without experience, expertise, guardrails, or the ability to gauge risk or likely outcomes. Injecting ODR with a dose of data science has the potential to address many of these shortfalls. Enhanced ODR is unlikely to render representation obsolete, but it can dramatically reduce the gap between the “haves” and the “have nots” and, on some dimensions—where machines can outperform humans—next-generation platforms may be a significant improvement.
The legal services market is commonly thought of as divided into two “hemispheres”—PeopleLaw, which serves individuals and small businesses, and BigLaw, which serves corporate clients. The last few decades have seen an increasing concentration of resources within the legal profession toward the latter, to the alleged detriment of the former. At the same time, the costs of accessing legal representation exceed the financial resources of many ordinary citizens and small businesses, compromising their access to the legal system. We ask: Will the adoption of new digital technologies lead to a levelling of the playing field between the PeopleLaw and BigLaw sectors? We consider this in three related dimensions. First, for users of legal services: Will technology deliver reductions in cost sufficient to enable affordable access to the legal system for consumer clients whose legal needs are currently unmet? Second, for legal services firms: Will the deployment of technology to capture economies of scale mean that firms delivering legal services across the two segments become more similar? And third, for the structure of the legal services market: Will the pursuit of economies of scale trigger consolidation that leads both segments toward a more concentrated market structure?
Bayesian optimization is a methodology for optimizing expensive objective functions that has proven success in the sciences, engineering, and beyond. This timely text provides a self-contained and comprehensive introduction to the subject, starting from scratch and carefully developing all the key ideas along the way. This bottom-up approach illuminates unifying themes in the design of Bayesian optimization algorithms and builds a solid theoretical foundation for approaching novel situations.
The core of the book is divided into three main parts, covering theoretical and practical aspects of Gaussian process modeling, the Bayesian approach to sequential decision making, and the realization and computation of practical and effective optimization policies.
Following this foundational material, the book provides an overview of theoretical convergence results, a survey of notable extensions, a comprehensive history of Bayesian optimization, and an extensive annotated bibliography of applications.
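As a concrete illustration of the kind of sequential decision-making loop the book develops, here is a minimal Bayesian optimization sketch: a Gaussian process surrogate paired with an expected-improvement policy. The toy objective, grid, and settings are our assumptions for illustration, not an implementation from the book.

```python
# A toy Bayesian optimization loop: fit a GP surrogate to past evaluations,
# then evaluate the objective where expected improvement (EI) is largest.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def objective(x):                       # expensive black box (toy stand-in)
    return -np.sin(3 * x) - x**2 + 0.7 * x

def expected_improvement(mu, sigma, best):
    # EI for maximization: E[max(f - best, 0)] under the GP posterior.
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - best) / sigma
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

grid = np.linspace(-1.0, 2.0, 401).reshape(-1, 1)   # candidate points
X = np.array([[-0.5], [1.5]])                       # initial design
y = objective(X).ravel()

for _ in range(10):                                 # sequential decisions
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y.max()))]
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next))

print(f"best observed: f({X[np.argmax(y)][0]:.3f}) = {y.max():.3f}")
```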
Smart cities require trusted governance and engaged citizens, especially governance of intelligence and intelligence-enabled control. In some very important respects, smart cities should remain dumb, and that will take governance. This introduction provides an overview of the book’s aims, structure, and contributions of individual chapters.
In this chapter, we study the economic issues of fresh data trading markets, where data freshness is captured by the Age of Information (AoI). In our model, a destination user requests, and pays for, fresh data updates from a source provider, and the destination incurs an age-related cost, modeled as a general increasing function of the AoI. To understand the economic viability and profitability of fresh data markets, we consider a pricing mechanism that maximizes the source’s profit, while the destination chooses a data update schedule to trade off its payments to the source against its age-related cost. The problem is exacerbated when the source has incomplete information regarding the destination’s age-related cost, which requires one to exploit (economic) mechanism design to induce truthful reporting. This chapter attempts to build such a fresh data trading framework centered around two key questions: (a) How should a source choose the pricing scheme to maximize its profit in a fresh data market under complete market information? (b) Under incomplete information, how should a source design an optimal mechanism to maximize its profit while ensuring the destination’s truthful report of its age-related cost information?
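To illustrate the destination’s trade-off, here is a minimal sketch under strong simplifying assumptions of our own: a hypothetical flat price `p` per update, a linear age cost `f(AoI) = c * AoI`, and `n` equally spaced updates over a horizon `T`. The chapter’s general increasing cost functions and pricing schemes are richer than this toy.

```python
# Destination's problem under complete information (toy version):
# choose the number of updates n to minimize payment + cumulative age cost.
import math

def destination_cost(n, p, c, T):
    # Payment n*p plus cumulative age cost: with spacing T/n, the AoI traces
    # n sawtooth ramps, each integrating to c * (T/n)**2 / 2.
    return n * p + c * n * (T / n) ** 2 / 2

def best_schedule(p, c, T, n_max=10_000):
    return min(range(1, n_max + 1), key=lambda n: destination_cost(n, p, c, T))

p, c, T = 1.0, 0.5, 100.0
n_star = best_schedule(p, c, T)
print(n_star, destination_cost(n_star, p, c, T))
# Closed form for comparison in this linear-cost toy: n* ~ T * sqrt(c / (2p))
print(T * math.sqrt(c / (2 * p)))
```

The closed form follows from minimizing n*p + c*T²/(2n) over n, which makes explicit how a higher price pushes the destination toward fewer, staler updates.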
This case study focuses on smart tech deployment and governance in Philadelphia. In 2019, the City of Philadelphia launched a new smart city initiative, SmartCityPHL. SmartCityPHL includes a roadmap of strategies, processes, and plans for deployment. In many ways, the new initiative is remarkable. It is ambitious yet pragmatic; it outlines a set of guiding principles along with deliberative and participatory processes; it is broadly inclusive of people and values – as reflected in its simple definition of a smart city: “a city that uses integrated information and communication technology to support the economic, social, and environmental goals of its community.” On its face, and perhaps in comparison with other smart city initiatives, SmartCityPHL provides an exciting roadmap. But the 2019 initiative was not the first smart city project in Philadelphia. There is, in fact, a long history of Philadelphians turning to supposedly smart technology to solve community problems.
This chapter outlines a forward-looking, intelligent approach to thinking through and evaluating supposedly smart systems. First, it clarifies that it is not the city that is smart. Rather, smartness is better understood and evaluated in terms of the affordances that supposedly smart tools provide to actual people. Who gains what kinds of intelligence? For what purposes? Subject to what governance? Second, it identifies and addresses key challenges to intelligent governance in smart city projects. Cities must move beyond a transactional mindset, appreciate how smart systems become an integral part of the built environment, and develop appropriate governance. Third, it proposes an approach to smart city governance grounded in local, contextual norms and scaffolded by key questions to ask throughout smart city planning, procurement, implementation, and management processes. This approach is, importantly, not oriented around Elinor Ostrom’s famous design principles but rather around a shared set of evaluative questions to guide decision-making.
Optimization of information freshness in wireless networks has usually been performed using queueing analysis that captures only the temporal traffic dynamics associated with the transmitters and receivers. However, the effect of interference, which depends mainly on the interferers’ geographic locations, is not well understood. This chapter presents a theoretical framework for analyzing the Age of Information (AoI) from a joint queueing-geometry perspective. We also provide the design of a decentralized scheduling policy that exploits local observations to make transmission decisions that minimize the AoI. To quantify the performance, we derive analytical expressions for the average AoI. Numerical results validate the accuracy of the analyses as well as the efficacy of the proposed scheme in reducing the AoI.
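To fix intuition for the quantity being optimized, here is a minimal simulation of discrete-time AoI evolution. The Bernoulli success probability `p_success` is our simplifying stand-in for the interference-dependent (geometry-dependent) success probability the chapter analyzes; the chapter’s scheduling policy and average-AoI expressions are not reproduced here.

```python
# Toy slotted model: each slot a transmission succeeds with prob. p_success.
# On success the AoI resets to 1; otherwise it grows by 1 per slot.
import random

def average_aoi(p_success, slots=1_000_000, seed=0):
    rng = random.Random(seed)
    aoi, total = 1, 0
    for _ in range(slots):
        aoi = 1 if rng.random() < p_success else aoi + 1
        total += aoi
    return total / slots

p = 0.2
print(average_aoi(p))   # simulated average
print(1 / p)            # closed form E[AoI] = 1/p for this Bernoulli toy
```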
Natural language processing (NLP) techniques promise to automate an activity that lies at the core of many tasks performed by lawyers, namely the extraction and processing of information from unstructured text. The relevant methods are thought to be a key ingredient for both current and future legal tech applications. This chapter provides a non-technical overview of the current state of NLP techniques, focusing on their promise and potential pitfalls in the context of legal tech applications. It argues that, while NLP-powered legal tech can be expected to outperform humans in specific categories of tasks that play to the strengths of current machine learning (ML) techniques, there are severe obstacles to deploying these tools in other contexts, most importantly in tasks that require the equivalent of legal reasoning.
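As a small illustration of the kind of information extraction described above, here is a sketch using spaCy’s off-the-shelf English model. This is a generic stand-in for the NLP components a legal tech pipeline might use, not a tool discussed in the chapter, and the sample sentence is invented.

```python
# Named-entity extraction pulls structure (dates, parties, amounts, venues)
# out of raw legal text. Requires: pip install spacy
# and: python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("On 12 March 2021, Acme Corp. agreed to pay $2.5 million to "
          "settle claims filed in the Southern District of New York.")

for ent in doc.ents:
    print(f"{ent.text:40s} {ent.label_}")
```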