From document review in litigation, to compliance, case prediction, billing, negotiation and settlement, contracting, patent management, due diligence, legal research, and beyond, technology is transforming the production of legal work and, in turn, the economics of the legal industry. Legal informatics is the academic discipline that underlies many of these transformational technologies, yet despite all of these technical advances, no modern comprehensive treatment of the field has been offered to date. With contributions from more than two dozen academic and industry experts, this book offers readers a first-of-its-kind introductory overview of the exciting field of legal informatics.
If an event of interest is correlated with text data, we can learn models of text that predict the event outcome. For example, researchers have predicted financial risk with regression models that use the text of company financial disclosures.2 Topic models can predict outcomes as a function of the proportions of a document that are devoted to the automatically discovered topics,3 and this technique has been used to develop, for example, a topic model that forecasts roll call votes using the text of congressional bills.4 An advantage of the topic model prediction approach is that the model learns interpretable topics and the relationships between the learned topics and outcomes. A disadvantage of the topic model approach is that other, less interpretable text models often exhibit higher predictive power.
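To make the topic-model prediction approach concrete, the following is a minimal sketch, not any of the cited models: it fits a small topic model to a toy corpus and regresses a hypothetical outcome on the learned topic proportions. The documents, labels, and the scikit-learn pipeline are illustrative assumptions.

```python
# Sketch of outcome prediction from text via topic proportions.
# Toy corpus and labels are hypothetical; scikit-learn is assumed available.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression

docs = [
    "appropriations for defense procurement and military readiness",
    "tax credits for renewable energy and emissions reduction",
    "funding for veterans healthcare and military family housing",
    "carbon pricing and clean energy grid modernization",
]
votes = [1, 0, 1, 0]  # hypothetical roll-call outcomes (1 = passed)

# Learn topics from the document-term matrix; each document becomes a vector
# of topic proportions, which are themselves human-interpretable.
counts = CountVectorizer(stop_words="english").fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topic_proportions = lda.fit_transform(counts)

# Regress the outcome on the topic proportions; the fitted coefficients link
# each learned topic to the predicted outcome.
model = LogisticRegression().fit(topic_proportions, votes)
print(model.predict_proba(topic_proportions))
```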
Electronic discovery (e-discovery) is an integral component of legal informatics, touching on everything from search and artificial intelligence to design and legal services transformation. Any discussion of electronic discovery must begin with an explanation of its relevance to legal work. E-discovery – also known as “ediscovery” or, somewhat datedly, “eDiscovery” – is the discovery in legal proceedings of evidence in an electronic format. Due to the nature of modern technology, e-discovery encompasses an overwhelming majority of evidence, such that e-discovery and other forms of discovery have become virtually synonymous. As such, legal discovery is now fraught with issues concerning how information is stored, retrieved, exchanged, and generally made accessible to parties during legal proceedings. A common challenge for attorneys is what to do with a multi-terabyte collection of evidence that consists of millions of documents across hundreds of file types, with only a matter of months before their first depositions. The best solutions to this kind of increasingly common challenge will include recourse to big data and machine learning, which are discussed in this chapter.
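As a rough illustration of what machine learning contributes here, the sketch below shows the core loop of machine-assisted review (often called predictive coding): train a classifier on a small attorney-labelled seed set, then rank the unreviewed collection by predicted relevance. The documents, labels, and model choice are hypothetical placeholders, not a description of any particular e-discovery product.

```python
# Sketch of predictive coding for document review; data is hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

seed_texts = ["merger negotiation timeline attached", "office holiday party menu",
              "draft purchase agreement and escrow terms", "fantasy football standings"]
seed_labels = [1, 0, 1, 0]   # 1 = responsive, 0 = not responsive (attorney-coded)
collection = ["revised escrow instructions for the merger", "parking garage closure notice"]

vectorizer = TfidfVectorizer()
classifier = LogisticRegression().fit(vectorizer.fit_transform(seed_texts), seed_labels)

# Score the unreviewed collection and review the highest-scoring documents first.
scores = classifier.predict_proba(vectorizer.transform(collection))[:, 1]
for doc, score in sorted(zip(collection, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {doc}")
```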
The world of contracts is undergoing fundamental changes. This is partly due to technology: there can be tremendous benefits from self-enforcing, machine-readable contracts. But these technologies are not used everywhere. Many contracts continue to be performed by people. In the context of commercial deals and relationships,1 a vast number of contracts still need to be planned, understood, approved, implemented, and monitored by people.2 Initiatives across the world seek to innovate contracting processes and documents and develop more effective, engaging ways for people to work with them. This chapter focuses on these initiatives and the need to make contracts truly human-readable.
This case study demonstrates in more detail how one particular technique – computational argumentation – can be effectively used to build automated reasoning tools that provide decision-support capabilities for legal practitioners. It also shows how legal cases can be represented and interpreted through computational models of argument, and how this enables software programs to generate and reason about the arguments relevant to deciding a case, akin to human judicial reasoning.
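As a concrete, deliberately simplified illustration of such a computational model of argument, the sketch below encodes a tiny Dung-style abstract argumentation framework (arguments plus an attack relation) and computes its grounded extension as the least fixed point of the characteristic function. The arguments and helper names are hypothetical and far simpler than the case representations discussed in the chapter.

```python
# Minimal abstract argumentation framework with grounded semantics.
def grounded_extension(arguments, attacks):
    """attacks is a set of (attacker, target) pairs."""
    def acceptable(arg, defenders):
        # arg is acceptable w.r.t. `defenders` if every attacker of arg
        # is itself attacked by some argument in `defenders`.
        attackers = {a for (a, t) in attacks if t == arg}
        return all(any((d, a) in attacks for d in defenders) for a in attackers)

    extension = set()
    while True:
        new = {arg for arg in arguments if acceptable(arg, extension)}
        if new == extension:
            return extension
        extension = new

# Hypothetical case: A = "liable for breach", B = "contract was void",
# C = "voidness claim is time-barred"; C attacks B, B attacks A.
args = {"A", "B", "C"}
attacks = {("B", "A"), ("C", "B")}
print(grounded_extension(args, attacks))  # {'A', 'C'}: A survives because C defends it
```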
Companies, and the professionals who serve them, spend vast amounts of time extracting data from contracts. This work is done in areas including M&A due diligence and integration, corporate contract management, lease abstraction, and auditing. In recent years, software has come to market that helps users review contracts faster and more accurately, and that also helps them better organize the review process and understand its results.
There are a variety of informatics-centric tasks for which the goal is to predict something or extract some kind of signal. In this section, we consider artificial intelligence broadly, artificial intelligence applied to law, and the very fruitful fields of machine learning (ML) and natural language processing (NLP).
The General Counsel of a Fortune 100 company was recently asked if he measured ROI (return on investment) on his legal spend. “No,” he said, “I can’t. I can’t measure quality.”
At various conference panels, several of the largest firms claim they are revamping the way they handle their legal spend to be more in line with other cost centers.1 The rise of “Legal Operations” in corporate legal departments is leading the way in the use of legal metrics.2 These standard business metrics include performance, efficiency, and value. Measuring ROI requires measuring return, and measuring return requires estimating value. Value can be defined as quality divided by cost. Therefore, measuring quality is key to the modernization of legal departments, as well as their external legal service providers.
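The relationship sketched above can be written schematically; the formulas and numbers below are purely illustrative, not a standard the chapter prescribes.

```latex
% Illustrative rendering of the value and ROI relationship described above.
\[
\text{Value} = \frac{\text{Quality}}{\text{Cost}}, \qquad
\text{ROI} = \frac{\text{Value delivered} - \text{Legal spend}}{\text{Legal spend}}
\]
% Example: two providers delivering work of equal quality, one at half the cost,
% differ by a factor of two in value under this definition.
```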
This groundbreaking work offers a first-of-its-kind overview of legal informatics, the academic discipline underlying the technological transformation and economics of the legal industry. Edited by Daniel Martin Katz, Ron Dolin, and Michael J. Bommarito, and featuring contributions from more than two dozen academic and industry experts, chapters cover the history and principles of legal informatics and background technical concepts – including natural language processing and distributed ledger technology. The volume also presents real-world case studies that offer important insights into document review, due diligence, compliance, case prediction, billing, negotiation and settlement, contracting, patent management, legal research, and online dispute resolution. Written for both technical and non-technical readers, Legal Informatics is the ideal resource for anyone interested in identifying, understanding, and executing opportunities in this exciting field.
This textbook introduces fundamental concepts, major models, and popular applications of pattern recognition for a one-semester undergraduate course. To ensure student understanding, the text focuses on a relatively small number of core concepts with an abundance of illustrations and examples. Concepts are reinforced with hands-on exercises to nurture the student's skill in problem solving. New concepts and algorithms are framed by real-world context and established as part of the big picture introduced in an early chapter. A problem-solving strategy is employed in several chapters to equip students with an approach for new problems in pattern recognition. This text also points out common errors that a newcomer to pattern recognition may encounter and, through various worked examples, fosters readers' ability to find useful resources and independently solve a new pattern recognition task. Students with an undergraduate understanding of mathematical analysis, linear algebra, and probability will be well prepared to master the concepts and mathematical tools presented here.
This chapter is a tutorial about some of the key issues in semantics of the first-order aspects of probabilistic programming languages for statistical modelling – languages such as Church, Anglican, Venture and WebPPL. We argue that s-finite measures and s-finite kernels provide a good semantic basis.
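To fix intuitions about the kind of first-order probabilistic program whose semantics the chapter studies, here is a minimal sketch: a program samples a latent value, conditions on observed data, and is queried for the posterior. The model and the crude rejection-sampling inference are illustrative stand-ins for what Church, Anglican, Venture, and WebPPL provide natively; nothing here reflects those languages' syntax or inference engines.

```python
# Toy "sample + observe + query" probabilistic program via rejection sampling.
import random

def model():
    bias = random.choice([0.3, 0.5, 0.9])            # prior over a coin's bias
    flips = [random.random() < bias for _ in range(5)]
    return bias, flips

def posterior_samples(n, observed=(True, True, True, True, False)):
    samples = []
    while len(samples) < n:
        bias, flips = model()
        if tuple(flips) == observed:                  # condition on the observed flips
            samples.append(bias)
    return samples

draws = posterior_samples(2000)
for b in [0.3, 0.5, 0.9]:
    print(b, draws.count(b) / len(draws))             # approximate posterior over bias
```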
Reasoning about probabilistic programs is hard because it compounds the difficulty of classic program analysis with sometimes subtle questions of probability theory. Having precise mathematical models, or semantics, describing their behaviour is therefore particularly important. In this chapter, we review two probabilistic semantics: first, an operational semantics, which models the local, step-by-step behaviour of programs; then a denotational semantics, which describes global behaviour as an operator transforming probability distributions over memory states.
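The contrast can be illustrated on a one-line program. The sketch below is a toy reading, under the simplifying assumption that memory holds a single variable x, of the program "x := x + coin()" with a fair coin: once as a step-by-step sampler on a concrete state, once as an operator on distributions over states.

```python
# Two readings of "x := x + coin()"; all names and the single-variable memory
# are illustrative simplifications.
import random
from collections import Counter

# Operational view: execute one run, step by step, on a concrete memory state.
def run(x):
    return x + (1 if random.random() < 0.5 else 0)

# Denotational view: the same program as an operator that transforms a
# probability distribution over states into a new distribution over states.
def transform(dist):
    out = Counter()
    for x, p in dist.items():
        out[x] += 0.5 * p        # coin came up 0
        out[x + 1] += 0.5 * p    # coin came up 1
    return dict(out)

print(Counter(run(0) for _ in range(10000)))   # empirical: roughly 50/50 over {0, 1}
print(transform({0: 1.0}))                     # exact: {0: 0.5, 1: 0.5}
```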
Emerging high-performance architectures are anticipated to contain unreliable components that may exhibit soft errors, which silently corrupt the results of computations. Full detection and masking of soft errors is challenging, expensive and, for some applications, unnecessary. For example, approximate computing applications (such as multimedia processing, machine learning and big data analytics) can often naturally tolerate soft errors. In this chapter, we demonstrate how a programming language, Rely, enables developers to reason about and verify the quantitative reliability of an application – namely, the probability that it produces the correct result when executed on unreliable hardware. Rely leverages a static quantitative reliability analysis that verifies quantitative requirements on the reliability of an application, enabling a developer to perform sound and verified reliability engineering.
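To make the notion of quantitative reliability concrete, here is a small sketch of the simplest bound such reasoning formalizes; it is not Rely code and does not reproduce its actual analysis. Under the assumption that each unreliable operation produces a correct result independently with probability r, a computation whose result depends on k such operations is correct with probability at least r to the power k; the numbers are illustrative.

```python
# Illustrative reliability bound and a Monte Carlo sanity check (not Rely).
import random

def analytic_bound(r, k):
    # Probability that all k independent operations are correct.
    return r ** k

def simulate(r, k, trials=100_000):
    # A run is counted as correct only if every one of its k operations succeeds.
    correct = sum(all(random.random() < r for _ in range(k)) for _ in range(trials))
    return correct / trials

r, k = 0.999, 50
print("bound:", analytic_bound(r, k))   # ~0.951
print("simulated:", simulate(r, k))     # close to the bound
```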
Church's λ-calculus has become a universally accepted model of pure functional programming, and its properties have been thoroughly scrutinised by the research community in the last 90 years. Many variations of it have been introduced for the sake of capturing programming with various forms of effects, thus going beyond pure functional programming. This chapter is meant to be a gentle introduction to a family of such calculi, namely probabilistic λ-calculi, in their two main variations: randomised λ-calculi and Bayesian λ-calculi. We focus our attention on the operational semantics, expressive power and termination properties of randomised λ-calculi, only giving some hints and references about denotational models and Bayesian λ-calculi.
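As a taste of what a randomised λ-calculus looks like operationally, the sketch below interprets the pure call-by-value calculus extended with a fair binary choice between two terms. The term encoding, the naive capture-ignoring substitution, and the example program are ad hoc illustrations, not the chapter's formal presentation.

```python
# Tiny call-by-value interpreter for a λ-calculus with fair binary choice.
import random

# Terms: ("var", x) | ("lam", x, body) | ("app", f, a) | ("choice", m, n) | ("const", c)

def substitute(term, name, value):
    # Naive substitution; capture-avoidance is ignored for this closed example.
    kind = term[0]
    if kind == "var":
        return value if term[1] == name else term
    if kind == "const":
        return term
    if kind == "lam":
        x, body = term[1], term[2]
        return term if x == name else ("lam", x, substitute(body, name, value))
    # "app" and "choice" both substitute into their two subterms.
    return (kind, substitute(term[1], name, value), substitute(term[2], name, value))

def evaluate(term):
    kind = term[0]
    if kind in ("lam", "const"):
        return term                                    # values
    if kind == "choice":                               # probabilistic reduction step
        return evaluate(term[1] if random.random() < 0.5 else term[2])
    if kind == "app":
        f = evaluate(term[1])
        a = evaluate(term[2])
        return evaluate(substitute(f[2], f[1], a))     # beta reduction
    raise ValueError("evaluation reached a free variable")

# (λx. x) applied to (0 ⊕ 1): evaluates to 0 or 1, each with probability 1/2.
program = ("app", ("lam", "x", ("var", "x")), ("choice", ("const", 0), ("const", 1)))
print([evaluate(program)[1] for _ in range(10)])
```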