This chapter documents how the Legal Design Lab at Stanford University has integrated design thinking into the law school technology curriculum. We profile the lab’s objectives and explore the work it has undertaken to introduce new opportunities for skill acquisition through design thinking courses, innovation sprints, and workshops. We explore the purpose, process, and outcomes of these experiments in legal education, and provide an overview of the interdisciplinary methods we have developed, drawn from design schools and human-computer interaction programmes. We detail examples of the specific types of classes, sprints, and workshops run, how we define learning outcomes, and how we evaluate student performance. Further, we explore the ways in which we leverage technology to provide students with opportunities to acquire user research, mapping, rapid prototyping, and communication skills. Drawing on lessons observed over the life of the lab, we conclude by reflecting on our experience of integrating design thinking into a law school programme and argue for the importance of design thinking as an aspect of technology training within and outside of law.
Over the last decade, Artificial Intelligence (AI), in the form of data-driven tools designed to support legal task completion, has occupied a growing position within the delivery of private legal services and the exercise of administrative functions by the public sector. As a result, whilst technological literacy was once understood as the capacity to use particular forms of word processing software, navigate the Internet or send electronic correspondence, modern forms of literacy demand that users exhibit a broader range of skills, including the ability to understand, apply, visualise and infer patterns from data. This chapter considers the range of current initiatives developed to address the technology skills and awareness gap amongst law students, and identifies the subject areas that ought to take priority in future curriculum development. It argues that exposure to data analysis and data-driven technologies represents a necessary component of students’ preparation for entry into the professions, on the basis that this knowledge: (i) enhances student employability in an increasingly competitive graduate job market; and (ii) equips graduates to meet their wider civic responsibilities to uphold the rule of law and promote access to justice.
In the field of educational technology there are classic oppositions that shape what we do in our use of technology in higher education (HE) – behaviourism versus constructivism, open versus for-profit, conventional versus innovative curriculum design, technocracy versus democracy. Both sides of these binaries are critical components of what we might term the ‘social’ in HE, and the extent to which their oppositions govern our approach to curriculum design also determines the type of learning that our students undertake in their programmes. In this chapter we explore the effect of these antinomies on the development of simulation software designed and built in the last decade, still in use at Strathclyde Law School, and adapted elsewhere. The chapter will analyse the assumptions and the history – legal educational, technological and social – that are part of the software build, and outline future use and expectations for the software as it develops beyond what might, to date, be characterised as its early beta or incunabula stages of development in HE. Above all, we shall begin to trace what we hope is one resolution of the classic opposition of technocracy and democracy, a theme that will be developed in future publications.
The lawyer of the future will exist as a ‘polytechnic’ or ‘many-skilled’ professional, applying their legal expertise to a client’s changing world in an increasingly agile way and within a range of organisational settings. For legal educators, there is a need to consider how education can best prepare future lawyers for this reality. The long view suggests that we should be looking to build core skills in legal, design and logic principles rather than learning specific technologies that may be rapidly superseded. But how can we develop these skills, and how can we balance the need to understand core academic principles of law against the need for applied, workplace experience? This chapter looks at the balancing process, focusing on the impact of changing roles in law firms and the demands of the in-house legal and law-advisory-organisation dynamic. It examines how legal education can instil in lawyers both an understanding of the principles of law and an appreciation of the application of those principles in the workplace. It presents a vision of the roles and specialisations that are likely to emerge within the profession, and considers how the future work of lawyers will sit alongside alternative paths into the legal industry.
We shall assume that data arrives in a stream or streams and that, if it is not processed immediately or stored, it is lost forever. Moreover, we shall assume that the data arrives so rapidly that it is not feasible to store it all in active storage (i.e., in a conventional database) and then interact with it at a time of our choosing. The algorithms for processing streams each involve summarizing the stream in some way. We shall start by considering how to make a useful sample of a stream and how to filter a stream to eliminate most of the “undesirable” elements. We then show how to estimate the number of different elements in a stream using much less storage than would be required if we listed all the elements we have seen. Another approach to summarizing a stream is to look at only a fixed-length “window” consisting of the last n elements for some (typically large) n. We then query the window as if it were a relation in a database. If there are many streams and/or n is large, we may not be able to store the entire window for every stream, so we need to summarize even the windows. We address the fundamental problem of maintaining an approximate count of the number of 1s in the window of a bit stream, while using much less space than would be needed to store the entire window itself.
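As a concrete illustration of the first of these summarization techniques, the following is a minimal sketch of reservoir sampling, a standard way to maintain a uniform random sample of fixed size k from a stream too large to store (the function name and parameters here are illustrative, not the chapter’s notation):

```python
import random

def reservoir_sample(stream, k, seed=None):
    """Maintain a uniform random sample of k items from a stream of unknown length."""
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)      # fill the reservoir with the first k items
        else:
            j = rng.randint(0, i)       # item i is kept with probability k/(i+1)
            if j < k:
                reservoir[j] = item     # evict a uniformly chosen resident
    return reservoir

# Example: sample 5 elements from a stream we can only read once.
print(reservoir_sample(iter(range(1_000_000)), k=5, seed=42))
```

Each arriving element replaces a uniformly chosen slot with probability k/(i+1), which keeps every element seen so far equally likely to appear in the sample.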
Game-based legal learning has emerged as a topic of intense interest over the last decade as a means of ‘modernising’ legal education, with game-based learning eliciting a wide range of responses from the legal academy. Somewhat unsurprisingly, resistance in the name of tradition has persisted. Yet the view of game playing as a distinctly modern pedagogical development, and opposition to it on the basis of tradition, are sheer folly. Ludic education has been the dominant teaching method for millennia, with legal game playing traced at least to the time of Cicero. Revealing the rich history of game playing in law, this chapter details ludic legal education from the declamation of Ancient Rome to Nintendo’s Phoenix Wright: Ace Attorney. It observes the way in which games can operate as a compelling delivery device for instruction, allow for experimentation, and encourage students to voice their opinions in a field where the sheer breadth of precedent and the relative impenetrability of legal texts may prove intimidating. In demonstrating the potential of game playing to overcome barriers to learning, this chapter considers the modern design principles that have enabled games to emerge as robust and enjoyable content delivery devices in legal education.
This chapter is not intended to be a complete discussion of machine learning. We concentrate on a small number of ideas and emphasize how to deal with very large data sets. Especially important is how we exploit parallelism to build models of the data. We consider the classical “perceptron” approach to learning a data classifier, where a hyperplane that separates two classes is sought. Then, we look at more modern techniques involving support-vector machines. Similar to perceptrons, these methods look for hyperplanes that best divide the classes, so that few, if any, members of the training set lie close to the hyperplane. We next consider nearest-neighbor techniques, where examples are classified according to the class(es) of their nearest neighbors in some space. We end with a discussion of decision trees, which are branching programs for predicting the class of an example.
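To make the perceptron idea concrete, here is a minimal, illustrative sketch (toy data and names of our own choosing, not code from the chapter) that searches for a separating hyperplane w·x + b = 0 over labels in {−1, +1}:

```python
import numpy as np

def perceptron_train(X, y, epochs=100, lr=1.0):
    """Learn a hyperplane w.x + b = 0 separating labels y in {-1, +1}."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:   # misclassified: nudge the hyperplane
                w += lr * yi * xi
                b += lr * yi
                errors += 1
        if errors == 0:                  # converged on linearly separable data
            break
    return w, b

# Toy linearly separable data: the class is the sign of the first coordinate.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] > 0, 1, -1)
w, b = perceptron_train(X, y)
print("weights:", w, "bias:", b)
```

A support-vector machine would instead choose, among all such separating hyperplanes, the one that maximises the margin to the nearest training points.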
Two main methods are used to solve continuous-time quasi birth-and-death processes: matrix geometric (MG) and probability generating functions (PGFs). MG requires a numerical solution (via successive substitutions) of the matrix quadratic equation $A_0 + RA_1 + R^2A_2 = 0$. PGFs involve a row vector $\vec{G}(z)$ of unknown generating functions satisfying $H(z)\vec{G}(z)^{\mathrm{T}} = \vec{b}(z)^{\mathrm{T}}$, where the row vector $\vec{b}(z)$ contains unknown “boundary” probabilities calculated as functions of roots of $\det[H(z)]$. We show that: (a) $H(z)$ and $\vec{b}(z)$ can be explicitly expressed in terms of the triple $A_0$, $A_1$, and $A_2$; (b) when each matrix of the triple is lower (or upper) triangular, then (i) $R$ can be explicitly expressed in terms of roots of $\det[H(z)]$; and (ii) the stability condition is readily extracted.
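As an illustration of the successive-substitution scheme mentioned above, the following minimal sketch (our own, assuming the triple is given as real NumPy matrices with $A_1$ invertible and the process stable, so the iteration converges) rearranges $A_0 + RA_1 + R^2A_2 = 0$ into the fixed-point iteration $R \leftarrow -(A_0 + R^2A_2)A_1^{-1}$ started from $R = 0$:

```python
import numpy as np

def solve_R(A0, A1, A2, tol=1e-12, max_iter=100_000):
    """Solve A0 + R A1 + R^2 A2 = 0 by successive substitution."""
    A1_inv = np.linalg.inv(A1)             # assumes A1 is nonsingular
    R = np.zeros_like(A0, dtype=float)     # start the iteration from R = 0
    for _ in range(max_iter):
        R_next = -(A0 + R @ R @ A2) @ A1_inv
        if np.max(np.abs(R_next - R)) < tol:
            return R_next
        R = R_next
    raise RuntimeError("successive substitution did not converge")
```

When each matrix of the triple is triangular, as in case (b) of the abstract, this numerical step can be bypassed in favour of the explicit root-based expression for $R$.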
Discovering community structure in complex networks is a mature field, with a tremendous number of community detection methods introduced in the literature. Nevertheless, it is still very challenging for practitioners to determine which method would be suitable for gaining insight into the structural information of the networks they study. Many recent efforts have been devoted to investigating various quality scores of the community structure, but the problem of distinguishing between different types of communities remains open. In this paper, we propose a comparative, extensive, and empirical study investigating what types of communities state-of-the-art and well-known community detection methods produce. Specifically, we provide comprehensive analyses of computation time and community size distribution, a comparative evaluation of methods according to their optimization schemes, and a comparison of their partitioning strategies through validation metrics. We conduct our analyses on a very large corpus of hundreds of networks from five network categories and propose ways to classify community detection methods, helping a potential user to navigate the complex landscape of community detection.
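For a flavour of the kind of metric-based comparison described above, here is a small illustrative sketch (our own toy example, not the paper’s pipeline) that runs two built-in networkx detection methods on a graph with planted communities and scores each partition against the ground truth with normalized mutual information:

```python
import networkx as nx
from networkx.algorithms import community
from sklearn.metrics import normalized_mutual_info_score

# A toy benchmark: 4 cliques of 8 nodes each, weakly connected in a ring.
G = nx.connected_caveman_graph(4, 8)
ground_truth = [n // 8 for n in G.nodes()]   # planted community of each node

def labels_of(partition, G):
    """Flatten a collection of node sets into one community label per node."""
    label = {n: cid for cid, nodes in enumerate(partition) for n in nodes}
    return [label[n] for n in G.nodes()]

greedy = community.greedy_modularity_communities(G)   # modularity optimisation
lpa = community.asyn_lpa_communities(G)               # label propagation

print("greedy modularity NMI:", normalized_mutual_info_score(ground_truth, labels_of(greedy, G)))
print("label propagation NMI:", normalized_mutual_info_score(ground_truth, labels_of(lpa, G)))
```

A study of the kind the paper describes would repeat this scoring across many methods, many networks, and several validation metrics.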