Earth systems models, and perhaps sciences more generally, excite experiments in making ensembles.
But rather than regarding earth as an experimental object, this chapter turns to earth systems models as experimental practices in constituting ensembles. It is less concerned with the epistemic features of the models in describing earth processes and more with how models re-configure relations in an ensemble. I work again here with the presentiment that ensembles have somewhat different propensities from the operating system, machines and devices, or from their pre-eminent contemporary arrangement as platforms.
Ensembles are interesting precisely because they lack the internal regulation of a machine, and they do not circulate in concretized detachment in the way certain technical elements may do. Ensembles differ from other technical configurations in that they constrain internal couplings, or mutual conditionings. But their composition as ensembles depends on a ‘margin of indetermination’ (Simondon 1989, 12).
For Simondon, the margin of indetermination plays out in the ‘inter-commutativity’ of the ensemble, not in its emergence with an associated milieu/background/middle (1989, 73).
Earth and science and technology studies
What does science and technology studies (STS) actually say about the world and what it is made up of? Increasingly, recent STS speaks directly of earth, its places, surfaces, flows and histories. And while this work might seem to lie at some remove from the data centres, terminals, hash functions, graphics processing or functional closures of app programming, platforms are affected by it.
OCaml Blockly is a block-based programming environment for a subset of the functional language OCaml, built on top of Google Blockly. The distinctive feature of OCaml Blockly is that it knows the scoping and typing rules of OCaml. As such, for any complete program in OCaml Blockly, its OCaml counterpart compiles: it is free from syntax errors, scoping errors, and type errors. OCaml Blockly supports introductory constructs of OCaml sufficient to write a solution to the shortest-path problem for the Tokyo metro network. This paper describes the design of OCaml Blockly and how it is used in a CS-major course on functional programming.
Are platforms good to think with? Platforms arise from a long series of arrangements of people and things. Such arrangements lack the coherence of a device, machine or ‘system’, even if they include them. But platforms bind and constrict current configurations of collective life and perhaps digital social research. They might still enable something.
This book devises some experimental approaches to platform things, narratives, places and habits. It draws from science and technology studies (STS) and associated approaches to media, ontologies, knowledges and power. More than some other approaches, an STS-oriented inquiry might attend to the variety of platforms scattered across society-economy-nature-language-subject-object divisions. I hold one key question in mind throughout: does STS assemble the equipment and acumen not only to follow platforms as they diversify across human-nonhuman differences, but also to understand how to survive on/in/with/off them? Experimental ontology in ensembles responds to that question.
How many platforms exist?
Many observers see the last two decades, for better or worse, as platform-time. In the social sciences, researchers began by re-thinking knowledge as platform objectivity (Cambrosio et al 2004), economic relations as platform capitalism (Srnicek 2016) or platform cooperativism (Scholz and Schneider 2017), and society as platform society (Van Dijck et al 2018), with platformization as a far-reaching transformation (Plantin and Punathambekar 2018) leading to platform urbanism (Leszczynski 2019) or perhaps just the latest version of media specificity (Acland 2015).
The architecture of the ten-metre diving platform at Civic Pool in Canberra (c 1955) can also be seen in the Sprungturm at the Freibad, Berlin Pankow (c 1960). The ten-metre platform abstracts from the coastal platforms in certain respects. Diving platforms are infrastructurally rich in scale and modalities of edging and elevation. The platform stands on land and projects over a body of water, the diving pool.
The diving platform adopts the elementary stepped access to levels. Its footprint (see Figure VI.1) leaves limited space for steps and ladders, so a compromise gradient, usually built as a step-ladder or a winding staircase, needs to be constructed.
Many built structures include edges, lines or planes that differentiate spaces vertically. The diving platform is a somewhat unusual case since its edges cannot align with its support. Its levels jut out over a pool of water. Such arrangements, for all their apparent simplicity as a way of elevating a surface above water, bring some complexities. The levels of the platform, set at three, five and ten metres, cannot be stacked vertically: if they were, divers leaving the higher levels would risk hitting the lower ones. So the levels of the platform need to be staggered, with the highest level projecting further over the water than the lower ones. Such a staggered overhang requires more complicated support than the vertically aligned levels of a high-rise building with its box-like stacking. Jenga players know that the vertical stack is quite stable until holes start appearing lower down the stack. Overhangs may involve different techniques of cantilevering, or balancing the projecting higher levels with the greater mass of a base or foundation.
What if platform edges, the tightly controlled programmatic access points and terminal interfaces, were not the most important ways to approach platforms? A different departure point lies elsewhere: in light and images. Images open some paths towards platform grounding, in view of their high level of artifice, the heavy investments in value regimes associated with images, and their entwining with particular images or bodies.
As in other chapters in this book, an experimental ontology centres on grounded or place-based relations. Locating the grounding of platforms in images is hard. One statement of the difficulty appears in Bruno Latour's account of ‘digital infrastructure’:
[I]t is rather unfortunate that just at the time when we seem to have lost our ground because of the climatic mutation, we are also collectively unsettled by the complete disconnect between older technics of inscriptions and the digital infrastructure that is now activating them from behind. Just at the time when we need to land on an earth that would give us some solidity, we also have to reconcile ourselves with a technical infrastructure for which we don't have the right bodily apparatus. (May 2019, 18)
At its core, the difficulty is not that older techniques of inscribing speech and images have disconnected from platforms (‘digital infrastructure’), but that the ‘bodily apparatus’ is not ‘right’. How did even one thing become an image? What dependencies and affiliations does the imbrication of images in ensembles entail?
Advances in incremental Datalog evaluation strategies have made Datalog popular among use cases with constantly evolving inputs such as static analysis in continuous integration and deployment pipelines. As a result, new logic programming debugging techniques are needed to support these emerging use cases.
This paper introduces an incremental debugging technique for Datalog, which determines the failing changes for a rollback in an incremental setup. Our debugging technique leverages a novel incremental provenance method. We have implemented our technique using an incremental version of the Soufflé Datalog engine and evaluated its effectiveness on the DaCapo Java program benchmarks analyzed by the Doop static analysis library. Compared to state-of-the-art techniques, we can localize faults and suggest rollbacks with an overall speedup of over 26.9× while providing higher quality results.
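The idea of determining which input changes are to blame for a failing analysis can be sketched independently of any provenance machinery as a minimization loop over the change set. The Python sketch below is a generic 1-minimal fault-localization routine, not the provenance-based method the paper describes; the `fails` predicate is a stand-in assumption for re-running the incremental analysis on a candidate subset of changes.

```python
def minimize_failing_changes(changes, fails):
    """Shrink a failing set of input changes to a 1-minimal failing subset.

    `changes` is a list of input-fact insertions/deletions; `fails(subset)`
    re-runs the (incremental) analysis and reports whether it still fails.
    A change is dropped whenever the remainder still reproduces the failure.
    """
    assert fails(changes), "the full change set must reproduce the failure"
    current = list(changes)
    for change in list(current):
        trial = [c for c in current if c != change]
        if fails(trial):      # failure persists without this change,
            current = trial   # so it is not needed to explain the fault
    return current            # every remaining change is necessary
```

In an incremental engine each `fails` call is comparatively cheap, since only the delta between candidate subsets needs re-evaluation; pruning the candidates further, via provenance, is the kind of optimization the paper's technique is built around.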
This chapter covers quantum algorithmic primitives for loading classical data into a quantum algorithm. These primitives are important in many quantum algorithms, and they are especially essential for algorithms for big-data problems in the area of machine learning. We cover quantum random access memory (QRAM), an operation that allows a quantum algorithm to query a classical database in superposition. We carefully detail caveats and nuances that appear for realizing fast large-scale QRAM and what this means for algorithms that rely upon QRAM. We also cover primitives for preparing arbitrary quantum states given a list of the amplitudes stored in a classical database, and for performing a block-encoding of a matrix, given a list of its entries stored in a classical database.
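The state-preparation primitive mentioned above can be illustrated classically (this is a numpy simulation of the target state, not of QRAM hardware, and the function name and interface are illustrative assumptions): given a list of classical amplitudes x, it builds the amplitude-encoded unit vector Σᵢ xᵢ|i⟩/‖x‖, padded to a power-of-two dimension. The circuit cost of producing such a state on a quantum device is exactly the caveat the chapter examines.

```python
import numpy as np

def amplitude_encode(data):
    """Classically simulate amplitude encoding: pad the amplitude list to
    the next power-of-two dimension and normalize to a unit state vector."""
    dim = 1 << (len(data) - 1).bit_length()   # smallest power of two >= len
    amps = np.zeros(dim, dtype=complex)
    amps[:len(data)] = data
    norm = np.linalg.norm(amps)
    if norm == 0:
        raise ValueError("cannot encode the all-zero vector as a state")
    return amps / norm
```

For example, the list [3, 4] encodes to the two-dimensional state (0.6, 0.8), and a three-entry list pads to dimension four, i.e. two qubits.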
This chapter covers the multiplicative weights update method, a quantum algorithmic primitive for certain continuous optimization problems. This method is a framework for classical algorithms, but it can be made quantum by incorporating the quantum algorithmic primitive of Gibbs sampling and amplitude amplification. The framework can be applied to solve linear programs and related convex problems, or generalized to handle matrix-valued weights and used to solve semidefinite programs.
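The classical core of the framework is simple to state concretely. Below is a minimal experts-style multiplicative weights loop in numpy (names, the default learning rate and the loss setup are illustrative assumptions): each round, every expert's weight is shrunk in proportion to its loss, so weight concentrates on experts that perform well. The quantum variants the chapter describes replace this explicit weight maintenance with Gibbs sampling and amplitude amplification.

```python
import numpy as np

def multiplicative_weights(losses, eta=0.3):
    """Run the multiplicative weights update over a sequence of loss vectors.

    losses: list of arrays, where losses[t][i] in [0, 1] is expert i's loss
    in round t.  Returns the final probability distribution over experts.
    """
    n = len(losses[0])
    w = np.ones(n)                            # start with uniform weights
    for loss in losses:
        w *= 1.0 - eta * np.asarray(loss)     # penalize lossy experts
    return w / w.sum()                        # normalize to a distribution
```

Running this on thirty rounds in which one expert never incurs loss drives essentially all probability mass onto that expert, which is the behaviour the method's regret bounds formalize.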
This chapter covers quantum algorithmic primitives related to linear algebra. We discuss block-encodings, a versatile and abstract access model that features in many quantum algorithms. We explain how block-encodings can be manipulated, for example by taking products or linear combinations. We discuss the techniques of quantum signal processing, qubitization, and quantum singular value transformation, which unify many quantum algorithms into a common framework.
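The block-encoding access model can be illustrated classically: any matrix A with spectral norm at most 1 appears as the top-left block of a unitary twice its size, via the standard unitary-dilation construction U = [[A, √(I−AA†)], [√(I−A†A), −A†]]. The numpy sketch below builds that dilation (function names are my own, and a real block-encoding is given as a circuit acting on ancilla qubits, not as a dense matrix).

```python
import numpy as np

def psd_sqrt(M):
    """Square root of a positive semidefinite Hermitian matrix via eigh."""
    vals, vecs = np.linalg.eigh(M)
    vals = np.clip(vals, 0.0, None)           # clip tiny negative eigenvalues
    return (vecs * np.sqrt(vals)) @ vecs.conj().T

def block_encode(A):
    """Embed a contraction A (spectral norm <= 1) as the top-left block
    of a unitary of twice the dimension (a unitary dilation of A)."""
    n = A.shape[0]
    I = np.eye(n)
    top_right = psd_sqrt(I - A @ A.conj().T)
    bottom_left = psd_sqrt(I - A.conj().T @ A)
    return np.block([[A, top_right], [bottom_left, -A.conj().T]])
```

Applying the resulting unitary and post-selecting on the ancilla block recovers the action of A itself, which is how quantum algorithms use block-encodings of non-unitary matrices.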
The data landscape has changed almost beyond recognition over the last 30 years. Established methods of collecting, compiling and publishing information have either been made redundant or radically transformed. We have looked at the drivers of these changes in the previous chapter, with the internet and, more recently, AI reconfiguring data value chains and business models in the process. This chapter explores the current state of the information landscape and considers how much data is being created in the mid-2020s, where it is coming from and how it is being used.
How much data is there?
Precisely mapping and measuring the global information economy is impossible as countries measure things differently and many of the inputs and outputs are hidden behind firewalls and corporate networks. However, approximations can be made based on publicly reported data, government statistics and technology sales. Broadly, we can attempt to measure the monetary value of data and information products produced each year as well as the quantity of data produced, distributed and stored. For the purposes of this book, the utility of considering such figures is to help us better understand broader trends in the production, distribution and use of data and what this might mean for the future of information professionals from a range of disciplines. Understanding the shifting sands of our data-driven environment can help us make better decisions about where to invest our time and resources going forward.
One of the first comprehensive and rigorous attempts to measure the volume of information being generated each year was carried out by economists Peter Lyman and Hal Varian at the University of California, Berkeley in 2000 (Lyman and Varian, 2000).
Some modern implementations of vector concepts rely heavily on a precise knowledge of time. Measurements of time, both ancient and modern, have always been heavily tied to Earth’s rotation, and so this rotation must be described in detail. I begin that task by describing Earth’s orientation relative to the solar system and the stars, and use a DCM to quantify Earth’s orientation at a given moment. This introduces the idea of Universal Time, UT1. Further concepts require a short discussion of relativity, both special and general, which I do by using a balloon to describe curved spacetime. The result is UTC, our modern ‘Greenwich Mean Time’. Measuring time over long periods is made easy through the concept of the Julian day, and so I discuss the Julian and Gregorian calendars. I include a detailed example of using these ideas to calculate the sight direction of a star at some time and place on Earth.
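To make the calendar machinery concrete, the Julian day number of a Gregorian calendar date can be computed with the well-known Fliegel–Van Flandern all-integer algorithm, sketched below in Python (the function name is my own). The returned number counts whole days from the Julian-period epoch and refers to noon UT of the given date.

```python
def julian_day_number(year, month, day):
    """Julian day number (noon UT) of a Gregorian calendar date,
    via the Fliegel-Van Flandern all-integer algorithm."""
    a = (14 - month) // 12             # 1 for January/February, else 0
    y = year + 4800 - a                # years since -4800, March-based
    m = month + 12 * a - 3             # months elapsed since March
    return (day + (153 * m + 2) // 5   # days in the preceding months
            + 365 * y + y // 4 - y // 100 + y // 400   # leap-year rules
            - 32045)                   # shift to the Julian-period epoch
```

Differences of these numbers give elapsed days directly, which is why long time spans in astronomy are bookkept this way; for instance, the date of the J2000.0 epoch, 1 January 2000, has Julian day number 2451545.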
In the Preface, we motivate the book by discussing the history of quantum computing and the development of the field of quantum algorithms over the past several decades. We argue that the present moment calls for adopting an end-to-end lens in how we study quantum algorithms, and we discuss the contents of the book and how to use it.
This chapter explores the rapidly developing policies of national and international governmental bodies relating to data and AI, the emerging resulting legislation and the ethical concerns leading to these initiatives. As policies, laws and rulings are evolving to deal with the new challenges posed by AI and the data it feeds off, this chapter presents a snapshot of the legal situation in late 2024; however, the issues underpinning policy and legislative changes are ongoing. These centre on concerns with privacy and data protection, copyright, monopolistic practices and trust as well as efforts to stimulate economic growth and international competitiveness. The chapter focuses on developments in the UK and the US as well as across the EU, which allows for policies and legislation to be compared and highlights different national concerns and priorities.
Ethical concerns
Scientific breakthroughs and new technologies have long been a source of alarm for philosophers, writers and politicians. This is particularly true where questions of what it is to be human and how far machines can replace human activities emerge. Mary Shelley's 1818 creation of the Frankenstein monster highlighted concerns emerging from the Enlightenment and the move from a society based on superstition to one more founded on the principles of science (Gunkel, 2024, 3). Science fiction novels and films – from Metropolis in 1927 and Brave New World in 1932 to 2001: A Space Odyssey in 1968 and Blade Runner in 1982 – developed these ideas in different and unsettling ways. While public policies and laws change fairly frequently in response to events and differing priorities, the values and ethics that underpin them are more constant.