The Christmas concert at Canberra Mini-trains started at 4pm, with a summer heat wave simmering on the former sheep pastures on the south-east edge of the city. The platform, covered in green carpet, was separated from the browning grass and hoof-compacted clay soil of the audience area by a rope with red pennants limp in the still air. On stage, performers in black-tie suits played swing-era arrangements of Christmas carols. A lone raven watched from a fence post. The amplified strains of ‘God Rest Ye Merry Gentlemen’ rolled out over the pastures as the carriages of the mini-trains trundled around the track, a few children and their adults on board.
Standing on stage before an audience among performers such as musicians, singers, actors, dancers and perhaps politicians is part of what Erving Goffman calls the ‘platform format’. The scene and configuration of this platform, and so many others like it, presents something different to the coastlines, rocks, geometry and churches. As the platform diagrams for the Sydney Opera House drawn by Jørn Utzon in his Platforms and Plateaus: Ideas of a Danish Architect show (Utzon 1962), platforms stage every level of height. Utzon's ground-plan for the support-substrate of the opera stage is full of steps and stages on a range of scales. The main stages or performance platforms are big steps arranged in front of steps for seating, surrounded by steps for approaching and standing on: wide flat steps. Some platforms do not use elevation for visibility, but sit at the base of a depression. Not all platforms are spaces of performance, but nearly all rely on stages.
How did platforms come on stage after 2000? If we start from how platforms appear in web browsers, and then in apps on smartphones and other devices in increasingly virtualized, micro-scale, embedded configurations, the complex development of platforms is woven through by practices of coding and software development. Coding and software development are themselves re-made in that weaving.
Devices such as applications, apps and websites are the most immediate experience of platforms for many people. The existence of apps and websites, as well as everything that connects to them, depends on code in multiple ways. The effects of coding and its connections with agency, politics, economy and culture have received much recognition. But perhaps attention to how developers belong to and are included in what they code would help us understand the ensemble-side of platforms. And to ask: how does this progressive entanglement bear on experimental ontology?
What matters ontologically in coding things to be platform-like, in making functionalities, interfaces or screens that update, scroll, display, animate, label, tabulate, list or link to ‘content’? In a discussion thread entitled ‘What is a practical use for a closure in JavaScript?’ on the Q&A site Stack Overflow in 2015, a participant declares: ‘Technically every function you make in Javascript on a browser is a closure because the window object is bound to it’ (alex 2015). The discussion about ‘closures in JavaScript’ meanders through a dozen or so major responses, some with many sub-replies, and some receiving hundreds of upvotes indicating ‘this answer is useful’.
Two deep red oblong stones, each roughly a metre cubed, lie in a semi-circular collection of stones. A fragment of the wavering thickness of living/non-living layering of the earth's crust, of what Elizabeth Povinelli has described as geontology (Povinelli 2016), they form part of the Australian National Rock Collection near Galambary or Black Mountain, Canberra (see Figure IV.1). The stones have the width, depth and breadth, surfaces, extremities and angles that reflect their formation and their handling in the mining industry. In the Pilbara, the rock was cut out of the ground by Fortescue Metals Group, one of the world's largest iron ore mining companies. Saved from crushing and shipping as iron ore at the mine, they are now climbed by children and present a training challenge for passing mountain bikers.
The boulders come from the Brockman Formation of the Hamersley Ranges in the mineral-rich Pilbara region of north-west Australia, Pitjanjare country. The 2.3-billion-year-old rocks were cut from a geological formation of vast horizontal stability, millions of square kilometres of central and western Australia, that geologists call ‘a platform’. Much of Australia's landmass and large parts of Eurasia are platforms. The rock is banded with waves of iron compounds taken up by marine microbes and sedimented in fine oxide layers. Continental platforms are usually made up of layers of sedimentary rock washed by water across the much older basalt or other igneous ‘basement’ of Precambrian eons.
Hamersley iron is a platform deposit (Morris 1983).
Traditional frequent itemset mining (FIM) is constrained by several limitations, chiefly its failure to account for item quantity and significance, including factors such as price and profit. High utility itemset mining (HUIM) was introduced to address these limitations. Traditional HUIM algorithms are designed to operate solely on static transactional datasets. In practical applications, however, datasets tend to be dynamic: market basket analysis and business decision-making, for example, involve regular updates to the data. Dynamic datasets grow incrementally with the frequent addition of new records. Incremental HUIM (iHUIM) approaches mine high utility itemsets (HUIs) from incremental datasets without scanning the whole dataset, whereas traditional HUIM approaches require a full dataset scan each time the dataset is updated. iHUIM approaches thus reduce the computational cost of identifying HUIs whenever a new record is added. This survey provides a novel taxonomy that includes two-phase-based, pattern-growth-based, projection-based, utility-list-based, and pre-large-based algorithms. It delivers an in-depth analysis of the features and characteristics of existing state-of-the-art algorithms, together with a detailed comparative overview of their advantages, disadvantages, and future research directions. The survey offers both a categorized analysis and a consolidated summary of all current state-of-the-art iHUIM algorithms, providing a more in-depth comparison than currently available surveys, and highlights several research opportunities and future directions for iHUIM.
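The utility computation underlying HUIM, and the incremental idea behind iHUIM, can be sketched in a few lines (a minimal illustration with hypothetical item names and unit profits, not any particular iHUIM algorithm):

```javascript
// Hypothetical unit profits per item.
const profit = { a: 5, b: 2, c: 1 };

// Utility of an itemset in one transaction: sum of quantity * unit profit
// over its items, or 0 if the transaction does not contain the itemset.
function transactionUtility(itemset, tx) {
  // tx maps item -> purchased quantity
  if (!itemset.every(item => item in tx)) return 0;
  return itemset.reduce((sum, item) => sum + tx[item] * profit[item], 0);
}

// Incremental idea: keep a running utility total per itemset, and on each
// new transaction add only its contribution instead of rescanning the
// whole dataset, as a static HUIM algorithm would.
let utilityAB = 0;
const db = [{ a: 2, b: 1 }, { a: 1, c: 3 }];
for (const tx of db) utilityAB += transactionUtility(['a', 'b'], tx); // 12
const newTx = { a: 1, b: 2 };
utilityAB += transactionUtility(['a', 'b'], newTx); // 12 + 9 = 21
```

An itemset is "high utility" when its accumulated total exceeds a user-set threshold; real iHUIM algorithms add pruning structures on top of this arithmetic so that not every itemset's total must be maintained.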
Earth systems models, and perhaps sciences more generally, excite experiments in making ensembles.
But rather than regarding earth as an experimental object, this chapter turns to earth systems models as experimental practices in constituting ensembles. It is less concerned with the epistemic features of the models in describing earth processes and more with how models re-configure relations in an ensemble. I work again here with the presentiment that ensembles have some different propensities to the operating system, machines, devices or to their pre-eminent contemporary arrangement as platforms.
Ensembles are interesting precisely because they lack the internal regulation of a machine, and they do not circulate in concretized detachment in the way certain technical elements may do. Ensembles differ from other technical configurations in that they contain internal couplings, or mutual conditionings. But their composition as ensembles depends on a ‘margin of indetermination’ (Simondon 1989, 12).
For Simondon, the margin of indetermination plays out in the ‘inter-commutativity’ of the ensemble not in its emergence with an associated milieu/background/middle (1989, 73).
Earth and science and technology studies
What does science and technology studies (STS) actually say about the world and what it is made up of? Increasingly, recent STS speaks directly of earth, its places, surfaces, flows and histories. And while this work might seem to lie at some remove from the data centres, terminals, hash functions, graphics processing or functional closures of app programming, platforms are affected by it.
OCaml Blockly is a block-based programming environment for a subset of the functional language OCaml, developed based on Google Blockly. The distinct feature of OCaml Blockly is that it knows the scoping and typing rules of OCaml. As such, for any complete program in OCaml Blockly, its OCaml counterpart compiles: it is free from syntax errors, scoping errors, and type errors. OCaml Blockly supports introductory constructs of OCaml that are sufficient to write the shortest path problem for the Tokyo metro network. This paper describes the design of OCaml Blockly and how it is used in a CS-major course on functional programming.
Are platforms good to think with? Platforms arise from a long series of arrangements of people and things. Such arrangements lack the coherence of a device, machine or ‘system’, even if they include them. But platforms bind and constrict current configurations of collective life and perhaps digital social research. They might still enable something.
This book devises some experimental approaches to platform things, narratives, places and habits. It draws from science and technology studies (STS) and associated approaches to media, ontologies, knowledges and power. More than some other approaches, an STS-oriented inquiry might attend to the variety of platforms scattered across society-economy-nature-language-subject-object divisions. I hold one key question in mind throughout: does STS assemble the equipment and acumen to not only follow platforms as they diversify across human-nonhuman differences, but to understand how to survive on/in/with/off them? Experimental ontology in ensembles responds to that question.
How many platforms exist?
Many observers see the last two decades, for better or worse, as platform-time. In the social sciences, researchers began by re-thinking knowledge as platform objectivity (Cambrosio et al. 2004), economic relations as platform capitalism (Srnicek 2016) or platform cooperativism (Scholz and Schneider 2017), society as platform society (Van Dijck et al. 2018), and platformization as a far-reaching transformation (Plantin and Punathambekar 2018), leading to platform urbanism (Leszczynski 2019) or perhaps just the latest version of media specificity (Acland 2015).
The architecture of the ten-metre diving platform at Civic Pool in Canberra (c 1955) can also be seen in the Sprungturm at the Freibad, Berlin Pankow (c 1960). The ten-metre platform abstracts from the coastal platforms in certain facets. Diving platforms are infrastructurally rich in scale and modalities of edging and elevation. The platform stands on land and projects over a body of water, the diving pool.
The diving platform adopts the elementary stepped access to levels. The diving platform (see Figure VI.1) limits the space for steps and ladders, so that a compromise gradient, usually built as a step-ladder or a winding staircase, needs to be constructed.
Many built structures include edges, lines or planes that differentiate spaces vertically. The diving platform is a somewhat unusual case since its edges cannot align with its support. Its levels jut out over a pool of water. Such arrangements, for all their apparent simplicity as a way of elevating a surface above water, bring some complexities. The levels of the platform set at three, five and ten metres cannot be stacked vertically. In that case, divers on high levels would risk hitting lower levels. So the levels of the platform need to be staggered so that the highest level projects further over the water than the lower. Such a staggered overhang requires more complicated support than the vertically aligned levels of a high-rise building with its box-like stacking. Jenga players know that the vertical stack is quite stable until holes start appearing lower down the stack. Overhangs may involve different techniques of cantilevering or balancing the projecting higher levels with the greater mass of a base or foundation.
What if platform edges, the tightly controlled programmatic access points and terminal interfaces, were not the most important ways to approach platforms? A different departure point lies elsewhere: in light and images. Images open some paths towards platform grounding, in view of their high level of artifice, the heavy investments in value regimes associated with images, and their entwining with particular images or bodies.
As in other chapters in this book, an experimental ontology centres on grounded or place-based relations. Locating the grounding of platforms in images is hard. One statement of the difficulty appears in Bruno Latour's account of ‘digital infrastructure’:
[I]t is rather unfortunate that just at the time when we seem to have lost our ground because of the climatic mutation, we are also collectively unsettled by the complete disconnect between older technics of inscriptions and the digital infrastructure that is now activating them from behind. Just at the time when we need to land on an earth that would give us some solidity, we also have to reconcile ourselves with a technical infrastructure for which we don't have the right bodily apparatus. (May 2019, 18)
At core, the difficulty is not that older techniques of inscription of speech and images have disconnected from platforms (‘digital infrastructure’), but that the ‘bodily apparatus’ is not ‘right’. How did even one thing become an image? What dependencies and affiliations does the imbrication of images in ensembles entail?
Advances in incremental Datalog evaluation strategies have made Datalog popular among use cases with constantly evolving inputs such as static analysis in continuous integration and deployment pipelines. As a result, new logic programming debugging techniques are needed to support these emerging use cases.
This paper introduces an incremental debugging technique for Datalog, which determines the failing changes for a rollback in an incremental setup. Our debugging technique leverages a novel incremental provenance method. We have implemented our technique using an incremental version of the Soufflé Datalog engine and evaluated its effectiveness on the DaCapo Java program benchmarks analyzed by the Doop static analysis library. Compared to state-of-the-art techniques, we can localize faults and suggest rollbacks with an overall speedup of over 26.9× while providing higher quality results.
This chapter covers quantum algorithmic primitives for loading classical data into a quantum algorithm. These primitives are important in many quantum algorithms, and they are especially essential for algorithms for big-data problems in the area of machine learning. We cover quantum random access memory (QRAM), an operation that allows a quantum algorithm to query a classical database in superposition. We carefully detail caveats and nuances that appear for realizing fast large-scale QRAM and what this means for algorithms that rely upon QRAM. We also cover primitives for preparing arbitrary quantum states given a list of the amplitudes stored in a classical database, and for performing a block-encoding of a matrix, given a list of its entries stored in a classical database.
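In the usual notation, the QRAM query operation for a classical array $x$ can be stated compactly (a minimal statement of the abstract operation, not a claim about any particular hardware proposal):

```latex
\mathrm{QRAM}:\; |i\rangle|0\rangle \mapsto |i\rangle|x_i\rangle,
\qquad\text{and hence, by linearity,}\qquad
\sum_i \alpha_i\, |i\rangle|0\rangle \mapsto \sum_i \alpha_i\, |i\rangle|x_i\rangle .
```

It is this second, superposed form of the query that distinguishes QRAM from classical RAM, and it is also where the caveats about fast large-scale realization arise.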
This chapter covers the multiplicative weights update method, a quantum algorithmic primitive for certain continuous optimization problems. This method is a framework for classical algorithms, but it can be made quantum by incorporating the quantum algorithmic primitive of Gibbs sampling and amplitude amplification. The framework can be applied to solve linear programs and related convex problems, or generalized to handle matrix-valued weights and used to solve semidefinite programs.
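The classical update rule at the heart of the framework can be sketched in a few lines (an illustrative sketch with hypothetical names; the quantum versions gain their advantage by realizing the weight distribution via Gibbs sampling and amplitude amplification rather than storing it explicitly):

```javascript
// One multiplicative weights update: each "expert" i is penalized
// multiplicatively according to its observed loss. eta is the learning
// rate; losses are assumed to lie in [0, 1].
function mwuStep(weights, losses, eta) {
  const updated = weights.map((w, i) => w * Math.exp(-eta * losses[i]));
  const total = updated.reduce((a, b) => a + b, 0);
  return updated.map(w => w / total); // renormalize to a distribution
}

let weights = [0.25, 0.25, 0.25, 0.25];        // uniform start over 4 experts
weights = mwuStep(weights, [1, 0, 1, 1], 0.5); // expert 1 incurred no loss
// expert 1's weight grows relative to the others
```

Iterating this step against losses supplied by an adversary (for example, violated constraints of a linear or semidefinite program) drives the weight distribution toward feasible solutions.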
This chapter covers quantum algorithmic primitives related to linear algebra. We discuss block-encodings, a versatile and abstract access model that features in many quantum algorithms. We explain how block-encodings can be manipulated, for example by taking products or linear combinations. We discuss the techniques of quantum signal processing, qubitization, and quantum singular value transformation, which unify many quantum algorithms into a common framework.
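The access model can be stated compactly. In the standard convention, a unitary $U$ acting on $a$ ancilla qubits plus the system register is an $(\alpha, a, \varepsilon)$-block-encoding of a matrix $A$ when:

```latex
\left\| A \,-\, \alpha \left( \langle 0 |^{\otimes a} \otimes I \right) U \left( | 0 \rangle^{\otimes a} \otimes I \right) \right\| \;\le\; \varepsilon ,
```

that is, $A/\alpha$ sits, up to error, in the top-left block of $U$; products and linear combinations of block-encodings then correspond to products and linear combinations of the encoded matrices.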
The data landscape has changed almost beyond recognition over the last 30 years. Established methods of collecting, compiling and publishing information have either been made redundant or radically transformed. We have looked at the drivers of these changes in the previous chapter, with the internet and, more recently, AI reconfiguring data value chains and business models in the process. This chapter explores the current state of the information landscape and considers how much data is being created in the mid-2020s, where it is coming from and how it is being used.
How much data is there?
Precisely mapping and measuring the global information economy is impossible as countries measure things differently and many of the inputs and outputs are hidden behind firewalls and corporate networks. However, approximations can be made based on publicly reported data, government statistics and technology sales. Broadly, we can attempt to measure the monetary value of data and information products produced each year as well as the quantity of data produced, distributed and stored. For the purposes of this book, the utility of considering such figures is to help us better understand broader trends in the production, distribution and use of data and what this might mean for the future of information professionals from a range of disciplines. Understanding the shifting sands of our data-driven environment can help us make better decisions about where to invest our time and resources going forward.
One of the first comprehensive and rigorous attempts to measure the volume of information being generated each year was carried out by economists Peter Lyman and Hal Varian at the University of California, Berkeley in 2000 (Lyman and Varian, 2000).