Impressions of an algorithmic new life inevitably have their boundaries. In a recent analysis of ‘eight public engagement experiments’, Annette Markham (2021) explored in some detail the limits that solidify within framings of algorithmic futures. Markham found strong and persistent frames in place that secured certain visions of future devices, data and algorithmic social formations. Yet these visions are not simply accepted without question. The discussions at these events, Markham found, revealed that, when prompted, participants could readily articulate a critical understanding of the platforms and technologies. Despite these critical ruminations, Markham also found that the framing of particular future scenarios persisted and carried with it a strong sense of inevitability. The future was somehow fixed. There was something obdurate about these framings; something that made them seem unavoidable. Indeed, Markham’s (2021: 384) argument is that such ‘discursive patterns continually strengthen the dominant frames of inevitability and powerlessness’. This would suggest that algorithmic thinking, as we discussed in Chapter 1, has a strong sense of the future inscribed within it. More than this though, it is also suggestive of how robust and irresistible those framings have become.
Continuing with some of the themes covered so far, this chapter explores the way that algorithmic thinking comes to have limits or boundaries that constrain social forms. More specifically, it reflects on how the limits of the known and the knowable are an active part of the tensions of algorithmic thinking. To be clear from the outset, this chapter is not trying to position or establish those limits, nor is it claiming that they are fixed and secure; rather, it explores the types of tensions that arise at such boundaries and how those tensions might be understood. As Markham identified, however sturdy these limits might appear, there remains scope for them to shift if provoked to do so, with the possibility of imagining alternatives outside those existing frames, especially if the means are found to support and encourage different ways of thinking about the possibilities. As I will focus upon in this chapter, these sites of movement and tension are associated with what is known and what is thought to be knowable.
Back in 1994, reflecting on the direction of rapidly advancing neural network technologies, the Nobel Prize-winning neural systems expert Leon N. Cooper mused on what the future might yet hold:
I do have a concrete prediction. The twentieth century is the century of computers, telephones, cars, and airplanes. I think the twenty-first century will be the century of what we call intelligent machines – machines that combine the rapid processing power of the current machines with the ability to associate, to reason, to do sensible things. And I think these machines will just evolve. We’ll have simple ones at first, and finally we’re going to have reasoning machines. (Cooper, 1998: 94)
This juxtaposition of eras, as imagined a quarter of a century ago, hints at how the advancing computer science of the time, especially in its use of brain science as a foil, was beginning to see a future in which machines would hold escalating forms of intelligence. Where the past hundred years had been defined by machines, the coming hundred years, Cooper predicted, would be defined by how those machines were to become intelligent. With this, he imagined, would come an automated form of reasoning; an automation of the sensible. An advancing era of knowing was positioned near the horizon; the new life of evolving machines was perceived to be just around the corner, and it would bring with it automated forms of reasoning (see Chapter 1).
Cooper frames this in terms of what he imagined at the time would be a growing ability for computational reason. He also anticipated a relative and growing comfort with these forms of intelligence and what they might be used to achieve. His observation was that:
We’re comfortable with computers that enhance our logic, our memory, and we’ll be comfortable with reasoning machines. We’ll interact with them. I think they will come just in time because of the kinds of problems we have to solve, these very complex problems that are beyond the capacity of our minds, probably will be solved in interaction with such machines. (Cooper, 1998: 94)
Without wanting to sound too epochal, it could be said that we are living in algorithmic times. We may not want to go so far, and I find myself trying to resist the temptation, but it has become hard to draw any other conclusion. The type of ‘programmed sociality’ to which Taina Bucher (2018: 4) has referred has become impossible to deny, especially as the algorithm ‘induces, augments, supports, and produces sociality’. Different types of algorithms have come to have very large-scale social consequences, ranging from shaping what people discover, experience and consume as they go about their mediated lives, through to financial decisions, trading choices, differential treatment around insurance, credit and lending, and then on to housing, border controls and a wide array of other processes and decisions that simply cannot be listed. Such features of the contemporary landscape have led to the compelling conclusion that this is a type of ‘algorithmic culture’ (Striphas, 2015) cultivated within the structures and relations of an ‘algorithmic society’ (Peeters and Schuilenburg, 2021). Given these circumstances, we may even now be living an ‘algorithmic life’ (Amoore and Piotukh, 2016). Such conclusions are certainly merited. Part of the reason we might think of these as algorithmic times, if you would let me stick with that slightly hyperbolic phrasing for a moment longer, is just how long the list would be if we were to try to itemize every single way that algorithms have social or individual consequences. And even then, because of their often-invisible roles and the sheer complexity of overlapping systems, the list would be impossible to complete. The algorithm has become too enmeshed in the social world for it to be untangled from it.
Yet, far from being sleek and uncontested technologies informed by mutually recognized ideals or shared notions of progress, these algorithmic times are fraught with tensions. Algorithmic thinking is tense. This book is concerned with elaborating and conceptualizing some of these competing forces. More than this though, this book argues that these algorithmic times can only really be grasped if we are to engage with the specific nature of these strains. Understanding algorithmic times requires a sustained focus upon the tensions of algorithmic thinking.
In its 7 March 2020 edition, the Financial Times carried a full-page advert from the private banking firm Investec that seemingly sought to defend human decision-making. Going against the apparently relentless tide of automation, it gestures towards the pitfalls of an irrational and inflexible form of algorithmic thinking. The advert poses what appears to be a rhetorical question: ‘Who would be most likely to grant you a mortgage? An algorithm? Or a human being?’ In the unlikely case that the reader is unsure of their position on this question, the background is filled by a monochrome photo of a comfortably seated human – it is not clear if they are the imagined customer or a representative of the lender.
This particular advert is suggestive of two related things. First, the advert highlights the established materiality of algorithmic social ordering: for its key message to function, the reader must already have encountered algorithmic structures of some sort. Second, it is indicative of the way in which the very notion of the algorithm has moved into public consciousness (as discussed in Beer, 2017). The apparent rush towards being algorithmic, in which organizations seek to present themselves as devolving powers to the apparent neutrality, objectivity and heightened efficiency of algorithms, creates opportunities for others to present themselves as providing an alternative. This is not an alternative to being algorithmic, I would add; it is more often simply a different version of it. In other words, the push towards algorithmic properties creates a space in which the human can be knowingly and actively reinserted into these systems. In the case of the Investec promotion, the attempt is to appear algorithmic while not abandoning a sense of human values; it is an attempt to present this as an organization that uses automation without appearing to be too automated. There is an active avoidance of that particular boundary. Investec are, it would seem, aiming to avoid overstepping the perceived limits of algorithmic thinking.
In the summer of 2018 the technology company Hdac ran a television advert depicting their version of the ‘smart home’. With clean lines and neutral colours, the automated home space was a picture of hyper-functional minimalism. In many regards it was an entirely unremarkable advert – its style and tone were comparable with typical technology company promotions. Despite the familiar stylistic features, it was the advert’s very prominent mentions of ‘blockchain’ that were particularly notable. For the first portion of the advert a small message appeared at the bottom of the screen telling the viewer that ‘Hdac Technology is building the future with the blockchain solution’. This talk of future-building immediately returns us to Lefebvre’s impression of the new life as discussed in Chapter 1 – the blockchain home, it would seem, is also just around the corner. Later in the advert the voiceover reiterates the central message, adding that the ‘Hdac Technology platform is smart and secure thanks to the blockchain solution’. This text appeared again at the bottom of the screen. The main message of the advert is clearly that blockchain is responsible for enabling the various visions of convenience and technological adaptability being depicted. The blockchain is also given responsibility for ensuring the security of these spaces. Despite its prominence in the messaging, the advert did not go on to say what blockchain was, nor did it mention its functionality or how it was to be applied. Blockchain instead seemed to stand in for a secure (while nonspecific) technical apparatus. Blockchain was itself the message. This advert is illustrative of how blockchain is associated with notions of ideal types of data security; it is also illustrative of how the term can even act as a byword for technical systems that ensure this security.
Given its unexplained insertion in this advert and the fact that it is a term that can be used without the need for definition, it would seem that the concept of blockchain has already become a fixture of a wider technological and perhaps even public and media discourse (as discussed in Chow-White et al, 2020). It is a recognizable term. It is an established signifier.
The dimension of models derived on the basis of data is commonly restricted by the number of observations, or in the context of monitored systems, sensing nodes. This is particularly true for structural systems, which are typically high-dimensional in nature. In the scope of physics-informed machine learning, this article proposes a framework—termed neural modal ordinary differential equations (Neural Modal ODEs)—to integrate physics-based modeling with deep learning for modeling the dynamics of monitored and high-dimensional engineered systems. In this initial exploration, we restrict ourselves to linear or mildly nonlinear systems. We propose an architecture that couples a dynamic version of variational autoencoders with physics-informed neural ODEs (Pi-Neural ODEs). An encoder, as a part of the autoencoder, learns the mappings from the first few items of observational data to the initial values of the latent variables, which drive the learning of embedded dynamics via Pi-Neural ODEs, imposing a modal model structure on that latent space. The decoder of the proposed model adopts the eigenmodes derived from an eigenanalysis applied to the linearized portion of a physics-based model: a process implicitly carrying the spatial relationship between degrees-of-freedom (DOFs). The framework is validated on a numerical example and an experimental dataset of a scaled cable-stayed bridge, where the learned hybrid model is shown to outperform a purely physics-based approach to modeling. We further show the functionality of the proposed scheme within the context of virtual sensing, that is, the recovery of generalized response quantities in unmeasured DOFs from spatially sparse data.
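The modal structure that this framework imposes on its latent space can be illustrated with a minimal sketch. The two-DOF linear system below is a hypothetical stand-in, and the encoder and latent dynamics (which the actual framework learns with neural networks) are hand-coded here: an eigenanalysis of the physics-based model supplies the modes, the latent state evolves under decoupled modal ODEs, and a decoder recovers the physical DOFs through the eigenmodes.

```python
import numpy as np

# Hypothetical 2-DOF linear system: M x'' + K x = 0 (masses/stiffnesses assumed)
M = np.diag([1.0, 1.0])
K = np.array([[2.0, -1.0], [-1.0, 2.0]])

# Eigenanalysis of the linearized physics model gives modes Phi and frequencies
w2, Phi = np.linalg.eigh(np.linalg.solve(M, K))
omega = np.sqrt(w2)

# Latent modal state q = [q1, q1_dot, q2, q2_dot]; the modal ODEs decouple
def modal_rhs(q):
    dq = np.zeros_like(q)
    for i, w in enumerate(omega):
        dq[2 * i] = q[2 * i + 1]            # position rate = velocity
        dq[2 * i + 1] = -w**2 * q[2 * i]    # undamped modal oscillator
    return dq

# "Decoder": physical DOFs recovered through the eigenmodes
def decode(q):
    return Phi @ q[::2]

# Forward-Euler rollout from an initial latent state (encoder output, here fixed)
q = np.array([1.0, 0.0, 0.5, 0.0])
dt, x_hist = 0.01, []
for _ in range(500):
    x_hist.append(decode(q))
    q = q + dt * modal_rhs(q)
```

In the full framework the decoder's use of fixed eigenmodes is what carries the spatial relationship between DOFs; only the latent dynamics and encoder are learned.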
Metamorphic robots are a new type of unmanned vehicle that can reconfigure and morph between a car mode and a biped walking machine mode. Such a vehicle is superior in trafficability because it can drive at high speeds on its wheels on structured pavement and walk on its legs on unstructured pavement. An engineering prototype of a metamorphic robot was proposed and designed based on the characteristics of wheeled–legged hybrid motion, and reconfiguration planning of the robot was conducted. A kinematics model of the reconfiguration process for metamorphic robots was established using screw theory. To avoid component impact during the rapid global reconfiguration and achieve smoothness of the reconfiguration process, a rotation rule for each rotating joint was designed and the kinematics model was used to simulate and validate the motion of the system’s end mechanism (front frame) and the entire robot system. Based on the kinematics model and the rotation rules of the rotating joints, a zero-moment point (ZMP) calculation model of the entire robot mechanism in the reconfiguration process was established, and the stability of the reconfiguration motions was evaluated based on the ZMP motion trajectory. The foot landing position was optimized to improve the robot’s stability during the reconfiguration. Finally, the smoothness and stability of the reconfiguration motion were further validated by testing the prototype of the metamorphic robot.
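The ZMP evaluation described above can be sketched with the standard point-mass formula, x_ZMP = Σ m_i [x_i (z̈_i + g) − z_i ẍ_i] / Σ m_i (z̈_i + g); the masses, positions, and accelerations below are illustrative values, not the robot's actual parameters.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def zmp_x(masses, pos, acc):
    """x-coordinate of the zero-moment point for a set of point masses.

    masses: (n,) in kg; pos, acc: (n, 3) positions/accelerations with z vertical.
    Standard multibody ZMP formula (the y-coordinate is analogous).
    """
    num = np.sum(masses * (pos[:, 0] * (acc[:, 2] + G) - pos[:, 2] * acc[:, 0]))
    den = np.sum(masses * (acc[:, 2] + G))
    return num / den

# Static check: with zero accelerations the ZMP reduces to the ground
# projection of the centre of mass
m = np.array([10.0, 5.0])
p = np.array([[0.0, 0.0, 1.0], [0.3, 0.0, 0.5]])
a = np.zeros((2, 3))
print(zmp_x(m, p, a))  # -> 0.1, i.e. (10*0.0 + 5*0.3) / 15
```

Stability during reconfiguration is then assessed by checking that the ZMP trajectory stays inside the support polygon at every time step.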
Motivated by Ahmadi-Javid (Journal of Optimization Theory and Applications, 155(3), 2012, 1105–1123) and Ahmadi-Javid and Pichler (Mathematics and Financial Economics, 11, 2017, 527–550), the concept of Tsallis Value-at-Risk (TsVaR) based on Tsallis entropy is introduced in this paper. TsVaR corresponds to the tightest possible upper bound obtained from the Chernoff inequality for the Value-at-Risk. The main properties and analogous dual representation of TsVaR are investigated. These results partially generalize the Entropic Value-at-Risk by involving Tsallis entropies. Three spaces, called the primal, dual, and bidual Tsallis spaces, corresponding to TsVaR are fully studied. It is shown that these spaces equipped with the norm induced by TsVaR are Banach spaces. The Tsallis spaces are related to the $L^p$ spaces, as well as specific Orlicz hearts and Orlicz spaces. Finally, we derive an explicit formula for the dual TsVaR norm.
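The Chernoff-bound construction behind TsVaR can be sketched in its Shannon-entropy special case, the Entropic Value-at-Risk: EVaR_α(X) = inf_{t>0} t^{-1} ln(E[e^{tX}]/α), the tightest exponential-moment upper bound on VaR_α. The sample-based estimator below illustrates that special case only, not the paper's Tsallis generalization.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def evar(samples, alpha):
    """Entropic value-at-risk: the tightest Chernoff-inequality upper bound
    on VaR_alpha, estimated from samples. TsVaR generalizes this construction
    by replacing the exponential moment with a Tsallis analogue."""
    def objective(t):
        # (1/t) * log( E[exp(t X)] / alpha )
        return (np.log(np.mean(np.exp(t * samples))) - np.log(alpha)) / t
    res = minimize_scalar(objective, bounds=(1e-6, 10.0), method="bounded")
    return res.fun

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)  # standard-normal losses
print(evar(x, 0.05))          # exceeds the 95% VaR quantile (~1.645), as a bound must
```

For a standard normal, the closed form is EVaR_α = sqrt(−2 ln α) ≈ 2.45 at α = 0.05, so the bound is strictly above the quantile, consistent with its role as an upper bound.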
Zhu and He [(2018). A new closed-form formula for pricing European options under a skew Brownian motion. The European Journal of Finance 24(12): 1063–1074] provided an innovative closed-form solution by replacing the standard Brownian motion in the Black–Scholes framework with a particular skew Brownian motion. Their formula involves numerically integrating the product of the Gaussian density and corresponding distribution function. In contrast to their pricing formula, we derive a much simpler formula that only involves the Gaussian distribution function and Owen's $T$ function.
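For context, here is the baseline Black–Scholes call price that the skew-Brownian variants modify, together with a check that Owen's T function is available directly in SciPy; the paper's own skew-BM formula is not reproduced here.

```python
import numpy as np
from scipy.stats import norm
from scipy.special import owens_t

def bs_call(S, K, r, sigma, T):
    """Standard Black-Scholes European call price."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

print(round(bs_call(100, 100, 0.05, 0.2, 1.0), 4))  # -> 10.4506 (textbook benchmark)

# Owen's T function, used by the simplified formula, with the known
# identity T(0, a) = arctan(a) / (2*pi), so T(0, 1) = 1/8
print(owens_t(0.0, 1.0))  # 0.125
```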
Aerosol particles play an important role in the climate system by absorbing and scattering radiation and influencing cloud properties. They are also one of the biggest sources of uncertainty for climate modeling. Many climate models do not include aerosols in sufficient detail due to computational constraints. To represent key processes, aerosol microphysical properties and processes have to be accounted for. This is done in the ECHAM-HAM (European Centre for Medium-Range Weather Forecasts-Hamburg-Hamburg) global climate aerosol model using the M7 microphysics, but high computational costs make it very expensive to run with finer resolution or for a longer time. We aim to use machine learning to emulate the microphysics model at sufficient accuracy and reduce the computational cost by being fast at inference time. The original M7 model is used to generate data of input–output pairs to train a neural network (NN) on it. We are able to learn the variables’ tendencies, achieving an average $ {R}^2 $ score of 77.1%. We further explore methods to inform and constrain the NN with physical knowledge to reduce mass violation and enforce mass positivity. On a graphics processing unit (GPU), we achieve a speed-up of over 64 times compared with the original model.
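One common way to reduce mass violation and enforce mass positivity in an emulator's outputs is a clip-and-rescale post-processing step on the predicted tendencies. The sketch below is a generic illustration of that idea, with made-up shapes and values, and is not the paper's specific method.

```python
import numpy as np

def constrain_tendencies(raw_tendencies, current_mass, dt):
    """Post-process raw NN tendency outputs so the updated masses stay
    nonnegative and total mass is conserved.

    raw_tendencies, current_mass: (n_species,) arrays (hypothetical shapes).
    """
    updated = current_mass + dt * raw_tendencies
    updated = np.clip(updated, 0.0, None)        # enforce mass positivity
    total = current_mass.sum()
    if updated.sum() > 0:
        updated *= total / updated.sum()         # repair mass violation
    return (updated - current_mass) / dt         # corrected tendencies

mass = np.array([1.0, 2.0, 3.0])
raw = np.array([-150.0, 10.0, 20.0])             # would drive the first species negative
fixed = constrain_tendencies(raw, mass, dt=0.01)
new_mass = mass + 0.01 * fixed
print(new_mass.sum())  # ~6.0: total mass conserved, all entries nonnegative
```

Alternatives include building positivity into the architecture (e.g. predicting log-mass) rather than projecting after the fact; the paper explores several such physically informed constraints.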
We prove new mixing rate estimates for the random walks on homogeneous spaces determined by a probability distribution on a finite group $G$. We introduce the switched random walk determined by a finite set of probability distributions on $G$, prove that its long-term behaviour is determined by the Fourier joint spectral radius of the distributions, and give Hermitian sum-of-squares algorithms for the effective estimation of this quantity.
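The mixing behaviour at issue can be illustrated on the simplest example, a single (unswitched) step distribution on the cyclic group Z_n: the total-variation distance of the convolution powers to the uniform distribution decays geometrically, at a rate governed by the largest nontrivial Fourier coefficient of the step distribution. The walk and group size below are illustrative.

```python
import numpy as np

def convolve(p, q, n):
    """Convolution of two probability distributions on the cyclic group Z_n."""
    out = np.zeros(n)
    for g in range(n):
        for h in range(n):
            out[(g + h) % n] += p[g] * q[h]
    return out

n = 12
uniform = np.full(n, 1.0 / n)
step = np.zeros(n)
step[[1, n - 1, 0]] = [0.4, 0.4, 0.2]  # lazy nearest-neighbour walk on Z_12

# Total-variation distance to uniform after k steps
dist = step.copy()
tv = []
for k in range(60):
    tv.append(0.5 * np.abs(dist - uniform).sum())
    dist = convolve(dist, step, n)
print(tv[0], tv[-1])  # the distance shrinks toward 0 as the walk mixes
```

The switched walk studied in the paper replaces the single `step` by a finite set of distributions chosen adversarially at each step, which is why the relevant quantity becomes a joint spectral radius of the Fourier transforms rather than a single eigenvalue.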
We present PolicyCLOUD: a prototype for an extensible serverless cloud-based system that supports evidence-based elaboration and analysis of policies. PolicyCLOUD allows flexible exploitation and management of policy-relevant dataflows, by enabling the practitioner to register datasets and specify a sequence of transformations and/or information extraction through registered ingest functions. Once a possibly transformed dataset has been ingested, additional insights can be retrieved by further applying registered analytic functions to it. PolicyCLOUD was built as an extensible framework toward the creation of an analytic ecosystem. As of now, we have developed several essential ingest and analytic functions that are built-in within the framework. They include data cleaning, enhanced interoperability, and sentiment analysis generic functions; in addition, a trend analysis function is being created as a new built-in function. PolicyCLOUD also has the ability to tap into the analytic capabilities of external tools; we demonstrate this with a social dynamics tool implemented in conjunction with PolicyCLOUD, and describe how this stand-alone tool can be integrated with the PolicyCLOUD platform to enrich it with policy modeling, design and simulation capabilities. Furthermore, PolicyCLOUD is supported by a tailor-made legal and ethical framework derived from privacy/data protection best practices and existing standards at the EU level, which regulates the usage and dissemination of datasets and analytic functions throughout its policy-relevant dataflows. The article describes and evaluates the application of PolicyCLOUD to four families of pilots that cover a wide range of policy scenarios.
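The register-then-apply dataflow described above can be sketched with a simple function registry; all names, decorators, and the toy sentiment rule below are illustrative, not PolicyCLOUD's actual API.

```python
# Registries for ingest (transformation) and analytic functions
INGEST, ANALYTIC = {}, {}

def ingest_fn(name):
    def wrap(fn):
        INGEST[name] = fn
        return fn
    return wrap

def analytic_fn(name):
    def wrap(fn):
        ANALYTIC[name] = fn
        return fn
    return wrap

@ingest_fn("clean")
def clean(rows):
    """Data-cleaning ingest function: drop empties, normalise whitespace/case."""
    return [r.strip().lower() for r in rows if r.strip()]

@analytic_fn("sentiment")
def sentiment(rows):
    """Toy stand-in for a real sentiment-analysis function."""
    return {r: ("pos" if "good" in r else "neg") for r in rows}

def run_pipeline(dataset, ingest_steps, analytic):
    """Apply registered ingest functions in sequence, then one analytic function."""
    for step in ingest_steps:
        dataset = INGEST[step](dataset)
    return ANALYTIC[analytic](dataset)

result = run_pipeline(["  Good policy ", "", "Bad policy"], ["clean"], "sentiment")
print(result)  # {'good policy': 'pos', 'bad policy': 'neg'}
```

External tools plug into the same pattern by registering a wrapper function, which is how a stand-alone component can be integrated without changes to the core pipeline.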
We discuss a recently proposed family of statistical network models—relational hyperevent models (RHEMs)—for analyzing team selection and team performance in scientific coauthor networks. The underlying rationale for using RHEM in studies of coauthor networks is that scientific collaboration is intrinsically polyadic, that is, it typically involves teams of any size. Consequently, RHEM specify publication rates associated with hyperedges representing groups of scientists of any size. Going beyond previous work on RHEM for meeting data, we adapt this model family to settings in which relational hyperevents have a dedicated outcome, such as a scientific paper with a measurable impact (e.g., the received number of citations). Relational outcome can, on the one hand, be used to specify additional explanatory variables in RHEM, since the probability of coauthoring may be influenced, for instance, by prior (shared) success of scientists. On the other hand, relational outcome can also serve as a response variable in models seeking to explain the performance of scientific teams. To tackle the latter, we propose relational hyperevent outcome models that are closely related to RHEM, to the point that both model families can specify the likelihood of scientific collaboration—and the expected performance, respectively—with the same set of explanatory variables, allowing one to assess, for instance, whether variables leading to increased collaboration also tend to increase scientific impact. For illustration, we apply RHEM to empirical coauthor networks comprising more than 350,000 published papers by scientists working in three scientific disciplines. Our models explain scientific collaboration and impact by, among others, individual activity (preferential attachment), shared activity (familiarity), triadic closure, prior individual and shared success, and prior success disparity among the members of hyperedges.
Wind turbine towers are subjected to highly varying internal loads, characterized by large uncertainty. The uncertainty stems from many factors, including what the actual wind fields experienced over time will be, modeling uncertainties given the various operational states of the turbine with and without controller interaction, the influence of aerodynamic damping, and so forth. To monitor the true experienced loading and assess the fatigue, strain sensors can be installed at fatigue-critical locations on the turbine structure. A more cost-effective and practical solution is to predict the strain response of the structure based only on a number of acceleration measurements. In this contribution, an approach is followed where the dynamic strains in an existing onshore wind turbine tower are predicted using a Gaussian process latent force model. By employing this model, both the applied dynamic loading and strain response are estimated based on the acceleration data. The predicted dynamic strains are validated using strain gauges installed near the bottom of the tower. Fatigue is subsequently assessed by comparing the damage equivalent loads calculated with the predicted as opposed to the measured strains. The results confirm the usefulness of the method for continuous tracking of fatigue life consumption in onshore wind turbine towers.
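As a much-simplified stand-in for the Gaussian process latent force model, the sketch below runs plain GP regression on synthetic data, mapping an acceleration-derived feature to a strain-like response; the kernel, data, and noise level are all assumptions for illustration, and the actual model additionally embeds the structural dynamics in the prior.

```python
import numpy as np

def rbf(A, B, ls=1.0, var=1.0):
    """Squared-exponential (RBF) kernel between row-vector inputs."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / ls**2)

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(40, 1))               # acceleration-derived feature
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=40)   # noisy "strain" response

noise = 0.05**2
K = rbf(X, X) + noise * np.eye(len(X))             # training covariance
Xs = np.linspace(-3, 3, 100)[:, None]              # prediction inputs
Ks = rbf(Xs, X)                                    # cross-covariance

mean = Ks @ np.linalg.solve(K, y)                  # GP posterior mean
resid = np.abs(mean - np.sin(Xs[:, 0])).max()
print(resid)  # small: the posterior mean tracks the underlying response
```

The predicted response at unmeasured locations is what the validation against the tower's strain gauges assesses, before feeding the predicted strains into damage-equivalent-load calculations.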
The effect of milorganite, a commercially available organic soil amendment, on soil nutrients, plant growth, and yield has been investigated. However, its effect on soil hydraulic properties remains less understood. Therefore, this study aimed to investigate the effect of milorganite amendment on soil evaporation, moisture retention, hydraulic conductivity, and electrical conductivity of a Krome soil. A column experiment was conducted with two milorganite application rates (15 and 30% v/v) and a non-amended control soil. The results revealed that milorganite reduced evaporation rates and the length of Stage I of the evaporation process compared with the control. Moreover, milorganite increased moisture retention at saturation and permanent wilting point while decreasing soil hydraulic conductivity. In addition, milorganite increased soil electrical conductivity. Overall, milorganite resulted in increased soil moisture retention; however, moisture in the soil may not be readily available for plants due to increased soil salinity.