A versatile architecture is presented to implement autonomous vehicles. The core idea consists of a set of standalone modules, called wireless robotic components (WRCs). Each component performs a particular function by means of a radio modem interface, a processing unit, and a sensor/actuator. The components interact through a coordinator that redirects asynchronous requests to the appropriate WRCs, configuring a built-in network. The WRC architecture has been tested on marine and terrestrial platforms to perform waypoint-following and wall-following tasks. Results show that the tested system provides the adaptability and portability needed to assemble a variety of autonomous vehicles.
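A minimal sketch of the coordinator-and-components pattern described in this abstract, written in Python with asyncio; the class names, the in-memory dispatch, and the example components (a GPS reader and a rudder actuator) are illustrative assumptions, since the actual WRCs communicate over radio modems rather than function calls.

```python
import asyncio

class WRC:
    """A standalone wireless robotic component: one function, one
    sensor/actuator. An in-memory call stands in for the radio
    modem interface of the real system."""
    def __init__(self, name, handler):
        self.name = name
        self.handler = handler  # the component's single function

    async def handle(self, request):
        await asyncio.sleep(0)  # simulate processing on the local unit
        return self.handler(request)

class Coordinator:
    """Redirects asynchronous requests to the appropriate WRC,
    forming the built-in network the abstract describes."""
    def __init__(self):
        self.components = {}

    def register(self, wrc):
        self.components[wrc.name] = wrc

    async def dispatch(self, target, request):
        return await self.components[target].handle(request)

async def main():
    coordinator = Coordinator()
    # Hypothetical components: a GPS sensor and a rudder actuator.
    coordinator.register(WRC("gps", lambda _: {"lat": 43.0, "lon": -8.4}))
    coordinator.register(WRC("rudder", lambda angle: f"set to {angle} deg"))
    fix = await coordinator.dispatch("gps", None)
    ack = await coordinator.dispatch("rudder", 12)
    print(fix, ack)

asyncio.run(main())
```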
With the rise of the “Maker Movement” and the entrepreneurial university, academic makerspaces have become widespread. These facilities provide tools and machines that enable making and tinkering; and while the offerings, organizational and operational models, and outreach of academic makerspaces can vary widely across institutions, their common value proposition is enabling innovation, entrepreneurship, and hands-on project-based learning. Existing studies of these facilities are largely qualitative and exploratory in nature. Through a case study, this paper presents an in-depth analysis of, and insights into, the users and usage of an academic makerspace. Using the data generated by and collected from the users of an academic makerspace, we evaluate the effects of having access to the makerspace on users' teaching and learning experiences, and their satisfaction with the offerings. Our results show that attracting courses and educators to the facilities played a strong role in growing the user base, that courses introduced new teaching and learning activities to make use of the offerings, that group and project work is positively impacted, and that users are very satisfied with the facilities and with having access to their offerings. The analysis also showed that demand for the offerings can be challenging to manage during certain periods, that most users come from three departments (mechanical, electrical, and civil engineering), and that the diversity of the users could improve with the introduction of new offerings, such as a wet lab for bio/chemistry experiments and a food lab to tinker with food processing and preparation.
Data-driven computational neuroscience facilitates the transformation of data into insights into the structure and functions of the brain. This introduction for researchers and graduate students is the first in-depth, comprehensive treatment of statistical and machine learning methods for neuroscience. The methods are demonstrated through case studies of real problems to empower readers to build their own solutions. The book covers a wide variety of methods, including supervised classification with non-probabilistic models (nearest-neighbors, classification trees, rule induction, artificial neural networks and support vector machines) and probabilistic models (discriminant analysis, logistic regression and Bayesian network classifiers), meta-classifiers, multi-dimensional classifiers and feature subset selection methods. Other parts of the book are devoted to association discovery with probabilistic graphical models (Bayesian networks and Markov networks) and spatial statistics with point processes (complete spatial randomness and cluster, regular and Gibbs processes). Cellular, structural, functional, medical and behavioral neuroscience levels are considered.
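As a flavor of the supervised classification methods the book covers, the following is a minimal nearest-neighbors classifier sketched in Python with NumPy; the synthetic data, the two-feature setup, and the choice of k are invented for illustration and are not taken from the book's case studies.

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Classify each test point by majority vote among its
    k nearest training points (Euclidean distance)."""
    preds = []
    for x in X_test:
        dists = np.linalg.norm(X_train - x, axis=1)
        nearest = np.argsort(dists)[:k]       # indices of k closest points
        votes = y_train[nearest]
        preds.append(np.bincount(votes).argmax())
    return np.array(preds)

# Synthetic two-class data (two hypothetical features per observation).
rng = np.random.default_rng(0)
X0 = rng.normal(loc=0.0, scale=1.0, size=(50, 2))
X1 = rng.normal(loc=3.0, scale=1.0, size=(50, 2))
X_train = np.vstack([X0, X1])
y_train = np.array([0] * 50 + [1] * 50)

X_test = np.array([[0.2, -0.1], [2.8, 3.1]])
print(knn_predict(X_train, y_train, X_test, k=5))  # expected: [0 1]
```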
Drift analysis is one of the state-of-the-art techniques for the runtime analysis of randomized search heuristics (RSHs) such as evolutionary algorithms (EAs), simulated annealing, etc. The vast majority of existing drift theorems yield bounds on the expected value of the hitting time for a target state, for example the set of optimal solutions, without making additional statements on the distribution of this time. We address this lack by providing a general drift theorem that includes bounds on the upper and lower tail of the hitting time distribution. The new tail bounds are applied to prove very precise sharp-concentration results on the running time of a simple EA on standard benchmark problems, including the class of general linear functions. On all these problems, the probability of deviating by an r-factor in lower-order terms of the expected time decreases exponentially with r. The usefulness of the theorem outside the theory of RSHs is demonstrated by deriving tail bounds on the number of cycles in random permutations. All these results handle a position-dependent (variable) drift that was not covered by previous drift theorems with tail bounds. Finally, user-friendly specializations of the general drift theorem are given.
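For context, the classical additive drift theorem (He and Yao), a much simpler special case than the general variable-drift theorem derived in this work, conveys the core idea; the statement below is the standard textbook form, not the paper's theorem, and the tail-bound display only paraphrases the abstract's claim.

```latex
% Classical additive drift theorem -- a simple special case; the
% paper's general theorem handles variable drift and tail bounds.
% (X_t) is a non-negative process, T = \min\{ t : X_t = 0 \}.
\[
\text{If } \mathbb{E}\bigl[X_t - X_{t+1} \mid X_t > 0\bigr] \ge \delta > 0
\text{ for all } t,
\quad\text{then}\quad
\mathbb{E}[T \mid X_0] \le \frac{X_0}{\delta}.
\]
% The tail bounds described in the abstract have the flavor
\[
\Pr\bigl[\, T \ge \mathbb{E}[T] + r \cdot \ell \,\bigr] \le e^{-\Omega(r)},
\]
% where \ell stands for lower-order terms of the expected time.
```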
Digitalization and the momentous role being assumed by data are commonly viewed as pervasive phenomena whose impact is felt in all aspects of society and the economy. Design activity is by no means immune from this trend, and the relationship between digitalization and design is decades old. However, what is the current impact of this ‘data revolution’ on design? How will design activity change? What are the resulting research questions of interest to academics? What are the main challenges for firms and for educational institutions having to cope with this change? The paper provides a comprehensive conceptual framework, based on recent literature and anecdotal evidence from industry. It identifies three main streams: the consequences for designers, the consequences for design processes, and the role of methods for data analytics. In turn, these three streams lead to implications at the individual, organizational, and managerial levels, and raise several questions worth placing on future research agendas. Moreover, the paper introduces relational diagrams depicting the interactions between the objects and the actors involved in the design process, and suggests that what is occurring is by no means a simple evolution but a paradigmatic shift in the way artefacts are designed.
We live in an algorithmic world. There is currently no area of our lives that has not been touched by computation and its language and tools. Since the early 1940s, when a small group of people led by John von Neumann gathered to turn the vision of a universal computing machine into reality, humankind has been experiencing a sort of permanent revolution in which our understanding of the world and our ways of acting on it are steadily transformed by the steps forward we make in processing information. Such a condition is vividly depicted by Alan Turing in one of the founding documents of the quest for artificial intelligence (AI): “in attempting to construct machines … we are providing mansions for the souls.”1 Computers and algorithms can be seen as the building blocks of a new, ever-expanding building – a cathedral, to use George Dyson’s metaphor2 – in which every human activity is going to be shaped by the digital architecture hosting it.
Algorithms in society are both innocuous and ubiquitous. They seamlessly permeate both our on- and offline lives, quietly distilling the volumes of data each of us now creates. Today, algorithms determine the optimal way to produce and ship goods, the prices we pay for those goods, the money we can borrow, the people who teach our children, and the books and articles we read – reducing each activity to an actuarial risk or score. “If every algorithm suddenly stopped working,” Pedro Domingos hypothesized, “it would be the end of the world as we know it.”1
Public administration in Norway and in many other countries has used computers for more than fifty-five years. It is normal and necessary. Of course, it is possible to imagine many more office buildings where thousands of men and women would do all the detailed processing of individual cases that are processed today by computers, but this alternative is not very realistic: Modern taxation systems, national social insurance schemes and management of many other welfare programs would not be feasible without the use of computers and the algorithmic law that is integrated in the software. Thus, the question is not if public administration should apply computer technology, but how this should be done. This chapter deals with important how-to questions.
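As a purely hypothetical illustration of what “algorithmic law integrated in the software” can look like in practice, here is a progressive tax computation encoded as a rule in Python; the brackets and rates are invented and correspond to no real tax code, Norwegian or otherwise.

```python
def income_tax(income: float) -> float:
    """Hypothetical progressive tax rule encoded as software.
    Brackets and rates are invented for illustration only."""
    brackets = [              # (upper limit of bracket, marginal rate)
        (50_000, 0.10),
        (150_000, 0.25),
        (float("inf"), 0.40),
    ]
    tax, lower = 0.0, 0.0
    for upper, rate in brackets:
        if income > lower:
            # Tax only the slice of income that falls in this bracket.
            tax += (min(income, upper) - lower) * rate
        lower = upper
    return tax

print(income_tax(120_000))  # 50_000*0.10 + 70_000*0.25 = 22500.0
```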
Transparency has been in the crosshairs of recent writing about accountable algorithms. Its critics argue that releasing data can be harmful, and releasing source code won’t be useful.1 They claim individualized explanations of artificial intelligence (AI) decisions don’t empower people, and instead distract from more effective ways of governing.2 While criticizing transparency’s efficacy with one breath, with the next they defang it, claiming corporate secrecy exceptions will prevent useful information from getting out.3
This chapter’s thesis is simple: as a general matter, agreements are a functional and conceptually straightforward way for the law to recognize algorithms. In particular, using agreements to bring algorithms into the law is better than trying to use the law of agency to do so.1 Casual speech and conceptualism have led to the commonplace notion of “electronic agents,” but the law of agreements is a more functional entry point for algorithms to interact with the law than the concept of vicarious action. Algorithms need not involve any vicarious action, and most of the law of agency translates very poorly to algorithms that lack intent, reasonable understanding, and legal personality in their own right; instead, algorithms cause activity that may have contractual or other agreement-based legal significance. Recognizing the power (and perhaps the necessity) of addressing algorithms by means of the law governing agreements and other legal instruments can free us from formalistic attempts to shoehorn algorithms into a limited set of existing legal categories.
The United States’ transition from an economy built on form contracts to an economy built on algorithmic contracts has been as subtle as it has been thorough. In the form contract economy, many firms used standard order forms to make and receive orders. Firms purchased products and services under lengthy terms of service. Even negotiated agreements between fairly sophisticated businesses involved heavy incorporation of standard form terms selected by lawyers.