Automated writing evaluation (AWE) technologies are common supplementary tools for helping students improve their language accuracy using automated feedback. In most existing studies, AWE has been implemented as a class activity or an assignment requirement in English or academic writing classes. The potential of AWE as a voluntary language learning tool is unknown. This study reports on the voluntary use of Criterion by English as a foreign language students in two content courses for two assignments. We investigated (a) to what extent students used Criterion and (b) to what extent their revisions based on automated feedback increased the accuracy of their writing from the first submitted draft to the last in both assignments. We analyzed students’ performance summary reports from Criterion using descriptive statistics and non-parametric statistical tests. The findings showed that not all students used Criterion or resubmitted a revised draft. However, the findings also showed that engagement with automated feedback significantly reduced users’ errors from the first draft to the last in 11 error categories in total for the two assignments.
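As an illustration of the kind of non-parametric, paired comparison reported here, the sketch below runs a Wilcoxon signed-rank test on hypothetical per-student error counts for the first and last drafts. The data, variable names, and significance threshold are assumptions for illustration, not values from the study.

```python
# Minimal sketch: paired non-parametric test of per-student error counts,
# first draft vs. last draft (hypothetical data, not the study's).
from scipy.stats import wilcoxon

# Errors per student in one error category, e.g. subject-verb agreement.
first_draft = [7, 5, 9, 4, 6, 8, 3, 5]
last_draft = [3, 2, 6, 4, 1, 5, 2, 3]

stat, p = wilcoxon(first_draft, last_draft)
print(f"W = {stat}, p = {p:.4f}")
if p < 0.05:
    print("Error counts decreased significantly from first to last draft.")
```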
Separation logic is a powerful program logic for the static modular verification of imperative programs. However, dynamic checking of separation logic contracts on the boundaries between verified and untrusted modules is hard because it requires one to enforce (among other things) that outcalls from a verified to an untrusted module do not access memory resources currently owned by the verified module. This paper proposes an approach to dynamic contract checking by relying on support for capabilities, a well-studied form of unforgeable memory pointers that enables fine-grained, efficient memory access control. More specifically, we rely on a form of capabilities called linear capabilities for which the hardware enforces that they cannot be copied. We formalize our approach as a fully abstract compiler from a statically verified source language to an unverified target language with support for linear capabilities. The key insight behind our compiler is that memory resources described by spatial separation logic predicates can be represented at run time by linear capabilities. The compiler is separation-logic-proof-directed: it uses the separation logic proof of the source program to determine how memory accesses in the source program should be compiled to linear capability accesses in the target program. The full abstraction property of the compiler essentially guarantees that compiled verified modules can interact with untrusted target language modules as if they were compiled from verified code as well. This article is an extended version of one that was presented at ICFP 2019 (Van Strydonck et al., 2019).
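To make the key insight concrete, here is a minimal runtime analogue of a linear capability: an object guarding a memory resource that can be moved to an untrusted callee but never duplicated, so the callee cannot secretly retain access. This Python sketch only mimics, via runtime checks, what the paper's target language enforces in hardware; the class and function names are hypothetical.

```python
class LinearCap:
    """A capability that can be moved but never duplicated (runtime analogue)."""
    def __init__(self, buffer):
        self._buffer = buffer  # the memory resource this capability guards

    def transfer(self):
        # Move semantics: the source capability is invalidated, so at most
        # one live capability to the resource exists at any time.
        if self._buffer is None:
            raise RuntimeError("capability already moved")
        new_cap = LinearCap(self._buffer)
        self._buffer = None
        return new_cap

    def read(self, i):
        if self._buffer is None:
            raise RuntimeError("use after move")
        return self._buffer[i]

def untrusted(cap):
    # The callee may use the capability while it holds it...
    _ = cap.read(0)
    # ...but must hand the sole capability back; it cannot keep a copy.
    return cap.transfer()

owned = LinearCap(bytearray(b"resource"))
owned = untrusted(owned.transfer())  # lend out, get back; no copy survives
print(owned.read(0))
```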
Renal replacement therapies, generally associated with the artificial kidney (AK), are membrane-based treatments that take over the separation functions of the failing kidney in extracorporeal blood circulation. Their progress from conventional hemodialysis towards high-flux hemodialysis (HFHD), enabled by ultrafiltration membranes characterized by high convective permeation fluxes, has intensified the need to elucidate how membrane fluid removal rates increase the potentially blood-traumatizing shear stresses that develop adjacent to the membrane. The AK surrogate, consisting of two compartments separated by an ultrafiltration membrane, is set up with water circulating in the upper chamber, mimicking the blood flow rates and membrane fluid removal rates typical of HFHD. The pressure drop mirrors both the quantification of the shear stresses and the modification of the velocity profiles. Comparing flow in slits bounded by a permeable membrane with flow bounded by an impermeable wall, the pressure drop increases by ca. 512% and 576% for $ \mathrm{CA}22/5\%{\mathrm{SiO}}_2 $ and $ \mathrm{CA}30/5\%{\mathrm{SiO}}_2 $ membranes, respectively.
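For the impermeable-wall baseline, the link between pressure drop and wall shear stress follows from plane Poiseuille (slit) flow; the standard relations below are added for orientation and are not taken from the paper (channel height h, width w, length L, flow rate Q, viscosity μ):

```latex
\Delta P = \frac{12\,\mu\,Q\,L}{w\,h^{3}}, \qquad
\tau_w = \frac{6\,\mu\,Q}{w\,h^{2}}
\quad\Longrightarrow\quad
\tau_w = \frac{h}{2L}\,\Delta P ,
```

so, for a fixed geometry, measuring the pressure drop is a direct window onto the wall shear stress, which is why permeation-induced increases in pressure drop signal elevated shear at the membrane.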
Governing Privacy in Knowledge Commons explores how privacy impacts knowledge production, community formation, and collaborative governance in diverse contexts, ranging from academia and IoT to social media and mental health. Using nine new case studies and a meta-analysis of previous knowledge commons literature, the book integrates the Governing Knowledge Commons framework with Helen Nissenbaum's Contextual Integrity framework. The multidisciplinary case studies show that personal information is often a key component of the resources created by knowledge commons. Moreover, even when it is not the focus of the commons, personal information governance may require community participation and boundaries. Taken together, the chapters illustrate the importance of exit and voice in constructing and sustaining knowledge commons through appropriate personal information flows. They also shed light on the shortcomings of current notice-and-consent style regulation of social media platforms. This title is also available as Open Access on Cambridge Core.
Recent developments in artificial intelligence, especially neural network and deep learning technology, have led to rapidly improving performance in voice assistants such as Siri and Alexa. Over the next few years, capability will continue to improve and become increasingly personalised. Today's voice assistants will evolve into virtual personal assistants firmly embedded within our everyday lives. Told from the perspective of a fictitious personal assistant called Cyba, this book provides an accessible but detailed overview of how a conversational voice assistant works, especially how it understands spoken language, manages conversations, answers questions and generates responses. Cyba explains through examples and diagrams the neural network technology underlying speech recognition and synthesis, natural language understanding, knowledge representation, conversation management, language translation and chatbot technology. Cyba also explores the implications of this rapidly evolving technology for security, privacy and bias, and gives a glimpse of future developments. Cyba's website can be found at HeyCyba.com.
Diverse and increasingly comprehensive data about our personal lives are collected. When these personal data are linked to health records or to other data collected in our environment, such as data held by state administrations or financial systems, they have huge potential for public health research and society in general. Precision medicine, including pharmacogenomics, depends particularly on the potential of data linkage. With new capacities to analyze linked data, researchers today can retrieve and assess valuable and clinically relevant information. One way to develop such linked data sets and to make them available for research is through health data cooperatives. An example of such a cooperative is MIDATA, recently established in Switzerland and the main focus of this chapter. In response to concerns about the present health data economy, MIDATA was founded to provide a governance structure for data storage that supports individuals’ digital self-determination by allowing MIDATA members to control their own personal data flows and to store such data in a secure environment.
Here we discuss how the use of artificial intelligence will change the way science is done. Deep learning algorithms can now surpass the performance of human experts, a fact that has major implications for the future of our discipline. Successful uses of AI technology all possess the two essential ingredients for deep learning: copious training data and a clear way to classify it. When these two conditions are met, researchers working in tandem with AI technologies can organize information and solve scientific problems with impressive efficiency. The future of science will increasingly rely on human–machine partnerships, where people and computers work together, revolutionizing the scientific process. We provide an example of what this may look like. Hoping to remedy a present-day challenge in science known as the “reproducibility crisis,” researchers used deep learning to uncover patterns in papers that signal strong and weak scientific findings. By combining the insights of machines and humans, the new AI model achieves the highest predictive accuracy.
We begin by discussing the challenges of quantifying scientific impact. We introduce the h-index and explore its implications for scientists. We also detail the h-index’s strengths when compared with other metrics, showing how it bypasses many of the disadvantages posed by alternative ranking systems. We then explore the h-index’s predictive power, finding that it provides an easy but relatively accurate estimate of a person’s achievements. Despite its relative accuracy, we are aware of the h-index’s limitations, which we detail here with suggestions for possible remedies.
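Because the h-index is defined as the largest h such that h of a scientist’s papers have each received at least h citations, it is simple to compute; the sketch and sample citation counts below are illustrative additions, not data from the chapter.

```python
def h_index(citations):
    # Largest h such that h papers have at least h citations each.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(ranked, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# Five papers with these citation counts yield h = 3:
# three papers are cited at least three times each.
print(h_index([10, 8, 5, 2, 1]))  # -> 3
```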
To describe coauthorship networks, we begin with the Erdös number, which links mathematicians to their famously prolific colleague through the papers they have collaborated on. Coauthorship networks help us capture collaborative patterns and identify important features that characterize them. We can also use them to predict how many collaborators a scientist will have in the future based on her coauthorship history. We find that collaboration networks are scale-free, following a power-law distribution. As a consequence of the Matthew effect, frequent collaborators are more likely to attract further collaborations, becoming hubs in their networks. We then explore the small-world phenomenon evidenced in coauthorship networks, sometimes referred to as “six degrees of separation.” To understand how a network’s small-worldliness impacts creativity and success, we look to teams of artists collaborating on Broadway musicals, finding that teams perform best when the network they inhabit is neither too big nor too small. We end by discussing how connected components within networks provide evidence for the “invisible college.”
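The scale-free claim has a compact form: the probability that a scientist has k collaborators decays as a power law, unlike the sharply peaked degree distribution of a comparable random network. The formula below is the standard one, added for illustration; empirical exponents for collaboration networks typically fall roughly between 2 and 3.

```latex
P(k) \sim k^{-\gamma}, \qquad 2 \lesssim \gamma \lesssim 3 .
```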
We introduce the role that productivity plays in scientific success by describing Paul Erdös’ exceptional productivity. How does Erdös’ productivity measure up to that of other scientists? Is the exponential increase in the number of papers published due to rising productivity rates or to the growing number of scientists working in the discipline? We find that the productivity of individual scientists has increased, but that this increase is driven by the growth of collaborative work in science. We also quantify the significant productivity differences between disciplines and individual scientists. Why do these differences exist? To answer this question, we explore Shockley’s work on the subject, beginning with his discovery that productivity follows a lognormal distribution. We outline his hurdle model of productivity, which not only explains why the productivity distribution is fat-tailed, but also provides a helpful framework for improving individual scientific output. Finally, we outline how productivity is multiplicative, but salaries are additive, a mismatch that has implications for science policy.
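The lognormal result follows from the multiplicative structure of the hurdle model: if producing a paper requires clearing m roughly independent hurdles, output is a product of per-hurdle factors, and taking logarithms turns the product into a sum to which the central limit theorem applies. A sketch of that step, added here for illustration:

```latex
N \propto \prod_{i=1}^{m} p_i
\quad\Longrightarrow\quad
\ln N = \text{const} + \sum_{i=1}^{m} \ln p_i \;\approx\; \mathcal{N}(\mu,\sigma^{2}),
```

so N itself is approximately lognormally distributed, which is exactly the fat-tailed shape Shockley observed.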
Here we address bias and causality, beginning with the bias against failure in the existing science of science research. Because the data available to us is mostly on published papers, we necessarily disregard the role that failure plays in a scientific career. This could be framed as a survivorship bias, where the “surviving” papers are those that make it to publication. This same issue can be seen as a flaw in our current definition of impact, since our use of citation counts keeps a focus on success in the discipline. We explore the drawbacks and upsides of variants on citation counts, including altmetrics like page views. We also look at possible ways to expand the science of science to include unobservable factors, as we saw in the case of the credibility revolution in economics. Using randomized controlled trials and natural experiments, the science of science could explore causality more deeply. Given the tension between certainty and generalizability, both experimental and observational insights are important to our understanding of how science works.
While there is plenty of information available about the luminaries of science, here we discuss the relative lack of information about ordinary researchers. Luckily, because of recent advances in name disambiguation, the career histories of everyday scientists can now be analyzed, changing the way we think about scientific creativity entirely. We describe how the process of shuffling a career – moving the works a scientist publishes around randomly in time – helped us discover what we call the “random impact rule,” which dictates that, when we adjust for productivity, the highest impact work in a career can occur at any time. We also see that the cumulative distribution of when landmark works occur within careers confirms that the random impact rule holds true not just for the highest impact work in any career but also for other important works. While there is precedent for this rule in the literature – Simonton proposed the “constant probability of success” model in the 1970s – until recently we didn’t have the data on hand to test it. The random impact rule allows us to decouple age and creativity, instead linking periods of high productivity to creative breakthroughs.
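The shuffle test behind the random impact rule can be sketched in a few lines: repeatedly permute the order of a career’s papers and check whether the position of the biggest hit in the real career stands out against the uniform timing the shuffles produce. The career data and helper names below are hypothetical illustrations, not the book’s dataset.

```python
import random
from collections import Counter

# Hypothetical career: citation counts of papers in publication order.
career = [3, 12, 7, 250, 9, 15, 4, 30, 6, 11]

def hit_position(papers):
    # 0-based index of the highest-impact paper in the sequence.
    return max(range(len(papers)), key=lambda i: papers[i])

real_pos = hit_position(career)

# Shuffle the career many times; under the random impact rule the
# hit is equally likely to land at any position.
positions = Counter()
for _ in range(10_000):
    shuffled = career[:]
    random.shuffle(shuffled)
    positions[hit_position(shuffled)] += 1

for pos in sorted(positions):
    print(pos, positions[pos] / 10_000)  # ~0.1 each: uniform timing
print("real hit position:", real_pos)
```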
We begin by asking how far back in the literature we should go when choosing discoveries to build on. In other words, how myopic is science in the age of Google Scholar? By looking at the age distribution of citations and identifying knowledge “hot spots,” we pinpoint the unique combinations of old and relatively new knowledge that are most likely to produce new breakthroughs. In doing so, we see that the way we build on past knowledge follows clear patterns, and we explore how these patterns shape future scientific discourse. We also look at the impact that a citation’s jump–decay pattern has on the relevance of research over time, finding that all papers have an expiration date and that we can predict that date based on the jump–decay pattern.
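One common way to formalize the jump–decay pattern, borrowed from the citation-dynamics literature rather than stated in this chapter, is a lognormal aging function for the citation rate at age t after publication:

```latex
P(t) = \frac{1}{\sqrt{2\pi}\,\sigma t}
       \exp\!\left(-\frac{(\ln t - \mu)^{2}}{2\sigma^{2}}\right),
```

where μ sets how quickly citations peak after publication and σ how slowly they fade; a paper’s “expiration date” is read off from where this curve decays to a negligible rate.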
We begin by acknowledging the sheer size of the citation index to date, and then discuss the disparity in citations that these papers receive. These differences in impact among papers can be captured by a citation distribution, which can be approximated by a power-law function. We compare power-law distributions to Gaussian distributions, illustrating the distinctions between the two and what they tell us about citation patterns. We then explore the differences in average number of citations between fields, which can make cross-disciplinary comparisons complicated. Luckily, we find that citation patterns are surprisingly universal once we account for the field a paper is published in, which allows us to identify common trends in citation and impact regardless of discipline. We end with a discussion of what citations don’t capture, given that they are frequently used as a proxy for impact. We pinpoint some potential flaws in this metric, but see citation patterns as a valuable way to gauge the collective wisdom of the scientific community.
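The contrast between the two distributions, and the rescaling that makes citation counts comparable across fields, can be stated compactly. The forms below are standard (the field-normalized “relative citation” follows Radicchi et al., 2008) and are added for illustration:

```latex
P(c) \sim c^{-\gamma}
\qquad\text{vs.}\qquad
P(c) \propto \exp\!\left(-\frac{(c-\langle c\rangle)^{2}}{2\sigma^{2}}\right),
\qquad
c_f = \frac{c}{\langle c\rangle_{\text{field}}},
```

where dividing each paper’s citations c by its field’s average collapses the distributions of very different disciplines onto an approximately universal curve.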
The rise of social media has raised questions about the vitality of privacy values and concerns about threats to privacy. The convergence of politics with social media use amplifies the privacy concerns traditionally associated with political organizing, particularly when marginalized groups and minority politics are involved. Despite the importance of these issues, there has been little empirical exploration of how privacy governs political activism and organizing in online environments. This chapter explores how privacy concerns shape political organizing on Facebook, through detailed case studies of how groups associated with March for Science, Day Without Immigrants (“DWI”), and Women’s March govern information flows. These cases address distinct issues, while operating in similar contexts and on the same timescales, allowing for the exploration of privacy in governance of personal information flows in political organizing and Facebook sub-communities. Privacy practices and concerns differed between the cases, depending on factors such as the nature of the group, the political issues it confronts, and its relationships to other organizations or movements.
Given the jump–decay citation patterns discussed in the previous chapter, are we forced to conclude that the papers we publish will be relevant for only a few years? We find that while aggregate citations follow a clear pattern, the trajectories of individual citations are remarkably variable. Yet, by analyzing individual citation histories, we are able to isolate three parameters – immediacy, longevity, and fitness – that dictate a paper’s future impact. In fact, all citation histories are governed by a single formula, a fact which speaks to the universality of dynamics that at first seemed quite variable. We end by discussing how a paper’s ultimate impact can be predicted using one factor alone: its relative fitness. We show how papers with the same fitness will acquire the same number of citations in the long run, regardless of which journals they are published in.
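One well-known version of this single formula is the model of Wang, Song, and Barabási (Science, 2013), in which the cumulative citations of paper i at time t after publication take the form

```latex
c_i(t) = m\!\left[\exp\!\left(\lambda_i\,
         \Phi\!\left(\frac{\ln t - \mu_i}{\sigma_i}\right)\right) - 1\right],
```

where Φ is the cumulative normal distribution, μ_i captures immediacy, σ_i longevity, λ_i fitness, and m is a global constant. As t → ∞, Φ → 1 and the ultimate impact becomes c_i = m(e^{λ_i} − 1): long-run citations depend on fitness alone, which is exactly the prediction described above.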