Data is now one of the world's most valuable resources, if not the most valuable. The adoption of data-driven applications across economic sectors has made data, and the flow of data, so pervasive that it is integral to almost everything we do as members of society, from managing our finances to operating businesses to powering the apps we use every day. Governing cross-border data flows is therefore inherently difficult, given the ubiquity and value of data and the impact government policies can have on national competitiveness, business attractiveness and personal rights. The challenge for governments is to address the broad range of data-related issues in a coherent manner in the context of a global data-driven economy.
This book engages with the largely unexplored question of why and how governments should develop a coherent and consistent strategic framework for regulating cross-border data flows. The objective is to fill a significant gap in the legal and policy setting by considering multiple perspectives in order to assist in the development of a jurisdiction's coherent and strategic policy framework.
In recent years, the theory of rewriting has been used and extended to provide systematic techniques for showing coherence results for strict higher categories. Here, we investigate a further generalization to Gray categories, which are known to be equivalent to tricategories. This requires us to develop the theory of rewriting in the setting of precategories, which are adapted to mechanized computations and include Gray categories as particular cases. We show that a finite rewriting system in precategories admits a finite number of critical pairs, which can be efficiently computed. We also extend Squier’s theorem to our context, showing that a convergent rewriting system is coherent, which means that any two parallel 3-cells are necessarily equal. This allows us to prove coherence results for several well-known structures in the context of Gray categories: monoids, adjunctions, and Frobenius monoids.
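As a loose illustration of the critical-pair computation mentioned above, the sketch below enumerates critical pairs for an ordinary (one-dimensional) string rewriting system. It is only an analogue of the precategorical setting of the paper, and the example rules are invented for illustration.

```python
# Minimal sketch: critical pairs of a string rewriting system, a
# one-dimensional analogue of the higher-dimensional setting above.
# The rules used in the example are illustrative, not from the paper.

def critical_pairs(rules):
    """Enumerate critical pairs arising from overlaps of left-hand sides.

    rules: list of (lhs, rhs) pairs of strings.
    Returns (peak, reduct1, reduct2) triples, where the two reducts are
    obtained by applying each rule to the overlapping peak.
    """
    pairs = []
    for (l1, r1) in rules:
        for (l2, r2) in rules:
            # Boundary overlaps: a nonempty proper suffix of l1 equals a prefix of l2.
            for k in range(1, min(len(l1), len(l2))):
                if l1.endswith(l2[:k]):
                    peak = l1 + l2[k:]          # the overlapping word
                    reduct1 = r1 + l2[k:]       # rewrite the l1 part
                    reduct2 = l1[:-k] + r2      # rewrite the l2 part
                    pairs.append((peak, reduct1, reduct2))
            # Inclusion overlaps: l2 occurs inside l1 (first occurrence only, for brevity).
            if l1 != l2:
                i = l1.find(l2)
                if i != -1:
                    pairs.append((l1, r1, l1[:i] + r2 + l1[i + len(l2):]))
    return pairs

# Example: a toy presentation with rules aa -> a and aba -> b.
print(critical_pairs([("aa", "a"), ("aba", "b")]))
```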
One of the most fundamental properties of a proof system is analyticity, expressing the fact that a proof of a given formula F uses only subformulas of F. In sequent calculus, this property is usually proved by showing that the $\mathsf{cut}$ rule is admissible, i.e., the introduction of the auxiliary lemma H in the reasoning “if H follows from G and F follows from H, then F follows from G” can be eliminated. The proof of cut admissibility is usually a tedious, error-prone process through several proof transformations, thus requiring the assistance of (semi-)automatic procedures. In previous work by Miller and Pimentel, linear logic ($\mathsf{LL}$) was used as a logical framework for establishing sufficient conditions for cut admissibility of object logical systems (OL). The OL’s inference rules are specified as an $\mathsf{LL}$ theory, and an easy-to-verify criterion sufficed to establish the cut-admissibility theorem for the OL at hand. However, there are many logical systems that cannot be adequately encoded in $\mathsf{LL}$, the most symptomatic cases being sequent systems for modal logics. In this paper, we use a linear-nested sequent ($\mathsf{LNS}$) presentation of $\mathsf{MMLL}$ (a variant of $\mathsf{LL}$ with subexponentials), and show that it is possible to establish a cut-admissibility criterion for $\mathsf{LNS}$ systems for (classical or substructural) multimodal logics. We show that the same approach is suitable for handling the $\mathsf{LNS}$ system for intuitionistic logic.
This paper studies normalisation by evaluation for typed lambda calculus from a categorical and algebraic viewpoint. The first part of the paper analyses the lambda definability result of Jung and Tiuryn via Kripke logical relations and shows how it can be adapted to unify definability and normalisation, yielding an extensional normalisation result. In the second part of the paper, the analysis is refined further by considering intensional Kripke relations (in the form of Artin–Wraith glueing) and shown to provide a function for normalising terms, casting normalisation by evaluation in the context of categorical glueing. The technical development includes an algebraic treatment of the syntax and semantics of the typed lambda calculus that allows the definition of the normalisation function to be given within a simply typed metatheory. A normalisation-by-evaluation program in a dependently typed functional programming language is synthesised.
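The categorical development does not fit in a snippet, but the operational evaluate/reify pattern behind normalisation by evaluation can be sketched. The following Python toy (not the dependently typed program synthesised in the paper) normalises untyped lambda terms; the term constructors and fresh-name scheme are ad hoc choices for illustration.

```python
# Minimal normalisation-by-evaluation sketch for the untyped lambda calculus,
# illustrating the evaluate/reify pattern only.
# Terms: ('var', name) | ('lam', name, body) | ('app', fun, arg)
# Semantic values: ('clo', f) for lambdas (f a Python closure),
# ('nvar', name) / ('napp', v, v) for neutral (stuck) terms.

def evaluate(term, env):
    tag = term[0]
    if tag == 'var':
        return env.get(term[1], ('nvar', term[1]))   # free variables stay neutral
    if tag == 'lam':
        _, name, body = term
        return ('clo', lambda v: evaluate(body, {**env, name: v}))
    if tag == 'app':
        fun, arg = evaluate(term[1], env), evaluate(term[2], env)
        return fun[1](arg) if fun[0] == 'clo' else ('napp', fun, arg)
    raise ValueError(term)

def reify(value, fresh=0):
    tag = value[0]
    if tag == 'clo':
        # Assumes source variables do not use the x0, x1, ... namespace.
        x = f'x{fresh}'
        return ('lam', x, reify(value[1](('nvar', x)), fresh + 1))
    if tag == 'nvar':
        return ('var', value[1])
    if tag == 'napp':
        return ('app', reify(value[1], fresh), reify(value[2], fresh))
    raise ValueError(value)

def normalise(term):
    return reify(evaluate(term, {}))

# Example: (\f. \x. f x) applied to the identity normalises to the identity.
identity = ('lam', 'y', ('var', 'y'))
term = ('app', ('lam', 'f', ('lam', 'x', ('app', ('var', 'f'), ('var', 'x')))), identity)
print(normalise(term))  # ('lam', 'x0', ('var', 'x0'))
```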
This study employs a bibliometric approach to analyse common research themes, high-impact publications and research venues, identify the most recent transformative research, and map the developmental stages of data-driven learning (DDL) since its genesis. A dataset of 126 articles and 3,297 cited references (1994–2021) retrieved from the Web of Science was analysed using CiteSpace 6.1.R2. The analysis uncovered the principal research themes, high-impact publications, and the most recent transformative research in the DDL field. Based on Shneider’s (2009) scientific model and the timeline generated by CiteSpace, three evolutionary stages of DDL were identified: the conceptualising stage (1980s–1998), the maturing stage (1998–2011), and the expansion stage (2011–now), with a fourth stage just emerging. Finally, the analysis discerned potential future research directions, including the implementation of DDL in larger-scale classroom practice and the role of variables in DDL.
In this paper, we identify conditions under which the largest order statistics from resilience-scale models with reduced scale parameters can be compared in the sense of the mean residual life order. As an application of the established results, the exponentiated generalized gamma distribution is examined. For the special case of the scale model, the power-generalized Weibull and half-normal distributions are also investigated.
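For readers unfamiliar with the ordering used here, recall the standard definition (not specific to this paper): the mean residual life of a random variable $X$ at time $t$ is $m_X(t) = \mathbb{E}[X - t \mid X > t]$, and $X$ is said to be smaller than $Y$ in the mean residual life order, written $X \le_{\mathrm{mrl}} Y$, if $m_X(t) \le m_Y(t)$ for all $t$ at which both quantities are defined.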
Knowledge-based systems and their ontologies evolve for different reasons. Ontology evolution is the adaptation of an ontology and the propagation of these changes to dependent artifacts such as queries and other ontologies. Besides identifying basic/simple changes, it is imperative to identify complex changes between two versions of the same ontology to make this adaptation possible. There are many definitions of complex changes applied to ontologies in the literature. However, their specifications vary across works both in formalization and in textual description. Some works also use different terminologies to refer to the same change, while others use the same vocabulary to refer to distinct changes. A unified list of complex changes is therefore lacking. The main goals of this paper are to: (i) present the primary documents that identify complex changes; (ii) provide a critical analysis of the sets of complex changes proposed in the literature and the documents mentioning them; (iii) provide a unified list of complex changes that maps the different sets proposed by several authors; (iv) present a classification for those complex changes; and (v) describe some open directions in the area. The mappings between the complex changes provide a mechanism to relate and compare different proposals. The unified list is thus a reference for the complex changes published in the literature. It may assist the development of tools to identify changes between two versions of the same ontology and enable the adaptation of artifacts that depend on the evolved ontology.
This paper is devoted to the asset allocation problem for a defined contribution (DC) pension plan with a minimum guarantee constraint in a hidden Markov regime-switching economy. Four types of assets are assumed to be available in the financial market: a risk-free asset, a zero-coupon bond, an inflation-indexed bond and a stock. The expected return rate of the stock depends on unobservable economic states, and the change of states is described by a hidden Markov chain. In addition, a Cox–Ingersoll–Ross (CIR) process is used to describe the evolution of the nominal interest rate, and the contribution rate is assumed to be stochastic. The goal of investment management is to minimize a convex risk measure of the terminal wealth in excess of the minimum guarantee. First, we transform the partially observable optimization problem into one with complete information using the Wonham filtering technique and handle the minimum guarantee constraint by constructing auxiliary processes. We then derive the optimal investment strategy via the backward stochastic differential equation (BSDE) approach. Finally, numerical results illustrate the impact of some important parameters on investment behavior.
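For reference, the Cox–Ingersoll–Ross dynamics mentioned above take the standard form $\mathrm{d}r_t = \kappa(\theta - r_t)\,\mathrm{d}t + \sigma\sqrt{r_t}\,\mathrm{d}W_t$, where $\kappa$ is the speed of mean reversion, $\theta$ the long-run level, $\sigma$ the volatility and $W_t$ a standard Brownian motion (the notation here is generic, not necessarily the paper's); under the Feller condition $2\kappa\theta \ge \sigma^2$ the nominal rate stays strictly positive.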
In recent years, the number of studies investigating the effectiveness of using digital games for incidental second language (L2) vocabulary learning has been rapidly increasing; however, there is still a lack of research identifying the factors that affect incidental L2 vocabulary learning. Hence, the current study examined vocabulary-related (word level, exposure frequency, salience) and learner-related (language proficiency, interest, viewing captions) variables and investigated factors affecting EFL students’ incidental vocabulary learning through the use of a vernacular (noneducational) murder mystery game (N = 59). The study employed a quantitative research method and descriptive and inferential statistics (repeated measures ANOVA and multiple linear regression). The results showed that playing the game greatly facilitated L2 vocabulary acquisition and retention. Among the vocabulary-related variables, only salience significantly influenced vocabulary acquisition. Regarding the learner-related variables, the students’ interest and viewing captions were positively related to vocabulary learning, whereas their language proficiency levels were negatively correlated. The students’ conscious attention, in conjunction with the salience of the word, was the main facilitating factor in incidental vocabulary acquisition and retention in the game-enhanced language learning environment. Based on these results, pedagogical implications for incidental vocabulary learning through gameplay are suggested.
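As a rough sketch of the kind of statistics described, the repeated measures ANOVA and multiple linear regression could be set up in Python with statsmodels as below; the file names, column names and model terms are hypothetical and do not come from the study's data.

```python
# Hypothetical sketch of the analyses described above; column names and
# files are illustrative placeholders, not the study's dataset.
import pandas as pd
from statsmodels.stats.anova import AnovaRM
import statsmodels.formula.api as smf

# Repeated measures ANOVA: vocabulary score across test times
# (hypothetical long-format file: one row per learner per test time).
df = pd.read_csv("vocab_scores.csv")
rm = AnovaRM(df, depvar="score", subject="learner", within=["test_time"]).fit()
print(rm)

# Multiple linear regression: which variables predict vocabulary gains?
# (hypothetical file: one row per learner-word pair).
gains = pd.read_csv("vocab_gains.csv")
model = smf.ols(
    "gain ~ word_level + exposure_frequency + salience"
    " + proficiency + interest + viewed_captions",
    data=gains,
).fit()
print(model.summary())
```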
The winter stratospheric polar vortex (SPV) exhibits considerable variability in magnitude and structure, which can result in extreme SPV events. These extremes can subsequently influence weather in the troposphere for weeks to months and thus are an important source of surface predictability. However, the predictability of SPV extreme events is limited to 1–2 weeks in state-of-the-art prediction systems. Longer predictability timescales for the SPV would strongly benefit long-range surface prediction. One potential route to extending predictability timescales is machine learning (ML). However, it is often unclear which predictors and patterns are important for ML models to make a successful prediction. Here we use explainable multiple linear regressions (MLRs) and an explainable artificial neural network (ANN) framework to model SPV variations and to identify one type of extreme SPV event, sudden stratospheric warmings. We employ an NN attribution method to propagate the ANN’s decision-making process backward and uncover feature importance in the predictors. The feature importance of the input is consistent with the known precursors of extreme SPV events. This consistency provides confidence that ANNs can extract reliable and physically meaningful indicators for the prediction of the SPV. In addition, our study shows that a simple MLR model can predict daily SPV variations using sequential feature selection, which provides hints about the connections between the input features and the SPV variations. Our results indicate the potential of explainable ML techniques for predicting stratospheric variability and extreme events, and for searching for potential precursors of these events on extended-range timescales.
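The sequential feature selection mentioned for the MLR can be illustrated with scikit-learn's SequentialFeatureSelector; the synthetic predictors and target below are placeholders, not the authors' data or code.

```python
# Hypothetical sketch of sequential feature selection for an MLR predicting
# daily SPV variations; the feature matrix X and target y are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.feature_selection import SequentialFeatureSelector

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 20))   # e.g. lagged circulation / heat-flux indices
y = X[:, 3] - 0.5 * X[:, 7] + 0.1 * rng.standard_normal(500)  # synthetic SPV index

# Greedy forward selection: repeatedly add the predictor that most improves the fit.
selector = SequentialFeatureSelector(
    LinearRegression(), n_features_to_select=5, direction="forward", cv=5
)
selector.fit(X, y)
print("selected features:", np.flatnonzero(selector.get_support()))

# Fit the MLR on the selected features only.
mlr = LinearRegression().fit(X[:, selector.get_support()], y)
print("R^2:", mlr.score(X[:, selector.get_support()], y))
```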
Given the importance of corpus linguistics in language learning, there have been calls for the integration of corpus training into teacher education programmes. However, the question of what knowledge and skills such training should target remains unclear. Hence, we advance our understanding of the measures and outcomes of teacher corpus training by proposing and testing a five-component theoretical framework for measuring teachers’ perceived corpus literacy (CL) and its subskills: understanding, search, analysis, and the advantages and limitations of corpora. We also hypothesised that teachers’ CL is linked to their intention to use corpora in classroom teaching. Specifically, 183 teachers and student teachers received corpus training to develop their CL and then completed a survey measuring their CL and their intention to use corpora in teaching, using Likert-scale items together with open-ended questions. Confirmatory factor analysis indicated that a hierarchical factor structure for CL with the aforementioned five subfactors best fitted the data. Moreover, structural equation modelling indicated that CL is positively linked to the participants’ intention to integrate corpora into classroom teaching. While all five subskills are important for teachers, greater effort should be made to develop their corpus search and analysis skills, which can be viewed as the “bread and butter” of corpus training.
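A minimal sketch of a hierarchical CFA with a structural path of the kind described, assuming the semopy package and its lavaan-style model syntax; the item names, factor labels and file name are hypothetical, not the authors' specification.

```python
# Hypothetical sketch of a hierarchical CFA plus a structural regression,
# assuming the semopy package; items, factors and data file are illustrative.
import pandas as pd
import semopy

model_desc = """
understanding =~ u1 + u2 + u3
search =~ s1 + s2 + s3
analysis =~ a1 + a2 + a3
advantages =~ ad1 + ad2
limitations =~ l1 + l2
CL =~ understanding + search + analysis + advantages + limitations
intention =~ i1 + i2 + i3
intention ~ CL
"""

data = pd.read_csv("survey_items.csv")   # hypothetical Likert-scale responses
model = semopy.Model(model_desc)
model.fit(data)
print(model.inspect())   # loadings and the CL -> intention path estimate
```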
An increasing number of studies are exploring the benefits of automatic speech recognition (ASR)–based dictation programs for second language (L2) pronunciation learning (e.g. Chen, Inceoglu & Lim, 2020; Liakin, Cardoso & Liakina, 2015; McCrocklin, 2019), but how ASR recognizes accented speech and the nature of the feedback it provides to language learners are still largely under-researched. The current study explores whether the intelligibility of L2 speakers differs when assessed by native (L1) listeners versus ASR technology, and reports on the types of intelligibility issues encountered by the two groups. Twelve L1 listeners of English transcribed 48 isolated words targeting the /ɪ-i/ and /æ-ε/ contrasts and 24 short sentences that four Taiwanese intermediate learners of English had produced using Google’s ASR dictation system. Overall, the results revealed lower intelligibility scores for the word task (ASR: 40.81%, L1 listeners: 38.62%) than for the sentence task (ASR: 75.52%, L1 listeners: 83.88%), and highlighted strong similarities in the error types – and their proportions – identified by ASR and the L1 listeners. However, despite similar recognition scores, correlations indicated that ASR recognition of the L2 speakers’ oral productions mirrored the L1 listeners’ judgments of intelligibility in the word and sentence tasks for only one speaker, with significant positive correlations for one additional speaker in each task. This suggests that the extent to which ASR approaches L1 listeners in recognizing accented speech may depend on the individual speaker and the type of oral speech.
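A toy sketch of how word-level intelligibility scores and ASR–listener correlations of this kind might be computed; the words and item scores below are invented for illustration, and the study's actual scoring rules may differ.

```python
# Toy sketch: percent-correct word scores for ASR vs. L1 listeners and a
# per-speaker correlation; all data below is invented for illustration.
from scipy.stats import pearsonr

def word_score(target_words, transcribed_words):
    """Fraction of target words transcribed exactly (case-insensitive)."""
    hits = sum(t.lower() == r.lower() for t, r in zip(target_words, transcribed_words))
    return hits / len(target_words)

targets = ["ship", "sheep", "bat", "bet"]
asr_output = ["ship", "ship", "bat", "bet"]
listener_output = ["ship", "sheep", "bet", "bet"]

print("ASR score:", word_score(targets, asr_output))            # 0.75
print("Listener score:", word_score(targets, listener_output))  # 0.75

# Correlation between ASR and (averaged) listener item scores for one speaker.
asr_item_scores = [1, 0, 1, 1, 0, 1]
listener_item_scores = [1, 0, 1, 0, 0, 1]
r, p = pearsonr(asr_item_scores, listener_item_scores)
print(f"r = {r:.2f}, p = {p:.3f}")
```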
This chapter explores the changes that AI brings about in corporate law and corporate governance, especially the challenges it poses for corporations. The law scholar Jan Lieder argues that while AI has the potential to enhance the current system, it also carries risks of destabilisation. Although algorithms are already being used in the boardroom, lawmakers should not consider legally recognizing e-persons as directors and managers. Rather, academia should evaluate the effects of AI on the corporate duties of boards and their liabilities. By critically examining three main topics (algorithms as directors, AI in the management board, and AI in the supervisory board), the author argues that companies should be transparent about their AI practices, both to raise awareness and to strengthen overall algorithm governance, and that boards should report on their overall AI strategy and on the ethical guidelines covering the responsibilities, competencies, and protective measures they have established. Additionally, the author argues that a reporting obligation should require boards to address questions of individual rights and to explain how they deal with them.
Properties and behaviours at the systemic aggregate level are derived as statistical averages from probability distributions describing the likelihoods of the various states available to the components.
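Concretely, for a system whose microstates $s$ occur with probabilities $p(s)$, a macroscopic observable $A$ is obtained as the ensemble average $\langle A \rangle = \sum_{s} p(s)\,A(s)$ (or the corresponding integral for a continuous state space); the notation here is generic rather than specific to the chapter.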
This chapter by the law scholar Antje von Ungern-Sternberg focuses on the legality of discriminatory AI, which is increasingly used to assess people (profiling). Intelligent algorithms – free of human prejudices and stereotypes – would prevent discriminatory decisions, or so the story goes. However, many studies show that the use of AI can lead to discriminatory outcomes. From a legal point of view, this raises the question of whether the law as it stands prohibits objectionable forms of differential treatment and detrimental impact. In the legal literature dealing with automated profiling, some authors have suggested that we need a ‘right to reasonable inferences’, i.e. a certain methodology for AI algorithms affecting humans. Von Ungern-Sternberg takes up this idea with respect to discriminatory AI and claims that such a right already exists in antidiscrimination law. She argues that the need to justify differential treatment and detrimental impact implies that profiling methods must correspond to certain standards. It is now a major challenge for lawyers and for data and computer scientists to develop and establish those methodological standards.
We discuss forecasting of the transitions accompanying the intermittent dynamics of complex systems. Co-evolutionary dynamics is particularly challenging.
Assume we are able to obtain the joint probability distribution for a set of time series representing a complex system. Based on this joint distribution, information theory can help to analyse the nature of the interdependence within the system. It is particularly important to be able to distinguish between different types of emergent behaviour, such as synergy or redundancy.
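In standard notation (generic, not specific to this chapter), pairwise interdependence between components $X$ and $Y$ is quantified by the mutual information $I(X;Y) = \sum_{x,y} p(x,y)\log\frac{p(x,y)}{p(x)\,p(y)}$, and one common (though not unique) three-variable measure is the interaction information $I(X;Y;Z) = I(X;Y) - I(X;Y \mid Z)$, which under this sign convention is positive when the variables carry largely redundant information and negative when they carry synergistic information.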
In this chapter, law and technology scholar Jonathan Zittrain warns of the danger of relying on answers for which we have no explanations. There are benefits to utilising solutions discovered through trial and error rather than rigorous proof: though aspirin was discovered in the late 19th century, it was not until the late 20th century that scientists were able to explain how it worked. But doing so accrues ‘intellectual debt’. This intellectual debt is compounding quickly in the realm of AI, especially in the subfield of machine learning. While we know that ML models can produce efficient, effective answers, we don’t always know why the models reach the conclusions they do. This makes it difficult to detect when they are malfunctioning, being manipulated, or producing unreliable results. When several systems interact, the ledger moves further into the red. Society’s movement from basic science towards applied technology that bypasses rigorous investigative research inches us closer to a world in which we are reliant on an oracle AI, one that we trust regardless of our ability to audit its trustworthiness. Zittrain concludes that we must create an intellectual debt ‘balance sheet’ by allowing academics to scrutinise the systems.
In this chapter, Fruzsina Molnár-Gábor and Johanne Giesecke consider specific aspects of how the application of AI-based systems in medical contexts may be guided under international standards. They sketch the relevant international frameworks for the governance of medical AI. Among the frameworks that exist, the World Medical Association’s activity appears particularly promising as a guide for standardisation processes. The organisation has already unified the application of medical expertise to a certain extent worldwide, and its guidance is anchored in the rules of various legal systems. It might provide the basis for a certain level of conformity of acceptance and implementation of new guidelines within national rules and regulations, such as those on new technology applications within the AI field. In order to develop a draft declaration, the authors then sketch out the potential applications of AI and its effects on the doctor–patient relationship in terms of information, consent, diagnosis, treatment, aftercare, and education. Finally, they spell out an assessment of how further activities of the WMA in this field might affect national rules, using the example of Germany.
The next chapters are dedicated to mathematical approaches of central relevance to the analysis and modelling of emergent behaviour amongst many interacting components.