Donald Trump, the Arab Spring, Brexit: digital media have provided political actors and citizens with new tools to engage in politics. These tools are now routinely used by activists, candidates, non-governmental organizations, and parties to inform, mobilize, and persuade people. But what are the effects of this retooling of politics? Do digital media empower the powerless or are they breaking democracy? Have these new tools and practices fundamentally changed politics or is their impact just a matter of degree? This clear-eyed guide steps back from hyperbolic hopes and fears to offer a balanced account of what aspects of politics are being shaped by digital media and what remains unchanged. The authors discuss data-driven politics, the flow and reach of political information, the effects of communication interventions through digital tools, their use by citizens in coordinating political action, and what their impact is on political organizations and on democracy at large.
Neural machine translation (NMT) has recently shown promising results on publicly available benchmark datasets and is being rapidly adopted in various production systems. However, it requires a high-quality, large-scale parallel corpus, and building such a corpus demands time, money, and professional expertise, so a sufficiently large corpus is not always available. Hence, many existing large-scale parallel corpora are limited to specific languages and domains. In this paper, we propose an effective approach to improve an NMT system in a low-resource scenario without using any additional data. Our approach augments the original training data with parallel phrases extracted from that same training data using a statistical machine translation (SMT) system. Our proposed approach is based on the gated recurrent unit (GRU) and transformer networks. We choose the Hindi–English and Hindi–Bengali datasets for the Health, Tourism, and Judicial (only for Hindi–English) domains. We train our NMT models for 10 translation directions, each using only 5–23k parallel sentences. Experiments show improvements in the range of 1.38–15.36 BLEU (BiLingual Evaluation Understudy) points over the baseline systems. Experiments also show that transformer models perform better than GRU models in low-resource scenarios. In addition, we find that our proposed method outperforms SMT (which is known to work better than neural models in low-resource scenarios) for some translation directions. To further show the effectiveness of our proposed model, we also apply our approach to another interesting NMT task, old-to-modern English translation, using a tiny parallel corpus of only 2.7k sentences. For this task, we use publicly available old–modern English text that is approximately 1,000 years old. Evaluation for this task shows significant improvement over the baseline NMT.
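The core augmentation idea can be sketched in a few lines: take phrase pairs produced by an SMT system and append them to the original sentence pairs as extra training examples. The sketch below is illustrative, not the authors' implementation; it assumes the phrase pairs arrive in a Moses-style phrase table (fields separated by `|||`), and the function names and the `min_len` filter are assumptions of this sketch.

```python
def read_phrase_table(path, min_len=2):
    """Yield (source, target) phrase pairs from a Moses-style phrase table.

    Each line looks like: "src phrase ||| tgt phrase ||| scores ...".
    Phrases shorter than min_len source tokens are skipped.
    """
    with open(path, encoding="utf-8") as f:
        for line in f:
            fields = line.split(" ||| ")
            if len(fields) >= 2:
                src, tgt = fields[0].strip(), fields[1].strip()
                if len(src.split()) >= min_len:
                    yield src, tgt

def augment(sentence_pairs, phrase_pairs):
    """Append extracted phrase pairs to the original parallel sentences."""
    return list(sentence_pairs) + list(phrase_pairs)
```

The augmented list can then be fed to any NMT training pipeline in place of the original corpus; no external data is involved, since the phrases come from the training set itself.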
In real-world applications, text classification models often suffer from a lack of accurately labelled documents. The available labelled documents may also be out of domain, so the trained model may not perform well in the target domain. In this work, we mitigate the data problem of text classification using a two-stage approach. First, we mine representative keywords from a noisy out-of-domain data set using statistical methods. We then apply a dataless classification method to learn from the automatically selected keywords and unlabelled in-domain data. The proposed approach outperformed various supervised learning and dataless classification baselines by a large margin. We evaluated different keyword selection methods intrinsically and extrinsically by measuring their impact on dataless classification accuracy. Last but not least, we conducted an in-depth analysis of the classifier's behaviour and explained why the proposed dataless classification method outperformed its supervised learning counterparts.
Various vision-based tactile sensors have been developed for robotic perception in recent years. In this paper, a novel soft robotic finger with an embedded visual sensor is proposed for perception. It consists of a colored soft inner chamber, an outer structure, and an endoscope camera. A bending perception algorithm based on image preprocessing and deep learning is proposed. The boundaries of the color regions and the positions of the marker dots are extracted from the inner-chamber image and the label image, respectively. A convolutional neural network with multi-task learning is then trained to obtain the bending states of the finger. Finally, experiments verify the effectiveness of the proposed method.
First-order expressive capabilities allow Bayesian networks (BNs) to model problem domains where the number of entities, their attributes, and their relationships can vary significantly between model instantiations. First-order BNs are well-suited for capturing knowledge representation dependencies, but literature on design patterns specific to first-order BNs is sparse and scattered. To identify useful patterns, we investigated the range of dependency models between combinations of random variables (RVs) that represent unary attributes, functional relationships, and binary predicate relationships. We found eight major patterns, grouped into three categories, that cover a significant number of first-order BN situations. Selection behavior occurs in six patterns, where a relationship/attribute identifies which entities in a second relationship/attribute are applicable. In other cases, certain kinds of embedded dependencies based on semantic meaning are exploited. A significant contribution of our patterns is that they describe the various behaviors used to establish each RV's local probability distribution. Taken together, the patterns form a modeling framework that provides significant insight into first-order expressive BNs and can reduce the effort of developing such models. To the best of our knowledge, there are no comprehensive published accounts of such patterns.
Consider two widely used definitions of equality. That of Leibniz: one value equals another if any predicate that holds of the first holds of the second. And that of Martin-Löf: the type identifying one value with another is occupied if the two values are identical. The former dates back several centuries, while the latter is widely used in proof systems such as Agda and Coq. Here we show that the two definitions are isomorphic: we can convert any proof of Leibniz equality to one of Martin-Löf identity and vice versa, and each conversion followed by the other is the identity. One direction of the isomorphism depends crucially on values of the type corresponding to Leibniz equality satisfying functional extensionality and Reynolds’ notion of parametricity. The existence of the conversions is widely known (meaning that if one can prove one equality then one can prove the other), but that the two conversions form an isomorphism (internally) in the presence of parametricity and functional extensionality is, we believe, new. Our result is a special case of a more general relation that holds between inductive families and their Church encodings. Our proofs are given inside type theory, rather than meta-theoretically. Our paper is a literate Agda script.
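The paper itself is a literate Agda script; as a rough illustration of the two definitions and the conversions between them, here is a minimal sketch in Lean 4 rather than Agda. The definition and conversion names (`Leibniz`, `ofEq`, `toEq`) are this sketch's own; the abstract's claim is the stronger, parametricity-dependent fact that the two conversions form an isomorphism, which this sketch does not attempt to show.

```lean
-- Leibniz equality: a equals b if every predicate holding of a holds of b.
def Leibniz {α : Type} (a b : α) : Prop :=
  ∀ P : α → Prop, P a → P b

-- Martin-Löf identity implies Leibniz equality: rewrite along h.
def ofEq {α : Type} {a b : α} (h : a = b) : Leibniz a b :=
  fun _P pa => h ▸ pa

-- Leibniz equality implies Martin-Löf identity:
-- instantiate the predicate with P x := (a = x) and feed it rfl : a = a.
def toEq {α : Type} {a b : α} (h : Leibniz a b) : a = b :=
  h (fun x => a = x) rfl
```

The existence of `ofEq` and `toEq` corresponds to the "widely known" direction the abstract mentions; proving that their composites are the identity is where functional extensionality and parametricity enter.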
Trust between agents in multi-agent systems (MASs) is critical to encourage high levels of cooperation. Existing methods to assess trust and reputation use direct and indirect past experiences about an agent to estimate their future performance; however, these will not always be representative if agents change their behaviour over time.
Real-world distributed networks such as online marketplaces, P2P networks, pervasive computing, and the Smart Grid can be viewed as MASs. Dynamic agent behaviour in such MASs can arise from seasonal changes, cheaters, supply chain faults, network traffic, and many other causes. However, existing trust and reputation models use limited techniques, such as forgetting factors and sliding windows, to account for dynamic behaviour.
In this paper, we propose Reacting and Predicting in Trust and Reputation (RaPTaR), a method to extend existing trust and reputation models to give agents the ability to monitor the output of interactions with a group of agents over time to identify any likely changes in behaviour and adapt accordingly. Additionally, RaPTaR can provide an a priori estimate of trust when there is little or no interaction data (either because an agent is new or because a detected behaviour change suggests recent past experiences are no longer representative). Our results show that RaPTaR has improved performance compared to existing trust and reputation methods when dynamic behaviour causes the ranking of the best agents to interact with to change.
AI and people do not compete on a level-playing field. Self-driving vehicles may be safer than human drivers, but laws often penalize such technology. People may provide superior customer service, but businesses are automating to reduce their taxes. AI may innovate more effectively, but an antiquated legal framework constrains inventive AI. In The Reasonable Robot, Ryan Abbott argues that the law should not discriminate between AI and human behavior and proposes a new legal principle that will ultimately improve human well-being. This work should be read by anyone interested in the rapidly evolving relationship between AI and the law.
Narrow reading has the potential to reduce vocabulary load and to provide rich opportunities for developing collocation knowledge, but these benefits rely on narrow reading increasing lexical repetition within a text. Hence, interest in narrow reading has been limited by the relatively small lexical effect of narrowing reading by topic (Nation, 2013). Nevertheless, research in data-driven learning and teaching and language corpora has reported positively on learners using concordances in a manner comparable to narrow reading. However, the potential for concordances to provide an increased lexical-repetition effect has not been assessed. This study bridges this gap by exploring the degree of lexical repetition available in concordances and identifies corpus composition as a key predictor of lexical repetition. The study uses the standardised type-token ratio (sTTR) to analyse concordances extracted from corpora at three different levels of homogeneity/heterogeneity. The results show large, reliable variations in lexical repetition resulting from variation in corpus homogeneity/heterogeneity, and so identify concordance-based narrow reading as a possible means of overcoming the limitations of traditional narrow reading by topic. The results are discussed with reference to pedagogical implications for language learners, teachers, and researchers.
With the increasing recognition of the pedagogical applications of corpus linguistics, there has been a growing interest in developing teachers' corpus literacy to popularize the use of corpora in language education. This longitudinal study investigated Arab Gulf EFL student teachers' immediate and long-term responses to corpus literacy instruction. After teaching a corpus literacy component to two classes of student teachers in a graduate computer-assisted language learning course they attended, the author collected focus group data about their views on this instruction and their own expected future uses of corpora in language learning, teaching, and research. Two years later, a group of these student teachers (n = 19) responded to a follow-up questionnaire exploring their beliefs about corpus literacy integration and their multiple uses of corpora. The student teachers reported very positive immediate and long-term perceptions of corpus literacy instruction, but it was found that such instruction had not brought about all the desired changes in their long-term uses of online corpora as a linguistic and pedagogical resource, or in their attitudes towards doing corpus-based TESOL research. However, it is expected that the popularization benefits gained from corpus literacy integration could lead to better future developments in using corpora for language education and research purposes in the target context.
Where did you last encounter a piece of political information? Chances are, you clicked on a link a friend sent you on a messaging app, read the preview to a piece on the Facebook wall of a colleague, or followed a retweet posted by an acquaintance on Twitter. Depending on your predilections for the ways of the ancients, you might also have picked up a printed newspaper or watched the news on a television set.
Two episodes from 2011 and 2016 bookend public expectations regarding the role of digital media in politics. In the wake of the protests and demonstrations in North Africa and the Middle East that we discussed in Chapter 5, the dominant public narrative portrayed social media as the keystone that enabled the opposition to coordinate a challenge to otherwise seemingly unwavering autocracies. Only social media offered disgruntled citizens the possibility of taking their discontent to the streets. Decentralized networks on top of real-time communication systems enabled activists to level the playing field against authoritarian regimes that previously had taken full advantage of their control over the official media and showed an unfettered capacity to repress any sign of dissent. It does not matter whether we see digital media as a causal factor; no account of the events in Egypt would be complete without a reference to the #jan25 hashtag on Twitter or the “We are all Khaled Said” site on Facebook (see Chapter 5).
It is June 2015 and the famous American reality-TV personality Donald Trump announces his bid for the Republican nomination in the 2016 race for the US presidency. Journalists, Republican donors, and prospective voters now have to decide if they should take his bid seriously. The history of American presidential campaigns is littered with celebrities and third-party candidates who tried to capitalize on their fame or success by entering politics. While some, like Ronald Reagan, Arnold Schwarzenegger, or Michael Bloomberg, proved to be successful, most celebrity candidacies turned out to be mere blips in the history of American politics. How should observers decide whether Donald Trump's bid fell into the first or the second category? The Trump campaign portrayed their candidate as being in touch with the long-forgotten people lacking a voice in US politics (Green 2017), a group that the campaign of the Democratic frontrunner Hillary Clinton helpfully labeled "deplorables" (Chozick 2016). To assess the validity of Trump's claims, journalists turned to social media to gauge how well his message resonated with the public.