Typically, automatic Question Answering (QA) approaches use the question in its entirety in the search for potential answers. We argue that decomposing complex factoid questions into separate facts about their answers is beneficial to QA, since an answer candidate supported by multiple independent facts is more likely to be the correct one. We broadly categorize decomposable questions as parallel or nested, and we present a novel question decomposition framework for enhancing the ability of single-shot QA systems to answer complex factoid questions. Essential to the framework are components for decomposition recognition, question rewriting, and candidate answer synthesis and re-ranking. We discuss the interplay among these, with particular emphasis on decomposition recognition, a process which, we argue, can be sufficiently informed by lexico-syntactic features alone. We validate our approach to decomposition by implementing the framework on top of IBM Watson™, a state-of-the-art QA system, and showing a statistically significant improvement in its accuracy.
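The framework is described above only at the architecture level; the following minimal sketch (Python) illustrates just the parallel-decomposition intuition, namely that a candidate answer supported by several independently answered sub-questions should be ranked higher. The `answer_subquestion` backend and the scoring scheme are illustrative assumptions, not part of the Watson-based implementation.

```python
# Hypothetical sketch of parallel decomposition: each sub-question ("fact")
# is answered independently by a single-shot QA backend, and candidates
# supported by several facts are favored during re-ranking.
from collections import defaultdict
from typing import Callable

def rerank_with_decomposition(
    subquestions: list[str],
    answer_subquestion: Callable[[str], dict[str, float]],  # assumed backend
) -> list[tuple[str, float]]:
    """Combine candidate scores from independently answered sub-questions."""
    combined: dict[str, float] = defaultdict(float)
    support: dict[str, int] = defaultdict(int)
    for sq in subquestions:
        for candidate, score in answer_subquestion(sq).items():
            combined[candidate] += score
            support[candidate] += 1
    # Rank primarily by how many independent facts support a candidate,
    # breaking ties by the summed confidence scores.
    return sorted(combined.items(),
                  key=lambda kv: (support[kv[0]], kv[1]),
                  reverse=True)
```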
Machine learning techniques have been used to extract instances of semantic relations with diverse features based on linguistic knowledge, such as tokens, lemmas, PoS-tags, or dependency paths. However, little work has examined which of these features performs best for relation extraction, and even less has addressed languages other than English. In this paper, features representing different levels of linguistic knowledge are systematically evaluated for biographical relation extraction. The effectiveness of each feature type was measured by training several supervised classifiers that differ only in the type of linguistic knowledge used to define their features. The experiments show that basic linguistic knowledge (lemmas and their combination into bigrams) performs better than more complex features, such as those based on syntactic analysis. Furthermore, feature combinations drawing on different levels of analysis are proposed in order (i) to avoid feature overlap and (ii) to evaluate the use of computationally inexpensive and widespread tools such as tokenization and lemmatization. The paper also describes two new freely available corpora for biographical relation extraction in Portuguese and Spanish, built by means of a distant-supervision strategy. Experiments were performed on these corpora with five semantic relations and two languages.
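As a rough illustration of the finding that lemma unigrams and bigrams alone are competitive features, the sketch below trains a supervised classifier on pre-lemmatized sentences using scikit-learn. The toy examples, labels, and model choice are assumptions for illustration only, not the paper's experimental setup.

```python
# Sketch of a lemma-based relation classifier (illustrative data and labels).
# Input sentences are assumed to be already lemmatized, space-separated.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_lemmas = [
    "obama nascer em honolulu",     # hypothetical lemmatized training examples
    "cervantes morrer em madrid",
]
train_labels = ["place_of_birth", "place_of_death"]

model = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),   # lemma unigrams and bigrams only
    LogisticRegression(max_iter=1000),
)
model.fit(train_lemmas, train_labels)
print(model.predict(["borges nascer em buenos aires"]))
```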
This paper proposes a new method for semantic document analysis: densification, which identifies and ranks Wikipedia pages relevant to a given document. Although the task is similar to established tasks such as wikification and entity linking, the method does not aim for strict disambiguation of named entity mentions. Instead, densification uses existing links to rank additional articles that are relevant to the document, a form of explicit semantic indexing that enables higher-level semantic retrieval procedures beneficial to a wide range of NLP applications. Because no gold standard for densification evaluation exists, a study is carried out to investigate the level of agreement achievable by humans, which calls into question the feasibility of creating a fully annotated data set. A semi-supervised approach is therefore employed to develop a two-stage densification system: first filtering unlikely candidate links and then ranking the remaining links. In a first evaluation experiment, Wikipedia articles are used to automatically estimate performance in terms of recall. Results show that the proposed densification approach outperforms several wikification systems. A second experiment measures the impact of integrating the links predicted by the densification system into a semantic question answering (QA) system that relies on Wikipedia links to answer complex questions. Densification enables the QA system to find twice as many additional answers as a state-of-the-art wikification system.
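The two-stage design (filter, then rank) can be sketched as follows; the link-prior threshold and the `relatedness` measure are stand-ins for whatever components an actual densification system would use, not the system evaluated above.

```python
# Illustrative two-stage densification: filter unlikely candidates, then
# rank survivors by relatedness to the document's existing Wikipedia links.
from typing import Callable

def densify(existing_links: set[str],
            candidates: dict[str, float],          # candidate article -> link prior
            relatedness: Callable[[str, str], float],
            min_prior: float = 0.1,
            top_k: int = 10) -> list[str]:
    # Stage 1: drop candidates whose link prior falls below a threshold.
    filtered = [c for c, prior in candidates.items() if prior >= min_prior]

    # Stage 2: rank remaining candidates by average relatedness to links
    # already present in the document.
    def score(article: str) -> float:
        if not existing_links:
            return 0.0
        return sum(relatedness(article, e) for e in existing_links) / len(existing_links)

    return sorted(filtered, key=score, reverse=True)[:top_k]
```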
Modern Standard Arabic texts are typically written without diacritical markings. The diacritics are important for clarifying the sense and meaning of words, and their absence can lead to ambiguity even for native speakers. Native speakers often disambiguate the intended meaning from context; however, many Arabic applications, such as machine translation, text-to-speech, and information retrieval, are vulnerable to the lack of diacritics. The process of automatically restoring diacritical marks is called diacritization or diacritic restoration. In this paper we discuss the properties of the Arabic language and the issues related to the lack of diacritical marking, followed by a survey of recent algorithms developed to solve the diacritization problem. We also look at future directions for researchers working in this area.
Aliases play an important role in online environments by facilitating anonymity, but they can also be used to hide the identity of cybercriminals. Previous studies have investigated this alias matching problem in an attempt to determine whether two aliases are shared by one author, which can assist with identifying users. Those studies create their training data by randomly splitting the documents associated with an alias into two sub-aliases. Models built this way regularly achieve over 90% accuracy in recovering the linkage between these ‘random sub-aliases’. In this paper, random sub-alias generation is shown to be what enables these high accuracies, and thus it does not adequately model the real-world problem. In contrast, creating sub-aliases using topic-based splitting drastically reduces the accuracy of all authorship methods tested. We then present a methodology that can be applied to non-topic-controlled datasets to produce topic-based sub-aliases that are more difficult to match. Finally, we present an experimental comparison of many authorship methods to determine which match aliases best under these conditions, finding that local n-gram methods outperform the others.
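For concreteness, a ‘local n-gram’ style comparison between two aliases' writing samples might look like the following character n-gram cosine similarity; this is a generic sketch, not any specific method from the comparison above.

```python
# Generic character n-gram similarity between two aliases' writing samples.
from collections import Counter

def char_ngrams(text: str, n: int = 4) -> Counter:
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def ngram_cosine(doc_a: str, doc_b: str, n: int = 4) -> float:
    """Cosine similarity between character n-gram frequency profiles."""
    a, b = char_ngrams(doc_a, n), char_ngrams(doc_b, n)
    dot = sum(a[g] * b[g] for g in set(a) & set(b))
    norm_a = sum(v * v for v in a.values()) ** 0.5
    norm_b = sum(v * v for v in b.values()) ** 0.5
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Two aliases whose samples score above a tuned threshold would be
# flagged as potentially written by the same author.
```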
The idea of interfacing minds with machines has long captured the human imagination. Recent advances in neuroscience and engineering are making this a reality, opening the door to restoration and augmentation of human physical and mental capabilities. Medical applications such as cochlear implants for the deaf and neurally controlled prosthetic limbs for the paralyzed are becoming almost commonplace. Brain-computer interfaces (BCIs) are also increasingly being used in security, lie detection, alertness monitoring, telepresence, gaming, education, art, and human augmentation. This introduction to the field is designed as a textbook for upper-level undergraduate and first-year graduate courses in neural engineering or brain-computer interfacing for students from a wide range of disciplines. It can also be used for self-study and as a reference by neuroscientists, computer scientists, engineers, and medical practitioners. Key features include questions and exercises in each chapter and a supporting website.
In this chapter, we explore the range of applications for BCI technology. We have already touched upon some medical applications such as restoration of lost motor and sensory function when we examined invasive and noninvasive BCIs in previous chapters. Here we briefly review these applications before exploring applications in other areas such as entertainment, robotic control, gaming, security, and art.
Medical Applications
The field of brain-computer interfacing originated with the goal of helping the paralyzed and the disabled. It is therefore not surprising that some of the major applications of BCIs to date have been in medical technology, particularly restoring sensory and motor function.
Sensory Restoration
One of the most widely used commercial BCIs is the cochlear implant for the deaf, discussed in Section 10.1.1. The cochlear implant is an example of a BCI for sensory restoration, as are retinal implants being developed for the blind (Section 10.1.2).
There has not been much research on two other possible types of purely sensory BCIs, namely, BCIs for somatosensation and BCIs for olfaction and taste. In the case of the former, the need for a BCI is minimized because it is often possible to restore tactile sensation through skin grafting. However, as we saw in Chapter 11, there is considerable interest in somatosensory stimulation as a component of bidirectional BCIs for allowing paralyzed individuals and amputees to, for example, sense objects being grasped or touched by prosthetic devices.
In this chapter, we review the signal-processing methods applied to recorded brain signals in BCIs, for tasks ranging from extracting spikes from the raw signals recorded by invasive electrodes to extracting features for classification. For many of the techniques, we use EEG as the noninvasive recording modality to illustrate the concepts involved, although the techniques could also be applied to signals from other sources, such as ECoG and MEG.
Spike Sorting
Invasive approaches to brain-computer interfacing typically rely on recording spikes from an array of microelectrodes. The goal of signal-processing methods for such an input signal is to reliably isolate and extract the spikes being emitted by a single neuron per recording electrode. This procedure is usually called spike sorting.
The signal recorded by an extracellular electrode implanted in the brain is typically a mixture of signals from several neighboring neurons, with spikes from closer neurons producing larger amplitude deflections in the recorded signal. This signal is often referred to as multiunit hash or neural hash (Figure 4.1A). Although hash could also potentially be used as input to brain-computer interfaces, the more traditional form of input is spikes from individual neurons. Spike sorting methods allow spikes from a single neuron to be extracted from hash.
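As a concrete, highly simplified illustration of the spike-sorting pipeline described above (detect threshold crossings, cut waveform snippets, cluster them into putative single units), the following Python sketch uses a median-based noise estimate, PCA, and k-means. Real spike sorters add band-pass filtering, spike alignment, and artifact rejection; the thresholds and parameters here are assumptions for illustration.

```python
# Simplified spike-sorting sketch: threshold detection, snippet extraction,
# PCA, and k-means clustering. Parameters are illustrative only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def sort_spikes(trace: np.ndarray, fs: float, n_units: int = 2):
    # 1. Estimate noise level and detect negative threshold crossings.
    noise = np.median(np.abs(trace)) / 0.6745
    crossings = np.where(trace < -4 * noise)[0]
    if crossings.size == 0:
        return np.array([], dtype=int), np.array([], dtype=int)
    # Keep only the first sample of each contiguous crossing event.
    spike_times = crossings[np.insert(np.diff(crossings) > 1, 0, True)]

    # 2. Cut a 2 ms waveform snippet around each detected spike.
    half = int(0.001 * fs)
    times, snippets = [], []
    for t in spike_times:
        if t - half >= 0 and t + half <= len(trace):
            times.append(t)
            snippets.append(trace[t - half:t + half])
    snippets = np.asarray(snippets)
    if len(snippets) < max(3, n_units):
        return np.asarray(times), np.zeros(len(snippets), dtype=int)

    # 3. Project snippets onto a few principal components and cluster,
    #    treating each cluster as one putative neuron.
    features = PCA(n_components=3).fit_transform(snippets)
    labels = KMeans(n_clusters=n_units, n_init=10, random_state=0).fit_predict(features)
    return np.asarray(times), labels
```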
Our brains evolved to control a complex biological device: our body. As we are finding out today, many millennia of evolutionary tinkering have made the brain a surprisingly versatile and adaptive system, to the extent that it can learn to control devices that are radically different from our body. Brain-computer interfacing, the subject of this book, is a new interdisciplinary field that seeks to explore this idea by leveraging recent advances in neuroscience, signal processing, machine learning, and information technology.
The idea of brains controlling devices other than biological bodies has long been a staple of science-fiction novels and Hollywood movies. However, this idea is fast becoming a reality: in the past decade, rats have been trained to control the delivery of a reward to their mouths, monkeys have moved robotic arms, and humans have controlled cursors and robots, all directly through brain activity.
What aspects of neuroscience research have made these advances possible? What are the techniques in computing and machine learning that are allowing brains to control machines? What is the current state of the art in brain-computer interfaces (BCIs)? What limitations still need to be overcome to make BCIs more commonplace and useful for day-to-day use? What are the ethical, moral, and societal implications of BCIs? These are some of the questions that this book addresses.
We have thus far focused on BCIs that record signals from the brain and transform those signals to a control signal for an external device. In this chapter, we reverse the direction of control and discuss BCIs that can be used to stimulate and control specific brain circuits. Some of these BCIs have made the transition from the lab to the clinic and are currently being used by human subjects, such as cochlear implants and deep brain stimulators (DBS), while others are still in experimental stages. We divide these BCIs broadly into two classes: BCIs for sensory restoration and BCIs for motor restoration. We also consider the possibility of sensory augmentation.
Restoring Hearing: Cochlear Implants
One of the most successful BCI devices to date is the cochlear implant for restoring or enabling hearing in the deaf. The implant is a good example of how one can convert knowledge of information processing in a neural system, in this case the cochlea, into building a working BCI that can benefit people.
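At a signal-processing level, the core idea can be sketched as a bank of bandpass filters whose per-channel envelopes drive the stimulation levels of individual electrodes, mimicking the cochlea's frequency-to-place mapping. The band edges, filter order, and compression function below are illustrative assumptions, not a clinical processing strategy.

```python
# Illustrative cochlear-implant-style processing: bandpass filter bank,
# per-band envelope extraction, and simple amplitude compression.
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def electrode_envelopes(audio: np.ndarray, fs: float,
                        bands=((300, 700), (700, 1500),
                               (1500, 3000), (3000, 6000))):
    """Return one slowly varying envelope per simulated electrode channel."""
    envelopes = []
    for low, high in bands:
        sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, audio)
        envelope = np.abs(hilbert(band))      # amplitude envelope of the band
        envelopes.append(np.log1p(envelope))  # crude loudness compression
    # Each row would modulate the stimulation current of one electrode.
    return np.stack(envelopes)
```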
A holy grail of BCI research is to be able to control complex devices using noninvasive recordings of brain signals at high spatial and temporal resolution. Current noninvasive recording techniques capture changes in blood flow or fluctuations in electric/magnetic fields caused by the activity of large populations of neurons, but we are still far from a recording technique that can capture neural activity at the level of spikes noninvasively. In the absence of such a recording technique, researchers have focused on noninvasive techniques such as EEG, MEG, fMRI, and fNIR, and studied how the large-scale population-level brain signals recorded by these techniques can be used for BCI.
Electroencephalographic (EEG) BCIs
The technique of EEG involves recording electrical signals from the scalp (Section 3.1.2). The idea of using EEG to build a BCI was first suggested by Vidal (1973), but progress was limited until the 1990s when the advent of fast and cheap processors sparked a surge of interest in this area, leading to the development of a variety of EEG-based BCI techniques.
“Bionic vision: Amazing new eye chip helps two blind Brits to see again”
(Mirror, May 3, 2012)
“Paralyzed, moving a robot with their minds”
(New York Times, May 16, 2012)
“Stephen Hawking trials device that reads his mind”
(New Scientist, July 12, 2012)
These headlines, from just a few weeks of news stories in 2012, illustrate the growing fascination of the media and the public with the idea of interfacing minds with machines. What is not clear amid all this hype is: (a) What exactly can and cannot be achieved with current brain-computer interfaces (BCIs) (sometimes also called brain-machine interfaces or BMIs)? (b) What techniques and advances in neuroscience and computing are making these BCIs possible? (c) What are the available types of BCIs? and (d) What are their applications and ethical implications? The goal of this book is to answer these questions and provide the reader with a working knowledge of BCIs and BCI techniques.
Among the most important aspects of brain-computer interfacing are ethical issues – issues pertaining to the medical use of BCIs, the use of BCIs for human augmentation and other applications, and the potential for their misuse. Some of these issues fall under the rubric of neuroethics, but other issues are specific to technological aspects of BCIs.
BCI conferences and workshops sometimes include sessions on ethics, and there have been several articles discussing ethical aspects of BCIs and neural interfaces (e.g., Clausen, 2009; Haselager et al., 2009; Tamburrini, 2009; Salvini et al., 2008; Warwick, 2003). However, there are currently no official regulations or guidelines on BCI use, aside from conventional laws regarding medical and legal ethics. As with other technologies in the past, one can expect that as BCIs become more prevalent in society, laws and ethics pertaining to BCI use will likely be codified by medical and governmental regulatory agencies. In the meantime, this chapter surveys the variety of ethical issues and dilemmas surrounding BCI research and BCI use.