Recent developments in artificial intelligence, especially neural network and deep learning technology, have led to rapidly improving performance in voice assistants such as Siri and Alexa. Over the next few years, these capabilities will continue to improve and become increasingly personalised. Today's voice assistants will evolve into virtual personal assistants firmly embedded within our everyday lives. Told from the perspective of a fictitious personal assistant called Cyba, this book provides an accessible but detailed overview of how a conversational voice assistant works, especially how it understands spoken language, manages conversations, answers questions and generates responses. Cyba explains through examples and diagrams the neural network technology underlying speech recognition and synthesis, natural language understanding, knowledge representation, conversation management, language translation and chatbot technology. Cyba also explores the implications of this rapidly evolving technology for security, privacy and bias, and gives a glimpse of future developments. Cyba's website can be found at HeyCyba.com.
This paper presents a corpus and experiments to mine possession relations from text. Specifically, we target alienable and control possessions and assign temporal anchors indicating when a possession relation holds between the possessor and possessee. We work with intra-sentential possessors and possessees that satisfy lexical and syntactic constraints. We experiment with traditional classifiers and neural networks to automate the task. In addition, we analyze the factors that help determine possession existence and possession type, as well as the common errors made by the best-performing classifiers. Experimental results show that determining possession existence relies on the entire sentence, whereas determining possession type primarily relies on the verb, possessor and possessee.
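The abstract does not include code, but a minimal sketch may help illustrate the kind of feature-based baseline such work typically compares against neural models. The sketch below trains a hypothetical possession-type classifier using only the verb, possessor and possessee as features, since the abstract reports that these carry most of the signal for that subtask. The toy instances, label set and feature names are illustrative assumptions, not the authors' actual corpus or setup.

```python
# Minimal sketch (not the authors' code): a feature-based classifier for
# possession type that looks only at the verb, possessor and possessee.
# The toy examples and the "alienable" vs "control" labels are assumptions.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each instance: lexical features for one intra-sentential possessor/possessee pair.
train_instances = [
    {"verb": "bought",   "possessor": "Mary", "possessee": "car"},
    {"verb": "owns",     "possessor": "John", "possessee": "house"},
    {"verb": "borrowed", "possessor": "Ana",  "possessee": "book"},
    {"verb": "rented",   "possessor": "Tom",  "possessee": "apartment"},
]
train_labels = ["alienable", "alienable", "control", "control"]

# DictVectorizer turns the symbolic features into one-hot vectors;
# logistic regression stands in for the "traditional classifiers" mentioned above.
model = make_pipeline(DictVectorizer(sparse=True),
                      LogisticRegression(max_iter=1000))
model.fit(train_instances, train_labels)

# Predict the possession type for a new possessor/possessee pair.
test_instance = {"verb": "leased", "possessor": "Sue", "possessee": "office"}
print(model.predict([test_instance])[0])
```

A sentence-level model for possession existence would, per the abstract's finding, need features (or an encoder) covering the whole sentence rather than just this triple.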
Deaf people communicate naturally using visual-spatial languages, called sign languages (SLs). Although SLs are legally recognized in many countries, the barriers Deaf people face in accessing information remain. As a result, they have difficulty exercising their citizenship and accessing information in SLs, which often leads to delays in language and knowledge acquisition. Several research efforts have addressed these problems through machine translation from spoken languages to sign languages. However, existing machine translation platforms have limitations, especially of a syntactic and lexical nature. This work therefore aims to develop a mechanism for machine translation into Libras, the Brazilian Sign Language, with syntactic-semantic adequacy. It consists of an automatic translation component for Libras based on syntactic-semantic translation rules, together with a formal language for describing those rules. As a proof of concept, we created a specific grammar for Libras translation that explores these aspects and integrated it into the VLibras Suite, a service for machine translation of digital content from Brazilian Portuguese (BP) to Libras. We performed several tests with this modified version of VLibras to measure how well users comprehend the output generated by the new translation mechanism. In both the computational experiments and the tests with Deaf and hearing users, the proposed approach improved on the results of the current VLibras version.
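To give a concrete sense of what rule-based spoken-to-sign translation can look like, here is a small Python sketch of a pipeline that applies hand-written syntactic-semantic rules to a Brazilian Portuguese sentence and emits Libras-style glosses. The rule format, the tiny lexicon and the output glosses are illustrative assumptions for this sketch only; they do not reproduce the formal rule description language or the grammar used in the VLibras Suite.

```python
# Toy sketch of rule-based BP-to-Libras gloss translation (not VLibras code).
# Rules and lexicon below are illustrative assumptions only.
from dataclasses import dataclass
from typing import Callable, List

Token = str

@dataclass
class Rule:
    name: str
    apply: Callable[[List[Token]], List[Token]]

def drop_articles(tokens: List[Token]) -> List[Token]:
    # Example lexical rule: remove Portuguese articles, a common
    # simplification when producing sign-language glosses (assumption).
    articles = {"o", "a", "os", "as", "um", "uma"}
    return [t for t in tokens if t.lower() not in articles]

def to_citation_form(tokens: List[Token]) -> List[Token]:
    # Example morphological rule: map a few inflected words to citation-form
    # glosses via a tiny hand-written lexicon (assumption, not the real lexicon).
    lexicon = {"comprou": "COMPRAR", "casa": "CASA", "maria": "MARIA"}
    return [lexicon.get(t.lower(), t.upper()) for t in tokens]

# The pipeline applies each rule in order, mirroring how a formal rule
# description language could be compiled into an ordered set of transformations.
PIPELINE = [Rule("drop_articles", drop_articles),
            Rule("to_citation_form", to_citation_form)]

def translate(sentence: str) -> str:
    tokens = sentence.split()
    for rule in PIPELINE:
        tokens = rule.apply(tokens)
    return " ".join(tokens)

print(translate("Maria comprou a casa"))  # e.g. "MARIA COMPRAR CASA"
```

In a full system such gloss sequences would then drive an avatar's signing; the point of the sketch is only the separation between a declarative rule set and the engine that applies it.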