
7 - Systems of Gesture Coding and Annotation

from Part II - Ways of Approaching Gesture Analysis

Published online by Cambridge University Press:  01 May 2024

Alan Cienki
Affiliation:
Vrije Universiteit, Amsterdam

Summary

As there are many different methods of linguistic analysis, there are many different ways of approaching gesture analysis. This chapter gives a selective overview of the current state of the art in gesture coding and annotation systems. It opens with a discussion of the difference between coding and annotation, before considering the aims and challenges of gesture coding and annotation. The chapter then reviews existing systems and reflects on the interrelation between subject matter, research question, and coding and annotation system. It emphasizes that coding and annotation systems are always shaped by the particular theoretical framework in which they are situated. Accordingly, just as with the analysis of language, a theory-neutral analysis of gesture is not possible. Rather, theoretical assumptions determine the subjects, aspects, and levels of analysis, and thus make themselves visible in annotation systems. This is illustrated with exemplary research topics in gesture studies: language, language development, cognition, interaction, and human–machine interaction. The account of individual systems does not aim at exhaustive discussion, but focuses on the general logic by which each system answers its particular research question. Differences between systems addressing the same research topic (e.g. language) as well as differences across research topics (e.g. language vs. interaction) are explored. The chapter closes with some considerations of possible future developments.
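The distinction between coding (assigning categories from a fixed scheme) and time-aligned annotation can be made concrete with a small sketch. The following is a generic, hypothetical data model, not the format of any particular system discussed in the chapter: it assumes only the widely shared logic of pairing named tiers (e.g. a tier for gesture phases, another for handshapes) with time-stamped intervals and category labels.

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    tier: str      # named level of analysis, e.g. "gesture_phase"
    start_ms: int  # interval onset in the video, in milliseconds
    end_ms: int    # interval offset in the video, in milliseconds
    value: str     # the coded category or free-text label

# A toy record: one gesture stroke described on two parallel tiers.
annotations = [
    Annotation("gesture_phase", 1200, 1650, "stroke"),
    Annotation("handshape", 1200, 1650, "open palm"),
]

def tiers(records):
    """Return the set of tier names used in a list of annotations."""
    return {a.tier for a in records}

print(sorted(tiers(annotations)))
```

The choice of tiers and the inventory of permitted values on each tier are exactly where theoretical assumptions enter: a system oriented toward language may code gesture phases and handshapes, while an interaction-oriented system may instead privilege tiers for gaze, turn position, or participant orientation.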

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2024


