The puzzle of diversity: humans are the only animal species with a vast range of communication codes or languages. The possibility of communication without a shared language sheds light on the underlying capacity for communicative interaction – the ‘interaction engine’. The diversity of languages contrasts with the uniformity of the ways in which they are used in informal interaction around the world. The key properties of this interaction system are outlined.
There are clues to the origin of language in the interaction engine: (a) there is some continuity between the turn-taking behaviour of humans and other primates; (b) early hominins were most likely gesture communicators like present-day great apes; (c) there is early development of turn-taking in infants; (d) languages richly draw on spatial concepts, suggesting a gestural origin since gesture is a spatial communication system; (e) human mind reading may have an origin in a generalization of maternal empathy; (f) language syntax draws on interactive organization.
Introduces the central puzzle of the diversity of languages, made possible by an underlying ability to learn and use them – largely constituted by a system for communicative interaction – the ‘interaction engine’.
Communicative interaction forms the core of human experience. In this fascinating book, Levinson, one of the world's leading scholars in the field, explores how human communicative interaction is structured, the demands it puts on our cognitive processing, and how its system evolved out of continuities with other primate systems. It celebrates the role of the 'interaction engine' that drives our social interaction, not only in human life, but also in the evolution of our species – showing how exchanges such as words, glances, laughter and face-to-face encounters bring us our greatest and most difficult experiences, and have come to define what it means to be human. It draws extensively on the author's fieldwork with speakers across multiple cultures and communities, and was inspired by his own experiences during the Covid lockdown, when humans were starved of the very social interaction that shapes our lives. This title is also available as open access on Cambridge Core.
This Element offers a primer for the study of meaning in a Construction Grammar approach. It reviews the main principles of meaning shared across constructionist frameworks, including its ubiquity in grammatical structure, its usage-based formation, and its nature as the output of cognitive representations. It also reviews the importance given to meaning in construction-based explanations of sentence composition, innovative language use, and language change. Paradoxically, the Element shows that there is no systematic framework delineating the rich structure of constructional meaning, which has led to theoretical disagreements and inconsistencies. It therefore proposes an operational model of meaning for practitioners of Construction Grammar. It details the characteristics of a complex interface of semantic, pragmatic, and social meaning, and shows how this framework sheds light on recent theoretical issues. The Element concludes by considering ways in which this framework can be used for future descriptive and theoretical research questions.
A rigorous, yet accessible and entertaining introduction to the field of logic, this book provides students with a unique insight into logic as a living field and how it connects to other fields of inquiry including philosophy, computer science, linguistics, and mathematics. With no background knowledge needed, students are introduced to a critical examination of 'classical logic', and the technical issues and paradoxes that may be encountered. Each chapter includes key pedagogical features such as marginal notes, definitions, chapter summaries and practice exercises. Arguments are backed up by authentic examples of logic within natural languages and everyday life. The flexible chapter structure allows instructors to tailor their teaching for either a one-semester or two-semester course, according to their students' needs and knowledge. Online resources include a companion website featuring further readings, class handouts, LaTeX resources, along with an Online Proof Evaluator allowing students to get real-time feedback.
Addresses the role of structure in semantic analysis from the perspective of theories of meaning using rich theories of types. Also relates the theory of frames to these type theories as introducing, to some extent, similar structure into semantic analysis. The authors show how a structured approach is necessary to appropriately analyse phenomena in areas as diverse as lexical semantics and the semantics of attitudinal constructions referring to psychological states. In particular, these are: polysemy taken together with copredication, and attitudes such as belief and knowledge. The authors argue that the very same structure required to define a rich system of types enables them to adequately analyse both of these phenomena, thus revealing similarities in two otherwise apparently unrelated topics in semantics. They also argue that such theories facilitate a semantic theory oriented towards a psychological and contextually situated view of meaning. This title is also available as open access on Cambridge Core.
They were great for settling questions of logical truth, validity, equivalence, and so on, but became unwieldy in an exponential hurry as the number of relevant atomic sentences increased. They also foundered on the rocks of ’s non-truth-functional constructions.
In this chapter, a number of important notions surrounding logical truth, logical equivalence, contradiction, and logical consequence will be explored and clarified. A shocking fact about classical logic will be encountered and examined: every argument with contradictory premises is deductively valid. Contradictions entail everything. This is not a feature of every formal logical system …
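The "explosion" claim above – that an argument with contradictory premises is deductively valid no matter what the conclusion is – can be checked by brute force over truth valuations. The following is a minimal sketch (the function names and the lambda encoding of sentences are illustrative, not from the text):

```python
# Sketch: a brute-force validity checker over truth valuations.
from itertools import product

def is_valid(premises, conclusion, atoms):
    """An argument is valid iff no valuation makes every premise
    true while making the conclusion false."""
    for values in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False  # found a counterexample valuation
    return True

# Contradictory premises: P and not-P. Conclusion: an unrelated atomic Q.
explosion = is_valid([lambda v: v["P"], lambda v: not v["P"]],
                     lambda v: v["Q"], ["P", "Q"])
print(explosion)  # True: no valuation satisfies both premises

# Contrast: P alone does not entail Q.
print(is_valid([lambda v: v["P"]], lambda v: v["Q"], ["P", "Q"]))  # False
```

The contradictory premises are never jointly true on any row, so no row can serve as a counterexample – which is exactly why, in classical logic, anything follows from a contradiction.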
If the purported counterexamples to modus ponens (that appear in §7.7 of the previous chapter) are genuine, then modus ponens isn’t the only argument form that is in trouble – modus tollens and hypothetical syllogism look like they’re on no better footing. But are the counterexamples genuine? It would be easier to answer this question if we had a better grip on the semantics of indicative conditionals.
Heaven forbid we have eight. If we have eight relevant atomic sentences, we’re going to need 256 rows for our truth table. If we double that to sixteen atomic sentences, we are all of a sudden at 65,536 rows. The problem is that the number of rows we need grows exponentially with every added atomic sentence. I was ready to tap out at the 256 rows for eight atomic sentences. Preparing 65,536 rows for sixteen atomic sentences is not going to happen. An argument with thirty-two distinct atomic sentences would require 4,294,967,296 rows. Even if I could write out about 120 characters per minute (which I can’t), it would take me almost sixty-eight years of solid writing to fill out such a truth table. Factor in a bit of time for sleep, and that’s more than one whole lifetime just to fill out a truth table for an argument with thirty-two atomic sentences. To say the least, the truth table method doesn’t scale up very well.
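The arithmetic above can be verified in a few lines. This sketch assumes, as the estimate in the text implicitly does, a rate of 120 characters per minute with one character written per row:

```python
# Sketch: truth-table size doubles with each added atomic sentence.
def truth_table_rows(n_atomics: int) -> int:
    """A truth table for n atomic sentences needs 2**n rows."""
    return 2 ** n_atomics

for n in (8, 16, 32):
    print(n, "atomics ->", truth_table_rows(n), "rows")

# Writing-time estimate: 120 characters per minute, one character per row.
rows = truth_table_rows(32)
minutes = rows / 120
years = minutes / 60 / 24 / 365
print(round(years, 1), "years of nonstop writing")  # roughly 68 years
```

Doubling the atomic sentences squares the number of rows, which is why hand-written truth tables become hopeless so quickly.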
It would be a scandal of philosophy and of human reasoning in general if we were unable to cast helpful light on the logic of conditionals. Conditionals loom large in both everyday and theoretical reasoning. They figure in the tight, rigorous proofs of mathematics, the subtle theoretical reasoning of quantum physics, the strategies of financial planners and generals, and even the loose contingency planning of vacationers and of educators trying to cobble together a plan to teach during a pandemic.
Carl Linnaeus dubbed his own species Homo sapiens, meaning something like “wise (or knowledgeable) man.” This is a bit overly self-congratulatory, but it does focus attention on a feature that sets humans apart. Humans inquire about the world and about themselves, and – sometimes, anyway – thereby acquire knowledge, wisdom, and understanding that surpasses that of even the most clever ostriches, squirrels, and mushrooms. Humans engage in inquiry about everything under the sun, and a good many things above it as well. Humans will even engage in inquiry about things that have no spatiotemporal relationship to the sun at all – things like the number 7, the orthocenter of a triangle, and the intricacies of the fictional world imagined in Frank Herbert’s Dune. At some critical stage in evolutionary history, humans even began to turn their inquiring gaze back on inquiry itself.