In this chapter, we discuss the notion and practice of “making an argument” in generative linguistics, taking examples from phonology, morphology, and syntax. Argumentation is central in linguistics, yet there are few explicit and thorough accounts of what it is to make an argument (though see Soames and Perlmutter 1979; Aarts 2001; Green and Morgan 2001; Kertész and Rákosi 2012 for argumentation in syntax). Our goal here is not to present a general philosophical discussion of argumentation, but rather to present the concept as it is typically practiced in linguistics. Through this, we will provide an overview of how to construct a linguistic analysis and support it. The chapter is structured as follows: we first outline how an argument is typically formulated, in abstract terms, based on the notion of supporting a hypothesis more generally. We then discuss various case studies of arguing for hypotheses of different degrees of abstraction, ranging from empirical arguments to theoretical arguments. We conclude with some discussion of writing style in argumentation.
Making an argument in linguistics
Making an argument is a creative exercise: the goal is to develop and motivate a hypothesis that provides insight into a set of facts. There are a few linguists whose work has come to be associated with a strong emphasis on argumentation. David Perlmutter is one, and he writes in the introduction to Perlmutter 2010 (xx):
I have tried to emphasize four things in my work in linguistics: explicit arguments for one hypothesis over others, extending the range of languages and phenomena for which linguistic theory is to be held accountable, making explicit the ways languages differ and the ways they are alike, and explanation in linguistics. All four were already present in my 1968 doctoral dissertation (Perlmutter 1971), especially in the chapter arguing for surface structure constraints on the order of clitic pronouns in Spanish and French (Perlmutter 1970b)
Variation analysis takes as its object of study differences in linguistic form with no apparent change in meaning or function. While other methods of linguistic analysis try to eliminate variation by finding structural or semantic contexts that disambiguate the choice of linguistic form, variation analysis seeks to understand variation by assessing which dimensions of the linguistic and/or social context correlate with the occurrence of a particular variant form. Linguistic variation is analyzed within different subfields of linguistics, such as sociolinguistics, historical linguistics, corpus linguistics, first and second language acquisition, and phonetics, each of which addresses slightly different research questions. The primary focus of this chapter is the analysis of linguistic variation within sociolinguistics, though the methods discussed here apply in principle to the other subfields. I begin by defining the central construct of variation analysis, the linguistic variable and its identification at the levels of phonetics/phonology and grammar, before proceeding through the steps of variation analysis: circumscribing the variable context, formulating and testing hypotheses through coding tokens for different independent variables, statistical testing, and interpreting results. I include some comments on the relationship between variation analysis and linguistic theory.
Identifying linguistic variables
The analysis of variation begins by noting that two or more linguistic forms are “different ways of saying the same thing,” a phrase that will serve as a good provisional definition for the central construct of variation analysis, the linguistic variable. In this case, “the same thing” refers to a single underlying form (in phonology) or a single meaning or function (in morphosyntax), and the “different ways” refers to the variant forms (or variants). For example, some Spanish speakers sometimes produce a word like avión ‘airplane’ with a final alveolar [n] and sometimes with a final velar [ŋ]. Thus, in Spanish there is a linguistic variable (n) with two variants, [n] and [ŋ]. Similarly, when referring to the future, English speakers sometimes use a form of the present tense, as in (1a, b), sometimes a modal, as in (1c), and sometimes a periphrastic construction, as in (1d). These variant forms have an underlying discourse function in common, a reference to future time.
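The coding and tabulation step of variation analysis can be sketched in a few lines. The following Python sketch uses invented tokens for the Spanish variable (n), each coded for a hypothetical independent variable (the following segment); the token counts and the "pause favours [ŋ]" pattern are illustrative assumptions, not findings reported here.

```python
from collections import Counter

# Hypothetical coded tokens for the Spanish variable (n): each token is
# (variant, following segment). The data are invented for illustration.
tokens = [
    ("n", "vowel"), ("ŋ", "pause"), ("n", "vowel"), ("ŋ", "pause"),
    ("n", "consonant"), ("ŋ", "vowel"), ("ŋ", "pause"), ("n", "vowel"),
]

# Overall variant distribution.
overall = Counter(v for v, _ in tokens)
print(overall)

# Cross-tabulate variant by an independent variable (following context)
# to test a hypothesis such as "velar [ŋ] is favoured before a pause".
by_context = Counter(tokens)
for ctx in ("vowel", "consonant", "pause"):
    n_alv = by_context[("n", ctx)]
    n_vel = by_context[("ŋ", ctx)]
    total = n_alv + n_vel
    rate = n_vel / total if total else float("nan")
    print(ctx, rate)
```

In a real study the coded tokens would number in the hundreds or thousands, and the cross-tabulation would feed a statistical model rather than raw proportions.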
There are two exciting facets of language description: the fieldwork experience, which is necessary for data collection, and the process of discovery and analysis that leads to the description of the target language. In order for our record of language structures to be as accurate as possible, data collection is best conducted using rigorous methodology. The goal of language description is often not to capture just one speaker’s internal grammar but to represent prevalent patterns for a community of speakers. In that sense, grammatical description is “fake” in that no one speaker will instantiate all the structures described in the grammar; at the same time, however, the grammar is “real” because the facts described therein are accepted by most speakers as accurately representing their language. The main product of descriptive fieldwork, whether a grammar or a targeted description of particular parts of a grammar, must therefore include data from a variety of speakers, favoring the most frequent patterns and noting common variations based on social or contextual factors.
Speakers and fieldworkers
A typical fieldwork project requires the participation of several speakers, in part due to differing talents and interests. The primary consultants, the speakers who participate on a regular basis in a project, will be those who are excited by language study. Some speakers show an amazing amount of linguistic sophistication even without linguistic training; for example, even if a speaker is unable to explain word class membership using terms such as “verb” and “noun,” she might still identify the lexical category of a word by providing paradigms or synonyms (Dixon 1992). Some speakers show initiative by bringing their own analyses to field sessions or by asking community members for their opinions on constructions discussed with the field linguist. Some speakers may be good storytellers, others able to repeat things slowly and exactly to aid with transcription (a surprisingly difficult task; see Chapter 12). Some speakers may be ideal for recording conversations and narratives but may be too prescriptive to help with translation (they may be more interested in “correcting” data than commenting on it).
To write Faetar, you have to use the Italian spelling system, because it’s the only system the speakers know.
When you write Faetar, you should use French orthography because that will indicate the Gallic roots of the language.
Of course, the only option is to use the International Phonetic Alphabet to write Faetar, so that linguists around the world are able to understand the details of our unique language.
Introduction
The reconstructed vignette above, based on actual conversations with speakers of Faetar, an endangered language spoken in two small villages in southern Italy (Nagy 2000, 2011a), illustrates some of the many uses that transcription has. Researchers (and the transcribers they hire) may not even be aware of all the potential downstream uses of their transcriptions. The most common understanding of the purpose of transcription in linguistics is contained in the third statement. However, a linguist’s decision to transcribe in a standardized orthography or in the International Phonetic Alphabet (IPA) can influence later uses of the text. Deviations from the traditions of one’s field can even be perceived as ideologically charged. As Kendall (2008: 337) puts it,
the act of transcription [. . .] is often undertaken as a purely methodological activity, as if it were theory neutral. Each decision that is made while transcribing influences and constrains the resulting possible readings and analyses (Ochs 1979; Mishler 1991; Bucholtz 2000; Edwards 2001). Decisions as seemingly straightforward as how to lay out the text, to those more nuanced – like how much non-verbal information to include and how to encode minutiae such as pause length and utterance overlap – have far-reaching effects on the utility of a transcript and the directions in which the transcript may lead analysts.
A common objective of geophysics is to infer the properties of the Earth's interior from observational data. Geoscientists often use seismic data to build a model of the subsurface, a simplified representation of key aspects of the real world. The validity of each model depends on its consistency with observations. All observable datasets constitute a data space, and all possible models constitute a model space. Data fitting and model inversion are two complementary approaches in geophysics to relating the data space to the model space. Data fitting uses forward modeling to search for models that fit the observed data well and satisfy our scientific intuition. Model inversion uses our scientific intuition to set up rules about how the models should behave, and then determines the model variations that best fit the available data. The usefulness of data fitting and model inversion is evident in many of the applications illustrated in this chapter.
The basic theories of seismic modeling and inverse theory are reviewed here. Data fitting is introduced in the first two sections via several seismic forward modeling methods and a simple example of regression. The basic theory of inverting a system of linear equations is given in the next three sections, in conjunction with the tomographic velocity analysis in Section 8.4. The least squares method, as a classic linear inversion, is widely applicable in geophysical data analysis and beyond. Some mathematical insights into the inversion of linear equations are illustrated via several common ways of matrix decomposition. The common causes of non-uniqueness in geophysical inversion include insufficient constraining power of the data, the non-linear relationship between data and model, and the dependency of that relationship on the solution. Several practical inverse solutions discussed here include the Backus–Gilbert method and the LSQR algorithm. In practice, seismic inversion is synonymous with the inverse imaging in Section 8.4. The inverse approach has the advantage of objectively determining the values of model properties based on the given model parameterization and data. For some applications, such as inverse filtering and tomographic velocity analysis, inversion is the preferred method because of its objectivity in obtaining solutions.
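As a concrete illustration of the least squares method and of the role matrix decomposition plays in diagnosing non-uniqueness, the following sketch solves a toy overdetermined linear system d = Gm. The operator G and true model m_true are invented for illustration, not taken from the chapter.

```python
import numpy as np

# Hypothetical overdetermined linear system d = G m + noise
# (a toy "data fitting" problem; G and m_true are illustrative).
rng = np.random.default_rng(0)
G = rng.normal(size=(20, 3))          # forward operator: 20 data, 3 model params
m_true = np.array([1.0, -2.0, 0.5])
d = G @ m_true + 0.01 * rng.normal(size=20)

# Classic least squares solution via the normal equations:
#   m_hat = (G^T G)^{-1} G^T d
m_hat = np.linalg.solve(G.T @ G, G.T @ d)
print(m_hat)   # close to m_true

# The same estimate via the singular value decomposition, the tool most
# often used to diagnose non-uniqueness: tiny singular values flag model
# directions that the data barely constrain.
U, s, Vt = np.linalg.svd(G, full_matrices=False)
m_svd = Vt.T @ ((U.T @ d) / s)
print(s)       # all well away from zero here, so the solution is unique
```

With a rank-deficient or ill-conditioned G, one or more entries of s would approach zero, and damping or truncating those terms (as the practical algorithms in this chapter do) becomes necessary.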
Linguists, across the subdisciplines of the field, use sound recordings for a great many purposes – as data, stimuli, and a medium for recording notes. For example, phoneticians often record speech under controlled laboratory conditions to infer information about the production and comprehension of speech in subsequent acoustic and perception studies, respectively. In addition to analyzing acoustic data, phoneticians may employ articulatory methods to observe more directly how speech is produced. By contrast, sociolinguists often record unscripted speech outside of a university environment, such as a speaker’s home. Sometimes these recordings themselves constitute the data (e.g., for sociophonetic analysis), while other times they may be transcribed at varying levels of detail (see Chapter 12), with the resultant text serving as the data (e.g., for the analysis of lexical or morphosyntactic variation and discourse analysis). In a similar vein, some language acquisitionists capture naturally occurring conversation in adult–child interactions. The research purposes of these recordings may not be determined until some time after the recordings are made, after a longitudinal corpus for a given child has been collected. It is likewise common for language documentarians to make extensive speech recordings in the field. Some field recordings simply serve as a record of elicitation sessions (e.g., when the researcher is ascertaining phrase structure), while others may be used for acoustic analysis (e.g., if phonetic elements of the language are the object of study). In the latter case, articulatory methods can be employed to more accurately describe phonetic properties of speech, such as a sound’s place of articulation or details of the airstream mechanism. 
As discussed in Chapter 8, sound recordings can also be used as stimuli in perception studies, where listeners may be asked first to listen to a brief audio recording and then to identify whether a particular string of sounds is a real word (Chapter 8); to evaluate how educated the speaker of a brief utterance sounds (Chapter 6); or to rate how accented an L2 speaker sounds (Chapter 7). Linguists may also make use of archival recordings to investigate questions of language change. Proficiency in making sound recordings is thus an increasingly useful skill for linguists of most persuasions.
Shortly after I arrived in Israel to begin fieldwork on language among Israeli lesbian and gay activists, I went out for a drink with three of my male informants. As we waited for the bartender to bring us our beers, Roee, one of the men I was with, leaned over to me and, indicating the bartender with his head, said wai, eize birz hu, naxon? (‘Wow, he’s a birz, isn’t he?’). Though I could tell that he was commenting on the bartender, I had to admit that I did not understand the word birz, and I asked Roee to translate it for me. Roee began to laugh, and then explained that the word birz meant ‘handsome man’ in an Israeli gay slang variety called oxtchit. My interest was immediately piqued. I had never heard of an Israeli gay slang variety before, and I was eager to know where the variety came from and how it was used. The men I was with that evening explained to me that oxtchit was a variety predominantly used by a specific kind of effeminate gay man in Israel, called oxtchot, though it was also sometimes used by other gay men as a “secret” variety.
From the previous chapters the reader should have become acquainted with many of the basic skills of seismic data analysis. Any practice of seismic data processing uses some of these skills to solve particular problems, along with special tools to address more focused issues. Everyone in this field will encounter special issues in their career, so knowing the common features of some special topics is very useful. This chapter reviews several special processing topics, both to show the basic data processing skills we have learned in use and to expose the reader to some widely encountered processing problems. Each of these topics deals with issues associated with a particular problem or property. The first section introduces the issues involved in four aspects of seismic data acquisition: monitoring of source signals, including fracking-induced micro-seismicity; monitoring of background noise; seismic illumination analysis; and preservation of low-frequency signals. The second section is on suppression of multiple reflections, which serves the many conventional seismic imaging methods that use only primary reflections. After defining common types of multiples, three classes of multiple suppression methods are introduced. The first is based on the differential moveout between primaries and multiples; the second exploits the periodicity of the multiples; and the third removes all surface-related multiple energy via pre-stack inversion. The next section reviews the basics of seismic anisotropy, a property of the medium that causes the speed of seismic waves to vary as a function of the traversing angle. Information on seismic anisotropy helps to improve the fidelity of seismic imagery in fault imaging and to detect the dominant orientations of fractures.
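The second class of suppression methods, which exploits the periodicity of multiples, can be illustrated with a toy trace. This Python sketch (spike amplitudes, period, and trace length are all invented for illustration) shows how the autocorrelation of a trace reveals the multiple period that a predictive deconvolution operator would target.

```python
import numpy as np

# Toy trace: a water-layer multiple repeats the primary at a fixed
# delay with decaying, alternating polarity (all values assumed).
nt, period = 200, 40
trace = np.zeros(nt)
for k in range(4):                    # primary + three multiples
    trace[20 + k * period] = (-0.7) ** k

# The autocorrelation of the trace has a strong negative peak at the
# multiple period, which is the prediction lag a predictive
# deconvolution operator is designed around.
ac = np.correlate(trace, trace, mode="full")[nt - 1:]   # lags 0..nt-1
lag = np.argmin(ac[1:]) + 1
print(lag)   # the multiple period
```

On field data the autocorrelation is computed within windows, since the water depth, and hence the multiple period, varies along the line.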
The fourth section briefly covers multi-component seismic data processing, with an analysis of its pros and cons and with illustrations of wavefield separation, converted wave processing, and VSP data processing. The final section introduces the processing aspects of seismic attributes, including a variety of localized attributes, geometric attributes, and texture attributes, plus related processing in seismic-to-well ties and impedance inversion. To become proficient in these and other topics in seismic data processing, the reader must learn the fundamentals of seismic wave and ray theory, understand the common issues in seismic data acquisition, processing, and interpretation, and spend time processing and utilizing field seismic data.
Seismic data analysis transforms seismic records measured at the surface or along wellbores into imagery, estimates, and models of subsurface structures and properties. It covers the topics of digital seismic data processing, seismic migration, and subsurface model building that are useful in both exploration geophysics and solid Earth geophysics. Although several excellent books have covered these topics from the viewpoint of either exploration geophysics or solid Earth geophysics, I was motivated to write this book to treat common seismic analysis methods for both branches of geophysics. This book is intended as an introductory text on common and practical methods in seismic data analysis.
Most of the material for this book originated as lecture notes for graduate courses in geophysics at the University of Houston and Texas Tech University. Students in these courses usually have a variety of backgrounds: many are recent graduates in geophysics, geology, engineering, computer science, or other physical science disciplines, and others are employees in the petroleum industry. They intend to apply seismic data analysis skills to problems in exploration geophysics, solid Earth geophysics, and engineering and environmental sciences. Although they may have access to some commercial or free seismic processing software, most of these students have not gone through a systematic review of common approaches to seismic data analysis and the practical limitations of each method. Hence, an effort has been made in this book to emphasize the concepts and practicality of common seismic analysis methods using tutorial and case examples or schematic plots.
As the most widely used subsurface imaging method in petroleum exploration, seismic migration attempts to place seismic reflection data at their correct spatial or temporal reflector positions. Like the echo sounding technique used to fathom the water bottom from a boat, seismic migration maps the subsurface reflectors in two steps. Step one back-projects the seismic data measured at the surface downwards, using the wave equation and a velocity model, to produce an extrapolated wavefield that is a function of space and time. Step two applies an imaging condition to capture the positions of the subsurface reflectors from the extrapolated wavefield. These two steps are demonstrated by the three common seismic migration methods introduced in this chapter. First, Kirchhoff migration is the most intuitive and flexible migration method, and in practice it uses the ray theory approximation. Second, frequency domain migration is theoretically rigorous and made efficient by the Fourier transform, although it is less effective in the presence of strong lateral velocity variations. Like these two methods, most migrations simplify reality by assuming that the input data contain only primary reflections; hence some pre-processing is necessary to suppress the other seismic waves recorded. Third, reverse time migration is a full wave migration method capable of using both primary reflections and other waves, such as refractions and multiple reflections.
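The logic of migration can be made concrete with a minimal constant-velocity, zero-offset Kirchhoff (diffraction-stack) sketch. All numbers below (velocity, grids, a single point diffractor) are invented for illustration; a production Kirchhoff migration adds amplitude weighting, anti-aliasing, and a realistic velocity model.

```python
import numpy as np

# Toy zero-offset section over one point diffractor (all values assumed).
v = 2000.0                       # constant velocity, m/s
dx, dt, dz = 10.0, 0.002, 10.0
nx, nt, nz = 101, 351, 81
x = np.arange(nx) * dx
x0, z0 = 500.0, 300.0            # diffractor position

data = np.zeros((nt, nx))
for ix in range(nx):
    # Two-way zero-offset travel time along the diffraction hyperbola.
    it = int(round(2.0 * np.hypot(x[ix] - x0, z0) / v / dt))
    if it < nt:
        data[it, ix] = 1.0

# Diffraction-stack migration: for each image point, sum the recorded
# amplitudes along that point's diffraction travel-time curve. Energy
# stacks coherently only at the true diffractor location.
z = np.arange(nz) * dz
image = np.zeros((nz, nx))
for iz in range(1, nz):
    for ixi in range(nx):
        it = np.rint(2.0 * np.hypot(x - x[ixi], z[iz]) / v / dt).astype(int)
        ok = it < nt
        image[iz, ixi] = data[it[ok], np.flatnonzero(ok)].sum()

iz_pk, ix_pk = np.unravel_index(np.argmax(image), image.shape)
print(z[iz_pk], x[ix_pk])        # peaks at the diffractor position
```

The summation over travel-time curves is the ray-theoretical shortcut mentioned above; reverse time migration would instead extrapolate the full wavefield with the wave equation before applying the imaging condition.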
Fundamentally, a velocity model is required by all seismic migration methods. A time migration uses layer-cake models without lateral velocity variations. In contrast, a depth migration can handle a significant level of lateral velocity variation in the velocity model. In the case of gently dipping reflectors, a post-stack migration may be sufficient, using post-NMO stacked traces to approximate zero-offset traces. In the presence of steeply dipping reflectors, a pre-stack migration is usually more suitable but requires far more computational resources. Depending on the complexity of the target structures, we may choose from a suite of migration methods, from the crude but fast post-stack time migration, which is not sensitive to velocity variations, to the expensive pre-stack depth migration, which handles steep reflector dips and strong lateral variations in the velocity model.
The practice of seismic data processing with digital records has been progressing for over six decades. Today, all seismic processing projects start with a set of scientific and business objectives in mind, which often require specific processing flows; usually each flow takes pre-processed data rather than the raw data as input. Pre-processing includes all the preparation steps through which major but relatively simple problems in the input data are cleaned up so that the main processing flow can function more effectively. While the pre-processing steps may be standard and even apparently routine, each step can be critical to the final result.
This chapter starts with illustrations of the most common pre-processing tasks. One important aspect of learning seismic data processing is to appreciate the physical processes that the wavelet from a seismic source has experienced, so that we may approximately undo or redo some of those processes in computers. For this reason, the filtering expression of seismic data processing is introduced. As a modern example, the processing of a multi-component dataset from a vertical seismic profile is shown. The chapter then examines several simple but common processing operators, including normal moveout, stacking, convolution, correlation, and the Radon transform. Often the reason for using these techniques is to suppress the most common types of noise. The reader should try to envision the physical processes that each operator attempts to emulate. As an example of preliminary analysis, the effects of surface topography and near-surface velocity variations are analyzed using the concept of near-surface statics.
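The filtering expression of a seismic trace, the source wavelet convolved with the Earth's reflectivity, can be illustrated in a few lines. The wavelet frequency, sample interval, and reflector positions below are invented for illustration.

```python
import numpy as np

# Convolutional (filtering) model of a seismic trace (toy numbers).
dt = 0.002                                   # sample interval, s
t = np.arange(0, 0.2, dt)                    # 100 samples

# Zero-phase Ricker wavelet with a 25 Hz peak frequency, 51 samples.
f = 25.0
tw = np.arange(-25, 26) * dt
ricker = (1 - 2 * (np.pi * f * tw) ** 2) * np.exp(-(np.pi * f * tw) ** 2)

# Sparse reflectivity: two reflectors with opposite polarity.
refl = np.zeros(t.size)
refl[30], refl[70] = 1.0, -0.5

# The recorded trace is the reflectivity series filtered by
# (convolved with) the source wavelet.
trace = np.convolve(refl, ricker, mode="same")

# Cross-correlating with the known wavelet compresses each event back
# toward a spike at its reflector time (the matched-filter idea).
corr = np.correlate(trace, ricker, mode="same")
print(np.argmax(corr))                       # 30, the stronger reflector
```

Much of deconvolution amounts to approximately undoing this filtering so that the output resembles the spiky reflectivity rather than the smeared trace.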
the importance of knowledge of space in the early years – it enables children to explore the world around them and develop the vocabulary to label and talk about it
the role of shape in children’s understanding of space, and the related concepts of location and transformation
the seven spatial perception skills.
In addition to the ability to work with numbers and reason logically, children need opportunities to develop their spatial skills and understanding of shape, location and transformation, and to engage in geometric or spatial reasoning. This form of knowledge is referred to broadly as spatial sense – an important objective of mathematics education in the 21st century. Some aspects of spatial sense are evident in the drawings from seven-year-old Laura’s journal (see below). Compared to her age peers, Laura has a well-developed spatial sense. However, her drawings suggest that she needs to develop an understanding of the relative size of adults and children. Hence, encouraging children to express themselves through drawing provides an insight into their spatial sense. Comparing a child’s drawings over time is one method of monitoring the progression of their spatial sense.
ways to represent quantity using physical items, words and numerals
activities to encourage children to count, compare and order numbers
ways to foster conceptual understanding of place value in the base 10 number system
how mathematical operations on a quantity increase or decrease its value.
Number knowledge is fundamental to understanding mathematical situations in the everyday world. To develop this understanding, young children need a range of experiences, such as counting, recognising and writing numerals, composing and decomposing numbers, and simple operations. Children’s understandings can be supported by the use of physical materials and tools that aid thinking. For example, the calculator is a useful tool when children are learning to count by twos and fives because children can check what they are saying with what the visual display of the calculator is showing. It is also useful for checking answers when children have completed an operation. Throughout their school years, children are expected to acquire increasingly complex understandings of number. Hence, it is important for children to master simple concepts and gradually build their proficiency with number.
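For readers who want the base 10 structure spelled out, here is a minimal sketch (the function name and example number are our own) that decomposes a numeral into the value carried by each digit, mirroring the composing and decomposing activities described above.

```python
def place_values(n: int) -> list[int]:
    """Value carried by each digit in base 10, ones place first."""
    values, place = [], 1
    while n > 0:
        values.append((n % 10) * place)
        n //= 10
        place *= 10
    return values

# 347 is 7 ones + 4 tens + 3 hundreds.
print(place_values(347))        # [7, 40, 300]
```

Summing the list recovers the original number, which is exactly the insight children rehearse when they break a quantity into hundreds, tens and ones.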