The role of the court in human rights adjudication is quite different from the role of the court in an ordinary judicial review of administrative action. In human rights adjudication, the court is concerned with whether the human rights of the claimant have in fact been infringed, not with whether the administrative decision-maker properly took them into account.
There is a distinction to be made between the framework for judicial review…and the substantive question of whether particular human rights should be taken into account by administrative decision makers. There is a further question of the distinction between judicial review and human rights review – human rights review is concerned with the substantive question of whether rights have been infringed, while judicial review focuses on decision making procedure.
These two statements – one by a Law Lord (now a Justice of the Supreme Court of the United Kingdom) and the other by members of the Administrative Review Council in Australia – contrast judicial review (largely focused on procedure) and human rights review (concerned with the substance of rights protection). When viewed in isolation such comments leave the reader with the lingering impression that judicial review is different from, and perhaps inferior to, human rights review. After all, procedural safeguards never sound quite as desirable as substantive rights. However, it is clear that the authors of both statements do not hold this view – Baroness Hale has equated the removal of judicial review of governmental action which affects individual rights to subverting the rule of law, and the first line of the Administrative Review Council’s discussion paper, Judicial Review in Australia, notes that judicial review is important in ensuring the accountability of public officials. Both accountability and the rule of law are dear to human rights lawyers – without these overarching principles, human rights protections would languish.
We make, and break, promises to one another all the time. As every law student knows, to break one’s promise does not always result in exposure to legal liability. This may sometimes be a result of changing times: for instance, the common law action for breach of promise to marry was commonly used in the 19th century but by 1976 had been abolished altogether in Australia, by statute. By contrast, the contractual doctrine of consideration has been required to make room for the power of non-contractual promises and representations in commercial, litigious and other private law circumstances through the development of promissory estoppel.
Sometimes, the nature of the promise or the circumstances in which it was made will be significant but, usually, the identity of the promisor is the most relevant of those circumstances. This contention supports the recent statement by Joseph Raz that ‘most undertakings and agreements are much less formally created, arising not so much out of explicit acts of commitment as out of the implied meaning and consequences of an interaction over time’. Rarely is the identity of the promisor of greater significance than when it is the government. The conundrum posed by Raz – ‘if promises are binding, if they are cogent ways for people to bind themselves, there must be a reason to do as one promised’ – should be presumptively reversed where government is the promisor, given its size and power, not to mention the usual understanding that such power as government has is held as though ‘on trust’ for the citizenry. In short, where it may be open to discussion whether and why an individual should keep his or her promises, government should need a compelling reason not to do so (or alternatively to provide compensation for its failure to do so).
In this chapter we shall discuss two important aspects, different but correlated, of the weak interactions: the CP-violation phenomena and the oscillations between members of flavoured, electrically neutral meson–antimeson pairs: the K0, B0, Bs0 and D0 systems. In each case, the states of definite flavour differ from those of definite mass and lifetime.
We begin with an elementary discussion of the neutral K system that will identify the states of definite strangeness, those of definite CP, and those of definite mass and lifetime. We shall describe the oscillation between the states of definite strangeness, giving the relevant mathematical expressions and discussing the experimental evidence, including the observation of the regeneration of the initial flavour.
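As a standard textbook illustration of the distinction between flavour and CP eigenstates (stated here in one common phase convention, in which CP takes K0 to K̄0; this is not quoted from the chapter), the CP eigenstates of the neutral kaon system are superpositions of the strangeness eigenstates:

```latex
\begin{aligned}
\left|K_1\right\rangle &= \tfrac{1}{\sqrt{2}}\left(\left|K^0\right\rangle + \left|\bar K^0\right\rangle\right),
&\quad CP\left|K_1\right\rangle &= +\left|K_1\right\rangle,\\
\left|K_2\right\rangle &= \tfrac{1}{\sqrt{2}}\left(\left|K^0\right\rangle - \left|\bar K^0\right\rangle\right),
&\quad CP\left|K_2\right\rangle &= -\left|K_2\right\rangle.
\end{aligned}
```

In the absence of CP violation these would coincide exactly with the states of definite mass and lifetime; the 1964 discovery discussed below showed that they do not.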
We then define the different modes of CP violation: in the wave function (or in the mixing), in the interference between decays with and without mixing, and in the decays themselves. All three modes occur for neutral mesons; only the last also occurs for charged ones. We shall describe how CP violation was discovered in the neutral K system in 1964.
In Section 8.6 we describe oscillations and CP violation in the B0 system, which requires a somewhat more advanced formalism. In the same section we shall mention the discoveries of Bs0 oscillations and of D0 mixing. These beautiful experimental results were obtained at high-luminosity electron–positron colliders, KEKB in Japan and PEP2 in California, built for this purpose and called beauty factories.
Tribunals are bodies ‘which stand on the frontiers between law and administration’. They ‘provide specialised machinery for the adjudication of cases that would otherwise be decided by the civil courts’. The first tribunals emerged in the 17th century in England, when a judicial power was given to Commissioners of Customs and Excise. Among other early examples, a specialist tax tribunal was set up in England towards the end of the 18th century, and in the 19th century, adjudicative functions in areas unfamiliar to the courts, such as railways and canals, were allocated to bodies outside the court system. Nonetheless, for the most part tribunals are a 20th-century phenomenon.
There have been various attempts to describe a tribunal – a task made more difficult by the variety of bodies which bear that label. The Australian Law Reform Commission (ALRC) defined tribunals in these terms:
a body which is not a court but carries out a mix of judicial or quasi-judicial tasks. The term covers a wide spectrum of bodies, both in terms of the subject matter dealt with and the processes used.
The discipline of subsurface seismic imaging, or mapping the subsurface using seismic waves, takes a remote sensing approach to probing the Earth's interior. It measures ground motion along the surface and in wellbores, then puts the recorded data through a series of data processing steps to produce seismic images of the Earth's interior in terms of variations in seismic velocity and density. The ground movements recorded by seismic sensors (such as geophones and seismometers onshore, or hydrophones and ocean-bottom seismometers offshore) contain information on the media's response to the seismic wave energy that traverses them. Hence the first topic of this chapter is seismic data and their acquisition, processing, and interpretation. Because nearly all modern seismic data are in digital form in order to be stored and analyzed in computers, we need to learn several important concepts about sampled time series, such as sampling rate and aliasing; the latter is an artifact caused by under-sampling. In exploration seismology, many useful and quantifiable properties of seismic data are called seismic attributes. Two of the most common seismic attributes are the amplitude and phase of seismic wiggles. They are introduced here together with relevant processing issues such as gain control, phase properties of wavelets, and the Hilbert transform, which enables many time-domain seismic attributes to be extracted. To process real seismic data, we also need to know the basics of data formats, the rules for storing seismic data in computers. To ensure that the data processing works, we need to conduct many quality control checks. These two topics are discussed together because in practice some simple quality control measures need to be applied at the beginning stage of a processing project.
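To make the aliasing idea concrete, here is a minimal numpy sketch (my own illustration, not taken from the chapter): a 75 Hz sine sampled at 100 Hz, i.e. above the 50 Hz Nyquist frequency, produces exactly the same samples as a -25 Hz sine, so the two signals are indistinguishable after digitization.

```python
import numpy as np

fs = 100.0                      # sampling rate in Hz (Nyquist = 50 Hz)
n = np.arange(64)               # sample indices
t = n / fs                      # sample times in seconds

f_true = 75.0                   # signal frequency, above Nyquist
f_alias = f_true - fs           # aliased frequency: -25 Hz

under_sampled = np.sin(2 * np.pi * f_true * t)
alias = np.sin(2 * np.pi * f_alias * t)

# the two sampled sequences are identical: once digitized, the 75 Hz
# signal masquerades as a 25 Hz oscillation of opposite sign
assert np.allclose(under_sampled, alias)
```

This is why seismic acquisition must either sample fast enough or apply anti-alias filtering before digitization: no amount of later processing can separate the two cases above.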
I would like to discuss an approach to the mind that considers language and similar phenomena to be elements of the natural world, to be studied by ordinary methods of empirical inquiry.
Noam Chomsky 1995
Linguists have forgotten, Mathesius argued, that the homogeneity of language is not an ‘actual quality of the examined phenomena,’ but ‘a consequence of the employed method’.
Uriel Weinreich, William Labov, and Marvin I. Herzog 1968
Some have seen in modern linguistic methodology a model or harbinger of a general methodology for studying the structure of human behavior.
Dell Hymes 1962
Overview
The three views expressed above remind us of the peculiar status of linguistics as a field. It represents a single discipline to the extent that it broadly shares a single object of analysis, but little else can be said to be uniform in terms of epistemology and method. Some linguists affiliate most closely with the social sciences, others with the natural sciences, and others with the humanities. Perhaps surprisingly, this diverse group has not (yet) splintered off into separate fields. Rather, the deep heterogeneity of the field has come to be seen by many as a strength, not a weakness. Recent years have witnessed a rise in creative synergies, with scholars drawing inspiration from the methods and data used by “neighboring” linguists in order to enrich and expand the scope of their own investigations.
The term “discourse” has a variety of meanings both within linguistics and outside of it and, correspondingly, discourse analysis refers to a wide range of analytic methods. In this chapter, we will focus on methods of discourse analysis that are associated with sociocultural linguistics, “a broad interdisciplinary field . . . encompassing the subfields of sociolinguistics and linguistic anthropology, among others” (Bucholtz and Hall 2005: 586). Given our emphasis on socially oriented approaches to discourse analysis, following Schiffrin (1994: 415) we define discourse as language embedded in social interaction – that is, unlike approaches to discourse that conceptualize it as a linguistic unit commensurate with (but larger than) a sentence or a morpheme, we regard discourse as fundamentally different from these other kinds of linguistic units. Under a formalist definition of discourse, for example, the organization of words into sentences is regarded as equivalent to the organization of sentences into discourse (see Kamp and Reyle 1993 and Lambrecht 1994 for more on the treatment of discourse from a formal perspective). Yet, as both Schiffrin (1994) and Cameron (2001) have pointed out, the process of determining whether a string of words constitutes a grammatical sentence or not relies upon linguistic knowledge, in contrast to the process of imposing coherence on a string of sentences (i.e., interpreting them as a discourse), which involves, for the most part, the mobilization of non-linguistic and contextual knowledge. Put another way, “discourse is not amenable to a ‘pure’ formalist analysis” (Cameron 2001: 13) in the way that other kinds of linguistic units are.
Deconvolution means to “undo” a convolution process. We may view each seismic trace as the result of convolving the subsurface seismic reflectivity with a seismic wavelet. Deconvolution can then be used to remove the seismic wavelet from the input seismic trace in order to yield the seismic reflectivity as the output. As a common time processing method, the main benefits of deconvolution include increasing data bandwidth and therefore resolution, suppressing periodicity such as multiples, and removing known wavelets. In practice we often only have the input seismic trace and want to find both the wavelet and the reflectivity. This non-uniqueness problem leads to the approach of predictive deconvolution, which assumes that the predictable components of the input trace belong to the seismic wavelet and the unpredictable components of the input trace belong to the reflectivity. To remove the effect of a known filter, we may use a frequency domain deconvolution which employs a “water level” to prevent division by zero.
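The frequency-domain idea can be sketched in a few lines of numpy. This is an illustrative sketch under simplifying assumptions (the wavelet is known exactly, the data are noise-free, and the short minimum-phase wavelet chosen here has no near-zero spectral values, so the water level acts only as a safeguard):

```python
import numpy as np

def waterlevel_deconv(trace, wavelet, water=1e-4):
    """Frequency-domain deconvolution with a water level: divide the
    trace spectrum by the wavelet spectrum, but never by anything
    smaller than `water` times the peak wavelet power."""
    n = len(trace)
    S = np.fft.rfft(trace, n)
    W = np.fft.rfft(wavelet, n)
    power = np.abs(W) ** 2
    denom = np.maximum(power, water * power.max())
    return np.fft.irfft(S * np.conj(W) / denom, n)

# synthetic example: three reflection spikes convolved with a wavelet
refl = np.zeros(128)
refl[[20, 50, 90]] = [1.0, -0.6, 0.4]
wavelet = np.array([1.0, -0.5, 0.25, -0.125])   # short, minimum-phase
trace = np.convolve(refl, wavelet)[:128]

est = waterlevel_deconv(trace, wavelet)
# the spikes are recovered at their original positions and amplitudes
```

With noisy field data or a band-limited wavelet the water level matters much more: raising it stabilizes the division at frequencies where the wavelet carries little energy, at the cost of resolution.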
As the amplitude and phase of real data vary with time, the deconvolution operator may be applied within a time window of the data. Adaptive deconvolution is a practical way to divide the data trace into overlapping time windows, apply deconvolution to each window, and then merge the deconvolved results. By quantifying the distribution of seismic wiggles using the concept of entropy, minimum entropy deconvolution seeks to minimize the number of spikes on a seismic trace; this method works well in cases with few major reflectors. Finally, a method called extrapolation by deterministic deconvolution (EDD) is shown as a way to take predictions from sites of joint observations and extrapolate them to nearby sites that have only a single observation. This method offers the possibility of using seismic data to anticipate filtered versions of wellbore measurements.
This chapter introduces the fundamentals of inferential statistics – that is, methods that help you make inferences or predictions based on your sample data. More specifically, in most empirical studies, researchers cannot study the complete population of a phenomenon of interest – that is, the complete set of objects or speakers of interest – but only a small sample of the phenomenon under investigation. For example, instead of investigating all relative clauses, you investigate a (hopefully carefully constructed) sample of relative clauses in a (part of a) corpus; instead of testing all non-native speakers of a language, you test a (hopefully randomly selected) sample of speakers, and so on. Obviously, you hope that whatever results – percentages, means, correlation coefficients – you obtain from a sample (which you studied) will generalize to the population (which you did not study). However, if researchers draw different samples from the same population and compute point estimates of percentages, means, correlation coefficients, they will just as obviously also get different point estimates; they will encounter variability. The most important application of inferential statistics is to assist researchers in quantifying and studying this variability to (i) arrive at better estimates of population parameters, and (ii) test hypotheses and separate random/accidental from systematic/meaningful variation.
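The variability of point estimates across samples is easy to see in simulation. The following sketch (my own illustration; the variable names and numbers are invented) draws 1,000 samples of 100 observations each from the same population and shows that the sample means scatter around the population mean, with a spread close to the theoretical standard error sigma/sqrt(n):

```python
import numpy as np

rng = np.random.default_rng(42)

# hypothetical population: 10,000 "speakers" with some score whose
# population mean is 50 and population standard deviation is 10
population = rng.normal(loc=50.0, scale=10.0, size=10_000)

# draw many independent samples and record each sample's point estimate
sample_means = np.array([
    rng.choice(population, size=100, replace=False).mean()
    for _ in range(1000)
])

# the estimates vary from sample to sample: this is the variability
# that inferential statistics quantifies ...
print(sample_means.min(), sample_means.max())

# ... but they cluster around the population mean, with a spread close
# to the theoretical standard error sigma / sqrt(n) = 10 / 10 = 1
print(sample_means.mean(), sample_means.std())
```

Re-running with a larger sample size per draw shrinks the spread, which is exactly the sense in which bigger samples yield better estimates of population parameters.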
Section 2 will introduce several basic concepts that underlie most inferential statistics. Section 3 presents a set of questions based on Chapter 14 and Section 2 of this chapter that are necessary to identify which statistical test is applicable in a particular research scenario. Sections 4.1 and 4.2 then discuss a small selection of statistical tests involving frequency data of discrete/categorical data and central tendencies (means and medians) respectively.
Computers can be used for many different purposes in linguistic research. They can be used for data storage and search. They can be used as devices for speech analysis or synthesis. They can be used to present linguistic stimuli to subjects and record their responses. In all these applications, computers are used as sophisticated tools, and they are programmed according to purely practical criteria: as long as it gets the job done, the internal workings of the software are not the subject of the research.
However, computing can also become the focus of linguistic inquiry. Computers can be used to operationalize linguistic theories by implementing them as computer programs. This is done because linguistic theories may be so complex that their predictions can no longer be derived using verbal reasoning or pen-and-paper analysis. Moreover, turning a linguistic theory into a computer program forces the researcher to make her assumptions explicit. By running the program, and studying its behavior under a variety of circumstances, the researcher can test the theory against empirical findings and often discover unexpected consequences.
Researchers in the field of linguistics have a wide range of methodologies at their disposal. One approach that has seen a marked increase in recent decades is experimental research, which in the wider social sciences is open to a large number of possible designs (Kirk 2003, for example, lists forty for quantitative research alone; see also Lavrakas 2008). In this chapter, we provide an overview of experimental design options available to linguistics researchers, as well as a brief overview of mixed methods, an increasingly common option for investigating complex research questions. The discussion reviews general principles in experimental design, with examples from a selection of subfields. (For further details of experimental methods in theoretical linguistics and psycholinguistics in particular, see Chapters 3 and 8.) The chapter concludes with a discussion of common data collection techniques relevant to experimental designs that are used in a variety of subfields in linguistics.
Fundamentals
Researchers adopting a quantitative approach seek to investigate phenomena by collecting numerical data and analyzing those data statistically. To facilitate this statistical analysis and to control for extraneous variables, quantitative researchers typically recruit a large number of participants and carefully design all aspects of the study before collecting data. In this design process, the quantitative researcher faces a number of questions, including: Do I need more than one group? If so, how many groups are needed to address the research question(s)? How should participants be placed into groups? How will data be collected from the participants, and how often? If an experimental approach is adopted – for example, observations or measurements to be collected under relatively controlled conditions – what will the treatment consist of (e.g., stimuli, timed response, feedback)? How will extraneous variables be addressed?
Multivariate analysis deals with observations made on many variables simultaneously. Datasets with such observations arise across many areas of linguistic inquiry. For instance, Jurafsky et al. (2001) provide an overview of the many factors that co-determine a word’s acoustic duration (including its neighboring words, syntactic and lexical structure, and frequency). The importance of these factors is determined with the help of multiple regression modeling of data extracted from speech corpora. Koesling et al. (in press) used multivariate analysis to study the pitch contours of English tri-constituent compounds, with not only time and compound structure as predictors, but also speaker, word, a word’s frequency of occurrence, and the speaker’s sex. In morphology, the choice between two rival affixes can depend on a wide range of factors, as shown for various Russian affix pairs by Baayen et al. (in press). F. Jaeger (2010) showed that whether the complementizer that is present in an English sentence depends on more than fifteen different factors. Gries (2003) and Bresnan et al. (2007) clarified the many factors that join in determining the choice of particle placement and dative constructions, respectively. In psycholinguistics, multivariate methods are becoming increasingly important (see, e.g., Kuperman et al. 2009, for eye-tracking research), especially with the advent of so-called megastudies (Balota et al. 2004). Multivariate methods have a long history of use in sociolinguistics (Sankoff 1987), and play an important role in present-day dialectometry (Wieling 2013). What is common across all these studies is that they address linguistic phenomena for which monocausal explanations fail. Many phenomena can only be understood properly when a great many explanatory factors are considered jointly. This is where multivariate statistics come into play.
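As a minimal illustration of the kind of multiple regression used in such studies (a synthetic sketch with invented predictors and coefficients, not data from any of the cited papers), ordinary least squares can recover several simultaneous effects on word duration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# invented predictors: standardized log word frequency and word length
log_freq = rng.normal(size=n)
length = rng.normal(size=n)

# simulated durations: frequent words are shorter, long words longer
duration = (0.30 - 0.05 * log_freq + 0.02 * length
            + rng.normal(scale=0.01, size=n))

# design matrix with an intercept column; fit by least squares
X = np.column_stack([np.ones(n), log_freq, length])
coef, *_ = np.linalg.lstsq(X, duration, rcond=None)

# coef recovers approximately (0.30, -0.05, 0.02): both effects are
# estimated jointly, which a monocausal analysis cannot do
print(coef)
```

Real corpus studies of this kind additionally handle correlated predictors, random effects for speaker and word, and model criticism, but the core idea of estimating many effects at once is the same.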
Intersubjective agreement is best reached by convergence of several kinds of data with complementary sources of error.
(Labov 1972b)
Early in the development of modern sociolinguistics, William Labov taught us that there is no single best type of data or method for linguistic study, since all are limited. Thus, our fullest understandings are reached only through approaching our research questions from several angles, with each vantage point providing a unique perspective that offsets its necessarily limited scope.
In this chapter, I examine two important and complementary methods for collecting data on language in its social setting: surveys in which researchers more or less directly elicit information on linguistic features, patterns, and interrelations; and interviews in which connected speech is elicited. In addition, I consider methods designed specifically to elicit information on language attitudes, though both surveys and interviews can provide valuable information on language attitudes as well as language use. The chapter takes a largely variationist sociolinguistic approach, to complement the other chapters in Part I of this volume. Data collection methods designed to glean information on language variation can usefully inform other types of linguistic study; at the same time, elicitations associated with theoretical, anthropological, psycholinguistic, acquisitional, and other approaches to language study can be invaluable for sociolinguistic research. For example, data from a relatively relaxed, conversational interview might yield information on stigmatized language features that are resistant to elicitation via grammaticality judgment tasks; conversely, sociolinguists can use grammaticality judgment tasks to distinguish accidental gaps from genuine ungrammaticality (see Section 2.3).
All one can hope to do in a chapter on research ethics is raise issues, for the burden of resolution falls to each of us in dialogue with those who stand to be affected by our work. Different kinds of linguistic research raise very different ethical issues. And inasmuch as linguists encounter an enormous variety of cultures, people, and situations in the course of their research, we need to recognize that our own system of ethics may not be the same as that of the people we are working with (see, e.g., Holton 2009). So no discussion of research ethics can be comprehensive and no formal set of guidelines can anticipate unique actions or circumstances. For this reason, I will not attempt to set out specific actions one should take while doing research, but I will discuss a number of areas that every researcher should consider carefully when embarking on work with human participants.
Most of the readers of this volume will be subject to government regulation and university ethics review, which can be both a help and a hindrance in fostering ethical practice. A central purpose of this chapter is to encourage researchers to step back from the bureaucratic process and examine their own practices more generally. They might then step back into the bureaucratic process and engage in it intelligently and actively. For researchers who do not have such institutional oversight, this chapter can be a starting point for reflection about their practices and their potential consequences. In some cases, particularly in communities that have been disenfranchised in other ways, communities may wish to control who does what with their language (see Bowern 2008 and Holton 2009 for a thorough discussion of these issues). In some cases, particularly in communities that have had a history of exploitation (such as native communities in North America), there is formal community review of research proposals. But in other cases – and even once one has passed this review – the researcher must take into consideration the interests of others in the community. The common focus on the primary individual or individuals we work with often distracts us from a consideration of how linguistic work affects others in the community.
Seismic resolution and fidelity are two important measures of the quality of seismic records and seismic images. Seismic resolution quantifies the level of precision, such as the finest size of subsurface objects detectable in the seismic data. Several definitions of seismic resolution are introduced in this chapter. Seismic fidelity quantifies truthfulness, such as the genuineness of the data or the degree to which the imaged target position matches its true subsurface position. Since seismic data are band-limited, seismic resolution is proportional to the frequency bandwidth of the data or the resulting images. If the bandwidth is too narrow, the resolution will be poor because a single subsurface reflector may produce a number of indistinguishable wiggles on the seismic traces. For multiple datasets with the same bandwidth, it is easier in practice to recognize or resolve events with a zero-phase wavelet than with minimum-phase or mixed-phase wavelets. Seismic fidelity is about global resolution, the resolution in the big picture.
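A common rule of thumb connecting bandwidth to resolvable layer size (a standard approximation in exploration seismology, not a result derived in this chapter) is that the vertical resolution limit is about a quarter of the dominant wavelength, lambda/4 = v/(4f). A tiny sketch:

```python
def tuning_thickness(velocity_m_per_s, dominant_freq_hz):
    """Approximate vertical resolution limit (tuning thickness):
    one quarter of the dominant wavelength, lambda / 4 = v / (4 f)."""
    wavelength = velocity_m_per_s / dominant_freq_hz
    return wavelength / 4.0

# e.g. a 2000 m/s layer imaged with a 50 Hz dominant frequency:
# layers thinner than about 10 m start to become indistinguishable
print(tuning_thickness(2000.0, 50.0))   # prints 10.0
```

The formula makes the bandwidth dependence explicit: doubling the usable dominant frequency halves the thinnest resolvable layer, while higher velocities (longer wavelengths) degrade resolution at depth.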
In principle, the highest-quality seismic imagery requires the highest levels of seismic resolution and fidelity. In real cases, however, seismic resolution and fidelity are always limited by incomplete seismic illumination, which produces various types of seismic artifacts. It is therefore important to assess the level of seismic resolution and fidelity in each seismic processing project, as illustrated by examples in this chapter. One physical factor that reduces seismic resolution is seismic attenuation, the loss of seismic signal energy with increasing travel distance. Hence the final topic discussed here is the estimation of, and correction for, seismic attenuation in processing seismic data.