We live in an age of liberation movements. This is a sign of the degree to which freedom has become the central value of our culture. All sorts of demands are made from all sorts of quarters in the name of liberty. People seek recognition, equality, justice, but all as corollaries of ‘liberation’. This is something which has evolved in our civilization over the last three centuries. The process has been punctuated, and partly shaped, by paradigm statements by major thinkers, although it has amounted to much more than these: the movement of a whole culture. Among these statements, Kant's has been very important. It marked a crucial step on the way.
It would help to place Kant's theory of freedom in relation to other conceptions which developed before him. And for this purpose, I want to engage in some very schematic intellectual history, and contrast an influential ancient notion of freedom with a family of modern ones, following Benjamin Constant's De la liberté des anciens, comparée avec celle des modernes.
A notion of freedom quite common in the ancient world saw it as consisting in the status of the citizen. The citizen was free in one sense by contrast to the slave in that he was not servile; and in another sense in contrast to the metic or disfranchised, in that he could act as a citizen. Freedom, on this view, consisted in a certain place within society.
I would like to examine the issue of political atomism, or at least to try to clarify what this issue is. I want to say what I think atomist doctrines consist in, and to examine how the issue can be joined around them – that is, how they might be proved or disproved, or at least cogently argued for or against, and what in turn they may be used to prove.
The term ‘atomism’ is used loosely to characterize the doctrines of social contract theory which arose in the seventeenth century and also successor doctrines which may not have made use of the notion of social contract but which inherited a vision of society as in some sense constituted by individuals for the fulfilment of ends which were primarily individual. Certain forms of utilitarianism are successor doctrines in this sense. The term is also applied to contemporary doctrines which hark back to social contract theory, or which try to defend in some sense the priority of the individual and his rights over society, or which present a purely instrumental view of society.
Of course, any term loosely used in political discourse can be defined in a host of ways. And perhaps one should even leave out of philosophical discourse altogether those terms which tend to be branded as epithets of condemnation in the battle between different views. One might well argue that ‘atomism’ is one such, because it seems to be used almost exclusively by its enemies.
What did utilitarianism have going for it? A lot of things undoubtedly: its seeming compatibility with scientific thought; its this-worldly humanist focus; its concern with suffering. But one of the powerful background factors behind much of this appeal was epistemological. A utilitarian ethic seemed to be able to fit the canons of rational validation as these were understood in the intellectual culture nourished by the epistemological revolution of the seventeenth century and the scientific outlook which partly sprang from it.
In the utilitarian perspective, one validated an ethical position by hard evidence. One counts the consequences for human happiness of one or another course, and goes with the one with the highest favourable total. What counts as human happiness was thought to be something conceptually unproblematic, a scientifically establishable domain of facts like others. One could abandon all the metaphysical or theological factors – commands of God, natural rights, virtues – which made ethical questions scientifically undecidable. Bluntly, we could calculate.
Ultimately, I should like to argue that this is but another example of the baleful effect of the classical epistemological model, common to Cartesians and empiricists, which has had such a distorting effect on the theoretical self-understanding of moderns. This is something which is above all visible in the sciences of man, but I think it has wreaked as great havoc in ethical theory.
The distortive effect comes in that we tend to start formulating our meta-theory of a given domain with an already formed model of valid reasoning, all the more dogmatically held because we are oblivious to the alternatives.
The main thesis of the last chapter has been that we ought to turn to look at what we do when we theorize; that when we do we see that theories serve more than descriptive and explanatory purposes, they also serve to define ourselves; and that such self-definition shapes practice. But if all this is true, I argued, then the use of theory as self-definition also has to be borne in mind when we come to explain, when we practise, social science.
For even though theory may be serving us, the social scientists, simply as an instrument of explanation, the agents whose behaviour we are trying to explain will be using (the same or another) theory, or prototheory, to define themselves. So that whether we are trying to validate a theory as self-definition, or establish it as an explanation, we have to be alive to the way that understanding shapes practice, disrupts or facilitates it.
But this raises a number of questions about the relation between the scientist's explanatory theory and the self-definitions of his subjects. Suppose they offer very different, even incompatible, views of the world and of the subjects' action? Does the scientist have the last word? Can he set the world-view of his subjects aside as erroneous? But to condemn this world-view does he not have to stand outside it, and is this external stance compatible with understanding their self-definitions?
We come here to one of the main issues of the debate around verstehende social science. And this issue was bound to arise.
Despite the appearance of variety in the papers published in this collection, they are the work of a monomaniac; or perhaps better, what Isaiah Berlin has called a hedgehog. If not a single idea, then at least a single rather tightly related agenda underlies all of them. If one had to find a name for where this agenda falls in the geography of philosophical domains, the term ‘philosophical anthropology’ would perhaps be best, although this term seems to make English-speaking philosophers uneasy.
I started on it with a polemical concern. I wanted to argue against the understanding of human life and action implicit in an influential family of theories in the sciences of man. The common feature of this family is the ambition to model the study of man on the natural sciences. Theories of this kind seem to me to be terribly implausible. They lead to very bad science: either they end up in wordy elaborations of the obvious, or they fail altogether to address the interesting questions, or their practitioners end up squandering their talents and ingenuity in the attempt to show that they can after all recapture the insights of ordinary life in their manifestly reductive explanatory languages.
Indeed, one could argue that the second and third pitfalls should rather be seen as the horns of a dilemma: either these inadequate theories avoid the interesting questions, or they show themselves up, and hence have to expend more and more energy defending themselves against the charge of irrelevancy.
Human beings are self-interpreting animals. This is a widely echoing theme of contemporary philosophy. It is central to a thesis about the sciences of man, and what differentiates them from the sciences of nature, which passes through Dilthey and is very strong in the late twentieth century. It is one of the basic ideas of Heidegger's philosophy, early and late. Partly through his influence, it has been made the starting point for a new skein of connected conceptions of man, self-understanding and history, of which the most prominent protagonist has been Gadamer. At the same time, this conception of man as self-interpreting has been incorporated into the work of Habermas, the most important successor of the post-Marxist line of thought known somewhat strangely as critical theory.
And one could go on. Through all this cross-talk about ‘hermeneutics’, the question of what one means by this basic thesis, that man is a self-interpreting animal, and how one can show that it is so, may still go unanswered. These are of course tightly related questions; and I would like to try to fumble my way towards an answer to them.
It may turn out to be a mistake, but I am tempted to try to put together the full picture that this thesis means to convey by stages; to lay out, in other words, a series of claims, where the later ones build on the earlier ones, and in that sense form a connected picture.
This is an attempt to resolve one of the issues that separate ‘positive’ and ‘negative’ theories of freedom, as these have been distinguished in Isaiah Berlin's seminal essay, ‘Two concepts of liberty’. Although one can discuss almost endlessly the detailed formulation of the distinction, I believe it is undeniable that there are two such families of conceptions of political freedom abroad in our civilization.
Thus there clearly are theories, widely canvassed in liberal society, which want to define freedom exclusively in terms of the independence of the individual from interference by others, be these governments, corporations or private persons; and equally clearly these theories are challenged by those who believe that freedom resides at least in part in collective control over the common life. We unproblematically recognize theories descended from Rousseau and Marx as fitting in this category.
There is quite a gamut of views in each category. And this is worth bearing in mind, because it is too easy in the course of polemic to fix on the extreme, almost caricatural variants of each family. When people attack positive theories of freedom, they generally have some Left totalitarian theory in mind, according to which freedom resides exclusively in exercising collective control over one's destiny in a classless society, the kind of theory which underlies, for instance, official communism. This view, in its caricaturally extreme form, refuses to recognize the freedoms guaranteed in other societies as genuine.
Language is a central area of concern in the twentieth century. This is evident on all sides. First, our century has seen the birth and explosive growth of the science of linguistics. And in a sense ‘explosive’ is the right word, because like the other sciences of man, linguistics is pursued in a number of mutually irreducible ways, according to mutually contradictory approaches, defended by warring schools. There are structuralists in the Bloomfieldian sense, there are proponents of transformational theories, there are formalists.
These schools and others have made a big impact. They are not just collections of obscure scholars working far from the public gaze. Names like Jakobson and Chomsky are known far outside the bounds of their discipline.
But what is even more striking is the partial hegemony, if one can put it this way, that linguistics has won over other disciplines. From Saussure and the formalists there has developed the whole formidable array of structuralisms, of which Lévi-Strauss is the pathfinder, which seek to explain a whole range of other things: kinship systems, mythologies, fashion (Barthes), the operations of the unconscious (Lacan), with theories drawn in the first place from the study of language. We find terms like ‘paradigm’, ‘syntagm’, ‘metaphor’, ‘metonymy’, used well beyond their original domain.
In the previous chapter an account of uninstantiated laws was given. According to this view, strictly there are no uninstantiated laws. Statements of uninstantiated law tell us that a certain law would govern the antecedent universal, if, contrary to fact, that universal existed, that is, was somewhere instantiated. Such an account deals fairly easily with the problem of ‘missing values’ of functional laws. But certain cases of uninstantiated laws have been suggested by Michael Tooley (1977) which the counterfactual account appears to be unable to handle.
TOOLEY'S CASES
The cases to be discussed are not actual uninstantiated laws proposed by scientists. They are imaginary situations where, it is plausible to say, we should be inclined to postulate uninstantiated laws. Tooley himself was not concerned with the topic of uninstantiated laws for its own sake. Rather he wanted to use the cases, first to criticize the Regularity theory, and second to support the view that laws of nature are relations between universals. I largely followed him in his estimation of the cases in my 1978 (Ch. 24). Now I have doubts.
The Fundamental Particle case (1977, p. 669). Tooley imagines a world containing ten, and only ten, types of fundamental particle. Allowing that a particle may interact with a particle of its own type as well as with particles of other types, this allows for 55 interaction laws governing the interactions of pairs of particles. Suppose that 54 of these laws are known.
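Tooley's figure of 55 is just the number of unordered pairs drawn from ten types when self-pairing is allowed: C(10, 2) + 10 = 45 + 10 = 55, or in general n(n + 1)/2. A minimal sketch confirming the count (the function name is mine, not Tooley's):

```python
from itertools import combinations_with_replacement

def interaction_law_count(n_types: int) -> int:
    # One interaction law per unordered pair of particle types,
    # with a type's interaction with itself included.
    return len(list(combinations_with_replacement(range(n_types), 2)))

# Tooley's world: ten particle types yield 55 pairwise interaction laws.
assert interaction_law_count(10) == 55
```

So if 54 of these laws are known, exactly one pairing remains, which is what lets Tooley construct a case where that last law is plausibly uninstantiated.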
I will try to sum up the main positive theses argued for in this essay.
Laws of nature are dyadic relations of necessitation (or probabilification) holding between universals. They are (higher-order) states of affairs, states of affairs which are simultaneously universals. The instantiations of these universals are the positive instances falling under the law. It is an intelligible notion that a particular first-order state of affairs should necessitate a further first-order state of affairs, yet not in virtue of the universals involved in these states of affairs. But in a law of nature this same relation of necessitation links sorts of states of affairs, that is, universals. Such necessitations ‘might have been other than they are’, that is, they are contingent. Where one sort of state of affairs necessitates another, then it is entailed, in the absence of further interfering factors (which are always at least logically possible), that the first sort of state of affairs is constantly conjoined with the second sort of state of affairs.
All genuine laws are instantiated laws. Statements of uninstantiated law are to be construed as counterfactuals about what laws would hold if certain conditions were realized. Such statements depend for their truth upon the existence of higher-order laws. Given the higher-order law and the contrary-to-fact condition, then the uninstantiated law may be deduced.
Functional laws are higher-order laws governing those lower-order laws which can be deduced from a functional law after substituting particular values for independent variables.
It was argued in Chapter 2 that there are innumerable Humean uniformities which we think are neither laws of nature nor manifestations of such laws. In Chapter 3 it was argued that there are, or can be, laws of nature which are not manifested in Humean uniformities. It was also pointed out that in the case of probabilistic and functional laws the relation between the content of a law and the content of the manifestation of the law need not be identity, yet identity is demanded by the Regularity theory.
In this chapter, all these difficulties will be waived. Suppose it to be a law of nature that Fs are Gs, and that this law issues in the (actually instantiated) uniformity that each F is a G. Even given this favourable case for the Regularity theorist, there are great difficulties in identifying the law and the uniformity.
LACK OF INNER CONNECTION
Suppose it to be a law that Fs are Gs, and suppose there to be a plurality of Fs. Consider one of these: a. By hypothesis, it is a G. We can say that a's being F nomically necessitates a's being G.
Suppose, however, that we consider this particular instance falling under the law from the standpoint of the Regularity theory. What does the theory postulate to obtain? Nothing but the two states of affairs: a's being F, and a's being G.
The strength of the case against the Naive Regularity theory of laws in particular, and the Regularity theory of laws in general, should now be evident. Some of the difficulties which have been rehearsed have been appreciated by Regularity theorists themselves. It is rather generally recognized that the class of Humean uniformities includes members which we do not wish to include among the laws of nature. A number of Regularity theorists have also recognized that it is not even easy to say what a uniformity is. The trouble here is provided by strange predicates such as ‘grue’. Suppose that all Fs are Gs, but that the predicates ‘F’ and ‘G’ pick out what we would naturally think of as a heterogeneous miscellany. Is this a Humean uniformity? Intuitively, no. But the unsupplemented Naive Regularity theory seems to lack the resources to eliminate such ‘uniformities’.
More recently, Regularity theorists have faced up to the difficulties posed for their view by the apparent existence in nature of irreducibly probabilistic laws. There is no simple emendation of the Naive theory available for dealing with such laws. For instance, it is no good substituting relative frequencies for uniformities, because certain relative frequencies are only the most probable, not the inevitable, manifestations of probabilistic laws. The only way to preserve a Regularity theory seems to be to introduce objective single-case chances or propensities, and then assert that probabilistic laws are uniformities linking more ordinary properties with these new sorts of property.
The question ‘What is a law of nature?’ is a central question for the philosophy of science. But its importance goes beyond this relatively restricted context to embrace general epistemology and metaphysics. In this section I will first discuss the importance of the question for the philosophy of science and then its importance in the wider context.
Natural science traditionally concerns itself with at least three tasks. The first is to discover the geography and history of the universe, taking ‘geography’ to cover all space and ‘history’ to cover all time, including future time. Astronomy is beginning to give us a picture of how the universe as a whole is laid out in space and time. Some other natural sciences give us an overview of more restricted spatio-temporal areas.
A second task is to discover what sorts of thing and what sorts of property there are in the universe and how they are constituted, with particular emphasis upon the sorts of thing and the sorts of property in terms of which other things are explained. (These explainers may or may not be ultimate explainers.)
The third task is to state the laws which the things in space and time obey. Or, putting it in the terms used in describing the second task, the third task is to state the laws which link sort of thing with sort of thing, and property with property.