This chapter aims to place the role of analytical chemistry into its archaeological context. It is a common fallacy that archaeology is about things – objects, monuments, landscapes. It is not: archaeology is ultimately about people. In a leading introductory text, Renfrew and Bahn (1996: 17) state that ‘archaeology is concerned with the full range of past human experience – how people organized themselves into social groups and exploited their surroundings; what they ate, made and believed; how they communicated and why their societies changed’. In the same volume, archaeology is called ‘the past tense of cultural anthropology’ (Renfrew and Bahn 1996: 11), but it differs from anthropology in one crucial and obvious respect – in archaeology it is impossible to interview the subjects of study or to observe them directly in their everyday life. Of necessity, therefore, archaeology operates at a very different level of detail when compared to anthropology – in particular, it is always challenging to reconstruct the knowledge, motivations or beliefs of people in the past. Inferences about past societies can be made from the material evidence recovered from archaeological excavation – sometimes in the form of surviving artefacts or structures (i.e., the deliberate products of human activity) but often from associated evidence such as insect remains, pollen or animal bones, from which environmental and ecological information can be derived. Sometimes it is the soils and sediments of the archaeological deposit itself – their nature and stratigraphy – which provide the evidence or add information by providing a context. Hence the damaging effects of looting or the undisciplined recovery of artefacts, where objects are removed from their contexts without proper recording. The result in such cases is that information is lost, sometimes totally.
Although a truism, it is safe to say that in modern archaeology the information derived from the site and the objects it contains is usually regarded as more valuable than the artefacts themselves.
Although archaeology is a historical discipline in that its aim is to reconstruct events in the past, it is not the same as history. If history is reconstructing the past from written sources, then 99.9% of humanity’s five million years or more of global evolution is beyond the reach of history. Even in historic times, where written records exist, there is still a distinctive role for archaeology, and not only in the majority of the world which remained illiterate. Documentary sources often provide evidence for ‘big events’ – famous people, battles and invasions, religious dogma and the history of states – but such information is inevitably biased. History is written by the literate elites, and usually by the victorious. We do not have to look far into our own recent history to realize that it can obscure parts of the past as well as illuminate it. In contrast, archaeology, apart from adding to our understanding of great people and events, can also provide the unwritten story of the unnamed common people – the everyday story of how they lived and died.
At the heart of archaeology is the process of reconstructing past activities from material remains, since this is a stepping stone to understanding the minds of people in the past. It is this focus on material evidence which creates the need for scientific approaches. Since every archaeological excavation might be thought of as an unrepeatable scientific experiment (in the sense of a data-gathering exercise which can only be done once), there is a practical and moral requirement to extract the maximum possible information from the generally mundane collection of bones, stone tools, shards of broken pots, corroded metalwork and biological assemblages which constitute the vast bulk of archaeological finds. Ancient technologies (pottery firing, metal smelting, etc.) are reconstructed not only from a scientific study of the surviving artefacts but also from detailed analysis of the furnaces and manufacturing debris. Trade routes can be inferred from fragments of broken glass or pottery manufactured in one place but found in another (generally referred to as provenance studies). The economies of ancient societies are reconstructed from their material remains and the details of food supply, farming practices and resource exploitation from a wide range of environmental remains. In this respect, archaeology has much in common with modern forensic science – events, chronologies, relationships and motives are reconstructed from the careful and detailed study of a wide range of material evidence. In order to emphasize the extensive links between a plethora of scientific disciplines and archaeology, it is instructive to challenge new students to name a science which has no relevance to modern-day archaeology. One can easily go through the scientific alphabet, from astronomy to zoology, and find many obvious applications. It is possible, of course, to ask the same question of the social sciences and of the engineering and medical sciences. 
Since the subject of study in archaeology is the whole of human history, it is not surprising that few (if any) academic disciplines exist which have no relevance or application to archaeology. It is inherently an interdisciplinary subject, drawing on evidence from many disciplines but united by the desire to better understand human behaviour in the past.
Overviews of the engagement between science and archaeology are provided in a number of edited volumes, including Brothwell and Higgs (1963, 1969), Ciliberto and Spoto (2000), Henderson (2000), Brothwell and Pollard (2001), Richards and Britton (2020) and Pollard et al. (2023a). More specific volumes on the applications of chemistry to archaeology include the series of conference proceedings entitled Archaeological Chemistry by the American Chemical Society: Beck (1974), Carter (1978), Lambert (1984), Allen (1989), Orna (1996), Jakes (2002), Glascock et al. (2007), Armitage and Burton (2013) and Armitage and Fraser (2023). Another important series of conferences is that organized by the Materials Research Society of the USA under the title Materials Issues in Art and Archaeology: Sayre et al. (1988), Vandiver et al. (1991, 1992, 1995, 1997, 2002, 2005, 2008, 2011, 2017) and Shugar et al. (2017), and also the published proceedings of the International Archaeometry Symposia.
Other specialized volumes include Tite (1972), Bowman (1991), Pollard and Heron (1996, 2008), Lambert (1997), Goffer (2007), Weiner (2010), Price and Burton (2011) and Pollard et al. (2017). A number of more general works also contain much information about the history and technology of archaeological materials, including the eight-volume A History of Technology (Singer, Holmyard, Hall and others between 1954 and 1984), Thorpe’s Dictionary of Applied Chemistry in twelve volumes (between 1937 and 1956) and Joseph Needham’s monumental Science and Civilisation in China (1954 to present).
1.1 The History of Analytical Chemistry in Archaeology
For the reasons given earlier, there is a strong moral and practical requirement to extract the maximum information from the material remains recovered during archaeological investigation. In addition to structural analysis using visual, microscopic and spectroscopic techniques, a major source of information has been analytical chemistry applied to artefacts. This now involves the use of instrumental methods of chemical analysis for the detection and quantification of inorganic elements and the measurement of isotopic abundances, but also includes an array of methods for organic analysis including proteomics and DNA analysis. The history of the engagement of chemical analysis with archaeological materials is now well documented. Caley (1949, 1951, 1967) summarizes the early applications of chemistry to archaeology, and more recent discussions include Pollard et al. (2017) and Pollard (2025).
In the late eighteenth century, the use of analytical chemistry in archaeology arose from a simple curiosity to find out what these objects were made from, but, by the mid nineteenth century, more sophisticated questions were being asked – most notably relating to provenance. The term is used here to describe the observation of a systematic relationship between the chemical composition of an artefact (most often using trace elements, present at less than 0.1% by weight, or isotopes of elements such as lead and strontium) and the chemical characteristics of the source of one or more of the raw materials involved in its manufacture. This contrasts with the use of the same term in art history, where it is taken to mean the find spot of an object or more generally its whole curatorial history. Some textbooks on geoarchaeology have used the term provenience for find spot and provenance for the process of discovering the source of raw materials (e.g., Rapp and Hill 1998: 134), but this distinction has not been universally adopted. Since provenance has been such a dominant theme in archaeological chemistry, further consideration is given later to the theory of provenance studies.
One of the earliest known chemical analyses is that of a gold crown carried out by Archimedes of Syracuse (c. 287–c. 212 BCE), using the displacement of water to determine the gold content (Pollard 2015a). There is, however, a much longer history of assaying the precious metals by fire or touchstone going back at least to the second millennium BCE (Pollard 2015a). The needs of miners and metalworkers to assay ores and a range of metals undoubtedly provided a significant impetus for the development of analytical chemistry (Greenaway 1962, 1964). Trial by fire was replaced in the late eighteenth century CE by a dissolution method, ultimately giving rise to quantitative gravimetric analytical chemistry. This consisted of precipitating a known compound from a solution created by dissolving the sample in a suitable solvent. By weighing the amount of sample dissolved and weighing the dried precipitate, the proportion of the precipitated element can be calculated, providing allowance can be made for the form in which the element is precipitated (e.g., if tin (Sn) is precipitated as tin oxide (SnO2), then the weight of the precipitate would require correction by a factor of 0.79 to allow for the oxygen present). By employing a sequence of specific precipitations, a set of different elements can be quantified from the same solution. Thus, trial by fire gradually gave way to gravimetric analysis, originally known as the humid method. Torbern Bergman (1735–1784) at the University of Uppsala, Sweden, published a protocol for the analysis of gemstones entitled Disquisitio chemica de terra gemmarum (Chemical investigation of the earth of gems) (Bergman 1777), in which he overcame the insolubility of gemstones by using an alkali fusion.
His methodology was subsequently improved by Martin Heinrich Klaproth (1743–1817) in Berlin (Klaproth 1792–1793) and Nicolas-Louis Vauquelin (1763–1829) in Paris (Vauquelin 1799). These three protocols have been described and compared by Oldroyd (1973) and mark the beginning of modern analytical chemistry.
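The gravimetric correction described above is simple arithmetic. As a purely illustrative sketch (using modern standard atomic masses, which are not given in the text), the factor of 0.79 for tin weighed as SnO2 can be recovered as follows:

```python
# Gravimetric factor: the fraction of a weighed precipitate that is the
# element of interest. Atomic masses are modern reference values, used
# here only to illustrate the arithmetic behind the 0.79 factor.
ATOMIC_MASS = {"Sn": 118.71, "O": 16.00}

def gravimetric_factor(element_mass, precipitate_masses):
    """Mass of the analyte element per unit mass of precipitate."""
    return element_mass / sum(precipitate_masses)

# Tin precipitated and weighed as SnO2 (one Sn, two O atoms):
f_sn = gravimetric_factor(ATOMIC_MASS["Sn"],
                          [ATOMIC_MASS["Sn"], 2 * ATOMIC_MASS["O"]])
# f_sn comes out at roughly 0.79, as stated in the text.

# Hypothetical worked example: 2.000 g of sample dissolved,
# 0.150 g of dried SnO2 precipitate recovered.
tin_content = 0.150 * f_sn / 2.000   # mass fraction of tin in the sample
```

The same logic, applied with a different precipitate formula, gives the correction factor for any element determined gravimetrically, which is why a sequence of specific precipitations from one solution can quantify several elements in turn.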
The history of analytical chemistry itself has relied extensively on the contributions of great scientists such as Martin Heinrich Klaproth, and it is interesting to see how many of these pioneers included archaeological material in their work. Following a successful career as a pharmacist, Klaproth devoted himself to the chemical analysis of minerals from all over the world. He is credited with the discovery of three new elements – uranium, zirconium and cerium – and the naming of the elements titanium, strontium and tellurium, isolated by others but sent to him for confirmation. His collected works were published in five volumes from 1795 to 1810 under the title Beiträge zur chemischen Kenntniss der Mineralkörper (Contributions to the Chemical Knowledge of Mineral Substances), to which a sixth – Chemische Abhandlungen gemischten Inhalts (Chemical Treatises of Mixed Content) – was added in 1815 (Klaproth 1795–1810, 1815). In addition to these monumental contributions to mineralogical chemistry, Klaproth determined gravimetrically the approximate composition of six Greek and nine Roman copper alloy coins, a number of other metal objects, and a few pieces of Roman glass. His paper (Klaproth 1792–1793), entitled ‘Mémoire de numismatique docimastique’ (‘Memoir of Experimental Numismatics’), was presented to the Royal Academy of Sciences and Belles-Lettres of Berlin on 9 July 1795 and published in 1798, although the volume which contains it is dated 1792–3. He was appointed professor at the Artillery Officer Academy in Berlin, and in 1809 became the first Professor of Chemistry at the newly created University of Berlin.
Although Caley (1949) believed that these were the first published analyses of archaeological metals, they were in fact preceded by Jean Michel Jerome Dizé (1764–1852) and Johann Christian Wiegleb (1732–1800), who reported a set of analyses of eight coins in 1790 (Dizé 1790) and three bronze axes in 1777 (Wiegleb 1777), respectively (Pollard 2018a). Klaproth’s analyses of glass, however, were the first such analyses ever published.
Another leading scientist of the nineteenth century, Sir Humphry Davy (1778–1829), discoverer of nitrous oxide (N2O, or ‘laughing gas’, subsequently used as a dental anaesthetic and today as a general painkiller), identifier of the chemical nature of chlorine gas and inventor of the miner’s safety lamp, also played a part in developing archaeological chemistry. In 1815, he read a paper to the Royal Society in London concerning the chemical analysis of ancient pigments collected by himself in ‘the ruins of the baths of Livia, and the remains of other palaces and baths of ancient Rome, and in the ruins of Pompeii’ (Davy 1815). Michael Faraday (1791–1867), the discoverer of electromagnetic induction, reported in a series of letters published by others in the journal Archaeologia that he had studied a wide range of archaeological materials, including a copper alloy coin, glass and various fluids (Gage 1832), enamelled bronze, glass, fuel residue, food residue and oil (analysed by tasting, which is no longer the preferred method!) (Gage 1836), and Roman lead glaze pottery (Diamond 1847). Although his were not the first analyses of archaeological ceramics (Pollard 2015b), Theodore William Richards (1868–1928), the first American chemist to receive the Nobel Prize, analysed at Harvard University a number of sherds of Athenian pottery from the Boston Museum of Fine Arts and published the results in the American Chemical Journal (Richards 1895). Many other eminent chemists of the nineteenth century (including Kekulé, Berzelius and Berthelot) contributed to the growing knowledge of the chemical composition of ancient materials.
Perhaps with the exception of Marcelin Berthelot (1827–1907), who published extensively on archaeological materials and the history of alchemy, their archaeological interests were small compared to their overall contribution to chemistry, but it is nevertheless instructive to see how these great scientists included the analysis of archaeological objects as part of their overall process of discovery.
The appearance of the first appendices of chemical analyses in a major archaeological report marks the beginning of the systematic collaboration between archaeology and chemistry. Examples include the analysis of four Assyrian bronzes and a sample of glass in Austen Henry Layard’s Discoveries in the Ruins of Nineveh and Babylon (Layard 1853) and Heinrich Schliemann’s Mycenae (Schliemann 1878). So distinguished was this latter publication that William Gladstone, in an interregnum between his tenures as British prime minister, wrote the preface to the English version. The scientific reports in both of these publications were overseen by John Percy (1817–1889), originally trained as a medical doctor, but, by 1864, a lecturer on metallurgy to the artillery officers at Woolwich and subsequently at the Royal School of Mines in London. Percy wrote four major volumes on metallurgy with significant sections on the early production and use of metals (Percy 1861, 1864, 1870, 1875). Because of his firsthand experience of now lost metallurgical processes, these books remain important sources even today. The analysis of the metal objects from Mycenae showed the extensive use of native gold and both copper and bronze, the latter used predominantly for weapons. Percy wrote in a letter to Schliemann dated 10 August 1877 that ‘Some of the results are, I think, both novel and important, in a metallurgical as well as archaeological point of view’ (Schliemann 1878: 417).
Toward the end of the nineteenth century, chemical analyses became more common in excavation reports, and new questions, beyond the simple ones of identification and determination of manufacturing technology, began to be asked. In 1892, Marie-Adolphe Carnot (1839–1920) published a series of three papers which suggested that fluorine uptake in buried bone might be used to provide an indication of the age of the bone (Carnot 1892a, 1892b, 1892c), pre-empting by nearly 100 years the recent interest in the chemical interaction between bone and the burial environment. Fluorine uptake was heavily relied upon, together with the determination of increased uranium and decreased nitrogen, during the investigation of the infamous ‘Piltdown Man’ (Weiner et al. 1953–6; Oakley 1969). This methodology became known as the F.U.N. method of dating when applied to fossil bone (Oakley 1963). Subsequently such methods have been shown to be strongly environmentally dependent, and only useful, if at all, for providing relative dating evidence.
The adoption of instrumental measurement techniques such as optical emission spectroscopy (OES; see Section 3.3) during the 1920s and 1930s gave rise to new analytical methods which were subsequently applied to archaeological chemistry. At the time, one of the principal research aims for the analysis of archaeological materials was to understand the development of ancient bronze metalwork, especially in terms of identifying the sequence of copper alloys used during the European Bronze Age. Huge programmes of metal analyses were initiated in Germany, Britain and the Soviet Union, leading to several substantial publications of analytical data (e.g., Otto and Witter 1952; Junghans et al. 1960, 1968, 1974; Caley 1964a; Chernykh 1970; Chernykh and Bartseva 1972; see Section 3.6). These ambitious projects have had varying degrees of impact on archaeology, with the work of Chernykh and his colleagues in Moscow being perhaps the most sophisticated and influential (Chernykh 1992; Pollard et al. 2018; Pollard 2025). However, as discussed in Section 3.6, they have left a legacy of many tens of thousands of analyses of bronze artefacts from Europe and Asia which provide both an opportunity and a challenge for modern archaeologists.
As a result of the rapid scientific and technological advances precipitated by the Second World War, the immediate post-war years witnessed a wider range of analytical techniques being deployed in the study of the past, including X-ray analysis and electron microscopy (Chapter 5), neutron activation analysis (Section 3.5) and mass spectrometry (Chapter 6). Large-scale analytical programmes were extended to materials other than metal, such as faience beads, glass and ceramics. Faience, an artificial siliceous material made by sintering quartz at high temperatures, was first produced in the Near East, and during the second millennium BCE it was distributed widely across prehistoric Europe as far as England and Scotland. In 1956, Stone and Thomas used OES to ‘find some trace element, existent only in minute quantities, which might serve to distinguish between the quartz or sand and the alkalis used in the manufacture of faience and glassy faience in Egypt and in specimens found elsewhere in Europe’ (Stone and Thomas 1956: 68). This study represents a clear example of the use of chemical criteria to establish provenance – that is, to determine whether faience beads recovered from sites in Britain were of local manufacture, or imported from Egypt or the eastern Mediterranean. This question was of great archaeological significance. For many years, it had generally been assumed that significant technological innovations originated in the east and had diffused westwards – a theory termed diffusionism in the archaeological literature, and encapsulated in the phrase ex Oriente lux (‘out of the East, light’, a saying associated with Montelius (1899), but in circulation before then). Although the initial OES results were equivocal, the data were subsequently re-evaluated by Newton and Renfrew (1970), who suggested a local origin for the beads on the basis of the levels of tin, aluminium and magnesium.
This conclusion was supported by a subsequent re-analysis of most of the beads using neutron activation analysis (NAA) by Aspinall et al. (1972).
During the late 1950s and early 1960s, the diffusionist archaeological philosophies of the 1930s were replaced by radical new theoretical approaches derived from anthropology and the social sciences. This became known as New Archaeology and represented an explicit effort to explain past human action rather than simply describe it. The philosophy of science played a significant role in providing the terminology for this more statistical and quantitative approach to archaeology (see Trigger 2006). As a consequence, New Archaeology reinvigorated research into prehistoric trade and exchange. The movement of a population, whether via invasion or diffusion of peoples, was no longer seen as the principal instigator of cultural change. Instead, internal processes within society were emphasized, although evidence for ‘contact’ arising from exchange of artefacts and natural materials (as proxy indicators for the transmission of ideas) was still seen as an important factor and one in which the chemical analysis of artefacts and raw materials might be useful. This increased interest in the distribution of materials initiated a ‘golden era’ in archaeometry (a term coined in the 1950s by Christopher Hawkes at Oxford to signify the application of scientific methodologies to archaeology) as a wide range of scientific techniques were employed in the hope of chemically characterizing those rocks which were used for tools or buildings, such as obsidian and marble, as well as ceramics, metals, glass and natural materials, such as amber (see Pollard 2025). These characterization studies were aimed at ‘the documentation of culture contact on the basis of hard evidence, rather than on supposed similarities of form’ (Renfrew 1979: 17).
Quantitative chemical data formed part of the basis of this ‘hard evidence’, which made it necessary for archaeologists to become familiar with the tools and practices of analytical chemistry, as well as the numerical manipulation of large amounts of data.
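At its simplest, the numerical side of such characterization work reduces to comparing the multivariate trace-element composition of an artefact with candidate source profiles. The following is a deliberately minimal, hypothetical sketch (element choices, source names and all concentrations are invented for illustration; real studies use many more elements and proper multivariate statistics):

```python
# Minimal sketch of a characterization comparison: assign an artefact to
# the nearest known source profile by Euclidean distance in composition
# space. All names and numbers below are invented for illustration.
import math

sources = {                        # mean trace-element profiles (ppm)
    "source_A": [120.0, 15.0, 300.0],   # hypothetical Zr, Nb, Sr values
    "source_B": [40.0, 60.0, 90.0],
}
artefact = [115.0, 18.0, 280.0]    # composition of an excavated object

def distance(a, b):
    """Euclidean distance between two composition vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Nearest-profile assignment: the simplest possible 'characterization'.
best = min(sources, key=lambda name: distance(artefact, sources[name]))
```

In practice, concentrations would be standardized or log-transformed and grouped with cluster analysis or principal components rather than a raw nearest-neighbour match, but the underlying logic, matching compositions in a multi-element space, is the same.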
For many years, the applications of analytical chemistry to archaeology focused primarily on inorganic artefacts – the most obviously durable objects in the archaeological record – or occasionally on geological organic materials such as amber and jet. Since the 1980s, increasing attention has been directed towards biological materials – starting with natural products such as waxes and resins, but extending to accidental survivals such as food residues and, above all, human remains, including bone collagen (O’Connell 2023), lipids (Cramp et al. 2023), DNA (Brown 2023) and, most recently of all, proteins (Hendy 2023). Perhaps surprisingly, the preservation of a wide range of biomolecules has now been demonstrated in a large number of archaeological contexts (Brown and Brown 2011). This is probably due to three main factors: the increasing sensitivity of the analytical instrumentation brought to bear on such samples, the potential for high ‘information’ content, and the increasing willingness to look for surviving material in the first place.
However, it is clear that, to be of any value in archaeology, the analytical data from whatever source need to be interpreted in terms of what they mean for understanding human behaviour. In other words, chemical analysis in archaeology needs to be more than a descriptive exercise that simply documents the composition of ancient materials. Too many papers have been published under the banner of ‘characterizing’ some particular group of archaeological objects without any serious attempt to convert these data into something meaningful about the humans who produced them. This step is often much more difficult than producing the primary analytical data; as DeAtley and Bishop (1991: 371) have pointed out, no analytical technique has ‘built-in interpretative value for archaeological investigations; the links between physical properties of objects and human behaviour producing the variations in physical states of artefacts must always be evaluated’. There has been a constant call from within the parent discipline of archaeology for meaningful scientific data which address real and current problems, and which engage with modern archaeological theories. This demand for relevance, although self-evidently compelling, must be qualified by two caveats – firstly, the concept of what is meaningful in archaeology will change as archaeology itself evolves, and secondly, analytical data on archaeological artefacts may be of relevance to disciplines other than archaeology. An example of the latter is the use of stable isotope measurements on wood recovered from archaeological sites to reconstruct past climatic conditions, which was never envisaged by the original analysts (Siegwolf et al. 2022). On the former point, Trigger (1988: 1) states that ‘archaeologists have asked different questions at different periods.
Some of these questions have encouraged close relations with the biological and physical sciences, while other equally important ones have discouraged them’. Only a close relationship between those generating the analytical data and those considering the archaeological problems (ideally, of course, so close that they are encircled by the same cranium) can ensure that costly data do not languish forever in the unopened appendices of archaeological publications or in some esoteric scientific journal, unknown to archaeologists.
1.2 Basic Archaeological Questions
This short introduction has outlined the origins of many of the issues addressed by the application of analytical chemistry to archaeology. They can be divided, somewhat arbitrarily, into those questions which use chemical methods to address specific issues of direct interest to archaeology (the ‘what’, ‘where’, ‘how’ and ‘why’ types of question), and those studies which attempt to understand the processes acting upon archaeological material before, during and after burial. This latter category can and often does address specific issues in archaeology such as site formation processes, differential survival phenomena, and the conservation of archaeological objects and sites, but can be of more fundamental interest in terms of how to protect cultural heritage.
What Is It? Identification
Perhaps the simplest archaeological question that can be answered by chemical means is ‘What is this object made from?’ The chemical identity of many archaeological artefacts may be uncertain for a number of reasons. Simply, it may be too small, corroded, or dirty to be identified by eye. Alternatively, it may be made of a material which cannot be identified visually or by the use of simple tests. An example might be a metal object made of a silvery-coloured metal, such as a coin. It may be ‘pure’ silver (containing more than about 95% silver), or it could be a silver-rich alloy which has been debased and the surface enriched (artificially or naturally) to give a silvery appearance. It could equally be a coin with a silver surface but a base metal core, produced by plating. It might also be an alloy designed to look like silver but containing little or no precious metal, such as ‘nickel silver’ (cupronickel alloys, as used in modern ‘silver’ coinage). Conceivably, it could consist of some more exotic silvery metal such as platinum, but this would excite great interest if identified in a European context prior to the mid eighteenth century CE, since this metal, extensively used in the New World, was supposedly unavailable in Europe before that time.
In general, to answer this basic question, the required levels of analysis are relatively simple, subject to the usual constraints posed by archaeological materials (primarily the need to be as nearly as possible ‘non-destructive’). Consequently, one preferred technique for many years has been X-ray fluorescence (XRF), because of its non-destructive nature, its restricted sample preparation requirements, and its simultaneous multi-element capability (see Section 5.2). During the 1960s, an air-path machine was developed in Oxford specifically to allow the non-destructive analysis of larger museum objects (Hall 1960), and since then portable handheld XRF systems have been used on museum displays or at archaeological excavations (Shugar and Mass 2012; Section 5.2). XRF is capable of both quantitative and qualitative analysis. Qualitative analysis can have important implications when distinguishing between pure silver, debased silver and cupronickel coins, but determining the exact proportion of silver in silver coinage can be more valuable, giving an important indicator of the economic condition of the society from which it comes. The fineness of silver or gold coinage is often directly linked to the state of the finances of the issuing authority (debasement indicating times of financial stress), and therefore ‘time-fineness curves’ have been constructed for cultures as diverse as the Roman Empire and Tudor England, amongst many others (Butcher and Ponting 2020). This assumes, of course, that an accurate quantitative measurement can be made using a particular method on a particular sample. Understanding the complexities involved in such work is one of the major benefits of archaeologists having a better understanding of analytical chemistry.
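Quantitative XRF of the kind implied here ultimately rests on calibration against standards of known composition. As a hedged, purely illustrative sketch (the intensities, standards and linear calibration model below are all invented assumptions, not a description of any particular instrument), silver fineness might be estimated from peak intensities as follows:

```python
# Illustrative sketch: converting measured XRF peak intensities into
# silver content (wt%) via a least-squares linear calibration against
# reference standards. All numbers are invented for illustration.

standards = [          # (Ag peak intensity in counts, known wt% Ag)
    (1200.0, 10.0),    # heavily debased alloy
    (5400.0, 50.0),    # mid-range alloy
    (9800.0, 92.5),    # near-sterling fineness
]

# Ordinary least squares fit: wt% = slope * counts + intercept
n = len(standards)
sx = sum(c for c, _ in standards)
sy = sum(w for _, w in standards)
sxx = sum(c * c for c, _ in standards)
sxy = sum(c * w for c, w in standards)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

def wt_percent_ag(counts):
    """Estimate silver content (wt%) from a measured peak intensity."""
    return slope * counts + intercept

estimate = wt_percent_ag(6000.0)   # intensity measured on an unknown coin
```

A real calibration must also correct for matrix effects and surface enrichment, which is precisely why the text cautions that an accurate quantitative measurement depends on the particular method and the particular sample.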
Identification of organic materials in archaeological contexts has historically been of minor significance compared to inorganic analysis, with the exception of the work on amber by Helm (1885), described later. Amber is, of course, highly visible in the archaeological record, but a major step forward in the analysis of organic materials came with the realization that traces of organic deposits (either visible, as in food residues on ceramics, or occluded within another matrix) could be identified and analysed using appropriate techniques. Quantitative organic analysis is discussed in more detail in Chapter 9, but an example of a situation in which the simple identification of an organically derived raw material used to manufacture artefacts is important is the discrimination between jet, shale and various forms of coal, widely used for decorative black objects in European prehistory. Traditionally, the classification of small objects made from various black materials was carried out by eye using a number of simple criteria such as colour and physical properties (Pollard et al. 1981). Geological samples have always been identified either by such methods or by using streak tests (and more rigorously by thin section petrology), but the small size of most archaeological finds and the nature of the destructive sampling required renders such methods difficult to apply. These identifications are, however, rather important because of the restricted number of geological sources of jet when compared to other similar materials such as shales or high-grade coal. In the British Bronze Age, for example, if an object from a Wessex burial context in southern England is identified as jet, then it is automatically taken as evidence of trading links with Whitby on the northeastern coast of England (approximately 400 km distant), since this is the nearest significant source of jet.
Other similar materials, such as shales and the various workable types of coal, are more widely distributed. Analytical work, initially by neutron activation analysis (NAA) and then using XRF, showed that inorganic composition could be used to partially discriminate between these sources. It also showed that many of the original attributions of objects recovered from Wessex culture graves in southern England were likely to be incorrect (Bussell et al. 1981). Subsequent work refined the procedures (Hunter et al. 1993), and more recently, organic mass spectrometry using pyrolysis-gas chromatography-mass spectrometry (Py-GC-MS) has made further progress in characterizing such material (Watts et al. 1999). Hindsight suggests that, given the organic nature of such materials, the use of organic techniques of analysis such as Fourier-transform infrared spectroscopy (FTIR; see Section 4.2) might have yielded an earlier and more convincing solution to the problem, but the approach that was taken reflects the historical trajectory of analytical work in archaeology, starting as it did largely from the study of inorganic materials.
Where Is It from? The Provenance Hypothesis
As has been noted, much of the early analytical work in archaeology examined ancient metal objects, initially with a view to understanding their composition and the technology needed to produce the artefacts. Very quickly, however, other more sophisticated archaeological questions emerged. As noted by Harbottle (1990), the Czech scholar Jan Erazim Wocel (1802–1871) suggested that differences in chemical composition could be used to distinguish between Celtic and Roman metal (Wocel 1854: 8) and even to provide relative dates for their manufacture and use. During the 1840s, Karl Christian Traugott Friedemann Göbel (1794–1851), a German chemist at the University of Dorpat in Estonia, began a study of large numbers of copper alloy artefacts from the Baltic region, comparing the compositions of those recovered from local excavations with known artefacts of prehistoric, Greek, and Roman origin. He concluded that the artefacts were probably Roman in origin (Göbel 1842). The French mineralogist Augustin-Alexis Damour (1808–1902) was one of the first to explicitly propose that the geographical source of archaeological artefacts could be determined scientifically: ‘mineralogy and chemistry must make known the characteristics and composition of the artefacts unearthed’ (translated from the French in Damour 1865: 313). He applied this to a study of prehistoric ‘Celtic’ stone axes, particularly of jade. By comparing French jade axes to geological samples from all over the world, he was able to shed some new light on the movements and migrations of the peoples of prehistoric times. He was, however, suitably cautious in his interpretation. When he discovered that the closest chemical match for a particular axe was with New Zealand jade, he concluded that it was necessary to analyse many more samples before accepting that there was indeed no source nearer than New Zealand.
The work of Otto Helm (1826–1902), a German apothecary from Gdansk, Poland, to provenance amber towards the end of the nineteenth century constitutes one of the earliest fully systematic applications of the natural sciences in archaeology. He had a specific archaeological problem in mind – that of determining the geographical source of over 2,000 amber beads excavated by Heinrich Schliemann (1822–1890) at Mycenae in Greece. In the English translation of the excavation monograph, Schliemann (1878: 203–204) noted that ‘It will, of course, for ever remain a secret to us whether this amber is derived from the coast of the Baltic or from Italy, where it is found in several places, but particularly on the east coast of Sicily.’ A full account of the investigations made and the success claimed by Helm, along with the eventual shortcomings, has been compiled by Curt Beck (1986), who in the 1960s published, with his co-workers, the results of some 500 analyses using infrared (IR) spectroscopy, demonstrating for the first time successful discrimination between Baltic and non-Baltic European fossil resins (Beck et al. 1964, 1965). As a result of this work (see Section 4.4), it is possible to state that the vast majority of prehistoric European amber does derive from the Baltic coastal region.
Interestingly, therefore, the idea that chemical composition might indicate raw material source appears in archaeology many years in advance of the same idea in geochemistry. The quantitative study of the partitioning behaviour of the elements between iron-rich and silicate-rich phases in the Earth’s crust was carried out in the first half of the twentieth century, giving a much better understanding of the chemical behaviour of the elements in geological systems and resulting in the geochemical classification of the elements as lithophile and siderophile. Much of this early work was summarized by Goldschmidt in his seminal work on geochemistry (Goldschmidt 1954). It was really not until this theoretical basis had been established that the concept of chemical provenance using trace elements acquired currency in geochemistry, almost a hundred years after the idea had emerged empirically in archaeology. A possible explanation for this is the fact that the idea of provenance (based on stylistic or other visual characteristics) had a long history in archaeology, going back to at least the eighteenth century (Trigger 2006). In the absence of any scientific means of dating artefacts in museum and private collections, a great deal of attention was paid to the observation of stylistic development within particular classes of artefacts and the search for ‘parallels’ in other collections, some of which might, hopefully, be associated with dateable material such as coins or inscriptions. These methods effectively gave a relative chronology for a particular set of objects, long before the advent of scientific dating techniques, and allowed proposals to be made about where certain objects might have originated, if they were deemed to be imports.
It is not surprising, therefore, that in the early chemical studies, but more particularly with the advent in the 1920s of instrumental methods of analysis, the composition of an object was added to the list of characteristics which might be used to indicate either the ‘provenance’ of the object or the position of an object in some evolutionary sequence of form or decoration. Thus were born the great ambitious programmes of analytical studies of ancient artefacts, perhaps typified by the SAM programme for the analysis of European Bronze Age metalwork during the 1950s, described earlier and in Section 3.6. Although lacking the underpinning geochemical theory provided by Goldschmidt and others at about the same time, it appears that in this respect archaeology can be shown to have developed a methodological framework subsequently used elsewhere, rather than simply borrowing existing techniques from other disciplines, as has sometimes been asserted.
With all of this work, scientific analysis progressed beyond the generation of analytical data on single specimens to, as stated by Harbottle (1982: 14), ‘establishing a group chemical property’. In this major review of chemical characterization studies in archaeology, Harbottle lists a wide range of materials which have been studied analytically, but reminds practitioners that
with a very few exceptions, you cannot unequivocally source anything. What you can do is characterize the object, or better, groups of similar objects found in a site or archaeological zone by mineralogical, thermoluminescent, density, hardness, chemical, and other tests, and also characterize the equivalent source materials, if they are available, and look for similarities to generate attributions. A careful job of chemical characterization, plus a little numerical taxonomy and some auxiliary archaeological and/or stylistic information, will often do something almost as useful: it will produce groupings of artefacts that make archaeological sense. This, rather than absolute proof of origin, will often necessarily be the goal.
Since the development of the concept in the nineteenth century and the review by Harbottle (1982), a great deal of thought has been given to the theory of provenance in archaeology, with contributions from Cherry and Knapp (1991), Tite (1991), Wilson and Pollard (2001), Pollard (2023, 2025) and Pollard and Liu (2024), amongst others. Wilson and Pollard (2001: 507–508) set out six criteria which need to be met for the ‘provenance hypothesis’ to be valid:
(i) The prime requirement is that some chemical (or isotopic) characteristic of the geological raw material(s) is carried through unchanged, or predictably related, to the finished object. This provides a ‘geochemical fingerprint’.
(ii) This fingerprint must vary between potential geological sources available in the past, and such variation must be related to specific geographical occurrences of the raw material, as opposed to providing information about a broad depositional environment. In other words, inter-source variation must be greater than intra-source variation for successful source discrimination.
(iii) Such characteristic fingerprints must be measurable with sufficient precision in the finished artefacts, in order to enable discrimination between competing potential sources.
(iv) No ‘mixing’ of raw materials should occur (either before or during processing or as a result of the recycling of material), or any such mixing can be adequately accounted for.
(v) Any post-depositional chemical processes must either have no effect on the characteristic fingerprint, or any such alteration must be detectable so that some satisfactory allowance can be made.
(vi) Any observed patterns of movement of raw materials or finished goods must be relatable to human behaviour, such as trade or exchange.
Subsequently, Pollard (2023) discussed the potential effects of recycling of materials on provenance studies, and, following a consideration of the possible time lags in some circulation systems, added a seventh criterion:
(vii) The chronological dimension of the inferred movement of material must be evaluated, to allow a more realistic set of socially meaningful archaeological conclusions to be derived.
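Criterion (ii) lends itself to a simple numerical illustration: a fingerprint can discriminate between sources only if the separation between source means exceeds the spread within each source. The following sketch uses invented trace-element data and a deliberately crude test; real studies would use multivariate statistics on many elements.

```python
# Minimal sketch of criterion (ii): inter-source variation must exceed
# intra-source variation for a geochemical fingerprint to discriminate
# sources. All data below are invented for illustration only.
from statistics import mean, stdev

# Hypothetical single-element concentrations (ppm) measured on samples
# from two candidate geological sources.
source_a = [12.1, 11.8, 12.4, 12.0]
source_b = [18.9, 19.3, 18.5, 19.1]

def discriminates(a, b):
    """True if the separation of the group means exceeds the pooled
    within-group spread (a crude one-element 'fingerprint' test)."""
    inter = abs(mean(a) - mean(b))        # inter-source separation
    intra = (stdev(a) + stdev(b)) / 2     # average intra-source spread
    return inter > intra

print(discriminates(source_a, source_b))  # → True: the sources separate
```

In practice the same logic is applied simultaneously across many elements or isotope ratios, typically via discriminant analysis or related multivariate methods.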
These are stringent requirements and are often not fully met in practice. In particular, the requirement of a stable geochemical fingerprint transferred from source to objects may not be met in the case of synthetic materials, especially those that have been subjected to high temperatures. In the case of ceramics, for example, it is rarely possible to match the finished product with a source clay bed, for several reasons:
(i) clays are often extremely inhomogeneous, and the ingenuity of the potter is in blending clays (and non-plastic inclusions) to give the correct physical properties for the desired vessel;
(ii) raw clays are almost always processed and refined to remove coarse particles before use, which will alter the chemical composition in a manner only broadly predictable;
(iii) firing affects the mineralogical and chemical composition of clays, again in ways only partially predictable from the thermal properties of clay minerals and the volatility of the constituents.
Because of this, it has become commonplace to compare fired ceramic material with fired ceramic material (rather than the raw clay itself) assumed to be representative of a particular production centre. Material of ‘assumed provenance’ can be used, but for preference, ‘kiln wasters’ are often employed as comparative material. These are vessels which have failed in the firing for some reason and have been dumped close to the kiln (it is assumed that nobody would transport such useless material over any distance). Although ideal in terms of contextual security, wasters are, by definition, products which have failed in the kiln for some reason and therefore may be chemically atypical of the kiln’s production if failure is related to faulty preparation. This introduces a further complexity into the chain of archaeological inference. It has recently been argued that, particularly for ceramics, the practice of provenance determination is as much a study of the skills of the clay producers and potters in controlling the procurement and treatment of the raw materials as it is of the geological source of the clay (Pollard and Liu 2024).
The influence of high-temperature processing (and particularly of reduction processes in metalworking) on the trace element composition of the finished product has long been the source of debate and experimentation. It appears to be an obvious conclusion that the trace element composition of a piece of smelted metal depends on a number of factors, only one of which is the trace element composition of the ore(s) used. Other factors will include the degree of beneficiation and mineralogical purity of the ore(s) and the reduction and refining technology employed (temperature, redox, heating cycle). Thus, changes observed in the composition of finished metal objects may be the result of changes in ore source, as desired in provenance studies, but may also represent changes in processing technology, or at least be influenced by such changes. Further complications arise in the provenance of metals when we consider the possibility of recycling scrap metal (Pollard 2025). Many authors have recognized this as a theoretical complication but have struggled to find practical ways of dealing with it, although this is beginning to change with suitable numerical modelling (Pollard and Liu 2023).
Given all of these potential complications in the inference of source from analytical data derived from manufactured materials, a fruitful line of thinking has developed, based not on the desire to produce some absolute statement about the source of some particular manufactured product, but on the observation that in the archaeological context it is change which is important. Providing we have a suitably precise chronology, the analytical data from a related set of objects can unequivocally indicate when a particular characteristic in a trace element concentration, or an isotopic ratio, changes. Rather than simply assume that this is due to a change in the exploitation of the source material, it may be more realistic in complex societies to infer that there has been some change in the pattern of production and/or circulation – perhaps it is the result of change in raw material source, but it could also be a change in the pattern of mixing or smelting of raw materials from different sources, or a change in the recycling strategy. Although this may be less satisfactory than a result which says ‘this material came from this mine’, such an observation is archaeologically no less valuable in that it tells us that something has changed in the production process; indeed, given that it probably reflects the reality of the complexity of the ancient production and trading patterns, it may actually be a more important conclusion. One of the more distressing aspects of the conviction that it should be possible to source any archaeological material has been the accompanying demand for constantly improving analytical sensitivity, combined with ever more sophisticated data manipulation, on the assumption that these will automatically lead to improved archaeological interpretability.
Self-evidently, better analyses and appropriate data handling are of themselves highly desirable, but it is not necessarily the case that they will automatically give rise to improved understanding if the underlying systems are inherently complex. Despite the sophistication of the analytical techniques, the fundamental limitations of the studied system must be remembered. In order to be successful, a project requires carefully chosen samples to answer a well-constructed archaeological question, which in turn must be securely based on an appropriate archaeological model of the situation.
How and When Was It Made? Manufacturing Technology, Date and Authenticity
Another sub-set of questions which can be meaningfully addressed via chemical analysis (usually combined with visual and/or microscopic analysis) relates to the determination of the technology used to produce an object. Often manufacturing technology can be adequately determined by careful visual and microscopic examination of the object or the debris from the production site, although experience has shown that laboratory or field simulations (experimental archaeology) are essential to a full understanding of ancient technologies and can reveal some unexpected results (Coles 1979; Mathieu 2002). Occasionally, chemical analyses can be applied to help elucidate the production process, either of the object itself or sometimes of the waste material from the process, such as the vast quantities of vitreous slag produced during iron manufacture. In this case a knowledge of the purity of the iron produced, the composition of the waste slag and the composition of any residual slag included in the metal can be combined to acquire both an understanding of the general nature of the technology involved (e.g., differentiating between bloomery or blast furnace) and a detailed picture of the operating conditions of the process (Thomas and Young 1999).
Given the increasing interest in our recent industrial heritage (industrial archaeology) and the resulting pressures to expand the legal protection and public explanation of its monuments, it is becoming more important to improve our understanding of the manufacturing processes employed (some of which, despite being from our very recent past, are now all but forgotten). Experience has shown that even contemporary literary and patent evidence cannot always be taken as reliable. Studies of the post-medieval European brass industry, for example, indicate that the date of patenting a particular process in England appears to bear little relationship to the introduction of the process as determined from a study of the metal (Pollard et al. 2017: 258). The history of the production of brass (the alloy of copper with zinc) is particularly instructive, both in terms of using analysis to understand ancient technologies and using composition to date objects.
Because of the volatility of metallic zinc, it is understood that prior to the importation (and subsequent manufacture) of zinc into Europe in the seventeenth century ce, brass was made by an indirect process, involving the heating together of metallic copper with a zinc ore, either carbonate or oxide. It used to be argued that thermodynamic constraints limited the uptake of zinc into the copper to about 30% in indirect processes, but experimental work has shown that this is not the case (Bourgarit and Bauchau 2010; Bourgarit and Thomas 2011). However, a very large number of analyses have shown that in practice Roman and early medieval brasses in Europe are limited to a maximum of 20–30% zinc. European brass with more than 30% zinc is not seen before about 1650 ce and is taken to be a product of the direct process, giving an approximate date for the introduction of the use of metallic zinc. The British patent for a direct process was not taken out until 1738, by which time it would appear that the technology had been available for several decades.
The use of a ‘cut-off’ value of c. 30% zinc to distinguish between direct and indirect manufacturing processes for brass is obviously somewhat limited: it is clearly possible to make brass containing less than 30% zinc by the direct process, and as discussed earlier, 30% is not necessarily a reliable upper limit for indirect brass. During the 1990s there was some interest in the possibility that certain high-temperature anthropogenic metal-producing processes might introduce measurable isotopic fractionation into some metals in the product (Budd et al. 1995a). Early theoretical interest concentrated on lead, since any temperature-induced fractionation could largely invalidate lead isotope provenance studies. Fractionation effects were subsequently measured but deemed not to be significant in provenance studies (Cui and Wu 2010). Given the volatility of zinc, it might be expected that zinc could be particularly susceptible to anthropogenically induced fractionation and that the direct and indirect processes might show different patterns. Theoretical studies and experimental observations on zinc did indeed demonstrate that anthropogenic processes in brass manufacture could introduce measurable isotopic fractionation (Budd et al. 1999), but this has yet to be followed up.
The earlier example shows how the changing composition of brass over time can be related to the determination of manufacturing technology and can also give a rough indication of the date of manufacture – at least, it can give an indication of a date before which a particular object could not have been manufactured, providing our understanding of the relevant technology is accurate and complete. This leads directly to the possibility of chemical ‘authentication’ of some ancient objects. If we accept that any European brass object containing more than 30% zinc must be dated to sometime after the introduction of the direct process into Europe (c. 1650 ce), then we can assert that the ‘Drake Plate’, dated to 17 June 1579 and said to have been left by Sir Francis Drake to claim the San Francisco Bay area in the name of Queen Elizabeth I of England, cannot be authentic. Analysis of the plate (Hedges 1979; Michel and Asaro 1979) by X-ray fluorescence showed it to have a very high zinc content (around 35%) with very few impurities above 0.05%. This was quite unlike any other brass analysed from the Elizabethan period, which typically had around 20% zinc and between 0.5 and 1% each of tin and lead. It was therefore adjudged unlikely to be of Elizabethan manufacture (a view supported by the fact that it had a thickness consistent with the No. 8 American Wire Gage standard used in the 1930s, when the plate first appeared).
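The reasoning behind the zinc ‘cut-off’ can be reduced to a simple screening rule, sketched below. The c. 30% threshold and the c. 1650 ce date are the approximations given above; the rule yields only a terminus post quem for European brass, never a proof of authenticity.

```python
# Screening rule based on the c. 30% zinc cut-off discussed in the text:
# European brass with >30% Zn is not seen before c. 1650 CE (direct
# process), so high zinc implies a terminus post quem. The threshold is
# a simplification; low-zinc brass is chronologically indeterminate.
ZINC_CUTOFF = 30.0  # weight % Zn

def earliest_plausible_date(zinc_pct):
    """Earliest plausible manufacture date for a European brass object
    implied by zinc content alone (crude heuristic, not a proof)."""
    if zinc_pct > ZINC_CUTOFF:
        return "after c. 1650 CE (direct process)"
    return "indeterminate (indirect or direct process possible)"

# The 'Drake Plate' analysed at ~35% Zn: incompatible with a 1579 date.
print(earliest_plausible_date(35.0))
print(earliest_plausible_date(20.0))  # typical Elizabethan brass
```

Note that the rule is deliberately asymmetric: high zinc is informative, but low zinc tells us almost nothing, exactly as the text observes.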
A wide range of archaeological materials have been subjected to scientific authenticity studies (Fleming 1975; Pernicka et al. 2008; Craddock 2009). Where possible, this takes the form of a direct determination of the date of the object, such as by radiocarbon dating for organic materials (the most famous example of which is undoubtedly the Shroud of Turin, dated to the fourteenth century ce; Damon et al. 1989) or thermoluminescence analysis for ceramics and the casting cores of metal objects. For metal objects without ceramic casting cores, it has generally taken the form of chemical analysis and comparison with reliably dated objects from the same period, or, as in the case of the Drake Plate, with technological improbability. Coins have been particularly useful in such studies, firstly since their variations in fineness (the purity of the precious metal content) can give a reasonably reliable calibration curve with which to date or authenticate other coins, and secondly because the fineness of the precious metals in circulation can give a great deal of information about the economic conditions prevalent at the time (e.g., Metcalf and Schweizer 1971). Authenticity has been a particular concern for all the major museums in the world, and many have facilities for carrying out a number of tests similar to those described here in advance of making any acquisition.
Considerably more controversial, however, is the situation with respect to the commercial trade in antiquities, where access to scientific laboratories willing to carry out authentication on objects of undefined provenance has been partially blamed for encouraging the uncontrolled looting of some of the richest archaeological sites in the world (Chippindale 1991). This view has been contested by some (one might ask why a ‘scientific’ determination of authenticity promotes an illicit trade, whereas an art historical ‘opinion’ is to be encouraged), but it is undoubtedly the case that looting continues to be a major issue, particularly in areas of conflict. The 1970 UNESCO Convention on the Means of Prohibiting and Preventing the Illicit Import, Export and Transfer of Ownership of Cultural Property provided an international agreement designed to protect cultural objects by controlling their trade and to offer a means by which governments can cooperate to recover stolen cultural objects. Along with the 1992 Valletta Convention (Convention for the Protection of the Archaeological Heritage of Europe), these agreements make it very unlikely that any reputable scientific laboratory will carry out commercial authenticity testing for the illicit art market.
Chemical Analysis of Human Remains
Perhaps the greatest growth in chemical studies applied to archaeology over the last 40 years has been in the study of human remains (e.g., Price 1989a; Sandford 1993a; Pate 1994; Cox and Mays 2000; Ambrose and Katzenberg 2002; Lee-Thorp 2008; Lambert and Grupe 2013; Katzenberg and Grauer 2019; O’Connell 2023). One origin of this interest can probably be found in the scientific work carried out to reveal the Piltdown fraud, in which a composite skull constructed from a medieval human cranium, an orangutan jawbone and some chimpanzee teeth was presented to the Geological Society of London in 1912, purported to be the ‘missing link’ between humans and apes. As mentioned earlier, measurement of low fluorine levels, combined with evidence for artificial staining of the jaw, revealed the fraud (Oakley 1954–1955). More likely, perhaps, increased analysis of human remains was simply part of the general rise in studies of bones in archaeology and palaeontology. Starting in the late 1970s, researchers began to analyse the trace elements in human bone mineral, initially strontium (Sr or Sr/Ca ratio), on the assumption that mammalian kidneys discriminate against strontium in the bloodstream and therefore that carnivores should have lower levels of strontium than omnivores or vegetarians. Other trace elements were added to the consideration (barium and zinc for ‘dietary reconstruction’, and lead for detecting environmental pollution), but it rapidly became clear that buried bone mineral was susceptible to post-mortem diagenetic effects (uptake or loss of trace elements, depending on groundwater conditions), and so the measured levels in archaeological bone were unlikely to represent those in vivo (Hancock et al. 1989; Price 1989b; Radosevich 1993; Sandford 1993b; Burton et al. 1999). One effect of this realization was the switch of focus to dental enamel, the hardest and most diagenetically resistant tissue in the body, and the widespread adoption of the measurement of stable carbon (13C/12C) and nitrogen (15N/14N) isotopes in the extracted and purified collagenous fraction as indicators of diet. It is important to note that, unlike trace elements, collagen is not widely present in the burial environment and also that there are accepted criteria for ‘good quality’ collagen (atomic C:N ratios, where for collagen this ratio should be 2.9 ≤ C:N ≤ 3.6; DeNiro 1985), although improved quality control has been recommended more recently (Guiry and Szpak 2021).
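The conventional collagen quality screen is easy to state in code: a sample is accepted only if its atomic C:N ratio falls within 2.9–3.6 (DeNiro 1985). The sample names and values below are invented for illustration.

```python
# Sketch of the collagen quality screen discussed in the text:
# archaeological collagen is conventionally accepted if its atomic C:N
# ratio lies in the range 2.9-3.6 (DeNiro 1985). Values are invented.
C_N_MIN, C_N_MAX = 2.9, 3.6

def collagen_acceptable(c_n_atomic):
    """True if an atomic C:N ratio passes the conventional screen."""
    return C_N_MIN <= c_n_atomic <= C_N_MAX

samples = {"burial 1": 3.2, "burial 2": 4.1, "burial 3": 2.95}
for name, ratio in samples.items():
    verdict = "accept" if collagen_acceptable(ratio) else "reject (degraded or contaminated)"
    print(f"{name}: C:N = {ratio} -> {verdict}")
```

A ratio outside the range indicates degraded or contaminated collagen, so the isotopic values measured on it cannot be trusted to reflect diet in vivo.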
There are many reviews of dietary reconstruction using isotopic measurements on bone collagen (DeNiro 1987; Schwarcz and Schoeninger 1991; van der Merwe 1992; Ambrose 1993; Schoeninger 2014; O’Connell 2023), bone lipid (Stott et al. 1999; Cramp et al. 2023) and bone and dental carbonate (Ambrose and Norr 1993; Forshaw 2014). Most authors have concluded that if some collagen survives in a molecularly recognizable form, then the isotopic signal measured on this surviving collagen is unchanged from that which would have been measured in vivo. The length of post-mortem time that collagen may be expected to survive in bone is difficult to predict but is affected by factors such as temperature, extremes of pH, the presence of organic acids and the presence of any damage to the collagen structure itself. According to Collins et al. (2002), however, the thermal history of the sample (the integrated time–temperature history) is the key factor influencing survival. It is to be expected, therefore, that in hotter temperature regimes the likelihood of collagen survival for more than a few tens of thousands of years is low. This is why researchers interested in the evolution of hominin diets have resorted to isotopic measurements on carbon in dental enamel carbonates, which do appear to survive unaltered for longer (Sponheimer et al. 2005).
The scientific study of human remains has expanded considerably in the last few years (Roberts 2023). Collagen is only one of the proteins (albeit the most abundant) that can survive in archaeological bone – other biomolecules of interest include cholesterol, which might outlast collagen under some circumstances and thus provide an alternative dietary indicator. Lipids are also abundant in bone. The isotopes of carbon and oxygen (18O/16O) can be measured in the carbonate mineral phase in bone (particularly dental enamel), along with strontium levels, to provide deep-time dietary and environmental data if it can be demonstrated that the mineral is unaltered. Proteins are composed of long chains of individual amino acids, the majority of which are chiral; in living tissue these occur almost exclusively in the left-handed (L) form, which after death slowly converts to the right-handed (D) form, so the D:L ratio depends on time, temperature and environmental conditions. This process is termed amino acid racemization and offers a dating technique over long time periods (Penkman 2023). Initially adopted enthusiastically, then largely rejected, it has been rehabilitated as a relative dating tool throughout the Quaternary, providing both time and temperature variations are considered (Penkman et al. 2013). Other loci for the survival of materials of dietary and environmental interest include dental calculus, which can preserve fragments of food, starch granules and pollen, as well as a wide range of biomolecules and DNA (Fagernäs and Warinner 2023).
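For an amino acid with a single chiral centre, racemization can be approximated as a reversible first-order reaction L ⇌ D with equal forward and reverse rate constants k, in which case the D:L ratio grows as tanh(kt) from zero towards the equilibrium value of 1. The rate constant below is an arbitrary illustrative value; real rates vary between amino acids and, strongly, with temperature.

```python
import math

def d_to_l_ratio(k_per_year, t_years):
    """D:L ratio after t years of reversible first-order racemization
    L <-> D (equal rate constants, starting from pure L): tanh(k t)."""
    return math.tanh(k_per_year * t_years)

# Illustrative rate constant of 1e-5 per year:
for t in (1_000, 10_000, 100_000, 1_000_000):
    print(f"{t:>9} years: D/L = {d_to_l_ratio(1e-5, t):.3f}")
```

The asymptotic approach to equilibrium (D/L → 1) is one reason the technique saturates, and hence why it is most useful as a relative dating tool over long, Quaternary-scale time ranges.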
Organic Residue Analysis (ORA) in Archaeology
It has been shown earlier that the analysis of macroscopic organic materials – especially amber – played a significant role in the development of archaeological chemistry in the nineteenth century. During the ‘golden age’, however, archaeological chemists generally paid more attention to the analysis of inorganic artefacts – both natural stone and synthetic materials (ceramics, metals, glass and glazes). This is partly because these are the most obviously durable components in the archaeological record, but it also reflects the rapid rate of development of instrumental methods for inorganic chemical analysis. It used to be thought that the survival of organic remains was only to be expected in a limited number of unusual preservational environments (such as those presenting extreme aridity, cold, or waterlogging) or as a result of deliberate action such as mummification. With more sensitive analytical techniques, however, the preservation of a wide range of biomolecules has now been demonstrated in a much broader and far less exceptional range of archaeological contexts. Consequently, attention has returned to organic materials, including natural products (such as waxes and resins), accidental survivals (such as food residues), and, above all, human remains, including bone, protein, lipids and DNA (Steele 2013). The methodology and instrumentation for this work have been imported not only from chemistry, biochemistry and molecular biology, but also from organic geochemistry, which has grown from a discipline interested in the chemical origins of oil and coal into one which studies the short-term alteration and long-term survival of a very wide range of biomolecules (Engel and Macko 1993; Killops and Killops 2013; Schwarzbauer and Jovančićević 2020).
Although the archaeological record contains a vast array of organic materials, both visible and invisible (animal and plant remains, pollen, phytoliths, wood, leather, shell, etc.), there also exists a class of amorphous organic residues which lack diagnostic structures and are best examined chemically. These can include food deposits surviving (either visibly on the surface or invisibly absorbed into the fabric) in pottery containers used for cooking, storing or serving solids and liquids; gums and resins used for hafting, waterproofing, sealing or glueing; fuel for lighting; manure for soil improvement; materials with ritual and symbolic uses; the balms in the wrappings used for the burial of mummified bodies; and traces of dyes on ancient textiles (Evershed 2008). Most recently, it has been shown that organic residues can be extracted from the corrosion layers of metal artefacts, opening up a whole new field of organic research (Carvalho et al. 2022a, 2022b; Wilkin et al. 2024).
The sorts of questions asked of organic remains are very similar to those asked of inorganic materials: What are they? How were they made? Where do they come from? How old are they? They are, however, particularly interesting from the perspective of asking the question ‘what was it used for?’ This is especially relevant in the case of organic residues on ceramics and metal vessels, where it is frequently the residue that can directly inform on the use of a vessel – often more successfully than the traditional indirect approaches based on inferences made from form or ethnographic parallels. Further details on the analysis of organic materials in archaeological contexts are given in Section 9.7.
1.3 Questions of Degradative Processes
Analytical chemistry has also been used to address questions which do not relate directly to archaeological interpretation but which nevertheless have importance for understanding the processes acting upon the archaeological record and the materials within it. Moreover, archaeology is a key component of the tourist industry in many countries. Consequently, there is a growing need to manage the preservation and presentation of the archaeological resource in the face of increasing pressure from development and processes such as coastal erosion and climate change. This has given rise to a whole new field of research called heritage science. Up until this century, most national bodies with responsibility for protecting archaeological heritage have operated according to a policy of preservation by record when archaeological remains are threatened. In effect, this meant that the archaeological site was completely excavated and recorded before destruction, resulting in many very large-scale excavations during the 1970s and 1980s (such as the one carried out at the important Viking site of Coppergate in York, England). As well as resulting in the destruction of physical remains, it is an expensive and slow process to fully excavate a large site, producing many tons of material requiring study and storage. Consequently, a new policy has been considered in many countries, focusing on the concept of preservation in situ. This requires that any development on archaeologically sensitive sites must ensure that damage to the known archaeology is minimized by designing the whole development to be as non-intrusive as possible. This includes taking steps such as locating piles and other load-bearing structures away from archaeological features and designing sub-surface structures around existing archaeology. The basic assumption is that by minimizing the direct damage, the majority of the archaeology is preserved for future generations to study. 
A related concept is preservation by reburial, in which previously excavated archaeological structures are reburied rather than preserved above ground, which often requires constant and costly maintenance interventions. This strategy has been used to protect some of the more vulnerable buildings of the Puebloan culture (850–1250 CE) in Chaco Canyon, New Mexico (Ford et al. 2004). Here the assumption is that reburial will recreate the original burial environment and therefore continue the preservation conditions which prevailed before excavation.
The problem with both of these approaches is that the fundamental science necessary to quantify the interaction between archaeological deposits and the burial environment is not always well understood and is certainly insufficient to predict how these deposits might change in response to external forcing. A wealth of relevant practical experience has been built up, but often the scientific underpinning for the policy is empirical. Quantitative modelling is necessary to produce an assessment of risk, and in particular to evaluate the damaging effects of changes in soil/groundwater conditions and soil chemistry following a disturbance (such as excavation, reburial, major construction, or climate change). This approach has been best exemplified at Bryggen, Bergen, Norway, where the preservation of the submerged wooden waterfront buildings has been monitored and modelled by considering changes to the groundwater flow and chemistry (de Beer et al. 2012). Such models can lead to better conservation strategies for sub-surface artefacts and management plans for standing monuments, providing there is explicit knowledge of how and at what rate the degradative processes of specific materials respond to changing environmental conditions (Huisman and van Os 2016).
Degradative Processes (Diagenesis)
Most material that enters the archaeological record degrades until it ceases to be a macroscopically recognizable entity. If this were not so, then the world would be littered with the bones and other physical remains of both our ancestors and all the creatures that ever have lived! Molecular evidence may remain, but for all intents and purposes the objects have disappeared. Exceptions to this general rule constitute the material evidence upon which archaeological inference is based – but these are the exception rather than the norm. Some materials, such as stone, almost always survive degradative processes (although they may succumb to other physical processes such as translocation or frost shatter). Others – such as skin, hair and organic fabrics – only survive in exceptional circumstances (e.g., extremes of cold or dryness). Many materials (metals, glass, and some of the more resistant organic materials, e.g., amber) will undergo some degradation but are likely to survive for a considerable time in a recognizable and recoverable form. Biological hard tissue (e.g., bone, teeth, horn, shell) undergoes particularly complex patterns of degradation because of its composite nature (having both organic and mineral phases), but in general (apart from particularly resistant tissue such as enamel) should not be expected to survive for more than a few thousand or tens of thousands of years (Collins et al. 2002).
Chemical and biological degradation processes are part of the wider phenomenon termed taphonomy, originally defined as the process of transition of a biological organism from the biosphere to the lithosphere (Efremov 1940). It includes all natural and anthropogenic processes which create death assemblages before deposition, as well as those chemical, physical and biological processes which act on the assemblage after deposition (these are often termed diagenetic processes). Although defined in the context of organic materials, it is also possible, archaeologically speaking, to conceive of the post-depositional ‘taphonomy’ of non-biotic material (e.g., metal and ceramic artefacts), since they too experience change as a result of environmental interaction (Wilson and Pollard 2002). Analytical chemistry has a fundamental role to play in helping to understand some of the major aspects of taphonomic change. Some processes are likely to be primarily chemical in nature, such as the electrochemical corrosion of a metal object in an aqueous environment (McNeil and Selwyn 2001; Selwyn 2004; Carvalho 2023), although even here microbiological mediation is likely to be important (Little and Lee 2014). Some processes are physical, such as mineralogical changes taking place in ceramics as a result of interaction with groundwater (Freestone 2001; Maritan 2020). Others, such as the degradation of organic materials, may be largely biological (Cronyn 2001), although chemical hydrolysis may also have an important role. The degradation of composite biomaterials like bone clearly involves both biological and chemical processes (Turner-Walker 2023).
Whatever the driving force, however, analytical chemistry is essential as a means of measuring, monitoring, modelling and verifying these processes.
It is useful to think of diagenesis in thermodynamic terms. An object, once it reaches its ‘final depositional environment’, seeks to achieve equilibrium with its chemical environment, with the net rate of change slowing down as equilibrium is approached. This gives rise to the concept of an object being ‘stable’ in its burial environment (providing, of course, the equilibrium position is one of survival rather than complete loss). Strictly speaking, it is only metastable, since any alteration to that context through environmental (e.g., climate change) or anthropogenic (e.g., excavation) agency will cause the object to move towards a new position of equilibrium, resulting in further change. The cautious use of the term ‘final depositional environment’ is deliberate, since although the physical location of a buried object might be fixed over archaeological time, it is unlikely that the local physical, chemical or biological conditions will be constant over a longer timescale (particularly if this includes periods of major climatic fluctuations, e.g., Ice Ages). Thus, an object might be expected to experience a sequence of metastable conditions throughout its post-depositional and post-excavational existence. We can visualize this history as a series of diagenetic trajectories or pathways. In a stable burial environment, the diagenetic pathway is in principle pre-determined by the nature of the object and of the burial environment, and the interaction between them. This trajectory might lead to perfect preservation or complete destruction, but more often to some intermediary state. If the burial conditions change, then the object will set off on a new trajectory, but it will always tend towards a more altered state (in other words, as entropy dictates, it cannot spontaneously recover its original state). Naturally, the complexity of the real burial environment makes these simplistic views rather difficult to interpret in practice. 
In particular, the concept of non-commutativity is important – the order in which things happen can have an influence on the final outcome (e.g., the sequence of insect or microbial colonization on a carcass can drastically affect the rate of decay). Overall, the situation is similar to the familiar conflict in chemistry between thermodynamics (generally well understood) determining which reactions are possible and kinetics (generally less well understood) determining which of these reactions will actually happen.
Material–Environment Interactions
The objective of understanding degradative (diagenetic) processes is to improve our knowledge of the factors controlling the preservation of archaeological evidence in the burial environment. Once an object is buried, the potential for survival is governed by the interaction (chemical, physical and biological) of the material with its depositional environment. It is, however, likely that the history of the object before ‘burial’ will also have a significant influence on the trajectory of the post-depositional processes. In the case of biological material, this pre-depositional history might be the dominant factor in dictating the long-term fate of the object. For example, the survival of animal bone might well be dictated largely by the length of surface exposure of the carcass before burial. Many researchers believe that the long-term fate of biological material is in fact determined by what happens in the first few days and weeks after death – in other words, the long-term diagenetic trajectory (leading to total destruction or preservation) is set in this initial period. There is therefore a consequential relationship between what happens over the forensic time scale (days to months), through the lifetime of archaeological deposits (decades to thousands of years), ultimately to the preservation of geological material (millennia and beyond).
There is, however, little systematic understanding of the factors which control preservation for the wide range of materials encountered archaeologically and very little in the way of predictive models. Soil pH (crudely speaking, acidity; see Section 13.1) and Eh (oxidation state, or, equally crudely, oxygen availability) are often referred to as the ‘master variables’ in the consideration of soil chemistry (Pollard 1998a) and are thought to be the main controlling parameters. However, their measurement in the field is not always easy or even possible because of fluctuating conditions, particularly above the water table. Moreover, the chemical composition of the soil water is determined by a complex interaction between the mineralogical, organic and atmospheric composition of the soil, further complicated by speciation, redox and solubility factors within the soil solution (Lindsay 1979). Again, direct measurement in the field is often difficult, since the very act of collecting and measuring the water might alter the complex equilibria within it. Nevertheless, knowledge of such factors is vital for understanding the chemical environment of buried archaeological objects. In response to these practical difficulties, a whole family of groundwater geochemical modelling programmes has been developed over the last fifty years (Jenne 1979; Bundschuh and Zilberbrand 2011). These allow speciation to be calculated for given total ion concentrations under specified conditions, and the behaviour of particular mineral species in contact with waters of specified chemistry to be modelled, enabling the stabilities of such systems and their responses to environmental change to be predicted.
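The kind of calculation such speciation programmes perform can be illustrated with the carbonate system, whose distribution between dissolved CO2, bicarbonate and carbonate follows directly from the two acid dissociation constants as a function of pH. The sketch below is a deliberately minimal illustration using standard 25 °C freshwater pKa values; a real geochemical code would handle many coupled equilibria, ionic strength and temperature corrections simultaneously.

```python
def carbonate_fractions(pH, pKa1=6.35, pKa2=10.33):
    """Equilibrium fractions of CO2(aq)/H2CO3*, HCO3- and CO3^2-
    in solution at a given pH (25 C dissociation constants)."""
    h = 10.0 ** -pH
    ka1, ka2 = 10.0 ** -pKa1, 10.0 ** -pKa2
    denom = h * h + ka1 * h + ka1 * ka2
    return h * h / denom, ka1 * h / denom, ka1 * ka2 / denom

for pH in (5.0, 7.0, 8.3, 10.0):
    co2, hco3, co3 = carbonate_fractions(pH)
    print(f"pH {pH:>4}: CO2* {co2:.3f}  HCO3- {hco3:.3f}  CO3-- {co3:.3f}")
```

Whether, for example, buried shell or bone mineral tends to dissolve is sensitive to exactly this sort of speciation, which is one reason pH is treated as a ‘master variable’.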
Although it seems clear that this approach has a great deal to offer, geochemical modelling has, to date, rarely been used in archaeological research. There are probably several reasons for this, but an obvious one is the difficulty of setting up conceptual models appropriate for studying archaeological processes, since this is not the purpose for which the programmes were developed. A related problem is the lack of published thermodynamic data for some of the reactions needed. The potential use of geochemical models in the study of bone diagenesis was suggested some time ago (Pollard 1995), but so far only preliminary studies of the inorganic phase have been carried out (Wilson 2004). Hydrological modelling of the bone–water system (designed to model the movement of water) has received more attention (Hedges and Millard 1995), and applications of these models to the uptake of uranium into bone from groundwater have met with some success (Millard and Hedges 1996), enabling more precise dates to be produced from uranium-series dating of bone (Pike et al. 2002). Another example, involving the application of soil carbon turnover models to buried archaeological material, has been suggested by Pollard (2012).
The investigation of archaeological copper (Thomas 1990) and lead (Edwards 1996) corrosion has been carried out using very simple thermodynamic modelling packages. Modelling packages have advanced significantly since these early applications, with current models capable of handling many geochemical processes simultaneously, and microbially mediated processes can now also be tentatively investigated (e.g., Bethke 2022). This software has been used to simulate dynamic laboratory experiments and field observations relating to the influence of agrochemicals on the rate of corrosion of buried metal (Wilson et al. 2006). Practical experience now suggests that a fruitful way of studying complex material–environment interaction systems (such as those encountered in archaeology) is to combine long-term field experiments with laboratory microcosm studies (which can be better controlled than field studies) and then to use geochemical modelling to interpret the resulting data. It would appear that a more holistic understanding of the geochemical aspects of diagenesis is achievable using such an approach.
Conservation Science
Conservation in an archaeological context means the investigation, stabilization and, in some cases, reconstruction of the entire spectrum of archaeological materials and structures. As a profession, however, conservation includes any and all things which might be put into a museum, such as ethnographic artefacts and objects of industrial and military interest, as well as works of art. The term ‘conservation science’ has emerged to denote the sub-discipline of conservation which includes the characterization of the constituent materials and production techniques of archaeological objects, the study and understanding of decay processes, and the study and evaluation of conservation products and techniques (Garside and Richardson 2021; Watkinson 2023). It also includes issues surrounding the environmental monitoring of display conditions, the impact of visitor numbers, and the like (Cronyn 1990). Chemistry is generally at the heart of the conservation process since the first step in conservation is to stabilize the object by preventing any further degradation. This requires an understanding of the composition of the object itself and the mechanisms by which such objects degrade, which usually calls for chemical and micro-structural analysis and the identification of corrosion products.
The rise in conservation science has created a growing demand for analytical capacity. In the museum context, non-destructive (or quasi-non-destructive) techniques such as XRF (Chapter 5) are often preferred for the analysis of inorganic objects, although microanalysis by LA-ICP-MS (Chapter 7) is growing in importance since the resulting ablation craters are virtually invisible to the naked eye. Raman and infrared spectroscopy (Chapter 4) are now in great demand for structural information on pigments and the identification of corrosion products to complement X-ray diffraction (Section 5.4).
1.4 Summary
Almost since the origins of systematic archaeology, analytical chemistry has played a major role in the interpretation of the archaeological record. Initially this involved the simple identification of the materials of antiquity, but it quickly expanded to include the elucidation of technological processes in the past (how things were made), and the identification of the source of raw materials used to create artefacts, subsequently known as provenance studies (where things were made). With a few early exceptions, archaeological chemistry has largely relied on instrumental developments in mainstream analytical chemistry to provide the means of carrying out these analyses. Up until the early twentieth century, this mainly involved the use of gravimetric analysis (wet chemistry), but this was gradually replaced by instrumental methods, starting with optical emission spectrometry. Given the analytical tools available, the majority of work focussed on the inorganic materials of antiquity, with the major exception of amber. Developments in instrumental capabilities subsequently extended the range and detection levels of elements measurable and also opened up the field of organic analysis of visible and invisible organic materials in the archaeological record, including the study of human remains.
Given the importance of analytical chemistry in archaeology, it is incumbent on the archaeologist who wishes to use such data to understand the potential and the limitations of analytical chemistry, including the necessity for quality assurance. It is also important that the chemist working on archaeological material understands the constraints and methodologies of archaeology, including the appropriate theoretical framework. Better still, and increasingly so in recent years, the two specialisms should be united within the same individual.