In June 1927 the New Health Society (NHS) issued a “Wholemeal Manifesto” that claimed wholemeal flour was “superior in nutritional value and vitamin content” to its white counterpart, which was “not a complete food for man.” The manifesto maintained that the daily requirement of vitamin B could only be “ensured by the use of wholemeal flour” in the working-class diet, which relied heavily on bread. Published in the mass-circulation Daily Mail, which welcomed this “striking and convincing confirmation” of its long-standing campaign for wholemeal bread, the manifesto was signed by “thirteen distinguished medical men and scientists.” The signatories included the NHS president Sir William Arbuthnot Lane and leading members of the society, namely, the eugenicist Caleb Williams Saleeby, the radiologist Alfred Jordan, the physician S. Henning Belfrage, and the biochemist Robert Plimmer. Lane had attributed “many of the ills of civilization” to “wrong habits of feeding” in a series of articles published in the Daily Mail the previous autumn, in which he highlighted the importance of vitamins to health and advocated wholemeal bread. The Journal of the American Medical Association, which reported on the “bread controversy,” noted that in support of his arguments Lane invoked the “accumulated intelligence of those who have made dietetics a life study—Gowland Hopkins, Plimmer, Hindhede, McCollum, McCarrison and others.”
Using the NHS's “Wholemeal Manifesto” as a jumping-off point, this chapter explores the debate about different types of bread and flour in Britain during the interwar years. Claims about the poor nutritional value of white bread and advocacy of wholemeal can be traced back to the nineteenth century, but the issue received renewed attention in the wake of the discovery of vitamins and the launch of health education pressure groups such as the NHS in the 1920s. The bread controversy focused on chemical properties and nutritional benefits, but these were framed within a wider cultural perspective. Contrary to what Uwe Spiekermann has suggested, there was no general shift in favor of brown bread among nutritionists in interwar Britain. Rather, this chapter highlights that doctors, nutrition experts, and scientists continued to debate the merits of different types of bread and flour.
In a sense, any scientist who embarks on a research program exploring particular connections between diet and health, and who aspires to somehow alter diets in accordance with expected findings, has started a process of setting standards. And the links between research and action frequently do not follow a simple linear sequence of research first, policymaking or propaganda second. Nutrition scientists are often, for example, publicly enthusiastic about the value of specific nutrients before beginning new research that they hope will enhance the importance of these nutrients—and themselves. This essay explores an episode in which two groups of nutrition scientists attempted to demonstrate connections between nutrition and health, each focusing on their own favored nutrients. In the concluding section, it will be suggested that this affair illustrates the way in which the priorities of nutrition science are often set. The “activist” dimension of nutrition science, the impulse to set standards, generally means that research priorities reflect the health or social problems that are considered important at the time in wider public, political, medical, and scientific circles.
The context for the story explored in this chapter is Britain during the interwar period. During the First World War, nutrition scientists in Britain had organized and achieved some measure of consensus, largely focused on the need to maintain the efficiency of the military and civilian population via the application of energy and protein standards. Despite resistance from government and civil servants, they had gained a degree of influence on wartime food policy. But soon after peace returned, their hopes to institutionalize a role in policymaking dissipated when the state largely withdrew from the management of food supplies. Researchers who attempted to make their reputations and careers in nutrition science and its applications now lacked the previous benefits of operating in a prolonged national emergency. One way to enhance the profile and power of the field in this situation was to demonstrate that nutrition could address the most significant and intractable problems of the day, such as morbidity and mortality from infections among humans and farm animals.
Late nineteenth-century nutrition science occupied an ambiguous space. It emerged in the same decades as the development of quantitative, precision, and apparatus-based methods in physiology and the spread of statistical thinking, and it combined these two very different approaches in unique ways. Nutrition science connected theory with practice, studies of individuals with studies of populations, methods from the natural sciences with methods from the social sciences. It was an experimental science and an applied science. Its information sources were found in the lab, in the verbal accounts of experimental subjects, in markets, institutions, private homes, hospitals, shops, and workplaces. It aimed for the precision of physics and chemistry, but was confronted with the enormous variability of its subjects, objects, and external circumstances, and with discrepancies between the artificially controlled conditions of the lab and the variable conditions of human life.
The early history of nutrition science has been explored from a variety of perspectives. Most studies focus on Germany and the United States, the two countries that led in the development of its theories, methods, and applications in the later nineteenth and early twentieth centuries. Kenneth Carpenter, Frederic L. Holmes, and Nicolas Mani have described this history from a largely internal perspective, delineating the main figures and theoretical development of nutritional physiology. Studies that approach it from a social historical perspective have also not been lacking. In particular, Harvey Levenstein and Naomi Aronson have shown how research on nutrition in the United States was intimately connected to social concerns and, in Aronson's case, to questions of social control and biopolitics. Similar interactions between laboratory research and social context have been confirmed in studies on a number of countries.
Studies focused primarily on physiological theory acknowledge the social context of its application, and social historical works frequently describe the theories on which nutrition policies and reform initiatives were based, but the impression can arise that the laboratory was the site for the production of nutritional knowledge, and the apparatus of the exact sciences—experiments, instruments, and precision techniques—the means of its generation. Although laboratory studies were crucial in the history of nutrition science, their methods and results were not uncontested.
As a vehicle for conveying nutritional information to the individual consumer, the modern US food-package label has evolved steadily throughout the twentieth century, and its content and format are regularly revised to reflect and apply new knowledge in the fields of medical, nutritional, and regulatory science. Today, the most prominent feature of that label is the Nutrition Facts panel, which has, since it first appeared in 1994, become one of the most widely recognized graphics in the world. Although the current food label focuses on nutrition and health, the roots of mandatory food labeling lie in the commercial landscape of the Progressive Era in American history.
In 1913 the first mandatory food labeling law was enacted—the Gould Net Weight Amendment to the 1906 Pure Food and Drugs Act—which required that all packaged foods have the “quantity of their contents plainly and conspicuously marked on the outside of the package in terms of weight, measure, or numerical count.” The United States thereby became the first country in the modern world to enact mandatory food labeling in lieu of voluntary compliance with published standards, and paved the way for the twentieth-century adoption of mandatory nutrition labeling.
In addition to its groundbreaking legal significance, the obscure Gould law is also historically important as a small yet compelling example of the critical role that standardization in many fields played in changing the commercial, social, and economic landscape of America during the Progressive Era (ca. 1890–1920).
In an era challenged by immigration, industrialization, and urbanization, the scope and scale of such change was unprecedented. Reforming activists drew support from a growing middle class, professionals began to transform many fields of endeavor, and in science and engineering, in particular, greater efficiency served as both goal and symbol of the Progressives’ commitment to the nation's overall advancement.
Over time, the Progressives profoundly influenced local, state, and federal government agencies, originally in an effort to curb political corruption, but increasingly as a means of promoting order and efficiency. One key to their success was their reliance on new scientific knowledge, methodology, and tools.
How is biopolitical knowledge made? Encompassing everything from pro- and antinatalist tax policies to antismoking campaigns, biopolitics refers to a wide spectrum of scientifically informed practices that aim to optimize the human resources of modern nations. In recent years, historians have devoted significant effort to understanding the role of scientists, doctors, and social planners in creating and administering biopolitical regimes. In the German context, for instance, scholars have carefully unpacked how ambitious young “genetics doctors” in the 1930s worked in concert with the state to establish research centers and implement policies for assessing and improving the nation's racial fitness. Excellent as this scholarship has been, it has largely assumed something that the following essay questions: that the most important players in modern biopolitical dramas have been technocratic elites working through the state in a largely top-down manner. In this context, modern nutrition science offers many opportunities for expanding how we think about biopolitics. As a field seeking to understand and manage the physical fitness of vast human populations via diet, modern nutrition science is certainly an important example of modern biopolitics, yet one to which historians have devoted little attention. It is, moreover, a particularly instructive field because it has encompassed such a diversity of actors. As we might expect, experimentalists, clinicians, and policymakers played star roles in inventing and implementing nutrition science. More surprisingly, however, lay people with little to no scientific training also participated in this effort and, indeed, often pushed experts to change what they thought they knew about diet. As a field of biopolitical knowledge and practice, in other words, nutrition science has been made by a heterogeneous group of players whose dynamics do not follow the top-down model. This essay develops that claim by focusing on a specific episode in the history of nutrition science, the unmaking of the protein standard in late nineteenth- and early twentieth-century Germany, yet also aspires to raise larger methodological questions about how scholars approach the history of biopolitics more generally.
The World Food Programme predicted a famine would hit Niger in 2005. Even before the locusts came, drought had despoiled millet fields across the southern grasslands. As refugees began to collect around the town of Maradi, the international community mobilized. Doctors without Borders set up six emergency care centers, and the United Nations sent 7,000 tons of food.1 Niger's prime minister, Hama Amadou, traveled to France to appeal for aid. Taking advantage of his absence, President Mamadou Tandja, Amadou's political rival, assembled the press corps to announce there was “no famine” and “the people of Niger look well-fed.”2 The denials split the aid community. The Swiss aid agency agreed that the crisis had been overplayed, finding “a big gap between the reality facing us on the ground and what the international community is saying.”3 The United States, which was arming and training Niger's military, disputed the United Nations' figures.4 As the catastrophe unfolded, so did the debate. In 2008, social scientists were still working on models to determine whether a famine had occurred.5
Denials of this kind are not unusual. They predictably follow an international relief appeal that incorporates the word “famine.” When international organizations raised concerns in 2008 about mass starvation in Ethiopia, the government of Meles Zenawi issued a statement affirming that “famine does not exist in Ethiopia. It is a story made up by the foreign media and aid organizations.”6 Earthquakes, floods, and hurricanes provoke intense debates about environmental causes, the responsibility of governments, and the geography of need, and so it might be said that all disasters are politically constructed. The narration of famine, however, contains a further degree of subjectivity: a determination of whether an emergency exists at all. Decisions to declare or not declare may cost lives. Failure to announce a famine may delay a response until it is too late, but a premature or mistaken declaration invites a kind of collateral damage by causing thousands to gather at relief camps where they fall prey to disease.
Could meat eating change your personality? In the early nineteenth century, some French commentators argued that it could, and that meat eating could even define the collective character of a people. Julien-Joseph Virey, a pharmacist and anthropologist, proclaimed in 1813 that consuming meat had made the Romans “vigorous, energetic and bellicose,” and that the retention of Roman meat-eating habits by northern Europeans, particularly the Germans and the English, had given them the same characteristics. In the modern states of Greece and Rome, in contrast, Virey argued, the people had embraced a more vegetable-heavy diet and consequently had lost some of their “force and vigor.” Meat, Virey suggested, made for an aggressive, passionate, and forceful population, a population that could embark on the conquest of others. This message was popular throughout much of the nineteenth century, and in the 1890s most scientists still agreed that meat should hold a central place in the French diet. Yet there were also signs that experts were starting to revisit the meat question. For example, in 1893 the military doctors Louis Henri Polin and Henri Joseph Labit, in their text L'hygiène alimentaire, while stating that “meat-eating peoples are robust and strong,” also conceded that it was possible to live without meat. As they noted, “it is incontestable and demonstrated by experience: being deprived of meat is more easily supported perhaps than being deprived of vegetables.” Jean Rouget and Charles Dopter, two researchers from the Val-de-Grâce military hospital, went even further in their book, also called Hygiène alimentaire, which was published in 1906. For them, the ideal diet struck a balance between meat and vegetables. But then they added: if a vegetarian supplemented his or her diet with butter, milk, eggs, cheese, and other fats, then this diet “was eminently rational.” In other words, vegetarianism was not only a viable choice for French men and women in the modern age but perhaps, in some cases, a logical one as well.
Why did questions arise about the role of meat in the French diet in the late nineteenth and early twentieth centuries? In the tropical colonies, as I have argued elsewhere, a belief in the need for reduced meat diets arose from fears that warm climates and the stresses of colonial life had a negative impact on French digestive systems.
In his classic work on military tactics, The General Principles of War, Friedrich the Great (1712–86) stated that anyone wanting to build an army must start with the belly. Procuring sufficient food was crucial in warfare, since food supply influenced both the soldiers’ performance and their morale. Nevertheless, provisions were basic. Friedrich's soldiers were supplied with just two pounds of bread per day, and food procurement and preparation were largely left to the individual soldier. Military food consumption thus followed traditional, regional, and social differences. Whereas officers ate in restaurants, with their families, or with householders, the common soldiers prepared their foods in cooperative cooking communities called Menagen. They purchased food collectively and hired a cook or cooked in turn. As a result, the peacetime food of regular soldiers was similar to that of the civilian population. It was shaped by custom and varied according to rank.
During wartime, food was requisitioned from the civilian population, which was obligated to accommodate soldiers and was reimbursed for its expenses. The army organized food provision only when larger forces were assembled and where opportunities for self-catering or quartering seemed uncertain. This began to change only during the eighteenth century, after the establishment of standing armies and permanent barracks. Two decrees, announced in 1827 and 1831, emerged from these changes and opened the possibility that some food would be centrally supplied during campaigns. However, the state did not provide complete and healthy food in peacetime. Because the army supplied only bread, soldiers continued to purchase the rest of their food from sutlers in or near garrisons, and sutlers accompanied the army on military campaigns. More fundamental change had to wait until 1858, when the food provision regulation (Verpflegungsreglement) was enacted.
This essay follows developments in the soldier's rations between 1850 and 1960, focusing not on organizational changes but on scientific and medical discourses on dietary standards, which began in the middle of the nineteenth century. Two developments shaped these discourses. First, they were deeply influenced by the Crimean War (1853–56). Military tactics had changed from static to mobile warfare, and the experiences of the war showed that the military was unable to supply adequate food to its more mobile troops.
“Nutrition is not a discipline, it is an agenda.” This declaration, made by the French American nutrition scientist Jean Mayer, has been adopted as the mission statement of the Friedman School of Nutrition Science and Policy at Tufts University in Massachusetts. As the title of the school indicates, it offers a program that is explicitly socially engaged, but the statement also demonstrates that the disciplinary status of nutrition as a university program is not a matter of consensus. This has long been the case. In the United Kingdom after the Second World War, for example, one nutrition scientist, John Yudkin, returning from his military service to a chair of physiology, set about constructing a university discipline and degree in nutrition, combining biochemical, physiological, and social scientific approaches. In response, Robert Garry, president of the British Nutrition Society, declared in 1953 that nutrition should be regarded not as a science or discipline but as a “meeting place of the sciences and of scientists.” Yet nutrition scientists interact not only with scientists but also with many others: professionals such as doctors and veterinarians, politicians, administrators, policymakers, representatives of funding bodies, industrialists, agriculturalists, campaigners, lay and alternative experts, and media representatives. They meet members of the general public and their families. And all those they meet eat and have views on nutrition.
Around the world and throughout history, one of the commonest subjects of everyday domestic discourse has been food, and food has likewise been a persistent topic in public and political discourse. Historically, this discourse has often been shaped by concern about hunger and food scarcity and insecurity. In more recent times, such concerns have been joined by worries about overabundance, overeating, and the seeming inscrutability of world food systems. During the last two centuries, these discourses have been increasingly shaped by the sciences of food and nutrition, as scientific and medical actors participated in public health, social welfare, and political policy debates surrounding food and diet. From the early days of the field, scientific nutritional knowledge became the basis for advertising copy and product innovation in the food and pharmaceutical industries. It was embraced by campaigners pursuing a wide variety of causes and by publics interested in maintaining and improving health.
This chapter examines the contemporary redefinition of Galician folk music during the political transition to democracy and the establishment of Galician autonomy, as well as the role contemporary Galician folk music has played in the construction of a modern Galician cultural identity in the global age. Since the mid-1970s the recovery of Galician musical and cultural heritage has gone hand in hand, somewhat paradoxically, with innovation, transformation and hybridization. Parallel to this process, contemporary Galician folk music has become one of the key cultural expressions of a modern Galician identity that is to a large extent based on the distinctness and richness of its traditional music, even if this genre has undergone a complex process of hybridization entailing the merging of old and new forms, rural and urban manifestations and local and global trends.
This redefinition of Galician folk music has developed in parallel with the major political and social changes that have occurred in Galicia during this period, and the significant cultural developments in literature, audio-visual arts, rock music and fashion, which have all played a key role in the process of collective self-discovery and self-construction. Two major historical events have governed these developments: the process of cultural normalization as a result of the establishment of Galician political autonomy after the Transition, on the one hand, and the globalization of the cultural industries with Galicia's response to the new cultural climate and economic currents of our global age, on the other.
Owing to their inextricable link with capitalism and modernity, urban spaces have been at the centre of modern European thought about identity and territory. In the Galician context, however, debates about identity, modernity and space have developed along particularly fluid lines, with the urban not always occupying centre stage. This, of course, has much to do with living conditions in a fragmented and contested territory which, until the 1980s, was basically rural and still characterized on many levels by pre-industrial economic practices and values. A brief summary of how this situation has been dramatically transformed was presented in an article published in the Galician cultural magazine Grial (Seoane Pérez, Pérez Caramés and Otero Millán 2012). According to this report, in 2012 Galicia's administrative map showed the existence of 30,000 settlements, half of the Spanish total, although Galicia occupies only 6 per cent of Spain's territory (2012: 46). The process of urbanization of a traditionally rural society has therefore been fast-paced, with current figures showing that ‘2.2% of the municipalities gather 25.7% of the population, while 63% of municipalities only gather 16.6% of the population’ (González Laxe 2012: 20). The same shift is visible from an economic perspective. The coastal cities of Vigo and A Coruña concentrate 36 per cent of the Galician private sector; if we add the other five main ‘towns’ – Lugo, Ourense, Pontevedra, Santiago de Compostela and Ferrol – the figure reaches 62.2 per cent (González Laxe 2012: 21).
It does not take long for anybody interested in Galicia to come across a reference to the writer Rosalía de Castro. Whether in the textual and visual body of Galician cultural history or in the material and imaginary landscapes of the country's ongoing national construction, the name of Rosalía de Castro resonates with power, symbolizing a collective heritage. For the first-time or occasional visitor to Galicia her significance will be felt in the many monuments, statues, street names and city parks or gardens across the country that bear her name or image: Santiago de Compostela's monument to Rosalía de Castro, raised in the city's Alameda park in 1917, Rosalía de Castro Street in one of Vigo's vibrant central neighbourhoods or the Parque Rosalía de Castro in Lugo are only a few examples among the many instances of commemorative practices in her name. For the more specialized reader in Galician culture and history, references to de Castro form the substrate of a shared structure of meaning which has been seen as historically bonding the community together with extraordinary success. However, for all their immediate obviousness and unquestionable coherence to several generations of Galicians – living in and away from their country – the life, work and legacy of Rosalía de Castro continue to pose a challenge for literary critics, historians and public actors engaged in the various discourses of the nation coexisting in Galicia.