A conviction has been growing among some observers that contemporary human health could be substantially improved if we would just emulate our hunter–gatherer ancestors in dietary matters. A look at this contention seems an appropriate way to bring this work to a close because the subject takes us full circle – linking contemporary issues of food and nutrition with our Paleolithic past. In addition, it offers an opportunity for some summary, and, finally, it provides a chance to remind ourselves of how ephemeral food and nutritional dogma can be.
In fact, in retrospect we call such fleeting tenets “Food Fads” (see the treatment by Jeffrey M. Pilcher, this work chapter VI.12), and in the United States at least, it was not very long ago that vitamin E capsules were being wistfully washed down in the hope of jump-starting sluggish libidos (Benedek, this work chapter VI.15). Not long before that, the egg was enshrined as the “perfect” food, with milk in second place, and cholesterol, now an apparently significant ingredient in the gooey deposits that plug heart arteries, was not a word in everyday vocabularies (Tannahill 1989).
In those “B.C.” (before cholesterol) days, meat was in, and the starchy foods (potatoes, breads, and pastas), although full of fiber, were out – considered bad for a person’s waistline and health, not to mention social standing. Garlic had a similarly dismal reputation – only foreigners ate it – and only winos drank wine. Who could have foreseen then that we would soon toss all of that nutritional lore that guided us into the ash heap of history and embrace its polar opposite – the “Mediterranean Diet” (Keys and Keys 1975; Spiller 1991; see also Marion Nestle, chapter V.C.1)?
Broadly stated, pica is the term given to the compulsive consumption of substances not generally considered food. However, a precise definition of pica is somewhat elusive because understandings of what constitutes “food,” what symptoms signify pica, and explanations of what causes the condition vary with historical and cultural context. A more specific definition of pica is “the chronic, compulsive eating of nonfoods such as earth, ashes, chalk, and lead-paint chips …” (Hunter 1973: 171), but it may also include a “false or craving appetite” or “deliberate ingestion of a bizarre selection of food,” as well as the compulsive ingestion of nonnutritive or nonfood items such as ice and ice water (Parry-Jones and Parry-Jones 1994: 290).
Pica, in various forms, has been widely noted historically and geographically, primarily in medical texts and anthropological writings (see, for example, Laufer 1930; Cooper 1957; Anell and Lagercrantz 1958). Its practice, although not considered a disease, is of medical concern because ingestion of some substances may result in disease. Additionally, there are types of pica that have been linked by medical researchers to the correction of mineral deficiencies (see, for example, Coltman 1969; Crosby 1971; Hunter 1973). Pica is classified by the DSM-III-R (American Psychiatric Association 1987) and the ICD-10 of the World Health Organization (1992) as an eating disorder, along with anorexia nervosa, bulimia, and infant rumination. Various forms of pica have also been associated with mental retardation.
That people do not live “by bread alone” is emphatically demonstrated by the domestication of a range of foodstuffs and the cultural diversity of food combinations and preparations. But even though many foods have been brought under human control, it was the domestication of cereals that marked the earliest transition to a food-producing way of life. Barley, one of the cereals to be domesticated, offered a versatile, hardy crop with an (eventual) tolerance for a wide range of climatic and ecological conditions. Once domesticated, barley also offered humans a wide range of valuable products and uses.
The origins of wheat and barley agriculture are to be found some 10,000 years ago in the ancient Near East. Cereal domestication was probably encouraged by significant climatic and environmental changes that occurred at the end of the glaciated Pleistocene period, and intensive harvesting and manipulation of wild cereals resulted in those morphological changes that today identify domesticated plants. Anthropologists and biologists continue to discuss the processes and causes of domestication, as we have done in this book’s chapter on wheat, and most of the arguments and issues covered there are not reviewed here. All experts agree, however, on the importance of interdisciplinary research and multiple lines of evidence in reconstructing the story of cereal domestication.
Readers of this chapter may note some close similarities to the evidence for wheat domestication and an overlap with several important archaeological sites. Nonetheless, barley has a different story to tell. Barley grains and plant fragments are regular components of almost all sites with any plant remains in the Near East, regardless of period or food-producing strategy. Wild barley thrives widely in the Near East today – on slopes, in lightly grazed and fired pastures, in scrub-oak clearings, in fields and field margins, and along roadsides. These circumstances suggest a different set of research questions about barley domestication, such as: What was barley used for? Was its domestication a unique event? And how long did barley domestication take?
The chicken (Gallus gallus or Gallus domesticus) is generally considered to have evolved from the jungle fowl (G. gallus), which ranges throughout the area between eastern India and Java. Within the nomenclature, G. domesticus is normally used by scholars who believe in a polyphyletic origin for the domestic chicken (from G. gallus, Gallus sonnerati, and Gallus lafayettei), whereas G. gallus is used by those who support a unique origin from the various subspecies of wild G. gallus. Debates regarding the origin and spread of the domestic chicken focus both on its genetic basis and the “hearth area” of its initial domestication.
The osteological identification of domestic chickens has been made both on a contextual basis (i.e., the occurrence of Gallus bones outside of the birds' normal wild range) and on osteometric grounds (i.e., the occurrence of bones that are larger than those of modern wild jungle fowl and therefore would seem to be the result of selective breeding). Recent research of this nature has resulted in a radical revision of the standard view of the domestication of the chicken. The presence of domestic fowl bones in third-millennium-B.C. archaeological excavation contexts at Harappa and Mohenjo-Daro in Pakistan led earlier writers (Zeuner 1963; Crawford 1984) to assume that the chicken was first domesticated in this area. However, in 1988, B. West and B.-X. Zhou presented archaeological data showing domestic chickens to be present at China’s Yangshao and Peiligang Neolithic sites, which dated from circa 6000 to 4000 B.C. As a consequence, because wild forms of Gallus are entirely absent in China, and as the climate would have been inimical to them in the early Holocene, it seems likely that chickens were domesticated elsewhere at an even earlier date. In the absence of evidence from India, Southeast Asia (i.e., Thailand) has been put forward as a likely hearth area (West and Zhou 1988).
Keshan disease (KD) is a unique endemic cardiomyopathy in China with high incidence and mortality. Its etiology and pathogenesis are not yet completely clear.
In the winter of 1935, an outbreak of an unknown disease with sudden onset of precordial oppression, pain, nausea, vomiting (yellowish fluid), and fatal termination in severe cases occurred in Keshan County, in Heilongjiang Province of northern China. Because its cause was not known, it was named after the place of outbreak by a Japanese military surgeon (Apei 1937).
Later, Keshan disease was also reported from other parts of China and, in fact, research now indicates that the condition has been prevalent in that country for close to 200 years. The earliest-known account of the disease was found in an inscription on a stone pillar at Jinling Temple, Xiaosi village, Huanglong County, Shaanxi Province, in 1812 (Shan and Xue 1987).
Epidemiological Characteristics
There are three major epidemiological characteristics of Keshan disease. The first is its regional distribution. Keshan disease areas are focally distributed in a belt extending from northeast to southwest China and usually located in hilly land. There are isolated spots known as “safety islands” surrounded by affected areas. The second is population susceptibility. Children below 15 years of age and women of childbearing age in northern China, and children below 10 years of age in southern China, constitute the most susceptible populations. They all live in rural areas and in farm families. The third characteristic is seasonal prevalence. The peak season of Keshan disease in northern China is in winter, but in the south it is in summer. There is also a natural fluctuation of prevalence from year to year.
Human populations often have been obliged to subsist on an all-vegetable diet because of a poverty or scarcity of animal foods. The term “vegetarianism,” nevertheless, is usually reserved for the practice of voluntary abstention from flesh on the basis of religious, spiritual, ethical, hygienic, or environmental considerations. These in turn have led to still finer distinctions regarding exactly what nonmeat articles of diet are permissible, resulting in the fragmentation of vegetarians into several groups. The great majority of adherents are “lacto-ovo” vegetarians, who reject flesh but find dairy products and eggs acceptable. Smaller groups include “vegans,” who admit no animal products whatsoever into their diet; “lacto-vegetarians,” who consume milk but not eggs; “ovo-vegetarians,” who allow eggs but not milk; “fruitarians,” who eat only fruits and nuts; “raw foodists”; and “natural hygienists,” who scorn even vegetable foods if these have been processed or refined. And because – for all those classes – vegetarianism implies a concern to persuade others to adopt meatless diets, the history of vegetarianism is, at core, the history of the development of arguments used to justify and to proselytize for a vegetable diet.
Vegetarianism in Eastern Religion
The most notable examples of a religious basis for vegetarianism are to be found in Asian culture. Hinduism, though not requiring a strictly vegetable diet, has fostered a significant tradition of vegetarianism among certain believers for more than two millennia. The practice is still more widespread in Buddhism, where the doctrine of ahimsa, or nonviolent treatment of all beings, forbids adherents to kill animals for food. Many Buddhists do, nevertheless, eat meat, supporting the indulgence with the argument that the animal was killed by others. Jainism, likewise, espouses ahimsa and specifically denies meat to any practitioners of the faith (Barkas 1975; Akers 1983: 157–64).
The topics of diet (the foods that are eaten) and nutrition (the way that these foods are used by the body) are central to an understanding of the evolutionary journey of humankind. Virtually every major anatomical change wrought by that journey can be related in one way or another to how foods are acquired and processed by the human body. Indeed, the very fact that our humanlike ancestors had acquired a bipedal manner of walking by some five to eight million years ago is almost certainly related to how they acquired food. Although the role of diet and nutrition in human evolution has generally come under the purview of anthropology, the subject has also been of great interest to scholars in many other disciplines, including the medical and biological sciences, chemistry, economics, history, sociology, psychology, primatology, paleontology, and numerous applied fields (e.g., public health, food technology, government services). Consideration of nutriture, defined as “the state resulting from the balance between supply of nutrition on the one hand and the expenditure of the organism on the other,” can be traced back to the writings of Hippocrates and Celsus and represents an important heritage of earlier human cultures in both the Old and New Worlds (McLaren 1976, quoted in Himes 1987: 86).
The purpose of this chapter is threefold: (1) to present a brief overview of the basic characteristics of human nutriture and the history of human diet; (2) to examine specific means for reconstructing diet from analysis of human skeletal remains; and (3) to review how the quality of nutrition has been assessed in past populations using evidence garnered by many researchers from paleopathological and skeletal studies and from observations of living human beings. (See also Wing and Brown 1979; Huss-Ashmore, Goodman, and Armelagos 1982; Goodman, Martin, et al. 1984; Martin, Goodman, and Armelagos 1985; Ortner and Putschar 1985; Larsen 1987; Cohen 1989; Stuart-Macadam 1989. For a review of experimental evidence and its implications for humans, see Stewart 1975.) Important developments regarding nutrition in living humans are presented in a number of monographic series, including World Review of Nutrition and Dietetics, Annual Review of Nutrition, Nutrition Reviews, and Current Topics in Nutrition and Disease.
The sweet potato (Ipomoea batatas, Lam.) and the yams (genus Dioscorea) are root crops that today nurture millions of people within the world’s tropics. Moreover, they are plants whose origin and dispersals may help in an understanding of how humans manipulated and changed specific types of plants to bring them under cultivation. Finally, these cultivars are important as case studies in the diffusion of plant species as they moved around the world through contacts between different human populations.
This chapter reviews the questions surrounding the early dispersals of these plants, in the case of the sweet potato from the New World to the Old, and in the case of yams their transfers within the Old World. In so doing, the sweet potato’s spread into Polynesia before European contact is documented, and the issue of its penetration into Melanesia (possibly in pre-Columbian times) and introduction into New Guinea is explored. Finally, the post-Columbian spread of the sweet potato into North America, China, Japan, India, Southeast Asia, and Africa is covered. In addition, a discussion of the domestication and antiquity of two groups of yams, West African and Southeast Asian, is presented, and the spread of these plants is examined, especially the transfer of Southeast Asian varieties into Africa.
The evidence presented in this chapter can be viewed fundamentally as primary and secondary. Primary evidence consists of physical plant remains in the form of charred tubers, seeds, pollen, phytoliths, or chemical residuals. Secondary evidence, which is always significantly weaker, involves the use of historical documents (dependent on the reliability of the observer), historical linguistics (often impossible to date), stylistically dated pictorial representations (subject to ambiguities of abstract representation), remnant terracing, ditches or irrigation systems (we cannot know which plants were grown), tools (not plant specific), and the modern distribution of these plants and their wild relatives (whose antiquity is unknown).
Part V comprises a history of food and drink around the world, from the beginnings of agriculture in the Near East to recent excitement generated by the “Mediterranean diet.” It is divided chronologically as well as geographically, which invariably creates anomalies and overlap that invite explanation. Illustrative is the treatment together of South Asia and the Middle East in view of the culinary impact of Islam on both regions. Or again, because of an abundance of available authorities on food and drink in the various European countries (and their many regions), that section could easily have mushroomed to the point of giving the lie to a title that promised “world history.” Thus, we have dealt with Greece, Italy, and the Iberian countries under the rubrics of “The Mediterranean” and “Southern Europe.”
For the Americas, we have two Caribbean entries, which might seem somewhat disproportionate. But it should be noted that the chapter that provides a pre-Columbian historical background for the Caribbean region does so for South America and lowland Central America as well, whereas the chapter treating the period since 1492 reveals the mélange of cultures and cuisines of the region, in which those of Africa blended with others of Europe and Asia, even though the dishes often centered on plants originally cultivated by the region’s indigenous peoples.
In Part V, alarm about the danger of the demise of national and regional cuisines is sometimes expressed – triggered, at least in part, by fast-food chains, born in the United States but now reproducing wildly across the globe from Mexico City to Moscow, Bridgetown to Brussels, and Phnom Penh to Paris. It is interesting to note how the standardized nature of fast foods contrasts so starkly with the usage of foods in an earlier period of globalization, which took place during the centuries following the Columbian voyages. Then, burgeoning nationalism ensured that although various cultures adopted most of the same foods, they prepared them differently, just as regional cuisines arose in similar fashion to proclaim a distinctiveness from the metropolis.
Until the nineteenth century, unspecified chronic anemia was known as chlorosis, or the “green sickness,” referring to the extreme pallor that characterized severe cases. For centuries, Dutch painters portrayed the pale olive complexion of chlorosis in portraits of young women (Farley and Foland 1990: 89). Although such extreme cases are not common in Western societies today, less severe acquired anemia is quite common. In fact, acquired anemia is one of the most prevalent health conditions in modern populations.
Technically, anemia is defined as a subnormal number of red blood cells per cubic millimeter (cu mm), a subnormal amount of hemoglobin in 100 milliliters (ml) of blood, or a subnormal volume of packed red blood cells per 100 ml of blood, although other indices are usually also used. Rather than imputing anemia to unrequited love, modern medicine generally imputes it to poor diets that fail to replenish iron loss resulting from rapid growth during childhood, from menstruation, from pregnancy, from injury, or from hemolysis. One of today’s solutions to the frequency of acquired anemia is to increase dietary intake of iron. This is accomplished by indiscriminate and massive iron fortification of many cereal products, as well as the use of prescription and nonprescription iron supplements, often incorporated in vitamin pills. However, a dietary etiology of anemia has, in the past, been assumed more often than proven. Determining such an etiology is complicated by the fact that the hematological presentation of dietary-induced iron deficiency anemia resembles the anemia of chronic disease. Of the many types and causes of acquired anemia, only those associated with diet and chronic disease are discussed here (for an overview of others, see Kent and Stuart-Macadam this volume).
Part VI takes up questions of food and nutrition that have historical as well as contemporary relevance. It begins with two chapters that continue a now decades-long debate over the extent to which improved nutrition may be responsible for reduced mortality within populations – a debate that has centered on, but certainly has not been limited to, the circumstances surrounding the population increases of the countries of Europe since the eighteenth century.
These are followed by a group of chapters that, although not specifically addressing matters of mortality decline, do help to illuminate some of its many aspects. An elaboration of the concept of synergy, for example, emphasizes the important role that pathogens (or their absence) play in the nutritional status of an individual, whereas the chapter on famine reveals the circumstances within which synergy does some of its deadliest work.
Stature, discussed next, is increasingly employed by historians as a proxy for nutritional status, and final adult height can frequently be a function of the nutrition of the mother before she gives birth and of the infant and child following that event – a subject treated in the following chapter. A chapter on adolescent nutrition and fertility, harking back to matters of population increase, is succeeded by another concerned with the linkage between the nutrition of a child and its mental development.
By way of a transition to a second group of chapters in Part VI focusing on culture and foods is a chapter on the biological and cultural aspects of human nutritional adaptation.
Wine is the fermented juice (must) of grapes, and for thousands of years humans have been attempting to perfect a process that occurs naturally. As summer turns into fall, grapes swell in size. Many will eventually burst, allowing the sugars in the juice to come into contact with the yeasts growing on the skins. This interaction produces carbon dioxide, which is dissipated, and a liquid containing alcohol (ethanol) in combination with a plethora of organic compounds related to aroma and taste that have yet to be fully enumerated. Many people have found drinking this liquid so highly desirable that they have been willing to expend enormous effort to find ways of improving its quantity and quality.
In some places both viticulture (grape growing) and viniculture (wine making) emerged as specialized crafts, which today have achieved the status of sciences within the field known as enology. In general, three basic types of wine are produced: (1) still or table wines with alcohol contents in the 7 to 13 percent range; (2) sparkling wines from a secondary fermentation in which the carbon dioxide is deliberately trapped in the liquid; and (3) fortified wines, in which spirits are added to still wines in order to boost their alcohol contents into the 20 percent range.
Vine Geography
Grape-bearing vines for making wine belong to the genus Vitis, a member of the family Ampelidaceae. Vines ancestral to Vitis have been found in Tertiary sediments dating back some 60 million years, and by the beginning of the Pleistocene, evolution had produced two subgenera – Euvitis and Muscadiniae. Both were distributed across the midlatitude portions of North America and Eurasia. Glaciation, however, exterminated the Muscadines with the exception of an area extending around the Gulf of Mexico and into the southeastern United States, where one species, Vitis rotundifolia, has been used to make sweet wines that go by the regional name of scuppernong.
In the Pacific Islands (or Oceania) great distances, distinct island environments, and successive waves of peoples reaching island shores have all shaped foodways, including gathering, hunting, and fishing, agricultural practices and animal husbandry, and modern food distribution systems.
The peoples of Oceania (which was subdivided by Eurocentric cartographers into Melanesia, Polynesia, and Micronesia) arrived at their island homes over a span of many thousands of years. The various islands have substantial differences in natural resources, and the inhabitants have had different experiences with explorers, colonizers, and missionaries. But since the 1960s, many of the peoples and lands of Oceania have had in common their own decolonization and integration into the global economy. What follows is a description of the history and culture of food and nutrition in the Pacific Islands that recognizes diversity yet also attempts to leave the reader with an impression of the whole.
The Pacific Region
In the vastness of the Pacific Ocean are some of the world’s smallest nations and territories. Politically there are 22 states, excluding both Hawaii and New Zealand. The region’s giant is Papua New Guinea. With a total land area of 462,000 square kilometers, it is over five times larger than all the other Pacific states combined. This nation, inhabited for many thousands of years longer than the rest of the region, is also home to over 60 percent of the region’s population of 6 million individuals, whose diversity is illustrated by the more than 800 languages spoken in Papua New Guinea alone. Fiji is the only other Oceanic territory with a population of more than 500,000. By contrast, Tokelau, a territory of New Zealand, is made up of three coral atolls with a combined land area of 10 square kilometers and a population of 1,600. Cultural definitions of the region, however, incorporate New Zealand as well as Hawaii. New Zealand is treated elsewhere in this work, but for comparative purposes, this chapter includes several references to its original inhabitants, the Maori.
Southeast Asia, geographically and culturally diverse, stretches from Burma (Myanmar), through Thailand and the Indochinese and Malay peninsulas, to islanded Indonesia. Some would include the Philippines and Indonesian New Guinea as parts of Southeast Asia, but this study adds only the Philippines. European scholars called the region “Farther India” for its location “beyond the Ganges” (Coedes 1968). It is separated from China by the Himalayas and their eastern extension. Each country in the region has other mountain chains, channeling rivers to the South China, Java, Celebes, and other Indonesian seas, and to the Indian Ocean. Lowland plains south of the highest ranges of the mainland are home to most of the populations of Burma, Thailand, Malaysia, Cambodia (Kampuchea), Laos, and Vietnam. The region is also insular: Indonesia has over 13,000 islands, spreading some 5,400 kilometers (3,300 miles). Most people live on or near oceans or river deltas.
Southeast Asia is in the tropical belt along the equator, with little temperature variation – about 15.5 to 24 degrees Celsius (60 to 75 degrees Fahrenheit) in winter to 29 to 32 degrees Celsius (85 to 90 degrees Fahrenheit) in the dry summer months (Hanks 1972). This is monsoon Asia, and annual rainfall amounts to several hundred millimeters (over 100 inches). North Pacific winds bring rain from the northeast down the South China Sea from October until March, and there is a southwesterly monsoon in summer from May to September (Jin-Bee 1963). Rain is not constant, but brief showers or thunderstorms are always imminent. Temperatures and precipitation are noticeably lower in higher parts of the region. Europeans early recognized the comfort of the foothills and built hill-station retreats where their accustomed temperate plants – fruits, flowers, trees, and vegetables – all flourished.
Legend has it that when Emperor Tang, the founder of the Shang dynasty (sixteenth to eleventh centuries B.C.), appointed his prime minister, he chose Yi Yin, a cook widely renowned for his great professional ability. Indeed, in the Chinese classics (the oldest of which date from the eighth and seventh centuries B.C.) the art of proper seasoning and the mastery of cooking techniques are customary metaphors for good government (Chang 1977: 51; Knechtges 1986). Moreover, in certain contexts the expression tiaogeng, literally “seasoning the soup,” must be translated as “to be minister of state”!
That government should be likened to the cooking process is not really surprising, considering that the foremost task of the emperor was to feed his subjects. Seeing the sovereign, the intermediary between heaven and earth, in the role of provider of food is in keeping with a mythical vision of primeval times. According to legend, the first humans, clad in animal skins, lived in caves or straw huts and fed on raw animals, indiscriminately devouring meat, fur, and feathers in the same mouthful. Shennong, the Divine Farmer, one of the mythical Three August Sovereigns and founders of civilization, taught men to cultivate the five cereals and acquainted them with the blessings of agriculture (Zheng 1989: 39) after Suiren had taught them to make fire for cooking their foods. In mythology, cooking is associated with the process of civilization that put an end to the disorder of the earliest ages and led to a distinction between savagery and civilized human behavior.
Throughout Chinese history, the cooking of foodstuffs and the cultivation and consumption of cereals were considered the first signs of the passage from barbarity to culture. Thus the Chinese of the Han ethnic group set themselves apart from surrounding nationalities who, they said, had no knowledge of agriculture or did not know about the cooking of food (Legge 1885: 223; Couvreur 1950, 1: 295; Chang 1977).
A tropical root crop, manioc is also known as cassava, mandioca, aipim, the tapioca plant, and yuca. The term cassava comes from the Arawak word kasabi, whereas the Caribs called the plant yuca (Jones 1959). The word manioc, however, is from maniot in the Tupí language of coastal Brazil; mandioca derives from Mani-óca, or the house of Mani, the Indian woman from whose body grew the manioc plant, according to Indian legends collected in Brazil (Cascudo 1984). Domesticated in Brazil before 1500, Manihot esculenta (Crantz), formerly termed Manihot utilissima, is a member of the spurge family (Euphorbiaceae), which includes the rubber tree and the castor bean (Cock 1985).
The manioc plant is a perennial woody shrub that reaches 5 to 12 feet in height, with leaves of 5 to 7 lobes that grow toward the end of the branches. The leaves are edible and may be cooked like spinach, but in terms of food, the most significant part of the plant is its starchy roots, which often reach 1 to 2 feet in length and 2 to 6 inches in diameter. Several roots radiate like spokes in a wheel from the stem, and each plant may yield up to 8 kilograms of roots (Jones 1959; Cock 1985; Toussaint-Samat 1992).
There are two principal varieties of manioc – the sweet and the bitter. The sweet varieties have a shorter growing season, can be harvested in 6 to 9 months, and then can simply be peeled and eaten as a vegetable without further processing. If not harvested soon after maturity, however, sweet manioc deteriorates rapidly. The bitter varieties require 12 to 18 months to mature but will not spoil if left unharvested for several months. Thus, people can harvest them at their leisure. The main disadvantage to the bitter varieties is that they may contain high levels of cyanogenic glycosides, which can cause prussic-acid poisoning if the roots are not processed properly (Jones 1959; Johns 1990).
We can think of the world, for any person, as divided into the self and everything else. The principal material breach of this fundamental dichotomy occurs in the act of ingestion, when something from the world (other) enters the body (self). The mouth is the guardian of the body, a final checkpoint, at which the decision is made to expel or ingest a food.
There is a widespread belief in traditional cultures that “you are what you eat.” That is, people take on the properties of what they eat: Eating a brave animal makes one brave, or eating an animal with good eyesight improves one’s own eyesight (reviewed in Nemeroff and Rozin 1989). “You are what you eat” seems to be “believed” at an implicit level, even among educated people in Western culture (Nemeroff and Rozin 1989). It is an eminently reasonable belief, since combinations of two entities (in this case, person and food) usually display properties of both. Thus, from the psychological side, the act of eating is fraught with affect; one is rarely neutral about what goes in one’s mouth. Some of our greatest pleasures and our greatest fears have to do with what we eat.
The powerful affect associated with eating has a strong biological basis. Humans, like rats, cockroaches, raccoons, herring gulls, and other broadly omnivorous species, can thrive in a wide range of environments because they discover nutrients in many sources. But although the world is filled with sources of nutrition, the omnivore (or generalist) faces two problems. One is that many potential foods contain toxins. A second is that most available (nonanimal) foods are nutritionally incomplete. To the extent that omnivorous animals cannot find sufficient animal foods, their survival requires an apt selection of a variety of different plant foods. Animal foods tend to be complete sources of nutrition, but they are harder to come by because they are less prevalent and often difficult to procure (for example, they move). Hence, the omnivore must make a careful selection of foods, avoiding high levels of toxins and ingesting a full range of nutrients. Any act of ingestion, especially of a new potential food, is laden with ambivalence: It could be a good source of nutrition, but it might also be toxic.
Algae are eukaryotic photosynthetic micro- and macroorganisms found in marine and fresh waters and in soils. Some are colorless and even phagotrophic or saprophytic. They may be picoplankton, almost too small to be seen in the light microscope, or they could be up to 180 feet long, such as the kelp in the kelp forests in the Pacific Ocean.
Algae are simple, nucleated plants divided into seven taxa: (1) Chlorophyta (green algae), (2) Charophyta (stoneworts), (3) Euglenophyta (euglenas), (4) Chrysophyta (golden-brown and yellow-green algae and diatoms), (5) Phaeophyta (brown algae), (6) Pyrrophyta (dinoflagellates), and (7) Rhodophyta (red algae). A taxon of simple, nonnucleated organisms (prokaryotes) called Cyanobacteria (blue-green bacteria) is also included in the following discussion, as its members have a long history as human food.
Algae are eaten by many freshwater and marine animals, by several terrestrial domesticated animals such as sheep and cattle, and by two species of primates: Macaca fuscata in Japan (Izawa and Nishida 1963) and Homo sapiens. The human consumption of algae, or phycophagy, developed thousands of years ago, predominantly among coastal peoples and, less commonly, among some inland peoples. In terms of the quantity and variety of algal species eaten, phycophagy is, and has been, most prevalent among the coastal peoples of East and Southeast Asia and the Pacific, such as the ancient and modern Chinese, Japanese, Koreans, Filipinos, and Hawaiians.
History and Geography
The earliest archaeological evidence for the consumption of algae found thus far was discovered in ancient middens along the coast of Peru. Kelp was found in middens at Pampa, dated to circa 2500 B.C. (Moseley 1975); at Playa Hermosa (2500–2275 B.C.); at Concha (2275–1900 B.C.); at Gaviota (1900–1750 B.C.); and at Ancon (1400–1300 B.C.) (Patterson and Moseley 1968). T. C. Patterson and M. E. Moseley (1968) believe that these finds indicate that marine algae were employed by the ancient Peruvians to supplement their diets.