About 10,000 years ago, humans started changing the way they made a living as they began what would be a lengthy transition from foraging to farming. This transformation, known as the Neolithic Revolution, actually comprised many revolutions, taking place at different times and in different places, that are often viewed collectively as the greatest of all human strides taken in the direction of progress. But such progress did not mean better health. On the contrary, as the following chapters indicate, hunter-gatherers were, on the whole, considerably better nourished and much less troubled with illnesses than their farmer descendants. Because hunter-gatherers were mobile by necessity, living in bands of no more than 100 individuals, they were not capable of supporting the kinds of ailments that flourished as crowd diseases later on. Nor, as a rule, did they pause in one spot long enough to foul their water supply or let their wastes accumulate to attract disease vectors – insects, rodents, and the like. In addition, they possessed no domesticated animals (save the dog late in the Paleolithic) that would have added to the pollution process and shared their own pathogens.
In short, hunter-gatherers most likely had few pathogenic boarders to purloin a portion of their nutritional intake and few illnesses to fight, with the latter also sapping that intake. Moreover, although no one questions that hunter-gatherers endured hungry times, their diets in good times featured such a wide variety of nutriments that a healthy mix of nutrients in adequate amounts was ensured.
Many in Western societies as well as upper-class members of non-Western societies consider French cookery to be the world’s most refined method of food preparation. This reputation has mainly to do with the grande cuisine, a style of cooking offered by high-class restaurants and generally regarded as the national cuisine of France. The grande cuisine attained its status because it emphasizes the pleasure of eating rather than its purely nutritional aspects. Whereas all cuisines embody notions of eating for pleasure, it was only in France, specifically in Paris at the beginning of the nineteenth century, that a cuisine that focused on the pleasure of eating became socially institutionalized. Moreover, it was the bourgeois class of the period that used this emphasis on eating for pleasure for their cultural development. Previously, the aristocracy had determined the styles and fashions of the times, including the haute cuisine, but this privilege was temporarily lost with the French Revolution.
The middle class, with its growing economic power and thus the potential to rise on the social ladder, also used the grande cuisine to demonstrate cultural superiority over other social groups. At the same time, restaurants – new and special places created for the grande cuisine – came into being. Spatially institutionalized, the grande cuisine was transformed into a matter of public concern and considerable debate (Aron 1973).
The institutionalization of a cuisine that emphasized the pleasure of eating had many effects, not the least of which was that in France, more than in other European societies, eating and drinking well came to symbolize the “good life” (Zeldin 1973–7). As such, the grande cuisine became culturally important for all French classes, not only for the middle class that had created it, with the result that cooking and discussions about food and the qualities of wines came to be of paramount importance. Indeed, this self-conscious stylization of eating and drinking by all classes of France led to the description of the French by other Europeans as pleasure-oriented, and the characterization of the French style of living as savoir vivre.
Alcoholic beverages have been a part of human culture since at least the Neolithic period. Yet until recently, beverages made from fruits, grains, or honey were considered to be what historian Wolfgang Schivelbusch (1992) has called “organic,” meaning that the amount of sugar in the ingredients produced the amount of alcohol in the drinks. Examples of such beverages are beer and wine. Beginning in the period from about A.D. 800 to 1300, however, people in China and the West learned to distill alcoholic liquids. This chapter traces the history of distilled alcohol and discusses the nature of several kinds of liquor.
Distillation and Alcoholic Beverages
Distillation is a method for increasing the alcohol content (and, thus, the potency) of a liquid already containing alcohol – the existing alcohol usually being the result of the fermentation of vegetable sugars. The distillation process separates the alcohol from other parts of the solution by heating the liquid to 173° Fahrenheit, a temperature sufficient to boil alcohol but not water. The resulting steam (vaporized alcohol) is collected and condensed, returning it to liquid form – but a liquid with a much higher proportion of alcohol than before. Repeating the process increases the liquor’s potency yet further. Because distilled alcohol contains bad-tasting and dangerous chemicals called fusel oils (actually forms of alcohol) and congeners, both by-products of the distilling process, it is often aged in a procedure, originating in the eighteenth century, that rids the beverage of these chemicals. As the liquid ages, its container (preferably made of wood) colors and flavors it to produce a smoother and better-tasting product (Ray 1974).
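For readers who work in Celsius, the 173° Fahrenheit figure cited above converts as follows (a routine unit conversion, added here for convenience rather than drawn from the original text):

$$T_{\mathrm{C}} = \frac{5}{9}\,(T_{\mathrm{F}} - 32) = \frac{5}{9}\,(173 - 32) \approx 78.3\,^{\circ}\mathrm{C}$$

This sits at roughly the boiling point of ethanol (about 78.4°C) and well below that of water (100°C), which is why holding the still near this temperature vaporizes the alcohol while leaving most of the water behind.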
Iron has played a critical role in the evolution of life. The ancient Greeks, believing iron to be a special gift sent to earth by one of the gods, named it sideros, or star (Liebel, Greenfield, and Pollitt 1979). As the second most common metal, iron accounts for 5 percent of the earth’s crust; it is also found in both sea- and freshwater (Bernat 1983). Scientists believe that the earth’s atmosphere was originally a reducing one with very low oxygen pressure. As a result, large amounts of reduced iron would have been available for living organisms (Bothwell et al. 1979). Iron is an essential element for all organisms, with the possible exception of some Lactobacillus (Griffiths 1987; Payne 1988). In animals, the processes of DNA replication, RNA synthesis, and oxygen and electron transport require iron. Today most iron in the environment exists in an oxidized state and is less available to organisms. However, the problems of extracting insoluble iron have been overcome during evolution. A variety of sophisticated mechanisms have evolved that are specific to different kingdoms and/or different species (e.g., mechanisms plants use to be able to live in acidic or iron-poor environments) (Bothwell et al. 1979). Such mechanisms in animals include iron complexing agents, which transport iron and deliver it to cells, and low-molecular-weight compounds, such as fructose and amino acids, that reduce iron into a soluble form (Griffiths 1987; Simmons 1989: 14).
Metabolic processes within humans involve the presence of free radicals, that is, substances that are reactive because of instability in the arrangement of electrons (Wadsworth 1991). Iron may be present as a free radical. Such instability makes iron highly likely to donate or accept electrons. As a result, iron is versatile and able to serve a number of functions within cells. These functions include acting as a catalyst in electron transport processes and serving as a transporter of oxygen. Iron is a key component of hemoglobin, the oxygen carrier found in red blood cells. It is involved in many other extracellular processes as well (Woods, DeMarco, and Friedland 1990). Iron also is required for collagen synthesis, the production of antibodies, removal of fats from the blood, conversion of carotene to vitamin A, detoxification of drugs in the liver, and the conversion of fuel nutrients to energy (Long and Shannon 1983). In addition to its importance in the maintenance of normal metabolic processes, iron’s involvement in pathological change and the initiation of disease is a critical facet of host defense.
The continent of South America has been a place of origin of many important food plants. Moreover, plant and animal introductions to the Americas made both before and after Columbus have provided an extraordinary diversity of food sources. Culinary traditions based on these diverse foodstuffs show the imprint of indigenous, European, and African cultures, and food production and consumption in these lands stem from an environmental duality of temperate and tropical possibilities. In addition, throughout the twentieth century in South America, the binary distinction between food produced for commercial purposes and food produced for subsistence needs has continued in a way that is unknown in North America. Contrasting nutritional standards and perturbations in supply add to the complexity of the total food situation in South America.
Domesticated Food Sources
The pre-Columbian peoples of South America domesticated more than 50 edible plants, several of which were such efficient sources of food that they have subsequently served as nutritional anchors for much of the rest of the world. The potato, manioc, and sweet potato, each belonging to a different plant family, are among the top 10 food sources in the world today. The potato (Solanum tuberosum and related species) clearly originated in South America, where prior to European contact it was cultivated in the Andes through a range of 50 degrees of latitude. Archaeological remains of these tubers are scanty, but there is little doubt that Andean peoples have been eating potatoes for at least 5,000 years. The center of greatest morphological and genetic variability of potatoes is in southern Peru and northern Bolivia, where they fall into five chromosome (ploidy) levels. That the potato is an efficient source of carbohydrates is well known, but it also provides not insignificant amounts of protein (in some varieties more than 5 percent), vitamins, and minerals. In the Andes, the tuber is traditionally boiled, but now it is also fried. Chuño, a dehydrated form of the fresh tuber, may have been the world’s first freeze-dried food. Working at high elevation, Indians still go through the laborious process of exposing fresh potatoes to both above- and below-freezing temperatures before stepping on them with bare feet in order to make this easily stored form of food.
The ready availability of safe, wholesome food is often taken for granted by citizens of modern societies. However, maintaining the safety of a large, diverse food supply is a challenging undertaking that requires coordinated effort at many levels. In this chapter, the principles of food safety are discussed first with regard to traditional foods and then again as they concern novel foods developed through genetic modification.
Definitions and Priorities
The term “food safety,” as used today, encompasses many diverse areas, including protection against food poisoning and assurance that food does not contain additives or contaminants that would render it unsafe to eat. The term evolved mainly in the context of preventing intoxication by microbial poisons that act quickly (within hours to a day or two of exposure) and often induce such serious symptoms as convulsive vomiting and severe diarrhea, or respiratory failure and death (Cliver 1990). An example of the former is staphylococcal food poisoning, caused by the proteinaceous enterotoxins of the pathogenic bacterium Staphylococcus aureus (Cliver 1990). Botulism is an example of the latter, caused by the neurotoxins synthesized by Clostridium botulinum (Cliver 1990).
Both staphylococcal food poisoning and botulism result from ingesting toxins that are preformed in the implicated foods. For illness to ensue, it is necessary only to ingest the toxin, not the microbe itself. Food poisoning may also follow the ingestion of certain pathogenic bacteria, such as Salmonella, which produce gastrointestinal (GI) infections, and Escherichia coli O157:H7, which produces an infection and a potent toxin within the GI tract (Cliver 1990). The infection results in bloody diarrhea, while the toxin enters the bloodstream and induces kidney damage.
Eggs from many species of fowl have doubtless been consumed since the very beginning of humankind’s stay on earth. In historical times, ancient Romans ate peafowl eggs, and the Chinese were fond of pigeon eggs. Ostrich eggs have been eaten since the days of the Phoenicians, whereas quail eggs, as hard-cooked, shelf-stable, packaged products, are now featured on many gourmet food counters in the United States and Japan. Other eggs consumed by various ethnic groups include those from plovers, partridges, gulls, turkeys, pelicans, ducks, and geese. Turtle eggs have been highly prized, and in starvation situations, any eggs, even those of alligators, have been relied upon.
In this chapter, however, only avian eggs (and these mostly from the chicken) are discussed. Avian eggs in themselves constitute a huge subject: In 1949, A. L. Romanoff and A. J. Romanoff published a book in which they attempted to compile all the facts known, at the time, about the eggs of birds. It contained over 2,400 reference citations.
It is almost obligatory in writing about eggs to first deal with that age-old question: Which came first, the chicken or the egg? Those who believe in creationism rely on holy books, like the Bible, which specify that animals were created. Thus, the chicken came first. But, as Harold McGee has pointed out, the eggs of reptiles preceded by far the evolution of the first birds; consequently, “[e]ggs … are millions of years older than birds.” He added that “Gallus domesticus, the chicken, more or less as we know it, is only 4 to 5 thousand years old, a latecomer even among the domesticated animals” (1984: 55).
Most research on nutrition and human mental development has focused on protein–energy malnutrition (PEM), which consists of deficits in energy and protein as well as other nutrients (Golden 1988). But there is also an extensive literature on the importance to mental development of trace elements and vitamins, as well as the impact of short-term food deprivation. Thus, although the bulk of this essay focuses on PEM and mental development, we begin with an examination of these other areas of concern.
Vitamins and Trace Elements
General Vitamin and Mineral Deficiencies
It is well understood that severe vitamin deficiencies may have drastic effects on mental development. Serious thiamine and niacin deficiencies, for example, as well as those of folic acid and vitamin B12, can cause neuropathy (Carney 1984). But milder, subclinical vitamin deficiencies are much more common, and thus their influence on mental development is presumably of much greater importance. Unfortunately, the extent to which multivitamin and mineral supplements influence intelligence in schoolchildren remains unknown, although this question has been the subject of at least five clinical trials (Schoenthaler 1991).
One study of 90 Welsh children using a multivitamin-mineral supplement over a nine-month period indicated that supplementation produced an increase in nonverbal IQs (Benton and Roberts 1988). A similar study of 410 children in the United States over 13 weeks also revealed an overall increase in nonverbal IQs (Schoenthaler 1991). However, in a Belgian study of 167 children who were supplemented for five months, only boys whose diets had previously been nutritionally deficient showed an increase in verbal IQs (Benton and Buts 1990). Other studies, one in London and the other in Scotland, reported no significant effects of supplementation (Naismith et al. 1988; Crombie et al. 1990).
Describing the principal sources of food for the inhabitants of Africa south of the Sahara is a relatively easy task. Most diets are dominated by products made from a single staple crop, and there are not all that many of them. Maize, sorghums, pearl or bulrush millet, and rice are the prominent grains, and cassava, yams, and bananas or plantains account for most of the vegetatively propagated varieties. Furthermore, their general geographies can be explained, for the most part, by annual totals and seasonality of rainfall. For example, near the dry margins of cropping, pearl millet makes its greatest dietary contribution, whereas the equatorial zone is where bananas and plantains come to the fore. Even adding in the role played by livestock, one that varies from insignificant to crucial, does not overly complicate the picture. Among farmers, fowl are fairly ubiquitous, while sheep, goats, and cattle are kept wherever diseases, especially sleeping sickness, do not prohibit them. When aridity intervenes to make crop cultivation too hazardous to rely upon, the herding of camels or cattle becomes the primary subsistence activity.
The problems come when attempting to go much beyond this level of generality. There is a plethora of other foods that are important to diets, including those from wild sources, and matters get even more difficult to sort out when issues of history, culture, and nutritional adequacy must be addressed. The region’s human diversity is enormous, and most food systems display a complex interweaving of influences, ranging from the distant past to the present. Unfortunately, trying to understand what has happened through time is hindered by a dearth of information. The written record is sparse before the twentieth century, and archaeology, so far, has produced very few dates. As a result, temporal insights often must rely on somewhat less precise sources of information, such as paleobotany, historic and comparative linguistics, and cultural anthropology.
The oil palm (Elaeis guineensis) is a native of West Africa. It flourishes in the humid tropics in groves of varying density, mainly in the coastal belt between 10 degrees north latitude and 10 degrees south latitude. It is also found up to 20 degrees south latitude in Central and East Africa and Madagascar in isolated localities with a suitable rainfall. It grows on relatively open ground and, therefore, originally spread along the banks of rivers and later on land cleared by humans for long-fallow cultivation (Hartley 1988: 5–7).
The palm fruit develops in dense bunches weighing 10 kilograms (kg) or more and containing more than a thousand individual fruits similar in size to a small plum. Palm oil is obtained from the flesh of the fruit and probably formed part of the food supply of the indigenous populations long before recorded history. It may also have been traded overland, since archaeological evidence indicates that palm oil was most likely available in ancient Egypt. The excavation of an early tomb at Abydos, dated to 3000 B.C., yielded “a mass of several Kilograms still in the shape of the vessel which contained it” (Friedel 1897).
A sample of the tomb material was submitted to careful chemical analysis and found to consist mainly of palmitic acid, glycerol in the combined and free state, and a mixture of azelaic and pimelic acids. The latter compounds are normal oxidation products of fatty acids, and the analyst concluded that the original material was probably palm oil, partly hydrolyzed and oxidized during its long storage.
The tomato is a perennial plant, generally cultivated as an annual crop. It can be grown in open fields, weather permitting, or in protective structures when temperatures are extreme. In commercial operations, tomatoes are usually planted as a row crop and harvested mechanically when they are still in the green stage. They can also be trained on trellises and harvested throughout most of the year by hand. Tomatoes adapt well and easily to a wide diversity of soils and climates, but they produce best in well-drained soil and temperate climate, with at least a few hours of sunlight each day.
The tomato contains significant amounts of the vitamins A and C, although probably less than the general public has been led to believe. Its importance as a provider of these vitamins depends more on the quantity consumed than on the amount of the vitamins in each fruit. Its vivid color, the fact that it can be used as both a raw and a cooked vegetable, and its ability to blend easily with other ingredients have made the tomato a popular international food item and one of the most important vegetables on the world market.
Enormous changes have taken place in the use and distribution of the tomato since the time of its prehistoric origins as a wild, weedy plant. A multidisciplinary research strategy, using archaeological, taxonomical, historical, and linguistic sources is employed in this chapter to trace this remarkable transformation. And finally, special attention is given to the tomatoes of Mexico because that region is believed to have been the center of the domestication of the species and because it is there that tomatoes have the longest history of use, beginning with the indigenous population.
According to the first edition of the Encyclopaedia Britannica (1771), aphrodisiacs are “medicines which increase the quantity of seed, and create an inclination for venery.” Since the twentieth-century advent of sexual endocrinology, the definition of an aphrodisiac has become restricted to “a substance which excites sexual desire” (Stedman’s Medical Dictionary, 25th edition, 1990). The search for aphrodisiacs is rooted in universal anxieties about sexual performance and fertility. In many instances since ancient times, a distinction has been made between substances that were alleged to improve fertility (quantity of seed) and those that only stimulate the sex drive (inclination to venery). Some authorities held that the latter could be achieved only by way of the former.
The scope of this essay is limited geographically to Europe and the Near East and, so far as possible, to foods and their preparation. Adequate nourishment has always been recognized as a requirement for health and a normal level of sexual activity, although the norm for the latter undoubtedly varies somewhat among cultures.
It is uncertain when, and by what indications, ancient medical practice differentiated the nutritive from the medicinal qualities of foods. A rather clear distinction, however, was made by Heracleides of Tarentum, a Greek physician of the first century B.C. Writing about aphrodisiacs, he said that “bulbs, snails, eggs and the like are supposed to produce semen, not because they are filling, but because their very nature in the first instance has powers related in kind to semen” (Athenaeus 1951: 275).
Traditional foodways have played an intrinsic part in the daily lives of the Native American peoples of the Arctic and Subarctic. Unlike other Americans, whose visits to their local grocery stores for food are seldom memorable, the people of Minto, an Athapaskan community in interior Alaska, could look at a piece of dried fish and remember where they caught it, the activity on the river, and the congratulations received from family members. The point is that food is more intimate for those who catch, grow, or gather it than for those who simply drop it into a shopping cart. The procurement, processing, preparation, and serving of food unite such people with their history, their future, and each other. The use of local resources serves as a direct emotional and spiritual link to the environment on which they depend.
This chapter explores the prehistoric, historic, and current dietary patterns of Native Americans in Alaska, Canada, and Greenland. Because of the wide variety of cultures in the Subarctic and Arctic, this is necessarily a general discussion.
The People
The Native American groups of the Arctic and Subarctic consist of two major genetic and linguistic populations – the Northern Athapaskan Indians and the Eskimo. In Alaska and Canada, the Eskimo are generally coastal people who are believed to have entered North America some 9,000 years ago. The older denizens are the Northern Athapaskans, located for the most part in the interior of Alaska and Canada, who are thought to have crossed the Bering Strait about 15,000 years ago.
Environment
Subarctic
The Tanana people are Athapaskans who reside in the area of Minto Flats on the Alaska Plateau, which is dissected by the Yukon, Tanana, and Kuskokwim rivers. The landscape includes mountain ranges of 3,000 to 4,000 feet, rivers, streams, marshes, grassy fields, and islands.
“Pig” is a term used synonymously with “hog” and “swine” for the one domesticated suid species, Sus scrofa domesticus. In livestock circles, a pig becomes a hog when it passes the weight threshold of 50 kilograms. The word “swine” transcends age and sex but to many has a pejorative ring. A “gilt” is any immature version of a sow, whereas a “barrow” is a young, castrated male that can never grow up to become a boar. After a piglet has been weaned, it becomes a “shoat.” Most of these terms are not used by a general public whose only encounter with this animal is in the supermarket. Yet this often maligned beast yields some of the world’s best-tasting flesh and provides good-quality protein in large amounts.
Domestication
All domesticated pigs originated from the wild boar (Sus scrofa) (Epstein 1984). Within that one wild species, more than 20 subspecies are known in different parts of its natural range, which has extended from the British Isles and Morocco in the West to Japan and New Guinea in the East. But where in this vast stretch of territory the first domestication occurred is still uncertain, although the earliest archaeological records (c. 7000–5000 B.C.) have been concentrated in the Middle East and eastern Mediterranean.
Indeed, bones of domesticated pigs have been recovered at Jericho (Palestine), Jarmo (Iraq), Catal Huyuk (Turkey), and Argissa-Margula (Greece), as well as at other sites. But bones older than any of those from these sites were uncovered in 1994 at Hallan Cemi in southeastern Turkey. There, in the foothills of the Taurus Mountains, the pig was apparently kept as early as 8000 B.C., making it the oldest known domesticated creature besides the dog. Moreover, pig keeping at this site was found to predate the cultivation of wheat or barley. Both findings contradict the long-held twin assertions that sheep and goats were the world’s earliest domesticated herd animals and that crop growing preceded the raising of herd animals.
In writing about the history of food and drink in pre-Columbian North America, one is reminded that for the temperate part of the continent, we are describing cultures primarily known only through archaeological and archival research. Very few native populations survived the events of the past five centuries, and those that did endured considerable cultural modifications. Nonetheless, many of the foods and drinks they used became important legacies to the new North American and global foodways that emerged after 1492, and certainly such foods were critical to the survival of the first European colonists who established permanent communities there.
Perhaps the most important of these were pumpkins, squash, beans, and maize (corn), and although few of these crops were originally domesticated in temperate North America, today they, as well as indigenous cultivation and preparation techniques, continue to be valued.
The practice of mixing maize, beans, and squash in gardens was developed by Native Americans, who also contributed many maize dishes, including hominy, grits and other gruels, breads made with corn flour, corn on the cob, and succotash (Hudson 1976: 498–9). Early North Americans gave sunflowers to the world’s economy and contributed to the development of modern strawberry, blackberry, raspberry, blueberry, cranberry, hickory, and pecan varieties (Trager 1970: 278–80; Hedrick 1972). Finally, such preservation techniques as drying fruits or vegetables and curing meat by smoking over hickory coals have Native American antecedents (Hudson 1976: 499).
The basic elements of healthful diets are well established (USDHHS 1988; National Research Council 1989; USDA/USDHHS 1995). They provide adequate amounts of energy and essential nutrients, reduce risks for diet-related chronic diseases, and derive from foods that are available, affordable, safe, and palatable. A very large body of research accumulated since the mid-1950s clearly indicates that healthful diets are based primarily on fruits, vegetables, and grains, with smaller quantities of meat and dairy foods than are typically included in current diets in the United States and other Western countries (James 1988; USDHHS 1988; National Research Council 1989).
Throughout the course of history, societies have developed a great variety of ways to combine the foods that are available to them (as a result of geography, climate, trade, and cultural preferences) into characteristic dietary patterns. In some areas, typical diets have developed patterns so complex, varied, and interesting in taste that they have come to be identified as particular cuisines. Some of these, most notably those of Asia and the Mediterranean, seem to bless the populations that consume them with substantially lower levels of coronary heart disease, certain cancers, diabetes mellitus, and other chronic diseases than those suffered by other peoples. Consequently, such apparent relationships between cuisines and health have created much interest in traditional dietary patterns.
Illustrative is the current interest in Mediterranean diets that has been stimulated by the unusually low levels of chronic diseases and the longer life expectancies enjoyed by adults residing in certain regions bordering the Mediterranean Sea (WHO 1994). These advantages are difficult to explain in the context of those factors usually associated with disease prevention in industrialized countries, such as educational levels, financial status, and health-care expenditures. Indeed, the percentages of those who are poor in Mediterranean regions are often quite high relative to those of more developed economies (World Bank 1993). To explain this paradox, researchers have focused on other lifestyle characteristics associated with good health, and especially on the various constituents of the typical Mediterranean diet.
Buckwheat (Fagopyrum esculentum Moench) is a crop commonly grown for its black or gray triangular seeds. It can also be grown as a green manure crop, a companion crop, a cover crop, a source of nectar for bees in the production of buckwheat honey, and a pharmaceutical plant yielding rutin, which is used in the treatment of capillary fragility. Buckwheat belongs to the Polygonaceae family (as do sorrel and rhubarb). Whereas cereals such as wheat, maize, and rice belong to the grass family, buckwheat is not a true cereal; its grain is a dry fruit.
Buckwheat is believed to be native to Manchuria and Siberia and, reportedly, was cultivated in China by at least 1000 B.C. However, fragments of the grain have been recovered from Japanese sites dating from between 3500 and 5000 B.C., suggesting a much earlier date for the grain’s cultivation. It was an important crop in Japan and reached Europe through Turkey and Russia during the fourteenth and fifteenth centuries A.D., although legend would have it entering Europe much earlier with the returning Crusaders. Buckwheat was introduced into North America in the seventeenth century by the Dutch, and it is said that its name derives from the Dutch word bochweit (meaning “beech wheat”), because the plant’s triangular fruits resemble beechnuts. In German the name for beech is Buche, and for buckwheat, Buchweizen. Buckwheat has a nutty flavor and, when roasted (kasha), a very strong one. It is a hardy plant that grew in Europe where other grains did not and, thus, supplied peasants in such areas with porridge and pancakes.
Throughout the world there is enough food to feed every human being. Yet hunger and malnutrition persist. “Food security” – that is, access to culturally acceptable nutriments, through normal channels, in quantities sufficient for daily life and work – should be among the most basic of universal human rights. Hunger, poverty, and marginalization are caused by political and economic forces and decisions, which result in entitlement failures that undermine food security at the household level.
Having enough to eat depends upon access to at least a minimum “floor” level of the means of subsistence. In one sense, human history may be viewed as a gradual expansion of a sense of responsibility for others, which helps to secure that minimum “floor” for ever-increasing numbers of people. The concept of an entitlement to subsistence for households within one’s own clan has been accepted for ages. Such “food security” became available to citizens of Greece and Rome thousands of years ago and was extended to most Europeans beginning about 200 years ago (Kates and Millman 1990: 398–9).
In spite of this record of progress, however, hundreds of millions of people throughout the world suffer unnecessarily from hunger and malnutrition, and, although the proportion of hungry people is diminishing, their total number continues to grow. The absolute number of hungry people was projected to keep increasing between 1990 and 2000, then to decline gradually, reaching about 3 percent of the world’s population by 2050. “In the meantime, half of the world’s women who carry the seeds of our future may be anemic, a third of the world’s children may be wasted or stunted in body or mind, and perhaps a fifth of the world’s people can never be sure of their daily bread, chapati, rice, tortilla, or ugali” (Kates and Millman 1990: 405). Today, some 1 billion children, women, and men daily confront chronic hunger and, consequently, the specters of starvation, undernutrition, deficiencies of iron, iodine, and vitamin A, and nutrient-depleting diseases (Kates 1996: 4–6).