Peanut or groundnut (Arachis hypogaea L.) is a major world crop and a member of the Leguminosae family, subfamily Papilionoideae. Arachis is Greek for “legume,” and hypogaea means “below ground.” Arachis, as a genus of wild plants, is South American in origin, and the domesticated Arachis hypogaea was diffused from there to other parts of the world. The origin of Arachis hypogaea var. hypogaea was in Bolivia, possibly as an evolutionary adaptation to drought (Krapovickas 1969). Certainly the archaeological evidence of the South American origins is secure. However, the debate about the pre-Columbian presence of New World plants in Asia (especially India) remains unresolved. The other species of Arachis that was domesticated prehistorically by South American Indians was A. villosulicarpa, but it has never been cultivated widely.
As the peanut’s nutritional and economic importance became recognized, it was widely cultivated in India, China, the United States, Africa, and Europe. Thus, the peanut is another of the New World food crops that are now consumed worldwide. The peanut is popular as a food in Africa and in North America, especially in the United States, where peanut-fed pigs produce the famous Smithfield ham of Virginia, peanut butter is extremely popular, and there is much interest in peanut cultivation.
Botanically, the varieties of peanuts are distinguished by branching order, growth patterns, and number of seeds per pod. The two main types of peanuts, in terms of plant growth, are “bunch or erect,” which grow upright, and “runners or prostrate,” which spread out on or near the ground. Commercially, peanuts are grouped into four market varieties: Virginia, Runner, Spanish, and Valencia. The former two include both bunch and runner plants, and the latter two are bunch plants. Details on the life cycle and growth of the peanut and its harvesting are provided later in this chapter (Lapidis 1977). Table II.D.2.1 shows the various characteristics of the four varieties.
The science of nutrition has influenced consumers in their choices about the kinds and optimal amounts of food to eat, but there are other influences as well, such as prosperity levels within a given population, the efficiency of transportation and distribution systems, and the standards of hygiene maintained by food producers, processors, and retailers.
One factor, however, that has not received much scholarly attention is the increased role of the state, either through direct or indirect means, in the production, distribution, and consumption of food. Only recently have historians addressed the development of food policies (mostly those in Europe) in order to understand the state’s role in controlling and monitoring food supplies.
In early modern European societies, the maintenance of public order and the control of food supply were intimately related; religious and secular authorities alike had a vested interest in ensuring that production met the demands of consumption. The actions of these authorities (the distribution of food or price-fixing, for example) were largely responses to localized crises. What distinguishes modern food policies from their early modern antecedents are the intended goals of these policies, as well as the scientific nature of their implementation.
The rise of industrialization and urbanization in the nineteenth century prompted new concerns about food supplies. The competitive nature of an industrialized, capitalist food market increased popular anxieties about adulteration; one of the more important roles of the state was to regulate the hygienic quality of food supplies. The economic conditions of the nineteenth century also provoked greater concern that populations risked dietary deficiencies and, therefore, poor health. Social reformers and scientific experts took a more active and deliberate interest in monitoring the health of the laboring classes through the measurement of food consumption levels.
If the history of a dietary culture is, in many ways, the history of a people, then the evolution of Korea’s dietary traditions clearly reflects that nation’s turbulent history. Geography and environment play a decisive role in determining the foundation of a nation’s dietary culture, whereas complex political, economic, and social conditions and interactions with other cultures contribute to further development.
Traditional dietary strategies must balance the need for sufficient calories and specific nutrients with the need to avoid or minimize diseases associated with foods that are contaminated, spoiled, or otherwise unhealthy. An account of traditional diets should, therefore, deal with food- and waterborne diseases as well as with typical foods and cooking methods. Once dietary habits and food preferences have been established, they become a central part of the culture and are highly resistant to change.
It is not uncommon, however, to find that in the course of exchanges between cultures, foreign foods have become so thoroughly adapted to local conditions that their origins are quite forgotten. In a rapidly changing and interdependent world, it is important to understand the historical background of traditional diets and the impact of modernization in order to maintain and develop dietary strategies that balance cherished traditions with new circumstances. An understanding of the traditional foods of Korea, therefore, requires a brief overview of Korean geography and history.
Korea occupies the mountainous peninsula south of Manchuria; the Yellow Sea separates Korea from mainland China to the west. Japan is only 206 kilometers (km) away across the Korea Strait to the south. Because of its strategic location, Korea has a history that has been intimately linked to developments in China, Japan, and other Asian countries. The total size of the peninsula is about that of the state of New York. It was artificially divided along the 38th parallel as the result of World War II and the Korean War, with the area of the northern zone about 122,370 square kilometers (sq km) and that of the Republic of Korea about 98,173 sq km. The peninsula is approximately 1,000 km in total north-south length and 216 km wide at its narrowest point, with a rugged coastline about 17,269 km long. Korea has long been a cultural bridge and a mediator between China and Japan and often the target of their territorial ambitions and aggression. Devastated and exhausted by centuries of conflict, the “Hermit Kingdom” during the sixteenth century embarked on a policy of isolationism that kept Korea virtually unknown to the West until the last decades of the nineteenth century.
Bananas represent one of the most important fruit crops, second only to grapes in the volume of world production (Purseglove 1988). J. F. Morton (1987) indicates that bananas are the fourth largest fruit crop after grapes, citrus fruits, and apples. Bananas and plantains are starchy berries produced by hybrids and/or sports of Musa acuminata Colla and Musa balbisiana. Rare genome contributions from another species may have occurred but are not yet well documented (Simmonds 1986). Additionally, fe'i bananas are obtained from Musa troglodytarum. Bananas may be differentiated from plantains on the basis of moisture content, with bananas generally averaging 83 percent moisture and plantains 65 percent (but intermediate examples may also be found) (Lessard 1992). Bananas may be eaten raw or cooked. Plantains are usually eaten cooked. Commonly, bananas which are eaten raw are referred to as dessert bananas. Throughout this essay, the term “bananas” is used to refer to both bananas and plantains.
Bananas, being primarily carbohydrates (22.2 to 31.2 percent), are low in fats, cholesterol, and sodium. Potassium levels are high (400 milligrams per 100 grams of pulp). Bananas are also good sources of ascorbic acid, with 100 grams providing 13.3 to 26.7 percent of the U.S. RDA (Stover and Simmonds 1987). During ripening, the starch component is gradually converted to simple sugars (fructose, glucose, and sucrose), while the moisture content of the pulp increases. The timing of conversion to simple sugars can also be used to differentiate plantains/cooking bananas (later conversion) from bananas that are eaten raw (earlier conversion).
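Those percent-of-RDA figures can be converted back to absolute amounts. The following is a minimal sketch in Python; the 60-milligram U.S. RDA for ascorbic acid is an assumption supplied for illustration, not a figure stated in the text:

```python
# Back-calculate ascorbic acid content per 100 g of banana pulp
# from the cited range of 13.3 to 26.7 percent of the U.S. RDA.
# Assumes a U.S. RDA of 60 mg for ascorbic acid (vitamin C).
RDA_MG = 60.0

def mg_from_rda_percent(percent: float, rda_mg: float = RDA_MG) -> float:
    """Convert a percent-of-RDA figure to milligrams."""
    return rda_mg * percent / 100.0

low = mg_from_rda_percent(13.3)   # lower end of the cited range
high = mg_from_rda_percent(26.7)  # upper end of the cited range
print(round(low, 1), round(high, 1))
```

Under that assumption, the cited range works out to roughly 8 to 16 milligrams of ascorbic acid per 100 grams of pulp.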
Banana Plants
Bananas are monocarpic (fruiting once, then dying), perennial, giant herbs that usually are propagated via lateral shoots (suckers). Leaves are produced by a single apical meristem, which typically forms only a low short stem or pseudobulb. The leaves are tightly rolled around each other, producing a pseudostem with a heart of young, emerging, rolled leaves ending with the terminal production of a huge inflorescence (usually sterile) and, finally, the starchy fruits: bananas or plantains.
This final portion of the book is perhaps the most ambitious. It was initially conceived of as a dictionary of the exotic plants mentioned in the text, which our authors would otherwise be called upon to identify in their chapters and, in so doing, interrupt their narratives.
The expansion of Part VIII began when it was decided to include entries on all plant foods mentioned in the text and continued when it became apparent that the various fruits of the world do not lend themselves to generalized essays, because many have been mostly seasonal items in the diets of relatively few – and often unrelated – people. For example, the ancient Malaysians ate the “Java apple” (Eugenia javanica) when it was ripe, whereas, on the other side of the world, Native Americans of Brazil did the same with their “pitanga” (Eugenia uniflora). The plants that produce these two fruits are both in the same genus of the family Myrtaceae, but there is little that historically connects their human consumers (unlike the consumers of maize or wheat or potatoes). Thus, save for a few staples (bananas and plantains, for example), fruits really did not seem to belong in the earlier parts of the work dealing with staple foods, and when it was decided to treat fruits in individual dictionary entries – and not as botanical families, or even, as a rule, as genera – there seemed no question that these entries should be included in Part VIII.
The difficulty that confronts one at the very beginning of a study of the dog as human food is the lack of convincing evidence relating to the use of dogs (Canis familiaris) as food by early humans. Most of the discussions in published articles are either speculative or employ relatively recent evidence of dog consumption and extrapolate these data to the distant past.
There are some fine published discussions that relate to the eating of dogs in more recent times. One of the best is by Margaret Titcomb (1969), who documents the importance of the dog to human cultures in the ancient Pacific area. Titcomb believes (as do I) that the dog of the Pacific Islands was originally carried by canoe from the Asian mainland along with the pig and chicken for food on the voyage.
It was fed carefully upon vegetables such as taro and breadfruit so that its flesh was palatable to its human owners. Captain James Cook noted in his journal of 1784 that dogs were fed and herded with the hogs. Titcomb (1969) relates that they were fattened on poi and that an informant remembered that relatives penned up puppies to keep them from eating unclean food. They were fed three times a day on poi and broth until fattened. Natives considered baked dog a great delicacy, but they never offered it to foreigners, who held dog meat in great abhorrence, according to the wife of a missionary in 1820.
Preparation of dogs as food is abstracted by Titcomb (1969), who writes: “Like the pig, the dog was killed by strangulation in order not to lose the blood. The blood was poured into a calabash (gourd) used for cooking, and red hot stones were added, causing the blood to coagulate; it was then ready to eat.” The meat might be baked or cut up and wrapped in ti leaf bundles and cooked in boiling water. An informant stated that the upper part of the backbone was preferred by many diners, although some liked the ribs best, and others preferred the brain. The latter has been attested to by dog skulls, with holes in the crania, found in archaeological excavations.
Dietary patterns in Russia display marked continuities over most of the past millennium or so. Staple foodstuffs have remained remarkably constant, and despite the introduction of new foods and beverages in later centuries and the gradual eclipse of a few items, the diets of the vast majority of the population underwent little qualitative change until well into the nineteenth century. Russia, relatively isolated from the West until the reigns of Peter I and Catherine in the eighteenth century, was as conservative in its cuisine as it was in politics and society, and the sharp gap between rich and poor was reflected in what they ate and drank.
Russia is defined for the purposes of this study as the lands inhabited by the modern eastern Slavic peoples, the Belorussians in the west, the Ukrainians in the south, and the Russians in the north and center of “European Russia.” Brief mention is made of the Baltic, Transcaucasian, Siberian, and central Asian peoples, primarily as their foods influenced the diets of their Slavic rulers in the Russian Empire and the Soviet Union. Imperial Russia also controlled Finland and much of Poland during the nineteenth century, but these areas are not considered here.
Peoples ancestral to the modern eastern Slavs apparently began spreading out from their homeland in the territory near the modern borders of Poland, Belarus, and Ukraine around the seventh century. They moved into the forests of central and northern Russia at the expense of scattered Finnic peoples, most of whom were eventually absorbed or displaced. Expansion into the grasslands of the Ukraine and beyond was much slower because the steppes were dominated by pastoral peoples of Turkic and Mongol stock. The medieval Kievan state was able to hold the horsemen at bay for a while, but by the twelfth century the Slavs began to retreat northward under nomad pressure. Not until the sixteenth century was the new Muscovite state strong enough to begin the reconquest of the Ukraine and extend Russian power down the Volga. Traditional Russian cuisine developed in the forest zone but was profoundly influenced by expansion into the grasslands and along trade routes.
Over the course of the twentieth century, cardiovascular disease (CVD) has become the leading cause of death in the United States. CVD is also a significant cause of morbidity and mortality in many other industrialized countries and regions, such as Scandinavia, the United Kingdom, Australia, and Canada. Most CVD is manifested as coronary artery disease (CAD), usually based on atherosclerosis. This “epidemic” of CVD has been attributed to the poor lifestyle habits of members of late-twentieth-century industrialized, urban society, who smoke tobacco, exercise rarely, and indulge in fat-laden diets (Kannel 1987).
A striking similarity of these factors leading to disease is that each – in most cases – can be modified by an individual at risk for coronary disease, even without professional guidance and in the absence of public health initiatives. But risk factors are not always easily eliminated. Addiction to tobacco is difficult to overcome. Exercise may be problematic for some people, given constraints on time posed by other obligations. Everyone, however, must eat, and perhaps for this reason, of all the possible causes of heart-related diseases, diet has received the most attention.
This chapter explores the relationship between nutrition and heart-related diseases by describing selected nutrients that have been implicated in the pathogenesis, prevention, or treatment of CAD. Most of the available data come from population studies. It appears unlikely that any single nutrient will soon be identified as the specific dietary agent that causes atherosclerotic diseases. Moreover, any individual nutrient is but a small part of a larger group of chemicals that make up any particular “food”.
The question of prehistoric dietary practices has become an important one. Coprolites (desiccated or mineralized feces) are a unique resource for analyzing prehistoric diet because their constituents are mainly the undigested or incompletely digested remains of food items that were actually eaten. Thus they contain direct evidence of dietary intake (Bryant 1974b, 1990; Spaulding 1974; Fry 1985; Scott 1987; Sobolik 1991a, 1994a, 1994b). In addition they can reveal important information on the health, nutrition, possible food preparation methods, and overall food economy and subsistence of a group of people (Sobolik 1991b; Reinhard and Bryant 1992).
Coprolites are mainly preserved in dry, arid environments or in the frozen arctic (Carbone and Keel 1985). Caves and enclosed areas are the best places for preserved samples and there are also samples associated with mummies. Unfortunately, conditions that help provide such samples are not observed in all archaeological sites.
Coprolite analysis is important in the determination of prehistoric diets for two significant reasons. First, the constituents of a coprolite are mainly the remains of intentionally eaten food items. This type of precise sample cannot be replicated as accurately from animal or plant debris recovered from archaeological sites. Second, coprolites tend to preserve small, fragile remains, mainly because of their compact nature, which tends to keep the constituents separated from the site matrix. These remains are typically recovered by normal coprolitic processing techniques, which involve screening with micron mesh screens rather than the larger screens used during archaeological excavations.
In prehistoric times, the water content of the immature coconut fruit was more important as a drink than was any part of the mature nut as a food. In recent history, the emphasis has also been on a nonfood use of coconuts as oil. The oil extracted from the kernel of the ripe coconut is an industrial raw material for products ranging from soap to explosives. From prehistory to the present, coconut has served many human communities around the tropics in a variety of ways. In 1501, King Manuel of Portugal itemized some of its uses at a time when the coconut was first becoming known in Europe: “[F]rom these trees and their fruit are made the following things: sugar, honey, oil, wine, vinegar, charcoal and cordage … and matting and it serves them for everything they need. And the aforesaid fruit, in addition to what is thus made of it, is their chief food, particularly at sea” (Harries 1978: 277).
Unfortunately, it is not possible to provide as much information as one might want on the coconut in prehistory. This is because heat and humidity work against the preservation of fossils, and thus there is a dearth of archaeological materials, coprolites, and biological remains on tropical seashores where the coconut palm is native. Coconut residues do not accumulate because the palm grows and fruits the year round. This makes crop storage unnecessary and, in fact, because of their high water content, coconut seednuts cannot be stored; they either grow or rot. And the tender, or jelly, coconut is even less likely to survive in storage.
The sweet liquid in the immature fruit, however, is safe to drink where ground water may be saline or contaminated. It is a very pleasant drink, and coconuts are readily transported by land or sea.
Rather than undertaking a categoric examination of the myriad toxins in food, this essay highlights various considerations that should provide a sense of perspective in viewing toxins as a whole. It is important to realize that toxic substances must negotiate the various degradation and propulsive properties of a gastrointestinal tract in order to be absorbed and exert a harmful effect on the body. The ability of the toxin to be absorbed helps determine the amount of a substance that must be ingested before toxic effects become manifest. Moreover, the handling of ingested toxins by an immature gastrointestinal tract of a premature or term newborn infant may be different from that of the fully developed gastrointestinal tract.
Development of the tract begins during the first 12 weeks of gestation as it matures from a straight tube to one that is progressively convoluted, and the surface area for absorption increases. Over the next six months, the gut acquires a sophisticated immune system and the capacity to digest complex carbohydrates, fats, and proteins. Not all of these mechanisms, however, are fully functional until several months after birth.
The extent to which ingested substances, including food and toxins, are absorbed by the intestine is dependent on the capabilities it has developed to deal with carbohydrates, fats, proteins, water, and ions. These are highly complex issues about which varying amounts of information are understood. Nevertheless, a general concept of how absorption occurs may help put the discussion of food toxins in context.
Rice has long been the main staple of the traditional Japanese diet. It is not only consumed daily as a staple food but also used to brew sake, a traditional alcoholic drink. Japanese cuisine has developed the art of providing side dishes to complement consumption of the staple food. Table manners were also established in the quest for more refined ways of eating rice and drinking sake at formal ceremonial feasts. The history of the Japanese diet, which is inseparable from rice, started therefore with the introduction of rice cultivation.
Subsistence during the Neolithic period in Japan (known as the Jōmon era, beginning about 12,000 years ago) was provided by hunting and gathering. Agriculture did not reach the Japanese archipelago until the very end of the Neolithic period. Collecting nuts (especially acorns and chestnuts) and hunting game were common activities, and a large variety of marine resources was intensively exploited throughout the period. The Jōmon era, however, ended with a shift from hunting and gathering to sedentary agriculture.
The Yangtze delta in China is considered to be the original source for the practice of rice cultivation in Japan. Continuous waves of migrants bearing knowledge of the technique reached Japan from the continent around 2,400 years ago via two major routes. One was through the Korean peninsula and the other was a direct sea route from China. Rice production techniques were accompanied by the use of metal tools, which provided high productivity and a stable supply. Population increased rapidly, and localized communities appeared in the following Yayoi era (1,700 to 2,400 years ago). Paddy-field rice cultivation was then under way except in the northern Ainu-dominated region of Hokkaido and in the southern Okinawa islands, an island chain between Kyūshū (the southernmost main island of Japan) and Taiwan.
Potassium (K) is found in virtually all aerobic cells and is essential to life. It is the third most abundant element in the human body (after calcium and phosphorus) and the eighth most abundant element in the earth’s crust, with a mass percent of 1.8, which means that every 100 grams (g) of the earth’s crust contains 1.8 g of potassium. Potassium is a very reactive alkali metal with an atomic number of 19 and an atomic weight of 39.098 atomic mass units (amu). Its outer “4s” electron is not bound very tightly to the atom, which is therefore easily ionized to K+ (Dean 1985), and potassium reacts readily with chlorine to form the salt potassium chloride. Potassium chloride is a white crystalline solid at room temperature with alternating potassium ions and chloride ions on the lattice sites. Potassium is found primarily in seawater and in natural brines in the form of chloride salt. The minerals mica and feldspar also contain significant quantities of potassium (Dean 1985).
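The mass-percent and atomic-weight figures above lend themselves to a short worked calculation. This is a minimal sketch in Python using only the numbers cited in the text; the helper names are illustrative:

```python
# Potassium in the earth's crust: a mass percent of 1.8 means
# 1.8 g of potassium in every 100 g of crustal material.
MASS_PERCENT_K = 1.8
ATOMIC_WEIGHT_K = 39.098  # atomic mass units, i.e. grams per mole

def grams_of_potassium(sample_g: float) -> float:
    """Grams of potassium in a crustal sample of the given mass."""
    return sample_g * MASS_PERCENT_K / 100.0

def moles_of_potassium(sample_g: float) -> float:
    """Moles of potassium in a crustal sample of the given mass."""
    return grams_of_potassium(sample_g) / ATOMIC_WEIGHT_K

print(round(grams_of_potassium(100.0), 6))  # 1.8, as stated in the text
```

Dividing grams by the atomic weight gives moles, so a 100-gram crustal sample holds about 0.046 mole of potassium.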
The Discovery of Elemental Potassium
Potassium was first isolated in 1807 by Humphry Davy (1778–1829), who electrolyzed “potash” with a newly invented battery designed to contain a series of voltaic cells, with electrodes made out of zinc and copper plates dipped in a solution of nitrous acid and alum. In Davy’s time, the term “potash” referred to any number of different compounds, including “vitriol of potash” (potassium sulfate), “caustic potash” (potassium hydroxide), and “muriate of potash” (potassium chloride as well as potassium carbonate), the last of which was formed by leaching ashes from a wood fire and evaporating the solution to near dryness in an iron pot.
Celiac disease has been recognized for centuries (Dowd and Walker-Smith 1974) by physicians aware of its major symptoms of diarrhea and gastrointestinal distress accompanied by a wasting away in adults and a failure to grow in children. The Greek physician Aretaeus (first century A.D.) called the condition coeliac diathesis – coeliac deriving from the Greek word koeliakos, or abdominal cavity. The British physician Samuel Gee provided what is generally considered the first modern, detailed description of the condition, which he termed the coeliac affection in deference to Aretaeus, in a lecture presented at St. Bartholomew’s Hospital in London (Gee 1888). At present, celiac disease (or, especially in Britain, coeliac disease) is the most commonly used term for the condition, although various others may be encountered, including celiac syndrome, celiac sprue, nontropical sprue, and gluten-sensitive enteropathy.
There were perceptions, certainly since Gee’s time, that celiac disease was a consequence of, or at least affected by, diet. Gee (1888: 20) noted that “[a] child, who was fed upon a quart of the best Dutch mussels daily, throve wonderfully, but relapsed when the season for mussels was over.” Such associations with diet led to wide-ranging dietary prescriptions and proscriptions (Haas 1924; Sheldon 1955; Weijers, Van de Kamer, and Dicke 1957; Anderson 1992). Some physicians recommended exclusion of fats – others, exclusion of complex carbohydrates. At times, so many restrictions were applied simultaneously that it became impossible to maintain a satisfactory intake of calories.
Together with economic growth and technological advances, improvements in health and longevity are the typical hallmarks of a population’s transition to modern society. Among the earliest countries to undergo such experiences were England and France, where mortality rates began declining steadily during the eighteenth century. Elsewhere in western and northern Europe, health and longevity began to improve during the nineteenth century. In the twentieth century, this pattern has been replicated in developing countries throughout the world.
Understanding the causes that underlie this pattern of mortality decline is important not only as a matter of historical interest but also because of the practical implications for policies that aim to improve life in developing countries, and for forecasting changes in mortality in developed countries. Accordingly, there has been much interest in identifying the causes of patterns of mortality decline and measuring their impact. By the 1960s, a consensus had emerged that the factors underlying mortality trends could be delineated within four categories, as reported in a study by the United Nations (UN) (1953): (1) public-health reforms, (2) advances in medical knowledge, (3) improved personal hygiene, and (4) rising income and standards of living. A later UN study (1973) added as an additional category “natural causes,” such as a decline in the virulence of pathogens.
Vitamin D is a fat-soluble substance required by most vertebrates, including humans, to keep blood calcium and phosphate levels within a narrow normal range and thereby maintain a normal skeleton and optimal cellular function. The term “vitamin D” is a misnomer. Vitamin D is not a vitamin. It is synthesized in the skin, and so, unlike other vitamins, which are essential dietary components, it does not satisfy the criteria for classification as a vitamin. Nor is it a hormone because it is biologically inactive and must be metabolized by the body into a multihydroxylated version, known as calcitriol, which is biologically active and the true hormonal form. Thus vitamin D is more accurately described as a prohormone. The natural form of the vitamin, known as vitamin D3, is a cholesterol-like substance produced in the skin by a nonenzymatic process involving ultraviolet light and heat. An artificial form of the vitamin, with an altered side chain, known as vitamin D2, is derived from the plant sterol ergosterol and is often used instead of vitamin D3 as a dietary supplement.
Most of the complexity associated with the nomenclature in the vitamin D field stems from confusion surrounding its discovery during the period 1919 to 1922. Early research showed that the deficiency associated with lack of vitamin D (rickets in children or osteomalacia in adults) was cured by seemingly unrelated treatments: exposure to sunlight or ingestion of a fat-soluble substance. The early nutritional pioneers of that period, including Sir Edward Mellanby and Elmer V. McCollum, realized that several related factors would cure rickets and that one of these substances, vitamin D3, could be made in the skin. Students often ponder the fate of vitamin D1. It was a short-lived research entity comprising a mixture of vitamins D2 and D3, and the term has no value today. Vitamin D3 is sometimes referred to as cholecalciferol or, more recently, calciol; vitamin D2 is known as ergocalciferol or ercalciol. The discovery of the hydroxylated versions of vitamin D by Hector F. DeLuca and Egon Kodicek in the 1967 to 1971 period led to a major expansion of our knowledge of a number of biologically active compounds, but calcitriol is the single most important of these. For purposes of discussing the history of foodstuffs, we shall use the term vitamin D to describe all substances that can be activated to produce biological effects on calcium and phosphate metabolism in humans.
The Sumerians may have said it best: “Food: That’s the thing! Drink: That’s the thing!” (Gordon 1959: 142). From bread and beer to wine and cheese, the people of the ancient Near East and North Africa developed a rich cuisine based on a set of crops and livestock domesticated in Southwest Asia, and a sophisticated technology of food preparation and preservation. This chapter traces the history of diet and foods of hunter-gatherers who lived at the end of the Stone Age in the Near East and North Africa, the impact of the development and spread of agriculture, and the social context of food and drink in early Mesopotamian and Egyptian civilization.
Geographical Background
Patterns of subsistence in any society reflect geography and cultural development. The civilizations of the ancient Near East and North Africa developed in a complex environmental mosaic that encompassed coasts and inland plateaus, high mountains and lands below sea level, barren deserts, fertile plains, and dense woodlands. The boundaries of the environmental zones have shifted over the years because the region has known both dry periods and moister phases. People, too, have wrought changes on the land as they assisted the movement of plants and animals from their original homelands. Over the millennia, humans have turned deserts into gardens with irrigation, and have transformed naturally productive lands into deserts by overgrazing and fuel cutting. Specifying the environmental picture at any particular place and time is not an easy task.
Obesity is a dimension of body image based on a society’s notion of acceptable body size and, as such, is the focus of anthropological, sociological, and psychological study (de Garine and Pollock 1995). However, most of the research on obesity in Western societies has focused on medical issues ranging from genetic etiology to therapeutic interventions. Overfatness or obesity is a major health problem in affluent countries and is increasing in prevalence among the socioeconomic elite of those that are modernizing. An estimated 90 million Americans – one-third of the population – are substantially above their range of desirable body weight; in some other populations, more than half of the members fit into this category.
Of course, some fat or adipose tissue is essential for life and serves a number of functions. It provides metabolic fuel; thermal insulation; a reservoir for vitamins, hormones, and other chemicals; and protection for the viscera and dermal constituents, such as blood vessels, nerves, and glands (Beller 1977). However, an excessive accumulation of fat is associated with an increased risk for diabetes, hypertension, cardiovascular and musculoskeletal problems, and in general, a reduced life expectancy. Moreover, in many societies, fatness elicits a psychosocial stigma.
Definitions and Diagnosis
Body weight is the most widely used anthropometric indicator of nutritional reserves, and weight relative to height is an acceptable measure of body size for growth monitoring and for most epidemiological surveys. Overweight and obesity, though often used synonymously, are not the same. S. Abraham and co-workers (1983) clearly made the distinction in analyzing data from the first U.S. National Health and Nutrition Examination Survey (NHANES I). Overweight was defined as an excess in body weight relative to a range of weights for height; in this report, individuals above the 85th percentile of weight-for-height standards were considered overweight. Obesity was defined as an excess of body fat based on the sum of the triceps (upper arm) skinfold and subscapular (back) skinfold. Skinfold measurements, taken with calipers that pinch a fold of skin and subcutaneous fat at specific sites (for example, waist, abdomen, thighs, upper arm, and back), are used in equations to estimate body fat stores and are compared with reference percentile tables (Himes 1991).
Following 1492, the Caribbean basin became a cultural meeting ground unsurpassed in the variety of its influences: European, Asian, African, and American. At times, the clash of cultures led to tragedy, such as the destruction of pre-Columbian Indians by European diseases or the centuries of African enslavement on sugar plantations. But the Caribbean people have also produced cultural triumphs, not the least of which are the tropical dishes of island cooking.
Cuisine can provide important insights into the process of cultural change. Each new group of immigrants to the Caribbean, from Taino “natives” (originally from South America) to Spanish conquistadors and from African slaves to Asian laborers, brought with them their knowledge of foods and how to prepare them. Island cuisine drew together maize and manioc from America, domesticated pigs and cattle from Europe, garden plants, such as okra and akee, from Africa, and citrus fruits and rice from Asia. Unfortunately, notwithstanding this rich variety of foods, poverty has made malnutrition a recurring problem in the region. Slaves (and many whites) suffered from a frightful variety of nutrition-related diseases, many of which have returned to haunt the impoverished masses of the twentieth century. Modernization has, meanwhile, threatened to replace traditional dishes with a processed and packaged uniformity of industrial foods. But island cooks have adapted to pressures, both economic and ecological, to create a genuinely global cuisine with a uniquely local taste.