Food biotechnology – the use of recombinant deoxyribonucleic acid (rDNA) and cell fusion techniques to confer selected characteristics upon food plants, animals, and microorganisms (Mittal 1992; Carrol 1993) – is well understood as a means to increase agricultural productivity, especially in the developing world. The great promise of biotechnology is that it will help solve world food problems by creating a more abundant, more nutritious, and less expensive food supply. This theoretical promise is widely appreciated and beyond dispute (Rogers and Fleet 1989; U.S. Congress 1992).
Nonetheless, food biotechnology has elicited extraordinary levels of controversy. In the United States and in Europe, the first commercial food products of genetic engineering were greeted with suspicion by the public, vilified by the press, and threatened with boycotts and legislative prohibitions. Such reactions reflect widespread concerns about the safety and environmental impact of these products, as well as about their regulatory status, ethical implications, and social value. The reactions also reflect public fears about the unknown dangers of genetic engineering and deep distrust of the biotechnology industry and its governmental regulators (Davis 1991; Hoban 1995).
Biotechnology industry leaders and their supporters, however, dismiss these public concerns, fears, and suspicions as irrational. They characterize individuals raising such concerns as ignorant, hysterical, irresponsible, antiscientific, and “troglodyte,” and they describe “biotechnophobia” as the single most serious threat to the development, growth, and commercialization of the food biotechnology industry (Gaull and Goldberg 1991: 6). They view anti-biotechnology advocates as highly motivated and well funded and believe them to be deliberately “interweaving political, societal and emotional issues … to delay commercialization and increase costs by supporting political, non-science based regulation, unnecessary testing, and labelling of foods” (Fraley 1992: 43).
Of the two species of domesticated anatines, the Muscovy duck (Cairina moschata) is larger, less vocal, and characterized by a fleshy protuberance on the head of the male. It is a duck of tropical American origin, whose wild ancestors nested in trees, whereas the common duck (Anas platyrhynchos) was domesticated in the Old World from the ground-dwelling mallard. The two species can mate and produce offspring, but such offspring cannot reproduce.
The Muscovy duck is misnamed, for it never had any special association with Moscow. Most likely the name is a corruption of canard musqué (“musk duck”); however, this French term is not an accurate descriptor either. In Latin America, various Spanish names are used for the duck, depending on the area. Among these are pato criollo (“native duck”), pato real (“royal duck”), pato almisclado (“musk duck”), pato machacón (“insistent duck”), and pato perulero (“Peru duck”). In Brazilian Portuguese, it is most commonly called pato do mato (“forest duck”). Indigenous names for this bird indicate its New World origin, including ñuñuma in Quechua, sumne in Chibcha, and tlalalacatl in the Nahuatl language of Mexico.
Before the European conquest of the Americas, the bird’s apparent distribution extended from north central Mexico to the Rio de la Plata in Argentina (Donkin 1989). It was and still is kept in a wide range of environments, which include the islands of the Caribbean, deserts, humid tropics, temperate plains, and the high elevations of the Andes. Although several colonial chronicles refer to C. moschata, such sources do not indicate that these domesticated birds were particularly important to household economies. In Mexico, the turkey has had greater importance in the houseyard. In South America, the Muscovy had no poultry competitors until the Spaniards and Portuguese brought chickens, which – because of their greater egg-laying capacity – were widely adopted.
In the title of a delightful little book published in 1885, Vincent Holt asks, Why Not Eat Insects? The “why not” is hard to explain logically, nutritionally, or on the basis of the sheer abundance of these creatures. However, for Europeans and North Americans the eating of insects, or entomophagy, is considered a curiosity at best. And for many the idea is downright repulsive. Insects as food are found only in cartoons, or perhaps on the odd occasion when suitably disguised under a layer of chocolate. Yet for these same people, other invertebrate animals, such as oysters, snails, crayfish, and lobsters, are not only accepted as food but even viewed as delicacies.
In many other parts of the world, however, insects are considered good to eat and are appreciated for their taste as well as their nutritional value. Some, like the giant queen ants (Atta sp.) of Colombia, are prized as delicacies and supposedly function as aphrodisiacs as well. Others, like the mopane worms of Africa, are frequently included in the diet and much enjoyed. Still others, like the cockchafer grubs of Ireland, although not much esteemed, have been used when more desirable foods were not available.
The purpose of this chapter is to present an overview of the role of insects in the diet in different parts of the world and in different time periods. We first review the use of insects as food within major geographical areas, in the present as well as in the historic and prehistoric past when information is available (a comprehensive list of insect species used as food is provided in Table II.G.15.1). We then summarize general patterns of insect use and provide information on the nutritional value of some commonly consumed insects.
When the first created man saw the animals that God had made, it is said that he presumptuously, over-rating his powers, asked that he too might be given the creative power to fashion others like them. God granted his request and man tried his prentice hand. But the result was the buffalo, and man seeing that it was not good, asked in disgust that the creative power might be taken back again from him for ever. The buffalo, however, remained as the only living handiwork of man.
(Bradley-Birt 1910: 115)
Although of limited value as a clue to the origins of the domesticated water buffalo (Bubalus bubalis), this tale from India does reflect the rather low opinion of this bovine held by many, including, it seems, scientists who have shown relatively little interest in it. Considering the large size of its population, its widespread distribution, and its essential role in the economic lives of millions of people, especially in southern and eastern Asia, it is remarkable that so little is known about the water buffalo. Bovine admiration and scholarly attention have been reserved for those more highly regarded distant relatives of the buffalo, the taurine and zebu cattle.
Admittedly, some admirable efforts have been made within the last few decades to remedy this situation. Most of the work that has been done has focused on the present-day conditions and future potential of buffalo husbandry, with breeding, management, and productivity being of central concern. Research on the cultural and historical aspects of the buffalo, however, has been very limited. Certainly, in the discussion of animal domestication, the buffalo has been largely ignored.
Grain sorghum (Sorghum bicolor [Linn.] Moench) is a native African cereal now also widely grown in India, China, and the Americas. Sorghum ranks fifth in world cereal grain production, and fourth in value (after rice, wheat, and maize) as a cereal crop. It is grown on 40 to 50 million hectares annually, from which up to 60 million metric tons of grain are harvested. In Africa and Asia traditional cultivars are grown, usually with low agricultural inputs, and average yields are below 1 metric ton per hectare. But more than 3 metric tons of grain are harvested per hectare in the Americas, where farmers plant modern sorghum hybrids. Sorghum is more tolerant of drought and better adapted for cultivation on saline soils than is maize. It holds tremendous promise as a cereal to feed the rapidly expanding populations of Africa and Asia. In the Americas it is replacing maize as an animal feed.
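As a rough arithmetic check on these figures (taking roughly 45 million hectares and 60 million metric tons, the midpoint and maximum cited above), the implied world average yield is
\[ \frac{60 \times 10^{6}\ \text{t}}{45 \times 10^{6}\ \text{ha}} \approx 1.3\ \text{t/ha}, \]
which lies between the sub-1-ton-per-hectare traditional yields of Africa and Asia and the 3-plus tons per hectare obtained with modern hybrids in the Americas, as one would expect given that most of the harvested area is in Africa and Asia.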
Morphology and Distribution
The grass genus Sorghum Moench is one of immense morphological variation. It is taxonomically subdivided into sections Chaetosorghum, Heterosorghum, Parasorghum, Stiposorghum, and Sorghum (Garber 1950), and these sections are recognized as separate genera by W. D. Clayton (1972). The genus Sorghum is here recognized to include: (1) a complex of tetraploid (2n = 40) rhizomatous taxa (S. halepense [Linn.] Pers.) that are widely distributed in the Mediterranean region and extend into tropical India; (2) a rhizomatous diploid (2n = 20) species (S. propinquum [Kunth] Hitchc.) that is distributed in Southeast Asia and extends into adjacent Pacific Islands; and (3) a nonrhizomatous tropical African diploid (2n = 20) complex (S. bicolor [Linn.] Moench) that includes domesticated grain sorghums and their closest wild and weedy relatives (de Wet and Harlan 1972). Genetic introgression is common where wild rhizomatous or spontaneous nonrhizomatous taxa become sympatric with grain sorghums, and derivatives of such introgression have become widely distributed as weeds in sorghum-growing regions.
Anorexia nervosa is a psychophysiological disorder – usually of young females – characterized by a prolonged refusal to eat or maintain normal body weight, an intense fear of becoming obese, a disturbed body image in which the emaciated patient feels overweight, and the absence of any physical illness that would account for extreme weight loss. The term “anorexia” is actually a misnomer, because genuine loss of appetite is rare and usually does not occur until late in the illness. In reality, most anorectics are obsessed with food and constantly struggle to deny natural hunger.
In anorexia nervosa, normal dieting escalates into a preoccupation with being thin, profound changes in eating patterns, and a loss of at least 25 percent of the original body weight. Weight loss is usually accomplished by a severe restriction of caloric intake, with patients subsisting on fewer than 600 calories per day. Contemporary anorectics may couple fasting with self-induced vomiting, use of laxatives and diuretics, and strenuous exercise.
The most consistent medical consequences of anorexia nervosa are amenorrhea (cessation or irregularity of menstruation) and estrogen deficiency. In most cases amenorrhea follows weight loss, but it is not unusual for amenorrhea to appear before noticeable weight loss has occurred. The decrease in estrogen causes many anorectics to develop osteoporosis, a loss of bone density that is usually seen only in postmenopausal women (Garfinkel and Garner 1982).
The process of capture, taming, and eventual domestication of most animals is a difficult and lengthy process, often consisting of a trial-and-error approach. One notable exception was the domestication of the North American wild turkey, Meleagris gallopavo. The U.S. National Park Service archaeologist Jean Pinkley, while stationed at Mesa Verde National Park, put forth a logical scenario outlining the unique process of taming and domesticating the prehistoric pueblo turkeys of that area. Pinkley (1965) has pointed out that some domesticated animals apparently first exploited humans before becoming another of their agricultural conquests. The turkey is such an example.
The pueblo turkeys had become extinct in the Mesa Verde area by historic times, and the Park Service reintroduced breeding stock in 1944 (Pinkley 1965). This permitted observation of the wild turkeys and their relationship with the employees of the park. The turkeys were timid at first, but as they learned where food could be found – in this case, feeding stations that the government set out for small birds – they took over these sources. They also moved into warm roosting places available in the park’s residential areas, and despite efforts to chase them away by tossing Fourth of July cherry bombs and firing guns into the air, the birds continued to congregate in and around park dwellings.
There is little reason to believe that prehistoric humans in Mesa Verde were not tormented by turkeys in much the same manner, and sooner or later, when it dawned on them that the birds could not be driven off or frightened away, the Pueblo Indians would, out of desperation, have begun to corral them to protect crops and food stores. At this point, recognition of the turkey as a source of food and materials for bone tools would presumably have been a logical next step.
Oat (Avena L.) includes 29 to 31 species (depending on the classification scheme) of wild and domesticated annual grasses in the family Gramineae (Poaceae) that comprise a polyploid series with diploid, tetraploid, and hexaploid forms (Baum 1977; Leggett 1992). The primary cultivated species are hexaploids, A. sativa L. and A. byzantina C. Koch, although 5 other species have to some extent been cultivated for human consumption. These are the tetraploid A. abyssinica Hochst and the diploids A. strigosa Schreb., A. brevis Roth., A. hispanica Ard., and A. nuda L. Nevertheless, the oat consumed in human diets during this century has been almost exclusively hexaploid.
The separation of the two cultivated hexaploids is based on minor, and not always definitive, morphological differences and is of more historical than contemporary relevance. A. byzantina (red oat) was the original germ plasm base of most North American fall-sown cultivars, whereas A. sativa was the germ plasm base of spring-sown cultivars. Late twentieth-century breeding populations in both ecogeographic regions contain intercrosses of both species. This has led to the almost exclusive use of the term A. sativa in describing new cultivar releases.
Oat is the fifth most economically important cereal in world production after wheat, rice, corn, and barley. It is cultivated in temperate regions worldwide, especially those of North America and Europe, where it is well adapted to climatic conditions of adequate rainfall, relatively cool temperatures, and long days (Sorrells and Simmons 1992). Oat is used primarily for animal feed, although human consumption has increased in recent years.
The importance of nutrition to the preservation of human health cannot be reasonably denied. However, the extent of its power may have been overstated in recent years. For millions of Americans, “natural foods and vitamins” are seen as almost magical preservers of health, beauty, and longevity. Indeed, claims for the healing properties of nutrients have become an integral part of the post–World War II “baby-boomer” generation’s vision of the world. For many, “faith” in the power of proper nutrition is part of a secular religion that comes close to denying the inevitability of aging and death. Vitamin C is considered a panacea one day and beta-carotene the next, as are such foods as broccoli and garlic. With such a cornucopia of natural “medicines,” who would ever think that the history of humankind would reveal so much disease and ill health?
Although such popular exaggerations of the benefits of various nutriments are easily dismissed by serious scholars, other more scholarly claims are not. One suggestion dealing with the historical importance of nutrition has received remarkably widespread support in academic circles and, among historians, has become the orthodox explanation for understanding a key aspect of the modern world: increased longevity.
The McKeown Thesis
The classic formulation of this explanation was provided by the medical historian Thomas McKeown (1976, 1979), who argued that the decline of mortality in the Western world over the last three hundred years has largely been the result of rising living standards, especially increased and improved nutrition. Equally important, the decline was not the result, as so many had rather vaguely believed, of any purposeful medical or public-health interventions.
The majority of foods found in modern northern Europe – which includes the lands around the North Sea and the Baltic Sea and those of the northern Alpine region – are not indigenous to the area. It is here, however, that one of the most stable of humankind’s agricultural systems was established, and one that has proved capable of providing densely populated areas with a high standard of living. Such an agricultural bounty has helped northern Europe to become one of the most prosperous areas of the world.
The Paleolithic Period
The northern European environment underwent drastic change several times during the Pleistocene. Glaciers advancing from Scandinavia and the Alps repeatedly covered large parts of the landscape with glacigenic sediment. Forests retreated from northern Europe and were replaced by a type of vegetation that can be regarded as a mixture of tundra and steppe. In this environment, forest-adapted herbivores were replaced by large grazing species such as caribou (Rangifer tarandus), wild horse (Equus sp.), and mammoth (Mammonteus primigenius). These species, moving in herds small and large, migrated north and south in a yearly cycle. In summer they fled north from the multitude of biting insects (to Jutland, for example), and in winter they were attracted by the somewhat higher temperatures of areas in the south, such as the region just north of the Alps.
Reindeer herds proved to be a very good source of food for Paleolithic reindeer hunters, whose widespread presence in northern Europe is well established by excavations. The hunters migrated with the herds from the south to the north and back again. Prehistoric humans located their temporary dwelling places so as to achieve a maximum vantage point – usually so they could hunt downhill using their lances and bows or a kind of harpoon made of stone and bone material (Bandi 1968: 107–12; Kuhn-Schnyder 1968: 43–68; Rust 1972: 19–20, 65–9).
The central organ of the human circulatory system, the heart, must be among the most remarkable creations of nature. In the longest-living individuals, it works continuously for a hundred or more years, executing something like 4,000 million working strokes and moving 350,000 cubic meters of blood, enough to make a small lake. In individuals who die of heart disease, it is not, as a rule, the heart itself that fails but some auxiliary mechanism, such as one of its arteries or the pacemaker. If an artery supplying a small part of the heart is blocked, the tissues receiving oxygen and nutrients from that vessel die. If the area involved is not so large as to endanger the entire heart, the damage is gradually repaired by the immune system. The dead cells are removed but cannot be replaced; the gap they leave is filled with scar tissue. While the repair is carried out, the heart continues to work.
Among other remarkable properties of the heart is a virtual immunity from cancer and a good resistance to inflammatory diseases. When the body is at rest, the heart contracts approximately once every second. Its contraction – the systole – lasts about one-third of a second; its relaxation period – the diastole – two-thirds of a second. During hard physical exercise, the heart rate increases about three times.
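A minimal consistency check of these figures, assuming the resting rate of about one beat per second given above and a stroke volume of roughly 70 to 100 milliliters (an assumed figure, not stated here): over a hundred years the heart executes about
\[ 100 \times 3.15 \times 10^{7}\ \text{s} \approx 3.2 \times 10^{9}\ \text{beats}, \]
and pumps a total of roughly
\[ 3.2 \times 10^{9} \times (0.07\text{–}0.10)\ \text{L} \approx 2.2\text{–}3.2 \times 10^{8}\ \text{L} \approx 220{,}000\text{–}320{,}000\ \text{m}^{3}, \]
both of the same order as the 4,000 million strokes and 350,000 cubic meters cited above; the somewhat higher published figures presumably reflect faster rates and larger stroke volumes during activity.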
The arterial system is like a many-branched tree. Its trunk, the aorta, is about three centimeters in diameter at its origin. The branches become progressively smaller and end in a network of capillaries of microscopic size. On the return side, blood is collected by small venules, which join to form veins and end in two large venous trunks. The entire length of the system is more than enough to encircle the earth.
As early as the 1930s, experiments on laboratory animals revealed that diet can considerably influence the process of cancer causation and development (carcinogenesis) (Tannenbaum 1942a, 1942b; Tannenbaum and Silverstone 1953). It was several decades later, however, that the first epidemiological studies appeared to indicate that diet could play a role in human cancer. A key conference held in 1975, entitled “Nutrition in the Causation of Cancer,” summarized the existing knowledge and hypotheses (Wynder, Peters, and Vivona 1975). From that moment, research in experimental systems, including animal models and epidemiological studies, increased rapidly, providing extensive information on the impact of nutritional traditions and specific macro- and micronutrients on several types of cancer. Considerable progress had already been made in several underlying sciences. For example, advances had been achieved in understanding the mechanisms of action of nutrients, the process of carcinogenesis, and the classification of carcinogens according to their mode of action (Kroes 1979; Weisburger and Williams 1991).
In particular, epidemiological studies on the international variations in incidence rates for certain cancers pointed to the existence of one or more exogenous factors that could be controlled. Observational studies had been conducted with migrants from countries with lower incidence rates to countries with higher incidence rates. A rapid increase from the lower to the higher incidence in those migrants supported the suggestion that environmental causes, and especially prevailing dietary habits, may influence the development of a number of neoplasms.
Maize (Zea mays L.), a member of the grass family Poaceae (synonym Gramineae), is the most important human dietary cereal grain in Latin America and Africa and the second most abundant cultivated cereal worldwide. Originating in varying altitudes and climates in the Americas, where it still exhibits its greatest diversity of types, maize was introduced across temperate Europe and in Asia and Africa during the sixteenth and seventeenth centuries. It became a staple food of Central Europe, a cheap means of provisioning the African-American slave trade by the end of the eighteenth century, and the usual ration of workers in British mines in Africa by the end of the nineteenth century. In the twentieth century, major increases in maize production, attributed to developments in maize breeding, associated water management, fertilizer response, pest control, and ever-expanding nutritional and industrial uses, have contributed to its advance as an intercrop (and sometimes as a staple) in parts of Asia and to the doubling and tripling of maize harvests throughout North America and Europe. High-yield varieties and government agricultural support and marketing programs, as well as maize’s biological advantages of high energy yields, high extraction rate, and greater adaptability relative to wheat or rice, have all led to maize displacing sorghum and other grains over much of Africa.
On all continents, maize has been fitted into a wide variety of environments and culinary preparations; even more significant, however, it has become a component of mixed maize-livestock economies and diets. Of the three major cereal grains (wheat, rice, and maize), maize is the only one not grown primarily for direct human consumption. Approximately one-fifth of all maize grown worldwide is eaten directly by people; two-thirds is eaten by their animals; and approximately one-tenth is used as a raw material in manufactured goods, including many nonfood products.
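A quick tally of these shares (attributing the small remainder to seed, losses, and other minor uses is an assumption, not stated here):
\[ \tfrac{1}{5} + \tfrac{2}{3} + \tfrac{1}{10} = \tfrac{6 + 20 + 3}{30} = \tfrac{29}{30} \approx 97\ \text{percent}, \]
leaving roughly 3 percent of the world maize crop outside the three categories named above.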
The relationship between nutrition and adolescent fertility has been a topic of much discussion in recent research on human biology. The apparent increase in the incidence of teenage pregnancy in Western societies has led some researchers to wonder whether there are biological as well as cultural factors that influence this phenomenon (Vinovskis 1988). Studies of adolescents in different geographic and socioeconomic settings have demonstrated that the age at which sexual maturity is reached is not fixed but is heavily shaped by numerous influences, such as fatness at adolescence, physique, health status, genetics, degree of physical activity, and socioeconomic status (Maresh 1972; Johnson 1974; Short 1976; Zacharias, Rand, and Wurtman 1976; Frisch 1978; Meyer et al. 1990; Moisan, Meyer, and Gingras 1990; Wellens et al. 1990).
Since the reproductive process requires energy, reproductive ability is curtailed in times of food scarcity or when calories burned through physical exertion or exercise exceed the amount provided by food intake. Undernourished women, for example, reach menarche later and experience menopause earlier than do well-nourished ones. Poorly nourished women also have higher frequencies of irregular menstruation and anovulatory menstrual cycles, with menstruation and ovulation disappearing entirely if malnutrition is severe. During pregnancy, malnourished women have a greater likelihood of miscarriage, and if they do carry the infant to term, they experience a longer period of lactational amenorrhea. In men, severe malnutrition leads to loss of libido, a decrease in prostate fluid and in sperm count and mobility, and, eventually, the loss of sperm production altogether. For children and adolescents, undernutrition delays the onset of puberty in both boys and girls, and limits the fecundity of those who have achieved sexual maturity (Frisch 1978).
Australia and New Zealand are Pacific Rim countries situated on the southwestern edge of that vast ocean. But although Australia has been peopled for at least 50,000 years (some now say 70,000), and New Zealand for just over 1,000, the dominant foodways of both have been shaped over just the last 200 years – since the beginning of British settlement in Australia in 1788. The indigenous peoples, the Aborigines in Australia and the Maori in New Zealand, are now minorities in their own lands (Aborigines comprise less than 2 percent of Australia’s population and Maori about 15 percent of New Zealand’s), and the foods and beverages they consume have been markedly influenced by food and drink of British origin. Indeed, from a contemporary perspective, food and drink in Australia and New Zealand – the lands “down under” – predominantly derive from the strong British heritage.
In this chapter, the environments of Australia and New Zealand are briefly described, not only because they are notably unique but also because they were so amenable to “ecological imperialism” (Crosby 1978). The food systems of the indigenous peoples, although now vastly altered, are also outlined, but the bulk of the chapter is devoted to the processes that produced contemporary patterns of food and drink consumption among both the immigrants and the indigenous peoples.
Natural Environments
Australia
Because of its transitional position between the low and middle latitudes, about 40 percent of Australia is located within the tropics. However, the southwestern and southeastern littoral zones lie within the mid-latitudes and have temperature and rainfall regimes somewhat similar to those of western and Mediterranean Europe and, consequently, have proven conducive to the naturalization of European flora and fauna. The continent is an ancient and stable one. Large parts of it have an aspect of sameness, with almost monotonous expanses of flat land and sweeping vistas (McKnight 1995), and only in the Eastern Highlands is there great topographical variety.
In the United States, the past three decades have witnessed tremendous changes in the way the public views the foods it buys. Unlike counterparts in the developing world, where problems of food availability and food quantities still dominate, consumers in the United States (and the West in general) have become increasingly interested in the nutritional quality of the foodstuffs they are offered. As a consequence, nutrition labeling has emerged to play a key role in government regulation of the food supply, in informing consumers about the constituents of the foods they eat, and in the formulation and marketing of food products by the manufacturers.
The importance of food labels has come about in spite of the fact that during the last 20 years or so, policy decisions regarding the implementation of nutrition labeling have been made in a political environment that emphasizes nonintervention in the operation of market economies. Recent legislation in industrialized countries has taken, for the most part, a minimalist approach to the regulation of nutritional quality. But although still controversial, labeling is seen as an acceptable “information remedy” that requires relatively little market intervention, and most Western nations have established some form of legislative guidelines for the regulation of nutrition labeling. With the implementation of the Nutrition Labeling and Education Act (NLEA) in 1994 (Caswell and Mojduszka 1996), the United States is currently at the forefront of establishing a mandatory and comprehensive national nutrition labeling policy.
The growing interest in nutrition policy reflects the understanding that foods represent major potential risks to public health because of such factors as foodborne organisms, heavy metals, pesticide residues, food additives, veterinary residues, and naturally occurring toxins. But although these are very real health hazards, scientists believe that the risks associated with nutritional imbalances in the composition of the diet as a whole are the most significant in the longer run, particularly in industrialized countries, where the high percentage of fat in daily diets seems to significantly threaten public health (Henson and Traill 1993). The recognition of nutritional quality as a value in itself has led to a separation of the nutritional from other food safety issues and a growing tendency to develop legislation targeted specifically at nutrition.
Pellagra is a chronic disease that can affect men, women, and – very rarely – children. The onset is insidious. At first, the afflicted experience malaise but have no definite symptoms. This is followed by the occurrence of a dermatitis on parts of the body exposed to sunlight. A diagnosis of pellagra is strongly indicated when the dermatitis appears around the neck and progresses from redness at the onset to a later thickening and hyperpigmentation of the skin in affected areas. The dermatitis appears during periods of the year when sun exposure is greatest. Other symptoms, including soreness of the mouth, nausea, and diarrhea, begin either concurrently with the skin changes or shortly thereafter. Diarrhea is associated with impaired nutrient absorption, and as a result of both dietary inadequacies and malabsorption, pellagrins frequently show clinical signs of multiple nutritional deficiencies.
Late signs of pellagra include mental confusion, delusions of sin, depression, and a suicidal tendency. Occasionally, these psychiatric signs are accompanied by a partial paralysis of the lower limbs. In the final stages of the disease, wasting becomes extreme as a result of both a refusal to eat (because of nausea) and pain on swallowing (because of the soreness of the mouth). Death is from extreme protein–energy malnutrition, or from a secondary infection such as tuberculosis, or from suicide (Roe 1991).
Pellagra as a Deficiency Disease
Since 1937, it has been known that pellagra is the result of a deficiency of the B vitamin niacin (Sydenstricker et al. 1938), and that the deficiency usually arises as a consequence of long-term subsistence on a diet lacking in animal protein or other foods that would meet the body’s requirement for niacin (Carpenter and Lewin 1985). However, a sufficient quantity in the diet of the amino acid tryptophan – identified as a “precursor” of niacin (meaning that the body can convert tryptophan into niacin) – has also been found to cure or prevent the disease (Goldsmith et al. 1952). This explains why milk, for example, helps combat pellagra: Milk contains relatively little niacin but is a good source of tryptophan. Pellagra is most strongly associated with diets based on staple cereals, especially maize. This grain has historically been the daily fare of those who develop the disease: Maize is high in niacin content, but much of this niacin is in a chemically bound form that prevents absorption of the vitamin by the body (Goldsmith 1956).