In conventional scientific usage, when the word metabolism is joined with energy, it takes on a somewhat different meaning than when it is joined with protein. The latter – protein metabolism – usually includes consideration of the biochemical pathways of amino acids, the building blocks of protein, whereas energy metabolism is frequently assumed to include only the specific role of energy without consideration, in any detailed way, of the pathways involved in the breakdown and synthesis of the various carbohydrates and lipids that supply food energy.
In this chapter, the treatment of energy emphasizes the history of efforts to understand and meet human food energy needs. In the case of protein, emphasis is given to the role of amino acids in generating energy, to protein quality (including digestibility), and to human protein requirements. Finally, there is some discussion of protein–energy relationships, the problems connected with an excess and a deficit of food energy in the diet, and protein–energy malnutrition.
Food Energy
The most pressing problem for humans throughout their history has been the basic one of securing food to satisfy hunger and food-energy needs. But the fact that human populations in different parts of the world, despite subsisting on different diets, seemed before about 1900 to experience approximately the same level of health led some physiologists to believe that all foods were rather similar (McCollum 1957). In fact, this view was rooted in the ancient “many foods–single aliment” concept of Hippocrates and Galen. By the turn of the twentieth century, however, views were rapidly changing, and although knowledge of the specialized roles of amino acids, vitamins, and minerals was hazy and often contradictory, proteins, fats, and carbohydrates (starches and sugars) could be distinguished and analyzed in foods and diets. Thus, with advances in food analysis and refinements in the understanding of nutritional needs, the “many foods–single aliment” concept was no longer tenable.
Diabetes mellitus (DM) is a heterogeneous group of endocrine disorders characterized by hyperglycemia (high blood sugar levels) during fasting or following a meal. Other characteristic symptoms of diabetes include excessive urination, sugar in the urine, hunger, thirst, fatigue, and weight loss. The disorder is caused by resistance to the action of insulin, or by a lack of insulin or insufficient production of it, so that glucose cannot be transported from the blood into cells, where it is used as the primary energy source for cellular metabolism. Although diabetes has been a recognized disease for at least two millennia, only since the mid-1970s has there been a consensus on the classification and diagnosis of DM.
Insulin-dependent diabetes mellitus, also called juvenile diabetes or Type I diabetes, is an autoimmune disease that generally affects individuals under the age of 20 and has an acute onset. Noninsulin-dependent diabetes mellitus, Type II, or maturity-onset diabetes mellitus, has a complex etiology, is often associated with obesity, and most frequently occurs among individuals over 40 years of age. Ninety to 95 percent of diabetes worldwide is of the latter type. Gestational diabetes appears to be a subset of Type II diabetes, and diabetes can also be caused by rare genetic syndromes (such as hemochromatosis), as well as by drugs, infections, and diseases of the pancreas. The underlying pathophysiology of Type II diabetes involves the increasing resistance of cells, particularly muscle and adipose (fat) cells, to the transport of glucose across the cell membrane. This resistance, or impaired glucose tolerance, leads to the classic diagnostic criterion of abnormally high blood sugar concentrations.
The llama (Lama glama) and alpaca (Lama pacos) are among the few domesticated ungulates whose most important function has not been that of providing food for the people who control them. The llama has been kept primarily as a beast of burden, whereas the more petite alpaca is most valued as the source of an extraordinarily fine fleece. These South American members of the camel family may share a common descent from the guanaco (Lama guanicoe), for although they have long been designated as separate species, the closeness of the relationship is reflected in the fertile offspring they produce when crossbred. An alternative view, now gaining in popularity, is that the alpaca descended from the vicuña (Lama vicugna), since both animals are about the same size (44 to 65 kilograms [kg]) and both have the capacity to regenerate their incisor teeth. The distribution of both the llama and the alpaca has traditionally been centered in the Andean Highlands of Peru and Bolivia, with peripheral populations of the former in Chile, Argentina, and Ecuador. In the past three decades, growing interest has increased their populations on other continents, especially North America.
Camelid Meat as Human Food
Both animals have been an important source of food in the part of the central Andes where husbandry has been most intensive. In neither case, however, are they raised primarily for their flesh, which is consumed only after their most valued functions diminish with age. Llamas, however, possibly had a more important meat function in the pre-Pizarro Andes, before the introduction of European barnyard creatures. The movement of herds from the highlands to the coast could have been a way both to transport goods and to move protein-on-the-hoof to the more densely populated coast, where, at the time, meat was much rarer than in the highlands (Cobo 1956). David Browman (1989) suggested that when camelid utilization in the highlands expanded northward, starting around 1000 B.C. and long before the Inca civilization was established, meat production appears to have been the most important use of these animals. But whether of primary or secondary importance, the protein and fat supplied by this meat have contributed to the health of the animals’ Andean keepers, whose diet consists mainly of starch.
Taro is the common name of four different root crops that are widely consumed in tropical areas around the world. Taro is especially valued for its starch granules, which are easily digested, making it an ideal food for babies, elderly persons, and those with digestive problems. It is grown by vegetative propagation (asexual reproduction), so its spread around the world has been due to human intervention. But its production is restricted to the humid tropics, and its availability is limited by its susceptibility to damage in transport.
Taro is most widely consumed in societies throughout the Pacific, where it has been a staple for probably 3,000 to 4,000 years. But it is also used extensively in India, Thailand, the Philippines, and elsewhere in Southeast Asia, as well as in the Caribbean and in parts of tropical West Africa and Madagascar (see Murdock 1960; Petterson 1977). Moreover, in the last quarter of the twentieth century, taro entered metropolitan areas such as Auckland, Wellington, Sydney, and Los Angeles, where it is purchased by migrants from Samoa and other Pacific Island nations who wish to maintain access to their traditional foods (Pollock 1992).
Although taro is the generic Austronesian term for four different roots, true taro is known botanically as Colocasia esculenta, or Colocasia antiquorum in some of the older literature. We will refer to it here as Colocasia taro. False taro, or giant taro, is the name applied to the plant known botanically as Alocasia macrorrhiza. It is less widely used unless other root staples are in short supply. We will refer to it as Alocasia taro.
Probably the earliest domesticated herd animal in the Old World, the sheep (Ovis aries) makes an unparalleled contribution of food and fiber. The great advantage of these small ruminants is their ability to digest the cellulose of wild grasses and coarse woody shrubs in their complex stomachs and convert it into usable products.
Origin and Domestication
Sheep were domesticated on the flanks of the Taurus–Zagros Mountains, which run from southern Turkey to southern Iran. Within that arc is found the urial (Ovis orientalis), a wild sheep now generally regarded as the ancestor of the domesticated sheep. Early archaeological evidence of sheep under human control comes from Shanidar Cave and nearby Zawi Chemi in Kurdistan. Sheep bones recovered in abundance at these two sites have been dated to between 8,000 and 9,000 years ago and contrast with other Neolithic sites close to the Mediterranean, where similar evidence of domesticated sheep is rare. However, accurate species identification has posed problems, for the bones of goats and sheep are often difficult to distinguish from one another. Therefore, some archaeological reports have grouped them together as “sheep/goat” or “caprine.”
The domestication process that transformed O. orientalis into O. aries involved several key changes. The body size of the sheep was reduced from that of the urial. Diminution could have been accomplished over many generations by culling out larger, aggressive males as sires. Selection also occurred for hornlessness, but this process is not complete. Although many breeds of domesticated female (and some male) sheep typically have no horns, in other males the horns have only been reduced in size. Domesticated sheep also have a long tail as compared with the wild ancestor. The most significant physical transformation of the animal was the replacement of the hairy outercoat with wool fibers, which turned the sheep into much more than a food source. As early as 6,000 years ago, woolly sheep had differentiated from hairy sheep, and in ancient Mesopotamia, the raising of wool-bearing animals was a major activity in lowland areas. Selection for white-wooled animals explains the gradual dominance of that color.
Beer and ale are mildly alcoholic beverages produced by the action of yeast fermenting a mixture that is usually grain-based. Throughout their history, they have constituted both a refreshing social drink and an important energy-rich food. The basic ingredients of most beers and ales have included grain, water, yeast, and (more recently) hops, and despite many regional variations, the process of fermenting the grain has changed little over time. To be completely accurate, ale is defined as unhopped beer; in this chapter, however, the terms “beer” and “ale” are employed interchangeably for the period before hops were used.
The Chemical Basis of Fermentation
Before fermentation can take place, yeast, a single-cell fungus occurring naturally in several varieties, must be allowed to act on the sugar present in grain. This releases two crucial by-products, alcohol and carbon dioxide. A grain often used for this purpose is barley – even though, in its natural state, it contains only a trace amount of free sugar – because of its high content of starch, a complex polymer of sugar. Barley also contains substances known collectively as diastases, which convert the barley starches into sugar to be used as food for the growing plant. When barley is crushed and dried carefully, the essential starches and diastases are released and preserved, rendering a substance called “malt.”
Until sometime around the ninth century, “beer” was actually “ale,” made by a process known as mashing, whereby the barley malt was mixed with hot – but not boiling – water. The effect of the hot water was to induce the diastases to act immediately in breaking down the complex starches into sugar. This process is referred to as conversion, and its essential product is the brown, sugary liquid called “wort.” The mashing procedure not only produced the wort but also permitted inert elements of the barley, such as the husks, to be drawn off. In the production of pure ale (such as the first human brewers would have made), all that remained was for yeast to act upon the wort so that the sugars could be converted into alcohol and carbon dioxide.
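In modern chemical notation – a summary added here for clarity rather than part of the historical account – the overall reaction that yeast carries out on the sugar in the wort is:

$$\mathrm{C_6H_{12}O_6} \longrightarrow 2\,\mathrm{C_2H_5OH} + 2\,\mathrm{CO_2}$$

That is, each molecule of glucose yields two molecules of ethanol (alcohol) and two of carbon dioxide, the two crucial by-products of fermentation noted above.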
Food is what Marcel Mauss (1967) called a “total social fact.” It is a part of culture that is central, connected to many kinds of behavior, and infinitely meaningful. Food is a prism that absorbs a host of assorted cultural phenomena and unites them into one coherent domain while simultaneously speaking through that domain about everything that is important. For example, for Sardinians, bread is world (Counihan 1984). In the production, distribution, and consumption of bread are manifest Sardinian economic realities, social relations, and cultural ideals. An examination of foodways in all cultures reveals much about power relations, the shaping of community and personality, the construction of the family, systems of meaning and communication, and conceptions of sex, sexuality, and gender. The study of foodways has contributed to the understanding of personhood across cultures and historical periods (see Messer 1984).
Every coherent social group has its own unique alimentary system. Even cultures in the process of disintegration reveal their plight in the ways they deal with and think about eating. Cultures articulate and recognize their distinctiveness through the medium of food. The English call the French “Frogs” because of their habit (wildly barbarian to the English) of eating the legs of that creature (Leach 1964: 31). In the Amazon region, Indian tribes that appear alike in the eyes of an outsider nonetheless distinguish themselves from one another in part through their different habits, manners, and conceptions of eating. Maligned groups are defined as those who eat people, or who eat animals thought disgusting, such as “frogs and snakes and mice” (Gregor 1985: 14). Food systems are, of course, intimately related to the local environment, but in most cultures “only a small part of this edible environment will actually be classified as potential food. Such classification is a matter of language and culture, not of nature” (Leach 1964: 31). The study of foodways enables a holistic and coherent look at how humans mediate their relationships with nature and with one another across cultures and throughout history.
Historically, dietary salt (sodium chloride) has been obtained by numerous methods, including solar evaporation of seawater, the boiling down of water from brine springs, and the mining of “rock” salt (Brisay and Evans 1975). In fact, R. P. Multhauf (1978) has pointed out that “salt-making” in history could be regarded as a quasi-agricultural occupation, as seen in frequent references to the annual production as a “harvest.” Such an occupation was seasonal, beginning with the advent of warm weather or the spring high tide and ceasing with the onset of autumnal rains. Multhauf has argued further that the quest for salt led to the development of major trade routes in the ancient world. The historian Herodotus, for example, described caravans heading for the salt oases of Libya, and great caravan routes also stretched across the Sahara, as salt from the desert was an important commodity exchanged for West African gold and slaves. Similarly, huge salt deposits were mined in northern India before the time of Alexander the Great, and in the pre-Columbian Americas, the Maya and Aztecs traded salt that was employed in food, in medicines, and as an accessory in religious rituals. In China, evidence of salt mining dates from as early as 2000 B.C.
Homer termed salt “divine,” and Plato referred to it as “a substance dear to the gods.” Aristotle wrote that many regarded a brine or salt spring as a gift from the gods. In the Bible (Num. 18: 19), it is written: “This is a perpetual covenant of salt before the Lord with you and your descendants also.” In the Orient, salt was regarded as the symbol of a bond between parties eating together. In Iran, “unfaithful to salt” referred to ungrateful or disloyal individuals. The English word “salary” is derived from the Latin salarium, the salt allowance that formed part of the pay of Roman soldiers. Moreover, Roman sausages were called salsus because so much salt was used to make them (Abrams 1983).
Over the past 2,000 years, scholars have produced a vast literature on food prejudices and taboos. This literature, however, is complicated by confusing etymology and indiscriminate or inconsistent application of several terms, such as food aversions, avoidances, dislikes, prejudices, prohibitions, rejections, and taboos/tabus.
The term aversion is used by food-habit researchers primarily in the context of disliked or inappropriate foods, whereby individuals elect not to consume items because of specific biological or cultural criteria. Some human food aversions, for example, are immediate, as when foods are tasted and disliked because of sensory properties of odor, taste, and texture. Other foods are avoided because of biological-physiological conditions posed by nausea and vomiting, “heartburn” or “acid stomach,” intestinal distress associated with flatulence, or acute diarrhea. Still other food aversions are cultural or psychological in origin, as evidenced when individuals report disliking specific foods that they have never consumed. In such instances, anticipation triggers avoidance or aversive behavior, and merely the color, shape, or image of the food source itself is enough to elicit aversion and the individual decision not to eat.
The word taboo or tabu, in contrast, implies a moral or religious context for foods or food-related behavior. Taboo, a Polynesian concept meaning “set apart,” includes the suggestion that some human activities, and eating behavior specifically, may be either protective or deleterious to the environment, to the consumer, or to society at large. Food-related taboos in this context are identical to dietary prohibitions, whereby foods and food-related behaviors are forbidden for specific positive or negative reasons.
The discovery of the chief nutrients has been essentially a twentieth-century phenomenon. In 1897, Dutch researcher Christiaan Eijkman, while investigating beriberi in the Dutch East Indies, showed that a diet of polished rice caused the disease and that the addition of the rice polishings to the diet cured it. Fifteen years later, Polish chemist Casimir Funk proposed that not only beriberi but also scurvy, pellagra, and rickets were caused by the absence of a dietary substance he called vitamine; and the age of vitamins was under way.
This is not to say that much earlier research did not undergird such twentieth-century breakthroughs. The importance to human health of some minerals, such as iron, had long been at least vaguely recognized, and by 1800, it was understood that blood contained iron; since the eighteenth century, some kind of dietary deficiency had been a periodic suspect as the cause of scurvy; and protein was discovered in the nineteenth century. But in addition to both water- and fat-soluble vitamins, the importance and functions of most of the major minerals and the trace minerals, along with amino acids, were all twentieth-century discoveries, as were the essential fatty acids and the nutritional illness now called protein–energy malnutrition (PEM).
One important consequence of the new knowledge was the near-total eradication of the major deficiency diseases. Pellagra, which had ravaged southern Europe and the southern United States, was found to be associated with niacin deficiency; beriberi, the scourge of rice-consuming peoples in the Far East, was linked with thiamine deficiency; and scurvy was finally – and definitively – shown to be the result of vitamin C deficiency.
Sesame (Sesamum indicum L.) belongs to the Pedaliaceae, a small family of about 15 genera and 60 species of annual and perennial herbs. These occur mainly in the Old World tropics and subtropics, with the greatest number in Africa (Purseglove 1968). Sesame is a crop of hot, dry climates, grown for its oil and protein-rich seeds. The oil is valued for its stability, color, nutty flavor, and resistance to rancidity.
A large number of cultivars are known (Bedigian, Smyth, and Harlan 1986). These differ in their maturation time, degree of branching, leaf shape and color, and number of flowers per leaf axil, which may be 1 or 3. The locules in the capsule usually number 4 or 8. The known cultivars also vary in length of capsule, in intensity of flower color, and especially in seed color, which ranges from pure white to black, with intervening shades of ivory, beige, tan, yellow, brown, red, and gray. The seeds are about 3 millimeters long and have a flattened pear shape. The capsules open automatically when dry, causing the seed to scatter.
Production
Sesame is usually grown as a rain-fed crop. It has many agricultural advantages: It sets seed and yields relatively well under high temperatures, it is tolerant of drought, and it does reasonably well on poor soils. It is, however, very sensitive to day length and intolerant of waterlogging. The major obstacle to the expansion of sesame is its habit of shattering: The absence of nonshattering cultivars suitable for machine harvest results in labor-intensive harvest seasons. Because of this obstacle, the crop is not suited to large-scale commercial production (although breeding for nonshattering traits has been ongoing). Instead, sesame has typically been grown on a small scale for local consumption or in places where labor is cheap.
Calcium is the fifth most abundant element in the biosphere, after oxygen, silicon, aluminum, and iron. It is present in high concentration in seawater and in all fresh waters that support an abundant biota. Fortuitously, the calcium ion has just the right radius to fit neatly within the folds of various peptide chains. Calcium thereby stabilizes and activates a large number of structural and catalytic proteins essential for life. In this capacity calcium serves as a ubiquitous second messenger within cells, mediating such diverse processes as mitosis, muscle contraction, glandular secretion, blood coagulation, and interneuronal signal transmission. Controlling these activities requires careful regulation of the concentration of calcium in critical fluid compartments. This regulation is accomplished in two basic ways.
At a cellular level, calcium is ordinarily sequestered within intracellular storage compartments. It is released into the cell sap when needed to trigger various cellular activities, and then quickly pumped back into its storage reservoirs when the activity needs to be terminated. This control mode is exemplified by the accumulation and release of calcium by the sarcoplasmic reticulum of striated muscle. The second type of control, utilized by many tissues in higher organisms, is the tight regulation of the calcium level in the blood and extracellular fluids that bathe all the tissues. Individual cells, needing a pulse of calcium, simply open membrane channels and let calcium pour in from the bathing fluid; they then pump it back out when the particular activity needs to cease.
The American bison (Bison bison) is more closely related to cattle than to true buffalo, such as the water buffalo (Bubalus bubalis). Nonetheless, early European settlers called the unfamiliar animal they encountered in North America a “buffelo” [sic], and this misnomer has persisted to the present. Because we are so accustomed to thinking of the American bison as a buffalo, the terms are used interchangeably in this chapter.
Long perceived to be an environmental casualty of the conquest of the Great Plains, the American bison has gradually reasserted its presence in North America. Today, there are around 200,000 of the animals alive on the continent, and the danger of species extinction appears to have passed (Callenbach 1996). However, any further expansion of the population is likely to be linked, at least in part, to the animal’s economic usefulness, especially as a food source. At present, only a limited number of people are acquainted with the taste of bison. For buffalo meat to become part of the national diet, the advertising and agricultural industries will have to reintroduce a food which, at an earlier time, was essential to many of the continent’s inhabitants.
In size and appearance, the bison is imposing and distinctive. A mature male stands from 5 to 6 feet tall at the shoulder and may weigh from 1,800 to 2,400 pounds. Noticeably smaller, females seldom weigh more than 800 pounds. Despite its bulk, the bison is quick and agile and can sprint at speeds of up to 30 miles per hour. Like domestic cattle and sheep, it is cloven hooved; unlike them, both the male and female possess large, curved horns and a prominent hump at the shoulder. Buffalo are usually dark brown in color, although their hue lightens to brownish-yellow during the spring. The animals have two types of hair: a long, coarse, shaggy growth covering the neck, head, and shoulders, and a shorter, woolly growth found on the remainder of the body.
The history of the scientific documentation of the need for fat in the diet began with the early nineteenth-century work of Michel Eugène Chevreul (Mayer and Hanson 1960). He showed that lard contained a solid fat, which he termed stearine, and a liquid fat he called elaine (later shown to be the isomer of oleine), and in 1823, this work was published in a treatise, Chemical Investigations of Fats of Animal Origin. Chevreul also crystallized potassium stearate, naming it “mother-of-pearl,” and called its acidified product “margarine” (from the Greek word for mother-of-pearl). In addition, Chevreul isolated various acids from fats and distinguished them on the basis of their melting points.
Meanwhile, in 1822, Edmund Davy had reported that iodine would react with fats, and by the end of the century, work by L. H. Mills and Baron Hübl led to the procedure devised by J. J. A. Wijs in 1898 for determining a fat’s “iodine value” or “iodine number” – a measure of the extent to which a fat is unsaturated, based on its uptake of iodine. Highly saturated coconut oil, for example, has an iodine number of 8 to 10, whereas that of highly unsaturated linseed oil ranges from 170 to 202.
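For reference – the convention is standard analytical chemistry, though the passage does not spell it out – the iodine value is the mass of iodine absorbed per 100 grams of fat:

$$\text{iodine value} = 100 \times \frac{m_{\mathrm{I_2}}}{m_{\text{fat}}}$$

Thus coconut oil, at 8 to 10, takes up less than a tenth of its own weight in iodine, whereas linseed oil, at 170 to 202, absorbs roughly twice its weight.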
Phospholipids were described in 1846 by N. T. Gobley, who found that egg yolk contained a substance with nitrogen and phosphorus in addition to glycerol and fatty acids. He named it lecithin. The nitrogenous base was shown to be choline by A. Strecker in 1868, and J. W. L. Thudichum described kephalin in 1884 (Mayer and Hanson 1960).
Magnesium is one of the most plentiful elements in nature and the fourth most abundant metal in living organisms. It is extremely important in both plant and animal metabolism. Photosynthesis does not proceed when the magnesium atom is removed from the chlorophyll molecule. Magnesium also plays a key role in many enzyme reactions that are critical to cellular metabolism and is one of the main determinants of biological excitation (Aikawa 1981).
Despite its ubiquitous distribution and the multiplicity of its actions, magnesium has long been considered a microelement with a vague physiological role, and not until the early 1930s was it recognized as an essential nutrient. Magnesium deficiency in humans was not described until 1951, and, according to several experts, it continues to be diagnosed less frequently than it should be (Whang 1987).
There are many explanations for the reluctance to allow magnesium deficiency a place in medicine; among them are the difficulties in measuring magnesium, which have restrained the accumulation of knowledge. Moreover, the essentially intracellular location of the magnesium ion has discouraged the detection of its deficit. Lastly, because magnesium is so widely distributed in foods, its dietary intake has been assumed to be sufficient to meet the body’s requirements.
Actually, pure magnesium deficiency is quite rare. Marginal deficiency of magnesium, however, is believed to occur fairly often in the general population. Moreover, in most diseases causing magnesium deficiency, significant nutritional factors exist.
There are at least two reasons why the nutritional status of women should be distinguished from that of men. The first is that a woman’s nutritional status has a direct impact on her children. Better-nourished mothers lead to better-nourished infants by virtue of prepregnancy nutritional status, weight gain during pregnancy, and diet during lactation. This approach to women’s nutritional status encapsulates the traditional “breeder and feeder” view.
The second reason is that women exhibit certain nurturing and allocative behaviors, reflecting societal roles, that enhance the food and nutrition security of the entire household and of children in particular. This behavior is most commonly demonstrated in the way women allocate their time and their own income and is particularly visible in certain types of female-headed households. Through both the direct and indirect links, women are the “gatekeepers” of the food and nutritional status of their household’s members.
The objective of this chapter is to summarize the literature underlying such links between gender and nutrition within a conceptual framework. Eight main links are identified and discussed in turn, although it should be recognized that this organization is merely a convenient representation of the issues, and that there is considerable overlap across links.
Link 1. Mother’s Nutritional Status, Infant and Child Health, and Supplementary Feeding
Birth weight is the single most important determinant of neonatal and infant mortality and of child growth to the age of 7. A number of maternal factors have been shown to be significant determinants of birth weight; the most important are the mother’s pregravid weight and her weight gain during pregnancy. Women entering pregnancy with a low preconception weight are several times more likely to produce a low-birth-weight baby (one weighing less than 2,500 grams). Mean birth weight increases, and the incidence of low birth weight decreases, as the preconception weight of the mother increases (Lechtig et al. 1975).
The food history of Native Americans before the time of Columbus involved ways of life ranging from big-game hunting to (in many cases) sophisticated agriculture. The history of foodways in North America since Columbus has been the story of five centuries of introduced foodstuffs, preparation methods, and equipment that accompanied peoples from Europe, Asia, and Africa, with the food culture of North America having been enriched by each addition.
The Sixteenth Century
Most narrative histories of North America give little attention to the sixteenth century, even though two earthshaking events took place during this time that were to alter the continent’s history fundamentally. One was the demographic collapse of the native populations in the face of Eurasian diseases, such as smallpox. This made possible the second, which was the establishment of European settlements along the eastern seaboard without substantial native resistance.
The Native Americans
The peoples of North America, who numbered perhaps 20 million in 1492, dwelled in societies of many different types, with their cultures shaped by their foodways. Thus, those who depended on hunting and gathering usually lived in roaming bands, whereas maize agriculture normally implied settled life in villages or towns. In the north and west of the continent, the hunter–gatherer lifestyle still predominated in 1492. Game varied from bison on the Great Plains to rats in the deserts of the Southwest. Men hunted and women gathered in these usually nomadic, band-level societies. Some bands, like the Mi’kmaq of Nova Scotia, grew a single crop, tobacco, and hunted and gathered the remainder of their food supply (Prins 1996). It is important to note that the European picture of Indians as primitives did not allow for such sophistication: The Mi’kmaq knew perfectly well what agriculture was but chose to obtain their food from the wild and to plant only tobacco, which the wild could not provide. Where a staple food could be collected easily, as in parts of California (where acorns were the daily fare) or in northern Minnesota (where wild rice was the staple), the natives frequently formed settlements (Linsenmeyer 1976).