In the revolutionary year of 1848, former China merchant Asa Whitney stood before the Pennsylvania legislature to unveil a “skeleton map.” Standard world maps were marred by an epistemological error, he argued; centering where “Europe, Asia and Africa” met, they pushed North America to “one side of all, as if of no importance.” Whitney’s cartography righted this wrong, showing America as it really was: “in the centre of all.” More than a salve to hemispheric pride, Whitney thought his map demonstrated that the “belt of the globe” – the east-west band running across Europe and Asia that contained “the population and the commerce of all the world” – was missing its buckle, a gap in the zone he proprietarily called “our continent.”
During the decades separating the two world wars, Americans often defined themselves politically in reference to the Soviet Union. For most individuals, this meant eyeing Moscow with suspicion or hostility. A significant number of reformers and radicals, however, drew inspiration from the October Revolution. They viewed the Soviet Union as a vast, daring experiment that wedded scientific planning with ideals of equality in all areas of human endeavor: economics, statecraft, nation-building, gender relations, and so on. The most devoted enthusiasts joined the American Communist Party (CP), which, since its founding in 1919, committed itself to establishing a Soviet-style “dictatorship of the proletariat” in the United States. Hundreds of thousands of individuals belonged to the party, affiliated “front-groups,” and allied organizations, for greater or lesser periods of time, between the 1920s and 1940s. Always plagued by high turnover within its ranks, the CP never came close to achieving its ultimate goal. Still, it exerted a profound influence on the American left by transmitting ideas and policies formulated in Moscow to the United States.
Independence or union? Alone or connected? In revolutionary America, this was a false choice. Most defined the choice of revolution – the choice for the North American provinces, and for themselves – as between remaining in one complex polity and creating another. The choice was between competing unions. But there had always been more than one union in the colonists’ Atlantic world, and, after independence, the possibilities for association proliferated. The Age of Revolutions witnessed an astounding array of imaginative plans for integrating peoples, places, and ideas. Only in retrospect does the reciprocity between independence and union seem paradoxical. Many still remember that the Declaration of Independence “dissolve[d]” the “political connections” to Great Britain, but not Congress’s simultaneous assumption of the power to “contract alliances, establish commerce, and to do all other acts and things which independent states may of right do.” Although Patrick Henry’s demand for “liberty or death” may have rallied some to the cause, Benjamin Franklin’s repeated injunction to “join, or die” better captured revolutionary imperatives.
The US’s relationship to “Islam” – that is, Islam as a construct rather than a reflection of the religion and its adherents – is contradictory. In Covering Islam, Edward Said argued that “Islam,” as a part fictional and part ideological designation, is based not only on “patent inaccuracy” but also on “unrestrained ethnocentrism.” While Said focused on the racial and cultural hatred of the Muslim “other” produced in the West by cultural thinkers, experts, journalists, and policymakers, this analysis explores both racialized constructs and glorified ones. In both cases, these constructions are ethnocentric in that they represent (and help to construct) a narrow understanding that tends to prioritize US geopolitical interests. While policymakers set the terms of discussion and are the “primary definers” of a topic, cultural products, from films to news stories, also play a role in shaping how state policy is defined. Further, culture can also provide, on rare occasions, an opportunity to address American policies critically.
In the early twentieth century, corporations and very large privately held companies circled the globe in search of raw materials and new markets for the flood of consumer goods that rapid industrialization had produced. Standard Oil, Singer Manufacturing, Ford Motor, United Fruit, Coca-Cola, Victor Talking Machine, Edison Manufacturing, Firestone Tire and Rubber, and the British American Tobacco companies, along with countless others, sought to source rubber, tobacco, bananas, cotton, and many other raw materials from outside the United States. Simultaneously, many of these same companies sold a proliferating array of commodities across the globe. As part and parcel of this industrial development, culture industries – including the film, record, print media, and advertising industries – circulated brand glyphs, advertisements, films, sound recordings, and print narratives in a variety of formats across diverse international markets. By both accident and design, these cultural products interacted with other commodities and sold unstable interpretations of this very process of globalization to consumers.
Uncertainty lies at the heart of early American intellectual history. Renaissance explorers and colonizers of North America sought to reconcile their monarch’s intentions, God’s will, and their own interests. The religious reformers who followed them questioned the state of their own souls and the security of their covenant with their Lord. The disputed status of truth – raised both by the New Science and by the reformed emphasis on verifiable evidence of religious conversion – highlighted the limits of human knowledge in colonial settings. The Indigenous people thrown into contact and conflict with European newcomers and the African born compelled to a transatlantic voyage and labor in the Americas grappled with worlds transformed and sought to define their uncertain places within them. Nor did these problems abate in the eighteenth century. The now established creole colonies wrestled with their political and social place in a consolidating British Empire. The expansion of slavery and the continuing conflicts with Indigenous groups produced new languages and understandings of race, while threatening the creole sense of equality with Europe. Enslaved Americans produced their own cultural and religious traditions while Native Americans experimented with forms of proto-nationalist thinking.
Indigenous Americans’ encounter with the world began at the water’s edge. Initial encounters with Europeans happened on beaches, on islands, and on the water itself. As foreign powers began to colonize the Americas, saltwater fringes would form some of the most profitable and contested regions. This fact, which scholars have only recently started to examine with care, goes against common assumptions about where “borderlands” and “frontiers” are supposed to take shape. Looking to the continent’s margins reveals a distinct category of contested spaces that did not work by the same rules as terrestrial ones. From the fifteenth to the eighteenth century, two particular American coastal regions faced economic, political, and cultural changes that were all connected to the underlying ecological dynamism of shorelines.
Migration scholars frame the history of immigration to the United States in terms of “pre-1965” and “post-1965” to emphasize the major demographic changes that US society experienced after the passage of the Immigration and Nationality Act of 1965, also known as the Hart–Celler Act. After decades of rabid nativism, US society, the story goes, became the most diverse in its history after Congress repealed the draconian 1924 Immigration Act in 1965. The 1924 law imposed a near total ban on immigration from Asia and introduced the national origins quota system to curtail immigration from eastern and southern Europe. Scholars portray the years from 1924 to 1965 as a period characterized by a lull in immigration, isolationism, and xenophobia. Yet, the period was much more dynamic than it first appears and, especially after the late 1930s, ushered in many of the changes that led to the demographic shifts usually attributed to the 1965 act.
Since the end of World War II, the production of American knowledge of the world has undergone both exponential expansion and a radical transformation. Before the war, missionaries and military planners and the earliest multinational corporations were the primary sites for the production of knowledge of the rest of the world. World War II and the coming of the global Cold War underscored for US elites the need to better understand the world, which underwrote the creation of standing intelligence agencies – agencies whose knowledge and personnel drew on and were interdigitated with growing US multinational corporations, a rapidly expanding academic enterprise grounded in Area Studies programs as well as development and international affairs programs, and world-spanning print-media empires.
With good reason, the historian Bernard Bailyn – writing near the end of the twentieth century – described the “peopling of British North America” as “the most sweeping and striking development in this millennium of Western history.” The movement of people from one side of the ocean to the other “transformed at first half the globe, ultimately the whole of it, more fundamentally than any development except the Industrial Revolution.” He was not alone in believing this. Bismarck, as he reminded us, had called the migration of so many “the decisive fact in the modern world.” The movement of men and women from the Old World to the New, and this is fairly uncontestable, made British North America and then the United States distinctive, creating a diverse cultural landscape, just as the flow increased the productive capacity of the West in ways that no one could have anticipated. Migration produced untold wealth, expanded the territorial footprints of colonies and then states, remade the political economy and the social fabric of the broader Atlantic, and would make the United States one of the most powerful nations in the world. The story of migration is the story of America.
At the end of the nineteenth century, Japan and the United States stepped forward onto the world stage as two non-European great powers that would dominate the Asia Pacific region, eclipsing the European military and economic powers that had prevailed before. After the Sino-Japanese War of 1894–5 and the Spanish-American War of 1898, overseas expansion and spheres of influence began to play major roles in both nations’ self-conceptions. Japan, seeking to avoid becoming a colonial possession itself by acquiring colonial territory, eventually chose to become a continental power and would dominate the Asian mainland. The United States, by contrast, focused less on the imperialist acquisitions of territory per se than on the expansion of American commerce and trade. These two world views and approaches to expansion in Asia would eventually collide at Pearl Harbor.
The decades since World War II have been characterized by distinct patterns of conflict and cooperation between US foreign policy and humanitarian organizations. This is, in many ways, predictable given their goals and what each wants from the other. American humanitarian organizations pledge a commitment to the idea of saving distant strangers in urgent, life-threatening circumstances. And like all humanitarian organizations, they need two fundamental things to do so. The first is money, and lots of it: assembling, transporting, distributing, administering, and overseeing massive amounts of aid requires considerable funds, material, and staff. The second is access to the affected populations, access that is mainly controlled by states and armed groups. In natural disasters this tends to be a minor challenge; governments and affected populations typically welcome any and all aid. Human-made disasters such as war, though, are an entirely different matter. Delivering aid in war zones is much less straightforward.
Development assistance served as an important tool of American foreign policy in the postwar decades, when the United States was confronted with a series of new challenges and opportunities arising from the global constellation coproduced by the Cold War and the decolonization of the former European colonies in Africa and Asia. The ways in which American governmental and nongovernmental organizations employed development aid were highly diverse. Military aid and relatively small technical assistance programs dominated the agenda when the first wave of decolonization took place in Asia. When the African colonies began to gain independence in the late 1950s, at the same time that the Soviet Union became a stronger presence in the so-called Third World, US aid budgets and purposes increased notably. While the United States provided the largest total amounts of aid, the development field internationalized rapidly in the postwar years, and within the Western bloc there was at times little agreement about best practices and appropriate contribution levels.
On November 17, 1862, the New York Times reported that the transatlantic slave trade continued to flourish, arguing, “no commerce was ever more profitable than the traffic in Africans, provided those engaged could go on unmolested.” Even after the United States abolished the “African Trade” in 1807, slave trading across the Atlantic remained alluring to merchants unconvinced of the lucrativeness of “legitimate” trade with West and Central Africans. The Times tracked over 150 vessels that engaged in the slave trade from 1858 to 1861. Of that number, American authorities had seized thirty-six ships in US ports alone, primarily in the South. The commandeering of these ships exposed some American citizens’ complicity in perpetuating a trade then considered the shame of the “civilized” world. The Times detailed the circumstances under which officials impounded slave ships and arrested “Slavers” during the period preceding the Civil War.
Native peoples shaped European colonial ventures in North America from the beginning of New World exploration deep into the so-called national era. When European colonists planted themselves in the Americas, they did not so much initiate a new era as become swept into an Indigenous current that extended back several millennia. European colonialism in North America is, as a persisting myth has it, a story of forging a new people in a new world, but, more immediately, it is a story of European newcomers struggling to understand and control the ancient worlds the Indians had made.