The Cambridge History of Law in America

ISBN-13: 9780521803076

Belief that the United States occupies an exceptional place in world history has been a persistent element of the American creed. The founding of the nation was a new birth of freedom, Americans have been taught; it delivered them from the class conflict and ideological strife that have plagued the rest of the modern world. Not infrequently, seekers of the ultimate source of the United States’ exceptionalism have settled on the peculiarly fragmented nature of its government. The nation was born in a revolt against the modern state. In Europe, standing armies, centralized taxation, juryless courts, and national bureaucracies loyal to a distant sovereign were the hallmarks of the proudest monarchies. To Revolutionary America, they were evidence of tyrannous intent, “submitted to a candid world.” To prevent such abominations from reappearing in the new nation, Americans shattered sovereignty into legislative, executive, and judicial fragments and embedded them in their states’ written constitutions. The Federal Constitution of 1787 went further, for it also divided sovereignty between the national government and the states. The result, as John Quincy Adams observed, was “the most complicated government on the face of the globe.”1

   The new nation had plenty of law and plenty of local governments ready, willing, and able to promote private economic endeavor with grants of public land and public money. What the United States lacked, however, was centralized administration, a counterpart to the royal bureaucracies of Europe capable of consistently implementing national policies. The central government had to entrust the enforcement of an order to “agents over whom it frequently has no control, and whom it cannot perpetually direct,” explained Alexis de Tocqueville. Tocqueville approved of such an arrangement for a democracy, because it prevented a tyrannous majority from imposing its will on the nation. If the American state ever became as wieldy as its European counterparts, he warned, “freedom would soon be banished from the New World.”2

   Centralized administration finally came to the United States in the twentieth century in three waves of state-building. Each was consolidated into a durable political “regime,” an amalgam of institutions, elites, social forces, and ideas that, for a time, established a fundamental set of assumptions about politics for all major political actors. Each political regime emerged when war, another national emergency, or a period of unusual social ferment created a demand for a new bureaucracy or the transformation of an existing one. These administrative responses followed no master plan. The new or reformed administrative bodies were hastily assembled from whatever form of governance seemed most promising in the midst of political battles in a deeply divided state.

   Administration was employed domestically in five different ways. First, it was used to conduct command-and-control regulation through administrative orders that told social or economic actors how to behave. Second, it was employed in the work of social insurance, the public provision of compensation for the misfortunes that regularly beset the members of industrial societies. Third, it was used to deploy the power of the state to collect tariffs, impose taxes, and issue public debt. Not infrequently, the ends sought were social or economic, as well as fiscal. Fourth, administration was used in the conduct of state capitalism – the public creation of economic infrastructure or the conferral of grants, loans, and other public benefits to encourage private individuals to create the infrastructure themselves. Finally, new administrative structures were created to assist or supplant the courts in the work of social police, the preservation of domestic tranquility.

   Once each state-building moment passed, a period of consolidation ensued, during which older institutions, elites, and socially dominant groups reasserted themselves until an accommodation of the old and new was reached. Here, we begin with the consolidation of the 1920s, in which the new bureaucracies managed to acquire a subordinate place within a state dominated by courts and political parties. Centralized administration came into its own in a second cycle of state-building and consolidation, which commenced in the New Deal, fully emerged during World War II, and persisted well into the Cold War. We conclude with a third cycle, set off by the new “public interest” politics of the 1960s and 1970s and brought to a halt by a series of contractions in the 1980s and 1990s.

   Tocqueville’s warning notwithstanding, administration won a place in the American polity, but only on terms fixed by lawyers – not only those appointed to the judiciary or government legal staffs but also those in private law firms, corporate law departments, and public interest groups. Throughout the twentieth century, the lawyers, their clients, and their political allies demanded that bureaucrats respect an ancient ideal, that of “a government of laws and not of men.” Each consolidation had its own version of the ideal, which located the sources of the “laws” that constrained the “men” (and women) of government in different entities: the bench, the needs of modern society, or the welfare of a nation of consumers. In each consolidation, political actors dominant in an earlier political regime invoked the “supremacy of law” ideal to constrain an administrative innovation that placed them at a disadvantage. But only in the last of the twentieth century’s three cycles did consolidation attempt a general contraction of the administrative state. Significantly, this was the only one of the twentieth century’s consolidations in which economists, the dominant profession of the market, effectively rivaled lawyers, the dominant profession of the state, as articulators of public policy.


Our chronological point of departure, 1920, came just after the crest of the wave of state-building that had occurred during the Progressive era. That wave emerged at the state and local level in the 1890s and reached the federal government by World War I. During the 1920s, most of the new bureaucracies struggled to become autonomous parts of the American state. On one side they were challenged by judges, who doubted the bureaucrats’ expertise and commitment to due process. On another, they faced demands for appointments and policies that promoted the interests of the nation’s bottom-up, patronage-oriented political parties. Administration, then, was contained by older, more familiar political structures; in the 1920s the American state still bore more than a passing resemblance to the one Tocqueville knew.

   In 1920, price-and-entry regulation by independent commission, created outside the regular departments of the executive branch, was the most salient feature of the American administrative state. Railroad commissions had been the first to arrive on the scene, established by the states after the Civil War and at the federal level, in the guise of the Interstate Commerce Commission (ICC), in 1887. Commissions limited entry into a regulated industry to firms with the requisite know-how and financial backing. They also set the rates businesses could charge for their goods and services and imposed a host of other rules. Railroad commissions, for example, developed and enforced detailed safety regulations, ordered companies to share freight cars, and decreed when railroads might abandon service to particular stations.

   At the federal level, the ICC was joined in 1913 by the Federal Reserve Board, which governed the banking industry, and in 1914 by the Federal Trade Commission (FTC), which policed unfair business practices. In the states, the focus of regulation shifted away from intercity railroads (which became the ICC’s exclusive preserve) to other matters. In Texas, for example, the “Railroad Commission” regulated the increasingly important oil and gas industry. More common was a turn to the regulation of municipal utilities, such as electricity, water, natural gas, streetcars, and subways. New York and Wisconsin created the first public utilities commissions (PUCs) in 1907. Seven years later all but three states had at least one PUC.

   The bellwether program of social insurance, in the United States as elsewhere, was workers’ compensation, a system of fixed payments to the victims of workplace injuries and their dependents. Between 1911 and 1920 forty-two American states enacted compensation schemes for industrial accidents; two more followed in the 1920s. After several false starts, federal commissions for harbor workers and the residents of the District of Columbia were created in 1927 and 1928.

   American reformers argued that the United States ought to follow other industrial nations by extending the social insurance concept to cover life’s other misfortunes, such as old age, unemployment, and illness. An indigenous precedent existed in pensions for Civil War veterans and their dependents, but it was a somewhat dubious one, as a series of Republican administrations had put the system to partisan use. Only in the category of “mothers’ pensions” did the United States lead the world. These quite meager payments were intended to keep mothers who lacked able-bodied husbands in the home, where they could look after their children. Forty states had some form of mothers’ pensions by the end of 1920. Four other states and the District of Columbia followed suit in the next decade.

   The most important administrative innovations in the area of fiscal management involved taxation. State and local governments had long relied on property taxation to finance their activities, but by the end of the nineteenth century the manipulation of assessments by political machines had become a scandal. One Progressive reform was to shift responsibility from local officials to statewide “equalization boards.” Another was to shift to new forms of taxation that were more difficult to use to reward political friends and punish political enemies. Income taxation soon became the reformers’ tax of choice. Wisconsin implemented an income tax in 1911 as part of a broad campaign of Progressive reform. The tax spread to other states, and by 1922 income taxes accounted for 22 percent of all state revenue.

   On the federal level, the ratification of the Sixteenth Amendment in 1913 was quickly followed by the adoption of a modest income tax, covering only 2 percent of the American workforce and intended as a first step in reducing federal reliance on tariffs. Coverage expanded with the United States’ entry into World War I, and a new tax on profits was instituted. The staff of the Bureau of Internal Revenue (the predecessor of the Internal Revenue Service) increased from 4,000 in 1913 to 15,800 in 1920. Prominent economists and Wall Street lawyers were appointed to high positions in the Treasury Department, where they formed a tax policy group of unprecedented ability and sophistication. Although some of the wartime innovations – such as the excess profits tax – did not survive the Republicans’ return to power in 1921, World War I remained an object lesson in how to use federal taxes to make economic and even social policy.

   In the field of state capitalism, most conferrals of public benefits to promote economic development still followed the nineteenth-century practice of distributing grants outright, with few strings attached. Such grants might have become vehicles of planning had recipients been required to follow specific policies (such as the preservation of the environment) and some administrative body been given the job of making sure that they did. But the dominant policy in the distribution of public largess had not been planning, but rather what the legal historian Willard Hurst called “the release of individual creative energy.”3 That policy persisted into the 1920s.

   More creative use of administration was evident in the construction and maintenance of public infrastructure. Road-building had long been the work of local governments, but in 1916 Washington stepped in with a “grant-in-aid” program. Public ownership of other forms of transportation was rarer, although the railroad industry was briefly nationalized during World War I and a permanent, government-owned “merchant marine” was created when transatlantic shipping became too risky for private carriers. State ownership of other public utilities was also limited. Revelations of political corruption brought an end to a late-nineteenth-century trend toward the creation of city-owned water, gas, and streetcar companies. Thereafter, urban voters preferred private ownership coupled with regulation by a statewide public utility commission. At the federal level, war again provided the impetus for an exceptional case of state ownership. In 1916 Woodrow Wilson approved the development of hydroelectric power at a government-owned dam across the Tennessee River at Muscle Shoals, Alabama, for use in the production of explosives and fertilizer. Completed in 1925, the facility’s full potential was not realized until a staunch advocate of public power, Franklin Delano Roosevelt, won the presidency.

   In the field of social police, administrators captured relatively little ground from the courts, which invoked the powerful constitutional tradition that held their procedures to be the surest defender of the rights and liberties of the subject. The settlement of labor disputes was a case in point. Many states had created boards for the voluntary mediation and arbitration of labor disputes after the Civil War, and a federal system for arbitrating railway labor disputes was established after the Pullman boycott of 1894. During World War I, the U.S. Army insisted on minimum labor standards in its contracts for uniforms, and the federal government created several commissions and boards to mediate labor disputes. The most powerful of these agencies, the National War Labor Board (NWLB), brought labor leaders and businessmen together under the joint chairmanship of a former president (William Howard Taft) and a nationally known labor lawyer (Frank Walsh). But the state boards had no power to compel workers or employers to accept their recommendations, and the NWLB was abolished in 1919. Criminal prosecutions and court injunctions remained the dominant mode of policing labor disputes until the New Deal.

   Only in the field of immigration, where the objects of social policing were not citizens, did administration make major inroads on the judiciary. For most of the nineteenth century, federal courts had directed the exclusion of aliens. Even Chinese immigrants, singled out for especially unfavorable treatment in 1882, could remove their cases from the purview of customs officials into federal courts. In 1891, however, Congress established a Bureau of Immigration and subsequently empowered it to decide the citizenship status of all immigrants. The U.S. Supreme Court put some of the Bureau’s determinations beyond judicial review in the Ju Toy decision of 1905. Equally deferential decisions would help keep immigration an area of extraordinary administrative discretion throughout the twentieth century.

   The administrators of the Progressive state were thus a miscellany of officials, scattered across the social and economic landscape, who answered to no single authority, tyrannous or otherwise. Still, their mere presence was hard for lawyers to square with the Tocquevillean notion that Americans were exceptionally free from governmental control. They turned to an Englishman, Albert Venn Dicey, for help. In his Introduction to the Study of the Law of the Constitution (1885), Dicey contrasted the “rule of law” in common law countries with the “administrative law” that prevailed in France and other civil law jurisdictions on the European continent. In common law countries, Dicey argued, citizens could contest the actions of administrators in the “ordinary courts of the land” – that is, in courts of general jurisdiction whose main work was the resolution of the disputes of private parties. In France and elsewhere, citizens could only appeal to specialized courts embedded in the very bureaucracies whose orders they contested. Translated into an American idiom, Dicey taught that Americans could have both bureaucracy and a “government of laws,” so long as administrators’ actions could be challenged in courts presided over by common law judges.

   Throughout the twentieth century, American judges routinely pledged their fidelity to Dicey’s notion of the rule of law. Just as routinely, they departed from it in practice. One striking example involved the non-delegation doctrine, the principle that lawmaking power vested in a legislature might not be delegated to any other public institution or official. Applied strictly, the doctrine would have kept any number of administrative agencies from promulgating rules and regulations in support of their statutory missions. In a series of decisions between 1904 and 1928, however, the U.S. Supreme Court upheld sweeping delegations by employing the fiction that administrative officials were merely executing the clearly defined will of Congress. So long as a statute embodied an “intelligible principle,” the Court decided, the non-delegation doctrine was satisfied. Vague standards such as the ICC’s charge to set “just and reasonable” rates or the Federal Radio Commission’s mandate to issue licenses in accordance with the “public interest, convenience, and necessity” easily passed constitutional scrutiny.

   Courts also deferred to administrators by refusing to make their own determinations of the facts supporting administrative rulings. In 1897 the U.S. Supreme Court had crippled the ICC by permitting railroads to introduce new evidence in federal court when contesting the commission’s request for an injunction. By the 1920s judges had rejected the “de novo review” of most facts and upheld agencies’ findings whenever backed by substantial evidence in the record, even though the judges themselves would have decided the matter differently if free to do so. To be sure, de novo review was not abandoned totally. In Crowell v. Benson (1932), for example, Chief Justice Charles Evans Hughes insisted that federal courts make their own determination of the facts “upon which the enforcement of the constitutional rights of the citizen depend.”4 But other judges did not apply Hughes’s “constitutional fact” doctrine widely, and soon commentators were complaining that the judiciary had abdicated in favor of the ICC, public utility commissions, and workers’ compensation commissions.

   Many other forms of administration were immune from even “substantial evidence” review on the ground that they dispensed “privileges” rather than determined “rights.” For example, unless statutes provided otherwise, courts could not interfere with administrators as they distributed pensions, deported aliens, sold public land, awarded government contracts and loans, parceled out grants-in-aid to the states, employed public workers, or decided which periodicals were eligible for the Post Office’s low-cost, “second-class” mailing privilege.

   Some observers attributed the judges’ infidelity to Dicey’s ideal to a failure of will when confronting an avalanche of administrative decisions. Others maintained that they were simply recognizing obvious and inherent differences between adjudication and administration. The judges who staffed Dicey’s “ordinary courts” were of necessity generalists. Administrators, in contrast, developed and applied the specialized expertise that modern times demanded. Courts were passive bodies that acted only when some party brought disputes before them; administrators could conduct investigations on their own initiative. Courts issued final decrees in discrete cases; administrators could continuously review prior decisions and engage in rulemaking based on knowledge acquired by their own staffs.

   Judges deferred to administrators with a reputation for employing their expertise and procedural flexibility competently and in the public interest. If they suspected that decisions were made for personal gain or to reward a political constituency, they usually found a way to vindicate the rule of law. The varying treatment that federal judges accorded agencies they trusted and those they did not can be seen in the 1920s by contrasting the ICC and the FTC. Federal judges were extremely deferential to the ICC and placed some of its “negative orders” (decisions not to proceed against the subject of a complaint) beyond judicial review. In contrast, they ran roughshod over the FTC. The U.S. Supreme Court insisted that federal judges make their own determination of what constituted “unfair methods of competition.” When intermediate federal courts reversed the FTC’s findings of facts, the Supreme Court usually affirmed, even though Congress had directed that the commission’s determinations be considered “conclusive.”

   The difference in judicial treatment turned on the great disparity in the professionalism of the two agencies’ staffs and the extent to which their procedures tracked those of the courts. The ICC had a tradition of non-partisanship dating from the appointment of its first chairman, the great Michigan judge Thomas Cooley. It had able economists and secretaries, and in 1916 its large legal staff was brought within the federal civil service. In most respects, its procedures were familiar to any courtroom lawyer, and its orders were backed up with published opinions that compared favorably with those of the courts. The FTC was another matter. From the start it was plagued by weak commissioners, selected more for their service to their party than their knowledge of business affairs. From 1925 onward, its chairman was William E. Humphrey, an outrageously partisan and pro-business Republican. Neither the commissioners nor their politically appointed lawyers paid any attention to the FTC’s small economic staff, and the commissioners gave little indication of the reasoning behind their decisions. Senators roamed the halls at will in search of commissioners to lobby.

   At the end of the 1920s, then, administration was a familiar but subordinate feature of the American state. The speed and flexibility that made it an attractive alternative to courts and legislatures also attracted the suspicions of a jealous judiciary and the unwanted attention of politicians seeking new ways to reward contributors and constituents. Many American bureaucracies had acquired administrative “capacity” – the ability to solve problems and achieve ends – but few enjoyed express judicial or legislative recognition of their “autonomy” – the ability to formulate goals and policies independently of private interests, political parties, and other arms of the state. That would be forthcoming only after an unprecedented economic crisis, a second world war, and a recasting of administrative procedure in ways that allowed lawyers greater leverage within the administrative process itself.


The legal history of the American administrative state did not deviate from the path of uncoordinated, sporadic growth on ground left unoccupied by courts and party-dominated legislatures until an economic catastrophe of unprecedented proportions hit the nation at the end of the 1920s. The stock market crash of 1929 and the ensuing downward spiral of business activity left nearly a quarter of the American workforce unemployed and elicited a wide range of proposals from reformers, universities, civic associations, private foundations, and government officials. The Republican president Herbert Hoover was cautious in sampling these wares, but his Democratic successor enthusiastically experimented with one innovative use of administration after another. Typically, new “emergency” or “alphabet” agencies were created as independent commissions to implement the proposals. The most successful agencies acquired the funds, staff, and procedures to formulate policies without returning to Congress and to obtain compliance with their orders with only occasional resorts to the courts.

   Two vast schemes of command-and-control regulation created during the first months (the First Hundred Days) of Franklin Roosevelt’s presidency showed how vital “state autonomy” was for a new agency. The National Recovery Administration (NRA) was created to reduce the overproduction of goods that was the most puzzling phase of the depression. In 1933 no profession, academic discipline, or arm of the state had the detailed knowledge of the hundreds of industries that the NRA regulated, so its administrators turned the job of drafting regulations over to “code authorities” made up of leading businessmen. In theory, the NRA’s staff was to review their work, but the staff lacked the expertise and authority to second-guess the industrial representatives. By early 1935 most observers were convinced that the legislative power Congress had delegated to a supposedly independent agency was actually being exercised by the industrialists themselves. In contrast, the principal agricultural agency of the First Hundred Days, the Agricultural Adjustment Administration (AAA), was more successful in its quest for autonomy. It attacked the problem of excess supply by paying farmers to cut back on their production of wheat, corn, cotton, tobacco, rice, hogs, and milk, with the money coming from a tax on the processors of these items. Local committees of farmers were to assist in deciding whose acreage was to be reduced and how subsidies were to be distributed, but they did so under the direction of the large and well-established extension service of the U.S. Department of Agriculture and with the assistance of experts in the country’s many land-grant universities.

   Similar success was enjoyed by the Securities and Exchange Commission (SEC), created in 1934 after a year’s experience with the regulation of the issuance of stocks and bonds by the FTC. The SEC bore a superficial resemblance to the NRA in that it asked stock dealers and exchanges to codify their best practices and relied on accountants to develop and enforce the intricate reporting requirements for each new issue of stocks and bonds. But the SEC was no rubber stamp: unusually able lawyers had drafted its organic act and served as commissioners or members of its legal staff. The agency retained a reputation for efficiency and expertise long after other New Deal agencies had slipped into quiescence. The SEC also benefited from the unusual sensitivity of securities markets to publicity. The issuance of an administrative “stop order,” which blocked an offering until some discrepancy in a company’s registration statement was resolved, could scare off investors. The damage was done long before the order could be challenged in court.

   The New Deal also produced a landmark in the history of social insurance and social provision, the Social Security Act of 1935. One part of the statute federalized the states’ mothers’ pensions, but, at the insistence of Southern Democrats, it left broad discretion to state officials. In the South, officials were careful not to let these “welfare” payments upset the domination of whites. Everywhere, recipients had to submit to intrusive, stigmatizing guidelines. The statute’s provisions for wage earners, such as unemployment insurance and old age insurance, were quite different. These “social security” payments were funded by the contributions of workers and their employers and were treated as unconditional entitlements. Old age pensions were exclusively administered by a federal Social Security Board; unemployment payments were distributed under strict guidelines set by federal officials.
