Impersonation and then identity theft in America emerged in the legal space between a civil system with a high tolerance for market risk and losses incurred by impostors, and a later-developing criminal system preoccupied with fraud or forgery against the government. Negotiable instruments, generally paper checks, borrowed from seventeenth-century England, enabled a geographically far-flung commercial system of paper-based but impersonal exchanges at a time before widespread availability of centrally-issued currency or regulated banks. By assigning loss rather than catching criminals, the “impostor rule” made and continues to make transactions with negotiable instruments valid even if fraudulent. This large body of commercial law has stood essentially unchanged for three hundred years and has facilitated a system rife with impersonation which criminal and federal laws did not address until the late twentieth century. English common law, American legal treatises, court cases, law review articles, and internal debates behind the Uniform Commercial Code tell the story of a legal system at the service of commerce through the unimpeded transfer of paper payments. Combining the fields of legal history and criminal justice with the approaches of emerging research in both identification and paperwork studies, this article explains the ongoing policy problems of identity theft.
In recent years there has been growing acknowledgement of the place of workhouses within the range of institutional provision for mentally disordered people in nineteenth-century England. This article explores the situation in Bristol, where an entrenched workhouse-based model was retained for an extended period in the face of mounting external ideological and political pressures to provide a proper lunatic asylum. It signified a contest between the modernising, reformist inclinations of central state agencies and local bodies seeking to retain their freedom of action. The conflict exposed contrasting conceptions regarding the nature of services to which the insane poor were entitled.
Bristol pioneered establishment of a central workhouse under the old Poor Law; ‘St Peter’s Hospital’ was opened in 1698. As a multi-purpose welfare institution its clientele included ‘lunatics’ and ‘idiots’, for whom there was specific accommodation from before the 1760s. Despite an unhealthy city centre location and crowded, dilapidated buildings, the enterprising Bristol authorities secured St Peter’s Hospital’s designation as a county lunatic asylum in 1823. Its many deficiencies brought condemnation in the national survey of provision for the insane in 1844. In the period following the key lunacy legislation of 1845, the Home Office and Commissioners in Lunacy demanded the replacement of the putative lunatic asylum within Bristol’s workhouse by a new borough asylum outside the city. The Bristol authorities resisted stoutly for several years, but were eventually forced to succumb and adopt the prescribed model of institutional care for the pauper insane.
Within the colonial setting of the Belgian Congo, the process of cutting the body, whether living or dead, lent itself to conflation with cannibalism and other fantastic consumption stories by both Congolese and Belgian observers. In part this was due to the instability of the meaning of the human body and the human corpse in the colonial setting. This essay maps out different views of the cadaver and personhood through medical technologies of opening the body in the Belgian Congo. The attempt to impose a specific reading of the human body on the Congolese populations through anatomy and related Western medical disciplines was unsuccessful. Ultimately, practices such as surgery and autopsy were reinterpreted and reshaped in the colonial context, as were the definitions of social and medical death. By examining the conflicts that arose around medical technologies of cutting human flesh, this essay traces multiple parallel narratives on acceptable use and representation of the human body (Congolese or Belgian) beyond its medical assignation.
If a German couple wanted to get married today, they would have to consult the German Civil Code, the Bürgerliches Gesetzbuch or BGB, for information on how to do so. From the BGB, they would learn that—provided that they are competent, more than 18 years of age, not related in a direct line or (half-) siblings, and not currently married—they can get married before the Standesbeamter or civil registrar. They would also learn that should they want a divorce in the future, any proceedings would have to be brought in the family court, which is a special division within the German civil courts of first instance, and that the judge hearing their case would be required to consider whether their marriage has “failed”: a state of affairs that that judge would be legally compelled to presume if one or both of them wanted the divorce (and they had lived apart for a prescribed number of years).
In 2014 the World Health Organization (WHO) was widely criticised for failing to anticipate that an outbreak of Ebola in a remote forested region of south-eastern Guinea would trigger a public health emergency of international concern (pheic). In explaining the WHO’s failure, critics have pointed to structural restraints on the United Nations organisation and a leadership ‘vacuum’ in Geneva, among other factors. This paper takes a different approach. Drawing on internal WHO documents and interviews with key actors in the epidemic response, I argue that the WHO’s failure is better understood as a consequence of Ebola’s shifting medical identity and of triage systems for managing emerging infectious disease (EID) risks. Focusing on the discursive and non-discursive practices that produced Ebola as a ‘problem’ for global health security, I argue that by 2014 Ebola was no longer regarded as a paradigmatic EID and potential biothreat so much as a neglected tropical disease. The result was to relegate Ebola to the fringes of biosecurity concerns just at the moment when the virus was crossing international borders in West Africa and triggering large urban outbreaks for the first time. Ebola’s fluctuating medical identity also helps explain the prominence of fear and rumours during the epidemic and social resistance to Ebola control measures. Contrasting the WHO’s delay over declaring a pheic in 2014 with its rapid declaration of pheics in relation to H1N1 swine flu in 2009 and polio in 2014, I conclude that such ‘missed alarms’ may be an inescapable consequence of pandemic preparedness systems that seek to rationalise responses to the emergence of new diseases.
This paper explores the social, medical, institutional and enumerative histories of blindness in British India from 1850 to 1950. It begins by tracing the contours and causes of blindness using census records, and then outlines how colonial physicians and observers ascribed both infectious aetiologies and social pathologies to blindness. Blindness was often interpreted as the inevitable consequence of South Asian ignorance, superstition and backwardness. This paper also explores the social worlds of the Blind, with a particular focus on the figure of the blind beggar. This paper further interrogates missionary discourse on ‘Indian’ blindness and outlines how blindness was a metaphor for the perceived civilisational inferiority and religious failings of South Asian peoples. This paper also describes the introduction of institutions for the Blind in addition to the introduction of Braille and Moon technologies.
In recent decades, historians of English psychiatry have shifted their major concerns away from asylums and psychiatrists in the nineteenth century. This is also seen in the studies of twentieth-century psychiatry where historians have debated the rise of psychology, eugenics and community care. This shift in interest, however, does not indicate that English psychiatrists became passive and unimportant actors in the last century. In fact, they promoted Lunacy Law reform for a less asylum-dependent mode of psychiatry, with a strong emphasis on professional development. This paper illustrates the historical dynamics around the professional development of English psychiatry by employing Andrew Abbott’s concept of professional development. Abbott redefines professional development as arising from both abstraction of professional knowledge and competition regarding professional jurisdiction. A profession, he suggests, develops through continuous re-formation of its occupational structure, mode of practice and political language in competing with other professional and non-professional forces. In early twentieth-century England, psychiatrists promoted professional development by framing political discourse, conducting a daily trade and promoting new legislation to defend their professional jurisdiction. This professional development story began with the Lunacy Act of 1890, which caused a professional crisis in psychiatry and led to inter-professional competition with non-psychiatric medical service providers. To this end, psychiatrists devised a new political rhetoric, ‘early treatment of mental disorder’, in their professional interests and succeeded in enacting the Mental Treatment Act of 1930, which re-instated psychiatrists as masters of English psychiatry.
The influence of a range of actors is discernible in nutrition projects during the period after the Second World War in the South Pacific. Influences include: international trends in nutritional science; changing ideas within the British establishment about state responsibility for the welfare of its citizens and the responsibility of the British Empire for its subjects; and the mixture of outside scrutiny and support for projects from post-war international and multi-governmental organisations, such as the South Pacific Commission. Nutrition research and projects conducted in Fiji for the colonial South Pacific Health Service and the colonial government also sought to address territory-specific socio-political issues, especially Fiji’s complex ethnic politics. This study examines the subtle ways in which nutrition studies and policies reflected and reinforced these wider socio-political trends. It suggests that historians should approach health research and policy as a patchwork of territorial, international, and regional ideas and priorities, rather than looking for a single causality.
Privately owned, for-profit, seasonal, walled-off venues that boasted a variety of flora and fauna in addition to other services like stages, buildings, thespians, musicians, exhibits and victuals, New York City's Vauxhall and Ranelagh commercial pleasure gardens blossomed by the mid-eighteenth century. An in-depth analysis of these pleasure gardens not only contributes to how we understand colonists’ endeavours to support a more complete leisure sector, but also reveals the nuanced nature of the ‘rural vs. urban’ or ‘wilderness vs. civilization’ dyads. Ultimately, urban colonists hoped to embrace the rural nature of their surroundings to make their ‘cities in the wilderness’ more accessible, healthy places.