- investigate how public relations professionals can best manage social media crises of high interest to mainstream media
- explore the growing importance of social media as a source of news for mainstream media organisations and the consequences for public relations professionals when social media generates ‘bad news’ stories for their organisations
- examine journalists’ news values and news routines and the importance of understanding how they can deepen or dissolve a social media crisis
- outline strategies to deal with a social media crisis and engage effectively with mainstream media.
Introduction
Organisations cannot forecast when an angry Facebook fan’s post will spark a national debate in mainstream media, or when a deluge of negative comments about a new corporate video on YouTube will catapult it onto breakfast television. However, being able to spot news value in an online forum and knowing how and why the news media might respond can help public relations professionals to deal effectively with such crises. This involves understanding and respecting the perspective of the journalist (Doorley & Garcia, 2011) and the norms and values that guide their work. Developing good relationships with journalists, understanding news cycles and having a proactive public relations plan are some of the important principles for crisis management (Regester & Larkin, 2008), and they help to ensure organisations are prepared to deal with these emerging scenarios in a new media landscape. How well a social media crisis is handled will define people’s perceptions of the brand, company or organisation. This chapter explores these issues through the case of Australian mass market retailer Target’s 2012 media crisis over its fashions for girls aged 7–14. The controversy was sparked when a mother posted a message on the company’s Facebook page asking it to sell some clothes that didn’t make young girls ‘look like tramps’. The case offers lessons on the importance of monitoring social media and being prepared to step in and speak up in order to steer the public conversation that now takes place across multiple platforms.
The analogy of the boiled frog is often invoked by scholars dealing with the concepts of risk, issues and crisis communication. A frog dropped into a saucepan of boiling water quickly jumps out. Place a cold-blooded frog in a saucepan of cold water, however, and slowly warm the water to boiling point, and the frog becomes accustomed to the heat and finally succumbs. Many organisations react quickly (if often chaotically) to an emergency, but in a familiar environment where they view the potential risk as too remote to damage them, they tend to ignore it – and in some cases, succumb.
The tale is a useful illustration as we examine the differences, similarities and interconnections of risk, issues and crisis communication, and how each has come to be considered an area of practice in the profession of public relations (PR). This text examines these areas and assists the reader in working through questions that address their respective and combined challenges. This chapter also discusses the synergies between the three areas; as you read the case studies in this text, you should consider how each element operates both collectively and individually. They can co-exist and they can stand alone; and while the evidence of one may be patently clear, it is hoped that after studying these cases you will be able to identify the missing elements and consider what might have been had a different strategy been employed.
The concepts of personhood and human dignity are widely used in contemporary healthcare ethics. This chapter provides a brief overview of how the concept of human dignity came to be so important in healthcare ethics, and examines how the concept’s widespread use and relationship to the concept of personhood have led to problems regarding its meaning and relevance. A practical solution is then presented.
The rise of the concept of human dignity in healthcare ethics
The word dignity is derived from the Latin dignus, which means worthy. Since dignity refers to worth, human dignity refers to the worth of the human. What makes the concept of human dignity important for ethics is that, unlike the dignity of a queen or the dignitaries at an awards ceremony, which expresses the worth or status of particular human individuals in relation to others, human dignity is meant to express a worth that is equally shared by all human individuals. It is not meant to be dependent upon their social status, economic wealth, race, gender or anything else. Moreover, it is meant to affirm a worth beyond price. Human individuals are said to be moral goods or ends in themselves, not merely good or useful as means to achieving other ends.
If all the world hated you and believed you wicked,
while your own conscience approved of you and absolved
you from guilt, you would not be without friends.
– Charlotte Brontë, Jane Eyre
Conscience is widely recognised as a universal, yet highly personal, human phenomenon. In recent times, however, the role of conscience in public and professional life has increasingly been challenged. This chapter explores whether conscience is indeed the ‘friend’ or ‘foe’ of healthcare professionals and all those who seek their services. It proposes a definition of conscience that is grounded in traditional ethics, and introduces the reader to the concept and practice of conscientious objection. This is followed by an analysis of challenges to the legitimacy of conscientious objection in a contemporary healthcare system, and a discussion of how the case for or against conscientious objection ultimately depends on competing models of healthcare and professionalism. The chapter concludes with proposals for how conscientious objection might best be accommodated in a contemporary healthcare setting.
What is conscience?
Most people – including most scholars – presume that conscience exists, and that its function is to alert us to a potential conflict between values and to indicate which values should guide our choices (Morton & Kirkwood, 2009).
Studying any discipline is worthwhile simply for the knowledge it brings and the skills in reasoning that can be learned and honed. For healthcare professionals, study – formal and informal – is part of the vocation: competence and excellence are not benchmarks to be achieved once in a lifetime and then relegated to the shelves, as one might a sporting trophy. Rather, the health professions demand a commitment to lifelong learning. Part of that professional commitment is an awareness of and engagement with the ethics of health and healthcare. The skills of this philosophical discipline are just as vital as clinical skills and the various dimensions of physiological and psychological knowledge that are essential for the work of a healthcare professional.
Why ethics?
Ethics has been an important sub-set of philosophy since the time of the ancient Greeks. For those early philosophers, there were three big questions to be answered by every human being and every society:
What does it mean to be or to exist?
How do we know – what is knowledge and how do we go about knowing?
How should we live – what is a good life?
It is the third of these questions that is the core concern of this book.
In her book Feminist Politics and Human Nature, Alison Jaggar (1983) argues that, in some form or another, feminism has always existed. For as long as women have experienced social, political or cultural subordination, there have been either individual or communal women who have resisted this reduction in status. Over the last two centuries, a visible and organised feminist movement has emerged. This chapter will present a historical timeline of feminist theory that will include an identification of several proto-feminists, and explore the social, cultural and ethical foundations of contemporary feminist theory. The three main ‘waves’ of feminism will be analysed, key writers and activists in these areas identified, and some workable definitions of relevant broad terms presented. Emancipation, suffragettes, radical feminism, liberation movement, care ethics and global feminism are fundamental terms that will be explored in this chapter. It will be argued that contemporary feminist theory is predominantly a Western construct, and that for application in non-Western, indigenous and culturally and linguistically diverse cultures, theoretical adaptation should be considered.
In the final section of the chapter, an application of one element of contemporary feminist theory will be presented in connection with healthcare ethics: relational autonomy. A case study involving the care of a woman with an unplanned pregnancy will be highlighted to assist with the application of relational autonomy and the connection with healthcare ethical reasoning.
‘My Hippocratic oath tells me to cut a gangrenous appendix out of the human body. The Jews are the gangrenous appendix of mankind. That’s why I cut them out.’ So explained one of the doctors working in a Nazi death camp during World War II. He thought the procedures he carried out at Auschwitz were justifiable on medical grounds. Killing Jews, Gypsies, Catholics, the mentally and physically disabled, homosexuals and other vulnerable people by various methods – including starvation, lethal injection or poisoning with carbon monoxide – furthered Hitler’s aim of cleansing the gene pool and ensuring the Aryan race’s dominion over all other races (Cook, 2014, citing Lifton, 2000: 232).
Unfortunately, the Allied Powers do not escape culpability for the misuse of science and medicine. In recent years, reports have emerged of unethical medical experiments conducted by the Americans in Guatemala after World War II. Dr John Cutler, who eventually rose to be US Assistant Surgeon-General, led this project and was associated with a parallel one in Tuskegee, Alabama. To study the course of sexually transmitted diseases and the effect of penicillin, the US Public Health Service deliberately infected soldiers, prostitutes, prisoners and mental patients with syphilis without their knowledge or consent.
At the heart of many contemporary healthcare ethical debates is the question of when human life begins. In February 2012, the prestigious Journal of Medical Ethics published an article titled ‘After-birth abortion: Why should the baby live?’ by Australian ethicists Alberto Giubilini and Francesca Minerva (2012). This controversial article affirmed the authors’ position that ‘after-birth abortion’ should be morally permissible in all cases where abortion is also permissible, including cases where the newborn is not disabled. Following its publication, the majority of international responses to the article vehemently opposed the moral premises articulated by the authors, which essentially supported infanticide. However, such a controversial premise is not new in historical or ethical discourse. Infanticide – the intentional killing of an infant by the mother – has been a part of many cultural practices over the centuries, from ancient Sparta to the more recent practice of intentionally killing newborn infants due to economic, cultural or social pressures placed on mothers and families. In 1988, Australian ethicists Helga Kuhse and Peter Singer also presented the argument that infanticide may be ethically justifiable in their book Should the Baby Live? The Problem of Handicapped Infants. A theological, philosophical and scientific analysis of the question of when human life begins can present some accord, but with contemporary ethical discourse from healthcare ethicists such as Kuhse and Singer (1988, 1990), Giubilini and Minerva (2012) and Julian Savulescu, there is an increasing divergence on what was once a universally accepted norm: that human life has moral inviolability from conception. This chapter will explore this issue by presenting some of the contemporary views, including theological, philosophical and scientific perspectives, as well as the cultural and historical influences on the question of when human life begins.
This discussion will focus in particular on a Christian anthropological framework, with an emphasis on the moral teachings of the Catholic Church.
Empathy for the patient is a key attribute that is much prized in the healthcare professional. It is much harder, however, to specify what empathy is. It is to be distinguished from sympathy and compassion, as well as from sentimentality. This chapter will explore the nature of empathy, drawing on the work of Edith Stein (1989). Care is central to healthcare and, in light of this discussion of empathy, the chapter will also examine the ethics of care, which argues for the importance of contextualising the relationships between caregivers and care-receivers.
The great advances in medical technology and the discovery of new, powerful drugs in recent times have led to a belief that almost all diseases and health problems can be treated efficaciously. We are told that the frontiers of what is medically possible are continually being pushed back, and that medical advances in areas such as nanotechnology and stem cell research promise to be able to treat currently incurable diseases. These advances are truly exciting, but healthcare is much more than the treatment of bodies that have succumbed to disease, or suffer from some disability. It is a truism to say that healthcare professionals do not treat bodies, but persons. Human beings who are suffering from health problems need much more than simply attention paid to their physical needs; they also need comfort and reassurance. Furthermore, it is also recognised that they need to have their psychological and spiritual needs met. In the healthcare setting, these needs are recognised through the provision of counselling and chaplaincy services. This, however, does not mean that the individual healthcare professional – such as a doctor or nurse – is relieved of a responsibility to treat patients as people by an assumption that the comforting and reassurance are the jobs of other members of the healthcare team.
Providing good care – medical and nursing – to people whose lives are drawing to an end continues to pose challenges for Australians. The challenges include both practical and ethical ones. Among the practical issues are those to do with how to ensure access to good end-of-life care – in particular, palliative care – to everyone to whom it is owed. Among the ethical concerns are questions to do with how best to understand what is owed to whom. This discussion will focus on the ethical issues. Although the writer of the Hippocratic Oath insists that it is part of a doctor’s duty to keep his patients free from injustices they can do to themselves, justice is generally understood to be what is owed to others. The idea of what is owed to others can be understood in a range of ways – from the relatively specific idea of fulfilling responsibilities defined by prior undertakings to the relatively inclusive idea of acting uprightly in any actions that have a bearing on others. In the former sense, justice is often referred to as ‘fairness’, and in the latter sense it is often a label for the whole of virtue. In the first section of this chapter, I outline four ways in which justice as what is owed to others is understood: a ‘utilitarian’ understanding, a ‘libertarian’ understanding, an ‘egalitarian’ understanding and a ‘pluralist’ understanding. I will briefly indicate what I take to be the strengths and weaknesses of each approach, explaining why I think the pluralist view is the most reasonable. I will also touch on an independent but related issue: how each approach tends to view the other – the person to whom justice is owed. In the second section, I will assume a ‘pluralist’ understanding of justice, and spell out one of its implications for the treatment and care of people at the end of their lives so that they have genuine choices in treatment and forms of care.
Utilitarian thought in the Western world could be said to originate with Jeremy Bentham, an eighteenth-century philosopher and social reformer. Bentham introduced the principle of acting for the greatest good for the greatest number, which is at the heart of utilitarianism. Utilitarianism, the best-known form of consequentialism, comes in various versions, such as act, rule and preference utilitarianism. In its modern form, utilitarianism proposes that an action will be morally good if the good outcomes of the action outweigh the bad. This is irrespective of the nature of the action itself, so achieving the desired consequences is at the heart of judging the moral nature of the action. Its appeal is due to the conception of weighing up courses of action and deciding to pursue the one that results in the greatest benefits to all. However, it has its weaknesses, and these will also be considered in this chapter.
There is something appealingly intuitive about consequentialism and its offshoot, utilitarianism. Our daily lives are filled with choices, and unless we are prepared to simply act at random, in order to make the best choice we need to be able to evaluate them all. While in some cases there may be little to distinguish between choices, we normally try to choose the option that affords us most satisfaction. We choose a new car, a new phone or a new job by first working out a set of criteria that will help us to make up our minds and then by applying these to the available choices. For example, in buying a car we might compare different models on engine size, fuel economy, reliability and other relevant factors. In the end, our deliberations lead us to buy the particular car that best satisfies the set of criteria we have chosen. It would be odd if we chose a car we did not like or that we knew had major faults, such as bad brakes or poor steering. Our deliberation is directed towards weighing up our options in terms of good or optimal consequences.
Virtue ethics is one of the major normative ethical theories, and has a very long history, having its origins, among other places, in the eudemonic ethics of Aristotle. Virtue ethics begins by considering the question of what is meant by the good for human beings and answers that it is to live a life of moral virtue. For Aristotle (1976), four cardinal moral virtues were required: courage, temperance, justice and prudence. Added to these in the Christian era are the theological virtues of faith, hope and charity, which also make a distinctive contribution to ethical decision-making in the healthcare context.
Character and virtue
The beginnings of virtue ethics can be found in reflecting on the kind of healthcare practitioner by whom we would like to be treated. If we are facing delicate brain surgery to alleviate a particular condition, we would want a good brain surgeon, not a poor one. Similarly, when we are later returned to the ward after a successful operation, we would want to be looked after by a competent nurse, rather than one who is incapable of performing their duties. This is true in every area of life. No one wants their MRI scans interpreted by an incompetent physician or to be represented by an inept lawyer. The slow lane at the checkout at the supermarket, where the checkout operator is bumbling and unskilled, is also to be avoided. In all these cases – as in every situation where we rely on the skills and competencies of others – we want those providing various services not simply to have some minimal competence, but to carry out their roles or activities well.
This chapter begins with the question ‘What is ethics?’ It is not just a list of prohibitions, but rather a reflection on what we consider to be good or bad. It involves an evaluative and disciplined study of what we regard as morally good and what we see as morally bad. This is required if we are to decide how to act. Moral judgements, it will be shown in this chapter, are not simply based on what we think, but on ethical theories. There are two kinds of ethical theories: meta-ethical theories and normative ethical theories. The former are about the kinds of ways in which we can think about the nature of ethical principles and judgements, such as whether they are conventional or universal. Normative ethical theories provide a framework of moral principles that can help us decide whether or not an action is morally right.
Introduction: What is ethics?
Ethics, as it is commonly understood, is connected with various bans against wrong-doing – particularly in business or in the professions. In people’s private lives, it is seen as demanding that, as far as possible, the actions someone chooses to perform have minimal effect on others around them – in some sense, that what people do is morally right. According to such a view, ethics is a means of regulating human behaviour, and so acts as a constraint on human action. This is, in fact, a very simplistic understanding of ethics. Ethics is not about any of the following:
prohibitions concerned with sex
ideal systems, such as codes of behaviour, which are all very noble in theory, but no good in practice
something intelligible only in the context of religion, or
personal likes and dislikes.
In other words, it is neither relative to a particular time, culture or place, nor is it merely the expression of subjective wants and desires (Singer, 2011).
One of the most vexed questions in healthcare involves patient autonomy, and the extent to which patients are able to make decisions about their healthcare. It may not always be the case that patients will be able to understand the information they have been given and decide what advice regarding their treatment they should follow. The question of autonomy, and the extent to which patients are able to give informed consent to the treatment being recommended to them, becomes more difficult as their health deteriorates. These issues will be explored in this chapter.
The idea that patients need to be asked to consent to the medical and healthcare treatment that is being proposed to them is reasonably modern. After all, the healthcare practitioner is the professional, the person with the expertise to decide what treatments are needed by the patient in order to return to health. If a patient needs an operation to remove a tumour, they are expected to accept the surgeon’s advice and have the operation. The Hippocratic Oath states nothing about asking patients for their informed consent before the physician prescribes medication or performs surgery. Despite this, it has generally become accepted that, because patients are autonomous, self-determining human persons, they need to be fully informed of the treatment options available to them, and to decide, having assessed the available information and taken appropriate advice, what treatments they will undertake. This is the recognition that a pathway to health will involve the active participation of patients – that is, it is not simply a matter of patients passively receiving treatment. The whole person needs to be involved in recovery to full health.