
Do New Technologies Change Our Moral Norms?

Published online by Cambridge University Press:  02 January 2026

Robin Hillenbrink*
University of Twente, Enschede, Netherlands

Guido Löhr
Vrije Universiteit, Amsterdam, Netherlands

*Corresponding author: Robin Hillenbrink; Email: r.hillenbrink@utwente.nl

Abstract

New technologies don’t just change how we act – they can also reshape our moral beliefs, practices, norms and values. Technological innovations like the mechanical ventilator, autonomous weapons and gene editing challenge existing concepts, such as death, responsibility and health, and create situations our conceptual frameworks can no longer fully explain. As these technologies disrupt familiar ideas and beliefs, they push us to rethink our ethical norms and values. Understanding how technology drives these shifts is necessary if we want to respond thoughtfully to the moral challenges of the future.

Information

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of The Royal Institute of Philosophy.

Introduction

In an interview with The Guardian, Meta’s Mark Zuckerberg said that the rise of social media changed our privacy norms. ‘People have really gotten comfortable not only sharing more information and different kinds, but more openly and with more people.’

Assuming that people’s expectations about their rights to their data have changed, should we believe Zuckerberg about the cause of this moral change? Was it his new technology that changed our moral norms? Or did we and our use of this technology cause the change?

Perhaps a bit of both?

Morality and Concepts

That technology changes our social practices is nothing new or controversial. Now that we have smartphones, our practices are, of course, different from what they were before we could be online everywhere we go.

Several philosophers, however, have recently argued that technology has a more profound effect on us: technologies have the power to change our moral norms by changing, or putting pressure on, our moral concepts.

Typical moral concepts are good and evil, right and wrong, obligation and prohibition. This sounds a bit abstract, but essentially it is about action – what we can and can’t do in a society. When you say that someone is evil, this signals to others not to interact with them. To declare an action immoral or wrong means ‘don’t do it!’ or ‘we don’t do that here’.

So, moral concepts are concepts that determine what we consider to be moral or immoral actions. They are the measure of good behaviour in a society. An action that is moral might be permissible or even required. But moral concepts go beyond good and evil. To declare an action a violation of privacy means we are not entitled to do it. To say that certain actions are required in a friendship (like caring for each other) means that this is what you ought to do.

So, whether something is to be classified under the concept of privacy or friendship (or not) has moral implications. You are morally required to respect others’ privacy, and the label of a close friendship demands that you treat your friend in ways you are not required to treat a stranger.

Mechanisms of Moral Change

New technologies can put pressure on our morality and moral concepts. If a new tool or technology becomes available, new options for action emerge that are not yet morally regulated.

We might not know whether checking your ex’s Facebook pictures with a fake account counts as spying or as curiosity. If the former, then this seems morally bad. If the latter, it is morally permissible.

This process of moral disruption – situations where it is not clear whether something is moral or not – can be illustrated by numerous examples.

Mechanical Ventilators and the Concept of Death

One of the most prominent examples of the impact of technology on our conceptual schemes and consequently on our morality as a whole is the introduction of mechanical ventilators.

A mechanical ventilator is a medical device that supports breathing by administering air when a patient’s lungs cannot do so on their own.

By the 1960s, some patients with severe brain injury could retain their respiratory function indefinitely using mechanical ventilation. However, determining the medical and moral status of these patients was a complex matter: while they showed no significant brain activity, they retained the ability to breathe with the support of the ventilator.

The lack of a fitting concept to attribute to these persons confronted doctors with classificatory uncertainty: were these patients to be considered dead or alive? What concept should be applied to understand their ambiguous status?

Autonomous Weapons and the Concept of Accountability

Autonomous weapons, also known as Lethal Autonomous Weapon Systems (LAWS) or killer robots, are military systems that can independently select and engage targets without direct human intervention. They use Artificial Intelligence (AI), sensors, and machine learning to operate without needing real-time control by humans.

With the development of LAWS, machines are introduced that are capable of making independent life-and-death decisions without the need for direct human control, intervention, or moral judgement. These new technological entities create new possibilities, such as the (partial) removal of human judgement from lethal actions, and a breakdown of moral and legal responsibility for the harm done.

Because these technologies operate with varying degrees of independence, it is difficult to attribute responsibility and accountability for their actions and mistakes.


For example, if a drone strikes the wrong target, who is accountable? Is it the programmer who designed the software, the military commander who deployed the system, the manufacturer who produced it, or the system itself? Existing frameworks, such as the laws of armed conflict, assume human decision-making and clear chains of responsibility – assumptions that autonomous systems disrupt.

In academic debates, this is called the responsibility gap – situations where it is unclear who should be held morally or legally responsible for the actions or outcomes of technological systems.

CRISPR and the Concepts of Health and Life

CRISPR-Cas9 introduces the possibility of genetically modifying humans, and with it the prospect of modified humans as a new type of entity.

The technology opens up previously unimaginable possibilities – such as altering genes to cure genetic diseases, enhancing human traits, or even editing the genes of future generations.

For instance, questions multiply regarding the distinction between therapy and enhancement, and between the natural and the artificial. Choosing or adapting the interpretations of concepts like ‘treatment’ and ‘disability’ also becomes more urgent. Should the concepts of ‘treatment’ or ‘healthcare’ be applied to gene editing, or should we categorize it as ‘enhancement’ from the outset?

Or perhaps a new concept is needed altogether to explain the place of gene editing in treatment and healthcare. Making such conceptual decisions will in part determine whether people have a right to gene editing (for their future children), who has that right, and under which circumstances. Similarly, the concept of ‘human nature’ itself is further destabilized – what does it mean to be human, and to talk about human nature, if our genetic makeup can be designed and modified at will?

Our current concepts appear inadequate for addressing such ontological and ethical questions.

Learning from the Past

In the case of autonomous weapons, the complexity of these scenarios, and the emergence of responsibility gaps, have led to debates about whether existing concepts of accountability are applicable and sufficient, or whether entirely new concepts are needed.

Some scholars propose ‘distributed accountability’, which acknowledges that accountability may be shared among multiple actors and the autonomous system itself. This case illustrates the conceptual challenges posed by the technology: it renders concepts such as accountability and responsibility ambiguous and blurs the line between human and machine agency.


However, without a clear way to categorize these systems, and without the right concepts to understand their role in decisions about lethal force, ethical questions about accountability cannot be answered.

The case of the mechanical ventilator shows how such a conceptual gap was resolved in the past. There, the lack of a suitable concept sparked moral deliberation, because of the moral weight and the moral and legal implications that each choice would bring with it. For instance, if these patients were considered fully dead, it could be argued that their organs could be used for transplantation. In a world with long organ donation waiting lists and limited organs available, such a decision would have a significant impact. However, if the patients were considered to be alive in some way, such actions would be impermissible.

The example of the mechanical ventilator illustrates the necessity of fitting concepts, and how technology can create situations in which our existing conceptual scheme is no longer sufficient.

The ambiguous state of mechanical ventilator patients was resolved by introducing a new concept: ‘brain death’. With this new concept, these patients could be categorized, and physicians and lawmakers could respond accordingly.

The response to this new concept and new categorization of specific patients led to further practical moral changes, such as the moral and legal permissibility of removing the organs of brain-dead patients for transplantation – preferably with prior consent – and new beliefs about what was morally permissible in these cases.

In this example, we see a clear instance of how a technological development disrupted our conceptual scheme, made its adaptation necessary, and consequently required an adaptation of our moral beliefs, practices, norms and values.

Predicting the Future

For autonomous weapons, similar changes in our conceptual scheme need to take place to make up for the current conceptual shortcomings. This might involve introducing a concept of autonomous agency for machines, or adapting existing concepts of moral responsibility, accountability and culpability to accommodate non-human actors.

If such conceptual changes take hold, they will inevitably lead to shifts in moral norms and practices surrounding war and conflict. Guided by a new understanding through concepts that better describe the situation at hand, we may develop ethical norms for autonomous systems and adapt international law to account for machine autonomy, and public beliefs may shift about the morality of war, about human responsibility and accountability, and about the acceptable limits of technology in conflict.

In the case of gene editing, too, our conceptual scheme will most likely have to evolve to address current questions and future challenges. New concepts such as ‘genetic justice’ may be needed, and existing concepts like ‘treatment’ and ‘health’ might need to be expanded or redefined. As such conceptual changes take hold, other elements of morality will likely be reshaped, such as ethical norms, beliefs and practices.

New ethical standards for genetic modification will emerge, ethical and public views on health and enhancement will shift, and the boundary between medical treatment and genetic design may become more blurred. In this way, morality can change through necessary technology-induced adaptations to our conceptual schemes.

Conclusion

When new technologies introduce unprecedented entities, actions or possibilities, our existing concepts can become inadequate to fully understand and respond to these changes, and the (moral) questions that they give rise to.

These conceptual disruptions, as seen in the cases of the mechanical ventilator, autonomous weapons and human gene editing, create moral uncertainties that necessitate the adaptation of our conceptual scheme in one way or another. Through these conceptual changes – whether the introduction of new concepts like ‘brain death’, or the adaptation of existing concepts like ‘accountability’ or ‘treatment’ – our moral beliefs, norms, values and practices change.

Thus, the case studies we have discussed illustrate how technological developments can reshape our morality by challenging existing moral and morally significant concepts, and therefore our general understanding of ethical questions. As technology continues to advance, it will undoubtedly generate new conceptual and ethical challenges and demand further conceptual adaptations from us.