
How to co-create content moderation policies: the case of the AutSPACEs project

Published online by Cambridge University Press:  15 May 2024

Georgia Aitkenhead
Affiliation:
The Alan Turing Institute, London, UK
Susanna Fantoni
Affiliation:
Citizen Science Contributors
James Scott
Affiliation:
Citizen Science Contributors
Sophia Batchelor
Affiliation:
The Alan Turing Institute, London, UK
Helen Duncan
Affiliation:
The Alan Turing Institute, London, UK
David Llewellyn-Jones
Affiliation:
The Alan Turing Institute, London, UK
Callum Mole
Affiliation:
The Alan Turing Institute, London, UK
Otis Smith
Affiliation:
The Alan Turing Institute, London, UK
Martin Stoffel
Affiliation:
The Alan Turing Institute, London, UK
Robin Taylor
Affiliation:
Citizen Science Contributors
Kirstie Whitaker
Affiliation:
The Alan Turing Institute, London, UK
Bastian Greshake Tzovaras*
Affiliation:
The Alan Turing Institute, London, UK
Corresponding author: Bastian Greshake Tzovaras; Email: bgreshaketzovaras@turing.ac.uk

Abstract

The moderation of user-generated content on online platforms remains a key means of protecting people online, but it is also a perpetual challenge, as the appropriateness of content moderation guidelines depends on the online community they aim to govern. This challenge affects marginalized groups in particular: they experience online abuse more frequently, yet also more often end up being falsely targeted by content-moderation guidelines. While there have been calls for democratic, community-led moderation, there has so far been little research into how to implement such approaches. Here, we present the co-creation of content moderation strategies with the users of an online platform to address some of these challenges. Within the context of AutSPACEs, an online citizen science platform that aims to allow autistic people to share their own sensory processing experiences publicly, we used a community-based and participatory approach to co-design a content moderation solution that fits the preferences, priorities, and needs of its autistic user community. We outline how this approach helped us discover context-specific moderation dilemmas around participant safety and well-being, and how we addressed them. The resulting trade-offs led to a moderation design that differs from that of more general social networks in aspects such as how to contribute, when to moderate, and what to moderate. While these dilemmas, processes, and solutions are specific to the context of AutSPACEs, we highlight how the co-design approach itself could be applied by other communities to uncover challenges and help other online spaces embed safety and empowerment.

Information

Type
Translational Article
Creative Commons
Creative Commons License - CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press

Figure 1. The different stages of the moderation co-creation for AutSPACEs.


Figure 2. The content moderation workflow from submitting an experience to publication. Depending on the content, individual experiences are assigned one of three labels: (1) Green, for experiences that are unproblematic. (2) Red, for experiences that are unacceptable. (3) Yellow, for experiences that do not cross into the unacceptable but might be distressing or upsetting.

Supplementary material: File

Aitkenhead et al. supplementary material (File, 104.3 KB)