
The Right Accounting of Wrongs: Examining Temporal Changes to Human Rights Monitoring and Reporting

Published online by Cambridge University Press:  07 February 2022

Daniel Arnon*
Affiliation:
University of Arizona
Peter Haschke
Affiliation:
University of North Carolina at Asheville and
Baekkwan Park
Affiliation:
East Carolina University
*Corresponding author. Email: danielarnon@email.arizona.edu

Abstract

Scholars contend that the reason for stasis in human rights measures is a biased measurement process, rather than stagnating human rights practices. We argue that bias may be introduced as part of the compilation of the human rights reports that serve as the foundation of human rights measures. An additional source of potential bias may be human coders, who translate human rights reports into human rights scores. We first test for biases via a machine-learning approach using natural language processing and find substantial evidence of bias in human rights scores. We then present findings of an experiment on the coders of human rights reports to assess whether potential changes in the coding procedures or interpretation of coding rules affect scores over time. We find no evidence of coder bias and conclude that human rights measures have changed over time and that bias is introduced as part of monitoring and reporting.

Information

Type
Article
Creative Commons
Creative Commons License - CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
Copyright © The Author(s), 2022. Published by Cambridge University Press

Figure 1. Sources of bias in each stage.


Table 1. Data (texts) and labels


Figure 2. Fixed-rolling-window forecasting.


Figure 3. Accuracy and bias across algorithms over time. Notes: Shown are the measured in- and out-of-window accuracy and bias for 29 machine-learning models. The gray lines represent the 29 individual models, and the solid black line is the average across all models. Left panels use Section 1 of the SD reports on PTS-SD scores; middle panels use AI reports on PTS-AI scores; right panels use only Section 1 of the SD reports on CIRI aggregate scores. The top panels for out-of-window accuracy show an increasing slope, indicating that standards have changed for all of these measures. Note that each year in the plots represents the midpoint of the ten-year training window.
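The fixed-rolling-window setup behind Figures 2 and 3 can be sketched as follows. This is a minimal illustration of the general technique (a ten-year training window sliding forward one year, with the following year held out for out-of-window evaluation); the year range and window width here are assumptions for demonstration, not the authors' exact configuration.

```python
def rolling_windows(years, width=10):
    """Yield (train_years, test_year) pairs: a fixed-width training
    window slides forward one year at a time, and the year immediately
    after the window is held out for out-of-window evaluation."""
    for start in range(len(years) - width):
        yield years[start:start + width], years[start + width]

# Illustrative year range; the study's actual coverage may differ.
years = list(range(1999, 2017))
first_train, first_test = next(rolling_windows(years))
print(first_train[0], first_train[-1], first_test)  # prints "1999 2008 2009"
```

In each iteration, models would be trained on report texts and scores from the window years and then used to predict scores for the held-out year; a widening gap between in-window and out-of-window accuracy over time is what the figure reads as evidence of changing standards.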


Figure 4. Accuracy and bias across algorithms over time. Notes: Shown are the measured in- and out-of-window accuracy and bias for 29 machine-learning models. The gray lines represent the 29 individual models, and the solid black line is the average across all models: (4) torture (top-left), (5) political (top-right), (6) imprisonment (bottom-left), and (7) political disappearances (bottom-right). We use only the relevant sections of the SD reports, based on the measures' coding rules.


Figure 5. Experimental design. Notes: In 2006, reports were first coded contemporaneously. Ten years later, during the coding of the 2015 reports, coders were given reports from both 2015 and 2005, with all temporal information redacted. We then compared the scores originally assigned to the 2005 reports in 2006 with the scores assigned to the same reports in 2016. Results of the comparison are reported below.
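A difference-of-means comparison like the one summarized in Table 2 could be sketched as below. The scores are made-up illustrative values on the PTS 1-5 scale, not the study's data, and Welch's t-statistic is shown as one common way to test such a difference; the authors' exact test may differ.

```python
from statistics import mean, stdev
from math import sqrt

def welch_t(a, b):
    """Welch's t-statistic for a difference of means between two
    samples with possibly unequal variances."""
    var_a, var_b = stdev(a) ** 2, stdev(b) ** 2
    se = sqrt(var_a / len(a) + var_b / len(b))
    return (mean(a) - mean(b)) / se

# Hypothetical PTS scores: original 2006 coding vs. 2016 re-coding
original = [2, 3, 3, 4, 2, 3]
recoded  = [2, 3, 4, 4, 2, 3]
print(round(welch_t(original, recoded), 2))
```

A t-statistic near zero, as in a comparison like this, would indicate no detectable shift between the original and re-coded scores, which is the pattern the authors report as evidence against coder bias.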


Table 2. Difference of means between original and new scores


Figure 6. Distribution of scores: 2006 v. 2016. Notes: Shown is the distribution of PTS scores: scores assigned in 2016 (left panel); and original scores coded in 2006 (right panel).

Supplementary material

Arnon et al. Dataset (link)
Arnon et al. supplementary material (PDF, 2.1 MB)