
‘Tailception’: using neural networks for assessing tail lesions on pictures of pig carcasses

Published online by Cambridge University Press:  15 November 2018

J. Brünger
Affiliation:
Multimedia Information Processing Group, Computer Science Institute, University of Kiel, Hermann-Rodewald-Str. 3, 24118 Kiel, Germany
S. Dippel*
Affiliation:
Institute of Animal Welfare and Animal Husbandry, Friedrich-Loeffler-Institut, Dörnbergstr 25/27, 29223 Celle, Germany
R. Koch
Affiliation:
Multimedia Information Processing Group, Computer Science Institute, University of Kiel, Hermann-Rodewald-Str. 3, 24118 Kiel, Germany
C. Veit
Affiliation:
Institute of Animal Welfare and Animal Husbandry, Friedrich-Loeffler-Institut, Dörnbergstr 25/27, 29223 Celle, Germany

Abstract

Tail lesions caused by tail biting are a widespread welfare issue in pig husbandry. Determining their prevalence currently involves labour-intensive, subjective scoring methods. Increased societal interest in tail lesions requires fast, reliable and cheap systems for assessing tail status. In the present study, we aimed to test the reliability of neural networks for assessing tail pictures from carcasses against trained human observers. Three trained observers scored tail lesions from automatically recorded pictures of 13 124 pigs. Nearly all pigs had been tail docked. Tail lesions were classified using a 4-point score (0=no lesion, to 3=severe lesion). In addition, total tail loss was recorded. Agreement between observers was tested prior to and during the assessment in a total of seven inter-observer tests with 80 pictures each. We calculated agreement between observer pairs as exact agreement (%) and prevalence-adjusted bias-adjusted κ (PABAK; value 1=optimal agreement). Out of the 13 124 scored pictures, we used 80% for training and 20% for validating our neural networks. As the position of the tail in the pictures varied (high, low, left, right), we first trained a part detection network to find the tail in the picture and select a rectangular part of the picture which includes the tail. We then trained a classification network to categorise tail lesion severity using pictures scored by human observers, whereby the classification network analysed only the selected picture parts. Median exact agreement between the three observers was 80% for tail lesions and 94% for tail loss. Median PABAK for tail lesions and loss were 0.75 and 0.87, respectively. The agreement between classification by the neural network and human observers was 74% for tail lesions and 95% for tail loss. In other words, the agreement between the networks and human observers was very similar to the agreement between human observers. The main reason for disagreement between observers, and thereby for higher variation in the network training material, was picture quality. Therefore, we expect even better results for neural network application to tail lesions if training is based on high-quality pictures. Very reliable and repeatable tail lesion assessment from pictures would allow automated tail classification of all pigs slaughtered, which is something that some animal welfare labels would like to do.
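The PABAK statistic used above is a simple transform of the proportion of exact agreement. A minimal sketch (the function name is ours; the formula follows Byrt et al., 1993): with k score categories and observed agreement p_o, PABAK = (k·p_o − 1)/(k − 1).

```python
def pabak(observed_agreement: float, n_categories: int) -> float:
    """Prevalence-adjusted bias-adjusted kappa (Byrt et al., 1993).

    PABAK = (k * p_o - 1) / (k - 1), where p_o is the proportion of
    exact agreement and k the number of score categories.
    """
    k = n_categories
    return (k * observed_agreement - 1) / (k - 1)

# 4-point tail lesion score, 80% exact agreement:
print(round(pabak(0.80, 4), 2))  # -> 0.73

# Binary tail loss score, 94% exact agreement:
print(round(pabak(0.94, 2), 2))  # -> 0.88
```

These illustrative values are close to, but not identical with, the reported medians (0.75 and 0.87), since the latter are medians across the seven per-test computations.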

Information

Type
Research Article
Copyright
© The Animal Consortium 2018 

Figure 1 Scoring key used for assessing tail lesions and total tail loss on pictures from pig carcasses. Tail lesions and losses were scored independently of each other. ‘Lesion’ was defined as broken skin. The tail loss 1 picture shows the longest remaining ‘stump’ which was still considered as tail loss (longer stumps would be classified as tail loss 0). Centimetres given are subjective estimates from a picture.


Table 1 Number of pig carcase pictures scored by human observers and used for training and validating neural networks


Figure 2 Architecture of a part detection network used for locating tails in pictures of pig carcases. The network learns to activate pixels in the specified areas which can then be used for positioning the region-of-interest windows for cutting out the relevant picture section (tail) for subsequent classification.
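The localisation step described in the Figure 2 caption can be illustrated with a toy sketch (function name, window size and array shapes are our assumptions, not from the paper): the detection network produces a per-pixel activation map, and the region-of-interest window is centred on its strongest activation, clamped to the image borders.

```python
import numpy as np

def crop_roi(image, heatmap, window=(64, 64)):
    """Cut the region of interest around the strongest heatmap activation.

    `image` and `heatmap` share height/width; the crop window is clamped
    so it never extends beyond the image borders.
    """
    h, w = window
    # Location of the maximum activation in the heatmap
    y, x = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    # Centre the window on (y, x), clamped to stay inside the image
    top = int(np.clip(y - h // 2, 0, image.shape[0] - h))
    left = int(np.clip(x - w // 2, 0, image.shape[1] - w))
    return image[top:top + h, left:left + w]

# Toy example: strongest activation near the lower-left of a 128x128 image
img = np.zeros((128, 128))
hm = np.zeros((128, 128))
hm[100, 30] = 1.0
roi = crop_roi(img, hm)
print(roi.shape)  # (64, 64)
```

The clamping means tails near a picture edge still yield a full-size crop, which keeps the input dimensions of the subsequent classification network fixed.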


Figure 3 Results of inter-observer agreement tests of three human observers scoring tail lesions or tail loss, respectively, from pig carcase pictures. Each dot represents the exact agreement (%) or prevalence-adjusted bias-adjusted κ (PABAK; range 0 to 1), respectively, for one observer pair during one test (consecutive test number on x-axis; n=80 pictures per test). Grey vertical line=start of data collection.


Figure 4 Normalised confusion matrix for the predictions of the tail lesion classification network based on 13 124 pig tail pictures annotated by human observers. True label=tail lesion severity score assigned by humans, Predicted label=score predicted by neural network. The colouring indicates the normalised distribution of numbers of pictures per cell.
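Row normalisation of a confusion matrix, as in Figure 4, divides each true-label row by its total so the rows sum to 1. A short sketch; the counts below are invented for illustration and are not the paper's data:

```python
import numpy as np

# Hypothetical counts: rows = human score (0-3), columns = network score
counts = np.array([[900,  80,  15,   5],
                   [120, 600,  70,  10],
                   [ 20,  90, 400,  40],
                   [  5,  15,  60, 220]], dtype=float)

# Divide each row by its total so every true-label row sums to 1
normalised = counts / counts.sum(axis=1, keepdims=True)
print(normalised.sum(axis=1))  # each row sums to 1.0
```

Row normalisation makes per-class recall directly readable from the diagonal, regardless of how unbalanced the score prevalences are.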


Figure 5 Example pictures of slaughter pig tails from the verification of the tail lesion severity classification network (top row). From left to right, pictures represent tail lesion scores 0, 1, 2 and 3, respectively (Figure 1). The bottom row shows the respective gradient-map made by the network, in which warmer colours indicate a larger influence of the respective pixel on the final classification result.


Figure 6 Three examples of misclassification of pig tail lesion severity scores by the network. (a) and (b) were assigned lesion score 1 by a human and lesion score 0 by the network; (c) was assigned lesion score 3 by a human and score 2 by the network.