
Calibrated river-level estimation from river cameras using convolutional neural networks

Published online by Cambridge University Press:  04 May 2023

Remy Vandaele*
Affiliation:
Department of Meteorology, University of Reading, Reading, United Kingdom Department of Computer Science, University of Reading, Reading, United Kingdom
Sarah L. Dance
Affiliation:
Department of Meteorology, University of Reading, Reading, United Kingdom Department of Mathematics and Statistics, University of Reading, Reading, United Kingdom Department of Computer Science, University of Reading, Reading, United Kingdom
Varun Ojha
Affiliation:
Department of Computer Science, University of Reading, Reading, United Kingdom
*
Corresponding author: Remy Vandaele; Email: r.a.vandaele@reading.ac.uk

Abstract

Monitoring river water levels is essential for studying floods and mitigating their risks. River gauges are a well-established method for river water-level monitoring, but many flood-prone areas are ungauged and must be studied through gauges located several kilometers away. Taking advantage of river cameras to observe river water levels is an accessible and flexible solution, but it requires automation. However, current automated methods can only extract uncalibrated river water-level indexes from the images, meaning that these indexes are relative to the field of view of the camera, which limits their application. In this work, we propose a new approach to automatically estimate calibrated river water-level indexes from images of rivers. The approach is based on the creation of a new dataset of 32,715 images coming from 95 river cameras in the UK and Ireland, cross-referenced with gauge data (river water-level information), which allowed us to train convolutional neural networks. These networks can accurately produce two types of calibrated river water-level indexes from images: one for continuous river water-level monitoring, and the other for flood event detection. This work is an important step toward the automated use of cameras for flood monitoring.

Information

Type
Application Paper
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2023. Published by Cambridge University Press
Figure 1. Example of the association process. A camera has produced images $ {I}_1 $ to $ {I}_3 $, with their timestamps represented by dashed lines projected on the time axis. The acceptable 30-minute time ranges around the camera timestamps are shown in red. The reference gauge station has produced six water-level measurements $ {w}_1 $ to $ {w}_6 $, with their timestamps represented by dashed lines projected on the time axis. The associations are shown in blue. Image $ {I}_1 $ is associated with gauge level $ {w}_1 $, as they were produced at the same timestamp. Image $ {I}_2 $ has no gauge measurement produced at its exact timestamp, but both $ {w}_3 $ and $ {w}_4 $ fall within the 30-minute time range; we associate $ {I}_2 $ with $ {w}_4 $, as it is the closest in time. Image $ {I}_3 $ has no gauge measurement within the 30-minute time range, so the image is discarded.
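The matching rule described in this caption (associate each image with the gauge reading closest in time, within a 30-minute window, otherwise discard) can be sketched as follows. This is a minimal illustration, not the paper's code: the function name, POSIX-second timestamps, and plain-list inputs are assumptions.

```python
from bisect import bisect_left

def associate(image_times, gauge_times, gauge_levels, max_gap_s=30 * 60):
    """Pair each image with the gauge reading closest in time.

    Images with no reading within max_gap_s seconds are discarded.
    Timestamps are POSIX seconds; gauge_times must be sorted ascending.
    Returns a list of (image_time, water_level) pairs.
    """
    pairs = []
    for t in image_times:
        i = bisect_left(gauge_times, t)
        best = None
        # Only the readings immediately before and after t can be closest.
        for j in (i - 1, i):
            if 0 <= j < len(gauge_times):
                gap = abs(gauge_times[j] - t)
                if gap <= max_gap_s and (best is None or gap < best[0]):
                    best = (gap, gauge_levels[j])
        if best is not None:
            pairs.append((t, best[1]))
    return pairs
```

For example, an image with no reading inside the 30-minute window is simply dropped, mirroring the fate of $ {I}_3 $ in the figure.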

Figure 2. Representation of the camera visual inspection process, as explained in Section 2.1. The camera in Exebridge, presented in (a) and associated with EA gauge 45122, suggests a good association: the image associated with the lowest river water level shows the lowest water level among the three images, and the image associated with the highest river water level shows the highest. The camera in Kintore, presented in (b) and associated with gauge L0001, suggests a bad association: the image associated with the average water level shows the lowest water level among the three images.

Figure 3. Locations of the selected cameras and their associated gauges.

Table 1. Learning parameters tested during the grid search.

Figure 4. Sample images from the four cameras of the test set.

Figure 5. MAE scores obtained by applying RWN (dashed line) and Filtered-RWN (dotted line) on the camera images of Diglis, Evesham, Strensham, and Tewkesbury. The bars represent the MAE scores obtained with the standardized river water-level indexes produced by the gauges within a 50 km radius, as described in Section 3.1.1.
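As a reminder of the metric used in this figure, the MAE (mean absolute error) between camera-derived and gauge-produced water-level indexes is the mean of the absolute differences. A minimal sketch, with an illustrative function name and plain-list inputs (not the paper's code):

```python
def mae(predicted, observed):
    """Mean absolute error between two equal-length sequences of
    (standardized) river water-level indexes."""
    assert len(predicted) == len(observed) and predicted
    return sum(abs(p - o) for p, o in zip(predicted, observed)) / len(predicted)
```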

Figure 6. Monitoring of the river water levels during 2020 at Diglis, Evesham, Strensham, and Tewkesbury by applying RWN and Filtered-RWN to the camera images, compared to the river water-level data produced by the gauge associated with the camera.

Figure 7. Camera images observing the flood event at Diglis, on 27 February 2020 (left), and on 24 December 2020 (right).

Figure 8. Balanced Accuracy scores obtained by applying CWN (dashed line) and Filtered-CWN (dotted line) on the camera images of Diglis, Evesham, Strensham, and Tewkesbury. The bars represent the Balanced Accuracy scores obtained by the gauges within a 50 km radius producing a flood classification index, as described in Section 3.2.1.

Table 2. Contingency table for the classification of flooded images using CWN.
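For reference, the Balanced Accuracy reported for CWN can be derived from a contingency table such as Table 2 as the mean of sensitivity (recall on flooded images) and specificity (recall on non-flooded images). A minimal sketch; the function and the example counts in the test are illustrative, not the paper's results:

```python
def balanced_accuracy(tp, fn, fp, tn):
    """Balanced Accuracy from 2x2 contingency-table counts:
    tp/fn = flooded images classified correctly/incorrectly,
    tn/fp = non-flooded images classified correctly/incorrectly."""
    sensitivity = tp / (tp + fn)  # recall on the flooded class
    specificity = tn / (tn + fp)  # recall on the non-flooded class
    return (sensitivity + specificity) / 2
```

Unlike plain accuracy, this score is not inflated when non-flooded images vastly outnumber flooded ones, which is why it suits flood event detection.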

Figure 9. Examples of images misclassified by Classification-WaterNet (CWN).