
Deep-blur: Blind identification and deblurring with convolutional neural networks

Published online by Cambridge University Press:  15 November 2024

Valentin Debarnot
Affiliation:
Department of Mathematics and Computer Science, University of Basel, Basel, Switzerland
Pierre Weiss*
Affiliation:
Institut de Recherche en Informatique de Toulouse (IRIT), CNRS & Université de Toulouse, Toulouse, France; Centre de Biologie Intégrative (CBI), Laboratoire de Biologie Moléculaire, Cellulaire et du Développement (MCD), CNRS & Université de Toulouse, Toulouse, France
Corresponding author: Pierre Weiss; Email: pierre.weiss@cnrs.fr

Abstract

We propose a neural network architecture and a training procedure to estimate blurring operators and deblur images from a single degraded image. Our key assumption is that the forward operators can be parameterized by a low-dimensional vector. The models we consider include a description of the point spread function with Zernike polynomials in the pupil plane, or product-convolution expansions, which incorporate space-varying operators. Numerical experiments show that the proposed method can accurately and robustly recover the blur parameters even at large noise levels. For a convolution model, the average signal-to-noise ratio of the recovered point spread function ranges from 13 dB in the noiseless regime to 8 dB in the high-noise regime; in comparison, the tested alternatives yield negative values. The operator estimate can then be used as an input to an unrolled neural network that deblurs the image. Quantitative experiments on synthetic data demonstrate that this method outperforms other commonly used methods both perceptually and in terms of SSIM. The algorithm can process a 512 $ \times $ 512 image in under a second on a consumer graphics card and does not require any human interaction once the operator parameterization has been set up.
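The pupil-plane parameterization mentioned in the abstract can be sketched as follows: the PSF is the squared modulus of the Fourier transform of a circular pupil whose phase is a linear combination of Zernike polynomials with coefficients $ \boldsymbol{\gamma} $. The function below is a minimal illustration under that assumption; the function name, grid size, aperture radius, and choice of low-order modes are illustrative, not the paper's actual implementation.

```python
import numpy as np

def zernike_psf(gamma, n=31, na_radius=0.4):
    # Illustrative pupil-plane PSF model: the phase of a circular pupil
    # is a linear combination of a few low-order Zernike polynomials.
    yy, xx = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n), indexing="ij")
    r = np.hypot(xx, yy) / na_radius
    theta = np.arctan2(yy, xx)
    pupil = (r <= 1.0).astype(float)
    # A few low-order Zernike modes (defocus and two astigmatisms).
    zernikes = [
        np.sqrt(3) * (2 * r**2 - 1),               # defocus
        np.sqrt(6) * r**2 * np.cos(2 * theta),     # astigmatism (0/90 deg)
        np.sqrt(6) * r**2 * np.sin(2 * theta),     # astigmatism (45 deg)
    ]
    phase = sum(g * z for g, z in zip(gamma, zernikes))
    # PSF = squared modulus of the Fourier transform of the aberrated pupil.
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil * np.exp(1j * phase))))
    psf = np.abs(field) ** 2
    return psf / psf.sum()
```

In this parameterization, the identification network only has to regress the handful of coefficients in `gamma` rather than the full $ 31\times 31 $ kernel.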

Information

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press

Figure 1. The deep-blur architecture. The first part of the network identifies the parameter $ \hat{\boldsymbol{\gamma}} $; in this article, we use a ResNet architecture. The estimated parameter $ \hat{\boldsymbol{\gamma}} $ is given as an input to a second, deblurring network, which is an unrolled Douglas–Rachford algorithm. The yellow blocks are convolution layers with ReLU and batch normalization. The red ones are average pooling layers. The green ones are regularized inverse layers of the form $ {\mathbf{x}}_{t+1}={\left({\mathbf{A}}^{\ast}\left(\hat{\boldsymbol{\gamma}}\right)\mathbf{A}\left(\hat{\boldsymbol{\gamma}}\right)+\lambda \mathbf{I}\right)}^{-1}{\mathbf{A}}^{\ast}\left(\hat{\boldsymbol{\gamma}}\right)\mathbf{y} $. The violet blocks are U-Net-like neural networks with weights learned to provide a sharp image $ \hat{\mathbf{x}} $.
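For a plain stationary convolution model with periodic boundary conditions, the regularized inverse layer $ {\left({\mathbf{A}}^{\ast}\mathbf{A}+\lambda \mathbf{I}\right)}^{-1}{\mathbf{A}}^{\ast}\mathbf{y} $ can be evaluated in closed form in the Fourier domain, since the FFT diagonalizes convolution operators. A minimal sketch under those assumptions (the helper name and boundary handling are illustrative, not the paper's implementation):

```python
import numpy as np

def regularized_inverse(y, psf, lam=1e-2):
    # Tikhonov-regularized inverse (A*A + lam I)^{-1} A* y for a periodic
    # convolution operator A, solved exactly with FFTs.
    h = np.zeros_like(y)
    k = psf.shape[0]
    h[:k, :k] = psf
    h = np.roll(h, (-(k // 2), -(k // 2)), axis=(0, 1))  # center kernel at (0, 0)
    A_hat = np.fft.fft2(h)
    x_hat = np.conj(A_hat) * np.fft.fft2(y) / (np.abs(A_hat) ** 2 + lam)
    return np.real(np.fft.ifft2(x_hat))
```

Each green block applies one such solve with the current estimate of the operator; the violet U-Net blocks then refine the result.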


Figure 2. Examples of eigen-PSF and eigen-space variation bases for a wide-field microscope (28).


Figure 3. Examples of results for the identification network with convolution kernels defined through the Fresnel approximation. Top: the original image and the blurred, noisy $ 400\times 400 $ image. Bottom: the true $ 31\times 31 $ kernel used to generate the blurry image and the corresponding estimate by the neural network. Notice that a large amount of white Gaussian noise was added to the blurred image. The image boundaries were discarded from the estimation process to prevent the neural network from using information that would not be present in real images.


Figure 4. On the left: a $ 100\times 100 $ table representing the SNR of the PSF. In this table, we evaluated the identification network for 100 images (left to right) and 100 kernels (top to bottom) with no noise. As can be seen, there are horizontal and vertical stripes, meaning that some images and some kernels make the identification problem easier or harder. In the middle: an image that makes the identification problem hard (column 23). On the right: a kernel that makes the identification harder (row 65).
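The SNR values in this table, and those quoted in the abstract, are presumably the usual signal-to-noise ratio in decibels between the true and estimated PSFs. A minimal sketch of that metric (an assumption about the exact formula used in the paper):

```python
import numpy as np

def snr_db(x_true, x_est):
    # Signal-to-noise ratio in decibels of an estimate x_est of x_true.
    err = np.linalg.norm(x_true - x_est)
    return 20.0 * np.log10(np.linalg.norm(x_true) / err)
```

With this convention, a 10% relative error corresponds to 20 dB, while an estimate less accurate than the zero kernel yields a negative value, as reported for the tested alternatives.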


Figure 5. Stability of the kernel estimation with respect to noise level (left) and amplitude of the Zernike coefficients in the noiseless regime (right).


Figure 6. Deep-blur in action in the noiseless setting. Quantitative evaluations are reported in Table 1. When available, the estimated blur kernel is displayed at the bottom right. First row: original images. Second row: blurry-noisy images. Third row: deep-blur. Fourth row: (5). Fifth row: (4). Sixth row: (19).


Table 1. Reconstruction results for different noise levels and different methods


Figure 7. Deep-blur in action with a medium noise level ($ \alpha =0.025 $, $ \beta =0.05 $). Quantitative evaluations are reported in Table 1. When available, the estimated blur kernel is displayed at the bottom right. First row: original images. Second row: blurry-noisy images. Third row: deep-blur. Fourth row: (5). Fifth row: (4). Sixth row: (19).


Figure 8. Deep-blur in action in a high-noise regime ($ \alpha =0.12 $, $ \beta =0.24 $). Quantitative evaluations are reported in Table 1. When available, the estimated blur kernel is displayed at the bottom right. First row: original images. Second row: blurry-noisy images. Third row: deep-blur. Fourth row: (5). Fifth row: (4). Sixth row: (19).


Figure 9. Blind deblurring examples on real images taken from (43); see the samples for more details. In this experiment, only the noise level was set manually; the rest of the process is fully automated. No ground truth is available, so the results have to be assessed by visual inspection.


Figure 10. Deep-blur applied to spatially varying blur operators on microscopy images (not seen during training). The blur operators are sampled from a family estimated using a real wide-field microscope. First row: the original images. Second row: blurry-noisy images. Third row: the blind deblurring result with deep-blur. The SSIM of the resulting deblurred image is displayed below. Fourth row: The true blur operator. We display 4 evenly spaced impulse responses in the field of view. Fifth row: The estimated blur operator. The SNR of the estimated kernel is displayed in the caption in dB.
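The spatially varying operators used here are product-convolution expansions, $ \mathbf{Ax}=\sum_k {\mathbf{w}}_k\odot \left({\mathbf{h}}_k\star \mathbf{x}\right) $, where each eigen-PSF $ {\mathbf{h}}_k $ is modulated by a spatial weight map $ {\mathbf{w}}_k $ (see Figure 2). A minimal sketch, with illustrative names and periodic boundary conditions rather than the paper's implementation:

```python
import numpy as np

def product_convolution(x, kernels, weights):
    # Product-convolution expansion: A x = sum_k w_k .* (h_k * x),
    # a common model for spatially varying blur. Each kernel h_k is a
    # small eigen-PSF; each weight w_k is a map of the same size as x.
    out = np.zeros_like(x, dtype=float)
    for h, w in zip(kernels, weights):
        h_pad = np.zeros_like(x, dtype=float)
        k = h.shape[0]
        h_pad[:k, :k] = h
        h_pad = np.roll(h_pad, (-(k // 2), -(k // 2)), axis=(0, 1))  # center at (0, 0)
        blurred = np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(h_pad)))
        out += w * blurred
    return out
```

Since each term is a stationary convolution, the whole operator and its adjoint cost only a few FFTs per eigen-PSF, which is what makes the regularized inverse layers of Figure 1 tractable in the space-varying case.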