
Automatic exposure compensation using an image segmentation method for single-image-based multi-exposure fusion

Published online by Cambridge University Press:  02 January 2019

Yuma Kinoshita
Affiliation:
Tokyo Metropolitan University, Tokyo, Japan
Hitoshi Kiya*
Affiliation:
Tokyo Metropolitan University, Tokyo, Japan
*
Corresponding author: Hitoshi Kiya, Email: kiya@tmu.ac.jp

Abstract

In this paper, an automatic exposure compensation method is proposed for image enhancement. For this compensation, a novel image segmentation method based on the luminance distribution of an image is also proposed. Conventional single-image-enhancement methods often lose detail in bright areas or cannot sufficiently enhance contrast in dark regions. In contrast, image enhancement with the proposed compensation method produces high-quality images that represent both bright and dark areas well, by fusing pseudo multi-exposure images generated from a single image. These pseudo multi-exposure images are generated automatically by the proposed exposure compensation method, and the proposed segmentation method is used to set the compensation parameters automatically so that the generated images are effective for fusion. In experiments, image enhancement with the proposed compensation method outperforms state-of-the-art image-enhancement methods, including Retinex-based ones, in terms of both entropy and statistical naturalness. Moreover, visual comparisons show that the proposed compensation method is effective in producing images that clearly render both bright and dark areas.

Information

Type
Original Paper
Creative Commons
Creative Commons Licence: CC BY-NC-ND
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives licence (http://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is unaltered and is properly cited. The written permission of Cambridge University Press must be obtained for commercial re-use or in order to create a derivative work.
Copyright
Copyright © The Authors, 2018

Fig. 1. Pseudo multi-exposure image fusion (MEF). Our main contributions are to propose an image segmentation method to calculate suitable parameters and to propose a novel exposure compensation method based on the segmentation method.


Fig. 2. Proposed image segmentation. Each separated area Pm is color-coded in the right image. The separated areas {Pm} are given by a GMM-based clustering method, where the GMM is fitted to the luminance distribution of the input image I.
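As a rough illustration of the idea in Fig. 2 (not the paper's exact algorithm), the sketch below fits a Gaussian mixture model to the per-pixel luminance and labels each pixel by its most likely component. The use of scikit-learn's GaussianMixture, the Rec. 709 luminance weights, and the component count k are all assumptions made here for the example.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def segment_by_luminance(image, k=10):
    """Cluster pixels into areas {P_m} by fitting a GMM to the
    luminance distribution (a sketch of the idea in Fig. 2)."""
    # Luminance of an RGB image (Rec. 709 weights assumed here).
    lum = (0.2126 * image[..., 0]
           + 0.7152 * image[..., 1]
           + 0.0722 * image[..., 2])
    samples = lum.reshape(-1, 1)
    gmm = GaussianMixture(n_components=k, random_state=0).fit(samples)
    labels = gmm.predict(samples).reshape(lum.shape)
    return labels, gmm

# Usage on a random test image:
rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))
labels, gmm = segment_by_luminance(img, k=4)
```

In the paper, pixels sharing a GMM component form one area Pm; merging near-identical components (which would reduce M below K) is omitted from this sketch.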


Fig. 3. Proposed segmentation-based exposure compensation, which automatically calculates the M parameters {αm}, unlike conventional methods. (a) Conventional [19]. (b) Proposed.
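Fig. 3(b) derives one exposure parameter αm per separated area. One plausible rule, chosen here purely for illustration, is to pick αm (in EV) so that the mean luminance of area Pm maps to mid-gray, then scale the whole image by 2**αm to obtain the m-th pseudo multi-exposure image. The mid-gray anchor (0.18) and the mean-based rule are assumptions, not the paper's actual formula.

```python
import numpy as np

def pseudo_multi_exposure(image, labels, mid_gray=0.18):
    """Generate pseudo multi-exposure images: for each area P_m,
    choose a gain alpha_m (in EV) mapping its mean luminance to
    mid-gray, then scale the whole image by 2**alpha_m.
    A sketch of the idea in Fig. 3(b); the paper's rule may differ."""
    lum = image.mean(axis=-1)  # crude luminance proxy (assumption)
    outputs, alphas = [], []
    for m in np.unique(labels):
        mean_m = lum[labels == m].mean()
        alpha = np.log2(mid_gray / max(mean_m, 1e-6))
        outputs.append(np.clip(image * 2.0 ** alpha, 0.0, 1.0))
        alphas.append(alpha)
    return outputs, alphas

# Usage with a toy two-area segmentation:
rng = np.random.default_rng(1)
img = rng.random((16, 16, 3))
labels = (img.mean(axis=-1) > 0.5).astype(int)
outs, alphas = pseudo_multi_exposure(img, labels)
```

Each output image is well exposed for one area, so a standard multi-exposure fusion (MEF) step can then merge them into the final result y.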


Fig. 4. Results of the proposed method (Chinese garden). (a) Input image x (0EV). Entropy: 5.767. Naturalness: 0.4786. (b) Result of the proposed segmentation: separated areas {Pm} (M=7, K=10), where each color indicates a separated area. (c) Final enhanced result of the proposed method, fused image y. Entropy: 6.510. Naturalness: 0.1774. (d–j) Adjusted images ($\hat{\mathbf{x}}_1$, $\hat{\mathbf{x}}_2$, $\hat{\mathbf{x}}_3$, $\hat{\mathbf{x}}_4$, $\hat{\mathbf{x}}_5$, $\hat{\mathbf{x}}_6$, $\hat{\mathbf{x}}_7$, respectively) produced by the proposed segmentation-based exposure compensation.


Fig. 5. Results of the proposed method (Trashbox). (a) Input image x (−6EV). Entropy: 0.249. Naturalness: 0.0000. (b) Result of the proposed segmentation: separated areas {Pm} (M=3, K=10), where each color indicates a separated area. (c) Final enhanced result of the proposed method, fused image y. Entropy: 6.830. Naturalness: 0.4886. (d–f) Adjusted images ($\hat{\mathbf{x}}_1$, $\hat{\mathbf{x}}_2$, $\hat{\mathbf{x}}_3$, respectively) produced by the proposed segmentation-based exposure compensation.


Fig. 6. Results under the use of fixed parameters M and αm (Arno). (a) Input image x (0EV). Entropy: 6.441. Naturalness: 0.1996. (b) Enhanced result with fixed M=3 and {αm} = {−2, 0, 2}. Entropy: 6.597. Naturalness: 0.3952. (c) Fixed M=5 and {αm} = {−4, −2, 0, 2, 4}. Entropy: 6.745. Naturalness: 0.5430. (d) Fixed M=7 and {αm} = {−8, −4, …, 4, 8}. Entropy: 6.851. Naturalness: 0.6812. (e) Enhanced result of the proposed method (M=5, K=10). Entropy: 6.640. Naturalness: 0.6693. (f) Enhanced result with fixed M=3. Entropy: 6.787. Naturalness: 0.6555. (g) Fixed M=5. Entropy: 6.614. Naturalness: 0.6615. (h) Fixed M=7. Entropy: 6.542. Naturalness: 0.5861. A zoom-in of the boxed region is shown at the bottom of each image.


Fig. 7. Comparison of the proposed method with image-enhancement methods (Window). Zoom-ins of the boxed regions are shown at the bottom of each image. The proposed method produces clear images without under- or over-enhancement. (a) Input image x (−1EV). Entropy: 3.811. Naturalness: 0.0058. (b) HE. Entropy: 5.636. Naturalness: 0.6317. (c) CLAHE [1]. Entropy: 5.040. Naturalness: 0.0945. (d) AGCWD [2]. Entropy: 5.158. Naturalness: 0.1544. (e) CACHE [3]. Entropy: 5.350. Naturalness: 0.1810. (f) LLIE [6]. Entropy: 4.730. Naturalness: 0.0608. (g) LIME [4]. Entropy: 7.094. Naturalness: 0.9284. (h) SRIE [5]. Entropy: 5.950. Naturalness: 0.2548. (i) BIMEF [21]. Entropy: 5.967. Naturalness: 0.2181. (j) Proposed. Entropy: 6.652. Naturalness: 0.7761.


Fig. 8. Comparison of the proposed method with image-enhancement methods (Estate rsa). Zoom-ins of the boxed regions are shown at the bottom of each image. The proposed method produces clear images without under- or over-enhancement. (a) Input image x (−1.3EV). Entropy: 4.288. Naturalness: 0.0139. (b) HE. Entropy: 6.985. Naturalness: 0.7377. (c) CLAHE [1]. Entropy: 6.275. Naturalness: 0.4578. (d) AGCWD [2]. Entropy: 6.114. Naturalness: 0.4039. (e) CACHE [3]. Entropy: 7.469. Naturalness: 0.7573. (f) LLIE [6]. Entropy: 5.807. Naturalness: 0.2314. (g) LIME [4]. Entropy: 7.329. Naturalness: 0.8277. (h) SRIE [5]. Entropy: 5.951. Naturalness: 0.3488. (i) BIMEF [21]. Entropy: 6.408. Naturalness: 0.6757. (j) Proposed. Entropy: 6.749. Naturalness: 0.6287.


Table 1. Examples of number M of areas {Pm} separated by the proposed segmentation (K = 10)


Fig. 9. Experimental results for discrete entropy. (a) Input image, (b) HE, (c) CLAHE, (d) AGCWD, (e) CACHE, (f) LLIE, (g) LIME, (h) SRIE, (i) BIMEF, and (j) Proposed. Boxes span from the first quartile Q1 to the third quartile Q3, and whiskers show the maximum and minimum values within [Q1 − 1.5(Q3 − Q1), Q3 + 1.5(Q3 − Q1)]. The band inside each box indicates the median.
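The entropy scores quoted throughout the captions are discrete entropies of the gray-level histogram. A minimal sketch under that assumption (256 bins over [0, 1]; the paper's exact definition may differ):

```python
import numpy as np

def discrete_entropy(gray, bins=256):
    """Discrete (Shannon) entropy of a gray-level histogram, in bits.
    Assumed to correspond to the 'Entropy' scores in the captions."""
    hist, _ = np.histogram(gray, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins; 0*log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())

# A constant image has zero entropy; a 256-level ramp spanning
# every bin once reaches the 8-bit maximum.
flat = np.full((8, 8), 0.5)
ramp = np.linspace(0.0, 1.0, 256, endpoint=False)
```

Higher entropy indicates a wider spread of gray levels, which is why well-exposed fused results score above dark or clipped inputs.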


Fig. 10. Experimental results for statistical naturalness. (a) Input image, (b) HE, (c) CLAHE, (d) AGCWD, (e) CACHE, (f) LLIE, (g) LIME, (h) SRIE, (i) BIMEF, and (j) Proposed. Boxes span from the first quartile Q1 to the third quartile Q3, and whiskers show the maximum and minimum values within [Q1 − 1.5(Q3 − Q1), Q3 + 1.5(Q3 − Q1)]. The band inside each box indicates the median.
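The box-plot construction described in the captions of Figs. 9 and 10 can be sketched as follows; box_plot_stats is a hypothetical helper written for this example, not part of the paper:

```python
import numpy as np

def box_plot_stats(values):
    """Quartiles and whisker limits as described in Figs. 9 and 10:
    whiskers show the extreme values within
    [Q1 - 1.5(Q3 - Q1), Q3 + 1.5(Q3 - Q1)]."""
    v = np.asarray(values, dtype=float)
    q1, median, q3 = np.percentile(v, [25, 50, 75])
    iqr = q3 - q1
    lo_limit, hi_limit = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    in_range = v[(v >= lo_limit) & (v <= hi_limit)]
    return {"q1": q1, "median": median, "q3": q3,
            "whisker_low": in_range.min(),
            "whisker_high": in_range.max()}

# Usage: the outlier 100 falls outside the whisker range.
stats = box_plot_stats([1, 2, 3, 4, 5, 100])
```

Values outside the whisker range (such as the 100 above) would appear as individual outlier points in the plots.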