
Lossless image coding using hierarchical decomposition and recursive partitioning

Published online by Cambridge University Press:  09 September 2016

Mortuza Ali
Affiliation:
School of Engineering and IT, Federation University Australia, Churchill, VIC, Australia
Manzur Murshed*
Affiliation:
School of Engineering and IT, Federation University Australia, Churchill, VIC, Australia
Shampa Shahriyar
Affiliation:
Faculty of IT, Monash University, Churchill, VIC, Australia
Manoranjan Paul
Affiliation:
School of Computing and Mathematics, Charles Sturt University, Bathurst, NSW, Australia
*
Corresponding author: M. Murshed, Email: manzur.murshed@federation.edu.au

Abstract

State-of-the-art lossless image compression schemes, such as JPEG-LS and CALIC, have been proposed in the context-adaptive predictive coding framework. These schemes involve a prediction step followed by context-adaptive entropy coding of the residuals. However, the models for context determination proposed in the literature have been designed using ad hoc techniques. In this paper, we take an alternative approach: we fix a simpler context model and then rely on a systematic technique to exploit spatial correlation for efficient compression. The essential idea is to decompose the image into binary bitmaps such that the spatial correlation that exists among non-binary symbols is captured as correlation among a few bit positions. The proposed scheme then encodes the bitmaps in a particular order based on the simple context model. However, instead of encoding a bitmap as a whole, we partition it into rectangular blocks, induced by a binary tree, and then encode the blocks independently. The motivation for partitioning is to explicitly identify the blocks within which the statistical correlation remains the same. On a set of standard test images, the proposed scheme, using the same predictor as JPEG-LS, achieved an overall bit-rate saving of 1.56% against JPEG-LS.
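The hierarchical decomposition idea from the abstract can be illustrated with a minimal sketch. This is not the authors' code; the function name, the "don't care" marker (-1), and the use of the mid magnitude as the division boundary are illustrative assumptions based on the descriptions in the captions below.

```python
import numpy as np

def hd_binarize(mags, lo, hi):
    """Sketch of hierarchical decomposition (HD) of residual magnitudes
    into a binary tree of bitmaps (illustrative; not the authors' code).

    Each node splits its magnitude range [lo, hi] at a boundary t (the
    mid magnitude here); its bitmap holds 0 where mag <= t and 1 where
    mag > t.  Positions whose magnitude lies outside [lo, hi] are
    "don't care" positions, marked -1."""
    if lo >= hi:
        return None
    t = (lo + hi) // 2                         # mid-magnitude division boundary
    in_range = (mags >= lo) & (mags <= hi)
    bitmap = np.where(in_range, (mags > t).astype(int), -1)
    return {
        "value": t,
        "bitmap": bitmap,
        "left": hd_binarize(mags, lo, t),      # subtree for magnitudes <= t
        "right": hd_binarize(mags, t + 1, hi), # subtree for magnitudes > t
    }
```

Each recursion level refines only the positions still in range, which is how the correlation among non-binary symbols is captured in a few binary bitmaps.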

Information

Type
Original Paper
Creative Commons
Creative Commons License - CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
Copyright © The Authors, 2016

Fig. 1. Binarization of grayscale version of Image 6 from the Kodak test set [13]: (a) the original image; (b) signed-magnitude representation of the residuals where the MED predictor from JPEG-LS [1] was used for prediction; (c) binarization of the residual magnitudes using BPD; and (d) binarization of the residual magnitudes using the HD technique. While binarization using BPD yields seven bit planes, HD results in a binary tree. Associated with each node of the tree is a value and a bitmap. The 0’s (black pixels) in the bitmap denote that residual magnitudes at corresponding positions are less than or equal to the value associated with the node, while 1’s (white pixels) denote that they are larger than that. In the bitmaps, the red pixels denote “don't care” positions.
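The MED predictor mentioned in the caption is the standard median edge detector of JPEG-LS [1], which predicts each pixel from its left (a), above (b), and above-left (c) causal neighbours. A minimal sketch:

```python
def med_predict(a, b, c):
    """Median edge detector (MED) predictor from JPEG-LS:
    a = left neighbour, b = above neighbour, c = above-left neighbour."""
    if c >= max(a, b):
        return min(a, b)    # edge detected: pick the smaller neighbour
    elif c <= min(a, b):
        return max(a, b)    # edge detected: pick the larger neighbour
    else:
        return a + b - c    # smooth region: planar (gradient) prediction
```

The residuals in Fig. 1(b) are the differences between the actual pixel values and these predictions, stored in signed-magnitude form.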


Table 1. Huffman codes for the syntax elements.


Fig. 2. Partitioning of the bitmap associated with the left child of the root node in Fig. 1(d), achieved by the proposed bitmap encoding algorithm.
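The flavour of binary-tree partitioning of a bitmap into rectangular blocks can be sketched as follows. This toy splits a block along its longer side until each leaf is uniform; it is illustrative only and is not the authors' BCRP algorithm, which drives its splits by coding cost rather than strict uniformity.

```python
import numpy as np

def partition(bitmap):
    """Toy recursive bitmap partitioning (illustrative; not BCRP itself):
    split a block along its longer side until every leaf block is uniform
    (all 0s or all 1s) or a single cell.  Returns the leaf blocks as
    (top, left, height, width) tuples."""
    blocks = []

    def rec(top, left, h, w):
        blk = bitmap[top:top + h, left:left + w]
        if blk.min() == blk.max() or (h == 1 and w == 1):
            blocks.append((top, left, h, w))       # uniform leaf: cheap to code
        elif h >= w:
            rec(top, left, h // 2, w)              # split horizontally
            rec(top + h // 2, left, h - h // 2, w)
        else:
            rec(top, left, h, w // 2)              # split vertically
            rec(top, left + w // 2, h, w - w // 2)

    rec(0, 0, bitmap.shape[0], bitmap.shape[1])
    return blocks
```

The tree structure itself must also be signalled to the decoder, which is why real partitioning schemes trade the cost of deeper splits against the gain of more homogeneous blocks.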


Fig. 3. Conceptual block diagram of the proposed scheme. The scheme consists of three stages: prediction, binarization, and bitmap coding. The stages are decoupled in the sense that one is free to choose the specific technique used in each stage. For example, one can use the JPEG-LS predictor, the HD binarization scheme, and the BCRP–CABAC algorithm in the prediction, binarization, and bitmap coding stages, respectively.


Fig. 4. Context-conditioned entropies of different bit planes achieved by different BPD schemes using the context models: (a) first-order entropy; (b) $[X, Y] = [r_{i-1,j,k}, r_{i,j-1,k}]$; (c) $[Z] = [r_{i,j,k-1}]$; and (d) $[X, Y, Z] = [r_{i,j,k-1}, r_{i-1,j,k}, r_{i,j-1,k}]$.
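The quantity plotted in Fig. 4 is the empirical conditional entropy of a bit plane given a context, $H(B \mid C) = \sum_c p(c)\,H(B \mid C = c)$. A minimal sketch of computing it from parallel sequences of bits and their contexts (the function name and argument layout are illustrative):

```python
from collections import Counter
from math import log2

def context_conditioned_entropy(bits, contexts):
    """Empirical conditional entropy H(B | C) in bits/symbol.
    `bits` and `contexts` are parallel sequences; each context is,
    e.g., the tuple of already-coded neighbour bits such as (X, Y)."""
    n = len(bits)
    ctx_counts = Counter(contexts)           # counts of each context c
    joint_counts = Counter(zip(contexts, bits))  # counts of (c, b) pairs
    h = 0.0
    for (c, b), k in joint_counts.items():
        p_cb = k / n                         # joint probability p(c, b)
        p_b_given_c = k / ctx_counts[c]      # conditional p(b | c)
        h -= p_cb * log2(p_b_given_c)
    return h
```

A context model is useful exactly when this conditional entropy is markedly lower than the first-order entropy of the bit plane, which is what panels (b)-(d) compare against panel (a).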


Table 2. Compression rates (bits/pixel) achieved by different schemes on the Kodak grayscale images [13]. BPD–CABAC refers to the scheme where the bit planes resulting from BPD are encoded using CABAC, while BPD–BCRP–CABAC denotes the scheme where those bit planes are encoded using BCRP–CABAC. BPDG–CABAC and BPDG–BCRP–CABAC are variants of BPD–CABAC and BPD–BCRP–CABAC where Gray codes are used for binarization of the residuals.


Table 3. Compression rates (bits/pixel) achieved by different schemes based on HD binarization. HDMid-CABAC refers to the scheme where the bitmaps, resulting from HD binarization using mid magnitude as the division boundary, are encoded using CABAC, while HDMid-BCRP-CABAC denotes the scheme where those bitmaps are encoded using BCRP-CABAC. HDAvg-CABAC and HDAvg-BCRP-CABAC are variants of HDMid-CABAC and HDMid-BCRP-CABAC where average magnitude is used in the HD binarization stage, instead of mid magnitude, as the division boundary.


Table 4. Compression efficiency (bits/pixel) of the proposed scheme against JPEG-LS. Since the proposed scheme uses the same predictor as JPEG-LS, the results demonstrate its efficacy in exploiting spatial homogeneity.