A review on high dynamic range (HDR) image quality assessment


International Journal on Smart Sensing and Intelligent Systems

Professor Subhas Chandra Mukhopadhyay

Exeley Inc. (New York)

Subject: Computational Science & Engineering, Engineering, Electrical & Electronic


eISSN: 1178-5608


Volume 14, Issue 1 (Feb 2021)


Irwan Prasetya Gunawan * / Ocarina Cloramidina / Salmaa Badriatu Syafa’ah / Rizcy Hafivah Febriani / Guson Prasamuarso Kuntarto / Berkah Iman Santoso

Keywords: Reduced-reference (RR), Objective quality assessment, Image quality assessment (IQA), High dynamic range (HDR), Inverse tone mapping operator (ITMO), Multi-exposure fusion (MEF)

Citation Information : International Journal on Smart Sensing and Intelligent Systems. Volume 14, Issue 1, Pages 1-17, DOI: https://doi.org/10.21307/ijssis-2021-010

License : (BY-NC-ND-4.0)

Received Date : 10-December-2020 / Published Online: 12-July-2021


Abstract

This paper presents a literature review of methods for measuring high dynamic range (HDR) image quality. HDR technology can help maximize user satisfaction with visual services based on HDR images. Advances in HDR technology indirectly pose a more difficult challenge to image quality assessment methods due to the high sensitivity of the human visual system (HVS) to the various kinds of distortion that may arise in HDR images. This is related to the process of HDR image generation, which in general can be classified into two broad categories: formation using the multi-exposure fusion (MEF) method and formation using the inverse tone mapping operator (ITMO) method. In this paper, we outline how HDR image quality measurement methods work and describe examples of these methods in relation to the way the HDR images are generated. From these methods, it can be seen that most work is still focused on full-reference and no-reference quality models. We argue that there is still room for the development of reduced-reference HDR image quality assessment.


Introduction

Nowadays, multimedia presentations are becoming more and more important particularly because our world is increasingly digitized and always connected. Multimedia presentations may include different modalities that may range from simple text, audio, speech, sound, images, to more complex content such as touch sense and smell (Rahayu, 2011).

Visual-based multimedia presentations aim at reconstructing visual information that corresponds to the perception of the human visual system (HVS). Recently, high dynamic range (HDR) imaging is considered as one of the technological advances that can accomplish this purpose (Narwaria et al., 2015). Ideally, HDR imaging requires special tools, devices, and processing pipelines that are different from those used for today’s ordinary image processing in dealing with low dynamic range (LDR)/standard dynamic range (SDR) images.

However, considering the prevalence of today’s conventional imaging technology, HDR can also take advantage of SDR/LDR image processing methods. This can be seen, for example, in the rise over the past few years of smartphone and DSLR cameras that can capture HDR-processed images (Kundu et al., 2017a, b; Mantiuk et al., 2016). HDR images obtained in this way are usually created using the inverse tone mapping operator (ITMO) and multi-exposure fusion (MEF) methods, because these two methods can produce images over a very wide range of lighting conditions (Azimi et al., 2015). Both methods can produce visual information with a dynamic range similar to that of the HVS. In addition, the resulting HDR images can also look natural, more attractive and informative, and can even reduce noise that may have been present in the image (Kundu et al., 2017a, b; Ma et al., 2015; Rovid et al., 2007; Varkonyi-Koczy et al., 2008).

The development of HDR technologies will certainly require a special image quality assessment (IQA) method tailored to the characteristics of HDR images. HDR technologies place greater demands on quality measurement methods due to the very high sensitivity of the HVS to errors and distortions in the images. An IQA algorithm usually plays an important role in the image processing pipeline (Opozda and Sochan, 2014; Zhu et al., 2018a, b). It also aims to ensure that the measured quality consistently reflects the quality of the image as perceived by the HVS.

There are two broad categories of image quality measurement methods, namely subjective and objective measurement methods. Subjective image quality measurement is considered the most reliable method because it directly involves human viewers in evaluating the quality of the displayed images. This method can represent how the perception of the human visual system responds to given visual stimuli. Unfortunately, subjective methods have major drawbacks: they are quite expensive to carry out consistently, and they require a lot of time to implement. Therefore, objective image quality measurement methods that do not involve human viewers have developed quite rapidly.

Based on the availability of original images used as a reference for quality measurements, objective image quality assessment can be classified into three categories: full-reference (FR), no-reference (NR), and reduced-reference (RR) methods. For conventional LDR/SDR images, a wide variety of FR, RR, and NR objective measurement methods have appeared over the past few years. For HDR images, however, many FR and NR methods have been developed, while very few have addressed the RR case.

Yet the RR approach, which measures the quality of visual services using only reduced information, can be very useful under current conditions in which video streaming services are on the rise; for example, it can help service providers such as telcos or ISPs monitor the quality of their products. Conversely, clients can verify that the quality of service they receive is really as promised by the content provider.

Considering the various explanations that we have given above, this paper will provide a review of the objective quality evaluation method for HDR images. The presentation of this paper will be organized as follows. In the following sections, we will briefly describe the HDR image processing flow in general. Subsequently, an explanation about the image quality assessment method will be given in the third section, which will be followed in the fourth section by a further explanation of some of the HDR image quality measurement models found in the literature. Our concept of quality assessment for HDR images in a reduced-reference fashion is outlined in the fifth section. Finally, this paper will conclude with some closing remarks in the sixth section.

HDR imaging

HDR imaging pipeline

An illustration of HDR image formation and processing is given in Figure 1. It shows how HDR images are acquired from the source, processed by encoding/decoding methods involving data compression techniques, and then displayed and evaluated for their quality (Artusi et al., 2017; Mantiuk et al., 2016). First of all, HDR images can be produced either by a camera capturing objects from the real world or by computer graphics that create a model-based image. After that, the HDR images can be compressed and encoded so that they can be stored or transmitted more efficiently, by converting the image data format so that it requires less storage capacity or less transmission bandwidth. Subsequently, the images can be displayed on various types of display devices, either natively or using conventional LDR/SDR display devices.

Figure 1:

HDR imaging pipeline; redrawn from Artusi et al. (2017) and Mantiuk et al. (2016).


The display of HDR image content is still largely limited by the capabilities of the display device used. For devices with lower specifications to display HDR images properly, a tone mapping method is needed that can capture the wider dynamic range of HDR images and convert it into the narrower dynamic range of conventional devices. A color correction method can also be employed to resolve any mismatch between HDR content and the capabilities of the display devices. On the other hand, there is also an inverse tone mapping algorithm, which can be used to reconstruct HDR content from a single SDR image, and the multi-exposure fusion (MEF) method, which is able to produce HDR content from a combination of several SDR images with different exposures.

Last but not least, HDR image quality assessment is performed with the main objective to assess the various algorithms used in the pipeline.

HDR creation

Methods of constructing HDR images using MEF and ITMO have been widely described in previous studies that can be found in the literature.

MEF can be categorized as an image-combination method that was introduced in the 1980s, but it has recently received renewed research attention (Gu et al., 2012; Li et al., 2012; Song et al., 2012; Zhang and Cham, 2012). Since humans are the end users of most applications that apply MEF methods, these methods require an easy and simple but reliable quality assessment (Shen et al., 2013; Song et al., 2012; Zeng et al., 2014). A list of MEF-based methods relevant to HDR image construction is given in Table 1.

Table 1.

Summary of MEF-based HDR images.


A number of multi-exposure fusion methods (Goshtasby, 2005; Reinhard et al., 2010) used localized fusion weights without sufficient consistency considerations over large areas, which could lead to an unnatural appearance of the fusion result. Other proposed general fusion methods are also not optimal for individual applications and only apply to grayscale images. Goshtasby (2005) used a merging method based on the maximum information content obtained from a still camera capturing multiple exposure images of static objects/scenes. The method divides the image into blocks of uniform size and then, for each block, selects the image with the maximum information available in that block. Mertens et al. (2009) proposed a technique for fusing a bracketed exposure sequence into a high-quality image without having to convert it to the HDR domain. This method does not require the physical formation of HDR images, so the process is simpler and more computationally efficient; in this way, calibration of the camera response curve is not required. The method combines several different exposures, using suitable image contrast, high saturation, and good exposure to guide the merging process. In Song et al. (2012), an initial image is estimated by maximizing visual contrast and scene gradients, and the fused image is then synthesized by suppressing gradient reversals. A similar gradient-based MEF method is proposed in the study of Gu et al. (2012). Li et al. (2012) built on Mertens et al. (2009) and improved the details of the fused image by solving a quadratic optimization problem. A median and recursive filter-based MEF method was developed in the study of Li and Kang (2012), taking into account local contrast, brightness, and color differences; the use of median filters also allows it to handle dynamic scenes. A new gradient-based approach to extracting image details was introduced by Rovid et al. (2007). It takes multiple exposure images of the same scene as input and divides the images into regions during processing.
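As a concrete illustration of Mertens-style weighting, the following is a minimal single-scale sketch in Python. The published method is multi-scale and also weights contrast and saturation; here only the well-exposedness term is used, and the function names and parameter values are our own illustrative choices.

```python
import math

def well_exposedness(v, sigma=0.2):
    # Gaussian weight favoring mid-range intensities; Mertens et al. use
    # a similar term with sigma = 0.2 on values normalized to [0, 1].
    return math.exp(-((v - 0.5) ** 2) / (2 * sigma ** 2))

def fuse_exposures(images):
    """Single-scale exposure fusion of grayscale images (values in [0, 1]).

    Each pixel of the fused result is a per-pixel weighted average of the
    input exposures, with weights given by well-exposedness.
    """
    h, w = len(images[0]), len(images[0][0])
    fused = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            weights = [well_exposedness(img[y][x]) for img in images]
            total = sum(weights) + 1e-12  # avoid division by zero
            fused[y][x] = sum(wt * img[y][x]
                              for wt, img in zip(weights, images)) / total
    return fused
```

In the full method these weighted averages are blended across a Laplacian pyramid to avoid seams; the single-scale version above only conveys the weighting idea.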

On the other hand, in the last two decades, techniques for forming HDR images from SDR content using ITMO methods have also been proposed. A list of ITMO-based methods relevant to HDR image construction is given in Table 2. For example, in the study of Landis (2002), a global expansion technique was introduced for the first time, applying an exponential function to SDR image pixel values above a certain threshold to form an HDR image. This method works well for image-based lighting (IBL), but the results are not satisfactory for HDR image visualization. Subsequently, Banterle et al. (2006) applied the inverse of the photographic tone reproduction method developed by Reinhard et al. (2002) to expand SDR images. In this method, the median cut is used to estimate areas of the image with high luminance values, which are then used to map the pixel expansion. After that, linear interpolation is used to obtain the final HDR image. In this way, good-quality images can be produced because the method can remove noise and blocky effects. Unfortunately, while this method works well for still images, it is not sufficient for video processing.
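The threshold-plus-expansion idea behind Landis-style global operators can be sketched as follows. The threshold, peak luminance, and exponent used here are illustrative placeholders, not the values from the original paper.

```python
def expand_landis_style(pixels, threshold=0.5, peak=100.0, alpha=2.0):
    """Global ITMO sketch on a flat list of SDR luminances in [0, 1].

    Pixels below `threshold` are left in the SDR range; pixels above it are
    boosted toward the HDR peak luminance with an exponential-like ramp.
    All parameter values are illustrative.
    """
    out = []
    for v in pixels:
        if v >= threshold:
            # k runs from 0 at the threshold to 1 at full white
            k = (v - threshold) / (1.0 - threshold)
            out.append(v + (peak - v) * (k ** alpha))
        else:
            out.append(v)
    return out
```

Note that the mapping is monotonic, so the ordering of luminances is preserved while highlights are stretched into the HDR range.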

Table 2.

Summary of ITMO-based HDR images.


Furthermore, an ITMO method with simple linear expansion has also been proposed by Akyüz et al. (2007). Psychophysical experiments with this method showed that SDR image content can be displayed properly on an HDR screen. The disadvantage of this method is that it is not able to boost contrast in saturated regions. Another ITMO operator, proposed in the study of Kinoshita et al. (2017), uses Reinhard’s global operator (Reinhard et al., 2002); the resulting images show good structural similarity at a lower computational cost than other methods. Kovaleski and Oliveira (2009) implemented a fully automated ITMO process in which a cross-bilateral technique is used to improve image details over a wide exposure range, especially in over-exposed image areas, which were usually a problem in previous studies. The image brightness correction function commonly used for reverse tone mapping shows superior quality compared to conventional methods because fewer distortion artifacts are displayed. Unfortunately, problems remain with color washout and with recovering texture in under- or over-exposed image areas.
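To illustrate the idea behind ITMO operators built on Reinhard's global operator, a minimal sketch of inverting its simplest form, L_d = L / (1 + L), is shown below. The full operator also involves a key value and a white point, which are omitted here.

```python
def inverse_reinhard(l_sdr, eps=1e-6):
    """Invert the simplest Reinhard global operator L_d = L / (1 + L).

    Solving for L gives L = L_d / (1 - L_d). Display values approaching 1.0
    are clamped just below it, since the inverse diverges there.
    """
    l = min(l_sdr, 1.0 - eps)  # avoid division by zero as l_sdr -> 1
    return l / (1.0 - l)
```

The divergence near white is exactly why such operators struggle in saturated regions: a tiny range of SDR values near 1.0 must be mapped onto a huge range of HDR luminances.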

False contour/edge artifacts

HDR creation as described in the previous section is prone to distortion artifacts in the form of false edges/contours. The edge image derived from HDR-processed images should therefore be analyzed further to provide more useful information related to false edges/contours. Contour detection is usually performed after edge detection (Lokmanwar and Bhalchandra, 2019).

Contour detection is one of the most important early steps in segmentation and object detection, as well as in understanding image scenes/content. Contour analysis, which begins with the detection process, is increasingly being used to produce high-quality image segmentation. Contour analysis can also handle more complex contours efficiently, even for images with the cluttered backgrounds that often occur in real-life photographs (Manno-Kovacs, 2019). For example, the modified Harris for edges and corners (MHEC) method is considered efficient for contour detection purposes. Unfortunately, this method still has drawbacks: it involves an iterative process that is slower than the alternatives.

Contour detection can also lead to image edge detection errors identified as false contours. Typically, such false contours are found in low-frequency, smooth-gradient image areas (Ahn and Kim, 2005). A number of image processing techniques can make these false contours more visible; for example, contrast enhancement, sharpness enhancement, and color modification. Some of these techniques are actually used in the HDR image generation process.
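A minimal sketch of how false-contour candidates might be flagged is given below: it looks for small quantization-like steps inside otherwise smooth 1-D scanline regions, which is the typical false-contour signature described above. The thresholds and the function itself are our own illustrative choices, not a published detector.

```python
def false_contour_candidates(row, step_thresh=1, smooth_thresh=2):
    """Flag indices in a 1-D scanline where a small intensity step occurs
    inside an otherwise smooth (low local range) region.

    Real edges produce large steps and are ignored; tiny steps in smooth
    gradients are the ones that become visible banding. Thresholds are
    illustrative and would need tuning per bit depth.
    """
    flags = []
    for i in range(1, len(row) - 1):
        step = abs(row[i] - row[i - 1])
        local_range = max(row[i - 1:i + 2]) - min(row[i - 1:i + 2])
        if 0 < step <= step_thresh and local_range <= smooth_thresh:
            flags.append(i)
    return flags
```

A 2-D detector would apply the same idea along rows and columns and then require spatial coherence of the flagged pixels before declaring a false contour.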

A summary of various contour detection methods is given in Table 3.

Table 3.

Summary of contour detection methods.


Image quality assessment

Image quality measurement methods have become a fairly hot research topic in recent years. Their applications are very broad, ranging from the quality assessment of image coding techniques to quality-of-service monitoring, watermarking, image enhancement, and applications in the medical and entertainment fields. One of the fundamental quality assessment methods is subjective measurement, which, although expensive and time consuming, is still used as a reference for objective methods. Objective methods are usually used as alternatives to reduce cost and time, apart from being easy to implement (Chandler, 2013; Ma et al., 2015). In the following subsections we describe subjective and objective quality evaluation methods in more detail.

Subjective methods

A short list of subjective assessment methods is given in Table 4. Subjective quality measurement is a controlled experiment with human participants to measure the perceived quality of the image or video displayed to the user. In such an experiment, the gold standard for benchmarking purposes is human judgment made without the advice of others (Patil and Patil, 2017). Subjective methods can also give insight into human behavior in the context of image quality assessment (Ma et al., 2015). It has long been recognized that the task of image quality evaluation involves not only a physiological process but also a psychological aspect. As a consequence, subjective methods also lend themselves to use as a benchmark for various algorithms and methods in image/video processing, including image quality assessment algorithms.

Table 4.

Summary of several subjective assessment methods.


Subjective assessment may employ either single-stimulus or double-stimulus presentation (Patil and Patil, 2017). In the assessment process, a group of observers is shown images of various quality and asked to evaluate them. Each evaluation is recorded as a subjective score, and for the same image, the scores recorded by different observers are averaged.
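The averaging step can be sketched as follows; alongside the mean opinion score (MOS), subjective studies commonly report a 95% confidence half-width, shown here under a normal approximation.

```python
import math

def mos_with_ci(scores):
    """Mean opinion score and 95% confidence half-width for one image.

    `scores` is the list of ratings given by different observers.
    Uses the sample variance and a normal approximation (z = 1.96);
    small panels would normally use a t-distribution instead.
    """
    n = len(scores)
    mos = sum(scores) / n
    var = sum((s - mos) ** 2 for s in scores) / (n - 1)
    half_width = 1.96 * math.sqrt(var / n)
    return mos, half_width
```

In practice the raw scores are often screened for outlier observers (e.g., following ITU-R BT.500-style rejection) before the averaging above is applied.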

Subjective methods are not without their own shortcomings (Hands, 1998). First, they take a long time to carry out, and they are also costly. Since a subjective experiment requires each subject to evaluate every image in the dataset, it may take hours to finish. To achieve statistical validity, the number of human viewers involved in the experiment must also be large enough that the results are not obtained by chance. Lately, however, crowdsourcing has also been employed to involve more viewers in the evaluation process in a much shorter time than in a traditional subjective experiment (Kundu et al., 2017a, b). Such crowdsource-based methods are not without their own challenges; for example, unlike a traditional subjective experiment in a laboratory environment, there is only limited or even no control over the experimental setup (display device, illumination condition, viewing distance, etc.).

Regardless of the experimental setup used, subjective evaluation is costly because observers must be recruited and paid as test subjects. A traditional subjective experiment may cost more because the measurement may require a laboratory setup with calibrated, specialized equipment that can be difficult to organize. Subjective evaluation may also be unsuitable for certain applications (Winkler, 2005); for example, real-time situations where immediate responses are expected.

These problems are the main reasons why researchers turned to objective tests that can provide faster and more practical results.

Objective methods

Objective measurements are increasingly popular for image/video coding comparison. The evaluation is expressed as a mathematical formula that can be computed without human intervention. To improve the evaluation, subjective scores from subjective experiments may be used as a reference for these objective models. Objective quality measurement usually takes into account the various types of distortion that may be present in images: blur, motion blur, edge artifacts, contouring, blocking artifacts, granular noise, jerkiness, the dirty-window effect, etc. Objective image quality assessment methods lend themselves to various applications such as quality control systems, benchmarking of image processing algorithms, and transmission system optimization.

Objective image quality measurement methods can be differentiated by the technique used to quantify image quality. Quantification can be based on error differences (Narwaria et al., 2015), structural information (Aydin et al., 2008; Ma et al., 2015; Yeganeh and Wang, 2013), or machine learning (Jia et al., 2017).

Quality metrics based on error differences (Narwaria et al., 2015) can benefit from various image processing methods/algorithms in both the spatial and frequency domains; quality is then quantified based on spatial-temporal or frequency domain analysis. Other methods (Ma et al., 2015; Yeganeh and Wang, 2013) based on structural similarity information use multi-scale analysis as a measure of signal quality. These methods use a structural similarity index (SSIM) metric modified with a natural scene statistics (NSS) approach. More recently, there have also been approaches such as that of Jia et al. (2017) that use saliency map-based machine learning to improve the performance of the NR method. Such models are not without problems; for example, a problem was discovered due to a significant gap in luminance values when such a model was applied to HDR images.
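For reference, the core SSIM computation for one aligned window of pixel values can be sketched as below, using the commonly quoted constants derived from K1 = 0.01, K2 = 0.03, and an 8-bit dynamic range; the full index averages this over sliding windows, usually with Gaussian weighting.

```python
def ssim_window(x, y, c1=6.5025, c2=58.5225):
    """SSIM for one aligned window of pixel values (lists of equal length).

    c1 = (0.01 * 255) ** 2 and c2 = (0.03 * 255) ** 2 are the usual
    stabilizing constants for 8-bit images. Returns 1.0 for identical
    windows, lower values as luminance, contrast, or structure diverge.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / (n - 1)
    vy = sum((b - my) ** 2 for b in y) / (n - 1)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))
```

It is this per-window similarity that TMQI and the MEF metrics discussed below extend with multi-scale analysis and HDR-specific terms.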

The Video Quality Expert Group (VQEG) has listed three basic categories for image/video quality assessment methods. Their categories are based on the availability of reference images. These are full-reference, reduced-reference, and no-reference methods (RRNR-TV Group, 2004; Video Quality Experts Group, 2002; VQEG, 2000).

Full-reference (FR) methods evaluate image/video quality by comparing test images/videos with the original, undistorted version (Opozda and Sochan, 2014). No-reference (NR) image quality models, on the other hand, try to mimic how the HVS perceives image quality without the need for the original reference image; this is sometimes referred to as blind image quality assessment (Patil and Patil, 2017). Reduced-reference (RR) image quality assessment provides a balanced trade-off between the two extremes represented by FR and NR quality models. RR methods are designed to use only partial data about the reference image to evaluate the processed one. The partial data can consist of features extracted from the undistorted signals, which are then compared with features extracted from the processed or degraded images (Gunawan, 2006). RR quality assessment was originally proposed to track changes in visual quality that may occur in video information distributed through communication networks.

As a method that employs overhead data, RR quality evaluation is concerned with the data rate used to transmit this side information. If, for example, a high data rate side channel is somehow available, then the RR method can use a larger quantity of information about the reference images; if the side channel is large enough, it may even be possible to send the whole original reference picture. On the other hand, if the data rate of the side channel is small, the RR method must also work with only a small amount of side information.
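The RR workflow described above can be sketched as follows: the sender extracts a compact feature vector from the reference, transmits it over the side channel, and the receiver compares it with features extracted from the processed image. The L1 distance and the distance-to-score mapping used here are our own illustrative choices.

```python
def rr_quality_score(ref_features, test_features):
    """RR comparison sketch.

    `ref_features` is the small feature vector transmitted over the side
    channel; `test_features` is extracted from the processed image at the
    receiver. The feature distance is mapped to a score in (0, 1], where
    1.0 means the features (and presumably the images) are identical.
    """
    dist = sum(abs(r - t) for r, t in zip(ref_features, test_features))
    return 1.0 / (1.0 + dist)  # illustrative monotone mapping
```

The side-channel rate is simply the length and precision of `ref_features`, which makes the rate/accuracy trade-off discussed above explicit.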

HDR image quality assessment

In this section, some HDR IQA models found in the literature are outlined. The outline only covers the important elements of various FR and NR methods. To the best of the authors’ knowledge, there is no literature to date on HDR IQA in an RR framework. A summary of HDR image quality assessment methods is given chronologically in Table 5.

Table 5.

Summary of several HDR IQA methods.


Full-reference model

There are several FR (full-reference) models for HDR image quality assessment; for example, Duan et al. (2020), Krasula et al. (2020), Ma et al. (2015), Mantiuk et al. (2011), and Yeganeh and Wang (2013).

The HDR visual difference predictor (HDR-VDP), proposed by Mantiuk et al. (2005), and its successor HDR-VDP-2 (Mantiuk et al., 2011) are FR methods based on an error metric. The metric uses various visual models based on contrast sensitivity in diverse lighting conditions. The models were also tested against psychophysical measurements to select the best parameters that can be adjusted to the data. Some feature-invariant metrics based on structural similarity were also employed by this model.

The tone-mapped quality index (TMQI), proposed by Yeganeh and Wang (2013), is an objective quality evaluation of tone-mapped images in an FR framework. This method combines the multi-scale capability of the structural similarity measure (SSIM) (Wang et al., 2003) with a measure of naturalness. The SSIM in TMQI is used to evaluate structural weaknesses across images, based on contrast, lighting, and local structure. Naturalness is based on statistics of thousands of images portraying various types of natural scenery. These two parameters are then combined in a way similar to a weighted sum, taking into account the sensitivity of each parameter to the overall quality.
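TMQI pools its two terms roughly as a weighted sum of powers, Q = a·S^α + (1 − a)·N^β, where S is structural fidelity and N is naturalness, both in [0, 1]. A sketch is shown below; the parameter values follow the published metric as best we recall them and should be treated as illustrative.

```python
def tmqi_combine(s, n, a=0.8012, alpha=0.3046, beta=0.7088):
    """TMQI-style pooling of structural fidelity `s` and naturalness `n`.

    Both inputs are assumed to lie in [0, 1]; the result is in [0, 1],
    reaching 1.0 only when both terms are perfect. Parameter values are
    quoted from memory of the published metric.
    """
    return a * (s ** alpha) + (1 - a) * (n ** beta)
```

The exponents compress each term, so early losses in either fidelity or naturalness cost more than later ones; the weight `a` reflects the higher sensitivity of perceived quality to structural fidelity.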

MEF-IQA, proposed by Ma et al. (2015), is an FR method specialized for MEF-based images. It also uses multi-scale structural similarity, but now combined with structural consistency. It works by adapting HVS to extract structural information from natural images. MEF algorithms can use MEF-IQA to tune the parameters for the MEF. MEF-IQA also came with its own subjective data for their evaluation. The dataset consists of 17 original pictures that are subjected to various exposure levels. There are classical and sophisticated MEF algorithms being used to create the resulting MEF images.

Another FR model of HDR image quality assessment concerns local dimming algorithms (Duan et al., 2020). This is a full-reference quality assessment technique applied to a number of backlight local dimming (BLD) algorithms. BLD algorithms are usually used to improve the image contrast ratio and provide power efficiency for modern displays. The paper also offers a subjective evaluation procedure in which subjects rank each BLD-generated image by how natural it looks.

Features fusion for natural tone-mapped images quality evaluation (FFTMI) (Krasula et al., 2020) is another tone-mapped HDR image quality assessment method based on carefully selected perceptually relevant features. The features are combined in a linear fashion to avoid the overfitting that can arise when they are combined using a machine learning technique. Features are grouped into several categories based on the availability of the reference image/feature. From FR models, they used contrast/structure similarity and locally weighted mean phase angle (LWMPA) similarity measures. From NR models, on the other hand, they took contrast, colorfulness, sharpness, aesthetics, saliency, and other estimators not belonging to any previous category. Based on their selection procedure, they arrived at FFTMI metrics derived from the FR TMQI-II structural similarity, the FR feature similarity index for tone-mapped images (FSITM), and NR feature naturalness.

In the study of Ellahi et al. (2020), the hidden Markov model (HMM) as a test of similarity to assess TMO perceived quality is proposed. The findings suggest that the proposed HMM-based method that emphasizes temporal information yields better evaluation metrics than traditional approaches based solely on visual-spatial information.

No-reference model

As can be seen from Table 5, there are more NR models than FR models available in the literature; for example, Guan et al. (2018), Kundu et al. (2017a, b), Ravuri et al. (2019), and Yue et al. (2020), among others.

Blind high dynamic range image quality assessment using deep learning (DL-NRIQA), proposed by Jia et al. (2017), is a no-reference image quality assessment (NRIQA) method that combines deep convolutional neural networks (CNNs) with saliency maps on high dynamic range (HDR) images. Similarly, the HDR image GRADient evaluator (HIGRADE) is an NR model proposed by Kundu et al. (2017a, b). It is based on standard bandpass measurements in addition to natural scene statistics (NSS), with NSS descriptors employed to construct features. It works on the assumption that HDR processing usually alters the NSS features of the image gradient; the model uses this discrepancy to infer quality predictions.

In the study of Ravuri et al. (2019), a no-reference quality assessment technique for tone-mapped images was proposed. The method consists of two stages. In the first one, it uses convolutional neural network (CNN) to produce a distortion map from the tone-mapped images. In the second stage, the distortion map is modeled using an asymmetric generalized Gaussian distribution (AGGD). The quality score is then estimated based on the AGGD parameters with a help from SVR (support vector regression) method. The distortion map can also be used as features to estimate the quality index of tone-mapped images.

The method presented in the study of Yue et al. (2020) uses multiple quality-sensitive features for both MEF- and ITMO-based HDR images. The features are based on colorfulness, exposure, and naturalness. The metric is developed in the absence of any reference images. SVR is used to bridge the extracted features and the associated subjective ratings for the quality model.

In the study of Fang et al. (2021), a robust blind visual quality evaluation method that analyzes the visual characteristics of TMI using gradient and chromatic statistics (VQGC) is proposed. The method is motivated by the perception mechanism that the human visual system (HVS) is sensitive to variations in image structure. They used the gradient magnitude to predict structural distortion accurately, the gradient orientation to measure variation in image structure, and the magnitude and orientation of the relative gradients to capture microstructural changes. They also used color invariant descriptors to capture visual degradation of colors, with local binary patterns (LBP) computed on four colored feature maps. Subsequently, the final quality-aware feature vector is obtained from the combination of gradient and chromatic features, which is used to assess the perceived quality of TMI via support vector regression (SVR).

Proposed method framework

Motivation

We can see from the previous section that for HDR imaging there are numerous FR/NR methods, while so far there are none for RR. In contrast, for LDR/SDR images plenty of FR/NR/RR methods have been available for quite some time, as illustrated in the research roadmap in Figure 2. Therefore, our present study focuses on the investigation of reduced-reference objective quality evaluation for HDR images. In particular, we are interested in identifying usable features for the RR model.

Figure 2:

Our proposed research road map.


Based on the research roadmap, we use a framework like the one given in Figure 3. Our proposed method uses a simple feature based on derivatives of the gradient image (for example, edges, false edges, or contours) as the RR feature. Features built with the framework described in Figure 3 can make use not only of edge strength but also of false contour/edge map information, histograms, or local features in a desired image area (region of interest, ROI) selected by certain criteria. As part of the RR feature, we may use a false edge/contour map extracted from the luminance image. Therefore, the color image used in the process must first be converted into a grayscale image before subsequent steps.

Figure 3:

Research framework for current proposed method.


We note that similar gradient-based features have been used in previous HDR-related quality evaluation work reported by others, but only in full-reference or no-reference frameworks. In our present study, we investigate how this simple feature can be adopted as an RR feature in an HDR-related quality assessment framework.

We are interested in this feature because we observed notable changes in the edges of HDR images generated by MEF and ITMO. This is illustrated, for example, in Figures 4 and 5, which show an original image in HDR format and its associated globally adjusted MEF-based processed image, taken from the dataset (The University of Texas at Austin, 2006). Comparing these figures, we can see that the global brightness of the processed image is shifted relative to that of the original, which is also reflected in the global shift of their histograms. The gradient images likewise differ from one another in edge strength and thickness. The histograms of the gradients, on the other hand, exhibit only minor differences.
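The global brightness shift described above can be quantified directly from the histograms. The sketch below estimates the shift as the distance between histogram centroids; the function name and the synthetic "globally brightened" image are our own illustrative choices, not the paper's measurement.

```python
import numpy as np

def histogram_shift(ref, proc, bins=64):
    """Crude estimate of a global brightness shift between two luminance
    images: the distance between the centroids of their histograms."""
    h_ref, edges = np.histogram(ref, bins=bins, range=(0.0, 1.0), density=True)
    h_proc, _ = np.histogram(proc, bins=bins, range=(0.0, 1.0), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    c_ref = (centers * h_ref).sum() / h_ref.sum()
    c_proc = (centers * h_proc).sum() / h_proc.sum()
    return c_proc - c_ref

rng = np.random.default_rng(2)
ref = rng.random((32, 32)) * 0.5             # darker "original" luminance
proc = np.clip(ref + 0.3, 0.0, 1.0)          # globally brightened version
shift = histogram_shift(ref, proc)           # close to the applied +0.3 shift
```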

Figure 4:

Original and test/processed images and their histograms from the dataset. The test images were processed using global adjustment method.

Figure 5:

The gradient of the original and test/processed images and their histograms. The test images were processed using global adjustment method.


It is therefore reasonable for the reduced-reference approach presented in this paper to rely on a relative comparison of gradient-image derivatives. For example, by comparing the false contour/edge map (FCEM) of a processed image (produced by MEF or ITMO, for instance) with the gradient image derived from the reference image, which we assume contains no artifacts or distortions, one can estimate the quality of the processed image relative to the reference image. Any discrepancy in the processed image will appear as an increase or decrease in FCEM strength/magnitude.
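A minimal reduced-reference comparison of this kind can be sketched as follows. Here a simple thresholded gradient map stands in for the FCEM, and the transmitted side information is a single scalar (total edge energy); all names, the threshold, and the scoring rule are our own illustrative assumptions.

```python
import numpy as np

def edge_map(img, threshold=0.05):
    # Thresholded gradient magnitude as a simple stand-in for a false
    # contour/edge map: keep only salient edges.
    gy, gx = np.gradient(np.asarray(img, dtype=float))
    mag = np.hypot(gx, gy)
    return mag * (mag > threshold)

def rr_quality_score(ref_feature, proc_img, threshold=0.05):
    """Relative change in total edge strength: ~1.0 means the processed
    image preserved the reference's edge energy; values far from 1
    indicate strengthened or weakened structure."""
    proc_energy = edge_map(proc_img, threshold).sum()
    return proc_energy / (ref_feature + 1e-12)

# Sender side: only the total edge energy is transmitted as the RR feature.
rng = np.random.default_rng(3)
ref = rng.random((32, 32))
ref_feature = edge_map(ref).sum()            # reduced-reference side information
score_same = rr_quality_score(ref_feature, ref)        # ~1.0 (unchanged image)
score_flat = rr_quality_score(ref_feature, ref * 0.2)  # contrast flattened, score drops
```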

Conclusions

We have reviewed various HDR image quality assessment methods in the literature and found that most focus on the development of FR and NR models. From these models, several perceptual attributes emerge as beneficial for quality assessment: contrast, detail, color, and artifacts. Many algorithms also use natural scene statistics descriptors, feature naturalness, and feature similarity, which lend themselves to no-reference methods. However, we believe that the RR model is also useful in several application scenarios, notably quality monitoring, so its development is still necessary. In line with that argument, we have initiated research on the development of an RR model for HDR IQA, following the research roadmap presented in Figure 2. Some preliminary results, using features based on simple calculations on the images, were given in the previous section; they show that the proposed method is promising, although there is still room for further improvement.

Acknowledgements

The authors would like to thank the Indonesian Ministry of Research and Higher Education for funding the research presented in this paper under contracts No. 225/SP2H/AMD/LT/DRPM/2020 and No. 83.ADD/LL3/PG/2020, and Universitas Bakrie, Indonesia, under contracts No. 087/SPK/LPP-UB/III/2020 and No. 107/SPK/LPP-UB/III/2020.

References


  1. Ahn, W. and Kim, J. -S. 2005. Flat-Region Detection and False Contour Removal in the Digital TV Display. 2005 IEEE International Conference on Multimedia and Expo, pp. 1338–1341.
  2. Akyüz, A. O. , Fleming, R. , Riecke, B. E. , Reinhard, E. and Bülthoff, H. H. 2007. Do HDR displays support LDR content? ACM Transactions on Graphics 26(3): 38.
  3. Alpert, T. and Evain, J. 1997. Subjective quality evaluation – the SSCQE and DSCQE methodologies. EBU Technical Review 271: 12–20, available at: https://tech.ebu.ch/publications/trev_271-evain.
  4. Artusi, A. , Richter, T. , Ebrahimi, T. and Mantiuk, R. K. 2017. High dynamic range imaging technology [Lecture notes]. IEEE Signal Processing Magazine 34(5): 165–172.
  5. Aydin, T. O. , Mantiuk, R. , Myszkowski, K. and Seidel, H. -P. 2008. Dynamic range independent image quality assessment. ACM Transactions on Graphics 27(3): 69, available at: http://doi.acm.org/10.1145/1360612.1360668.
  6. Azimi, M. , Boitard, R. , Oztas, B. , Ploumis, S. , Tohidypour, H. R. , Pourazad, M. T. and Nasiopoulos, P. 2015. Compression efficiency of HDR/LDR content. Quality of Multimedia Experience (QoMEX), 2015 Seventh International Workshop on IEEE, pp. 1–6.
  7. Banterle, F. , Ledda, P. , Debattista, K. and Chalmers, A. 2006. Inverse tone mapping. Proceedings of the 4th International Conference on Computer Graphics and Interactive Techniques in Australasia and Southeast Asia, ser. GRAPHITE ’06, ACM, New York, NY, pp. 349–356, available at: http://doi.acm.org/10.1145/1174429.1174489.
  8. Chandler, D. M. 2013. Seven challenges in image quality assessment: past, present, and future research. ISRN Signal Processing, Vol. 2013.
  9. Chua, T. W. and Shen, L. 2017. Contour detection from deep patch-level boundary prediction. 2017 IEEE 2nd International Conference on Signal and Image Processing (ICSIP), pp. 5–9.
  10. Duan, L. , Debattista, K. , Lei, Z. and Chalmers, A. 2020. Subjective and objective evaluation of local dimming algorithms for HDR images. IEEE Access 8: 51692–51702.
  11. Durand, F. and Dorsey, J. 2000. Interactive tone mapping. Eurographics, Springer, Vienna, pp. 219–230.
  12. Eilertsen, G. , Mantiuk, R. K. and Unger, J. 2015. Real-time noise-aware tone mapping. ACM Transactions on Graphics 34(6): 198:1–198:15, available at: http://doi.acm.org/10.1145/2816795.2818092.
  13. El Mezeni, D. and Saranovac, L. 2018. Temporal adaptation control for local tone mapping operator. Journal of Electrical Engineering 69(4): 261–269.
  14. Ellahi, W. , Vigier, T. and Le Callet, P. 2020. HMM-based framework to measure the visual fidelity of tone mapping operators. 2020 IEEE International Conference on Multimedia Expo Workshops (ICMEW), pp. 1–6.
  15. Fang, Y. , Zhu, H. , Ma, K. , Wang, Z. and Li, S. 2020. Perceptual evaluation for multi-exposure image fusion of dynamic scenes. IEEE Transactions on Image Processing 29: 1127–1138.
  16. Fang, Y. , Yan, J. , Du, R. , Zuo, Y. , Wen, W. , Zeng, Y. and Li, L. 2021. Blind quality assessment for tone-mapped images by analysis of gradient and chromatic statistics. IEEE Transactions on Multimedia 23: 955–966.
  17. Fattal, R. , Lischinski, D. and Werman, M. 2002. Gradient domain high dynamic range compression. ACM Transactions on Graphics 21(3): 249–256.
  18. Ferwerda, J. A. , Pattanaik, S. N. , Shirley, P. and Greenberg, D. P. 1996. A model of visual adaptation for realistic image synthesis. Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, ser. SIGGRAPH ’96, ACM, New York, NY, pp. 249–258, available at: http://doi.acm.org/10.1145/237170.237262.
  19. Goshtasby, A. A. 2005. Fusion of multi-exposure images. Image and Vision Computing 23(6): 611–618.
  20. Gu, B. , Li, W. , Wong, J. , Zhu, M. and Wang, M. 2012. Gradient field multi-exposure images fusion for high dynamic range image visualization. Journal of Visual Communication and Image Representation 23(4): 604–610.
  21. Guan, F. , Jiang, G. , Song, Y. , Yu, M. , Peng, Z. and Chen, F. 2018. No-reference HDR image quality assessment method based on tensor space. 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, pp. 1218–1222.
  22. Gunawan, I. P. 2006. Reduced-reference impairment metrics for digitally compressed video. PhD dissertation, University of Essex.
  23. Hands, D. S. 1998. Mental processes in the evaluation of digitally-coded television pictures. PhD dissertation, University of Essex.
  24. Huang, F. , Zhou, D. , Nie, R. and Yu, C. 2018a. A color multi-exposure image fusion approach using structural patch decomposition. IEEE Access 6: 42877–42885.
  25. Huang, Q. , Kim, H. Y. , Tsai, W. , Jeong, S. Y. , Choi, J. S. and Kuo, C. J. 2018b. Understanding and removal of false contour in HEVC compressed images. IEEE Transactions on Circuits and Systems for Video Technology 28(2): 378–391.
  26. Jia, S. , Zhang, Y. , Agrafiotis, D. and Bull, D. 2017. Blind high dynamic range image quality assessment using deep learning. 2017 IEEE International Conference on Image Processing (ICIP), IEEE, pp. 765–769.
  27. Jiang, M. , Shen, L. , Zheng, L. , Zhao, M. and Jiang, X. 2020. Tone-mapped image quality assessment for electronics displays by combining luminance partition and colorfulness index. IEEE Transactions on Consumer Electronics 66(2): 153–162.
  28. Kim, D. and Kim, M. 2020. Learning-based low-complexity reverse tone mapping with linear mapping. IEEE Transactions on Circuits and Systems for Video Technology 30(2): 400–414.
  29. Kinoshita, Y. , Shiota, S. and Kiya, H. 2017. “Fast inverse tone mapping with reinhard global operator”, 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, New Orleans, LA, March 5–9.
  30. Kinoshita, Y. , Shiota, S. , Kiya, H. and Yoshida, T. 2018. Multi-exposure image fusion based on exposure compensation. 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, pp. 1388–1392.
  31. Kovaleski, R. P. and Oliveira, M. M. 2009. High-quality brightness enhancement functions for real-time reverse tone mapping. The Visual Computer 25(5): 539–547, available at: https://doi.org/10.1007/s00371-009-0327-3.
  32. Krasula, L. , Fliegel, K. and Le Callet, P. 2020. FFTMI: features fusion for natural tone-mapped images quality evaluation. IEEE Transactions on Multimedia 22(8): 2038–2047.
  33. Kundu, D. , Ghadiyaram, D. , Bovik, A. C. and Evans, B. L. 2017a. Large-scale crowdsourced study for tone-mapped HDR pictures. IEEE Transactions on Image Processing 26(10): 4725–4740.
  34. Kundu, D. , Ghadiyaram, D. , Bovik, A. C. and Evans, B. L. 2017b. No-reference quality assessment of tone-mapped HDR pictures. IEEE Transactions on Image Processing 26(6): 2957–2971.
  35. Landis, H. 2002. Production-ready global illumination. Siggraph Course Notes 16: 87–101.
  36. Larson, G. W. , Rushmeier, H. and Piatko, C. 1997. A visibility matching tone reproduction operator for high dynamic range scenes. IEEE Transactions on Visualization and Computer Graphics 3(4): 291–306.
  37. Li, S. and Kang, X. 2012. Fast multi-exposure image fusion with median filter and recursive filter. IEEE Transactions on Consumer Electronics 58(2): 626–632.
  38. Li, Z. G. , Zheng, J. H. and Rahardja, S. 2012. Detail-enhanced exposure fusion. IEEE Transactions on Image Processing 21(11): 4672–4676.
  39. Lokmanwar, S. D. and Bhalchandra, A. S. 2019. Contour detection based on Gaussian filter. 2019 3rd International Conference on Electronics, Communication and Aerospace Technology (ICECA), pp. 722–725.
  40. Ma, K. , Zeng, K. and Wang, Z. 2015. Perceptual quality assessment for multi-exposure image fusion. IEEE Transactions on Image Processing 24(11): 3345–3356.
  41. Manno-Kovacs, A. 2019. Direction selective contour detection for salient objects. IEEE Transactions on Circuits and Systems for Video Technology 29(2): 375–389.
  42. Mantiuk, R. , Daly, S. J. , Myszkowski, K. and Seidel, H. -P. 2005. Predicting visible differences in high dynamic range images: model and its calibration. Human Vision and Electronic Imaging X, vol. 5666, International Society for Optics and Photonics, pp. 204–215.
  43. Mantiuk, R. , Myszkowski, K. and Seidel, H. -P. 2006. A perceptual framework for contrast processing of high dynamic range images. ACM Transactions on Applied Perception (TAP) 3(3): 286–308.
  44. Mantiuk, R. , Kim, K. J. , Rempel, A. G. and Heidrich, W. 2011. HDR-VDP-2: a calibrated visual metric for visibility and quality predictions in all luminance conditions. ACM Transactions on Graphics (TOG) 30(4): 40.
  45. Mantiuk, R. K. , Tomaszewska, A. and Mantiuk, R. 2012. Comparison of four subjective methods for image quality assessment. Computer Graphics Forum 31(8): 2478–2491, available at: https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1467-8659.2012.03188.x.
  46. Mantiuk, R. K. , Myszkowski, K. and Seidel, H. -P. 2016. High dynamic range imaging. Wiley Encyclopedia of Electrical and Electronics Engineering, available at: https://www.cl.cam.ac.uk/rkm38/hdri_book.html.
  47. Mertens, T. , Kautz, J. and Van Reeth, F. 2009. Exposure fusion: a simple and practical alternative to high dynamic range photography. Computer Graphics Forum 28(1): 161–171.
  48. Narwaria, M. , Silva, M. P. D. and Le Callet, P. 2015. HDR-VQM: an objective quality measure for high dynamic range video. Signal Processing: Image Communication 35: 46–60, available at: https://www.sciencedirect.com/science/article/pii/S0923596515000703.
  49. Nuutinen, M. , Virtanen, T. , Leisti, T. , Mustonen, T. , Radun, J. and Häkkinen, J. 2016. A new method for evaluating the subjective image quality of photographs: dynamic reference. Multimedia Tools and Applications 75(4): 2367–2391, available at: https://doi.org/10.1007/s11042-014-2410-7.
  50. Opozda, S. and Sochan, A. 2014. The survey of subjective and objective methods for quality assessment of 2D and 3D images. Theoretical and Applied Informatics 26(1-2): 39–67.
  51. Patil, S. B. and Patil, S. R. 2017. Survey on approaches used for image quality assessment. 2017 International Conference on Energy, Communication, Data Analytics and Soft Computing (ICECDS), pp. 987–991.
  52. Persson, M. 2014. Subjective image quality evaluation using the softcopy quality ruler method. Student paper.
  53. Qiu, G. , Guan, J. , Duan, J. and Chen, M. 2006. Tone mapping for HDR image using optimization: a new closed form solution. 18th International Conference on Pattern Recognition (ICPR’06), vol. 1, pp. 996–999.
  54. Rahayu, F. N. 2011. Quality of experience for digital cinema presentation. PhD Thesis, Norwegian University of Science and Technology, available at: http://hdl.handle.net/11250/2370392.
  55. Rana, A. , Valenzise, G. and Dufaux, F. 2019. Learning-based tone mapping operator for efficient image matching. IEEE Transactions on Multimedia 21(1): 256–268.
  56. Ravuri, C. S. , Sureddi, R. , Reddy Dendi, S. V. , Raman, S. and Channappayya, S. S. 2019. Deep no-reference tone mapped image quality assessment. 2019 53rd Asilomar Conference on Signals, Systems, and Computers, pp. 1906–1910.
  57. Redi, J. , Liu, H. , Alers, H. , Zunino, R. and Heynderickx, I. 2010. “Comparing subjective image quality measurement methods for the creation of public databases”, In Farnand, S. P. and Gaykema, F. (Eds), Image Quality and System Performance VII, vol. 7529, International Society for Optics and Photonics, SPIE, pp. 19–29, available at: https://doi.org/10.1117/12.839195.
  58. Reinhard, E. , Stark, M. , Shirley, P. and Ferwerda, J. 2002. Photographic tone reproduction for digital images. ACM Transactions on Graphics 21(3): 267–276, available at: http://doi.acm.org/10.1145/566654.566575.
  59. Reinhard, E. , Heidrich, W. , Debevec, P. , Pattanaik, S. , Ward, G. and Myszkowski, K. 2010. High Dynamic Range Imaging: Acquisition, Display, and Image-based Lighting. Morgan Kaufmann, Amsterdam.
  60. Rovid, A. , Varkonyi-Koczy, A. R. , Hashimoto, T. , Balogh, S. and Shimodaira, Y. 2007. Gradient based synthesized multiple exposure time HDR image. 2007 IEEE Instrumentation Measurement Technology Conference IMTC 2007, pp. 1–6.
  61. RRNR-TV Group 2004. Test plan draft version 1.7h, available at: http://www.vqeg.org.
  62. Sheikh, H. R. , Sabir, M. F. and Bovik, A. C. 2006. A statistical evaluation of recent full reference image quality assessment algorithms. IEEE Transactions on Image Processing 15(11): 3440–3451.
  63. Shen, R. , Cheng, I. and Basu, A. 2013. QoE-based multi-exposure fusion in hierarchical multivariate Gaussian CRF. IEEE Transactions on Image Processing 22(6): 2469–2478.
  64. Song, M. , Tao, D. , Chen, C. , Bu, J. , Luo, J. and Zhang, C. 2012. Probabilistic exposure fusion. IEEE Transactions on Image Processing 21(1): 341–357.
  65. The University of Texas at Austin 2006. LIVE Public-Domain Subjective Image Quality Database, available at: http://live.ece.utexas.edu/research/quality/subjective.htm.
  66. van Dijk, A. M. , Martens, J. -B. and Watson, A. B. 1995. “Quality assessment of coded images using numerical category scaling”, In Ohta, N. , Lemke, H. U. and Lehureau, J. C. (Eds), Advanced Image and Video Communications and Storage Technologies, Vol. 2451, International Society for Optics and Photonics, SPIE, Amsterdam, pp. 90–101, available at: https://doi.org/10.1117/12.201231.
  67. Varkonyi-Koczy, A. R. , Rovid, A. and Hashimoto, T. 2008. Gradient-based synthesized multiple exposure time color HDR image. IEEE Transactions on Instrumentation and Measurement 57(8): 1779–1785.
  68. Video Quality Experts Group 2002. Available at: http://www.vqeg.org.
  69. VQEG 2000. Final report from the Video Quality Expert Group on the validation of objective models of video quality assessment – Phase I, VQEG, March, available at: http://www.vqeg.org.
  70. Wang, Z. , Simoncelli, E. P. and Bovik, A. C. 2003. Multiscale structural similarity for image quality assessment. The Thrity-Seventh Asilomar Conference on Signals, Systems & Computers, 2003, vol. 2, IEEE, pp. 1398–1402.
  71. Wang, X. , Jiang, Q. , Shao, F. , Gu, K. , Zhai, G. and Yang, X. 2021. Exploiting local degradation characteristics and global statistical properties for blind quality assessment of tone-mapped HDR images. IEEE Transactions on Multimedia 23: 692–705.
  72. Winkler, S. 2005. Digital Video Quality: Vision Models and Metrics. John Wiley & Sons, Chichester.
  73. Yeganeh, H. and Wang, Z. 2013. Objective quality assessment of tone-mapped images. IEEE Transactions on Image Processing 22(2): 657–667.
  74. Yue, G. , Yan, W. and Zhou, T. 2020. Reference less quality evaluation of tone-mapped HDR and multiexposure fused images. IEEE Transactions on Industrial Informatics 16(3): 1764–1775.
  75. Yun, S. -H. , Kim, T. -C. and Kim, J. H. 2012. Single exposure-based image fusion using multi-transformation. Consumer Electronics (GCCE), 2012 IEEE 1st Global Conference on IEEE, pp. 142–143.
  76. Zeng, K. , Ma, K. , Hassen, R. and Wang, Z. 2014. Perceptual evaluation of multi-exposure image fusion algorithms. Quality of Multimedia Experience (QoMEX), 2014 Sixth International Workshop on. IEEE, pp. 7–12.
  77. Zhang, W. and Cham, W. -K. 2012. Gradient-directed multiexposure composition. IEEE Transactions on Image Processing 21(4): 2318–2323.
  78. Zhu, W. , Zhai, G. , Hu, M. , Liu, J. and Yang, X. 2018a. Arrow’s impossibility theorem inspired subjective image quality assessment approach. Signal Processing 145: 193–201, available at: http://www.sciencedirect.com/science/article/pii/S0165168417304164.
  79. Zhu, W. , Zhai, G. , Hu, M. , Liu, J. and Yang, X. 2018b. Arrow’s impossibility theorem inspired subjective image quality assessment approach. Signal Processing 145: 193–201.
Figure 1:

HDR imaging pipeline; redrawn from Artusi et al. (2017) and Mantiuk et al. (2016).

REFERENCES

  1. Ahn, W. and Kim, J. -S. 2005. Flat-Region Detection and False Contour Removal in the Digital TV Display. 2005 IEEE International Conference on Multimedia and Expo, pp. 1338–1341.
  2. Akyüz, A. O. , Fleming, R. , Riecke, B. E. , Reinhard, E. and Bülthoff, H. H. 2007. Do HDR displays support LDR content? ACM Transactions on Graphics 26(3): 38.
  3. Alpert, T. and Evain, J. 1997. Subjective quality evaluation – the SSCQE and DSCQE methodologies. EBU Technical Review 271: 12–20, available at: https://tech.ebu.ch/publications/trev_271-evain=0pt.
  4. Artusi, A. , Richter, T. , Ebrahimi, T. and Mantiuk, R. K. 2017. High dynamic range imaging technology [Lecture notes]. IEEE Signal Processing Magazine 34(5): 165–172.
  5. Aydin, T. O. , Mantiuk, R. , Myszkowski, K. and Seidel, H. -P. 2008. Dynamic range independent image quality assessment. ACM Transactions on Graph 27(3): 69, available at: http://doi.acm.org/10.1145/1360612.1360668=0pt.
  6. Azimi, M. , Boitard, R. , Oztas, B. , Ploumis, S. , Tohidypour, H. R. , Pourazad, M. T. and Nasiopoulos, P. 2015. Compression efficiency of HDR/LDR content. Quality of Multimedia Experience (QoMEX), 2015 Seventh International Workshop on IEEE, pp. 1–6.
  7. Banterle, F. , Ledda, P. , Debattista, K. and Chalmers, A. 2006. Inverse tone mapping. Proceedings of the 4th International Conference on Computer Graphics and Interactive Techniques in Australasia and Southeast Asia, ser. GRAPHITE ’06, ACM, New York, NY, pp. 349–356, available at: http://doi.acm.org/10.1145/1174429.1174489=0pt.
  8. Chandler, D. M. 2013. Seven challenges in image quality assessment: past, present, and future research. ISRN Signal Processing, Vol. 2013.
  9. Chua, T. W. and Shen, L. 2017. Contour detection from deep patch-level boundary prediction. 2017 IEEE 2nd International Conference on Signal and Image Processing (ICSIP), pp. 5–9.
  10. Duan, L. , Debattista, K. , Lei, Z. and Chalmers, A. 2020. Subjective and objective evaluation of local dimming algorithms for HDR images. IEEE Access 8(51): 692–702.
  11. Durand, F. and Dorsey, J. 2000. Interactive tone mapping. Eurographics, Springer, Vienna, pp. 219–230.
  12. Eilertsen, G. , Mantiuk, R. K. and Unger, J. 2015. Real-time noise-aware tone mapping. ACM Transactions on Graphics 34(6): 98:1–198:15, available at: http://doi.acm.org/10.1145/2816795.2818092=0pt.
  13. El Mezeni, D. and Saranovac, L. 2018. Temporal adaptation control for local tone mapping operator. Journal of Electrical Engineering 69(4): 261–269.
  14. Ellahi, W. , Vigier, T. and Le Callet, P. 2020. HMM-based framework to measure the visual fidelity of tone mapping operators. 2020 IEEE International Conference on Multimedia Expo Workshops (ICMEW), pp. 1–6.
  15. Fang, Y. , Zhu, H. , Ma, K. , Wang, Z. and Li, S. 2020. Perceptual evaluation for multi-exposure image fusion of dynamic scenes. IEEE Transactions on Image Processing 29: 1127–1138.
  16. Fang, Y. , Yan, J. , Du, R. , Zuo, Y. , Wen, W. , Zeng, Y. and Li, L. 2021. Blind quality assessment for tone-mapped images by analysis of gradient and chromatic statistics. IEEE Transactions on Multimedia 23: 955–966.
  17. Fattal, R. , Lischinski, D. and Werman, M. 2002. Gradient domain high dynamic range compression. ACM Transactions on Graphics 21(3): 249–256.
  18. Ferwerda, J. A. , Pattanaik, S. N. , Shirley, P. and Greenberg, D. P. 1996. A model of visual adaptation for realistic image synthesis. Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, ser. SIGGRAPH ’96, ACM, New York, NY, pp. 249–258. available at: http://doi.acm.org/10.1145/237170.237262.
  19. Goshtasby, A. A. 2005. Fusion of multi-exposure images. Image and Vision Computing 23(6): 611–618.
  20. Gu, B. , Li, W. , Wong, J. , Zhu, M. and Wang, M. 2012. Gradient field multi-exposure images fusion for high dynamic range image visualization. Journal of Visual Communication and Image Representation 23(4): 604–610.
  21. Guan, F. , Jiang, G. , Song, Y. , Yu, M. , Peng, Z. and Chen, F. 2018. No-reference HDR image quality assessment method based on tensor space. 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, pp. 1218–1222.
  22. Gunawan, I. P. 2006. Reduced-reference impairment metrics for digitally compressed video. PhD dissertation, University of Essex.
  23. Hands, D. S. 1998. Mental processes in the evaluation of digitally-coded television pictures. PhD dissertation, University of Essex.
  24. Huang, F. , Zhou, D. , Nie, R. and Yu, C. 2018a. A color multi-exposure image fusion approach using structural patch decomposition,” IEEE Access 6: 42877–42 885.
  25. Huang, Q. , Kim, H. Y. , Tsai, W. , Jeong, S. Y. , Choi, J. S. and Kuo, C. J. 2018b. Understanding and removal of false contour in HEVC compressed images. IEEE Transactions on Circuits and Systems for Video Technology 28(2): 378–391.
  26. Jia, S. , Zhang, Y. , Agrafiotis, D. and Bull, D. 2017. Blind high dynamic range image quality assessment using deep learning. 2017 IEEE International Conference on Image Processing (ICIP), IEEE, pp. 765–769.
  27. Jiang, M. , Shen, L. , Zheng, L. , Zhao, M. and Jiang, X. 2020. Tone-mapped image quality assessment for electronics displays by combining luminance partition and colorfulness index. IEEE Transactions on Consumer Electronics 66(2): 153–162.
  28. Kim, D. and Kim, M. 2020. Learning-based low-complexity reverse tone mapping with linear mapping. IEEE Transactions on Circuits and Systems for Video Technology 30(2): 400–414.
  29. Kinoshita, Y. , Shiota, S. and Kiya, H. 2017. “Fast inverse tone mapping with reinhard global operator”, 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, New Orleans, LA, March 5–9.
  30. Kinoshita, Y. , Shiota, S. , Kiya, H. and Yoshida, T. 2018. Multi-exposure image fusion based on exposure compensation. 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, pp. 1388–1392.
  31. Kovaleski, R. P. and Oliveira, M. M. 2009. High-quality brightness enhancement functions for real-time reverse tone mapping. The Visual Computer 25(5): 539–547, available at: https://doi.org/10.1007/s00371-009-0327-3=0pt.
  32. Krasula, L. , Fliegel, K. and Le Callet, P. 2020. FFTMI: features fusion for natural tone-mapped images quality evaluation. IEEE Transactions on Multimedia 22(8): 2038–2047.
  33. Kundu, D. , Ghadiyaram, D. , Bovik, A. C. and Evans, B. L. 2017a. Large-scale crowdsourced study for tone-mapped HDR pictures. IEEE Transactions on Image Processing 26(10): 4725–4740.
  34. Kundu, D. , Ghadiyaram, D. , Bovik, A. C. and Evans, B. L. 2017b. No-reference quality assessment of tone-mapped HDR pictures. IEEE Transactions on Image Processing 26(6): 2957–2971.
  35. Landis, H. 2002. Production-ready global illumination. Siggraph Course Notes 16: 87–101.
  36. Larson, G. W. , Rushmeier, H. and Piatko, C. 1997. A visibility matching tone reproduction operator for high dynamic range scenes. IEEE Transactions on Visualization and Computer Graphics 3(4): 291–306.
  37. Li, S. and Kang, X. 2012. Fast multi-exposure image fusion with median filter and recursive filter. IEEE Transactions on Consumer Electronics 58(2): 626–632.
  38. Li, Z. G. , Zheng, J. H. and Rahardja, S. 2012. Detail-enhanced exposure fusion. IEEE Transactions on Image Processing 21(11): 4672–4676.
  39. Lokmanwar, S. D. and Bhalchandra, A. S. 2019. Contour detection based on Gaussian filter. 2019 3rd International Conference on Electronics, Communication and Aerospace Technology (ICECA), pp. 722–725.
  40. Ma, K. , Zeng, K. and Wang, Z. 2015. Perceptual quality assessment for multi-exposure image fusion. IEEE Transactions on Image Processing 24(11): 3345–3356.
  41. Manno-Kovacs, A. 2019. Direction selective contour detection for salient objects. IEEE Transactions on Circuits and Systems for Video Technology 29(2): 375–389.
  42. Mantiuk, R. , Daly, S. J. , Myszkowski, K. and Seidel, H. -P. 2005. Predicting visible differences in high dynamic range images: model and its calibration. Human Vision and Electronic Imaging X, vol. 5666, International Society for Optics and Photonics, pp. 204–215.
  43. Mantiuk, R. , Myszkowski, K. and Seidel, H. -P. 2006. A perceptual framework for contrast processing of high dynamic range images. ACM Transactions on Applied Perception (TAP) 3(3): 286–308.
  44. Mantiuk, R. , Kim, K. J. , Rempel, A. G. and Heidrich, W. 2011. HDR-VDP-2: a calibrated visual metric for visibility and quality predictions in all luminance conditions. ACM Transactions on Graphics (TOG) 30(4): 40.
  45. Mantiuk, R. K. , Tomaszewska, A. and Mantiuk, R. 2012. Comparison of four subjective methods for image quality assessment. Computer Graphics Forum 31(8): 2478–2491, available at: https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1467-8659.2012.03188.x=0pt.
  46. Mantiuk, R. K. , Myszkowski, K. and Seidel, H. -P. 2016. High dynamic range imaging. Wiley Encyclopedia of Electrical and Electronics Engineering, available at: https://www.cl.cam.ac.uk/rkm38/hdri_book.html=0pt.
  47. Mertens, T. , Kautz, J. and Van Reeth, F. 2009. Exposure fusion: a simple and practical alternative to high dynamic range photography. Computer Graphics Forum 28(1): 161–171.
  48. Narwaria, M. , Silva, M. P. D. and Callet, P. L. 2015. HDR-VQM: an objective quality measure for high dynamic range video. Signal Processing: Image Communication 35: 46–60, available at: https://www.sciencedirect.com/science/article/pii/S0923596515000703?via=0pt.
  49. Nuutinen, M. , Virtanen, T. , Leisti, T. , Mustonen, T. , Radun, J. and Häkkinen, J. 2016. A new method for evaluating the subjective image quality of photographs: dynamic reference. Multimedia Tools and Applications 75(4): 2367–2391, available at: https://doi.org/10.1007/s11042-014-2410-7=0pt.
  50. Opozda, S. and Sochan, A. 2014. The survey of subjective and objective methods for quality assessment of 2D and 3D images. Theoretical and Applied Informatics 26(1-2): 39–67.
  51. Patil, S. B. and Patil, S. R. 2017. Survey on approaches used for image quality assessment. 2017 International Conference on Energy, Communication, Data Analytics and Soft Computing (ICECDS), pp. 987–991.
  52. Persson, M. 2014. “Subjective image quality evaluation using the softcopy quality ruler method,” student Paper.
  53. Qiu, G. , Guan, J. , Duan, J. and Chen, M. 2006. Tone mapping for HDR image using optimization a new closed form solution. 18th International Conference on Pattern Recognition (ICPR’06), vol. 1, pp. 996–999.
  54. Rahayu, F. N. 2011. Quality of experience for digital cinema presentation. PhD Thesis, Norwegian University of Science and Technology, available at: https://brage.bibsys.no/xmlui/handle/11250/2370392 http://hdl.handle.net/11250/2370392=0pt.
  55. Rana, A. , Valenzise, G. and Dufaux, F. 2019. Learning-based tone mapping operator for efficient image matching. IEEE Transactions on Multimedia 21(1): 256–268.
  56. Ravuri, C. S. , Sureddi, R. , Reddy Dendi, S. V. , Raman, S. and Channappayya, S. S. 2019. Deep no-reference tone mapped image quality assessment. 2019 53rd Asilomar Conference on Signals, Systems, and Computers, pp. 1906–1910.
  57. Redi, J. , Liu, H. , Alers, H. , Zunino, R. and Heynderickx, I. 2010. “Comparing subjective image quality measurement methods for the creation of public databases”, In Farnand, S. P. and Gaykema, F. (Eds), Image Quality and System Performance VII, vol. 7529 International Society for Optics and Photonics, SPIE, pp. 19–29, available at: https://doi.org/10.1117/12.839195=0pt.
  58. Reinhard, E. , Stark, M. , Shirley, P. and Ferwerda, J. 2002. Photographic tone reproduction for digital images. ACM Transactions on Graphics 21(3): 267–276, available at: http://doi.acm.org/10.1145/566654.566575=0pt.
  59. Reinhard, E. , Heidrich, W. , Debevec, P. , Pattanaik, S. , Ward, G. and Myszkowski, K. 2010. High Dynamic Range Imaging: Acquisition, Display, and Image-based Lighting, Morgan Kaufmann, Amsterdam.
  60. Rovid, A. , Varkonyi-Koczy, A. R. , Hashimoto, T. , Balogh, S. and Shimodaira, Y. 2007. Gradient based synthesized multiple exposure time HDR image. 2007 IEEE Instrumentation Measurement Technology Conference IMTC 2007, pp. 1–6.
  61. RRNR-TV Group 2004. Test plan draft version 1.7h, available at: http://www.vqeg.org.
  62. Sheikh, H. R. , Sabir, M. F. and Bovik, A. C. 2006. A statistical evaluation of recent full reference image quality assessment algorithms. IEEE Transactions on Image Processing 15(11): 3440–3451.
  63. Shen, R. , Cheng, I. and Basu, A. 2013. QoE-based multi-exposure fusion in hierarchical multivariate Gaussian CRF. IEEE Transactions on Image Processing 22(6): 2469–2478.
  64. Song, M. , Tao, D. , Chen, C. , Bu, J. , Luo, J. and Zhang, C. 2012. Probabilistic exposure fusion. IEEE Transactions on Image Processing 21(1): 341–357.
  65. The University of Texas at Austin 2006. LIVE Public-Domain Subjective Image Quality Database, available at: http://live.ece.utexas.edu/research/quality/subjective.htm.
  66. van Dijk, A. M. , Martens, J. -B. and Watson, A. B. 1995. "Quality assessment of coded images using numerical category scaling", In Ohta, N. , Lemke, H. U. and Lehureau, J. C. (Eds), Advanced Image and Video Communications and Storage Technologies, vol. 2451, International Society for Optics and Photonics, SPIE, Amsterdam, pp. 90–101, available at: https://doi.org/10.1117/12.201231.
  67. Varkonyi-Koczy, A. R. , Rovid, A. and Hashimoto, T. 2008. Gradient-based synthesized multiple exposure time color HDR image. IEEE Transactions on Instrumentation and Measurement 57(8): 1779–1785.
  68. Video Quality Experts Group 2002. Available at: http://www.vqeg.org.
  69. VQEG 2000. Final report from the Video Quality Expert Group on the validation of objective models of video quality assessment – Phase I, VQEG, March, available at: http://www.vqeg.org=0pt.
  70. Wang, Z. , Simoncelli, E. P. and Bovik, A. C. 2003. Multiscale structural similarity for image quality assessment. The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, 2003, vol. 2, IEEE, pp. 1398–1402.
  71. Wang, X. , Jiang, Q. , Shao, F. , Gu, K. , Zhai, G. and Yang, X. 2021. Exploiting local degradation characteristics and global statistical properties for blind quality assessment of tone-mapped HDR images. IEEE Transactions on Multimedia 23: 692–705.
  72. Winkler, S. 2005. Digital Video Quality: Vision Models and Metrics, John Wiley & Sons, Chichester.
  73. Yeganeh, H. and Wang, Z. 2013. Objective quality assessment of tone-mapped images. IEEE Transactions on Image Processing 22(2): 657–667.
  74. Yue, G. , Yan, W. and Zhou, T. 2020. Reference less quality evaluation of tone-mapped HDR and multiexposure fused images. IEEE Transactions on Industrial Informatics 16(3): 1764–1775.
  75. Yun, S. -H. , Kim, T. -C. and Kim, J. H. 2012. Single exposure-based image fusion using multi-transformation. 2012 IEEE 1st Global Conference on Consumer Electronics (GCCE), IEEE, pp. 142–143.
  76. Zeng, K. , Ma, K. , Hassen, R. and Wang, Z. 2014. Perceptual evaluation of multi-exposure image fusion algorithms. 2014 Sixth International Workshop on Quality of Multimedia Experience (QoMEX), IEEE, pp. 7–12.
  77. Zhang, W. and Cham, W. -K. 2012. Gradient-directed multiexposure composition. IEEE Transactions on Image Processing 21(4): 2318–2323.
  78. Zhu, W. , Zhai, G. , Hu, M. , Liu, J. and Yang, X. 2018a. Arrow’s impossibility theorem inspired subjective image quality assessment approach. Signal Processing 145: 193–201, available at: http://www.sciencedirect.com/science/article/pii/S0165168417304164.
  79. Zhu, W. , Zhai, G. , Hu, M. , Liu, J. and Yang, X. 2018b. Arrow’s impossibility theorem inspired subjective image quality assessment approach. Signal Processing 145: 193–201.