# A no-reference objective image quality metric based on perceptually weighted local noise

- Tong Zhu^{1}
- Lina Karam^{1}

*EURASIP Journal on Image and Video Processing* **2014**:5

https://doi.org/10.1186/1687-5281-2014-5

© Zhu and Karam; licensee Springer. 2014

**Received: **15 April 2013

**Accepted: **23 December 2013

**Published: **16 January 2014

## Abstract

This work proposes a perceptually based no-reference objective image quality metric that integrates perceptually weighted local noise into a probability summation model. Unlike existing objective metrics, the proposed no-reference metric can predict the relative amount of noise perceived in images with different content, without a reference. Results are reported on both the LIVE and TID2008 databases. The proposed no-reference metric achieves consistently good performance across noise types and across databases as compared to many of the best very recent no-reference quality metrics, and it predicts with high accuracy the relative amount of perceived noise in images of different content.

## Introduction

Reliable assessment of image quality plays an important role in meeting the promised quality of service (QoS) and in improving the end user’s quality of experience (QoE). There is a growing interest in developing objective quality assessment algorithms that can automatically predict perceived image quality. These methods are highly useful in various image processing applications, such as image compression, transmission, restoration, enhancement, and display. For example, a quality metric can be used to evaluate and control the performance of individual system components in image/video processing and transmission systems.

One direct way to evaluate image quality is through subjective tests. In these tests, a group of human subjects is asked to judge the quality of images under a predefined viewing condition. The scores given by the observers are averaged to produce the mean opinion score (MOS). However, subjective tests are time-consuming, laborious, and expensive. Objective image quality assessment (IQA) methods can be categorized as full reference (FR), reduced reference (RR), and no reference (NR), depending on whether a reference, partial information about a reference, or no reference is used for the computation. Quality assessment without a reference is challenging: a no-reference metric is not computed relative to a reference image; rather, an absolute value is computed based on characteristics of the test image alone.

Of particular interest to this work is the no-reference noisiness objective metric. Noisiness and blurriness are two key distortions in multiple applications, and typically there is a tradeoff between noisiness and blurriness. For example, in soft-thresholding for image denoising [1], the image can be blurry when the threshold is high, while the image can remain noisy when the threshold is low. Also, in Wiener-based super-resolution [2], too much regularization results in less noise at the expense of more blur: the reconstructed image can be blurry when the auto-correlation function is modeled to be too flat, while the reconstructed image can be noisy when the auto-correlation function is modeled to be too sharp. No-reference image sharpness/blur metrics have been widely discussed [3, 4]. However, these image sharpness/blur metrics typically fail in the presence of noise; the sharpness metric may increase when noise increases. A no-reference noise-immune image sharpness metric was also proposed [5], and all the edge-based sharpness metrics can be easily applied in the wavelet domain as described in [5] to provide resilience to noise. Still, such a sharpness metric lacks the ability to assess the impairment due to the noise itself. For visual quality assessment of noisiness, many full-reference metrics are presented in [6], such as the peak signal-to-noise ratio (PSNR), multi-scale structural similarity (MS-SSIM) [7], noise quality measure (NQM) [8], and information fidelity criterion (IFC) [9]. However, these full-reference metrics require the reference image for calculation. There is thus a need to develop a no-reference noisiness quality metric. Furthermore, such a noisiness metric could be further combined with the no-reference blur metrics of [3, 4] to provide a better prediction of image quality for several applications, including super-resolution, image restoration, and other multiply distorted images.

A global estimate of image noise variance was used as a no-reference noisiness metric in [10]. The histogram of the local noise variances is used to derive the global estimate. However, the locally perceived visibility of noise is not considered. Similarly, in [11], noisiness is expressed by the sum of estimated noise amplitudes and the ratio of noise pixels. Neither of the metrics in [10, 11] accounts for the effects of locally varying noise on the perceived noise impairment, and neither exploits the characteristics of the human visual system (HVS).

To tackle this issue, this paper first presents a full-reference image noisiness metric that integrates perceptually weighted local noise into a probability summation model. The proposed metric can predict the perceptual noisiness in images with high accuracy. In addition, a no-reference objective noisiness metric is derived based on local noise standard deviation, local perceptual weighting, and probability summation. The experimental results show that the proposed FR and NR metrics exhibit better and more consistent performance across databases and distortion types when compared with several very recent FR and NR metrics.

The remainder of this paper is organized as follows. A perceived noisiness model based on probability summation is presented first followed by details on the contrast sensitivity thresholds computation. A full-reference perceptually weighted noise (FR-PWN) metric is proposed next based on perceptual weighting using the computed contrast sensitivity thresholds and probability summation. After that, a no-reference perceptually weighted noise (NR-PWN) metric is further derived. Performance results and comparison with existing metrics are presented followed by a conclusion.

## Perceptual noisiness model based on probability summation

Let *y* denote the noisy test image, modeled as

$$ y(i,j) = y^{\prime}(i,j) + e(i,j), \qquad (1) $$

where *y*^{′}(*i*,*j*) is the original undistorted image and *e*(*i*,*j*) is the additive noise at pixel location (*i*,*j*). The probability of detecting a noise distortion at location (*i*,*j*) can be modeled as an exponential having the following form:

$$ P(i,j) = 1 - \exp\left(-\left|\frac{e(i,j)}{\mathrm{JND}(i,j)}\right|^{\beta}\right), \qquad (2) $$

where JND(*i*,*j*) is the JND value at (*i*,*j*) and it depends on the mean intensity in a local neighborhood region surrounding pixel (*i*,*j*). *β* is a parameter whose value is chosen to maximize the correspondence of (2) with the experimentally determined psychometric function for noise detection. In psychophysical experiments that examine summation over space, a value of about 4 has been observed to correspond well to probability summation [12].

Following the probability summation hypothesis, the image is viewed as a collection of independent noise detectors, one per pixel, whose detection probabilities are pooled over a region *R* [13]. The probability summation hypothesis is based on the following two assumptions: (1) a noise distortion is detected if and only if at least one detector senses the presence of a noise distortion; (2) the probabilities of detection are independent, i.e., the probability that a particular detector will signal the presence of a distortion is independent of the probability that any other detector will. The probability of noise detection in a region *R* is then given by

$$ P_{\mathrm{noise}}(R) = 1 - \prod_{(i,j) \in R}\left[1 - P(i,j)\right], \qquad (3) $$

which, using (2), can be written as

$$ P_{\mathrm{noise}}(R) = 1 - \exp\left(-D_R^{\beta}\right), \qquad (4) $$

where

$$ D_R = \left(\sum_{(i,j) \in R}\left|\frac{e(i,j)}{\mathrm{JND}(i,j)}\right|^{\beta}\right)^{1/\beta}. \qquad (5) $$

From (4), it can be seen that *P*_{noise}(*R*) increases if *D*_{R} increases and vice versa, so *D*_{R} can be used as a noisiness metric over region *R*. However, the probability of noise detection does not directly translate to a noise annoyance level. In this work, the *β* parameter in (4) and (5) is replaced with *α* = *β* × *s*, which has the effect of steering the slope of the psychometric function in order to translate noise detection levels into noise annoyance levels. The factor *s* was found experimentally to be 1/16, resulting in a value of 0.25 for *α*. More details about how JND(*i*,*j*) is computed are given in the Section ‘Perceptual contrast sensitivity threshold model and JND computation’.
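As a minimal numerical sketch (not the authors' implementation), the pooling in (4) and (5) can be written as follows; the uniform JND map and the synthetic noise field are illustrative assumptions:

```python
import numpy as np

def noise_detection_measure(error, jnd, beta=4.0):
    """Minkowski pooling of JND-normalized noise errors over a region R (Eq. 5)."""
    ratio = np.abs(error) / jnd          # per-detector normalized distortion
    return np.sum(ratio ** beta) ** (1.0 / beta)

def noise_detection_probability(error, jnd, beta=4.0):
    """Probability of detecting noise somewhere in region R (Eq. 4)."""
    d_r = noise_detection_measure(error, jnd, beta)
    return 1.0 - np.exp(-d_r ** beta)

rng = np.random.default_rng(0)
jnd = np.full((64, 64), 5.0)                      # uniform JND map, illustrative
d_weak = noise_detection_measure(rng.normal(0, 1.0, (64, 64)), jnd)
d_strong = noise_detection_measure(rng.normal(0, 3.0, (64, 64)), jnd)
print(d_weak < d_strong)  # stronger noise yields a larger D_R
```

Note that over a whole region the detection probability (4) saturates quickly toward 1, which is why the annoyance-oriented exponent *α* = 0.25 is used in the metric itself rather than the detection exponent *β* = 4.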

## Perceptual contrast sensitivity threshold model and JND computation

Following the perceptual contrast sensitivity threshold model used in [14, 15], a base threshold *t*_{128} is generated for a region with a mean grayscale value of 128 as follows:

$$ t_{128} = \frac{M_g \, T}{L_{\max} - L_{\min}}, \qquad (6) $$

where *L*_{min} and *L*_{max} are the minimum and maximum display luminances, *M*_{g} is the total number of grayscale levels, and *T* is given by the following parabolic approximation [15]:

$$ \log_{10} T = \log_{10} T_{\min} + K \left( \log_{10} f - \log_{10} f_{\min} \right)^2, \qquad (7) $$

where *T*_{min} is the luminance threshold at the frequency *f*_{min} at which the threshold is minimum, *K* is the steepness of the parabola, and *f* is the spatial frequency, in cycles per degree, associated with the considered local neighborhood. The frequency *f* is determined from *ω*_{x} and *ω*_{y}, which represent, respectively, the horizontal width and the vertical height of a pixel in degrees of visual angle, and from *N*, the local neighborhood size, which is set to 8.

*T*_{min}, *f*_{min}, and *K* can be computed as [15]:

$$ T_{\min} = \begin{cases} \dfrac{L_T}{S_0}\left(\dfrac{L}{L_T}\right)^{\alpha_T}, & L \le L_T \\[2mm] \dfrac{L}{S_0}, & L > L_T \end{cases} \qquad (10) $$

$$ f_{\min} = \begin{cases} f_0\left(\dfrac{L}{L_f}\right)^{\alpha_f}, & L \le L_f \\[2mm] f_0, & L > L_f \end{cases} \qquad (11) $$

$$ K = \begin{cases} K_0\left(\dfrac{L}{L_K}\right)^{\alpha_K}, & L \le L_K \\[2mm] K_0, & L > L_K \end{cases} \qquad (12) $$

where *L*_{T} = 13.45 cd/m^{2}, *S*_{0} = 94.7, *α*_{T} = 0.649, *α*_{f} = 0.182, *f*_{0} = 6.78 cycles/deg, *L*_{f} = 300 cd/m^{2}, *K*_{0} = 3.125, *α*_{K} = 0.0706, and *L*_{K} = 300 cd/m^{2}. Equations 10 to 12 give *T*_{min}, *f*_{min}, and *K* as functions of the local background luminance *L*. For a background intensity value of 128, given a gamma-corrected display, the corresponding local background luminance is computed as follows:

$$ L_{128} = L_{\min} + \frac{128}{M_g}\left(L_{\max} - L_{\min}\right), \qquad (13) $$

where *L*_{min} and *L*_{max} denote the minimum and maximum luminances of the display. Once the JND for a region with a mean grayscale value of 128, *t*_{128}, is calculated using (6), the JND for regions with other mean grayscale values is approximated as follows [16]:

$$ \mathrm{JND}(i,j) = t_{128} \left( \frac{\mathrm{Mean}\left(I_{n_1,n_2}\right)}{128} \right)^{\alpha_T}, \qquad (14) $$

where ${I}_{{n}_{1},{n}_{2}}$ is the intensity level at pixel location (*n*_{1},*n*_{2}) in an *N* × *N* region surrounding pixel (*i*,*j*). It should be noted that the indices (*n*_{1},*n*_{2}) are used to denote the location with respect to the top left corner of the *N* × *N* region, while the indices (*i*,*j*) are used to denote the location with respect to the top left corner of the whole image. $\text{Mean}\left({I}_{{n}_{1},{n}_{2}}\right)$ is the mean value over the considered *N* × *N* region surrounding pixel (*i*,*j*). *α*_{T} is a correction exponent that controls the degree to which luminance masking occurs and is set to *α*_{T} = 0.649, as given in [16]. JND(*i*,*j*) in (5) is computed using (14). In our implementation, *N* = 8 was used for the *N* × *N* region.
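The threshold computation above can be sketched as follows. This is an illustrative reading of the model, not the authors' code: the display parameters (`l_min = 0`, `l_max = 175` cd/m², 256 gray levels) and the nominal spatial frequency `f = 6.78` cycles/deg passed to the parabola are assumptions, and the local mean is computed over non-overlapping 8 × 8 blocks for simplicity:

```python
import numpy as np

# Ahumada-Peterson model constants from the text (Eqs. 10-12)
L_T, S0, A_T = 13.45, 94.7, 0.649
F0, A_F, L_F = 6.78, 0.182, 300.0
K0, A_K, L_K = 3.125, 0.0706, 300.0

def threshold_params(L):
    """T_min, f_min, and K as functions of background luminance L (cd/m^2)."""
    t_min = (L_T / S0) * (L / L_T) ** A_T if L <= L_T else L / S0
    f_min = F0 * (L / L_F) ** A_F if L <= L_F else F0
    k = K0 * (L / L_K) ** A_K if L <= L_K else K0
    return t_min, f_min, k

def luminance_threshold(f, L):
    """Parabolic approximation of the contrast sensitivity threshold T."""
    t_min, f_min, k = threshold_params(L)
    return 10.0 ** (np.log10(t_min) + k * (np.log10(f) - np.log10(f_min)) ** 2)

def jnd_map(image, l_min=0.0, l_max=175.0, m_g=256, f=6.78, n=8):
    """Per-pixel JND via luminance masking (Eq. 14); f is an assumed frequency."""
    # background luminance for gray level 128, assuming a gamma-corrected display
    l_128 = l_min + (128.0 / m_g) * (l_max - l_min)
    t_128 = m_g * luminance_threshold(f, l_128) / (l_max - l_min)   # Eq. (6)
    # local mean over N x N neighborhoods (non-overlapping block means here)
    h, w = image.shape
    means = image[:h - h % n, :w - w % n].reshape(h // n, n, w // n, n).mean(axis=(1, 3))
    local_mean = np.kron(means, np.ones((n, n)))
    return t_128 * (np.maximum(local_mean, 1.0) / 128.0) ** A_T     # Eq. (14)
```

Brighter neighborhoods yield larger JND values (more luminance masking), so the same noise amplitude is weighted as less visible in bright regions.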

## Full-reference noisiness metric

For the full-reference metric, the image is partitioned into blocks of size *M* × *M*, and each block is taken as the region of interest *R*_{b}. The block size is chosen to correspond with the foveal region. Let *r* be the visual resolution of the display in pixels per degree, *v* the viewing distance in centimeters, and *d* the display resolution in pixels per centimeter. Then the visual resolution can be calculated as follows [17]:

$$ r = v \, d \tan\left(\frac{\pi}{180}\right), \qquad (15) $$

and, since the fovea covers approximately 2 degrees of visual angle, the number of pixels contained in the foveal region is (2⌊*r*⌋)^{2} [17]. For example, for a viewing distance of 60 cm and a 31.5 pixels/cm display, the number of pixels contained in the foveal region is (64)^{2}, corresponding to a block size of 64 × 64. Using (5), the perceived noise distortion within a block *R*_{b} is given by

$$ D_{R_b} = \left( \sum_{(i,j) \in R_b} \left| \frac{e(i,j)}{\mathrm{JND}(i,j)} \right|^{\alpha} \right)^{1/\alpha}, \qquad (16) $$

where *e*(*i*,*j*) = *y*(*i*,*j*) − *y*^{′}(*i*,*j*) is the noise error at location (*i*,*j*), and JND(*i*,*j*) is the JND at location (*i*,*j*) computed using (14). Using the probability summation model as discussed previously, the noisiness measure *D* for the whole image *I* is obtained by using a Minkowski metric for inter-block pooling as follows:

$$ D = \left( \sum_{R_b \in I} D_{R_b}^{\alpha} \right)^{1/\alpha}. \qquad (17) $$

The resulting distortion measure, *D*, normalized by the number of blocks, is adopted as the proposed full-reference metric FR-PWN. This full-reference metric not only works for noisiness, but could also work for other additive distortions.
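A compact sketch of this block-wise computation follows (an illustrative implementation under stated assumptions, not the reference one): the JND map is assumed precomputed, *α* = 0.25 as given in the text, and a 64 × 64 block approximates the foveal region:

```python
import numpy as np

def fr_pwn(reference, test, jnd, alpha=0.25, block=64):
    """Full-reference perceptually weighted noise metric (sketch).

    Pools JND-normalized errors within each block (Eq. 16), then pools
    across blocks with a Minkowski metric and normalizes by the number
    of blocks.
    """
    error = test.astype(float) - reference.astype(float)
    ratio = np.abs(error) / jnd
    h, w = ratio.shape
    block_d = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            r = ratio[i:i + block, j:j + block]
            block_d.append(np.sum(r ** alpha) ** (1.0 / alpha))
    block_d = np.array(block_d)
    # inter-block Minkowski pooling, normalized by the number of blocks
    return (np.sum(block_d ** alpha) ** (1.0 / alpha)) / len(block_d)

rng = np.random.default_rng(1)
ref = np.full((128, 128), 100.0)
jnd = np.full((128, 128), 1.5)           # uniform JND map, illustrative
low = fr_pwn(ref, ref + rng.normal(0, 1, ref.shape), jnd)
high = fr_pwn(ref, ref + rng.normal(0, 5, ref.shape), jnd)
print(low < high)  # more noise -> larger FR-PWN
```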

## No-reference noisiness metric

In the absence of a reference image, the error *e*(*i*,*j*) in (16) cannot be computed. Therefore, there is a need to develop a no-reference noisiness quality metric. Figure 2 shows the block diagram which summarizes the proposed no-reference NR-PWN metric. From (14), it can be seen that JND(*i*,*j*) depends on the local mean of the neighborhood surrounding (*i*,*j*). For the proposed NR metric, the local mean for a pixel (*i*,*j*) belonging to a region *R*_{N} is taken to be the mean of region *R*_{N} and is denoted by mean(*R*_{N}). Consequently, Equation 14 can be written as follows:

$$ \mathrm{JND}(R_N) = t_{128} \left( \frac{\mathrm{mean}(R_N)}{128} \right)^{\alpha_T}. \qquad (18) $$

The same JND(*R*_{N}) will be calculated for all pixels (*i*,*j*) belonging to the same *R*_{N}, and a different JND(*R*_{N}) will be calculated separately for each *R*_{N} within the considered region-of-interest block *R*_{b}. The size of the block *R*_{b} is chosen to approximate a foveal region (e.g., 64 × 64 as discussed previously). Using *p*, *q* as the indices within a local neighborhood *R*_{N}, the proposed NR metric is derived from the presented FR metric (16) as follows:

$$ D_{R_b} = \left( \sum_{R_N \in R_b} \frac{1}{\mathrm{JND}(R_N)^{\alpha}} \sum_{(p,q) \in R_N} \left| \mathrm{error}(p,q) \right|^{\alpha} \right)^{1/\alpha}. \qquad (19) $$

Under the ergodicity assumption, the inner sum can be approximated as

$$ \sum_{(p,q) \in R_N} \left| \mathrm{error}(p,q) \right|^{\alpha} \approx N^2 \, E\left[\left|\mathrm{error}(p,q)\right|^{\alpha}\right], \qquad (20) $$

where *N* × *N* is the size of each local neighborhood *R*_{N}. Also, if error(*p*,*q*) is a Gaussian distributed process with a mean of 0 and a standard deviation of ${\sigma}_{{R}_{N}}$, using the central absolute moments of a Gaussian distribution [18], it can be shown that

$$ E\left[\left|\mathrm{error}(p,q)\right|^{\alpha}\right] = \sigma_{R_N}^{\alpha} \, \frac{2^{\alpha/2} \, \Gamma\!\left(\frac{\alpha+1}{2}\right)}{\sqrt{\pi}}, \qquad (21) $$

where Γ(*t*) is the gamma function $\Gamma(t) = \int_{0}^{\infty} x^{t-1} e^{-x} \, dx$. For a fixed *α*, define a constant *C* as

$$ C = \frac{2^{\alpha/2} \, \Gamma\!\left(\frac{\alpha+1}{2}\right)}{\sqrt{\pi}}. \qquad (22) $$

The resulting no-reference perceived noise distortion within a block *R*_{b} is then given by

$$ D_{R_b} = \left( \sum_{R_N \in R_b} N^2 \, C \, \frac{\sigma_{R_N}^{\alpha}}{\mathrm{JND}(R_N)^{\alpha}} \right)^{1/\alpha}, \qquad (23) $$

and the noisiness measure *D* for the whole image *I* can be computed as follows:

$$ D = \left( \sum_{R_b \in I} D_{R_b}^{\alpha} \right)^{1/\alpha} = \left( \sum_{R_b \in I} \sum_{R_N \in R_b} N^2 \, C \, \frac{\sigma_{R_N}^{\alpha}}{\mathrm{JND}(R_N)^{\alpha}} \right)^{1/\alpha}. \qquad (24) $$

The resulting noise measure *D*, normalized by the number of blocks, is adopted as the proposed no-reference NR-PWN metric.
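The Gaussian absolute-moment identity used in the derivation, $E[|X|^{\alpha}] = \sigma^{\alpha} \, 2^{\alpha/2}\,\Gamma((\alpha+1)/2)/\sqrt{\pi}$, can be checked numerically. The following is a quick sanity check, not part of the metric, using *α* = 0.25 as in the text and an arbitrary *σ*:

```python
import math
import numpy as np

alpha, sigma = 0.25, 3.0

# closed form: E[|X|^alpha] = sigma^alpha * 2^(alpha/2) * Gamma((alpha+1)/2) / sqrt(pi)
analytic = sigma ** alpha * 2 ** (alpha / 2) * math.gamma((alpha + 1) / 2) / math.sqrt(math.pi)

# Monte Carlo estimate of the same moment
rng = np.random.default_rng(0)
empirical = np.mean(np.abs(rng.normal(0.0, sigma, size=1_000_000)) ** alpha)

print(abs(empirical - analytic) / analytic < 0.01)  # relative agreement within 1%
```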

In (24), the noise variance ${\sigma}_{{R}_{N}}$ is estimated directly from the test image, without the reference image. Multiple methods are available to estimate the noise variance, such as fast noise variance estimation (FNV) [19] and generalized cross validation (GCV)-based method [20, 21]. In our implementation, the GCV method was used for computing the local noise variance. Similar results were also obtained using the FNV [19] noise estimation method.
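As an illustration of the variance-estimation step, the FNV method of [19] can be sketched as follows. This is a simplified global version in pure NumPy (the metric itself uses local, per-neighborhood estimates, and the reported implementation uses GCV); the flat test image is an illustrative assumption:

```python
import numpy as np

def fnv_sigma(image):
    """Global noise standard deviation via Immerkaer's FNV method [19] (sketch).

    Applies a Laplacian-difference operator that cancels locally planar
    image structure, then averages absolute responses; the constant
    sqrt(pi/2)/6 makes the estimator unbiased for Gaussian noise.
    """
    im = image.astype(float)
    # response of the 3x3 mask [[1,-2,1],[-2,4,-2],[1,-2,1]] at interior pixels
    r = (im[:-2, :-2] - 2 * im[:-2, 1:-1] + im[:-2, 2:]
         - 2 * im[1:-1, :-2] + 4 * im[1:-1, 1:-1] - 2 * im[1:-1, 2:]
         + im[2:, :-2] - 2 * im[2:, 1:-1] + im[2:, 2:])
    return np.sqrt(np.pi / 2.0) * np.mean(np.abs(r)) / 6.0

rng = np.random.default_rng(2)
noisy = np.full((256, 256), 128.0) + rng.normal(0.0, 4.0, (256, 256))
print(abs(fnv_sigma(noisy) - 4.0) < 0.2)  # estimate is close to the true sigma
```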

## Performance results

The performance of the proposed FR-PWN and NR-PWN metrics is assessed using the LIVE [6] and TID2008 [22] databases. The LIVE database [6] consists of 29 RGB color images. The images are distorted using different distortion types: JPEG2000, JPEG, Gaussian blur, white noise, and bit errors. The difference mean opinion score (DMOS) for each image is provided. The white noise part of the LIVE database includes 174 images with a noise standard deviation ranging from 0 to 2; white noise was added to the RGB components of the images after scaling between 0 and 1. All of the white noise images (174 images) from the LIVE database are used in our experiments. The TID2008 database [22] consists of 25 reference images (512 × 384) and 1,700 distorted images. The images are distorted using 17 types of distortions, including additive Gaussian noise, high-frequency noise, JPEG2000, and Gaussian blur. The MOS was obtained using a total of 838 observers with 256,428 comparisons of the visual quality of distorted images. All of the additive Gaussian noise images (100 images) and high-frequency noise images (100 images) from the TID2008 database are used in our experiments. As mentioned in [22], additive zero-mean noise is often present in images and is commonly modeled as white Gaussian noise. This type of distortion is included in most studies of quality metric effectiveness. High-frequency noise is an additive non-white noise which can be used for analyzing the spatial frequency sensitivity of the HVS [23]. High-frequency noise is typical in lossy image compression and watermarking.

Following the evaluation procedure of the VQEG [24], the objective metric values are mapped to predicted subjective scores through a nonlinear logistic fitting function, and performance is evaluated using the Pearson linear correlation coefficient (PLCC) and the Spearman rank-order correlation coefficient (SROCC) between the predicted and the actual subjective scores. Here, *M*_{i} denotes the quality metric value for image *i*, and ${\text{MOS}}_{{P}_{i}}$ is the predicted MOS or DMOS. Figure 3 shows the DMOS score and predicted DMOS obtained using NR-PWN for the LIVE database.
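The two correlation measures can be sketched as follows. These are pure-NumPy versions without tie handling, the data values are hypothetical, and the logistic mapping applied before PLCC in the actual evaluation is omitted:

```python
import numpy as np

def plcc(x, y):
    """Pearson linear correlation coefficient."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float(np.sum(xm * ym) / np.sqrt(np.sum(xm ** 2) * np.sum(ym ** 2)))

def srocc(x, y):
    """Spearman rank-order correlation (no tie correction, for illustration)."""
    def ranks(v):
        return np.argsort(np.argsort(np.asarray(v))).astype(float)
    return plcc(ranks(x), ranks(y))

# hypothetical metric values M_i and subjective DMOS values for five images
metric = [0.10, 0.25, 0.40, 0.55, 0.80]
dmos = [12.0, 20.0, 31.0, 45.0, 60.0]
print(plcc(metric, dmos), srocc(metric, dmos))
```

SROCC depends only on the rank ordering, which is why it is reported for monotone but nonlinear metric-to-score relationships, while PLCC measures linear agreement after the fitting step.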

**Performance evaluation for the LIVE database**

| Type | Metric | PLCC | SROCC |
|---|---|---|---|
| FR | DCTune [25] | 0.9288 | 0.9324 |
| FR | PQS [26] | 0.9603 | 0.9535 |
| FR | NQM [8] | 0.9885 | 0.9854 |
| FR | Fuzzy S7 [27] | 0.9038 | 0.9199 |
| FR | BSDM (S4) [28] | 0.9559 | 0.9327 |
| FR | MS-SSIM [7] | 0.9737 | 0.9805 |
| FR | IFC [9] | 0.9766 | 0.9625 |
| FR | Proposed FR-PWN | 0.9846 | 0.9835 |
| RR | QAI [29] | 0.8889 | 0.8639 |
| NR | BLINDS-II (SVM) [30] | 0.9799 | 0.9691 |
| NR | BLINDS-II (Prob.) [30] | 0.9854 | 0.9783 |
| NR | HNR [31] | 0.962 | N/A |
| NR | BRISQUE [32] | 0.9851 | 0.9786 |
| NR | NIQE [33] | 0.9773 | 0.9662 |
| NR | BIQI [34] | 0.9538 | 0.9510 |
| NR | LBIQ [35] | 0.9761 | 0.9702 |
| NR | Estimated noise standard deviation | 0.9497 | 0.9713 |
| NR | Proposed NR-PWN | 0.9770 | 0.9816 |

**Performance evaluation using SROCC for the TID2008 database**

| Type | Metric | Additive Gaussian noise | High-frequency noise |
|---|---|---|---|
| FR | MS-SSIM [7] | 0.8094 | 0.8685 |
| FR | DCTune [25] | 0.8415 | 0.8721 |
| FR | NQM [8] | 0.7679 | 0.9015 |
| FR | Proposed FR-PWN | 0.8818 | 0.9194 |
| NR | BLINDS-II (SVM) [30] | 0.6600 | N/A |
| NR | BLINDS-II (Prob.) [30] | 0.6956 | 0.7454 |
| NR | BRISQUE [32] | 0.829 | 0.6234 |
| NR | NIQE [33] | 0.7775 | 0.8539 |
| NR | GRNN [36] | 0.7532 | N/A |
| NR | Li et al. [37] | 0.7043 | N/A |
| NR | Proposed NR-PWN | 0.8020 | 0.9136 |

From Table 1, it can be observed that the proposed FR-PWN metric outperforms the existing FR metrics for the LIVE database while achieving a performance similar to that of the NQM [8] metric. Table 2 shows that the proposed FR-PWN metric outperforms the existing FR metrics for the TID2008 database on both Gaussian noise and high-frequency noise. The proposed NR-PWN metric comes close in performance to the proposed FR-PWN metric for both the LIVE and the TID2008 databases. In particular, Table 1 shows that the proposed NR-PWN metric performs better than existing NR metrics, except for the BLINDS-II and BRISQUE metrics in terms of PLCC. The proposed NR-PWN metric outperforms all the considered NR metrics in terms of SROCC, and even existing FR metrics except the full-reference NQM [8], for the LIVE database. Table 2 shows that the proposed NR-PWN metric surpasses existing NR metrics except BRISQUE [32] for additive Gaussian noise, and that it significantly outperforms existing FR and NR metrics for high-frequency noise. In particular, it should be noted that the performance of BRISQUE [32] drops dramatically on high-frequency noise and is significantly lower than that of the proposed metric. In addition, many of the state-of-the-art metrics shown, including BLINDS-II [30], NIQE [33], and BRISQUE [32], use 80% of the data for training [30, 32, 33]. Consequently, these may not perform well on new distortions outside the training set, such as high-frequency noise (Table 2). In contrast, the proposed NR-PWN does not require training and still performs well on this new distortion.

Furthermore, it is worth noting that, as shown in Tables 1 and 2, the existing metrics exhibit differences in performance across different databases and types of distortions. It is noted in [38] that the performance of many image quality metrics can be quite different across databases. The difference in performance can be attributed to the differences in quality range, distortions, and contents across databases. Despite this, the results obtained show that the proposed FR-PWN and NR-PWN metrics achieve consistently good performance across noise types (white noise and high-frequency noise) and across databases as compared to the existing quality metrics. For example, the proposed FR-PWN metric exhibits a performance similar to NQM [8] for the LIVE database, while it significantly outperforms NQM [8] for white noise images from TID2008. Also, the existing BLINDS-II [30] performs fairly well for the LIVE database, but its performance significantly decreases when applied to TID2008. It is also interesting to note that although the mathematical derivation for the proposed NR-PWN metric is based on white noise, the proposed NR-PWN metric performs consistently well for high-frequency noise, a non-white noise.

The performance results presented in Tables 1 and 2 for the proposed NR-PWN metric are obtained using the GCV method [20, 21] for local variance estimation. If the local variance is estimated using the FNV method [19], the resulting SROCC values are 0.9627 for the LIVE database additive Gaussian noise, 0.7850 for the TID2008 database additive Gaussian noise, and 0.9210 for the TID2008 database high-frequency noise, respectively.

The computation of the proposed metrics requires the maximum luminance *L*_{max} of the monitor. However, the performance of the proposed metrics is resilient to different *L*_{max} values. In Tables 1 and 2, the proposed metrics are calculated using *L*_{max} = 175 cd/m^{2}. The *L*_{max} in real viewing conditions may vary from 100 cd/m^{2} for CRT monitors to 300 cd/m^{2} for LCD monitors. Table 3 shows the performance of the proposed metrics in terms of SROCC using different values of *L*_{max}, for both the LIVE and the TID2008 databases. It can be observed that the proposed metrics are not sensitive to the selection of *L*_{max}.

**SROCC of the proposed metrics using different *L*_{max} values**

| | | *L*_{max} = 100 | *L*_{max} = 175 | *L*_{max} = 300 |
|---|---|---|---|---|
| LIVE additive Gaussian noise | FR-PWN | 0.9835 | 0.9835 | 0.9835 |
| | NR-PWN | 0.9816 | 0.9816 | 0.9816 |
| TID2008 additive Gaussian noise | FR-PWN | 0.8816 | 0.8818 | 0.8818 |
| | NR-PWN | 0.8020 | 0.8020 | 0.8020 |
| TID2008 high-frequency noise | FR-PWN | 0.9194 | 0.9194 | 0.9197 |
| | NR-PWN | 0.9136 | 0.9136 | 0.9136 |

## Conclusions

This paper proposed both a full-reference and a no-reference noisiness metric. The no-reference noisiness metric is derived from the proposed full-reference metric and integrates noise variance estimation and perceptual contrast sensitivity thresholds into a probability summation model. The proposed metrics can predict the relative noisiness in images based on the probability of noise detection. Results show that the proposed metrics achieve consistently good performance across noise types and across databases as compared to the existing quality metrics. Further work can be performed to develop a no-reference quality metric for multiply distorted images.

## References

1. Donoho D: De-noising by soft-thresholding. *IEEE Trans. Inf. Theory* 1995, 41(3):613-627.
2. Hardie R: A fast image super-resolution algorithm using an adaptive Wiener filter. *IEEE Trans. Image Process.* 2007, 16(12):2953-2964.
3. Ferzli R, Karam L: A no-reference objective image sharpness metric based on the notion of just noticeable blur (JNB). *IEEE Trans. Image Process.* 2009, 18(4):717-728.
4. Narvekar N, Karam L: A no-reference image blur metric based on the cumulative probability of blur detection (CPBD). *IEEE Trans. Image Process.* 2011, 20(9):2678-2683.
5. Ferzli R, Karam L: No-reference objective wavelet based noise immune image sharpness metric. In *IEEE International Conference on Image Processing, ICIP 2005*. IEEE, Piscataway; 2005:I-405-408.
6. Sheikh H, Sabir M, Bovik A: A statistical evaluation of recent full reference image quality assessment algorithms. *IEEE Trans. Image Process.* 2006, 15(11):3440-3451.
7. Wang Z, Simoncelli E, Bovik A: Multiscale structural similarity for image quality assessment. In *Proceedings of the Thirty-Seventh Asilomar Conference on Signals, Systems and Computers*. Pacific Grove; 9-12 November 2003:1398-1402.
8. Damera-Venkata N, Kite T, Geisler W, Evans B, Bovik A: Image quality assessment based on a degradation model. *IEEE Trans. Image Process.* 2000, 9(4):636-650.
9. Sheikh H, Bovik A, De Veciana G: An information fidelity criterion for image quality assessment using natural scene statistics. *IEEE Trans. Image Process.* 2005, 14(12):2117-2128.
10. Farias MCQ, Mitra S: No-reference video quality metric based on artifact measurements. In *IEEE International Conference on Image Processing, ICIP 2005*. IEEE, Piscataway; 2005:III-141-144.
11. Choi MG, Jung JH, Jeon JW: No-reference image quality assessment using blur and noise. *Int. J. Electrical Electron. Eng.* 2009, 3:318-322.
12. Hontsch I, Karam L: Locally adaptive perceptual image coding. *IEEE Trans. Image Process.* 2000, 9(9):1472-1483.
13. Robson JG, Graham N: Probability summation and regional variation in contrast sensitivity across the visual field. *Vis. Res.* 1981, 21:409-418.
14. Karam L, Sadaka N, Ferzli R, Ivanovski Z: An efficient selective perceptual-based super-resolution estimator. *IEEE Trans. Image Process.* 2011, 20(12):3470-3482.
15. Ahumada A, Peterson H: Luminance-model-based DCT quantization for color image compression. *Human Vision, Visual Processing, and Digital Display III, SPIE Proc.* 1992, 1666:365-374.
16. Watson AB: DCT quantization matrices visually optimized for individual images. *Human Vision, Visual Processing, and Digital Display IV, SPIE Proc.* 1993, 1913:202-216.
17. Liu Z, Karam L, Watson A: JPEG2000 encoding with perceptual distortion control. *IEEE Trans. Image Process.* 2006, 15(7):1763-1778.
18. Winkelbauer A: Moments and absolute moments of the normal distribution. 2012. http://arxiv.org/abs/1209.4340 (Accessed 15 April 2013)
19. Immerkær J: Fast noise variance estimation. *Comput. Vis. Image Underst.* 1996, 64(2):300-302.
20. Wahba G: Estimating the smoothing parameter. In *Spline Models for Observational Data*. Society for Industrial Mathematics, Philadelphia; 1990:45-65.
21. Garcia D: Robust smoothing of gridded data in one and higher dimensions with missing values. *Comput. Stat. Data Anal.* 2010, 54(4):1167-1178.
22. Ponomarenko N, Lukin V, Zelensky A, Egiazarian K, Carli M, Battisti F: TID2008 - a database for evaluation of full-reference visual quality assessment metrics. *Adv. Mod. Radioelectronics* 2009, 10:30-45.
23. Ponomarenko N, Silvestri F, Egiazarian K, Carli M, Astola J, Lukin V: On between-coefficient contrast masking of DCT basis functions. In *Proceedings of the Third International Workshop on Video Processing and Quality Metrics*. Scottsdale; January 2007.
24. VQEG: Final report from the Video Quality Experts Group on the validation of objective models of video quality assessment. 2000. http://www.vqeg.org/ (Accessed 15 April 2013)
25. Watson AB: DCTune: a technique for visual optimization of DCT quantization matrices for individual images. *Soc. Inf. Display Dig. Tech. Papers* 1993, 24:946-949.
26. Miyahara M, Kotani K, Algazi V: Objective picture quality scale (PQS) for image coding. *IEEE Trans. Commun.* 1998, 46(9):1215-1226.
27. Weken DV, Nachtegael M, Kerre EE: Using similarity measures and homogeneity for the comparison of images. *Image Vis. Comput.* 2004, 22(9):695-702.
28. Avcıbaş I, Sankur B, Sayood K: Statistical evaluation of image quality measures. *J. Electron. Imaging* 2002, 11:206-223.
29. Wang Z, Wu G, Sheikh H, Simoncelli E, Yang EH, Bovik A: Quality-aware images. *IEEE Trans. Image Process.* 2006, 15(6):1680-1689.
30. Saad M, Bovik A, Charrier C: Blind image quality assessment: a natural scene statistics approach in the DCT domain. *IEEE Trans. Image Process.* 2012, 21(8):3339-3352.
31. Shen J, Li Q, Erlebacher G: Hybrid no-reference natural image quality assessment of noisy, blurry, JPEG2000, and JPEG images. *IEEE Trans. Image Process.* 2011, 20(8):2089-2098.
32. Mittal A, Moorthy A, Bovik A: No-reference image quality assessment in the spatial domain. *IEEE Trans. Image Process.* 2012, 21(12):4695-4708.
33. Mittal A, Soundararajan R, Bovik A: Making a completely blind image quality analyzer. *IEEE Signal Process. Lett.* 2013, 20(3):209-212.
34. Moorthy A, Bovik A: A two-step framework for constructing blind image quality indices. *IEEE Signal Process. Lett.* 2010, 17(5):513-516.
35. Tang H, Joshi N, Kapoor A: Learning a blind measure of perceptual image quality. In *2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*. IEEE, Piscataway; 2011:305-312.
36. Li C, Bovik A, Wu X: Blind image quality assessment using a general regression neural network. *IEEE Trans. Neural Netw.* 2011, 22(5):793-799.
37. Li C, Tang G, Wu X, Ju Y: No-reference image quality assessment with learning phase congruency feature. *J. Electrical Inf. Technol.* 2012, 35(2):484-488.
38. Tourancheau S, Autrusseau F, Sazzad Z, Horita Y: Impact of subjective dataset on the performance of image quality metrics. In *15th IEEE International Conference on Image Processing*. IEEE, Piscataway; 2008:365-368.

## Copyright

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.