- Research Article
- Open Access

# Towards Video Quality Metrics Based on Colour Fractal Geometry

- Mihai Ivanovici^{1} (email author), Noël Richard^{2}, and Christine Fernandez-Maloigne^{2}

**2010**:308035

https://doi.org/10.1155/2010/308035

© Mihai Ivanovici et al. 2010

**Received:** 29 April 2010 | **Accepted:** 30 July 2010 | **Published:** 10 August 2010

## Abstract

Vision is a complex process that integrates multiple aspects of an image: spatial frequencies, topology, and colour. Unfortunately, so far these elements have been taken into consideration independently in the development of image and video quality metrics; we therefore propose an approach that blends them together. Our approach analyses the complexity of colour images in the RGB colour space, based on the probabilistic algorithm for computing the fractal dimension and lacunarity. Given that the existing fractal approaches are defined only for gray-scale images, we extend them to the colour domain. We show how these two colour fractal features capture the multiple aspects that characterize the degradation of the video signal, based on the hypothesis that the quality degradation perceived by the user is directly proportional to the change in fractal complexity. We claim that the two colour fractal measures can objectively assess the quality of the video signal and can serve as metrics for the user-perceived video quality degradation, and we validate them through experimental results obtained for an MPEG-4 video streaming application. Finally, the results are compared against those given by widely accepted metrics and by subjective tests.

## Keywords

- Fractal Dimension
- Video Sequence
- Video Frame
- Video Quality
- Human Visual System

## 1. Video Quality Metrics

There is a plethora of metrics for the assessment of image and video quality [1]. They are traditionally classified as: (i) *full reference* or *reference based*, when both the video sequence at the transmitter and the video sequence at the receiver are available, so the sequence at the receiver can be compared to the original sequence at the transmitter, and (ii) *no reference* or *without reference*, when the video sequence at the transmitter is not available and therefore only the video sequence at the receiver is analyzed. Recently a third class of metrics emerged: the so-called "reduced-reference" metrics [2, 3], which are based on the sequence at the receiver and on some features extracted from the original signal at the transmitter. This is the case of the fractal measures we propose.

For the quality assessment of an image or a video sequence, the metrics can also be divided into *subjective* and *objective*. During the last decade, several quality measures, both subjective and objective, have been proposed, especially for assessing the quality of an image after lossy compression, for image rendering on screen, or for digital cinema [4]. Most of them use models of the human visual system that express image perception as a specific band-pass filter (more precisely, a band-pass filter for achromatic vision and a low-pass filter for chromatic vision) [5]. In this paper we exploit a well-known property of the human visual system: its sensitivity to the visual complexity of the image. We use fractal features, and thus a multiscale approach, to estimate this complexity. In addition, we rely on the hypothesis that fractal geometry is capable of characterizing the image complexity as a whole, that is, both the space-frequency complexity and the colour content (the complexity of the image as reflected in a certain colour space), as well as every aspect of the image degradation, such as a more spread power spectrum and local discontinuities in the natural correlation of the image.

The most complex metrics are based on models of the human visual system, while others are classical signal fidelity metrics, such as the signal-to-noise ratio (SNR) and its variant, the peak SNR (PSNR), the mean-squared error (MSE), and the root MSE (RMSE), which are simply distance measures. These simple measures are unable to capture the degradation of the video signal from a user perspective [6]. On the other hand, subjective video quality measurements are time consuming and must meet complex requirements (see the ITU-T recommendations [7–10]) regarding the conditions of the experiments, such as viewing distance and room lighting. Objective metrics are therefore usually preferred, because they can be implemented as algorithms and are free of human error.

The Video Quality Experts Group (VQEG) (http://www.vqeg.org/) is the main organization dealing with the perceptual quality of the video signal, and it has reported on the existing metrics and measurement algorithms [11]. A survey of video-quality metrics based on models of the human vision system can be found in [12], and several no-reference blockiness metrics are studied and compared in [13]. A more recent state of the art of the perceptual criteria for image quality evaluation can be found in [14]. OPTICOM (http://www.opticom.de/) is the author of a metric for video quality evaluation called "Perceptual Evaluation of Video Quality" (PEVQ), a reference-based metric used to measure the quality degradation of any video application running in mobile or IP-based networks. The PEVQ Analyzer [15] measures several parameters in order to characterize the degradation: brightness, contrast, PSNR, jerkiness, blur, blockiness, and so forth. Some of the first articles proposing quality metrics inspired by human perception [16, 17] also drew attention to some of the drawbacks of the MSE and to the importance of subjective tests. Among the widely accepted metrics for quantifying the user-perceived degradation are those proposed by Winkler, which use image attributes like sharpness and colourfulness [18–20]. In [21], the authors propose a no-reference quality metric also based on contrast, but taking the human perception into account, and in [22] the hue feature is exploited. Wang proposes in [23] a metric based on the structural similarity between the original image and the degraded one. The structural similarity (SSIM) index unifies in its expression several aspects: the similarity of the local patch luminances, contrast, and structure. This metric was followed by a more complex one, based on wavelets, as an extension of SSIM to the complex wavelet domain, inspired by the pattern recognition capabilities of the human visual system [24]. Together with Wang, Rajashekar is the author of one of the latest image quality metrics, based on an adaptive spatiochromatic signal decomposition [25, 26]. The method constructs a set of spatiochromatic basis functions for the approximation of several distortions due to changes in lighting, imaging, and viewing conditions. Wavelets are also used by Chandler and Hemami to develop a visual signal-to-noise ratio (VSNR) metric [27] based on their recent psychophysical findings [28–30]. Also related to wavelets, a multiresolution model based on natural scene statistics is used in [31].

*dirtiness* would be useful.

The degradation that affects the video frames is in fact a mixture of several impairments, including blockiness and the sudden occurrence of new colours. The modifications of the image content are reflected both in the colour histograms (a larger spread of the histogram due to the presence of new colours) and in the spectral representation of the luminance and chrominance (high frequencies due to blockiness). Given all the above considerations, we believe that metrics like blur, contrast, brightness, and even blockiness lose their meaning and cannot reflect the degradation; therefore, they cannot be applied to such degraded video frames. Metrics able to capture all the aspects of the degradation that reflect the colour spread, that is, the amount of new colours occurring in the degraded video frames, would be more appropriate. We therefore consider that approaches based on multiscale analysis and image complexity are better adapted to video-quality assessment. Fractal analysis-based approaches offer the possibility to synthesize into just one measure, adapted to the human visual system, all the features relevant to the quality of an image (e.g., colourfulness and sharpness), instead of analyzing all image characteristics independently and then finding a way to combine the intermediate results. Due to its multiscale nature, fractal analysis is in the spirit of all the multiresolution wavelet-based approaches mentioned before, which unfortunately work only for gray-scale images. Therefore, one of the advantages of our approach is that it also takes the colour information into account. In addition, the fractal measures are invariant to transformations such as translation and rotation.

Our choice is also justified by the way humans perceive fractal complexity. In a study of human perception conducted on fractal pictures [33], the authors conclude that "the hypothesis on the applicability and fulfillment of Weber-Fechner law for the perception of time, complexity and subjective attractiveness was confirmed". Their tests aimed at correlating the human perception of time, complexity, and aesthetic attractiveness with the fractal dimension and the Lyapunov exponent, based on the hypothesis that the perception of fractal objects may reveal insights into the human perceptual process. In [34], the most attractive fractals appeared to be those with a fractal dimension between 1.1 and 1.5. According to [35], "the prevalence of fractals in our natural environment has motivated a number of studies to investigate the relationship between a pattern's fractal character and its visual properties", for example, [36, 37]. The authors of [35] investigate visual appeal as a function of the fractal dimension and establish three intervals: [1.1–1.2] low preference, [1.3–1.5] high preference, and [1.6–1.9] low preference. Pentland finds in his psychophysical studies [38, 39] that, for one-dimensional fractional Brownian motion and two-dimensional Brodatz textures, the correlation between the fractal dimension and the perceived roughness is greater than 0.9.

Last but not least, the very etymology of the word "complex" (from Latin, meaning "twisted together" and designating a system composed of closely connected components) emphasizes the presence of multiple components that interact with each other, generating an emergent property [40].

## 2. Fractal Analysis

The fractal geometry introduced by Mandelbrot in 1983 to describe self-similar sets called fractals [41] is generally used to characterize natural objects that are impossible to describe using classical (Euclidean) geometry. The fractal dimension and the lacunarity are the two best-known and most widely used fractal analysis tools. The fractal dimension characterizes the complexity of a fractal set by indicating how much space is filled, while the lacunarity is a mass distribution function indicating how the space is occupied [42]. These two fractal properties are successfully used to discriminate between different structures exhibiting a fractal-like appearance [43–45], for classification and segmentation, due to their invariance to scale, rotation, and translation. Fractal geometry has proved to be of great interest for digital image processing and analysis in an extremely wide range of applications, including finance [46], medicine [44, 47, 48], and art [49].

There exist several different mathematical definitions of the fractal dimension, but the box-counting one is the most popular, due to its simple algorithmic formulation compared to the original Hausdorff definition expressed for continuous functions [50]. The box-counting definition of the fractal dimension is \(D = \lim_{\delta \to 0} \frac{\log N(\delta)}{\log(1/\delta)}\), where \(N(\delta)\) is the number of boxes of size \(\delta\) needed to completely cover the fractal set. The first practical approach belongs to Mandelbrot, and it was followed by the elegant probability measure of Voss [51, 52]. On a parallel research path, Allain and Cloitre [53] and Plotnick et al. [54] developed their approach as a version of the basic box-counting algorithm. All the other approaches for the computation of the fractal dimension, like the \(\delta\)-parallel body method [55] (a.k.a. the covering-blanket approach, Minkowski sausage, or morphological covers) or the fuzzy approaches [56], are more complex from an implementation point of view and more difficult to extend to a multidimensional colour space. However, we proposed in [57] a colour extension of the covering-blanket approach based on probabilistic morphology. On the other hand, despite the large number of algorithmic approaches for the computation of the fractal dimension and lacunarity, only a few of them offer the theoretical background that links them to the Hausdorff dimension.
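To make the box-counting definition concrete, here is a minimal sketch (our illustration, not the paper's implementation) that estimates \(D\) for a binary 2D set by covering it with grids of increasing box size and regressing \(\log N(\delta)\) on \(\log(1/\delta)\):

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Estimate the box-counting dimension of a 2D binary set as the
    slope of log N(delta) versus log(1/delta) over a range of box sizes."""
    h, w = mask.shape
    counts = []
    for delta in sizes:
        n = 0
        for i in range(0, h, delta):
            for j in range(0, w, delta):
                if mask[i:i + delta, j:j + delta].any():
                    n += 1  # this box is needed to cover the set
        counts.append(n)
    # slope of log N(delta) against log(1/delta)
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Sanity check: a filled square is a genuinely 2D set,
# so the estimate should be very close to 2.
square = np.ones((256, 256), dtype=bool)
d = box_counting_dimension(square)
```

For a filled square, \(N(\delta) = (256/\delta)^2\) exactly, so the regression recovers the slope 2; for a fractal set such as a Sierpinski carpet the same code yields a non-integer value.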

Such tools, however, were developed a long time ago for small grey-scale images; with the evolution of acquisition techniques, the spatial resolution has increased significantly and, in addition, the world of images has become coloured. The very few existing approaches for the computation of fractal measures for colour images are restricted to a marginal colour analysis, or they transform a gray-scale problem into false colour [48]. In the following section, we briefly present our colour extension of the existing probabilistic algorithm by Voss [51], fully described in [58], which was validated on synthetic colour fractal images [59] and used to characterize the colour textures of psoriatic lesions, in the context of a medical application in dermatology [60]. Then, we show how the colour fractal dimension and lacunarity can be used to characterize the degradation of the video signal for a video streaming application. Without loss of generality, we present the results obtained in the case of an MPEG-4 video-streaming application.

## 3. Colour Fractal Dimension and Lacunarity

Let \(P(m, L)\) be the probability that there are \(m\) points of the set inside a cube of size \(L\) (the *box*), centered at an arbitrary point of the set \(S\). In other words, \(P(m, L)\) is the probability that the signal "visited" \(m\) points of the box of size \(L\). The matrix \(P\) is normalized so that \(\sum_{m=1}^{N} P(m, L) = 1\) for every \(L\), where \(N\) is the maximum number of pixels that can be included in a box of size \(L\). Given that the total number of points in the image is \(M\), the number of boxes that contain \(m\) points is \((M/m)\,P(m, L)\). Thus, the total number of boxes needed to cover the image is

\[N(L) = \sum_{m=1}^{N} \frac{M}{m}\, P(m, L) = M \sum_{m=1}^{N} \frac{1}{m}\, P(m, L).\]

Consequently, \(N(L)\) is proportional to \(L^{-D}\), where \(D\) is the fractal dimension to be estimated.

If a gray-scale image is considered to be a discrete surface \(z = f(x, y)\), where \(z\) is the luminance at every point \((x, y)\) of the space, then a colour image is a hyper-surface in a 3-dimensional colour space. Thus, we deal with a 5-dimensional hyper-space in which each pixel is a 5-dimensional vector \((x, y, R, G, B)\). We use RGB for the representation of colours due to its cubical organization, even though it is not a perceptually uniform Euclidean space. The classical algorithm of Voss uses boxes of variable size \(L\) centered at each pixel of the image and counts how many pixels fall inside each box. We generalize this by counting the pixels for which the Minkowski infinity-norm distance to the center of the hyper-cube is smaller than \(L/2\). In practice, for a certain square of size \(L\) in the \((x, y)\) plane, we count the number of pixels that fall inside a 3-dimensional RGB cube of size \(L\), centered at the colour \((R, G, B)\) of the current pixel. The theoretical development and validation on synthetic colour fractal images can be found in [58].
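The colour generalization can be sketched as below (again an illustrative sketch under our reading of the method, not the published code): the spatial square and the RGB cube together form the 5-dimensional hyper-cube, and the infinity norm reduces to a per-channel absolute difference:

```python
import numpy as np

def colour_fractal_dimension(rgb, sizes=(3, 5, 7)):
    """Colour extension of the probabilistic estimate: each pixel is a
    point (x, y, R, G, B) and a 'box' of size L is an L x L spatial
    square combined with an L x L x L RGB cube centred on the current
    pixel's colour (Minkowski infinity norm)."""
    h, w, _ = rgb.shape
    c = rgb.astype(int)
    n_of_L = []
    for L in sizes:
        r = L // 2
        inv_m = []
        for y in range(r, h - r):
            for x in range(r, w - r):
                patch = c[y - r:y + r + 1, x - r:x + r + 1]
                # infinity-norm distance in RGB to the centre pixel
                dist = np.abs(patch - c[y, x]).max(axis=2)
                m = np.count_nonzero(dist <= r)
                inv_m.append(1.0 / m)
        n_of_L.append(np.mean(inv_m))
    slope, _ = np.polyfit(np.log(sizes), np.log(n_of_L), 1)
    return -slope

# Sanity check: a flat colour patch behaves like a plane (dimension 2);
# complex colour textures are expected to score between 2 and 5.
flat = np.full((32, 32, 3), 128, dtype=np.uint8)
d = colour_fractal_dimension(flat)
```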

The lacunarity characterizes the topological organisation of a fractal object (an image, in our particular case), being a scale-dependent measure of spatial heterogeneity. Images with small lacunarity are more homogeneous with respect to the size distribution and spatial arrangement of gaps, while images with larger lacunarity are more heterogeneous. In addition, the lacunarity should be taken into consideration after inspecting the fractal dimension: in a manner similar to the hue-saturation couple in colour image analysis, the lacunarity becomes of greater importance when the complexity, that is, the fractal dimension, increases.
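As an illustration of the scale-dependent heterogeneity that lacunarity measures, here is a minimal gliding-box sketch (in the spirit of Allain and Cloitre's formulation, \(\Lambda(L) = E[m^2]/E[m]^2\), rather than the authors' colour version), where \(m\) is the mass found in an \(L \times L\) window:

```python
import numpy as np

def lacunarity(mask, L):
    """Gliding-box lacunarity of a binary image at scale L:
    Lambda(L) = E[m^2] / E[m]^2 = var(m)/mean(m)^2 + 1, where m is the
    number of set pixels in an L x L window. Lambda >= 1; higher values
    mean a gappier, more heterogeneous arrangement at that scale."""
    h, w = mask.shape
    masses = []
    for y in range(h - L + 1):          # slide the box over every position
        for x in range(w - L + 1):
            masses.append(int(mask[y:y + L, x:x + L].sum()))
    masses = np.array(masses, dtype=float)
    return masses.var() / masses.mean() ** 2 + 1.0

# A uniform texture has minimal lacunarity (exactly 1 when every window
# holds the same mass); a clustered texture with large gaps scores higher.
uniform = np.ones((16, 16), dtype=bool)
clustered = np.zeros((16, 16), dtype=bool)
clustered[:8, :8] = True
l_uniform = lacunarity(uniform, 4)
l_clustered = lacunarity(clustered, 4)
```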

## 4. Approach Argumentation and Validation

In Figure 1, we present two video frames: one from the original video sequence and the corresponding degraded video frame from the sequence at the receiver, along with the pseudoimage representing the absolute difference between the two. The computed colour fractal dimensions are 3.14, 3.31, and 3.072, respectively. One can see that the larger fractal dimension reflects the increased complexity of the degraded video frame. The increased complexity comes from the blockiness effect, as well as from the dirtiness and the augmented colour content (see also the 3D histograms in Figure 3).

Because the lacunarity is a measure of how the space is occupied, we present in Figure 3 the 3D histograms in the RGB colour space as a visual justification. One can see that the histogram of the degraded video frame is more spread than that of the original video frame, indicating a richer image from the point of view of its colour content.

While it is almost impossible to estimate the impact of the artifacts in the spatial domain without any reference (the original video signal), in the frequency domain it is clear that the artifacts induce very high frequencies and a specific modification of the spectrum, which can be close to the complexity induced by a fractal model.

In addition, given the complexity of the colour Fourier transform based on quaternionic approaches, our approach is currently the more suitable one for a real-time implementation: for an image of size \(N \times N\), the complexity of a parallel implementation of our approach is below that of a 2D Fast Fourier Transform, whose best case is of \(O(N^2 \log N)\) complexity.

## 5. Experimental Results

From the plethora of IP-based video applications, we chose an MPEG-4 streaming application. Streaming applications usually use RTP (Real-Time Protocol) over UDP; therefore, the traffic generated by such an application is inelastic and does not adapt to the network conditions. In addition, neither UDP itself nor the video streaming application implements a retransmission mechanism. Therefore, video streaming applications are very sensitive to packet loss: any packet lost in the network will cause missing bits of information in the MPEG video stream.

Given that packet loss is the major issue for an MPEG-4 video streaming application, in our experiments the induced packet loss percentage varied from 0% to 1.3%. Above this threshold, the application can no longer function (i.e., the connection established between the client and the server breaks), and tests cannot be performed. The test setup is depicted in Figure 9(b): the MPEG-4 streaming server we used was the Helix streaming server from Real Networks (http://www.realnetworks.com/) and the MPEG-4 client was mpeg4ip (http://mpeg4ip.sourceforge.net/). We modified the source code of the client to record the received video sequence as individual frames in bitmap format. We ran the tests using three widely used MPEG-4-coded video sequences: "football", "female", and "train". The video sequences were 10 seconds long, with 250 frames each. The average transmission rate was approximately 1 Mb/s, a constraint resulting from the use of a trial version of the MPEG-4 video streaming server; however, it represents a realistic scenario.

The monitoring system we designed and implemented uses two Fast Ethernet network taps to "sniff" the application traffic on the links between the two Linux PCs that run the video streaming server and client. The traffic is recorded as packet descriptors by four programmable Alteon UTP (Unshielded Twisted Pair) NICs (Network Interface Cards), two for each tap, in order to mirror the full-duplex traffic. From each packet, all the information required for the computation of the network quality of service (QoS) parameters is extracted and stored in the local memory as packet descriptors. The host PCs that control the programmable NICs periodically collect this information and store it in descriptor files. These traffic traces are analyzed in order to accurately quantify the quality degradation induced by the network emulator: one-way delay, jitter, and packet loss, as instantaneous or average values, as well as histograms. In parallel, the video signal is recorded for offline processing. Since the two measurements described above are correlated in time, the effects of the measured network degradation on the quality of the video signal can be estimated by the module denoted the user-perceived quality (UPQ) meter. More results and details about the experimental setup can be found in [62–64].

Three categories of degraded frames can be distinguished: frames exhibiting an *important* or *severe* degradation (top); *less-affected* frames (middle); and *special*, or green, degraded frames (bottom). The difference \(\Delta D\) between the colour fractal dimension of the degraded video frame and that of the original corresponding video frame will be considerable for the first two images, which exhibit an important degradation: almost the entire image is affected by severe blockiness, and the scene cannot be understood. \(\Delta D\) will be small, but still positive, for less-affected images (the football players may no longer be identifiable, but the rest of the scene is unchanged). For the "green" images, the colour fractal dimension is smaller than that of the corresponding original frames; therefore, \(\Delta D\) will be negative.
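In a reduced-reference setting, only the scalar fractal dimension of the original frame needs to accompany the stream, and the sign and magnitude of the difference classify the degradation. A minimal sketch, with purely illustrative thresholds (the paper does not specify any) and the example dimensions 3.14 and 3.31 from Section 4:

```python
def degradation_score(d_original, d_received):
    """Reduced-reference indicator based on the colour fractal
    dimension difference; thresholds below are illustrative only."""
    delta_d = d_received - d_original
    if delta_d > 0.1:
        return "severe degradation (blockiness, new colours)"
    if delta_d > 0.0:
        return "mild degradation"
    return "information loss (e.g. 'green' frames)"

# Using the dimensions reported for the frame pair in Figure 1:
label = degradation_score(3.14, 3.31)
```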

## 6. Comparison

The PSNR is defined as \(\mathrm{PSNR} = 10 \log_{10}\!\left(I_{\max}^2/\mathrm{MSE}\right)\), where \(I_{\max}\) is the maximum intensity level, that is, \(I_{\max} = 1\) for an image with intensities normalized to \([0, 1]\).
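The MSE and PSNR used in the comparison are the standard definitions; a short sketch with normalized intensities (so \(I_{\max} = 1\), matching the MSE magnitudes in the table below):

```python
import numpy as np

def mse(a, b):
    """Mean-squared error between two images of the same shape."""
    return np.mean((np.asarray(a, float) - np.asarray(b, float)) ** 2)

def psnr(a, b, i_max=1.0):
    """Peak SNR in dB: 10 * log10(I_max^2 / MSE); i_max = 1.0 for
    images with intensities normalized to [0, 1]."""
    return 10.0 * np.log10(i_max ** 2 / mse(a, b))

original = np.full((8, 8), 0.5)
degraded = original + 0.1   # uniform offset -> MSE = 0.01
value = psnr(original, degraded)  # 10 * log10(1 / 0.01) = 20 dB
```

Note that the tabulated pairs (e.g., MSE 0.0585 with PSNR 12.33 dB) are consistent with this normalized-intensity convention.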

| Images | SNR [dB] | PSNR [dB] | MSE | SSIM | VSNR |
|---|---|---|---|---|---|
| 10.1, 10.2 | 0.17 | 12.3316 | 0.0585 | 0.3907 | 2.1754 |
| 10.5, 10.6 | 0.319 | 13.0221 | 0.0499 | 0.226 | 1.4855 |
| 10.9, 10.10 | 0.378 | 18.8619 | 0.0130 | 0.5199 | |
| 10.13, 10.14 | 0.178 | 19.7353 | 0.0106 | 0.6199 | 5.7999 |
| 10.17, 10.18 | 0.205 | 14.3382 | 0.0368 | 0.2868 | 3.8740 |
| 10.21, 10.22 | | 4.2135 | 0.3790 | 0.4158 | 6.4629 |
| 10.25, 10.26 | | 5.2437 | 0.2990 | 0.3717 | 6.2717 |

We plan to perform a further comparison between the metrics on larger databases of test images. In addition, we have to mention that SSIM and VSNR were mainly used to assess the quality degradation induced by image compression algorithms, a case in which the image degradation is not as violent as in our experiments. Therefore, the right way to compare our method against all the existing approaches is not straightforward and, definitely, not among the goals of the current paper.

The complexity of the SSIM approach is governed by the size of the window used for computing the local mean and variance and by the circular-symmetric Gaussian weighting function used when computing the map of local SSIM values. The maximum complexity bound in the case of VSNR is given by the complexity of the discrete wavelet transform (DWT) that is used; it is known that an efficient implementation of the DWT is linear in the number of pixels. The complexity of a parallel implementation of our approach is comparable.

## 7. Subjective Tests

The original hypothesis was that the perceived quality is directly proportional to the fractal complexity of an image. In order to validate the approach we proposed for the assessment of video quality from a subjective point of view, we performed several subjective tests on different video frames from video sequences, sport videos of football matches in particular. The aim of the experiments was to show that the complexity of colour fractal images is in accordance with human perception and, therefore, that colour fractal analysis-based tools are appropriate for the development of video quality metrics.

The tests were *reference-based*. After being shown the minimum and the maximum degradation that may affect the video frames, the individuals were asked to grade the perceived degradation with a score between 0 and 5, according to the levels of degradation presented in Table 3, in accordance with the quality levels specified by the ITU.

Levels of perceived image degradation.

| Score | Perceived degradation |
|---|---|
| 0 | No degradation at all |
| 1 | Imperceptible |
| 2 | Perceptible, but not annoying |
| 3 | Slightly annoying |
| 4 | Annoying |
| 5 | Very annoying |

If we exclude images 10.22 and 10.26, for which the estimated colour fractal dimension variation is negative because of the important degradation and the loss of information, the correlation coefficient between the MOS and the colour fractal dimension variation is 0.8523. Although these results must be extended to a bigger image set, the approach opens a new perspective on the perception of colour image complexity. If we take the two images 10.22 and 10.26 into account, the correlation between the mean score and the estimated colour fractal complexity drops to 0.4857. This result, induced by the negative values of the colour fractal complexity variation, may lead to new developments for colour fractal measures. Clearly, the perceived complexity of those images is lower than that of the others.
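The correlation reported above is the Pearson coefficient between the mean opinion scores and the fractal dimension variations. A small sketch of that computation, using made-up illustrative values (not the paper's data):

```python
import numpy as np

# Hypothetical illustration: MOS scores and colour fractal dimension
# variations for a handful of degraded frames (values are invented).
mos     = np.array([1.0, 2.0, 3.0, 4.0, 4.5])
delta_d = np.array([0.05, 0.12, 0.20, 0.31, 0.33])

# Pearson correlation coefficient between the two measurements;
# values near 1 indicate the metric tracks perceived degradation.
r = np.corrcoef(mos, delta_d)[0, 1]
```

Frames with negative variations would be excluded (or handled separately) before computing `r`, mirroring the treatment of images 10.22 and 10.26.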

We conclude that the fractal dimension reflects the perceived visual complexity of the degraded images, as long as the degradation is not extreme and the fractal dimension variation is not negative. We plan to run more subjective experiments in order to strengthen the statistical significance of the results and to propose a better colour fractal estimator that deals with this minor numerical inconsistency.

## 8. Conclusions

We conclude that the colour lacunarity by itself can be used as a no-reference metric to detect important degradation of the video signal at the receiver. The colour fractal dimension and lacunarity can definitely be used as reference-based metrics, but this is usually impossible in a real environment, where the original signal is not available at the receiver. The colour fractal dimension is not sufficient as a stand-alone metric, but in a reduced-reference scenario the fractal features we propose (the colour fractal dimension and the colour lacunarity) can be used to objectively assess any degradation of the received video signal and, given that they are correlated with human perception, they can be used for the development of quality-of-experience metrics. An important aspect, which represents an invaluable advantage, is the robustness of the fractal measures to any modification of the video signal during the broadcast, like translation, rotation, mirroring, or even cropping (e.g., when the image format is changed).

For the computation of the two metrics, we propose a colour extension of the classical probabilistic algorithm designed by Voss. We show that our approach is able to capture the relative complexity of the video frames and the sum of aspects that characterize the degradation of an image; thus, the colour fractal dimension and lacunarity can be used to characterize and objectively assess the degradation of the video signal. To support our approach and conclusions, we also investigated the 3D histograms, the co-occurrence matrices, and the power density functions of the original and degraded video frames. In addition, we present the results of our subjective tests. Given that the fractal features are well correlated with the complexity perceived by the human visual system, they are of great interest as objective metrics in a video quality analysis tool set.

Our choice of the RGB colour space perfectly suits the probabilistic approach, and the extension from cubes to hypercubes was natural and intuitive. We are aware that the RGB colour space may not be the best choice when designing an image analysis algorithm from the point of view of the human visual system. Given that a perceptual objective metric is desired, we plan to further develop our colour fractal metrics by using other colour spaces, for example, Lab or HSL, capable of better capturing and reflecting the human perception of colours, though at a higher computational cost.


## References

- Fernandez-Maloigne C: **Fundamental study for evaluating image quality.** *Proceedings of the Annual Meeting of TTLA*, December 2008, Taiwan.
- Yamada T, Miyamoto Y, Serizawa M, Harasaki H: **Reduced-reference based video quality metrics using representative-luminance values.** *Image Communication* 2009, **24**(7):525-547.
- Oelbaum T, Diepold K: **Building a reduced reference video quality metric with very low overhead using multivariate data analysis.** *Proceedings of the 4th International Conference on Cybernetics and Information Technologies, Systems and Applications (CITSA '07)*, 2007.
- Fernandez-Maloigne C, Larabi MC, Anciaux G: **Comparison of subjective assessment protocols for digital cinema applications.** *Proceedings of the 1st International Workshop on Quality of Multimedia Experience (QoMEX '09)*, July 2009, San Diego, Calif, USA.
- Rosselli V, Larabi M-C, Fernandez-Maloigne C: **Objective quality measurement based on anisotropic contrast perception.** *Proceedings of the 4th European Conference on Colour in Graphics, Imaging, and Vision (CGIV '08)*, June 2008, 108-111.
- Wang Z, Bovik AC: **Mean squared error: love it or leave it? A new look at signal fidelity measures.** *IEEE Signal Processing Magazine* 2009, **26**(1):98-117.
- ITU-R Recommendation BT.500: *Subjective quality assessment methods of television pictures*. International Telecommunications Union; 1998.
- ITU-T Recommendation P.910: *Subjective video quality assessment methods for multimedia applications*. International Telecommunications Union; 1996.
- ITU-R Recommendation J.140: *Subjective assessment of picture quality in digital cable television systems*. International Telecommunications Union; 1998.
- ITU-T Recommendation J.143: *User requirements for objective perceptual video quality measurements in digital cable television*. International Telecommunications Union; 2000.
- Video Quality Experts Group: **The validation of objective models of video quality assessment.** Final report, 2004.
- van den Branden Lambrecht CJ: **Survey of image and video quality metrics based on vision models.** Presentation, August 1997.
- Winkler S, Sharma A, McNally D: **Perceptual video quality and blockiness metrics for multimedia streaming applications.** *Proceedings of the 4th International Symposium on Wireless Personal Multimedia Communications*, September 2001, 553-556.
- Pappas TN, Safranek RJ, Chen J: **Perceptual criteria for image quality evaluation.** In *Handbook of Image and Video Processing*. 2nd edition. Academic Press, San Diego, Calif, USA; 2000:669-686.
- OPTICOM GmbH, Germany: **PEVQ: advanced perceptual evaluation of video quality.** White paper, 2005.
- Teo PC, Heeger DJ: **Perceptual image distortion.** *Proceedings of the IEEE International Conference on Image Processing*, 1994, 982-986.
- Karunasekera SA, Kingsbury NG: **A distortion measure for blocking artifacts in images based on human visual sensitivity.** *IEEE Transactions on Image Processing* 1995, **4**(6):713-724.
- Winkler S: **Visual fidelity and perceived quality: towards comprehensive metrics.** *Human Vision and Electronic Imaging*, January 2001, *Proceedings of SPIE* **4299**:114-125.
- Winkler S: **Issues in vision modeling for perceptual video quality assessment.** *Signal Processing* 1999, **78**(2):231-252.
- Winkler S: *Digital Video Quality: Vision Models and Metrics*. John Wiley & Sons, New York, NY, USA; 2005.
- Bringier B, Richard N, Larabi MC, Fernandez-Maloigne C: **No-reference perceptual quality assessment of colour image.** *Proceedings of the 14th European Signal Processing Conference (EUSIPCO '06)*, September 2006, Florence, Italy.
- Quintard L, Larabi M-C, Fernandez-Maloigne C: **No-reference metric based on the color feature: application to quality assessment of displays.** *Proceedings of the 4th European Conference on Colour in Graphics, Imaging, and Vision (CGIV '08)*, June 2008, 98-103.
- Wang Z, Bovik AC, Sheikh HR, Simoncelli EP: **Image quality assessment: from error visibility to structural similarity.** *IEEE Transactions on Image Processing* 2004, **13**(4):600-612.
- Sampat MP, Wang Z, Gupta S, Bovik AC, Markey MK: **Complex wavelet structural similarity: a new image similarity index.** *IEEE Transactions on Image Processing* 2009, **18**(11):2385-2401.
- Rajashekar U, Wang Z, Simoncelli EP: **Quantifying color image distortions based on adaptive spatio-chromatic signal decompositions.** *Proceedings of the IEEE International Conference on Image Processing (ICIP '09)*, November 2009, Cairo, Egypt, 2213-2216.
- Rajashekar U, Wang Z, Simoncelli EP: **Perceptual quality assessment of color images using adaptive signal representation.** *Human Vision and Electronic Imaging XV*, January 2010, San Jose, Calif, USA, *Proceedings of SPIE* **7527**.
- Chandler DM, Hemami SS: **VSNR: a wavelet-based visual signal-to-noise ratio for natural images.** *IEEE Transactions on Image Processing* 2007, **16**(9):2284-2298.
- Chandler DM, Lim KH, Hemami SS: **Effects of spatial correlations and global precedence on the visual fidelity of distorted images.** *Human Vision and Electronic Imaging XI*, January 2006, San Jose, Calif, USA, *Proceedings of SPIE* **6057**.
- Chandler DM, Hemami SS: **Effects of natural images on the detectability of simple and compound wavelet subband quantization distortions.** *Journal of the Optical Society of America A* 2003, **20**(7):1164-1180.
- Chandler DM, Hemami SS:
**Suprathreshold image compression based on contrast allocation and global precedence.***Human Vision and Electronic Imaging VIII, January 2003, Santa Clara, Calif, USA, Proceedings of SPIE***5007:**73-86.View ArticleGoogle Scholar - Sheikh HR, Bovik AC, Cormack L:
**No-reference quality assessment using natural scene statistics: JPEG2000.***IEEE Transactions on Image Processing*2005,**14**(11):1918-1927.View ArticleGoogle Scholar - Malkowski M, Claßen D:
**Performance of video telephony services in UMTS using live measurements and network emulation.***Wireless Personal Communications*2008,**46**(1):19-32. 10.1007/s11277-007-9353-5View ArticleGoogle Scholar - Mitina OV, Abraham FD:
**The use of fractals for the study of the psychology of perception: psychophysics and personality factors, a brief report.***International Journal of Modern Physics C*2003,**14**(8):1047-1060. 10.1142/S0129183103005182View ArticleMATHGoogle Scholar - Sprott JC:
**Automatic generation of strange attractors.***Computers and Graphics*1993,**17**(3):325-332. 10.1016/0097-8493(93)90082-KView ArticleMathSciNetGoogle Scholar - Taylor RP, Spehar B, Wise JA, Clifford CWG, Newell BR, Hagerhall CM, Purcell T, Martin TP:
**Perceptual and physiological responses to the visual complexity of fractal patterns.***Nonlinear Dynamics, Psychology, and Life Sciences*2005,**9**(1):89-114.Google Scholar - Knill DC, Field D, Kersten D:
**Human discrimination of fractal images.***Journal of the Optical Society of America A*1990,**7**(6):1113-1123. 10.1364/JOSAA.7.001113View ArticleGoogle Scholar - Cutting JE, Garvin JJ:
**Fractal curves and complexity.***Perception and Psychophysics*1987,**42**(4):365-370. 10.3758/BF03203093View ArticleGoogle Scholar - Pentland AP:
**Fractal-based description of natural scenes.***IEEE Transactions on Pattern Analysis and Machine Intelligence*1984,**6**(6):661-674.View ArticleGoogle Scholar - Pentland AP:
**On perceiving 3-d shape and texture.***Proceedings of the Symposium on Computational Models in Human Vision, 1986, Rochester, NY, USA*Google Scholar - Ghosh K, Bhaumik K:
**Complexity in human perception of brightness: a historical review on the evolution of the philosophy of visual perception.***OnLine Journal of Biological Sciences*2010,**10**(1):17-35. 10.3844/ojbsci.2010.17.35View ArticleGoogle Scholar - Mandelbrot BB:
*The Fractal Geometry of Nature*. W.H. Freeman and Co, New York, NY, USA; 1982.MATHGoogle Scholar - Tolle CR, McJunkin TR, Rohrbaugh DT, LaViolette RA:
**Lacunarity definition for ramified data sets based on optimal cover.***Physica D*2003,**179**(3-4):129-152. 10.1016/S0167-2789(03)00029-0View ArticleMathSciNetMATHGoogle Scholar - Chen W-S, Yuan S-Y, Hsiao H, Hsieh C-M:
**Algorithms to estimating fractal dimension of textured images.***Proceedings of IEEE Interntional Conference on Acoustics, Speech, and Signal Processing (ICASSP '01), May 2001*1541-1544.Google Scholar - Lee W-L, Chen Y-C, Hsieh K-S:
**Ultrasonic liver tissues classification by fractal feature vector based on M-band wavelet transform.***IEEE Transactions on Medical Imaging*2003,**22**(3):382-392. 10.1109/TMI.2003.809593View ArticleMathSciNetGoogle Scholar - Frazer GW, Wulder MA, Niemann KO:
**Simulation and quantification of the fine-scale spatial pattern and heterogeneity of forest canopy structure: a lacunarity-based method designed for analysis of continuous canopy heights.***Forest Ecology and Management*2005,**214**(1–3):65-90.View ArticleGoogle Scholar - Peters EE:
*Fractal Market Analysis: Applying Chaos Theory to Investmentand Economics*. John Wiley & Sons, New York, NY, USA; 1952.Google Scholar - Nonnenmacher TF, Losa GA, Weibel ER:
*Fractals in Biology and Medicine*. Birkhäuser, New York, NY, USA; 1994.View ArticleMATHGoogle Scholar - Manousaki AG, Manios AG, Tsompanaki EI, Tosca AD:
**Use of color texture in determining the nature of melanocytic skin lesions—a qualitative and quantitative approach.***Computers in Biology and Medicine*2006,**36**(4):419-427. 10.1016/j.compbiomed.2005.01.004View ArticleGoogle Scholar - Taylor RP, Spehar B, Clifford CWG, Newell BR: The visual complexity of pollock's dripped fractals. Proceedings of the International Conference of Complex Systems, 2002Google Scholar
- Falconer K: *Fractal Geometry, Mathematical Foundations and Applications*. John Wiley & Sons, New York, NY, USA; 1990.
- Voss R: **Random fractals: characterization and measurement.** In *Scaling Phenomena in Disordered Systems*. Plenum Press, New York, NY, USA; 1985:1-11.
- Keller JM, Chen S, Crownover RM: **Texture description and segmentation through fractal geometry.** *Computer Vision, Graphics and Image Processing* 1989, **45**(2):150-166. doi:10.1016/0734-189X(89)90130-8
- Allain C, Cloitre M: **Characterizing the lacunarity of random and deterministic fractal sets.** *Physical Review A* 1991, **44**(6):3552-3558. doi:10.1103/PhysRevA.44.3552
- Plotnick RE, Gardner RH, Hargrove WW, Prestegaard K, Perlmutter M: **Lacunarity analysis: a general technique for the analysis of spatial patterns.** *Physical Review E* 1996, **53**(5):5461-5468. doi:10.1103/PhysRevE.53.5461
- Maragos P, Sun F: **Measuring the fractal dimension of signals: morphological covers and iterative optimization.** *IEEE Transactions on Signal Processing* 1993, **41**(1):108-121. doi:10.1109/TSP.1993.193131
- Pedrycz W, Bargiela A: **Fuzzy fractal dimensions and fuzzy modeling.** *Information Sciences* 2003, **153**:199-216.
- Ivanovici M, Richard N: **Colour covering blanket.** *Proceedings of the International Conference on Image Processing, Computer Vision and Pattern Recognition, July 2010, Las Vegas, Nev, USA*.
- Ivanovici M, Richard N: **Fractal dimension of colour fractal images.** *IEEE Transactions on Image Processing*. In revision.
- Ivanovici M, Richard N: **Colour fractal image generation.** *Proceedings of the International Conference on Image Processing, Computer Vision and Pattern Recognition, July 2009, Las Vegas, Nev, USA*, 93-96.
- Ivanovici M, Richard N, Decean H: **Fractal dimension and lacunarity of psoriatic lesions—a colour approach.** *Proceedings of the 2nd WSEAS International Conference on Biomedical Electronics and Biomedical Informatics (BEBI '09), August 2009, Moscow, Russia*, 199-202.
- Ivanovici M, Richard N: **The lacunarity of colour fractal images.** *Proceedings of the International Conference on Image Processing (ICIP '09), November 2009, Cairo, Egypt*, 453-456.
- Ivanovici M: **Objective performance evaluation for MPEG-4 video streaming applications.** *Scientific Bulletin of University "POLITEHNICA" Bucharest C* 2005, **67**(3):55-64.
- Ivanovici M, Beuran R: **User-perceived quality assessment for multimedia applications.** *Proceedings of the 10th International Conference on Optimization of Electrical and Electronic Equipment (OPTIM '06), May 2006*, 55-60.
- Ivanovici M, Beuran R: **Correlating quality of experience and quality of service for network applications.** In *Quality of Service Architectures for Wireless Networks: Performance Metrics and Management*. IGI-Global; 2010:326-351.

## Copyright

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.