 Research Article
 Open Access
Towards Video Quality Metrics Based on Colour Fractal Geometry
 Mihai Ivanovici^{1},
 Noël Richard^{2} and
 Christine Fernandez-Maloigne^{2}
https://doi.org/10.1155/2010/308035
© Mihai Ivanovici et al. 2010
 Received: 29 April 2010
 Accepted: 30 July 2010
 Published: 10 August 2010
Abstract
Vision is a complex process that integrates multiple aspects of an image: spatial frequencies, topology, and colour. Unfortunately, so far, all these elements have been taken into consideration independently for the development of image and video quality metrics; we therefore propose an approach that blends them all together. Our approach allows for the analysis of the complexity of colour images in the RGB colour space, based on the probabilistic algorithm for calculating the fractal dimension and lacunarity. Given that all the existing fractal approaches are defined only for grayscale images, we extend them to the colour domain. We show how these two colour fractal features capture the multiple aspects that characterize the degradation of the video signal, based on the hypothesis that the quality degradation perceived by the user is directly proportional to the modification of the fractal complexity. We claim that the two colour fractal measures can objectively assess the quality of the video signal and can be used as metrics for the user-perceived video quality degradation. We validated them through experimental results obtained for an MPEG-4 video streaming application; finally, the results are compared against those given by unanimously accepted metrics and against subjective tests.
Keywords
 Fractal Dimension
 Video Sequence
 Video Frame
 Video Quality
 Human Visual System
1. Video Quality Metrics
There is a plethora of metrics for the assessment of image and video quality [1]. They are traditionally divided into: (i) full-reference or reference-based metrics, when both the video sequence at the transmitter and the video sequence at the receiver are available and the sequence at the receiver is compared to the original sequence at the transmitter, and (ii) no-reference metrics, when the video sequence at the transmitter is not available and therefore only the video sequence at the receiver is analyzed. Recently a third class of metrics emerged, the so-called "reduced-reference" metrics [2, 3], which are based on the sequence at the receiver and on some features extracted from the original signal at the transmitter. This is the case of the fractal measures we propose.
For the quality assessment of an image or a video sequence, the metrics can also be divided into subjective and objective. During the last decade, several quality measures, both subjective and objective, have been proposed, especially for the assessment of the quality of an image after lossy compression, image rendering on screen, or for digital cinema [4]. Most of them use models of the human visual system to express the image perception as a specific pass-band filter (more precisely, a pass-band filter for achromatic vision and a low-pass filter for chromatic vision) [5]. In this paper we exploit a well-known property of the human visual system, namely its sensitivity to the visual complexity of the image. We use fractal features, thus a multiscale approach, to estimate this complexity. In addition, we rely on the hypothesis that fractal geometry is capable of characterizing the image complexity as a whole: the space-frequency complexity and the colour content (the complexity of the image reflected in a certain colour space), as well as every aspect of image degradation, such as a more spread power spectrum and local discontinuities in the natural correlation of the image.
The most complex metrics are based on models of the human visual system, while some now-classical signal fidelity metrics, like the signal-to-noise ratio (SNR) and its variant peak SNR (PSNR), the mean-squared error (MSE) and root MSE (RMSE), are simply distance measures. These simple measures are unable to capture the degradation of the video signal from a user perspective [6]. On the other hand, subjective video quality measurements are time consuming and must meet complex requirements (see the ITU-T recommendations [7–10]) regarding the conditions of the experiments, such as viewing distance and room lighting. However, objective metrics are usually preferred, because they can be implemented as algorithms and are free of human error.
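The classical fidelity measures mentioned above are straightforward to compute. A minimal sketch in Python (NumPy assumed; the function names are ours, not from the paper):

```python
import numpy as np

def mse(ref, deg):
    """Mean-squared error between a reference and a degraded frame."""
    ref = ref.astype(np.float64)
    deg = deg.astype(np.float64)
    return np.mean((ref - deg) ** 2)

def psnr(ref, deg, peak=255.0):
    """Peak signal-to-noise ratio in dB; `peak` is the maximum
    intensity level (255 for an 8-bit image)."""
    err = mse(ref, deg)
    if err == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(peak ** 2 / err)
```

Both are pure distance measures: they compare frames pixel by pixel and, as the text notes, carry no model of how a viewer perceives the degradation.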
The Video Quality Experts Group (VQEG) (http://www.vqeg.org/) is the main organization dealing with the perceptual quality of the video signal, and it has reported on the existing metrics and measurement algorithms [11]. A survey of video quality metrics based on models of the human visual system can be found in [12], and several no-reference blockiness metrics are studied and compared in [13]. A more recent state of the art of the perceptual criteria for image quality evaluation can be found in [14]. OPTICOM (http://www.opticom.de/) is the author of a video quality metric called "Perceptual Evaluation of Video Quality" (PEVQ), a reference-based metric used to measure the quality degradation of any video application running in mobile or IP-based networks. The PEVQ Analyzer [15] measures several parameters in order to characterize the degradation: brightness, contrast, PSNR, jerkiness, blur, blockiness, and so forth. Some of the first articles that proposed quality metrics inspired by human perception [16, 17] also drew attention to some of the drawbacks of the MSE and to the importance of subjective tests. Among the unanimously accepted metrics for the quantification of the user-perceived degradation are the ones proposed by Winkler, which use image attributes like sharpness and colourfulness [18–20]. In [21], the authors propose a no-reference quality metric also based on contrast, but taking into account the human perception, and in [22], the hue feature is exploited. Wang proposes in [23] a metric based on the structural similarity between the original image and the degraded one. The structural similarity (SSIM) index unifies in its expression several aspects: the similarity of the local patch luminances, contrasts, and structures. This metric was followed by a more complex one, based on wavelets, as an extension of SSIM to the complex wavelet domain, inspired by the pattern recognition capabilities of the human visual system [24].
Together with Wang, Rajashekar is the author of one of the latest image quality metrics, based on an adaptive spatio-chromatic signal decomposition [25, 26]. The method constructs a set of spatio-chromatic basis functions for the approximation of several distortions due to changes in lighting, imaging, and viewing conditions. Wavelets are also used by Chandler and Hemami to develop a visual signal-to-noise ratio (VSNR) metric [27] based on their recent psychophysical findings [28–30]. Also related to wavelets, a multiresolution model based on natural scene statistics is used in [31].
The degradation that affects the video frames is in fact a mixture of several impairments, including blockiness and the sudden occurrence of new colours. The modifications of the image content are reflected both in the colour histograms (a larger spread of the histogram due to the presence of new colours) and in the spectral representation of the luminance and chrominance (high frequencies due to blockiness). Given all the above considerations, we believe that metrics like blur, contrast, brightness, and even blockiness lose their meaning and are not able to reflect the degradation; therefore, they cannot be applied to such degraded video frames. Metrics able to capture all the aspects of the degradation that are reflected in the colour spread, that is, the amount of new colours occurring in the degraded video frames, would be more appropriate. We therefore consider that approaches based on multiscale analysis and image complexity are better adapted to video quality assessment. Fractal analysis-based approaches offer the possibility to synthesize all the features relevant for the quality of an image (e.g., colourfulness and sharpness) into just one measure adapted to the human visual system, instead of analyzing all image characteristics independently and then finding a way to combine the intermediate results. Due to its multiscale nature, fractal analysis is in accordance with the spirit of all the multiresolution wavelet-based approaches mentioned before, which unfortunately work only for grayscale images. Therefore, one of the advantages of our approach is the fact that it also takes the colour information into account. In addition, the fractal measures are invariant to geometric transformations such as translation and rotation.
Our choice is also justified by the way that humans perceive fractal complexity. In a study on human perception conducted on fractal pictures [33], the authors conclude that "the hypothesis on the applicability and fulfillment of Weber-Fechner law for the perception of time, complexity and subjective attractiveness was confirmed". Their tests aimed at correlating the human perception of time, complexity, and aesthetic attractiveness with the fractal dimension and the Lyapunov exponent, based on the hypothesis that the perception of fractal objects may reveal insights into the human perceptual process. In [34], the most attractive fractals appeared to be the ones with a fractal dimension between 1.1 and 1.5. According to [35], "the prevalence of fractals in our natural environment has motivated a number of studies to investigate the relationship between a pattern's fractal character and its visual properties", for example, [36, 37]. The authors of [35] investigate the visual appeal as a function of the fractal dimension, and they establish three intervals: [1.1–1.2] low preference, [1.3–1.5] high preference, and [1.6–1.9] low preference. Pentland finds in his psychophysical studies [38, 39] that, for the one-dimensional fractional Brownian motion and the two-dimensional Brodatz textures, the correlation between the fractal dimension and the perceived roughness is more than 0.9.
Last but not least, the very etymology of the word "complex" (from the Latin for "twisted together", designating a system composed of closely connected components) emphasizes the presence of multiple components that interact with each other, generating an emergent property [40].
2. Fractal Analysis
The fractal geometry introduced by Mandelbrot in 1983 to describe self-similar sets called fractals [41] is generally used to characterize natural objects that are impossible to describe using classical (Euclidean) geometry. The fractal dimension and the lacunarity are the two best-known and most widely used fractal analysis tools. The fractal dimension characterizes the complexity of a fractal set by indicating how much space is filled, while the lacunarity is a mass distribution function indicating how the space is occupied [42]. These two fractal properties are successfully used to discriminate between different structures exhibiting a fractal-like appearance [43–45], for classification and segmentation, due to their invariance to scale, rotation, and translation. Fractal geometry has proved to be of great interest for digital image processing and analysis in an extremely wide area of applications, such as finance [46], medicine [44, 47, 48], and art [49].
There exist several different mathematical expressions for the fractal dimension, but the box-counting dimension is the most popular, due to its simple algorithmic formulation compared to the original Hausdorff definition expressed for continuous functions [50]. The box-counting definition of the fractal dimension is D = lim_{ε→0} log N(ε) / log(1/ε), where N(ε) is the number of boxes of size ε needed to completely cover the fractal set. The first practical approach belongs to Mandelbrot, but it was followed by the elegant probability measure of Voss [51, 52]. On a parallel research path, Allain and Cloitre [53] and Plotnick et al. [54] developed their approach as a version of the basic box-counting algorithm. All the other approaches for the computation of the fractal dimension, like the parallel body method [55] (a.k.a. the covering-blanket approach, Minkowski sausage, or morphological covers) or the fuzzy approach [56], are more complex from an implementation point of view and more difficult to extend to a multidimensional colour space. However, we proposed in [57] a colour extension of the covering-blanket approach based on probabilistic morphology. On the other hand, despite the large number of algorithmic approaches for the computation of the fractal dimension and lacunarity, only a few of them offer the theoretical background that links them to the Hausdorff dimension.
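The basic box-counting estimator can be sketched in a few lines. This is the plain binary-image variant (not the probabilistic Voss scheme used later in the paper); box sizes, the fitting range, and the name `box_counting_dimension` are our assumptions:

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16, 32)):
    """Estimate the box-counting dimension of a binary image.

    For each box size s, count the number N(s) of s-by-s grid boxes
    containing at least one foreground pixel, then fit log N(s)
    against log(1/s); the slope estimates the dimension D.
    """
    counts = []
    for s in sizes:
        h, w = mask.shape
        # pad so the grid of boxes covers the whole image
        ph, pw = (-h) % s, (-w) % s
        padded = np.pad(mask, ((0, ph), (0, pw)))
        blocks = padded.reshape(padded.shape[0] // s, s,
                                padded.shape[1] // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```

For a filled square the estimate approaches 2, and for a straight line it approaches 1, matching the intuition that the dimension measures how much space the set fills.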
However, such tools were developed a long time ago for small-size greyscale images; due to the evolution of acquisition techniques, the spatial resolution has significantly increased and, in addition, the world of images has become coloured. The very few existing approaches for the computation of fractal measures for colour images are restricted to a marginal colour analysis, or they transform a grayscale problem into false colour [48]. In the following section, we briefly present our colour extension of the existing probabilistic algorithm by Voss [51], fully described in [58], which was validated on synthetic colour fractal images [59] and used to characterize the colour textures representing psoriatic lesions, in the context of a medical application in dermatology [60]. Then, we show how the colour fractal dimension and lacunarity can be used to characterize the degradation of the video signal for a video streaming application. Without loss of generality, we present the results we obtained in the case of an MPEG-4 video streaming application.
3. Colour Fractal Dimension and Lacunarity
Consequently, the number of boxes N(L) is proportional to L^{-D}, where D is the fractal dimension to be estimated.
If a grayscale image is considered to be a discrete surface z = f(x, y), where z is the luminance at every point (x, y) of the space, then a colour image is a hypersurface in a 3-dimensional colour space. Thus, we deal with a 5-dimensional hyperspace where each pixel is a 5-dimensional vector (x, y, R, G, B). We use RGB for the representation of colours due to its cubical organization, even though it is not a Euclidean uniform space. The classical algorithm of Voss uses boxes of variable size L centered at each pixel of the image and counts how many pixels fall inside each box. We generalize this by counting the pixels for which the Minkowski infinity-norm distance to the center of the hypercube is smaller than half the box size. Practically, for a certain square of size L in the image plane, we count the number of pixels that fall inside a 3-dimensional RGB cube of size L, centered at the colour of the current pixel. The theoretical development and validation on synthetic colour fractal images can be found in [58].
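The counting step described above can be sketched as follows. This is only a loose illustration of the colour extension of Voss's scheme, treating each pixel as a 5-dimensional vector and using the Chebyshev (infinity-norm) criterion; the exact normalisation and regression of the paper's estimator (fully described in [58]) may differ, and the function name is ours:

```python
import numpy as np

def colour_box_counts(img, box_sizes=(3, 5, 7, 9)):
    """For each box size L, average over all pixels the number of
    neighbours whose 5-D vector (x, y, R, G, B) lies within the
    hypercube of side L centred on the current pixel, that is,
    spatial offset and all three colour differences at most L//2.
    Returns the mean masses and the slope of log(mass) vs log(L),
    a rough estimate of the colour fractal dimension.
    """
    h, w, _ = img.shape
    img = img.astype(np.int32)  # avoid uint8 wrap-around in differences
    means = []
    for L in box_sizes:
        r = L // 2
        total = 0
        for y in range(h):
            for x in range(w):
                y0, y1 = max(0, y - r), min(h, y + r + 1)
                x0, x1 = max(0, x - r), min(w, x + r + 1)
                patch = img[y0:y1, x0:x1]
                # infinity-norm in colour: every channel within r
                close = (np.abs(patch - img[y, x]) <= r).all(axis=2)
                total += int(close.sum())
        means.append(total / (h * w))
    slope, _ = np.polyfit(np.log(box_sizes), np.log(means), 1)
    return means, slope
```

On a constant-colour image the colour constraint is always satisfied, so the mass grows roughly with the window area and the slope tends towards 2, as expected for a flat surface.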
The lacunarity characterizes the topological organisation of a fractal object, an image in our particular case, being a scale-dependent measure of spatial heterogeneity. Images with small lacunarity are more homogeneous with respect to the size distribution and spatial arrangement of gaps, while images with larger lacunarity are more heterogeneous. In addition, lacunarity should be taken into consideration after inspecting the fractal dimension: in a similar manner to the hue-saturation couple in colour image analysis, the lacunarity becomes of greater importance when the complexity, that is, the fractal dimension, increases.
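For reference, the standard gliding-box lacunarity of Allain and Cloitre [53] at a single scale can be sketched as below for a binary image; the paper's colour lacunarity extends this idea to the RGB hypercubes, so this snippet is only the greyscale/binary baseline:

```python
import numpy as np

def gliding_box_lacunarity(mask, box_size):
    """Gliding-box lacunarity of a binary image at one scale.

    Slide a box of side `box_size` over every position, record the
    mass M (number of foreground pixels) in each box, and return
    Lambda = E[M^2] / E[M]^2, the normalised second moment of the
    box-mass distribution (1 for perfectly homogeneous occupancy).
    """
    h, w = mask.shape
    s = box_size
    masses = []
    for y in range(h - s + 1):
        for x in range(w - s + 1):
            masses.append(mask[y:y + s, x:x + s].sum())
    masses = np.asarray(masses, dtype=np.float64)
    mean = masses.mean()
    if mean == 0:
        return np.nan  # empty image: lacunarity undefined
    return (masses ** 2).mean() / mean ** 2
```

A fully occupied image gives exactly 1, while a sparse image with isolated foreground pixels gives a value well above 1, matching the interpretation of lacunarity as gap heterogeneity.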
4. Approach Argumentation and Validation
In Figure 1, we present two video frames: one from the original video sequence and the corresponding degraded video frame from the sequence at the receiver, along with the pseudo-image representing the absolute difference between the two. The computed colour fractal dimensions are 3.14, 3.31, and 3.072, respectively. One can see that the larger fractal dimension reflects the increased complexity of the degraded video frame. The increased complexity comes from the blockiness effect, as well as from the dirtiness and the augmented colour content (see also the 3D histograms in Figure 3).
Because the lacunarity is a measure of how the space is occupied, we present in Figure 3 the 3D histograms in the RGB colour space as a visual justification. One can see that the histogram of the degraded video frame is more spread than that of the original video frame, indicating a richer image from the point of view of its colour content.
Given that it is almost impossible to estimate the impact of the artifacts in the spatial domain without any reference (the original video signal), in the frequency domain it is clear enough that the artifacts induce very high frequencies and a specific modification of the spectrum, which could be close to the complexity induced by a fractal model.
In addition, due to the complexity of the colour Fourier transform based on quaternionic approaches, our approach is at this moment the more suitable one for a real-time implementation: for an image of a given size, the complexity of a parallel implementation of our approach is lower than that of the 2D Fast Fourier Transform even in its best case.
5. Experimental Results
From the plethora of IP-based video applications, we chose an MPEG-4 streaming application. Streaming applications usually use RTP (Real-time Transport Protocol) over UDP; therefore, the traffic generated by such an application is inelastic and does not adapt to the network conditions. In addition, neither UDP itself nor the video streaming application implements a retransmission mechanism. Therefore, video streaming applications are very sensitive to packet loss: any packet lost in the network will cause missing bits of information in the MPEG video stream.
Given that packet loss is the major issue for an MPEG-4 video streaming application, in our experiments the induced packet loss percentage varied from 0% to 1.3%. Above this threshold, the application can no longer function (i.e., the connection established between the client and the server breaks), and tests cannot be performed. The test setup is depicted in Figure 9(b): the MPEG-4 streaming server we used was the Helix streaming server from Real Networks (http://www.realnetworks.com/) and the MPEG-4 client was mpeg4ip (http://mpeg4ip.sourceforge.net/). We modified the source code of the client to record the received video sequence as individual frames in bitmap format. We ran the tests using three widely used video sequences, "football", "female", and "train", MPEG-4 coded. The video sequences were 10 seconds long, with 250 frames each. The average transmission rate was approximately 1 Mb/s, a constraint imposed by the use of a trial version of the MPEG-4 video streaming server; however, it represents a realistic scenario.
The monitoring system we designed and implemented uses two Fast Ethernet network taps to "sniff" the application traffic on the links between the two Linux PCs that run the video streaming server and client. The traffic is recorded by four programmable Alteon NICs (Network Interface Cards) with UTP (Unshielded Twisted Pair) ports, two for each tap, in order to mirror the full-duplex traffic. From each packet, all the information required for the computation of the network quality of service (QoS) parameters is extracted and stored in the local memory as packet descriptors. The host PCs that control the programmable NICs periodically collect this information and store it in descriptor files. These traffic traces are analyzed in order to accurately quantify the quality degradation induced by the network emulator: one-way delay, jitter, and packet loss, as instantaneous or average values, as well as histograms. In parallel, the video signal is recorded for offline processing. Since the two measurements described above are correlated in time, the effects of the measured network degradation on the quality of the video signal can be estimated by the module denoted user-perceived quality (UPQ) meter. More results and details about the experimental setup can be found in [62–64].
6. Comparison
The peak signal-to-noise ratio is defined as PSNR = 10 log10(M^2 / MSE), where M is the maximum intensity level, that is, 255 for an 8-bit image.
Comparison between the colour fractal dimension variation (ΔCFD) and SNR, PSNR, MSE, SSIM, and VSNR.

Images       | ΔCFD  | SNR [dB] | PSNR [dB] | MSE    | SSIM   | VSNR
10.1, 10.2   | 0.17  | –        | 12.3316   | 0.0585 | 0.3907 | 2.1754
10.5, 10.6   | 0.319 | –        | 13.0221   | 0.0499 | 0.226  | 1.4855
10.9, 10.10  | 0.378 | –        | 18.8619   | 0.0130 | 0.5199 | –
10.13, 10.14 | 0.178 | –        | 19.7353   | 0.0106 | 0.6199 | 5.7999
10.17, 10.18 | 0.205 | –        | 14.3382   | 0.0368 | 0.2868 | 3.8740
10.21, 10.22 | –     | –        | 4.2135    | 0.3790 | 0.4158 | 6.4629
10.25, 10.26 | –     | –        | 5.2437    | 0.2990 | 0.3717 | 6.2717
We plan to perform a further comparison between the metrics on larger databases of test images. In addition, we must mention that SSIM and VSNR were mainly used to assess the quality degradation induced by image compression algorithms, in which case the image degradation is not as violent as in our experiments. Therefore, the right way to compare our method against all the existing approaches is not straightforward and is definitely not amongst the goals of the current paper.
Complexity of approaches.

Approach   | CFD | SNR | PSNR | MSE | SSIM | VSNR
Complexity | –   | –   | –    | –   | –    | –
The constant in the complexity of the SSIM approach is given by the size of the window used for computing the local mean and variance, and by the circular-symmetric Gaussian weighting function that is used when computing the map of local SSIM values. The maximum complexity bound in the case of VSNR is clearly given by the complexity of the discrete wavelet transform (DWT) that is used; it is known that an efficient implementation of the DWT is linear in the number of samples. An ordering of these complexities follows directly; moreover, a parallel implementation of our approach would reduce its complexity even further.
7. Subjective Tests
The original hypothesis was that the perceived quality is directly proportional to the fractal complexity of an image. In order to validate the proposed video quality assessment approach from a subjective point of view, we performed several subjective tests on different video frames from video sequences, in particular sport videos of football matches. The aim of the experiments was to prove that the complexity of colour fractal images is in accordance with human perception and, therefore, that colour fractal analysis-based tools are appropriate for the development of video quality metrics.
Levels of perceived image degradation.

0  No degradation at all
1  Imperceptible
2  Perceptible, but not annoying
3  Slightly annoying
4  Annoying
5  Very annoying
The MOS and standard deviation.

Image | (10.2) | (10.6) | (10.10) | (10.14) | (10.18) | (10.22) | (10.26)
MOS   | 4.6296 | 4.2963 | 4.1852  | 2.2222  | 2.1111  | 5.0000  | 3.4444
σ     | 0.4921 | 0.6688 | 0.6815  | 0.6980  | 0.8006  | 0       | 0.8006
ΔCFD  | 0.17   | 0.319  | 0.378   | 0.178   | 0.205   | –       | –
CFD   | 3.31   | 3.357  | 3.373   | 2.983   | 3.179   | 2.284   | 2.464
If we exclude images 10.22 and 10.26, for which the estimated colour fractal dimension variation is negative because of the severe degradation and loss of information, the correlation coefficient between the MOS and the colour fractal dimension is 0.8523. Despite the fact that these results must be extended to a larger image set, the approach opens a new perspective on the perception of colour image complexity. If we take the two images 10.22 and 10.26 into account, the correlation between the mean opinion score and the estimated colour fractal complexity is 0.4857. This result, induced by the negative values of the colour fractal complexity variation, may lead to new developments for colour fractal measures. Clearly, the perceived complexity of those images is lower than that of the others.
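The reported coefficient can be reproduced directly from the tabulated values. A minimal check with NumPy, using the MOS and CFD rows of the table above for the five image pairs kept in the analysis (10.22 and 10.26 excluded):

```python
import numpy as np

# MOS (degradation scores) and colour fractal dimension (CFD) of the
# degraded frames, transcribed from the table above.
mos = np.array([4.6296, 4.2963, 4.1852, 2.2222, 2.1111])
cfd = np.array([3.31, 3.357, 3.373, 2.983, 3.179])

# Pearson correlation coefficient between the two series
r = np.corrcoef(mos, cfd)[0, 1]
print(round(r, 4))  # 0.8523
```

This matches the value of 0.8523 stated in the text.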
We conclude that the fractal dimension reflects the perceived visual complexity of the degraded images, as long as the degradation is not extreme and the complexity variation is not negative. We plan to run more subjective experiments in order to strengthen the statistical pertinence of the results and to propose a better colour fractal estimator that deals with this minor numerical inconsistency.
8. Conclusions
We conclude that the colour lacunarity by itself can be used as a no-reference metric to detect severe degradation of the video signal at the receiver. The colour fractal dimension and lacunarity can definitely be used as reference-based metrics, but this is usually impossible in a real environment, where the original signal is not available at the receiver. The colour fractal dimension is not enough to be used as a standalone metric, but in a reduced-reference scenario the fractal features we propose, the colour fractal dimension and the colour lacunarity, can be used to objectively assess any degradation of the received video signal and, given that they are correlated with human perception, they can be used for the development of quality-of-experience metrics. An important aspect, which represents an invaluable advantage, is the robustness of the fractal measures to modifications of the video signal during the broadcast, such as translation, rotation, mirroring, or even cropping (e.g., when the image format is changed).
For the computation of the two metrics, we propose a colour extension of the classical probabilistic algorithm designed by Voss. We show that our approach is able to capture the relative complexity of the video frames and the sum of aspects that characterize the degradation of an image; thus, the colour fractal dimension and lacunarity can be used to characterize and objectively assess the degradation of the video signal. To support our approach and conclusions, we also investigated the 3D histograms, the co-occurrence matrices, and the power density functions of the original and degraded video frames. In addition, we presented the results of our subjective tests. Given that the fractal features are well correlated with the complexity perceived by the human visual system, they are of great interest as objective metrics in a video quality analysis tool set.
Our choice of the RGB colour space perfectly suits the probabilistic approach, and the extension from cubes to hypercubes was natural and intuitive. We are aware that the RGB colour space may not be the best choice when designing an image analysis algorithm from the point of view of the human visual system. Given that a perceptual objective metric is desired, we plan to further develop our colour fractal metrics by using other colour spaces, for example, Lab or HSL, capable of better capturing and reflecting the human perception of colours, but at a higher computational cost.
References
 FernandezMaloigne C: Fundamental study for evaluating image quality. Proceedings of the Annual Meeting of TTLA, December 2008, TaiwanGoogle Scholar
 Yamada T, Miyamoto Y, Serizawa M, Harasaki H: Reducedreference based video quality metrics using representativeluminance values. Image Communication 2009,24(7):525547.Google Scholar
 Oelbaum T, Diepold K: Building a reduced reference video quality metric with very low overhead using multivariate data analysis. Proceedings of the 4th International Conference on Cybernetics and Information Technologies, Systems and Applications (CITSA '07), 2007Google Scholar
 FernandezMaloigne C, Larabi MC, Anciaux G: Comparison of subjective assessment protocols for digital cinema applications. Proceedings of the 1st International Workshop on Quality of Multimedia Experience (QoMEX '09), July 2009, San Diego, Calif, USAGoogle Scholar
 Rosselll V, Larabl MC, FernandezMalolgne C: Objective quality measurement based on anisotropic contrast perception. Proceedings of the 4th European Conference on Colour in Graphics, Imaging, and Vision (CGIV '08), June 2008 108111.Google Scholar
 Wang Z, Bovik AC: Mean squared error: love it or leave it? A new look at Signal Fidelity Measures. IEEE Signal Processing Magazine 2009,26(1):98117.View ArticleGoogle Scholar
 ITUR Recommendation BT.500 : Subjective quality assessment methods of televisionpictures. International Telecommunications Union; 1998.Google Scholar
 ITUT Recommendation P.910 : Subjective video quality assessment methods formultimedia applications. International Telecommunications Union; 1996.Google Scholar
 ITUR Recommendation J.140 : Subjective assessment of picture quality in digitalcable television systems. International Telecommunications Union; 1998.Google Scholar
 ITUT Recommendation J.143 : User requirements for objective perceptual videoquality measurements in digital cable television. International Telecommunications Union; 2000.Google Scholar
 Video Quality Experts Group : The validation of objective models of video quality assessment. Final report, 2004Google Scholar
 van den Branden Lambrecht CJ: Survey of image and video quality metricsbased on vision models. presentation, August 1997Google Scholar
 Winkler S, Sharma A, McNally D: Perceptual video quality and blockiness metrics for multimedia streaming applications. Proceedings of the 4th International Symposium on Wireless Personal Multimedia Communications, September 2001 553556.Google Scholar
 Pappas TN, Safranek RJ, Chen J: Perceptual criteria for image quality evaluation. In Handbook of Image and Video Processing. 2nd edition. Academic Press, San Diego, Calif, USA; 2000:669686.Google Scholar
 OPTICOM GmbH Germany : Pevq—advanced perceptual evaluation of videoquality. white paper, 2005Google Scholar
 Teo PC, Heeger DJ: Perceptual image distortion. Proceedings of IEEE International Conference of Image Processing, 1994 982986.View ArticleGoogle Scholar
 Karunasekera SA, Kingsbury NG: A distortion measure for blocking artifacts in images based on human visual sensitivity. IEEE Transactions on Image Processing 1995,4(6):713724. 10.1109/83.388074View ArticleGoogle Scholar
Winkler S: Visual fidelity and perceived quality: towards comprehensive metrics. Human Vision and Electronic Imaging, January 2001, Proceedings of SPIE 4299: 114-125.
Winkler S: Issues in vision modeling for perceptual video quality assessment. Signal Processing 1999, 78(2): 231-252. 10.1016/S0165-1684(99)00062-6
Winkler S: Digital Video Quality: Vision Models and Metrics. John Wiley & Sons, New York, NY, USA; 2005.
Bringier B, Richard N, Larabi MC, Fernandez-Maloigne C: No-reference perceptual quality assessment of colour image. Proceedings of the 14th European Signal Processing Conference (EUSIPCO '06), September 2006, Florence, Italy.
Quintard L, Larabi MC, Fernandez-Maloigne C: No-reference metric based on the color feature: application to quality assessment of displays. Proceedings of the 4th European Conference on Colour in Graphics, Imaging, and Vision (CGIV '08), June 2008, 98-103.
Wang Z, Bovik AC, Sheikh HR, Simoncelli EP: Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing 2004, 13(4): 600-612. 10.1109/TIP.2003.819861
Sampat MP, Wang Z, Gupta S, Bovik AC, Markey MK: Complex wavelet structural similarity: a new image similarity index. IEEE Transactions on Image Processing 2009, 18(11): 2385-2401.
Rajashekar U, Wang Z, Simoncelli EP: Quantifying color image distortions based on adaptive spatiochromatic signal decompositions. Proceedings of the IEEE International Conference on Image Processing (ICIP '09), November 2009, Cairo, Egypt, 2213-2216.
Rajashekar U, Wang Z, Simoncelli EP: Perceptual quality assessment of color images using adaptive signal representation. Human Vision and Electronic Imaging XV, January 2010, San Jose, Calif, USA, Proceedings of SPIE 7527.
Chandler DM, Hemami SS: VSNR: a wavelet-based visual signal-to-noise ratio for natural images. IEEE Transactions on Image Processing 2007, 16(9): 2284-2298.
Chandler DM, Lim KH, Hemami SS: Effects of spatial correlations and global precedence on the visual fidelity of distorted images. Human Vision and Electronic Imaging XI, January 2006, San Jose, Calif, USA, Proceedings of SPIE 6057.
Chandler DM, Hemami SS: Effects of natural images on the detectability of simple and compound wavelet subband quantization distortions. Journal of the Optical Society of America A 2003, 20(7): 1164-1180. 10.1364/JOSAA.20.001164
Chandler DM, Hemami SS: Suprathreshold image compression based on contrast allocation and global precedence. Human Vision and Electronic Imaging VIII, January 2003, Santa Clara, Calif, USA, Proceedings of SPIE 5007: 73-86.
Sheikh HR, Bovik AC, Cormack L: No-reference quality assessment using natural scene statistics: JPEG2000. IEEE Transactions on Image Processing 2005, 14(11): 1918-1927.
Malkowski M, Claßen D: Performance of video telephony services in UMTS using live measurements and network emulation. Wireless Personal Communications 2008, 46(1): 19-32. 10.1007/s11277-007-9353-5
Mitina OV, Abraham FD: The use of fractals for the study of the psychology of perception: psychophysics and personality factors, a brief report. International Journal of Modern Physics C 2003, 14(8): 1047-1060. 10.1142/S0129183103005182
Sprott JC: Automatic generation of strange attractors. Computers and Graphics 1993, 17(3): 325-332. 10.1016/0097-8493(93)90082-K
Taylor RP, Spehar B, Wise JA, Clifford CWG, Newell BR, Hagerhall CM, Purcell T, Martin TP: Perceptual and physiological responses to the visual complexity of fractal patterns. Nonlinear Dynamics, Psychology, and Life Sciences 2005, 9(1): 89-114.
Knill DC, Field D, Kersten D: Human discrimination of fractal images. Journal of the Optical Society of America A 1990, 7(6): 1113-1123. 10.1364/JOSAA.7.001113
Cutting JE, Garvin JJ: Fractal curves and complexity. Perception and Psychophysics 1987, 42(4): 365-370. 10.3758/BF03203093
Pentland AP: Fractal-based description of natural scenes. IEEE Transactions on Pattern Analysis and Machine Intelligence 1984, 6(6): 661-674.
Pentland AP: On perceiving 3D shape and texture. Proceedings of the Symposium on Computational Models in Human Vision, 1986, Rochester, NY, USA.
Ghosh K, Bhaumik K: Complexity in human perception of brightness: a historical review on the evolution of the philosophy of visual perception. On-Line Journal of Biological Sciences 2010, 10(1): 17-35. 10.3844/ojbsci.2010.17.35
Mandelbrot BB: The Fractal Geometry of Nature. W.H. Freeman and Co., New York, NY, USA; 1982.
Tolle CR, McJunkin TR, Rohrbaugh DT, LaViolette RA: Lacunarity definition for ramified data sets based on optimal cover. Physica D 2003, 179(3-4): 129-152. 10.1016/S0167-2789(03)00029-0
Chen WS, Yuan SY, Hsiao H, Hsieh CM: Algorithms to estimating fractal dimension of textured images. Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '01), May 2001, 1541-1544.
Lee WL, Chen YC, Hsieh KS: Ultrasonic liver tissues classification by fractal feature vector based on M-band wavelet transform. IEEE Transactions on Medical Imaging 2003, 22(3): 382-392. 10.1109/TMI.2003.809593
Frazer GW, Wulder MA, Niemann KO: Simulation and quantification of the fine-scale spatial pattern and heterogeneity of forest canopy structure: a lacunarity-based method designed for analysis of continuous canopy heights. Forest Ecology and Management 2005, 214(1-3): 65-90.
Peters EE: Fractal Market Analysis: Applying Chaos Theory to Investment and Economics. John Wiley & Sons, New York, NY, USA; 1994.
Nonnenmacher TF, Losa GA, Weibel ER: Fractals in Biology and Medicine. Birkhäuser, New York, NY, USA; 1994.
Manousaki AG, Manios AG, Tsompanaki EI, Tosca AD: Use of color texture in determining the nature of melanocytic skin lesions—a qualitative and quantitative approach. Computers in Biology and Medicine 2006, 36(4): 419-427. 10.1016/j.compbiomed.2005.01.004
Taylor RP, Spehar B, Clifford CWG, Newell BR: The visual complexity of Pollock's dripped fractals. Proceedings of the International Conference of Complex Systems, 2002.
Falconer K: Fractal Geometry: Mathematical Foundations and Applications. John Wiley & Sons, New York, NY, USA; 1990.
Voss R: Random fractals: characterization and measurement. In Scaling Phenomena in Disordered Systems. Plenum Press, New York, NY, USA; 1985: 1-11.
Keller JM, Chen S, Crownover RM: Texture description and segmentation through fractal geometry. Computer Vision, Graphics and Image Processing 1989, 45(2): 150-166. 10.1016/0734-189X(89)90130-8
Allain C, Cloitre M: Characterizing the lacunarity of random and deterministic fractal sets. Physical Review A 1991, 44(6): 3552-3558. 10.1103/PhysRevA.44.3552
Plotnick RE, Gardner RH, Hargrove WW, Prestegaard K, Perlmutter M: Lacunarity analysis: a general technique for the analysis of spatial patterns. Physical Review E 1996, 53(5): 5461-5468. 10.1103/PhysRevE.53.5461
Maragos P, Sun F: Measuring the fractal dimension of signals: morphological covers and iterative optimization. IEEE Transactions on Signal Processing 1993, 41(1): 108-121. 10.1109/TSP.1993.193131
Pedrycz W, Bargiela A: Fuzzy fractal dimensions and fuzzy modeling. Information Sciences 2003, 153: 199-216.
Ivanovici M, Richard N: Colour covering blanket. Proceedings of the International Conference on Image Processing, Computer Vision and Pattern Recognition, July 2010, Las Vegas, Nev, USA.
Ivanovici M, Richard N: Fractal dimension of colour fractal images. IEEE Transactions on Image Processing, in revision.
Ivanovici M, Richard N: Colour fractal image generation. Proceedings of the International Conference on Image Processing, Computer Vision and Pattern Recognition, July 2009, Las Vegas, Nev, USA, 93-96.
Ivanovici M, Richard N, Decean H: Fractal dimension and lacunarity of psoriatic lesions—a colour approach. Proceedings of the 2nd WSEAS International Conference on Biomedical Electronics and Biomedical Informatics (BEBI '09), August 2009, Moscow, Russia, 199-202.
Ivanovici M, Richard N: The lacunarity of colour fractal images. Proceedings of the International Conference on Image Processing (ICIP '09), November 2009, Cairo, Egypt, 453-456.
Ivanovici M: Objective performance evaluation for MPEG-4 video streaming applications. Scientific Bulletin of University "POLITEHNICA" Bucharest, Series C 2005, 67(3): 55-64.
Ivanovici M, Beuran R: User-perceived quality assessment for multimedia applications. Proceedings of the 10th International Conference on Optimization of Electrical and Electronic Equipment (OPTIM '06), May 2006, 55-60.
Ivanovici M, Beuran R: Correlating quality of experience and quality of service for network applications. In Quality of Service Architectures for Wireless Networks: Performance Metrics and Management. IGI Global; 2010: 326-351.
Copyright
This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.