Open Access

Perceptual Image Representation

EURASIP Journal on Image and Video Processing 2007, 2007:098181

DOI: 10.1155/2007/98181

Received: 1 August 2006

Accepted: 2 July 2007

Published: 10 September 2007

Abstract

This paper describes a rarity-based visual attention model that works on both still images and video sequences. Applications of this kind of model are numerous; we focus on a perceptual image representation that enhances the perceptually important areas and uses a lower resolution for perceptually less important regions. Our aim is to approximate human perception by visualizing its gradual discovery of the visual environment. Comparisons with classical visual attention methods show that the proposed algorithm is well suited to anisotropic filtering. Moreover, it is highly effective at protecting perceptually important areas, such as defects or abnormalities, from a significant loss of information. Possible practical applications of the proposed image representation include high-accuracy detection of low-contrast defects and scalable real-time video compression.
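As a rough illustration of the idea described above, the sketch below computes a rarity map as the self-information of the grey-level histogram and uses it to keep salient regions sharp while smoothing the rest. This is only a minimal Python sketch under assumed choices (histogram-based rarity, simple Gaussian blending rather than the anisotropic filtering used in the paper); the function names and parameters are illustrative, not the authors' algorithm.

```python
# Minimal sketch: rarity-based saliency map and a saliency-guided blend of a
# sharp image with a blurred one. Assumptions: grey-level self-information as
# the rarity measure, Gaussian blurring as a stand-in for anisotropic filtering.
import numpy as np
from scipy.ndimage import gaussian_filter


def rarity_map(gray, bins=64):
    """Per-pixel self-information -log2 p(intensity), normalized to [0, 1]."""
    hist, edges = np.histogram(gray, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    idx = np.clip(np.digitize(gray, edges[1:-1]), 0, bins - 1)
    info = -np.log2(p[idx] + 1e-12)          # rare grey levels -> high rarity
    return (info - info.min()) / (np.ptp(info) + 1e-12)


def perceptual_representation(gray, sigma=5.0):
    """Keep salient pixels sharp, replace the rest by a low-resolution version."""
    saliency = gaussian_filter(rarity_map(gray), sigma=2.0)  # smoothed attention map
    blurred = gaussian_filter(gray, sigma=sigma)             # low-resolution background
    return saliency * gray + (1.0 - saliency) * blurred


if __name__ == "__main__":
    # Toy usage on a random image with values in [0, 1).
    img = np.random.rand(128, 128)
    out = perceptual_representation(img)
    print(out.shape, float(out.min()), float(out.max()))
```

In this toy version the blend weight is the attention map itself, so the amount of smoothing degrades gracefully with decreasing saliency, which mimics the "gradual discovery" behaviour described in the abstract.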


Authors’ Affiliations

(1) Théorie des Circuits et Traitement du Signal (TCTS) Lab, Faculté Polytechnique de Mons
(2) Laboratoire de Télécommunications et Télédétection (TELE), Université Catholique de Louvain

References

1. Hubel DH: Eye, Brain, and Vision, Scientific American Library, no. 22. W. H. Freeman, New York, NY, USA; 1989.
2. Treisman AM, Gelade G: A feature-integration theory of attention. Cognitive Psychology 1980, 12(1):97-136. doi:10.1016/0010-0285(80)90005-5
3. Crabtree JW, Spear PD, McCall MA, Jones KR, Kornguth SE: Contributions of Y- and W-cell pathways to response properties of cat superior colliculus neurons: comparison of antibody- and deprivation-induced alterations. Journal of Neurophysiology 1986, 56(4):1157-1173.
4. Itti L, Koch C: A saliency-based search mechanism for overt and covert shifts of visual attention. Vision Research 2000, 40(10-12):1489-1506.
5. Le Meur O, Le Callet P, Barba D, Thoreau D: A coherent computational approach to model bottom-up visual attention. IEEE Transactions on Pattern Analysis and Machine Intelligence 2006, 28(5):802-817.
6. Walker KN, Cootes TF, Taylor CJ: Locating salient object features. Proceedings of the 9th British Machine Vision Conference (BMVC '98), September 1998, Southampton, UK, 2:557-566.
7. Mudge TN, Turney JL, Volz RA: Automatic generation of salient features for the recognition of partially occluded parts. Robotica 1987, 5(2):117-127. doi:10.1017/S0263574700015083
8. Stentiford FWM: An estimator for visual attention through competitive novelty with application to image compression. Proceedings of the 22nd Picture Coding Symposium (PCS '01), April 2001, Seoul, Korea, 101-104.
9. Boiman O, Irani M: Detecting irregularities in images and in video. Proceedings of the 10th IEEE International Conference on Computer Vision (ICCV '05), October 2005, Beijing, China, 1:462-469.
10. Itti L, Baldi P: A principled approach to detecting surprising events in video. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '05), June 2005, San Diego, Calif, USA, 1:631-637.
11. Näätänen R, Gaillard AWK, Mäntysalo S: Early selective-attention effect on evoked potential reinterpreted. Acta Psychologica 1978, 42(4):313-329. doi:10.1016/0001-6918(78)90006-9
12. Tales A, Newton P, Troscianko T, Butler S: Mismatch negativity in the visual modality. NeuroReport 1999, 10(16):3363-3367. doi:10.1097/00001756-199911080-00020
13. Crottaz-Herbette S: Attention spatiale auditive et visuelle chez des patients héminégligents et des sujets normaux: étude clinique, comportementale et électrophysiologique. M.S. thesis, University of Geneva, Geneva, Switzerland; 2001.
14. Tribus M: Thermodynamics and Thermostatics: An Introduction to Energy, Information and States of Matter, with Engineering Applications. D. Van Nostrand, New York, NY, USA; 1961.
15. Stanford LR: W-cells in the cat retina: correlated morphological and physiological evidence for two distinct classes. Journal of Neurophysiology 1987, 57(1):218-244.
16. Mancas M, Mancas-Thillou C, Gosselin B, Macq B: A rarity-based visual attention map: application to texture description. Proceedings of the IEEE International Conference on Image Processing (ICIP '06), September 2006, San Antonio, Tex, USA, 445-448.
17. Mancas M, Unay B, Gosselin B, Macq D: Computational attention for defect localisation. Proceedings of the ICVS Workshop on Computational Attention & Applications (WCAA '07), March 2007, Bielefeld, Germany.
18. Wren CR, Azarbayejani A, Darrell T, Pentland AP: Pfinder: real-time tracking of the human body. IEEE Transactions on Pattern Analysis and Machine Intelligence 1997, 19(7):780-785. doi:10.1109/34.598236
19. Bradley AP, Stentiford FWM: JPEG 2000 and region of interest coding. Proceedings of Digital Image Computing: Techniques and Applications (DICTA '02), January 2002, Melbourne, Australia, 303-308.
20. Perona P, Malik J: Scale-space and edge detection using anisotropic diffusion. IEEE Transactions on Pattern Analysis and Machine Intelligence 1990, 12(7):629-639. doi:10.1109/34.56205
21. Barash D, Comaniciu D: A common framework for nonlinear diffusion, adaptive smoothing, bilateral filtering and mean shift. Image and Vision Computing 2004, 22(1):73-81. doi:10.1016/j.imavis.2003.08.005

Copyright

© Matei Mancas et al. 2007

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.