
  • Research Article
  • Open Access

Per-Sample Multiple Kernel Approach for Visual Concept Learning

EURASIP Journal on Image and Video Processing 2010, 2010:461450

  • Received: 1 May 2009
  • Accepted: 19 January 2010
  • Published:


Learning visual concepts from images is an important yet challenging problem in computer vision and multimedia research. Multiple kernel learning (MKL) methods have shown clear advantages in visual concept learning. Because a visual concept often exhibits large appearance variation, a canonical MKL approach may not produce satisfactory results when a uniform kernel combination is applied over the whole input space. In this paper, we propose a per-sample multiple kernel learning (PS-MKL) approach that accounts for intraclass diversity to improve discrimination. PS-MKL determines sample-wise kernel weights as a function of the kernels and the training samples, and the kernel weights are learned jointly with the kernel-based classifiers. For efficient learning, PS-MKL employs a sample selection strategy. Extensive experiments are carried out on three benchmark datasets with different characteristics: Caltech101, WikipediaMM, and Pascal VOC'07. PS-MKL achieves encouraging performance, comparable to the state of the art, and outperforms canonical MKL.
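The central idea, combining base kernels with weights that depend on the sample rather than a single global mixture, can be sketched as follows. This is a minimal illustration, not the paper's actual formulation: it assumes RBF base kernels, and the per-sample weights `beta` (here a hypothetical softmax gating over random scores) stand in for the weights that PS-MKL would learn jointly with the classifier.

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    """One base RBF kernel: k_m(x, y) = exp(-gamma * ||x - y||^2)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def per_sample_combined_kernel(X, Y, gammas, beta_X, beta_Y):
    """Per-sample combination of M base kernels:
       k(x_i, y_j) = sum_m beta_m(x_i) * beta_m(y_j) * k_m(x_i, y_j).
    Each term is D_m K_m D_m' with diagonal weight matrices, so the
    combined Gram matrix stays positive semidefinite when X == Y."""
    K = np.zeros((len(X), len(Y)))
    for m, g in enumerate(gammas):
        K += np.outer(beta_X[:, m], beta_Y[:, m]) * rbf_kernel(X, Y, g)
    return K

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
gammas = [0.1, 1.0, 10.0]          # three base kernels at different scales
# Hypothetical per-sample weights: a softmax over random scores, used here
# only as a stand-in for the gating that PS-MKL learns from training data.
scores = rng.normal(size=(20, len(gammas)))
beta = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
K = per_sample_combined_kernel(X, X, gammas, beta, beta)
```

The resulting Gram matrix `K` could be fed to any kernel classifier (e.g. an SVM with a precomputed kernel); the paper's contribution lies in learning `beta` and the classifier jointly, which this sketch does not attempt.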


  • Computer Vision
  • Kernel Weight
  • Concept Learning
  • Multiple Kernel Learning

Publisher note

To access the full article, please see PDF.

Authors’ Affiliations

Institute of Computing Technology, Chinese Academy of Sciences, Beijing, 100080, China
National Engineering Laboratory for Video Technology, School of EE & CS, Peking University, Beijing, 100871, China
Graduate University, Chinese Academy of Sciences, Beijing, 100039, China


© Jingjing Yang et al. 2010

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.