Thermal spatio-temporal data for stress recognition
© 2014 Sharma et al.; licensee Springer.
Received: 2 December 2012
Accepted: 13 May 2014
Published: 4 June 2014
Stress is a serious concern in today's world, motivating the development of non-intrusive means for objective stress recognition that place fewer restrictions on natural human behavior. As an initial step towards computer vision-based stress detection, this paper proposes a temporal thermal spectrum (TS) and visible spectrum (VS) video database, ANUStressDB - a major contribution to stress research. The database contains videos of 35 subjects watching stressed and not-stressed film clips validated by the subjects. We present the experiment and process conducted to acquire videos of the subjects' faces while they watched the films. Further, a baseline stress detection model based on computing the local binary patterns on three orthogonal planes (LBP-TOP) descriptor on VS and TS videos is presented. An LBP-TOP-inspired descriptor was used to capture dynamic thermal patterns in histograms (HDTP), exploiting spatio-temporal characteristics in TS videos. Support vector machines were used for our stress detection model. A genetic algorithm was used to select salient facial block divisions for stress classification and to determine whether certain regions of subjects' faces showed stronger stress patterns. Results showed that fusing facial patterns from VS and TS videos produced statistically significantly better stress recognition rates than patterns from VS or TS videos used in isolation. Moreover, the genetic algorithm selection method led to statistically significantly better stress detection rates than classifiers that used all facial block divisions. The best stress recognition rate was obtained from HDTP features fused with LBP-TOP features for TS and VS videos using a hybrid genetic algorithm and support vector machine stress detection model, which produced an accuracy of 86%.
Stress is a part of everyday life, and it is widely accepted that stress, which leads to less favorable states (such as anxiety, fear, or anger), poses a growing threat to a person's health and well-being, functioning, social interaction, and finances. The term stress was coined by Hans Selye, who defined it as 'the non-specific response of the body to any demand for change' . Stress is a natural alarm, resistance, and exhaustion system  that prepares the body for a fight-or-flight response, either to defend against or adjust to threats and changes. The body shows stress through symptoms such as frustration, anger, agitation, preoccupation, fear, anxiety, and tenseness . When chronic and left untreated, stress can lead to incurable illnesses (e.g., cardiovascular diseases , diabetes , and cancer ), relationship deterioration [7, 8], and high economic costs, especially in developed countries [9, 10]. It is therefore important to recognize stress early to diminish these risks. Stress research offers a range of benefits to society, motivating interest and posing technical challenges in computer science in general and affective computing in particular.
Various computational techniques have been used to objectively recognize stress, using models based on techniques such as Bayesian networks , decision trees , support vector machines , and artificial neural networks . These techniques have used a range of physiological (e.g., heart activity [15, 16], brain activity [17, 18], galvanic skin response , and skin temperature [12, 20]) and physical (e.g., eye gaze , facial information ) measures of stress as inputs. Physiological signal acquisition requires sensors to be in contact with a person, which can be obtrusive . In addition, physiological sensors usually have to be placed on specific locations of the body, and sensor calibration time is usually required as well; e.g., approximately 5 min is needed for the isotonic gel to settle before galvanic skin response readings can be taken satisfactorily using the BIOPAC system . The trend in this area of research is towards measuring symptoms of stress through less intrusive or non-intrusive methods. This paper proposes a stress recognition method based on facial imaging which, unlike the usual physiological sensors, requires no body contact.
A relatively new area of research is recognition of stress from facial data in the thermal (TS) and visible (VS) spectrums. Blood flow through superficial blood vessels, which lie under the skin and above the bone and muscle layers of the human body, allows TS images to be captured. It has been reported in the literature that stress can be successfully detected from thermal imaging  due to changes in skin temperature under stress. In addition, facial expressions have been analyzed  and classified [25–27] using TS imaging. VS imaging has commonly been used for modeling facial expressions, and associated robust facial recognition techniques have been developed [28–30]. However, to our knowledge, the literature has not yet developed computational models for stress recognition using TS and VS imaging together. This paper addresses this gap and presents a robust method that uses temporal and texture characteristics of facial regions for stress recognition.
Automatic facial expression analysis is a long-researched problem. Techniques have been developed for analyzing the temporal dynamics of facial muscle movements; a detailed survey of facial expression recognition methods can be found in . Further, vision-based facial dynamics have been used for affective computing tasks such as pain monitoring  and depression analysis . This motivated us to explore vision-based stress analysis, where inspiration can be taken from the vast field of facial expression analysis. Descriptors such as the local binary pattern (LBP) have been developed for texture analysis and successfully applied to facial expression analysis [25, 33, 34], depression analysis , and face recognition . A particular LBP extension for analysis of temporal data - local binary patterns on three orthogonal planes (LBP-TOP) - has gained attention and is suitable for the work in this study. LBP-TOP provides features that incorporate both appearance and motion and is robust to illumination variations and image transformations . This paper presents an application of LBP-TOP to TS and VS videos.
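To make the descriptor concrete, the sketch below computes a minimal LBP-TOP feature vector with NumPy. It is not the configuration used in this paper: it assumes a basic 8-neighbour LBP of radius 1, a single facial block, and samples only the three central XY, XT, and YT planes of the video volume rather than averaging over many planes.

```python
import numpy as np

def lbp_plane(img):
    """Basic 8-neighbour LBP with radius 1 for one 2-D plane."""
    c = img[1:-1, 1:-1]
    codes = np.zeros(c.shape, dtype=np.uint8)
    # Offsets of the 8 neighbours, clockwise from the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        n = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= (n >= c).astype(np.uint8) << np.uint8(bit)
    return codes

def lbp_top(video):
    """Concatenate LBP histograms from the XY, XT and YT planes of a
    (frames, height, width) grey-level video block."""
    T, H, W = video.shape
    planes = [video[T // 2],        # XY: middle frame
              video[:, H // 2, :],  # XT: middle row over time
              video[:, :, W // 2]]  # YT: middle column over time
    hists = [np.bincount(lbp_plane(p).ravel(), minlength=256) for p in planes]
    return np.concatenate(hists).astype(float)

video = np.random.default_rng(0).integers(0, 256, size=(20, 32, 32))
feat = lbp_top(video)
print(feat.shape)  # (768,)
```

Each plane contributes a 256-bin histogram of LBP codes, so appearance (XY) and motion (XT, YT) information end up in one concatenated vector.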
Various facial dynamics databases have been proposed in the literature. For facial expression analysis, one of the most popular is the Cohn-Kanade+ database , which contains facial action coding system (FACS) and generic expression labels; subjects were asked to pose and display various expressions. Other databases in the literature are spontaneous or close to spontaneous, such as RU-FACS , Belfast , VAM , and AFEW . However, these are limited to emotion-related labels, which do not serve the problem addressed in this paper, i.e., stress classification. Lucey et al.  proposed the UNBC-McMaster database, comprising video clips in which patients were asked to raise an arm while their reaction was recorded. For creating ANUStressDB, subjects were shown stressful and non-stressful video clips; this database is similar to that in .
There are various forms of stressors, i.e., demands or stimuli that cause stress [23, 40–42], validated by self-reports (e.g., self-assessment [43, 44]) and observer reports (e.g., a human behavior coder ). Some examples of stressors are playing video (action) games [45, 46], solving difficult mathematical/logical problems , and listening to energetic music . Among these stressors are films, which were used to stimulate stress in this work. We develop a computed stress measure using facial imaging in VS and TS, analyzing dynamic facial expressions that are as natural as possible, elicited by typically stressful, tense, or fearful environments in film clips. Unlike previous work in the literature that classifies posed facial expressions , the work presented in this paper investigates spontaneous facial expressions as responses to the environments portrayed by the films.
This paper describes a method for collecting and computationally analyzing data for stress recognition from TS and VS videos. A stress database (ANUStressDB) of videos of faces is presented. An experiment was conducted to collect the data, in which participants watched stressful and non-stressful film clips. ANUStressDB contains videos of 35 subjects watching film clips that created stressed and not-stressed environments, as validated by the participants. Facial expressions in the videos were stimulated by the film clips. Spatio-temporal features were extracted from the TS and VS videos and provided as inputs to a support vector machine (SVM) classifier to recognize stress patterns. A hybrid of a genetic algorithm (GA) and an SVM was used to select salient facial block regions and to determine whether using them improved the stress recognition rate. The paper compares the quality of the stress classifications produced using LBP-TOP and HDTP (our thermal spatio-temporal descriptor) features from TS and VS data, with and without facial block selection.
The organization of the paper is as follows: Section 2 presents the experiment for TS, VS, and self-reported data collection. Section 3 describes the facial imaging processing steps for the TS and VS data. The new thermal spatio-temporal descriptor, HDTP, is proposed in Section 4. Stress classification models are described in Section 5. Section 6 presents the results, an analysis of the results, and suggestions for future work.
2 Data collection from the film experiment
Participants watched two types of films, labeled either stressed or not-stressed. Stressed films had stressful content (e.g., suspense with jumpy music), whereas not-stressed films created illusions of meditative environments (e.g., swans and ducks paddling in a lake) and had content that was not stressful, or at least relatively less stressful than films labeled stressed. There were six film clips of each type. A survey completed by experiment participants validated the film labels: it asked participants to rate the films they watched in terms of the level of stress portrayed and the degree of tension and relaxation they felt. Participants found films labeled stressed to be stressful and films labeled not-stressed to be not stressful, with a statistical significance of p < 0.001 according to the Wilcoxon test.
Note the usage of the terms film and video in this paper. We use the term film to refer to a video portraying entertaining content, colloquially called a ‘film’ or ‘movie’, which a participant watched during the experiment. We use the term video to refer to a visual recording of a participant's face and its movement during the time period while they watched a film. Thus in this paper, a film is something which is watched, while a video is something recorded about the watcher.
3 Face pre-processing pipeline
4 Spatio-temporal features
According to a study that investigated facial expression recognition using LBP-TOP features, VS and near-infrared images produced similar recognition rates, provided that the VS images were strongly illuminated . Because TS videos encode temperature as color variations, LBP-TOP features may not fully exploit the thermal information in TS videos and, in particular, may not capture thermal patterns of stress. In addition, LBP-TOP features have mainly been extracted from image sequences of people instructed to show a facial expression, unlike the image sequences obtained from our film experiment, in which participants watched films and involuntary facial expressions were captured. These recordings may contain facial expressions more subtle than those analyzed in the literature using LBP-TOP, and with this subtlety in facial movement, LBP-TOP may not offer as much information for stress analysis. These points motivate a new set of features that exploits thermal patterns in TS videos for stress recognition. We propose a new type of feature for TS videos that captures dynamic thermal patterns in histograms (HDTP). This feature makes use of the thermal data in each frame of a TS video of a face over the course of the video.
4.1 Histogram of dynamic thermal patterns
HDTP captures normalized dynamic thermal patterns, which enables individual-independent stress analysis. Some people may be more tolerant of some stressors than others [54, 55], which could mean that some people show stronger responses to stress than others. Additionally, the baseline for human responses generally varies from person to person. To account for these characteristics in features used for individual-independent stress analysis, methods have been developed to normalize each participant's data for each type of data . HDTP is defined in terms of a participant's overall thermal state to minimize individual bias in stress analysis.
As an illustration, suppose the statistic used is the standard deviation and the facial block region for which we want to build a histogram is situated at the top right corner of the facial region in the XY plane (FBR1) of video V1, recorded while participant P i watched film F1. To create a histogram, the bin locations and sizes must first be determined: the standard deviation is calculated for all frames of FBR1 in all of P i 's videos (V1-V12), and the global minimum and maximum of these values fix the bin locations and sizes. The histogram for FBR1 in V1 for P i is then computed by filling the bins with the standard deviation values of each frame of FBR1. This method provides normalized features that take appearance and motion into account and can be used as inputs to a classifier.
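The per-participant normalization illustrated above can be sketched as follows. The data, bin count, and variable names are illustrative assumptions, not the paper's implementation: the per-frame statistic is the standard deviation, and the bin edges come from the participant's global minimum and maximum across all 12 videos.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical data: 12 thermal videos of one participant, each giving a
# (frames, height, width) temperature array for one facial block (FBR1).
videos = [rng.normal(33.0, 0.5, size=(50, 8, 8)) for _ in range(12)]

# Per-frame statistic (here, the standard deviation) for every video.
stats = [v.std(axis=(1, 2)) for v in videos]

# Bin edges from the participant's global min/max across all 12 videos,
# giving individual-independent normalization.
lo = min(s.min() for s in stats)
hi = max(s.max() for s in stats)
edges = np.linspace(lo, hi, num=9)  # 8 bins; the bin count is an assumption

# HDTP histogram for FBR1 in video V1 (index 0), normalized to sum to 1.
hist, _ = np.histogram(stats[0], bins=edges)
hdtp_v1 = hist / hist.sum()
print(hdtp_v1.shape)  # (8,)
```

Because the bin edges are fixed per participant rather than per video, the same thermal response always falls into the same bin, which is what removes the individual baseline from the feature.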
5 Stress classification system using a hybrid of a support vector machine and a genetic algorithm
SVMs have been widely used in the literature to model classification problems, including facial expression recognition [27, 33, 34]. Given a set of training samples, an SVM transforms the samples via a nonlinear mapping to a higher-dimensional space, with the aim of determining a hyperplane that separates the data by class or label. The hyperplane is chosen using support vectors - the training samples closest to the decision boundary - such that the margin between the support vectors and the hyperplane is maximized, forming the best decision boundary.
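A minimal sketch of such a classifier, assuming scikit-learn and synthetic stand-in features (the real inputs would be the LBP-TOP/HDTP histograms described earlier; kernel and C are illustrative defaults, not the paper's settings):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-ins for per-video feature vectors with not-stressed (0)
# and stressed (1) labels; the two classes differ by a small mean shift.
X = np.vstack([rng.normal(0.0, 1.0, size=(60, 40)),
               rng.normal(0.8, 1.0, size=(60, 40))])
y = np.array([0] * 60 + [1] * 60)

# RBF-kernel SVM with feature standardization, evaluated by 10-fold CV.
clf = make_pipeline(StandardScaler(), SVC(kernel='rbf', C=1.0))
scores = cross_val_score(clf, X, y, cv=10)
print(round(scores.mean(), 2))
```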
It has been reported in the literature that thermal patterns in certain regions of the face provide more information about stress than other regions . The performance of a stress classifier can degrade if irrelevant features are provided as inputs. Consequently, and given its benefits noted in the literature, the classification system was extended with a feature selection component that used a GA to select facial block regions appropriate for stress classification. GAs are inspired by biological evolution and the concept of survival of the fittest. A GA is a global search technique that has been shown to be useful for optimization problems, including optimal feature selection for classification .
The GA evolves a population of candidate solutions, represented by chromosomes, using crossover, mutation, and selection operations in search of a better quality population under some fitness measure. Crossover and mutation are applied to chromosomes to maintain diversity in the population and reduce the risk of the search becoming stuck at a local optimum. After each generation, the GA probabilistically selects chromosomes, favoring better quality ones, for the next generation's population, directing the search towards more favorable chromosomes.
The GA implementation settings for facial block region selection included the number of generations and stochastic uniform selection as the selection operator.
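As a toy illustration of the GA-SVM hybrid, the sketch below selects informative feature blocks with a binary-chromosome GA whose fitness is cross-validated SVM accuracy. The data, block counts, and operators (tournament selection with elitism, rather than the paper's stochastic uniform selection) are assumptions for illustration only.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(2)
N_BLOCKS, FEATS_PER_BLOCK = 12, 4
# Synthetic features: only blocks 2 and 7 carry class information.
y = np.array([0] * 50 + [1] * 50)
X = rng.normal(size=(100, N_BLOCKS * FEATS_PER_BLOCK))
for b in (2, 7):
    X[y == 1, b * FEATS_PER_BLOCK:(b + 1) * FEATS_PER_BLOCK] += 1.5

def fitness(mask):
    """Cross-validated SVM accuracy using only the selected blocks."""
    if not mask.any():
        return 0.0
    cols = np.repeat(mask, FEATS_PER_BLOCK)
    return cross_val_score(SVC(), X[:, cols], y, cv=3).mean()

POP = 20
pop = rng.integers(0, 2, size=(POP, N_BLOCKS)).astype(bool)
for gen in range(15):
    fit = np.array([fitness(m) for m in pop])
    elite = pop[int(np.argmax(fit))].copy()
    # Tournament selection of parents (tournament size 2).
    pick = lambda: pop[max(rng.integers(0, POP, 2), key=lambda i: fit[i])]
    children = np.array([pick() for _ in range(POP)])
    # One-point crossover on consecutive pairs.
    for i in range(0, POP, 2):
        cut = int(rng.integers(1, N_BLOCKS))
        children[i, cut:], children[i + 1, cut:] = (children[i + 1, cut:].copy(),
                                                    children[i, cut:].copy())
    # Bit-flip mutation, then elitism: keep the best chromosome.
    children ^= rng.random(children.shape) < 0.05
    children[0] = elite
    pop = children

best = pop[int(np.argmax([fitness(m) for m in pop]))]
print(np.flatnonzero(best))  # indices of the selected blocks
```

Each chromosome is a 12-bit mask over facial blocks; elitism guarantees the best mask found is never lost between generations.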
In summary, various stress classification systems using a SVM were developed which differed in terms of the following input characteristics:
VSLBP-TOP: LBP-TOP features for VS videos
TSLBP-TOP: LBP-TOP features for TS videos
TSHDTP: HDTP features (as described in Section 4.1) for TS videos
VSLBP-TOP + TSLBP-TOP: VSLBP-TOP and TSLBP-TOP
VSLBP-TOP + TSHDTP: VSLBP-TOP and TSHDTP
TSLBP-TOP + TSHDTP: TSLBP-TOP and TSHDTP
VSLBP-TOP + TSLBP-TOP + TSHDTP: all three feature sets
These feature sets were also provided as inputs to the GA-SVM classification systems to determine whether facial block selection produced better stress recognition rates.
6 Results and discussion
Results show that when HDTP features for TS videos (TSHDTP) were provided as input to the SVM classifier, the stress recognition measures improved. The best recognition measures for the SVM were obtained with VSLBP-TOP + TSHDTP as input, which produced a recognition rate at least 0.10 greater than that of any input without TSHDTP, where the overall range of recognition rates was 0.13. This provides evidence that TSHDTP contributed significantly to the better classification performance and suggests that TSHDTP captured more stress-related patterns than VSLBP-TOP or TSLBP-TOP. Classification performance was lowest when TSLBP-TOP was provided as input.
The features were also provided as inputs to a GA, which selected facial block regions with the goal of disregarding irrelevant regions and improving the SVM-based recognition measures. Performances of the classifiers using 10-fold cross-validation on the different inputs are provided in Figure 8. For all types of inputs, GA-SVM produced significantly better stress recognition measures; according to the Wilcoxon non-parametric statistical test, the statistical significance was p < 0.01. As with the SVM results, TSHDTP also contributed to the improved GA-SVM results. The best recognition measures were obtained when VSLBP-TOP + TSLBP-TOP + TSHDTP was provided as input to the GA-SVM classifier; performance was highly similar with VSLBP-TOP + TSHDTP as input, differing by 0.01 in recognition rate. Results show that any combination of at least two of VSLBP-TOP, TSLBP-TOP, and TSHDTP performed better than any one of them used alone.
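The kind of paired comparison reported above can be reproduced with SciPy's Wilcoxon signed-rank test; the per-fold accuracies below are illustrative numbers, not the paper's results.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical per-fold accuracies from 10-fold cross-validation for an
# SVM and a GA-SVM evaluated on the same folds (illustrative values only).
svm    = np.array([0.70, 0.68, 0.74, 0.71, 0.69, 0.73, 0.70, 0.72, 0.68, 0.71])
ga_svm = np.array([0.82, 0.80, 0.85, 0.83, 0.81, 0.86, 0.82, 0.84, 0.80, 0.83])

# Paired, non-parametric comparison of the two classifiers.
stat, p = wilcoxon(svm, ga_svm)
print(p < 0.01)  # True
```

The test is paired because both classifiers are scored on the same folds, and non-parametric because fold accuracies need not be normally distributed.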
Further, stress recognition systems provided with TSHDTP as input produced significantly better stress recognition measures than the same systems with TSHDTP replaced by TSLBP-TOP (p < 0.01). This suggests that stress patterns were better captured by TSHDTP features than by TSLBP-TOP features.
In addition, the blocks selected by the GA in the GA-SVM classifier were recorded for the different inputs. When VSLBP-TOP was given as input, the blocks that produced better recognition results corresponded to the cheek and mouth regions in the XY plane. For TSLBP-TOP, fewer blocks were selected, and they were situated around the nose. For TSHDTP, on the other hand, more blocks were used in the classification: the GA selected the nose, mouth, and cheek regions as well as regions on the forehead. Future work could extend the investigation with more complex block definitions to find and use more precise regions showing symptoms of stress for classification.
Future work could also investigate block selection methods other than the GA used in this work. The GA search took approximately 5 min to converge, but it could take longer if the chromosome were extended to encode more general block information, e.g., coordinate values and block size. The literature reports that a GA usually requires longer execution times than other feature selection techniques, such as correlation analysis . Future work could therefore investigate block selection methods that avoid execution times as long as a GA's while producing stress recognition measures comparable to the GA hybrid.
The ANU stress database (ANUStressDB), containing videos of faces in the temporal thermal (TS) and visible (VS) spectrums for stress recognition, was presented. A computational classification model of stress using spatial and temporal characteristics of facial regions in ANUStressDB was successfully developed. In the process, a new method for capturing patterns in thermal videos, HDTP, was defined. The approach was designed to reduce individual bias in the computational models and enhance participant-independent recognition of symptoms of stress. An SVM was used to compute the baseline for stress classification. Facial block regions selected by a genetic algorithm improved classification rates regardless of the type of video, TS or VS. The best recognition rates, however, were obtained when features from both TS and VS videos were provided as inputs to the GA-SVM classifier. In addition, stress recognition rates were significantly better for classifiers provided with HDTP features instead of LBP-TOP features for TS. Future work could extend the investigation by developing features for facial block regions that capture more complex patterns and by examining different forms of facial block regions for stress recognition.
- Selye H: The stress syndrome. Am. J. Nurs. 1965, 65: 97-99. 10.1097/00000446-196505000-00023
- Hoffman-Goetz L, Pedersen BK: Exercise and the immune system: a model of the stress response? Immunol. Today 1994, 15: 382-387. 10.1016/0167-5699(94)90177-5
- Sharma N, Gedeon T: Objective measures, sensors and computational techniques for stress recognition and classification: a survey. Comput. Methods Prog. Biomed. 2012, 108: 1287-1301. 10.1016/j.cmpb.2012.07.003
- Miller GE, Cohen S, Ritchey AK: Chronic psychological stress and the regulation of pro-inflammatory cytokines: a glucocorticoid-resistance model. Health Psychology Hillsdale 2002, 21: 531-541.
- Surwit RS, Schneider MS, Feinglos MN: Stress and diabetes mellitus. Diabetes Care 1992, 15: 1413-1422. 10.2337/diacare.15.10.1413
- Vitetta L, Anton B, Cortizo F, Sali A: Mind body medicine: stress and its impact on overall health and longevity. Ann. N. Y. Acad. Sci. 2005, 1057: 492-505. 10.1196/annals.1322.038
- Seltzer JA, Kalmuss D: Socialization and stress explanations for spouse abuse. Social Forces 1988, 67: 473-491. 10.1093/sf/67.2.473
- Johnson PR, Indvik J: Stress and violence in the workplace. Employee Counsell. Today 1996, 8: 19-24.
- The American Institute of Stress: America's no. 1 health problem - why is there more stress today? http://www.stress.org/. Accessed 5 August 2010
- Lifeline Australia: Stress costs taxpayer $300K every day. 2009. http://www.lifeline.org.au. Accessed 10 August 2010
- Liao W, Zhang W, Zhu Z, Ji Q: A real-time human stress monitoring system using dynamic Bayesian network. Computer Vision and Pattern Recognition - Workshops (CVPR Workshops), San Diego, CA, USA, 25 June 2005
- Zhai J, Barreto A: Stress recognition using non-invasive technology. Proceedings of the 19th International Florida Artificial Intelligence Research Society Conference (FLAIRS), Melbourne Beach, FL, USA, 2006, 395-400
- Wang J, Korczykowski M, Rao H, Fan Y, Pluta J, Gur RC, McEwen BS, Detre JA: Gender difference in neural response to psychological stress. Soc. Cogn. Affect. Neurosci. 2007, 2: 227. 10.1093/scan/nsm018
- Sharma N, Gedeon T: Stress classification for gender bias in reading. In Neural Information Processing, vol. 7064. Edited by Lu B-L, Zhang L, Kwok J. Springer, Berlin; 2011: 348-355
- Ushiyama T, Mizushige K, Wakabayashi H, Nakatsu T, Ishimura K, Tsuboi Y, Maeta H, Suzuki Y: Analysis of heart rate variability as an index of noncardiac surgical stress. Heart Vessel. 2008, 23: 53-59. 10.1007/s00380-007-0997-6
- Seong H, Lee J, Shin T, Kim W, Yoon Y: The analysis of mental stress using time-frequency distribution of heart rate variability signal. Annual International Conference of Engineering in Medicine and Biology Society, San Francisco, CA, USA, 1-4 September 2004, vol. 1, 283-285
- Morilak DA, Barrera G, Echevarria DJ, Garcia AS, Hernandez A, Ma S, Petre CO: Role of brain norepinephrine in the behavioral response to stress. Prog. Neuro-Psychopharmacol. Biol. Psychiatry 2005, 29: 1214-1224. 10.1016/j.pnpbp.2005.08.007
- Haak M, Bos S, Panic S, Rothkrantz LJM: Detecting stress using eye blinks and brain activity from EEG signals. Proceedings of the 1st Driver Car Interaction and Interface (DCII 2008), Czech Technical University, Prague, 2008
- Shi Y, Ruiz N, Taib R, Choi E, Chen F: Galvanic skin response (GSR) as an index of cognitive load. CHI '07 Extended Abstracts on Human Factors in Computing Systems, San Jose, CA, USA, 28 April - 3 May 2007, 2651-2656
- Reisman S: Measurement of physiological stress. Bioengineering Conference, 4-6 April 1997, 21-23
- Dinges DF, Rider RL, Dorrian J, McGlinchey EL, Rogers NL, Cizman Z, Goldenstein SK, Vogler C, Venkataraman S, Metaxas DN: Optical computer recognition of facial expressions associated with stress induced by performance demands. Aviat. Space Environ. Med. 2005, 76: B172-B182
- BIOPAC Systems Inc: BIOPAC Systems. 2012. http://www.biopac.com/. Accessed 10 February 2011
- Yuen P, Hong K, Chen T, Tsitiridis A, Kam F, Jackman J, James D, Richardson M, Williams L, Oxford W: Emotional & physical stress detection and classification using thermal imaging technique. 3rd International Conference on Crime Detection and Prevention (ICDP 2009), London, 3 December 2009, 1-6
- Jarlier S, Grandjean D, Delplanque S, N'Diaye K, Cayeux I, Velazco MI, Sander D, Vuilleumier P, Scherer KR: Thermal analysis of facial muscles contractions. IEEE Trans. Affect. Comput. 2011, 2: 2-9
- Zhao G, Pietikainen M: Dynamic texture recognition using local binary patterns with an application to facial expressions. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29: 915-928
- Hernández B, Olague G, Hammoud R, Trujillo L, Romero E: Visual learning of texture descriptors for facial expression recognition in thermal imagery. Comput. Vis. Image Underst. 2007, 106: 258-269. 10.1016/j.cviu.2006.08.012
- Trujillo L, Olague G, Hammoud R, Hernandez B: Automatic feature localization in thermal images for facial expression recognition. IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Workshops (CVPR Workshops), San Diego, CA, USA, 20-25 June 2005, 14
- Manglik PK, Misra U, Maringanti HB: Facial expression recognition. IEEE International Conference on Systems, Man and Cybernetics, The Hague, Netherlands, 10-13 October 2004, 2220-2224
- Neggaz N, Besnassi M, Benyettou A: Application of improved AAM and probabilistic neural network to facial expression recognition. J. Appl. Sci. 2010, 10: 1572-1579
- Sandbach G, Zafeiriou S, Pantic M, Rueckert D: Recognition of 3D facial expression dynamics. Image Vis. Comput. 2012, 30: 762-773. 10.1016/j.imavis.2012.01.006
- Zeng Z, Pantic M, Roisman GI, Huang TS: A survey of affect recognition methods: audio, visual, and spontaneous expressions. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 31: 39-58
- Lucey P, Cohn JF, Prkachin KM, Solomon PE, Matthews I: Painful data: the UNBC-McMaster shoulder pain expression archive database. IEEE International Conference on Automatic Face & Gesture Recognition and Workshops (FG 2011), Santa Barbara, CA, USA, 21-25 March 2011, 57-64
- Taini M, Zhao G, Li SZ, Pietikainen M: Facial expression recognition from near-infrared video sequences. 19th International Conference on Pattern Recognition (ICPR), Tampa, FL, USA, 8-11 December 2008, 1-4
- Michel P, Kaliouby RE: Real time facial expression recognition in video using support vector machines. Proceedings of the 5th International Conference on Multimodal Interfaces, Vancouver, British Columbia, Canada, 5-7 November 2003
- Ahonen T, Hadid A, Pietikainen M: Face description with local binary patterns: application to face recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28: 2037-2041
- Bartlett MS, Littlewort GC, Frank MG, Lainscsek C, Fasel IR, Movellan JR: Automatic recognition of facial actions in spontaneous expressions. J. Multimed. 2006, 1: 22-35
- Douglas-Cowie E, Cowie R, Schröder M: A new emotion database: considerations, sources and scope. ISCA Tutorial and Research Workshop (ITRW) on Speech and Emotion 2000, 39-44
- Grimm M, Kroschel K, Narayanan S: The Vera am Mittag German audio-visual emotional speech database. IEEE International Conference on Multimedia and Expo, Hannover, Germany, 23-26 June 2008, 865-868
- Dhall A, Goecke R, Lucey S, Gedeon T: A semi-automatic method for collecting richly labelled large facial expression databases from movies. IEEE Multimedia 2012, 19: 34-41
- Zhai J, Barreto A: Stress detection in computer users based on digital signal processing of noninvasive physiological variables. Proceedings of the 28th IEEE EMBS Annual International Conference, New York City, NY, USA, 30 August - 3 September 2006, 1355-1358
- Hjortskov N, Rissén D, Blangsted A, Fallentin N, Lundberg U, Søgaard K: The effect of mental stress on heart rate variability and blood pressure during computer work. Eur. J. Appl. Physiol. 2004, 92: 84-89. 10.1007/s00421-004-1055-z
- Healey JA, Picard RW: Detecting stress during real-world driving tasks using physiological sensors. IEEE Trans. Intell. Transport. Syst. 2005, 6: 156-166. 10.1109/TITS.2005.848368
- Niculescu A, Cao Y, Nijholt A: Manipulating stress and cognitive load in conversational interactions with a multimodal system for crisis management support. In Development of Multimodal Interfaces: Active Listening and Synchrony. Springer, Dublin, Ireland; 2010: 134-147
- Vizer LM, Zhou L, Sears A: Automated stress detection using keystroke and linguistic features: an exploratory study. Int. J. Hum. Comput. Stud. 2009, 67: 870-886. 10.1016/j.ijhcs.2009.07.005
- Lin T, John L: Quantifying mental relaxation with EEG for use in computer games. International Conference on Internet Computing, Las Vegas, NV, USA, 26-29 June 2006, 409-415
- Lin T, Omata M, Hu W, Imamiya A: Do physiological data relate to traditional usability indexes? In Proceedings of the 17th Australia Conference on Computer-Human Interaction: Citizens Online: Considerations for Today and the Future. Narrabundah, Australia; 2005: 1-10
- Lovallo WR: Stress & Health: Biological and Psychological Interactions. Sage Publications, Inc., California; 2005
- Lucey P, Cohn JF, Kanade T, Saragih J, Ambadar Z, Matthews I: The extended Cohn-Kanade dataset (CK+): a complete dataset for action unit and emotion-specified expression. IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), San Francisco, CA, USA; 2010: 94-101
- Gross JJ, Levenson RW: Emotion elicitation using films. Cognit. Emot. 1995, 9: 87-108. 10.1080/02699939508408966
- Struc V, Pavesic N: The complete Gabor-Fisher classifier for robust face recognition. EURASIP Advances in Signal Processing 2010, 2010: 26
- Struc V, Pavesic N: Gabor-based kernel partial-least-squares discrimination features for face recognition. Informatica (Vilnius) 2009, 20: 115-138
- Struc V: The PhD Toolbox: Pretty Helpful Development Functions for Face Recognition. 2012. http://luks.fe.uni-lj.si/sl/osebje/vitomir/face_tools/PhDface/. Accessed 12 September 2012
- Mathworks: Vision TemplateMatcher System Object R2012a. 2012. http://www.mathworks.com.au/help/vision/ref/vision.templatematcherclass.html. Accessed 12 September 2012
- APA: American Psychological Association, Stress in America. APA, Washington, DC; 2012
- Holahan CJ, Moos RH: Life stressors, resistance factors, and improved psychological functioning: an extension of the stress resistance paradigm. J. Pers. Soc. Psychol. 1990, 58: 909
- Frohlich H, Chapelle O, Scholkopf B: Feature selection for support vector machines by means of genetic algorithm. 15th IEEE International Conference on Tools with Artificial Intelligence, Sacramento, CA, USA, 3-5 November 2003, 142-148
- Yu L, Liu H: Feature selection for high-dimensional data: a fast correlation-based filter solution. 12th International Conference on Machine Learning, Los Angeles, CA, 23-24 June 2003, 856-863
This article is published under license to BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.