
Analysis of thermal videos for detection of lie during interrogation

Abstract

Lie-detection tests are traditionally carried out by well-trained experts using polygraph machines. However, the procedure is time-consuming, invasive, and altogether cumbersome, and its results are not admissible in a court of law. Trained individuals can also deliberately defeat these tests. Facial thermal imaging, on the other hand, could be more effective: it is noninvasive and can covertly track facial blood-flow patterns, which have been shown to reveal deceit. This paper presents a method based on facial thermal imaging to detect deception in human subjects. A major issue in such research is the lack of realistic databases that emulate crime scenes. In this paper, we first develop a database based on near-real-life theft incidents, recorded with due diligence from isolated subjects over a period of time at a government hospital under the pretext of a free health checkup. The experiment has been conducted at Midnapore Medical College and Hospital, West Bengal, India, with proper ethical committee approval. The participants were selected at the behest of the police department and have records of habitual crime; most of them have been repeatedly charged with petty crimes of pick-pocketing and stealing. They were invited individually at different times under the pretext of a medical checkup, where they were enticed to steal cash. This was followed by a two-stage process: a friendly interaction and then a slightly tougher interrogation. The skin surface temperature of their forehead and periorbital regions was recorded by a hidden thermal camera. Upon analysis, conspicuous differences in the temperature profile and blood-flow pattern have been observed between the individuals who stole money and those who did not.

1 Introduction

Lie detection is a challenging task even for experts [1]. The main cue for differentiating liars from truth-tellers is the expression of their mental state [2]. Though liars try to conceal their activity, they experience feelings such as shame, guilt, anxiety, anger, disgust, and fear [3]. Such feelings are reflected on their faces. In contrast, truth-tellers do not experience such feelings.

Certain reflexes and physiological activities of the human body are subconscious, as they are not controlled by the motor or sensory nervous system. Activities such as blood pressure, pulse rate, gastrointestinal motility, sweating, urinary bladder emptying, and body temperature are controlled by the autonomic nervous system. Activation and reflex action of this system are subconscious. Whenever there is increased blood flow to an area, the temperature of that area rises. When a person lies, he or she is stressed, and the reflex action of the autonomic nervous system is activated, increasing the blood flow to some distinct areas of the face. The areas most affected are the periorbital and forehead regions. Due to increased blood flow, the skin surface temperature in those areas increases [4].

We can identify whether a person is lying based on changes in physiological parameters. The polygraph is a technology commonly used for lie detection by capturing these changes. Blood pressure and respiratory, cardiovascular, and electrodermal activity are the parameters commonly used in lie detection [5, 6]. The polygraph method has limitations because of the dearth of trained experts and because it is an invasive procedure [7, 8]. Even if the interrogation session is of short duration, the time taken to process it is very long [9].

The motivating factors to work out non-invasive methods are:

  • Unpredictable behavior of participants under the present contact-based lie-detection methods.

  • Time-consuming and cumbersome procedure to ascertain the detection.

  • Lack of well-trained experts.

Research has found measurable differences between the behavioral and physiological parameters of a deceiver and those of a truth-teller at the time of interrogation. These differences are extremely important for distinguishing liars from truth-tellers [10,11,12,13]. Some of the most common non-invasive methods use video, audio, text [14], a fusion of video, audio and text [15], and thermal imaging [16] for the detection of deceit.

In this work, we focus on the use of thermal imaging for the detection of deceit. Using thermal imaging, one can easily measure parameters such as respiratory rate [17], pulse rate [18], blood flow [12] and blood flow distribution [18] in a non-invasive manner. The facial blood-flow pattern is affected when a person is lying or trying to deceive others. This change in blood flow beneath the skin causes a change in skin temperature, which can be measured with a thermal camera [19,20,21].

2 Related works

There are two significant aspects of deceit detection:

  1. Creation of an appropriate database.

  2. Proper examination of the thermal signature to differentiate lie and truth.

In most of the cases a mock crime scenario is planned to simulate deceit, which is essential for database creation. Specific biomarkers are used to identify lies and deceit. The work done on the above two points is described in the following paragraphs.

Simulating guilt and lying: Few databases are available in the field of deceit detection based on a mock crime scenario. Examples of such scenarios enacted in the past are concealing a banned object [22], stealing money [7, 23] or jewelry [24], and attempting to kill a mannequin for stealing [7]. The experiment conducted by Frank and Ekman in 2004 is one of the ideal models among these scenarios [25]. In this experiment, they created a mock crime scenario of stealing money. Some of the subjects stole money as instructed, and the rest acted according to their own decision. After the experiment, all the subjects were interrogated. In the end, rewards were given to truth-tellers and punishments to liars.

In the present work, the experiment has been designed uniquely to emulate real-life theft scenarios. Here, the participants were neither informed beforehand about the experiment nor instructed to steal. So, when some of them stole and later lied about it, they could have experienced a genuine feeling of guilt.

Sen et al. [26] have also used real-life scenarios for deceit detection, but instead of an experiment, they used video clips from real court trials. They tried to detect deceit in real-life data using verbal, acoustic, and visual modalities.

Deceit detection using thermal imaging: The most common physiological parameter which can be captured by the thermal camera is temperature variation of skin surface due to change in blood flow rate. Liars experience two types of stress: one is of acute onset, and the other is slow and sustained. In both cases, different facial regions have an increased blood flow rate. In the first case, there is a sudden increase in blood flow in the periorbital area, whereas in the second case, there is a slow increase in the rate in the forehead area [27, 28].

Several researchers have carried out experiments on deceit detection using thermal imaging. Rajoub and Zwiggelaar [3] found that if subjects have committed a crime and lie about it, the skin surface temperature around their periorbital area increases at the time of interrogation. This increase in temperature was particularly marked when they were answering questions pertinent to the crime, but no such change was observed when they faced questions unrelated to the crime. For innocent persons, no change in temperature was observed [3]. The use of statistical and machine learning methods for classifying deceitful from truthful cases has been highlighted by both Pollina et al. and Gunes and Piccardi [8, 29]. It has been reported that methods for deceit detection using thermal imaging have an accuracy of about 87% [13] to 91.7% [8].

In the present study, a low-cost thermal camera has been used. The temperature of two regions of the face, i.e., forehead and periorbital area, have been recorded and subsequently processed using an incremental tracking algorithm.

The technical contributions of this paper are as follows:

  • In contrast to previous works, where most experiments have been conducted with an enacted mock crime scenario, our experiment is conducted in a natural setting with real-life stealing.

  • Participants of the present study are selected at the behest of the police department. They have a past record of habitual stealing. The recording has been carried out in a concealed manner such that the participants are unaware of the experiment.

  • Later, the facial thermal videos were analyzed to measure the pattern of blood flow in particular areas of the face to differentiate deceit from non-deceit.

  • In this work, we have implemented an algorithm that tracks the ROI on the face while accounting for the head movements of the subjects, unlike [7], which analyzed the blood flow rate assuming a completely stationary subject over a very short period of time.

The remaining part of the paper is organized as follows. Section 3 discusses the design of the experiment, the protocol, and the experimental setup. The methodology followed for the detection of deceit is given in Sect. 4. Section 5 presents the results, Sect. 6 discusses the result analysis, and the conclusion is given in Sect. 7.

3 Design of experiment

Ethical committee approval to conduct the experiments in several phases has been obtained from the administration of Midnapore Medical College and Hospital, where the experiment was conducted. The subjects for the experiment have been selected with due diligence at the behest of the city police. Subjects who have a track record of pick-pocketing and stealing and do not have any mental or physical health disorders were chosen for the experiment. The experiment was designed so as to create a real situation of stealing. The subjects were brought individually to the hospital at different times under the pretext of routine medical checkups. They were asked to wait alone in an isolated room where some currency notes had been dropped, a setup intended to entice the subjects to steal. During this waiting period of about an hour, they were served tea and snacks. Subsequently, the subjects were invited to a different room where they underwent a friendly interaction. After about 30 min, another trained expert entered the room for a tougher interrogation, as done in police custody. The thermal video and audio recordings were carried out in a concealed manner.

3.1 Selection of subjects

The study includes 33 male subjects in the age range of 18 to 40 years. All subjects were told that they would have a normal health checkup at the hospital, and in this context, they were invited to the hospital on different dates and at different times. A basic health questionnaire (General Health Questionnaire, GHQ) was given to each subject to rule out any mental illness. At the end of the experiment, detailed information about the interactive sessions was explained to each individual, and those who gave consent were included in the study.

3.2 Ethical approval

The ethical committee of Midnapore Medical College and Hospital approved the experiment protocol. A free medical checkup was carried out for the subjects, and due compensation was paid to every subject after the experiment. Before participation in the experiment, written consent was taken from each subject, in which it was mentioned that their data would be used for future research work.

3.3 Data availability

The thermal video database can be accessed at https://thermalvideoskgp.blogspot.com/2019/07/thermal-inquisitions.html.

3.4 Experiment protocol

The subjects were brought individually to the medical school under the pretext of standard health checkups at the behest of the police. They were brought on different days and at different times to avoid any chance of interaction among themselves.

The experiment has been divided into three stages:

  • Waiting in a room where an opportunity of stealing has been created.

  • Friendly interaction by trained experts.

  • Interrogation by trained experts.

The experiment was conducted in separate rooms. In the first stage, while waiting in an isolated room, the subject was enticed to pick up currency notes dropped beneath the table. In some cases, a wallet with cash was also kept deliberately beside the chair. After about 45 to 60 min of waiting, friendly interaction was carried out in the second room with the subject. Subsequently, the interrogation took place in the same room.

The flowchart for the experimental procedure is given in Fig. 1.

Fig. 1 The experimental flowchart

The act of stealing This part of the experiment happens in the first room. Some currency notes have been left on the table or haphazardly dropped on the floor beneath the table or chairs. This was done to lure the subjects into stealing the money. This method of luring proved very successful: in most cases, the subjects took the cash from the floor or from the wallet, leaving the wallet behind.

Friendly interaction In this stage, a friendly interaction with the subject was arranged in the second room by a trained faculty member. Some basic questions about the subject's health problems were asked. The subjects were also asked about their family history and questions related to their personal, social, and economic background. These questions are oriented in such a way as to make the subject feel at ease. The facial thermal video was recorded in a concealed manner, as shown in Fig. 2.

Fig. 2 The interrogation room

In the next stage, the same setup was used to interrogate the subject. This interrogation, by another trained expert, was used to find out whether the subject had picked up the cash.

The interrogation stage This stage also takes place in the second room. The interrogator involved in this research process has been trained to handle the questionnaires. He is male and is unaware of whether the subject stole while in the waiting room. During the interrogation, some of the subjects admitted that they had stolen the cash while in the waiting room. Others, despite repeated questioning, did not admit their act. These persons are taken as liars, and those who admitted their act of stealing in front of the interrogator are taken as truth-tellers. Those who did not steal and stated so are also taken as truth-tellers. As in the previous stage, the thermal video of the subject was recorded by a concealed camera. In the end, the subject was given a thorough explanation of the experiment, and adequate compensation was paid to each one in the third room.

The change in the facial blood-flow pattern of the subjects during the process of interrogation (deviation from the normal) can possibly lead to the differentiation of truth and lie. The first stage of interaction is set up in such a friendly manner that during the interrogation, some of the subjects open up. However, some are not as evasive as expected [30].

3.5 Experiment setup

A closed, air-conditioned room was chosen for the interrogation session so that there was no disturbance to the process. The experimental setup is shown in Fig. 2. The thermal camera and the voice recorder were kept concealed. The temperature of the room was regulated at 22 °C.

Recording devices A FLIR One Pro USB-C camera was used for the thermal video recording of the interrogation at a frame rate of 2.5034 frames/s and a thermal resolution of 19,200 pixels. A Sony voice recorder was used for audio recording at a sampling rate of 44.1 kHz. A medical instrument box with a hole for the lens was used to conceal the thermal camera, and a newspaper was used to hide the audio recorder.

Illumination setup The room was illuminated with standard ceiling lights [31]. This makes the subjects feel at ease, as no special lights are used.

Inclusion, exclusion criteria for choosing the subject:

Inclusion criteria

  • Age range between 15 and 40 years.

  • Normal/corrected vision.

  • Normal hearing ability.

Exclusion criteria

  • Persons with sleep disorder/physiological illness.

  • History of any head injury.

  • Color blindness.

A total of 33 participants were invited for the experiment, out of which the data of five participants had to be excluded because their excessive head movements during the interview caused their faces to go out of the camera view. The recordings of the remaining 28 participants are used for analysis. It was found that 14 subjects either did not steal the cash or admitted the act, while the other 14 participants stole the cash but did not admit it during interrogation. These data are given in Table 1.

Table 1 Data regarding the number of people who stole

4 Methodology

The analysis of the thermal videos for the detection of truth and lies has been carried out by annotating and delineating the audio and thermal data. Here, the blood flow rates estimated from the heat maps of various facial regions have been used as features to differentiate between the truth-tellers and liars. The block diagram of the methodology is given in Fig. 3. As observed from the block diagram, the first step is the selection of the region of interest (ROI), which comprises the forehead and the periorbital region of the face. The second step is the tracking of the ROI using a suitable algorithm, discussed below. The blood flow rate is then calculated in the ROI using Eq. 6. The blood flow rate and frame number obtained from the algorithm are the input features to the SVM classifier. Finally, the SVM classifier separates the subjects into two categories, i.e., truth-tellers and liars. All the methods, i.e., ROI tracking of the forehead and periorbital region of the face, the calculation of the blood flow rate in the ROI, and the use of a support vector machine for classifying truth-tellers and liars, are described below.

Fig. 3 Block diagram of the deceit detection system

As the thermal resolution of the camera was not very good, we were unable to measure parameters like the respiratory rate and the pulse rate. In this work, we are focusing only on measuring the temperature and blood flow rate in a non-invasive way using the thermal camera.

It is established from previous studies that there is an increase in blood flow rate in the periorbital area due to immediate stress, whereas if the stress builds up progressively in a sustained manner, there is an increase in blood flow rate in the forehead area. Hence, independent analyses have been carried out for both the periorbital and forehead areas.

The analysis of the blood flow rate involves three parts, namely,

  1. Region of interest (ROI) tracking.

  2. Estimating the blood flow rate.

  3. Separating the lie and truth response using support vector machine (SVM).

4.1 Region of interest tracking

The tracking of the ROI is an important step toward finding the blood flow rate in a particular region. We first tried automatic tracking of the face using face-tracking algorithms such as Kanade–Lucas–Tomasi (KLT), but these did not work well with thermal videos and failed to track the face properly. Therefore, we used the incremental tracking algorithm described below, a method developed by Asvadi et al. [32], for ROI tracking. The algorithm uses the RGB histogram of the ROI for tracking. It involves the creation of an object model, the creation of a confidence map, the finding of the new centroid, and the updating of the object model. The ROI obtained in each frame is used to find the average blood flow rate at a particular frame or time.

4.1.1 Creation of the object model

An object model is created using the RGB histograms of the object and the background region. The object, or ROI, is selected manually in the first frame as a rectangle. The object and surrounding rectangles are chosen in such a way that the number of pixels in the object region equals the number of pixels in the region surrounding the object. This can be done by choosing the width of the surrounding region as \(W=\sqrt{2} \times w\) and its height as \(H=\sqrt{2} \times h\), as shown in Fig. 4. Here, w and h are the width and height of the selected object region, and W and H are the width and height of the selected background rectangle. The selected object region is inside the solid red rectangle, and the surrounding background region is the area between the red and dashed black rectangles. In this figure, a sample ROI of the face is shown, but the actual ROIs used in the algorithm are the forehead and periorbital regions of the face.
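
As an illustration of this geometry only, the following Python/NumPy sketch extracts the background pixels lying between the object rectangle and its \(\sqrt{2}\)-scaled surrounding rectangle; the function name, the centering convention, and the pixel layout are our own assumptions, not taken from the paper. Because \(W \times H = 2wh\), the extracted background region contains roughly as many pixels as the object rectangle.

```python
import numpy as np

def annulus_pixels(frame, cx, cy, w, h):
    """Background pixels between the w x h object rectangle and the
    surrounding rectangle with W = sqrt(2)*w and H = sqrt(2)*h, both
    centred at (cx, cy). Since W*H = 2*w*h, this background region holds
    about as many pixels as the object rectangle itself."""
    W, H = int(round(np.sqrt(2) * w)), int(round(np.sqrt(2) * h))
    X0, Y0 = int(cx - W / 2), int(cy - H / 2)
    outer = frame[Y0:Y0 + H, X0:X0 + W]
    mask = np.ones(outer.shape[:2], dtype=bool)
    ix, iy = int(cx - w / 2) - X0, int(cy - h / 2) - Y0
    mask[iy:iy + h, ix:ix + w] = False     # exclude the inner (object) rectangle
    return outer[mask]                     # N x 3 array of background RGB values
```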

Fig. 4 Object and background rectangles selected from the face, where the red box shows a sample ROI

The object model is created using the 3D joint RGB histograms of the object and background regions. A quantized 3D joint RGB histogram is calculated for the region inside the inner rectangle and for the background area. The object model is obtained using the following relation:

$$L_{\text{s}}=\max \left\{ \ln \frac{\max \left\{ H_{\text{o}}(s), \varepsilon \right\} }{\max \left\{ H_{\text{b}}(s), \varepsilon \right\} }, 0\right\} ,$$
(1)

where \(H_{\text{o}}(s)\) is the histogram computed within the object rectangle, and \(H_{\text{b}}(s)\) is the histogram of the background region. Here, 8 bins have been used in each channel for histogram quantization, so the index s ranges from 1 to \(8^{3}\), where \(8^{3}\) is the total number of histogram seeds. Here, \(\varepsilon\) is set to 1.
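
A minimal sketch of this step, under our own naming and with NumPy as the implementation choice (neither is prescribed by the paper): the histogram is quantized to 8 bins per channel and the object model follows Eq. (1) directly.

```python
import numpy as np

def joint_rgb_histogram(pixels, bins=8):
    """Quantized 3-D joint RGB histogram. pixels: uint8 array of shape
    (..., 3). Returns a flat array of bins**3 counts (the seeds s)."""
    p = pixels.reshape(-1, 3).astype(np.uint16)
    q = (p * bins) // 256                              # per-channel bin index, 0..bins-1
    idx = q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2]
    return np.bincount(idx, minlength=bins ** 3).astype(np.float64)

def object_model(obj_pixels, bg_pixels, bins=8, eps=1.0):
    """Log-likelihood-ratio object model of Eq. (1):
    L_s = max( ln( max(H_o(s), eps) / max(H_b(s), eps) ), 0 )."""
    h_o = joint_rgb_histogram(obj_pixels, bins)
    h_b = joint_rgb_histogram(bg_pixels, bins)
    return np.maximum(np.log(np.maximum(h_o, eps) / np.maximum(h_b, eps)), 0.0)
```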

4.1.2 Finding the confidence map

The confidence map \(M\left( x_{i}, y_{i}\right)\) is created by mapping the object region \(I\left( x_{i}, y_{i}, c_{j}\right)\) through the object model \(L_\text{s}\), as given below:

$$L_{\text{s}}: I\left( x_{i}, y_{i}, c_{j}\right) \mapsto M\left( x_{i}, y_{i}\right),$$
(2)

where \((x_{i}, y_{i})\) is the pixel location in image coordinates and \(c_{j}\) is the color channel of the image.
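
A sketch of this pixel-wise lookup, assuming the same 8-bin quantization as in the previous sketch (function and variable names are ours):

```python
import numpy as np

def confidence_map(region, L_s, bins=8):
    """Back-project the object model onto a search region (Eq. 2):
    each pixel's RGB value is quantized to its histogram seed s and
    replaced by L_s(s), giving M(x_i, y_i)."""
    q = (region.astype(np.uint16) * bins) // 256       # per-channel bin index
    idx = q[..., 0] * bins * bins + q[..., 1] * bins + q[..., 2]
    return L_s[idx]                                    # same spatial shape as the region
```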

4.1.3 Finding the new centroid

This part of the algorithm relies on the fact that the change in the object location between frames will not be abrupt. So the center of the object rectangle is shifted to the centroid of the current confidence map. The center of the object rectangle is shifted from the old location \((x_{i}, y_{i})\) to the new location \((x_{\text{new}}, y_{\text{new}})\) using Eqs. 3 and 4:

$$x_{\text{new}}= \frac{\sum _{i=1}^{N}\left( M_{i} \times x_{i}\right) }{\sum _{i=1}^{N} M_{i}},$$
(3)
$$y_{\text{new}}= \frac{\sum _{i=1}^{N}\left( M_{i} \times y_{i}\right) }{\sum _{i=1}^{N} M_{i}}.$$
(4)

In this way, the shifting of the object rectangle is continued until the mean shift of the centroid falls below 2 pixels or the maximum number of iterations (here taken as 6) is reached. This is called mean-shift convergence.
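
A sketch of Eqs. 3 and 4 with the convergence rule above, reusing the confidence_map helper from the previous sketch; reading "mean shift of the centroid" as a 2-pixel threshold is our interpretation, and the names are ours.

```python
import numpy as np

def map_centroid(M):
    """Centroid of a confidence map M inside its own patch (Eqs. 3 and 4)."""
    ys, xs = np.mgrid[0:M.shape[0], 0:M.shape[1]]
    total = M.sum() + 1e-12                            # guard against an all-zero map
    return (M * xs).sum() / total, (M * ys).sum() / total

def mean_shift(center, frame, L_s, w, h, max_iter=6, min_shift=2.0):
    """Repeatedly move the object rectangle to the centroid of its
    confidence map until the shift is below min_shift pixels or max_iter
    iterations are reached (the convergence rule of Sect. 4.1.3)."""
    cx, cy = center
    for _ in range(max_iter):
        x0, y0 = int(cx - w / 2), int(cy - h / 2)
        M = confidence_map(frame[y0:y0 + h, x0:x0 + w], L_s)
        dx, dy = map_centroid(M)
        new_cx, new_cy = x0 + dx, y0 + dy
        shift = np.hypot(new_cx - cx, new_cy - cy)
        cx, cy = new_cx, new_cy
        if shift < min_shift:
            break
    return cx, cy
```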

4.1.4 Updating the model

Once the object location in the present frame is determined using the mean shift, the positive log-likelihood ratio \(L_{p}^{t}\) is calculated and used to update the previous object model \(L_{p}^{t-1}\) via the following relation:

$$L_{p}^{t+1} \leftarrow (1-\gamma ) \times L_{p}^{t-1}+\gamma \times L_{p}^{t},$$
(5)

where \(t+1\), t, and \(t-1\) are the indices of the next, current, and previous frames, respectively. The subscript p indicates a random selection of \(\alpha\) percent of the positive log-likelihood ratio seeds s. Here, \(\alpha\) is set to \(5 \%\), and \(\gamma\) is a forgetting factor set to 0.1. \(L_{p}^{t+1}\) is the updated object model, which is used to find the object in the next frame.
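
A sketch of this update under one reading of the text, namely that only a randomly chosen \(\alpha\)-fraction of the positive log-likelihood seeds is blended; the function name and this interpretation are our assumptions.

```python
import numpy as np

def update_model(L_prev, L_curr, alpha=0.05, gamma=0.1, rng=None):
    """Blend a randomly chosen alpha-fraction of the positive
    log-likelihood seeds of the current model into the previous one
    (Eq. 5): L^{t+1} <- (1 - gamma) * L^{t-1} + gamma * L^{t}."""
    rng = np.random.default_rng() if rng is None else rng
    L_next = L_prev.copy()
    positive = np.flatnonzero(L_curr > 0)              # positive log-likelihood seeds
    if positive.size:
        k = max(1, int(alpha * positive.size))
        sel = rng.choice(positive, size=k, replace=False)
        L_next[sel] = (1 - gamma) * L_prev[sel] + gamma * L_curr[sel]
    return L_next
```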

The ROI is manually selected in the first frame, and the object model (Sect. 4.1.1), confidence map, and centroid are calculated. For the subsequent frames, the ROI is tracked by finding the confidence map, finding the new centroid, and updating the object model based on the detected ROI, as described in Sects. 4.1.2 to 4.1.4. The tracking of the forehead region of a subject in intermittent frames is shown in Fig. 5.

Fig. 5 Tracking of forehead region in intermittent frames. The frames proceed left to right in each of the rows

Algorithm 1 Finding the blood flow rate

4.2 Blood flow rate

The blood flow rate is calculated from the tracked ROI for each frame except the first, for which it is taken as zero. It is related to the temperature gradient by the relation [7]:

$$\frac{\text{d}V_\text{S}}{\text{d}t} = \frac{T_\text{B}(C_\text{S} + K_\text{c} / (3d))-C}{(T_\text{B} - T_\text{S})^2} \frac{\text{d}T_\text{S}}{\text{d}t},$$
(6)

where \(C_\text{S}\) is the heat capacity of skin, \(V_\text{S}\) is the blood flow rate at the skin level, \(T_\text{B}\) = 310 K is the blood temperature at the body core, \(T_\text{S}\) is the skin temperature, \(K_\text{c}\) = 0.168 kcal/m/h/K is the thermal conductivity of skin, d is the depth of the core temperature point from the skin surface, and C is a constant. The average blood flow rate for each frame is computed using Eq. 6. The initial blood flow rate is assumed to be zero.
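
A sketch of how Eq. (6) might be evaluated per frame from the mean ROI temperature series; integrating from an initial value of zero is our reading of "the initial blood flow rate is assumed to be zero", and the values used for \(C_\text{S}\), d, and C are placeholders, since the text does not give them numerically.

```python
import numpy as np

def blood_flow_rate(T_s, dt, C_s=1.0, K_c=0.168, d=0.005, C=0.0, T_b=310.0):
    """Relative skin-level blood flow rate obtained by integrating Eq. (6)
    over the mean ROI temperature series T_s (in kelvin), with the first
    frame taken as zero. C_s, d, and C are placeholder constants."""
    T_s = np.asarray(T_s, dtype=float)
    dT_dt = np.gradient(T_s, dt)                        # dT_S/dt per frame
    dV_dt = (T_b * (C_s + K_c / (3.0 * d)) - C) / (T_b - T_s) ** 2 * dT_dt
    V = np.zeros_like(T_s)
    V[1:] = np.cumsum(dV_dt[1:]) * dt                   # V_S with V_S(frame 0) = 0
    return V

# Example: mean forehead temperature over five frames at 2.5034 frames/s
print(blood_flow_rate([306.0, 306.1, 306.3, 306.6, 307.0], dt=1 / 2.5034))
```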

The consolidated algorithm for the ROI selection and the calculation of the blood flow rate is given in Algorithm 1. We have used two ROIs, consisting of the forehead and the periorbital region, and the blood flow rates of the two regions are analyzed separately.
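
Putting the pieces together, a rough per-frame driver in the spirit of Algorithm 1 might look as follows, reusing the hypothetical helpers sketched above (annulus_pixels, object_model, mean_shift, update_model, blood_flow_rate); the conversion from raw thermal pixel values to temperature is camera-specific and is assumed here to be available as a separate per-pixel temperature map.

```python
import numpy as np

def track_and_measure(frames, temps, init_center, w, h, dt=1 / 2.5034):
    """Track the manually initialised ROI through the video, average the
    temperature inside it, and convert the temperature series into a
    relative blood flow rate. frames: RGB-rendered thermal frames
    (H x W x 3); temps: per-pixel temperature maps in kelvin."""
    cx, cy = init_center
    x0, y0 = int(cx - w / 2), int(cy - h / 2)
    L_s = object_model(frames[0][y0:y0 + h, x0:x0 + w],
                       annulus_pixels(frames[0], cx, cy, w, h))
    mean_T = []
    for frame, temp in zip(frames, temps):
        cx, cy = mean_shift((cx, cy), frame, L_s, w, h)          # Sects. 4.1.2-4.1.3
        x0, y0 = int(cx - w / 2), int(cy - h / 2)
        mean_T.append(temp[y0:y0 + h, x0:x0 + w].mean())         # mean ROI temperature
        L_s = update_model(L_s, object_model(frame[y0:y0 + h, x0:x0 + w],
                                             annulus_pixels(frame, cx, cy, w, h)))  # Sect. 4.1.4
    return blood_flow_rate(np.array(mean_T), dt)                 # Eq. (6), Sect. 4.2
```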

The incremental tracking algorithm works well with thermal videos, unlike other algorithms such as KLT and Viola–Jones. The SVM is a standard generic method for classification, applicable to thermal images as well, and is used in this paper for deceit classification in real time. We have adapted the incremental tracking algorithm to find the blood flow rate in the ROIs over all the frames of the thermal videos of the subjects. On the basis of the blood flow rate analysis, an SVM has been used to differentiate the liars from the truth-tellers.

We have compared our algorithm with the KLT tracking algorithm. KLT tracking on a particular thermal video of a subject is shown in Fig. 6. It is evident from the figure that KLT cannot be used for tracking the ROI on the face in the thermal videos. Therefore, we have used a different algorithm in our work, which tracks the ROI while accounting for the movements of the face. For this reason, we could not compare the final accuracy of our tracking algorithm with that of other algorithms.

Fig. 6 A particular frame during KLT tracking in a thermal video

4.3 Support vector classification

The SVM is an effective machine learning tool proposed by Vapnik for binary classification problems [33]. In a two-class classifier, the goal is to construct a hyperplane, as shown in Fig. 7, which separates the data points of the two classes while maximizing the margin between them. Mathematically, the hyperplane is represented by the equation:

$$\textbf{W}^\text{T}\textbf{x}+\textbf{b}=0,$$
(7)

where W is the weight vector and b is the bias. The optimal hyperplane divides the data points (x) such that the points of each class lie on opposite sides of the plane. That is,

$$\begin{aligned} & \text{if}\ \textbf{W}^\text{T}\textbf{x}+\textbf{b}>0,\quad \textbf{x}\ \text{is in class 1}, \\ & \text{if}\ \textbf{W}^\text{T}\textbf{x}+\textbf{b}<0,\quad \textbf{x}\ \text{is in class 2}. \end{aligned}$$

The output of Algorithm 1 for the thermal video of a subject is the blood flow rate in each frame of the video. Each output, i.e., a (frame number, blood flow rate) pair, is taken as a data point, so Algorithm 1 yields a two-dimensional feature vector whose features are the frame number and the blood flow rate. These are the input features of the SVM classifier. The blood flow rate of every subject starts from zero, as the initial value in the first frame is taken as zero. The blood flow rate lies between 0 and 0.1 for all subjects, and the features are not normalized. In our analysis, 100 frames of the thermal video are considered. The data points of all subjects are separated into lie and truth cases, and the hyperplane separating the two classes is found using the SVM.
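
To make the classification step concrete, here is a minimal sketch using scikit-learn's linear SVC; the library choice, the variable names, and the toy stand-in data are our assumptions, since the paper only states that a linear SVM separates the (frame number, blood flow rate) points of the two groups.

```python
import numpy as np
from sklearn.svm import SVC

# Toy stand-ins for the per-subject outputs of Algorithm 1: each subject has a
# length-100 blood flow rate series; label 0 = truth-teller, 1 = liar.
rng = np.random.default_rng(0)
flow_by_subject = [np.cumsum(rng.uniform(0.0, 0.0005, 100)) for _ in range(14)] + \
                  [np.cumsum(rng.uniform(0.0, 0.0015, 100)) for _ in range(14)]
labels_by_subject = [0] * 14 + [1] * 14

# Each (frame number, blood flow rate) pair is one data point for the SVM.
X = np.array([[f, v] for flow in flow_by_subject for f, v in enumerate(flow)])
y = np.repeat(labels_by_subject, 100)

clf = SVC(kernel="linear").fit(X, y)       # linear hyperplane W^T x + b = 0
print(clf.coef_, clf.intercept_)           # W and b of Eq. (7)
```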

Fig. 7 Optimal hyperplane separating the two distinct classes

5 Result

The data include 14 cases of truth and 14 cases of lie, as shown in Table 1. The plots of the blood flow rate in the forehead and periorbital regions of the subjects are shown in Fig. 8, with the hyperplane separating the truth and lie cases shown as a dotted line. The plots cover 100 frames of the thermal video, taken from the hard interrogation. It can be observed from the graphs of the forehead and the periorbital regions that, except for 3 cases, all subjects have been properly separated by the hyperplane. The misclassifications comprise three lies in the forehead region and three lies in the periorbital region, so the classification accuracy obtained is 89.28%. It is observed that the pattern of the rise in blood flow rate differs between lie and truth cases in both the periorbital and forehead regions: for deceit cases, the rise is rapid, whereas for the truth cases it is slow and smooth. Though the dotted line separates the truth and lie cases, there is some overlap near the separating line during the initial frames, because the blood flow rate is calculated with the initial condition taken as zero. The separation of the truth and lie cases becomes more evident as the interrogation progresses. This difference in the pattern of blood flow rate for deceit (rapidly increasing slope) versus non-deceit (moderately increasing slope) cases can be used to differentiate between them.

Fig. 8 Graph showing support vector machine classification of lie and truth

We have compared the proposed method with previous work. Pavlidis and Levine [7] classified subjects into deceptive and truthful groups by finding the slope products of the blood flow rate curves: if the slope product (in angle) crosses a threshold value, the subject is classified as deceptive; otherwise, the subject is considered truthful. We implemented this method on our own database and compared the recall, precision, F1 score, and accuracy of the two methods. The results are provided in Table 2.

Table 2 Comparison of recall, precision, accuracy and F1 score in the periorbital and forehead region by using the Pavlidis method of slope product and our own proposed method

It can be observed from the above table that we obtain an accuracy of 89.28%, which is much higher than the accuracy obtained using the method in [7].

The 28 subjects are arranged into two categories: subject numbers 1 to 14 are in the truth class, and subject numbers 15 to 28 are in the deceit class. From the forehead data, subject numbers 21, 24, and 27 (who were actually liars) were classified into the truth class by the SVM. Similarly, from the periorbital data, subject numbers 17, 21, and 24 (who were actually liars) were classified into the truth class by the SVM. All the subjects who told the truth were classified correctly by the SVM, giving 14 true negatives (TN) and 0 false positives (FP). By combining the forehead and the periorbital outputs using an OR operation (a subject is labeled deceptive if either output is deceptive, and truthful only if both outputs are truthful), we obtain 12 true positives (TP = 12) and 2 false negatives (FN = 2). This results in a final accuracy of 92.86%, which is higher than our previous accuracy of 89.28%. The confusion matrix for the combined forehead and periorbital data is shown in Fig. 9.
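
The fusion and the resulting counts can be reproduced from the misclassifications reported above; in this sketch the subject-level decisions are reconstructed booleans (an assumption of ours, since the per-subject SVM outputs are not tabulated in the text).

```python
import numpy as np

subjects = np.arange(1, 29)
is_liar = subjects >= 15                           # subjects 15-28 are the deceit class

# Subject-level SVM decisions reconstructed from the misclassifications
# reported in the text (True = classified as deceptive).
forehead = is_liar.copy()
forehead[[20, 23, 26]] = False                     # subjects 21, 24, 27 missed
periorbital = is_liar.copy()
periorbital[[16, 20, 23]] = False                  # subjects 17, 21, 24 missed

combined = forehead | periorbital                  # deceptive if either region says so

TP = int(np.sum(combined & is_liar))               # 12
FN = int(np.sum(~combined & is_liar))              # 2
FP = int(np.sum(combined & ~is_liar))              # 0
TN = int(np.sum(~combined & ~is_liar))             # 14
print(TP, FN, FP, TN, (TP + TN) / len(subjects))   # accuracy ~ 0.9286
```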

Fig. 9 Confusion matrix of the combined forehead data and the periorbital data

6 Discussion

This paper uses a novel method for simulating guilt. A standard binary classifier such as a linear SVM could classify the thermal responses into truth and lie with 89.28% accuracy, which is greater than that obtained by Pavlidis [7]. The better classification performance might be due to the following reasons.

Firstly, the experimental protocol used a real-life crime scenario in which a real act of stealing takes place. Unlike in previous works, the participants were not asked to act, which might be the reason for the evident thermal signature. The experiments were done unobtrusively so that the behavior of the subjects was not affected. The natural behavior of the subjects was the advantage of the experiment, though at the same time it created some difficulty: excess head movement by some subjects, due to their natural behavior or stress, hampered the recording. Such movement made proper tracking difficult because the face sometimes moved out of the camera view. A pilot study was done by the authors in 2019 with ten subjects, but because of the small number of subjects, this difficulty was not faced at that time, whereas it is seen in the present study [34].

Secondly, most of the participants have a record of petty thefts, unlike in previous works where the subjects were ordinary people asked to act. Seventy percent of the 28 participants stole during the experiment, which indicates that the participants are habitual thieves and makes the database special, as the behavior of ordinary people and habitual thieves differs. During interrogation, it could easily be seen that some subjects were adamant about denying their act.

The results obtained also corroborate earlier findings that the blood flow rate increases steeply for the deceit cases while it increases gradually for the non-deceit cases.

7 Conclusion

The proposed method has been successful in simulating guilt, as is evident from the thermal signatures obtained. Here, the blood flow rate in the forehead and periorbital regions is estimated from the thermal videos. The results show that almost all the responses could be properly segregated.

In future work, we would like to include voice as well as thermal parameters for detecting deceit, which would help overcome the shortcoming caused by subjects moving out of the camera view.

Availability of data and materials

The thermal video database can be accessed at https://thermalvideoskgp.blogspot.com/2019/07/thermal-inquisitions.html.

Abbreviations

SVM:

Support vector machine

ROI:

Region of interest

GHQ:

General Health Questionnaire

References

  1. M.G. Aamodt, H. Custer, Who can best catch a liar? A meta-analysis of individual differences in detecting deception. Forensic Examiner 15(1), 6–11 (2006)

  2. P.A. Granhag, M. Hartwig, A new theoretical perspective on deception detection: on the psychology of instrumental mind-reading. Psychol. Crime Law 14(3), 189–200 (2008)

  3. B.A. Rajoub, R. Zwiggelaar, Thermal facial analysis for deception detection. IEEE Trans. Inf. Forensics Secur. 9(6), 1015–1023 (2014)

  4. J.E. Hall, Guyton and Hall Textbook of Medical Physiology E-book, 11th edn. (Elsevier Health Sciences, Philadelphia, 2010)

  5. P.D. Drummond, J.W. Lance, Facial flushing and sweating mediated by the sympathetic nervous system. Brain 110(3), 793–803 (1987)

  6. J.M. Vendemia, M. Schillaci, R.F. Buzan, E. Green, S. Meek, Credibility assessment: psychophysiology and policy in the detection of deception. Am. J. Forensic Psychol. 24(4), 53 (2006)

  7. I. Pavlidis, J. Levine, Thermal image analysis for polygraph testing. IEEE Eng. Med. Biol. Mag. 21(6), 56–64 (2002)

  8. D.A. Pollina, A.B. Dollins, S.M. Senter, T.E. Brown, I. Pavlidis, J.A. Levine, A.H. Ryan, Facial skin surface temperature changes during a “concealed information’’ test. Ann. Biomed. Eng. 34(7), 1182–1189 (2006)

  9. P. Tsiamyrtzis, J. Dowdall, D. Shastri, I. Pavlidis, M. Frank, P. Ekman, Lie detection-recovery of the periorbital signal through tandem tracking and noise suppression in thermal facial video, in Proceedings of SPIE sensors, and command, control, communications, and intelligence (C3I) technologies for homeland security and homeland defense IV, vol. 5778 (2005), pp. 29–31

  10. D.T. Lykken, The GSR in the detection of guilt. J. Appl. Psychol. 43(6), 385 (1959)

  11. I. Pavlidis, J. Levine, P. Baukol, Thermal imaging for anxiety detection, in Proceedings IEEE workshop on computer vision beyond the visible spectrum: methods and applications (Cat. No. PR00640) (2000), pp. 104–109

  12. J.J. Furedy, G. Ben-Shakhar, The roles of deception, intention to deceive, and motivation to avoid detection in the psychophysiological detection of guilty knowledge. Psychophysiology 28(2), 163–171 (1991)

  13. P. Tsiamyrtzis, J. Dowdall, D. Shastri, I.T. Pavlidis, M. Frank, P. Ekman, Imaging facial physiology for the detection of deceit. Int. J. Comput. Vis. 71(2), 197–214 (2007)

  14. S.H. Abd, I.A. Hashim, A.S.A. Jalal, Automated deception detection systems, a review. Iraqi J. Sci. (2021). https://doi.org/10.24996/ijs.2021.SI.2.8

  15. S. Chebbi, S.B. Jebara, Deception detection using multimodal fusion approaches. Multimed. Tools Appl. 82, 1–30 (2021)

  16. S. Satpathi, S. Bagchi, A. Routray, P.S. Satpathi, R. Dash, Adaptive change detection of the temperature pattern of the face for identifying deceit, in IECON 2021—47th annual conference of the IEEE industrial electronics society (IEEE, 2021), pp. 1–6

  17. J. Fei, Z. Zhu, I. Pavlidis, Imaging breathing rate in the CO2 absorption band, in 27th annual international conference of the IEEE-EMBS 2005 engineering in medicine and biology society (IEEE, 2005), pp. 700–705

  18. N. Sun, M. Garbey, A. Merla, I. Pavlidis, Imaging the cardiovascular pulse, in IEEE computer society conference on computer vision and pattern recognition, 2005. CVPR 2005, vol. 2 (IEEE, 2005), pp. 416–421

  19. B.M. DePaulo, J.J. Lindsay, B.E. Malone, L. Muhlenbruck, K. Charlton, H. Cooper, Cues to deception. Psychol. Bull. 129(1), 74 (2003)

  20. P. Ekman, Telling Lies: Clues to Deceit in the Marketplace, Politics, and Marriage (Revised Edition) (WW Norton & Company, New York, 2009)

  21. A. Vrij, K. Edward, K.P. Roberts, R. Bull, Detecting deceit via analysis of verbal and nonverbal behavior. J. Nonverbal Behav. 24(4), 239–263 (2000)

  22. K. Harmer, S. Yue, K. Guo, K. Adams, A. Hunter, Automatic blush detection in “concealed information” test using visual stimuli, in 2010 international conference of soft computing and pattern recognition (SoCPaR) (IEEE, 2010), pp. 259–264

  23. A.K. Webb, C.R. Honts, J.C. Kircher, P. Bernhardt, A.E. Cook, Effectiveness of pupil diameter in a probable-lie comparison question test for deception. Leg. Criminol. Psychol. 14(2), 279–292 (2009)

  24. U. Jain, B. Tan, Q. Li, Concealed knowledge identification using facial thermal imaging, in 2012 IEEE international conference on acoustics, speech and signal processing (ICASSP) (IEEE, 2012), pp. 1677–1680

  25. M.G. Frank, P. Ekman, Appearing truthful generalizes across different deception situations. J. Pers. Soc. Psychol. 86(3), 486 (2004)

  26. U.M. Sen, V. Perez-Rosas, B. Yanikoglu, M. Abouelenien, M. Burzo, R. Mihalcea, Multimodal deception detection using real-life trial data. IEEE Trans. Affect. Comput. 13(1), 306–319 (2020)

  27. K.K. Park, H.W. Suk, H. Hwang, J.-H. Lee, A functional analysis of deception detection of a mock crime using infrared thermal imaging and the concealed information test. Front. Hum. Neurosci. 7, 70 (2013)

  28. Z. Zhu, P. Tsiamyrtzis, I. Pavlidis, Forehead thermal signature extraction in lie detection, in 2007 29th annual international conference of the IEEE engineering in medicine and biology society (IEEE, 2007), pp. 243–246

  29. H. Gunes, M. Piccardi, Bi-modal emotion recognition from expressive face and body gestures. J. Netw. Comput. Appl. 30(4), 1334–1345 (2007)

  30. A. Vrij, Detecting Lies and Deceit: Pitfalls and Opportunities (Wiley, Chichester, 2008)

  31. Y.K. Cheong, V.V. Yap, H. Nisar, A novel face detection algorithm using thermal imaging, in 2014 IEEE symposium on computer applications and industrial electronics (ISCAIE) (IEEE, 2014), pp. 208–213

  32. A. Asvadi, H. Mahdavinataj, M. Karami, Y. Baleghi, Online visual object tracking using incremental discriminative color learning. CSI J. Comput. Sci. Eng. 12(2), 16–28 (2014)

  33. V.N. Vapnik, The Nature of Statistical Learning Theory (Springer, New York, 1995)

  34. S. Satpathi, K.M.I.Y. Arafath, A. Routray, P.S. Satpathi, Detection of deceit from thermal videos on real crime database, in 2020 11th international conference on computing, communication and networking technologies (ICCCNT) (IEEE, 2020) pp. 1–6


Acknowledgements

First of all, we would like to express our gratitude to the staff of the Department of Microbiology, Midnapore Medical College and Hospital, for their support in arranging the experiment. Secondly, we would like to thank the local security of Midnapore Medical College for arranging the subjects.

Funding

There was no funding for the research reported.

Author information

Authors and Affiliations

Authors

Contributions

All four authors contributed equally to this manuscript. The authors read and approved the final manuscript.

Corresponding author

Correspondence to Saswata Satpathi.

Ethics declarations

Ethics approval and consent to participate

The ethical committee of Midnapore Medical College and Hospital approved the experiment protocol. A free medical checkup was carried out for the subjects, and due compensation was paid to every subject after the experiment. Before participation in the experiment, written consent was taken from each subject, in which it was mentioned that their data would be used for future research work.

Consent for publication

All the authors have given consent for publication.

Competing interests

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Satpathi, S., Arafath, K.M.I.Y., Routray, A. et al. Analysis of thermal videos for detection of lie during interrogation. J Image Video Proc. 2024, 9 (2024). https://doi.org/10.1186/s13640-024-00624-5

