
# Fall detection in dusky environment

Ying-Nong Chen^{1}, Chi-Hung Chuang^{2} (corresponding author), Hsin-Min Lee^{1}, Chih-Chang Yu^{3}, and Kuo-Chin Fan^{1}

*EURASIP Journal on Image and Video Processing* **2016**:16

https://doi.org/10.1186/s13640-016-0115-8

© Chen et al. 2016

**Received:** 29 October 2014. **Accepted:** 13 March 2016. **Published:** 31 March 2016.

## Abstract

Accidental falls are the most prominent cause of accidental death among elderly people owing to their slow bodily reactions. Automatic fall detection technology integrated into a health-care system can assist humans in monitoring the occurrence of falls, especially in dusky environments. In this paper, a novel fall detection system focusing mainly on dusky environments is proposed. In dusky environments, the silhouette images of human bodies extracted from conventional CCD cameras are usually imperfect due to abrupt changes of illumination. Our work therefore adopts a thermal imager to detect human bodies. The proposed approach adopts a coarse-to-fine strategy. First, downward optical flow features are extracted from the thermal images to identify fall-like actions in the coarse stage. The horizontal projections of motion history images (MHI) extracted from fall-like actions are then used to verify the incident with the proposed nearest neighbor feature line embedding (NNFLE) in the fine stage. Experimental results demonstrate that the proposed method can distinguish fall incidents with high accuracy even in dusky environments and overlapping situations.

## Keywords

- Fall detection
- Optical flow
- Motion history image
- Nearest neighbor feature line

## 1 Introduction

Moylan [1] illustrated the gravity of falls as a health risk with abundant statistics. Larson [2] described the importance of falls in the elderly. The National Center for Health Statistics showed that more than one third of adults aged 65 or older fall each year. Moreover, for this age group, 60 % of lethal falls occur at home, 30 % occur in public areas, and 10 % happen in health-care institutions [3]. In the literature on fall detection, Tao et al. [4] applied the aspect ratio of the foreground object to detect fall incidents. Their system first tracks the foreground objects and then analyzes the sequences of features for fall incident detection. Anderson et al. [5] also applied the aspect ratio of the silhouette to detect fall incidents. The rationale is based mainly on the fact that the aspect ratio of the silhouette is usually very large when a fall incident occurs and much smaller otherwise. Juang [6] proposed a neural fuzzy network method to classify human body postures, such as standing, bending, sitting, and lying down. In [9], Foroughi et al. proposed a fall detection method using an approximated ellipse of the human body silhouette and the head pose as features for a multi-class support vector machine (SVM). Rougier et al. [8] applied the motion history image (MHI) and variations of the human body shape to detect falls. In [7], Foroughi et al. proposed a modified MHI, the integrated time motion image (ITMI), as the motion feature. The eigenspace technique was then used for motion feature reduction, and the reduced features were fed into an individual neural network for each activity. Liu et al. [10] proposed a nearest neighbor classification method to classify the silhouette aspect ratio for fall incidents. To differentiate between falling and lying down, the time difference between the two was used as a key feature. Liao et al. [11] proposed a slip and fall detection system based on a Bayesian Belief Network (BBN). They used the integrated spatiotemporal energy (ISTE) map to obtain the motion measure and then constructed the BBN model of the causality of slips and falls for fall prevention. Olivieri et al. [12] proposed a spatiotemporal motion feature, termed motion vector flow instance (MVFI) templates, to represent activities. A canonical eigenspace technique was then used for MVFI template reduction and template matching.

In this paper, a novel fall detection mechanism based on coarse-to-fine strategy which is workable in dusky environments is proposed. In the coarse stage, the downward optical flow features are extracted from the thermal images to identify fall-like actions. Then, the horizontal projected motion history image (MHI) features of fall-like actions are used in the fine stage to verify the fall by the nearest neighbor feature line embedding.

The contributions of this work are listed as follows: (1) using the thermal imager instead of CCD camera to capture intact human body silhouettes; (2) proposing a coarse-to-fine strategy to detect fall incidents; (3) proposing a nearest neighbor feature line embedding method for fall detection which improves the original nearest feature line embedding method; (4) proposing a scheme to detect fall incidents even though occlusion occurs.

The rest of this paper is organized as follows. In Section 2, the concept of nearest feature line embedding (NFLE) algorithm presented in our previous work [13] will be briefly reviewed. Then, the fall detection based on coarse-to-fine strategy and the nearest neighbor feature line embedding (NNFLE) algorithm are presented in Section 3. Experimental results are illustrated in Section 4 to demonstrate the soundness and effectiveness of the proposed fall detection method. Finally, conclusions are given in Section 5.

## 2 Nearest feature line embedding (NFLE)

The NFLE transformation is a linear transformation method based on a nearest feature space (NFS) strategy [13] originating from the nearest linear combination (NLC) methodology [14]. Since the points on a nearest feature line (NFL) are linearly interpolated or extrapolated from each pair of feature points, its performance is better than that of point-based methods. In addition, the NFL metric is embedded into the transformation through the discriminant analysis phase instead of in the matching phase.

Consider *N* *d*-dimensional samples *X* = [*x*_{1}, *x*_{2}, …, *x*_{N}] constituting *N*_{c} classes, where the class label of *x*_{i} is denoted as \( {l}_{x_i}\in \left\{1,2,3,\dots, {N}_c\right\} \) and a specified point *y*_{i} = *w*^{T}*x*_{i} lies in the transformed space. The distance from point *y*_{i} to a feature line is defined as ‖*y*_{i} − *f*^{(2)}(*y*_{i})‖, in which *f*^{(2)} is a function generated by two points and *f*^{(2)}(*y*_{i}) is the projected point on the line. A number of \( {C}_2^{N-1} \) possible lines will be generated for point *y*_{i}. The scatter of feature points to feature lines can then be computed and embedded in the discriminant analysis. In consequence, this approach is termed NFLE.

The weight values *w*^{(2)}(*y*_{i}) (being 1 or 0) constitute a connection relationship matrix of size \( N\times {C}_2^{N-1} \) for the *N* feature points and their corresponding projection points *f*^{(2)}(*y*_{i}). Consider the distance \( \left\Vert {y}_i-{f}_{m,n}^{(2)}\left({y}_i\right)\right\Vert \) from point *y*_{i} to a feature line *L*_{m,n} that passes through two points *y*_{m} and *y*_{n}; the projection point \( {f}_{m,n}^{(2)}\left({y}_i\right) \) can be represented as a linear combination of points *y*_{m} and *y*_{n} by \( {f}_{m,n}^{(2)}\left({y}_i\right)={y}_m+{t}_{m,n}\left({y}_n-{y}_m\right) \), in which \( {t}_{m,n}={\left({y}_i-{y}_m\right)}^T\left({y}_n-{y}_m\right)/{\left({y}_n-{y}_m\right)}^T\left({y}_n-{y}_m\right) \). The mean square distance from all training samples to their corresponding NFLs is minimized, and its representation is given by the following lemma.

*Lemma 2.1*: The mean square distance from the training points to the NFLs can be represented in the form of a Laplacian matrix.

*Proof*: For each point *y*_{i}, the vector from *y*_{i} to the projection point \( {f}_{m,n}^{(2)}\left({y}_i\right) \) on the NFL *L*_{m,n}, which passes through points *y*_{m} and *y*_{n}, can be obtained as follows:

where *t*_{n,m} = 1 − *t*_{m,n} and *i* ≠ *m* ≠ *n*. Two values in the *i*th row of matrix *M* are set as *M*_{i,m} = *t*_{n,m} and *M*_{i,n} = *t*_{m,n}. The other values in the *i*th row are set as *M*_{i,j} = 0 for *j* ≠ *m*, *n*. In general, the mean square distance from all training points to their NFLs is obtained as follows:

in which ∑_{j} *M*_{i,j} = 1 and *L* = *D* − *W*. Following [15], matrix *W* is defined as *W*_{i,j} = (*M* + *M*^{T} − *M*^{T}*M*)_{i,j} when *i* ≠ *j* and zero otherwise. The function in (3) can thus be represented by a Laplacian matrix.

Moreover, when the *K* NFLs are chosen from \( {C}_2^{N-1} \) possible combinations, the objective function in (1) is also represented as a Laplacian matrix as stated in the following theorem.

*Theorem 2.1:* The objective function in (1) can be represented as a Laplacian matrix that preserves the locality among samples.

*Proof*: The function *F* in (1) is first decomposed into *K* components, each denoting the mean square distance from point *y*_{i} to its *k*th NFL. The first component matrix *M*_{i,j}(1) denotes the connection relationship between point *x*_{i} and the NFL *L*_{m,n} for *i*, *m*, *n* = 1, …, *N* and *i* ≠ *m* ≠ *n*. Two non-zero terms, *M*_{i,n}(1) = *t*_{m,n} and *M*_{i,m}(1) = *t*_{n,m}, exist in each row of matrix *M*_{i,j}(1) and satisfy ∑_{j} *M*_{i,j}(1) = 1. According to Lemma 2.1, it is represented as a Laplacian matrix *w*^{T}*XL*(1)*X*^{T}*w*. In general, *M*_{i,n}(*k*) = *t*_{m,n} and *M*_{i,m}(*k*) = *t*_{n,m} for *i* ≠ *m* ≠ *n* if line *L*_{m,n} is the *k*th NFL of point *x*_{i}, and zero otherwise. All components are derived in a Laplacian matrix representation, *w*^{T}*XL*(*k*)*X*^{T}*w*, for *k* = 1, 2, …, *K*. Therefore, function *F* in (1) becomes

where *W*_{i,j}(*k*) = (*M*(*k*) + *M*(*k*)^{T} − *M*(*k*)^{T}*M*(*k*))_{i,j} and *L*(*k*) = *D*(*k*) − *W*(*k*), for *k* = 1, 2, …, *K*, and *L* = *L*(1) + *L*(2) + … + *L*(*K*). Since the objective function in (4) can be represented as a Laplacian matrix, the locality of the samples is also preserved in the low-dimensional space. More details are given in [13].

Two parameters, *K*_{1} and *K*_{2}, are manually determined for the computation of the within-class scatter **S**_{w} and the between-class scatter **S**_{b}, respectively: \( {F}_{K_1}^{(2)}\left({x}_i,{C}_p\right) \) is a set of the *K*_{1} NFLs within the same class *C*_{p} as point *x*_{i}, and \( {F}_{K_2}^{(2)}\left({x}_i,{C}_l\right) \) is a set of the *K*_{2} NFLs belonging to classes different from that of point *x*_{i}. The Fisher criterion *tr*(**S**_{B}/**S**_{W}) is then maximized to find the projection matrix *w*, which is composed of the eigenvectors with the largest corresponding eigenvalues. A new sample in the low-dimensional space can be obtained by the linear projection *y* = *w*^{T}*x*. After that, the NN (one-NN) matching rule is applied to classify the samples. The training algorithm for the NFLE transformation proposed in our previous work [13] is described in Fig. 4.

Although the point-to-line strategy is successfully adopted in the training phase instead of the classification phase for the nearest feature line-based transformation, some drawbacks remain and limit its performance: (1) extrapolation/interpolation inaccuracy: NFLE may not preserve the locality precisely when prototypes are far away from the probes (the probes are the training samples projected onto the NFL, and the prototypes are the training samples that generate the NFL); (2) high computational complexity: a large number of feature lines are generated when there are many training samples; and (3) the singularity problem: NFLE needs a matrix inversion to find the final transformation matrix *w*, which is ill-posed especially when the sample size is small. Motivated by these three problems, we propose a modified NFLE algorithm that avoids all of them and is optimized for detecting fall incidents. The reason we build on NFLE is that it generates virtual training samples by linearly interpolating or extrapolating each pair of feature points, which increases generalization and data diversity; however, it also introduces the three drawbacks above. For completeness and to avoid repetition, the details of the three problems of NFLE and the proposed modified NFLE (NNFLE) algorithm are elaborated in Section 3.4.

## 3 The proposed fall detection mechanism

### 3.1 Human body extraction

### 3.2 Optical flow in the coarse stage

- (1)
Rule 1: Given 20 consecutive frames, the average vertical optical flows point downward in more than 75 % of the frames.

- (2)
Rule 2: The sum of the average vertical optical flows over the 20 consecutive frames is larger than a threshold (10 in this study).
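
The two rules can be sketched directly (a hypothetical helper; we assume downward flow is recorded as positive, and the 75 % ratio and threshold of 10 are the values stated above):

```python
def is_fall_like(avg_vflows, down_ratio=0.75, flow_sum_thresh=10.0):
    """Coarse-stage check over 20 consecutive frames.

    avg_vflows: average vertical optical flow per frame, with downward
    motion taken as positive (an assumption of this sketch).
    Rule 1: downward in more than `down_ratio` of the frames.
    Rule 2: the summed flow exceeds `flow_sum_thresh`.
    """
    down_frames = sum(1 for v in avg_vflows if v > 0)
    rule1 = down_frames > down_ratio * len(avg_vflows)
    rule2 = sum(avg_vflows) > flow_sum_thresh
    return rule1 and rule2
```

Only sequences passing both rules are forwarded to the fine-stage MHI verifier; everything else is filtered out as ordinary motion.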

As shown in Fig. 8a, a fall incident may not be identified if the subject is overlapped by another person. To solve this problem, the bounding box is divided into two equal boxes when overlapping occurs; the width of the silhouette is used to determine whether overlapping has occurred. The optical flow features are then extracted in each divided box, and the box with the larger average downward optical flow is used to identify a possible fall action. As a result, the fall incidents can be extracted correctly as shown in Fig. 8b, whereas Fig. 8a demonstrates the result without the bounding box division strategy.
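
The bounding-box division strategy can be sketched as follows (an illustrative helper with our own names; the width threshold and the downward-positive flow convention are assumptions of this sketch):

```python
import numpy as np

def split_box_downward_flow(vflow, box, width_thresh):
    """If the silhouette bounding box is wider than `width_thresh`
    (taken here as the sign of overlapping subjects), split it into
    two equal halves and return the half with the larger average
    downward flow; otherwise keep the whole box.

    vflow: per-pixel vertical flow map, downward positive.
    box:   (x0, y0, x1, y1) bounding box of the silhouette.
    """
    x0, y0, x1, y1 = box
    if x1 - x0 <= width_thresh:
        return box                      # no overlap detected
    xm = (x0 + x1) // 2                 # split into two equal boxes
    left = vflow[y0:y1, x0:xm].mean()
    right = vflow[y0:y1, xm:x1].mean()
    return (x0, y0, xm, y1) if left >= right else (xm, y0, x1, y1)
```

The returned half-box is the one fed to the two coarse-stage rules, so a fall remains detectable even when a second, standing subject shares the original box.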

### 3.3 Motion history image in the fine stage

where *U*_{h}, *U*_{w}, and *g*(*i*, *j*) are the height, the width, and the pixel value of the motion energy at row *i* and column *j*, respectively, and *Q*(*i*) is the resulting 50-dimensional feature vector. Figure 9c, f illustrates the comparison between the feature vectors of a walk and a fall; the distributions of the two actions are significantly different. As can be seen, the vertical motion information of the fall action is encoded directly by the horizontal projections, which are extracted from the MHI rather than the silhouette. Therefore, the MHI features of fall-like actions are fed into the constructed NNFLE verifier to identify fall incidents after the coarse stage.

### 3.4 Nearest neighbor feature line embedding (NNFLE)

Given a feature vector *x*_{i} extracted from the MHI, the proposed NNFLE method is formulated as the following optimization problem:

where \( {x}_i^{\mathrm{within}} \) indicates the projected point of *x*_{i} on the nearest neighbor feature lines (NNFLs) formed by samples with the same label, and \( {x}_i^{\mathrm{between}} \) indicates the projected point of *x*_{i} on NNFLs formed by samples with labels different from that of *x*_{i}. It should be mentioned that in NFLE each NFL is formed by samples of a single class, whereas in the proposed NNFLE the NNFLs onto which \( {x}_i^{\mathrm{between}} \) is projected can be formed by samples whose labels also differ from each other. In other words, all the other classes are treated as one class when calculating the projected point \( {x}_i^{\mathrm{between}} \).

The objective function *J*(*w*) can be simplified to the following form:

with the constraint *w*^{T}*w* = 1 imposed on the proposed NNFLE. The transformation matrix *w* can thereby be obtained by solving the eigenvalue problem:

Consider two feature lines *L*_{2,3} and *L*_{4,5} generated from two prototype pairs (*x*_{2}, *x*_{3}) and (*x*_{4}, *x*_{5}), respectively. Points *f*_{2,3}(*x*_{1}) and *f*_{4,5}(*x*_{1}) are the two projection points on lines *L*_{2,3} and *L*_{4,5} for a query point *x*_{1}. From Fig. 11, it is clear that point *x*_{1} is close to points *x*_{2} and *x*_{3} but far away from points *x*_{4} and *x*_{5}. However, the distance ‖*x*_{1} − *f*_{4,5}(*x*_{1})‖ for line *L*_{4,5} is smaller than that for line *L*_{2,3}, i.e., ‖*x*_{1} − *f*_{2,3}(*x*_{1})‖. The discriminant vector for line *L*_{4,5} to point *x*_{1} is hence selected instead of the other one. In addition, a great deal of computational time is needed due to the vast number of feature lines in the classification phase, e.g., \( {C}_2^{N-1} \) possible lines.

To alleviate these problems, the feature lines in NNFLE are generated only from the *k* nearest neighbor prototypes. More specifically, when two points *x*_{m} and *x*_{n} belong to the nearest neighbors of a query point *x*_{i}, the straight line passing through points *x*_{m} and *x*_{n} is an NNFL. The discriminant vector *x*_{i} − *f*_{m,n}(*x*_{i}) is chosen for the scatter computation. The selection strategy for discriminant vectors in NNFLE is designed as follows:

- (1)
The within-class scatter **S**_{W}: the NNFLs are generated from the *k*_{1} nearest neighbor samples within the same class, i.e., a set \( {F}_{k_1}^{+}\left({x}_i\right) \), for the computation of the within-class scatter matrix.

- (2)
The between-class scatter **S**_{B}: the *k*_{2} nearest neighbor samples in classes different from that of a specified point *x*_{i}, i.e., a set \( {F}_{k_2}^{-}\left({x}_i\right) \), are selected to generate the NNFLs and calculate the between-class scatter matrix.

The training algorithm for the NNFLE transformation proposed in this study is described in Fig. 11.

The proposed NNFLE method is a simple and effective way to alleviate the extrapolation and interpolation errors. In addition, the scatter matrices are generated based on Fisher's criterion and represented as Laplacian matrices. Moreover, NNFLE is computationally more efficient than NFLE. Given *N* training samples, \( {C}_2^{N-1} \) possible feature lines are generated and \( {C}_2^{N-1} \) distances have to be calculated for a specified point, from which the *K*_{1} nearest feature lines are chosen to calculate the class scatter. The time complexity is *O*(*N*^{2}) for line generation and *O*(2*N*^{2} log *N*) for distance sorting. By contrast, when only the *k* nearest prototypes are used for line generation, the time complexity for selecting the *K*_{1} nearest feature lines is *O*(*k*^{2}) + *O*(2*k*^{2} log *k*), with an extra overhead of *O*(*N* log *N*) for finding the *k* nearest prototypes. When *N* is large, the traditional method therefore needs considerably more time to calculate the class scatter.

## 4 Experimental results

**Table 1.** The data sets used in the experiments

| Actions | Number of training videos | Number of testing videos |
|---|---|---|
| Walk (one person) | 30 (5135 frames) | 50 (17,125 frames) |
| Fall (one person) | 30 (545 frames) | 50 (1822 frames) |
| Walk (multiple persons) | 30 (5069 frames) | 50 (16,130 frames) |
| Fall (multiple persons) | 30 (460 frames) | 50 (1810 frames) |

### 4.1 Performance comparisons of various fall detection algorithms

The transformation matrix *w* of the proposed NNFLE is constructed from the eigenvectors of **S**_{b} − **S**_{w} with the largest corresponding eigenvalues when the objective function *J* is maximized. In our work, the dimensionality of the feature vectors is first reduced by the PCA transformation to remove noise, keeping more than 99 % of the feature information. After the PCA transformation, the optimal projection transformations are obtained for the proposed NNFLE method. All of the testing frames are matched with the trained prototypes using the NN matching rule. The performance comparisons are tabulated in Table 2, from which we can see that the proposed coarse-to-fine fall detection strategy clearly outperforms the other methods.

**Table 2.** The fall detection performance on the data set (%)

| Method | Classification action | Reference: Fall (videos) | Reference: Walk (videos) |
|---|---|---|---|
| CPL | Fall | 92.00 (46/50) | 8.00 (4/50) |
| | Walk | 10.00 (5/50) | 90.00 (45/50) |
| BBN | Fall | 80.00 (40/50) | 20.00 (10/50) |
| | Walk | 12.00 (6/50) | 88.00 (44/50) |
| NFLE | Fall | 94.00 (47/50) | 6.00 (3/50) |
| | Walk | 6.00 (3/50) | 94.00 (47/50) |
| Ours | Fall | 98.00 (49/50) | 2.00 (1/50) |
| | Walk | 0.00 (0/50) | 100.00 (50/50) |

### 4.2 The identification capability of coarse-to-fine verifier

**Table 3.** The identification capability of the coarse and fine stages of the proposed method

| Classification action | Coarse stage (frames): Walk | Coarse stage (frames): Fall | Fine stage (frames): Walk | Fine stage (frames): Fall |
|---|---|---|---|---|
| Walk | 17,079/17,125 | 46/17,125 | 46/46 | 0/46 |
| Fall | 68/1822 | 1754/1822 | 153/1754 | 1601/1754 |

### 4.3 Performance evaluation of fall detection under overlapping situations

**Table 4.** The performance evaluation of fall detection under overlapping situations (%)

| Method | Classification action | Reference: Fall (videos) | Reference: Walk (videos) |
|---|---|---|---|
| NFLE | Fall | 92.00 (46/50) | 10.00 (5/50) |
| | Walk | 6.00 (3/50) | 90.00 (45/50) |
| Ours | Fall | 96.00 (48/50) | 4.00 (2/50) |
| | Walk | 0.00 (0/50) | 100.00 (50/50) |

## 5 Conclusions

In this paper, a novel fall detection mechanism based on a coarse-to-fine strategy in dusky environments is proposed. The human body in a dusky environment can be successfully extracted using the thermal imager, and fragments inside the human body silhouette are significantly reduced as well. In the coarse stage, the optical flow algorithm is applied to the thermal images, and most walk actions are filtered out by analyzing the downward flow features. In the fine stage, the projected MHI is used as the feature, followed by the NNFLE method to verify fall incidents. The proposed NNFLE method, which adopts a nearest neighbor selection strategy, is capable of alleviating extrapolation/interpolation inaccuracies, the singularity problem, and high computational complexity. Experimental results demonstrate that the proposed method outperforms other state-of-the-art methods and can effectively detect fall incidents even when multiple subjects are moving together.

## Declarations

**Open Access**This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

## References

1. KC Moylan, EF Binder, Falls in older adults: risk assessment, management and prevention. Am. J. Med. **120**(6), 493–497 (2007)
2. L Larson, TF Bergmann, Taking on the fall: the etiology and prevention of falls in the elderly. Clin. Chiropr. **11**(3), 148–154 (2008)
3. GS Sorock, Falls among the elderly: epidemiology and prevention. Am. J. Prev. Med. **4**(5), 282–288 (1988)
4. J Tao, M Turjo, M-F Wong, M Wang, Y-P Tan, Fall incidents detection for intelligent video surveillance, in *Proceedings of the 15th International Conference on Communications and Signal Processing*, 2005, pp. 1590–1594
5. D Anderson, JM Keller, M Skubic, X Chen, Z He, Recognizing falls from silhouettes, in *Proceedings of the 28th IEEE EMBS Annual International Conference*, 2006
6. CF Juang, CM Chang, Human body posture classification by neural fuzzy network and home care system applications. IEEE Trans. SMC, Part A **37**(6), 984–994 (2007)
7. H Foroughi, N Aabed, A Saberi, HS Yazdi, An eigenspace-based approach for human fall detection using integrated time motion image and neural networks, in *Proceedings of the IEEE International Conference on Signal Processing (ICSP)*, 2008
8. C Rougier, J Meunier, A St-Arnaud, J Rousseau, Fall detection from human shape and motion history using video surveillance, in *Proceedings of the 21st International Conference on Advanced Information Networking and Applications Workshops*, vol. 2, 2007, pp. 875–880
9. H Foroughi, A Rezvanian, A Paziraee, Robust fall detection using human shape and multi-class support vector machine, in *Proceedings of the Sixth Indian Conference on CVGIP*, 2008
10. CL Liu, CH Lee, P Lin, A fall detection system using k-nearest neighbor classifier. Expert Syst. Appl. **37**(10), 7174–7181 (2010)
11. YT Liao, CL Huang, SC Hsu, Slip and fall event detection using Bayesian Belief Network. Pattern Recogn. **45**, 24–32 (2012)
12. DN Olivieri, IG Conde, XAV Sobrino, Eigenspace-based fall detection and activity recognition from motion templates and machine learning. Expert Syst. Appl. **39**(5), 5935–5945 (2012)
13. YN Chen, CC Han, CT Wang, KC Fan, Face recognition using nearest feature space embedding. IEEE Trans. Pattern Anal. Mach. Intell. **33**(6), 1073–1086 (2011)
14. SZ Li, J Lu, Face recognition using the nearest feature line method. IEEE Trans. Neural Netw. **10**(2), 439–443 (1999)
15. S Yan, D Xu, B Zhang, HJ Zhang, S Lin, Graph embedding and extensions: a general framework for dimensionality reduction. IEEE Trans. Pattern Anal. Mach. Intell. **29**(1), 40–51 (2007)
16. G Wu, Distinguishing fall activities from normal activities by velocity characteristics. J. Biomech. **33**(11), 1497–1500 (2000)
17. CM Wang, KC Fan, CT Wang, Estimating optical flow by integrating multi-frame information. J. Inf. Sci. Eng. **24**(6), 1719–1731 (2008)
18. AF Bobick, JW Davis, The recognition of human movement using temporal templates. IEEE Trans. Pattern Anal. Mach. Intell. **23**(3), 257–267 (2001)