A smart intraocular pressure risk assessment framework using frontal eye image analysis

Abstract

Intraocular pressure (IOP) refers to the fluid pressure inside the eye. A gradual increase in IOP and persistently high IOP are conditions/symptoms that may lead to diseases such as glaucoma and must therefore be closely monitored. As the pressure in the eye increases, different parts of the eye may become affected and eventually damaged. An effective way to prevent a rise in eye pressure is early detection. A new smart healthcare framework is presented to evaluate intraocular pressure risk from frontal eye images. The framework monitors the status of IOP risk by analyzing frontal eye images using image processing and machine learning techniques. A database of images collected from Princess Basma Hospital in Jordan was used in this work. The database contains 400 eye images: 200 images with normal IOP and 200 high eye pressure cases. The framework extracts five features from the frontal eye image: the pupil-to-iris diameter ratio, the mean redness level of the sclera, the red area percentage of the sclera, and two features measured from the extracted contour of the sclera (contour height and contour area). Once the features are extracted, a neural network is trained and tested to determine the eye pressure status of the patient. The framework detects the status of IOP (normal or high IOP) and produces evidence of the relationship between the five extracted frontal eye image features and IOP, a relationship that has not previously been investigated through automated image processing and machine learning techniques applied to frontal eye images.

1 Introduction

Elevated IOP is one of the most serious causes of glaucoma, a leading cause of blindness worldwide. Glaucoma is known as the silent thief of sight because it can sneak up on any patient [1]. The blindness caused by elevated IOP is irreversible because the optic nerve dies [2]. An effective way to prevent a pressure rise inside the eye is through early detection: the earlier the disease is detected, the easier and more effective the treatment will be [3].

Initially, ophthalmologists label some patients as glaucoma candidates due to several risk factors and symptoms that their eyes may present. One of these factors is the suspicion of a potential rise in IOP [4]. Pressure can increase inside the eye from a liquid called aqueous humor, which is secreted by the ciliary body into the posterior chamber [5]. The aqueous humor then flows through the pupil into the anterior chamber [6] and finally drains through a sponge-like structure called the trabecular meshwork (TM) [7]. The pressure damages the nerve fibers, which can result in patches of vision loss and, if left untreated, may lead to total blindness. In addition, a rise in eye pressure dilates the pupil [8]. Beyond the buildup of aqueous humor in the chamber, other factors such as medications unrelated to eye disease can contribute to the onset of elevated IOP. Drugs taken for anxiety or depression affect the brain and the physiological composition of the body [9], including the muscles in the eye that control pupil size. The progression of elevated IOP is generally preventable by medical treatment, although some patients continue to progress even after treatment [10]. However, the portion of vision that is already lost cannot be restored, which is why it is necessary to detect early signs of a rise in IOP. Generally, regular eye exams such as the tonometry, ophthalmoscopy, perimetry, gonioscopy, and pachymetry tests are conducted at the clinic for this purpose [11].

In this paper, a new automated detection framework is developed to determine whether the eye has normal or high eye pressure. Our smart framework is based on image processing and machine learning techniques to extract five features solely from the frontal eye image: the pupil/iris diameter (or radius) ratio, the mean redness level (MRL) and red area percentage (RAP) of the sclera, and features of the sclera contour (area and height). Table 1 shows a comparison between the existing clinical methods and the proposed framework. Once the five features are extracted from the frontal eye images, a neural network (NN) is trained and tested on the extracted features to obtain a risk assessment result for intraocular pressure (normal or high IOP). The proposed work does not directly measure the IOP value in millimeters of mercury (mmHg); rather, it determines whether the user/patient's IOP is at a risky (high) level. It thus serves as an initial IOP risk assessment framework that can assist many individuals, especially those with a family history of elevated IOP and glaucoma, by providing an early warning if their IOP is beyond the normal range. If the proposed initial screening framework indicates high IOP, the patient should visit a clinic/doctor for further examination and consultation.

Table 1 Comparison with existing clinical methods

2 Related work

Many researchers have proposed several works on the issue of IOP detection and analysis of the eye from images. However, there is a lack of studies regarding IOP based on frontal eye images in the computer vision field. Most of the studies focus on fundus images that show the status of the optic nerve or investigate relevant feature extraction for purposes other than IOP. Moreover, some studies require additional hardware/devices with direct contact to the eye to measure IOP.

Mariakakis et al. [12] proposed an approach to assess intraocular pressure using a smartphone and a hardware adapter attached to it. The adapter is a clear acrylic cylinder, 8 mm in diameter and 63 mm in height, connected to the camera of the smartphone. The authors stated that only trained users should operate this device. The user holds the smartphone perpendicularly over the patient's eye and then applies the weight of the acrylic cylinder to it. The smartphone camera then records the applanation of the eye. Video analysis is applied to measure two ellipses: the acrylic cylinder (outer ellipse) and the applanation surface (inner ellipse). The ellipses are then mapped to absolute measurements of the diameter of the acrylic cylinder, and the final diameter measurement is mapped to an IOP value using a clinically validated table such as the one published by Adolph Posner [13]. As stated by the authors, this device cannot be deployed by ordinary users, and the patient must visit the clinic.

Gisler et al. [14] proposed a glaucoma detection technique based on intraocular pressure monitoring. The data collection was supervised by the Sensimed company, where contact lens sensors (CLS) were used to automatically record continuous ocular dimensional changes over 24 h. The CLS system is safe and non-invasive; however, a healthcare professional is required to install it on and remove it from the patient. The authors used Java software for data management and feature extraction. The feature extraction was split into two parts: statistical features (raw frequency values and filter banks) and physiological features (eye blinks, ocular pulse, and slope of the curve), which were fed to a support vector machine (SVM) classifier [15].

Shahiri et al. [16] proposed a micro-electromechanical pressure sensor for measuring IOP based on P++ silicon. Finite element analysis (FEA) was used to simulate, optimize, and analyze the mechanical properties of the device. The authors investigated the deformation along the Z axis of a diaphragm with a thickness of 4 mm at an applied pressure of 30 mmHg and found that the deflection at the center of the diaphragm varies linearly with pressure over this range.

The work in [17, 18] used fundus images to identify the visual field defect and detect glaucomatous progression. The authors used the Gaussian mixture model (GMM) clustering method based on inspection points of the fundus images to incorporate the distance between these points.

Table 2 provides a comparison of our IOP risk assessment framework with other related techniques. It is important to mention, however, that no prior related work has used frontal eye images for IOP risk determination. Therefore, the table only summarizes the related techniques, their image databases, and their performance and/or application. It offers a summarized comparison of approaches/devices that used different inputs/sensors (e.g., fundus images) for a similar purpose or output, such as IOP.

Table 2 Summary of related techniques

3 Material

In this study, we used the image database (DB) from Princess Basma Hospital in Jordan (see Footnote 1), which was generated in 2014 and completed in 2016. Four hundred participants contributed to the database of images; half of them were patients with high eye pressure, and the other half represented normal eye pressure cases. The age range of the patients was between 40 and 65 years (which generally represents the age range of high IOP cases). Each patient's eye pressure level was recorded in the database by ophthalmologists, and the images were labeled as high or normal IOP. The IOP range of the 200 normal eye pressure cases was 11–20 mmHg (mean 14.7 mmHg), and the range of the 200 high eye pressure cases was 21–30 mmHg (mean 24.7 mmHg). The IOP cutoff used in this research is 20 mmHg, as advised by the ophthalmologists: if a participant has IOP ≤ 20 mmHg, the case is considered normal; otherwise, it is considered high IOP. All database images were taken at a distance of 20 cm between the camera and the patient, under the same lighting conditions. The normal and high IOP images were stored in two different folders, and each image was saved in JPG format. The camera used was a Canon camera (model T6K1) with a resolution of 3241 × 2545 pixels; a comparable resolution is available in most smartphones today.

4 Methods

In this research, we developed a smart IOP risk estimation framework based on five features extracted from frontal eye images. Each eye image first goes through a preprocessing stage that prepares it for feature extraction. The features are the pupil/iris ratio, mean redness level (MRL), red area percentage (RAP), and sclera contour features (area and height). The final IOP risk result is displayed as the eye status: normal or high IOP. This result comes from scaled values computed by a neural network. Figure 1 shows an overall view of our framework. The development was carried out in MATLAB 2013a.

Fig. 1 IOP risk assessment framework

4.1 Preprocessing

Prior to feature extraction, the Adaboost face detection algorithm and Haar cascade eye detection [19, 20] (as shown in Fig. 2) are applied to the face images in order to extract the eye image automatically. Each eye region is extracted as a rectangle.

Fig. 2 Haar cascade classifier to detect object
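For illustration, the eye-region extraction step described above can be sketched in MATLAB using the built-in Viola-Jones cascade detectors from the Computer Vision System Toolbox. This is not the authors' exact code; the detector models, variable names, and the input file name are assumptions.

```matlab
% Sketch of face and eye-pair detection (Viola-Jones / Haar cascades).
faceDetector = vision.CascadeObjectDetector('FrontalFaceCART');
eyeDetector  = vision.CascadeObjectDetector('EyePairBig');

I = imread('subject_face.jpg');           % hypothetical input image
faceBox = step(faceDetector, I);          % each row is [x y width height]
faceImg = imcrop(I, faceBox(1, :));       % keep the first detected face

eyeBox = step(eyeDetector, faceImg);      % detect the eye-pair rectangle
eyeImg = imcrop(faceImg, eyeBox(1, :));   % rectangular eye segment used in later stages
```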

After extracting the eye image, different steps at the preprocessing stage are applied in order to extract the pupil, iris, and sclera, as shown in Fig. 3.

Fig. 3 Preprocessing stage to extract the iris and pupil

In the first step, the image is cropped and resized so that the height-to-width ratio is 1:1.8. Then, the red layer I(:,:,1) of the image is extracted, because it discards unwanted data and enhances the iris and pupil area (the red layer is used here only to detect the pupil and the iris). After that, a morphological reconstruction technique [21] is applied to the red layer image in order to remove the light reflection (often seen as a bright circle) on the pupil. Removing the light reflection is an important step, since the circular Hough transform (CHT) technique [22] is later used to detect the pupil and iris. Then, local adaptive thresholding [23] is applied to separate the foreground from the background. Canny edge detection [24], one of the most well-known edge detection techniques, is then applied to detect the edges of the eye image. Canny edge detection consists of three main stages: Gaussian filtering [25], non-maximum suppression (NonMaxSup) [26], and hysteresis thresholding (Hysthresh) [27]. After several experiments with the Canny edge detection function, we observed that the best parameter values for generating edge images are the ones shown in Table 3.

Table 3 Typical parameter values of canny edge, gamma, radius, and thresholding
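As a rough illustration of this preprocessing chain (red-layer extraction, reflection removal by morphological reconstruction, local adaptive thresholding, and Canny edge detection), the following MATLAB sketch uses standard Image Processing Toolbox functions. The structuring-element size, threshold offset, and Canny parameters below are placeholders, not the tuned values of Table 3.

```matlab
red = eyeImg(:, :, 1);                         % red layer enhances the iris/pupil region

% Opening-by-reconstruction to suppress the bright specular reflection on the pupil
marker  = imerode(red, strel('disk', 10));
noGlare = imreconstruct(marker, red);

% Simple local adaptive threshold: compare each pixel with its neighborhood mean
localMean  = imfilter(double(noGlare), fspecial('average', 25), 'replicate');
foreground = double(noGlare) < localMean - 5;  % dark foreground (pupil/iris) assumed

% Canny edge map (built-in stand-in for the Gaussian filter/NonMaxSup/Hysthresh chain)
edges = edge(noGlare, 'canny', [0.05 0.15], 2);
```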

The gamma values for Canny edge detection are also shown in Table 3. The gamma value is a parameter of the gamma adjustment ("adjgamma") function [28], which changes the contrast of an image. After applying Canny edge detection, the circular Hough transform (CHT) technique is applied in order to extract the iris and pupil, as shown in Fig. 4.

Fig. 4 Iris and pupil detection using CHT technique
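A hedged sketch of the CHT step is shown below using MATLAB's imfindcircles as a stand-in for the CHT technique of [22]. The pixel radius ranges and sensitivity values are assumptions that would depend on the image resolution.

```matlab
% Iris first (larger dark circle), then pupil (smaller dark circle).
[irisC, irisR]   = imfindcircles(noGlare, [60 120], ...
                       'ObjectPolarity', 'dark', 'Sensitivity', 0.92);
irisCenter = irisC(1, :);   irisRadius = irisR(1);

[pupilC, pupilR] = imfindcircles(noGlare, [15 55], ...
                       'ObjectPolarity', 'dark', 'Sensitivity', 0.95);
pupilRadius = pupilR(1);

pupilIrisRatio = pupilRadius / irisRadius;     % first feature of the framework
```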

The CHT function has one disadvantage: it performs poorly when a large part of the circle to be detected lies outside the image. This is not a problem for detecting the pupil or iris circles, since both are found completely inside the image. However, this issue does arise when detecting the upper and lower eyelid circles. To work around this problem, we extend the image with a black area either at the top (when detecting the lower eyelid) or at the bottom (when detecting the upper eyelid), as shown in Fig. 5.

Fig. 5 Extended eye image
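A minimal sketch of this padding step, assuming the padarray function and a padding amount equal to the image height:

```matlab
% Pad the edge image with black rows so that eyelid circles whose centers fall
% outside the frame can still be found by the CHT.
pad = size(edges, 1);                                 % padding amount is an assumption
edgesForLower = padarray(edges, [pad 0], 0, 'pre');   % extend from the top -> lower eyelid
edgesForUpper = padarray(edges, [pad 0], 0, 'post');  % extend from the bottom -> upper eyelid
```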

Moreover, the only circle detected without any cropping or deletion is the iris circle; we then use the iris circle parameters to modify the edge images and ease the job of finding the other circles. For example, before detecting the pupil, the edge image is cropped to a square centered on the iris circle center with sides just under the iris radius. This makes detecting the pupil circle much easier, as the details outside the iris are not needed.

At this point, the pupil radius/iris radius ratio can be calculated directly and is ready to be used. Sample results are shown in Fig. 6: the blue circle marks the iris, the red the pupil, the yellow the upper eyelid, and the green the lower eyelid.

Fig. 6 Detecting pupil/iris and eyelids

After detecting the circles (iris, upper eyelid, and lower eyelid), segmenting the sclera becomes straightforward. The sclera is the area included in the intersection of the upper and lower eyelid circles, excluding the iris circle. So, for a pixel to be in the sclera region, it should be inside both the upper and lower eyelid circles but not inside the iris circle. The equation of a circle is:

$$ (x-a)^2 + (y-b)^2 = r^2 $$
(1)

where (a, b) are the center coordinates and r is the radius. Since any horizontal line y = constant passing through the circle cuts it into two regions, the locus of all points of that horizontal line that lie inside the circle is:

$$ a-\sqrt{r^2-(y-b)^2} \le x \le a+\sqrt{r^2-(y-b)^2} $$
(2)

when b − r ≤ y ≤ b + r

In the implementation, the variable x corresponds to the image column ("col") and y to the row ("row"), so the simple formula above gives all pixels inside a circle in the image. In this way, we were able to extract the sclera, as shown in Fig. 7 (the sclera image is denoted as S).

Fig. 7 Extracted sclera
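The membership test of Eqs. (1) and (2) can be sketched in MATLAB as follows. The circle parameters (aU, bU, rU), (aL, bL, rL), and (aI, bI, rI) for the upper eyelid, lower eyelid, and iris circles are assumed to come from the CHT step; the variable names are illustrative.

```matlab
% A pixel belongs to the sclera if it is inside both eyelid circles
% but outside the iris circle.
[cols, rows] = meshgrid(1:size(eyeImg, 2), 1:size(eyeImg, 1));
inUpper = (cols - aU).^2 + (rows - bU).^2 <= rU^2;
inLower = (cols - aL).^2 + (rows - bL).^2 <= rL^2;
inIris  = (cols - aI).^2 + (rows - bI).^2 <= rI^2;
scleraMask = inUpper & inLower & ~inIris;

S = bsxfun(@times, double(eyeImg), double(scleraMask));   % extracted sclera image S
```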

4.2 Feature extraction

Once the preprocessing steps have been applied to the image, five features are measured: the pupil/iris ratio, the mean redness level, the red area percentage, the area of the sclera contour, and the height of the sclera contour.

The pupil/iris diameter or radius ratio is measured once the pupil and iris have been detected. Figure 8 illustrates a sample of pupil/iris ratio results.

Fig. 8 The ratio of pupil/iris

The mean redness level (MRL) is computed from the reddish pixels of the sclera. Each pixel is a combination of three values (red, green, and blue), and millions of combinations can produce reddish colors when the red component is large. Therefore, the red value of a reddish pixel should always be larger than its green and blue values, and to prevent the pixel from shifting toward yellow or violet, the difference between the green and blue values should not be too large. MRL is calculated by the proposed formula in Eq. 6:

$$ \mathrm{Mean\ of\ red\ pixels} = M(\mathrm{RPV}) = M\big(S(:,:,1)\big) = \frac{\sum S(:,:,1)}{m} $$
(3)
$$ \mathrm{Mean\ of\ green\ pixels} = M(\mathrm{GPV}) = M\big(S(:,:,2)\big) = \frac{\sum S(:,:,2)}{m} $$
(4)
$$ \mathrm{Mean\ of\ blue\ pixels} = M(\mathrm{BPV}) = M\big(S(:,:,3)\big) = \frac{\sum S(:,:,3)}{m} $$
(5)

So,

$$ \mathrm{MRL}=\frac{3\times M(\mathrm{RPV})-M(\mathrm{GPV})-M(\mathrm{BPV})}{3\times 255} $$
(6)

where M(RPV) corresponds to the mean of the red pixel values, M(GPV) is the mean of the green pixel values, M(BPV) is the mean of blue pixel values, and m refers to the total number of pixels in the extracted sclera.
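A minimal MATLAB sketch of Eqs. (3)-(6), assuming the sclera image S and the mask scleraMask from the previous step (m is the number of sclera pixels):

```matlab
m = sum(scleraMask(:));                 % total number of pixels in the extracted sclera
R = S(:, :, 1);  G = S(:, :, 2);  B = S(:, :, 3);
meanR = sum(R(scleraMask)) / m;         % M(RPV)
meanG = sum(G(scleraMask)) / m;         % M(GPV)
meanB = sum(B(scleraMask)) / m;         % M(BPV)
MRL = (3 * meanR - meanG - meanB) / (3 * 255);
```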

The red area percentage (RAP) is defined as the fraction of red pixels in the binary image of the extracted sclera (P):

$$ \mathrm{RAP}=\left(\sum_{i=0}^{n}P_i\right) \div m $$
(7)

In Eq. (7), Pi represents the red pixel values in the extracted sclera, and m represents the total number of pixels in the extracted sclera. Figure 9 represents samples of our MRL and RAP results.

Fig. 9 Results of MRL and RAP. a Normal IOP. b High IOP
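The paper does not spell out the exact rule used to binarize "red" sclera pixels; the sketch below adopts the qualitative description given earlier (red dominating green and blue, with green and blue close to each other) as an assumed rule for building P and computing Eq. (7). The margins used are placeholders.

```matlab
% Assumed binarization of reddish pixels; the margins 20 and 40 are placeholders.
P = (R > G + 20) & (R > B + 20) & (abs(G - B) < 40) & scleraMask;
RAP = sum(P(:)) / m;                    % Eq. (7): red pixels over total sclera pixels
```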

The idea of measuring features of the sclera contour is inspired by sonar-like techniques such as ultrasound, where trained operators/healthcare personnel are actively involved [29]. To obtain the contour of the sclera, the active contour ("activecontour") function [30] is first employed, in which the 2-D grayscale image A is segmented into foreground (object) and background regions using active contour-based segmentation. The black-and-white (bw) output is a binary image in which the foreground is white (logical true) and the background is black (logical false). In these computations, the mask is a binary image that specifies the initial state of the active contour: the boundaries of the white object region(s) in the mask define the initial contour position used to segment the image, as shown in Fig. 10. To obtain faster and more accurate segmentation results, we specify an initial contour position that is close to the desired object boundaries.

Fig. 10 Active contour

To find the area and height of the contour, the "regionprops" function is used. The area is obtained directly from this function, and the height is calculated by subtracting the lower extreme from the upper extreme. The area is then divided by the mask area to obtain the area ratio, and similarly, the height is divided by the mask height to obtain the height ratio. These are newly proposed frontal eye image features that have not been previously investigated in the literature for IOP risk assessment.
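A hedged MATLAB sketch of the contour feature computation with activecontour and regionprops follows; the initial mask placement and the iteration count are assumptions, not the authors' settings.

```matlab
A = rgb2gray(eyeImg);                             % 2-D grayscale image to segment
mask = false(size(A));
mask(20:end-20, 20:end-20) = true;                % initial contour near the eye boundaries

bw = activecontour(A, mask, 300);                 % binary foreground/background result
stats = regionprops(bw, 'Area', 'Extrema');
[~, idx] = max([stats.Area]);                     % keep the largest segmented region
contourArea   = stats(idx).Area;
ext           = stats(idx).Extrema;               % 8 x 2 list of extreme points [x y]
contourHeight = max(ext(:, 2)) - min(ext(:, 2));  % upper extreme minus lower extreme

% Ratios with respect to the initial mask, as described above
maskStats  = regionprops(mask, 'Area', 'Extrema');
maskHeight = max(maskStats(1).Extrema(:, 2)) - min(maskStats(1).Extrema(:, 2));
areaRatio   = contourArea / maskStats(1).Area;
heightRatio = contourHeight / maskHeight;
```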

4.3 Features representation

Once the five features were extracted from the images, the results were stored in a feature matrix. The matrix consists of five rows corresponding to the five features and 400 columns (n = 400) corresponding to the number of images in the database used in this study. To train a neural network, we need input and target data. The input matrix is organized as shown in Fig. 11.

Fig. 11 Feature matrix

4.4 Classification

Several machine learning algorithms were applied to the extracted features. For instance, a support vector machine (SVM) with a radial basis function (RBF) kernel was tested alongside the neural network classifier [31, 32] to identify the best accuracy. The neural network classifier achieved better accuracy and execution time than the SVM; therefore, the neural network classifier is used in the rest of this work. The neural network-based classification applied to the extracted features is designed with the following settings.

Three network layers are utilized for classification. The first is the input layer with five inputs corresponding to the number of features; the second is one hidden layer containing 10 nodes; and the third is one output layer that produces the final binary result (normal or high eye pressure). Figure 12 shows a visual representation of the layers used. As the input values move from one layer to the next, they are multiplied by weights, and this procedure is repeated all the way to the output layer. The hidden layer values may be greater than 1, less than zero, or in between; therefore, we use the sigmoid as the activation function to scale the output of each node to lie between 0 and 1. Finally, if the output layer value is 0.5 or greater, the case is considered high eye pressure; otherwise, it is considered normal eye pressure. Hence, the final output of our framework is either normal or high IOP. We used 75% of the images in the database for training and 25% for testing; the patient images used in the testing phase are completely different from those used in the training phase (not different images of the same patients, but images of different patients). The system used an adaptive learning rate, as shown in Fig. 13. These settings, determined after several experiments, yielded the best performance for our neural network.

Fig. 12 Layers of the proposed neural network framework

Fig. 13 Training and testing performance
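A minimal training sketch using MATLAB's Neural Network Toolbox is shown below. The 5 × 400 feature matrix X and the 1 × 400 label vector T (0 = normal, 1 = high IOP) are assumed to have been assembled as in Fig. 11; the training function and data-division settings are illustrative choices, not necessarily the authors' exact configuration.

```matlab
net = patternnet(10, 'traingda');        % one hidden layer with 10 nodes; adaptive learning rate
net.layers{2}.transferFcn = 'logsig';    % sigmoid output scaled to [0, 1], as in the paper

net.divideFcn = 'dividerand';            % random split of the samples
net.divideParam.trainRatio = 0.65;
net.divideParam.testRatio  = 0.25;
net.divideParam.valRatio   = 0.10;

[net, tr] = train(net, X, T);            % tr records the train/validation/test indices

scores    = net(X(:, tr.testInd));       % scaled outputs in [0, 1] for the test images
predicted = scores >= 0.5;               % 0.5 cutoff: high IOP if >= 0.5
testAcc   = mean(predicted == T(tr.testInd));
```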

5 Results

The objective of our study is to extract five features from frontal eye images in order to determine the status of IOP (normal or high). We first present the results obtained for each feature.

5.1 Pupil/iris ratio

During the day, the normal adult pupil and iris diameters vary between 2 to 4 mm for the pupil and 11 to 14 mm for the iris [33]. Traditionally, the radii of the iris and pupil are measured in millimeters. However, in computer vision it is inaccurate to express the radius in millimeters, even when the images carry information such as a fixed capture distance or tangible reference objects. Therefore, in this study we rely on the ratio of the pupil to the iris, which discards the units and yields more reliable results. In our study, the pupil/iris ratio during daytime hours falls between 0.5 and 0.7 for adults. Table 4 presents a sample of the results after our pupil/iris ratio detection technique was applied to the normal and high IOP cases. The table is extended to include the other features as well. It is split into two blocks, normal and high IOP, and each block is further split into five sub-blocks, one for each feature (pupil/iris ratio, MRL, RAP, contour area, and contour height). The mean and standard deviation (STD), along with the median values, are reported in the last two rows.

Table 4 Sample values of pupil/iris ratio, MRL, RAP, and contour feature (area, height) results for normal, high eye pressure

The results show that there is a strong relationship between the pupil/iris ratio and high intraocular pressure. Once the medical community has this knowledge, we believe that our smart framework will help in the initial screening of IOP, which may lead to early detection of high IOP and help circumvent the onset of blindness.

5.2 MRL and RAP

The extraction of the sclera was the most difficult part of this research, since the sclera shares similar color characteristics with the surrounding skin. The sclera was extracted, and the mean redness level was calculated according to the proposed Eq. (6). The red area percentage was also calculated in the extracted sclera, as shown in Eq. (7). Table 4 also contains a sample of the results for normal and high IOP cases based on the MRL and RAP measures. The results show that there is a strong relationship between MRL, RAP, and IOP. This information will also aid automatic IOP screening for early detection of high-risk IOP, in an effort to help prevent blindness.

5.3 Contour features (area, height)

In this section of the results, we report the sclera contour area and height measures for normal and high eye pressure cases from frontal eye images. Table 4 depicts these results as well. The sclera contour feature values are also represented as ratios, as described in the previous section.

5.4 High-risk IOP determination

The system was configured with the neural network classifier settings stated in the previous section. The status of the eye (normal or high IOP) comes from the activation function of the neural network implementation. The implementation dictates the type of normalization functions that can be used to bring the activation values into the range between 0 and 1; these computations are done such that all the percentages sum to 1. For example, a higher pupil/iris ratio could correspond to a scaled value closer to 1. It is important to note, however, that the system does not rely on a single feature to make the final decision; rather, it depends on all five features together, along with a neural network machine learning model, to provide the final decision. The value 0.5 of the output range is used as a cutoff to differentiate between normal and high IOP. As an example, when the pupil/iris ratio was equal to 0.7, the resulting scaled value was high and close to 1. This indicates that if the other features of the same eye image also result in high values in the [0, 1] range, the eye status is likely to be classified as high IOP.

Tables 5 and 6 show the test-phase confusion matrices of the proposed framework for the neural network (NN) and the SVM, respectively. Each table is split according to the status of eye pressure (normal, high pressure). First, the data was shuffled; then, 65% of the eye images were taken randomly for the training phase, 25% for the testing phase, and 10% for validation. The technique was run at least 10 times, and the average values were recorded. We have shown the accuracy of the classifier, when properly trained and validated, for identifying people with high IOP using the five features from frontal eye images. In this work, the NN was adopted as it provided better accuracy, and it is hence the focal classifier.

Table 5 Test confusion matrix for neural network
Table 6 Test confusion matrix for SVM

There are 200 images in the database that correspond to normal eye pressure. The 65% training set consists of 130 random images representing normal eye pressure and 130 random images representing high eye pressure. The 25% testing set consists of 50 random normal eye pressure images and 50 random high eye pressure images. The 10% validation set consists of 20 random images from each class. The proposed framework using the NN detected 49 normal eye images as normal pressure, while 1 normal eye pressure image was detected as high eye pressure; the accuracy for normal eye pressure is thus 98.0%. The second column represents the high eye pressure cases, with 50 images in the test phase corresponding to high eye pressure. The proposed framework detected 3 high eye pressure images as normal pressure and 47 as high eye pressure, so the accuracy for high eye pressure is 94.0%. As shown in the confusion matrix table, the overall accuracy (Acc.) of the proposed framework is 95.0%, and 5.0% corresponds to the overall error (Err.).

The performance of a classifier can be determined by computing the accuracy, sensitivity, and specificity using TP, FP, FN, and TN values, where TP refers to true positives, TN is true negatives, FP is false positives, and FN is false negatives. The equations of accuracy, sensitivity, and specificity are shown below [34,35,36]:

$$ \mathrm{Accuracy}=\frac{\mathrm{TP}+\mathrm{TN}}{\mathrm{TP}+\mathrm{FP}+\mathrm{TN}+\mathrm{FN}} $$
(8)
$$ \mathrm{Sensitivity}=\frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FN}} $$
(9)
$$ \mathrm{Specificity}=\frac{\mathrm{TN}}{\mathrm{TN}+\mathrm{FP}} $$
(10)

According to Eqs. 8, 9, and 10, the accuracy value is 0.95, the sensitivity value for the proposed framework is 0.95, and the specificity value is 0.97.
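Continuing the sketch from the classification section, Eqs. (8)-(10) can be computed from the test-set predictions as follows, treating high IOP as the positive class (the variable names predicted, T, and tr carry over from the earlier sketch and are assumptions, not the authors' code):

```matlab
yTest = T(tr.testInd);                   % ground-truth labels of the test images
TP = sum( predicted & yTest == 1);       % high IOP correctly detected
TN = sum(~predicted & yTest == 0);       % normal IOP correctly detected
FP = sum( predicted & yTest == 0);
FN = sum(~predicted & yTest == 1);

accuracy    = (TP + TN) / (TP + FP + TN + FN);
sensitivity = TP / (TP + FN);
specificity = TN / (TN + FP);
```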

For further analysis, Fig. 14 shows the correlation between the extracted frontal eye features and the IOP values in millimeters of mercury for all participants. The x-axis represents the feature value: pupil/iris ratio in part (a), RAP in part (b), MRL in part (c), contour height in part (d), and contour area in part (e). The y-axis represents the actual IOP value in mmHg that corresponds to each eye with the given features. As observed, the pupil/iris ratio, RAP, and MRL features are directly proportional to the IOP values, while the sclera contour features (height and area) are inversely proportional to IOP. Curve-fitted graphs of the IOP value versus each feature, obtained using exponential regression models (displayed as "Expon."), are also shown in each of the five parts of Fig. 14.

Fig. 14 Correlation between the five features and IOP in millimeters of mercury. a Pupil/iris ratio feature. b Red area percentage feature. c Mean redness level feature. d Contour height feature. e Contour area feature

6 Discussion

Despite the promising results of the proposed framework, some limitations may exist. The efficiency of the proposed features and the suitability of the sample size are investigated further below.

Depending on the lighting conditions, the pupil/iris ratio may vary from one image capture to another for the same subject. Furthermore, based on Fig. 14, RAP may appear to be less effective in the classification process. Therefore, another classifier was applied and tested using only the MRL, contour height, and contour area features, as shown in Table 7 (without the pupil/iris ratio and RAP features). As seen from Tables 5 and 7, the five-feature classifier outperforms the one trained with only three features.

Table 7 Test confusion matrix for neural network with three features (MRL, contour area, and contour height)

Moreover, to make sure that the utilized sample size is sufficient for testing, a statistical power analysis is applied to confirm the accuracy claims. Statistical power analysis is performed with the aim of estimating the minimum sample size required for the experiment.

To determine the appropriate sample size, or to justify a proposed sample size, one needs to know the following factors [37]:

  1. Level of significance (p)

  2. Effect size (d)

When a statistical power level of 0.80 is considered from Table 8, with the t test on means applied to a large sample, the corresponding effect size is denoted d.

Table 8 Cohen table of statistical power analysis

In this work, with an anticipated effect size of d = 0.80, a desired statistical power level of 0.80, and a probability level of 0.05, using the t test on means:

$$ \mathrm{Minimum\ sample\ size}\ (n)=\frac{N\times p\left(1-p\right)}{\left(N-1\right)\times \left({d}^2/{z}^2\right)+p\left(1-p\right)} $$
(11)

where N = 400, p = 80%, d = 5%, and z = 1.96, the sample size (n) can be calculated as:

$$ n=\frac{400\times 0.8\times \left(1-0.8\right)}{\left(400-1\right)\times \left({0.05}^2/{1.96}^2\right)+0.8\times 0.2}\approx 152 $$

The equation shows that the minimum required sample size is 152. The sample size used in this work is 400 images, which is therefore more than sufficient to support the accuracy claims.
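As a quick numerical check of Eq. (11) with the values stated above:

```matlab
N = 400;  p = 0.80;  d = 0.05;  z = 1.96;
n = (N * p * (1 - p)) / ((N - 1) * (d^2 / z^2) + p * (1 - p));
fprintf('Minimum sample size: %.1f\n', n);   % about 152, matching the value reported above
```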

In addition, to date there is no publicly available dataset of frontal eye images annotated with IOP that researchers in the field can work on. Gaining access to, or creating, a much larger and more comprehensive database of frontal eye images from diverse populations and conditions, and investigating the efficiency and robustness of the proposed work on it, is a future plan of our research. Nevertheless, this research provides preliminary findings on the relationship between frontal eye image features and IOP using a reasonably sized dataset and opens avenues for further investigation.

To check the robustness and efficiency of the proposed framework, a test was carried out on over 100 additional frontal eye images from diverse populations (including various races and age ranges) with normal IOP but diagnosed with eye diseases or conditions other than high IOP (e.g., cataract, eye redness, and trauma) [38, 39]. Since the images were taken in bright lighting environments, the system was able to extract the five features, and as shown in Fig. 15, the tested samples were accurately classified as normal IOP. These results also show that the proposed framework can perform reliably on frontal eye images captured at different resolutions.

Fig. 15 Examples of normal IOP from additional frontal eye images from various races and ages with eye diseases/conditions other than high IOP (e.g., cataract, eye redness, and trauma)

7 Conclusions

In this paper, we have proposed a novel automated, non-contact, and non-invasive framework contributing to smart healthcare by analyzing frontal eye images to help in the early detection of high-risk IOP. Image processing and machine learning techniques were used to assist in detecting high-risk eye pressure cases.

The dataset used in this study included 200 normal eye pressure cases and 200 cases with high eye pressure. The proposed framework was implemented in MATLAB 2013a. Five features (pupil/iris ratio, MRL, RAP, and two sclera contour features: area and height) were extracted, and then, a neural network classifier was applied to train and test the images.

The proposed framework produced evidence of the relationship between the five extracted features and IOP, which had not previously been investigated through automated image processing and machine learning techniques on frontal eye images. This research was built on top of our preliminary findings in [40, 41] to assist clinicians and patients in the early screening of IOP risk. The scaled neural network computations and classification results provided by our framework correlate with the IOP levels and the ground truth of the eye images with an accuracy of 96%.

As a future direction, further analysis will be performed to optimize the framework in terms of robustness and efficiency and to investigate porting this work to mobile devices such as smartphones, making it readily accessible to everyone. The framework can then be used to check the patient's IOP status (normal or high) over time, and the images and results can be registered as a profile for each patient to identify whether risky elevations of IOP have occurred.

Moreover, work is in progress to further optimize the framework for eye images taken from different angles. The results of this paper provide a proof of concept based on a reasonably sized dataset of images captured at a particular resolution and in a particular environment. In future work, more frontal eye images, including those from participants of several races, will be added to the image database. Additional tests and analysis will also be conducted so that the framework can differentiate between high IOP and other eye diseases such as cataract and redness. Moreover, many-core processors can be used to enhance the efficiency of the proposed framework [42, 43].

Notes

  1. IRB approval has been obtained at Princess Basma Hospital for the human subject samples. The authors formally requested access to the dataset.

Abbreviations

Acc.:

Accuracy

CHT:

Circular Hough transform

CLS:

Contact lens sensors

DB:

Database

Err.:

Error

FEA:

Finite element analysis

FN:

False negatives

FP:

False positives

GEM:

Generalized expectation maximization

IOP:

Intraocular pressure

MRL:

Mean redness level

NN:

Neural network

RAP:

Red area percentage

STD:

Standard deviation

SVM:

Support vector machine

TM:

Trabecular meshwork

TN:

True negatives

TP:

True positives

References

  1. A.A. Salam, M.U. Akram, K. Wazir, S.M. Anwar, M. Majid, in 2015 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT). Autonomous Glaucoma detection from fundus image using cup to disc ratio and hybrid features (IEEE, Abu Dhabi, 2015), pp. 370–374.

  2. M.K. Dutta, A.K. Mourya, A. Singh, M. Parthasarathi, R. Burget, K. Riha, in Medical Imaging, m-Health and Emerging Communication Systems (MedCom), 2014 International Conference on. Glaucoma detection by segmenting the super pixels from fundus colour retinal images (IEEE, Greater Noida, 2014), pp. 86–90.

  3. A.A. Salam, M.U. Akram, K. Wazir, S.M. Anwar, in 2015 IEEE International Conference on Imaging Systems and Techniques (IST). A review analysis on early Glaucoma detection using structural features (IEEE, Macau, 2015), pp. 1–6.

  4. W.W.K. Damon, J. Liu, T.N. Meng, Y. Fengshou, W.T. Yin, in 2012 9th IEEE International Symposium on Biomedical Imaging (ISBI). Automatic detection of the optic cup using vessel kinking in digital retinal fundus images (IEEE, Barcelona, 2012), pp. 1647–1650.

  5. V. Kinsey, Comparative chemistry of aqueous humor in posterior and anterior chambers of rabbit eye: its physiologic significance. AMA Arch. Ophthalmol. 50(4), 401–417 (1953).


  6. G. Manik, G. Renata, K. Richard, K. Sanjoy, Aqueous humor dynamics: a review. Open Ophthalmol. J. 52, 59 (2010).


  7. T. Seiler, J. Wollensak, The resistance of the trabecular meshwork to aqueous humor outflow. Graefes Arch. Clin. Exp. Ophthalmol. 223(2), 88-91 (1985).


  8. A. Darlene et al., Ocular Periphery and Disorders (University of Alabama, Birmingham, 2010).


  9. F. Mabuchi, K. Yoshimura, K. Kashiwagi, K. Shioe, Z. Yamagata, S. Kanba, H. Iijima, S. Tsukahara, High prevalence of anxiety and depression in patients with primary open-angle glaucoma. J. Glaucoma. 17(7), 552-557 (2008).


  10. M. Schwartz, Vaccination for glaucoma: dream or reality? Brain Res. Bull. 62(6), 481–484 (2004).


  11. Five Common Glaucoma Tests. (Glaucoma Research Foundation), https://www.glaucoma.org/glaucoma/diagnostic-tests.php. Accessed 17 Mar 2016.

  12. A. Mariakakis, E. Wang, S. Patel, J.C. Wen, in 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). A smartphone-based system for assessing intraocular pressure (IEEE, Orlando, 2016), pp. 4353–4356.

  13. A. Posner, Modified conversion tables for the Maklakov tonometer. Eye. Ear. Nose Throat Mon. 41, 638–644 (1962).


  14. C. Gisler, A. Ridi, M. Fauquex, D. Genoud, J. Hennebert, in 2014 6th International Conference of Soft Computing and Pattern Recognition (SoCPaR). Towards glaucoma detection using intraocular pressure monitoring (IEEE, Tunis, 2014), pp. 255–260.

  15. C.W. Hsu, C.C. Chang, C.J. Lin, A practical guide to support vector classification (Taipei, 2003) pp. 1-16.

  16. M. Shahiri-Tabarestani, B.A. Ganji, R. Sabbaghi-Nadooshan, in 2012 16th IEEE Mediterranean Electrotechnical Conference. Design and simulation of new micro-electromechanical pressure sensor for measuring intraocular pressure (IEEE, Yasmine Hammamet, 2012), pp. 208–211.

  17. S. Yousefi et al., Learning from data: recognizing glaucomatous defect patterns and detecting progression from visual field measurements. IEEE Trans. Biomed. Eng. 61(7), 2112–2124 (2014).


  18. S. Yousefi et al., Glaucoma progression detection using structural retinal nerve fiber layer measurements and functional visual field points. IEEE Trans. Biomed. Eng. 61(4), 1143–1154 (2014).


  19. H. Jia, Y. Zhang, W. Wang, J. Xu, in IEEE 9th International Conference on High Performance Computing and Communication, IEEE 14th International Conference on Embedded Software and Systems (HPCC-ICESS). Accelerating Viola-Jones face detection algorithm on GPUs (2012), pp. 396–403.


  20. A. Thompson, The cascading Haar wavelet algorithm for computing the Walsh–Hadamard transform. IEEE Signal Process. Lett. 24(7), 1020–1023 (2017). https://doi.org/10.1109/LSP.2017.2705247.


  21. J.J. Chen, C.R. Su, W.E.L. Grimson, J.L. Liu, D.H. Shiue, Object segmentation of database images by dual multiscale morphological reconstructions and retrieval applications. IEEE Trans. Image Process. 21(2), 828–843 (2012).


  22. J. Raheja, G. Sahu, Pellet size distribution using circular Hough transform in Simulink. Am. J. Signal Process. 2(6), 158–161 (2012).


  23. N. Ramakrishnan, M. Wu, S.K. Lam, T. Srikanthan, in Adaptive Hardware and Systems (AHS), 2014 NASA/ESA Conference on. Automated thresholding for low-complexity corner detection (IEEE, Leicester, 2014), pp. 97–103.

  24. L. Yuan, X. Xu, in 2015 4th International Conference on Advanced Information Technology and Sensor Application (AITS). Adaptive image edge detection algorithm based on canny operator (IEEE, Harbin, 2015), pp. 28–31.

  25. M. Basu, Gaussian-based edge-detection methods-a survey. IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. 32(3), 252–260 (2002).


  26. A. Neubeck, L. Van Gool, in 18th International Conference on Pattern Recognition (ICPR'06). Efficient non-maximum suppression (IEEE, Hong Kong, 2006), pp. 850–855.

  27. R. Medina-Carnicer, A. Carmona-Poyato, R. MuÑoz-Salinas, F.J. Madrid-Cuevas, Determining hysteresis thresholds for edge detection by combining the advantages and disadvantages of thresholding methods. IEEE Trans. Image Process. 19(1), 165–173 (2010).


  28. P. Meer, B. Georgescu, Edge detection with embedded confidence. IEEE Trans. Pattern Anal. Mach. Intell. 23(12), 1351–1365 (2001).


  29. X.Y. Ma, D. Zhu, J. Zou, W.J. Zhang, Y.L. Cao, Comparison of ultrasound biomicroscopy and spectral-domain anterior segment optical coherence tomography in evaluation of anterior segment after laser peripheral iridotomy. Int. J. Ophthalmol. 417, 423 (2016).


  30. T. Chan, L. Vese, An active contour model without edges, in International Conference on Scale-Space Theories in Computer Vision (Springer, Berlin, Heidelberg, 1999), pp. 141–151.


  31. C. Yan, H. Xie, D. Yang, J. Yin, Y. Zhang, Q. Dai, Supervised hash coding with deep neural network for environment perception of intelligent vehicles. IEEE Trans. Intell. Transp. Syst. 19(1), 284–295 (2018).


  32. C. Yan, H. Xie, S. Liu, J. Yin, Y. Zhang, Q. Dai, Effective Uyghur language text detection in complex background images for traffic prompt identification. IEEE Trans. Intell. Transp. Syst. 19(1), 220–229 (2018).


  33. H. Gong, Clinical Methods: The History, Physical, and Laboratory Examinations, 3rd edn. (Butterworth Publishers, Boston, 1990).

  34. B. Sen, M. Peker, A. Çavuşoğlu, F.V. Çelebi, A comparative study on classification of sleep stage based on EEG signals using feature selection and classification algorithms. J. Med. Syst. 1, 21 (2014).


  35. A.R. Hassan, M.I.H. Bhuiyan, Computer-aided sleep staging using complete ensemble empirical mode decomposition with adaptive noise and bootstrap aggregating. Biomed. Signal Process. Control 1, 10 (2016).


  36. L. Fraiwan, K. Lweesy, N. Khasawneh, H. Wenz, H. Dickhaus, Automated sleep stage identification system based on time–frequency analysis of a single EEG channel and random forest classifier. Comput. Methods Progr. Biomed. 10, 19 (2012).


  37. J. Cohen, Statistical Power Analysis for the Behavioral Sciences, rev. edn. (Lawrence Erlbaum Associates, 1977).


  38. Centre vision bretagne (2018). Cataract eye, Retrieved from http://www.centrevisionbretagne.com/cataracte/cataracte.

  39. Effective Home Remedies for Common Eye Problems (2018). Red eye sclera, Retrieved from https://hubpages.com/health/Home-Remedies-for-Common-Eye-Problems.

  40. M. Aloudat, M. Faezipour, in Systems, Applications and Technology Conference (LISAT), 2015 IEEE Long Island. Determining the thickness of the liquid on the cornea for open and closed angle Glaucoma using Haar filter (2015), pp. 1–6.


  41. M. Aloudat, M. Faezipour, in Electro/Information Technology (EIT), 2015 IEEE International Conference on. Histogram analysis for automatic blood vessels detection: first step of IOP (2015), pp. 146–151.


  42. C. Yan et al., A highly parallel framework for HEVC coding unit partitioning tree decision on many-core processors. IEEE Signal Process. Lett. 21(5), 573–576 (2014).


  43. C. Yan et al., Efficient parallel framework for HEVC motion estimation on many-core processors. IEEE Trans. Circuits Syst. Video Technol. 24(12), 2077–2089 (2014).



Acknowledgements

The authors would like to thank the ophthalmologists Dr. Mohannad Albdour and Dr. Tariq Almunizel for their help and consultation regarding this research.

Funding

The initial stage of this research was funded by the University of Bridgeport Seed Money Grant UB-SMG-2015, for the January 2015–December 2015 duration.

Availability of data and materials

The image dataset that supports the findings of this study is available from Princess Basma Hospital in Jordan, but restrictions apply to the availability of these data, which were used under license for the current study, and so they are not publicly available. The data are, however, available from the authors upon reasonable request and with permission of Princess Basma Hospital. The code of the proposed image processing and machine learning framework will be available upon reasonable request after MA's doctoral dissertation (expected in Fall 2018).

Author information

Authors and Affiliations

Authors

Contributions

MA, AE, and MF performed the simulations and machine learning experiments. MF supervised this research. MA and MF were in contact with the ophthalmologist consultants to verify the achieved results. MA, AE, and MF wrote the paper. All authors have read and approved the final manuscript.

Corresponding author

Correspondence to Miad Faezipour.

Ethics declarations

Ethics approval and consent to participate

Institutional Review Board (IRB) approval along with consent to participate forms has been obtained at Princess Basma Hospital for the human subject image samples, 11/27/2016.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Al-Oudat, M., Faezipour, M. & El-Sayed, A. A smart intraocular pressure risk assessment framework using frontal eye image analysis. J Image Video Proc. 2018, 90 (2018). https://doi.org/10.1186/s13640-018-0334-2
