
Automatic image-based segmentation of the heart from CT scans

Abstract

Segmentation of the heart is routinely required in clinical practice to compute functional parameters such as ejection fraction, cardiac output, peak ejection rate, or filling rate. Because of the time required, manual delineation is typically limited to the left ventricle at the end-diastolic and end-systolic phases, which is insufficient for computing some of these parameters (e.g., peak ejection rate or filling rate). Common computer-aided (semi-)automated approaches to the segmentation task are computationally demanding and frequently need an initialization step. This work addresses these problems by providing an image-driven method for the accurate segmentation of the heart from computed tomography scans. The resulting algorithm is fast and fully automatic (even the region of interest is delimited without human intervention). The proposed methodology relies on image processing and analysis techniques (such as multi-thresholding based on statistical local and global parameters, mathematical morphology, and image filtering) and also on prior knowledge about the cardiac structures involved. Segmentation results are validated by comparison with manually delineated ground truth, both qualitatively (no noticeable errors found after visual inspection) and quantitatively (mass overlap above 90%).

1 Introduction

Cardiovascular disease is the leading direct or contributing cause of non-accidental deaths in the world[1]. As a consequence, current research is particularly focused on its early diagnosis and therapy. An example of this effort is the delineation of the left ventricle (LV) of the heart, which is an important tool in the assessment of cardiac functional parameters such as ejection fraction, myocardial mass, or stroke volume. Fully automatic and reliable segmentation methods are desirable for the quantitative and large-scale analysis of these clinical parameters, because the traditional practice of manually delineating the heart’s ventricles is subjective, prone to errors, tedious, hardly reproducible, and very time-consuming (typically between 1 and 2 h per cardiac study), thus exhausting the radiologist’s capacity and resources. Even though the most relevant medical information can be extracted from the left heart, a segmentation of the whole heart (and possibly also the great vessels) can be useful for building a model of the organ before surgery or for facilitating diagnosis[2, 3].

Compared with other imaging modalities (such as ultrasound and magnetic resonance imaging), cardiac computed tomography (CT) provides detailed anatomical information about the heart chambers, great vessels, and coronary arteries[4, 5]. In fact, CT is often preferred by diagnosticians because it provides more accurate anatomical information about the visualized structures, thanks to its higher signal-to-noise ratio and better spatial resolution. Although computed tomography was at one time almost absent from cardiovascular examinations, recent technological advances in X-ray tubes, detectors, and reconstruction algorithms, along with the use of retrospectively gated spiral scanning, have opened the door to new diagnostic opportunities[6], enabling the non-invasive derivation of the aforementioned functional parameters[7, 8]. Computed tomography has therefore become an important imaging modality for diagnosing cardiovascular diseases[9].

In the recent literature, one can find many papers which tackle the (semi-)automated segmentation of the heart from CT or MRI scans. These works follow different strategies for approaching the segmentation task, including image-driven algorithms[10-13], probabilistic atlases[14, 15], fuzzy clustering[16], deformable models[17-19], neural networks[20], active appearance models[21, 22], anatomical landmarks[23], or level sets and their variations[24, 25]. A comprehensive review of techniques commonly used in cardiac image segmentation can be found in Kang et al.[5]. Nevertheless, many published methods have disadvantages for routine clinical practice: they are computationally demanding[6, 14, 16, 22], potentially unstable for subjects with pathology[25, 26], limited to the left ventricle[11, 24, 25, 27], require additional images to be acquired[28, 29], or need complex shape and/or gray-level appearance models constructed (or ‘learned’) from many manually segmented images, which is labor intensive and of limited use due to both anatomical and image contrast inconsistencies[14, 22, 26-28]. Moreover, most prior work has been devoted to segmenting cardiac data given a reasonable initialization[25, 30] or an accurate manual segmentation of a subset of the image data[31, 32]. For full automation, and in order to eliminate inter- and intra-observer variability, the initialization should also be automatic.

In this work, we propose an efficient image-driven method for the automatic segmentation of the heart from CT scans. The methodology relies on image processing techniques such as multi-thresholding based on statistical local and global features, mathematical morphology, and image filtering, but it also exploits the available prior knowledge about the cardiac structures involved. The development of such a segmentation system comprises two major tasks: first, a pre-processing stage in which the region of interest (ROI) is delimited and the statistical parameters are computed; and then, the segmentation procedure itself, which makes use of the data obtained during the previous stage. Our fully automatic approach improves on the state of the art in both computation speed and simplicity of implementation.

The paper is organized as follows. Section 2 presents the proposed methodology: Subsections 2.1 and 2.2 detail the pre-processing and segmentation stages, respectively, and Subsection 2.3 deals with the extraction of the left ventricle from the outcome of the previous segmentation. Next, the validity of our approach is tested through the segmentation of different cardiac CT scans and the subsequent comparison of the results with manually delineated ground truth. Finally, the conclusions close the paper.

2 Proposed methodology

As commented before, the segmentation algorithm is based on the information available about the cardiac structures and tissues. This knowledge allows us to separate the region of interest from the rest of the image (e.g., the bones of the rib cage) and to obtain the statistically derived thresholds which are needed to define the binary masks used along the procedure. The following subsection explains how to calculate these thresholds, which depend on the distribution of the image histogram. An important feature of the proposed algorithm is that it uses the same type of thresholds for all the slices of the scan, rather than an ad hoc set for each image.

2.1 Pre-processing stage

In this stage, all the variables needed to perform the segmentation (statistical parameters, position of the spine, etc.) are determined, and a preliminary cleaning of the images (which basically selects the ROI) is performed.

2.1.1 Statistical parameters

Let us consider the volume resulting from the CT scan as a scalar function f(x,y,z), where x = 1,…,N, y = 1,…,M, and z = 1,…,P, with N, M, and P being the number of discrete elements (voxels) in each spatial dimension. For each axial slice (i.e., for a fixed value k of the z coordinate), the following parameters are computed:

a) Mean value of the intensity of the pixels, μ(k):

$$\mu(k) = \frac{1}{NM}\sum_{x=1}^{N}\sum_{y=1}^{M} f(x,y,k) \qquad (1)$$

This value allows us to automatically separate the air and the background from the rest of the image. Indeed, the histogram of an image resulting from a standard CT scan always presents five to seven well-delimited distributions of gray levels. The lowest intensity levels are related to the air and the highest to the bones. Consequently, the first (i.e., leftmost) and second peaks of the histogram correspond to the image background and the air in the lungs, respectively. This can be seen in Figure 1, where the image is thresholded with an intensity value lying in the valley which separates the two leftmost maxima from the remaining peaks (five, in this example). This value is the parameter μ(k).
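For illustration, the following minimal sketch (written in Python with NumPy rather than the MATLAB environment mentioned in Section 3; the function and variable names are ours) thresholds a single axial slice with its mean intensity μ(k):

```python
import numpy as np

def threshold_with_slice_mean(volume, k):
    """Binary mask of axial slice k using its mean intensity mu(k) as threshold.

    volume: 3D array f(x, y, z) of CT intensities, shape (N, M, P).
    Background and lung air form the two leftmost histogram peaks and fall
    below mu(k); brighter tissues (muscle, blood, bone) are kept.
    """
    slice_k = volume[:, :, k].astype(np.float64)
    mu_k = slice_k.mean()                        # Equation (1)
    mask = slice_k > mu_k                        # binary mask (cf. Figure 1c)
    masked_slice = np.where(mask, slice_k, 0.0)  # masked slice (cf. Figure 1d)
    return mu_k, mask, masked_slice
```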

Figure 1

Example of thresholding with μ(k). (a) Original slice, (b) histogram, (c) binary mask computed by thresholding with μ(k), and (d) masked slice.

b) Mean intensity value of the pixels in the k-th slice with an intensity level higher than μ(k), μsup(k):

$$\mu_{\mathrm{sup}}(k) = \frac{1}{R_k}\sum_{i=1}^{R_k} f(X_i,Y_i,k) \qquad (2)$$

where Rk is the number of pixels (Xi, Yi) in the k-th slice which satisfy f(Xi, Yi, k) > μ(k). This value is used when computing the global mean μglobal, which is the parameter that the algorithm requires in the segmentation stage in order to separate cardiac structures from the rest of the image. Moreover, it is also used for obtaining a binary mask which determines the position of the spine in each image. The gray level represented by the parameter μsup(k) belongs to the interval of intensities which includes deoxygenated blood and bone marrow. Hence, masks obtained from this parameter contain the outer layer of the bones and the tissues where oxygenated blood flows, whose intensity levels are higher than μsup(k). However, as shown in Figure 2, this parameter is not a suitable threshold for segmenting cardiac structures, since the resulting mask does not include some tissues where deoxygenated blood flows, such as the right atrium and the right ventricle. Therefore, in order to accomplish our goal, a lower threshold is needed. More precisely, the required threshold has to be located in the interval of gray levels which corresponds to muscular tissues.

Figure 2

Example of thresholding with μsup(k). (a) Original slice, (b) histogram, (c) binary mask computed by thresholding with μsup(k), and (d) masked slice.

c) Standard deviation of the intensities of the pixels in the k-th slice with an intensity level higher than μ(k), σ(k):

$$\sigma(k) = \sqrt{\frac{1}{R_k-1}\sum_{i=1}^{R_k}\bigl(f(X_i,Y_i,k)-\mu_{\mathrm{sup}}(k)\bigr)^2} \qquad (3)$$

The threshold μsup(k) + σ(k) allows us to obtain a binary mask which is used later in the segmentation stage in order to locate the descending aorta in all the slices of the volumetric scan. The resulting gray level is useful for separating the outer layer of the bones and the structures where oxygenated blood flows from the rest of the image, as shown in Figure 3.

Figure 3

Example of thresholding with μsup(k) + σ(k). (a) Original slice, (b) histogram, (c) binary mask computed by thresholding with μsup(k) + σ(k), and (d) masked slice.

d) Mean of the parameter μsup(k) minus the standard deviation of μsup(k) (in the following, the global mean), μglobal:

$$\mu_{\mathrm{global}} = \frac{1}{P}\sum_{k=1}^{P}\mu_{\mathrm{sup}}(k) - \sqrt{\frac{1}{P-1}\sum_{i=1}^{P}\Bigl(\mu_{\mathrm{sup}}(i)-\frac{1}{P}\sum_{k=1}^{P}\mu_{\mathrm{sup}}(k)\Bigr)^2} \qquad (4)$$

This is a global parameter, since it depends on the whole CT scan. It belongs to the interval of intensities which characterize muscular tissues. The reason for not using the mean of μsup(k) directly as a threshold is that this value lies on the edge between two distributions, one representing muscular tissues and the other representing deoxygenated blood, thus occasionally causing an overfitting to the structures of interest and, consequently, the appearance of holes in the mask. In order to avoid this problem, a less restrictive threshold, i.e., μglobal, is used instead. Figure 4 shows the difference between thresholding with μglobal and with μsup(k). Nevertheless, the resulting binary mask is still inadequate for separating the structures of interest, since pulmonary veins and part of the bones remain after the thresholding. This is addressed further in Section 2.2.
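The per-slice and global statistics of Equations (1) to (4) could be computed, for instance, as in the following sketch (Python/NumPy, our own naming; not the original implementation):

```python
import numpy as np

def statistical_parameters(volume):
    """Per-slice and global statistics used as thresholds (Equations 1 to 4).

    volume: 3D array f(x, y, z) of CT intensities, shape (N, M, P).
    Returns arrays mu, mu_sup, sigma of length P and the scalar mu_global.
    """
    P = volume.shape[2]
    mu = np.empty(P)
    mu_sup = np.empty(P)
    sigma = np.empty(P)
    for k in range(P):
        slice_k = volume[:, :, k].astype(np.float64)
        mu[k] = slice_k.mean()                  # Equation (1)
        bright = slice_k[slice_k > mu[k]]       # pixels brighter than mu(k)
        mu_sup[k] = bright.mean()               # Equation (2)
        sigma[k] = bright.std(ddof=1)           # Equation (3)
    # Equation (4): mean of mu_sup(k) minus its standard deviation
    mu_global = mu_sup.mean() - mu_sup.std(ddof=1)
    return mu, mu_sup, sigma, mu_global
```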

Figure 4

Example of thresholding with μglobal. (a) Original slice, (b) histogram, (c) binary mask computed by thresholding with μsup(k), (d) original slice masked with (c), (e) binary mask computed by thresholding with μglobal, and (f) original slice masked with (e).

2.1.2 Position of the spine and the aorta

Once the statistical parameters are computed, a later step (performed in the segmentation stage) is to remove the spine from the dataset. To do so, we exploit the fact that both the spine and the descending aorta are present in all the slices of the (axial) scan. Firstly, P binary masks are obtained by thresholding each CT slice with its corresponding parameter μsup(k). If the area common to all these masks is computed (e.g., by means of a logical AND), the resulting pixels with a value of 1 certainly belong to either the spine or the aorta. More precisely, the common object with the highest number of pixels should belong to the spine. Nevertheless, it is possible that the pixels which belong to the spine are not connected, in which case the object with the highest number of pixels may actually represent the aorta, which would then be falsely labeled as spine. In order to avoid such an error, a morphological dilation with a horizontal structuring element is performed beforehand, as shown in Figure 5b. The object with the largest area after the dilation is used as the mask for selecting the spine in all the slices.
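A possible sketch of this step (Python with SciPy, reusing the statistics from the previous sketch; the width of the horizontal structuring element is our assumption, since the paper does not state it):

```python
import numpy as np
from scipy import ndimage

def spine_seed_mask(volume, mu_sup):
    """Pixels that certainly belong to the spine (Subsection 2.1.2).

    volume: 3D array of shape (N, M, P); mu_sup: per-slice thresholds.
    """
    P = volume.shape[2]
    # Logical AND of all per-slice masks thresholded with mu_sup(k)
    common = np.ones(volume.shape[:2], dtype=bool)
    for k in range(P):
        common &= volume[:, :, k] > mu_sup[k]
    # Horizontal dilation so that loosely connected spine pixels merge
    struct = np.ones((1, 15), dtype=bool)  # horizontal structuring element (width assumed)
    dilated = ndimage.binary_dilation(common, structure=struct)
    # Keep only the largest connected object of the dilated common area
    labels, n = ndimage.label(dilated)
    if n == 0:
        return np.zeros_like(common)
    sizes = ndimage.sum(dilated, labels, index=range(1, n + 1))
    largest = labels == (np.argmax(sizes) + 1)
    # Mask the original common area with it: these are the spine pixels (Figure 5d)
    return common & largest
```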

Figure 5

Position of the spine and the aorta. (a) Common area to all masks computed by thresholding with μsup(k), (b) morphological horizontal dilation of the common area, (c) object of highest area, (d) masked common area (i.e., pixels belonging to the spine), and (e) common area to all masks computed by thresholding with μsup(k) + σ(k) (i.e., pixels belonging to the aorta).

During the process of removing the spine, a portion of the descending aorta can also be incorrectly deleted (e.g., if it overlaps with the mask computed through the dilation of the common area). Therefore, it is necessary to locate the aorta beforehand in order to restore it after the deletion procedure. With this purpose, we first compute the area common to all the superimposed masks obtained by thresholding each slice with its corresponding value μsup(k) + σ(k). As explained in the previous subsection, the threshold μsup(k) + σ(k) allows us to select the structures where oxygenated blood flows: the aorta and the left atrium and ventricle. Among these structures, the only one which is common to all slices is the descending aorta. As shown in Figure 5e, the resulting image exclusively contains pixels belonging to the aorta, which will be used to select and restore the latter in the segmentation stage. It should be noted that the logical AND (Figure 5e) would likely result in an empty mask in cases of severe scoliosis or a tortuous aorta. In order to prevent such a problem, the algorithm includes a rigid registration stage, which finds the relative displacement (in pixels) between each binary mask and the following one. The P masks are then correctly aligned (i.e., shifted an integer number of pixels along the x- and/or y-axis) prior to the computation of the logical AND.
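The paper does not detail the registration procedure; the sketch below assumes a simple translation-only alignment, estimated from the center of mass of each mask, applied before the logical AND:

```python
import numpy as np
from scipy import ndimage

def aorta_seed_mask(volume, mu_sup, sigma):
    """Pixels common to all masks thresholded with mu_sup(k) + sigma(k) (Figure 5e).

    Each mask is shifted by an integer displacement towards the first one
    (estimated here from the centers of mass) so that a tortuous aorta or
    scoliosis does not empty the common area.
    """
    P = volume.shape[2]
    masks = [volume[:, :, k] > (mu_sup[k] + sigma[k]) for k in range(P)]
    ref_com = np.array(ndimage.center_of_mass(masks[0]))
    common = masks[0].copy()
    for k in range(1, P):
        if not masks[k].any():
            continue                                  # skip degenerate masks
        com = np.array(ndimage.center_of_mass(masks[k]))
        dy, dx = np.round(ref_com - com).astype(int)  # integer displacement
        shifted = np.roll(np.roll(masks[k], dy, axis=0), dx, axis=1)
        common &= shifted
    return common
```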

2.1.3 Automatic selection of the region of interest

This procedure determines, through the analysis of the columns of each image (considered as a matrix of size N × M), which regions have to be removed. For each image, M one-dimensional profiles (i.e., M arrays of N elements, corresponding to the M columns of the slice) are obtained from the binary mask computed by thresholding with μ(k); as commented before, this parameter is suitable for separating the air and the background from the rest of the image, as shown in Figure 1. Additionally, all the objects but the one with the highest number of pixels are removed after the thresholding, as shown in Figure 6c. Each profile (i.e., each column of the binary mask) consists of a number of ‘pulses’ of amplitude 1 (the number of pulses may vary from none to more than one), as shown in Figure 6d. These pulses represent the pixels with a value of 1 in the corresponding column of the binary mask. The proposed algorithm, which automatically selects the ROI depending on the number and width of the pulses appearing in each one-dimensional profile, is summarized in the following pseudo-code (a code sketch is given right after it):

Figure 6

Automatic selection of the ROI. (a) Original slice, (b) binary mask computed by thresholding with μ(k), (c) object of highest area (column #70 highlighted), (d) one-dimensional profile corresponding to the column #70, (e) outcome of the proposed algorithm, (f) object of highest area, and (g) masked slice (region of interest).

1. DO initialize the mean width: wmean = 0.1 * N

2. DO initialize the maximum width to be removed: wmax = 0.3 * N

3. FOR j = 1:M

DO compute the j-th one-dimensional profile

IF the width wj of the leftmost pulse of the j-th profile satisfies wj < wmax (i.e., the corresponding pixels belong to the rib cage)

THEN update the mean width wmean with the value wj and remove (i.e., set to 0) the upmost wj pixels with a value of 1 in the j-th column of the binary mask

ELSE remove the upmost wmean pixels with a value of 1 in the j-th column of the binary mask (i.e., remove only the pixels which belong to the rib cage, not the ones which belong to the heart)

4. IF after the processing there is more than one object in the resulting mask, THEN select the largest one and discard the rest.

An example of the results obtained with this procedure can be seen in Figure 6.
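A possible Python rendering of the above pseudo-code (the exact running update of wmean is our assumption, since the paper only states that the mean width is updated):

```python
import numpy as np
from scipy import ndimage

def select_roi(mask):
    """Column-wise trimming of the rib cage from a binary mask (Subsection 2.1.3).

    mask: 2D boolean array of shape (N, M), largest object of the mu(k) mask.
    """
    N, M = mask.shape
    roi = mask.copy()
    w_mean = int(0.1 * N)                       # step 1: initial mean width
    w_max = int(0.3 * N)                        # step 2: maximum width to remove
    widths = [w_mean]
    for j in range(M):                          # step 3: process every column
        ones = np.flatnonzero(roi[:, j])        # rows of the 1-pixels in column j
        if ones.size == 0:
            continue                            # no pulse in this column
        # Width of the leftmost (upmost) pulse: length of the first run of 1s
        run_end = ones[0]
        while run_end + 1 in ones:
            run_end += 1
        w_j = run_end - ones[0] + 1
        if w_j < w_max:                         # pulse belongs to the rib cage
            widths.append(w_j)
            w_mean = int(np.mean(widths))       # running mean width (assumed update)
            roi[ones[:w_j], j] = False          # remove the whole pulse
        else:                                   # pulse also covers the heart
            roi[ones[:w_mean], j] = False       # remove only the rib-cage pixels
    # Step 4: keep only the largest remaining object
    labels, n = ndimage.label(roi)
    if n > 1:
        sizes = ndimage.sum(roi, labels, index=range(1, n + 1))
        roi = labels == (np.argmax(sizes) + 1)
    return roi
```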

2.2 Segmentation stage

In this stage, the segmentation itself is performed, using for this purpose the data collected in Subsection 2.1: the local and global statistical parameters (which serve as thresholds), some pixels which belong to the spine, some pixels which belong to the descending aorta, and the particular region of interest to be processed in each slice of the scan. In the following, the sequential steps of the proposed segmentation algorithm (whose flowchart is shown in Figure 7) are detailed.

Figure 7

Flowchart of the segmentation stage.

2.2.1 Location of the aorta

This procedure consists of two tasks. Firstly, each one of the P slices of the scan is thresholded with its corresponding value μsup(k) + σ(k). Next, the objects which appear in the resulting binary mask are labeled; the object which contains the pixels extracted in the process described in Subsection 2.1.2 is the descending aorta in the k-th image. Figure 8 illustrates this procedure. The reason for locating the aorta is twofold: first, it is the only object of interest in the slices with too much liver (i.e., slices in which the liver takes up a large area), as shown in Figure 8d; second, since part or even the totality of the aorta may be deleted during the removal of the spine (as explained in Subsection 2.1.2), it is necessary to know the position of this artery in order to restore it at the end of the following procedure.
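A sketch of this selection step (Python/SciPy; function and argument names are ours):

```python
import numpy as np
from scipy import ndimage

def locate_aorta(slice_k, mu_sup_k, sigma_k, aorta_seed):
    """Select, in slice k, the connected object that contains the aorta seed.

    slice_k: 2D CT slice; aorta_seed: boolean mask of the seed pixels
    obtained in Subsection 2.1.2 (common area of the mu_sup + sigma masks).
    """
    mask = slice_k > (mu_sup_k + sigma_k)       # oxygenated blood and bone shell
    labels, _ = ndimage.label(mask)
    seed_labels = np.unique(labels[aorta_seed & mask])
    seed_labels = seed_labels[seed_labels > 0]  # ignore the background label 0
    # The object containing the seed pixels is the descending aorta (Figure 8c)
    return np.isin(labels, seed_labels)
```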

Figure 8

Location of the aorta. (a) Original slice, (b) binary mask computed by thresholding with μsup(k) + σ(k), (c) object which contains the pixels belonging to the aorta, (d) original slice with too much liver, (e) binary mask computed by thresholding with μsup(k) + σ(k), and (f) object which contains the pixels belonging to the aorta.

2.2.2 Deletion of the spine

This process consists of four steps. First, the P slices of the scan are thresholded with their corresponding values μsup(k), which isolates the bones and the tissues where oxygenated blood flows from the rest of the image. At this point, the objects of the resulting binary mask are labeled, and the spine is selected as the object which contains the pixels obtained by the process described in Subsection 2.1.2. Next, the binary mask defined by the spine is dilated with a horizontal structuring element, and the negative of the outcome is used as a mask for separating the cardiac structures from the posterior part of the chest wall (since the process described in Subsection 2.1.3 does not remove the lower part of the image). Finally, the descending aorta is added back, and the object in which it is contained is selected as the resulting mask. Figure 9 illustrates this procedure.
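A simplified sketch of these four steps (Python/SciPy; the size of the structuring element is our assumption):

```python
import numpy as np
from scipy import ndimage

def delete_spine(slice_k, roi_mask, mu_sup_k, spine_seed, aorta_obj):
    """Remove the spine from the ROI of slice k and restore the aorta.

    roi_mask: ROI of the slice (Subsection 2.1.3); spine_seed: spine pixels
    from Subsection 2.1.2; aorta_obj: aorta object located in this slice.
    """
    bones_and_blood = slice_k > mu_sup_k
    labels, _ = ndimage.label(bones_and_blood)
    spine_labels = np.unique(labels[spine_seed & bones_and_blood])
    spine = np.isin(labels, spine_labels[spine_labels > 0])
    # Negative of the horizontal dilation of the spine (Figure 9d)
    struct = np.ones((1, 25), dtype=bool)        # structuring element (width assumed)
    separator = ~ndimage.binary_dilation(spine, structure=struct)
    cleaned = roi_mask & separator               # cut off the posterior chest wall
    cleaned |= aorta_obj                         # restore the descending aorta (Figure 9h)
    # Keep the object that contains the aorta
    labels2, _ = ndimage.label(cleaned)
    keep = np.unique(labels2[aorta_obj & cleaned])
    return np.isin(labels2, keep[keep > 0])
```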

Figure 9

Deletion of the spine. (a) Original slice, (b) binary mask computed by thresholding with μsup(k), (c) object which contains the pixels belonging to the spine, (d) binary mask computed as the negative of the morphological dilation, (e) binary mask computed by thresholding with μ(k), (f) ROI before the deletion of the spine, (g) application of mask (d) to the ROI, (h) restoration of the aorta, and (i) masked slice.

2.2.3 Computation of the final mask

In order to segment the structures of interest (i.e., ventricles, atria, aorta, and vena cava), a threshold belonging to the interval of intensities which represents muscular tissues is needed. As explained in Subsection 2.1.1, this value is the parameter μglobal. Obviously, the use of μglobal as a threshold results in a binary mask which contains all the aforementioned structures, since the gray level of the cardiac muscles is lower than the gray level of the blood (either oxygenated or not). The bone marrow, which also has an intensity level higher than μglobal, does not appear in this final mask (shown in Figure 10b) because of the cleaning process previously performed (i.e., selection of the ROI and deletion of the spine).

Figure 10

Computation of the final mask. (a) Original slice, (b) binary mask computed by thresholding with μglobal, (c) objects with a size higher than amin, (d) common area of binary masks in the considered axial range, (e) final (i.e., post-processed) binary mask, and (f) segmented slice.

2.2.4 Post-processing of the final mask

As can be appreciated in Figure 10b, the outcome of the previous step still shows slight imperfections. Therefore, a post-processing of the binary mask is required. First, objects with a size smaller than the minimum area amin (chosen as amin ≤ min{N,M}, and set to 500 pixels for all CT scans considered in this paper) are removed; the size of the objects can be easily determined after a labeling and pixel-counting procedure. Next, objects with a size similar to that of the structures of interest but which do not represent cardiac tissues are also removed. For doing so, we exploit the fact that these undesirable objects are local, i.e., they only appear in a narrow range of slices along the z axis. For the k-th image, the algorithm computes the common area between the 2r + 1 binary masks from k - r to k + r, r being the axial range (a value of 5% of the number of slices P performs well in all experiments); these masks are the ones obtained through the application of the threshold μ(k). An object is removed from the mask unless its computed common area is greater than 30% of its actual area (i.e., 30% of the number of its pixels with a value of 1 in the k-th slice). Lastly, a morphological closing by reconstruction is carried out in order to fill the tiny holes that may appear in the final mask. Figure 10c,d,e,f displays the result of this post-processing stage.
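A simplified sketch of this post-processing (Python/SciPy); the closing by reconstruction is approximated here by a binary hole filling, and the 30% axial-range criterion is applied per object as described above:

```python
import numpy as np
from scipy import ndimage

def postprocess_mask(mask, neighbour_masks, a_min=500, min_overlap=0.30):
    """Clean the final mask of slice k (Subsection 2.2.4), simplified.

    mask: binary mask of slice k obtained with the mu_global threshold;
    neighbour_masks: the 2r+1 masks (thresholded with mu(k)) from k-r to k+r.
    """
    labels, n = ndimage.label(mask)
    out = np.zeros(mask.shape, dtype=bool)
    # Common area of the neighbouring masks along the axial range
    common = np.logical_and.reduce(neighbour_masks)
    for lab in range(1, n + 1):
        obj = labels == lab
        area = obj.sum()
        if area < a_min:
            continue                              # too small: discard
        if (obj & common).sum() <= min_overlap * area:
            continue                              # local along z: discard
        out |= obj
    # Approximation of the closing by reconstruction: fill the tiny holes
    return ndimage.binary_fill_holes(out)
```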

2.3 Left heart segmentation

As already commented in Section 1, the analysis of the LV is of great importance, since this structure supplies oxygenated blood to distant tissues through the aorta. This subsection illustrates how the left heart (i.e., left ventricle and left atrium) and the aorta can be extracted from the outcome of the methodology presented in Subsections 2.1 and 2.2. After the pre-processing and segmentation stages, the resulting images show a quasi-bimodal histogram (i.e., a histogram which consists of two main clusters of gray levels, corresponding to oxygenated and non-oxygenated blood), as shown in Figure 11c. This feature allows us to precisely segment the left heart by means of the Isodata algorithm[33], which provides an optimal result at a low computational cost if the two clusters of gray levels are nearly Gaussian distributions (an assumption which holds for virtually all CT scans). The particularization of the Isodata algorithm to our scenario is summarized in the following pseudo-code:

Figure 11

Left heart segmentation. (a) Original slice, (b) original slice masked with the final mask, (c) histogram of the masked image (threshold t2 is shown), (d) binary mask computed by thresholding with t2, and (e) segmented slice.

1. DO compute the initial threshold t1 as the mean gray level of the segmented slice

2. DO compute μ1 and μ2 as the mean gray levels of the two classes obtained after thresholding the segmented slice with t1

3. DO compute the new threshold t2 as the mean value of μ1 and μ2: t2 = (μ1 + μ2)/2

4. IF t1 and t2 differ by less than 1%

THEN go to 5

ELSE t1 = t2, go to 2

5. RETURN t2

Once the left heart is separated from the right heart, the resulting mask has to be post-processed as explained in Subsection 2.2.4 (i.e., small objects are removed, contours are smoothed, and holes are filled). The outcome of this procedure is shown in Figure 11.
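A compact implementation of this particularized Isodata thresholding might look as follows (Python/NumPy; the gray levels inside the final mask are assumed to be passed as a flat array):

```python
import numpy as np

def isodata_threshold(pixels, tol=0.01):
    """Isodata (Ridler-Calvard) threshold for the left/right heart split.

    pixels: 1D array of the gray levels inside the final heart mask
    (quasi-bimodal: oxygenated vs. non-oxygenated blood).
    """
    t1 = pixels.mean()                        # step 1: initial threshold
    while True:
        low, high = pixels[pixels <= t1], pixels[pixels > t1]
        if low.size == 0 or high.size == 0:   # degenerate histogram
            return t1
        mu1, mu2 = low.mean(), high.mean()    # step 2: class means
        t2 = 0.5 * (mu1 + mu2)                # step 3: new threshold
        if abs(t2 - t1) < tol * t1:           # step 4: relative change below 1%
            return t2                         # step 5
        t1 = t2
```

Thresholding the masked slice with the returned value (as in Figure 11d) separates the left heart and the aorta from the structures containing non-oxygenated blood.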

3 Results

Following the methodology described above, the segmentation algorithm introduced in this paper was applied to 32 clinical exams from randomly selected adult patients (source: Hospital Universitario Virgen de la Arrixaca, Murcia, Spain). The datasets were acquired during multiple breath holds as stacks of 2D + time grayscale axial slices, using two different CT scanners (Siemens Sensation 64 and Toshiba Aquilion). The imaging protocols are heterogeneous, with diverse capture ranges and resolutions. A volume may contain 75 to 190 slices, while each slice has the same size of 512 × 512 pixels. The in-slice resolution is isotropic and varies from 0.488 to 0.781 mm across volumes (therefore, the field of view varies from 250 × 250 mm to 400 × 400 mm). The slice thickness (i.e., the distance between neighboring slices) is larger than the in-slice resolution and varies from 0.75 to 3 mm across volumes. All the data are in DICOM 3.0 format. The experiments were carried out on a PC with an Intel Core 2 Duo processor (2 × 2.4 GHz) and 4 GB of RAM, and the computations were performed under MATLAB 7.6 (R2008a). The mean running time for fully segmenting the cardiac structures varies from 23.1 s (512 × 512 × 75 voxels) to 110.9 s (512 × 512 × 190 voxels). The mean running time for segmenting only the left heart varies from 5.6 s (512 × 512 × 75 voxels) to 25.1 s (512 × 512 × 190 voxels). It should be noted that all these times could be significantly improved through an optimized implementation of the algorithms in C/C++.

The resulting contours were visually inspected by experienced cardiologists from Hospital Universitario Virgen de la Arrixaca (in the following, HUVA). According to their evaluation, our automatic approach generated results acceptable to clinicians; no noticeable errors were found. Figure 12 presents some segmentation outputs from our method. More precisely, the outcome of the segmentation of the left heart is shown: left atrium (LA), left ventricle (LV), aorta (Ao), and descending aorta (DAo). Three-dimensional reconstructions of the full heart and the left heart, obtained from the corresponding segmentations (as explained in Sections 2.2 and 2.3, respectively), are also displayed.

Figure 12

Example of the outcome of the proposed segmentation methodology. (a-g) Left heart segmentation of several slices from a CT scan, (h) 3D reconstruction of the whole heart, and (i) 3D reconstruction of the left heart.

A quantitative validation was also performed. A commonly used performance metric is the correlation ratio (please refer to[34] for its mathematical definition), which is equivalent to a measure of mass overlap between the segmentation results and the ground truth. In our case, the ground truth consists of a collection of contours manually delineated by an expert from HUVA. The mean correlation ratio (CR) was 94.42%, where a value of 100% means a perfect match. This value drops to 87.64% if we consider the whole CT scan, i.e., if we include the slices with too much liver (such as, e.g., Figure 12f, in which the expert did not delineate the cardiac structure labeled as LV, but only the descending aorta). The maximum computed CR value was 99.81%, and the minimum value was 46.95% (the latter corresponding to a slice in which the liver was present). Thus, the correlation ratio reveals a good agreement between the automatic and the manual segmentations. Another similarity measure which is broadly used when dealing with contours of segmented objects is the maximal surface distance (refer to[35] for a mathematical definition). The mean value of this measure was 2.36 mm (with a minimum of 1.11 mm and a maximum of 6.12 mm), where 0 mm would mean a perfect match of the compared contours.

Finally, an assessment of the left heart’s volume-time curves was carried out. The temporal variation of the volume of the left heart (left atrium and left ventricle) was obtained for both the output of our method and the manually delineated ground truth. As can be appreciated in the example shown in Figure 13, the estimated volumes were very close to the ground truth, with a mean error of 1.22% and a standard deviation of 0.68%. It should be noted that in all cases, the ground truth volumes were greater than or equal to the computed volumes, due to the fact that the output of our method is tightly adjusted to the boundary of the cardiac structures, while the contours delineated by the expert follow their overall shape more loosely.

Figure 13

Example of left heart’s (LA + LV) volume vs. time curve for dynamic 3D sequence.

4 Conclusions

We have developed a comprehensive image-driven methodology to segment the cardiac structures (or only the left heart) from CT scans by using a processing pipeline of multi-thresholding, image cleaning, mathematical morphology, and image filtering techniques. The algorithm we propose is simple; hence, it is easy to implement and validate. All the contours are delineated automatically, without any initialization or user interaction. Testing results on data randomly selected from clinical exams demonstrated that our approach runs significantly faster than other automated techniques (especially when compared with model-based approaches). This makes it feasible to conveniently compute the left heart’s volume online for all the imaged cardiac phases (not only end-diastolic and end-systolic), which in turn enables the computation of additional quantitative clinical parameters such as the peak ejection rate and the filling rate. Moreover, this allows for the automatic identification of the imaged time points of end-diastole and end-systole (which correspond to the maximum and minimum left heart volumes among all time points).

The complete cardiac segmentation methodology performed well on the validation set of 32 clinical datasets acquired on two different CT scanners from two manufacturers. Its accuracy is comparable to other approaches recently published. Additionally, visual inspection by experts showed that the proposed algorithm is overall robust and succeeds in segmenting the heart up to minor local corrections.

A limitation of our method is that it only provides a segmentation of the left heart’s blood pool volume (i.e., endocardium). While this is sufficient for computing most of the common clinical quantitative parameters for cardiac function, a segmentation of the left heart’s epicardium would provide additional clinical information.

Further directions of our research include the porting of the presented methodology to other modalities, such as cardiac cine magnetic resonance imaging (cMRI).

References

1. WHO: Cardiovascular diseases (CVDs): Fact sheet number 317. Updated March 2013. http://www.who.int/mediacentre/factsheets/fs317/en/index.html

2. Zuluaga MA, Cardoso MJ, Modat M, Ourselin S: Multi-atlas propagation whole heart segmentation from MRI and CTA using a local normalised correlation coefficient criterion. Lect. Notes Comput. Sci. 2013, 7945: 174-181. 10.1007/978-3-642-38899-6_21

3. Josevin Prem S, Ulaganathan MS, Kharmega Sundararaj G: Segmentation of the heart and great vessels in CT images using Curvelet Transform and Multi Structure Elements Morphology. Int. J. Eng. Innov. Tech. 2013, 1(3): 122-128.

4. Shoenhagen P, Halliburton SS, Stillman AE, White RD: CT of the heart: principles, advances and clinical uses. Cleveland Clinic J. Med. 2005, 72(2): 127-138. 10.3949/ccjm.72.2.127

5. Kang D, Woo J, Slomka PJ, Dey D, Germano G, Jay Kuo C-C: Heart chambers and whole heart segmentation techniques: review. J. Electron. Imaging 2012, 21(1): 1-16.

6. Schroeder S, Achenbach S, Bengel F: Cardiac computed tomography: indications, applications, limitations, and training requirements. Eur. Heart J. 2008, 29: 531-556. 10.1093/eurheartj/ehm544

7. von Berg J, Lorenz C: Multi-surface cardiac modeling, segmentation, and tracking. In Proc. FIMH, Lect. Notes Comput. Sci. 2005, 3504: 1-11. 10.1007/11494621_1

8. Peters J, Ecabert O, Lorenz C, von Berg J, Walker MJ, Ivanc TB, Vembar M, Olszewski ME, Weese J: Segmentation of the heart and major vascular structures in cardiovascular CT images. Proc. SPIE 2008, 6914: 1-12.

9. Zheng Y, Barbu A, Georgescu B, Scheuering M, Comaniciu D: Four-chamber heart modeling and automatic segmentation for 3-D cardiac CT volumes using marginal space learning and steerable features. IEEE Trans. Med. Imaging 2008, 27(11): 1668-1681.

10. Coscoso CS, Niessen WJ, Netsch T, Vonken EPA, Lund G, Stork A, Viergever MA: Automatic image-driven segmentation of the ventricles in cardiac cine MRI. J. Magn. Reson. Imaging 2008, 28: 366-374. 10.1002/jmri.21451

11. Huang S, Liu J, Lee LC, Venkatesh SK, Teo LLS, Au C, Nowinski WL: An image-based comprehensive approach for automatic segmentation of left ventricle from cardiac short axis cine MRI images. J. Digit. Imaging 2010: 1-11.

12. Redwood AB, Richard JJ, Robb A: Semiautomatic segmentation of the heart from CT images based on intensity and morphological features. Proc. SPIE 2005, 5747: 1373-1719.

13. Morin JP, Desrosiers C, Duong L: Image segmentation using random-walks on the histogram. Proc. SPIE 2012, 8314: 1-8.

14. Lorenzo-Valdes M, Sanchez-Ortiz GI, Elkington AG, Mohiaddin RH, Rueckert D: Segmentation of 4D cardiac MR images using a probabilistic atlas and the EM algorithm. Med. Image Anal. 2004, 8: 255-265. 10.1016/j.media.2004.06.005

15. Isgum I: Multi-atlas-based segmentation with local decision fusion: application to cardiac and aortic segmentation in CT scans. IEEE Trans. Med. Imaging 2009, 28(7): 1000-1010.

16. Rezaee MR, van der Zwet PMJ, Lelieveldt BPE, van der Geest RJ, Reiber JHC: A multiresolution image segmentation technique based on pyramidal segmentation and fuzzy clustering. IEEE Trans. Image Processing 2000, 9: 1238-1248. 10.1109/83.847836

17. Park K, Montillo A, Metaxas D, Axel L: Volumetric heart modeling and analysis. Commun. ACM 2005, 48(2): 43-48. 10.1145/1042091.1042118

18. Ecabert O, Peters J, Walker MJ, Ivanc TB, Lorenz C, von Berg J, Lessick J, Vembar M, Weese J: Segmentation of the heart and great vessels in CT images using a model-based adaptation framework. Med. Image Anal. 2011, 15: 863-876. 10.1016/j.media.2011.06.004

19. Ecabert O, Peters J, Schramm H, Lorenz C, von Berg J, Walker MJ, Vembar M, Olszewski ME, Subramanyan K, Lavi G, Weese J: Automatic model-based segmentation of the heart in CT images. IEEE Trans. Med. Imaging 2008, 27(9): 1189-1201.

20. Sammouda R, Jomaa RM, Mathkour H: Heart region extraction and segmentation from chest CT images using Hopfield Artificial Neural Networks. In International Conference on Information Technology and e-Services (ICITeS); 2012: 1-6.

21. Andreopoulos A, Tsotsos JK: Efficient and generalizable statistical models of shape and appearance for analysis of cardiac MRI. Med. Image Anal. 2008, 12(3): 335-357. 10.1016/j.media.2007.12.003

22. Mitchell SC, Bosch JG, Lelieveldt BPF, van Geest RJ, Reiber JHC, Sonka M: 3-D active appearance models: segmentation of cardiac MR and ultrasound images. IEEE Trans. Med. Imaging 2002, 21(9): 1167-1178. 10.1109/TMI.2002.804425

23. Reeves AP, Biancardi AM, Yankelevitz DF, Cham MD, Henschke CI: Heart region segmentation from low-dose CT scans: an anatomy based approach. Proc. SPIE 2012, 8314: 1-9.

24. Paragios N: A level set approach for shape-driven segmentation and tracking of the left ventricle. IEEE Trans. Med. Imaging 2003, 22(6): 773-776. 10.1109/TMI.2003.814785

25. Lynch M, Ghita O, Whelan PF: Segmentation of the left ventricle of the heart in 3-D + t MRI data using an optimized nonrigid temporal model. IEEE Trans. Med. Imaging 2008, 27: 195-203.

26. Gering DT: Automatic segmentation of cardiac MRI. Lect. Notes Comput. Sci. 2003, 2878: 524-532. 10.1007/978-3-540-39899-8_65

27. Jolly MP: Automatic segmentation of the left ventricle in cardiac MR and CT images. Int. J. Comput. Vision 2006, 70: 151-163. 10.1007/s11263-006-7936-3

28. Lotjonen J, Kivisto S, Koikkalainen J, Smutek D, Lauerma K: Statistical shape model of the atria, ventricles and epicardium from short- and long-axis MR images. Med. Image Anal. 2004, 8: 371-386. 10.1016/j.media.2004.06.013

29. Rao A, Chandrashekara R, Sanchez-Ortiz GI: Spatial transformation of motion and deformation fields using nonrigid registration. IEEE Trans. Med. Imaging 2004, 23: 1065-1076. 10.1109/TMI.2004.828681

30. Levin D, Aladi U, Germano G, Slomka P: Techniques for efficient, real-time, 3D visualization of multi-modality cardiac data using consumer graphics hardware. Comput. Med. Imaging Graph. 2005, 29: 463-475. 10.1016/j.compmedimag.2005.02.007

31. van Greuns RJ, Baks T, Gronenschild EH: Automatic quantitative left ventricular analysis of cine MR images by using three-dimensional information for contour detection. Radiology 2006, 240: 215-221. 10.1148/radiol.2401050471

32. Hautvast G, Lobregt S, Breeuwer M, Gerritsen F: Automatic contour propagation in cine cardiac magnetic resonance images. IEEE Trans. Med. Imaging 2006, 25: 1472-1482.

33. Ridler TW, Calvard S: Picture thresholding using an iterative selection method. IEEE Trans. Syst. Man Cybern. 1978, 8: 630-632.

34. Roche A, Mandalain G, Pennec X, Ayache N: The correlation ratio as a new similarity measure for multimodal image registration. In Proc. MICCAI, Lect. Notes Comput. Sci. 1998, 1496: 1115-1124.

35. Perkins S: Identification and reconstruction of bullets from multiple X-rays. M.S. dissertation, Department of Computer Science, Faculty of Science, University of Cape Town; 2004.


Acknowledgements

This work is partially supported by the Spanish Ministerio de Ciencia e Innovación, under grant number TEC2009-12675.

Author information


Corresponding author

Correspondence to Jorge Larrey-Ruiz.

Additional information

Competing interests

The authors declare that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article


Cite this article

Larrey-Ruiz, J., Morales-Sánchez, J., Bastida-Jumilla, M.C. et al. Automatic image-based segmentation of the heart from CT scans. J Image Video Proc 2014, 52 (2014). https://doi.org/10.1186/1687-5281-2014-52
