Variational approach for capsule video frame interpolation
EURASIP Journal on Image and Video Processing volume 2018, Article number: 30 (2018)
Abstract
Capsule video endoscopy, which uses a wireless camera to visualize the digestive tract, is emerging as an alternative to traditional colonoscopy. Colonoscopy is considered the gold standard for visualizing the colon and captures 30 frames per second. Capsule images, on the other hand, are taken at a low frame rate (on average five frames per second), which makes it difficult to find pathology and causes eye fatigue during viewing. In this paper, we propose a variational algorithm to smooth the video temporally and create a visually pleasant video. The main objective of the paper is to increase the frame rate to be closer to that of colonoscopy. We propose a variational energy that takes into consideration both motion estimation and intermediate-frame intensity interpolation using the surrounding frames. The proposed formulation incorporates both pixel intensity and texture features in the optical flow objective function, such that the interpolation at the intermediate frame is directly modeled. The main feature of this formulation is that the error in motion estimation is incorporated in our model, so that only robust motion estimates are used in estimating the intensity of the intermediate frame. We derive the Euler-Lagrange equations and show an efficient numerical scheme that can be implemented on graphics hardware. Finally, a motion-compensated frame rate doubling version of our method is implemented. We evaluate the quality of both 90 and 100% of the frames for the medical diagnosis domain through objective image quality metrics. Our method improves on state-of-the-art results for 90% of the frames while performing on par with existing methods in the remaining cases. In the last section, we show applications of frame interpolation to informative frame segment visualization and to reducing power consumption.
1 Introduction
Capsule video endoscopy has proven to be a powerful tool for the diagnosis of digestive tract diseases. It has many advantages over traditional colonoscopy, being less invasive and requiring no sedation. There are different types of capsules currently available on the market, including esophageal, small bowel, and colon capsules. Colon capsule video endoscopy (CCE) has been used to diagnose inflammatory bowel disease [1] (i.e., Crohn's disease and ulcerative colitis), gastrointestinal bleeding, and polyps. Since it is less invasive than colonoscopy, it might also increase participation in colorectal cancer screening. It has been shown to have high sensitivity for the detection of clinically relevant lesions [2].
The second-generation CCE developed by [3] is about 11 × 31 mm and takes 14 frames/min until the first frame of the small bowel, then captures frames at an adaptive frame rate of 4–35 frames per second depending on the speed of the capsule. Although the adaptive frame rate improves the visualization, the video appears jagged; sample videos can be obtained from [4]. The images are at a lower resolution compared to traditional colonoscopy (usually full HD). The images produced by capsule video endoscopy suffer from several problems, such as uneven and low illumination, low resolution, high compression ratio, and noise. The problem of capsule image enhancement has been an active research topic over the past decade [5, 6], but there are few publications that consider the low temporal frame rate of capsule videos. Frame interpolation is a technique for creating intermediate frames from overlapping neighboring frames in a sequence. CCE video reader software such as RapidReader [4] supports viewing at 2–40 frames per second with options to pause, rewind, and play the video. Watching the video at two frames per second is the most robust way to find pathologies, but the videos are not smooth to watch and hence take more time to view. It is important to note that frame interpolation does not increase the duration of CCE videos but rather makes the video smoother and more natural to view. CCE frame interpolation should in general satisfy the following three conditions: Firstly, the interpolated frame should not contain motion and image artifacts that could lead to a wrong diagnosis. Secondly, flickering and blurring of frames should be avoided when displaying image sequences. Thirdly, the interpolated frames need to compensate for the apparent motion of the camera and give a natural motion portrayal. Karargyris et al. [7] proposed three-dimensional reconstruction of the digestive wall in capsule endoscopy videos using elastic video interpolation.
In their work, they propose a methodology that creates the intermediate (interpolated) frames between two CCE frames followed by three-dimensional reconstruction, given that these two frames share mutual information to some degree. The interpolation is done by computing optical flow with a region-based matching technique. The segmentation of the video frames is performed by fuzzy region growing, followed by matching of the segments in consecutive frames based on color, texture, and geometry information. Similar work was presented in [8], where the authors used frame interpolation as a post-processing technique at the receiver to save battery power at the transmitter. This is done by transmitting frames at a lower frame rate, given that the intermediate frames can be reconstructed from neighboring frames at the receiver. In their work, they used unidirectional and bidirectional block-matching motion estimation and compensation to create the intermediate frames.
In this paper, we further explore this direction of estimating intermediate frames and propose a different parametrization of the variational energy for CCE frame interpolation. We show experimentally that the proposed variational energy formulation improves the quality of interpolated CCE frames. The contributions of this work are threefold. Firstly, we combine motion estimation and compensation in a single energy formulation, which improves the quality of the interpolated frame with less computation, since we do not compute forward and backward optical flow separately. The proposed energy formulation includes symmetric motion estimation and compensation, which can be solved by a primal-dual approach [9]. Secondly, by exploiting Laws' texture energy, we introduce a symmetric textural and intensity constraint for computing robust interpolation of CCE frames. Unlike previous approaches that compute one-directional optical flow [7], our formulation considers symmetric optical flow under the symmetric textural and intensity constraint, which gives a better interpolation of CCE frames in terms of objective quality metrics. Thirdly, we evaluate and analyze the appropriateness of CCE frame interpolation for medical diagnosis using objective image quality metrics.
The outline of the article is as follows: in Section 2, we revisit earlier work on variational formulations for frame interpolation. In Section 3, we present our approach and detail its derivation both theoretically and numerically. In Section 4, we present the implementation of the proposed method. In Section 5, we evaluate the proposed method and compare it to other works. Finally, in Section 6, we present the discussion and conclusion.
2 Background
Before going into the details of our method, we present a short review of variational frame interpolation approaches as used on natural image sequences. There are two general steps for frame interpolation: motion estimation and motion compensation. Motion estimation involves computing the temporal movement between the current frame and the previous frame, in the form of motion vectors. The motion vectors can be block-based (sparse) or pixel-based (dense). Keller et al. [10] proposed a variational method for both optical flow calculation (motion estimation) and the actual new frame interpolation (motion compensation). The flow and intensities are calculated simultaneously in a multiresolution setting. Using the standard maximum a posteriori to variational formulation rationale, the authors derived a minimum energy formulation for the estimation of a reconstructed sequence as well as motion recovery. Similarly, Rakêt et al. [11] proposed motion-compensated frame interpolation with a symmetric optical flow constraint. Motion vectors are computed using a TV-L1 energy, and the interpolated frame is computed by averaging the flow warped to the current and previous frames. Once motion vectors are computed, motion compensation is used to estimate the intermediate frame as follows:
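The interpolation step referenced here as Eq. (1) takes the standard symmetric motion-compensated form, consistent with the initialization used later in Section 3:

```latex
I(\mathbf{x}, n) = \frac{1}{2}\Big( I(\mathbf{x}-\mathbf{u},\, n-1) + I(\mathbf{x}+\mathbf{u},\, n+1) \Big) \tag{1}
```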
Motion estimation involves the computation of displacement vectors between two neighboring frames. Let I(x,n) be a video sequence and u=(u,v) be the displacement vector of pixel position x=(x,y) in frame number n. Assuming the intensity of the pixel does not change due to the displacement, we can write the optical flow constraint as
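Under brightness constancy, the constraint referenced as Eq. (2) reads:

```latex
I(\mathbf{x}, n) = I(\mathbf{x} + \mathbf{u},\, n+1) \tag{2}
```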
Taking the first-order Taylor series expansion of the right-hand side of Eq. (2), we get the well-known linearized optical flow constraint as
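In its standard form, the linearized constraint of Eq. (3) is:

```latex
\nabla I \cdot \mathbf{u} + I_t = 0 \tag{3}
```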
where ∇I and I_{ t } are the spatial and temporal derivative operators, respectively. Horn and Schunck [12] proposed a variational approach to solving Eq. (3) using the L2 norm under a smoothness assumption on the flow field, as shown in Eq. (4). This quadratic cost function can easily be solved using the Euler-Lagrange equations. The main disadvantage of this formulation is that it penalizes high gradients of u and disallows discontinuities.
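The Horn-Schunck energy referenced as Eq. (4) has the classical form:

```latex
E(\mathbf{u}) = \int \big( \nabla I \cdot \mathbf{u} + I_t \big)^2 + \lambda \big( |\nabla u|^2 + |\nabla v|^2 \big) \, dA \tag{4}
```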
where dA=dxdy and λ is a constant.
To circumvent this problem, the L1 norm was exploited in [9]. The L1 norm is a better choice for optical flow computation as it is robust to outliers and allows discontinuities in the flow field. Zach et al. [9] proposed an algorithm that can be understood as a minimization of Eq. (5), which is the sum of the total variation of the flow field u=(u,v) and an L1 attachment term:
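The TV-L1 energy of [9], referenced as Eq. (5), is:

```latex
E(\mathbf{u}) = \int \lambda \, \big| \nabla I \cdot \mathbf{u} + I_t \big| + |\nabla u| + |\nabla v| \, dA \tag{5}
```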
Once motion vectors are estimated, the interpolated frame is computed as in Eq. (1). Our proposed approach differs from [10] in that forward and backward optical flow are not computed on every multiresolution scale. Rather, a symmetric flow is employed, hence avoiding redundant flow computation. Unlike [11], we model motion vector estimation and intermediate frame interpolation as a single energy formulation. In the next section, we discuss the proposed method.
3 Method
A high resolution video sequence can be considered as a continuous space-time image volume. The frame interpolation problem, on the other hand, can be modeled as an inpainting problem along the temporal axis of this volume. 2D inpainting may be viewed as denoising with a binary mask β(x), which is set to zero in the missing region of the image and nonzero otherwise. Following the same reasoning, let us define the original video sequence and the desired high frame rate sequence as I_{0} and I, respectively. From a mathematical point of view, I_{0} and I are piecewise continuous functions \(\mathbb {R}^{3} \to \mathbb {R}\) defined in the space of bounded variation. Assuming the intensity of a pixel is constant in the direction of the optical flow, a general formulation to estimate the symmetric optical flow (SOF) and the interpolated frame can be written as
where I(x,n) is the required interpolated frame, ∇ is the spatial derivative operator, and u=(u,v) are the x and y components of the SOF field. In order to find the intermediate frame I(x,n), we minimize the energy given by Eq. (6). Taking the derivative of Eq. (6) with respect to u yields the SOF estimation:
We can recognize the above expression as the SOF constraint. In the ideal case, where the flow field is accurate, the intermediate frame can be computed directly as I(x,n)=I(x+u,n+1)=I(x−u,n−1). However, for practical capsule videos, with photometric variations (i.e., shadow, shading, specular reflection, and light source changes) as well as geometric variations (i.e., viewpoint and object orientation), these conditions do not hold. In order to improve the smoothness and accuracy of the optical flow, we propose to improve optical flow estimation by including information from textural features. In this work, we explored nine filters of size 5×5, which are constructed from the four basic vectors L5=[1,4,6,4,1]; E5=[−1,−2,0,2,1]; S5=[−1,0,2,0,−1]; and R5=[1,−4,6,−4,1], as suggested by Laws' texture energy [13]. By taking the mutual outer products of these four vectors, textural features such as the center-weighted local average (L5), edges (E5), spots (S5), and ripples and waves in texture (R5) are included in estimating a robust SOF. Once the nine textural maps are computed, the final texture is estimated by a weighted summation of the maps using Eq. (8). The texture map estimation scheme is shown in Fig. 1.
where w_{ i } is
In addition, the local binary kernel [1,1,1;1,−1,1;1,1,1] is explored for robust optical flow computation. Incorporating the textural constraint into Eq. (7) results in our final energy for optical flow computation:
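As a concrete sketch of the Laws texture pipeline described above, the snippet below builds nine 5×5 energy maps from the four basic vectors (the three zero-mean auto-combinations plus the six averaged symmetric cross-combinations, a common choice in the Laws literature) and combines them by weighted summation. Since the weights w_i of Eq. (8) are not reproduced here, uniform weights are used as a placeholder assumption.

```python
import numpy as np

# Laws' four basic 1D vectors: level, edge, spot, ripple [13].
VEC = {
    "L5": np.array([1, 4, 6, 4, 1], dtype=float),
    "E5": np.array([-1, -2, 0, 2, 1], dtype=float),
    "S5": np.array([-1, 0, 2, 0, -1], dtype=float),
    "R5": np.array([1, -4, 6, -4, 1], dtype=float),
}

def sep_filter(img, kr, kc):
    """Apply the separable 5x5 mask outer(kc, kr): rows with kr, then columns with kc."""
    tmp = np.apply_along_axis(lambda r: np.convolve(r, kr, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kc, mode="same"), 0, tmp)

def laws_texture_maps(img):
    """Nine Laws texture energy maps: E5E5, S5S5, R5R5 plus the six averaged
    symmetric pairs such as (L5E5 + E5L5) / 2."""
    maps = [np.abs(sep_filter(img, VEC[n], VEC[n])) for n in ("E5", "S5", "R5")]
    pairs = [("L5", "E5"), ("L5", "S5"), ("L5", "R5"),
             ("E5", "S5"), ("E5", "R5"), ("S5", "R5")]
    for a, b in pairs:
        maps.append(0.5 * (np.abs(sep_filter(img, VEC[a], VEC[b]))
                           + np.abs(sep_filter(img, VEC[b], VEC[a]))))
    return maps

def texture_map(img, weights=None):
    """Weighted summation of the nine maps; uniform weights stand in for the w_i of Eq. (8)."""
    maps = laws_texture_maps(img)
    if weights is None:
        weights = np.full(len(maps), 1.0 / len(maps))
    return sum(w * m for w, m in zip(weights, maps))
```

Because every one of the nine maps involves at least one zero-sum kernel, a flat image produces zero texture energy away from the border, which is the behavior one wants from a texture descriptor.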
where I_{th} is the texture feature as in Eq. (8) and λ_{ t } is its weighting factor. This formulation has the advantage that it incorporates robust textural features. Moreover, in the standard formulation of variational optical flow, the estimated motion vector field depends on the reference image and is asymmetric, as can be seen from Eq. (5).
Similarly, the derivative of Eq. (6) with respect to I(x,n) using the notation \(\hat {I}=I(\mathbf {x},n)\) for simplicity gives
which can be rewritten to include a quality measure of the optical flow. This is expressed by dividing the optical flow constraint into regions of intensity interpolation, according to whether the warps of the flow to the two neighboring frames, I(x−u,n−1) and I(x+u,n+1), agree or not.
where β(x)=1 if |I(x−u,n−1)−I(x+u,n+1)|<ε, and ε is a small positive constant; otherwise, β(x) is set to zero, similar to 2D inpainting. \(\overline {\beta (\mathbf {x})}\) represents the negation operator. In the above formulation, Eq. (12) represents spatial diffusion on the interpolated image, and Eq. (13), on the other hand, is diffusion along the flow line. The extra fidelity term Eq. (14) is added to avoid blurring of the interpolated frame in regions defined by β(x)=1, i.e., correct intensity estimation. Here, we make the assumption that the intensity of a pixel in the intermediate frame, as estimated from both neighboring frames, is robust if the optical flow estimate at that pixel is accurate; otherwise, we fill the missing region through inpainting along the flow lines. Following the above formulation, the problem is thus to find a motion vector u and an intermediate frame I(x,n) that minimize Eq. (10) and Eq. (12), respectively. In order to minimize Eq. (10), we take the first-order Taylor series expansion of the data fidelity term of Eq. (10), and it becomes:
where
σq(x) is added to the fidelity term of our energy function to account for small intensity variations in the image. Introducing an auxiliary variable w into Eq. (15) and applying convex relaxation similar to [9], Eq. (15) can be decoupled into two energies:
This convex relaxation was first proposed by [9], coupling the two energies by a quadratic link function. Setting θ low forces the minimum to occur at u=w. The minimization problem in Eq. (17) is identical to a denoising problem, except that the integral is taken over the motion vector u, and can be solved using Chambolle's projection algorithm; Eq. (18) can be solved simply by a pointwise thresholding method.
To minimize the intensity inpainting energy given by Eq. (12), we derive the EulerLagrange equations. This can be written as:
Minimizing the energy based on this L1 norm requires the function to be convex and differentiable. Hence, we write the absolute value function in Eq. (12) as \(\varphi (\mathbf {I}^{2})=\sqrt {(\mathbf {I})^{2}+\epsilon _{1}} \approx |\mathbf {I}|\), where ε_{1} is a small positive regularization constant. φ(I^{2}) is a convex and differentiable function, which meets the stated requirement in the search for the minimum. Therefore, the Euler-Lagrange equation becomes
where
A=φ^{′}(|∇I(x,n)|^{2}), B=φ^{′}((I(x+u,n+1)−I(x−u,n−1))^{2}), and V=(u,1).
In the above equation, the first term represents a diffusion term with diffusion velocity A. The second part of the equation represents diffusion of intensity in the direction of the computed optical flow; this term acts as transportation of intensity along the flow line. In our case, we are only interested in diffusion along the flow lines as defined by the mask β(x). It is possible to apply diffusion on the whole image along the flow lines, but we found that a good initial solution can easily be estimated by setting \(I(\mathbf {x},n)=\frac {1}{2} (I(\mathbf {x}-\mathbf {u},n-1)+I(\mathbf {x}+\mathbf {u},n+1))\). The last part of the equation avoids smoothing the image where intensities are correctly computed from the optical flow.
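The symmetric initialization above can be sketched as follows; the bilinear sampler and the function names are illustrative, not the authors' implementation.

```python
import numpy as np

def bilinear_warp(frame, u, v):
    """Sample frame at (x + u, y + v) with bilinear interpolation, clamping at the border."""
    h, w = frame.shape
    yy, xx = np.mgrid[0:h, 0:w]
    xs = np.clip(xx + u, 0, w - 1)
    ys = np.clip(yy + v, 0, h - 1)
    x0 = np.floor(xs).astype(int)
    y0 = np.floor(ys).astype(int)
    x1 = np.minimum(x0 + 1, w - 1)
    y1 = np.minimum(y0 + 1, h - 1)
    fx, fy = xs - x0, ys - y0
    top = (1 - fx) * frame[y0, x0] + fx * frame[y0, x1]
    bot = (1 - fx) * frame[y1, x0] + fx * frame[y1, x1]
    return (1 - fy) * top + fy * bot

def init_intermediate(prev_frame, next_frame, u, v):
    """Initial guess I(x, n) = (I(x - u, n - 1) + I(x + u, n + 1)) / 2."""
    backward = bilinear_warp(prev_frame, -u, -v)  # I(x - u, n - 1)
    forward = bilinear_warp(next_frame, u, v)     # I(x + u, n + 1)
    return 0.5 * (backward + forward)
```

With zero flow this degenerates to plain frame averaging, the baseline used later in the evaluation.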
4 Numerical implementation
To minimize the energies defined in Eqs. (12)–(14) and (19), we first derived the Euler-Lagrange equations. The formulations can easily be parallelized on multicore processors [14]. The Euler-Lagrange equations for Eqs. (16)–(18) are shown in Eq. (20). The 2D divergence operator ∇· for an N by M image is defined as
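A standard choice for this discrete divergence, sketched below, is the negative adjoint of the forward-difference gradient, which is the pairing assumed by Chambolle-type projection schemes; the exact stencil used by the authors may differ.

```python
import numpy as np

def grad(u):
    """Forward-difference gradient with Neumann boundary (zero at the last row/column)."""
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def div(px, py):
    """Backward-difference divergence, built so that <grad u, p> = -<u, div p>."""
    d = np.zeros_like(px)
    d[:, 0] += px[:, 0]
    d[:, 1:-1] += px[:, 1:-1] - px[:, :-2]
    d[:, -1] += -px[:, -2]
    d[0, :] += py[0, :]
    d[1:-1, :] += py[1:-1, :] - py[:-2, :]
    d[-1, :] += -py[-2, :]
    return d
```

The adjointness property is what makes the dual update of the projection algorithm consistent with the primal gradient step.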
Similarly, we defined derivatives using a five-point stencil finite difference approximation with the convolution mask [1,−8,0,8,−1]/12. The implementation of the second term of Eq. (20) is similar to the discretization used in [15]. Deriving the Euler-Lagrange equation for Eq. (17) and setting it to zero gives
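The five-point derivative mask above can be applied per row as in this sketch; note that np.convolve flips its kernel, so the mask is reversed to act as a correlation.

```python
import numpy as np

MASK = np.array([1.0, -8.0, 0.0, 8.0, -1.0]) / 12.0  # mask stated in the text

def dx(img):
    """Row-wise first derivative; fourth-order accurate in the interior."""
    return np.apply_along_axis(
        lambda r: np.convolve(r, MASK[::-1], mode="same"), 1, img)
```

The stencil is exact for polynomials up to fourth degree away from the image border, which is easy to check on a cubic.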
Let us define the dual variable as
Substituting Eq. (22) into Eq. (23), we get
The fixed-point iteration scheme for Eq. (24) is then
and u^{k+1} is computed from Eq. (22) as u^{k+1}=w^{k+1}+θ∇·(p^{k}). Finally, Eq. (18) can be solved using pointwise thresholding as in [11]. For completeness, we present the final result here. For the general formulation, where ρ(u,q)=g^{T}u+c, the dual variable w is given by \({\mathbf {w}}= {\mathbf {u}}+TH\left ({\mathbf {u}}+{\mathbf {g}}\frac {{\mathbf {c}}}{{\mathbf {g}}^{2}}\right)\), where TH is the thresholding operator defined as
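In the standard TV-L1 form of [9, 11], the pointwise threshold with residual ρ(u) = g^T u + c is:

```latex
TH(\mathbf{u}) =
\begin{cases}
\lambda\theta\,\mathbf{g} & \text{if } \rho(\mathbf{u}) < -\lambda\theta\,|\mathbf{g}|^2 \\
-\lambda\theta\,\mathbf{g} & \text{if } \rho(\mathbf{u}) > \lambda\theta\,|\mathbf{g}|^2 \\
-\rho(\mathbf{u})\,\mathbf{g}/|\mathbf{g}|^2 & \text{otherwise}
\end{cases}
```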
Finally, the step-by-step multiscale implementation scheme is given in Algorithm 1.
5 Results and discussion
In CCE frame interpolation, not only the smoothness of the output video must be taken into account but also the quality of the interpolated frame for diagnosis. It must be kept in mind that the capsule moves through the gastrointestinal tract at an uneven speed, driven by muscle peristalsis. Therefore, depending on the speed of the capsule, neighboring frames might have large or very small overlap. Hence, the quality of the interpolated image depends on the degree of overlap and needs to be evaluated for the medical decision-making process. The reconstruction quality in terms of objective and subjective measures is therefore of great importance.
5.1 Dataset
In our experiment, we doubled the frame rate from 5 to 10 frames per second. The videos were taken with the Given Imaging PillCam Colon camera. Four sequences were extracted from Given Imaging capsule videos [4].

Seq1: Contains 13 frames from the colon with perspective passage motion of tissues. Average correlation similarity between neighboring frames is 0.8910.

Seq2: Contains 16 frames from the colon showing a 9-mm polyp in a single frame with complicated motion. Average correlation similarity between neighboring frames is 0.8078.

Seq3: Contains 20 frames from the rectum with occlusions. Average correlation similarity between neighboring frames is 0.8989.

Seq4: Contains 18 frames from the colon showing a 6-mm polyp in multiple frames. Average correlation similarity between neighboring frames is 0.8621. Sample results are shown in Fig. 2.
5.2 Objective metrics
We used the most common metrics for the evaluation of interpolation error, mean-squared error (MSE) and peak signal-to-noise ratio (PSNR). In addition, we also compared using the Structural SIMilarity (SSIM) index [16] as a quality measure. For an N by M image, MSE and PSNR are defined as:
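The standard definitions, with L the peak signal strength, are:

```latex
\mathrm{MSE} = \frac{1}{NM} \sum_{x=1}^{N} \sum_{y=1}^{M} \big( I_{\mathrm{est}}(x,y) - I_{\mathrm{gr}}(x,y) \big)^2, \qquad
\mathrm{PSNR} = 10 \log_{10} \frac{L^2}{\mathrm{MSE}}
```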
where I_{est} is the interpolated frame, I_{gr} is the ground truth frame, and L is the peak signal strength. For SSIM, we used the implementation provided by [16]. In our experiment, as we do not have ground truth data, we interpolate odd-numbered frames using even-numbered frames and vice versa. Comparison with other methods is done for both 90 and 100% of the frames. The frames are grouped based on the similarity of the neighboring frames used for interpolation. Results are summarized in Tables 1, 2 and 3 for both 90 and 100% of the frame sequences. The comparison is done with the state-of-the-art and traditional variational optical flow techniques TV-CLG [17]^{1} and TV-L1 [18]^{3}, respectively. Moreover, we also compared against a non-variational optical flow method that is robust to large-displacement optical flow, SFlow [19]^{2}. In addition, a frame averaging technique is included in the comparison as a baseline, as it is common in commercial products. The comparison is done using the implementations provided by the respective authors.
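A minimal sketch of these two metrics (SSIM is omitted, since the reference implementation of [16] is used in the paper):

```python
import numpy as np

def mse(est, gt):
    """Mean-squared error between interpolated and ground truth frames."""
    return float(np.mean((np.asarray(est, dtype=float)
                          - np.asarray(gt, dtype=float)) ** 2))

def psnr(est, gt, peak=255.0):
    """Peak signal-to-noise ratio in dB; peak is the peak signal strength L."""
    m = mse(est, gt)
    if m == 0.0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(peak ** 2 / m)
```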
From the above results, we can observe that the proposed method improves the image quality by 0.3 dB compared to other state-of-the-art methods, although the difference in performance in terms of MSE between the top method and ours is comparatively small (1.1×10^{−4}). Sample results of the proposed method are shown in Fig. 2. More results are shown in Fig. 3.
5.3 Parameters
In general, frame interpolation depends on accurate optical flow computation. Coarse-to-fine and warping techniques are frequently used tools for improving the performance of optical flow methods [9–11]. Warping in Algorithm 1 controls the number of times Eqs. (17) and (18) are solved iteratively, which is set by Maxiter on each scale. Increasing this parameter increases the quality of the computed optical flow at the cost of speed. NIter is the number of times diffusion along the optical flow lines is propagated; interpolated frames get smoother as this parameter increases. Finally, in order to take full advantage of the texture feature, λ_{ t } in Eq. (10) controls how much the textural energy map and the pixel intensity each contribute to estimating an accurate interpolated frame. We optimized λ_{ t } against the PSNR value between the interpolated and ground truth images. The result of the optimization is shown in Fig. 7. A visual comparison showing results with and without texture features is shown in Fig. 6. From Figs. 6 and 7, we can see that the textural features improve the quality of the interpolated frame, with fewer motion artifacts and less tissue surface blur.
5.4 Applicability of interpolation for CCE video frames
Colonoscopy, which is the gold standard for visualizing the colon, captures 30 frames per second. The videos are in general smooth and natural to view. Currently, CCE is not recommended as a first-line colorectal cancer screening option in hospitals. Frame interpolation can be used to enhance CCE for better visualization by increasing the frame rate and improving capsule battery life. In the literature, there are recent works that aim to detect informative segments automatically [20, 21]. Increasing the frame rate of these segments will assist the gastroenterologist in going through the video quickly. Moreover, frame interpolation can be used as post-processing for saving battery life. In order to reduce the power consumption of an endoscopic capsule transferring still images over a wireless channel from inside the human intestines to on-body receivers, the transmitted frame rate can be reduced in favor of generating the frames at the receiver side [8]. However, estimation of intermediate CCE frames, with rapidly changing scenes and large displacements between frames, can cause problems even for a human observer. In such scenarios, it is difficult (sometimes impossible) to estimate the intermediate frame, as there is no information available for reconstruction. Hence, it is important to have frame reconstruction that does not undermine the diagnostic value of the video, as illustrated in Figs. 4 and 5. When a gastroenterologist examines CCE videos, he/she can play the videos and pause on a given frame for examination. This raises the question of how to predict whether an interpolated frame is reliable for diagnosis.
In order to examine the appropriateness of an interpolation, we analyzed different parameters of neighboring frames that could impact the quality of the interpolated frame. Figure 8 shows the PSNR value plotted against the maximum and minimum magnitude of the optical flow. From the figure, we can see that for a maximum optical flow magnitude of less than 25, the PSNR is stable. A similar observation can be made from Fig. 8: as the correlation between neighboring frames increases, the PSNR value improves. This is an interesting observation, in that we can use it to switch off frame interpolation when flows are above some threshold or when the correlation between two frames is below a given value. The result shown in Fig. 8 is expected, in that the correlation between neighboring frames is an indicator of the quality of the interpolated frame. In order to compute a robust threshold on the correlation between neighboring frames, we collected CCE videos from nine people and extracted seven segments (50 frames each) from each video, marked by a gastroenterologist as regions suspected of different types of pathology. Table 4 shows the correlation between neighboring frames as the ratio of the number of frames with a given correlation value to the total number of frames. By using the data from Fig. 8 and Table 4, one can estimate a robust interpolation threshold that works for a significant portion of CCE videos.
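The switch-off rule this observation suggests can be sketched as below; the function names and the use of the Pearson coefficient as the "correlation similarity" are assumptions for illustration.

```python
import numpy as np

CORR_THRESHOLD = 0.75  # per Table 4, covers roughly 90% of frames in a typical CCE video

def frame_correlation(a, b):
    """Pearson correlation coefficient between two grayscale frames."""
    return float(np.corrcoef(np.ravel(a).astype(float),
                             np.ravel(b).astype(float))[0, 1])

def should_interpolate(prev_frame, next_frame, threshold=CORR_THRESHOLD):
    """Interpolate only for sufficiently overlapping neighbors;
    otherwise fall back to frame doubling."""
    return frame_correlation(prev_frame, next_frame) >= threshold
```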
It is also important to note the performance of each method with respect to the correlation of neighboring frames. Figure 5 shows the plot of the correlation between neighboring frames against the PSNR value between the ground truth and interpolated frames. The data is curve-fitted using an exponential family of the form a−b×e^{−cx}. It is easy to see that frame averaging performs above the other methods where the correlation between neighboring frames is less than 0.4, although the interpolated frame is blurred and has motion artifacts, as the apparent motion is not compensated. This is expected, in that in the case of large displacement between neighboring frames, it is difficult to estimate the optical flow accurately. On the other hand, methods that are robust to large-displacement optical flow, such as [19], give better results than variational techniques for large optical flow displacements. However, variational methods, and specifically our proposed method, perform better than the other methods, including [19], for approximately 90% of CCE video frames, which have a correlation greater than 0.75, as shown in Table 4.
Moreover, we performed a non-parametric paired Wilcoxon signed-rank test [22] comparing the PSNR values for each method. The null hypothesis (i.e., that the data in two paired methods are samples from continuous distributions with equal medians, H=0, against the alternative that they are not, H=1) is tested with Bonferroni correction of the confidence interval [23]. Figure 5b, c shows the Wilcoxon signed-rank test for all methods for correlation values between neighboring frames greater than 0.75. As can be seen, the proposed method performs statistically better, except against TV-CLG [17]. Although the proposed method performs better than TV-CLG [17] in terms of mean PSNR, the difference is not statistically significant. For our experiment, we set a correlation value of 0.75 between neighboring frames to decide if the computed intermediate frame is suitable for diagnosis. As shown in Table 4, this threshold includes 90% of the frames in a typical CCE video. For frames below the threshold, frame interpolation is switched off, and frame doubling is done to keep the frame rate consistent.
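A sketch of this test procedure, using the normal approximation to the Wilcoxon signed-rank statistic (the paper's exact implementation may differ) together with Bonferroni correction:

```python
import math
import numpy as np

def wilcoxon_signed_rank_p(x, y):
    """Two-sided p-value of the paired Wilcoxon signed-rank test, using the
    normal approximation; zero differences are dropped, ties are not corrected."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    d = d[d != 0]
    n = d.size
    ranks = np.argsort(np.argsort(np.abs(d))) + 1  # 1-based ranks of |d|
    w_plus = float(ranks[d > 0].sum())
    mu = n * (n + 1) / 4.0
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24.0)
    z = (w_plus - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2.0))  # two-sided tail probability

def bonferroni_reject(p_values, alpha=0.05):
    """H = 1 (reject equal medians) where p < alpha / m, m = number of comparisons."""
    m = len(p_values)
    return [p < alpha / m for p in p_values]
```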
5.5 Future direction: CCE video frame interpolation
Compared with wired colonoscopy, the limited working time, the low frame rate, and the low image resolution limit the wider application of CCE. An increase in the frame rate, angle of view, depth of field, and duration of the procedure, and improvements in illumination, seem likely in the future. The progress of battery technology and robust computational frame interpolation techniques can mitigate problems with the current CCE capsules. The CCE capsule needs to be small enough to be swallowable, and the battery needs to last more than 8 h [24]. The transmission of image data occupies about 90% of the total power in CCE [25]. Hence, computational techniques can have a significant impact in improving the frame rate of future capsules. As shown in Tables 1, 2, 3 and 4, the performance of the proposed method improves with the correlation between neighboring frames. With high frame rate videos, the proposed method gives more robust interpolated frames. This could significantly increase the chance of finding more disease pathologies as the CCE passes through the gastrointestinal tract.
6 Conclusion
In this paper, we discussed the limitations of current CCE videos regarding low frame rates. It is desirable to have a smooth video that is pleasant to view and gives better diagnostic value by reducing eye fatigue. We proposed a variational approach for simultaneous CCE frame motion estimation and intermediate frame intensity computation. In addition, textural features are included to make motion estimation robust. We also evaluated the quality of both 90 and 100% of the frames for the medical diagnosis domain through objective image quality metrics. We found that the proposed method gives state-of-the-art results for CCE frame interpolation. Moreover, the proposed method can be parallelized, and computationally efficient methods exist for GPU implementation. As future work, we will explore extending variational methods to make them more robust to large displacements between neighboring frames (i.e., low correlation). In addition, the objective metrics used here need to be supplemented with subjective evaluation by medical professionals. Further video material can be downloaded from https://www.ntnu.edu/web/colourlab/software.
Abbreviations
CCE: Colon capsule video endoscopy
GPU: Graphics processing unit
GT: Ground truth
HD: High definition
MSE: Mean-squared error
PSNR: Peak signal-to-noise ratio
SOF: Symmetric optical flow
SSIM: Structural similarity
TV-L1: Total variation L1 norm
References
C Parker, CE Spada, M McAlindon, C Davison, S Panter, Capsule endoscopy—not just for the small bowel: a review. Expert Review of Gastroenterology & Hepatology. 9(1), 79–89 (2015). https://doi.org/10.1586/17474124.2014.934357.
C Spada, C Hassan, et al., Endoscopy. 44(5), 527–536 (2012). https://doi.org/10.1055/s-0031-1291717.
Medtronic, Pillcam Colon II (2009). http://www.medtronic.com. Accessed 15 July 2016.
GivenImaging. Capsule Video Endoscopy: Atlas, (2016). http://www.capsuleendoscopy.org. Accessed 2016.
MS Imtiaz, KA Wahid, Color enhancement in endoscopic images using adaptive sigmoid function and space variant color reproduction. Comput. Math. Methods Med. 2015(2), 3905–3908 (2015).
J Pohl, I Aschmoneit, S Schuhmann, C Ell, Computed image modification for enhancement of small-bowel surface structures at video capsule endoscopy. Endoscopy. 42(06), 490–492 (2010). https://doi.org/10.1055/s-0029-1243994.
A Karargyris, N Bourbakis, Three-dimensional reconstruction of the digestive wall in capsule endoscopy videos using elastic video interpolation. IEEE Trans. Med. Imaging. 30(4), 957–971 (2011). https://doi.org/10.1109/TMI.2010.2098882.
EJ Daling, Reduction of Power Consumption in Video Communication based on Low Frame Rate Transmission and Decoder Frame Interpolation (2011). https://brage.bibsys.no/xmlui/handle/11250/2370308. Accessed 10 Feb 2017.
C Zach, T Pock, H Bischof, A duality based approach for realtime TV-L1 optical flow, in Pattern Recognition (Proc. DAGM), 214–223 (2007). https://doi.org/10.1007/978-3-540-74936-3_22.
SH Keller, F Lauze, M Nielsen, Video super-resolution using simultaneous motion and intensity calculations. IEEE Trans. Image Process. 20(7), 1870–1884 (2011). https://doi.org/10.1109/TIP.2011.2106793.
LL Rakêt, L Roholm, A Bruhn, J Weickert, Motion compensated frame interpolation with a symmetric optical flow constraint. Lect. Notes Comput. Sci. 7431 LNCS (Part 1), 447–457 (2012). https://doi.org/10.1007/978-3-642-33179-4_43.
BKP Horn, BG Schunck, Determining optical flow. Artif. Intell. 17, 185–203 (1981).
KI Laws, Rapid texture identification, in Image Processing for Missile Guidance, vol. 238 (International Society for Optics and Photonics, 1980), pp. 376–382.
C Yan, Y Zhang, J Xu, F Dai, J Zhang, Q Dai, F Wu, Efficient parallel framework for HEVC motion estimation on many-core processors. IEEE Trans. Circ. Syst. Video Technol. 24(12), 2077–2089 (2014).
M Nielsen, A variational algorithm for motion compensated inpainting, in British Machine Vision Conference (Kingston University, London, 2004), pp. 777–787. https://doi.org/10.5244/C.18.80.
Z Wang, AC Bovik, HR Sheikh, EP Simoncelli, Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 13(4), 600–612 (2004). https://doi.org/10.1109/TIP.2003.819861.
M Drulea, S Nedevschi, in 2011 14th International IEEE Conference on Intelligent Transportation Systems (ITSC). Total variation regularization of local-global optical flow (2011), pp. 318–323. https://doi.org/10.1109/ITSC.2011.6082986.
J Sánchez, E Meinhardt-Llopis, G Facciolo, TV-L1 optical flow estimation. Image Process. On Line. 3, 137–150 (2013). https://doi.org/10.5201/ipol.2013.26.
C Liu, J Yuen, A Torralba, SIFT flow: dense correspondence across scenes and its applications. IEEE Trans. Pattern Anal. Mach. Intell. 33(5), 978–994 (2011). https://doi.org/10.1109/TPAMI.2010.147.
Y Chen, Y Lan, H Ren, Trimming the wireless capsule endoscopic video by removing redundant frames, 1–4 (2012). https://doi.org/10.1109/WiCOM.2012.6478729.
A Mohammed, S Yildirim, M Pedersen, Ø Hovde, F Cheikh, in 2017 IEEE 30th International Symposium on Computer-Based Medical Systems (CBMS). Sparse coded handcrafted and deep features for colon capsule video summarization (2017), pp. 728–733. https://doi.org/10.1109/CBMS.2017.13.
JD Gibbons, S Chakraborti, Nonparametric Statistical Inference, vol. 1 (CRC Press, 2010).
JH McDonald, Handbook of biological statistics, vol. 2 (Sparky House Publishing, Baltimore, 2009).
G Ou, N Shahidi, C Galorport, O Takach, T Lee, R Enns, Effect of longer battery life on small bowel capsule endoscopy. World J. Gastroenterology: WJG. 21(9), 2677 (2015).
A Moglia, A Menciassi, P Dario, Recent patents on wireless capsule endoscopy. Recent Patents Biomed Eng. 1(1), 24–33 (2008).
Funding
This research has been supported by the Research Council of Norway through project no. 247689 “IQMED: Image Quality enhancement in MEDical diagnosis, monitoring and treatment.”
Availability of data and materials
The dataset supporting the conclusions of this article is available in the [4] repository.
Author information
Authors and Affiliations
Contributions
The work presented in this paper was carried out in collaboration between all authors. AM carried out the main part of this manuscript. IF contributed to the numerical implementation, and SY and MP supervised this research. ØH contributed to and evaluated the results for clinical application. All authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Additional information
Authors’ information
Mohammed Ahmed is a PhD student at NTNU in Gjøvik, working on medical imaging for fast, automatic, and accurate anomaly detection and diagnosis using capsule video endoscopy. He received his Master's degree in Electronics and Information Engineering from Chonbuk National University, South Korea, in 2014.
Ivar Farup is a professor of computer science and study programme leader for the Bachelor in Engineering – Computer Science at NTNU Gjøvik. He received his MSc (siv.ing.) in technical physics from NTH, Norway, in 1994 and his PhD (dr. scient.) from the Department of Mathematics, University of Oslo, in 2000. He has been a Professor of Computer Science since 2012. His work is centered on colour science and image processing.
Sule Yildirim is an associate professor at NISLAB, Department of Information Security and Communication Technology, NTNU Gjøvik. Before her current position, she was head of the computer science department at HIHM, where she also worked as an associate professor. She has a background in artificial intelligence and machine learning. Her work is centered on secure technologies and on semantic, agent-based, and learning systems for ontology modeling in the Semantic Web and for the development of smart characters in video games.
Dr. Sule Yildirim Yayilgan is an associate professor at the Norwegian University of Science and Technology (NTNU) at the Department of Information Security and Communication Technology. Her main fields of research interest are artificial intelligence, the application of machine learning in various fields, signal and image processing, and biometrics. She has participated in projects funded by the EU Horizon 2020, Eurostars, and Erasmus+ programmes, and by the Research Council of Norway. She also actively takes part as a PC member in conferences and acts as a reviewer for several journals.
Marius Pedersen received his BSc in Computer Engineering in 2006 and his MiT in Media Technology in 2007, both from Gjøvik University College, Norway. He completed a PhD in color imaging in 2011 from the University of Oslo, Norway, sponsored by Océ. He is currently employed as a professor at NTNU Gjøvik, Norway, and is the director of the Norwegian Colour and Visual Computing Laboratory (Colourlab). His work is centered on subjective and objective image quality.
Øistein Hovde, MD/PhD, is an associate professor at the Institute of Clinical Medicine, University of Oslo. He is also a senior consultant at Innlandet Hospital, Gjøvik. His main scientific work is in the fields of inflammatory bowel diseases and therapeutic endoscopy.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Mohammed, A., Farup, I., Yildirim, S. et al. Variational approach for capsule video frame interpolation. J Image Video Proc. 2018, 30 (2018). https://doi.org/10.1186/s1364001802679
Received:
Accepted:
Published:
DOI: https://doi.org/10.1186/s1364001802679