Optical Coherence Tomography (OCT) has dramatically changed diagnostics in ophthalmology. The first cross-sectional images of the human retina were published in [Huang 1991], and commercial systems followed in 1996. Such cross-sectional images allow clinicians to quantify changes in retinal thickness, a quantity directly related to macular pathology. Retinal layer segmentation is thus the underlying technology for this quantification of layer thicknesses, which is critical to the diagnosis and study of ocular disease.
A good overview of the field can be found in [Debuc 2010]. Thorough though this review is, it gives little emphasis to the graph-theoretic approaches that are proving to be a very effective means of layer segmentation, being less prone to initialization and parameter-tuning problems than active contour methods. Leading the charge has been the group at Iowa led by Professor Sonka, who were among the first to apply graph-based methods to OCT images, in 2008. Building on the min-cut/max-flow algorithm of [Boykov 2004], they added a layer-based parameterization that encoded a smoothness constraint into the optimization [Li 2006]. The initial cost function was gradient based, and the resulting graph-cuts machinery produced very good results [Garvin 2008]. The catch is that these results come at a hefty computational price. Despite the efficiency of the optimization itself, running graph cuts for multi-layer segmentation on large 3-d OCT data sets takes many minutes, making the approach impractical in the clinic. Progressive refinements to the cost functions, multi-resolution schemes, and various initialization strategies have since appeared, but the computational overhead still restricts wider (i.e., commercial) adoption and shows no sign of going away.
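To make the graph construction concrete, the sketch below implements a single-surface, 2-d version of the minimum-closed-set formulation in the spirit of [Li 2006]: each pixel becomes a node whose weight is the telescoped difference of costs down its column, infinite-capacity arcs enforce the column ordering and a smoothness bound delta between neighbouring columns, and a single min-cut recovers the optimal surface. The function name, the toy cost array, and the use of networkx's min-cut solver are illustrative assumptions, not the published implementation.

```python
import networkx as nx
import numpy as np

def optimal_surface(cost, delta=1):
    """Minimum-cost surface f(x) subject to |f(x) - f(x+1)| <= delta,
    found as a minimum-weight closed set via one min-cut
    (single-surface, 2-d sketch in the spirit of Li et al. 2006)."""
    X, Z = cost.shape
    big = float(np.abs(cost).sum()) + 1.0

    # Node weights: telescoped cost differences down each column; the
    # base node is made strongly negative so every valid closed set
    # (which always contains the whole bottom row) beats the empty set.
    w = np.empty((X, Z))
    w[:, 0] = cost[:, 0] - big
    w[:, 1:] = cost[:, 1:] - cost[:, :-1]

    G = nx.DiGraph()
    inf = float("inf")
    for x in range(X):
        for z in range(Z):
            v = (x, z)
            # Terminal arcs encode the node weights.
            if w[x, z] < 0:
                G.add_edge("s", v, capacity=-w[x, z])
            else:
                G.add_edge(v, "t", capacity=w[x, z])
            # Hard arcs: column ordering ...
            if z >= 1:
                G.add_edge(v, (x, z - 1), capacity=inf)
            # ... and the smoothness bound between neighbouring columns.
            for xn in (x - 1, x + 1):
                if 0 <= xn < X:
                    G.add_edge(v, (xn, max(0, z - delta)), capacity=inf)

    _, (source_side, _) = nx.minimum_cut(G, "s", "t")
    closed = source_side - {"s"}
    # The surface height in each column is the topmost closed node.
    return [max(z for (cx, z) in closed if cx == x) for x in range(X)]
```

Note the design point this illustrates: the smoothness constraint is not a penalty term but a hard arc in the graph, so the min-cut step is exact, and only the cost image needs tuning.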
One scheme used to speed up convergence was a 2-d graph-traversal algorithm to find the initial surfaces. Such graph-traversal algorithms, based for example on Dijkstra's algorithm or the Viterbi algorithm, have a far lower computational cost; the downside is that they are inherently 2-d. Nonetheless, much of the current excitement in multi-layer retinal image segmentation stems from such methods. [Chiu 2010] demonstrated segmentation of eight layers in 2-d, then extended the work to the anterior segment [Chiu 2011] and to the quantification of drusen [Chiu 2012]. [Yang 2010] published very similar work and extended the approach to 2½-d, but showed results only on averaged B-scans.
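The graph-traversal idea can be sketched very compactly: treat each pixel of a B-scan as a node, assign low edge weights where the vertical dark-to-bright gradient is strong, and run Dijkstra's algorithm from the left edge to the right edge so that the minimum-cost path traces a layer boundary. The weighting below follows the style of the Chiu scheme, but the function name and its details are illustrative assumptions rather than the published code.

```python
import heapq
import numpy as np

def segment_layer(image):
    """Trace one layer boundary in a B-scan as the minimum-cost
    left-to-right path through the pixel graph (Dijkstra).
    Illustrative sketch; 'image' is a 2-d array with rows = depth."""
    rows, cols = image.shape

    # Vertical dark-to-bright gradient, normalized to [0, 1]: strong
    # transitions (e.g. vitreous to nerve-fibre layer) get high g.
    grad = np.zeros_like(image, dtype=float)
    grad[:-1, :] = image[1:, :] - image[:-1, :]
    g = (grad - grad.min()) / (grad.max() - grad.min() + 1e-9)

    # Dijkstra: allowed moves are right, right-up, right-down; edge
    # weight 2 - g(a) - g(b) + eps in the style of the Chiu weighting.
    eps = 1e-5
    dist = np.full((rows, cols), np.inf)
    prev = {}
    pq = [(0.0, r, 0) for r in range(rows)]  # every row of column 0 starts
    dist[:, 0] = 0.0
    heapq.heapify(pq)
    while pq:
        d, r, c = heapq.heappop(pq)
        if d > dist[r, c] or c == cols - 1:
            continue
        for dr in (-1, 0, 1):
            nr, nc = r + dr, c + 1
            if 0 <= nr < rows:
                nd = d + 2.0 - g[r, c] - g[nr, nc] + eps
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, nr, nc))

    # Backtrack from the cheapest node in the last column.
    node = (int(np.argmin(dist[:, -1])), cols - 1)
    path = [node[0]]
    while node in prev:
        node = prev[node]
        path.append(node[0])
    return path[::-1]  # boundary row index for each column
```

Because each boundary is found independently per B-scan, this runs in a fraction of the time of the full 3-d graph cut, which is exactly why it is attractive as an initializer, but it carries no coherence between adjacent B-scans.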
Along with the groups at Iowa and Duke, the Pattern Recognition Lab at the University of Erlangen-Nuremberg has published some exciting results [Mayer 2010]. Starting from initial estimates of the layer positions based on raw 1-d intensity profiles, their algorithm iteratively minimizes an energy term that encodes smoothness and edge information. Especially encouraging is that their software is available for download, so interested researchers can evaluate its performance for themselves.
The true test of any such algorithm is clinical utility. Ultimately, therefore, one must look to the clinical journals to understand which algorithms are being used and how they are performing. There is tremendous interest in both the academic and commercial worlds in improving the robustness of the released commercial algorithms, with particular impetus now coming from the field of neuro-ophthalmology, and from the group at Johns Hopkins in particular. But, as Professor Drexler, a pioneer of OCT over the last 20 years, admitted in a recent webinar, the analysis algorithms have lagged behind the hardware development.