Kuan-Lun Tseng, Yen-Liang Lin, Winston Hsu and Chung-Yang (Ric) Huang
IEEE Computer Vision and Pattern Recognition (CVPR), July 2017

Deep learning models such as convolutional neural networks have been widely used in 3D biomedical segmentation and achieve state-of-the-art performance. However, most existing methods adopt a single modality or simply stack multiple modalities as different input channels. To better leverage the multiple modalities, we propose a deep encoder-decoder structure with cross-modality convolution layers to incorporate different modalities of MRI data. In addition, we exploit convolutional LSTM to model a sequence of 2D slices, and jointly learn the multi-modalities and convolutional LSTM in an end-to-end manner. To avoid converging to certain labels, we adopt a re-weighting scheme and two-phase training to handle the label imbalance. Experimental results on BRATS-2015 show that our method outperforms state-of-the-art biomedical segmentation approaches.
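The abstract does not spell out the exact re-weighting scheme for label imbalance. A common choice for this purpose is inverse-frequency class weighting, sketched below as an assumption, not the paper's confirmed method: classes that appear rarely in the training labels (e.g. tumor voxels versus background) receive proportionally larger loss weights.

```python
import numpy as np

def inverse_frequency_weights(labels, num_classes):
    """Inverse-frequency class weights (a common, assumed re-weighting scheme).

    labels: integer array of ground-truth class ids (any shape).
    Returns a weight per class, normalized to sum to 1, so that
    under-represented classes contribute more to a weighted loss.
    """
    # Count voxels per class; flatten so 2D/3D label volumes work too.
    counts = np.bincount(labels.ravel(), minlength=num_classes).astype(float)
    freq = counts / counts.sum()
    # Guard against classes absent from this batch.
    weights = 1.0 / np.maximum(freq, 1e-8)
    return weights / weights.sum()

# Toy example: class 0 (background) dominates, class 2 is rarest.
labels = np.array([0, 0, 0, 0, 0, 0, 0, 1, 1, 2])
w = inverse_frequency_weights(labels, num_classes=3)
```

In a segmentation framework these per-class weights would typically be passed to a weighted cross-entropy loss, so the optimizer is not pulled toward predicting only the dominant background class.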