
Excited to share our recent work, “Joint Sequence Learning and Cross-Modality Convolution for 3D Biomedical Segmentation,” accepted for CVPR 2017.

Convolutional neural networks have proven effective for image segmentation. However, most of them operate on a single modality or simply stack multiple modalities as different input channels. Observing that oncologists leverage multi-modal signals in tumor diagnosis, we propose a deep encoder-decoder network with cross-modality convolution layers that incorporates different modalities of MRI data for tumor segmentation. A toy sketch of this idea follows.
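
To make the idea concrete, here is a minimal sketch (in PyTorch, purely for illustration) of one way cross-modality fusion could be realized: each modality gets its own shallow encoder, and a 1x1 convolution mixes the stacked per-modality feature maps. The `CrossModalityFusion` class, all layer sizes, and the choice of a plain 1x1 convolution are assumptions for this sketch, not the exact layer from the paper.

```python
import torch
import torch.nn as nn

class CrossModalityFusion(nn.Module):
    """Toy cross-modality fusion: each MRI modality (e.g. T1, T1c, T2,
    FLAIR) passes through its own shallow encoder, then a 1x1 convolution
    learns how to weight and mix the modalities at every spatial location."""

    def __init__(self, n_modalities=4, feat_channels=32):
        super().__init__()
        # One small, unshared encoder per modality.
        self.encoders = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(1, feat_channels, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            )
            for _ in range(n_modalities)
        )
        # 1x1 conv across the concatenated per-modality feature maps.
        self.cross_modality = nn.Conv2d(
            n_modalities * feat_channels, feat_channels, kernel_size=1
        )

    def forward(self, x):
        # x: (batch, n_modalities, H, W) -- one input channel per modality.
        feats = [enc(x[:, i:i + 1]) for i, enc in enumerate(self.encoders)]
        return self.cross_modality(torch.cat(feats, dim=1))

fused = CrossModalityFusion()(torch.randn(2, 4, 240, 240))
print(fused.shape)  # torch.Size([2, 32, 240, 240])
```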

In addition, we exploit a convolutional LSTM (convLSTM) to model the sequence of 2D slices, jointly learning the multi-modal representations and both the sequential and spatial contexts in an end-to-end manner. To avoid converging to the dominant background label, we adopt a re-weighting scheme and two-phase training to handle the label imbalance; a sketch of both ingredients follows.
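
Below is a minimal sketch of the two ingredients named above: a convolutional LSTM cell run over consecutive slice feature maps, and a class-weighted cross-entropy loss standing in for the re-weighting scheme. The `ConvLSTMCell` class, all layer sizes, and the median-frequency class weights are illustrative assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal convLSTM cell: the usual LSTM gates, computed with
    convolutions so the hidden state keeps its 2D spatial layout while
    carrying context from slice to slice."""

    def __init__(self, in_ch, hid_ch, kernel_size=3):
        super().__init__()
        # One convolution emits all four gates (input, forget, cell, output).
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch,
                               kernel_size, padding=kernel_size // 2)

    def forward(self, x, state):
        h, c = state
        i, f, g, o = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)

# Sequential context: run the cell over five consecutive slice feature maps.
cell = ConvLSTMCell(in_ch=32, hid_ch=32)
state = (torch.zeros(1, 32, 60, 60), torch.zeros(1, 32, 60, 60))
for _ in range(5):
    out, state = cell(torch.randn(1, 32, 60, 60), state)

# Label imbalance: weight the per-class loss so the dominant background
# class does not swamp the tumor classes (frequencies below are made up).
class_freq = torch.tensor([0.92, 0.04, 0.02, 0.01, 0.01])
criterion = nn.CrossEntropyLoss(weight=class_freq.median() / class_freq)
# Two-phase training would first fit slices that contain tumor tissue,
# then fine-tune on all slices with the true label distribution.
```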

Experimental results on BRATS-2015, an open benchmark for tumor segmentation, show that our method achieves the best performance to date among deep-learning approaches. To the best of our knowledge, this is the first end-to-end network that jointly considers multiple modalities and the sequential context of slices. We believe the proposed framework can be extended to other applications with emerging multi-modal signals.

Observing that oncologists routinely cross-reference several kinds of medical images (MRI, CT, PET) to assess a tumor's condition and location, we have been working with a teaching hospital in central Taiwan to use deep networks to improve the efficiency and accuracy of tumor diagnosis. We are delighted to have reached this first milestone and to present the work at CVPR this year.

We designed the first deep convolutional network that simultaneously accounts for the different scan modalities (sources) and the sequential correlation between adjacent slices, and it achieves impressive results on brain tumors. To overcome the shortage of imaging data that is common in the medical domain, we also explored different training strategies for deep networks under small-data constraints.

This was our first attempt at 3D medical image segmentation, and the results exceeded our expectations. With the assistance of specialist physicians, we will continue to develop convolutional neural networks for tumors common in Taiwan, helping oncologists eliminate the "pain points" of the diagnostic process.