PengyiZhang / ACCL

Code for our paper ACCL: Adversarial constrained-CNN loss for weakly supervised medical image segmentation

Home Page: https://arxiv.org/abs/2005.00328


ACCL: Adversarial constrained-CNN loss for weakly supervised medical image segmentation

Abstract

Weakly supervised semantic segmentation is attracting significant attention in medical image analysis because only low-cost weak annotations, e.g., point, scribble, or box annotations, are required to train CNNs. The constrained-CNN loss is one popular approach to weakly supervised segmentation: it imposes inequality constraints derived from prior knowledge, e.g., the size and shape of the object of interest, on the network's outputs. However, describing prior knowledge such as an irregular shape or an unsmooth boundary in a programming language is not always easy. In this paper, we propose the adversarial constrained-CNN loss (ACCL), a new paradigm of constrained-CNN loss methods for weakly supervised medical image segmentation. In this paradigm, prior knowledge, e.g., the size and shape of the object of interest, is encoded and depicted by reference masks, which are then used to impose constraints on the segmentation outputs through adversarial learning. Unlike pseudo-label methods for weakly supervised segmentation, these reference masks are used to train a discriminator rather than the segmentation network, and therefore do not need to be paired with specific images. The new paradigm not only greatly simplifies imposing prior knowledge on the network's outputs, but also provides stronger, higher-order constraints, i.e., distribution approximation, through adversarial learning. Extensive experiments involving different medical modalities, different anatomical structures, different topologies of the object of interest, different levels of prior knowledge, and weakly supervised annotations with different annotation ratios are conducted to evaluate ACCL. It consistently achieves better segmentation results than the size constrained-CNN loss method, some of which are close to fully supervised results, verifying the effectiveness and generalization of our method. Specifically, we report an average Dice score of 75.4% with an average annotation ratio of 0.65%, surpassing the prior art, i.e., the size constrained-CNN loss method, by a large margin of 11.4%.
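
The sketch below illustrates the training idea described in the abstract, not the exact implementation in this repository: a segmentation network is supervised with a partial cross-entropy loss on the weak (e.g., scribble) annotations, while a discriminator trained on unpaired reference masks provides the adversarial constraint on the predicted masks. Network definitions, tensor shapes, variable names, and the weighting factor `lam` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def partial_cross_entropy(logits, scribble, ignore_index=255):
    # Supervise only the annotated (scribbled) pixels; unlabeled pixels are ignored.
    return F.cross_entropy(logits, scribble, ignore_index=ignore_index)

def accl_train_step(seg_net, disc, image, scribble, ref_mask,
                    opt_seg, opt_disc, lam=0.1):
    """One training step of the ACCL-style adversarial constrained loss (sketch).

    `ref_mask` is an unpaired reference mask encoding the prior knowledge,
    assumed here to be one-hot with the same shape as the softmax prediction.
    """
    # --- Update the discriminator: reference masks are "real", predictions are "fake".
    with torch.no_grad():
        pred = torch.softmax(seg_net(image), dim=1)
    real_score = disc(ref_mask)
    fake_score = disc(pred)
    d_loss = (F.binary_cross_entropy_with_logits(real_score, torch.ones_like(real_score))
              + F.binary_cross_entropy_with_logits(fake_score, torch.zeros_like(fake_score)))
    opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()

    # --- Update the segmentation network: partial CE on weak labels plus the
    # adversarial term that pushes predictions toward the reference-mask distribution.
    logits = seg_net(image)
    pred = torch.softmax(logits, dim=1)
    adv_score = disc(pred)
    g_loss = (partial_cross_entropy(logits, scribble)
              + lam * F.binary_cross_entropy_with_logits(adv_score, torch.ones_like(adv_score)))
    opt_seg.zero_grad(); g_loss.backward(); opt_seg.step()
    return d_loss.item(), g_loss.item()
```

Because the discriminator only sees masks, the reference masks never need to be paired with the training images, which is the key difference from pseudo-label approaches.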
