huiqu18 / FullNet-varCE

Pytorch implementation of FullNet-varCE

Questions about this code

MegumiOuO opened this issue · comments

commented

I can't get the scores reported in your paper. The results I obtain for FullNet+loss_varCE, FullNet, and FCN-pooling are noticeably worse than yours.
I want to know what is wrong. First, I don't use the weight map, because when I use the weight map generated with the method described in the U-Net paper, the result is even worse. Second, I calculate the mean and std using all of the training and test images, and I'm not sure how much this influences the result. Third, I convert the ground truth into an image containing only 0 and 1, which I think is necessary according to your code.
I didn't apply any other operations. Could you tell me whether other operations are needed, or what is wrong with the ones mentioned above?

(1) The code for computing weight maps has been released. I use different parameters from those in U-Net.

(2) I calculated those values on the training images only; I don't think it will affect the result much if you compute them over both the training and test images (a rough sketch follows point (4) below). Note, however, that color normalization was performed on the images of the nuclei segmentation dataset, so the mean and std will be quite different from those of images without color normalization.

(3) There are three values in the ground-truth labels: 0 for background, 1 for the inside area of nuclei, and 2 for nuclear contours. I treat it as a three-class segmentation task, which helps separate touching nuclei.

(4) Color normalization on the Multi-Organ dataset is needed. I also uniformly sampled 25 image patches of 250x250 from each 1000x1000 image.
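
A minimal sketch of the preprocessing in points (2) and (4), assuming a hypothetical data/train/images layout and that mean_std.npy simply stores the per-channel mean and std (row 0 and row 1); the repo's actual script may differ:

import glob

import numpy as np
from PIL import Image

patch_size, grid = 250, 5
train_paths = sorted(glob.glob('data/train/images/*.png'))  # hypothetical layout

pixel_sum = np.zeros(3)
pixel_sq_sum = np.zeros(3)
pixel_count = 0

for path in train_paths:
    img = np.array(Image.open(path).convert('RGB'), dtype=np.float64) / 255.0
    h, w = img.shape[:2]

    # point (4): uniform 5x5 grid of 250x250 patches from each 1000x1000 image
    ys = np.linspace(0, h - patch_size, grid).astype(int)
    xs = np.linspace(0, w - patch_size, grid).astype(int)
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            patch = (img[y:y + patch_size, x:x + patch_size] * 255).astype(np.uint8)
            # the output directory is assumed to exist
            Image.fromarray(patch).save(path.replace('images', 'patches').replace('.png', '_%d%d.png' % (i, j)))

    # point (2): accumulate per-channel statistics on training images only
    pixel_sum += img.reshape(-1, 3).sum(axis=0)
    pixel_sq_sum += (img.reshape(-1, 3) ** 2).sum(axis=0)
    pixel_count += h * w

mean = pixel_sum / pixel_count
std = np.sqrt(pixel_sq_sum / pixel_count - mean ** 2)
np.save('mean_std.npy', np.stack([mean, std]))  # assumed layout: row 0 = mean, row 1 = std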

commented

Thanks for your reply, I will try it later.
There is another question: when I use this code for gland segmentation, should I also use three values in the ground-truth labels (0: background, 1: inside area of gland, 2: gland contours)? And if I use three values, should I dilate the contours, since gland contours are much thicker?

Yes, I use three classes in gland segmentation as well. The contours are re-computed in the LabelEncoding transform after the series of data augmentations (otherwise they would be distorted):

import numpy as np
from PIL import Image
from skimage import morphology


class LabelEncoding(object):
    """
    Encode the label: the boundary class is computed individually.
    """
    def __init__(self, radius=1):
        self.radius = radius

    def __call__(self, imgs):
        out_imgs = list(imgs)
        label = imgs[-1]
        if not isinstance(label, np.ndarray):
            label = np.array(label)

        # ternary label, one channel (0: background, 1: inside, 2: boundary)
        new_label = np.zeros((label.shape[0], label.shape[1]), dtype=np.uint8)
        new_label[label[:, :, 0] > 255 * 0.5] = 1  # inside
        # boundary = dilated inside mask minus eroded inside mask
        boun = morphology.dilation(new_label) & (~morphology.erosion(new_label, morphology.disk(self.radius)))
        new_label[boun > 0] = 2  # boundary

        label = Image.fromarray(new_label.astype(np.uint8))
        out_imgs[-1] = label
        return tuple(out_imgs)

In gland segmentation the instances are larger than in nuclei segmentation, so I set the contour thickness to three pixels. I am not sure whether thicker contours would help; you may try that if interested. Note that the dilation radius in post-processing should be larger if you use thicker contours in training.
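
For reference, a hypothetical way to use the transform above with a thicker boundary for glands; whether radius=2 or 3 yields exactly a three-pixel contour depends on the dilation/erosion footprints, so treat the value as something to verify:

from PIL import Image

encode_nuclei = LabelEncoding(radius=1)  # thinner boundary for nuclei
encode_glands = LabelEncoding(radius=2)  # assumption: larger radius for ~3-pixel gland contours

img = Image.open('patch.png').convert('RGB')                  # hypothetical image patch
binary_label = Image.open('patch_label.png').convert('RGB')   # binary inside/background mask
img, ternary_label = encode_glands((img, binary_label))       # 0/1/2 label, contours re-computed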

commented

Sorry for bothering you again. When I apply this to gland segmentation, should I sample image patches from each image, as you did on the Multi-Organ dataset?

I didn't sample patches for gland segmentation. The images were scaled to 238 pixels (short edge) for training and 208 for validation and test. That is to say, I used roughly half the original resolution for gland segmentation. It is a compromise between memory consumption and the receptive field.

If one can come up with a more sophisticated strategy to handle these issues, it may be better to use the original resolution.
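
A minimal sketch of that short-edge scaling, assuming torchvision-style transforms (Resize with a single integer scales the shorter edge, and a reasonably recent torchvision); the repo's actual transforms may differ:

from torchvision import transforms

train_resize = transforms.Resize(238)  # shorter edge -> 238 px for training
eval_resize = transforms.Resize(208)   # shorter edge -> 208 px for val/test

# labels should use nearest-neighbour interpolation so no new class values appear
label_resize = transforms.Resize(238, interpolation=transforms.InterpolationMode.NEAREST)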

Sorry for bothering you. To obtain the weight map, do I need to get the binary map first?

Sorry to bother you. How can I generate the file "mean_std.npy"?

@Acmenwangtuo You need the instance label in which each nucleus/gland is represented by a unique integer.
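
For context, a U-Net-style border weight can be computed from such an instance label roughly as below; this is a simplified reconstruction with placeholder w0/sigma values, not the weight-map code or parameters released with this repo:

import numpy as np
from scipy.ndimage import distance_transform_edt

def unet_style_weight_map(instance_label, w0=10.0, sigma=5.0):
    """instance_label: 2-D array, 0 = background, 1..N = instances."""
    ids = np.unique(instance_label)
    ids = ids[ids > 0]
    h, w = instance_label.shape
    if len(ids) < 2:
        return np.ones((h, w), dtype=np.float32)
    # distance from every pixel to each instance
    dists = np.stack([distance_transform_edt(instance_label != i) for i in ids], axis=0)
    dists.sort(axis=0)
    d1, d2 = dists[0], dists[1]  # distances to the two nearest instances
    wmap = 1.0 + w0 * np.exp(-((d1 + d2) ** 2) / (2.0 * sigma ** 2))
    wmap[instance_label > 0] = 1.0  # emphasize only the gaps between touching instances
    return wmap.astype(np.float32)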

How can I get the instance label?

Is it a one-channel image?

You can generate the instance labels from the annotation files and the code provided by the dataset's authors: https://nucleisegmentationbenchmark.weebly.com/.

Yes, they are one-channel images.

Thanks, but I still can't find the code that generates the instance labels.

@Acmenwangtuo Here's the link for code: https://drive.google.com/file/d/0ByERBiBsEbuTRkFpeHpmUENPRjQ/view.
You need to revise Codes/data_prep/create_maps.m to extract instance labels from xml annotation files.
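
If a Python route is preferred over the MATLAB script, a rough sketch is below. It assumes the Aperio-style layout of the annotation XML (Annotation -> Regions -> Region -> Vertices -> Vertex with X/Y attributes); check it against the actual files and create_maps.m before relying on it:

import numpy as np
import xml.etree.ElementTree as ET
from skimage.draw import polygon

def xml_to_instance_label(xml_path, height=1000, width=1000):
    """Rasterize each annotated nucleus polygon with its own integer id."""
    label = np.zeros((height, width), dtype=np.uint16)  # uint16 allows more than 255 nuclei
    inst_id = 0
    for region in ET.parse(xml_path).getroot().iter('Region'):
        xs = [float(v.attrib['X']) for v in region.iter('Vertex')]
        ys = [float(v.attrib['Y']) for v in region.iter('Vertex')]
        if len(xs) < 3:
            continue  # skip degenerate annotations
        inst_id += 1
        rr, cc = polygon(np.array(ys), np.array(xs), shape=(height, width))
        label[rr, cc] = inst_id
    return label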

Thanks for your code. Now I have another question: you said the instance label is a one-channel image in which each instance has a unique integer. Does that mean one instance label can contain at most 255 instances?

It doesn't need to be an integer; the instance labels in the gland dataset are floats.
I also want to ask why testB of the 2016 Warwick QU dataset has wrong labels: I find that testB 3, 4, 11, 18, and 19 only contain one unique label.

@bluefirexbw Actually you can use any numbers as long as the pixels of the same instance share a unique number. But we usually use integers, which are more convenient for reading and writing. For the other question, please refer to the answer in issue 6.
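
As a small illustration of that point, any instance map (including a float-valued one) can be relabeled to consecutive integers, assuming 0 is the background value and the smallest value present:

import numpy as np

inst = np.array([[0.0, 0.0, 3.7],
                 [0.0, 3.7, 9.2]])               # toy float-valued instance map
ids, flat = np.unique(inst, return_inverse=True)  # sorted unique values -> indices 0..N
relabeled = flat.reshape(inst.shape).astype(np.uint16)
# background (0.0) is the smallest value, so it stays mapped to 0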

@Acmenwangtuo Instead of uint8, you should use the uint16 data type, which allows at most 65536 distinct values (65535 instances plus background).
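
A minimal illustration of the dtype issue; tifffile is just one convenient option for writing 16-bit label images and is an assumption, not necessarily what this repo uses:

import numpy as np
import tifffile

label = np.zeros((1000, 1000), dtype=np.uint16)
label[10:20, 10:20] = 300                       # ids above 255 are preserved in uint16
assert int(label.max()) == 300                  # a uint8 array would have wrapped around
tifffile.imwrite('instance_label.tiff', label)  # lossless 16-bit output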