cwmok / Conditional_LapIRN

Conditional Deformable Image Registration with Convolutional Neural Network

I got bad results with my datasets

hpbzxxxx opened this issue · comments

Thank you for your code.
I tried to train my own model on my datasets, but I got bad results and I don't know how to fix them.
[screenshot of the registration result, 2021-11-25]
Then I tried to figure out what's wrong by inspecting the outputs of the lv1 and lv2 networks.
The lv1 output seems fine:
[screenshot: lv1 output]
But the lv2 output seems shifted to one side:
[screenshot: lv2 output]
Can you help me with that?
Thank you

Hi @hpbzxxxx,

This phenomenon usually happens when the background intensity of your input image is not equal to zero or the input images are not normalized within [0, 1].
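A quick numpy sketch for verifying both conditions on a loaded volume; the corner-voxel background check and the helper name are my own assumptions, not code from this repo:

```python
import numpy as np

def check_and_normalize(img):
    """Min-max normalize an image volume to [0, 1].

    Also reports whether the background (sampled here from a corner
    voxel, assuming the corner is background) is zero -- a non-zero
    background is a common cause of degraded registration results.
    Hypothetical helper, not part of Conditional_LapIRN.
    """
    img = img.astype(np.float32)
    background = img[0, 0, 0]  # corner voxel assumed to be background
    if background != 0:
        print(f"warning: background intensity is {background}, not 0")
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-8)
```

After this step, every input volume should have min 0 and max (approximately) 1 before being fed to the network.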

If you performed resizing/interpolation with the sklearn package, please make sure the background intensity of the input images is equal to zero. Otherwise, you may want to switch to the masked NCC similarity function when measuring the dissimilarity between images during training.
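For reference, the masked NCC idea can be sketched as a correlation computed over foreground voxels only. This is a simplified global (non-windowed) numpy illustration with a hypothetical function name; the repo's actual similarity loss is a local windowed NCC in PyTorch:

```python
import numpy as np

def masked_ncc(fixed, moving, mask):
    """Normalized cross-correlation restricted to foreground voxels.

    Only voxels where mask > 0 contribute, so a non-zero background
    cannot dominate the similarity measure. Returns a value in
    [-1, 1]; 1 means perfect positive correlation.
    """
    f = fixed[mask > 0].astype(np.float64)
    m = moving[mask > 0].astype(np.float64)
    f = f - f.mean()
    m = m - m.mean()
    denom = np.sqrt((f * f).sum() * (m * m).sum()) + 1e-8
    return (f * m).sum() / denom
```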

Thank you for answering.
The background intensity of the images is 0, and I did normalize them to [0, 1] using the function in your code.
Is there any other problem that could cause my issue?
Besides, the loss curves seem strange too.
[screenshot: loss curves]
From top to bottom: total_loss, NCC_loss, Jet_loss, reg_loss.

Could you share a few samples of the training dataset and your training code with me?

I used OASIS-2. I ran FreeSurfer -autorecon1 and got brainmask.mgz, then normalized and cropped the volumes to 160×192×144.
OAS2_0002_MR1.nii.gz
OAS2_0009_MR1.nii.gz
OAS2_0010_MR1.nii.gz
OAS2_0012_MR1.nii.gz

The training code was copied from yours; I only changed the data path and sorted '*.nii.gz'.

Hi @hpbzxxxx ,
Your preprocessed data looks fine to me. I will try to reproduce your error by training a new model with your preprocessed data, and I will get back to you very soon.

If possible, could you please provide your exact modified Train_cLapIRN.py file, so that I can minimize the discrepancy between your training code and mine?

Hi @hpbzxxxx,

I have tried your preprocessed dataset with my method. Unfortunately, I got the same results as you did, and I am not able to locate the bug.

Yet, I want to share the debugging process with you.
Potential problem: A PyTorch version issue, or the instance normalization making training unstable.
Result: Tested with both the latest and an older PyTorch, and removed the instance normalization. Yet, the problem persisted.

Potential problem: The contrast of your data is significantly lower than that of my OASIS training dataset.
Result: Improved the contrast with a windowing technique, i.e., np.clip(img, 0, 0.65). Yet, the problem persisted.
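The windowing step above can be sketched as clipping followed by rescaling back to [0, 1]; the rescaling step is my own assumption, to keep the inputs normalized after clipping:

```python
import numpy as np

def window_intensity(img, lo=0.0, hi=0.65):
    """Clip intensities to [lo, hi] and rescale the result to [0, 1].

    Mirrors the np.clip(img, 0, 0.65) windowing mentioned above:
    everything above `hi` saturates to 1, which stretches the
    remaining intensity range and increases contrast.
    """
    clipped = np.clip(img, lo, hi)
    return (clipped - lo) / (hi - lo)
```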

Potential problem: Your training dataset is too small, i.e., 28 image scans.
Result: Sampled 28 image scans from my OASIS training dataset. Training worked, which implies our method can train on a small dataset.

Potential problem: The conditional module is somehow incompatible with your preprocessed dataset.
Result: Trained LapIRN without the conditional modules on your preprocessed dataset. The model still collapsed at the first stage.

Conclusion: Something is wrong with either your preprocessed dataset or my code, but I cannot tell exactly what the problem is.

Suggestions:

  1. Try another image registration method, such as VoxelMorph, to see if the problem persists. If it does, revisit the preprocessing pipeline of your training data. If another deep learning-based method solves the problem, try downsampling the data instead of cropping it when using our method, because gradients at the image boundary are not well defined in deep learning-based methods.
  2. Use the preprocessed OASIS dataset provided by Adrian Dalca in link.
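The downsampling in suggestion 1 would usually be done by trilinear interpolation (e.g., scipy.ndimage.zoom); as a dependency-free sketch, here is a block-averaging version in plain numpy. The helper name is hypothetical, and it assumes each dimension is divisible by the factor:

```python
import numpy as np

def downsample_mean(img, factor=2):
    """Downsample a 3-D volume by block averaging instead of cropping.

    Each output voxel is the mean of a factor^3 block of input voxels,
    so the full field of view is preserved at a lower resolution.
    """
    d, h, w = (s // factor for s in img.shape)
    blocks = img[:d * factor, :h * factor, :w * factor].reshape(
        d, factor, h, factor, w, factor)
    return blocks.mean(axis=(1, 3, 5))
```

For example, a 160×192×144 volume downsampled with factor=2 becomes 80×96×72 while keeping the whole brain in view.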

Would you mind letting me know if you make any progress? I apologize that I cannot locate the error.

I'm truly grateful for your help!
I'm now training LapIRN with these data to see if it works. If it does not, I will try the data in link next.
As for VoxelMorph, I tried it before and it did work. The difference between these data and the data used for VoxelMorph is that I did not normalize the VoxelMorph data and trained at another size (160×192×224). I will try downsampling instead of cropping as you suggested.
I will let you know if I make any progress.
And thank you again!

Hi @hpbzxxxx,

I have updated the code and added an example using the preprocessed OASIS dataset without cropping/resize. You may want to check out the section "(Example) Training on the preprocessed OASIS dataset without cropping" in the "README.md" file.