abrilcf / mednerf


Unideal results

yunsuper123 opened this issue · comments

Hello,

I have recently been working through your code as learning material.
However, I haven't been able to reproduce the results shown in Figure 3 of your paper.
Could you please help me identify possible issues?

I have attached all 72 images from different angles below, for "Optimizing G only" and "Optimizing both G and z", respectively.
I believe the result corresponding to the given x-ray (same angle, the top-left image) is quite good, while there is a noticeable discrepancy in the remaining 71 images.

(Attached images: generated_4999_all, generated_4999_all (1))

I haven't adjusted any parameters, and I used the same knee dataset. I also used "01_xray0000" to generate the images at other angles (the "model_best.pt" I used is exactly the one from the 100,000th iteration).

Thank you very much for your help and time. I believe any insights from you would be greatly helpful, and I truly appreciate it!

Hi there,

Overall, the generated images for camera poses similar to those found in the training data look close to what we obtained with the "optimizing G only" script. You can continue optimizing by increasing the PSNR threshold, and you should get better results.

On the other hand, this blurriness is a common outcome in generative models, especially when they have limited capacity or training data. In our case, the dataset we used for training was relatively small compared to other datasets, which can contribute to this uncertainty.

I've included links to our models' weights in the readme. I hope this helps clarify the results you're observing.
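For anyone unsure what the PSNR threshold refers to here: it acts as the stopping criterion for the per-image optimization. The sketch below is illustrative only, not the repo's actual code - the `render` and `step` callables are hypothetical stand-ins for rendering the current estimate and taking one optimization step. It keeps optimizing until the rendered image's PSNR against the reference x-ray clears the threshold, or the iteration budget runs out.

```python
import numpy as np

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio between two images with values in [0, max_val]."""
    mse = np.mean((pred - target) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

def optimize_until(render, step, target, threshold=50.0, max_iters=5000):
    """Run optimization steps until the rendered image clears the PSNR threshold."""
    for i in range(max_iters):
        pred = render()
        if psnr(pred, target) >= threshold:
            return i  # converged before exhausting the budget
        step()
    return max_iters
```

Under this reading, raising the threshold or the iteration count simply lets the optimization run longer before stopping, which is why a higher threshold can sharpen the results.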

Thank you so much for your response and assistance!
I used your model's weights to generate images at other angles, and they look really great.
My results were obtained by setting the PSNR threshold to 50 and running 5000 iterations.
Could you please give me some advice on how to tune these parameters?

On the other hand, in your view, how should I improve the results obtained from "Optimizing both G and z"?
It seems the model is still struggling with 3D consistency due to the lack of latent structure.
Thank you again for your help, I greatly appreciate it!
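As a rough sketch of what "optimizing both G and z" means in principle - with hypothetical names and a plain MSE loss, not the repo's actual objective or model - one jointly updates the generator's weights and the latent code against a single target image:

```python
import torch

def fit_g_and_z(G, target, z_dim, iters=5000, lr=5e-4):
    """Jointly optimize the generator's weights and a latent code z
    so that G(z) reconstructs a single target image."""
    # z is an image-specific free variable, optimized alongside G's weights
    z = torch.randn(1, z_dim, requires_grad=True)
    opt = torch.optim.Adam(list(G.parameters()) + [z], lr=lr)
    loss = None
    for _ in range(iters):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(G(z), target)
        loss.backward()
        opt.step()
    return z.detach(), float(loss)
```

One intuition for the blurriness at unseen angles: because the shared weights of G are being pulled toward one specific image, poses far from the training distribution can drift, so the fit at the given angle does not guarantee 3D consistency elsewhere.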

Hi,
I used your chest model's weights to render other images through the "optimizing G only" file, but I ran into some difficulties I wanted to discuss with you.
During the initial iterations of my experiment, the results unexpectedly look like knee samples rather than chest images.
I wonder if there might have been a mix-up with the weights, and perhaps another knee model was uploaded.
Could you please take a moment to verify that the uploaded weights correspond to the chest model as intended?
Thank you very much for your time and attention to this matter!

@yunsuper123 Hello, I would like to ask how many pictures are needed to render other body parts with the rendering script render_xray_G.py. I am looking forward to your answer. If it is convenient for you, could we get in touch? My email address is 2061371579@qq.com

I think both of the links have the same model's weights - the knee model. I loaded both models, and they have the same load_dict['fid_best']=75.32657579762895.

How do you get load_dict['fid_best']=75.32657579762895? I don't know how to obtain it; I hope you can show me, thank you.


For example, in render_xray_G.py you need to load the checkpoint:

    # Load checkpoint
    model_file = args.model
    print('load %s' % os.path.join(checkpoint_dir, model_file))
    load_dict = checkpoint_io.load(model_file)

and then you just print this value: print(load_dict['fid_best'])
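If you only want to inspect a downloaded checkpoint file without going through the repo's CheckpointIO wrapper, a plain torch.load sketch should also work (this assumes the file is an ordinary PyTorch checkpoint dict; 'fid_best' is the key mentioned above):

```python
import torch

def inspect_checkpoint(path, key="fid_best"):
    """Load a checkpoint on CPU and return one stored value, or None if absent."""
    # map_location avoids needing a GPU just to read metadata
    load_dict = torch.load(path, map_location="cpu")
    return load_dict.get(key)
```

Comparing this value across the two downloaded files is a quick way to tell whether they are really different models.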


Thank you very much for your answer. I also have a problem: in my training of both mednerf and graf, the kid_best I get always shows mednerf is better than graf. I want to know whether you have tried training the code provided by the author. If you have tried it and got good results, could you tell me how?


I tried the code, but only in fid_best mode. And I am not sure why you would want graf to be better than mednerf... The point of the paper is that mednerf is supposed to be better than graf, isn't it?

Hi, I have updated the weights for the models. Indeed, the weights were mistakenly from the same model, but that's corrected now.

Thank you for sharing. I still have some questions. When training a knee model, for example, do we input 5 instances with a total of 360 pictures? I tried it out: I trained for 10,000 iterations on an RTX 4090D, which took over a dozen hours. However, the minimum FID during training was 94.69 and the minimum KID was 0.0757. I would like to know how you trained to achieve the lower FID reported in the paper. I used default.yaml directly; maybe there's something wrong with my training. I hope you can guide me.

Hi, you'll need to train it for longer. We trained both the chest and knee models for 100,000 iterations.

Thank you for your answer. Could you share the original dataset of DRR images generated from the CT scans? I couldn't find it via the link in the paper.

Sorry, actually I did use 100,000 iterations; I was wrong in the answer above. But the fid_best during training is still only 94.69, which seems much higher than what you report in your paper. I would like to know whether the yaml you used for training was default.yaml.