DIDSR / VICTRE

Virtual Imaging Clinical Trial for Regulatory Evaluation


Reconstruction-code crashes

yoniker opened this issue · comments

Hi!

Thank you so much for this very interesting project!

I'm having issues running the reconstruction code (the binary crashes) on the breast models you have provided, so I can't produce images.

In order for you to understand exactly what's going on, I've added a detailed description of the steps I've taken in the following gist:

Hi,

We are happy to hear of your interest in using our tools. My first suggestion would be for you to generate the flatfield image, which is used by the reconstruction code.

To generate the flatfield image using MC-GPU, you will have to move the phantom out of the detector boundary. In the MC-GPU_v1.5b_sample_mammo_and_DBT_simulation.in file, make the following change and run; this will give you the flatfield image used for correction.

999.0 0.0 0.0 # OFFSET OF THE VOXEL GEOMETRY (DEFAULT ORIGIN AT LOWER BACK CORNER) [cm]
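For intuition on what the reconstruction does with this image: the flat field is divided out of every projection to remove detector and beam non-uniformity. A minimal numpy sketch of that correction, assuming little-endian 32-bit float raw files (the format MC-GPU writes) and detector dimensions passed in by the caller:

```python
import numpy as np

def flatfield_correct(proj_path, flat_path, nx, ny):
    """Divide a raw projection by the flat-field image.

    Assumes both files hold nx*ny little-endian 32-bit floats,
    as in MC-GPU's raw projection output."""
    npix = nx * ny
    proj = np.fromfile(proj_path, dtype="<f4", count=npix).reshape(ny, nx)
    flat = np.fromfile(flat_path, dtype="<f4", count=npix).reshape(ny, nx)
    flat = np.where(flat > 0, flat, 1.0)  # guard against dead pixels
    return proj / flat
```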

Hope this helps!

Best,
Diksha Sharma

Thank you for your super-fast response!
The tools you have created may help save lives, so thank you for that.
I've tried your suggestion and it made some progress, but the reconstruction looks very different from the tomosynthesis images I've seen in hospitals. For example, here is how it looks on my end: https://www.youtube.com/watch?v=A6zAqupFgro
See this gist, which describes what I've done differently this time: https://gist.github.com/yoniker/e581ec4467a0e399d7b08775e792cafb
Please advise how to proceed and make the generated images more realistic, i.e. similar to the ones we see in hospitals (for instance https://www.youtube.com/watch?v=YLaIlLxTlVQ&feature=youtu.be&t=77)

Here is my suggestion: based on the YouTube images you provided, the problem is that you did not specify the reconstruction FOV (field of view) properly. You need to make the reconstruction FOV consistent with the object FOV according to the DBT geometry you simulated.

Thanks for your answer,
Following up on your suggestion, I redid the entire process from scratch and created a gist for your convenience; you can follow the exact steps I took right here:
https://gist.github.com/yoniker/80994c67ae94567e8f83e22ad9585cf6
The only difference between the flatfield raw images and the raw images containing the phantom is the offset from the origin as you have suggested previously, and yet the reconstructed tomo image looks like this: https://youtu.be/qLFHc1eqzHc

How do you suggest for us to get a more realistic looking reconstructed tomo volume?
What would you change in the steps I've taken?

From the images you showed, the reconstruction field of view was out of the object region. That's probably why the reconstructed images looked out of focus. To solve this problem, you can play with the parameter "offset_xyz" and others that define the volume positions to be reconstructed to make the images right. Hope this helps.
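To take some guesswork out of "playing with the parameters", it can help to compute where the phantom actually sits before touching the recon config. A back-of-the-envelope sketch; the function and argument names here are hypothetical illustrations, not actual recon-code parameters, so consult the recon configuration for the real offset_xyz semantics:

```python
def recon_fov_cm(phantom_dims_vox, phantom_voxel_cm, phantom_offset_cm):
    """Origin and extent (in cm) of a FOV that exactly encloses the phantom.

    phantom_dims_vox  : voxel counts of the phantom, e.g. (1740, 2415, 1140)
    phantom_voxel_cm  : phantom voxel size in cm, e.g. 0.0050
    phantom_offset_cm : the offset used in the MC-GPU .in file (made up here)
    """
    extent = tuple(n * phantom_voxel_cm for n in phantom_dims_vox)
    return phantom_offset_cm, extent

# Example with the scattered-phantom dimensions and a made-up offset:
origin, extent = recon_fov_cm((1740, 2415, 1140), 0.0050, (0.0, 3.0, 1.5))
# extent is about (8.7, 12.075, 5.7) cm; offset_xyz should place the
# reconstructed volume so that it covers this region.
```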

Thanks for your quick response,

I've followed the parameters' values you have provided within the input configuration file you have uploaded using your breast model, as well as your suggestions.

Each iteration (creating one tomo image) takes a while (a few hours), so playing with the parameters might take me a very long time; remember that I'm not as familiar with your software as you are.

I've imaged the scattered-density breast model you provided, using its corresponding input configuration file (MC-GPU_v1.5b_scattered_phantom_mammo+DBT.in), which contains the parameter values.
It would be great if you could tell me: what exact changes do I need to make, and to which parameters in that file, in order to get a proper output from the imaging system?

Hi,
It looks like you are combining all the DBT projections with the "cat" command. But note that the output files contain two separate images: primary+scatter (the "real" image) and primary-only. If you do not remove the primary-only images from the input to the reconstruction, it will use these repeated images instead of the actual projections at the required angles. It looks like this was not documented too well...
I uploaded the utility (extract_projections.c) that I use to extract the correct images and combine them into a single file before DBT recon (you will need to do the same for the flatfields). Here is the link and some explanations: VICTRE_MCGPU/example_simulations.
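For anyone who cannot compile the C utility, the same extraction can be sketched in a few lines of Python, under the assumption stated above: each MC-GPU raw file stores the primary+scatter image first, followed by the primary-only image, both as little-endian 32-bit floats (the dimensions are arguments here, not VICTRE defaults):

```python
import numpy as np

def extract_and_concat(raw_paths, out_path, nx, ny):
    """Keep only the first (primary+scatter) image from each MC-GPU raw
    projection file and append them into a single file for the recon code."""
    npix = nx * ny
    with open(out_path, "wb") as out:
        for path in sorted(raw_paths):
            data = np.fromfile(path, dtype="<f4")
            assert data.size >= 2 * npix, f"{path} is smaller than two images"
            data[:npix].tofile(out)  # drop the trailing primary-only image
```

The same pass has to be run over the flatfield files, as noted above.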

By the way, you can save yourself some hours in DBT reconstructions by editing the Makefile and changing the debugging option CFLAGS := -g -O0 -w to the optimized CFLAGS := -g -O3 -w

Thanks Andreu!

So I've used your utility as suggested to combine the raw image files, and now the reconstruction program outputs a set of black images as the different tomo slices (https://youtu.be/oIF1RRtEMtk). I've created a gist so that you can let me know what I should correct:
https://gist.github.com/yoniker/e3d5581bac386c6190b3ebe597dbab1a

It would be great if you could provide a single end-to-end example: starting from an existing breast model with (or without) a lesion, show how to produce a reasonable-looking tomo and/or mammogram image.
Up to the point of imaging, everything went smoothly-ish for me, but actually imaging a model and running the reconstruction stage is... let's say, not as user-friendly as it could be :)

Hi Yoni,

I followed the process you describe and I was able to get a valid reconstruction.
Is the output you got from the reconstruction code a completely black image (all zeros), or noise?
Did you open the output file as 64-bit real (double) values, Little-Endian byte order, size 1321x1024x57?
(I know it is confusing that the projections are in single precision but the recon in doubles; it doesn't really make sense).
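A quick way to double-check the byte interpretation is to load the volume in numpy. This sketch assumes the format described above (64-bit little-endian doubles) and that slices are the slowest-varying index; the recon code may order the axes differently, so the reshape is an assumption:

```python
import numpy as np

def load_recon(path, nx, ny, nz):
    """Read the FBP recon output: 64-bit little-endian doubles."""
    vol = np.fromfile(path, dtype="<f8", count=nx * ny * nz)
    return vol.reshape(nz, ny, nx)  # assumed slice-major ordering

# e.g. vol = load_recon("recon.raw", nx=1321, ny=1024, nz=57)
# vol.min() == vol.max() == 0.0 would mean something upstream failed
```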

I could share my simulated projections for the example if you have a way to receive a gigabyte of data.

Have a nice weekend!

  Andreu

Hi Andreu!

First of all, thank you so much for going over my previous efforts!
In the last couple of days I've tried to debug it myself (on different machines as well) but got the same result.
I've opened the output file as 64-bit real, little-endian, of size 1421x1024x57; ImageJ (and numpy) show all pixel values as 0.0.

I've decided to upload all of the output files (as in the gist, every output file went to the results directory under MC-GPU/example_simulations/results, so I uploaded the entire directory), since you are in a better position than me to determine the reason for the difference between my output files and yours:
https://drive.google.com/drive/folders/1PiXBgtALL9hrTQqcMJ_lPgRAlPLU7GJd

So let me know if I'm somehow misusing any of the tools. I did put in my best effort to use them correctly, since we might literally be able to save lives with these tools! (If you're interested, I can elaborate more in person.)

Hi, I am also getting images similar to Yoni's. I am able to get the same dimensions for the reconstructed volume as the phantom used in the simulation, but the images look out of focus.

[attached image: reconstructed slice]

Hi Harshit,

The image in your post looks ok for the first or last slice of a DBT recon. There are always massive cone-beam artifacts at the extremes. Could you send the image at the center of the volume?

From the image it looks like your volume has size 894x505x715. This does not seem to match any of our phantoms or reconstruction sizes. We reconstructed our phantoms using 0.085x0.085x1.0 mm voxels.
For example, for a scattered phantom the original size was 1740x2415x1140 voxels at 0.050 mm. The corresponding reconstructed volume had size: 1740*50/85 x 2415*50/85 x 1140*50/1000 = 1023.53 x 1420.59 x 57.00 ~ 1024x1421x57
It is possible also that the recon software inverts the meaning of X and Y.
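The size calculation above can be written as a small helper (same numbers, just automated):

```python
def recon_dims(phantom_dims_vox, phantom_voxel_mm=0.050,
               recon_voxel_mm=(0.085, 0.085, 1.0)):
    """Voxel counts of the reconstructed volume, given the phantom's
    voxel counts and the two voxel sizes (values from this thread)."""
    return tuple(round(n * phantom_voxel_mm / v)
                 for n, v in zip(phantom_dims_vox, recon_voxel_mm))

print(recon_dims((1740, 2415, 1140)))  # (1024, 1421, 57)
```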

Let me know if you can see a DBT volume or not. At least your image is not all zeros like yoniker's!

By the way, there is a small bug in the recon code. It has no effect in my computer but it might be problematic in other architectures.

In file FBP_DBTrecon.c, line 939, it says:
"fread(projdbt_un, sizeof(double), ns_old*nt_old*na*down*down, fid);"

The input projections are actually float, not double. Try correcting the code to:
"fread(projdbt_un, sizeof(float), ns_old*nt_old*na*down*down, fid);"

Let me know if this changes anything in your reconstructions.
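The reason this bug can pass unnoticed: fread with sizeof(double) reads twice as many bytes per element, so the buffer is silently filled with reinterpreted garbage rather than failing outright. A small numpy illustration of the misread:

```python
import numpy as np

proj32 = np.linspace(0.0, 1.0, 8, dtype="<f4")  # stand-in projection data
raw = proj32.tobytes()                          # what is on disk

wrong = np.frombuffer(raw, dtype="<f8")  # sizeof(double): half the elements
right = np.frombuffer(raw, dtype="<f4")  # sizeof(float): the real data

print(wrong.size, right.size)  # 4 8
```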

Dear Yoniker,

Please edit line 905 in your "FBP_DBTrecon.c" file to point to the actual DBT projections. You are using the same file name for the flat fields and the projections!

By the way, since it is never a good idea to normalize an image with a noisy image, I recommend running the fast flat field simulations at least 10 times, and inputting the average flat fields to the recon algorithm (this is actually what people do with real imaging systems).
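That averaging can be sketched in a few lines of numpy, assuming the flat fields are raw little-endian float32 files of identical size:

```python
import numpy as np

def average_flatfields(paths, nx, ny):
    """Pixel-wise mean of several independent flat-field runs,
    accumulated in float64 and returned as float32."""
    acc = np.zeros(nx * ny, dtype=np.float64)
    for path in paths:
        acc += np.fromfile(path, dtype="<f4", count=nx * ny)
    return (acc / len(paths)).astype("<f4").reshape(ny, nx)
```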

I hope the reconstruction works now!

Hey Andreu!

Thank you for your answer.
Sorry for the mistake - I got it right the first time and somehow got things mixed up over time...

The reconstruction certainly looks way better (here it is: https://youtu.be/6oGU5iQcqo4), and that's from running the flat-field simulation only once.

That being said, the reconstruction output still doesn't look like what I would expect from a tomo (an example of a real tomo can be viewed here: https://www.youtube.com/watch?v=YLaIlLxTlVQ&feature=youtu.be&t=77).

How can I make the output look more realistic?

Hi Andreu, thanks for the reply. The phantom size was such because I changed the u and v values. But I tried again with the default values and the dense phantom at 0.050 mm voxels. The phantom size was 1791x1010x1434. The corresponding volume size was (1791x0.0050/0.0085) x (1010x0.0050/0.0085) x (1434x0.0050/0.100) = 1057x595x72.

But the images are similar to Yoni's, and my question is also similar: can we get more realistic images?

The reconstruction video looks good to me.
Commercial DBT systems might post-process the reconstructed images to try to enhance some aspects of the images and make them look more appealing based on feedback from their customers (the same is true for the mammography simulations: our images are "for processing", not "for presentation" views). They might also attempt to reduce some of the artifacts. Nothing of this is done by our basic filtered back-projection algorithm (it is not clear that post-processing actually improves clinical performance anyway).

But you should definitely optimize the visualization gray scale. You could change the gray scale window/level value to set the contrast between adipose and glandular tissue to the level that you prefer. And you can also set to black anything outside the breast. Many artifacts are visible outside the breast and they are totally irrelevant for any diagnostic task.