mckib2 / pygrappa

Python implementations of GRAPPA-like algorithms.

Home Page: https://pygrappa.readthedocs.io/en/latest/

Precision on the kernel size

zaccharieramzi opened this issue · comments

Hi,

I wanted some clarification on the kernel_size argument of the grappa methods.
Let's say I undersample in the y direction with a rate of 5, meaning that only one line out of every 5 in y is sampled.

How should I set ky if I want to use only the 2 neighbouring sampled lines to fill in a non-sampled line? I thought it should be 5, but with this value I get weird results where not all of the undersampled k-space lines are filled.

[Figure: typical_grappa_fail]

In the figure, the first image is the GRAPPA k-space reconstruction, the second is the (retrospectively) undersampled k-space, and the third is the original ground-truth k-space.

I used cgrappa with a kernel size of (5, 5).

If you want I can try to give you a reproducible example with the data I am using (fastMRI).
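In the meantime, here is roughly the call I'm making. This is only a minimal sketch: the k-space array is a random stand-in for the real data, the ACS region is made up, and I'm assuming the usual pygrappa keyword names (kernel_size, coil_axis).

```python
import numpy as np
from pygrappa import cgrappa

# Stand-in multicoil k-space (nx, ny, ncoil); the real data comes from fastMRI
kspace = np.random.randn(256, 256, 15) + 1j * np.random.randn(256, 256, 15)

# Retrospective undersampling: keep only 1 line out of every 5 along y
mask = np.zeros(kspace.shape[1], dtype=bool)
mask[::5] = True
undersampled = kspace.copy()
undersampled[:, ~mask, :] = 0

# Fully sampled central region used as calibration (ACS) data
calib = kspace[:, 112:144, :].copy()

# GRAPPA reconstruction with a (5, 5) kernel, coils on the last axis
recon = cgrappa(undersampled, calib, kernel_size=(5, 5), coil_axis=-1)
```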

Hi @zaccharieramzi, please do send a reproducible example and I can try to troubleshoot it.

Also, if you haven't already, try using mdgrappa to see if you get the same results; cgrappa can be flaky sometimes.
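The swap should just be something like the following (a sketch in terms of the arrays from your snippet above; same keyword names assumed):

```python
from pygrappa import mdgrappa

# Same arguments as the cgrappa call; mdgrappa handles arbitrary
# N-dimensional sampling patterns and tends to be more robust
recon = mdgrappa(undersampled, calib, kernel_size=(5, 5), coil_axis=-1)
```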

Hi @mckib2,

I just sent you a Google Colaboratory notebook with a reproducible example (the data I used comes from fastMRI).
I did try mdgrappa instead of cgrappa, and I don't have the missing-line issue with it.

However, I am now running into a problem I also noticed when implementing my own GRAPPA: the lines filled in by GRAPPA appear to lack energy. I didn't put it in the Colab (I can if you want), but this basically results in folding artifacts.

I don't think this lack of energy is linked to the regularisation parameter, since I did a grid search over it to find the best value (in terms of SSIM/PSNR of the reconstructed image).

I can open another issue for this if you want, since it's a different problem.

Hi @zaccharieramzi, thanks for sending that. It's on my radar, but the soonest I'll probably be able to look at this is this weekend.

Do you have any suggestions/references on how to improve SNR?

Hi @mckib2,

Unfortunately not really. I have been trying to investigate the problem with my own implementation (customized for the problem I am working with) and it's really difficult to tell where this "lack of energy" comes from.

@zaccharieramzi I was able to get your example up and running and reproduce the attenuated interpolated voxels.

I believe there is a bug in the GRAPPA implementation: the interpolated voxels should be scaled by the effective undersampling rate. I'm working on a fix for this now. This should make the results of mdgrappa better, but cgrappa still has some issues and I would avoid using it for now.

@mckib2 Thank you for taking the time to reproduce the example.

I am wondering where you read about the scaling. Do you have a pointer to that?
For example, in this excellent tutorial I have been following, they don't talk about scaling by the undersampling rate.

So I experimented a bit today with my implementation, which shares this attenuation problem (some exploration showed the same thing with your implementation).

The first thing is that the scaling you were talking about didn't work that well.

The second thing I noticed is that it's probably due to over-regularisation of the least-squares solution for the kernel. This over-regularisation is the result of the grid search I did over the regularisation parameter, looking for the best PSNR/SSIM (added to the Colab for pygrappa).
When I use lamda=0, the attenuation disappears, but the end result (after RSS) is very noisy.
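For context, the grid search was essentially a loop like the one below. This is only a sketch: it reuses the kspace/undersampled/calib arrays from the earlier snippet, the metrics come from scikit-image, and the way PSNR and SSIM are combined into a single score is simplified here.

```python
import numpy as np
from pygrappa import mdgrappa
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def rss_image(kspace_coils):
    """Root-sum-of-squares magnitude image from multicoil k-space."""
    coil_imgs = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(
        kspace_coils, axes=(0, 1)), axes=(0, 1)), axes=(0, 1))
    return np.sqrt(np.sum(np.abs(coil_imgs)**2, axis=-1))

reference = rss_image(kspace)  # fully sampled ground truth

best_score, best_lamda = -np.inf, None
for lamda in [0, 1e-6, 1e-4, 1e-2, 1e-1]:
    recon = mdgrappa(undersampled, calib, kernel_size=(5, 5),
                     coil_axis=-1, lamda=lamda)
    img = rss_image(recon)
    psnr = peak_signal_noise_ratio(reference, img, data_range=reference.max())
    ssim = structural_similarity(reference, img, data_range=reference.max())
    score = psnr + ssim  # simplistic way of combining the two metrics
    if score > best_score:
        best_score, best_lamda = score, lamda

print('best lamda:', best_lamda)
```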

@zaccharieramzi I think you're right, scaling based on the undersampling factor is not the correct thing to do. It just so happened that it fixed that particular example with an extremely high regularization parameter value, but it started tripping all my other unit tests. It's good to know that over-regularization can be a problem.

I am wondering where you read about the scaling. Do you have a pointer to that?

I found an additional scaling factor that gadgetron uses here, but upon further inspection this scale factor appears to always be set to one. When I saw it, I checked the average ratio of interpolated voxels to measured voxels on the example you provided and found it to be about the undersampling factor, so I figured I had just dropped a factor somewhere. But that does not appear to be the case, as mentioned above. Upon further review, the current mdgrappa implementation seems to be correct.
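For what it's worth, the check was roughly the following (a sketch in terms of the undersampled/recon arrays from the example above; it assumes missing lines are exactly zero in the undersampled k-space):

```python
import numpy as np

# Which ky lines were actually measured (True = sampled)
measured = np.abs(undersampled).sum(axis=(0, 2)) > 0

# Average magnitude of measured vs. interpolated voxels in the reconstruction
measured_mag = np.abs(recon[:, measured, :]).mean()
interp_mag = np.abs(recon[:, ~measured, :]).mean()

# Ideally this ratio would be close to 1; on the example it came out
# to be roughly the undersampling factor
print('measured/interpolated magnitude ratio:', measured_mag / interp_mag)
```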

For example, in this excellent tutorial I have been following, they don't talk about scaling by the undersampling rate.

I used the FMRIB GRAPPA demo/guide to validate my Python implementation, so hopefully it matches fairly well.

When I use lamda=0, the attenuation disappears, but the end result (after RSS) is very noisy.

I am still looking into why this may be; I am seeing the same behavior with reasonable regularization values. When you run your data with the FMRIB MATLAB scripts does it give the same noisy result?

I am seeing the same behavior with reasonable regularization values

Do you mean the noisy results or the attenuation?

When you run your data with the FMRIB MATLAB scripts does it give the same noisy result?

I haven't checked; for some reason I am always reluctant to launch MATLAB, but I might have to. Do you know whether it's runnable in Octave?
However, today I am going to look at the Gadgetron results for a prospectively undersampled scan to see whether this attenuation effect is there too.

So what I ended up doing to check the implementations was to run a scan on a phantom and get the GRAPPA results from Siemens.
My implementation and yours reconstruct the raw data in the same way as the scanner. So the problem is not with the implementations, but with GRAPPA itself not being able to handle this particular coil combination/count at this acceleration factor.

I am going to close this since all my questions have been answered. Thanks for your help.

I'm glad to hear the implementations were not at fault! I was beginning to suspect it was a g-factor issue, and it sounds like that's what it was.

If you've made any improvements over pygrappa in your implementation, I'd be interested in knowing what they are and getting them integrated.

I could add you as a collaborator so you can take a look. I didn't make any major improvements, but I went a different way.

For example, I wanted to be able to separate the kernel estimation from the kernel application, in order to visualize/inspect the kernels. I also made sure to have a clearly defined extraction function, so that this part can be reused for other, non-linear approaches (like RAKI).
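Roughly, the interface is split along these lines (just an illustrative skeleton, not my actual code; all the names here are made up):

```python
def extract_sources_targets(kspace, mask, kernel_size):
    """Gather source/target pairs from the sampled lines; keeping this
    separate lets the same extraction be reused for non-linear methods
    such as RAKI."""
    ...

def estimate_kernel(sources, targets, lamda=0.01):
    """Regularised least-squares fit of the GRAPPA weights, so the
    resulting kernel can be inspected/visualised on its own."""
    ...

def apply_kernel(kspace, mask, kernel):
    """Fill in the missing k-space lines using the estimated weights."""
    ...
```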

However, my approach is tailored to the problem I was dealing with (a particular kind of undersampling scheme) and, in its current state, is not applicable to all situations. Moreover, it only allows for 2 neighbouring sampled lines in k-space.

By extraction function do you mean extracting sources and targets for the training? The pygrappa implementation vectorizes this as much as possible for efficiency, so I don't know if it makes sense to provide a separate function for individual source/target extraction in pygrappa.

If you're looking to get at the kernels, you can also retrieve them from mdgrappa:

```python
recon, weights = mdgrappa(..., ret_weights=True)
```

weights is a dict that maps unique sampling patterns to kernels. You can also pass these back into the function to reuse or avoid recomputing them.
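For example (a sketch in terms of the arrays from the earlier snippets; I'm assuming here that the keyword for passing the weights back in is weights):

```python
from pygrappa import mdgrappa

# First call: also return the fitted kernels, keyed by sampling pattern
recon, weights = mdgrappa(undersampled, calib, kernel_size=(5, 5),
                          coil_axis=-1, ret_weights=True)

# A later call on data with the same sampling pattern can reuse them
# instead of re-fitting (keyword name assumed)
recon2 = mdgrappa(undersampled, calib, kernel_size=(5, 5),
                  coil_axis=-1, weights=weights)
```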