mckib2 / pygrappa

Python implementations of GRAPPA-like algorithms.

Home Page: https://pygrappa.readthedocs.io/en/latest/

Keeping calibration weights for reuse?

dgallichan opened this issue

Thanks so much for sharing this code - it's just what I've been looking for as I'm trying to migrate my projects away from being tied to MATLAB.

For my particular application I have a whole series of 3D volumes that need 2D GRAPPA reconstruction using the same calibration pre-scan. Do you already have a way of handling this so that the weights don't need to be recalculated for each volume? I couldn't spot anything like that when scanning through the code.

Hi @dgallichan , thanks for reaching out and apologies for the late response.

There are two ways I can think of to get the effect you're looking for:

  1. Use res, weights = mdgrappa(..., ret_weights=True) to get the weights from a single slice/volume, then reuse them (instead of retraining) by subsequently calling mdgrappa(..., weights=weights).
  2. Use the train_kernels function directly to calculate weights. You'll need to provide some kspace and the windowed calibration weights (as done here). Then call mdgrappa(..., weights=weights) with those weights.

I had never intended train_kernels to be a public function (which is probably why A is passed in directly like it is instead of calib), so I would try option 1 first, and if that doesn't suit your needs, go to option 2.
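Roughly, option 1 would look something like the untested sketch below. The array shapes, calibration size, and undersampling pattern here are just placeholders; the point is only the ret_weights/weights round trip:

```python
import numpy as np
from pygrappa import mdgrappa

# Placeholder data: a series of undersampled 2D k-space slices
# (sx, sy, ncoils) that all share the same calibration pre-scan
sx, sy, ncoils = 128, 128, 8
calib = (np.random.randn(24, 24, ncoils)
         + 1j*np.random.randn(24, 24, ncoils))
volumes = []
for _ in range(3):
    ksp = (np.random.randn(sx, sy, ncoils)
           + 1j*np.random.randn(sx, sy, ncoils))
    ksp[:, 1::2, :] = 0  # R=2 undersampling along ky
    volumes.append(ksp)

# Train the GRAPPA weights once on the first slice/volume...
res0, weights = mdgrappa(volumes[0], calib, ret_weights=True)

# ...then reuse them for every other volume in the series
# (passing weights= skips retraining)
recons = [res0]
for ksp in volumes[1:]:
    recons.append(mdgrappa(ksp, calib, weights=weights))
```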

Please let me know how it goes, or if the above doesn't do what you had hoped.

Thanks very much for the response (and my turn to apologise for taking so long to actually try it out!). Method 1 does indeed appear to do what I wanted - thank you!

However, I seem to be finding that mdgrappa does not run as fast as grappa. Is that correct?

That's correct, it does not run as fast, but I find mdgrappa is more stable. I've been waiting for a lazy day to rewrite some of the core logic in C++ to give it a boost, but I haven't had the chance lately.

No big worries about the speed for now; I just wanted to check I wasn't doing something wrong. It's great that you're sharing the code at all!

If you do ever find the time to work on it, I've heard that big speed and memory improvements can also be made by combining GRAPPA with coil compression, so that the weights map directly onto a small number of virtual coils instead of needing to reconstruct all of them. I've also never found the time to experiment with it myself, though!
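In case it's useful, here's roughly what I mean, as an untested sketch of SVD-based (SCC-style) coil compression. The compress_coils helper and the data shapes are just illustrative, not anything in pygrappa's API; the idea is simply to learn a compression matrix from the calibration data and apply the same matrix to the undersampled k-space before reconstruction:

```python
import numpy as np
from pygrappa import mdgrappa

def compress_coils(kspace, calib, ncoils_out):
    # Hypothetical helper (not part of pygrappa): learn virtual coils
    # from the calibration data via an SVD of the (nsamples, ncoils)
    # calibration matrix, keeping the top ncoils_out components
    nc = calib.shape[-1]
    _, _, vh = np.linalg.svd(calib.reshape(-1, nc), full_matrices=False)
    P = vh.conj().T[:, :ncoils_out]  # (ncoils, ncoils_out)
    # Apply the same compression to both k-space and calibration data
    return kspace @ P, calib @ P

# Placeholder data: 32 physical coils compressed down to 8 virtual ones
sx, sy, nc = 128, 128, 32
kspace = np.random.randn(sx, sy, nc) + 1j*np.random.randn(sx, sy, nc)
kspace[:, 1::2, :] = 0  # R=2 undersampling
calib = np.random.randn(24, 24, nc) + 1j*np.random.randn(24, 24, nc)

kspace_cc, calib_cc = compress_coils(kspace, calib, ncoils_out=8)
recon_virtual = mdgrappa(kspace_cc, calib_cc)  # result is in virtual coils
```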

> I've heard that big speed and memory improvements can also be made by combining GRAPPA with coil compression

This was actually a huge frustration for me and why I wrote this code to begin with -- I needed the physical coil images (not virtual ones) specifically!

Glad you've found it useful!