DstRF

MEG/EEG analysis tools: direct estimation of TRFs over the source space

The magnetoencephalography (MEG) response to continuous auditory stimuli, such as speech, is commonly described using a linear filter, the auditory temporal response function (TRF). Though components of the sensor-level TRFs have been well characterized, the cortical distributions of the underlying neural responses are not well understood. In our recent work, we provide a unified framework for determining the TRFs of neural sources directly from the MEG data, by integrating the TRF and distributed forward source models into one and casting the joint estimation task as a Bayesian optimization problem. Though the resulting problem is non-convex, we propose efficient solutions that leverage recent advances in evidence maximization. For more details, please refer to the following resources:

  1. P. Das, C. Brodbeck, J. Z. Simon, B. Babadi, "Direct Cortical Localization of the MEG Auditory Temporal Response Function: a Non-Convex Optimization Approach," Proceedings of the 47th Annual Neuroscience Meeting (SfN 2018), Nov. 2-7, San Diego, CA.
  2. P. Das, C. Brodbeck, J. Z. Simon, B. Babadi, "Cortical Localization of the Auditory Temporal Response Function from MEG via Non-Convex Optimization," 2018 Asilomar Conference on Signals, Systems, and Computers, Oct. 28-31, Pacific Grove, CA (invited).

This repository contains a Python (3.6 and above) implementation of our direct TRF estimation algorithm.

Requirements:

Eelbrain (see its download and installation instructions)

How to use:

  1. Clone the repo and install using pip.

  2. Suppose we are interested in subject XXXX.

  3. Create the forward source model using MNE-Python, convert it to NDVar format, and save it as a pickled file under the fwdsol folder (see the data-preparation sketch after step 7):
    XXXX-vol-7-fwd.pickled

  4. Create a predictors folder containing the pickled stimulus variables for the different conditions, in NDVar format. Suppose there are two conditions; then the folder contains:
    stim_h.pickled
    stim_l.pickled

  5. Create a meg_XXXX folder containing the pickled MEG recordings in NDVar format. Suppose there are three repetitions for each condition; then this folder contains:
    meg_h0.pickled
    meg_h1.pickled
    meg_h2.pickled
    meg_l0.pickled
    meg_l1.pickled
    meg_l2.pickled
    Don't forget to put the empty-room recording emptyroom.pickled in the same folder.

  6. Change ROOTDIR in config.py to the folder containing all of these folders. You can also change the maximum number of iterations as needed, but the default values should do just fine!

  7. Then, from an IPython shell, run the following commands:

from dstrf import load_subject

subject_id = 'XXXX'
n_splits = 3  # number of data splits used for cross-validation
model, data = load_subject(subject_id, n_splits, normalize=None)
mu = 0.02  # regularization parameter; needs to be chosen by cross-validation
model.fit(data, mu, tol=1e-5, verbose=True)
trf = model.get_strf(data)

That should take about 10 minutes to produce the cortical TRF estimates.
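The data preparation in steps 3-5 can be scripted with MNE-Python and Eelbrain. Below is a minimal sketch: the file paths, the BEM and trans file names, and the random placeholder envelope are illustrative assumptions to be replaced with your own data.

import numpy as np
import mne
from eelbrain import NDVar, UTS, load, save

ROOTDIR = '/path/to/rootdir'    # same folder as ROOTDIR in config.py
SUBJECTS_DIR = '/path/to/mri'   # FreeSurfer subjects directory
subject = 'XXXX'

# Step 3: 7 mm volume source space and forward solution, saved as an NDVar
src = mne.setup_volume_source_space(subject, pos=7., bem=f'{subject}-bem-sol.fif',
                                    subjects_dir=SUBJECTS_DIR)
fwd = mne.make_forward_solution(f'meg_{subject}_h0-raw.fif',
                                trans=f'{subject}-trans.fif',
                                src=src, bem=f'{subject}-bem-sol.fif')
fwd_ndvar = load.fiff.forward_operator(fwd, src='vol-7', subjects_dir=SUBJECTS_DIR)
save.pickle(fwd_ndvar, f'{ROOTDIR}/fwdsol/{subject}-vol-7-fwd.pickled')

# Step 4: pickle each stimulus predictor as a 1-D NDVar over time
# (a random placeholder at 200 Hz here; use your real speech envelope)
envelope = NDVar(np.random.rand(60 * 200), (UTS(0, 1. / 200, 60 * 200),))
save.pickle(envelope, f'{ROOTDIR}/predictors/stim_h.pickled')

# Step 5: pickle each MEG recording (and the empty-room recording) as an NDVar
raw = mne.io.read_raw_fif(f'meg_{subject}_h0-raw.fif')
meg = load.fiff.raw_ndvar(raw)
save.pickle(meg, f'{ROOTDIR}/meg_{subject}/meg_h0.pickled')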

This is just a simple example of cortical TRF estimation. The package also contains many other functions, classes, etc., so you can build custom functions to suit your own workflow.
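For instance, choosing mu by cross-validation (step 7) can be scripted as a simple grid search. In the sketch below, cross_validation_error is a hypothetical helper standing in for whatever held-out prediction-error metric your workflow uses; only load_subject, model.fit, and model.get_strf come from the package:

from dstrf import load_subject

model, data = load_subject('XXXX', n_splits=3, normalize=None)

best_mu, best_err = None, float('inf')
for mu in (0.005, 0.01, 0.02, 0.05, 0.1):  # candidate regularization weights
    model.fit(data, mu, tol=1e-5, verbose=False)
    # hypothetical helper: plug in your own held-out error metric here
    err = cross_validation_error(model, data)
    if err < best_err:
        best_mu, best_err = mu, err

# refit with the selected regularization weight and extract the TRFs
model.fit(data, best_mu, tol=1e-5, verbose=True)
trf = model.get_strf(data)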

Results

We applied the algorithm to a subset of MEG data collected from 17 adults (aged 18-27 years) during an auditory task described in the papers. In short, participants listened to 1 min long segments from an audiobook recording of The Legend of Sleepy Hollow by Washington Irving, narrated by a male speaker. We localize the TRFs using a total of 6 min of data from each participant. MNE-Python 0.14 was used to pre-process the raw data: automatically detecting and discarding flat channels, removing extraneous artifacts, and band-pass filtering the data to 1-80 Hz. The six 1 min long data epochs were then down-sampled to 200 Hz. As the stimulus variable, we used the speech envelope, reflecting the momentary acoustic power, computed by averaging the auditory spectrogram representation (generated using a model of the auditory periphery) across frequency bands and sampled at 200 Hz. A volume source space for each subject was defined on a regular 3D grid with a resolution of 7 mm in each direction. The lead-field matrix was then computed by placing free-orientation virtual dipoles at the resulting 3322 grid points. The consistent components of our estimated 1 s long 3D TRFs across all 17 subjects look like the following:
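For illustration, an envelope predictor can be approximated without the auditory-periphery model by averaging a plain spectrogram across frequency bands at a 200 Hz frame rate; the sketch below is only a crude stand-in for the representation used in the papers, and the file name is a placeholder:

import numpy as np
from scipy import signal
from scipy.io import wavfile

fs, audio = wavfile.read('stimulus.wav')  # placeholder file name; mono assumed
audio = audio.astype(float)
hop = fs // 200                           # step size for a 200 Hz envelope
f, t, spec = signal.spectrogram(audio, fs, nperseg=2 * hop, noverlap=hop)
envelope = spec.mean(axis=0)              # average power across frequency bands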

Demo

Isn't that cool? Would you expect to see something like that from any other source localization method? If you think this method could work for your data, please feel free to use the code. You can reach me at proloy@umd.edu if you run into any issues with the code. And don't forget to go over the papers/posters before applying the algorithm.

Note that this is a development version and I will be adding more functionality over time, so feel free to ask for additional functionality, or to report anything that is broken.

Citation

This repo is open for anyone to use under the Apache license. But if you use this code for a publication, I would appreciate it if you cite the papers mentioned above.
