Code for analysis of all-optical neural data obtained in the Packer lab, Oxford. All figures were generated by code, using the notebooks in `popping-off/notebooks/Paper Figures/`. This is the code accompanying the following preprint: https://www.biorxiv.org/content/10.1101/2021.12.28.474343v1
- Clone or fork this repository.
- Clone the dependency [Vape](https://github.com/Packer-Lab/Vape).
- To install the correct Python packages, build a conda environment from the `pope.yml` file, using the terminal command `conda env create -f pope.yml`.
- Additionally, run `pip install google-api-python-client` and `pip install google-auth-oauthlib`.
- Add a profile to `popping-off/data_paths.json`, with links to your local paths. `base_path` should be the path that the `.pkl` data set is in; `Vape_path` should be the directory the Vape repository (also from the Packer-Lab GitHub) is in. The other two entries are only needed for pre-processing, not for data analysis.
- Install popoff by going to your local repo path `/popping-off/popoff/` and running `python setup.py develop`.
- Get started by opening the data-loading routine and short tutorial on the data structure, available in `popping-off/notebooks/Example notebook to load data.ipynb`.
- Figures from the bioRxiv preprint were generated by code, available in `popping-off/notebooks/Paper Figures/`.
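A profile in `data_paths.json` might look like the following sketch. The profile name and path values are placeholders, and the two pre-processing-only entries are omitted; keep whichever keys the repository's shipped `data_paths.json` already defines:

```json
{
    "my_profile": {
        "base_path": "/path/to/folder/containing/pkl/data",
        "Vape_path": "/path/to/Vape"
    }
}
```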
- To build `sessions.pkl` files, run `python Session.py` from the command line and enter a `flu_flavour` through the CLI. A new `.pkl` file will be built for each `flu_flavour`.
- Each `.pkl` contains a dictionary of `SessionLite` objects, with attributes including:
  - `behaviour_trials` (float64): 3D array of imaging data as defined by `flu_flavour` [n_cells x n_trials x n_frames].
  - `outcome` (str): what was the behavioural response to the trial?
  - `decision` (bool): did the animal lick or not?
  - `photostim` (int): 0 = no stim; 1 = test trial; 2 = easy trial.
  - `trial_subsets` (int): how many cells were stimulated on each trial?
  - `s1_bool` (bool): is the cell in S1?
  - `s2_bool` (bool): is the cell in S2?
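A minimal sketch of how these attributes can be combined with NumPy boolean masks, using synthetic arrays in place of a real `SessionLite` object (the attribute names follow the list above; the shapes and values here are illustrative only — see the example notebook for loading real data):

```python
import numpy as np

# Synthetic stand-ins for SessionLite attributes (shapes chosen for illustration)
n_cells, n_trials, n_frames = 6, 4, 10
behaviour_trials = np.random.rand(n_cells, n_trials, n_frames)  # imaging data
photostim = np.array([0, 1, 2, 1])               # 0 = no stim, 1 = test, 2 = easy
decision = np.array([False, True, True, False])  # did the animal lick?
s1_bool = np.array([True, True, False, False, True, False])  # is the cell in S1?

# Select activity of S1 cells on test trials where the animal licked
trial_mask = (photostim == 1) & decision
selected = behaviour_trials[s1_bool][:, trial_mask, :]
print(selected.shape)  # (3, 1, 10): 3 S1 cells, 1 matching trial, 10 frames
```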
This repo follows the directory structure recommended by https://drivendata.github.io/cookiecutter-data-science/ . Most of the code is wrapped in functions and classes, which are called in notebooks to plot results. In summary, there are four main folders:
- figures (saved figures (preferably pdf or svg))
- notebooks (Jupyter notebooks that run the functions)
- popoff (contains all relevant modules for the notebooks)
- scripts (code that is not used by the notebooks, but by other stages of the project, e.g. data pre-processing)
The VAPE repo is required for data pre-processing. Furthermore, some routines in `scripts/Session.py` were taken from VAPE. VAPE can be cloned here: https://github.com/neuromantic99/Vape
Some analyses use OASIS for spike deconvolution, which requires installing its package: clone https://github.com/j-friedrich/OASIS and follow its Python installation instructions. (This is not necessary for the majority of analyses.)