This repository is for a workshop organized by the ICARUS ML reconstruction group to train newcomers on our machine-learning-based data reconstruction chain. You can find the workshop agenda here.
For the workshop, we will use this Docker container.
Some notes below:
- The image is fairly large (multiple GBs), so please download it in advance if you are using it locally. The same image supports both the NVIDIA GPU and CPU running modes of our software.
- Supported GPUs include those with the NVIDIA Volta (e.g. V100), Turing (e.g. RTX 2080Ti), and Ampere (e.g. A100, RTX 3080) architectures. If you need an older architecture supported, such as Pascal, please contact Kazu.
- We assume basic knowledge about software containers, in particular Docker. If you are learning about containers for the first time, we recommend using Singularity (website) instead of Docker.
- You can pull a Singularity image as follows:
$ singularity pull docker://deeplearnphysics/larcv2:ub20.04-cuda11.6-pytorch1.13-larndsim
You can then launch a shell inside the Singularity container with:
$ singularity exec --bind /path/to/workshop/folder/ larcv2_ub20.04-cuda11.6-pytorch1.13-larndsim.sif bash
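If you are on a machine with an NVIDIA GPU, Singularity additionally needs the `--nv` flag to expose the host driver inside the container. A sketch, using the same image and the same placeholder bind path as above:

```shell
# --nv mounts the host NVIDIA driver and CUDA libraries into the container;
# without it, PyTorch inside the container will only see the CPU.
# /path/to/workshop/folder/ is a placeholder for your local data directory.
singularity exec --nv --bind /path/to/workshop/folder/ \
    larcv2_ub20.04-cuda11.6-pytorch1.13-larndsim.sif bash
```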
You can also pull the Docker image directly using docker (easier on Mac and Windows):
$ docker pull deeplearnphysics/larcv2:ub20.04-cuda11.6-pytorch1.13-larndsim
To see which images are present on your system, you can use docker images. The output will look something like this:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
deeplearnphysics/larcv2 ub20.04-cuda11.6-pytorch1.13-larndsim cd28cb3cd04b 2 months ago 20.8GB
To run a shell in your image, simply do:
$ docker run -i -t cd28cb3cd04b bash
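To make your data visible inside the container and, on a Linux host with the NVIDIA Container Toolkit installed, use the GPU, you can add a bind mount and the --gpus flag. A hedged sketch; the host path and the /workshop mount point are placeholders:

```shell
# -v maps a host directory into the container.
# --gpus all requires the NVIDIA Container Toolkit and only works on a
# Linux host with an NVIDIA GPU; drop it for CPU-only use.
docker run -i -t --gpus all \
    -v /path/to/workshop/folder:/workshop \
    cd28cb3cd04b bash
```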
If your laptop uses Apple silicon, you're out of luck for now...
- Ask Francois if you have questions or would like a separate tutorial.
- The configuration files are packaged with this repository.
- You can find data files for the examples used in this workshop under:
- SDF
/sdf/group/neutrino/icarus/workshop2023/larcv/ # Example MPV/MPR file prior to reconstruction
/sdf/group/neutrino/icarus/workshop2023/reco/ # Reconstructed HDF5 files
- S3DF
/sdf/data/neutrino/icarus/workshop2023/larcv/ # Example MPV/MPR file prior to reconstruction
/sdf/data/neutrino/icarus/workshop2023/reco/ # Reconstructed HDF5 files
- Public
- MPVMPR LArCV file (Day 1)
- MPVMPR HDF5 file (Day 2, 3)
- BNB numu + cosmics (Day 4, 5)
- BNB intime cosmics (Day 4)
- BNB nue + cosmics (Day 4)
- MPVMPR ee pair HDF5 file (Day 5)
- High statistics CSV files
- The network model parameters for the inference tutorial can be found at:
- SDF/S3DF (same path)
/sdf/group/neutrino/drielsma/train/icarus/localized/full_chain/weights/full_chain/grappa_inter_nomlp/snapshot-2999.ckpt
- Public
Most of the notebooks can be run strictly on CPU, with the exception of:
- Training/validation notebook
- Inference and HDF5 file making notebook
For all other notebooks, you can run them locally, provided that you download:
- Singularity container
- Necessary data
- lartpc_mlreco3d v2.8.6
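One way to fetch the pinned lartpc_mlreco3d release is to clone the repository and check out the v2.8.6 tag. The GitHub URL below assumes the DeepLearnPhysics organization hosts the package; adjust it if your source differs:

```shell
# Clone the reconstruction package and pin it to the workshop release.
# Repository URL is an assumption; v2.8.6 is the version quoted above.
git clone https://github.com/DeepLearnPhysics/lartpc_mlreco3d.git
cd lartpc_mlreco3d
git checkout v2.8.6
```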
To gain access to GPUs:
- Everyone participating in this workshop should have access to both SDF and S3DF; if you do not, please reach out to Francois.
- SDF Jupyter ondemand: https://sdf.slac.stanford.edu/public/doc/#/
- S3DF Jupyter ondemand: https://s3df.slac.stanford.edu/public/doc/#/
- ICARUS collaborators also have access to the Wilson Cluster at FNAL, which is equipped with GPUs. Below are a few commands to log in and load Singularity, with which you can run a container image for the workshop (see the next section). For how to use the Wilson Cluster, refer to their website as well as this and that documentation from NOvA (replace "nova" with "icarus" and most commands should just work).
$ ssh $USER@wc.fnal.gov
$ module load singularity
$ singularity --version
singularity version 3.6.4
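Putting these steps together with the container from the earlier section, a typical Wilson Cluster GPU session might look like the sketch below. Any batch or interactive-job allocation step is site-specific and omitted here; the commands after ssh are typed on the remote host:

```shell
# Log in to the Wilson Cluster (remaining commands run on the remote host).
ssh $USER@wc.fnal.gov
# Load Singularity, pull the workshop image once, then run it with GPU support.
module load singularity
singularity pull docker://deeplearnphysics/larcv2:ub20.04-cuda11.6-pytorch1.13-larndsim
singularity exec --nv larcv2_ub20.04-cuda11.6-pytorch1.13-larndsim.sif bash
```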