This repository contains example notebooks demonstrating how to use RAPIDS for GPU-accelerated analysis of single-cell sequencing data.
All dependencies for these examples can be installed with conda. CUDA versions 10.1 and 10.2 are currently supported. If installing on a system running a CUDA 10.1 driver, use conda/rapidgenomics_cuda10.1.yml instead:
conda env create --name rapidgenomics -f conda/rapidgenomics_cuda10.2.yml
conda activate rapidgenomics
python -m ipykernel install --user --display-name "Python (rapidgenomics)"
After installing the necessary dependencies, you can simply run jupyter lab.
A container with all dependencies, notebooks and source code is available at https://hub.docker.com/r/claraparabricks/single-cell-examples_rapids_cuda10.2.
Execute the following commands to start the notebook server, then follow the URL printed in the log to open the Jupyter web application.
docker pull claraparabricks/single-cell-examples_rapids_cuda10.2
docker run --gpus all --rm -v /mnt/data:/data claraparabricks/single-cell-examples_rapids_cuda10.2
Unified Virtual Memory (UVM) can be used to oversubscribe your GPU memory so that chunks of data will be automatically offloaded to main memory when necessary. This is a great way to explore data without having to worry about out of memory errors, but it does degrade performance in proportion to the amount of oversubscription. UVM is enabled by default in these examples and can be enabled/disabled in any RAPIDS workflow with the following:
import cupy as cp
import rmm

# Re-initialize the RAPIDS memory manager with managed (unified) memory,
# and route CuPy allocations through the same pool. Passing
# managed_memory=False to reinitialize() disables UVM again.
rmm.reinitialize(managed_memory=True)
cp.cuda.set_allocator(rmm.rmm_cupy_allocator)
RAPIDS provides a GPU Dashboard, which contains useful tools to monitor GPU hardware right in Jupyter.
We use RAPIDS to accelerate the analysis of a ~70,000-cell single-cell RNA sequencing dataset from human lung cells. This example includes preprocessing, dimension reduction, clustering, visualization and gene ranking.
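To illustrate the kind of preprocessing this pipeline performs, here is a minimal NumPy sketch of count-depth normalization followed by a log transform. The actual notebook runs GPU implementations from rapids_scanpy_funcs.py on sparse matrices; the target sum of 10,000 below is an assumption (a common Scanpy default), not necessarily the notebook's setting.

```python
import numpy as np

def normalize_and_log(counts, target_sum=1e4):
    """Scale each cell (row) to `target_sum` total counts, then apply log1p.

    `counts` is a small dense cells x genes array for illustration; the
    real pipeline operates on sparse GPU matrices, but the math is the same.
    """
    per_cell = counts.sum(axis=1, keepdims=True)
    scaled = counts / per_cell * target_sum
    return np.log1p(scaled)

# Tiny example: 2 cells x 3 genes
counts = np.array([[1.0, 2.0, 7.0],
                   [0.0, 5.0, 5.0]])
norm = normalize_and_log(counts)
# Before the log transform, each cell's counts sum to exactly 10,000
```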
The dataset is from Travaglini et al. 2020. If you wish to run the example notebook using the same data, use the following command to download the count matrix for this dataset and store it in the data folder:
wget -P <path to this repository>/data https://rapids-single-cell-examples.s3.us-east-2.amazonaws.com/krasnow_hlca_10x.sparse.h5ad
Follow this Jupyter notebook for RAPIDS analysis of this dataset. In order for the notebook to run, the file rapids_scanpy_funcs.py needs to be in the same folder as the notebook.
We provide a second notebook with the CPU version of this analysis here.
We report the runtime of these notebooks on various AWS instances below. All runtimes are given in seconds. Acceleration is given in parentheses. Benchmarking was performed on July 23, 2020 at commit ID f89e71ae546fe011b9bf222ee5d70ae3fef59d25.
Step | CPU runtime: m5a.12xlarge (Intel Xeon Platinum 8000, 48 vCPUs) | GPU runtime: g4dn.xlarge (T4 16 GB GPU), acceleration in parentheses | GPU runtime: p3.2xlarge (Tesla V100 16 GB GPU), acceleration in parentheses |
---|---|---|---|
Preprocessing | 329 | 66 (5x) | 84 (3.9x) |
PCA | 12.2 | 4.6 (2.7x) | 3.1 (3.9x) |
t-SNE | 236 | 3.0 (79x) | 1.8 (131x) |
k-means (single iteration) | 27 | 0.3 (90x) | 0.12 (225x) |
KNN | 28 | 4.9 (5.7x) | 5.9 (4.7x) |
UMAP | 55 | 0.95 (58x) | 0.55 (100x) |
Louvain clustering | 16 | 0.19 (84x) | 0.17 (94x) |
Leiden clustering | 17 | 0.14 (121x) | 0.15 (113x) |
Differential Gene Expression | 99 | 2.9 (34x) | 2.4 (41x) |
Re-analysis of subgroup | 21 | 3.7 (5.7x) | 3.3 (6.4x) |
End-to-end notebook run (steps above + data load and additional processing) | 858 | 103 | 122 |
Price ($/hr) | 2.064 | 0.526 | 3.06 |
Total cost ($) | 0.492 | 0.015 | 0.104 |
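The total-cost rows follow directly from the end-to-end runtime and the hourly price (seconds / 3600 × $/hr), which can be checked with a few lines of Python:

```python
def total_cost(runtime_s, price_per_hr):
    """Dollar cost of a run billed at an hourly rate."""
    return runtime_s / 3600 * price_per_hr

cpu  = total_cost(858, 2.064)   # m5a.12xlarge
t4   = total_cost(103, 0.526)   # g4dn.xlarge
v100 = total_cost(122, 3.06)    # p3.2xlarge
print(round(cpu, 3), round(t4, 3), round(v100, 3))  # 0.492 0.015 0.104
```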
We demonstrate the use of RAPIDS to accelerate the analysis of single-cell RNA-seq data from 1 million cells. This example includes preprocessing, dimension reduction, clustering and visualization.
This example relies heavily on UVM, and a few of the operations oversubscribed a 32 GB V100 GPU on a DGX-1. While this example should work on any GPU built on the Pascal architecture or newer, you will want to make sure there is enough main memory available.
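To gauge whether main memory is sufficient when UVM oversubscribes the GPU, a rough footprint estimate for the sparse count matrix helps. The sketch below uses the standard CSR layout (one value and one column index per nonzero, plus a row-pointer array); the nonzeros-per-cell figure is a hypothetical placeholder, not a property of this dataset.

```python
def csr_bytes(n_rows, nnz, value_bytes=4, index_bytes=4, indptr_bytes=8):
    """Approximate memory of a CSR sparse matrix: one value and one
    column index per nonzero, plus an (n_rows + 1)-long row-pointer array."""
    return nnz * (value_bytes + index_bytes) + (n_rows + 1) * indptr_bytes

# Hypothetical: 1 million cells with ~1,500 nonzero genes each
approx = csr_bytes(n_rows=1_000_000, nnz=1_000_000 * 1_500)
print(f"{approx / 1e9:.1f} GB")  # ~12.0 GB before any dense intermediates
```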
The dataset was made publicly available by 10X Genomics. Use the following command to download the count matrix for this dataset and store it in the data folder:
wget -P <path to this repository>/data https://rapids-single-cell-examples.s3.us-east-2.amazonaws.com/1M_brain_cells_10X.sparse.h5ad
Follow this Jupyter notebook for RAPIDS analysis of this dataset. In order for the notebook to run, the files rapids_scanpy_funcs.py and utils.py need to be in the same folder as the notebook. This notebook runs completely in under 15 minutes on a Tesla V100 GPU with 32 GB memory.
We provide a second notebook with the CPU version of this analysis here.
We report the runtime of these notebooks on various AWS instances below. All runtimes are given in seconds. Acceleration is given in parentheses. Benchmarking was performed on July 23, 2020 at commit ID f89e71ae546fe011b9bf222ee5d70ae3fef59d25.
Step | CPU runtime: m5a.12xlarge (Intel Xeon Platinum 8000, 48 vCPUs) | GPU runtime: g4dn.12xlarge (T4 16 GB GPU), acceleration in parentheses | GPU runtime: p3.8xlarge (Tesla V100 16 GB GPU), acceleration in parentheses |
---|---|---|---|
Preprocessing | 4337 | 344 (13x) | 336 (13x) |
PCA | 29 | 28 (1.04x) | 23 (1.3x) |
t-SNE | 5833 | 134 (44x) | 38 (154x) |
k-means (single iteration) | 113 | 13.2 (8.6x) | 2.4 (47x) |
KNN | 670 | 106 (6.3x) | 55.1 (12x) |
UMAP | 1405 | 87 (16x) | 19.2 (73x) |
Louvain clustering | 573 | 5.2 (110x) | 2.8 (205x) |
Leiden clustering | 6414 | 3.7 (1733x) | 1.8 (3563x) |
Re-analysis of subgroup | 249 | 10.9 (23x) | 8.9 (28x) |
End-to-end notebook run (steps above + data load and additional processing) | 19908 | 912 | 702 |
Price ($/hr) | 2.064 | 3.912 | 12.24 |
Total cost ($) | 11.414 | 0.991 | 2.388 |
We demonstrate how to use RAPIDS, Scanpy and Plotly Dash to create an interactive dashboard where we visualize a single-cell RNA-sequencing dataset. Within the interactive dashboard, we can cluster, visualize, and compare any selected groups of cells.
Additional dependencies are needed for this example. Follow these instructions for conda installation:
conda env create --name rapidgenomics-viz -f conda/rapidgenomics_cuda10.2.viz.yml
conda activate rapidgenomics-viz
python -m ipykernel install --user --display-name "Python (rapidgenomics-viz)"
After installing the necessary dependencies, you can simply run jupyter lab.
The dataset used here is the same as in example 1.
Follow this Jupyter notebook to create the interactive visualization. In order for the notebook to run, the files rapids_scanpy_funcs.py and visualize.py need to be in the same folder as the notebook.
We demonstrate the use of RAPIDS to accelerate the analysis of single-cell ATAC-seq data from 60,495 cells. We start with the peak-cell matrix, then perform peak selection, normalization, dimensionality reduction, clustering, and visualization. We also visualize regulatory activity at marker genes and compute differential peaks.
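A peak-selection step of the kind described here can be sketched with NumPy: keep only peaks detected in at least some minimum number of cells. The threshold of 5 cells below is an illustrative assumption, not the notebook's actual parameter, and the dense array stands in for the real sparse peak-cell matrix.

```python
import numpy as np

def filter_peaks(peak_cell_counts, min_cells=5):
    """Keep rows (peaks) that are nonzero in at least `min_cells` cells.

    `peak_cell_counts` is a small dense peaks x cells array for
    illustration; the real data is a large sparse matrix.
    """
    cells_per_peak = (peak_cell_counts > 0).sum(axis=1)
    mask = cells_per_peak >= min_cells
    return peak_cell_counts[mask], mask

# 3 peaks x 6 cells: the middle peak is detected in only 2 cells
counts = np.array([[1, 0, 2, 1, 1, 3],
                   [0, 0, 1, 0, 1, 0],
                   [2, 1, 1, 1, 0, 1]])
filtered, kept = filter_peaks(counts, min_cells=5)
# The middle peak is dropped; the other two are retained
```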
The dataset is taken from Lareau et al., Nat Biotech 2019. We processed the dataset to include only cells in the 'Resting' condition and peaks with nonzero coverage. Use the following command to download (1) the processed peak-cell count matrix for this dataset (.h5ad), (2) the set of nonzero peak names (.npy), and (3) the cell metadata (.csv), and store them in the data folder:
wget -P <path to this repository>/data https://rapids-single-cell-examples.s3.us-east-2.amazonaws.com/dsci_resting_nonzeropeaks.h5ad; \
wget -P <path to this repository>/data https://rapids-single-cell-examples.s3.us-east-2.amazonaws.com/dsci_resting_peaknames_nonzero.npy; \
wget -P <path to this repository>/data https://rapids-single-cell-examples.s3.us-east-2.amazonaws.com/dsci_resting_cell_metadata.csv
Follow this Jupyter notebook for RAPIDS analysis of this dataset. In order for the notebook to run, the files rapids_scanpy_funcs.py and utils.py need to be in the same folder as the notebook.
We provide a second notebook with the CPU version of this analysis here.
We report the runtime of these notebooks on various AWS instances below. All runtimes are given in seconds. Acceleration is given in parentheses. Benchmarking was performed on August 12, 2020 at commit ID 8f75d419f9806777af97a619fa75990858e6084e.
Step | CPU runtime: m5a.12xlarge (Intel Xeon Platinum 8000, 48 vCPUs) | GPU runtime: g4dn.12xlarge (T4 16 GB GPU), acceleration in parentheses | GPU runtime: p3.2xlarge (Tesla V100 16 GB GPU), acceleration in parentheses |
---|---|---|---|
PCA | 149 | 136 (1.1x) | 64 (2.3x) |
KNN | 39 | 3.8 (10x) | 4.9 (8x) |
UMAP | 38 | 1.1 (35x) | 0.78 (49x) |
Louvain clustering | 6.8 | 0.13 (52x) | 0.12 (57x) |
Leiden clustering | 19 | 0.08 (238x) | 0.07 (271x) |
t-SNE | 252 | 3.3 (76x) | 2.1 (120x) |
Differential Peak Analysis | 1006 | 23 (44x) | 20 (50x) |
End-to-end notebook run (steps above + data load and pre-processing) | 1530 | 182 | 111 |
Price ($/hr) | 2.064 | 3.912 | 3.06 |
Total cost ($) | 0.877 | 0.198 | 0.095 |
For our examples, we stored the count matrix in a sparse .h5ad format. To convert a different count matrix into this format, follow the instructions in this notebook.
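At its core, the conversion builds a compressed sparse row matrix from the dense counts. A minimal sketch using SciPy is shown below; the actual conversion notebook additionally wraps the matrix in an AnnData object and writes it to .h5ad, which is omitted here.

```python
import numpy as np
from scipy import sparse

# Toy dense cells x genes count matrix
dense = np.array([[0, 3, 0],
                  [1, 0, 0],
                  [0, 0, 5]])

# CSR stores only the nonzero values, their column indices, and row pointers
csr = sparse.csr_matrix(dense)
print(csr.nnz)  # 3 nonzero entries out of 9
# In the real workflow this matrix would go into an AnnData object and be
# saved with .write("counts.sparse.h5ad") (requires the anndata package).
```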