
Code for the paper: Evaluating the Utilities of Large Language Models in Single-cell Data Analysis.

Home Page: https://sites.google.com/yale.edu/sceval


scEval😈: An evaluation platform for single-cell Large Language Models (LLMs)

This is the repo for our benchmarking and analysis project.

Install

To install our benchmarking environment, please use conda to create an environment from this yml file on your machine:

conda env create -n scgpt --file scgpt_bench.yml
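After the environment is created, activate it before running any benchmarking script (the name scgpt comes from the command above):

conda activate scgpt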

For the other methods we used, please refer to their original project websites for instructions. We recommend creating a separate environment for each method.

These methods include: tGPT, Geneformer, scBERT, CellLM, SCimilarity, scFoundation, TOSICA, ResPAN, scDesign3, scVI, Tangram, GEARS.

We need scIB for evaluation. Please use pip to install it:

pip install scib

We also provide a scIB version with our new function in this repo.
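As a minimal sketch of how a scIB-based evaluation can look (the file names and the "batch"/"cell_type" keys below are hypothetical, and the flag names may differ slightly between scIB versions):

import scanpy as sc
import scib

# Hypothetical inputs: the raw data and the batch-corrected result.
adata = sc.read_h5ad("raw.h5ad")
adata_int = sc.read_h5ad("corrected.h5ad")

# scib.metrics.metrics computes a panel of integration metrics; the
# trailing-underscore flags select which metrics to run.
results = scib.metrics.metrics(
    adata,
    adata_int,
    batch_key="batch",        # assumed column in adata.obs
    label_key="cell_type",    # assumed column in adata.obs
    embed="X_emb",            # embedding produced by the integration method
    ari_=True,                # Adjusted Rand Index
    nmi_=True,                # Normalized Mutual Information
    silhouette_=True,         # Average Silhouette Width
)
print(results)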

Pre-training weights

Most of our experiments were performed with the weights under scGPT_bc. scGPT_full from scGPT v2 was also used in the batch effect correction evaluation.

Pre-trained weights of scBERT can be found in scBERT. Pre-trained weights of CellLM can be found in CellLM. Pre-trained weights of Geneformer can be found in Geneformer. Pre-trained weights of SCimilarity can be found in SCimilarity.

scFoundation relies on its API for access; please refer to scFoundation for details.

Benchmarking information

Please refer to the different folders for the code of scEval and the metrics we used to evaluate single-cell LLMs on different tasks. In general, the tasks and their corresponding metrics are:

- Batch Effect Correction, Multi-omics Data Integration, and Simulation: scIB
- Cell-type Annotation and Gene Function Prediction: Accuracy, Precision, Recall, and F1 score
- Imputation: scIB, Correlation
- Perturbation Prediction: Correlation
- Gene Network Analysis: Jaccard similarity

The file 'sceval_lib.py' includes all of the metrics we used in this project.
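For illustration only (this is a simplified sketch, not the exact implementation in 'sceval_lib.py'), the annotation, correlation, and gene network metrics can be computed along these lines with scikit-learn, SciPy, and plain Python; all inputs here are toy values:

import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Cell-type annotation / gene function prediction:
# compare predicted labels against ground-truth labels.
y_true = ["B cell", "T cell", "T cell", "NK cell"]
y_pred = ["B cell", "T cell", "B cell", "NK cell"]
acc = accuracy_score(y_true, y_pred)
prec, rec, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0
)

# Perturbation prediction / imputation:
# correlation between predicted and observed expression values.
pred = np.array([0.1, 0.5, -0.2, 0.8])
obs = np.array([0.2, 0.4, -0.1, 0.9])
corr, _ = pearsonr(pred, obs)

# Gene network analysis:
# Jaccard similarity between two gene sets (intersection over union).
set_a, set_b = {"GATA1", "TAL1", "KLF1"}, {"GATA1", "TAL1", "SPI1"}
jaccard = len(set_a & set_b) / len(set_a | set_b)

print(acc, prec, rec, f1, corr, jaccard)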

To run the code for different tasks, please use (taking batch effect correction as an example):

python sceval_batcheffect.py

We offer demo datasets for batch effect correction and cell type annotation. Such datasets can be found here.

To avoid syncing to wandb, set the following before the run starts:

import os
os.environ["WANDB_MODE"] = "offline"  # keep wandb logging offline

Results

We have an official website summarizing our work. Please use this link for access.

Contact

Please contact tianyu.liu@yale.edu if you have any questions about this project.

Citation

@article {Liu2023.09.08.555192,
	author = {Tianyu Liu and Kexing Li and Yuge Wang and Hongyu Li and Hongyu Zhao},
	title = {Evaluating the Utilities of Large Language Models in Single-cell Data Analysis},
	elocation-id = {2023.09.08.555192},
	year = {2023},
	doi = {10.1101/2023.09.08.555192},
	publisher = {Cold Spring Harbor Laboratory},
	URL = {https://www.biorxiv.org/content/early/2023/09/08/2023.09.08.555192},
	eprint = {https://www.biorxiv.org/content/early/2023/09/08/2023.09.08.555192.full.pdf},
	journal = {bioRxiv}
}
