Zhongwei Qiu's starred repositories
Awesome-Foundation-Models-for-Advancing-Healthcare
We present a comprehensive review of healthcare foundation models (HFMs): challenges, opportunities, and future directions. Paper: https://arxiv.org/abs/2404.03264
prov-gigapath
Prov-GigaPath: A whole-slide foundation model for digital pathology from real-world data
Awesome-Vision-Mamba-Models
[Official Repo] A Survey on Vision Mamba: Models, Applications and Challenges
Awesome-state-space-models
Collection of papers on state-space models
ViT-Prisma
ViT Prisma is a mechanistic interpretability library for Vision Transformers (ViTs).
Histopathology-Datasets
Resources for histopathology datasets
EfficientSAM
EfficientSAM: Leveraged Masked Image Pretraining for Efficient Segment Anything
Awesome-MIM
[Survey] Masked Modeling for Self-supervised Representation Learning on Vision and Beyond (https://arxiv.org/abs/2401.00897)
MultiStainDeepLearning
Code from Foersch et al. (Under Construction / Development)
plip
Pathology Language and Image Pre-Training (PLIP) is the first vision-and-language foundation model for pathology AI (Nature Medicine). PLIP is a large-scale pre-trained model that can be used to extract visual and language features from pathology images and text descriptions. The model is a fine-tuned version of the original CLIP model.
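Because PLIP keeps the CLIP architecture, it can be loaded with the standard Hugging Face CLIP classes. A minimal sketch of zero-shot patch classification, assuming the publicly hosted `vinid/plip` checkpoint and a local image file `patch.png` (both are assumptions, not part of this listing):

```python
# Minimal PLIP usage sketch (assumes the `vinid/plip` checkpoint on Hugging Face).
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("vinid/plip")
processor = CLIPProcessor.from_pretrained("vinid/plip")

# `patch.png` is a placeholder for any histopathology image patch.
image = Image.open("patch.png")
texts = [
    "an H&E image of tumor tissue",
    "an H&E image of normal tissue",
]

# Encode the image and candidate text descriptions jointly, as in CLIP.
inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)

# logits_per_image holds image-text similarity scores; softmax gives label probabilities.
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(texts, probs[0].tolist())))
```

The same `model.get_image_features(...)` / `model.get_text_features(...)` calls can be used to pull out embeddings alone, which is the feature-extraction use the description mentions.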
Medical-SAM-Adapter
Adapting Segment Anything Model for Medical Image Segmentation