Zhongwei Qiu (ericzw)

Company: Alibaba DAMO Academy

Location: Hangzhou

Home Page: https://ericzw.github.io/

Organizations
MatrixBrain
researchmm

Zhongwei Qiu's starred repositories

unilm

Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities

Language: Python | License: MIT | Stargazers: 19180 | Issues: 0

Awesome-Foundation-Models-for-Advancing-Healthcare

A comprehensive and in-depth review of healthcare foundation models (HFMs), covering challenges, opportunities, and future directions. Paper: https://arxiv.org/abs/2404.03264

License: MIT | Stargazers: 120 | Issues: 0

prov-gigapath

Prov-GigaPath: A whole-slide foundation model for digital pathology from real-world data

Language: Python | License: NOASSERTION | Stargazers: 274 | Issues: 0

WiKG

[CVPR 2024] Dynamic Graph Representation with Knowledge-aware Attention for Histopathology Whole Slide Image Analysis

Language: Python | Stargazers: 31 | Issues: 0

Awesome-Vision-Mamba-Models

[Official Repo] A Survey on Vision Mamba: Models, Applications and Challenges

Stargazers: 313 | Issues: 0

VAR

[GPT beats diffusion🔥] [scaling laws in visual generation📈] Official impl. of "Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction". An *ultra-simple, user-friendly yet state-of-the-art* codebase for autoregressive image generation!

Language: Python | License: MIT | Stargazers: 3847 | Issues: 0
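
Since VAR's blurb centers on "next-scale prediction", here is a conceptual sketch of the idea: token maps are generated coarse-to-fine, with each scale conditioned on the resized accumulation of all coarser scales. This illustrates the technique only and is not the VAR codebase's API; `TokenPredictor` is a hypothetical stand-in for VAR's transformer, which in the real model predicts discrete codebook indices rather than continuous features.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TokenPredictor(nn.Module):
    """Hypothetical stand-in for the autoregressive transformer."""
    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, cond: torch.Tensor) -> torch.Tensor:
        return self.net(cond)

dim = 64
scales = [1, 2, 4, 8, 16]  # token-map side lengths, coarse to fine
predictor = TokenPredictor(dim)
canvas = torch.zeros(1, dim, scales[-1], scales[-1])  # accumulated feature map

for s in scales:
    # Condition on everything generated so far, resized to the current scale,
    # then predict this scale's token map and add it back upsampled.
    cond = F.interpolate(canvas, size=(s, s), mode="bilinear", align_corners=False)
    feats = predictor(cond)
    canvas = canvas + F.interpolate(feats, size=(scales[-1], scales[-1]),
                                    mode="bilinear", align_corners=False)
```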

CMTA

Official implementation of the paper "Cross-Modal Translation and Alignment for Survival Analysis"

Language: Python | Stargazers: 33 | Issues: 0

Awesome-state-space-models

Collection of papers on state-space models

Stargazers: 481 | Issues: 0

mamba.py

A simple and efficient Mamba implementation in pure PyTorch and MLX.

Language: Python | License: MIT | Stargazers: 794 | Issues: 0

ViT-Prisma

ViT Prisma is a mechanistic interpretability library for Vision Transformers (ViTs).

Language: Python | License: NOASSERTION | Stargazers: 136 | Issues: 0

dinov2

PyTorch code and models for the DINOv2 self-supervised learning method.

Language: Jupyter Notebook | License: Apache-2.0 | Stargazers: 8409 | Issues: 0
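
A minimal sketch of pulling a pretrained DINOv2 backbone through torch.hub, following the repository's README; `dinov2_vits14` is one of the published variants, and the 384-dim output assumes ViT-S/14.

```python
import torch

# Load a pretrained ViT-S/14 backbone from the dinov2 repo via torch.hub.
model = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
model.eval()

# Input side lengths must be multiples of the patch size (14).
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    feats = model(x)  # global image embedding, shape (1, 384) for ViT-S/14
print(feats.shape)
```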

VL-Mamba

Implementation of "VL-Mamba: Exploring State Space Models for Multimodal Learning"

License: MIT | Stargazers: 68 | Issues: 0

quilt1m

[NeurIPS 2023 Oral] Quilt-1M: One Million Image-Text Pairs for Histopathology.

Language: Python | License: MIT | Stargazers: 120 | Issues: 0

Histopathology-Datasets

Resources for histopathology datasets

Stargazers: 199 | Issues: 0

simba

Simba

Language: Python | Stargazers: 158 | Issues: 0

zeta

Build high-performance AI models with modular building blocks

Language: Python | License: Apache-2.0 | Stargazers: 323 | Issues: 0

Osprey

[CVPR 2024] The code for "Osprey: Pixel Understanding with Visual Instruction Tuning"

Language: Python | License: Apache-2.0 | Stargazers: 720 | Issues: 0

EfficientSAM

EfficientSAM: Leveraged Masked Image Pretraining for Efficient Segment Anything

Language: Jupyter Notebook | License: Apache-2.0 | Stargazers: 1992 | Issues: 0

mamba

Mamba SSM architecture

Language: Python | License: Apache-2.0 | Stargazers: 11707 | Issues: 0
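
A minimal usage sketch following the mamba_ssm package's README; the selective-scan kernels are CUDA-only, so this assumes a GPU is available.

```python
import torch
from mamba_ssm import Mamba

batch, length, dim = 2, 64, 16
x = torch.randn(batch, length, dim).to("cuda")

model = Mamba(
    d_model=dim,  # model dimension
    d_state=16,   # SSM state expansion factor
    d_conv=4,     # local convolution width
    expand=2,     # block expansion factor
).to("cuda")

y = model(x)  # sequence-to-sequence map; output shape matches input
assert y.shape == x.shape
```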

Awesome-MIM

[Survey] Masked Modeling for Self-supervised Representation Learning on Vision and Beyond (https://arxiv.org/abs/2401.00897)

Language: Python | License: Apache-2.0 | Stargazers: 273 | Issues: 0

HIPT

Hierarchical Image Pyramid Transformer - CVPR 2022 (Oral)

Language: Jupyter Notebook | License: NOASSERTION | Stargazers: 479 | Issues: 0

LLaVA-Med

Large Language-and-Vision Assistant for Biomedicine, built towards multimodal GPT-4 level capabilities.

Language: Python | License: NOASSERTION | Stargazers: 1315 | Issues: 0

PixelLM

PixelLM is an effective and efficient LMM for pixel-level reasoning and understanding, accepted to CVPR 2024.

Language: Python | License: Apache-2.0 | Stargazers: 145 | Issues: 0

ml-aim

This repository provides the code and model checkpoints of the research paper: Scalable Pre-training of Large Autoregressive Image Models

Language: Python | License: NOASSERTION | Stargazers: 667 | Issues: 0

CLAM

Data-efficient and weakly supervised computational pathology on whole slide images - Nature Biomedical Engineering

Language: Python | License: GPL-3.0 | Stargazers: 971 | Issues: 0

SurvPath

Modeling Dense Multimodal Interactions Between Biological Pathways and Histology for Survival Prediction - CVPR 2024

Language: Python | Stargazers: 79 | Issues: 0

MultiStainDeepLearning

Code from Foersch et al. (Under Construction / Development)

Language: Python | License: GPL-3.0 | Stargazers: 33 | Issues: 0

plip

Pathology Language and Image Pre-Training (PLIP) is the first vision-and-language foundation model for pathology AI (Nature Medicine). PLIP is a large-scale pre-trained model that extracts visual and language features from pathology images and text descriptions; it is a fine-tuned version of the original CLIP model.

Language: Python | Stargazers: 236 | Issues: 0
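
Since PLIP is a fine-tuned CLIP, it can be driven through the standard Hugging Face CLIP classes. A hedged sketch, assuming the checkpoint id "vinid/plip" (the commonly referenced Hugging Face release; verify against the official repo) and a hypothetical local tile patch.png:

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Assumed checkpoint id; check the PLIP repo for the official release.
model = CLIPModel.from_pretrained("vinid/plip")
processor = CLIPProcessor.from_pretrained("vinid/plip")

image = Image.open("patch.png")  # hypothetical histopathology tile
inputs = processor(
    text=["an H&E image of tumor", "an H&E image of normal tissue"],
    images=image,
    return_tensors="pt",
    padding=True,
)

outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=-1)  # zero-shot label probabilities
```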

Medical-SAM-Adapter

Adapting Segment Anything Model for Medical Image Segmentation

Language: Python | License: GPL-3.0 | Stargazers: 896 | Issues: 0