Denisa Roberts's starred repositories
image-sculpting
Code release for Image Sculpting: Precise Object Editing with 3D Geometry Control [CVPR 2024]
VisLingInstruct
(NAACL 2024) VisLingInstruct: Elevating Zero-Shot Learning in Multi-Modal Language Models with Autonomous Instruction Optimization
group_sparsity
Group Sparsity: The Hinge Between Filter Pruning and Decomposition for Network Compression. CVPR 2020.
graphtrans
Representing Long-Range Context for Graph Neural Networks with Global Attention
EfficientFormer
EfficientFormerV2 & EfficientFormer (NeurIPS 2022)
stable-diffusion
Latent Text-to-Image Diffusion
Prompt-Engineering-Guide
🐙 Guides, papers, lectures, notebooks, and resources for prompt engineering
x-transformers
A simple but complete full-attention transformer with a set of promising experimental features from various papers
segment-anything
The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
superpoint_transformer
Official PyTorch implementation of Superpoint Transformer introduced in "Efficient 3D Semantic Segmentation with Superpoint Transformer"
lm-evaluation-harness
A framework for few-shot evaluation of autoregressive language models.
prismatic-vlms
A flexible and efficient codebase for training visually-conditioned language models (VLMs)
vlm-evaluation
VLM Evaluation: Benchmark for VLMs, spanning text generation tasks from VQA to Captioning