Denisa Roberts's repositories
transformers-retrieval-ranking-nli-ECIR2021
Multilingual retrieval, ranking, and natural language inference with transformers (mBERT); PyTorch implementation for an article at the European Conference on Information Retrieval (ECIR 2021)
numerical_ml_algorithms_python
Python 3 implementations of a few numerical algorithms.
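As a minimal sketch of the kind of routine such a collection typically contains (the function name and structure here are illustrative assumptions, not code taken from the repository), here is Newton's method for root finding:

```python
# Illustrative sketch of a classic numerical algorithm: Newton's method.
# NOTE: this is an assumption about the repo's contents, not code from it.

def newton(f, df, x0, tol=1e-10, max_iter=100):
    """Find a root of f near x0 via the iteration x <- x - f(x)/df(x)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:  # close enough to a root
            return x
        x -= fx / df(x)    # Newton update step
    return x

# Example: approximate sqrt(2) as the positive root of x^2 - 2.
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```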
lm-evaluation-harness
A framework for few-shot evaluation of autoregressive language models.
open_flamingo
An open-source framework for training large multimodal models
prismatic-vlms
A flexible and efficient codebase for training visually-conditioned language models (VLMs)
Prompt-Engineering-Guide
🐙 Guides, papers, lectures, notebooks and resources for prompt engineering
segment-anything
The repository provides code for running inference with the SegmentAnything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
stable-diffusion
Latent Text-to-Image Diffusion
superpoint_transformer
Official PyTorch implementation of Superpoint Transformer introduced in "Efficient 3D Semantic Segmentation with Superpoint Transformer"
SwiftFormer
SwiftFormer: Efficient Additive Attention for Transformer-based Real-time Mobile Vision Applications
transformers
🤗 Transformers: State-of-the-art Natural Language Processing for PyTorch and TensorFlow 2.0.
vlm-evaluation
VLM Evaluation: Benchmark for VLMs, spanning text-generation tasks from VQA to captioning
x-transformers
A simple but complete full-attention transformer with a set of promising experimental features from various papers
diffseg
DiffSeg is an unsupervised zero-shot segmentation method using attention information from a Stable Diffusion model. This repo implements the main DiffSeg algorithm and additionally includes an experimental feature to add semantic labels to the masks based on a generated caption.