Ege Özgüroğlu's repositories
egeozguroglu.github.io
Ege Özgüroğlu's GitHub Pages website
CLIP
Contrastive Language-Image Pretraining
example-project-python
Example Python project
Grounded-Segment-Anything
Marrying Grounding DINO with Segment Anything, Stable Diffusion, Tag2Text, BLIP, Whisper, and ChatBot: automatically detect, segment, and generate anything from image, text, and audio inputs
GroundingDINO
The official implementation of "Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection"
hand_object_detector
Project and dataset webpage:
hyperfuture
Code for the paper Learning the Predictability of the Future
isedit
A Python library that combines score-editing tools with audio output.
lang-segment-anything
SAM with text prompt
long-short-term-transformer
[NeurIPS 2021 Spotlight] Official implementation of Long Short-Term Transformer for Online Action Detection
numpy
The fundamental package for scientific computing with Python.
openpom
Replication of the Principal Odor Map paper by Lee et al. (2022). The model is implemented to integrate with DeepChem.
pandas
Flexible and powerful data analysis / manipulation library for Python, providing labeled data structures similar to R's data.frame, statistical functions, and much more
pix2gestalt
Code for the paper "pix2gestalt: Amodal Segmentation by Synthesizing Wholes"
PixelLib
Visit PixelLib's official documentation: https://pixellib.readthedocs.io/en/latest/
pytorch-CycleGAN-and-pix2pix
Image-to-Image Translation in PyTorch
sam-hq
Segment Anything in High Quality [NeurIPS 2023]
Segment-and-Track-Anything
An open-source project for tracking and segmenting any objects in videos, either automatically or interactively. The primary algorithms are the Segment Anything Model (SAM) for key-frame segmentation and Associating Objects with Transformers (AOT) for efficient tracking and propagation.
segment-anything
The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks showing how to use the model.
Segment-Everything-Everywhere-All-At-Once
Official implementation of the paper "Segment Everything Everywhere All at Once"
tacto
Simulator of vision-based tactile sensors.
Track-Anything
Track-Anything is a flexible and interactive tool for video object tracking and segmentation, based on Segment Anything, XMem, and E2FGVI.
viper
Code for the paper "ViperGPT: Visual Inference via Python Execution for Reasoning"
zero123
Zero-1-to-3: Zero-shot One Image to 3D Object (ICCV 2023)