Trevor Ablett's starred repositories
yolo_tracking
BoxMOT: pluggable SOTA tracking modules for segmentation, object detection and pose estimation models
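BoxMOT's pattern is to attach a tracker to whatever detector you already run: construct a tracker object, then call update() with per-frame detections. A minimal sketch follows; the class name, constructor arguments, and output column layout are assumptions based on the project's README and have changed across releases, so verify against the pinned version you install.

```python
from pathlib import Path

import cv2
import numpy as np
from boxmot import DeepOcSort  # tracker classes are exposed at the package root (assumed name)

# Argument names are assumptions from the README; older releases used e.g. model_weights/fp16.
tracker = DeepOcSort(
    reid_weights=Path("osnet_x0_25_msmt17.pt"),  # appearance (ReID) model for association
    device="cpu",
    half=False,
)

cap = cv2.VideoCapture("video.mp4")
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Detections from any detector, one row per box: [x1, y1, x2, y2, conf, cls]
    dets = np.array([[100.0, 100.0, 200.0, 200.0, 0.9, 0]])
    # update() returns tracked boxes with IDs: rows of [x1, y1, x2, y2, id, conf, cls, ind]
    tracks = tracker.update(dets, frame)
cap.release()
```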
Track-Anything
Track-Anything is a flexible and interactive tool for video object tracking and segmentation, based on Segment Anything, XMem, and E2FGVI.
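Track-Anything's interactive step builds on Segment Anything's promptable interface: click a point, get a mask, then let XMem propagate it through the video. A minimal sketch of just the SAM prompting step using the segment-anything package; the checkpoint path and click coordinates are placeholders.

```python
import cv2
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

# Load a SAM checkpoint (vit_h shown; the weights file is downloaded separately).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

# SAM expects an RGB uint8 image.
image = cv2.cvtColor(cv2.imread("frame.png"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# A single foreground click (label 1) prompts a mask for the object under it.
masks, scores, logits = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),
    multimask_output=True,  # return three candidate masks at different scales
)
```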
Segment-and-Track-Anything
An open-source project for tracking and segmenting any objects in videos, either automatically or interactively. The primary algorithms are the Segment Anything Model (SAM) for key-frame segmentation and Associating Objects with Transformers (AOT) for efficient tracking and propagation.
Semantic-Segment-Anything
Automated dense category annotation engine that serves as the initial semantic labeling for the Segment Anything dataset (SA-1B).
autodistill
Images to inference with no labeling (use foundation models to train supervised models).
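The autodistill workflow is: define an ontology mapping text prompts to class names, let a large foundation "base model" auto-label raw images, then train a small supervised "target model" on the result. A sketch following the project's README; the plugin packages (autodistill-grounded-sam, autodistill-yolov8) are separate installs, and the prompt, folders, and epoch count are placeholders.

```python
from autodistill.detection import CaptionOntology
from autodistill_grounded_sam import GroundedSAM
from autodistill_yolov8 import YOLOv8

# Map natural-language prompts (keys) to class names (values).
ontology = CaptionOntology({"a shipping container": "container"})

# A foundation model auto-labels the unlabeled images...
base_model = GroundedSAM(ontology=ontology)
base_model.label(input_folder="./images", output_folder="./dataset")

# ...and a small supervised model is distilled from those labels.
target_model = YOLOv8("yolov8n.pt")
target_model.train("./dataset/data.yaml", epochs=50)
```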
Caption-Anything
Caption-Anything is a versatile tool that combines image segmentation, visual captioning, and ChatGPT to generate captions tailored to diverse user preferences and controls. https://huggingface.co/spaces/TencentARC/Caption-Anything https://huggingface.co/spaces/VIPLab/Caption-Anything
universal_manipulation_interface
Universal Manipulation Interface: In-The-Wild Robot Teaching Without In-The-Wild Robots
depthai_hand_tracker
Running Google MediaPipe hand-tracking models on Luxonis DepthAI hardware (OAK-D-Lite, OAK-D, OAK-1, ...)
python-host
Python code running on a Raspberry Pi or other Linux-based boards to control SwitchBot devices.
depthai-python
DepthAI Python Library
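DepthAI programs build a pipeline of nodes on the host, upload it to the connected OAK device, and stream results back over XLink. A minimal sketch assuming the Gen2 Python API:

```python
import depthai as dai

# Build a pipeline: color camera preview -> XLink stream back to the host.
pipeline = dai.Pipeline()
cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(640, 480)
xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("preview")
cam.preview.link(xout.input)

# Upload the pipeline to the device and read frames from the output queue.
with dai.Device(pipeline) as device:
    q = device.getOutputQueue(name="preview", maxSize=4, blocking=False)
    for _ in range(100):
        frame = q.get().getCvFrame()  # numpy BGR frame, ready for OpenCV
        # ... process frame ...
```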
language-table
Suite of human-collected datasets and a multi-task continuous control benchmark for open-vocabulary visuolinguomotor learning.
mimicgen_environments
This code corresponds to simulation environments used as part of the MimicGen project.
easy-kinesthetic-recording
A package with all scripts and commands needed to record joint and end-effector (EE) trajectories (and more) from multiple robots for kinesthetic teaching.