Dean Webb (deanofthewebb)

User data from GitHub: https://github.com/deanofthewebb

Company: Insight Data Science

Location: Berkeley, CA

Home Page: https://www.linkedin.com/in/deanofthewebb

GitHub: @deanofthewebb

Dean Webb's repositories

Anima

33B Chinese LLM; DPO and QLoRA fine-tuning; 100K context; AirLLM 70B inference on a single 4GB GPU

Language: Jupyter Notebook · License: Apache-2.0 · Stargazers: 0 · Issues: 0

AS-One

Easy & modular computer vision detectors and trackers: run YOLOv7, v6, v5, R, and X in under 20 lines of code.

Language: Python · License: GPL-3.0 · Stargazers: 0 · Issues: 1

autodistill-grounded-sam-2

Use Segment Anything 2, grounded with Florence-2, to auto-label data for use in training vision models.

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

DAPO

An open-source RL system from ByteDance Seed and Tsinghua AIR

Stargazers: 0 · Issues: 0

DeepStream-Yolo

NVIDIA DeepStream SDK 6.0.1 configuration for YOLO models

Language: C++ · License: MIT · Stargazers: 0 · Issues: 1
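This repo ships nvinfer configuration files for running YOLO models under DeepStream. As a rough illustration only, a minimal config of the kind it provides might look like the fragment below; the file names (`yolo_model.onnx`, `labels.txt`) are placeholders, and the exact keys and values should be taken from the repo's own samples for your DeepStream version.

```ini
[property]
gpu-id=0
# Normalize pixel values to [0, 1] (1/255)
net-scale-factor=0.0039215697906911373
model-color-format=0
onnx-file=yolo_model.onnx
labelfile-path=labels.txt
batch-size=1
# 0 = FP32 precision
network-mode=0
num-detected-classes=80
gie-unique-id=1
cluster-mode=2
maintain-aspect-ratio=1
# Custom bounding-box parser provided by the repo
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so

[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
```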

DeepStream-Yolo-Seg

NVIDIA DeepStream SDK 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 implementation for YOLO-Segmentation models

Language: Python · License: MIT · Stargazers: 0 · Issues: 0

deepstream_tao_apps

Sample apps to demonstrate how to deploy models trained with TAO on DeepStream

Language: C++ · License: MIT · Stargazers: 0 · Issues: 0

Depth-Anything-ONNX

ONNX-compatible Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

Depth-Anything-V2

Depth Anything V2. A More Capable Foundation Model for Monocular Depth Estimation

License: Apache-2.0 · Stargazers: 0 · Issues: 0

DepthAnything-on-Browser

This repository demonstrates browser-based implementations of the DepthAnything and DepthAnythingV2 models. It is powered by ONNX and does not require a web server.

License: MIT · Stargazers: 0 · Issues: 0

examples

Client code examples & integrations that utilize LM Studio's local inference server

Language: Python · License: MIT · Stargazers: 0 · Issues: 0

Grounded-SAM-2

Grounded SAM 2: Ground and Track Anything in Videos with Grounding DINO, Florence-2 and SAM 2

Language: Jupyter Notebook · License: Apache-2.0 · Stargazers: 0 · Issues: 0

image-quality-issues

FiftyOne Plugin for finding common image quality issues

Language: Python · License: GPL-3.0 · Stargazers: 0 · Issues: 0

label-studio-ml-backend

Configs and boilerplates for Label Studio's Machine Learning backend

License: Apache-2.0 · Stargazers: 0 · Issues: 0

moderngl

Modern OpenGL binding for Python

Language: Python · License: MIT · Stargazers: 0 · Issues: 0

mojo

The Mojo Programming Language

Language: Mojo · License: NOASSERTION · Stargazers: 0 · Issues: 0

OSWorld

OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

PaliGemma

This repository contains examples of using PaliGemma for tasks such as object detection, segmentation, image captioning, etc.

Language: Jupyter Notebook · License: MIT · Stargazers: 0 · Issues: 0

Segment-and-Track-Anything

An open-source project dedicated to tracking and segmenting any objects in videos, either automatically or interactively. The primary algorithms are the Segment Anything Model (SAM) for key-frame segmentation and Associating Objects with Transformers (AOT) for efficient tracking and propagation.

Language: Jupyter Notebook · License: AGPL-3.0 · Stargazers: 0 · Issues: 1

TensorRT-LLM

TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.

Language: C++ · License: Apache-2.0 · Stargazers: 0 · Issues: 0

ToolBench

An open platform for training, serving, and evaluating large language models for tool learning.

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

ultralytics

YOLOv8 🚀 in PyTorch > ONNX > CoreML > TFLite

Language: Python · License: AGPL-3.0 · Stargazers: 0 · Issues: 0

UniCL

[CVPR 2022] Official code for "Unified Contrastive Learning in Image-Text-Label Space"

Language: Python · License: MIT · Stargazers: 0 · Issues: 0

unilm

Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities

Language: Python · License: MIT · Stargazers: 0 · Issues: 0

unimatch

Unifying flow, stereo, and depth estimation

Language: Python · License: MIT · Stargazers: 0 · Issues: 1

verl

verl: Volcano Engine Reinforcement Learning for LLMs

License: Apache-2.0 · Stargazers: 0 · Issues: 0

vipy

Python Tools for Visual Dataset Transformation

Language: Python · License: MIT · Stargazers: 0 · Issues: 0

YOLO-World

[CVPR 2024] Real-Time Open-Vocabulary Object Detection

Language: Jupyter Notebook · License: GPL-3.0 · Stargazers: 0 · Issues: 0

yolov12

YOLOv12: Attention-Centric Real-Time Object Detectors

License: AGPL-3.0 · Stargazers: 0 · Issues: 0

yolov9

Implementation of the paper "YOLOv9: Learning What You Want to Learn Using Programmable Gradient Information"

Language: Python · License: GPL-3.0 · Stargazers: 0 · Issues: 0