
Xtreme1 - The Next GEN Platform for Multimodal Training Data. 3D annotation, LiDAR-camera fusion annotation, image annotation, and visualization tools are supported!

Home Page: https://xtreme1.io/


Xtreme1 logo

Slack | Twitter | Online Docs

Intro

Xtreme1 is the world's first open-source platform for Multimodal training data.

Xtreme1 provides deep insight into data annotation, data curation, and ontology management to solve 2D image and 3D point cloud dataset ML challenges. The built-in AI-assisted tools take your annotation efforts to the next level of efficiency for your 2D/3D Object Detection, 3D Instance Segmentation, and LiDAR-Camera Fusion projects.

It is now hosted by the LF AI & Data Foundation as a sandbox project.

Join community

Website | Slack | Twitter | Medium | Issues

Join the Xtreme1 community on Slack to share your suggestions, advice, and questions with us.

👉 Join us on Slack today!

Key features

Image Annotation (B-box, Segmentation) - YOLOR & RITM
LiDAR-Camera Fusion (Frame Series) Annotation - OpenPCDet & AB3DMOT

1️⃣ Supports data labeling for images, 3D LiDAR and 2D/3D Sensor Fusion datasets

2️⃣ Built-in pre-labeling and interactive models support 2D/3D object detection, segmentation and classification

3️⃣ Configurable Ontology Center for general classes (with hierarchies) and attributes for use in your model training

4️⃣ Data management and quality monitoring

5️⃣ Find labeling errors and fix them

6️⃣ Model results visualization to help you evaluate your model

Image Data Curation (Visualizing & Debugging) - MobileNetV3 & openTSNE
LiDAR-Camera Fusion Data Curation (Filter by Class Name, Cross-Dataset)

Quick start

Download package

Download the latest release package and unzip it.

wget https://github.com/xtreme1-io/xtreme1/releases/download/v0.6.0/xtreme1-v0.6.0.zip
unzip -d xtreme1-v0.6.0 xtreme1-v0.6.0.zip
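The Docker Compose file is part of the release package, so change into the unzipped directory before starting any services (the directory name below is taken from the unzip command above):

cd xtreme1-v0.6.0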

Start all services

docker compose up

Visit http://localhost:8190 in the browser (Google Chrome is recommended) to try out Xtreme1!
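If the page does not load, a quick sanity check is to list the containers and probe the web port (a minimal sketch using standard Docker Compose and curl commands; 8190 is the default port mentioned above):

# list the Xtreme1 containers and their current status
docker compose ps

# the web frontend should respond on the default port
curl -I http://localhost:8190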

⚠️ Install built-in models

You need to explicitly specify a model profile to enable model services.

docker compose --profile model up
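If you prefer to keep the terminal free, the same profile can be started detached and inspected afterwards (a minimal sketch using standard Docker Compose flags; the exact model service names depend on the bundled compose file):

# start everything, including the model services, in the background
docker compose --profile model up -d

# confirm the model containers are running
docker compose --profile model ps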

Enable model services

Make sure you have installed the NVIDIA driver and the NVIDIA Container Toolkit. You do not need to install the CUDA Toolkit, as it is already contained in the model image.

Set "default-runtime" to "nvidia" in /etc/docker/daemon.json, then restart Docker to enable the NVIDIA Container Toolkit:

{
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  },
  "default-runtime": "nvidia"
}
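After editing /etc/docker/daemon.json, restart the Docker daemon and verify that the NVIDIA runtime is picked up (a minimal sketch; the systemctl command assumes a systemd-based Linux host):

# restart Docker so the new default runtime takes effect (assumes systemd)
sudo systemctl restart docker

# the output should list the nvidia runtime and show it as the default
docker info | grep -i runtime

# confirm the host can see the GPU through the NVIDIA driver
nvidia-smi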

For more on installation, development, and deployment, check out the Xtreme1 Docs.

License

This software is licensed under the Apache 2.0 LICENSE. Xtreme1 is a trademark of LF AI Projects.

If Xtreme1 is part of your development process / project / publication, please cite us ❤️ :

@misc{Xtreme1,
  title = {Xtreme1 - The Next GEN Platform For Multisensory Training Data},
  year = {2022},
  note = {Software available from https://github.com/xtreme1-io/xtreme1/},
  url = {https://xtreme1.io/},
  author = {LF AI Projects},
}
