bysowhat's repositories
3DTrans
An Open-source Codebase for exploring Continuous-learning/Pre-training-oriented Autonomous Driving Task
AB3DMOT
(IROS 2020, ECCVW 2020) Official Python Implementation for "3D Multi-Object Tracking: A Baseline and New Evaluation Metrics"
BEVFusion
Official PyTorch implementation of "BEVFusion: A Simple and Robust LiDAR-Camera Fusion Framework"
ByteTrack
[ECCV 2022] ByteTrack: Multi-Object Tracking by Associating Every Detection Box
CascadePSP
[CVPR 2020] CascadePSP: Toward Class-Agnostic and Very High-Resolution Segmentation via Global and Local Refinement
faster-rcnn-tf2
A TensorFlow 2 implementation of Faster R-CNN that can be trained on data in VOC dataset format.
GTR
Global Tracking Transformers, CVPR 2022
LaneATT
Code for the paper entitled "Keep your Eyes on the Lane: Real-time Attention-guided Lane Detection" (CVPR 2021)
lanenet-lane-detection
Unofficial implementation of the LaneNet model for real-time lane detection using a deep neural network https://maybeshewill-cv.github.io/lanenet-lane-detection/
MOTRv2
[CVPR2023] MOTRv2: Bootstrapping End-to-End Multi-Object Tracking by Pretrained Object Detectors
Point-Transformers
Point Transformers
PointNeXt
[NeurIPS'22] PointNeXt: Revisiting PointNet++ with Improved Training and Scaling Strategies
PTTR
Pytorch Implementation of PTTR: Relational 3D Point Cloud Object Tracking with Transformer
RandLA-Net
🔥RandLA-Net in Tensorflow (CVPR 2020, Oral & IEEE TPAMI 2021)
RandLA-Net-pytorch
:four_leaf_clover: Pytorch Implementation of RandLA-Net (https://arxiv.org/abs/1911.11236)
siam-mot
SiamMOT: Siamese Multi-Object Tracking
SphereFormer
The official implementation for "Spherical Transformer for LiDAR-based 3D Recognition" (CVPR 2023).
spvnas
[ECCV 2020] Searching Efficient 3D Architectures with Sparse Point-Voxel Convolution
SST
Code for "Fully Sparse 3D Object Detection" & "Embracing Single Stride 3D Object Detector with Sparse Transformer"
TorchEx
A collection of custom PyTorch operators.
Ultra-Fast-Lane-Detection
Ultra Fast Structure-aware Deep Lane Detection (ECCV 2020)
unilm
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities