lugaofeng's repositories
A-LOAM
Advanced implementation of LOAM
CVPR-2023-1st-foundation-model-challenge-Track2-4th-solution
4th-place solution for Track 2 of the CVPR 2023 1st Foundation Model Challenge
darknet_ros
YOLO ROS: Real-Time Object Detection for ROS
DecryptPrompt
A summary of Prompt & LLM papers, open-source datasets & models, and AIGC applications
DeLORA
Self-supervised Deep LiDAR Odometry for Robotic Applications
Dig-into-Apollo
Apollo learning notes for beginners.
embed_linux_tutorial
The 野火 (Embedfire) book "i.MX Linux Development Hands-on Guide" and its accompanying code
mmdetection
OpenMMLab Detection Toolbox and Benchmark
nanodet
NanoDet-Plus: a super fast and lightweight anchor-free object detection model. Only 980 KB (int8) / 1.8 MB (fp16), and runs at 97 FPS on a cellphone
ethzasl_msf
MSF - Modular framework for multi sensor fusion based on an Extended Kalman Filter (EKF)
First_Part
Tasks iterated on during the National Day holiday: various test fixtures, a user-defined UI, and the hardware main control board.
FuxiCTR
A configurable, tunable, and reproducible library for CTR prediction https://fuxictr.github.io
L-ink_Card
Smart NFC & ink-Display Card
linux
Linux kernel source tree
MaixPy
Micropython env for Sipeed Maix boards(K210 RISC-V)
mmpose
OpenMMLab Pose Estimation Toolbox and Benchmark.
ORB_SLAM2
Annotated version from https://gitee.com/paopaoslam/ORB-SLAM2, for Ubuntu/Windows
ORB_SLAM3
ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual-Inertial and Multi-Map SLAM
ORB_SLAM3_detailed_comments
Detailed comments for ORB-SLAM3
pytorch-cifar100
Practice on CIFAR-100 (ResNet, DenseNet, VGG, GoogLeNet, InceptionV3, InceptionV4, Inception-ResNetV2, Xception, ResNet in ResNet, ResNeXt, ShuffleNet, ShuffleNetV2, MobileNet, MobileNetV2, SqueezeNet, NASNet, Residual Attention Network, SENet, WideResNet)
ros_best_practices
Best practices, conventions, and tricks for ROS. Do you want to become a robotics master? Then consider graduating or working at the Robotics Systems Lab at ETH in Zürich!
Tengine
Tengine is a lightweight, high-performance, modular inference engine for embedded devices
u-boot
"Das U-Boot" Source Tree
VINS-Fusion
An optimization-based multi-sensor state estimator
VINS-Mono
A Robust and Versatile Monocular Visual-Inertial State Estimator
X-VLM
X-VLM: Multi-Grained Vision Language Pre-Training (ICML 2022)