942411526's repositories
multimodal
TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale.
BotanicGarden
BotanicGarden: A high-quality dataset for robot navigation in unstructured natural environments
Depth-Anything
Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data
depth2surface_normals_seg
This repository publishes surface normals computed from depth images and uses the surface-normal image to segment the ground.
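The idea behind depth-to-normals ground segmentation can be sketched as follows. This is a minimal illustration, not the repository's actual code: it estimates per-pixel normals from depth gradients (n ∝ (-dz/dx, -dz/dy, 1)) and marks pixels whose normal aligns with an assumed "up" direction as ground; the function names, threshold, and up vector are all hypothetical.

```python
import numpy as np

def depth_to_normals(depth):
    # Per-pixel surface normal from depth gradients: n ∝ (-dz/dx, -dz/dy, 1).
    # np.gradient on a 2D array returns (d/d_row, d/d_col).
    dzdy, dzdx = np.gradient(depth.astype(np.float64))
    normals = np.dstack((-dzdx, -dzdy, np.ones_like(depth, dtype=np.float64)))
    return normals / np.linalg.norm(normals, axis=2, keepdims=True)

def ground_mask(normals, up=(0.0, 0.0, 1.0), thresh=0.9):
    # Pixels whose normal is nearly parallel to the (assumed) up vector
    # are labeled ground; thresh is a cosine-similarity cutoff.
    return normals @ np.asarray(up) > thresh

# Toy example: a gently sloping planar depth map is classified as ground.
depth = np.tile(np.linspace(1.0, 2.0, 8), (8, 1))
mask = ground_mask(depth_to_normals(depth))
```

Real pipelines would additionally back-project depth through the camera intrinsics before differencing, which this sketch omits for brevity.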
DPVO
Deep Patch Visual Odometry
drive-any-robot
Official code and checkpoint release for "GNM: A General Navigation Model to Drive Any Robot".
ekf-imu-depth
[ECCV 2022] Towards Scale-Aware, Robust, and Generalizable Unsupervised Monocular Depth Estimation by Integrating IMU Motion Dynamics
electron-ssr-backup
The original author of electron-ssr deleted this great project, so it has been backed up here. It is no longer developed; use it while it lasts.
google-research
Google Research
iPlanner
iPlanner: Imperative Path Planning. An end-to-end learning planning framework using a novel unsupervised imperative learning approach
legged_gym
Isaac Gym Environments for Legged Robots
Metric3D
The repo for "Metric3D: Towards Zero-shot Metric 3D Prediction from A Single Image"
monodepth2
[ICCV 2019] Monocular depth estimation from a single image
packnet-sfm
TRI-ML Monocular Depth Estimation Repository
PPGeo
[ICLR 2023] PyTorch implementation of PPGeo, a fully self-supervised driving-policy pre-training framework that learns from unlabeled driving videos.
sc_depth_pl
SC-Depth (V1, V2, and V3) for unsupervised monocular depth estimation. Webpage: https://jiawangbian.github.io/sc_depth_pl/
multimodal-fusion-network
This repository contains all the code for parsing, transforming, and training a multimodal deep learning network for social robot navigation.
NSFC-application-template-latex
Unofficial LaTeX template for the main body of a National Natural Science Foundation of China (NSFC) grant application (General Program).
scnuthesis
A LaTeX template conforming to South China Normal University's formatting requirements for master's and doctoral theses.
viplanner
ViPlanner: Visual Semantic Imperative Learning for Local Navigation
visualnav-transformer
Official code and checkpoint release for "ViNT: A Foundation Model for Visual Navigation".