Chan Hee (Luke) Song's starred repositories
robot_sugar
Official implementation of "SUGAR: Pre-training 3D Visual Representations for Robotics" (CVPR'24).
3DSceneGraph
The data skeleton from "3D Scene Graph: A Structure for Unified Semantics, 3D Space, and Camera" http://3dscenegraph.stanford.edu
SceneVerse
Official implementation of ECCV24 paper "SceneVerse: Scaling 3D Vision-Language Learning for Grounded Scene Understanding"
drogozhang.github.io
Github Pages template for academic personal websites, forked from academicpages/academicpages.github.io
Grounded_3D-LLM
Code & data for Grounded 3D-LLM with Referent Tokens
Awesome-LLM-3D
Awesome-LLM-3D: a curated list of resources on Multi-modal Large Language Models in the 3D world
3D-CLR-Official
[CVPR 2023] Code for "3D Concept Learning and Reasoning from Multi-View Images"
Awesome-Embodied-Agent-with-LLMs
A curated list of research on Embodied AI and robots with Large Language Models. Watch this repository for the latest updates! 🔥
LLMTaskPlanning
LoTa-Bench: Benchmarking Language-oriented Task Planners for Embodied Agents (ICLR 2024)
Thinking-VLN
Ideas and thoughts about the fascinating field of Vision-and-Language Navigation
awesome-language-agents
List of language agents based on the paper "Cognitive Architectures for Language Agents"
LLMAgentPapers
Must-read Papers on LLM Agents.
EnvInteractiveLMPapers
A collection of papers on methods that use language to interact with environments, including the real world, simulated worlds, and the WWW (🏄).
awesome-vision-language-navigation
A curated list for vision-and-language navigation, based on the ACL 2022 paper "Vision-and-Language Navigation: A Survey of Tasks, Methods, and Future Directions"
Awesome-Neural-Logic
Awesome Neural Logic and Causality: MLN, NLRL, NLM, etc. Frontier areas of causal inference, neural logic, and logical reasoning for strong AI.
awesome-vision-language-pretraining-papers
Recent Advances in Vision and Language PreTrained Models (VL-PTMs)
pytorch-styleguide
An unofficial style guide and summary of best practices for PyTorch