jungseokhong / opti-acoustic-semantics

Code for running optical image segmentation synced with sonar data for object ranging


Opti-Acoustic Semantic Mapping

Requirements

DCSAM
sonar_oculus
dino-vit-features (see below)

The segmentation code is based on dino-vit-features.

[paper] [project page]
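For orientation, here is a minimal sketch of the dense ViT descriptor extraction that dino-vit-features builds on. It loads the public DINO backbone from torch.hub rather than the repo's own extractor wrapper, and the image path is a placeholder:

import torch
import torchvision.transforms as T
from PIL import Image

# Load a DINO ViT-S/8 backbone from torch.hub (the dino-vit-features repo
# wraps models like this one; this is a minimal stand-in, not its API).
model = torch.hub.load('facebookresearch/dino:main', 'dino_vits8')
model.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
])

# 'frame.png' is an illustrative path; substitute your own image.
img = preprocess(Image.open('frame.png').convert('RGB')).unsqueeze(0)

with torch.no_grad():
    # Last-layer tokens: index 0 is the [CLS] token; the remaining patch
    # tokens serve as dense per-patch descriptors (one per 8x8 patch).
    tokens = model.get_intermediate_layers(img, n=1)[0]
    dense_descriptors = tokens[:, 1:, :]  # shape (1, num_patches, 384)

Per-patch descriptors of this kind are the raw material that the dino-vit-features tooling clusters and matches to produce segmentations.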

Citation

@article{amir2021deep,
    author    = {Shir Amir and Yossi Gandelsman and Shai Bagon and Tali Dekel},
    title     = {Deep ViT Features as Dense Visual Descriptors},
    journal   = {arXiv preprint arXiv:2112.05814},
    year      = {2021}
}

Setup

The dino-vit-features code is developed in PyTorch and requires the following modules: tqdm, faiss, timm, matplotlib, pydensecrf, opencv, scikit-learn. The authors use python=3.9, but the code should run on any Python version above 3.6. They recommend a CUDA-supported GPU for faster performance. Set up the running environment via Anaconda by running the following commands:

$ conda env create -f env/dino-vit-feats-env.yml
$ conda activate dino-vit-feats-env

Otherwise, run the following commands in your conda environment:

$ conda install pytorch torchvision torchaudio cudatoolkit=11 -c pytorch
$ conda install tqdm
$ conda install -c conda-forge faiss
$ conda install -c conda-forge timm 
$ conda install matplotlib
$ pip install opencv-python
$ pip install git+https://github.com/lucasb-eyer/pydensecrf.git
$ conda install -c anaconda scikit-learn
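After installation, a quick sanity check can confirm the environment. This is a minimal sketch that just imports the modules listed above and reports CUDA visibility:

# Import every required module and report versions / CUDA visibility.
import torch, tqdm, faiss, timm, matplotlib, cv2, sklearn
import pydensecrf.densecrf  # noqa: F401

print('torch', torch.__version__, '| CUDA available:', torch.cuda.is_available())
print('timm', timm.__version__, '| faiss', faiss.__version__, '| opencv', cv2.__version__)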

LLM SLAM

Prerequisite: clone the llm-mapping repo. Then add the Grounded-Segment-Anything modules to your PYTHONPATH and run the VLM-based pipeline:

$ export PYTHONPATH=code/Grounded-Segment-Anything/GroundingDINO:code/Grounded-Segment-Anything/segment_anything:$PYTHONPATH
$ python vlm_based.py
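vlm_based.py itself is not documented here; as a rough, hypothetical sketch of the Grounded-Segment-Anything pipeline it imports (text-prompted GroundingDINO boxes refined into SAM masks), with checkpoint/config paths and the caption as placeholders:

import torch
from torchvision.ops import box_convert
from groundingdino.util.inference import load_model, load_image, predict
from segment_anything import sam_model_registry, SamPredictor

DEVICE = 'cuda' if torch.cuda.is_available() else 'cpu'

# Paths and the caption below are placeholders; point them at your
# local GroundingDINO config/weights and SAM checkpoint.
dino = load_model('GroundingDINO_SwinT_OGC.py', 'groundingdino_swint_ogc.pth')
sam = sam_model_registry['vit_h'](checkpoint='sam_vit_h_4b8939.pth').to(DEVICE)
predictor = SamPredictor(sam)

image_source, image = load_image('frame.png')  # image_source: HxWx3 numpy array
boxes, logits, phrases = predict(
    model=dino,
    image=image,
    caption='buoy . dock . rock .',
    box_threshold=0.35,
    text_threshold=0.25,
)

# GroundingDINO returns normalized cxcywh boxes; convert to pixel xyxy for SAM.
h, w, _ = image_source.shape
boxes_xyxy = box_convert(boxes * torch.tensor([w, h, w, h]),
                         in_fmt='cxcywh', out_fmt='xyxy')

predictor.set_image(image_source)
transformed = predictor.transform.apply_boxes_torch(boxes_xyxy.to(DEVICE), (h, w))
masks, _, _ = predictor.predict_torch(
    point_coords=None, point_labels=None, boxes=transformed, multimask_output=False,
)
# masks: one binary segmentation mask per detected phrase.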

About


License: MIT


Languages

Python 99.8%, Shell 0.2%