gs14iitbbs's repositories

BAE-NET

The code for the paper "BAE-NET: Branched Autoencoder for Shape Co-Segmentation".

Language: Python · License: NOASSERTION · Stargazers: 0 · Issues: 1 · Issues: 0

brics_3d

BRICS_3D - 3D Perception and Modeling Library

Language: C++ · Stargazers: 0 · Issues: 1 · Issues: 0

BSP-NET-original

TensorFlow 1.15 implementation of BSP-NET, along with other scripts used in our paper.

Language: Python · License: NOASSERTION · Stargazers: 0 · Issues: 1 · Issues: 0

caffe

Caffe: a fast open framework for deep learning.

Language: C++ · License: NOASSERTION · Stargazers: 0 · Issues: 1 · Issues: 0

graphics

TensorFlow Graphics: Differentiable Graphics Layers for TensorFlow

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 1 · Issues: 0

gym

A toolkit for developing and comparing reinforcement learning algorithms.

Language: Python · License: NOASSERTION · Stargazers: 0 · Issues: 0 · Issues: 0

ngp_pl

Instant-ngp in PyTorch + CUDA, trained with PyTorch Lightning (high quality and high speed, with only a few lines of legible code)

Language: Jupyter Notebook · License: MIT · Stargazers: 0 · Issues: 1 · Issues: 0

ORB_SLAM

A Versatile and Accurate Monocular SLAM

Language: C++ · Stargazers: 0 · Issues: 1 · Issues: 0

ORB_SLAM2

Real-Time SLAM for Monocular, Stereo and RGB-D Cameras, with Loop Detection and Relocalization Capabilities

Language: C++ · License: NOASSERTION · Stargazers: 0 · Issues: 1 · Issues: 0

pyslam

pySLAM contains a monocular Visual Odometry (VO) pipeline in Python. It supports many modern local features based on Deep Learning.

Language: Python · License: GPL-3.0 · Stargazers: 0 · Issues: 1 · Issues: 0

SfmLearner-Pytorch

PyTorch version of SfmLearner from Tinghui Zhou et al.

Language: Python · License: MIT · Stargazers: 0 · Issues: 1 · Issues: 0

slambench2

SLAM performance evaluation framework

License: NOASSERTION · Stargazers: 0 · Issues: 0 · Issues: 0

VO-SLAM-Review

SLAM is mainly divided into two parts: the front end and the back end. The front end is the visual odometry (VO), which roughly estimates the motion of the camera from adjacent images and provides a good initial value for the back end. VO implementations fall into two categories, depending on whether features are extracted: feature-point-based methods and direct methods that use no feature points. Feature-point-based VO is stable and relatively insensitive to illumination changes and dynamic objects (a minimal feature-based VO sketch follows this entry).

License: Apache-2.0 · Stargazers: 0 · Issues: 1 · Issues: 0
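
As a companion to the description above, here is a minimal sketch of one feature-point-based VO front-end step, assuming OpenCV's ORB detector and a known pinhole intrinsic matrix K; the frame filenames, the intrinsics values, and the helper name estimate_relative_pose are illustrative assumptions, not code taken from any repository listed on this page.

import cv2
import numpy as np

# Assumed pinhole intrinsics (fx, fy, cx, cy); replace with your camera calibration.
K = np.array([[718.856, 0.0, 607.193],
              [0.0, 718.856, 185.216],
              [0.0, 0.0, 1.0]])

def estimate_relative_pose(img1, img2, K):
    """Roughly estimate rotation R and unit-scale translation t between two adjacent frames."""
    orb = cv2.ORB_create(nfeatures=2000)            # detect and describe ORB features
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Match binary descriptors between the two images (Hamming distance for ORB).
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Essential matrix with RANSAC rejects outlier matches; recoverPose extracts R and t.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
    return R, t   # rough initial estimate; a back end would refine it

img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)  # hypothetical consecutive frames
img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
R, t = estimate_relative_pose(img1, img2, K)
print("Rotation:\n", R, "\nTranslation direction:\n", t)

In a full pipeline, the per-frame R and t from this front end would be chained over time and then refined by a back end (for example bundle adjustment or pose-graph optimization), which is the division of labor the description above refers to.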