hugoycj / Instant-angelo_vis

Quick lookup for Instant-angelo (https://github.com/hugoycj/Instant-angelo) results


Instant-angelo_vis

Welcome to Instant-angelo_vis! This repository provides a quick way to explore and visualize the results of the Instant-angelo project.

At Instant-angelo_vis, you can easily browse and examine the outcomes produced by Instant-angelo. We encourage you to contribute additional visualization examples that showcase the potential of the project, and to share any problematic cases you encounter; both help us identify areas that need improvement and enhance the quality and performance of the repository.

Samples

BlendedMVS Low-res (768 x 576)

The experiments were conducted on the low-resolution data provided by BlendedMVS, with a resolution of 768 x 576. **Each model was trained on an RTX 3090 GPU for 20,000 steps, which took approximately 20-25 minutes (excluding the time required for SfM and MVS prior generation).** Several results at this stage were unsatisfactory; using high-resolution images and training for a longer duration may address this. Due to limited GPU resources, we tested only the "Large" and "Sculpture" scenes from the BlendedMVS dataset, so not all cases were evaluated. We intend to release the preprocessed data in the future for parameter tuning and further evaluation.
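
As a rough sketch of how such a run is launched (the run-script names follow the Instant-angelo repository's run scripts; the exact arguments shown here are assumptions and may differ between versions, so check the repository README):

```shell
# Train the sparse-prior variant on one BlendedMVS scene.
INPUT_DIR=data/blendedmvs/5b08286b2775267d5b0634ba   # images + COLMAP output
EXP_NAME=blendedmvs-sparse

bash run_neuralangelo-colmap_sparse.sh $INPUT_DIR $EXP_NAME

# The dense variant first generates a Vis-MVSNet point cloud, then trains.
bash run_neuralangelo-colmap_dense.sh $INPUT_DIR $EXP_NAME
```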

We currently compare three methods: neus-colmap, neuralangelo-colmap_sparse, and neuralangelo-colmap_dense:

  • neus-colmap is our baseline, trained with instant-nsr-pl for 20k steps, which takes around 10 minutes on a 3090.

  • neuralangelo-colmap_sparse is our implementation in Instant-angelo. It relies on the COLMAP sparse point cloud, which is already produced when COLMAP is run to obtain camera poses, so this approach requires no extra preprocessing.

  • neuralangelo-colmap_dense also uses our implementation in Instant-angelo, but relies on a dense Multi-View Stereo (MVS) prior; specifically, we use Vis-MVSNet to generate the point cloud.

    We have found that producing a high-fidelity surface reconstruction within 20k steps can be challenging, so we introduce the dense MVSNet point cloud for acceleration. Vis-MVSNet takes approximately 1-3 seconds per frame and is more effective than COLMAP MVS, although it introduces some noise. This noise can be alleviated as training progresses by decreasing the weight of the dense point regularization in the later stages of training.
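
The decaying dense-point regularization described above can be sketched as a simple weight schedule (an illustrative example only; the function name, the linear shape, and the default values are assumptions, not the actual Instant-angelo code):

```python
def dense_point_weight(step, total_steps=20000, w_start=1.0, w_end=0.01):
    """Linearly decay the weight of the dense MVS point-cloud loss:
    the noisy Vis-MVSNet points guide the surface strongly early in
    training and only weakly near the end, letting the photometric
    losses clean up the noise they introduced."""
    t = min(max(step / total_steps, 0.0), 1.0)  # training progress in [0, 1]
    return w_start + (w_end - w_start) * t
```

In a training loop, the returned weight would simply scale the point-to-surface loss term before it is added to the total loss.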

Below is a visual comparison of the different methods; refer to the repository for more cases. Quantitative results will be provided in the future. Loading the GIFs on this page can be slow, so you can either clone the repository locally for visualization or visit our Chinese mirror.

For each of the BlendedMVS scenes below, the repository shows a side-by-side table of novel view synthesis results (GIFs) for neus-colmap, neuralangelo-colmap_sparse, and neuralangelo-colmap_dense:

  • 5b08286b2775267d5b0634ba
  • 5b69cc0cb44b61786eb959bf
  • 5ba75d79d76ffa2c86cf2f05
  • 5bfc9d5aec61ca1dd69132a2
  • 5afacb69ab00705d0cefdd5b
  • 58d36897f387231e6c929903
  • 58cf4771d0f5fb221defe6da
  • 59817e4a1bd4b175e7038d19
  • 5a588a8193ac3d233f77fbca
  • 5adc6bd52430a05ecb2ffb85

Tanks and Temples

Coming soon.

Scannet & Scannet++

Coming soon.

Custom Data

Coming soon.

Contributions

Contributions are highly encouraged and help sustain the collaborative nature of this project.

Below are some suggestions for contributions that can be easily undertaken:

  • If you have a distinctive dataset that would enrich the project, we sincerely welcome your contribution. We are particularly interested in datasets that showcase the capabilities or demonstrate the applicability of the project; our aim is to present the highest-quality data to our users.

  • We would be delighted to receive compelling visualization results that align with the project's objectives. If you have visually striking results, we encourage you to share them; such contributions enhance the overall quality and user experience of the project.

  • We encourage contributors to provide visualization results from traditional multi-view stereo (MVS) libraries such as COLMAP and OpenMVS, as well as NeRF-based reconstruction repositories such as SDFStudio and the official Neuralangelo. Providing the preprocessed COLMAP-format data at a later stage would also be appreciated, to enable a fair and comprehensive comparison.
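
For contributors running the traditional pipeline, a sparse COLMAP reconstruction of an image folder typically looks like the following (a minimal sketch assuming a local COLMAP install; the path is a placeholder, and options often need per-scene tuning):

```shell
# Standard COLMAP sparse reconstruction: features -> matches -> mapping.
DATASET=/path/to/scene          # must contain an images/ subfolder

colmap feature_extractor \
    --database_path $DATASET/database.db \
    --image_path $DATASET/images

colmap exhaustive_matcher \
    --database_path $DATASET/database.db

mkdir -p $DATASET/sparse
colmap mapper \
    --database_path $DATASET/database.db \
    --image_path $DATASET/images \
    --output_path $DATASET/sparse
```

The resulting sparse model under $DATASET/sparse is the COLMAP-format data referred to above.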

Contribution Guidelines

To ensure the effectiveness and consistency of our project, we have established the following guidelines:

Key Principles for Dataset Contributions:

  • Visualization of Results: Contributions should include visually appealing and informative visualizations of the results, such as reconstructed meshes or novel view synthesis outputs, that clearly demonstrate the advantages or disadvantages of the results.
  • Configuration Details: Please provide any configuration files required for the model, including detailed information on the parameters and settings used.
  • Environmental and Hardware Specifications: It would be helpful if contributors mentioned the specific versions of PyTorch and CUDA, along with the GPU model, required to run the code effectively.
  • Links to Preprocessed Dataset: It would be highly appreciated if contributors provide links to access or download the preprocessed dataset, enabling easier reproduction of the results.
