Torch3d is a PyTorch library of datasets, model architectures, and common operations for 3D deep learning. PyTorch currently has no official package for the 3D domain the way torchvision serves images. Torch3d aims to fill this gap by streamlining the prototyping of deep learning models on 3D data.
Torch3d requires PyTorch 1.2 or newer. Other dependencies are:
- torchvision
- h5py
From PyPI:
$ pip install torch3d
From source:
$ git clone https://github.com/pqhieu/torch3d
$ cd torch3d
$ pip install --editable .
Here are some examples to get you started. They assume a basic understanding of PyTorch; a minimal sketch of the first one follows the list.
- Point cloud classification (ModelNet40) using PointNet (Beginner)
- Point cloud auto-encoder with FoldingNet (Beginner)
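The classification example boils down to the usual PyTorch training loop. The sketch below illustrates the idea only; the constructor arguments for ModelNet40 and PointNet (root, download, transform, in_channels, num_classes) are assumptions made in the style of torchvision, so check the bundled examples for the authoritative version.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

import torch3d.datasets as dsets
import torch3d.models as models
import torch3d.transforms as transforms

# Torchvision-style dataset constructor (argument names are assumptions).
dataset = dsets.ModelNet40(
    root="data",
    train=True,
    download=True,
    transform=transforms.ToTensor(),
)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# Assumed model signature: xyz input channels and 40 ModelNet classes.
model = models.PointNet(in_channels=3, num_classes=40)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for points, labels in loader:
    optimizer.zero_grad()
    logits = model(points)            # (batch, num_classes) class scores
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
```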
Torch3d consists of the following modules:
- datasets: Common 3D datasets for classification, semantic segmentation, and so on.
- metrics: Metrics for on-the-fly evaluation during training across different tasks (see the sketch after this list):
  - Accuracy (classification, segmentation)
  - IoU (segmentation)
- models: State-of-the-art models implemented following their original papers, including the PointNet and FoldingNet used in the examples above.
- nn: Low-level operators that can be used to build up complex 3D neural networks.
- transforms: Common transformations for dataset preprocessing.
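As an illustration of on-the-fly evaluation with the metrics module, the snippet below tracks classification accuracy over a validation set. The Accuracy class name comes from the list above, but the streaming update()/score() interface is an assumption; consult the module docstrings for the real API.

```python
import torch
import torch3d.metrics as metrics

# Hypothetical usage of a streaming Accuracy metric; update() and score()
# are assumed method names, not confirmed API.
accuracy = metrics.Accuracy(num_classes=40)

for _ in range(10):
    # Stand-in predictions and labels; in practice these come from your
    # model and a validation DataLoader, as in the training sketch above.
    predictions = torch.randint(0, 40, (32,))
    labels = torch.randint(0, 40, (32,))
    accuracy.update(predictions, labels)  # accumulate per-batch statistics

print(f"Accuracy: {accuracy.score():.4f}")
```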