conda create -n zero123 python=3.9
conda activate zero123
cd zero123
pip install -r requirements.txt
git clone https://github.com/CompVis/taming-transformers.git
pip install -e taming-transformers/
git clone https://github.com/openai/CLIP.git
pip install -e CLIP/
cd 3drec
pip install -r requirements.txt
Download a checkpoint into the zero123 directory from one of the following sources:
https://drive.google.com/drive/folders/1geG1IO15nWffJXsmQ_6VLih7ryNivzVs?usp=sharing
https://huggingface.co/cvlab/zero123-weights
wget https://cv.cs.columbia.edu/zero123/assets/105000.ckpt # available iterations: 105000, 165000, 230000, 300000
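Judging from the wget line above, the checkpoint filenames follow the pattern <iteration>.ckpt. A minimal sketch that builds the download URL for a given iteration (the helper name ckpt_url is ours, not part of the repo, and only the listed iterations are known to exist):

```python
# Sketch: build a Zero123 checkpoint URL from a training iteration.
# Assumes the "<iteration>.ckpt" naming seen in the wget example above.
KNOWN_ITERATIONS = (105000, 165000, 230000, 300000)
BASE = "https://cv.cs.columbia.edu/zero123/assets"

def ckpt_url(iteration: int) -> str:
    if iteration not in KNOWN_ITERATIONS:
        raise ValueError(f"unknown iteration {iteration}; pick one of {KNOWN_ITERATIONS}")
    return f"{BASE}/{iteration}.ckpt"
```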
Install kaolin via the following command:
# Replace TORCH_VERSION and CUDA_VERSION with your torch / cuda versions
pip install kaolin==0.13.0 -f https://nvidia-kaolin.s3.us-east-2.amazonaws.com/torch-{TORCH_VERSION}_cu{CUDA_VERSION}.html
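To make the placeholder substitution concrete, here is a small sketch that formats the kaolin wheel-index URL from torch and CUDA version strings. The helper name is ours; note that the CUDA version in the URL has no dot (e.g. CUDA 11.3 becomes cu113), so the dot is stripped:

```python
# Sketch: build the kaolin prebuilt-wheel index URL from version strings.
# torch_version is e.g. "1.12.1"; cuda_version is e.g. "11.3" (the dot
# is removed to match the "cu113"-style suffix in the URL pattern above).
def kaolin_index_url(torch_version: str, cuda_version: str) -> str:
    cu = cuda_version.replace(".", "")
    return ("https://nvidia-kaolin.s3.us-east-2.amazonaws.com/"
            f"torch-{torch_version}_cu{cu}.html")
```

Inside the conda environment, the two versions can be read from torch itself, e.g. torch.__version__.split("+")[0] and torch.version.cuda.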
Generate six candidate view images and let the user select one:
cd zero123
python demo.py
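For illustration only (we have not checked how demo.py actually samples camera poses), six candidate views could be spaced evenly in azimuth around the object:

```python
# Illustrative sketch: n camera azimuth angles spaced evenly around the
# object. This is an assumption for illustration, not the actual sampling
# strategy used by demo.py.
def view_azimuths(n_views: int = 6) -> list:
    return [i * 360.0 / n_views for i in range(n_views)]
```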
The coarse stage consists of three steps:
cd ./coarse/six2pc
python generate_pc.py # takes about 10 s
cd ./coarse/pc2surf
python generate_surf.py # takes about two minutes
cd ./coarse/surf2mesh
python generate_mesh.py # takes about 30 s
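The three steps above could also be chained from a single driver script. A sketch using subprocess (the directory and script names come from the commands above; the driver itself is ours, not part of the repo):

```python
import subprocess

# (working directory, script) pairs from the three coarse-stage steps above.
COARSE_STEPS = [
    ("./coarse/six2pc",    "generate_pc.py"),    # ~10 s
    ("./coarse/pc2surf",   "generate_surf.py"),  # ~2 min
    ("./coarse/surf2mesh", "generate_mesh.py"),  # ~30 s
]

def run_coarse_stage(steps=COARSE_STEPS):
    for cwd, script in steps:
        # check=True aborts the pipeline as soon as any step fails.
        subprocess.run(["python", script], cwd=cwd, check=True)
```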
The generated coarse mesh is saved in './coarse/buffer/mesh/'.
Generate the refined, textured mesh with:
python main.py
- Results are saved under 3drec/experiments/exp_wild/$EXP_NAME.
- Line 224 in 3drec/main.py: if view_id = i, the model is trained with all the views.
- Line 30 in 3drec/kaolinrender/diffmesh.py: defines the learning rate for the mesh.
- Line 78 in 3drec/kaolinrender/diffmesh.py: the initial mesh texture used for training.
This repository is based on Zero123.