mit-han-lab / torchsparse

[MICRO'23, MLSys'22] TorchSparse: Efficient Training and Inference Framework for Sparse Convolution on GPUs.

Home Page: https://torchsparse.mit.edu


[Installation] <Out of RAM to build>

yokosyun opened this issue · comments

Is there an existing issue for this?

  • I have searched the existing issues

Have you followed all the steps in the FAQ?

  • I have tried the steps in the FAQ.

Current Behavior

When I build torchsparse with the following command, it exhausts the maximum RAM (16 GB).

pip install --upgrade git+https://github.com/mit-han-lab/torchsparse.git@v1.4.0

Error Line

The build freezes once it reaches the maximum RAM (16 GB); no error line is printed.

Environment

- GCC: 11.4.0
- NVCC: 12.1
- PyTorch: 2.3.0
- PyTorch CUDA: 12.1

Full Error Log


[PUT YOUR ERROR LOG HERE]

Solved by setting MAX_JOBS, which limits the number of parallel compilation jobs (and therefore peak RAM usage):

export MAX_JOBS=4
pip install --upgrade git+https://github.com/mit-han-lab/torchsparse.git@v1.4.0