graphdeco-inria / gaussian-splatting

Original reference implementation of "3D Gaussian Splatting for Real-Time Radiance Field Rendering"

Home Page: https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/


Question about convert.py and CPU offloading

yoponchik opened this issue · comments

When I run convert.py, my CPU usage is over 70% while GPU usage sits at only 1%. Is that normal?
Is there a way to make my GPU do the work?

Judging from the code, the default should be to use the GPU for the matching step. However, when I ran it on two different computers, the matching time varied significantly: one took only a few milliseconds, the other several hundred. I suspect the former ran on the GPU and the latter on the CPU, since I made no other modifications, used the same commands, and worked with the same dataset. I'm puzzled as to why this discrepancy occurred.

@tapowanliwuyun I know, right? To my understanding, you have to pass the --no_gpu flag to force CPU execution, so convert.py should use the GPU automatically unless explicitly told not to. But for some reason, the GPU is barely doing any work.
Additionally, I ran colmap --help and verified that my build is the CUDA version.
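
For reference, here is roughly how convert.py wires that choice through to COLMAP. This is a simplified sketch, not the verbatim script (the actual file builds shell command strings rather than argument lists), but --SiftExtraction.use_gpu and --SiftMatching.use_gpu are the real COLMAP options that select GPU SIFT:

```python
import argparse
import subprocess

# Simplified sketch of the GPU toggle in convert.py (not the verbatim script).
parser = argparse.ArgumentParser()
parser.add_argument("--source_path", "-s", required=True)
parser.add_argument("--no_gpu", action="store_true")
args = parser.parse_args()

use_gpu = 0 if args.no_gpu else 1  # GPU is the default; --no_gpu forces CPU

# Feature extraction: SiftExtraction.use_gpu selects CUDA SIFT when set to 1.
subprocess.run([
    "colmap", "feature_extractor",
    "--database_path", f"{args.source_path}/distorted/database.db",
    "--image_path", f"{args.source_path}/input",
    "--SiftExtraction.use_gpu", str(use_gpu),
], check=True)

# Matching: SiftMatching.use_gpu decides whether matching runs on the GPU.
subprocess.run([
    "colmap", "exhaustive_matcher",
    "--database_path", f"{args.source_path}/distorted/database.db",
    "--SiftMatching.use_gpu", str(use_gpu),
], check=True)
```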

@tapowanliwuyun I did some poking around, and this link seems to discuss COLMAP not using the GPU. I'm not familiar with the lingo, so I don't understand everything they're saying, but maybe it's supposed to be like that?

@yoponchik Yes, the --no_gpu flag forces CPU usage, meaning that by default the GPU is used for computation.
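
Even with GPU as the default, COLMAP silently falls back to CPU SIFT if the binary was built without CUDA, which would explain the slow matching. One quick heuristic (a hedged sketch; the exact banner text varies across COLMAP versions) is to look for a CUDA marker in the help output:

```python
import subprocess

# Hedged check: look for a CUDA marker in COLMAP's help banner. The exact
# banner text varies across COLMAP versions, so treat this as a heuristic.
result = subprocess.run(["colmap", "--help"], capture_output=True, text=True)
banner = result.stdout + result.stderr
if "with CUDA" in banner:
    print("COLMAP reports a CUDA build; GPU SIFT should be available.")
else:
    print("No CUDA marker found; COLMAP may be falling back to CPU SIFT.")
```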

In fact, on the computer where the matching block takes just a few milliseconds, GPU usage increases significantly while COLMAP computes the poses. I still need to check the GPU usage on the computer where COLMAP takes a few hundred milliseconds per match.
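
To settle that, one option is to sample GPU utilization while COLMAP's matcher is running. A minimal sketch, assuming an NVIDIA card with nvidia-smi on the PATH (the query flags below are standard nvidia-smi options):

```python
import subprocess
import time

# Sample GPU utilization once per second for 30 seconds while COLMAP's
# matcher runs in another terminal. If matching really is on the GPU,
# utilization should climb well above idle.
for _ in range(30):
    util = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True,
    ).stdout.strip()
    print(f"GPU utilization: {util}%")
    time.sleep(1)
```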