yuval-alaluf / SAM

Official Implementation for "Only a Matter of Style: Age Transformation Using a Style-Based Regression Model" (SIGGRAPH 2021) https://arxiv.org/abs/2102.02754

Home Page: https://yuval-alaluf.github.io/SAM/

No module named 'models.fused_act'

Raviteja-banda opened this issue:

Traceback (most recent call last):
File "scripts/inference.py", line 19, in
from models.psp import pSp
File ".\models_init_.py", line 1, in
from .fused_act import FusedLeakyReLU, fused_leaky_relu

Hi, I have resolved the above error, but I am now getting a new one. Could someone help me understand what this error means and suggest a possible solution?
Note: my PC has CUDA installed.

Command used:
(sam_env) D:\Projects\SAM-master>python scripts/inference.py --checkpoint_path=pretrained_models/sam_ffhq_aging.pt --data_path=test --test_batch_size=4 --test_workers=4 --couple_output --target_age=0,10,20,30,40,50,60,70,80

Error:
No CUDA runtime is found, using CUDA_HOME='C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6'
Traceback (most recent call last):
File "scripts/inference.py", line 19, in
from models.psp import pSp
File ".\models_init_.py", line 1, in
from .stylegan2.op.fused_act import FusedLeakyReLU, fused_leaky_relu
File ".\models\stylegan2_init_.py", line 1, in
from .op.fused_act import FusedLeakyReLU, fused_leaky_relu
File ".\models\stylegan2\op_init_.py", line 1, in
from .fused_act import FusedLeakyReLU, fused_leaky_relu
File ".\models\stylegan2\op\fused_act.py", line 13, in
os.path.join(module_path, 'fused_bias_act_kernel.cu'),
File "C:\Users\chandana.conda\envs\sam_env\lib\site-packages\torch\utils\cpp_extension.py", line 1136, in load
keep_intermediates=keep_intermediates)
File "C:\Users\chandana.conda\envs\sam_env\lib\site-packages\torch\utils\cpp_extension.py", line 1347, in _jit_compile
is_standalone=is_standalone)
File "C:\Users\chandana.conda\envs\sam_env\lib\site-packages\torch\utils\cpp_extension.py", line 1445, in _write_ninja_file_and_build_library
is_standalone=is_standalone)
File "C:\Users\chandana.conda\envs\sam_env\lib\site-packages\torch\utils\cpp_extension.py", line 1834, in _write_ninja_file_to_build_library
cuda_flags = common_cflags + COMMON_NVCC_FLAGS + _get_cuda_arch_flags()
File "C:\Users\chandana.conda\envs\sam_env\lib\site-packages\torch\utils\cpp_extension.py", line 1606, in _get_cuda_arch_flags
arch_list[-1] += '+PTX'
IndexError: list index out of range

I haven't encountered this particular issue before, but errors like this usually happen because the CUDA version and the PyTorch version are incompatible. I would check that you are able to correctly run PyTorch with a GPU in your environment, for example with the quick check below.
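
A minimal sketch of such a check (assuming a standard PyTorch install; this only reads what your build reports and is consistent with the "No CUDA runtime is found" message and the empty architecture list behind the IndexError above):

import torch

# A CPU-only PyTorch build reports no CUDA support; in that case the JIT
# extension builder in torch.utils.cpp_extension has no GPU architectures
# to target, which matches the IndexError in _get_cuda_arch_flags() above.
print("torch version  :", torch.__version__)        # e.g. '1.10.0+cpu' vs. '1.10.0+cu113'
print("built with CUDA:", torch.version.cuda)       # None on a CPU-only build
print("CUDA available :", torch.cuda.is_available())

if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))

If the "CUDA available" line prints False, the usual fix is to install a CUDA-enabled PyTorch build that matches your local toolkit (v11.6, judging from the CUDA_HOME line above).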