xiuqhou / Salience-DETR

[CVPR 2024] Official implementation of the paper "Salience DETR: Enhancing Detection Transformer with Hierarchical Salience Filtering Refinement"

Home Page: https://arxiv.org/abs/2403.16131

Error during inference

yjdzyr opened this issue

/home/yjd/anaconda3/envs/python3.8/bin/python3.8 /home/yjd/yjd_software/item/Salience-DETR-main/inference.py
Detected kernel version 5.4.0, which is below the recommended minimum of 5.5.0; this can cause the process to hang. It is recommended to upgrade the kernel to the minimum version or higher.
Using /home/yjd/.cache/torch_extensions/py38_cu121 as PyTorch extensions root...
Detected CUDA files, patching ldflags
Emitting ninja build file /home/yjd/.cache/torch_extensions/py38_cu121/MultiScaleDeformableAttention/build.ninja...
Building extension module MultiScaleDeformableAttention...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
Loading extension module MultiScaleDeformableAttention...
ninja: no work to do.
[2024-05-13 11:39:56 det.models.backbones.base_backbone]: Backbone architecture: resnet50
[2024-05-13 11:39:57 det.util.utils]:
/home/yjd/anaconda3/envs/python3.8/lib/python3.8/site-packages/torch/overrides.py:110: UserWarning: 'has_cuda' is deprecated, please use 'torch.backends.cuda.is_built()'
torch.has_cuda,
/home/yjd/anaconda3/envs/python3.8/lib/python3.8/site-packages/torch/overrides.py:111: UserWarning: 'has_cudnn' is deprecated, please use 'torch.backends.cudnn.is_available()'
torch.has_cudnn,
/home/yjd/anaconda3/envs/python3.8/lib/python3.8/site-packages/torch/overrides.py:117: UserWarning: 'has_mps' is deprecated, please use 'torch.backends.mps.is_built()'
torch.has_mps,
/home/yjd/anaconda3/envs/python3.8/lib/python3.8/site-packages/torch/overrides.py:118: UserWarning: 'has_mkldnn' is deprecated, please use 'torch.backends.mkldnn.is_available()'
torch.has_mkldnn,
[2024-05-13 11:39:58 det.util.utils]:
0%| | 0/1 [00:00<?, ?it/s]/home/yjd/yjd_software/item/Salience-DETR-main/models/bricks/position_encoding.py:50: UserWarning: cumsum_cuda_kernel does not have a deterministic implementation, but you set 'torch.use_deterministic_algorithms(True, warn_only=True)'. You can file an issue at https://github.com/pytorch/pytorch/issues to help us prioritize adding deterministic support for this operation. (Triggered internally at ../aten/src/ATen/Context.cpp:71.)
y_embed = not_mask.cumsum(1, dtype=torch.float32)
/home/yjd/yjd_software/item/Salience-DETR-main/models/bricks/position_encoding.py:51: UserWarning: cumsum_cuda_kernel does not have a deterministic implementation, but you set 'torch.use_deterministic_algorithms(True, warn_only=True)'. You can file an issue at https://github.com/pytorch/pytorch/issues to help us prioritize adding deterministic support for this operation. (Triggered internally at ../aten/src/ATen/Context.cpp:71.)
x_embed = not_mask.cumsum(2, dtype=torch.float32)
/home/yjd/anaconda3/envs/python3.8/lib/python3.8/site-packages/torch/nn/modules/linear.py:114: UserWarning: Deterministic behavior was enabled with either torch.use_deterministic_algorithms(True) or at::Context::setDeterministicAlgorithms(true), but this operation is not deterministic because it uses CuBLAS and you have CUDA >= 10.2. To enable deterministic behavior in this case, you must set an environment variable before running your PyTorch application: CUBLAS_WORKSPACE_CONFIG=:4096:8 or CUBLAS_WORKSPACE_CONFIG=:16:8. For more information, go to https://docs.nvidia.com/cuda/cublas/index.html#cublasApi_reproducibility (Triggered internally at ../aten/src/ATen/Context.cpp:156.)
return F.linear(input, self.weight, self.bias)
/home/yjd/anaconda3/envs/python3.8/lib/python3.8/site-packages/torch/nn/functional.py:5405: UserWarning: Deterministic behavior was enabled with either torch.use_deterministic_algorithms(True) or at::Context::setDeterministicAlgorithms(true), but this operation is not deterministic because it uses CuBLAS and you have CUDA >= 10.2. To enable deterministic behavior in this case, you must set an environment variable before running your PyTorch application: CUBLAS_WORKSPACE_CONFIG=:4096:8 or CUBLAS_WORKSPACE_CONFIG=:16:8. For more information, go to https://docs.nvidia.com/cuda/cublas/index.html#cublasApi_reproducibility (Triggered internally at ../aten/src/ATen/Context.cpp:156.)
attn_output_weights = torch.bmm(q_scaled, k.transpose(-2, -1))
/home/yjd/anaconda3/envs/python3.8/lib/python3.8/site-packages/torch/nn/functional.py:5410: UserWarning: Deterministic behavior was enabled with either torch.use_deterministic_algorithms(True) or at::Context::setDeterministicAlgorithms(true), but this operation is not deterministic because it uses CuBLAS and you have CUDA >= 10.2. To enable deterministic behavior in this case, you must set an environment variable before running your PyTorch application: CUBLAS_WORKSPACE_CONFIG=:4096:8 or CUBLAS_WORKSPACE_CONFIG=:16:8. For more information, go to https://docs.nvidia.com/cuda/cublas/index.html#cublasApi_reproducibility (Triggered internally at ../aten/src/ATen/Context.cpp:156.)
attn_output = torch.bmm(attn_output_weights, v)
/home/yjd/yjd_software/item/Salience-DETR-main/models/bricks/basic.py:52: UserWarning: Deterministic behavior was enabled with either torch.use_deterministic_algorithms(True) or at::Context::setDeterministicAlgorithms(true), but this operation is not deterministic because it uses CuBLAS and you have CUDA >= 10.2. To enable deterministic behavior in this case, you must set an environment variable before running your PyTorch application: CUBLAS_WORKSPACE_CONFIG=:4096:8 or CUBLAS_WORKSPACE_CONFIG=:16:8. For more information, go to https://docs.nvidia.com/cuda/cublas/index.html#cublasApi_reproducibility (Triggered internally at ../aten/src/ATen/Context.cpp:156.)
context = torch.matmul(input_x, context_mask)
100%|██████████| 1/1 [00:04<00:00, 4.10s/it]
0%| | 0/1 [00:00<?, ?it/s]/home/yjd/yjd_software/item/Salience-DETR-main/util/visualize.py:126: FutureWarning: The input object of type 'Tensor' is an array-like implementing one of the corresponding protocols (__array__, __array_interface__ or __array_struct__); but not a sequence (or 0-D). In the future, this object will be coerced as if it was first converted using np.array(obj). To retain the old behaviour, you have to either modify the type 'Tensor', or assign to an empty array created with np.empty(correct_shape, dtype=object).
boxes = np.array(boxes, dtype=np.int32)
0%| | 0/1 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/home/yjd/yjd_software/item/Salience-DETR-main/inference.py", line 150, in
inference()
File "/home/yjd/yjd_software/item/Salience-DETR-main/inference.py", line 146, in inference
[None for _ in tqdm(data_loader)]
File "/home/yjd/yjd_software/item/Salience-DETR-main/inference.py", line 146, in
[None for _ in tqdm(data_loader)]
File "/home/yjd/anaconda3/envs/python3.8/lib/python3.8/site-packages/tqdm/std.py", line 1181, in iter
for obj in iterable:
File "/home/yjd/anaconda3/envs/python3.8/lib/python3.8/site-packages/accelerate/data_loader.py", line 454, in iter
current_batch = next(dataloader_iter)
File "/home/yjd/anaconda3/envs/python3.8/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 630, in next
data = self._next_data()
File "/home/yjd/anaconda3/envs/python3.8/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1345, in _next_data
return self._process_data(data)
File "/home/yjd/anaconda3/envs/python3.8/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1371, in _process_data
data.reraise()
File "/home/yjd/anaconda3/envs/python3.8/lib/python3.8/site-packages/torch/_utils.py", line 694, in reraise
raise exception
ValueError: Caught ValueError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/yjd/anaconda3/envs/python3.8/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 308, in _worker_loop
data = fetcher.fetch(index)
File "/home/yjd/anaconda3/envs/python3.8/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 54, in fetch
return self.collate_fn(data)
File "/home/yjd/yjd_software/item/Salience-DETR-main/inference.py", line 145, in
data_loader.collate_fn = lambda x: visualize_single_image(**x[0])
File "/home/yjd/yjd_software/item/Salience-DETR-main/inference.py", line 125, in visualize_single_image
image = plot_bounding_boxes_on_image_cv2(
File "/home/yjd/yjd_software/item/Salience-DETR-main/util/visualize.py", line 126, in plot_bounding_boxes_on_image_cv2
boxes = np.array(boxes, dtype=np.int32)
ValueError: only one element tensors can be converted to Python scalars

Process finished with exit code 1

The error message shows that the returned boxes are in an unexpected format and cannot be converted to a numpy.ndarray. This is probably because older numpy versions do not support converting a list of tensors directly to numpy; please try upgrading numpy:

pip install -U numpy
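
You can also confirm which numpy version is actually active in the environment with a quick check (a minimal sketch):

import numpy as np
print(np.__version__)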

If the error still occurs after that, please print boxes around line 124 of util/visualize.py (just before the failing line boxes = np.array(boxes, dtype=np.int32)) to see what form it takes:

print(boxes)
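
If it helps, a slightly more detailed variant of that check (a hypothetical snippet, inserted at the same spot) also prints the container and element types:

# placed just before: boxes = np.array(boxes, dtype=np.int32)
print(type(boxes))
if isinstance(boxes, (list, tuple)) and len(boxes) > 0:
    # likely a list of per-box tensors
    print(type(boxes[0]), getattr(boxes[0], "shape", None))
else:
    # likely a single tensor or array
    print(getattr(boxes, "shape", None), getattr(boxes, "dtype", None))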

In my tests, numpy==1.26 runs without this error, while numpy<=1.24 triggers it. I have modified the code involved in the error to stay compatible with older numpy versions; please pull the latest code and try again.
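
If you cannot pull the new code right away, one possible workaround (a sketch only, not necessarily identical to the change in the repository; the helper name boxes_to_int32 is just for illustration) is to convert the tensors to numpy yourself before calling np.array, so that numpy never has to coerce torch.Tensor objects on its own:

import numpy as np
import torch

def boxes_to_int32(boxes):
    """Convert boxes to an int32 numpy array on both old and new numpy.

    Handles either a single (N, 4) tensor or a list/tuple of per-box
    tensors by moving them to CPU numpy first.
    """
    if isinstance(boxes, torch.Tensor):
        return boxes.detach().cpu().numpy().astype(np.int32)
    if len(boxes) > 0 and isinstance(boxes[0], torch.Tensor):
        stacked = torch.stack([b.detach().cpu() for b in boxes])
        return stacked.numpy().astype(np.int32)
    return np.array(boxes, dtype=np.int32)

In util/visualize.py this would take the place of the direct boxes = np.array(boxes, dtype=np.int32) call.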

Thanks for helping this repository fix a bug. If anything else comes up, feel free to open another issue!

Many thanks to the author for the explanation, it solved my problem! This is an excellent project! Following your answer, I checked my numpy version and it was indeed 1.24. When I tried to upgrade, I found that the highest numpy version available for Python 3.8 from the Tsinghua mirror is numpy==1.24.4; numpy 1.25 requires Python 3.9.

Since the environment in your project is created with conda create -n salience_detr python=3.8, that is how this error arose. In any case, the old-numpy problem is now resolved.

Thank you very much!