open-mmlab / mmaction2

OpenMMLab's Next Generation Video Understanding Toolbox and Benchmark

Home Page: https://mmaction2.readthedocs.io


[Bug]

maple0leaves opened this issue

Branch

main branch (1.x version, such as v1.0.0, or dev-1.x branch)

Prerequisite

Environment

mmcv-full 1.5.0
onnxruntime-gpu 1.12.1
absl-py 2.0.0
addict 2.4.0
alabaster 0.7.12
anaconda-client 1.7.2
anaconda-navigator 1.10.0
anaconda-project 0.8.3
appdirs 1.4.4
argh 0.26.2
argon2-cffi 20.1.0
asn1crypto 1.4.0
astroid 2.4.2
astropy 4.0.2
async-generator 1.10
atomicwrites 1.4.0
attrs 20.3.0
audioread 3.0.1
autopep8 1.5.4
av 12.0.0
Babel 2.8.1
backcall 0.2.0
backports.functools-lru-cache 1.6.1
backports.shutil-get-terminal-size 1.0.0
backports.tempfile 1.0
backports.weakref 1.0.post1
bcrypt 3.2.0
beautifulsoup4 4.9.3
bitarray 1.6.1
bkcharts 0.2
bleach 3.2.1
bokeh 2.2.3
boto 2.49.0
Bottleneck 1.3.2
brotlipy 0.7.0
cachetools 5.3.1
certifi 2024.2.2
cffi 1.14.3
chardet 3.0.4
chumpy 0.70
click 7.1.2
cloudpickle 1.6.0
clyent 1.2.2
colorama 0.4.4
coloredlogs 15.0.1
comtypes 1.1.7
conda 4.12.0
conda-build 3.20.5
conda-package-handling 1.7.2
conda-verify 3.4.2
contextlib2 0.6.0.post1
coremltools 6.3.0
coverage 7.2.2
cryptography 3.1.1
cycler 0.10.0
Cython 0.29.21
cytoolz 0.11.0
dask 2.30.0
decorator 4.4.2
decord 0.6.0
defusedxml 0.6.0
diff-match-patch 20200713
distributed 2.30.1
docker-pycreds 0.4.0
docutils 0.16
easydict 1.10
einops 0.7.0
entrypoints 0.3
et-xmlfile 1.0.1
fastcache 1.1.0
filelock 3.0.12
flake8 3.8.4
Flask 1.1.2
flatbuffers 23.5.26
fsspec 0.8.3
ftfy 6.2.0
future 0.18.2
fvcore 0.1.5.post20221221
gevent 20.9.0
gitdb 4.0.11
GitPython 3.1.43
glob2 0.7
gmpy2 2.1.2
google-auth 2.23.0
google-auth-oauthlib 1.0.0
greenlet 0.4.17
grpcio 1.58.0
h5py 2.10.0
HeapDict 1.0.1
html5lib 1.1
humanfriendly 10.0
idna 2.10
imageio 2.9.0
imageio-ffmpeg 0.4.9
imagesize 1.2.0
imgaug 0.4.0
importlib-metadata 6.8.0
iniconfig 1.1.1
interrogate 1.7.0
intervaltree 3.1.0
iopath 0.1.10
ipykernel 5.3.4
ipython 7.19.0
ipython-genutils 0.2.0
ipywidgets 7.5.1
iso8601 2.0.0
isort 4.3.21
itsdangerous 1.1.0
jdcal 1.4.1
jedi 0.17.1
Jinja2 2.11.2
joblib 0.17.0
json-tricks 3.17.3
json5 0.9.5
jsonschema 3.2.0
jupyter 1.0.0
jupyter-client 6.1.7
jupyter-console 6.2.0
jupyter-core 4.6.3
jupyterlab 2.2.6
jupyterlab-pygments 0.1.2
jupyterlab-server 1.2.0
keyring 21.4.0
Kivy 2.3.0
kivy-deps.angle 0.4.0
kivy-deps.glew 0.3.1
kivy-deps.sdl2 0.7.0
Kivy-Garden 0.1.5
kiwisolver 1.3.0
lazy-loader 0.4
lazy-object-proxy 1.4.3
libarchive-c 2.9
librosa 0.10.1
llvmlite 0.34.0
locket 0.2.0
lxml 4.6.1
Markdown 3.4.4
markdown-it-py 3.0.0
MarkupSafe 1.1.1
matplotlib 3.3.2
mccabe 0.6.1
mdurl 0.1.2
menuinst 1.4.16
mistune 0.8.4
mkl-fft 1.3.8
mkl-random 1.2.4
mkl-service 2.4.0
mmcv 2.0.1
mmdet 2.23.0
mmengine 0.7.1
mmpose 0.24.0
mock 4.0.2
more-itertools 8.6.0
moviepy 1.0.3
mpmath 1.1.0
msgpack 1.0.0
multipledispatch 0.6.0
munkres 1.1.4
mysql-connector-python 8.0.27
navigator-updater 0.2.1
nbclient 0.5.1
nbconvert 6.0.7
nbformat 5.0.8
nest-asyncio 1.4.2
networkx 2.5
nltk 3.5
nose 1.3.7
notebook 6.1.4
numba 0.51.2
numexpr 2.8.4
numpy 1.23.5
numpydoc 1.1.0
oauthlib 3.2.2
olefile 0.46
onnx 1.13.1
onnxruntime-gpu 1.14.0
onnxsim 0.4.33
openai-clip 1.0.1
opencv-contrib-python 4.9.0.80
opencv-python 4.8.0.76
openpyxl 3.0.5
packaging 20.4
pandas 2.0.3
pandocfilters 1.4.3
parameterized 0.8.1
paramiko 2.7.2
parso 0.7.0
partd 1.1.0
path 15.0.0
pathlib2 2.3.5
pathtools 0.1.2
patsy 0.5.1
pep8 1.7.1
pexpect 4.8.0
pickleshare 0.7.5
pillow 10.2.0
PIMS 0.6.1
pip 20.2.4
pkginfo 1.6.1
platformdirs 3.10.0
pluggy 0.13.1
ply 3.11
pooch 1.7.0
portalocker 2.8.2
proglog 0.1.10
prometheus-client 0.8.0
prompt-toolkit 3.0.8
protobuf 3.20.3
psutil 5.7.2
py 1.9.0
py-cpuinfo 9.0.0
pyasn1 0.5.0
pyasn1-modules 0.3.0
pycocotools 2.0.7
pycodestyle 2.6.0
pycosat 0.6.3
pycparser 2.20
pycurl 7.43.0.6
pydocstyle 5.1.1
pyflakes 2.2.0
Pygments 2.16.1
pylint 2.6.0
PyMySQL 1.0.3
PyNaCl 1.4.0
pyodbc 4.0.0-unsupported
pyOpenSSL 19.1.0
pyparsing 2.4.7
pypiwin32 223
pyreadline 2.1
pyreadline3 3.4.1
pyrsistent 0.17.3
PySocks 1.7.1
pytest 0.0.0
pytest-runner 5.3.1
python-dateutil 2.9.0.post0
python-jsonrpc-server 0.4.0
python-language-server 0.35.1
pyttsx3 2.90
PyTurboJPEG 1.7.3
pytz 2020.1
PyWavelets 1.1.1
pywin32 227
pywin32-ctypes 0.2.0
pywinpty 0.5.7
PyYAML 5.3.1
pyzmq 19.0.2
QDarkStyle 2.8.1
QtAwesome 1.0.1
qtconsole 4.7.7
QtPy 1.9.0
regex 2020.10.15
requests 2.24.0
requests-oauthlib 1.3.1
rich 13.5.2
rope 0.18.0
rsa 4.9
Rtree 0.9.4
ruamel-yaml 0.15.87
scikit-image 0.17.2
scikit-learn 0.23.2
scipy 1.10.1
seaborn 0.11.0
Send2Trash 1.5.0
sentry-sdk 1.44.1
serial 0.0.97
setproctitle 1.3.3
setuptools 50.3.1.post20201107
shapely 2.0.1
simplegeneric 0.8.1
singledispatch 3.4.0.3
sip 4.19.13
six 1.15.0
slicerator 1.1.0
smmap 5.0.1
snowballstemmer 2.0.0
sortedcollections 1.2.1
sortedcontainers 2.2.2
soundfile 0.12.1
soupsieve 2.0.1
soxr 0.3.7
Sphinx 3.2.1
sphinxcontrib-applehelp 1.0.2
sphinxcontrib-devhelp 1.0.2
sphinxcontrib-htmlhelp 1.0.3
sphinxcontrib-jsmath 1.0.1
sphinxcontrib-qthelp 1.0.3
sphinxcontrib-serializinghtml 1.1.4
sphinxcontrib-websupport 1.2.4
spyder 4.1.5
spyder-kernels 1.9.4
SQLAlchemy 1.3.20
statsmodels 0.12.0
sympy 1.6.2
tables 3.6.1
tabulate 0.9.0
tblib 1.7.0
tensorboard 2.14.0
tensorboard-data-server 0.7.1
termcolor 2.3.0
terminado 0.9.1
terminaltables 3.1.10
testpath 0.4.4
thop 0.1.1.post2209072238
threadpoolctl 2.1.0
tifffile 2020.10.1
toml 0.10.1
tomli 2.0.1
toolz 0.11.1
torch 1.8.0+cu111
torchsummary 1.5.1
torchvision 0.9.0+cu111
tornado 6.0.4
tqdm 4.66.2
traitlets 5.0.5
typing-extensions 4.9.0
tzdata 2024.1
ujson 4.0.1
ultralytics 8.1.24
unicodecsv 0.14.1
urllib3 2.2.1
wandb 0.16.6
watchdog 0.10.3
wcwidth 0.2.13
yacs 0.1.8
yapf 0.30.0
zict 2.0.0
zipp 3.4.0
zope.event 4.5.0
zope.interface 5.1.2

OS: Windows 10

Describe the bug

I ran webcam_demo.py without any code changes and got the error below. I don't know how to fix this bug; could you please help me? Thank you.
I want to do real-time detection.

Reproduces the problem - code sample

# Copyright (c) OpenMMLab. All rights reserved.
import argparse
import time
from collections import deque
from operator import itemgetter
from threading import Thread

import cv2
import numpy as np
import torch
from mmengine import Config, DictAction
from mmengine.dataset import Compose, pseudo_collate

from mmaction.apis import init_recognizer
from mmaction.utils import get_str_type

FONTFACE = cv2.FONT_HERSHEY_COMPLEX_SMALL
FONTSCALE = 1
FONTCOLOR = (255, 255, 255)  # BGR, white
MSGCOLOR = (128, 128, 128)  # BGR, gray
THICKNESS = 1
LINETYPE = 1
EXCLUED_STEPS = [
    'OpenCVInit', 'OpenCVDecode', 'DecordInit', 'DecordDecode', 'PyAVInit',
    'PyAVDecode', 'RawFrameDecode'
]


def parse_args():
    parser = argparse.ArgumentParser(description='MMAction2 webcam demo')
    parser.add_argument(
        '--config',
        default=r'D:\code\mmaction2\configs\skeleton\posec3d\slowonly_r50_8xb16-u48-240e_ntu60-xsub-keypoint.py',
        help='test config file path')
    parser.add_argument(
        '--checkpoint',
        default=r'D:\code\mmaction2\checkpoints\slowonly_r50_8xb16-u48-240e_ntu60-xsub-keypoint_20220815-38db104b.pth',
        help='checkpoint file/url')
    parser.add_argument(
        '--label',
        default=r'D:\code\mmaction2\tools\data\skeleton\label_map_ntu60.txt',
        help='label file')
    parser.add_argument(
        '--device', type=str, default='cuda:0', help='CPU/CUDA device option')
    parser.add_argument(
        '--camera-id', type=int, default=0, help='camera device id')
    parser.add_argument(
        '--threshold',
        type=float,
        default=0.01,
        help='recognition score threshold')
    parser.add_argument(
        '--average-size',
        type=int,
        default=1,
        help='number of latest clips to be averaged for prediction')
    parser.add_argument(
        '--drawing-fps',
        type=int,
        default=20,
        help='Set upper bound FPS value of the output drawing')
    parser.add_argument(
        '--inference-fps',
        type=int,
        default=8,
        help='Set upper bound FPS value of model inference')
    parser.add_argument(
        '--cfg-options',
        nargs='+',
        action=DictAction,
        default={},
        help='override some settings in the used config, the key-value pair '
        'in xxx=yyy format will be merged into config file. For example, '
        "'--cfg-options model.backbone.depth=18 model.backbone.with_cp=True'")
    args = parser.parse_args()
    assert args.drawing_fps >= 0 and args.inference_fps >= 0, \
        'upper bound FPS value of drawing and inference should be set as ' \
        'positive number, or zero for no limit'
    return args


def show_results():
    print('Press "Esc", "q" or "Q" to exit')

    text_info = {}
    cur_time = time.time()
    while True:
        msg = 'Waiting for action ...'
        _, frame = camera.read()
        frame_queue.append(np.array(frame[:, :, ::-1]))

        if len(result_queue) != 0:
            text_info = {}
            results = result_queue.popleft()
            for i, result in enumerate(results):
                selected_label, score = result
                if score < threshold:
                    break
                location = (0, 40 + i * 20)
                text = selected_label + ': ' + str(round(score * 100, 2))
                text_info[location] = text
                cv2.putText(frame, text, location, FONTFACE, FONTSCALE,
                            FONTCOLOR, THICKNESS, LINETYPE)

        elif len(text_info) != 0:
            for location, text in text_info.items():
                cv2.putText(frame, text, location, FONTFACE, FONTSCALE,
                            FONTCOLOR, THICKNESS, LINETYPE)

        else:
            cv2.putText(frame, msg, (0, 40), FONTFACE, FONTSCALE, MSGCOLOR,
                        THICKNESS, LINETYPE)

        cv2.imshow('camera', frame)
        ch = cv2.waitKey(1)

        if ch == 27 or ch == ord('q') or ch == ord('Q'):
            camera.release()
            cv2.destroyAllWindows()
            break

        if drawing_fps > 0:
            # add a limiter for actual drawing fps <= drawing_fps
            sleep_time = 1 / drawing_fps - (time.time() - cur_time)
            if sleep_time > 0:
                time.sleep(sleep_time)
            cur_time = time.time()


def inference():
    score_cache = deque()
    scores_sum = 0
    cur_time = time.time()
    while True:
        cur_windows = []

        while len(cur_windows) == 0:
            if len(frame_queue) == sample_length:
                cur_windows = list(np.array(frame_queue))
                if data['img_shape'] is None:
                    data['img_shape'] = frame_queue.popleft().shape[:2]

        cur_data = data.copy()
        cur_data['imgs'] = cur_windows
        cur_data = test_pipeline(cur_data)
        cur_data = pseudo_collate([cur_data])

        # Forward the model
        with torch.no_grad():
            result = model.test_step(cur_data)[0]
        scores = result.pred_score.tolist()
        scores = np.array(scores)
        score_cache.append(scores)
        scores_sum += scores

        if len(score_cache) == average_size:
            scores_avg = scores_sum / average_size
            num_selected_labels = min(len(label), 5)

            score_tuples = tuple(zip(label, scores_avg))
            score_sorted = sorted(
                score_tuples, key=itemgetter(1), reverse=True)
            results = score_sorted[:num_selected_labels]

            result_queue.append(results)
            scores_sum -= score_cache.popleft()

            if inference_fps > 0:
                # add a limiter for actual inference fps <= inference_fps
                sleep_time = 1 / inference_fps - (time.time() - cur_time)
                if sleep_time > 0:
                    time.sleep(sleep_time)
                cur_time = time.time()


def main():
    global average_size, threshold, drawing_fps, inference_fps, \
        device, model, camera, data, label, sample_length, \
        test_pipeline, frame_queue, result_queue

    args = parse_args()
    average_size = args.average_size
    threshold = args.threshold
    drawing_fps = args.drawing_fps
    inference_fps = args.inference_fps

    device = torch.device(args.device)

    cfg = Config.fromfile(args.config)
    if args.cfg_options is not None:
        cfg.merge_from_dict(args.cfg_options)

    # Build the recognizer from a config file and checkpoint file/url
    model = init_recognizer(cfg, args.checkpoint, device=args.device)
    camera = cv2.VideoCapture(args.camera_id)
    data = dict(img_shape=None, modality='RGB', label=-1)

    with open(args.label, 'r') as f:
        label = [line.strip() for line in f]

    # prepare test pipeline from non-camera pipeline
    cfg = model.cfg
    sample_length = 0
    pipeline = cfg.test_pipeline
    pipeline_ = pipeline.copy()
    for step in pipeline:
        if 'SampleFrames' in get_str_type(step['type']):
            sample_length = step['clip_len'] * step['num_clips']
            data['num_clips'] = step['num_clips']
            data['clip_len'] = step['clip_len']
            pipeline_.remove(step)
        if get_str_type(step['type']) in EXCLUED_STEPS:
            # remove step to decode frames
            pipeline_.remove(step)
    test_pipeline = Compose(pipeline_)

    assert sample_length > 0

    try:
        frame_queue = deque(maxlen=sample_length)
        result_queue = deque(maxlen=1)
        pw = Thread(target=show_results, args=(), daemon=True)
        pr = Thread(target=inference, args=(), daemon=True)
        pw.start()
        pr.start()
        pw.join()
    except KeyboardInterrupt:
        pass


if __name__ == '__main__':
    main()

Reproduces the problem - command or script

No response

Reproduces the problem - error message

Exception in thread Thread-2:
Traceback (most recent call last):
  File "D:\Anaconda3\lib\threading.py", line 932, in _bootstrap_inner
    self.run()
  File "D:\Anaconda3\lib\threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "D:\code\mmaction2\demo\webcam_demo.py", line 137, in inference
    cur_data = test_pipeline(cur_data)
  File "D:\Anaconda3\lib\site-packages\mmengine\dataset\base_dataset.py", line 59, in __call__
    data = t(data)
  File "D:\Anaconda3\lib\site-packages\mmcv\transforms\base.py", line 12, in __call__
    return self.transform(results)
  File "D:\code\mmaction2\mmaction\datasets\transforms\pose_transforms.py", line 1260, in transform
    results['total_frames'] = results['keypoint'].shape[1]
KeyError: 'keypoint'
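For context, the traceback seems to boil down to an input mismatch: the webcam demo feeds the test pipeline a dict built around RGB frames (`imgs`), while the pose transform in the traceback reads a `keypoint` array. A minimal sketch of that mismatch (the `pose_transform_stub` function and all shapes below are hypothetical, only mimicking the single line shown in the traceback, not the real mmaction2 transform):

```python
import numpy as np


def pose_transform_stub(results):
    # mimics pose_transforms.py line 1260 from the traceback:
    # the transform derives total_frames from the 'keypoint' array
    results['total_frames'] = results['keypoint'].shape[1]
    return results


# what the RGB webcam demo builds: frames under 'imgs', no 'keypoint'
rgb_input = dict(imgs=[np.zeros((8, 8, 3))], img_shape=(8, 8), modality='RGB')
try:
    pose_transform_stub(rgb_input)
except KeyError as e:
    print('missing key:', e)  # missing key: 'keypoint'

# what a skeleton pipeline expects: a keypoint array of (assumed) shape
# (num_persons, num_frames, num_keypoints, 2)
skeleton_input = dict(keypoint=np.zeros((1, 48, 17, 2)))
out = pose_transform_stub(skeleton_input)
print(out['total_frames'])  # 48
```

If this reading is right, the demo would need keypoint input (e.g. from a pose estimator) rather than raw webcam frames when paired with a skeleton-based (PoseC3D) config.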

Additional information

No response