scrapy / scrapyd

A service daemon to run Scrapy spiders

Home Page: https://scrapyd.readthedocs.io/en/stable/


When I deploy with Scrapyd, it always reports an error

joykerl opened this issue

When I deploy with Scrapyd, I always get an error with the following information. How can I resolve this issue? Thanks.

Python 3.9
Scrapyd 1.4.3
Linux

Server response (200):
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/local/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/usr/local/lib/python3.8/site-packages/scrapyd/runner.py", line 38, in <module>
    main()
  File "/usr/local/lib/python3.8/site-packages/scrapyd/runner.py", line 34, in main
    execute()
  File "/usr/local/lib/python3.8/site-packages/scrapy/cmdline.py", line 128, in execute
    settings = get_project_settings()
  File "/usr/local/lib/python3.8/site-packages/scrapy/utils/project.py", line 71, in get_project_settings
    settings.setmodule(settings_module_path, priority="project")
  File "/usr/local/lib/python3.8/site-packages/scrapy/settings/__init__.py", line 385, in setmodule
    module = import_module(module)
  File "/usr/local/lib/python3.8/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 655, in _load_unlocked
  File "<frozen importlib._bootstrap>", line 618, in _load_backward_compatible
  File "<frozen zipimport>", line 259, in load_module
  File "/var/lib/scrapyd/eggs/remotejob/2024-02-07T11_32_07.egg/remotespider/settings.py", line 32, in <module>
    # Only use for 'json_url' in the generated file 'stats.json' resides in SCRAPYD_LOGS_DIR.
  File "/usr/local/lib/python3.8/site-packages/loguru/_logger.py", line 795, in add
    wrapped_sink = FileSink(path, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/loguru/_file_sink.py", line 191, in __init__
    self._create_dirs(path)
  File "/usr/local/lib/python3.8/site-packages/loguru/_file_sink.py", line 223, in _create_dirs
    os.makedirs(dirname, exist_ok=True)
  File "/usr/local/lib/python3.8/os.py", line 213, in makedirs
    makedirs(head, exist_ok=exist_ok)
  File "/usr/local/lib/python3.8/os.py", line 223, in makedirs
    mkdir(name, mode)
NotADirectoryError: [Errno 20] Not a directory: '/var/lib/scrapyd/eggs/remotejob/2024-02-07T11_32_07.egg/remotespider'

I see loguru in the traceback, and neither Scrapy nor Scrapyd uses loguru. You need to check how you're using or configuring loguru.
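
For what it's worth, the traceback points at remotespider/settings.py line 32 calling loguru's logger.add(). When a project is deployed via scrapyd-deploy it runs from a zipped .egg, so a log path built from the settings module's own location (e.g. via __file__) points inside the archive, and loguru's os.makedirs() fails with NotADirectoryError. Below is a minimal sketch of the failing pattern and one possible fix, assuming the path is derived from __file__; the SCRAPY_LOG_DIR environment variable is illustrative, not an actual Scrapyd setting.

import os
from loguru import logger

# Failing pattern: a log path relative to the settings module. Under Scrapyd,
# __file__ resolves to something like
# /var/lib/scrapyd/eggs/remotejob/....egg/remotespider/settings.py,
# so the "parent directory" is really a zip file and cannot be created.
bad_path = os.path.join(os.path.dirname(__file__), "logs", "stats.json")
# logger.add(bad_path)  # raises NotADirectoryError when run from an egg

# Safer: log to an absolute directory on the real filesystem, created up front.
# SCRAPY_LOG_DIR is a hypothetical variable for this sketch; use whatever
# directory your deployment actually provides.
log_dir = os.environ.get("SCRAPY_LOG_DIR", "/var/lib/scrapyd/logs")
os.makedirs(log_dir, exist_ok=True)
logger.add(os.path.join(log_dir, "stats.json"), serialize=True)

The key point is that anything written at import time in settings.py runs inside the egg, so filesystem paths must not be derived from the module's location.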