grobid connection error
flckv opened this issue · comments
Great work, thank you for sharing.
I think GROBID is running, but the api container fails with:
api-1 | File "/api/app/api/document.py", line 41, in get_figures
api-1 | paragraphs, metadata = PDF2TextService().parse_paragraphs(pdf_path)
api-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
api-1 | File "/api/app/services/pdf2text.py", line 105, in parse_paragraphs
api-1 | response = requests.post(url, files=files)
File "/usr/local/lib/python3.11/site-packages/requests/adapters.py", line 519, in send
api-1 | raise ConnectionError(e, request=request)
api-1 | requests.exceptions.ConnectionError: HTTPConnectionPool(host='grobid', port=8070):
Max retries exceeded with url: /api/processFulltextDocument (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0xffff4caec710>: Failed to establish a new connection: [Errno -2] Name or service not known'))
api-1 | 192.168.65.1 - - [08/Jun/2024:14:24:59 +0000] "POST /backendapi/document/getfigures HTTP/1.0" 500 190 "http://localhost:8080/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/125.0.0.0 Safari/537.36"
proxy-1 | {"time": "2024-06-08T14:27:12+00:00","request_method": "GET","request_uri": "http://localhost/_next/webpack-hmr","status": 101,"request_length": 813,"body_bytes_sent": 6535,"user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/125.0.0.0 Safari/537.36","ip": "192.168.65.1","realip": "192.168.65.1","referer": "","host": "localhost","scheme": "http","forwarded-for": ""}
proxy-1 | {"time": "2024-06-08T14:29:12+00:00","request_method": "GET","request_uri": "http://localhost/_next/webpack-hmr","status": 101,"request_length": 813,"body_bytes_sent": 166,"user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/125.0.0.0 Safari/537.36","ip": "192.168.65.1","realip": "192.168.65.1","referer": "","host": "localhost","scheme": "http","forwarded-for": ""}
any suggestions on this?
Hi @flckv,
Thanks for the issue!
I haven't been able to reproduce this quickly (though I'll keep trying), but in the meantime, you could try the following steps to diagnose the issue:
- Open an interactive shell in the api container: docker exec -it <container-id> /bin/bash
- Check the GROBID service response, to see if it is reachable from the api container: curl -v http://grobid:8070/api/version
- Check the Docker network configuration:
docker network ls   # find the ID for 'figura11y_default'
docker network inspect <network-id>
Ensure that the grobid container is on the same network as the api container.
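The reachability check in step 2 can also be approximated from inside the api container with a few lines of stdlib Python (a sketch only: it tests DNS resolution and TCP connectivity to the compose alias grobid, not the actual HTTP response that curl would show):

```python
import socket

def can_reach(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    Mirrors the curl check above, but only verifies name resolution
    and TCP reachability, not the HTTP response body.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers DNS failure (gaierror), refusal, timeout
        return False

# Run inside the api container; 'grobid' is the compose service alias.
print(can_reach("grobid", 8070))
```

If this prints False with a near-instant return, the failure is DNS (the alias does not resolve); a slow False suggests the name resolves but the port is unreachable.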
Let me know what you find; glad to help debug further.
Thank you, @nikhilsinghmus
docker network ls
NETWORK ID NAME DRIVER SCOPE
27642bad6499 bridge bridge local
0d84bfa12447 figura11y_default bridge local
24285f91f8d9 host host local
f97d5204b0d7 none null local
docker network inspect 0d84bfa12447
[
{
"Name": "figura11y_default",
"Id": "0d84bfa12447f3b734dcb0f0842a70affef9b2e52cf466b973900f03b8d0acf1",
"Created": "2024-06-08T13:58:58.702392502Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.19.0.0/16",
"Gateway": "172.19.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"36b7284bfca535e779d099c7173c50ac953e6b98e76a4296f217e90f382380de": {
"Name":"figura11y-ui-1",
"EndpointID": "0a5cb7f7df8de563d211ae7417cf5a63153fbd5f21dd282705a9d1c1c666accc",
"MacAddress": "02:42:ac:13:00:04",
"IPv4Address": "172.19.0.4/16",
"IPv6Address": ""
},
"3e001f128dd8054d2168a313083317fb8af36964111d804250828ac67047a642": {
"Name":"figura11y-db-1"
,
"EndpointID": "ed03ccfcf14004148a2b70311afeecc8c74b872447dc9b9758c55e45c2b6e98f",
"MacAddress": "02:42:ac:13:00:03",
"IPv4Address": "172.19.0.3/16",
"IPv6Address": ""
},
"76644cc8a8bee5da86d12c113bfba2d02fae3d496c7d85309d19839bba04e942": {
"Name":"figura11y-api-1",
"EndpointID": "727bbdcb03d48cb21c593b29312d38e981eff528fbb236443dc1cc2cf01b18da",
"MacAddress": "02:42:ac:13:00:02",
"IPv4Address": "172.19.0.2/16",
"IPv6Address": ""
},
"92048cacbab52f9a59262d3cc7b17ce764e1b61126bdfc8462be96879080df75": {
"Name":"figura11y-proxy-1",
"EndpointID": "f48547c4f60b887530475e391d137fa73061b3d97ffdc49ba3768fa9bec04b82",
"MacAddress": "02:42:ac:13:00:06",
"IPv4Address": "172.19.0.6/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {
"com.docker.compose.network": "default",
"com.docker.compose.project": "figura11y",
"com.docker.compose.version": "2.27.0"
}
}
]
GROBID is still running, but there is no connection.
Thank you for the update, @flckv! I wonder if the GROBID container is running out of memory, perhaps? Could you try increasing the memory for Docker, and seeing if that resolves it?
Thank you @nikhilsinghmus, I switched to hardware with more RAM (24GB). I am using the same figura11y repo setup, but now face a different issue, with the db and SQLAlchemy:
podman start -a figura11y_db_1
2024-06-23 14:37:35.766 UTC [1] LOG: starting PostgreSQL 15.7 (Debian 15.7-1.pgdg120+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit
2024-06-23 14:37:35.767 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2024-06-23 14:37:35.767 UTC [1] LOG: listening on IPv6 address "::", port 5432
2024-06-23 14:37:35.768 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2024-06-23 14:37:35.771 UTC [24] LOG: database system was shut down at 2024-06-23 14:37:26 UTC
2024-06-23 14:37:35.782 UTC [1] LOG: database system is ready to accept connections
podman start -a figura11y_api_1
Error: Could not import 'api.db'.
Usage: flask [OPTIONS] COMMAND [ARGS]...
Try 'flask --help' for help.
Error: No such command 'db'.
Error: Could not import 'api.db'.
Usage: flask [OPTIONS] COMMAND [ARGS]...
Try 'flask --help' for help.
Error: No such command 'db'.
Error: Could not import 'api.db'.
Usage: flask [OPTIONS] COMMAND [ARGS]...
Try 'flask --help' for help.
Error: No such command 'db'.
podman start -a figura11y_grobid_1
[2024-06-23 14:37:37 +0000] [1] [INFO] Starting gunicorn 22.0.0
[2024-06-23 14:37:37 +0000] [1] [INFO] Listening at: http://0.0.0.0:8000 (1)
[2024-06-23 14:37:37 +0000] [1] [INFO] Using worker: sync
[2024-06-23 14:37:37 +0000] [5] [INFO] Booting worker with pid: 5
podman start -a figura11y_ui_1
podman start -a figura11y_proxy_1
- warn Invalid next.config.js options detected:
- warn The root value has an unexpected property, maxBodyLength, which is not in the list of allowed properties (amp, analyticsId, assetPrefix, basePath, cleanDistDir, compiler, compress, configOrigin, crossOrigin, devIndicators, distDir, env, eslint, excludeDefaultMomentLocales, experimental, exportPathMap, generateBuildId, generateEtags, headers, httpAgentOptions, i18n, images, modularizeImports, onDemandEntries, optimizeFonts, output, outputFileTracing, pageExtensions, poweredByHeader, productionBrowserSourceMaps, publicRuntimeConfig, reactProductionProfiling, reactStrictMode, redirects, rewrites, sassOptions, serverRuntimeConfig, skipMiddlewareUrlNormalize, skipTrailingSlashRedirect, staticPageGenerationTimeout, swcMinify, target, trailingSlash, transpilePackages, typescript, useFileSystemPublicRoutes, webpack).
- warn See more info here: https://nextjs.org/docs/messages/invalid-next-config
- warn You have enabled experimental feature (proxyTimeout) in next.config.js.
- warn Experimental features are not covered by semver, and may cause unexpected or broken application behavior. Use at your own risk.
2024/06/23 14:37:39 [emerg] 1#1: host not found in upstream "ui" in /etc/nginx/conf.d/default.conf:10
nginx: [emerg] host not found in upstream "ui" in /etc/nginx/conf.d/default.conf:10
podman start -a figura11y_sonar_1
[Wapiti] Loading model: "/opt/grobid/grobid-home/models/affiliation-address/model.wapiti"
Model path: /opt/grobid/grobid-home/models/affiliation-address/model.wapiti
[Wapiti] Loading model: "/opt/grobid/grobid-home/models/name/header/model.wapiti"
Model path: /opt/grobid/grobid-home/models/name/header/model.wapiti
[Wapiti] Loading model: "/opt/grobid/grobid-home/models/name/citation/model.wapiti"
Model path: /opt/grobid/grobid-home/models/name/citation/model.wapiti
[Wapiti] Loading model: "/opt/grobid/grobid-home/models/header/model.wapiti"
Error: unable to start container 47115bdcaad81c8b1462865db84a2499fcf7fb18f7ab7995fe74000a8d0f5233: generating dependency graph for container 47115bdcaad81c8b1462865db84a2499fcf7fb18f7ab7995fe74000a8d0f5233: container aa79618ce092122dddfeec9b97259295146b14c951de24e5b2f064ab36fef562 depends on container 396de865c1e8ed1ed96cd5014090652544ed3db20df2da88b87c5a80475b8786 not found in input list: no such container
exit code: 1
exit code: 125
Neither CUDA nor MPS are available - defaulting to CPU. Note: This module is much faster with a GPU.
[Wapiti] Loading model: "/opt/grobid/grobid-home/models/citation/model.wapiti"
Downloading recognition model, please wait. This may take several minutes depending upon your network connection.
Model path: /opt/grobid/grobid-home/models/citation/model.wapiti
[Wapiti] Loading model: "/opt/grobid/grobid-home/models/fulltext/model.wapiti"
Model path: /opt/grobid/grobid-home/models/fulltext/model.wapiti
[Wapiti] Loading model: "/opt/grobid/grobid-home/models/segmentation/model.wapiti"
Downloading config.json: 100%|██████████| 4.94k/4.94k [00:00<00:00, 27.7MB/s]
Downloading pytorch_model.bin: 48%|████▊ | 388M/809M [00:01<00:01, 288MB/s]
Model path: /opt/grobid/grobid-home/models/segmentation/model.wapiti
[Wapiti] Loading model: "/opt/grobid/grobid-home/models/reference-segmenter/model.wapiti"
Downloading pytorch_model.bin: 88%|████████▊ | 713M/809M [00:02<00:00, 285MB/s]
Model path: /opt/grobid/grobid-home/models/reference-segmenter/model.wapiti
[Wapiti] Loading model: "/opt/grobid/grobid-home/models/figure/model.wapiti"
Model path: /opt/grobid/grobid-home/models/figure/model.wapiti
[Wapiti] Loading model: "/opt/grobid/grobid-home/models/table/model.wapiti"
Downloading pytorch_model.bin: 93%|█████████▎| 755M/809M [00:02<00:00, 305MB/s]
Model path: /opt/grobid/grobid-home/models/table/model.wapiti
Downloading pytorch_model.bin: 100%|██████████| 809M/809M [00:02<00:00, 297MB/s]
Downloading generation_config.json: 100%|██████████| 186/186 [00:00<00:00, 1.51MB/s]
Downloading (…)rocessor_config.json: 100%|██████████| 420/420 [00:00<00:00, 3.52MB/s]
Downloading tokenizer_config.json: 100%|██████████| 510/510 [00:00<00:00, 4.22MB/s]
Downloading (…)tencepiece.bpe.model: 100%|██████████| 1.30M/1.30M [00:00<00:00, 432MB/s]
Downloading tokenizer.json: 100%|██████████| 4.01M/4.01M [00:00<00:00, 7.00MB/s]
Downloading added_tokens.json: 100%|██████████| 235/235 [00:00<00:00, 2.26MB/s]
Downloading (…)cial_tokens_map.json: 100%|██████████| 355/355 [00:00<00:00, 4.09MB/s]
Neither CUDA nor MPS are available - defaulting to CPU. Note: This module is much faster with a GPU.
[2024-06-23 14:38:07 +0000] [5] [ERROR] Exception in worker process
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py",
line 145, in init
self._dbapi_connection = engine.raw_connection()
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 3288, in raw_connection
return self.pool.connect()
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/pool/base.py", line 452, in connect
return _ConnectionFairy._checkout(self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/pool/base.py", line 1267, in _checkout
fairy = _ConnectionRecord.checkout(pool)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/pool/base.py", line 716, in checkout
rec = pool._do_get()
^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/pool/impl.py", line 169, in _do_get
with util.safe_reraise():
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/util/langhelpers.py", line 147, in exit
raise exc_value.with_traceback(exc_tb)
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/pool/impl.py", line 167, in _do_get
return self._create_connection()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/pool/base.py", line 393, in _create_connection
return _ConnectionRecord(self)
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/pool/base.py", line 678, in init
self.__connect()
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/pool/base.py", line 902, in __connect
with util.safe_reraise():
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/util/langhelpers.py", line 147, in exit
raise exc_value.with_traceback(exc_tb)
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/pool/base.py", line 898, in __connect
self.dbapi_connection = connection = pool._invoke_creator(self)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/create.py", line 637, in connect
return dialect.connect(*cargs, **cparams)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/default.py", line 615, in connect
return self.loaded_dbapi.connect(*cargs, **cparams)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/psycopg2/init.py", line 122, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
psycopg2.OperationalError: could not translate host name "db" to address: Name or service not known
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/gunicorn/arbiter.py", line 609, in spawn_worker
worker.init_process()
File "/usr/local/lib/python3.11/site-packages/gunicorn/workers/base.py", line 134, in init_process
self.load_wsgi()
File "/usr/local/lib/python3.11/site-packages/gunicorn/workers/base.py", line 146, in load_wsgi
self.wsgi = self.app.wsgi()
^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/gunicorn/app/base.py", line 67, in wsgi
self.callable = self.load()
^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/gunicorn/app/wsgiapp.py", line 58, in load
return self.load_wsgiapp()
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/gunicorn/app/wsgiapp.py", line 48, in load_wsgiapp
return util.import_app(self.app_uri)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/gunicorn/util.py", line 424, in import_app
app = app(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^
File "/api/start.py", line 50, in create_app
create_app_db_api(app),
^^^^^^^^^^^^^^^^^^^^^^
File "/api/app/api/db.py", line 86, in create_app_db_api
SerializableBase.metadata.create_all(db.engine)
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/sql/schema.py", line 5822, in create_all
bind._run_ddl_visitor(
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 3238, in _run_ddl_visitor
with self.begin() as conn:
File "/usr/local/lib/python3.11/contextlib.py", line 137, in enter
return next(self.gen)
^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 3228, in begin
with self.connect() as conn:
^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 3264, in connect
return self._connection_cls(self)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 147, in init
Connection._handle_dbapi_exception_noconnection(
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 2426, in _handle_dbapi_exception_noconnection
raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 145, in init
self._dbapi_connection = engine.raw_connection()
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 3288, in raw_connection
return self.pool.connect()
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/pool/base.py", line 452, in connect
return _ConnectionFairy._checkout(self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/pool/base.py", line 1267, in _checkout
fairy = _ConnectionRecord.checkout(pool)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/pool/base.py", line 716, in checkout
rec = pool._do_get()
^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/pool/impl.py", line 169, in _do_get
with util.safe_reraise():
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/util/langhelpers.py", line 147, in exit
raise exc_value.with_traceback(exc_tb)
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/pool/impl.py", line 167, in _do_get
return self._create_connection()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/pool/base.py", line 393, in _create_connection
return _ConnectionRecord(self)
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/pool/base.py", line 678, in init
self.__connect()
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/pool/base.py", line 902, in __connect
with util.safe_reraise():
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/util/langhelpers.py", line 147, in exit
raise exc_value.with_traceback(exc_tb)
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/pool/base.py", line 898, in __connect
self.dbapi_connection = connection = pool._invoke_creator(self)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/create.py", line 637, in connect
return dialect.connect(*cargs, **cparams)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/default.py", line 615, in connect
return self.loaded_dbapi.connect(*cargs, **cparams)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/psycopg2/init.py", line 122, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not translate host name "db" to address: Name or service not known
(Background on this error at: https://sqlalche.me/e/20/e3q8)
[2024-06-23 14:38:07 +0000] [5] [INFO] Worker exiting (pid: 5)
[2024-06-23 14:38:10 +0000] [1] [ERROR] Worker (pid:5) exited with code 3
[2024-06-23 14:38:10 +0000] [1] [ERROR] Shutting down: Master
[2024-06-23 14:38:10 +0000] [1] [ERROR] Reason: Worker failed to boot.
exit code: 3
2024-06-23 14:42:35.863 UTC [22] LOG: checkpoint starting: time
2024-06-23 14:42:35.870 UTC [22] LOG: checkpoint complete: wrote 3 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.001 s, sync=0.002 s, total=0.008 s; sync files=2, longest=0.002 s, average=0.001 s; distance=0 kB, estimate=0 kB
Hi @flckv, thanks for the update! It looks like there's a difference in our setups: the project is designed and tested using Docker, with docker compose up --build, but it looks like you're using podman and starting containers individually?
I think this difference is likely causing the problems. To resolve it, if possible, could you try installing Docker and Docker Compose, then running docker compose up --build?
Alternatively, it might work with Podman Compose via podman-compose up --build; I did a quick test of this and found that it worked.
Let me know if this resolves it, or if I might have misinterpreted something.
Thank you @nikhilsinghmus.
True, I run podman-compose (rootless setup) in combination with podman (root setup); podman-compose initiates commands such as: podman create --name=figura11y_db_1 ...
The setup is like this because I have root access for podman only:
- podman is set up as root. Why not docker compose? Running docker compose up --build gives:
Emulate Docker CLI using podman. Create /usr/etc/containers/nodocker to quiet msg.
Error: unknown flag: --build
See 'podman --help'
- podman-compose is NOT set up as root; installing it system-wide fails with:
curl -o /usr/local/bin/podman-compose https://raw.githubusercontent.com/containers/podman-compose/main/podman_compose.py
Warning: Failed to create the file /usr/local/bin/podman-compose: Permission denied
curl: (23) Failure writing output to destination
Commands used to run docker-compose.yaml:
cd figura11y
loginctl enable-linger 2030
podman-compose up --build
podman-compose version: 1.0.6
['podman', '--version', '']
using podman version: 4.5.1
** excluding: set()
['podman', 'ps', '--filter', 'label=io.podman.compose.project=figura11y', '-a', '--format', '{{ index .Labels "io.podman.compose.config-hash"}}']
podman volume inspect figura11y_pgdata || podman volume create figura11y_pgdata
['podman', 'volume', 'inspect', 'figura11y_pgdata']
['podman', 'network', 'exists', 'figura11y_default']
podman create --name=figura11y_db_1
--label io.podman.compose.config-hash=974349d38b18b6c57d47a8f580bf50ad1f4ff0ff17dcc88aea2734fe44b756bb --label io.podman.compose.project=figura11y --label io.podman.compose.version=1.0.6 --label PODMAN_SYSTEMD_UNIT=podman-compose@figura11y.service --label com.docker.compose.project=figura11y --label com.docker.compose.project.working_dir=/home/felicia.kovacs/figura11y --label com.docker.compose.project.config_files=docker-compose.yaml --label com.docker.compose.container-number=1 --label com.docker.compose.service=db -e POSTGRES_USER=foo -e POSTGRES_PASSWORD=bar -e POSTGRES_DB=writealttext -v figura11y_pgdata:/var/lib/postgresql/data --net figura11y_default --network-alias db -p 5432:5432 --restart always postgres:15
exit code: 0
['podman', 'network', 'exists', 'figura11y_default']
podman create --name=figura11y_api_1 .....
...
The db gets stuck. Errors:
1. Error: Could not import 'api.db' (on the left), but podman-compose ps says api.db is running (on the right).
2. Error: unable to start container 4b57d08de04a (on the left), and podman-compose ps shows that both sonar and proxy are not listed in the compose (on the right).
3. Last errors, db/SQLAlchemy related (on the left), while podman-compose ps says api.db is running (on the right):
psycopg2.OperationalError: could not translate host name "db" to address: Name or service not known
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not translate host name "db" to address: Name or service not known
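Since the psycopg2 failure is purely name resolution of the compose alias db, one way to sanity-check the connection string is a sketch like the following. Note that DB_HOST is a hypothetical override variable, not something this repo necessarily reads; the credential defaults mirror the POSTGRES_* values visible in the podman create command above.

```python
import os

def database_url() -> str:
    """Build a Postgres URL whose host can be overridden via DB_HOST.

    NOTE: the variable names and defaults here are illustrative
    assumptions, not the project's actual configuration; check the
    repo's docker-compose.yaml for the names it really uses.
    """
    user = os.environ.get("POSTGRES_USER", "foo")
    password = os.environ.get("POSTGRES_PASSWORD", "bar")
    host = os.environ.get("DB_HOST", "db")  # falls back to the compose alias
    db = os.environ.get("POSTGRES_DB", "writealttext")
    return f"postgresql://{user}:{password}@{host}:5432/{db}"

print(database_url())
```

With an override like DB_HOST=10.89.0.3 (the db container's address from network inspect), one could distinguish a DNS problem from a genuine connectivity problem.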
Hi @flckv, thanks for following up! Would you mind running the network checks I mentioned initially, here in podman? E.g. podman network ls and podman network inspect <network-id>. Another thing you could try is podman exec figura11y_api_1 getent hosts db, to see if it's able to resolve the db hostname.
I'm not able to reproduce the issue with podman==5.1.1 and podman-compose==1.0.6. Can you confirm what versions you are using?
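The getent hosts db check can also be approximated in stdlib Python, if getent happens to be missing from the container image (a sketch only; it asks the container's resolver the same question):

```python
import socket

def resolve(hostname: str):
    """Approximate `getent hosts <name>`: return resolved IPs, or None.

    Inside the api container this should return the db container's
    address (e.g. something in 10.89.0.0/24 here) when container DNS
    is working; None means the alias is not resolvable at all.
    """
    try:
        infos = socket.getaddrinfo(hostname, None)
        return sorted({info[4][0] for info in infos})
    except socket.gaierror:
        return None

print(resolve("db"))
```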
Thank you @nikhilsinghmus,
podman-compose ps
podman-compose version: 1.0.6
['podman', '--version', '']
using podman version: 4.5.1
podman network ls
podman network inspect figura11y_default
"subnet": "10.89.0.0/24", "gateway": "10.89.0.1"
podman inspect figura11y_api_1
figura11y_api_1 is EXITED (left), but figura11y_api_1 is RUNNING for a few seconds (right)
Thank you, @flckv! I tried testing with that version of Podman, but ran into some issues on my machine (though I'll try again). Would you be able to try upgrading and testing with a newer version?
Additionally, one other thing to check might be your installation of netavark (the default network stack) and aardvark-dns (the DNS server). I think podman-compose relies on these for name resolution between containers (e.g. db, where the issue seems to be). I'm still getting familiar with the Podman ecosystem, so I appreciate your patience while working through this!
Thank you @nikhilsinghmus,
I have no root access:
sudo apt-get -y install podman
Permission denied
find . -type f -name 'netavark'
find: ‘./opt/TrendMicro’: Permission denied
find . -type f -name 'aardvark-dns'
find: ‘./opt/TrendMicro’: Permission denied
cd /opt/ds_agent/netagent
ls
pattern tm_netagent tm_netagent.version
cat tm_netagent.version
1.0.140-e6154b
I noticed that figura11y_proxy_1 exits immediately:
"NetworkSettings": { "EndpointID": "", "Gateway": "", "IPAddress": "", "IPPrefixLen": 0, "IPv6Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "MacAddress": "", "Bridge": "", "SandboxID": "", "HairpinMode": false, "LinkLocalIPv6Address": "", "LinkLocalIPv6PrefixLen": 0, "Ports": { "80/tcp": null, "8080/tcp": [ { "HostIp": "", "HostPort": "8080" } ] }, "SandboxKey": "", "Networks": { "figura11y_default": { "EndpointID": "", "Gateway": "", "IPAddress": "", "IPPrefixLen": 0, "IPv6Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "MacAddress": "", "NetworkID": "figura11y_default", "DriverOpts": null, "IPAMConfig": null, "Links": null, "Aliases": [ "proxy", "4a2e86782c2a" ] } } },
...
"CreateCommand": [ "podman", "create", "--name=figura11y_proxy_1", ... "figura11y_default", "--network-alias", "proxy", "-p", "8080:8080", "figura11y_proxy" ], "Umask": "0022",
I have seen in this video the umask being changed from 22.
...
"NetworkMode": "bridge", "PortBindings": { "8080/tcp": [ { "HostIp": "", "HostPort": "8080" } ] },
I also noticed, from:
podman images
podman inspect <image id>
that the proxy is on port 80, which is less than 1024.
But sonar exposes ports:
"3000/tcp": {},
"8000/tcp": {},
"8080/tcp": {}
because I added those lines, following "Podman can not create containers that bind to ports < 1024." from rootless.md.
Ports 3000, 8000 and 8080 do not seem to be accessible by the proxy, only 5432 and 8070.
How is it possible that figura11y_db_1 keeps running on port 0.0.0.0:5432->5432/tcp but the error is related to db?
Is unprivileged ping needed for this project? I saw that sonar was using a command ping.py.
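For reference, the "ports < 1024" restriction from rootless.md comes from a kernel sysctl, which can be inspected directly (a small sketch, assuming a Linux host; it just reads the sysctl value):

```python
from pathlib import Path

def unprivileged_port_start():
    """Lowest port a non-root process may bind, per the kernel sysctl.

    Rootless podman cannot publish host ports below this value, which
    is why binding port 80 fails while 8080 works. Returns None if the
    sysctl file is unavailable (e.g. non-Linux host).
    """
    path = Path("/proc/sys/net/ipv4/ip_unprivileged_port_start")
    try:
        return int(path.read_text())
    except OSError:
        return None

print(unprivileged_port_start())  # typically 1024 on a default kernel
```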
Hi @flckv, sorry for the slow response! Re: the api.db issue, I think that's related to the flask-migrate commands in dev.sh/start.sh. I think you may not need them, so if you comment out the lines beginning with flask db, it should avoid that. The Flask server and db connection should still run either way, though, so I'm not sure it will resolve the main issue!
Did you find that the whole setup runs on the third machine, or are you running into the same issue?