bmarsh9 / gapps

Security compliance platform - SOC2, CMMC, ASVS, ISO27001, HIPAA, NIST CSF, NIST 800-53, CSC CIS 18, PCI DSS, SSF tracking. https://gapps.darkbanner.com

Worker external database connection issue

MathRig opened this issue · comments

Hello,

I am trying to migrate from docker-compose to a Helm chart in order to deploy Gapps on AWS EKS with an AWS Aurora database.
The migration mostly works, but I am running into an issue with the worker and procrastinate.

I passed the required variables to the worker: POSTGRES_USER, POSTGRES_HOST, POSTGRES_DB, POSTGRES_PASSWORD and SQLALCHEMY_DATABASE_URI.
POSTGRES_PASSWORD and SQLALCHEMY_DATABASE_URI come from a ConfigMap. This works for the app pod, but the worker fails to start.

In the pod logs, I see that the worker is trying to connect to a database named "root", but my database name is gapps.
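My best guess at the mechanism, sketched below with plain psycopg2 rather than the actual gapps code (so this is only an assumption about what the worker does): libpq fills in a missing dbname from the user name, and a missing user name from the OS user of the process, which in the worker container is root.

import os
import psycopg2  # the driver underneath aiopg/procrastinate

# Collect whatever connection settings actually made it into the pod's
# environment; anything unset is simply left out of the connect() call.
conn_kwargs = {
    key: os.environ[var]
    for key, var in {
        "host": "POSTGRES_HOST",
        "user": "POSTGRES_USER",
        "password": "POSTGRES_PASSWORD",
        "dbname": "POSTGRES_DB",
    }.items()
    if var in os.environ
}

# If POSTGRES_DB (and POSTGRES_USER) never reach this process, libpq derives
# dbname from the user name, and the user name from the OS user -- "root" in
# the container -- which matches the FATAL message in the logs below.
conn = psycopg2.connect(**conn_kwargs)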

Here are the logs:


[INFO] Running as a worker. Trying to start...
[INFO] Checking if database models require creation
[INFO] Successfully queried the database models
Error:
Database error.
FATAL:  database "root" does not exist
[INFO] Worker is ready. Starting...
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/procrastinate/aiopg_connector.py", line 31, in wrapped
    return await coro(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/procrastinate/aiopg_connector.py", line 191, in _create_pool
    return await aiopg.create_pool(**pool_args)
  File "/usr/local/lib/python3.8/dist-packages/aiopg/pool.py", line 300, in from_pool_fill
    await self._fill_free_pool(False)
  File "/usr/local/lib/python3.8/dist-packages/aiopg/pool.py", line 336, in _fill_free_pool
    conn = await connect(
  File "/usr/local/lib/python3.8/dist-packages/aiopg/connection.py", line 1225, in _connect
    await self._poll(self._waiter, self._timeout)  # type: ignore
  File "/usr/local/lib/python3.8/dist-packages/aiopg/connection.py", line 881, in _poll
    await asyncio.wait_for(self._waiter, timeout)
  File "/usr/lib/python3.8/asyncio/tasks.py", line 494, in wait_for
    return fut.result()
  File "/usr/local/lib/python3.8/dist-packages/aiopg/connection.py", line 788, in _ready
    state = self._conn.poll()
psycopg2.OperationalError: FATAL:  database "root" does not exist

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "run_worker.py", line 8, in <module>
    with bg_app.open():
  File "/usr/local/lib/python3.8/dist-packages/procrastinate/app.py", line 259, in open
    self.connector.open(pool_or_engine)
  File "/usr/local/lib/python3.8/dist-packages/procrastinate/utils.py", line 149, in wrapper
    return sync_await(awaitable=awaitable)
  File "/usr/local/lib/python3.8/dist-packages/procrastinate/utils.py", line 200, in sync_await
    return loop.run_until_complete(awaitable)
  File "/usr/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.8/dist-packages/procrastinate/aiopg_connector.py", line 186, in open_async
    self._pool = await self._create_pool(self._pool_args)
  File "/usr/local/lib/python3.8/dist-packages/procrastinate/aiopg_connector.py", line 35, in wrapped
    raise exceptions.ConnectorException from exc
procrastinate.exceptions.ConnectorException:
Database error.

I know this is not a standard setup, since you run it with Docker rather than EKS, but I am wondering whether you have seen this kind of issue before.

I looked through the code but wasn't able to find the cause.

It's a pity, because as soon as I get this working I will be able to release a Helm chart for Gapps.

Thanks.

@MathRig This doesn't really answer the question, but it does solve your issue: I would recommend removing the worker entirely. It doesn't actually do anything right now; the idea was that it could support background tasks.

I'm not sure why you are getting that error. It's possible it could be resolved around here.

But I'd just disable the worker. I might end up using something totally different for background integrations and tasks.

Hi @bmarsh9,

Thank you for your reply. OK, I was wondering what the role of the worker was, but since I wasn't able to figure it out I just kept it in the deployment chart. I will disable it.
And yes, you are right, I was looking at that part of the code, but if I am not mistaken the POSTGRES_XXX variables are extracted from SQLALCHEMY_DATABASE_URI in another function. Nevertheless, all of the variables are set in my Helm chart, so the worker pod should have them, just like the app pod.
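For reference, here is roughly how I picture that extraction working. This is only a hypothetical sketch of mine; the actual gapps helper may do it differently:

from urllib.parse import urlparse

def parts_from_uri(uri: str) -> dict:
    """Split a postgresql:// URI into the pieces a worker connection needs."""
    parsed = urlparse(uri)
    return {
        "user": parsed.username,
        "password": parsed.password,
        "host": parsed.hostname,
        "port": parsed.port or 5432,
        "dbname": parsed.path.lstrip("/"),
    }

# Example (made-up credentials):
# parts_from_uri("postgresql://gapps:secret@aurora-host:5432/gapps")
# -> {'user': 'gapps', 'password': 'secret', 'host': 'aurora-host',
#     'port': 5432, 'dbname': 'gapps'}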

Thanks a lot for your help.