nhost / hasura-storage

Storage for Hasura built on top of S3

Self Hosting instructions

andr-ec opened this issue · comments

Would love to see instructions / an example for self-hosting a docker container.

I found this docker container:
https://hub.docker.com/r/nhost/hasura-storage

Though the environment variables are different from the ones here:
https://github.com/nhost/hasura-storage/blob/da427a37c97fd026d93c606beffaa572cbbde2ed/build/dev/docker/docker-compose.yaml
Or here:
https://github.com/nhost/hasura-storage/blob/da427a37c97fd026d93c606beffaa572cbbde2ed/hasura-storage.yaml

I've tried using that docker container and included all of the environment variables I've found, but I only get logs like this:

2022-02-03T19:08:34.684 app[c7653d88] gru [info]hasura is not ready, retry in 3 seconds
2022-02-03T19:08:36.656 app[47c5ecf9] mia [info]hasura is not ready, retry in 3 seconds
2022-02-03T19:08:37.688 app[c7653d88] gru [info]hasura is not ready, retry in 3 seconds
2022-02-03T19:08:39.660 app[47c5ecf9] mia [info]hasura is not ready, retry in 3 seconds
2022-02-03T19:08:40.691 app[c7653d88] gru [info]hasura is not ready, retry in 3 seconds

I assume it means I have one of these env variables misconfigured:

  GRAPHQL_ENDPOINT = "https://example.com/v1/graphql"
  GRAPHQL_ENGINE_BASE_URL = "https://example.com/"
  HASURA_METADATA = "true"
  HASURA_GRAPHQL_ADMIN_SECRET = "some_secret"
  HASURA_METADATA_ADMIN_SECRET = "some_secret"

Or I might be missing something altogether. I also want to make sure that the migrations and metadata updates run when needed.

Would love to see an example env. I know that it's run locally by the cli, but I'm not sure how to find out which env variables are used.

Hi! Thanks for the interest. The container you were trying actually belongs to a different codebase; this repository is a full rewrite that we are now finalizing and getting ready for production. That's why the docs are lacking (besides the openapi spec) and why you are finding that things don't match. If you want to test this right now you can do the following:

  1. Start with this docker-compose file.
  2. Change the image for hasura-storage to nhost/hasura-storage:$LABEL, replacing $LABEL with the latest release available, without the leading v (e.g., nhost/hasura-storage:0.0.1-alpha2), as sketched below.
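
For illustration, a minimal sketch of that change; the service name comes from the linked docker-compose file, and the tag here is just the example label above, so check the releases for the actual latest one:

  # Fragment of the linked docker-compose file; only the image line changes.
  services:
    hasura-storage:
      image: nhost/hasura-storage:0.0.1-alpha2  # latest release label, without the leading v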

Doing this should give you a working setup, as it is pretty much how we build the development environment for the end-to-end tests.

If you have more questions don't hesitate to let me know, and if you find any issues please open new ones.

Thanks for the help! I'm making progress; I'm seeing this now:

2022-02-04T17:51:27.847 app[74b73a30] mia [info]time="2022-02-04T17:51:27Z" level=error msg="problem applying postgres migrations: problem migrating: migration failed: syntax error at or near "TRIGGER" (column 19) in line 98: BEGIN;
-- functions
CREATE OR REPLACE FUNCTION storage.set_current_timestamp_updated_at ()
  RETURNS TRIGGER
  LANGUAGE plpgsql
  AS $a$
DECLARE
  _new record;
BEGIN
  _new := new;
  _new. "updated_at" = now();
  RETURN _new;
END;
$a$;

CREATE OR REPLACE FUNCTION storage.protect_default_bucket_delete ()
  RETURNS TRIGGER
  LANGUAGE plpgsql
  AS $a$
BEGIN
  IF OLD.ID = 'default' THEN
    RAISE EXCEPTION 'Can not delete default bucket';
  END IF;
  RETURN OLD;
END;
$a$;

CREATE OR REPLACE FUNCTION storage.protect_default_bucket_update ()
  RETURNS TRIGGER
  LANGUAGE plpgsql
  AS $a$
BEGIN
  IF OLD.ID = 'default' AND NEW.ID <> 'default' THEN
    RAISE EXCEPTION 'Can not rename default bucket';
  END IF;
  RETURN NEW;
END;
$a$;

-- tables
CREATE TABLE IF NOT EXISTS storage.buckets (
  id text NOT NULL PRIMARY KEY,
  created_at timestamp with time zone DEFAULT now() NOT NULL,
  updated_at timestamp with time zone DEFAULT now() NOT NULL,
  download_expiration int NOT NULL DEFAULT 30, -- 30 seconds
  min_upload_file_size int NOT NULL DEFAULT 1,
  max_upload_file_size int NOT NULL DEFAULT 50000000,
  cache_control text DEFAULT 'max-age=3600',
  presigned_urls_enabled boolean NOT NULL DEFAULT TRUE
);

CREATE TABLE IF NOT EXISTS storage.files (
  id uuid DEFAULT public.gen_random_uuid () NOT NULL PRIMARY KEY,
  created_at timestamp with time zone DEFAULT now() NOT NULL,
  updated_at timestamp with time zone DEFAULT now() NOT NULL,
  bucket_id text NOT NULL DEFAULT 'default',
  name text,
  size int,
  mime_type text,
  etag text,
  is_uploaded boolean DEFAULT FALSE,
  uploaded_by_user_id uuid
);

-- constraints
DO $$
BEGIN
  IF NOT EXISTS(SELECT table_name
            FROM information_schema.table_constraints
            WHERE table_schema = 'storage'
              AND table_name = 'files'
              AND constraint_name = 'fk_bucket')
  THEN
    ALTER TABLE storage.files
      ADD CONSTRAINT fk_bucket FOREIGN KEY (bucket_id) REFERENCES storage.buckets (id) ON UPDATE CASCADE ON DELETE CASCADE;
  END IF;
END $$;

-- add constraints if auth.users table exists and there is not an existing constraint
DO $$
BEGIN
  IF EXISTS(SELECT table_name
              FROM information_schema.tables
            WHERE table_schema = 'auth'
              AND table_name LIKE 'users')
    AND NOT EXISTS(SELECT table_name
              FROM information_schema.table_constraints
            WHERE table_schema = 'storage'
              AND table_name = 'files'
              AND constraint_name = 'fk_user')
  THEN
    ALTER TABLE storage.files
      ADD CONSTRAINT fk_user FOREIGN KEY (uploaded_by_user_id) REFERENCES auth.users (id) ON DELETE SET NULL;
  END IF;
END $$;

-- triggers
CREATE OR REPLACE TRIGGER set_storage_buckets_updated_at
  BEFORE UPDATE ON storage.buckets
  FOR EACH ROW
  EXECUTE FUNCTION storage.set_current_timestamp_updated_at ();

CREATE OR REPLACE TRIGGER set_storage_files_updated_at
  BEFORE UPDATE ON storage.files
  FOR EACH ROW
  EXECUTE FUNCTION storage.set_current_timestamp_updated_at ();

CREATE TRIGGER check_default_bucket_delete
  BEFORE DELETE ON storage.buckets
  FOR EACH ROW
    EXECUTE PROCEDURE storage.protect_default_bucket_delete ();

CREATE TRIGGER check_default_bucket_update
  BEFORE UPDATE ON storage.buckets
  FOR EACH ROW
    EXECUTE PROCEDURE storage.protect_default_bucket_update ();

-- data
DO $$
BEGIN
  IF NOT EXISTS(SELECT id
            FROM storage.buckets
            WHERE id = 'default')
  THEN
    INSERT INTO storage.buckets (id)
      VALUES ('default');
  END IF;
END $$;

COMMIT;
 (details: pq: syntax error at or near "TRIGGER")"

Subsequent runs now yield:

2022-02-04T18:01:59.528 app[71e18b05] gru [info]time="2022-02-04T18:01:59Z" level=error msg="problem applying postgres migrations: problem migrating: Dirty database version 1. Fix and force version."

Any ideas?

I can drop the public.schema_migrations table and run it again, though then I get the first error again.
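
For reference, a sketch of the manual options, assuming the schema_migrations layout used by golang-migrate (the library whose "Dirty database version" wording appears above):

  -- Inspect the recorded migration state (layout assumed from golang-migrate).
  SELECT version, dirty FROM public.schema_migrations;

  -- Either clear the state entirely so the migration is retried from scratch...
  DROP TABLE public.schema_migrations;

  -- ...or, after repairing the schema by hand, mark version 1 as clean.
  UPDATE public.schema_migrations SET dirty = FALSE WHERE version = 1;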

Is that using the docker-compose file without any changes, or are you using a different database? The first error is coming from the database hasura-storage is connecting to, so if you are using a different database, that might be the problem. The second error is basically a consequence of the first one: because applying the migration failed in the first instance, the system is in an unknown state and needs manual fixing.

If you need to use a database other than postgres 14, you can remove the flag --postgres-migrations and handle the schema creation yourself prior to starting hasura-storage.
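
Worth spelling out why postgres 14 matters here: the statement the migration trips on is CREATE OR REPLACE TRIGGER, which PostgreSQL only supports from version 14 onwards, so checking the server version is a quick way to confirm whether this is the cause:

  -- CREATE OR REPLACE TRIGGER was added in PostgreSQL 14; older servers
  -- reject it with exactly this "syntax error at or near TRIGGER".
  SHOW server_version;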

Thanks for your help earlier. While testing, I ran into an issue when attempting to upload a file.
When I try to upload a file using the sdk, I get a 201, but the response is just an empty object: {}.
I check the bucket and nothing has been added, and I check hasura and there are no new file records there either.

These are the logs I get in storage:

2022-02-07T22:35:10Z app[c2caf27c] gru [info][GIN-debug] redirecting request 307: /v1/storage/files/ --> /v1/storage/files/
2022-02-07T22:35:10Z app[c2caf27c] gru [info]time="2022-02-07T22:35:10Z" level=info client_ip=145.40.121.5 errors="[]" latency_time=7.155708ms method=POST status_code=201 url=/v1/storage/files/

Then, while tracing minio, I see that no requests have been made to the bucket at all.

So it looks like uploads just fail silently, and I can't find any errors.
Any suggestions?

I found the issue: I'm using the nhost js sdk, and it doesn't match up with this version of hasura-storage. The sdk sends a multipart form with file while hasura-storage expects file[] and metadata[].

So in the meantime we can send upload requests manually, but I guess the question is: is there a plan to align the js sdk with this version of hasura-storage?

Thanks!

The sdk sends a multipart form with file while hasura-storage expects file[] and metadata[]

Exactly. I am fairly certain that's the only change, but the openapi document should be considered the source of truth; if something doesn't behave as explained there, don't hesitate to let us know.
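
For reference, a manual upload along those lines might look like the sketch below; the endpoint path is taken from the logs earlier in the thread, while the auth header and the metadata contents are illustrative assumptions (again, the openapi document is the source of truth):

  # Hypothetical manual upload: the file[] / metadata[] field names are from
  # the discussion above; the token and the metadata JSON are assumptions.
  curl -X POST "https://example.com/v1/storage/files/" \
    -H "Authorization: Bearer $TOKEN" \
    -F 'file[]=@./photo.jpg' \
    -F 'metadata[]={"name":"photo.jpg"}'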

So in the meantime we can send upload requests manually, but I guess the question is: is there a plan to align the js sdk with this version of hasura-storage?

Yes, we are in the process of testing and validating this version of hasura-storage. Once we are confident it works fine, we will replace the current one with this one and update the sdk (it should work fine, but we are being extra cautious).