nginxinc / nginx-s3-gateway

NGINX S3 Caching Gateway

Unable to cache objects from private S3 Bucket

vivekmystrey opened this issue

Hi,

I have deployed the nginx reverse proxy using the following command and settings file:

Docker command:

/nginx-s3-gateway$ docker run --env-file ./settings --publish 80:80 --name nginx-s3-gateway     ghcr.io/nginxinc/nginx-s3-gateway/nginx-oss-s3-gateway:latest

Settings file:

S3_BUCKET_NAME=<my private bucket>
S3_ACCESS_KEY_ID=****************
S3_SECRET_KEY=*********
S3_SESSION_TOKEN=
S3_SERVER=s3.amazonaws.com
S3_SERVER_PORT=443
S3_SERVER_PROTO=http
S3_REGION=eu-west-1
S3_STYLE=virtual
S3_DEBUG=false
AWS_SIGS_VERSION=4
ALLOW_DIRECTORY_LIST=true
PROVIDE_INDEX_PAGE=false
APPEND_SLASH_FOR_POSSIBLE_DIRECTORY=false
PROXY_CACHE_VALID_OK=1h
PROXY_CACHE_VALID_NOTFOUND=1m
PROXY_CACHE_VALID_FORBIDDEN=30s

The container is deployed successfully, but when a GET request is made for an S3 object I see only the following statements in the logs, and the /var/cache/nginx/s3_proxy/ folder stays empty:

/docker-entrypoint.sh: Launching /docker-entrypoint.d/22-enable_js_fetch_trusted_certificate.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2023/04/06 11:17:43 [notice] 1#1: using the "epoll" event method
2023/04/06 11:17:43 [notice] 1#1: nginx/1.23.3
2023/04/06 11:17:43 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6)
2023/04/06 11:17:43 [notice] 1#1: OS: Linux 5.15.0-1033-aws
2023/04/06 11:17:43 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2023/04/06 11:17:43 [notice] 1#1: start worker processes
2023/04/06 11:17:43 [notice] 1#1: start worker process 77
2023/04/06 11:17:43 [notice] 1#1: start cache manager process 78
2023/04/06 11:17:43 [notice] 1#1: start cache loader process 79
2023/04/06 11:18:43 [notice] 79#79: http file cache: /var/cache/nginx/s3_proxy 0.000M, bsize: 4096
2023/04/06 11:18:43 [notice] 1#1: signal 17 (SIGCHLD) received from 79
2023/04/06 11:18:43 [notice] 1#1: cache loader process 79 exited with code 0
2023/04/06 11:18:43 [notice] 1#1: signal 29 (SIGIO) received

Please advise what is going wrong here.

Hi there,

I just tried to reproduce your configuration and issue, but I was unable to find a problem. First, can you provide the image id associated with the ghcr.io/nginxinc/nginx-s3-gateway/nginx-oss-s3-gateway:latest image you are using? I'm using image id 3b3ac8729b8f in my own testing.

Also, have you set S3_SESSION_TOKEN to a value or is it blank?

I completed the following prerequisites before starting the nginx container:

  • Launched an EC2 instance and assigned it a role with full S3 access
  • Ran the following command to generate an S3 session token:

    ubuntu@ip-10-0-2-162:~/nginx-s3-gateway$ aws sts get-session-token
    {
        "Credentials": {
            "AccessKeyId": "*********",
            "SecretAccessKey": "***************",
            "SessionToken": "************",
            "Expiration": "2023-04-07T17:06:50+00:00"
        }
    }

I copied the AccessKeyId, SecretAccessKey, and SessionToken into the settings file and launched the Docker container, but hit the same issue again.
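For reference, the three Credentials fields from get-session-token map onto the gateway's settings file like this (placeholder values; the variable names are the ones used in the settings file earlier in this thread):

```ini
# Placeholders; substitute the values returned by aws sts get-session-token.
S3_ACCESS_KEY_ID=<AccessKeyId>
S3_SECRET_KEY=<SecretAccessKey>
S3_SESSION_TOKEN=<SessionToken>
```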

And the image id associated with ghcr.io/nginxinc/nginx-s3-gateway/nginx-oss-s3-gateway:latest is 3b3ac8729b8f, so I don't think the image id is the issue.

I just tried the same thing and created the session token in the same way. When the container started, did you see a message like:

/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/00-check-for-required-env.sh

S3 Session token present
S3 Backend Environment
Access Key ID: <REDACTED>
Origin: <REDACTED>
Region: us-east-1
Addressing Style: virtual
AWS Signatures Version: v4
DNS Resolvers:  8.8.8.8 8.8.4.4
Directory Listing Enabled: true
Provide Index Pages Enabled: false
Append slash for directory enabled: false
Stripping the following headers from responses: x-amz-;
CORS Enabled: 0
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
20-envsubst-on-templates.sh: Running envsubst on /etc/nginx/templates/default.conf.template to /etc/nginx/conf.d/default.conf
20-envsubst-on-templates.sh: Running envsubst on /etc/nginx/templates/gateway/cors.conf.template to /etc/nginx/conf.d/gateway/cors.conf
20-envsubst-on-templates.sh: Running envsubst on /etc/nginx/templates/gateway/v4_headers.conf.template to /etc/nginx/conf.d/gateway/v4_headers.conf
20-envsubst-on-templates.sh: Running envsubst on /etc/nginx/templates/gateway/s3_location.conf.template to /etc/nginx/conf.d/gateway/s3_location.conf
20-envsubst-on-templates.sh: Running envsubst on /etc/nginx/templates/gateway/v2_headers.conf.template to /etc/nginx/conf.d/gateway/v2_headers.conf
20-envsubst-on-templates.sh: Running envsubst on /etc/nginx/templates/gateway/js_fetch_trusted_certificate.conf.template to /etc/nginx/conf.d/gateway/js_fetch_trusted_certificate.conf
20-envsubst-on-templates.sh: Running envsubst on /etc/nginx/templates/gateway/v4_js_vars.conf.template to /etc/nginx/conf.d/gateway/v4_js_vars.conf
20-envsubst-on-templates.sh: Running envsubst on /etc/nginx/templates/gateway/v2_js_vars.conf.template to /etc/nginx/conf.d/gateway/v2_js_vars.conf
20-envsubst-on-templates.sh: Running envsubst on /etc/nginx/templates/gateway/s3_server.conf.template to /etc/nginx/conf.d/gateway/s3_server.conf
20-envsubst-on-templates.sh: Running envsubst on /etc/nginx/templates/gateway/s3listing_location.conf.template to /etc/nginx/conf.d/gateway/s3listing_location.conf
20-envsubst-on-templates.sh: Running envsubst on /etc/nginx/templates/upstreams.conf.template to /etc/nginx/conf.d/upstreams.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/22-enable_js_fetch_trusted_certificate.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2023/04/07 18:33:43 [notice] 1#1: using the "epoll" event method
2023/04/07 18:33:43 [notice] 1#1: nginx/1.23.3
2023/04/07 18:33:43 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6)
2023/04/07 18:33:43 [notice] 1#1: OS: Linux 5.4.0-139-generic
2023/04/07 18:33:43 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2023/04/07 18:33:43 [notice] 1#1: start worker processes
2023/04/07 18:33:43 [notice] 1#1: start worker process 78
2023/04/07 18:33:43 [notice] 1#1: start cache manager process 79
2023/04/07 18:33:43 [notice] 1#1: start cache loader process 80

After the container starts, can you attach to it by doing:

docker exec -it <container_id> bash

Then try to write a directory and a file into the cache directory:

$ mkdir /var/cache/nginx/s3_proxy/foo
$ touch /var/cache/nginx/s3_proxy/foo/bar

Hi,

I followed the suggested steps; below are the Docker logs:

ubuntu@ip-10-0-2-162:~/nginx-s3-gateway$ docker run --env-file ./settings --publish 80:80 --name nginx-s3-gateway     ghcr.io/nginxinc/nginx-s3-gateway/nginx-oss-s3-gateway:latest
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/00-check-for-required-env.sh
Running inside an EC2 instance, using IMDS for credentials
S3 Session token present
S3 Backend Environment
Access Key ID: ***************************
Origin: http://***************.s3.*************.amazonaws.com:443
Region: eu-west-1
Addressing Style: virtual
AWS Signatures Version: v4
DNS Resolvers:  10.0.0.2
Directory Listing Enabled: true
Provide Index Pages Enabled: false
Append slash for directory enabled: false
Stripping the following headers from responses: x-amz-;
CORS Enabled: 0
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
20-envsubst-on-templates.sh: Running envsubst on /etc/nginx/templates/gateway/cors.conf.template to /etc/nginx/conf.d/gateway/cors.conf
20-envsubst-on-templates.sh: Running envsubst on /etc/nginx/templates/gateway/v2_js_vars.conf.template to /etc/nginx/conf.d/gateway/v2_js_vars.conf
20-envsubst-on-templates.sh: Running envsubst on /etc/nginx/templates/gateway/s3_location.conf.template to /etc/nginx/conf.d/gateway/s3_location.conf
20-envsubst-on-templates.sh: Running envsubst on /etc/nginx/templates/gateway/s3_server.conf.template to /etc/nginx/conf.d/gateway/s3_server.conf
20-envsubst-on-templates.sh: Running envsubst on /etc/nginx/templates/gateway/v4_js_vars.conf.template to /etc/nginx/conf.d/gateway/v4_js_vars.conf
20-envsubst-on-templates.sh: Running envsubst on /etc/nginx/templates/gateway/v2_headers.conf.template to /etc/nginx/conf.d/gateway/v2_headers.conf
20-envsubst-on-templates.sh: Running envsubst on /etc/nginx/templates/gateway/v4_headers.conf.template to /etc/nginx/conf.d/gateway/v4_headers.conf
20-envsubst-on-templates.sh: Running envsubst on /etc/nginx/templates/gateway/js_fetch_trusted_certificate.conf.template to /etc/nginx/conf.d/gateway/js_fetch_trusted_certificate.conf
20-envsubst-on-templates.sh: Running envsubst on /etc/nginx/templates/gateway/s3listing_location.conf.template to /etc/nginx/conf.d/gateway/s3listing_location.conf
20-envsubst-on-templates.sh: Running envsubst on /etc/nginx/templates/default.conf.template to /etc/nginx/conf.d/default.conf
20-envsubst-on-templates.sh: Running envsubst on /etc/nginx/templates/upstreams.conf.template to /etc/nginx/conf.d/upstreams.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/22-enable_js_fetch_trusted_certificate.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2023/04/08 02:51:07 [notice] 1#1: using the "epoll" event method
2023/04/08 02:51:07 [notice] 1#1: nginx/1.23.3
2023/04/08 02:51:07 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6)
2023/04/08 02:51:07 [notice] 1#1: OS: Linux 5.15.0-1033-aws
2023/04/08 02:51:07 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2023/04/08 02:51:07 [notice] 1#1: start worker processes
2023/04/08 02:51:07 [notice] 1#1: start worker process 76
2023/04/08 02:51:07 [notice] 1#1: start cache manager process 77
2023/04/08 02:51:07 [notice] 1#1: start cache loader process 78
2023/04/08 02:52:07 [crit] 78#78: cache file "/var/cache/nginx/s3_proxy/foo/test" is too small
2023/04/08 02:52:07 [crit] 78#78: unlink() "/var/cache/nginx/s3_proxy/foo/test" failed (13: Permission denied)
2023/04/08 02:52:07 [notice] 78#78: http file cache: /var/cache/nginx/s3_proxy 0.000M, bsize: 4096
2023/04/08 02:52:07 [notice] 1#1: signal 17 (SIGCHLD) received from 78
2023/04/08 02:52:07 [notice] 1#1: cache loader process 78 exited with code 0
2023/04/08 02:52:07 [notice] 1#1: signal 29 (SIGIO) received

Something is wrong; the cache loader keeps crashing...

This line from the logs seems concerning:

2023/04/08 02:52:07 [crit] 78#78: unlink() "/var/cache/nginx/s3_proxy/foo/test" failed (13: Permission denied)

What happens when you do the steps outlined above where you execute these commands from inside the container?

$ mkdir /var/cache/nginx/s3_proxy/foo
$ touch /var/cache/nginx/s3_proxy/foo/bar

I'm guessing that your container is somehow mounted as read-only, or that the /var/cache/nginx/s3_proxy directory is not getting the correct permissions.

Can you share your Docker version and OS? Also, what happens when you build the container image on your own system instead of pulling it from GitHub?
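Those two checks can be sketched from the host like this (the container name comes from the docker run command earlier in this thread; treat this as a starting point rather than a recipe):

```shell
#!/bin/sh
# Is the container running with a read-only root filesystem, and who owns
# the cache directory inside it? (Container name assumed from this thread.)
if command -v docker >/dev/null 2>&1; then
  docker inspect --format '{{.HostConfig.ReadonlyRootfs}}' nginx-s3-gateway \
    2>/dev/null || echo "container nginx-s3-gateway not found"
  docker exec nginx-s3-gateway ls -ldn /var/cache/nginx/s3_proxy 2>/dev/null
else
  echo "docker not available on this host"
fi
```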

Hi,

The same issue occurred when the container was started from a locally built Docker image (created with the following command):

docker build --file Dockerfile.oss --tag nginx-s3-gateway:oss --tag nginx-s3-gateway .

And, below is the version of Docker and OS system:

ubuntu@ip-10-0-2-162:~$ docker --version
Docker version 23.0.1, build a5ee5b1
ubuntu@ip-10-0-2-162:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 20.04.5 LTS
Release:        20.04
Codename:       focal

I am using this standard AMI: ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20230207

Any update? I tried running the proxy in a different region using a different AWS account, but the issue still persists.

I've found an issue where, when running in an EC2 instance, the S3_SESSION_TOKEN is ignored. However, it does not produce the errors you posted above. I'm still investigating.

Another question, how did you install Docker? Was it via the default Ubuntu packages or did you use the Docker ppa?

Hi,

Thanks, please let me know if you find something.

I installed Docker in the following two ways, and both had the same issue:

  1. Installed the Docker package from the Ubuntu repositories:
    sudo apt-get install docker.io docker-compose -y

  2. Installed Docker from the PPA, following this article:
    https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-on-ubuntu-20-04

While investigating your issue, I stumbled across this bug: #118. Could you try out the code from this PR and see if it resolves your issue: #119

Honestly, I do not think it will, because your particular error message indicates a problem with writing to the file system. Another good thing to check would be the total free disk space on the host and in the container.
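As a concrete starting point for the disk-space check (run it on the host, and again inside the container via docker exec; /var stands in here for whatever filesystem backs /var/cache/nginx/s3_proxy):

```shell
# Show free space on the filesystem backing /var; if it is full, the nginx
# cache manager cannot write cache files.
df -h /var
```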

Hi,

As suggested, I tried testing with the code from bug #118.
Below are some findings:

  1. The S3 session environment variable is loaded successfully into the Docker container.
  2. I logged into the Docker container as the nginx user instead of root and tried to create a folder and a file inside the s3_proxy folder. Now I don't get the unlink() error, but the cache loader still exits after a few seconds:
ubuntu@ip-10-0-2-251:~/nginx-s3-gateway_temp$ docker-compose up
Creating nginx_container ... done
Attaching to nginx_container
nginx_container | /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
nginx_container | /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
nginx_container | /docker-entrypoint.sh: Launching /docker-entrypoint.d/00-check-for-required-env.sh
nginx_container | Running inside an EC2 instance, using IMDS for credentials
nginx_container | S3 Backend Environment
nginx_container | Access Key ID: ***********************
nginx_container | Origin: https://<bucket-name>.s3.eu-west-1.amazonaws.com:443
nginx_container | Region: eu-west-1
nginx_container | Addressing Style: virtual
nginx_container | AWS Signatures Version: v4
nginx_container | DNS Resolvers:  127.0.0.11
nginx_container | Directory Listing Enabled: false
nginx_container | Provide Index Pages Enabled: false
nginx_container | Append slash for directory enabled: false
nginx_container | Stripping the following headers from responses: x-amz-;
nginx_container | CORS Enabled: 0
nginx_container | /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
nginx_container | 10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
nginx_container | 10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
nginx_container | /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
nginx_container | 20-envsubst-on-templates.sh: Running envsubst on /etc/nginx/templates/default.conf.template to /etc/nginx/conf.d/default.conf
nginx_container | 20-envsubst-on-templates.sh: Running envsubst on /etc/nginx/templates/gateway/s3_server.conf.template to /etc/nginx/conf.d/gateway/s3_server.conf
nginx_container | 20-envsubst-on-templates.sh: Running envsubst on /etc/nginx/templates/gateway/v2_js_vars.conf.template to /etc/nginx/conf.d/gateway/v2_js_vars.conf
nginx_container | 20-envsubst-on-templates.sh: Running envsubst on /etc/nginx/templates/gateway/cors.conf.template to /etc/nginx/conf.d/gateway/cors.conf
nginx_container | 20-envsubst-on-templates.sh: Running envsubst on /etc/nginx/templates/gateway/v4_headers.conf.template to /etc/nginx/conf.d/gateway/v4_headers.conf
nginx_container | 20-envsubst-on-templates.sh: Running envsubst on /etc/nginx/templates/gateway/v2_headers.conf.template to /etc/nginx/conf.d/gateway/v2_headers.conf
nginx_container | 20-envsubst-on-templates.sh: Running envsubst on /etc/nginx/templates/gateway/s3listing_location.conf.template to /etc/nginx/conf.d/gateway/s3listing_location.conf
nginx_container | 20-envsubst-on-templates.sh: Running envsubst on /etc/nginx/templates/gateway/v4_js_vars.conf.template to /etc/nginx/conf.d/gateway/v4_js_vars.conf
nginx_container | 20-envsubst-on-templates.sh: Running envsubst on /etc/nginx/templates/gateway/s3_location.conf.template to /etc/nginx/conf.d/gateway/s3_location.conf
nginx_container | 20-envsubst-on-templates.sh: Running envsubst on /etc/nginx/templates/gateway/js_fetch_trusted_certificate.conf.template to /etc/nginx/conf.d/gateway/js_fetch_trusted_certificate.conf
nginx_container | 20-envsubst-on-templates.sh: Running envsubst on /etc/nginx/templates/upstreams.conf.template to /etc/nginx/conf.d/upstreams.conf
nginx_container | /docker-entrypoint.sh: Launching /docker-entrypoint.d/22-enable_js_fetch_trusted_certificate.sh
nginx_container | /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
nginx_container | /docker-entrypoint.sh: Configuration complete; ready for start up
nginx_container | 2023/04/15 14:49:20 [notice] 1#1: using the "epoll" event method
nginx_container | 2023/04/15 14:49:20 [notice] 1#1: nginx/1.23.3
nginx_container | 2023/04/15 14:49:20 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6)
nginx_container | 2023/04/15 14:49:20 [notice] 1#1: OS: Linux 5.15.0-1033-aws
nginx_container | 2023/04/15 14:49:20 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
nginx_container | 2023/04/15 14:49:20 [notice] 1#1: start worker processes
nginx_container | 2023/04/15 14:49:20 [notice] 1#1: start worker process 77
nginx_container | 2023/04/15 14:49:20 [notice] 1#1: start cache manager process 78
nginx_container | 2023/04/15 14:49:20 [notice] 1#1: start cache loader process 79
nginx_container | 2023/04/15 14:50:20 [crit] 79#79: cache file "/var/cache/nginx/s3_proxy/foo/bar" is too small
nginx_container | 2023/04/15 14:50:20 [notice] 79#79: http file cache: /var/cache/nginx/s3_proxy 0.000M, bsize: 4096
nginx_container | 2023/04/15 14:50:20 [notice] 1#1: signal 17 (SIGCHLD) received from 79
nginx_container | 2023/04/15 14:50:20 [notice] 1#1: cache loader process 79 exited with code 0
nginx_container | 2023/04/15 14:50:20 [notice] 1#1: signal 29 (SIGIO) received

I'm not sure if this is expected behavior or not.

Below is the updated settings file:

S3_BUCKET_NAME=<bucket-name>
S3_ACCESS_KEY_ID=************
S3_SECRET_KEY=**********
S3_SESSION=*************
S3_SERVER=s3.eu-west-1.amazonaws.com
S3_SERVER_PORT=443
S3_SERVER_PROTO=https
S3_REGION=eu-west-1
S3_STYLE=virtual
S3_DEBUG=false
AWS_SIGS_VERSION=4
ALLOW_DIRECTORY_LIST=false
PROVIDE_INDEX_PAGE=false
APPEND_SLASH_FOR_POSSIBLE_DIRECTORY=false
PROXY_CACHE_VALID_OK=1h
PROXY_CACHE_VALID_NOTFOUND=1m
PROXY_CACHE_VALID_FORBIDDEN=30s

I think we got the error:

nginx_container | 2023/04/15 14:50:20 [crit] 79#79: cache file "/var/cache/nginx/s3_proxy/foo/bar" is too small

because we created the test directory. I would remove that container and start up a new container with the same settings. By the way, the PR I had you test has now been merged, so go ahead and pull the latest image and try that.

Hi,

Now I see something working, but I'm not quite there yet. The cache loader still exits after 1 minute, and after that the API call logs are printed:

nginx_container | /docker-entrypoint.sh: Launching /docker-entrypoint.d/22-enable_js_fetch_trusted_certificate.sh
nginx_container | /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
nginx_container | /docker-entrypoint.sh: Configuration complete; ready for start up
nginx_container | 2023/04/16 09:52:08 [notice] 1#1: using the "epoll" event method
nginx_container | 2023/04/16 09:52:08 [notice] 1#1: nginx/1.23.3
nginx_container | 2023/04/16 09:52:08 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6)
nginx_container | 2023/04/16 09:52:08 [notice] 1#1: OS: Linux 5.15.0-1033-aws
nginx_container | 2023/04/16 09:52:08 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
nginx_container | 2023/04/16 09:52:08 [notice] 1#1: start worker processes
nginx_container | 2023/04/16 09:52:08 [notice] 1#1: start worker process 77
nginx_container | 2023/04/16 09:52:08 [notice] 1#1: start cache manager process 78
nginx_container | 2023/04/16 09:52:08 [notice] 1#1: start cache loader process 79
nginx_container | 2023/04/16 09:53:08 [notice] 79#79: http file cache: /var/cache/nginx/s3_proxy 0.000M, bsize: 4096
nginx_container | 2023/04/16 09:53:08 [notice] 1#1: signal 17 (SIGCHLD) received from 79
nginx_container | 2023/04/16 09:53:08 [notice] 1#1: cache loader process 79 exited with code 0
nginx_container | 2023/04/16 09:53:08 [notice] 1#1: signal 29 (SIGIO) received
nginx_container | 2023/04/16 09:53:46 [info] 77#77: *1 js: S3 Request URI: GET /?delimiter=%2F
nginx_container | 2023/04/16 09:53:46 [info] 77#77: *1 js: AWS v4 Auth Canonical Request: [GET
nginx_container | /
nginx_container | delimiter=%2F
nginx_container | host:<bucket_name>.s3.eu-west-1.amazonaws.com
nginx_container | x-amz-content-sha256:
*********************************************************
nginx_container | x-amz-date:20230416T095346Z
nginx_container |
nginx_container | host;x-amz-content-sha256;x-amz-date
nginx_container | **********]
nginx_container | 2023/04/16 09:53:46 [info] 77#77: *1 js: AWS v4 Auth Canonical Request Hash: [1f291f529840893cd2e28baf791cfd880b5d77826d4e69cb76a5dae95e78f255]
nginx_container | 2023/04/16 09:53:46 [info] 77#77: *1 js: AWS v4 Auth Signing String: [AWS4-HMAC-SHA256
nginx_container | 20230416T095346Z
nginx_container | 20230416/eu-west-1/s3/aws4_request
nginx_container | 1f291f529840893cd2e28baf791cfd880b5d77826d4e69cb76a5dae95e78f255]
nginx_container | 2023/04/16 09:53:46 [info] 77#77: *1 js: AWS v4 Signing Key Hash: [e333857972efa857a8fae36817b090700e2804987d33cb8da929df946f154928]
nginx_container | 2023/04/16 09:53:46 [info] 77#77: *1 js: AWS v4 Authorization Header: [f7f48a4be67f94f42f7641a3293d87e99faed516de1f1ca4a3b6639e4c2e4fc6]
nginx_container | 2023/04/16 09:53:46 [info] 77#77: *1 js: AWS v4 Auth header: [AWS4-HMAC-SHA256 Credential=
/20230416/eu-west-1/s3/aws4_request,SignedHeaders=host;x-amz-content-sha256;x-amz-date,Signature=]
nginx_container | 188.232.232.60 - - [16/Apr/2023:09:53:46 +0000] "GET / HTTP/1.1" 404 548 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36" "-"
nginx_container | 2023/04/16 09:53:46 [info] 77#77: *1 client 188.232.232.60 closed keepalive connection
nginx_container | 2023/04/16 09:54:42 [info] 77#77: *3 js: S3 Request URI: GET /.env
nginx_container | 2023/04/16 09:54:42 [info] 77#77: *3 js: S3 Request URI: GET /.env
nginx_container | 2023/04/16 09:54:42 [info] 77#77: *3 js: AWS v4 Auth Canonical Request: [GET
nginx_container | /.env
nginx_container |
nginx_container | host:<bucket_name>.s3.eu-west-1.amazonaws.com
nginx_container | x-amz-content-sha256:

nginx_container | x-amz-date:20230416T095442Z
nginx_container |
nginx_container | host;x-amz-content-sha256;x-amz-date
nginx_container | e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855]
nginx_container | 2023/04/16 09:54:42 [info] 77#77: *3 js: AWS v4 Auth Canonical Request Hash: [
]
nginx_container | 2023/04/16 09:54:42 [info] 77#77: *3 js: AWS v4 Auth Signing String: [AWS4-HMAC-SHA256
nginx_container | 20230416T095442Z
nginx_container | 20230416/eu-west-1/s3/aws4_request
nginx_container | *************************************************************************************]
nginx_container | 2023/04/16 09:54:42 [info] 77#77: *3 js: AWS v4 Signing Key Hash: [e333857972efa857a8fae36817b090700e2804987d33cb8da929df946f154928]
nginx_container | 2023/04/16 09:54:42 [info] 77#77: *3 js: AWS v4 Authorization Header: []
nginx_container | 2023/04/16 09:54:42 [info] 77#77: *3 js: AWS v4 Auth header: [AWS4-HMAC-SHA256 Credential=
/20230416/eu-west-1/s3/aws4_request,SignedHeaders=host;x-amz-content-sha256;x-amz-date,Signature=**************************************************************************]
nginx_container | 185.254.196.173 - - [16/Apr/2023:09:54:42 +0000] "GET /.env HTTP/1.1" 404 548 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.129 Safari/537.36" "-"
nginx_container | 2023/04/16 09:54:42 [info] 77#77: *3 client 185.254.196.173 closed keepalive connection
nginx_container | 2023/04/16 09:54:43 [error] 77#77: *5 "/etc/nginx/html/index.html" is not found (2: No such file or directory), client: 185.254.196.173, server: , request: "POST / HTTP/1.1", host: "34.244.206.251"
nginx_container | 185.254.196.173 - - [16/Apr/2023:09:54:43 +0000] "POST / HTTP/1.1" 404 548 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.129 Safari/537.36" "-"
nginx_container | 2023/04/16 09:54:43 [info] 77#77: *5 client 185.254.196.173 closed keepalive connection

And I don't see any S3 objects getting cached inside the container either; the /var/cache/nginx/s3_proxy folder is still empty.

ubuntu@ip-10-0-2-251:~$ curl -I -k https://<bucket_name>.s3.eu-west-1.amazonaws.com/test/dsm.png
HTTP/1.1 200 OK
x-amz-id-2: *********************
x-amz-request-id: *****************
Date: Sun, 16 Apr 2023 09:38:10 GMT
Last-Modified: Sun, 16 Apr 2023 09:33:06 GMT
ETag: "*******************"
x-amz-server-side-encryption: AES256
Accept-Ranges: bytes
Content-Type: image/png
Server: AmazonS3
Content-Length: 1785613

How are you testing the gateway? What is the exact curl command you are using to test?

I use the below command:

ubuntu@ip-10-0-2-251:~$ curl -I -k https://<bucket_name>.s3.eu-west-1.amazonaws.com/test/dsm.png

Ah! That's the problem. You are calling AWS S3 directly rather than the IP and port of the gateway. If you are on the gateway host, and you started the container with the command you posted above, you would execute curl like this:

$ curl http://localhost/test/dsm.png
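A hypothetical end-to-end check along those lines, run from the gateway host with the object path used earlier in this thread: fetch the object twice through the gateway, then list what landed in the cache directory. It skips itself if nothing is listening on localhost:80.

```shell
#!/bin/sh
# Request the same object twice via the gateway, then list cached files.
if curl -fsS -o /dev/null http://localhost/test/dsm.png 2>/dev/null; then
  curl -fsS -o /dev/null http://localhost/test/dsm.png
  docker exec nginx-s3-gateway find /var/cache/nginx/s3_proxy -type f | head -n 5
else
  echo "gateway not reachable on localhost:80; skipping"
fi
```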

Ok, got it. But now I get this error:

nginx_container | 2023/04/17 09:55:04 [error] 18#18: *1 open() "/usr/share/nginx/html/test/dsm.png" failed (2: No such file or directory), client: 10.0.3.135, server: localhost, request: "HEAD /test/dsm.png HTTP/1.1", host: "10.0.2.251"
nginx_container | 10.0.3.135 - - [17/Apr/2023:09:55:04 +0000] "HEAD /test/dsm.png HTTP/1.1" 404 0 "-" "curl/7.68.0" "-"
nginx_container | 2023/04/17 09:55:04 [info] 18#18: *1 client 10.0.3.135 closed keepalive connection
nginx_container | 2023/04/17 09:55:21 [error] 18#18: *2 open() "/usr/share/nginx/html/test/dsm.png" failed (2: No such file or directory), client: 172.23.0.1, server: localhost, request: "HEAD /test/dsm.png HTTP/1.1", host: "localhost"
nginx_container | 172.23.0.1 - - [17/Apr/2023:09:55:21 +0000] "HEAD /test/dsm.png HTTP/1.1" 404 0 "-" "curl/7.68.0" "-"
nginx_container | 2023/04/17 09:55:21 [info] 18#18: *2 client 172.23.0.1 closed keepalive connection
nginx_container | 172.23.0.1 - - [17/Apr/2023:09:55:32 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/7.68.0" "-"
nginx_container | 2023/04/17 09:55:32 [info] 18#18: *3 client 172.23.0.1 closed keepalive connection
nginx_container | 2023/04/17 09:55:48 [error] 18#18: *4 open() "/usr/share/nginx/html/test/dsm.png" failed (2: No such file or directory), client: 172.23.0.1, server: localhost, request: "GET /test/dsm.png HTTP/1.1", host: "localhost"
nginx_container | 172.23.0.1 - - [17/Apr/2023:09:55:48 +0000] "GET /test/dsm.png HTTP/1.1" 404 153 "-" "curl/7.68.0" "-"
nginx_container | 2023/04/17 09:55:48 [info] 18#18: *4 client 172.23.0.1 closed keepalive connection
nginx_container | 2023/04/17 09:55:57 [notice] 20#20: http file cache: /var/cache/nginx/s3_proxy 0.000M, bsize: 4096
nginx_container | 2023/04/17 09:55:57 [notice] 16#16: signal 17 (SIGCHLD) received from 20
nginx_container | 2023/04/17 09:55:57 [notice] 16#16: cache loader process 20 exited with code 0
nginx_container | 2023/04/17 09:55:57 [notice] 16#16: signal 29 (SIGIO) received
nginx_container | 2023/04/17 09:56:08 [error] 18#18: *5 open() "/usr/share/nginx/html/test/dsm.png" failed (2: No such file or directory), client: 172.23.0.1, server: localhost, request: "GET /test/dsm.png HTTP/1.1", host: "localhost"
nginx_container | 172.23.0.1 - - [17/Apr/2023:09:56:08 +0000] "GET /test/dsm.png HTTP/1.1" 404 153 "-" "curl/7.68.0" "-"
nginx_container | 2023/04/17 09:56:08 [info] 18#18: *5 client 172.23.0.1 closed keepalive connection

Those log messages do not make sense because nowhere in the s3 gateway configuration is /usr/share/nginx/html/ configured as a directory. Something is not right with your configuration.

Closing issue due to no updates for 3 weeks.

@dekobon

Hi,
We are trying to enable caching within nginx. For that, we are deploying the "nginx:stable" image within Kubernetes. I have added the default.conf and nginx.conf files using Kubernetes volumes and volume mounts. When running the nginx application, we see the log lines below, where the cache loader stops. Could you help us figure out what is missing?

I have tried changing the location of
2023/08/23 11:17:11 [notice] 1#1: start worker processes
2023/08/23 11:17:11 [notice] 1#1: start worker process 20
2023/08/23 11:17:11 [notice] 1#1: start worker process 21
2023/08/23 11:17:11 [notice] 1#1: start worker process 22
2023/08/23 11:17:11 [notice] 1#1: start worker process 23
2023/08/23 11:17:11 [notice] 1#1: start cache manager process 24
2023/08/23 11:17:11 [notice] 1#1: start cache loader process 25
2023/08/23 11:18:11 [notice] 25#25: http file cache: /var/cache/nginx 0.000M, bsize: 4096
2023/08/23 11:18:11 [notice] 1#1: signal 17 (SIGCHLD) received from 25
2023/08/23 11:18:11 [notice] 1#1: cache loader process 25 exited with code 0
2023/08/23 11:18:11 [notice] 1#1: signal 29 (SIGIO) received

default.conf
server {
    listen 80 reuseport;
    listen [::]:80;
    server_name localhost;

    proxy_cache ohfs_cache;
    proxy_cache_revalidate on;
    proxy_cache_min_uses 3;
    proxy_cache_background_update on;
    proxy_cache_lock on;

    #access_log  /var/log/nginx/host.access.log  main;

    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
        try_files $uri /index.html;
    }

    #error_page  404              /404.html;
    # redirect server error pages to the static page /50x.html
    #      
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }
    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass   http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    root           html;
    #    fastcgi_pass   127.0.0.1:9000;
    #    fastcgi_index  index.php;
    #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
    #    include        fastcgi_params;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny  all;
    #}
}

nginx.conf

user nginx;
worker_processes auto;
worker_rlimit_nofile 8192;

  error_log  /var/log/nginx/error.log notice;
  pid        /var/run/nginx.pid;


  events {
      worker_connections  8192;
      accept_mutex  on;
      multi_accept  on;
  }


  http {
      include       /etc/nginx/mime.types;
      default_type  application/octet-stream;
      proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=ohfs_cache:10m max_size=500m 
                       inactive=60m use_temp_path=off;      

      #access_log  /var/log/nginx/access.log  main;

      sendfile        on;
      tcp_nopush     on;
      tcp_nodelay  on;
      keepalive_timeout  120;
      keepalive_requests  25000;
      
      #gzip  on;

      include /etc/nginx/conf.d/*.conf;
  }

Hi @Arulaln-AR

The log messages you shared above indicate that the cache loader is working correctly. Please see my annotation below of what is going on.

# Cache manager has started with pid 24
2023/08/23 11:17:11 [notice] 1#1: start cache manager process 24
# Cache loader has started with pid 25
2023/08/23 11:17:11 [notice] 1#1: start cache loader process 25
# http file cache was created
2023/08/23 11:18:11 [notice] 25#25: http file cache: /var/cache/nginx 0.000M, bsize: 4096
# Master process was signaled that its child process with pid 25 exited
2023/08/23 11:18:11 [notice] 1#1: signal 17 (SIGCHLD) received from 25
# Cache loader process ended without problems (code 0) because it is done loading the cache
2023/08/23 11:18:11 [notice] 1#1: cache loader process 25 exited with code 0
# No idea what SIGIO is
2023/08/23 11:18:11 [notice] 1#1: signal 29 (SIGIO) received

Are you having problems with caching or are you just concerned about the log message?
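One detail from the default.conf you posted that could explain an empty cache directory: with `proxy_cache_min_uses 3;`, nginx only writes an object to the cache after the third request for the same cache key, so a single test request leaves `/var/cache/nginx` empty. A sketch you could try while testing (my suggestion, not a definitive fix):

```nginx
# Testing sketch: cache on the first request and expose the cache
# status so it can be checked from a client with curl -I.
proxy_cache ohfs_cache;
proxy_cache_min_uses 1;   # default; your config delays caching until the 3rd hit
add_header X-Cache-Status $upstream_cache_status always;
```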

@dekobon,

Thanks for the reply, really appreciate that you have responded so quickly.

I don't see any cache files being created under /var/cache/nginx inside the pod. Why is that?
As you can see in my default.conf, I set proxy_cache and the related parameters in the server block. Is that the problem? Should they go in the location block instead?
Going through the docs, it looks like those parameters can be set at the http, server, or location level.

Regards,
Arulaln A R

It is worth trying to move them from the server block to the http block, because that is what we do in the default S3 gateway configuration. However, that is a guess; I do not understand why it is not working.
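Roughly like this (untested sketch, using the cache path and zone name from your configuration):

```nginx
http {
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=ohfs_cache:10m
                     max_size=500m inactive=60m use_temp_path=off;

    # Moved up from the server block:
    proxy_cache ohfs_cache;
    proxy_cache_revalidate on;
    proxy_cache_min_uses 3;
    proxy_cache_background_update on;
    proxy_cache_lock on;

    # ... rest of the http block unchanged ...
}
```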

@dekobon ,
To clarify, the configuration below is currently set in default.conf within the server block.

proxy_cache ohfs_cache;
proxy_cache_revalidate on;
proxy_cache_min_uses 3;
proxy_cache_background_update on;
proxy_cache_lock on;

The http block contains the actual cache path, keys zone, and size settings, as below.

proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=ohfs_cache:10m max_size=500m
inactive=60m use_temp_path=off;

Have you tried content caching within Kubernetes pods before? Or do you know any colleagues who have done that? Maybe I can send an invite to connect and discuss.

Regards,
Arulaln A R

@dekobon ,

Our application architecture is designed as below.

We are hosting our UI applications using nginx as a web server. Can we enable the proxy cache for that, or does it only apply when nginx is used as a reverse proxy with upstream applications to connect to?

Current application flow (is proxy cache supported here?):
client -> UI app hosted on nginx as a web server

Nginx reverse proxy flow:
Proxy cache is supported here, right? In all the docs I can see proxy_pass as the value within the location directive.
client -> nginx reverse proxy -> UI application
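For example, something like the below, as I understand it (the upstream address is just a placeholder to illustrate, not our real setup):

```nginx
# Sketch only: ui-app:8080 is a hypothetical upstream.
# proxy_cache stores responses nginx receives from an upstream via
# proxy_pass; files served directly with root have nothing to cache.
location / {
    proxy_pass http://ui-app:8080;
    proxy_cache ohfs_cache;
    proxy_cache_valid 200 10m;
    add_header X-Cache-Status $upstream_cache_status;
}
```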

Excuse me if this is repetitive, but I want to understand clearly: are you encountering this issue when running the NGINX S3 Gateway Docker image provided in this repo with Kubernetes? Or have you built your own container image from a different Dockerfile or a modification of the provided one?

@dekobon,
Hi,

No, we are using the open-source image 'nginx:stable' as the base image, and it acts as a web server for our UI application; all our application code is deployed within this web server.

Are you proxying requests to S3 with njs scripts from this project using that web server?

@dekobon
No, we are not sending requests to S3.
Below is the architecture of our application.

Current application flow (is proxy cache supported here?):
client -> UI app hosted on nginx as a web server

I just raised the issue here to get some help, since the log message is of the same type.

Hi @Arulaln-AR

The GitHub issue tracker you are using is for the NGINX S3 Gateway open source project. This isn't the right place to get help with a more general NGINX problem.

I would suggest that you ask your questions in one of the NGINX community groups, because this issue tracker is not the right place for this discussion.

Thanks for the above details. Sure, I will get in touch with the appropriate team.
Thanks again!