load balancing often fails
seqwait opened this issue
I have 2 Swarm nodes:
ntr276c3lbdmt58zjrwdfy606 java1 Ready Active Leader 20.10.11
d0thnr145hhppyaxx2ku00lnx java2 Ready Active 20.10.11
network: overall_java (overlay, swarm scope)
java1/java2 [global mode]: a Spring Boot project (app port 8080) attached to the overall_java network
Nginx runs outside the Swarm; its config:
upstream app-server {
    server node1.ip:8080 weight=1;
}
server {
    listen 80;
    server_name aa888.com www.aa888.com;
    location / {
        proxy_pass http://app-server;
        proxy_read_timeout 7200;
        proxy_connect_timeout 5;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
Every ten minutes or so, Nginx logs a batch of timeout errors; I don't know what the problem is:
upstream timed out (110: Connection timed out) while connecting to upstream, client:
You cannot use upstream directives in this context. Instead, you will need to keep the upstream hostname in a variable so it is re-resolved each time nginx evaluates proxy_pass.
You also need to tell nginx to use Docker's embedded DNS resolver (127.0.0.11). The DNS records it serves should have a proper TTL, but I got it working by forcing a small explicit cache time with the resolver directive's valid parameter.
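To sanity-check what the embedded resolver returns, you can query it from inside any container attached to the overlay network. This is a sketch; `tasks.myapp` stands in for your real service name:

```shell
# Hypothetical helper: list the unique IPv4 addresses a hostname resolves to.
# Run inside a container on the overall_java network; Docker's embedded DNS
# (127.0.0.11) answers tasks.<service> with one A record per running task.
resolve_tasks() {
  getent ahostsv4 "$1" | awk '{print $1}' | sort -u
}

# Example (service name is hypothetical):
#   resolve_tasks tasks.myapp
# prints one address per running task; the list changes as you scale
```

If the list does not change as tasks come and go, the problem is on the Docker side rather than in the nginx config.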
The following changes to your config will allow nginx to always find new tasks as you scale up and down.
resolver 127.0.0.11 valid=5s;

location / {
    set $s tasks.target:80;
    proxy_pass http://$s;
}
Sadly, this does not support upstream keepalive directives yet; only NGINX Plus supports upstream blocks whose members are dynamically re-resolved from A records.
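For reference, the NGINX Plus variant would look roughly like this (a sketch; `tasks.myapp` is again a placeholder). The `resolve` parameter makes NGINX Plus periodically re-resolve the name, and because a real upstream block is back, `keepalive` works again:

```nginx
upstream app-server {
    zone app-server 64k;              # shared memory zone, required for resolve
    server tasks.myapp:8080 resolve;  # NGINX Plus only; hypothetical service name
    keepalive 32;
}
```

A `resolver` directive still has to be defined so NGINX Plus knows which DNS server to query.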