Reverse proxying fails when `HTTP_PROXY` or `HTTPS_PROXY` environment variables are set
Aequitosh opened this issue · comments
The Issue
When passing either the HTTP_PROXY or HTTPS_PROXY environment variables to Caddy (in this case, via Docker), reverse proxying unfortunately fails.
I'm aware that this is most likely unsupported, as I was not able to find any documentation on it. I still wanted to file this issue, since I only discovered that this was possible while sifting through Caddy's code (because of some other problem I was having, which isn't relevant to this issue in particular). The env vars are taken into account here, at line 269:
caddy/modules/caddyhttp/reverseproxy/httptransport.go, lines 268 to 277 at commit 03f703a
Although I couldn't find any relevant docs, I expected that Caddy's reverse_proxy would still honor the env vars when proxying requests. Then again, I'm not really sure whether what I'm trying to do here is even possible.
The following kind of error is raised when trying to connect to my backend (host & uri changed here):
malformed HTTP status code "protocol"
{
  "request": {
    "remote_ip": "172.16.64.89",
    "remote_port": "54250",
    "client_ip": "172.16.64.89",
    "proto": "HTTP/1.1",
    "method": "POST",
    "host": "foo.backend.tld:8007",
    "uri": "/some/thing",
    "headers": {
      "Connection": ["Keep-Alive, TE"],
      "Accept-Encoding": ["gzip"],
      "User-Agent": ["libwww-perl/6.68"],
      "Content-Length": ["38"],
      "Content-Type": ["application/x-www-form-urlencoded"],
      "Te": ["deflate,gzip;q=0.3"],
      "Keep-Alive": ["300"]
    },
    "tls": {
      "resumed": false,
      "version": 772,
      "cipher_suite": 4867,
      "proto": "",
      "server_name": "foo.backend.tld"
    }
  },
  "duration": 0.003689084,
  "status": 502,
  "err_id": "vgg0m0d55",
  "err_trace": "reverseproxy.statusError (reverseproxy.go:1267)"
}
Here are the relevant logs with timestamps omitted; all errors are like this (I only have one backend at the moment):
events event {"name": "tls_get_certificate", "id": "099fae6b-927c-462d-abf7-a49bc41723c0", "origin": "tls", "data": {"client_hello":{"CipherSuites":[4866,4867,4865,49196,49200,159,52393,52392,52394,49195,49199,158,49188,49192,107,49187,49191,103,49162,49172,57,49161,49171,51,157,156,61,60,53,47,255],"ServerName":"foo.backend.tld","SupportedCurves":[29,23,30,25,24,256,257,258,259,260],"SupportedPoints":"AAEC","SignatureSchemes":[1027,1283,1539,2055,2056,2057,2058,2059,2052,2053,2054,1025,1281,1537,771,769,770,1026,1282,1538],"SupportedProtos":null,"SupportedVersions":[772,771,770,769],"RemoteAddr":{"IP":"172.16.64.89","Port":54250,"Zone":""},"LocalAddr":{"IP":"172.20.0.2","Port":8007,"Zone":""}}}}
tls.handshake no matching certificate; will choose from all certificates {"identifier": "foo.backend.tld"}
tls.handshake choosing certificate {"identifier": "foo.backend.tld", "num_choices": 1}
tls.handshake custom certificate selection results {"identifier": "foo.backend.tld", "subjects": ["*.backend.tld"], "managed": false, "issuer_key": "", "hash": "cf26a2af42cdca71bb67d5999da87c2def29ee8b9dcebb891d180cb5f5d8f27e"}
tls.handshake matched certificate in cache {"remote_ip": "172.16.64.89", "remote_port": "54250", "subjects": ["*.backend.tld"], "managed": false, "expiration": "2051/06/26 15:33:44.000", "hash": "cf26a2af42cdca71bb67d5999da87c2def29ee8b9dcebb891d180cb5f5d8f27e"}
http.handlers.reverse_proxy selected upstream {"dial": "172.16.64.78:8007", "total_upstreams": 1}
http.handlers.reverse_proxy upstream roundtrip {"upstream": "172.16.64.78:8007", "duration": 0.003611673, "request": {"remote_ip": "172.16.64.89", "remote_port": "54250", "client_ip": "172.16.64.89", "proto": "HTTP/1.1", "method": "POST", "host": "172.16.64.78:8007", "uri": "/some/thing", "headers": {"User-Agent": ["libwww-perl/6.68"], "X-Forwarded-Proto": ["https"], "Accept-Encoding": ["gzip"], "Content-Length": ["38"], "Content-Type": ["application/x-www-form-urlencoded"], "X-Forwarded-For": ["172.16.64.89:54250"], "X-Forwarded-Host": ["foo.backend.tld:8007"], "X-Real-Ip": ["172.16.64.89:54250"], "X-Forwarded-Port": ["8007"]}, "tls": {"resumed": false, "version": 772, "cipher_suite": 4867, "proto": "", "server_name": "foo.backend.tld"}}, "error": "malformed HTTP status code \"protocol\""}
http.log.error.foo malformed HTTP status code "protocol" {"request": {"remote_ip": "172.16.64.89", "remote_port": "54250", "client_ip": "172.16.64.89", "proto": "HTTP/1.1", "method": "POST", "host": "foo.backend.tld:8007", "uri": "/some/thing", "headers": {"Connection": ["Keep-Alive, TE"], "Accept-Encoding": ["gzip"], "User-Agent": ["libwww-perl/6.68"], "Content-Length": ["38"], "Content-Type": ["application/x-www-form-urlencoded"], "Te": ["deflate,gzip;q=0.3"], "Keep-Alive": ["300"]}, "tls": {"resumed": false, "version": 772, "cipher_suite": 4867, "proto": "", "server_name": "foo.backend.tld"}}, "duration": 0.003689084, "status": 502, "err_id": "vgg0m0d55", "err_trace": "reverseproxy.statusError (reverseproxy.go:1267)"}
Caddy
Running Caddy 2.7.6 in Docker.
Caddyfile:
{
	servers {
		trusted_proxies static private_ranges
		protocols h1
	}
	log default {
		output stdout
		format console
		level DEBUG
	}
}

foo.backend.tld {
	tls /data/cert.pem /data/key.pem
	reverse_proxy {
		to https://172.16.64.78:8007
		header_up Host {http.reverse_proxy.upstream.hostport}
		header_up X-Real-IP {http.request.remote}
		header_up X-Forwarded-For {http.request.remote}
		header_up X-Forwarded-Port {http.request.port}
		header_up X-Forwarded-Proto {http.request.scheme}
		transport http {
			tls_insecure_skip_verify
			tls
		}
	}
	log foo {
		format console
		output file /var/log/foo.log {
			roll_size 1gb
			roll_keep 2
		}
	}
}
Docker
# docker --version
Docker version 25.0.3, build 4debf41
docker-compose.yaml:
version: "3.7"
services:
  caddy:
    image: caddy:2.7.6
    restart: always
    environment:
      - HTTP_PROXY=http://172.16.64.110:8080
      - HTTPS_PROXY=https://172.16.64.110:8080
    ports:
      - "80:80"
      - "443:443"
      - "443:443/udp"
    volumes:
      - $PWD/Caddyfile:/etc/caddy/Caddyfile
      - $PWD/site:/srv
      - $PWD/data:/data
      - $PWD/config:/config
Notes
HTTP/2 instead of HTTP/1?
When enabling h2 - that is, changing protocols h1 to protocols h1 h2 in the Caddyfile above - the errors are exactly the same, except that they contain "proto": "HTTP/2" instead of "proto": "HTTP/1.1", even though the backend in question doesn't generally speak HTTP/2 (an upgrade is only requested in one specific case, which does not happen here).
While this may be unrelated, I still wanted to point it out.
The Proxy Used
.. is mitmproxy. I'm trying to debug a very peculiar problem on my end (which is not related to this issue, but might warrant a different one), and we run HTTPS only.
Further Thoughts
As I mentioned above, I'm not sure if this is even supported, so no hard feelings if this won't be fixed.
Also, for completeness' sake: when removing the env vars from docker-compose.yml, everything works fine again (except the thing I'm debugging, grr).
Thanks for reading!
Some More Thoughts
Not sure if this is relevant, but when putting the proxy in front of Caddy and omitting the env vars in docker-compose.yml (e.g. running curl or our client with the relevant env vars set in the CLI instead), the requests successfully go from the client through the proxy, then through Caddy acting as reverse proxy, to the server and back again.
So chaining things that way at least works, but that's unfortunately not the place where I want to sniff the packets.
I tested the implementation of honoring HTTP_PROXY and HTTPS_PROXY when we added support for it. The cited error message isn't coming from Caddy directly; rather, it's coming from either the proxy or the upstream app (meaning one of them sends 'protocol' as the HTTP status, which the Go standard library rejects). Can you check the mitmproxy logs as well as your upstream app during one of the problematic requests?
Thank you for your quick reply!
I have since found the underlying issue. It gets a little more complicated, though.
When sending a 101 response to the upgrade request, our server did not include a Connection: upgrade header. Adding this header to the response solved the initial issue I was debugging: upgrades to our backup protocol did not work when using Caddy as a reverse proxy.
Surprisingly, this also allowed me to use the HTTP_PROXY and HTTPS_PROXY env vars again, so everything works as intended in that regard - mea culpa. I can now successfully sniff all traffic that's going through my local testing environment - very useful for debugging. I will therefore close this issue. Thanks again for your attention!
However, I will open a separate issue for the Connection header thing, as there's quite a bit more to that story, if you'd like to comment on it.
Sounds good, thank you!