darkweak / souin

An HTTP cache system, RFC compliant, compatible with @tyktechnologies, @traefik, @caddyserver, @go-chi, @bnkamalesh, @beego, @devfeel, @labstack, @gofiber, @go-goyave, @go-kratos, @gin-gonic, @roadrunner-server, @zalando, @zeromicro, @nginx and @apache

Home Page: https://docs.souin.io


[bug] blank page cached on context canceled

frederichoule opened this issue · comments

I have a recurring error where the homepage of the website is blank.

When I check the key value in redis, it looks like this:

HTTP/0.0 200 OK
Date: Fri, 21 Apr 2023 18:44:19 GMT
X-Souin-Stored-Ttl: 24h0m0s
Content-Length: 0

What strikes me is the key name, GET-https-www.website.com-/, whereas all the other keys have {-VARY-} appended to them. If I delete the key and refresh the website, the new key that appears is GET-https-www.website.com-/{-VARY-}Accept-Encoding:gzip, deflate, br, and the page isn't blank anymore.

What would cause that?

For reference, a working key contains this:

HTTP/0.0 200 OK
Cache-Control: public, max-age=86400, must-revalidate
Content-Encoding: gzip
Content-Type: text/html; charset=UTF-8
Date: Fri, 21 Apr 2023 18:58:53 GMT
Server: Caddy
Server: Google Frontend
Vary: Accept-Encoding
X-Cloud-Trace-Context: 63177d3330a11c9b0a44a650162499a1

Meaning it really comes from the backend server.

Unrelated: why the HTTP/0.0?

Hey @frederichoule that's weird. Do you have a reproducible repository/example please?
To explain how the vary handling works under the hood: it lists all keys starting with (in your case) GET-https-www.website.com-/($|\{VARY\}), i.e. keys that either end right there or continue with the vary separator.
If the bare key (the first case) is the first one returned, it has no vary headers to validate, so it matches the client's expectations and its content is returned to the user. Otherwise it loops over the other keys and tries to validate their varied headers.
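For illustration, here is a minimal sketch of that lookup logic. This is not Souin's actual implementation: the {-VARY-} separator is taken from the Redis keys above, and the "Header:value" pair encoding (joined with ";") is an assumption.

// vary_sketch.go - illustrative only, not Souin's code.
package main

import (
	"fmt"
	"net/http"
	"strings"
)

// varySeparator mirrors the marker seen in the stored Redis keys above.
const varySeparator = "{-VARY-}"

// matchVariant reports whether a stored key satisfies the incoming request:
// either it is the bare key (nothing to validate) or every varied header
// encoded in its suffix matches the request's headers.
func matchVariant(storedKey, baseKey string, req *http.Request) bool {
	if storedKey == baseKey {
		// No vary suffix: the response matches any client.
		return true
	}
	suffix, ok := strings.CutPrefix(storedKey, baseKey+varySeparator)
	if !ok {
		return false
	}
	// Assumed encoding: "Header:value" pairs separated by ";".
	for _, pair := range strings.Split(suffix, ";") {
		name, value, found := strings.Cut(pair, ":")
		if !found || req.Header.Get(name) != value {
			return false
		}
	}
	return true
}

func main() {
	req, _ := http.NewRequest(http.MethodGet, "https://www.website.com/", nil)
	req.Header.Set("Accept-Encoding", "gzip, deflate, br")

	base := "GET-https-www.website.com-/"
	for _, key := range []string{
		base,
		base + varySeparator + "Accept-Encoding:gzip, deflate, br",
	} {
		fmt.Printf("%-70s -> %v\n", key, matchVariant(key, base, req))
	}
}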

My current guess is that, on the first load, the backend returned an empty body with an HTTP 200 OK status code and that response got cached.

Btw, I don't know why your server returns an HTTP/0.0 protocol version; I don't deal with it and don't try to override it.
Can you post at least your Caddy configuration?

1- I'll try to reproduce the issue locally and let you know.

2- The output is clearly not from the backend, since there's no Server: Google Frontend or X-Cloud-Trace-Context.

3- Backend returns HTTP/1.1, Caddy returns HTTP/1.1, but the stored key says HTTP/0.0. This is a non-problem; I was just wondering.

Here is the Caddyfile:

{
	order cache before handle
	email xxx@xxx.xxx

	admin off

	on_demand_tls {
		ask http://localhost:5555/__ask
		interval 2m
		burst 5
	}

	cache {
		cache_name Tarmac
		default_cache_control public, max-age=86400, must-revalidate
		allowed_http_verbs GET
		ttl 86400s
		redis {
			url 127.0.0.1:6379
		}
		key {
			hide
		}
	}

}

(cache_regular) {
	cache {
		key {
			disable_body
			disable_query
		}
	}
}

(cache_qs) {
	cache {
		key {
			disable_body
		}
	}
}

(upstream) {

	request_header -Cache-Control

	tls {
		on_demand
	}

	header {
		-X-Cloud-Trace-Context
		Server Tarmac
	}

	@notwww {
		not host localhost
		not header_regexp www Host ^www\.[a-zA-Z0-9-]+\.[a-zA-Z]{2,}$
	}

	redir @notwww https://www.{host}{uri} permanent

	@get {
		method GET
	}

	@hits {
		path /__hits
		query id=*
		method GET
	}

	@redis {
		path /__redis
		method POST
	}

	@query_caliente query caliente=1
	
	@query_search {
		path /search
		query q=*
	}

	@query_after query after=*

	handle @query_caliente {
		import cache_qs
	}

	handle @query_search {
		import cache_qs
	}

	handle @query_after {
		import cache_qs
	}

	handle @hits {
		reverse_proxy http://127.0.0.1:5555
	}

	handle @redis {
		reverse_proxy http://127.0.0.1:5555
	}

	handle @get {
		import cache_regular
	}

	handle {
		abort
	}

	reverse_proxy backend.com {
		header_down -Cache-Control
		header_up -Cache-Control
		header_up Host {upstream_hostport}
		header_up X-Backend-Hostname {host}		
	}

}

localhost {
	tls internal
	import upstream
}

https:// {
	import upstream
}

http:// {
	redir https://{host}{uri} permanent
}

Unable to reproduce the problem locally so far. I am closing this for now.

That could be something like cache poisoning (I wonder if @jenaye could help us with that). It could explain these headers:

Server: Google Frontend
X-Cloud-Trace-Context: 63177d3330a11c9b0a44a650162499a1

It's a brand new website, not really public yet. I think it might have to do with robots scanning the website and generating an incorrect cached page. I'll investigate more thoroughly soon.

I was able to pinpoint when the blank page bug happens.

Whenever I have a blank page in cache, it comes from a request that contains this in the logs:

"error": "context canceled"

Any idea where it's coming from? I'll investigate after lunch, but if you have a hint for me, let me know!

Nice, that means I don't handle client disconnections correctly. It should be quite easy to reproduce if that's the case. Let's reopen this issue then 🙂
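For illustration, a minimal sketch of the kind of guard this scenario calls for (not the actual Souin middleware; the store function and the key format are placeholders): buffer the upstream body, and skip the store when the request context was canceled.

// disconnect_guard_sketch.go - illustrative only, not Souin's code.
package main

import (
	"bytes"
	"net/http"
)

// cacheMiddleware buffers the upstream response and only stores it when the
// request context is still alive once the next handler returns.
func cacheMiddleware(next http.Handler, store func(key string, body []byte)) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		rec := &bufferingWriter{ResponseWriter: w}
		next.ServeHTTP(rec, r)

		if r.Context().Err() != nil {
			// Client canceled or timed out: the round trip was likely aborted
			// and the buffered body may be empty, so don't cache a blank page.
			return
		}
		store(r.Method+"-"+r.Host+"-"+r.URL.Path, rec.buf.Bytes())
	})
}

// bufferingWriter keeps a copy of everything written for the cache while
// still streaming it to the client.
type bufferingWriter struct {
	http.ResponseWriter
	buf bytes.Buffer
}

func (b *bufferingWriter) Write(p []byte) (int, error) {
	b.buf.Write(p)
	return b.ResponseWriter.Write(p)
}

func main() {
	backend := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("Hello after sleep"))
	})
	store := func(key string, body []byte) { /* e.g. write to Redis */ }
	http.ListenAndServe(":8081", cacheMiddleware(backend, store))
}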

Have you been able to reproduce the issue yet @darkweak ? I'll give it a try tomorrow if you didn't have time yet.

Reproducible with the following code:

// runner/requester.go
package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Cancel the request after 10ms, well before the handler's 3s sleep,
	// to simulate a client disconnection.
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Millisecond)
	defer cancel()

	rq, _ := http.NewRequestWithContext(ctx, http.MethodGet, "http://localhost:4443/handler", nil)
	res, err := http.DefaultClient.Do(rq)

	fmt.Printf("-----\nres => %+v\nerr => %+v\n-----\n", res, err)
}
// handlers/handler.go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Sleep longer than the requester's timeout so the client context
		// is canceled before the response can be written.
		fmt.Println("Sleep...")
		time.Sleep(3 * time.Second)
		fmt.Println("Wake up")
		w.Write([]byte("Hello after sleep"))
	})
	http.ListenAndServe(":80", nil)
}

With the following Caddyfile:

{
    order cache before rewrite
    debug
    log {
        level debug
    }
    cache {
      ttl 100s
    }
}

:4443

route /handler {
    cache {
        ttl 10s
    }
    reverse_proxy 127.0.0.1:80
}
./caddy run
go run handlers/handler.go
go run runner/requester.go
curl -I http://localhost:4443/handler
# Should return the right headers without content

To keep you updated, I'll work on that tomorrow.

Amazing, thanks! Haven't had time to take a look yet, sorry about that.

It should work now @frederichoule, using:

xcaddy build --with github.com/darkweak/souin/plugins/caddy@47ea558146d978d6be0ec6a42804f679200d6d70 --with github.com/darkweak/souin@47ea558146d978d6be0ec6a42804f679200d6d70

Amazing! Will try it ASAP.

Use the commit 2cffe5ac9c989e963b3e28c327e6b6c63eab371d instead of the previous one; I pushed a fix for the uncached responses.

Testing right away.

First page load works, second page load throws a panic and the server stops.

2023/05/25 19:42:38.124	ERROR	http.log.error	context canceled	{...}, "duration": 0.628182}
panic: Header called after Handler finished

goroutine 56 [running]:
net/http.(*http2responseWriter).Header(0x40003584c0?)
	net/http/h2_bundle.go:6569 +0x80
github.com/darkweak/souin/pkg/middleware.(*CustomWriter).Header(...)
	github.com/darkweak/souin@v1.6.39-0.20230524064538-2cffe5ac9c98/pkg/middleware/writer.go:46
github.com/darkweak/souin/pkg/middleware.(*SouinBaseHandler).Upstream(0x4000990640, 0x40003584c0, 0x4000656c00, 0x40005948e0, 0x4000287c80?, {0x4000144660, 0x22})
	github.com/darkweak/souin@v1.6.39-0.20230524064538-2cffe5ac9c98/pkg/middleware/middleware.go:274 +0x404
github.com/darkweak/souin/pkg/middleware.(*SouinBaseHandler).ServeHTTP.func2()
	github.com/darkweak/souin@v1.6.39-0.20230524064538-2cffe5ac9c98/pkg/middleware/middleware.go:478 +0x40
created by github.com/darkweak/souin/pkg/middleware.(*SouinBaseHandler).ServeHTTP
	github.com/darkweak/souin@v1.6.39-0.20230524064538-2cffe5ac9c98/pkg/middleware/middleware.go:477 +0x1324
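For context, that panic comes from net/http's HTTP/2 server: the ResponseWriter must not be touched after ServeHTTP has returned, and the trace shows a goroutine spawned from SouinBaseHandler.ServeHTTP calling Header() afterwards. A minimal, standalone sketch of that failure mode (not Souin's code; when served over HTTP/2, e.g. behind TLS, it panics with the same message):

// header_after_finish_sketch.go - illustrative only.
package main

import (
	"net/http"
	"time"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Work that outlives the handler: once ServeHTTP returns, the
		// ResponseWriter must not be used. On an HTTP/2 connection this
		// panics with "Header called after Handler finished".
		go func() {
			time.Sleep(100 * time.Millisecond)
			w.Header().Set("X-Too-Late", "true")
		}()
		w.Write([]byte("ok"))
	})
	http.ListenAndServe(":8080", nil)
}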

I hope this commit fixes your issue: f9f8ab6da7e98e2e7898d10122766865e0b4884c

Seems OK locally. I just pushed to prod for a small segment of our network and will let you know if we still get blank pages. Thanks!

I still have blank pages. I'm unable to debug right now as my internet connection is pretty weak, but I'll give you more information later.

Reopened again 😅

So I do have the same "context canceled" in the logs. I'm not sure I can provide more information than that.

{
  "level": "debug",
  "ts": 1685364382.2595735,
  "logger": "http.handlers.cache",
  "msg": "Incomming request &{Method:GET URL:/requested-uri Proto:HTTP/2.0 ProtoMajor:2 ProtoMinor:0 Header:map[...] Body:0xc0008b0e10 GetBody:<nil> ContentLength:0 TransferEncoding:[] Close:false Host:www.website.com Form:map[] PostForm:map[] MultipartForm:<nil> Trailer:map[] RemoteAddr:X.X.X.X:XXX RequestURI:/url TLS:0xc0000d4fd0 Cancel:<nil> Response:<nil> ctx:0xc0008b1020}"
}
{
  "level": "debug",
  "ts": 1685364382.8981557,
  "logger": "http.handlers.reverse_proxy",
  "msg": "upstream roundtrip",
  "upstream": "upstream.com:80",
  "duration": 0.637657811,
  "request": {
    "remote_ip": "1.1.1.1",
    "remote_port": "111",
    "proto": "HTTP/2.0",
    "method": "GET",
    "host": "upstream.com:80",
    "uri": "/requested-uri",
    "headers": {...},
    "tls": {
      "resumed": false,
      "version": 772,
      "cipher_suite": 4865,
      "proto": "h2",
      "server_name": "www.website.com"
    }
  },
  "error": "context canceled"
}

Please wait before checking anything. I just redeployed everything with the cache disabled in the Docker build; I think I had some cached build steps.

Let me know if something weird happens.

So far everything works perfectly. Let's close this! Thanks @darkweak

Caddy crashed completely twice in the last 2 days. The last line in the log each time is related to context canceled. I don't know how to reproduce the crash locally yet. I'll find a way to make Caddy log the crash output somewhere.

{"level":"error","ts":1685643137.1032906,"logger":"http.log.error","msg":"context canceled","request":{},"tls":{"resumed":false,"version":772,"cipher_suite":4865,"proto":"h2","server_name":"www.website.com"}},"duration":4.829884199}
{"level":"debug","ts":1685643137.1034744,"logger":"http.handlers.reverse_proxy","msg":"upstream roundtrip","upstream":"upstream.com:80","duration":4.251552925,"request":{},"tls":{"resumed":false,"version":772,"cipher_suite":4865,"proto":"h2","server_name":"www.website.com"}},"error":"context canceled"}

Do you have some debug logs please? These aren't very informative. I will write a patch to add more debug/info logs during the request process in the next few days.

This is all I got from Caddy. I'll read the docs to see what I can do to get more.

When you're running Caddy, you can set the debug global directive in your Caddyfile; Souin will then run in debug mode and give you the debug logs.

It is already on debug; the logs in the last comments are what I get from debug mode, yet I can't see what makes Caddy crash. Here I was able to see it because it happened locally: #337 (comment)

What are the previous logs before the crash?

Only regular request logs - but I had to reboot the server and didn't take a copy of the logs beforehand. I will do it at the next crash (so far it looks like it happens about once per day... my TTL is 24 hours... maybe related to stale?).

Crashed again, almost exactly 24h after the last reboot (which is my TTL) - again, nothing in the log, but Caddy is not accepting requests anymore. I can't see what makes it crash in the logs.

Will lower the TTL and try to debug that.

Got it.

fatal error: concurrent map writes

goroutine 234338 [running]:
github.com/caddyserver/caddy/v2.(*Replacer).Set(...)
	github.com/caddyserver/caddy/v2@v2.6.4/replacer.go:67
github.com/caddyserver/caddy/v2/modules/caddyhttp/reverseproxy.(*Handler).ServeHTTP.func1()
	github.com/caddyserver/caddy/v2@v2.6.4/modules/caddyhttp/reverseproxy/reverseproxy.go:490 +0x70
github.com/caddyserver/caddy/v2/modules/caddyhttp/reverseproxy.(*Handler).ServeHTTP(0xc000898380, {0x24fd6a0, 0xc00002c980}, 0xc000d47400, {0x24f4de0, 0x2254038})
	github.com/caddyserver/caddy/v2@v2.6.4/modules/caddyhttp/reverseproxy/reverseproxy.go:512 +0x45b
github.com/caddyserver/caddy/v2/modules/caddyhttp.wrapMiddleware.func1.1({0x24fd6a0?, 0xc00002c980?}, 0x24f4de0?)
	github.com/caddyserver/caddy/v2@v2.6.4/modules/caddyhttp/routes.go:290 +0x42
github.com/caddyserver/caddy/v2/modules/caddyhttp.HandlerFunc.ServeHTTP(0x24f4de0?, {0x24fd6a0?, 0xc00002c980?}, 0x40dea8?)
	github.com/caddyserver/caddy/v2@v2.6.4/modules/caddyhttp/caddyhttp.go:58 +0x2f
github.com/caddyserver/caddy/v2/modules/caddyhttp.wrapRoute.func1.1({0x24fd6a0, 0xc00002c980}, 0xc000d47400)
	github.com/caddyserver/caddy/v2@v2.6.4/modules/caddyhttp/routes.go:259 +0x3a8
github.com/caddyserver/caddy/v2/modules/caddyhttp.HandlerFunc.ServeHTTP(0x1d16de0?, {0x24fd6a0?, 0xc00002c980?}, 0x7?)
	github.com/caddyserver/caddy/v2@v2.6.4/modules/caddyhttp/caddyhttp.go:58 +0x2f
github.com/caddyserver/caddy/v2/modules/caddyhttp.wrapRoute.func1.1({0x24fd6a0, 0xc00002c980}, 0xc000d47400)
	github.com/caddyserver/caddy/v2@v2.6.4/modules/caddyhttp/routes.go:238 +0x219
github.com/caddyserver/caddy/v2/modules/caddyhttp.HandlerFunc.ServeHTTP(0x40be65?, {0x24fd6a0?, 0xc00002c980?}, 0xc000a8ef00?)
	github.com/caddyserver/caddy/v2@v2.6.4/modules/caddyhttp/caddyhttp.go:58 +0x2f
github.com/darkweak/souin/plugins/caddy.(*SouinCaddyMiddleware).ServeHTTP.func1({0x24fd6a0?, 0xc00002c980?}, 0x203dbc4?)
	github.com/darkweak/souin/plugins/caddy@v0.0.0-20230525203934-f9f8ab6da7e9/httpcache.go:83 +0x39
github.com/darkweak/souin/pkg/middleware.(*SouinBaseHandler).Upstream(0xc00013b400, 0xc00002c980, 0xc000d47b00, 0xc00021c820, 0x443145?, {0xc0010c6330, 0x2a})
	github.com/darkweak/souin@v1.6.39-0.20230525203934-f9f8ab6da7e9/pkg/middleware/middleware.go:258 +0x1a6
github.com/darkweak/souin/pkg/middleware.(*SouinBaseHandler).ServeHTTP.func3()
	github.com/darkweak/souin@v1.6.39-0.20230525203934-f9f8ab6da7e9/pkg/middleware/middleware.go:484 +0x3e
created by github.com/darkweak/souin/pkg/middleware.(*SouinBaseHandler).ServeHTTP
	github.com/darkweak/souin@v1.6.39-0.20230525203934-f9f8ab6da7e9/pkg/middleware/middleware.go:483 +0x19b3
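That fatal error is the Go runtime aborting because two goroutines wrote to the same map without synchronization; here the trace points at Caddy's Replacer variable map being written from the goroutine Souin spawns for the upstream call while the original request goroutine may still be using it. A minimal sketch of the failure mode (not Caddy or Souin code):

// concurrent_map_sketch.go - illustrative only.
package main

import "sync"

func main() {
	vars := map[string]any{} // stands in for a per-request map shared by mistake

	var wg sync.WaitGroup
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 1000; j++ {
				vars["key"] = j // unsynchronized writes from two goroutines
			}
		}()
	}
	wg.Wait()
}

Running this almost always aborts with "fatal error: concurrent map writes". Guarding the writes with a sync.Mutex, or making sure only one goroutine ever touches a given request's map, avoids it.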

Nice, now we know where; I have to check why and how to reproduce it.

Hey, sorry for the delay. I tried many times with the in-memory storage but it doesn't occur there. I wonder if it could come from the reconnection logic in the Redis/etcd/Olric storages.

I haven't had time to dig deeper into that so far. We use Redis as the storage, so maybe we can investigate on that side.