This is an HTTP cache plugin for Caddy 2. I was not very familiar with the mechanisms of CDN caches, so I referenced the code from https://github.com/nicolasazrak/caddy-cache, which is a Caddy v1 cache module. I then migrated that codebase to be consistent with the Caddy 2 architecture and developed more features on top of it.
Currently, distributed caching is not supported yet; it is still a work in progress.
Currently, the following backends are supported:
- file
- inmemory
- redis
In the sections below, I will show example Caddyfiles for serving different types of proxy cache servers.
- URI path matcher
- HTTP header matcher
A default age for matched responses that do not have an explicit expiration.
There are exposed endpoints for purging the cache, implemented via the Caddy admin API. The following shows how to purge the cache.
Given that you specify port 7777 for the Caddy admin endpoint, you can get the list of cache entries with the API below.

```
GET http://example.com:7777/caches
```
The purge endpoint supports regular expressions.
```
DELETE http://example.com:7777/caches/purge
Content-Type: application/json

{
    "method": "GET",
    "host": "localhost",
    "uri": ".*\\.txt"
}
```
NOTE: this is still under development, and only the in-memory backend supports it.
I’ve provided a simple example to mimic a cluster environment.
```shell
PROJECT_PATH=/app docker-compose --project-directory=./ -f example/distributed_cache/docker-compose.yaml up
```
For development, go to the cmd folder and run the following commands.
```shell
go build -ldflags="-w -s" -o caddy
```
To build a Linux binary:

```shell
GOOS=linux GOARCH=amd64 go build -ldflags="-w -s" -o caddy
```
Alternatively, you can use xcaddy to build the executable. Ensure you've installed it with `go get -u github.com/caddyserver/xcaddy/cmd/xcaddy`.
```shell
xcaddy build v2.0.0 --with github.com/sillygod/cdp-cache
```
xcaddy also provides a way to develop a plugin locally:

```shell
xcaddy run --config cmd/Caddyfile
```
To remove unused dependencies:

```shell
go mod tidy
```
You can run the binary directly:

```shell
caddy run --config [caddy file] --adapter caddyfile
```
Or, if you prefer to use Docker:

```shell
docker run -it -p80:80 -p443:443 -v [the path of caddyfile]:/app/Caddyfile docker.pkg.github.com/sillygod/cdp-cache/caddy:latest
```
The following lists the currently supported configs in the Caddyfile.
The header used to report the cache status. Default value: X-Cache-Status.
Only requests whose path matches the condition will be cached. E.g. / means all requests will be cached, because every request path starts with /.
The cache’s expiration time.
Only requests whose headers match the conditions will be cached, e.g. match_header Content-Type image/jpg image/png "text/plain; charset=utf-8".
The location where cache files are saved. Only applied when the cache_type is file.
The key of cache entry. The default value is {http.request.method} {http.request.host}{http.request.uri.path}?{http.request.uri.query}
The number of buckets used when taking the cache key's checksum modulo the bucket count. The default value is 256.
Indicates which kind of cache storage backend to use. Currently, there are two choices: file and in_memory.
The max memory usage for in_memory backend.
Work in progress. Currently, only consul is supported for establishing a cluster of cache server nodes.
To see an example config, please refer to this.
Specifies the service to be registered with the consul agent.
The address of the consul agent.
Indicates the health check endpoint, which the consul agent uses to check whether the cache server is healthy.
You can go to the example directory; it shows the configuration for each type of cache.
Here, I simply compare the performance of the in-memory and disk backends.
Caddy was run with the config file under the benchmark directory, and the tests were run on a MacBook Pro (1.4 GHz Intel Core i5, 16 GB 2133 MHz LPDDR3).
The following benchmark was produced by `wrk -c 50 -d 30s --latency -t 4 http://localhost:9991/pg31674.txt` with logging disabled. Before running it, ensure you provision the test data with `bash benchmark/provision.sh`.
|                         | req/s | latency (50% / 90% / 99%) |
| ----------------------- | ----- | ------------------------- |
| proxy + file cache      | 13853 | 3.29ms / 4.09ms / 5.26ms  |
| proxy + in memory cache | 20622 | 2.20ms / 3.03ms / 4.68ms  |
- [ ] add more tests (in progress). At the same time, write some fragment benchmarks to find out which parts can be optimized: `go test -bench=. -benchmem -cpuprofile profile.out`, then `go tool pprof profile.out` (in interactive mode, you can type `web` to open a web interface to see the graph)
- [ ] distributed cache (in progress)
- [ ] more optimizations