litespeedtech / lsws-docker-env

LiteSpeed Enterprise Docker Environment

Performance tuning of LSWS on Docker.

PiotrCzapla opened this issue · comments

Hi, it seems that the default /dev/shm size is 64 MB and this directory is used by the default configuration. I guess it will reduce the usefulness of the cache and slow down the system.
This is easy to fix by setting shm_size in the compose file, or potentially by mounting the host's /dev/shm into the container's /dev/shm (I haven't seen docs about that yet, though).
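Something like this in docker-compose.yml should do it; the service name, image tag and the 1 GB value are just examples rather than the repo's actual compose file:

# docker-compose.yml (excerpt): raise the container's /dev/shm limit
# service name, image and size are illustrative; adjust to the actual setup
services:
  litespeed:
    image: litespeedtech/litespeed:latest
    shm_size: '1gb'   # default is 64 MB; /dev/shm inside the container becomes a tmpfs of this size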

I'm writing to start a conversation about what else might be missing in the Docker containers that could negatively affect performance in a production setting. I'm moving a website with ~100k monthly users to the dockerized LSWS.
I'm running on a bare-metal Ryzen 3700X with 64 GB of RAM and an NVMe SSD. I'm happy to run some load tests.

The reason I'd like to use the dockerized setup is its security and the ability to migrate to larger hardware when needed.

Would you like to help?

commented

Hi,

Thanks for the feedback. I am not quite sure about the reduced cache part since the lscache folder is under each virtual host, so it shouldn't affect /dev/shm, unless you move the cache folder there?

It would be great if you could share some performance results.

Yes, we are here to help.

It seems that LSWS uses /dev/shm by default. At least that is my impression; I'm not sure why, if not for the cache.
But maybe it doesn't use it for anything important. After starting the daemon, /dev/shm has the following structure:

# du -sh /dev/shm/lsws/*
8.0K	/dev/shm/lsws/SSL.lock
8.0K	/dev/shm/lsws/SSL.shm
8.0K	/dev/shm/lsws/adns_cache.lock
16K	/dev/shm/lsws/adns_cache.shm
0	/dev/shm/lsws/ocspcache
8.0K	/dev/shm/lsws/stats.lock
8.0K	/dev/shm/lsws/stats.shm
8.0K	/dev/shm/lsws/stats_clients.lock
48K	/dev/shm/lsws/stats_clients.shm
4.0K	/dev/shm/lsws/status

I'll see how it changes once I run the load test.

By the way, you may want to add a section about setting the proper MTU. If the host MTU is different from 1500 (i.e. lower), packets get dropped on the default bridge network, which lowers TCP performance and makes UDP unreliable.
See https://medium.com/@sylwit/how-we-spent-a-full-day-figuring-out-a-mtu-issue-with-docker-4d81fdfe2caf for more details.
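In case it is useful for such a section, here is a rough sketch of one way to pin the MTU of a compose-managed network; the network name and the 1460 value are only examples and have to match the actual uplink:

# docker-compose.yml (excerpt): clamp the compose-created bridge network to the uplink MTU
# network name and value are examples; use the MTU of the host's outgoing interface
networks:
  default:
    driver: bridge
    driver_opts:
      com.docker.network.driver.mtu: "1460"

As far as I know this only applies to networks created by compose; the default docker0 bridge is configured through the daemon's own mtu setting instead.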

commented

By the way, you may want to add a section about setting the proper MTU. If the host MTU is different from 1500 (i.e. lower), packets get dropped on the default bridge network, which lowers TCP performance and makes UDP unreliable.
See https://medium.com/@sylwit/how-we-spent-a-full-day-figuring-out-a-mtu-issue-with-docker-4d81fdfe2caf for more details.

That is interesting, I will keep this in mind if it happens.

How about adding a note to the docs for people trying to run this setup in production?

I've had a quick look and it seems that the MTU is lower than 1500 for TCP tunnels and on Google Cloud VPC (it is set to 1460 or lower, see https://cloud.google.com/compute/docs/troubleshooting/general-tips). And that is the case for most tunnels.

I was a bit too optimistic regarding TCP behaviour: connections may just time out if the Docker bridge works on layer 3 and is not aware of the TCP/IP traffic running over it. The TCP stack only adjusts its packet size when an "ICMP fragmentation needed" packet is received, and to send such a packet the hardware/virtual hardware needs to be TCP-aware (working on layer 4).