Asynchronous echo server
adah1972 opened this issue
Hi Josh,
I am comparing some coroutine libraries. Do you have an asynchronous server example, like those in https://github.com/netcan/asyncio/blob/master/docs/benchmark.md?
I tried some of your current examples, but they do not seem to do what I want: accepting and processing new requests while another request is still in progress.
Thanks for your work.
Hi Wu, thanks for your inquiry. I took a look at your benchmark page; are you just looking for multiple clients to be requesting at once?
I am looking for a typical asynchronous server example. It seems people all do it via the echo server, which can be easily tested and compared with `ab`. The benchmark page (by the author of asyncio) did just that.
Yeah, I see how you are doing it on your project; it looks like `ab` doesn't care that it's not a real HTTP response?
This does sound like a good addition to the project, might take me some time to get to it.
No, it seems `ab` does not care about the response and can be used as a general stress tool.
There is no "my project", by the way. I am just looking around and trying things out. I am not the author of the benchmark page.
Hi @adah1972 I've opened a PR with a basic TCP echo server. I ran the `ab` tests, but the benchmark is limited by how fast `ab` can issue requests, so I added a basic `HTTP 200 OK` response to the code to support tools like `wrk` or `autocannon` that require a proper HTTP response. You can easily swap `buf` into the `client.send()` call to make it a true TCP echo server.
With `ab` it was getting around 100k qps on my laptop, but with `wrk` I can get it up to 325k qps, so the benchmarking tool `ab` seems to be the real limit here, FYI. Please take a look at the PR if you have a chance.
Hi Josh,
Thank you for the changes. I have played with your code and am very satisfied.
A few comments:
- The current behaviour does not make it a proper echo server. Maybe the default should still be an echo server; otherwise you should rename the example.
- There is no EOL character at the end of the new example. I would not have mentioned it, if not for the fact that all the other examples end with proper EOLs.
- The current code does not handle half-closing, i.e. it does not close the connection when the client indicates that it has finished sending.
It seems `ab` is single-threaded, so it cannot stress the server hard enough when the machine has multiple cores....
I noticed one counterintuitive thing. There are `std::thread::hardware_concurrency() + 1` (also `workers.size() + 1`) threads running, all busy when requests are coming in....
Thanks for the feedback, I've made some additional changes:
- I've made it a true echo server and added another `HTTP 200 OK` server, since `ab` cannot fully stress the echo server due to being single-threaded. And you're right, these are not the same thing, even if the final code for both is pretty close in this project.
- Good catch; added the missing EOL to the file.
- I adjusted some internals on the `task_manager` class to call the destructors for the coroutines a bit more aggressively. I'm probably going to add a new way to have that happen automatically on the `io_scheduler` class if the user requests it via its options. #129
I'm not sure I follow the `std::thread::hardware_concurrency() + 1` question? The echo server(s) only spawn up to `hardware_concurrency()` threads; the `main` fn thread should sit basically idle on the `sync_wait()` call.
It's a bit unhappy with my empty coroutine trick :(
> I'm not sure I follow the `std::thread::hardware_concurrency() + 1` question? The echo server(s) only spawn up to `hardware_concurrency()` threads; the `main` fn thread should sit basically idle on the `sync_wait()` call.
Forget it. I misread the `htop` report. It shows all threads, but the first line actually shows the accumulated values.
BTW, your HTTP server has an extra LF at the beginning of the response. The response should begin immediately after `R"(`. It currently does not work with normal HTTP clients like `curl`.
Oh yes, that's how heredocs work 😅 thanks for pointing that out