YukunJ / Turtle

A C++17-based lightweight high-performance network library

Home Page: https://github.com/YukunJ/Turtle

Enhancement: allow socket send to be "asynchronous" if data size too large

YukunJ opened this issue · comments

The following is the code snippet for Connection::Send(), which writes out all the data stored in the Buffer; it stays in this loop until all the data has been sent. In this sense, the code is correct.

void Connection::Send() {
  // robust write
  ssize_t curr_write = 0;
  ssize_t write;
  const ssize_t to_write = GetWriteBufferSize();
  const unsigned char *buf = write_buffer_->Data();
  while (curr_write < to_write) {
    if ((write = send(GetFd(), buf + curr_write, to_write - curr_write, 0)) <=
        0) {
      if (errno != EINTR && errno != EAGAIN && errno != EWOULDBLOCK) {
        perror("Error in Connection::Send()");
        ClearWriteBuffer();
        return;
      }
      write = 0;
    }
    curr_write += write;
  }
  ClearWriteBuffer();
}

However, one thing can be optimized: when the data to send is large, the underlying socket send buffer allocated by the operating system may fill up, causing send() to fail with EAGAIN/EWOULDBLOCK. In the current version, the code simply sits and spins in the loop until the buffer drains enough to keep sending.

A better approach would be to register with the Poller that we are interested in the "writable" event on this TCP connection, so that the worker thread can go work on other clients' callbacks. When the send buffer has drained enough, the OS will notify the Poller on the next round, and the connection can continue sending the leftover bytes.

Most likely, this approach would improve the server's overall performance, at the small cost that a connection with an overloaded outbound buffer might see a short delay in receiving its full response, since it may need to wait for the next round of polling.

I plan to implement this feature in the near future.

Yukun
Feb 06, 2023