rustasync / runtime

Empowering everyone to build asynchronous software

Home Page: https://docs.rs/runtime

Is it possible to read and write from one socket simultaneously?

oxalica opened this issue

I found that both the read and write functions of TcpStream and UdpSocket take &mut self, which makes this impossible.

Would it be possible to achieve this by providing a method that "splits" a socket into its read and write parts?
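For example, something like this is rejected by the borrow checker (addr and peer are placeholders):

let mut socket = UdpSocket::bind(&addr)?;
let mut buf = vec![0u8; 1024];

// Both futures need `&mut socket`, so they cannot exist at the same time:
let recv = socket.recv_from(&mut buf);      // first mutable borrow
let send = socket.send_to(b"ping", &peer);  // error: second mutable borrow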

TcpStream implements AsyncRead and AsyncWrite. You can split it with AsyncReadExt::split() into a reader half and a writer half.
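For example (a minimal sketch, assuming the futures-preview AsyncReadExt; the address is made up):

use futures::io::AsyncReadExt;
use runtime::net::TcpStream;

let stream = TcpStream::connect("127.0.0.1:8080").await?;
// ReadHalf implements AsyncRead and WriteHalf implements AsyncWrite;
// each half can be moved into its own task.
let (reader, writer) = stream.split();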

UdpSocket does not implement those traits though, which would seem useful :)

> UdpSocket does not implement those traits though, which would seem useful :)

Or a way to split it into some other kind of send/recv parts that have the send_to(), recv_from() functions on them.

> Or a way to split it into some other kind of send/recv parts that have the send_to(), recv_from() functions on them.

That sounds like something fun to experiment with! -- I'm not entirely positive it's possible to send data while waiting for data to come in on the same socket (cross-platform, of course). But if it is possible, that sounds like it'd be great! I'm feeling quite optimistic about the possibilities!

> That sounds like something fun to experiment with!

It's something I'll probably look at at some point once I can use runtime in my applications :)

> I'm not entirely positive it's possible to send data while waiting for data to come in on the same socket (cross-platform, of course). But if it is possible, that sounds like it'd be great! I'm feeling quite optimistic about the possibilities!

That's the same thing that is already possible for TcpStream, really. There is currently no API for it only because UdpSocket does not implement AsyncRead / AsyncWrite and instead has a slightly different API. It should be relatively easy to implement: just follow what already exists for TcpStream and adapt it to the UDP-style functions.

The trick here, by the way, is the BiLock (or, more generally, the futures-aware Mutex in the futures-util crate). See also rust-lang/futures-rs#1679.
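Roughly, the idea (a sketch, assuming BiLock from futures::lock):

use futures::lock::BiLock;

// BiLock splits ownership of one value between exactly two handles,
// each of which can poll_lock() it independently. AsyncReadExt::split()
// is built on exactly this.
let (recv_handle, send_handle) = BiLock::new(socket);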

Well, I do NOT think BiLock, which is used by AsyncReadExt::split(), can be the final solution to this.

Both TCP and UDP are full-duplex, with separate read and write buffers. So we should be able to read and write in parallel (e.g. from two different threads, without locks). But the current API makes that impossible.

> Both TCP and UDP are full-duplex, with separate read and write buffers. So we should be able to read and write in parallel (e.g. from two different threads, without locks). But the current API makes that impossible.

While that's possible if you're careful (we do exactly that in some applications just fine, in C though), it's not easy to get right because there can be shared state between the receiving and sending parts: read/write readiness, for example. And on some platforms, Windows for example, you might not be able to wait for read and write readiness in two different threads at the same time without race conditions. I can't remember the details, but on Windows we had to fight some annoying bugs related to that. Depending on how mio handles this on Windows (or in general), some special care might be needed there.

Now, for the BiLock, I don't think the situation is as bad as you imagine. All operations are non-blocking, so the lock is only taken for a very short time when reading or writing, i.e. for the duration of the syscall, which either returns immediately with EWOULDBLOCK or copies the buffer between userspace and kernelspace and then returns immediately.

I'm sure that can be optimized more with special care, at least for some platforms and socket types, but as a starting point and generic solution (for any kind of AsyncRead/Write) that seems like a good compromise.

EDIT: I.e., once you notice that this is your bottleneck, go optimize. But I'm sure there are enough other things to worry about before this becomes your bottleneck, and once you get there, optimizing should be possible without interfering with the other parts of your application. It would be a completely local change.

Okay, you're right.
My mistake: I used to simply use Mutex<UdpSocket> with lock().await and recv().await, which blocks all sending attempts because the lock is never released until data arrives.

Now I realize that locking only during poll, or using try_lock inside poll, avoids the blocking problem and costs little; that is how {Read,Write}Half works today.
But we still need to write poll by hand. Annoying, yeah?

So just wrapping a BiLocked socket in something like SendToHalf/RecvFromHalf seems helpful enough.
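Something like this, perhaps (just a sketch; poll_recv_from on the inner UdpSocket is hypothetical, since there is no public poll-level API, and SendToHalf would be analogous):

use std::io;
use std::net::SocketAddr;
use std::task::{Context, Poll};
use futures::lock::BiLock;
use futures::ready;
use runtime::net::UdpSocket;

pub struct RecvFromHalf(BiLock<UdpSocket>);
pub struct SendToHalf(BiLock<UdpSocket>);

pub fn split(socket: UdpSocket) -> (RecvFromHalf, SendToHalf) {
    let (a, b) = BiLock::new(socket);
    (RecvFromHalf(a), SendToHalf(b))
}

impl RecvFromHalf {
    pub fn poll_recv_from(
        &self,
        cx: &mut Context<'_>,
        buf: &mut [u8],
    ) -> Poll<io::Result<(usize, SocketAddr)>> {
        // Lock only for this single poll: if the socket is not readable
        // yet, we return Pending and the guard is dropped immediately,
        // so the sending half is never blocked while we wait for data.
        let mut socket = ready!(self.0.poll_lock(cx));
        socket.poll_recv_from(cx, buf)
    }
}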

Besides, I'm also curious how to deal with the multicast-related and socket-option functions.

> Okay, you're right.
> My mistake: I used to simply use Mutex<UdpSocket> with lock().await and recv().await, which blocks all sending attempts because the lock is never released until data arrives.
>
> Now I realize that locking only during poll, or using try_lock inside poll, avoids the blocking problem and costs little; that is how {Read,Write}Half works today.
> But we still need to write poll by hand. Annoying, yeah?

You need to use the futures-enabled Mutex: https://docs.rs/futures-preview/0.3.0-alpha.16/futures/lock/struct.Mutex.html

That will never block the thread and instead integrates with the futures executor.
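For example (a trivial sketch):

use futures::lock::Mutex;

let counter = Mutex::new(0u32);

async {
    // lock() returns a future: if the Mutex is already held, the task
    // is suspended and the executor runs other tasks in the meantime,
    // instead of blocking the OS thread.
    let mut guard = counter.lock().await;
    *guard += 1;
};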

> Besides, I'm also curious how to deal with the multicast-related and socket-option functions.

Can you create another issue about the operations/functions/etc that you're missing currently?

My understanding was that @uHOOCCOOHu was using the futures-enabled Mutex, but doing something like:

use std::sync::Arc;
use futures::lock::Mutex;
use runtime::net::UdpSocket;
use runtime::spawn;

let socket = UdpSocket::bind(&addr)?;
let socket = Arc::new(Mutex::new(socket));

spawn({
    let socket = socket.clone();
    async move {
        // The guard is held across the whole recv_from await, so the
        // lock is not released until a datagram actually arrives.
        let mut guard = socket.lock().await;
        let mut buf = vec![0; 1024];
        guard.recv_from(&mut buf).await
    }
});

spawn({
    let socket = socket.clone();
    async move {
        // This task cannot even begin sending until the receiver above
        // releases the lock.
        let mut guard = socket.lock().await;
        guard.send_to(b"hello world", &target).await
    }
});

This locks the socket mutex for the entire send/receive operation, even if it is not ready to proceed. Switching to something that uses internal locking in the poll_send_to and poll_recv_from methods would avoid this.
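For comparison, with internal locking the usage could look something like this (UdpSocket::split() and the half types are hypothetical here):

let socket = UdpSocket::bind(&addr)?;
let (mut recv_half, mut send_half) = socket.split();

spawn(async move {
    let mut buf = vec![0; 1024];
    // recv_from locks internally, once per poll, so waiting for a
    // datagram never blocks the sending task below.
    recv_half.recv_from(&mut buf).await
});

spawn(async move {
    send_half.send_to(b"hello world", &target).await
});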

I don't think BiLock is entirely appropriate here though: unlike TCP, it seems possible to have more than one reader and more than one writer attempting to use the same socket. Because each read/write operation is a single atomic message, you don't get issues with interleaved reads/writes.

I'm not certain of the likely situations, but maybe it would be appropriate to support both a BiLock-based and a more general Arc<Mutex<_>>-based solution: if the user only needs a single reader + writer, the BiLock would be more performant, and if they need something more flexible, Arc<Mutex<_>> allows a fully shared bidirectional socket.