ndelvalle / oports

Async library to retrieve open ports for a given IP address

Home Page: https://crates.io/crates/oports

open_ports_by_range() waits for each TCP connection synchronously

nokola opened this issue

The following code from open_ports_by_range .awaits each is_port_open() call in turn instead of executing the is_port_open() calls in parallel. Ideally the .awaiting would be done for multiple futures at a time using something like join_all; see https://users.rust-lang.org/t/joinall-and-async-await/31051/9.

        // each port is probed only after the previous .await completes,
        // so the checks run one at a time
        for port in from..to {
            let is_open = self.is_port_open(port).await;
            if is_open {
                open_ports.push(port)
            }
        }

Note: if you decide to .await all ports in parallel, there might be a problem if there's a limit on the maximum number of simultaneous TCP connections for some reason - I'm not sure whether that would be a real issue on some machines or not an issue at all.
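
For reference, here is a minimal sketch of how the checks could run concurrently while keeping only a bounded number of connection attempts in flight, assuming the futures crate's buffer_unordered and a tokio runtime; the function signature and the limit parameter are illustrative, not the library's actual API:

    use futures::stream::{self, StreamExt};
    use std::net::IpAddr;
    use tokio::net::TcpStream;

    // Illustrative sketch: probe a port range concurrently, keeping at most
    // `limit` connection attempts in flight at any one time.
    pub async fn open_ports_by_range(ip: IpAddr, from: u16, to: u16, limit: usize) -> Vec<u16> {
        stream::iter(from..to)
            .map(move |port| async move {
                let is_open = TcpStream::connect((ip, port)).await.is_ok();
                (port, is_open)
            })
            .buffer_unordered(limit) // run up to `limit` checks at once
            .filter_map(|(port, is_open)| async move { is_open.then_some(port) })
            .collect()
            .await
    }

With a bound like this, the machine-level limit on simultaneous TCP connections mentioned above is much less of a concern than with an unbounded join_all.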

@nokola do you mind taking a look at my PR and letting me know what you think?
I noticed that changing the argument value of the buffer_unordered method does not make things faster. I should maybe let the user configure that value, or pick a sane default.

I don't mind, but unfortunately I'm not qualified enough (I'm still a Rust noob) to say for sure whether this code would make open_ports_by_range execute asynchronously or not.

pub async fn is_port_open(ip: IpAddr, port: u16) -> bool {
    // the port is considered open if the TCP connection attempt succeeds
    TcpStream::connect((ip, port)).await.is_ok()
}

One way to verify is to replace the is_port_open body with dummy wait code and print out the port number, like this:

pub async fn is_port_open(ip: IpAddr, port: u16) -> bool {
    // dummy stand-in for the real connection attempt (assuming a tokio runtime)
    tokio::time::sleep(std::time::Duration::from_secs(2)).await;
    println!("{}", port);
    true
}

Then, run open_ports_by_range(...) for, say, 100 ports and see whether the whole app completes in ~2 seconds or ~200 seconds. If it completes fast, then all is good.
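
If it helps, a tiny timing harness along these lines could drive the experiment; note that the Oports::new constructor name and the 1..100 range are assumptions for illustration, not verbatim from the crate:

    use std::time::Instant;

    #[tokio::main]
    async fn main() {
        let ip: std::net::IpAddr = "127.0.0.1".parse().unwrap();
        // assumed constructor name; adjust to the crate's actual API
        let scanner = oports::Oports::new(ip);
        let start = Instant::now();
        let _open = scanner.open_ports_by_range(1, 100).await;
        // roughly 2 seconds means the dummy checks ran concurrently;
        // roughly 200 seconds means they ran one after another.
        println!("elapsed: {:?}", start.elapsed());
    }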

Please let me know how this goes if/when you try the experiment - interested to see the results. Thanks!