algesten / ureq

A simple, safe HTTP client

Might `Agent` be caching network info in a way that survives the destructor?

lermana opened this issue · comments

The following confusion may stem from my newness to Rust, but I thought this was worth reporting in the event there is something actually happening here.

I have a function like this, that is being used to make a request to an external service:

use cookie_store::{ Cookie, CookieStore };
use std::time::Duration;
use ureq::{ Agent, AgentBuilder };


fn request_service(
    cookie_store: CookieStore,
    service_url: &str
) -> Result<ureq::Response, ureq::Error> {
    let agent: Agent = AgentBuilder::new()
        .cookie_store(cookie_store)
        .https_only(true)
        .timeout_read(Duration::from_secs(1))
        .timeout_write(Duration::from_secs(1))
        .build();

    // service_url is already a &str, so no extra borrow is needed.
    let res = agent.get(service_url).call();

    if let Err(ref err) = res {
        log::warn!("Could not make request: {err}");
    }

    res
}

which is being invoked like so:

use lazy_static::lazy_static;
use std::{ fmt, iter };
use std::pin::Pin;
use std::str::FromStr;
use url::Url;


lazy_static! {
    static ref URL: String = "https://place.io/some/path".to_string();
}

fn validate_cookie(
    cookie_value: &str,
) -> Result<String, CustomCookieError> {

    let url = Url::parse(&URL).expect("Could not parse URL");
    let cookie = Cookie::parse(cookie_value, &url).unwrap();

    let cookie_store = CookieStore::from_cookies(
        iter::once(Ok::<cookie_store::Cookie<'_>, Error>(cookie.into_owned())),
        false,
    ).expect("Could not construct cookie store");

    let res = request_service(cookie_store, &URL)?;

We have noticed something peculiar happening under a specific set of conditions:

  • we run both the requesting service (where the code above lives) and the server it calls in k8s
  • for various reasons, we make this request through public DNS (i.e. it leaves the cluster and passes back through ingress)
  • on this k8s cluster, we have a cluster-wide SSL certificate set up (so it does not change with service lifetimes)
  • on this k8s cluster, we run Nginx for ingress, and we do so on AWS "spot" infrastructure
  • this means the underlying resources are sometimes reclaimed, and when such a reclamation happens, Nginx is rebooted

As soon as a reclamation happens, we start seeing this error in the logs for the service that is making the above request:

Sending fatal alert BadRecordMac

Specifically, we see that error across all of the responses from the above function; when this happens, our only remedy so far has been to reboot the service that's making the request.

It should be noted that the above function only makes requests to one service. We have verified that we can successfully reach that same service from another service in the same cluster (which has a similar setup involving a request over public DNS), even while we're seeing the above error in our Rust service's logs.

With all that background in place, I had (maybe naively) figured that since I am defining the Agent inside a function, it would be re-created and then completely destroyed with each call. But the above error makes me wonder whether something is persisting across function calls, since we seem to be hitting something in the realm of a record-parsing issue that coincides exactly with a network address change for our receiving service (while the URL we are requesting does not change).

Is it possible that having a statically compiled URL triggers some caching in the Agent? Maybe it's the CookieStore? Or maybe, given the way the Agent is set up, it is supposed to cache some network address information, even across scopes and destructor invocations? I am a bit new to Rust, so there may be some general concepts I'm missing.

This is all a bit speculative, but it's a peculiar situation that has thrown a couple wrenches for us, so I figured it was time to reach out. Thanks for your time and hope the above makes sense.

Hi! Welcome to ureq!

What SSL backend are you using here? Which features are turned on for ureq in Cargo.toml?

Thank you! And thanks for all the hard work that's gone into this library -- it's great to have such a standard for HTTP requests.

I believe we use OpenSSL:

$ dpkg -l | grep -i openssl
ii  openssl                   1.1.1n-0+deb11u3             amd64        Secure Sockets Layer toolkit - cryptographic utility

As for ureq feature spec:

ureq = { version = "2.6.2", features = ["cookies", "json"] }

Also, Rust version:

$ rustc --version
rustc 1.65.0 (897e37553 2022-11-02)

FYI: I tracked this down to what I believe was memory-related connection mismanagement in a Go service we make some of these requests to (closing).

Great, glad you solved it! BTW, given the features you showed from Cargo.toml, you're using the rustls TLS backend, which is the default for ureq.
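For reference, here is roughly how that backend choice is expressed in Cargo.toml. This is a sketch: the commented native-tls line is an illustrative alternative (a ureq 2.x feature flag), not something this project needs.

```toml
# rustls comes in via ureq's default "tls" feature:
ureq = { version = "2.6.2", features = ["cookies", "json"] }

# Illustrative alternative: enable the "native-tls" feature to use the
# platform TLS stack (which would link against system OpenSSL on Debian):
# ureq = { version = "2.6.2", features = ["cookies", "json", "native-tls"] }
```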

And you're correct that the agent you're defining is dropped at the end of each function call, and that in turn drops all the connections it held.

@jsha thanks, and yes, that all makes sense (sorry about the SSL/TLS miscommunication - I opened this ticket quickly and "under the gun").