Blocking resources: `ResponseOptions` are lost / nested blocking resources don't block but stream.
zakstucke opened this issue
Describe the bug
Nested blocking resources inside suspense don't end up blocking. This becomes obvious when setting values on `ResponseOptions` in the nested resource: those values don't make it to the client, I believe because they end up streaming in after the response has already started, instead of blocking it correctly.

I've used `OnceResource`s in the repro, but this applies to normal `Resource::new_blocking` too.
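For reference, the plain-resource form of the same outer resource would be something like this sketch (the unit source is just for illustration):

```rust
// Same outer resource with Resource::new_blocking instead of
// OnceResource::new_blocking; the unit source/key is purely illustrative.
let outer_r = Resource::new_blocking(
    || (),
    |_| async move {
        #[cfg(feature = "ssr")]
        tokio::time::sleep(std::time::Duration::from_millis(100)).await;
        Ok::<_, ()>(())
    },
);
```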
Leptos Dependencies
Rc2
To Reproduce
I've got a repro working from the `server_fns_axum` example. The specific repro is in the code below, but you can just check out my fork and run that example with `cargo leptos watch`: https://github.com/zakstucke/leptos/tree/suspense-bug

- Click the "Check cookie" button: you'll see no cookie set (broken).
- Replace the `<App />` body with `view! { <InnerComponent /> }`, removing the outer suspense/blocking resource, retry, and you'll see the cookie is now correctly set.
Broken with nested suspense/blocking resource:
Screen.Recording.2024-11-23.at.13.36.49.mov
Working after removing outer suspense/blocking resource:
Screen.Recording.2024-11-23.at.13.38.32.mov
```rust
#[component]
pub fn App() -> impl IntoView {
    let outer_r = OnceResource::new_blocking(async move {
        #[cfg(feature = "ssr")]
        tokio::time::sleep(std::time::Duration::from_millis(100)).await;
        Ok::<_, ()>(())
    });
    let inner_view = move || {
        Suspend::new(async move {
            let _ = outer_r.await;
            view! { <InnerComponent /> }
        })
    };
    view! {
        <main>
            <Suspense>{inner_view}</Suspense>
        </main>
    }
    // FOR WORKING CASE, REPLACE THIS APP BODY WITH: `view! { <InnerComponent /> }`
}
```
```rust
#[component]
pub fn InnerComponent() -> impl IntoView {
    let inner_r = OnceResource::new_blocking(async move {
        let cookie_name =
            uuid::Uuid::new_v4().to_string().replace("-", "")[0..5].to_string();
        let cookie_value =
            uuid::Uuid::new_v4().to_string().replace("-", "")[0..5].to_string();
        #[cfg(feature = "ssr")]
        tokio::time::sleep(std::time::Duration::from_millis(100)).await;
        let _ = s_set_cookie(cookie_name.clone(), cookie_value.clone()).await;
        (cookie_name, cookie_value)
    });
    let getter = Action::new(move |(name,): &(String,)| {
        let name = name.clone();
        async move {
            s_get_cookie(name)
                .await
                .unwrap()
                .unwrap_or_else(|| "NO COOKIE SET".to_string())
        }
    });
    view! {
        <Suspense>
            {move || {
                Suspend::new({
                    async move {
                        let (name, value) = inner_r.await;
                        let name = StoredValue::new(name);
                        let value = StoredValue::new(value);
                        view! {
                            <div>
                                <button
                                    type="button"
                                    on:click=move |_e| {
                                        getter.dispatch((name.get_value(),));
                                    }
                                >
                                    {"Check cookie"}
                                </button>
                                <p>
                                    {move || {
                                        view! {
                                            {format!(
                                                "Expecting cookie of {}={}",
                                                name.get_value(),
                                                value.get_value(),
                                            )}
                                            <br />
                                            {format!(
                                                "cookie: {}",
                                                getter
                                                    .value()
                                                    .get()
                                                    .unwrap_or_else(|| {
                                                        "click 'Check cookie' to see".to_string()
                                                    }),
                                            )}
                                        }
                                    }}
                                </p>
                            </div>
                        }
                    }
                })
            }}
        </Suspense>
    }
}
```
```rust
#[server]
pub async fn s_set_cookie(
    name: String,
    value: String,
) -> Result<(), ServerFnError> {
    if let Some(opts) = use_context::<leptos_axum::ResponseOptions>() {
        opts.insert_header(
            http::header::SET_COOKIE,
            http::header::HeaderValue::from_str(&format!("{}={}", name, value))
                .unwrap(),
        )
    } else {
        println!("No response options found");
    }
    Ok(())
}
```
```rust
#[server]
pub async fn s_get_cookie(
    name: String,
) -> Result<Option<String>, ServerFnError> {
    if let Some(parts) = leptos::prelude::use_context::<http::request::Parts>() {
        let cookies = axum_extra::extract::cookie::CookieJar::from_headers(
            &parts.headers,
        );
        println!("COOKIES: {:#?}", cookies);
        let found = cookies.get(&name).map(|cookie| cookie.value().to_string());
        println!("For name: {}, FOUND: {:#?}", name, found);
        Ok(found)
    } else {
        println!("No request parts found");
        Ok(None)
    }
}
```
Potentially cf. #3048.
Just so I'm sure I understand what you're reporting correctly, now that I've finished my cup of coffee 😄 The idea here is that a new blocking resource that's created inside the children of `Suspense` doesn't hold the first chunk of the stream (including headers), right?

What's clear to me: This data-loading pattern is not optimal and will lead to a slower response from the website and therefore a worse user experience (and worse SEO, etc.). It introduces a waterfall by not beginning to load the inner resource until after the outer Suspense is ready. Rather, resources should be hoisted up to the highest level possible. If the inner resource depends on the outer resource, it can still be hoisted up to that higher level and `.await` the value of the outer resource instead.

i.e., in this case, simply hoisting the definition of `inner_r` up into the parent function and providing it to `InnerComponent` as a prop fixes the issue.
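Roughly, the hoisted version could look like the sketch below, based on the repro code above (the prop wiring is illustrative and untested):

```rust
#[component]
pub fn App() -> impl IntoView {
    let outer_r = OnceResource::new_blocking(async move {
        #[cfg(feature = "ssr")]
        tokio::time::sleep(std::time::Duration::from_millis(100)).await;
        Ok::<_, ()>(())
    });
    // Hoisted: the "inner" resource is created at the top level, where it
    // registers as blocking before the response begins, and it awaits the
    // outer resource itself instead of waiting on the outer <Suspense/>.
    let inner_r = OnceResource::new_blocking(async move {
        let _ = outer_r.await;
        let cookie_name =
            uuid::Uuid::new_v4().to_string().replace("-", "")[0..5].to_string();
        let cookie_value =
            uuid::Uuid::new_v4().to_string().replace("-", "")[0..5].to_string();
        let _ = s_set_cookie(cookie_name.clone(), cookie_value.clone()).await;
        (cookie_name, cookie_value)
    });
    view! {
        <main>
            <Suspense>
                {move || Suspend::new(async move {
                    let _ = outer_r.await;
                    // Passed down as a prop instead of being created inside
                    // InnerComponent.
                    view! { <InnerComponent inner_r=inner_r /> }
                })}
            </Suspense>
        </main>
    }
}

#[component]
pub fn InnerComponent(inner_r: OnceResource<(String, String)>) -> impl IntoView {
    // ...same Suspense/Suspend body as the original InnerComponent, just
    // reading the `inner_r` prop instead of creating a new blocking resource.
    view! {
        <Suspense>
            {move || Suspend::new(async move {
                let (name, value) = inner_r.await;
                view! { <p>{format!("Expecting cookie of {}={}", name, value)}</p> }
            })}
        </Suspense>
    }
}
```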
I love a good MRE, but sometimes they can make it hard to understand the broader context. Can you say a little more about the real use case that led to this pattern?
What's not clear to me: I have a PR with a solution in #3282, which I've tested against the above example and it seems to fix it. I haven't tested it with more-deeply-nested sets of suspenses and blocking resources, and I haven't tested whether it has any ill effects elsewhere.
#3282 seems to solve the repro and the 2-layer case in general, but not the general n-layer case; simply adding a second intermediary causes it to break again:
```rust
#[component]
pub fn MiddleComponent() -> impl IntoView {
    let outer_r = OnceResource::new_blocking(async move {
        #[cfg(feature = "ssr")]
        tokio::time::sleep(std::time::Duration::from_millis(100)).await;
        Ok::<_, ()>(())
    });
    let inner_view = move || {
        Suspend::new(async move {
            let _ = outer_r.await;
            view! { <InnerComponent /> }
        })
    };
    view! {
        <main>
            <Suspense>{inner_view}</Suspense>
        </main>
    }
}
```
> The idea here is that a new blocking resource that's created inside the children of Suspense doesn't hold the first chunk of the stream (including headers), right?
Correct!
> This data-loading pattern is not optimal and will lead to a slower response from the website
Definitely agree this is mostly non-optimal and bad for user experience, but there are a lot of cases where you simply want portions of the app to block the response until all the information needed to produce a meaningful UI is ready and can be passed to the rest of the app, or where nodes of the app need to add specific `ResponseOptions`. You could hoist it all out, assuming only resources need to depend on each other and you don't need anything directly in components, but in the case where you need them to waterfall anyway, this provides no speed benefit and just spreads out the logic.
For the `ResponseOptions` case this is a really serious bug, because e.g. cookies get completely lost: the resource isn't called from the frontend (where they'd still get set) but is called on the backend and streamed, where it all seems to get lost.
In my case my whole app is inside a blocking resource, to add some global state via `provide_context()`, as we discussed as the best pattern for that need a while back here: #3038 (comment)
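For context, a trimmed-down sketch of that pattern (`load_global_state` and `InnerApp` are illustrative placeholders, not my actual code):

```rust
// The whole app sits behind a blocking resource: load some global state,
// provide it as context, and only then render the rest of the tree.
#[component]
pub fn App() -> impl IntoView {
    let global_state = OnceResource::new_blocking(async move {
        // Hypothetical loader/server fn producing the global state.
        load_global_state().await
    });
    view! {
        <Suspense>
            {move || Suspend::new(async move {
                let state = global_state.await;
                // Make the loaded state available to everything below.
                provide_context(state);
                view! { <InnerApp /> }
            })}
        </Suspense>
    }
}
```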
But the main thing, I think, is that I'd definitely assume this would work (as the next user would), and it was super hard to track down 😅
Fair enough. Blocking resources are definitely in the category of "things I wish I'd never implemented," for reasons exactly like this. Perhaps the answer all along should've been "if you need to do this, use `SsrMode::Async`," which would be slightly less powerful in cases where you have other chunks on the page that can stream in separately, but would have reduced the complexity a lot for me and avoided this whole class of bugs.
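For reference, on the routing side that alternative looks roughly like the sketch below (the shell/route layout and names are illustrative, using the 0.7 router API):

```rust
use leptos::prelude::*;
use leptos_router::{
    components::{Route, Router, Routes},
    path, SsrMode,
};

// With SsrMode::Async the whole route waits for every resource to resolve
// before sending anything, so headers set while loading (e.g. cookies) still
// make it into the response. `App` is the repro component above.
#[component]
pub fn Shell() -> impl IntoView {
    view! {
        <Router>
            <Routes fallback=|| "Not found.">
                <Route path=path!("/") view=App ssr=SsrMode::Async />
            </Routes>
        </Router>
    }
}
```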
This particular issue (resolving the first blocking resource immediately creates another blocking resource) is possible to solve, although apparently doing it in the n-ary case is relatively harder. But there will necessarily always be cases a user can create in which creating a blocking resource does not actually block, by burying it somewhere further down in the tree where it doesn't run until the response has already, irreversibly, begun. (Your "nodes of the app that need to add specific `ResponseOptions`".)
Any chance you'd be interested in exploring a way to solve the nested version here that does better than my attempt in #3282?
> But there will necessarily always be cases a user can create in which creating a blocking resource does not actually block, by burying it somewhere further down in the tree where it doesn't run until the response has already, irreversibly, begun. (Your "nodes of the app that need to add specific ResponseOptions")
This would probably make sense as a user-facing warning. I would say any blocking resource that ends up being streamed is a bug (this issue being Leptos's bug, but that case being userland misuse); it seems like something's always wrong unless it's client-side or the server is "really blocking".
I've been messing around with #3282 but am not having much luck. I've been trying to drive the stream whilst continuously checking `sc.await_deferred()`, storing completed chunks in a vec that'll be re-added to the start of the stream afterwards, and only allowing the response to progress once nothing's in the defer queue, even after another poll of the stream. But I can't get the third blocking resource to run, so I think I'm off on what's going on here.