rust-lang / rust

Empowering everyone to build reliable and efficient software.

Home page: https://www.rust-lang.org

Tracking issue for RFC 2033: Experimentally add coroutines to Rust

aturon opened this issue · comments

RFC.

This is an experimental RFC, which means that we have enough confidence in the overall direction that we're willing to land an early implementation to gain experience. However, a complete RFC will be required before any stabilization.

This issue tracks the initial implementation.

related issues

cc #43076, an initial implementation

Copied from #43076:


I'm using this branch for stream-heavy data processing. By streams I mean iterators with blocking FS calls. Because Generator lacks an Iterator or IntoIterator implementation, you must write and call your own wrapper. Zoxc kindly provided an example, but it's quite unergonomic. Consider:

Python:

def my_iter(iter):
    for value in iter:
        yield value

Rust with generators:

fn my_iter<A, I: Iterator<Item=A>>(iter: I) -> impl Iterator<Item=A> {
    gen_to_iter(move || {
        for value in iter {
            yield value;
        }
    })
}

Two extra steps: inner closure + wrapper, and, worse, you have to write the wrapper yourself. We should be able to do better.

TL;DR: There should be a built-in solution for GeneratorIterator.
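
A minimal sketch of what such a built-in GeneratorIterator could look like. The `Generator` trait and `GeneratorState` enum here are local stand-ins for the unstable `std::ops` items, and `Countdown` is a hand-written state machine standing in for a generator literal (which is nightly-only), so this compiles on stable:

```rust
// Local stand-ins for the unstable std::ops::{Generator, GeneratorState}.
enum GeneratorState<Y, R> {
    Yielded(Y),
    Complete(R),
}

trait Generator {
    type Yield;
    type Return;
    fn resume(&mut self) -> GeneratorState<Self::Yield, Self::Return>;
}

// Hand-rolled equivalent of `|| { for i in (1..=3).rev() { yield i; } }`.
struct Countdown(u32);

impl Generator for Countdown {
    type Yield = u32;
    type Return = ();
    fn resume(&mut self) -> GeneratorState<u32, ()> {
        if self.0 == 0 {
            GeneratorState::Complete(())
        } else {
            self.0 -= 1;
            GeneratorState::Yielded(self.0 + 1)
        }
    }
}

// The wrapper the comment asks for: turns any `Generator<Return = ()>`
// into an `Iterator`, ending iteration on `Complete`.
struct GenIter<G>(G);

impl<G: Generator<Return = ()>> Iterator for GenIter<G> {
    type Item = G::Yield;
    fn next(&mut self) -> Option<G::Yield> {
        match self.0.resume() {
            GeneratorState::Yielded(v) => Some(v),
            GeneratorState::Complete(()) => None,
        }
    }
}

fn main() {
    let collected: Vec<u32> = GenIter(Countdown(3)).collect();
    assert_eq!(collected, vec![3, 2, 1]);
}
```

With a blanket `GenIter` like this in the standard library, the `my_iter` example above would reduce to wrapping the closure once, with no user-written glue.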

I was a bit surprised that, during the RFC discussion, links to the C++ world seemed to reference documents dating back to 2015. There has been some progress since then. The latest draft TS for coroutines in C++ is n4680. I guess the content of that draft TS will be discussed again when the complete RFC for Rust's coroutines is written, so here are some of the salient points.

First, it envisions coroutines in a way similar to what this experimental RFC proposes, that is, they are stackless state machines. A function is a coroutine if and only if its body contains the co_await keyword somewhere (or co_yield which is just syntactic sugar for co_await, or co_return). Any occurrence of co_await in the body marks a suspension point where control is returned to the caller.

The object passed to co_await should provide three methods. The first one tells the state machine whether the suspension should be skipped and the coroutine immediately resumed (kind of a degenerate case). The second method is executed before returning control to the caller; it is meant to be used for chaining asynchronous tasks, handling recursive calls, etc. The third method is executed once the coroutine is resumed, e.g. to construct the value returned by the co_await expression. When implementing most generators, these three methods would have trivial bodies, respectively { return false; }, {}, and {}.

Various customization mechanisms are also provided. They tell how to construct the object received by the caller, how to allocate the local variables of the state machine, what to do at the start of the coroutine (e.g. immediately suspend), what to do at the end, what to do in case of an unhandled exception, and what to do with the value passed to co_yield or co_return (how yielded values are passed back to the caller is completely controlled by the code).

One subtle point that came up is how we handle the partially-empty boxes created inside of box statements with respect to OIBITs/borrows.

For example, if we have something like:

fn foo(...) -> Foo<...> {}
fn bar(...) -> Bar<...> {}
box (foo(...), yield, bar(...))

Then at the yield point, the generator obviously contains a live Foo<...> for OIBIT and borrow purposes. It also contains a semi-empty Box<(Foo<...>, (), Bar<...>)>, and we have to decide whether we should have that mean that it is to be treated like it contains a Box, just the Foo<...>, or something else.

I might be missing something in the RFC, but based on the definition of resume in the Generator struct, and the given examples, it looks like these generators don't have two way communication. Ideally this language construct would allow us to yield values out and resume values into the generator.

Here's an example of implementing the async/await pattern using coroutines in ES6. The generator yields Promises and the coroutine resumes the generator with the unwrapped value of a Promise each time the Promise completes. There is no way this pattern could have been implemented without the two-way communication.

Rust has a problem here because what's the type of resume? In the ES6 example, the generator always yields out some kind of Promise and is always resumed with the unwrapped value of the Promise. However the contained type changes on each line. In other words, first it yields a Promise<X> and is resumed with an X, and then it yields a Promise<Y> and is resumed with a Y. I can imagine various ways of declaring that this generator first yields a Wrapper<X> and then a Wrapper<Y>, and expects to be resumed with an X and then a Y, but I can't imagine how the compiler will prove that this is what happens when the code runs.

TL;DR:
yield value is the less interesting half. It has the potential to be a much more ergonomic way to build an Iterator, but nothing more.

let resumedValue = yield value; is the fun half. It's what turns on the unique flow control possibilities of coroutines.

(Here are some more very interesting ideas for how to use two-way coroutines.)
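
The two-way shape described above can be sketched as a trait whose `resume` takes an argument that becomes the value of the `yield` expression inside the generator. Both the trait and the `Adder` state machine are hypothetical, the unstable trait did not take resume arguments at the time of this thread:

```rust
enum GeneratorState<Y, R> {
    Yielded(Y),
    Complete(R),
}

// Hypothetical two-way generator trait: the caller passes a value into
// `resume`, and that value is what `yield` evaluates to inside the body.
trait Generator<Resume> {
    type Yield;
    type Return;
    fn resume(&mut self, arg: Resume) -> GeneratorState<Self::Yield, Self::Return>;
}

// Hand-rolled equivalent of:
//   |mut n| loop {
//       n = yield n + 1;   // `yield` evaluates to the next resume argument
//   }
struct Adder;

impl Generator<i32> for Adder {
    type Yield = i32;
    type Return = ();
    fn resume(&mut self, arg: i32) -> GeneratorState<i32, ()> {
        GeneratorState::Yielded(arg + 1)
    }
}

fn main() {
    let mut g = Adder;
    let a = match g.resume(1) {
        GeneratorState::Yielded(v) => v,
        _ => unreachable!(),
    };
    let b = match g.resume(a * 10) {
        GeneratorState::Yielded(v) => v,
        _ => unreachable!(),
    };
    assert_eq!((a, b), (2, 21));
}
```

This is exactly the shape the ES6 async/await emulation needs: the generator yields a promise out, and the driver resumes it with the unwrapped result. The typing problem raised above (each yield/resume pair wanting different types) is not solved by this sketch; it fixes one `Resume` type per generator.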

@arielb1

Then at the yield point, the generator obviously contains a live Foo<...> for OIBIT and borrow purposes. It also contains a semi-empty Box<(Foo<...>, (), Bar<...>)>, and we have to decide whether we should have that mean that it is to be treated like it contains a Box, just the Foo<...>, or something else.

I don't know what you mean by "OIBIT". But at the yield point, you do not have a Box<(Foo<...>, (), Bar<...>)> yet. You have a <Box<(Foo<...>, (), Bar<...>)> as Boxed>::Place and a Foo<...> that would need to be dropped if the generator were dropped before resuming.

Looking at the API, it doesn't seem very ergonomic/idiomatic that you have to check if resume returns Yielded or Complete every single iteration. What makes the most sense is two methods:

fn resume(&mut self) -> Option<Self::Yield>;
fn await_done(self) -> Self::Return;

Note that this would technically require adding an additional state to closure-based generators which holds the return value, instead of immediately returning it. This would make futures and iterators more ergonomic, though.

I also think it's worth explicitly clarifying that dropping a Generator does not exhaust it, but stops it entirely. This makes sense if we view the generator as a channel: resume requests a value from the channel, await_done waits until the channel is closed and returns a final state, and drop simply closes the channel.
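
The two-method split suggested above can be sketched as follows, with `Countdown` as a hypothetical implementor: `resume` yields values until the stream is exhausted, and `await_done` consumes the generator for the final value:

```rust
// Hypothetical two-method generator trait from the comment above.
trait Generator: Sized {
    type Yield;
    type Return;
    fn resume(&mut self) -> Option<Self::Yield>;
    fn await_done(self) -> Self::Return;
}

struct Countdown {
    n: u32,
}

impl Generator for Countdown {
    type Yield = u32;
    type Return = &'static str;
    fn resume(&mut self) -> Option<u32> {
        if self.n == 0 {
            None
        } else {
            self.n -= 1;
            Some(self.n + 1)
        }
    }
    fn await_done(self) -> &'static str {
        // For a closure-based generator this is where the extra state
        // holding the return value would be read; here it is a constant.
        "lift-off"
    }
}

fn main() {
    let mut g = Countdown { n: 2 };
    let mut seen = Vec::new();
    while let Some(v) = g.resume() {
        seen.push(v);
    }
    assert_eq!(seen, vec![2, 1]);
    assert_eq!(g.await_done(), "lift-off");
}
```

Note how `await_done` taking `self` by value means the channel-closing step is a move, so the generator cannot be resumed afterwards.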

Has there been any progress regarding the generator -> iterator conversion? If not, is there any active discussion about it somewhere? It would be useful to link it.
@Nemikolh and @uHOOCCOOHu, I'm curious about why you disagree with @clarcharr's suggestion. Care to share your thoughts?

Has there been any progress regarding the generator -> iterator conversion? If not, is there any active discussion about it somewhere?

https://internals.rust-lang.org/t/pre-rfc-generator-integration-with-for-loops/6625

I was looking at the current Generator API and immediately felt uneasy when I read

If Complete is returned then the generator has completely finished with the value provided. It is invalid for the generator to be resumed again.

Instead of relying on the programmer to not resume after completion, I would strongly prefer if this was ensured by the compiler. This is easily possible by using slightly different types:

pub enum GeneratorState<S, Y, R> {
    Yielded(S, Y),
    Complete(R),
}

pub trait Generator where Self: std::marker::Sized {
    type Yield;
    type Return;
    fn resume(self) -> GeneratorState<Self, Self::Yield, Self::Return>;
}

(see this rust playground for a small usage example)
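
A self-contained sketch of how that by-value trait would be used, with a hypothetical `Counter` implementor. Because `resume` consumes `self` and `Complete` does not hand the generator back, resume-after-completion simply cannot be expressed, so it is a type error rather than a runtime panic:

```rust
pub enum GeneratorState<S, Y, R> {
    Yielded(S, Y),
    Complete(R),
}

pub trait Generator: Sized {
    type Yield;
    type Return;
    fn resume(self) -> GeneratorState<Self, Self::Yield, Self::Return>;
}

// Hypothetical implementor: counts up to `stop`, then completes.
struct Counter {
    current: u32,
    stop: u32,
}

impl Generator for Counter {
    type Yield = u32;
    type Return = &'static str;
    fn resume(self) -> GeneratorState<Self, u32, &'static str> {
        if self.current < self.stop {
            let value = self.current;
            // Hand back the advanced generator alongside the yielded value.
            GeneratorState::Yielded(Counter { current: value + 1, ..self }, value)
        } else {
            // The generator is consumed here; no handle survives to resume.
            GeneratorState::Complete("done")
        }
    }
}

fn main() {
    let mut yielded = Vec::new();
    let mut gen = Counter { current: 0, stop: 3 };
    let ret = loop {
        match gen.resume() {
            GeneratorState::Yielded(next, v) => {
                yielded.push(v);
                gen = next; // must rebind to keep resuming
            }
            GeneratorState::Complete(r) => break r,
        }
    };
    assert_eq!(yielded, vec![0, 1, 2]);
    assert_eq!(ret, "done");
}
```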

The current API documentation also states:

This function may panic if it is called after the Complete variant has been returned previously. While generator literals in the language are guaranteed to panic on resuming after Complete, this is not guaranteed for all implementations of the Generator trait.

So you might not immediately notice a resume-after-completion at runtime even when it actually occurs. A panic on resume-after-completion needs additional checks to be performed by resume, which would not be necessary with the above idea.

In fact, the same idea was already brought up in a different context, however, the focus of this discussion was not on type safety.

I assume there are good reasons for the current API. Nevertheless I think it is worth (re)considering the above idea to prevent resume-after-completion. This protects the programmer from a class of mistakes similar to use-after-free, which is already successfully prevented by rust.

I too would have preferred a similar construction for the compile-time safety. Unfortunately, that construction doesn't work with immovable generators: once they have been resumed they can't ever be passed by value. I can't think of a way to encode that constraint in a similar way for pinned references; it seems you need some kind of affine reference that you can pass in and receive back in the GeneratorState::Yielded variant rather than the current lifetime-scoped Pin reference.

A resume/await_done version seems much more ergonomic than moving the generator every time resume is called. And plus, this would prevent all of @withoutboats' work on pinning from actually being applied.

Note that Iterator has a similar constraint- it's not really a big deal, it doesn't affect safety, and the vast majority of users of the trait don't even have to worry about it.

Question regarding the current experimental implementation: Can the yield and return types of generators (move-like syntax) be annotated? I would like to do the following:

use std::hash::Hash;

// Somehow add annotations so that `generator` implements
// `Generator<Yield = Box<Hash>, Return = ()>`.
// As of now, `Box<i32>` gets deduced for the Yield type.
let mut generator = || {
    yield Box::new(123i32);
    yield Box::new("hello");
};

I was hopeful that let mut generator: impl Generator<Yield = Box<Debug>> = || { ... }; might allow this, but testing with

fn foo() -> impl Generator<Yield = Box<Debug + 'static>> {
    || {
        yield Box::new(123i32);
        yield Box::new("hello");
    }
}

it seems the associated types of the return value aren't used to infer the types for the yield expression; this could be different once let _: impl Trait is implemented, but I wouldn't expect it to be.

(Note that Hash can't be used as a trait object because its methods have generic type parameters which must go through monomorphization).

One terrible way to do this is to place an unreachable yield at the start of the generator declaring its yield and return types, e.g.:

let mut generator = || {
    if false { yield { return () } as Box<Debug> };
    yield Box::new(123i32);
    yield Box::new("hello");
};

EDIT: The more I look at yield { return () } as Box<Debug> the more I wonder how long till Cthulhu truly owns me.

Yeah, I was hoping as well impl Trait would do the trick, but couldn't get it to work either. Your if false { yield { return () } as Box<Debug> }; hack does indeed work, though after seeing that, I don't think I will be able to sleep for tonight.

I guess the only way is to introduce more syntax to annotate the types?

Will the Generator::resume() method be changed to use Pin<Self> and be safe, or is the idea to add a new SafeGenerator trait?

I assumed that it would be changed, and I happened to be looking at the Pin RFC just now and noticed that it agrees, but it is blocked on object safety of arbitrary self types (which is currently an open RFC):

Once the arbitrary_self_types feature becomes object safe, we will make three changes to the generator API:

  1. We will change the resume method to take self by self: Pin<Self> instead of &mut self.
  2. We will implement !Unpin for the anonymous type of an immovable generator.
  3. We will make it safe to define an immovable generator.

The third point has actually happened already, but it doesn't help much since that required making Generator::resume unsafe.

I've found a use case that suggests that the Yield associated type should be parameterised by a lifetime (and thus rely on GATs). The trait would then look like this (with explicit lifetimes for clarity):

pub trait Generator {
    type Yield<'a>;
    type Return;
    unsafe fn resume<'a>(self: Pin<'a, Self>) -> GeneratorState<Self::Yield<'a>, Self::Return>;
}

This version of the trait allows a generator to yield a reference to a local variable or other variables constrained by the lifetime of the generator.
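
A compiling sketch of that idea on stable Rust (generic associated types landed later), with the `Pin` receiver dropped for brevity. The hand-written `Windows` generator yields slices borrowed from its own state, which is exactly what the non-GAT trait cannot express:

```rust
enum GeneratorState<Y, R> {
    Yielded(Y),
    Complete(R),
}

// GAT-based "lending" generator trait: the yielded type may borrow
// from the generator for the duration of the resume call.
trait Generator {
    type Yield<'a>
    where
        Self: 'a;
    type Return;
    fn resume(&mut self) -> GeneratorState<Self::Yield<'_>, Self::Return>;
}

// Yields overlapping two-element windows into a buffer it owns.
struct Windows {
    buf: Vec<u32>,
    pos: usize,
}

impl Generator for Windows {
    type Yield<'a> = &'a [u32] where Self: 'a;
    type Return = usize;
    fn resume(&mut self) -> GeneratorState<&[u32], usize> {
        if self.pos + 2 <= self.buf.len() {
            let w = &self.buf[self.pos..self.pos + 2];
            self.pos += 1;
            GeneratorState::Yielded(w)
        } else {
            GeneratorState::Complete(self.pos)
        }
    }
}

fn main() {
    let mut g = Windows { buf: vec![1, 2, 3], pos: 0 };
    let mut sums = Vec::new();
    loop {
        match g.resume() {
            GeneratorState::Yielded(w) => sums.push(w.iter().sum::<u32>()),
            GeneratorState::Complete(end) => {
                assert_eq!(end, 2);
                break;
            }
        }
    }
    assert_eq!(sums, vec![3, 5]);
}
```

The cost of the lending shape is that each yielded value borrows the generator, so it must be consumed before the next resume, which is why this trait cannot directly implement Iterator.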

What's curious is that currently on nightly you can write a generator that yields a reference to a local variable, but not use it. As soon as you put in a call to resume, compilation fails.

@dylanede that is curious, can you post a playpen example?

Here it is in the playpen: https://play.rust-lang.org/?gist=4cc04defbebf06fa55a3074499d256ad&version=nightly

The line to uncomment to trigger the compilation error is not mentioned in the error.

I've come up with a hacky wrapper around generators that implements a trait like the one I mentioned above, so that you can return local references from generators. Caveats are mentioned inline. Take this as a proof-of-concept and motivating use case for changing the Generator trait.

https://play.rust-lang.org/?gist=74fe6554d3fa5503594bb48889caed49&version=nightly

One possible signature for the Generator trait that avoids the need for GATs is

pub trait Generator<'a> {
    type Yield;
    type Return;
    unsafe fn resume(self: Pin<'a, Self>) -> GeneratorState<Self::Yield, Self::Return>;
}

Users of the trait would then typically refer to for<'a> Generator<'a>.

@dylanede given that GATs are already on the list to be implemented, I'd rather wait on them. It'd be very unfortunate to be locked into a more frustrating syntax which is inconsistent with Iterator.

When suggesting Generator changes, please do not forget that they can be used outside of async use-cases. Also, I think it's worth considering renaming Yield to Item for better compatibility with Iterator and potential future unification.

fwiw, yield is used in Python and Javascript for generators not related to async behavior.

Has anyone looked into the experimental LLVM coroutine support? @Zoxc on IRC mentioned that the current IR generated is hard to optimize, and the situation may improve if we passed it using the native LLVM facilities.

Last time I looked at it, LLVM coroutines were basically purpose-built for C++ coroutines, which default to allocating their state block. I'm not sure how easy it would be to get around that, and even without that there are layering problems.

It would be nice to be able to optimize the code pre-state-machine-transformation without lifting all of LLVM into MIR, though. :)


I noticed that Generator is implemented for mutable references to generators. I think that this is unfortunate because it prevents any safe abstractions for the Generator trait.

Consider this seemingly safe function:

fn get_first_yield<T>(mut gen: impl Generator<Yield = T>) -> Option<T> {
    // We know `resume` wasn't called on `gen` before because the caller needs to
    // move `gen` to call this method. Since we now own `gen` and we won't move it
    // anymore, it is safe for us to call `resume`.
    match unsafe { gen.resume() } {
        GeneratorState::Yielded(value) => Some(value),
        GeneratorState::Complete(_) => None,
    }
    // `gen` gets dropped here, therefore we are sure it isn't moved in the future.
}

According to the documentation of the Generator trait this function should be sound. But it isn't because it can be called with mutable references to generators. Here is a full example on how this function fails: https://play.rust-lang.org/?gist=94d0081f2f4e8fd08e4e0b97989422f1&version=nightly&mode=debug&edition=2018

I know that generators will eventually get a safe api but depending on how long this will take I think it would make sense to remove the implementation of Generator for mutable references to generators now. Once there is a safe api, this implementation isn't possible anyway.

@zroug #54383 should allow creating a safe api for generators soon ™️ (currently doing a local build of that branch to see if it works, will update in a few hours once it completes), as you mention impl<G> Generator for &mut G where G: Generator will have to disappear and instead we'll get impl<G> Generator for Pin<&mut G> where G: Generator or similar.

In the same vein, last time I checked generators with borrows across yield points didn't impl !Unpin. That seems like the prime example of an !Unpin type, or am I misunderstanding something?

I can confirm that the following definition of Generator works with the changes in #54383 (full playground of what I tested):

trait Generator {
    type Yield;
    type Return;

    fn resume(self: Pin<&mut Self>) -> GeneratorState<Self::Yield, Self::Return>;
}

impl<G> Generator for Pin<G>
where
    G: DerefMut,
    G::Target: Generator
{
    type Yield = <<G as Deref>::Target as Generator>::Yield;
    type Return = <<G as Deref>::Target as Generator>::Return;

    fn resume(self: Pin<&mut Self>) -> GeneratorState<Self::Yield, Self::Return> {
        <G::Target as Generator>::resume(Pin::get_mut(self).as_mut())
    }
}

I'm going to have a look whether I can figure out the changes to make the MIR transform match this trait as well.

Going to again point out what I mentioned earlier: rather than a single resume method, I honestly think that resume should return Option<Yield> and that there should be a separate await method that consumes self and returns Return.

Why doesn't the generator accept values on resume?
It'd be great to have yield return the value provided to resume.

@clarcharr you can implement such an await with the current API.

Any update on whether Generator is going to be changed to depend on GATs, as suggested by #43122 (comment)?

The strange behaviour mentioned in #43122 (comment) is still reproducible as well.

@omni-viral Both definitions of generators can be written in terms of each other. I was more stating that I feel a double-method approach is more ergonomic for consumers.

@clarcharr how do you write the pinned version in terms of a version of Generator that consumes self? Once it's pinned you're not allowed to move it so there is no way to produce a value to pass into await.

Oh, right.

It's unfortunate we don't have a way to do a form of consume-and-drop with the Pin API.

Why does the Generator trait care about the immovability of the implementer, while Iterator does not? I am afraid Generator development is too heavily influenced by async use-cases, which results in an unfortunate disregard for potential synchronous uses.

I'm fairly certain that Iterator would get the same treatment if it could be changed in a non-breaking way.

@newpavlov note that only static generators are immovable. If you look at the testcases in the linked PR (e.g. https://github.com/rust-lang/rust/pull/55704/files#diff-d78b6984fcee145621a3415c60978b88) there's no unsafety required to deal with a non-static generator anymore, you just have to wrap references in Pin::new() before calling resume.

This also means you can have a fully safe generic Iterator implementation for a wrapped generator by requiring a Generator + Unpin (which can also be passed a Pin<Box<dyn Generator>> (or stack-pinned variant of) if you want to define an iterator via an immovable generator).

I was recently wondering if adding an &mut variant of resume for Unpin generators would be useful, e.g.

fn resume_unpinned(&mut self) -> GeneratorState<Self::Yield, Self::Return> where Self: Unpin {
    Pin::new(self).resume()
}

but I can't think of a name that seems short enough to be worth it.

@newpavlov also, not supporting immovability drastically lowers the usefulness of generators for pretty much all usecases, e.g. an iterator-generator as simple as

|| {
    let items = [1, 2, 3];
    for &i in &items {
        yield i;
    }
}

runs afoul of error[E0626]: borrow may still be in use when generator yields.

@Nemo157
I guess I will repeat the question (couldn't find an answer with a cursory search), but could you remind me why Generator cannot be implemented only for pinned self-referential types? In other words, your snippet will automatically pin the generator closure.

will automatically pin the generator closure

but where will it be pinned? To construct efficient adaptors you need to be able to pass the generator around by value as you add layers on, only once you want to actually use it do you pin the top level and have that pinning apply to the entire structure at once.

It could be automatically pinned to the heap via Pin<Box<_>> but then you have an indirection between each layer of adaptors.

You can distinguish between safe and unsafe generators: you would pass the "unsafe" generator around by value; it would not implement the Generator trait, but could instead implement something like UnsafeGenerator with an unsafe resume method. And we would have impl<T: UnsafeGenerator> Generator for Pin<T> { .. }, so to safely use the resulting generator you would have to pin it first, which could be done either automatically or manually.

With #55704 we can distinguish between potentially self-referential and guaranteed movable generators, they're named Generator and Generator + Unpin respectively. Neither of them require any unsafe code to interact with.

My point is that I am looking forward to using Generators in a synchronous code and implementing it manually for custom structs, so baking Pin semantics into the trait seems suspicious to me.

It's trivial to opt-out of pinning and use Generator still, I think this is a key part of providing all the potential power of generators, then allowing more user-friendly abstractions to be built on top of them that might restrict that in some way. Since Iterator can't be adapted to support pinned values directly you just opt-out in the adaptation layer and force users to pin any self-referential generator they want to use with it:

impl<G: Generator<Return = ()> + Unpin> Iterator for G {
    type Item = <G as Generator>::Yield;

    fn next(&mut self) -> Option<Self::Item> {
        match Pin::new(self).resume() {
            GeneratorState::Yielded(item) => Some(item),
            GeneratorState::Complete(()) => None,
        }
    }
}

let gen = || { yield 5; yield 6; };
for item in gen {
    println!("{}", item);
}

let gen = static || { yield 5; yield 6; };
for item in Box::pinned(gen) {
    println!("{}", item);
}

So why use this approach instead of the UnsafeGenerator which I proposed earlier? IIUC both are essentially equivalent, but with yours users have to learn about Pin semantics even if they do not work with self-referential structs.

I do agree that the burden of figuring out how to work with self-referential structs should be put on the creators of the self-referential structs, rather than in the API for something like Generator.

For example, why is it that we can't just use &mut self as usual and pass in an &mut PinMut<'a, T> instead? It seems silly, but it does get around having to put this weird API in the Generator trait.

If your generator is Unpin, then all someone has to do to use it is to do Pin::new(generator). In return, people who have generators that are not Unpin can also use your code and abstractions on their generators. The Pin API was designed so that pointers to structs which are not self-referential can be easily be put in and out of a Pin. Having a separate UnsafeGenerator trait would force everyone to implement things twice, once for UnsafeGenerator and once for Generator.

Also, you keep implying that somehow self-referential generators are only relevant for async i/o use cases, but that's not at all true. Any generator that uses borrows to local variables (like this example in this thread) will need to be self-referential.

Having a separate UnsafeGenerator trait would force everyone to implement things twice, once for UnsafeGenerator and once for Generator.

Why is that? I don't see why impl<T: UnsafeGenerator> Generator for Pin<T> { .. } wouldn't work.


@newpavlov Just to make it clear: Pin has absolutely nothing whatsoever to do with asynchronous vs synchronous.

The reason for Pin is to allow for borrowing across yield points, which is necessary/useful for both asynchronous and synchronous code.

Pin just means "you cannot move this value", which allows for self-referential references.

If your struct does not need to pin anything, then a Pin<&mut Self> is the same as &mut Self (it uses DerefMut), so it is just as convenient to use.

It is only when your struct needs to pin something that you need to deal with the complexity of Pin.

In practice that means the only time you need to deal with Pin is if you are creating abstractions which wrap other Generators (like map, filter, etc.)

But if you're creating standalone Generators then you don't need to deal with Pin (because it derefs to &mut Self).
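
The "derefs to &mut Self" point can be shown concretely. `Counter` is a hypothetical standalone generator-like type that mirrors the `Pin<&mut Self>` receiver shape of `Generator::resume`; since it contains no self-references it is automatically `Unpin`, so pinning it is safe and free:

```rust
use std::pin::Pin;

struct Counter(u32);

impl Counter {
    // Same receiver shape as the proposed Generator::resume.
    fn resume(mut self: Pin<&mut Self>) -> u32 {
        // DerefMut through the Pin works because Counter: Unpin,
        // so this body reads exactly like it took `&mut self`.
        self.0 += 1;
        self.0
    }
}

fn main() {
    let mut c = Counter(0);
    // For Unpin types, constructing the Pin is the safe, zero-cost
    // Pin::new; no unsafe and no heap allocation involved.
    assert_eq!(Pin::new(&mut c).resume(), 1);
    assert_eq!(Pin::new(&mut c).resume(), 2);
}
```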

Why is that? I don't see why impl<T: UnsafeGenerator> Generator for Pin<T> { .. } wouldn't work.

Let's suppose we did that. That means that now this code won't work:

let unsafe_generator = || {
    let items = [1, 2, 3];
    for &i in &items {
        yield i;
    }
};

unsafe_generator.map(|x| ...)

It doesn't work because the map method requires self to be a Generator, but unsafe_generator is an UnsafeGenerator.

And your Generator impl requires UnsafeGenerator to be wrapped in Pin, so you would need to use this instead:

let unsafe_generator = || {
    let items = [1, 2, 3];
    for &i in &items {
        yield i;
    }
};

let unsafe_generator = unsafe { Pin::new_unchecked(&mut unsafe_generator) };
unsafe_generator.map(|x| ...)

And now you must carefully ensure that unsafe_generator remains pinned, and is never moved. Hopefully you agree that this is much worse than the previous code.


If instead we require Pin<&mut Self> for the resume method, that means we don't need to do any of that funky stuff, we can just pass unsafe_generator directly to map, and everything works smoothly. Unsafe generators can be treated exactly the same as safe generators!

The difference with your system and the Pin<&mut Self> system is: where is the Pin created?

With your system, you must manually create the Pin (such as when passing unsafe_generator to another API which expects a Generator).

But with Pin<&mut Self> you don't need to create the Pin: it's created automatically for you.

Have there been any discussions about avoiding more than one Box allocation when embedding an immovable generator inside another? That's basically what happens in C++ for optimization purposes, but in Rust we would probably need that in the type system.

Another optimization question is about safe-to-move references: in particular, references into a Vec's heap buffer remain valid even if the generator moves, but due to how the types work this case still requires an immovable generator for now.

No boxes are needed, see https://docs.rs/pin-utils/0.1.0-alpha.4/pin_utils/macro.pin_mut.html for stack pinning that works even for generators in generators.
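
The same stack-pinning idiom was later stabilized as `std::pin::pin!`. A minimal sketch, with `Immovable` as a hypothetical stand-in for an immovable (`static`) generator; it only mirrors the calling convention, a real immovable generator would additionally be `!Unpin`:

```rust
use std::pin::{pin, Pin};

struct Immovable {
    state: u32,
}

fn step(mut g: Pin<&mut Immovable>) -> u32 {
    // A real immovable generator's resume would go through the Pin; here
    // DerefMut is available because this demo type happens to be Unpin.
    g.state += 1;
    g.state
}

fn main() {
    // `pin!` pins the value to this stack frame: the unpinned original is
    // shadowed and can never be moved again. No Box allocation anywhere,
    // even when nesting one pinned generator inside another.
    let mut g = pin!(Immovable { state: 0 });
    // The Pin<&mut _> can be re-borrowed for repeated resumes.
    assert_eq!(step(g.as_mut()), 1);
    assert_eq!(step(g.as_mut()), 2);
}
```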

Hi everybody,

I'm experimenting with generators now and I need to store multiple generators in a Vec. This code works:

#![feature(generators, generator_trait)]

use std::ops::{Generator, GeneratorState};
use std::pin::Pin;


fn main() {

    let gen = Box::new(|| {
        yield 1;
        return "foo"
    });

    let mut vec = Vec::new();

    vec.push(gen);

    match Pin::new(vec[0].as_mut()).resume() {
        GeneratorState::Yielded(1) => {}
        _ => panic!("unexpected return from resume"),
    }

    match Pin::new(vec[0].as_mut()).resume() {
        GeneratorState::Complete("foo") => {}
        _ => panic!("unexpected return from resume"),
    }

}

However, I also need to be able to specify the Vec's type explicitly instead of relying on inference, and this is where I fail. This code does not work:

#![feature(generators, generator_trait)]

use std::ops::{Generator, GeneratorState};
use std::pin::Pin;


fn main() {

    let gen = Box::new(|| {
        yield 1;
        return "foo"
    });

    //let mut vec = Vec::new();
    let mut vec:Vec<Box<Generator <Yield=i32, Return=&str>>> = Vec::new(); // what is the correct type?

    vec.push(gen);

    match Pin::new(vec[0].as_mut()).resume() {
        GeneratorState::Yielded(1) => {}
        _ => panic!("unexpected return from resume"),
    }

    match Pin::new(vec[0].as_mut()).resume() {
        GeneratorState::Complete("foo") => {}
        _ => panic!("unexpected return from resume"),
    }

}

I get:

error[E0277]: the trait bound `dyn std::ops::Generator<Return = &str, Yield = i32>: std::marker::Unpin` is not satisfied
  --> src/main.rs:19:11
   |
19 |     match Pin::new(vec[0].as_mut()).resume() {
   |           ^^^^^^^^ the trait `std::marker::Unpin` is not implemented for `dyn std::ops::Generator<Return = &str, Yield = i32>`
   |
   = note: required by `std::pin::Pin::<P>::new`

Any idea what is the correct way to explicitly declare the vector?

Use Vec<Box<dyn Generator<Yield = i32, Return = &'static str> + Unpin>>, Unpin is a marker trait you can add on to other trait bounds.

Or use Vec<Pin<Box<dyn Generator<Yield = i32, Return = &'static str>>>> if you want to support self-referential generators as well.

Wow, thanks a lot!

Has anyone done any work with generator resume arguments? Is there an RFC for them?

Not that I know of. The original RFC mentioned that resume arguments could be added, but that's all.


@lachlansneff My initial PR adding generators included resume arguments, and that implementation is still in one of my branches. Adding generator resume arguments would be covered by the existing eRFC for generators.

I have read through the RFC and all the comments here. I may be missing this. However, it doesn't seem that the case of cancellation has been considered.

Let's say that I want to collect the first million numbers in the Fibonacci sequence. I can write a generator that never returns and yields the next number in the sequence. Then I can call resume() until I have one million results. So far so good. However, this needs to use some kind of BigNum because the numbers are large. The BigNum implements Drop to free its memory buffer.

After I've collected my first million BigNums, I stop resuming. What happens to the two BigNums that I'm internally keeping as state? Are they dropped? Do we get a memory leak?

It seems like some sort of cancellation routine is needed here to explicitly end execution early. It probably also needs to be defined in the Generator trait otherwise independent types implementing Generator will not be well-defined.

After I've collected my first million BigNums, I stop resuming. What happens to the two BigNums that I'm internally keeping as state? Are they dropped? Do we get a memory leak?

They are stored as part of the generated impl Generator type, the transform also produces a generated Drop impl for the type, so when you drop the generator all of the current state is correctly dropped (simple demonstration playground).


After I've collected my first million BigNums, I stop resuming. What happens to the two BigNums that I'm internally keeping as state? Are they dropped? Do we get a memory leak?

Generators work similarly to closures: they are converted into a struct which contains all of the state for the generator. So the generator struct will contain the BigNums as fields.

When the generator struct is dropped, it will automatically drop all of its fields, so there is no memory leak. Here is an example:

https://play.rust-lang.org/?version=nightly&mode=debug&edition=2018&gist=74878c5f222d4329fdf654f5d873d44d

Should Generator depend on Drop then? Third-party implementors of Generator need either clear documentation on the need to handle cancellation semantics or a strict dependency on Drop to force them to do so. Possibly both.

commented

@npmccallum I'm not sure what you mean. Things will automatically be dropped even if you don't implement Drop.

So unless you're intentionally trying to leak memory, it's very hard to leak memory by accident.

@Pauan So if you create a generator using the closure-like syntax, the Rust compiler will create a new ephemeral type, implement Generator and Drop on it and create an instance of that type.

The Rust compiler, however, is not the only party that can implement Generator and Drop on a type. For example, a number of stackful coroutine crates do this. We might want to document that if you implement Generator you also need to implement Drop and handle cancellation.

We probably shouldn't make Generator depend on Drop since this would make it awkward to implement Generator on types which don't need to free resources.

Unless you're actually managing memory yourself, you don't need to implement Drop at all; when you just have a BigNum field, the compiler will generate "drop glue" to call BigNum::drop for you. This is exactly the same as a case like this:

struct SomeType {
    some_data: Vec<i32>,
}

// SomeType doesn't implement Drop, but you don't see it leaking anything
commented

So if you create a generator using the closure-like syntax, the Rust compiler will create a new ephemeral type, implement Generator and Drop on it and create an instance of that type.

That is correct, yes.

We might want to document that if you implement Generator you also need to implement Drop and handle cancellation.

No, that is not necessary. What I said is not specific to generators, it applies to all Rust types:

https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=f6d244f8fc13f887a99d1f3f425ddaec

As you can see, even though Bar does not implement Drop, it still dropped Foo anyways. And even if Bar does implement Drop, it still drops Foo anyways:

https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=1b401bfa0b63d97bd81a35f6a5529f84

This is a fundamental property of Rust: everything is automatically dropped, even if you do not implement Drop. That applies to all Rust types (with a couple exceptions like ManuallyDrop).

The only time you need to implement Drop is if you're managing some external resource (like the heap, or a file descriptor, or a network socket, or something like that). But that's not specific to generators.

It's very hard to accidentally leak memory in Rust. The only way I know of to leak memory accidentally is with Rc / Arc cycles.

To be fair, if you are implementing something that acts like the coroutines created by the generator transform, then you are likely to be manually managing your own internal memory to deal with the states efficiently. For example, translating my earlier example into a manually implemented Generator uses MaybeUninit everywhere and needs a custom Drop impl to take care of it (and is highly likely not to be panic safe).

But still that is not specific to generators/coroutines, that's just to do with manually managed internal memory, and doesn't require any changes to this feature to deal with.

While this is still unstable, can async/await be used on stable Rust in place of generators, in a way that does not pull in the full-fledged futures and task-scheduler runtime, but instead exposes a more simply typed, localized, iterator-like interface?

Sure, there are even crates for that. See https://github.com/whatisaphone/genawaiter for example.

I compared a genawaiter Iterator with the async_std way of accomplishing the same task, i.e. an async task sending items via an async_std::sync::Sender plus a receiving Iterator that does async_std::task::block_on(receiver.recv()). The latter makes needless calls to the futex syscall on Linux, and genawaiter of course does not, which is cool. Thanks @whatisaphone, @HadrienG2

Thanks for calling me out, I meant to leave a comment here. If anyone wants to experiment with generator control flow, using genawaiter as a starting point might be quicker than hacking the compiler itself.

I think resume arguments are an important feature, and genawaiter supports them via the resume_with method. I couldn't come up with a good answer for where the first resume argument should end up (before the first yield statement). See the note in this section. I'd love to see a better way to handle that.

@whatisaphone One design which was proposed before in order to resolve this problem is to unify generator call arguments and generator resume arguments. Basically, your generator acts like an FnMut(ResumeArgs).

There is not much consensus on it yet, though. Critics argue that it is surprising to have the arguments of a generator fn change magically every time you reach a yield statement, and that it's not nice to have to explicitly save them if you don't want to lose them.

@whatisaphone how about something like:

// either of:
// fn generator(initial_arg1: i32, initial_arg2: i64) -> String
// yield(initial_resume_arg_pat: (i32, i32)) -> i32
// {

// or:
fn generator(initial_arg1: i32, initial_arg2: i64) -> impl Generator<Return=String, Yield=i32, Resume=(i32, i32)>
yield(initial_resume_arg_pat)
{
    println!("{:?}", (initial_arg1, initial_arg2, initial_resume_arg_pat));
    let yield_value = 23i32;
    let next_resume_arg = yield yield_value;
    println!("{:?}", next_resume_arg);
    "return value".to_string()
}

fn main() {
    let mut g = generator(1i32, 2i64);
    let mut g = unsafe { Pin::new_unchecked(&mut g) };
    assert_eq!(g.resume((5i32, 6i32)), Yielded(23i32));
    assert_eq!(g.resume((7i32, 8i32)), Complete("return value".to_string()));
}

It prints:

(1, 2, (5, 6))
(7, 8)

Generator lambda function:

let lambda: impl FnOnce(i32, i64) -> impl Generator<Return=String, Yield=i32, Resume=R> =
|initial_arg1: i32, initial_arg2: i64| -> i32 /* or impl Generator */
yield(initial_resume_arg_pat: R) -> i32
{
    todo!()
};

(edit: fix generator lambda type)

@HadrienG2 My idea avoids losing initial resume arguments and doesn't have magically changing variables.

I've implemented resume arguments in #68524. They go for the "minimal language changes" solution of passing the first resume argument as a "normal" argument and subsequent ones as the result of yield expressions.

I think we should land this implementation first, since it unlocks a lot of interesting patterns (including async/await support on #[no_std], which is what I'm interested in). Generators are experimental, so that shouldn't be a problem. Once we've collected some experience with this we can work on writing a fully-fledged RFC for generators.

@jonas-schievink does your implementation of generator resume arguments relate to the RFC under discussion at rust-lang/rfcs#2781 ?

@bstrie No, I've found that proposal to be extremely confusing with how it changes the value of supposedly immutable variables after a yield. My implementation just makes yield evaluate to the resume argument, and passes the first one as an argument.

I avoid using generators in Python because of their "obscure" syntax: to know whether you are looking at a function definition or at a generator definition, you need to search for the yield keyword in the body. To me this is as if a function definition and a class definition in Python were both introduced with def (instead of def and class), and to recognise a function, it was necessary to search for the return keyword in the body.

When I choose to define a generator in Python, I add a comment before the definition to mark it as a generator.

I am surprised that Rust is following Python here. IMO, if both

let mut one = || { if false {}; 1 };

and

let mut one = || { if false {yield 42}; 1 };

are allowed, they should mean the same thing.

I would expect the generator definition syntax to be remarkably different from the closure definition syntax.

Yeah, I also quite dislike that generator syntax mimics closures. I have proposed an alternative syntax here, although it would need a new edition, and we should probably use something other than generator, e.g. something like cort as a shorthand for coroutine (see this post for motivation).

@newpavlov @alexeymuranov I proposed a syntax that doesn't require a new edition but is unambiguous here. It also doesn't have magically changing function arguments.

It's unambiguous because the yield clause, which declares the yield and resume types, sits between the return type and the function body.

A generator lambda function would look like:

|initial_arg1: i32, initial_arg2: i64| -> i16 /* or impl Generator */
yield(initial_resume_arg_pat: R) -> i32
{
    let resume_arg2: R = yield 23i32;
    0i16
}

I like how libfringe does it, where the initial input (resume argument) to the generator is specified as part of the closure:

|yielder, mut input| {
  loop {
    input = yielder.suspend(input + 1);
  }
}

I raised this question on discord and was asked to add it here: Where is the high-performance cooperative multitasking facility?

Clarifications: I'm writing a very high-performance simulation system that is most comfortably modeled with actors (or, as I knew it, communicating sequential processes, CSP). You can easily do it with (OS) threads, but that's 3-4 orders of magnitude slower than doing it with coroutines on a single pinned thread.

Benchmarking modern C++ stackless coroutines, I can get 600 M/s context switches (which is insanely good), roughly 12 x86 instructions per iteration. Closer to the model I want is cooperative multitasking, which Boost.Context provides. This "merely" manages ~100 M/s context switches.

Alas, doing this in Rust has been a disaster; I've tried some six different crates, but everything is tuned for multithreading and IO, not for compute-bound single-thread work. Performance is typically ~300 k/s (e.g. tokio). The best I have gotten so far is with futures_lite and spin_on, which can eke out 50 M/s (and a ~64-instruction inner loop with lots of indirect branches). Looking at the binary trace, it seems trying to shoehorn cooperative multitasking over async incurs a fair bit of overhead.

I forgot to include my best case example that manages 50 M/s switches (1/6th of what I can do with C++):

use futures_lite::future;

pub fn main() {
    let my_future = async {
        for _step in 0..100_000_000 {
            future::yield_now().await;
        }
    };

    println!("size {}", std::mem::size_of_val(&my_future));

    spin_on::spin_on(my_future);
}

@tommythorn the perf difference is likely because the C++ version does not have/need the equivalent of a "waker" to notify a future when to resume.

Since you are using spin_on, and your task is CPU bound and not I/O bound, it may be that you do not need this waker system at all.

However, future::yield_now() will always notify the executor via the task's waker. You could probably improve performance by writing a custom future that returns Poll::Pending the first time it is polled, but does not notify the waker. Awaiting this custom future would be more efficient (although strictly speaking you would be violating the contract expected of futures).

In the long run, I think Rust will get generalized coroutines, which would allow you to use the state-machine transform performed by the compiler, without being tied to the async/await model. I wrote down my thoughts on how this should look here.

That's exactly what it looked like from reading the dynamic instruction trace (a dummy waker was consulted, etc.). I only have a few months of Rust experience, but I'll try. I'll not pollute this thread any longer, but I must plead that "generator/coroutines" is not what I need, and implementing cooperative multitasking on top of generators adds critical overhead. What I would like is actual coroutines as defined by Knuth (https://en.wikipedia.org/wiki/Coroutine) with explicit transfer of control, not Python-style generators. Thanks.

I posed once a question on Rust reddit: How about state machines instead of generators?. There may be some relevant comments there.

I must plead that "generator/coroutines" is not what I need, and implementing cooperative multitasking on top of generators adds critical overhead. What I would like is actual coroutines as defined by Knuth (https://en.wikipedia.org/wiki/Coroutine) with explicit transfer of control, not Python-style generators. Thanks.

Did you read https://en.wikipedia.org/wiki/Coroutine#Comparison_with_generators ?

There should be zero overhead to implementing coroutines on top of generators like this: in both cases you would need an indirect call as soon as more than one coroutine is involved.

@Diggsey, I hadn't read that, but it's a well-known fact. Still, it's not free: you are implementing a trampoline, and thus you transfer control twice, at least one of which is an indirect branch.

In detail: in the classic coroutine implementation, yield_to(B) while running A will save the callee-saved registers, store the stack pointer in A, load the stack pointer from B, restore the callee-saved registers, and return. That's it: one call to yield_to(B) and one return. With a generator, you have three contexts to switch between: A -> C -> B (looking at your link, they correspond to A-produce, B-consume, and C-dispatcher). Twice the cost.

ADD: Note, with a small change to yield_to() to allow it to pass a value to the next thread, e.g. yield_to(B, value), which arrives as the return value in B: value = yield_to(...), you can implement generators directly on top of this with zero cost. (I used exactly this primitive in high-performance firmware for a NAND flash controller. It was almost as fast as the convoluted state machine it replaced, but vastly more readable and less error prone.)

With a generator, you have three contexts to switch between: A -> C -> B

I'm not sure what definition of "context switch" you are using, but this is definitely not true.

With a generator (based on the state-machine transform), the call stack initially looks like this:

A <-- top
C

When A yields, it corresponds to a ret:

C <-- top

And then C calls B:

B <-- top
C

In other words, switching coroutines in this model involves a return and an indirect call. That's it. There's no stack swapping or other context switching happening at all.

What you described here as a "classic coroutine implementation" is often slower as it requires swapping out the whole stack, ie. there's a real context switch happening. Furthermore, the compiler cannot optimize across the context switch. Using a state-machine transform is both more cache-friendly and more compiler friendly.

@tommythorn: Some of the confusion might be that while async/await uses a trampoline, the coroutines/generators described by this issue are a lower-level building block (for which there is no stable syntax; you can see examples of the unstable API in https://github.com/rust-lang/rust/pull/68524/files?file-filters%5B%5D=.md&file-filters%5B%5D=.stderr) that works the way @Diggsey describes. The async/await transform consumes that low-level building block to add a high-level interface that is amenable to concurrent/multi-core schedulers.

I have a use case for generators that isn't currently supported: I need a Generator that has access to a reference only for the short duration between yields.

Here's an example:

#![feature(generators)]

use std::ops::Generator;

fn test() -> impl Generator<&mut String> {
// What lifetime goes here? ^
    
    |resume: &mut String| {
        resume.push_str("hello");
        let resume: &mut String = yield ();
        resume.push_str("world");
    }
}

This could be done using GATs if the Generator trait looked more like this:

trait Generator {
    type Resume<'a>;
    type Yield;
    type Return;
    
    fn resume<'a>(self: Pin<&mut Self>, arg: Self::Resume<'a>) -> GeneratorState<Self::Yield, Self::Return>;
}

fn test() -> impl Generator<Resume<'a> = &'a mut String> {

See #68923; the function signature there doesn't need GATs. If transient references actually worked, then it could be:

fn test() -> impl for<'a> Generator<&'a mut String>