rust-lang / rust

Empowering everyone to build reliable and efficient software.

Home Page: https://www.rust-lang.org

Tracking issue for async/await (RFC 2394)

withoutboats opened this issue

This is the tracking issue for RFC 2394 (rust-lang/rfcs#2394), which adds async and await syntax to the language.

I will be spearheading the implementation work of this RFC, but would appreciate mentorship as I have relatively little experience working in rustc.

TODO:

  • Implement
  • Stabilize #63209
  • Document

Unresolved questions:

The discussion here seems to have died down, so linking it here as part of the await syntax question: https://internals.rust-lang.org/t/explicit-future-construction-implicit-await/7344

Implementation is blocked on #50307.

About syntax: I'd really like to have await as a simple keyword. For example, let's look at a concern from the blog:

We aren't exactly certain what syntax we want for the await keyword. If something is a future of a Result - as any IO future is likely to be - you want to be able to await it and then apply the ? operator to it. But the order of precedence to enable this might seem surprising - await io_future? would await first and ? second, despite ? being lexically more tightly bound than await.

I agree here, but braces are evil. I think it's easier to remember that ? has lower precedence than await and be done with it:

let foo = await future?

It's easier to read, it's easier to refactor. I do believe it's the better approach.

let foo = await!(future)?

Allows one to better understand the order in which operations are executed, but IMO it's less readable.

I do believe that once you get that await foo? executes await first, you have no problems with it. ? is probably lexically more tightly bound, but await is on the left side and ? is on the right, so it's still logical enough to await first and handle the Result afterwards.


If any disagreements exist, please express them so we can discuss. I don't understand what a silent downvote stands for. We all wish the best for Rust.

I have mixed views on await being a keyword, @Pzixel. While it certainly has an aesthetic appeal, and is perhaps more consistent, given async is a keyword, "keyword bloat" in any language is a real concern. That said, does having async without await even make any sense, feature wise? If it does, perhaps we can leave it as is. If not, I'd lean towards making await a keyword.

I think it's easier to remember that ? has lower precedence than await and be done with it

It might be possible to learn that and internalise it, but there's a strong intuition that things that are touching are more tightly bound than things that are separated by whitespace, so I think it would always read wrong on first glance in practice.

It also doesn't help in all cases, e.g. a function that returns a Result<impl Future, _>:

let foo = await (foo()?)?;

The concern here is not simply "can you understand the precedence of a single await+?," but also "what does it look like to chain several awaits." So even if we just picked a precedence, we would still have the problem of await (await (await first()?).second()?).third()?.

A summary of the options for await syntax, some from the RFC and the rest from the RFC thread:

  • Require delimiters of some kind: await { future }? or await(future)? (this is noisy).
  • Simply pick a precedence, so that await future? or (await future)? does what is expected (both of these feel surprising).
  • Combine the two operators into something like await? future (this is unusual).
  • Make await postfix somehow, as in future await? or future.await? (this is unprecedented).
  • Use a new sigil like ? did, as in future@? (this is "line noise").
  • Use no syntax at all, making await implicit (this makes suspension points harder to see). For this to work, the act of constructing a future must also be made explicit. This is the subject of the internals thread I linked above.

That said, does having async without await even make any sense, feature wise?

@alexreg It does. Kotlin works this way, for example. This is the "implicit await" option.

@rpjohnst Interesting. Well, I'm generally for leaving async and await as explicit features of the language, since I think that's more in the spirit of Rust, but then I'm no expert on asynchronous programming...

@alexreg async/await is a really nice feature; I work with it on a day-to-day basis in C# (which is my primary language). @rpjohnst classified all the possibilities very well. I prefer the second option, and I agree with the considerations on the others (noisy/unusual/...). I have been working with async/await code for the last 5 years or so; it's really important to have such flag keywords.

@rpjohnst

So even if we just picked a precedence, we would still have the problem of await (await (await first()?).second()?).third()?.

In my practice you never write two awaits on one line. In the very rare cases where you need to, you simply rewrite it with then and don't use await at all. You can see for yourself that it's much harder to read than

let first = await first()?;
let second = await first.second()?;
let third = await second.third()?;

So I think it's OK if the language discourages writing code in such a manner in order to make the primary case simpler and better.

However, future await? looks interesting, although unfamiliar, and I don't see any logical counterarguments against it.

In my practice you never write two awaits on one line.

But is this because it's a bad idea regardless of the syntax, or just because the existing await syntax of C# makes it ugly? People made similar arguments around try!() (the precursor to ?).

The postfix and implicit versions are far less ugly:

first().await?.second().await?.third().await?
first()?.second()?.third()?

But is this because it's a bad idea regardless of the syntax, or just because the existing await syntax of C# makes it ugly?

I think it's a bad idea regardless of the syntax because having one line per async operation is already complex enough to understand and hard to debug. Having them chained in a single statement seems to be even worse.

For example, let's take a look at real code (I have taken one piece from my project):

[Fact]
public async Task Should_UpdateTrackableStatus()
{
	var web3 = TestHelper.GetWeb3();
	var factory = await SeasonFactory.DeployAsync(web3);
	var season = await factory.CreateSeasonAsync(DateTimeOffset.UtcNow, DateTimeOffset.UtcNow.AddDays(1));
	var request = await season.GetOrCreateRequestAsync("123");

	var trackableStatus = new StatusUpdate(DateTimeOffset.UtcNow, Request.TrackableStatuses.First(), "Trackable status");
	var nonTrackableStatus = new StatusUpdate(DateTimeOffset.UtcNow, 0, "Nontrackable status");

	await request.UpdateStatusAsync(trackableStatus);
	await request.UpdateStatusAsync(nonTrackableStatus);

	var statuses = await request.GetStatusesAsync();

	Assert.Single(statuses);
	Assert.Equal(trackableStatus, statuses.Single());
}

It shows that in practice it isn't worth chaining awaits even if the syntax allows it, because it would become completely unreadable: await just makes the one-liner even harder to write and read. But I do believe that's not the only reason why it's bad.

The postfix and implicit versions are far less ugly

The possibility to distinguish task start and task await is really important. For example, I often write code like this (again, a snippet from the project):

public async Task<StatusUpdate[]> GetStatusesAsync()
{
	int statusUpdatesCount = await Contract.GetFunction("getStatusUpdatesCount").CallAsync<int>();
	var getStatusUpdate = Contract.GetFunction("getStatusUpdate");
	var tasks = Enumerable.Range(0, statusUpdatesCount).Select(async i =>
	{
		var statusUpdate = await getStatusUpdate.CallDeserializingToObjectAsync<StatusUpdateStruct>(i);
		return new StatusUpdate(XDateTime.UtcOffsetFromTicks(statusUpdate.UpdateDate), statusUpdate.StatusCode, statusUpdate.Note);
	});

	return await Task.WhenAll(tasks);
}

Here we are creating N async requests and then awaiting them. We don't await on each loop iteration; first we create an array of async requests and then await them all at once.

I don't know Kotlin, so maybe they resolve this somehow. But I don't see how you can express this if "running" and "awaiting" the task are the same thing.


So I think the implicit version is a no-go even in much more implicit languages like C#.
In Rust, with rules that don't even allow you to implicitly convert u8 to i32, it would be much more confusing.
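
For comparison, here is how the same construct-then-await-all pattern reads with an explicit await. This is a minimal sketch in the syntax that later stabilized (postfix .await), using the futures crate's join_all; the service functions are hypothetical stand-ins for the C# contract calls above.

use futures::future::join_all;

// Hypothetical stand-ins for the C# contract calls above.
async fn get_status_updates_count() -> usize {
    3
}

async fn get_status_update(i: usize) -> String {
    format!("status {}", i)
}

async fn get_statuses() -> Vec<String> {
    // Explicitly await the count...
    let count = get_status_updates_count().await;
    // ...construct N futures without awaiting any of them...
    let tasks: Vec<_> = (0..count).map(get_status_update).collect();
    // ...and finally await them all at once.
    join_all(tasks).await
}

fn main() {
    let statuses = futures::executor::block_on(get_statuses());
    assert_eq!(statuses.len(), 3);
}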

@Pzixel Yeah, the second option sounds like one of the more preferable ones. I've used async/await in C# too, but not very much, since I haven't programmed principally in C# for some years now. As for precedence, await (future?) is more natural to me.

@rpjohnst I kind of like the idea of a postfix operator, but I'm also worried about readability and assumptions people will make – it could easily get confused for a member of a struct named await.

Possibility to distinguish task start and task await is really important.

For what it's worth, the implicit version does do this. It was discussed to death both in the RFC thread and in the internals thread, so I won't go into a lot of detail here, but the basic idea is only that it moves the explicitness from the task await to task construction- it doesn't introduce any new implicitness.

Your example would look something like this:

pub async fn get_statuses() -> Vec<StatusUpdate> {
    // get_status_updates is also an `async fn`, but calling it works just like any other call:
    let count = get_status_updates();

    let mut tasks = vec![];
    for i in 0..count {
        // Here is where task *construction* becomes explicit, as an async block:
        tasks.push(async {
            // Again, simply *calling* get_status_update looks just like a sync call:
            let status_update: StatusUpdateStruct = get_status_update(i).deserialize();
            StatusUpdate::new(utc_from_ticks(status_update.update_date), status_update.status_code, status_update.note)
        });
    }

    // And finally, launching the explicitly-constructed futures is also explicit, while awaiting the result is implicit:
    join_all(&tasks[..])
}

This is what I meant by "for this to work, the act of constructing a future must also be made explicit." It's very similar to working with threads in sync code- calling a function always waits for it to complete before resuming the caller, and there are separate tools for introducing concurrency. For example, closures and thread::spawn/join correspond to async blocks and join_all/select/etc.
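
To make the thread analogy concrete, here is a minimal sync-only sketch (plain std, no async at all, with do_work as a made-up unit of work): closures defer the body the way async blocks would, thread::spawn introduces the concurrency explicitly, and join plays the role of join_all.

use std::thread;

// A made-up unit of work standing in for a real computation.
fn do_work(i: u32) -> u32 {
    i * 2
}

fn main() {
    // Concurrency is introduced explicitly: each closure is the deferred body,
    // and `spawn` starts it on its own thread.
    let handles: Vec<_> = (0..4).map(|i| thread::spawn(move || do_work(i))).collect();

    // Waiting for the results is the explicit `join`, the counterpart of `join_all`.
    let results: Vec<u32> = handles.into_iter().map(|h| h.join().unwrap()).collect();
    assert_eq!(results, vec![0, 2, 4, 6]);
}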

For what it's worth, the implicit version does do this. It was discussed to death both in the RFC thread and in the internals thread, so I won't go into a lot of detail here, but the basic idea is only that it moves the explicitness from the task await to task construction- it doesn't introduce any new implicitness.

I believe it does. I can't see what the flow would be in this function, or where the points are at which execution suspends until the awaited future completes. I only see an async block which says "hello, somewhere in here there are async functions, try to find out which ones, you will be surprised!".

Another point: Rust tends to be a language where you can express everything, close to bare metal and so on. I'd like to provide some quite artificial code, but I think it illustrates the idea:

var a = await fooAsync(); // awaiting first task
var b = barAsync(); //running second task
var c = await bazAsync(); // awaiting third task
if (c.IsSomeCondition && b.Status != TaskStatus.RanToCompletion) // if some condition is true and b is still running
{
   var firstFinishedTask = await Task.WhenAny(b, Task.Delay(5000)); // waiting for 5 more seconds
   if (firstFinishedTask != b) // our task timed out
      throw new Exception(); // doing something
   // more logic here
}
else
{
   // more logic here
}

Rust always tends to provide full control over what's happening. await allows you to specify the points where continuation happens. It also allows you to unwrap the value inside a future. If you allow implicit awaiting on the use side, it has several implications:

  1. First of all, you have to write some dirty code just to emulate this behaviour.
  2. Now RLS and IDEs have to expect that our value is either Future<T> or the awaited T itself. It's not an issue with a keyword: if it exists, the result is T, otherwise it's Future<T>.
  3. It makes code harder to understand. In your example I don't see why it interrupts execution at the get_status_updates line but doesn't at get_status_update. They are quite similar to each other. So either it doesn't work the way the original code did, or it's so complicated that I can't see it even though I'm quite familiar with the subject. Neither alternative speaks in this option's favor.

I can't see what the flow would be in this function, or where the points are at which execution suspends until the awaited future completes.

Yes, this is what I meant by "this makes suspension points harder to see." If you read the linked internals thread, I made an argument for why this isn't that big of a problem. You don't have to write any new code, you just put the annotations in a different place (async blocks instead of awaited expressions). IDEs have no problem telling what the type is (it's always T for function calls and Future<Output=T> for async blocks).

I will also note that your understanding is probably wrong regardless of the syntax. Rust's async functions do not run any code at all until they are awaited in some way, so your b.Status != TaskStatus.RanToCompletion check will always pass. This was also discussed to death in the RFC thread, if you're interested in why it works this way.

In your example I don't see why it interrupts execution at the get_status_updates line but doesn't at get_status_update. They are quite similar to each other.

It does interrupt execution in both places. The key is that async blocks don't run until they are awaited, because this is true of all futures in Rust, as I described above. In my example, get_statuses calls (and thus awaits) get_status_updates, then in the loop it constructs (but does not await) count futures, then it calls (and thus awaits) join_all, at which point those futures concurrently call (and thus await) get_status_update.

The only difference with your example is when exactly the futures start running- in yours, it's during the loop; in mine, it's during join_all. But this is a fundamental part of how Rust futures work, not anything to do with the implicit syntax or even with async/await at all.

I will also note that your understanding is probably wrong regardless of the syntax. Rust's async functions do not run any code at all until they are awaited in some way, so your b.Status != TaskStatus.RanToCompletion check will always pass.

Yes, C# tasks are executed synchronously until the first suspension point. Thank you for pointing that out.
However, it doesn't really matter, because I should still be able to run some task in the background while executing the rest of the method and then check whether the background task is finished. E.g. it could be

var a = await fooAsync(); // awaiting first task
var b = Task.Run(() => barAsync()); //running background task somehow
// the rest of the method is the same

I've got your idea about async blocks, and as I see it they are the same beast, but with more disadvantages. In the original proposal each async task is paired with an await. With async blocks each task would be paired with an async block at the construction point, so we are in almost the same situation as before (a 1:1 relationship), but even a bit worse, because it feels more unnatural and is harder to understand, because call-site behavior becomes context-dependent. With await I can see let a = foo() or let b = await foo() and I know whether this task is just constructed or constructed and awaited. If I see let a = foo() with async blocks, I have to look whether there is some async above, if I get you right, because in this case

pub async fn get_statuses() -> Vec<StatusUpdate> {
    // get_status_updates is also an `async fn`, but calling it works just like any other call:
    let count = get_status_updates();

    let mut tasks = vec![];
    for i in 0..count {
        // Here is where task *construction* becomes explicit, as an async block:
        tasks.push(async {
            // Again, simply *calling* get_status_update looks just like a sync call:
            let status_update: StatusUpdateStruct = get_status_update(i).deserialize();
            StatusUpdate::new(utc_from_ticks(status_update.update_date), status_update.status_code, status_update.note)
        });
    }

    // And finally, launching the explicitly-constructed futures is also explicit, while awaiting the result is implicit:
    join_all(&tasks[..])
}

we await all the tasks at once, while here

pub async fn get_statuses() -> Vec<StatusUpdate> {
    // get_status_updates is also an `async fn`, but calling it works just like any other call:
    let count = get_status_updates();

    let mut tasks = vec![];
    for i in 0..count {
        // Isn't "just a construction" anymore
        tasks.push({
            let status_update: StatusUpdateStruct = get_status_update(i).deserialize();
            StatusUpdate::new(utc_from_ticks(status_update.update_date), status_update.status_code, status_update.note)
        });
    }
    tasks 
}

we execute them one by one.

Thus I can't say what the exact behavior of this part is:

let status_update: StatusUpdateStruct = get_status_update(i).deserialize();
StatusUpdate::new(utc_from_ticks(status_update.update_date), status_update.status_code, status_update.note)

Without having more context.

And things get weirder with nested blocks. Not to mention questions about tooling, etc.

call-site behavior becomes context-dependent

This is already true with normal sync code and closures. For example:

// Construct a closure, delaying `do_something_synchronous()`:
task.push(|| {
    let data = do_something_synchronous();
    StatusUpdate { data }
});

vs

// Execute a block, immediately running `do_something_synchronous()`:
task.push({
    let data = do_something_synchronous();
    StatusUpdate { data }
});

One other thing that you should note from the full implicit await proposal is that you can't call async fns from non-async contexts. This means that the function call syntax some_function(arg1, arg2, etc) always runs some_function's body to completion before the caller continues, regardless of whether some_function is async. So entry into an async context is always marked explicitly, and function call syntax is actually more consistent.

Regarding await syntax: What about a macro with method syntax? I can't find an actual RFC for allowing this, but I've found a few discussions (1, 2) on reddit so the idea is not unprecedented. This would allow await to work in postfix position without making it a keyword / introducing new syntax for only this feature.

// Postfix await-as-a-keyword. Looks as if we were accessing a Result<_, _> field,
// unless await is syntax-highlighted
first().await?.second().await?.third().await?
// Macro with method syntax. A few more symbols, but clearly a macro invocation that
// can affect control flow
first().await!()?.second().await!()?.third().await!()?

There is a library from the Scala-world which simplifies monad compositions: http://monadless.io

Maybe some ideas are interesting for Rust.

quote from the docs:

Most mainstream languages have support for asynchronous programming using the async/await idiom or are implementing it (e.g. F#, C#/VB, Javascript, Python, Swift). Although useful, async/await is usually tied to a particular monad that represents asynchronous computations (Task, Future, etc.).

This library implements a solution similar to async/await but generalized to any monad type. This generalization is a major factor considering that some codebases use other monads like Task in addition to Future for asynchronous computations.

Given a monad M, the generalization uses the concept of lifting regular values to a monad (T => M[T]) and unlifting values from a monad instance (M[T] => T). Example usage:

lift {
  val a = unlift(callServiceA())
  val b = unlift(callServiceB(a))
  val c = unlift(callServiceC(b))
  (a, c)
}

Note that lift corresponds to async and unlift to await.

This is already true with normal sync code and closures. For example:

I see several differences here:

  1. Lambda context is unavoidable, but it isn't for await. With await we don't have to have a context; with async blocks we do. The former wins, because it provides the same features while requiring you to know less about the code.
  2. Lambdas tend to be short, several lines at most, so we see the entire body at once, and they are simple. async functions may be quite big (as big as regular functions) and complicated.
  3. Lambdas are rarely nested (except for then calls, which is exactly what await is proposed to replace); async blocks would be nested frequently.

One other thing that you should note from the full implicit await proposal is that you can't call async fns from non-async contexts.

Hmm, I didn't notice that. It doesn't sound good, because in my practice you often want to run an async function from a non-async context. In C# async is just a keyword that allows the compiler to rewrite the function body; it doesn't affect the function interface in any way, so async Task<Foo> and Task<Foo> are completely interchangeable, which decouples implementation and API.

Sometimes you may want to block on an async task, e.g. when you want to call some network API from main. You have to block (otherwise you return to the OS and the program ends), but you have to run an async HTTP request. I'm not sure what the solution could be here, other than hacking main to allow it to be async as well (as we do with the Result main return type), if you cannot call async functions from a non-async main.
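
For what it's worth, blocking on a future from a plain, non-async main is expressible with an executor; a minimal sketch assuming the futures crate's block_on, with fetch_greeting as a hypothetical stand-in for a network call:

use futures::executor::block_on;

// A hypothetical stand-in for an async network call.
async fn fetch_greeting() -> String {
    String::from("hello")
}

fn main() {
    // A non-async main can still drive a future to completion by blocking on it.
    let greeting = block_on(fetch_greeting());
    println!("{}", greeting);
}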

Another consideration in favor of the current await is how it works in other popular languages (as noted by @fdietze). It makes it easier to migrate from other languages such as C#/TypeScript/JS/Python and thus is a better approach in terms of drumming up new people.

I see several differences here

You should also realize, then, that the main RFC already has async blocks, with the same semantics as the implicit version.

It doesn't sound good, because in my practice you often want to run an async function from a non-async context.

This is not an issue. You can still use async blocks in non-async contexts (which is fine because they just evaluate to a F: Future as always), and you can still spawn or block on futures using exactly the same API as before.

You just can't call async fns directly; instead you wrap the call in an async block- as you do regardless of the context you're in, if you want an F: Future out of it.

async is just a keyword that allows the compiler to rewrite the function body; it doesn't affect the function interface in any way

Yes, this is a legitimate difference between the proposals. It was also covered in the internals thread. Arguably, having different interfaces for the two is useful because it shows you that the async fn version will not run any code as part of construction, while the -> impl Future version may e.g. initiate a request before giving you a F: Future. It also makes async fns more consistent with normal fns, in that calling something declared as -> T will always give you a T, regardless of whether it's async.

(You should also note that in Rust there is still quite a leap between async fn and the Future-returning version, as described in the RFC. The async fn version does not mention Future anywhere in its signature; and the manual version requires impl Trait, which carries with it some problems to do with lifetimes. This is, in fact, part of the motivation for async fn to begin with.)
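
As a concrete illustration of the two interfaces, here is a sketch in the syntax that later stabilized; fetch_len is a hypothetical function, and the futures crate's block_on is assumed as the driver.

use std::future::Future;

// The `async fn` form never mentions `Future` in its signature...
async fn fetch_len(url: String) -> usize {
    url.len() // stand-in for real async work
}

// ...while the manual form spells it out with `impl Trait` (and inherits the
// lifetime questions the RFC mentions).
fn fetch_len_manual(url: String) -> impl Future<Output = usize> {
    async move { url.len() }
}

fn main() {
    let a = futures::executor::block_on(fetch_len(String::from("https://example.com")));
    let b = futures::executor::block_on(fetch_len_manual(String::from("https://example.com")));
    assert_eq!(a, b);
}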

It makes it easier to migrate from other languages such as C#/TypeScript/JS/Python

This is an advantage only for the literal await future syntax, which is fairly problematic on its own in Rust. Anything else we might end up with also has a mismatch with those languages, while implicit await at least has a) similarities with Kotlin and b) similarities with synchronous, thread-based code.

Yes, this is a legitimate difference between the proposals. It was also covered in the internals thread. Arguably, having different interfaces for the two is useful

I'd say having different interfaces for the two has some disadvantages, because having the API depend on an implementation detail doesn't sound good to me. For example, say you are writing a contract that simply delegates a call to an internal future:

fn foo(&self) -> Future<T> {
   self.myService.foo()
}

And then you just want to add some logging

async fn foo(&self) -> T {
   let result = await self.myService.foo();
   self.logger.log("foo executed with result {}.", result);
   result
}

And it becomes a breaking change. Whoa?

This is an advantage only for the literal await future syntax, which is fairly problematic on its own in Rust. Anything else we might end up with also has a mismatch with those languages, while implicit await at least has a) similarities with Kotlin and b) similarities with synchronous, thread-based code.

It's an advantage for any await syntax (await foo / foo await / foo@ / foo.await / ...): once you get that it's the same thing, the only difference is whether you place it before or after, or have a sigil instead of a keyword.

You should also note that in Rust there is still quite a leap between async fn and the Future-returning version, as described in the RFC

I know it and it disquiets me a lot.

And it becomes a breaking change.

You can get around that by returning an async block. Under the implicit await proposal, your example looks like this:

fn foo(&self) -> impl Future<Output = T> { // Note: you never could return `Future<T>`...
    async { self.my_service.foo() } // ...and under the proposal you couldn't call `foo` outside of `async` either.
}

And with logging:

fn foo(&self) -> impl Future<Output = T> {
    async {
        let result = self.my_service.foo();
        self.logger.log("foo executed with result {}.", result);
        result
    }
}

The bigger issue with having this distinction arises during the transition of the ecosystem from manual future implementations and combinators (the only way today) to async/await. But even then the proposal allows you to keep the old interface around and provide a new async one alongside it. C# is full of that pattern, for example.

Well, that sounds reasonable.

However, I do believe such implicitness (we can't see whether foo() here is an async or a sync function) leads to the same problems that arose in protocols such as COM+ and were a reason WCF was implemented the way it was. People had problems when async remote requests looked like simple method calls.

This code looks perfectly fine, except that I can't see whether some request is async or sync. I believe that's important information. For example:

fn foo(&self) -> impl Future<Output = T> {
    async {
        let result = self.my_service.foo();
        self.logger.log("foo executed with result {}.", result);
        let mut bars: Vec<Bar> = Vec::new();
        for i in 0..100 {
           bars.push(self.my_other_service.bar(i, result));
        }
        result
    }
}

It's crucial to know whether bar is a sync or an async function. I often see an await in a loop as a marker that the code has to be changed to achieve better throughput and performance. This is code I reviewed yesterday (the code is suboptimal, but it's one of the review iterations):

[screenshot of C# code with an await inside a loop]

As you can see, I easily spotted that we have a looping await here and asked for it to be changed. When the change was committed we got a 3x page-load speedup. Without await I could easily have overlooked this misbehaviour.

I admit I haven't used Kotlin, but last time I looked at that language, it seemed to be mostly a variant of Java with less syntax, up to the point where it was easy to mechanically translate one to the other. I can also imagine why it would be liked in the world of Java (which tends to be a little syntax-heavy), and I'm aware it recently got a boost in popularity specifically due to not being Java (the Oracle vs. Google situation).

However, if we decide to take popularity and familiarity into account, we might want to take a look at what JavaScript does, which is also explicit await.

That said, await was introduced to mainstream languages by C#, which is maybe the one language where usability was considered to be of utmost importance. In C#, asynchronous calls are indicated not only by the await keyword, but also by the Async suffix of the method calls. The other language feature that shares the most with await, yield return, is also prominently visible in code.

Why is that? My take on it is that generators and asynchronous calls are too powerful constructs to let them pass unnoticed in code. There's a hierarchy of control flow operators:

  • sequential execution of statements (implicit)
  • function/method calls (quite apparent, compare with e.g. Pascal where there's no difference at the call site between a nullary function and a variable)
  • goto (all right, it's not a strict hierarchy)
  • generators (yield return tends to stand out)
  • await + Async suffix

Notice how they also go from less to more verbose, according to their expressiveness or power.

Of course, other languages took different approaches. Scheme continuations (like in call/cc, which isn't too different from await) or macros have no syntax to show what you are calling. For macros, Rust took the approach of making it easy to see them.

So I would argue that having less syntax isn't desirable in itself (there are languages like APL or Perl for that), and that syntax doesn't have to be just boilerplate, and has an important role in readability.

There's also a parallel argument (sorry, I can't remember the source, but it might have come from someone on the language team) that people are more comfortable with noisy syntax for new features when they are new, but are then fine with a less verbose one once they end up being commonly used.


As for the question of await!(foo)? vs. await foo?, I'm in the former camp. You can internalise pretty much any syntax, however we are too used to taking cues from spacing and proximity. With await foo? there's a large chance one will second-guess themselves on the precedence of the two operators, while the braces make it clear what's happening. Saving three characters isn't worth it. And as for the practice of chaining await!s, while it might be a popular idiom in some languages, I feel it has too many downsides like poor readability and interaction with debuggers to be worth optimizing for.

Saving three characters isn't worth it.

In my anecdotal experience, extra characters (e.g. longer names) aren't much of a problem, but extra tokens can be really annoying. In terms of a CPU analogy, a long name is straightline code with good locality - I can just type it out from muscle memory - while the same number of characters when it involves multiple tokens (e.g. punctuation) is branchy and full of cache misses.

(I fully agree that await foo? would be highly non-obvious and we should avoid it, and that having to type more tokens would be far preferable; my observation is only that not all characters are created equal.)


@rpjohnst I think your alternative proposal might have slightly better reception if it were presented as "explicit async" rather than "implicit await" :-)

It's crucial to know whether bar is a sync or an async function.

I'm not sure this is really any different from knowing whether some function is cheap or expensive, or whether it does IO or not, or whether it touches some global state or not. (This also applies to @lnicola's hierarchy- if async calls run to completion just like sync calls, then they're really no different in terms of power!)

For example, the fact that the call was in a loop is just as, if not more, important than the fact that it was async. And in Rust, where parallelization is so much easier to get right, you could just as well go around suggesting that expensive-looking synchronous loops be switched to Rayon iterators!

So I don't think requiring await is actually all that important for catching these optimizations. Loops are already always good places to look for optimization, and async fns are already a good indicator that you can get some cheap IO concurrency. If you find yourself missing those opportunities, you could even write a Clippy lint for "async call in a loop" that you run occasionally. It would be great to have a similar lint for synchronous code as well!

The motivation for "explicit async" is not simply "less syntax," as @lnicola implies. It's to make the behavior of function call syntax more consistent, so that foo() always runs foo's body to completion. Under this proposal, leaving out an annotation just gives you less-concurrent code, which is how virtually all code already behaves. Under "explicit await," leaving out an annotation introduces accidental concurrency, or at least accidental interleaving, which is problematic.

I think your alternative proposal might have slightly better reception if it were presented as "explicit async" rather than "implicit await" :-)

The thread is named "explicit future construction, implicit await," but it seems the latter name has stuck. :P

I'm not sure this is really any different from knowing whether some function is cheap or expensive, or whether it does IO or not, or whether it touches some global state or not. (This also applies to @lnicola's hierarchy- if async calls run to completion just like sync calls, then they're really no different in terms of power!)

I think this is as important as knowing that a function changes some state, and we already have the mut keyword on both the caller and the callee side.

The motivation for "explicit async" is not simply "less syntax," as @lnicola implies. It's to make the behavior of function call syntax more consistent, so that foo() always runs foo's body to completion.

On one side that's a good consideration. On the other, you can easily separate future creation from future execution. I mean, if foo returns some abstraction on which you can later call run and get a result, that doesn't make foo useless trash that does nothing; it does a very useful thing: it constructs an object whose methods you can call later. The foo method we call is just a black box: we see its signature returning Future<Output = T>, and it actually returns a future. So we explicitly await it when we want to.

The thread is named "explicit future construction, implicit await," but it seems the latter name has stuck. :P

I personally think that the better alternative is "explicit async explicit await" :)


P.S.

A thought also struck me tonight: did you try to communicate with the C# LDM? For example, people like @HaloFour , @gafter or @CyrusNajmabadi . It may be a really good idea to ask them why they chose the syntax they did. I'd propose asking people from other languages as well, but I simply don't know them :) I'm sure they had multiple debates about the existing syntax, have already discussed it a lot, and may have some useful ideas.

It doesn't mean Rust has to have this syntax because C# does, but it just allows us to make a more informed decision.

I personally think that the better alternative is "explicit async explicit await" :)

The main proposal isn't "explicit async," though- that's why I picked the name. It's "implicit async," because you can't tell at a glance where asynchrony is being introduced. Any unannotated function call might be constructing a future without awaiting it, even though Future appears nowhere in its signature.

For what it's worth, the internals thread does include an "explicit async explicit await" alternative, because that's future-compatible with either main alternative. (See the final section of the first post.)

did you try to communicate with the C# LDM?

The author of the main RFC did. The main point that came out of it, as far as I remember, was the decision not to include Future in the signature of async fns. In C#, you can replace Task with other types to have some control over how the function is driven. But in Rust, we don't (and won't) have any such mechanism- all futures will go through a single trait, so there's no need to write that trait out every time.

We also communicated with the Dart language designers, and that was a large part of my motivation for writing up the "explicit async" proposal. Dart 1 had a problem because functions didn't run to their first await when called (not quite the same as how Rust works, but similar), and that caused such massive confusion that in Dart 2 they changed it so functions do run to their first await when called. Rust can't do that for other reasons, but it could run the entire function when called, which would also avoid that confusion.

We also communicated with the Dart language designers, and that was a large part of my motivation for writing up the "explicit async" proposal. Dart 1 had a problem because functions didn't run to their first await when called (not quite the same as how Rust works, but similar), and that caused such massive confusion that in Dart 2 they changed it so functions do run to their first await when called. Rust can't do that for other reasons, but it could run the entire function when called, which would also avoid that confusion.

Great experience, I wasn't aware of it. Nice to hear you've done such massive work. Well done 👍

A thought also struck me tonight: did you try to communicate with the C# LDM? For example, people like @HaloFour , @gafter or @CyrusNajmabadi . It may be a really good idea to ask them why they chose the syntax they did.

I'm happy to provide any info you're interested in. However, i've only skimmed through this thread. Would it be possible to condense down any specific questions you currently have?

Regarding await syntax (this might be completely stupid, feel free to shout at me; I am an async programming noob and I have no idea what I am talking about):

Instead of using the word "await", can we not introduce a symbol/operator, similar to ?. For example, it could be # or @ or something else that is currently unused.

For example, if it were a postfix operator:

let stuff = func()#?;
let chain = blah1()?.blah2()#.blah3()#?;

It is very concise and reads naturally from left to right: await first (#), then handle errors (?). It doesn't have the problem that the postfix await keyword has, where .await looks like a struct member. # is clearly an operator.

I am not sure if postfix is the right place for it to be, but it felt that way because of precedence. As prefix:

let stuff = #func()?;

Or heck even:

let stuff = func#()?; // :-D :-D

Has this ever been discussed?

(I realise this kinda starts to approach the "random keyboard mash of symbols" syntax that Perl is infamous for ... :-D )

@RayVector #50547 (comment), 5th alternative.

@CyrusNajmabadi thank you for coming. The main question is which of the listed options you think fits the current Rust language best, or whether there is some other alternative. This topic isn't really long, so you can easily scroll through it quickly. The main question: should Rust follow the current C#/TS/... await way, or should it implement its own? Is the current syntax some kind of "legacy" that you would like to change in some way, or does it fit C# best and is it also the best option for newcoming languages?

The main consideration against the C# syntax is operator precedence: await foo? should await first and then apply the ? operator. There is also the difference that, unlike C#, execution doesn't run in the caller's thread until the first await - it doesn't start at all, the same way the following snippet doesn't run the negativity check until GetEnumerator is called for the first time:

IEnumerable<int> GetInts(int n)
{
   if (n < 0)
      throw new ArgumentOutOfRangeException(nameof(n));
   for (int i = 0; i <= n; i++)
      yield return i;
}

More details are in my first comment and the later discussion.

@Pzixel Oh, I guess I missed that one when I was skimming through this thread earlier ...

In any case, I haven't seen much discussion about this, other than that brief mention.

Are there any good arguments for/against?

@RayVector I argued a little here in favour of more verbose syntax. One of the reasons is the one that you mention:

the "random keyboard mash of symbols" syntax that Perl is infamous for

To clarify, I don't think await!(f)? is really in the running for the final syntax; it was chosen specifically because it's a solid way of not committing to any particular choice. Here are the syntaxes (including the ? operator) that I think are still "in the running":

  • await f?
  • await? f
  • await { f }?
  • await(f)?
  • (await f)?
  • f.await?

Or possibly some combination of these. The point is that several of them do contain braces to be clearer about precedence & there are a lot of options here - but the intention is that await will be a keyword operator, not a macro, in the final version (barring some major change like what rpjohnst has proposed).

I vote for either a simple postfix await operator (e.g. ~) or the keyword with no parens and highest precedence.

I've been reading through this thread, and I would like to propose the following:

  • await f? evaluates the ? operator first, and then awaits the resultant future.
  • (await f)? awaits the future first, and then evaluates the ? operator against the result (due to ordinary Rust operator precedence)
  • await? f is available as syntactic sugar for (await f)?. I believe "future returning a result" will be a super common case, so a dedicated syntax makes a lot of sense.

I agree with other commenters that await should be explicit. It's pretty painless doing this in JavaScript, and I really appreciate the explicitness and readability of Rust code, and I feel like making async implicit would ruin this for async code.

It occurred to me that "implicit async block" ought to be implementable as a proc_macro, which simply inserts an await keyword before any future.

The main question is which of the listed options you think fits the current Rust language best,

Asking a C# designer what best fits the rust language is... interesting :)

I don't feel qualified to make such a determination. I like rust and dabble with it. But it's not a language i'm using day in and day out. Nor have I deeply ingrained it in my psyche. As such, i don't think i'm qualified to make any claims about what the appropriate choices for this language are. Want to ask me about Go/TypeScript/C#/VB/C++? Sure, i'd feel much more comfortable. But rust is too far out of my realm of expertise to feel comfortable with any such thoughts.

The main consideration against the C# syntax is operator precedence: await foo?

This is something i do feel like i can comment on. We thought about precedence a lot with 'await' and we tried out many forms before settling on the form we wanted. One of the core things we found was that for us, and the customers (internal and external) that wanted to use this feature, it was rarely the case that people really wanted to 'chain' anything past their async call. In other words, people seemed to strongly gravitate toward 'await' being the most important part of any full-expression, and thus having it be near the top. Note: by 'full expression' i mean things like the expression you get at the top of an expression-statement, or the expression on the right of a top-level assignment, or the expression you pass as an 'argument' to something.

The tendency for people to want to 'continue on' with the 'await' inside an expr was rare. We do occasionally see things like (await expr).M(), but those seem less common and less desirable than the amount of people doing await expr.M().

This is also why we didn't go with any 'implicit' form for 'await'. In practice it was something people wanted to think very clearly about, and which they wanted front-and-center in their code so they could pay attention to it. Interestingly enough, even years later, this tendency has remained. i.e. sometimes we regret many years later that something is excessively verbose. Some features are good in that way early on, but once people are comfortable with it, are better suited with something terser. That has not been the case with 'await'. People still seem to really like the heavy-weight nature of that keyword and the precedence we picked.

So far, we've been very happy with the precedence choice for our audience. We might, in the future, make some changes here. But overall there is no strong pressure to do so.

--

There is also the difference that, unlike C#, execution doesn't run in the caller's thread until the first await - it doesn't start at all, the same way the following snippet doesn't run the negativity check until GetEnumerator is called for the first time:

IMO, the way we did enumerators was somewhat of a mistake and has led to a bunch of confusion over the years. It's been especially bad because of the propensity for a lot of code to have to be written like this:

IEnumerable<int> SomeEnumerator(X args)
{
    // Validate Args, do synchronous work.
    return SomeEnumeratorImpl(args);
}

IEnumerable<int> SomeEnumeratorImpl(X args)
{
   // ...
   yield
   // ...
}

People have to write this all the time because of the unexpected behavior that the iterator pattern has. I think we were worried about expensive work happening initially. However, in practice, that doesn't seem to happen, and people definitely think about the work as happening when the call happens, and the yields themselves as happening when you actually start streaming the elements.

Linq (which is the poster child for this feature) needs to do this everywhere, thus greatly diminishing the value of this choice.

For await i think things are much better. We use 'async/await' a ton ourselves, and i don't think i've ever once said "man... i wish that it wasn't running the code synchronously up to the first 'await'". It simply makes sense given what the feature is. The feature is literally "run the code up to await points, then 'yield', then resume once the work you're yielding on completes". To me it would be super weird not to have these semantics, since it is precisely the 'awaits' that are dictating flow, so why would anything be different prior to hitting the first await?

Also... how do things then work if you have something like this:

async Task FooAsync()
{
    if (cond)
    {
        // only await in method
        await ...
    }
} 

You can totally call this method and never hit an await. If "execution doesn't run in the caller's thread until the first await", what actually happens here?

await? f is available as syntactic sugar for (await f)?. I believe "future returning a result" will be a super common case, so a dedicated syntax makes a lot of sense.

This resonates the most with me. It allows 'await' to be the topmost concept, but also allows simple handling of Result types.

One thing we know from C# is that people's intuition around precedence is tied to whitespace. So if you have "await x?" then it immediately feels like await has less precedence than ? because the ? abuts the expression. If the above actually parsed as (await x)? that would be surprising to our audience.

Parsing it as await (x?) would feel the most natural just from the syntax, and would fit the need of getting a 'Result' of a future/task back and wanting to 'await' that if you actually received a value. If that then returned a Result itself, it feels appropriate to have that combined with the 'await' to signal that it happens afterwards. So in await? x?, each ? binds tightly to the portion of the code it most naturally relates to. The first ? relates to the await (and specifically the result of it), and the second relates to the x.

if "execution doesn't run in caller thread until first await" what actually happens here?

Nothing happens until the caller awaits the return value of FooAsync, at which point FooAsync's body runs until either an await or it returns.

It works this way because Rust Futures are poll-driven, stack-allocated, and immovable after the first call to poll. The caller must have a chance to move them into place--on the heap for top-level Futures, or else by-value inside a parent Future, often on the "stack frame" of a calling async fn--before any code is executed.

This means we're stuck with either a) C# generator-like semantics, where no code runs at invocation, or b) Kotlin coroutine-like semantics, where calling the function also immediately and implicitly awaits it (with closure-like async { .. } blocks for when you do need concurrent execution).

I kind of favor the latter, because it avoids the problem you mention with C# generators, and also avoids the operator precedence question entirely.

@CyrusNajmabadi In Rust, Future usually does no work until it is spawned as a Task (it's much more similar to F# Async):

let bar = foo();

In this case foo() returns a Future, but it probably doesn't actually do anything. You have to manually spawn it (which is also similar to F# Async):

tokio::run(bar);

When it is spawned, it will then run the Future. Since this is the default behavior of Future, it would be more consistent for async/await in Rust to not run any code until it is spawned.

Obviously the situation is different in C#, because in C# when you call foo() it immediately starts running the Task, so it makes sense in C# to run code until the first await.

Also... how do things then work if you have something like this [...] You can totally call this method and never hit an await. If "execution doesn't run in the caller's thread until the first await", what actually happens here?

If you call FooAsync() then it does nothing, no code is run. Then when you spawn it, it will run the code synchronously, the await will never run, and so it immediately returns () (which is Rust's version of void)

In other words, it's not "execution doesn't run in the caller's thread until the first await", it's "execution doesn't run until it is explicitly spawned (such as with tokio::run)".
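
A minimal sketch of that laziness, in the syntax that later stabilized and with the futures crate's block_on standing in for tokio::run (foo_async is hypothetical):

use futures::executor::block_on;

async fn foo_async() -> u32 {
    println!("foo_async body is running");
    42
}

fn main() {
    // Calling the async fn only constructs a future; nothing is printed yet.
    let fut = foo_async();
    // The body runs only once an executor actually drives the future.
    let n = block_on(fut);
    assert_eq!(n, 42);
}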

Nothing happens until the caller awaits the return value of FooAsync, at which point FooAsync's body runs until either an await or it returns.

Ick. That seems unfortunate. There are many times i may not ever get around to awaiting something (often due to cancellation and composition with tasks). As a dev i'd still appreciate getting early errors for those (which is one of the most common reasons people want execution to run up to the await).

This means we're stuck with either a) C# generator-like semantics, where no code runs at invocation, or b) Kotlin coroutine-like semantics, where calling the function also immediately and implicitly awaits it (with closure-like async { .. } blocks for when you do need concurrent execution).

Given these, i'd far prefer the former to the latter. Just my personal pref though. If the kotlin approach feels more natural for your domain, then go for that!

@CyrusNajmabadi Ick. That seems unfortunate. There are many times i may not ever get around to awaiting something (often due to cancellation and composition with tasks). As a dev i'd still appreciate getting early errors for those (which is one of the most common reasons people want execution to run up to the await).

I feel the exact opposite. In my experience with JavaScript it is very common to forget to use await. In that case the Promise will still run, but the errors will be swallowed (or other weird stuff happens).

With the Rust/Haskell/F# style, either the Future runs (with correct error handling), or it doesn't run at all. Then you notice that it isn't running, so you investigate and fix it. I believe this results in more robust code.

@Pauan @rpjohnst Thanks for the explanations. Those were approaches we considered as well. But it turned out to not actually be that desirable in practice.

In the cases where you didn't want it to "actually do anything. You have to manually spawn it", we found it cleaner to model that as returning something that generates tasks on demand, i.e. something as simple as Func<Task>.

I feel the exact opposite. In my experience with JavaScript it is very common to forget to use await.

C# does work to try to ensure that you either awaited, or otherwise used the task sensibly.

but the errors will be swallowed

That's the opposite of what i'm saying. I'm saying i want the code to execute eagerly so that errors are things i hit immediately, even in the event that i don't ever end up getting around to executing the code in the task. This is the same with iterators. I'd much rather know i was creating it incorrectly at the point in time when i call the function versus potentially much further down the line if/when the iterator is streamed.

Then you notice that it isn't running, so you investigate and fix it.

In the scenarios i'm talking about, "not running" is completely reasonable. After all, my application may decide at any point that it doesn't need to actually run the task. That's not the bug that i'm describing. The bug i'm describing is that i didn't pass validation, and i want to find out about that as close to the point where i logically created the work as opposed to the point when the work actually needs to run. Given that these are models to describe async processing, it's often going to be the case that these are far away from each other. So having the information about issues happen as early as possible is valuable.

As mentioned, this is not hypothetical either. A similar thing happens with streams/iterators. People often create them, but then don't realize them until later. It's been an extra burden for people to have to track these things back to their source. This is why so many APIs (including the BCL) now have to do the split between the synchronous/early work and the actual deferred/lazy work.

That's the opposite of what i'm saying. I'm saying i want the code to execute eagerly so that errors are things i hit immediately, even in the event that i don't ever end up getting around to executing the code in the task.

I can understand the desire for early errors, but I'm confused: under what situation would you ever "end up not getting around to spawning the Future"?

The way that Futures work in Rust is that you compose Futures together in various ways (including async/await, including parallel combinators, etc.), and by doing this it builds up a single fused Future which contains all the sub-Futures. And then at the top-level of your program (main) you then use tokio::run (or similar) to spawn it.

Aside from that single tokio::run call in main, you usually won't be spawning Futures manually, instead you just compose them. And the composition naturally handles spawning/error handling/cancellation/etc. correctly.
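
A small sketch of that composition style, with hypothetical step_one/step_two and the futures crate's block_on standing in for tokio::run: errors short-circuit through ?, and dropping the composed future before running it simply cancels everything inside it.

use futures::executor::block_on;

// Hypothetical async steps that can fail.
async fn step_one() -> Result<u32, String> {
    Ok(1)
}

async fn step_two(x: u32) -> Result<u32, String> {
    Ok(x + 1)
}

// Composition builds one fused future out of the sub-futures.
async fn pipeline() -> Result<u32, String> {
    let a = step_one().await?;
    step_two(a).await
}

fn main() {
    // A single top-level run drives the whole composition.
    assert_eq!(block_on(pipeline()), Ok(2));
}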

i also want to make something clear. When i say something like:

But it turned out to not actually be that desirable in practice.

I'm talking very specifically about things with our language/platform. I can only give insight into the decisions that made sense for C#/.Net/CoreFx etc. It may be completely the case that your situation is different and what you want to optimize for and the types of approaches you should take go in an entirely different direction.

I can understand the desire for early errors, but I'm confused: under what situation would you ever "end up not getting around to spawning the Future"?

All the time :)

Consider how Roslyn (the C#/VB compiler/IDE codebase) is itself written. It is heavily async and interactive. i.e. the primary use case for it is to be used in a shared fashion with many clients accessing it. Client services are commonly interacting with the user over a wealth of features, many of which may decide that they no longer need to do work they originally thought was important, due to the user doing any number of actions. For example, as the user is typing, we're doing tons of task compositions and manipulations, and we may end up deciding to not even get around to executing them because another event came in a few ms later.

For example, as the user is typing, we're doing tons of task compositions and manipulations, and we may end up deciding to not even get around to executing them because another event came in a few ms later.

Isn't that just handled by cancellation, though?

And the composition naturally handles spawning/error handling/cancellation/etc. correctly.

It simply sounds like we have two very different models to represent things. That's fine :) My explanations are meant to be taken in the context of the model we choose. They may not make sense for the model you are choosing.


It simply sounds like we have two very different models to represent things. That's fine :) My explanations are meant to be taken in the context of the model we choose. They may not make sense for the model you are choosing.

Absolutely, I'm just trying to understand your perspective, and also explaining our perspective. Thank you for taking the time to explain things.

Isn't that just handled by cancellation, though?

Cancellation is an orthogonal concept to asynchrony (for us). They're commonly used together. But neither necessitates the other.

You could have a system entirely without cancellation, and it may simply be the case that you just never get around to running the code that 'awaits' the tasks that you've composed. I.e. for logical reasons your code may just go "I don't need to await 't', I'm just going to do something else". Nothing about tasks (in our world) dictates or necessitates that that task be awaited. In such a system, I would want to get early validation.

Note: this is similar to the iterator problem. You may call something to get results you intend to use later on in your code. However, for any number of reasons, you may not end up actually having to use the results. My personal desire would still be to get the validation results early, even if I technically could have not gotten them and had my program succeed.

I think there are reasonable arguments for both directions. But my take is that the synchronous approach has had more pros than cons. Of course, if the synchronous approach literally does not fit due to how your actual impl wants to work, then that seems to answer the question of what you need to do :D

In other words, I don't think your approach is bad here. And if it has strong benefits around the model you think is right for Rust, then def go for it :)


You could have a system entirely without cancellation, and it may simply be the case that you just never get around to running the code that 'awaits' the tasks that you've composed. I.e. for logical reasons your code may just go "I don't need to await 't', I'm just going to do something else".

Personally, I think that's best handled by the usual if/then/else logic:

async fn foo() {
    if some_condition {
        await!(bar());
    }
}

But as you say, it's just a very different perspective from C#.

Personally, I think that's best handled by the usual if/then/else logic:

Yes, that would be fine if the checking of the condition could be done at the same point the task is created (and tons of cases are like this). But in our world it's commonly not the case that things are so well connected. After all, we want to eagerly do async work in response to users (so that the results are ready when needed), but we may later on decide we don't care anymore.

In our domains the 'await' happens at the point the person "needs the value", which is a different determination/component/etc. from the decision about "should I start working on the value?"

In a sense, these are very decoupled, and that's viewed as a virtue. The producer and consumer can have entirely different policies, but can communicate effectively about the async work being done through the nice abstraction of the 'Task'.

Anyways, I'll back out of the sync/async opinion. Clearly there are very different models at play here. :)

In terms of precedence, I've given some information on how C# thinks about things. I hope it is helpful. Let me know if you want any more information there.


@CyrusNajmabadi Yes, your insights were quite helpful. Personally I agree with you that await? foo is the way to go (though I also like the "explicit async" proposal as well).

BTW, if you want one of the best expert opinions on all the intricacies of the .Net model around modeling async/sync work, and all the pros/cons of that system, then @stephentoub would be the person to talk to. He would be about 100x better than me at explaining things, clarifying the pros/cons, and likely being able to dive deep into the models on both sides. He's intimately familiar with .Net's approach here (including the choices made and the choices rejected) and how it has had to evolve since the beginning. He's also painfully aware of the perf costs of the approaches .Net has taken (which is one of the reasons ValueTask now exists), which I imagine would be something you guys are thinking about first and foremost, given your desire for zero/low-cost abstractions.

From my recollection, similar thoughts about these splits were put into .Net's approach in the early days, and I think he could speak very well to the ultimate decisions that were made and how appropriate they've been.

I'd still vote in favor of await? future even if it looks a bit unfamiliar. Are there any real downsides in composing those?

Here's another thorough analysis of the pros and cons of cold (F#) vs hot (C#,JS) asyncs: http://tomasp.net/blog/async-csharp-differences.aspx

There now is a new RFC for postfix macros that would allow experimentation with postfix await without a dedicated syntax change: rust-lang/rfcs#2442

await {} is my favorite one here; it's reminiscent of unsafe {}, plus it shows precedence.

let value = await { future }?;

@seunlanlege
Yes, it's reminiscent, so people may get the false impression that they can write code like this:

let value = await {
   let val1 = future1;
   future2(val1)
}

But they can't.

@Pzixel
If I understand you correctly, you're assuming people would expect futures to be implicitly awaited inside an await {} block? I disagree with that. await {} would only await the expression the block evaluates to.

let value = await {
    let future = create_future();
    future
};

And it should be a pattern that is discouraged

simplified

let value = await { create_future() };

You're proposing a construct where having more than one expression "should be discouraged". Don't you see anything wrong with that?

Would it be favorable to make await a pattern (alongside ref etc.)?
Something like:

let await n = bar();

I'd rather call that an async pattern than an await one, although I don't see much advantage in making it a pattern syntax. Pattern syntaxes generally work dually with respect to their expression counterparts.

According to the current page at https://doc.rust-lang.org/nightly/std/task/index.html, the task mod consists of both reexports from libcore and reexports from liballoc, which makes the result a little ... suboptimal. Hope this is addressed somehow before it stabilizes.

I took a look at the code. And I have a few suggestions:

  • The UnsafePoll trait and the Poll enum have very similar names, but they are not related. I suggest renaming UnsafePoll, e.g. to UnsafeTask.
  • In the futures crate the code was split up into different submodules. Now, most code is bunched together in task.rs, which makes it harder to navigate. I suggest splitting it up again.
  • TaskObj#from_poll_task() has an odd name. I suggest naming it new() instead.
  • TaskObj#poll_task could just be poll(). The field called poll could be called poll_fn, which would also suggest that it's a function pointer.
  • Waker might be able to use the same strategy as TaskObj and put the vtable on the stack. Just an idea; I don't know whether we want this. Would it be faster because there's a little less indirection?
  • dyn is now stable in beta. The code should probably use dyn where it applies.

I can provide a PR for this stuff as well. @cramertj @aturon feel free to reach out to me via Discord to discuss the details.

How about just adding an await() method for all Futures?

    /// just like the and_then method
    let x = f.and_then(....);
    let x = f.await();

    await f?     =>   f()?.await()
    await? f     =>   f().await()?

/// with chained invocation
let x = first().await().second().await()?.third().await()?
let x = first().await()?.second().await()?.third().await()?
let x = first()?.await()?.second().await()?.third().await()?

@zengsai The problem is that await doesn't work like a regular method. Consider what an await method would do when not in an async block/function. Methods don't know in what context they are executed, so it couldn't cause a compilation error.

@xfix this is not true in general. The compiler can do whatever it wants and could handle the method call specially in this case. The method-style call solves the precedence issue, but it is unexpected (await does not work this way in other languages) and would probably be an ugly hack in the compiler.

@elszben That the compiler can do whatever it wants doesn't mean it should do whatever it wants.

future.await() sounds like a regular function call, while it is not. If you want to go this way, the future.await!() syntax proposed somewhere above would allow the same semantics, and clearly mark with a macro “Something weird is going on here, I know.”

Edit: Post removed

I moved this post into the futures RFC. Link

Has anyone looked at the interaction between async fn and #[must_use]?

If you have an async fn, calling it directly runs no code and returns a Future; it seems like all async fn should have an inherent #[must_use] on the "outer" impl Future type, so you can't call them without doing something with the Future.

On top of that, if you attach a #[must_use] to the async fn yourself, it seems like that should apply to the inner function's return. So, if you write #[must_use] async fn foo() -> T { ... }, then you can't write await!(foo()) without doing something with the result of the await.
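
A small sketch of the two layers being discussed, using the await! macro form from this thread; fetch_status and StatusCode are made-up names, and whether the lint would actually propagate to the awaited value like this is exactly the open question:

// Hypothetical example, for illustration only.
#[must_use = "this returns a status that should be checked"]
async fn fetch_status() -> StatusCode { /* ... */ }

fn outer_layer() {
    fetch_status(); // layer 1: should warn, since the returned Future is never used
}

async fn inner_layer() {
    // layer 2: ideally this would also warn, since the StatusCode
    // produced by the await is discarded
    await!(fetch_status());
}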

Has anyone looked at the interaction between async fn and #[must_use]?

For others interested in this discussion, see #51560.

I was thinking about how asynchronous functions are implemented and realized that these functions don't support recursion, or mutual recursion either.
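
The underlying reason is that the compiler-generated state machine for an async fn would have to contain itself in the recursive case, so it would have no finite size. A rough sketch of the problem and the usual boxing workaround (glossing over the exact pinned-box constructor names on the nightly of the time):

use std::future::Future;
use std::pin::Pin;

// Does not compile: the anonymous future for count_down would have to store
// another count_down future inside itself, giving it infinite size.
//
// async fn count_down(n: u64) {
//     if n > 0 {
//         await!(count_down(n - 1));
//     }
// }

// Workaround sketch: recurse through a boxed trait object, so the recursive
// case only occupies a pointer-sized slot in the state machine.
fn count_down(n: u64) -> Pin<Box<dyn Future<Output = ()>>> {
    Box::pin(async move {
        if n > 0 {
            await!(count_down(n - 1));
        }
    })
}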

For the await syntax, I personally lean toward the postfix-macro, no-implicit-await approach, for its easy chaining and because it can also be used somewhat like a method call.

@warlord500 you are completely ignoring the entire experience of millions of developers described above. You don't want to chain awaits.

@Pzixel please don't presume I haven't read the thread, or presume what I want.
I know that some contributors might not want to allow chaining awaits, but there are some of us developers who do. I am not sure where you got the notion that I was ignoring developers' opinions; my comment was only stating one community member's opinion and my reasons for holding it.

EDIT: if you have a difference of opinion then please do share! I am curious as to why you say we shouldn't allow chaining awaits via a method-like syntax.

@warlord500 because the MS team shared its experience across thousands of customers and millions of developers. I know it myself because I write async/await code on a day-to-day basis, and you never want to chain them. Here is the exact quote, if you wish:

We thought about precedence a lot with 'await' and we tried out many forms before settling on the form we wanted. One of the core things we found was that for us, and the customers (internal and external) that wanted to use this feature, it was rarely the case that people really wanted to 'chain' anything past their async call. In other words, people seemed to strongly gravitate toward 'await' being the most important part of any full-expression, and thus having it be near the top. Note: by 'full expression' I mean things like the expression you get at the top of an expression-statement, or the expression on the right of a top-level assign, or the expression you pass as an 'argument' to something.

The tendency for people to want to 'continue on' with the 'await' inside an expr was rare. We do occasionally see things like (await expr).M(), but those seem less common and less desirable than the amount of people doing await expr.M().

I am now quite confused. If I understand you correctly, we shouldn't support an easy postfix chaining style for await because it isn't commonly used, and because you see await as being the most important part of an expression?
I only presume here to make sure I understand you correctly; if I am wrong, don't hesitate to correct me.

Also, could you please post the link to where you got the quote? Thank you.

My counter to the two points above is that just because you don't use something commonly doesn't necessarily mean supporting it would be harmful in the cases where it makes code cleaner.

Sometimes await isn't the most important part of an expression. If the future-producing expression is the most important part and you would like to put it toward the top, you could still do that if we allowed a postfix macro style in addition to the normal macro style.

Also, could you please post the link to where you got the quote? Thank you.

But... but you said that you have read the whole thread... 😃

But I have no problem with sharing it: #50547 (comment). I suggest you read all of Cyrus's posts; it's really the experience of the whole C#/.Net ecosystem, a priceless experience that can be reused by Rust.

Sometimes await isn't the most important part of an expression

The quote clearly says the opposite 😄 And you know, I have the same feeling myself, writing async/await on a day-to-day basis.

Do you have experience with async/await? Could you share it then, please?

Wow, I can't believe I missed that. Thank you for taking time out of your day to link that.
I don't have any experience, so I guess in the grand scheme of things my opinion doesn't matter all that much.

@Pzixel I appreciate you sharing information about your and others' experience using async/await, but please be respectful to other contributors. You don't need to criticize the experience levels of others in order to make your own technical points heard.

Moderator note: @Pzixel Personal attacks on community members are not allowed. I've edited it out of your comment. Do not do it again. If you have questions about our moderation policy, please follow up with us at rust-mods@rust-lang.org.

@crabtw I didn't criticize anyone in this thread. I apologize for any inconvenience that may have taken place here.

I asked about experience once, when I wanted to understand whether the person had an actual need for chaining 'await's or whether it was an extrapolation from today's features. I didn't want to appeal to authority; it's just a useful body of information about which I can say "you need to try it yourself and realize this truth yourself". Nothing offensive here.

Personal attacks on community members are not allowed. I've edited it out of your comment.

No personal attacks intended. As I can see, you edited out my reference to downvotes. Well, it was just my reaction to my post being downvoted, nothing special. As that post was removed, it's also reasonable to remove the reference (it might even be confusing for later readers), so thank you for taking it out.

Thanks for the reference. I did want to mention that you should take none of what I say as 'gospel' :) Rust and C# are different languages with different communities, paradigms and idioms. You should def make the best choices for your language. I do hope my words are helpful and can give insight. But always be open to different ways to do things.

My hope is you come up with something amazing for Rust. Then we can see what you did and ~~steal~~ graciously adopt it for C# :)

As far as I can tell, the linked argument primarily talks about the precedence of await, and in particular argues that it makes sense to parse await x.y() as await (x.y()) rather than (await x).y() because the user will more often want and expect the former interpretation (and the spacing also suggests that interpretation). And I would tend to agree, though I'd also observe that syntax like await!(x.y()) removes the ambiguity.

However, I don't think that suggests any particular answer regarding the value of chaining like x.y().await!().z().

The quoted comment is interesting in part because there's a big difference in Rust, which has been one of the big factors in delaying our figuring out the final await syntax: C# has no ? operator, so they have no code that would need to be written (await expr)?. They describe (await expr).M() as really uncommon, and I tend to think that would be true in Rust as well, but the only exception to that, from my perspective, is ?, which will be very common because many futures will evaluate to results (all of them that exist right now do, for example).
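
To make that concrete: with a plain prefix keyword and the usual intuition that ? binds tighter, everyday code would be full of the parenthesized form. A sketch with a hypothetical prefix await keyword and made-up fetch/parse functions and types:

// Hypothetical syntax and names, purely to illustrate how often (await expr)?
// would appear when the awaited futures resolve to Results.
async fn load_config(url: &str) -> Result<Config, Error> {
    let body = (await fetch(url))?;     // parentheses needed if ? binds tighter than await
    let config = (await parse(&body))?;
    Ok(config)
}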

@withoutboats yes, that's right. I'd like to quote this part once more:

the only exception to that, from my perspective, is ?

If there is only one exception, then it seems reasonable to create await? foo as a shortcut for (await foo)? and have the best of both worlds.

Right now, at least, the proposed syntax of await!() will allow unambiguous use of ?. We can worry about some shorter syntax for the combination of await and ? if and when we decide to change the base syntax for await. (And depending on what we change it to, we might not have an issue at all.)

@joshtriplett these extra braces remove ambiguity, but they are really quite heavy. E.g. a search across my current project:

Matching lines: 139 Matching files: 10 Total files searched: 77

I have 139 awaits in 2743 SLOC. Maybe it's not a big deal, but I think we should consider the braceless alternative as the cleaner and better one. That being said, ? is the only exception, so we could easily use await foo without braces and introduce a special syntax just for this special case. It's not a big deal, but it could save some braces for a LISP project.

I've created a blog post about why I think async functions should use the outer return type approach for their signature. Enjoy reading!

https://github.com/MajorBreakfast/rust-blog/blob/master/posts/2018-06-19-outer-return-type-approach.md

I haven't followed all the discussions, so feel free to point me to where this would already have been discussed if I missed it.

Here is an additional concern about the inner return type approach: what would the syntax for Streams look like, once it is specified? I would think async fn foo() -> impl Stream<Item = T> would look nice and consistent with async fn foo() -> impl Future<Output = T>, but it wouldn't work with the inner return type approach. And I don't think we'll want to introduce an async_stream keyword.

@Ekleog Stream would need to use a different keyword. It can't use async because impl Trait works the other way around. It can only ensure that certain traits are implemented, but the traits themselves need to be already implemented on the underlying concrete type.

The outer return type approach would, however, come in handy if we would one day like to add async generator functions:

async_gen fn foo() -> impl AsyncGenerator<Yield = i32, Return = ()> { yield 1; ... }

Stream could be implemented for all async generators with Return = (). This makes this possible:

async_gen fn foo() -> impl Stream<Item = i32> { yield 1;  ... }

Note: Generators are in nightly already, but they don't use this syntax. Currently they use the closure syntax without a marker. They are also currently not pinning-aware, unlike Stream in futures 0.3.

Edit: This code previously used a Generator. I missed a difference between Stream and Generator. Streams are asynchronous. This means that they may, but don't have to, yield a value when polled: they can respond with either Poll::Ready or Poll::Pending. A Generator, on the other hand, always has to yield or complete synchronously. I have now changed it to AsyncGenerator to reflect this.

Edit2: @Ekleog The current implementation of generators uses a syntax without a marker and seems to detect that it should be a generator by looking for a yield inside the body. This means that you would be correct in saying that async could be reused. Whether that approach is sensible is another question, though. But I guess that's for another topic ^^'

Indeed, I was thinking that async could be reused, if only because async would, as per this RFC, only be allowed with Futures, and could thus detect that it's generating a Stream by looking at the return type (which must be either a Future or a Stream).

The reason why I'm raising this now is that if we want to have the same async keyword for generating both Futures and Streams, then I think the outer return type approach would be much cleaner, because it would be explicit, and I don't think anyone would expect that an async fn foo() -> i32 would yield a stream of i32 (which would be possible if the body contained a yield and the inner return type approach was picked).

We could have a second keyword for generators (e.g. gen fn), and then create streams just by applying both (e.g. async gen fn). The outer return type doesn't need to come into this at all.
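
For illustration, that combination might look something like this (purely hypothetical keywords and made-up functions; none of this is accepted syntax):

// Hypothetical: gen fn produces an iterator, async gen fn produces a stream.
gen fn naturals() -> impl Iterator<Item = u64> {
    let mut n = 0;
    loop {
        yield n;
        n += 1;
    }
}

async gen fn lines(reader: Reader) -> impl Stream<Item = String> {
    // Reader and next_line are made up for the example.
    while let Some(line) = await!(reader.next_line()) {
        yield line;
    }
}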

@rpjohnst I brought it up because the outer return type approach makes it possible to easily set two associated types.

We don't want to set two associated types. A Stream is still just a single type, not impl Iterator<Item = impl Future> or anything like that.
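
For reference, the rough shape of the Stream trait being referred to (the exact pinning and context/waker arguments varied between futures 0.2 and 0.3, so this is illustrative rather than the precise definition of the time):

trait Stream {
    type Item;

    // A Stream is one type that is polled repeatedly for items, not a nested
    // impl Iterator<Item = impl Future>.
    fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>>;
}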