inko-lang / inko

A language for building concurrent software with confidence

Home Page: http://inko-lang.org/


Checking for sendable return types is overly strict, possibly due to it not using fully resolved types

yorickpeterse opened this issue

Please describe the bug

When checking for sendable return types, the compiler is too conservative in what it allows. For example, take a method like this:

class Symbol {
  fn static array_for(module: ref ir.Module) -> Array[Symbol] {
    ...
  }
}

Assuming Symbol doesn't store any references in its graph, it should be fine to allow recovering Array[Symbol] into uni Array[Symbol], yet the compiler won't allow this.

Upon closer inspection, it appears this is triggered when a type stores a Map, as the types of the Entry values it stores aren't resolved/inferred properly. This in turn is because when TypeRef::is_sendable_output crawls the object graph, it doesn't take the type parameter mapping into account when checking field types. As a result it (eventually) checks types such as Any(TypeParameterId(...)), which aren't sendable.
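To illustrate the failure mode, here's a toy model of the crawl. Everything in it (the Type enum, is_sendable) is invented for illustration and is not the compiler's actual TypeRef representation; the point is only that a naive crawl reaches Map's internal Entry fields, sees the still-unresolved parameters K and V instead of String, and conservatively reports "not sendable":

```rust
// Toy model (illustrative names only, not the real compiler types) of
// crawling an object graph without applying the type-argument mapping.
enum Type {
    // A fully known, sendable value type such as String or Int.
    Value(&'static str),
    // An unresolved type parameter, e.g. K in Entry[K, V].
    Parameter(&'static str),
    // A class instance whose field types we crawl recursively.
    Instance { name: &'static str, fields: Vec<Type> },
}

fn is_sendable(t: &Type) -> bool {
    match t {
        Type::Value(_) => true,
        // Without the type-argument mapping there's no way to learn
        // that K and V are really String here, so the check must fail.
        Type::Parameter(_) => false,
        Type::Instance { fields, .. } => fields.iter().all(is_sendable),
    }
}

fn main() {
    // What the crawl sees inside Map[String, String]: Entry's fields
    // are still the parameters K and V rather than String.
    let entry = Type::Instance {
        name: "Entry",
        fields: vec![Type::Parameter("K"), Type::Parameter("V")],
    };
    println!("{}", is_sendable(&entry)); // prints "false"
}
```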

While we could resolve this by fully resolving field types before checking them, I have my doubts about that being a sustainable long-term solution. Most notably, checking entire object graphs for every method call may prove expensive in large projects, even if we somehow start memoizing the results. And even if we make that fast, recursively resolving all the field types so we can check them correctly is error-prone and hard to debug.

One potential option: instead of checking for sendable types at the type-checking stage, we do so after type specialization. We'd still have to crawl object graphs, but because the types are fully specialized we won't encounter any generic type parameters, removing the need to resolve them as we crawl. We can then apply memoization on top of that to speed things up, if even needed.

Please list the exact steps necessary to reproduce the bug

Create test.inko with the following contents:

class Example {
  let @map: Map[String, String]

  fn static new -> Example {
    Example(Map.new)
  }
}

class async Main {
  fn async main {
    let examples = recover []

    examples.push(Example.new)
  }
}

Now run inko check test.inko and observe the type error. Instead, this code should compile without errors.

Operating system

Fedora 40

Inko version

main

Rust version

1.70.0

> We can then apply memoization on top of that to speed things up, if even needed.

To clarify this: the idea would be to add some sort of sendable enum flag to types::Class that starts out as Unknown. After specialization we then crawl object graphs and do the following:

  • If the flag is set to No, treat it as unsendable and bail out early
  • If the flag is set to Yes, treat it as sendable and skip its sub graph (because a parent is only sendable if all its children are also sendable)
  • If the flag is set to Unknown, check its sub graph and set the flag according to the result

This means that on the first visit, the time taken is linear in the number of nodes in the graph, but after that it becomes a constant-time operation.
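The steps above can be sketched as follows. The names (Sendable, Class, check) are illustrative rather than the compiler's actual types::Class, and the cycle handling via a tentative Yes mark is one possible approach, not something prescribed here:

```rust
// Sketch of the tri-state, memoized sendability check described above.
// All names are illustrative; this is not the real compiler code.
#[derive(Clone, Copy)]
enum Sendable {
    Unknown,
    Yes,
    No,
}

struct Class {
    flag: Sendable,
    // Classes stored in this class's fields, by index. After type
    // specialization these are all concrete classes, never parameters.
    fields: Vec<usize>,
}

fn check(classes: &mut Vec<Class>, id: usize) -> bool {
    match classes[id].flag {
        Sendable::Yes => return true, // memoized: skip the sub-graph
        Sendable::No => return false, // memoized: bail out early
        Sendable::Unknown => {}
    }
    // Tentatively mark the class sendable so cyclic graphs terminate;
    // an unsendable child flips the flag back to No below.
    classes[id].flag = Sendable::Yes;
    let fields = classes[id].fields.clone();
    let ok = fields.into_iter().all(|f| check(classes, f));
    classes[id].flag = if ok { Sendable::Yes } else { Sendable::No };
    ok
}

fn main() {
    // 0: a String-like leaf, 1: an Entry storing two leaves, 2: a Map
    // storing entries. All flags start out Unknown.
    let mut classes = vec![
        Class { flag: Sendable::Unknown, fields: vec![] },
        Class { flag: Sendable::Unknown, fields: vec![0, 0] },
        Class { flag: Sendable::Unknown, fields: vec![1] },
    ];
    // The first call crawls the whole graph (linear in its size)...
    println!("{}", check(&mut classes, 2)); // prints "true"
    // ...the second hits the memoized Yes flag in constant time.
    println!("{}", check(&mut classes, 2)); // prints "true"
}
```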

An entirely different approach would be to rely on escape analysis: instead of looking at the returned object graph, when checking method bodies we determine whether any borrows of data from self or the arguments escape into the return value. If so, we disallow recovering the return value into a uni T.

The benefit of this is that we could extend it to allow for more flexible arguments. For example, if an argument is a mut T we currently don't allow recovering owned return types at all. If, however, we can determine that the mut T doesn't escape into the return type, it would be fine to allow it.
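The escape check could look something like the following toy model, where a return expression either borrows from self/an argument (the borrow escapes) or builds an owned value whose components are checked recursively. The Expr variants and the escapes function are invented for illustration and don't correspond to the compiler's actual IR:

```rust
// Toy model of the escape analysis sketched above; all names are
// invented for illustration, this is not the compiler's real IR.
enum Expr {
    BorrowArg(usize),    // a ref/mut borrow of argument N
    BorrowSelf,          // a borrow of (a field of) self
    NewOwned(Vec<Expr>), // a freshly allocated value built from parts
    CloneOf(Box<Expr>),  // a deep copy; severs aliasing with its source
}

// True if the value of `expr` can alias self or an argument, i.e. a
// borrow escapes into the return value, making recovery into a uni T
// unsound.
fn escapes(expr: &Expr) -> bool {
    match expr {
        Expr::BorrowArg(_) | Expr::BorrowSelf => true,
        // An owned value escapes only if one of its components does.
        Expr::NewOwned(parts) => parts.iter().any(escapes),
        // A deep copy never aliases its source.
        Expr::CloneOf(_) => false,
    }
}

fn main() {
    // Built from a clone of an argument: no borrow escapes, so the
    // owned result could be recovered into a uni value.
    let ok = Expr::NewOwned(vec![Expr::CloneOf(Box::new(Expr::BorrowArg(0)))]);
    // Stores a borrow of self: recovering this would be unsound.
    let bad = Expr::NewOwned(vec![Expr::BorrowSelf]);
    println!("{} {}", escapes(&ok), escapes(&bad)); // prints "false true"
}
```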

As far as performance goes, I suspect escape analysis would be more expensive, but it would also allow for a greater degree of flexibility. In addition, we could reuse it for stack-allocating data.