gfx-rs / gfx-extras

DEPRECATED: Extra libraries to help working with gfx-hal

On-demand memory configuration

kvark opened this issue · comments

I find the current way of configuring memory impractical. It allows setting up individual configs for each memory type and each kind of allocator. In practice, I don't think anyone would bother doing that.

Perhaps, we could get away with the following API instead:

impl Heaps {
  /// Don't create any allocators at this point, but know how to initialize any on demand.
  pub fn new(
    hal_properties: &hal::adapter::MemoryProperties,
    linear_config: LinearConfig,
    general_config: GeneralConfig,
  ) -> Self;
}

impl MemoryUsage {
  /// Required allocator kind for this usage.
  fn allocator_kind(&self) -> Kind;
}

So instead of using the allocator fitness function and picking one of the allocators that exist, we'd know for sure which allocator we need, and initialize it if it's not yet used for this memory type.
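
To make this concrete, here is a minimal, self-contained sketch of how that lazy path could look. Every name below (the Heaps fields, Allocator, allocator_for, the config fields) is an assumption made for illustration, not the actual gfx-memory API.

use std::collections::HashMap;

// Illustrative sketch only: every name here is an assumption made for this
// issue, not the actual gfx-memory API. `Dedicated` is omitted for brevity.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
enum Kind { Linear, General }

struct LinearConfig { linear_size: u64 }
struct GeneralConfig { block_size_granularity: u64 }

enum Allocator {
  Linear { chunk_size: u64 },
  General { granularity: u64 },
}

struct MemoryTypeEntry {
  // At most one allocator per kind, created on first use.
  allocators: HashMap<Kind, Allocator>,
}

struct Heaps {
  linear_config: LinearConfig,
  general_config: GeneralConfig,
  memory_types: Vec<MemoryTypeEntry>,
}

impl Heaps {
  /// Resolve the allocator of `kind` for `memory_type`, creating it lazily
  /// from the configs captured in `Heaps::new`. A real `allocate` would call
  /// this as `self.allocator_for(type_index, usage.allocator_kind())`.
  fn allocator_for(&mut self, memory_type: usize, kind: Kind) -> &mut Allocator {
    let chunk_size = self.linear_config.linear_size;
    let granularity = self.general_config.block_size_granularity;
    self.memory_types[memory_type]
      .allocators
      .entry(kind)
      .or_insert_with(|| match kind {
        Kind::Linear => Allocator::Linear { chunk_size },
        Kind::General => Allocator::General { granularity },
      })
  }
}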

Thoughts or concerns about doing this? @omni-viral

I slept on the idea, and here is what I came up with. Basically, MemoryUsage being a trait made sense in rendy-memory. It's just something we pass on every allocation that gives us information about:

  • required properties
  • property fitness
  • allocator

That last one is the trickiest. Technically, the user needs to configure N different allocators and have one of them picked on each allocation. The old API had 3 allocators per memory type, but the problem was that if two different allocation requests ended up wanting the same Kind::Linear on the same memory type, there was no way to say that they wanted differently configured linear allocators.
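
For reference, the trait shape being described is roughly the following; the method names are approximate recollections, not a verbatim copy of rendy-memory:

use gfx_hal::memory::Properties;

// Approximate sketch of the rendy-memory-style trait; names are from memory
// and may differ from the real crate.
pub enum Kind {
  Dedicated,
  General,
  Linear,
}

pub trait MemoryUsage {
  /// Properties the chosen memory type must have.
  fn properties_required(&self) -> Properties;
  /// How well a memory type with `properties` fits this usage (higher is better).
  fn memory_fitness(&self, properties: Properties) -> u32;
  /// How well an allocator of `kind` fits this usage; the old API picked the
  /// best-scoring allocator among those configured for the memory type.
  fn allocator_fitness(&self, kind: Kind) -> u32;
}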

The API suggested in this issue is a simplification of the general API I described. Instead of having N different allocators to pick from, this API only defines 3 (one per kind). I believe this is what most people would want anyway, so I'm thinking of going ahead with this and then considering an expansion in later versions.

You may actually want to configure the same allocator differently on different memory types.
Or at least you may want to disable an allocator on particular memory types.

For example, a linear allocator with a sane default configuration (256 MB chunks) will eat the whole device-local + host-visible heap with a single chunk, since that heap is often only about 256 MB on discrete GPUs.

I agree that configuring them all is fairly hard. I was assuming that any GPU-intensive application that actually ships will just have fine-tuned profiles for all existing GPUs.

Thus I think removing this ability completely is unreasonable.

Thanks for getting back on this topic!

For example, a linear allocator with a sane default configuration (256 MB chunks) will eat the whole device-local + host-visible heap with a single chunk.

That's a big outlier, a corner case. It would be nice if the allocator handled it nicely, but it's not critical to the functioning of most applications. Besides, there is no MemoryUsage that would show high fitness for this memory type today, either.

Thus I think removing this ability completely is unreasonable.

I'll be thinking about how we can make this more powerful.

MemoryUsage::Dynamic should pick device-local + host-visible memory if it is available.

True, thanks for the correction! In the current API, however, the user provides the Kind explicitly, and they should just use General for allocating dynamic memory.
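
As a sketch of how that mapping could look under the proposed allocator_kind method: the Dynamic variant comes from this discussion, while the other variant names and the exact mapping are illustrative assumptions.

// Sketch only: `Dynamic` is from this discussion; the other variants and the
// mapping itself are assumptions, not the actual gfx-memory enum.
enum Kind { Dedicated, General, Linear }

enum MemoryUsage {
  // Long-lived, device-local resources.
  Data,
  // Frequently updated data that should prefer device-local + host-visible
  // memory when it exists; allocated through the general allocator.
  Dynamic,
  // Short-lived upload buffers that fit the linear (arena-style) allocator.
  Staging,
}

impl MemoryUsage {
  fn allocator_kind(&self) -> Kind {
    match self {
      MemoryUsage::Data | MemoryUsage::Dynamic => Kind::General,
      MemoryUsage::Staging => Kind::Linear,
    }
  }
}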