amethyst / shred

Shared resource dispatcher

Scalability of Shred

minecrawler opened this issue · comments

Something that came to mind, but I don't know whether it has been discussed yet, or whether my understanding of how the scheduling works is just wrong :)

Example (from SPECS):

impl<'a> System<'a> for SysA {
    type SystemData = (WriteStorage<'a, Pos>, ReadStorage<'a, Vel>);

    fn run(&mut self, (mut pos, vel): Self::SystemData) {
        // The `.join()` combines multiple components,
        // so we only access those entities which have
        // both of them.
        for (pos, vel) in (&mut pos, &vel).join() {
            pos.0 += vel.0;
        }
    }
}

Given we have several million entities with Pos and Vel components, how does Shred dispatch this system? Does it pass all entities to a single call of run(), or does it call run() multiple times with fewer entities each, with each call on its own thread? If I understand Shred correctly, it passes all entities to a single call of run(), hence a single thread. That would mean that if I only have one system, only one core of my CPU is used to process all the data sequentially, while e.g. 31 other cores idle around doing nothing. (I know this example is very contrived, but I want to demonstrate my train of thought, and my concern that Shred, hence Specs, hence Amethyst and any other high-level library or application depending on this crate, is scheduled suboptimally.)

That being said, wouldn't it be better to take the number of systems that can run in parallel, the number of entities, and the number of CPU cores, and feed smaller chunks of entities to each system, so that one system runs multiple times in parallel?

If you want multithreading in the above code sample, take a look at par_join. The current design gives more control to the user, as there might be things you want to do just once per dispatch.

@torkleyy Thanks, that's very interesting! I found an example after searching for that keyword. Though it still seems as if Shred doesn't support such a way of scaling systems...

impl<'a> System<'a> for SysB {
    // CompInt/CompFloat are example components, as in the specs examples.
    type SystemData = (ReadStorage<'a, CompInt>, WriteStorage<'a, CompFloat>);

    fn run(&mut self, (comp_int, mut comp_float): Self::SystemData) {
        use rayon::prelude::*;
        // `par_join()` splits the joined entities across rayon's thread pool.
        (&comp_int, &mut comp_float)
            .par_join()
            .for_each(|(i, f)| f.0 += i.0 as f32);
    }
}

There should be more info about par_join() in the docs and the books, so people actually find this useful functionality. If I have time tonight, I will at least contribute a few sentences mentioning it (in Specs) :)

Though it still seems as if Shred doesn't support such a way of scaling systems...

Sorry for the late response, I just wanted to say:

shred does not have a scalability problem; the concept of an "Entity" does not even exist in shred: Systems just work on Resources. And running the same system on multiple views of entities is, besides being horribly unsafe, simply not practical. There are things you want to do only once per dispatch, and there are loops you cannot parallelize. The current design gives control to the system implementor and also ensures that all safety requirements are fulfilled.