
Fiber Tasking Lib

This is a library for enabling task-based multi-threading. It allows execution of task graphs with arbitrary dependencies. Dependencies are represented as atomic counters.

Under the covers, the task graph is executed using fibers, which in turn, are run on a pool of worker threads (one thread per CPU core). This allows the scheduler to wait on dependencies without task chaining or context switches.

This library was created as a proof of concept of the ideas presented by Christian Gyrling in his 2015 GDC talk, 'Parallelizing the Naughty Dog Engine Using Fibers'.


Example

#include <ftl/task_scheduler.h>
#include <ftl/atomic_counter.h>
#include <cassert>  // for the assert() check below


struct NumberSubset {
    uint64 start;
    uint64 end;

    uint64 total;
};


void AddNumberSubset(ftl::TaskScheduler *taskScheduler, void *arg) {
    NumberSubset *subset = reinterpret_cast<NumberSubset *>(arg);

    subset->total = 0;

    while (subset->start != subset->end) {
        subset->total += subset->start;
        ++subset->start;
    }

    subset->total += subset->end;
}


/**
* Calculates the value of a triangle number by dividing the additions up into tasks
*
* A triangle number is defined as:
*         Tn = 1 + 2 + 3 + ... + n
*
* The code is checked against the numerical solution which is:
*         Tn = n * (n + 1) / 2
*/
void TriangleNumberMainTask(ftl::TaskScheduler *taskScheduler, void *arg) {
    // Define the constants to test
    const uint64 triangleNum = 47593243ull;
    const uint64 numAdditionsPerTask = 10000ull;
    const uint64 numTasks = (triangleNum + numAdditionsPerTask - 1ull) / numAdditionsPerTask;

    // Create the tasks
    // FTL allows you to create Tasks on the stack.
    // However, in this case, that would cause a stack overflow
    ftl::Task *tasks = new ftl::Task[numTasks];
    NumberSubset *subsets = new NumberSubset[numTasks];
    uint64 nextNumber = 1ull;

    for (uint64 i = 0ull; i < numTasks; ++i) {
        NumberSubset *subset = &subsets[i];

        subset->start = nextNumber;
        subset->end = nextNumber + numAdditionsPerTask - 1ull;
        if (subset->end > triangleNum) {
            subset->end = triangleNum;
        }

        tasks[i] = {AddNumberSubset, subset};

        nextNumber = subset->end + 1;
    }

    // Schedule the tasks
    ftl::AtomicCounter counter(taskScheduler);
    taskScheduler->AddTasks(numTasks, tasks, &counter);

    // FTL creates its own copies of the tasks, so we can safely delete the memory
    delete[] tasks;

    // Wait for the tasks to complete
    taskScheduler->WaitForCounter(&counter, 0);


    // Add the results
    uint64 result = 0ull;
    for (uint64 i = 0; i < numTasks; ++i) {
        result += subsets[i].total;
    }

    // Test
    assert(triangleNum * (triangleNum + 1ull) / 2ull == result);

    // Cleanup
    delete[] subsets;
}

int main(int argc, char **argv) {
    ftl::TaskScheduler taskScheduler;
    taskScheduler.Run(25, TriangleNumberMainTask);

    return 0;
}


Automatic Test Matrix

Compiler    | Windows       | Linux         | OS X
------------|---------------|---------------|--------------
VC++ 2015   | build status  |               |
gcc-4.8     |               | build status  |
gcc-4.9     |               | build status  | build status
gcc-5       |               | build status  | build status
gcc-6       |               | build status  | build status
clang-3.5   |               | build status  |
clang-3.6   |               | build status  |
clang-3.7   |               | build status  | build status
clang-3.8   |               | build status  |
clang-3.9   |               | build status  | build status


How it works

Honestly, the best explanation is to watch Christian Gyrling’s talk. It’s free to watch (as of the time of writing) on the GDC Vault. His explanation of fibers, as well as how they used the fiber system in their game engine, is excellent. However, I will try to give a TL;DR version here.

What are fibers

A fiber consists of a stack and a small storage space for registers. It’s a very lightweight execution context that runs inside a thread. You can think of it as a shell of an actual thread.

Why go through the hassle, though? What’s the benefit?

The beauty of fibers is that you can switch between them extremely quickly. Ultimately, a switch consists of saving out the registers, then swapping the execution pointer and the stack pointer. This is much, much faster than a full thread context switch.
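This is independent of FTL’s own fiber implementation; purely as a standalone illustration of the idea, here is a minimal cooperative switch using the POSIX <ucontext.h> API. It shows what a switch amounts to: save the current registers and stack pointer, load another set, and keep executing.

// Minimal illustration of user-space context switching with POSIX <ucontext.h>.
// This is not FTL code; it just demonstrates the concept of a fiber switch.
#include <ucontext.h>
#include <cstdio>

static ucontext_t mainCtx, fiberCtx;

static void FiberFunc() {
    std::printf("hello from the fiber\n");
    swapcontext(&fiberCtx, &mainCtx);   // switch back to the "main" context
    std::printf("fiber resumed\n");
    // Falling off the end returns to uc_link (mainCtx)
}

int main() {
    alignas(16) static char stack[64 * 1024];   // the fiber's private stack

    getcontext(&fiberCtx);
    fiberCtx.uc_stack.ss_sp = stack;
    fiberCtx.uc_stack.ss_size = sizeof(stack);
    fiberCtx.uc_link = &mainCtx;                // where to go when FiberFunc returns
    makecontext(&fiberCtx, FiberFunc, 0);

    swapcontext(&mainCtx, &fiberCtx);   // run the fiber until it switches back
    std::printf("back in main\n");
    swapcontext(&mainCtx, &fiberCtx);   // resume the fiber right where it left off
    std::printf("fiber finished\n");
    return 0;
}

Each swapcontext() call is just a register save/restore in user space; no kernel scheduler is involved, which is why it is so much cheaper than a thread context switch.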

How do fibers apply to task-based multithreading?

To answer this question, let’s compare it to another task-based multithreading library: Intel’s Threading Building Blocks. TBB is an extremely well-polished and successful tasking library. It can handle really complex task graphs and has an excellent scheduler. However, let’s imagine a scenario:

  1. Task A creates Tasks B, C, and D and sends them to the scheduler

  2. Task A does some other work, but then it hits the dependency: B, C, and D must be finished.

  3. If they aren’t finished, we can do one of two things:

    a. Spin-wait / sleep

    b. Ask the scheduler for a new task and start executing it

  4. Let’s take path b

  5. So the scheduler gives us Task G and we start executing it

  6. But Task G ends up needing a dependency as well, so we ask the scheduler for another new task

  7. And another, and another

  8. In the meantime, Tasks B, C, and D have completed

  9. Task A could theoretically be continued, but it’s buried in the stack under the tasks that we got while we were waiting

  10. The only way we can resume A is to wait for the entire chain to unravel back to it, or suffer a context switch.

Now, obviously, this is a contrived example. And as I said above, TBB has an awesome scheduler that works hard to alleviate this problem. That said, fibers can eliminate the problem altogether by making switching between tasks cheap. This lets us isolate the execution of one task from another, preventing the 'chaining' effect described above.


The Architecture from 10,000 ft

(Christian has some great illustrations on pages 8 - 17 of his slides that help explain the flow of fibers and tasks. I suggest looking at those while you’re reading)

Task Queue - An 'ordinary' queue for holding the tasks that are waiting to be executed. In the current code, there is only one queue. However, a more sophisticated system might have multiple queues with varying priorities.

Fiber Pool - A pool of fibers used for switching to new tasks while the current task is waiting on a dependency. Fibers execute the tasks.

Worker Threads - One per logical CPU core. These run the fibers.

Waiting Tasks - A list of the tasks that are waiting for a dependency to be fulfilled. Dependencies are represented with atomic counters.
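Put together, you can picture the scheduler’s state roughly like this. These are illustrative placeholder types for the pieces listed above, not FTL’s real declarations:

// A simplified mental model of the scheduler's pieces (illustrative only).
#include <deque>
#include <thread>
#include <vector>

struct Task {
    void (*Function)(void *arg);  // the work to do
    void *ArgData;                // optional argument passed to Function
};

struct Fiber;  // opaque here: a stack plus a small register save area

struct WaitingFiber {
    Fiber *fiber;      // the suspended fiber to resume later
    long targetValue;  // resume when its counter reaches this value
};

struct SchedulerModel {
    std::deque<Task> taskQueue;               // tasks waiting to be executed
    std::vector<Fiber *> fiberPool;           // idle fibers, ready to pick up tasks
    std::vector<std::thread> workerThreads;   // one per logical core; they run the fibers
    std::vector<WaitingFiber> waitingFibers;  // fibers parked on unfulfilled counters
};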

Tasks can be created on the stack. They’re just a simple struct with a function pointer and an optional void *arg to be passed to the function:

struct Task {
    TaskFunction Function;
    void *ArgData;
};
Task tasks[10];
for (unsigned i = 0; i < 10; ++i) {
    tasks[i] = {MyFunctionPointer, myFunctionArg};
}

You schedule a task for execution by calling TaskScheduler::AddTasks():

ftl::AtomicCounter counter(taskScheduler);
taskScheduler->AddTasks(10, tasks, &counter);

The tasks get added to the queue, and other threads (or the current one, when it is finished with the current task) can start executing them when they get popped off the queue.

AddTasks can optionally take a pointer to an AtomicCounter. If you pass one, the counter’s value will be set to the number of tasks queued. Every time a task finishes, the counter is atomically decremented. You can use this to create dependencies between tasks, using the function:

void TaskScheduler::WaitForCounter(AtomicCounter *counter, int targetValue);
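For instance, a counter can be used to sequence two stages of work: schedule the first batch, wait for its counter to hit zero, then schedule the second batch. The sketch below is illustrative only (the task functions, data, and the Pipeline name are hypothetical), but the AddTasks / WaitForCounter calls mirror the ones shown above.

// Sketch: sequencing two stages of hypothetical work with an AtomicCounter.
#include <ftl/task_scheduler.h>
#include <ftl/atomic_counter.h>

void ProduceChunk(ftl::TaskScheduler *taskScheduler, void *arg) { /* fill the chunk pointed to by arg */ }
void ConsumeChunk(ftl::TaskScheduler *taskScheduler, void *arg) { /* read the chunk pointed to by arg */ }

void Pipeline(ftl::TaskScheduler *taskScheduler, void *arg) {
    const int kNumChunks = 8;
    int chunks[kNumChunks] = {};

    // Stage 1: produce every chunk
    ftl::Task produceTasks[kNumChunks];
    for (int i = 0; i < kNumChunks; ++i) {
        produceTasks[i] = {ProduceChunk, &chunks[i]};
    }
    ftl::AtomicCounter produced(taskScheduler);
    taskScheduler->AddTasks(kNumChunks, produceTasks, &produced);

    // Stage 2 depends on stage 1: wait until the counter hits zero,
    // i.e. every ProduceChunk task has finished
    taskScheduler->WaitForCounter(&produced, 0);

    // Now it is safe to schedule the consumers
    ftl::Task consumeTasks[kNumChunks];
    for (int i = 0; i < kNumChunks; ++i) {
        consumeTasks[i] = {ConsumeChunk, &chunks[i]};
    }
    ftl::AtomicCounter consumed(taskScheduler);
    taskScheduler->AddTasks(kNumChunks, consumeTasks, &consumed);
    taskScheduler->WaitForCounter(&consumed, 0);
}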

This is where fibers come into play. If the counter already equals targetValue, the function returns immediately. If not, the scheduler moves the current fiber into the Waiting Tasks list and grabs a new fiber from the Fiber Pool. The new fiber pops a task from the Task Queue and begins executing it.

But what about the task we stored in Waiting Tasks? When will it finish being executed?

Every time an AtomicCounter is modified (Store() / FetchAdd() / FetchSub()), we check the new value against the targetValue of any fibers that are waiting on that counter. If we find a match, we remove the fiber from the list and add it to a Ready Fibers list in the TaskScheduler. Before a fiber tries to pop a task off the Task Queue, it checks whether there are any Ready Fibers. If so, it returns itself to the Fiber Pool and switches to the ready fiber, which continues execution right where it left off.
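Very roughly, the bookkeeping described above looks something like the following. This is only a mental model, not FTL’s actual code; in particular, real code needs synchronization around the waiter list, which is omitted here.

// Sketch of the counter-side check: whenever the counter changes, any fiber
// whose targetValue now matches is moved to the scheduler's ready list.
#include <atomic>
#include <vector>

struct Fiber;  // opaque: a stack plus saved registers

struct Waiter {
    Fiber *fiber;      // the suspended fiber
    long targetValue;  // resume when the counter reaches this value
};

struct CounterModel {
    std::atomic<long> value{0};
    std::vector<Waiter> waiters;        // fibers parked on this counter
    std::vector<Fiber *> *readyFibers;  // the scheduler's shared ready list

    // Decrement the counter and wake any fiber whose target was just hit.
    // (Illustrative only: locking around `waiters` is omitted.)
    void FetchSub(long x) {
        const long newValue = value.fetch_sub(x) - x;
        for (auto it = waiters.begin(); it != waiters.end();) {
            if (it->targetValue == newValue) {
                readyFibers->push_back(it->fiber);  // mark the fiber ready to resume
                it = waiters.erase(it);
            } else {
                ++it;
            }
        }
    }
};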


Dependencies

  • C++11 Compiler

  • CMake 3.2 or greater


Supported Platforms

Arch    | Windows       | Linux         | OS X          | iOS       | Android
--------|---------------|---------------|---------------|-----------|----------
arm     | Needs testing | Tested OK     |               | In theory | In theory
arm_64  | Needs testing | Tested OK     |               | In theory | In theory
x86     | Tested OK     | Needs testing | Needs testing |           | In theory
x86_64  | Tested OK     | Tested OK     | Tested OK     |           | In theory
ppc     |               | In theory     |               |           |
ppc_64  |               | In theory     |               |           |

Building

FiberTaskingLib is a standard CMake build. However, for detailed instructions on how to build and include the library in your own project, see the documentation page.


License

The library is licensed under the Apache 2.0 license. However, FiberTaskingLib distributes and uses code from other open source projects that have their own licenses.


Contributing

Contributions are very welcome. See the contributing page for more details.


Request for Criticism

This implementation is something I created because I thought Christian’s presentation was really interesting and I wanted to explore it myself. The code is still a work in progress, and I would love to hear your critiques of how I could make it better. I will continue to work on this project and improve it as best I can.
