graphql-java / java-dataloader

A Java 8 port of Facebook DataLoader


composing dataloaders

sheepdreamofandroids opened this issue

Hi,

I have a datafetcher where I use 2 dataloaders in sequence: the first translates one ID into another, and the second fetches the data corresponding to that second ID.

loader1.load(id1).thenCompose(id2 -> loader2.load(id2))

This hangs because dispatchAll() is not called again after loader1 completes.
I can work around that by adding a dispatchAll() call inside the thenCompose() lambda, but then it is called once for every id2, which is ugly at the very least.
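Concretely, the workaround looks something like this (a sketch; registry stands in for the per-request DataLoaderRegistry, and Data for the fetched type):

loader1.load(id1)
        .thenCompose(id2 -> {
            CompletableFuture<Data> data = loader2.load(id2);
            registry.dispatchAll(); // runs once per id2, defeating batching
            return data;
        });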

Is there a better way of doing this?

Currently we don't have a way to do what you are after.

CompletableFutures give us no way to know how deeply they are nested, and hence how many times dispatch must be called.

This is currently an unsolved problem.

CompletableFutures might do something else completely, so even if you had a way of inspecting their "nestedness", you still wouldn't know that their completion leads to another load(), or even to multiple ones.
I see two possible solutions:

  1. Whenever load() is called, start a timer to call dispatchAll() after 1 ms. If the timer is already running, delay it a bit.
  2. Whenever a batchload completes, complete all the futures and then call dispatchAll().

Both can be made optional using an extra parameter. And of course dispatchAll() should finish quickly when nothing needs to be done.
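
Roughly, option 1 could look like the sketch below (entirely hypothetical; the DebouncedDispatcher class is not part of the library):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

import org.dataloader.DataLoaderRegistry;

// Option 1: every load() schedules a dispatchAll() 1 ms out;
// another load() arriving in the meantime pushes the dispatch back.
class DebouncedDispatcher {
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private final DataLoaderRegistry registry;
    private ScheduledFuture<?> pending;

    DebouncedDispatcher(DataLoaderRegistry registry) {
        this.registry = registry;
    }

    // Call this after every load() on any registered data loader.
    synchronized void loadCalled() {
        if (pending != null) {
            pending.cancel(false); // delay the dispatch a bit
        }
        pending = scheduler.schedule(registry::dispatchAll, 1, TimeUnit.MILLISECONDS);
    }
}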

The solution to this problem is to modify loader1 so that nothing needs to be loaded in the .then() block. In a traditional relational database this would mean that loader1 would do some extra joins to obtain all the required data.

@vojtapol In all honesty, if you modify the DataLoader to eagerly optimize fetching, you can also modify the original resolver function in the exact same way and drop DataLoader completely.

@vojtapol

The solution to this problem is to modify loader1 so that nothing needs to be loaded in the .then() block.

That would solve the immediate problem, but there might be other ways to obtain id2. Those loads would then have to do a similar join instead of taking advantage of the already loaded data2, which would decrease the effectiveness of the cache and require more memory.

I noticed #46 which will solve this transparently.

One potential solution is to dispatch all pending data loaders, wait for the futures returned from the dispatched data loaders to complete, and then repeat the process to dispatch the new pending data loaders that come from the thenCompose chaining. This process can be repeated until all levels of pending data loaders have been dispatched. A code example would be something like:

import java.util.List;
import java.util.concurrent.CompletableFuture;

import org.dataloader.DataLoader;

public class Dispatcher {
    private final List<DataLoader<?, ?>> dataLoaders;

    Dispatcher(List<DataLoader<?, ?>> dataLoaders) {
        this.dataLoaders = dataLoaders;
    }

    // Total number of queued keys across all data loaders.
    private int depth() {
        return dataLoaders.stream()
                .mapToInt(DataLoader::dispatchDepth)
                .sum();
    }

    void dispatchAllAndJoin() {
        while (depth() > 0) {
            // Dispatch all data loaders. This will kick off all batched tasks.
            CompletableFuture<?>[] futures = dataLoaders.stream()
                    .filter(dataLoader -> dataLoader.dispatchDepth() > 0)
                    .map(DataLoader::dispatch)
                    .toArray(CompletableFuture[]::new);
            // Wait for the futures to complete; any thenCompose chains may
            // enqueue new keys, which the next loop iteration picks up.
            CompletableFuture.allOf(futures).join();
        }
    }
}

In every round of dispatch, Dispatcher#dispatchAllAndJoin will be able to batch all tasks whose dependencies have been resolved by previous dispatches. This logic could potentially live in DataLoaderRegistry but clients can also just implement their own dispatching logic without changing the dataloader library.
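
For example, a caller could drive the two chained loaders from the original question like this (a hypothetical usage fragment; Data and the loader variables stand in for the asker's types):

// Chain the loads as in the original question, then drive the dispatcher.
CompletableFuture<Data> result = loader1.load(id1)
        .thenCompose(id2 -> loader2.load(id2));

new Dispatcher(Arrays.asList(loader1, loader2)).dispatchAllAndJoin();
Data data = result.join(); // both levels have been dispatched by now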

@hahooy I know, I'm actually using a fully asynchronous version of this: #46 (comment)

For the record, just dispatching until the depth is <= 0 will work, however it will have the opposite effect: it will cause fields that COULD be batched together to be eagerly dispatched. So you have "UNDER BATCHING" in this situation.

The real trick is that you need to know WHEN a good time to dispatch is, and unlike, say, JavaScript, there is no "nextTick" time in a JVM.

Actually I have done testing on node.js, and they can also UNDER BATCH based on the next tick firing before fields complete.

For the record, just dispatching until the depth is <= 0 will work, however it will have the opposite effect: it will cause fields that COULD be batched together to be eagerly dispatched. So you have "UNDER BATCHING" in this situation.

I don't think that is true, since the code waits for all dispatches to terminate before starting a new round. This guarantees that no new calls will be made on the dataloaders. Unless, of course, multiple dispatchAllAndJoin() loops run in parallel...

Another inefficiency in this method is that a new round has to wait for the slowest dataloader. Some dispatches could have started earlier.

I imagine some heuristics where each dataloader that has received requests waits some time before dispatching. The closer the number of pending requests is to the maximum batch size, the earlier the dispatch. The optimal mapping from the number of waiting keys to the wait time could be hand-tuned or "learned" automatically.
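
A sketch of what I mean (all hypothetical, nothing like this exists in the library): the wait time shrinks linearly as the number of pending keys approaches the batch size.

import java.time.Duration;

// Hypothetical heuristic: a full batch dispatches immediately,
// a nearly empty one waits up to maxWait for more keys to arrive.
static Duration waitBeforeDispatch(int pendingKeys, int maxBatchSize, Duration maxWait) {
    if (pendingKeys >= maxBatchSize) {
        return Duration.ZERO;
    }
    double fillRatio = (double) pendingKeys / maxBatchSize;
    return Duration.ofMillis(Math.round(maxWait.toMillis() * (1.0 - fillRatio)));
}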

See org.dataloader.registries.ScheduledDataLoaderRegistry in the latest 3.x versions for some of the answers to the above.

It allows the DataLoader to "tick" in the background and decide if it is going to dispatch or not.

The predicates you provide can dispatch on time or depth, or both. Or make up your own predicates.

This may help people compose dataloaders together such that dispatch is eventually called regardless of, say, the field tracking in graphql.
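
For example, a custom predicate is just a lambda over the data loader's state (a minimal sketch; the 5-key threshold here is arbitrary):

import org.dataloader.DataLoader;
import org.dataloader.registries.DispatchPredicate;

// Custom predicate: only dispatch loaders holding at least 5 queued keys.
DispatchPredicate atLeastFiveKeys =
        (key, dataLoader) -> dataLoader.dispatchDepth() >= 5;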

I tried swapping the default DataLoaderRegistry for ScheduledDataLoaderRegistry in my graphql application and ran a simple loader1.load(id1).thenCompose(id2 -> loader2.load(id2)) test with the following configuration:

DispatchPredicate depthOrTimePredicate =
        DispatchPredicate.dispatchIfDepthGreaterThan(10)
            .or(DispatchPredicate.dispatchIfLongerThan(Duration.ofMillis(200)));
ScheduledDataLoaderRegistry registry = ScheduledDataLoaderRegistry.newScheduledRegistry()
              .dispatchPredicate(depthOrTimePredicate)
              .schedule(Duration.ofMillis(10))
              .register(...)
              .build();

When executing a query that triggered the data fetcher running the test, the server hung after calling loader1.load(id1), never calling loader2.load(id2).

Has anyone gotten an example of this, or an alternative, to work?

I've run into the same limitation and ScheduledDataLoaderRegistry didn't work for me. I think ScheduledDataLoaderRegistry serves a different use case: to make the overall dispatching strategy less eager by pushing dispatch attempts into the future. This still relies on dispatchAll being called first, which doesn't happen in the scenario with nested loaders.

So my current approach, an ugly hack, is to have a separate scheduled task periodically check all in-flight data loaders and forcefully dispatch them if they haven't been dispatched within a preset time window (e.g. 500ms). This works to unstick nested data loaders, at the cost of naively triggering dispatches too early for long-running data loading tasks. I am not sure what the implications of that are, but my API has been working fine so far, so I'm happy 😎

import java.time.Duration;
import java.util.Collection;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

import org.dataloader.DataLoader;
import org.dataloader.DataLoaderRegistry;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

import lombok.extern.slf4j.Slf4j;

@Component
@Slf4j
public class ScheduledDataLoaderDispatcher {

    private final Queue<DataLoaderRegistry> globalRegistries = new ConcurrentLinkedQueue<>();
    private final Duration timeToDispatch;

    public ScheduledDataLoaderDispatcher(@Value("${app.dataLoader.timeToDispatch:500}") Integer timeToDispatch) {
        this.timeToDispatch = Duration.ofMillis(timeToDispatch);
    }

    // One registry per in-flight request; added when the request starts.
    public void addRegistry(DataLoaderRegistry dataLoaderRegistry) {
        globalRegistries.add(dataLoaderRegistry);
    }

    public void removeRegistry(DataLoaderRegistry dataLoaderRegistry) {
        globalRegistries.remove(dataLoaderRegistry);
    }

    // Forcefully dispatch any loader that has been idle past the time window.
    @Scheduled(fixedRateString = "${app.dataLoader.dispatchTickRate:100}")
    public void dispatchAll() {
        globalRegistries.stream()
                .map(DataLoaderRegistry::getDataLoaders)
                .flatMap(Collection::stream)
                .filter(this::isDispatchNeeded)
                .forEach(DataLoader::dispatch);
    }

    private boolean isDispatchNeeded(DataLoader<?, ?> dataLoader) {
        return timeToDispatch.compareTo(dataLoader.getTimeSinceDispatch()) < 0;
    }
}

This works to unstuck nested data loaders, at the cost of naively triggering dispatches too early for long running data loading tasks.

This is pretty much how the JS tick works for JavaScript data loaders. They can dispatch too early as well, but they never miss composed loaders, because control is eventually passed back and the tick will happen.

One thing I will say about the above is: since DataLoaders are per request, your scheduler Queue will grow to the size of the number of concurrent requests * the number of dataloaders per request.

It's good that you have a removeRegistry, because otherwise this would get unwieldy quickly with enough load.

Thank you for your reply! Correct, I also wrote an instrumentation to clean up the globalRegistries after each request. Even with that mechanism in place, the queue can grow rapidly under heavy load: 500 milliseconds is a long time when your API is processing tens or hundreds of calls simultaneously. I still have to performance test my API to make sure this setup functions under load.

import java.util.Optional;

import graphql.ExecutionInput;
import graphql.ExecutionResult;
import graphql.execution.instrumentation.InstrumentationContext;
import graphql.execution.instrumentation.SimpleInstrumentation;
import graphql.execution.instrumentation.SimpleInstrumentationContext;
import graphql.execution.instrumentation.parameters.InstrumentationExecutionParameters;
import org.springframework.stereotype.Component;

import lombok.RequiredArgsConstructor;

@Component
@RequiredArgsConstructor
public class ScheduledDataLoaderInstrumentation extends SimpleInstrumentation {

    private final ScheduledDataLoaderDispatcher scheduledDataLoaderDispatcher;

    @Override
    public InstrumentationContext<ExecutionResult> beginExecution(InstrumentationExecutionParameters parameters) {
        return new SimpleInstrumentationContext<>() {
            @Override
            public void onCompleted(ExecutionResult result, Throwable t) {
                // Deregister this request's registry once execution finishes,
                // so the scheduled dispatcher stops ticking over it.
                Optional.ofNullable(parameters.getExecutionInput())
                        .map(ExecutionInput::getDataLoaderRegistry)
                        .ifPresent(scheduledDataLoaderDispatcher::removeRegistry);
            }
        };
    }
}
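
The addRegistry side isn't shown above; it lives wherever the per-request registry is created, roughly like this (a hypothetical sketch; the loader names and query variable are stand-ins):

// Hypothetical per-request wiring: register the fresh registry with the
// scheduled dispatcher before execution starts.
DataLoaderRegistry registry = new DataLoaderRegistry();
registry.register("loader1", loader1).register("loader2", loader2);
scheduledDataLoaderDispatcher.addRegistry(registry);

ExecutionInput input = ExecutionInput.newExecutionInput()
        .query(query)
        .dataLoaderRegistry(registry)
        .build();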

Just checking in. Has anyone come up with a workable solution to this problem yet?

Trying to find one.

This is not composing dataloader calls per se; however, this PR may help others who want to write custom dispatchers:

#128

I have created a variant on ScheduledDataLoaderRegistry called ticker mode that will reschedule the dispatch() calls continuously in the background.

This will allow chained calls to complete, though perhaps the batching windows will not be as efficient as possible.

See PR: #131
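
Assuming the builder flag from that PR, usage would look roughly like this (a sketch, not final API; the loaders and durations are placeholders):

ScheduledDataLoaderRegistry registry = ScheduledDataLoaderRegistry.newScheduledRegistry()
        .register("loader1", loader1)
        .register("loader2", loader2)
        .dispatchPredicate(DispatchPredicate.dispatchIfLongerThan(Duration.ofMillis(200)))
        .tickerMode(true) // keep rescheduling dispatch checks in the background
        .schedule(Duration.ofMillis(10))
        .build();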

Closing this thread after pull requests #128 and #131