cloudfuse-io / buzz-rust

Serverless query engine

Re-use HBee cache

rdettai opened this issue

Work is currently distributed to HBees at random because the query is transferred upon invocation. As a result, subsequent queries are unlikely to be routed to HBees that could serve part of the workload from cache (e.g. data already downloaded from S3).

A possible solution would be to have the HBees first connect to the HComb and register the state of their cache, so that the HComb can assign work according to that state. A rough sketch of what this could look like is given below.
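As a minimal sketch of the idea (not the actual Buzz code: `CacheRegistry`, `HBeeId`, and `PartitionKey` are hypothetical names), the HComb could keep a map from each registered HBee to the partitions it reports as cached, and prefer assigning a partition to an HBee that already holds it:

```rust
use std::collections::{HashMap, HashSet};

/// Hypothetical identifier for an HBee worker.
type HBeeId = u64;
/// Hypothetical identifier for a data partition (e.g. an S3 object).
type PartitionKey = String;

/// Cache registry the HComb could maintain once HBees register their cache state.
#[derive(Default)]
struct CacheRegistry {
    cached: HashMap<HBeeId, HashSet<PartitionKey>>,
}

impl CacheRegistry {
    /// Called when an HBee connects and reports which partitions it holds in cache.
    fn register(&mut self, hbee: HBeeId, partitions: impl IntoIterator<Item = PartitionKey>) {
        self.cached.entry(hbee).or_default().extend(partitions);
    }

    /// Pick an HBee that already caches the given partition,
    /// falling back to any registered HBee when none does.
    fn assign(&self, partition: &PartitionKey) -> Option<HBeeId> {
        self.cached
            .iter()
            .find(|(_, parts)| parts.contains(partition))
            .map(|(id, _)| *id)
            .or_else(|| self.cached.keys().next().copied())
    }
}

fn main() {
    let mut registry = CacheRegistry::default();
    registry.register(1, ["s3://bucket/part-0.parquet".to_string()]);
    registry.register(2, ["s3://bucket/part-1.parquet".to_string()]);

    // part-1 is routed to HBee 2, which already cached it.
    let target = registry.assign(&"s3://bucket/part-1.parquet".to_string());
    assert_eq!(target, Some(2));
}
```

In practice the fallback policy would need more thought (load balancing across HBees with no cache hit, eviction of stale registrations when an HBee instance is recycled), but the core change is that the HComb assigns work based on registered cache state instead of pushing the query at invocation time.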