tensorflow / quantum

Hybrid Quantum-Classical Machine Learning in TensorFlow

Home Page: https://www.tensorflow.org/quantum

Batching for sampler backends

herr-d opened this issue · comments

Hi,

some colleagues of mine and I noticed that the sampler backends do not receive batches; instead, the backends evaluate each circuit individually. This seems very inefficient if one wants to run on actual hardware.

On the more technical side, the relevant code is in the core/ops/batch_util.py script: in the function batch_calculate_sampled_expectation, each batch element is evaluated individually and the TFQPauliSumCollector class computes the expectations. Couldn't we instead collect all jobs, submit them to the supplied backend as a single batch, and then calculate the expectation values?
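To make the proposal concrete, here is a minimal sketch of the batched pattern, with all names hypothetical (the real TFQ code path and backend API differ; cirq samplers, for instance, expose a run_batch-style call): collect every job, make one round trip to the backend, then compute the sampled expectation values from the returned counts.

```python
from typing import Callable, Dict, List

# Hypothetical stand-in for a hardware sampler: takes a list of "jobs"
# and a shot count, and returns measurement counts per job. A real
# backend would accept circuits, but the batching structure is the same.
BatchBackend = Callable[[List[str], int], List[Dict[str, int]]]

def z_expectation(counts: Dict[str, int]) -> float:
    """Sampled <Z> on a single qubit: +1 for outcome '0', -1 for '1'."""
    shots = sum(counts.values())
    return (counts.get("0", 0) - counts.get("1", 0)) / shots

def batched_expectations(jobs: List[str], shots: int,
                         backend: BatchBackend) -> List[float]:
    # One submission for the whole batch instead of one per circuit;
    # on queued hardware this single round trip is where the speedup
    # comes from.
    all_counts = backend(jobs, shots)
    return [z_expectation(c) for c in all_counts]

# Toy backend that "samples" deterministic states, for illustration only.
def fake_backend(jobs: List[str], shots: int) -> List[Dict[str, int]]:
    return [{job: shots} for job in jobs]

print(batched_expectations(["0", "1", "0"], 1000, fake_backend))
# -> [1.0, -1.0, 1.0]
```

The key point is only that the backend call sits outside the per-circuit loop, so queue overhead is paid once per batch rather than once per circuit.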

With our custom IBM Q sampler, this change gave us a speedup of roughly 200x. Our version, however, is quite hacky and breaks the assumption of non-commuting observables, which, according to issue 271, you still want to keep.
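For context on the non-commuting caveat: observables that do not qubit-wise commute cannot be estimated from one measurement circuit, so a correct batched implementation still has to group them first and submit one job per group. A toy sketch of such greedy grouping (illustrative only, not TFQ's actual code) could look like this:

```python
from typing import List

def qubitwise_commute(p: str, q: str) -> bool:
    """Two Pauli strings (e.g. 'XZI', 'XZZ') qubit-wise commute when, on
    every qubit, the letters are equal or at least one is identity 'I'."""
    return all(a == b or "I" in (a, b) for a, b in zip(p, q))

def group_observables(paulis: List[str]) -> List[List[str]]:
    # Greedy grouping: all observables in one group can be estimated
    # from a single measurement circuit; observables that fail the
    # qubit-wise check need a separate job in the batch.
    groups: List[List[str]] = []
    for p in paulis:
        for g in groups:
            if all(qubitwise_commute(p, q) for q in g):
                g.append(p)
                break
        else:
            groups.append([p])
    return groups

print(group_observables(["ZZ", "ZI", "XX", "IX"]))
# -> [['ZZ', 'ZI'], ['XX', 'IX']]
```

Batching and this grouping are compatible: the groups just become the jobs that get submitted together.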

Best wishes,
Daniel Herr

I think this issue might have gone stale: with the noise integration, qsim became much more standard than any cirq usage (for simulation). On a tangential note, adding more supported backends (especially real, publicly accessible ones like IBM's) is something I really support and think would bring a lot of value. I'm not sure the Google team feels the same, but if so, maybe you could turn that custom IBM Quantum code into a PR (enabling a new backend: "ibm").

About the IBM sampler: I plan to make the code public. However, it first needs to go through our internal code-publication process, which might take a while.

For batching: Yes, I have the feeling that both TFQ and Pennylane mainly focus on simulation. I would argue, though, that this is largely due to the slow hardware backends; batching would be a first step toward making hardware execution somewhat usable.

But I guess we can close this if your focus is more on simulation.