CQCL / tket2

Version 2 of the TKET quantum compiler

Home Page: http://crates.io/crates/tket2

Convexity checking for sets of rewrites.

aborgna-q opened this issue

Luca's comment from #239:

This could be resolved by introducing the concept of "convex set of (convex) patterns". Checking that a set of patterns is convex is quite similar to how we are checking convexity at the moment, and we could (presumably) reuse the precomputed convex checker data structure that we are already using.

Convex set of patterns: For every pair of patterns $P_1$, $P_2$ in the set, there are no nodes $b_1, a_1 \in P_1$ and $b_2, a_2 \in P_2$ such that $b_1$ is in the past of $a_2$ and $b_2$ is in the past of $a_1$. In other words: every node in $P_1$ is either in the past of or spacelike-separated from every node in $P_2$, or every node in $P_1$ is in the future of or spacelike-separated from every node in $P_2$.

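A minimal sketch of a brute-force check of this definition (not the actual tket2 API: `Node`, `convex_pair`, `convex_set` and `is_in_past` are hypothetical placeholders, with `is_in_past` assumed to answer causal-order queries, e.g. from a precomputed reachability structure):

```rust
type Node = usize;

/// Pairwise condition from the definition: the pair (p1, p2) is fine unless
/// some node of p1 is in the past of a node of p2 AND some node of p2 is in
/// the past of a node of p1, i.e. the two patterns interleave causally.
fn convex_pair(p1: &[Node], p2: &[Node], is_in_past: impl Fn(Node, Node) -> bool) -> bool {
    let p1_before_p2 = p1.iter().any(|&b1| p2.iter().any(|&a2| is_in_past(b1, a2)));
    let p2_before_p1 = p2.iter().any(|&b2| p1.iter().any(|&a1| is_in_past(b2, a1)));
    !(p1_before_p2 && p2_before_p1)
}

/// A set of patterns is convex iff every pair satisfies the condition above.
/// Brute force: quadratic in the number of patterns and in pattern size; the
/// bullet points below discuss cheaper (or more conservative) variants.
fn convex_set(patterns: &[Vec<Node>], is_in_past: impl Fn(Node, Node) -> bool) -> bool {
    patterns.iter().enumerate().all(|(i, p1)| {
        patterns[i + 1..]
            .iter()
            .all(|p2| convex_pair(p1, p2, &is_in_past))
    })
}
```
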
This checking isn't free, unfortunately:

  • For two patterns with $q$ qubits ($q$ is always small), you could check whether the $q$ smallest nodes of $P_1$ are in the past of the $q$ largest nodes of $P_2$, XOR whether the $q$ largest nodes of $P_1$ are in the future of the $q$ smallest nodes of $P_2$. For a set of $n$ patterns, this would take time $q^2 \cdot n^2$. The $n^2$ here will be expensive :/
  • Enumerate patterns in a topsort order (this would be a simple and rather cheap change to make) and require conservatively that a pattern can only be added to a set of patterns if it is in the future of, or spacelike-separated from, every pattern already in the set. This can be checked cheap-ish-ly by keeping an array of the $Q$ largest vertices (the largest vertex for each qubit) of the patterns already in the set, where $Q$ is the total number of qubits in the input (may be large). Then for each of the $q$ qubits in the pattern, check that the smallest vertex of the pattern on that qubit is larger than the one in the stored array (see the sketch after this list). Worst case this would take time $\sim Q \cdot n$ (might be possible to make this faster), but at the cost of missing some valid sets of patterns. And it might still be too slow!

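A minimal sketch of the bookkeeping for the second, conservative option, assuming patterns arrive in topsort order and that each pattern can report, for every qubit it touches, the smallest and largest topological index it occupies there. `Frontier`, `Qubit` and `try_add` are made-up names for illustration, not tket2 types:

```rust
type Qubit = usize;

/// For each of the Q qubits of the input circuit, the largest topological
/// index occupied by any pattern already accepted into the set.
struct Frontier {
    largest: Vec<Option<usize>>,
}

impl Frontier {
    fn new(num_qubits: usize) -> Self {
        Frontier { largest: vec![None; num_qubits] }
    }

    /// Try to add a pattern, given as (qubit, smallest index, largest index)
    /// for each qubit it touches. The pattern is accepted only if it lies
    /// strictly in the future of the frontier on every one of those qubits,
    /// which is conservative: some valid sets of patterns are missed.
    fn try_add(&mut self, pattern: &[(Qubit, usize, usize)]) -> bool {
        let ok = pattern
            .iter()
            .all(|&(qb, smallest, _)| self.largest[qb].map_or(true, |f| smallest > f));
        if ok {
            // Advance the frontier on the qubits this pattern occupies.
            for &(qb, _, largest) in pattern {
                self.largest[qb] = Some(self.largest[qb].map_or(largest, |f| f.max(largest)));
            }
        }
        ok
    }
}
```

Each `try_add` only compares against the stored array on the qubits the pattern touches; the array itself has size $Q$, in line with the $\sim Q \cdot n$ worst-case estimate above for $n$ patterns.
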
On top of this checking not being free, there is the argument that we should only check this once a set of rewrites has been filtered and is actually about to be applied, instead of computing it for sets that might well be discarded later on.

EDIT: this might require precomputing slightly different stuff than what we are doing now, but shouldn't be too far off.

Originally posted by @lmondada in #239 (comment)