Can Meta-VQE beat Barren Plateaus?
One of the main problems with Quantum Neural Networks (QNNs) is that of barren plateaus: as the system grows in size (more qubits), the gradient of the loss function becomes exponentially small, which leads to untrainable circuits. Barren plateaus have various origins, for instance the expressiveness of the ansatz [4] or even the presence of noise [5].
It has been shown in [3] that a clever initialization of the parameters can avoid barren plateaus, so in this project I analyze whether the Meta-VQE [1] initialization can overcome barren plateaus. This project has two parts:
- The first part was to implement the Meta-VQE in PennyLane and apply it to the XXZ Hamiltonian;
- The second part is to compare random initialization against Meta-VQE initialization.
Results
All results are in this notebook.
Meta-VQE
I successfully implemented the Meta-VQE algorithm [1] as described in the paper, and obtained results similar to the paper's for the XXZ Hamiltonian.
Barren Plateaus
Due to lack of time, I only compared the performance of VQE under random initialization (orange) versus Meta-VQE initialization (blue). The results suggest that Meta-VQE initialization avoids barren plateaus for this model: its gradient stays roughly constant as the number of qubits grows, while under random initialization the gradient keeps shrinking.