Joint Work with Christopher Ferrie and D. G. Cory
Producing useful quantum information devices requires efficiently assessing control of quantum systems, so that we can determine whether we have implemented a desired gate, and refine accordingly. Randomized benchmarking uses symmetry to reduce the difficulty of this task.
We bound the resources required for benchmarking and show that, with prior information, gains of orders of magnitude in accuracy can be obtained. We reach these accuracies with near-optimal resources, improving dramatically on curve fitting. Finally, we show that our approach is useful for physical devices by comparing to simulations.
Slides, references and source code are available at https://www.cgranade.com/research/arb/. \(\renewcommand{\vec}[1]{\boldsymbol{#1}}\) \(\newcommand{\ket}[1]{\left|#1\right\rangle}\) \(\newcommand{\dd}{\mathrm{d}}\) \(\newcommand{\expect}{\mathbb{E}}\) \(\newcommand{\matr}[1]{\mathbf{#1}}\) \(\newcommand{\T}{\mathrm{T}}\)
To compile these slides, we use nbconvert.
!ipython nbconvert --to slides --template slides.tpl slides.ipynb
!mv slides.slides.html slides.html
[NbConvertApp] Using existing profile dir: u'/home/cgranade/.ipython/profile_default'
[NbConvertApp] Converting notebook slides.ipynb to slides
[NbConvertApp] Support files will be in slides_files/
[NbConvertApp] Loaded template slides.tpl
[NbConvertApp] Writing 225569 bytes to slides.slides.html
If you want to view them in your browser complete with speaker notes, remote control support, etc., then you need to host the slides. The instructions for Reveal.js include directions for hosting via a task runner called Grunt. Unfortunately, this doesn't work well with remot.io, as that tool requires that you serve from port 80.
Since we're going to display some <iframe>s in this talk, we'll need to import the display functionality from IPython and write a small function. These have no part in the talk itself, so we mark these cells as Skip in the Cell Toolbar.
from IPython.display import HTML
def iframe(src):
    return HTML('<iframe src="{}" width=1000 height=400></iframe>'.format(src))
Fully characterizing large quantum systems is very difficult.
For some applications, fidelity alone can be a useful figure of merit.
Fidelity isn't the full story, though (Puzzuoli et al, PRA 89 022306), so some care is needed.
Knill et al, PRA 77 012307 (2008). Magesan, Gambetta and Emerson, PRA 85 042311 (2012). Wood, in preparation.
Randomized benchmarking averages over random sequences of \(m\) Clifford gates; under the zeroth-order model, the average sequence fidelity decays as \[ F_g(m) = A p^m + B, \] where \(p\) is the decay rate and \(A\), \(B\) absorb state-preparation and measurement errors.
Knill et al, PRA 77 012307 (2008). Magesan, Gambetta and Emerson, PRA 85 042311 (2012).
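As a rough sketch (not part of the talk itself), we can simulate survival data under this zeroth-order model with NumPy; the parameter values, sequence lengths, and shot count below are purely hypothetical.

import numpy as np
# Hypothetical model parameters: decay rate p and SPAM constants A, B.
p, A, B = 0.95, 0.3, 0.5
ms = np.arange(1, 101)   # sequence lengths m = 1, ..., 100
n_shots = 100            # measurements per sequence length
# Survival probabilities under the zeroth-order model F_g(m) = A p^m + B,
# and simulated binomial survival counts at each length.
F = A * p ** ms + B
counts = np.random.binomial(n_shots, F)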
For example, interleaved benchmarking can be used to measure the fidelity of a particular gate \(S_C\) by comparing the decay rate \(\tilde{p}\) of sequences with \(S_C\) interleaved to the reference decay rate \(p_{\text{ref}}\):
Magesan et al, PRL 109 080505 (2012).
\(\tilde{p} = 0.99994\), \(p_{\text{ref}} = 0.99999\)
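As a quick back-of-the-envelope check (a sketch, not from the talk), the standard interleaved estimate of Magesan et al relates these two decay rates to the error of the interleaved gate; for a single qubit, \(d = 2\).

# Interleaved estimate r_C = (d - 1)(1 - p_tilde / p_ref) / d of Magesan et al,
# evaluated at the example decay rates quoted above (single qubit, d = 2).
d = 2
p_tilde, p_ref = 0.99994, 0.99999
r_C = (d - 1) * (1 - p_tilde / p_ref) / d
print("estimated gate error:    {:.2e}".format(r_C))      # ~ 2.5e-05
print("estimated gate fidelity: {:.6f}".format(1 - r_C))  # ~ 0.999975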
The mean error matrix \(\matr{E}(\vec{x}) = \expect_D[(\hat{\vec{x}} - \vec{x}) (\hat{\vec{x}} - \vec{x})^\T]\), with the expectation taken over data, is bounded for unbiased estimators by the Cramér-Rao bound: \[ \matr{E}(\vec{x}) \ge \matr{I}^{-1}(\vec{x}), \] where \[ \matr{I}(\vec{x}) := \expect_{D | \vec{x}} [\nabla_{\vec{x}} \log\Pr(D | \vec{x}) \cdot \nabla_{\vec{x}}^\T \log\Pr(D | \vec{x}) ] \] is the Fisher information at \(\vec{x}\).
Here, \(\vec{x} = (p, A, B)\) or \((\tilde{p}, p_{\text{ref}}, A, B)\) are the unknown parameters being estimated.
Ferrie and Granade, QIP 12 611 (2012).
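To make the Fisher information concrete, here is a hedged sketch for a single two-outcome survival measurement at sequence length \(m\): the outcome is Bernoulli with probability \(F(m) = A p^m + B\), so the Fisher information is \(\nabla_{\vec{x}} F \, \nabla_{\vec{x}}^\T F / (F(1-F))\). The parameter values and sequence lengths below are hypothetical.

import numpy as np
def fisher_information(p, A, B, m):
    # Fisher information at x = (p, A, B) for one two-outcome measurement
    # of a length-m sequence, with survival probability F = A p**m + B.
    F = A * p ** m + B
    grad_F = np.array([A * m * p ** (m - 1), p ** m, 1.0])
    return np.outer(grad_F, grad_F) / (F * (1 - F))
# Total information for 100 shots at each of a few sequence lengths
# (hypothetical values), and the corresponding Cramér-Rao bound.
I = sum(100 * fisher_information(0.95, 0.3, 0.5, m) for m in [1, 10, 50, 100])
bound = np.linalg.inv(I)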
In practice, we often have prior information. Demanding unbiased estimators is too strong a requirement.
Let's take a Bayesian approach instead. After observing a datum \(d\) taken from a sequence of length \(m\): \[ \Pr(\vec{x} | d; m) = \frac{\Pr(d | \vec{x}; m)}{\Pr(d | m)} \Pr(\vec{x}). \]
We can implement this on a computer using sequential Monte Carlo (SMC). For example, to incorporate a uniform prior:
from qinfer.smc import SMCUpdater
from qinfer.rb import RandomizedBenchmarkingModel
from qinfer.distributions import UniformDistribution
# Uniform prior over the model parameters (p, A, B).
prior = UniformDistribution([[0.9, 1], [0.4, 0.5], [0.5, 0.6]])
# SMC updater for the benchmarking model, using 10,000 particles.
updater = SMCUpdater(RandomizedBenchmarkingModel(), 10000, prior)
# As data arrives:
# updater.update(datum, experiment)
Granade et al, NJP 14 103013 (2012).
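Once data have been incorporated, the posterior mean and covariance summarize the estimate and its uncertainty. A minimal sketch using the updater defined above (method names as provided by QInfer's SMCUpdater):

# Posterior summaries of (p, A, B) after the data have been processed.
x_est = updater.est_mean()              # posterior mean estimate
cov_est = updater.est_covariance_mtx()  # posterior covariance (uncertainty)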
With prior information, we need the Bayesian Cramér-Rao bound, \[ \expect_{\vec{x}} [\matr{E}(\vec{x})] \ge \matr{J}^{-1}, \] where \[ \matr{J} := \expect_{\vec{x}} [\matr{I}(\vec{x})] \] is the Bayesian information matrix.
This, too, can be computed using SMC.
from qinfer.smc import SMCUpdaterBCRB
updater = SMCUpdaterBCRB(RandomizedBenchmarkingModel(), 10000, prior)
# As data arrives, the BCRB is given by:
# updater.current_bim
Ferrie and Granade, QIP 12 611 (2012). Granade et al, NJP 14 103013 (2012).
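After the updates, the achieved error can be compared against this bound; a brief sketch, reusing the updater defined above:

import numpy as np
# Compare the achieved posterior covariance to the Bayesian Cramér-Rao bound,
# which is the inverse of the current Bayesian information matrix.
bcrb = np.linalg.inv(updater.current_bim)
achieved = updater.est_covariance_mtx()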
The SMC-accelerated algorithm outperforms least-squares fitting, especially with small amounts of data.
This advantage persists as the maximum sequence length is varied.
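For reference, here is a sketch of what such a least-squares fit might look like using SciPy; the simulated data and starting guess are hypothetical, not the talk's actual analysis code.

import numpy as np
from scipy.optimize import curve_fit
# Hypothetical survival frequencies under F(m) = A p**m + B, 100 shots each.
p_true, A_true, B_true = 0.95, 0.3, 0.5
ms = np.arange(1, 101)
freqs = np.random.binomial(100, A_true * p_true ** ms + B_true) / 100.0
def model(m, p, A, B):
    return A * p ** m + B
# Least-squares fit of (p, A, B) to the simulated frequencies.
(p_lsf, A_lsf, B_lsf), _ = curve_fit(model, ms, freqs, p0=[0.9, 0.45, 0.55])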
To show that SMC acceleration is experimentally useful, we use a prior whose mean is approximately 7 standard deviations away from the correct values for a cumulant-simulated gate set.
The data was simulated using the methods of Puzzuoli et al, PRA 89 022306.
Even with a significantly bad prior, SMC does quite well.
\[\begin{array}{l|cccc}
 & \tilde{p} & p_{\text{ref}} & A_0 & B_0 \\ \hline
\text{True} & 0.9983 & 0.9957 & 0.3185 & 0.5012 \\
\text{SMC Estimate} & 0.9940 & 0.9968 & 0.3071 & 0.5134 \\
\text{LSF Estimate} & 0.9947 & 0.9972 & 0.3369 & 0.4820 \\ \hline
\text{SMC Error} & 0.0043 & 0.0011 & 0.0113 & 0.0122 \\
\text{LSF Error} & 0.0036 & 0.0015 & 0.0184 & 0.0192
\end{array}\]
Due to the bad prior, SMC doesn't outperform least-squares fitting for \(\tilde{p}\) in this case, but it does very well for \(p_{\text{ref}}\), \(A\), and \(B\), lending credibility to the estimate.
We have developed a flexible and easy-to-use Python library, QInfer, for implementing SMC-based applications.
iframe("http://python-qinfer.readthedocs.org/en/latest/")
Full reference information is available on Zotero.
iframe('https://www.zotero.org/cgranade/items/collectionKey/2NQVPRK9')