Several challenges stand in the way of developing useful computation at the 100-qubit scale. Current methods for the characterization of large quantum devices, for instance, demand resources that grow exponentially with the number of qubits. Moreover, once a candidate for such a small-scale quantum processor is developed, verifying and certifying its dynamics is beyond the scale of what can be achieved using purely classical resources. In this talk, I will argue that both of these problems can be addressed by using classical statistical inference techniques to reduce them to problems of quantum simulation. In particular, I will introduce the classical methods that we build upon, show how quantum simulation and communication can be applied as resources, and describe experiments in progress that demonstrate the utility of these techniques.
https://www.cgranade.com/research/talks/unm-2014 \(\renewcommand{\vec}[1]{\boldsymbol{#1}}\) \(\newcommand{\ket}[1]{\left|#1\right\rangle}\)
Joint Work with Nathan Wiebe, Christopher Ferrie, Ian Hincks, Rahul Deshpande and D. G. Cory
To compile these slides, we use nbconvert.
!ipython nbconvert --to slides --template slides.tpl slides.ipynb
!mv slides.slides.html slides.html
[NbConvertApp] Using existing profile dir: u'/home/cgranade/.config/ipython/profile_default' [NbConvertApp] Converting notebook slides.ipynb to slides [NbConvertApp] Support files will be in slides_files/ [NbConvertApp] Loaded template slides.tpl [NbConvertApp] Writing 254657 bytes to slides.slides.html
If you want to view them in your browser complete with speaker notes, remote control support, etc., then you need to host the slides. The instructions for Reveal.js include directions for hosting via a task runner called Grunt. Unfortunately, this doesn't work well with remot.io, as that tool requires that you serve from port 80.
Since we're going to display some <iframe>s in this talk, we'll need to import the display functionality from IPython and write a small function. These have no part in the talk itself, so we mark these cells as Skip in the Cell Toolbar.
from IPython.display import HTML, YouTubeVideo
def iframe(src):
    return HTML('<iframe src="{}" width=1000 height=400></iframe>'.format(src))
In order for quantum computing to be useful, we need to be able to build, characterize and verify quantum devices at scales well past current techniques.
In this talk, I will focus on learning the Hamiltonian dynamics of large quantum systems, at scales approaching 100 qubits.
The characterization of quantum systems is a rich field.
We start by reviewing a few approaches, their advantages, and what prevents them from being applicable at the 100-qubit scale.
State and process tomography can be used iteratively to probe Hamiltonian dynamics.
[BH+03]: estimating Lindblad operators from density-operator measurements at multiple time points
[MGE12]: randomized benchmarking of quantum gates
[BG+13]: robust, self-consistent tomography of quantum logic gates
[dSLP11]: practical characterization of quantum devices without tomography
We can build upon these techniques by exploiting a key insight from Bayesian inference: simulation is a resource for characterization.
To see this, suppose that we have a probability \(\Pr(d | H)\) of obtaining data \(d\) from a system whose Hamiltonian is \(H\). Then, by Bayes' rule,
\[ \Pr(H | d) = \frac{\Pr(d | H)}{\Pr(d)} \Pr(H). \]
By simulating according to each hypothesis \(H\) to evaluate \(\Pr(d | H)\), we can thus find a probability distribution \(\Pr(H | d)\) over the possible Hamiltonians of a system of interest, conditioned on the experimental data.
This is very powerful, as it allows us to bring our best simulation resources to bear on the problem of characterizing large quantum systems.
Importantly, \(\Pr(d | H)\) need not be evaluated by a classical simulation, but can be estimated using quantum resources. This will then enable us to use a known quantum system as a tool to verify the dynamics of a system under study.
In the rest of this talk, I will describe:
To build our approach, we express the problem of characterizing quantum systems as one of parameter estimation. Thus, instead of reasoning directly about the Hamiltonian \(H\), we consider a parameterization \(\vec{x}\) such that \(H = H(\vec{x})\). This offers two distinct advantages:
The dimension of \(\vec{x}\) can be substantially smaller than that of \(H\) by restricting the parameterization to a specific model.
For example, if we know that the system follows an Ising model:
\[ H = \sum_{\langle i, j \rangle} J_{i,j} \sigma_z^{(i)} \sigma_z^{(j)} \\ \vec{x} = \left(J_{i,j}\right)_{\langle i, j \rangle} \\ \dim \vec{x} = {n \choose 2} \ll \dim \mathcal{H} = 4^n \]
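As a rough illustration (the helper names here are ours, not from any library), a fully-connected Ising Hamiltonian and its parameter vector can be constructed directly with NumPy for a handful of qubits:
import itertools
import numpy as np

# Single-qubit Pauli-Z and identity.
Z = np.diag([1.0, -1.0])
I2 = np.eye(2)

def zz_term(n, i, j):
    # Tensor product with sigma_z at sites i and j and identity elsewhere.
    ops = [Z if k in (i, j) else I2 for k in range(n)]
    out = ops[0]
    for op in ops[1:]:
        out = np.kron(out, op)
    return out

def ising_hamiltonian(x, n):
    # H(x) = sum_{i < j} J_{ij} sigma_z^(i) sigma_z^(j); x lists the couplings J_{ij}.
    pairs = list(itertools.combinations(range(n), 2))
    return sum(J * zz_term(n, i, j) for J, (i, j) in zip(x, pairs))

# Example: n = 3 qubits, so dim(x) = 3 even though H is an 8 x 8 matrix.
H = ising_hamiltonian(np.array([0.5, -0.2, 0.1]), n=3)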
We can also include in \(\vec{x}\) not just Hamiltonian parameters, but also parameters describing the details of a particular implementation.
For example:
Image courtesy of Ian Hincks.
In NV centers, the electron spin degree of freedom \(\vec{S}\) couples to a \({}^{13}\text{C}\) spin \(\vec{I}\) by a hyperfine interaction, such that
\[ H = \Delta_{\text{zfs}} S_z^2 + \gamma_e \vec{B} \cdot \vec{S} + \gamma_C \vec{B} \cdot \vec{I} + \vec{S} \cdot \mathbf{A} \cdot \vec{I}, \]
where \(\vec{B}\) is the magnetic field, and where \(\Delta_{\text{zfs}}\) is the zero-field splitting.
Let \(\vec{B} = \vec{B}_0 + \delta \vec{B}\), where \(\delta \vec{B}\) is an unknown error in setting the magnetic field. Moreover, choose the coordinate system such that \[ \mathbf{A} = \left( \begin{matrix} A_{xx} & 0 & A_{xz} \\ 0 & A_{yy} & 0 \\ A_{xz} & 0 & A_{zz} \end{matrix} \right) \] contributes four real parameters.
Having adopted a parameterization \(\vec{x}\), at any point, we can describe our knowledge of \(\vec{x}\) by a probability distribution \(\Pr(\vec{x})\).
We can learn the value of these parameters that best explains our data by using Bayes' rule to update our knowledge about \(\vec{x}\), finding a new distribution \(\Pr(\vec{x} | D)\) for a data record \(D = \{d_1, d_2, \dots, d_N\}\).
\[ \Pr(\vec{x} | d_1, \dots, d_N) = \frac{\Pr(d_N | \vec{x})}{\Pr(d_N)} \Pr(\vec{x} | d_1, \dots, d_{N - 1}). \]
In this way, we see that Bayes' rule suggests an iterative algorithm, in which the prior distribution at each step is taken to be the posterior distribution from the previous step.
The Sequential Monte Carlo (SMC) algorithm from classical statistics allows us to find and sample from the final posterior distribution \(\Pr(\vec{x} | D)\) on a classical computer by representing the Bayesian update \(\Pr(\vec{x} | d_1, \dots, d_i) \mapsto \Pr(\vec{x} | d_1, \dots, d_{i+1})\) as a Markov chain acting on a set of hypotheses drawn from the initial prior.
In SMC, we approximate the distribution at each step by a mixture of \(n_p\) \(\delta\)-distributions: \[ \Pr(\vec{x}) = \sum_{i = 1}^{n_p} w_i \delta(\vec{x} - \vec{x}_i). \]
The Bayes update can then be expressed as a finite number of simulations, \[ w_i \mapsto w_i \Pr(d | \vec{x}_i) / \mathcal{N}, \] where \(\mathcal{N}\) can be found by normalization.
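As a minimal sketch (assuming a likelihood(d, x) function that returns \(\Pr(d | \vec{x})\), which is exactly the part we will later delegate to a simulator), the update of the weights looks like:
import numpy as np

def smc_update(ws, xs, d, likelihood):
    # One Bayes update: reweight each particle x_i by Pr(d | x_i), then renormalize.
    ws = ws * np.array([likelihood(d, x) for x in xs])
    return ws / np.sum(ws)
In practice the particles are also resampled when the weights become too concentrated (for instance when \(1 / \sum_i w_i^2\) falls below a threshold), but that step is omitted here.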
YouTubeVideo('AFsoG9N6gbk', rel=0, showinfo=0)
Given that SMC uses simulation as a resource, it follows that we can exploit quantum simulation as a resource to characterize quantum systems. Thus, instead of computing \(\Pr(d | \vec{x})\) using a classical computer, we can sample a quantum simulator for \(H(\vec{x})\).
To learn \(H(\vec{x})\), we use the two-outcome likelihood function \[ \Pr(0 | \vec{x}; t) = \left|\left\langle\psi | e^{-i H(\vec{x}) t} | \psi \right\rangle\right|^2 \] for some state \(\ket{\psi}\).
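For small systems this likelihood can be evaluated by direct matrix exponentiation, which is exactly the classical simulation step that becomes intractable at large scales; a rough sketch using SciPy:
import numpy as np
from scipy.linalg import expm

def likelihood_zero(H, psi, t):
    # Pr(0 | x; t) = |<psi| exp(-i H(x) t) |psi>|^2.
    return np.abs(psi.conj() @ expm(-1j * H * t) @ psi) ** 2

# Single-qubit check: H = (omega / 2) sigma_z and psi = |+> gives cos^2(omega t / 2).
omega = 1.3
H = 0.5 * omega * np.diag([1.0, -1.0])
psi = np.array([1.0, 1.0]) / np.sqrt(2)
print(likelihood_zero(H, psi, t=2.0))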
The first step is to replace the evaluation of \(\Pr(d | \vec{x}_i)\) with a quantum simulator.
\[ \hat{p}_{i,\text{MLE}} := \frac{|\{d' \in D'_i : d' = d\}|}{|D'_i|} \]
Essentially, we are comparing classical outcomes of measurements on an unknown and trusted quantum system.
\[ \hat{p}_{i, \text{MLE}} = p_i + \eta \]
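In code, the frequency estimate above is simply the fraction of simulator samples that reproduce the observed datum; a toy sketch with a stand-in two-outcome simulator:
import numpy as np

def estimate_likelihood(sample_outcome, d, n_samples=100):
    # \hat{p}_{i,MLE}: the fraction of simulated outcomes d' that match the datum d.
    outcomes = np.array([sample_outcome() for _ in range(n_samples)])
    return np.mean(outcomes == d)

# Stand-in simulator: returns 0 with probability p_i (the true likelihood).
p_i = 0.73
p_hat = estimate_likelihood(lambda: int(np.random.random() >= p_i), d=0)
# p_hat = p_i + eta, where eta has standard deviation sqrt(p_i (1 - p_i) / n_samples).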
In addition to applying quantum simulation as a resource for estimating likelihoods, we can also use quantum simulation to give us additional experimental controls and to extend the evolution times which we can use to learn.
We do so by coupling a trusted simulator to the system under study, so that we can invert the evolution according to a hypothesis about the system under study.
The experimental data \(d\) is drawn by coupling the two systems.
For each hypothesis, a simulation record \(D'_i\) is drawn on the trusted system alone.
Let's consider a single-qubit example, \[ H(\omega) = \frac{\omega}{2}\sigma_z. \] Suppose we invert by \(H(\omega_-)\) for some \(\omega_-\) that we get to choose.
The likelihood for this experiment is then \[ \Pr(0 | \omega; \omega_-, t) = \cos^2([\omega - \omega_-] t / 2). \]
This is especially important when there are many parameters, so that we have enough experimental controls to test our hypothesis about the system of interest.
To do this, we draw two hypotheses \(\vec{x}_-\) and \(\vec{x}'\) from our current distribution over \(\vec{x}\), invert by \(H(\vec{x}_-)\), and evolve for a time \(t = 1/\lVert \vec{x}_- - \vec{x}' \rVert\).
This heuristic requires no simulation, but adapts to the current uncertainty about \(H\).
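A minimal sketch of such a guess heuristic, assuming the current distribution is represented by SMC particles xs with normalized weights ws:
import numpy as np

def particle_guess_heuristic(xs, ws):
    # Draw two hypotheses from the current SMC approximation; invert by the first,
    # and evolve for a time set by their inverse separation, so that t grows as
    # the distribution narrows.
    i, j = np.random.choice(len(xs), size=2, replace=False, p=ws)
    x_minus, x_prime = xs[i], xs[j]
    t = 1.0 / np.linalg.norm(np.atleast_1d(x_minus) - np.atleast_1d(x_prime))
    return x_minus, t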
We test how robust SMC, QLE and IQLE are to errors in three distinct ways:
Additionally, we show that IQLE continues to work well with non-commuting models.
We test the robustness of IQLE to errors in the SWAP gates by adding depolarizing noise of strength \(\mathcal{N}\).
Next, we consider the performance of SMC when the likelihood function being used is itself an estimate of the true likelihood function,
\[ \widehat{\Pr}(D | \vec{x}; \vec{e}) = \Pr(D | \vec{x}; \vec{e}) + \eta, \]
where \(\eta \sim \mathcal{N}(0, \sigma^2)\) is an error introduced into the simulation.
More formally, we have shown analytically that the cost of performing a QLE or IQLE update scales as \[ \frac{|\{\vec{x}_i\}|}{\epsilon^2}\left(\mathbb{E}_{d|\vec{x}}\left[\frac{\max_k \Pr(d|\vec{x}_k)(1-\Pr(d|\vec{x}_k))}{\left( {\sum_k \Pr(d|\vec{x}_k)\Pr(\vec{x}_k)}\right)^2}\right]\right) \] for an error \(\epsilon\) in the posterior probability.
In this case, we can gain some benefit by using model selection to decide if we have included enough parameters to describe the full dynamics.
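As a rough sketch of how this can work within SMC: the normalization computed at each update is \(\Pr(d_i | d_1, \dots, d_{i-1})\), so accumulating its logarithm over the data record gives the log-evidence for a model, and the difference of log-evidences between two candidate parameterizations (for instance, a reduced model against the full two-parameter Hamiltonian below) is the log Bayes factor:
import numpy as np

def log_evidence(ws, xs, data, likelihood):
    # Accumulate log Pr(D | model) from the SMC normalizations while updating the weights.
    total = 0.0
    for d in data:
        ls = np.array([likelihood(d, x) for x in xs])
        norm = np.sum(ws * ls)      # Pr(d_i | d_1, ..., d_{i-1}) under this model
        total += np.log(norm)
        ws = ws * ls / norm         # the usual Bayes update
    return total

# log Bayes factor between two candidate models, each with its own particles and likelihood:
# log_K = log_evidence(ws_a, xs_a, D, likelihood_a) - log_evidence(ws_b, xs_b, D, likelihood_b)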
Consider the translationally-invariant two-parameter Hamiltonian \[ H(\vec{x}) = \vec{x}_1\sum_{k=1}^n \sigma_x^{(k)} + \vec{x}_2\sum_{k=1}^{n-1} \sigma_z^{(k)} \otimes \sigma_z^{(k+1)}. \]
With IQLE, we can still get exponential learning by choosing pseudorandom separable input states:
In addition to classically simulating quantum simulators, we want to be able to show experimental evidence to verify our methods.
Using two \({}^{13}\text{C}\) and two \(\text{H}\) spins, we obtain two coupled two-qubit subsystems.
Treat the \(\text{H}\) spins as the untrusted register and the \({}^{13}\text{C}\) spins as the trusted simulator.
\[ H(\omega_1, \omega_2, J) = \omega_1 \sigma_z^{(1)} / 2 + \omega_2 \sigma_z^{(2)} / 2 + J \sigma_z^{(1)} \sigma_z^{(2)}. \]
\[ U(\vec{x}_-, t) = \exp(+i \tau_3 J_C \sigma_z^{(1)} \sigma_z^{(2)}) \exp(+i \tau_2 \omega_{2,C} \sigma_z^{(2)} / 2) \exp(+i \tau_1 \omega_{1,C} \sigma_z^{(1)} / 2), \] where \(\tau_1\), \(\tau_2\) and \(\tau_3\) are chosen such that \(U(\vec{x}_-, t) = \exp(+i H(\vec{x}_-) t)\), the inverse of the evolution under the hypothesis \(\vec{x}_-\) for a time \(t\).
NMR uses ensemble measurement, so the data is a time record of the free induction decay, which can be simulated with a Wiener process \[ dR = \langle\rho(t) \hat{O}\rangle\ dt + dW. \]
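A minimal sketch of simulating such a record by Euler discretization, with a placeholder decaying oscillation standing in for \(\langle\rho(t)\hat{O}\rangle\):
import numpy as np

def simulate_fid(expectation, t_max, dt):
    # Integrate dR = <rho(t) O> dt + dW on a grid, with Wiener increments dW ~ N(0, dt).
    ts = np.arange(0.0, t_max, dt)
    dW = np.sqrt(dt) * np.random.randn(len(ts))
    return ts, np.cumsum(expectation(ts) * dt + dW)

ts, R = simulate_fid(lambda t: np.cos(2 * np.pi * t) * np.exp(-t / 5.0), t_max=20.0, dt=0.01)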
We have developed a flexible and easy-to-use Python library, QInfer, for implementing SMC-based applications.
iframe("http://python-qinfer.readthedocs.org/en/latest")
In the future, we plan on extending these results by
[BG+13] R. Blume-Kohout, J. K. Gamble, E. Nielsen, J. Mizrahi, J. D. Sterk, and P. Maunz, “Robust, self-consistent, closed-form tomography of quantum logic gates on a trapped ion qubit,” arXiv:1310.4492 [quant-ph], Oct. 2013.
[BH+03] N. Boulant, T. F. Havel, M. A. Pravia, and D. G. Cory, “Robust method for estimating the Lindblad operators of a dissipative quantum process from measurements of the density operator at multiple time points,” Phys. Rev. A, vol. 67, no. 4, p. 042322, Apr. 2003.
[dSLP11] M. P. da Silva, O. Landon-Cardinal, and D. Poulin, “Practical Characterization of Quantum Devices without Tomography,” Phys. Rev. Lett., vol. 107, no. 21, p. 210404, Nov. 2011.
[FG13] C. Ferrie and C. E. Granade, “Likelihood-free quantum inference: tomography without the Born Rule,” arXiv e-print 1304.5828, Apr. 2013.
[GFWC12] C. E. Granade, C. Ferrie, N. Wiebe, and D. G. Cory, “Robust online Hamiltonian learning,” New Journal of Physics, vol. 14, no. 10, p. 103013, Oct. 2012.
[HW12] M. J. W. Hall and H. M. Wiseman, “Does Nonlinear Metrology Offer Improved Resolution? Answers from Quantum Information Theory,” Phys. Rev. X, vol. 2, no. 4, p. 041006, Oct. 2012.
[MGE12] E. Magesan, J. M. Gambetta, and J. Emerson, “Characterizing Quantum Gates via Randomized Benchmarking,” Physical Review A, vol. 85, no. 4, Apr. 2012.
[WG+12a] N. Wiebe, C. Granade, C. Ferrie, and D. G. Cory, “Hamiltonian Learning and Certification Using Quantum Resources,” arXiv:1309.0876 [quant-ph], Sep. 2013.
[WG+12b] N. Wiebe, C. Granade, C. Ferrie, and D. G. Cory, “Quantum Hamiltonian Learning Using Imperfect Quantum Resources,” arXiv:1311.5269 [quant-ph], Nov. 2013.
These references are also available on Zotero.
iframe('https://www.zotero.org/cgranade/items/collectionKey/UWFT2XAI')