Cassandra E. Granade
Centre for Engineered Quantum Systems
https://www.cgranade.com/research/talks/griffiths/05-2016
10/s87,
10/bh6w
\newcommand{\ee}{\mathrm{e}}
\newcommand{\ii}{\mathrm{i}}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\id}{\mathbb{1}}
\newcommand{\TT}{\mathrm{T}}
\newcommand{\defeq}{\mathrel{:=}}
\newcommand{\Tr}{\operatorname{Tr}}
\newcommand{\Var}{\operatorname{Var}}
\newcommand{\Cov}{\operatorname{Cov}}
\newcommand{\rank}{\operatorname{rank}}
\newcommand{\expect}{\mathbb{E}}
\newcommand{\sket}[1]{|#1\rangle\negthinspace\rangle}
\newcommand{\sbraket}[1]{\langle\negthinspace\langle#1\rangle\negthinspace\rangle}
\newcommand{\Gini}{\operatorname{Ginibre}}
\newcommand{\supp}{\operatorname{supp}}
\newcommand{\ket}[1]{\left|#1\right\rangle}
\newcommand{\bra}[1]{\left\langle#1\right|}
\newcommand{\braket}[1]{\left\langle#1\right\rangle}
joint work with Christopher Ferrie
contributions from:
Steven Casagrande, Ian Hincks, Jonathan Gross, Michal Kononenko, Thomas Alexander, and Yuval Sanders
Characterization plays a number of different roles in quantum information experiments, and all of these roles are examples of parameter estimation.
Given data D and a model parameterized by \vec{x}, what should we estimate \vec{x} to be?
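The Bayesian answer, and the one we adopt here, is to condition on the data using Bayes' rule:
\Pr(\vec{x} | D) = \frac{\Pr(D | \vec{x}) \Pr(\vec{x})}{\Pr(D)}.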
From an experimental perspective, parameter estimation isn't the point, but a tool to get things done.
Suppose H = \omega \sigma_z / 2 for some unknown \omega.
To learn \omega: prepare \ket{+}, evolve under H for a time t, measure in the \sigma_x basis, and repeat for a range of times t.
You'll get a sinusoidal signal, \Pr(1) = \sin^2(\omega t / 2), that looks a bit like this:
What's \omega? Fourier transform and look at the peak.
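As a sketch of the Fourier approach (reusing the signal model from the QInfer example later in this talk; the name est_omega is ours):

import numpy as np

# Simulate the ideal signal Pr(1) = sin^2(omega t / 2).
true_omega = 70.3
ts = np.pi * np.arange(1, 101) / (2 * 100.0)
signal = np.sin(true_omega * ts / 2) ** 2

# The oscillating part of the signal is cos(omega t), so the power
# spectrum peaks near f = omega / (2 pi).
spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(len(ts), d=ts[1] - ts[0])
est_omega = 2 * np.pi * freqs[np.argmax(spectrum)]

The estimate is limited to the 2\pi / T frequency grid of the transform, which is part of what motivates the Bayesian approach below.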
We can do better.
Examples:
Sergeevich et al. 10/c4vv95,
Ferrie et al. 10/tfx,
Hall and Wiseman 10/bh6v
Our goal is to make useful tools for parameter estimation that work in practice, in a statistically-principled fashion, making it easier to get experiments done.
Our theoretical basis will be the particle filter, also known as sequential Monte Carlo (SMC).
Represent our beliefs about the model by a set of hypotheses \{\vec{x}_i\}, along with their weights \{w_i\}.
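That is, the posterior is approximated by a weighted sum of delta functions,
\Pr(\vec{x}) \approx \sum_i w_i \delta(\vec{x} - \vec{x}_i),
so that Bayes' rule acts on the weights alone:
w_i \mapsto w_i \Pr(d | \vec{x}_i) \big/ \sum_j w_j \Pr(d | \vec{x}_j).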
Numerical stability is provided by resampling:
Preserves estimates and errors of hypotheses \{\vec{x}\}, while restoring stability of the approximation.
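For instance, QInfer's default resampler follows Liu and West, drawing each new particle near a randomly chosen old one,
\vec{x}' \sim \mathcal{N}\left(a \vec{x}_i + (1 - a) \vec{\mu},\ (1 - a^2) \Sigma\right),
where \vec{\mu} and \Sigma are the current posterior mean and covariance, so that both are preserved by construction.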
>>> import qinfer
Implements particle filtering, with support for common quantum information models, including frequency estimation, randomized benchmarking, and tomography.
Thus, QInfer is easy to install and get started with:
$ pip install qinfer
Works on Python 2.7, 3.3, 3.4, and 3.5 with the Anaconda Distribution.
import numpy as np
import qinfer as qi
# Make the data...
true_omega = 70.3
n_shots = 100
ts = np.pi * np.arange(1, 101) / (2 * 100.0)
signal = np.sin(true_omega * ts / 2) ** 2
counts = np.random.binomial(n=n_shots, p=signal)
# ...and then process it.
data = np.column_stack([counts, ts, n_shots * np.ones(len(ts))])
est_mean, est_cov = qi.simple_est_prec(data, freq_min=0, freq_max=100)
% Make the data...
true_omega = 70.3;
n_shots = 400;
ts = pi * (1:1:100) / (2 * 100);
signal = sin(true_omega * ts / 2) .^ 2;
counts = binornd(n_shots, signal);
% ... and then process it.
setenv MKL_NUM_THREADS 1  % limit MKL to one thread (avoids MATLAB/NumPy MKL conflicts)
data = py.numpy.column_stack({counts ts ...
n_shots * ones(1, size(ts, 2))});
est = py.qinfer.simple_est_prec(data, ...
pyargs('freq_min', 0, 'freq_max', 100));
using PyCall          # provides @pyimport
using Distributions   # provides Binomial
@pyimport numpy as np
@pyimport qinfer as qi
# Make the data...
true_omega = 70.3
n_shots = 100
ts = pi * (1:1:100) / (2 * 100)
signal = sin(true_omega * ts / 2) .^ 2
counts = map(p -> rand(Binomial(n_shots, p)), signal);
# ...and then process it.
data = [counts'; ts'; n_shots * ones(length(ts))']'
est_mean, est_cov = qi.simple_est_prec(data, freq_min=0, freq_max=100)
QInfer is built up of several main components:

Model
: Specifies a model for what parameters describe an experiment.

Distribution
: Specifies what is known about those parameters at the start.

SMCUpdater
: Uses sequential Monte Carlo to update knowledge based on data.

Heuristic
: Selects new experiments to perform.

Parameter estimation problems are specified as models, defining parameters of interest, what data looks like, etc.
>>> SimplePrecessionModel()
>>> BinomialModel(RandomizedBenchmarkingModel())
>>> BinomialModel(TomographyModel(basis))
Models expose two very important functionalities:

simulate_experiment
: Simulates data d from an experiment e, according to a set of model parameters \vec{x}.

likelihood
: Returns the probability \Pr(d | \vec{x}; e) of observing d in an experiment e, given model parameters \vec{x}.

outcomes = np.array([1])     # a single "1" outcome
modelparams = np.array([w])  # w: a hypothesis for omega
expparams = ts               # the evolution times from before
L = SimplePrecessionModel().likelihood(
    outcomes, modelparams, expparams
)
plt.plot(ts, L[0, 0, :])
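To see what the model contract looks like from the inside, here is a minimal sketch of a hand-rolled precession model; the class name is ours, and the interface follows QInfer's FiniteOutcomeModel:

import numpy as np
import qinfer as qi

class CustomPrecessionModel(qi.FiniteOutcomeModel):
    # Hypothetical re-implementation of Pr(1 | omega; t) = sin^2(omega t / 2).
    @property
    def n_modelparams(self):
        return 1                        # just omega
    @property
    def is_n_outcomes_constant(self):
        return True
    def n_outcomes(self, expparams):
        return 2                        # single-shot 0/1 outcomes
    def are_models_valid(self, modelparams):
        return np.all(modelparams >= 0, axis=1)
    @property
    def expparams_dtype(self):
        return [('t', 'float')]
    def likelihood(self, outcomes, modelparams, expparams):
        # Broadcasts (n_models, 1) against (n_experiments,) to give pr0
        # with shape (n_models, n_experiments).
        pr0 = np.cos(modelparams * expparams['t'] / 2) ** 2
        return qi.FiniteOutcomeModel.pr0_to_likelihood_array(outcomes, pr0)

An instance can then stand in for SimplePrecessionModel in the examples above (modulo the structured expparams dtype).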
>>> UniformDistribution([0, omega_max])
Represents that \omega \in [0, \omega_\max].
Distributions can also be combined in different ways:
>>> ProductDistribution(
... NormalDistribution([0.9, 0.1 ** 2]),
... UniformDistribution([0, 1]),
... ConstantDistribution(0)
... )
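Writing a custom prior is similarly simple: a distribution needs only an n_rvs property and a sample method returning an (n, n_rvs) array. As a sketch (this log-uniform prior is a hypothetical example, not part of QInfer):

import numpy as np
import qinfer as qi

class ExampleLogUniformDistribution(qi.Distribution):
    # Hypothetical prior, uniform in log(omega) over [low, high].
    def __init__(self, low, high):
        self._low, self._high = low, high

    @property
    def n_rvs(self):
        return 1

    def sample(self, n=1):
        log_samples = np.random.uniform(
            np.log(self._low), np.log(self._high), size=(n, 1))
        return np.exp(log_samples)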
Typically, once you have a model and a prior, learning parameters then proceeds by looping over data:
>>> updater = SMCUpdater(model, n_particles, prior)
>>> for idx in range(n_measurements):
... experiment = ... # select the next experiment
... datum = ... # make a measurement
... updater.update(datum, experiment)
>>> est = updater.est_mean()
The updated distribution provides estimates, error bars, and plots.
>>> mean, cov, extra = qi.simple_est_rb(
... data, p_min=0.8, return_all=True
... )
>>> print(mean[0], "±", np.sqrt(cov)[0, 0])
0.991188359708 ± 0.0012933975599
>>> print(np.sqrt(np.diag(cov)))
>>> extra['updater'].plot_posterior_marginal(idx_param=0)
>>> extra['updater'].plot_covariance(corr=True)
Heuristics can be used to design measurements.
For example, t_k = ab^k is optimal for non-adaptive Rabi/Ramsey/phase estimation.
>>> heuristic = ExpSparseHeuristic(updater, scale=a, base=b)
QInfer implements heuristics as functions which provide new experiments. For instance, using a heuristic class heuristic_class and simulating data, we can flesh out the updater loop.
>>> updater = SMCUpdater(model, n_particles, prior)
>>> heuristic = heuristic_class(updater)
>>> for idx in range(n_measurements):
... experiment = heuristic()
... datum = model.simulate_experiment(true_model, experiment)
... updater.update(datum, experiment)
>>> est = updater.est_mean()
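Since a heuristic is just a callable that returns the next experiment, writing one by hand is straightforward. A minimal sketch (the class name is ours, and we assume a model whose expparams_dtype has a single 't' field):

import numpy as np

class MyExpHeuristic(object):
    # Hypothetical hand-rolled t_k = a * b**k heuristic.
    def __init__(self, updater, a=1.0, b=2.0):
        self._updater = updater
        self._a, self._b = a, b
        self._k = 0

    def __call__(self):
        eps = np.zeros((1,), dtype=self._updater.model.expparams_dtype)
        eps['t'] = self._a * self._b ** self._k
        self._k += 1
        return eps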
import qinfer as qi
from qutip import *
I, X, Y, Z = qeye(2), sigmax(), sigmay(), sigmaz()
basis = qi.tomography.pauli_basis(1)
prior = qi.tomography.GinibreReditDistribution(basis)
model = qi.BinomialModel(qi.tomography.TomographyModel(basis))
updater = qi.SMCUpdater(model, 2000, prior)
heuristic = qi.tomography.RandomPauliHeuristic(updater,
    other_fields={'n_meas': 40}
)
# Choose a true state to simulate against, e.g. by sampling the prior.
true_state = prior.sample()
for idx_exp in range(50):
    experiment = heuristic()
    datum = model.simulate_experiment(true_state, experiment)
    updater.update(datum, experiment)
In both of these examples, we assumed that the true model was known. This lets us quickly assess how well QInfer works for a given model.
>>> from functools import partial
>>> performance = perf_test_multiple(
... n_trials=400,
... model=BinomialModel(SimplePrecessionModel()),
... n_particles=2000,
... prior=UniformDistribution([0, 1]),
... n_exp=200,
... heuristic_class=partial(
... ExpSparseHeuristic, other_fields={'n_meas': 40}
... )
... )
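The result is a record array over trials and experiments; for example, the average quadratic loss can be plotted as follows (a sketch, assuming the 'loss' field of QInfer's performance-testing results):

>>> import matplotlib.pyplot as plt
>>> plt.semilogy(performance['loss'].mean(axis=0))
>>> plt.xlabel('# of experiments')
>>> plt.ylabel('average quadratic loss')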
QInfer also supports time-dependent parameter estimation by adding an update rule to hypothesis positions as well as weights:
\vec{x}(t_{k + 1}) - \vec{x}(t_k) \sim \mathcal{N}(0, \sigma^2).
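In QInfer, this update rule is provided by wrapping a model with RandomWalkModel; a minimal sketch (the step variance here is an arbitrary illustrative value):

>>> model = qi.RandomWalkModel(
...     qi.BinomialModel(qi.SimplePrecessionModel()),
...     qi.NormalDistribution(0, 0.005 ** 2)  # step distribution N(0, sigma^2)
... )
>>> updater = qi.SMCUpdater(model, 2000, qi.UniformDistribution([0, 1]))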
Diffusive estimation can still work even if the underlying trajectory is deterministic.
For example, suppose a coin bias evolves deterministically as p(t) = \frac12 \left(\cos^2(\omega_1 t / 2) + \cos^2(\omega_2 t / 2)\right).
Our hope is that QInfer will thus be a useful tool for theory and experiment alike.
Version 1.0 coming soon.