# Verifying continuous-variable Bell correlations

Posted on 23 May 2018

# Introduction

Continuous-variable (CV) quantum information is far less studied than the qubit/qudit case. Having spent a few days trying to learn the basics, I can understand why. The theory is often still tied to experimental setups, so where I would expect a formula for a state, I get something like "the squeezed states were created in the side bands of spatially separated beams of a Nd:YAG 1064nm laser" (arXiv:1801.03194). So I thought a good start would be studying the kind of Bell scenario with which I have some familiarity. The paper arXiv:1012.1916 gives a simple and mathematically clear scenario that can easily be implemented in Strawberry Fields.

In this scenario, Alice and Bob can each perform two measurements, just as in a regular CHSH setting, but instead of the usual configuration of Pauli measurements, one of their measurements is the photon number $N$ and the other is the $X$ quadrature. The former takes discrete values, the latter continuous ones. The measurement results are postprocessed to obtain outcomes $a,b\in\{-1,+1\}$, where $a$ and $b$ label Alice's and Bob's outcomes respectively, again just as in a regular CHSH setting.

When measuring $N$, we set the outcome to $+1$ if the result is $N>0$, and to $-1$ if the result is $N=0$. For the $X$ measurement, we single out an interval $[-z, z]$. We output $+1$ if $x\in [-z, z]$ and $-1$ if $x\in \mathbb{R}\setminus [-z, z]$. The choice of $z$ depends on how we deal with $\hbar$.

With these binned outcomes, we use the CHSH inequality: $$S=E_{XX}+E_{XN}+E_{NX}-E_{NN}\leq 2,$$ where $E_{jk}=P(a=b|jk)-P(a\neq b|jk)$ is the expectation value of the measurements $j$ and $k$ after binning.
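Before running the quantum experiment, it is worth confirming the classical bound numerically. A local deterministic strategy assigns a fixed outcome to each setting, and enumerating all of them (a quick sketch, independent of the rest of the code) shows that no such strategy exceeds $S=2$:

```python
import itertools

# Enumerate all local deterministic strategies: Alice and Bob each fix an
# outcome in {-1, +1} for both of their settings (setting 0 = N, 1 = X).
best = max(
    a[1]*b[1] + a[1]*b[0] + a[0]*b[1] - a[0]*b[0]   # S for this strategy
    for a in itertools.product([-1, 1], repeat=2)
    for b in itertools.product([-1, 1], repeat=2)
)
print(best)  # 2
```

Mixed local strategies are convex combinations of deterministic ones, so the maximum over the whole local set is also 2.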

# Violating a Bell inequality in a CV setting

First we import everything we will need:

```python
import numpy as np
import strawberryfields as sf
from numpy import pi as π
from strawberryfields.ops import Fock, BSgate, MeasureFock, MeasureX
from tqdm import tqdm
np.set_printoptions(suppress=True, precision=3)
```


We define the state preparation protocol and the measurements, which are passed to the function as arguments. The state we prepare is $|\Psi_2\rangle=\frac{|2\rangle_A|0\rangle_B+|0\rangle_A|2\rangle_B}{\sqrt{2}}$, which can be done by sending $|1\rangle_A|1\rangle_B$ through a 50:50 beam splitter (the Hong–Ou–Mandel effect).

```python
def prepare_and_measure(Alice, Bob, q):
    Fock(1) | q[0]
    Fock(1) | q[1]
    BSgate(π/4, π) | (q[0], q[1])
    Alice | q[0]
    Bob   | q[1]
```
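We can verify this Hong–Ou–Mandel interference independently of Strawberry Fields with a small truncated Fock-space calculation, using NumPy and SciPy. This is a sketch assuming the convention $\mathrm{BS}(\theta,\phi)=\exp\big(\theta(e^{i\phi}a_1^\dagger a_2 - e^{-i\phi}a_1 a_2^\dagger)\big)$; since the beam splitter conserves photon number, a cutoff of 3 is exact for two photons:

```python
import numpy as np
from scipy.linalg import expm

cutoff = 3
a = np.diag(np.sqrt(np.arange(1, cutoff)), 1)      # truncated annihilation operator
I = np.eye(cutoff)
a1, a2 = np.kron(a, I), np.kron(I, a)              # the two modes

# BSgate(pi/4, pi): a 50:50 beam splitter in the convention above
theta, phi = np.pi / 4, np.pi
K = theta * (np.exp(1j * phi) * a1.conj().T @ a2
             - np.exp(-1j * phi) * a1 @ a2.conj().T)
U = expm(K)

psi = np.zeros(cutoff**2, dtype=complex)
psi[1 * cutoff + 1] = 1.0                          # start in |1, 1>
psi = U @ psi

# Hong-Ou-Mandel: all amplitude ends up on |2,0> and |0,2> (up to phases)
print(abs(psi[1 * cutoff + 1]))                                   # ~0: no coincidences
print(abs(psi[2 * cutoff + 0])**2 + abs(psi[0 * cutoff + 2])**2)  # ~1
```

The output state is $(|0,2\rangle - |2,0\rangle)/\sqrt{2}$, which is $|\Psi_2\rangle$ up to local phases.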


Next we set up the function for postprocessing the measurement outcomes and another one for picking the right measurement for Alice and Bob corresponding to a choice of $x$ and $y$.

```python
def postprocess(in_, out, z):
    # Bin the raw outcome to {0, 1}, with 1 playing the role of +1
    if in_ == 0:             # photon-number measurement
        if out > 0:
            return 1
        else:
            return 0
    else:                    # X-quadrature measurement
        if out < -z or out > z:
            return 0
        else:
            return 1

def preprocess(in_):
    # Map the setting to a measurement: 0 -> photon number, 1 -> X quadrature
    if in_ == 0:
        return MeasureFock()
    else:
        return MeasureX
```
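A few quick checks of the binning (restating `postprocess` in compact form so the snippet is self-contained, and recalling that `1` plays the role of $+1$ and `0` the role of $-1$):

```python
def postprocess(in_, out, z):
    # in_ == 0: photon-number measurement, in_ == 1: X-quadrature measurement
    if in_ == 0:
        return 1 if out > 0 else 0
    return 0 if (out < -z or out > z) else 1

z = 40.83
assert postprocess(0, 2, z) == 1     # N > 0          -> "+1"
assert postprocess(0, 0, z) == 0     # vacuum         -> "-1"
assert postprocess(1, 0.5, z) == 1   # x in [-z, z]   -> "+1"
assert postprocess(1, 50.0, z) == 0  # x outside      -> "-1"
```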


For one round of the experiment, we start a new engine and run the program with a given $x$ and $y$. The function returns the postprocessed measurement outcome. The value for $z$ in the postprocessing was picked to have positive correlation in the $E_{XX}$ expectation value.

```python
def do_experiment(x, y, z=40.83):
    Alice = preprocess(x)
    Bob = preprocess(y)
    eng, q = sf.Engine(2)
    with eng:
        prepare_and_measure(Alice, Bob, q)
    eng.run('fock', cutoff_dim=3)
    a = postprocess(x, q[0].val, z)
    b = postprocess(y, q[1].val, z)
    return a, b
```


With this function, we can finally start gathering measurement statistics.

```python
n_rounds = 500
N = np.zeros((2, 2, 2, 2))
for _ in tqdm(range(n_rounds)):
    x = np.random.randint(2)
    y = np.random.randint(2)
    a, b = do_experiment(x, y)
    N[a, b, x, y] += 1
```

```
100%|██████████| 500/500 [00:12<00:00, 39.13it/s]
```


Now we can convert these statistics to expectation values.

```python
def calculate_expectation(N, x, y):
    n = sum(N[a, b, x, y] for a in range(2) for b in range(2))
    # P(a = b) - P(a != b)
    return (sum(N[a, a, x, y] for a in range(2)) -
            sum(N[a, 1 - a, x, y] for a in range(2))) / n

E_NN = calculate_expectation(N, 0, 0)
E_XN = calculate_expectation(N, 1, 0)
E_NX = calculate_expectation(N, 0, 1)
E_XX = calculate_expectation(N, 1, 1)
```
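The same quantity can be computed in one vectorized step, which also makes the definition $E_{jk}=P(a=b|jk)-P(a\neq b|jk)$ explicit. A sketch with a hypothetical `N_test` table of counts:

```python
import numpy as np

def calculate_expectation_vec(N, x, y):
    # Normalize the 2x2 table of counts for settings (x, y), then take
    # P(a = b) - P(a != b)
    P = N[:, :, x, y] / N[:, :, x, y].sum()
    return P[0, 0] + P[1, 1] - P[0, 1] - P[1, 0]

N_test = np.zeros((2, 2, 2, 2))
N_test[:, :, 0, 0] = [[10, 3], [2, 15]]
print(calculate_expectation_vec(N_test, 0, 0))  # (10 + 15 - 3 - 2) / 30 ≈ 0.667
```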


Here's our Bell violation:

```python
print(E_XX + E_XN + E_NX - E_NN)
```

```
2.11492537313
```


# Device-independent verification of quantum correlations

The NPA hierarchy bounds the quantum set with semidefinite programs (SDPs), so we can check whether a set of correlations is compatible with quantum mechanics. Since we only deal with the observed correlations, the certification is in principle device-independent. In particular, the dimension of the system that generated the correlations plays no role, so the method works for discrete and CV systems alike. The package Ncpol2sdpa was developed primarily for generating and solving these SDPs. We need to import a few functions from it:

```python
from ncpol2sdpa import (flatten, generate_measurements,
                        projective_measurement_constraints, SdpRelaxation)
```


We generate the symbolic measurement operators and the constraints that ensure that they are projective:

```python
A = generate_measurements([2, 2], 'A')
B = generate_measurements([2, 2], 'B')
substitutions = projective_measurement_constraints(A, B)
```


To match the observed statistics with the probability picture, we define an additional helper function:

```python
def get_probability(N, a, b, x, y):
    # aa/bb avoid shadowing the outcome arguments a and b
    n = sum(N[aa, bb, x, y] for aa in range(2) for bb in range(2))
    return N[a, b, x, y] / n
```
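As a sanity check, the probabilities for a fixed pair of settings should sum to one. A quick test on a hypothetical table of counts, restating `get_probability` so the snippet runs on its own:

```python
import numpy as np

def get_probability(N, a, b, x, y):
    n = sum(N[aa, bb, x, y] for aa in range(2) for bb in range(2))
    return N[a, b, x, y] / n

rng = np.random.default_rng(0)
N_test = rng.integers(1, 50, size=(2, 2, 2, 2)).astype(float)
for x in range(2):
    for y in range(2):
        total = sum(get_probability(N_test, a, b, x, y)
                    for a in range(2) for b in range(2))
        assert abs(total - 1.0) < 1e-12  # normalized for every (x, y)
```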


We clamp some of the entries of the SDP's moment matrix to the observed correlations. If the SDP remains feasible with these constraints, the correlations lie within the outer approximation of the quantum set at the given level of the hierarchy. This approximation (a spectrahedron, not a polytope) tightens as the relaxation level increases, and it converges to the quantum set.
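For intuition, here is a sketch of the level-1 moment matrix, indexed by $\{1, A_0, A_1, B_0, B_1\}$ (one projector per measurement, since the second outcome is redundant; projectivity $A_x^2=A_x$ puts the marginals on the diagonal):

$$\Gamma^{(1)} = \begin{pmatrix}
1 & \langle A_0\rangle & \langle A_1\rangle & \langle B_0\rangle & \langle B_1\rangle \\
\langle A_0\rangle & \langle A_0\rangle & \langle A_0A_1\rangle & \langle A_0B_0\rangle & \langle A_0B_1\rangle \\
\langle A_1\rangle & \langle A_1A_0\rangle & \langle A_1\rangle & \langle A_1B_0\rangle & \langle A_1B_1\rangle \\
\langle B_0\rangle & \langle A_0B_0\rangle & \langle A_1B_0\rangle & \langle B_0\rangle & \langle B_0B_1\rangle \\
\langle B_1\rangle & \langle A_0B_1\rangle & \langle A_1B_1\rangle & \langle B_0B_1\rangle & \langle B_1\rangle
\end{pmatrix}$$

The entries $\langle A_xB_y\rangle = P(0,0|xy)$ and the marginals $\langle A_x\rangle$, $\langle B_y\rangle$ are clamped to the observed statistics; entries like $\langle A_0A_1\rangle$ remain free variables, and the SDP asks whether they can be completed so that $\Gamma^{(1)}\succeq 0$.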

```python
moment_substitutions = {}
# Joint probabilities P(0, 0|x, y)
for x in range(2):
    for y in range(2):
        moment_substitutions[A[x][0]*B[y][0]] = get_probability(N, 0, 0, x, y)
# Marginal probabilities P_A(0|x) and P_B(0|y), normalized by the number of
# rounds in which that setting was used
for x in range(2):
    n_x = sum(N[a, b, x, y] for a in range(2) for b in range(2) for y in range(2))
    moment_substitutions[A[x][0]] = sum(N[0, b, x, y] for b in range(2) for y in range(2)) / n_x
for y in range(2):
    n_y = sum(N[a, b, x, y] for a in range(2) for b in range(2) for x in range(2))
    moment_substitutions[B[y][0]] = sum(N[a, 0, x, y] for a in range(2) for x in range(2)) / n_y
```


Finally, we generate the SDP and solve it. This is a pure feasibility problem, which few SDP solvers deal with well; that is why we use Mosek.

```python
level = 2
sdp = SdpRelaxation(flatten([A, B]))
sdp.get_relaxation(level, substitutions=substitutions,
                   momentsubstitutions=moment_substitutions)
sdp.solve(solver='mosek')
print(sdp.status)
```

```
optimal
```


The status is optimal, which means the problem is feasible: the observed correlations are compatible with the level-2 outer approximation of the quantum set.