# Stabilizer formalism for symmetric spin-states?

+ 11 like - 0 dislike
1457 views

This question developed out of conversation between myself and Joe Fitzsimons. Is there a succinct stabilizer representation for symmetric states on systems of n spin-1/2 particles or, more generally, n higher-spin particles?

By a "stabilizer representation", I mean that:

• every symmetric state (or some notable, non-trivial family of them which contains more than just product states) is represented as the unique +1-eigenstate of some operator or the unique joint +1-eigenstate of a list of operators, where

• each element of this set of stabilizing operators can be succinctly described, as an operator on the larger Hilbert space (i.e. not only as a transformation restricted to the symmetric subspace itself), and

• where the stabilizing operators transform in a nice way in the Heisenberg picture under symmetric local unitaries (i.e. unitary transformations of the form U⊗n).

Ideally, one would be able to efficiently describe all sorts of transformations between various symmetric states; but one cannot have everything.

The constraint of being a unique +1-eigenstate of the list of stabilizing operators could also be made subject to the constraint of being a symmetric state. (For instance, many states on n spin-1/2 particles are stabilized by a σz operator on a single spin, but exactly one symmetric state is stabilized by that operator. Not that I would expect such an operator necessarily to arise in the formalism...)
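As a quick sanity check of the parenthetical example, here is a small numpy sketch (the `dicke` helper is my own construction, not from the post) verifying that among the symmetric (Dicke) states of three spin-1/2 particles, only |000⟩ is a +1-eigenstate of a σz acting on a single spin:

```python
import numpy as np
from itertools import combinations

def dicke(n, k):
    """Symmetric (Dicke) state of n qubits with k excitations."""
    v = np.zeros(2 ** n)
    for ones in combinations(range(n), k):
        idx = sum(1 << (n - 1 - i) for i in ones)
        v[idx] = 1.0
    return v / np.linalg.norm(v)

n = 3
Z = np.diag([1.0, -1.0])
I2 = np.eye(2)
Z1 = np.kron(Z, np.kron(I2, I2))  # sigma_z on the first spin only

# among the symmetric states, only |00...0> (k = 0) is stabilized by Z1
stabilized = [k for k in range(n + 1)
              if np.allclose(Z1 @ dicke(n, k), dicke(n, k))]
print(stabilized)  # [0]
```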

Does a representation with the above properties (or one close to it) exist?

This post has been migrated from (A51.SE)
retagged Mar 7, 2014

+ 7 like - 0 dislike

I believe there is such a representation, as follows:

You need only consider operators which can be written as $\Omega = \sum_k \alpha_k \gamma_k$, where $\gamma_k = \sum_\ell P_\ell \big( \sigma_{k_1} \otimes \cdots \otimes \sigma_{k_N} \big)P_\ell^\dagger$, with $P_\ell$ being the operator corresponding to the $\ell^{\text{th}}$ permutation of qubits. Note that $\gamma_k$ is defined on the whole space, and is essentially a generalization of the number operator. Each $\gamma_k$ is uniquely identified by the number of each kind of Pauli matrix it contains, and so the number of distinct $\gamma_k$ is polynomial in the number of qubits. The same works for qudits, though the degree of the polynomial will scale with the dimensionality of the local systems. For qubits, we have 3 relevant numbers, subject to the constraint $N_X + N_Y + N_Z \leq N$:

1. $N_X$, the number of sites where the operator acts as $\sigma_X$,
2. $N_Y$, the number of sites where the operator acts as $\sigma_Y$, and
3. $N_Z$, the number of sites where the operator acts as $\sigma_Z$.

So instead, we can relabel the $\gamma$ matrices as $\gamma_{N_X,N_Y,N_Z}$ for the two-dimensional case.
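To make this concrete, here is a numpy sketch (the helper names `gamma` and `perm_op` are my own, not standard) that builds $\gamma_{N_X,N_Y,N_Z}$ for small $n$, checks that it is invariant under qubit permutations, and counts the distinct labels. Note it sums over distinct arrangements of the Pauli letters, which differs from the sum over all $N!$ permutations above only by a constant multiplicity factor:

```python
import numpy as np
from itertools import permutations
from math import comb

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
PAULI = {'X': X, 'Y': Y, 'Z': Z, 'I': I2}

def gamma(n, nx, ny, nz):
    """gamma_{N_X,N_Y,N_Z}, summed over distinct arrangements of the
    Pauli letters (the all-permutations sum differs by a constant factor)."""
    labels = 'X' * nx + 'Y' * ny + 'Z' * nz + 'I' * (n - nx - ny - nz)
    out = np.zeros((2 ** n, 2 ** n), dtype=complex)
    for arrangement in set(permutations(labels)):
        term = np.eye(1, dtype=complex)
        for c in arrangement:
            term = np.kron(term, PAULI[c])
        out += term
    return out

def perm_op(n, p):
    """Permutation operator sending qubit i to position p[i]."""
    P = np.zeros((2 ** n, 2 ** n))
    for b in range(2 ** n):
        bits = [(b >> (n - 1 - i)) & 1 for i in range(n)]
        new = [0] * n
        for i in range(n):
            new[p[i]] = bits[i]
        c = sum(bit << (n - 1 - j) for j, bit in enumerate(new))
        P[c, b] = 1.0
    return P

n = 3
g = gamma(n, 1, 1, 0)                       # one X, one Y, one identity
P = perm_op(n, [1, 0, 2])                   # swap qubits 0 and 1
print(np.allclose(P @ g @ P.conj().T, g))   # True: gamma is permutation-invariant

# labels (N_X, N_Y, N_Z) with N_X + N_Y + N_Z <= n: C(n+3, 3) ~ n^3/6 of them
count = sum(1 for a in range(n + 1) for b in range(n + 1 - a)
            for c in range(n + 1 - a - b))
print(count == comb(n + 3, 3))              # True
```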

Notice that the set of possible $\Omega$ form a group under multiplication, and that each element of the group has a polynomial description (up to approximation of the complex coefficients).
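As a minimal illustration of this closure (for $n = 2$, with my own matrix definitions): the product $\gamma_{1,0,0}\,\gamma_{0,1,0} = (X_1 + X_2)(Y_1 + Y_2)$ re-expands in the $\gamma$ basis as $i\,\gamma_{0,0,1} + \gamma_{1,1,0}$, which the following numpy check confirms:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

g_x = np.kron(X, I2) + np.kron(I2, X)   # gamma_{1,0,0} = X_1 + X_2
g_y = np.kron(Y, I2) + np.kron(I2, Y)   # gamma_{0,1,0} = Y_1 + Y_2
g_z = np.kron(Z, I2) + np.kron(I2, Z)   # gamma_{0,0,1} = Z_1 + Z_2
g_xy = np.kron(X, Y) + np.kron(Y, X)    # gamma_{1,1,0} = X_1 Y_2 + Y_1 X_2

# (X_1 + X_2)(Y_1 + Y_2) = i (Z_1 + Z_2) + (X_1 Y_2 + Y_1 X_2)
print(np.allclose(g_x @ g_y, 1j * g_z + g_xy))  # True
```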

Thus:

1. Since the $\gamma$ operators form a basis for Hermitian matrices which are invariant under permutation of the local Hilbert spaces, they satisfy your first criterion.

2. Any stabilizing operator $\Omega$ can be described by a set of real numbers (or approximations thereof), the number of which is polynomial in the number of subsystems and exponential in their local dimension, thus satisfying your second criterion.

3. Symmetric local unitaries can also be written in terms of a sum of $\gamma$ matrices (albeit with an additional restriction on the values of $\alpha_k$), and thus by the group structure, the outcome can still be represented within this framework. When a unitary is applied, the stabilizers transform as $U \Omega U^\dagger$, which is efficiently computable if $U$ is symmetric given the reduced basis in which both $U$ and $\Omega$ can be expressed, satisfying your last criterion.
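A small numerical check of point 3 (a random single-qubit unitary generated via QR; the construction is mine, for illustration only): conjugating a $\gamma$ operator by a symmetric local unitary $U \otimes U$ yields an operator that is still invariant under qubit exchange, i.e. still lies in the span of the $\gamma$'s:

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
U, _ = np.linalg.qr(A)              # random single-qubit unitary

X = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# gamma_{1,0,0} on two qubits, and the symmetric local unitary U (x) U
g = np.kron(X, I2) + np.kron(I2, X)
U2 = np.kron(U, U)
G = U2 @ g @ U2.conj().T            # Heisenberg evolution of the stabilizer

SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)
# still symmetric under qubit exchange, hence still a combination of gammas
print(np.allclose(SWAP @ G @ SWAP, G))  # True
```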

answered Oct 2, 2011 by (3,575 points)
A few more observations: (i) you might want to include efficient measurement dynamics in the framework; (ii) $U^{\otimes n}$ operations are symmetries of any symmetric state and so are uninteresting; is there actually a non-empty set of unitaries which acts non-trivially but preserves the formalism (like the Clifford group)? Finally, (iii) it might be that we are missing a trick here: a large number of papers on symmetric states of the form $\rho^{\otimes n}$ have made use of Schur-Weyl duality in the representation theory of the symmetric group, e.g. http://arxiv.org/abs/quant-ph/9910124

@Earl: $(X_1 + X_2)(Y_1 + Y_2) = (X_1Y_1 + X_2Y_2) + (X_1Y_2 + Y_1X_2) = i(Z_1 + Z_2) + (X_1Y_2 + Y_1X_2)$. Note that both quantities in brackets are $\gamma$ terms.

@Earl: Re (i), yes, that would be nice (but wasn't part of the question). Symmetric measurements should be trivial to incorporate, but non-symmetric ones might be hard. If you increase the local dimension you essentially have an optical QC, but for fixed dimension I guess it might be efficiently simulable for any non-entangling measurements. Re (ii): Yes, all symmetric operations, including entangling ones. Re (iii): it is entirely possible that there are tricks I have failed to exploit here.

@Joe The state having a polynomial description doesn't necessarily imply that there will be only polynomially many states (and hence entries in the look-up table). E.g. an $n$-bit string only requires $n$ bits to store, but there are $2^{n}$ of them, and of course look-up tables don't work there. Also, if there is any hope of finding only a polynomial number of states, we must count them up to local unitaries, as otherwise local unitaries would generate an infinite set of states... hmmm...

@Earl: No, I meant you do the multiplication term by term. There are polynomially many terms, which get multiplied pairwise, so you only need a poly sized look-up table.

I can see that this approach will probably work, but I don't quite see the multiplication rule (and hence also the time evolution) yet. That is, how to calculate multiplications in polynomial time. For example, $(X_{1}+X_{2})(Y_{1}+Y_{2})=X_{1}Y_{1}+X_{2}Y_{1}+X_{2}Y_{2}+X_{1}Y_{2}$