# Indices of a Pauli matrix transformed in the Lorentz representation

+ 3 like - 0 dislike
301 views

When Peskin and Schroeder want to prove a Fierz identity on page 51, they make use of the identity $$(\sigma^{\mu})_{\alpha \beta} (\sigma_{\mu})_{\gamma\delta} = 2 \epsilon_{\alpha \gamma} \epsilon_{\beta \delta},$$ where $\sigma^{\mu} \equiv (1,\mathbf{\sigma})$. They state: "One can understand this relation by noting that the indices $\alpha, \gamma$ transform in the Lorentz representation of $\psi_L$, while $\beta,\delta$ transform in the separate representation of $\psi_R$, and the whole quantity must be a Lorentz invariant." What do they mean by this?

This post imported from StackExchange Physics at 2015-04-17 07:39 (UTC), posted by SE-user L. Su
reshown Apr 17, 2015
@Jia Yiyang You had the answer. The question may not be "high-level" enough to go to Overflow.

This post imported from StackExchange Physics at 2015-04-17 07:39 (UTC), posted by SE-user L. Su

Hi @L.Su,

it is ok to ask such graduate-level technical questions on PO, they are welcome.

@Dilaton Haha. Thank you. I certainly have no idea how to define "graduate-level."

Let's not water PO down.

+ 3 like - 0 dislike

Ok, here's what I figured out after you asked me the question:

Recall how (the spinor representation of) Lorentz transformations act on gamma matrices:

$S^{-1}(\Lambda)\gamma^{\mu}S(\Lambda)=\Lambda^\mu_{\ \ \nu}\gamma^\nu\cdots(1),$

where according to Peskin and Schroeder,

$S(\Lambda)=\begin{bmatrix} S_L(\Lambda) & 0\\0& S_R(\Lambda) \end{bmatrix}\cdots(2),$

where $S_L$ and $S_R$ are the transformations that act on the left-handed spinor $\psi_L$ and the right-handed spinor $\psi_R$, respectively (see P&S's equation (3.37)). And

$\gamma^\mu=\begin{bmatrix} 0 & \sigma^\mu\\ \bar{\sigma}^\mu& 0 \end{bmatrix}\cdots(3).$

Plug (2) and (3) into (1) and you immediately see

$S_L^{-1}\sigma^\mu S_R=\Lambda^\mu_{\ \ \nu}\sigma^\nu\cdots(4).$
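As a sanity check, (4) can be verified numerically. The sketch below (pure Python; the choice of a pure boost along $z$ with rapidity $\eta=0.7$, and the explicit diagonal forms $S_L=e^{-\eta\sigma^3/2}$, $S_R=e^{+\eta\sigma^3/2}$ following P&S's conventions, are illustrative assumptions, not from the text):

```python
# Numerical check of eq. (4), S_L^{-1} sigma^mu S_R = Lambda^mu_nu sigma^nu,
# for a pure boost along z with rapidity eta (illustrative test case).
import math

def matmul(A, B):
    # 2x2 complex matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

sigma = [
    [[1, 0], [0, 1]],        # sigma^0 = identity
    [[0, 1], [1, 0]],        # sigma^1
    [[0, -1j], [1j, 0]],     # sigma^2
    [[1, 0], [0, -1]],       # sigma^3
]

eta = 0.7
a = math.exp(-eta / 2)
S_L = [[a, 0], [0, 1 / a]]       # exp(-eta sigma^3 / 2), acts on psi_L
S_R = [[1 / a, 0], [0, a]]       # exp(+eta sigma^3 / 2), acts on psi_R
S_L_inv = [[1 / a, 0], [0, a]]

# Lorentz boost along z acting on the vector index mu
ch, sh = math.cosh(eta), math.sinh(eta)
Lam = [[ch, 0, 0, sh], [0, 1, 0, 0], [0, 0, 1, 0], [sh, 0, 0, ch]]

for mu in range(4):
    lhs = matmul(matmul(S_L_inv, sigma[mu]), S_R)
    rhs = [[sum(Lam[mu][nu] * sigma[nu][i][j] for nu in range(4))
            for j in range(2)] for i in range(2)]
    assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12
               for i in range(2) for j in range(2))
print("eq. (4) verified for a z-boost")
```

Note that $S_L$ acts from the left and $S_R$ from the right, exactly as in (4).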

Note $S_L$ acts on the row index while $S_R$ acts on the column index, and this is the meaning of

.... that the indices $\alpha, \gamma$ transform in the Lorentz representation of $\psi_L$, while $\beta, \delta$ transform in the separate representation of $\psi_R$...

Clearly this implies that $\sigma^\mu\otimes\sigma_\mu$ is invariant under the transformations appearing on the LHS of (4). In terms of matrix entries, if we define

$I_{\alpha\gamma\beta\delta}:=(\sigma^\mu)_{\alpha\beta}(\sigma_\mu)_{\gamma\delta}\cdots(5),$

then

$(S^{-1}_L)_{\alpha'\alpha}(S^{-1}_L)_{\gamma'\gamma}I_{\alpha\gamma\beta\delta}(S_R)_{\beta\beta'}(S_R)_{\delta\delta'}=I_{\alpha'\gamma'\beta'\delta'}\cdots(6).$

We need to solve for $I_{\alpha\gamma\beta\delta}$. Now clearly $\epsilon_{\alpha \gamma} \epsilon_{\beta \delta}$ is a solution, because of the identity $\epsilon_{ij}A_{li}A_{kj}=\det(A)\epsilon_{lk}$, and our $S_L, S_R$ both have determinant 1. The proportionality constant 2 can be obtained by comparing appropriate entries on both sides of the equation $(\sigma^\mu)_{\alpha\beta}(\sigma_\mu)_{\gamma\delta}=\text{const}\times\epsilon_{\alpha \gamma} \epsilon_{\beta \delta}$.
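Both the solution and the constant 2 can be checked by brute force over all $2^4$ index combinations. A minimal sketch (pure Python; the sign conventions $\sigma_\mu = (1,-\sigma^i)$ from lowering with $\eta_{\mu\nu}=\mathrm{diag}(+,-,-,-)$, and $\epsilon_{12}=+1$, are assumed here):

```python
# Brute-force check of (sigma^mu)_{ab} (sigma_mu)_{cd} = 2 eps_{ac} eps_{bd},
# with sigma_mu = (1, -sigma^i) and eps_{12} = +1 (assumed conventions).
sigma = [
    [[1, 0], [0, 1]],
    [[0, 1], [1, 0]],
    [[0, -1j], [1j, 0]],
    [[1, 0], [0, -1]],
]
metric = [1, -1, -1, -1]   # used to lower the index on sigma^mu
eps = [[0, 1], [-1, 0]]

for a in range(2):
    for b in range(2):
        for c in range(2):
            for d in range(2):
                lhs = sum(metric[mu] * sigma[mu][a][b] * sigma[mu][c][d]
                          for mu in range(4))
                rhs = 2 * eps[a][c] * eps[b][d]
                assert abs(lhs - rhs) < 1e-12
print("identity verified; constant = 2")
```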

Now the only remaining gap is the uniqueness of the solution. To prove uniqueness it is convenient to rewrite (6) as a matrix equation: $I(S_R\otimes S_R)=(S_L \otimes S_L)I \cdots(7),$

where $I$ and $S\otimes S$ are $4\times 4$ matrices; in particular, the row index of $I$ is the pair $\alpha\gamma$ and the column index is the pair $\beta\delta$. We are going to apply Schur's lemma to (7).

Recall that in the standard representation-theoretic analysis of the Lorentz group, $S_L$ is in the $(\frac{1}{2}, 0)$ representation and $S_R$ is in the $(0,\frac{1}{2})$ representation, hence $S_L \otimes S_L\approx (1,0)\oplus (0,0)$ and $S_R \otimes S_R\approx (0,1)\oplus (0,0)$. Note that they share only the 1-dimensional representation $(0, 0)$. Schur's lemma then says that, in a suitable basis, the matrix $I$ is block diagonal with a 3-by-3 block and a 1-by-1 block; the 3-by-3 block is a zero matrix, and the 1-by-1 block is of course unique up to scaling. Returning to the original basis we started with, we conclude that the matrix $I$ must be unique up to scaling.
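The intertwining relation (7) itself can be checked numerically for the solution $I_{(\alpha\gamma),(\beta\delta)}=\epsilon_{\alpha\gamma}\epsilon_{\beta\delta}$ (the overall constant drops out of (7)). A minimal sketch, assuming the same illustrative $z$-boost $S_L=\mathrm{diag}(a,1/a)$, $S_R=\mathrm{diag}(1/a,a)$ with $\det = 1$:

```python
# Check of eq. (7), I (S_R x S_R) = (S_L x S_L) I, for a sample z-boost,
# with I_{(ac),(bd)} = eps_{ac} eps_{bd} (the constant 2 drops out).
import math

def kron(A, B):
    # Kronecker product of two 2x2 matrices -> 4x4
    return [[A[i][j] * B[k][l] for j in range(2) for l in range(2)]
            for i in range(2) for k in range(2)]

def matmul4(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

eps = [[0, 1], [-1, 0]]
a = math.exp(-0.7 / 2)            # illustrative rapidity eta = 0.7
S_L = [[a, 0], [0, 1 / a]]
S_R = [[1 / a, 0], [0, a]]

# I with row index (alpha, gamma) and column index (beta, delta)
I = [[eps[al][ga] * eps[be][de] for be in range(2) for de in range(2)]
     for al in range(2) for ga in range(2)]

lhs = matmul4(I, kron(S_R, S_R))
rhs = matmul4(kron(S_L, S_L), I)
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12
           for i in range(4) for j in range(4))
print("intertwining relation (7) verified")
```

Note that $I$ here is rank 1 (it is the outer product of the vector $\epsilon_{\alpha\gamma}$ with itself), which matches the statement that only the 1-dimensional $(0,0)$ piece survives.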

Q.E.D.

A small caveat: the choice of basis (to achieve the block diagonalization) is only unique up to an arbitrary linear combination within each invariant subspace; this freedom is implemented by multiplying your original similarity transformation by a (3+1) block-diagonal matrix, and you can easily show that it does not affect the uniqueness.

answered Apr 17, 2015 by (2,635 points)
edited Apr 17, 2015

Very detailed!

+ 2 like - 0 dislike

They mean that - apart from being able to verify the equation by brute force evaluation of both sides - one can see that it must be true based on symmetry considerations.

One has a Lorentz scalar if one multiplies the left hand side by spinors $u_L^\alpha$, $v_R^\beta$, $w_L^\gamma$, and $z_R^\delta$ (whose chirality is given by the index $L$ or $R$) and sums over repeated indices. Thus the result must be a scalar formed out of these spinors, and linear in each of them. This gives a linear combination of the possibilities $(u_L\epsilon w_L)(v_R\epsilon z_R)$, $(u_Lv_R)(w_L z_R)$, and $(u_Lz_R)(w_L v_R)$. Only the first one has the right tensor product structure to work. Thus the formula holds up to a constant factor, which is obtained by evaluating the left hand side for the particular choice $\alpha=\beta=1,\gamma=\delta=2$, say.
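For instance, with the choice $\alpha=\beta=1$, $\gamma=\delta=2$ (and the convention $\epsilon_{12}=+1$), only $\mu=0$ and $\mu=3$ contribute on the left hand side: $(\sigma^0)_{11}(\sigma^0)_{22} - (\sigma^3)_{11}(\sigma^3)_{22} = 1\cdot 1 - 1\cdot(-1) = 2$, while the right hand side is $\text{const}\times\epsilon_{12}\epsilon_{12}=\text{const}$, fixing the constant to $2$.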

answered Apr 17, 2015 by (13,219 points)
edited Apr 17, 2015
+ 1 like - 2 dislike
1. The Pauli matrices anticommute, so the product of two of them has to be antisymmetric in its indices.

2. The only antisymmetric 2-tensor in two dimensions is the Levi-Civita symbol $\epsilon$.

Hence, you can "guess" the structure of the product in question up to a constant without calculating anything explicitly.

This post imported from StackExchange Physics at 2015-04-17 07:39 (UTC), posted by SE-user ACuriousMind
answered Apr 16, 2015 by (820 points)
How do you attach the indices? I would like to know your understanding of the statement as well.

This post imported from StackExchange Physics at 2015-04-17 07:39 (UTC), posted by SE-user L. Su
-1: this is a tensor product, so you cannot apply the "anticommute" argument this way; besides, even if you could, a Pauli matrix does not anticommute with itself.
