EDIT: Explanation in light of 't Hooft's answers
I have been getting downvotes, possibly because people perceive a disconnect between the comments I made in response to 't Hooft's answers and the content of this answer. The two sets of statements are not incompatible.
I would like to say where I agree with 't Hooft:
- I don't think hidden variables are impossible.
- I do think that it might be possible to reproduce something approximately like QM from something which is exactly a classical automaton. (I give it a 50% chance of working; I can't do it yet, but it looks possible, and if it is possible, I give it an 80% chance of being true, so overall I give this scenario a 40% chance.)
- I don't think other people's criticism of his program is valid: people tend to believe hidden variables are just plain impossible, and I don't see a proof of that. The existing proofs rule out local hidden variables or naive hidden variables.
My criticism is not of the general program, it is of the precise implementation, as detailed in this paper and previous ones. The disagreements come from the mismatch between the Hilbert space that 't Hooft introduces without comment, as a formal trick, and classical probability space:
- 't Hooft considers the space of all possible superpositions of states of a classical automaton, plus an exponentiated Hamiltonian that reproduces the automaton behavior at discrete times. This Hilbert space is formal, not emergent; it is a trick for rewriting probability distributions (a minimal sketch of this construction follows the list below).
- 't Hooft says that so long as the basis states evolve according to permutation, there are never any superpositions in the global states. But he then goes on to discuss operators whose eigenvectors correspond to definite states of interior subsystems, and he claims that it is possible to prepare superpositions of these subsystems using these operators. The process of measuring these operators does not, as I see it, necessarily have a clear meaning in terms of the no-superposition global states, and it does not correspond to a classically allowed operation on the CA involved.
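To make the formal trick concrete, here is a minimal toy sketch of my reading of the construction, for a three-state automaton that cycles $0 \to 1 \to 2 \to 0$ (the code and the names in it are mine, not from the paper):

```python
import numpy as np
from scipy.linalg import expm, logm

# One time step of the automaton as a permutation matrix on basis states.
U = np.array([[0., 0., 1.],
              [1., 0., 0.],
              [0., 1., 0.]])

# The formal Hamiltonian: U = exp(-iH), so H = i log U (principal branch).
H = 1j * logm(U)

# Exponentiating H reproduces the automaton step exactly at integer times.
assert np.allclose(expm(-1j * H), U)

# On a basis ("ontic") state the evolution is a pure permutation; no
# superposition ever appears.  The superpositions live only in the formal
# Hilbert space, which is put in by hand.
state = np.array([0., 1., 0.])        # automaton definitely in state 1
print(np.real_if_close(U @ state))    # -> [0., 0., 1.], still a basis state
```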
If it is possible to get quantum mechanics from CA, then I agree with nearly every intuitive statement 't Hooft makes about how it is supposed to happen, including the "template" business and the reduction to Born's rule from counting automaton states (these intuitions are horrendously vague, but I don't think there is anything wrong with them). I only disagree with the precise stuff, not the vague stuff (although if QM never emerges from CA, the vague stuff is wrong too; in that case, I would just be sharing 't Hooft's wrong intuition). There is a slight difference in intuition in that I think the violation of Bell's theorem comes from nonlocality, not from superdeterminism, but this is related to the precise implementation difference between the two approaches. I will focus on the disagreements from now on.
Probability distributions on CA
Consider a CA where we know the rules and the correspondence between the CA and the stuff we see, but we don't know the "ontic state" (meaning we don't know the bits in the CA). We make a probability distribution based on our ignorance, and as we learn more information from observation, we make a better and better probability distribution on the CA. This is the procedure in classical systems; it can't be fiddled with, and the question is whether it can ever look like quantum mechanics at long distances.
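As a toy illustration of this classical updating procedure (the rule, the observable, and all names below are illustrative choices of mine, not anything from 't Hooft's papers):

```python
from itertools import product

N_BITS = 4

def step(bits):
    """The known CA rule (toy choice): cyclic shift one place to the right."""
    return bits[-1:] + bits[:-1]

def observe(bits):
    """What we can see (toy choice): the parity of the first two bits."""
    return (bits[0] + bits[1]) % 2

# Total ignorance of the ontic state: uniform over all bit configurations.
dist = {bits: 1.0 / 2**N_BITS for bits in product((0, 1), repeat=N_BITS)}

true_state = (1, 0, 1, 1)             # the actual ontic state, hidden from us
for _ in range(3):
    obs = observe(true_state)
    # Condition on the observation: drop incompatible states, renormalize.
    dist = {s: p for s, p in dist.items() if observe(s) == obs}
    norm = sum(dist.values())
    dist = {s: p / norm for s, p in dist.items()}
    # Evolve the world and the distribution with the same known rule.
    true_state = step(true_state)
    dist = {step(s): p for s, p in dist.items()}

print(len(dist), "ontic states remain compatible with the observations")
```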
Luboš Motl asks the fair question: what is a noncommuting observable? To describe this, consider a system consisting of $2N$ bits with an equal number of zeros and ones. The measurement $A$ returns the parity of the number of $1$'s in the first $N$ bits, and performs a cyclic permutation one space to the right on the remaining $N$ bits. The measurement $B$ returns the parity of the number of $1$'s in the bits at even-numbered positions (it's a staggered version of $A$) and permutes the odd bits cyclically. These two measurements fail to commute for a long, long time when $N$ is large; you need order $N$ measurements to figure out the full automaton state.
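In code (a direct transcription of the description above, with $N=3$), the recorded outcomes depend on the order in which $A$ and $B$ are performed, which is exactly what noncommutativity means here:

```python
def measure_A(bits):
    """Parity of the 1's in the first N bits; as back-reaction, cyclically
    shift the remaining N bits one place to the right."""
    n = len(bits) // 2
    outcome = sum(bits[:n]) % 2
    bits[n:] = bits[-1:] + bits[n:-1]
    return outcome

def measure_B(bits):
    """Parity of the 1's at even positions; cyclically shift the bits at
    odd positions one place to the right (the staggered version of A)."""
    outcome = sum(bits[0::2]) % 2
    odd = bits[1::2]
    bits[1::2] = odd[-1:] + odd[:-1]
    return outcome

initial = [1, 0, 1, 0, 0, 1]          # 2N = 6 bits, three 1's and three 0's

s = list(initial)
print("A then B:", measure_A(s), measure_B(s), "final state:", s)
# -> A then B: 0 0 final state: [1, 0, 1, 0, 0, 1]

s = list(initial)
print("B then A:", measure_B(s), measure_A(s), "final state:", s)
# -> B then A: 0 1 final state: [1, 1, 1, 0, 0, 0]
```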
Given a full probability distribution $\rho$ on automaton states, you can write it as a sum of the steady-state (say uniform) distribution and a perturbation. The perturbation evolves according to the eigenvalues of the linear operator that transports the probabilities, and in cases where you have only long-wavelength measurements (like the operators of the previous example), you can produce things that look like they are evolving linearly, with noncommutative measurements, in a way that vaguely looks like quantum mechanics.
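Here is a sketch of this decomposition, assuming a doubly stochastic transition matrix so that the uniform distribution is the steady state; the matrix is a toy stand-in of mine, built by averaging a few random permutations:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8

# A toy doubly stochastic transition matrix: an average of permutation
# matrices, so the uniform distribution is stationary.
T = sum(np.eye(n)[rng.permutation(n)] for _ in range(4)) / 4

uniform = np.full(n, 1.0 / n)
rho = np.zeros(n)
rho[0] = 1.0                          # a sharp initial distribution

for t in range(6):
    # The perturbation rho - uniform generically shrinks each step...
    print(t, np.linalg.norm(rho - uniform))
    rho = T @ rho                     # linear evolution of probabilities

# ...at a rate set by the subdominant eigenvalues of T.
print("second-largest |eigenvalue|:",
      sorted(np.abs(np.linalg.eigvals(T)))[-2])
```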
But I can't find a precise limit in which this picture reduces to QM, and further, I can't use 't Hooft's constructions to do this either, because I can't see precisely how Hilbert space embeds in the construction. It can't be a formal Hilbert space as large as the space of all superpositions of all automaton states, because this is too big. It must be a reduction of the probability space of some sort, and I don't know how it works.
Since 't Hooft's construction fails to have an obvious reinterpretation as an evolution equation for a classical probability density (the Hamiltonian has an obvious interpretation; the projections corresponding to measurements at intermediate times do not), I can't see that what he is doing is anything more profound than a formal trick: rewriting QM in a beable basis. This is possible, but it is not the difficult part of making QM emerge from a classical deterministic theory.
If you do it right, the QM you get will at best be only approximate, and will reveal its classical nature for large enough entangled systems, so that quantum computation will fail for large quantum computers. This is the generic prediction of this point of view, as 't Hooft has said many times.
So while I can't rule out something like what 't Hooft is doing, I can't accept what 't Hooft is doing, because it sidesteps the only difficult problem: finding the correspondence between probability and QM, if it even exists. I haven't found it, and I have tried several times (although I haven't given up; maybe it'll work tomorrow).
Previous answer
There is an improvement here in one respect over previous papers: the discrete proposals are now on a worldsheet, where the locality arguments using Bell's inequality are impossible to make, because the worldsheet is totally nonlocal in space-time. If you want to argue using Bell's inequality, you would have to argue on the worldsheet.
't Hooft's models in general have no problems with Bell's inequality, and the reason they evade it is also the main problem with this approach. All of 't Hooft's models make the completely unjustified assumption that if you can rotate a quantum system into a $0$-$1$ basis where the discrete time evolution is a permutation on the basis elements, then superpositions of these $0$-$1$ basis elements describe states of imperfect knowledge about which $0$-$1$ basis state is actually there in the world.
I don't see how he could possibly come to this conclusion; it is completely false. If you don't know which basis state you are in, you describe this lack of knowledge by a probability distribution on the initial state, not by probability amplitudes. If you give a probability distribution on a classical variable, you can rotate the basis until you are blue in the face; you don't get quantum superpositions out. If you start with all quantum superpositions of a permutation basis, you get quantum mechanics, not because you are reproducing quantum mechanics, but because you are still doing quantum mechanics! The states of "uncertain knowledge" are represented by amplitudes, not by classical probabilities.
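The two-state version of this point is textbook material, nothing specific to the paper: two different superpositions can share the same classical probability distribution, and a basis rotation separates them in a way that no stochastic map on probabilities can reproduce:

```python
import numpy as np

H = np.array([[1.,  1.],
              [1., -1.]]) / np.sqrt(2)    # Hadamard basis rotation

plus  = np.array([1.,  1.]) / np.sqrt(2)  # amplitudes of (|0> + |1>)/sqrt(2)
minus = np.array([1., -1.]) / np.sqrt(2)  # amplitudes of (|0> - |1>)/sqrt(2)

# Both states assign the SAME classical probabilities to the 0-1 basis:
print(np.abs(plus)**2, np.abs(minus)**2)  # -> [0.5 0.5] [0.5 0.5]

# But rotating the amplitudes separates them completely (interference):
print(np.abs(H @ plus)**2)                # -> [1. 0.]
print(np.abs(H @ minus)**2)               # -> [0. 1.]

# The classical counterpart of the rotation is the stochastic matrix of
# squared magnitudes, and it cannot distinguish the two at all:
S = np.abs(H)**2                          # [[0.5, 0.5], [0.5, 0.5]]
print(S @ np.array([0.5, 0.5]))           # -> [0.5 0.5], either way
```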
The fact that there is a basis where the Hamiltonian is a permutation is completely irrelevant; 't Hooft is putting quantum mechanics in by hand and saying he is getting it out. It isn't true. This type of thing should be called an "'t Hooft quantum automaton", not a classical automaton.
The main difficulty in reproducing quantum mechanics is that, starting from probability, there is no naive change of variables in which the diffusion law of probability ever looks like amplitudes. This is not a proof; there might be such effective variables for all I know, but knowing that there is a basis where the Hamiltonian is simply a permutation doesn't help in constructing such a map, and it doesn't constitute such a map.
These comments are of a general nature. I will try to address the specific issues with the paper.
In this model, 't Hooft is discussing a discrete version of the free-field string equations of motion on the worldsheet, when the worldsheet is in flat space-time. These are simple $1+1$ dimensional free field theories, so they are easy enough to recast in the form 't Hooft likes in his other papers (the evolution equation is for independent right and left movers; the example of 4D fermions 't Hooft did many years ago is more nontrivial).
The first issue is that the worldsheet theory requires a conformal symmetry to get rid of the ghosts, and a superconformal symmetry when you have fermions. This gives you a redundancy in the formulation. But this redundancy exists only for continuous worldsheets; it doesn't work on lattices, since these are not conformally invariant. So you have to check that the 't Hooft beables are giving a ghost-free spectrum, and this is not going to happen unless 't Hooft takes the continuum limit on the worldsheet, at least.
Once you take the continuum limit on the worldsheet, even if the space-time is discrete, the universality of continuum limits of 2D theories tells you that it doesn't make much difference: a free scalar which takes discrete values fluctuates so wildly at short distances that whether the target-space values are discrete or continuous is irrelevant; they are effectively continuous anyway. So I don't see much point in saying that leaving the target space discrete is different from usual string theory in continuous space; the string propagation is effectively continuous anyway.
The particular transformation he uses is not particularly respectful of either the worldsheet SUSY or the space-time SUSY, and given the general problems in the interpretation of this whole program, I think this is all one needs to say.
This post imported from StackExchange Physics at 2014-04-01 13:14 (UCT), posted by SE-user Ron Maimon