I think the correct answer is that such models are both quantum mechanical and classical, although this could be considered a question of semantics.
It is a fact that, as soon as you find a basis in your quantum system in which the evolution is just a permutation, the "quantum probabilities" for the states in this basis (as defined by Born's rule) become identical to the classical probabilities (indeed obeying Bayes' logic). It will therefore be difficult to avoid interpreting them as such: the "universe" is in one of these states, we don't know which, but we know the probabilities.
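A minimal numerical sketch of this point (my own toy example, not tied to any specific model): when the evolution in the chosen basis is a permutation, the Born-rule probabilities are simply carried along by that permutation, exactly as a classical probability distribution would be.

```python
import numpy as np

# Toy illustration: a 4-state system whose evolution in the chosen
# basis is a pure permutation (the permutation itself is arbitrary).
perm = [2, 0, 3, 1]                      # basis state i maps to state perm[i]
U = np.zeros((4, 4))
for i, j in enumerate(perm):
    U[j, i] = 1.0                        # permutation matrix, hence unitary

psi = np.array([0.6, 0.8j, 0.0, 0.0])   # some normalised amplitudes
born_before = np.abs(psi) ** 2           # Born-rule probabilities

psi_after = U @ psi
born_after = np.abs(psi_after) ** 2

# The quantum probabilities evolve exactly like a classical
# distribution pushed through the same permutation:
classical_after = np.zeros(4)
for i, j in enumerate(perm):
    classical_after[j] = born_before[i]

assert np.allclose(born_after, classical_after)
```

Nothing quantum survives at the level of the probabilities here: the phases of the amplitudes never enter, which is why a classical ("Bayesian") reading of these probabilities becomes unavoidable.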
The question is well posed: will it still be meaningful to consider superposed states in this basis, to ask whether these can be measured, and to ask how they evolve?
My answer depends on whether the quantum system in question is sufficiently structured to allow for considering "macroscopic features" in the "classical limit", and whether this classical limit allows for non-trivial interactions, causing phenomena as complex as "decoherence".
Then take a world described by this model and consider macroscopic objects in it. The question is whether, inside these macroscopic objects (planets, people, indicators in measurement devices, ...), our CA behaves differently from how it behaves in the vacuum. This may be reasonable to assume, and I do assume it to be true in the real world, but it is far from obvious. If it is so, then macroscopic events are described by the CA alone.
This then would be my idea of a hidden variable theory. Macroscopic features are defined to be features that can be recognised by looking at collective modes of the CA. They are classical. Note that, if described by wave functions, these wave functions will have collapsed automatically. Physicists in this world may have been unable to identify the CA states, because they never reached the physical scale where CA states no longer behave collectively but where, instead, single bits of information matter. These physicists will have been able to derive the Schroedinger equation for the states they need to understand their world, but they work in the wrong basis, so they land in heated discussions about how to interpret these states...
Note added: I thought my answer was clear, but let me summarise by answering the last two paragraphs of the question:
YES, my models are always equivalent to a "Bayes reduction of the global wave function"; if you can calculate how the probability distribution $\rho$ evolves, you are done.
But alas, you would need a classical computer of Planckian proportions to do that, because the CA is a universal computer. So if you want to know how the distributions behave at space and time scales much larger than the Planck scale, the only thing you can do is map the system onto a quantum Hilbert space. QM is a substitute, a trick. But it works, and at macroscopic scales, it's all you've got.
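To make the cost concrete, here is a toy sketch (the CA rule, a cyclic shift of $n$ bits, is my own invented example of a reversible rule): tracking the full distribution $\rho$ exactly means updating one entry per configuration, i.e. $2^n$ entries per time step, which is what becomes hopeless at Planckian system sizes.

```python
import numpy as np

# Invented toy CA: a cyclic left-shift of n bits. It is reversible,
# so it acts as a permutation on the 2**n configurations.
n = 10

def shift(config: int) -> int:
    """Cyclically shift an n-bit configuration one site to the left."""
    return ((config << 1) | (config >> (n - 1))) & ((1 << n) - 1)

# An arbitrary probability distribution rho over all 2**n configurations.
rng = np.random.default_rng(0)
rho = rng.random(2 ** n)
rho /= rho.sum()

# One exact time step of the "Bayes reduction": a permutation update of
# rho, costing O(2**n) work — the exponential wall mentioned above.
rho_next = np.zeros_like(rho)
for c in range(2 ** n):
    rho_next[shift(c)] = rho[c]

assert np.isclose(rho_next.sum(), 1.0)   # probability is conserved
```

For $n = 10$ this is instant; for a Planck-scale CA the configuration count is astronomically larger, and the Hilbert-space description is the only usable compression.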
This post imported from StackExchange Physics at 2014-03-03 18:49 (UCT), posted by SE-user G. 't Hooft