This fourth paper of the series about the foundations of quantum physics continues the discussion of the thermal interpretation as introduced in the second paper and applied to measurements in the third paper.
By considering the properties of so-called anonymous collections, the notion of indistinguishability known from quantum mechanics is carried over to classical (statistical) mechanics. A rather nice proof is given that, in the thermal interpretation, classical mechanics emerges from the dynamics of q-expectations via the weak law of large numbers. For a system whose Hamiltonian has a purely discrete spectrum, the Ritz combination principle is recovered. This is then applied to probing a quantum system (modeled as an external forcing) with a (damped) harmonic oscillator in order to measure spectra. A single qubit can be described by taking the density operator of the universe to be the tensor product of the density operator of the qubit system and that of the environment; to measure its properties, the response of macroscopic environmental variables to the qubit is analyzed. For a scattering process, Born's rule can be rigorously derived. To describe a measurement process in general, the thermal interpretation regards a measurement result as the sum of a so-called true value, which is a matter of choice or convention, plus some residual measurement error. The q-expectations, which follow classical dynamics, are chosen as the true values. For example, the Stern-Gerlach experiment is described using a true value in the interval [-1,1]. The double-slit experiment is explained by assuming a classical continuous electromagnetic field and a detector that is the only source of discreteness in the outcome of the experiment. In a similar way, low-intensity measurements of continuous fields are explained by the behavior of the detector as a quantum bucket that can only produce discrete responses. Reactions (in particle physics) are described by the in- and outgoing currents and the metastability of the system, which can, via symmetry breaking, decay into the initial or final state.
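For reference, the Ritz combination principle recovered here can be stated as follows: each spectral frequency is a difference of two term values, so observed frequencies combine additively,

```latex
\nu_{jk} = \frac{E_j - E_k}{h}, \qquad \nu_{jl} = \nu_{jk} + \nu_{kl}.
```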
At least in the examples given, the true values seem to be obtained by “brushing” over the discreteness of the spectrum of the measured operator and replacing it by its mean (or center) together with an uncertainty spanning the range of the spectrum. This is in agreement with the notion that q-expectations are the only measurable observables. However, a true value that cannot (even in principle) turn up as the measurement result of a single experiment does not seem very natural to me. For example, regarding the measurement results of a spin-$1/2$ particle as approximations to an individually never measured true value of 0 seems a bit questionable. In addition, in the case of spin, the true value seems to ignore the issue of spin-statistics, which should still be present in the quantum formalism but no longer shows up explicitly in the outcome of the measurement. In principle, subtracting the properly modeled measurement errors from a measurement result should give the true value describing the system. These true values can then be compared to models and theories to choose the one that agrees best with experiment. In this way, the true values are not a matter of choice or interpretation. More precisely, if one allows for the scientific validity of single measurements in quantum mechanics and takes the conventional point of view, the eigenvalues of the measured operator are what is approximately (depending on the resolution of the instrument) left of the result of a single measurement after subtracting the measurement errors. If and how all errors can really be correctly determined is another (experimental) question.
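The spin-$1/2$ point above can be made concrete with a small numerical sketch (my own illustration, not from the paper): the q-expectation of $\sigma_z$ lies anywhere in the interval [-1,1], while any single conventional measurement returns an eigenvalue +1 or -1.

```python
import numpy as np

# Pauli sigma_z, whose eigenvalues (the conventional single-run outcomes) are +1 and -1.
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

def q_expectation(psi, op):
    """<psi| op |psi> for a normalized state vector psi."""
    return np.real(np.conj(psi) @ op @ psi)

# Equal superposition of up and down: the "true value" in the sense of the
# thermal interpretation would be 0, a value no single run ever returns.
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)
print(q_expectation(psi, sigma_z))  # 0.0

# A state tilted toward "up": the q-expectation lies between the eigenvalues.
theta = np.pi / 3
psi2 = np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)
print(q_expectation(psi2, sigma_z))  # cos(theta) = 0.5
```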
As acknowledged in the paper, the two spots in the Stern-Gerlach experiment appear because the spin of the silver atoms gets aligned to the B-field. Observing the two macroscopic spots is therefore usually considered to be an (indirect) measurement of the microscopic spin of the entities making up the current. It is not obvious to me why this should not count as a measurement of the spin. The fact that it is unknown a priori on which of the two spots a single particle will land is well explained by the conventional quantum formalism.
Even though one of the goals of the thermal interpretation seems to be to make classical and quantum mechanics look more alike, it remains the case that, contrary to the quantum case, in classical statistical mechanics the entities making up the system under consideration can in principle be distinguished or even labeled. The system also does have microstates that are described by the properties of the microscopic constituents, and the number of microstates compatible with a given macroscopic variable is what quantifies the entropy of the system.
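The last statement is just Boltzmann's relation: with $\Omega$ the number of microstates compatible with the given macrostate and $k_B$ Boltzmann's constant,

```latex
S = k_B \ln \Omega .
```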
Usually, when investigating the interaction between matter and radiation, the quantum system is modeled by an oscillator and the external field provides the external forcing. To recover the Ritz combination principle, this paper does things the other way round. More generally, to describe the quantum behavior of many microscopic systems, the point of view presented in this paper seems to be to shift the focus of interest to the (macroscopic or classical) behavior of the detector.
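As a toy sketch of this detector-centered point of view (my own illustration; the natural frequency and damping are made-up parameters): scanning the driving frequency of a damped harmonic oscillator and recording its steady-state response amplitude reveals the frequency content of the forcing system, since the response peaks near resonance.

```python
import numpy as np

def steady_state_amplitude(w, w0=1.0, gamma=0.1, f0=1.0):
    """Amplitude of x'' + gamma*x' + w0^2*x = f0*cos(w*t) after transients decay."""
    return f0 / np.sqrt((w0**2 - w**2)**2 + (gamma * w)**2)

# Scan driving frequencies: the oscillator acts as the "detector" whose
# macroscopic response amplitude encodes the spectrum of the forcing.
ws = np.linspace(0.5, 1.5, 1001)
amps = steady_state_amplitude(ws)
w_peak = ws[np.argmax(amps)]
print(w_peak)  # close to w0 = 1.0 (shifted slightly below by the damping)
```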
In summary, I still think, as said earlier, that the thermal interpretation makes a lot of sense in statistical mechanics, whereas concerning the microscopic implications I have my reservations. For example, the true values as defined in this paper are, to say the least, highly non-intuitive to me personally. However, some derivations, such as the one for the classical limit or the response of environmental (detector?) variables, also look rather cute to me.