I have been studying decoherence in quantum mechanics (not in QFT, where I don't know how it is described) and renormalization in QFT and statistical field theory, and I noticed a similarity between the two procedures. On one side, decoherence tells us to trace over the degrees of freedom we don't monitor, which are in some sense intrinsically unknown, in order to recover a classical picture from quantum mechanics. On the other side, when renormalizing we also integrate over "our ignorance", but this time over the UV physics, i.e. the high-energy modes, to obtain the infrared physics we observe. Beyond the technical similarity (taking a trace, in the case of discrete Kadanoff-Wilson transformations), it feels that in both cases we are forced to perform these procedures because we start from the wrong picture: we first isolate the free object (purely quantum in the first case, with bare parameters in the second) and then compute the effects of the interactions, which are responsible for what we observers actually see, namely classical and infrared physics. This is why it seems to me that some interesting links between the two concepts may exist or could be pointed out (and I also wonder what decoherence becomes in QFT).
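To make the decoherence side of the analogy concrete, here is a minimal numpy sketch (the two-qubit state and the overlap parameter are my own toy choices, not from any reference): a system qubit is entangled with environment states whose overlap eps measures how much "which-path" information has leaked out; tracing out the environment kills the off-diagonal coherences exactly when the environment states become orthogonal.

```python
import numpy as np

def reduced_density_matrix(psi, dim_sys, dim_env):
    """Partial trace over the environment for a pure state of system x environment."""
    psi = psi.reshape(dim_sys, dim_env)
    return psi @ psi.conj().T

def decohered_state(eps):
    """System qubit entangled with environment states e0, e1 with <e0|e1> = eps."""
    e0 = np.array([1.0, 0.0])
    e1 = np.array([eps, np.sqrt(1.0 - eps**2)])
    psi = (np.kron([1.0, 0.0], e0) + np.kron([0.0, 1.0], e1)) / np.sqrt(2.0)
    return reduced_density_matrix(psi, 2, 2)

rho_coherent = decohered_state(1.0)   # environment learns nothing: off-diagonals stay 1/2
rho_decohered = decohered_state(0.0)  # orthogonal environment states: off-diagonals vanish
print(np.round(rho_coherent, 3))
print(np.round(rho_decohered, 3))
```

The reduced state interpolates continuously between a coherent superposition and a classical mixture as eps goes from 1 to 0, which is the "tracing over our ignorance" step in miniature.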

I still see one huge asymmetry between the two: decoherence is dynamical, with a typical decay time, whereas renormalization is static.

I hope I have explained my question clearly, and that some interesting comments will come.

This post imported from StackExchange Physics at 2014-05-01 12:06 (UCT), posted by SE-user toot