One defines the "scaling dimension" (as opposed to the "engineering dimension") of an operator $\mathcal{O}$ as the number $[\mathcal{O}]$ such that, under the rescaling $x \to t^{-1}x$, the operator transforms as $\mathcal{O}(t^{-1}x) = t^{[\mathcal{O}]}\mathcal{O}(x)$, and the Lagrangian in which $\mathcal{O}$ appears is then scale invariant.
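To make the definition concrete, here is the standard free-scalar bookkeeping as I understand it (a sketch of the usual classical argument, not specific to any particular model). Demanding that the kinetic action in $D$ spacetime dimensions be invariant under $x \to t^{-1}x$ with $\phi(t^{-1}x) = t^{[\phi]}\phi(x)$ gives

$$
S = \int d^D x\, \partial_\mu \phi\, \partial^\mu \phi
\;\longrightarrow\;
t^{\,2[\phi] + 2 - D}\, S ,
\qquad \text{so scale invariance forces } [\phi] = \frac{D-2}{2},
$$

which is $[\phi] = 1$ in $3+1$ dimensions and $[\phi] = \tfrac{1}{2}$ in $2+1$ dimensions.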

- Unlike "engineering dimensions", the value of a scaling dimension (even classically!) apparently cannot be read off from the operator alone: one seems to need to know the Lagrangian in which the operator appears, so that the "right" $[\mathcal{O}]$ can be assigned to preserve scale invariance.

For example, how else does one explain that the "engineering dimension" of $m^2\phi$ is $3$ whereas its "scaling dimension" is $1$ (the same as that of $\phi$)? (The above obviously follows if I think of the term as occurring in a $3+1$-dimensional Lagrangian and ask what the scaling dimensions should be for the Lagrangian to be scale invariant, but something doesn't look very intuitive.)
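The bookkeeping behind that, as I understand it (a sketch, taking $D = 3+1$ so that $[\phi] = 1$): engineering dimensions count every mass scale appearing in the expression, whereas under the scale transformation the parameter $m$ is held fixed and only the field transforms, so

$$
[m^2\phi]_{\text{eng}} = 2[m] + [\phi] = 2 + 1 = 3,
\qquad
m^2\phi(t^{-1}x) = t^{[\phi]}\, m^2\phi(x)
\;\Longrightarrow\;
[m^2\phi]_{\text{scaling}} = [\phi] = 1 .
$$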

I would like to know what special difficulty is faced in defining $2$-point correlation functions of $\mathcal{O}$ when it is real (as opposed to when it is complex, as in the next question, though that case is not so obvious either).

For complex $\mathcal{O}$ it "follows" that $\langle \mathcal{O}(x)\mathcal{O}^*(y)\rangle \sim \vert x - y \vert^{-2[\mathcal{O}]}$. This is clearly consistent with the definition of the scaling dimension, but is there a "derivation" of it? I have often seen the statement that the above short-distance behaviour follows from "reflection positivity" (à la the Wightman axioms). I would like to know of some explanations.
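For what it's worth, the piece that seems to follow from scale invariance alone (leaving aside what reflection positivity adds) is the power law itself: if the $2$-point function is translation- and rotation-invariant and transforms covariantly under scaling, then

$$
G(x-y) \equiv \langle \mathcal{O}(x)\mathcal{O}^*(y)\rangle
= t^{2[\mathcal{O}]}\, G\bigl(t(x-y)\bigr) \quad \forall\, t>0
\;\Longrightarrow\;
G(x-y) = \frac{C}{\vert x-y\vert^{2[\mathcal{O}]}} ,
$$

since rotation invariance makes $G$ a function of $r = \vert x-y\vert$ only, and setting $t = 1/r$ in the functional equation gives $G(r) = r^{-2[\mathcal{O}]} G(1)$. What this sketch does not address is my actual question: what reflection positivity contributes beyond this, e.g. constraints on the constant $C$ or on $[\mathcal{O}]$ itself.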

This post imported from StackExchange MathOverflow at 2014-09-01 11:22 (UCT), posted by SE-user Anirbit