
Kenneth Wilson, RIP

Kenneth Wilson died from complications of lymphoma (a blood cancer) in Saco, Maine (where he and his wife had previously moved because of their love of kayaking) on Saturday, aged 77 years and 1 week. He received his Nobel Prize in 1982. His adviser was Murray Gell-Mann and his students included Jackiw, Shenker, Peskin, and Ginsparg.
See also: WSJ, WaPo, Yahoo, NECN, Newsday, Google News, Physics World, Cornell, Press Herald, John Preskill, Sean Carroll, Clifford Johnson, a Shmoit
More importantly, he taught us the concepts of effective field theories and the renormalization group that have explained why renormalization works – and many other things. Many folks – a set that includes my former PhD adviser Tom Banks – classify Wilson's insights as the deepest advance of theoretical physics of the 1970s. Despite these experts' opinions, Wilson remained largely unknown to the public throughout his life.




The first talk I ever gave at a university other than my own was a 1998 talk at Ohio State. Wilson had been there since 1988 and he could have attended the talk but (even though I was immensely interested in his presence) I have completely forgotten whether he actually showed up. ;-)

I first met Wilson and talked to him during a lunch in the Society of Fellows that overlapped with the 2005 Sidneyfest. He was smiling and satisfied and he was still thinking about physics although his most beloved recent theories seemed self-evidently silly – not only to me but also to some fellow Nobel prize winners – and he wasn't quite following the ongoing cutting-edge theoretical research. I also knew a younger Slovak lady (Martina M. Brisudova) who was a recent collaborator of his.




Kenneth Wilson is the father of the Wilson loop, the path-ordered exponential of the gauge field around a closed loop whose trace computes the trace of the monodromy,\[

W_C := \mathrm{Tr}\,(\, \mathcal{P}\exp i \oint_C A_\mu dx^\mu \,)\,.

\] Such quantities are mundane for us today (and useful every day) but there used to be times when no one would ever dare to do such things with the gauge fields. Look at his impressive publication and citation record.
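To see how concrete this object is, here is a minimal toy sketch (mine, not any canonical lattice code): on a lattice, the path-ordered exponential degenerates into an ordered product of link matrices, and the smallest Wilson loop is the normalized trace of the product of four SU(2) links around an elementary plaquette. The tiny 2D lattice, the near-identity random links, and names such as `wilson_loop_1x1` are my illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_su2(eps=0.3):
    """Random SU(2) matrix near the identity: a0*1 + i*(a . sigma), a0^2 + |a|^2 = 1."""
    a = rng.normal(size=3)
    a *= eps / np.linalg.norm(a)
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    return np.sqrt(1 - eps**2) * np.eye(2) + 1j * (a[0]*sx + a[1]*sy + a[2]*sz)

# One SU(2) link matrix U_mu(x) per link of a small 2D periodic lattice.
L = 4
links = {(x, y, mu): random_su2() for x in range(L) for y in range(L) for mu in (0, 1)}

def wilson_loop_1x1(x, y):
    """Path-ordered product around an elementary plaquette; returns Re Tr / 2."""
    U1 = links[(x, y, 0)]                     # step in +x
    U2 = links[((x + 1) % L, y, 1)]           # step in +y
    U3 = links[(x, (y + 1) % L, 0)].conj().T  # step in -x uses the inverse link
    U4 = links[(x, y, 1)].conj().T            # step in -y uses the inverse link
    return np.trace(U1 @ U2 @ U3 @ U4).real / 2

print(np.mean([wilson_loop_1x1(x, y) for x in range(L) for y in range(L)]))
```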

But more importantly, he became the main guy behind the Renormalization Group (RG). Physicists had learned the playful and clever tricks of renormalization but they didn't quite understand where its success came from and some of them had doubts whether it should be trusted at all.
Remotely related: On Thursday, 5 pm Prague Summer Time i.e. 11 am Boston Daylight Time, there will be a Google Hangout with top HEP phenomenologist John Ellis about SUSY on youtube.com/CERNTV. Use @CERN #askcern on Twitter to ask questions. Incidentally, the first tetraquark (a pair of mesons stuck together) called \(Z_c(3900)\) was discovered at Belle as well as BESIII at the Beijing collider BEPC, see e.g. Nature.
The Renormalization Group with its related machinery and terminology including effective field theories, relevant and irrelevant interactions, fixed points, and so on has eliminated all the doubts, unmasked with remarkable clarity the power that makes the renormalization procedures consistent and successful, and given us a modern understanding of what quantum field theory actually means (some people say that we are still waiting for analogous insights about the "true nature" of string theory). Wilson achieved these things in 1971-1974, building on Freeman Dyson's systematic 1949 theory of the old renormalization methodology and Leo Kadanoff's 1966 ideas about the "block spin renormalization group".

What does this Wilsonian theory (some people could call it "Wilsonian philosophy" but this label doesn't reduce its robustness and importance in physics at all) say?

It says that quantum field theories (and similarly models in statistical physics that are mathematically analogous) should not be viewed as the final theories of everything but just as approximate theories that describe all objects and phenomena whose characteristic length scales are (much) longer than some \(L\) or, equivalently, whose energies are (much) lower than the corresponding \(E\sim \hbar c/L\).

An important fact is that such a "restriction of the original theory" – which may even be a final theory – is possible at all. Why is it possible? Because we can explicitly construct it. Assuming that your "more complete" theory admits a formulation in terms of Feynman's path integral, we may define\[

\exp\left(-S_{\Lambda'}[\phi]\right)\ \stackrel{\mathrm{def}}{=}\ \int_{\Lambda' \leq p \leq \Lambda} \mathcal{D}\phi \exp\left[-S_\Lambda[\phi]\right].

\] On the right hand side, we are using a theory with the action \(S_\Lambda\) and this theory is supposed to work for all energies/momenta up to \(\Lambda\) which is very high. You may imagine this parameter to be infinite if you haven't thought about theories with a restricted domain of validity before.

All the calculable probability amplitudes are given by the Feynman path integral which is an infinite-dimensional integral over all field modes with various momenta. The key observation is that this integral may be reorganized in such a way that we first integrate it over the higher-energy modes, e.g. – in the formula above – modes with \(\Lambda' \leq p \leq \Lambda\). In this way, we obtain a function that only depends on the low-energy field modes, \(p\leq \Lambda'\), and the integral over these field modes can be done at the end.

A funny thing is that the function we integrate at the end only depends on the low-energy field modes – because the higher-energy field modes have been "integrated out", which means that they have been "integrated over" and thereby removed from the list of variables upon which our remaining action \(S_{\Lambda'}\) on the left hand side depends. Still, this simplified function is totally sufficient to calculate arbitrary correlators etc. of the low-energy field modes (and scattering amplitudes for particles at low energies, among related things) as long as we "integrate out" the high-energy quanta properly and accurately.

The function that only depends on the low-energy quanta defines what we call the "effective field theory". Because its action doesn't depend on the high-energy quanta at all, this "effective field theory" will also generally become independent of any particles, fields, interactions, and laws of physics that only influence the very-short-distance or very-high-energy physical phenomena. We don't need to know the quarks to study atomic physics (or chemistry) and the Wilsonian "integrating things out" quantitatively realizes the same general idea in the technical framework of quantum field theories.
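A zero-dimensional caricature (my toy example, with two ordinary variables standing in for the infinitely many field modes) shows the recipe in its exactly solvable Gaussian form: integrating out a heavy mode \(\chi\) leaves an effective action for the light mode \(\phi\) that reproduces the low-energy correlators exactly, with all the heavy physics compressed into a shift of the light mass.

```python
import numpy as np

# Toy "path integral" with two modes: a light field phi and a heavy field chi,
# S = (1/2) m^2 phi^2 + (1/2) M^2 chi^2 + g * phi * chi   (Gaussian, exactly solvable).
m2, M2, g = 1.0, 100.0, 3.0

# Full theory: <phi^2> is the (phi,phi) entry of the inverse quadratic-form matrix.
A = np.array([[m2, g], [g, M2]])
phi2_full = np.linalg.inv(A)[0, 0]

# Effective theory: Gaussian integration over chi shifts the light mass term.
m2_eff = m2 - g**2 / M2
phi2_eff = 1.0 / m2_eff

print(phi2_full, phi2_eff)  # identical: the heavy mode only renormalized m^2
```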

(You should get your mind into the right mood by checking one of the interactive Flash animations showing the Universe at various length scales. Wilson effectively tells us to study the scales independently.)

So different theories valid at all distance scales, including the very short ones, may produce the same – or nearly the same – effective field theories for the low-energy modes. They may simply imply the same spectrum of particles or fields at low energies and, because their interactions are rather constrained (the space of effective field theories obeying certain extra conditions is rather small or exclusive), the interactions may agree, too.



Celebrations of the 1982 Nobel prize at Cornell. He looks very young among his colleagues – we're used to young people celebrating old men's Nobel prizes – but he was already 46 in the picture above.

This was the first, more general part of the Wilsonian ideas: it's a good idea to separate physics into the physics at various scales. Short-distance physics affects the long-distance physics derived from it; but the relationship doesn't hold in the opposite direction because short-distance physics is often left undetermined if we only know its long-distance manifestations.

The second part of Wilson's important contributions is a whole industry of methods that tell us how the effective field theories differ from the original ones when the original ones are also quantum field theories (we could even say that they are effective field theories as well, just with a higher \(\Lambda\)), and how the space of possible effective field theories may be parameterized.

When I wrote the big displayed equation defining \(S_{\Lambda'}\) above, I encouraged you to imagine that \(\Lambda\), the highest scale at which the original theory was valid, was infinite while \(\Lambda'\), the highest scale where the effective (derived) theory is applicable, is much smaller. However, the real technical power of the Renormalization Group shows up when the scales \(\Lambda\) and \(\Lambda'\) are actually very close to each other:\[

\Lambda' = \Lambda (1-\varepsilon)

\] Here, \(\varepsilon\) is an infinitesimal positive number. In this case, the partial integration in the Feynman path integral is the integration over a thin shell of field modes \(\phi(p)\) whose momenta (their magnitude) belong to a very narrow interval\[

\Lambda(1-\varepsilon) \leq p \leq \Lambda.

\] In other words, we are just trying to lower the scale \(\Lambda\) by an infinitesimal amount. This changes the original quantum field theory to something else but because the change we have made is apparently "infinitesimal", the change of the quantum field theory should be infinitely small, too.

In fact, the derived effective field theory will be a theory of the very same kind as the original one but the values of the parameters – masses of particles and coupling constants – will be changed by an infinitesimal amount. We may always interpret the lowering of the value of \(\Lambda\) as a "transformation" and these transformations may be composed associatively. There is also an identity transformation (keep \(\Lambda\) and therefore the quantum field theory intact) so we may say that these transformations that lower the values of \(\Lambda\) form a group.

Well, more precisely, we have said that the transition from a more complete theory with a higher \(\Lambda\) to an effective field theory with a lower \(\Lambda'\) is irreversible because this procedure "forgets" some particles and interactions that only mattered at high energies. Because of this irreversibility, the transformations lowering the values of \(\Lambda\) don't admit any inverse transformations. An almost-group that doesn't require inverse elements to exist is called a semigroup (and a monoid, if the identity is included) but because physicists would find the term Renormalization Semigroup awkward, hard to pronounce, and dominated by mathematicians' nitpickiness, they use the term Renormalization Group. The (not quite) group elements are still the (associative) transformations reducing the value of \(\Lambda\), the maximum energy scale at which the effective theory works.
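The one place where all of this may be seen exactly in a few lines is the zero-field one-dimensional Ising chain, a statistical-physics cousin of the field-theory story (Kadanoff-style decimation; the code is my sketch). Summing over all but every \(b\)-th spin yields the exact recursion \(\tanh K' = (\tanh K)^b\) for the nearest-neighbor coupling \(K\), and composing two \(b=2\) steps reproduces a single \(b=4\) step – the (semi)group property in action, while the decimated short-distance spins are gone for good.

```python
import math

def decimate(K, b=2):
    """One RG step for the zero-field 1D Ising chain: summing over all but
    every b-th spin gives the exact recursion tanh(K') = tanh(K)**b."""
    return math.atanh(math.tanh(K) ** b)

K0 = 0.8
print(decimate(decimate(K0, 2), 2))  # ~0.1969: two b=2 steps...
print(decimate(K0, 4))               # ~0.1969: ...equal one b=4 step

# Flowing to the infrared: iterating drives K toward 0, the trivial fixed point.
K = K0
for _ in range(5):
    K = decimate(K)
    print(K)
```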

The procedure of lowering the cutoff towards \(\Lambda'\) has some impact on the parameters of the effective field theory. This effect may be calculated (at least perturbatively) by Feynman diagrams in which the internal lines are only integrated over a small interval or shell of allowed momenta and energies. When you do such a thing, you will find out that the couplings "run": they depend on \(\Lambda\). (When you discuss the same kind of changes of all the parameters and perhaps even more qualitative changes of the whole theory, the right verb is that we are "flowing the theory to the infrared".) The most important and perhaps the most typical functional dependence that appears in this running is the logarithmic one, something like (approximately, up to 1-loop diagrams)\[

\frac{1}{g^2(\Lambda)} - \frac{1}{g^2(\Lambda')} = B\cdot \ln\left( \frac{\Lambda}{\Lambda'} \right)

\] where the constant prefactor \(B\) is related to the so-called \(\beta\)-function, the "rate" at which the coupling constant changes with \(\Lambda\). Similar and perhaps more complicated "RG equations" are used to study how the parameters evolve from the high-energy scale to a low-energy scale. In particular, these "running coupling" calculations are totally essential to discuss the gauge coupling unification (the convergence of the "fine-structure constants" of the three factors of the Standard Model gauge group to a common value at a high energy scale) in grand unified theories and for many similar applications. It's important to realize that as long as we identify the couplings with finite numbers that really correspond to some processes at a given energy, they are allowed to run.
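As a hedged numerical illustration of this running (one-loop only; the inputs at \(M_Z\) are rough textbook values, not a precision fit), one may integrate the displayed equation for the three Standard Model gauge couplings and watch the inverse fine-structure constants approach one another at very high scales – in the non-supersymmetric Standard Model they famously nearly, but not exactly, meet.

```python
import math

# One-loop SM beta coefficients (GUT normalization of hypercharge):
# d(1/alpha_i)/d ln(mu) = -b_i / (2*pi).
b = {"U(1)_Y": 41 / 10, "SU(2)_L": -19 / 6, "SU(3)_c": -7.0}

MZ = 91.19  # GeV
inv_alpha_MZ = {"U(1)_Y": 59.0, "SU(2)_L": 29.6, "SU(3)_c": 8.5}  # approximate

def inv_alpha(name, mu):
    """Integrated running: 1/alpha(mu) = 1/alpha(MZ) - b/(2 pi) * ln(mu/MZ)."""
    return inv_alpha_MZ[name] - b[name] / (2 * math.pi) * math.log(mu / MZ)

for exponent in (4, 8, 13, 16):
    mu = 10.0 ** exponent
    print(f"mu = 1e{exponent} GeV:",
          {name: round(inv_alpha(name, mu), 1) for name in b})
```

With these inputs, \(1/\alpha_1\) and \(1/\alpha_2\) cross near \(10^{13}\,{\rm GeV}\) while \(1/\alpha_3\) misses them; adding the superpartners changes the \(b_i\) and makes the three lines meet much more precisely.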

If you want to use the RG methods to understand why the old renormalization methods – already used since the 1940s – work, it is a good idea to "map" the space of possible effective theories with a given spectrum and with some fixed value of \(\Lambda\). If these theories form an \(n\)-dimensional space, it must be possible to deform each of them to get to a nearby effective field theory. These deformations may in turn be realized by adding a term (operator) to their Lagrangian.

For an effective field theory, you want to classify all possible deformations. They may be divided into relevant ones, marginal ones (the "unlikely", generically measure-zero border case), and irrelevant ones according to their influence on the very low-energy physics. In general, the relevant deformations are those whose effect becomes increasingly important as you move from high energies to low energies; the rule is reversed for the irrelevant ones and the effect remains equally strong at all scales for the marginal ones.

The most reductionist treatment of the perturbatively known quantum field theories such as QED or the Standard Model presents all of them as deformations of a "Gaussian fixed point". The adjective "Gaussian" means that the integrand of the path integral is Gaussian i.e. that the action is free (at most bilinear); there also exist non-Gaussian (interacting) fixed points but they're harder to find. The deformations are all the interactions we are adding. The term "fixed point" refers to the theory's being unchanged under the renormalization group flows i.e. its being independent of \(\Lambda\): fixed points are nothing else than scale-invariant theories – the most important lighthouses in the landscape of effective field theories when the RG methods are used to map this landscape.

The deformations may be roughly identified with the extra terms in the Lagrangian that you might add. You will find out that the relevant ones are those whose coefficients have units of \({\rm mass}^n\) with positive powers of mass, while the irrelevant ones have negative powers of mass. You will only find a finite number of relevant deformations but an infinite number of irrelevant ones – the latter are the "non-renormalizable interactions" (essentially equivalent to what physicists call "higher-dimension operators"), such as \[

\delta S = L^4 \int d^4 x\, (F_{\mu\nu}F^{\mu\nu})^2

\] in quantum electrodynamics where \(L\) is some parameter with the units of length.
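The classical counting may be automated in a few lines (my sketch; the operator list is illustrative). In \(d=4\), a scalar has mass dimension 1, a fermion 3/2, a derivative 1, and a field strength 2, so the coefficient in front of an operator has mass dimension 4 minus the operator's dimension, and the sign of that number decides the classification:

```python
# Classical (engineering) dimensions in d = 4: the mass dimension of a
# coupling decides whether the corresponding deformation is relevant.
D = 4
DIM = {"phi": 1.0, "psi": 1.5, "F": 2.0, "d": 1.0}  # scalar, fermion, field strength, derivative

def classify(factors):
    """Coupling dimension = D minus the sum of the constituents' dimensions:
    positive => relevant, zero => (classically) marginal, negative => irrelevant."""
    c = D - sum(DIM[f] for f in factors)
    return c, ("relevant" if c > 0 else "marginal" if c == 0 else "irrelevant")

examples = [
    ("phi^2 (mass term)", ["phi"] * 2),
    ("phi^4", ["phi"] * 4),
    ("psi-bar psi (mass term)", ["psi"] * 2),
    ("F^2 (Maxwell term)", ["F"] * 2),
    ("(F^2)^2", ["F"] * 4),          # the L^4 operator displayed above
    ("phi^2 (d phi)^2", ["phi"] * 2 + ["d", "phi"] * 2),
]
for name, op in examples:
    c, kind = classify(op)
    print(f"{name}: coupling dimension {c:+g} -> {kind}")
```

Note that \((F^2)^2\) comes out with coupling dimension \(-4\), matching the \(L^4\) coefficient of the deformation displayed above.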

Before Wilson, non-renormalizable interactions used to be interpreted as the ultimate blasphemies, extra terms that immediately throw us into a hell of inconsistencies (an infinite hell, because there are infinitely many such terms we may add), something that we shouldn't even think about. Wilson's appraisal of their status is different: they're OK, you may actually add them, but they're "irrelevant" because their effect on the effective field theory below the scale \(\Lambda'\) becomes negligible if this scale is much smaller than the original one, \(\Lambda'\ll \Lambda\).

If you generate an irrelevant interaction in an effective field theory from the "integrating out" of some field modes, the typical magnitude of the parameter \(L\) above will be of order \(1/\Lambda\), i.e. linked to the very high-energy scale where the source of the interaction resides. This is why the effect of such a higher-dimension operator will be negligible around the low energy scale \(\Lambda'\) because the coefficient\[

L^4 \sim \frac{1}{\Lambda^4} \ll \frac{1}{\Lambda^{\prime 4}}

\] is much smaller – by a factor of \((\Lambda'/\Lambda)^n\) with some positive exponent \(n\), in this case \(n=4\) – than the typical size of the coefficient that you would expect (by dimensional analysis) if this interaction were as important as some relevant or marginal ones at energy scales close to \(\Lambda'\).
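To put an illustrative number on this suppression (my example; the TeV-ish \(\Lambda'\) and the GUT-like \(\Lambda\) are chosen purely for concreteness):\[

\left(\frac{\Lambda'}{\Lambda}\right)^4 \sim \left( \frac{10^{3}\,{\rm GeV}}{10^{16}\,{\rm GeV}} \right)^4 = 10^{-52}.

\]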

Once again, instead of being "immediate superstrong devils and killers of consistency", irrelevant interactions were reclassified as effectively harmless bugs. The wider the gap between the low energy that you experimentally probe and the high energy scale where the irrelevant term originates, the more negligible the term will be. Despite the small coefficient, such terms may still sometimes be important, especially if they generate rare processes that can't be caused by any relevant, marginal, or otherwise "normally strong" interactions.

The marginal interactions are in between. For example, the fine-structure constant \(\alpha\sim 1/137.036\) is dimensionless which means that the characteristic strength of the electromagnetic interactions is linked to a marginal deformation. Well, because this fine-structure constant runs logarithmically, it's actually not exactly marginal. Such couplings have "anomalous dimensions" – the exponents receive corrections proportional to \(\alpha\) itself. So the fine-structure constant only looks dimensionless classically; quantum mechanically, the corresponding coefficient has the units of a fractional power of the energy that is close to, but not equal to, the power derived classically.
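Schematically (my paraphrase of the standard lore, not a new claim): a classically marginal coupling with a small anomalous dimension \(\gamma\) scales as a tiny fractional power of the scale, which is indistinguishable from a logarithm at the leading order,\[

\lambda(\mu) = \lambda(\mu_0)\left(\frac{\mu}{\mu_0}\right)^{-\gamma} \approx \lambda(\mu_0)\left( 1 - \gamma\ln\frac{\mu}{\mu_0} \right).

\]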

(If you want exactly marginal deformations, you demand the quantum correction to the classical dimension – the anomalous dimension – to vanish exactly as well. This rarely occurs by chance and almost all important examples we know, at least for \(d\gt 2\), are supersymmetric theories. Supersymmetry likes to guarantee similar cancellations. We also know important interacting supersymmetric theories that are nevertheless fixed points, i.e. exactly scale invariant. The \(\mathcal{N}=4\), \(d=4\) gauge theory is the most celebrated example while the non-Lagrangian six-dimensional \((2,0)\) theory is its much less well-known cousin.)



Ascania: Supersymmetry.

Such RG methods may also convince you that it doesn't matter which kind of regularization – brute cutoffs, Pauli-Villars, dimensional regularization etc. – you use. The Wilsonian idea is that you focus on the space of effective theories i.e. those that are directly useful for the predictions of doable low-energy experiments. This space of theories – defined to be "almost directly relevant for the observations" – may be shown to exist and to have a certain dimensionality or set of allowed deformations and there may be many ways in which this space is described or parameterized. These descriptions must ultimately differ by a redefinition of variables only. Whatever you can do with one regularization technique or renormalization scheme must be translatable to another.

The "integrating out" is the key technique that allows us to translate the properties of the high-energy quantum field theory – something that may be rather directly linked to a more fundamental theory that doesn't have to be a local quantum field theory, especially to string theory – into the properties of the low-energy effective field theories that is almost immediately usable to describe the doable observations.

It's important that this translation – and the running of the couplings or the flowing of the theories etc. – exists at all and is not an identity transformation. It's equally important that the low-energy effective field theory is independent of many or most details of the high-energy physics. The previous sentence is pretty much equivalent to an observation from a different angle, namely that the behavior of quantum field theories (and even other high-energy starting points such as string theory) at low energies tends to be "universal". These possible low-energy behaviors may be discussed separately from the dynamics at high energies or short distances.

So what about the infinities that the old renormalization uses (and has to cancel) all the time? In the renormalization group philosophy, you may imagine that these are finite numbers that depend on a high energy scale \(\Lambda\). These terms have to cancel by definition if our task is to study effective field theories i.e. descriptions that are independent of the physics above the high energy scale \(\Lambda\). In particular, the effective field theory has to be independent of \(\Lambda\) itself.

The cancellation of the divergences is no magic or blasphemy anymore. Wilson showed that this cancellation pretty much tautologically follows from the very task we outlined for ourselves – the task is to study the observable low-energy phenomena which effectively means to study the effective field theory for a physical system (or the possible effective field theories for a class of systems). Because of this independence, one may also get rid of some contrived artifacts linked to a particular finite value of \(\Lambda\) and study the limit \(\Lambda\to\infty\) in which the cancelled terms are "strictly" infinite. It's just a natural limit that makes the unimportance of the physics at the high energy scale more self-evident.
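A few-line numerical check (my toy, reusing the logarithmic-running formula displayed earlier with an invented coefficient \(B\) and invented scales) makes this explicit: if the bare coupling at the cutoff is adjusted exactly as the RG dictates, every choice of \(\Lambda\) yields the same low-energy physics.

```python
import math

# Toy check that low-energy predictions don't depend on the cutoff: the
# "physical" coupling at a fixed low scale mu follows from the bare coupling
# at the cutoff Lambda via 1/g^2(mu) = 1/g^2(Lambda) - B*ln(Lambda/mu).
B, mu = 0.5, 1.0                        # illustrative beta coefficient and low scale
Lambda_star, inv_g2_star = 1.0e3, 10.0  # reference cutoff and bare coupling there

def inv_g2_bare(Lam):
    """The bare coupling 'runs' with the cutoff exactly as the RG dictates."""
    return inv_g2_star + B * math.log(Lam / Lambda_star)

for Lam in (1e3, 1e6, 1e10, 1e15):
    inv_g2_phys = inv_g2_bare(Lam) - B * math.log(Lam / mu)
    print(f"Lambda = {Lam:.0e}: 1/g^2(mu) = {inv_g2_phys:.6f}")  # identical every time
```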

The Wilsonian approach leads to a revision of many ideas about naturalness, the real problems with non-renormalizable theories, and more. Whether a theory is natural or not should be decided according to the values of the parameters at the high, fundamental energy scale; the values at low energies are their consequence. However, it may often be hard for a high-energy theory to "flow" to a realistic or semirealistic theory at low energies, e.g. to preserve any light particles at all (if there are no particles lighter than \(\Lambda'\), the "integrating out" may leave us with no degrees of freedom whatsoever; the path integral becomes a boring constant because there are no variables left). The infinities themselves aren't a problem because you may always imagine that those numbers are finite; the real problem with the non-renormalizable interactions is that there are infinitely many of them whose coefficients have to be adjusted, which makes the theory unpredictive for the phenomena near \(\Lambda\).

All these insights were found independently of string theory and, effectively ;-), before string theory. And Ken Wilson wasn't even a string theorist at any point of his life (sorry, I don't count his strings on a lattice). Still, pretty much all the people who talk about nonsensical things such as "competing theories", "loop quantum gravity", and so on misunderstand most of the insights about the renormalization group – even the general comments above. Their beliefs about the character and right interpretation of renormalization techniques are stuck somewhere in the 1940s (especially because of the patently obsolete opinion that the real challenge when it comes to UV divergences is to get rid of divergent integrals). In this sense, these "anti-string-theorists" misunderstand not only the physics of the last 40 years but also the physics of the last 70 years. They're just hopeless.

The name of Ken Wilson in this very form has appeared in more than 20 older TRF blog entries. RIP.