
Particles and signals moving backward in time

Mephisto has raised the issue of the Wheeler-Feynman absorber theory in particular, and of influences that travel backward in time in general.

I remember having been excited about many similar topics when I was a teenager. Feynman and Wheeler were surely excited, too. But as physics was making its historical progress towards quantum field theory and beyond, most of the original reasons for excitement turned out to be misguided.

In some sense, each (independently thinking) physicist is following in the footsteps of history. So let's return to the early 20th century, or to the moment when I was 15 years old or so. Remember: these two moments are not identical - they're just analogous. ;-)

Divergences in classical physics

The first conceptual framework, Newton's, was based on classical mechanics: coordinates x(t) of point masses obeyed differential equations. That was his version of a theory of everything. He had to make so many new steps and he extended the reach of science so massively that it wouldn't have been shocking if he had believed that he had found a theory of everything. However, Newton was modest and he kind of appreciated that his theory was just "effective", to use the modern jargon.

People eventually applied these laws to the motion of many particles that constituted a continuum (e.g. a liquid). Effectively, they invented or derived the concept of a field. Throughout the 19th century, most of the physicists - including the top ones such as James Clerk Maxwell - would still believe that Newton's framework was primary: there always had to be some "corpuscles" or atoms behind any phenomenon. There had to be some x(t) underlying everything.

As you know, these misconceptions forced people to believe in the luminiferous aether, a hypothetical substance whose mechanical properties were supposed to reproduce the electromagnetic phenomena. The electromagnetic waves (including light) were thought to be quasi-sound waves in a new environment, the aether. This environment had to be able to penetrate everything. It had many other bizarre properties, too.

Hendrik Lorentz figured out that the empirical evidence for the electromagnetic phenomena only supported one electric vector E and one magnetic vector B at each point. A very simple environment, indeed. Everything else was derived. Lorentz explained how the vectors H, D are related to B, E in an environment.

Of course, people already knew Maxwell's equations, but because Newton's framework was so successful and so imprinted in their bones, they assumed it had to be fundamental for everything. Fields "couldn't be" fundamental until 1905 when Albert Einstein, elaborating upon some confused and incomplete findings by Lorentz, realized that there was no aether. The electromagnetic field itself was fundamental. The vacuum was completely empty. It had to be empty of particles, otherwise the principle of relativity would have been violated.

This was a qualitative change of the framework. Instead of a finite number of observables "x(t)" - the coordinates of elementary particles - people suddenly thought that the world was described by fields such as "phi(x,y,z,t)". Fields depended both on space and time. The number of degrees of freedom grew by an infinite factor. It was a pretty important technical change but physics was still awaiting much deeper, conceptual transformations. (I primarily mean the switch to the probabilistic quantum framework but its epistemological features won't be discussed in this article.)

The classical field theory eventually became compatible with special relativity - Maxwell's equations were always compatible with it (as shown by Lorentz) even when people didn't appreciate this symmetry - and with general relativity.

However, in the 19th century, it wasn't possible to realistically describe matter in terms of fields. So people believed in some kind of a dual description whose basic degrees of freedom included both coordinates of particles as well as fields. These two portions of reality had to interact in both ways.

(To reveal it in advance, quantum field theory finally unified particles and fields. All particles became just quanta of fields. Some fields, such as the Dirac field, used to be known only through their particles - the electrons - while others were originally known as fields and not particles - such as the electromagnetic field, whose photons were only appreciated later.)

However, the electromagnetic field induced by a charged point-like particle is kind of divergent. In particular, the electrostatic potential goes like "phi=Q/r" in certain units. The total energy of the field may either be calculated as the volume integral of "E^2/2", or as a volume integral of "phi.rho". The equivalence may be easily shown by integration by parts.
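For completeness, here is that integration-by-parts step for a static field, with the factor of one half written explicitly and in units where the Gauss law reads "div E = rho" (the conventions the omitted numerical factors correspond to):

\[
\int \frac{\vec E^{\,2}}{2}\, d^3x
\;=\; \int \frac{(\nabla\phi)^2}{2}\, d^3x
\;=\; -\int \frac{\phi\,\nabla^2\phi}{2}\, d^3x
\;=\; \int \frac{\phi\,\rho}{2}\, d^3x ,
\]

where the surface term has been dropped and "nabla^2 phi = -rho" was used in the last step.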

Whatever formula you choose, it's clear that point-like sources will carry an infinite energy. For example, the electrostatic field around the point-like source goes like "E=Q/r^2", its square (over two) is "E^2/2 = Q^2 / 2r^4", and this goes so rapidly to infinity for small "r" that even if you integrate it over volume, with the "4.pi.r^2.dr" measure, you still get an integrand going like "1/r^2", integrated over "r" from zero to infinity. The integral diverges at the lower end - as "1/r_0" where "r_0" is the cutoff minimum distance.

If you assume that the source is really point-like, "r_0=0", its energy is divergent. That was a very ugly feature of the combined theory of fields and point-like sources.
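Just to see this scaling numerically, here is a minimal sketch (Gaussian-like units with "Q=1"; the function name is mine, chosen only for illustration) that integrates the field energy outside a cutoff radius "r_0" and confirms the "1/r_0" divergence:

import numpy as np
from scipy.integrate import quad

def field_energy_outside(r0, Q=1.0):
    # integrate (E^2/2) * 4*pi*r^2 dr with E = Q/r^2, from r0 to infinity
    integrand = lambda r: 0.5 * (Q / r**2) ** 2 * 4.0 * np.pi * r**2
    value, _ = quad(integrand, r0, np.inf)
    return value

for r0 in [1.0, 0.1, 0.01, 0.001]:
    # the numerical result matches the analytic answer 2*pi*Q^2/r_0 and blows up as r0 -> 0
    print(r0, field_energy_outside(r0), 2.0 * np.pi / r0)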

Because the classical theory based on particles and fields was otherwise so meaningful and convincing - a clear candidate for a theory of everything, as Lord Kelvin and others appreciated much more than Newton and much more than they should have :-) - it was natural to try to solve the last tiny problems with that picture. The divergent electrostatic self-energy was one of them.

You can try to remove it by replacing the point-like electrons with finite lumps of matter (non-point-like elementary objects), by adding non-linearities (some of them may create electron-like "solitons", smooth localized solutions), by the Wheeler-Feynman absorber theory, and by other ideas. However, all of these ideas may be seen to be redundant or wrong if you get to quantum field theory which really explains what's going on with these particles and interactions.

On the other hand, each of these defunct ideas has left some "memories" in modern physics. While the ideas were wrong solutions to the original problem, they haven't quite disappeared.

First, one of the assumptions that was often accepted even by the physicists who were well aware of the quantum phenomena was that one has to "fix" the classical theory's divergences before one quantizes it. You shouldn't quantize a theory with problems.

This was an arbitrary irrational assumption that couldn't have been proved. The easiest way to see that there was no proof is to notice that the assumption is wrong and there can't be any proofs of wrong statements. ;-) The actual justification of this wrong assumption was that the quantization procedure was so new and deserved such a special protection that you shouldn't try to feed garbage into it. However, this expectation was wrong.

More technically, we can see that the quantum phenomena "kick in" long before the problems with the divergent classical self-energy become important. Why?

It's because, as we have calculated, the self-energy forces us to impose a cutoff "r_0" at a distance such that
Q^2 / r_0 = m.
I have set "hbar=c=1", something that would be easy and common for physicists starting with Planck, and I allowed the self-energy to actually produce the correct rest mass of the electron, "m". (There were confusions about a factor of 3/4 in "E=mc^2" for a while, confusions caused by the self-energy considerations, but let's ignore this particular historical episode because it doesn't clarify too much about the correct picture.)

The purely numerical factors such as "1/4.pi.epsilon_0" are omitted in the equation above. "Q" is the elementary electron charge.

At any rate, the distance "r_0" that satisfies the condition above is "Q^2/m", so it is approximately 137 times shorter than "1/m", the Compton wavelength of the electron: recall the value of the fine-structure constant. This distance is known as the classical radius of the electron.

But because the fine-structure constant is (much) smaller than one, the classical radius of the electron is shorter than the Compton wavelength of the electron. So as you approach shorter distances, long before you get to the "structure of the electron" that is responsible for its finite self-energy, you hit the Compton wavelength (something in between the nuclear and atomic radius) where the quantum phenomena cannot be neglected.
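To put numbers on these two length scales, here is a small sketch using SI values and the standard formulas "r_e = e^2/(4.pi.eps_0.m.c^2)" for the classical radius and "hbar/(m.c)" for the (reduced) Compton wavelength:

import math

hbar = 1.054571817e-34   # J*s
c = 2.99792458e8         # m/s
m_e = 9.1093837015e-31   # kg
e = 1.602176634e-19      # C
eps0 = 8.8541878128e-12  # F/m

r_classical = e**2 / (4.0 * math.pi * eps0 * m_e * c**2)   # classical electron radius
lambda_compton = hbar / (m_e * c)                          # reduced Compton wavelength

print(r_classical)                    # roughly 2.8e-15 m
print(lambda_compton)                 # roughly 3.9e-13 m
print(lambda_compton / r_classical)   # roughly 137, the inverse fine-structure constant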

So the electron's self-energy is a "more distant" problem than quantum mechanics. That's why you should first quantize the electron's electric field and then solve the problems with the self-energy. That's of course what quantum field theory does for you. It was premature to solve the self-energy problem in classical physics because in the real world, all of its technical features get completely modified by quantum mechanics.

This simple scaling argument shows that all the attempts to regulate the electron's self-energy in the classical theory were irrelevant for the real world, to say the least, and misguided, to crisply summarize their value. But let's look at some of them.

Born-Infeld model

One of the attempts was to modify physics by adding nonlinearities in the Lagrangian. The Born-Infeld model is the most famous example. It involves a square root of a determinant that replaces the simple "F_{mn}F^{mn}" term in the electromagnetic Lagrangian. This "F^2" term still appears if you Taylor-expand the square root.
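Schematically - conventions and normalizations differ between references, so take this as a reminder of the structure rather than a precise definition - the Born-Infeld Lagrangian and its weak-field expansion look like

\[
\mathcal{L}_{BI} \;=\; b^2\left(1-\sqrt{-\det\left(\eta_{mn}+\tfrac{1}{b}F_{mn}\right)}\right)
\;=\; -\tfrac{1}{4}\,F_{mn}F^{mn} \;+\; O\!\left(F^4/b^2\right),
\]

so at weak fields one recovers Maxwell's theory while the parameter "b" sets the scale at which the higher-power corrections in "F" become important.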

Using a modern language, this model adds some higher-derivative terms that modify the short-distance physics in such a way that there's a chance that the self-energy will become finite. Don't forget that this whole "project proposal" based on classical field theory is misguided.

While this idea wasn't helpful in solving the original problem, the particular action is "sexy" in a certain way and it has actually emerged from serious physics - as the effective action for D-branes in string theory. But I want to look at another school, the Wheeler-Feynman absorber theory.

The Wheeler-Feynman absorber theory

The Wheeler-Feynman absorber theory starts with a curious observation about the time-reversal symmetry.

Both authors of course realized that the macroscopic phenomena are evidently irreversible. But the microscopic laws describing very simple objects - such as one electron - should be and are T-invariant. They don't change if you exchange the past and the future. This symmetry was easily seen in the equations of mechanics or electromagnetism (and its violations by the weak force were unknown at the time).

However, there exists a valid yet seemingly curious way to calculate the field around moving charged objects. It is based on the Liénard–Wiechert potentials.

A static charge has a "1/r" potential around it. But how do we calculate the field if the charge is moving? Well, the answer can actually be written down explicitly. The potential (including the vector potential "A") at a given moment "t" in time may be calculated as the superposition of the potentials induced by other charges in the Universe.

Each charge contributes something like "Q/r" to the potential at a given point - multiplied by the velocity of the source if you are calculating the vector potential. But the funny feature is that you must look at where the charges were in the past - exactly at the retarded moment such that a signal from that source, moving at the speed of light, arrives at the point "x, t" whose "phi, A" you're calculating.

So the potential "phi(t)" (and "A(t)") depends on the charge densities "rho(t-r/c)" (and the currents "j(t-r/c)") where "c" is the speed of light and "r" is the distance between the source and the probe. One can actually write this formula down exactly. It's simple and it solves Maxwell's equations.
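As a concrete illustration, here is a minimal numerical sketch of the retarded prescription for a single charge, in units with "c=1" and with the "4.pi.epsilon_0" factor dropped as above. The trajectory, the field point, and the function names are mine, chosen only for illustration:

import numpy as np
from scipy.optimize import brentq

def source_position(t):
    # a charge oscillating slowly along the x axis (speed well below c = 1)
    return np.array([0.1 * np.sin(t), 0.0, 0.0])

def source_velocity(t):
    return np.array([0.1 * np.cos(t), 0.0, 0.0])

def retarded_time(x, t):
    # solve  t - t_ret = |x - x_source(t_ret)|  for the emission time t_ret
    f = lambda tr: (t - tr) - np.linalg.norm(x - source_position(tr))
    return brentq(f, t - 100.0, t)    # the signal left at most 100 time units ago

def scalar_potential(x, t, Q=1.0):
    tr = retarded_time(x, t)
    R_vec = x - source_position(tr)
    R = np.linalg.norm(R_vec)
    beta = source_velocity(tr)
    # Lienard-Wiechert scalar potential: Q / (R - R_vec . beta), evaluated at the retarded time
    return Q / (R - np.dot(R_vec, beta))

print(scalar_potential(np.array([5.0, 0.0, 0.0]), t=0.0))   # close to the static value Q/r = 0.2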

But you may see that this Ansatz for the solution violates the time-reversal symmetry. Why are we looking to the past and not to the future? Obviously, the field configuration where "t-r/c" is replaced by "t+r/c" is a solution, too. It's just a little bit counterintuitive.

Both of the expressions, the advanced and the retarded ones, solve Maxwell's equations with the same sources. That's not too shocking because their difference solves Maxwell's equations with no sources. While this electromagnetic field (the difference) has no sources, it still depends on the trajectories of charged particles - it projects "past minus future" light cones around every point that the charged particle ever visited.

If you look at it rationally, the difference between the two ways how to write the solution is just a subtlety about the way how to find the solutions. The "homogeneous" part of the solution to the differential equation is ultimately determined by the initial conditions for the electromagnetic field.

But Wheeler and Feynman believed that some subtlety about the combination of retarded and advanced solutions was the right mechanism that Nature uses to get rid of the divergent self-energy. As far as I can tell, this argument of theirs has never made any sense. Even in the classical theory, there would still be the divergent integral of "E^2" that I discussed previously. So in the best case, they found a method to isolate the divergent part and to subtract it - a classical analogue of the renormalization procedure in quantum field theory.

However, these Wheeler-Feynman games soon made Feynman look at quantum mechanics from a "spacetime perspective". This spacetime perspective ultimately made Lorentz invariance much more manifest than it was in Heisenberg's and especially Schrödinger's picture. Moreover, the resulting path integrals and Feynman diagrams contain several memories of the original Wheeler-Feynman "retrocausal" motivation:
  • the Feynman propagators, extending the retarded and advanced electromagnetic potentials, are half-retarded, half-advanced (a semi-retrocausal prescription; see the schematic momentum-space formulas after this list)
  • antiparticles in Feynman diagrams are just particles with negative energies that propagate backwards in time
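For concreteness, here is how the prescriptions differ for a free scalar field in momentum space (a schematic reminder only; signs and metric conventions vary between textbooks). The retarded, advanced, and Feynman propagators share the same poles and differ only in how the poles are displaced by an infinitesimal "i.epsilon":

\[
G_{\rm ret}(p)=\frac{1}{(p^0+i\epsilon)^2-\vec p^{\,2}-m^2},\qquad
G_{\rm adv}(p)=\frac{1}{(p^0-i\epsilon)^2-\vec p^{\,2}-m^2},\qquad
G_F(p)=\frac{1}{p^2-m^2+i\epsilon}.
\]

The Feynman prescription treats the positive- and negative-frequency parts symmetrically, which is what the "half-retarded, half-advanced" comment above refers to.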
However, quantum field theories such as QED are perfectly time-reversal symmetric at the microscopic level. As always, it doesn't mean that you're only allowed to observe time-reversal symmetric phenomena. Also, it doesn't mean that all your calculations have to be time-reversal-symmetric at every step.

Quite on the contrary. Calculations that are not time-reversal symmetric are very useful. And macroscopic phenomena that violate the time-reversal symmetry are no exceptions: they're the rule because of the second law of thermodynamics. Obviously, this law applies to all systems with a macroscopically high entropy and electromagnetic waves are no exception. In this context, the second law makes the waves diffuse everywhere.

The point about the breaking of symmetries by physics and calculations is often misunderstood by the laymen (including those selling themselves as top physicists). Many people think that if a theory has a symmetry, every history allowed by the theory must have the same symmetry and/or every calculation of anything we ever make is obliged to preserve the symmetry. This nonsensical opinion lies at the core of the ludicrous statements e.g. that "string theory is not background independent".

Retrocausality of the classical actions

Feynman's path integral formalism may also be viewed as the approach to quantum mechanics that treats the whole spacetime as a single entity. The slices of spacetime at a fixed "t" don't play much of a role in this approach. That's also why two of Feynman's major path integral papers have the following titles:
Space-time approach to non-relativistic quantum mechanics

Space-time approach to quantum electrodynamics
This "global" or "eternalist" perspective also exists in the classical counterpart of Feynman's formulation, namely in the principle of least action. Mephisto has complained that the principle is "retrocausal" - that you're affected by the future. Why? The principle says:
The particle (or another system) is just going to move in such a way that the action evaluated between a moment in the past and a moment in the future is extremized among all trajectories with the same endpoints.
At the linguistic level, this principle requires you to know lots about the particle's behavior in the past as well as in the future if you want to know what the particle is going to do right now. Such a dependence on the future would violate causality, the rule that the present can only be affected by the past but not by the future.

However, if you analyze what the principle means mathematically, you will find out that it is exactly equivalent to differential equations for "x(t)", to a variation of the well-known "F=ma" laws. You need to know the mathematical tricks of the variational calculus that Lagrange greatly improved. But if you learn this variational calculus, the equivalence will be clear to you.
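To see the equivalence in the simplest case - a textbook computation rather than anything specific to this post - take a single particle in a potential:

\[
S[x]=\int_{t_1}^{t_2}\!\left(\tfrac{1}{2}m\dot x^2 - V(x)\right)dt,\qquad
\delta S = 0 \;\Longrightarrow\; \frac{d}{dt}\frac{\partial L}{\partial \dot x}-\frac{\partial L}{\partial x}=0
\;\Longrightarrow\; m\ddot x = -V'(x) = F .
\]

The extremality condition collapses to a local differential equation: what the particle does at the time "t" only depends on the state of affairs at that very moment, not on the future endpoint.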

So while the principle "linguistically" makes the present depend both on the past and the future - here I define a linguist as a superficial person who gets easily confused by what the words "seem" to imply - the correct answer is that the laws described by the principle of least action actually don't make the present depend on the future.

The situation is analogous at the quantum level, too. You may sum amplitudes for histories that occur in the whole spacetime. But the local observations will always be affected just by the facts about their neighborhood in space and in time. To know whether there's any acausality or non-locality in physics, you have to properly calculate it. The answer of quantum field theory is zero.

Self-energy in quantum field theory

The picture that got settled in quantum field theory - and of course, string theory doesn't really change anything about it - is that the time-reversal symmetry holds exactly at the fundamental level (up to the tiny violations by the weak interactions mentioned above); antiparticles may be related to holes in the sea of particles with negative energy, which is equivalent to their moving backward in time (creation is replaced by annihilation).

And what about the original problem of self-energy? It has reappeared in the quantum formalism but it was shown not to spoil any physical predictions.

Various integrals that express Feynman diagrams are divergent but the divergences are renormalized. All of the predictions may ultimately be unambiguously calculated by setting a finite number of types of divergent integrals to the experimentally measured values (mass of the electron, the fine-structure constant, the vanishing mass of the photon, the negligible vacuum energy density, etc.).

Equivalently, you may also think of the removal of the divergent contributions as an effect of the "counterterms". The energy of some objects simply gets a new, somewhat singular-looking contribution. When you add it to the divergent "integral of the squared electric field", the sum will be finite.

In some sense, you don't have to invent a story "where the canceling term comes from". What matters is that you can add the relevant terms to the action - and you have to add them to agree with the most obvious facts about the observations e.g. that the electron mass is finite.
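A toy sketch of this logic - not QED, and the functional form of the "loop" below is invented purely for illustration: the bare mass, i.e. the counterterm, depends on the cutoff and even diverges as the cutoff is removed, but once it's fixed by the single measured value of the electron mass, the prediction at any other energy no longer depends on the cutoff at all.

import numpy as np

g2 = 0.01            # a toy coupling
m_measured = 0.511   # MeV, the measured electron mass

def self_energy(cutoff, E):
    # toy cutoff-dependent "loop" contribution probed at energy E
    return g2 * np.log(cutoff / E)

def bare_mass(cutoff):
    # counterterm: chosen so that the mass measured at E = m_measured comes out right
    return m_measured - self_energy(cutoff, m_measured)

def predicted_mass(cutoff, E):
    # a "prediction" at another energy: bare mass plus the loop piece
    return bare_mass(cutoff) + self_energy(cutoff, E)

for cutoff in [1e3, 1e6, 1e12]:     # push the cutoff towards infinity
    # the bare mass runs away, but the prediction stays the same
    print(cutoff, bare_mass(cutoff), predicted_mass(cutoff, E=10.0))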

There are different ways how to "justify" that the infinite pieces are removed or canceled by something else. But if you still need lots of these "justifications", you either see inconsistencies that don't really exist; or you are not thinking about these matters quite scientifically. Why?

Because science dictates that it's the agreement of the predictions with the observations that all justifications ultimately have to boil down to. And indeed, the robust computational rules of quantum field theory - with a finite number of parameters - can correctly predict pretty much all experiments ever done with elementary particles. So if you find its rules counterintuitive or ugly, there exists huge empirical evidence that your intuition or sense of beauty is seriously impaired. It doesn't matter what your name is and how pretty you are, you're wrong.

Maybe, if you spend a little bit more time with the actual rules of quantum field theory, you will improve both your intuition and your aesthetic sense.

However, the insights about the Renormalization Group in the 1970s, initiated by Ken Wilson, brought a new, more philosophically complete understanding of what was going on with the renormalization. It was understood that the long-distance limit of many underlying theories is often universal and described by "effective quantum field theories" that usually have only a finite number of parameters ("deformations").

The calculations that explicitly subtract infinities to get finite results are just the "simplest" methods to describe the long-distance physical phenomena in any theory belonging to such a universality class. In this "simplest" approach, it is just being assumed that the "substructure that regulates things" appears at infinitesimally short distances. The long-distance physics won't depend on the short-distance details - an important point you must appreciate. It really means that 1) you don't need the unknown short-distance physics, 2) you can't directly deduce the unknown short-distance physics from the observations, 3) you may still believe that the short-distance physics would match your sense of beauty.

You can still ask: what is the actual short-distance physics? As physics became fully divided according to the distance/energy scale in the 1970s, people realized that to address questions at shorter distances, you need to understand processes at higher energies (per particle) - either experimentally or theoretically. So it's getting hard.

But it's true that the only "completions" of divergent quantum field theories that are known to mankind are quantum field theories and string theory (the latter is necessary when gravity in 3+1 dimensions or higher is supposed to be a part of the picture). Any claim that some "new kind of a theory" (beyond these two frameworks) could reduce to the tested effective quantum field theories at long distances is extremely bold and requires extraordinary evidence.