Another blog post: When you're finished with this blog entry, continue with a more detailed one: Why the sum of integers is equal to –1/12

A typical example of a mathematical fact that the anti-talents in theoretical physics can't ever swallow is the set of identities that appear in various regularizations: we will mainly talk about the zeta-function regularization applied to the sum of positive integers.
First, let's ask: How much is
S = 1 + 1/2 + 1/4 + 1/8 + ... ?

Everyone who knows some maths will tell you that if you multiply a geometric series by "(1-q)", you obtain one. In this case, if you multiply "S" by "(1-1/2)", all terms cancel in pairs except for the "1" that is left. "S" must thus be "2" because "2" times "1/2" equals one.
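To see the cancellation trick numerically, here is a minimal sketch - plain Python, my choice for illustration only: the partial sums creep toward the closed form "1/(1-q) = 2".

```python
# Partial sums of S = 1 + 1/2 + 1/4 + 1/8 + ... versus the closed form 1/(1 - q).
q = 0.5
partial_sum = 0.0
term = 1.0
for n in range(30):
    partial_sum += term
    term *= q

print(partial_sum)       # ~2.0 after 30 terms
print(1.0 / (1.0 - q))   # 2.0, the value obtained from the (1 - q) cancellation
```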
Now, how much is

S = 1 + 10 + 100 + 1000 + ... ?

Well, most people will tell you that it is infinity, it makes no sense, it diverges, it is not even wrong, and so forth. Fair enough, at least in the context of mathematics. Some creative kids will tell you that the sum is "...11111", whatever that is.
But imagine that you obtain this sum as a result of a legitimate scientific calculation that is supposed to be relevant for natural phenomena - this situation occurs every day when you're a physicist. Moreover, you are told that the experimenters have measured a finite answer - which is what they usually do.
In other words, what will you do if the sum above appears in the context of physics whose goal is to predict finite results of experiments rather than to philosophize about the relations of infinities and God, to brainwash stupid laymen with the thesis that science is not even wrong, as an infamous crackpot likes to do, or to sketch meaningless infinite sequences of ASCII characters? You simply have to get a finite, real result.
What will a scientist do? Well, he will realize that if you multiply the sum above by "(1-10) = -9" and use the distributive law, all terms except "1" will cancel in this case, too. That means that "S" must be equal to "-1/9". What I am saying here is that a physicist will be ready to use the formula "1/(1-q)" for divergent sums, too.
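As a small illustration of this attitude - a sketch in Python with sympy, which is just my choice of tool here - you can let a computer algebra system find the closed form in the convergent regime and then keep using it at "q = 10":

```python
import sympy as sp

q, n = sp.symbols('q n')

# Sum the geometric series symbolically; sympy returns 1/(1 - q) for |q| < 1 (as a Piecewise).
print(sp.summation(q**n, (n, 0, sp.oo)))

# The physicist's step: keep the analytic expression 1/(1 - q) even where the series diverges.
closed_form = 1 / (1 - q)
print(closed_form.subs(q, sp.Rational(1, 2)))  # 2, agreeing with the convergent sum
print(closed_form.subs(q, 10))                 # -1/9, the regularized value of 1 + 10 + 100 + ...
```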
Theoretical physics - because it is a natural science - has a different set of wisdoms about what to do with seemingly meaningless expressions than conventional mathematics has. Sums and integrals in physics mean something other than a prescription for a mechanical algorithm. Instead, they encode "natural" sums and integrals that are supposed to be evaluated by Nature. And She always likes to return a meaningful finite answer. From Her viewpoint, the people who rant about divergences, infinities, and not even wrong things are just looking at the sum too naively, without using some necessary powerful tools.
When a physicist writes an integral, she usually doesn't care whether you use the Lebesgue integral or the Riemann integral. For a physicist, these two and other definitions of an integral are just man-made caricatures to calculate some expressions in practice and to give them a rigorous meaning in a particular system of conventions.
That's not exactly what a physicist means by the integral. A physicist always means nothing else than Nature's integral that coincides with the Riemann and Lebesgue integral in most well-behaved situations. But whenever there is something unusual about the integral, we must leave it up to Nature - not Riemann or Lebesgue - to decide what is the right thing to do with the integral. And we must learn the answer from Her, rather than Riemann or Lebesgue. And indeed, Her answer is often different and brings some additional flavor and rules to calculate. This fact about theoretical physics is virtually impenetrable for most laymen and even for most mathematicians.
Sum of integers
The geometric series was a simple example. There exists a more important example in physics,
S = 1 + 2 + 3 + 4 + ...

This sum appears in many places in perturbative string theory. For example, it determines the mass of the tachyon or the critical dimensions of string theory. The well-known result is

S = -1/12

That's already enough for many physics anti-talents to argue that string theory is not even wrong and it surely can't be tested, and so forth. However, what I haven't told you so far is that the same sum also appears in the calculation of the Casimir effect that has, in fact, been experimentally measured. The measurement - an experiment - confirms that this sum is equal to "-1/12". Fine, so let's avoid further general clichés and accept the fact that the people who say that theoretical physics is not even wrong are just a waste of time and their writing is spam - one that can't even cure their readers' impotence.
You might still be left with some uncertainty about the result. We will be asking three general questions:
- How can the result be derived?
- What's the difference between correct derivations and wrong derivations?
- What is the actual relationship between the "finite" and "infinite" answers? Is there a contradiction?
Derivations of the correct result
Concerning the first question, the answer is that there are actually many correct ways to derive the right result. There are also many incorrect ways to derive a wrong result but we don't need to discuss these because a generic creative but uninformed reader is surely able to design one of those. ;-)
For example, a creative, convincing, but still wrong thing is to say that "1+2+3+..." is equal to "(1+1+1+...)^2" - just draw dots in an infinite quadrant. The main problem with the result for the sum of integers obtained in this way, namely "(+1/4)" - the square of the regularized value "1+1+1+... = -1/2" - is that it is wrong.
One correct way to proceed is to generalize the sum to a more general expression, the Riemann zeta function
zeta(s) = 1^{-s} + 2^{-s} + 3^{-s} + ...
The original sum is "S=zeta(-1)" as you can easily see. What's funny is that the formula for "zeta(s)" is perfectly convergent if the real part of "s" is greater than "1". The resulting sum is a meromorphic (analytic) function of the complex variable "s" and there exists a canonical method - the analytic continuation - to extend such a function to general complex values of "s". In the case of the zeta function, the result is unique and "zeta(-1)" happens to be "-1/12".
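You don't have to take the continuation on faith: computer algebra systems already carry the analytically continued zeta function. A minimal sketch, assuming Python with sympy purely as an illustration:

```python
import sympy as sp

# For Re(s) > 1 the defining series converges, e.g. zeta(2) = 1 + 1/4 + 1/9 + ... = pi^2/6.
print(sp.zeta(2))    # pi**2/6

# The analytic continuation assigns a unique finite value at s = -1:
print(sp.zeta(-1))   # -1/12, the regularized value of 1 + 2 + 3 + ...
```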
Another method adds a regulator. Compute a more general sum
S' = e + 2 e^2 + 3 e^3 +4 e^4 + ...
where "e" is a number that is equal to "exp(-epsilon)" where "epsilon" is very small. Thanks to Jirka for a fix; my original S' was "e" times smaller, starting with "1+2e". For "epsilon=0", you reproduce the original sum "S". For a finite positive "epsilon", however, the sum converges. When you sum it up and expand in powers of epsilon, you obtain
S' = 1/epsilon^2 - 1/12 + o(1).
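For the record, this expansion is a one-liner in a computer algebra system. The sketch below - Python with sympy assumed, just as an illustration - starts from the closed form of the sum in the convergent regime, "S' = e/(1-e)^2" with "e = exp(-epsilon)", and expands it around "epsilon = 0":

```python
import sympy as sp

eps = sp.symbols('epsilon', positive=True)

# Closed form of S' = e + 2e^2 + 3e^3 + ... with e = exp(-epsilon), valid for epsilon > 0.
e = sp.exp(-eps)
S_prime = e / (1 - e)**2

# Laurent expansion around epsilon = 0: epsilon**(-2) - 1/12 + ...
print(sp.series(S_prime, eps, 0, 3))

# The finite part that survives after subtracting the 1/epsilon^2 divergence:
print(sp.limit(S_prime - 1 / eps**2, eps, 0))   # -1/12
```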
The first divergent term can be and must be removed by an addition of a "local counterterm". That's a technical term for the fact that it can be handwaved away by very rigorous arguments. I say "handwaved" because only experts are capable of understanding how these arguments work and deciding whether they're correct. Believe me or not.
Although the leading term is divergent, its "natural" value is actually zero because it can be and must be consistently removed with certain well-defined rules of "consistent removals". If you care, the total quantity that must be zero is the vacuum energy density and it must be zero because we require that the full theory is scale-invariant. The value of "o(1)" for "epsilon=0" is zero even outside the realm of natural sciences so what is left is "-1/12".
There are many other methods but the simplest one goes back to Euler. It's so simple that this blog article will give you the full derivation. You first relate "S" to a similar quantity
T = 1 - 2 + 3 - 4 + 5 - ...
that has alternating signs. First of all, this sum can be calculated using Taylor expansions because
(1+x)^{-2} = 1 - 2x + 3x^2 - 4x^3 + ...
Substitute "x=+1" (exactly on the edge of the disk of convergence) and you get the previous sum as well as its result, "T=+1/4". Now, when you know "T", it is easy to get "S" because
T = (1+2+3+...) - 2 x (2+4+6+...) =
... = (1+2+3+...) x (1 - 4) = -3S
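If you distrust these manipulations, both steps can be checked in a computer algebra system. A minimal sketch - Python with sympy, again just my choice of tool: Abel-sum the alternating series by taking "x" to "1" in "(1+x)^{-2}", and then solve "T = -3S".

```python
import sympy as sp

x, S = sp.symbols('x S')

# (1+x)**(-2) = 1 - 2x + 3x**2 - 4x**3 + ... inside the disk of convergence:
print(sp.series((1 + x)**(-2), x, 0, 5))

# Abel summation: approach the edge of the disk, x -> 1, from inside. T = 1/4.
T = sp.limit((1 + x)**(-2), x, 1, '-')
print(T)                              # 1/4

# The rearrangement above gives T = S - 4S = -3S; solve it for S.
print(sp.solve(sp.Eq(T, -3 * S), S))  # [-1/12]
```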
The equation "T=-3S" is solved by "S=-1/12". Fine. There are also wrong methods to get wrong results, usually involving some "forbidden" transfers of values from one term to another. People who have neither good physics intuition nor the experimental results will easily end up with a wrong calculation. So let us ask the second question:
How do we distinguish wrong methods from correct methods?
Well, the correct methods always lead to the correct result, namely "S=-1/12", that can be experimentally tested. Is there a purely theoretical way to decide, one that could be used for experimentally inaccessible sums and integrals? Yes, there is. In the context of removing the "1/epsilon^2" term, everything is about a regulator that will generate finite results and whose infinite part can be subtracted by a local counterterm. When you do things right, it is guaranteed that you will never end up with a wrong result - and whether you will end up with the right result depends on your skills and patience.
The zeta-function regularization has another feature, beyond the "locality of counterterms", that makes it special: it preserves the conformal symmetry and the modular invariance. A generic method in which you "redistribute" parts of the terms would violate this symmetry. The zeta-function approach is analogous to dimensional regularization: instead of a general complex dimensionality, we work with a general number of derivatives in the expression for the worldsheet energy, if you care.
The sum of positive integers happens to have a unique finite result - one of the signs that the underlying theory (string theory) has no adjustable parameters. In particle physics, we often deal with integrals whose result is calculable but depends on a finite number of parameters extracted from experiments - such as the fine-structure constant. The reason why the sum of positive integers is so unique is that we have dealt with a free theory in this case and the only local counterterm that we were adding was the vacuum energy density.
At any rate, I want to assure you that physicists know the correct rules that allow them to identify an illegitimate step in a wrong calculation - or to demonstrate that a correct calculation is legitimate. As a theoretical physicist gets mature, these methods become a part of her skills. They are not contained in general lectures on mathematics for mathematicians because what we need here is mathematics for physicists that simply follows different rules in these "exceptional" cases.
However, they are rules that are still very well-defined and accurate. And incidentally, they are confirmed experimentally. Theoretical physicists are able to do similar steps very quickly and automatically while the anti-talents are not even able to understand that they're missing some knowledge that is necessary to do advanced physics calculations. Once again: please, try to understand that these things are necessary for physics and they follow clear rules even though these rules are not taught in colleges or at lower levels of education.
Relationships between the mathematicians' and physicists' understanding of such sums
We are getting to the final question: is there any contradiction between the mathematician's "infinite" answer and the physicist's "finite" answer? No, there is none: the two occupations really mean somewhat different things by the "sum" whenever the sum diverges. The difference is analogous to the difference between the Riemann integrals and Nature's integrals that we described previously.
But that's not the main thing I want to say in this section. What I want to say is that physicists in general and string theorists in particular are natural scientists whose main focus is the ultimate finite prediction of the results of experiments. There are usually many ways to obtain the correct finite result and the particular procedure that leads to the correct result is, strictly speaking, not a part of physics even though a physicist must of course learn or find at least one correct way to get where he wants to be. ;-)
This separation differs from the approach of mathematicians who are not able to divide things into "physical" and "unphysical" because all of their reasoning is supposed to be disconnected from any perceptions: all of their reasoning should be "unphysical". The mathematicians' convention to describe their situation would probably be the opposite one, i.e. to say that all results of calculations as well as all intermediate results are "physical". ;-)
What's important is that there is no division of pure mathematical results into physical or unphysical ones because all results in pure mathematics are either correct or wrong and none of them can be measured.
That's why mathematicians, much like the laymen, have a lot of problems understanding why physicists can use many different methods to obtain the ultimate results and why the intermediate results seem so ambiguous. They would be asking: so is the analytic continuation and/or the removal of the "1/epsilon^2" divergent term a part of the result? A mathematician may be stuck with this question but a physicist doesn't care. This question is simply not a physical one. There are different procedures to find the correct result but it is only the final result that is in principle measurable. Only such an answer may be viewed as the physicist's answer. Everything else may depend on conventions and it often does.
The fact that different regularizations lead to the same final results is a priori non-trivial but can be mathematically demonstrated to be inevitably true by the tools of the renormalization group.
The sum of integers can be computed in many ways that superficially look very different but their answers coincide. That shows that there is something very robust beneath all of these correct calculations: something inherently physical that all of them agree upon. A similar conclusion holds for results calculated in dual or equivalent descriptions of string theory. The final predictions of experimentally measurable quantities are identical even though the calculations look very different.
The oldest example of such a non-trivial equivalence was the identical predictions of quantum mechanics calculated from Heisenberg's matrix mechanics vs those from Schrödinger's wave mechanics. The equivalence was soon proved by Dirac.
In the case of string dualities, we don't have a unified framework analogous to the renormalization group or Dirac's brackets that would allow us to prove all dualities at the same moment. In various descriptions, some dualities may be proven (e.g. in Matrix theory) but others can't. This fact is what we mean by saying that we don't have a background-independent description of string theory or, as we often misleadingly say, "we don't know what string theory is". This statement doesn't mean that we don't know something about physics of string theory in a particular situation but rather that we don't know what principle, if any, unifies all allowed situations in string theory. Such a principle would have to be independent of any particular computational technique in string theory or its Lagrangian definition.
Despite this absence of a simple and universal proof, the following fact is important. Every time we have several very different ways to obtain the same accurate and quantitative result, it always counts as a highly non-trivial consistency check that indicates that we haven't made a mistake and that the calculation was more than a mechanical masturbation. Even though physics anti-talents may think that it is bad that none of the calculations is "more canonical" than all others, theoretical physicists know very well that it is always a virtue, not a disadvantage, to have many procedures that lead to the same physical predictions.
As we approach ever deeper theories, from ordinary quantum mechanics to quantum field theory to string theory, the collection of seemingly inequivalent ways to obtain the same physical results becomes increasingly diverse. That's one of the reasons why we think that we have a more complete understanding of both physics and of the network of mathematical ideas that are relevant for physics.
And that's the memo.