
Composite vs elementary particles

Tommaso Dorigo wrote a text about some limits on compositeness. You don't have to read it: the experiments themselves are not particularly surprising and their punch line is, of course, that no internal substructure of quarks and leptons has been seen by the colliders as of today.

Tommaso Dorigo's comrade Vladimir Lenin believed that an electron was a galaxy with many electrons, and so on, indefinitely. That's what he meant by the statement that matter was "inexhaustible". This hierarchical picture of the Matryoshka Universe was clearly indefensible already during Lenin's lifetime. First, it has to stop at the Planck scale because distances shorter than the Planck scale are unphysical, or at least don't follow the normal laws of geometry. Second, two electrons must be exactly identical for chemistry to work, so they can't carry any substructure that would be as variable as that of a galaxy (or carry a high entropy; the electron's entropy must be zero).

The characteristic energy scale at which compositeness could still be compatible with the existing experiments is something like 5 TeV (or higher): it means that if quarks and leptons are composite, their internal pieces have to be really close to each other, closer than 1/(5 TeV) in the "hbar=c=1" units. Unless unexpected things happen, the LHC will just improve these limits.
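
To make that conversion concrete, here is the standard back-of-the-envelope estimate (a rough sketch using hbar*c ≈ 197 MeV·fm; the 5 TeV figure is just the bound quoted above):

\[ \frac{\hbar c}{5\ {\rm TeV}} \approx \frac{197\ {\rm MeV\cdot fm}}{5\times 10^{6}\ {\rm MeV}} \approx 4\times 10^{-5}\ {\rm fm} \approx 4\times 10^{-20}\ {\rm m}, \]

i.e. any hypothetical constituents would have to be packed into a region tens of thousands of times smaller than a proton.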

Compositeness has been a traditional and repeatedly successful type of insight in physics but when you repeat some idea many times, it ceases to be revolutionary. It is getting pretty boring. And frankly speaking, there exists no convincing reason - theoretical or experimental - why the quarks and leptons in the Standard Model should be composite, i.e. composed of smaller particles. Instead, there exist very good reasons to expect that no further compositeness of the old kind will be found.

History of compositeness

Let me review a brief history of compositeness.

In ancient Greece, many philosophers thought that everything was made out of five classical elements: air, water, fire, and earth, plus the aether in the middle. They invented this theory by pure thought, which is great and, in principle, a conceivable method, except that they heavily overestimated their ability to guess the correct answer. ;-) Four of the elements were highly composite (note that they naturally combine into "opposite" pairs) while the middle one, the aether, didn't exist at all. :-)

Democritus' atomist school hypothesized that matter was composed of atoms. That wasn't such a huge discovery - it's really one of the two possible answers: matter can either be continuous or not. The shapes they attributed to the "indivisible" atoms were strange and contrived, but kind of practical - balls with hooks to make the interactions easy and diverse. :-)

But the idea was qualitatively correct. Once alchemy was supplemented with some rational thinking and with careful work, chemistry was born. People learned that the mixing ratios were nice rational numbers, in the proper units. At the microscopic level, the compounds were made out of molecules, and each molecule was made out of atoms, the basic building blocks of the chemical elements.

So generic matter was found to be made of molecules which were bound states of several atoms. Everyone knows this story so I won't give you a full PBS special here. Atoms were found to have a nucleus at the center. Rutherford was shocked in 1909 when the alpha particles mostly penetrated through the gold foil but a few of them bounced back (and lit up the zinc sulfide screen). He summarized his surprise in his famous quote:
All science is either physics or stamp collecting.
Oh no, I didn't mean this one although it is true, too. I meant:
It was quite the most incredible event that has ever happened to me in my life. It was almost as incredible as if you fired a 15-inch shell at a piece of tissue paper and it came back and hit you. On consideration, I realized that this scattering backward must be the result of a single collision, and when I made calculations I saw that it was impossible to get anything of that order of magnitude unless you took a system in which the greater part of the mass of the atom was concentrated in a minute nucleus. It was then that I had the idea of an atom with a minute massive centre, carrying a charge.
The atomic nucleus was born. Because people understood the atomic weights, it quickly became clear that the nucleus was composed of protons and neutrons. The latter particle, the neutron, was eventually discovered, too. By the 1960s, a whole jungle of hadrons - particles similar to protons and neutrons - was known. The deep inelastic scattering experiments played a role very analogous to Rutherford's experiment: they helped to show that even protons and neutrons were composite.

The theory ultimately found to describe these particles - Quantum Chromodynamics, also motivated by several other key partial observations - described protons and neutrons as composites of three quarks (aside from a lot of gluon fuzz and quark-antiquark short-lived pairs).

The Standard Model assumes that leptons, quarks, photons, W-bosons, Z-bosons, the Higgs boson, the graviton (if I include gravity for a while), and all their anti-particles are point-like, described, created, and annihilated by quantum fields that map points in spacetime to operators on the Hilbert space. They interact locally.

The Rutherford experiment story could be repeated again in the near or far future. Except that it now looks like a boring old idea. And there are no good reasons to think that the leptons and quarks are composite objects. There are no known experimental reasons, as Tommaso's article clarifies in detail.

But there are no good theoretical reasons, either. While the quarks have simplified the jungle of strongly interacting particles, no known model of a substructure of quarks and leptons is able to do the same thing today. Preons and rishons usually have to add a lot of new stuff that is as complicated as the quarks and gluons themselves; they often need new gauge groups (hypercolor etc.); and they usually fail to produce a realistic phenomenology (including three families), anyway.

Mass vs compositeness scale

In fact, there exists a good general argument that helps us to see that the traditional compositeness (even smaller point-like particles inside the known ones) is unlikely to win another battle. Let's look at the masses and sizes of the composite objects. In most of this discussion, I will use the relativistic "E=mc^2" relationship between mass and energy but I will begin with highly non-relativistic molecular physics where the relativistic conversion is not too useful.

Molecules typically change the total energies of the electrons in the atoms by dozens of millielectronvolts or so (that's the energy of a photon emitted when a molecule changes its state). The size of a molecule corresponds to roughly an inverse kilo-electronvolt, i.e. a couple of ångströms. The molecules themselves are heavy - containing dozens of protons and neutrons, i.e. dozens of GeV in the nuclei - but most of that mass is sitting at fixed places most of the time. It's the light and mobile electrons that should be credited with all the wonders of chemistry.

That was atomic and molecular physics - chemistry. Nuclear physics deals with energy differences comparable to fractions of a GeV. And the size of the nuclei is an inverse GeV or so, too. Note that in the molecular case, the energy differences were much smaller than the inverse size - because of the small fine-structure constant and because the heavy nuclei keep the electrons localized. But in the nuclear case, the sizes and typical energy differences are linked more tightly: the coupling of the strong force is much closer to one.
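
To see the atomic hierarchy in the simplest example, hydrogen, compare the inverse size with the binding energy (a sketch in the usual units):

\[ \frac{\hbar c}{a_0} = \alpha\, m_e c^2 \approx 3.7\ {\rm keV}, \qquad E_{\rm bind} = \frac{1}{2}\,\alpha^2 m_e c^2 \approx 13.6\ {\rm eV}, \]

so the energy differences are suppressed relative to the inverse size by a factor of alpha/2, roughly 1/270; molecular (vibrational) transitions are suppressed further by powers of the small ratio sqrt(m_e/M_nucleus).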

From nuclear physics on, we can't neglect relativity. All the subnuclear particles inevitably have speeds that are comparable to the speed of light. So it's always sensible to convert mass into energy by "E=mc^2".

There exist lighter hadrons - strongly interacting particles. Pions are the most important examples of mesons. They're light - below 0.14 GeV - because they can be approximately described as Goldstone bosons - a type of particle that should ideally be massless because of Goldstone's theorem, since they're linked with a spontaneously broken symmetry (in this case, an approximate SU(2) or, even less accurately, SU(3) symmetry between the flavors, i.e. different types of quarks - up/down and perhaps strange).
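
Schematically, this is quantified by the Gell-Mann-Oakes-Renner relation of chiral perturbation theory (written here only up to conventions and numerical factors):

\[ m_\pi^2\, f_\pi^2 \;\approx\; (m_u + m_d)\,\bigl|\langle \bar q q\rangle\bigr|, \]

so the squared pion mass is proportional to the small quark masses that explicitly break the chiral symmetry, which is why the pions sit so far below the generic 1 GeV hadronic scale.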

But most of the strongly interacting particles have GeV-like masses. It's very hard to get particles that are substantially lighter than the inverse size of the bound state. Once we stop ignoring the latent "E=mc^2" energy of the constituents, the natural expectation is that the size and the energy are inversely proportional to each other.

But the elementary fermions of the Standard Model are pretty light. The heaviest is the top quark, and all the others are much lighter. And we have already studied physics at distances comparable to the inverse masses of these particles (in the hbar=c=1 units), and there's no sign of compositeness. So it is natural to expect that there is no new compositeness anywhere.
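
A sketch of the numbers behind this claim, for the electron: its reduced Compton wavelength is

\[ \frac{\hbar}{m_e c} \approx 386\ {\rm fm} \approx 3.9\times 10^{-13}\ {\rm m}, \]

while any substructure would have to hide below roughly 4 x 10^{-20} m (the 5 TeV bound above), so a composite electron would be lighter than its inverse size by more than seven orders of magnitude - exactly the kind of unnaturally light bound state discussed in the following section.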

Strings and surprisingly light composites

However, you know that quarks and leptons may be viewed as composites in some generalized sense. If they're vibrating strings, i.e. if perturbative string theory is a good approximation of reality, they may be interpreted as energy eigenstates of a bound state of "pearls" - the so-called string bits - connected into loops. That's how a closed string may be represented. These string bits are strongly interacting. In fact, their interaction is set by the string tension, whose associated energy scale is just huge - probably 10^{18} GeV or so.

So how is it possible that there exist light string vibrations whose mass is well below the "expected" string scale of 10^{18} GeV? Well, that's a good question but there are good answers, too. In string theory, one can actually show that some states may be exactly massless, or almost exactly massless, because of both old (non-stringy) and new (stringy) reasons.

The old reasons are primarily symmetries. The photons and gravitons are massless because the gauge bosons associated with unbroken local symmetries in spacetime have to be massless. Similarly, fermions may be massless because of supersymmetry - if they're paired with massless bosons - or because of the chiral symmetry (left-right asymmetric change of the phase of their wave function).

Additional spin-zero bosons may be massless because supersymmetry may pair them with massless fermions, whose masslessness was protected by the chiral symmetry, or because they're the Goldstone bosons connected with an exact or approximate symmetry.

At any rate, whenever you have a particle that is much lighter than the dimensional analysis would indicate - that is unnaturally light - you should ask why it is so, because such an observation is "marginally incompatible" with the a priori expectations, based on Bayesian inference, that lead to natural masses. There is never any sharp contradiction here - because it's just some Bayesian inference based on vague arguments and statistics - but those mental tools should be refined, too.

Unnatural things are indisputably strange, and having no intuition for what is natural (likely in Nature) and what is not natural (what was probably constructed artificially) is almost equivalent to knowing nothing about natural science.

I have mentioned the non-stringy reasons why particles can be massless (or much lighter than expected). But there also exist purely stringy reasons. One of their classes are index theorems. If you consider e.g. heterotic strings on Calabi-Yau manifolds, the first realistic realizations of the (nearly) real world within string theory, you find out that in the leading approximation dictated by the geometry, the leptons and quarks are massless. They're surely much lighter than the string scale.

Why is it so? It's because supersymmetry not only links bosons and fermions but usually also pairs left-handed particles with their right-handed partners. This has to be true for all massive particles. But massless particles can come in "short multiplets" - the would-be "partner" of a particle can be "zero", which fails to be a new independent normalizable "basis vector" or a new "particle species". (The square of this vanishing coefficient is linked to the mass of the particle.)

In fact, there exist sophisticated geometric methods to calculate the number of left-handed massless particles without partners. In the case of Calabi-Yau manifolds, this imbalance is linked to the homology of the manifold - its Hodge numbers. Those count the topologically inequivalent, independent, non-contractible "holes", i.e. p-dimensional submanifolds (cycles), of the Calabi-Yau manifold.

For a given topology, you can prove that these integers are nonzero, and they imply that there has to be an asymmetry between left-handed and right-handed fermions. Those "odd ones" have to be massless, which changes our "natural" expectations about the numbers of the seemingly unnaturally light particles: for suitable topologies, there have to be exactly three families of them.
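
For the simplest ("standard embedding") heterotic compactifications, the counting is particularly clean: the net number of chiral families is determined by the Euler character of the Calabi-Yau three-fold,

\[ N_{\rm gen} = \tfrac{1}{2}\,\bigl|\chi(X)\bigr| = \bigl|h^{1,1} - h^{2,1}\bigr|, \]

so a manifold with |chi| = 6 yields exactly three families; more general bundles count the families via other index-theorem data, but the logic is the same.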

These particles can eventually get some small masses from the supersymmetry breaking and from the interactions with the Higgs boson etc. But these effects are "small oscillations on the stringy background", much like the binding energies of the electrons in the molecules (analogy: corrections from the Higgs) were just small corrections to the huge, solid, and unchanging latent energy of the nuclei (analogy: stringy geometry). Of course, for everyday physics, the small changes of the energy are more important. But fundamentally, most of the stuff and knowledge sits in the nuclei (analogy: stringy geometry).

There exist other, often surprising reasons why the light fields are light in various vacua of string theory. All these arguments may be viewed as aspects of "generalized geometry" in one way or another. The diversity of reasons that string theory is able to relate (or even identify) is amazing. And even in model-building, when people try to construct models where the Higgs is lighter than the generic models would imply, they get inspired by geometry - they "engineer" degrees of freedom that behave much like the extra dimensions of string theory. See e.g. Littlest Higgs model and deconstruction.

To summarize this portion of the text: particles that are much lighter than their inverse size (in the c=hbar=1 units) almost always have to have reasons to be light. The reasons include broken or unbroken symmetries, relationships with other particles protected by symmetries, or stringy arguments such as index theorems that are as powerful as the symmetries. All those arguments may work either exactly, or in some approximation. In the latter case, the particles are massive but much lighter than you would expect if you didn't know about the hierarchy of influences.

Compositeness of magnetic and electric particles

There's a much more general and equally important theme I want to mention in this article: the notion of compositeness is not physical in general. It depends on the description. However, when you know that the coupling between your lightest objects is weak, you may always divide your objects into elementary and composite ones.

Let me mention some examples.

Electrons and quarks carry the electric U(1) charge. In The Big Bang Theory, Sheldon Cooper tried to find the magnetic monopoles, and for a good reason. It's almost guaranteed that the magnetically charged particles - South poles of a magnet without the North poles, or vice versa - have to exist.

Why? For example, locality around black holes implies that it must be possible for the magnetic field to be "mostly outgoing" from a region. The region may surround one pole of a dipole magnet. However, one of the poles may collapse into a black hole faster than the other. Consequently, you must be able to create a black hole that carries a magnetic monopole charge, at least in principle.

So such microstates have to exist. And it's likely that the lightest microstates with this new kind of charge will look more like particles than the black holes. However, these particles may still be insanely heavy - like the GUT or string scale, 10^{18} GeV. At any rate, they should exist. While it's not clear whether there's any useful or well-known low-energy local field-theoretical description, I think that good physicists agree that the monopoles should exist somewhere in the spectrum.

The funny thing is that the monopoles may always be viewed as composite objects. More precisely, in some field theories that admit monopoles, such as GUT theories, they can be represented by classical solutions. They're topologically nontrivial configurations of the photon and generalized "gluon" fields that hold together because of some nonlinearities in the interactions, if you wish. In this sense, they're made out of infinitely many gauge bosons that conspire in a specific way. That's also a reason why they're so heavy: the mass typically goes like "1/g^2" where "g" is a small coupling constant.
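
For orientation, the classic 't Hooft-Polyakov monopole of a spontaneously broken gauge theory displays this scaling explicitly; in the BPS limit its mass is roughly

\[ M_{\rm monopole} \approx \frac{4\pi v}{g} = \frac{4\pi\, m_W}{g^2}, \]

where v is the symmetry-breaking scale and m_W = g v is the mass of the ordinary "electric" gauge boson, so at weak coupling the monopole is parametrically heavier than the elementary quanta.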

More generally, we use the word "solitons" for such composite objects that are most easily described as classical solutions involving fields that are associated with the light particles. (Of course, they should still be quantized, after you construct them: the world is a quantum world, and it applies to everyone.) Examples include kinks in 1+1D, vortices in 2+1D, monopoles in 3+1D, skyrmions in higher dimensionalities, and others - including knitted fivebranes.

If you want to know, "instantons" are solutions similar to solitons, but they're localized in the (Euclideanized) time, too. Instantons are not static objects but isolated "histories" that contribute to the Feynman path integral. They change the results when you calculate the probability of a process (such as a rare decay of a seemingly stable particle).
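
In a gauge theory with coupling g, a single instanton is weighted in the path integral by the exponential of minus its Euclidean action,

\[ e^{-S_{\rm inst}} = e^{-8\pi^2/g^2}, \]

a factor smaller than any power of g, which is why these effects are invisible to all orders of ordinary perturbation theory even though they can unlock otherwise forbidden processes.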

The electrically charged particles look point-like and elementary - they're usually light - while the magnetic monopoles are heavy and look like a non-local, extended solution involving the elementary fields. A similar separation exists in string theory, too.

When the string coupling constant is low (weak coupling), the strings are the lightest, and therefore the most elementary, objects in your theory. Other objects are "made out of strings". For example, D-branes can be understood as a special type of "solitons". In fact, the D-brane masses go like "1/g" in string units, which puts them near the geometric average of the fundamental strings' mass ("1" in string units) and the mass of field-theoretical solitons similar to magnetic monopoles (which goes like "1/g^2"). In this counting, the D-branes are "less solitonic" than the normal solitons.

In a different parameterization, the D-branes have masses that go like "1/g_{closed} = 1/g_{open}^2", which is the usual power law for the solitons, but with "g_{open}" replacing the gauge coupling from field theory (which is the right map for the D-brane gauge fields, anyway).
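
A quick consistency check of these scalings, in string units where the fundamental string mass is of order one:

\[ M_{\rm D} \sim \frac{1}{g_s} = \sqrt{1\cdot\frac{1}{g_s^2}}\,, \qquad g_{\rm closed} = g_{\rm open}^2 \;\Rightarrow\; \frac{1}{g_{\rm closed}} = \frac{1}{g_{\rm open}^2}\,, \]

so the D-brane mass is literally the geometric mean of the string scale and the conventional "1/g^2" soliton scale, and it becomes an ordinary soliton-like "1/(coupling)^2" object when expressed through the open-string coupling.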

S-duality, evaporating compositeness, and bootstrap

While the separation into elementary (usually light) and composite (usually heavy) particles is clear at weak coupling, it becomes ill-defined at strong coupling ("g" of order one). In fact, many theories exhibit S-duality, i.e. an equivalence between the weak-coupling regime and the strong-coupling regime. If you make "g" much greater than one, physics will be totally equivalent to the physics of another (or the same) theory at the coupling "1/g", which is much smaller than one. For the N=4 gauge theory, the rule is really this simple and this map is an exact self-equivalence.
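
In the cleanest, N=4 case, the statement is usually written in terms of the complexified coupling (including the theta angle for completeness):

\[ \tau = \frac{\theta}{2\pi} + \frac{4\pi i}{g^2}, \qquad S:\ \tau \to -\frac{1}{\tau}, \]

which at theta = 0 reduces to the inversion g^2/4pi -> 4pi/g^2, i.e. the "g goes to 1/g" map above (up to the conventional factor of 4 pi); together with tau -> tau + 1, it generates the full SL(2,Z) duality group.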

Such an equivalence means that the magnetic monopoles and the electrically charged particles are equally elementary or equally composite! It was just a matter of the weak-coupling expansion that one group looked more elementary while the other looked more composite. For higher couplings, this "qualitative" difference goes away.

Less symmetric theories, such as N=2 gauge theories, usually have more complex prescriptions for how the electrically and magnetically charged states (and dyons, which carry both charges) transform into each other, as shown by Seiberg and Witten (and their followers). It's still true that these theories show that the separation of particles into elementary and composite depends on the context and is not sharp and universal.

Seiberg has constructed a class of other S-dualities that relate pairs of inequivalent gauge theories with a different spectrum (and with the minimal, N=1 supersymmetry in four dimensions).

After all, the fuzzy boundary between composite and elementary fields is what was expected for decades. Werner Heisenberg was among those who believed in "bootstrap", a self-consistent theory that defines its own rules and that prevents you from starting from a unique, constructive starting point that divides the objects into elementary and composite ones.

This bootstrap thinking was popular in the late 1960s, at the same time when string theory happened to be born (they believed it was a key to crack the strong force), and it kind of influenced the birth of string theory, too. However, the philosophy was then completely defeated for at least 30 years. In the early 1970s, Quantum Chromodynamics described the strongly interacting particles using a completely constructive, non-bootstrap theory with well-defined elementary fields. And even string theory itself abruptly became a constructive theory with very well-defined elementary degrees of freedom which are separated from the composite or "derived" ones.

So historically, string theory is sometimes linked to the bootstrap program, but scientifically this correlation is superficial. String theory is as unrelated to the bootstrap program as field theory is. And the bootstrap program hasn't really been successful (except for the classification of classes of two-dimensional conformal field theories).

But it's pretty likely that the bootstrap program will have to return to physics. Compositeness is not absolute. If people ever find a description of string theory that is equally valid, or equally "weakly or strongly coupled", in all situations (which may be a contradiction, who knows!), i.e. a background-independent definition of string theory (but I don't mean in the Smolin crackpot sense!), then such a formulation will also have to treat all objects as equally fundamental or non-fundamental, and only physical distinctions such as the masses and/or interaction strengths in a given environment will be derivable from the formalism.

We know that the difference between elementary and composite particles depends on the environment. And there's one more place that clearly shows that an absolute notion of compositeness is doomed. At the Planck scale, the smallest black hole microstates are surely "somewhere in between" composite objects (black holes are a kind of "solitons of general relativity") and elementary particles (black hole microstates are just heavy particle species).

This transition has to be gradual. The peaceful co-existence of the black holes, the dominant microstates at heavy masses that are described semiclassically by GR as solitons, with the low center-of-mass energy limit without black holes is a major consistency constraint that makes quantum gravity so hard and that guarantees that only the solutions linked to string theory may work.

Quantum gravity is not an "anything goes" business. It is a very fine reconciliation of two worlds with known descriptions. Both of these worlds, in some sense, describe everything when extrapolated properly, but the extrapolation that agrees with both (or all) limits is very nontrivial and doesn't allow you to make those old naive bureaucratic decisions such as the separation of composite particles from the elementary ones, or the counting of either. These things are ill-defined when you describe physics properly.

At the same time, we shouldn't forget that this separation becomes "damn real" in some very good approximations to and descriptions of the reality.

And that's the memo.