Florin Moldoveanu, an eclectic semi-anti-quantum zealot, has never been trained in particle physics and doesn't understand it, but he found it reasonable to write uncritically about Alain Connes' proposals to construct a correct theory of particle physics using the concepts of noncommutative geometry.
Now, Connes is a very interesting guy – a great, creative, and playful mathematician – and he surely belongs among the most successful abstract mathematicians who have worked hard to learn particle physics. Except that the product just isn't enough because the airplanes don't land. His and his collaborators' proposals are intriguing but they just don't work, and what the "new framework" is supposed to be isn't really well-defined at all.
The status quo in particle physics is that quantum field theories – often interpreted as effective field theories (theories useful for the description of all phenomena at distance scales longer than a cutoff) – and string theory are the only known ways to produce realistic theories. Moreover, to a large extent, string theory in most explicit descriptions we know also adopts the general principles of quantum field theory "without reservations".
The world sheet description of perturbative string theory is a standard two-dimensional conformal (quantum) field theory; Matrix theory and AdS/CFT describe vacua of string/M-theory but they're also quantum field theories in some spaces (world volumes or AdS boundaries); and string theory vacua have effective field theory descriptions exactly of the type one expects in the formalism of effective field theories (even though string theory itself isn't "quite" a regular quantum field theory in the bulk).
When we discuss quantum field theories, we decide on the dimension, the qualitative field content, and the symmetries. Once we do so, we're obliged to consider all (anomaly-free, consistent, unitary) quantum field theories satisfying these conditions and all values of the parameters. This also gives us an idea about which choices of the parameters are natural or unnatural.
Now, Connes and collaborators claim to have something clearly different from the usual rules of quantum field theory (or string theory). The discovery of a new framework that would be "on par" with quantum field theory or string theory would surely be a huge one, just like the discovery of additional dimensions of spacetime of any kind. Except that we have never been shown what Connes' framework actually is, or how to decide whether a paper describing a given model belongs to Connes' framework or not. And we haven't been given any genuine evidence that additional dimensions of Connes' type exist.
So all this work of Connes' is some hocus pocus experimentation with mixtures of the mathematics of noncommutative spaces (which he understands very well) and particle physics (which he understands much less well); in between mathematical analyses that are probably hugely careful and advanced, he often writes things that almost every physics graduate student knows to be just silly. And a very large fraction of his beliefs about how noncommutative geometry may work within physics just seems wrong.
How is it supposed to work?
In Kaluza-Klein theory (or string theory), there is some compactification manifold which I will call \(CY_6\) because the Calabi-Yau three-fold is the most frequently mentioned, and sort of canonical, example. Fields may be expanded into modes – a generalization of Fourier series – which are functions of the coordinates on \(CY_6\). And there is a countably infinite number of these modes. Only a small number of them are very light, but if you allow arbitrary masses, you get a whole tower of increasingly heavy Kaluza-Klein modes.
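Schematically – and this is just my illustrative sketch, not tied to any particular compactification – a higher-dimensional scalar field would be expanded as\[
\Phi(x^\mu,y) = \sum_{n=0}^{\infty} \phi_n(x^\mu)\, f_n(y),\qquad \Delta_{CY_6}\, f_n = -\lambda_n f_n,
\] where \(y\) are the coordinates on \(CY_6\) and \(f_n\) are the eigenmodes of its Laplacian. The four-dimensional fields \(\phi_n\) acquire masses \(m_n^2\sim \lambda_n/R^2\) where \(R\) is the size of the manifold; because the spectrum \(\{\lambda_n\}\) of the Laplacian on a commutative manifold is infinite, the Kaluza-Klein tower is infinite, too.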
Connes et al. want to believe that there are just finitely many fields in 3+1 dimensions, like in the Standard Model. How can we get a finite number of Kaluza-Klein modes? We get them if the space is noncommutative. The effect is similar to what we would see if the space consisted of a finite number of points – except that a noncommutative space isn't a finite set of points.
A noncommutative space isn't a set of points at all. For this reason, there are no "open sets" or "neighborhoods", and the normal notions of topology and space dimension don't apply, either. A noncommutative space is a generalization of the "phase space in quantum mechanics". The phase space has coordinates \(x,p\) but they don't commute with each other – that's why it's called a "noncommutative space". Instead, we have\[
xp-px=i\hbar.
\] Consequently, the uncertainty principle restricts how accurately \(x,p\) may be determined at the same moment. The phase space is effectively composed of cells of area \(2\pi\hbar\) (or a power of it, if we have many copies of the coordinates and momenta). And these cells behave much like "discrete points" when it comes to the counting of the degrees of freedom – except that they're not discretely separated at all. The boundaries between them are unavoidably fuzzier than even those in regular commutative manifolds. If you consider compactified versions of the phase space (with \(x,p\) periodic in some sense), e.g. the fuzzy sphere or the fuzzy torus, you may literally get a finite number of cells and therefore a finite number of fields in 3+1 dimensions.
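To make the finiteness tangible, consider the fuzzy sphere, where the three coordinates are proportional to the spin-\(j\) angular momentum matrices, so "functions on the space" are \(N\times N\) matrices with \(N=2j+1\) – exactly \(N^2\) independent modes instead of infinitely many spherical harmonics. Here is a minimal numerical sketch (my own illustration, not code from anyone's paper):

```python
import numpy as np

def spin_matrices(j):
    """Spin-j angular momentum matrices J_x, J_y, J_z of size N = 2j+1."""
    m = np.arange(j, -j - 1, -1)                # eigenvalues of J_z: j, j-1, ..., -j
    Jz = np.diag(m).astype(complex)
    # J_+ has entries sqrt(j(j+1) - m(m+1)) just above the diagonal
    Jplus = np.diag(np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1)), 1).astype(complex)
    Jx = (Jplus + Jplus.conj().T) / 2           # J_x = (J_+ + J_-)/2
    Jy = (Jplus - Jplus.conj().T) / 2.0j        # J_y = (J_+ - J_-)/(2i)
    return Jx, Jy, Jz

j = 5 / 2                                       # any spin works; N = 2j+1 = 6
Jx, Jy, Jz = spin_matrices(j)
N = int(2 * j + 1)
print(np.allclose(Jx @ Jy - Jy @ Jx, 1j * Jz))  # True: the su(2) algebra closes
# x_i = r J_i / sqrt(j(j+1)) then obeys x^2 = r^2, a sphere of radius r:
print(np.allclose(Jx @ Jx + Jy @ Jy + Jz @ Jz, j * (j + 1) * np.eye(N)))
print("independent modes on this fuzzy sphere:", N ** 2)  # finite, unlike S^2
```

As \(j\to\infty\), the \(N^2\) matrix modes approach the full set of spherical harmonics and the ordinary sphere is recovered.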
That's basically what Connes and pals do.
Now, they have made some truly extraordinary claims that have excited me as well. I can't imagine how I could have failed to be excited at least once; but I also can't imagine that I would preserve my excitement once I saw that there's no defensible added value in those ideas. In 2006, for example, Chamseddine, Connes, and Marcolli released their standard model with neutrino mixing that boldly predicted the mass of the Higgs boson as well. The prediction was \(170\GeV\), which is not right, as you know: the Higgs boson with a mass of \(125\GeV\) was officially discovered in July 2012.
But the fate of this prediction \(m_h=170\GeV\) was sort of funny. Two years later, in 2008, the Tevatron became able to say something about the Higgs mass for the first time. It ruled out the first narrow interval of Higgs masses. Amusingly enough, the first value of the Higgs mass that was killed was exactly Connes' \(170\GeV\). Oops. ;-)
There's a consensus in the literature of Connes' community that \(170\GeV\) is the prediction that the framework should give for the Higgs mass. But in August 2012, one month after the \(125\GeV\) Higgs boson was discovered, Chamseddine and Connes wrote a preprint about the resilience of their spectral standard model. A "faux pas" would probably be more accurate but "resilience" sounded better.
In that paper, they added some hocus pocus arguments claiming that because of some additional singlet scalar field \(\sigma\) that was previously neglected, the Higgs prediction is reduced from \(170\GeV\) to \(125\GeV\). Too bad they couldn't make this prediction before December 2011 when the value of \(125\GeV\) emerged as the almost surely correct one to the insiders among us.
I can't make sense of the technical details – and I am pretty sure that it's not just due to a lack of effort, listening, or intelligence. There are things that just don't make sense. Connes and his co-author claim that the new scalar field \(\sigma\), which they consider a part of their "standard model", is also responsible for the Majorana neutrino masses.
Now, this just sounds extremely implausible because the origin of the small neutrino masses is very likely to be in the phenomena that occur at some very high energy scale near the GUT scale – possibly grand unified physics itself. The seesaw mechanism produces good estimates for the neutrino masses\[
m_\nu \approx \frac{m_{h}^2}{m_{GUT}}.
\] So how could one count the scalar field responsible for these tiny masses as a part of the "Standard Model", an effective theory for energy scales close to the electroweak scale or the Higgs mass \(m_h\sim 125\GeV\)? If the Higgs mass and the neutrino masses were calculable in Connes' theory, the theory wouldn't really be a standard model but a theory of everything, and it would have to work near the GUT scale, too.
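Just to check that the orders of magnitude work – a back-of-the-envelope estimate, taking \(m_{GUT}\sim 10^{15}\GeV\) for definiteness – the seesaw formula gives\[
m_\nu \approx \frac{(125\GeV)^2}{10^{15}\GeV} \approx 1.6\times 10^{-11}\GeV \approx 0.02\,{\rm eV},
\] which is indeed in the ballpark of the mass differences \(\sqrt{\Delta m^2}\sim 0.01\)–\(0.05\,{\rm eV}\) extracted from neutrino oscillations. The tiny neutrino masses are a probe of GUT-scale physics, which is exactly why stuffing their origin into an electroweak-scale "standard model" sounds so odd.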
The claim that one may relate these parameters that seemingly boil down to very different physical phenomena – at very different energy scales – is an extraordinary statement that requires extraordinary evidence. If the statement were true or justifiable, it would be amazing by itself. But this is the problem with non-experts like Connes. He doesn't give any evidence because he doesn't even realize that his statement sounds extraordinary – it sounds (and probably is) incompatible with rather basic things that particle physicists know (or believe to know).
Connes' "fix" that reduced the prediction to \(125\GeV\) was largely ignored by the later pro-Connes literature that kept on insisting that \(170\GeV\) is indeed what the theory predicts.
So I don't believe one can ever get correct predictions out of a similar framework, except for cases of good luck. But my skepticism about the proposal is much stronger than that. I don't really believe that there exists any new "framework" at all.
What are Connes et al. actually doing when they are constructing new theories? They are rewriting some or all terms in a Lagrangian using new algebraic symbols, like a "star-product" on a specific noncommutative geometry. But is that a legitimate way to classify quantum field theories? You know, a star-product is just a bookkeeping device. It's a method to write down classical theories of a particular type.
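For concreteness, the simplest example is the Moyal star-product on a flat space whose coordinates obey \([x^\mu,x^\nu]=i\theta^{\mu\nu}\):\[
(f\star g)(x) = f(x)\,g(x) + \frac{i}{2}\,\theta^{\mu\nu}\,\partial_\mu f(x)\,\partial_\nu g(x) + O(\theta^2),
\] i.e. the ordinary product deformed by an infinite series of higher-derivative corrections. You may take any classical Lagrangian and replace the ordinary products with \(\star\)-products, which is why it smells like bookkeeping rather than a new physical principle.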
But the quantum theory at nonzero couplings isn't really "fully given by the classical Lagrangian". It should have some independent definition. Once you allow quantum corrections, renormalization, subtleties with renormalization schemes etc., I claim that you just can't say whether a particular theory is or is not a theory of Connes' type. The statement "it is a theory of Connes' type" is only well-defined for classical field theories – and probably not even for them.
A generic interacting fully quantum field theory just isn't equivalent to any star-product-based classical Lagrangian!
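To see how mechanical the star-product bookkeeping is, here is a toy check at first order in \(\theta\) – which happens to be exact for functions linear in the coordinates, such as \(x\) and \(p\) from the phase-space example above (again, my own illustration):

```python
import sympy as sp

x, p, theta = sp.symbols('x p theta', real=True)

def star(f, g):
    """Moyal star-product on the (x, p) plane, truncated at first order in theta.
    The truncation is exact when f and g are at most linear in x and p."""
    return sp.expand(f * g + sp.I * (theta / 2) *
                     (sp.diff(f, x) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, x)))

# The star-commutator reproduces the defining noncommutativity relation,
# with theta playing the role of hbar in xp - px = i*hbar:
print(sp.simplify(star(x, p) - star(p, x)))    # prints: I*theta
```

The point is that the classical functions \(f,g\) are completely ordinary; all the "noncommutativity" resides in how the bookkeeping device multiplies them.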
There are many detailed questions that Connes can't quite answer, and they show that he doesn't really know what he's doing. One of these questions is really elementary: Is gravity supposed to be a part of his picture? Does his noncommutative compactification manifold explain the usual gravitational degrees of freedom, or just some polarizations of the graviton in the compact dimensions, or none? You can find contradictory answers to this question in Connes' papers.
Let me tell you the answer to the question whether gravity is a part of the consistent decoupled field theories on noncommutative spaces – i.e. those that arise in string theory. The answer is simply No. String theory allows you to pick a \(B\)-field and decouple the low-energy open-string dynamics (which is a gauge theory). The gauge theory is decoupled even if the space coordinates are noncommutative.
But it's always just a gauge theory. There are never spin-two fields that would meaningfully enter the Lagrangian with the noncommutative star-product. Why? Because the noncommutativity comes from the \(B\)-field, which may be set to zero by the gauge invariance of the \(B\)-field, \(\delta B_{(2)} = d \lambda_{(1)}\). So the value of this field is unphysical. This conclusion only changes inside a D-brane where \(B+F\) is the gauge-invariant combination. The noncommutativity-inducing \(B\)-field may really be interpreted as a magnetic field \(F\) inside the D-brane, which is gauge-invariant. Its value matters. But in the decoupling limit, it only matters for the D-brane degrees of freedom because the D-brane world volume is where the magnetic field \(F\) is confined.
In other words, the star-product-based theory only decouples from the rest of string theory if the open-string scale is parametrically longer than the closed-string scale. And that's why the same star-product isn't relevant for the closed-string modes such as gravity. Or: if you tried to include some "gravitational terms with the star-product", you would need to consider all objects with string-scale energies, and the infinite tower of massive string states would have to be a part of the picture, too.
Whether you learn these lessons from the string theory examples or derive them purely from "noncommutative field theory consistency considerations", your conclusions will contradict Connes' assumptions. One simply cannot have gravity in these decoupled theories. If your description has gravity, it must have everything. In the end, you could relate this conclusion to the "weak gravity conjecture", too. Gravity is the weakest force, so once your theory of the elementary building blocks of Nature starts to be sensitive to it, it must already be sensitive to everything else. Alternatively, you may say that gravity admits black holes that evaporate, and they may emit any particle as Hawking radiation – any particle in any stage of a microscopic phenomenon that is allowed in Nature. So there's no way to decouple any subset of objects and phenomena.
When I read Connes' papers on these issues, he contradicts insights like that – which seem self-evident to me and probably to most real experts in this part of physics. You know, I would be extremely excited if a totally new way to construct theories or decouple subsets of the dynamics from string theory existed. Except that it doesn't seem to be the case.
In proper string/M-theory, when you actually consistently decouple some subset of the dynamics, it's always near some D-brane or singularity. The decoupling of the low-energy physics on D-branes (which may be a gauge theory on noncommutative spaces) was already mentioned. Cumrun Vafa's F-theory models of particle physics are another related example: one decouples the non-gravitational particle physics near the singularities in the F-theory manifold, basically near the "tips of some cones".
But Connes et al. basically want to have a non-singular compactification without branes, and they still want to claim that they may decouple some ordinary standard-model-like physics from everything else – from the excited strings or (even if you decided that those don't exist) from the black hole microstates, which surely have to exist. But that's almost certainly not possible. I don't have a totally rock-solid proof, but it seems to follow from what we know from many lines of research, and it's a good enough reason to dismiss Connes' research direction as a wrong one unless he finds something that is really nontrivial, which he hasn't done yet.
Again, I want to mention the gap between the "physical beef" and the "artefacts of formalism". The physical beef includes things like the global symmetries of a physical theory. The artefacts of formalism include things like "whether some classical Lagrangian may be written using some particular star-product". Connes et al. just seem to be extremely focused on the latter, the details of the formalism. They just don't think like physicists.
You know, as we have learned especially in the last 100 years, a physical theory may often be written in very many different ways that are ultimately equivalent. Quantum mechanics was first found as Heisenberg's "matrix mechanics", which turned into the Heisenberg picture, and later as "wave mechanics", which became Schrödinger's picture. Dirac pointed out that a compromise, the interaction/Dirac picture, always exists. Feynman later added his path integral approach, which is really another picture. The equivalence of those pictures was proven soon after they appeared.
For particular quantum field theories and vacua of string/M-theory, people have found dualities, especially in the last 25 years: string-string duality, IIA/M, heterotic/M, S-dualities, T-dualities, U-dualities, mirror symmetry, AdS/CFT, ER=EPR, and others. The point is that physics that is ultimately the same for the observers who live in that universe may often be written in several or many seemingly very different ways. After all, even gauge theories on noncommutative spaces are equivalent to gauge theories on commutative spaces – or on noncommutative spaces in different dimensions, and so on.
The broader lesson is that the precise formalism you pick simply isn't fundamental. Connes' whole philosophy – and the philosophy of many people who focus on appearances rather than the physical substance – is very different. In the end, I think that Connes would agree that he's just constructing something that may be rewritten as quantum field theories. If there's any added value, it's that he claims to have a gadget that produces the "right" structure of the relevant quantum field theories.
But even if he had some well-defined criterion that divides the "right" and "wrong" Lagrangians of this kind – and I think he simply doesn't have one, because there can't be one – why would one really believe in Connes' subset? A theory could be special because it can be written in Connes' form, but is that a real virtue or just an irrelevant curiosity? Such a theory is equally consistent as, and has the same symmetries etc. as, many other theories that cannot be written in the Connes form.
So even if the theories of Connes' type were a well-defined subset of quantum field theories, I think that it would be irrational to dramatically focus on them. It would seem only a little bit more natural to focus on this subset than to focus on quantum field theories in which the dimensions of all representations are odd and the fine-structure constant (measured from electron-electron low-energy scattering) is written using purely odd digits in base 10. ;-) You may perhaps define such a subset, but why would you believe that belonging to it is a "virtue"?
I surely don't believe that "the ability to write something in Connes' form" is an equally well-motivated "virtue" as an "additional enhanced symmetry" of a theory.
This discussion is a somewhat more specific example of thinking about the "ultimate principles of physics". In quantum field theory, we sort of know what the principles are. We know what theories we like or consider and why. The quantum field theory principles are constructive. The principles we know in string theory – mostly consistency conditions, unitarity, the incorporation of massless spin-two particles (gravitons) – are more bootstrappy and less constructive. We would like to know more constructive principles of string theory that would make it more immediately clear why there are 6 maximally decompactified supersymmetric vacua of string/M-theory, and things like that. That's what the constantly tantalizing question "what is string theory" means.
But whenever we describe some string theory vacua in a well-defined quantitative formalism, we basically return to the constructive principles of quantum field theory. Constrain the field/particle content and the symmetries. Some theories – mostly derivable from a Lagrangian and its quantization – obey the conditions. There are parameters you may derive, and some measure on these parameter spaces.
Connes basically wants to add principles such as "the theory may be derived from a Lagrangian that can be written in the Connes form". I just don't believe that principles like that matter in Nature because they don't really constrain Nature Herself but only what Nature looks like in a formalism. I simply don't believe that a formalism may be this important in the laws of physics. Nature abhors bureaucracy. She doesn't really care about formalisms and what they look like to those who have to work with them. She doesn't discriminate against one type of formalism and She doesn't favor another kind. If She constrains some theories, She has good reasons for that. To focus on a subclass of quantum field theories because they are of the "Connes type" simply isn't a good reason. There isn't any rational justification for the idea that Connes-ness is an advantage rather than a disadvantage etc.
Even though some of my objections are technical while others are "philosophically emotional" in some way, I am pretty sure that most of the people who have thought about the conceptual questions deeply and successfully basically agree with me. This is also reflected by the fact that Connes' followers are a restricted group and I think that none of them really belongs to the cream of the theoretical high-energy physics community. Because the broader interested public should have some fair idea about what the experts actually think, it seems counterproductive for non-experts like Moldoveanu to write about topics they're not really intellectually prepared for.
Moldoveanu's blog post is an example of a text that makes the readers believe that Connes has found a framework that is about as important, meaningful, and settled as the conventional rules of model building in quantum field theory or string theory. Except that he hasn't, and the opinion that he has is based on low standards and sloppiness. More generally, people are being constantly led to believe that "anything goes". But it's not true that anything goes. The amount of empirical data we have collected and the laws, principles, and patterns we have extracted from them is huge, and the viable theories and frameworks are extremely constrained. Almost nothing works.
The principles producing theories that seem to work should be taken very seriously.