What should be explained better? (Cosmic Variance)
Quick links to 15 questions and answers
01, 02, 03, 04, 05, 06, 07, 08, 09, 10, 11, 12, 13, 14, 15.
1. Wldmr: The fact that science asks “how?” rather than “why?” and therefore has no overlap with religion. People need to get over the idea that one somehow precludes the other. I don’t think that was quite what you were asking for, but there it is.

Indeed, science usually asks detailed questions - "how" and "how much", among others. When it finds an explanation or a theory that works and predicts "how" objects behave, or explains "how" they behaved or looked, the scientists may be satisfied.
But when the theory that science finds doesn't look "perfect", science may also try to find a deeper explanation - "why" the imperfect or approximate theory worked. However, in the scientific context, this question means "how" a more satisfying or more accurate theory may produce a less satisfying one as an approximation.
Science doesn't ask "why" questions in the sense "for what purpose" the features of the real world are the way they are. In fact, science doesn't assume that things have a purpose. Often, it assumes that they don't have any purpose. The religious questions "why" - emotionally looking for desires or priorities of an anthropomorphic agent - don't appear in science.
These religious questions may be thought of as independent of science. However, as Sean Carroll would surely agree, explanations based on "purpose" often directly contradict science because science often shows that things work for different reasons that are incompatible with a "purpose".
Evolution of life and human acts may be explained in terms of a "purpose" - but this "purpose" may still be reduced to a more mechanistic explanation because science is never satisfied with the declaration of a "purpose" in the role of a final explanation. Instead, it asks "why" humans or other life forms seem to want what they want, and it reduces this question to the question of "how" it all works, anyway.
2. Brent Mosley: Speciation. I have seen so many “definitions” that I’m not sure that even scientists agree on what the term means, let alone the public.

Speciation is a term for the emergence of new species. As offspring differ from their parents, organisms evolve from generation to generation. The group of organisms tries to adapt to its environment.
In some cases, it is useful for the group of organisms in the same species to evolve simultaneously. If such "collective" evolution occurs, we talk about anagenesis. The organisms may still transform into a new species if the change is significant enough - there's never any sharp boundary, of course. (Anagenesis that was perceived to make the animals worse or "go back" used to be called catagenesis.)
However, some environments may encourage the organisms to diversify. This kind of evolution is called cladogenesis and differs from anagenesis. The subgroups of the original species begin to diverge in their genes (innate properties encoded in the DNA) as well as phenotypes (observable features of the organisms determined by the DNA together with the environment and history). They gradually become reproductively isolated: organisms from the two groups can no longer have fertile offspring together. New species are being created.
Again, the separation is gradual - there's no sharp moment when it occurs. But in most cases, it's damn clear whether two organisms belong to the same species or different ones. However, one may always find marginal cases and no completely universally acceptable definition of speciation may be found because speciation is not a discrete process.
3. Non-believer: I’ve read a lot about it (pop books, not real science books) but I still have a hard time grasping quantum mechanics.

The foundations of quantum mechanics have been frequently discussed on this blog, recently in the article about Sidney Coleman's lecture, Quantum mechanics in your face, and in another article about the delayed choice quantum eraser. They will be discussed later in this text, too.
Quantum mechanics is the experimentally verified description of all phenomena in the real world that replaced classical physics in the 1920s, and this replacement will never be undone again. Quantum mechanics shows that the world fundamentally boasts many properties that are counterintuitive for creatures like us, who have mostly been trained by life at macroscopic scales.
According to quantum mechanics, only probabilities of individual outcomes - i.e. the probabilistic distributions - may be predicted by the most accurate and complete conceivable scientific description: the outcome of a particular experiment is random. Classical quantities such as position and momentum are replaced by linear operators on the Hilbert space. Unlike their classical counterparts, they don't commute: the commutators - differences of the type XP-PX - are not zero but they are other operators multiplied by coefficients proportional to Planck's constant.
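To make the non-commutativity concrete, here is a small numerical sketch of my own (not part of the original answer; hbar and the truncation size are chosen by hand): position and momentum represented as matrices in a truncated harmonic-oscillator basis, with XP-PX coming out as i.hbar times the identity matrix up to truncation artifacts.

```python
# A minimal numerical illustration (not from the post): position and momentum
# represented as matrices in a truncated harmonic-oscillator basis. Their
# commutator XP - PX is (approximately) i*hbar times the identity matrix,
# up to a truncation artifact in the last diagonal entry.
import numpy as np

hbar = 1.0          # work in units where hbar = m = omega = 1
N = 8               # dimension of the truncated Hilbert space

n = np.arange(1, N)
a = np.diag(np.sqrt(n), k=1)          # annihilation operator, a|n> = sqrt(n)|n-1>
adag = a.conj().T                     # creation operator

X = np.sqrt(hbar / 2.0) * (a + adag)          # position operator
P = 1j * np.sqrt(hbar / 2.0) * (adag - a)     # momentum operator

commutator = X @ P - P @ X
print(np.round(commutator / (1j * hbar), 3))  # ~ identity, except the last diagonal entry
```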
All the information about the system is encoded in the wave function that evolves according to Schrödinger's equation. Interference - or, equivalently, probing all possible trajectories or histories - is possible and all of them contribute to the "probability amplitude" which is a complex number associated with the pair of initial and final states. The squared absolute value of this amplitude gives the probability (or its density).
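A toy calculation of my own (the amplitudes and phases are made up) showing what "contributing to the probability amplitude" means in practice: the complex amplitudes for two histories are added first and only then squared, so the resulting probability differs from the naive sum of the two separate probabilities.

```python
# Toy two-path interference (illustrative numbers of my own): amplitudes for the
# two histories are added as complex numbers and only then squared, so the
# resulting probability differs from the sum of the individual probabilities.
import numpy as np

amp_path_1 = 0.5 * np.exp(1j * 0.0)            # amplitude via the first slit
amp_path_2 = 0.5 * np.exp(1j * np.pi / 3.0)    # amplitude via the second slit

p_quantum = abs(amp_path_1 + amp_path_2) ** 2              # amplitudes interfere first
p_classical = abs(amp_path_1) ** 2 + abs(amp_path_2) ** 2  # no interference term

print(p_quantum, p_classical)   # 0.75 vs 0.5 for these particular phases
```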
Quantum mechanics doesn't allow one to imagine that physical objects have any objective properties prior to the measurement; it remembers all correlations between subsystems (the wave function is a function of all possible configurations, not just a collection of separate wave functions for each object); it predicts correlations that can be stronger (or just plain different) than those in classical physics; it doesn't need any action at a distance to achieve them because it is not classical physics; it predicts that many operators such as the energy (Hamiltonian) or the angular momentum often or always have quantized allowed values (mathematically represented as the eigenvalues of the operators); and it predicts the wave-particle duality, quantum tunneling, and many other phenomena that are major achievements of 20th century physics.
The theories respecting the general postulates of quantum mechanics as well as those of special relativity must be either quantum field theories or vacua of string theory.
See also a file about interpretation of quantum mechanics and entanglement.
4. Spyder: What happened to string theory? Or perhaps, what is happening in string theory?

It depends on the time scale.
After having been discovered in the late 1960s (as a theory of the strong nuclear force, an application that was soon abandoned but partly revived in the late 1990s), having evolved into its viable supersymmetric version in the 1970s (when it was also shown, for the first time, that it was a theory of quantum gravity), after the first superstring revolution in the 1980s (which showed that string theory had no anomalies and contained particles and forces of all types required to describe everything we have observed), and the duality revolution in the mid 1990s (which showed a complete unity of all versions of string theory, identified an important new 11-dimensional limit without strings called M-theory, and appreciated new multidimensional objects called branes, among other things), string theory has gone through a couple of extra developments in the most recent decade and years.
After the observation of the cosmological constant in 1998, quantum cosmology became a hot topic within string theory. It has led to almost no important yet convincing conclusions except for the proof that despite its unity, string theory predicts a large number of metastable possible universes, the so-called "landscape". Their existence is pretty much indisputable by now. However, whether all of the "vacua" in the landscape are physically realized - probably in an eternally inflating "multiverse" - remains an open question. An even more open question is whether this multiverse, if real, is at all helpful in identifying which vacuum we live in - i.e. whether the inflating pre-history before the Big Bang has any observable consequences.
Even in the absence of the knowledge of the exact vacuum we inhabit, string theory makes lots of universal predictions such as the exact validity of the equivalence principle and local Lorentz symmetry, as well as "specific" stringy predictions such as "gravity is the weakest force" and other statements that distinguish the landscape of string theory from the so-called "swampland" of effective field theories that are inconsistent by string-theoretical standards.
Meanwhile, string theory has recently (last 3 years) gone through the membrane minirevolution. New, previously mysterious types of field theories in 3 dimensions related to membranes in M-theory were identified and studied. Those new theories have somewhat demystified the new special features by which objects in M-theory differ from objects in perturbative string theory (the latter includes D-branes with ordinary Yang-Mills gauge theories).
Also, a new scenario for the qualitative shape of the extra dimensions in realistic compactifications has emerged in the form of F-theory phenomenology - intensely developed in the recent 4 years or so. It's type IIB string theory with a hugely variable dilaton-axion field and many singular "fibers", which allows an irresistibly unique and predictive description using algebraic geometry. Even in 2010, it was a hot area of research.
It's been settled that string theory predicts the right thermodynamics of black holes - in agreement with the macroscopic considerations of Bekenstein and Hawking - as the right entropy and other quantities have been checked for ever wider classes of black holes, including many non-supersymmetric ones, toroidal ones, four-dimensional ones, and so on. The list now (because of advances in the last 2 years) includes four-dimensional extremal rotating black holes of the kind that can be seen through telescopes: the proof that such black holes carry the right entropy may be phrased independently of string theory, by selecting just a couple of the stringy mathematical tools (especially the right Virasoro algebra).
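For orientation, the Bekenstein-Hawking entropy mentioned here is S = k.A.c^3/(4.G.hbar), where A is the horizon area; a quick evaluation of my own for a solar-mass Schwarzschild black hole, using standard constants (not a number taken from the post):

```python
# Bekenstein-Hawking entropy S = k*A*c^3/(4*G*hbar) for a Schwarzschild black hole,
# evaluated for one solar mass (standard constants; the example is mine, not from the post).
import math

G = 6.674e-11       # m^3 kg^-1 s^-2
c = 2.998e8         # m/s
hbar = 1.055e-34    # J s
M_sun = 1.989e30    # kg

r_s = 2 * G * M_sun / c**2                 # Schwarzschild radius, ~3 km
area = 4 * math.pi * r_s**2                # horizon area
S_over_k = area * c**3 / (4 * G * hbar)    # entropy in units of Boltzmann's constant

print(f"{S_over_k:.2e}")   # roughly 1e77, vastly more than the Sun's ordinary entropy
```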
The holographic description of quantum gravity, which exploded with Maldacena's 1997 discovery of AdS/CFT, is reaching ever more distant subdisciplines of physics, showing that these seemingly remote phenomena may be described in terms of stringy physics with black holes in a higher-dimensional curved spacetime. In recent years, this description became very successful for non-Fermi liquids and superconductors - which were added to the previous (and still developing) successes in the case of the quark-gluon plasma.
I could continue the general review for some time but other questions are waiting, too.
5. Magnus: The difference between electrostatics and magnetism.

In the classical theory of electromagnetism, each point in space is equipped with a vector (arrow plus magnitude) remembering the strength of the electric field, E, and a similar vector, B, that remembers the strength of the magnetic field. Hendrik Lorentz was the first person who fully realized that only these two vectors are independent (and H, D are derived quantities in a given material).
The values of these two vector-valued fields are completely independent at each point but the origin of these two fields is not independent. Relativity allows one to derive the existence of one from the other. In a reference frame that is moving, a part of the original B field is transformed into a new E field and vice versa.
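Explicitly, for a boost with velocity v along the x axis, the standard transformation reads E'_x = E_x, B'_x = B_x, E'_y = gamma.(E_y - v.B_z), E'_z = gamma.(E_z + v.B_y), B'_y = gamma.(B_y + v.E_z/c^2), B'_z = gamma.(B_z - v.E_y/c^2). A small sketch of my own (the field values are illustrative) showing a pure magnetic field acquiring an electric component in a moving frame:

```python
# Standard Lorentz transformation of E and B for a boost with velocity v along x
# (a sketch of my own, illustrating the text): a pure B field in one frame
# acquires a nonzero E field in the boosted frame.
import numpy as np

c = 2.998e8                      # speed of light, m/s

def boost_fields_x(E, B, v):
    """Transform field vectors (E, B) to a frame moving with speed v along x."""
    gamma = 1.0 / np.sqrt(1.0 - (v / c) ** 2)
    Ex, Ey, Ez = E
    Bx, By, Bz = B
    E_new = np.array([Ex,
                      gamma * (Ey - v * Bz),
                      gamma * (Ez + v * By)])
    B_new = np.array([Bx,
                      gamma * (By + v * Ez / c**2),
                      gamma * (Bz - v * Ey / c**2)])
    return E_new, B_new

E0 = np.array([0.0, 0.0, 0.0])   # no electric field in the original frame
B0 = np.array([0.0, 0.0, 1.0])   # 1 tesla along z

E1, B1 = boost_fields_x(E0, B0, v=0.5 * c)
print(E1)   # a nonzero E_y appears: approximately [0, -1.7e8, 0] V/m
print(B1)   # B_z grows by the factor gamma ~ 1.155
```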
There is indeed a symmetry between E and B in all the equations governing the electromagnetic field: for example, a changing electric field produces some magnetic fields, and vice versa.
The only difference between electric and magnetic fields in their qualitative macroscopic manifestations is that there exist electric charges but there are no "magnetic monopoles" - which would be sources of the magnetic fields whose arrows would be everywhere incoming (or everywhere outgoing).
In fact, only magnetic dipoles can be found in Nature: the North pole of a magnet always comes together with a South pole and they cannot be separated. Consequently, only the "electrically charged" sources (and their currents) appear in Maxwell's equations.
However, advanced particle physics (and especially quantum gravity) seems to make it likely that the magnetic monopoles have to exist, too. For example, a region near the heavy North pole of a magnet may collapse into a black hole and the resulting black hole should act as a magnetic monopole. There should therefore exist states for pretty much every value of the magnetic monopole charge, up to the quantization condition. However, it seems likely that the lightest magnetic monopoles - particles with this new kind of charge - are extremely heavy, about 10^{15} times heavier than the proton, not too far from the Planck mass (i.e. from the mass of the lightest black hole that deserves the name). There are not too many of them in the visible Universe and it's hard to artificially produce them, too.
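To put the quoted mass scale in perspective, a trivial back-of-the-envelope comparison (my arithmetic, not the post's):

```python
# Comparing the quoted monopole mass scale (~1e15 proton masses) with the
# Planck mass - a trivial back-of-the-envelope check of my own.
m_proton_GeV = 0.938              # proton mass in GeV/c^2
m_planck_GeV = 1.22e19            # Planck mass in GeV/c^2

m_monopole_GeV = 1e15 * m_proton_GeV    # the mass scale quoted in the text
print(m_monopole_GeV)                    # ~ 9.4e14 GeV
print(m_planck_GeV / m_monopole_GeV)     # ~ 1.3e4, i.e. "not too far" on a logarithmic scale
```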
But returning to more basic physics, a frequent layman's error is to imagine that the electric and magnetic fields are "the same". They are surely not the same: static magnets don't attract static electric charges and vice versa. The opposite mistake is to believe that electricity and magnetism have nothing to do with one another: they actually have a common origin that is manifested e.g. by electromagnetic induction (when magnets are moving, an electric field appears) and by electromagnets themselves (the opposite effect: magnetic fields are created by moving electric charges).
By the way, some theories that differ from the theories describing the real world but that are closely related mathematically - e.g. the N=4 supersymmetric Yang-Mills theory - admit an exact symmetry exchanging the electric and magnetic fields, the so-called S-duality.
6. Onymous: Decoherence.

Decoherence, discussed in some detail here, is the explanation of the emergence of approximately valid classical reasoning in a world that is fully quantum mechanical (such as ours). The emergence of the classical limit doesn't mean that probabilities go away: quantum mechanics is probabilistic, so the exact predictions are never quite deterministic.
However, the probabilities that are predicted after decoherence may be interpreted as ordinary "classical" probabilities of the same kind that one encounters in Liouville's equation for the distribution function on the phase space in classical physics. In particular, these probabilities don't interfere.
Decoherence is the loss of all information about the relative phases of the quantum amplitudes and a process that dynamically picks a privileged "classically observable" basis of the Hilbert space. How does it work?
One describes the system in terms of its density matrix - a combination of psi.psi* tensor products of the wave functions, weighted by their probabilities. The density matrix is a direct quantum counterpart of the distribution function on the phase space. This density matrix "rho" evolves according to a generalized Schrödinger's equation, the von Neumann equation, in which the commutator appears on the right-hand side: i.hbar.d(rho)/dt = [H,rho].
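As a sanity check of this evolution law (a sketch of my own, with an arbitrary 2x2 Hamiltonian), one may verify numerically that rho(t) = U.rho.U† obeys the commutator equation above:

```python
# A tiny check of my own: for rho(t) = U(t) rho U(t)^dagger with U = exp(-i H t / hbar),
# the numerical time derivative of rho at t=0 matches [H, rho] / (i hbar).
import numpy as np
from scipy.linalg import expm

hbar = 1.0
H = np.array([[1.0, 0.3], [0.3, -1.0]])          # some Hermitian Hamiltonian
psi = np.array([1.0, 1.0]) / np.sqrt(2.0)
rho = np.outer(psi, psi.conj())                  # pure-state density matrix

dt = 1e-6
U = expm(-1j * H * dt / hbar)
drho_dt = (U @ rho @ U.conj().T - rho) / dt      # numerical time derivative
von_neumann_rhs = (H @ rho - rho @ H) / (1j * hbar)

print(np.allclose(drho_dt, von_neumann_rhs, atol=1e-5))   # True
```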
In the description of decoherence, one divides the physical system into the interesting degrees of freedom we can fully observe and the rest that we don't - the "environment".
However, if we can't keep track of all the degrees of freedom, especially the "environmental" ones, all predictions for our system may be fully extracted from the density matrix that is "traced over" those environmental degrees of freedom; this density matrix only lives in the Hilbert space of the interesting object (tensor-multiplied by its conjugate). Every time we trace over the "environmental" degrees of freedom, the density matrix for the interesting (i.e. studied) system becomes more diagonal. That's because the states of the environment onto which the states of the interesting system are "imprinted" are orthogonal, and the off-diagonal elements are proportional to the inner products of the environmental states. Well, they are orthogonal after we wait for a while, and if we pick a good enough basis for the interesting system (it's typically the "natural" basis of states you can interpret easily).
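A minimal sketch of the mechanism (mine, with made-up amplitudes and environment states): a qubit whose two basis states get imprinted onto two environment states; after the partial trace, the off-diagonal element of the reduced density matrix is multiplied by the overlap of the two environment states, so nearly orthogonal environment states mean a nearly diagonal reduced matrix - and with many such records the suppression factors multiply, which is the avalanche described in the next paragraph.

```python
# A minimal sketch of my own: the "system" qubit states |0> and |1> get correlated
# with two environment states |e0> and |e1>.  After tracing the environment out,
# the off-diagonal element of the system's density matrix is proportional to the
# overlap <e1|e0>; orthogonal environment states kill it completely.
import numpy as np

def reduced_density_matrix(c0, c1, e0, e1):
    """State c0|0>|e0> + c1|1>|e1>, traced over the environment."""
    rho = np.zeros((2, 2), dtype=complex)
    rho[0, 0] = abs(c0) ** 2
    rho[1, 1] = abs(c1) ** 2
    rho[0, 1] = c0 * np.conj(c1) * np.vdot(e1, e0)   # suppressed by the overlap <e1|e0>
    rho[1, 0] = np.conj(rho[0, 1])
    return rho

c0, c1 = 0.6, 0.8j                             # the "dead" and "alive" amplitudes
e_same = np.array([1.0, 0.0])                  # environment has not noticed anything
e_other = np.array([0.2, np.sqrt(1 - 0.04)])   # environment state after "noticing"

print(reduced_density_matrix(c0, c1, e_same, e_same))    # off-diagonals survive
print(reduced_density_matrix(c0, c1, e_same, e_other))   # off-diagonals suppressed by 0.2
# with N such environmental records, the suppression factor would be 0.2**N
```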
The off-diagonal elements of the density matrix typically go to zero schematically as exp(-exp(t)), with various coefficients everywhere. That's a much faster decrease than an ordinary exponential one. This expo-exponential decrease holds because the number of degrees of freedom into which the interesting system gets imprinted by the mutual interactions grows exponentially with time - like exp(t), if you wish (an avalanche!) - and each degree of freedom reduces the off-diagonal element by a factor "u" smaller than one, so it's "u^exp(t)" or "exp(-exp(t))".
Even for a relatively empty and cool environment and microscopic objects, decoherence is rapid. Even interactions with the weak cosmic microwave background are sufficient to decohere the position of a tiny speck of dust within a tiny fraction of a second. As a result, the density matrix for the speck of dust abruptly becomes diagonal in a privileged basis - essentially the position basis in this case - and the diagonal entries of the density matrix are directly interpreted as the classical probabilities of the corresponding states.
The result is still probabilistic but these probabilities may be treated classically. In particular, we can now imagine that a particular state was chosen before the measurement - even if we don't know which one it was. By making this assumption, we can derive no contradictions (or wrong predictions) because the probabilities are evolving classically after decoherence. This contrasts with the general probabilities in quantum mechanics before decoherence. Effectively, decoherence allows us to think that Schrödinger's cat was in a well-defined state before we opened the box - because it was after it decohered. Of course, quantum mechanics still doesn't allow us to calculate whether the cat was alive or dead with any certainty: decoherence doesn't help with this thing.
Decoherence, fully understood in the mid 1980s and linked e.g. to the name of Wojciech Zurek (review!), answers pretty much all physically meaningful questions that were left open after the Copenhagen interpretation, especially the nature of the classical-quantum boundary. The classical-quantum boundary appears at the time scale when decoherence starts to act strongly.
It seems likely that Niels Bohr (and maybe others) actually understood the logic of decoherence already back in the 1920s but the communication skills of the guru of the Copenhagen school were not refined enough for him to express his understanding too clearly. That's partly why the founding fathers continued to use their "phenomenological" rules for what constituted a "classical object" or a "measuring apparatus" even though all these assumptions may be derived from a fully quantum mechanical framework.
In particular, decoherence also explains why we don't ever observe a superposition of dead and alive Schrödinger cats: the states of the environment induced by the dead and the alive cat, respectively, are orthogonal to each other, so that's the basis in which the density matrix gets diagonalized. The relative phase between the "dead" and "alive" basis vectors is quickly forgotten, so the question about the probability of seeing e.g. "0.6 dead + 0.8.i.alive" is ill-posed (note that a relative phase between 0.6 and 0.8 was specified but the density matrix knows nothing about it), as e.g. the Consistent Histories formalism explains in some extra detail.
7. Viggen: Two things: Occam’s Razor and the difference between a hypothesis and a theory. Maybe a bit more generally than you’re asking, I would suggest that the thing most needed by the general audience is an understanding of the difference in philosophy that makes science different from religion. There are a lot of claims made by people in our world that are confused for scientific by laymen mainly because people don’t really understand the difference between something that “sounds” like science and something that _is_ science. There are a lot of cool things I’ve seen in my years of studying sciences, but weird, cool details are sort of lost on common people if they are just as weird and maybe less comprehensible in coolness than some internet inspired Hollyweird fantasy.

Occam's razor is a heuristic rule attributed to the 14th century English logician William of Ockham: concepts shouldn't be multiplied unless it is necessary. It means that hypotheses or theories with fewer arbitrarily invented assumptions and mechanisms - fewer arbitrary wheels and gears - are preferred over contrived ones, assuming that both agree with the observed data.
While this principle was originally viewed as a part of intuition or good taste that couldn't be proved, one may actually argue that this rule is "statistically" valid. According to the Bayesian inference, we have to consider a contrived theory together with many ("N") similar theories that make an equal number of choices and have an approximately equal number of arbitrary wheels and gears. The whole set of all these "similar" hypotheses should be counted as one qualitative theory that is on par with a simpler theory preferred by Occam's razor.
However, the probability that one of the contrived theories is correct has to be shared by "N" versions of the contrived theory which is why the probability for each has to be divided by "N". The values of "N" are often exponentially large - the exponential of the number of choices we made - which makes a lot of difference.
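A schematic toy version of this Bayesian counting (my own illustration, with made-up numbers): a contrived hypothesis that makes k arbitrary binary choices is one of N = 2^k equally arbitrary variants, so its prior probability gets divided by N, and even a noticeably better fit usually can't compensate for the exponential penalty.

```python
# A schematic toy version of the Bayesian argument (my own illustration):
# a contrived theory that makes k arbitrary binary choices is one of N = 2**k
# equally plausible variants, so its prior probability gets divided by N.
def posterior_odds(likelihood_simple, likelihood_contrived, k_choices):
    """Odds of the simple theory vs one particular contrived variant."""
    N = 2 ** k_choices                      # number of equally arbitrary variants
    prior_simple = 0.5                      # split the prior evenly between the two camps
    prior_one_variant = 0.5 / N
    return (prior_simple * likelihood_simple) / (prior_one_variant * likelihood_contrived)

# Even if the contrived theory fits the data 10 times better,
# 30 arbitrary choices leave it hopelessly disfavoured:
print(posterior_odds(likelihood_simple=1.0, likelihood_contrived=10.0, k_choices=30))
# ~ 1e8 in favour of the simple theory
```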
In particular, Occam's razor implies that theories that agree with the data but use a smaller number of continuous (and even discrete) parameters are preferred over theories with many more parameters that have to be adjusted to agree with the observations. The probability that a theory with randomly added, unnecessary structures is valid is almost exactly zero.
Both "theory" and "hypothesis" are words that are being used by scientists in a much more refined way than most laymen imagine. Theories and hypotheses are unions of axioms and rules and basic concepts and arguments (and, usually, equations) that are meant to logically explain some phenomena (usually including some technical and quantitative details) - and that can actually convince other scientists that their author has a point. They're not just some "random guesses", "lucky hunches", or "conspiracy theories".
The word "hypothesis" is being used for systems of ideas that are not immediately seen to be invalid but that haven't been established yet, either. When we talk about a "hypothesis", we are very interested in the question whether it is valid or not.
On the other hand, the word "theory" is usually used for "hypotheses" that have already been established as valid, or provisionally valid. However, we must be careful because the word "theory" is also used for systems of concepts, rules, and equations that are just "similar" to some realistic theories but that don't agree with the real observations. For example, we talk about M-theory or Chern-Simons theory even though we know that the real world is neither 10+1-dimensional nor 2+1-dimensional. The idea is that these "theories" are defined by remotely analogous equations and produce predictions - e.g. "correlators of fields" - of the same kind as realistic "theories".
There is one more dichotomy - "theories" and "models". The word "theory" usually refers to a major "framework" - the set of general postulates, tools, and methods that can be refined in many ways. The word "model" usually refers to detailed implementations of a theory in which all the details are chosen in one way or another. There should typically exist many models and none of them should be too important.
None of these terminological rules is blindly obeyed. For example, the term "Standard Model" for the current theory of all empirically known non-gravitational phenomena (and elementary particles) clearly understates the importance and uniqueness of this theory - and many people would prefer a "Standard Theory" instead. However, the "Standard Model" - a name coined by Steven Weinberg who also heroically helped to develop the theory and not just its name :-) - arguably sounds sexier and less pious.
Some of the differences and relationships between science and religion were discussed in the first question - about "how" and "why".
8. Viggen: Don’t get me wrong, Quantum in a nutshell would be cool too. And, if you can give me some hints about Renormalization group, it might help me on my homework. ;-)

Quantum mechanics was discussed in question 3 and the conceptual issues of renormalization were discussed a day ago in Quantum field theory has no problems. This is of course one of the most frequent topics on this blog, one that usually deserves whole postings and not just small fractions of them.
9. No question: just information about tweets

10. BoRon: Space is expanding at an accelerating rate. Galaxies are accelerating away. Is this a coincident value or is the space imparting a force on the galaxies that accelerates them? Sounds like an ether is required.

This new kind of "aether" (a misleading name, as I explain below) responsible for the acceleration is called the dark energy. It is a form of energy density that is not composed of any ordinary particles. In fact, it has no geometric structure. The fact that it differs from ordinary static matter (including dark matter) as well as from radiation can be seen from its pressure. Dark energy has a negative pressure. It's this negative pressure that makes the expansion of the universe accelerate.
Observations indicate that the pressure "p" is equal to "-rho" - or not too far from it - where "rho" is the corresponding energy density. If this relationship is exact, and there are good reasons to think so, the dark energy is almost certainly a "cosmological constant", an extra term in Einstein's equations introduced by their author himself (who later, incorrectly, called the term the greatest blunder of his life) that causes this curvature - and accelerating expansion - of the Universe.
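One can check the "negative pressure accelerates the expansion" claim with a one-line calculation. In standard FRW cosmology, the second Friedmann equation makes the acceleration of the scale factor proportional to -(rho + 3p), so the sign flips once p drops below -rho/3. The sketch below uses illustrative units (constants set to one) and is only meant to display that sign flip.

```python
# Sign of the cosmic acceleration for different equations of state p = w*rho.
# The second Friedmann equation gives a''/a proportional to -(rho + 3p)/2
# (illustrative units with 8*pi*G/3 = c = 1).

def acceleration_sign(rho, w):
    p = w * rho
    return -(rho + 3.0 * p) / 2.0   # proportional to a''/a

print(acceleration_sign(1.0, w=0.0))    # matter, w = 0:   -0.5 -> decelerates
print(acceleration_sign(1.0, w=1/3))    # radiation:       -1.0 -> decelerates
print(acceleration_sign(1.0, w=-1.0))   # dark energy:     +1.0 -> accelerates
```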
The magnitude of this constant is positive and nonzero - but 60-125 orders of magnitude lower than the most straightforward estimates based on particle physics (with or without supersymmetry, respectively), a discrepancy known as the cosmological constant problem (supersymmetry makes the problem numerically smaller but more sharply well-defined).
At this point, unfortunately, the anthropic principle - the assumption that there are many Universes to choose from and that only the special ones with a tiny cosmological constant are ready for life, which is the only reason why the constant is tiny in our Cosmos - is the only known explanation convincing enough to be adopted by a large enough group of physicists. But of course, it's extremely far from being settled as the right explanation.
Finally, the dark energy - or cosmological constant, which is probably the same thing - differs from the luminiferous aether because the aether was believed to be composed of normal matter and to pick a privileged reference frame, thus breaking the Lorentz symmetry of special relativity. On the other hand, the cosmological constant has a stress-energy tensor equal to "rho" times the metric tensor "g_{mn}". Because the metric tensor is the same in all inertial frames, so is the stress-energy tensor for the cosmological constant. The latter therefore preserves the local Lorentz symmetry. In the simplest configuration - empty space - it replaces the global Poincaré symmetry (Lorentz symmetry and translations) by an equally big group of isometries of the de Sitter space, SO(4,1), which is what the empty Universe with a positive cosmological constant looks like (a kind of hyperboloid).
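The frame-independence of "rho times the metric" can be verified numerically. The sketch below (my own check, not from the post) boosts the stress tensor of the cosmological constant and of static dust and confirms that only the former is unchanged, i.e. only the dust singles out a rest frame.

```python
# Check that T^{mn} = rho * eta^{mn} (cosmological constant) is invariant under
# a Lorentz boost, while the stress tensor of static dust is not.
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])      # Minkowski metric, c = 1

def boost_x(v):
    g = 1.0 / np.sqrt(1.0 - v * v)        # Lorentz gamma factor
    L = np.eye(4)
    L[0, 0] = L[1, 1] = g
    L[0, 1] = L[1, 0] = -g * v
    return L

rho = 1.0
T_lambda = rho * eta                      # cosmological constant term
T_dust = np.diag([rho, 0.0, 0.0, 0.0])    # static dust (ordinary matter)

L = boost_x(0.6)
print(np.allclose(L @ T_lambda @ L.T, T_lambda))  # True  -> same in every frame
print(np.allclose(L @ T_dust @ L.T, T_dust))      # False -> picks a rest frame
```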
11. Stray Cat: That evolution does not assert that we evolved from modern day creatures, but instead that they are our distant cousins.

Indeed, our ancestors were not identical to any of the current species because the current species (and their ancestors) have been evolving as well (although, arguably, not as successfully as the humans haha). They're our distant cousins. All species are probably distant cousins of all other species - all of life has a common origin. However, it's still true that if we saw our ancestors who lived 8 million years ago, we would surely say: look, a monkey. ;-) Surely, the monkey would be more similar to some current monkey species than to others.
If we looked even further into the past, we would say: look, an ugly vertebrate. What kind of a structureless quasi-squirrel - whose family has no future :-) - is this guy?
12. Freelancer: Can’t believe I’m the first one here, but, f***ing magnets, they work how, exactly?

Magnets were already asked about by the aptly named Magnus in question 5. It's just a fact about the Universe that each point of space, at each moment, is equipped with a little arrow - a "vector" we usually denote "B". Each point of space therefore remembers the direction and strength of the magnetic field.
The formula for the energy can be seen to contain a term like "-mu.B" where "mu" is the magnetic moment of an object. So every object with a magnetic moment (another vector) tries to orient itself in the direction of the magnetic field. There's a corresponding force acting on the magnet.
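A small sketch of the "-mu.B" term, in arbitrary units of my own choosing: the energy is lowest when the moment points along the field, and a tilted moment feels the familiar aligning torque mu x B.

```python
# Energy and torque of a magnetic moment "mu" in a field "B":
# E = -mu . B is minimized when mu is parallel to B, and the torque mu x B
# vanishes exactly in that aligned configuration.
import numpy as np

B = np.array([0.0, 0.0, 1.0])                    # field along z, arbitrary units

def energy(mu):
    return -np.dot(mu, B)

def torque(mu):
    return np.cross(mu, B)

mu_aligned = np.array([0.0, 0.0, 1.0])
mu_tilted  = np.array([1.0, 0.0, 1.0]) / np.sqrt(2)

print(energy(mu_aligned), torque(mu_aligned))    # -1.0 and zero torque: equilibrium
print(energy(mu_tilted),  torque(mu_tilted))     # higher energy, nonzero torque
```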
A nonzero "mu" may be obtained with electromagnets - a current circulating in a solenoid creates a magnetic field and gives the solenoid a magnetic moment. Elementary particles such as electrons or protons also carry their internal magnets. The electron has a "spin" - in some sense, it rotates around its axis (although the rotation has all the unfamiliar properties dictated by quantum mechanics). Because it's charged and the charge rotates, it behaves just like a fucking electromagnet. The exact strength of the electron's magnetic field may be deduced from Dirac's equation.
Yes, the freedom of speech is great.
Ferromagnets such as iron are able to make it popular among the electrons to spin in the same direction in a region - the magnetic domain. Most of their magnetic field - and their ability to act as a magnetic dipole - comes from the spin of the electrons. A smaller portion of the magnetic properties of ferromagnets comes from the orbital motion of the electrons around the nuclei.
The magnetic fields can't be explained in terms of squirrels or thirsty drunk marines' libido because the magnetic fields are more fundamental than squirrels or thirsty drunk marines' libido. ;-) Magnetic fields simply exist - even in the vacuum - and various objects interact with them. However, one can still say that the magnetic field is not "quite fundamental" according to the newest theories of physics.
In electrodynamics, magnetic fields may be written as "curl A" out of a more fundamental vector field, the vector potential "A" - which is however not uniquely determined, because of a redundancy called the "gauge invariance". The field "A" itself can be written as a combination of two similar, more fundamental fields in the electroweak theory. In string theory, such a field may itself be seen as a condensate of strings (open or closed strings - or branes or other stringy objects) in a particular vibration pattern.
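The relation "B = curl A" and the gauge redundancy can be checked symbolically. The sketch below (using SymPy, with an arbitrarily chosen gauge function chi) computes the curl of the standard vector potential for a uniform field and verifies that adding grad(chi) to A leaves B unchanged.

```python
# B = curl A, and the gauge transformation A -> A + grad(chi) leaves B intact.
import sympy as sp

x, y, z = sp.symbols('x y z')

def curl(A):
    Ax, Ay, Az = A
    return (sp.diff(Az, y) - sp.diff(Ay, z),
            sp.diff(Ax, z) - sp.diff(Az, x),
            sp.diff(Ay, x) - sp.diff(Ax, y))

# vector potential for a uniform field B0 along z: A = (-B0*y/2, B0*x/2, 0)
B0 = sp.symbols('B0')
A = (-B0 * y / 2, B0 * x / 2, 0)
print(curl(A))                                        # (0, 0, B0)

# gauge transformation with an arbitrary function chi(x, y, z)
chi = x**2 * y + sp.sin(z)
grad_chi = (sp.diff(chi, x), sp.diff(chi, y), sp.diff(chi, z))
A_gauge = tuple(Ai + dchi for Ai, dchi in zip(A, grad_chi))
print(tuple(sp.simplify(b) for b in curl(A_gauge)))   # still (0, 0, B0)
```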
As the previous sentences indicate, there is an extra pyramid of advanced physics concepts that derive things like magnetic fields from more fundamental starting points. But when it comes to early 20th century physics, the magnetic fields are among the most fundamental objects of reality, so you shouldn't try to deduce them from anything deeper. Magnetism and its sibling, electricity (see question 5 for some comments about their relationship), are the fundamental phenomena that, on the contrary, explain almost all of chemistry, biology, and engineering. You should use electricity and magnetism to explain more complex phenomena, not the other way around.
13. BoRon: I observe an elliptical galaxy’s redshift. How do I differentiate redshift due to its motion, due to the stretching of space and due to its gravitation? (Sorry, that’s my 2nd and final question.)

Fundamentally speaking, you can't differentiate them. If you only have one galaxy, the stretching of space is physically the same thing as the relative motion of galaxies whose relative velocities increase with the separation. So the first two sources of the redshift are equivalent, at least for one galaxy.
To see that the Universe is expanding and that the relative motion of the galaxies (first observed by Hubble) is not just due to some local explosion etc., one has to study the motion of many galaxies and their change with time, and use some equations from general relativity to relate them (and/or use the cosmological principle, the assumption that our place is not too special in the Universe). Depending on the distance of a galaxy that can also be estimated otherwise, one can estimate how it should be moving because of the expansion of space, and the actual motion determined from the Doppler shift minus the expected motion from the expansion may be interpreted as the "individual" motion of the particular galaxy.
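In practice, the subtraction described above is a back-of-the-envelope exercise: with an independent distance estimate, the Hubble law predicts the recession velocity due to expansion, and the leftover is interpreted as the galaxy's own "peculiar" motion. The numbers below are made up for illustration only.

```python
# Separating expansion from "individual" motion via the Hubble law (small-z).
H0 = 70.0                    # Hubble constant in km/s per Mpc (rough value)
c = 299_792.458              # speed of light in km/s

distance_mpc = 100.0         # independently estimated distance of the galaxy
z_observed = 0.024           # measured redshift (made-up number)

v_total = c * z_observed                # ~7195 km/s, small-redshift approximation
v_expansion = H0 * distance_mpc         #  7000 km/s expected from the expansion
v_peculiar = v_total - v_expansion      #  ~195 km/s of the galaxy's own motion

print(round(v_total), round(v_expansion), round(v_peculiar))
```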
Concerning the internal gravitational field of the whole galaxy: in principle, the equivalence principle also says that "motion" and "gravitational field" are ultimately indistinguishable - that is, in fact, all the equivalence principle says.
However, the internal gravitational field of a galaxy is too weak to produce an observable redshift. The gravitational redshift is proportional to the gravitational potential Phi divided by c^2, the squared speed of light, and this ratio is only comparable to one for objects whose gravity is not far from that of black holes (neutron stars are close). You can also say that Phi/c^2 is only close to one if the orbital speed at a fixed distance from the source is close to the speed of light. Galaxies' mass density is too low and very far from a collapse into a black hole - and the stars' orbital velocities are much (1,000 times) smaller than the speed of light. Because the potential goes like v^2, the ratio Phi/c^2 is actually just one part in 1,000,000 - which is the size of the redshift, too.
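The one-part-in-a-million estimate in numbers, using a typical orbital speed of a few hundred km/s as an assumed input:

```python
# Order-of-magnitude estimate of a galaxy's internal gravitational redshift:
# z ~ Phi/c^2 ~ v^2/c^2 for typical stellar orbital speeds.
c = 299_792.458          # speed of light in km/s
v_orbital = 300.0        # typical stellar orbital speed in a galaxy, km/s

z_gravitational = (v_orbital / c) ** 2
print(z_gravitational)   # ~1e-6, negligible next to cosmological redshifts
```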
However, individual sources of light in the galaxy also have their own local gravitational fields. If you were able to observe the sources separately, you could deduce the mass from the gravitational redshift - obtained by removing the redshifts due to motion and expansion. Neutron stars have a huge redshift but not much light escapes from them. Black holes have the "ultimate redshift" - one that goes to infinity for light emitted from points near the "event horizon" (and you can't get photons from the black hole interior at all). Black holes also radiate a thermal Hawking radiation - whose temperature (as measured at infinity) is proportional to the gravitational "acceleration" at the event horizon in the most natural "quantum gravity units".
There are usually methods to estimate the speed, distance, and mass of sources of light we observe.
14. Lin Mu: With the coming climate circus in the congress. We need to know more about peer review, & how we know, what we know. We need clear unambiguous statements about how Science comes to consensus, and how it deals with junk.

Peer review is a quality control mechanism. The referee, a "peer" of the author(s), is often able to find errors (or inconsistencies with the known observations, data, or established theories) that the author(s) have neglected. Well, it's not guaranteed that he or she can do it, either. If it works, the average quality of the published papers is higher than the average quality of the submitted papers.
Of course, a peer-reviewed paper is not guaranteed to be right - and it is highly questionable whether throughout the history of science, the "collective" work in science, including peer review, has been more helpful than the "individual" work by scientists who were simply better than others and had no peers.
In some cases, such as the current climate science, peer review doesn't help to improve the quality. (There are many historical analogies that were arguably even more brutal - such as the institutionalized censorship of genetics in the Soviet Union or the harassment of relativity - renamed as Jewish Pseudoscience - in Nazi Germany.) Instead, it helps to impose ideological and other biases that may be interpreted as systematic errors that are efficiently spread in almost all of professional literature. In this case, peer review mostly acts as a filter that helps to remove inconvenient insights, that slows the progress down, and that highlights convenient junk over important but inconvenient findings.
Whether science comes to a "consensus" means absolutely nothing according to the rules of science themselves. Only actual scientific arguments - observations of the relevant phenomena and verified theories and equations that describe them - are relevant as arguments in science. So a majority of scientists will converge to the right opinion for the right reasons only if the relevant theories and arguments supporting them have already been found and if a majority of the community is informed, educated, clever, and impartial enough to appreciate these theories and arguments. As you can see, the outcome depends not only on the scientific facts but also on the abilities - and, indeed, moral qualities - of the researchers.
All findings that have become really "settled" can be defended by very particular arguments or papers that everyone can follow, at least in principle. If such papers or arguments don't exist, it almost always means that references to authorities or consensus are nothing else than propaganda. The historical record of "consensuses" that were not supported by valid proofs - or at least strong evidence - is very poor, too. The case of "Jewish pseudoscience" is a loud warning.
Even if "consensus" mattered, there is no universal method how "science reaches the consensus". The specific mechanism depends on many sociological details of the environment, habits, channels of interactions, and the personal preferences of scientists.
Ideally, science is able to throw away junk. Theories that may be falsified should be abandoned. Of course, this is only true if the real scientists resemble ideal scientists - ideally honest and sufficiently bright researchers. It is virtually impossible for defenders of various predetermined "consensuses" to belong to this group.
15. Ben: The spin 1/2 system in quantum mechanics. See Griffiths’ Introduction to QM, 2nd edition, Section 4.4, pg. 188-189. Since it’s a single particle, it’s easy enough to understand and to appreciate the weirdness resulting from the fact that Sx and Sz don’t commute.

I agree that the spin of spin-1/2 particles is a great system to learn the subtleties of quantum mechanics. For example, Sidney Coleman's Quantum mechanics in your face talk and my blog entry about it have explained the GHZM state linking the spins of three electrons. Quantum mechanics predicts exactly the opposite correlations of certain kinds than classical physics does. (Read the blog entry, click.)
The three components of the angular momentum generate a Lie algebra isomorphic to that of SO(3). That's true for spin-1/2 particles, too. In the conventional basis, the three generators are proportional to the three Pauli matrices and they actually generate the group SU(2), the double cover of the rotation group SO(3). The commutator of two different Pauli matrices is +2i or -2i times the third one. The angular momentum is hbar/2 times the Pauli matrices, so the commutator of two components of the spin is +-i.hbar times the third component.
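The algebra is easy to verify numerically. The sketch below (natural units, hbar set to one for the check) confirms that [sigma_x, sigma_y] = 2i sigma_z and therefore [S_x, S_y] = i hbar S_z for S_i = (hbar/2) sigma_i.

```python
# Verifying the spin-1/2 commutation relations with the Pauli matrices.
import numpy as np

hbar = 1.0                                     # natural units for the check
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

Sx, Sy, Sz = (hbar / 2) * sx, (hbar / 2) * sy, (hbar / 2) * sz

def comm(A, B):
    return A @ B - B @ A

print(np.allclose(comm(sx, sy), 2j * sz))          # [sigma_x, sigma_y] = 2i sigma_z
print(np.allclose(comm(Sx, Sy), 1j * hbar * Sz))   # [S_x, S_y] = i hbar S_z
```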
The spin is one of the simplest quantum numbers that can't be "beables" in a would-be Bohmian model intended to replace quantum mechanics. It means that particles never carry any "well-defined" classical information about the spin - unlike the information about position. Clearly, if they also carried a bit remembering the polarization of the spin with respect to a particular axis, we would break the rotational invariance.
However, this complete "forgetting" of the spin is enough to show that the Bohmian model runs into trouble because all quantum mechanical systems can actually be arbitrarily accurately approximated by a quantum computer that is only composed of qubits - i.e. of spins of spin-1/2 particles. The Bohmian model for such a spin-only quantum mechanical device wouldn't contain any "beables", just the wave function, which could therefore never be measured by the Bohmian "ontological" measurements. Proper quantum mechanics can never segregate "primitive" and "contextual" observables.
In the same way, the Bohmian approach is also incompatible with special relativity, locality, and particle creation or annihilation which are other typical features of quantum field theories.
The text above has only partially answered 15 questions or so. I don't have the time to answer the remaining 50+ questions at this point, sorry.
And this is my apology. :-)