Many Christians, when giving the Teleological Argument, will cite fine-tuning parameters for universal constants - e.g., if force x were modified by one part in 10^20, life would be impossible.
However, they often neglect to cite where these numbers come from, even in footnotes. This article attempts to remedy that by quoting original sources, written by qualified experts in their fields, detailing the fine-tuning of the universe.
Also see Appendix I for brief responses to common Atheist rebuttals to the universe's fine-tuning.
Stephen Hawking - A Brief History of Time - Chapter 8
Why did the universe start out with so nearly the critical rate of expansion that separates models that recollapse from those that go on expanding forever, that even now, ten thousand million years later, it is still expanding at nearly the critical rate? If the rate of expansion one second after the big bang had been smaller by even one part in a hundred thousand million million, the universe would have recollapsed before it ever reached its present size.
Martin Rees - Just Six Numbers - Chapter 1, pg. 2
The cosmos is so vast because there is one crucially important huge number N in nature, equal to 1,000,000,000,000,000,000,000,000,000,000,000,000. This number measures the strength of the electrical forces that hold atoms together, divided by the force of gravity between them. If N had a few less zeros, only a short-lived miniature universe could exist: no creatures could grow larger than insects, and there would be no time for biological evolution.
Another number, ε, whose value is 0.007, defines how firmly atomic nuclei bind together and how all the atoms on Earth were made. Its value controls the power from the Sun and, more sensitively, how stars transmute hydrogen into all the atoms of the periodic table. Carbon and oxygen are common, whereas gold and uranium are rare, because of what happens in the stars. If ε were 0.006 or 0.008, we could not exist.
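As a cross-check on Rees's N (mine, not from the book), the number can be recovered from textbook constants. Here is a minimal Python sketch, using the electric and gravitational forces between two protons as a stand-in for Rees's forces between atoms:

```python
# Ratio of electric to gravitational force between two protons,
# a common proxy for Rees's N. Constants are CODATA values (rounded).
k_e = 8.988e9      # Coulomb constant, N m^2 / C^2
e   = 1.602e-19    # elementary charge, C
G   = 6.674e-11    # gravitational constant, N m^2 / kg^2
m_p = 1.673e-27    # proton mass, kg

N = (k_e * e**2) / (G * m_p**2)
print(f"N ~ {N:.1e}")   # ~1.2e36, i.e. 1 followed by 36 zeros
```

The distance between the two particles cancels out of the ratio, which is why N is a pure number.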
Martin Rees - Just Six Numbers - Chapter 6, pg. 85
Suppose that, for every 10^9 quark-antiquark pairs, such an asymmetry had led to one extra quark. As the universe cooled, antiquarks would all annihilate with quarks, eventually giving quanta of radiation. This radiation, now cooled to very low energies, constitutes the 2.7 degree background heat pervading intergalactic space. But for every billion quarks that were annihilated with antiquarks, one would survive because it couldn't find a partner to annihilate with. There are indeed more than a billion times more radiation quanta (photons) in the universe than there are protons (412 million photons in each cubic metre, compared with about 0.2 protons). So all the atoms in the universe could result from a tiny bias in favour of matter over antimatter. We, and the visible universe around us, may exist only because of a difference in the ninth decimal place between the numbers of quarks and of antiquarks.
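A quick arithmetic check of Rees's figures (my sketch, not his):

```python
# Rees quotes ~412 million photons and ~0.2 protons per cubic metre;
# the ratio should land a little above a billion, matching the
# "one extra quark per billion pairs" asymmetry.
photons_per_m3 = 412e6
protons_per_m3 = 0.2
print(f"photons per proton ~ {photons_per_m3 / protons_per_m3:.1e}")  # ~2.1e9
```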
Martin Rees - Just Six Numbers - Chapter 6, pg. 88-89
In this perspective, it looks surprising that our universe was initiated with a very finely tuned impetus, almost exactly enough to balance the decelerating tendency of gravity. It's like sitting at the bottom of a well and throwing a stone up so that it just comes to a halt exactly at the top - the required precision is astonishing: at one second after the Big Bang, Ω (the cosmic density parameter) cannot have differed from unity by more than one part in a million billion (one in 10^15) in order that the universe should now, after ten billion years, be still expanding and with a value of Ω that has certainly not departed wildly from unity.
We have already noted that any complex cosmos must incorporate a 'large number' N reflecting the weakness of gravity, and must also have a value of ε that allows nuclear and chemical processes to take place. But these conditions, though necessary, are not sufficient. Only a universe with a 'finely tuned' expansion rate can provide the arena for these processes to unfold. So Ω must be added to our list of crucial numbers. It had to be tuned amazingly close to unity in the early universe. If expansion was too fast, gravity could never pull regions together to make stars or galaxies; if the initial impetus were insufficient, a premature Big Crunch would quench evolution when it had barely begun.
Martin Rees - Just Six Numbers - Chapter 8, pg. 115
If Q (amplitude of the primordial density fluctuations) were smaller than 10^-6, gas would never condense into gravitationally bound structures at all, and such a universe would remain forever dark and featureless, even if its initial 'mix' of atoms, dark matter and radiation were the same as in our own.
On the other hand, a universe where Q were substantially larger than 10^-5 - where the initial 'ripples' were replaced by large-amplitude waves - would be a turbulent and violent place. Regions far bigger than galaxies would condense early in its history. They wouldn't fragment into stars but would instead collapse into vast black holes, each much heavier than an entire cluster of galaxies in our universe. Any surviving gas would get so hot that it would emit intense X-rays and gamma rays. Galaxies (even if they managed to form) would be much more tightly bound than the actual galaxies in our universe. Stars would be packed too close together and buffeted too frequently to retain stable planetary systems. (For similar reasons, solar systems are not able to exist very close to the centre of our own galaxy, where the stars are in a close-packed swarm compared with our less central locality.)
Leonard Susskind - The Cosmic Landscape - Chapter 2
The vacuum energies of fermions and bosons do not cancel, and the bottom line is that our best theory of elementary particles predicts vacuum energy whose gravitational effects would be vastly too large. We don't know what to make of it. Let me put the magnitude of the problem in perspective. Let's invent units in which 10^116 joules per cubic centimeter is called one Unit. Then each kind of particle gives a vacuum energy of roughly a Unit. The exact value depends on the mass and other properties of the particle. Some of the particles give a positive number of Units, and some negative. They must all add up to some incredibly small energy density in Units. In fact a vacuum energy density bigger than .00000...(110 more zeroes)...00001 Units would conflict with astronomical data. For a bunch of numbers, none of them particularly small, to cancel one another to such precision would be a numerical coincidence so incredibly absurd that there must be some other answer.
Theoretical physicists and observational cosmologists have regarded this problem differently. The traditional cosmologists have generally kept an open mind about the possibility that there may be a tiny cosmological constant. In the spirit of experimental scientists, they have regarded it as a parameter to be measured. The physicists, myself included, looked at the absurdity of the required coincidence and said to themselves (and each other) that there must be some deep hidden mathematical reason why the cosmological constant must be exactly zero. This seemed more likely than a numerical cancellation of 119 decimal places for no good reason. We have sought after such an explanation for almost half a century with no luck.
String theorists are a special breed of theoretical physicist with very strong opinions about this problem. The theory that they work on has often produced unexpected mathematical miracles, perfect cancellations for deep and mysterious reasons. Their view (and it was, until not too long ago, also my view) has been that String Theory is such a special theory that it must be the one true theory of nature. And being true, it must have some profound mathematical reason for the supposed fact that the vacuum energy is exactly zero. Finding the reason has been regarded as the biggest, most important, and most difficult problem of modern physics. No other phenomenon has puzzled physicists for as long as this one. Every attempt, be it in quantum field theory or in String Theory, has failed. It truly is the mother of all physics problems.
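To make Susskind's "incredibly absurd" cancellation concrete, here is a purely illustrative Python sketch (the digits are invented, not physical) of what agreeing to 120 decimal places would mean:

```python
# Two order-one vacuum-energy totals, in Susskind's Units, that happen to
# agree in their first 120 decimal digits. The values below are made up
# purely to illustrate the precision involved.
from decimal import Decimal, getcontext

getcontext().prec = 150   # carry 150 significant digits

bosons   = Decimal("0." + "3" * 120 + "4")  # illustrative positive total
fermions = Decimal("0." + "3" * 120 + "3")  # cancels it to 120 places
print(bosons - fermions)  # 1E-121 - anything coarser conflicts with the data
```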
Leonard Susskind - The Cosmic Landscape - Chapter 2
In any case Weinberg set out to see if he could find a reason why a cosmological constant much bigger than 10^-120 Units would prevent life....
But because these density contrasts were initially so small, even a very tiny amount of repulsion could reverse the tendency to cluster. Weinberg found that if the cosmological constant were just an order of magnitude or two bigger than the empirical bound, no galaxies, stars, or planets would ever have formed!...
If the negative cosmological constant were too large, the crunch would not allow the billions of years necessary for life like ours to evolve. Thus, there is an anthropic bound on negative λ to match Weinberg's positive bound. In fact the numbers are fairly similar. If the cosmological constant is negative, it must also not be much bigger than 10^-120 Units if life is to have any possibility of evolving.
Leonard Susskind - The Cosmic Landscape - Chapter 4
The basic setup looks almost too good to be true. Rather than following a pattern of mathematical simplicity or elegance, the laws of nature seem specially tailored to our own existence. As I have repeatedly said, physicists hate this idea. But as we will see, String Theory seems to be an ideal setup to explain why the world is this way.
- The universe is a fine-tuned thing. It grew big by expanding at an ideal rate. If the expansion had been too rapid, all of the material in the universe would have spread out and separated before it ever had a chance to condense into galaxies, stars, and planets. On the other hand, if the initial expansion had not had a sufficient initial thrust, the universe would have turned right around and collapsed in a big crunch much like a punctured balloon.
- The early universe was not too lumpy and not too smooth. Like the baby bear's porridge, it was just right. If the universe had started out much lumpier than it did, instead of the hydrogen and helium condensing into galaxies, it would have clumped into black holes. All matter would have fallen into these black holes and been crushed under the tremendously powerful forces deep in the black hole interiors. On the other hand, if the early universe had been too smooth, it wouldn't have clumped at all. A world of galaxies, stars, and planets is not the generic product of the physical processes in the early universe; it is the rare and, for us, very fortunate, exception.
- Gravity is strong enough to hold us down to the earth's surface, yet not so strong that the extra pressure in the interior of stars would have caused them to burn out in a few million years instead of the billions of years needed for Darwinian evolution to create intelligent life.
- The microscopic Laws of Physics just happen to allow the existence of nuclei and atoms that eventually assemble themselves into the large “Tinkertoy” molecules of life. Moreover, the laws are just right, so that the carbon, oxygen, and other necessary elements can be “cooked” in first-generation stars and dispersed in supernovae.
Leonard Susskind - The Cosmic Landscape - Chapter 6
The properties of Hoyle's carbon resonance are sensitive to a number of constants of nature, including the all-important fine structure constant. Just a few percent change in its value, and there would have been no carbon and no life. This is what Hoyle meant when he said that “it looks as if a super-intellect has monkeyed with physics as well as with chemistry and biology.”
But again, it would do no good for the nuclear physics to be “just right” if the universe had no stars. Remember that a perfectly homogeneous universe would never give birth to these objects. Stars, galaxies, and planets are all the result of the slight lumpiness at the beginning. Early on, the density contrast was about 10^-5 in magnitude, but what if it had been a little bigger or a little smaller? If the lumpiness had been much less, let's say, 10^-6, in the early universe, galaxies would be small and the stars, very sparse. They would not have had sufficient gravity to hang on to the complex atoms that were spewed out by supernovae; these atoms would have been unavailable for the next generation of stars. Make the density contrast a little less than that, and no galaxies or stars would form at all.
What would happen if the lumpiness were larger than 10^-5? A factor of one hundred larger, and the universe would be full of violent, ravenous monsters that would swallow and digest galaxies before they were even finished forming.
Leonard Susskind - The Cosmic Landscape - Chapter 6
There is a lot more. The laws of particle physics include the requirement that every particle has an antiparticle. How then did the universe get to have such a large preponderance of matter over antimatter? Here is what we think happened:
When the universe was very young and hot, it was filled with plasma that contained almost exactly equal amounts of matter and antimatter. The imbalance was extremely small. For every 100,000,000 antiprotons, there were 100,000,001 protons. Then, as the universe cooled, particles and antiparticles combined in pairs and annihilated into photons. One hundred million antiprotons found 100,000,000 partners and, together, they committed suicide, leaving 200,000,000 photons and just 1 leftover proton. These leftovers are the stuff we are made of. Today, if you take a cubic meter of intergalactic space, it will contain about 1 proton and 200,000,000 photons. Without the slight initial imbalance, I would not be here to tell you (who would not be here to read) these things.
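Susskind's bookkeeping can be spelled out explicitly (my sketch; each annihilating pair is counted as yielding two photons, the minimal case):

```python
# One leftover proton per hundred million pairs, and two photons per
# annihilation - reproducing the ~200,000,000 photons per proton quoted.
protons     = 100_000_001
antiprotons = 100_000_000
pairs       = min(protons, antiprotons)
print(2 * pairs, "photons,", protons - pairs, "proton left over")
```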
Paul Davies - The Goldilocks Enigma - Chapter 1
Now, it happens that to meet these various requirements, certain stringent conditions must be satisfied in the underlying laws of physics that regulate the universe, so stringent in fact that a bio-friendly universe looks like a fix - or a “put-up job,” to use the pithy description of the late British cosmologist Fred Hoyle. It appeared to Hoyle as if a superintellect had been “monkeying” with the laws of physics. He was right in his impression. On the face of it, the universe does look as if it has been designed by an intelligent creator expressly for the purpose of spawning sentient beings.
Paul Davies - The Goldilocks Enigma - Chapter 7
If the weak (nuclear) force were weaker, the neutrinos would lack the punch to create this explosion. If it were stronger, the neutrinos would react more vigorously with the stellar core and wouldn't escape to deliver their blow to the outer layers. Either way, the dissemination of carbon and other heavy elements needed for life via this process would be compromised....
The upshot of these various nuclear considerations, then, is that had the weak force been either somewhat stronger or very slightly weaker, the chemical makeup of the universe would be very different, with much poorer prospects for life.
Paul Davies - The Goldilocks Enigma - Chapter 7
Let me now turn to the other two forces of nature, gravitation and electromagnetism. How vital are their properties to the life story? It is easy to see why changing their strengths too much would threaten biology. If gravity were stronger, stars would burn faster and die younger: if by some magic we could make gravitation twice as strong, say, then the sun would shine more than a hundred times as brightly. Its lifetime as a stable star would fall from 10 billion to less than 100 million years, which is probably too short for life to emerge and certainly too short for intelligent observers to evolve. If electromagnetism were stronger, the electrical repulsion between protons would be greater, threatening the stability of atomic nuclei....
Carter discovered from the theory of stellar structure that to get both sorts of stars, the ratio of the strengths of the electromagnetic and gravitational forces needs to be very close to the observed value of 10^40. If gravity were a bit stronger, all stars would be radiative and planets might not form; if gravity were somewhat weaker, all stars would be convective and supernovas might never happen. Either way, the prospects for life would be diminished.
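Davies's two numbers for a doubled gravity are mutually consistent, as a quick sketch shows (mine, not his; it assumes only that the Sun's fuel supply is fixed, so lifetime scales inversely with luminosity):

```python
# If doubling gravity brightens the Sun ~100x with the same fuel reserve,
# the main-sequence lifetime shrinks by the same factor.
lifetime_yr = 10e9   # ~10 billion years, as quoted
brightening = 100    # "more than a hundred times", as quoted
print(f"new lifetime < {lifetime_yr / brightening:.0e} years")  # < 1e8 (100 Myr)
```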
Paul Davies - The Goldilocks Enigma - Chapter 7
To give you a feeling for what I am talking about, the ratio of the mass of the proton to that of the electron is 1,836.1526675 - an utterly mundane number. The neutron-to-proton mass ratio is 1.00137841870, which looks equally uninspiring. Physically, it means that the proton has very nearly the same mass as the neutron, which, as we have already seen, is about 0.1 percent heavier. Is this important? Indeed it is, and not just in determining the ratio of hydrogen to helium in the universe. The fact that the neutron's mass is coincidentally just a little bit more than the combined mass of the proton, electron, and neutrino is what enables free neutrons to decay. If the neutron were even very slightly lighter, it could not decay without an energy input of some sort. If the neutron were lighter still, yet only by a fraction of 1 percent, it would have less mass than the proton, and the tables would be turned: isolated protons, rather than neutrons, would be unstable. Then protons would decay into neutrons and positrons, with disastrous consequences for life, because without protons there could be no atoms and no chemistry.
Cosmology provides more remarkable examples of fine-tuning. As I have discussed, the cosmic microwave background radiation is embellished with all-important ripples or perturbations, echoes of the seeds of the large-scale structure of the universe. These seeds, remember, are thought to originate in quantum fluctuations during inflation. Numerically, the variations are small: about one part in a hundred thousand, a quantity that cosmologists denote by the letter Q. Now, if Q were smaller than one hundred-thousandth - say, one millionth - this would severely inhibit the formation of galaxies and stars. Conversely, if Q were bigger - one part in ten thousand or more - galaxies would be denser, leading to lots of planet-disrupting stellar collisions. Make Q too big and you'd form giant black holes rather than clusters of stars. Either way, Q needs to sit in a rather narrow range to make possible the formation of abundant, stable, long-lived stars accompanied by planetary systems of the type we inhabit.
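The mass budget behind Davies's neutron-decay point above checks out against standard particle masses (my sketch, using rounded PDG values in MeV/c^2):

```python
# Neutron, proton and electron masses in MeV/c^2 (rounded PDG values).
m_n = 939.56542
m_p = 938.27209
m_e = 0.51100    # the antineutrino's mass is negligible here

print(f"n/p mass ratio: {m_n / m_p:.8f}")            # ~1.00137842, as quoted
print(f"decay budget: {m_n - (m_p + m_e):.3f} MeV")  # ~0.782 MeV > 0
# The positive budget is what lets a free neutron decay to p + e + antineutrino;
# shave ~0.1% off the neutron's mass and the budget turns negative.
```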
Paul Davies - The Goldilocks Enigma - Chapter 7
How many knobs are there? The Standard Model of particle physics has about twenty undetermined parameters, while cosmology has about ten. All told, there are over thirty “knobs.” As I have cautioned already, not all the parameters are necessarily independent of the others, and not all require exceptional fine-tuning for life to be possible. But several certainly do: some of the examples I have given demand “knob settings” that must be fine-tuned to an accuracy of less than 1 percent to make a universe fit for life. But even this sensitivity pales into insignificance compared with the biggest fine-tuning riddle of all: dark energy.
Paul Davies - The Goldilocks Enigma - Chapter 7
When the value of dark energy seemed to be zero, it was at least plausible that some yet-to-be-discovered mechanism might operate to force an exact cancellation. But, as Leonard Susskind has stressed, a mechanism that cancels to one part in 120 powers of ten, and then fails to cancel after that, is something else entirely. To give the reader some idea of just how much of a fix this almost-cancellation is, let me write out the number 10^120 in its full glory:
1,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000
So the big fix somehow works brilliantly (if mysteriously) for 119 powers of ten, but fails at the 120th.
Whatever dark energy may be - and it may just be the “natural” energy of empty space - it is dangerous. In fact, it could be the most dangerous stuff known to science. About twenty years ago Steven Weinberg pointed out that if the magnitude of the dark energy were only moderately larger than the observed value, it would have frustrated the formation of galaxies. Galaxies form by the slow aggregation of matter under the action of attractive gravitation. If this tendency were opposed by a strong enough cosmic repulsion force, galaxies would be unable to grow properly. And as I have already remarked, without galaxies there would probably be no stars or planets or life. So our existence depends on the dark energy's not being too large. A factor of ten would suffice to preclude life: if space contained ten times as much dark energy as it actually does, the universe would fly apart too fast for galaxies to form. A factor of ten may seem like a wide margin, but one power of ten on a scale of 120 is a pretty close call. The cliché that “life is balanced on a knife-edge” is a staggering understatement in this case: no knife in the universe could have an edge that fine.
Logically, it is possible that the laws of physics conspire to create an almost but not quite perfect cancellation. But then it would be an extraordinary coincidence that that level of cancellation - 119 powers of ten, after all - just happened by chance to be what is needed to bring about a universe fit for life. How much chance can we buy in scientific explanation? One measure of what is involved can be given in terms of coin flipping: odds of 10^120 to one is like getting heads no fewer than four hundred times in a row. If the existence of life in the universe is completely independent of the big fix mechanism - if it's just a coincidence - then those are the odds against our being here. That level of flukiness seems too much to swallow.
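Davies's coin-flip figure is easy to verify (a one-line check of mine):

```python
import math
# How many consecutive heads have probability 1 in 10^120?
print(round(120 * math.log2(10)))  # 399, i.e. about four hundred in a row
```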
Lewis and Barnes - A Fortunate Universe - Chapter 1
Dark energy could be a number of things, including something called vacuum energy, that is, the energy present in empty space even when there are no particles. Our best theory of the structure of matter tells us that each fundamental type of matter will contribute to this vacuum energy, either positively or negatively. Alarmingly, the typical size of these contributions is larger than the amount of dark energy in our Universe by a factor of 1 followed by 120 zeros, or in scientific notation 10^120.
What would happen if the amount of dark energy in our Universe were, say, a trillion (10^12) times larger? This sounds like a big increase, but it is a pittance compared to 10^120. In that universe, the expansion of space would be so rapid that no galaxies, stars or planets would form. The universe would contain a thin soup of hydrogen and helium. At most, these particles might occasionally bounce off each other, and head back out into space for another trillion years of lonely isolation.
Lewis and Barnes - A Fortunate Universe - Chapter 2
The Delta-Plus-Plus Universe: Let's start by increasing the mass of the down quark by a factor of about 70. Down quarks would readily transform into up quarks (and other stuff), even inside protons and neutrons. Thus, protons and neutrons would rapidly decay into the new 'most stable' title-holder, our old friend the Δ++ particle. We would find ourselves in the 'Delta-plus-plus universe'.
As we've seen, the Δ++ particle is a baryon containing three up quarks. Unlike the proton and neutron, however, the extra charge, and hence electromagnetic repulsion, on the Δ++ particles makes them much harder to bind together. Individual Δ++ particles can capture two electrons to make a helium-like element. And this will be the only element in the universe. Farewell, periodic table! The online PubChem database in our Universe lists 60,770,909 chemical compounds (and counting); in the Δ++ universe it would list just one. And, being like helium, it would undergo zero chemical reactions.
The Delta-Minus Universe: Beginning with our Universe again, let's instead increase the mass of the up quark by a factor of 130. Again, the proton and neutron will be replaced by one kind of stable particle made of three down quarks, known as the Δ−. Within this Δ− universe, with no neutrons to help dilute the repulsive force of their negative charge, there again will be just one type of atom, and, in a dramatic improvement on the Δ++ universe, one chemical reaction! Two Δ− particles can form a molecule, assuming that we replace all electrons with their positively charged alter-ego, the positron.
The Hydrogen Universe: To create a hydrogen-only universe, we increase the mass of the down quark by at least a factor of 3. Here, no neutron is safe. Even inside nuclei, neutrons decay. Once again, kiss your chemistry textbook goodbye, as we'd be left with one type of atom and one chemical reaction.
The Neutron Universe: If you think the hydrogen universe is rather featureless, let's instead increase the mass of the up quark by a factor of 6. The result is that the proton falls apart. In a reversal of what we see in our Universe, protons, including those buried in the apparent safety of the atomic nucleus, decay into neutrons, positrons and neutrinos. This is by far the worst universe we've so far encountered: no atoms, no chemical reactions. Just endless, featureless space filled with inert, boring neutrons.
There is more than one way to create a neutron universe. Decrease the mass of the down quark by just 8 per cent and protons in atoms will capture the electrons in orbit around them, forming neutrons. Atoms would dissolve into clouds of featureless, chemical-free neutrons.
What about the other particle of everyday stuff, the electron? Since the electron (and its antiparticle, the positron) is involved in the decay of neutron and proton, it too can sterilize a universe. For example, increase its mass by a factor of 2.5, and we're in the neutron universe again.
Lewis and Barnes - A Fortunate Universe - Chapter 2
So, the mass of the Higgs boson presents us with a conundrum. Our quantum mechanical calculations predict that it should have a mass of 10^18 GeV. Life requires a value not too much different to what we observe, so that the masses of the fundamental particles are not disastrously large. There must be an as yet unknown mechanism that slices off the contributions from the quantum vacuum, reducing it down to the observed value. This slicing has to be done precisely, not too much and not so little as to destabilize the rest of particle physics. This is a cut as fine as one part in 10^16. Maybe there is a natural solution to this cutting, but it seems quite lucky for our Universe that the slicing resulted in stable particle physics. This problem - known as the hierarchy problem - keeps particle physicists awake at night.
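The "one part in 10^16" follows from the two scales quoted (my check; the observed Higgs mass of about 125 GeV is the one measured value assumed here):

```python
# Naive quantum prediction vs the observed Higgs mass.
predicted_GeV = 1e18
observed_GeV  = 125.0
print(f"required cancellation ~ one part in {predicted_GeV / observed_GeV:.0e}")
# ~8e15, i.e. roughly one part in 10^16
```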
Lewis and Barnes - A Fortunate Universe - Chapter 3
How much would the properties of the Universe need to change in order to give us a helium universe? Consider strengthening the grip of the strong force, so that the cosmic oven becomes more efficient. Nuclei can stick together under hotter conditions, and so form earlier in the history of the Universe, when there are more neutrons. Our Universe burns 25 per cent of its hydrogen in the first few minutes. An increase of the strong force by a factor of about 2 is enough to cause the Universe to burn more than 90 per cent of its hydrogen.
Lewis and Barnes - A Fortunate Universe - Chapter 3
A supervillain who threatens to dissolve our atoms and nuclei is worth taking seriously. Chemical elements that last but a moment are pretty useless for building molecules, cells and organisms, and it's not difficult to arrange: decrease the strength of the strong force by a factor of about 4, and the periodic table, from carbon up, is gone. These nuclei don't even α-decay; they experience fission - the nucleus simply splits into two. We could achieve the same effect by increasing the strength of electromagnetism by a factor of 16. Just don't tell your local Evil Overlord.
Smaller changes will result in most of the elements being radioactive, with a range of lifetimes. This creates a swath of problems for life. Each α- and β-decay transmutes one chemical element into another, and so totally changes its chemical properties. Within life's highly specialized, precisely constructed amino acids, proteins and cells, an unpredictable change of this magnitude will cause untold damage. Life will not get very far if it cannot trust chemistry.
Lewis and Barnes - A Fortunate Universe - Chapter 4
With all this fast living, these stars die young, rapidly burning through their nuclear fuel. Stars would burn quickly through the universe's nuclear energy, emitting radiation that is lethal to life and dying in an even more energetic and dangerous supernova explosion. In our Universe, gravity is about 10^40 times weaker than the strong force; if it were only 10^30 times weaker, typical stars would burn out in a matter of years, not tens of billions of years.
Or so we thought, until Fred Adams of the University of Michigan took a closer look at stars in other universes. Fred discovered another condition on the stability of stars, a condition that narrows the stellar window as the strength of gravity increases. If gravity were 10^35 instead of 10^40 times weaker than the strong force, then the window would close completely. Stable stars would not be possible at all.
Lewis and Barnes - A Fortunate Universe - Chapter 4
Even smaller changes might impact how stars burn. A small decrease in the strength of the strong force by about 8 per cent would render deuterium unstable. A proton can no longer stick to a neutron, and the first nuclear reaction in stars is in danger of falling apart. An increase of 12 per cent binds the diproton - a proton can stick to another proton. This gives stars a short cut, an easy way to burn fuel. If the diproton were suddenly bound within the Sun, it would burn hydrogen at a phenomenal rate, exhausting its fuel in mere moments.
Lewis and Barnes - A Fortunate Universe - Chapter 4
Other than silicon, there is little hope for life based upon other elements. Boron and sulphur have been suggested, but these are very rare in the cosmos, and simply don't provide the long, linked, folding large molecules that life needs for its inner machinery and genetic blueprint. There are only 92 naturally occurring chemical elements, and carbon is by a wide margin the most suitable for life.
Lewis and Barnes - A Fortunate Universe - Chapter 4
We can, therefore, ask whether the existence and properties of the carbon resonance are fine-tuned for life: if we varied the properties of the Universe, and in particular its fundamental parameters, would it still make carbon?
In 1989, Mario Livio and colleagues, of the Space Telescope Science Institute, simulated the life and death of stars with slightly different resonance energies. Relative to the ground state, a change of more than about 3 per cent begins to shut down carbon production. In fact, this happens for two reasons - either carbon isn't made at all, or else the star burns so efficiently that carbon is made and then immediately burned up: the carbon nucleus captures another helium nucleus, making oxygen.
Lewis and Barnes - A Fortunate Universe - Chapter 4
Evgeny Epelbaum, Hermann Krebs, Timo Lähde, Dean Lee and Ulf-G. Meißner (2011) took up the challenge and considered what would happen to carbon and oxygen production if we mess around with the fundamental properties of matter. Contrary to Weinberg's hunch, the narrowness and location of the relevant energy levels translate into quite stringent limits on the masses of the quarks. A change of much more than a small percentage destroys a star's ability to create both carbon and oxygen.
And remember from last chapter that because the quarks are already 'absurdly light', in the words of physicist Leonard Susskind (2005, p. 176), a range of mass that is a small percentage of their value in our Universe corresponds to a tiny fraction of their possible range. It is about one part in a million relative to the Higgs field, which gives them their mass. It is about one part in 10^23 relative to the Planck mass!
So the fact that we are here typing these words, and you are there reading them, all constructed from molecules of carbon and oxygen, is only possible because the masses of the quarks and the strength of the forces lie within an outrageously narrow range!
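The two ratios quoted can be roughly reconstructed (my sketch; it assumes a light-quark mass of about 5 MeV and a life-permitting window of a few per cent of that, both chosen to match the text):

```python
# Life-permitting quark-mass window compared with the two natural scales.
window_GeV  = 0.03 * 0.005   # ~3% of a ~5 MeV quark mass, in GeV (assumed)
higgs_vev   = 246.0          # electroweak (Higgs) scale, GeV
planck_mass = 1.22e19        # Planck mass, GeV

print(f"vs Higgs scale: one part in {higgs_vev / window_GeV:.0e}")    # ~1e6
print(f"vs Planck mass: one part in {planck_mass / window_GeV:.0e}")  # ~1e23
```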
Lewis and Barnes - A Fortunate Universe - Chapter 5
While the expansion of a universe decelerates, curved universes become more curved. Exactly flat universes remain flat, but the slightest curvature is quickly amplified. Because (according to the Standard Cosmology) our Universe was decelerating for the first few billion years of its existence, its initial conditions must be preposterously fine-tuned for it to appear flat today. Suppose we wind the tape of the Universe back to when the first elements are formed, a few minutes after the Big Bang. Then, in order for our Universe to be flat to within one per cent today, it must have been flat to within one part in a thousand trillion (1 followed by 15 zeros). Suspicious!
Winding the clock further back in time only makes it worse. The furthest we can meaningfully wind the clock back is the so-called Planck time, before which we would need a theory of quantum gravity to predict what's going on. (We don't have one. Or at least, not one that is well understood and well tested.) The Planck time is equal to 10^-44 seconds. At this time, the fine-tuning is one part in 10^55. Even more suspicious!
This is known as the flatness problem. It is a problem for cosmology. It is a problem for life, too.
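The "one part in a thousand trillion" figure can be reconstructed to order of magnitude (my sketch, using the standard result that |Ω − 1| grows like t in the radiation era and like t^(2/3) in the matter era, with round values for the epoch times; the late dark-energy era is ignored):

```python
# Wind a 1% flatness today back to nucleosynthesis, a few minutes in.
t_nucleo = 180.0    # s, a few minutes after the Big Bang
t_eq     = 1.6e12   # s, matter-radiation equality (~50,000 years)
t_now    = 4.3e17   # s, ~13.8 billion years

growth = (t_eq / t_nucleo) * (t_now / t_eq) ** (2 / 3)
print(f"|Omega - 1| then ~ {0.01 / growth:.0e}")  # ~3e-16, i.e. ~1 part in 10^15
```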
Lewis and Barnes - A Fortunate Universe - Chapter 5
Thanks to the fine-tuning of the initial density of the universe, it doesn't take much to induce a suicidal expansion. If we look at the density of the Universe just one nanosecond after the Big Bang, it was immense, around 10^24 kg per cubic metre. This is a big number, but if the density had been just a single kg per cubic metre higher, the Universe would have collapsed by now. And with a single kg per cubic metre less, the Universe would have expanded too rapidly to form stars and galaxies.
Lewis and Barnes - A Fortunate Universe - Chapter 5
The almost-uniform temperature of the CMB implies that the early Universe was almost smooth. The density of the Universe departed from the average density by at most one part in 100,000. This number (1 in 100,000) is known as Q. We have no idea why it has this value, but this smoothness of our Universe is essential for life....
How large does Q have to be before we create such catastrophic universes? Not large at all. If Q were one part in 10,000, then nearby stars would disrupt planets in their orbits. And if Q were one part in 100, then black holes would abound.
Lewis and Barnes - A Fortunate Universe - Chapter 5
But how big a change can we make? Well, not that much. As we've seen, neutrinos are about one millionth of the mass of the electron, the next most massive particle. Tegmark and collaborators showed that increasing the neutrino mass by even a factor of a couple will have a devastating effect on galaxy formation, effectively suppressing it completely and leaving the universe nothing but a smooth soup of shapeless matter.
Lewis and Barnes - A Fortunate Universe - Chapter 6
Overall, the Universe is electrically neutral. If the contents of some patch of the Universe were to somehow become predominately positively charged, the corresponding negative charges must be elsewhere in the Universe. The two regions will be drawn together, eventually meeting, mingling, and restoring local electrical neutrality.
It's important to realize just how precisely electric neutrality needs to be enforced. Suppose that you were assembling Earth and got a little careless with your book-keeping, so that for every trillion trillion trillion protons and electrons you put into the mix, one extra electron slipped in. The combined repulsion of these extra electrons would be stronger than the attraction of gravity. The Earth would not be gravitationally bound.
In fact, the same net charge (one part in 10^36) would preclude any gravitationally bound structure in the Universe at all. Galaxies, stars and planets would all fail to collapse under their own gravity, instead being dispersed by electromagnetic repulsion. The result: a universe of extremely diffuse gas, and not much else.
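One way to recover the "one part in 10^36" figure (my sketch, not the book's derivation): find the charge imbalance f at which the electric force on a proton at the body's surface, from the body's net charge, matches the gravitational pull on it. The body's mass and radius cancel out of the comparison:

```python
# Critical fractional charge imbalance: electric push on a surface proton
# equals gravitational pull, i.e. f * k e^2 / (G m_p^2) = 1.
k_e = 8.988e9      # Coulomb constant, N m^2 / C^2
e   = 1.602e-19    # elementary charge, C
G   = 6.674e-11    # gravitational constant, N m^2 / kg^2
m_p = 1.673e-27    # proton mass, kg

f = (G * m_p**2) / (k_e * e**2)
print(f"f ~ {f:.0e}")   # ~8e-37, roughly one part in 10^36
```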
Mario Livio and Martin Rees - Fine-Tuning in the Physical Universe - Chapter 1, pg. 9-10
The parameter that measures the 'roughness' of the Universe is called Q. At recombination, the temperature fluctuations across the sky ΔT/T are of order Q. There is no firm theoretical argument that explains why it has the observed value of about 10^-5...
The conclusion from this discussion (summarised also in [25]; see Figure 1.3) is that for a universe to be conducive for complexity and life, the amplitude of the fluctuations should best be between 10^-6 and 10^-4 and, therefore, not particularly finely tuned.
Bernard Carr - Fine-Tuning in the Physical Universe - Chapter 2, pg. 42
Once the suggestion was made, the resonance was looked for in the laboratory and rapidly found. So this might be regarded as the first confirmed anthropic prediction, although Kragh [53] takes a different view. Indeed, the fine-tuning required is so precise that Hoyle concluded that the Universe has to be a 'put-up job'. At the time, it was not possible to quantify this coincidence, but more recent work has studied this more carefully [6, 25, 37, 56]. In particular, studies by Oberhummer et al. - calculating the variations in oxygen and carbon production in red giant stars as one varies the strength and range of the nucleon interactions - indicate that the nuclear interaction strength must be tuned to at least 0.5% [58].
Bernard Carr - Fine-Tuning in the Physical Universe - Chapter 2, pg. 42-43
As discussed in detail by Barrow and Tipler [11], many features of chemistry are sensitive to the value of α_S. For example, if α_S were increased by 2%, all the protons in the Universe would combine at cosmological nucleosynthesis to form diprotons (nuclei consisting of two protons). In this case, there would be no hydrogen and, hence, no hydrogen-burning stars. Since stars would then have a much reduced main-sequence time, there might not be time for life to arise. If α_S were increased by 10%, the situation would be even worse because everything would go into nuclei of unlimited size, and there would be no interesting chemistry. The lack of chemistry would also apply if α_S were decreased by 5% because all deuterons would then be unbound, and one could only have hydrogen.
Bernard Carr - Fine-Tuning in the Physical Universe - Chapter 2, pg. 53
Since many models of particle physics invoke extra spatial dimensions, the issue of why we live in a world with three spatial dimensions naturally arises. While there may well be some physical explanation for this, it clearly has anthropic aspects. For example, there would be no gravity with two spatial dimensions, and planetary orbits would be unstable with four of them. There are also constraints on the number of time dimensions, associated with causality. Other arguments for the number of space and time dimensions have been given by Tegmark [79], and Figure 2.10 is taken from his paper.
Adrianne Slyz - Fine-Tuning in the Physical Universe - Chapter 6, pg. 232
There are many places where one could see fine-tuning at work in cosmological structure formation, beginning with the initial conditions from which structures come into existence. Indeed, it may seem paradoxical that the very mechanism invoked to solve, amongst other issues, the acausal fine-tuning problem, turns out to appear so fine-tuned itself. In other words, why is the value of the initial fluctuation amplitudes produced by inflation so low (one part in 100,000)?
Should they be much smaller, despite the presence of dark matter, they would remain in the linear regime, and galaxies would not have enough time to grow. If, on the other hand, they were much larger, say on the order of unity, they would never grow in the linear regime to begin with, but behave like independent universes very rapidly collapsing on top of one another, creating a very violent environment where galaxy cannibalism would be the rule rather than the exception. As a result, this would greatly reduce the total number of galaxies in the Universe and, hence, the number of places where life could be nurtured.
Hugh Ross - The Creator and the Cosmos - Chapter 15
How delicate is the balance for the strong nuclear force? If the strong nuclear force were just 4% stronger, the diproton (a nucleus with two protons and no neutrons) would form. Diprotons would cause stars to so rapidly exhaust their nuclear fuel as to make any kind of physical life impossible. On the other hand, if the strong nuclear force were just 10% weaker, carbon, oxygen, and nitrogen would be unstable and again physical life would be impossible.
Does this just apply to life as we know it? No, this holds true for any conceivable kind of life chemistry throughout the cosmos. This delicate condition must be met universally.
Hugh Ross - The Creator and the Cosmos - Chapter 15
In the late 1970s and early 1980s, Fred Hoyle discovered that an incredible fine-tuning of the nuclear ground state energies for helium, beryllium, carbon, and oxygen was necessary for any kind of life to exist. The ground state energies for these elements cannot be higher or lower with respect to each other by more than 4% without yielding a universe containing insufficient oxygen or carbon for life. Hoyle, who has written extensively against theism and Christianity in particular, nevertheless concluded on the basis of this quadruple fine-tuning that "a superintellect has monkeyed with physics, as well as with chemistry and biology".
Hugh Ross - The Creator and the Cosmos - Chapter 15
The (German-Hungarian) astrophysical team mathematically constructed models of red giant stars that adopted slightly different values of the strong nuclear force and electromagnetic force constants. They discovered that tiny adjustments in the values of either of these constants imply that red giant stars would produce too little carbon, too little oxygen, or too little of both oxygen and carbon. Specifically, they determined that if the value of the coupling constant for electromagnetism were 4% smaller or 4% larger than what we observe, then life would be impossible. In the case of the coupling constant for the strong nuclear force, if it were 0.5% smaller or larger, then life would be impossible.
Hugh Ross - The Creator and the Cosmos - Chapter 15
In the first moments after creation, the universe contained about 10 billion and 1 nucleons for every 10 billion antinucleons. The 10 billion antinucleons annihilated the 10 billion nucleons, generating an enormous amount of energy. All the galaxies and stars that make up the universe today were formed from the leftover nucleons. If the initial excess of nucleons over antinucleons were any less, there would not be enough matter for galaxies, stars, and heavy elements to form. If the excess were any greater, galaxies would form, but they would so efficiently condense and trap radiation that none of them would fragment to form stars and planets.
Hugh Ross - The Creator and the Cosmos - Chapter 15
The neutron is 0.138% more massive than a proton. Because of this extra mass, neutrons require slightly more energy to make than protons. So as the universe cooled from the hot big bang creation event, it produced more protons than neutrons - in fact, about seven times as many.
If the neutron were just another 0.1% more massive, so few neutrons would remain from the cooling off of the big bang that there would not be enough of them to make the nuclei of all the life-essential heavy elements. The extra mass of the neutron relative to the proton also determines the rate at which neutrons decay into protons and protons build into neutrons (one neutron decays into one proton + one electron + one neutrino). If the neutron were 0.1% less massive, so many protons would be built up to make neutrons that all the stars in the universe would have rapidly collapsed into either neutron stars or black holes. Thus for life to be possible in the universe, the neutron mass must be fine-tuned to better than 0.1%.
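The "about seven times as many" figure is the standard textbook freeze-out estimate, which a short sketch reproduces (mine, not Ross's; the freeze-out temperature and timing are round, approximate values):

```python
import math
# The n/p ratio freezes out at a Boltzmann factor when the weak interactions
# drop out of equilibrium, then free-neutron decay trims it further until
# nucleosynthesis begins.
dm       = 1.293   # neutron-proton mass difference, MeV
T_freeze = 0.75    # freeze-out temperature, MeV (approximate)
t_bbn    = 180.0   # seconds until nucleosynthesis (approximate)
tau_n    = 880.0   # free-neutron mean lifetime, seconds

n_over_p = math.exp(-dm / T_freeze) * math.exp(-t_bbn / tau_n)
print(f"protons per neutron ~ {1 / n_over_p:.1f}")  # ~6.9, i.e. about seven
```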
Hugh Ross - The Creator and the Cosmos - Chapter 15
Unless the number of electrons is equivalent to the number of protons to an accuracy of one part in 10^37 or better, electromagnetic forces in the universe would have so overcome gravitational forces that galaxies, stars, and planets never would have formed.
Hugh Ross - The Creator and the Cosmos - Chapter 15
The dark energy density must be even more spectacularly fine-tuned. The original source or sources of dark energy must be at least 122 orders of magnitude larger than the amount astronomers now detect. This implies that somehow the source(s) must cancel one another so as to leave just one part in 10^122.
As Lawrence Krauss and many other astrophysicists noted, this one part in 10^122 is by far the most extreme fine-tuning yet discovered in physics.
Hugh Ross - The Creator and the Cosmos - Chapter 15
For relativity to operate so that certain proteins containing copper and vanadium adequately support life, the velocity of light must be fine-tuned. This is not the only reason why the velocity of light must be held constant and fixed at the value of 299,792.458 kilometers per second. Because of Einstein's equation, E = mc^2, even small changes in c (the velocity of light) lead to huge changes in E (the energy) or m (the mass). Thus, a slight change in light's velocity implies that starlight will be either too strong or too feeble for life, or that stars will produce the wrong elements for life.
Hugh Ross - The Creator and the Cosmos - Chapter 15
A fourth measured parameter, another very sensitive one, is the ratio of the electromagnetic force constant to the gravitational force constant. If the electromagnetic force relative to gravity were increased by just one part in 10^40, the full range of small star sizes and types needed to make life possible would not form. And, if it were decreased by just one part in 10^40, the full range of large star sizes and types needed to make life possible would not form. For life to be possible in the universe, the full range of both large and small star sizes and types must exist. Large stars must exist because only their thermonuclear furnaces produce most of the life-essential elements. Small stars like the Sun must exist because only small stars burn long enough and stably enough to sustain a planet with life.
Hugh Ross - The Creator and the Cosmos - Chapter 17
Everything written so far in this chapter assumes that physical life must be carbon-based. As physicist Robert Dicke observed 50 years ago, if you want physicists (or any other life-forms), you must have carbon.
Arsenic, boron, and silicon are the only other elements on which complex molecules can be based, but arsenic and boron are relatively rare and, where concentrated, poisonous to life, and silicon can hold together no more than about a hundred amino acids. Only carbon yields the chemical bonding stability and bonding complexity that life requires. Given the constraints of physics and chemistry, we now know that physical life must be carbon-based.
While many others have dealt with Atheistic challenges to the fine-tuning of the universe - see, for instance, section 7 of Robin Collins's chapter on the Teleological Argument in The Blackwell Companion to Natural Theology - what follows are some brief responses to the most popular arguments put forth: