Do Multiverse Scenarios Solve the Problem of Fine-Tuning?

by Max Andrews

The multiverse hypothesis is the leading alternative to the fine-tuning hypothesis.  The multiverse dispels many aspects of the fine-tuning argument by suggesting that each universe has different initial conditions and varying constants of physics, so that the laws of nature shed their apparently arbitrary values; this makes the older single-universe argument from fine-tuning considerably weaker.  If the multiverse hypothesis is used as an alternative explanans, there are four options for why fine-tuning is either unnecessary to invoke or illusory.  Fine-tuning might be (1) illusory, if life could adapt to very different conditions or if the values of the constants could compensate for one another; (2) a result of chance; (3) nonexistent, because nature could not have been otherwise (with the hoped-for discovery of a fundamental theory of everything, all states of affairs in nature may turn out to be tautologous); or (4) a product of cosmic Darwinism, or cosmic natural selection, which would make the measured values quite likely within a multiverse of many different values.  In this paper I contend that multiverse scenarios are insufficient to account for the fine-tuning of the laws of nature and that physicists and cosmologists must either accept fine-tuning as a metaphysical brute fact or seriously entertain the hypothesis of a fine-tuner.

I.  Outlining the Multiverse Hierarchy

Contemporary physics seems to indicate that there are good reasons, theoretical and physical, for postulating a plurality of worlds.  This concept has come to be known as the multiverse.  The multiverse is not monolithic; it is modeled after the contemporary understanding of the inflationary beginning of our universe.  Max Tegmark has championed the field of precision cosmology and has proposed the most prominent versions of the multiverse.[1]  Tegmark draws a four-way distinction in classifying these models.

Tegmark’s first version of the multiverse is called the level one multiverse.  The level one multiverse is, for the most part, simply more space beyond the observable universe: theoretically, if we were to travel to the “edge” of the observable universe, there would be more space beyond it.  Calling this a version of the multiverse may be misleading because there is still only one volume or system involved.  A generic prediction of cosmological inflation is an infinite space containing Hubble volumes (regions like the one we observe) realizing all possible conditions, including an identical copy of each of us about 10^(10^29) meters away.[2]

The level two multiverse is typically associated with other bubble universes spawning from a cosmic landscape via inflation.  This version predicts that different regions of space can exhibit different laws of physics (physical constants, dimensionality, particle content, etc.) corresponding to different localities in a landscape of possibilities.[3]  Imagine the multiverse as a bathtub filled with tiny bubbles, each bubble in the larger system (the bathtub) being a single universe.  Or imagine a pot of boiling water: the bubbles arise from the bottom of the pot in a way analogous to inflationary cosmology.  These other domains (or bubble universes) are, in effect, infinitely far away, in the sense that we could never reach them even if we traveled faster than the speed of light (because space is constantly stretching and creating more volume).[4]  It may, however, not be the case that there is an infinite set of universes.  Andrei Linde and Vitaly Vanchurin have argued that, given the way slow-roll inflation works, it could only produce a finite number of universes.  Hence, they propose that there are approximately 10^(10^(10^7)) universes.[5]

The level three multiverse is particular to certain interpretations of quantum mechanics, such as Hugh Everett’s Many Worlds Interpretation.  It is a mathematically simple model that preserves unitary physics.  Everything that can happen in the particle realm actually does happen.  Observers would only ever view their own level one multiverse; the process of decoherence (which mimics wave function collapse while preserving unitary physics) prevents them from seeing the level three parallel copies of themselves.[6]

The fourth level is the all-encompassing version, in which mathematical existence is equivalent to physical existence.  Mathematical structures are physically real, and the human language we use to describe them is merely a useful approximation of our subjective perceptions.  Other mathematical structures yield different fundamental equations of physics for different regions of reality.[7]  This would be Plato’s ideal reality.

What is most important about this scientific evidence is that it lends reasonable support to the idea of modal realism.  Modal realism can no longer simply be brushed off as incoherent and baseless; this evidence may be an example of purely mathematical, scientific, and philosophical theories receiving physical support.  Additionally, each version of the multiverse allows for modal realism to be true.  The level one multiverse provides an infinite space in which different states of affairs can occur, and level two and above depict a greater number of systems.  Whether Linde and Vanchurin are correct in their finite version of the multiverse, or whether Tegmark is correct, is irrelevant to modal realism.  All that is required is that every possible state of affairs occur at some time; the events need not be simultaneous for modal realism to be true.

II. Inflationary Cosmology

The properties of our universe appear to be finely tuned for the existence of life.  Cosmologists would like to explain the numbers and values that describe the properties we observe.  They attempt to show that these constants and values in nature are completely determined as a product of inflation.[8]

The eternally inflating multiverse is often used to provide a consistent framework for understanding coincidences and fine-tuning in the universe we inhabit.[9]  The theory appears in several forms, each attempting to explain the mechanism that drives the rapid expansion of the universe.  Before developing these models, a few basic premises must be laid out: the size of the universe, the Hubble expansion, homogeneity and isotropy, and the flatness problem.

It is widely agreed that the Hubble volume we inhabit is incredibly large.  In standard Friedmann-Lemaître-Robertson-Walker (FRW) cosmology without inflation, one simply postulates about 10^90 elementary particles.[10]  This number is derived from simple geometrical measurements.  One of the tasks at hand is explaining how the universe got so big.  The exponential expansion of inflation reduces the problem of explaining 10^90 particles to the problem of explaining 60 or 70 e-foldings of inflation.[11] Inflationary cosmology therefore suggests that, even though the observed universe is incredibly large, it is only an infinitesimal fraction of the entire multiverse.[12]
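To get a feel for these numbers, consider a minimal arithmetic sketch.  The 60 and 70 e-foldings and the 10^90 particle count are from the text; everything else is just exponentiation:

```python
import math

# One e-folding multiplies every linear dimension of space by e,
# so after N e-foldings the linear expansion factor is e**N.
for N in (60, 70):
    linear = math.exp(N)
    volume = linear ** 3  # volume grows as the cube of the linear factor
    print(f"N = {N}: linear factor ~ 10^{math.log10(linear):.0f}, "
          f"volume factor ~ 10^{math.log10(volume):.0f}")

# 60 e-foldings stretch lengths by ~10^26 (volumes by ~10^78);
# 70 give ~10^30 (volumes by ~10^91), ample room for a tiny patch
# to grow into a region containing ~10^90 particles.
```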

The Hubble expansion serves as a factor in the initial conditions of the universe. In the 1920s Edwin Hubble was studying the Andromeda nebula; at least since the time of Kant, scientists had wondered what such distant, enormous objects were (galaxies). With further study, Hubble noticed that distant galaxies exhibit a redshift: they appear redder than they should, and Hubble postulated that the galaxies are moving away from one another.  What was being observed was the optical analogue of the Doppler effect on sound; the motion of an object affects the wavelength of the sound or, in this case, the light. If this expansion is extrapolated backwards, the equations of motion can only go so far, until the universe reaches a singularity.  Inflation offers the possibility of explaining how this expansion initially began. The repulsive gravity associated with the false vacuum contributes to the explanation: the false-vacuum energy density supplies exactly the kind of force needed to propel the universe into a pattern of motion in which any two particles move apart with a velocity proportional to their separation.[13]
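That final pattern of motion is just Hubble's law, v = H0 × d. A quick illustrative calculation follows; the value of H0 here is the commonly quoted ~70 km/s per megaparsec, an assumption on my part rather than a figure from the text:

```python
# Hubble's law: recession velocity is proportional to distance, v = H0 * d.
H0 = 70.0  # km/s per megaparsec (illustrative present-day value)

for d_mpc in (1, 10, 100, 1000, 5000):
    v = H0 * d_mpc
    print(f"galaxy at {d_mpc:>4} Mpc recedes at ~{v:,.0f} km/s")

# Beyond ~4300 Mpc the naive v exceeds the speed of light (~3e5 km/s),
# a reminder that the expansion is a stretching of space itself rather
# than motion through space -- which is also why level two bubble
# universes stay unreachable even at light speed.
```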

Homogeneity and isotropy refer to the uniformity of the universe. This can be seen in figure 1 below, from the Planck satellite’s one-year survey results.

Fig. 1[14]


The intensity of the cosmic background radiation is the same in all directions, uniform to the remarkable precision of one part in 100,000, and possibly to even greater precision as Planck survey results develop.[15]  In standard FRW cosmology, this uniformity could have been established so quickly only if information could propagate at 100 times the speed of light, a proposition clearly contradicting known physics.  In inflationary cosmology, however, the uniformity is easily explained: uniformity is created on microscopic scales by normal thermal-equilibrium processes, and inflation then takes over and stretches the regions of uniformity to become large enough to encompass the observed universe.[16]

Fig. 2[17]


The flatness problem concerns the precision required for the initial value of Ω, the ratio of the actual mass density to the critical mass density. Robert Dicke and P.J.E. Peebles pointed out that at t = 1 second after the big bang, when nucleosynthesis was just beginning, Ω must have equaled one to an accuracy of one part in 10^15. If the ratio were not accurate to this degree, the resulting universe would not resemble our own.[18] As depicted in figure 2, the evolution of the universe differs between inflationary and FRW scenarios.

The standard FRW cosmology has no explanation for the Ω value, while inflation does.
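A crude back-of-envelope sketch can convey the sensitivity. Assume, purely for illustration, that |Ω − 1| grows in proportion to t (the radiation-era scaling; matter-era growth is slower, so this only sets an order of magnitude), and that |Ω − 1| today is at most about 0.01:

```python
# Toy extrapolation of the flatness problem, assuming |Omega - 1|
# grows linearly with time (radiation-era scaling used throughout,
# which is a deliberate oversimplification).

t_now = 4.3e17         # present age of the universe in seconds (~13.7 Gyr)
dev_now = 0.01         # assumed upper bound on |Omega - 1| today

t_early = 1.0          # one second after the big bang
dev_early = dev_now * (t_early / t_now)

print(f"|Omega - 1| at t = 1 s must have been <= ~{dev_early:.0e}")
# ~2e-20 under this toy scaling; Dicke and Peebles' more careful
# estimate is one part in 10^15. Either way, FRW cosmology must
# simply posit the value, while inflation drives Omega toward 1.
```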

Magnetic monopoles are extremely massive particles carrying a net magnetic charge, and they are predicted by all grand unified theories.  When grand unified theories are combined with non-inflationary scenarios, the expected age of the universe is no longer 13.73 billion years; it becomes about 30,000 years.  Inflation eliminates this problem by arranging the parameters so that inflation takes place after or during monopole production, diluting the monopole density to a completely negligible level.[19]

Fig. 3[20]


These preliminaries help clarify what exactly inflation accomplishes and what it predicts.  Sometime between 1983 and 1986 Andrei Linde developed and proposed a model of eternal chaotic inflation, in which the scalar fields begin with randomly chosen initial values on an energy-density hill (in contrast to the bowl of a Mexican-hat potential); sufficient inflation can then occur as the fields roll toward the state of minimum energy density.[21]

Consider the evolution of the scalar field below:

Fig. 4[22]


As depicted in figure 4, the evolution of the scalar field leads to many inflationary domains, as revealed in this computer-generated depiction.  In most parts of the universe the scalar field decreases (the depressions and the valleys).  In other places quantum fluctuations cause the scalar field to grow.  In those places, represented as peaks, the universe undergoes inflation and rapidly expands, leading to the creation of inflationary regions. Our Hubble volume is in one of the valleys, where space is no longer inflating.[23] Each of these peaks consists of large domains with different laws of physics (represented by the different colors in figure 5 below).  Sharp peaks are big bangs; their heights correspond to the energy density of the universe there.  At the tops of the peaks the colors rapidly fluctuate, indicating that the laws of physics there are not yet settled.  They become fixed only in the valleys, one of which corresponds to the universe we live in now.[24]

Fig. 5[25]


Due to the nature of inflation, each valley produces a universe with different values, which is a prediction of quantum cosmology.  Inflation isn’t monolithic in form (there are eternal, chaotic, new, string versions, etc.); however, each model shares the basic premises described above.  Not only is inflation scientifically attractive for conforming theory to observation and data, it also offers a philosophical satisfaction in attempting to explain away fine-tuning. Inflationists see the fine-tuning of the standard FRW big bang model as ‘ugly.’  The claim is that the need for such fine-tuning of the initial state is removed in the inflationary picture, and this is regarded as a more aesthetically pleasing physical picture.[26]  Additionally, if inflation is true, then there isn’t one universe but a multiverse, potentially infinite in number.

III. Explaining Nomic Behavior

Regularity theory (RT) attempts to account for laws in a descriptive manner, contra the necessitarian position (NT), which expresses the laws of nature as nomic necessity.  According to RT, the fundamental regularities are brute facts; they neither have nor require an explanation.  Regularity theorists attempt to formulate laws and theories in a language where the connectives are all truth-functional.  Thus, each law is expressed with a universal quantifier, as in (x)(Px ⊃ Qx).[27]  NT states that there are metaphysical connections of necessity in the world that ground and explain the most fundamental regularities.  Necessitarian theorists usually use the word must to express this connection.[28]  Thus, NT maintains that must-statements are not adequately captured by is-statements (must ≠ is; certain facts are left unaccounted for).[29]
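In standard notation the contrast can be put schematically as follows. This is a rendering of the usual textbook formalism, not a formula taken from the cited authors:

```latex
% Regularity theory: a law is nothing more than a true universal
% generalization over actual cases.
\text{RT:}\quad \forall x \, (Px \supset Qx)

% Necessitarian theory: the generalization holds of nomic necessity,
% marked here with a box operator ranging over nomically possible worlds.
\text{NT:}\quad \Box \, \forall x \, (Px \supset Qx)
```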

The role of counterfactuals serves to make distinctions among regularities.  Concerning RT and counterfactuals, the regularist may claim that laws do not state merely what will always occur but also what would have occurred had things been different.  NT claims that it is difficult for RT to account for certain counterfactual claims, because what happens in the actual world does not itself imply anything about what would have happened had things been different.[30]  This is merely a negative assertion on behalf of NT and carries no positive reason to adopt the NT position.  However, RT does have a limited explanatory scope. C.D. Broad argued that the very fact that laws entail counterfactuals is incompatible with regularity theory.[31]  He suggests that counterfactuals are either false or trivially true. If it is now true that Q occurs when P causally precedes Q, then the regularist may sufficiently account for past counterfactual claims: given the present antecedent condition P at t_n, where P implies Q at t_n and it was true that P implied Q at t_(n-1), then, using P as an antecedent for R at a hypothetical t_(n-1)′, R is true if P was a sufficient condition for R at t_(n-1)′. Thus RT accounts for past counterfactuals, but only trivially.  In positive favor of NT, however, on RT there is no reason to expect the world to continue to behave in a regular manner, as presupposed by the practice of induction.  Consider Robin Collins’ illustration of this point:

Suppose that a coin were tossed one thousand times and each time it came up heads.  Both [NT and RT proponents] would agree that such an occurrence cries out for explanation, such as that the coin was biased strongly in favor of heads; such an occurrence would constitute too grand of a coincidence to be plausibly ascribed by chance.  Moreover, only if we believed that there was some such explanation would we have any reason to believe that the coin would continue to come up heads in the future; if we discovered that it had landed on heads by mere accident, we would have no reason to believe that it would continue to land on heads.[32]

The regularist may point out that generalizations from finite sample sets cannot be warranted unless the appropriate necessary connections are postulated; this is the problem of induction, whose examination has often been the occasion for introducing NT. But unless the necessitarian is prepared to say that the relation of necessity is actually observed in the instances of some law, the inference to a necessary law raises the problem of induction just as easily.[33]

Thus, NT and RT each fall short in explanatory scope at some point or another.  The regularist can only account for past and present occurrences of laws; universal implication and induction over future instances do not promise certainty in prediction.  Given (x)(Px ⊃ Qx), the necessitarian will claim that Qx is just a brute fact of necessity, while the regularist will claim that the regularities themselves are brute facts.[34]  The regularist can certainly account for past and, to an extent, present behavior of laws, but the necessitarian has no basis for even asserting necessity. At least the regularists may argue from previous empirical evidence that, even though there is no guarantee that (x)(Px ⊃ Qx) holds, they can still make probability claims.

Gold has an atomic weight of 196.966543.  This follows necessarily from gold’s atomic structure, but gold itself is contingent.  Since this is an analytic a posteriori claim, there is no true counterfactual about the atomic weight.  By contrast, the law governing alpha-particle decay in the half-life of a uranium atom is purely probabilistic.  The probability remains constant over time and is the same in every uranium atom; there is no difference at all between two uranium atoms, one of which decays in the next minute while the other does not.[35]  It is true that the rate of decay can be altered: exposing an aqueous solution of 232U with gold nanoparticles to laser radiation alters the stability of the atom and accelerates the alpha and beta decay.[36]  The fact remains, however, that when the decay occurs is determined by the quantum world of probability (depending on one’s interpretation of quantum mechanics). In neither of these two cases does NT or RT provide a preferred explanation of the counterfactual states of affairs. Such counterfactual claims are empty for certain laws: some laws have counterfactual truth and others are vacuous.  If one were to express alpha decay as (x)(Px ⊃ Qx), then, for every alpha particle, if an alpha particle of uranium obtains, the claim that it will decay is true but cannot be causally or temporally indexed.
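A small simulation can make the memoryless character of the decay law vivid. This is a toy model with a hypothetical per-step decay probability, not uranium's actual decay constant:

```python
import random

# Toy illustration of probabilistic decay: every surviving atom has the
# same fixed chance p of decaying in any given time step, no matter how
# long it has already survived (memorylessness).

random.seed(0)
p = 0.01             # hypothetical per-step decay probability
survivors = 100_000  # ensemble of identical "uranium" atoms

for step in range(1, 4):
    before = survivors
    decayed = sum(1 for _ in range(before) if random.random() < p)
    survivors -= decayed
    print(f"step {step}: fraction decayed = {decayed / before:.4f}")

# The per-step fraction hovers around p for old and young atoms alike;
# nothing distinguishes an atom that is "about to decay" from one that
# will survive for ages -- exactly the point about uranium above.
```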

The Princeton philosopher David Lewis and Tegmark have postulated a metaphysical multiverse (MM) to account for the behavior of natural laws.  Their proposed multiverse scenarios entail modal realism.  This modal realism is, in a sense, modally limited.  The state of affairs of the non-existence of anything cannot obtain if something does exist, so by definition modal realism must entail ~◊W, with W being the non-existence of anything (nothing), lest it suffer the consequence of being intrinsically incoherent.  Under such an MM, different regions of space will exhibit different effective laws of physics (i.e., different constants, dimensionality, particle content, relation of information, information propagation, etc.) corresponding to different local minima in a landscape of possibilities.[37]  This could obtain in several different ways, such as a bubble’s location in a string landscape, or, in unitary quantum physics, a wave function that never collapses so that all possibilities are actualized.  Such an approach denies counterfactual definiteness: any counterfactual about measurements that have not been performed is empty of meaning and truth.

These MM scenarios allow their proponents to get the best of both worlds (pun intended).  The MM avoids the problem of RT by allowing variance in the behavior of laws.  A tropical fish that never leaves the ocean might mistakenly conclude that the properties of water are universal, not realizing that there is also ice and steam.  We may be smarter than fish, but we may just as easily be fooled.[38]  This is a shortcoming of RT: for all we know, such regularities are localized instantiations.  The problems of NT and RT are avoided, but the MM takes on its own problem, similar to NT’s problem of accounting for the mechanisms that produce the varying laws and values.  It is a displacement issue.

IV. Anthropic Reasoning in the Multiverse

In order to use multiverse scenarios as a means of avoiding the problems of fine-tuning, the objector to the fine-tuning hypothesis (FT) hopes that the larger the number of possible values of physical parameters provided by the string landscape, the more string theory legitimates anthropic reasoning as a new basis for physical theories.[39]  This then becomes not only a physical theory but a metaphysical one. Roughly speaking, the anthropic argument takes as its starting point the fact that the universe we perceive about us must be of such a nature as will produce and accommodate the existence of the observers who can perceive it.[40]

The anthropic principle takes two primary forms:[41] the weak (WAP) and the strong (SAP).  The WAP is a reflective, happenstantial inquiry: the observed values of all physical and cosmological quantities are not equally probable, but take on values restricted by the requirement that there exist sites where carbon-based life can evolve and by the requirement that the universe be old enough for it to have already done so.[42]  The SAP is much more problematic: rather than considering just one universe, we envisage an ensemble of possible universes among which the fundamental constants of nature vary; sentient beings must find themselves located in a universe where the constants of nature (in addition to the spatiotemporal location) are congenial.[43]

Fig. 6[44]

John Barrow and Frank Tipler have three different interpretations for the SAP: (1) there exists one possible universe designed with the goal of generating and sustaining observers; (2) observers are necessary to bring the universe into being; and (3) an ensemble of other different universes is necessary for the existence of our universe. The non-fine-tuning hypothesis starts with the universe or environment and argues that life evolved to be compatible with that environment (either WAP or SAP).  The WAP does not seem to have any explanatory power over the actual values and the existence of the universe we find ourselves in.  It is useful for noting whether the laws of physics have changed in our lifetime; obviously, if they had changed, we wouldn’t be here to observe it.  The WAP becomes abused when it is treated as an explanatory hypothesis to account for fine-tuning.  FT starts with life and looks at all the sufficient and necessary conditions required for life to exist; these conditions constitute the fine-tuning data. The fine-tuner proponent will therefore adopt the SAP1 interpretation.

Physicist Victor Stenger advocates the multiverse as an explanatory hypothesis to account for the anthropic principle, thus adopting SAP3. He offers as one possible natural explanation of the anthropic coincidences that multiple universes exist with different physical constants and laws, and that our life form evolved in one suitable for us.[45]  In the analogy of a computer with knobs that determine the values of all the physical parameters, if one ever so slightly changes the value of, say, the weak nuclear force, life could not exist.  Stenger is of the opinion that if one were to completely reconfigure all the values together, with correspondingly different physics, this would sufficiently explain the existence of life.[46] Stenger’s analogy and SAP3 will be the primary focus of chapter six, where I will harmonize these multiverse scenarios with FT and attempt to demonstrate that such scenarios actually increase the explanatory power and scope of the fine-tuning hypothesis.

V. Can Probabilities Be Calculated in Multiverse Scenarios?

Whenever probability is being considered, there must be some relevant or total background information (usually depicted as k).  The immediate objection to applying a probability rule or calculus to the fine-tuning of the universe in a multiverse scenario is that this universe is not an appropriate random sample.  In other words, since we know of only one universe with these values, the sample size is precisely 1; thus, no random sample can be used to assess the probability of certain values of physics in the argument.  In statistics, a random sample drawn must have the same chance of being sampled as all the other samples.  Since we know of only one universe, we do not know what the range of values for the constants and the physics could be.  Moreover, since we don’t know how narrow or broad these ranges could be, there is, on this objection, no way of constructing any probability-based argument for fine-tuning.

However, we can know what other universes would be like if the values were different.  If our natural laws have counterfactuals that are in any way coherent, then this is an appropriate sampling.  Moreover, anyone who makes this objection while advocating that we just so happen to live in a life-permitting universe within the multiverse cannot consistently press it, since the claim that we happen to live in a life-permitting universe amongst countless others suggests that we can know what the other samplings are.  For instance, if the strong nuclear force were any stronger, the universe would be composed of only hydrogen, and if gravity were any weaker, stars could never form to create the heavier elements.  If these counterfactuals make coherent sense and are possible, then we can draw an appropriate random sample.  Note also that we do not need to know how narrow or broad the range of values could be; that does not matter.  It could be very narrow or extremely broad, and the sampling would still be appropriate.[47]  Thus, by virtue of the possible counterfactual expressions of the values of the constants and laws of nature, I believe we can make an appropriate probability-based calculation.
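A toy numerical model may help fix the idea. Here the life-permitting window and the candidate ranges are entirely hypothetical; the point is only that the sampling procedure is well defined whether the assumed range is narrow or broad:

```python
import random

# Toy model: sample a "constant" uniformly from an assumed range and
# count how often it lands in a life-permitting window. All numbers
# are illustrative, not physical values.

random.seed(1)
window = 1.0                        # width of the life-permitting window
draws = 1_000_000

for full_range in (1e2, 1e3, 1e4):  # candidate ranges, narrow to broad
    hits = sum(1 for _ in range(draws)
               if random.uniform(0.0, full_range) <= window)
    print(f"range {full_range:.0e}: P(life-permitting) ~ {hits / draws:.1e} "
          f"(expected {window / full_range:.1e})")

# The probability estimate is well defined in every case; broadening the
# assumed range only shrinks the life-permitting fraction further.
```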

The role probability serves in this argument does not favor the non-fine-tuning hypothesis (whether chance, necessity, or some combination) in multiverse scenarios.  The objector to a fine-tuner may argue that the odds of having a finely tuned, life-harboring universe increase given the vast number of universes: there is bound to be one with the values we observe.  This is an abuse of probability and commits the gambler’s fallacy.  The claim assumes the general disjunction rule of probability.  That rule implies, for example, that the probability of eventually drawing a king from a deck of cards increases when each card drawn is not replaced; if the deck has all the cards it is supposed to have, the probability of eventually drawing a king is 1.  But the multiverse is instead like the restricted or general conjunction rule of probability: simply increasing the number of possibilities does not increase the probability of any given selection.  For example, say you randomly draw a card from the deck and you want the king of spades.  The odds of drawing the king of spades are 1/52.  Say you draw the three of hearts.  When you replace the card and draw again at random, the odds of getting the king of spades are not increased by the first selection.
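A short simulation of draws with replacement illustrates the point; the card labels and trial count are arbitrary choices of mine:

```python
import random

# Draws *with replacement*: the chance that any particular draw is the
# king of spades stays 1/52, regardless of how many draws came before.

random.seed(2)
DECK = 52
KING_OF_SPADES = 0      # arbitrarily label one card as the target
trials = 200_000

hits_on_tenth = 0
for _ in range(trials):
    tenth_draw = [random.randrange(DECK) for _ in range(10)][-1]
    if tenth_draw == KING_OF_SPADES:
        hits_on_tenth += 1

print(f"P(10th draw is the king) ~ {hits_on_tenth / trials:.4f} "
      f"(1/52 = {1/52:.4f})")
# Prior draws do nothing to raise this per-draw probability; what grows
# with more draws is only the chance that *some* draw in the series
# succeeds, which is a different claim from the one the objector needs.
```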

VI. Conclusion

Although some of the laws of physics can vary from universe to universe in string or inflationary multiverse scenarios, the fundamental laws and principles underlie those very scenarios and therefore cannot be explained as a multiverse selection effect. Further, since the variation among universes would consist of variation in the masses and types of particles and in the form of the forces between them, complex structures would almost certainly be atom-like, and stable energy sources would almost certainly require aggregates of matter.  Thus, these fundamental laws seem necessary for there to be life in any of the many universes, not merely in a universe with our specific types of particles and forces.[48]  Physicists, cosmologists, and philosophers must either accept the laws of nature and the basic premises of inflationary cosmology and string theory as metaphysical brute facts or seriously entertain the possibility of the fine-tuning hypothesis; that is, the possible existence of a fine-tuner.

Endnotes


[1] See Max Tegmark, “The Multiverse Hierarchy,” arXiv:0905.1283v1 (accessed March 15, 2011).

[2] When Tegmark refers to an “identical” copy he simply refers to a similar copy.  There is a genuine ontological distinction.  Ibid., 2.

[3] Ibid.

[4] Ibid., 7.  Additionally, there has been good scientific evidence suggesting observational grounds for inflation.  Researchers have taken the 7-year WMAP data and applied certain algorithms to pick up traces of thermal fluctuations in the early universe.  What they found were traces of what could be bubble collisions of the edges of our universe with another universe.  Stephen Feeney, et al., “First Observational Tests of Eternal Inflation:  Analysis Methods and WMAP 7-year Results,” arXiv:1012.3667v2 (accessed March 16, 2011).

[5] Andrei Linde and Vitaly Vanchurin, “How Many Universes are in the Multiverse?” arXiv:0910.1589v2 (accessed March 15, 2011).

[6] Tegmark, 10.

[7] Ibid., 2, 12-13.

[8] John D. Barrow, The Constants of Nature: The Numbers Encode the Deepest Secrets of the Universe (New York: Random House, 2003), 182.

[9] Alan Guth and Yasunori Nomura, “What Can the Observation of Nonzero Curvature Tell Us?” arXiv:1203.6876v1, 1 (accessed May 6, 2012).

[10] Alan Guth, “Eternal Inflation and Its Implications,” in The Nature of Nature, eds. William Dembski and Bruce Gordon (Wilmington, DE: Intercollegiate Studies Institute, 2011), 487.

[11] An e-folding is the time interval over which an exponentially growing quantity or volume increases by a factor of e.

[12] Ibid.

[13] Ibid., 488.

[14] This all-sky image shows the distribution of the Galactic Haze seen by ESA’s Planck mission at microwave frequencies superimposed over the high-energy sky as seen by NASA’s Fermi Gamma-ray Space Telescope. The Planck data (shown here in red and yellow) correspond to the Haze emission at frequencies of 30 and 44 GHz, extending from and around the Galactic Centre. The Fermi data (shown here in blue) correspond to observations performed at energies between 10 and 100 GeV and reveal two bubble-shaped, gamma-ray emitting structures extending from the Galactic Centre. This becomes important in the next chapter: it has been posited that these bubbles in the data may in fact be the result of an early-universe collision with another universe’s bubble. ESA/Planck and NASA/DOE/Fermi LAT/Dobler et al./Su et al., http://sci.esa.int/science-e/www/object/index.cfm?fobjectid=50008 (accessed May 6, 2012). P.A.R. Ade et al., “Planck Early Results. I. The Planck Mission,” arXiv:1101.2022v2 (accessed May 6, 2012). N. Aghanim et al., “Planck Intermediate Results II: Comparison of Sunyaev-Zeldovich Measurements from Planck and from the Arcminute Microkelvin Imager for 11 Galaxy Clusters,” arXiv:1204.1318v1 (accessed May 6, 2012).

[15] Guth, “Eternal Inflation,” 488.

[16] Ibid.

[17] Andrei Linde, “The Self-Reproducing Inflationary Universe: Recent Versions of the Inflation Scenario Describe the Universe as a Self-Generating Fractal That Sprouts Other Inflationary Universe,” Scientific American (Nov. 1994): 54.

[18] Alan Guth, The Inflationary Universe: The Quest for a New Theory of Cosmic Origins (Reading, MA: Perseus, 1997), 332. R.H. Dicke and P.J.E. Peebles, in S.W. Hawking and W. Israel, eds., General Relativity: An Einstein Centenary Survey (Cambridge: Cambridge University Press, 1979).

[19] Guth, “Eternal Inflation,” 490.

[20] Roger Penrose, The Road to Reality (New York: Random House, 2004), 737.

[21] Alan Guth, The Inflationary Universe: The Quest for a New Theory of Cosmic Origins (Reading, MA: Perseus, 1997), 327.

[22] Linde, “The Self-Reproducing Universe,” 50-51.

[23] Ibid.

[24] Linde, “The Self-Reproducing Universe,” 49.

[25] Ibid.

[26] Penrose, The Road to Reality, 755.

[27] Bernard Berofsky, “The Regularity Theory,” Nous Vol. 2 No. 4 (1968): 315.

[28] Robin Collins, “God and the Laws of Nature,” Philo Vol. 12 No. 2 (2009): 2-3. (Preprint).

[29] Berofsky, 316.

[30] Collins, 4.

[31] C.D. Broad, “Mechanical and Teleological Causation,” Proceedings of the Aristotelian Society, Supplementary Volume XIV (1935).

[32] Ibid.

[33] Berofsky, 325-26.

[34] Collins, 11.

[35] Alex Rosenberg, Philosophy of Science (New York: Routledge, 2012), 92.

[36] A.V. Simakin and G.A. Shafeev, “Accelerated Alpha-Decay of 232U Isotope Achieved by Exposure of its Aqueous Solution with Gold Nanoparticles to Laser Radiation,” http://arxiv.org/pdf/1112.6276.pdf (accessed March 6, 2012), 1-2.

[37] Max Tegmark, “The Multiverse Hierarchy,” http://arxiv.org/pdf/0905.1283v1.pdf (accessed March 6, 2012), 1.

[38] Ibid.

[39] Steven Weinberg, “Living in the Multiverse,” in The Nature of Nature, 548.

[40] Penrose, The Road to Reality, 758.

[41] There is actually a third anthropic principle, the Final Anthropic Principle (FAP): intelligent information-processing must come into existence in the universe and, once it comes into existence, will never die out. In a rare display of humor among scientists and mathematicians, the polymath Martin Gardner referred to the FAP as the Completely Ridiculous Anthropic Principle (CRAP). Martin Gardner, “WAP, SAP, PAP, and FAP,” New York Review of Books 23, no. 8 (May 8, 1986): 22-25.

[42] John Barrow and Frank Tipler, The Anthropic Cosmological Principle (Oxford: Oxford University Press, 1986), 16.

[43] Penrose, The Road to Reality, 758-59.

[44] Let the universe on the left depict the WAP scenario and the universe on the right, amongst alternative universes, depict the SAP. Ibid., 759.

[45] Victor Stenger, The Fallacy of Fine-Tuning: Why The Universe is Not Designed for Us (Amherst, NY: Prometheus, 2011), 42.

[46] Victor Stenger, God: The Failed Hypothesis: How Science Shows That God Does Not Exist (Amherst, NY: Prometheus, 2008), 137-164.

[47] What may be argued for is the mechanism that produces these random constants; this mechanism would be superstring theory or M-theory.  Even though there may be a huge number of possible universes lying within the life-permitting region of the cosmic landscape, that life-permitting region will nevertheless be unfathomably tiny compared to the entire landscape. This also shows that the physical universe itself is not unique.  The physical universe does not have to be the way it is: it could have been otherwise, functioning under different laws. Paul Davies, The Mind of God (New York: Simon & Schuster, 1992), 169.  Davies means the laws of physics with their actual values of the constants, not confusing there being different values of the constants with there being different laws.

[48] Robin Collins, “The Teleological Argument,” in The Blackwell Companion to Natural Theology, eds. William Lane Craig and J.P. Moreland (Oxford: Blackwell, 2009), 277.

