Why Inductive Fine-Tuning Arguments are Weak

by Max Andrews

Inductive logic, generally speaking, reasons from the elements of a subset to a broader set.  More specifically, the principle of mathematical induction states that if zero has a property, P, and if whenever a number has the property its successor also has the property, then all numbers have the property.[1]
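In symbols, a standard rendering of this principle (my formulation, as the formula itself does not appear in the text above) is:

```latex
\bigl[\, P(0) \;\land\; \forall n\,\bigl(P(n) \rightarrow P(n+1)\bigr) \,\bigr] \;\rightarrow\; \forall n\, P(n)
```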

Induction works by enumeration: as support for the conclusion that all p’s are q’s, one lists many examples of p’s that are q’s.  It also includes ampliative arguments, in which the premises, while not entailing the truth of the conclusion, nevertheless purport to give good reason for accepting it.[2]

Inductive probability has generally been successful in the sciences.  It was used by Galileo and Kepler, and it even led to the discovery of Neptune.  The English astronomer John Michell exemplified this in a discussion of ‘probable parallax and magnitude of the fixed stars’ published by the Royal Society in 1767.[3]  Michell found that the incidence of apparently close pairings of stars was too great for them all to be effects of line of sight, and that, next to a certainty, such observed pairs of stars must actually be very close together, perhaps moving under mutual gravitation.  Michell’s conclusion was not corroborated for forty years, until William Herschel’s confirmatory observations.[4]

William Paley

However, induction faces serious problems when applied to the fine-tuning argument.  The fine-tuning argument can be expressed inductively if the premises make the conclusion probable.  An inductive form of the argument typically resembles William Paley’s argument from design.

  1. Entity e in nature, or nature itself, is like specified human artifact a in relevant properties p.
  2. a has p precisely because it is a product of deliberate design by intelligent human agency.
  3. Like effects typically have like causes or like explanations.
  4. Therefore, it is probable that e has p precisely because it too is a product of deliberate design by intelligent, relevantly human-like agency.[5]

This analogizes known instances of design, such as a watch, a computer, or a light bulb, with the natural phenomena to be explained (e.g. DNA, the aptness of certain equations).  Expressed inductively: the properties of the natural entity are compared with the properties of known, observed designs, and some subset of those properties counts as similar.  One problem is fixing the line of demarcation: how many similarities warrant concluding design (or fine-tuning), and how much dissimilarity warrants concluding not-design?  A further distinction is needed for the appropriate margin by which a property counts as similar (this asks the question, “How similar are these properties?”) and for what makes a property count as dissimilar.

In known instances of design we can know that final causation has occurred: a light bulb was evidently designed to fulfill the purpose of illumination.  But in attempting to demonstrate final causation, the inductive argument must assume final causation if any criterion of ‘function’ is to be compared at all.  This problem is akin to the requirement of maximal specificity.  Though this does not completely eliminate the power of a fine-tuning argument, when it is formulated so that the premises merely support the conclusion, it is a weaker version for a historical method of inquiry.
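The demarcation worry can be made concrete with a toy sketch.  Suppose, purely for illustration (the property sets and the choice of a Jaccard index as the similarity measure are my assumptions, not part of the argument), that similarity is scored as the proportion of shared properties; the inductive verdict then turns entirely on where an arbitrary threshold is drawn:

```python
def jaccard(a: set, b: set) -> float:
    """Similarity between two property sets: shared properties / total properties."""
    return len(a & b) / len(a | b)

# Hypothetical property sets, invented only for illustration.
watch = {"has parts", "parts interact", "serves a function", "irreducible arrangement"}
dna   = {"has parts", "parts interact", "serves a function", "self-replicating"}

score = jaccard(watch, dna)  # 3 shared out of 5 total = 0.6

# The inductive verdict flips with the (arbitrary) cutoff:
for threshold in (0.5, 0.7):
    verdict = "design" if score >= threshold else "not-design"
    print(f"threshold {threshold}: {verdict}")
```

Nothing in the analogy itself tells us whether 0.5 or 0.7 is the right cutoff, which is the demarcation problem in miniature.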



[1] A.D. Irvine, “The Philosophy of Logic” in Philosophy of Science, Logic and Mathematics in the Twentieth Century (New York: Routledge, 2000), 13.

[2] Ibid., 389.  As I’ll note, one of the reasons induction is supported also entails a reason it is weakened as an explanatory methodology.  All induction problems may be phrased in a way that depicts a sequence of predictions.  Inductive problems will contain a previous indicator, or explanans, for the explanandum.  For instance, Carl Hempel’s example of Jones’ infection:
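Hempel’s inductive-statistical schema for this case, reconstructed here in its standard textbook form, runs:

```latex
\begin{array}{l}
p(R \mid S \cdot P) \text{ is close to } 1 \\
Sj \cdot Pj \\ \hline
Rj
\end{array}
\quad [\text{makes practically certain}]
```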

Where j is Jones, p is the probability, Sj is Jones’ infection, Pj is his being treated with penicillin, and Rj is his recovery.  If the probability of observing R at any time, given the past observations S&P1, S&P2, …, S&Pn (the probability of the set meeting R is m), was close to 1, then a predictive explanans (the S&Pn) can be made for future instances of m using an inductive-statistical explanation.  For if the probability m(S&Pn | S&P1, S&P2, …) is a computable function and the range of data is finite, then a posterior prediction M can be made from m.  M can be legitimately referred to as a universal predictor in cases of m.  This is where Hempel rejects the requirement of maximal specificity (RMS), contra Rudolf Carnap, for whom the RMS is a maxim of inductive logic and a necessary condition for the rationality of any given knowledge situation K.  Let K represent the set of data known in m.  According to Hempel we cannot have all the material for K: at any future time, the explanandum consistent with the explanans of K (in this case, Rj) may differ when different data are factored in at different times.  It may be the case that future data that were impossible to consider bring about ~Rj.  I believe Carnap’s RMS should be understood as a principle rather than an axiom of inductive logic.  It seems the RMS is an attempt to make inductive arguments like deductive arguments.  So, instead of using M as a universal instantiation of future m, M may simply be a categorical similarity of m, a mere prediction, and only a prediction because it is tentative to future variations of like conditions in future explananda.  I know Carnap would suggest that in his system of inductive logic there can be a degree of confirmation of statements that assign an inductive probability to a hypothesis about a particular event relative to evidence statements about other particular events, and that no universal generalizations are involved.  Carl G. Hempel, “Inductive-Statistical Explanation,” in Philosophy of Science, eds. Martin Curd and J.A. Cover (New York: Norton, 1998), 706-708; Marcus Hutter, “On the Existence and Convergence of Computable Universal Priors,” Technical Report 5, no. 3 (2003): 1-3; Wesley Salmon, “Hempel’s Conception of Inductive Inferences in Inductive-Statistical Explanation,” Philosophy of Science 44, no. 2 (1977): 183.
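The force of the RMS point can be sketched numerically.  Assuming some invented case data (the numbers and the resistant-strain factor are hypothetical, chosen only for illustration), the estimated probability of recovery is high in the broad reference class but collapses once a maximally specific fact is factored in:

```python
# Hypothetical records: (strep_infection, penicillin, resistant_strain, recovered)
cases = (
    [(True, True, False, True)] * 95   # susceptible strain, recovered
    + [(True, True, False, False)] * 5 # susceptible strain, did not recover
    + [(True, True, True, True)] * 1   # resistant strain, recovered
    + [(True, True, True, False)] * 9  # resistant strain, did not recover
)

def p_recovery(records) -> float:
    """Relative frequency of recovery within a reference class."""
    return sum(r[-1] for r in records) / len(records)

# Broad reference class K: strep infection treated with penicillin.
broad = [c for c in cases if c[0] and c[1]]
# More specific class: the same, but the strain is penicillin-resistant.
specific = [c for c in broad if c[2]]

print(p_recovery(broad))     # high: licenses predicting Rj
print(p_recovery(specific))  # low: the augmented evidence now supports ~Rj
```

This is the sense in which data not yet considered can turn an explanans that supported Rj into one that supports ~Rj.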

[3] This paper was published just three years after Thomas Bayes published his theorem on conditional probability of a single occurrence of a given event, which rests between any two degrees of probability.

[4] Ivor Grattan-Guinness, The Norton History of the Mathematical Sciences (New York: Norton, 1998), 341.

[5] This is similar to the form used by Del Ratzsch, “Teleological Arguments for God’s Existence” in The Stanford Encyclopedia of Philosophy (Spring 2012 Edition), ed. Edward N. Zalta. plato.stanford.edu/archives/spr2012/entries/teleological-arguments (Accessed April 23, 2012).

