Max,

I want to run something by you to get your opinion. The KCA and fine-tuning arguments are presented as philosophical/logical arguments with some scientific premises. Some skeptics that don’t like philosophy will dismiss it and appeal to scientism.

But if we look at something like the detection and declaration of black holes, aren’t they doing the same things? They aren’t looking at direct observation but instead looking at effects and making inferences to the best explanation for the cause. If that is accepted as science then the KCA and the fine-tuning arguments should be as well.

I’m not interested in declaring the KCA and fine-tuning to be science but I’m thinking that an analogy such as this might be useful when a skeptic cries god-of-the-gap.


## Why Inductive Fine-Tuning Arguments are Weak

Inductive logic, generally speaking, takes properties observed in a subset of elements and extends them to a broader set. More specifically, the principle of mathematical induction states that if zero has a property, *P*, and if whenever a number has the property its successor also has the property, then all numbers have the property:[1]

[*P*(0) & ∀*n*(*P*(*n*) → *P*(*n* + 1))] → ∀*n* *P*(*n*)
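The two clauses of the principle can be illustrated in code for a sample property (the property chosen here, that 0 + 1 + … + *n* equals *n*(*n* + 1)/2, is my own illustration, not from the text). A program can only spot-check finitely many cases; the principle itself is what licenses the conclusion for *all* numbers:

```python
# Sketch of the principle of mathematical induction for a sample
# property P(n): "0 + 1 + ... + n == n*(n+1)/2".
# Code can only verify finitely many instances; the principle
# extends the conclusion to every number.

def P(n):
    """The property: the sum of 0..n equals the closed form n(n+1)/2."""
    return sum(range(n + 1)) == n * (n + 1) // 2

# Base clause: zero has the property.
assert P(0)

# Inductive clause, spot-checked for n = 0..999: whenever n has the
# property, its successor n + 1 has it too.
assert all(P(n + 1) for n in range(1000) if P(n))

print("base case and inductive step hold on the checked range")
```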

Induction works by enumeration: as support for the conclusion that all *p*’s are *q*’s, one could list many examples of *p*’s that are *q*’s. Induction also includes ampliative arguments, in which the premises, while not entailing the truth of the conclusion, nevertheless purport to give good reason for accepting it.[2]
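The enumeration pattern, and its ampliative (non-entailing) character, can be sketched with a stock counterexample of my own choosing, not one from the text: every prime we sample above 2 is odd, yet the generalization "all primes are odd" is false.

```python
# Enumerative induction: list many p's that are q's as support for
# "all p's are q's". The support is ampliative -- it does not entail
# the conclusion, as this stock counterexample shows.

def primes_between(lo, hi):
    """Naive primality scan; fine for a small illustration."""
    out = []
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            out.append(n)
    return out

# Every sampled prime above 2 is odd...
sample = primes_between(3, 10_000)
assert all(p % 2 == 1 for p in sample)
print(f"{len(sample)} observed primes are all odd")

# ...yet the generalization "all primes are odd" is false: 2 is prime.
assert 2 in primes_between(2, 10)
print("counterexample: 2 is an even prime")
```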

Inductive inference in the sciences has generally been successful: it was used by Galileo and Kepler, and it even led to the discovery of Neptune. The English astronomer John Michell exemplified this kind of inference in a discussion of the ‘probable parallax and magnitude of the fixed stars’ published by the Royal Society in 1767.[3] Michell found that the incidence of apparently close pairings of stars was too great for them all to be effects of line of sight, and that, next to a certainty, such observed pairs of stars must actually be very close together, perhaps moving under mutual gravitation. Michell’s conclusion was not corroborated for forty years, until William Herschel’s confirmatory observations.[4]
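Michell’s reasoning can be gestured at with a toy Monte Carlo simulation. The star count, pair-separation cutoff, and trial count below are illustrative assumptions of mine, not Michell’s actual figures: scatter stars uniformly at random on the sky and count how often apparent close pairs would arise by line-of-sight chance alone.

```python
# Toy version of Michell's argument: how many close pairs does pure
# chance produce? All numbers here are illustrative assumptions.
import math
import random

random.seed(42)

def random_star():
    """Uniform point on the unit sphere (uniform z and longitude)."""
    z = random.uniform(-1.0, 1.0)
    phi = random.uniform(0.0, 2.0 * math.pi)
    r = math.sqrt(1.0 - z * z)
    return (r * math.cos(phi), r * math.sin(phi), z)

def chance_pairs(n_stars, max_sep_deg, trials):
    """Average number of close pairs arising by line-of-sight chance."""
    cos_cut = math.cos(math.radians(max_sep_deg))
    total = 0
    for _ in range(trials):
        stars = [random_star() for _ in range(n_stars)]
        for i in range(n_stars):
            for j in range(i + 1, n_stars):
                dot = sum(a * b for a, b in zip(stars[i], stars[j]))
                if dot > cos_cut:  # angular separation below the cutoff
                    total += 1
    return total / trials

# Illustrative: 500 random stars, pairs closer than half a degree.
expected = chance_pairs(500, 0.5, 20)
print(f"close pairs expected by chance alone: {expected:.2f}")
# If a catalogue showed far more such pairs than this, chance alignment
# would be a poor explanation -- Michell's inference to physically
# associated pairs is an inference to the best explanation of the excess.
```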

## The Problem with Inductive Arguments

All induction problems may be phrased in a way that depicts a sequence of predictions. Inductive problems will contain a previous indicator, or *explanans*, for the *explanandum*. Take, for instance, Carl Hempel’s example of Jones’ infection:

*p*(*R* | *S* & *P*) is close to 1
*Sj* & *Pj*
====================== [makes it practically certain]
*Rj*

where *j* is Jones, *p* is the probability, *Sj* is Jones’ infection, *Pj* is his being treated with penicillin, and *Rj* is his recovery. If the probability of observing *R*, given the past observations *S&P*_{1}, *S&P*_{2}, …, *S&P*_{n} (call the probability of the set meeting *R* *m*), is close to 1, then a predictive *explanans* (the *S&P*_{n}) can be made for future instances of *m* using an inductive-statistical explanation. For if the probability *m*(*S&P*_{n} | *S&P*_{1}, *S&P*_{2}, …) is a computable function and the range of data is finite, then a posterior prediction *M* can be made from *m*; *M* can legitimately be referred to as a universal predictor in cases of *m*. It is here that Hempel rejects the requirement of maximal specificity (RMS), contra Rudolf Carnap, the RMS being a maxim of inductive logic held to be a necessary condition for the rationality of any given knowledge situation *K*. Let *K* represent the set of data known in *m*; according to Hempel, we can never have *all* the material for *K*.
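The inductive-statistical inference can be mimicked by estimating *m* from a finite record of past cases. The 96% recovery rate and the sample size below are invented for illustration, not Hempel’s data:

```python
# Toy inductive-statistical inference in Hempel's style.
# Each past case records whether an infected, penicillin-treated
# patient recovered; the 0.96 recovery rate is an invented figure.
import random

random.seed(0)
past_cases = [(True, random.random() < 0.96) for _ in range(1000)]

recoveries = [recovered for (treated, recovered) in past_cases if treated]
m = sum(recoveries) / len(recoveries)  # estimated p(R | S & P)

print(f"estimated p(R | S & P) = {m:.3f}")

# If m is close to 1, the explanans (Sj & Pj) confers high inductive
# probability -- not deductive certainty -- on the explanandum Rj.
if m > 0.9:
    print("predict: Jones will (very probably) recover")
```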


## Mathematical Induction

A method of proving mathematical theorems, used particularly for series sums. For example, it is possible to show that the series 1 + 2 + 3 + 4 + … has a sum of *n* terms of *n*(*n* + 1)/2. First one shows that the formula holds for *n* = 1 (the sum of one term is 1, and 1(1 + 1)/2 = 1); then one must show that if it is true for *n* terms it must also be true for (*n* + 1) terms. According to the formula,

*S*_{n} = *n*(*n* + 1)/2

If the formula is correct, the sum of (*n* + 1) terms is obtained by adding (*n* + 1) to this:

*S*_{n+1} = *n*(*n* + 1)/2 + (*n* + 1)

*S*_{n+1} = (*n* + 1)(*n* + 2)/2

This agrees with the result obtained by replacing *n* in the general formula by (*n* + 1), i.e.:

*S*_{n+1} = (*n* + 1)(*n* + 1 + 1)/2

*S*_{n+1} = (*n* + 1)(*n* + 2)/2
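The derivation can be spot-checked numerically; a quick sketch:

```python
# Numerical spot-check of S_n = n(n+1)/2 and of the inductive step
# S_{n+1} = S_n + (n + 1) = (n+1)(n+2)/2.

def closed_form(n):
    """The closed-form sum n(n+1)/2 for 1 + 2 + ... + n."""
    return n * (n + 1) // 2

for n in range(1, 200):
    s_n = sum(range(1, n + 1))                      # 1 + 2 + ... + n
    assert s_n == closed_form(n)                    # general formula
    assert s_n + (n + 1) == (n + 1) * (n + 2) // 2  # inductive step

print("formula and inductive step verified for n = 1..199")
```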