The Deductive-Nomological (D-N) model, strictly speaking, seems ideal but untenable. It suits empiricists arguing from fixed premises, but it hardly seems amenable to novel discoveries, or even to predictions. The D-N model does have robust explanatory scope and power for causal laws such as the laws of conservation. It has no explanatory power, however, for other laws (e.g. the Pauli Exclusion Principle, which prohibits atomic electrons from collapsing into the nucleus or being propelled away from it). If the D-N model were to implement the Pauli Exclusion Principle, it would face a self-defeating condition in the *explanandum* or the *explanans* (depending on how the principle is being used). The model itself thus seems inert, in the sense that it could never be verified or falsified by its own merits and criteria. It stands in a privileged explanatory position.

Additionally, the D-N model seems incompatible with many models of our universe. It assumes that the universe is deterministic. Its view of causality is more than the Humean notion of effects rooted in habits of association, and rightly so, but it assumes that causality applies in every instance of a law. This raises several problems in the quantum world. Quantum calculations are based solely on probabilities. The vast majority of quantum interpretations are indeterministic (e.g. the traditional Copenhagen, GRW, Popper, and transactional interpretations), though there are other interpretations on which the quantum world is deterministic (e.g. de Broglie-Bohm and Many Worlds).[1] The point is that the world may not be completely deterministic, but it is certainly not chaotic either.[2] This is where I am caught between the efficacy of the I-S model and the D-N-P model. The D-N-P model makes sense of both deterministic and probabilistic *explananda*.

These models (D-N and D-N-P) do not answer the *why* questions. Consider the example of barometric pressure: whenever the atmospheric pressure falls, the weather turns bad; the atmospheric pressure is falling; therefore the weather will turn bad. There is no explanation of *why* the weather will turn bad. No connection between cause and effect, no mechanism by which falling atmospheric pressure produces a change for the worse in the weather, has been revealed. Surely there is a connection, but the argument does not mention it (e.g. by manipulating laboratory conditions). The same form would encompass an even weaker rendition: whenever the water overflows from the glass container, the weather turns bad; the water overflows from the glass container; therefore the weather will turn bad. If we include the sufficient condition of the latter as a prior sufficient condition in the former argument, then we have a better model.[3] However, without the initial conditions provided in the aggregate argument, as if set in a laboratory, and without the law-like links and connections, the given explanation is poor. At best it explains the *how* and not the *why*. I believe the *why* question is misplaced: in the scientific community the *why* is synonymous with the *how* and can only mean the *how*. Even under ideal initial conditions, with complete manipulation of causes and effects in the laboratory, it comes down to the *how* of the workings.
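Set out schematically in D-N form (my own setting-out of the argument just discussed, with *L*, *C*, and *E* as the covering law, initial condition, and *explanandum*), the barometric argument runs:

```latex
\[
\begin{array}{l}
(L)\ \text{Whenever the atmospheric pressure falls, the weather turns bad.}\\
(C)\ \text{The atmospheric pressure is falling.}\\
\hline
(E)\ \text{The weather will turn bad.}
\end{array}
\]
```

The schema makes the gap visible: nothing in *L* or *C* mentions the mechanism connecting pressure to weather, so the derivation goes through without answering the *why*.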

All induction problems may be phrased so as to depict a sequence of predictions. Inductive problems will contain a previous indicator, or *explanans*, for the *explanandum*. Take, for instance, Hempel's example of Jones'[*] infection:[4]

[*] Where *j* is Jones, *p* is the probability, *Sj* is Jones' infection, *Pj* is his being treated with penicillin, and *Rj* is his recovery.
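Using these symbols, Hempel's I-S schema for the example can be sketched roughly as follows (my reconstruction of the cited passage; the double line marks inductive rather than deductive support):

```latex
\[
\begin{array}{l}
p(R \mid S \cdot P)\ \text{is close to } 1\\
Sj \cdot Pj\\
\hline\hline
Rj \qquad [\text{with high inductive probability}]
\end{array}
\]
```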

Suppose the probability of observing *R* at any time, given the past observations *S&P*_{1}, *S&P*_{2}, …, *S&P*_{n} (call the probability of the set meeting *R* *m*), is close to 1. Then a predictive *explanans* (the *S&P*_{n}) can be given for future instances of *m* using the I-S explanation. For if the probability *m*(*S&P*_{n} | *S&P*_{1}, *S&P*_{2}, …) is a computable function and the range of data is finite, then a posterior prediction *M* can be made from *m*. *M* can legitimately be referred to as a universal predictor in cases of *m*.[5]
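One standard way to make such a universal predictor precise, along the lines of the Solomonoff-style mixtures discussed in the cited Hutter paper (this formula is my own gloss, not the essay's), is as a weighted mixture over all computable predictors:

```latex
\[
M(x) \;=\; \sum_{\nu \in \mathcal{M}} w_{\nu}\, \nu(x), \qquad w_{\nu} = 2^{-K(\nu)},
\]
```

where \(\mathcal{M}\) is the class of computable (semi)measures and \(K(\nu)\) is the complexity of \(\nu\), so that simpler predictors carry more prior weight. On this reading, *M* predicts well in any environment *m* drawn from the class, which is the sense in which it is "universal."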

This is where Hempel rejects the RMS, contra Carnap, for whom the RMS is a maxim of inductive logic stating a necessary condition for rationality in any given knowledge situation *K*. Let *K* represent the set of data known in *m*. According to Hempel, we cannot have *all* the material for *K*. At any future time, the *explanandum* that is in sync with the *explanans* of *K* (here, *Rj*) may turn out differently when different data are factored in at different times. Future data that were impossible to consider may bring about ~*Rj*. I believe Carnap's RMS should be understood as a principle rather than an axiom of inductive logic. RMS seems to be an attempt to make inductive arguments behave like deductive arguments. So, instead of using *M* as a universal instantiation of future *m*, *M* may simply be a categorical similarity to *m*: a mere prediction, and only a prediction, because it is tentative with respect to future variations of like conditions in future *explananda*.

I know Carnap would suggest that in his system of inductive logic there can be a degree of confirmation for statements that assign an inductive probability to a hypothesis about a particular event, relative to evidence statements about other particular events, with no universal generalizations involved. If this were rejected, it seems it would be necessary to give up the I-S model as a covering-law model.[6] This is similar to what I have said above about how *M* should be understood. Using RMS as a principle can tell us the degree to which an event is to be expected relative to the total relevant evidence: it is tentative.[7] Thus the structural identity thesis is voided, and explanation is kept from being identical with prediction.

Aside from the predictive capabilities of the I-S explanation, Hempel made it quite clear that the final probability, even if it embodies all available and relevant data, is not a certainty. Unlike deductive arguments, the I-S conclusion carries a probabilistic qualifier. He rejects the concept of "epistemic utility," an attempt to formulate an inductive rule in terms of the relative utilities of accepting or not accepting various available hypotheses (or at best one without commitment to a particular theory of inductive confirmation).[8] There does, however, seem to be a problem with the ambiguity of I-S explanations. The epistemic ambiguity is this: the total set *K* of accepted scientific statements contains different subsets of statements which can be used as premises in arguments of the probabilistic form, and which confer high probabilities on logically contradictory conclusions.[9] I have found this a frustration with logic, though unjustifiably so. A material conditional is false only when its sufficient condition is true and its necessary condition is false; that is the only assignment on which the inference fails. However, when the conditions are swapped for others, usually ones with no inferential connection between them, while the conditional's truth-values are kept, the conclusion may come out T when it is actually F.
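The ambiguity can be sketched with the penicillin example. Hempel's own illustration turns on a penicillin-resistant strain of the infection; the symbol *T* for that strain is my addition:

```latex
\[
\begin{array}{l}
p(R \mid S \cdot P)\ \text{is close to } 1\\
Sj \cdot Pj\\
\hline\hline
Rj \quad [\text{highly probable}]
\end{array}
\qquad
\begin{array}{l}
p(\lnot R \mid S \cdot P \cdot T)\ \text{is close to } 1\\
Sj \cdot Pj \cdot Tj\\
\hline\hline
\lnot Rj \quad [\text{highly probable}]
\end{array}
\]
```

Both premise sets can belong to the accepted total set *K*, yet one confers high probability on *Rj* and the other on ~*Rj*, which is exactly the epistemic ambiguity at issue.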

Hempel referred to this problem of ambiguity in induction as having no analogue in deductive logic.[10] In a correct deductive argument, if the *premises are true*, the conclusion is true regardless of whether further evidence is considered. In a correct inductive argument, even if the premises are true and embody all available relevant evidence, the conclusion may still be false.[11] I would argue that there has to be a connection or relationship between the conditions. Consider the argument, as *modus ponens*: if the moon's core is made of cheese, then my desk is made of mahogany. What relationship do these two conditions have? The conditional nevertheless comes out true (F-T-T). I recognize, however, that this is merely a preference, which is at times convenient. When making a novel *explanans* and prediction, the relationship between the conditions may not be epistemically evident. This is one reason I prefer the I-S model over the D-N model: it is modest in its explanatory scope and in the epistemic range of data in the *explanandum*.
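The truth table for material implication makes the point explicit: the conditional is false only in the T-F row, so the cheese/mahogany conditional (false antecedent) comes out true despite the lack of any connection between its conditions:

```latex
\[
\begin{array}{cc|c}
p & q & p \rightarrow q\\
\hline
T & T & T\\
T & F & F\\
F & T & T\\
F & F & T
\end{array}
\]
```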

[1] I actually find myself more in stride with Bohm's interpretation. Its determinism is attractive to me, since I have metaphysical qualms with acausality. Objections from the principle of conservation are moot in an Einsteinian universe, because it is not causally closed. Even so, certain quantum interpretations, such as Ghirardi-Rimini-Weber (GRW), reject the principle of conservation. In a theistic context, GRW makes sense of external causes having an ontological link to the physical world without violating conservation. See Bradley Monton, "The Problem of Ontology for Spontaneous Collapse Theories," *Studies in History and Philosophy of Modern Physics* (2004): 9-10.

[2] Peter Railton, “A Deductive-Nomological Model of Probabilistic Explanation,” *Philosophy of Science* 45 no. 2 (1978): 206-7.

[3] Ibid., 207-8.

[4] Carl G. Hempel, "Inductive-Statistical Explanation," in *Philosophy of Science*, ed. Martin Curd and J.A. Cover (New York: Norton, 1998), 706-708.

[5] Marcus Hutter, “On the Existence and Convergence of Computable Universal Priors,” *Technical Report* 5 no. 3 (2003): 1-3.

[6] Wesley Salmon, “Hempel’s Conception of Inductive Inferences in Inductive-Statistical Explanation,” *Philosophy of Science* 44 no. 2 (1977): 183.

[7] Ibid., 184.

[8] Ibid., 181-82.

[9] Hempel, 710.

[10] Hempel, 709.

[11] Salmon, 182-83.