## Posts tagged ‘probability’

February 8th, 2013

## How to Construct a Cumulative Case Argument

The cumulative case uses the prime principle of confirmation: whenever we are considering two competing hypotheses, an observation counts as evidence in favor of the hypothesis under which the observation has the highest probability. This principle appears sound under all interpretations of probability.  Each argument must be taken on its own grounds, and no single argument arrives at “God” on its own.  The conjunction of arguments is what is needed to make a cumulative case for the existence of God.

The Likelihood Principle of confirmation theory states the following.  Let h1 and h2 be two competing hypotheses (in this case the existence of X and ~X, with X being a first cause, fine-tuner, etc.).  According to the Likelihood Principle, an observation e counts as evidence in favor of hypothesis h1 over h2 if the observation is more probable under h1 than under h2.  Thus, e counts in favor of h1 over h2 if P(e|h1) > P(e|h2), where P(e|h1) and P(e|h2) denote the conditional probability of e on h1 and h2, respectively.  The degree to which the evidence counts in favor of one hypothesis over another is proportional to the degree to which e is more probable under h1 than under h2: specifically, it is proportional to the ratio P(e|h1)/P(e|h2).  The Likelihood Principle seems to be sound under all interpretations of probability.  The form used here is concerned with epistemic probability.
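The comparison above can be sketched in a few lines of code. This is a minimal illustration of the Likelihood Principle only; the probability values are illustrative placeholders, not estimates drawn from any actual argument.

```python
# The Likelihood Principle: e favors h1 over h2 exactly when
# P(e|h1) > P(e|h2), and the strength of that support is
# proportional to the likelihood ratio P(e|h1) / P(e|h2).

def likelihood_ratio(p_e_given_h1: float, p_e_given_h2: float) -> float:
    """Return the ratio P(e|h1) / P(e|h2)."""
    if p_e_given_h2 == 0:
        raise ValueError("P(e|h2) must be nonzero to form a ratio")
    return p_e_given_h1 / p_e_given_h2

# Example: an observation that is four times as probable under h1.
ratio = likelihood_ratio(0.8, 0.2)
print(ratio)        # 4.0 -> e counts as evidence for h1 over h2
print(ratio > 1)    # True
```

A ratio of 1 would mean the observation is evidentially neutral between the two hypotheses; the further the ratio climbs above 1, the more strongly e supports h1 over h2.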

June 26th, 2012

## The Multiverse, Fine-Tuning, and Nomic Probabilities

Whenever probability is being considered there must be some type of relevant or total background information (usually depicted as k).  The immediate objection to applying a probability rule or calculus to the fine-tuning of the universe in a multiverse scenario is that this universe is not an appropriate random sample.  In other words, if we know of [at least] only one universe with these values, the sample size is precisely 1; thus, no random sample can be used to assess the probability of certain values of physics in the argument.  In statistics, a random sample must have the same chance of being drawn as every other possible sample.  Since we know of only one universe, we do not know what the range of values for the constants and physics could be.  Additionally, since we don’t know how narrow or broad these ranges could be, there is no way to draw out any probability-based argument for fine-tuning.  However, we can know what other universes would be like if the values were different.  If the counterfactuals of our natural laws are in no way incoherent, then this is an appropriate sampling.  Moreover, anyone who makes this objection while advocating that we just so happen to live in a life-permitting universe in the multiverse cannot consistently do so, since the claim that we happen to live in a life-permitting universe amongst countless others suggests that we can know what the other samples are.

May 31st, 2012

## Understanding Bayesian Analysis, the Evolution Skeptic’s Friend

Reblogged from Stephen A. Batzer and Evolution News and Views

If you’ve followed the ID vs. Darwinism debate at all, you’ve probably come across the term “Bayesian analysis.” This technique is the skeptic’s friend, and it can actually be very simple when used informally. Englishman Thomas Bayes was an 18th-century Presbyterian minister and mathematician. He asserted that it is rational to analyze new data based upon prior knowledge.

This is subjective probability analysis, the opposite of data analysis “in a vacuum.” Here’s a handy example. Many of us recall being asked by our parents, “If everyone were jumping off a bridge, would you do that too?” I don’t think my mother asked me again after I told her, “Almost certainly. There must be a solid reason that everyone is jumping off the bridge. You probably would, too.”

This down-and-dirty analysis isn’t absolutely reliable, but it is cogent and we all use it every day. People commonly make choices in what they believe and do for experiential reasons. Here’s another example: a bad choice is superior to an intolerable choice. In 2001, intelligent, well-educated adults jumped out of skyscrapers to certain death. Why did they do such a thing? Because jumping was better than burning to death. When they jumped, fuel-fed flames were working inexorably up through the World Trade Center.

April 9th, 2012

## Calculating the Sufficiency of Belief Probabilistically

Here is a really simple model that can be used to determine how we weight our beliefs.  As an evidentialist, I apportion my degree of commitment to a belief according to its evidence.  The question of sufficiency may be expressed probabilistically.  If my belief p is sufficient, then its probability must satisfy 0 < p ≤ 1 with p > .5.  Expressing the value of p is difficult: in some instances its value may be measured and compared, but in others p must be assigned a value intuitively, or weighed against the aggregate whole of one’s current knowledge [if p is novel].  Additionally, if p is not equivalent to 1, then all future-tensed propositions may only be expressed probabilistically.
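The sufficiency criterion above can be rendered as a toy check. The function name and sample values here are hypothetical, used only to illustrate the threshold that p must exceed .5.

```python
# A belief p counts as "sufficient" on the stated criterion when its
# probability lies in (0.5, 1]: the belief must be more probable than
# its negation.

def is_sufficient(p: float) -> bool:
    """True when 0.5 < p <= 1, per the evidentialist threshold above."""
    if not (0.0 < p <= 1.0):
        raise ValueError("p must be a probability in (0, 1]")
    return p > 0.5

print(is_sufficient(0.7))   # True  -- belief outweighs its negation
print(is_sufficient(0.5))   # False -- mere parity is not sufficiency
```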

Why these criteria for determining the value of p?  There may be instances when p may be assigned a definitive value.  Suppose that I am colorblind to an extent that red and purple are indistinguishable to me, appearing as the same shade.  Suppose I have three marbles in my pocket that are similar in weight and texture (or otherwise may only be distinguished by color), and I have been told by a reliable source that there is one red marble and two purple marbles.  The probability of my pulling out the red marble is .333.  I pull out a marble and it is red.  Let p be the belief that I pulled a purple marble from my pocket.  With the background knowledge, k, that I am unable to differentiate red from purple, I am justified in assigning a value of .667 to p (the proportion of purple marbles) even though I actually pulled out a red marble.  Would I then be justified?
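A quick simulation makes the marble example concrete. Assuming one red and two purple marbles that the drawer cannot tell apart, the long-run frequency with which the drawn marble is purple is 2/3, which is the credence the drawer's background knowledge k supports, regardless of which marble was actually drawn.

```python
import random

# Simulate many draws from a pocket of 1 red and 2 purple marbles.
# Because the colorblind drawer cannot distinguish the colors, the
# rational credence that any given draw is purple equals the long-run
# frequency of purple draws, 2/3.

random.seed(0)  # fixed seed so the run is reproducible
marbles = ["red", "purple", "purple"]
trials = 100_000
purple_draws = sum(random.choice(marbles) == "purple" for _ in range(trials))

print(purple_draws / trials)  # ~0.667
```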

December 22nd, 2011

## Inferential Reasoning in Foundationalism and Coherentism

Logically prior to inferential reasoning is intuition.  These intuitions may be basic beliefs. The belief that the glass of water in front of me will quench my thirst if I drink it is not inferred from previous experiences coupled with an application of a synthetic a priori principle of induction.  Though this is not how we form our beliefs psychologically or historically, the belief can, in the logical sense, be formed via instances of past experience and induction.  However, when it does come to inferential reasoning, R.A. Fumerton provides two definitions for what it means to say that one has inferential justification.[1]

D1 S has an inferentially justified belief in P on the basis of E. = Df.

(1) S believes P.

(2) S justifiably believes both E and the proposition that E confirms P.

(3) S believes P because he believes both E and the proposition that E confirms P.

(4) There is no proposition X such that S is justified in believing X and E&X does not confirm P.

D2 S has an inferentially justified belief in P on the basis of E. = Df.

(1) S believes P.

(2) E confirms P.

(3) The fact that E causes S to believe P.

(4) There is no proposition X such that S is justified in believing X and E&X does not confirm P.

Given the explications of these definitions, both D1 and D2, there seem to be good grounds for believing that P must be inferentially justified.  It is most certainly the case that D2 is more amenable to scientific knowledge, in the sense that both (2) and (3) are confirmatory.  D2-(3) is certainly difficult to substantiate without begging the question: having E cause S to believe P is difficult to distance from some form of transitive relation.  Inferential justification may also be expressed or determined probabilistically.[2]

November 6th, 2011

## Bayes’s Theorem of Conditional Probability

Thomas Bayes’s theorem, in probability theory, is a rule for evaluating the conditional probability of two or more mutually exclusive and jointly exhaustive events.  The conditional probability of an event is the probability of that event occurring given that another event has already occurred.[1]  The theorem may be expressed as:

P(h|e&k) = [P(e|h&k) × P(h|k)] / P(e|k)

The solution, P(h|e&k), represents the probability of the hypothesis h given the evidence e and the background knowledge k.  The numerator, P(e|h&k) P(h|k), is the product of the probability of the evidence given the hypothesis and background knowledge and the prior probability of the hypothesis on the background knowledge alone. The denominator, P(e|k), is the probability of the evidence on the background knowledge alone.  Each factor involved is assigned a probability between 0 and 1, with 0 being impossible and 1 being completely certain.[2]
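The theorem can be transcribed directly into code. This is a minimal sketch; the numeric inputs below are illustrative placeholders, not values tied to any particular hypothesis.

```python
# Bayes's theorem as stated above:
#   P(h|e&k) = P(e|h&k) * P(h|k) / P(e|k)

def bayes(p_e_given_h_and_k: float, p_h_given_k: float,
          p_e_given_k: float) -> float:
    """Posterior probability P(h|e&k) via Bayes's theorem."""
    if p_e_given_k == 0:
        raise ValueError("P(e|k) must be nonzero")
    return p_e_given_h_and_k * p_h_given_k / p_e_given_k

# Example: likelihood 0.9, prior 0.3, marginal probability of the
# evidence 0.45.
posterior = bayes(0.9, 0.3, 0.45)
print(posterior)  # ≈ 0.6
```

Note how a modest prior (0.3) can yield a posterior above 0.5 when the evidence is much more probable under the hypothesis than it is overall.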

January 13th, 2011

## A Probability So Small It’s Impossible

I was listening to William Lane Craig’s most recent podcast (Existence of God, Part 15) on design and fine-tuning, and I recently received William Dembski’s The Design Inference as a gift from a friend (I know, I’m embarrassed I didn’t already own the book).  Craig spoke of Dembski’s local and universal small-probability calculations, and I wanted to make this information available here.[1] The question is: at what point is a probability so small that it could be considered impossible?

10^80 × 10^45 × 10^25 = 10^150

The factor 10^80 represents the number of elementary particles in the universe.  Elementary particles are believed to have no substructure; these include quarks, leptons, and bosons.

The factor 10^45 is measured in hertz and represents alterations in the states of matter per second.  The properties of matter are such that transitions from one physical state to another cannot occur at a rate faster than 10^45 times per second.  This universal bound on transitions between physical states is based on the Planck time, which constitutes the smallest physically meaningful unit of time.

The factor 10^25 is in seconds.  This is a generous upper bound on the number of seconds that the universe can maintain its integrity [before expanding forever or collapsing back in on itself in a “big crunch”].  This number accords with the Standard Model (the big bang).

The product, 10^150, is the total number of state changes that all the elementary particles in the universe can undergo throughout its duration.  Compare this number to Oxford physicist Roger Penrose’s calculation that the odds of the special low-entropy initial condition having occurred by chance, in the absence of any constraining principles, are at least one in 10^(10^123).  In other words, that is how many different ways the universe could appear from its initial conditions.  To understand how large 10^(10^123) is, take away the exponents and try writing out the number.  If you were to write a one and then put a zero on every elementary particle in our universe, you could write out 10^80 zeros, which makes up only an incredibly tiny portion of Penrose’s number (twice that for Dembski’s universal probability).
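The arithmetic behind the universal probability bound can be checked exactly, since Python integers are arbitrary-precision. This is simply a verification of the multiplication quoted above.

```python
# Dembski's universal probability bound: 10^80 elementary particles
# x 10^45 state transitions per second x 10^25 seconds of cosmic
# duration = 10^150 possible state changes in the universe's history.

particles = 10 ** 80               # elementary particles in the universe
transitions_per_second = 10 ** 45  # Planck-time bound on state changes
seconds = 10 ** 25                 # upper bound on the universe's duration

total_state_changes = particles * transitions_per_second * seconds
print(total_state_changes == 10 ** 150)  # True
```

Any specified event with probability below 1 in 10^150 is, on this argument, beyond the reach of chance even if every particle in the universe were flipping states at the maximum physical rate for the universe's entire history.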

[1] For all of this information, see William Dembski, The Design Inference (New York: Cambridge University Press, 1998), 203-214.