February 8th, 2013
The cumulative case uses the prime principle of confirmation: whenever we are considering two competing hypotheses, an observation counts as evidence in favor of the hypothesis under which the observation has the higher probability. This principle is sound under all interpretations of probability. Each argument must be taken on its own grounds, and no single argument arrives at “God” on its own. Rather, it is the conjunction of the arguments that makes a cumulative case for the existence of God.
The Likelihood Principle of confirmation theory states the following. Let h1 and h2 be two competing hypotheses (in this case the existence of X and ~X, with X being a first cause, fine-tuner, etc.). According to the Likelihood Principle, an observation e counts as evidence in favor of hypothesis h1 over h2 if the observation is more probable under h1 than under h2. Thus, e counts in favor of h1 over h2 if P(e|h1) > P(e|h2), where P(e|h1) and P(e|h2) denote the conditional probability of e on h1 and h2, respectively. The degree to which the evidence counts in favor of one hypothesis over another is proportional to the degree to which e is more probable under h1 than under h2: specifically, it is proportional to the ratio P(e|h1)/P(e|h2). The Likelihood Principle seems to be sound under all interpretations of probability. This form is concerned with epistemic probability.
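The comparison above can be sketched in a few lines of Python. This is only an illustration of the Likelihood Principle's ratio test; the numeric probabilities are hypothetical, chosen solely for the example.

```python
# A minimal sketch of the Likelihood Principle: evidence e favors h1 over h2
# when P(e|h1) > P(e|h2), and the strength of that support is proportional
# to the ratio P(e|h1)/P(e|h2). The values below are hypothetical.

def likelihood_ratio(p_e_given_h1, p_e_given_h2):
    """Return P(e|h1)/P(e|h2); a value > 1 means e favors h1 over h2."""
    return p_e_given_h1 / p_e_given_h2

# Suppose an observation is twice as probable under h1 as under h2:
ratio = likelihood_ratio(0.2, 0.1)
print(ratio)        # 2.0
print(ratio > 1)    # True: e counts as evidence for h1 over h2
```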
June 26th, 2012
Whenever probability is being considered there must be some kind of relevant or total background information (usually depicted as k). The immediate objection to applying a probability rule or calculus to the fine-tuning of the universe in a multiverse scenario is that this universe is not an appropriate random sample. In other words, if we know of only one universe with these values, the sample size is precisely 1; thus, no random sample can be used to assess the probability of certain values of physics in the argument. In statistics, a random sample must have the same chance of being drawn as every other sample. Since we know of only one universe, we do not know what the range of values for the constants and laws of physics could be. Additionally, since we don’t know how narrow or broad these ranges could be, there is no way of mounting any probability-based argument for fine-tuning. However, we can know what other universes would be like if the values were different. If the counterfactuals of our natural laws are in any way coherent, then this is an appropriate sampling. Moreover, anyone who makes this objection while advocating that we just happen to live in a life-permitting universe within the multiverse undercuts it, since the claim that we happen to live in a life-permitting universe among countless others suggests that we can know what the other samples are.
May 31st, 2012
Reblogged from Stephen A. Batzer and Evolution News and Views
If you’ve followed the ID vs. Darwinism debate at all, you’ve probably come across the term “Bayesian analysis.” This technique is the skeptic’s friend, and it can actually be very simple when used informally. Englishman Thomas Bayes was an 18th-century Presbyterian minister and mathematician. He asserted that it is rational to analyze new data based upon prior knowledge.
This is subjective probability analysis, the opposite of data analysis “in a vacuum.” Here’s a handy example. Many of us recall being asked by our parents, “If everyone were jumping off a bridge, would you do that too?” I don’t think my mother asked me again after I told her, “Almost certainly. There must be a solid reason that everyone is jumping off the bridge. You probably would, too.”
This down-and-dirty analysis isn’t absolutely reliable, but it is cogent, and we all use it every day. People commonly make choices in what they believe and do for experiential reasons. Here’s another example: a bad choice is superior to an intolerable choice. In 2001, intelligent, well-educated adults jumped from skyscrapers to certain death. Why did they do such a thing? Because jumping was better than burning to death. When they jumped, fuel-fed flames were working inexorably up through the World Trade Center.
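The bridge example can be given a toy numerical form. The sketch below applies Bayes's rule to a hypothesis and its negation; every number in it is hypothetical, made up purely to show how a low prior can be swamped by strongly discriminating data.

```python
# A toy Bayesian update in the spirit of the bridge example: how a prior
# belief shifts when new data arrives. All numbers are hypothetical.

def posterior(prior, p_data_given_h, p_data_given_not_h):
    """Bayes's rule for a hypothesis h versus its negation ~h."""
    numerator = p_data_given_h * prior
    evidence = numerator + p_data_given_not_h * (1 - prior)
    return numerator / evidence

# h: "there is a solid reason to jump off the bridge."
# Prior: quite improbable. Data: everyone is jumping, which is far more
# probable if there really is a solid reason than if there is not.
print(round(posterior(prior=0.01, p_data_given_h=0.9,
                      p_data_given_not_h=0.01), 3))  # 0.476
```

Even with a prior of only 1%, the sight of everyone jumping raises the hypothesis to nearly even odds, which is the informal reasoning the post describes.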
April 9th, 2012
Here is a very simple model that can be used to determine how we weight our beliefs. As an evidentialist, I apportion my degree of commitment to a belief according to its evidence. The question of sufficiency may be expressed probabilistically: if my belief p is sufficiently supported, then its probability must satisfy 0 < p ≤ 1 with p > .5. Expressing the value of p is difficult; some values may be measured and compared, but there are also instances where p must be assigned a value determined intuitively or against the aggregate whole of one’s current knowledge [if p is novel]. Additionally, if p is not equivalent to 1, then all future-tensed propositions may only be expressed probabilistically.
Why these criteria for determining the value of p? There may be instances when p may be assigned a definitive value. Suppose that I am colorblind, to the extent that red and purple are indistinguishable to me, appearing as the same shade. Suppose I have three marbles in my pocket that are similar in weight and texture (or otherwise may only be distinguished by color), and I have been told by a reliable source that one is red and two are purple. The probability of my pulling out the red marble is 1/3 (≈ .333). I pull out a marble and it is red. Let p be the belief that I pulled a purple marble from my pocket. With the background knowledge, k, that I am unable to differentiate red from purple, I am justified in assigning a value of .333 to p even though I actually pulled out a red marble. Would I then be justified?
November 6th, 2011
Thomas Bayes’s theorem, in probability theory, is a rule for evaluating the conditional probability of two or more mutually exclusive and jointly exhaustive events. The conditional probability of an event is the probability of that event happening given that another event has already happened. The theorem may be expressed as:

P(h|e&k) = [P(e|h&k) × P(h|k)] / P(e|k)
The solution [P(h|e&k)] represents the probability of the hypothesis given the evidence and the background knowledge. The numerator [P(e|h&k) P(h|k)] is the product of the probability of the evidence given the hypothesis and background knowledge, and the prior probability of the hypothesis given the background knowledge alone. The denominator [P(e|k)] is the probability of the evidence given the background knowledge alone. Each factor is assigned a probability between 0 and 1, with 0 being impossible and 1 being completely certain.
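The theorem is a one-line computation once the three factors are supplied. The sketch below uses hypothetical input values (a prior of 0.5, evidence that is 0.8 probable under the hypothesis and 0.6 probable overall) purely to show the mechanics.

```python
# Bayes's theorem with all probabilities conditioned on background
# knowledge k. The input values below are hypothetical illustrations.

def bayes(p_e_given_hk, p_h_given_k, p_e_given_k):
    """P(h|e&k) = P(e|h&k) * P(h|k) / P(e|k)."""
    return p_e_given_hk * p_h_given_k / p_e_given_k

# Prior P(h|k) = 0.5; evidence with P(e|h&k) = 0.8 and P(e|k) = 0.6
# raises the hypothesis from even odds to about two-thirds.
print(round(bayes(0.8, 0.5, 0.6), 3))  # 0.667
```

Note that for the inputs to be coherent, P(e|k) must equal P(e|h&k)P(h|k) + P(e|~h&k)P(~h|k); the example satisfies this with P(e|~h&k) = 0.4.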
January 13th, 2011
I was listening to William Lane Craig’s most recent podcast (Existence of God Part 15) on design and fine-tuning, and I recently received William Dembski’s The Design Inference as a gift from a friend (I know, I’m embarrassed I didn’t already own the book). Craig spoke of Dembski’s local and universal small-probability bounds, and I wanted to make this information available here. The question is: at what point is a probability so small that the event could be considered impossible?
10^80 × 10^45 × 10^25 = 10^150
The factor 10^80 represents the number of elementary particles in the universe. Elementary particles are believed to have no substructure; they include quarks, leptons, and bosons.
The factor 10^45 is measured in hertz, representing alterations in the states of matter per second. The properties of matter are such that transitions from one physical state to another cannot occur at a rate faster than 10^45 times per second. This universal bound on transitions between physical states is based on the Planck time, which constitutes the smallest physically meaningful unit of time.
The factor 10^25 is in seconds. This is a generous upper bound on the number of seconds that the universe can maintain its integrity [before expanding forever or collapsing back in on itself in a “big crunch”]. This number is according to the Standard Model (the big bang).
The product, 10^150, is the total number of state changes that all the elementary particles in the universe can undergo throughout its duration. Compare this number to Oxford physicist Roger Penrose’s calculation that the odds of the special low-entropy initial condition having occurred by chance, in the absence of any constraining principles, are at least one in 10^(10^123). In other words, that’s how many different ways the universe could appear from its initial conditions. To understand how large 10^(10^123) is, take away the exponents and try writing out the number. If you were to write a one and then put a zero on every elementary particle in our universe, you could write out 10^80, which makes up only an incredibly tiny portion of Penrose’s number (though doing so roughly twice over would suffice for Dembski’s universal probability bound of 10^150).
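The arithmetic behind the bound is easy to verify exactly, since Python integers have arbitrary precision. This sketch just multiplies out the three factors described above and checks the size of the result when written out.

```python
# The arithmetic behind Dembski's universal probability bound, computed
# with exact integers.

particles = 10**80            # elementary particles in the universe
transitions_per_sec = 10**45  # Planck-time limit on state changes per second
seconds = 10**25              # generous upper bound on the universe's duration

total_events = particles * transitions_per_sec * seconds
print(total_events == 10**150)  # True

# Written out in full, 10^150 is a 1 followed by 150 zeros: 151 digits.
# Penrose's 10^(10^123) dwarfs it: its zeros alone outnumber the
# universe's elementary particles by a factor of 10^43.
print(len(str(10**150)))  # 151
```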
For all this information see William Dembski, The Design Inference (New York: Cambridge University Press, 1998), 203–214.