From Certainty to Belief: How Probability Extends Logic - Part 1


In our previous post, we gave an introduction to probability theory. In an even earlier post, we covered the basics of deductive logic — starting with the logic of Aristotle, moving on to Boolean logic, and finally showing the link to propositional logic.

But is there any relationship between probability theory and deductive logic? In fact, there is! Boolean logic and propositional logic can each be thought of as a special case of probability theory.

This may seem quite strange at first. When we think about reasoning, our minds often jump to logic: if certain statements are true, then other statements must be true. This is the bedrock of deductive logic — a system built on clear-cut, binary truths: something is either true or false.

What could logic possibly have to do with something as squishy and uncertain as probability theory? Think back to the examples we used in the previous post on probability theory. If you roll a die, you don't know the outcome for certain. But you do know that rolling two fair dice and scoring a total of 12 has only a 1 in 36 chance of happening. So if you needed to make a rational decision based on those odds, you could calculate the rationally correct choice — even though there’s no certainty like there is in deductive logic.
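You can check the 1-in-36 figure yourself with a quick simulation. Here’s a minimal sketch (the function name and trial count are my own choices, not from the post) that rolls two fair dice many times and estimates how often they total 12:

```python
import random

def estimate_two_dice_twelve(trials=1_000_000, seed=0):
    """Estimate, by simulation, the chance that two fair dice total 12."""
    rng = random.Random(seed)  # fixed seed so the run is reproducible
    hits = sum(
        1 for _ in range(trials)
        if rng.randint(1, 6) + rng.randint(1, 6) == 12
    )
    return hits / trials

print(estimate_two_dice_twelve())  # close to 1/36 ≈ 0.0278
```

The estimate converges on 1/36 because only one of the 36 equally likely (die1, die2) pairs — (6, 6) — sums to 12.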

So just intuitively, there does seem to be something at least similar between deductive logic and probability theory. Both can be used to make rational decisions.

Given this intuition, could it be possible that there is some sort of formal relationship between deductive logic and probability theory? Or is this intuition misguided?

Deductive Logic: The World of Absolutes

To explore this further, let's start with traditional deductive logic (either propositional or Boolean logic works here), where statements are assigned one of two values: True or False. If "A implies B" is true, and "A is true" is true, then "B is true" must logically follow. There’s no room for "A is probably true" or "B is sometimes true." It’s a system of absolute certainties.

But our real world is rarely so black and white. We encounter situations where information is incomplete, observations are noisy, and conclusions are tentative. This is where probability theory offers a richer language that allows more flexible reasoning.

Probability Theory: Quantifying Uncertainty

Probability theory provides a framework for managing uncertainty. When you roll a fair die, you don't know what outcome you'll get — but you do have a rational explanation for why a 12 on two fair dice will come up in only 1 of every 36 throws. So you can assign a confidence to the outcome even though it’s uncertain.

In fact, this can also be true for a non-random event. In our previous post, we considered the “probability” of an asteroid hitting the Earth and even put a number on that event despite it being non-random. Re-imagining probability theory as a sort of plausibility calculus (more on this in future posts) allows us to use the same mathematical rigor for both uncertainty due to randomness in nature and uncertainty due to our ignorance.

Imagine an extension to propositional logic where, instead of only restricting yourself to the two values of 1 = True and 0 = False, you allow any continuous value between 0 and 1. If the value is 0.5, you have no reason to prefer assuming the event in question is true or false (say, you don’t know if a giant asteroid will hit the Earth this year). If the value is 1, you currently accept the event as true (you’ve observed an asteroid heading toward Earth and calculated it’s on a collision course). If the value is 0, you currently accept the event as false (you’ve scanned the sky and there are no asteroids on a collision course this year).

Now say you have no evidence to work with other than the fact that asteroids hit the Earth once every 500,000 years (as discussed in the previous post). Then your best estimate is that there is a 1 in 500,000 — that is, 0.0002% — chance that a giant asteroid will hit the Earth this year.
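The arithmetic behind that estimate is just the base rate converted to an annual probability. A minimal sketch (variable names are mine, chosen for this illustration):

```python
# Base rate: one giant impact roughly every 500,000 years,
# and no other evidence to condition on.
p_impact_this_year = 1 / 500_000

print(p_impact_this_year)               # 2e-06
print(f"{p_impact_this_year * 100:.4f}%")  # 0.0002%
```

With no further information, spreading one expected impact evenly over 500,000 years is the natural assignment — new evidence (say, spotting an asteroid on approach) would move this number up or down.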

Notice how allowing continuous values from 0 to 1 lets us mathematically express these intuitive ideas about the “probability” of a giant asteroid smashing into the Earth given some set of observations. This is true even though giant asteroids are not random events.

What, then, are we expressing if not the probability of a random event? There is considerable debate on this point, but the most obvious answer is that we’re expressing the plausibility of an event or statement being true given what we currently know — and what we currently don’t know. This is sometimes called, by Bayesians, degrees of belief (a term I don’t object to, even though I think Bayesians only have part of the truth about probability theory).

When viewed this way, it seems intuitive that probability theory is an extension of deductive logic. Deductive logic is just a special case of probability theory that allows only True (1) and False (0), whereas probability theory allows any value between 0 and 1.
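We can see this special-case relationship concretely. The sketch below uses the standard probability rules for NOT and OR, plus the product rule for AND under an independence assumption (a simplification I'm making for illustration — in general AND requires the joint probability). Restricted to the values 0 and 1, these formulas reproduce the Boolean truth tables exactly:

```python
import itertools

def p_not(p):
    return 1 - p            # complement rule

def p_and(p, q):
    return p * q            # product rule, ASSUMING independence

def p_or(p, q):
    return p + q - p * q    # inclusion-exclusion

# At the endpoints {0, 1}, the continuous rules collapse
# to ordinary Boolean logic.
for a, b in itertools.product([0, 1], repeat=2):
    assert p_and(a, b) == (a and b)
    assert p_or(a, b) == (a or b)
    assert p_not(a) == (not a)

print("Boolean truth tables recovered at p ∈ {0, 1}")
```

In between the endpoints, the same formulas smoothly interpolate — e.g. `p_or(0.5, 0.5)` gives 0.75 — which is exactly the extra expressive power probability adds over two-valued logic.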

Probability Theory as an Extension of Deductive Logic

Can we show formally that our intuition is correct — that deductive logic is a special case of probability theory, and that probability theory extends deductive logic?

In my next post, I’ll go over two excellent examples from David Barber’s Bayesian Reasoning and Machine Learning that demonstrate exactly that.
