Bayesian at the Moon
James Annan has a post up about Bayesian inference and the Bayesian vs. frequentist interpretations of probability. He has a very cute example:
An analogy with number theory may be helpful. It has been shown that the number of primes less than x is approximately given by x/ln(x), where ln is the natural logarithm. Using this formula, we find there are about 390,000,000 primes between 10^9 and 10^10 (i.e. 10-digit numbers, of which there are 9×10^9). In other words, if we pick a 10-digit number uniformly at random, there's a 4.3% probability that it is prime. That's a perfectly good frequentist statement. If we exclude those numbers which are divisible by 2, 3 or 5 (for which there are trivial tests) the probability rises to 16.1%. But what about 1,234,567,897? Does it make sense to talk about this number being prime with probability 16.1%? I suspect that some, perhaps most, number theorists would be uneasy about subscribing to that statement. Any particular number is either prime, or not. This fact may be currently unknown to me and you, but it is not random in a frequentist sense. Testing a number will always give the same result, whether it be "prime" or "not prime" (I'll ignore tests which are themselves probabilistic here).
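Annan's frequentist reading of the 4.3% and 16.1% figures is easy to check numerically. Here is a minimal sketch of my own (not from his post): sample 10-digit numbers uniformly at random and measure how often they turn out to be prime, using a Miller-Rabin test with a fixed witness set that is known to be deterministic in this range, so no probabilistic testing is involved.

```python
# Frequentist check of the quoted figures: sample 10-digit numbers
# uniformly at random and measure how often they are prime.
# Miller-Rabin with bases 2,3,5,7,11,13 is deterministic for all
# n < 3,474,749,660,383, which covers every 10-digit number.
import random

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # write n - 1 as d * 2^s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in (2, 3, 5, 7, 11, 13):
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

random.seed(0)
N = 100_000
samples = [random.randrange(10**9, 10**10) for _ in range(N)]
print("P(prime) ~", sum(map(is_prime, samples)) / N)

# condition on passing the trivial divisibility tests for 2, 3 and 5
survivors = [n for n in samples if n % 2 and n % 3 and n % 5]
print("P(prime | not div. by 2,3,5) ~",
      sum(map(is_prime, survivors)) / len(survivors))
```

The empirical frequencies should land near the quoted 4.3% and 16%, which is exactly the sense in which those percentages are frequentist statements about the population of 10-digit numbers, while saying nothing about any one number.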
But the example he has in mind is weather prediction.
But does it make sense for someone to accept the validity of a probabilistic weather forecast, while rejecting the appropriateness of a probabilistic assessment about a particular number being prime?...

And, of course, on global warming and anthropogenic climate change:
Almost every time that anyone uses an estimate of anything in the real world, it's a Bayesian one, whether it be the distance to the Sun, sensitivity of globally averaged surface temperature to a doubling of CO2, or the number of eggs in my fridge. The purely frequentist approach to probability dominates in all teaching of elementary theory, but it hardly exists in the real world.

Before I quibble with James: coincidentally or not, another locally well-known blogger (or antiblogger, as he styles himself) has posted on Bayesian inference.
...the probabilities only have a scientific meaning if they can be determined or at least interpreted in a frequentist fashion, and they can only be trusted if the relevant experiments have actually been tried sufficiently many times to give us the result with the desired accuracy.

Which is why Lubos considers String Theory to be a complete hoax.
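The two positions are less far apart than the rhetoric suggests, because a Bayesian estimate can itself be checked in a frequentist fashion. Here is a minimal sketch of my own (not from either post): estimate a coin's bias with a Beta prior and compare the posterior mean against the long-run frequency.

```python
# A Bayesian estimate of a coin's bias via a Beta prior, checked
# against the long-run frequency it predicts (my toy example).
import random

random.seed(1)
true_p = 0.7
alpha, beta = 1.0, 1.0          # uniform Beta(1,1) prior

flips = [random.random() < true_p for _ in range(1000)]
for heads in flips:             # conjugate update: Beta stays Beta
    if heads:
        alpha += 1
    else:
        beta += 1

posterior_mean = alpha / (alpha + beta)   # Bayesian point estimate
frequency = sum(flips) / len(flips)       # frequentist long-run check
print(posterior_mean, frequency)          # both approach 0.7
```

Both numbers converge on the true bias: wherever the experiment can actually be repeated, the Bayesian machinery and the long-run frequency agree, which is precisely where Lubos grants the probabilities scientific meaning.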
Oops! Sorry! I guess I jumped from that train of logic onto the wrong conclusion.
The real conclusion was that the space shuttle management totally misjudged the probability of space shuttle failure before the Challenger disaster because they used Bayesian probability. And here I would have said that it was because they substituted wishful thinking for analysis. Go figure.
David Ruelle deals with the same question of interpretation of probabilities in his book Chance and Chaos. The motivation for the theory of probability in the first place was of course the problem of predicting an uncertain future - or, as may be the case in a poker game, the problem of predicting the likelihood of various alternative possibilities that may constitute the present.
The question he addresses is that of the connection between the purely mathematical theory of probability and reality. Ruelle insists that it's:
important that we assess correctly the probability in uncertain circumstances, and to do this we need a physical theory of probabilities.
By physical theory he means we have to be able to compare our results operationally with physical reality. He doesn't completely clarify this question either, pretty much falling back on computer simulations:

If [the simulations find a probability of] 90 per cent for rain, even the purists will take their umbrellas.
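What "falling back on computer simulations" amounts to can be made concrete with a toy ensemble forecast - a sketch of my own, not anything from Ruelle's book: perturb the initial state of a chaotic map, evolve every ensemble member forward, and quote the fraction landing in the "rain" region as the forecast probability.

```python
# A toy ensemble "weather" forecast (my illustration): evolve the
# chaotic logistic map from many slightly perturbed initial states
# and report the fraction of members ending in the "rain" region.
import random

def logistic(x: float, steps: int, r: float = 3.9) -> float:
    """Iterate the chaotic logistic map x -> r*x*(1-x)."""
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

random.seed(2)
x0 = 0.300                                      # best estimate of today's state
members = [x0 + random.gauss(0, 1e-3) for _ in range(1000)]
# short lead time: the ensemble still remembers x0; at long lead
# times chaos spreads it over the whole attractor (climatology)
rainy = sum(logistic(x, steps=10) > 0.5 for x in members)
print(f"P(rain) ~ {rainy / len(members):.0%}")  # a frequency over simulations
```

The quoted probability is then a frequency across simulated worlds rather than across repeated real days, which is roughly the operational sense in which a "90 per cent chance of rain" gets attached to a single tomorrow.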
I think I'm a follower of Ruelle. I don't think probabilities make sense in the absence of a theory which can be tested operationally, at least in part. I also think that that theory ultimately has to have a frequentist interpretation. Ultimately, probabilities are predictions of theories, and our confidence in theories depends on a lot of things - internal consistency, mathematical beauty, and, most crucially, predictions confirmed by experiment.