# Long-run frequency

In a lot of classical work, probability is defined in terms of long-run frequency. A coin flip, according to this way of thinking, has a probability one half of coming up heads precisely because, if we were to somehow “flip it an infinite number of times,” the proportion of heads in the infinite sequence of trials would be one half. This definition leads to all sorts of conundrums, from the obvious to the subtle, such as:

- How can a physical property be defined in terms of a physically impossible experiment (flipping an infinite number of times)?
- What about situations where such an infinite sequence has no physical meaning?
- What about probabilities that change over time?
- How can we rationally base one-off decisions on an (infinitely long) string of hypothetical future events?

I don’t intend to re-hash these old questions here. (As usual, I like Ian Hacking’s discussion of the problems; see his book, The Logic of Statistical Inference.) But I do believe that the long-run frequency definition of probability obscures the relationship between aleatoric and epistemic probability I discussed in the last post, and for that reason I would like to propose an alternative perspective. I will argue two things: (1) that we know the “probabilities” of special gambling devices because we specially design them to produce symmetries, and (2) that when we speak of the probabilities of events that are not gambling devices, we do so by analogy with a (possibly very complicated) gambling device or class of gambling devices.

# Symmetry and the discarding of information in gambling devices

Let’s return to that old workhorse, the coin flip. It seems to me that, upon observing a single flip, one could confidently assert that the long-run frequency of heads would be 0.5, without performing a single additional experiment. Why is that? Because a coin flip is specially designed to produce *symmetric* outcomes between heads and tails. The flip, done correctly, destroys the asymmetry produced by which face was up at the beginning of the flip. Here, “done correctly” is key: the probability of a “coin flip” is the probability of a process, not of a physical object. The process is presumed to be executed in good faith, and its purpose is to eliminate the influence of initial conditions and produce a symmetry in the outcome. It seems to me that it is the symmetry of this process that gives rise to a confident assertion about the outcome of the (practically impossible) infinitely long sequence of flips.
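As a toy sketch of this point (not a physical model, and all numbers are invented for illustration): if a flip turns the coin over a large, imprecisely controlled number of half-turns, the outcome becomes essentially independent of which face started up.

```python
import random

def flip(initial_face, rng):
    """Toy model of a 'correctly done' flip: the coin turns over a large,
    noisy number of half-turns, so the parity of the turn count (and hence
    the outcome) carries no usable trace of the starting face."""
    half_turns = rng.randint(50, 149)  # many turns, imprecisely controlled
    faces = ["heads", "tails"]
    start = faces.index(initial_face)
    return faces[(start + half_turns) % 2]

rng = random.Random(1)
from_heads = sum(flip("heads", rng) == "heads" for _ in range(100_000))
from_tails = sum(flip("tails", rng) == "heads" for _ in range(100_000))
print(from_heads / 100_000, from_tails / 100_000)  # both close to 0.5
```

The initial condition (`initial_face`) is still present in the arithmetic, but the noisy turn count washes it out: starting from heads or from tails yields the same long-run proportion.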

Every gambling device I can think of — dice, roulette wheels, card shuffling, lottery urns, pseudo-random number generators — is similar: each consists of a process designed to discard initial conditions and produce a symmetry in the outcome. We discard initial conditions because, without doing so, the process does not look “random”; it looks contingent and deterministic. We require symmetry in the outcome because, without it, we would not know what the “probabilities” are. For gambling devices, it seems to me that there can be no harm in defining probabilities in terms of these symmetries rather than an infinite sequence; indeed, we feel confident about the infinite sequence precisely because of these symmetries.

It may seem as if gambling devices so described can only produce uniform distributions over discrete sets. Of course this is not true, since functions of uniform distributions need not be uniform. Consider a spinner wheel, for which some fraction of the circumference is colored red and the rest blue. The probability of red can be made to be any number in the interval from zero to one. Billingsley’s Probability and Measure opens with a fun discussion of how all random variables can be derived as functions of a single uniform random variable on the unit interval. So when I say “gambling devices,” I happily include all abstract probabilistic models. Of course, many such models do not correspond to any device that could actually be built, but I argue that they derive their motivation from the possibility of purely aleatoric probability, which was historically manifested in only a few special machines. Ask yourself, for example: how motivated would we be to study probability if it were practically impossible to write a reasonably good pseudo-random number generator?
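A minimal sketch of the function-of-uniform idea (function names are mine, not standard): the spinner is a uniform draw on the circumference passed through a threshold, and the inverse-CDF transform turns the same uniform draw into a non-uniform variable.

```python
import math
import random

def spinner(p_red, u=None):
    """Spinner wheel with fraction p_red of the circumference colored red.
    A uniform draw on [0, 1) lands in the red arc with probability p_red."""
    if u is None:
        u = random.random()
    return "red" if u < p_red else "blue"

def exponential_from_uniform(rate, u=None):
    """Inverse-CDF transform: F^{-1}(u) = -ln(1 - u) / rate maps a uniform
    variable on [0, 1) to an exponential variable with the given rate."""
    if u is None:
        u = random.random()
    return -math.log(1.0 - u) / rate

random.seed(0)
draws = [spinner(0.3) for _ in range(100_000)]
print(draws.count("red") / len(draws))  # close to 0.3
```

In both cases the only “device” is a single uniform draw; everything non-uniform is a deterministic function of it, which is Billingsley’s opening point.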

# The probability of events that are not gambling devices

Let’s turn now to the “probability” of physical events that do not look like gambling devices: say, for example, the probability of rain tomorrow. From today to tomorrow, the initial conditions are obviously not (wholly) discarded, and no symmetry can be readily seen, as the system is far too complicated. Perhaps one would be tempted to fall back on long-run frequency for these reasons. However, I would argue that any statement about the probability of rain tomorrow has, as a necessary constituent part, an *analogy* between the real event (rain tomorrow) and some idealized gambling device. The analogy may be implicit, but the notion of probability depends on it. In the case of rain, the analogy would typically be with a lottery urn: all days in some set of days that are, as much as possible, just like today are placed in an urn. On some of these days it rains the next day; on others it does not. The urn is shaken and our “actual tomorrow” is drawn from it, and the “probability” of rain is, by shaking and by symmetry, simply the proportion of rainy tomorrows the urn contained to begin with. Indeed, the “infinite sequence” definition of probability is precisely of this form, only with an infinitely large urn!

Obviously, the key decision to be made is which days go into the urn. In what way must days be “like today”? What time period is eligible? Are theoretical days (as simulated, say, by a computer) admitted, or only actual days? And so on. The many reasonable ways of making the choice about the urn’s contents may appear to lead to difficulty in the long-run frequency definition, but they are simply part and parcel of forming an analogy with a gambling device.
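The urn-building step can be sketched in code. Everything here is invented for illustration: the “historical record” is synthetic, and the choice of how wide to cast the net for days “like today” is exactly the choice of urn contents.

```python
import random

rng = random.Random(0)

# Hypothetical historical record of (pressure_hPa, rained_next_day) pairs.
# Purely synthetic: low-pressure days are made more likely to precede rain.
history = []
for _ in range(2000):
    pressure = rng.uniform(990, 1020)
    rained = rng.random() < (0.7 if pressure < 1000 else 0.2)
    history.append((pressure, rained))

def rain_probability(today_pressure, tolerance):
    """Build the 'urn': every recorded day whose pressure is within
    `tolerance` of today's, then report the fraction followed by rain."""
    urn = [rained for pressure, rained in history
           if abs(pressure - today_pressure) <= tolerance]
    return sum(urn) / len(urn) if urn else None

# Two reasonable choices of urn contents yield two different probabilities.
print(rain_probability(1002.0, tolerance=2.0))   # a narrow urn
print(rain_probability(1002.0, tolerance=10.0))  # a wider urn
```

Both tolerances are defensible readings of “days like today,” and they report different probabilities of rain; neither is falsified by the other, which is the point of the section that follows.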

Let us then say this: most physical systems do not have well-defined probability distributions over their outcomes. The exceptions are gambling devices, which are specially constructed to discard information and produce symmetries. We speak of probabilities for systems that are not gambling devices by forming an analogy with an (idealized) gambling device, possibly a very complicated one in the form of a probabilistic model or set of such models. Often many analogies are plausible; correspondingly many probabilities are plausible. To speak of probabilities, the analogy must be made, even if only implicitly. The analogy must be made for long-run frequency to make sense, and, once it is made, long-run frequency does not matter.

I argued in a previous post that statistics always consists in the formation of such analogies, which I call the “statistical analogy”. Thus there are two intertwined problems in statistics: the formation of useful analogies with gambling devices, and, given an analogy, the production of epistemic statements from aleatoric properties. One might characterize the frequentist philosophical perspective as being unwilling to stretch the analogy very far, but most interesting practical statistics stretches it at least a little bit.

# The statistical analogy is not falsifiable

At this point, it may seem that I have simply come around to the obvious point that statisticians model the world with classes of probability models. But I would like to try to differentiate the statistical analogy from a “statistical model,” by which I mean some fixed candidate set of probability distributions. A violation of a probability model may prompt you to say, “I have chosen a bad (insufficiently expressive, misspecified, &c) set of probability distributions,” whereas a violation of the statistical analogy would prompt you to say, “that is not what I meant by probability.” Suppose I simply rotated a coin in the air, never letting it go, and set it down heads up. You would say: “That is not what I mean by a coin flip.” Or suppose you told me that it will very likely not rain tomorrow, and I then flew to another city where it is raining and said you were mistaken; you would say, “That is not what I meant by ‘rain tomorrow.’” The statistical analogy is not falsifiable, though it may be more or less *useful*. (Though I think that, in practice, the distinction is not so neat: it may be that one discovers that a statistical analogy is not useful only after investigation with a set of statistical models that reveals, say, correlations or outliers that were not previously known.)

# The problem of the mapping between aleatoric and epistemic probability remains

I would like to close this post by emphasizing that I have been discussing here only the notion of probability. There remains a more subtle issue, which is the relationship between aleatoric and epistemic uncertainty. Are they the same? It may seem that, with a pure gambling device such as a roulette wheel, the two can be safely equated. But if we’re going to stretch this analogy far beyond actual coin flips and roulette wheels into public policy and climate modeling (for example), then we should think about the question more carefully — in a later post.