Some of the gambling devices that build statistics.

philosophy

Published January 27, 2022

In an earlier post, I discussed how statistics uses gambling devices (aleatoric uncertainty) as a metaphor for the unknown in general (epistemic uncertainty). I called this the “statistical analogy.” Of course, this perspective is not at all new — see section 1.5 of [0], for example.

When folks employ the statistical analogy, explicitly or implicitly, a few gambling devices come up again and again. I find that keeping their taxonomy in the back of one's mind can help one see which metaphor(s) are being employed in a particular analysis. These gambling devices are obviously not fully distinct — you can typically simulate one with another, and the final “device” obviously encompasses all the others. But I will separate them here because they tend to play different metaphorical roles — and, I would argue, increasingly tenuous ones in the order I have written them.

The urn (exchangeability)

The gambling device most commonly used in statistics is probably the urn: a container holding some objects, such as balls of different colors, which is shaken, and from which some objects are removed. The aleatoric randomness is provided by shaking as well as drawing blindly from the urn, creating a symmetry between all objects in the urn. Equivalent gambling devices include drawing cards from a shuffled deck or selecting random respondents for a poll. Once one is in the habit of thinking about urns with a finite number of objects, it is a small step to consider urns with an infinite number of objects, such as super-populations in causal inference ([1], section 1.12).

The ubiquitous assumption of exchangeability is equivalent to sequential drawing from a shaken urn ([2], section 3). Consequently, the urn model is at the core of most frequentist inferential methods, including the bootstrap and normal approximations for exchangeable data.
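As a minimal sketch (my own toy, not from any of the references), the urn and the bootstrap fit in a few lines of Python:

```python
import random

# The urn: a small finite population of colored balls.
urn = ["red"] * 3 + ["blue"] * 7

# "Shaking" the urn and drawing blindly: after a shuffle, every
# ordering of the balls is equally likely, which is exactly the
# symmetry that exchangeability formalizes.
random.shuffle(urn)
draws = urn[:5]  # an exchangeable sample without replacement

# The bootstrap treats the observed sample itself as an urn,
# drawing from it with replacement.
resample = random.choices(draws, k=len(draws))

print(draws, resample)
```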

Bets using biased coins (subjective probability)

The biased coin, which chooses between two outcomes with given probabilities, plays a large role in subjective probability (associated with Bayesian statistics) as the basis for hypothetical betting. The key idea behind subjective probability is that, before gathering data, we have beliefs about the state of the world. If these beliefs satisfy some reasonable assumptions (i.e., are “coherent”), then there are some bets that we would consider fair, and some that we would not. Equivalent aleatoric versions of these bets can then be used as metaphors for our subjective beliefs.

For example, suppose that some unknown quantity can be either A or B, and we would accept as fair a bet in which we get $1 if A occurs but pay $2 if B occurs. Since these are precisely the odds that would be acceptable for a biased coin which comes up A 2/3 of the time and B 1/3 of the time, one might say that our subjective belief about A and B is equivalent to our subjective belief about a biased coin with probabilities 2/3 and 1/3. The bet on a biased coin is a metaphor for our subjective belief about A and B. (The full formal connection between betting and subjective probability is richer and more complicated than my cartoon. See [3], sections 3.1-3.4.)
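To spell out the arithmetic behind this example (my gloss, not the formal treatment in [3]), a fair bet is one whose expected value is zero:

```python
# Winning $1 on A (probability p) and paying $2 on B (probability 1 - p)
# is fair when the expected value is zero:
#     1 * p - 2 * (1 - p) = 0  =>  3p = 2  =>  p = 2/3.
p = 2 / 3
print(1 * p - 2 * (1 - p))  # ~0.0 (up to floating point): fair odds
```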

With a coin, the aleatoric randomness is produced by a symmetric coin shape together with flipping or spinning, which creates a symmetry between the two sides. The biased coin can be extended to multiple outcomes with uneven dice, such as sheep knuckle bones, again with symmetry created between outcomes via spinning. Of course, you can draw from an urn using biased coins, or produce bets with urns. That is not my point! The point is that the way these gambling devices are used metaphorically is distinct.

The spinner (continuous uniform random variables)

The urn and the biased coin are fundamentally discrete, though much of statistics deals with continuous-valued random variables. The spinner is the most natural way to produce a continuous random variable — namely, a uniform distribution on the circumference of a circle. A spinner creates aleatoric randomness by symmetry of the disk together with a vigorous spin. The needle goes around many times, but the random number is produced by the fractional part of the number of cycles. Pseudo-random number generators like the Mersenne Twister seem to me to be in the same class, as they are based on the fractional part of a large number.
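A toy simulation of that mechanism might look as follows; the lognormal distribution for the spin's vigor is an arbitrary assumption of mine, and any spread-out distribution covering many full cycles would do:

```python
import math
import random

# A crude spinner model: the needle turns some messy total number of
# cycles, and only the fractional part determines where it stops.
def spin():
    total_cycles = random.lognormvariate(4.0, 0.5)  # many full cycles
    return math.modf(total_cycles)[0]  # fractional part: ~uniform on [0, 1)

print([round(spin(), 3) for _ in range(5)])
```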

The spinner creates something of a bridge to the rest of probability theory, since any continuous random variable can be produced by applying a function (the inverse CDF) to a uniform random variable on the unit interval. Given a spinner, one can begin to imagine complex aleatoric processes based on spinners and computation alone. Of course, we can form approximations to the continuum with a sufficiently large number of coin flips, for example, or a sufficiently large urn. However, I think the spinner provides much cleaner intuition for why we consider continuous random variables to be reasonable aleatoric processes in the first place.
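For instance, here is a minimal sketch of the inverse-CDF trick, using the exponential distribution because its inverse CDF has a simple closed form:

```python
import math
import random

# Inverse-CDF sampling: push a uniform draw (the "spinner") through
# the inverse CDF of the target distribution. For Exponential(rate),
# the inverse CDF is -ln(1 - u) / rate.
def exponential_from_spinner(rate):
    u = random.random()  # stands in for one spin of the spinner
    return -math.log(1.0 - u) / rate

print([round(exponential_from_spinner(2.0), 3) for _ in range(5)])
```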

Probabilistic models

Once we have the probability calculus (via the spinner and computation), we can begin to form quite complex aleatoric models to represent our uncertainty. Arguably, this is the realm in which a lot of modern statistical work takes place. For example, suppose you are analyzing a binary outcome (hospitalized for COVID or not) as a function of some regressors (age and vaccine status). For an individual with a given age and vaccine status, we do not know for certain whether they will be hospitalized. A logistic regression is precisely a posited aleatoric system to describe this subjective uncertainty. Software like Stan, which allows generalists to perform inference on their own generative processes, makes this kind of complex modeling relatively easy.
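As a sketch of what such a model posits (the coefficients and helper names here are made up purely for illustration), a logistic regression is just a biased coin whose bias is computed from the regressors:

```python
import math
import random

# The generative process posited by a logistic regression: compute a
# bias from the regressors, then flip a coin with that bias.
def p_hospitalized(age, vaccinated):
    logit = -4.0 + 0.05 * age - 1.5 * vaccinated  # illustrative coefficients
    return 1.0 / (1.0 + math.exp(-logit))

def simulate_outcome(age, vaccinated):
    return random.random() < p_hospitalized(age, vaccinated)  # flip the coin

print(p_hospitalized(age=70, vaccinated=1))    # the coin's bias
print(simulate_outcome(age=70, vaccinated=1))  # one aleatoric draw
```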

Of course, at this level of abstraction, the metaphor can lose clarity and force. Why is logistic regression reasonable? Why not some other link function? Why not other regressors (e.g., interactions)? Taking for granted that such abstract models provide good metaphors for epistemic uncertainty is at the root of many misapplications of statistics. In fact, many early statisticians, particularly those in the frequentist camps, were expressly unwilling to extend the statistical analogy much further than exchangeability. One might see a key difference between Neyman-Rubin causal inference ([1]), which (mostly) requires only the urn, and Pearlian causal inference ([4]), which requires probabilistic graphical models, as a difference in willingness to stretch the statistical analogy.

As with all analogies, judging the quality of a particular statistical analogy involves an ineradicable subjectivity. But being aware of which analogy is being made in a particular situation can help clarify disagreements and avoid missteps.

References

[0] Gelman, Andrew, et al. Bayesian data analysis. CRC Press, 2013.

[1] Imbens, Guido W., and Donald B. Rubin. Causal inference in statistics, social, and biomedical sciences. Cambridge University Press, 2015.

[2] Shafer, Glenn, and Vladimir Vovk. “A Tutorial on Conformal Prediction.” Journal of Machine Learning Research 9.3 (2008).

[3] Ghosh, Jayanta K., Mohan Delampady, and Tapas Samanta. An introduction to Bayesian analysis: theory and methods. Vol. 725. New York: Springer, 2006.

[4] Pearl, Judea. Causality. Cambridge University Press, 2009.