Ryan Giordano, Statistician

Blog posts

 

Approximate constraint relaxation is a Newton step

Feb 18, 2025
Ryan Giordano

 

Three paths to a reproducing kernel Hilbert space

Jan 24, 2025
Ryan Giordano

 

Moving to Quarto

After a semester of running a course in Quarto, I’ve been inspired to move my blog over from Jekyll. I expect that this will actually make it easier for me to post more…
Jul 14, 2024
Ryan Giordano

 

Can you use data to choose your prior?

Can you use data to choose your prior? Some might say you can, pointing, for example, to empirical Bayes procedures, which are formally doing just that. But I would argue…
Sep 20, 2023

 

Three versions of risk-controlling prediction sets

I had the honor and good fortune to present the “Distribution-Free, Risk-Controlling Prediction Sets” paper (RCPS, [1]) at the Jordan symposium this month. The paper is…
Jun 30, 2023

 

Meaning and randomness

“If the various formations had had some meaning, if, for example, there had been concealed signs and messages for us which it was important we decode correctly, unceasing…
Apr 10, 2023

 

Free will and randomness

Free will and randomness feel opposed to one another: free will is what makes us human; randomness is the epitome of meaninglessness. But the two share a deep affinity: they…
Mar 29, 2023

 

The Popper-Miller theorem is the Bayesian transitivity paradox.

Popper and Miller [1,2] proposed a tidy little paradox about inductive reasoning. Many 20th century Bayesians (e.g. [3]) claim that Bayesian reasoning is valid inductive…
Oct 19, 2022

 

R torch for statistics (not just machine learning).

The torch package for R is CRAN-installable and provides automatic differentiation in R, as long as you’re willing to rewrite your code using Torch functions (a minimal sketch below).
Apr 1, 2022
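
To illustrate the kind of computation this points at: R’s torch closely mirrors the PyTorch API, so here is a minimal sketch in Python torch, with toy data and a single made-up parameter, of differentiating a Gaussian negative log-likelihood. This is purely illustrative, not code from the post.

    import torch

    # Toy data and one free parameter; both are illustrative.
    x = torch.tensor([0.2, -0.5, 1.3])
    mu = torch.tensor(0.0, requires_grad=True)

    # Negative log-likelihood of N(mu, 1) data, up to a constant,
    # written with torch operations so it can be differentiated.
    nll = 0.5 * torch.sum((x - mu) ** 2)
    nll.backward()

    print(mu.grad)  # The exact gradient: -(x - mu).sum()

The R version is nearly line-for-line the same, with torch_tensor() in place of torch.tensor().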

 

A Few Equivalent Perspectives on Jackknife Bias Correction

In this post, I’ll try to connect a few different ways of viewing jackknife and infinitesimal jackknife bias correction. This post may help provide some intuition, as well as…
Mar 17, 2022

 

St. Augustine’s question: A counterexample to Ian Hacking’s ‘law of likelihood’

In this post, I’d like to discuss a simple sense in which statistical reasoning refutes itself. My reasoning is almost trivial and certainly familiar to statisticians. But I…
Feb 17, 2022

 

Some of the gambling devices that build statistics.

In an earlier post, I discuss how statistics uses gambling devices (aleatoric uncertainty) as a metaphor for the unknown in general (epistemic uncertainty). I called…
Jan 27, 2022

 

How does AMIP work for regression when the weight vector induces collinearity in the regressors?

How does AMIP work for regression when the weight vector induces collinearity in the regressors? This problem came up in our paper, as well as for a couple of users of zaminfluence…
Dec 17, 2021

 

Fiducial inference and the interpretation of confidence intervals.

I came across the following section in the (wonderful) textbook ModernDive:
Dec 9, 2021

 

To think about the influence function, think about sums.

I think the key to thinking intuitively about the influence function in our work on AMIP is this: Linearization approximates a complicated estimator with a simple sum (sketched below). If…
Dec 1, 2021
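
As a sketch of that sum, in one common normalization (the post may scale things differently): writing the estimator as a function of data weights \(w = (w_1, \ldots, w_N)\), the linearization is

\[ \hat{\theta}(w) \approx \hat{\theta}(1) + \frac{1}{N} \sum_{n=1}^{N} (w_n - 1) \hat{\psi}_n, \]

where \(\hat{\psi}_n\) is the empirical influence function evaluated at datapoint \(n\), and \(w = 1\) (all weights equal to one) recovers the original estimate.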

 

The bootstrap randomly queries the influence function.

When we present our work on AMIP, the relationship with the bootstrap often comes up. I think there’s a lot to say, but there’s one particularly useful perspective: the…
Nov 8, 2021

 

Saint Augustine and chance.

I came across an interesting passage in the Confessions of Saint Augustine at the end of section (5) of Vindicianus on Astronomy. Augustine is describing a period in his…
Oct 27, 2021

 

Three ridiculous hypothesis tests.

There are lots of reasons to dislike p-values. Despite their inherent flaws, over-interpretation, and risks, it is extremely tempting to argue that, absent other…
Sep 30, 2021

 

Probability and the statistical analogy: Gambling devices, long-run probability, and symmetry.

In a lot of classical work, probability is defined in terms of long-run frequency. A coin flip, according to this way of thinking, has a probability one half of coming up…
Sep 24, 2021

 

Approximate Maximum Influence Perturbation and P-hacking

Let’s talk about Hacking. Not Ian Hacking this time — p-hacking! I’d like to elaborate on a nice post by Michael Wiebe, where he investigates whether my work with Rachael…
Sep 17, 2021

 

What is statistics? (The statistical analogy)

By this I mean: What differentiates statistics from other modes of thinking that are not fundamentally statistical?
Aug 22, 2021

 

Convergence in probability of order statistics.

Order statistics converge in probability to the corresponding population quantiles, basically no matter what (a formal statement below). That is a fact that I was surprised to find missing (as far as I could see)…
Aug 15, 2021
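
Assuming the claim in question is the classical one, a formal statement reads: let \(X_1, \ldots, X_n\) be i.i.d. with distribution function \(F\), and let \(q_p\) be a \(p\)-quantile that is unique in the sense that \(F(q_p - \epsilon) < p < F(q_p + \epsilon)\) for every \(\epsilon > 0\). Then for any sequence \(k_n\) with \(k_n / n \to p \in (0, 1)\),

\[ X_{(k_n)} \overset{P}{\to} q_p, \]

with no continuity or density assumptions beyond uniqueness of the quantile.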

 

Linear approximation when a norm is the quantity of interest.

In the setting of some of our linear approximation work, I was recently asked the excellent question of what to do when the quantity of interest is a norm. The short answer is…
Jul 20, 2021

Fréchet differentiability in R².

The concept of Fréchet (aka bounded) differentiability plays a role in our recent paper on the sensitivity of variational Bayes approximations in discrete Bayesian…
Jul 10, 2021


A stroll through the Bayesian central limit theorem. Part 2: The actual BCLT.

In the previous post, I introduced some notation and concepts that I’ll now carry forward into an actual sketch of how to prove the BCLT.
Mar 6, 2021

 

A stroll through the Bayesian central limit theorem. Part 1: Uniform laws of large numbers and maximum likelihood estimators.

Over the course of two posts, I’d like to provide an intuitive walk-through of a proof of the Bayesian central limit theorem (BCLT, aka the Bernstein-von Mises theorem). I…
Dec 3, 2020

 

Some notes (to myself) on Theorem 10.1 of Asymptotic Statistics by van der Vaart.

This post is a continuation of the previous post about van der Vaart’s Theorem 7.2. As before, these are just my personal notes, with no guarantee of correctness nor…
Dec 1, 2020

 

The Bayesian infinitesimal jackknife

I’m going to be speaking next week at Stancon 2020 about a project I’ve been trying to wrap up this summer: the Bayesian (first-order) infinitesimal jackknife. The idea is…
Aug 9, 2020

 

Some notes (to myself) on Theorem 7.2 of Asymptotic Statistics by van der Vaart.

I have gradually come to appreciate how much insight can be found in Asymptotic Statistics by van der Vaart (henceforth vdV). I have come to appreciate this only gradually…
Aug 7, 2020

 

A question I have about the conjugate gradient algorithm.

I have a question about the conjugate gradient (CG) algorithm in particular, and possibly about iterative solvers in general. My main reference here will be section 5.1 of N…
Feb 6, 2020

 

Asymptotics of the log likelihood ratio and a Bayesian model selection “paradox”.

In an early draft of Jonathan Huggins’ and Jeff Miller’s BayesBag paper I learned of a particular “paradox” in Bayesian model selection, in which different models with…
Jan 15, 2020

 

Infant sleep training and model selection.

We have a one-year-old infant who is going through a sleep regression. As my wife and I discussed how we might approach sleep training, it occurred to me that choosing a…
Sep 15, 2019

 

A paragami version of autograd’s simple neural net example.

In the notebook below, I make a version of autograd’s very simple neural network example using my paragami package as a more readable alternative to autograd’s flatten functi…
Aug 31, 2019

Bayesian and frequentist inference for inverse problems in the presence of randomness.

Dyed-in-the-wool Bayesians like to talk about the decision theoretic benefits of being Bayesian. But I find it more convincing for myself and others to think of Bayesian…
Aug 30, 2019

 

A simple and clever (but inefficient) way to calculate M-estimator sensitivity with automatic differentiation.

I have some recent work (A Higher-Order Swiss Army Infinitesimal Jackknife) that is all about calculating Taylor expansions of optima with respect to hyperparameters using au…
Aug 29, 2019
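
For background (this is the classical identity such computations recover, not necessarily the post’s particular construction): if \(\hat{\theta}(\epsilon)\) is defined by the first-order condition \(\nabla_\theta F(\hat{\theta}(\epsilon), \epsilon) = 0\) for a smooth objective \(F\), the implicit function theorem gives

\[ \frac{d\hat{\theta}}{d\epsilon} = -\left( \nabla_\theta^2 F(\hat{\theta}, \epsilon) \right)^{-1} \nabla_\epsilon \nabla_\theta F(\hat{\theta}, \epsilon), \]

so the first-order sensitivity needs only a Hessian solve and a cross-derivative, both of which automatic differentiation can provide.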

 

The Bayesian bootstrap is not a free lunch.

The following post is generated from this Jupyter notebook. Forgive the formatting — I’m still working out how best to post notebooks.
Aug 11, 2019

 

Why keep an open research journal?

Why keep an open research journal? Surely not for the sake of readers. There are already many great data science, statistics, and machine learning books and blogs out there…
Jul 26, 2019