What is the distribution of the maximum of random variables? What started out as a utilitarian question in my exploration of some generalized versions of the secretary problem turns out to be quite a deep topic.
(Note that I have little background in probability and statistics. Please forgive (and inform me of, so I can fix!) errors of notation or language. Or general confusion on my part.)
Geometric Interpretation of Maximum
Consider the cumulative distribution function $F(x) = P(X \le x)$. You can picture two copies of the distribution as forming sides of a square. $P(\max(X_1, X_2) \le x) = F(x)^2$ is the inside.
More generally, $P(\max(X_1, X_2, \ldots, X_n) \le x)$ can be visualized as an $n$-cube and is equal to $F(x)^n$. By differentiation we get the probability density function,

$$p_{\max}(x) = \frac{d}{dx} F(x)^n = n F(x)^{n-1} P(X = x)$$
(Note that we need to be a bit careful about the semantics of $P(X = x)$. This makes sense for discrete distributions, but for others we need to think of it as probability density. For the benefit of intuition, we will continue to use this notation, however.)
This can also be observed as a simple geometric fact, the probability density being the expanding surface of the cube: the $n$-cube has $n$ faces, each of size $F(x)^{n-1}$, growing outward at rate $P(X = x)$.
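We can sanity-check $P(\max \le x) = F(x)^n$ with a quick Monte Carlo sketch. The choice of $\text{Uniform}(0,1)$ (where $F(x) = x$) and the particular $n$, $x$, and trial count are just illustrative assumptions:

```python
import random

# Empirical check that P(max(X_1, ..., X_n) <= x) = F(x)^n,
# using Uniform(0, 1) samples, for which F(x) = x.
random.seed(0)
n, x, trials = 3, 0.7, 200_000

hits = sum(
    max(random.random() for _ in range(n)) <= x
    for _ in range(trials)
)
empirical = hits / trials
theoretical = x ** n  # F(x)^n for the uniform CDF F(x) = x

print(f"empirical:   {empirical:.4f}")
print(f"theoretical: {theoretical:.4f}")
assert abs(empirical - theoretical) < 0.01
```

The two numbers should agree to roughly Monte Carlo precision, about $1/\sqrt{\text{trials}}$.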
Combinatorial Interpretation of Maximum
We may also observe that $p_{\max}(x) = n F(x)^{n-1} P(X = x)$ in a nice combinatorial way.
Given random variables $X_1, X_2, \ldots, X_n$, we imagine them ordered: $X_{(1)} \le X_{(2)} \le \ldots \le X_{(n)}$. (Note the notational difference between $X_k$, which is some arbitrary random variable, and $X_{(k)}$, which is the one that turned out to be in position $k$ when we ordered them.)
There are $n$ possibilities for which random variable is the last one ($X_{(n)}$). The probability that $X_{(n)}$, which will be equal to the maximum, will be $x$ is $P(X = x)$. The probability that the other $n-1$ variables will be less than or equal to $x$ is $F(x)^{n-1}$.
The result, $p_{\max}(x) = n F(x)^{n-1} P(X = x)$, follows.
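The density formula itself can also be checked numerically. Here is a sketch that estimates the density of the maximum near a point by counting samples in a small bin, again assuming $\text{Uniform}(0,1)$ so that $F(x) = x$ and the density $P(X = x)$ is $1$; the bin width and tolerance are arbitrary choices:

```python
import random

# Estimate the density of the maximum near x via a small bin [x, x + dx)
# and compare with n * F(x)^(n-1) * p(x); for Uniform(0, 1), p(x) = 1.
random.seed(1)
n, x, dx, trials = 4, 0.8, 0.01, 400_000

count = sum(
    x <= max(random.random() for _ in range(n)) < x + dx
    for _ in range(trials)
)
empirical_density = count / (trials * dx)

m = x + dx / 2  # evaluate the formula at the bin midpoint
theoretical_density = n * m ** (n - 1)  # n * F(m)^(n-1) * p(m)

print(f"empirical:   {empirical_density:.3f}")
print(f"theoretical: {theoretical_density:.3f}")
assert abs(empirical_density - theoretical_density) < 0.1
```

Shrinking `dx` sharpens the density estimate but raises the Monte Carlo noise per bin, so the tolerance here is deliberately loose.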
Now consider the minimum. The probability that $\min(X_1, \ldots, X_n) \le x$ is the probability that $X_1, \ldots, X_n$ are not all greater than $x$.
The probability that a particular variable is greater than $x$ is the probability that it isn't less than or equal to it, $1 - F(x)$. Therefore,

$$P(\min(X_1, \ldots, X_n) \le x) = 1 - (1 - F(x))^n$$
If this feels familiar to you, it's probably because it's just an application of De Morgan's law, equivalent to how A OR B is the same as NOT ((NOT A) AND (NOT B)).
You can also interpret this geometrically, similarly to how we interpreted the maximum: $(1 - F(x))^n$ is the shrinking $n$-cube of probability that every variable is greater than $x$.
From here, differentiate to get the probability density:

$$p_{\min}(x) = n (1 - F(x))^{n-1} P(X = x)$$
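The same kind of Monte Carlo sketch verifies the minimum. Again the $\text{Uniform}(0,1)$ distribution (so $F(x) = x$) and the specific parameters are just illustrative:

```python
import random

# Empirical check that P(min(X_1, ..., X_n) <= x) = 1 - (1 - F(x))^n,
# using Uniform(0, 1) samples, for which F(x) = x.
random.seed(2)
n, x, trials = 3, 0.3, 200_000

hits = sum(
    min(random.random() for _ in range(n)) <= x
    for _ in range(trials)
)
empirical = hits / trials
theoretical = 1 - (1 - x) ** n  # 1 - (1 - F(x))^n

print(f"empirical:   {empirical:.4f}")
print(f"theoretical: {theoretical:.4f}")
assert abs(empirical - theoretical) < 0.01
```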
Combinatorial Construction of Order Statistics
Suppose we have a number of random variables, $X_1, X_2, \ldots, X_n$, from the same distribution. We want to know the distribution to expect from the random variable in a particular position after we order them, $X_{(k)}$ — the order statistics. Let's consider the distribution to expect of a variable which has $a$ variables less than (or equal to) it and $b$ variables greater than (or equal to) it.
The number of ways to select $a$ elements for the less-than set, $1$ for the actual element, and $b$ for the greater-than set from $a + 1 + b$ elements is the multinomial coefficient $\binom{a+1+b}{a,\,1,\,b} = \frac{(a+1+b)!}{a!\,b!}$. Then, the probability distribution of $X$ is

$$p_{a,b}(x) = \frac{(a+1+b)!}{a!\,b!}\, F(x)^a\, P(X = x)\, (1 - F(x))^b$$
It’s a little bit less elegant, but the usual way to think of this is as $X_{(a+1)}$, the $(a+1)$-th order statistic of $a + 1 + b$ random variables. Or, going the other way, as the $k$-th order statistic of $n$ random variables, with $a = k - 1$ and $b = n - k$:

$$p_{X_{(k)}}(x) = \frac{n!}{(k-1)!\,(n-k)!}\, F(x)^{k-1}\, P(X = x)\, (1 - F(x))^{n-k}$$
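The order-statistic density can also be checked empirically: sort $a + 1 + b$ samples and look at the element with exactly $a$ values below it. The sketch below assumes $\text{Uniform}(0,1)$ and small illustrative values of $a$ and $b$:

```python
import math
import random

# Check that the variable with a values below it and b above it has density
# (a+1+b)! / (a! b!) * F(x)^a * p(x) * (1 - F(x))^b.
# For Uniform(0, 1): F(x) = x and p(x) = 1.
random.seed(3)
a, b = 2, 1
n = a + 1 + b
x, dx, trials = 0.6, 0.01, 400_000

count = 0
for _ in range(trials):
    samples = sorted(random.random() for _ in range(n))
    if x <= samples[a] < x + dx:  # samples[a] has exactly a values below it
        count += 1
empirical = count / (trials * dx)

m = x + dx / 2  # evaluate the formula at the bin midpoint
theoretical = (math.factorial(n) / (math.factorial(a) * math.factorial(b))
               * m ** a * (1 - m) ** b)

print(f"empirical:   {empirical:.3f}")
print(f"theoretical: {theoretical:.3f}")
assert abs(empirical - theoretical) < 0.1
```

For uniforms this is the density of a $\mathrm{Beta}(a + 1,\, b + 1)$ distribution, which gives an independent way to cross-check the formula.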
(This is my first writing done in emacs org mode + LaTeX and wordpress integration. Thanks to Sacha Chua for help! This is awesome.)