The average of a probability distribution tells you a lot about the real world. What’s the average height of the lawn? How much money do you expect to lose, on average, playing video poker in Las Vegas? What is the life expectancy of a chipmunk?
But aside from knowing this average, you’d like to know how close to it you expect to be. That is, when Bob measures a random blade of grass, one measurement might come out right at the mean $\langle x \rangle$, another a bit above it, another a bit below. But how far from $\langle x \rangle$ does he expect a typical blade to be? The complete distribution tells you that, but you just want one number to tell you how close. This is a measure of how sharply peaked, or how wide, the distribution is.
There are lots of different definitions you could come up with, but we’ll use the same one that everyone else does, the variance.
\[
\sigma^2 \equiv \left\langle \left(x - \langle x \rangle\right)^2 \right\rangle
\tag{1.38}
\]
This means we take the difference between an outcome and the mean, square it, and then average.
In terms of discrete probabilities this is:
\[
\sigma^2 = \sum_i p_i \left(x_i - \langle x \rangle\right)^2
\tag{1.39}
\]
and for a continuous distribution:
\[
\sigma^2 = \int \rho(x) \left(x - \langle x \rangle\right)^2 \, dx
\tag{1.40}
\]
This has the units of $x^2$ and is often referred to as $\sigma^2$. The standard deviation is then just $\sigma = \sqrt{\sigma^2}$.
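As a quick illustration of eqn. 1.39, here is a short Python sketch that computes the variance of a discrete distribution straight from the definition. The example distribution (a loaded three-outcome die) is made up purely for illustration.

```python
# Variance of a discrete distribution from the definition:
#   sigma^2 = sum_i p_i * (x_i - <x>)^2     (eqn. 1.39)

def mean(ps, xs):
    """Average <x> = sum_i p_i * x_i."""
    return sum(p * x for p, x in zip(ps, xs))

def variance(ps, xs):
    """Variance sigma^2 = sum_i p_i * (x_i - <x>)^2."""
    m = mean(ps, xs)
    return sum(p * (x - m) ** 2 for p, x in zip(ps, xs))

# A made-up loaded die with outcomes 1, 2, 3:
ps = [0.5, 0.25, 0.25]
xs = [1, 2, 3]

print(mean(ps, xs))      # 1.75
print(variance(ps, xs))  # 0.6875
```

The standard deviation is then just `variance(ps, xs) ** 0.5`.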
Note, for those who remember their classical mechanics, that the variance corresponds to the moment of inertia, about the center of mass, of an object of total mass 1.
Let’s do some examples. What’s the variance for the distribution in fig. 1.1? That’s the case where you flip a coin and you get 1 for heads and -1 for tails. In that case the mean was zero, so applying eqn. 1.39 you have:
\[
\sigma^2 = \tfrac{1}{2}\left(1 - 0\right)^2 + \tfrac{1}{2}\left(-1 - 0\right)^2 = 1
\tag{1.41}
\]
So the variance in this case is 1, and the standard deviation is $\sigma = 1$: a typical flip lands one unit away from the mean.
Lastly, there’s an interesting identity for the variance that sometimes simplifies calculations:
\begin{align}
\sigma^2 &= \left\langle \left(x - \langle x \rangle\right)^2 \right\rangle
          = \left\langle x^2 - 2x\langle x \rangle + \langle x \rangle^2 \right\rangle
\tag{1.42}\\
         &= \langle x^2 \rangle - 2\langle x \rangle^2 + \langle x \rangle^2
          = \langle x^2 \rangle - \langle x \rangle^2
\tag{1.43}
\end{align}
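To see this identity in action, here is a minimal Python check, using the $\pm 1$ coin of fig. 1.1, that the definition and $\langle x^2 \rangle - \langle x \rangle^2$ give the same number:

```python
# Check that sigma^2 = <(x - <x>)^2> equals <x^2> - <x>^2
# for the +/-1 coin flip of fig. 1.1.
ps = [0.5, 0.5]
xs = [1, -1]

mean_x  = sum(p * x for p, x in zip(ps, xs))      # <x>   = 0
mean_x2 = sum(p * x * x for p, x in zip(ps, xs))  # <x^2> = 1

var_definition = sum(p * (x - mean_x) ** 2 for p, x in zip(ps, xs))
var_identity   = mean_x2 - mean_x ** 2

print(var_definition)  # 1.0
print(var_identity)    # 1.0
```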
Now we’ll move on to a more complex example.
OK, what’s that “*” doing up there? Well, that means that if you look down the page and start to feel like you might lose your lunch, just skip to eqn. 1.55 at the end of this section to see the final result.
Now let’s do a simple variation on this. As in problem 1.5.5, you get 0 for tails and 1 for heads. So the mean for 1 trial is $\langle x \rangle = 1/2$. What’s the variance?
\[
\sigma^2 = \tfrac{1}{2}\left(0 - \tfrac{1}{2}\right)^2 + \tfrac{1}{2}\left(1 - \tfrac{1}{2}\right)^2 = \tfrac{1}{4}
\tag{1.44}
\]
This is a standard deviation of $\sigma = 1/2$. This makes sense, because it’s the same as the previous example except the peaks are now separated by half the distance, so you expect half the width.
Now let’s consider what you get with 10 tosses, or more generally $N$ tosses. What happens is that now
\[
\sigma^2 = \frac{N}{4}
\tag{1.45}
\]
so for $N$ tosses the standard deviation is
\[
\sigma = \frac{\sqrt{N}}{2}
\tag{1.46}
\]
The mean, we found out, was $\langle n \rangle = N/2$. So the variance and the mean are both proportional to $N$.
How do I know this is the answer? What, you think I looked it up? You can do this following the methods we used in section 1.5.6.
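If you’d rather check the claim than take it on faith, here is a Monte Carlo sketch: toss a fair coin $N$ times, count heads, and repeat the whole experiment many times. The toss and trial counts below are arbitrary choices for the sketch; the sample mean and variance of the head count should come out near $N/2$ and $N/4$.

```python
import random

# Monte Carlo check that, for N fair-coin tosses, the number of heads
# has mean ~ N/2 and variance ~ N/4.
random.seed(0)  # fixed seed so the sketch is reproducible

N = 100          # tosses per experiment (arbitrary)
TRIALS = 20_000  # number of repeated experiments (arbitrary)

counts = [sum(random.randint(0, 1) for _ in range(N)) for _ in range(TRIALS)]
mean_n = sum(counts) / TRIALS
var_n = sum((n - mean_n) ** 2 for n in counts) / TRIALS

print(mean_n)  # close to N/2 = 50
print(var_n)   # close to N/4 = 25
```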
We can write down the variance directly from the binomial distribution, just like we did for the mean (eqn. 1.36).
\[
\sigma^2 = \sum_{n=0}^{N} \frac{1}{2^N}\binom{N}{n}\left(n - \frac{N}{2}\right)^2
\tag{1.47}
\]
As with the mean, you can do a bad-ass differentiation trick to get the answer; it’s a bit messy, but it works. However, I’ll follow the same kind of reasoning we talked about for the mean to get the answer.
So you use the same variables as we did before: $c_i$ is the outcome of the $i$-th toss (0 for tails, 1 for heads), so that the number of heads is $n = \sum_{i=1}^{N} c_i$. Now we want to calculate the variance, so let’s see how far we can get:
\[
\sigma^2 = \left\langle \left(n - \langle n \rangle\right)^2 \right\rangle
= \left\langle \left(\sum_{i=1}^{N} \left(c_i - \langle c_i \rangle\right)\right)^{\!2} \right\rangle
\tag{1.48}
\]
Well, we’ve got pretty far, but not quite far enough. Now we have the square of a sum. How do you handle that? Think of the case $N = 2$: $(a + b)^2 = a^2 + ab + ba + b^2$. This looks like a double summation over two indices. So, generalizing, the square in the above equation can be written as a double sum:
\[
\left(\sum_{i=1}^{N} \left(c_i - \langle c_i \rangle\right)\right)^{\!2}
= \sum_{i=1}^{N}\sum_{j=1}^{N} \left(c_i - \langle c_i \rangle\right)\left(c_j - \langle c_j \rangle\right)
\tag{1.49}
\]
But the average of a sum is the sum of the averages (even for a double sum), so
\[
\sigma^2 = \sum_{i=1}^{N}\sum_{j=1}^{N} \left\langle \left(c_i - \langle c_i \rangle\right)\left(c_j - \langle c_j \rangle\right) \right\rangle
\tag{1.50}
\]
Now there are two kinds of terms in this double sum: ones with $i = j$ and ones with $i \neq j$. If $i \neq j$, then we can use independence to figure out the answer. In that case
\[
\left\langle \left(c_i - \langle c_i \rangle\right)\left(c_j - \langle c_j \rangle\right) \right\rangle
= \left\langle c_i - \langle c_i \rangle \right\rangle \left\langle c_j - \langle c_j \rangle \right\rangle
\tag{1.51}
\]
Because, as we saw in section 1.5.6, for independent variables the average of the product is the product of the averages. But $\langle c_i \rangle$ is just a constant, so averaging it doesn’t change it: $\langle c_i - \langle c_i \rangle \rangle = \langle c_i \rangle - \langle c_i \rangle = 0$. So
\[
\left\langle \left(c_i - \langle c_i \rangle\right)\left(c_j - \langle c_j \rangle\right) \right\rangle = 0
\qquad (i \neq j)
\tag{1.52}
\]
So all terms with $i \neq j$ vanish. We’re left only with the $i = j$ terms, in other words a single sum:
\[
\sigma^2 = \sum_{i=1}^{N} \left\langle \left(c_i - \langle c_i \rangle\right)^2 \right\rangle
\tag{1.53}
\]
Well, each term in the sum is just the variance for a single trial. Let’s compute that, say for $i = 1$:
\[
\left\langle \left(c_1 - \langle c_1 \rangle\right)^2 \right\rangle
= \tfrac{1}{2}\left(0 - \tfrac{1}{2}\right)^2 + \tfrac{1}{2}\left(1 - \tfrac{1}{2}\right)^2
= \tfrac{1}{4}
\tag{1.54}
\]
Good, you’re still awake! Must have had some pretty strong coffee. Well, we’re almost there. Now we sum over all $N$ terms and obtain
\[
\sigma^2 = \frac{N}{4}
\tag{1.55}
\]
That’s a pretty simple answer. I could’ve just written it down and left it at that. But that would take away all the fun.
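As a final sanity check on the double-sum argument, here is a short exact computation for a small case ($N = 4$, chosen arbitrarily). It enumerates all $2^N$ equally likely outcomes with exact fractions and verifies that the off-diagonal averages $\langle (c_i - \tfrac{1}{2})(c_j - \tfrac{1}{2}) \rangle$ vanish, that each diagonal term is $\tfrac{1}{4}$, and that the full double sum totals $N/4$:

```python
from itertools import product
from fractions import Fraction

# Exact check of the double-sum argument for N = 4 fair-coin tosses.
N = 4
half = Fraction(1, 2)
outcomes = list(product([0, 1], repeat=N))  # all 2^N outcomes
prob = Fraction(1, len(outcomes))           # each is equally likely

def avg_didj(i, j):
    """Exact average of (c_i - 1/2)(c_j - 1/2) over all outcomes."""
    return sum(prob * (c[i] - half) * (c[j] - half) for c in outcomes)

print(avg_didj(0, 1))  # 0    (off-diagonal terms vanish)
print(avg_didj(2, 2))  # 1/4  (each diagonal term)

total = sum(avg_didj(i, j) for i in range(N) for j in range(N))
print(total)           # 1, i.e. N/4
```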
1. Suppose you have an unbiased coin, that is, it lands heads and tails with equal probability. You toss it 16 times and count the total number of heads you get. What’s the standard deviation of the total number of heads? Now toss it 256 times. What’s the standard deviation now?
2. Suppose you have the same setup as the last problem, with an unbiased coin. You give this coin to Bob so he can perform experiments on it. You want Bob to estimate the probability that a coin will land heads. Bob tosses it 16 times. He defines the probability of it landing heads to be (total number of heads/total number of trials).
(a) On average, what result will Bob obtain?
(b) You want to estimate the standard deviation in Bob’s result. Using the formula for the standard deviation of the binomial distribution, calculate the standard deviation of the probability that Bob will ascertain.
(c) What happens to the standard deviation when Bob repeats the calculation 256 times instead?