But is this the only way to define an average? It makes sense because it corresponds to a real procedure, one that could be carried out to any desired accuracy: playing the game a zillion times. But there are other ways of defining the average that give the same results and are much easier to calculate with. The drawback is that these methods don't have the same air of reality, but if a definition makes calculation easier and gives identical results, it's worth considering.
Let's concentrate on averaging the variable $x$. What is its probability distribution $P(x)$? It looks like the plot in fig. 1.1.
It's $P(x)$ at each of the values the game can produce, and zero everywhere else. Let's define the average of this distribution as
$$\langle x \rangle \equiv \sum_x x\, P(x) \qquad (1.16)$$
The sum is over all possible values that $x$ can take, each weighted by $P(x)$. In this case it reproduces the value we found above by averaging over games.
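To make the weighted sum concrete, here is a minimal Python sketch. The distribution `P` below is an assumption for illustration (a fair game with outcomes $\pm 1$, each with probability 1/2); the actual $P(x)$ for the game at hand would go in its place.

```python
# Average over a probability distribution: <x> = sum over x of x * P(x).
# P below is an assumed illustrative distribution (a fair game paying +1 or -1,
# each with probability 1/2); substitute the actual P(x) for the game at hand.
P = {+1: 0.5, -1: 0.5}

avg = sum(x * p for x, p in P.items())
print(avg)  # 0.0 for this symmetric choice of P
```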
You can see that in this example the two definitions, the average over games and the average over the probability distribution, are identical. We expect this to be true in general. But averaging the second way is normally easier in practice, so we'll stick with it. This definition, a sum weighted by $P(x)$, is something we can evaluate fairly easily whenever we know $P(x)$.
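A quick way to convince yourself that the two definitions agree is to simulate. The sketch below, again assuming the illustrative $\pm 1$ distribution from the previous example, compares the average over many simulated games with the weighted sum of eq. (1.16).

```python
import random

# Assumed illustrative distribution, as in the previous sketch.
P = {+1: 0.5, -1: 0.5}

# Definition 2: average over the probability distribution (eq. 1.16).
dist_avg = sum(x * p for x, p in P.items())

# Definition 1: average over games -- play the game many times and
# average the outcomes.
values, weights = zip(*P.items())
n_games = 1_000_000
draws = random.choices(values, weights, k=n_games)
game_avg = sum(draws) / n_games

print(dist_avg, game_avg)  # agree to within statistical fluctuations
```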
Another thing I should mention is that this kind of average is what I referred to above as an “expectation value”. You're averaging over an infinite number of games, so you get a well-defined result. An ordinary average, over say 10 games, would give you a different result every time you did it. So it makes more sense to deal with a well-converged quantity, the expectation value, rather than a result that fluctuates every time you do the experiment.
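To see the fluctuations directly, one can repeat the 10-game average a few times and compare it with a much longer run. A sketch, using the same assumed $\pm 1$ fair-game distribution as above:

```python
import random

def game_average(n_games):
    """Average outcome of n_games plays of the assumed +1/-1 fair game."""
    return sum(random.choices([+1, -1], k=n_games)) / n_games

# An average over only 10 games fluctuates from run to run...
print([game_average(10) for _ in range(5)])

# ...while a long run settles down near the expectation value (0 here).
print(game_average(1_000_000))
```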