**The Law of Large Numbers** is one of the foundational theorems of probability theory. It says that the average of the results obtained from a large number of trials should be close to the expected value, and tends to get closer as more trials are performed.

This theorem is very simple and intuitive. And perhaps because it is too intuitive, it becomes counter-intuitive. Why? Let’s talk about **the Gambler’s Fallacy**: in a binary-outcome event, if there has been a long run of one outcome, an observer might reason that because the two outcomes are destined to come out in a given ratio over a lengthy series of trials, the outcome that has not appeared for a while is temporarily advantaged. Upon seeing six straight occurrences of black on spins of a roulette wheel, a gambler suffering from this illusion would confidently bet on red for the next spin.

Why is it fallacious to think that sequences will self-correct for temporary departures from the expected ratio of the respective outcomes? Setting aside for a moment the statistically correct answer that each spin is independent of the others, and imagining that the gambler’s illusion were real, we can still point out many problems with that logic. For example, how long would this effect last? If we took the roulette ball and hid it for ten years, how would it know, once unearthed, that it should prefer red? Obviously, the gambler’s fallacy can’t be right.
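The independence argument is easy to check empirically. Here is a minimal sketch (using Python’s `random` module, with a fair 0/1 “wheel” that ignores the green zero pockets for simplicity) comparing the overall probability of red with the probability of red immediately after six blacks in a row:

```python
import random

random.seed(42)

# Simulate fair binary spins: 0 = black, 1 = red
# (ignoring the green zero pockets for simplicity).
spins = [random.randint(0, 1) for _ in range(1_000_000)]

# Collect the outcome that follows every run of six consecutive blacks.
after_six_blacks = []
for i in range(6, len(spins)):
    if all(s == 0 for s in spins[i - 6:i]):
        after_six_blacks.append(spins[i])

overall_red = sum(spins) / len(spins)
red_after_run = sum(after_six_blacks) / len(after_six_blacks)

print(f"P(red) overall:             {overall_red:.3f}")
print(f"P(red | six blacks before): {red_after_run:.3f}")
```

Both estimates come out near 0.5: a run of blacks gives red no advantage on the next spin.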

So why can’t the Law of Large Numbers be applied in the case of the Gambler’s Fallacy?

Short answer: *Statistically speaking, humans are shortsighted creatures.*

Long answer: People generally fail to appreciate that *occasional long runs of one or the other outcome are a natural feature of random sequences*. If you don’t buy it, let’s play a small game: take out a small piece of paper and write down a sequence of random binary digits (1s and 0s, for example). Once you are done, count the length of the longest run of either value. You will notice that that number is quite small. It has been demonstrated that we tend to avoid long runs: the sequences we write usually alternate back and forth too quickly between the two outcomes. This appears to be because *people expect random outcomes to be representative of the process that generates them*: if the trial-by-trial expectations for the two outcomes are 50/50, then we will try to make the series come out almost evenly divided. People generally assume too much local regularity in their conception of chance. In other words, *people are lousy random number generators*.
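You can see how long genuine random runs get with a short simulation (a sketch; the helper name `longest_run` is mine). For sequences of 100 fair coin flips, the longest run is typically around six or seven, much longer than most hand-written “random” sequences contain:

```python
import random

random.seed(0)

def longest_run(seq):
    """Length of the longest run of identical consecutive values."""
    best = cur = 1
    for prev, nxt in zip(seq, seq[1:]):
        cur = cur + 1 if nxt == prev else 1
        best = max(best, cur)
    return best

# Measure the longest run in each of 10,000 random 100-flip sequences.
runs = [longest_run([random.randint(0, 1) for _ in range(100)])
        for _ in range(10_000)]

avg = sum(runs) / len(runs)
print(f"average longest run in 100 flips: {avg:.1f}")
```

If your own hand-written sequence never had a run longer than three or four, you have just demonstrated the bias on yourself.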

So there you have it: humans, by nature, are statistically detail-oriented. We don’t usually consider the big picture but recognize only a few “remarkable” details, which then shape our view of the world. When we meet a new person, observations of a few isolated behaviors lead directly to judgments of stable personal characteristics such as friendliness or introversion. Observations of another person’s behavior are not perceived as potentially variable samples over time, but as direct indicators of stable traits. This problem is usually described as *the Law of Small Numbers*, which refers to

*the tendency to impute too much stability to small-sample results.*
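That instability is easy to quantify. Assuming a fair coin, this sketch (the helper `sample_proportions` is my own name) compares how much the observed proportion of heads varies across samples of size 10 versus size 1,000:

```python
import random
import statistics

random.seed(1)

def sample_proportions(sample_size, n_samples=2000):
    """Observed proportion of heads in each of n_samples fair-coin samples."""
    return [sum(random.randint(0, 1) for _ in range(sample_size)) / sample_size
            for _ in range(n_samples)]

small = sample_proportions(10)
large = sample_proportions(1000)

# Small samples scatter widely around 0.5; large samples cluster tightly.
print(f"std dev of proportion, n=10:   {statistics.stdev(small):.3f}")
print(f"std dev of proportion, n=1000: {statistics.stdev(large):.3f}")
```

A sample of 10 coin flips routinely comes out 70/30 or worse, so ten observations of a person’s behavior are a shaky basis for a verdict on their character.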

Obviously, knowing about this won’t change our nature, but once we acknowledge our bias, we can at least be more mindful of the situation and of our decisions.