Human thought process from a data nerd’s point of view

This post is inspired by a late-night discussion with a friend at a party (yes, because at 2am, with a stomach full of mojito, there is no better topic to talk about than Machine Learning), so please take it with a grain of salt.

The ultimate goal of Artificial Intelligence is, as its name suggests, to create a system that can reason the way humans do. Today, despite all the hype about Deep Learning, people who work in Machine Learning and AI know that there is still a long way to go to achieve that dream. What we have done so far is extremely impressive, but every single modern ML algorithm is very data-intensive: it needs a lot of examples to work well. Besides, in some sense, ML algorithms are just “remembering” what we show them; their ability to extrapolate knowledge is very limited, or nonexistent. For example, show a baby a dog, and later she can easily distinguish between a dog and a cat, even though she has never “seen” a cat. Most ML models cannot do that: if you show them something they have never seen, they will just try to find the most similar thing in their vocabulary to assign to it.

[Image: a still from Terminator Salvation. “Sorry dude, I have no idea how to create Skynet.”]

Anyway, today’s topic is not about the machines. In this post, I want to take the opposite approach and compare the human thought process with a machine learning model.

For me, the way we reason and make decisions follows a generative model: we compute a probability distribution over the options, and then we choose the most probable one. We make extensive use of Bayes’ rule to incorporate new observations into our worldview, which means that in our mind we already have a prior probability distribution for every phenomenon. Each time we get new information about a particular phenomenon, we update the corresponding prior. For example, someone who has spent his whole life in a tropical country is 100% sure, when he sees that it is sunny, that it is hot. If he moves to a country in the temperate zone, he will have to “update” his belief, because in winter it is cold with or without the sun.

[Image: Bayes’ theorem, P(A|B) = P(B|A) · P(A) / P(B). Source: gaussianwaves.com]

The prior belief is what people usually call “prejudice”, and how hard it is to change depends on the individual. I will argue that for a young, open-minded person, the “prejudice distribution” has the form of a Gaussian curve with high variance: it doesn’t have a lot of statistical strength, which allows her to update her belief easily. In contrast, someone whose prior has very low variance holds a firm belief (or prejudice), and it’s very difficult to change their mind.

[Image: normal distribution PDFs. For the same mean, a person with higher “variance” is more open-minded (the peak is lower, the tails carry more weight). Source: Wikipedia.]
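
Here is the same idea as a tiny conjugate Normal–Normal update, a hedged sketch with arbitrary numbers: a wide prior jumps toward new evidence, while a narrow prior barely moves.

```python
# Conjugate Normal-Normal update: how much one observation moves a
# Gaussian prior, depending on its variance. Values are arbitrary.

def normal_update(prior_mean, prior_var, obs, obs_var):
    """Posterior (mean, variance) after one Gaussian observation."""
    post_var = 1 / (1 / prior_var + 1 / obs_var)
    post_mean = post_var * (prior_mean / prior_var + obs / obs_var)
    return post_mean, post_var

obs, obs_var = 10.0, 1.0  # a surprising new observation

# Open mind: wide prior (variance 100) -> posterior mean ~9.9
print(normal_update(0.0, 100.0, obs, obs_var))

# Firm prejudice: narrow prior (variance 0.01) -> posterior mean ~0.1
print(normal_update(0.0, 0.01, obs, obs_var))
```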

Now let’s think about the decision-making process. Just like a ML model, we use “data” to make a “prediction”. Each “data point” is a collection of many features, i.e. pieces of information that can potentially affect the decision.

Before going any further, we need to talk about the “bias-variance dilemma” in Machine Learning. In his amazing article “A few useful things to know about machine learning”, Pedro Domingos gave the following definitions:

Bias is a learner’s tendency to consistently learn the same wrong thing.

Variance is the tendency to learn random things irrespective of the real signal.

[Image: bias and variance illustrated with dart-throwing. Source: “A few useful things to know about machine learning”, Pedro Domingos.]

The bias-variance tradeoff says that as the bias increases, the variance tends to decrease, and vice versa. This tradeoff links directly to a severe problem in ML: overfitting and underfitting.

Overfitting occurs when the model learns the noise in the training data and can’t generalise well (low bias – high variance).

Underfitting is the opposite: the model is too simple to capture the real signal (high bias – low variance).

When building a model, every ML practitioner faces the same challenge: finding the sweet spot between bias and variance.
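
To see the tradeoff with your own eyes, here is a small numpy-only sketch (the curve, the noise level and the degrees are arbitrary choices): fit polynomials of increasing degree to noisy samples of a sine wave and compare their errors on clean test data.

```python
# Under- and overfitting with polynomials of increasing degree.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.sort(rng.uniform(0, 1, 20))
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, 20)
x_test = np.linspace(0, 1, 200)
y_test = np.sin(2 * np.pi * x_test)  # the true, noise-free signal

for degree in (1, 3, 15):
    coeffs = np.polyfit(x_train, y_train, degree)
    mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: test MSE = {mse:.3f}")

# degree 1 underfits (high bias: a line can't bend like a sine),
# degree 15 overfits (high variance: it chases the noise), and
# degree 3 sits near the sweet spot. numpy may even warn that the
# high-degree fit is poorly conditioned -- that's the overfitting talking.
```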

In my opinion, the problem with the human thought process is that in “auto mode”, our mind is constantly “underfitting” the “data”, the reason being that our mental model is too simple to deal with the complexity of life (wow, I sound like a philosopher haha!). I need to emphasize the “auto mode” here, because when we are conscious of the situation and focused on the task at hand, we become much more effective. However, over 90% of the decisions we make every day are made in an unconscious state (just to be clear, I don’t mean that we are in a coma 90% of the time…).

The question now is: why is our mental model too simple? From a ML point of view, I can think of three reasons:

  1. Lack of data: this may seem to contradict what I said at the beginning about our great ability to learn from few observations. However, I still stand my ground: humans are amazing at learning and extrapolating concrete concepts. The problem arises with abstract, complex ones that don’t have a clear definition. In those cases, the decision boundary is non-linear and extremely complicated, so without enough data our mind fails to fit an appropriate model.
  2. Lack of features: this one is interesting. When building a ML system, we are usually encouraged to reduce the number of features, because it helps the model generalize better and avoid overfitting. Moreover, a simpler model needs less computational power to run. I believe our mind works the same way: by limiting the number of features going into the mental model, it can process information faster and more efficiently. The problem is that for complex situations, the model doesn’t have enough features to make good decisions. One obvious example is when we first meet someone. It is commonly known that we have just seven seconds to make a first impression. Statistically speaking, this is because our mental model for first impressions takes only appearance into account as a feature; it doesn’t care (at that very moment) about the person’s personality, his job, or his education, …
  3. Wrong loss function: the loss function is the core element of every ML algorithm. Concretely, to improve the performance of its predictions, a ML algorithm needs a metric that tells it how well it is doing so far. That’s where the loss function comes into play: it measures the “gap” between the desired output and the actual prediction. The ML algorithm then just needs to optimise that loss function. If we think about our thought process, we can see that for certain tasks we have had the wrong idea about the “loss function” from the beginning. An extreme example: when we want to please or impress someone, we begin to bend our opinions to suit theirs, and eventually our worldview is largely shaped by theirs. This is because our loss function in this case is “her satisfaction” instead of “my satisfaction”. This is why people usually say that the key to success is to “fake it till you make it”: if your loss function is your success, get out of your comfort zone and do whatever the most successful people are doing; your mental model will eventually change to optimise it (see the sketch after this list).
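
Here is that last point as a toy sketch; the numbers and the one-dimensional “opinion” parameter are invented for illustration. The same plain gradient-descent “optimizer” ends up in completely different places depending on which loss it minimises.

```python
# Same optimizer, different loss function, very different end state.

def optimise(loss_grad, theta=0.0, lr=0.1, steps=100):
    """Plain gradient descent on a single scalar parameter."""
    for _ in range(steps):
        theta -= lr * loss_grad(theta)
    return theta

their_view, my_values = 8.0, 2.0  # arbitrary "opinion" positions

# Loss = "her satisfaction": squared distance to *their* opinion.
please_them = optimise(lambda t: 2 * (t - their_view))
# Loss = "my satisfaction": squared distance to *my own* values.
please_me = optimise(lambda t: 2 * (t - my_values))

print(please_them, please_me)  # ~8.0 vs ~2.0: the loss decides where you land
```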

So what can we do to improve our mental model, or more concretely, to make better decisions? This is a very hard question and I’m not at all qualified to answer it. However, for the sake of argument, let’s think of it as a ML model: what would you do if your ML model didn’t work well? Here are my suggestions:

  1. Experience more: this is the obvious solution to the lack of data. By getting out of your comfort zone and stretching your mind, you will “update” your prior beliefs more quickly. So do whatever challenges you, be it physically or mentally: read a book, ride a horse, run 20 km, implement a ML algorithm (my favorite, ahah), just please don’t sit there and let social media shape your mental model.
  2. Be mindful: as I said earlier, when we are really conscious of our actions, we can perform at a whole new level, with incredible efficiency. By being mindful, we can use more “features” than our mental model usually takes into account, and thus get a better view of the situation. However, this is easier said than done; I don’t think we can biologically maintain that state all the time.
  3. Reflect on yourself: each week/month/year, spend some time reflecting on your “loss function”: what is your priority? What do you want to do? Who do you want to become? Let it be the compass for your actions and decisions, and you will soon be amazed by the results.

In conclusion, a mental model is just like a ML model in production: you cannot modify its output on the fly. If you want to improve its performance systematically, you need to take the time to analyse and understand why it works the way it does. This is an iterative, trial-and-error process that can be long and tedious, but it is crucial for every model.

Experience more, embrace mindfulness, and reflect often; sooner or later you will possess a robust mental model. All the best!