The Brain is a ‘Black Box’

There is, according to some commentators, a crisis in AI. The complaint is that 'models are not explainable' – in other words, they are a 'black box': answers come out, but they are not necessarily explained. Perhaps more correctly, the answer comes out, but these commentators want the model to babysit them through the entire process of figuring it out – preferably in a magical, low-effort way that spares the babysat person all of the math that is the real reason the decisions are made. As much as I would like to spend more time mocking such commentators, my main point here is something else: our own human decision processes are also a 'black box'.

Firstly, recall that the brain is built from relatively simple pieces. A neuron takes many inputs from other neurons; if the right inputs arrive at the same time, the neuron generates an output. Over time, a neuron strengthens or weakens the weight it gives each input – this is learning. While genetics, biochemistry, and the rest make the actual operations exceedingly complicated, the end result is pretty simple: the brain is a massive network. To my chagrin, most neuroscience is lost in the small details of those operations, and very little is actually understood about how the insanely massive network generates consciousness (one human brain could be said to have many more connections than the entire internet).
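
For illustration, here is a toy artificial neuron in Python that very loosely captures the pieces just described: weighted inputs, firing when enough of the right inputs arrive together, and strengthening of inputs that fire together over time. The class, its parameters, and the Hebbian-style update rule are all invented for this sketch – a crude caricature of the idea, not a model of the biology.

```python
# A toy analogy for the neuron described above: many weighted inputs,
# an output when the weighted sum crosses a threshold, and weights that
# strengthen or weaken with experience. Purely illustrative.

class ToyNeuron:
    def __init__(self, n_inputs, threshold=1.0, learning_rate=0.1):
        self.weights = [0.5] * n_inputs   # starting strength of each input
        self.threshold = threshold
        self.learning_rate = learning_rate

    def fire(self, inputs):
        # An output is generated only if the right inputs arrive together.
        activation = sum(w * x for w, x in zip(self.weights, inputs))
        return 1 if activation >= self.threshold else 0

    def learn(self, inputs):
        # Hebbian-style update: inputs active when the neuron fires are
        # strengthened; inputs silent at that moment decay slightly.
        if self.fire(inputs):
            for i, x in enumerate(inputs):
                if x:
                    self.weights[i] += self.learning_rate
                else:
                    self.weights[i] -= self.learning_rate * 0.1

neuron = ToyNeuron(n_inputs=3)
for _ in range(10):
    neuron.learn([1, 1, 0])   # inputs 0 and 1 repeatedly arrive together
print(neuron.weights)         # their weights grow; input 2's weight decays
```

One unit like this is trivially inspectable; the point of the rest of this post is that billions of them wired together are not.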

Now, any human’s decision is the result of that entire network’s activity, and the butterfly effect is certainly relevant here. Experience from childhood, the meal eaten the night before – all of these things have shaped the network and can affect decisions not at all related to that particular data. One example that few will deny is the almost universal power of attractiveness. People who are prettier are favored by others and so usually privileged over their less attractive peers. Attractiveness can also have widespread social repercussions, affecting the mindset and actions of a group of people seemingly at random across time.

Yet, you might argue, that doesn’t make the brain a black box, just a complicated box. Attractiveness is a factor that can easily be listed, even if it is not openly admitted most of the time. A decent counter-example might be people’s choice of vacation/holiday destinations. Ask, and generally you will get some variation on “I thought X would be cool… and my great-grandmother lived there for six months” – that last being a feeble attempt at a more solid reason, one which probably had very little influence on the actual decision. There is absolutely nothing wrong with this ‘feel good’ approach most of the time – you don’t really need to cognitively understand the heuristics your brain uses to choose where to travel; they just need to be ‘good enough’.

In the business world, a good example is how locations for new warehouses were chosen at my last job. The previous expansions – jumping from the Midwest down to Texas and Florida – seemed to be based more on giving the executives/owners an excuse to visit warm places in winter, since those markets are relatively smaller per capita than the north, where snowfall drives more tire sales. An acquisition in California probably happened simply because the opportunity was there. Later expansion seemed absolutely chaotic. There were a number of logical factors available to guide growth: strength of existing competitors in the market, overall size of the market, proximity to existing warehouses for transfers. None of these really seemed to play a part, except in eliminating a few options. Ultimately, decisions were made from ‘the gut’.
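
For contrast, here is a rough sketch of what an explicit, auditable version of that decision could have looked like – a simple weighted scoring of candidate markets on the factors just listed. The factor weights, candidate cities, and scores are all made up for illustration; the point is only that a guideline like this can be written down and argued over, which a gut call cannot.

```python
# A hypothetical, explainable alternative to the 'gut' decision:
# score each candidate market on the factors named above and rank them.
# All weights and numbers are invented for illustration.

FACTOR_WEIGHTS = {
    "market_size": 0.40,          # overall size of the market
    "competitor_weakness": 0.35,  # inverse of existing competitors' strength
    "warehouse_proximity": 0.25,  # closeness to existing warehouses for transfers
}

# Each candidate is scored 0-10 on each factor (made-up numbers).
candidates = {
    "Denver":   {"market_size": 7, "competitor_weakness": 5, "warehouse_proximity": 8},
    "Atlanta":  {"market_size": 8, "competitor_weakness": 3, "warehouse_proximity": 4},
    "Portland": {"market_size": 5, "competitor_weakness": 7, "warehouse_proximity": 3},
}

def score(factors):
    return sum(FACTOR_WEIGHTS[name] * value for name, value in factors.items())

for city, factors in sorted(candidates.items(), key=lambda kv: -score(kv[1])):
    print(f"{city}: {score(factors):.2f}")
```

Every number in that ranking can be questioned and defended line by line – which is exactly the kind of explanation nobody in that office could have given for the decisions that were actually made.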

So perhaps I should put it this way: humans can follow very exact decision guidelines, but almost never do when:

  1. There is a short period of time to decide, or the decider is impatient
  2. There are many (probably more than 3) major factors to consider
  3. The decision is low value
  4. The person making the decision really doesn’t like to think

One simple thing I would like to point out is that AI systems are rarely employed in easy situations – if a problem were easy or simple, humans would already solve it without the expense or bother. Thus, when a human demands a simple explanation from an AI, they are usually in exactly the kind of situation where even a human would not be able to accurately list their reasoning either.

A last point to make is the difference in how we trust our own human black boxes. A person with more experience develops better internal heuristics – a better gut feeling – for what to do. They become a trusted black box, but environments change, and heuristics from forty years ago will make the wrong decision now. Our society often trusts an old white male’s gut where it would demand detailed accountability from a less privileged person.

As a confrontational person, I often end up making people desperately try to rationalize their decisions. This usually annoys me far more than anything they did that I disagreed with: the person made a ‘feel good’ decision, not one based on any relevant logical factors. I am generally making the point that they trust their ‘feel good’ systems too much – most people do. Underneath this, our society really doesn’t accept that the brain is a black box, a mystery engine. Instead of approaching their own thoughts with a proper layer of skepticism and awareness, people blindly assume rationality, and are desperate to stay that way. Ultimately, I believe most arguments I am in owe a lot to this basic misunderstanding – my making someone seem irrational reads as a grave insult, whereas really they need to understand that irrationality, or at least a black-box mystery, is the fact of their mind with which they need to better interact.

Of course, it’s all just shades of grey. Decisions always have a reason; the complexity just varies widely.
