It’s become increasingly difficult to ignore the exponential progress that’s been made in the field of artificial intelligence. From self-driving cars to nearly flawless speech synthesis, things most of us thought impossible only a decade ago are now a practical reality. Virtually all of these developments have exploited what has turned out to be one of the most fruitful analogies ever made: that of the human brain to a computer. In particular, the development of neural networks—arguably the most successful family of artificial intelligence models—was explicitly inspired by the structure and function of the brain.
For about a decade, we’ve exploited the brain/computer analogy by drawing inspiration from the brain to build better and better AI systems. But now that our technology has in many respects caught up to, and even exceeded, human performance, it’s worth asking the question in reverse: what insights can we borrow from artificial intelligence, to better understand our own brains and reasoning processes, and how they can go wrong? As it turns out, there are quite a few, and they go a long way toward explaining the breakdown of cross-partisan communication and unwillingness to engage with opposing viewpoints that have characterized our political arena in the age of #MAGA and #MeToo.
So let’s embrace the analogy, and think of our brains as computers built on a substrate of cells rather than silicon: they consume, process, and store data. There’s a limit to how much data they can store, however, and the amount of information they’re exposed to vastly exceeds that limit. Luckily for us, our brains have found a way around this problem, and it’s called data compression. Rather than remembering each and every noise that two politicians made during a political debate, for example, our brains distill the sounds they hear down to a reduced representation of the exchange, which we end up retaining as a “take-home message,” “lesson,” or “memory,” depending on the context. And these form the basis of our opinions.
In computer science and artificial intelligence, the process by which a highly complex body of information (every sound that reaches your ears during a one-hour political debate) is converted to a simple semantic representation (like, “candidate X won the debate, and I dislike candidate Y’s tie”) is called “dimensionality reduction.” Dimensionality reduction is what allows us to take controlled sips from the information firehose that’s pointed in our direction during every moment of our existence.
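In machine learning, dimensionality reduction is typically carried out with techniques like principal component analysis (PCA). Here’s a minimal sketch using NumPy; the data and dimensions are purely illustrative, standing in for the “debate” example above:

```python
import numpy as np

# Toy "debate" data: 5 observations described by 100 raw features
# (imagine each row is a moment of the debate, each column a raw signal).
rng = np.random.default_rng(0)
raw = rng.normal(size=(5, 100))

# PCA via SVD: project the 100-dimensional data down to 2 dimensions,
# keeping the directions of greatest variance and discarding the rest.
centered = raw - raw.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
reduced = centered @ Vt[:2].T  # shape (5, 2): the "take-home message"

print(reduced.shape)  # each observation is now just 2 numbers
```

The key property for the argument that follows: the projection is lossy. Once you’ve kept only the two summary numbers, most of the original 100 features cannot be recovered from them.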
The problem with dimensionality reduction is that it tends to leave us with “memories,” “lessons,” and “take-home messages,” but little or no ability to recall what facts those memories or lessons were really based on. Although we’d like to think that our opinions are formed by coherently adding together the raw data that we’ve observed, in practice we end up doing something much less rigorous. First, we encounter a fact, then we distill that fact into a memory or lesson, form an opinion, and then, in general, we forget the fact. We’re then unlikely to change this opinion even if the fact upon which it is based is subsequently disproven or updated, because the connection from fact to opinion was lost when dimensionality reduction occurred.
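The pathological pattern described above can be sketched as a toy data structure (the class, names, and logic here are illustrative, not a model of actual cognition): the raw fact is discarded once the opinion is formed, so a later correction to the fact has nothing to attach to.

```python
# A toy model of lossy opinion formation (illustrative names and logic).
class Brain:
    def __init__(self):
        # topic -> opinion; note the supporting fact is never stored
        self.opinions = {}

    def encounter_fact(self, topic, fact):
        # Distill the fact into an opinion, then discard the fact itself.
        opinion = "negative" if "tirade" in fact else "positive"
        self.opinions[topic] = opinion
        # `fact` goes out of scope here -- dimensionality reduction.

    def retract_fact(self, topic, fact):
        # The correction arrives, but there is no stored link from
        # fact to opinion, so nothing changes.
        pass

brain = Brain()
brain.encounter_fact("Jane Smith", "launched into an anti-human tirade")
print(brain.opinions["Jane Smith"])  # "negative"

brain.retract_fact("Jane Smith", "launched into an anti-human tirade")
print(brain.opinions["Jane Smith"])  # still "negative": the opinion survives
```

The bug, of course, is by design: `retract_fact` can’t update the opinion because the chain of evidence was thrown away in `encounter_fact`.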
We’re then left with only our nascent opinion, which we’re all too keen to reinforce thanks to confirmation bias. As we seek out sources of information that support our initial hunch, our opinion becomes more and more entrenched. This is part of the reason why political discussions can be so contentious: rather than argue about how to best add up the facts (which we generally don’t remember), it’s easier to lob opinions at one another in the hope of overpowering our peers by sheer force of will.
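Confirmation bias can be caricatured the same way (a sketch with made-up weights): evidence that agrees with the current hunch counts fully, while disagreeing evidence is discounted, so the belief drifts toward certainty even when the evidence is perfectly balanced.

```python
# Toy confirmation-bias update: confirming evidence counts fully,
# disconfirming evidence is heavily discounted (weights are illustrative).
def biased_update(belief, evidence, discount=0.1):
    agrees = (evidence > 0) == (belief > 0)
    weight = 1.0 if agrees else discount
    return belief + weight * evidence

belief = 0.5  # mild initial hunch in favor
mixed_evidence = [+1, -1, +1, -1, +1, -1]  # perfectly balanced evidence

for e in mixed_evidence:
    belief = biased_update(belief, e)

print(belief)  # 3.2 -- far more entrenched, despite balanced evidence
```

An unbiased updater (`discount=1.0`) fed the same sequence would end exactly where it started; the asymmetry alone produces the entrenchment.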
For example, imagine browsing your Twitter feed, and encountering an article with a headline proclaiming that “Jane Smith launched into an anti-human tirade during a recent campaign rally.” It won’t take your brain long to convert that raw data into an opinion. And that opinion, more likely than not, will be something like, “Jane Smith is a bad person.”
A few weeks pass, during which you hold on to your opinion about Jane Smith, but forget the original reason that you formed that view. Now imagine that your friend tells you they’re excited to vote for Jane Smith because they heard her speak, and found her stance to be very pro-human. You recoil: how could your friend support such a morally bankrupt human being? Dimensionality reduction has caused you to lose track of the fact that your opinion of Jane Smith was formed entirely as a result of a claim that has now been disputed. Rather than engage with the claim, you’re more likely to engage with the opinion. So concludes the full cycle of pathological opinion formation via dimensionality reduction: encounter fact, form opinion, forget fact, fail to update opinion when fact is challenged or disproven.
This cycle is all the more treacherous because we regularly form our views without reference to facts at all, and instead adopt the opinions of others in our peer group without doing our own homework. Because of dimensionality reduction, opinions formed in this way often seem just as important—and just as reliable—as those we form by examining actual facts ourselves. In essence, as we forget the process by which we came to form our opinions, we fail to treat them with due skepticism. If we’re lucky, this failure is pointed out to us eventually, our brains throw an error, and we find ourselves in the position of a college protester who realizes they don’t actually know what disagreements they have with the speaker they’ve been chanting slogans at all evening. If we’re unlucky, we go on with our lives none the wiser, and continue to build our picture of the world atop this shaky foundation.
Fortunately, there are measures that each of us can take to keep the excesses of dimensionality reduction in check. Unfortunately, they all require a potentially uncomfortable dose of humility. Dimensionality reduction guarantees that none of us is ever truly playing with a full deck. As a result, the certainty with which most of us hold onto our opinions is rarely justified. Recognizing this is our first and best defense against irrational stubbornness, and the risk of losing ourselves in a rabbit hole of confirmation bias.
At the end of the day, any good-faith effort to understand the world will require us to be vulnerable, and lay out the facts on which our opinions are based while inviting others to question their accuracy and completeness. This takes patience, and a willingness to challenge our innate tribal tendencies, and it’s no silver bullet. But it can allow us to move beyond quibbling over opinion, and onto a deeper and more substantive type of discussion, from which we stand to gain a better understanding of our own views, and the real reasons that we disagree with others.
Jeremie Harris is a physicist turned machine learning engineer. He is the co-founder of SharpestMinds, a Y Combinator-backed company that connects aspiring data scientists to professional developer mentors who invest in their future success. You can follow him on Twitter @jeremiecharris