Freshman Democratic congresswoman Alexandria Ocasio-Cortez is making headlines on a daily basis, because she’s simultaneously hugely popular among young progressives, and regularly enraging to conservatives and moderates. In just one of several headline moments from the past few days, Ocasio-Cortez appeared on an MLK Day broadcast with author Ta-Nehisi Coates, and declared that facial recognition technologies “always have these racial inequities that get translated, because algorithms are still made by human beings, and those algorithms are still pegged to basic human assumptions. They’re just automated. And automated assumptions—if you don’t fix the bias, then you’re just automating the bias.”
The comments were highlighted on Twitter by Ryan Saavedra, a reporter for the conservative Daily Wire, who commented: “Socialist Rep. Alexandria Ocasio-Cortez (D-NY) claims that algorithms, which are driven by math, are racist.”
Socialist Rep. Alexandria Ocasio-Cortez (D-NY) claims that algorithms, which are driven by math, are racist pic.twitter.com/X2veVvAU1H
— Ryan Saavedra (@RealSaavedra) January 22, 2019
The implication of Saavedra’s attempted rebuttal (which was followed by a classic double-down) is that mathematics constitutes a kind of objective truth, unswayed by the flawed nature of human judgment. That’s a notion that’s quite widespread in crypto circles, though not usually directly linked to the issue of racism. It’s also total horseshit, and deeply dangerous.
Anyone with even a tangential familiarity with computer science will recognize that Saavedra’s counterpoint is based on a fundamental misunderstanding of the nature of computers in general, and artificial intelligence in particular. Rather than emotionless, all-seeing eyes, machine-learning algorithms are better thought of as blind, deaf, silicon-based infants, being fed information about how humans think until they can crudely imitate those patterns.
Both the data used to train these systems, and the training practices themselves, can pass on human biases about race. Some varieties of the facial-recognition software Ocasio-Cortez was referring to, for example, have been found by MIT and Microsoft researchers to misidentify dark-skinned people at vastly higher rates than light-skinned people. That’s a near-perfect analogue of white people’s tendency to misidentify people of color, leading to higher rates of false arrest and conviction.
It’s hard to describe that as anything other than “a racist algorithm,” but humans are ultimately to blame—apparently the flaw came about because the data sets used to train the software had far more light-skinned males than members of other groups. Bias can find its way into algorithms in many different ways, though: another great example (highlighted by our new intern Benjamin Powers) was a recruiting tool built by Amazon that simply reproduced human recruiters’ bias toward male job-seekers, because it was trained entirely on hiring selections made by humans who favored male job-seekers.
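To make the mechanism concrete, here’s a toy sketch—invented numbers, no real system or library involved. A face-verification “match threshold” is tuned to minimize error on a training set that is 90 percent group A, whose hypothetical feature extractor separates group A faces well but represents group B faces poorly. The learned threshold works for the majority group and misfires on the minority one:

```python
import random

random.seed(1)

def pairs(group, n):
    # Toy embedding distances for face pairs. The imagined "feature extractor"
    # separates group A well (impostor pairs look far apart) but represents
    # group B poorly (impostor pairs look close together).
    out = []
    for _ in range(n):
        out.append((max(0.0, random.gauss(0.20, 0.05)), 1))  # genuine pair -> label 1
        imp_mean = 0.80 if group == "A" else 0.35
        out.append((max(0.0, random.gauss(imp_mean, 0.10)), 0))  # impostor pair -> label 0
    return out

# Skewed training data: 90% group A, 10% group B.
train = pairs("A", 900) + pairs("B", 100)

def errors(data, t):
    # Predict "same person" whenever the distance is below threshold t.
    return sum((d < t) != (y == 1) for d, y in data)

# "Training": pick the threshold that minimizes error on the skewed set.
threshold = min((t / 100 for t in range(101)), key=lambda t: errors(train, t))

test_a, test_b = pairs("A", 1000), pairs("B", 1000)
rate_a = errors(test_a, threshold) / len(test_a)
rate_b = errors(test_b, threshold) / len(test_b)
print(f"group A error rate: {rate_a:.1%}")
print(f"group B error rate: {rate_b:.1%}")
```

Nothing in the threshold-fitting step mentions group membership at all—the disparity comes entirely from whose faces dominated the training data, which is exactly the pattern the MIT and Microsoft researchers reported.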
And it’s not all about race and gender. Saavedra was quickly confronted with clear examples of how right Ocasio-Cortez was . . . directly from his own Twitter feed. Saavedra had repeatedly complained about supposed bias in social media algorithms, including tweeting that “tech companies tend to be liberal & something is off with their algorithms because they won’t show a lot of content I find by manual search.” There’s not much evidence for this claim of anti-conservative algorithmic bias—content algorithms seem to favor extremism in general more than any particular ideology. But Saavedra is clearly happy to echo Ocasio-Cortez’s points about algorithmic bias when it suits his narrative.
In short, algorithms aimed at making subtle decisions involving human beings are, mostly, really good at repeating human errors, very fast, many times over.
This matters immensely for blockchain entrepreneurs, whose rhetoric often implicitly echoes the idea that math is inherently infallible. That’s perhaps best summed up by the mantra that “code is law” among smart-contracts advocates, a phrase which threatens to conflate the inflexibility of blockchain transactions with infallibility—two extremely different things. That blurring might be made easier because crypto is premised on a particular case where code is, in a very limited sense, infallible—cryptography. Cryptography doesn’t involve any real judgment or decision-making, just numerical matching: A hash or a private key is either right or wrong.
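That limited sense of infallibility is easy to see with Python’s standard library (a generic SHA-256 sketch, not any particular blockchain’s code): verification is pure yes-or-no matching, with no room for judgment.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    # SHA-256 digest as a 64-character hex string.
    return hashlib.sha256(data).hexdigest()

# Commit to a message (the message itself is a made-up example).
commitment = sha256_hex(b"send 5 coins to alice")

# Verification is a pure binary comparison: the exact same input matches...
assert sha256_hex(b"send 5 coins to alice") == commitment

# ...and changing a single character produces a completely different digest.
assert sha256_hex(b"send 6 coins to alice") != commitment
```

There is no gray area here for bias to creep into—but that’s precisely because the hash function decides nothing about people. The moment code starts making judgments about humans, that guarantee evaporates.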
But that clarity fails pretty quickly as you move into more complex realms of human life, and crypto and blockchain efforts are aimed at some areas where bias is a clear risk, most obviously finance. Loan and credit decisions have historically been highly subject to human racial and gender bias, and there’s concern that the existing credit-rating system may contain algorithmic elements that perpetuate racial bias. If crypto hopes to break down the power imbalances built into the legacy banking system, that seems like a decent place to put some energy.
It’s also worth noting that there’s something much deeper to the specifically conservative yearning for a perfect, neutral, algorithmic decision-maker. The nature of that yearning may be best captured in the writing of Eliezer Yudkowsky, an AI theorist and philosopher. Yudkowsky has for more than a decade pursued the possibility of perfect human reasoning, while also believing that we will inevitably create an all-knowing artificial intelligence that will become a sort of super-rational God.
Saavedra, with his tweet about how algorithms “driven by math” can’t be racist, was echoing Yudkowsky’s unending faith in the perfectibility of reason and reasoning technology. It turns out that the possibility that ethics can be computerized and formalized is pretty appealing to arch-conservatives, and Yudkowsky’s thought constitutes a core pillar for the Silicon Valley wing of neo-reactionaries who often refer to themselves as “rationalists.”
Yudkowsky’s rationalism is also, notoriously, fatally flawed: His system of coldly logical reason, it turned out, was by many accounts completely undone by a logical paradox known as Roko’s Basilisk. The thought experiment is too complicated to unpack here, but boiled down to the supposedly inescapable conclusion that any super-intelligent AI that arrives in the future, even a “friendly” one, would endlessly torture a simulated copy of any human who hadn’t worked to create it.
That’s a hell of a conclusion for a system of thought aimed at creating a future rationalist utopia. For super-nerd bonus points, it’s also arguably a spin on Gödel’s incompleteness theorems, which show that no sufficiently powerful formal system can be both complete and consistent, and that no such system can prove its own consistency. The whole thing is worth reading about in the excellent book Neoreaction, A Basilisk. But the takeaway is clear enough: Just because something is expressed in numbers doesn’t make it right, or even rational. And rationality itself might not be quite what it’s cracked up to be.