It was with a strangely deflated feeling in his gut that Harvard biologist Mohammed AlQuraishi made his way to Cancún for a scientific conference in December. Strange because a major advance had just been made in his field, something that might normally make him happy. Deflated because the advance hadn’t been made by him or by any of his fellow academic researchers. It had been made by a machine.
DeepMind, an AI company that Google acquired in 2014, had outperformed all the researchers who’d submitted entries to the Critical Assessment of protein Structure Prediction (CASP), a biennial competition in which research groups pit their prediction methods against one another.
Every two years, researchers working on one of the biggest puzzles in biochemistry, known as the protein folding problem, test their predictive powers by forecasting the 3D shapes that certain proteins will fold into. That might seem like an odd thing to predict, but a protein’s shape determines its function in the body, which is why structure prediction is crucial for how scientists develop new drugs. DeepMind has published a clear explanation of how this works.
By harnessing the power of machine learning, DeepMind won the CASP contest by a huge margin. What this advance represents for both biochemistry and machine learning is fascinating and important.
In a blog post written after the conference, AlQuraishi describes the gamut of emotions he experienced at CASP. He discusses his initial melancholy — he felt like he and the other academics had been made obsolete — and how he ultimately overrode that feeling “as my tribal reflexes gave way to a cooler and more rational assessment of the value of scientific progress.” His meditation struck me as important, because if we’re going to solve high-impact problems, we need to find a way to be psychologically open to the notion that our own minds will not always be the best tools.
I spoke to AlQuraishi about how researchers can cope with the fact that AI is changing their scientific fields — and may even change our perception of science itself. A transcript of our conversation, lightly edited for length and clarity, follows.
You wrote a very personal blog post where you describe feeling a gamut of different emotions from the moment you headed to the conference to the moment you left. Can you walk me through those emotions?
So for people who are in the field, the results actually came out two days before the conference began. We could all go online and look. I was surprised because I wasn’t expecting [DeepMind] to do so well. I was also disappointed because I was participating [in the contest] too, and I didn’t do so well. There was that emotion, disappointment, because I had a personal stake in the matter.
And then over the next couple days, it dawned on me that this is a field that people have been working in for decades. The fact that a new group could come in and do so well, so quickly — I felt bad because it demonstrated the structural inefficiency of academia. I felt really bad for the people who’ve been in this even longer than I have. So there was a feeling of solidarity with some of the other academic groups.
It then became more a feeling of, okay, we should look at this from a different perspective: This is great, it’s going to bring attention to the [protein folding] issue.
Do you think there was a psychological drive propelling your colleagues to want to undersell DeepMind’s contribution because it was displacing them?
I think if it were one of the established academic groups [making the advance], people would’ve said, “It’s no surprise, we always knew this group would do well.” But the fact that it was coming from outside, with this machinery … that probably created some level of resentment, yes.
But the DeepMind team was pretty open in sharing their insights, and from my perspective, the fact that there’s a new group in the field — it’s all for the better. The point is to compete to do better science, not to claim credit.
AI advances are sometimes overhyped, provoking reactions that the actual situation doesn’t merit, whether too much optimism or too much pessimism. Do you think people walked away from the conference with realistic expectations for what AI can contribute to the field in the future?
The lay coverage was a little too optimistic, too enthusiastic. In the scientific community, it’s hard to say. I myself keep vacillating. Crystal-balling is a difficult thing.
I place myself in the camp of being fairly bullish on machine learning. If you look at the history of science, it’s rare to see sustained improvement over a long period of time. What we’ve seen over the last six years or so in machine learning is a once-in-a-generation thing, a genuine advance of the first order, comparable to the big intellectual revolutions.
It strikes me that there are established economies of prestige in academia. How do you think machine learning advances will change the prestige economy that we’re used to?
[Laughs] That’s an interesting question. [Long pause] One version is to say, “This is going to make it such that being able to make sense of data will be more important, will increase in prestige.” I think that’s reasonable to expect. We’ve had this tendency as a field to be very obsessed with data collection. The papers that end up in the most prestigious venues tend to be the ones that collect very large data sets. There’s far less prestige associated with conceptual papers or papers that provide some new analytical insight.
From my perspective, if there were a shift from the data collection exercise to the analysis exercise, I think that’d be a good thing in a way. In a lot of sciences, we’ve focused too much on data and not enough on understanding.
That’s reminiscent of the conversation I hear about AI among my fellow journalists. Just recently news reports came out about how AI is writing articles — one-third of the articles at Bloomberg are written with the help of AI. People always say, don’t worry, it’ll be a good thing because that’ll free up the journalists to do deeper thinking on more nuanced issues rather than focusing on the “who, what, where, when, why” — so there’s a funny parallel there.
Yeah, absolutely. And I actually think, looking further ahead, this will change our whole perception of what science is. Going away from human-conceived theories and models of natural phenomena to more data-driven methods and models.
I think it’s a little silly to think we’re always going to be the smartest creatures on the planet. We’re going to be out-thought by machines eventually. In the interim, I suspect that what we’ll see in our modeling of natural phenomena is that it’ll increasingly be built by machines.
What’s interesting is, when we cross this threshold (and in some ways, we already have), we’ll get to a point where the models we build are entirely incomprehensible to us. This will raise questions about the nature of the scientific enterprise. What is it that we mean by “doing science”? Is science about understanding natural phenomena, or about building mathematical models that’ll predict what will happen?
That’s so fascinating, that this could actually upend what we understand science to be.
Given that kind of development, can we do a little thought experiment? Let’s say a smart undergraduate student with loads of potential comes to you tomorrow and says, “Hey, professor, I’m thinking of dedicating my career to researching protein structure prediction.” At this point, would you advise her against it? Would you tell her to leave the academy and go work for DeepMind?
I would encourage her to become fluent in machine learning and computing more broadly, because that’s going to be critical over the next few decades. It’s going to be one of the most important skill sets to have, irrespective of what phenomena you want to study. In terms of whether you stay in academia or go to DeepMind or elsewhere, I think that’ll probably be driven by the person’s motivation. If you’re really keen on solving a problem, then an industrial lab is the way to do it. If you’re more curiosity-oriented, if you want to tackle whatever topic draws your fancy at a given point in time, then perhaps academia is still the best place because you’re more independent.
I think there’s also a class dimension at work here, right? Someone who’s highly trained, who has highly specialized knowledge, can potentially retrain or adapt the focus of their work so they’re not competing directly against AI. Do you think it’s easier for you than for, say, a factory worker to override the gut-level fear about being made obsolete?
Absolutely. Somebody who’s an academic who knows about machine learning today has pretty high job security. Somebody who’s doing certain other kinds of jobs, like truck-driving — I think if your job is less secure, absolutely it’s a completely different consideration.
At one point, a lot of people assumed there was a hierarchy among jobs: intellectual jobs would be the last to be replaced and mechanical jobs the first. But that’s actually unclear. It may well be that mechanical jobs take a long time to replace, because it’s hard to build robots that can perform certain physical gestures, while jobs in the higher echelons of intellect are replaced more quickly.
In these conversations about automation there’s always the worry about people losing their incomes, but there’s also a worry about the felt loss of meaning — the meaning we derive from the work that we do.
Is that something you worry about — scientists feeling they derive less meaning from their work as machine learning makes more inroads in the field? Or is it not an issue to you because machine learning will just make the science advance faster and then we can all derive meaning from that?
I think both things are true. A lot of scientists judge themselves based on how smart they are, how quickly they can solve a problem. So much social currency is based on that. And we compare ourselves to other people and we feel like imposters. So this could lead to a kind of widespread existential crisis.
In the short term, I think this will be destructive. Profit motives will end up rendering a bunch of people obsolete. Maybe in the medium term, there’ll be some ways to adapt. We’ll think of ways to change society to define our value and identity in different ways. But I also suspect that in the very long run, there’ll be some form of augmentation or a cyborg scenario where machines and humans are integrated in some way.
As we move into the longer time frame, we’re going to start to have to confront some of these broader questions. It behooves us, and particularly the machine learning community, to start communicating about the potential implications and thinking through ways we could smoothly transition to an era where machines are smarter than humans.
I think it’s about equity and making sure we end up in a world we’re happy to live in.