With much of our attention focused on the rise of advanced artificial intelligence, few consider the potential for radically amplified human intelligence, or intelligence amplification (IA). It’s an open question as to which will come first, but a technologically boosted brain could be just as powerful — and just as dangerous — as AI.
As a species, we’ve been amplifying our brains for millennia. Or at least we’ve tried to. Looking to overcome our cognitive limitations, humans have employed everything from writing, language, and meditative techniques straight through to today’s nootropics. But none of these compare to what’s in store.
Unlike efforts to develop artificial general intelligence (AGI), or even an artificial superintelligence (SAI), the human brain already presents us with a pre-existing intelligence to work with. Radically extending the abilities of a pre-existing human mind — whether it be through genetics, cybernetics, or the integration of external devices — could result in something quite similar to how we envision advanced AI.
Looking to learn more about this, I contacted futurist Michael Anissimov, a blogger at Accelerating Future and a co-organizer of the Singularity Summit. He’s given this subject considerable thought — and warns that we need to be just as wary of IA as we are of AI.
Michael, when we speak of Intelligence Amplification, what are we really talking about? Are we looking to create Einsteins? Or is it something significantly more profound?
The real objective of IA is to create super-Einsteins, persons qualitatively smarter than any human being that has ever lived. There will be a number of steps on the way there.
The first step will be to create a direct neural link to information. Think of it as a "telepathic Google."
The next step will be to develop brain-computer interfaces that augment the visual cortex, the best-understood part of the brain. This would boost our spatial visualization and manipulation capabilities. Imagine being able to visualize a complex blueprint with high reliability and detail, or to learn new blueprints quickly. There will also be augmentations that focus on other portions of the sensory cortex, like the tactile and auditory cortices.
The third step involves the genuine augmentation of the prefrontal cortex. This is the Holy Grail of IA research — enhancing the way we combine perceptual data to form concepts. The end result would be cognitive super-MacGyvers, people who perform apparently impossible intellectual feats: controlling other people's minds, beating the stock market, or designing inventions that change the world almost overnight. This seems impossible to us now in the same way that all our modern scientific achievements would have seemed impossible to a stone age human — but the possibility is real.
For it to be otherwise would require that there is some mysterious metaphysical ceiling on qualitative intelligence that miraculously exists at just above the human level. Given that mankind was the first generally intelligent organism to evolve on this planet, that seems highly implausible. We shouldn't expect version one to be the final version, any more than we should have expected the Model T to be the fastest car ever built.
Looking ahead to the next few decades, how could IA come about? Is the human brain really that fungible?
The human brain is not really that fungible. It is the product of more than seven million years of evolutionary optimization and fine-tuning, which is to say that it's already highly optimized given its inherent constraints. Attempts to overclock it usually cause it to break, as demonstrated by the horrific effects of amphetamine addiction.
Chemicals are not targeted enough to produce big gains in human cognitive performance. The evidence for the effectiveness of current "brain-enhancing drugs" is extremely sketchy. To achieve real strides will require brain implants with connections to millions of neurons — millions of tiny electrodes, plus a control system to synchronize them all. The current state-of-the-art brain-computer interfaces have around 1,000 connections, so current devices need to be scaled up by more than 1,000 times to get anywhere interesting. Even if you assume exponential improvement, it will be a while before this is possible — at least 15 to 20 years.
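The arithmetic behind that estimate can be sketched in a few lines. The doubling times below are my own assumptions, not figures from the interview; they simply show what growth rate the 15-to-20-year horizon implies:

```python
import math

# Scaling from ~1,000 electrode connections to ~1,000,000 is a factor
# of 1,000 -- about 10 doublings, since 2**10 = 1,024.
current_connections = 1_000
target_connections = 1_000_000

doublings = math.log2(target_connections / current_connections)
print(f"doublings needed: {doublings:.1f}")  # ~10

# The 15-20 year horizon quoted above implies a doubling time of roughly
# 1.5-2 years (an assumed, Moore's-law-like rate):
for doubling_time in (1.5, 2.0):
    years = doublings * doubling_time
    print(f"doubling every {doubling_time} yr -> ~{years:.0f} years")
```

In other words, the interview's timeline amounts to betting that connection counts double every couple of years; a slower empirical rate would push the date out considerably.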
Improvement in IA rests upon progress in nano-manufacturing. Brain-computer interface engineers, like Ed Boyden at MIT, depend upon improvements in manufacturing to build these devices. Manufacturing is the linchpin on which everything else depends. Given that there is very little development of atomically-precise manufacturing technologies, nanoscale self-assembly seems like the most likely route to million-electrode brain-computer interfaces. Nanoscale self-assembly is not atomically precise, but it's precise by the standards of bulk manufacturing and photolithography.
What potential psychological side-effects may emerge from a radically enhanced human? Would they even be considered a human at this point?
One of the most salient side effects would be insanity. The human brain is an extremely fine-tuned and calibrated machine. Most perturbations to this tuning qualify as what we would consider "crazy." There are many different types of insanity, far more than there are types of sanity. From the inside, insanity seems perfectly sane, so we'd probably have a lot of trouble convincing these people they are insane.
Even in the case of perfect sanity, side effects might include seizures, information overload, and possibly feelings of egomania or extreme alienation. Smart people tend to feel comparatively more alienated in the world, and for a being smarter than everyone, the effect would be greatly amplified.
Most very smart people are not jovial and sociable like Richard Feynman. Hemingway said, "An intelligent man is sometimes forced to be drunk to spend time with his fools." What if drunkenness were not enough to instill camaraderie and mutual affection? There could be a clean "empathy break" that leads to psychopathy.
So which will come first? AI or IA?
It's very difficult to predict either. There is a tremendous bias for wanting IA to come first, because of all the fun movies and video games with intelligence-enhanced protagonists. It's important to recognize that this bias in favor of IA does not in fact influence the actual technological difficulty of the approach. My guess is that AI will come first because development is so much cheaper and cleaner.
Both endeavors are extremely difficult. They may not come to pass until the 2060s, 2070s, or later. Eventually, however, they must both come to pass — there's nothing magical about intelligence, and the demand for its enhancement is enormous. It would require nothing less than a global totalitarian Luddite dictatorship to hold either back for the long term.
What are the advantages and disadvantages to the two different developmental approaches?
The primary advantage of the AI route is that it is immeasurably cheaper and easier to do research. AI is developed on paper and in code. Most useful IA research, on the other hand, is illegal. Serious IA would require deep neurosurgery and experimental brain implants. These brain implants may malfunction, causing seizures, insanity, or death. Enhancing human intelligence in a qualitative way is not a matter of popping a few pills — you really need to develop brain implants to get any significant returns.
Most research in that area is heavily regulated and expensive. All animal testing is expensive. Theodore Berger has been working on a hippocampal implant for a number of years — and in 2004 it passed a live tissue test, but there has been very little news since then. Every few years he pops up in the media and says it's just around the corner, but I'm skeptical. Meanwhile, there is a lot of intriguing progress in Artificial Intelligence.
Does IA have the potential to be safer than AI as far as predictability and controllability are concerned? Is it important that we develop IA before super-powerful AGI?
Intelligence Augmentation is much more unpredictable and uncontrollable than AGI has the potential to be. It's actually quite dangerous, in the long term. I recently wrote an article that speculates on global political transformation caused by a large amount of power concentrated in the hands of a small group due to "miracle technologies" like IA or molecular manufacturing. I also coined the term "Maximillian," meaning "the best," to refer to a powerful leader making use of intelligence enhancement technology to put himself in an unassailable position.
Image: The cognitively enhanced Reginald Barclay from the ST:TNG episode, "The Nth Degree."
The problem with IA is that you are dealing with human beings, and human beings are flawed. People with enhanced intelligence could still have a merely human-level morality, leveraging their vast intellects for hedonistic or even genocidal purposes.
AGI, on the other hand, can be built from the ground up to simply follow a set of intrinsic motivations that are benevolent, stable, and self-reinforcing.
People say, "Won't it reject those motivations?" It won't, because those motivations will make up its entire core of values — if it's programmed properly. There will be no "ghost in the machine" to emerge and overthrow its programmed motives. Philosopher Nick Bostrom does an excellent analysis of this in his paper "The Superintelligent Will". The key point is that selfish motivations will not magically emerge if an AI has a goal system that is fundamentally selfless, if the very essence of its being is devoted to preserving that selflessness. Evolution produced self-interested organisms because of evolutionary design constraints, but that doesn't mean we can't code selfless agents de novo.
What roadblocks, be they technological, medical, or ethical, do you see hindering development?
The biggest roadblock is developing the appropriate manufacturing technology. Right now, we aren't even close.
Another roadblock is figuring out what exactly each neuron does, and identifying the exact positions of these neurons in individual people. Again, we're not even close.
Thirdly, we need some way to quickly test extremely fine-grained theories of brain function — what Ed Boyden calls "high throughput circuit screening" of neural circuits. The best way to do this would be to somehow create a human being without consciousness and experiment on them to our heart's content, but I have a feeling that idea might not go over so well with ethics committees.
Absent that, we'd need an extremely high-resolution simulation of the human brain. Contrary to hype surrounding "brain simulation" projects today, such a high-resolution simulation is not likely to be developed until the 2050-2080 timeframe. An Oxford analysis picks a median date of around 2080. That sounds a bit conservative to me, but in the right ballpark.
Top image: imredesiuk/shutterstock.