Last month, some of the biggest names in technology signed a pledge promising not to develop lethal autonomous weapons. Coming just after the recent employee-led protest over Google’s Project Maven, some have praised these initiatives as ethical and moral victories. Some, but not all. For Sandro Gaycken, a senior advisor to Nato, such initiatives are supremely complacent and risk granting authoritarian states an asymmetric advantage. “These naive hippy developers from Silicon Valley don’t understand – the CIA should force them,” says Gaycken, founder of the Digital Society Institute at ESMT, a Berlin-based business school.
Gaycken’s hard advice reveals a schism emerging in the future development of AI for military purposes. On one side are those who believe pursuing the development of military AI will lead to an unstoppable arms race. On the other, people like Gaycken believe the AI arms race has already begun. For them, prohibiting AI research for military purposes will not lead to peace but will give the upper hand to authoritarian systems. Therefore, if the West wants to stay in the lead, it needs to unify around a concerted strategy. “Within most military and intelligence organisations it’s a concern,” Gaycken argues. “And it’s about to become a much larger concern.”
If this race has already begun, the stakes are significant. AI’s pattern recognition capabilities, ability to judge and weigh probabilities and make sense of large amounts of data at speed, could confer numerous advantages to military and intelligence organisations. “Ultimately war is about decision-making. And AI, above all else, is a decision-making technology,” says Kenneth Payne from the defence studies department at King’s College London.
Machine learning tools will likely be applied across the whole spectrum of military operations, from improved strategic thinking down to low-level tactical applications, such as controlling swarms of autonomous unmanned weapons systems. However, some of the most influential applications in a military theatre could be felt away from the battlefield. “It will play a key part in logistics, in the provisioning of armies to fight,” Payne says. “It may even play a part in weapons design: thinking about and testing what sort of weapons are likely to perform well in conflict with other AI weapons.”
The other domain in which AI is likely to play a decisive role is in future cyber conflict. Elsa Kania, an expert in Chinese military strategy, believes that machine learning will prove an essential tool in achieving “superiority across the electromagnetic spectrum”. Developing faster, more insightful AI could enable one side to enhance the communications and situational awareness of their forces. It could also enable them to disrupt, degrade and deny those of their adversary.
Developing superior AI cyber weapons will enable one side to identify and exploit computational weaknesses within an adversary’s ICT infrastructure. From a military perspective, this opens up a great deal of creativity. “You could attack a military command and control centre, you could attack military vehicles, weapons systems and platforms, you could attack entire battleships and even drones,” says Gaycken.
As AI cyber weapons move beyond speculation, militaries are beginning to formulate methods for their tactical and strategic use. Nato recently released a paper which laid out the theoretical framework for “AI Cyber hunters” – defensive AI agents which patrol friendly systems and detect enemy malware. Offensive AI cyber weapons are already in development, but in Gaycken’s opinion they are still rudimentary. Nevertheless, the advantages that superior AI grants mean that nations are trying to gain dominance in this area. But how exactly do we measure AI power? And is it clear who is winning?
Trying to measure AI power is no easy task. The tools, technologies and know-how are all “dual-use” – they lie scattered across civilian and military spheres, in locations around the world. Understanding a country’s relative AI power requires a deep knowledge of both the public and private sectors, with information often classified or deliberately misleading. Gaycken bemoans the commercial hype surrounding machine learning that, at the moment, makes it almost impossible to determine true capabilities.
Even if accurate information can be obtained, knowing how these capabilities might be deployed in a conflict scenario remains a mystery. Military power is only truly understood in real conflict scenarios. “War has a way of surprising you and exposing exactly how good your AI and all the money you’ve invested in it actually is,” Payne cautions. “It’s hard to know that before the fact.” Just as the battleship was unexpectedly stripped of its superweapon status by the aircraft carrier during the Second World War, the real deployment of military AI could lead to completely unexpected results.
Despite these uncertainties, experts are trying to gain a rough understanding of AI strengths and weaknesses. “What we have in reality is a sort of mixed, messy development in AI,” says Gaycken. “It’s not an equal, linear development, where everybody is getting equally good in all different fields or the same fields.”
The US is still considered to be at the forefront of AI research, leading the way in industrial and military applications of AI. The country retains dominance across many of the standard metrics used as proxies to evaluate AI power, particularly intellectual talent, research breakthroughs and superior hardware. But despite US dominance, China’s strategic investments and vision, enshrined in the government’s ten-year plan, are enabling it to catch up at a rapid pace. “China is rapidly emerging as an AI powerhouse,” says Elsa Kania, adjunct fellow with the Center for a New American Security’s technology and national security program, a Washington DC-based think-tank.
From a military perspective, China is using AI to develop a range of unmanned aerial, ground, surface, and underwater vehicles that are becoming increasingly autonomous. It is also attempting to use AI as a way to get around one of its serious military disadvantages – a lack of real conflict experience. “The People’s Liberation Army (PLA) is also focused on the potential of AI in war-gaming, simulations, and realistic training that could help to compensate for its lack of actual combat training,” says Kania.
Then there is the use of AI in disrupting and degrading adversary communications. Given its consistent concentration on electronic warfare as the workhorse of its information operations capabilities, Kania believes that the PLA is also likely prioritising cognitive electronic warfare capabilities. And China is not the only nation pioneering the use of AI in information operations. “We know that the Russians are very good at AI combined with information warfare,” says Gaycken.
The most important components in achieving superior AI are data and talent. Access to new, large, structured datasets will give one side’s AI a considerable advantage over an adversary. The more consolidated and complete one side’s data becomes, the greater potential their AI has to make deeper and more accurate inferences. The quality and newness of this data are also crucial and depend upon where it is from and how accurately it has been labelled.
The size of datasets is particularly important. “You have to have a lot of data, you have to be able to structure it and you have to be able to understand the causal relations from the simple statistical correlations or common-cause correlations,” says Gaycken. Put simply, more data makes it easier for machine learning systems to distinguish genuine causal relationships from those that have arisen by chance.
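Gaycken’s point about scale can be illustrated with a quick simulation (a hypothetical sketch, not drawn from the article): take two variables that are genuinely independent, so any correlation between them is pure chance. With small samples, strikingly large spurious correlations appear routinely; with large samples, they shrink towards zero, making real relationships easier to tell apart from noise.

```python
import random

def correlation(xs, ys):
    # Pearson correlation coefficient, computed from scratch
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def spurious_r(n):
    # Two independent variables: any correlation is chance
    xs = [random.gauss(0, 1) for _ in range(n)]
    ys = [random.gauss(0, 1) for _ in range(n)]
    return abs(correlation(xs, ys))

random.seed(0)
small = [spurious_r(20) for _ in range(100)]      # 100 tiny datasets
large = [spurious_r(10_000) for _ in range(100)]  # 100 large datasets

print(f"worst spurious |r|, 20 points:     {max(small):.2f}")
print(f"worst spurious |r|, 10,000 points: {max(large):.2f}")
```

With 20 data points, chance alone can produce correlations that look meaningful; with 10,000, the same chance effects all but vanish.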
In terms of access to consolidated data, China has an advantage. As the Chinese AI expert Kai-Fu Lee summarised in a recent report, China has 1.39 billion mobile phone and internet users – three times more than in the US and India. Chinese citizens also use their mobile phone to pay for goods 50 times more often than Americans.
And China’s data superiority shows no sign of waning. National infrastructure is being designed to maximise the amount of data creation, capture and analysis. Nationwide programmes like the Social Credit System will add to an already vast, centralised trove of data. And the close relationship between the private sector, government, military and intelligence communities will make the sharing of data much easier – not to mention the relative absence of privacy concerns.
In contrast, the picture in the US, UK and EU is less centralised and more fragmented. Large US technology companies like Google, Facebook, Amazon and Apple have access to huge amounts of data. However, they protect it fiercely. Unlike in China, cooperation between organisations, whether public or private, is much harder to insist upon.
It’s here that the West’s fragmented startup ecosystem may present some drawbacks. An ecosystem of many small AI companies can help foster competition, a plurality of opinion and innovation. However, the competitive divisions which exist between these companies – and their reluctance to share data – make for patchy and fragmented data-sharing. For developing stronger AI, Gaycken suggests that the startup ecosystem is not the optimal solution. “Startups have to be embedded into large corporate structures, to have access to the kind of data they require, to build high-quality AI,” he argues.
Just as important as the amount and quality of data are the brains and engineering talent needed to make sense of it. “You have to have the brains to work on the customisation and the improvement of the algorithms to fit to the specific vertical where you want to apply it,” says Gaycken.
When it comes to engineering talent, America leads the way, followed by the UK, Canada and some parts of the EU. Kai-Fu Lee’s recent report claims that Google employs as many as half of the world’s top 100 AI scientists, working across Google Brain, Google Cloud and DeepMind. Much of this talent is spread throughout the US, UK, Canada and Europe. “There’s a limited pool of AI talent out there and where does that talent want to go to work? It wants to go to San Francisco, London, Toronto and Paris,” says Payne.
In China, the government is making strategic investments to create a new generation of home-grown computer and data scientists. President Xi Jinping has invested considerable capital in overhauling China’s education system and placed great emphasis on mastery of STEM subjects (in 2013 Shanghai’s students ranked first in the OECD’s PISA tests) as well as a new curriculum which emphasises creative thinking, teamwork and innovation.
There are also clear differences in how talent can be utilised in more authoritarian systems. The command and control economies of authoritarian countries can compel citizens, experts and scientists to work for the military. “Where you require very good brains to understand what is going on and to find your niche, to find specific weaknesses and build specific strengths – in those countries they simply force the good guys to work for them,” Gaycken explains.
Another practical challenge that Western militaries face is the competition for rare talent with the private sector. “Not even the defence industry is able to compete with the IT industry,” Gaycken says. In the US, graduates with PhDs in machine learning are taking home salaries of between $300,000 and $500,000. And giant technology companies like Amazon, Uber and Google are renowned for raiding the machine learning and robotics departments of top universities.
Measuring financial investment in machine learning R&D can also be used as a proxy to estimate AI capabilities. But, as Kenneth Payne argues, “money is a pretty crude indicator”. Financial investment can reveal intent but not necessarily capability. It’s difficult to tell how well money is being spent and whether investment is being used to fund longer-term fundamental research or to achieve short-term commercial gains.
Looking at publicly available data, China is setting the pace when it comes to public investment. The government’s strategic investment programme has grown from just over $5 billion in 2008 to $27 billion in 2017. As Kai-Fu Lee notes, there has also been a large increase in private sector investment, rising from just under $5 billion in 2014 to over $25 billion in 2017. Much of this has flowed into China’s dominant internet companies, including Baidu, Tencent and Alibaba. However, it is also supporting a rapidly growing startup ecosystem, which includes companies such as Face++, iFlyTek, DJI and 4th Paradigm. Investment levels in the US, UK and EU are also growing. But public money is not matching levels of private investment.
A more accurate proxy to understanding capability is to explore the number and quality of research breakthroughs. Here the US still leads the way, followed by the UK. “It’s Google, it’s DeepMind that have been making some of the big running in their decision-making, in computer games or in their ability to convincingly manipulate video for example,” says Payne.
But China is closing the gap. According to one study, the number of papers by Chinese authors in the top 100 AI journals and conferences increased 43 per cent between 2006 and 2015. Citations of those papers went up 55 per cent during the same period.
The US still leads the way in the development of AI hardware. Led by companies like Nvidia, Intel, Altera and AMD, it retains the edge when it comes to designing and developing AI chips. As Kai-Fu Lee explains, these companies “have a major advantage over these Chinese startups in terms of intellectual property, manpower, resources and industry experience”. But whether through commercial acquisition, domestic innovation or theft, the Chinese are looking to address these weaknesses.
Some of these efforts may have already begun to pay off. Just last month, Baidu announced the creation of a new AI chip. The capabilities of this chip have yet to be revealed and it is not yet ready for manufacture. However, this announcement signals that China’s focus on AI hardware is meaningful. Kania believes that if the Chinese are able to overcome their persistent difficulties in the semiconductor industry and design truly indigenous AI chips, this would represent “a key inflection point” in the race for AI dominance.
Beyond innovation, theft is another important tactic in the race for AI superiority. Gaycken believes that the theft of intellectual property is occurring at numerous levels: “Stealing certain improvements in the environment, improvements in sensor data, improvements in the speed and quality of computing – everything that is implementing and configuring AIs and that customizes AIs for specific verticals – there’s very, very strong interest from intelligence agencies around the world.” Theft is also focused on talent resources. “The targeted recruitment of talent, particularly researchers with tacit knowledge that is vital to advances in such complex technologies, will become ever more of a priority,” says Kania.
Governments are attempting to clamp down on the theft of IP and knowledge sharing. Some recent policies, such as the US’s decision to restrict visas to Chinese students studying advanced technical subjects in American universities, seem heavy-handed. Policymakers will certainly have their work cut out for them. Unlike previous technologies, such as nuclear weaponry, AI is largely being created privately and is not confined within a centralised and restricted environment like the Manhattan Project. “How can they [governments] acquire knowledge that’s being generated privately and how can they safeguard knowledge that may have a military advantage, from their potential adversaries?” questions Payne. This technology is widely available, easily shareable and almost impossible to contain.
Currently the race for AI superiority is a contest between a highly centralised Chinese system and a more fragmented but open public-private arrangement in the West. In China, AI strategy is being built on a well-funded, long-term strategic plan, involving close cooperation between the state and private sector. Fusing military and civil AI has now become a priority for the Chinese. Just last month the vice president of Tsinghua University, You Zheng, outlined the importance of “military-civil fusion” in China’s development of AI.
To be sure, there are potential disadvantages to having such a close relationship between the military and civilian sectors. As Elsa Kania has argued in a recent article, “the expansion of the CCP’s presence within tech companies – which are now expected to promote the implementation of ‘Xi Jinping thought’ – may harm creativity and innovation.” Excessive state involvement could also lead to excessive levels of investment – leading to a tech bubble – as well as fomenting power struggles between political and technology leaders. It is not clear that China’s way will win the day.
For the time being, America and other Western nations still possess dominance in technology, knowledge and research breakthroughs. But, according to Gaycken, in order for the West to win this race, it must change its approach. “The industries will have to cooperate very strongly and very closely with the military and they will have to exchange their intellectual property with each other.” A scenario which, for the time being, seems unlikely.