The US chip industry starts to wake up to new competitive reality


Concerns that China may be looking to dominate the sector have prompted the US defence department to invest in reshaping it © FT montage

Richard Waters in San Francisco
The imminent death of Moore’s Law, which describes the main force behind long-running performance improvements in semiconductors, has frequently been predicted.

But according to Bill Chappell, chief microelectronics expert at Darpa, the US defence department’s research arm, the idea “died a decade ago”.

The US chip industry is now starting to face up to an uncomfortable reality. The dependable gains in chip performance that have long underpinned the industry’s growth — and supplied the computing power to open up new tech markets — have been weakening for some time.

Erica Fuchs, a professor of engineering and public policy at Carnegie Mellon University, said this had been apparent since 2004. Speaking at a Darpa conference in San Francisco last week, she pointed out that chip companies had also been struggling with lower returns from their R&D.

“The revenue is not keeping pace with the increasing investment they’re having to put in,” she said.

The 60-year-old US chip industry has sailed through this ominous period with barely a blink. A cyclical boom and the rise of new markets for silicon, from driverless cars to machine learning, have lifted the sector’s sales and led to a doubling of the Philadelphia semiconductor index since the start of 2016. That compares with a 38 per cent gain for the S&P 500.

But the self-confidence of an industry that has been one of the key forces behind US tech leadership is showing signs of cracking.

A report from a presidential advisory group issued early last year spelt out the dangers. Along with “fundamental technological limits” that were starting to slow innovation, the industry leaders behind the report warned of “a concerted push by China to remake the market in its favour”, using an industrial policy backed by massive government funding in a way that “threatens the competitiveness of US industry”.

It is against that background that Darpa — best known for research that contributed to breakthroughs such as the internet and self-driving cars — last week announced its backing for a set of research projects to help shape the long-term direction of the industry.

The grants, to a wide range of academic and corporate researchers, are part of a $1.5bn push by the agency, its first large-scale involvement in chips in nearly four decades. The intervention is needed because the industry is at an “inflection point” where it needs to shift its sights to longer term forces, said Mr Chappell.

The attention looks long overdue. The industry’s main trade body proposed a $600m budget for long-range research more than a decade ago, said Prof Fuchs. The amount eventually committed: $20m.

The research projects Darpa is backing cover chip architecture, new materials and design, and offer a glimpse of the technologies that could drive growth a decade from now. They also point to potentially significant shifts in the shape of the chip industry as the old certainties of operating under Moore’s Law fall away.

Chart showing how improvements in chip density are becoming more difficult

In the realm of chip architecture, for instance, the industry is facing diminishing returns from the general-purpose chips, such as Intel’s CPUs, that have been the main workhorses of the digital age. While they can deliver unmatched computing power for the price, they cannot be optimised for particular tasks. The future is more heterogeneous, said Mike Mayberry, head of research at Intel.

One focus of Darpa’s research is new chip architectures that combine both power and flexibility. This includes “software-defined hardware”, or chips that can be reprogrammed on the fly — an extension of today’s field-programmable gate arrays, integrated circuits that can be adjusted after they are manufactured and provide a limited amount of flexibility.

The rise of machine learning already offers an early glimpse of what this new world would look like, said Mr Chappell. Demand for specialised chips that can handle the huge volumes of data needed to train the deep learning systems used in artificial intelligence has given a new lease of life to Nvidia, whose graphics processing units were originally designed to handle video. It has also opened a door for Google, which has come up with a big-data chip design of its own, known as the TPU. Intel, meanwhile, is using FPGAs to enhance the performance of its CPUs for machine learning, an approach Microsoft has adopted for Project Brainwave, its bid to bring deeper AI capability to its data centres.

The rise of “domain-specific architectures” like these represents the most important force shaping the chip industry, according to John Hennessy, chairman of Google parent company Alphabet, and a hardware designer for much of his life.

This could set a pattern in other emerging computing markets. For instance, it is possible to imagine a new form of processor optimised specifically for personal robots that need to operate in the physical world alongside people, said Mr Mayberry.

This trend could bode well for companies such as Intel, Nvidia and Qualcomm. It points to a potential future where large “horizontal” chip companies, whose products can be applied across many fields, would continue to hold sway in more specialised computing domains, said Mr Chappell.

Other long-term technology trends, however, could open the way for newer competitors and bring more disruptive change.

In materials, Darpa is backing research into ways of integrating specialised components that are already added to today’s chips to enhance their performance in different ways. This is largely intended to overcome a fundamental bottleneck: the need to shuttle information from the memory component of a chip to the logic processor and back, something that slows performance particularly in applications that involve very large amounts of data. One result, according to Darpa, could be a new hardware design that turns the old approach on its head and moves processing power to where the data are stored — a clear sign of the increasingly data-centric nature of modern computing.

In the field of chip design, meanwhile, Darpa is backing new techniques to bring down costs through greater automation, while breaking design tasks into smaller units.

Together, trends like these point to a future where new entrants could shape specialised chip designs. This could give companies like Facebook and Amazon more control over the delivery of services, from owning the data all the way through to designing the specialised silicon, said Mr Chappell.

Breaking down design and manufacturing into smaller components could also make it economic for much smaller companies to become involved, reversing a trend that has seen the industry consolidate into a small handful of global powers. Mr Chappell likens this to the craft brewing industry, where start-ups and companies with an interest in “vertical” markets (like medical imaging) design and produce smaller batches of chips.

It is not surprising that such a future would appeal to defence department researchers. The US military has come to rely on its ability to commission relatively small numbers of highly optimised chips, said Mr Mayberry. The Pentagon’s global technological lead, as well as that of the US chip industry, could be on the line.

The chip industry rule of thumb, spelt out in 1965 by Intel co-founder Gordon Moore, predicted roughly a doubling of the number of transistors on a chip every two years. The term has since been stretched to include other forces that affect chip performance, such as the steady improvement in power efficiency known as Dennard scaling, and now encapsulates the overall price-performance gains in chips.

Falling below the exponential growth curve of Moore’s Law over the past decade has left the density of transistors on today’s chips a factor of ten behind where it would otherwise be, he says. That represents a significant loss of performance, even if transistor densities have improved 1m-fold over the past 40 years.
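The arithmetic behind these figures follows directly from the doubling rule. A minimal sketch (the two-year doubling period and the time spans are the article’s figures; the function name is illustrative):

```python
# Sketch of the Moore's Law arithmetic cited in the article.

def moores_law_factor(years, doubling_period=2):
    """Density growth factor predicted by doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

# Doubling every two years for 40 years is 20 doublings, i.e. 2**20,
# which is roughly the million-fold improvement the article cites.
forty_year_gain = moores_law_factor(40)

# Over a decade the rule predicts 2**5 = 32x. If actual density is a
# factor of ten behind that curve, the implied achieved gain is ~3x.
predicted_decade = moores_law_factor(10)
implied_actual = predicted_decade / 10

print(f"40-year predicted gain: {forty_year_gain:,.0f}x")
print(f"10-year predicted gain: {predicted_decade:.0f}x")
print(f"implied decade gain if 10x behind: {implied_actual:.1f}x")
```

Twenty doublings give 1,048,576x, consistent with the “1m-fold over the past 40 years” figure, while a factor-of-ten shortfall against the 32x decade prediction implies the industry managed only around 3x in that period.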

Mr Chappell, meanwhile, argues that the main force behind the fall-off has been economic rather than technical. The huge upfront design costs for new chips and the expense of fitting out the “fabs” needed to make them have eaten into the industry’s customary price-performance gains for all but the highest-volume products, he says.