RISC-V is pushing further into the mainstream, showing up across a wide swath of designs and garnering support from a long and still-growing list of chipmakers, tools vendors, universities and foundries. In most cases it is being used as a complementary processor rather than as a replacement for something else, but that could change in the future.
What makes RISC-V particularly attractive to chipmakers is its open-source roots. Developed at UC Berkeley at the beginning of this decade, the RISC-V instruction-set architecture (ISA) is available under a Berkeley Software Distribution license, which allows widespread distribution of the design with minimal restriction. That works particularly well for startups developing prototypes, but it also works for highly specific applications such as a security co-processor, because the source code can be tweaked. In addition, it plays well in markets such as China, where there is a national effort to reduce the trade deficit in semiconductors, as well as the cost of IP in those chips.
Most proponents readily admit that RISC-V still has a long way to go before it becomes a serious threat to established processor cores in the market. It takes time to develop software and microarchitectures for specific applications, and the RISC-V Foundation has only been in existence since 2015. All of this makes the architecture an interesting choice for a co-processor, but not necessarily the primary processing element in a commercial design. In fact, it’s not clear RISC-V will ever actually displace some of the leading processor architectures. But it certainly is finding a home alongside those established processors, a role that will only grow as the RISC-V architecture and software mature.
“When you look at what’s in the marketplace, x86 is not going away, Arm architectures aren’t going away,” said Ted Marena, director of FPGA marketing at Microsemi, and spokesperson for RISC-V Foundation. “The way to think about RISC-V, and the way a lot of the customers look at the technology, is it’s an option. It enables a level of innovation that someone may need. There are a lot of people that don’t need it, and they can use a lot of the choices that are out there. But for those folks that want the next level of capabilities, that’s where RISC-V fills a hole.”
Arm, MIPS, Synopsys (ARC) and Cadence (Tensilica) have been successful in promoting their own ISAs, along with a full suite of tools and software. Arm and MIPS discourage extension of their architectures, and have dominated the mobile and networking markets with their processors. In addition, each has a focused ecosystem and OS/middleware preferences.
“With (Synopsys’) ARC and (Cadence’s) Tensilica, they encourage extension and have methodologies to assist users with this,” said Simon Davidmann, CEO of Imperas. “And they have been successful with specific audio and DSP markets that don’t require extensive ecosystem support.”
On the other hand, he pointed out that RISC-V is designed to be extended, and the ecosystem is growing with commercial tools to assist in the design and verification of these extensions. RISC-V adopters are targeting emerging markets such as AI, ML, and IoT that have not yet established OS or middleware preferences. He said there are numerous market segments for each of these architectures.
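Davidmann's point about tooling for extensions can be made concrete with a small sketch. In a typical flow, a new instruction's behavior is first captured as a golden reference model that both an instruction-set simulator and the RTL verification environment check against. The instruction modeled below (a population-count-accumulate) and the function names are purely hypothetical, invented here to show the shape of such a model, not taken from any shipping extension.

```c
#include <stdint.h>

/* Hypothetical custom RISC-V instruction: pcnt.acc rd, rs1, rs2
 * Invented semantics for illustration: rd = rs2 + popcount(rs1).
 * A golden model like this is what an instruction-set simulator
 * executes, and what RTL results are compared against. */
static uint32_t popcount32(uint32_t x) {
    uint32_t n = 0;
    while (x) {
        x &= x - 1;  /* clear the lowest set bit */
        n++;
    }
    return n;
}

uint32_t pcnt_acc_model(uint32_t rs1, uint32_t rs2) {
    return rs2 + popcount32(rs1);
}
```

In a real flow, an instruction like this would typically be encoded in one of the opcode spaces the RISC-V specification reserves for custom extensions, so the standard compiler and assembler continue to work unmodified alongside the new instruction.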
Fig. 1: SiFive’s RISC-V Linux-ready architecture. Source: SiFive
Sergio Marchese, technical marketing manager at OneSpin Solutions, agreed. “There are many opportunities for service companies and EDA vendors to provide tailored solutions that fit into an open, interoperable design development framework where engineers can pick and choose best-in-class solutions and deploy them seamlessly. Consider the benefit of having an open, formal RISC-V ISA specification and using it to deliver an unprecedented level of rigor and automation in the verification of both simple and complex micro-architectures implemented in hardware.”
But vendors also will be competing on the merit of their solutions in whatever part of the RISC-V ecosystem they play in, whether that involves IP, software tools or EDA.
“The beauty of the way RISC-V is set up is that people can chose to carve a niche, or provide directly competing solutions,” said Neil Hand, director of marketing for the Design Verification Technology Division of Mentor, a Siemens Business. “Playing nice is the only real option.”
Not everyone agrees. Dave Kelf, chief marketing officer at Breker Verification Systems, said some people on the RISC-V Foundation see the RISC-V effort as directly competitive with Arm.
“If one looks at the open nature of the RISC-V instruction set architecture versus Arm and other commercial processor providers, it is easy to see why,” Kelf said. “However, when looking at the practicalities of the market, it is unlikely that RISC-V will displace Arm in any of its core businesses anytime in the near future. One interesting advantage RISC-V has over Arm is the ability to extend its instruction set while still making use of a standardized tool flow. This makes it more competitive with extendable processors such as Tensilica and ARC, as well as replacing internal processor efforts. Looking at the projects so far within companies, it is these applications to which RISC-V has been leveraged. So far, it has not directly come up against Arm. However, the threat of an open ISA has to make Arm nervous, and a web page briefly posted by Arm—before it was quickly removed—suggests that this might indeed be the case.”
Where RISC-V works best
For some engineering groups, the promise of customization with a RISC-V-based processor is attractive. Because RISC-V is highly extensible, many are using it to replace internal proprietary accelerators while still leveraging the software ecosystem. These accelerators generally are hidden from the user, while the Arm cores are what is exposed to the software developers.
Microsemi’s Marena pointed to Western Digital as an example. “They wanted a particular kind of bus and interfacing. For their situation, they required something that was beyond the standardized architecture. Processors do a lot of things really well, but there’s some things they don’t do quite as well. And so when you want a specialty-purpose function, that next level of innovation, this is where RISC-V comes in.”
It works the same for hardware security. Open-source hardware is considered more secure because it is developed by more people for more end applications.
“The IoT is a particularly insidious ecosystem to secure,” said Martin Scott, CTO of Rambus. “It has vulnerabilities from silicon to the cloud and everywhere in between. There may be inherent vulnerabilities in a design that is connected to an ad hoc worldwide network and software stack that are not secured. There are many different processes, both in business and in security, that have to be managed, and there is no central authority or central standard.”
Scott said there are practical ways to deal with security in hardware, such as using layers of security. But he added that a key advantage of open source is the ability to share information about where vulnerabilities have shown up and how to address them. “Why we’re using RISC-V is that we’re starting with an ISA that can be manipulated to be secure. That’s really important. The implementation of the microarchitecture is as important as the architecture itself because from a side-channel perspective, equivalent functional implementations done differently can have very different security problems.”
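Scott's observation that functionally equivalent implementations can have very different security problems holds even at the software level, which makes it easy to illustrate. The sketch below contrasts a naive early-exit byte comparison, whose running time reveals how many leading bytes of a secret match, with a constant-time variant that always touches every byte. The function names are illustrative, not from any particular library.

```c
#include <stddef.h>
#include <stdint.h>

/* Early-exit compare: returns at the first mismatch, so timing
 * leaks the length of the matching prefix -- a classic side channel
 * when comparing secrets such as MACs or passwords. */
int compare_leaky(const uint8_t *a, const uint8_t *b, size_t n) {
    for (size_t i = 0; i < n; i++) {
        if (a[i] != b[i]) return 0;
    }
    return 1;
}

/* Constant-time compare: accumulates differences across all bytes,
 * so the running time does not depend on where (or whether) the
 * inputs differ. Functionally equivalent to the version above. */
int compare_ct(const uint8_t *a, const uint8_t *b, size_t n) {
    uint8_t diff = 0;
    for (size_t i = 0; i < n; i++) {
        diff |= (uint8_t)(a[i] ^ b[i]);
    }
    return diff == 0;
}
```

Both functions return the same answers for all inputs; only their timing behavior differs, which is exactly the kind of microarchitecture-level distinction Scott is describing.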
And this is where RISC-V is finding a home at the moment. Tim Whitfield, vice president of strategy, embedded and automotive at Arm, pointed to a shift toward more heterogeneity in devices, with specialized processors inside of them. “Is that sort of general-purpose compute going to change a little bit? The innovation that RISC-V enables around what that answer might be is good. Specialists have been using RISC-V in the deeply embedded space where, yes, it’s replacing proprietary cores doing very specialist tasks. And it makes an awful lot of sense because you have the flexibility to go and play with the architecture and do bit twiddling and build the interfaces. That’s the place where it really fits well at the moment.”
But RISC-V also could gain traction with proprietary architectures, given the huge investments organizations make in code and instruction sets and architectures, suggested Rupert Baines, CEO of UltraSoC. "Another facet to this is critical mass, and it can be very expensive and very difficult to develop, support and maintain anything below that critical mass. For companies like Nvidia, which have their own completely custom thing, well now they've got a RISC-V. They benefit from all the tools, compilers, and it's still their own custom thing, but they've just made their development costs much lower because they can leverage everything else."
Other companies such as Andes and Codasip are providing RISC-V-based cores, and are keeping their business model the same—they're licensing a core and a development environment, but because they use a common ISA they can leverage that investment from the rest of the world and achieve critical mass, Baines said.
RISC-V-based processors already sit alongside Arm processors within SoCs, and Whitfield expects this to continue as it has with other architectures. “Other architectures have existed, and will exist in perpetuity, whether it’s a Tensilica, which offers a similar sort of architectural flexibility where people need that, or [RISC-V processors]. They coexist with Arm applications processors, or Arm embedded processors.”
Indeed, most SoCs that do not have just one core in them tend to have larger multicore application processors (such as Arm or MIPS) running the main OS, such as Linux, with smaller ‘minion’ processors (such as Andes, or other RISC-V cores) around them running RTOS and other kernels, or bare metal, to accelerate the application’s performance, Davidmann said.
“You have to remember [RISC-V] is an architecture, which at the end of the day is a piece of paper,” added Whitfield. “And the likes of Andes and Codasip and others will build a microarchitecture. That’s expensive and difficult. Arm is much more than a CPU architecture. It’s an IP company and a system solutions software ecosystem. So yes, we can coexist, and that sort of pitching it as a ‘winner takes all’ death match — it’s definitely not a zero sum game. There has always been room for other architectures to play. Where it makes technical sense, at the moment I see that deeply embedded proprietary type place. Maybe there’s something else in the future. There’s no reason why Arm can’t replicate that same goodness in some way. There might be a different future for processing that Arm has to develop different IP to fill that hole.”
For RISC-V to really take off on a commercial basis requires tooling and software, as well as an understanding that future SoC designs will be increasingly heterogeneous in terms of processor ISA, IP vendors, and the software stack.
"The industry needs new advanced tools to model, simulate, port software, and develop and debug new software," said Davidmann. "Verifying correct operation of these new heterogeneous multicore systems will be a large part of verification budgets going forward."
Also needed is a consistent way to test compliance of RISC-V-based processors, not including extensions, and to accelerate functional verification of systems that contain some RISC-V technologies, said Mentor's Hand.
One of the biggest issues is the confusion between an open instruction-set architecture and open-source cores or software tools, Breker’s Kelf noted. “RISC-V is an open ISA, but this does not necessarily mean open-source implementations. This leads to questions regarding areas such as compliance of individual implementations to the ISA standard and, as such, verification is a big question—especially when the ISA is extended. It is true there are open-source implementations of the cores, software tools and other aspects of RISC-V available, but there is a question mark as to the commercial readiness of these capabilities. More commercial offerings are required that are robust enough for companies to risk their SoCs on. Of course, SiFive is one of the companies to watch as it develops more powerful implementations.”
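One common way to check an implementation against the ISA, which the compliance and verification questions Kelf raises come down to, is lock-step comparison: each retired instruction from the device under test is compared against a golden reference simulator. The record fields and function below are a simplified illustration of that idea, not any particular tool's API.

```c
#include <stdbool.h>
#include <stdint.h>

/* Minimal retired-instruction record: program counter, instruction
 * bits, and the destination-register writeback (if any). */
typedef struct {
    uint64_t pc;
    uint32_t insn;
    uint8_t  rd;       /* destination register index, 0 if none */
    uint64_t rd_value; /* value written to rd */
} retire_rec_t;

/* Compare one retired instruction from the DUT against the golden
 * reference model. Any divergence flags a verification failure at
 * exactly that instruction, which localizes the bug. */
bool retire_matches(const retire_rec_t *dut, const retire_rec_t *ref) {
    if (dut->pc != ref->pc || dut->insn != ref->insn) return false;
    if (dut->rd != ref->rd) return false;
    /* x0 is hardwired to zero in RISC-V, so its value is ignored */
    if (dut->rd != 0 && dut->rd_value != ref->rd_value) return false;
    return true;
}
```

The same compare loop works whether the reference model implements only the base ISA or has been taught a customer's extensions, which is why an executable reference model matters so much once the ISA is extended.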
SiFive, for its part, believes the cost of developing complex chips is so high today that it is unhealthy for the chip industry.
"It's all about the survival of the semiconductor business," said Naveed Sherwani, CEO of SiFive. "If open source is a stack, you can use whatever is available for free, and you can always go and buy yourself a better version. But what it means is you can build a prototype based on open-source components. All the IP you need to build a chip is free. If your chip goes to production, you pay whatever the price of IP is. So now what you've done is reduce the cost of producing a prototype. The cost of developing a chip is so high that no VCs will fund it and no young people will try it. If the cost today is $10 million, I would like to see it at $1 million. This includes all IP, all SerDes, all DDR controllers. That is the goal. And this is what it means to be open source in hardware."
There are software challenges to heterogeneous systems, as well, which have yet to be resolved.
"Historically in heterogeneous systems, the problem is most tools have been designed to cope with one architecture, so you end up with a collection of silos and then you're toggling between different incompatible environments where each one is specific," Baines said.
This isn’t confined to RISC-V. “There’s been GPU and video accelerators and various pieces of acceleration in different architectures, and it is a problem for software,” said Whitfield. “A lot of what we see are deeply embedded applications with their own ecosystem or no open programmability. I don’t think we’ve seen a world yet where you’ve got two application processors or where we see a specific accelerator with a programmable accelerator with an Arm chip next to it.”
Baines believes those systems do exist, but said the canonical case for it has always been an application processor and a DSP in some sort of a modem chip. “Modem chips have always had that architecture, and it’s always been horribly difficult because you have an Arm, and a CEVA, and debugging that combo is difficult.”
This explains why the only real coarse-grained offload engines that have persisted have been graphics and video, where they’re able to grow their own ecosystem and there’s enough benefit in having a separate engine to support that. “We’re seeing that in AI,” Whitfield said. “We’re starting to see neural network accelerators and a whole ecosystem, so that will be another coarse-grained offload engine. With fine-grained stuff that RISC-V enables, it’s going to be very interesting to see whether the world really needs it in a mass market sense. You’ve seen it with Tensilica and with ARC, where some people need that specialization, but mostly it’s come out of the GPU. You get out of the CPU general-purpose compute, go into the accelerator, and then there’s an architectural revision that says we’ll put something in that makes it general-purpose because of that ecosystem problem. It makes it easier if you build that back into the architecture. That’s always worked until now. But there’s that end of Moore’s Law, with slightly domain-specific architectures. Does it happen? Does it not happen?”
Mentor's Hand agrees. "The software side is always a problem. Each core has a different tool chain, and so it is very complex to work with these systems and manage tasks that cross parts of the system. If many proprietary accelerators are replaced with RISC-V-based ones, that can help create a common ecosystem. This is not the case today, as each RISC-V vendor has their own definition of a tool chain."
Another challenge is how to simulate and debug these systems. "If you get tools from your IP provider, those work well with their IP but tend to fall short when trying to get it to work with other vendors' IP," said Davidmann. "To develop software for an SoC that incorporates IP from various vendors, customers need to ensure the models, simulators, debug, verification, analysis and profiling tools can work with many vendors and many ISAs."
For users that want to adopt RISC-V now, choices are limited. This will change as IP and tool providers develop solutions.
“Peripherals, hardware, some of the tools and ecosystem are happening depending on how much of an early adopter you are, how comfortable you are with that, or whether you are going to wait another six months until they’re a bit more mature,” Baines said. “For deeply embedded, a lot of people would now say that’s pretty mature. And if you’re designing a deeply embedded supervisor or something, it would be a very sensible thing to do. Moving into application processors, moving into customer-facing Linux class processors, that’s what’s being developed now, so it’s less mature.”
Coordination of many different tasks across cores and then validating those before signoff is critical. “The complexity of these systems is increasing, and with that the interdependencies,” said Mentor’s Hand. “Also, many of the new applications are involved in areas where functional safety is key. So not only do we need to make sure something works correctly, but we also need to make sure it fails correctly (a whole new area for most design teams).”
On top of that, there are issues around patents as an architecture is adapted for a particular market.
"There's RISC-V, which is an architecture, and then there are microarchitecture CPUs," said Whitfield. "I'm interested in how you get from architecture to microarchitecture. I think there are three routes to that. You either take a piece of paper, and you design it yourself, ground up, and that is pretty specialist. There are a few teams around the world who are capable of doing that, especially as you move it to that applications class. You take an open source version from somewhere and you modify it and use it. And then you have conversations around code provenance—who owns it, where did it come from? If you're talking about functional safety and security, functional safety is all about knowing from spec through to implementation. 'I'm doing requirements tracking and I have it all.' How do you overcome that? Or you go to a third-party IP vendor—an Andes or a Codasip—and then it's a very similar model to Arm and they solve those problems for you. But then the free attraction has to go away at that point because somebody has invested a huge amount of money in getting to that point."
That leads to challenges involving IP protection and patent infringement.
"The architecture is clean from a patent perspective, but as soon as you move into that microarchitecture space there are the Intels and Qualcomms and Arms, and lots of other people who, in and of themselves, would probably not go anywhere near that," Whitfield noted. "We spend a lot of time indemnifying our partners, so there's a challenge around that. As you build a microarchitecture, you're almost certainly going to violate some microarchitectural patent somewhere, and at some point a patent troll is going to come along. Who indemnifies you if you picked it from an open source or you rolled your own?"
On the flip side, it becomes harder to safeguard IP when there are huge numbers of derivatives based on the instruction-set architecture.
While RISC-V continues to gain traction, there are gaps in the tools and software, as well as risks involving any customizable architecture. That may limit how and where RISC-V is used in designs, at least in the short term. But there are enough market incentives and opportunities to make this an interesting technology to watch in coming years, particularly as the chip world increasingly leverages architectures as the best way to turn up performance and lower power, rather than relying on increasingly dense implementations of the same processors.
—Ed Sperling contributed to this report.