NVIDIA still dominates PC graphics, as its showing at CES 2019 made clear. It will own the laptop gaming space this year with its all-new RTX Max-Q designs, as numerous PC makers introduced fast and light devices built around the GPUs. NVIDIA has shrunk its chips to the point where a GeForce RTX 2080 Max-Q GPU fits into a 4.5-pound laptop, which is a pretty incredible feat.
During his keynote, NVIDIA CEO Jensen Huang spent more time talking about ray-tracing than Max-Q. First he showed off some cinematic ray-traced graphics and how they improve games like Atomic Heart. Next he flaunted the company's AI-powered DLSS anti-aliasing tech running at 1440p resolution to smooth out blocky pixelation, along with a cool demo of light-scattering "caustics" in Justice.
After unveiling the mid-range GeForce RTX 2060 at a not-exactly-cheap $349, NVIDIA turned to G-SYNC, announcing that it was testing and certifying a number of monitors packing AMD's FreeSync tech. Out of hundreds tested, only a dozen were certified, but Huang didn't stop there. "As you know, we invented the area of adaptive sync," he said. "The truth is most of the FreeSync monitors do not work. They do not even work with AMD's graphics cards."
However, opinions of RTX are mixed so far. A recent patch for Battlefield V eliminated some of the stuttering and other problems with ray-tracing, but frame rates are still about half of what they are with the feature turned off. DLSS takes the opposite tack, using AI upscaling to claw back performance, though early implementations can soften image quality. The results can be beautiful, but game developers must do extra work to implement both types of tech, knowing neither will work at all on AMD cards.
Huang also claimed that FreeSync tech is problematic even on FreeSync monitors, but that doesn't quite jibe with users' experiences. G-SYNC does work better, especially near the lower limits of a monitor's certified refresh-rate range, according to many gamers and reviewers. However, they also feel it's not worth the hefty premium you pay for the NVIDIA hardware that must be integrated into the display. It's another way NVIDIA locks users into its ecosystem, giving itself more power to set (higher) prices.
Finally, it's worth noting that NVIDIA sells a large chunk of its GPUs to workstation users for video editing, 3D animation and graphics chores. While it has boasted that its latest chips allow for real-time 8K editing, the ray-tracing and DLSS tech doesn't yet help content creators in any way.
AMD's 7-nanometer response
AMD unveiled not only a CPU that should unsettle Intel (third-generation Ryzen, built on 7-nanometer tech) but also the $699 Radeon VII graphics card. Using the 7-nanometer Vega 20 engine, it appears to have at least as much horsepower as NVIDIA's $799 RTX 2080, though it obviously lacks its rival's ray-tracing and DLSS features. Still, it's the world's first 7-nanometer gaming GPU, putting it well ahead of the 12-nanometer process used in NVIDIA's latest graphics cards.
AMD claims gaming performance that's more or less on par with the RTX 2080, besting it in some areas and falling short in others. The company also showed its GPU beating NVIDIA's card at 8K video editing, color correction and other content-creation tasks.
However, all we saw from AMD at CES 2019 in terms of gaming laptops were ASUS' TUF laptops with Ryzen 5 CPUs and Radeon RX 560 discrete graphics. The company is supposed to start shipping more mobile GPUs soon, but until then, it's been all NVIDIA, all the time.
The Radeon VII doesn't have the AI tricks and ray-tracing of its NVIDIA counterpart. While that doesn't impact gamers much yet, AMD has acknowledged that it will become an important feature. Ray-tracing is "deep in development, and that development is concurrent between hardware and software," said CEO Lisa Su. "The consumer doesn't see a lot of benefit today because the other parts of the ecosystem are not ready."
What this means for gamers
It's encouraging to see that for once, AMD is actually ahead of NVIDIA with its 7-nanometer chip technology, at least in terms of transistor density. Given how NVIDIA significantly ramped up its RTX pricing over the previous-generation GTX tech (the RTX 2060 starts at $349, while the GTX 1060 first hit the market at $100 less), anything that forces it to compete is a good thing.
The Radeon VII appears to stack up well against the RTX 2080 in terms of raw horsepower. Many gamers might still prefer to pay $100 extra for the latter card, however, to get the ray-tracing and DLSS features. If AMD's card does eat into those sales, NVIDIA has room to discount, and a price war between the companies would be great for buyers. AMD likely won't stay ahead in the nanometer race for long, though, as NVIDIA is expected to move its next-gen production to a 7-nanometer process (both companies' chips are built by TSMC).
The problem, though, is that the market is starting to fragment. In an effort to tighten its grip, NVIDIA is placing more emphasis on exclusive features like ray-tracing, DLSS and G-SYNC. At the same time, it's de-emphasizing benchmarks where AMD can compete directly, like raw TFLOPs and memory bandwidth. AMD's own ray-tracing features, when they arrive, will likely be limited to its own cards, too. That means both consumers and game developers will need to choose one ecosystem or the other.
Both companies are competing for a smaller chunk of the graphics pie, now that the bottom has fallen out of the crypto-currency market. PC gamers are likely still feeling burned by the whole Bitcoin era, which saw graphics card prices skyrocket. Both companies, particularly market leader NVIDIA, should be rolling out the red carpet to drum up consumer enthusiasm. Instead, the new era of ray-tracing and AI-powered graphics might leave everyone more confused (and abused) than ever.