Intel enters the market for discrete GPUs for laptops with Xe Max | GeekComparison

This is Intel’s DG1 chipset, the heart of the Xe Max GPU.

This weekend, Intel released preliminary information about its latest laptop component: the Xe Max discrete GPU, which works alongside and in conjunction with Tiger Lake’s integrated Iris Xe GPU.

We first heard about Xe Max at Acer’s Next 2020 launch event, where it was mentioned as part of the upcoming Swift 3x laptop, which will only be available in China. The new GPU will also be available in the Asus VivoBook Flip TP470 and the Dell Inspiron 15 7000 2-in-1.

Intel Xe Max vs Nvidia MX 350

During an extended product briefing, Intel stressed to us that the Xe Max beats Nvidia’s entry-level MX 350 chipset by just about every measure imaginable. In another year this would have been exciting, but the Xe Max will only appear in systems with Tiger Lake processors, whose integrated Iris Xe GPU already handily outperforms the Nvidia MX 350 in both Intel’s tests and ours.

The confusion here stems largely from mainstream consumer expectations of a GPU versus what Intel is doing with the Xe Max. Our GPU tests largely revolve around gaming, using 3DMark’s well-known benchmark suite, which includes gaming-focused, fps-oriented tests like Time Spy and Night Raid. Intel’s expectations for the Xe Max, by contrast, revolve almost entirely around content creation, with a side of machine learning and video encoding.

Xe Max is roughly the same 96-Execution-Unit (EU) GPU found in the Tiger Lake i7-1185G7 CPU we already tested this year, but with a higher clock speed, dedicated RAM, and a separate TDP budget.

Tiger Lake’s Iris Xe has a maximum clock speed of 1.35 GHz and shares the CPU’s TDP limit. Iris Xe Max has its own 25 W TDP and a higher peak clock speed of 1.65 GHz. It also has its own 4 GiB of dedicated RAM, although that RAM is the same LPDDR4X-4266 that Tiger Lake itself uses, which is a first for discrete graphics and could lead to better power efficiency.
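Those two clock speeds are the whole story of the raw-throughput gap. A back-of-the-envelope calculation makes it concrete; the assumption that each Xe-LP EU issues 8 FP32 FMAs (2 ops each) per clock is ours, based on the published Xe-LP EU width, not an Intel-supplied figure:

```python
# Rough peak FP32 throughput for a 96-EU Xe-LP GPU.
# Assumption (not from the article): each EU issues 8 FP32 FMAs per
# clock, and an FMA counts as 2 floating-point operations.

def peak_tflops(eus: int, clock_ghz: float, fma_lanes: int = 8) -> float:
    """Peak single-precision TFLOPS: EUs * FMA lanes * 2 ops * clock."""
    return eus * fma_lanes * 2 * clock_ghz / 1000

iris_xe = peak_tflops(96, 1.35)  # integrated, shares the CPU's TDP
xe_max  = peak_tflops(96, 1.65)  # discrete, own 25 W budget

print(f"Iris Xe: {iris_xe:.2f} TFLOPS, Xe Max: {xe_max:.2f} TFLOPS")
print(f"clock advantage: {xe_max / iris_xe - 1:.0%}")
```

The roughly 22 percent clock advantage lines up with the modest gains Intel reports below; the higher TDP and dedicated RAM matter mainly for sustaining that clock.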

An improvement of Iris Xe, not a replacement

Intel’s marketing materials promote the idea of workloads running on both the integrated Iris Xe GPU and the discrete Xe Max GPU at the same time, using the term “additive AI.” However, this should not be confused with traditional multi-GPU teaming such as AMD CrossFire or Nvidia SLI; Xe Max and Iris Xe will not work together to render to a single display.

Intel’s version of “additive AI” instead refers to workloads that can easily be split to run on separate GPUs with separate memory spaces. Such tasks can be assigned to both the iGPU and the dGPU at once, and since the two have very similar performance profiles, the split works without the overall workload being dragged down much by the portions running on the slightly slower integrated GPU.

Sharpen images with Gigapixel AI

One of Intel’s more impressive demonstrations involved using Topaz Gigapixel AI to sharpen grainy images. If you’re not familiar with Gigapixel AI, it’s essentially the modern, real-life version of Blade Runner‘s infamous enhancement scene. Topaz, of course, cannot genuinely add information to a photo that wasn’t there before, but it can use machine learning to generate information that looks as though it belongs.

In Intel’s demonstration, an Xe Max-equipped laptop used Gigapixel AI to enhance a very large, grainy photo seven times faster than a comparable laptop equipped with an Nvidia MX 350 GPU. While that was impressive, we pressed Intel for comparisons to other hardware that an Xe Max-equipped laptop might reasonably compete with “in the wild.”

After about a day, an Intel engineer got back to us: we can expect an Xe Max-equipped laptop to complete Gigapixel AI workloads seven times faster than one with an MX 350, five times faster than one with a GTX 1650, and 1.2 times as fast as a Tiger Lake laptop with only Iris Xe graphics.
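Those figures let you back out a comparison Intel didn’t state directly: how Iris Xe alone stacks up against the MX 350 on this workload. The arithmetic below assumes all of Intel’s ratios are measured against the same baseline run, which Intel did not explicitly confirm:

```python
# Back out the implied Iris Xe vs MX 350 ratio from Intel's figures,
# assuming all speedups are measured against the same MX 350 baseline.
xe_max_vs_mx350   = 7.0   # Intel: Xe Max is 7x an MX 350
xe_max_vs_iris_xe = 1.2   # Intel: Xe Max is 1.2x Iris Xe alone

iris_xe_vs_mx350 = xe_max_vs_mx350 / xe_max_vs_iris_xe
print(f"implied Iris Xe vs MX 350: {iris_xe_vs_mx350:.1f}x")
```

In other words, by Intel’s own numbers the integrated GPU would already be nearly six times faster than the MX 350 here, which underlines how little of the 7x headline the discrete chip itself contributes.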

The engineer also noted that Xe Max has significant untapped potential that further optimization should unlock, and that in MLPerf ResNet-50 workloads using INT8 data, Intel is already seeing 1.4x improvements over standalone Tiger Lake systems without Xe Max.

Xe Max vs Iris Xe

Despite being effectively a higher-clocked version of Tiger Lake’s integrated Iris Xe GPU, Xe Max doesn’t always outperform it, probably because it sits a PCIe x4 link away from the CPU rather than on-die. (Image: Intel)

The real question, for now at least, is who will benefit enough from an Xe Max-equipped laptop to justify the extra cost. We think both gamers and general-purpose computer users shouldn’t worry too much about it just yet; neither workload is likely to see much benefit.

Xe Max doesn’t always speed up gaming workloads at all. As the slide above shows, running Metro Exodus on the Xe Max boosts performance by about 7 fps, but running DOTA 2 on it would reduce performance by about the same amount. Fortunately, Intel’s implementation of the Xe Max includes automatic workload shifting: an Xe Max-equipped Tiger Lake system should shift each game to the better-suited GPU, without the gamer needing to know which is which.
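The shifting logic amounts to a per-application lookup: route each game to whichever GPU profiles faster for it, and default to the iGPU when a title is unknown. The profile table below is hypothetical (we only know the roughly 7 fps gaps from the slide); Intel’s driver does this internally:

```python
# Sketch of automatic workload shifting: route each game to whichever
# GPU profiles faster for it. The fps values are hypothetical, chosen
# only to match the ~7 fps gaps described in the text.

GPU_PROFILE = {
    "Metro Exodus": {"xe_max": 34, "iris_xe": 27},  # dGPU wins
    "DOTA 2":       {"xe_max": 58, "iris_xe": 65},  # iGPU wins
}

def pick_gpu(game: str, default: str = "iris_xe") -> str:
    """Return the name of the faster GPU for a game, or the default."""
    profile = GPU_PROFILE.get(game)
    if profile is None:
        return default  # unknown title: stay on the integrated GPU
    return max(profile, key=profile.get)

print(pick_gpu("Metro Exodus"))  # xe_max
print(pick_gpu("DOTA 2"))        # iris_xe
```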

For machine learning, the story is more positive: Xe Max at least seems to consistently outperform the integrated Tiger Lake GPU. But Tiger Lake’s success story with integrated graphics takes a lot of the shine off Xe Max. A 20 to 40 percent performance boost for an extra few hundred dollars isn’t a terrible proposition, but it’s a considerably smaller boost than most people expect from adding a discrete GPU.

For now, users who encode a lot of video throughout the day are probably the best target for Xe Max. For this very specific task, an Xe Max-equipped laptop runs rings around a Tiger Lake laptop without it, or even a laptop with an otherwise much more powerful RTX 2080 Super.
