So Why Two Sets of Graphics?

There are a few possible answers to this question.

The cynical answer is that Intel is rehashing an H-series design for the CPU portion; rather than spending money on new masks that cut off the integrated graphics, Intel is being cheap for what would be a low-volume product.

A technical reason, which readers may or may not agree with, relates to functionality and power. Despite these chips being rated at 65 W and 100 W, we are going to see them in high-end 15-inch and 17-inch devices, where design is a lifestyle choice but battery life is also a factor. For relatively simple tasks, such as video decoding or driving an eDP panel, firing up a big, bulky graphics core with HBM2 is going to drain the battery a lot faster. By remaining on the Intel HD Graphics, users can still cover those low-power situations while the Radeon graphics and HBM2 are switched off. There is also the case of Intel's QuickSync, which can be used in preference to AMD's encoders in a power-restricted scenario.
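The switching behavior described above can be pictured as a simple routing policy: light workloads stay on the iGPU while the dGPU and its HBM2 remain power-gated. The sketch below is a toy model only; the function and task names are illustrative assumptions, as the real decision is made by the graphics drivers and OS, not application code.

```python
# Toy model of the hybrid-graphics policy described above. Light workloads
# (video decode, desktop composition, eDP output) stay on the low-power
# Intel iGPU; the Radeon dGPU and its HBM2 are only powered up for heavy 3D.
# All names here are hypothetical -- real systems decide this in the driver.

LOW_POWER_TASKS = {"video_decode", "desktop", "edp_output"}

def select_gpu(task: str, on_battery: bool) -> str:
    """Return which GPU this toy policy would route a task to."""
    if task in LOW_POWER_TASKS:
        return "intel_hd"        # iGPU handles it; dGPU stays power-gated
    if on_battery and task == "gaming_low":
        return "intel_hd"        # on battery, light 3D can stay on the iGPU
    return "radeon_vega"         # fire up the dGPU and HBM2 for heavy 3D

# Example: video decode on battery stays on the iGPU, saving power.
print(select_gpu("video_decode", on_battery=True))   # intel_hd
print(select_gpu("gaming_3d", on_battery=False))     # radeon_vega
```

The key point the toy captures is that the decision is per-workload, not per-device: the same laptop routes QuickSync-friendly video work to the iGPU even while a game session would wake the Radeon silicon.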

The Radeon graphics in this case offers power gating at the compute-unit level, allowing the system to adjust power as needed or as available. It drives up to six additional displays at up to 4K, on top of the three from the Intel HD Graphics, for a total of nine outputs. The Radeon graphics supports DisplayPort 1.4 with HDR and HDMI 2.0b with HDR10, along with FreeSync/FreeSync 2. As a result, when the graphics output switches from the Intel HD Graphics to the Radeon graphics, users gain access to FreeSync, as well as more displays than you can shake a stick at (assuming the device has all the outputs).

Users who want these new Intel with Radeon Graphics chips in desktop-class systems might not find much use for the Intel HD Graphics. But for anything mobile or power-sensitive, and especially for anything multimedia-related, it makes sense to take advantage of the Intel iGPU.


67 Comments

View All Comments

  • haukionkannel - Monday, January 8, 2018 - link

This. Vega is very efficient at low clock rates!
  • Yojimbo - Sunday, January 7, 2018 - link

    Where do they claim that it will beat the GTX 1050 in terms of power efficiency? They show some select benchmarks that imply a certain efficiency in those specific cases, but I didn't see that they mentioned general power efficiency or price at all.

    This package from Intel does have HBM, which is more power efficient than GDDR5. That will help. But overall, my expectation is that Intel's new chip will be less efficient in graphics intensive tasks than a system with a latest generation discrete NVIDIA GPU. The dynamic tuning should help in cases where both CPU and GPU need to draw significant power, though.

We probably know how Vega performs. Assuming that the chips aren't TDP constrained, the more powerful of the two variants should probably perform somewhere between a 560 and 570 in games. The lesser variant should perform around a 560, less or more depending on how memory bandwidth plays into things. We'll have to see how power constraints factor into things, though.

    Another thing to keep in mind is that for most of its lifetime, this chip will probably be going up against NVIDIA's next generation of GPUs and not their current generation. Intel did benchmark it against a 950M, but I wouldn't put it past them to ignore price differences in a comparison they release. The new chips will probably be expensive enough that they will have to go up against the latest generation of their competitor's chips.
  • Kevin G - Monday, January 8, 2018 - link

This does leave room for Intel to produce a slimmer GT1, or even omit the iGPU entirely for mobile, when they know it will be paired with a Radeon Vega on package. That'd permit Intel to decrease costs on their end, though it would be up to Intel to pass those savings onward to OEMs.
  • nico_mach - Monday, January 8, 2018 - link

AMD wasn't good at efficiency mostly due to fabbing. That's easily fixable with a deep-pocketed and suddenly desperate partner like Intel.
  • artk2219 - Wednesday, January 10, 2018 - link

Vega is actually pretty efficient, just not when they chase high performance; then the power requirements jump exponentially in response to the higher clocks and voltage. Also, AMD has held the efficiency crown multiple times, just not recently. The Radeon 9700 Pro, 9800 Pro, 4850, 4870, 5850, 5870, 7790, 7950, and 7970 all say hello when compared to their Nvidia counterparts of the time.
  • jjj - Sunday, January 7, 2018 - link

Ask AMD for a die shot so we can count CUs lol
  • shabby - Sunday, January 7, 2018 - link

8th generation... Kaby Lake? Have I been sleeping under a rock?
  • evilpaul666 - Sunday, January 7, 2018 - link

Is there a difference between Skylake, Kaby Lake and Coffee Lake that I'm unaware of?
  • shabby - Sunday, January 7, 2018 - link

In mobile the only difference was the core count; it doubled when Coffee Lake was released, but this Kaby Lake has similar core counts for some reason.
  • extide - Sunday, January 7, 2018 - link

Yeah, for the U/Y (and now G) series, 8th gen is 'Kaby Lake Refresh', not Coffee Lake.
