So Why Two Sets of Graphics?

There are a few possible answers to this question.

The cynical answer is that Intel is rehashing an H-series design for the CPU portion: rather than spending money on new masks that cut the integrated graphics off, Intel is being cheap on what would be a low-volume product.

A technical reason, which readers may or may not agree with, has to do with functionality and power. Despite these chips being 65W and 100W parts, we are going to see them used in 15-inch and 17-inch high-end devices, where design is a lifestyle choice but battery life is also a factor. For relatively simple tasks, such as video decoding or driving an eDP panel, firing up a big, bulky graphics core with HBM2 is going to drain the battery a lot faster. By remaining on the Intel HD graphics, users can still cover those low-power situations while the Radeon graphics and HBM2 are switched off. There is also the case of Intel's QuickSync, which can be used in preference to AMD's encoders in a power-restricted scenario.
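The switching behaviour described above can be sketched as a small decision function. This is a toy model with hypothetical names, not Intel's or AMD's actual driver logic:

```python
def pick_gpu(workload: str, on_battery: bool) -> str:
    """Toy model of the hybrid-graphics policy: light multimedia work
    stays on the low-power Intel iGPU, and the Radeon (with its HBM2)
    is only powered up when the workload justifies it."""
    light_tasks = {"video_decode", "quicksync_encode", "edp_display"}
    if workload in light_tasks:
        return "intel_hd"        # Radeon + HBM2 remain switched off
    if on_battery and workload != "gaming":
        return "intel_hd"        # favour battery life on the go
    return "radeon_vega_m"       # fire up the big GPU

print(pick_gpu("video_decode", on_battery=True))   # intel_hd
print(pick_gpu("gaming", on_battery=False))        # radeon_vega_m
```

In a real system this decision lives in the graphics driver and firmware, and the power gating happens at the compute-unit level rather than per-workload, but the shape of the trade-off is the same.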

The Radeon graphics in this case offers power gating at the compute-unit level, allowing the system to adjust power as needed or as available. It drives up to six additional displays at 4K, on top of the three from the Intel HD graphics, for a total of nine outputs. The Radeon graphics supports DisplayPort 1.4 with HDR and HDMI 2.0b with HDR10, along with FreeSync/FreeSync 2. As a result, when the graphics output changes from the Intel HD graphics to the Radeon graphics, users will have access to FreeSync, as well as more displays than you can shake a stick at (if the device has all the outputs).

Users that want these new Intel with Radeon graphics chips in desktop-class systems might not find much use for the Intel HD graphics. But for anything mobile or power-related, and especially for anything multimedia-related, it makes sense to take advantage of the Intel iGPU.
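As an illustration of that last point, a transcoding tool such as ffmpeg lets you pick which vendor's hardware encoder to use on a machine where both GPUs are exposed: `h264_qsv` targets Intel's QuickSync, while `h264_amf` targets AMD's encoder. The file names are placeholders, and availability of each encoder depends on how your ffmpeg build was configured and on the installed drivers:

```shell
# Encode on the Intel iGPU via QuickSync -- the low-power path
ffmpeg -i input.mp4 -c:v h264_qsv -global_quality 23 out_qsv.mp4

# Encode on the Radeon via AMD's AMF -- wakes up the big GPU
ffmpeg -i input.mp4 -c:v h264_amf -quality balanced out_amf.mp4
```

On battery, the QuickSync path lets the Radeon and its HBM2 stay power-gated for the duration of the encode.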

67 Comments

  • MFinn3333 - Monday, January 8, 2018 - link

    Intel and AMD were forced to work together because they hated nVidia.

    This is a relationship of Spite.
  • itonamd - Monday, January 8, 2018 - link

and they both hate that Qualcomm is working on Windows 10
  • artk2219 - Wednesday, January 10, 2018 - link

Never underestimate the power of hatred for a mutual enemy. It worked for the Allies in World War II, at least until that nice little Cold War bit that came after :).
  • itonamd - Monday, January 8, 2018 - link

Good job, but still disappointed. Intel doesn't use the HBM2 as an L4 cache or share it with the processor graphics when the user wants to use the graphics card, according to ark.intel.com. And it is still 4 cores, not 6 cores like the i7-8700K.
  • Cooe - Monday, January 8, 2018 - link

It's a laptop chip first and foremost, and the best & latest Intel has in its mobile line is 4c/8t Kaby Lake for power reasons (and the max 100W power envelope here precludes 6c/12t Coffee Lake already, even if a hypothetical mobile CL part existed). Not to be rude, but your expectations were totally unreasonable considering the primary target market (thin & light gaming laptops & mobile workstations).
  • Bullwinkle-J-Moose - Monday, January 8, 2018 - link

    "It's a laptop chip first and foremost" ???
    -----------------------------------------------
It may have been presented that way initially, but there were hints of other products from the very beginning

Once the process is optimized over the next few years, we may start seeing some very capable 4K TVs without the need for Thunderbolt graphics cards

Now, about that latency problem.......
What's new for gaming TVs at CES?
  • Kevin G - Monday, January 8, 2018 - link

6c/12t would have been perfectly possible with Vega M under 100W. The catch is that the CPU and GPU wouldn't coexist well under full load. The end result would be a base clock lower than what Intel would have liked on both parts for that fully loaded scenario. Though under average usage (including gaming, where 4c/8t is enough), turbo would kick in and everything would be OK.

    The more likely scenario is that Intel simply didn't have enough time in development of this product to switch Kaby Lake for Coffer Lake in time and get this validated. Remember that Coffee Lake was added to the road map when Cannon Lake desktop chips were removed.
  • Bullwinkle-J-Moose - Monday, January 8, 2018 - link

    You had it right the first time

    Coffer Lake holds all the cash
  • Hurr Durr - Monday, January 8, 2018 - link

Come on, this particular cartel is quite obvious.
  • Strunf - Monday, January 8, 2018 - link

This all proves that the x86 wars are a thing of the past, and NVIDIA is pushing these two into a corner...
