For long-time AnandTech readers, Jim Keller is a name many are familiar with. The prolific microarchitectural engineer has been involved in a number of high-profile CPU & SoC projects over the years, including AMD’s K8 and Zen CPUs and Apple’s early A-series SoCs. Now, after a stint over at Tesla for the past couple of years, Intel has announced that they have hired Keller to lead their silicon engineering efforts.

After rumors on the matter overnight, in a press release that went out this morning, Intel confirmed that they have hired Jim Keller as a Senior Vice President. There, Keller will be heading up the 800lb gorilla’s silicon engineering group, with an emphasis on SoC development and integration. Beyond this, Intel’s press release is somewhat cryptic – especially as they tend not to be very forward about future processor developments. But it’s interesting to note that in a prepared statement included with the press release, Dr. Murthy Renduchintala – Intel’s Chief Engineering Officer – said that the company has “embarked on exciting initiatives to fundamentally change the way we build the silicon as we enter the world of heterogeneous process and architectures,” which may be seen as a hint of Intel’s future direction.

What is known for sure is that for most of the last decade, Keller’s engineering focus has been on low-power hardware. This includes not only his most recent stint at Tesla working on low voltage hardware, but also his time at Apple and P.A. Semi developing Apple’s mobile SoCs; even AMD’s Zen architecture is arguably a case of creating an efficient, low-power architecture that can also scale up to server CPU needs. So Keller’s experience would mesh well with any plans Intel has for developing low-power/high-efficiency hardware. Especially as even if Intel gets its fab development program fully back on track, there’s little reason to believe they’re going to be able to duplicate the manufacturing-derived performance gains they’ve reaped over the past decade.

As for what specific impact Keller might have on Intel’s efforts, that remains to be seen. Keller’s credentials are second to none – he’s overseen a number of pivotal products – but it bears mentioning that modern processor engineering teams are massive groups working on development cycles that span nearly half a decade. A single rock star engineer may or may not be able to greatly influence an architecture, but at the same time I have to imagine that Intel has tapped Keller more for his leadership experience at this point. Especially as a company the size of Intel already has a number of good engineers at their disposal, and unlike AMD during Keller’s second run there, Intel isn’t recovering from a period of underfunding or trying to catch up to a market leader. In other words, I don’t expect that Intel is planning on a moment of Zen for Keller and his team.


One of Jim Keller's Many Children: AMD's Raven Ridge APU

Though with his shift to Intel, it’s interesting to note that Jim Keller has completed a de facto grand tour of the high performance consumer CPU world. In the last decade he’s worked for Apple, AMD, and now Intel, the three firms making the kind of modern ultra-wide, high-IPC CPU cores that we see topping our performance charts. Suffice it to say, there are very few engineers of this caliber whom these kinds of companies will so openly court and/or attempt to pull away from the competition.

For those keeping count, this also marks the second high-profile architect from AMD to end up at Intel in the last 6 months. Towards the end of last year Intel picked up Raja Koduri to serve as their chief architect, heading up their discrete GPU development efforts, and now Jim Keller is joining in a similar capacity (and with an identical SVP title) for Intel’s silicon engineering. Coincidentally, both Koduri and Keller also worked at Apple for a time before moving to AMD, so while they haven’t been on identical paths – or even working on the same products – Keller’s move to Intel isn’t wholly surprising considering the two never seem to be apart for too long. So it will be exciting to see what Intel does with their engineering acquisitions over the coming years.

Source: Intel

Comments

  • Kevin G - Friday, April 27, 2018 - link

    There is still a cost here even with the widespread use of C and libraries. In fact, various libraries do leverage assembler for performance reasons. Drivers often touch assembler a bit too. Compilers need to be written. Various interpreted languages need to be ported and validated. And even if a project is entirely C, it'll need to be tested thoroughly before it is production ready.
  • peevee - Monday, April 30, 2018 - link

    Thankfully, supporting a new hardware platform got much easier with LLVM.
  • FunBunny2 - Friday, April 27, 2018 - link

    "This is great of course because it means you can run 20 year old software on your modern CPU."

    there are at least millions, if not billions, of lines of COBOL and C/C++ code older than that running on *nix on X86 machines. like it or not, X86 is the 360 of today.
  • peevee - Friday, April 27, 2018 - link

    "The only thing that makes x86 (including x86-64) feel outdated is the insistence on carrying so much legacy going forward. "

    Unfortunately, that is not it. That might have been the case 20 years ago. We are way past it. The legacy 386 instruction-set decoder takes like 0.0000001% of the silicon now. Things got WAY worse than that due to the disconnect between feature scale and architecture. Simplifying, only big-data processing (like video etc.) matters anymore, and that data is stored millions of times farther away than the registers, meaning it takes so much more time to carry the signal and so much more energy for overcoming both resistance and RF noise in the long, long lines.
  • Kevin G - Friday, April 27, 2018 - link

    There are still 8-bit and 16-bit modes supported in hardware. Various x86 prefixes have been retired only to come back in x86-64 under a different mode (looking at you, VEX). The x87 FPU has been deprecated, but you still need those registers in hardware even if most of those instructions are likely microcoded into SSE/AVX operations. That also highlights a bit of a hidden cost for legacy: special-mode bypasses that otherwise wouldn't exist. Going back to x87, an 80-bit FPU operation could be microcoded to run on the more modern SSE/AVX hardware as two 64-bit operations for 128-bit precision, but additional hardware/microcode would be necessary to cast that 128-bit result down to 80 bits. This additional hardware can increase the difficulty of pipelining operations and/or make additional stages necessary.

    It is true that the amount of transistors for this support is consuming less and less die space by percentage. This is a natural result of putting increasingly more cache on-die vs. the amount of processing logic. The top of the line Xeon Platinum has 66.5 MB of L2 and L3 cache, which is ~3.2 billion transistors just in SRAM for the data, not including tags or controller logic: around half of the transistor count is just cache. The reason the extra transistors matter when talking about carrying legacy is how often a cache line is accessed vs. how often the transistors in the instruction decoder are used. Removing legacy is about optimizing the most frequently used parts of the execution pipeline, not freeing up aggregate transistors to utilize elsewhere.
  • peevee - Friday, April 27, 2018 - link

    "as far as x86 and especially x64 being "outdated" that is laughable"

    You are clearly very ignorant.
    If you look at a modern computer (for example, the one in your pocket), no critical computation is performed on the CPU anymore.
    Modem? Special processor.
    Camera/stills? Special processor/ISP.
    Camera/video? Special processor.
    GPU? Special processor.
    Music? Special processor/DSP.
    Display control - special processor (not the same as GPU - it is for color space conversion etc).
    AI - special processor.
    Motion processing - special processor.
    Because the 1980s and earlier ideas and their implementation in ARM/Intel etc are GOOD FOR NOTHING these days.
    Von Neumann architecture was invented for the vacuum-tube computers of the 1940s, which filled one big room. If you scale a modern PC so the ALU would fill one big room, the memory DIMMs would be on the next continent, with a series of caches in the next city and the next state so they would not be so far away. It just does not fit anymore.
  • FunBunny2 - Friday, April 27, 2018 - link

    "Von Neumann architecture was invented for lamp computers of 1940s"

    makes not an iota of difference what hardware existed back then. the architecture is the result of maths logic. the fact that processor tech has returned to single threaded performance is the tell. there just aren't many user space embarrassingly parallel problems.
  • peevee - Monday, April 30, 2018 - link

    "makes not an iota of difference what hardware existed back then"

    It does, because it influenced their thinking.
  • T2k - Friday, April 27, 2018 - link

    "Both x86 (more correctly, x64) and any RISC are outdated. Both are concepts based in the understanding from the 80s (and many ideas from the 70s or earlier)
    (...)
    starting from the main idea from the 40s of separation of CPU and memory which hit physical speed of light limitations"

    Are you high?
  • peevee - Friday, April 27, 2018 - link

    Nope. I am enlightened. :)
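
As a rough check on Kevin G's cache figure above: the ~3.2 billion transistor estimate falls out of a simple back-of-envelope calculation if you assume a standard 6-transistor (6T) SRAM cell and count only the data bits, with no tags, ECC, or control logic. A minimal sketch:

```c
#include <stdio.h>

int main(void) {
    /* 66.5 MB of combined L2 + L3 cache, taken as decimal megabytes */
    const double cache_bytes = 66.5e6;
    const double bits = cache_bytes * 8.0;
    /* classic 6T SRAM cell: six transistors per stored bit */
    const double transistors = bits * 6.0;
    printf("~%.2f billion transistors in the cache data arrays\n",
           transistors / 1e9);  /* prints ~3.19 billion */
    return 0;
}
```

Counting in binary mebibytes instead nudges the total closer to ~3.35 billion, so the figure quoted in the comment is in the right ballpark either way.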
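
On the x87 point in the same comment, the 80-bit extended-precision format is not just an internal legacy detail; it remains visible to ordinary software. Here is a minimal illustration, my example rather than anything from the comment, assuming GCC or Clang on x86-64 Linux, where long double maps to the x87 format:

```c
#include <stdio.h>
#include <float.h>

int main(void) {
    /* double is the IEEE 754 64-bit format; long double with GCC/Clang on
       x86-64 Linux is the x87 80-bit extended format, padded to 16 bytes
       when stored in memory */
    printf("double:      %2d mantissa bits, %2zu bytes\n",
           DBL_MANT_DIG, sizeof(double));
    printf("long double: %2d mantissa bits, %2zu bytes\n",
           LDBL_MANT_DIG, sizeof(long double));
    return 0;
}
```

As long as compiled code can request that type, the hardware or microcode has to deliver correctly rounded 80-bit results, which is the conversion cost the comment describes.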
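
Finally, to put rough numbers on peevee's point about data living far from the execution units, the sketch below compares typical access latencies in units of core clock cycles. The latency values are ballpark assumptions of mine, not figures from the comments, and they vary by part and platform:

```c
#include <stdio.h>

int main(void) {
    const double cycle_ns = 0.25;  /* one core clock at ~4 GHz */
    const double l1_ns    = 1.0;   /* typical L1 data cache hit */
    const double l3_ns    = 12.0;  /* typical L3 hit */
    const double dram_ns  = 90.0;  /* typical DRAM access */
    printf("L1 hit : ~%3.0f cycles\n", l1_ns   / cycle_ns);
    printf("L3 hit : ~%3.0f cycles\n", l3_ns   / cycle_ns);
    printf("DRAM   : ~%3.0f cycles\n", dram_ns / cycle_ns);
    return 0;
}
```

A DRAM access costing hundreds of cycles is the gap the "memory on the next continent" analogy is gesturing at.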
