For the past eighteen months, Intel has paraded its new ‘Lakefield’ processor design around the press and the public as a paragon of processor innovation. Inside, Intel pairs one of its fast, high peak performance cores with four of its power-efficient cores, and uses novel packaging technology to build the processor in the smallest footprint it can. Lakefield is a sign that Intel is looking into new processor paradigms, such as hybrid processors with different types of cores, as well as new stacking and packaging technologies, to help drive the next wave of computing. With this article, we will tell you all you need to know about Lakefield.

Part Smartphone, Part PC

When designing a processor, there are over a thousand design choices to be made. The processor can be built to tackle everything, or it can be aimed at a niche. For high performance computing, there might be a need for a high power, high performance design where cooling is of no concern – compare that to a processor aimed at a portable device, which needs to be energy efficient and offer considerable battery life for a fixed battery size. There is also the cost of designing the product, how much to invest in research and development, how many units are expected to sell, and thus how many should be produced and what size the product should be. The price range of the target market can be a huge factor, even before putting pen to paper.


The New Samsung Galaxy Book S

This is all why we have big multi-core processors with lots of compute acceleration in servers, more moderate power and core counts in home machines that focus on single core performance and user experience, and why smartphone processors have to physically fit into a small design and offer exceptional battery life.

Laptop processors have always sat somewhere between the PC and smartphone markets. Laptop users, especially professionals and gamers, need the high performance that a desktop platform can provide, but road warriors need something superbly efficient in power consumption, especially at idle, to provide the sort of all-day battery life they would get from a good smartphone. Not only this, but the more energy efficient and the smaller the footprint of the processor and its features, the thinner and lighter the laptop can be, offering a premium design experience.

As a result, we have seen the ultra-premium notebook market converge from two directions.

From the top, we have AMD and Intel, using their laptop processor designs in smaller and smaller power envelopes to offer thin and light devices with exceptional performance and yet retain the energy efficiency required for battery life. For the most premium designs, we see 12-15+ hours of laptop battery life, as well as very capable gaming.

From the bottom, we have Qualcomm, building out its high-performance smartphone processor line into larger power envelopes, in order to offer desktop-class performance with smartphone-class connectivity and battery life. With designs using Qualcomm’s processors, a user can very easily expect 24+ hours of battery life, and with regular office use, only charge the system once every couple of days. Qualcomm still has an additional barrier in software, which it is working to overcome.

Both of these directions converge on something in the middle – something that can offer desktop-class performance, 24hr+ battery life, capable gaming, but also a full range of software support. Rather than continue trying to bring its processors down to the level required, Intel has decided to flip its traditional processor paradigm upside down and build a smartphone-class processor for this market, matching Qualcomm’s bottom-up approach while also looking into novel manufacturing techniques in order to do so.

This processor design is called ‘Lakefield’.

Lakefield at the Core, and the Atom

For the past two decades, Intel has had two different types of x86 CPU design.

The Big ‘Core’ CPU

Intel calls its high power/high performance x86 design the ‘Core’ family. This can make things very confusing, as we have to differentiate between the general concept of a processor core and a ‘Core’-based processor core.

Over the years, Core-based processor cores have been designed for power envelopes from low-power laptops all the way up to the beefiest of servers. The Core line of processor cores implements more complex logic in order to provide additional acceleration, at the expense of physical size and power.

The Small ‘Atom’ CPU

The second type of x86 design from Intel is its more energy efficient implementation, called ‘Atom’. With the Atom cores, Intel simplifies the design in order to maximise efficiency for a given power or a given performance. This makes the design smaller and cheaper to manufacture, but it has a lower peak performance than the Core design. We typically see Atom designs in power-restricted scenarios where performance is not critical, such as IoT, or low-cost laptop designs.

Where Core Meets Atom

Normally we characterise a processor core design in terms of power and performance. Because the designs vary, each one works best at different points, for a given power or for a given performance. In the case of Intel’s latest generation of Core and Atom hardware, it looks something like this, if we compare one thread against one thread:


Modified from Intel’s Slides

From this graph, which plots performance on the horizontal axis and power on the vertical axis, there is a crossover point where each design makes the most sense. When the demand for performance is below 58%, the Atom design is the most power efficient, but above 58% a Core design is preferred.
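To make the crossover concrete, here is a minimal Python sketch of that decision. The power curves and the exact coefficients are illustrative placeholders (Intel has not published the underlying data), tuned only so that the two lines cross near the 58% point on the slide.

```python
# A minimal, illustrative sketch of the Atom/Core crossover decision.
# The power curves are made-up placeholders, chosen only so that they
# cross near the 58% mark shown in Intel's slide; they are not real data.

def atom_power(perf):
    """Hypothetical power draw of the small Atom core at a normalised
    performance demand (0.0 to 1.0 of the big core's peak)."""
    return 0.5 + 6.0 * perf ** 2      # low floor, ramps up steeply

def core_power(perf):
    """Hypothetical power draw of the big Core at the same demand."""
    return 2.0 + 1.54 * perf ** 2     # higher floor, scales more gently

def preferred_design(perf_demand):
    """Pick whichever design delivers the demanded performance for less power."""
    return "Atom" if atom_power(perf_demand) < core_power(perf_demand) else "Core"

if __name__ == "__main__":
    for demand in (0.20, 0.40, 0.58, 0.70, 0.90):
        print(f"{demand:.0%} of peak performance -> {preferred_design(demand)}")
```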

Homogeneous CPUs (all the same) vs
Heterogeneous CPUs (mix of different)

Now in modern processors, especially in laptops, desktops, and servers, we only experience one type of core design. We either have all Core or all Atom, and the performance is designed to scale within those homogeneous designs. It becomes a simple curve to navigate, and when more parallel performance is required, more of those types of cores are fired up to serve the needs of the end user. This has been the case for these markets for the last 30-50 years.

The smartphone space, for the last decade, has been taking a different approach. Within the smartphone world, there are core designs listed as ‘big’ and core designs listed as ‘little’, in the same way that Intel has Core and Atom designs.

These smartphone processors combine numbers of big cores with numbers of small cores, such that there is an intrinsic benefit to running background tasks on the little cores, where efficiency is important, and user experience related elements on the big cores, where latency and performance are important.

The complexity of such a heterogeneous smartphone-like design has many layers. By default most items will start on the little cores, and it is up to either the processor or the operating system to detect when a user-experience moment needs the higher performance of a big core. This can be tricky to identify.

Then there is also the matter of when a workload has to actually move from one type of core to the other, typically in response to a request for a specific level of performance – if the core designs are significantly different, the demands on the memory subsystem can increase, and it is up to the operating system to ensure everything works as it should. There is also an additional element of security, which is a larger topic outside the scope of this article.

Ultimately, building a design with both big cores and little cores comes down a lot to what we call the scheduler. This is the part of the operating system that manages where different background processes, user experience events, or things like video editing and games are placed. The smartphone market has been working on different types of schedulers, and optimizing those designs, for over a decade as mentioned. For Intel and AMD, the push for heterogeneous schedulers has been a slow process by comparison, and it becomes very much a chicken and egg problem – there is no need for an optimized heterogeneous scheduler if there is never a heterogeneous processor in the market.
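As a rough illustration of the kind of policy such a scheduler has to implement, here is a deliberately simplified Python sketch. The task classes, the 1+4 core labels, and the thresholds are all hypothetical; a real scheduler in Windows or Linux also tracks utilisation history, thermals, and core parking.

```python
from dataclasses import dataclass

# Hypothetical labels for a Lakefield-style 1+4 topology
BIG_CORES = ["big0"]
LITTLE_CORES = ["little0", "little1", "little2", "little3"]

@dataclass
class Task:
    name: str
    interactive: bool    # tied to a user-experience event (touch, scroll, app launch)?
    utilisation: float   # recent demand, as a fraction of a little core's capacity

def place(task: Task) -> str:
    """Toy placement policy: start everything on the little cores, and escalate
    to the big core only for interactive work or work that has outgrown a little core."""
    if task.interactive or task.utilisation > 0.8:
        return BIG_CORES[0]
    # Spread background work across the little cluster (naive round-robin by name hash;
    # a real scheduler balances on measured load instead).
    return LITTLE_CORES[hash(task.name) % len(LITTLE_CORES)]

for t in (Task("mail-sync", False, 0.10),
          Task("browser-scroll", True, 0.40),
          Task("video-encode", False, 0.95)):
    print(f"{t.name:>15} -> {place(t)}")
```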

So why bring all this up?

Lakefield is the first x86 heterogeneous processor.

In its marketing, Intel calls this a ‘hybrid’ CPU, and we will start to see logos identifying this as such. At the heart of its design, Lakefield combines one of the big Core designs with a cluster of four smaller Atom designs, all into one single piece of silicon. In normal x86 processor talk, this is essentially a ‘penta-core’ design, which will commonly be referred to as a 1+4 implementation (for one big core and four small cores).

Intel’s goal with Lakefield is to combine the benefits of the power efficient Atom core with the better user-experience elements provided by the more power hungry but better peak performing big Core. As a result, it sits between Intel’s traditional homogeneous designs, which only contain one type of x86 core – somewhere above the ‘all Atom’ 0+4 design and somewhere below the ‘all Core’ 4+0 design (in actual fact, it’s closer to 0+4).

Based on our conversations with Intel, and the small demonstrations we have seen so far, the best way to think of the new Lakefield processor is as one of the older quad-core Atom processors, but with the single-core performance of a big Core on tap. The cluster of four smaller Atom CPUs will take care of the heavy lifting and parallel performance requests, because there are four of them, while the big Core will respond when the user loads an application, touches the screen, or scrolls a web browser.
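One way to picture that division of labour is through CPU affinity, which lets software steer work onto specific cores. The sketch below assumes a Linux-style affinity API and a purely hypothetical numbering, with logical CPU 0 as the big core and CPUs 1-4 as the Atom cluster; on a shipping Lakefield device the Windows scheduler makes these decisions automatically.

```python
import os

BIG_CORE = {0}               # hypothetical numbering: logical CPU 0 = the big core
ATOM_CLUSTER = {1, 2, 3, 4}  # hypothetical numbering: CPUs 1-4 = the small Atom cores

def run_latency_sensitive(fn, *args):
    """Pin the current process to the big core for a bursty, user-facing task
    (os.sched_setaffinity is Linux-only; shown purely to illustrate the 1+4 split)."""
    previous = os.sched_getaffinity(0)
    os.sched_setaffinity(0, BIG_CORE)
    try:
        return fn(*args)
    finally:
        os.sched_setaffinity(0, previous)

def run_background_batch(fn, items):
    """Let throughput-oriented work fan out across the four Atom cores."""
    previous = os.sched_getaffinity(0)
    os.sched_setaffinity(0, ATOM_CLUSTER)
    try:
        return [fn(item) for item in items]
    finally:
        os.sched_setaffinity(0, previous)
```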

Being a new form of x86 hybrid CPU is not the only thing that Lakefield brings to the table.

Now, just for clarification: we have already had some experience with these sorts of hybrid CPU designs on operating systems like Windows. Qualcomm’s Windows on Snapdragon laptops, like the Lenovo Yoga, use a 4+4 design with the Snapdragon smartphone chips, and Qualcomm has had to work extensively with Microsoft to develop an appropriate scheduler that can manage workloads between the different CPU designs.

The main difference between what Qualcomm has done and what Intel is doing with Lakefield is in software support – Qualcomm processors run ‘Arm’ instructions, while Intel processors run ‘x86’ instructions. Most Windows software is built for x86 instructions, which has limited Qualcomm’s effectiveness in penetrating the traditional laptop market. Qualcomm’s design does allow for ‘x86 translation’, however its scope is limited and it carries a performance penalty, although it remains a work in progress. The point is that while we have not previously had a hybrid CPU scheduler for Windows on an x86 system, Microsoft has already put in a lot of work to date through its collaboration with Qualcomm.

Visualising Heterogeneous CPU Designs


Not to any sort of scale

Here are some examples of mobile processors, from Intel and Qualcomm, with the cores in green. On the left is Intel's own Ice Lake processor, with four big cores. In the middle is Intel's Lakefield, which has two stacked silicon dies, but it's the top one that has one big core and four small ones. On the right is Qualcomm's Snapdragon 8cx, currently used in Windows on Snapdragon devices, which uses four performance cores and four efficiency cores, but also integrates a smartphone modem onboard.

In this article, over the following pages, we'll be looking at Intel's new Lakefield processor in detail, covering the new multi-core design, discussing chiplets and Intel's new die-to-die bonding technology called Foveros, the implications of such a design on laptop size (as well as looking at the publicly disclosed Lakefield laptops coming to market), die shots, supposed performance numbers, thermal innovations, and the future for Lakefield. Data for this article has come from our own research as well as interviews with Intel's technical personnel and Intel's presentations on Lakefield at events such as HotChips, Architecture Day, CES, IEDM, and ISSCC. Some information is dissected with helpful input from David Schor of Wikichip. We also put some of Intel’s innovations in the context of other semiconductor companies, some of which may be competitors.

Comments
  • returnzer0 - Friday, July 3, 2020 - link

    So no, they did not, in fact, read the same article.
  • s.yu - Monday, July 6, 2020 - link

    Mystery solved!
  • vanilla_gorilla - Friday, July 3, 2020 - link

    https://www.anandtech.com/show/15877/intel-hybrid-...

    "At the top is the single Sunny Cove core, also present in Ice Lake. Intel has stated that it has physically removed the AVX-512 part of the silicon, however we can still see it in the die shot. This is despite the fact that it can’t be used in this design due to one of the main limitations of a hybrid CPU. We’ll cover that more in a later topic."

    It was NOT physically removed but it cannot be used so it doesn't really matter. In practice this will have no AVX-512.
  • dotjaz - Friday, July 3, 2020 - link

    Also "However, all modern software assumes a homogeneous processor", that's why they have to support exactly the same ISA extensions. I didn't realise Windows is the only modern software in existence.
  • Meteor2 - Friday, July 3, 2020 - link

    Why so rude, dotjaz?
  • dotjaz - Saturday, July 4, 2020 - link

    So which part is rude? Is this rude asking you what's rude? Sorry your feelings got hurt. There, happy now?
  • jospoortvliet - Sunday, July 5, 2020 - link

    Linux also expects it. Modern enough?
  • jeremyshaw - Thursday, July 2, 2020 - link

    Good. This action ensures this segment of products will be easier to emulate on arm, helping to tear these products away from Intel's grasp.
  • Kangal - Sunday, July 5, 2020 - link

    To be honest, this is a great innovation.
    It's just the execution is quite lacking, and on top of that, it's a couple years too late.

    Just imagine a SoC such as:
    3x Big processor (Intel Core M), eg/ Core i7-8500Y
    5x Small processor (Intel Atom), eg/ Atom x7-Z8750

    Dynamic Scaling:
    (Idle) 4x Small Cores run at 500MHz at Idle
    (Very-low power) 4x Small Cores clock starting at 1.0GHz
    (Low-power use) 5x Small Cores clock upto 2.5GHz
    (Medium power) 3x Big Cores starting at 1.5GHz, 5x Small Cores upto 2.5GHz
    (Regular power) 3x Big Cores at 2.0GHz, 5x Small Cores at 2.0GHz
    (High-power use) 3x Big Cores at 3.0GHz, 5x Small Cores at 2.0GHz
    (Very-high power) 3x Big Cores at 4.0GHz, 5x Small Cores at 2.5GHz
    (Max-power use) 1x Big Cores at 5.0GHz, 2x Big Cores at 4.0GHz, 5x Small Cores at 2.5GHz
  • Kangal - Sunday, July 5, 2020 - link

    Now imagine all of this, competing against AMD.
    Their 12nm node is fairly competitive against Intel's 14nm. And their Zen+ architecture is somewhat competitive against Intel's Skylake architecture. So comparing the above Hybrid Processor, to a 4c/8t (eg/ Ryzen-3780U). Well that's a no contest victory for Intel. And AMD would struggle to fit those technologies into a 8-core laptop processor, so there would be no threat from above.

    Once AMD steps up to either Zen2 architecture, or 7nm node, or both!...
    ....that's when things get heated. Since in the 15W / Ultrabook market, the above setup by Intel would secure a slim victory against similar 4c/8t AMD processor. But when you step up to the 25W / Laptop market, then AMD will pull ahead with their 8c/16t processor. However at least in this scenario, Intel has a good showing of their competitiveness and capabilities. That works upto 2021, but after that, Intel will have to make noticeable performance improvements to both Big/Small Core architectures, AND, they will have to make substantial efficiency improvements on the lithography side (maybe execute on their 8nm nodes, versus TSMC's 5nm).

    First question, why use examples of Cherry Trail and Amber Lake?
    Well, they're both on Intel's (pretty good) 14nm node. Also this is the most efficient "Small Core" Atom architecture that Intel has. Later produced Intel Pentium/Celeron/Atom processors come from the same family, however, they're designed for higher energy consumption. Whereas the "Big Core" stated above is a Core M processor (now rebranded as a Core i7-Y), and it is the latest and best they have when it comes to performance whilst maintaining efficiency.

    Why the 3/5 Split you may ask?
    Well, the most useful is the first/main thread. Followed closely by the second thread as most code has evolved for Dual-Cores in the past 20 years. And somewhat important is the third core, as we've also had an evolution to Quad-Cores in the past 10 years. However, most code hasn't made the full transition from single to dual threads, the same way that dual threads haven't translated well to quad threads. So instead of 2+6 split, which will have some performance drops on Quad thread code, it's better to go for 3+5. So you may ask, then why not just go for an even 4+4 split? Well, most of those quad threads don't utilise the 4th core very well, so we can make do by relegating that to a Small Core instead. This saves us some efficiency, which is what we want to achieve with this concept in the first place. The least energy using split would be 0+8. The most performant split would be 8+0. So this 3+5 split is basically the best of both worlds, since you will get 90% of the single-threaded performance, 70% of the multi-threaded performance, and 50% of the energy expenditure characteristics. So it's not perfect, but it's the closest you can get there... until the code evolves further. And we've started transitioning codes running for 8-core processors only around 2015-2017, so there's a good chance we aren't going to see the evolution tipping point until around 2025.
