For the past eighteen months, Intel has paraded its new ‘Lakefield’ processor design around the press and the public as a paragon of processor innovation. Inside, Intel pairs one of its fast peak-performance cores with four of its lower-power, efficiency-focused cores, and uses novel technology to build the processor in the smallest footprint it can. The new Lakefield design is a sign that Intel is looking into new processor paradigms, such as hybrid processors with different types of cores, as well as different stacking and packaging technologies to help drive the next wave of computing. With this article, we will tell you all you need to know about Lakefield.

Part Smartphone, Part PC

When designing a processor, there are over a thousand design choices to be made. The processor can be built to tackle everything, or it can be aimed at a niche. For high performance computing, there might be a need for a high-power, high-performance design where cooling is of no consideration – compare that to a processor aimed at a portable device, which needs to be energy efficient and offer considerable battery life for a fixed battery size. There is also the cost of designing the product, how much to invest into research and development, how many units are expected to sell, and thus how many should be produced and what size the product should be. The price range of the target market can be a huge factor, even before putting pen to paper.


The New Samsung Galaxy Book S

This is all why we have big multi-core processors with lots of compute acceleration in servers, more moderate power and core counts in home machines that focus on single core performance and user experience, and why smartphone processors have to physically fit into a small design and offer exceptional battery life.

Laptop processors have always fit somewhere in the middle of the PC and smartphone markets. Laptop users, especially professionals and gamers, need the high performance that a desktop platform can provide, but road warriors need something that is superbly efficient in power consumption, especially at idle, to provide the sort of all-day battery life they would get from a good smartphone. Not only this, but the more energy efficient the processor and the smaller its footprint, the thinner and lighter the laptop can be, offering a premium design experience.

As a result, we have seen the ultra-premium notebook market converge from two directions.

From the top, we have AMD and Intel, using their laptop processor designs in smaller and smaller power envelopes to offer thin and light devices with exceptional performance and yet retain the energy efficiency required for battery life. For the most premium designs, we see 12-15+ hours of laptop battery life, as well as very capable gaming.

From the bottom, we have Qualcomm, building out its high-performance smartphone processor line into larger power envelopes, in order to offer desktop-class performance with smartphone-class connectivity and battery life. With the designs using Qualcomm’s processors, a user can very easily expect 24+ hours of battery life, and with regular office use, only charge the system once every couple of days. Qualcomm still has an additional barrier in software, which it is working to overcome.

Both of these directions converge on something in the middle – something that can offer desktop-class performance, 24hr+ battery life, capable gaming, but also a full range of software support. Rather than continue trying to bring its processors down to the power levels this market requires, Intel has decided to flip its traditional processor paradigm upside down, and build a smartphone-class processor for this market, matching Qualcomm’s bottom-up approach while also looking into novel manufacturing techniques in order to do so.

This processor design is called ‘Lakefield’.

Lakefield at the Core, and the Atom

For the past two decades, Intel has had two different types of x86 CPU design.

The Big ‘Core’ CPU

Intel calls its high-power/high-performance x86 design the ‘Core’ family. This can make it very confusing to differentiate between the general concept of a processor core and a ‘Core’-based processor core.

Over the years, Core-based processor cores have been designed for power envelopes from low-power laptops all the way up to the beefiest of servers. The Core line of processor cores implements more complex logic in order to provide additional acceleration, at the expense of physical size and power.

The Small ‘Atom’ CPU

The second type of x86 design from Intel is its more energy efficient implementation, called ‘Atom’. With the Atom cores, Intel simplifies the design in order to maximise efficiency for a given power or a given performance. This makes the design smaller and cheaper to manufacture, but with lower peak performance than the Core design. We typically see Atom designs in power-restricted scenarios where performance is not critical, such as IoT, or low-cost laptop designs.

Where Core Meets Atom

Normally we characterise a processor core design in terms of its power and performance. Due to variations in design, different cores work best at different points for a given power or a given performance. In the case of Intel’s latest generation of Core and Atom hardware, it looks something like this, if we compare one thread against one thread:


Modified from Intel’s Slides

From this graph, which plots performance on the horizontal axis and power on the vertical axis, there is a crossover point where each design makes the best sense. When the demand for performance is below 58%, the Atom design is the most power efficient; above 58%, a Core design is preferred.
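To make that crossover concrete, here is a minimal sketch of how a power-aware dispatcher might use it. The power curves and the 'arbitrary units' are invented purely for illustration (they are not Intel's numbers), and are simply tuned so that the two curves cross near the 58% mark in the graph above.

```python
# A toy sketch of the crossover idea above -- these power curves are made up
# for illustration only and are not Intel's data. They are chosen so that the
# curves cross at roughly 58% of the big core's peak performance.

def power_atom(perf):
    """Hypothetical power draw of the small (Atom-class) core, in arbitrary units.
    perf is the requested performance as a fraction of the big core's peak."""
    return 0.3 + 3.0 * perf ** 2

def power_core(perf):
    """Hypothetical power draw of the big (Core-class) core, in arbitrary units."""
    return 1.0 + 0.9 * perf ** 2

def preferred_design(perf_demand):
    """Return whichever design burns less power for the requested performance."""
    return "Atom" if power_atom(perf_demand) < power_core(perf_demand) else "Core"

if __name__ == "__main__":
    for demand in (0.25, 0.50, 0.58, 0.75, 1.00):
        print(f"{demand:.0%} of peak -> {preferred_design(demand)}")
```

Below the crossover the toy model picks the Atom curve, above it the Core curve – the same shape of decision the real hardware and operating system have to make, just with far cruder inputs.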

Homogeneous CPUs (all the same) vs
Heterogeneous CPUs (mix of different)

Now in modern processors, especially in laptops, desktops, and servers, we only experience one type of core design. We either have all Core or all Atom, and the performance is designed to scale within those homogeneous designs. It becomes a simple curve to navigate, and when more parallel performance is required, more of those types of cores are fired up to serve the needs of the end user. This has been the case for these markets for the last 30-50 years.

The smartphone space, for the last decade, has been taking a different approach. Within the smartphone world, there are core designs listed as ‘big’ and core designs listed as ‘little’, in the same way that Intel has Core and Atom designs.

These smartphone processors combine a number of big cores with a number of small cores, such that there is an intrinsic benefit to running background tasks on the little cores, where efficiency is important, and user-experience-related elements on the big cores, where latency and performance are important.

The complexity of such a heterogeneous smartphone-like design has many layers. By default most items will start on the little cores, and it is up to either the processor or the operating system to identify when a user-experience moment needs the higher-performance mode. This can be tricky to identify.

Then there is also the matter of when a workload actually has to move from one type of core to the other, typically in response to a request for a specific level of performance – if the core designs differ significantly, the demands on the memory can increase, and it is up to the operating system to ensure everything works as it should. There is also an additional element of security, which is a larger topic outside the scope of this article.

Ultimately, building a design with both big cores and little cores comes down in large part to what we call the scheduler. This is the part of the operating system that manages where different background processes, user-experience events, or heavier tasks like video editing and games get arranged. The smartphone market has been working on different types of schedulers, and optimizing those designs, for over a decade as mentioned. For the land of Intel and AMD, the push for heterogeneous schedulers has been a slow process by comparison, and it becomes very much a chicken-and-egg problem – there is no need for an optimized heterogeneous scheduler if there is never a heterogeneous processor in the market.
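To give a flavour of the decision such a scheduler has to make, here is a deliberately simplified sketch. It is not how Windows' actual hybrid scheduler works; the 'foreground' flag, the utilisation figure, and the promotion threshold are all assumptions made for illustration. Tasks default to the little cores and only get promoted to the big core when they are tied to the user experience or keep a core busy for a sustained period.

```python
# A toy illustration of the scheduling decision described above -- this is not
# how Windows' hybrid scheduler actually works, just a sketch of the idea.
# Tasks start on the little (Atom-class) cores by default and are promoted to
# the big (Core-class) core when they are latency-sensitive or sustain high load.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    foreground: bool           # tied to a user-facing interaction right now?
    recent_utilisation: float  # fraction of a core used recently, 0.0 - 1.0

# Hypothetical promotion threshold, loosely based on the crossover point
# in the power/performance graph earlier in the article.
PROMOTE_THRESHOLD = 0.58

def assign_cluster(task: Task) -> str:
    """Pick a core cluster for a task (toy heuristic, not the real scheduler)."""
    if task.foreground or task.recent_utilisation > PROMOTE_THRESHOLD:
        return "big core"
    return "little cores"

tasks = [
    Task("browser scroll", foreground=True,  recent_utilisation=0.30),
    Task("mail sync",      foreground=False, recent_utilisation=0.10),
    Task("video export",   foreground=False, recent_utilisation=0.90),
]

for t in tasks:
    print(f"{t.name:15} -> {assign_cluster(t)}")
```

A real scheduler also has to weigh the cost of moving a workload between dissimilar cores, as described above, which is part of why these designs took so long to reach x86.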

So why bring all this up?

Lakefield is the first x86 heterogeneous processor.

In its marketing, Intel calls this a ‘hybrid’ CPU, and we will start to see logos identifying this as such. At the heart of its design, Lakefield combines one of the big Core designs with a cluster of four smaller Atom designs, all into one single piece of silicon. In normal x86 processor talk, this is essentially a ‘penta-core’ design, which will commonly be referred to as a 1+4 implementation (for one big core and four small cores).

Intel’s goal with Lakefield is to combine the benefits of the power-efficient Atom core with the better user-experience elements provided by the more power-hungry but higher peak performing big Core. As a result, it sits in between Intel’s traditional homogeneous designs, which only contain one type of x86 core – somewhere above the ‘all Atom’ 0+4 design and somewhere below the ‘all Core’ 4+0 design (in actual fact, it’s closer to 0+4).

Based on our conversations with Intel, and the small demonstrations we have seen so far, the best way to think of the new Lakefield processor is as something similar to one of the older quad-core Atom processors, but with the single-core performance of a big Core. The cluster of four smaller Atom CPUs will take care of the heavy lifting and parallel performance requests, because there are four of them, while the big Core will respond when the user loads an application, touches the screen, or scrolls a web browser.

Being a new form of x86 hybrid CPU is not the only thing that Lakefield brings to the table.

Now, just for clarification, we have already had some experience with these sorts of hybrid CPU designs on operating systems like Windows. Qualcomm’s Windows on Snapdragon laptops, like the Lenovo Yoga, use a 4+4 design with Snapdragon smartphone chips, and Qualcomm has had to work extensively with Microsoft to develop an appropriate scheduler that can manage workloads between the different CPU designs.

The main difference between what Qualcomm has done and what Intel is doing with Lakefield is in software support – Qualcomm processors run ‘Arm’ instructions, while Intel processors run ‘x86’ instructions. Most Windows software is built for x86 instructions, which has limited Qualcomm’s effectiveness in penetrating the traditional laptop market. Qualcomm’s design does allow for ‘x86 translation’, however its scope is limited and it carries a performance penalty, though it remains a work in progress. The point is that while we have not previously had a hybrid CPU scheduler for Windows on an x86 system, Microsoft has already put in a lot of work to date while working with Qualcomm.

Visualising Heterogeneous CPU Designs


Not to any sort of scale

Here are some examples of mobile processors, from Intel and Qualcomm, with the cores in green. On the left is Intel's own Ice Lake processor, with four big cores. In the middle is Intel's Lakefield, which has two stacked silicon dies, but it's the top one that has one big core and four small ones. On the right is Qualcomm's Snapdragon 8cx, currently used in Windows on Snapdragon devices, which uses four performance cores and four efficiency cores, but also integrates a smartphone modem onboard.

In this article, over the following pages, we'll be looking at Intel's new Lakefield processor in detail, covering the new multi-core design, discussing chiplets and Intel's new die-to-die bonding technology called Foveros, the implications of such a design on laptop size (as well as looking at the publicly disclosed Lakefield laptops coming to market), die shots, supposed performance numbers, thermal innovations, and the future for Lakefield. Data for this article has come from our research as well as interviews with Intel's technical personnel and Intel's own presentations on Lakefield at events such as Hot Chips, Architecture Day, CES, IEDM, and ISSCC. Some information is dissected with helpful input from David Schor of WikiChip. We also put some of Intel’s innovations into context against other semiconductor companies, some of which may be competitors.

A Stacked CPU: Intel’s Foveros
Comments

  • Spunjji - Monday, July 6, 2020 - link

    That's the exact impression I got, too. They seem to be jumping around waving "look, we can do this too" when really it would have made far more practical sense *not to do it*.
  • watzupken - Sunday, July 5, 2020 - link

    Conceptually, this is a good way to lower power requirements to make Intel more competitive against ARM SOCs. However I agree that this is indeed part smartphone, and part PC. Which unfortunately also means it may not be good for either one. From a smartphone perspective, while this may be a low power chip, I am still not convinced that an x86 chip can be as efficient as a high end ARM chip. On the PC/laptop space, I feel it will be more economical to just go for the pure Tremont based chips which should offer sufficient performance for light chores, and still offer good battery life. In my opinion, this is going to be a very niche chip and likely won't be cheap either.
  • Farfolomew - Monday, July 6, 2020 - link

    The engineers for this got screwed over by the marketing teams. There is no way this is supposed to compete with higher-end Core chips. It's supposed to be the New Atom replacement: It's supposed to fix everything that was wrong with previous 0+4 Atom CPUs: sluggish OS response and, to an extent, slow single-threaded perf. And also a boost in GPU capabilities.

    I hope Intel doesn't abandon this, even though, as Ian said, this first gen is going to get slammed
  • serendip - Tuesday, July 7, 2020 - link

    I think Intel should have gone the other way by bringing Sunny Cove idle power down to ARM levels, instead of making this Frankenstein's monster of a chip. ARM licensees all use big cores to speed up UI threads and OS response but Intel seems to be using little Tremont Atom cores for everything. An upgrade to Atom wouldn't have saved Intel's position in mobile, not when ARM big cores have higher perf/watt.
  • PandaBear - Monday, July 6, 2020 - link

    Did I see $2499 for the Lenovo? Holy smoke, it is going to fail with this kind of processor. I think Intel is doomed with this being done on 10nm.
  • hanselltc - Monday, July 6, 2020 - link

    How does Comet lake fit into that power/performance graph?
  • qwertymac93 - Monday, July 6, 2020 - link

    Those performance numbers have me shaking my head. Currently, the chip is acting more like a "1 OR 4" core, not a "1 AND 4" core design. I can't help but wonder if two "big" cores would have been both faster and more power efficient... Clearly this product is suffering from first-gen-itus. 2021 can't come soon enough for Intel.
  • MS - Tuesday, July 7, 2020 - link

    Another science project by someone who has a stake in the Atom design. Some things are so bad, it's impossible to kill them. Avoton, Covington, they were all terrible products and all you need to do is look at the performance/power graph to see that there isn't even a net power saving. It's like putting a Pinto engine into a Mustang and expecting better gas mileage. What are they thinking? Or are they? The only thing they missed is piggy backing a Larrabee.
  • name99 - Tuesday, July 7, 2020 - link

    "here’s what Intel did, with each connection operating at 500 mega-transfers per second. The key point on this slide is the power: 0.2 picojoules of energy consumed per bit transferred."

    It's worth comparing this to the competition. TSMC has LIPINCON which is not exactly identical (I don't believe TSMC has publicly demo'd a stacked version) but at the abstract level of "chiplet to chiplet communication" it's the same thing.
    TSMC gets 0.56pJ of power, but at a substantially faster 8GT/s. I don't know the extent to which this would scale down if reduced to Intel speeds, and whether its energy costs would go down (or even up, but that seems unlikely) if it were operating vertically rather than horizontally through an RDL.

    Point is Foveros is a branding more than a technology per se. It's Intel's way of performing a particular type of packaging, but as far as we can tell, the same style of packaging is available to anyone who uses eg TSMC, if it met their particular goals.

    (So far it hasn't because Apple, QC, Huawei, etc, can fit their entire SoC on a single die, they don't need to go through these contortions to either reduce the die size or deal with the limited capabilities of their fabs...

    That sounds snarky, but Lakefield is deeply fishy. Sure, you want to save area, but the target is a tablet, and Apple's tablet SoC's have been 120 to 150mm^2. You'd figure a single die Lakefield all on 10nm would fit in ~150mm^2 or less. So???)

    And Samsung? I would guess so, but I don't know.
  • Spunjji - Friday, July 10, 2020 - link

    Apple aren't trying to squeeze all of their profit margins out of the CPU alone, though. That's the difference.

    This is Intel trying to preserve margins by using fancy packaging technology to increase yield (and thus both output and margins) on their increasingly capacity-constrained nodes.
