When I wrote my first article on Intel's Atom architecture I called it The Journey Begins. I did so because while Atom has made a nice home in netbooks over the years, it was Intel's smartphone aspirations that would make or break the product. And the version of Atom that was suitable for smartphone use was two years away.

Time sure does fly. Today Intel is finally unveiling its first Atom processors for smartphones and tablets. Welcome to Moorestown.

Craig & Paul’s Excellent Adventure

Six years ago Intel’s management canned a project called Tejas. It was destined to be another multi-GHz screamer, but concerns over power consumption kept it from coming to fruition. Intel instead focused on its new Core architecture that eventually led to the CPUs we know and love today (Nehalem, Lynnfield, Arrandale, Gulftown, etc.).

When a project gets cancelled, it wreaks havoc on the design team. They live and breathe that architecture for years of their lives. To not see it through to fruition is depressing. But Intel’s teams are usually resilient, as is evidenced by another team that worked on a canceled T-project.

The Tejas team in, er, Texas was quickly tasked with coming up with the exact opposite of the chip they had just worked on: an extremely low power core for use in some sort of a mobile device (it actually started as a low power core as a part of a many core x86 CPU, but the many core project got moved elsewhere before the end of 2004). A small group of engineers were first asked to find out whether or not Intel could reuse any existing architectures in the design of this ultra low power mobile CPU. The answer quickly came back as a no and work began on what was known as the Bonnell core.

No one knew what the Bonnell core would be used in, just that it was going to be portable. Remember this was 2004 and back then the smartphone revolution was far from taking over. Intel’s management felt that people were either going to carry around some sort of mobile internet device or an evolution of the smartphone. Given the somewhat conflicting design goals of those two devices, the design team in Austin had to focus on only one for the first implementation of the Bonnell core.

In 2005, Intel directed the team to go after mobile internet devices first. The smartphone version would follow. Many would argue that it was the wrong choice; after all, when was the last time you bought a MID? Hindsight is 20/20, and back then the future wasn’t so clear. Not to mention that shooting for a mobile extension of the PC was a far safer bet for a PC microprocessor company than going after the smartphone space. Add in the fact that Intel already had a smartphone application processor division (XScale) at the time, and going the MID route made a lot of sense.

The team had to make an ultra low power chip for use in handheld PCs by 2008. The power target? Roughly 0.5W.

Climbing Bonnell

An existing design wouldn’t suffice, so the Austin team led by Belli Kuttanna (former Sun and Motorola chip designer) started with the most basic of architectures: a single-issue, in-order core. The team iterated from there, increasing performance and power consumption until their internal targets were met.

In-order architectures, as you may remember, have to execute instructions in the order they’re decoded. This works fine for low latency math operations, but instructions that need data from memory will stall the pipeline and severely reduce performance. It’s like not being able to drive around a stopped car. Out-of-order architectures let you schedule around memory-dependent operations, masking some of the latency to memory and generally improving performance. Regardless of the order in which instructions execute, they all must complete in the program’s intended order. Dealing with this complexity costs additional die area and power, but as we’ve seen it’s worth it in the long run. All Intel CPUs since the Pentium Pro have been wide (3 - 4 issue), out-of-order cores, but they have also had much higher power budgets.
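The stalled-car analogy can be put in rough numbers. The toy model below (hypothetical latencies, not real Atom figures) shows how an out-of-order core hides independent work under a cache miss while an in-order core simply waits:

```python
# Toy pipeline model: a load that misses cache takes 100 cycles; ALU ops take 1.
# Each instruction is (name, latency in cycles, depends on the load result?).
program = [
    ("load r1, [mem]", 100, False),   # cache miss
    ("add r2, r1, r3", 1, True),      # needs the load result
    ("mul r4, r5, r6", 1, False),     # independent work
    ("sub r7, r8, r9", 1, False),     # independent work
]

def in_order_cycles(prog):
    # In-order: every instruction waits for the previous one to finish.
    return sum(lat for _, lat, _ in prog)

def out_of_order_cycles(prog):
    # Out-of-order: independent ops execute in the shadow of the load;
    # only load-dependent work waits out the full miss latency.
    load_lat = prog[0][1]
    dependent = sum(lat for _, lat, dep in prog[1:] if dep)
    independent = sum(lat for _, lat, dep in prog[1:] if not dep)
    return max(load_lat, independent) + dependent

print(in_order_cycles(program))      # 103 cycles: everything stalls behind the load
print(out_of_order_cycles(program))  # 101 cycles: the mul/sub hide under the miss
```

The gap widens as the fraction of independent work grows, which is why out-of-order execution pays off despite the extra die area and power spent on scheduling hardware.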

As I mentioned in my original Atom article, in 2008 Intel was committed to using in-order cores for this family for the next five years. It’s safe to assume that at some point, when transistor geometries get small enough, we’ll see Intel revisit this fundamental architectural decision. In fact, ARM has already gone out of order with its Cortex A9 CPU.

The Bonnell design was the first to implement Intel’s 2 for 1 rule. Any feature included in the core had to increase performance by 2% for every 1% increase in power consumption. That design philosophy has since been embraced by the entire company. Nehalem was the first to implement the 2 for 1 rule on the desktop.
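The 2 for 1 rule reduces to a simple arithmetic gate on every proposed feature. A minimal sketch (the function name and example numbers are illustrative, not Intel's):

```python
def passes_two_for_one(perf_gain_pct, power_cost_pct):
    """Bonnell's 2 for 1 rule: a feature is admitted only if it buys at
    least 2% performance for every 1% of added power consumption."""
    return perf_gain_pct >= 2 * power_cost_pct

# Hypothetical proposals:
print(passes_two_for_one(6, 2))  # True: +6% perf for +2% power, accepted
print(passes_two_for_one(3, 4))  # False: +3% perf for +4% power, rejected
```

The point of the rule is that it biases the design toward efficiency rather than raw performance: any feature that merely breaks even on performance-per-watt is left out.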

What emerged was a dual-issue, in-order architecture, the first of its kind from Intel since the original Pentium microprocessor. Intel has learned a great deal since 1993, so reinventing the Pentium came with some obvious enhancements.

The easiest was SMT, or as most know it: Hyper-Threading. Five years ago we were still arguing about the merits of single vs. dual core processors; today virtually all workloads are at least somewhat multithreaded. SMT vastly improves efficiency on multithreaded code, so Hyper-Threading was a definite shoo-in.

Other enhancements include Safe Instruction Recognition (SIR) and macro-op execution. SIR allows conditional out-of-order execution when the right group of instructions appears. Macro-op execution, on the other hand, fuses x86 instructions that perform related ops (e.g. load-op-store, load-op-execute) so they go down the pipeline together rather than independently. This increases the effective width of the machine and improves performance (as well as power efficiency).
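A toy decoder model makes the benefit of macro-op fusion concrete. The pairing rule below is a simplification (fusing any load immediately followed by the ALU op that consumes it), not Bonnell's actual fusion logic:

```python
def issue_slots(ops, fuse=False):
    """Count issue slots for a micro-op stream. With fuse=True, a 'load'
    immediately followed by an 'alu' op is paired into a single slot,
    mimicking load-op macro-op fusion."""
    slots = 0
    i = 0
    while i < len(ops):
        if fuse and ops[i] == "load" and i + 1 < len(ops) and ops[i + 1] == "alu":
            slots += 1   # the load-op pair travels the pipeline as one macro-op
            i += 2
        else:
            slots += 1
            i += 1
    return slots

stream = ["load", "alu", "load", "alu", "alu", "store"]
print(issue_slots(stream))             # 6 slots unfused
print(issue_slots(stream, fuse=True))  # 4 slots fused
```

On a dual-issue machine, squeezing six micro-ops into four slots is effectively the same as issuing more instructions per cycle, which is exactly the "effective width" gain described above.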

Features like hardware prefetchers are present in Bonnell but absent from the original Pentium. And the caches are highly power optimized.

Bonnell refers to the core itself, but when paired with an L2 cache and FSB interface it became Silverthorne - the CPU in the original Atom. For more detail on the Atom architecture be sure to look at my original article.

The World Changes, MIDs Ahead of Their Time

67 Comments


  • Mike1111 - Wednesday, May 5, 2010 - link

    IMHO Anand meant app-centric smartphones, David Pogue calls them app phones.
  • jasperjones - Wednesday, May 5, 2010 - link

    I don't see how recent Symbian devices are not "app centric." You have the publicly available SDK, the Ovi store, etc.
  • BrooksT - Wednesday, May 5, 2010 - link

    So your argument is that symbian is a bigger player in the app phone market than Apple because their *latest* phones support apps?

    The "smartphone" / "app phone" semantic difference is annoying, but if we look at, say, number of applications available or downloaded, Symbian and RIM are distant third and fourth places. Likewise with app usage, even just internet browsing.

    If you want to talk about smartphones as they existed in 2006, then yes, both Symbian and RIM are much bigger than Apple or Android.
  • jasperjones - Wednesday, May 5, 2010 - link

    To clarify: I said "recent" because the first Symbian smartphones came out almost 10 years ago--of course, those weren't app-centric.

    My original comment on Anand's article still stands. I'm talking about IDC's and Canalys' reports on 2010:Q1 smartphone sales which became available just days ago. Of course, most of the smartphones sold by Nokia and RIM in the first quarter allow for installation of apps such as Facebook, Ovi Maps, etc., etc.
  • WaltFrench - Sunday, May 9, 2010 - link

    “…Apple and Google dominate the smartphone market. This is utter nonsense.”

    All you have to do is to look at the developer space. How many app developers are creating apps for the unreleased RIM OS 6? … for the Symbian OS^3, due out in “select” markets sometime in Q3?

    If older apps work OK in these new OS incarnations, and if Blackberry and Nokia users are heavy app downloaders (or for some reason will become heavy users), then the current sales-share leaders are relevant, but still not dominant, in the future of app phones.
  • nafhan - Wednesday, May 5, 2010 - link

    I'm curious about the PCI bus requirement for Windows 7 that would prevent it from running on Moorestown devices. Does it have something to do with storage, maybe? I'm having trouble finding specifics online as well. If someone could enlighten me, it would be appreciated.
  • DanNeely - Wednesday, May 5, 2010 - link

    This is almost certainly a factor of windows being a monolithic kernel and MS not having any way to say "this PC doesn't do PCI". This is something that MS will have to deal with in the medium term future anyway. PCI slots are going away from some high end mobos; it's only a matter of time before they disappear from mainstream boards and stop being used to attach misc controllers like PATA (slowly going away entirely) or FireWire (FW3200 will need PCIe bandwidth). At that point intel will want to take it out of their chipsets as a cost saving feature, and oems will not be happy if they have to install a PCIe to PCI bridge to maintain windows compatibility.
  • Drizzt321 - Wednesday, May 5, 2010 - link

    Maybe HP/Palm should get with Intel and optimize WebOS for this. Much of the WebOS stack is just Linux, Webkit, plus other F/OSS stuff like gstreamer and the like so I wouldn't be surprised if it isn't as big an effort as, say, Symbian or anything like that.

    This could be a big break for Intel and HP/Palm, since HP/Palm needs something big to help it move on to the next WebOS device, and the OS could certainly see some benefits to more CPU power. I've heard the overclocking patches raising the CPU to 800MHz can really help things.
  • sleepeeg3 - Wednesday, May 5, 2010 - link

    Please stop designing faster phones.

    Phone A lasts 24 hours standby
    Phone B lasts 6 hours standby

    After 6 hours, Phone B's battery is dead. How much use do you get out of a phone with a dead battery? 0.

    999GHz x 0 is still... 0!

    This push toward faster phones, without even considering battery life, is nuts. Phones are impractical tools for just about everything but calling, messaging and photographs. None of these are CPU intensive. Dependability is more important than how fast the dial screen opens.

    Moorestown may include better power architecture, but it throws this away by jacking up the processor speed.

    Let's get back to practicality and make phones functional again. This push toward cutesy 1000mAh/1GHz+ phones that die in a few hours is moronic.

    Is it too much to ask for phones that last a week?
  • metafor - Thursday, May 6, 2010 - link

    There are plenty of phones that last a week...

    They even cost significantly less than GHz smartphones and usually don't come with a 2-year contract.

    But they don't have giant 4.2" AMOLED screens (which btw, is ~50% of the power consumption) either.
