The Cortex-A78 Micro-architecture: PPA Focused

The new Cortex-A78 has been on Arm’s roadmaps for a few years now, and we had been expecting the design to represent the smallest generational microarchitectural jump in Arm’s new Austin family. As the third iteration of Arm's Austin core designs, the A78 follows the sizable 25-30% IPC improvements that Arm delivered with the Cortex-A76 and A77, which is to say that Arm has already picked a lot of the low-hanging fruit in refining its Austin core.

As the new A78 now finds itself part of a sibling pairing alongside the higher-performance X1 CPU, we naturally see the biggest focus of this particular microarchitecture being on improving the PPA of the design. Arm’s goals were reasonable performance improvements, balanced with reduced power usage and maintained or reduced core area.

It’s still an Armv8.2 CPU, sharing ISA compatibility with the Cortex-A55, with which it is meant to be paired in a DynamIQ cluster. We see similar scaling possibilities here, with up to 4 cores per DSU and an L3 cache scaling up to 4MB in Arm’s projected average target designs.

Microarchitectural improvements are found throughout the design. On the front-end, the biggest change concerns the branch predictor, which is now able to process up to two taken branches per cycle. Last year, the Cortex-A77 introduced a secondary branch execution unit in the back-end; however, the actual branch unit on the front-end still only resolved a single branch per cycle.

The A78 is now able to concurrently resolve two predictions per cycle, vastly increasing the throughput of this part of the core and allowing it to recover more quickly from branch mispredictions and the resulting pipeline bubbles further downstream. Arm claims its microarchitecture is very branch-prediction driven, so the improvements here add a lot to the generational gains of the core. Naturally, the branch predictors themselves have also been improved in terms of accuracy, an ongoing effort with every new generation.
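
Taken-branch handling is something you can probe yourself. Below is a minimal microbenchmark sketch of our own (not an Arm tool, and the numbers it prints will vary by device): every iteration executes three taken branches (the call, the return, and the loop's backwards branch), so a front-end that can only follow one taken branch per cycle needs at least three cycles per iteration, while a front-end that can follow two per cycle can do better.

```c
/* branch_tp.c - our own rough probe of taken-branch throughput.
   Build with: cc -O1 branch_tp.c -o branch_tp */
#include <stdio.h>
#include <time.h>

#define ITERS 100000000UL

/* noinline keeps the BL/RET branch pair inside the loop */
__attribute__((noinline)) unsigned long bump(unsigned long x) {
    return x + 1;   /* trivial body: the cost is dominated by branches */
}

int main(void) {
    struct timespec t0, t1;
    unsigned long acc = 0;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (unsigned long i = 0; i < ITERS; i++)
        acc = bump(acc);   /* call + return + loop branch = 3 taken branches */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("%.2f ns per iteration (acc=%lu)\n", ns / ITERS, acc);
    return 0;
}
```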

Arm focused on a slew of different aspects of the front-end to improve power efficiency. On the part of the L1I cache, we're now seeing the company offer a 32KB implementation option to vendors, allowing customers to reduce the area of the core for a small hit in performance, but with gains in efficiency. Other changes were made to some structures of the branch predictors, where the company downsized some low return-on-investment blocks which had a larger cost in area and power, but didn’t have as large an impact on performance.

The Mop cache on the Cortex-A78 remains the same as on the A77, housing up to 1500 already-decoded macro-ops. The bandwidth from the front-end to the mid-core is also the same as on the A77, with an up to 4-wide instruction decoder and the ability to fetch up to 6 instructions per cycle from the macro-op cache to the rename stage, bypassing the decoder.

In the mid-core and execution pipelines, most of the work went into improving the area and power efficiency of the design. We’re now seeing more cases of instruction fusion taking place, which helps not only the performance of the core, but also its power efficiency, as it now uses fewer resources, and thus less energy, for the same amount of work.
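
To make the fusion idea concrete, here is an illustration of our own: Arm hasn't disclosed which new pairs the A78 fuses, but a compare followed immediately by a conditional branch is the classic fusible pair on AArch64 cores, and ordinary loop code generates it constantly.

```c
/* fusion_example.c - the kind of code where instruction fusion pays off. */
int count_matches(const int *a, int n, int key) {
    int count = 0;
    for (int i = 0; i < n; i++) {
        /* The loop test (i < n) compiles to a CMP immediately followed
           by a conditional branch; a fusing decoder can treat that pair
           as a single macro-op, spending one slot of pipeline resources
           (and less energy) on what used to be two instructions. */
        if (a[i] == key)
            count++;
    }
    return count;
}
```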

The issue queues have also seen quite large changes in their design. Arm explains that in any OOO core these are quite power-hungry structures, and the designers have made some good power-efficiency improvements here, although the company isn't detailing any specifics of the changes.

Register renaming structures and register files have also been optimized for efficiency, sometimes seeing a reduction in size. The register files in particular have seen a redesign in the density of the entries they’re able to house, packing more data into the same amount of space, enabling the designers to reduce the structures’ overall size without reducing their capabilities or performance.

On the re-order-buffer side, although the capacity remains the same at 160 entries, the new A78 improves power efficiency and the density of instructions that can be packed into the buffer, increasing the instructions per unit area of the structure.

Arm has also fine-tuned the out-of-order window size of the A78, actually reducing it in comparison to the A77. The explanation here is that larger window sizes generally do not deliver a good return on investment when scaling up, and the goal of the A78 is to maximize efficiency. It’s to be noted that the OOO window here does not refer solely to the ROB, which has remained the same size; Arm employs various other buffers, queues, and structures that enable OOO operation, and it’s likely in these blocks that we’re seeing a reduction in capacity.
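
One indirect way such window sizes show up in software, as a sketch of our own rather than anything Arm describes: independent cache-missing loads can only overlap if the OOO structures can hold all of the in-flight work between them. The hypothetical probe below times 1 to 8 independent pointer chases; if per-load latency keeps dropping as chains are added, the window is deep enough to overlap their misses.

```c
/* ooo_mlp.c - rough probe of out-of-order depth via memory-level
   parallelism. All construction here is ours, not Arm's. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define NODES (1u << 22)   /* 4M pointers = 32 MB: DRAM-resident */
#define STEPS (1u << 20)

static size_t *make_cycle(void) {
    size_t *p = malloc(NODES * sizeof *p);
    for (size_t i = 0; i < NODES; i++) p[i] = i;
    for (size_t i = NODES - 1; i > 0; i--) {  /* Sattolo: one single cycle */
        size_t j = rand() % i;
        size_t t = p[i]; p[i] = p[j]; p[j] = t;
    }
    return p;
}

int main(void) {
    size_t *p = make_cycle();
    for (int chains = 1; chains <= 8; chains *= 2) {
        size_t cur[8];
        for (int c = 0; c < chains; c++) cur[c] = (size_t)c * (NODES / 8);
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (size_t s = 0; s < STEPS; s++)
            for (int c = 0; c < chains; c++)
                cur[c] = p[cur[c]];   /* chains are independent: their
                                         misses can overlap if the OOO
                                         window holds them all in flight */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        volatile size_t sink = 0;     /* keep the chases from being DCE'd */
        for (int c = 0; c < chains; c++) sink += cur[c];
        (void)sink;
        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
        printf("%d chain(s): %6.2f ns per load\n",
               chains, ns / ((double)STEPS * chains));
    }
    free(p);
    return 0;
}
```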

On the diagram, we see Arm slightly changing its description of the dispatch stage, disclosing a dispatch bandwidth of 6 macro-ops (Mops) per cycle, whereas last year the company had described the A77 as dispatching 10 µops. The apples-to-apples comparison is that the new A78 increases the dispatch bandwidth to 12 µops per cycle on the dispatch end, allowing for a wider execution core which houses some new capabilities.
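
Put differently, assuming each Mop can crack into up to two µops at dispatch (which is what the disclosed figures imply), 6 Mops × 2 µops/Mop = 12 µops per cycle, a 20% increase over the A77's 10 µops.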

On the integer execution side, the only big addition has been the upgrade of one of the ALUs to a more complex pipeline which now also handles multiplications, essentially doubling the integer MUL bandwidth of the core.
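
Doubled MUL throughput is easy to sanity-check with a microbenchmark; the sketch below is our own construction. Eight independent multiply chains ensure that every multiply-capable pipeline always has work available, so a second MUL unit should roughly double the measured rate versus a single-MUL core.

```c
/* mul_tp.c - sketch of an integer multiply throughput test. */
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define ITERS 10000000UL

int main(void) {
    volatile uint64_t seed = 0x9E3779B97F4A7C15UL; /* defeat const-folding */
    uint64_t m = seed;
    uint64_t x0 = 1, x1 = 2, x2 = 3, x3 = 4, x4 = 5, x5 = 6, x6 = 7, x7 = 8;
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (unsigned long i = 0; i < ITERS; i++) {
        /* eight independent multiplies per iteration: more chains than
           pipelines times latency, so throughput is the bottleneck */
        x0 *= m; x1 *= m; x2 *= m; x3 *= m;
        x4 *= m; x5 *= m; x6 *= m; x7 *= m;
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("%.2f muls/ns (check: %lx)\n", 8.0 * ITERS / ns,
           (unsigned long)(x0 ^ x1 ^ x2 ^ x3 ^ x4 ^ x5 ^ x6 ^ x7));
    return 0;
}
```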

The rest of the execution units have seen very little to no changes this generation, and are pretty much in line with what we’ve already seen in the Cortex-A77. It’s only next year where we expect to see a bigger change in the execution units of Arm’s cores.

On the back-end of the core and the memory subsystem, we actually find some larger changes for performance improvements. The first big change is the addition of a new load AGU, which complements the two existing load/store AGUs. This doesn’t change the number of store operations executed per cycle, but gives the core a 50% increase in load bandwidth.

The interface bandwidth from the LD/ST queues to the L1D cache has been doubled from 16 bytes per cycle to 32 bytes per cycle, and the core’s interfaces to the L2 have also been doubled in terms of both read and write bandwidth.
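
Both of these changes (the third load slot and the wider L1D interface) should be visible in a simple read-bandwidth loop over an L1-resident buffer. The following sketch of ours uses several independent accumulators so the loads don't serialize on a single sum; the exact GB/s figure will depend on the device.

```c
/* l1_bw.c - sketch of an L1D read-bandwidth loop. The buffer fits in a
   32-64KB L1, so the test exercises the core-to-L1 interface. */
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define BUF_WORDS 4096          /* 32 KB of uint64_t: L1-resident */
#define PASSES    1000000

int main(void) {
    static uint64_t buf[BUF_WORDS];
    for (int i = 0; i < BUF_WORDS; i++) buf[i] = i;

    uint64_t s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int p = 0; p < PASSES; p++) {
        /* four independent accumulators keep all load pipes busy
           instead of serializing on one dependency chain */
        for (int i = 0; i < BUF_WORDS; i += 4) {
            s0 += buf[i];     s1 += buf[i + 1];
            s2 += buf[i + 2]; s3 += buf[i + 3];
        }
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    double bytes = (double)PASSES * BUF_WORDS * sizeof(uint64_t);
    printf("%.1f GB/s (checksum %llx)\n", bytes / ns,
           (unsigned long long)(s0 + s1 + s2 + s3));
    return 0;
}
```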

Arm seemingly already has some of the most advanced prefetchers in the industry, and here it claims the A78 further improves the designs in terms of their memory area coverage, accuracy, and timeliness. Timeliness here refers to quickly latching onto emerging patterns and bringing the data into the lower caches as fast as possible. You also don’t want the prefetchers to kick in too early or too late, such as needlessly prefetching data that’s not going to be used for some time.
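
What prefetcher coverage buys can be demonstrated with a classic contrast (our demo, not Arm's): the same bytes are read in both passes below, but the streaming pass exposes a pattern the prefetcher can latch onto, while the shuffled pass defeats it and exposes raw memory latency.

```c
/* prefetch_demo.c - sequential vs. random traversal of the same data. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 24)   /* 64 MB of uint32_t: well past any L3 */

static double ns_now(void) {
    struct timespec t;
    clock_gettime(CLOCK_MONOTONIC, &t);
    return t.tv_sec * 1e9 + t.tv_nsec;
}

int main(void) {
    uint32_t *data = malloc(N * sizeof *data);
    uint32_t *idx  = malloc(N * sizeof *idx);
    for (uint32_t i = 0; i < N; i++) { data[i] = i; idx[i] = i; }
    for (uint32_t i = N - 1; i > 0; i--) {        /* Fisher-Yates shuffle */
        uint32_t j = rand() % (i + 1);
        uint32_t t = idx[i]; idx[i] = idx[j]; idx[j] = t;
    }

    uint64_t sum = 0;
    double t0 = ns_now();
    for (uint32_t i = 0; i < N; i++) sum += data[i];        /* streaming */
    double t1 = ns_now();
    for (uint32_t i = 0; i < N; i++) sum += data[idx[i]];   /* shuffled  */
    double t2 = ns_now();

    printf("sequential: %.2f ns/elem, random: %.2f ns/elem (sum %llx)\n",
           (t1 - t0) / N, (t2 - t1) / N, (unsigned long long)sum);
    free(data); free(idx);
    return 0;
}
```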

Much like the L1I cache, the A78 now also offers a 32KB L1D option that gives vendors the choice to configure a smaller core setup. The L2 TLB has also been reduced from 1280 to 1024 entries; this essentially improves the power efficiency of the structure whilst still retaining enough entries to allow for complete coverage of a 4MB L3 cache, still minimizing access latency in that regard.
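
The arithmetic checks out: assuming standard 4KB pages, 1024 entries × 4KB per page = 4MB of translation reach, exactly matching the largest L3 configuration of the cluster.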

Overall, the Cortex-A78’s microarchitectural disclosures might sound surprising if the core were presented in a vacuum, as we’re seeing quite a lot of mentions of reduced structure sizes and overall compromises being made in order to maximize energy efficiency. Naturally, this makes sense given that the Cortex-X1 focuses on performance…

Two New "Big" Micro-architectures: A Business Model Change The Cortex-X1 Micro-architecture: Bigger, Fatter, More Performance
Comments Locked

192 Comments

View All Comments

  • Wilco1 - Tuesday, May 26, 2020

    Disappointed in what way? Flagship phones have been more than fast enough in the last few years. There is a balance between power consumption and performance - and I think the improved efficiency of Cortex-A78 will be more useful in typical use-cases. It won't win benchmarks, but if you believe iPhone performance is measurably better in real-life use (rather than benchmarks), why not just buy one?
  • syxbit - Tuesday, May 26, 2020

    Put it in context. You pay $1500 for a Galaxy S20 Ultra that's slower than a $400 iPhone.
    If you do a lot of web browsing on JavaScript-heavy pages, nothing beats single-threaded perf. You can't improve it by just throwing slower cores at it.
    Discourse did a good writeup that's still valid today.
    https://meta.discourse.org/t/the-state-of-javascri...
  • Wilco1 - Tuesday, May 26, 2020

    You could also get the $699 OnePlus 8 and beat the S20 ultra on both performance and cost. Where is the difference?

    Javascript and browsers depend heavily on software optimization, and that's the real issue.
  • armchair_architect - Tuesday, May 26, 2020

    syxbit is right. JavaScript and browsers are not just software. They stress the CPU in different ways than the usual SPEC/Geekbench, and X1 will not be just a benchmark core.
    If you look at the DVFS curve of A77 vs A78, X1 will probably be even lower power than A78 in the region of perf in which they overlap.
    For the simple reason that to achieve the same performance as A77/A78, X1 will need much lower frequency and voltage. This will greatly offset the intrinsic growth in iso-frequency power that X1 will surely have.
    My point would be: going wider helps you be more efficient iso-perf vs narrower cores.
    The power efficiency hit only comes when you go over the peak perf offered by the narrower core.
    So you could argue that something like X1 takes the A78 DVFS curve and pushes it down (lower power), and on top of that extends it to new performance points not even reachable on A78.
    Obviously you pay in area for this :)
    But Apple has clearly shown over the years that this is the winning formula.
  • ZolaIII - Wednesday, May 27, 2020

    You are completely wrong. It's much more about caching than wider cores. X1 is not 50% faster than A78, but it is 50% bigger. The best approach would be a wider ISA with the same execution units multiplied in number; RISC-V has already laid out foundations for a 256-bit ISA (still a draft) and is finalising the 128-bit one. But there's a catch in tooling and compiler support.
  • soresu - Wednesday, May 27, 2020

    X1 does have wider NEON SIMD, twice as wide in fact - so for content that favors SIMD (like dav1d AV1 decoding) you will get a serious jump in performance.

    Unfortunately the benchmarks do not really give us much of an idea of real world improvement for something like this, so we'll have to wait for products to get a better idea.
  • dotjaz - Thursday, May 28, 2020

    ARM specifically said A78 was designed to INCREASE EFFICIENCY vs A77, and a lot of the design decisions concur with that.
    X1 was designed to MAXIMIZE PERFORMANCE, sacrificing efficiency and area in the process. When you factor in the leakage caused by the larger die, X1 would almost certainly be less efficient than A78 when you drop it to below 2GHz.
  • Wilco1 - Thursday, May 28, 2020

    "Javascript and browsers are not just software."

    They are just software. Fun fact: your Android browser is built with -Oz. Yes, all optimizations are turned off in order to reduce binary size. That's an insanely stupid software decision which means Android phones appear to be behind iOS when in fact they are not.
  • name99 - Saturday, May 30, 2020

    It's not an "insanely stupid software decision"...
    Fun fact: Apple ALSO builds pretty much all their software at either -Oz or -Os! Both Apple and Google (and probably MS) are well aware that the "overall system experience" matters more than picking up a few percentage points in particular benchmarks, and that large app footprints hurt that overall system experience. Apple's recommendation for MOST developer code (and followed internally) has been to optimize for size for, yikes, at least 20 years, and hasn't changed in all that time.

    Look at the (ongoing) work in LLVM to reduce code size ( "outliner" is one of the relevant keywords); the people involved in that span a range of companies. I've seen a lot of work by Apple people, a lot by Google people, some even by Facebook people.
  • Wilco1 - Saturday, May 30, 2020

    There is a world of difference between optimizing for performance without regard for code size and optimizing for the smallest possible code size without any regard for performance. -Ofast is the former, -Oz is the latter. Most software, including Linux distros, uses -O2 as the best tradeoff between these extremes. Non-essential applications use -Os (or even -Oz if performance is irrelevant). However, a browser is extremely performance sensitive. Saving a few bytes with -Oz loses 10-20% performance, and that means you lose the equivalent of a full CPU generation. I call that insanely stupid; there are no other words to describe it.
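
As a closing aside on armchair_architect's DVFS argument earlier in the thread: it can be sanity-checked with the usual dynamic-power relation, where power scales roughly as C·V²·f. The sketch below is purely illustrative; every ratio in it is a guess of ours, not an Arm or vendor figure.

```c
/* dvfs_sketch.c - back-of-envelope check of the "wider core at lower
   clock" argument. All ratios are illustrative guesses, not Arm data. */
#include <stdio.h>

int main(void) {
    double cap_ratio  = 1.3; /* guess: X1 switches ~30% more capacitance */
    double freq_ratio = 0.8; /* guess: X1 matches A78 perf at 80% clock  */
    double volt_ratio = 0.8; /* assume voltage scales roughly with clock */

    /* dynamic power scales roughly as C * V^2 * f */
    double power_ratio = cap_ratio * volt_ratio * volt_ratio * freq_ratio;
    printf("X1/A78 dynamic power at iso-performance: ~%.2f\n", power_ratio);
    /* prints ~0.67: lower power at the same performance, until X1 is
       pushed past the top of A78's curve (and leakage from the larger
       core, which this ignores, eats into the margin) */
    return 0;
}
```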
