Back in November last year, we reported that SK Hynix had developed and deployed its first DDR5 DRAM. Fast forward to the present: SK Hynix has more recently been working on its DDR5-6400 DRAM, but today the company has showcased plans to offer up to DDR5-8400, with on-die ECC and an operating voltage of just 1.1 V.

With CPU core counts rising amid the fierce battle between Intel and AMD in the desktop, professional, and now mobile markets, the demand for higher throughput is high on the agenda. Memory bandwidth, by comparison, has not been increasing as quickly, and at some level the beast needs to be fed. Announcing more technical details on its official website, SK Hynix has been working diligently on perfecting its DDR5 chips, with capacities of up to 64 Gb per chip.

SK Hynix had previously been working on its DDR5-6400 DRAM, a 16 Gb chip organized into 32 banks across 8 bank groups, with double the available bandwidth and access potential of DDR4-3200 memory. For reference, DDR4 uses 16 banks in 4 bank groups. One key to improving access throughput is the burst length, which has been doubled to 16 from DDR4's 8. Another element to consider is that DDR4 cannot service operations while it is refreshing. DDR5 uses SBRF (same bank refresh function), which allows the system to use other banks while one is refreshing, in theory improving memory access availability.
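The headline figures above can be sanity-checked with some quick arithmetic. This is a rough sketch of peak rates only; sustained bandwidth is lower once refresh, bank conflicts, and command overhead are accounted for:

```python
def peak_gbs(data_rate_mts, bus_width_bits):
    """Peak bandwidth in GB/s: transfers per second times bytes per transfer."""
    return data_rate_mts * (bus_width_bits // 8) / 1000

# A DDR4 DIMM presents one 64-bit channel; a DDR5 DIMM is split into
# two independent 32-bit channels.
ddr4 = peak_gbs(3200, 64)           # 25.6 GB/s
ddr5 = 2 * peak_gbs(6400, 32)       # 51.2 GB/s -- double DDR4-3200

# The doubled burst length keeps one burst equal to a 64-byte cache line,
# even though each DDR5 channel is half as wide:
ddr4_burst_bytes = 8 * 64 // 8      # BL8  on a 64-bit channel = 64 bytes
ddr5_burst_bytes = 16 * 32 // 8     # BL16 on a 32-bit channel = 64 bytes
```

The halved channel width combined with the doubled burst length is why DDR5 can serve two independent requests per module where DDR4 served one.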

As we've already mentioned, SK Hynix already has DDR5-6400 in its sights, built upon its second-generation 10nm-class fabrication node. SK Hynix has now stated that it plans to develop up to DDR5-8400. While similar in methodology to its DDR5-6400 DRAM, DDR5-8400 requires much more forethought in its design. What's interesting about SK Hynix's DDR5 parts is the jump in memory banks over DDR4, with DDR5-8400 using 32 banks in 8 bank groups.

Not content with just increasing overall memory bandwidth and access performance over DDR4, the new DDR5 will run with an operating voltage of 1.1 V. This marks a roughly 8% reduction versus DDR4's 1.2 V, designed to make DDR5 more power-efficient, with SK Hynix reporting that it aims to reduce power consumption per unit of bandwidth by over 20% compared to DDR4.
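As a first-order sketch of what the voltage drop alone buys, assuming dynamic power scales with the square of the supply voltage (SK Hynix's over-20% figure also folds in the higher data rate, not just the voltage):

```python
v_ddr4, v_ddr5 = 1.2, 1.1

voltage_reduction = 1 - v_ddr5 / v_ddr4               # ~8% lower voltage
dynamic_power_reduction = 1 - (v_ddr5 / v_ddr4) ** 2  # ~16% lower dynamic power

print(f"{voltage_reduction:.1%} less voltage, "
      f"{dynamic_power_reduction:.1%} less dynamic power")
```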

To improve performance and increase reliability in server scenarios, DDR5-8400 will use on-die ECC (Error Correction Code) and ECS (Error Check and Scrub), a milestone in the production of DDR5. This is expected to reduce overall costs, with ECS recording any defects present and sending the error count to the host. This is designed to improve transparency, with the aim of providing enhanced reliability and serviceability within a server system. Also integrated into the design of the DDR5-8400 DRAM is Decision Feedback Equalization (DFE), which is designed to eliminate reflective noise when running at high speeds. SK Hynix notes that this substantially increases the achievable speed per pin.
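SK Hynix has not published the details of its on-die code, but the principle behind single-error correction can be illustrated with a classic Hamming(7,4) code, which protects 4 data bits with 3 parity bits and pinpoints any single flipped bit. This is a toy sketch only; real on-die ECC operates over much wider words:

```python
def hamming74_encode(d):
    """Encode 4 data bits [d1, d2, d3, d4] into a 7-bit codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4        # parity over codeword positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4        # parity over codeword positions 3, 6, 7
    p3 = d2 ^ d3 ^ d4        # parity over codeword positions 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]   # positions 1..7

def hamming74_decode(codeword):
    """Correct up to one flipped bit; return (data_bits, error_position)."""
    c = list(codeword)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]        # checks positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]        # checks positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]        # checks positions 4, 5, 6, 7
    syndrome = s1 + 2 * s2 + 4 * s3       # 1-based error position, 0 if clean
    if syndrome:
        c[syndrome - 1] ^= 1              # flip the bad bit back
    return [c[2], c[4], c[5], c[6]], syndrome

# A single bit flipped in storage is detected and corrected:
cw = hamming74_encode([1, 0, 1, 1])
cw[4] ^= 1                                # simulate a single-bit upset
data, err = hamming74_decode(cw)          # data == [1, 0, 1, 1], err == 5
```

The syndrome computed by the decoder is exactly the position of the flipped bit, which is the same trick that lets on-die logic scrub errors, and ECS count them, without host involvement.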

In SK Hynix's specification comparison between DDR4 and DDR5, one interesting thing to note is that it mentions DRAM chips with densities of up to 64 gigabit. We already know that the chip size of DDR5 is 65.22 mm², with a data rate of 6.4 Gbps per pin, built on its 1y-nm 4-metal DRAM manufacturing process. It is worth pointing out that the DDR5-5200 RDIMM we reported on back on November 18 uses 16 Gb DRAM chips, with a further scaling path to 32 Gb reported. SK Hynix aims to double this again to 64 Gb chips, doubling density at the lower 1.1 V operating voltage.
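To put 64 Gb dies in perspective, capacity per memory rank scales directly with die density. The sketch below assumes a standard 64-bit non-ECC rank built from x8 chips; actual DIMM organizations vary:

```python
def rank_capacity_gb(die_density_gbit, chip_width_bits=8, bus_width_bits=64):
    """Capacity of one DIMM rank in GB, given per-die density in gigabits."""
    chips_per_rank = bus_width_bits // chip_width_bits  # 8 x8 chips fill 64 bits
    return chips_per_rank * die_density_gbit / 8        # gigabits -> gigabytes

print(rank_capacity_gb(16))   # 16 GB per rank with today's 16 Gb dies
print(rank_capacity_gb(32))   # 32 GB per rank at 32 Gb
print(rank_capacity_gb(64))   # 64 GB per rank at 64 Gb
```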

Sungsoo Ryu, Head of DRAM Product Planning at SK Hynix, stated:

"In the 4th Industrial Revolution, which is represented by 5G, autonomous vehicle, AI, augmented reality (AR), virtual reality (VR), big data, and other applications, DDR5 DRAM can be utilized for next-gen high-performance computing and AI-based data analysis".

SK Hynix, if still on schedule amid the current COVID-19 pandemic, looks set to enter mass production of DDR5 later this year.

Source: SK Hynix


  • Irata - Friday, April 3, 2020 - link

That's the beauty of the IO die - the CPU chiplets don't have a memory controller.

    Of course, the interconnect needs to support the increased bandwidth.
  • Pro-competition - Friday, April 3, 2020 - link

    ECC please.

I really want to see greater adoption of ECC in system DRAM (as distinct from GDDR used in consumer GPUs). There's a good reason why Apple, Dell, etc. deploy ECC in their workstation computers.

There's minimal real-world performance degradation from using slower ECC RAM compared to non-ECC RAM. But there are tangible benefits to having a system that doesn't silently corrupt data (and eventually throw all sorts of unexplained system errors).

ECC RAM is more expensive than non-ECC RAM because it is not currently being produced on a large scale. But consumers like me who value stability and reliability would not mind the ~20% higher price. The problem is, it's virtually impossible to find anyone who sells ECC RAM to consumers.

    To clarify, I'm only advocating for ECC in system RAM. I'm not advocating for registered memory.
  • Pro-competition - Friday, April 3, 2020 - link

Oh, ECC also requires extra silicon, so that's another reason why it's more expensive.
  • mode_13h - Saturday, April 4, 2020 - link

Meh, 72 bits per 64? That *should* be only 12.5% overhead.
  • PixyMisa - Saturday, April 4, 2020 - link

    With DDR5, there are two independent 32-bit channels per module, and each needs 7 bits of ECC.

Which doesn't justify the pricing of DDR4 ECC.
  • willis936 - Saturday, April 4, 2020 - link

DDR4 ECC RAM is nearly twice the cost and half the speed of DDR4 non-ECC RAM. What's more, CPUs are heavily, heavily limited by memory throughput. That is why most of the transistors are spent on speeding up the memory subsystem (see: SRAM cache and SMT). And now we're in the core wars, where main memory throughput requirements have never been higher. We're still only on 2 channels of main memory but we want to feed twelve 4 GHz cores? Look into memory scaling performance on current-gen CPUs. Memory performance matters.
  • mode_13h - Saturday, April 4, 2020 - link

price != cost
  • willis936 - Saturday, April 4, 2020 - link

It is if you're a consumer rather than a producer.
  • Pro-competition - Saturday, April 4, 2020 - link

Hopefully a similar test to the one below will be conducted on a more modern system:

    https://www.techspot.com/article/845-ddr3-ram-vs-e...

    "If you absolutely need the fastest system possible and if fractions of a percent actually do make a difference, then ECC might not be right for you. But our testing has only furthered our belief that in any other situation, ECC memory is simply a better choice than non-ECC memory due to its incredible reliability with only a tiny loss in performance."
  • Pro-competition - Saturday, April 4, 2020 - link

And perhaps G.Skill et al. will be willing to bin high-speed ECC RAM and sell it to consumers. There's nothing to stop manufacturers from "overclocking" ECC RAM past the SPD speeds and timings, and including XMP profiles in ECC RAM.
