Way back at CES 2014, Razer’s CEO introduced a revolutionary concept design for a PC: one main backplane into which users could insert a CPU, GPU, power supply, storage, and anything else in a modular fashion. Fast forward to 2020, and Intel is aiming to make this idea a reality. Today at a fairly low-key event in London, Intel’s Ed Barkhuysen showcased a new product, known simply as an ‘Element’ – a CPU/DRAM/storage combination on a dual-slot PCIe card, with Thunderbolt, Ethernet, Wi-Fi, and USB, designed to slot into a backplane with multiple PCIe slots and be paired with GPUs or other accelerators. Behold, Christine is real, and it’s coming soon.

‘The Element’ from Intel

Truth be told, this new concept device doesn’t really have a name. When we specifically asked what we should call this thing, we were told to simply call it ‘The Element’ – a product that acts as an extension of the Compute Element and Next Unit of Computing (NUC) family of devices. In actual fact, ‘The Element’ comes from the same team inside Intel: the Systems Product Group, responsible for the majority of Intel’s small form factor devices, has developed this new ‘Element’ in order to break out of the iterative design cycle and into something truly revolutionary.

(This is where a cynic might say that Razer got there first… Either way, everyone wins.)

What was presented on stage wasn’t much more than a working prototype of a small dual-slot PCIe card powered by a BGA Xeon processor. On the card were also two M.2 slots, two SO-DIMM memory slots, a cooler sufficient for all of that, and then additional controllers for Wi-Fi, two Ethernet ports, four USB ports, an HDMI video output from the Xeon’s integrated graphics, and two Thunderbolt 3 ports.

The M.2 slots and SO-DIMM slots are end-user accessible by removing a couple of screws from the front. This is in no way a final design, just a working prototype: the exact cooler, styling, and even the product name are not yet final, but the concept is solid.

The product shown used a BGA Xeon processor, however it was clear that this concept can also move to consumer processors. As with the current NUC family, this would likely mean mobile processors rather than BGA versions of desktop processors, and the Thunderbolt 3 ports on the side would hint towards 10th Generation Ice Lake; however, Intel stated that all options are open at this design stage.

The whole card connects through a PCIe slot, which we believe at this time to be PCIe 3.0. It stands to reason that if the Element becomes a generational product, it would migrate to PCIe 4.0 and PCIe 5.0 / CXL as and when Intel moves its product families onto those technologies. Intel is planning to bundle the card to partners with a backplane – a PCB with multiple PCIe slots. One slot would be designated the master host slot, and the CPU/DRAM/storage combination would go in that slot. Discrete GPUs, professional graphics cards, FPGAs, and RAID controllers are examples of cards that could fit into the other slots.
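For a rough sense of what each generational migration would buy the Element, here is a back-of-the-envelope estimate of one-direction x16 link bandwidth. This is a sketch under standard assumptions: 8 GT/s per lane at PCIe 3.0, a doubling of transfer rate each generation, and 128b/130b line encoding (which applies from PCIe 3.0 through 5.0).

```python
# Approximate one-direction bandwidth of a x16 link per PCIe generation.
# Assumes 8 GT/s per lane at gen 3, doubling each generation,
# and 128b/130b encoding (valid for PCIe 3.0 through 5.0).
LANES = 16

def x16_bandwidth_gbps(gen: int) -> float:
    """Approximate one-direction bandwidth in GB/s for a x16 link."""
    gt_per_s = 8 * 2 ** (gen - 3)           # transfer rate per lane in GT/s
    encoding = 128 / 130                     # 128b/130b line-encoding overhead
    return gt_per_s * encoding * LANES / 8   # bits -> bytes

for gen in (3, 4, 5):
    print(f"PCIe {gen}.0 x16 ≈ {x16_bandwidth_gbps(gen):.2f} GB/s")
```

This works out to roughly 15.8 GB/s for PCIe 3.0 x16, doubling to ~31.5 GB/s and ~63 GB/s at 4.0 and 5.0 respectively, which is the headroom argument for a multi-slot backplane fed from a single host card.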

In these configurations, the CPU compute card is always the host, rather than an attached device. Intel does offer a CPU-on-a-card as a device – the Visual Compute Accelerator (VCA), which pairs three Xeon E3 CPUs on a slave card accessed from the host. We asked if Intel has plans for its Element cards to be used as slave cards in this fashion, but Intel stated there are no current plans to do so.

The backplane would also be the source of power. A PSU feeding the backplane directly would offer 75W to each of the PCIe slots, as well as power any other features such as system fans or additional on-backplane controllers. This power could come from a PSU or from a 19V input, depending on the exact configuration of the system. The Element card we saw had an additional 8-pin PCIe power connector, suggesting another 150W could be delivered to the card, for a total of 225W for CPU, DRAM, and storage – which raises the question of whether the card could support something like a Core i9-9900KS.
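The power arithmetic is simple enough to sanity-check. A minimal sketch, assuming the standard 75 W slot allowance and 150 W for an 8-pin PCIe auxiliary connector; the 127 W figure is the i9-9900KS's rated TDP, though sustained all-core turbo is known to draw well above that:

```python
# Sanity check of the Element card's theoretical power budget.
SLOT_POWER_W = 75    # standard PCIe slot power allowance for a x16 card
EIGHT_PIN_W = 150    # standard 8-pin PCIe auxiliary connector

card_budget = SLOT_POWER_W + EIGHT_PIN_W
print(f"Total card budget: {card_budget} W")

# Core i9-9900KS: 127 W rated TDP, but sustained all-core turbo
# can draw considerably more than the rated figure.
TDP_9900KS = 127
headroom = card_budget - TDP_9900KS
print(f"Headroom over rated TDP: {headroom} W for DRAM, storage, and I/O")
```

On paper the 225 W budget covers the chip's rated TDP with room to spare, but whether it covers a 9900KS at full turbo alongside DRAM, storage, and the Thunderbolt controllers is exactly the open question.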

On the topic of cooling, the demo unit had a fairly basic cooling setup. As stated, Intel said this is in no way the final version of what it is trying to do here. When asked if it would be easy enough for users to liquid cool the CPU, the Intel spokesperson said it would be customizable, though it would be up to component manufacturers to enable that themselves.

For board partners, Intel stated that it does not see this Element form factor as something that partners would create themselves. In essence, there would be no AIB partners like in the GPU market; however, OEMs that build pre-built systems could take the Element card and customize on top of the Intel design, as well as develop their own backplanes and such.

Ultimately, with the Element, Intel wants to make integrated system upgrades easier. Customers keep the chassis, the system setup, and the backplane; all they would change is the Element card to get the latest performance and features. This was the ultimate goal of something like Razer’s Project Christine, and is certainly something to work towards. However, keeping the storage on the Element rather than on a separate add-in card is somewhat limiting, as an upgrade would require swapping the drives over. This might not be much of an issue if one of the PCIe slots on the backplane were used for M.2 drives (or if drives were on the backplane itself).

Intel stated that the plan is for the Element to see daylight in the hands of OEMs sometime in Q1 2020, likely at the back end of the quarter. Our spokesman said that exact CPUs and configurations are still in flux, and, as one might expect, so is pricing. Exactly how the Element will be named is a mystery, as is how it will be packaged for end users or OEMs.

Given that this is a product from the same group as the NUC, I fully expect it to follow the same roll-out procedure as other NUC products. Personally, I think this form factor would be great if Intel could standardize it and open it up to motherboard partners. I imagine we might see some board partners do copy-cat designs, similar to how we have several variations of NUCs on the market. Intel stated that it has a roadmap for the Element, which is likely to extend over multiple generations. I theorised a triple-slot version with an Xe GPU, and the idea wasn't dismissed out of hand.

We asked about RGB LEDs. The question received a chuckle, but it will be interesting to see whether Intel limits the Element to a professional environment or opens it up to more run-of-the-mill users.

We’ve politely asked Intel to let us know when it is ready so we can test. Our Intel spokesman was keen to start sampling when it is ready, stating that sampling budget in this context is not a problem. I think we’ll have to hold them to that.

Comments

  • stephenbrooks - Monday, October 7, 2019 - link

    Imagine in the future you could hot-swap 2 of these system cards: the first system would mirror its state onto the second and you could then remove the first, giving a system upgrade with no downtime. Also, you could imagine a high reliability version where 2 or 3 system cards have to stay in sync with each other (like redundant systems on spacecraft).
  • tspacie - Monday, October 7, 2019 - link

    You just described a Stratus ftServer from the early 2000s.
  • duploxxx - Tuesday, October 8, 2019 - link

    exactly.... only the OS and drivers were a headache. I remember those good old days.
    Stratus was totally blown out of the market by VMware HA... not the same level of protection but good enough for a way lower price...
  • thunderbird32 - Monday, October 7, 2019 - link

    So it's the modern equivalent of the old Intel 8080 based S-100 bus systems from the late 70's?
  • HStewart - Monday, October 7, 2019 - link

    Every desktop system is basically derived in some way from the S-100 bus system design

    S-100 -> ISA -> PCI -> PCIe …. 1.0, 2.0, 3.0, 4.0, 5.0....

    So here is the big difference with this design - you can start out with a CPU card, a GPU card, and possibly an IO card, but later add additional CPU or GPU cards. Furthermore, if they come out with a faster and more powerful version of the CPU card, you can add it - but the question is whether you can mix and match them. What about different vendors - Intel and AMD CPUs in the same box, or AMD and NVidia GPUs in the same box?

    This is a Xeon system, so multiple CPUs are in the picture - so how about 24 CPU modules all working together, with each CPU module containing multiple cores, 8 or 16 or more depending on design? Better yet, if one of the modules fails, just replace that one.

    People need to get past the old desktop designs and move toward the future. This is not a 70's design but a 2020 design.
  • Kevin G - Monday, October 7, 2019 - link

    You forgot AGP and PCI-X in there for historical purposes.

    The big change was PCI(-X) to PCIe, as that dropped the shared bus for a dedicated point-to-point serial link. The driver model was the same to permit rapid adoption and transition, but the underlying layers were all different.

    The problem with point-to-point is that for a modular system like this, additional IO slots are either bifurcated from the core card or routed into a massive PCIe switch. Either way, it is an easy means to introduce an IO bottleneck.
  • mode_13h - Tuesday, October 8, 2019 - link

    Exactly. Either this architecture will be incredibly limited, or it will require a PCIe switch in the backplane.

    IMO, not worth it. Blade servers are already good for what they do. PCs are good for what *they* do. This is a rather pointless waste of time.

    What would make more sense is to standardize on some mechanical housing form factors for USB4 devices, so that they can stack nicely or slot into enclosures. That's the way to expand small-form-factor devices like NUCs and mini PCs.
  • Samus - Tuesday, October 8, 2019 - link

    PCI-X servers used to give me nightmares. Specifically the compatibility, or lack thereof. Fortunately it died a quick death in the mainstream... Apple supported it for a long time after PCIe took the server space, in their G3/G4 workstations. Not sure why they loved it so much. But Apple.
  • sing_electric - Wednesday, October 9, 2019 - link

    PCI-X actually made it to the single-core G5s, believe it or not... The answer is probably that they were between a rock and a hard place with what Motorola (and IBM) could give them with chipsets.

    Keep in mind, this is when Apple was in love with USB... because it finally meant that they no longer had to either push OEMs hard to make Mac-compatible peripherals (or rely on costly, low-volume Mac specialty suppliers like Elgato). The ability for high-end Macs to be able to use off-the-shelf components was a plus, not a minus, back before Apple was the 800 lb gorilla in the room.
  • sing_electric - Wednesday, October 9, 2019 - link

    This. Of course, it may work for a lot of consumer use cases - GPUs don't typically saturate PCIe 3.0 x16 connectors, so really, once you go to 4.0, you'll have a reasonable amount of bandwidth for whatever else you'll need (as long as they keep Thunderbolt on the host card). Seeing as it looks like PCIe 5.0 might happen sooner rather than later, you might, on net, be OK.
