Specifications and Teardown Analysis

The Supermicro X11SDV-4C-TP8F-01 motherboard used in the SuperServer E302-9D is a Flex ATX board (9" x 7.25"). It integrates the Xeon D-2123IT SoC and provides four DIMM slots. Since the SoC is soldered onto the board, the memory can only run at speeds up to the maximum supported by the Xeon D-2123IT's memory controllers: DDR4-2400.
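
For readers who end up populating the DIMM slots themselves, the negotiated memory speed can be read back from a running OS. The snippet below is a minimal sketch (not from Supermicro's documentation), assuming a Linux install with root privileges and the standard dmidecode utility:

    # Minimal sketch: read back the speed each installed DIMM is actually
    # running at, to confirm it sits at the DDR4-2400 ceiling of the D-2123IT.
    # Assumes a Linux install, root privileges, and the dmidecode utility.
    import re
    import subprocess

    def configured_dimm_speeds():
        out = subprocess.run(["dmidecode", "-t", "memory"],
                             capture_output=True, text=True, check=True).stdout
        # Newer dmidecode prints "Configured Memory Speed", older versions
        # print "Configured Clock Speed"; empty slots report "Unknown".
        pattern = re.compile(r"Configured (?:Memory|Clock) Speed:\s*(\d+\s*(?:MT/s|MHz))")
        return pattern.findall(out)

    if __name__ == "__main__":
        for speed in configured_dimm_speeds():
            print(f"Installed DIMM configured at {speed}")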

Before delving into the motherboard's features, it helps to first review the capabilities of the Xeon D-2100 series SoCs in general, and of the D-2123IT in particular.

The Xeon D-2123IT, being the entry-level member of the family, comes with four processor cores and does not integrate the QuickAssist Technology (QAT) feature. Its memory controllers are also limited to DDR4-2400. Server vendors, however, can make use of the SoC's two PCIe 3.0 x16 interfaces and twenty HSIO lanes to create a variety of systems targeting different markets. The block diagram below shows Supermicro's approach in the X11SDV-4C-TP8F-01.
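
As an illustration of how those interfaces end up being carved up in a particular build, the negotiated link width and speed of every PCIe device can be read from sysfs on a running Linux system. The following is a rough sketch under that assumption; it is not specific to this board, and the set of devices reported depends on what is actually installed:

    # Rough sketch: list the negotiated PCIe link width and speed of each
    # device, showing how the SoC's PCIe and HSIO lanes have been allocated.
    # Assumes Linux with sysfs; some devices do not expose these attributes.
    from pathlib import Path

    for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
        width = dev / "current_link_width"
        speed = dev / "current_link_speed"
        if width.exists() and speed.exists():
            try:
                print(f"{dev.name}: x{width.read_text().strip()} @ {speed.read_text().strip()}")
            except OSError:
                pass  # devices without an active link may refuse to report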

The four DIMM slots are arranged on either side of the SoC heat-sink. At one end, we have the PCIe 3.0 x8 and PCIe 3.0 x16 slots. The baseboard management controller (ASPEED AST2500) is located above the x16 slot, and the M.2 SATA / PCIe 3.0 x4 (M-Key) slot is positioned such that an installed M.2 SSD covers the BMC SoC. Below that, we have a mini-PCIe 3.0 x1 slot and an M.2 B-Key slot (also muxed between SATA and PCIe, allowing either type of SSD to be used). Four SATA headers and two mini-SAS / U.2 (SATA / PCIe 3.0 x8) headers round out the other major components on the motherboard. The rear I/O on the board carries the LAN ports and USB 3.0 Type-A ports indicated in the block diagram.
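
Since the M.2 slots (and the U.2 ports) are muxed between SATA and PCIe, it can be handy to confirm which interface an installed SSD actually enumerated on. The snippet below is a small sketch of one way to do that, assuming a Linux install with util-linux's lsblk available:

    # Small sketch: print the transport (e.g. sata vs. nvme) of each physical
    # disk, revealing whether a drive in a muxed M.2/U.2 slot came up as SATA
    # or as PCIe/NVMe. Assumes Linux and the lsblk utility from util-linux.
    import subprocess

    out = subprocess.run(["lsblk", "-d", "-n", "-o", "NAME,TRAN"],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        fields = line.split()
        if len(fields) == 2:
            name, transport = fields
            print(f"/dev/{name}: attached via {transport}")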

The CSE-E302iL chassis used in the E302-9D has a removable top cover. Two 2.5" drives (up to 7mm in height each) can also be installed using a mounting tray inside the system. The power connections to the board come pre-installed, since the system relies on an external power supply. However, users still need to install the DRAM and storage drive(s) on their own.

The gallery above presents a view of the internals and Supermicro's approach to passively cooling a SoC with a TDP of 60W.

Comments

  • Jorgp2 - Thursday, July 30, 2020 - link

    Maybe you should learn the difference between a switch and a router first.
  • newyork10023 - Thursday, July 30, 2020 - link

    Why do you people have to troll everywhere you go?
  • Gonemad - Wednesday, July 29, 2020 - link

    Oh boy. I once had my mother mix up Wi-Fi "AC" 5GHz, 5Gbps, and 5G mobile networks. It took a while to explain those to her.

    Don't use 10G to mean 10 Gbps, please! HAHAHA.
  • timecop1818 - Wednesday, July 29, 2020 - link

    Fortunately, when Ethernet says 10Gbps, that's what it means.
  • imaheadcase - Wednesday, July 29, 2020 - link

    Put the name Supermicro on it and you know it's not for consumers.
  • newyork10023 - Wednesday, July 29, 2020 - link

    The Supermicro manual states that an installed PCIe card is limited to networking use (and will require a fan to be installed). An HBA card can't be installed?
  • abufrejoval - Wednesday, July 29, 2020 - link

    Since I use both pfSense as a firewall and a D-1541 Xeon machine (but not for the firewall) and I share the dream of systems that are practically silent, I feel compelled to add some thoughts:

    I started using pfSense on a passive J1900 Atom board which had dual Gbit on-board and cost less than €100. That worked pretty well until my broadband exceeded 200Mbit/s, mostly because it wasn’t just a firewall, but also added Suricata traffic inspection (tried Snort, too, very similar results).

    And that’s what’s wrong with this article: 10Gbit Xeon-Ds are great when all you do is push packets, but don’t look at them. They are even greater when you terminate SSL connections on them with the QuickAssist variants. They are great when they work together with their bigger CPU brothers, which then crunch on the logic in the data.

    In the home-appliance context that you allude to, you won’t have ten types of machines to distribute that work optimally. QuickAssist won’t deliver benefits, and the CPU will run out of steam far before even a Gbit connection is saturated when you use it just for the front end of the DMZ (firewall/SSL termination/VPN/deep inspection/load-balancing-failover).

    Put proxies, caches, or even application servers on them as well, and even a single 10Gbit interface may be a total waste.

    I had to resort to an i7-7700T, which seems a bit quicker than the D-2123IT at only 35 Watts TDP (and much cheaper), to sustain 500Mbit/s download bandwidth with the best gratis Suricata rule set. Judging by CPU load observations, it will just about manage the Gbit loads its ports can handle; I’m pretty sure that 2.5/5/10 Gbit would just throttle on inspection load, like the J1900 did at 200Mbit/s.

    I use a D-1541 as an additional compute node in an oVirt 3-node HCI Gluster setup with 3x 2.5Gbit J5005 storage nodes. I can probably go to 6x 2.5Gbit before its 10Gbit NIC becomes a bottleneck.

    The D-1541’s benefit there is lots of RAM and cores, while it’s practically silent with 45 Watts TDP and none of the applications on it require vast amounts of CPU power.

    I am waiting for an 8-core AMD 4000 Pro APU with a 35 Watt TDP to arrive on a Mini-ITX board capable of handling 64 or 128GB of ECC RAM, to replace the Xeon D-1541 and bring the price of such a mini server below that of a laptop with the same ingredients.
  • newyork10023 - Wednesday, July 29, 2020 - link

    With an HBA (were it possible, hence my question), the 10Gbps serves a possible use (storage). Pushing and inspection exceeds x86 limits now. See TNSR for real x86 limits (without inspection).
  • abufrejoval - Wednesday, July 29, 2020 - link

    That would seem to apply to the chassis, not to the mainboard or SoC.
    There is nothing to prevent it from working per se.

    I am pretty sure you can add a 16-port SAS HBA or even an NVMeoF card and plenty of external storage, if the thermals and power fit. A Mellanox 100Gbit card should be fine electrically, logically, etc., even if there is nothing behind it to sustain that throughput.

    I've had an Nvidia GTX 1070 GPU in the SuperMicro Mini-ITX D-1541 box for a while with no problems at all, functionally, even if games still seem to prefer Hertz over cores. Actually, GPU-accelerated machine learning inference was the original use case of that box.
  • newyork10023 - Wednesday, July 29, 2020 - link

    As pointed out, the D-2123IT has no QAT, so a QAT accelerator would take up an available PCIe slot. It could push 10G packets then, but not save them or think (AI) on them.
