ASRock Rack has created the first AM4 socket server boards, X470D4U, X470D4U2-2T

maybe I missed it… but does it use java or html5 for the IPMI interface ?

Hey @Altkey can you verify Java or HTML5 for the IPMI interface?

1 Like

I think my older ASRock Rack boards were Java (I'd need to check, and mine are LGA1150-era boards). Not sure if they've changed since then though.

There’s both.

1 Like

As in you choose at login or you update the firmware?

Idk, should support more than 4 SATA drives really and a single M.2 maybe.

What's a 9211-8i?

1 Like

Cheers!

There’s a web GUI where you can change various settings as well as start a KVM session. There’s a button for a web version and one for a Java version.

2 Likes

Got my board yesterday; it's a proper retail version. I haven't installed it yet, but I have a 2700, 16GB of ECC RAM & an M.2. It doesn't come with much though: the I/O shield, a single SATA 3 cable, a single M.2 screw, a basic manual, and that's it.

The board itself looks well made. The big white rings around the screw holes are weird, but I guess they make the holes easier to see. One possible issue is that there isn't a whole lot of room between the socket and the RAM, so be very careful with coolers. It looks like the Wraith Spire might fit with the closest slot populated (haven't tested it though), but if you go non-ECC and it has a heatsink… good luck :wink: .

1 Like

Thought I’d give you guys an update. Got my board running with the 2700, a stick of Crucial CT16G4WFD8266 16GB 2666MHz ECC & a WD Green 120GB M.2 (SATA).

I was a little worried the RAM wouldn’t work because I know Supermicro boards can be picky but it works in ECC according to Windows. Booting can be slow to get through the BMC initialisation (takes about 20 - 30 seconds) but once it’s through it doesn’t seem to do it again until you drain the capacitors by unplugging the power or switching off the PSU.

Side note on the M.2 screws, the board did come with 2 they just put them in separate bags for whatever reason.

1 Like

SATA port multipliers are unreliable garbage that will only give you problems. The best way to attach more hard drives to a motherboard when everything onboard is full is to use cheap SAS controller PCIe cards pulled from servers, especially ones based on an LSI chip and flashed with firmware for JBOD operation.

These cards come in a variety of brands, with Dell being the most common.
They can be found in two different configurations: one is flashed with RAID firmware, the other with “IT” firmware, which is the kind you want because the card passes each disk through without trying to do anything to it (JBOD). This allows software RAID and ZFS setups to be used with no problems.
eBay is the typical place to get these cards. “8i” means the card has 8 SAS lanes (2 physical ports) meant for internal connections; “16e” would mean 16 SAS lanes (4 physical ports pointed outside the computer). These run around $50.
https://www.ebay.com/sch/i.html?_from=R40&_trksid=p2380057.m570.l1313.TR4.TRC2.A0.H0.X9211-8i.TRS0&_nkw=9211-8i&_sacat=0
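To put rough numbers on what “8i” buys you, here's a quick back-of-the-envelope sketch. It assumes SAS2 link rates (6 Gb/s per lane, as on a 9211-8i) and 4 lanes per SFF-8087 connector; the 8b/10b encoding overhead figure is a general SAS2 property, not something specific to this card:

```python
# Rough bandwidth math for an LSI 9211-8i style HBA (SAS2, 6 Gb/s per lane).
# Each internal SFF-8087 connector carries 4 lanes, so "8i" = 2 connectors.

LANES_PER_CONNECTOR = 4
GBPS_PER_LANE = 6  # SAS2 link rate, before encoding overhead

def hba_raw_bandwidth(lanes):
    """Raw aggregate link rate in Gb/s across all lanes."""
    return lanes * GBPS_PER_LANE

def usable_bandwidth(lanes):
    """Approximate usable rate after 8b/10b encoding (80% efficiency)."""
    return hba_raw_bandwidth(lanes) * 0.8

print(8 // LANES_PER_CONNECTOR)   # 2 physical SFF-8087 ports
print(hba_raw_bandwidth(8))       # 48 Gb/s raw across the card
print(usable_bandwidth(8))        # ~38.4 Gb/s usable
```

So even eight spinning drives going flat out won't come close to saturating the card itself; the PCIe slot or an expander uplink is the more likely choke point.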

Because these cards are meant for servers with forced airflow, they can sometimes overheat in a normal desktop. It is recommended to take off the heatsink, drill two holes in opposite corners, twist-tie a little 40mm fan to it, and use some nice paste to put it back on. About $13.

Then SAS-to-SATA breakout cables are used to connect normal hard drives to the card. $10-13.

You'd use different cables for connecting actual SAS drives, or for connecting to some backplanes or a SAS expander.

The 8-lane cards (two physical ports) are the most economical to get. If you need to connect more drives and are perhaps running out of PCIe slots, a SAS expander can be used. These don't need a PCIe slot for data, but can be powered by one, or sometimes by a Molex cable. Using these expanders can bottleneck HDDs when all ports are filled under some heavy use cases, but it generally isn't a big issue. These used to run around ~$70.
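To gauge when that expander bottleneck actually bites, here's a hedged sketch. It assumes a single 4-lane SAS2 uplink between HBA and expander and ~250 MB/s sequential per modern HDD; your uplink width and drive speeds may well differ:

```python
# Sketch: when does a SAS expander's uplink become the bottleneck?
# Assumes one SFF-8087 uplink (4 lanes of 6 Gb/s SAS2) and HDDs doing
# ~250 MB/s sequential; the numbers are illustrative, not measured.

UPLINK_LANES = 4
LANE_MBPS = 6000 / 10  # 6 Gb/s with 8b/10b encoding ~= 600 MB/s per lane
HDD_MBPS = 250

def uplink_capacity_mbps():
    """Total usable uplink bandwidth in MB/s."""
    return UPLINK_LANES * LANE_MBPS

def drives_before_bottleneck():
    """How many HDDs at full sequential speed the uplink can feed."""
    return int(uplink_capacity_mbps() // HDD_MBPS)

print(uplink_capacity_mbps())      # 2400.0 MB/s total
print(drives_before_bottleneck())  # 9 drives before saturation
```

In other words, under these assumptions you only start leaving sequential throughput on the table past roughly nine drives hammering at once, which is why it's rarely an issue for a home NAS full of mixed workloads.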

Do pay careful attention to the connectors and what you are connecting to; there are a lot of options. 3Gb/s and 6Gb/s cables are flat-ish, 12Gb/s cables are square-ish.

When and where bottlenecks may occur is not something I am familiar with, but should anyone like, I can at least provide a sanity check for a setup they are considering due to having set up my own system.

Been running a Perc H200 flashed for FreeNAS for the last 6 months; a fan mod isn't necessary as long as the airflow in the case is sufficient. Not saying not to, since it's better for the hardware, but it's not strictly necessary if the case has decent air pushed into it.

Also, bookmarking that SAS expander, thanks :slight_smile:

Could anybody be so kind as to check whether “vanilla” Oracle Solaris 11.x (the latest free release for non-commercial use) runs without any hardware/driver issues on this board? Does the I211 NIC work for sure?

Thanks!

I've been looking for it for the last few weeks… it's out of stock.

There is a new version that supports dual 10G NICs instead of just gigabit:
https://www.asrockrack.com/general/productdetail.asp?Model=X470D4U2-2T#Specifications

3 Likes

But at a great cost, namely cutting the center PCIe 3.0 x4 to a measly 2.0 x1.

I'd rather just put a Supermicro X550-T2 PCIe 3.0 x4 card into that slot and maintain the flexibility.

I’ve ordered one X470D4U for experimentation purposes - planning a 3700X homelab to replace a shitty Kaby Lake E3 Xeon.

Is there any reason to doubt that these boards will handle 32 GB ECC UDIMMs (yes, I saw the spec sheet only stating 64 GB total system memory)?

1 Like

Oooooooh nice. May have to upgrade.

I think I found an error in the X470D4U documentation; I couldn't verify it since I only have the board and no spare AM4 CPU yet.

My unit shipped with BIOS 1.50, so Ryzen 3000 would probably not run prior to a BIOS update to 3.x.

What's unclear to me:

The X470D4U has 3 proper PCIe 3.0 slots:

  • PCIe slot 6 - x16 - Supplied by the CPU

  • PCIe slot 4 - x8 - Sharing lanes with slot 6; if both are populated they'll run in x8/x8 or x4-x4/x4-x4 configuration

  • PCIe slot 5 - x4 - Supplied by the CPU

So far so good: non-APU AM4 Ryzen has 24 PCIe 3.0 lanes, 4 of which are used for the connection to the chipset; the rest can be used by your own devices.

BUT

One of the two M.2 slots has a PCIe 3.0 x2 connection, not PCIe 2.0, so it has to be supplied by the CPU and not by the chipset, which can only supply PCIe 2.0 lanes.

Where are these two PCIe 3.0 lanes coming from?

The manual doesn’t state that there are “shared” M.2 PCIe slots.

I think that may be an error in the documentation like you said. This is a breakdown of the CPU and chipset diagram for X370/X470. The board has opted to replace the usual NVMe M.2 x4 with the PCIe 3.0 x4 slot instead, so the M.2 lanes are almost certainly coming from the chipset.
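For what it's worth, tallying the CPU lane budget from the slot layout quoted above makes the chipset explanation plausible. This is just arithmetic over the widths in the manual, not a statement about how ASRock actually routed the board:

```python
# Tally the PCIe 3.0 lane budget for a non-APU AM4 Ryzen on the X470D4U.
# Slot widths follow the layout quoted above; routing is assumed, not verified.

CPU_LANES_TOTAL = 24   # PCIe 3.0 lanes on non-APU AM4 Ryzen
CHIPSET_LINK = 4       # lanes used for the link to the X470 chipset

cpu_slots = {
    "slot6_x16": 16,   # splits to x8/x8 with slot 4 populated, still 16 total
    "slot5_x4": 4,
}

def spare_cpu_lanes():
    """CPU lanes left over after the chipset link and the x16/x4 slots."""
    return CPU_LANES_TOTAL - CHIPSET_LINK - sum(cpu_slots.values())

# 24 - 4 - 16 - 4 = 0: no CPU lanes left for a PCIe 3.0 x2 M.2 slot,
# so the M.2 lanes presumably come from the chipset instead.
print(spare_cpu_lanes())  # 0
```

Zero spare CPU lanes is exactly the contradiction raised above: the manual's "PCIe 3.0 x2" claim for that M.2 slot can't be satisfied from the CPU, which is why a documentation error (or chipset routing) is the likely answer.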