They don’t fit. The HL15 is made for HDDs on SATA/SAS, not for 2.5" NVMe SSDs.
Nothing wrong with EPYC, you get all the lanes you need for NVMe and pretty much everything else. Can’t say much about the board though, I don’t know much about Supermicro boards.
I’d go for 25Gbit ConnectX-4. 40G is a bit of a dead-end technology. With 25Gbit you can use 100Gbit switches with 25Gbit breakouts and move to 100G NICs later. And I’m not fond of the cheap, old, loud, and thirsty enterprise switches you usually end up with for 40G.
Get something with a high efficiency rating. Less waste → less heat → less noise. I’m about to pull the trigger on a BeQuiet! Dark Power 13 (Titanium rated, 95% efficiency at 50% load).
For the case…pick a standard case that can house the board (E-ATX compatible) and has one or two external 5.25" bays, so you can use U.2 hot-swap cages from Icy Dock. Or mount the drives internally. Or get PCIe carrier cards, which also means less cabling.
I know that the Fractal Torrent is great for EPYC (Wendell used the Torrent too), but there is just no space to mount and cool 4x U.2 unless you tape them to the back of the case fans.
I also considered the Mellanox ConnectX-6 for being a SmartNIC; unless I misunderstood, the compute for the network stack can be dumped onto it, leaving more CPU for the actual jobs.
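For what it’s worth, even an ordinary NIC already takes some of that work off the CPU; on Linux you can list what the kernel currently offloads with `ethtool -k`. A small sketch (Linux-only, assumes `ethtool` is installed, and “eth0” is a placeholder interface name):

```python
# List the classic stateless offloads a NIC is doing for the kernel.
# Linux-only sketch; "eth0" is a placeholder interface name.
import subprocess

iface = "eth0"
features = subprocess.run(
    ["ethtool", "-k", iface], capture_output=True, text=True, check=True
).stdout

# Segmentation, checksumming, and receive-offload are the big CPU savers
for line in features.splitlines():
    if any(key in line for key in ("segmentation", "checksumming", "receive-offload")):
        print(line.strip())
```

A SmartNIC/DPU goes well beyond these stateless offloads, but this shows the baseline that’s already off the CPU.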
For the drive caddies, I thought maybe something 3D-printed or third-party; some people will definitely want to put fast storage in that case, at least I hope so.
If I use 15x Samsung PM1653, will the HL15 manage to deliver all of their performance?
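For scale, a quick back-of-envelope (the drive figure is the PM1653 spec-sheet sequential read, and a single PCIe 4.0 x8 HBA is my assumption):

```python
# Back-of-envelope only; drive figure is the PM1653 spec-sheet sequential
# read, and the single PCIe 4.0 x8 HBA is an assumption about the build.
DRIVES = 15
PER_DRIVE_GBPS = 4.3        # PM1653 rated seq. read, dual-port SAS-4
HBA_UPLINK_GBPS = 8 * 2.0   # ~2 GB/s per PCIe 4.0 lane, x8 card

aggregate = DRIVES * PER_DRIVE_GBPS
print(f"drives combined: {aggregate:.1f} GB/s")       # ~64.5 GB/s
print(f"HBA uplink:      {HBA_UPLINK_GBPS:.1f} GB/s")  # ~16 GB/s
```

So on paper the controller’s uplink would saturate long before 15 of those drives do.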
…yeah, asking about storage systems can sometimes be like asking about religion; people have their strong preferences.
The problem with the HL15 is that it has a built-in backplane that you are forced to use, even if you have caddies to adapt the 2.5" drives to the 3.5" bays, and that backplane doesn’t specify support for the 24G SAS interface the PM1653s need for full speed (they are SAS-4 drives, not NVMe).
I’d stay away from 40G Ethernet as @Exard3k said; it’s a dead-end technology and a power hog.
I believe there are some 25G Ethernet adapters that are ConnectX-6 based.
Intel has 25G and 100G adapters based on the E810 chipset as well that are worth a look. Not sure how the Nvidia/Mellanox adapters stack up against the Intel ones in terms of what acceleration technologies they support, though.
I considered removing the backplane of the HL15 and wiring the SSDs directly to the RAID card, but I am not sure there are cables that would work with this kind of setup.
Any recommendation?
I would really like to go with the Micron 9400 Pro, even over PCIe Gen 5 devices; these seem the most stable and have better thermals compared to other options.
It’s a case for 15 top-loading HDDs. If you don’t use HDDs, you are looking at the wrong case. Basically any other case is better suited for NVMe.
Micron released the new 7500, their very latest PCIe 4.0 drive with 232-layer NAND and NVMe 2.0. They haven’t hit retail yet, but I will wait for them before I buy my stuff. Maybe the 7450 can then be had for cheaper, or the 7500 is just damn good.
Cooling on the 7450 and 7500 should also be easier with the additional heatsink (though I can’t say whether it makes a difference).
Cables to plug directly from a RAID/HBA card into U.2 or U.3 SSDs are available, but I’m not sure they’d fit in the HL15 because of how close the “bottom” of the disk bay is to the bottom of the actual case; you’d need a right-angle cable, which I’ve never seen.
There are much better cases for 2.5" NVMe storage anyway; pretty much anything with 5.25" bays can house U.2/U.3 NVMe drives with Icy Dock’s adapters.
I think the case of choice would be (if possible) the “Falcon Northwest RAK”; I wrote an email to their sales, and I wonder if they will sell it separately.
For the U.3: ICY DOCK MB699VP-B V3
PSU: FSP Twins PRO - 900W
Silverstone has some twin PSUs as well; would one of those be a better choice?
Oops, that’s a typo. It should be SNK-P0084AP4. It is effectively the exact same cooler as the XE04.
Long ago I had the A13, and while it’s not directly comparable to the J12, it was very, very loud. The J12 is supposed to be even louder.
Silverstone rates the XE04 for 400 watts, although I don’t think I’d want to run the cooler that hard for a sustained period.
The problem is that I already have the Dynatron J12. Thanks a lot for the info; I will change the fan to a Noctua and hope it fixes the noise problem.
What is worrying me right now are the VRM heatsink and DRAM temperatures. Somebody told me that I should use some high-airflow fans set to low RPM, like the “AFB1212HHE-TP02”, but as far as I know those fans are quite the airplane engines by themselves.
ahh I see; one positive to the J12 is that it is going to cool the VRM better than the XE04.
I would strongly recommend not replacing the J12’s fan with a Noctua fan; Noctua’s fans are very weak in this size category and would definitely undercool the VRM. I’d recommend swapping the fan for something like a 9RA0812P1G001 (or maybe a 9RA0812P1K001 if you don’t want to lose much performance over the OEM fan). It is a much nicer fan with better blade geometry than the Noctua/OEM, and they actually bothered to balance it with extra weights in the form of glue blobs.
I’m running the 92mm version of this fan and am very happy with its performance.
Word of warning though: the fans come with bare wire ends, so you would have to be comfortable attaching/soldering the 4-pin fan connector yourself.
As long as the CPU cooler is blowing a bunch of air over the VRMs, you shouldn’t have too many problems with VRM temps.
Cooling the DRAM might be difficult depending on the applications you’ll run. Some applications really stress the memory, and in these modern systems it can overheat even with reasonable airflow over it. Many other applications are easy on the DRAM, and it will stay cool without air even blowing over it.
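If you want to know which camp your workload is in before finalizing fans, a crude bandwidth soak is enough to show whether the DIMMs heat up. A minimal sketch (sizes and runtime are arbitrary; watch DIMM temps over IPMI while it runs):

```python
# Crude memory-bandwidth stressor: stream two arrays far larger than the
# CPU caches so the load lands on the DIMMs, then watch temps via IPMI.
# Sizes and iteration count are arbitrary; scale to your machine.
import numpy as np

GIB = 1024 ** 3
src = np.ones(2 * GIB // 8, dtype=np.float64)   # ~2 GiB
dst = np.empty_like(src)                        # another ~2 GiB

for _ in range(100):        # raise for a longer soak
    np.copyto(dst, src)     # streaming copy, almost purely memory-bound
```

A single copy thread won’t saturate an EPYC’s memory channels, so run a few instances in parallel if you want a worst-case test.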
The AFB1212HHE-TP02 is a decent fan for its size and noise class, but it is a 120mm fan.
No PCIe 5.0 hardware RAID controllers exist commercially yet.
I know GRAID’s first-gen product, which I believe they called the 1000 series, did really dangerous things with data to improve speed, so I could not recommend it.
VROC is another semi-hardware RAID option people mention, but it also does (less) bad things with your data: specifically deferred parity calculation, because RAID 5 parity calculations would otherwise overwhelm the CPU.
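To make concrete what that parity work is: every RAID 5 stripe write needs an XOR across the stripe’s data blocks, and “deferred” means the array acknowledges the write before that XOR (and the parity write) has happened. A minimal sketch of the math (pure illustration, not how VROC or any specific driver implements it):

```python
# RAID 5 parity is a plain XOR across a stripe's data blocks.
# Pure illustration; real implementations use SIMD XOR in the kernel.
import os

DATA_DISKS = 4          # hypothetical 4 data disks + 1 parity
BLOCK = 4096            # bytes per stripe unit

def xor_parity(blocks):
    """XOR all blocks together; this is the P block RAID 5 stores."""
    out = bytearray(BLOCK)
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

stripe = [os.urandom(BLOCK) for _ in range(DATA_DISKS)]
p = xor_parity(stripe)

# Losing any one block is fine: XOR the survivors with parity to rebuild
lost_index = 2
survivors = [b for i, b in enumerate(stripe) if i != lost_index]
assert xor_parity(survivors + [p]) == stripe[lost_index]
```

The danger with deferring is the window where data blocks are on disk but the parity block is stale; lose power in that window and a later rebuild reconstructs garbage (the classic RAID 5 write hole).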