New W680 Server sanity check

Hey there - new to the forums and I’ll confess right off the bat that I am a filthy casual.

I’ve built several servers, with varying success. I started with a FreeNAS server, experimented with Unraid and Ubuntu servers, and until a few months ago ran a Synology NAS paired with a NUC as a hypervisor.

I want to go back to my roots, and the recent Optane video(s) from the YouTube channel got me thinking ZFS again.

I’m thinking of the following as my build- I’d appreciate any guidance or thoughts.

i7-12700K - I upgraded to a 13900K for my gaming rig and have this sitting around.

64GB ECC unbuffered DDR5 from Micron - seemed like a safe choice - not on any QVLs though. We’ll see when the boards get here.

4x P1600X 118GB Optane SSDs - I want to run these as a ZFS metadata special vdev (this is actually the whole reason for this build)

Fitting 4 M.2 drives was a challenge - I want to run a somewhat efficient server and want to transcode for Plex, which is what’s pushing me toward Intel and the W680. And because I can’t seem to find a board with x4/x4/x4/x4 bifurcation, I’ve narrowed it down to two boards.

Asus W680 ACE IPMI or ASRock W680D4U-2L2T/G5 - I’m struggling with this decision.

ASRock
I like that the ASRock board has dual 10GbE and IPMI built in. I would have to use the OCuLink ports and pair them with an Icy Dock MB873MP-B to fit all of the Optane drives. If I go this route, I’m going to pair this with a JBOD and an LSI 9200 card, and that should give me all the storage I’ll need for the foreseeable future.

ASUS
Has 3 M.2 slots - I can run the fourth drive off a cheap add-in card that doesn’t require bifurcation, so I won’t need the Icy Dock.
I could use the X540 card I have lying around or a Mellanox SFP+ card for networking, and use the remaining PCIe slot for the LSI 9200 card and JBOD.

OOOORRR…I could just cheap out and use an H670 board I got for another project and just YOLO it with DDR4 I have.

It might be good to just play around with it before deciding to sell the rackmounted Synology units I have.

I have confidence in ZFS, I just don’t share the same confidence in my ability to implement it.

Any thoughts are appreciated.

I have the X570 equivalent of that board, so a direct comparison isn’t really possible, but the feature set is unmatched. I love my IPMI, and I like not having to buy a NIC and spend a slot to get 10GbE. The Intel X550 on-board NIC is just great. Passthrough of individual ports, speed…flawless in my experience. The board was totally worth it.

24 threads is usually overkill for storage, but with ZFS you can play around with different compression algorithms to get the most out of your cores. Saturating 10Gbit with zstd won’t be any problem.
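Compression is set per dataset, so it’s easy to experiment. A minimal sketch, assuming a hypothetical pool named `tank`:

```shell
# Hypothetical pool/dataset names - adjust to your layout.
zfs set compression=zstd tank/media      # heavier CPU, better ratio
zfs set compression=lz4 tank/backups     # near-free CPU cost
# After writing some data, compare the achieved ratios:
zfs get compressratio tank/media tank/backups
```

Note that only newly written blocks use the new algorithm; existing data keeps whatever it was compressed with.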

200GB is very little by today’s standards. Depending on the kind of data on your pool it may be enough, but at that capacity you can’t really afford to use special_small_blocks for small files as well. And if you don’t have much metadata in the first place, remember that metadata is usually cached in ARC anyway, so tuning parameters or adding memory might be a better approach. You may need/want those slots for L2ARC and/or SLOG instead. What is useful and what isn’t depends entirely on the kind of data and usage.
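If you do go the special vdev route, `special_small_blocks` is a per-dataset opt-in, and you can check what the special vdev and ARC are actually holding. A sketch with hypothetical pool/dataset names:

```shell
# Route blocks of 16K and smaller to the special vdev for this dataset
# (keep it below the dataset's recordsize, or everything lands on special):
zfs set special_small_blocks=16K tank/projects
# See per-vdev allocation, including the special vdev:
zpool list -v tank
# Inspect how much of ARC is metadata (OpenZFS on Linux):
arc_summary | grep -i meta
```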

But you need 4+ lanes and a slot to get a 10Gbit NIC back into the mix. I have 2666MT/s ECC memory and it certainly isn’t a bottleneck anywhere. For HDD/SATA SSD pools, memory capacity is vastly more important to ZFS than frequency/bandwidth.

I always managed to fill my storage, no matter how much I had. Consolidation onto a centralized ZFS pool with compression certainly slowed things down quite a bit. But I also added another mirror to the pool in the meantime. Building with expansion in mind has served me well so far.

Looking forward to the new ASRock Rack home server boards, especially the newer Intel ones. There are lots of forum threads and people here with the AMD boards. My X570D4U-2L2T is probably the best board I ever bought, although it has some issues - there’s a 100+ page thread for a reason :slight_smile:

The learning curve is a bit steep in the beginning, but once you’re past it, you’ll be glad you did it. The design principle was to “end the suffering” of storage administration, after all.

Yeah - I have a 905P that I might use as a SLOG.

Well I bought a 45 bay JBOD - so that should do for me for now. I hope.

A SLOG only needs to hold about 5 seconds of writes, so 8GB of capacity is fine for 10Gbit networking.
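The arithmetic behind that: ZFS only needs the SLOG to hold roughly one transaction-group window (about 5 seconds by default) of incoming synchronous writes. A quick sanity check, assuming a fully saturated 10Gbit link:

```shell
# Worst case: link speed (bits/s) / 8 * txg window (s) = bytes needed
link_bits_per_sec=10000000000   # 10 Gbit/s
txg_seconds=5                   # approximate default txg flush interval
needed_bytes=$(( link_bits_per_sec / 8 * txg_seconds ))
echo "$(( needed_bytes / 1000000000 )) GB"   # prints "6 GB"
```

So roughly 6.25GB in the worst case, which is why ~8GB is plenty for 10Gbit.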

So better to use a P1600X for SLOG and the 905P for special. Wasting 950GB of a 905P on SLOG while lacking capacity on special is not the way to go.

Got it, I assume I’ll want redundancy on special devices - that’s why I went with 4 P1600Xs.

You always need redundancy on your vdevs. But 200GB is very limited, while 960GB is much more reasonable. My 1TB special is at 40% capacity with 24TB in the pool, and it should fit nicely at the 48TB max capacity. Wendell used 4TB of special for his 172TB array. These are metadata sizes that can’t be cached in memory, while 100-200GB can still easily be handled by memory. And with large files it doesn’t matter at all, unless you run `tree` or `ls -lR /` and reboot all the time.
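For reference, adding a mirrored special vdev to an existing pool is a one-liner. A sketch with hypothetical pool/device names - and note the special vdev’s redundancy should match the pool’s, since losing it loses the pool:

```shell
# Add two NVMe drives as a mirrored special vdev (hypothetical names):
zpool add tank special mirror /dev/nvme0n1 /dev/nvme1n1
zpool status tank   # a new "special" section appears under the pool
```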

I have not done this, but can’t you switch to a bigger vdev for special and log when the time comes and the drive runs full? I haven’t looked for the command line, but at least my TrueNAS web UI has an option to replace these vdevs with another (unused and unpartitioned) drive.

ZFS expand/autoexpand should work. That means once all disks are resilvered, capacity is increased.
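In command-line terms that would look roughly like this, with hypothetical pool and device names - replace one side of the special mirror at a time and let each resilver finish:

```shell
zpool set autoexpand=on tank
# Swap the first P1600X for a larger drive and wait for resilver:
zpool replace tank <old-p1600x-a> <new-905p-a>
zpool status tank            # wait until the resilver completes
# Then the second side; capacity grows once both are replaced:
zpool replace tank <old-p1600x-b> <new-905p-b>
```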

But a special vdev that spills over isn’t a problem, because ZFS just allocates further blocks to the HDDs instead (as normal) once the special has no space left. So you don’t break or halt the pool. Getting that data back onto a newly expanded special is probably the bigger problem.
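Since blocks only land on the special vdev at write time, getting existing metadata and small blocks moved over means rewriting the data. One blunt option, sketched with hypothetical dataset names, is replicating the dataset so every block is reallocated under the current policy:

```shell
zfs snapshot tank/data@migrate
zfs send tank/data@migrate | zfs receive tank/data_new
# Verify, then rename/destroy as appropriate:
zpool list -v tank   # special vdev allocation should grow
```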

Got it, maybe I can use 1 P1600X for a SLOG and then get another 905P (or 3) - from what I understand, SLOG redundancy is nice but not necessary. I could return the Icy Dock NVMe dock (though it seems like a very special device) and just run the 905Ps through the OCuLink ports on the ASRock board. The only reason I’m thinking of going with 1 P1600X is that the ASRock board only has 1 M.2 slot.

Hi Josh,

I am just casually looking for user experiences/potential guides for the ASRock Rack W680D4U-2L2T/G5.

I just received my board and am waiting for RAM to start my first home lab build.

Living in HK, it is still extremely hard to find DDR5 ECC UDIMMs in the retail market. Are these modules readily available in your area/country?

Yes, it’s now pretty available through eBay - I’m using 128GB of Micron and it’s working very well. I got an opportunity to go 25GbE, so I swapped the board out for an Asus ACE W680 IPMI, but the ASRock system worked well too. I don’t know if I can offer any useful advice - especially on this forum - but let me know how I can help!

Thank you Josh for the tip on eBay. Let me search to see if any sellers will ship DDR5 ECC UDIMMs to Hong Kong.

I am looking for either Kingston or Micron, 32GB x 4.
