UGREEN NASync PCIe Lane Allocation -- Practical Performance Implications?

I’m currently backing the UGreen NAS 8-bay.
I’ve got about a week to decide if I want to cancel my pledge or not.

A couple of things have come out recently that have me wondering if I’d be better off just building my own:
(1) One of the NVMe slots is PCIe 3.0 x2, the other is PCIe 4.0 x4. So in a ZFS mirror, your fastest speed is capped by the 3.0 x2 slot (~2 GB/s). Is that realistically a bottleneck for, say, Proxmox VM storage in a home server environment?

(2) SATA is on a Gen 3.0 x2 link running four of the drive bays, and the other 2-4 bays, depending on your model, are run off two SATA ports internal to the CPU (2x SATA 3.0). I can see this being an issue for SATA SSDs, but is it a real bottleneck for 6-8 SATA 7200 RPM HDDs in ZFS?

(3) [ZFS Woes] RAM is limited to 64 GB DDR5. If I wanted to fill it with 8 HDDs and use ZFS, what drive size would be the max before I theoretically ran into issues with not having enough RAM for the ARC? With or without deduplication? I’d planned on using 14 TB drives, but that might not be the way to go. The only ZFS system I’ve used so far is all-flash, with a ridiculous amount of surplus RAM vs. available storage, so this hasn’t come up. (Though maybe with 112 TB of raw storage, before putting it in a ZFS pool, I don’t need deduplication?)

I’m the only real human user of this thing. Otherwise, it’ll be backing storage for VMs and LXCs that need mass storage, mass storage for Proxmox Backup Server, backing storage for Macs using Time Machine or Carbon Copy Cloner, etc.

I’ve been teaching myself from excellent YouTube videos and documentation meant for people who get paid to do this, so I apologize if (3) is an obvious sort of question. I’d rather ask now than find out by spending a bunch of money on HDDs and having regrets. :slight_smile:

I assume a lot of us have been eyeing these.
That said, I too was disappointed by the PCIe lane starvation.
I am NOT a ZFS expert, but my understanding is that ZFS will use as much or as little RAM as you give it. Basically, if it’s available it will want to consume it, and if not, it will work with what it has. ZFS configuration can be optimized, but it generally runs OK on the defaults.
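If you do want a hard ceiling on it, on Linux OpenZFS the knob is the zfs_arc_max module parameter. A quick sketch of how I’d size it; the 50/50 split with the guests is just an example, and double-check the parameter path on your install:

```python
# Sketch: pick a zfs_arc_max for a 64 GB box that also runs VMs/LXCs.
# Assumptions: Linux OpenZFS, where the cap lives at
# /sys/module/zfs/parameters/zfs_arc_max and 0 means "use the default"
# (typically about half of RAM).
from pathlib import Path

GIB = 1024 ** 3
ARC_MAX_PARAM = Path("/sys/module/zfs/parameters/zfs_arc_max")

def suggested_arc_max(total_ram_gib: int, reserve_gib: int) -> int:
    """Leave reserve_gib for the guests and give the rest to the ARC."""
    return max(total_ram_gib - reserve_gib, 1) * GIB

cap = suggested_arc_max(total_ram_gib=64, reserve_gib=32)
print(f"proposed zfs_arc_max = {cap} bytes ({cap // GIB} GiB)")
if ARC_MAX_PARAM.exists():
    print("current value:", ARC_MAX_PARAM.read_text().strip())
# To apply as root:  echo <bytes> > /sys/module/zfs/parameters/zfs_arc_max
```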
As for the PCIe 3.0 x2: ouch. Max theoretical there is ~2 GB/s.
I think that one stung me the most when I learned about it.

You have to ask yourself: what are your needs?
Do you frequently back up or move large files to/from the NAS?
If you do, is it automated and done after hours (you’re away and won’t notice the bottleneck)?
For the average single user watching their Plex/Emby movies, it won’t be an issue. Running the *arrs also shouldn’t cause concern.

In the end, it’s a lovely little box with an OS that shows potential, or you could load one of your current favorite OSes. That said, after I learned of the limitations in the hardware, I’m not buying one; my own choice. I feel I can get more capability (which doesn’t mean I will use said capability) with a roll-your-own NAS.


This is where I am with it right now; I’m really close to cancelling my pledge. It’s too expensive, IMHO, even at the 40 percent discount level I’m at, to justify that kind of hardware hobbling.

I’d probably be fine with the limited SATA lanes and PCIe lanes on the NVMe now, but for $1,500 MSRP, this machine should have a long, long life.

For what it’s worth, as it sounds like you’re on the fence about it as is:

Not an apples-to-apples comparison, but I ran a test on a ThinkPad T480 (i5-8350U, 64 GB RAM) with a pair of drives in a ZFS mirror at Gen3 x2 speeds on each drive. With ARC caching, it was reading about as fast as my Gen4 x4 drive does on AM5. Writes were slower, as one might expect: about 1,000 MB/s.

I won’t go so far as to say “you won’t notice,” because I don’t know what you will or won’t notice. But it’s still faster than SATA SSDs on write, and significantly faster on reads.

Gen3 x2 should be good for about 1.9 GB/s theoretically, so even with 4 SATA SSDs on that link it shouldn’t be significantly bottlenecked. HDDs even less so (150-250 MB/s per spindle in their ideal sequential life, depending on the drive’s RPM and where the data sits on the platter).
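The napkin math behind that, if it helps (all ballpark per-drive figures, not measurements):

```python
# Throughput budget for a PCIe 3.0 x2 uplink feeding a SATA controller.
# Ballpark figures: ~0.985 GB/s per Gen3 lane after encoding overhead,
# ~550 MB/s per SATA SSD, ~150-250 MB/s per 7200 RPM HDD sequential.
link = 2 * 0.985                  # 3.0 x2 ≈ 1.97 GB/s
four_sata_ssds = 4 * 0.55         # ≈ 2.2 GB/s, mildly clipped by the link
four_hdds_best_case = 4 * 0.25    # ≈ 1.0 GB/s, nowhere near the link

print(f"PCIe 3.0 x2 link:  {link:.2f} GB/s")
print(f"4x SATA SSD:       {four_sata_ssds:.2f} GB/s")
print(f"4x 7200 RPM HDD:   {four_hdds_best_case:.2f} GB/s")
```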

RAM usage from the ARC isn’t directly dependent on drive size, except when using dedup. Most everything I’ve read recommends very strongly against dedup (up to and including a scenario where the pool couldn’t be used because the dedup table had grown too large, and the owners had to wait for a server with more than their existing server’s max RAM capacity just to mount it), and to use compression instead.
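For a sense of scale on why dedup gets scary, the usual back-of-envelope is ~320 bytes of RAM per unique block in the dedup table. Rough numbers, assuming 128K records (smaller records make it much worse):

```python
# Rule-of-thumb RAM estimate for a ZFS dedup table (DDT).
# Assumes the commonly cited ~320 bytes of ARC per unique block and a
# 128 KiB average block size; only data written to dedup-enabled
# datasets counts, but the worst case is the whole pool.
def ddt_ram_gib(unique_tib: float, block_kib: int = 128, bytes_per_entry: int = 320) -> float:
    blocks = unique_tib * 1024**4 / (block_kib * 1024)
    return blocks * bytes_per_entry / 1024**3

for tib in (14, 56, 112):
    print(f"{tib:4d} TiB unique -> ~{ddt_ram_gib(tib):.0f} GiB of DDT to keep in RAM")
```

Even at half the pool actually deduped, a 64 GB box is clearly underwater, which is why compression is the usual answer instead.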

$1,500 for 6 bays and 2 NVMe slots does sound like a lot, though. Synology and Asustor have similar offerings (the DS1821+ or the Lockerstor AS6508T), in both cases for at least about $500 less by a quick poke at Amazon.

I would say mostly not, unless you have particularly performance-sensitive backups. Some of the flows you mention can get over 1 GB/s but, in my experience, software that sustains more than 1-2 GB/s over transfers large enough for it to matter is uncommon to rare. 10 GbE is ~1.2 GB/s, so a ~1.7 GB/s 3.0 x2 NVMe won’t be limiting unless both NICs are reasonably active.

For your 3.5″ question, figure ~250 MB/s large-file sequential per actuator, as @Molly mentioned. If you’re thinking a couple of RAID10s, that’s ~1 GB/s read capability on each array, given utilization of both mirrors, and ~500 MB/s write. Double it if Exos 2X14 is the reason you mentioned 14 TB and you’re thinking an eight-actuator RAID10. Same if it’s JBOD and the workloads have the threads, async calls, data layout, or whatever else it takes to keep all the heads active. So potentially NIC limited, but more likely software limited; rough numbers below.
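Using the ~250 MB/s per actuator and ~1.2 GB/s 10 GbE figures from above (ballpark layouts, not benchmarks):

```python
# Best-case sequential ballpark for a couple of 4-drive RAID10 layouts,
# next to a 10 GbE NIC. ~250 MB/s per actuator is the figure used above.
MB_PER_ACTUATOR = 250
TEN_GBE_MBPS = 10_000 / 8   # ~1250 MB/s line rate, before protocol overhead

# name: (actuators streaming on read, actuators' worth of unique data on write)
layouts = {
    "one 4-drive RAID10, single-actuator 14TB":    (4, 2),
    "one 4-drive RAID10, dual-actuator Exos 2X14": (8, 4),
}

for name, (read_act, write_act) in layouts.items():
    print(f"{name}: ~{read_act * MB_PER_ACTUATOR} MB/s read, "
          f"~{write_act * MB_PER_ACTUATOR} MB/s write")
print(f"10 GbE line rate: ~{TEN_GBE_MBPS:.0f} MB/s")
```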

Wait and see for me. Benches on the DXP4800 Plus are often well under hardware potential and it’s pretty noisy, so I’m guessing UGOS probably has a ways to go yet to effectively utilize DXP8800 Plus hardware. The potential’s there and I hope it works out for UGreen.

I’m not exactly reproducing the OP, but perhaps a block diagram will appear for the DXP8800. An i5-1235U with a 600-series chipset and 4.0 x4 + 3.0 x2 NVMe slots leaves 4.0 x4 and 3.0 x12 lanes available, with an x4 going to the expansion slot. If the NICs were AQC113s, that’s 4.0 x2, and two ASM1164s would support all eight drive bays from 3.0 x4 without much potential for lane bottlenecking (the 600 series offers two SATA ports from 3.0 x2, not four, and there’s no SATA off the i5-1235U). That leaves 4.0 x2 and 3.0 x4 unallocated. So I’m curious what UGreen actually implemented.
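Written out as a lane budget, that guess looks like this (the controller picks and putting the expansion slot on Gen3 are my assumptions to make the arithmetic work, not a teardown):

```python
# Lane budget for the hypothetical DXP8800 layout sketched above.
# Controller picks (AQC113 NICs, ASM1164 SATA) and the Gen3 x4 expansion
# slot are assumptions, not a confirmed bill of materials.
available = {"4.0": 4, "3.0": 12}   # lanes left after the 4.0 x4 / 3.0 x2 NVMe slots

consumers = [
    ("expansion slot",            "3.0", 4),
    ("2x AQC113 10 GbE NICs",     "4.0", 2),
    ("2x ASM1164 -> 8 SATA bays", "3.0", 4),
]

used = {"4.0": 0, "3.0": 0}
for name, gen, lanes in consumers:
    used[gen] += lanes
    print(f"{name:28s} PCIe {gen} x{lanes}")

for gen, total in available.items():
    print(f"PCIe {gen} unallocated: x{total - used[gen]}")
```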

+1 if ZFS isn’t required. No Synology drive markup (or reporting crippling) and, from the handful of UGreen Kickstarter images, maybe better airflow.

