Hi, I have a 20-bay NAS that's in need of a bit of a maintenance upgrade.
Here are the current specs:
TGC-4420 Rack Mountable Server Chassis - 4U
MSI 890FXA-GD70
AMD Phenom™ II X6 1090T
16GB DDR3 (only 12GB working, as one slot is dead)
3 x LSI 9201-8i HBAs
52TB shared across 18 disks, with 2 parity drives and 2 SSD cache drives
The 52TB is a mix of SAS Constellation ES.3 and SATA IronWolf drives
FSP FSP1200-50ADB 1200W industrial ATX power supply (because I needed a lot of Molex connectors and wanted to avoid splitters)
Unraid OS Pro
The problem I'm having is finding a decent-performing system with at least 3 x8 slots on the cheap.
The 890FXA-GD70 with a Phenom seems to be a freak, as it's got five physical x16 slots, all Gen2 (wired as 2 x16, 2 x8 and 1 x4).
The main issue is that the bad RAM slot is possibly a sign of things to come. I use a PowerEdge R410 with 2 x X5660 and 128GB RAM for my virtualisation, so the NAS just really needs to be a NAS and run a basic media centre (Emby) and Nextcloud.
The only other option I can see is to replace two of the 8i HBA cards with a single 16i card, but most desktop boards I can find have an x16 slot and an x4 slot, so I'm still short an x8 slot.
Yeah, the 890FXA boards were the dump trucks of the era…
An H8SGL and an Opteron would be nearly identical, but you could use a ton of DDR3 ECC registered RAM on it. It has onboard video. I still have one of these in production.
Next would be one of the Chinese X79 boards, but you would need to buy or modify a GPU to fit in one of the x1 slots so that the three full-length slots stay open, as these do not have onboard video.
Anything beyond that and cost goes up a lot, i.e. AM4, SP3; even X99 is still pretty expensive at the moment.
Is this worth the upgrade?
In theory I can reuse my current memory without any issues.
I'll have to throw in a dual-port Intel PCIe x1 card just so I at least have redundant gigabit.
It's got AES-NI, and you get to keep the rest of the system… it's old, but a good deal (assuming those are $AUD).
I wonder… you have 50T ($750 USD) worth of capacity, give or take 1500-2000 IOPS… what's the ROI/break-even for electricity compared to, say, a much newer but lower-IOPS AM4 rig? (e.g. take 4 x 18T drives in raidz1, stick them straight into the motherboard of a Ryzen 5 5500, and use NVMe for L2ARC)
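One way to frame that break-even - a rough sketch only, where the wattages, tariff and rig price are all placeholder assumptions, not figures from this thread:

```python
# Back-of-envelope payback for swapping the old rig for a lower-power one.
# Every number below is a placeholder, not a measurement from this thread.
OLD_WATTS = 245    # assumed idle draw of the Phenom NAS
NEW_WATTS = 80     # assumed idle draw of a Ryzen 5 5500 build
TARIFF = 0.19      # AUD per kWh, assumed
UPFRONT = 500      # assumed cost of board + CPU + RAM, AUD

saved_per_year = (OLD_WATTS - NEW_WATTS) / 1000 * 24 * 365 * TARIFF
print(f"saves ~${saved_per_year:.0f}/year, pays back in "
      f"{UPFRONT / saved_per_year:.1f} years")
# -> saves ~$275/year, pays back in 1.8 years
```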
Five x16 PCIe 2.0 slots is 80 lanes of PCIe 2.0,
equal in bandwidth to
40 lanes of PCIe 3.0, or
20 lanes of PCIe 4.0.
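A quick sanity check on that equivalence, using the approximate usable throughput of ~0.5 GB/s per PCIe 2.0 lane, doubling each generation:

```python
# Aggregate bandwidth of five physical x16 PCIe 2.0 slots, expressed per gen.
GBS_PER_LANE = {2: 0.5, 3: 1.0, 4: 2.0}  # approx. usable GB/s per lane

total_gbs = 80 * GBS_PER_LANE[2]         # 80 lanes of PCIe 2.0 = 40 GB/s
for gen, per_lane in GBS_PER_LANE.items():
    print(f"PCIe {gen}.0: {total_gbs / per_lane:.0f} lanes ~= {total_gbs:.0f} GB/s")
```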
Check out this card: x8 PCIe 3.0, 16 lanes of 12Gb/s SAS.
I was recently educated that one of the features of 12Gb/s SAS is "DataBolt", which lets several slower links be carried over a faster link (12Gb/s or quicker) simultaneously without upgrading your drives, though you may need a new expander.
These are very old systems; consider getting a more recent CPU on a smaller process node, which could save you on power bills. You are saving on parts, but you could actually be spending more on operating costs.
For less than $100 it seems like a good band-aid till I can afford to reassess the whole setup.
I thought of converting it to a JBOD, or using an expander card, but then I'm relying on a single point of failure.
HDDs were $32 each, except for my initial 4 x IronWolf, which were purchased back when they were $245 each.
The system has been built up over 6 years; the main factor is the cost of the build.
I've been pushing money into my business, so larger HDDs and newer hardware are out of the question at the moment.
So I have a Sonoff Pow2 monitoring the rack;
readings are at idle, which is where they spend most of their time.
OPNsense box - i5 2400: 50W
Media centre - i5 2500 with 6 HDDs (hoping to merge this into the NAS): 100W
Security box (cameras) - i5 6600K with 4 HDDs (might merge into NAS): 75W
Proxmox R410 - 2 x X5660: 250W
Unraid NAS - currently the Phenom X6 (the system to be fixed): 245W
That's 720W continuous, or 17.28kWh a day:
$3.30 a day / $1,204.50 in power per year.
Ryzen 5600X based workstation with ultrawide: 220W
$1 a day / $366 a year
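For anyone checking the arithmetic, the tariff implied by those rack figures is $3.30 / 17.28kWh, roughly $0.19/kWh:

```python
# Daily and yearly electricity cost of a constant draw, at the implied tariff.
TARIFF = 3.30 / 17.28  # AUD per kWh, derived from the rack figures above

def cost(label: str, watts: float) -> None:
    kwh_day = watts / 1000 * 24
    print(f"{label}: {kwh_day:.2f}kWh/day, ${kwh_day * TARIFF:.2f}/day, "
          f"${kwh_day * TARIFF * 365:.0f}/year")

cost("rack", 50 + 100 + 75 + 250 + 245)  # 720W -> $3.30/day, ~$1205/year
cost("workstation", 220)                 # -> ~$1.01/day, ~$368/year
```

(The $366 above is the same workstation figure with slightly different rounding.)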
Thanks for everyone's ideas.
I have decided to band-aid the box with the cheap X79 option for now.
I found some EPYC motherboard and CPU combos for around $900 AUD:
AMD EPYC 7401P CPU (24 cores) + Supermicro H11SSL-i motherboard + 4 x 32GB 2133P RAM
So this seems to be where I will go once I have a bit of spare cash; at that point I might retire the R410.
If you really need the PCIe lanes, definitely go for EPYC. Although a cheap-ish B550 + Ryzen 5500 + 32GB of DDR4 RAM only sets you back ~$500 AUD these days:
Also, you might want to replace a few of the PCIe cards with these, perhaps?
[Edit] Also, as others have said… 18 drives for a 52TB setup? Man, I could RAID5 four 16TB disks and get 48TB of usable storage for that - more if I use a ZFS pool. Your cheapest per-TB option seems to be a Western Digital Pro 16TB at $440, which would give four drives for a total investment of ~$1800 AUD.
Thanks - yeah, the number of drives for the space comes from building it up over 7 years. Five of the 4TB drives are from 2016 when I started the box, and they were $375 each back then; the rest I've picked up over time for next to nothing.
The main use of the box till now has been UrBackup and Nextcloud, so as I ran out of space I threw known-good drives from my second-hand pile at it to expand the array. As I merge the other boxes into it, the smaller drives will be upgraded and it will be running more Dockers.
It's due for a data and backup culling, as there is 48TB of data on it…
If you take the controller, case and power into account on a 20+ drive system, it costs about $20-$25/year to keep a drive spinning; about half of that is electricity.
If your unRAID actually spins the 2T drives down, it's still cheaper to keep them than to replace them with a brand-new 14T-20T and take the 2T drives out into a field with a baseball bat (or just peek inside with a screwdriver - might be a better way to dispose of the data).
If, on the other hand, they're constantly spinning - Docker and whatnot - then they're not worth keeping… so yes, it is in your best interest to replace 5 or so of them with a 14T even if you need to put it on your credit card; you'll still be saving money, as the sketch below shows.
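As a sketch of that keep-vs-replace maths - the $22.50/year running cost is the midpoint of the figure above, and the 14T price is an assumption:

```python
# Payback for consolidating five always-spinning small drives into one big one.
RUN_COST = 22.5   # AUD/year to keep one drive spinning (midpoint of $20-$25)
OLD_DRIVES = 5    # small drives retired
NEW_PRICE = 300   # assumed price of one 14T drive, AUD

yearly_saving = OLD_DRIVES * RUN_COST - RUN_COST  # five drives out, one in
print(f"saves ${yearly_saving:.0f}/year; the 14T pays for itself in "
      f"{NEW_PRICE / yearly_saving:.1f} years")
# -> saves $90/year; the 14T pays for itself in 3.3 years
```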
Server costs are the same no matter how many drives you have attached. SAS expanders are a thing: a 4x4-lane expander card, good for 16 SATA drives, is about $60-$100, and a 9207-8i - which with two of those can serve 32 disks off 8 lanes of PCIe 2.0 - is a bit less than $100. There are also disk shelves with integrated backplanes containing expanders, and cheap non-backplane mining cases like the Inter-Tech 4F28.
If you have fewer drives, you don't need anything exotic - a regular large workstation PC case will do, as long as it has enough mounting brackets.
With ZFS or other RAID you also can't really spin drives down (costly for small drives). With unRAID, or a DIY SnapRAID + MergerFS setup, you can - but then it's up to you to migrate the data from ZFS onto the rust mountain (scripting), and you need to be cost-conscious: don't let drives spin up for nothing, and keep track with Prometheus for accounting (basically more scripting).
All this scripting is hardly worth it for less than a few hundred disks.
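For what "keep track with Prometheus" might look like in practice, a minimal sketch, assuming hdparm is installed and node_exporter's textfile collector is in use; the device list and output path are illustrative, not from this thread:

```python
#!/usr/bin/env python3
"""Record which drives are spun up, for node_exporter's textfile collector."""
import subprocess

DEVICES = [f"/dev/sd{c}" for c in "abcdef"]          # illustrative device list
TEXTFILE = "/var/lib/node_exporter/disk_spin.prom"   # assumed collector path

def spinning(dev: str) -> int:
    """1 if `hdparm -C` reports active/idle, 0 for standby or sleeping."""
    out = subprocess.run(["hdparm", "-C", dev],
                         capture_output=True, text=True).stdout
    return 0 if ("standby" in out or "sleeping" in out) else 1

with open(TEXTFILE, "w") as f:
    for dev in DEVICES:
        f.write(f'disk_spinning{{device="{dev}"}} {spinning(dev)}\n')
```

Run it from cron every few minutes and the resulting time series tells you how many hours each drive actually spends spun up - which is the number the per-drive cost figures above depend on.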
How quickly is your data growing, e.g. per month or per year?
Last year I went through a pile of 31 backup and transition HDDs, hand-dedupped them, and consolidated the data onto a single 8TB drive, which I then replicated to two other independent hard drives.
The 8TB is much smaller than the sum of the source drives, but it only took a couple of weeks to process it all, and now I am unburdened.