Budget low-power platform for bulk SSD storage on a network?

My partner’s got a conundrum. Lots of backups and projects with an insane number of versions, which total several terabytes and are always growing, because .wav files are chunky. Upgrading to bigger external HDDs is very 2006, so I want to figure out network storage, but it’s all very confusing.

Ideally, it’s something low power that can handle ~1 Gbps transfers so it’s not painful to open them in a DAW, and doesn’t make the energy bill spike painfully. UK energy prices are terrifying.

What’s a good platform for this? Don’t have a lot of money, so it’s gonna be budget, rather than some Epyc system running in a server rack with NICs that cost 500 quid apiece. Haswell-E/X99 platforms use a lot of power, so even though ECC is nice and they’re low cost, they’ll need active cooling and add like 60W at idle at least, never mind when actually doing stuff.

Also, SSDs. Most of the reviews look like thinly veiled ads, with folk recommending the Crucial MX500, which might have been a great SSD back in the day, but there must be better bang-for-buck (and faster!) alternatives these days.

SATA SSDs are great on price and power/performance ratio, and since it’s a dead market segment, old recommendations still hold true.

So Crucial MX500, Samsung 870 EVO, or datacenter SATA drives, it doesn’t really matter. As long as it’s TLC from a well-known brand, it’s a good choice.*

And traditional 3.5" enterprise SATA drives are not that bad, as long as you go for a small number of drives at high capacity.

20TB Toshibas are around 400€ now, and I have a small TrueNAS server next to me with 2x20TB + 2x16TB sipping 38W at active idle and 60W during active writes.

*Except the Samsung QVO series; look up my post history for some of the poor bastards that tried it, thinking they got SSDs for a steal.

2 Likes

The next trick is finding a platform. Not ruling out SATA SSDs, but I’d also like to look at NVMe SSDs. Main reason being they’re tiny, and you can find simple PCIe x16 → 4x NVMe adapters (bifurcation, not PLX) for cheap. AMD’s consumer platforms really skimp on PCIe lanes, which is a shame as some of the CPUs are pretty low power.

Skip NVMe drives; they are NOT an option if you want a power-efficient or cost-efficient solution.

On the tragedy of the consumer segment

Lack of PCIe lanes is not a new thing limited to AMD; you literally cannot get anything with 20+ available lanes in the consumer segment, period.

It has been that way forever, but once we could finally use all that bandwidth, we noticed how fucked we are.

Using a single x16 → 4x x4 bifurcation-based board is doable on almost any consumer platform, if you are willing to sacrifice the primary x16 slot meant for the GPU.

That is usually the only slot offering bifurcation, if it’s present at all (not guaranteed). This applies only to PCIe 4 platforms; PCIe 5 platforms for some reason offer only x8/x8 bifurcation, which is useless here.

All other slots are shared, muxed, or conditionally turned off to make do within the small PCIe lane budget.

What else is there for the discerning and cheap gentlebeing?

You can get a used Epyc platform for cheap on eBay, roughly 500-700€ with everything included. You cannot get anything comparable to that kind of hardware new for at least 4 times the price.

It will not be power efficient by consumer standards (80-120W idle without peripherals), but it’s doable.

You will have 128 PCIe 3/4 lanes (depending on the mobo) available for expansion. Go wild.

**That’s way too much power**

Synology/QNAP off-the-shelf solutions or GTFO, then. Very underpowered and overpriced given what they offer, but usually power conservative, and the only solutions with hotswap drive cages in the design.

I built my own TrueNAS server, but it wasn’t cheap or easy. Power efficient, yes.

4 drives, ~36 TiB of effective redundant storage, and 40-65W on 240V running near 24/7.

3 Likes

Just my 2 cents – the consumer Asus ROG Zenith Extreme Alpha has 64 PCI Express lanes… Not that I’m recommending this MB, but there are other manufacturers with more mundane consumer TR MBs which would have comparable PCIe lane counts. Then, as has been pointed out above, there are server-oriented MBs with PCIe lanes aplenty. I particularly like these server MBs as they aren’t trying to “fry” your CPU and RAM with their default BIOS settings to achieve good benchmarking numbers.

1 Like

I wouldn’t exactly put Threadripper in the consumer bracket power-wise or cost-wise, but there’s no arguing about taste.

I would put it in the “get a used Epyc server” option, and the server will likely be the cheaper one :slight_smile:

2 Likes

Yeah, by the sound of it, looking at NVMe for this stuff was a bad idea. What I could do is use that bifurcated PCIe slot, and then just do a bunch of SATA ports as well for a tiered storage setup.

Which platform did you use for your NAS? A while back I found that Fujitsu makes the lowest-power platforms, but they were going for like $300 apiece.

Seems like you’re looking for something I made myself some months ago:

I’ll avoid repeating that build log, so I’m linking you to it.
So far it has been rock solid; gigabit transfers over the network are a walk in the park for this machine, and it should be using about the least power possible for what it is (I still haven’t taken any power readings).

One thing I don’t suggest you go for is the case. Get a proper case for an mATX board if you’ve got the space.

I bought the SSDs last Black Friday at $200 each. I think the total came to ~$1600, not counting the boot SSD I already had.

1 Like

The ASRock Rack EPYC3451D4I2-2T supports 12 SATA devices and 10G RJ45 networking, with a (relatively) low-power CPU onboard.
The catch: 1800€


I am running a mixed bag of 16TB spinning rust drives from all brands in my NAS. BTRFS RAID10, hitting 2-ish to 3-ish Gbit; it’s a jank-tastic NAS though.

1 Like

I’ve used WD Red SATA SSDs in a CI server that gets around 300GB of writes a day. We tried Patriot consumer drives and they crapped themselves within 3 months. The WD Reds are going strong after a year.
Wise words for all SSDs: stay far away from QLC drives (Samsung QVO and any drive that seems cheap per GB). These drives will, in the best case, perform like trash and, in the worst case, lose your data suddenly and violently.

3.5" rotating rust doesn’t really use more power under load than decent SSDs, but will idle higher. A couple of decent drives in a mirror will easily saturate a 1Gbit network in sequential reads/writes, especially if ZFS is involved. You could consider tiered storage of some sort, where very old projects live on slower but much cheaper drives.

A final point: is storing the old projects as FLAC a possibility? That usually gives you something like a 30-50% compression ratio.

2 Likes

I was going to recommend precisely this. FLAC has switches that will save all of the WAV metadata from editors etc.
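
In case it helps, here’s a minimal batch-conversion sketch in Python (not anyone’s exact workflow, just an illustration). It assumes the reference `flac` CLI encoder is installed and on PATH; `--keep-foreign-metadata` is the switch that keeps the non-audio WAV chunks, so the files can later be decoded back to WAVs with that metadata intact:

```python
#!/usr/bin/env python3
"""Rough sketch: batch-convert .wav projects to FLAC while keeping WAV metadata.

Assumes the reference `flac` CLI encoder is on PATH. Older flac builds error
on files that have no foreign metadata to keep; newer builds also offer
--keep-foreign-metadata-if-present for mixed collections.
"""
import subprocess
import sys
from pathlib import Path

def convert_tree(root: Path) -> None:
    for wav in sorted(root.rglob("*.wav")):
        out = wav.with_suffix(".flac")
        if out.exists():
            continue  # skip anything already converted
        result = subprocess.run(
            ["flac", "--best", "--verify", "--keep-foreign-metadata",
             "-o", str(out), str(wav)],
            capture_output=True, text=True,
        )
        if result.returncode != 0:
            print(f"FAILED: {wav}\n{result.stderr}", file=sys.stderr)
        else:
            saved = 1 - out.stat().st_size / wav.stat().st_size
            print(f"{wav} -> {out.name} ({saved:.0%} smaller)")

if __name__ == "__main__":
    convert_tree(Path(sys.argv[1]))
```

Decoding back is the same idea with `flac -d --keep-foreign-metadata`.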

@FiftyTifty, how much total storage are you looking for?

1 Like

I went the DIY route, since similar off-the-shelf products were too expensive and had significant tradeoffs. Availability was also an issue, and I was somewhat cash-strapped then. Now I am just cheap, within reason.

The primary aim was a device similar to the HP MicroServer N40L, with more processing power and at least slightly better expansion potential.

Details and photos here:

The only thing I miss is hotswap capability, or at least proper drive cages, but those are impossible to deploy in a consumer chassis.

To address some points you mentioned:

  • saturating 1GbE was trivial ten years ago with a single drive, and it’s a given with any working setup nowadays. Even a friggin’ Raspberry Pi has enough processing power.
  • Do you really need an all-flash array? Why? Think about it.
    • you seem to need a reasonably fast archive and maybe some performant scratch space
    • have that scratch space on a mirrored SSD array and offload the archive to a raidzX HDD array of whatever level lets you sleep at night.
  • How much storage do you need now, and how has that developed over time? If you extrapolate your usage pattern, how much storage will you need in 5 or 10 years?

And now some deeper planning:

  • ENERGY USE: find out how much you actually pay for 1 kWh of power.
  • Plan around the assumption that each 1W of continuous power use:
    • consumes 1 W * 24 h * 30 d = 720 Wh/month = 0.72 kWh/month per watt of baseload
    • the same goes for the yearly figure: each W of base power consumes at least 0.72 kWh/month * 12 = 8.64 kWh/year
  • now multiply by your actual power price and see how much each component or solution will cost you (see the sketch after this list)
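
If it helps to put numbers on that, here’s a minimal Python sketch of the same per-watt maths. The 0.28 GBP/kWh tariff is a placeholder, as are the example wattages; plug in whatever you actually pay and whatever idle/active figures you’re comparing:

```python
# Rough running-cost sketch following the per-watt maths above.
# The 0.28 GBP/kWh tariff is a placeholder; plug in your own unit price.

PRICE_PER_KWH = 0.28        # GBP per kWh (hypothetical UK tariff)
HOURS_PER_MONTH = 24 * 30   # matches the 720 Wh/month-per-watt figure above

def yearly_cost(baseload_watts: float, price_per_kwh: float = PRICE_PER_KWH) -> float:
    """Yearly cost of a constant baseload, in whatever currency the tariff is in."""
    kwh_per_year = baseload_watts * HOURS_PER_MONTH * 12 / 1000  # ~8.64 kWh/year per watt
    return kwh_per_year * price_per_kwh

# Example baseloads: a few idle/active figures quoted in this thread.
for watts in (10, 38, 60, 100):
    print(f"{watts:>4} W continuous ≈ {yearly_cost(watts):6.2f} GBP/year")
```

At a tariff like that, every extra ~10W of idle is in the ballpark of 25 quid a year, which is why idle numbers matter more than peak ones for a box running 24/7.
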
3 Likes

It’s not for my use, but for my partner. I’m used to faffing with PCs and they’re not at all, so I’ll be building it. It’s a work requirement that everything be done in WAV format, for whatever reason.

The WD Reds aren’t too dire in price. There’s also the Kingston DC500M/R; they’ve got a big cache and TLC, but I dunno about the durability.

That’s not a bad shout at all. The mATX ones are about 140 quid here, which is very steep, but if that’s how much an efficient board costs, then that’s just how it is. Hmm

1 Like

The flash array is to avoid a shit-ton of clicking, get fast on-demand access to projects, stems, wavs, etc., fast search (super important with how many files get punted out!), and easy expansion. Also, a bunch of flash drives is a lot less heavy and awkward to work with than a bunch of mechanical drives.

My partner is using about a TB every 6 months or so, and my idea is to make a big network storage machine that can have another large SATA SSD plonked into it.
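
To sanity-check that against the 5-10 year question above, a straight-line extrapolation (assuming the ~1 TB per 6 months rate holds, and a made-up starting point of 4 TB) looks like this:

```python
# Straight-line capacity projection from the rough figure above (~1 TB every 6 months).
CURRENT_TB = 4.0    # hypothetical starting point ("several terabytes")
TB_PER_YEAR = 2.0   # ~1 TB per 6 months

for years in (1, 3, 5, 10):
    total = CURRENT_TB + TB_PER_YEAR * years
    print(f"In {years:>2} years: ~{total:.0f} TB of raw data, before any redundancy")
```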

Another thing I’ve thought about is doing a RAID, but having to get 2x the number of drives for parity is super expensive. A lazy alternative is to just have a bunch of junctions?

Well, you gotta consider that you’re getting a board + CPU for £140. It’s quite a bit for what it is, but it’s almost impossible to find something like that from a reputable brand for less. You’d need to look into Chinese boards with those embedded CPUs, but those carry JMB58x SATA controllers on board, so no C10 idle. Gotta take the compromise.

2 Likes

Forgive me if I get the quote wrong, but in the late 90s the NASA administrator had a saying: “faster, better, cheaper. Pick two.”

Meaning you could have any two, but not the third.

I think this is the situation with NAS-type storage. You can have “faster, lower power, cheaper: pick two.” And it’s more likely “faster, lower power, redundancy, cheaper: pick two.”

There just isn’t a fast, low-power, and cheap solution.

2 Likes

Brand-name 4TB TLC SATA SSDs are currently running about GBP 60-70 per TB.
This will be your biggest expense.

I would begin by deciding on what kind of storage you’ll be getting.
Personally, I think investing in 4TB SATA SSDs in 2024 is a bad idea for most use cases.

And some kind of redundancy would be nice, even with an all-SSD array (n+1 or n+2 is fine; you don’t need double the drives for parity).
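
To put rough numbers on the “you don’t need double” point, here’s a back-of-the-envelope sketch only; it ignores filesystem overhead and free-space headroom, and the 4TB drive size is just an example:

```python
# Back-of-the-envelope usable capacity for a few common layouts.
# Ignores filesystem overhead and the usual "keep some free space" headroom.
DRIVE_TB = 4  # e.g. the 4TB TLC SATA SSDs mentioned above

layouts = {
    "2-way mirror (2 drives)": (2, 1),  # (total drives, drives "spent" on redundancy)
    "RAIDZ1 / n+1 (4 drives)": (4, 1),
    "RAIDZ2 / n+2 (6 drives)": (6, 2),
}

for name, (total, redundant) in layouts.items():
    usable = (total - redundant) * DRIVE_TB
    print(f"{name}: {usable} TB usable, "
          f"{redundant / total:.0%} of raw capacity goes to redundancy")
```

So past a plain mirror, the parity tax drops to roughly 25-33% of raw capacity rather than the 2x drive count worried about above.
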

3 Likes