Recommendations for new TrueNAS server

It’s starting to look like my old TrueNAS server is dying (I think some hardware is failing), and I’d like to move to a new box. With the shortage, though, everything seems expensive, so I’ve had trouble pricing out something reasonable (or at least reasonable for a tightwad like me).

My previous build was in a decade-old Dell PowerEdge server with 12x LFF bays and room for two internal SFF SSDs, which I used for the OS (since flash drives aren’t good enough). I was using a PCIe Mellanox card for a 10Gb connection to my switch.

In theory I could rebuy what I have for very cheap, but that doesn’t seem like a good long-term choice.

I was thinking that in a new server I’d like these features:

More modern CPU - my old board was running E5640s and DDR3, which was probably drawing more power than necessary at idle

12x LFF bays - I’m only using 6 at the moment, but my next expansion will likely require the other 6

2x SFF bays - hot-swap is nice but not necessary; I mostly need them for the OS

Room for a GPU and a Mellanox card - I’d like to keep a 10Gb connection to my switch and will likely want a GPU soon for Plex hardware transcoding

IPMI - I like being able to access my machine’s console remotely (though I’d like a newer, more secure version than what mine currently has, not that it’s exposed to the web)

One alternative that I’ve thought about:

I’ve considered moving my main storage pool to an external disk rack, which would let me use an SFF-bay server as the main server.

This would mean I wouldn’t have to worry about having room inside the server for the extra OS drives, and it would make it easier to add some other cheap SSDs for speeding up my storage with caching (possibly persistent L2ARC and SLOG), or maybe a small separate pool for my Docker containers.

The only issues with this are:

  • more points of failure
  • might not help cost over just 2xSFF bays
  • might end up using the same or more power than my current setup

Anyways

That’s all I can think of for the moment. I’ve looked at things like the Dell R730/xd and HP DL380 G9, but prices for those went up during the shortage compared to before Big-C, and they’re sitting at possibly 2x what I’d like to spend.

I’d love for some help since living without my server is already becoming a pain.

Consider upgrading your existing box with a new mainboard/CPU/RAM. This of course depends on how closely Dell followed the ATX standard when designing your case. From there, there are two directions you can take:

  1. use server-grade hardware (expensive!) to max out connectivity etc. Or:
  2. use prosumer hardware: slightly less expensive, but considerably more restricted in connectivity.

There are EPYC 3000 SoC boards available, mostly mITX (or thereabouts), which won’t cost you a fortune but also don’t have a lot of expansion options. Then there’s the ASRock Rack series, which pairs consumer-CPU boards with server-grade features like IPMI. Still not as cheap as ‘normal’ mainboards based on the same chipset, but definitely cheaper than true server-grade EPYC (or in Intel land: Xeon) server boards.

Investigate before jumping in, it’ll save you money!

I’m genuinely unsure how well Dell has kept to the ATX standard, and I wouldn’t rely on them having followed it.

I’m not looking for new hardware but “new” hardware. I’m fine with used enterprise gear if it’s still got a good shelf life on it.

I don’t need the absolute most modern equipment, just reasonably modern, rock solid and, hopefully, reasonably cheap.

I’ve been doing some research on my own and looking at Supermicro, since everyone seems to talk about how good their equipment (and the company itself) is.

I found this: Supermicro 2U CSE-826 1028W 12+2 bay Server Chassis with X9DRW-CTF31 Motherboard | eBay

It seems like a good deal, but I’d like to get others’ opinions on it.

From what I can see
Pros:

  • Gets me into Supermicro ecosystem
  • Seems to have an upgrade path via motherboard replacements for a few generations
  • A slightly newer system than what I have
  • Still old enough that I can reuse my DDR3 RAM from the previous server
  • As long as it has the proper riser cards, it should support a GPU and all the other connectivity I need
  • Comes with the 2xSFF Hotswap bay so guaranteed OS storage
  • Includes all the needed trays (or at least appears to)

Cons:

  • Still not as new as something like an R730 or another system that uses DDR4, so the energy usage might not be much better
  • Board is proprietary, so I’m locked into Supermicro (based on other posts, some standard boards might physically fit, but the PCIe slots wouldn’t line up)
  • Still have to purchase CPU separately

I also happen to have a spare 9211-8i that I can use as the HBA, so I don’t need to spend on an additional RAID card.

All-in-all, I am really liking it and think I’ll probably go for it unless someone points out something major.

I’d still say to have a closer look at the Dell, measure things up and see what options that gives you. I’m not so sure if that Supermicro server would be an improvement for your situation, but of course it’s your call. And money. And data :wink:

FYI: you can get IPMI-like functionality on ‘standard’ mainboards using the Pi-KVM or TinyPilot projects. Both are based on a Raspberry Pi, a cheap USB HDMI capture device, and some software. This means you’d be able to use ‘standard’ ATX mainboards, like cheap-ish X99/X79 boards from China (AliExpress) or AMD APU-based pre-Ryzen boards (which are DDR3, I have several!). But again: only if Dell has at least partially (or better, fully) implemented the ATX standard in mounting the mainboards to their server chassis. I can’t tell, I don’t have a Dell :stuck_out_tongue:

This gets you a ton of flexibility in terms of hardware… not to mention it’s cheaper and arguably works better than most IPMI implementations out there.

So you could get, e.g., an X399 Threadripper board, or an older EPYC (they’re relatively cheap compared to what you get).

You’ve mentioned it a few times, so you’re already very aware of how much power these retired enterprise servers consume. But I would weigh that heavily in my decision of what to purchase. Your old PowerEdge with the E5640s was probably sucking down $25/month in power. Sure, the R730 will be a little more power efficient, but it’s nowhere near as efficient as an EPYC SoC like @Dutch_Master suggested. The total cost of ownership for a SoC or prosumer system might actually be lower than retired enterprise gear once you factor in power consumption. Just something to think about.
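To put rough numbers on that, here’s a quick back-of-the-envelope sketch. The wattages and electricity rate are assumptions I picked for illustration, not measurements of anyone’s actual hardware:

```python
def monthly_power_cost(watts, rate_per_kwh=0.15, hours_per_month=24 * 30):
    """Electricity cost of running a machine 24/7 for a month."""
    kwh = watts * hours_per_month / 1000
    return kwh * rate_per_kwh

# Assumed average draws: ~230 W for a dual-E5640 PowerEdge under light load,
# ~60 W for an EPYC 3000 SoC build. Both are guesses, not measurements.
old_box = monthly_power_cost(230)
soc_box = monthly_power_cost(60)
print(f"old: ${old_box:.2f}/mo, SoC: ${soc_box:.2f}/mo, "
      f"saved per year: ${(old_box - soc_box) * 12:.2f}")
```

At those assumed numbers the old box lands right around the $25/month figure mentioned above, and the SoC would save roughly $220/year, which can close a lot of the purchase-price gap over a few years of ownership.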


I’ll be honest, it’s not easy to check manually or to find information online about whether my Dell case could keep working, but from what I understand and have heard, it’s unlikely to be upgradeable (see: Wendell talking about Dell locking EPYC CPUs to their motherboards, Gamers Nexus showing non-standard motherboards with non-standard power supplies in Dell PCs).

I’ll definitely agree with this. I’ve seen it before, and the implementation I saw was really thorough; since it’s open source, it will likely last a long time. However, whether it’s cheaper or not will really depend on what hardware we can nail down. If a server board with IPMI from a reputable vendor turns out to be cheaper, then it’s probably the better choice.

I’m not technically paying the power bill myself, however, I do like to be green and the energy cost does indirectly affect me.

A cursory search suggests most of the SoCs don’t have a lot of PCIe lanes (I keep seeing boards with a single PCIe 3.0 x16 slot), which wouldn’t let me connect everything I need.
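As a rough sanity check, here’s the lane math. The per-card widths below are typical electrical widths for these card classes, not figures from specific datasheets:

```python
# Hypothetical PCIe lane budget for the planned cards; widths are typical
# for these card classes, not taken from specific product datasheets.
cards = {
    "SAS HBA (e.g. 9211-8i)": 8,
    "10GbE NIC (Mellanox)": 8,
    "GPU for Plex transcoding": 8,  # x16 physical, but x8 is plenty here
}
needed = sum(cards.values())
single_slot = 16  # what one PCIe 3.0 x16 slot provides
print(f"lanes needed: {needed}, one x16 slot: {single_slot}, "
      f"enough: {needed <= single_slot}")
```

Even letting the GPU run at x8, three cards want around 24 lanes, so a board with one x16 slot comes up short without bifurcation risers or onboard equivalents (e.g. built-in 10GbE).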

How much should I expect to pay for a system you’d recommend instead, such as an EPYC SoC based one?

I had been hoping to sit between $400 and $800, so I’ll need some help coming to terms with it if there’s a far better solution/experience outside that range.

Continuing to try and find additional options for approval.

I found this listing that seems promising:

It’s newer overall: it includes a newer backplane (though I don’t know what benefit that has for my situation) and a newer motherboard for newer CPUs.

The tradeoff is that I’d definitely have to buy more parts for it, and there are no rear 2x 2.5" trays, which appear expensive to get separately.

It states the backplane is SAS, not SATA. That’s not compatible, so you need all new (to you) SAS HDD’s. Those aren’t cheap, even used.

Keep in mind: change for the sake of change is rarely an improvement and I’m really under the impression you want to get rid of the Dell chassis at all cost.

Should your Dell die before you manage to obtain a replacement, a 9th-gen Intel proc on a compatible mainboard in a generic case will do fine for basic tasks; likewise, a pre-Ryzen AMD system can be made to work for a similar basic workload. Those will fall within your budget; enterprise gear, like the AMD SoC I mentioned, is fairly expensive.

SAS is backward compatible with SATA: a SAS backplane and SAS HBA will happily run SATA drives (the reverse isn’t true). I’ve been using exactly that setup for a while now.

What I’m looking for is an actual upgrade path. I’m almost certain my Dell is not ATX compatible: I compared it to a spare motherboard I have lying around, and several of the holes are missing or don’t line up. Even then, I genuinely wouldn’t be surprised if Dell swapped around the pin-out on their PSUs in a proprietary fashion that would fry any motherboard I threw in the thing.

My Dell is effectively already dead. While it will boot, for some odd reason it now consistently runs extremely slowly, to the point of the terminal failing to function. Something has likely died in either the CPU or the motherboard; the only hint is one area of the board running hotter than the rest of the system.

I literally posted links to several pieces of enterprise gear well within my budget in this very thread, so I don’t really understand why you want me to start grabbing consumer components and throwing them into my chassis (which is also enterprise gear) that won’t support them.
