PC (server) with RAID 1 for a small business

Greetings,

We have 5 (soon 20) desktops/laptops that all save and collect very small files on a single PC, which we now need to replace. The software we use requires this design, and since those files are important, we would like protection beyond the weekly copy-paste to USB that we already do on the current PC. Cloud solutions are not permitted, so RAID 1 is something we would like to use again.

The current PC has an i5-9600, 16 GB of RAM, and integrated graphics, and none of that ever reached 10% usage, because our RAID was built from HDDs, which are too slow.

I planned to use two or three 1 TB M.2 drives, and I would like to know what the best motherboard and CPU (with integrated GPU) would be for RAID 1, in terms of stability and endurance, since it will be powered on at all times.

Example:

ASRock B550M Steel Legend
Ryzen 7 5700G
2x 8GB RAM (whichever kit matches the motherboard's memory QVL)
2x Samsung 970 EVO Plus 1TB

Could you guide me please?

ECC and ZFS (mirror) are probably good additions to your list.

AM5 and Asus motherboards are pretty solid choices in that regard.
I would look at something other than Samsung SSDs due to firmware quirks.

2 Likes

You probably want at least some sort of off-site backup. If you don't like the cloud, then you should find an off-site location or encrypt your data before uploading it to a backup provider like Backblaze.

For your hardware I would go for much more RAM and a few more CPU cores. For RAM I would go for 32 GB at a minimum, but ideally 64 GB or 128 GB. For the CPU I would find something with 16 cores and good clock speeds. This will be expensive, but it will last longer and scale much better. You need to build a server out of server parts, and that also means ECC RAM.

From the sound of your comment, I am going to go out on a limb and guess you are a business that doesn't do a lot of computing. That's great and all, but make sure you are following good management and security practices. Businesses like yours are often the target of cyber attacks.

One last thing. Someone here mentioned ZFS, which is great and very reliable, but it is Linux-only for the most part. If you are interested in expanding your software stack a bit for flexibility, you could install TrueNAS and run any software you might need in a VM. For the office I manage, I used VFIO to pass through a GPU and a USB controller to a Windows VM so the staff could manage and occasionally fix QuickBooks.

1 Like

ZFS works very well on FreeBSD and on Solaris

Thank you all for replying. I forgot to mention that Windows 11 is a must due to the software we use. And yes, we do accounting for others, so it's pretty much Excel and that proprietary software I must not name. No heavy computing. Also, after five years of work we are still under 200 GB, so even a 1 TB SSD is a bit of an overkill. And our PCs work without being connected to the internet; it's a closed network.

Asus PRIME B650M-A WiFi II, or do I need to go for the X670 chipset? Also, how does Gigabyte stand in this regard?
Is 4x 16GB or 2x 32GB better? Also, I can't find ECC RAM being sold in my area; is it mandatory?
Ryzen 7 7700: would this suffice? Or maybe a Ryzen 7 8700G?

Finding a proper SSD is tricky in my area at the moment. While I believe Samsung sorted out the firmware problem last year, I checked the alternatives and the others don't appear to have an onboard DRAM cache. Examples I could find are:

Crucial P5 Plus
Lexar NM790
Samsung 980 Pro (again Samsung, and with cache, but pricier)

I’d pick a B650 motherboard over a X670 motherboard, they are more simple (less likely to have a firmware bug because of the way the x670’s hardware daisy chaining works) and actually use the exact same chipset as the X670 boards in a more sane configuration.

​​​ ​ ​
​​​ ​ ​
IMO Asrock has the best standing in making quality motherboards, Asus is second and then maybe Gigabyte.

ECC memory is great, but only some AM5 motherboards "unofficially" support it, meaning the ECC implementation might be outright broken or might be removed by a future BIOS update. ASRock tends to be the manufacturer that most often has ECC enabled; MSI almost never enables it.

*There are a very small number of AM5 motherboards that officially support ECC and don't have broken implementations, but none of them are from the mainstream consumer brands.

Another interesting side note: to my knowledge, the only AM5 motherboards to officially support ECC memory are B650-based; there are no X670 motherboards that officially support ECC memory.

@twin_savage
There’s a lot of incorrect information in your post.

Can you please quote an in-depth technical source for the following claims?

I'd pick a B650 motherboard over an X670 motherboard: they are simpler (less likely to have a firmware bug because of the way the X670's hardware daisy-chaining works) and actually use the exact same chipset as the X670 boards, just in a saner configuration.

ECC memory is great, but only some AM5 motherboards "unofficially" support it, meaning the ECC implementation might be outright broken or might be removed by a future BIOS update. ASRock tends to be the manufacturer that most often has ECC enabled; MSI almost never enables it.
This is incorrect; Asus, for example, supports ECC on almost all of their lineup.

Examples:

CPU Support (or Memory)

CPU Support (or Memory)

…and so on

1 Like

They’re solid but being phased out (EOL); still good drives.

Nothing I said is factually incorrect; perhaps there is some misunderstanding about the meaning of the terms being used.

The X670 is two ASMedia/AMD Promontory 21 chipsets daisy-chained together, and the way it is set up is less deterministic than the single Promontory 21 chipset on a B650.
I used "less likely" deliberately, as opposed to saying the X670 has more bugs than the B650, because an X670 platform is harder to validate after an update precisely because it is less deterministic.

Regarding ECC memory on AM5: all AM5 boards "support" ECC UDIMMs in the sense that they can run the RAM in 64-bit mode without functioning ECC ("on-die" ECC is not being considered here). This is why the Asus site you shared lists those DIMMs.

I make a distinction between unofficial and official ECC support because the boards that officially support ECC memory have it as a validation criterion for BIOS development. This is why there are many threads asking why ECC functionality broke after a BIOS update.

2 Likes

I just figured out that they do in fact have 1 GB of onboard cache. I'm not worried about EOL, especially since it has a 5-year warranty; I guess that makes it viable for consideration.

I’ve noticed we haven’t mentioned any Intel options in this price range. How well do they fare?

OK, so I don’t need to aim for the pricier X670, which is nice.

Also, if I understood correctly, ECC is not mandatory? I'm checking the sites of the shops in my area and don't see any in stock at the moment, so I would like to skip it if possible.

Can you quote any (reasonable) source for your claims? Otherwise, please don't state them as though they were general fact. I'm also not sure why you're using ASRock as some de facto standard for the platform. They've never listed ECC memory as being tested (to my knowledge), and having a quick look, it seems they don't list any ECC modules in their memory QVL (I might be wrong on this), which also likely explains the forum posts about it not working.

@SrdjanC
ECC is not mandatory but it’s a very good investment in terms of reliability.

1 Like

It is hard to call ECC mandatory unless someone else is telling you that you must have it for a deployment, but ECC is very nice to have because it can start reporting errors and clue you in when memory is dying or has bad contact. Theoretically, if you did not have ECC and your RAM started producing errors, it could corrupt a file actively being worked on.
In practice, though, many programs run their own ECC-like routines in software to protect you from this type of problem. Also, with the advent of DDR5 there is something called on-die ECC that provides some ECC functionality, but it will never report to your operating system when an error is encountered.
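
For illustration only, here is a crude software-level stand-in for that idea, not real ECC and not how any particular program implements its own routines: hashing the important files on a schedule at least lets you detect silent corruption between runs (a file that was legitimately edited will of course also show up as changed). The paths are hypothetical placeholders.

```python
# Detect (not correct) silent file corruption by comparing SHA-256 checksums
# between runs. Illustration only; paths are hypothetical placeholders.
# Note: a legitimately edited file will also show up as changed.
import hashlib
import json
from pathlib import Path

DATA_DIR = Path(r"D:\SharedFiles")   # folder holding the important files
MANIFEST = Path("checksums.json")    # hashes recorded by the previous run

def hash_file(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            h.update(chunk)
    return h.hexdigest()

# Hash everything currently in the data folder.
current = {str(p): hash_file(p) for p in DATA_DIR.rglob("*") if p.is_file()}

# Compare against the previous run and flag any mismatches.
if MANIFEST.exists():
    previous = json.loads(MANIFEST.read_text())
    for name, old_hash in previous.items():
        if name in current and current[name] != old_hash:
            print(f"changed or corrupted since last run: {name}")

# Save the new manifest for the next run.
MANIFEST.write_text(json.dumps(current, indent=2))
```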

I can't send you the validation criteria the OEMs go through when an AGESA update is sent out, but it should be self-evident that a more complex system is going to be harder to debug and validate (B650 vs. X670). Also, the fact that B650 sells more and has a larger install base than X670 only adds to the trend, because companies tend to invest more in the products that make them the most money.
To be clear, I'm not saying all X670 boards are buggy and B650s are perfect, just that one is harder to validate and less mainstream, so I suspect bugs are more likely to slip through eventually.

WRT ECC on AM5:
"ECC officially supported", to me, means that not only will memory errors be detected and corrected, but also that all the platform reporting functionality works, and that ECC functionality is validated with every AGESA update.

"ECC unofficially supported", to me, means that memory errors are detected and corrected, but platform reporting may not be implemented or is only partially implemented, and ECC functionality is not validated with every AGESA update.

As far as I'm aware, only a few boards are in the former category. There are many boards in the latter category, like that X670E Steel Legend motherboard that had ECC working until a BIOS update broke it; ECC was advertised as "Supported" on that ASRock board.

There isn't going to be any kind of broad authoritative answer on ECC functionality straight from AMD beyond "it works if the motherboard manufacturer did it correctly", because AMD doesn't keep as tight control of the validation efforts as Intel does on consumer platforms.
Because of this, you're going off the word of the motherboard OEM, which doesn't carry as much weight and, as history has shown time and time again, is not always reliable (at least for consumer boards).

I am making two assumptions for the below:

  1. All your machines are on a Windows AD already
  2. All your machines are on one local network or reachable through some VPN

The “almost easy” route:

  • Build a server box
  • Install Windows Server on it
  • Format some drives (4 or more) with ReFS
  • License and install Veeam
  • Install the Veeam client on every device you want backed up
  • Check your backups regularly! Veeam will spam you with emails; read them! (A minimal freshness-check sketch follows this list.)
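
Since that last point is the one people skip: here is a minimal sketch of a freshness check, independent of Veeam's own email reports. The backup path and the one-day threshold are assumptions, adjust them to your own setup.

```python
# Minimal backup freshness check: warn if the newest file in the backup
# share is older than expected. Path and threshold are assumptions.
from datetime import datetime, timedelta
from pathlib import Path

BACKUP_DIR = Path(r"D:\Backups")   # wherever the backup jobs write to
MAX_AGE = timedelta(days=1)        # expect at least one new file per day

files = [p for p in BACKUP_DIR.rglob("*") if p.is_file()]
if not files:
    raise SystemExit(f"WARNING: no backup files found in {BACKUP_DIR}")

newest = max(files, key=lambda p: p.stat().st_mtime)
age = datetime.now() - datetime.fromtimestamp(newest.stat().st_mtime)

if age > MAX_AGE:
    print(f"WARNING: newest backup ({newest.name}) is {age} old")
else:
    print(f"OK: newest backup is {newest.name}, {age} old")
```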

You said you are backing up some spreadsheets, so you don’t need a race car. HDDs tend to fail more gracefully, so consider some good old spinning rust.

Consider your workload.
Some napkin math (fill in your own numbers):

  • 20 machines
  • each has 50 Gigabyte of operating system and programs on it
  • And 20 Gigabyte of important files (all files are important!)

= 1400 GB worth of data.

Through Deduplication and Compression, you can probably shrink that down by a factor of 5, which still leaves you with a lot of data!

And then the next week/month comes around and, as more backups get added, that ~300 GB doubles in size.
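
The same napkin math as a tiny Python sketch, so you can plug in your own numbers; the per-machine figures and the 5:1 dedup/compression ratio are just the placeholder assumptions from above:

```python
# Napkin math for sizing backup storage; every figure is a placeholder.
machines = 20
os_and_programs_gb = 50    # OS + installed programs per machine
important_files_gb = 20    # user data per machine (all files are important!)

raw_gb = machines * (os_and_programs_gb + important_files_gb)   # 1400 GB

# Dedup + compression ratio is a guess; roughly 5:1 is plausible when most
# machines share the same OS image and the same programs.
dedup_compression_factor = 5
stored_gb = raw_gb / dedup_compression_factor                   # ~280 GB

# Retention makes it grow again: keeping extra restore points can easily
# double that figure within the first weeks/months.
with_retention_gb = stored_gb * 2

print(f"raw data:               {raw_gb} GB")
print(f"after dedup + compress: {stored_gb:.0f} GB")
print(f"after some retention:   {with_retention_gb:.0f} GB")
```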


  1. Ask yourself how much time you can and want to spend on setup and operation
  2. How much budget does this project have?
  3. How urgent is this? → You may be better off tasking a local IT company with this

Some ideas:

Idea 1: Dell PowerEdge T360

Buy a proper ready-made server and put your own software and configuration on it.

Intel Xeon E-2468 (8C, 16T)
2x 32 Gig ECC Memory
1x 480 GB SSD for boot
4x 8TB HDD for backup data (ZFS on Linux, ReFS on Windows Server)
2x Redundant PSU
(optional: Intel X710 for 10Gig networking)
Cost: 5500€/$ ish

Idea 2: Off-the-shelf NAS

Several vendors to pick from; they all have their strengths and weaknesses.
Keep in mind these are popular targets for ransomware, so budget another $1k for a firewall.
Synology RackStation RS3621xs+
QNAP TS-873A

Idea 3: DIY-Everything

AKA Doing-IT-from-Zero
LDAP can be hosted on Linux, Windows machines can be joined into a domain, Samba, the whole jazz!

| Part | Link | Note |
| --- | --- | --- |
| AMD 5700 | Link | 8c/16t |
| ASRock X570D4U-2L2T | Link | 10G NIC, 8x SATA |
| Kingston KSM32ED8 (RAM) | Link | as per the QVL |
| Seagate Exos 7E8 | Link | 4x 8TB |
| Kingston DC600M | Link | 480GB boot drive |
| Silverstone CS382 | Link | 8x hot-swap bays |
| Silverstone GM600-G | Link | 600W, hot-swap, redundant |
| Arctic Freezer A35 CO | Link | CPU cooler |
| MS Server 2019 16-core | Link | Learn to hate M$ |
| Veeam | Link | |
| Lots of your time… | | …and sanity |

I won’t disagree, but BSD is slowly fading into obscurity and Linux is very much the dominant player. You will find far more help for Linux than for BSD.

Use the same number of memory sticks as there are memory channels on your CPU. As far as ECC memory goes, it is highly recommended. Consider Windows Server with Active Directory to manage your machines; the server edition is much more stable and will not slow down or force updates.

First off, how important is it that these files stay correct and intact? Is data loss catastrophic to the company or merely a headache?

It sounds like your storage needs are low, so you could get away with something like the Asustor Flashstor 12 Pro and three or four 2 TB drives with DRAM cache, if you do not require ECC. I believe this will be your cheapest option by far: around $800 plus $100-$150 per drive. However, the Flashstor could corrupt a few files since it lacks a few crucial bits and pieces, so make sure all your data is backed up regularly!

For a proper server with ECC RDIMMs, yes, look to Dell; they can give you a proper solution. Compared to the Flashstor it is expensive, but it's also built to preserve and respect your data as much as possible.

I see some people here suggesting hard drives… No, sod off, both HDD and SATA are a terrible idea in this use case. It sounds like OP needs something like 2-4 TB of storage for 20-ish users, and most of it sounds like small-but-frequent file access, think 1 KB-2 MB files, so seek times will matter much more than transfer speeds, and NVMe excels at both.

Don’t DIY unless you absolutely, positively are out of options. Doing it properly is even more expensive than Dell; do it half-assed and your ass will be on fire every time something goes wrong.

Mission-critical devices should be doubled up regardless.

They are excellent for providing lots of space on the cheap. We are not trying to upsell OP on Quantum Myriad, Dell PowerStore, or Pure X-Series here. This is a small business server that will be neglected for stretches of its life. It needs to be as robust as possible and to fail as gracefully as possible.

Consider the workload again. People arrive at work (usually not all at once), open up their handful of files, and work on them, saving infrequently. One spike in writes before the lunch break, another spike in reads after the lunch break.
No massive file transfers, no massive copy-pastes. The initial setup may be the highest load this machine ever experiences.

Agreed.

Dunno… I've had at least one server from both Dell and HPE show up with loose screws rattling around inside.

1 Like

Exactly - and when the server only needs 2 TB of storage, why the heck would you go with HDDs when a 2 TB NVMe drive is less than $100?