Server

Hello,
I’ve come across your channel and love it. However, I’m not a professional, I just know a bit about computers.

We’re a small architectural office. Until now we were using a Synology NAS as a “server”, basically mapping the drive to workstations. The NAS had 4 drives in RAID 6, so the volume was bigger, plus in case of a drive failure we simply replaced the drive and kept working. The NAS was also set up to back up the data to two locations every night.
It worked well for us.

However, we’re transitioning to new software, which will need a server running some components that cannot run on a Synology (they need Windows) and require more processing power.
We would still like the same simplicity: basically a server “computer” which would run Windows and the needed software from one SSD or NVMe drive. At the same time, this computer would have multiple additional SSDs (in RAID?) where the project data would be stored. Multiple disks are needed because we need ca. 16 TB of capacity. The RAID would ensure we can keep working in case of a drive failure and that all drives are seen as one volume…
We have received some offers for HP or Lenovo servers, but the prices are extremely high.

What would you suggest? Can we configure a workstation (e.g. an i7 with 64 GB of RAM, no special graphics card needed) and somehow set it up? Would software RAID be sufficient? Is Windows’ built-in RAID any good? If the OS drive fails, can we still access the data on the other drives?

I know it would be best to get a professional to do it, but honestly we cannot afford one right now. We need something very simple: easy to expand, easy to replace drives in. We would still keep the two backup targets and back up every night.

Thank you and best regards.

2 Likes

First, welcome :hugs:

Second: it would help to know where you’re located (city+country suffices, we don’t need your full address) so someone local might be able to help you out in person.

As for your question: you can use an older workstation as a server, provided it has the space to house all the drives. You do want a later-model workstation (10th-gen Intel, or an AMD Threadripper if you can find a used one for a decent price) with as much RAM as it can handle. Then install a base OS like TrueNAS Scale or Proxmox, and make one virtual machine for the Windows OS and one that runs the storage pool.
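
For flavour, here’s roughly what creating that Windows VM looks like if you script it against the Proxmox API with the third-party proxmoxer Python client rather than clicking through the web UI. Just a sketch: the host address, node name, storage IDs and ISO name are all placeholders, not anything from this thread:

```python
# Sketch: create the Windows VM on a Proxmox host via its HTTP API.
# All names/addresses below are placeholders.
from proxmoxer import ProxmoxAPI  # pip install proxmoxer requests

proxmox = ProxmoxAPI("192.0.2.10", user="root@pam",
                     password="change-me", verify_ssl=False)

proxmox.nodes("pve").qemu.create(
    vmid=100,
    name="win-app-server",
    memory=32768,                          # MiB of RAM for the VM
    cores=8,
    ostype="win10",
    scsihw="virtio-scsi-pci",
    scsi0="local-lvm:100",                 # 100 GB OS disk on placeholder storage
    net0="virtio,bridge=vmbr0",
    cdrom="local:iso/windows-server.iso",  # installer ISO (placeholder name)
)
```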

HTH!

2 Likes

Hmmm… Your demands are kinda special here. What is the reasoning for the Windows-only server software? You are aware that a ton of Windows-only software nowadays can run on Linux and BSD, including .NET Core, yes?

I would solve this with two machines: keep the Synology (or better yet, upgrade to full M.2 storage via something like the Asustor Flashstor or, if you really need more capability, a 16-core EPYC or Xeon with two or three 4x4 NVMe bifurcation cards) and invest in a ~$1.5k server on the side.

However, my best advice here is to simply contract a network/server engineer to solve this. It sounds like you need a specially tailored solution, and while yes, the contractor will cost you, trying to DIY this can cost you a lot more in the end.

Dunno about a custom build; as others said, a normal computer or a second-hand server off eBay would have the beef to do the job.
But, as the machine is for work, I would be more concerned about support.

Unless you want to fix the machine yourself whenever anything needs replacing.

I’m not suggesting any of the big players are good at support, only saying that might be the reason for the outrageously high prices.

If you are doing the setup yourself, a tower case should hold enough drives, and you could even get a RAID card (plus cables) if you did not want to use Windows software RAID.
You should not need fancy hot-swap bays or anything, so just searching for a case with 8 drive bays or whatever should suffice.

1 Like

I would actually discourage the use of a hardware RAID controller. I think Wendell made a video titled something like “Hardware RAID is dead” that covers the why, but obviously they need some kind of solution for a big, decently performant volume, so we can’t skip some kind of host bus adapter entirely.

If these people are happy to build their own server (which would obviously come with a lack of support from a vendor or other team), they could get a 2nd-gen Threadripper with the 12-core/24-thread CPU (can’t remember the designation) and a motherboard, along with a good chunk of memory. They could get an LSI SAS controller flashed to IT mode so it’s only acting as a pass-through connection for a bunch of additional drives.

You could install Windows to an NVMe drive for speed (since this is a business server, I’d recommend a WD Red) and hook up some Intel P4500 or similar U.2 drives (I think you could put 4 on a PCIe 3.0 x16 card to get full lane utilization). They come in good capacities, and they don’t slow down on transfers just because they are full. In addition, the drives are rated (depending on the exact model) for multiple drive overwrites per day for the entirety of the drive’s warranty period: the kind of reliability you need in a business server.

You smash maybe 8 U.2 SSDs together in a Windows Storage Spaces three-way mirror with ReFS and bam: one big, reliable, performant volume that also gives you some basic data-integrity protection. Throw in a 10 Gbps DAC or fibre card if there’s money for it and you’ve got a fine server that should last several years almost unsupervised.
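
To put napkin numbers on that layout (the 8 TB drive size below is an assumption for illustration, not something from this thread): each U.2 drive uses four PCIe lanes, so four of them saturate an x16 slot, and a three-way mirror keeps three copies of everything, so usable space is roughly a third of raw:

```python
# Back-of-envelope sizing for the U.2 + three-way-mirror setup above.
# Drive size and count are illustrative assumptions.

LANES_PER_U2_DRIVE = 4   # U.2 NVMe drives are PCIe x4 devices
SLOT_LANES = 16          # one PCIe 3.0 x16 bifurcation card

print(SLOT_LANES // LANES_PER_U2_DRIVE)  # 4 drives per card at full bandwidth

# A three-way mirror stores three copies, so usable ~= raw / 3.
drive_count, drive_tb = 8, 8.0
print(f"~{drive_count * drive_tb / 3:.1f} TB usable")  # ~21.3 TB, above the 16 TB target
```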

1 Like

I think I should step aside and let people who work with enterprise Big Metal comment.

I agree a separate box for the storage (and one for the compute), running a real OS with software RAID, would be better than hardware RAID.

I do not think Storage Spaces is as good as a hardware RAID controller.

Personally, I only use software RAID on IT-flashed RAID cards, because I can babysit or ditch-and-switch pretty easily.

But for an enterprise, I am not sure I would propose such a solution. This is the kind of place where I would actually lock myself to a single vendor and just use them and their support.

So I don’t really understand why you need a single computer to run everything. If I were in your position, I would maintain the Synology as the storage server and then just spec out a tower for the Windows appliance.

1 Like

For a well-established enterprise I would agree, and I’ve not found much info, good or bad, about Storage Spaces beyond the fact that its parity mode is a notoriously poor performer unless configured just right.

It sounds like this is a smaller company that may not be able to afford an ongoing support contract and a five-digit server bill; otherwise they could configure something more robust.

Alternatively, as you suggest, they could use a TrueNAS box (or something similar) with mirrored pairs of drives set up to host the files on a Samba share or block storage, and use a run-of-the-mill workstation.

1 Like

I like ucav117’s idea better: keep the familiar Synology, with bigger drives, and just get a compute machine.

As long as the tasks in question can be done via a network share…

4 Likes

If running single parity, use 3 or 5 drives, and configure the interleave as a direct ratio of the allocation unit size. If using dual parity, use 6 or 10 drives. Those configurations will give you good performance in parity mode.
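
To make that ratio concrete (the 64 KiB interleave below is an assumed example value, not a stated default): with single parity one column per stripe holds parity and with dual parity two do, so parity writes perform best when the allocation unit size covers exactly one full data stripe:

```python
# Illustrates the interleave <-> allocation-unit-size ratio described above.
# The 64 KiB interleave is an assumed example value.

def full_stripe_kib(drives: int, parity: int, interleave_kib: int = 64) -> int:
    """Allocation unit size (KiB) that makes each write cover one full stripe."""
    data_columns = drives - parity      # columns holding data rather than parity
    return interleave_kib * data_columns

print(full_stripe_kib(3, 1))    # 128 KiB (single parity, 3 drives)
print(full_stripe_kib(5, 1))    # 256 KiB (single parity, 5 drives)
print(full_stripe_kib(6, 2))    # 256 KiB (dual parity, 6 drives)
print(full_stripe_kib(10, 2))   # 512 KiB (dual parity, 10 drives)
```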

1 Like

If it were me, I’d probably just make the Synology the actual backup and put new drives in the new server, as I am sure they don’t have an actual backup.

1 Like

Yeah, regardless, I don’t see any value in decommissioning the Synology. It can be either the primary or the backup. Honestly, I would keep it as the primary and then just do nightly backups to an S3 bucket for offsite.
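
Synology can push to S3 on its own via Hyper Backup or Cloud Sync; if you preferred to script the offsite push yourself, a minimal nightly upload might look like this sketch (the bucket name and file paths are made-up placeholders):

```python
# Minimal sketch of a nightly offsite push to S3 (bucket/paths are placeholders).
# Requires `pip install boto3` and AWS credentials configured on the host.
import datetime
import boto3

s3 = boto3.client("s3")
today = datetime.date.today().isoformat()

# Upload one nightly archive; producing the archive itself is out of scope here.
s3.upload_file(
    Filename=r"D:\backups\projects-nightly.zip",  # hypothetical local archive
    Bucket="example-office-offsite",              # hypothetical bucket name
    Key=f"nightly/{today}/projects-nightly.zip",
)
```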

Since they are already familiar with Synology I don’t think they will get a simpler storage appliance than that.

1 Like

What software are you talking about? A small engineering firm and Windows makes me think of things like Autodesk Revit or Vault. Those only run on Windows and are only supported on Windows Server.

Those need Windows Server CAL licenses and maybe SQL Server Standard.

You could run it on a workstation, but your people’s time costs money and you can’t afford to be standing still. So you kind of have to shell out.

There are ways to soften the blow by optimizing the setup. I would keep the NAS separate as a backup target (and have offsite backups as well).

It’s really important to look at the total cost; time also costs money, so it might even be better to run it in the cloud or subcontract the whole system.

Okay, so DIY-on-company-time? :wink:

Never touch a running system. Since that Synology is working for you, keep using that.

What level of compute power are we talking about here? A 16-core server won’t break the bank hardware-wise (M$ Server 2022 for 16 cores is 1080 €/$ if I remember correctly).
Local storage in the server based on a few SSDs is probably a good idea to keep the Synology “snappy” for the office PCs (nightly backups from server to Synology, or the reverse, are a good idea).
RAID, however, is not a great idea. If you are running Microsoft Server, you have the option to use ReFS, which does some fancy stuff a traditional RAID can’t (integrity streams, block cloning, and automatic repair when paired with Storage Spaces).

Many servers support RAID in the BIOS. If you RAID 1 the boot drive, you are more resilient in that regard. As said above, the “operational” drives are best kept as single disks until the OS gets its hands on them (ReFS or a similarly “smart” file system).


Napkin math:

| Part | Price (approx.) |
| --- | --- |
| HP DL385 Gen10 (chassis, mb, etc.) | 3000 |
| AMD EPYC 7302 (16-core) | 600 |
| 64 GB RAM | 800 |
| 6x 4 TB U.2 drives | 1600 |
| 2x 400 GB U.2 drives | 600 |
| MS Server 2019 16-core | 1000 |
| **Total (roundabout)** | **8000** |
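
For what it’s worth, a quick sanity check on those numbers; the gap to 8000 is presumably buffer for cables, caddies, shipping and the like:

```python
# Quick sanity check on the napkin math above.
parts = {
    "HP DL385 Gen10 (chassis, mb, etc.)": 3000,
    "AMD EPYC 7302 (16-core)": 600,
    "64 GB RAM": 800,
    "6x 4 TB U.2 drives": 1600,
    "2x 400 GB U.2 drives": 600,
    "MS Server 2019 16-core": 1000,
}
print(sum(parts.values()))  # 7600 -> "roundabout 8000" with some margin
```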

Hello,

Kind of new here, but I have a similar situation…
I’m an architect with a small studio that has similar requirements.
I have configured a virtualised system with Windows AD, TrueNAS, and pfSense VMs.
Hardware is an ASRock Rack AM4 server motherboard with a Ryzen 5800X, 64 GB RAM, two 1 TB NVMe drives, two onboard 10 Gbit Ethernet ports, and a SAS HBA with 8 x 4 TB SATA SSDs in a raidz2 pool.
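
For reference, the raidz2 arithmetic on that pool, ignoring ZFS metadata overhead (real usable space will be somewhat less):

```python
# Rough raidz2 capacity for the pool above (ZFS overhead ignored).
drive_count, drive_tb, parity_drives = 8, 4.0, 2
usable_tb = (drive_count - parity_drives) * drive_tb
print(f"~{usable_tb:.0f} TB usable from {drive_count} x {drive_tb:.0f} TB SSDs")  # ~24 TB
```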

The system can be mirrored on two servers for redundancy.

Details available if you need…

@h4ns I’m sure people would be happy to help you out. Consider starting a new thread specific to your situation so that we don’t hijack this one.

2 Likes

Thanks ucav117, but I don’t actually need help (as far as I’m aware). At least not yet.
I should have worded my reply differently…
I have built and configured the system in my description, and it runs very well (actually, not just one, but three in three different architectural studios)… and since the purpose/requirements are very similar to what blaznarocnine described, I offered further info and help…

1 Like