1U Server Build Advice for Colocation

After about 6 years of running my company out of a cloud service provider, it’s time to move to a small server colocated somewhere. This is my first server, and my requirements are actually fairly small, I think. The most important things are data security, data redundancy, data integrity, uptime, and single-thread performance (in that order).

I’m not storing a lot of data in the grand scheme of things: I generate about 1 TB of data over the course of a year, mostly audio files around 500 KB - 1 MB in size, at roughly 2,000 - 3,000 new files each day. I only intend to keep the data around for 1 - 2 years at most before removing it from the audio archive.

It’s going to host a web server running Nginx, PHP, and SQLite. I use Deno as a WebSocket server right now, but I’m considering rewriting the WebSocket service in Rust so it ties into my user-facing services more neatly.
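
For context, the Deno WebSocket side is conceptually something like the minimal sketch below; the port and echo logic are placeholders rather than my actual service, just to show the shape of what a Rust rewrite would replace.

    // Minimal Deno WebSocket server sketch (placeholder port and logic).
    Deno.serve({ port: 8080 }, (req) => {
      if (req.headers.get("upgrade") !== "websocket") {
        return new Response("expected a WebSocket upgrade", { status: 400 });
      }
      const { socket, response } = Deno.upgradeWebSocket(req);
      socket.onmessage = (event) => {
        // The real service would route messages to user-facing features here.
        socket.send(`echo: ${event.data}`);
      };
      socket.onclose = () => console.log("client disconnected");
      return response;
    });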

I have no requirements as far as system architecture goes; I don’t hold any manufacturer up as the gold standard over another, and I don’t care if it’s x86_64 or arm64 as long as the single-thread performance is good. I really want something that fits into 1U and has enough storage for about 2 - 4 TB, fully redundant.

Budget is ideally under $3000.

Does anyone have suggestions in this space? This is all pretty new to me.

What are the specs on the cloud rigs you’re running?

Check out the “Save My Server” store on eBay. I have purchased a bunch of stuff from them. It’s all used but tested, and it works well out of the box.

I am not affiliated with them; I am just a happy customer.

When you have only one server, I would advise against running production systems on it. You should have at least two (three is optimal).

And I would advise an offsite backup (daily or hourly).


Currently I have a VPS instance with 2 vCPUs, 4 GB of RAM, and 25 GB of SSD space, with a 400 GB block volume added for “cold storage”.

Thank you, will look into that.

Yeah. I would love to do that, and I know that I’m playing with fire by not having a server in a second location. I’m doing offsite backups every 24 hours right now. It’s not a “hot spare”, but I can start a new server with a VPS provider in under an hour if needed.
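
As an illustration of the kind of job this can be, here is a rough Deno sketch that shells out to rsync; the host, paths, and flags are placeholders, not my actual setup, and it assumes rsync and SSH keys for the backup host are already in place.

    // Rough sketch of a daily offsite backup job (placeholder host and paths).
    // Assumes rsync is installed and SSH keys for the backup host already exist.
    const rsync = new Deno.Command("rsync", {
      args: [
        "-az",        // archive mode, compressed over the wire
        "--delete",   // mirror deletions so the remote copy matches
        "/srv/audio-archive/",
        "backup@backup.example.com:/backups/audio-archive/",
      ],
    });

    const { code } = await rsync.output();
    if (code !== 0) {
      console.error(`backup failed with exit code ${code}`);
      Deno.exit(code);
    }
    console.log("offsite backup completed");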

That could be a valid strategy (using a VPS as failover), but the costs for it will be substantial.

Moving all your data into the cloud for a failover will surely be expensive (bandwidth is what drives your costs up significantly).

Especially when running your software without hardware redundancy, I would highly advise changing some of the software/system architecture so that parts of the application can fail gracefully (to reduce those pesky cloud charges on failure).

I would prioritise things in the following order:

  • Data safety (You are now responsible for it)

  • Uptime of your storage abstractions. If you don’t have one yet, I highly advise not using the VM’s own storage directly, because if it fails you will pay big $$$ to get the data into a cloud service provider.
    Some helpful suggestions, in order: MinIO, Ceph, Riak S2 (though I haven’t used that last one yet). See the storage sketch after this list for what that looks like from the application side.

  • Good DNS abstractions over any internal network traffic. No more localhost usage ANYWHERE. If anything fails, you can fail over to another, non-local system easily this way. E.g. the DB is not addressed as localhost:3601 but as db01.example.com.

  • Automated failovers and good health checks. You do not want to be caught off guard at 3 am when something fails. Write and test scripts to get things back online when it does fail; a rough health-check sketch follows this list.

  • Separation of responsibilities. Do not run it all on a single instance; use virtualisation to separate workloads like the DB, storage, and web server.

There might be more, but I don’t know enough about the situation to help further than this.
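
To make the storage-abstraction point concrete, here is a rough sketch of what writing one of those audio files through an S3-compatible layer such as MinIO can look like from the application side, using the AWS SDK under Deno; the endpoint, bucket, object key, and credential names are placeholders.

    // Sketch: upload an audio file to an S3-compatible store (e.g. MinIO).
    // The endpoint, bucket, object key, and credentials are placeholders.
    import { S3Client, PutObjectCommand } from "npm:@aws-sdk/client-s3";

    const s3 = new S3Client({
      region: "us-east-1",
      endpoint: "http://storage01.example.com:9000", // storage node by DNS name, not localhost
      forcePathStyle: true,                          // required by most MinIO setups
      credentials: {
        accessKeyId: Deno.env.get("S3_ACCESS_KEY") ?? "",
        secretAccessKey: Deno.env.get("S3_SECRET_KEY") ?? "",
      },
    });

    const audio = await Deno.readFile("./recording-2024-01-01.mp3");
    await s3.send(new PutObjectCommand({
      Bucket: "audio-archive",
      Key: "2024/01/recording-2024-01-01.mp3",
      Body: audio,
    }));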
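
For the health-check and DNS points, a minimal sketch of the idea is below: probe each service by DNS name (never localhost) and raise the alarm or kick off a failover script when one stops answering. The hostnames, paths, and alert step are placeholders.

    // Minimal health-check sketch: probe services by DNS name, never localhost.
    // Hostnames, paths, and the alert step are placeholders.
    const services = [
      "https://web01.example.com/healthz",
      "https://db01.example.com:8081/healthz",
    ];

    for (const url of services) {
      try {
        const res = await fetch(url, { signal: AbortSignal.timeout(5000) });
        if (!res.ok) throw new Error(`status ${res.status}`);
        console.log(`${url} ok`);
      } catch (err) {
        console.error(`${url} failing: ${err}`);
        // A real setup would page you and/or trigger the failover script here.
      }
    }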


So this is around a $10k investment at the least to do it correctly, not including the monthly cost of bandwidth/power? Three identical servers in three different locations, with failover from one to the other and obviously replication between them. DevOps always seems like it would be so much fun until you actually have to do it.
