I'm building a datacenter

You can see the fee schedule for the municipal fiber; it's not exactly a secret. $0.062 per strand per foot per month.
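To put that rate in perspective, here's some napkin math. The rate is from the fee schedule above; the strand count and run length are hypothetical, just to show the scale.

```python
# Municipal fiber rate quoted above: $0.062 per strand per foot per month.
RATE = 0.062  # USD per strand per foot per month

def monthly_cost(strands: int, feet: float) -> float:
    """Monthly recurring cost for a fiber run."""
    return strands * feet * RATE

# Hypothetical example: a 2-strand pair over a 1-mile (5,280 ft) run.
cost = monthly_cost(2, 5280)
print(f"${cost:,.2f}/month")  # 2 * 5280 * 0.062 = $654.72/month
```

It adds up quickly, which is why the long-haul rate matters so much.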

I am hoping to get a cheaper rate for the long-haul highway stuff but have not managed to figure out who I need to talk to at the state level for that. If you imagine what it would be like to walk a DMV employee through an oil change over the phone, you have a pretty good picture of what I have been going through with the state government.

3 Likes

Thank you holy government for your zoning laws and your restriction of competition. I can feel how free the free market is by the restrictions put in place by your holiness. Amen!

4 Likes

I have had the idea rolling around in my head to set up a proper exchange point. If I can get an uplink to Spokane or Seattle and my own facility, I would love to create an open exchange, similar to the SIX.

I wonder if there is any demand for it.

3 Likes

Quick status update:

Drives are done. 4 of them reported “track following errors” but I have 20 good ones. I won’t put too much fuss into the failed ones; I might return them.

I tried to install the new RAID controller into one of the servers. First the R610 - the cables don’t match. Not sure what kind of cables I am looking at, will have to do some digging to see if I can get an adapter. In the meantime, I suppose this server can just have a bunch of single-disk RAID-0 arrays.

Next, I tried to put it in the R810. It won’t quite fit. Maybe if I had some right-angle connectors it would work, but I’ll likely just buy a longer SAS cable and move the controller to a different slot.

Lastly for the R720; I plan to just re-flash its controller back to IT mode so I won’t worry about it much.

Bonus puppy picture. I turn my back for a second and he finds a 2x4 to munch on. I wish he had cheaper tastes.

11 Likes

I kept tossing and turning last night. Something about the R810 was bothering me but I couldn’t put my finger on it.

After scratching the mental itch in the shower and engaging in some deep contemplation whilst sitting upon my porcelain throne, I resolved to just go back and look at it.

I couldn’t make the connectors fit with the controller plugged in, simply not enough clearance. I could, however, remove the riser and anchor points. So that’s what I did. I plugged the card into the riser, then plugged the mated riser and card back into the motherboard, then screwed the anchor points back in.

It’s tight, but it fits.

8 Likes

Is that DRAM in a slot on the RAID card? Wow

1 Like

Yes, those are used for cache in hardware RAID cards. :slight_smile:

2 Likes

Just waiting on some SAS cables to arrive before being able to properly spin the cluster back up.

I purchased some off eBay; it’s been a few weeks and they have not shipped. I purchased more from Amazon; it’s past due and they have not shipped.

I just want a stinking SAS cable, it shouldn’t be this hard!

1 Like

I have zero technical knowledge to contribute to this thread… But I really enjoy reading about your project! Keep up the good work :sunglasses::+1:

5 Likes

It’s been a bit, but here is an update:

Hardware

Everything is up and running. The Ceph cluster is chugging away nicely; read and write speeds are great (for spinning rust, at least) so long as the IOPS don’t get too heavy. It’s really nice to be able to reboot nodes or even replace hard drives entirely without needing to take any services offline.

Out of the 15x 900GB drives (3 servers w/ 5 each), that leaves me about 8TB worth of usable space. Not great, but it’s a starting point.
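For anyone following along, the usable figure falls out of raw capacity divided by the redundancy overhead. A rough sketch; the replication sizes below are illustrative assumptions, not necessarily what this cluster actually runs.

```python
# Rough Ceph capacity math: usable ≈ raw / replication size.
# 15 drives x 900 GB = 13.5 TB raw, as in the post above.
raw_tb = 15 * 0.9

# Illustrative pool sizes; actual usable space also depends on
# full ratios and erasure-coding profiles, if used.
for size in (2, 3):
    print(f"size={size}: ~{raw_tb / size:.2f} TB usable")
```

With plain 3x replication you'd expect ~4.5TB, so landing around 8TB suggests a lighter redundancy setting or an erasure-coded pool in the mix.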

Network

I called up Spectrum a week or two ago to officially sign up for my own internet service and stop piggy-backing off the business downstairs.

As it turns out, the 1000/1000 Mbps plan they offer is residential-only. The absolute best they offer in business class (at about $250/mo) is 1000/35 Mbps. I asked if I could sign up for residential class instead since it’s on the same infrastructure anyway, but that won’t fly.

So… Going back to the drawing board on internet. Lucky for me the CEO of one of the local ISPs happens to be a family friend, so we are working together to see what we can come up with. Nothing is quite set in stone yet, but if I cover the cost of fiber ($30-45k) they would be willing to be flexible on longer term pricing. We shall see.

8 Likes

Be sure to keep us posted : )

1 Like

Here’s an idea: I don’t see affordable LXC hosting. Linode has their LKE (Kubernetes stack) for OCI containers, but it’s just not the same thing. And all the other LXC vendors are offering 1 CPU / 1GB of RAM / 20GB of storage starting from $3.50 (usually $5 on average). I say that’s bollocks. If you can offer a 1 CPU / 256MB of RAM / 3GB of storage / 100 Mbps up-down (best-case obviously) LXC container starting from $1, you’ll probably get lots of people and heavily disrupt the market.

This is definitely not enough for an Ubuntu install (which takes about 7GB, yuck), which is what you want. A bare-minimum Ubuntu container takes about 740MB of storage and 35MB of RAM cold-booted. But it’s PLENTY for an Alpine container. I just created a default unprivileged Alpine container in Proxmox and tried bloating it a little with a few programs that I figure people would use: wireguard-tools, htop, (neo)vim, iptables, fail2ban, haproxy, certbot, wget, curl, tinyPortMapper, make, cmake, git, doas, tar, xz, bzip2, logrotate.

I managed to use a whopping… wait for it… 184MB of storage! Imagine that, a full Linux distro in under 200MB. Well, full is probably not right, but it’s definitely all you need. And it’s using 26MB of RAM with wg running!

I see a scenario where you open your own Wiki to tell people how to set up an Alpine container for self-hosting inside their own networks at home, but using this container as the gateway from the internet. I would recommend this everywhere. Get an Alpine LXC container and a domain name. Point the domain to the container’s IP in the DNS. Set up WireGuard on the container and at your home, then connect your home router or VM or server to the container using the domain name. Then make the container listen for incoming requests from the Internet using tinyPortMapper and redirect traffic from itself through the wg tunnel to your infrastructure at home. But even without self-hosting, this can be a really cheap DIY VPN service; let’s see Mullvad or BoxPN offer VPNs for $1/month (I think they start at $2.99/month).

Obviously, teach people through the Wiki to set iptables to be very restrictive and set fail2ban to block IPs that have had too many failed login attempts. I imagine this could be even more attractive if you include subdomain names in the price, say for example subdomain.tinyselfhosting.com - the domain itself is available for $10, grab it before someone else takes it I guess. You allow people to create their own subdomain names, have a small container to access their self-hosted stuff securely from the Internet (and allow others to), and you take away resource usage from your own infrastructure. RAM isn’t hard-wired to a container the way it is to a VM; LXC uses as much RAM as it needs, so you can cram lots of containers on one host. Not to mention the CPU.
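The WireGuard side of the gateway setup described above can be sketched as a config fragment. Everything here is a placeholder (addresses, keys, port) rather than a recommended deployment:

```ini
# /etc/wireguard/wg0.conf on the gateway container
# (addresses and keys are placeholders)
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <gateway-private-key>

[Peer]
# the home server dialing in from behind NAT
PublicKey = <home-server-public-key>
AllowedIPs = 10.8.0.2/32
```

Traffic hitting the container's public ports would then be relayed through the tunnel to 10.8.0.2, via tinyPortMapper or an iptables DNAT rule, per the setup described.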

This kind of setup is pretty banging, I’m almost tempted to do it myself by borrowing a VPS (would make a nice, albeit small profit which would basically be passive income). By not giving 1Gbps, you restrict heavy usage like torrenting, but still allow for some basic video streaming for 1 to 4 people, or lots of website visitors to 1 server.

Since it will be very cheap, customers may want to stick with you and pay more for other stuff, because they would be saving up to $4/month which would have been spent just on a VPS alone otherwise. You can then come in with a backup service a la Amazon S3 by using MinIO. If you offer backup solutions to your customers for cheap (dunno how much backup storage costs lately, should probably check Backblaze), they will probably take it. The backup service could be priced similar to the competition, just to earn more margin which you will probably need.

Hope you make good use of the idea, it’s a market segment that probably awaits lots of Linux people, which could help you build a “community” (or something like that) around your hosting services. As mentioned, have a Wiki in place and also make a forum. I don’t know, I wouldn’t be using Discourse for something like that, CentOS Forums use phpBB. Maybe if you want to have a cool kids forum, use Pleroma or Friendica (would probably not recommend though), but here you can find lots of forum software (alongside other inspirational ideas).

4 Likes

Biky,

It’s about growing fast and not having to worry. It was never about being cheaper or more efficient.

Here’s a pretty good text that goes into the kind of mindset and business plan people who use such services have:

Issue is, smaller companies and entrepreneurs see these fast-paced companies around them using such services and think that it’s a requirement to be a good company. Getting this out of their heads can be a great ordeal.

This is all in response to your comments regarding LXC, S3, etc. pricing.

Best regards,

vhns

3 Likes

My point was also about growing fast, mostly about attracting customers and potentially building a good reputation about being open.

3 Likes

I like the idea. My previous “small tier” was $5 for 2CPU / 2GB RAM / 32GB Disk, but I could definitely sell smaller. Say $2.50 for 1CPU / 512MB RAM / 4GB Disk. Definitely not “bang for your buck” territory, but when price is more valuable than performance it could sell.

I would definitely want to add a disclaimer saying that I would spend very little if any time helping troubleshoot issues with it. For every hour worth of my time I spent helping them out, they would have to be a subscriber for three months just to break even if I paid myself minimum wage.

In the same line of thought as “other stuff” - I do have a reverse proxy service that some folks have shown interest in. I am still tweaking the configs but it should transparently cache static content with all the performance tweaks and gzip enabled. Plus, if a client does not need a public IP address and can use the proxy service instead then I’ll offer a good discount.
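The "cache static content + gzip" idea can be sketched with a minimal config. The post doesn't name the proxy software, so this assumes nginx, and the upstream address is a placeholder:

```nginx
# Illustrative sketch only; backend address and cache sizes are placeholders.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=static:10m
                 max_size=1g inactive=60m;

upstream backend {
    server 10.0.0.5:8080;  # the client's origin server
}

server {
    listen 80;

    gzip on;
    gzip_types text/css application/javascript image/svg+xml;

    # cache common static assets at the edge
    location ~* \.(css|js|png|jpg|svg|woff2)$ {
        proxy_cache static;
        proxy_cache_valid 200 60m;
        proxy_pass http://backend;
    }

    location / {
        proxy_pass http://backend;
    }
}
```

The nice part of this arrangement is that cached assets never touch the client's box at all, which is what makes the no-public-IP discount workable.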

IP space will definitely be a pain point early on. Crossing fingers for IPv6, but I won’t hold my breath for it.

I do like the idea. I have one client (well, hopefully, if I can get better internet) who is a videographer. He generates data faster than his upload speed will allow him to shunt it off-site. My pitch to him is that with me this won’t be a problem, because a station wagon full of hard drives moves data pretty darn quick. Just stick a bunch of drives in the mail, or better yet just drive them to my office, and I’ll take it from there.

Amazon is currently at $23/TB for their S3 storage. Doing some napkin math regarding power usage, redundancy, and failure rate of these hard drives… Well, I should be able to halve that cost and still make money.

The only problem I see with MinIO is that it handles redundancy on its own. This means it would need to write redundant data to the VM disks, which themselves have redundancy built in. This takes my logical-to-physical data ratio from 1:3 to about 1:12. Riak looks pretty neat. If all else fails I could expose the underlying Ceph cluster, but I would really rather not if I can help it.
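The overhead of stacking redundancy layers multiplies rather than adds, which is where that 1:12 comes from. The multipliers below are illustrative; the actual MinIO factor depends on its erasure-set or replica layout, and ×4 is simply the value that reproduces the figure above.

```python
# Why stacking redundancy hurts: each layer multiplies the
# physical bytes stored per logical byte of user data.
ceph_replication = 3   # VM disks live on a 3x-replicated Ceph pool
minio_overhead = 4     # illustrative; depends on MinIO's erasure/replica layout

physical_per_logical = ceph_replication * minio_overhead
print(f"1:{physical_per_logical}")  # 1:12
```

Letting exactly one layer own redundancy (either MinIO on raw disks, or Ceph with a thin object gateway on top) keeps the ratio at 1:3 or so.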

I am selling “storage space” via a VPS with a samba share and a fat attached disk to the business downstairs. Not quite as robust as object storage, but the vast majority of people just want a browsable folder and don’t really care about the underlying tech.

I think the bottom line when it comes to this sort of thing is that if you already know what you want, you know that there are better and more established players in the game who can get you what you need for cheaper. What I am selling is less the physical infrastructure, and more my ability to provide and support a solution.

I very much appreciate the link. I gave it a read and will mull it over a bit.

2 Likes

Yep, saving a few bucks a month, but hosting stuff yourself would definitely be something. Not that you couldn’t fit a small web server there though.

That’s actually great news. If you combine your reverse proxy with the subdomain names and a small container instance that doesn’t have any publicly addressable IPs, it would make for a great combo-pack for small self-hosted websites, but I guess without some kind of port forwarding, it would make a VPN setup more difficult or have to depend on the home connection to have a DynDNS setup (like the one built-in in Asus routers).

At a small scale, it might make sense, but you’d probably want in the future some kind of ingest station setup, where people can come to a terminal (as in, a PC with a monitor and probably a fancy interface), log in to their account and upload the content from their drives or even connect something like a HP ProLiant microserver to your network or to the terminal and upload directly from the terminal. This, at the very least, would make a lot of sense for an initial full backup and then only upload incrementally / differentially through the internet.

Ouch.

That’s interesting. I would still write a Wiki for that, to try to avoid people requesting support for easy problems. Personally, I wouldn’t care about support almost at all, so long as the infrastructure works; that’s how I’d establish my business, but to each their own. What you’re doing is not bad, it’s just different.

I’d like to have a data center where support is limited, but the platform is no-nonsense and the physical stuff just works. There is definitely a market for a service that provides more support to your customers. I’d like to attract people who know what they’re doing and share knowledge (which is why I like wikis), but the idea of helping less experienced folks with their setups is also very entertaining, just not what I would do.

This shouldn’t be said, but just make sure you secure that thing really well, to prevent people accessing other people’s shares.

Slightly off-topic side note. There was a VPC provider that I really liked working with. Their cloud (IaaS) was really neat, in more of a “pay as you go” style. But they were using VMware vCloud Director (vSphere / vCenter), eeewww. I’d have an SSD storage pool allocated to me; I wouldn’t pay for it, it was just available to add to my VMs, and the VM vdisks themselves didn’t have a fixed allocation in the physical infrastructure (i.e. they were only reporting the actual storage used in the vdisk, not the total vdisk capacity). I could use LVM to add more storage, but I’d only pay for the actual storage used, not the amount allocated to the VM, nor the one from the pool. I found that really cool.

Also, I can’t :hearts: comments, because I reached my daily limit.

2 Likes

Meshach was reporting that several of its drives were down in the Ceph pool. I ran some updates and told it to reboot. It decided not to come back up.

A half dozen VMs had a few seconds of downtime while they were picked up by Shadrach and Abednego, but I still have to deal with a troublesome server now.

The system boots to a grub shell. I figured that was an easy problem to solve, so I rebooted with the intention of using a live CD to rescue it. I noticed while rebooting that there was a handy option in my disk controller to “verify drive,” which described that it would check for bad blocks. APPARENTLY, it does this destructively and wiped the drive before telling me “it’s all clean!” There needs to be a warning label on this stuff.
The good news is that all the drives are just fine health-wise. Not sure what the initial problem was but now we will never know.

I told Ceph to remove that node from the cluster and am waiting on the data to rebalance. I’ll need to tell Proxmox to remove the node and reinstall it. Not the end of the world, just not how I expected to spend my Saturday.
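For anyone curious, retiring a dead node boils down to a handful of commands. A sketch only; the OSD ids and node name below are placeholders, not necessarily what this cluster uses:

```shell
# Retire each of the dead node's OSDs (ids are placeholders)
for id in 10 11 12 13 14; do
    ceph osd out "$id"               # stop placing new data on it
    ceph osd crush remove "osd.$id"  # drop it from the CRUSH map
    ceph auth del "osd.$id"          # remove its auth key
    ceph osd rm "$id"                # remove the OSD record
done

# Once Ceph has rebalanced, drop the node from the Proxmox cluster
# (run from a surviving node; "meshach" is a placeholder name)
pvecm delnode meshach
```

After that, a fresh install can rejoin the cluster under the same or a new name.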

The good news is I made it to the 2nd round of interviews with a real nifty company. Maybe if I am lucky I’ll get paid to do this sort of cool stuff!

11 Likes

The problem ended up being a dead disk. All the drives reported they were fine to the OS, SMART data was good, the disk controller read everything as just peachy.

One drive, however, appeared to crash the disk controller when accessed. All of a sudden all the other drives would be forcibly unmounted. On a hunch, I told the disk controller to verify that drive in particular, and sure enough, the system crashed. I stuck a new drive in and all is well.

I reinstalled Proxmox and rejoined the cluster. Ceph did its thing. Overall no data was lost.

13 Likes

Cool to see this stuff in action doing things correctly (other than the HD troubleshooting)

2 Likes

Oh man that’s nasty… But that’s really great news that Ceph kept on working! Keep it up! :slight_smile:

3 Likes