OK, here's a third opinion: it starts out trying to help you answer your own questions, but (as always) derails later on.
- Is cloud expensive for your client?
- If not, wouldn't cloud be a better deal, since they aren't likely to grow much any time soon (at most 14 people)?
With Google Workspace, you can edit the same Excel or Word file at the same time as others. I think Microsoft 365 might have something similar, but don't quote me on that; I've never used it. With both, you also get email and chat + VoIP (Google Meet/Chat, or whatever Google uses nowadays, or Teams).
If you are dead set on fully local infrastructure, then:
- Would it be possible to go cheaper with commodity hardware, and instead upgrade the internet plan in at least one location?
- If you go with new hardware, are you (or rather your client) prepared to pay the premium for it?
- Do you have a budget you are working with, or are you first trying to do a proof of concept to give a quote?
- If you go with used hardware, do you have enough of it to ensure some amount of redundancy?
Not all companies need HA, but at least some replication between sites might be useful; it gives you the option to launch a service or VM on the other side if something goes wrong in one location. I'd still make one of the sites the main one (the one with the higher upload bandwidth) and have it serve the other.
With Asterisk PBX you have the option to go with lower-bandwidth codecs. G.711 a-law and u-law use quite a lot of bandwidth and, AFAIK, are the default in most PBX software. We used to have a 5 Mbps dedicated line between Europe and Asia and we switched to G.729, with decent results. Granted, that was a dedicated line, but it was also crossing the pond(s) through India to reach the SEA datacenter.
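Just to sketch what the codec switch looks like (section names are made up, and depending on your Asterisk build you may need to install a G.729 codec module first), it's only a few allow/disallow lines in sip.conf, or the same options on a pjsip.conf endpoint:

```
; sip.conf sketch; pjsip.conf endpoints use the same allow/disallow options
[general]
disallow=all        ; start with no codecs permitted
allow=g729          ; ~8 kbit/s payload per call, vs 64 kbit/s for G.711
allow=alaw          ; keep G.711 a-law as a fallback for phones on the LAN
```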
Latency-wise, I don't think it would be much of a problem if your sites are in the same country, just don't go as far as, say, New York to San Francisco. Marseille to Paris, or Napoli to Milano, should do.
Speaking of sites and internet: if you're using the same ISP at both locations, ask them for a deal to get better bandwidth within their own network. You don't really need more than what you have for reaching the internet itself (unless you have a lot of heavy YouTube users), but it would be nice to get at least 50 Mbps (or better yet 100 Mbps) from one site to the other. Some ISPs offer this by default, but you have to ask first.
- Are you going to manage everything part time?
- If you die tomorrow, or just want to end your IT career and move into a cabin in the woods, how easy would it be for your client to replace you?
These are questions anyone should be asking themselves. You should definitely write good documentation for the infrastructure, no matter what you use, but how easily (and cheaply) can you be replaced? The technologies you deploy need to be ones you can easily find knowledgeable people for (this is why you typically don't deploy Gentoo or NixOS in production, even if they're arguably better; most people only know Ubuntu, SUSE and RHEL).
I would argue AD is easier to manage, but finding someone who knows how to do it well is hard, and those people ask for quite a bit of money. And the documentation, last time I tried learning AD, was poor; no real guides or anything. But maybe I'm misremembering.
I had a tech newbie learn Samba via some shell scripts I wrote, in about 4 months or so. He had no previous Linux experience. And it wasn't even something fancy like a TrueNAS GUI; it was plain CentOS. If you don't get into configuring Samba as an Active Directory Domain Controller, then using Samba for CIFS shares should be fine (and cheaper). That gets rid of the need for Windows entirely (both licensing it and managing it).
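As a rough sketch (share name, path and group are all made up), a plain standalone Samba file server needs surprisingly little in /etc/samba/smb.conf:

```
[global]
    # standalone file server, not an AD domain controller
    workgroup = WORKGROUP
    security = user

# hypothetical share for the office files
[office]
    path = /srv/samba/office
    read only = no
    # only members of the local 'staff' group may connect
    valid users = @staff
```

Users then just need a Unix account plus `smbpasswd -a <username>` to set their share password.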
Did you really have to go with some of the more expensive stuff? I've run HP gigabit switches with balance-alb bonding and Ubiquiti UniFi 5 APs, behind real junk routers (seriously junk: 1U servers with 10+ years of uptime, and even 10 years ago they were already-obsolete Core 2 Duos with 8 GB of DDR2 RAM), and everything was fine for 300+ VMs and a few physical servers here and there (which I later retired and turned into VMs). And everything was gigabit, not even 10G. And we had 70+ employees. You are over-speccing for 14 people, big time.
For the network infrastructure, since these are probably small sites, I'd go with cheap gigabit-capable stuff, unless you find a real killer deal on 10G (which I doubt, given how many people are basically giving away gigabit gear by now). Something like a Protectli router, or maybe something even cheaper, should do. I wouldn't virtualize the router and I wouldn't buy the routers second-hand, but everything else could be. Up to you.
Making a site-to-site VPN with these would probably be a pain, since you seem to be stuck on some pretty awful internet plans, but it should be doable with hackery like dynamic DNS; a sketch follows below.
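For what it's worth, a WireGuard tunnel between the two routers could look roughly like this (hostnames, subnets and keys are all made up; also note that WireGuard only resolves the DDNS name when the tunnel starts, so a small periodic re-resolve script is usually needed on dynamic IPs):

```
# /etc/wireguard/wg0.conf on the site A router (sketch, made-up values)
[Interface]
Address = 10.99.0.1/30
ListenPort = 51820
PrivateKey = <site A private key>

# site B's router, reachable via its dynamic-DNS name
[Peer]
PublicKey = <site B public key>
Endpoint = siteb.example.dyndns.org:51820
# the tunnel /32 plus site B's LAN get routed through this peer
AllowedIPs = 10.99.0.2/32, 192.168.2.0/24
# keep the NAT mapping open from behind the awful ISP routers
PersistentKeepalive = 25
```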
It just kind of hit me, but I think we're going a bit too far with this, really. Both sites have terrible internet, and I doubt they're equipped to host this kind of hardware anyway.
- Do you have a secure rack to put the servers in?
- Is your rack going to be air conditioned?
- Do you have a UPS to power all the high-powered electronics you were planning to get? For how long?
- Are the electricity costs of basically duplicating the infrastructure going to be worth it?
In some parts of Europe, electricity prices have roughly tripled. Even back when I managed 5 racks, it was expensive. I was talking with my ex-colleagues a few weeks ago: when I left the company only 2 racks remained (which we colocated), and now they're trying to buy the most power-efficient server hardware they can to lower the electricity costs.
In all honesty, I think you should talk to your ISP and see if they can colocate a single server. If they can't, look for someone who will; typically with ISPs you get better bandwidth if you stay inside their infrastructure and don't go out to the "greater" internet. Worst case, you can go with dedicated VPSes (but these tend to be expensive for what they offer). For 14 people, though, probably even a single 4-core VPS with 8-16 GB of RAM will suffice, as long as you have enough storage.
If you colocate, get something newer and power-efficient, maybe a 16-core EPYC. Then virtualize the router on it (which I generally don't suggest, but it should be fine for up to 30 people; after that, the business has grown enough to make a physical router worth it). Make the router a VPN endpoint and have everyone connect to the VPN from their laptops.
Treat both sites like coffee shops: everyone just connects to whatever random Wi-Fi is there, but uses a VPN to reach the infrastructure. Make it a split-tunnel VPN (i.e. don't redirect all traffic through the VPN, only what's necessary). With WireGuard you can limit the tunnel to just the routes for the internal network; you should be able to do the same with OpenVPN, I think (just use WireGuard).
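A minimal split-tunnel client config (all addresses, keys and the endpoint are made up) really comes down to what you list in AllowedIPs:

```
# laptop-side /etc/wireguard/wg0.conf, sketch with made-up values
[Interface]
PrivateKey = <laptop private key>
Address = 10.50.0.12/24

[Peer]
PublicKey = <server public key>
Endpoint = vpn.example.com:51820
# split tunnel: only the office/VPN subnets go through the tunnel,
# everything else uses whatever connection the laptop is on
AllowedIPs = 10.50.0.0/24, 192.168.10.0/24
PersistentKeepalive = 25
```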
Then get a cloud plan, or a VPS with tons of storage somewhere, and do backups to it. I haven't used restic before, but I keep hearing only good opinions on it; then again, considering the point above about finding people who know stuff, maybe you want something more easily understandable, like BackupPC, which I do recommend.
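If restic is the route you pick, the basic flow is roughly this (repository location and paths are made up, and since I haven't run it myself, treat it as a sketch and check the docs):

```
# one-time: create an encrypted repository on the storage VPS over SFTP
restic -r sftp:backup@backup.example.com:/srv/restic init

# nightly (cron): deduplicated backup of the file shares
restic -r sftp:backup@backup.example.com:/srv/restic backup /srv/samba

# prune old snapshots with a sane retention policy
restic -r sftp:backup@backup.example.com:/srv/restic forget \
    --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune
```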
This way you only have to buy a single server, everything runs on it, and the business can grow or even relocate without anything changing. You also enable work from home, which I'm sure your client will appreciate when someone gets sick and can't come to the office but is still capable of working (not to mention lockdowns).
Just make sure that, if you get a colocated server, you also get something like a Pi-KVM and a dedicated network connection for it, so you can troubleshoot it from anywhere in the world without asking colocation support for assistance, which typically incurs costs. Just don't open iDRAC or iLO to the internet. If your server already has IPMI and you don't want to spend money on the KVM part of Pi-KVM, even a cheap Pi running a VPN that you connect to first, and then reach the IPMI from, will do.
But do ask about colocation perks: the ISP where I used to colocate servers was offering free internet-accessible KVMs during the lockdowns (although they were so ancient you were forced to use IE for some old ActiveX stuff). Before that, they would offer free KVMs on-site, but that was only so you wouldn't have to dress up in a cloth hazmat suit and go inside the datacenter, potentially bringing dust and whatnot in with you. Still, better than having to go downstairs and plug a monitor and keyboard into the servers (which we did when we first helped rack them, because we had some strict connection requirements, although we were mostly just watching and guiding).
A single site out on the internet will absolutely be a better deal than dealing with split-brain problems over low-bandwidth interconnects. If you had a 200 Mbps symmetric plan at each site, I would have recommended some decent options with site replication and so on, but as it currently stands there are too many sacrifices to make, and they would complicate things. It really could be doable to have one site serve the other and not bother setting up the second site, but why bother when, in this scenario, there are better options that don't require any sacrifices at all.