
Site in a box


So, not sure if this is “enterprise” but it is FOR an enterprise, and doesn’t fit anywhere else.

The goal:
A ruggedised travel-case-sized box providing the following…

  • Riverbed SteelConnect SD-WAN (VM or appliance)
  • Windows virtual machine(s)
  • Network routing/firewalling
  • 8+ port gigabit managed switch (10G uplink(s) if I can swing it, so I can later uplink to a distribution switch when structured cabling goes in)
  • Wireless networking
  • 2-4 TB NVMe SSD storage
  • secondary spinning-rust (HDD) storage
  • 32-64 GB RAM
  • as many cores as will fit (guessing 8C)

Desirable if possible

  • hardware fault tolerance (e.g., via 2 NUCs running oVirt or similar)
  • under 7500 AUD total BOM

The concept is to be able to take said luggage to a remote site in the middle of nowhere, plug a power cable into one hole, plug a DHCP-allocated upstream connection into the WAN port, and have a small office network ready to go; ideally pre-built and shipped with a non-IT nerd as checked baggage.

So far we have been spitballing in the office.

  • Small 19" chassis rails in a box exist
  • one or more NUCs should fit - they can be purchased as rack mounted options
  • Shuttle looks like they have some interesting hardware that could work

Use case: early deployment of a small office server/network/VPN appliance in (literally) the middle of nowhere, where cloud services may not be available and internet will be limited to really bad satellite or mobile-only coverage.



So far, I’m thinking the Shuttle XH310R as a base chassis, with one or more of them for redundancy.

But can’t find pricing locally yet…



How do you plan to deal with cooling the devices in the box?



The box we’re looking at (a colleague has the spec) has front and rear panels that are removable for airflow, plus built-in 19" rack-mount rails.

It will be in an air-conditioned environment (a transportable building with split-system AC unit(s) in it), and hopefully the components in said box are low-power enough that cooling won’t be too bad. Looking at 35-65 W CPUs, for example.

Essentially, we’re done paying 6-8k for a 2U rack-mount server (to get redundant PSUs and hot-swap disks) that doesn’t even have SSDs. We figure we can build something far cheaper, with greater I/O performance for a local DB, in a smaller form factor, and handle the fault tolerance with multiple boxes of consumer hardware (to get better bang for buck) instead of enterprise grade.

Kinda like Google did, but on much smaller scale and with some additional motivating factors…

We often don’t have space for a full depth rack, etc. either.

Primary goal with this is that we can configure everything in a case, tell the guy on site to plug in two cables (WAN + power) and connect to our corporate SSID via Wi-Fi, and job done…

And yeah, in an ideal world we would have working internet of a speed where cloud would be an option. But we don’t. Sometimes there’s no internet, or the internet is unavailable for extended periods of time (African telco forgot to pay AT&T or whatever; yes, that has happened before: a Zambian telco didn’t pay their AT&T bill a few years back and AT&T cut them off for several days :smiley: )



I could get on the NUC idea…

Get two low-end i3 NUCs that support a 2.5" drive…
- configure each with a 1 TB NVMe M.2 and a 2 or 4 TB 2.5" SSD or HDD (for either Ceph or GlusterFS, using some of the NVMe for bcache/lvmcache)

and two i7 NUCs for a Proxmox/KVM compute cluster.
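For the caching tier, here’s a rough lvmcache sketch. The device names are hypothetical (assuming the 2.5" drive shows up as /dev/sdb and a spare NVMe partition as /dev/nvme0n1p3); adjust for the actual NUC layout. The Ceph OSD / Gluster brick would then sit on the cached LV:

```shell
# Hypothetical device names -- adjust for the actual hardware.
# /dev/sdb        = 2.5" SSD/HDD (bulk storage)
# /dev/nvme0n1p3  = NVMe partition reserved for caching

# Put both devices into one volume group
pvcreate /dev/sdb /dev/nvme0n1p3
vgcreate vg_store /dev/sdb /dev/nvme0n1p3

# Bulk data LV on the slow device
lvcreate -n data -l 100%PVS vg_store /dev/sdb

# Cache pool on the NVMe (leave some headroom for metadata)
lvcreate --type cache-pool -n cachepool -l 90%PVS vg_store /dev/nvme0n1p3

# Attach the cache pool to the data LV; writethrough is the safer mode,
# since every write also lands on the slow disk before being acked
lvconvert --type cache --cachepool vg_store/cachepool \
          --cachemode writethrough vg_store/data

# The Gluster brick / Ceph OSD would then live on /dev/vg_store/data
```

Writeback mode would be faster for the DB workload, but if the NVMe dies you can lose dirty blocks; with replication across nodes via Ceph/Gluster that may be an acceptable trade.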



Yeah, I’m thinking 2 or 3 of them with

  • 1x 2 TB NVMe (SQL Server data)
  • 3x 2 TB SATA (the Shuttle model I listed has 3x 2.5" bays, the SATA ports, and an M.2 slot for SSD) for other non-performance-critical VMs (say 2-3 of them, e.g. file/print, SD-WAN appliance, etc.)
  • 32 GB RAM (only has 2 slots)

I’d be going i7 though to get the cores for virtualization.

Maybe running oVirt.

No, it isn’t ECC; no, it doesn’t have proper fault tolerance, etc. But I reckon in this application we can get away without ECC, and we can make up for the fault tolerance with more boxes… :slight_smile:
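If it ends up on Proxmox rather than oVirt, the clustering side is only a couple of commands; a sketch assuming hypothetical hostnames/IPs:

```shell
# On the first node: create the cluster
pvecm create sitecluster

# On each additional node: join it (10.0.0.11 = first node's IP, hypothetical)
pvecm add 10.0.0.11

# Verify membership and quorum from any node
pvecm status
```

Worth noting that with only two nodes there’s no quorum if one dies, so a third small box (or a QDevice) is needed for automatic failover; that argues for the "2 or 3 of them" question landing on 3.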
