Buying & Reusing hardware for personal & startup machines - any suggestions & critiques?

I’m a software developer working on building my own startup, with the main goal being to develop Machine Learning / AI software and then sell the software and trained models.

Wasn’t 100% sure where to put this, but a lot of my questions are more enterprise / business related, so I figured this would be the best place.

Between the hardware I’ve been using for personal use (such as my desktop and NAS) and the development server I’ve built, I want to standardize the hardware and software I’m using. This would basically end up being a combination of shifting hardware around and buying new gear, which is why I’m asking for some help. Please bear with me through the long explanation, since my plan is fairly complicated.

Here are the current systems I have:

  • Personal Desktop

    • Ryzen 7 3700X
    • Asrock X470 Taichi
    • G.Skill 3200MHz 4x16GB (64GB total) memory
    • RTX 2080 (waiting on the 3080 to go back in stock :pensive:) - will be given to my younger brother, receiving his current GTX 1080 in return
    • Intel 660p 500GB for Windows, some Samsung & WD Blue SSDs for games, and a few HDDs for misc stuff
    • Intel X520-1 SFP+ 10Gbps NIC
  • Personal NAS

    • Ryzen 5 3400G
    • Asrock B450 Fatal1ty mITX
    • G.Skill 2400MHz 2x16GB (32GB total) memory
    • No GPU - using onboard graphics
    • Uses a SilverStone RL-FS303B 2x5.25" front bay to 3x3.5" HDD adapter - populated by 3 WD Red 8TB drives in a ZFS RAIDZ1 array (roughly two drives of usable capacity plus one drive’s worth of parity)
    • Intel X520-1 SFP+ 10Gbps NIC
  • Current Dev Server

    • Ryzen 7 2700X
    • 2x8GB Corsair & 2x8GB G.Skill 2400MHz (32GB total) memory
    • Asus TUF B450M-Plus Gaming mATX
    • Some GTX 1050 GPU
    • Intel X520-1 SFP+ 10Gbps NIC

Those SFP+ NICs are all connected together using a MikroTik 5-port switch (CRS305-1G-4S+IN: 4x SFP+ plus 1x gigabit).

I’m planning on building at least 2 new servers just for the startup in some rackmount case, and they will not share any networking, power, or physical storage gear with my personal NAS. I just want to point out that these proposed builds are more “dream” builds than anything - as much as I’d love to get the latest and greatest, I’m not above choosing a cheaper or equally good alternative.

  • New Dev Server:
    Sole purpose is to be fast as greased lightning to handle different workloads, especially Machine Learning and the like.

    • Threadripper 3970X (or equivalent from newer gen Ryzen TR chips coming soon)
    • Noctua NH-D15 with sTRX4 mounting
    • MSI Creator sTRX4 mobo (or some equivalent mobo with an on-board 10GbE NIC)
    • 4x32GB G.Skill 3600MHz (128GB total) memory
    • RTX 3090 (or some equivalent GPU with good ML performance & lots of VRAM)
    • Planning on using Ubuntu (since that’s what I’m most comfortable with) installed on some SSD
    • Going to use only NVMe storage for high-speed data, probably 4 WD Black SN750s connected through an Asus Hyper 4x M.2 NVMe to PCIe x16 adapter
  • Data Storage Server:
    I’m totally unsure what to get for this. I figure I’d rather build the storage server myself with consumer hardware, but the only requirements are this:

    • Some AMD Ryzen or Threadripper CPU (don’t need TOO many cores, but should be fast enough for handling ZFS and database management). I’m not against an Intel chip, but I figure multi-core performance is better for such a task, or at least AMD seems to be significantly cheaper.
    • Also probably getting 4x32GB G.Skill 3600MHz (128GB total) memory
    • Any GPU, really doesn’t matter - most likely going to reuse the GTX 1050 from my Current Dev Server
    • Plan is to use some WD Red 10TB NAS drives in a ZFS array - not sure how much storage I’ll need, but I figure I can add more drives over time to expand the pool.
    • Supplementing the ZFS array will be 4 WD Black SN750 drives as cache / SLOG devices, connected through an Asus Hyper 4x M.2 NVMe to PCIe x16 adapter - see the sketch right after this list
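
Just so it’s clearer what I mean by cache / SLOG, here’s a rough sketch of how I picture the pool being laid out - it’s just the standard zpool CLI wrapped in Python for convenience, and the pool name, drive count, and device paths are all placeholders rather than real hardware:

```python
# Rough sketch of the planned pool layout using the standard zpool CLI
# (wrapped in Python purely for convenience). Pool name and device paths
# are placeholders - on real hardware I'd use /dev/disk/by-id names.
import subprocess

def zpool(*args):
    """Run a zpool command and fail loudly if it errors."""
    subprocess.run(["zpool", *args], check=True)

# RAIDZ1 over however many WD Reds I end up with (3 shown here), giving
# one drive's worth of distributed parity.
zpool("create", "tank", "raidz1", "/dev/sda", "/dev/sdb", "/dev/sdc")

# Two of the four NVMe drives as striped L2ARC read cache (losing a cache
# device costs nothing, so no redundancy needed here).
zpool("add", "tank", "cache", "/dev/nvme0n1", "/dev/nvme1n1")

# The other two as a mirrored SLOG - it only absorbs synchronous writes,
# so mirroring protects in-flight data if one drive dies.
zpool("add", "tank", "log", "mirror", "/dev/nvme2n1", "/dev/nvme3n1")
```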

Here are my plans so far:

  1. Building a new personal desktop - probably Ryzen 5000 based, can’t really plan on parts that aren’t out yet.
  2. Basically move my current Personal Desktop into a rackmount case and migrate my existing NAS drives & ZFS array to this system. Specs would largely remain the same, minus the GPU, with my personal drives moving over to the new desktop.

Misc gear needed:

  • 12U server rack for personal NAS + network switch - could probably use a smaller size but future expandability, y’know?
  • 24U or greater server rack for the startup’s servers.
  • Rackmount 10GbE RJ45 network switch for personal machines - really only needed for the NAS and for family to use for Plex and the like.
  • Rackmount 10GbE RJ45 network switch for the startup’s servers - only need up to maybe six 10Gbps connections, the rest can be 1Gbps
    • Looking at patch panels for the personal and startup server racks - I believe there’s something called keystone patch panels that are “universal” in terms of what you can assign the ports to be, so I’ll probably look at those.

This is a lot of planning for stuff I’m going to get over the coming 6-12 months; I don’t plan on buying everything at once (I’m working class, not made of money).
I’m sure I missed some important gear that I’d need to get, and this has been a really long and extensive post as is. Any critiques, questions, comments, concerns, etc. are welcome!

One last question I’m unsure of - what should I do with my old personal NAS and dev server builds? My planning pretty much involves completely replacing those systems, and I’m not sure if I should just look to sell them to recoup some cost from the planned builds or whether to do something like distributed computing and such - any suggestions?

I don’t see much “enterprise” gear here. I guess the 10GbE switch could be an old enterprise unit, but new MikroTik or Ubiquiti stuff works just as well and is a lot quieter and more energy efficient.

I see a lot of desktop processors listed. While those are good for some workloads, they’re not server CPUs. Consider buying an EPYC-based ProLiant (dirt cheap) for anything that’s going to be relied on for making money, versus a roll-your-own Threadripper (still a desktop CPU). Why does your storage server need a GPU?


Yeah, I have no attachment to enterprise networking, nor much experience with it, so any pointers there are greatly appreciated.

My understanding is that server CPUs have different warranty terms and long-term reliability compared to desktop chips, at a much greater cost than their desktop counterparts. I can’t seem to find a specific price on ProLiant servers with EPYC, but I expect them to cost much more than a self-built Threadripper machine. As much as I’d love to say I have the money for high-end parts, I’m not sure I’m at the stage where I can go much higher than what I listed (these are put together as “dream” systems - targets for performance & price that I’d love to hit).

I figure I’m not the biggest expert in networking or server configuration, so I don’t want to be stuck in a situation where I can’t access a server over the network and need to change something locally to restore its network connection (that happened a lot on FreeNAS, and I don’t want to get stuck in a situation where I don’t even have display output).

I’d say the opposite about Threadripper. It’s a server CPU stuffed into a consumer package. The differentiator between that and Ryzen is really the number of PCIe lanes you need. For a NAS, I don’t see the point in spending more for Threadripper unless you’re using it for compute as well.

If you’re only going to be using one GPU, then you might not need it there either. In general some types of models parallelize better than others, so you’ll have to consider that when picking parts. You may be better off getting two 3080s instead of one 3090 depending on what models you’ll be using, and how far off the CEO math about the 3090 is from reality. :slightly_smiling_face:
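
To make the parallelization point concrete, here’s a minimal sketch (assuming PyTorch, with a toy model standing in for whatever you’d actually train) of the easy data-parallel case - each batch gets split across whatever GPUs are visible, so two 3080s can win for models that fit in 10 GB of VRAM, while anything that needs more memory per GPU still pushes you toward the 3090:

```python
# Minimal data-parallel sketch (assumes PyTorch is installed; the model and
# tensor sizes are placeholders, not a real workload).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 1024), nn.ReLU(), nn.Linear(1024, 10))

if torch.cuda.device_count() > 1:
    # Each forward pass splits the batch across all visible GPUs, so two
    # smaller cards can beat one big one for models that parallelize well.
    model = nn.DataParallel(model)
model = model.cuda()

x = torch.randn(256, 512).cuda()            # one batch, placed on GPU 0
print(model(x).shape, "computed on", torch.cuda.device_count(), "GPU(s)")
```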

Having a cheap GPU in a machine that you often use headless isn’t a bad idea for those unfortunate times when you can’t access it over the network for some reason, but most workstation/server boards have an onboard BMC which usually has some kind of display output, so you could just use that if it’s available.

I just bought an HPE DL385 with an EPYC 7302 brand new for pretty cheap. I’m super happy with it: 3.3 GHz max clock, 16 cores / 32 threads, and a TDP of only 155 W. I know there are other options out there, but just my 2 cents. I’m a home user who definitely doesn’t need the computational horsepower, but I wanted to learn virtualization and dove into the deep end lol.


For the NAS, I looked up some Ryzen desktop motherboards in EATX or XL-ATX sizes so I can fit some M.2-to-PCIe or SAS/SATA expansion cards, and all of them come up as just as expensive as (and often more expensive than) sTRX4 motherboards - might as well get quad-channel memory and more storage connectivity with a Threadripper chip than with a desktop Ryzen chip, no?

I figure one 3090 would work OK for now, and in the future I could get a second one and combine them with the NVLink bridge - only the 3090 has the NVLink (SLI-style) bridge fingers this time around anyway.
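
If I do end up adding a second card, I’d want to sanity-check that the two GPUs actually see each other for peer-to-peer transfers before relying on it - something like this quick check (just a sketch, assuming PyTorch and both cards installed):

```python
# Quick sanity check (assumes PyTorch with two GPUs installed): confirms the
# cards can access each other's memory directly over NVLink / PCIe P2P.
import torch

n = torch.cuda.device_count()
print(f"{n} CUDA device(s) visible")
if n >= 2:
    print("peer access 0 -> 1:", torch.cuda.can_device_access_peer(0, 1))
    print("peer access 1 -> 0:", torch.cuda.can_device_access_peer(1, 0))
```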

What’s the policy on adding parts to those systems, such as a GPU? That’s really the main thing I need for machine learning workloads; a high-core-count CPU and lots of system memory come second to that.

As far as I know the GPU side is actually pretty simple: buy a GPU that is supported (pretty much all of the workstation ones), and then you’ll need the performance fan kit, which is two extra fans and their connector. They list all the prices and what you need for it on their website; I would just hunt down the parts from other vendors.

I looked at the listings on the HPE site. I figure a database and storage server would need decent core counts - my personal NAS with a 3400G (a 4c/8t CPU) gets maxed out during file transfers, so I figured at least a 16c/32t CPU would prevent that entirely. But the ProLiant configs I’ve found pair decent core counts with really low memory, and end up being damn expensive as expected. Damn you, enterprise gear!

Then it sounds like you do want more PCIe lanes, so yeah. :stuck_out_tongue_closed_eyes:

I just went through the same exercise upgrading my NAS. I had an i3 6100 on hand from a previous build, and I ended up saving a few hundred by picking an older Supermicro board with 8 SATA ports and an onboard SAS controller so I could reuse that CPU. I’m not using the NAS for compute, so an older CPU works fine there.

Yeah, the ability to pack in 1TB of RAM per processor is insane lol. The good news is mine has only one CPU populated, with the ability to add another down the road. I also bought my RAM on Amazon - you don’t need the “smart” RAM. I haven’t run into any issues so far.

I wish you the best of luck with the new business, but a first tip would be to focus less on building your perfect infrastructure and more on the development activity you need to do. Grow the hardware / services organically when you need the horsepower. You may find the initial part of the work can be done with your existing equipment and off-prem (cloud) infrastructure.

This use case lends itself to scalable cloud infrastructure. Unless what you are doing is super secret and you absolutely do not trust your ability to secure your buckets and virtual machines, I would recommend putting any ML workload into the cloud before buying very expensive hardware to do it locally.

As above, these are not details you should be worrying about. Focus on developing the software and building your business. If you find it needs on-prem hardware, then revisit the need at that point, with specific hardware specs matched to the workload. For example, your listed 3970X may be utterly useless if the workload is GPU-bound, and that 3090 may perform worse than an older Quadro you can pick up for cheap.

Try not to mix up personal and business expenses. The boring reason is that there are various tax implications, but more broadly you need to start thinking about your business as a business, separate from home life. It will help you focus and keep perspective. Also, for security reasons, I would keep the “work” systems away from personal infrastructure so you can keep them locked down.

Best of luck with the startup and I hope you find this general advice useful.


After spending more time looking at the ProLiant configurations I’d want, I don’t think the cost is worth whatever gains EPYC itself brings.

Based on hardware I already own, the proposed setup for the “workhorse” server would be my current dev server with the GTX 1050 replaced by a GTX 1080 - I can base future performance decisions on how that goes.

But just for a fun comparison against the Proliant server, I made this “dream” build for the storage / database server with the following parts:
CPU: AMD Threadripper 3960X 3.8 GHz 24-Core Processor ($1349.99 @ Amazon)
CPU Cooler: Noctua NH-U14S TR4-SP3 82.52 CFM CPU Cooler ($79.90 @ Amazon)
Motherboard: MSI Creator TRX40 EATX sTRX4 Motherboard ($679.99 @ B&H)
System Memory: Crucial Bundle with 128GB (4 x 32GB) DDR4 PC4-21300 2666MHz RDIMM (4 x CT32G4RFD4266), Dual Ranked Registered ECC Memory ($789.00 @ Amazon)
Storage: 4x Western Digital SN750 1 TB M.2-2280 NVME Solid State Drive ($149.99 @ Newegg)
Storage: 4x Western Digital Red 12 TB 3.5" 5400RPM Internal Hard Drive ($304.99 @ Amazon)
Video Card: EVGA GeForce GTX 1050 2 GB SC GAMING Video Card (Purchased For $0.00)
Custom: ASUS Hyper M.2 X16 PCIe 4.0 X4 Expansion Card Supports 4 NVMe M.2 (2242/2260/2280/22110) up to 256Gbps for AMD 3rd Ryzen sTRX40, AM4 Socket and Intel VROC NVMe Raid ($69.98 @ Amazon)
Total: $4788.78

With this I end up getting more cores/threads, higher clock speeds per core, a nice mix of HDD space and SSD caches, and roughly the same PCIe lane count as the enterprise-level EPYC servers I’ve found, while still coming out cheaper. The trade-off for those savings is losing enterprise support & warranty, plus a higher power draw.

So I’ll take the advice of using what I have unless I deem that I definitely need something more powerful in the future. Still love to get my hands on that high end gear though :sweat_smile:

In the info I listed above, I tried to avoid any hardware that would be shared between personal and business use - that’s why I specifically said the personal stuff would be on its own server rack with its own networking & misc gear, while the business stuff would be on its own rack with its own gear as well. The reason I mentioned both in the post is that I was also asking for suggestions on networking / server gear for my personal rack when I migrate from my current desktop & NAS to the new setups. I guess it came off as mixing and matching personal and business hardware - so far the only personally owned part that would go into a business machine is my existing GTX 1050, and only if I build a separate data storage server for the business; other than that, nada.

Yeah, EPYC is more pricey. You could go Intel if you want to save money (and have security holes), but Ryzen would do. My new home server is Ryzen AND it has ECC!
