I’m a PhD student at a lab in Bergen, Norway (coding/information theory and crypto). Several people in the lab are running heavy simulations (currently on their laptops/desktops), so we’re planning to buy a compute server to be shared by everyone in the lab. The plan is for people to SSH in and run whatever simulations they have. Our workloads can basically use as many cores and as much memory as we can give them. We’ve not done this before, so I promised to look into it. We’re looking to spend around $5K or less. This is a pilot project for us, so it’s better to spend less and observe how the machine gets used before we commit to spending a lot.
I’m very interested to hear if you have any suggestions. Since we need little reliability (we can just re-run failed simulations), consumer hardware could be a good choice. Maybe Threadripper? Or would you go with something more enterprise-grade like Xeon? Prototyping software is often done on one core, so it can be a bit painful to have very many slow cores instead of a few fast ones. Optane is an interesting option since it could give fast caching for the people who need lots of memory.
Thanks a lot for reading this far. Again, any suggestions or thoughts are appreciated!
AMD EPYC is definitely the way to go; it offers great performance per dollar, and your workload should be highly multithreaded. You may also benefit greatly from adding a GPU (or two) if your software supports GPU offloading.
I also wouldn’t immediately discount going through a traditional server OEM (Dell, HP, etc.) for the base configuration (CPU, PSU, and maybe RAM) and adding your own storage after you receive it. You’ll instantly get a better warranty in terms of failed-part replacement speed, and generally a 3-year warranty instead of the 1 year typical for consumer gear. OEMs make a huge profit on the add-ons (HDDs, NICs, etc.), but you’d be surprised how competitive they are on the chassis, CPU, PSU, and (sometimes) RAM compared to building your own, with much less headache.
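One more thought on the “everyone SSHes in and runs whatever” plan: before reaching for a real job scheduler, a couple of standard Linux tools go a long way toward keeping lab-mates from stepping on each other. A minimal sketch (here `sleep 300` is just a placeholder standing in for whatever simulation binary each person actually runs):

```shell
# How many hardware threads does the box expose?
nproc

# Run a long job at reduced priority so it yields to interactive use
# (higher nice value = lower priority; 10 is a polite default):
nice -n 10 sleep 300 &

# Pin a job to cores 0-3 so it can't take over the whole machine,
# leaving the remaining cores free for everyone else:
taskset -c 0-3 sleep 300 &

# Clean up the placeholder jobs started above:
kill %1 %2
```

If the pilot works out and contention becomes a real problem, that’s the point to look at something like Slurm; for a single shared box, `nice`, `taskset`, and a bit of etiquette are usually enough.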