NVMe SSD Recommendations - Proxmox

Hello!

I am putting together a Proxmox build. Hardware so far:

CPU: Xeon E5-2650L v3
MB: Gigabyte X99-UD4
RAM: 64GB DDR4
Proxmox Boot Drive: 2 x Intel SSD DC S3700 200GB, ZFS RAID 1

PCIe Cards:
1 x video card
1 x LSI 9211-8i
1 x Mellanox ConnectX-3
1 x Intel I340-T4

That leaves me with only three PCIe x1 slots. I could remove the video card, since I only need it for a display, which won’t be needed often once it’s in production, but I’d prefer not to.

I am thinking about adding an NVMe SSD for the VMs. Sadly, this board only has one M.2 slot; otherwise I would consider getting two NVMe SSDs for RAID 1.

Given I only have the option for one NVMe SSD, I think write endurance will be important. Does anyone have recommendations for an affordable NVMe SSD with good write endurance? I think 1TB would be enough.
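To put a rough number on the endurance I’d actually need, here’s a back-of-envelope sketch. The daily write figure is a guess for illustration, not a measurement from my setup:

```python
# Back-of-envelope TBW (terabytes written) budget for the VM drive.
# The daily write rate is an assumed figure for illustration only.
daily_writes_gb = 50      # assumed average writes per day from all VMs
years = 5                 # desired service life
total_tb_written = daily_writes_gb * 365 * years / 1000
print(f"~{total_tb_written:.0f} TB written over {years} years")
# -> ~91 TB written over 5 years
```

Even a fairly write-heavy 50GB/day only works out to about 91 TBW over five years, which is well under the ~600 TBW commonly quoted for 1TB drives with decent endurance.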

Would also be open to going a different route if NVMe doesn’t seem like a good idea here.

Thanks!

You could consider using a graphics card with a PCIe x1 connector.

That would open up a larger PCIe slot for NVMe SSDs.

I had considered that but it seems difficult to find a decent x1 graphics card for a reasonable price. May still go that route.


If you really just need it for the host console, I always just grab whatever GPU I have lying around that isn’t worth anything and cut the PCIe connector down with a Dremel until it fits in whatever slot I have available.

Also, you can cut out the back edge of the PCIe slot so the card hangs out, as long as nothing on the mainboard gets in the way.

I feel like I should be on an episode of The Red Green Show.

I had also considered this option, but the idea makes me a bit uneasy. I do have a Dremel, so maybe I should pursue this. I have a couple of passively cooled Radeon HD 5450s that I use for servers; I’d love to continue using them.

It seems cutting the back edge of the PCIe slot would be the easiest approach, given the HD 5450s are full PCIe x16 length. Although the MB is probably worth a bit more than the card.

Any thoughts on what is easier to do?

Regarding NVMe SSDs with the best endurance, I found this discussion where “Dunuin” has a nice list of NVMe SSDs and their endurance:

It seems like the Micron 7450 PRO would be a good option; they are currently going for ~$150.

After considering this for some time, I’m leaning towards just getting a pair of standard Intel DC SSDs used on eBay. They won’t be as fast, but then I will have peace of mind, not waiting for the day the NVMe drive kicks the bucket and I have to reinstall my whole system from backup to get my network back online.

This server is going to be hosting my main pfSense router/firewall, so if it goes down, so does the entire network. I’d like it to be somewhat highly available.

It is usually easier to cut the connector off the GPU. Mark it with a permanent marker and just nibble away, testing as you go.

If you do want to cut the MB slot, be careful, as slipping and whacking some part of the MB will probably spell game over. Also, the plastic coming off the Dremel can sometimes find its way into any of the PCIe sockets, or even the RAM sockets, or the CPU socket if there is no CPU installed, and those tiny pieces of plastic can be a pain to get out.

Either job has some risk. If you have a bad MB and a bad GPU, practice on both and see what works for you.

Write endurance actually is not as big of a deal for home servers as it seems to be. My system runs all the time, my house generates around 2TB of internet traffic a month, and I have never run up against an endurance limit before something else happened (usually I just need more space).

I just gave a 960GB Neutron XT to one of my friends a few days ago; it had 55,000 power-on hours, 30 power cycles, and 1TB written. That is on the extreme low side. I do have some drives that get used ‘a lot’ but are still under real data center usage by a mile.
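Out of curiosity, here is the write rate those Neutron XT numbers imply. The 600 TBW figure below is an assumed, typical rating used for illustration, not that drive’s actual spec:

```python
# Implied average write rate for the 960GB Neutron XT above.
power_on_hours = 55_000
tb_written = 1.0
days = power_on_hours / 24
gb_per_day = tb_written * 1000 / days          # ~0.44 GB/day average

# Years to exhaust an assumed 600 TBW rating at that rate:
tb_per_year = tb_written / (days / 365)
years_to_wear_out = 600 / tb_per_year
print(f"~{gb_per_day:.2f} GB/day; ~{years_to_wear_out:,.0f} years to hit 600 TBW")
```

At under half a gigabyte per day, the rated endurance effectively never runs out; something else will kill the drive first.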

That is the 256GB NVMe drive that I have had Proxmox installed on for 6 years. It writes logs and does all the Proxmox root filesystem stuff all the time. This system has 9 VMs online all the time, 2 of them Windows, and it shows 2% wearout.

Those have the VM OS drives on them. They were used when I got them, and I am still only at 8% after 6 years.

NOTE: the wearout percentage counts UP, not DOWN.

Depending on how much storage you need: I am building some machines for a Proxmox Ceph cluster for our developers. We are using some HP Z640 workstations. These machines support x16 bifurcation to x4/x4/x4/x4, so I put in a four-bay U.2 adapter, a ConnectX-5 100Gb adapter, and Quadro K620 graphics. On these workstations the x4 and x8 slots are open-ended, so you can put the graphics card in an x8 slot. I have this running on Windows Server 2022 for some testing, and I was able to fully saturate a 10GbE connection. I can’t test the 100GbE until I get the second one built, which should be by the end of this week.



@Zedicus Thanks! That gives me some much needed perspective.

@kiszka69 Nice! That system is impressive.

Unfortunately, I’ve only got an old gaming motherboard to work with, which has no bifurcation capability. While I could probably get U.2 to work, I think U.2 is a bit outside my budget for the VM drive(s). I’m trying to keep it under $150.

Edit: Actually, maybe U.2 isn’t out of the realm of possibility. It looks like Intel DC U.2 NVMe drives are relatively affordable on eBay, and a PCIe card for a single U.2 drive isn’t terribly expensive. I’ll check it out.

Thanks!

Just wanted to update with what I ended up doing. I found a “Jaton NVIDIA GeForce 8400 GS 512MB PCIE x1” on eBay for $25 and went with that for the video card to free up an x16 slot.

It is working just fine, though I’m going to have to MacGyver the low-profile bracket into a full-height bracket.

I was surprised to find that this 8400 GS seemed to use about 1 watt less at idle than the NVIDIA GT 210 I was using. I was really expecting it to use significantly more power since it’s such an old card.

I now have another open PCIe slot, but I already opted to go with 2 x Intel DC S3700 800GB SSDs. After calculating what I think I need for VM storage, I should still have about 400GB of the 800GB left over, so that should do fine for now.

The Intel DC S3700 drives have incredible endurance, and I think I value peace of mind over the performance gain I would get by going NVMe. Plus, it was much easier on my wallet at only $95 for the pair; I’m already WAY over my budget on this build.
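For what it’s worth, that endurance is easy to quantify: Intel rates the S3700 line at 10 drive writes per day (DWPD) for 5 years. A quick sketch of what that means for the 800GB model (the DWPD figure is from Intel’s published spec; the rest is arithmetic):

```python
# Rated lifetime writes for an 800GB Intel DC S3700,
# based on Intel's 10 DWPD / 5-year endurance rating.
capacity_gb = 800
dwpd = 10          # drive writes per day (Intel spec for the S3700 line)
years = 5
tbw_rating = capacity_gb * dwpd * 365 * years / 1000
print(f"~{tbw_rating:,.0f} TB of rated write endurance")
# -> ~14,600 TB of rated write endurance
```

That’s over an order of magnitude beyond what even a write-heavy consumer NVMe drive is rated for, which is exactly the peace of mind I was after.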

My existing server has its VMs on spinning rust, so regular SSDs will be a performance improvement anyway.
