Hi, I've got a 33TB pool of drives in a software-controlled JBOD/RAID through the StableBit DrivePool software, and I was looking to see if there's a PCIe expansion card with 6 SATA ports on it that will increase IOPS or throughput in any meaningful way compared to the chipset SATA controller on my ASUS TUF Gaming X570-Plus (Wi-Fi).
Also, I'm potentially looking at adding an x4 NVMe PCIe card for some fast caching and storage drives, and would like suggestions for the best budget-oriented price-to-performance NVMe SSDs of decent size (around 1TB or 500GB depending on speed).
PC specs:
64GB G.Skill Trident Z Neo 3600MHz RAM (air cooled via 2x 50mm Fractal Design fans)
AMD Ryzen 7 5800X CPU (Arctic Liquid Freezer II 480mm AIO)
EVGA GTX 1080 Ti FTW3 Hydro GPU
EVGA G3 1000W power supply
Be Quiet! Dark Base 900 Rev. 2 Orange case
7x 140mm fans, 3x 120mm fans (Silent Wings 3) in a positive-pressure setup
Any suggestions on how to improve this setup are welcome, but keep in mind I use Windows 10 for game compatibility and ease of use, so Linux-based solutions won't help; I've never used Linux and don't have time to learn it currently. Also, I have not set up a NAS because I have no idea how to start or configure one, and my whole home network is limited to 5Gb max because of my network switches and router. (Not to mention I use the onboard Ethernet on my mobo, which I believe is limited to 1Gb, so if you want to recommend a good NAS setup I'd need a good network card recommendation too.)
Edit: Any compatible PCIe SAS cards with SATA adapters are welcome if they have good reliability and superior speed and/or processing.
SATA is SATA. Unless there's something very wrong with the SATA controller you're using, which there probably isn't unless you're on a cheap expansion card with low shared bandwidth or port multipliers or something, it should all perform more or less the same.
Unless you're all solid state, I would think 5GbE is probably fine even for some pretty optimal striped RAID setups. Even 2.5GbE gives you somewhere around 280MB/s of real-world bandwidth, so if you did want to go with a NAS, it shouldn't be too bad. That said, unless your storage needs to be accessible to multiple computers, or you're short on physical space for drives, or have some other reason not to keep the drives in your desktop, there's not really a reason to set up a NAS.
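For reference, here's the back-of-the-napkin conversion, a rough sketch assuming about 8% protocol overhead (that overhead figure is my assumption; real numbers depend on your NIC, switch, and tuning):

```python
# Rough usable throughput for common Ethernet link speeds,
# assuming ~8% TCP/IP + SMB protocol overhead (an assumption).
def usable_mb_per_s(link_gbit: float, overhead: float = 0.08) -> float:
    """Convert a raw link speed in Gbit/s to approximate usable MB/s."""
    return link_gbit * 1000 / 8 * (1 - overhead)

for link in (1, 2.5, 5, 10):
    print(f"{link} GbE ~ {usable_mb_per_s(link):.0f} MB/s usable")
# 1 GbE ~ 115, 2.5 GbE ~ 288, 5 GbE ~ 575, 10 GbE ~ 1150 (MB/s)
```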
If you decide you're interested, though, there doesn't have to be anything special about a NAS. It's just a computer that shares files over the local network for other computers to access.
It would be helpful to know what you hope to accomplish with these upgrades. What drives are you using in your 33TB pool? Are they underperforming in some way? What software is underperforming and in what ways?
If you're hoping to increase your FPS or improve load times by getting a SATA expansion card, it's not worth it. Even NVMe vs. SATA SSDs, there's very little real-world performance benefit right now.
Some Samsung EVO, Intel … 600p or 660p, can't remember the model name for sure. I'm mostly sure Intel and Samsung haven't made garbage-tier SSDs; somebody else feel free to correct me if I'm wrong.
Adata, on the other hand: can't recommend specific models because of the bait-and-switch bullshit they've pulled, and I sure as hell can't recommend the brand in general for the same reason.
Add Crucial (basically Micron) and SK Hynix to that list.
And finally, don't buy low-end drives (Intel 660p, Crucial P1, Samsung 870 QVO, or any other QLC drives) and expect good performance out of them; that's just setting yourself up for failure.
Probably not, though seeing as M.2 is gaining popularity, and we might soon even see more or less serious GPUs pop up in this form factor, it might become an option. I have no idea how well this works, though.
The sad reality is that SATA is dying. Spinning rust still holds the capacity-per-dollar crown, but SSDs are becoming fast enough, and big enough, that SATA is just too damn slow by now. Even in a triple-striped RAID0 you have a theoretical maximum of about 540MB/s, and by then you've eliminated all the cost advantage of the 20TB drive, since a single SATA SSD is about as fast as that, and 3 SATA SSDs cost about as much as one mech drive, roughly speaking.
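To show where that ~540MB/s figure comes from, a quick sketch with assumed ballpark per-drive speeds (not measurements of any particular drive):

```python
# Where the ~540 MB/s ceiling comes from. Per-drive speeds below
# are ballpark assumptions, not measurements of a specific model.
HDD_SEQ_MB_S = 180    # large 7200rpm drive, sequential, outer tracks
SATA_SSD_MB_S = 550   # SATA III SSD, capped by the ~600 MB/s interface

raid0_3x_hdd = 3 * HDD_SEQ_MB_S
print(f"3x HDD RAID0: ~{raid0_3x_hdd} MB/s")   # ~540 MB/s
print(f"1x SATA SSD:  ~{SATA_SSD_MB_S} MB/s")  # already in the same league
```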
So, hello NVMe: embrace the future one drive at a time. You might also want to invest in a motherboard with 4+ M.2 slots down the line.
It's just that when I hit all 5 drives with a load, I notice a performance hit even with read striping; on average I see at least around 300MB/s reads and around 220MB/s writes per drive. I was wondering if maybe the chipset or SATA controller was the bottleneck. I have 2 3TB drives, 2 6TB drives, and one 18TB drive, plus 2 NVMe Gen 3 SSDs in RAID0 for my boot drive. I'm planning on adding more, topping out at 7 mechanical drives total, as that's all my case natively supports.
I use O&O Defrag to defrag all 33TB nightly so my performance stays at its peak.
As for a NAS, I think I've already set up something similar in Windows at least, because any of my networked PCs can access each other.
Fair enough. I use spinning rust for storage and 2 NVMe Gen 3 SSDs in RAID0 for my boot drive, but my CPU and mobo support PCIe bifurcation, so I can add one of those quad-NVMe x4 expansion cards to it (I don't know which one I should get between ASUS's older and newer models, however).
I don't recommend defragging that often. Fragmentation should only start to be a problem once your drives have been going for a while without it, and defragmentation isn't really a one-size-fits-all solution for it anyway.
Your drives aren't really lined up for size, either. Are your drives in striped pairs for the 3TB and 6TB drives, with a standalone 18TB? It doesn't sound like a RAID5/6 setup to me, and 300MB/s actually sounds pretty good for pairs of mechanical drives in 3-6TB capacities. 220MB/s is a lot like what an 18TB drive would write at its peak.
Are you sure it's from hitting all the drives at once, and not from, say, long file copies? Windows likes to cache writes in memory, and transfers appear to slow down once that cache fills, because at that point it's waiting on the hardware to catch up.
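If you want to rule the cache out, here's a minimal sketch of a sustained-write test: write enough data to outrun RAM caching and fsync before stopping the clock. The target path and sizes are placeholders, not anything from your setup:

```python
# Minimal sketch to measure sustained write speed past the Windows
# write cache: write a few GB and fsync before stopping the clock.
import os
import time

TARGET = r"D:\bench.tmp"   # hypothetical path on the drive under test
CHUNK = 8 * 1024 * 1024    # 8 MiB per write
TOTAL = 4 * 1024**3        # 4 GiB total, enough to outrun the RAM cache

buf = os.urandom(CHUNK)
start = time.perf_counter()
with open(TARGET, "wb") as f:
    for _ in range(TOTAL // CHUNK):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())   # force buffered data out to the disk
elapsed = time.perf_counter() - start
print(f"~{TOTAL / elapsed / 1e6:.0f} MB/s sustained")
os.remove(TARGET)
```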
Also, before you think about bifurcating to 4 M.2 NVMe drives: the lower slot on your board is only electrically an x4 PCIe slot. I don't think more than the first M.2 slot on the card would even work unless you use the upper x16 slot.
Thanks for the info; I'll keep that in mind about defragging.
I have the 3TB and 6TB drives set up in StableBit DrivePool as parity data for read striping, and the standalone 18TB is for files I don't need duplicates of, or don't need read fast. Also, the way I have the StableBit software set up, it write-caches to my RAID0 Gen 3 SSDs and then transfers over to the spinning drives. So during small file downloads or transfers (under 400GB), it writes at speeds up to 6GB/s, then either moves the data over to the big storage drive (if it's a file marked as non-duplicate) or duplicates it across the parity drives in real time. But the bottleneck shows up when all of the drives are hit with a big load, like defragging or a massive file transfer; I can see the speeds dip down to a peak of 25-50MB/s per drive.
As for the way bifurcation works on my mobo: I'm pretty sure it has 2 x16 slots, so you can have both running at x16 to get full bandwidth on both. It also has a regular x4 PCIe slot, and I think one more x8 PCIe slot, if I recall correctly.
Even if my second slot is x8, I won't lose much performance if I move my video card to that slot if needed. I think Gamers Nexus or LTT did a video showing it's like a 10% performance loss.
I took a look around, and both the Plus and Pro TUF X570 boards have only 2 x16-length slots, with the second being an x4 link.
Is there one I’m missing?
25-50MB/s per drive sounds fine for lots of smaller files, or for defragging. Mechanical drives slow down a lot for more complex transfer patterns due to physical seek times.
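Rough numbers on why seeks dominate, assuming ~12ms of average seek plus rotational latency on a 7200rpm drive (that access time is an assumption, and this ignores the transfer time itself):

```python
# Why random/fragmented workloads crawl on spinning disks: each access
# pays a full mechanical latency before any data moves.
ACCESS_TIME_S = 0.012  # assumed avg seek + rotational latency, 7200rpm

for chunk_kb in (4, 64, 256, 1024):
    mb_s = (chunk_kb / 1024) / ACCESS_TIME_S
    print(f"{chunk_kb:>4} KB per seek -> ~{mb_s:.1f} MB/s")
# 4 KB -> ~0.3, 64 KB -> ~5.2, 256 KB -> ~20.8, 1024 KB -> ~83.3 (MB/s)
```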
Yeah, that seems right about the 2 x16 PCIe slots. I thought there was an x8 PCIe slot too, but no, it looks like that is in fact a second x4 slot. My bad, I must've been thinking of the older ASUS board I had for my 4770K.
Also good to know 25-50MB/s is normal for those tasks; I thought for sure it was my SATA controller not being able to keep up with the demand. Any thoughts on a PCI/PCIe SATA card that might take some stress off my SATA controller and perhaps improve speeds at all?
It won't improve performance at all, but if you need more ports, a used LSI HBA card from eBay, along with whatever SAS breakout cables you need, is going to be better than any straight SATA card you can buy; SATA PCIe cards just aren't that good. The downside is that it'll be running at PCIe Gen 2 x4. That's plenty of bandwidth for 8 SATA drives, but there goes your x4 slot.
Again, this won’t improve performance at all. Your SATA controller is perfectly capable of handling every port on it at full speed. It’s only the physical process of moving the read head in your mechanical drives that’s limiting your performance.
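Quick sanity check on the Gen2 x4 link math, with a deliberately generous per-HDD assumption:

```python
# PCIe 2.0 is ~500 MB/s usable per lane after 8b/10b encoding; the
# per-HDD figure below is a generous assumption, not a measurement.
PCIE2_LANE_MB_S = 500
HDD_MB_S = 250   # faster than almost any current mechanical drive

link = 4 * PCIE2_LANE_MB_S
print(f"PCIe 2.0 x4 link: ~{link} MB/s")          # ~2000 MB/s
print(f"8 HDDs flat out:  ~{8 * HDD_MB_S} MB/s")  # ~2000 MB/s, still fits
```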
Ah, I see. That sounds good; I'll look into getting that card and the SAS breakout cables. Losing an x4 slot isn't a big deal, since I don't use them for anything else at the moment, except maybe a faster-than-onboard network card in the future, but I have 2 x4 slots anyway, so I can still do that.
Also, good to know my SATA controller isn't a bottleneck; I wasn't sure if that was the limiting factor.