Hi, me again with another unhinged idea. I understand there might not be a real answer to this but here goes:
-I need around 240TB of raw storage in a rugged, portable form (spinning drives are out, I think)
-Need to fit in a carry-on form factor (intl. average)
-More accessible than LTO-9
-Will be running ZFS on TrueNAS Scale off an internal NVMe SSD
-I am willing to (or have to) trade-off performance (IOPS) for packaging
Use-case:
-Consolidated sequential writes from a network source
-Sequential reads from a network source
So the current idea is this:
-Get 32x 7.68TB SATA SSDs ($$$?)
-Obtain drive bays (mini-SAS HD)
-Shuck a reputable USB4 eGPU enclosure and some kind of SFF PSU (because no Thunderbolt support on TrueNAS apparently)
-Obtain an HBA that doesn’t require bifurcation
-Fit all of this in a Pelican case
Concerns:
-Obviously performance, reliability and cost.
-No ECC memory on current platform (laptop) (stability woes?)
I know this is bad but I’m just trying to gauge how “bad” it is. Yes, apparently these eGPU enclosures are limited to around 2500-2800MB/s, but it’s hard to gauge how that interacts with IOPS and ZFS access to the drives (rough math at the end of this post).
I’m looking for improvements I could make or, more realistically, better ideas. I know commercial products in this configuration do exist from OWC, but it made more sense to me to use an exposed PCIe endpoint so I’d have a chance at an HBA that passes through SMART data and serial numbers and has more of a track record. Even though it seems like a moot point at this moment. Thanks for reading this far and I appreciate all feedback!
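In case it helps frame the trade-off, here’s the rough back-of-the-envelope math I’ve been doing (Python; the 4x 8-wide RAIDZ2 layout and the drive/link speeds are just my assumptions, not a settled design):

```python
# Rough capacity / bandwidth sanity check for the 32x 7.68TB SATA SSD idea.
# All figures are assumptions, not measurements.

DRIVE_TB = 7.68          # per-drive capacity, TB (decimal)
DRIVES = 32
RAIDZ2_WIDTH = 8         # assumed layout: 4x 8-wide RAIDZ2 vdevs
VDEVS = DRIVES // RAIDZ2_WIDTH

raw_tb = DRIVES * DRIVE_TB
usable_tb = VDEVS * (RAIDZ2_WIDTH - 2) * DRIVE_TB   # ignores ZFS overhead/slop

LINK_MBPS = 2500          # claimed usable bandwidth of the USB4/eGPU enclosure, MB/s
SATA_SEQ_MBPS = 530       # typical SATA SSD sequential throughput, MB/s

per_drive_share = LINK_MBPS / DRIVES
print(f"raw: {raw_tb:.1f} TB, usable (4x RAIDZ2): {usable_tb:.1f} TB")
print(f"link share per drive: {per_drive_share:.0f} MB/s "
      f"(~{per_drive_share / SATA_SEQ_MBPS:.0%} of one SATA SSD)")
```

If that’s roughly right, the pool clears the 240TB raw target but each drive only ever sees a small slice of its sequential speed through that link, which is exactly the performance-for-packaging trade-off I mentioned.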
You can get hold of a Samsung 870 QVO for $350; however, getting 32 of them in one machine might prove difficult and would cost you roughly $11k.
Might be better in that case to order 16x 16 TB drives for around $26k.
Though I would look into 8x 32 TB myself; I know such drives are available, though they cost an arm and a leg and are sold exclusively to enterprise customers at the moment. You might also be interested in a 4x 64 TB setup for ~$45k?
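To put those options in rough $/TB terms (prices are the ballpark figures above, so treat them as assumptions; the 8x 32 TB option is left out since I don’t have a price for it):

```python
# Ballpark $/TB for the drive-count options above (prices are rough assumptions).
options = {
    "32x 8TB SATA (870 QVO)": (32, 8,  32 * 350),    # ~$350/drive
    "16x 16TB SATA":          (16, 16, 26_000),      # ~$26k total
    "4x 64TB":                (4,  64, 45_000),      # ~$45k total
}
for name, (count, tb, total) in options.items():
    print(f"{name}: {count * tb} TB raw, ${total:,} total, ${total / (count * tb):.0f}/TB")
```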
How much money do you have? There are enterprise SSDs with 15 or 30 TB per drive, like the Kioxia CD-8R. 240TB is still quite a bit, and you would need 16 of them.
Thank you for the detailed reply! I forgot that Kioxia made a 30.72TB model. The last option with the Samsung drives makes the most sense to me at the moment. So this is going to sound silly, but the problem with those drives is I’ll only have one PCIe slot and there is only one card that supports that many connections, so it’s only feasible for the 30.72TB model. Unfortunately, I’ve been told on here that the CD-6 series, the only drives close enough for my budget, have SATA-SSD-level IOPS. So in terms of price efficiency, which is a silly metric considering the case at hand, it’s like 3.5x the price with not much in return, since SAS HBAs with more connections are easier to come by too. That is if my plan of somehow turning the 16x 12G SAS connections into 32x 6G SAS connections even works out.
God, I forgot that Nimbus drive even existed; they’re too rich for me. I’m leaning toward the 8TB option since I’m not too familiar with the Teamgroup brand. While I’ll have redundancy, it’s one more gamble in a risky build. Thank you for the reply! Plenty of options
It has 9 SAS connectors, which is exactly enough for three 6-bay holders. So it supports 28 drives on one expansion card. You won’t get the full speed though, as it’s only PCIe 3.0 x8.
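Roughly how much speed you’d be leaving on the table (assuming ~985 MB/s usable per PCIe 3.0 lane and ~530 MB/s sequential per SATA SSD, both rule-of-thumb numbers):

```python
# Rough ceiling check: 28 SATA SSDs behind a PCIe 3.0 x8 expander card.
PCIE3_LANE_MBPS = 985      # approximate usable bandwidth per PCIe 3.0 lane
SATA_SEQ_MBPS = 530        # typical SATA SSD sequential throughput
LANES = 8
DRIVES = 28

host_limit = PCIE3_LANE_MBPS * LANES
drive_aggregate = SATA_SEQ_MBPS * DRIVES
print(f"host link: ~{host_limit / 1000:.1f} GB/s, "
      f"drives combined: ~{drive_aggregate / 1000:.1f} GB/s "
      f"({host_limit / drive_aggregate:.0%} of full speed)")
```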
Your plan to create a portable 240TB storage solution using SATA SSDs and a customized setup is ambitious, but it comes with its challenges. Here’s a breakdown of your concerns and potential improvements:
Concerns:
Performance: SATA SSDs can provide decent sequential read/write speeds, but they might not match the IOPS performance of NVMe SSDs. Given your focus on sequential reads and writes, SATA SSDs should suffice for your use case. Ensure the SSDs you choose have good sustained write performance to handle sequential writes effectively.
Reliability: SSDs, even in RAID configurations, have a limited lifespan in terms of program/erase cycles. Ensure you select enterprise-grade SSDs with good endurance ratings, or consider implementing a strategy for drive replacements over time; a rough endurance estimate is sketched after this list.
Cost: SSDs can be expensive, especially when you’re aiming for 240TB. Compare the cost of your DIY solution to pre-built storage appliances from reputable vendors like Dell, NetApp, IBM, or HP. Sometimes, purchasing from vendors like ALTA Technologies might yield a better overall cost-to-storage ratio.
ECC Memory: ECC memory is crucial for data integrity, especially in storage systems. If your current platform lacks ECC memory, consider investing in a more stable platform that supports ECC memory. Data storage equipment from renowned vendors often comes with ECC memory support.
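As a rough illustration of the endurance point above, here is a small sketch; the TBW rating and daily write volume are assumptions (the TBW figure is roughly what an 8 TB Samsung 870 QVO is rated for), so substitute the spec of whatever drive you actually choose:

```python
# Rough pool-level endurance estimate for a 32-drive SSD pool.
# TBW figure is an assumption (roughly the published rating of an 8 TB 870 QVO);
# check the spec sheet of the drives you actually buy.

DRIVES = 32
TBW_PER_DRIVE = 2880            # terabytes-written rating per drive (assumed)
DAILY_WRITE_TB = 5              # assumed sequential ingest per day, TB

pool_tbw = DRIVES * TBW_PER_DRIVE
# Writes spread across the whole pool (ignores parity amplification and ZFS overhead).
years = pool_tbw / DAILY_WRITE_TB / 365
print(f"pool endurance: ~{pool_tbw / 1000:.0f} PB written, "
      f"~{years:.0f} years at {DAILY_WRITE_TB} TB/day")
```

Even with conservative numbers, a pool this wide has a large aggregate write budget for a mostly sequential workload; per-drive wear and parity write amplification are the figures to watch.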
Potential Improvements:
Consider NVMe SSDs: If your budget allows, consider using NVMe SSDs for improved sequential read/write speeds. They can significantly enhance your storage system’s performance, especially in sequential operations.
Vendor Solutions: Explore solutions offered by Dell, NetApp, IBM, or HP. They have extensive experience in designing reliable, high-capacity storage systems. Their offerings might align with your requirements and prove to be more cost-effective in the long run.
Consult Experts: Consider consulting with storage experts or integrators who specialize in data storage equipment. They can provide tailored solutions based on your specific needs and budget constraints.
Future Expansion: Think about future scalability. Ensure your chosen setup allows for easy expansion or replacement of drives as your storage needs grow or technology advances.
Data Backup and Redundancy: Implement a robust backup and redundancy strategy. No matter how reliable your storage system is, data loss can occur. Regular backups and redundancy mechanisms are vital.
In summary, while your DIY approach is commendable, evaluating pre-built solutions from established vendors might offer better long-term reliability, support, and cost efficiency, especially when dealing with substantial storage capacities. Consider your long-term requirements, budget constraints, and the importance of data integrity and reliability when making your decision.
If U.2 is on the table, most U.2 options already beat the crap out of SATA and M.2 in terms of $/TB for the highest density devices. They will soon be the highest density devices you can get; density translates to portability. SATA and U.2 SSDs are currently equal with regards to density, but you will be paying double per TB per cubic centimeter for SATA. I’m basing this on the fact that a 16 TB SATA SSD goes for no less than $1,600, and that the lowest priced 32 TB U.2 SSD sells for around the same price.
Forget about M.2; you cannot get an 8 TB M.2 SSD for less than $650. That 8 TB Samsung 870 QVO SATA SSD has gone as low as $320 in recent months, less than half the price of the M.2.
The upcoming 61 TB U.2 SSD from Solidigm performs well and is likely to cost less than $3,500, making it a tad bit pricier in terms of $/TB compared to the Samsung 870, but would give you 240 TB using just 4 SSDs. I could fit that inside just one ICY DOCK ToughArmor MB699VP-B V2—the same footprint as two stacks of four SATA SSDs which top out at 16 TB right now.
For roughly $3k more, choosing the U.2s over the SATAs you planned on buying (rough math below), you would:
need only 4 SSDs versus 32
need only the equivalent physical space of 8 SATA SSDs (assuming SATA drives are 7 mm thick, while U.2s are 15 mm)
possibly eliminate the need for (and cost of) extra add-in cards/HBAs
have the bandwidth of PCIe 4.0 ×16 versus SATA 6 Gbps
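The rough numbers behind that “$3k more” claim, if anyone wants to check my math (street prices are assumptions: roughly $340 per 8 TB 870 QVO and $3,500 per 61 TB-class U.2):

```python
# Sanity check on the "for $3k more" comparison (street prices are assumptions).
SATA_COUNT, SATA_PRICE = 32, 340      # 8 TB 870 QVO class drives
U2_COUNT, U2_PRICE = 4, 3500          # 61 TB-class U.2 drives

sata_total = SATA_COUNT * SATA_PRICE
u2_total = U2_COUNT * U2_PRICE
print(f"32x SATA: ${sata_total:,} (~${sata_total / (SATA_COUNT * 8):.0f}/TB)")
print(f" 4x U.2:  ${u2_total:,} (~${u2_total / (U2_COUNT * 61):.0f}/TB)")
print(f"delta: ${u2_total - sata_total:,}")
```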
The goal is 240TB of solid state storage. It is hard to reach that on a single computer without running out of PCIe lanes. Did you read about the Apex?
Going by the picture, the Apex takes up quite a bit of space (948,301 mm³). And with the M.2 SSDs topping out at 8 TB, 168 TB is about as far as you can go for that footprint (5,645 mm³/TB). The highest density M.2 SSDs are also unreasonably expensive, with thrice the $/TB of the next lowest density.
Assuming 16 lanes, bifurcation, and that the 64 TB-class U.2 drive will be released soon, it might be worth waiting for the densest option that can reach 240 TB. There are those PCIe adapter cards that’ll turn a PCIe slot into four OCuLink ports, and Mini ITX cases with a 5.25″ drive bay for 4 U.2 SSDs. The ICY DOCK ToughArmor MB699VP-B V2 comes in at around 903,867 mm³, yielding a density of 3,531 mm³/TB (about ⅝ of the Apex Storage X21).
The ICY DOCK ToughArmor MB516SP-B takes 16 (super expensive) SATA drives, and those could be populated with 16 TB SATA SSDs to reach 256 TB in a footprint of two 5.25″ drive bays (1,931,580 mm³, or 7,545 mm³/TB). The problem is that chassis with more than one drive bay are scarce these days.
If you need the highest storage density today, and money is no object, then the Apex Storage X21 is the densest one-slot solution, although that would not get you to 240 TB used as-is. You’d populate 17 M.2 slots with 8 TB SSDs, getting you to 136 TB. Then use an M.2-to-OCuLink adapter in the remaining four slots and run it to four internal drive bays (or the ICY DOCK ToughArmor MB699VP-B V2), with four 32 TB U.2 SSDs, getting you to 264 TB. (As an aside: 264 TB coincidentally happens to be 240.1066 TiB.) That achieves your objective, and I’m willing to bet (with the right chassis) it also fits comfortably in carry-on luggage.
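For anyone who wants to poke at the density math, here is how I tallied it (the volumes are my rough estimates of the card and enclosures, not datasheet figures):

```python
# Rough volumetric density (mm^3 per TB) for the options discussed above.
# Volumes are rough estimates, not datasheet figures.
options = {
    "Apex Storage X21, 21x 8TB M.2":         (948_301,   21 * 8),
    "MB699VP-B V2, 4x 64TB U.2":             (903_867,   4 * 64),
    "MB516SP-B, 16x 16TB SATA":              (1_931_580, 16 * 16),
    # Hybrid: X21 card plus the MB699VP cage, volumes simply added.
    "X21 hybrid: 17x 8TB M.2 + 4x 32TB U.2": (948_301 + 903_867, 17 * 8 + 4 * 32),
}
for name, (volume_mm3, tb) in options.items():
    print(f"{name}: {tb} TB, {volume_mm3 / tb:,.0f} mm^3/TB")
```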
240TB / 8TB = 30 drives, if they were NVMe, i.e. PCIe connected, i.e. on the Apex:
@ $100/TB
@ $80/TB
Looking into it, I only saw those in 2TB increments. For larger sizes you need NVMe.
240TB / 2TB = 120 drives (which can be SATA).
These SAS expanders allow connections to 28 downstream drives, and 2x4 channels of upstream.
120 / 28 = 4.28, i.e. 5 cards.
You probably need the extra ports for the parity drives to achieve 240 TB of usable space. You actually want more raw capacity due to parity, probably 10% to 20% extra.
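To put a number on that, a quick sketch (assuming 2 TB SATA drives and 10-wide RAIDZ2 vdevs; the vdev width is just an example):

```python
# How many 2 TB drives you need for 240 TB usable, with RAIDZ2 parity overhead.
import math

DRIVE_TB = 2
TARGET_USABLE_TB = 240
VDEV_WIDTH = 10            # assumed 10-wide RAIDZ2 vdevs (8 data + 2 parity)
PARITY_PER_VDEV = 2

usable_per_vdev = (VDEV_WIDTH - PARITY_PER_VDEV) * DRIVE_TB
vdevs = math.ceil(TARGET_USABLE_TB / usable_per_vdev)
drives = vdevs * VDEV_WIDTH
overhead = (drives * DRIVE_TB - TARGET_USABLE_TB) / (drives * DRIVE_TB)
print(f"{vdevs} vdevs, {drives} drives, ~{overhead:.0%} of raw goes to parity/rounding")
```

With 10-wide RAIDZ2 that lands right at the 20% end of the range.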
You can achieve this with 1 SAS expander with 16 internal channels connected to 2 cards like:
Though that looks like it requires a PCIe slot, it really doesn’t; it just needs a power supply like: https://www.amazon.com/dp/B07SBWHNWM
I am not saying that is the correct one; different cards require different voltages.
Both the controller and cards need to be 12Gbps or faster due to certain features on that version of SAS.
The LSI card should be 93xx or later, with at least 16 internal channels.
You are going to need to run TrueNAS either natively or via a VM.
Due to the number of drives, you will need to have a second layer of SAS expanders (tallied below):
1 PCIe card, x8 PCIe 4.0
2 SAS expanders, each with 9 connectors; use the connectors as 4 sets of 2.
Card 1: 1 set of 2 connects upstream, 3 sets of 2 connect to 3 cards downstream.
4 drive connections spare.
Card 2: 1 set of 2 connects upstream, 2 sets of 2 connect to 2 cards downstream.
12 drive connections spare.
5 additional SAS expanders connect to drives, each with 1 set of 2 connectors upstream and the rest connecting to drives.
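A quick tally of what that topology fans out to (assuming each SAS connector carries 4 drive-capable lanes, as above):

```python
# Tally of downstream drive connections for the two-layer SAS expander topology above.
LANES_PER_CONNECTOR = 4
CONNECTORS_PER_EXPANDER = 9

# Layer 1: two expanders, connectors used in sets of 2.
card1_spare_connectors = CONNECTORS_PER_EXPANDER - 2 - 3 * 2   # 1 connector left over
card2_spare_connectors = CONNECTORS_PER_EXPANDER - 2 - 2 * 2   # 3 connectors left over

# Layer 2: five expanders, each uses 2 connectors upstream and 7 for drives.
downstream_expanders = 3 + 2
drives_per_downstream = (CONNECTORS_PER_EXPANDER - 2) * LANES_PER_CONNECTOR  # 28

total = (downstream_expanders * drives_per_downstream
         + (card1_spare_connectors + card2_spare_connectors) * LANES_PER_CONNECTOR)
print(f"{total} drive connections total "
      f"({downstream_expanders} x {drives_per_downstream} downstream + spares)")
```

That comfortably covers the 120 drives plus the extra 10% to 20% for parity.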
You can probably now understand why I recommended the Apex cards. If you think the Apex cards are bulky, this is the alternative.