Portable Storage Solution Jank-o-Meter

Hi, me again with another unhinged idea. I understand there might not be a real answer to this but here goes:

-I need around 240TB of raw storage in a rugged, portable form (spinning drives are out, I think)

-Need to fit in a carry-on form factor (intl. average)

-More accessible than LTO-9

-Will be running ZFS on TrueNAS Scale off an internal NVMe SSD

-I am willing to (or have to) trade off performance (IOPS) for packaging

Use-case:

-Consolidated sequential writes from a network source
-Sequential reads from a network source

So the current idea is this:

-Get 32x 7.68TB SATA SSDs ($$$?)
-Obtain drive bays (mini-SAS HD)
-Shuck a reputable USB4 eGPU enclosure and some kind of SFF PSU (because no Thunderbolt support on TrueNAS apparently)
-Obtain an HBA that doesn’t require bifurcation
-Fit all of this in a Pelican case

Concerns:

-Obviously performance, reliability and cost.
-No ECC memory on current platform (laptop) (stability woes?)

I know this is bad but I’m just trying to gauge how “bad” it is. Yes, apparently these eGPU enclosures are limited to around 2,500-2,800 MB/s, but it’s hard to gauge what that means once IOPS and ZFS’s access patterns to the drives come into play.
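To put rough numbers on that ceiling, here’s a quick back-of-the-envelope sketch; the per-drive figure is just an assumed typical SATA SSD sequential speed, not a measured number:

```python
# Rough sanity check: aggregate sequential bandwidth of 32 SATA SSDs vs. the
# ~2,500-2,800 MB/s ceiling reported for USB4/eGPU enclosures.
drives = 32
per_drive_mb_s = 530        # assumption: typical SATA SSD sequential speed
link_ceiling_mb_s = 2_800   # upper end of the enclosure figure above

aggregate = drives * per_drive_mb_s
print(f"Aggregate drive bandwidth: {aggregate:,} MB/s")
print(f"Enclosure ceiling:         {link_ceiling_mb_s:,} MB/s")
print(f"Large sequential transfers throttled roughly "
      f"{aggregate / link_ceiling_mb_s:.0f}x by the link")
```

So on paper the enclosure link, not the drives, would set the sequential ceiling.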

I’m looking for improvements I could make or, more realistically, better ideas. I know commercial products in this configuration do exist from OWC, but it made more sense to me to use an exposed PCIe endpoint so I have a chance to use an HBA that passes through SMART data and serial numbers and has more of a track record, even though it seems like a moot point at this moment. Thanks for reading this far, and I appreciate all feedback!

You can get hold of the 8 TB Samsung 870 QVO for $350; however, getting 32 of them in one machine might prove difficult and would cost you roughly $11k.

Might be better in that case to order 16 × 16 TB drives for around $26k:

Though I would look into 8 × 32 TB myself; I know such drives are available, though they cost an arm and a leg and are sold exclusively to enterprise customers at the moment. You might also be interested in a 4 × 64 TB setup for ~$45k?
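Putting those ballpark figures side by side (a rough sketch; prices are this thread’s estimates, not quotes, and the 8 × 32 TB option is left out since no price was given):

```python
# Rough cost-per-TB comparison of the options above (thread ballpark prices).
options = {
    "32 x 7.68 TB SATA (870 QVO)": (32, 7.68, 11_200),
    "16 x 16 TB SATA":             (16, 16.0, 26_000),
    "4 x 64 TB drives":            (4, 64.0, 45_000),
}

for name, (count, tb_each, total_usd) in options.items():
    raw_tb = count * tb_each
    print(f"{name}: {raw_tb:.0f} TB raw at ${total_usd / raw_tb:.0f}/TB")
```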

2 Likes

How much money do you have? There are enterprise SSDs with 15 or 30 TB per drive, like the Kioxia CD8-R. 240 TB is still quite a bit, and you would need 16 of them (at 15 TB each).

If it really has to be small, then M.2 SSDs would be the smallest.
Icy Dock puts 8 or 12 M.2 drives in one 5.25-inch enclosure:
MB873MP-B_8 Bay M.2 NVMe SSD PCIe 4.0 Mobile Rack Enclosure for External 5.25" Drive Bay (8 x OCuLink SFF-8612, no Tri-mode support)
MB872MP-B_Rugged 12 x M.2 SATA SSD Mobile Rack Enclosure for 5.25" Bay (3 x OCuLink)

That would also mean 8 TB SSDs, so you would need 30 SSDs, or 3 to 4 drive bays depending on the enclosure. Connecting that up would need something to merge on the order of 120 PCIe lanes (30 NVMe drives × 4 lanes each).

Going for fewer drives with more capacity each is probably easier, with an HBA.
Samsung PM9A3 with SAS: 15 TB per drive at around 1000 euro per drive,
into 3 of the 6-bay holders: MB118VP-B_6 Bay 2.5" U.2/U.3 NVMe SSD PCIe 4.0 Mobile Rack Enclosure for External 5.25" Drive Bay (3 x SlimSAS SFF-8654 8i).
I really don’t know anything about these kinds of controllers, so I’m not going to suggest one.
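For reference, here are the drive and enclosure counts those capacity classes imply; a quick sketch where the per-enclosure counts come from the Icy Dock models above and 240 TB is the raw target:

```python
# Drives and 5.25" enclosures needed per capacity class for ~240 TB raw.
import math

target_tb = 240
options = {
    "8 TB M.2 SATA (12-bay MB872MP-B)": (8.0, 12),
    "15.36 TB (6-bay MB118VP-B)":       (15.36, 6),
    "30.72 TB (6-bay MB118VP-B)":       (30.72, 6),
}

for name, (tb_each, per_bay) in options.items():
    drives = math.ceil(target_tb / tb_each)
    bays = math.ceil(drives / per_bay)
    print(f"{name}: {drives} drives in {bays} enclosures")
```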

1 Like

Thank you for the detailed reply! I forgot that Kioxia made a 30.72 TB model. The last option with the Samsung drives makes the most sense to me at the moment. This is going to sound silly, but the problem with those drives is that I’ll only have one PCIe slot, and there is only one card that supports that many connections, so it’s only feasible for the 30.72 TB model. Unfortunately, I’ve been told on here that the CD6 series, the only drives close enough to my budget, have SATA-SSD-level IOPS. So in terms of price efficiency, which is a silly metric considering the case at hand, it’s like 3.5x the price with not much in return, since SAS HBAs with more connections are easier to come by, too. That is, if my plan of somehow turning the 16× 12G SAS connections into 32× 6G SAS connections works out.

God, I forgot that Nimbus drive even existed; they’re too rich for me. I’m leaning toward the 8 TB option since I’m not too familiar with the Teamgroup brand. While I’ll have redundancy, it’s one more gamble in a risky build. Thank you for the reply! Plenty of options.

I don’t know if this works, but what about a SAS expander like the HPE ProLiant DL380 Gen10 12Gb SAS Expander Card Kit With Cables (DL380 G10 / DL385 G10)?

It has 9 SAS connections, which is exactly enough for 3 of the 6-bay holders. So it supports 28 drives on 1 expansion card. You won’t get the full speed though, as it’s only PCIe 3.0 x8.

with this cable?

1 Like

I think this might work, with the Mini SAS IcyDock bays. I’ll try figuring out the cabling.

Your plan to create a portable 240TB storage solution using SATA SSDs and a customized setup is ambitious, but it comes with its challenges. Here’s a breakdown of your concerns and potential improvements:

Concerns:

  1. Performance: SATA SSDs can provide decent sequential read/write speeds, but they might not match the IOPS performance of NVMe SSDs. Given your focus on sequential reads and writes, SATA SSDs should suffice for your use case. Ensure the SSDs you choose have good sustained write performance to handle sequential writes effectively.
  2. Reliability: SSDs, even in RAID configurations, have a limited lifespan in terms of program/erase cycles. Ensure you select enterprise-grade SSDs with good endurance ratings, or consider implementing a strategy for drive replacements over time (see the rough endurance sketch after this list).
  3. Cost: SSDs can be expensive, especially when you’re aiming for 240 TB. Compare the cost of your DIY solution to pre-built storage appliances from reputable vendors like Dell, NetApp, IBM, or HP. Sometimes, purchasing from vendors like ALTA Technologies might yield a better overall cost-to-storage ratio.
  4. ECC Memory: ECC memory is crucial for data integrity, especially in storage systems. If your current platform lacks ECC memory, consider investing in a more stable platform that supports ECC memory. Data storage equipment from renowned vendors often comes with ECC memory support.
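To illustrate point 2, here is a very rough endurance sanity check. The TBW rating and daily ingest below are assumptions for illustration only, and parity overhead and write amplification are ignored:

```python
# Very rough pool-level endurance estimate (all inputs are assumptions).
drives = 32
tbw_per_drive = 2_880        # assumed rated TBW for an 8 TB-class consumer QLC drive
ingest_tb_per_day = 10       # assumed sequential ingest across the whole pool

pool_tbw = drives * tbw_per_drive
years = pool_tbw / (ingest_tb_per_day * 365)
print(f"Pool endurance budget: {pool_tbw:,} TBW")
print(f"At {ingest_tb_per_day} TB/day: ~{years:.0f} years to reach rated TBW")
```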

Potential Improvements:

  1. Consider NVMe SSDs: If your budget allows, consider using NVMe SSDs for improved sequential read/write speeds. They can significantly enhance your storage system’s performance, especially in sequential operations.
  2. Vendor Solutions: Explore solutions offered by Dell, NetApp, IBM, or HP. They have extensive experience in designing reliable, high-capacity storage systems. Their offerings might align with your requirements and prove to be more cost-effective in the long run.
  3. Consult Experts: Consider consulting with storage experts or integrators who specialize in data storage equipment. They can provide tailored solutions based on your specific needs and budget constraints.
  4. Future Expansion: Think about future scalability. Ensure your chosen setup allows for easy expansion or replacement of drives as your storage needs grow or technology advances.
  5. Data Backup and Redundancy: Implement a robust backup and redundancy strategy. No matter how reliable your storage system is, data loss can occur. Regular backups and redundancy mechanisms are vital.

In summary, while your DIY approach is commendable, evaluating pre-built solutions from established vendors might offer better long-term reliability, support, and cost efficiency, especially when dealing with substantial storage capacities. Consider your long-term requirements, budget constraints, and the importance of data integrity and reliability when making your decision.

I am getting to this a bit late, but you can get whatever commodity motherboard you want and just install 2 of these:

https://www.apexstoragedesign.com/apexstoragex21

I think it is by far the cheapest and most compact option.

You can see a review here:

If U.2 is on the table, most U.2 options already beat the crap out of SATA and M.2 in terms of $/TB at the high-density end, and they will soon be the highest-density devices you can get; density translates to portability. SATA and U.2 SSDs are currently equal with regard to density, but you will be paying double per TB per cubic centimeter for SATA. I’m basing this on the fact that a 16 TB SATA SSD goes for no less than $1,600, while the lowest-priced 32 TB U.2 SSD sells for around the same price.

Forget about M.2; you cannot get an 8 TB M.2 SSD for less than $650. The 8 TB Samsung 870 QVO SATA SSD has gone as low as $320 in recent months, less than half the price of the M.2.

The upcoming 61 TB U.2 SSD from Solidigm performs well and is likely to cost less than $3,500, making it a tad pricier in terms of $/TB than the Samsung 870, but it would give you 240 TB using just 4 SSDs. I could fit that inside a single ICY DOCK ToughArmor MB699VP-B V2, the same footprint as two stacks of four SATA SSDs, which top out at 16 TB right now.
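Running those price points through a quick $/TB comparison (a sketch; the Solidigm figure is the estimate above, not a list price):

```python
# $/TB for the price points quoted above.
prices_usd = {
    "8 TB SATA (870 QVO)":  (8, 320),
    "8 TB M.2 NVMe":        (8, 650),
    "16 TB SATA":           (16, 1_600),
    "32 TB U.2":            (32, 1_600),
    "61 TB U.2 (Solidigm)": (61, 3_500),  # estimated price
}

for name, (tb, usd) in prices_usd.items():
    print(f"{name}: ${usd / tb:.0f}/TB")
```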

For roughly $3k more, choosing the U.2s over the SATAs you planned on buying, you would:

  • need only 4 SSDs versus 32
  • need only the equivalent physical space of 8 SATA SSDs (assuming SATA drives are 7 mm thick, while U.2s are 15 mm)
  • possibly eliminate the need for (and cost of) extra add-in cards/HBAs
  • have the bandwidth of PCIe 4.0 × 16 versus SATA at 6 Gbps
1 Like

The goal is 240 TB of solid-state storage. It is hard to reach that on a single computer without running out of PCIe lanes. Did you read about the Apex?

Apex Storage X21

Going by the picture, the Apex takes up quite a bit of space (roughly 948,301 mm³). And with M.2 SSDs topping out at 8 TB, 168 TB is about as far as you can go in that volume (5,645 mm³/TB). The highest-density M.2 SSDs are also unreasonably expensive, at roughly three times the $/TB of the next capacity down.

Assuming 16 lanes, bifurcation, and that the 64 TB-class U.2 drives will be released soon, it might be worth waiting for the densest option that can reach 240 TB. There are PCIe adapter cards that turn a PCIe slot into four OCuLink ports, and Mini ITX cases with a 5.25″ drive bay that takes 4 U.2 SSDs. The ICY DOCK ToughArmor MB699VP-B V2 comes in at around 903,867 mm³, which with four 64 TB drives works out to roughly 3,531 mm³/TB (about ⅝ of the Apex Storage X21).
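Spelling out that density math (a quick sketch using the volume figures above):

```python
# Volume per TB for the two layouts above (volumes in mm^3).
apex_x21_mm3    = 948_301   # Apex Storage X21 with 21 x 8 TB M.2 (168 TB)
icydock_4u2_mm3 = 903_867   # MB699VP-B V2 setup with 4 x 64 TB U.2 (256 TB)

apex_density = apex_x21_mm3 / (21 * 8)
u2_density = icydock_4u2_mm3 / (4 * 64)
print(f"Apex X21:      {apex_density:,.0f} mm^3/TB")
print(f"4 x 64 TB U.2: {u2_density:,.0f} mm^3/TB ({u2_density / apex_density:.2f}x)")
```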

256 TB could fit in this soon.

The ICY DOCK ToughArmor MB516SP-B takes 16 (super expensive) SATA drives, and those could be populated with 16 TB SATA SSDs to reach 256 TB in the footprint of two 5.25″ drive bays (1,931,580 mm³, or 7,545 mm³/TB). The problem is that chassis with more than one 5.25″ drive bay are scarce these days.

256 TB could fit in this now.

If you need the highest storage density today, and money is no object, then the Apex Storage X21 is the densest one-slot solution, although it would not get you to 240 TB used as-is. You’d populate 17 M.2 slots with 8 TB SSDs, getting you to 136 TB. Then use M.2-to-OCuLink adapters in the remaining four slots and run them to four internal drive bays (or the ICY DOCK ToughArmor MB699VP-B V2) holding four 32 TB U.2 SSDs, getting you to 264 TB. (As an aside: 264 TB coincidentally happens to be 240.1066 TiB. 🙂) That achieves your objective, and I’m willing to bet that, with the right chassis, it also fits comfortably in carry-on luggage.
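The capacity math for that mixed build, as a quick check:

```python
# Mixed Apex X21 build from above: 17 x 8 TB M.2 plus 4 x 32 TB U.2.
total_tb = 17 * 8 + 4 * 32           # = 264 TB (decimal)
total_tib = total_tb * 1e12 / 2**40  # convert decimal TB to binary TiB
print(f"{total_tb} TB raw = {total_tib:.1f} TiB")
```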

Assuming you are using a server motherboard, you can use a PCIe bifurcation card to split the slot into two slots.

If you aren’t using a server motherboard, you only get 20-28 PCIe lanes, and I can’t think of any solution that can work for you with NVMe drives.
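Roughly why direct-attached NVMe doesn’t pencil out on a desktop platform (a sketch; the 24-lane figure is an assumed typical desktop lane budget):

```python
# Lane budget for direct-attached NVMe on a consumer platform.
platform_lanes = 24   # assumption: typical desktop CPU + chipset budget
lanes_per_drive = 4   # each U.2/M.2 NVMe drive wants a x4 link

max_drives = platform_lanes // lanes_per_drive
print(f"~{max_drives} directly attached NVMe drives before needing "
      "switches, HBAs, or a server platform")
```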

You can get a 16-channel SAS card, then run that through a pair of SAS expanders that support SATA, then use a ton of M.2 SATA drives.

Either individually or through an adapter like:
https://global.icydock.com/product_237.html

240 TB / 8 TB = 30 drives, if they were NVMe, i.e. PCIe-connected, i.e. on the Apex, at roughly $100/TB.

At roughly $80/TB, looking into it, I only saw those in 2 TB increments. For larger sizes you need NVMe.
240 TB / 2 TB = 120 drives (which can be SATA).

These SAS expanders allow connections to 28 downstream drives, with 2 × 4 channels upstream.
120 / 28 ≈ 4.3, i.e. 5 expander cards.
You probably need the extra ports for the parity drives to achieve 240 TB of usable space.


You actually want more drives due to parity, probably 10% to 20% extra.
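Rough raw-capacity math with that parity margin, using the 2 TB SATA drives from above (a sketch, not a ZFS layout recommendation):

```python
# Raw capacity and drive count for 240 TB usable with 10-20% parity overhead.
import math

drive_tb, usable_target_tb = 2, 240

for overhead in (0.10, 0.20):
    raw_tb = usable_target_tb / (1 - overhead)
    drives = math.ceil(raw_tb / drive_tb)
    print(f"{overhead:.0%} overhead: ~{raw_tb:.0f} TB raw, "
          f"{drives} x {drive_tb} TB drives")
```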

You can achieve this with one SAS controller with 16 internal channels connected to 2 expander cards like:

Though that looks like it requires a PCIe slot, it really doesn’t; it just needs a power supply, like:
https://www.amazon.com/dp/B07SBWHNWM
I am not saying that is the correct one; different cards require different voltages.

Both the controller and the expander cards need to be 12 Gbps or faster, due to certain features on that version of SAS.

The LSI card should be 93xx or later, with at least 16 internal channels.
You are going to need to run TrueNAS either natively or via a VM.

Due to the number of drives, you will need a second layer of SAS expanders:

1 PCIe card, x8 PCIe 4.0.
2 SAS expanders, each with 9 connectors; use the connectors as 4 sets of 2.
Card 1: 1 set of 2 connects upstream, 3 sets of 2 connect to 3 cards downstream; 4 drive connections spare.
Card 2: 1 set of 2 connects upstream, 2 sets of 2 connect to 2 cards downstream; 12 drive connections spare.
5 additional SAS expanders to connect to drives, each with 1 set of 2 connectors upstream and the rest connecting to drives.
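To sanity-check that layout’s port budget (a rough sketch; it assumes 9 connectors per expander and 4 drives per connector, as described above):

```python
# Port-budget check for the two-layer expander tree above.
CONNECTORS = 9       # connectors per expander
DRIVES_PER_CONN = 4  # drives per connector

# Layer 1: two expanders, each using 1 set of 2 connectors upstream.
card1_spare_conn = CONNECTORS - 2 - 3 * 2  # 3 sets of 2 downstream -> 1 spare
card2_spare_conn = CONNECTORS - 2 - 2 * 2  # 2 sets of 2 downstream -> 3 spare

# Layer 2: 3 + 2 = 5 expanders, each with 2 connectors upstream, 7 to drives.
layer2_drives = 5 * (CONNECTORS - 2) * DRIVES_PER_CONN

total = layer2_drives + (card1_spare_conn + card2_spare_conn) * DRIVES_PER_CONN
print(f"Layer-1 spare drive connections: {card1_spare_conn * DRIVES_PER_CONN} "
      f"and {card2_spare_conn * DRIVES_PER_CONN}")
print(f"Total drive connections: {total}")  # ~156, enough for 120 data + parity
```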

You can probably now understand why I recommended the Apex cards. If you think the Apex cards are bulky, this is the alternative.

Or just get a rack mount (loud) server that meets your needs with an integrated backplane:
