Level1: 172TB+ Storage Server | Level One Techs

But what if you use SSDs instead of spinning rust?

I'm not talking just about your specific project. I mean, in general.

Wouldn't it be more efficient to use a dedicated chip on the RAID card that is designed to do one task, like integrity checking, instead of using a general-purpose CPU?

It's not really the case that these hashing algorithms use the general-purpose parts of a general-purpose CPU anymore:

Between "MMX" style instructions that operate on vectors of data, and dedicated silicon for hash computation, the CPU is often as fast or faster than what you can get on a PCIe card. There are PCIe cards out there that cost thousands of dollars that have "desktop" CPUs in them to help with that, but generally you want to distribute the load and computation across a bunch of devices. For a lot of hard drives (where "a lot" is probably around 20 or more) it just doesn't make sense to try to cram all that onto a PCIe card. If you've got a bitchin storage system, you want a bunch of front end servers to have "equal opportunity" to take advantage of it. So there is a natural tendency to separate your workload from your storage subsystem.


But a dedicated chip only needs the instructions and silicon for a single purpose, without the unnecessary general-purpose stuff, which should make it more efficient.

That's why we use GPUs for graphics, instead of MMX and SSE instructions on the CPU.

Maybe they were too lazy to make custom chips.

No, but you will have to fight off a Captain Kirk action figure. Great video, love the yellow racing color of the Google server/Dell. Anyway, my NAS is not as impressive as yours: I have an MSI 970 board with a 3TB btrfs RAID 1 and an AMD 8-core FX-8320, running Ubuntu Server, with a Mythbuntu backend to record over-the-air TV and Motion to record my IP video cameras. I also have an APC 700 UPS to shut the system down in case of a power outage. I have thought about running a few VMs on it; I still have more to learn, but I have been impressed with btrfs RAID 1 myself: you can use any size drive and multiple drives. Yes, it is experimental, but I have reloaded my OS and did not lose the btrfs RAID, which is way cool. Just wanted to share.

Great build and episode!
Would be interesting to see an episode on optimizations to reduce power consumption. Spinning down drives? Wake-on-LAN? Replacing PSUs that are old or suboptimal? ...?

I have just a tiny 5x4TB FreeNAS box :) but have not yet dared to play with spinning down drives or Wake-on-LAN. I am worried spinning down the HDDs might cause corruption and extra wear on the drives.

In enterprise environments, it's been found that spinning down drives increases their chances of failing overall. It's a very stressful thing, and generally running them all the time is best.

This is just as true for consumer environments.

I'm not sure you have the time to answer this question, but I'd be interested to understand more about how you installed ZFS on Fedora:

  • What repo did you use to install? Or did you compile from source, etc.?
  • Is ZFS your boot filesystem?
  • Why did you use Fedora Workstation instead of Fedora Server? (I'm a noob - what did you mean by "I can get server parts with enf easy enough"?)
  • A general tutorial on setting up ZFS on Fedora would be on my wish list (currently I am using btrfs because my initial research indicated that ZFS wasn't a great choice with Fedora, since licensing issues made it near impossible to set up properly).

I appreciate any answers/advice you or the community can provide


Google "ZFS on Linux" and use the recommended setup. Creating the pool from the CLI can be tricky, so you want to test different disk configs: RAIDZ for archival and light workloads, mirroring for heavy workloads. Though an all-SSD array with RAIDZ1 seems pretty kick-ass, IMHO.
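
Something like this is an easy way to practice layouts with throwaway file-backed vdevs before touching real disks (assumes ZFS on Linux is already installed and you're running as root; the /tmp scratch path and pool names are just placeholders, not what was used in this build):

```python
import subprocess
from pathlib import Path

VDEV_DIR = Path("/tmp/zfs-test")              # scratch location for fake "disks"
VDEV_DIR.mkdir(exist_ok=True)

def make_vdevs(count, size="1G"):
    """Create sparse files that stand in for real drives."""
    paths = []
    for i in range(count):
        p = VDEV_DIR / f"disk{i}.img"
        subprocess.run(["truncate", "-s", size, str(p)], check=True)
        paths.append(str(p))
    return paths

disks = make_vdevs(4)

# RAIDZ1: best capacity, fine for archival and lighter workloads
subprocess.run(["zpool", "create", "testz", "raidz1", *disks], check=True)
subprocess.run(["zpool", "status", "testz"], check=True)
subprocess.run(["zpool", "destroy", "testz"], check=True)

# Striped mirrors: less usable space, better IOPS for heavy workloads
subprocess.run(["zpool", "create", "testm",
                "mirror", disks[0], disks[1],
                "mirror", disks[2], disks[3]], check=True)
subprocess.run(["zpool", "status", "testm"], check=True)
subprocess.run(["zpool", "destroy", "testm"], check=True)
```

Once a layout behaves the way you want, recreate the pool with the actual drives instead of the image files.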

"Enf" should read "dnf" (auto-correct). It's the package manager. I had a USB stick for Workstation handy, but Server is a fine choice too.

ZFS is not the boot FS.

ZFS is portable across systems, so you can create your pool on another OS and then just import it.
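
Roughly, the move looks like this (the pool name "tank" is just an example; assumes the ZFS tools are present on both machines):

```python
import subprocess

# On the old system: export cleanly so all state is flushed and the pool is released
subprocess.run(["zpool", "export", "tank"], check=True)

# ...physically move the disks/shelves to the new system...

# On the new system: scan the attached devices and import the pool by name
subprocess.run(["zpool", "import", "tank"], check=True)
```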


Sorry to necro,
but what shelves are you using and how are they connected?
I just bought an R710, and in the future I may need to get some additional disks.

Old rebadged LSI enclosures. NetApp in a former life, but LSI basically.


<3 Thanks Wendell.
One more question:
Do you just have an LSI controller with external SAS connectors going to them? Or something else?

Yep, just plain SAS.


I have purchased some NetApp shelves and was looking at cabling options. I am following the guide https://library.netapp.com/ecm/ecm_get_file/ECMM1280392 and was trying to compare their cabling to yours in the video. Perhaps I'm incorrect, but the cabling you have implemented looks different from the configurations in this document. Could you clarify how you connected the disk shelves to each other and then to the HBA card?

I have an LSI HBA card with 2 external mini-SAS ports; these are connected to the disk shelves with mini-SAS to QSFP cables. I'm just not positive on the cabling between the shelves and the server, with regard to whether to cable the top or bottom IOM and whether to use the circle or square ports. Here is the diagram that I am basing my system on. I have two shelves in the stack.

Any help would be appreciated

I had dual controllers. You should experiment with disk benchmarking and arrangement to find what works best; that's what I did. I started with this diagram, though.
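
As a rough starting point, something as simple as timing a big sequential write per layout will show the differences (the /tank mountpoint, 4 GiB size, and 4 MiB chunk below are just example values; a serious comparison would use fio with direct I/O and several queue depths):

```python
import os
import time

MOUNTPOINT = "/tank"                          # hypothetical pool mountpoint
TEST_FILE = os.path.join(MOUNTPOINT, "bench.tmp")
CHUNK = os.urandom(4 * 1024 * 1024)           # random data so compression can't flatter the result
TOTAL_BYTES = 4 * 1024**3                     # 4 GiB total

start = time.perf_counter()
with open(TEST_FILE, "wb") as f:
    for _ in range(TOTAL_BYTES // len(CHUNK)):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())                      # make sure the data actually reached the pool
elapsed = time.perf_counter() - start

print(f"sequential write: {TOTAL_BYTES / elapsed / 1024**2:.0f} MiB/s")
os.remove(TEST_FILE)
```

Run it (plus a read-back pass and some small random I/O) against each candidate arrangement and keep whichever layout wins for your workload.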


Hey, how’s the system been running? An update video would be awesome!

Also yeah, any video about your workflow would be really cool to see. I always enjoy seeing Linus and you guys working on systems that run these businesses.

-Jeb


bump. plus one!


I am curious how the old crusty Google Search Appliance is going as well, and whether it's still on Fedora and ZFS?


+1 Also interested in MOAR storage videos, maybe how to set up and maintain ZFS on Linux?


Yes, an update please.


Well, hopefully our voices were heard.

Cheers, gents.