The NZXT H630 is a superb case, as long as you don't have a standard EATX motherboard.
The case is a behemoth and has many features to make you feel like you’ve made a good purchase.
The painted metal surfaces are powder-coated matte black, and the exterior plastic is matte black as well.
A lining of foam surrounds the side panels and top to provide sound dampening. No side window on this puppy.
One of my favorite features of this case is that the buttons and ports are a separate module affixed to the chassis, which makes removing the front and top panels hassle-free.
There are even silly features like LEDs to light the rear I/O area.
The downside is the motherboard compatibility. Both the manufacturer and the retailer listed the case as EATX compatible, but it is not compatible with standard 330mm (13 in) EATX. The motherboard plane is recessed, which prevents modifying or adding standoffs to accommodate a larger PCB.
After testing several cases I settled on the InWin 707. It's not the best case, but it fits standard EATX and has eight 3.5" drive bays and four 5.25" drive bays, three of which can hold a hot-swap drive pod.
The drive trays are OK; they just use dowels, and they have holes to mount 2.5" drives.
The fan filters are a joke in my opinion: just a crummy mesh held on by grommets or press-fit into the case. Not a good solution at all.
The case will fit any size motherboard and still have plenty of room for cable management.
Unlike the NZXT, the port cables are attached to the face panel and are not as easy to remove. The InWin also lacks a reset button.
The motherboard chipset gets hot on this system. There is only a passive heatsink, so I mounted an extra 60mm fan to it. The fan is a little loud and I'll replace it with a better one later, but it's keeping the chipset much cooler.
The EVGA power supply's motherboard connector wires aren't long enough to route the cable neatly, but it turned out OK otherwise.
Given its nature as your NAS, I'd seriously consider going with raidz2, especially if you're buying lots of your drives in one go (due to the bathtub curve of failures with age, you're much more likely to have a second failure during a rebuild when they all wear in together). Otherwise it sounds great; the pair of Xeons is a LOT of raw horsepower to play with.
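For reference, creating a raidz2 pool looks roughly like this. Everything here is a placeholder: the pool name "tank", the drive count, and the /dev/disk/by-id paths, which you'd substitute with your actual drive IDs (by-id paths are preferred so devices don't shift between boots).

```shell
# Hypothetical six-drive raidz2 pool; survives any two simultaneous failures.
zpool create tank raidz2 \
  /dev/disk/by-id/ata-DRIVE1 \
  /dev/disk/by-id/ata-DRIVE2 \
  /dev/disk/by-id/ata-DRIVE3 \
  /dev/disk/by-id/ata-DRIVE4 \
  /dev/disk/by-id/ata-DRIVE5 \
  /dev/disk/by-id/ata-DRIVE6
```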
Dumb question: you have updates like "Case Review" above; are these publicly available somewhere yet?
Thanks for the suggestions. My crash plan is: on a drive failure, sync the backup, replace the faulty disk, and rebuild. If another drive fails during the rebuild I'll still have the backup. I have suffered total data loss enough times to be numb to it. Two stages of redundancy are acceptable to me. I currently only use mirrored disks, so this will be an improvement anyway.
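On the command line, the replace-and-rebuild step of that plan would look something like this (pool name "tank" and the disk IDs are hypothetical placeholders):

```shell
# After syncing the backup:
zpool status tank                              # identify the faulted disk
zpool replace tank ata-OLD_DISK ata-NEW_DISK   # swap in the replacement
zpool status tank                              # watch resilver progress
```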
The only thing I need protected is my password vault which is not on this server.
I broke it already. I was initially able to install and set up zfs-dkms from the Debian repos, but it's an old version and doesn't support native encryption. So I tried to build ZFS from git, and now I have broken it. It kept throwing errors and I couldn't figure out how to recover. SOooo, I went nuts with rm -R -f on everything that said zfs until the system crashed. Now I'm reinstalling…
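For what it's worth, a less destructive way to try the git build is to have the tree produce .deb packages instead of running `make install`, so dpkg tracks every file and can remove them cleanly later. A sketch, assuming Debian and the upstream openzfs repo; the packaging target name has varied between releases (`deb` vs. `native-deb`), so check the tree's build docs first:

```shell
git clone https://github.com/openzfs/zfs.git
cd zfs
sh autogen.sh
./configure
make -j"$(nproc)" native-deb    # build .deb packages rather than `make install`
sudo apt install ./*.deb        # installed files are tracked, removal is clean
```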
And apparently when using synaptic and marking certain applications for complete removal it will uninstall gnome entirely… Done fucking with it for tonight
Hate it when that happens. It's like, oh, you don't want minesweeper? You probably don't need that GNOME thing either. Meta-packages come in handy sometimes, but sometimes it's best to just install something like Openbox if you need a GUI on the server. Cockpit is pretty good for administering servers over the network; I use it on mine and can recommend it.
Getting things online and trying to transfer a few files from the existing server to the new box.
Initial speeds are a little disheartening.
Old server has two 4 TB 7200 RPM drives mirrored with mdadm.
Its sustained writes were consistently 102 MB/s, tested on a 2 GB file; smaller files crank out 120 MB/s easily.
Writes to the ZFS drives have very consistently churned along at 36.5 MB/s. I can hear the disks write, then pause, repeatedly.
This is the first transfer from the server I’ve attempted and will do some diagnostics after it completes.
Anybody have some initial thoughts? Is this the best the system will achieve?
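A few hedged starting points for the diagnostics. The write-then-pause rhythm often points at sync writes or transaction-group flushes rather than raw disk speed; "tank" below is a placeholder pool name, and these should be run while a transfer is in flight:

```shell
zpool iostat -v tank 2                      # per-disk throughput every 2 seconds
zfs get sync,recordsize,compression tank    # sync=always would explain the stalls
zpool get ashift tank                       # 4K-sector drives want ashift=12
```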
Needs more power! Had to scrounge for a SATA power cable. I found one for my Corsair power supply, but the pinout is different, so I changed the pins to match the EVGA PSU. This will at least get me two more plugs to bring my backup drives online for testing. I still need to set up rsync and the rest.
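For the rsync piece, a minimal sketch; the source path and the backup host name are placeholders for whatever the actual layout ends up being:

```shell
# -a preserves permissions/times, -H hard links, -A ACLs, -X xattrs;
# --delete mirrors removals so the backup matches the source.
rsync -aHAX --delete --info=progress2 /tank/data/ backupbox:/backup/data/
```

The trailing slash on the source copies its contents rather than the directory itself.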
Been having issues with the system crashing: an instant reboot. The logs allude to memory issues, but I thought I'd make a post about it and see if anyone had other thoughts.
I haven’t been able to reproduce the issue and there doesn’t seem to be a pattern.
I ran memtest and had a random crash during it, but memtest has also completed multiple cycles successfully.
The kern.log file showed: EDAC MC0: 1 CE error on CPU#0Channel#0_DIMM#0 (channel:0 slot:0 page:0x0 offset:0x0 grain:8 syndrome:0x0)
I then swapped dimms on the associated cpu sockets and now the log shows: EDAC MC1: 1 CE error on CPU#1Channel#2_DIMM#0 (channel:2 slot:0 page:0x0 offset:0x0 grain:8 syndrome:0x0)
edac-util is not showing any errors.
So it would seem to follow the memory module which is good.
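To keep an eye on it, the MC/channel/slot fields in those log lines map to a physical DIMM socket via the board manual, and the same counters live under sysfs. A small sketch that decodes a line (the sample is copied from the log quoted above):

```shell
# Pull the memory controller, channel, and slot out of an EDAC log line.
line='EDAC MC0: 1 CE error on CPU#0Channel#0_DIMM#0 (channel:0 slot:0 page:0x0 offset:0x0 grain:8 syndrome:0x0)'
echo "$line" | sed -E 's/.*MC([0-9]+).*channel:([0-9]+) slot:([0-9]+).*/mc\1 channel \2 slot \3/'
# The running counters are also exposed under /sys/devices/system/edac/mc/
```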
I will grab another set of the same memory and replace the faulty stick. Hopefully that fixes the issue.
I have a very similar system (SM board, same Xeons, same spec RAM, etc.) and have been happy with it. I was using it exclusively for Emby, but am planning to convert it to an oVirt host and run Emby in a VM.
Did you end up going with the older zfs package over building it from the git repo? Do any of the openzfs packages on other distros (Fedora?) have encryption support yet?
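On the encryption question, native encryption landed in OpenZFS 0.8, so the quickest check is the version string; a hedged sketch ("tank/secure" is a placeholder dataset):

```shell
zfs version        # pre-0.8 builds may not have this subcommand at all;
                   # modinfo zfs | grep -i version works as a fallback
# On 0.8 or later, an encrypted dataset can be created like:
zfs create -o encryption=aes-256-gcm -o keyformat=passphrase tank/secure
```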