Hello community... Question

See how I got you to open it by NOT having it in the Title…Just kidding.

BLUF - What's my best option for an enterprise-level hypervisor for my machine, considering its components (I'd like to keep the RAID card to see if I can integrate alerts, emails, etc. to simulate a help/service center) and my goals? It will be used as a test bed for Linux and Windows (MAYBE), a Plex server (I need this one set up ASAP as my wife is going crazy missing her shows), Pi-hole? pfSense? any other projects I can find to learn from, and whatever interface OS will work best in this scenario. I have tried to install Proxmox… 20 different ways… yeah. Should I keep trying? It doesn't have to be easy to use by any means, given my goal of learning. Any help appreciated.

To business! I have assembled a new machine I hope to use as a test bed for learning Linux (some experience, a 2 out of 10) and for navigating and learning enterprise-level environments. I would like to learn more networking as far as routing and security. My goal is to maybe find a job as a remote IT tech. The biggest reason for this is my disability; I am a veteran and I am disabled ('nuff said… period).

I stumbled onto your website and was impressed with the content as well as the knowledge of your community, so I joined up.

I had seen this ASRock Rack X470D4U Micro ATX AM4 socket motherboard and I had to give it a shot. I am intrigued by the tech involved and the challenge. I had already managed to master the LSI 9260-8i MegaRAID, so I wanted to continue. My previous MSI B550M Mortar did not have an ECC option. I already have cloud and redundant local storage, so it's not strictly needed, but I wanted to try it. (A lot more on that later; another post?)

Here's what I have:
CPU - Ryzen 7 2700
Cooler - MSI Frozr L (modified with a Noctua mount; post later?)
RAM - Kingston KSM26ED8/16ME Server Premier unbuffered ECC 2666
Mobo - ASRock Rack X470D4U
RAID - LSI 9260-8i with 4x 3TB IronWolf drives and a Samsung 883 DCT series SSD
GPU - MSI GTX 1650 Super Gaming X (I will need the patch to unlock the transcode limit?)
Drives - Samsung 860 Evo 1TB M.2 SATA, the 2.5" SATA version in the same size, a Seagate FireCuda 2.5" 2TB SSHD, and a WD Blue 6TB 3.5".
All housed in a Fractal Node 804 case with some Noctua fans.

Thank you for reading all of this. I know it's a lot. Here's a star :star2:

Proxmox?

1 Like

Thank you, I think it will work. I finally got it to install correctly; it was an issue with the boot settings involving CSM/legacy vs. UEFI boot. Took a bit, but it's all working now. Now to dive into Proxmox lol. Next on the list is enabling access to my MegaRAID LSI 9260-8i, then setting up a ZFS pool for VMs and the like, I think. Then I have to make sure ECC is in fact working. LOL
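
For the controller step, my plan is to first confirm Proxmox actually sees the card and its virtual drive from the shell - roughly this (the grep patterns are just guesses at what my system will report):

lspci | grep -iE 'lsi|megaraid'     # confirm the 9260-8i shows up on the PCIe bus
dmesg | grep -i megaraid            # check that the megaraid_sas driver loaded cleanly
lsblk -o NAME,SIZE,MODEL            # the card's virtual drive should appear as an ordinary block device
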
Then I'll pick a primary OS or a Docker/container setup to get Plex running.
I appreciate the help.

1 Like

Hi, welcome,

Does the 9260-8i get you JBOD? With ZFS it's ideal if you don't do any kind of hardware RAID and have the controller give the OS raw access to the drives as much as possible, so that the OS gets to deal not only with how the data is laid out, but also with error detection, error handling, and recovery.

If I'm counting right, you have 8 SATA drives and that board should come with 8 SATA interfaces - if you're having issues with that controller, then you may want to consider saving yourself the 8x PCIe lanes taken up by the card and just hooking up all of the drives directly.
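
If you do hook them up directly, a quick sanity check that the OS really has raw access: every disk shows up as its own device and answers SMART queries directly - roughly, assuming smartmontools is installed:

lsblk -o NAME,SIZE,MODEL     # each physical disk should be listed individually, not as one big virtual drive
smartctl -a /dev/sda         # should return real SMART data without needing a -d megaraid,N workaround
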

I see mention of that a lot. With the way my RAID is configured - 4x 3TB drives in RAID 5 (data is backed up religiously, off site and in house) with a pair of data center Samsung SSDs as cache - the throughput works great. I know it's a pain to integrate the controller, but I'm doing it for the challenge. So only 5 drives are connected to the RAID card itself. I wanted to make this an accessible resource for all my clients as its own pool. The other drives I have can become a large ZFS pool, or different pools for different needs.

I may go that route to make it work, but not until I have exhausted all options. Will using the RAID card in that configuration decrease performance or remove error correction? I was under the impression Proxmox has support for that RAID card and would be able to utilize it to its full potential. I'd like to integrate it into either the ASRock Rack X470D4U server management tool or Proxmox's equivalent to get email alerts and such, if possible - to really mirror an enterprise setup. Thank you for the input.

ZFS on JBOD vs. ZFS on top of a hardware RAID 5 doesn't give you the same feature set in terms of reliability.

The former (ZFS on top of JBOD) is more reliable, potentially faster, and nicer to your logging/journalling/writeback SSDs than your dedicated hardware controller will be.

(That's before taking into account that ZFS is used to deliver RAID solutions orders of magnitude more often than the firmware on the MegaRAID card.)

If you build a block device out of your hard drives on the controller and expose it to the OS as a single block device, ZFS will do checksumming for each piece of data and each piece of metadata that it writes to that single block device. If an error or corruption not detectable by the drive happens, ZFS will still very likely be able to detect it, but it won't be able to fix it, because it only has a single copy of the data on the single block device it sees, and a single checksum for that copy of the data or metadata.

Your hard drive will also checksum data internally; if it reads bad data, it may detect that, retry, and eventually return an error to the controller. The controller can then choose to reconstruct the data and overwrite the bad data on the bad drive. The drive can then reuse the old space, or internally remap that block.

When you're running raidz and letting ZFS handle the RAID, ZFS will checksum each stripe of data and metadata on every write, and verify it on every read, independently on every device. Every read from any drive that fails, or succeeds but returns wrong/corrupted data, is something ZFS can detect and potentially recover from by using other known-good (thanks to checksums) data on other drives.

There's a lot more checksumming going on in ZFS; it gives you a layer of protection above what's present in the drives, and you can scrub all the data on each disk explicitly, e.g. once every couple of months, to ensure that the drives and data are staying healthy.
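
As a rough sketch of what that looks like in practice (the pool name and device paths here are placeholders - use the /dev/disk/by-id/ paths of your actual drives):

zpool create tank raidz1 /dev/disk/by-id/DISK1 /dev/disk/by-id/DISK2 /dev/disk/by-id/DISK3 /dev/disk/by-id/DISK4
zpool scrub tank       # re-read everything and verify checksums, repairing from parity where needed
zpool status -v tank   # per-device read/write/checksum error counters and scrub progress
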


Both ZFS and your controller will write a journal to avoid the RAID write hole. ZFS knows whether or not the underlying data that is being overwritten will be needed later - your controller does not (unless it supports TRIM, which I'm not sure it does). In order to bring the array into a consistent state, your controller needs to be able to retry every write. That means all the data is always written to the write-ahead log / write-back cache on the SSD and then eventually makes it to the drive. In the case of ZFS, if you're doing large sequential writes to empty space and not syncing the data frequently (e.g. writing out a backup archive, any kind of log, shuffling audio/video data - anything larger than a megabyte that you don't have to sync immediately), ZFS can note in its intent log that it's writing out that data and avoid logging the data itself - if it makes it there, great! If not, never mind. This takes less of a toll on your ZIL/SLOG device and makes your ZIL/SLOG device more effective in turn - increasing the performance of the filesystem as a whole.
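
If you do end up giving ZFS one of those data-center SSDs as a dedicated log device, that part is a one-liner (device path is a placeholder again):

zpool add tank log /dev/disk/by-id/SLOG_SSD   # separate ZIL (SLOG); only synchronous writes land here
zpool iostat -v tank 5                        # watch how much traffic the log device actually sees
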

3 Likes

Wow, well that sounds fantastic. Thank you for the in-depth explanation. It helps me gain a better understanding of what I am working with.

Would this also take advantage of the ECC on my memory, or is that an entirely separate animal altogether? I have activated ECC in my BIOS. I saw there was a major discussion about whether it is actually functional on this board… I didn't see a definite conclusion to that, though.

I am also new to Linux, so I am having trouble with some of the steps for mounting the drives. Currently they all populate in Proxmox, but only the one with usage listed as "No" is available for me to do anything with. I assume I have to format them through the shell? Or mount them and edit the "file" (I forget the name of it) to add a line for each so it gets mounted at every boot? I know some of these questions are very basic stuff; I apologize for taking up your time with them.

As it stands now, here is what I would like to do, if it makes sense and is even possible.

Samsung 970 Evo 500GB - boot drive.

Samsung 850 Evo M.2 SATA 1TB and 850 Evo SATA 1TB - a RAID 1 cache equivalent for the larger ZFS storage pool, OR I can partition them out for VMs and then use just the one Samsung 883 DCT series 480GB as a cache or enhancer for ZFS?

4x Seagate IronWolf 3TB drives - large storage pool for media and files, enhanced with one of the caches listed above (sketched below)? With some redundancy to survive one drive failure.

6TB WD Blue - a place for snapshot data? Is that enough, or do I need something larger? I like the idea of having someplace locally within the machine to get data from, as my network is a mess in its current state. I'll be addressing that later, I am sure.

Seagate FireCuda - I'd like to have this available to everyone, but as a repository for DVR recordings. I like the idea of one dedicated drive for this because it's data I don't really care about losing, and with the constant recording, writing, and deleting, if it fails I can replace it easily.
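
Putting the storage part of that plan into ZFS terms, I think it would look roughly like this (the names stand in for /dev/disk/by-id/ paths, and I'm not sure yet whether the SSDs are better as cache or as their own VM pool):

zpool create media raidz1 IRONWOLF1 IRONWOLF2 IRONWOLF3 IRONWOLF4   # survives one drive failure
zpool add media cache SAMSUNG_883_DCT                               # 883 DCT as an L2ARC read cache
zpool create vmpool mirror SAMSUNG_850_M2 SAMSUNG_850_SATA          # or keep the two 850s as a mirrored VM pool instead
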

Either way, I'll probably be getting a PCIe card just for load balancing; I saw Wendell recommend one to control 8 drives from one card. He explained the benefits vs. just onboard SATA, but I forget what was said specifically.

I guess I had hoped to learn how to install the drivers and MegaCLI so that the error-correction and health data could be routed to the Proxmox log, and I could access the drive and health data from there. Maybe a project for another time. I also liked the safety, as I do not have a UPS yet… I know, shame on me. I hope to get one that can communicate with my board to power off in case of power loss to avoid any unforeseen issues.
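
For reference, my understanding from the Broadcom docs (so treat the install path and exact output fields as assumptions on my part) is that once MegaCli is installed, the health data I'm after comes from calls like these, which I could then cron into a log or an email:

/opt/MegaRAID/MegaCli/MegaCli64 -AdpAllInfo -aALL     # controller, BBU and firmware overview
/opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -Lall -aALL   # state of the RAID 5 logical drive
/opt/MegaRAID/MegaCli/MegaCli64 -PDList -aALL | grep -E 'Slot|Firmware state|Media Error|Predictive'   # per-disk health
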

I just want to get my foundation right so I don't have to tear it all down anytime soon.

Those boards are more complicated because of the IPMI/BMC; people expect to see ECC logs both in the Proxmox host OS and in the IPMI.

See this thread for some commands you can run from Proxmox to verify what you have: https://forum.proxmox.com/threads/patch-x570-ryzen-edac-support-into-pve-6-1.63744


Re: disks - the "mounted" one is your Proxmox drive; the "partitions" ones have a partition table on them that you need to wipe from a command line in order to use the drive from the Proxmox web UI. (Fresh factory drives are typically completely empty, so you generally wouldn't have this issue when installing on fresh hardware.)
You can run fdisk -w always /dev/sda as root on the command line / over SSH (or you can try to find the device-mapper block nodes - I forget).
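
Alternatively - just how I'd usually do it, not something Proxmox requires - wipefs can clear the old signatures non-interactively; triple-check the device letter first:

wipefs -a /dev/sdX    # removes filesystem / RAID / partition-table signatures from the whole device
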


Edit: btw, ideally look for a 9211-8i (or a 9240 flashed to IT mode, or an IBM M1015) if you plan to use ZFS.

root@HSHL:~# dmesg | grep -i edac
[ 0.205119] EDAC MC: Ver: 3.0.0
[ 7.672871] EDAC amd64: Node 0: DRAM ECC enabled.
[ 7.672872] EDAC amd64: F17h detected (node 0).
[ 7.672914] EDAC MC: UMC0 chip selects:
[ 7.672915] EDAC amd64: MC: 0: 0MB 1: 0MB
[ 7.672916] EDAC amd64: MC: 2: 8192MB 3: 8192MB
[ 7.672918] EDAC MC: UMC1 chip selects:
[ 7.672919] EDAC amd64: MC: 0: 0MB 1: 0MB
[ 7.672920] EDAC amd64: MC: 2: 8192MB 3: 8192MB
[ 7.672920] EDAC amd64: using x8 syndromes.
[ 7.672921] EDAC amd64: MCT channel count: 2
[ 7.673023] EDAC MC0: Giving out device to module amd64_edac controller F17h: DEV 0000:00:18.3 (INTERRUPT)
[ 7.673033] EDAC PCI0: Giving out device to module amd64_edac controller EDAC PCI controller: DEV 0000:00:18.0 (POLLED)
[ 7.673034] AMD64 EDAC driver v3.5.0
root@HSHL:~# edac-util -v
-bash: edac-util: command not found
root@HSHL:~#
Seems it is working, thank you. Seems I'll need to install the edac-util tool to be able to get that reading. I just wanted to make sure I was utilizing that tech. I did pick memory from the QVL to be safe.
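
If I'm reading the error right, that tool should come from the edac-utils package on Debian-based systems (I'm assuming the Proxmox repos carry it):

apt install edac-utils
edac-util -v              # per-memory-controller corrected/uncorrected error counts
edac-util --report=full   # full report, including per-DIMM labels if the board provides them
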

Edit: also, that thread was for the X570 but it seemed to work for the X470 as well lol. So I'm happy. Day after tomorrow I'll have a whole day to bang my head against the wall lol :joy:

1 Like

It's a wiki so I don't take it as 100%, but it helped me get the general idea, and now I'm more comfortable with the switch over to ZFS. It seems really intelligent in how it handles both storage availability and structure.

1 Like