Exploring NAS build and testing

I have been following a couple of tech YouTubers for a while, especially Wendell, and thought I would finally jump into the tech forums as I start down the road of building a new NAS and expanding my home lab. I just completed a 3-node cluster running Proxmox with Ceph. The hardware was either stuff I had lying around or came from the company I was working for, which went out of business this year.

Cluster Specs
3x ASRock B450M PRO4
3x Ryzen 5, 6-core, 3.6GHz
3x Silicon Power 256GB NVMe (OS)
3x 64GB G.Skill TridentZ 3200
6x ADATA SU800 512GB (OSD)
3x ADATA SU800 256GB (DB)
3x Intel X540 10GbE NICs
3x Intel quad-port 1GbE NICs

Still going through and tightening security as I learn more about Proxmox.

Here is my goal: I want to build the following NAS using TrueNAS Scale. Yes, it is extreme overkill, but I have a plan to utilize it as much as I can.

ASRock Rack ROMED8-2T
EPYC 7272, 12-core, 2.9GHz
Crucial 256GB DDR4 3200
Icy Dock (Cremax) ToughArmor MB516SP-B 16-bay
8x PNY CS900 (to start)
16x 8TB Seagate Exos (from my old job)
Silicom PE310G4I71LBEU-XR-LP (Intel X710)
QLogic 16Gb FC HBA
LSI HBA in IT mode (internal connection to SSDs)
LSI HBA in IT mode (external, for HDDs)

I have a spare MSI X99 mobo with a 10-core Xeon that I will be using to test configurations and performance, to make sure certain parts work with TrueNAS even though the forums say they do. The Silicom quad-port 10GbE, for example, is said to be basically a rebranded Intel X710, but I have been burned a few times by hardware that is supposed to work but doesn't. So when the Silicom card gets here, it will be the first thing I test. Next will be the Icy Dock 16-bay and 8 SSDs, which should arrive the first week of the new year. If all of those work, I will purchase the rest of the hardware.

Now, this NAS will be used for both personal and business purposes. Proxmox backups will go to the NAS, Plex will run as a VM with hardware acceleration, and it will host storage for my entire family. I am also working out a plan with a friend to dump his company's data to my NAS, which I will then offload to LTO-6 tape. Oh yeah, I also have an MSL4048 with one LTO-6 tape drive that my former boss was going to throw out.
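For the tape piece, the workflow can be as simple as streaming data to the drive with classic tools. A rough sketch (the path is made up for illustration; LTO drives typically show up as /dev/st0 on Linux):

tar -cvf /dev/st0 /mnt/tank/company-data    # stream the dataset straight to tape
mt -f /dev/st0 rewoffl                      # rewind and eject the cartridge when done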

So this is going to be a long journey to the final goal. I look forward to sharing my mistakes and successes as I build this out.


UPDATE #1

I received my Silicom PE310G4I71LBEU-XR-LP without SFPs, but I have a few spare Intel SFPs, model # FTLX8571D3BCVIT1, which do work with the Silicom card. Also, I ran into my first problem: the Silicom card would not fit on my test board, which has a shield over the onboard audio that prevents the card from seating correctly. The Silicom card is made as low-profile as possible, which is perfect for server motherboards, as you can see in the picture compared to a Mellanox.


But I worked around it for now by using a PCIe riser ribbon.

After wiping my test rig and installing TrueNAS Scale, I verified that it could see the Silicom card with lspci, and it connected to the switch without any issues.

02:00.0 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 02)
02:00.1 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 02)
02:00.2 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 02)
02:00.3 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 02)
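For reference, that listing is straight lspci output; a quick filter like this pulls just the Ethernet devices (standard pciutils, nothing TrueNAS-specific):

lspci | grep -i ethernet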

With that part out of the way, next I am going to work on performance testing to verify that I can push all four 10GbE connections at once. This is going to be tricky with TrueNAS, since I am having issues creating two separate subnets on the same card (like I could do with FreeNAS originally). But I will figure it out and post my findings.
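For anyone tinkering along: under the hood SCALE is Debian, so two subnets on one port can at least be tested by hand with iproute2 before fighting the UI. Interface names and addresses here are just examples from my bench, and changes made this way do not persist across reboots:

# Option 1: simply add a second address/subnet to the port
ip addr add 192.168.20.30/24 dev enp2s0f0

# Option 2: tagged VLAN interfaces, one per subnet
ip link add link enp2s0f0 name enp2s0f0.10 type vlan id 10
ip addr add 192.168.10.30/24 dev enp2s0f0.10
ip link set enp2s0f0.10 up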


Posting just to follow this.

Subscribing to follow as well. Sounds like an interesting project.

One question out of interest: are you unhappy with the performance of your existing Ceph cluster, or is there another reason you are building the NAS on TrueNAS?

UPDATE #2

After quite a bit of tinkering today, I was able to get a bonded connection on TrueNAS fully saturated using two of the Proxmox nodes. The test was only done using iperf3, though. I plan to utilize iperf3's ability to actually write data to an SSD tier on the NAS (the -F/--file option) when all the parts come in.

Here is the complete test setup.

- Node1: one 10GbE port, Intel X540
- Node3: one 10GbE port, Intel X540
- Mikrotik CRS317-1G-16S+ running SwOS
- TrueNAS: 2 ports on the Silicom PE310G4I71LBEU-XR-LP (bonded)

Settings for TrueNAS:
(screenshot of the TrueNAS bond settings)

Settings for the switch:
(screenshot of the SwOS LAG settings)

2 VLANs, isolated from each other:
VLAN 10 - cluster communication for the Proxmox servers
VLAN 20 - everything else, where the testing was conducted
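For anyone reproducing this, the test boils down to one iperf3 listener per client on the TrueNAS box, with each node pointed at its own port. Roughly (the 192.168.30.30 address and port scheme match the exact commands shown in the later updates):

# On TrueNAS: one listener per client, each on its own port
iperf3 -s -p 5201 &
iperf3 -s -p 5202 &

# On Node1 and Node3 respectively, kicked off at the same time
iperf3 -c 192.168.30.30 -P 1 -t 30 -p 5201
iperf3 -c 192.168.30.30 -P 1 -t 30 -p 5202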

Results----

-Node1-
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-30.00 sec 34.6 GBytes 9.90 Gbits/sec 0 sender
[ 5] 0.00-30.00 sec 34.6 GBytes 9.90 Gbits/sec receiver
iperf Done.
root@Node1:~#

-Node3-
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-30.00 sec 34.6 GBytes 9.90 Gbits/sec 40 sender
[ 5] 0.00-30.00 sec 34.6 GBytes 9.90 Gbits/sec receiver
iperf Done.
root@Node3:~#

-TrueNAS, server listening on 5201-
[ ID] Interval Transfer Bitrate
[ 5] 0.00-30.00 sec 34.6 GBytes 9.90 Gbits/sec receiver

-TrueNAS, server listening on 5202-
[ ID] Interval Transfer Bitrate
[ 5] 0.00-30.00 sec 34.6 GBytes 9.90 Gbits/sec receiver


Very surprised that it worked. I was a little concerned that the switch couldn't handle it, but it proved otherwise.

Ordered more SFPs and fiber cables to fully populate the Silicom card in the test machine. Looking forward to seeing if I can push all four 10GbE ports at once.


Actually, the Ceph storage is built into the Proxmox nodes for ease of migration between nodes. I tested using just ZFS, replicating/migrating between nodes, but it was not as fast as Ceph. We are talking less than a minute to migrate a 100GB VM using Ceph, where ZFS took a couple of minutes. I am working with a few friends to provide infrastructure for their website design business.

For the NAS, I have a completely different plan that I hope I can make a little money from, too. I have 16x 8TB drives, which were graciously donated to me when the company I worked for closed this year.

Ah ok, thank you.

Update #3

Received the additional SFPs and fiber cables I ordered. Added all the SFPs to the switch, ran all the new fiber, and configured two more ports into the LAG. All ports on the TrueNAS server have an MTU of 9000. Once a ping returned from all 4 servers, verifying communication with the LAG, I proceeded to slam the LAG with iperf3 from all 4 servers. And what do you know, IT WORKED!!
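Side note on the jumbo frames: a plain ping will succeed even with a broken MTU, since small packets fit in a standard frame. A stricter check is to send a full-size payload with fragmentation forbidden, something like this (8972 = 9000 minus the 20-byte IP and 8-byte ICMP headers):

ping -M do -s 8972 192.168.30.30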

-Node1-
root@Node1:~# iperf3 -c 192.168.30.30 -P 1 -t 30 -p 5202
Connecting to host 192.168.30.30, port 5202
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-30.00 sec 34.6 GBytes 9.90 Gbits/sec 64 sender
[ 5] 0.00-30.00 sec 34.6 GBytes 9.90 Gbits/sec receiver

-Node2-
root@Node2:~# iperf3 -c 192.168.30.30 -P 1 -t 30 -p 5203
Connecting to host 192.168.30.30, port 5203
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-30.00 sec 34.5 GBytes 9.89 Gbits/sec 44 sender
[ 5] 0.00-30.00 sec 34.5 GBytes 9.89 Gbits/sec receiver

-Node3-
root@Node3:~# iperf3 -c 192.168.30.30 -P 1 -t 30 -p 5204
Connecting to host 192.168.30.30, port 5204
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-30.00 sec 34.6 GBytes 9.90 Gbits/sec 0 sender
[ 5] 0.00-30.00 sec 34.6 GBytes 9.90 Gbits/sec receiver

-Node4-
root@prox-bkup:~# iperf3 -c 192.168.30.30 -P 1 -t 30
Connecting to host 192.168.30.30, port 5201
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-30.00 sec 34.5 GBytes 9.89 Gbits/sec 0 sender
[ 5] 0.00-30.00 sec 34.5 GBytes 9.88 Gbits/sec receiver

Now, I did see there were a few retries, but it still completed the test on all 4 ports. After the first 30-second run was complete, I upped the duration to 2 minutes to see if it could sustain the load without any issues. I did run into one unusual quirk depending on which node started first: one of the 4 servers is not quite as strong as the others, but once I figured out what order to start them in, it sustained the full 2 minutes without an issue.
(screenshot of the 2-minute iperf3 runs)

Going to do a little more tinkering to see if I can choke the Silicom card, but next up is read/write performance testing once my Icy Dock ToughArmor MB998IP-B comes in. There are 8x PNY CS900s and an LSI HBA in IT mode waiting to be pushed to their limits. I plan to test different ZFS configurations, along with VM load and throughput tests, to see what the CS900s can really do. I know many of these tests have been done before, but I am doing them all for my own sanity as well as to test my own skills.

Also, I hope everyone has an enjoyable and safe Christmas/New Year's.


Update #4

Well, things got a little interesting in the past few weeks. Tested the Icy Dock ToughArmor MB998IP-B, and one of the slots was bad. Did some troubleshooting to verify and got the same result: slot 4 would not recognize a drive. Then, while testing that, one of the nodes in my Proxmox cluster went south. Turned out one of the RAM sticks decided to puke. It is part of a 128GB kit, 4x 32GB sticks, and all four sticks had to be sent back for RMA. This took down a second node because I had split the kit between two nodes. To top it all off with a nice little cherry, my UDM SE bricked itself after a midnight update. UBIQUITI!!! That makes 3 failures; here's hoping that is it… FOR NOW! After waiting 3 weeks for all the RMAs, everything is now stable.

I grabbed someone else's fio command, which I found in the forums, and tweaked it to test different file sizes in parallel. Now, the SSDs in this NAS will predominantly do READs; I estimate 90% read to about 10% write, so they SHOULD NOT wear out within their warranty. Plus, 8 more will be added in the next few months to expand the array. I am even looking at SED SSDs for an encrypted pool, now that I have learned you can use a KMS (Key Management Server) with TrueNAS, but that is a test for another day. Now for the testing with the Icy Dock and PNY SSDs.

I ran the following command, changing only the --size value between runs:

# Mixed random read/write, 12 jobs, 128k blocks, queue depth 32, direct I/O,
# 60-second timed run after a 10-second ramp. --size is the only thing I change.
fio --bs=128k --direct=1 --directory=/new --gtod_reduce=1 \
    --ioengine=posixaio --iodepth=32 --group_reporting --name=randrw \
    --numjobs=12 --ramp_time=10 --runtime=60 --rw=randrw --size=256M --time_based

First raidZ1
Size - 128MB

Run status group 0 (all jobs):
READ: bw=6209MiB/s (6511MB/s), 6209MiB/s-6209MiB/s (6511MB/s-6511MB/s), io=364GiB (391GB), run=60006-60006msec
WRITE: bw=6219MiB/s (6521MB/s), 6219MiB/s-6219MiB/s (6521MB/s-6521MB/s), io=364GiB (391GB), run=60006-60006msec

Size - 256MB

Run status group 0 (all jobs):
READ: bw=2680MiB/s (2810MB/s), 2680MiB/s-2680MiB/s (2810MB/s-2810MB/s), io=157GiB (169GB), run=60012-60012msec
WRITE: bw=2684MiB/s (2814MB/s), 2684MiB/s-2684MiB/s (2814MB/s-2814MB/s), io=157GiB (169GB), run=60012-60012msec

Size - 512MB

Run status group 0 (all jobs):
READ: bw=2205MiB/s (2312MB/s), 2205MiB/s-2205MiB/s (2312MB/s-2312MB/s), io=129GiB (139GB), run=60035-60035msec
WRITE: bw=2207MiB/s (2314MB/s), 2207MiB/s-2207MiB/s (2314MB/s-2314MB/s), io=129GiB (139GB), run=60035-60035msec

Size - 1GB

Run status group 0 (all jobs):
READ: bw=1951MiB/s (2046MB/s), 1951MiB/s-1951MiB/s (2046MB/s-2046MB/s), io=114GiB (123GB), run=60054-60054msec
WRITE: bw=1954MiB/s (2049MB/s), 1954MiB/s-1954MiB/s (2049MB/s-2049MB/s), io=115GiB (123GB), run=60054-60054msec

Now for raidZ2
Size - 128MB

Run status group 0 (all jobs):
READ: bw=6126MiB/s (6424MB/s), 6126MiB/s-6126MiB/s (6424MB/s-6424MB/s), io=359GiB (386GB), run=60037-60037msec
WRITE: bw=6134MiB/s (6432MB/s), 6134MiB/s-6134MiB/s (6432MB/s-6432MB/s), io=360GiB (386GB), run=60037-60037msec

Size - 256MB

Run status group 0 (all jobs):
READ: bw=2526MiB/s (2649MB/s), 2526MiB/s-2526MiB/s (2649MB/s-2649MB/s), io=148GiB (159GB), run=60035-60035msec
WRITE: bw=2528MiB/s (2651MB/s), 2528MiB/s-2528MiB/s (2651MB/s-2651MB/s), io=148GiB (159GB), run=60035-60035msec

Size - 512MB

Run status group 0 (all jobs):
READ: bw=1075MiB/s (1127MB/s), 1075MiB/s-1075MiB/s (1127MB/s-1127MB/s), io=63.5GiB (68.1GB), run=60455-60455msec
WRITE: bw=1076MiB/s (1129MB/s), 1076MiB/s-1076MiB/s (1129MB/s-1129MB/s), io=63.5GiB (68.2GB), run=60455-60455msec

Size - 1GB

Run status group 0 (all jobs):
READ: bw=174MiB/s (182MB/s), 174MiB/s-174MiB/s (182MB/s-182MB/s), io=10.4GiB (11.2GB), run=61306-61306msec
WRITE: bw=174MiB/s (182MB/s), 174MiB/s-174MiB/s (182MB/s-182MB/s), io=10.4GiB (11.2GB), run=61306-61306msec

Now, for kicks, let's see what a stripe can do at the smallest size setting.
Size - 128MB

Run status group 0 (all jobs):
READ: bw=7034MiB/s (7376MB/s), 7034MiB/s-7034MiB/s (7376MB/s-7376MB/s), io=413GiB (443GB), run=60056-60056msec
WRITE: bw=7041MiB/s (7383MB/s), 7041MiB/s-7041MiB/s (7383MB/s-7383MB/s), io=413GiB (443GB), run=60056-60056msec
MACH 7 - MAXED out. HA HA!

Now, let me clarify: I chose to run multiple read/write tests at different sizes to get as close to full potential as possible. Yes, I could tweak some settings to get more precise workload measurements, but I will save that for when I add NVMe. Also, I will NEVER utilize all the bandwidth these drives provide, and I am surprised that these cheap PNY SSDs can stand up to these tests. This pool will house my Plex VM, all of my personal shared files, and Proxmox backups, i.e., mostly READs.

Now it's time for me to pull the trigger and order all the parts for the full NAS build. I have a few new ideas to add further down the line, but first I need to order it. =)

While running the test, you should take a look at pool performance using zpool iostat -v to get a real picture of what the SSDs are doing.
The big drop-off from 128MB to 256MB suggests that ZFS caching played a major part in the test.
A raidz2 setup implies you are using at least 4 SSDs, and raidz2 performance of 174MiB/s read / 174MiB/s write does not sound like the best the hardware can do.
ZFS requires skillful tuning with all-SSD pools at the moment; otherwise the results are typically underwhelming and unpredictable.
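For example, in a second shell while fio is running (substitute your pool name for tank):

zpool iostat -v tank 1

The -v flag breaks the numbers down per vdev and per disk, and the trailing 1 refreshes every second, so you can see whether all the disks are actually being hit.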

You have a good point. After going back to read what you and NickKF were troubleshooting, there are a few issues with the test rig that are similar to your findings. X99 with a 10-core Xeon running 2166MHz memory might also be a contributing factor… And yes, the SSDs are NOT top of the line, but I took a chance to see what a "cost-effective" SSD could do. Now I am thinking I should not mix cheap SSDs with any other SSDs I purchase. But then again, this is one tier in a three-tier NAS I am building: NVMe, SSD, and SAS rust.

Once I get the new NAS built, I should see a difference in baseline performance. Then we can start diving into the guts. I greatly appreciate your input.

Oh! And one small side note I learned: the HBA needs adequate airflow. I noticed the HBA started to flake out with no airflow over it. Using a 120mm fan to blow air directly onto the heatsink stabilized the card. Note to self: double-check my NAS build to make sure there is enough airflow.

Yes, cooling is more important for enterprise-class gear than for consumer-grade items.

I caution that the ZFS performance is not (solely) related to your tested mix of devices; it is inherent to the design of ZFS. ZFS exploits the fact that HDDs are so slow that modern CPUs can perform a lot of tasks while waiting for the HDDs to respond (in relative terms).
This performance gap has closed so much with the advent of NVMe SSDs that many features no longer quite work as designed. An obvious example is in-line compression, but the complex caching mechanism and the inherent "write amplification" of the ZFS design also factor heavily.
At this stage I look at ZFS as a premium file system for HDDs as the primary storage medium, with SSD support to improve performance for specific use cases.

I plan on conducting my own ZFS tuning exercise on all-NVMe ZFS pools and will make sure to document the findings here on Level1.

Update #5

Well, this update isn't going to be exciting, due to the fact that I keep getting parts that are failing or DOA. The one major problem I ran into is the AMD Starship/Matisse GPP Bridge downgrading links on various devices. I originally thought it was one of the PCIe cards, but with all the cards removed, including the NVMe drives, I was still seeing downgraded links. I tried different BIOS and firmware versions, but in the end the mobo will be RMA'd per the vendor's request.

So I tested three different pool designs just to see if there is a difference in performance: one RAIDZ1 vdev, 2 RAIDZ1 vdevs in one pool, and 2 separate RAIDZ1 pools. I only did a few random tests to get an idea of base numbers. There is a slight performance difference, but nothing too drastic. So it raises the question of which design is best for which use case.
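For clarity, here is roughly what those three layouts look like as zpool commands (hypothetical pool names and disk labels; TrueNAS actually builds these through the UI):

# Layout 1: one pool, a single 16-drive RAIDZ1 vdev
zpool create tank1 raidz1 sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl sdm sdn sdo sdp

# Layout 2: one pool, two 8-drive RAIDZ1 vdevs (ZFS stripes writes across the vdevs)
zpool create tank2 raidz1 sda sdb sdc sdd sde sdf sdg sdh raidz1 sdi sdj sdk sdl sdm sdn sdo sdp

# Layout 3: two separate pools, one 8-drive RAIDZ1 vdev each
zpool create poolz1-1 raidz1 sda sdb sdc sdd sde sdf sdg sdh
zpool create poolz1-2 raidz1 sdi sdj sdk sdl sdm sdn sdo sdp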

Test numbers for 2x RAIDZ1 pools. The reason for this test: to see if the two different SSD vendors perform differently.
12x2 jobs, randrw, 256M size - both tests run in parallel to hit both pools at the same time.

PoolZ1-1(Crucial)

READ: bw=3441MiB/s (3608MB/s), 3441MiB/s-3441MiB/s (3608MB/s- 3608MB/s), io=202GiB (217GB), run=60015-60015msec
WRITE: bw=3447MiB/s (3614MB/s), 3447MiB/s-3447MiB/s (3614MB/s-3614MB/s), io=202GiB (217GB), run=60015-60015msec

PoolZ1-2(PNY)

READ: bw=3206MiB/s (3362MB/s), 3206MiB/s-3206MiB/s (3362MB/s-3362MB/s), io=188GiB (202GB), run=60057-60057msec
WRITE: bw=3211MiB/s (3367MB/s), 3211MiB/s-3211MiB/s (3367MB/s-3367MB/s), io=188GiB (202GB), run=60057-60057msec

12x2 jobs, randread, 256M size

PoolZ1-1(Crucial)

READ: bw=13.9GiB/s (14.9GB/s), 13.9GiB/s-13.9GiB/s (14.9GB/s-14.9GB/s), io=832GiB (893GB), run=60011-60011msec

PoolZ1-2(PNY)

READ: bw=13.9GiB/s (14.0GB/s), 13.9GiB/s-13.9GiB/s (14.0GB/s-14.0GB/s), io=836GiB (898GB), run=60005-60005msec

2 vdevs in one RAIDZ1 pool - 8 drives per vdev

12 jobs, randrw, 256M size

READ: bw=6756MiB/s (7084MB/s), 6756MiB/s-6756MiB/s (7084MB/s-7084MB/s), io=396GiB (425GB), run=60014-60014msec
WRITE: bw=6764MiB/s (7093MB/s), 6764MiB/s-6764MiB/s (7093MB/s-7093MB/s), io=396GiB (426GB), run=60014-60014msec

12 jobs, randread, 256M size

READ: bw=30.8GiB/s (33.1GB/s), 30.8GiB/s-30.8GiB/s (33.1GB/s-33.1GB/s), io=1848GiB (1984GB), run=60004-60004msec

12 jobs, randwrite, 256M size

WRITE: bw=5199MiB/s (5452MB/s), 5199MiB/s-5199MiB/s (5452MB/s-5452MB/s), io=305GiB (327GB), run=60020-60020msec

1 vdev in a RAIDZ1 pool - 16 drives in one pool. This test had some interesting results, which makes me think that because the two vendors have different performance and response times, the results bounced all over the place.

12 jobs, randrw, 256M size - run 1

READ: bw=2404MiB/s (2521MB/s), 2404MiB/s-2404MiB/s (2521MB/s-2521MB/s), io=142GiB (152GB), run=60319-60319msec
WRITE: bw=2409MiB/s (2526MB/s), 2409MiB/s-2409MiB/s (2526MB/s-2526MB/s), io=142GiB (152GB), run=60319-60319msec

12 jobs, randrw, 256M size - run 2

READ: bw=1671MiB/s (1752MB/s), 1671MiB/s-1671MiB/s (1752MB/s-1752MB/s), io=98.2GiB (105GB), run=60166-60166msec
WRITE: bw=1671MiB/s (1752MB/s), 1671MiB/s-1671MiB/s (1752MB/s-1752MB/s), io=98.2GiB (105GB), run=60166-60166msec

12 jobs, randrw, 256M size - run 3

READ: bw=1824MiB/s (1913MB/s), 1824MiB/s-1824MiB/s (1913MB/s-1913MB/s), io=107GiB (115GB), run=60017-60017msec
WRITE: bw=1824MiB/s (1913MB/s), 1824MiB/s-1824MiB/s (1913MB/s-1913MB/s), io=107GiB (115GB), run=60017-60017msec

12 jobs, randread, 256M size

READ: bw=30.7GiB/s (32.0GB/s), 30.7GiB/s-30.7GiB/s (32.0GB/s-32.0GB/s), io=1843GiB (1979GB), run=60004-60004msec

12 jobs, randwrite, 256M size - run 1

WRITE: bw=2525MiB/s (2648MB/s), 2525MiB/s-2525MiB/s (2648MB/s-2648MB/s), io=148GiB (159GB), run=60055-60055msec

12 jobs, randwrite, 256M size - run 2

WRITE: bw=2487MiB/s (2607MB/s), 2487MiB/s-2487MiB/s (2607MB/s-2607MB/s), io=146GiB (156GB), run=60021-60021msec

What did we learn? Stick to one vendor if you want to create a single pool of SSDs. BUT the reason I bought from two different vendors was to see if there would be a problem, and from the few tests of the 16-drive pool, there SEEMS to be an issue with mixing vendors. In the 2-vdev pool, though, it seems to balance out a little better than in one big vdev. The two-pool test produced very stable numbers and the least fluctuation between runs. There are tons more tests I would like to do, but I have to disassemble the rig and ship the mobo in for testing/replacement.

This is quite the first thread. I am mainly commenting so that I can read about the progress.

Nice! Did you follow some tutorials or documentation?

I run TrueNAS Scale; before that I was on TrueNAS Core, and FreeNAS before that, for many years. I have multiple TrueNAS machines doing synchronization and backups utilizing ZFS sync and encryption, so I have some experience with ZFS and all that. But TrueNAS Scale has so far been a little disappointing for me - I was expecting better virtualization support. It is KVM, so it can run VMs, but the management support is quite poor, for example no migration or similar features. Not to mention their problems with networking between the host and VMs.

For this reason I am considering a Proxmox cluster, and I am looking at the Proxmox+Ceph cluster option… but it looks a bit scary? :slight_smile: How complicated is it really?

Update #6
Welp. I got the "new" motherboard in from ASRock aaaaaand… don't buy the ASRock EPYCD8-2T. It looked really good on paper, but this board has been nothing but problems. Even after they swapped the original board for this one, I am still having the same issues (Critical Interrupt #0xfe Asserted, Bus Degraded), and now I have a thread pegged at 100%. I have re-installed TrueNAS Scale and it still persists.

Problem #1

c0:03.4 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge (prog-if 00 [Normal decode])
LnkSta: Speed 5GT/s (downgraded), Width x1 (downgraded)
40:03.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge (prog-if 00 [Normal decode])
LnkSta: Speed 8GT/s (ok), Width x4 (downgraded)
40:03.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge (prog-if 00 [Normal decode])
LnkSta: Speed 5GT/s (downgraded), Width x8 (ok)
45:00.0 Fibre Channel: QLogic Corp. ISP8324-based 16Gb Fibre Channel to PCI Express Adapter (rev 02)
LnkSta: Speed 8GT/s (ok), Width x4 (downgraded)
45:00.1 Fibre Channel: QLogic Corp. ISP8324-based 16Gb Fibre Channel to PCI Express Adapter (rev 02)
LnkSta: Speed 8GT/s (ok), Width x4 (downgraded)
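(For anyone wanting the same readout, this kind of listing can be pulled in one shot; standard lspci from pciutils, and the grep pattern is just an example:)

lspci -vv | grep -E 'GPP Bridge|Fibre Channel|LnkSta:'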

Problem #2

Did notice that they put a beta BIOS on the new board, which makes me wonder if they still have bugs to work out.

Sadly, I am extremely disappointed in this brand, when so many people talked so highly of this vendor. I will have to stick to well-established vendors like Supermicro. I have worked with Supermicro for 10+ years and NEVER had problems like this.

If anyone has any ideas, they would be greatly appreciated. I am currently engaged with ASRock support in hopes they can help.

Update #7

The good news is that TrueNAS Core does NOT have the runaway-thread issue you saw above with TrueNAS Scale. I re-downloaded TrueNAS Scale, ran through the install, and the runaway thread is still there. I know Scale is still new, but why is the thread running at 100%? Scale would be nice to have for its advanced abilities, but it is not a must. Anyone know why I have a thread at 100%? Going to install CentOS 8 and Ubuntu 22 to make sure I am not missing anything.
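For anyone chasing a pegged thread the same way, a couple of generic Linux commands I lean on (nothing TrueNAS-specific):

top -H                                           # show individual threads instead of processes
ps -eLo pid,tid,pcpu,comm --sort=-pcpu | head    # top thread-level CPU consumers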

Now to an even more interesting and confusing experience: the block diagram vaguely (it is very blurry) shows that the NVMe, OCuLink, and mini-SAS HD connectors have their own lanes. That is not how it behaves when I start moving parts in and out of the system. When I plug in an NVMe drive, a link "downgrades" from x16 to x4. Plugging in another NVMe causes a different x16 link to downgrade to x4. Seeing that NVMe drives are nowhere near needing x16 makes me think they share lanes with other devices, possibly the OCuLink and mini-SAS HD ports? This will be verified when I get a U.2 drive for testing. There are two other "downgraded" links that I am having trouble tracing back, but they do not seem to be causing issues as of now.

This board is one of the most unusual boards I have worked with. The documentation seems lackluster, and ASRock support hasn't been much help, seeing as they first told me to upgrade my BIOS, then asked me to send the board in for testing, only for me to get the same board back with a beta BIOS. Sadly, this experience has left a very bitter taste in my mouth. I will think long and hard before purchasing this brand again.

In the meantime, I am going to test with CentOS and Ubuntu to make sure I am not missing anything, and then bug support again to explain what the other two downgraded links are. Hopefully I will have an operational NAS soon. =/

My apologies, I thought I had answered your question but can't find it anywhere. So, to answer it: Ceph with Proxmox has gotten much easier to implement. Originally I had to do everything from the CLI, but now you can set up pretty much everything from the GUI. When you set it up from the GUI, though, you get just a base config and settings; if you want some tunability, you will want to do it from the CLI. For beginners, the GUI is simple and effective. Don't be afraid to get your feet wet and make mistakes along the way. Lord knows how many times I screwed up and pulled my hair out trying to back out of a mistake instead of just starting over.
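For a flavor of the CLI side, the basic bring-up on one node looks something like this (pveceph is Proxmox's Ceph wrapper; the subnet is an example, and subcommand names have shifted a bit between Proxmox versions):

pveceph install                           # install the Ceph packages on the node
pveceph init --network 192.168.10.0/24    # write the initial config, pointed at the cluster subnet
pveceph mon create                        # create a monitor on this node
pveceph osd create /dev/sdb               # turn a blank disk into an OSD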

One thing I would suggest, IF you can afford it: make sure you have a 10GbE network for the cluster/migration network, or Ceph will constantly complain about being behind on sync operations.
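As a rough illustration of what that separation looks like in the config Proxmox writes to /etc/pve/ceph.conf (subnets are examples):

[global]
    public_network = 192.168.20.0/24     # client-facing traffic
    cluster_network = 192.168.10.0/24    # OSD replication/heartbeat on the fast network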


Just realized that I didn't share the build changes. Below are the parts that were purchased.

ASRock EPYCD8-2T
EPYC 7302P, 16-core, 3.1GHz
Crucial 256GB DDR4 3200
2x Icy Dock 8x 2.5" bay
8x PNY CS900
8x Crucial MX500
Silicom PE310G4I71LBEU-XR-LP (Intel X710)
QLogic 16Gb FC HBA
LSI 9400-16i HBA in IT mode (internal connection to SSDs)

Just wanted to share an interesting issue I have run across on the ASRock EPYCD8-2T motherboard. Slot 2 looks to have a problem: I have swapped through 3 PCIe cards (a 10GbE Mellanox, a QLogic QLE2672, and an LSI 9300-8i), and all of them caused a single thread to max out. I even forced the PCIe generation in the BIOS, which did absolutely nothing. So I have engaged ASRock support to find out what is going on with slot 2. I used the same three cards in all the other slots with no maxed thread. Hm…