Does the Intel X540-T2 work with FreeNAS 11.1?

Freqlabs

The FreeNAS forum actually had me run that, and the highest speed I was seeing was 365. So it's something with the settings of that card. They also sent me a list of tunables with numbers to try. I spent about an hour playing with those numbers and every change made it worse. :confused:

Another guy on the forum suggested I try the other slot, so when I get home tonight I will try that. He thinks that even though HP says the x8 slot is wired as such, it may just be a low-end x8 and the bandwidth may not be there. The only reason that makes sense is that the ASUS card is an x4 and it worked perfectly. This card is an x8, so it may just need more than that x8 slot can put out.

Tonight I will hopefully have better news after I make this change.

Fmissio95

This server didn't come with the HP disk that everyone talks about using, and I have yet to find an ISO image that I can put on a flash drive. But FreeNAS sees all the drives and writes to them without a problem.

When you swap the card, take a photo of the motherboard, the PCIe slots with the network card and the RAID card, and post it here so we can look at what RAID controller it uses and maybe get some other clues.

Also, you can follow the instructions here to run dd, a nice utility that, among other things, lets you check the real performance of your ZFS pool.

Follow the part where it says:


To use dd, here is an example…

  1. Create a dataset which has compression turned off. This is important because compression will give you a false reading.
  2. Open up a shell window.
  3. Type “dd if=/dev/zero of=/mnt/pool/dataset/test.dat bs=2048k count=10000”
  4. Note the results.
  5. Type “dd of=/dev/null if=/mnt/pool/dataset/test.dat bs=2048k count=10000”
  6. Note the results.
  7. Lastly, clean up your mess: “rm /mnt/pool/dataset/test.dat” deletes the file you just created.

Note: /mnt/pool/dataset will depend on your specific pool name and dataset name.
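
Put together, the whole thing looks something like this from the FreeNAS shell (a minimal sketch; "pool" and "ddtest" are placeholder pool and dataset names, swap in your own):

# create a scratch dataset with compression off, so compression can't skew the numbers
zfs create -o compression=off pool/ddtest

# write test: stream ~20 GB of zeroes into the dataset and note the speed dd reports
dd if=/dev/zero of=/mnt/pool/ddtest/test.dat bs=2048k count=10000

# read test: read the same file back and note the speed again
dd of=/dev/null if=/mnt/pool/ddtest/test.dat bs=2048k count=10000

# clean up the test file and the scratch dataset
rm /mnt/pool/ddtest/test.dat
zfs destroy pool/ddtest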


One run checks the writes (the /dev/zero one) and the other the reads; the results are a good indication of the raw performance of the disks working together.

FreeNAS also ships with fio installed, if you feel like sinking hours (or days) into detailed storage performance measurements and analysis :slight_smile:
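
If you do go down the fio road, a rough starting sketch (hypothetical job names, reusing the same scratch dataset as the dd test; every parameter here is just a reasonable default to tweak):

# sequential write, then sequential read, each on a 4 GiB test file
fio --name=seqwrite --directory=/mnt/pool/ddtest --rw=write --bs=1m --size=4g
fio --name=seqread --directory=/mnt/pool/ddtest --rw=read --bs=1m --size=4g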

Link to the FreeNAS forum thread?

The forum thread is here.

Hours, maybe days? The wife won't give me time for that. lol.

Fmissio95

I'll look into that. Do you use a zvol? I currently have all 6 SSDs in a striped set with all my data on it, and my backup NAS is super slow, only gig connections, so it's like 2 days to transfer everything back, and I don't want to go through that again.

Looking into building out half of a 12-bay 4U hot-swap tower with an old AMD FX-8350 for now (because I have the parts not doing anything), so I can run six 4 TB drives in a RAIDZ and put a 10 gig NIC in it so going back and forth won't take days on end.

Blagh, I need to register on their forum to see your uploads. Never mind!

Try looping the card back on itself, set up two manually assigned addresses on the ports, and see if iperf3 still sucks. That way we can be sure the problem is somewhere in that box before messing with settings. And speaking of settings, set the tunables back to their original values if you haven't.

Don't worry, you don't need to delete all the stuff from your ZFS pool to do the dd test. You just have to create a dataset in your current volume, which is almost equivalent to a folder; dd creates the test file test.dat there (a file full of zeroes) and then reads back from it. When you finish, you just delete the file and the dataset as if it were a folder, and you're done.

That could be a nice backup solution. The FX-8350 is a bit power hungry, but it'll do the job just fine. You could also use RAIDZ2 for that if you don't need all the space, so you have a bit more redundancy. 10 gig is definitely worth it even if you can't saturate it with the HDDs; it's a big step up from just gigabit anyway, and more so because you already have a 10G switch.

Another thing that could matter is the cable: if your run is less than 30 meters, Cat6 is sufficient; if it's more, you have to use Cat6A.

Freqlabs

Definitely worth the join.

I'll try that tonight when I get home and see what I get. Yes, I reset all the tunables back to their original settings.

Fmissio95

Great, I definitely don't want to go through that again. And I wouldn't be running that test. lol.

Ya, it's a little power hungry, but if I underclock and undervolt it a little, it should help. I have it in an ASUS Sabertooth board with 32 gigs of RAM, so that should cover me for now. Probably in a year or 2 I'll find an older Xeon and board that I can put 64 gigs in. By then I'll need to populate the other 6 bays, and then I could do a mirror of the RAIDZ, basically turning it into a RAIDZ2 with ~40 TB of long-term backup storage. At least from what I've read, that is what I can do.


Yeah, believe me, I know the hassle :laughing:

Or you could just disable half the cores. I think that would be better, as 10 gig likes high clock speeds, and being just a backup machine it doesn't need 8 cores unless you wanna do something fancy like virtualization. 32 gigs of RAM is more than enough. :smile:

As for the RAIDZ and mirroring, I think what you are looking for is RAIDZ+0, which would be the equivalent of RAID 50 on a hardware RAID card. In that case you can lose up to 1 drive per vdev, because RAIDZ1 needs a minimum of 3 disks and can lose one at a time (the equivalent of RAID 5).

RAIDZ2 requires at least 4 drives, and with all the parity thingies it does, you can lose up to 2 disks at a time (like RAID 6). So as I understand it, you can also do RAIDZ2+0, where you can lose up to 2 drives per vdev, and that would be the equivalent of RAID 60 on a hardware RAID card.
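
To make the layouts concrete, here's roughly what creating those pools would look like (a sketch with hypothetical disk names da0-da7 and a made-up pool name):

# "RAIDZ+0": two RAIDZ1 vdevs striped together; survives 1 failed disk per vdev
zpool create backup raidz da0 da1 da2 raidz da3 da4 da5

# "RAIDZ2+0": two RAIDZ2 vdevs striped together; survives 2 failed disks per vdev
zpool create backup raidz2 da0 da1 da2 da3 raidz2 da4 da5 da6 da7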

I just tested this and hilarity ensued. There has to be some more thought put into this I think :stuck_out_tongue:

Here is a test that actually worked:

Just got home, actually looking at how to run that now.

Tried tweaking Windows; it helped a little on the reads, I'm up to 400, but the writes are now back down to 250. I switched my card from the x8 slot to the x16 slot. No change.

Below are pictures of everything.

For iperf3, I need to run that from the FreeNAS shell, correct?

It won't let me set up the other connection port on the same network.
This is my iperf test from my Windows workstation to my FreeNAS box:
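
For reference, the basic two-machine run is just this (placeholder address; substitute the FreeNAS box's real 10G IP):

# on the FreeNAS box: run the server
iperf3 -s

# on the Windows workstation: run the client against the FreeNAS 10G address
iperf3 -c <freenas-10g-ip>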

I'm at a loss. It just sucks: I know this server could get full 10 gig speed when I had Win 10 Pro on it while I was just testing it out.

Unconfigure the 10G ports in FreeNAS. Plug the same cable into both 10G ports on the NIC. Assuming you have a separate management interface, open two SSH connections to FreeNAS; otherwise you can do it on the console with tmux. Run the commands I showed in my last screenshot (you won't need doas if you're root, or use sudo instead of doas if you're not root).

The commands run iperf3 in jails on the same host, using vnet to give each jail a separate TCP/IP stack. This makes sure the traffic goes through the card instead of being internally forwarded.

Here are the commands. Replace mlxen0/mlxen2 with your interface names, and run the first two commands in different shells (the first command will stay running in the foreground):

# start the server in one jail
jail -c name=iperfS \
    vnet vnet.interface=mlxen0 \
    exec.start="ifconfig mlxen0 10.1.0.1/30 up" \
    command=iperf3 -s -B 10.1.0.1

# start the client in a different jail
jail -c name=iperfC \
    vnet vnet.interface=mlxen2 \
    exec.start="ifconfig mlxen2 10.1.0.2/30 up" \
    command=iperf3 -c 10.1.0.1

# when the test is done, remove the server jail
jail -r iperfS

Freqlabs

Thank you for those directions; I couldn't find any info last night on how to go about doing this.

The FreeNAS guys said to try running a live version of Linux or BSD and see what transfer speeds are like. So I will try both of these tonight.

Problem solving is soo much fun. lol


Don't get discouraged; one thing at a time, everything can be fixed :grin:

First the 10G network with iperf, then the RAID stuff.

I have looked at the photos and checked some reference material from HP, and for the PCIe slots there is no problem: the x8 slot is wired full x8, so no bottleneck there, and the x16 is full x16, so the 10G card can go in either of them.

The RAID controller seems to be the HP Smart Array P410i, and the additional board should be the cache module that enables fancy features like RAID 5, 10, and 50; in theory it is connected with that wire to the dedicated battery backup for the RAID card.
As I understand it, the P410i apparently does not support AHCI / IT mode, but I could be wrong and there may be some tools to enable it; we'll see about that later.
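
If you want to double-check what FreeBSD itself sees, something along these lines from the FreeNAS shell should work (standard FreeBSD tools, nothing HP-specific):

# find the RAID controller among the PCI devices
pciconf -lv | grep -B3 -i raid

# list the disks and volumes the OS can see through it
camcontrol devlist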

After you do the dd test we'll know more :smile:

Yes it can, and I won't feel like I'm smashing my head against a wall either. :grinning: lol.

Yes, yes, 10G first, then I'll start looking into the RAID controller.


Well, I ran iperf in the shell. Glad this thing has four 1 gig ports on the board. My numbers definitely don't look good.

iperfS