Does the Intel X540-T2 work with FreeNAS 11.1?

I've been doing a lot of reading, hoping to come across a post saying someone got an ASUS XG-C100C 10-gig card working in their NAS, but I have not. So I started looking for a new 10-gig card that would work with my NAS box, since the ASUS I had from before will no longer work with it. I came across a post saying this card is good for FreeNAS, but I've also read mixed reports about it not working in some builds. Before I drop the $225 on Amazon for this, I want to make sure it will work with FreeNAS 11.1. I was looking at this one: https://www.amazon.com/gp/product/B0077CS9UM/ref=ox_sc_act_title_1?smid=A1AIQD4V579J24&psc=1

As far as I know, the Aquantia AQC107 chip used on the ASUS XG-C100C is not yet supported by FreeNAS (on Windows and Linux it works perfectly). I read a while ago that someone is working on drivers, but nothing is coming soon.

The X540-T1 and -T2 and the newer X550-T1 and -T2 are very well supported by FreeNAS and give very good performance.

Before buying on Amazon you should look on eBay for an X550; you can find them from China for around $170, and there seems to be a good number of OEM versions available. You have to wait a couple of weeks, but I think it's worth it.

As I understand it, the X540 and X550 have the same performance, but the X540, being an older generation, is PCIe 2.0 x8 and produces quite a lot of heat, while the newer X550 uses PCIe 3.0 x4 and produces less heat. Still, good airflow is recommended for both cards.
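
If you want to double-check that FreeNAS actually picked the card up once it's installed, here's a rough sketch from the shell. The X540/X550 attach via the ixgbe driver, so the ports show up as ix0, ix1, and so on:

```
# driver attach messages for the Intel ports
dmesg | grep -i '^ix'
# PCI listing with vendor strings, to confirm the controller is detected
pciconf -lv | grep -B 4 ethernet
```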

I had come across that too, but I don't think I could wait that long, when I could always use this card in another tower, or as a second card in my main tower, which already has one of those in it.

Thanks for the info. I am doing a lot of transfers to get everything back onto this NAS, and I've had to transfer only a little at a time, otherwise it comes to a standstill. So I don't know if I can wait a few weeks.

This card is going in an HP ProLiant DL360 G7 8-port. I don't think that has PCIe 3.0, so I'm not sure the X550 is worth getting when it costs more, I really won't see the extra speed, and the fans keep that unit very cool anyway.

Yes, it is a bit frustrating to wait weeks for a thing you need right now. I'm upgrading my HP MicroServer Gen8 NAS and my main workstation to 10G too, and it's been two weeks now that I've been waiting for an X550 to arrive from China.
I'm in no hurry, but the less time to wait the better.

The ProLiant DL360 G7 still uses PCIe 2.0, so I guess you're good with the X540, and it definitely has the airflow to keep it cool without problems.

Yeah, the last time I ordered something from China it took six weeks to get to me, but two Xeon E5-2683 v3s for $300 were worth the wait. I put them in my video editing rig and they are massive workhorses.

I ordered this card first thing this morning; I'll post an update when I get it installed. I'm currently running four 2 TB SSDs in that rig, and I ordered two more when I ordered the card. So I'm hoping to see the speed max out.

Two 14-core Xeons for $300 was a really good price! Plus the 35 MB of cache, I bet they are workhorses! And you get a ton of PCIe 3.0 lanes too.

Yeah, I too have had to wait a month or longer for stuff from China, but if it is a good deal like those Xeons, it's more than worth the wait.

If the CPUs in the server have high clock speeds, I'd guesstimate you could already saturate 10G with the four SSDs, even without using jumbo frames.
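
Back-of-the-envelope: 10GbE tops out at 1.25 GB/s raw, so call it roughly 1.1-1.2 GB/s of actual payload after protocol overhead, while four SATA SSDs striped at ~450-500 MB/s each lands in the 1.8-2 GB/s range for sequential work, so the pool should be able to outrun the wire with room to spare.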

Yeah, I got lucky on those 14-core Xeons. Only 2.0 GHz, but 56 threads are nice when transcoding video. Now I can't find anything like that for the CPUs I have.

I was just going to get two more of those and an ASUS Z10PE-D8 WS, put it all in a 4U case, and build my own hot-swap server. But all the cheap CPUs from China disappeared, so I ended up getting this server used/rebuilt from Server Monkey, for a really good price.

The server has two 3.46 GHz hex-core Xeon X5690s with 12 MB cache, 64 GB of ECC RAM, and those drives. I had put Windows 10 on this machine with the ASUS card in it, and I was getting transfer speeds of 600 MB/s to a full gig, depending on whether I was going SSD-to-server or RAM-disk-to-server. So I know my computers will saturate a 10-gig card.

I was wondering if I should link-aggregate this card on my server to my network, or run one port to my network and one port direct to my workstation. I now have two ASUS 10-gig cards, and the leftover one will fit in my workstation, so I could run my workstation with one port to my network and one to my server.

Yeah, those kinds of deals disappear as fast as they appear; you'll have to wait until the next datacenter upgrade (I guess all these nice parts we find come from datacenters or big companies upgrading their infrastructure).

The server is quite a powerhouse too; you have quite some processing power to spare, which you could use to run some VMs or something. The new version of FreeNAS has that capability built in; I believe it's still in beta, but it's worth a try.
Just remember that FreeNAS with ZFS doesn't like RAID controllers; ZFS wants direct access to the drives. So if the server has one of those fancy HP Smart Arrays, it's better to put it in IT mode, or flash it like you can with the LSI cards.
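
A quick way to check what ZFS is actually seeing, from the FreeNAS shell (a sketch; the device names are just examples):

```
# if ZFS has direct access, each physical disk is listed individually
# (da0, da1, ... or ada0, ...) with its real make and model
camcontrol devlist
# if you instead see one big "LOGICAL VOLUME" style device from the
# Smart Array, the controller is still doing hardware RAID in front of ZFS
```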

As for the link aggregation, the Intel card supports it perfectly well; with the ASUS cards and their Aquantia chip I'm not so sure. If you just want the server to have 20G so it can handle multiple clients at once, I guess you could go with it, but if your switch has enough 10G ports I think it would be better to connect everything to the switch: easier to manage and fewer cables to run.
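
For reference, on the FreeNAS side a LACP aggregate boils down to something like this (a minimal sketch; in 11.1 you'd normally set it up in the GUI under Network > Link Aggregations so it survives reboots, and the ix0/ix1 names and the address are assumptions):

```
# create an LACP lagg over the two Intel ports (interface names assumed)
ifconfig lagg0 create
ifconfig lagg0 up laggproto lacp laggport ix0 laggport ix1
# the address goes on the aggregate, not on the member ports (example address)
ifconfig lagg0 inet 192.168.1.50/24
```

Two things to keep in mind: the switch ports have to be set to actual LACP, not a static trunk, or the link will misbehave; and LACP hashes each flow onto a single member, so one transfer still tops out at 10G, the aggregate only helps with multiple clients at once.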

Yeah, they do. Definitely datacenter upgrades; most people aren't normally trying to sell ten pairs of those CPUs.

Mostly that server is for Plex and fast access to my files around the house. It has a nightly backup to a 16 TB WD NAS in RAID 5. But 11.1 does have VM capability, so I tossed Ubuntu GNOME 16.04 on there the other night just to see how the VM system worked.

Yeah, I separated all the drives out into individual drives in the RAID setup. Right now I have it just striping the drives. I'll probably order the last four 2 TB SSDs, install them, and try a RAIDZ2 or RAIDZ1, just so I'll be able to swap a drive if one dies without having to rebuild everything.
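
From what I've read, ZFS can't convert a striped pool to RAIDZ in place, so I'd have to back everything up, destroy the pool, and recreate it, roughly like this from the shell (though FreeNAS normally does this from the GUI with gptid labels; "tank" and da0..da7 are placeholder names):

```
# after backing up: destroy the old striped pool and rebuild as raidz2
zpool destroy tank
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7
# confirm the new layout
zpool status tank
```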

Yeah, I have a Buffalo 12-port 10-gigabit switch (BS-XP2012) as my main switch, and a couple of 8-port 1-gig switches. I could move some things around and run another line from my workstation to the switch. It sits in the same rack as my server, so I have plenty of room there.


So I got the Intel X540-T2 and it works great. I set it up as a link aggregation pair, and I'm thinking my Buffalo switch doesn't like it too much, because the max transfer rates I'm getting are 280 MB/s. I set up port trunking on those ports on the Buffalo switch. I should probably just go back to a single line to the NIC and see what I get.

When I had Win10 on this server while testing it out, I had the ASUS XG-C100C in it and was getting full-gig writes when using a RAM disk on the sending machine. So I know I should be able to get more than I am, but I was also only using one port, not trying to use two.

I'm glad the X540-T2 arrived. My X550-T2 finally arrived today from China, and I'll start doing some testing as soon as I can.

As for the link aggregation, it can sometimes be a bit fiddly; on some Netgear switches I've used, I had to play around a bit to get the LAG working smoothly.
On the Buffalo switch, if I remember correctly, the LAG section just lets you choose the ports you want to aggregate, and that's it. Maybe you have to assign certain priorities in the Traffic Control section.

Anyway, as you said, I think it's better to first try a single port, see if it saturates 10G, and then do the LAG settings and see what affects what.

You're correct, the Buffalo switch does have you assign the ports. I have the NIC on ports 10 and 11.

Last night I was at a Tenacious D concert with my buddy, so I didn't have time to change anything over.

Tonight I'm going to switch it back to one port and see if that saturates the 10G connection. If one port does, I will probably just leave it that way for now.


I'm not sure if you have managed to saturate a 10-gig network yet, but after going back to a single line I see no difference. I seem to have maxed out at 300 MB/s now. Not really sure what is going on.

For now I just have three WD Reds on the NAS and a SATA SSD on my workstation, so the best I can do is 560 MB/s when the server's RAM is acting as cache, and 250-260 MB/s when the Reds are actually in use.

Have you completely reset the two interfaces on FreeNAS? It sometimes gets pesky when you change network parameters; when I installed the X550 I had to manually reconfigure the 10G port and the onboard 1G and reboot, for whatever reason.
Also delete all the LAG configuration on the switch and reboot that too, you never know... and maybe try a different port and see if anything changes.

Being in a rack server, I doubt the card is thermal throttling. Are you sure the slot you put it in is wired for the full x8 and not x4?

As far as I know it is an x8 slot, but I will have to recheck.

I have restarted the machine twice since I removed all the interfaces and went back to only one. With the ASUS card I was getting consistent 600+ MB/s writes and reads to the SSDs, but that was when the server had Win 10 Pro on it.

Sometimes they make the slot physically x8 but wire it electrically as only x4. I doubt that's the case here, but checking doesn't hurt.
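
You can check the negotiated link from the FreeNAS shell without opening the case (a sketch; the exact wording of the capability line varies a bit between FreeBSD versions):

```
# list the PCI capabilities for the first Intel port; the PCI-Express
# line shows the negotiated width, e.g. "link x8(x8)" means the card
# came up at x8 in a slot that offers x8
pciconf -lc ix0
```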

ZFS carries a bit of a performance penalty because of all its safety checks and redundancy, but even so it's strange that you are getting only 300 MB/s with the SSDs when I get 260 with spinning rust... it could also be that FreeNAS/ZFS is not happy with the server's RAID controller.

According to HP it is in an x8 slot wired for x8; the other slot is an x16 wired for x16, so I'm good there.

That is possible. I posed the question to the FreeNAS forum and they have asked for some info; hopefully they will be able to make heads or tails of it. This array is striped, so I'm not sure it's ZFS itself, but it might be. It wouldn't surprise me if FreeNAS didn't like the RAID controller.

Good news on the PCIe slot, then.

It could very well be that. On my MicroServer Gen8 I had to set the HP Smart Array controller to AHCI mode for FreeNAS to like it; maybe yours wants something similar. If I remember right, the controller is integrated into the motherboard but is still its own thing; if it is based on an LSI chip you could flash it to IT mode so it behaves just like a plain SATA controller.
But better to wait and see what the FreeNAS guys say :smile:

You could try running iperf3 to isolate the network performance from the filesystem performance.
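
Something like this (assuming the workstation also has iperf3 installed; the address is just an example):

```
# on the FreeNAS box: run the server side
iperf3 -s
# on the workstation: single stream first, then a few parallel
# streams to rule out single-flow limits
iperf3 -c 192.168.1.50
iperf3 -c 192.168.1.50 -P 4
```

If that shows ~9.4 Gbit/s but file copies still stall around 300 MB/s, the bottleneck is in the pool or the sharing protocol rather than in the network.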

1 Like

Right! Didn’t think about that!

dd could also be used to test the read/write performance of the ZFS pool.
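
A minimal sketch, assuming the pool is called tank; turn compression off for the test dataset first, otherwise the zeros get compressed away and the numbers are meaningless:

```
# throwaway dataset with compression off (pool/dataset names assumed)
zfs create -o compression=off tank/ddtest
# write ~16 GiB so the test isn't just hitting the RAM cache, then read it back
dd if=/dev/zero of=/mnt/tank/ddtest/testfile bs=1m count=16384
dd if=/mnt/tank/ddtest/testfile of=/dev/null bs=1m
# clean up
zfs destroy tank/ddtest
```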