Let's talk turkey about 10 gigabit ethernet! This video is just a 15 minute introduction to 10 gigabit ethernet and where it is used effectively (hint: it has nothing to do with your internet connection). In this video we want to introduce you to 10 gigabit ethernet and show some hardware from Intel, Marvell and Netgear that operates at 10 gigabit. We'll also take a quick look at SFP+/fiber optic 10 gigabit ethernet. 10 gig ethernet is so fast that most computers just can't push 10 gigabits of traffic through a single queue; today it is a lot like it was when gigabit ethernet first came out.
We have a lot of content coming up for 10 gigabit, and this video is a good introduction if you are familiar with networking concepts and have at least a basic understanding of how networks are put together. We mention a lot of topics that will make an appearance in the future, and we will talk about why 10 gigabit "works differently" right now than gigabit ethernet.
This video will also help organize the videos we're going to post to our other channels; we'll reference back to it as a general overview of 10 gigabit and what it means for us.
If you are completely lost, check out the Home Server 101 video below; we'll have an update to that video in a few months. Once you're hooked on having a (home) server to do your bidding, the value here will be more apparent.
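To put the "single queue" point above in concrete terms, here's a minimal single-stream throughput sketch you could run between two machines. The port number, buffer size, and duration are arbitrary choices for illustration, not anything from the video; on a lot of hardware one TCP stream like this tops out well below 10 Gbit/s, which is why multiple queues matter at these speeds.

```python
# Minimal single-stream TCP throughput test (a crude iperf stand-in).
# Port, chunk size, and duration are assumptions; adjust for your own setup.
import socket
import sys
import time

PORT = 5201            # arbitrary test port
CHUNK = 1 << 20        # 1 MiB send/receive buffer
DURATION = 10          # seconds to run the test

def server():
    # Accept one connection and count how many bytes arrive.
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        total = 0
        start = time.time()
        with conn:
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                total += len(data)
        elapsed = time.time() - start
        print(f"received {total / 1e9:.2f} GB in {elapsed:.1f} s "
              f"= {total * 8 / elapsed / 1e9:.2f} Gbit/s from {addr}")

def client(host):
    # Blast zero-filled buffers over a single TCP stream for DURATION seconds.
    payload = b"\x00" * CHUNK
    with socket.create_connection((host, PORT)) as conn:
        end = time.time() + DURATION
        while time.time() < end:
            conn.sendall(payload)

if __name__ == "__main__":
    # usage: python tput.py server   |   python tput.py client <server-ip>
    if sys.argv[1] == "server":
        server()
    else:
        client(sys.argv[2])
```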
Absolutely awesome! Loved how you started getting into the multiple receive queues and the DNS trick in the second video. Can't wait for more videos in this series, so keep 'em coming!
Way more speed than I need at home but I love learning about it. Looking forward to your updated Home Server 101 video as well. Keep up the great work!
Hi Wendell, could you give us the poor man's "peasant setup" using just 10 gig dual NIC cards? I only have 2 home computers and a laptop that could use its Thunderbolt port for a 10 gig connection. Emulex? Using direct attach copper, RJ-45 Cat 7, or short reach optical? Thank you for all the wisdom you give.
Nice, I was waiting for this for a long time. The first 10 gig switch video made me buy the 8 port version of that Netgear switch, 2 Tehuti TN4010 cards, and an X540-T1 card for a future FreeNAS box. It took a while to get them at a reasonable price. I can't wait for the rest of the series! Many thanks!
Stoked for these videos, but just a word of warning in case you plan on going down this path: RDMA and Intel NICs don't play nice... :(
SOFS w/ multi tier pools off a JBOD enclosure, SMB3, and RDMA is insanely fast... We use it for our Hyper-V storage and it's STUPID how fast it is (even encrypted). Servers reboot so fast the RDP session doesn't time out and I wind up second guessing that they actually rebooted... lol
While these aren't super accurate, this is the testing we did... results are the average over 10 minutes using a 40 GB file.
The whole setup is below (mind you, this was 30% cheaper than one comparable EqualLogic or EMC unit, which had fewer features and no servers, plus the bells and whistles like dedupe, SMB3, etc. are free):
3 Hyper-V hosts with dual 10G NICs (active/passive, because active/active was screwing with RDMA)
2 SOFS hosts with dual 10G NICs (active/passive)
DAS JBOD with 17 platters and 7 SSDs, with connectivity to both SOFS boxes
Juniper 10G switches
The SOFS boxes will both access the JBOD simultaneously and serve it to the same Hyper-V host. MS's tech around SMB 3 is pretty sweet, especially considering how stupid easy it is compared to the complexity of iSCSI that we came from.
*NOTE: It's the holiday weekend and I might still be a little drunk so accuracy can't be guaranteed ;)
Interesting topic, since I mostly keep in touch with regular consumer parts. In the consumer realm I've kinda gotten the feeling that all the companies are betting on wireless eventually supplanting gigabit LAN, and that we might therefore never see 10Gbit making the big drop into consumer space. We'll see, I guess.
In the meantime I'd also be interested to see some recommendation or quick test of a dual port NIC in the consumer price range that could be used as a quick way to double(ish) the bandwidth between a workstation(ish) computer and the el cheapo home/small office ITX file server running [gasp] non-server Windows. It's not fancy, but if you could get almost twice the speed I reckon it could be worthwhile for quite a few users.
Wendell, I'm very unclear about flow control: in my tests it signals the entire switch to transfer at the lower Ethernet speed, from 1000 down to 100, so the usual recommendation is to always leave it disabled. Unless it's implemented in a smarter way in this device?
It depends. This can happen. I've seen this a lot on Dell switches, especially the older 32xx series managed switches.
When flow control works properly, it should basically notify the sender to slow down. Some older equipment would buffer instead. So what you are seeing as a reduction in throughput is actually caused by an increase in latency, which can be offset by receive side scaling. I may be reading too much into this.
I got to talk to Dave Taht about bufferbloat at length and it is pretty cool that a lot of modern equipment manages this in a totally different way now. He helped popularize the term bufferbloat, and Cisco now actually uses it in its recruiting material, asking for folks with experience with bufferbloat.
So the loss of throughput you experienced in the past may have been due to increased communication latency caused by buffer shenanigans.
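One rough way to see that latency effect for yourself is to run a small echo probe alongside a big file copy: if buffers are soaking up congestion instead of telling the sender to slow down, the round-trip times climb while the copy runs. This is only a sketch with an arbitrary port number, not anything from the video.

```python
# Rough RTT probe to run alongside a bulk transfer (port number is arbitrary).
import socket
import sys
import time

PORT = 5202

def echo_server():
    # Bounce every datagram straight back to whoever sent it.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("", PORT))
        while True:
            data, addr = sock.recvfrom(64)
            sock.sendto(data, addr)

def probe(host, count=20):
    # Send small packets and print the round trip time for each one.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(2.0)
        for i in range(count):
            sent = time.time()
            sock.sendto(str(sent).encode(), (host, PORT))
            sock.recvfrom(64)
            print(f"probe {i}: rtt = {(time.time() - sent) * 1000:.2f} ms")
            time.sleep(0.5)

if __name__ == "__main__":
    # usage: python probe.py server   |   python probe.py client <host>
    if sys.argv[1] == "server":
        echo_server()
    else:
        probe(sys.argv[2])
```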
You can get dual port Intel NICs pretty cheap on eBay (around $30), but it doesn't really work that way. You only see a performance increase if you have multiple computers accessing the server; a single transfer from one device to another won't ever be faster than the speed of a single NIC.
If you have multiple PCs accessing the server then it's worthwhile setting up, but you also need a managed switch that supports link aggregation.
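The reason a single transfer can't go faster is that link aggregation picks a member link by hashing flow fields, so one flow always rides one link. Here's a toy sketch of that idea; it mimics the spirit of something like Linux bonding's xmit_hash_policy rather than any real device's exact algorithm, and the addresses are made up.

```python
# Toy illustration: a LAG hashes flow fields to pick a member link, so a
# single flow always lands on the same link, while many flows spread out.
NUM_LINKS = 2

def pick_link(src_ip, dst_ip, src_port, dst_port):
    # Same flow tuple -> same hash -> same link (within one run).
    # Real switches/bonds use a deterministic hash over similar fields.
    return hash((src_ip, dst_ip, src_port, dst_port)) % NUM_LINKS

# One workstation copying a file to the server: one flow, one link, 1 Gbit max.
print(pick_link("192.168.1.10", "192.168.1.2", 50123, 445))

# Several clients hitting the server: flows spread across the member links.
for client in ("192.168.1.10", "192.168.1.11", "192.168.1.12", "192.168.1.13"):
    print(client, "->", pick_link(client, "192.168.1.2", 50123, 445))
```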
Awesome video. Love this topic, or anything beyond the current state of consumer level tech that you can get on a million other channels. Head and shoulders above anything else out there.
I have a gigabit network at my office, and it is starting to slow everyone down. There are four to six primary users with around four people using files on the server at any given time.
What I am interested in is the Link Agg. Control Protocol Wendell mentioned as a cheaper way to speed a small business network up. We are using a gigabit switch, which I think is enough, but the gigabit server interface needs improving.
What exactly do I need to buy? Just a four-port Intel PCIe NIC and a switch that supports Link Aggregation Control Protocol?
That is basically all it takes to get this up and running. Bear in mind that you don't really need a link agg enabled switch to be able to balance traffic; Wendell mentions there's a hack-y way to set up load balancing using DNS, but that does require a fair bit of infrastructure to already be in place. The easiest option is to just get a small business smart switch with link agg functions and a good PCIe NIC, at which point you can feel free to go crazy and have 4-6 gigabit ports in a link agg group... that is assuming you make the new switch the core and all heavy users are directly connected to that core, otherwise the bottleneck will be the links between the core and the user switches... if that makes sense...
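For what the DNS-based hack looks like from the client side, here's a small sketch: the server gets one A record per NIC IP, and clients resolving the name get spread across those addresses. The hostname "fileserver.lan" is purely hypothetical, and your DNS server has to actually be serving the extra records for this to do anything.

```python
# Sketch of the DNS round-robin side of the "poor man's load balancing" trick:
# one A record per server NIC, and clients spread across the returned IPs.
import socket

def server_addresses(name, port=445):
    # Collect every IPv4 address the resolver returns for the name.
    infos = socket.getaddrinfo(name, port, family=socket.AF_INET,
                               type=socket.SOCK_STREAM)
    return [info[4][0] for info in infos]

if __name__ == "__main__":
    # "fileserver.lan" is a hypothetical name used only for illustration.
    addrs = server_addresses("fileserver.lan")
    print("A records for fileserver.lan:", addrs)
    # With round-robin DNS, repeated lookups (or different clients) tend to
    # land on different addresses, spreading traffic across the server's NICs.
```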