Big Compute - Dell M1000e Bladecenter | Tek Syndicate


Let's take a look at a Bladecenter from Dell!



This is actually older equipment (about 3-5 years old, depending on the part) that is being repurposed and moved around, but modern bladecenters still follow the same conventions. In fact, modern blades from Dell, such as the M630, are still designed to work in this M1000e enclosure.


The Bladecenter chassis provides IP/KVM connectivity, remote SSH power control, and many other nifty features. The built-in networking dramatically simplifies cabling and connectivity while still giving you the ability to trunk connections if you need to.
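For example, the SSH power control goes through the chassis's CMC and its racadm tool. Here's a minimal sketch, assuming a hypothetical CMC address and placeholder credentials, using paramiko from Python:

```python
# Sketch: power-cycling one blade through the M1000e CMC over SSH.
# The CMC address and credentials below are placeholders, not real ones.
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("10.0.0.10", username="root", password="calvin")

# racadm serveraction addresses a blade by slot; server-3 is arbitrary here.
stdin, stdout, stderr = client.exec_command(
    "racadm serveraction powercycle -m server-3")
print(stdout.read().decode())
client.close()
```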


This particular bladecenter is fully loaded with 10 gigabit Ethernet and extra dual- or quad-port gigabit NICs. Inside the Bladecenter, each blade has at least 3 network connections, with many blades having 5.


Connected to that, we've also got our 24-disk 6010 SAN, which provides iSCSI targets. It is also possible to boot from iSCSI with the blades in this particular bladecenter.
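From a Linux initiator, attaching to targets like these is just a discovery plus a login. A rough sketch driving open-iscsi from Python; the portal address is made up:

```python
# Sketch: discover and log into iSCSI targets exposed by the SAN.
# Requires open-iscsi (iscsiadm); 10.0.0.50 is a placeholder portal address.
import subprocess

portal = "10.0.0.50"

# Ask the SAN which targets it exposes at this portal.
subprocess.run(
    ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal],
    check=True)

# Log in to the discovered targets (add -T <iqn> to pick just one).
subprocess.run(
    ["iscsiadm", "-m", "node", "-p", portal, "--login"],
    check=True)
```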


Most of the time, the real benefit of a bladecenter like this is that you get a lot of compute horsepower in a relatively small package. Modern blades for this bladecenter would pack in 4-8 times as much computational horsepower at comparable power levels.


This thing is powered by six 2300-watt power supplies.


It's great for starting your own search engine, Hadoop cluster, or strong AI.



Any questions?




This is a companion discussion topic for the original entry at https://teksyndicate.com/videos/big-compute-dell-m1000e-bladecenter

How much does the unit you showed in the video cost (with all the components)?

What is it being used for?

Will it run Crysis? JK. But really, what are you using this for? Also, at load, what does it sound like?

Nice video guys!

Although I must admit that a lot of that went over my head. It was fairly reminiscent of Ash lecturing medieval peasants about his Boomstick, and telling them to shop smart at S-Mart, while they smiled and nodded in fear...

I definitely wish that I had gotten into these kinds of things when I was younger like you both did.

He fired 3 rounds out of that double barrel without reloading. Just saying.

Qain may have too, but I would never have noticed. Hence how fitting the reference is.

So is that what you run your skynet program on?


Serious question here, since I've never worked in an environment that had any ability to utilize blade servers: are they really still a thing? It seems like you could serve a similar function with one strong machine and virtualization.

Wow, this really makes me drool. I'm guessing you're using it to handle the site and maybe the videos and so on, but do you actually need all that? I mean, it sure is awesome, but is it something you actually have a need for? Or is it just something to have fun playing around with?

I don't know much about high performance enterprise but I could picture this running a cluster of virtual machines.

Imagine RAID but with full virtual machines.

The video also mentioned the computing power you get in a limited space.

They are still a very real thing.

We primarily use the Cisco UCS 5108 platform in our datacenters. Think of it this way: it's more efficient and cost effective to get all of your hardware and support from a single vendor. It also allows your IT team to specialize, know the ins and outs of their platform, and rely less on outside contract work.

As far as virtualization goes, that is the purpose of these blade centers. We run tens of thousands of virtual machines, switches, routers, etc. on stacks of blade chassis. VMware is our primary suite, but you can design them for basically any application. When you realize that you can pool the resources of an entire 8-, 10-, or 12-blade chassis and then interface with an entire fleet of other blade chassis to use them as a single compute stack... things can get pretty out of hand.

i.e. Minecraft (shhh don't tell anyone)
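As a rough illustration of that pooling, here's a sketch that tallies a cluster's combined CPU and memory with pyVmomi; the vCenter hostname, credentials, and cluster layout are all made up:

```python
# Sketch: sum the pooled CPU/RAM that vCenter reports for each cluster.
# Hostname and credentials are placeholders; lab-only SSL handling below.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # skip cert checks (lab only)
si = SmartConnect(host="vcenter.example.com", user="admin",
                  pwd="secret", sslContext=ctx)

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)

for cluster in view.view:
    s = cluster.summary
    print(f"{cluster.name}: {s.numHosts} hosts, "
          f"{s.totalCpu} MHz CPU, {s.totalMemory // 2**30} GiB RAM")

view.DestroyView()
Disconnect(si)
```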

Some other benefits:
1. Installation and replacement are ridiculously easy (we can have replacement parts, blades, or chassis delivered within 4 hours of failure per our SLA)
2. High availability
3. Simple cabling
4. Power savings (newer models can automatically hibernate blades on demand to power only what's needed, even powering just one socket or a specific number of DIMM slots per blade)
5. Space savings

Anyway, I hope that answers some of your questions.

Thanks, Wendell and Qain, for showing us the big-boy hardware. I've been a BIG fan of Tek Syndicate for a long time, and I can only hope you'll show us more things like this in the future. I think there is a big need to show the community the huge differences in enterprise-level hardware; how else will the next generation of IT professionals get the jobs they want, when they would never get to work with (or even see) this type of hardware without already having such a job?


I had a question during the whole video: if it's a retired unit, what will happen to it? Will you use it for something Tek Syndicate related, or something else?

Seriously: what, other than this video, are you using this for?

In a past life I did a fair bit of 3D stuff (the under-the-hood part -- some special-effects software and other specialized 3D software), and for a while this kind of thing was one of the best setups for running e.g. RenderMan or Autodesk's rendering products. ArcGIS number crunching and Hadoop also worked well on setups like this. It is still being used; I just got to repurpose it, so while I had the downtime with it I thought, why not take a peek? Even though it's older, it's still very good hardware. This hardware, especially the 610s and 620s, will be used for years to come. The 630s are the very newest blades you can get, so it's not that far behind the curve.
Plus, a terabyte of RAM in a box is cool.


Just as I already said on Twitter... wow, gorgeous! I am really hoping to see more enterprise stuff if possible =) *wipes away the drool*

So the backplanes are interchangeable? This fabric A/B/C stuff? Are the individual blades actually interconnected through anything other than the network interfaces on the back?

Concerning the six PSUs, how many could fail before the bladecenter goes down? I imagine it's a 3/3 redundancy?
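Sketching out my guess, assuming the 2300 W figure from the video and the redundancy policies these chassis usually document:

```python
# Back-of-the-envelope power budgets for six 2300 W supplies under the
# M1000e's documented redundancy policies; figures are illustrative only.
PSU_WATTS = 2300

policies = {
    "no redundancy (6+0)": 6,    # all six count toward the budget
    "PSU redundancy (5+1)": 5,   # one hot spare; any single PSU can fail
    "grid redundancy (3+3)": 3,  # two feeds; a whole 3-PSU grid can fail
}
for name, usable in policies.items():
    print(f"{name}: {usable * PSU_WATTS} W available")
```

So under grid redundancy, a whole feed's worth of supplies (three of the six) could drop and the chassis would stay up.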

@wendell
Awesome video!
I am an HP ProLiant guy myself, but it's fun to see more enterprise gear on the channel.
More please!


My experience with Fight Club allowed me to catch a single-frame slip, right at 12:50-ish.

I see memes coming towards us ^^

12:53. Just checked again. It is in fact only one frame.