1st-timer Xeon (dual x5690) server build queries!

Thanks to eBay, and my inability to resist a bargain (£92!), I’ve recently acquired this:

It’s a dual Xeon X5690, with 36GB (9x4GB) of Kingston 1.35V PC3-10600R (spec), all mounted on an EE-ATX Supermicro X8DAH+-F Rev 2.01 motherboard (manual).

There’s also an ARC-1212 4-port SAS/SATA RAID card (manual), an Intel dual gigabit NIC, a Quadro 4000, and a huge PSU (something silly like 865W).

I’m being fairly exhaustive with the specs, because my questions are quite specific :slight_smile:

The current box is huge, tank-like in its construction, and exceedingly loud, so I intend to transplant the machine into an old Coolermaster Cosmos 1000 tower, and swap out the CPU HSFs for Hyper 212 EVOs.

Besides a NAS, I don’t really know precisely what I’m going to use it for yet, so I’ve just stuck Windows 10 Pro on it (which seems to perfectly support all the hardware). So for now my queries are focused on the hardware side of things.

So here they are:

  1. The RAID controller.

a) The current cabling to the drive bays appears to use standard SATA cables, not SAS. Will the cables need replacing if I intend to attach SAS drives, or will a set of SATA to SAS adapters suffice?

b) Are there going to be any hideous incompatibilities with this old SAS RAID controller? Should I restrict myself to a maximum size and/or type of drive to avoid problems?

  2. The Memory
    The current configuration is 2 DIMMs per channel on CPU 1, and 1 DIMM per channel on CPU 2.
    According to the manual, the full 1333MHz is attainable on this CPU model at 1.5V, so long as only slots 1 & 2 of each channel are populated, and none of the DIMMs have more than 2 ranks.
    The motherboard appears to be quite smart about this; it’s currently running the 2-DIMM-per-channel memory on CPU 1 at 1.5V, while the sparser memory on CPU 2 is only running at 1.35V, thus maintaining 1333MHz.

a) Is running the memory of the different CPUs at different voltages intrinsically bad?
Should I pull out 12GB to bring both CPUs down to only 1 DIMM per channel, and so allow 1.35V operation for both?

b) There appears to be no mention in the manual of capacities affecting speed.
Does this mean I can put 3 8GB (1- or 2-rank) DIMMs in CPU 2’s open memory slots for a cool 60GB, without compromising speed? (With just a bump of CPU 2’s memory to 1.5V.)

c) The manual warns against mixing voltages, but doesn’t say anything about CAS latency or other timings. If I buy any more RAM, do I need to precisely match what’s there already?
If so, what should match?
Memory in the same channel positions (A1/B1/C1)?
Or memory within each channel (A1/A2/A3)?
And what about memory across CPU banks?

  3. The GPU.
    Does the Quadro 4000 have any useful purpose?
    I currently intend to just flog it, as speed-wise it seems lacklustre, though it is quite generously endowed with 2GB of VRAM.
    I assume it must have a use, as it still commands a respectable price on eBay?

  4. I’m aware that I will have to drill a few mounting points to get an EE-ATX board properly supported in the Cosmos 1000 tower, but are there any other pitfalls I might not have considered in the transplant?

Thanks for any and all input!

With that many threads you should use it to crunch BOINC projects. The Quadro would be good for GPU projects like Milkyway@home.

A couple of points:

  1. Since the motherboard is EE-ATX and not E-ATX, you may have to do some DIY mounting. I do not know how many of the mobo holes line up with the E-ATX standard on the case, or how many new holes you would have to drill and tap.

  2. Hyper 212s work wonderfully on server LGA 1366. You will have to buy some standoffs, male-to-female M3x12mm if memory serves. See this page for more info.

  3. If you buy SAS drives, get an SFF-8087 to SFF-8482 cable. There are two types: one takes SATA power and the other Molex. I think the current cable is probably an SFF-8087 to SATA breakout.

  4. If you want to try PCI passthrough, the CPUs and motherboard should both support VT-d, the IOMMU groups are hopefully OK, and you have plenty of cores and RAM to play with. My X8DTL-i has OK-ish groups; see the sketch below for a quick way to dump yours.
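If you do end up on Linux for passthrough, a minimal sketch for listing the IOMMU groups (assuming the kernel was booted with intel_iommu=on, so /sys/kernel/iommu_groups is populated):

```python
#!/usr/bin/env python3
# Minimal sketch: print each IOMMU group and the PCI devices it contains.
# Assumes Linux with VT-d enabled and intel_iommu=on on the kernel command line.
from pathlib import Path

groups = Path("/sys/kernel/iommu_groups")
for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
    devices = sorted(d.name for d in (group / "devices").iterdir())
    print(f"Group {group.name}: {', '.join(devices)}")
```

Devices that share a group have to be passed through together, so the smaller the groups, the more flexible your passthrough options.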

I am not knowledgeable enough to answer your questions about memory configuration, as I am pretty new to server builds myself, but while doing my research I found that there are some Arctic coolers that are compatible with LGA 1366 (if you use the 115x screws that come with them).

So you have a few more options for coolers. Check out this video for the details.

Just a note on RAID controllers: you might find that a lot of the older RAID controllers have a 2TB upper limit on disk size. There may be firmware updates available to remove/raise the disk size limitation, but it’s something you should be aware of when looking for a suitable card. The manuals for older controllers can be hard to find, but some, like LSI (now Broadcom), shouldn’t be too difficult to get hold of.

Depending on your use case, I have seen a lot of people recommend flashing older cards to IT mode firmware, which removes the RAID functionality and presents the disks individually (a bit like JBOD, but without the RAID firmware overhead). This seems to be the way to go if you are planning on using the machine as a NAS with something like FreeNAS or another software-RAID-based file system.

A RAID card will most likely have one or more mini-SAS connectors, so as TheCakeIsNaOH mentioned, you will probably need to get one or more SFF-8087 cables; these have a mini-SAS connector on one end, with the other end broken out to 4 SATA connections.

Thanks to those who’ve advised; so far progress is slow.
The Cosmos 1000 is NOT a good choice for mounting an EE-ATX mobo.

Only 4 of the 10 mounting holes match up with existing standoff points.
Of the 6 additional points that need drilling, 2 lie over holes in the back panel, so I’m going to engineer a plastic bridge that’ll double as a standoff. (Might call upon my dad’s 3D printer!)

The machine came with a mini-SAS (SFF-8087) to SATA breakout cable (like this).
If I did pick up some SAS drives, would 4 SATA to SAS adapters work (like this)?
Or do I need to get a dedicated 8087->8482 cable (like this)?

My understanding so far is that, cabling-wise, there’s no difference between SATA and SAS. The difference lies solely in the connector, and even then it’s only a case of 7 (redundant) pins and a physical tab intended to block SATA connectors.

Either should work fine.

The reason I suggested the 8087>8482 cable is that it makes for less complex cabling and is only slightly more than 4x SATA>SAS adapters off of eBay ($10 USD vs $8 USD).

Regarding DDR speeds:
1 or 2 DIMMs per channel will run at 1333 with 1333 RAM.
3 DIMMs per channel will run at 800.

You may run into various MB/BIOS combos that are set up for worse (800 or 1066 with 2 DIMMs per channel no matter what), but the CPU itself should be able to handle 2 DIMMs per channel @ 1333.

Of course, your memory has to be 1333, not 1066, as well.
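If you want to confirm what the board has actually negotiated per DIMM, here’s a rough sketch for Windows 10 via WMI (assuming the firmware populates ConfiguredClockSpeed/ConfiguredVoltage in the SMBIOS tables; on some boards these just read 0):

```python
# Rough sketch: dump per-DIMM speed and voltage on Windows 10 via WMI (wmic).
# Assumes the wmic tool is available on PATH; ConfiguredClockSpeed and
# ConfiguredVoltage may report 0 if the firmware doesn't fill them in.
import subprocess

fields = "BankLabel,DeviceLocator,Capacity,Speed,ConfiguredClockSpeed,ConfiguredVoltage"
result = subprocess.run(
    ["wmic", "memorychip", "get", fields],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```

Speed is the DIMM’s rated speed, while ConfiguredClockSpeed is what the board is actually running it at, so any downclocking shows up as a mismatch between the two.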

Is it normal for the heatsink of the north bridge (Intel 5520) attached to CPU 1 to get very hot?
The only PCIe device going through CPU1’s north bridge is the GPU.

Are chipsets of this era prone to running hot?

I’ve yet to check the state of the TIM, so it might well need replacing; however if the heatsink is getting hot, that’d rather indicate the TIM is operating just fine.

What’s the best course of action?
Do nothing?
Swap the heatsink for something bigger?
Attach a small fan directly to the heatsink?

I might well be removing the GPU anyway, so presumably that’d significantly reduce the strain on the north bridge?

Heh, that is a server motherboard; it’s meant to have its cooling provided by those 5k RPM fans, and you decided to take it out of the rackmount case designed to keep those parameters in check. Edit: try putting it back in the server chassis first and test whether it makes any difference. If it doesn’t, you can try changing the thermal compound; if it’s TIM it will be easy, but if it’s a thermal pad you’ll have to get one of the proper thickness. Server hardware has specs like the CFM over the heatsink needed to keep it cool.

I have a similar-generation build with X5670s. The chipset gets really hot, so I put a 60mm fan on it.

Nonsense. I picked up a quad-socket Dell R810 populated with 10c/20t per socket, 10G NICs, and 256GB of DDR3 ECC for $50.

I’d cry when I get my electricity bill if I did that though.

Yeah, that seemed like the easiest solution.

Though rather than mount it flush to the heatsink, I 3D printed a 1cm-deep stand-off cowling (to reduce backdraft), and through it fitted a 70mm fan I had lying around.

It’s not the quietest solution, but it keeps the hotter of the two Intel 5520s much cooler, and also gives some flow over the neighbouring 5520 and southbridge.

Wow! And I thought I got a bargain!
What on Earth do you use it for?! :smiley:

Bargains are always had at electronics recycling places and DRMO places (if you’re near a base).

I use Fedora and KVM for pentesting vulnerable OSes.

So I bought a set of four ‘new’ 2TB Seagate ST2000NM0023 Constellation ES.3 SAS (non-SED) drives.

They’re SAS-2 (6Gb/s) drives, manufactured in 2014.
The controller (ARC-1212) is an old (discontinued) SAS-1 device from Areca.

With the drives hooked up, I get this:

The drives report SMART ‘O.K.’, but nothing I do will bring them out of the ‘failed’ state.

I tried hooking up just 1 drive, no change.
I then tried hooking up a 500GB SATA HDD (using the same SAS cable), and that worked fine.
Unfortunately I don’t have any 2TB+ SATA drives with which to test.

I finally tried flashing the firmware/BIOS/boot code (was 1.49) to the latest version (1.51, 2012/07/04).

This too had no (positive) effect, though it did unlock some new options in the BIOS, such as “warn only” for SMART failures, and a “reactivate failed drive” option.
The former had no effect; the latter, when applied to the drives in question, causes the BIOS to lock up.

I can’t see any jumpers on the RAID controller, so no ‘SAS-mode’ magic jumper. (not that I’d expect such a thing on any modern-ish hardware).

I’ve contacted the seller, and he assures me the drives are new.
Is it likely this is simply an incompatibility between the drives and the controller, down to either the mismatched SAS versions or specifically the drive capacities?

Though that’d contradict numerous sources on the 'net that claim the ARC-1212 has no problem with 2TB+ drives.
There’s some mention that the ARC-1212 doesn’t like SED drives, but these are definitely non-SED drives =/
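One further thing I could try is pulling the full SMART output straight from the drives with smartmontools, rather than relying on the controller BIOS’s pass/fail flag. A rough sketch (assuming smartctl can actually reach the drives through the controller at all; behind a RAID card it usually needs a ‘-d’ device-type option, per the smartmontools docs):

```python
# Rough sketch: grab the full SMART report for a drive via smartmontools.
# Assumes smartctl is installed and on PATH; the device name below is
# hypothetical (run 'smartctl --scan' to see what's actually visible).
import subprocess

def smart_report(device, extra_args=()):
    """Run 'smartctl -a' against one device and return its text output."""
    cmd = ["smartctl", "-a", *extra_args, device]
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.stdout

print(smart_report("/dev/sda"))  # hypothetical device name
```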

Any ideas?
Or should I just return the drives and stick to SATA?

I ended up buying a cheap (£10) LSI 2108 SAS-2 controller (a Fujitsu D2616-A12 GS), and that reported the same: as soon as the array attempts to initialise, all 4 drives flip to the failed state.

Oh well, guess they are duff drives :frowning:
At least I got my money back.
