Surprisingly, the 64GB modules were the ones available on the QVL - I’m in Canada. Question: is it better to acquire eight smaller DIMMs (I’ll keep looking for supply), or is it OK to start with two of the monster 64GB modules and add more as needed? What I’m really asking is: does the CPU prefer more DIMMs?
Have some time to get back to this build - truly struggling to source RAM. Kingston appears to be the most readily available, but isn’t on the ECC QVL - do you figure it’s OK to consider Kingston R-DIMMs or LR-DIMMs?
We’re really lucky in our workspace - the workstations actually sit under us in a cooled room, kinda like a server closet. We tend to go with open-air designs as they allow easy access, and we aren’t too worried about noise. We fan them up since we want to keep everything chilly cool.
What are people liking out there for open air cases these days? Would house the above components.
How does having the workstation under you in a cooled room work for cabling – especially the Thunderbolt add-in card you’re buying? I guess it’s workable enough to get 3m cables for HDMI or (at least some) USB, but Thunderbolt 3/4 cables are 2m tops (and those are the expensive active cables). Do you just keep all TB peripherals in the room below, going down there when you need to connect/disconnect? (I’m very curious, as I’m struggling with Thunderbolt cable limits even with an adjustable/standing desk.)
Good point, although that wouldn’t work just to attach a portable bus-powered external TB3 NVMe drive enclosure – unless, I guess, you have your optical cable connecting to a (powered) TB3/4 dock, which you can then daisy-chain the portable drive off. I’d likewise be curious what its compatibility with TB4 multiport docks would be, like when the GC-MAPLE RIDGE add-in card comes out (assuming it will work fine with the WRX80-SU8-IPMI).
As an aside, it’s kind of interesting (frustrating, actually) how tech vendors these days typically don’t provide much in the way of real technical explanation of their products. For product details, Corning seems to offer a few bullet points in a mostly marketing brochure, e.g.:
Corning recommends only using Thunderbolt 3 optical cables with macOS®
Does not support native USB or DisplayPort
Does not provide bus power
I mean, no copper, so no power and no backwards USB cable compatibility - that’s easy enough to understand. But why the macOS-only “recommendation”? And what does “Does not support… DisplayPort” mean (or is it supposed to read “Does not support native… DisplayPort”)? Is that saying the optical cable doesn’t carry the two DisplayPort streams that are part of the normal TB3/4 standard (and thus wouldn’t pass DP on to a downstream dock with DP out)? Or is it just saying that the optical cable won’t work as a USB-C DP Alt Mode cable?
I don’t know that that’s a thing on that board? I believe its TBT header is just TB3. My boards have shipped, so I’ll know more shortly, but I don’t see the advantage of TB4 - it’s the same speed, just more security, from my understanding.
As for the macOS-only note from Corning - they work fine under Windows. We haven’t had Macs in ages now. I think they make that statement because TB3 has been difficult on AMD platforms for some time now - it seems only Gigabyte truly gets it. Asus appears to have given up - their latest SAGE board makes no mention of TBT.
My thinking: it’s time to move away from TBT and figure out a new strategy. I spoke to 45 Drives this morning to see where they’re at in terms of throughput. Maybe 40GbE is the way to go moving forward?
Just waiting on their engineers to see if they can move data fast enough to actually edit right off the array. Should be interesting to see what they come up with…
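While waiting on their engineers, some back-of-the-envelope math suggests 40GbE should be in the right ballpark for editing off the array. The bitrate and overhead figures below are my assumptions, not 45 Drives’ numbers:

```python
# Rough check: can 40GbE sustain multi-stream editing straight off the array?
# The bitrate and overhead figures are assumptions, not measured values.
LINK_GBPS = 40
PROTOCOL_OVERHEAD = 0.85      # assume ~15% lost to TCP/SMB/NFS overhead
STREAM_GBPS = 0.88            # ProRes 422 HQ at UHD/29.97 is roughly 0.88 Gb/s

usable = LINK_GBPS * PROTOCOL_OVERHEAD
streams = int(usable // STREAM_GBPS)
print(f"usable ~{usable:.1f} Gb/s -> ~{streams} simultaneous UHD ProRes HQ streams")
```

Even with conservative overhead assumptions, that’s a lot of headroom over what a single editor pulls - the harder question is whether the array’s disks and filesystem can feed the link at that rate.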
Looks like both the Maple Ridge and Titan Ridge add-in cards use the same header cables:
Hopefully, it’s just a matter of plugging different ones in and getting either a DSL7540 (TB3) or a JHL8540 (TB4) chipset and functionality. That is, of course, assuming BIOS/etc. support is in place. @wendell , in the BIOS of your new WRX80-SU8-IPMI, do you see any mention of Maple Ridge?
Interestingly enough, checking out the various Gigabyte Thunderbolt add-in cards (hxxps://www.gigabyte.com/Motherboard/Thunderbolt-Card),
none list any AMD motherboards as “Certified Motherboards” for use with the card. What’s more, for all the Intel boards, the support is typically qualified as (in “PCH attached” slot). Likewise, with other newer Z590 motherboards with Thunderbolt 4 – e.g. Gigabyte Z590 VISION D and MSI MEG Z590 ACE – the Thunderbolt chip is hung off the PCH. (The Asus PRIME Z590-A manual likewise suggests connecting to a PCH-attached slot, although the manual illustrates the card going into a CPU-attached slot.) I wonder what PCH-attached vs. direct CPU connection means for AMD, where the abundance of PCIe lanes allows for easy direct CPU connection.
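One practical difference: on Intel, a PCH-attached slot shares the chipset’s DMI uplink with everything else hanging off the PCH. A quick sketch (figures approximate; DMI 3.0 is effectively a PCIe 3.0 x4 link):

```python
# Back-of-envelope: the PCH's DMI 3.0 uplink is essentially PCIe 3.0 x4,
# shared by everything chipset-attached. Figures are approximate.
PCIE3_LANE_GBS = 8 * (128 / 130) / 8   # 8 GT/s with 128b/130b -> ~0.985 GB/s/lane
dmi_gbs = 4 * PCIE3_LANE_GBS           # DMI 3.0 budget, ~3.94 GB/s
tb3_gbs = 22 / 8                       # TB3's PCIe data tunnel tops out ~22 Gb/s

print(f"DMI 3.0 budget: ~{dmi_gbs:.2f} GB/s shared across the PCH")
print(f"TB3 PCIe tunnel: ~{tb3_gbs:.2f} GB/s "
      f"(~{100 * tb3_gbs / dmi_gbs:.0f}% of that budget)")
```

So a single TB3 device at full tilt eats roughly 70% of the DMI budget - contention a CPU-attached slot on AMD wouldn’t have.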
I have a MacBook Pro laptop with Thunderbolt. It’s actually sometimes handy to use TB3 networking for a quick high-speed connection to my desktop (for file transfers, etc.). I even had a TrueNAS VM running on my desktop that I used for Time Machine backups of the laptop, which was cooler than doing it over regular GigE.
The multiport dock support seems pretty useful. For example, if you had an optical TB4 cable running from a machine under the floor to a TB4 multiport dock on your desk, you could connect up to three portable TB3 devices (e.g. three bus-powered NVMe drives, or two NVMe drives and a Thunderbolt-networked laptop, etc.). With Thunderbolt 3, you’d have to disconnect and swap those connections, since TB3 gives you one downstream TB port at most.
Of course, there are some hints of how that multiport functionality works with older TB chipsets, like Alpine Ridge, but it’s kind of a labyrinth to understand the exact details, e.g.:
There are still a few good use cases:
Fastest easily portable storage with TB3 PCIe 3.0 x4 NVMe enclosures (I guess until maybe some PCIe 4.0 x4 external U.2 protocol comes around)
Easy high speed laptop to desktop networking
Flexible PCIe expansion, like if you had a PCIe adapter that you didn’t want to dedicate to an internal slot (or have it take up a high-speed PCIe 4.0 x16 slot when it’s just a PCIe 3.0 x4 or lower device). This is less of a deal on AMD, but on Intel platforms, where you only have 20 CPU PCIe lanes (x4 taken by a boot NVMe drive, and then at most three PCIe slots at x8/x4/x4), it’s more useful, especially if it just hangs off the PCH, sharing bandwidth with other devices as needed.
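On the first point, it’s worth noting the TB3 tunnel, not the drive, is usually the ceiling for a fast enclosure. A minimal sketch (the drive figure is an assumed typical value):

```python
# TB3 is 40 Gb/s total, but the PCIe data tunnel is capped around 22 Gb/s,
# so the link, not the SSD, usually limits a fast NVMe enclosure.
TB3_TUNNEL_GBS = 22 / 8      # ~2.75 GB/s
NVME_NATIVE_GBS = 3.5        # typical PCIe 3.0 x4 drive (assumed figure)

ceiling = min(TB3_TUNNEL_GBS, NVME_NATIVE_GBS)
print(f"expected enclosure throughput: ~{ceiling:.2f} GB/s")
```

That lines up with the ~2.4–2.8 GB/s real-world numbers people tend to see from TB3 NVMe enclosures.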
I think Intel has totally bungled the TB marketing however, so I’m not optimistic.
High-speed Ethernet is interesting, especially with RDMA capabilities allowing for various kinds of resource disaggregation. Why have a ton of local fast storage if you can attach to equally fast networked storage over something like NVMe-oF? Why have a ton of local GPU if you can attach to a networked bank of them over something like GPUDirect RDMA? I’ve been focusing on 25GbE with RDMA, just because I wanted remote storage to be at least close to local NVMe speeds. In that regard, I’m always a bit disappointed when a board includes a 10GbE NIC (especially a -T/RJ45 version), and/or one without RDMA functionality.
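The 25GbE preference follows from rough arithmetic on how close each link speed gets to a local NVMe drive (the overhead and drive figures below are my assumptions):

```python
# How close does each link get to a local PCIe 3.0 x4 NVMe drive (~3.5 GB/s)?
def usable_gbs(link_gbe, overhead=0.90):
    """Line rate in Gb/s -> rough usable GB/s, assuming ~10% protocol overhead."""
    return link_gbe * overhead / 8

for link in (10, 25, 40):
    print(f"{link}GbE -> ~{usable_gbs(link):.2f} GB/s usable")
```

By that math, 25GbE (~2.8 GB/s) is within striking distance of local NVMe, while 10GbE (~1.1 GB/s) clearly is not - and RDMA matters as much as raw bandwidth, since it keeps latency and CPU overhead down.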
But in the meantime, tons of PCIe lanes and Thunderbolt on top of that is a good way to go!