[Build Log] My first "server-grade" SFF virtualization and network storage build

I was planning to continue my existing thread, where I started off by sanity-checking my idea of buying a kit and then putting in it what's needed, but due to many, many things, I ended up building my own SFF computer!

…with off-the-shelf parts and some hacks :slight_smile:

So, here’s the boring stuff…

| Part | Name | Quantity | Price per unit | Total price |
| --- | --- | --- | --- | --- |
| Chassis | LOUQE Ghost S1 Mk III | 1 | $351.56 | $351.56 |
| Cooling | Noctua NH-L9i chromax.black CPU air cooler | 1 | $65.69 | $65.69 |
| Processor | Intel Core i9-11900K | 1 | $441.92 | $441.92 |
| Storage | Sabrent 4TB Rocket Q4 NVMe PCIe Gen4 (SB-RKTQ4-HTSS-4TB) | 1 | $755.07 | $755.07 |
| Storage | Samsung 870 EVO 4TB 2.5″ SATA III 6Gb/s V-NAND SSD (MZ-77E4T0BW) | 3 | $508.60 | $1525.80 |
| Memory | G.Skill Trident Z RGB 64GB (2×32GB) DDR4-4400 (F4-4400C19D-64GTZR) | 1 | $528.35 | $528.35 |
| Power supply | Corsair SF600 Platinum fully modular, 80+ Platinum | 1 | $141.54 | $141.54 |
| Motherboard | Gigabyte Z590I VISION D (rev. 1.0) | 1 | $380.48 | $380.48 |
| External networking | QNAP USB 3.0 to 5GbE adapter (QNA-UC5G1T) | 1 | $161.60 | $161.60 |
| **Total** | | | | **$4352.01** |

Now, these are standard off-the-shelf parts that'll fit into the chassis no problem and do everything required with no jank whatsoever. Well, no jank = no fun, and while deciding on my motherboard, I came across the Z590I's manual (source) and found its block diagram.

That page shows an M.2 slot I wasn't expecting at all (labelled M2P_SB), and searching for that term turned up a particular beauty.

Product Specifications

So, I have a PCIe 3.0 x4 slot at my disposal… hmm… now, for Thunderbolt 10GbE, the authority more or less seems to be the “OWC Thunderbolt 3 10G Ethernet Adapter (OWCTB3ADP10GBE)” and it does not seem to be capable of Thunderbolt daisy-chaining… at least if these images are to be believed.

Now, I find it kinda odd that I should give up a 40Gbps Thunderbolt 4 port for 10Gbps Ethernet. I could get a dock, but they're pricey and I'd be forced to import one, which attracts heavy costs.

So, why don’t we use this PCIe 3.0 x4 slot available as an M.2?

Now, we need a card that'll behave with such a slot, so I started hunting for a PCIe NIC that'd take good advantage of those four lanes, and after a lot of searching and reading product specifications, the Intel X550T2BLK's product brief had this to say…

Alright, you have my full attention.
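
Back-of-the-envelope, the slot really is enough: PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding, so four lanes give roughly 31.5 Gbit/s of usable bandwidth, comfortably above what two 10GbE ports can push. A quick sanity check (the constants come from the PCIe 3.0 spec, not from the product brief):

```python
# Quick sanity check: PCIe 3.0 is 8 GT/s per lane with 128b/130b encoding,
# so an x4 link should comfortably feed a dual-port 10GbE NIC.
LANE_RATE_GT_S = 8.0              # PCIe 3.0 line rate per lane
ENCODING_EFFICIENCY = 128 / 130   # 128b/130b line coding
LANES = 4

usable_gbps = LANE_RATE_GT_S * ENCODING_EFFICIENCY * LANES
nic_demand_gbps = 2 * 10          # X550-T2: two 10GbE ports at line rate

print(f"PCIe 3.0 x4 usable bandwidth: {usable_gbps:.1f} Gbit/s")  # ~31.5
print(f"Dual 10GbE at line rate:      {nic_demand_gbps} Gbit/s")
print("Headroom to spare." if usable_gbps > nic_demand_gbps else "Too tight.")
```

Protocol overhead shaves a bit more off, but the x4 link still isn't the bottleneck here.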

So that settles what we're gonna put in there; now the question is, how do I bridge it?

I remembered a video where Linus Tech Tips tried to bridge a LattePanda Alpha with a Titan RTX (source) and figured: alright, unlike a Titan, we're mapping 4 lanes to 4 lanes, so let's get what we need but with less jank. Through enough Google-fu, I found the R42SR M.2 Key-M PCI-E x4 Extension Cord for LattePanda Alpha & Delta.

Alright, everything checks out: M-key male to M-key female, x4 (M.2) to x4 (adapter) to x4 (card). But wait, it needs a 4-pin 12V input.

Well, shit.

Attempting to find a Molex-to-floppy 4-pin adapter meant being quoted outrageous prices, to the tune of $28.82, which I'm not paying. After going through multiple online retailers, I remembered I have a spare non-modular Corsair 450W PSU collecting dust, with a cable that goes supply → SATA → Molex → Molex → floppy 4-pin…

So… if I snip the cable where it runs between the SATA and Molex connectors, I've got myself a dinky little adapter, for free :slight_smile:

We're back in business! The SF600 has a female Molex connector and, since our makeshift adapter came off a PSU, it ends in a female connector too, so we just need a Molex male-to-male adapter, which thankfully costs a more reasonable… $4.70 (still too much money, but the alternative is travelling somewhere far away and spending more on transport than on the cable).

So, we have the ingredients for 10GbE inside the case. I plan on installing a graphics card, but with one slot taken up by networking we're restricted to single-slot cards, which is perfect, as AMD makes single-slot workstation cards and I only have a 600W power supply.

I’m waiting on a workstation version of the rumored RX7800 (source) but yeah, I’m glad!

| Part | Name | Quantity | Price per unit | Total price |
| --- | --- | --- | --- | --- |
| Network card | Intel X550-T2 (X550T2BLK) converged network adapter | 1 | $475.75 | $475.75 |
| Converter | R42SR M.2 Key-M PCI-E x4 extension cord | 1 | $46.56* | $46.56* |
| Converter | Generic 4-pin Molex male to 4-pin Molex male cable | 1 | $3.88 | $3.88 |
| Converter | Generic 4-pin Molex female to 4-pin floppy connector | 1 | $0 | $0 |
| **Total** | | | | **$526.19** |

* - the extension cord unfortunately had to be imported by me, and the cost includes shipping and a 75% import tax

Now, one drawback (if you can call it that) of the Ghost S1 Mk III is that there’s no front IO. Which, for my use case, is perfectly fine. But that also means, if we go back to the block diagram of the Z590I VISION D, we’re leaving quite a lot of connectivity on the table.

We can’t have that!

Unfortunately, we can't use the usual USB-header-to-slot-bracket ports, as we simply don't have the slot real estate, so we go for the next best thing: plugging a USB DOM into an internal header to hold TrueNAS SCALE.

Unfortunately, absolutely nobody sells a USB 2.0 DOM, let alone a USB 3.0 one, not even the server vendors in my area. Still, I don't want those headers going to waste, and since I'd been reading up on SFF builds, I remembered that these things exist…

It's for a Lian Li case, but I'll snag one… thanks! So we've got ourselves a USB 3.2 Gen 1 port. My initial thought was to buy a flash drive, but then I remembered that in the pecking order of NAND, flash-drive NAND tends to be kinda crappy.

I have a 970 EVO Plus that used to be the boot drive in my current rig, just collecting dust, and now it's time for it to shine! Well, I'll need an enclosure, and since I don't own one I bought one that seemed reputable enough, and there we have it: a makeshift USB DOM, except it's not a DOM and it's really messy, but it works :blush:

| Part | Name | Quantity | Price per unit | Total price |
| --- | --- | --- | --- | --- |
| Storage | Samsung 970 EVO Plus 250GB PCIe NVMe M.2 (2280) internal SSD | 1 | $0 | $0 |
| Converter | ASUS ROG Strix Arion aluminium alloy M.2 NVMe external SSD enclosure | 1 | $62.49 | $62.49 |
| Converter | Lian Li LANCOOL II-4X 3.1 Type-C cable (LANII-4X) | 1 | $13.67 | $13.67 |
| **Total** | | | | **$76.16** |

I've bought double-sided tape and a Dremel for a super secret update, which I'll document if I don't bung it up.

Just curious, but it looks like a decent amount of effort went into getting 10GbE in an ITX form factor. Did you by chance consider the ASRock Rack X570D4I-2T and a Ryzen 5000 CPU? It would also give you IPMI, though the board comes at the expense of additional USB ports on the back and Thunderbolt. It does come with OCuLink connectors letting you connect 2x PCIe 3.0 x4 U.2 NVMe SSDs or 8x SATA drives via breakout cables (not sold with the board, unfortunately).

Could then use the x16 slot for something like a low-profile NVMe add-in card, like this kinda thing

This was my primary reason to buy an expensive ASRock Rack board: IPMI + 10GbE on board, saving the two slots you'd usually need for a GPU and a 10GbE NIC. Expansion capability is rather limited on consumer chipsets, so getting these on-board frees up a lot of room in the PCIe slots.

And don’t buy 4TB M.2 drives. If you want 4TB NVMe, get a U.2 enterprise drive for less money and vastly better write performance and endurance.

What's the point of that old Intel stuff? Get 12th gen or Ryzen.

> And don't buy 4TB M.2 drives. If you want 4TB NVMe, get a U.2 enterprise drive for less money and vastly better write performance and endurance.

> Did you by chance consider the ASRock Rack X570D4I-2T and a Ryzen 5000 CPU?

My build was something I came up with on the fly. I was looking for slot-in components for what was supposed to be a kit computer: small, easy, no-frills, and I was content with giving up flexibility.

As things started falling apart, I found myself caught between orders and cancellations and out of money, so to justify the expense, I went all in. I did think of ASRock Rack but… they don't sell here.

Not through hardware retailers, not through e-commerce stores, not directly, nothing. U.2 is something I didn't know about, but out of curiosity I checked whether I could even buy one and… nobody here sells them.

For server hardware, the market seems tilted towards “buy plug and play solutions, period, for all else, order in bulk quantities large enough for us to call you back” rather than allowing retail purchase of server parts.

The few sellers that do exist (like on Amazon) have a substantial markup, and at that point, paying 2x on top of local pricing (which is already higher than base international pricing) is burning money to the ground.

So, this build is admittedly a byproduct of chaos rather than something carefully planned, but since I was doing these goofy-looking hacks to get the most out of my hardware, I figured I'd share them.

> What's the point of that old Intel stuff? Get 12th gen or Ryzen.

Hackintoshability; 11th gen is the end of the road in that sense. Yes, I am building this primarily as a server, but in the back of my head the question of "can it run macOS" still lingers, and going Ryzen for my current rig is something that bites me even to this day, as macOS on it is shaky at best (I need 32-bit apps).

None of this matters if I'm using virtualization, but for some reason I felt compelled to go with something relatively well established, compared to Intel's first foray into a big.LITTLE design or Ryzen still having bugs ironed out when it comes to server deployments (those bugs may have been squashed for all I know, but stuff I read years ago made me do a double take on Ryzen, and that, combined with being a first-generation Ryzen adopter, sent me running back to Intel).

Yeah, if availability is limited, you have to get what's sold at a reasonable price.
M.2 really caps out at 2TB because you can only fit so many chips on a gum stick. The 4TB versions have to use very expensive high-density modules, which is why they cost something like 4x as much.
The 2.5″ form factor has a lot more space to work with, which is why SATA SSDs are so cheap at 4-8TB and why U.2 enterprise drives usually use this form factor too.
You don't get U.2 drives in your average electronics store or online gaming PC store, and you need an adapter via a PCIe slot or M.2. I'm not saying this is a good thing to buy, but I would consider it before getting 4TB M.2 :slight_smile:

That's a pretty good reason. If you want to keep the machine compatible (who knows what the future holds?), that's a perfectly reasonable choice. I was a bit confused by the expensive last-gen stuff, since modern Ryzen and 12th-gen Intel make for better home servers for less money and power.

Availability here in Europe is worse than last year. I checked my ASRock Rack board recently and supply-chain issues are certainly taking their toll. And these are fairly niche and specialised products, not a 64GB USB pen drive you can get in every grocery store.

> I was a bit confused by the expensive last-gen stuff, since modern Ryzen and 12th-gen Intel make for better home servers for less money and power.

That is understandable; a PassMark comparison between an 11900K and a 5900X shows the 5900X blowing it out of the water.

What Intel lacks in impressive numbers, it makes up for by sparing me the issues that come with running Ryzen.

It is a damn shame, really, that I had to pick Intel because an OS that is being deprecated on this architecture (but still meets needs that neither Windows nor Linux can) won't play nice with the current market leader (this is an oversimplification; there are technical nitty-gritties behind it), but yeah, bang-for-the-buck this is not.

Ryzen ticked all my other checkboxes and then some: out-of-the-box ECC support, so many cores to spare, with each core dishing out enough performance that I could conceivably host even more VMs, an iGPU unless you explicitly buy a non-iGPU SKU, most of AMD's bugs ironed out and Zen battle-tested by now. I know where Ryzen shines (rise and shine, get it?).

All these goodies, on top of better performance, and I have to give them up. Why? Because macOS was built around Intel's x86_64 implementation, not AMD's.

My wallet sheds a tear. But no use crying over spilt milk, I got what I paid for.

Sorry to say, but if you want macOS, just buy a damn Mac already. Hackintosh was a nerd dream that died the day Intel failed to deliver on their promises → Apple developing the M1.

Now it is just a question of whether Apple stops supporting x86 before Microsoft stops supporting Windows 10. Purely academic at this point, of course.

Incidentally, Apple almost certainly sells Mac minis in your country, for less than what you paid. Sooooo… dunno what you gained there, to be honest.

I wonder if, in ten years' time, we'll find as rabid a Hackintosh fanbase as the Amiga diehards, though… :grin:

> Sorry to say, but if you want macOS, just buy a damn Mac already.

Supply-chain issues mean the Mac Studio is unavailable here and will remain so for the foreseeable future. But that's assuming I would buy a first-generation Mac Studio to begin with.

The M1 is Apple's first foray into a desktop SoC and it hasn't reached feature parity with its x86_64 counterparts: no nested virtualization, no Thunderbolt GPU support (the first being a dealbreaker for development work), not to mention transition-related bugs and hiccups, with vendors only just starting to provide feature parity between their x86_64 and ARM64 releases.

I can’t have that.

The extent of hackintoshing on this SFF rig is development, not daily driving. For macOS intents and purposes, it'll last me ~3 years before Apple drops the axe (they still have to support their Intel machines for quite a while), which is long enough for me.

I don’t expect macOS support on this to last a lifetime, just long enough.

The M1 leans quite a bit on its special-purpose silicon to bolster its performance claims (for example, I don't care for ProRes, but the M2's H.264 support is welcome), and until its general-purpose silicon can compete shoulder-to-shoulder while vendors sort out early-adopter bugs, it's not ready for prime time on my desk.

My ideal setup would be two rigs, one handling server duties and one being my personal rig.

The server would run containers and virtual machines for things like cross-platform native builds and cross-compilation (see why I need hackintoshing?), act as the NAS that stores all my data and backups, be a "disposable VM dispenser" for when I need to work on platform-specific projects (Xcode lock-in is real) and, in general, handle work duties.

So, if I'm going to be virtualizing macOS anyway, why did I go Intel? Because I want the option to go bare metal if I have to, without everything crashing down.

The personal rig will let me play games, run Discord, do creative stuff in Ableton Live and Illustrator, and just have fun and unwind. I can use VS Code to connect to my dev machine and work from there, so my computer doesn't become unusable when I have to compile stuff.

An M2 Ultra Mac Studio might be my personal rig, but that doesn't exist yet. My personal rig as of now is the Ryzen 1800X machine I assembled ~5 years ago, and once this SFF system is up and running, I'll migrate all my development stuff to a VM within it and switch to Windows.

Whenever Apple releases a Mac Studio that meets all my needs, I’ll switch from Windows to macOS and my setup will be complete.

Update!

What happened to the 10GbE NIC?

Turns out the "R42SR M.2 Key-M PCI-E x4 Extension Cord" comes with a SATA-to-4-pin converter, at least when you purchase it from Mouser, so I didn't need to do the whole "extract cabling from a PSU and then male-to-male a female connector so I can feed the floppy power pin" routine. Silly me.

Though I realized I'd need a PCIe riser to actually take the adapter's output and feed it to the NIC, so I bought a PCIe 3.0 x16 riser from SilverStone (x4 risers don't really exist, and the only other alternative is the x1-to-x16 kind crypto miners use, which would give up quite a lot of lanes, so I went with x16).

This riser is ungodly long for my use case and wouldn't fit in the chassis, so with a rotary tool (my first purchase of a power tool, and my first use of one) I cut off everything past the x8 portion, figuring that damage in the x8 section is acceptable as long as it stays confined there. After a small isopropyl bath I put electrical tape on the exposed ends, and bam: makeshift x4 PCIe 3.0 riser!

How was the first start?

Going into the BIOS, the CPU reported 60°C at idle, which was extremely concerning.

So, sinking more money into this project, I bought two of Louqe's medium hats and four Noctua NF-F12 iPPC-2000s to match, to give it some airflow. That helped, so I went on to install TrueNAS SCALE.

My boot drive corrupted quickly, with errors galore; an attempted reinstall hit random extraction errors and kernel panics. Back in the BIOS, the CPU frequency was jumping between 4.1GHz and 5.3GHz, which was concerning me.

I figured that might be the cause, so I underclocked the machine: capped the TDP at 95W, capped the CPU at 4GHz (and the memory at 4000MHz to match), disabled all the lower power states and limited the OS's access to power management.

Since then, it’s been behaving well.
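
If you want to confirm from inside the OS that the caps actually stuck, a few seconds of polling frequency and temperature is enough. A minimal sketch using psutil, assuming a Linux host with the coretemp driver loaded (the "Package" label match is an assumption; sensor names vary by board):

```python
# Minimal sketch: poll CPU frequency and package temperature for a few seconds
# to confirm the BIOS frequency/TDP caps are actually being honoured.
# Assumes a Linux host with the coretemp driver; sensor labels vary by board.
import time

import psutil

for _ in range(10):
    freq = psutil.cpu_freq()  # aggregate current/min/max frequency in MHz
    temps = psutil.sensors_temperatures().get("coretemp", [])
    pkg = next((t.current for t in temps if "Package" in (t.label or "")), None)
    print(f"current: {freq.current:7.0f} MHz (max {freq.max:.0f} MHz)  "
          f"package: {pkg if pkg is not None else 'n/a'} °C")
    time.sleep(1)
```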

Did I use double-sided tape?

Yes. If you follow Louqe's official guide, you can install one drive behind the PSU and another with the bracket*. Physically, there was enough space behind the PSU for two 2.5″ drives, and the bracket was positioned such that the SSD always touched the VRMs, keeping it dangerously close to its operating temperature limit (apparently VRMs can go up to 100-110°C while SSDs top out at 70°C), so I just taped it to the wall of the GPU compartment.

* - the website says you can install up to three drives, but the guide and some Reddit posts only showed me how to install two

What did I do with the USB NVMe enclosure?

Opened it, took out the PCB, attached my old NVMe drive to it (again, with double-sided tape) and stuck it, with two layers of double-sided tape, to the wall of the GPU compartment next to the third SSD.

Also physically removed the LEDs because I don't like RGB


Sorry that this isn't my usual write-up with sources and that I did it in this shoddy FAQ style, but I've written it off the top of my head, compressing a week's worth of shenanigans. This is most certainly not advice, but I kinda feel proud of bodging things together and felt like sharing.

Glad to hear you’ve figured out most parts, hope the end result is awesome! :slight_smile:

Update (Nyaa~~!)

Goodbye Louqe Ghost S1, hello Jonsbo N1!


  RAID != Backup
          - Sun Tzu, Scary Monsters and Nice Sprites (from The Life of Pablo), c. 1984

Well, shit. So now I need to come up with a backup… and you know what, this Chia farming stuff sure sounds cool, and you know what… Linus Tech Tips did a video (yes, I’m convinced Linus Media Group is a psyop engineered to trick people into thinking system and network administration is fun) on the Jonsbo N1 (source).

How about I spend $321.78 on a case that goes for $150.00, because I'm not American and developing-country economic policy dictates that exports must be subsidized and imports penalized :3

MS Paint's primitive image scaling algorithms give it an aesthetic that I don't think anyone can really rival

So, yeah. I changed cases.

In exchange for the backplane and the kinda cool bluish-grey aesthetic, I got thermals so horrible that I now have a literal fan placed against the case to keep my SSDs from dying too quickly (no, warranty isn't on the table here: I've double-sided taped them so many times there are scuffs and dents from using a flathead as a prying tool, and the tape peeled the sticker off one of the SSDs).

The champion responsible for keeping my SSDs from reaching upwards of 85C

It appears that no amount of money can keep me away from the jank. This is definitely not a result of me not doing adequate enough research, nope, not my fault.

P.S. I did try running the Jonsbo with the case open and found that without active air circulation nothing changed, and when I put the shroud back on there was little to no difference in temperature.

@MSMSMSM </3 ESXi

The reason I went with ESXi was that I wanted something that works, is reliable, doesn't need tinkering to get the most out of it (unlike Proxmox) and "just works™". I even made a post about it on the Linus Tech Tips forum (source), which eventually helped me decide what my setup was going to be.

Well, it didn’t.

  • esxi-unlocker (source), the tool used to unlock hackintoshing capabilities, works reliably (at least in my setup) on ESXi 6 (I forget which minor release) but caused a bunch of problems on the (then) latest version of ESXi, so I downgraded to ESXi 6 and got to work

  • I wanted to make use of the Intel i225 that comes on my motherboard to connect to my home network (which has access to the broader Internet) so I can free up the second 10GbE port on my X550T.

    To do that, I need the “Community Networking Driver for ESXi” Fling which requires ESXi 7 or above (source) and I really don’t like having to choose between two functions that are pretty important to me.

  • ESXi uses either NFS shares or iSCSI block devices (formatted with VMFS) as datastores. Aside from the pfSense and TrueNAS VMs, everything else lived on a zvol in a zpool managed by the TrueNAS VM.

    Which means ESXi was not happy that the iSCSI device wasn't present at boot; I had to wait for the TrueNAS VM to come up, then manually have ESXi rescan everything so it could detect the device and the VMs on it (a rough idea of how that dance could be scripted is sketched after this list). This… was somehow still more reliable than NFS, which was about as reliable as the Bulldozer architecture.

  • I need to buy an ESXi license. It didn't occur to me that the free ESXi license comes with limitations; specifically, the limit of 8 vCPUs per virtual machine (source) proved to be a problem, as I was planning to dedicate a majority of my cores to CPU-intensive virtual machines.

    No, disabling SMT is not an option. I've already kneecapped this 11900K by keeping it in something with the ambient temperature of the innards of a geothermal reactor, under a cooler that can reliably dissipate about 65W of TDP (source). I need all the threads I can get.
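
As for the iSCSI dance mentioned above, it's the kind of thing that could be scripted rather than done by hand. A rough sketch, not what I actually ran: it assumes SSH is enabled on the ESXi host, the hostnames and credentials are placeholders, and it leans on paramiko plus the stock `esxcli`/`vmkfstools` rescan commands:

```python
# Rough sketch: wait for the TrueNAS VM's iSCSI portal to come up, then SSH
# into the ESXi host and rescan storage so the VMFS datastore (and the VMs on
# it) reappear. Hostnames and credentials are placeholders.
import socket
import time

import paramiko

TRUENAS_PORTAL = ("truenas.local", 3260)  # iSCSI target portal (placeholder)
ESXI_HOST = "esxi.local"                  # ESXi management address (placeholder)


def wait_for_port(host, port, timeout=600):
    """Block until a TCP port accepts connections or the timeout expires."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with socket.create_connection((host, port), timeout=5):
                return True
        except OSError:
            time.sleep(5)
    return False


if wait_for_port(*TRUENAS_PORTAL):
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(ESXI_HOST, username="root", password="hunter2")  # or key auth
    # Rescan all HBAs, then refresh VMFS volumes so the datastore shows up.
    for cmd in ("esxcli storage core adapter rescan --all", "vmkfstools -V"):
        ssh.exec_command(cmd)
    ssh.close()
```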

I found myself doing reboots and reinstallations and spending so much time SSH'd into the hypervisor that I went "wait, why am I using ESXi, again?". The whole premise of treating it like an appliance had been thrown out of the window, so let's use the power of Proxmox.

I spent an ungodly amount of time recovering the VMs from the VMFS zvol. In fact, I needed a fork of vmfs-tools (source) that took care of a bug which was otherwise preventing me from moving a pretty important VM. It dawned on me that if it weren't for that contributor's work, I'd probably be SOL, and that made me even more enthusiastic about Proxmox.
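
The recovery itself, once the right vmfs-tools build was in hand, boils down to mounting the VMFS volume read-only with `vmfs-fuse` and copying the VM folders out. A sketch of that step; the zvol device path and destination are placeholders, and it needs root plus FUSE on the recovery box:

```python
# Sketch of the recovery step: mount the VMFS-formatted zvol read-only with
# vmfs-fuse (from vmfs-tools, or the fork linked above) and copy the VM
# folders out. Device and destination paths are placeholders; needs root.
import shutil
import subprocess
from pathlib import Path

VMFS_DEVICE = "/dev/zvol/tank/esxi-datastore"  # zvol that backed the datastore (placeholder)
MOUNTPOINT = Path("/mnt/vmfs")
DESTINATION = Path("/mnt/recovered-vms")

MOUNTPOINT.mkdir(parents=True, exist_ok=True)
DESTINATION.mkdir(parents=True, exist_ok=True)

# vmfs-fuse exposes the datastore as a read-only FUSE filesystem.
subprocess.run(["vmfs-fuse", VMFS_DEVICE, str(MOUNTPOINT)], check=True)
try:
    for vm_dir in MOUNTPOINT.iterdir():
        if vm_dir.is_dir():
            shutil.copytree(vm_dir, DESTINATION / vm_dir.name)
finally:
    subprocess.run(["fusermount", "-u", str(MOUNTPOINT)], check=True)
```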

Debian <3

HARDER, DEEPER, FASTER, STRONGER

5× 16TB WD Ultrastar (WUH721818ALE6L4) enterprise drives: four in RAIDZ2, one used as an iSCSI device for Chia farming (for fun; there's no profit, and there's math to back that up).
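
In ZFS terms the layout is nothing exotic; TrueNAS builds it through its UI, but a hand-written approximation (with made-up pool and disk names) looks like this:

```python
# Hand-written approximation of the pool layout (TrueNAS actually builds this
# through its UI, and uses partition GPTIDs rather than raw disk names).
# Pool and disk names here are made up.
import subprocess

POOL = "tank"
RAIDZ2_DISKS = ["sda", "sdb", "sdc", "sdd"]  # 4x 16TB WUH721818ALE6L4

# The fifth Ultrastar stays out of the pool and gets shared on its own as an
# iSCSI extent for the Chia farm.
subprocess.run(["zpool", "create", POOL, "raidz2", *RAIDZ2_DISKS], check=True)
```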

Hold up, don’t you need an SSD for plotting?

Yes. I bought another Samsung 870 EVO, as my NVMe SSD was acting flaky (lots of "corrected errors"), and thanks to the power of RAIDZ2 I could swap the drive without any downtime. Since I'm now stuck with a QLC SSD (which I shouldn't be relying on anyway, on account of it being QLC) that costs a fuck ton, I went "eh, we'll give it to the Chia farm while it's plotting and figure out a use for it later"*

P.P.S. This is, strictly speaking, not true. I tried using it "productively" by creating four partitions on it and four on an 870 QVO, then using them as my read cache, write cache, deduplication store and metadata store (pairing each partition from the NVMe SSD with a partition from the SATA SSD). It worsened performance and nearly killed the otherwise purely hard-drive-based array.
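
For the curious, that experiment corresponds roughly to bolting four support vdevs onto the HDD pool. This is a reconstruction with placeholder partition names, not my exact commands, and it's shown as a warning rather than a recipe: ZFS won't mirror L2ARC devices at all, dedup/special vdevs become pool-critical the moment you add them, and zpool will likely demand `-f` because a mirror doesn't match the pool's RAIDZ2 replication level.

```python
# Reconstruction of the ill-fated support-vdev experiment, with placeholder
# partition names. Shown as a warning, not a recipe: ZFS does not mirror
# L2ARC devices (they only sit side by side), dedup/special vdevs become
# pool-critical once added, and zpool may require -f due to the mismatched
# replication level versus the RAIDZ2 data vdev.
import subprocess

POOL = "tank"
NVME = "nvme0n1p"  # partitions carved out of the Sabrent Rocket Q4 (placeholder)
SATA = "sdf"       # partitions carved out of the Samsung 870 QVO (placeholder)

commands = [
    ["zpool", "add", POOL, "cache", f"{NVME}1", f"{SATA}1"],              # read cache (L2ARC)
    ["zpool", "add", POOL, "log", "mirror", f"{NVME}2", f"{SATA}2"],      # write cache (SLOG)
    ["zpool", "add", POOL, "dedup", "mirror", f"{NVME}3", f"{SATA}3"],    # dedup table
    ["zpool", "add", POOL, "special", "mirror", f"{NVME}4", f"{SATA}4"],  # metadata
]
for cmd in commands:
    subprocess.run(cmd, check=True)
```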

Thankfully I was only tinkering with replication from my flash-based pool, so I could afford to purge it and start again, this time with no deduplication and no weird hijinks.

Don't worry, Chia plotting scratch space isn't what I'm going to relegate it to for its whole lifetime, I'm not a sadist (and don't worry, I'm using at most a third of the SSD's capacity so I don't wear it out too quickly; fingers crossed wear leveling does its magic). The QVO SSD is now in an external drive enclosure that I use for miscellaneous stuff.

Can I use this post as evidence to write off this build as a business expense?

No.

Wait, why?

I’m not god-tier soldier main, elmaxo (source).

Bill of materials

There’s a broken down list at the start of this post. That is no longer accurate. Neither is my original post (source).

| Part | Name | Standard pricing | Local pricing | Quantity | Amount spent | Notes |
| --- | --- | --- | --- | --- | --- | --- |
| Chassis | Jonsbo N1 | $150.00 (source) | $321.78 | 1 | $321.78 | Fuck me. I should've just bought a used Dell server, why did I bother with this SFF stuff? |
| Cooling | Noctua NH-L9i chromax.black CPU air cooler | $54.95 (source) | $65.69 | 1 | $65.69 | I still prefer the Noctua brown, tbh |
| Processor | Intel Core i9-11900K | $364.99 (source) | $441.92 | 1 | $441.92 | Going to be limited to a ~~95W~~ 65W TDP in BIOS. The 11900K was cheaper than the 11600 non-K variant for reasons I do not understand. |
| Storage | Sabrent 4TB Rocket Q4 NVMe PCIe Gen4 (SB-RKTQ4-HTSS-4TB) | $599.99 (source) | $755.07 | 1 | $755.07 | |
| Storage | Samsung 870 EVO 4TB 2.5″ SATA III 6Gb/s V-NAND SSD (MZ-77E4T0BW) | $448.99 (source) | $508.60 | 4 | $2034.40 | My 960 EVO from 2017 still goes strong. I hope these do too. |
| Storage | WD Ultrastar 16TB enterprise hard drive (WUH721818ALE6L4) | $277.00 (source) | $356.79 | 5 | $1783.95 | …and these were the cheapest in terms of $/TB |
| Storage HBA | SilverStone ECS06 6-port SATA Gen3 (6Gbps) non-RAID PCIe Gen3 x2 | $78.69 (source) | $55.62 | 1 | $55.62 | The only time I didn't get fucked |
| Converter | R42SR M.2 Key-M PCI-E x4 extension cord | $19.90 (source) | $46.56 | 1 | $46.56 | Shipping and duty make this… not economical |
| Converter | ASUS ROG Strix Arion aluminium alloy M.2 NVMe external SSD enclosure | $76.68 (source) | $62.49 | 1 | $62.49 | Not going to be able to resell this if I ever want to |
| Converter | Lian Li LANCOOL II-4X 3.1 Type-C cable (LANII-4X) | $14.99 (source) | $13.67 | 1 | $13.67 | I could've gotten it for $12.08 if I went to a different retailer |
| USB hub | Generic USB 3.1 10-port hub with external power | N/A | $52.54 | 1 | $52.54 | |
| Network card | Intel X550-T2 (X550T2BLK) converged network adapter | $398.95 (source) | $475.75 | 1 | $475.75 | Works out of the box, can't even complain. |
| Memory | G.Skill Trident Z RGB 64GB (2×32GB) DDR4-4400 (F4-4400C19D-64GTZR) | $339.99 (source) | $528.35 | 1 | $528.35 | I downclocked this thing to 3500MHz to match the CPU underclock. Someone end me. |
| Power supply | Corsair SF600 Platinum fully modular, 80+ Platinum | $144.99 (source) | $141.54 | 1 | $141.54 | The 600W variant for some reason is still cheaper than the 450W variant |
| Motherboard | Gigabyte Z590I VISION D (rev. 1.0) | $220.97 (source) | $380.48 | 1 | $380.48 | Amazon US lists this for $299.00 (source) |
| eGPU enclosure | OWC Akitio Node Titan Thunderbolt 3 external GPU enclosure | $249.00 (source) | $482.82 | 1 | $482.82 | Guess what? 11th Gen Intel processors don't work with macOS. RX570 from an old rig, 10GbE card, SATA HBA in an ITX case… pick two |
| **Total** | | **$5947.58** (at quantity) | | | **$7642.63** | ~28.5% difference in price, amounting to $1695.05. |
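
The premium figure in the total row comes straight from the two column totals; for anyone double-checking:

```python
# Double-checking the local-market premium quoted in the total row above.
standard_total = 5947.58  # "Standard pricing" column, quantity-adjusted (USD)
local_total = 7642.63     # "Amount spent" column (USD)

premium = local_total - standard_total
print(f"Extra paid locally: ${premium:.2f}")                  # $1695.05
print(f"Relative premium:   {premium / standard_total:.1%}")  # ~28.5%
```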

A trip from my city to LAX and back costs $849.73. I still need a physical fan to keep this machine's thermals stable, and it's really, really loud.
