RTX Pro 6000 Blackwell in the house

RTX Pro 6000

96GB of VRAM. NIM license for the cloud stuff. It’s nuts.

What tests/workloads do you want to see?

Livestream

15 Likes

On a not-so-serious note… Walking into any town in S.T.A.L.K.E.R. 2. What’s the frame rate like?

1 Like

OpenCL benchmark! I’m interested in whether the FirePro W8100 I rescued from a dumpster is faster at FP64. Not in the RTX 6000’s wheelhouse, but it makes for a hilarious comparison.

1 Like

How performant Looking Glass is when running the Looking Glass client on the host and, at the same time, a vGPU-backed VM running the Looking Glass server.

1 Like

I just got three RTX 5000 Ada GPUs in from Puget Systems. We bought them for software that doesn’t even take advantage of the GPU yet. It’s in alpha, and dependent on the Army Corps of Engineers to push it into the mainstream, so those Ada cards will be EOL before then, TBH.

Game on 1 of the 4 MIG sections! What average fps can you get with all 4 running games?

Can you plug in 4 monitors and 4 sets of keyboards/mice and play like it’s an N64 (kinda)?

1 Like

Personally, I would like to see how much more performant this is than the Max-Q version, but I bet you don’t have a Max-Q version on hand.

He should be able to approximate Max-Q performance by running `nvidia-smi -pl 300`.

1 Like

Livestream soon™

3 Likes

Around when is the livestream? Will it be on YouTube?

Starts in about 48 minutes as of this posting: https://www.youtube.com/watch?v=WUw9XUOAFaY

2 Likes

I’d like to see inference performance on the best unquantized 70B LLMs: Cogito 70B, Qwen2.5 72B, Llama 70B, others? They should fit entirely?

Then performance on supposedly better, larger LLMs that won’t fully fit on the GPU. Qwen3-235B-A22B?
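Whether a given model fits can be ballparked from parameter count and precision. A rough sketch (weights only; KV cache and runtime overhead are ignored, so real requirements are higher):

```python
def weight_gb(params_b: float, bytes_per_param: float) -> float:
    """Approximate weight memory in GB: N billion params * bytes per param."""
    return params_b * bytes_per_param

CARD_GB = 96  # RTX Pro 6000 Blackwell

for model, params_b in [("70B dense", 70), ("Qwen3-235B-A22B", 235)]:
    for prec, bpp in [("BF16", 2.0), ("INT8", 1.0), ("4-bit", 0.5)]:
        gb = weight_gb(params_b, bpp)
        fits = "fits" if gb < CARD_GB else "needs offload"
        print(f"{model} @ {prec}: ~{gb:.0f} GB ({fits})")
```

Worth noting: a truly unquantized (BF16) 70B is ~140 GB of weights alone, so it won’t fit in 96 GB; 8-bit does. And since the A22B suffix only describes active parameters, the full 235B weight set still has to live somewhere even at 4-bit.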

Aside: I recently found your YouTube channel and this forum. Great job!

2 Likes

That would be neat.

Such a good-looking card that it costs more than some used cars I’ve been looking at.

Amazing to see that you got this card!

I am really curious to see how the RTX Pro 6000 compares against the H100 SXM5 in particular, since I have one in a Silverstone RM52! At that point we are mostly looking at how specific kernels perform on the wider HBM3 memory interface, with its 80 memory channels, versus the 8 channels on the GB202 (albeit at a much, much faster data rate). All my other A100s and the H100 are busy at the moment, but if you have a preferred benchmark I can find some downtime and drag race them for comparison!

Keep up the great work Wendell!

EDIT:
Now I see the benchmark; just waiting for the results on stream! (I got a max of 250 at its 300 W cap, running similar parameters on the A100.)

2 Likes

I’d love to see DaVinci Resolve performance.

Wendell, I really appreciated your live stream this evening. I have been having anxiety regarding my order of two Max-Q cards for my RAK system and a project I am embarking on. Seeing how well the card performed at the 300 watt power level really eased my worries. I do hope the process of getting them working in Linux will be a bit smoother when they arrive here, but that you were able to get it working is a good sign.

Thank you.

Youtuber3000 :wink:

I was on the fence about the Max-Q variant too. Ultimately I decided to go with the 600 W variant; there was no price difference between the two, and I can always power-constrain the devices. In fact I’ll have to until I can upgrade the power in my office.

1 Like

I actually considered getting TWO of the 600 W Workstation variant, but I wondered how the flow-through cooling would work in that setup, so I decided on the Max-Q variants for that reason. The reduction in power usage is just a bonus, I guess. As I noted during the livestream: 80% of the performance for 50% of the watts. Looks like a good deal to me.
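For what it’s worth, that 80%-for-50% figure works out to a tidy perf-per-watt win. Trivial arithmetic using the numbers quoted above (normalized, not measured):

```python
# Perf-per-watt of the Max-Q (300 W) vs the 600 W Workstation card,
# using the rough "80% of the performance" figure from the livestream.
full_watts, maxq_watts = 600, 300
full_perf, maxq_perf = 1.00, 0.80  # normalized to the 600 W card

ratio = (maxq_perf / maxq_watts) / (full_perf / full_watts)
print(f"Max-Q delivers {ratio:.1f}x the perf/W of the 600 W card")  # 1.6x
```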

1 Like

I posted about my RTX PRO 6000 Workstation Edition in another thread: RTX PRO 6000 Blackwell Workstation Ordered - #52 by level1mo

Some highlights

Use a good cable, folks.

Installed drivers on Windows with the Nvidia App. On Fedora, installed the RPM Fusion nvidia package and used akmods to build the open-source kernel module:

sudo sh -c 'echo "%_with_kmod_nvidia_open 1" > /etc/rpm/macros.nvidia-kmod'
sudo akmods --kernels $(uname -r) --rebuild
# Important: wait for the kernel modules to finish compiling before rebooting.
# This watches the akmod build processes so you can see when they exit:
watch -n 2 "ps aux | grep kmod"

More info in the original posts I linked to above.

2 Likes