Python script not working on Threadripper 7995WX + 2x 4090 + 1 TB RAM setup, but working on AWS

I deployed a Python script that uses ProcessPoolExecutor with max_workers=5 on an AMD Threadripper Pro 7995WX with two NVIDIA RTX 4090 GPUs. The script processes 500 very large images (8192×8192 RGB), segmenting the objects in each one and classifying them. It runs fine for 10-100 images, but almost always crashes with no obvious cause somewhere past the 100-image mark. It used to throw a segmentation fault and dump core occasionally, but after some improvements it no longer does even that.
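In case it helps, the overall structure is roughly this (a heavily trimmed sketch; segment_and_classify is just a placeholder for the real segmentation/classification code, not an actual function in my script):

```python
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

def segment_and_classify(path):
    # Placeholder for the real GPU segmentation + classification step;
    # the actual models and libraries are omitted here.
    return []

def process_image(path):
    # Each worker loads one 8192x8192 RGB image and runs the full pipeline on it.
    objects = segment_and_classify(path)
    return {"image": path, "objects": len(objects)}

if __name__ == "__main__":
    images = sorted(str(p) for p in Path("images").glob("*.png"))  # ~500 files
    with ProcessPoolExecutor(max_workers=5) as pool:
        for result in pool.map(process_image, images):
            print(result)
```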

Memory isn't the bottleneck: the machine has 1 TB of RAM, the GPUs are not saturated, and GPU memory usage sits around 50% including spikes (monitored through nvidia-smi). My debug logs record nothing, and running under Valgrind turns up nothing out of the ordinary either.
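The GPU numbers come from polling nvidia-smi; a watcher along these lines (a simplified sketch) also logs power draw and temperature, which should show any thermal or power spikes leading up to a crash:

```python
import csv
import subprocess
import time

# Poll nvidia-smi once per second and append the readings to a CSV, flushing
# each time so the last samples survive even if the machine goes down hard.
FIELDS = "timestamp,index,memory.used,memory.total,power.draw,temperature.gpu"

with open("gpu_log.csv", "a", newline="") as f:
    writer = csv.writer(f)
    while True:
        out = subprocess.run(
            ["nvidia-smi", f"--query-gpu={FIELDS}", "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True,
        )
        for line in out.stdout.strip().splitlines():
            writer.writerow([field.strip() for field in line.split(",")])
        f.flush()
        time.sleep(1)
```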

To top it all off, the same 500-image script runs fine on an AWS g6.12xlarge (AMD EPYC 7R13, 48 cores, 200 GB RAM, NVIDIA L4 GPUs with 24 GB each), which is far less powerful than my local setup. Both machines run the same Ubuntu version with slightly different NVIDIA drivers (but should that matter, given that the script does run for small batches on my machine?).

My leading theories are:

  1. Thermal throttling
  2. Consumer-grade hardware behaving differently from AWS instances
  3. Faulty equipment

Am I missing something? Has anyone encountered this issue?

So now the process just exits without producing any output at all?
Is there anything relevant in your journalctl logs or even dmesg?

Not exactly. It crashes the entire computer and a forced reboot is required. Nothing relevant in journalctl or dmesg.

Ouch. Can’t even SSH into it?

What size is your power supply? Have you tried setting a more conservative power limit for those 4090s?

Have you run any tests to verify that the RAM in that system is not faulty?

Power supply is 2 kW; no, we haven't set a power limit on the 4090s.

Yes, we have tested the RAM and it is not faulty.

Not much to go on, but my guesses would be:

  1. System instability
  2. Faulty hardware

I don't know the script or libraries you are using. Is this a CPU-bound or GPU-bound workload, or both?

How have you tested whether the memory works correctly? With this much ECC RAM you'd want to monitor the corrected errors that get reported; otherwise single-bit errors keep getting silently corrected until the system catastrophically fails on an uncorrectable two-bit error.
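On Linux the ECC counters are exposed through EDAC sysfs, assuming the EDAC driver for your platform is loaded (rasdaemon's ras-mc-ctl is another way to get at the same data). Something like this dumps the per-controller counts:

```python
from pathlib import Path

# Print corrected (ce_count) and uncorrected (ue_count) ECC error counters
# for every memory controller the EDAC driver exposes. If the platform's
# EDAC module isn't loaded, the directory will simply be empty.
for mc in sorted(Path("/sys/devices/system/edac/mc").glob("mc*")):
    ce = (mc / "ce_count").read_text().strip()
    ue = (mc / "ue_count").read_text().strip()
    print(f"{mc.name}: corrected={ce} uncorrected={ue}")
```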

It is both.

Do you mind elaborating on system instability?

If that's something you can test quickly, it would rule out an undersized PSU, or a PSU that doesn't handle the transients from two 600 W GPUs properly.
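For a quick test, something along these lines caps both cards to a conservative limit (nvidia-smi needs root to change the limit, and 300 W is just an arbitrary test value, not a recommendation):

```python
import subprocess

# Cap the board power limit on both 4090s (GPU indices 0 and 1) to rule out
# PSU transients. Requires root privileges; 300 W is only an arbitrary
# conservative value for the test run.
for gpu_index in (0, 1):
    subprocess.run(["nvidia-smi", "-i", str(gpu_index), "-pl", "300"], check=True)
```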