I deployed a Python script that uses ProcessPoolExecutor with max_workers=5 on an AMD Threadripper Pro 7995WX CPU and 2 NVIDIA RTX 4090 GPUs. The script processes 500 very large images (8192×8192 RGB), segments objects, and classifies them. I can get it to run for 10-100 images, but beyond the 100-image mark it almost always crashes with no obvious reason. It used to throw a segmentation fault and core-dump occasionally, but after some improvements it no longer does.
I have plenty of memory (1 TB RAM), the GPUs are not saturated, and GPU memory usage stays around 50% including spikes (monitored through nvidia-smi). My debug logs don't record anything, and valgrind reports nothing out of the ordinary either.
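For context, the driver loop is roughly this shape. This is a simplified sketch, not the actual code (process_image stands in for the real segmentation + classification step); I call .result() on every future because, as far as I understand, a worker that dies abruptly only surfaces as BrokenProcessPool at that point:

```python
import logging
from concurrent.futures import ProcessPoolExecutor, as_completed
from concurrent.futures.process import BrokenProcessPool

logging.basicConfig(filename="batch.log", level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")

def process_image(path):
    """Stand-in for the real segmentation + classification step."""
    ...

def run_batch(image_paths):
    with ProcessPoolExecutor(max_workers=5) as pool:
        futures = {pool.submit(process_image, p): p for p in image_paths}
        for fut in as_completed(futures):
            path = futures[fut]
            try:
                fut.result()
                logging.info("done %s", path)
            except BrokenProcessPool:
                # A worker died without raising a Python exception
                # (segfault, OOM kill, ...); it only surfaces here.
                logging.error("worker process died while handling %s", path)
                raise
            except Exception:
                logging.exception("failed %s", path)
```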
To top it all off, the full 500-image run works on an AWS g6.12xlarge (AMD EPYC 7R13, 48 cores, 200 GB RAM, NVIDIA L4 GPUs with 24 GB each), which is far less powerful than my current setup. Both machines run the same Ubuntu version with slightly different NVIDIA drivers (but should that matter, given that the script does run for small batches on my machine?).
My leading theories are:
Thermal throttling
Consumer-grade setups are different from AWS
Faulty equipment
Am I missing something? Has anyone encountered this issue?
I don't know the script or libraries you are using. Is this a CPU-bound or GPU-bound workload, or both?
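If you're not sure, logging CPU and GPU utilization alongside temperatures and clocks for the whole run would answer that, and speak to your thermal-throttling theory at the same time. A rough sketch (assumes psutil is installed and nvidia-smi is on PATH; tune the interval):

```python
import subprocess
import time

import psutil  # third-party: pip install psutil

GPU_QUERY = "index,utilization.gpu,temperature.gpu,clocks.sm,power.draw"

def gpu_stats():
    # One entry per GPU, e.g. "0, 97, 74, 2520, 431.20"
    out = subprocess.run(
        ["nvidia-smi", f"--query-gpu={GPU_QUERY}", "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    )
    return "; ".join(line.strip() for line in out.stdout.splitlines())

psutil.cpu_percent()  # prime the counter; the first reading is meaningless
while True:
    print(f"{time.strftime('%H:%M:%S')} cpu={psutil.cpu_percent()}% gpu=[{gpu_stats()}]",
          flush=True)
    time.sleep(5)
```

Redirect the output to a file and see what the last few samples look like right before a crash.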
How have you tested whether the memory is working correctly? With this much ECC RAM you'd want to monitor the corrected errors that get reported; otherwise single-bit errors keep getting silently corrected until the system fails catastrophically on a two-bit error.
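On Linux those counters are exposed through the EDAC sysfs interface (assuming the EDAC driver for your platform is loaded; edac-util reports the same numbers if you have it installed). A quick sketch to watch them during a run:

```python
import time
from pathlib import Path

EDAC_MC = Path("/sys/devices/system/edac/mc")

def edac_counts():
    # One (corrected, uncorrected) pair per memory controller.
    counts = {}
    for mc in sorted(EDAC_MC.glob("mc[0-9]*")):
        counts[mc.name] = (
            int((mc / "ce_count").read_text()),
            int((mc / "ue_count").read_text()),
        )
    return counts

while True:
    for mc, (ce, ue) in edac_counts().items():
        print(f"{mc}: corrected={ce} uncorrected={ue}", flush=True)
    time.sleep(60)
```

A steadily climbing corrected count during the batch would point at a bad DIMM long before you ever see an uncorrectable error.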
If that's something you'd be able to test quickly, it could rule out an undersized PSU, or one that doesn't handle the transients from two 600 W GPUs properly.
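A related quick test, if you want another data point on the PSU side: cap the power limit on both 4090s well below stock and rerun the batch; if the crashes stop, transient load becomes the prime suspect. Rough sketch (nvidia-smi needs root for this; 300 W is just an arbitrary conservative cap):

```python
import subprocess

CAP_WATTS = 300  # arbitrary conservative cap, well below the 4090's stock limit

def cap_gpu_power(watts: int = CAP_WATTS) -> None:
    # Enumerate GPU indices, then set a software power limit on each (needs root).
    indices = subprocess.run(
        ["nvidia-smi", "--query-gpu=index", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    for i in indices:
        subprocess.run(["nvidia-smi", "-i", i, "-pl", str(watts)], check=True)

if __name__ == "__main__":
    cap_gpu_power()
```

The limit resets on driver reload/reboot, so it's a cheap, reversible experiment.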