If your budget is really low, like the $150 range, I’d shoot for a Tesla M40.
It only has two encoders, but it’ll have superior AI performance, and it has 12 or 24 GB of VRAM depending on which model.
It’s the same chip as the Maxwell Titan.
They’re cheap enough that you can get 2-3 of them for the price of a newer card with multiple encoders: more power consumption, but more performance.
I was thinking around $5k-7k.
I know server-grade NVIDIA GPUs are more scattered when it comes to features, and they don’t really mix transcoding/rendering and AI.
I’ll run the GPU in a Kubernetes environment that shares it to provide internal services.
I did look at the A100, but from what I read, it was made only for AI: they didn’t include NVENC or NVDEC, nor ray tracing (don’t need that anyway).
“Note: Because the A100 Tensor Core GPU is designed to be installed in high-performance servers and data center racks to power AI and HPC compute workloads, it does not include display connectors, NVIDIA RT Cores for ray tracing acceleration, or an NVENC encoder.”
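For the Kubernetes side, here’s a minimal sketch of a pod that requests a GPU through the NVIDIA device plugin. This assumes the NVIDIA device plugin DaemonSet is already deployed in the cluster; the pod name and image are placeholders:

```yaml
# Hypothetical transcoding pod; requires the NVIDIA k8s device plugin.
apiVersion: v1
kind: Pod
metadata:
  name: transcoder
spec:
  restartPolicy: Never
  containers:
    - name: ffmpeg-nvenc
      image: example.com/ffmpeg-nvenc:latest   # placeholder image
      resources:
        limits:
          nvidia.com/gpu: 1   # one whole GPU by default
```

One caveat for sharing across services: `nvidia.com/gpu` is allocated as a whole device by default, so actually splitting one card between pods needs either the device plugin’s time-slicing config or MIG partitioning, and MIG is only available on Ampere-class cards like the A100/A30, not on older cards like the V100 or M40.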
This table explains well which GPUs support transcoding.
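As a quick sanity check on whichever card you land on, you can also ask ffmpeg which NVENC encoders its build exposes. This is just a sketch; it assumes ffmpeg is on the PATH and was compiled with NVENC support:

```shell
# List the NVENC encoders this ffmpeg build knows about;
# prints a fallback message if ffmpeg is missing or has no NVENC.
ffmpeg -hide_banner -encoders 2>/dev/null | grep -i nvenc || echo "no NVENC encoders found"
```

Note this only tells you what the ffmpeg binary supports; the driver and card still have to support NVENC at runtime.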
AI is still in the research phase here, so that GPU would mostly be for transcoding, but it could also run AI tests.
When we’re ready, I might find another card more suitable for production.
I used my old Titan V (a desktop card with the same die as the V100) for multiple NVENC/Plex streams plus some of my AI work before getting my NVIDIA A30, and it was solid. So a V100 or A30 might be of interest to you.