The way I see it, AMD usually has two key advantages for ML these days:
Better price/performance ratio. This makes AMD the better hardware choice for the budget-minded, but a more expensive NVIDIA RTX card will still win on raw performance in the end. (On the CPU side, Intel still has the edge in single-threaded performance.)
Better OpenCL and Linux support. If you want a multi-user setup where people can submit batch jobs to your machine (say you are on the faculty at a university), Linux makes this extremely easy to set up, and it is rock solid. It also lets you run headless, so the GPU wastes no cycles rendering a desktop. If you want to squeeze every last drop out of the system, AMD's Linux support sure helps here.
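As a quick sanity check on a headless box, something like this PyOpenCL snippet (assuming you have installed the pyopencl package and the vendor's OpenCL driver) will list every device the runtime can actually see:

```python
# List every OpenCL platform/device the runtime exposes -- handy on a
# headless box with no desktop, to confirm the GPU is really usable.
# Assumes the pyopencl package and a vendor OpenCL driver are installed.
import pyopencl as cl

for platform in cl.get_platforms():
    print(f"Platform: {platform.name}")
    for device in platform.get_devices():
        print(f"  Device: {device.name} "
              f"({device.global_mem_size // (1024**2)} MB global memory)")
```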
However, CUDA is the performance king at the moment, no disputing it. OpenCL might reach 80-85% of CUDA on a good day.
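To give a feel for what working with OpenCL is like, here is a rough vector-add sketch using PyOpenCL (again assuming the pyopencl package is installed). The kernel itself is plain OpenCL C and runs unmodified on AMD or NVIDIA hardware:

```python
import numpy as np
import pyopencl as cl

a = np.random.rand(50000).astype(np.float32)
b = np.random.rand(50000).astype(np.float32)

ctx = cl.create_some_context()   # picks an available OpenCL device
queue = cl.CommandQueue(ctx)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# The kernel is plain OpenCL C: one work-item per array element.
program = cl.Program(ctx, """
__kernel void add(__global const float *a,
                  __global const float *b,
                  __global float *out) {
    int gid = get_global_id(0);
    out[gid] = a[gid] + b[gid];
}
""").build()

program.add(queue, a.shape, None, a_buf, b_buf, out_buf)

out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)
assert np.allclose(out, a + b)
```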
Another thing you might want to consider is a beefy FPGA. When it comes to performance per watt, nothing comes close to beating one, which sure helps if you want to put the thing in a mobile, battery-driven unit. On the flip side, programming an FPGA means working at an extremely low level (typically in a hardware description language like Verilog or VHDL), so it will take a lot of time to get a neural network onto it. Depending on your use case, though, an FPGA could be a good alternative.
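If you do go down that road, a common first step is quantizing the network to fixed-point, since FPGAs handle integer multiply-accumulates far more cheaply than floats. Here is a minimal sketch of symmetric 8-bit quantization; this is my own illustration of the general idea, not tied to any particular FPGA toolchain:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor quantization: float32 -> int8 plus a scale."""
    scale = np.abs(weights).max() / 127.0   # map the largest weight to +/-127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

# Hypothetical weight matrix, just to show the round-trip error involved.
w = np.random.randn(256, 128).astype(np.float32)
q, scale = quantize_int8(w)
print("max abs error:", np.abs(q.astype(np.float32) * scale - w).max())
```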