The Frontier supercomputer is equipped with 9,472 AMD EPYC 7A53 CPUs and 37,888 AMD Instinct MI250X GPUs.
However, the team used only 3,072 of those GPUs to train an LLM with one trillion parameters.
The paper also notes that a key challenge in training an LLM of this size is memory: the model requires at least 14 terabytes of it.
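The 14-terabyte figure can be sanity-checked with a back-of-the-envelope calculation. The per-parameter breakdown below is an assumption for illustration (a common mixed-precision Adam accounting), not taken from the paper itself:

```python
def training_memory_tb(n_params: float, bytes_per_param: float) -> float:
    """Estimate minimum training memory in (decimal) terabytes."""
    return n_params * bytes_per_param / 1e12

# Assumed per-parameter cost for mixed-precision Adam (hypothetical
# accounting, not from the paper): fp16 weights (2 B) plus fp32 master
# weights, momentum, and variance (4 B each) = 14 bytes per parameter.
BYTES_PER_PARAM = 2 + 4 + 4 + 4

print(training_memory_tb(1e12, BYTES_PER_PARAM))  # → 14.0
```

Under this accounting, one trillion parameters works out to roughly 14 TB, far beyond any single GPU, which is why the model state must be sharded across thousands of devices.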