I have this dual MI210 machine and I am running benchmarks on it. That's cool, but I'd love to show some computational fluid dynamics demos, or other workloads that I can get up and running on ROCm 5.
What sorts of cool visualizations have you seen that I could bring in and show? Do you work on anything with CFD or “big gpu systems” and can you tell us a little about what it is? (for me to include in a future video)
I’ve got some magnetohydrodynamic studies I could compile so they can be run/benchmarked “standalone”… the catch is they won’t take advantage of any GPGPU, but they are cross-platform and will run on x86 on Mac, Windows, and Linux, as well as on ARM on Mac.
Post-processing is part of the compiled study app, so the visuals can be made to look good; the B-field, current, velocity fields, pressure, and Reynolds numbers can all be visualized at the same time.
My area of marginal expertise is turbulent incompressible MHD simulations; think Earth’s dynamo as opposed to plasma physics.
There is a backstory as to why the current simulations don’t take advantage of the GPU: a problem big enough to warrant GPU offload will typically use several hundred gigabytes of working memory, and GPUs don’t have that much. Paging memory over the PCIe bus is even slower than the CPU’s main-memory access.
MHD simulations are fairly memory bound, since they need to keep a big solution matrix in memory while it is being worked on.
I haven’t been keeping as up to date as I probably should with OpenFOAM, but I believe there are ROCm implementations of the PARALUTION solver for OpenFOAM, though nothing applicable to MHD problems. In fact, I don’t think OpenFOAM’s MHD module even supports turbulent flow at the moment. There also aren’t as many “useful” boundary conditions available in OpenFOAM as in COMSOL.
PS: I believe everyone should have two dishwashers.
I have some (pretty bad) OpenACC code in C++ that does numerical integration. A few pieces rely on cuBLAS, but we could convert those calls to hipBLAS. I don’t think OpenACC is supported by ROCm itself, but GCC 10+ can target Radeon architectures with OpenACC. I doubt GCC’s OpenACC is as well optimized as the NVIDIA HPC SDK’s, though. Not quite what you asked for, but we could probably make it work on the hardware.
Random Multiphysics FEA tangent I thought was funny:
The open-source Multiphysics Object-Oriented Simulation Environment (MOOSE) suite that Idaho National Laboratory maintains got in trouble when someone tried to commit a particular type of physics module that is export controlled. Literally forbidden software: can’t have people understanding how a certain branch of physics works, that would be illegal.
Incidentally, MOOSE can already take advantage of ROCm using HIP for some operations.