Hello everyone! I’ve been trying to run a Simscape simulation for a robot in Matlab 2022a but, for some reason, on a system equipped with a Radeon 6650 XT I get a really weird memory leak that fills up all the RAM, all the GPU memory, and even an additional 4.5GB of “virtual” (shared) GPU memory. I need to run parallel simulations on different machines due to time constraints and poor optimization somewhere in the worker. Funnily enough, on two systems equipped with Nvidia GPUs (an old laptop with a GT 635M and my desktop with a 2070 Super) this doesn’t happen and it’s smooth sailing.
I tried updating the AMD driver, but it didn’t help. Matlab just takes up all the memory it can until it crashes to the desktop without any error message on screen. I tried disabling OpenGL to see if that would help, and closing the Simscape window in the hope of unloading the GPU. No luck, it does the same thing over and over again.
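For the record, by “disabling OpenGL” I mean forcing Matlab’s software renderer; as far as I remember the command is roughly this (the opengl function still exists in 2022a, though MathWorks is phasing it out):

    opengl('save', 'software')   % use software OpenGL from the next startup onwards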
Unfortunately I can’t share the robot I’m working on due to it being under NDA, so I know that makes it harder for everyone else to debug. I couldn’t find much on the internet about it either. A friend with an AMD 5600U based laptop is not having issues, though he’s running a different simulation.
If anyone has any idea on the matter I’ll be happy to hear you out, thanks!
As a person who uses Matlab all the time, I’m afraid to tell you that the only real solution is to switch to a system with an Nvidia GPU. There are two competing GPU compute interfaces: Nvidia’s CUDA and everyone else’s (OpenCL). The sad fact of the matter is that MathWorks has chosen CUDA, plus Intel-based acceleration on the CPU side (which can be overridden).
You’re in luck though. If you’re on Linux, it’s the window manager that causes these issues. Switch from Wayland to X11 and see if it launches.
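A quick way to check which one you’re on (example command; your login screen will usually also let you pick an Xorg session):

    echo $XDG_SESSION_TYPE   # prints "wayland" or "x11"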
If you’re not on Linux, we need more information: driver version, operating system, versions of the suite, and maybe export or dump your Matlab settings. Maybe we can spot something that could be causing it.
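Most of that can be pulled from inside Matlab itself, something along these lines:

    ver           % Matlab release plus every installed toolbox and its version
    opengl info   % GPU vendor, renderer and driver version as Matlab sees them
    prefdir       % path to the folder holding your preferences/settings files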
Yep, AMD GPUs are not supported for computing, but that’s not what I’m trying to do. The viewport for the Simscape simulation is the only part that’s GPU accelerated; all the physics computation is done on the CPU. So I don’t think that’s the issue.
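You can actually verify that, assuming the Parallel Computing Toolbox is installed: Matlab’s GPU computing is CUDA-only, so it doesn’t even see the Radeon as a compute device.

    gpuDeviceCount   % returns 0 on an AMD-only system: no CUDA device, no GPU compute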
It should default to software rendering when the workload is not GPU accelerated. Only the viewport that shows the robot is accelerated, since it’s a 3D model that gets animated after all the physics calculations have been done.
Two of the three machines are equipped with Intel CPUs, a 3630QM and a 4790K.
I’m not really sure, but it’s using the robotics package (duh) and spawning quite a lot of Java processes. I should dig a bit further for a definitive answer.
RAM size is not an issue, because the simulation runs fine on my old laptop equipped with 16GB of RAM. It’s not as heavy as Stable Diffusion; it’s “just” physics calculations done on a model. That’s it.
The model is a plain old Simulink block diagram with all the variables put in. I didn’t use any CUDA conversion to get acceleration for the physics calculations (it would’ve been stupid of me to think that a GT 635M could do the same job as a 2070S). The CPU is more than enough for it, even if it takes time due to Matlab being a bit stupid about it.
Adrenalin 23.3.2, Windows 10 Pro 21H2, Matlab 2022a. Didn’t change any settings in Matlab, all stock.
I opened a ticket just now. There’s no error report, so I can’t go through that route, which would’ve been ideal. The forums don’t have much about this issue.
You mean opening a new thread on the matter?
Exactly! In my case the viewport stays empty or, if I minimize the program, comes back up as a black box. I can’t even render a single component of said robot.
I’m in the process of rebuilding (re-compiling?) the model from the STEP files I got from Inventor to see if that helps.
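In case it’s useful to anyone: what I mean is the usual CAD re-import route. Assuming the Simscape Multibody Link export from Inventor (an XML description plus the STEP geometry files), the rebuild is something like this, with a placeholder file name:

    smimport('robot_export.xml')   % regenerates the multibody model from the CAD export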
This really grinds my gears. Stable Diffusion completely ignored my RX 6700 and rendered on the CPU only (a Ryzen, somewhat ironically). I had to actually dig a 1050 Ti out of my junk pile to run SD at acceptable speeds. Render time went from 15-20 minutes to about 2-3 minutes.
Off topic, but you can now use ROCm on Linux for SD, or DirectML on Windows with about a 50% speed penalty; at least you can use them with AUTOMATIC1111.
You might have to do some shenanigans to make it use ROCm on Linux, since it’s a GFX10 card.
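By shenanigans I mean the usual environment-variable override before launching; 10.3.0 matches the gfx103x RDNA2 parts, the RX 6700 included:

    export HSA_OVERRIDE_GFX_VERSION=10.3.0   # make ROCm treat the card as a supported gfx1030 part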
You won’t be able to use xformers for memory-efficient attention (it’s CUDA-only), and I can’t remember if you can use fp16 or not.
So I tried my best to explain the situation and got Matlab to crash with an actual error once. In the first email exchange, support told me to start it with the Mesa flag from the command prompt. That took me a step forward in the sense that, when starting the simulation, system memory would no longer be filled to the brim; it took the “normal” amount of RAM it takes on the other systems. But the viewport was still broken and the program didn’t get anything done anyway.
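For anyone finding this later: I believe that’s the standard software OpenGL startup switch, which swaps the GPU out for the Mesa-based software renderer, i.e. launching as

    matlab -softwareopengl   % render with software (Mesa) OpenGL instead of the GPU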
The second ticket I opened was met with them linking to THIS THREAD, opened 3 damn years ago, and saying “we know we have issues with AMD cards”. That’s it.
I hadn’t seen thousands of dollars’ worth of software work this poorly since AutoCAD for Mac.
Yeah, I saw the comment. It also seems obvious to me that if you have any sort of AMD GPU installed in your system, Matlab won’t work properly. Which is INSANE if you ask me; the stupidest thing I’ve been having to deal with lately.