I want to run a Docker image that requires more than 64 GB of RAM, so I added more RAM to the PC; I now have 96 GB total. But Docker has now decided that 32 GB is the limit, even though before I added the RAM it was possible to use more.
I have a 1950X, and when I checked in Ryzen Master the memory mode is set to Local. I did not find any way to set it to Distributed; I tried all the BIOS options for Memory Interleaving on my X399 GAMING PRO CARBON AC. (I am not sure if this option has anything to do with it or not, I just found it weird that I cannot change it.)
Sorry, I can’t see any obvious limitations on RAM that should prevent you from allocating more than 32GB to a VM on that version of Windows. I’m on an older build and only have 32GB, so I can’t try a VM with anything larger, but Hyper-V will allow me to set it - even though I can’t run it.
You could try disabling NUMA spanning in Hyper-V to see if that helps. You could also try setting the VM to dynamic memory (it will start up with only 1024MB) and see how much is allocated once you start your containers up.
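For reference, both of those can be done from an elevated PowerShell prompt - a rough sketch only, and note that "DockerDesktopVM" is my guess at the VM name Docker Desktop creates on your install; check what `Get-VM` actually lists first:

```powershell
# Disable NUMA spanning host-wide (takes effect after the
# Hyper-V Virtual Machine Management service restarts).
Set-VMHost -NumaSpanningEnabled $false

# Switch the VM to dynamic memory with a 1 GB startup allocation.
# The VM must be stopped first; replace the name with whatever Get-VM shows.
Set-VMMemory -VMName "DockerDesktopVM" -DynamicMemoryEnabled $true -StartupBytes 1GB
```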
I cannot edit the memory settings while it is running, and Docker changes it back to 32 GB every time it starts. Stopping it manually, changing the setting, and starting it again puts it in a permanent boot loop with 2 seconds of uptime.
(Setting it to less than 32 GB does decrease the memory usage.)
From what I can see, Docker Desktop does like to manage the VM resources and uses a JSON config file to store the settings - so it will adjust its VM to match when the service starts up. You can check the file here: ‘%APPDATA%\Docker\settings.json’
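On my install the relevant part of that file looks roughly like this (the exact key names can differ between Docker Desktop versions, so treat this as an example rather than a reference):

```json
{
  "memoryMiB": 65536,
  "cpus": 8
}
```

You’d edit it with Docker Desktop fully stopped, otherwise it gets rewritten on restart, as you’ve seen.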
Going back to your Threadripper memory modes: Local should be the safe option here, as Windows and Hyper-V should recognize that as traditional NUMA with a bank of RAM allocated to each NUMA node - in your case 48GB per node (provided you have matched DIMMs evenly installed). I don’t think changing it would help, but I’m speculating; I don’t have a TR.
So… I’m pretty much out of ideas and I don’t have the kit to try to reproduce the problem.
Thank you very much. I did not even think about the possibility that it could be some bug in Docker, and I also did not think my version could be old.
I updated and it asked me to use WSL 2; I said yes. The memory option disappeared from the UI, so I need to change it in the config file, but the app is now running and using 48 GB. In a few hours I will know if 96 is enough or not.
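For anyone finding this later: with the WSL 2 backend, the memory limit moves out of Docker’s UI and into the WSL config file at `%UserProfile%\.wslconfig`. A minimal sketch (the values here are just examples, pick what fits your machine):

```ini
; %UserProfile%\.wslconfig - limits for the WSL 2 VM that Docker Desktop runs in
[wsl2]
memory=90GB
swap=0
```

Run `wsl --shutdown` afterwards so the new limits take effect.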
It did not solve the secondary problem, but I prefer Local mode anyway; I only brought it up because I thought it could be the limiting factor. So I do not care about that one.
btw, how could you get 48 GB with matched DIMMs? 48/2 = 24 and 48/4 = 12, and I have never heard of DIMMs with those capacities. I have 2×2 16 GB DIMMs on the primary channels and 2×2 8 GB DIMMs on the secondary channels, so the primary node has 64 GB and the secondary 32 GB. The DIMMs do not match and require different voltages, so they are on different memory controllers.
LOL, good point, I guess I was just thinking back to when I was last playing with big workstations that had physical NUMA - they were all triple-channel boards, so 48GB per NUMA node wasn’t uncommon (6 x 8GB DDR3 DIMMs).
I’m glad it’s working for you now, but I dread to think what kind of workload you are running that needs that kind of RAM for a single container - ML or modeling? I’m usually dealing with production database servers or ESXi/Hyper-V hosts when I get above 64GB.
Oh, I forgot Intel is doing that weird triple-channel configuration; AMD has 2-, 4-, and 8-channel CPUs, to keep it simple. It’s this monstrosity: https://github.com/cmu-sei/pharos, specifically OOAnalyzer.
I really hope it will work. Right now I am at 99% of the third step (out of I’d rather not know how many), and it has been at 99% for about 2 or 3 hours; total memory usage is 57 GB right now.
Did you manage to solve this? It’s just that I was looking at DigitalOcean today and realized how much they have moved on since I last looked some years back. You could rent a VM with 192GB of RAM for a few hours and it won’t break the bank: