Windows Docker limited to 32 GB of RAM after adding more memory to the PC

I want to run a docker image that requires more than 64 GB of RAM, so I added more RAM to the PC. I now have 96 GB total, but Docker decided 32 GB is the limit now, even though using more was possible before I added the RAM.
I have a 1950X, and when I checked in Ryzen Master the memory mode is set to Local, and I did not find any way to set it to Distributed. I tried all BIOS options for Memory Interleaving on my X399 GAMING PRO CARBON AC. (I am not sure if this option has anything to do with it or not, I just found it weird that I cannot change it.)

Can anyone help me please?

Have you tried stopping the Hyper-V service and enabling it again?

Have you tried this?

Can you confirm if you are using Docker for Windows on Windows 10 or if you have Windows Server 2019 installed and are using Windows Containers?

Could you also confirm if you are actually running Windows containers or Linux containers?

There are substantial differences between how the containers actually run, so it's worth being clear on your setup and container types.

EDIT: The JPG below makes some of that clear :slight_smile:

yes I tried it, nothing above 32 GB has any effect

I have a Windows 10 Pro desktop, and I am using Linux containers

does rebooting the machine many times to change things in the BIOS count as stopping the service?

Yes, a reboot restarts services.

You might want to fire up Hyper-V manager and see how much mem is allocated to the MobyLinux VM.

32 GB :thinking: and I cannot change it there

What version of Windows are you running? Are you on Win 10 Pro for Workstations (I don’t think this should matter here).

Check the NUMA spanning setting in Hyper-V manager:

Windows 10 Pro, and I also have this setting enabled

Sorry, I can’t see any obvious limitations on RAM that should prevent you from allocating more than 32 GB to a VM on that version of Windows. I’m on an older build and only have 32 GB, so I can’t try a VM with anything larger, but Hyper-V will allow me to set it, even though I can’t run it.

You could try removing the NUMA spanning in Hyper-V to see if that helps. You could also try setting the VM to dynamic memory (it will start up with only 1024MB) and see how much is allocated once you start your containers up.

I cannot edit the memory settings while it is running, and Docker changes it back to 32 GB every time it starts. Stopping it, changing the memory manually, and starting it again causes a permanent bootloop with 2 seconds of uptime :frowning:

(setting it to less than 32 GB decreases the memory usage)

From what I can see, Docker Desktop does like to manage the VM resources and uses a JSON config file to store the settings, so it will adjust its VM to match when the service starts up. You can check the file here: ‘%APPDATA%\Docker\settings.json’
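If you want to inspect or pre-set the value before the service starts, here is a minimal sketch of reading and editing that file's memory key. The "memoryMiB" key name and the sample content are assumptions based on how that file usually looks; check your actual settings.json, since key names can change between Docker Desktop versions.

```python
import json

# Sketch only: Docker Desktop (Hyper-V backend) keeps its VM settings in
# %APPDATA%\Docker\settings.json. The "memoryMiB" key name and this sample
# content are assumptions; verify against your actual file.
sample = '{"cpus": 8, "memoryMiB": 32768, "swapMiB": 1024}'

settings = json.loads(sample)
settings["memoryMiB"] = 48 * 1024  # request 48 GB instead of 32 GB

print(settings["memoryMiB"])  # → 49152
```

Edit the real file only while the Docker Desktop service is stopped; otherwise it may overwrite your change on restart, which would match the 32 GB reset you are seeing.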

What version of Docker do you have installed? Your screenshot looks like the old logo, I think. Definitely check for an update.

You should also be able to use the Troubleshooting options, see here for more details: https://docs.docker.com/docker-for-windows/troubleshoot/

Going back to your Threadripper memory modes: Local should be the safe option here, as Windows and Hyper-V should recognize that as traditional NUMA with a bank of RAM allocated to each NUMA node, in your case 48 GB per node (provided you have matched DIMMs evenly installed). I don’t think changing it would help, but I’m speculating; I don’t have a TR.

So… I’m pretty much out of ideas and I don’t have the kit to try to reproduce the problem.

thank you very much :slight_smile: I did not even think about the possibility that it could be some bug in Docker, and I also did not think my version could be old :thinking:
I updated and it asked me to use WSL 2; I said yes. The memory option disappeared from the UI, so I need to change it in the config file, but the app is now running and using 48 GB. In a few hours I will know if 96 is enough or not :pray:
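For anyone else hitting this after switching to the WSL 2 backend: the memory cap for the WSL 2 VM (which Docker Desktop's distro now runs in) is set in a .wslconfig file in your Windows user profile rather than in the Docker UI. A minimal sketch; the 48GB and 8GB values here are just examples:

```ini
# %UserProfile%\.wslconfig
[wsl2]
memory=48GB  # cap for the WSL 2 VM that Docker Desktop runs in
swap=8GB     # optional swap file size
```

After editing the file, run `wsl --shutdown` so the VM restarts with the new limits.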
It did not solve the secondary problem, but I prefer Local mode anyway; I only brought it up because I thought it could be the limiting factor. So I do not care about that one.
btw. how could you get 48 GB with matched DIMMs? 48/2=24 and 48/4=12, and I have never heard of DIMMs with those capacities. I have 2×2 16 GB DIMMs on the primary channels and 2×2 8 GB on the secondary channels, so the primary node has 64 GB and the secondary 32 GB. The DIMMs do not match and require different voltages, so they are on different memory controllers.

LOL, good point. I guess I was just thinking back to when I was last playing with big workstations that had physical NUMA; they were all triple-channel boards, so 48 GB per NUMA node wasn’t uncommon (6 × 8 GB DDR3 DIMMs).

I’m glad it’s working for you now, but I dread to think what kind of workload you are running that needs that kind of RAM for a single container. ML or modeling? I’m usually dealing with production database servers or ESXi/Hyper-V hosts when I get above 64 GB :smiley:

oh, I forgot Intel does that weird configuration with triple channel :smiley: AMD has 2-, 4-, and 8-channel CPUs to keep it simple :smiley:
https://github.com/cmu-sei/pharos this monstrosity, specifically OOAnalyzer.
I really hope it will work. Right now I am at 99% of the third step (out of I would rather not know how many), and it has been at 99% for about 2 or 3 hours; total memory usage is 57 GB right now.


not even 96 GB is enough :sob:

Did you manage to solve this? It’s just that I was looking at DigitalOcean today and realized how much they have moved on since I last looked some years back. You could rent a VM with 192 GB of RAM for a few hours and it won’t break the bank: