"NT disallows overcommiting memory"
Page 2 of 2

Author:  linguofreak [ Wed Dec 08, 2021 12:34 am ]
Post subject:  Re: "NT disallows overcommiting memory"

eekee wrote:
Yes. I recall Linux 2.0 allowed practically unlimited overcommitting with the excuse that it's impossible to predict how memory use will change in a multitasking operating system. I think they meant something like this: If process A tries to allocate more memory than is available, it shouldn't be prevented from doing so because memory may be freed by other processes before process A actually uses its memory. In practice, I saw a lot of processes killed due to segmentation faults on my 4MB 486 with 8MB swap. :) I naturally bought myself a much more powerful machine, but then an OOM-killer was implemented which, in its early form, always killed the X server. (The X server never segfaulted...)

Current kernels can still do unlimited overcommit, it's just controlled by a file in /proc/sys/vm and no longer the default (at least, on most distros).

I started using Linux around 2009, and between modern systems having lots of RAM and improvements in the kernel, I've not seen the OOM killer take an especial hatred to X.

What I was seeing a couple of years ago was some kind of weirdness that seemed to involve the USB stack and applications using the graphics card. There'd be a USB event (often to wake up xscreensaver, but it happened a few times in games as well), and all local interactivity would hang: the lock LEDs on the keyboard would stop responding to their associated keys, the mouse tracking LED (IIRC) would turn off, and the screen would stop updating (or, as often happened when xscreensaver had the screen blanked, would never come out of power save; incidentally, the incidence of these crashes was much higher when xscreensaver was set to blank the screen).

The machine would still respond over the network, and a look at the logs would reveal an OOM event that had happened with tons of free RAM (and even more free swap), generally with some message about a delay in an allocation function and a stack trace containing functions with "usb" and/or "nvidia" in their names. Since tons of main memory was free, I got the impression that graphics memory was being treated as a separate NUMA node, and some allocation specifically for that node was failing. In any case, I've not had it happen in a while (I think since I went to Ubuntu 20.04), but it was really weird.


Author:  ColonelPhantom [ Tue Feb 15, 2022 9:58 am ]
Post subject:  Re: "NT disallows overcommiting memory"

Some others have already mentioned this implicitly, but the important part is that overcommit accounting covers not just physical RAM, but also swap. So if I have 8 GB of RAM and an 8 GB swap partition, then as soon as all 16 GB are committed and a program tries to allocate more, NT will simply refuse the request, and the allocation returns a null pointer instead.

Overcommit means that the kernel instead says "sure, here you go". Then, as soon as the program (or even another one) actually tries to use the 'allocated' memory and the kernel can't back it, the OOM killer gets invoked and some process on your system gets the axe, chosen by a heuristic that tends to pick whatever is using a lot of memory.

In the OP you mentioned that you didn't understand how swapping is done without overcommitting, but as you've probably gathered by now, handing out a page of memory and backing it with some disk space in swap is not overcommit. For it to be overcommit, the allocation must not be backed by anything at all.

For the people talking about hypervisors, consider the following: why not give the guest OS little RAM, and then let the host OS use its disk cache to speed up the guest's swap when other machines are not actively swapping? That way you can balance memory, and the guest OS gets a pleasant performance surprise instead of an unpleasant one. Among other hypervisor memory-management features, I know that QEMU has a "balloon" device that lets the guest give memory back to the host.
