Heh. Thanks for the replies! Including this as part of my VMM is just a stray, vague idea at this point. I have no idea if I'll even want to try implementing it someday.
But basically, the point is: start with the traditional process of needing a free page, and therefore forcing a page to be swapped out to disk -- which, as we all know, is damned slow to recover from. It also puts some load on the disk bus -- which might be busy doing more important things.
This only really matters if the response time of a particular process is important, of course. When the swapped out page is finally needed again, the process that owned it will simply hang (for a long time) until it's back.
But let's say that userspace memory is not being completely utilized. The parts that are being used have some sparse arrays in them, or were initialized by a program to all 0xFFFFFFFFs or something. Wouldn't it be kinda neat if the VMM pager could do a quick check to see if some of memory was quickly and easily compressible? In 100M of space, maybe it finds 8K of contiguous memory that it can compress out -- in a process, say, that just got initialized and then task-switched out. That's 2 pages free! Suddenly no disk swap is needed -- at the expense of doing an 8K compression/decompression. And with an RLE compression, that means a few thousand opcodes. (And, yes, I DO realize that scanning 100M of memory every so often to find compressible stuff is going to be rather "expensive".)
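For what it's worth, here's the kind of thing I'm picturing for that quick check -- a minimal C sketch (all names made up), which RLE-encodes one 4K page as 32-bit words and bails out the moment the page looks too noisy to be worth keeping:

```c
#include <stdint.h>
#include <stddef.h>

#define PAGE_SIZE 4096
#define MAX_RUNS  64   /* 64 runs * 8 bytes = 512 bytes; past that the
                          savings stop being worth the bookkeeping     */

struct rle_run {
    uint32_t value;    /* the repeated dword        */
    uint32_t count;    /* how many times it repeats */
};

/* Returns the number of runs written to 'out' (sized MAX_RUNS), or -1
 * if the page isn't cheaply compressible.  A page initted to a single
 * fill value like 0xFFFFFFFF collapses to exactly one run. */
static int rle_compress_page(const uint32_t *page, struct rle_run *out)
{
    size_t words = PAGE_SIZE / sizeof(uint32_t);
    int nruns = 0;

    for (size_t i = 0; i < words; ) {
        uint32_t v = page[i];
        size_t j = i + 1;
        while (j < words && page[j] == v)
            j++;
        if (nruns == MAX_RUNS)
            return -1;               /* too noisy -- skip this page */
        out[nruns].value = v;
        out[nruns].count = (uint32_t)(j - i);
        nruns++;
        i = j;
    }
    return nruns;
}
```

The MAX_RUNS cutoff is what keeps the scan tolerable: on a page of active, incompressible data the loop gives up within the first hundred-odd words, so most of that "expensive" 100M scan degenerates into a short sequential read per page.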
Candy wrote:
What about the worst case? A program de-encrypting a compressed object in memory?
Correct. The parts of memory that are being actively used will almost certainly not be usefully compressible. This is meant as a marginal method, to free up a few pages of memory when you really need them.
os64dev wrote:
RE: bloatware ... Mostly it is not the OS parts that are huge, but the programs running on it.
Mostly, but WinXP uses what, 100MB? More? It puts itself in virtual space, and pages itself out, and wastes a million years of CPU cycles doing it.
os64dev wrote:
You as a developer will probably like the latter in case of releases, but software companies or businesses in general will prefer the bloated version.
Neither the OS developer nor the software company owns the machine that's running the software. If a program can run either in 2MB with tracing or in 200K without, the superuser of the machine should be able to specify which happens. Not the OSdever, and not the software company that doesn't give a sh*t about the performance of the user's machine.
JAAman wrote:
most OSs don't actually allocate memory pages when it's requested, they allocate the pages (on demand)
Yes, that part of my vague concept is intended as a replacement for on-demand paging. Instead of fully setting up a process's page tables to point at not-yet-allocated memory pages, the page table entries themselves would only be filled in on demand, out of the process's pool of RLE-compressed memory.
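Roughly, the fault path would look something like this -- purely hypothetical, with every helper name standing in for whatever the real VMM provides:

```c
#include <stdint.h>

#define PAGE_SIZE   4096
#define PTE_PRESENT 0x1u
#define PTE_WRITE   0x2u

/* Hypothetical kernel primitives -- placeholders, not a real API: */
struct process;
struct rle_region;
extern struct rle_region *rle_lookup(struct process *p, uintptr_t vpage);
extern void  rle_decompress_page(struct rle_region *r, uintptr_t vpage,
                                 void *frame);
extern void  rle_punch_hole(struct rle_region *r, uintptr_t vpage);
extern void *alloc_frame(void);
extern void  map_page(struct process *p, uintptr_t vpage, void *frame,
                      unsigned flags);
extern void  kill_process(struct process *p);

void page_fault_handler(struct process *proc, uintptr_t fault_addr)
{
    uintptr_t vpage = fault_addr & ~(uintptr_t)(PAGE_SIZE - 1);

    /* Is this page currently held in compressed form? */
    struct rle_region *reg = rle_lookup(proc, vpage);
    if (!reg) {
        kill_process(proc);              /* genuine bad access */
        return;
    }

    void *frame = alloc_frame();             /* a frame freed earlier,
                                                maybe by compressing
                                                someone else's memory  */
    rle_decompress_page(reg, vpage, frame);  /* expand just this page  */
    rle_punch_hole(reg, vpage);              /* region now has a gap   */

    map_page(proc, vpage, frame, PTE_PRESENT | PTE_WRITE);
}
```

The page table entry only comes into existence at the bottom, after the page's contents have been reconstituted -- which is basically the swap-in path, minus the disk.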
Brendan wrote:
This means that you'd be compressing pages, not larger things.
Well, I'm not sure I agree with that. If you have a giant pool of compressed space, and a page in the middle of it gets faulted back in, then you have compressed space up to there, an actual allocated memory page, and compressed space after. You certainly have to be able to handle the concept of multiple compressed spaces with allocated pages in the gaps, but it seems theoretically possible.
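One made-up way to handle that: keep each compressed region as a sorted linked list of extents, where a decompressed page is simply a gap that no extent covers. Punching a hole then just splits the covering extent (this sketch shares the compressed stream between the two halves and glosses over splitting the RLE data itself):

```c
#include <stdint.h>
#include <stdlib.h>

/* One extent of still-compressed pages within a process's pool. */
struct rle_extent {
    uintptr_t first_page;        /* first page index in this extent */
    uintptr_t npages;            /* contiguous compressed pages     */
    void     *cdata;             /* opaque RLE stream (splitting it
                                    per-page is glossed over here)  */
    struct rle_extent *next;     /* next extent, in address order   */
};

/* Punch a one-page hole at 'page': the covering extent becomes an
 * extent before the hole and an extent after it.  (malloc failure
 * handling elided for brevity.) */
void rle_punch_hole(struct rle_extent **head, uintptr_t page)
{
    for (struct rle_extent **pp = head; *pp; pp = &(*pp)->next) {
        struct rle_extent *e = *pp;
        if (page < e->first_page || page >= e->first_page + e->npages)
            continue;

        uintptr_t before = page - e->first_page;
        uintptr_t after  = e->first_page + e->npages - (page + 1);

        if (after > 0) {                   /* tail half survives    */
            struct rle_extent *tail = malloc(sizeof *tail);
            tail->first_page = page + 1;
            tail->npages     = after;
            tail->cdata      = e->cdata;   /* shared in this sketch */
            tail->next       = e->next;
            e->next          = tail;
        }
        if (before > 0)
            e->npages = before;            /* head half shrinks     */
        else {
            *pp = e->next;                 /* hole was at the front */
            free(e);
        }
        return;
    }
    /* Page wasn't compressed to begin with -- nothing to do. */
}
```

With that layout, "compressed space with allocated pages in the gaps" is just a list that grows by one split per fault; recompressing a page later would merge it back in.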
Brendan wrote:
I've got an 80486 here with 8 MB of RAM ....
I'm requiring a 586 with a minimum of 16M, so you're outta luck on my OS.
But I was not suggesting this as a complete replacement for paging out to disk ... just as a mechanism to minimize disk swapping.
And if you're gonna be so silly as to overflow your machine's capacity to that extent, then you deserve all the lack of responsiveness that you get!