OSDev.org

The Place to Start for Operating System Developers
 Post subject: Re: Understanding memory allocation for a process manually ?
PostPosted: Fri Feb 19, 2016 4:32 am 

Joined: Wed Oct 07, 2015 5:36 am
Posts: 8
ok mate....


 Post subject: Re: Understanding memory allocation for a process manually ?
PostPosted: Fri Feb 19, 2016 5:45 am 

Joined: Wed Aug 05, 2015 5:33 pm
Posts: 159
Location: Drenthe, Netherlands
manoj9372 wrote:
Basically the person who programmed the code running in that process decided it needs "x" amount of memory.

Brendan explained (in great detail) how memory allocation itself works - like how a car is able to move. But asking on what basis the process knows that it needs "x" amount of memory is like me asking you on what basis you know where to drive your car.



I didn't ask that question in a bad sense, I just wanted to understand how a programmer calculates the MEMORY REQUIREMENTS for a process.
That's what my question is, kindly don't take it in a bad manner... [-X [-X
Don't act in a bad manner when someone gives a simple answer to your question.
manoj9372 wrote:
I want to know on what basis the process knows that it needs "x" amount of memory?
Ask better questions and be nice.

_________________
"Always code as if the guy who ends up maintaining it will be a violent psychopath who knows where you live." - John F. Woods

Failed project: GoOS - https://github.com/nutterts/GoOS


 Post subject: Re: Understanding memory allocation for a process manually ?
PostPosted: Fri Feb 19, 2016 6:28 am 

Joined: Thu Aug 13, 2015 4:57 pm
Posts: 384
Brendan wrote:
Hi,
Yes, but...

From this post:
"When the kernel is running out of free physical RAM, it sends a "free some RAM" message (including how critical the situation is) out to processes that have requested it. VFS responds by freeing "less important" cached file data. A file system might do the same for its meta-data cache. A web browser might respond by dumping cached web page resources. A DNS system might do the same for its domain name cache. A word processor might have enough data to allow the user to undo the last 5000 things they've done, and might respond by reducing that so the user can only undo the last 3000 things they've done."

With a system like this, if you don't allow over-commitment you don't waste resources as much - those resources are still being used for caches and other things.

To me, this is the right way to do things - don't allow over-commit, except for resources that can be revoked.

Of course most existing OSs (and things like POSIX) don't have any way for the kernel to ask for resources back, so they're stuck with the "waste resources or over-commit" compromise.


Agreed, though there's something I can't quite put my finger on that bothers me a bit about this..

Regardless of the system, system-level caches (FS, browser, etc.) should be reduced in low-memory conditions, so that applies to both. I don't know if your suggestion is as optimal as over-commitment, though in practical cases it might be close enough that the difference doesn't matter, and it of course avoids the entire OOM-killer scenario.

Btw, I think it might be acceptable for Word to extend my undo buffer from 5k to 10k, but I certainly would not consider reducing the buffer acceptable; that's something that reduces functionality and my ability to go back. Granted, 3k is probably more than I'd ever need, but then it should have been 3k to begin with. Point being, it's a safety feature and I personally would not like such things changing behind my back for the worse.


PS. Just before submitting I read your post again, and instead of rewriting I'll add this: my reply above is directed at how I initially read your post; I now realize that at the end you mention over-commitment for revocable items. If that was the point, it might have been useful to focus/emphasize that instead. That might indeed be more useful (sketched below):
- normal malloc
- revocableMalloc; anything allocated with this can be revoked by the OS at any time without warning
- C++ exception (signals, callbacks, etc.); a signal to the app that part of its memory has been revoked

For something simple like a DNS cache, you might use normal malloc for everything, and once you've given the reply to the client, mark that part of memory revocable. For a more complex scenario you might make most allocations revocable and then have to recreate the revoked contents on demand.

Given that most content can be recreated, the above should work quite nicely, with the obvious drawback of slowing the system down. Over-commitment can be allowed for all of the revocable memory...
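
To make that a bit more concrete, here is a very rough sketch in C of what such an interface could look like - note that revocable_malloc, mark_revocable and the revocation callback are entirely made-up names for illustration, not any existing API:

Code:
#include <stddef.h>

/* Entirely hypothetical API - nothing below exists in any real OS or
   libc; it only illustrates the "revocable malloc" idea above. */

/* Callback the OS invokes just before it reclaims a revocable region,
   so the application can drop its references and recreate the data
   on demand later. */
typedef void (*revoke_cb)(void *region, size_t size);

/* Allocate memory the OS may take back at any time without warning. */
void *revocable_malloc(size_t size, revoke_cb on_revoke);

/* Promote a normal allocation to "revocable" once its contents are no
   longer strictly needed (e.g. after the DNS reply has been sent). */
int mark_revocable(void *ptr, revoke_cb on_revoke);

/* Example: a tiny DNS cache whose entries may vanish under pressure. */
struct dns_entry { char name[256]; unsigned char addr[4]; };

#define CACHE_SLOTS 64
static struct dns_entry *cache[CACHE_SLOTS];

static void dns_entry_revoked(void *region, size_t size)
{
    (void)size;
    /* Forget the revoked entry so the next lookup re-resolves the name
       instead of touching memory the OS has taken back. */
    for (int i = 0; i < CACHE_SLOTS; i++)
        if (cache[i] == region)
            cache[i] = NULL;
}

The OS would then be free to over-commit anything handed out via revocable_malloc, since it can always invoke the callback and take the pages back.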


 Post subject: Re: Understanding memory allocation for a process manually ?
PostPosted: Fri Feb 19, 2016 6:56 am 

Joined: Thu Aug 13, 2015 4:57 pm
Posts: 384
linguofreak wrote:
As for system messages coming up while you're working in a terminal, Linux will print all sorts of errors to the system console regardless of what you may be doing on that virtual terminal. My box is currently spewing errors about a failed CD drive that I haven't had the time to open up the machine to disconnect. An OOM situation is next thing to a kernel-panic / bluescreen, both of which will happily intrude while you're doing other things, so I don't see any big problem with the OOM killer doing the same thing.


Not sure if it's changed, but previously there were quite a few things that affected what gets printed where: whether it's a physical console vs an SSH shell, whether it's root or a normal user, etc.

Being informed of issues is one thing; spewing errors onto the console is really awkward. I've actually had to use such systems and it's very difficult to fix the situation when you get 100+ messages every second on a physical console, since you can hardly do anything to actually get a look at what's happening. There are commands that will stop the errors spewing to the console, but personally I've rarely needed them and IIRC they vary from system to system (Linux vs FreeBSD vs others), so probably not once have I actually remembered how to do it.
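
For what it's worth, on Linux the flood can be silenced by lowering the console log level - roughly what "dmesg -n 1" does. A minimal, Linux-specific sketch using klogctl() from <sys/klog.h>:

Code:
#include <stdio.h>
#include <sys/klog.h>   /* Linux-specific; klogctl() wraps the syslog(2) syscall */

int main(void)
{
    /* Action 8 (SYSLOG_ACTION_CONSOLE_LEVEL) sets the console log level.
       Level 1 means only emergency messages still reach the console,
       which quiets the "100+ messages per second" flood while you work.
       Requires CAP_SYS_ADMIN (or CAP_SYSLOG on newer kernels). */
    if (klogctl(8, NULL, 1) < 0) {
        perror("klogctl");
        return 1;
    }
    return 0;
}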

Finally, and this applies to the Windows GUI as well, I really hate it when a pop-up comes up asking "Do you want the world to explode?" while I'm writing a document, an email or just an address in Firefox's address bar, and I happen to press space/enter/"y" at exactly the wrong moment. There's just no good way to put up a popup asking something like that out of the blue.


linguofreak wrote:
The OS doesn't need to totally grind to a halt. It will have to stall on outstanding allocations until the user makes a decision or a process ends on its own or otherwise returns memory to the OS, but processes that aren't actively allocating (or are allocating from free chunks in their own heaps rather than going to the OS for memory) can continue running. Now, if a mission-critical process ends up blocking on an allocation, yes, you have a problem, and a user OOM killer might not be appropriate for situations where this is likely to cause more trouble than an OOM situation generally does in the first place.


The OS doesn't, but every app that requests memory does, right? And sooner or later that's going to be every one of them, right? Obviously there are things such as SNMP that ease monitoring servers, but the point is that the server has gotten itself into a mess; stopping to wait for a user in a data center with 10k real servers and 100k+ virtual servers isn't really feasible. Of course a configurable OOM killer might easily be a solution here (servers automatic, desktops asking the user), but if the manual version isn't absolutely needed then I might prefer consistency..


linguofreak wrote:
For a server, your OOM killer could actually make use of a reverse-SSH protocol where the OOM killer makes an *outbound* ssh-like connection, using pre-allocated memory, to a machine running server management software, which could then send alerts to admin's cell phones, take an inbound SSH connection from an administrator's workstation (or phone), and pass input from the administrator to the OOMed server and output from the server back to the administrator.


True, but couldn't the SSH daemon then just pre-allocate memory and allow inbound connections? If the connection isn't from one of the "root" users, kick them back out, and maybe enforce stricter time-outs, etc. My point was more along the lines that while you can do this, you may need to do it for every tool, and it becomes impractical.

Figuring out that SSH needs to pre-allocate is one thing, but all the tools you will need to resolve the situation? I've seen multiple situations where I can't even use "ls" or similarly simple commands; IIRC "ls" was not working due to a lack of available file descriptors to open and execute the "ls" command itself. However, some commands did work. I never investigated why, but my guess would be that either they opened one or two fewer FDs, or the shell did something slightly different with them with regards to FDs. Trying to pre-allocate for each and every tool you might need isn't really feasible..
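
To illustrate why that per-tool pre-allocation gets impractical, here's a rough sketch (plain C; reserve_emergency_resources and the pool/spare-FD names are made up for illustration) of what just one long-running process would have to do to survive memory and file-descriptor exhaustion:

Code:
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define EMERGENCY_POOL_SIZE (1 << 20)   /* 1 MiB kept aside for the worst case */
#define SPARE_FDS 8

static void *emergency_pool;
static int   spare_fds[SPARE_FDS];

/* Done once at startup, while resources are still plentiful. */
static int reserve_emergency_resources(void)
{
    /* Grab memory now and touch it so it is really backed by RAM,
       then lock it so it can't be paged out or lazily unbacked. */
    emergency_pool = malloc(EMERGENCY_POOL_SIZE);
    if (!emergency_pool)
        return -1;
    memset(emergency_pool, 0, EMERGENCY_POOL_SIZE);
    if (mlock(emergency_pool, EMERGENCY_POOL_SIZE) < 0)
        return -1;

    /* Keep a few descriptors open on /dev/null; under FD exhaustion
       one can be closed to make room for a real open()/accept(). */
    for (int i = 0; i < SPARE_FDS; i++) {
        spare_fds[i] = open("/dev/null", O_RDONLY);
        if (spare_fds[i] < 0)
            return -1;
    }
    return 0;
}

/* Called when an open()/accept() fails with EMFILE: free one spare. */
static int release_spare_fd(void)
{
    for (int i = 0; i < SPARE_FDS; i++) {
        if (spare_fds[i] >= 0) {
            close(spare_fds[i]);
            spare_fds[i] = -1;
            return 0;
        }
    }
    return -1;  /* no spares left */
}

Doing this for sshd alone is manageable; doing it for "ls", the shell, editors and everything else you might reach for during an OOM is exactly where it stops being practical.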


linguofreak wrote:
Not every process has bounded memory requirements. Not every process has bounded runtime. Daemons are generally supposed to keep running indefinitely.


Are you sure? I haven't really thought about every possible scenario, but is there some reason this has to be the case? There might be some really special case, like calculating "The Answer to the Ultimate Question of Life, the Universe, and Everything", but for everything in the normal world, are there cases that aren't bounded by memory and runtime?

A process might have a base memory requirement and then a dynamic requirement based on the file it operates on, or files if there are multiple. With files there might be no maximum, or it might max out at some predefined limit. As for time, I think in most cases it should be roughly knowable, and my suggestion doesn't even need that: I mentioned the running process flagging its progress in case it has dynamically adjusting memory requirements, so the OS knows where on the memory curve we are now and what to expect going forward. Just in case someone is considering it, there's no need to bring up the Halting Problem here..

As for daemons, if you consider each request as an independent request then they should fit in just fine: have the daemon process "reset" itself after each request instead of the OS "resetting" it by spawning a new one once the old one exit()'d.
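
A minimal sketch of that per-request "reset" (plain C; arena_alloc, handle_request etc. are made-up names, not any real daemon's API): the daemon serves every request from a fixed arena and empties it afterwards, so its memory use stays bounded no matter how long it runs:

Code:
#include <stddef.h>
#include <string.h>

/* Fixed per-request arena: every allocation a request makes comes out
   of this block, and finishing the request resets it in O(1). */
#define ARENA_SIZE (256 * 1024)

struct arena {
    unsigned char buf[ARENA_SIZE];
    size_t        used;
};

static void *arena_alloc(struct arena *a, size_t size)
{
    size = (size + 15) & ~(size_t)15;          /* keep allocations aligned */
    if (a->used + size > ARENA_SIZE)
        return NULL;                           /* per-request bound exceeded */
    void *p = a->buf + a->used;
    a->used += size;
    return p;
}

static void arena_reset(struct arena *a)
{
    a->used = 0;    /* the "reset": all of the request's memory is gone */
}

/* Stand-in for real request handling: allocate scratch space from the
   arena, do the work, and rely on the caller to reset afterwards. */
static void handle_request(struct arena *a, const char *req)
{
    char *scratch = arena_alloc(a, strlen(req) + 1);
    if (scratch)
        strcpy(scratch, req);
}

int main(void)
{
    static struct arena a;

    /* Simulated daemon loop: memory use stays bounded by ARENA_SIZE
       plus the fixed base, no matter how many requests are served. */
    for (int i = 0; i < 1000000; i++) {
        handle_request(&a, "GET /index.html");
        arena_reset(&a);
    }
    return 0;
}

With something like this the daemon's worst-case memory requirement is simply its fixed base plus ARENA_SIZE, which is exactly the kind of bound discussed above.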

