I won't comment on shared memory because I haven't done a whole lot on the user app <--> kernel side. However, I will comment on the GUI part.
In my opinion, the GUI, especially the drawing buffer, *should not* be shared at all. The user app should not even know about the memory used by the display driver.
For example, let's talk about a simple dialog window. The user app only gets to know the fact that there will be a dialog displayed, but will not have any control or idea of how, what color, what shape, or any other aspect other than there will be a window drawn with certain items within the window. Period. If the dialog contains a button, it will have the shape, color, and style the GUI interface gives it. Not the User app. The User app will only know that there is a button and will receive messages when that button is pressed and released.
The user app will have the system's GUI interface do all of the drawing, moving, sizing, shadows, buttons, minimizing, maximizing, etc. The User app doesn't handle any of that. The User app can send a message to the GUI interface to minimize, resize, change the color, etc., but the User app has no control of the memory used to display the dialog.
Therefore, all drawing is done via the system's GUI interface. To the GUI interface, the User app is simply an object. To the User app, the GUI interface is simply a service it can send messages to and receive messages from.
The User App is a (possibly) perishable object within the GUI interface and nothing more. The GUI interface is a callable service for the User app and nothing more.
Now, if you want a more detailed GUI User app that does specific things, like "Owner Draw" buttons and things of that sort, then using the GUI interface, you set a flag stating that the button is owner draw. When it comes time to draw that object (a button in this case), the GUI interface will expect that you have already drawn that button to a specified buffer, the buffer having a specific format. For example, that buffer might be defined as 32-bit pixels, X pixels wide, and Y pixels tall. The GUI interface will then transform/convert that buffer to the style of buffer it uses and push that to the display buffer. It will then use that buffer as-is until you send it a message stating that you have changed the pixels and/or size of that buffer, at which point it will retrieve the contents once again.
Another example would be video, or at the very least an animated icon. Your User app will mark the object as "user draw", draw the current image to a specified buffer, then send a message to the GUI interface that it is ready for drawing to the screen. The GUI interface uses that image until it receives another message from the User app stating it has updated the buffer, at which point the GUI interface will grab and use the new image until told otherwise.
The simplest apps will have nothing more than a message handler receiving messages such as button presses/releases, menu item selections, etc. These simple apps will have absolutely no clue whatsoever about display memory, drawing memory, etc. Nothing.
More sophisticated apps will then use things like "owner draw" buttons and the like to be more involved with the display, but will still remain completely independent of the actual display.
Again, your user apps should not have any idea of, clue about, or access to the system's display buffer. It is for the GUI display interface only.
Hope that helps,
Ben
-
http://www.fysnet.net/osdesign_book_series.htm