Hi,
AlexHully wrote:
So in the last discussion we had, ditching x11 altogether didn't seem the right way.
Too many components still rely on it.
I think you'll find that (on Windows, Android, Haiku and OS/2) nothing relies on X11 at all. Even for OSs like GNU/Linux (not Android), the BSDs and Solaris (i.e. all the OSs that almost nobody wants even though they're free, simply because they're stuffed full of antiquated pus from half a century ago) you'll probably find that most software depends on libraries like Qt and GTK, and doesn't depend on X directly.
AlexHully wrote:
Now, is it problematic to only ditch the x11 server part (so no sockets/equivalent), keep the code in it (keyboard, joysticks, mouse..) and place it in a framework that every app will use instead of x11, with shared memory?
Yes, it's problematic. For a well-designed OS you have layers: the kernel as the lowest layer; the drivers on top of that; things like file systems, the network stack and virtual terminals in the layer above the drivers; and things like the GUI and applications in the highest layer, nowhere near drivers.
Originally X11 was designed for this; but Linux (and other *nix clones) sucked badly (because they were stuck in "text only 1970") and failed to provide usable video drivers; so the X11 people (and others, e.g. SVGAlib) had little choice but to shove video drivers into user-space (where they never should've been for a "monolithic" kernel design), despite massive security problems and other problems (e.g. corrupted/unrecoverable device state when X11 crashed). Of course things have changed since then, and now there are parts of video drivers in the kernel (e.g. KMS) to partially fix some of the original incompetence.
AlexHully wrote:
No code change for the drivers and apps; they think they communicate with the x11 server but instead they talk with the framework (same function calls, contents changed).
This sounds wrong too. For example, when the user presses a key something needs to send the key press to the window that currently has focus (and not to all apps/processes/key-loggers).
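To make the routing point concrete, here's a toy sketch of what "something" has to do: deliver a key press to the focused window and to nobody else. Every type and function name here is invented for illustration; a real window manager would be far more involved.

```c
/* Toy sketch: key presses go ONLY to the window that has focus.
   All names and structures here are invented for illustration. */
#include <assert.h>

#define MAX_WINDOWS 8

typedef struct {
    int id;
    int last_key;   /* last key code delivered to this window; 0 = none */
} Window;

typedef struct {
    Window windows[MAX_WINDOWS];
    int count;
    int focused;    /* index of the window that currently has focus */
} WindowManager;

/* Deliver a key press to the focused window and nobody else;
   unfocused windows (and would-be key-loggers) never see it. */
void wm_deliver_key(WindowManager *wm, int key_code)
{
    if (wm->focused >= 0 && wm->focused < wm->count)
        wm->windows[wm->focused].last_key = key_code;
}
```

The point is that this decision (who receives the event) has to be made by one trusted piece of code, not left to whichever app happens to be listening.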
It's best to think of it as a kind of virtualisation. Applications are given their own virtual video device, their own virtual keyboard device, their own virtual sound device, etc; and "something" (window manager) is responsible for mapping these virtual video/keyboard/mouse/sound devices to the real devices. Of course because they're virtual devices the interfaces can be abstract.
Note that almost everything works on this "kind of virtualisation" idea. For example, every process is given its own virtual CPU ("thread") and something (scheduler) is responsible for using the real CPU/s to emulate hundreds of virtual CPUs/threads; every process is given its own virtual address space and something (kernel's memory manager) is responsible for using the real/physical address space to emulate virtual address spaces; etc.
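For video, the same "virtual device" idea might be sketched like this: each app draws into its own virtual framebuffer, and the window manager decides which virtual framebuffer actually reaches the real screen. This is a deliberately minimal sketch (one fullscreen window, no compositing of multiple windows); the names are my own invention.

```c
/* Minimal sketch of "virtual video devices": apps write to their own
   virtual framebuffer; only the window manager touches the real one.
   Names and the single-window mapping are invented for illustration. */
#include <assert.h>
#include <string.h>

#define FB_SIZE 16   /* tiny framebuffer for illustration */

typedef struct {
    unsigned char pixels[FB_SIZE];
} Framebuffer;

/* The window manager maps the chosen app's virtual framebuffer onto
   the real device; apps never write to the real framebuffer directly. */
void wm_compose(Framebuffer *real, const Framebuffer *virt)
{
    memcpy(real->pixels, virt->pixels, FB_SIZE);
}
```

A real compositor would clip, blend and combine many virtual framebuffers, but the ownership rule is the same: the real device belongs to one trusted layer, and everyone else gets a virtual one.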
I guess what I'm saying is that if you think applications should have direct access to real devices (including video, keyboard, etc) then you've failed to understand basic/ubiquitous multi-tasking concepts.
AlexHully wrote:
Every app is now responsible for the drawing, with the help of the framework (it just ensures that you don't write on another part of the window).
I'd hope not. The only thing that should ever do actual drawing is the video driver (with or without GPU acceleration). Applications should only tell the video driver what they want drawn, and should not draw anything themselves. This "description of what to draw" may be a list of OpenGL commands, or a set of "X protocol" requests, or whatever (mostly it depends on how you felt like designing the "virtual video device" abstraction).
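As a rough illustration of that "description of what to draw" idea, an app might build a list of abstract drawing commands and hand the whole list to the video driver, never touching pixels itself. The command set and all names here are made up; a real abstraction would look however you designed your virtual video device.

```c
/* Sketch: an app builds a list of abstract drawing commands for the
   video driver instead of drawing pixels itself. The command set and
   all names are invented for illustration. */
#include <assert.h>

typedef enum { CMD_FILL_RECT, CMD_DRAW_TEXT } CmdType;

typedef struct {
    CmdType type;
    int x, y, w, h;
    unsigned colour;
} DrawCmd;

#define MAX_CMDS 32

typedef struct {
    DrawCmd cmds[MAX_CMDS];
    int count;
} CmdList;

/* Append a "fill rectangle" command; returns 0 on success, -1 if full.
   Coordinates are relative to the app's own (virtual) window, so the
   app can't describe drawing outside it. */
int cmdlist_fill_rect(CmdList *list, int x, int y, int w, int h,
                      unsigned colour)
{
    if (list->count >= MAX_CMDS)
        return -1;
    list->cmds[list->count] = (DrawCmd){ CMD_FILL_RECT, x, y, w, h, colour };
    list->count++;
    return 0;
}
```

The driver (or window manager) can then validate, clip and execute the list with full knowledge of the real hardware, which is exactly what you lose if apps scribble on framebuffers directly.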
Cheers,
Brendan