Brendan wrote:
Hi,
Erm...
To get competitive performance in a modern (e.g. resolution-independent) graphics system, you mostly have to use the GPU (even for just 2D). This means that the video driver (with the assistance of the GPU) must do all the rendering (and if/when the video driver doesn't support the GPU, it still has to do the rendering in software).
For the video driver/GPU to do the rendering it needs to know what you want drawn; so you need to have some sort of higher level description of what to draw. This can be something like a list of OpenGL commands or something else (or something better). It doesn't really matter too much (it's beyond the scope of what I'm trying to say here).
Yes, I am planning to implement the Vulkan API, which is the API from Khronos that will succeed OpenGL. I plan to implement software rendering first (as I don't have time to write graphics drivers at the moment), then maybe follow it up with an Intel integrated GPU driver for one of my systems, since Intel seems to have the best documentation. Vulkan does work at a higher level of sorts, but it isn't as high-level as OpenGL; it works directly on the command buffers, command queues, etc. It hasn't been finalized, but it is VERY close to AMD's Mantle API (they even used the same function names, just with the gr prefix replaced by vk). My question was more about the interaction between all the parts. I mostly understand what each part does, but there is some overlap (and it can be done in multiple ways). I have most of the Mantle core library implemented, and part of my software rendering driver done.
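To make the "record commands, then submit them to a queue" model concrete, here's a minimal C sketch. The names (Cmd, CmdBuffer, queue_submit, etc.) are made up for illustration, since the real Vulkan API isn't finalized; the actual Mantle/Vulkan commands are far richer. The point is just the split: the app records into a buffer, and the driver (software renderer or GPU back end) does the work at submit time.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical command types -- only enough to show the model. */
typedef enum { CMD_BIND_PIPELINE, CMD_DRAW } CmdType;

typedef struct { CmdType type; int arg; } Cmd;

typedef struct {
    Cmd cmds[64];
    size_t count;
} CmdBuffer;

/* Recording: the app appends commands; nothing executes yet. */
static void cmd_draw(CmdBuffer *cb, int vertex_count) {
    cb->cmds[cb->count++] = (Cmd){ CMD_DRAW, vertex_count };
}

/* Submission: the driver walks the buffer. This is where a software
 * renderer would rasterize, or a GPU driver would translate to
 * hardware command packets. Here it just counts vertices. */
static int queue_submit(const CmdBuffer *cb) {
    int vertices_drawn = 0;
    for (size_t i = 0; i < cb->count; i++)
        if (cb->cmds[i].type == CMD_DRAW)
            vertices_drawn += cb->cmds[i].arg;
    return vertices_drawn;
}
```

This separation is also what makes a software back end a reasonable first step: only queue_submit's internals change when a real GPU driver replaces the stub.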
Brendan wrote:
Now; a single massive "higher level description of everything" would be completely insane. You want to break it down into pieces. You want a graph. More specifically, you want a dependency graph.
Yes, my compositor will be responsible for storing information about which window is where, which is on top, etc. The graphics API (Vulkan) will take the low-level commands that copy textures into graphics memory, send vertex and index buffer arrays, load pixel/vertex shaders, etc.
Brendan wrote:
This allows basic/intelligent caching behaviours. If the higher level description for texture #9 is changed then the video driver knows it has to update the lower level data for texture #9, but (due to the dependency graph nature of it) also knows that it has to update texture #3 (assuming that texture #3 includes/uses texture #9 in some way). It can even be smarter than that - e.g. if something is occluded in some way (e.g. hidden behind something else, or past the edge of the screen) maybe the video driver doesn't need to re-render the lower level data for the thing and can skip it until/unless it becomes visible at some point in the future.
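The invalidation scheme Brendan describes can be sketched with a dirty flag that propagates up the graph. This is my own toy version in C (the node layout, mark_dirty, and needs_render are all invented for illustration): changing texture #9 marks texture #3 and the root dirty too, and an occluded node keeps its dirty flag without being re-rendered until it becomes visible.

```c
#include <assert.h>
#include <stddef.h>

#define MAX_DEPENDENTS 4

/* One node in the dependency graph (a texture, a window, the
 * frame buffer, ...). */
typedef struct Node {
    int dirty;                            /* needs re-rendering      */
    int visible;                          /* occluded => defer work  */
    struct Node *dependents[MAX_DEPENDENTS]; /* things that use us   */
    size_t ndeps;
} Node;

/* When a node's higher-level description changes, everything that
 * includes it must eventually be re-rendered too, so propagate the
 * dirty flag up the graph. */
static void mark_dirty(Node *n) {
    if (n->dirty) return;                 /* already marked */
    n->dirty = 1;
    for (size_t i = 0; i < n->ndeps; i++)
        mark_dirty(n->dependents[i]);
}

/* The renderer only does work for dirty, visible nodes; a hidden
 * node stays dirty and gets rendered later if it becomes visible. */
static int needs_render(const Node *n) {
    return n->dirty && n->visible;
}
```

The early-out in mark_dirty also means an already-dirty subtree isn't walked twice between renders.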
Hmm... shouldn't the app be responsible for knowing whether it needs a texture changed (or the compositor, if that's one of its functions)? I mean, when I did a lot of 3D programming, I didn't expect the graphics driver to take care of anything; I specifically told it about the data, when it changed, and what to do with it. Maybe I am misunderstanding your view of 'high level', but to me, anything high level is something the graphics driver should not be aware of (at least not directly). I'm not really sure I would want to put all that into the driver, as the driver really just needs to know how to convert formats and send commands to/through the GPU. I don't disagree that there could be a higher level on top of that, but I don't expect the driver itself to know all that.
Brendan wrote:
Now; let's add a basic ownership system to this. What if each "thing" in the dependency graph has an owner, where only the owner can set/modify the thing's higher level description? In this case the GUI might own the root thing (the frame buffer); and texture #9 might be owned by an application; and the higher level description for the root thing might say "put texture #9 at (x, y) in the root texture". Whenever the application tells the video driver to change the higher level description for texture #9, the video driver automatically knows it has to update the GUI's root thing/frame buffer.
That is pretty much what I was going to use the compositor for: keeping track of each app's buffer and its location. It can then render each buffer in the correct location and in the correct order (on top, etc.), and even add its own dressing to it (borders, buttons, etc.).
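The compositor's bookkeeping can be as simple as a list of per-window records with a stacking order. A rough C sketch, with invented names (Window, composite); real code would blit each app's buffer at its position, clip against occluders, and draw the decorations, but the back-to-front ordering is the core of it:

```c
#include <assert.h>
#include <stddef.h>

/* One client surface the compositor tracks: the app's buffer plus
 * where it sits on screen and in the stacking order. */
typedef struct {
    int x, y;          /* position on screen                      */
    int z;             /* stacking order: higher = closer to top  */
    int buffer_id;     /* handle to the app's rendered buffer     */
} Window;

/* Composite back-to-front: sort by z, then emit each buffer in
 * draw order (real code would also add borders, buttons, etc.). */
static void composite(Window *wins, size_t n, int *draw_order) {
    /* insertion sort by z -- fine for a handful of windows */
    for (size_t i = 1; i < n; i++) {
        Window w = wins[i];
        size_t j = i;
        while (j > 0 && wins[j - 1].z > w.z) {
            wins[j] = wins[j - 1];
            j--;
        }
        wins[j] = w;
    }
    for (size_t i = 0; i < n; i++)
        draw_order[i] = wins[i].buffer_id;
}
```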
Brendan wrote:
With this sort of a system; a compositor may just be part of the code that creates the higher level description of the GUI's root thing. It's not a separate/independent piece of the system. It doesn't need anything like a special API. There's nothing special about it at all.
We are basically saying a similar thing, except that in my case, the compositor isn't part of the code that creates the description; it is the thing that handles it. I was just wondering where to put things like a standard button. Most Windows apps all look the same because they have generic types for most objects; if you leave it up to the app developer, no two apps will look the same. If I expose some standard GUI elements via the compositor (or something else, like a standard GUI interface API that doesn't have to be part of the compositor), then everyone's apps can look the same, and if a user wants to change something, it takes effect for everything. I am trying to figure out where each piece of code belongs, how many layers there will be, and how it can all work together and not against each other. I think my focus is going to be getting the Mantle API implemented with a software back end first, so I have a way to test it out.
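As a sketch of what that shared widget layer could look like: the app describes a button abstractly and a system-wide theme decides its appearance, so changing the theme restyles every app at once. Everything here (Theme, Button, make_button, the sizing rules) is hypothetical; it only illustrates the ownership split, not a real API.

```c
#include <assert.h>
#include <string.h>

/* System-wide look, owned by the GUI layer rather than by each app. */
typedef struct {
    int border_width;
    unsigned bg_color;
} Theme;

typedef struct {
    char label[32];
    int w, h;          /* computed from the theme, not by the app */
} Button;

static Theme g_theme = { 1, 0xCCCCCCu };

/* The app only supplies the label; the widget library derives the
 * geometry from the current theme (8x16 px per glyph, made up here),
 * so a theme change takes effect for everything. */
static Button make_button(const char *label) {
    Button b;
    strncpy(b.label, label, sizeof b.label - 1);
    b.label[sizeof b.label - 1] = '\0';
    b.w = (int)strlen(b.label) * 8 + 2 * g_theme.border_width;
    b.h = 16 + 2 * g_theme.border_width;
    return b;
}
```

Whether this lives in the compositor or in a separate GUI library is exactly the layering question above; the app-facing interface can stay the same either way.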
Thanks for the in-depth post; it gives me some more to think about.