Brendan wrote:
There's no GUI support library needed. Each application just sends its list of commands to its parent, and doesn't need to know or care what that parent is, regardless of whether the parent happens to be one of many very different GUIs, something like a debugger, another application (e.g. a word-processor where the document contains a picture that you're editing), or the "virtual screen layer" (e.g. an application running full screen with no GUI installed on the computer at all).
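To make that concrete, here's a minimal sketch in C of what "sending a list of commands to the parent" could look like. The command set, structures and IPC call are all hypothetical, not Brendan's actual protocol:
[code]
/* A minimal sketch, assuming a hypothetical command set: the application
 * builds a list of drawing commands and hands it to whatever its parent
 * happens to be. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

enum cmd_op { CMD_CLEAR, CMD_DRAW_RECT, CMD_DRAW_TEXT };

struct cmd {
    enum cmd_op op;
    int32_t     x, y, w, h;
    uint32_t    colour;   /* 0xAARRGGBB */
    const char *text;     /* only used by CMD_DRAW_TEXT */
};

/* Stand-in for the real IPC primitive; the application never learns
 * whether the receiver is a GUI, a debugger, or the virtual screen layer. */
static void send_to_parent(const struct cmd *cmds, size_t count)
{
    for (size_t i = 0; i < count; i++)
        printf("cmd %zu: op=%d\n", i, (int)cmds[i].op);
}

int main(void)
{
    struct cmd cmds[] = {
        { CMD_CLEAR,     0,  0,   0,  0, 0xFF202020, NULL    },
        { CMD_DRAW_RECT, 10, 10, 100, 20, 0xFF4080C0, NULL    },
        { CMD_DRAW_TEXT, 14, 14,   0,  0, 0xFFFFFFFF, "Hello" },
    };
    send_to_parent(cmds, sizeof cmds / sizeof cmds[0]);
    return 0;
}
[/code]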
So, what you're saying is every application developer has to reimplement the code to write these commands...
Brendan wrote:
Owen wrote:
OK, so your only consideration is software rendering because you can't be bothered to write hardware accelerated graphics drivers?
I wrote a simplified description of "raw pixel" and a simplified description of "list of commands", deliberately omitting implementation details that would do little more than obscure the point. You are including all sorts of complexities/optimisations for the "raw pixel" case that I deliberately didn't include, and you're failing to recognise that the "list of commands" can also have complexities/optimisations that I deliberately didn't include. The end result is an extremely unfair comparison between "complex/optimised vs. deliberately simplified".
To avoid this extremely unfair comparison, we can both write several books describing all of the intricate details (that nobody will want to read), or we can try to avoid all the irrelevant implementation details that do nothing more than obscure the point. Assuming "LFB with software rendering" is a good way to focus on the important part (the abstractions themselves, rather than special cases of which pieces of hardware are capable of doing what behind the abstractions).
In addition, assuming "LFB with software rendering" has obvious practical value - no hobby OS developer will ever have a native driver for every video card. It would be very foolish to ignore software rendering, and it makes a lot more sense to ignore hardware acceleration (as anything that works for software rendering will just work faster when hardware acceleration is involved anyway).
No it won't! GPUs behave very differently from CPUs! The kinds of optimizations you do to make graphics fast on the CPU do not work on the GPU.
Executing one GPU command which draws a million pixels and one which draws a thousand have, from the CPU perspective, identical cost - and modern 3D engines are generally CPU bound on all cores anyway.
But, by your logic, the only thing that should ever be considered is the PC, because supporting every possible ARM device is impossible for one person... ignoring that there might be other good reasons for supporting ARM (e.g. embedded use of your OS).
Brendan wrote:
Owen wrote:
You pretend that DMA and GPU IOMMUs (or for obsolete AGP machines, the GART) don't exist
I only said "pixels are created for the second video card's display". I didn't say how the pixels are created (e.g. by hardware or software, or by copying using IOMMUs or whatever else) because this is mostly irrelevant (they're still created regardless of how).
If the framebuffer hardware is reading the pixels directly from the memory buffer in which they were created, is that still a pixel being created, to you?
Your maths doesn't make any sense...
Brendan wrote:
Owen wrote:
Brendan wrote:
I didn't specify any specific colour depth for either display. Let's assume one is using 24-bpp xvYCC and the other is using 30-bpp "deep colour" sRGB.
So you run the framebuffer in 30-bit sRGB and convert once.
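For reference, the bit-depth half of such a conversion is cheap. A sketch, assuming a 2-10-10-10 packing for the 30-bpp buffer and ignoring the separate sRGB-to-xvYCC colour-space transform (which is a matrix step on top of this):
[code]
#include <stdint.h>
#include <stdio.h>

/* Assumed packing: xx RRRRRRRRRR GGGGGGGGGG BBBBBBBBBB (2-10-10-10). */
static uint32_t pack24_from30(uint32_t px30)
{
    uint32_t r = (px30 >> 20) & 0x3FF;
    uint32_t g = (px30 >> 10) & 0x3FF;
    uint32_t b =  px30        & 0x3FF;
    /* Drop the 2 low bits of each channel (truncation; a real driver
     * might round or dither instead). */
    return ((r >> 2) << 16) | ((g >> 2) << 8) | (b >> 2);
}

int main(void)
{
    uint32_t px30 = (1023u << 20) | (512u << 10) | 0u;  /* bright red-ish */
    printf("24-bpp: 0x%06X\n", pack24_from30(px30));
    return 0;
}
[/code]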
And, among dual monitor setups, how common is this scenario?
It's an intentionally manufactured scenario which is comparatively rare.
An abstraction is either flexible or it's not. You are complaining because your abstraction is not as flexible.
It's also possible to design an architecture which is too flexible - i.e. in which you spend too much time trying to deal with corner cases to the detriment of users and developers.
In this case, you're trying to avoid one measly buffer copy in a rare scenario.
Brendan wrote:
It was an example of one way in which certain commands are difficult to understand, which ignores the fact that (in my opinion) those commands should never have existed in the first place.
Owen wrote:
So what is your proposed alternative to developers' shaders?
The physics of light do not change. The only thing that does change is the "scene"; and this is the only thing an application needs to describe (see the sketch after this list). This means describing:
- the location of solids, liquids and gases consisting of "materials"
- the properties of those "materials", which are limited to:
  - reflection
  - absorption
  - refraction (in theory - likely to be ignored/not supported due to the overhead of modelling it)
  - diffraction
- the location of light sources and their properties:
  - the colour of light it generates
  - the intensity of the light it generates
  - the direction/s of the light it generates and how focused the light is (e.g. narrow beam vs. "all directions")
- ambient light (because it'd cost too much overhead to rely on diffraction alone)
- the camera's position and direction
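Concretely, the shape of such a scene description might look something like this. A sketch only - every name here is hypothetical, nothing is the actual API:
[code]
/* A sketch of the scene description outlined above, not a real API. */

struct vec3 { float x, y, z; };

/* Properties of a "material", limited to the optical ones listed above. */
struct material {
    struct vec3 reflection;    /* per-channel reflectance */
    struct vec3 absorption;    /* per-channel absorption */
    float       refraction;    /* index of refraction (may go unsupported) */
    float       diffraction;   /* scattering strength */
};

/* A solid/liquid/gas placed in the scene, made of some material. */
struct body {
    struct vec3            position;
    const struct material *mat;
};

/* A light source and its properties. */
struct light {
    struct vec3 position;
    struct vec3 colour;      /* colour of the light it generates */
    float       intensity;   /* intensity of the light it generates */
    struct vec3 direction;
    float       focus;       /* 0.0 = "all directions", 1.0 = narrow beam */
};

/* The whole "scene" an application hands to whatever does the rendering. */
struct scene {
    const struct body  *bodies;
    int                 body_count;
    const struct light *lights;
    int                 light_count;
    struct vec3         ambient;     /* ambient light */
    struct vec3         camera_pos;  /* the camera's position */
    struct vec3         camera_dir;  /* ...and direction */
};
[/code]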
Whatever is doing the rendering is responsible for doing the rendering; but that has nothing to do with applications. For example, if a video card driver is responsible for doing the rendering then it can use the video card's shader/s for anything it likes; but that has nothing to do with applications.
So all your OS is supporting is ultra-realistic rendering, and none of the intentionally non-realistic models that many games choose to use for stylistic purposes?
Plus, actually, it's supporting said ultra-realistic rendering badly, by failing to account for normal mapping, displacement mapping, and similar techniques used to enhance the verisimilitude of materials and fake scene complexity without requiring that e.g. a gravel texture be decomposed into several million polygons.
Brendan wrote:
Owen wrote:
How does that change nothing? If you have transparency, you need to alpha blend things.
If something creates a pixel by combining 2 or more pixels from elsewhere then it creates 1 pixel; and if something creates a pixel by merely copying a pixel from elsewhere then it creates 1 pixel. In either case the number of pixels created is 1.
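As a throwaway illustration of that counting rule (not anyone's actual compositor code): a source-over blend consumes two pixels but creates exactly one.
[code]
#include <stdint.h>
#include <stdio.h>

/* Classic "source over destination" blend on 0xAARRGGBB pixels. */
static uint32_t blend_over(uint32_t src, uint32_t dst)
{
    uint32_t a = (src >> 24) & 0xFF;
    uint32_t out = 0;
    for (int shift = 0; shift <= 16; shift += 8) {
        uint32_t s = (src >> shift) & 0xFF;
        uint32_t d = (dst >> shift) & 0xFF;
        out |= (((s * a) + (d * (255 - a))) / 255) << shift;
    }
    return out | 0xFF000000u;  /* one pixel out, however many went in */
}

int main(void)
{
    /* ~half-transparent red over opaque blue: still creates 1 pixel */
    printf("0x%08X\n", blend_over(0x80FF0000, 0xFF0000FF));
    return 0;
}
[/code]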
The number of pixels created only changes if alpha blending would mean that other pixels elsewhere would need to be created when otherwise (without transparency) they wouldn't. For example, if the second application's background is "solid white" then the pixels for the textures underneath don't need to be created; and if the second application's background is transparent then the pixels for the textures underneath it do need to be created.
Except for the second application's background (which obscures the 2 textures behind it), there are no other textures obscuring anything that isn't already being created.
Of course the "number of pixels created" metric is a simplification that ignores the actual cost of creating a pixel (which depends on which specific methods were used for implementation; like whether a Z-buffer was involved or if there was overdraw or whether we're doing sub-pixel accurate anti-aliasing or whatever else); but it's impossible to have specific details like that in a discussion that's generic enough to be useful.
So, what you're saying is that it's intentionally tilted to make your display list system look better. What a waste of a thread...
Brendan wrote:
Owen wrote:
Your display list system also received a texture update for texture 1. It also set a clipping rectangle and then drew the background+window1bg+texture1+window2bg+texture3, at a total cost of... 10240px. Of course, the compositor scales O(n) in the number of windows, while your display list system scales O(n) in the number of layers
Um, why is my system suddenly drawing things that it knows aren't visible?
Because what was the last UI system that users actually wanted to use (as opposed to things which look like Motif, which only programmers want to use) which didn't have rounded corners, alpha transparency or drop shadows somewhere?
I see now - you're ignoring an advantage of "list of commands" in one scenario by redefining the scenario. Nice!
I think most of the GUIs I've seen support transparent windows; but I can't be sure, because none of the applications I've used has transparent backgrounds (I did enable transparent backgrounds for the terminal emulator in KDE once, but I disabled it after about 10 minutes). The only place I actually see any transparency is in window decoration (e.g. slightly rounded corners in KDE, and actual transparency in Windows/Aero); but none of the window decorations in my example were obscuring any textures.
Media players shaped like kidney beans and similar are (unfortunately) not rare...
Brendan wrote:
Owen wrote:
So your application developers are going to write their UI drawing code from scratch for every application they develop?
Yes; but don't worry, I've got clean abstractions - application developers will be able to make their applications impressive and unique without dealing with a truckload of trash (or unwanted "widget toolkit" dependencies).
So you have a widget toolkit somewhere; you're just not saying where, because you want to pretend that the idea of a widget toolkit is trash - even though what an application developer fundamentally wants is to drag a button onto a form and have it work, not to then have to wire up all the code to draw it in the correct states, etc...
Brendan wrote:
Owen wrote:
And Brendan has yet to answer how, in his scenario - particularly considering he has said he doesn't plan to allow buffer readbacks - he intends to support composition effects like dynamic exposure HDR.
In HDR, the scene is rendered to a HDR buffer, normally in RGB_11f_11f_10f floating point format. A shader then takes exposure information and produces an LDR image suitable for a real monitor by scaling the values from this buffer. It often also reads a texture in order to do "tonemapping" - to create the curve (and sometimes colouration) that the developers intended.
In dynamic exposure HDR, the CPU (or, on modern cards, a compute shader) averages all the pixels on screen in order to calculate an average intensity, and from this calculates an exposure value. This is then normally integrated via a weighted and/or moving average and fed back to the HDR shader (with a few frames' delay, for pipelining reasons).
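As a rough CPU-side sketch of that dynamic exposure step - the constants, buffer layout and smoothing weight here are illustrative, not taken from any particular engine:
[code]
#include <math.h>
#include <stdio.h>

/* Average log-luminance of a linear-light HDR buffer (r,g,b triples);
 * the log average (geometric mean) keeps a few very bright pixels from
 * dominating the result. */
static float average_log_luminance(const float *rgb, int pixel_count)
{
    double sum = 0.0;
    for (int i = 0; i < pixel_count; i++) {
        const float *p = &rgb[i * 3];
        float lum = 0.2126f * p[0] + 0.7152f * p[1] + 0.0722f * p[2];
        sum += log(1e-4 + lum);   /* epsilon avoids log(0) */
    }
    return (float)exp(sum / pixel_count);
}

int main(void)
{
    float frame[] = { 4.0f, 4.0f, 4.0f,  0.5f, 0.5f, 0.5f };  /* 2 px */
    float key = 0.18f;                 /* "middle grey" target */
    float smoothed = 1.0f;

    float avg = average_log_luminance(frame, 2);
    float exposure = key / avg;
    /* Weighted moving average, mimicking the few-frame feedback delay. */
    smoothed = 0.9f * smoothed + 0.1f * exposure;
    printf("avg=%.3f exposure=%.3f smoothed=%.3f\n", avg, exposure, smoothed);
    return 0;
}
[/code]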
In his system, either the entire scene needs to get rendered on both GPUs (so the adjustment can be made on both), or you'll get different exposures on each and everything is going to look awful.
You're right - the video drivers would need to send a "max brightness for this video card" back to the virtual screen layer, and the virtual screen layer would need to send a "max brightness for all video cards" back to the video cards. That's going to cost a few extra cycles; but then there's no reason the virtual screen layer can't check for bright light sources beforehand and tell the video cards not to bother if it detects that everything is just ambient lighting (e.g. normal applications rather than games).
--
I've had a little more time to think about this. The goal would be to simulate the human iris, which doesn't react immediately. The virtual screen layer would keep track of the "current iris setting", and when it tells the video driver/s to draw a frame it'd also tell them the current iris setting, without caring how the new frame affects things; then the video driver/s would send back a "max brightness" after they've drawn the frame, and the virtual screen layer would use these "max brightness" responses to modify the current iris setting (that will be used for subsequent frames).
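A toy sketch of that feedback loop - all names are hypothetical and the adaptation rate is invented for illustration:
[code]
#include <stdio.h>

static float iris = 1.0f;  /* current iris setting kept by the screen layer */

/* Stand-in for the driver entry point described above: draws a frame
 * using the given iris setting and reports the max brightness produced. */
static float draw_frame(float iris_setting)
{
    (void)iris_setting;
    return 3.2f;  /* pretend this frame peaked at 3.2x "full white" */
}

int main(void)
{
    for (int frame = 0; frame < 5; frame++) {
        float max_brightness = draw_frame(iris);
        /* Adapt gradually, like a human iris (rate constant invented). */
        float target = 1.0f / max_brightness;
        iris += 0.25f * (target - iris);
        printf("frame %d: max=%.2f iris=%.3f\n", frame, max_brightness, iris);
    }
    return 0;
}
[/code]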
OK - that works for games which want "realistic" HDR by some definition. What about games which want to simulate intentionally non-realistic HDR?
Brendan wrote:
Owen wrote:
So now your graphics drivers need to understand all the intricacies of HDR lighting? What about all the different variations of tonemapping things could want?
What about global illumination, bokeh, dynamic particle systems, cloth, etc? Are you going to offload those, in their infinite variations, onto the graphics driver too?
As if graphics drivers weren't complex enough already...
A renderer's job is to render (static) images. Why would I want to spread the complexity of rendering all over the place? Do I win a lollipop if I manage to spread it so far that even people writing (e.g.) FTP servers that don't use any graphics at all have to deal with parts of the renderer's job?
Of course this works both ways. For example, things like physics (inertia, acceleration, fluid dynamics, collisions, deformations, etc) aren't the renderer's job (you'd want a physics engine for that).
A renderer's job is to render what the developer tells it to render.
In that regard shaders are a massive win, because shaders let artists and developers tweak all the parameters of their models to their hearts' content, rather than being stuck with the parameters that were formerly baked into the hardware - and that you're planning to bake into your API.