OSDev.org

The Place to Start for Operating System Developers
 Post subject: Re: OS Graphics
PostPosted: Tue Jul 30, 2013 5:12 pm 
Joined: Thu Jul 26, 2007 1:53 am
Posts: 395
@Brendan: I understand your idea and I think it is a sound solution. It's just that there is something about keeping track of higher-level commands from a lower level that feels backwards. It's like some sort of remote procedure call potentially initiated from the kernel and not from a userspace program. I wonder if a thing like this should be part of something else, like a network transparency layer or something that is separated from rendering? It won't be as fast, but it might make more sense.

_________________
Fudge - Simplicity, clarity and speed.
http://github.com/Jezze/fudge/


 Post subject: Re: OS Graphics
PostPosted: Tue Jul 30, 2013 7:46 pm 
Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

Owen wrote:
Brendan wrote:
Owen wrote:
No newly designed compositing window system does server-side decoration (Quartz and Wayland follow this) - that is to say, the decorations are already drawn around the buffers that the client submits (often by the mandatory-to-use windowing library). So, let's go with the decorated window sizes; 21525px each. The 20000px per window cost previously accounted for can therefore be disregarded (because you would just draw into the buffer eventually presented)


So to work around the problem of being badly designed, they force applications to use a mandatory windowing library and make things worse in a different way? This is good news! :)


So to work around the problem of being badly designed, you force every application to re-implement its own GUI support library?


There's no GUI support library needed. Each application just sends its list of commands to its parent; and doesn't need to know or care what its parent is; regardless of whether the parent happens to be one of many very different GUIs, or something like a debugger, or another application (e.g. a word-processor where the document contains a picture that you're editing), or if its parent is the "virtual screen layer" (e.g. application running full screen with no GUI installed on the computer at all).

Owen wrote:
Brendan wrote:
I tried to make the example simple/generic, so that people can see the point clearly. I was hoping that the point I was trying to make wouldn't be taken out back and bludgeoned to death with implementation details. Let's just assume the video driver is using an LFB buffer that was setup by the boot loader and all rendering is done in software (unless you're volunteering to write hardware accelerated video drivers for all of our OSs).


OK, so your only consideration is software rendering because you can't be bothered to write hardware accelerated graphics drivers?


I wrote a simplified description of "raw pixel" and a simplified description of "list of commands", deliberately omitting implementation details that would do little more than obscure the point. You are including all sorts of complexities/optimisations for the "raw pixel" case that I deliberately didn't include, and you're failing to recognise that the "list of commands" can also have complexities/optimisations that I deliberately didn't include. The end result is an extremely unfair comparison between "complex/optimised vs. deliberately simplified".

To avoid this extremely unfair comparison; we can both write several books describing all of the intricate details (that nobody will want to read); or we can try to avoid all the irrelevant implementation details that do nothing more than obscure the point. Assuming "LFB with software rendering" is a good way to focus on the important part (the abstractions themselves, rather than special cases of which pieces of hardware are capable of doing what behind the abstractions).

In addition, assuming "LFB with software rendering" does have obvious practical value - no hobby OS developer will ever have a native driver for every video card. It would be very foolish to ignore software rendering; and it makes a lot more sense to ignore hardware acceleration (as anything that works for software rendering will just work faster when hardware acceleration is involved anyway).

Owen wrote:
Brendan wrote:
I see you're using the new "quantum entanglement" video cards where the same texture magically appears in 2 completely separate video card's memory at the same time. Nice... ;)


You pretend that DMA and GPU IOMMUs (or for obsolete AGP machines, the GART) don't exist


I only said "pixels are created for the second video card's display". I didn't say how the pixels are created (e.g. by hardware or software, or by copying using IOMMUs or whatever else) because this is mostly irrelevant (they're still created regardless of how).

Owen wrote:
Brendan wrote:
Owen wrote:
I believe the two different display depths to largely be a red herring: when did you last see a 16bpp monitor?


I didn't specify any specific colour depth for either display. Let's assume one is using 24-bpp XvYCC and the other is using 30-bpp "deep colour" sRGB.

So you run the framebuffer in 30-bit sRGB and convert once.

And, among dual monitor setups, how common is this scenario? :roll: It's an intentionally manufactured scenario which is comparatively rare.


An abstraction is either flexible or it's not. You are complaining because your abstraction is not as flexible.

Owen wrote:
Brendan wrote:
Owen wrote:
Given the following simple OpenGL snippet, and assuming everything else is in its default case - i.e. no vertex buffers/etc bound to the pipeline
Code:
glUseProgram(aShader);
glDrawArrays(GL_TRIANGLES,  0, 3);


Can you discern what is going to be drawn? Note that I just provoked the processing of 3 vertices with no buffers bound - yes this is legal. Actually, even if I'd bound a bunch of buffers that wouldn't help, because all the render system would know is something along the lines of "The buffer has a stride of 8 bytes and offset 0 in each stride contains a 4-vector of half floats bound to shader input slot 0"


Have I ever done or said anything to make you think I care about OpenGL compatibility?

Note: Current graphics systems are so lame that application/game developers feel the need to cobble together their own shaders. On one hand this disgusts me (in a "how did potentially smart people let things get this bad" way), but on the other hand it makes me very very happy (in a "Hahaha, I can make this so much easier for application developers" way).


I was saying nothing about OpenGL support. I was using it as an example of one way in which the commands that people pass to modern renderers are difficult to make sense of from the perspective of whoever receives them.


It was an example of one way that certain commands are difficult to understand, which ignores the fact that (in my opinion) those commands should never have existed in the first place.

Owen wrote:
So what is your proposed alternative to developers' shaders?


The physics of light do not change. The only thing that does change is the "scene"; and this is the only thing an application needs to describe. This means describing:
  • the location of solids, liquids and gases consisting of "materials"
  • the properties of those "materials", which are limited to:
    • reflection
    • absorption
    • refraction (in theory - likely to be ignored/not supported due to the overhead of modelling it)
    • diffraction
  • the location of light sources and their properties:
    • the colour of light it generates
    • the intensity of the light it generates
    • the direction/s of the light it generates and how focused the light is (e.g. narrow beam vs. "all directions")
    • ambient light (because it'd cost too much overhead to rely on diffraction alone)
  • the camera's position and direction

Whatever is doing the rendering is responsible for doing the rendering; but that has nothing to do with applications. For example, if a video card driver is responsible for doing the rendering then it can use the video card's shader/s for anything it likes; but that has nothing to do with applications.
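
For illustration only; a rough sketch of what such a scene description might look like as plain data (all of the structure and field names below are made up for this post, not an actual API):
Code:
/* Hypothetical scene-description commands an application might send to its
 * parent (GUI, debugger, virtual screen layer, ...). Illustrative only. */
#include <stdint.h>

typedef struct {
    float reflection[3];     /* per-channel reflection */
    float absorption[3];     /* per-channel absorption */
    float refraction_index;  /* in theory; likely ignored, as noted above */
    float diffraction;       /* how strongly the material scatters light */
} Material;

typedef enum { CMD_PLACE_VOLUME, CMD_PLACE_LIGHT, CMD_SET_CAMERA } SceneCmdType;

typedef struct {
    SceneCmdType type;
    union {
        struct { uint32_t mesh_id, material_id; float transform[16]; } volume;
        struct { float position[3], colour[3], direction[3];
                 float intensity, focus, ambient; } light;
        struct { float position[3], direction[3]; } camera;
    } u;
} SceneCommand;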

Owen wrote:
Brendan wrote:
Owen wrote:
Except this is an utter falsehood. Any practical GUI involves alpha blending, because otherwise things like curved window borders look awful and you can't have things like drop shadows which users appreciate for giving the UI a sense of depth. So, actually, display 1 draws
22500 (background) + ~8000 (background window) + 2048px (texture 1) + ~16000 (decorated window) + 7200 (texture 3) + 640 (app2's text) = ~56388
while display 2 draws
22500 (background) + ~15000 (window containing T2) + 1600 (Texture 2) + ~7200 (window containing T4) + 1500 (T4) = ~47800
That gives a total of 104188px. We know from above that each window is 21515px, so the actual figure is 10138. Add in the caching you proposed and you're at 104138. Note that you can't optimize out any of those draws unless you know that nothing being drawn has any translucent segments.


It's obvious (from the pictures I created) that nothing was transparent except for each application's text. If you want to assume that the window borders had rounded corners, and that each application's smaller textures also had transparency, then that changes nothing anyway. The only place where transparency would make a difference is if the second application's background was transparent; but that's a plain white rectangle.

How does that change nothing? If you have transparency, you need to alpha blend things.


If something creates a pixel by combining 2 or more pixels from elsewhere then it creates 1 pixel; and if something creates a pixel by merely copying a pixel from elsewhere then it creates 1 pixel. In either case the number of pixels created is 1.

The number of pixels created only changes if alpha blending would mean that other pixels elsewhere would need to be created when otherwise (without transparency) they wouldn't. For example, if the second application's background is "solid white" then the pixels for the textures underneath don't need to be created; and if the second application's background is transparent then the pixels for the textures underneath it do need to be created.

Except for the second application's background (which obscures the 2 textures behind it), there are no other textures obscuring anything that isn't already being created.

Of course the "number of pixels created" metric is a simplification that ignores the actual cost of creating a pixel (which depends on which specific methods were used for implementation; like whether a Z-buffer was involved or if there was overdraw or whether we're doing sub-pixel accurate anti-aliasing or whatever else); but it's impossible to have specific details like that in a discussion that's generic enough to be useful.
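
For illustration only; a tiny sketch (made-up types, not an actual implementation) of why blending doesn't change the count - however many source pixels feed the blend, exactly one destination pixel is written:
Code:
#include <stdint.h>

typedef struct { uint8_t r, g, b, a; } Pixel;

/* Blend "src" over "dst"; two pixels are read, but only one is created. */
Pixel blend_over(Pixel dst, Pixel src)
{
    Pixel out;
    out.r = (uint8_t)((src.r * src.a + dst.r * (255 - src.a)) / 255);
    out.g = (uint8_t)((src.g * src.a + dst.g * (255 - src.a)) / 255);
    out.b = (uint8_t)((src.b * src.a + dst.b * (255 - src.a)) / 255);
    out.a = 255;
    return out;
}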

Brendan wrote:
Owen wrote:
Your display list system also received a texture update for texture 1. It also set a clipping rectangle and then drew the background+window1bg+texture1+window2bg+texture3, at a total cost of... 10240px. Of course, the compositor scales O(n) in the number of windows, while your display list system scales O(n) in the number of layers


Um, why is my system suddenly drawing things that it knows aren't visible?

Because what was the last UI system that users actually wanted to use (as opposed to things which look like Motif which only programmers want to use) which didn't have rounded corners, alpha transparency or drop shadows somewhere?

I see now - you're ignoring an advantage of "list of commands" in one scenario by redefining the scenario. Nice!

I think most of the GUIs I've seen support transparent windows; but I can't be sure because every single application I've used doesn't have transparent backgrounds (I did enable transparent backgrounds for the terminal emulator in KDE once, but I disabled it after about 10 minutes). The only place I actually see any transparency is in window decoration (e.g. slightly rounded corners in KDE, and actual transparency in Windows/Aero); but none of the window decorations in my example were obscuring any textures.

Owen wrote:
Brendan wrote:
Owen wrote:
An alternate situation: the user picks up one of the windows and drags it.

The damage rectangle per frame is going to be slightly higher than 20000px because that is the size of the window, so we will take that figure (the actual number is irrelevant). None of the window contents are changing in this scenario, and we'll assume the user is moving the topmost window left, and 16000px of the background window are covered. The display list system draws the 20000px of background, 16000px of background window decorations/background, 2048px of texture 1, 20000px of foreground window decorations/background, 7200+1500px of T3+T4: 66748px


I'm not too sure what's going on with the 2 completely different systems that you seem to have invented.

For my "list of commands" (as described), if the second application's window (the topmost window) is being dragged to the left; then the GUI would send its main list of commands each frame, causing the first video driver to redraw 22500 pixels and the second video driver to redraw 22500 pixels for most frames (there are 2 textures that were never drawn, that would need to be drawn once when they become exposed).

However, my "list of commands" (as described) is only a simplified description because I didn't want to bury the concept with irrelevant details. Nothing prevents the video drivers from comparing the GUI's main list of commands with the previous version and figuring out what needs to be redrawn and only redrawing the minimum it has to. This would make it as good as your system/s without any "damage events" or toolkits or whatever other extra burdens you want to force the unfortunate application developers to deal with.
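
For illustration only; a rough sketch of the kind of comparison a video driver could do between the previous and current command lists (made-up structures, not an actual design):
Code:
#include <stddef.h>
#include <stdint.h>

typedef struct { int x, y, w, h; } Rect;

typedef struct {
    uint32_t id;      /* stable identifier for the item being drawn */
    uint64_t hash;    /* hash of the command's parameters */
    Rect     bounds;  /* screen area the command touches */
} DrawCmd;

/* Collect the bounds of commands that changed, appeared or disappeared
 * between frames; only those areas need to be redrawn. The "dirty" array
 * must have room for nprev + nnext entries. */
size_t diff_lists(const DrawCmd *prev, size_t nprev,
                  const DrawCmd *next, size_t nnext, Rect *dirty)
{
    size_t ndirty = 0;

    for (size_t i = 0; i < nnext; i++) {
        int changed = 1;
        for (size_t j = 0; j < nprev; j++) {
            if (prev[j].id == next[i].id) {
                changed = (prev[j].hash != next[i].hash);
                break;
            }
        }
        if (changed)
            dirty[ndirty++] = next[i].bounds;
    }

    for (size_t j = 0; j < nprev; j++) {   /* removed commands expose whatever was under them */
        int still_there = 0;
        for (size_t i = 0; i < nnext; i++)
            if (next[i].id == prev[j].id) { still_there = 1; break; }
        if (!still_there)
            dirty[ndirty++] = prev[j].bounds;
    }

    return ndirty;
}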


So your application developers are going to write their UI drawing code from scratch for every application they develop? :roll:


Yes; but don't worry, I've got clean abstractions - application developers will be able to make their applications impressive and unique without dealing with a truckload of trash (or unwanted "widget toolkit" dependencies).

Owen wrote:
Of course not, they're going to use a widget toolkit that, hopefully, your OS will provide. Supporting said damage events in a widget toolkit isn't difficult. In the worst case, implementing it is no more complex than implementing the same feature in the GUI system.


Why would I want to inflict this on application developers and users?

Owen wrote:
Brendan wrote:
Owen wrote:
And Brendan has yet to answer how, in his scenario - particularly considering he has said he doesn't plan to allow buffer readbacks - he intends to support composition effects like dynamic exposure HDR

In HDR, the scene is rendered to a HDR buffer, normally in RGB_11f_11f_10f floating point format. A shader then takes exposure information to produce an LDR image suitable for a feasible monitor by scaling the values from this buffer. It often also reads a texture in order to do "tonemapping", or create the curve (and sometimes colouration) that the developers intended.

In dynamic exposure HDR, the CPU (or on modern cards a compute shader) averages all the pixels on screen in order to calculate an average intensity, and from this calculates an exposure value. This is then normally integrated via a weighted and/or moving average and fed back to the HDR shader (with a few frames delay for pipelining reasons)

In his system, either the entire scene needs to get rendered on both GPUs (so the adjustment can be made on both), or you'll get different exposures on each and everything is going to look awful.


You're right - the video drivers would need to send a "max brightness for this video card" back to the virtual screen layer, and the virtual screen layer would need to send a "max brightness for all video cards" back to the video cards. That's going to cost a few extra cycles; but then there's no reason the virtual screen layer can't check for bright light sources beforehand and tell the video cards not to bother if it detects that everything is just ambient lighting (e.g. normal applications rather than games).


I've had a little more time to think about this. The goal would be to simulate the human iris, which doesn't react immediately. The virtual screen layer would keep track of the "current iris setting", and when it tells the video driver/s to draw a frame it'd also tell it the current iris setting without caring how the new frame affects things; then the video driver/s would send back a "max brightness" after they've drawn the frame, and the virtual screen layer would use these "max brightness" responses to modify the current iris setting (that will be used for subsequent frames).
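
For illustration only; a rough sketch of that feedback loop (the function and parameter names are made up, not an actual design):
Code:
/* The virtual screen layer smooths the per-frame "max brightness" reported
 * by the video drivers into a slowly changing iris/exposure value, so the
 * response lags a little like a real iris does. */
float update_iris(float current_iris, float max_brightness_all_cards,
                  float adaptation_rate /* e.g. 0.05 per frame */)
{
    float target = (max_brightness_all_cards > 1.0f)
                       ? 1.0f / max_brightness_all_cards
                       : 1.0f;
    return current_iris + (target - current_iris) * adaptation_rate;
}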

Owen wrote:
So now your graphics drivers need to understand all the intricacies of HDR lighting? What about all the different variations of tonemapping things could want?

What about global illumination, bokeh, dynamic particle systems, cloth, etc? Are you going to offload those in their infinite variations on the graphics driver too?

As if graphics drivers weren't complex enough already...


A renderer's job is to render (static) images. Why would I want to spread the complexity of rendering all over the place? Do I win a lollipop if I manage to spread it so far that even people writing (e.g.) FTP servers that don't use any graphics at all have to deal with parts of the renderer's job?

Of course this works both ways. For example, things like physics (inertia, acceleration, fluid dynamics, collisions, deformations, etc) aren't the renderer's job (you'd want a physics engine for that).


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: OS Graphics
PostPosted: Tue Jul 30, 2013 8:05 pm 
Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

Jezze wrote:
@Brendan: I understand your idea and I think it is a sound solution. It's just that there is something about keeping track of higher-level commands from a lower level that feels backwards. It's like some sort of remote procedure call potentially initiated from the kernel and not from a userspace program. I wonder if a thing like this should be part of something else, like a network transparency layer or something that is separated from rendering? It won't be as fast, but it might make more sense.


My OS is intended as a distributed system, where processes can be running on any computer without the programmer or user caring where (e.g. when a new process is started the OS decides which computer it should use based on load, bandwidth, etc). This means that I can't assume that application/s, GUI/s, the "virtual screen layer" or video driver/s are running on the same computer.

It's also one of the main reasons why I want all these pieces using "lists of commands" (rather than trying to push large blobs of raw pixels around the network because the GPU isn't in the same computer as the application or GUI)... ;)
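
As a rough back-of-the-envelope illustration (the numbers below are assumptions for the sake of the example, not measurements): one 1920*1080 32-bpp frame is about 8.3 MB, so pushing raw pixels at 30 frames per second needs roughly 2 Gbit/s, while a command list of a few KB per frame needs around 1 Mbit/s.
Code:
#include <stdio.h>

int main(void)
{
    const double frame_bytes = 1920.0 * 1080.0 * 4.0;    /* one 32-bpp frame, ~8.3 MB */
    const double raw_bps     = frame_bytes * 30.0 * 8.0; /* ~2 Gbit/s at 30 frames/sec */
    const double cmd_bps     = 4096.0 * 30.0 * 8.0;      /* ~1 Mbit/s if each frame's
                                                            command list is ~4 KB */
    printf("raw pixels: %.2f Gbit/s, command lists: %.2f Mbit/s\n",
           raw_bps / 1e9, cmd_bps / 1e6);
    return 0;
}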


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: OS Graphics
PostPosted: Wed Jul 31, 2013 12:36 am 
Joined: Wed Oct 01, 2008 1:55 pm
Posts: 3191
Brendan wrote:
Because it avoids drawing pixels for no reason; while also providing a clean abstraction for applications to use; while also providing device independence; while also allowing a massive amount of flexibility.


Perhaps, but different expectations of what user code will be used for do influence what the GUI looks like.

Also, when you say that you won't layer your implementation into GUI, 3D, 2D and basic APIs, it sounds like you will not provide adequate abstractions for applications.

Brendan wrote:
a) Applications have to know/care about things like resolution and colour depth.


Only resolution (not color depth). That is natural in our application because it contains many images that need to have the correct dimensions so they don't need to be scaled. Thus, the application decides itself that it wants to run at 640x480 or 800x600, and then will only work if this resolution is available. That makes the application much simpler to code (fixed pixel coordinates) and fast (no transformations). Some test applications for RDOS work in any resolution, but then need to scale themselves. It would be perfectly possible to create another abstraction layer (in the class library) that could work with scaling, but so far this need has not arisen.

Brendan wrote:
If they don't you need extra processing for scaling and conversions which increases overhead and reduces graphics quality.


Not so. Our application demands a specific resolution.

Brendan wrote:
Because applications need to care about resolution and colour depth, you have to add a pile of bloat to your applications that application developers should never have needed to bother with. This pile of bloat could be "hidden" by a library or something, but that just shifts the bloat into the library, and also means that applications need to bother with the hassle of libraries/toolkits/puss.


Not so, as explained above.

Brendan wrote:
b1) If the user wants to send a screenshot to their printer, the GUI and all applications have to create completely different graphics to send to the printer; which adds another pile of bloat to your applications (and/or whatever libraries/toolkits/puss they're using to hide problems that shouldn't have existed). Most OSs deal with this problem by failing to deal with the problem at all (e.g. they'll scale and convert graphics intended for the video card and produce crappy quality images). The "failing to deal with the problem" that most OSs do includes either scaling the image (which reduces quality) or not scaling the images (e.g. a small rectangle in the middle of the printed page that's the wrong size). Also note that converting colours from one colour space to another (e.g. sRGB to CMYK) means that colours that weren't possible in the first colour space aren't possible in the result (for example; if the application draws a picture with shades of cyan that can't be represented by sRGB but can be represented by CMYK, then you end up screwing those colours up, then converting the screwed up colours to screwed up CMYK, even though the colours should've been correct in CMYK).


There is no direct way to send screenshots to printers, but a GUI image can be separately rendered to a printer, which we use for receipt printouts, using the same controls as the GUI uses.

Brendan wrote:
b2) If the user wants to record a screenshot as a file, or record a video of the screen to a file; then it's the same problem as above. For example; I do want to be able to record 5 minutes of me playing a game or using any application (with crappy low quality software rendering at 640*480), and then (afterwards) tell the OS to spend 6 hours meticulously converting that recording into an extremely high quality video at 1920*1600 (for demonstration purposes).


There is a feature that can do a screenshot to a PNG-file. This will save it in the current resolution. If somebody needs to change resolution of the image, they can do that in Windows with Paint or similar. No need to reinvent the wheel here.

Brendan wrote:
c) Because performance would suck due to applications drawing lots of pixels for no reason; you need ugly hacks/work-arounds in an attempt to reduce the stupidity (e.g. "damage events"). This adds yet another pile of bloat and hassles for the application/GUI developers (and/or whatever libraries/toolkits/puss they're using to hide problems that shouldn't have existed) to bother with.


I've never seen any need to do that. There are no hacks like that available.

Brendan wrote:
d) If the graphics have to be sent over a network, sending "raw pixels" is a huge waste of network bandwidth and is extremely stupid. It'd be like using BMP (raw pixel data) instead of HTML (a description of what to draw) for web pages. In addition to wasting lots of bandwidth, it doesn't distribute load well (e.g. the server generating the content and rendering the content, rather than load being shared between server and client). I'm doing a distributed system so this is very important to me; but even for a "normal" OS it can be important (e.g. remote management/administration, businesses using desktop virtualisation, etc). Note: If I'm playing a 3D game (or doing whatever) in my lounge room, I want to be able to transfer my "virtual screen" to a computer at the other end of the house and keep playing that game (or doing whatever), without the game/applications knowing or caring that everything (video card, monitor, etc) changed. This sort of thing would be easy for my OS to support (for all applications and GUIs, without any extra code in any application or GUI).


We have developed a remote screen access utility for our terminal. It works by sending the control-list and properties, and is very small and efficient. It works on slow GPRS connections without problems (which typical remote desktops won't). The host runs on Windows, and needs to know how to draw the controls of RDOS, and also needs to have all the images used (these are not transferred).
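
Purely as an illustration of the general idea (the structure and field names below are invented for this post, not RDOS's actual wire format), a message per control plus its properties stays tiny compared to sending pixels:
Code:
#include <stdint.h>

/* One entry in a hypothetical "control-list" update sent to the remote host;
 * a whole screen is an array of these, typically a few hundred bytes. */
typedef struct {
    uint16_t control_id;   /* which control class the host should draw */
    uint16_t image_id;     /* index into images the host already has locally */
    int16_t  x, y, w, h;   /* placement on the remote screen */
    char     text[32];     /* label/value, if the control shows text */
} RemoteControl;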

Brendan wrote:
e) If an application or GUI is using multiple monitors then "raw pixel" can't cope well with different resolutions and/or different colour depths (the application can only handle one resolution and one pixel depth). To cope with this most OS's will suck - e.g. they'll tell the application to use the same resolution for everything and then scale the image for the other monitor (which increases overhead and/or reduces graphics quality). Note that this includes the GUI itself, and while "different colour depths" is usually avoidable "different resolutions" is quite common. E.g. I'm currently using one monitor at 1920*1600 and another at 1600*1200; both monitors are different physical sizes (one is smaller in both directions) and have different aspect ratios (16:10 and 16:9). Sadly; both of my monitors have different "white points" and different brightness, which is annoying. I've tried adjusting them to get colours on both to match and it simply doesn't work (the range of adjustments is too limited) and the OS I'm using is too stupid to solve the problem simply by generating different colours for different monitors. Note: What I'd really like is to be able to point a web-cam at both monitors and let the OS auto-calibrate the colours to match (e.g. using a short series of patterns).


I see no reason to implement such complexities. It is possible to use multiple monitors (given the appropriate drivers), but the way this would be done is by assigning one application per monitor.

Brendan wrote:
f) The same "different resolutions" problem occurs for other things that have nothing to do with multiple monitors. For example, most OSs have a "magnifying glass" utility which increases the size of a certain area of the screen (here's an example). In this case you get poor quality scaled graphics because it's too hard for applications/GUIs to do it properly.


I see no reason for implementing a magnifying glass as I don't do photo-editing or similar in RDOS. We use Windows as a host for that.

Brendan wrote:
g) For real-time graphics (e.g. games) the person writing the application can't know how fast the computer is in advance. To work around this they add a pile of stupid/annoying/silly controls to their application (e.g. to setup/change resolution, to set texture detail level, to enable/disable shadows, etc) which just adds up to extra hassle for the application developer and extra bloat. Even with all the trouble, it doesn't actually work. For example, the user might get 60 frames per second on a virtual battlefield, but then a building will explode and the complexity of the scene will increase, and the user will end up with 20 frames per second. To avoid this, the user (who shouldn't have needed to waste their time fiddling with the controls that the application shouldn't have needed) could reduce the quality of graphics so they do get 60 frames per second for the complex scenes, but then they just get worse quality graphics for no reason for the simpler scenes. The main problem here is that those stupid/annoying/silly controls that the application developer wasted their time bothering with can't/won't dynamically adjust to changes in load. To avoid this problem, in theory, it would be possible for games developers to add even more bloat to dynamically adjust detail to cope with changes in load; but I have never seen any game actually do this (game developers don't want the hassle). For my way; the OS (e.g. video drivers) should be able to take care of this without too much problem.

f1) For real-time graphics (e.g. games) the person writing the application/game can't know which features the video card supports. Because the graphics API the applications/games have to use doesn't provide an adequate abstraction; this means that different applications/games won't run on different video cards (e.g. because the game requires "shader puke version 666" and the video card only supports "shader puke version 665"); and also means that older games won't benefit from newer features. In an attempt to cope with this, games developers spend a massive amount of time and money creating bloat to support many different pieces of hardware; which is incredibly retarded (e.g. as retarded as having to write 20 different pieces of code just to read a file because the OS failed to abstract the differences between AHCI and USB3 or between FAT and ISO9660).


I have no plans to support games.

Brendan wrote:
All of these problems can be solved easily, without making software developers deal with hacks/bloat/workaround/hassles.


No, they can't. They require years of coding, and when they are done they are likely no longer modern. In addition to that, game developers build games for specific environments, and wouldn't bother with a hobby-OS anyway. Unless you want to write games yourself, I see no reason to support games. And unless you want photo-editing or rendering complex 3D-scenes (and plan to write such tools yourself, because nobody would do it for you) I see no reason to implement that either. Many of these things can be done on another host, and then packaged in a simpler format so they can run on your OS. We do our animations in Photoshop and then export them as PNGs (we should use some video-format, but that is not yet ported).


 Post subject: Re: OS Graphics
PostPosted: Wed Jul 31, 2013 1:46 am 
Joined: Fri Jun 13, 2008 3:21 pm
Posts: 1700
Location: Cambridge, United Kingdom
Brendan wrote:
There's no GUI support library needed. Each application just sends its list of commands to its parent; and doesn't need to know or care what its parent is; regardless of whether the parent happens to be one of many very different GUIs, or something like a debugger, or another application (e.g. a word-processor where the document contains a picture that you're editing), or if its parent is the "virtual screen layer" (e.g. application running full screen with no GUI installed on the computer at all).

So, what you're saying is every application developer has to reimplement the code to write these commands...

Brendan wrote:
Owen wrote:
OK, so your only consideration is software rendering because you can't be bothered to write hardware accelerated graphics drivers?


I wrote a simplified description of "raw pixel" and a simplified description of "list of commands", deliberately omitting implementation details that would do little more than obscure the point. You are including all sorts of complexities/optimisations for the "raw pixel" case that I deliberately didn't include, and you're failing to recognise that the "list of commands" can also have complexities/optimisations that I deliberately didn't include. The end result is an extremely unfair comparison between "complex/optimised vs. deliberately simplified".

To avoid this extremely unfair comparison; we can both write several books describing all of the intricate details (that nobody will want to read); or we can try to avoid all the irrelevant implementation details that do nothing more than obscure the point. Assuming "LFB with software rendering" is a good way to focus on the important part (the abstractions themselves, rather than special cases of which pieces of hardware are capable of doing what behind the abstractions).

In addition, assuming "LFB with software rendering" does have obvious practical value - no hobby OS developer will ever have a native driver for every video card. It would be very foolish to ignore software rendering; and it makes a lot more sense to ignore hardware acceleration (as anything that works for software rendering will just work faster when hardware acceleration is involved anyway).


No it won't! GPUs behave very differently from CPUs! The kind of optimizations you do to make graphics fast on the CPU do not work on the GPU.

Executing one GPU command which draws a million pixels costs the same, from the CPU's perspective, as executing one which draws a thousand - and modern 3D engines are generally CPU-bound on all cores anyway.

But, by your logic, all that should ever be considered are PCs, because supporting every possible ARM device is impossible for one person... ignoring that there might be other good reasons for supporting ARM (e.g. embedded use of your OS)

Brendan wrote:
Owen wrote:
You pretend that DMA and GPU IOMMUs (or for obsolete AGP machines, the GART) don't exist


I only said "pixels are created for the second video card's display". I didn't say how the pixels are created (e.g. by hardware or software, or by copying using IOMMUs or whatever else) because this is mostly irrelevant (they're still created regardless of how).


If the framebuffer hardware is reading the pixels directly from the memory buffer in which they were created, that's still a pixel being created to you?

Your maths don't make any sense...

Brendan wrote:
Owen wrote:
Brendan wrote:
I didn't specify any specific colour depth for either display. Let's assume one is using 24-bpp XvYCC and the other is using 30-bpp "deep colour" sRGB.

So you run the framebuffer in 30-bit sRGB and convert once.

And, among dual monitor setups, how common is this scenario? :roll: It's an intentionally manufactured scenario which is comparatively rare.


An abstraction is either flexible or it's not. You are complaining because your abstraction is not as flexible.


It's also possible to design an architecture which is too flexible - i.e. in which you spend too much time trying to deal with corner cases to the detriment of users and developers.

In this case, you're trying to avoid one measly buffer copy in a rare scenario.

Brendan wrote:
It was an example of one way that certain commands are difficult to understand, which ignores the fact that (in my opinion) those commands should never have existed in the first place.

Owen wrote:
So what is your proposed alternative to developers' shaders?


The physics of light do not change. The only thing that does change is the "scene"; and this is the only thing an application needs to describe. This means describing:
  • the location of solids, liquids and gases consisting of "materials"
  • the properties of those "materials", which are limited to:
    • reflection
    • absorption
    • refraction (in theory - likely to be ignored/not supported due to the overhead of modelling it)
    • diffraction
  • the location of light sources and their properties:
    • the colour of light it generates
    • the intensity of the light it generates
    • the direction/s of the light it generates and how focused the light is (e.g. narrow beam vs. "all directions")
    • ambient light (because it'd cost too much overhead to rely on diffraction alone)
  • the camera's position and direction

Whatever is doing the rendering is responsible for doing the rendering; but that has nothing to do with applications. For example, if a video card driver is responsible for doing the rendering then it can use the video card's shader/s for anything it likes; but that has nothing to do with applications.


So all your OS is supporting is ultra-realistic rendering, and none of the intentionally non-realistic models that many games choose to use for stylistic purposes?

Plus, actually, it's supporting said ultra-realistic rendering badly, by failing to account for normal mapping, displacement mapping, and similar techniques used to enhance the verisimilitude of materials and fake scene complexity without requiring that e.g. a gravel texture be decomposed into several million polygons.

Brendan wrote:
Owen wrote:
How does that change nothing? If you have transparency, you need to alpha blend things.


If something creates a pixel by combining 2 or more pixels from elsewhere then it creates 1 pixel; and if something creates a pixel by merely copying a pixel from elsewhere then it creates 1 pixel. In either case the number of pixels created is 1.

The number of pixels created only changes if alpha blending would mean that other pixels elsewhere would need to be created when otherwise (without transparency) they wouldn't. For example, if the second application's background is "solid white" then the pixels for the textures underneath don't need to be created; and if the second application's background is transparent then the pixels for the textures underneath it do need to be created.

Except for the second application's background (which obscures the 2 textures behind it), there are no other textures obscuring anything that isn't already being created.

Of course the "number of pixels created" metric is a simplification that ignores the actual cost of creating a pixel (which depends on which specific methods were used for implementation; like whether a Z-buffer was involved or if there was overdraw or whether we're doing sub-pixel accurate anti-aliasing or whatever else); but it's impossible to have specific details like that in a discussion that's generic enough to be useful.


So, what you're saying is that it's intentionally tilted to make your display list system look better. What a waste of a thread...

Brendan wrote:
Owen wrote:
Your display list system also received a texture update for texture 1. It also set a clipping rectangle and then drew the background+window1bg+texture1+window2bg+texture3, at a total cost of... 10240px. Of course, the compositor scales O(n) in the number of windows, while your display list system scales O(n) in the number of layers


Um, why is my system suddenly drawing things that it knows aren't visible?

Because what was the last UI system that users actually wanted to use (as opposed to things which look like Motif which only programmers want to use) which didn't have rounded corners, alpha transparency or drop shadows somewhere?

I see now - you're ignoring an advantage of "list of commands" in one scenario by redefining the scenario. Nice!

I think most of the GUIs I've seen support transparent windows; but I can't be sure because every single application I've used doesn't have transparent backgrounds (I did enable transparent backgrounds for the terminal emulator in KDE once, but I disabled it after about 10 minutes). The only place I actually see any transparency is in window decoration (e.g. slightly rounded corners in KDE, and actual transparency in Windows/Aero); but none of the window decorations in my example were obscuring any textures.

Media players shaped like kidney beans and similar are (unfortunately) not rare...

Brendan wrote:
Owen wrote:
So your application developers are going to write their UI drawing code from scratch for every application they develop? :roll:


Yes; but don't worry, I've got clean abstractions - application developers will be able to make their applications impressive and unique without dealing with a truckload of trash (or unwanted "widget toolkit" dependencies).


So you have a widget toolkit somewhere, you're just not saying where, because you want to pretend that the idea of a widget toolkit is trash - even though what an application developer fundamentally wants is to be able to drag a button onto a form and have it work, not to then have to wire up all the code to draw it in the correct states, etc...

Brendan wrote:
Owen wrote:
And Brendan has yet to answer how, in his scenario - particularly considering he has said he doesn't plan to allow buffer readbacks - he intends to support composition effects like dynamic exposure HDR

In HDR, the scene is rendered to a HDR buffer, normally in RGB_11f_11f_10f floating point format. A shader then takes exposure information to produce an LDR image suitable for a feasible monitor by scaling the values from this buffer. It often also reads a texture in order to do "tonemapping", or create the curve (and sometimes colouration) that the developers intended.

In dynamic exposure HDR, the CPU (or on modern cards a compute shader) averages all the pixels on screen in order to calculate an average intensity, and from this calculates an exposure value. This is then normally integrated via a weighted and/or moving average and fed back to the HDR shader (with a few frames delay for pipelining reasons)

In his system, either the entire scene needs to get rendered on both GPUs (so the adjustment can be made on both), or you'll get different exposures on each and everything is going to look awful.


You're right - the video drivers would need to send a "max brightness for this video card" back to the virtual screen layer, and the virtual screen layer would need to send a "max brightness for all video cards" back to the video cards. That's going to cost a few extra cycles; but then there's no reason the virtual screen layer can't check for bright light sources beforehand and tell the video cards not to bother if it detects that everything is just ambient lighting (e.g. normal applications rather than games).

--

I've had a little more time to think about this. The goal would be to simulate the human iris, which doesn't react immediately. The virtual screen layer would keep track of the "current iris setting", and when it tells the video driver/s to draw a frame it'd also tell it the current iris setting without caring how the new frame affects things; then the video driver/s would send back a "max brightness" after they've drawn the frame, and the virtual screen layer would use these "max brightness" responses to modify the current iris setting (that will be used for subsequent frames).


OK - that works for games which want "realistic" HDR by some definition. What about games which want to simulate intentionally non-realistic HDR?

Brendan wrote:
Owen wrote:
So now your graphics drivers need to understand all the intricacies of HDR lighting? What about all the different variations of tonemapping things could want?

What about global illumination, bokeh, dynamic particle systems, cloth, etc? Are you going to offload those in their infinite variations on the graphics driver too?

As if graphics drivers weren't complex enough already...


A renderer's job is to render (static) images. Why would I want to spread the complexity of rendering all over the place? Do I win a lollipop if I manage to spread it so far that even people writing (e.g.) FTP servers that don't use any graphics at all have to deal with parts of the renderer's job?

Of course this works both ways. For example, things like physics (inertia, acceleration, fluid dynamics, collisions, deformations, etc) aren't the renderer's job (you'd want a physics engine for that).


A renderer's job is to render what the developer tells it to render.

In that regard shaders are a massive win, because shaders let artists and developers tweak all the parameters of their models to their hearts' content, rather than having to deal with the ones that were formerly baked into the hardware and that you're planning to bake into your API.


 Post subject: Re: OS Graphics
PostPosted: Wed Jul 31, 2013 1:57 am 
Joined: Wed Oct 01, 2008 1:55 pm
Posts: 3191
Brendan wrote:
I think most of the GUIs I've seen support transparent windows; but I can't be sure because every single application I've used doesn't have transparent backgrounds (I did enable transparent backgrounds for the terminal emulator in KDE once, but I disabled it after about 10 minutes). The only place I actually see any transparency is in window decoration (e.g. slightly rounded corners in KDE, and actual transparency in Windows/Aero); but none of the window decorations in my example were obscuring any textures.


At least I've added transparency to most of my GUI controls. I view that as an important feature. In the new GUI we will use transparency and overlaying a lot since it makes the design process easier and it looks good.


 Post subject: Re: OS Graphics
PostPosted: Wed Jul 31, 2013 2:13 am 
Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

rdos wrote:
Brendan wrote:
a) Applications have to know/care about things like resolution and colour depth.


Only resolution (not color depth). That is natural in our application because it contains many images that need to have the correct dimensions so they don't need to be scaled. Thus, the application decides itself that it wants to run at 640x480 or 800x600, and then will only work if this resolution is available. That makes the application much simpler to code (fixed pixel coordinates) and fast (no transformations). Some test applications for RDOS work in any resolution, but then need to scale themselves. It would be perfectly possible to create another abstraction layer (in the class library) that could work with scaling, but so far this need has not arisen.


It might be natural for your application (which requires a specific resolution); but for most modern systems applications are expected to handle any resolution.

rdos wrote:
Brendan wrote:
If they don't you need extra processing for scaling and conversions which increases overhead and reduces graphics quality.


Not so. Our application demands a specific resolution.


I'm not designing a graphics system for one specific application that uses "antiquated" graphics.

rdos wrote:
Brendan wrote:
Because applications need to care about resolution and colour depth, you have to add a pile of bloat to your applications that application developers should never have needed to bother with. This pile of bloat could be "hidden" by a library or something, but that just shifts the bloat into the library, and also means that applications need to bother with the hassle of libraries/toolkits/puss.


Not so, as explained above.


Only because the application demands a specific resolution.

rdos wrote:
Brendan wrote:
b2) If the user wants to record a screenshot as a file, or record a video of the screen to a file; then it's the same problem as above. For example; I do want to be able to record 5 minutes of me playing a game or using any application (with crappy low quality software rendering at 640*480), and then (afterwards) tell the OS to spend 6 hours meticulously converting that recording into an extremely high quality video at 1920*1600 (for demonstration purposes).


There is a feature that can do a screenshot to a PNG-file. This will save it in the current resolution. If somebody needs to change resolution of the image, they can do that in Windows with Paint or similar. No need to reinvent the wheel here.


I'm really not sure you grasp what I'm saying here.

Imagine the computer is using 320*200 256 colour mode; and an application draws "Hello World" on the screen (with really bad/blocky characters that are barely readable) with a pretty background. The user likes the pretty background, so they save a screen shot. The application doesn't do anything else so the user deletes the application. Three months later, the user buys an extremely high quality printer for printing huge glossy posters. They want to test it out, so they look for some sort of picture to print and find the "Hello World" screen shot, and print that out. They check the poster they printed with a magnifying glass (to get a really good look at the quality of the printer's printing) and they notice a small smudge in the top left corner of the letter 'H' in "Hello World". While trying to figure out where the smudge came from, they decide to open the old screen shot and zoom in on the smudge. They zoom in, and zoom in, and zoom in, and zoom in. Eventually they realise that the smudge is a copy of the entire Intel manual.

I don't think saving the 320*200 256 colour screenshot as PNG and using Paint to blow it up would produce the same quality result.
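
For illustration only; a rough sketch of the general idea (made-up structures, not an actual format) - if the "screenshot" is the recorded list of commands in a resolution-independent coordinate space, it can be re-rendered at any output size later:
Code:
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

typedef struct {
    float x0, y0, x1, y1;   /* rectangle in a virtual 0..1 coordinate space */
    uint32_t colour;        /* 0xRRGGBB */
} RectCmd;

/* Replay the recorded commands at any target resolution; a real renderer
 * would rasterise here, this sketch just reports what it would draw. */
void replay(const RectCmd *cmds, size_t n, int out_w, int out_h)
{
    for (size_t i = 0; i < n; i++) {
        int x = (int)(cmds[i].x0 * out_w);
        int y = (int)(cmds[i].y0 * out_h);
        int w = (int)((cmds[i].x1 - cmds[i].x0) * out_w);
        int h = (int)((cmds[i].y1 - cmds[i].y0) * out_h);
        printf("fill %d,%d %dx%d with #%06x\n", x, y, w, h,
               (unsigned)cmds[i].colour);
    }
}

int main(void)
{
    RectCmd screenshot[] = { { 0.1f, 0.1f, 0.9f, 0.3f, 0x4080FF } };
    replay(screenshot, 1, 320, 200);     /* the original low resolution */
    replay(screenshot, 1, 7200, 4800);   /* the same "screenshot" on a poster */
    return 0;
}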

rdos wrote:
Brendan wrote:
c) Because performance would suck due to applications drawing lots of pixels for no reason; you need ugly hacks/work-arounds in an attempt to reduce the stupidity (e.g. "damage events"). This adds yet another pile of bloat and hassles for the application/GUI developers (and/or whatever libraries/toolkits/puss they're using to hide problems that shouldn't have existed) to bother with.


I've never seen any need to do that. There are no hacks like that available.


According to Owen; all GUIs (including yours) do this... ;)

rdos wrote:
Brendan wrote:
d) If the graphics have to be sent over a network, sending "raw pixels" is a huge waste of network bandwidth and is extremely stupid. It'd be like using BMP (raw pixel data) instead of HTML (a description of what to draw) for web pages. In addition to wasting lots of bandwidth, it doesn't distribute load well (e.g. the server generating the content and rendering the content, rather than load being shared between server and client). I'm doing a distributed system so this is very important to me; but even for a "normal" OS it can be important (e.g. remote management/administration, businesses using desktop virtualisation, etc). Note: If I'm playing a 3D game (or doing whatever) in my lounge room, I want to be able to transfer my "virtual screen" to a computer at the other end of the house and keep playing that game (or doing whatever), without the game/applications knowing or caring that everything (video card, monitor, etc) changed. This sort of thing would be easy for my OS to support (for all applications and GUIs, without any extra code in any application or GUI).


We have developed a remote screen access utility for our terminal. It works by sending the control-list and properties, and is very small and efficient. It works on slow GPRS connections without problems (which typical remote desktops won't). The host runs on Windows, and needs to know how to draw the controls of RDOS, and also needs to have all the images used (these are not transferred).


That sounds much closer to what I'm proposing. It's a lot more efficient than something like the RFB protocol.

rdos wrote:
Brendan wrote:
e) If an application or GUI is using multiple monitors then "raw pixel" can't cope well with different resolutions and/or different colour depths (the application can only handle one resolution and one pixel depth). To cope with this most OS's will suck - e.g. they'll tell the application to use the same resolution for everything and then scale the image for the other monitor (which increases overhead and/or reduces graphics quality). Note that this includes the GUI itself, and while "different colour depths" is usually avoidable "different resolutions" is quite common. E.g. I'm currently using one monitor at 1920*1600 and another at 1600*1200; both monitors are different physical sizes (one is smaller in both directions) and have different aspect ratios (16:10 and 16:9). Sadly; both of my monitors have different "white points" and different brightness, which is annoying. I've tried adjusting them to get colours on both to match and it simply doesn't work (the range of adjustments is too limited) and the OS I'm using is too stupid to solve the problem simply by generating different colours for different monitors. Note: What I'd really like is to be able to point a web-cam at both monitors and let the OS auto-calibrate the colours to match (e.g. using a short series of patterns).


I see no reason to implement such complexities. It is possible to use multiple monitors (given the appropriate drivers), but the way this would be done is by assigning one application per monitor.


That might work for your limited number of use cases. It doesn't work for me, and doesn't work for a lot of users.

rdos wrote:
Brendan wrote:
f) The same "different resolutions" problem occurs for other things that have nothing to do with multiple monitors. For example, most OSs have a "magnifying glass" utility which increases the size of a certain area of the screen (here's an example). In this case you get poor quality scaled graphics because it's too hard for applications/GUIs to do it properly.


I see no reason for implementing a magnifying glass as I don't do photo-editing or similar in RDOS. We use Windows as a host for that.


I think that the magnifying glass is mostly intended for people that have visual impairments (or maybe just for reading the fine print at the bottom of EULAs ;) ).

I'm planning to let users download and use my OS for free; and I don't think Microsoft would like it if I gave users a free copy of Windows so that they can use my OS.

rdos wrote:
Brendan wrote:
All of these problems can be solved easily, without making software developers deal with hacks/bloat/workaround/hassles.


No, they can't. They require years of coding, and when they are done they are likely no longer modern.


Everything that's worthwhile takes years of coding (even "raw pixels" if it's done right).

rdos wrote:
In addition to that, game developers build games for specific environments, and wouldn't bother with a hobby-OS anyway.

Unless you want to write games yourself, I see no reason to support games. And unless you want photo-editing or rendering complex 3D-scenes (and plan to write such tools yourself, because nobody would do it for you) I see no reason to implement that either. Many of these things can be done on another host, and then packaged in a simpler format so they can run on your OS. We do our animations in Photoshop and then export them as PNGs (we should use some video-format, but that is not yet ported).


Convincing people to use an OS is roughly the same problem as convincing people to write software for an OS - you need to provide them with a reason to be interested.

My plan is to write a game for my OS to encourage people to be more interested in it. I've done this before (a previous version of the OS had reversi) and know that providing some sort of game/s does help a lot; but for what I'm planning a simple board game isn't really going to showcase the OS's features very well.


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: OS Graphics
PostPosted: Wed Jul 31, 2013 3:29 am 
Joined: Wed Oct 01, 2008 1:55 pm
Posts: 3191
Brendan wrote:
I'm really not sure you grasp what I'm saying here.

Imagine the computer is using 320*200 256 colour mode; and an application draws "Hello World" on the screen (with really bad/blocky characters that are barely readable) with a pretty background. The user likes the pretty background, so they save a screen shot. The application doesn't do anything else so the user deletes the application. Three months later, the user buys an extremely high quality printer for printing huge glossy posters. They want to test it out, so they look for some sort of picture to print and find the "Hello World" screen shot, and print that out. They check the poster they printed with a magnifying glass (to get a really good look at the quality of the printer's printing) and they notice a small smudge in the top left corner of the letter 'H' in "Hello World". While trying to figure out where the smudge came from, they decide to open the old screen shot and zoom in on the smudge. They zoom in, and zoom in, and zoom in, and zoom in. Eventually they realise that the smudge is a copy of the entire Intel manual.

I don't think saving the 320*200 256 colour screenshot as PNG and using Paint to blow it up would produce the same quality result.


The issue is that this requires a lot of OS coding for very little benefit. In my case, if I want a good printout of something, I'd simply render the presentation using the same controls against a printer device (possibly needing to scale coordinates, and providing better resolution images). That is a programming task rather than a user task, but it would work for me. I have no general end-users that can do things like that anyway, so it is out-of-scope for my design.
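
For illustration only, a minimal sketch of what "rendering the same controls against a printer device" could look like, assuming device-independent coordinates that get scaled to the target's resolution (all names and units here are invented, not taken from RDOS):

Code:
#include <stdint.h>

typedef struct {
    uint32_t width, height;     /* device resolution in pixels/dots   */
    uint32_t dpi;               /* dots per inch of the target device */
} render_target_t;

typedef struct {
    int32_t x, y, w, h;         /* control rectangle in 1/100 mm      */
    uint32_t control_id;        /* which control class to draw        */
} control_t;

/* Convert device-independent units (1/100 mm) to device pixels. */
static int32_t to_pixels(int32_t hundredths_mm, uint32_t dpi)
{
    return (int32_t)(((int64_t)hundredths_mm * dpi) / 2540);  /* 2540 hundredths of a mm per inch */
}

/* The same control list can be drawn on a 96 dpi screen or a 600 dpi
 * printer; the printer simply gets more pixels per control. */
void render_control(const control_t *c, const render_target_t *dev)
{
    int32_t px = to_pixels(c->x, dev->dpi);
    int32_t py = to_pixels(c->y, dev->dpi);
    int32_t pw = to_pixels(c->w, dev->dpi);
    int32_t ph = to_pixels(c->h, dev->dpi);
    /* ...look up c->control_id and call its drawing routine with px/py/pw/ph... */
    (void)px; (void)py; (void)pw; (void)ph;
}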

Brendan wrote:
rdos wrote:
We have developed a remote screen access utility for our terminal. It works by sending the control-list and properties, and is very small and efficient. It works on slow GPRS connections without problems (which typical remote desktops won't). The host runs on Windows, and needs to know how to draw the controls of RDOS, and also needs to have all the images used (these are not transferred).


That sounds much closer to what I'm proposing. It's a lot more efficient than something like the RFB protocol.


Yes, but in this case it comes at a cost. The host must know how to render every control that the presentation uses, including derived classes. Thus some content cannot be shown, like viewing the local log using a special file-view control.

Brendan wrote:
Everything that's worthwhile takes years of coding (even "raw pixels" if it's done right).


Not if it can be done in incremental steps. It seems like your design cannot. :mrgreen:

I've added things like TrueType fonts, blending and transparency as I've needed them. If I someday need an advanced 3D library, I might add that as well.

Brendan wrote:
Convincing people to use an OS is roughly the same problem as convincing people to write software for an OS - you need to provide them with a reason to be interested.


I'll educate people on how to write applications for my OS because they need to use it in order to ship their products. :mrgreen:


 Post subject: Re: OS Graphics
PostPosted: Wed Jul 31, 2013 6:53 am 
Offline
Member
Member

Joined: Thu Jul 05, 2012 5:12 am
Posts: 923
Location: Finland
rdos wrote:
I'll educate people on how to write applications for my OS because they need to use it in order to ship their products.


Can they use that Windows computer directly for doing that? Ok, I am joking but it just feels like they could do that. Look at these quotes:

rdos wrote:
If somebody needs to change the resolution of the image, they can do that in Windows with Paint or similar. No need to reinvent the wheel here.


rdos wrote:
I don't do photo-editing or similar in RDOS. We use Windows as a host for that.

_________________
Undefined behavior since 2012


 Post subject: Re: OS Graphics
PostPosted: Wed Jul 31, 2013 7:27 am 
Offline
Member
Member

Joined: Wed Oct 01, 2008 1:55 pm
Posts: 3191
Antti wrote:
rdos wrote:
I'll educate people on how to write applications for my OS because they need to use it in order to ship their products.


Can they use that Windows computer directly for doing that? Ok, I am joking but it just feels like they could do that. Look at these quotes:

rdos wrote:
If somebody needs to change the resolution of the image, they can do that in Windows with Paint or similar. No need to reinvent the wheel here.


rdos wrote:
I don't do photo-editing or similar in RDOS. We use Windows as a host for that.


The typical setup for RDOS development is to have OpenWatcom hosted on Windows, do the graphics in Photoshop, and then download the code & graphics to a development machine running RDOS (typically over FTP). You then use the Watcom debugger hosted on Windows and remote-debug over TCP/IP. That's by far the best way to go about it.

If you wanted to do development directly on RDOS, you'd need OpenWatcom RDOS host support, SVN support and some kind of image editing tool of the complexity of Photoshop. That would require several more years of development that could be spent on more important things.


 Post subject: Re: OS Graphics
PostPosted: Wed Jul 31, 2013 7:30 am 
Offline
Member
Member
User avatar

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

Owen wrote:
Brendan wrote:
There's no GUI support library needed. Each application just sends its list of commands to it's parent; and doesn't need to know or care what its parent is; regardless of whether the parent happens to be one of many very different GUIs, or something like a debugger, or another application (e.g. a word-processor where the document contains a picture that you're editing), or if its parent is the "virtual screen layer" (e.g. application running full screen with no GUI installed on the computer at all).

So, what you're saying is every application developer has to reimplement the code to write these commands...


I'll provide (something like) a header file containing an enum and a few structure definitions that they can cut & paste into their application's "front end" code.
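
To give an idea of scale, a minimal sketch of such a header might look like the following; the command set and field layout here are invented purely for illustration (a real protocol would obviously define many more commands):

Code:
#ifndef GFX_PROTOCOL_H
#define GFX_PROTOCOL_H

#include <stdint.h>

enum gfx_command {
    GFX_CMD_CLEAR,          /* fill the whole canvas with a colour      */
    GFX_CMD_LINE,           /* draw a line between two points           */
    GFX_CMD_RECT,           /* draw a (filled) rectangle                */
    GFX_CMD_TEXT,           /* draw a UTF-8 string at a position        */
    GFX_CMD_TEXTURE         /* reference a previously uploaded texture  */
};

/* Coordinates are device independent (e.g. fixed-point fractions of the
 * canvas) so the parent can rasterise at whatever resolution it likes.  */
typedef struct {
    uint32_t cmd;           /* one of enum gfx_command                  */
    int32_t  x1, y1;
    int32_t  x2, y2;
    uint32_t colour;        /* e.g. packed sRGB                         */
    uint32_t data_handle;   /* texture/string handle, 0 if unused       */
} gfx_command_entry;

typedef struct {
    uint32_t entry_count;
    gfx_command_entry entries[];   /* C99 flexible array member         */
} gfx_command_list;

#endif /* GFX_PROTOCOL_H */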

Owen wrote:
Brendan wrote:
Owen wrote:
OK, so your only consideration is software rendering because you can't be bothered to write hardware accelerated graphics drivrs?


I wrote a simplified description of "raw pixel" and a simplified description "list of commands", deliberately omitting implementation details that would do little more than obscure the point. You are including all sort of complexities/optimisations for the "raw pixel" case that I deliberately didn't include, and you're failing to recognise that the "list of commands" can also have complexities/optimisations that I deliberately didn't include. The end result is an extremely unfair comparison between "complex/optimised vs. deliberately simplified".

To avoid this extremely unfair comparison; we can both write several books describing all of the intricate details (that nobody will want to read); or we can try to avoid all the irrelevant implementation details that do nothing more than obscure the point. Assuming "LFB with software rendering" is a good way to focus on the important part (the abstractions themselves, rather than special cases of which pieces of hardware are capable of doing what behind the abstractions).

In addition, assuming "LFB with software rendering" does have obvious practical value - no hobby OS developer will ever have a native driver for every video card. It would be very foolish to ignore software rendering; and it makes a lot more sense to ignore hardware acceleration (as anything that works for software rendering will just work faster when hardware acceleration is involved anyway).


No it won't! GPUs behave very differently from CPUs! The kind of optimizations you do to make graphics fast on the CPU do not work on the GPU.

Executing one GPU command which draws a million pixels and one which draws a thousand have, from the CPU perspective, identical cost - and modern 3D engines are generally CPU bound on all cores anyway.


From the keyboard's perspective, the computer is just a silly box that eats bytes, therefore we should feel free to waste as much CPU time as possible.

Owen wrote:
But by your logic, the only platform that should ever be considered is the PC, because supporting every possible ARM device is impossible for one person... ignoring that there might be other good reasons for supporting ARM (e.g. embedded use of your OS)


No, by my logic we should design the OS so that it's able to handle all cases (software rendering, hardware acceleration, 80x86 and ARM). By your logic we should forget about things that are realistically practical (software rendering) and design the OS for things that it'll never be able to support completely (e.g. every single video card's GPU on earth) in the hope that someone (e.g. NVidia) will see our "no graphics at all" OS and be so impressed with the graphics that they'll want to write native video drivers for it.

Owen wrote:
Brendan wrote:
Owen wrote:
You pretend that DMA and GPU IOMMUs (or for obsolete AGP machines, the GART) don't exist


I only said "pixels are created for the second video card's display". I didn't say how the pixels are created (e.g. by hardware or software, or by copying using IOMMUs or whatever else) because this is mostly irrelevant (they're still created regardless of how).


If the framebuffer hardware is reading the pixels directly from the memory buffer in which they were created, that's still a pixel being created to you?

Your maths don't make any sense...


If the GPU reads pixels directly from system RAM or from a different video card's memory and creates/stores them in its own RAM before sending them to the monitor; then the GPU has created pixels in its own RAM. If the GPU reads pixels directly from system RAM or from a different video card's memory and sends them directly to the monitor without storing them in its own RAM, then the GPU is probably faulty/hallucinating.

Owen wrote:
Brendan wrote:
Owen wrote:
So you run the framebuffer in 30-bit sRGB and convert once.

And, among dual monitor setups, how common is this scenario? :roll: It's an intentionally manufactured scenario which is comparatively rare.


An abstraction is either flexible or it's not. You are complaining because your abstraction is not as flexible.


It's also possible to design an architecture which is too flexible - i.e. in which you spend too much time trying to deal with corner cases to the detriment of users and developers.


That's the beautiful part - "list of commands" is so flexible that I don't really need to worry about these corner cases, or add extra "detriment of users and developers" (like shader languages, "damage events", toolkit/library dependency hell, etc).

Owen wrote:
Brendan wrote:
Owen wrote:
So what is your proposed alternative to developers' shaders?


The physics of light do not change. The only thing that does change is the "scene"; and this is the only thing an application needs to describe. This means describing:
  • the location of solids/liquids and gases consisting of "materials"
  • the properties of those "materials", which are limited to:
    • reflection
    • absorption
    • refraction (in theory - likely to be ignored/not supported due to the overhead of modelling it)
    • diffraction
  • the location of light sources and their properties:
    • the colour of light it generates
    • the intensity of the light it generates
    • the direction/s of the light it generates and how focused the light is (e.g. narrow beam vs. "all directions")
    • ambient light (because it'd cost too much overhead to rely on diffraction alone)
  • the camera's position and direction

Whatever is doing the rendering is responsible for doing the rendering; but that has nothing to do with applications. For example, if a video card driver is responsible for doing the rendering then it can use the video card's shader/s for anything it likes; but that has nothing to do with applications.


So all your OS is supporting is ultra-realistic rendering, and none of the intentionally non-realistic models that many games choose to use for stylistic purposes?


Can you think of any example of "non-realistic models that many games choose to use for stylistic purposes" that can't be done with "as realistic as the data and renderer can handle" rendering anyway?

The only thing I can think of is Minecraft, where the developer tries to pretend the "retro" look was intentional while most of the users install patches for 64-bit textures.
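
To make the scene description quoted above a little more concrete, here's a rough sketch of how it might look as plain data structures; the field names and layout are purely illustrative, not a definitive design:

Code:
#include <stdint.h>

typedef struct {
    float reflection[3];    /* per-channel reflectance (RGB)                 */
    float absorption[3];    /* per-channel absorption                        */
    float refractive_index; /* in theory; may be ignored by a renderer       */
    /* diffraction parameters, etc. omitted for brevity                      */
} material_t;

typedef struct {
    float position[3];
    float colour[3];
    float intensity;
    float direction[3];
    float focus;            /* 0.0 = all directions, 1.0 = narrow beam       */
} light_t;

typedef struct {
    float position[3];
    float direction[3];
} camera_t;

typedef struct {
    uint32_t   material_count, light_count;
    material_t *materials;  /* plus the volumes/meshes that use them         */
    light_t    *lights;
    float      ambient[3];  /* ambient light level                           */
    camera_t   camera;
} scene_t;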

Owen wrote:
Plus, actually, it's supporting said ultra-realistic rendering badly, by failing to account for normal mapping, displacement mapping, and similar techniques used to enhance the verisimilitude of materials and fake scene complexity without requiring that e.g. a gravel texture be decomposed into several million polygons.


Without trying to think of any way that these things could be implemented using "list of commands", you've assumed that nobody will ever be able to implement these things for "list of commands"? You must think it's extremely hard to (e.g.) have textures with "RGB+height" pixel data.
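
For example, a hypothetical "RGB + height" texel format - enough for a renderer to derive per-texel normals (normal mapping) or displace geometry itself, without the application supplying shader code - could be as simple as:

Code:
#include <stdint.h>

typedef struct {
    uint8_t r, g, b;     /* base colour                                  */
    uint8_t height;      /* surface height above the polygon, 0..255     */
} texel_rgbh;

typedef struct {
    uint32_t   width, height;   /* texture dimensions in texels          */
    texel_rgbh *texels;         /* renderer derives normals from         */
                                /* neighbouring height values            */
} texture_rgbh;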

Owen wrote:
Brendan wrote:
Owen wrote:
How does that change nothing? If you have transparency, you need to alpha blend things.


If something creates a pixel by combining 2 or more pixels from elsewhere then it creates 1 pixel; and if something creates a pixel by merely copying a pixel from elsewhere then it creates 1 pixel. In either case the number of pixels created is 1.

The number of pixels created only changes if alpha blending would mean that other pixels elsewhere would need to be created when otherwise (without transparency) they wouldn't. For example, if the second application's background is "solid white" then the pixels for the textures underneath don't need to be created; and if the second application's background is transparent then the pixels for the textures underneath it do need to be created.

Except for the second application's background (which obscures the 2 textures behind it), there are no other textures obscuring anything that isn't already being created.

Of course the "number of pixels created" metric is a simplification that ignores the actual cost of creating a pixel (which depends on which specific methods were used for implementation; like whether a Z-buffer was involved or if there was overdraw or whether we're doing sub-pixel accurate anti-aliasing or whatever else); but it's impossible to have specific details like that in a discussion that's generic enough to be useful.


So, what you're saying is that it's intentionally tilted to make your display list system look better. What a waste of a thread...


No, what I'm saying is that it's absurd to assume that every texture and every application's window will be transparent and that nothing will ever be obscured.

Owen wrote:
Brendan wrote:
Owen wrote:
So your application developers are going to write their UI drawing code from scratch for every application they develop? :roll:


Yes; but don't worry, I've got clean abstractions - application developers will be able to make their applications impressive and unique without dealing with a truckload of trash (or unwanted "widget toolkit" dependencies).


So you have a widget toolkit somewhere, you're just not saying where, because you want to pretend that the idea of a widget toolkit is trash - even though what an application developer fundamentally wants is to be able to drag a button onto a form and have it work, not to then have to wire up all the code to draw it in the correct states, etc...


There's no special "widget toolkits" or "widget shared library" or "widget API" or whatever.

For graphics, the "list of commands" is implemented on top of messaging and forms a standardised "graphics messaging protocol"; where a process sends messages (using the graphics protocol) to its parent. An application talks to its parent (a GUI) using the graphics protocol, a GUI talks to its parent (the "virtual screen layer") using the exact same graphics protocol, and the "virtual screen layer" talks to its parent/s (the video driver/s) using (almost) the exact same graphics protocol.

For sound there's a standardised "sound messaging protocol" and the same "child sends sound messages to parent" arrangement happens. For input devices (keyboard, mouse, joystick, etc) there are more standardised messaging protocols, but messages go in the other direction (the parent sends "events" to one or more of its children). All of these different protocols are combined into a "user interface protocol" that includes video, sound, keyboard, mouse, whatever; and the user interface protocol is used by all applications and GUIs and the "virtual screen layer". However, the "virtual screen layer" is special because (unlike applications and GUIs) it has many parents. It needs to send "sound protocol" messages to the sound card driver/s, "graphics protocol" messages to the video driver/s, etc; and it receives "keyboard protocol" messages from the keyboard driver (and similar for mouse, touchpad, etc).

This means that you can have an application that talks directly to the "virtual screen layer" (with no GUI at all) and as far as the application is concerned it's no different (same "user interface protocol"). You might also have one application (e.g. an image editor) that uses the user interface protocol to talk to another application (e.g. a word-processor); and in this way you can have a word-processor document with an image embedded in it and edit that image "in place". There's nothing different about this - it's still "child talks to parent using user interface protocol". In the same way you can have a GUI talking to another GUI (e.g. where the child GUI runs in a window); or a GUI embedded in a full screen application. In fact there's no real difference between a GUI and an application (they're all just processes using the user interface protocol to talk to their parent, and to talk to zero or more children).

Of course just because 2 processes are using the user interface protocol to talk to each other doesn't mean that they can't also define additional message types for other purposes (as long as they both agree on what the additional message types are for).

Now what do you think a "widget" is? It's just another process that uses the user interface protocol to talk to its parent; except that (if necessary) it defines additional message types that aren't part of the user interface protocol. Except for these extra message types; there's no real difference between a widget, an application and a GUI; and if you really wanted to you could run a widget full screen without any application or GUI; or maybe have a widget with 2 GUIs inside it, or... Note: I'm not saying these things make sense, just that it's "possible by accident".
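
As a rough sketch (the message types and messaging calls below are invented for illustration, not a real API), the "child talks to parent using the user interface protocol" arrangement boils down to something like this:

Code:
#include <stdint.h>
#include <stddef.h>

enum ui_msg_type {
    UI_MSG_GRAPHICS_CMDLIST,   /* child -> parent: "list of commands"   */
    UI_MSG_SOUND,              /* child -> parent: sound protocol       */
    UI_MSG_KEY_EVENT,          /* parent -> child: keyboard protocol    */
    UI_MSG_POINTER_EVENT       /* parent -> child: mouse/touch protocol */
};

typedef struct {
    uint32_t type;             /* one of enum ui_msg_type               */
    uint32_t length;           /* payload length in bytes               */
    uint8_t  payload[];
} ui_message;

/* Assumed to be provided by the kernel's messaging layer. */
extern int send_message(int endpoint, const void *msg, size_t len);
extern int recv_message(int endpoint, void *msg, size_t max_len);

/* An application, a GUI, a widget and the "virtual screen layer" would
 * all use the same loop: push graphics to the parent, consume events.  */
void ui_main_loop(int parent)
{
    uint8_t buf[4096];
    for (;;) {
        /* ...build a command list, wrap it in a UI_MSG_GRAPHICS_CMDLIST
         * message, and send_message(parent, ...) it...                  */
        int len = recv_message(parent, buf, sizeof(buf));
        if (len <= 0)
            break;
        /* ...dispatch UI_MSG_KEY_EVENT / UI_MSG_POINTER_EVENT messages... */
    }
}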

Owen wrote:
OK - that works for games which want "realistic" HDR by some definition. What about games which want to simulate intentionally non-realistic HDR?


Why would a game want to simulate intentionally non-realistic HDR?

Owen wrote:
Brendan wrote:
Owen wrote:
As if graphics drivers weren't complex enough already...


A renderer's job is to render (static) images. Why would I want to spread the complexity of rendering all over the place? Do I win a lollipop if I manage to spread it so far that even people writing (e.g.) FTP servers that don't use any graphics at all have to deal with parts of the renderer's job?

Of course this works both ways. For example, things like physics (inertia, acceleration, fluid dynamics, collisions, deformations, etc) aren't the renderer's job (you'd want a physics engine for that).


A renderer's job is to render what the developer tells it to render.

In that regard shaders are a massive win, because shaders let artists and developers tweak all the parameters of their models to their hearts' content, rather than being stuck with the parameters that were formerly baked into the hardware - and that you're planning to bake into your API.


I don't see it that way at all. I see it as a huge failure to hide hardware details from application developers, forcing them to all waste time implementing competing graphics engines that all do the same thing that the graphics system should have already done; except that the failure is so huge that the games developers can't cope with all the variations between all the different video cards and end users end up with completely avoidable hassles just trying to figure out if their video card will or won't run a game (assuming the user hasn't given up and got a "fixed hardware" console already). See if you can convince someone writing a normal application (e.g. text editor, IRC client, bitmap image editor, etc) to add a few simple 3D effects and see how quickly they refuse to deal with the mountain of hassles it'd cause them.


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: OS Graphics
PostPosted: Wed Jul 31, 2013 9:56 am 
Offline
Member
Member
User avatar

Joined: Wed Oct 18, 2006 3:45 am
Posts: 9301
Location: On the balcony, where I can actually keep 1½m distance
Brendan wrote:
Owen wrote:
OK - that works for games which want "realistic" HDR by some definition. What about games which want to simulate intentionally non-realistic HDR?
Why would a game want to simulate intentionally non-realistic HDR?
Cartoon shading? Psychedelics? The Rolling Stoneds Screensaver?

Quote:
I don't see it that way at all. I see it as a huge failure to hide hardware details from application developers, forcing them to all waste time implementing competing graphics engines that all do the same thing that the graphics system should have already done; except that the failure is so huge that the games developers can't cope with all the variations between all the different video cards and end users end up with completely avoidable hassles just trying to figure out if their video card will or won't run a game (assuming the user hasn't given up and got a "fixed hardware" console already). See if you can convince someone writing a normal application (e.g. text editor, IRC client, bitmap image editor, etc) to add a few simple 3D effects and see how quickly they refuse to deal with the mountain of hassles it'd cause them.


The industry, especially for rapidly developing systems, is always making tradeoffs over which percentage of systems to support - typically from the high end down. Some even degrade gracefully, not using certain graphical features if they would be too demanding on the hardware in question. In practice it ends up being the visual designer wanting to be in charge, and the tech department retaliating with their proverbial drug kickoff clinics.

Bottom line: the developers want to be in charge of which hardware features get used, and they typically want to use the latest ones if available. They're certainly not going to wait for some design committee to spend three years on the next engine standard, and the hardware developers are certainly not going to wait for some design committee to spend three years before their next "great feature" gets rolled out to their customers. And the gaming addicts... you get the drill.



The lesson of all this is that performance gaming is currently just disjoint from desktop stuff. It might change for graphics, but HPC will certainly fill that particular gap for the time to come.

_________________
"Certainly avoid yourself. He is a newbie and might not realize it. You'll hate his code deeply a few years down the road." - Sortie
[ My OS ] [ VDisk/SFS ]


 Post subject: Re: OS Graphics
PostPosted: Wed Jul 31, 2013 10:08 am 
Offline
Member
Member

Joined: Sat Nov 21, 2009 5:11 pm
Posts: 852
Handling 3D graphics and UIs in a uniform way sounds like a horrible idea; you'll just end up with a system that doesn't handle either very well. Games absolutely need as much control as possible over the GPU to achieve the best results. Programmable shaders were invented to give artists the freedom to innovate, not to be hidden away behind a generic solution that applications are forced to use. Artists should be able to program any effect that the hardware is capable of without waiting for it to be implemented in your "ultimate 3D engine to end all 3D engines".

Furthermore, figuring out which parts of a texture will be accessed in a drawing operation is impractical for anything but the most trivial rectangle copies. Most clipping (beyond the final screen output) will have to be done by the application itself, but if you provide an automatic back buffer linked to a window, you could help by automatically cropping the clipping rectangle to the bounding box of the window's visible region.
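
A rough sketch of that last suggestion (the names are illustrative, not from any particular window system): before any pixels are touched, the system intersects the application's requested clip rectangle with the bounding box of the window's visible region.

Code:
typedef struct { int x0, y0, x1, y1; } rect_t;   /* half-open: [x0,x1) */

static rect_t rect_intersect(rect_t a, rect_t b)
{
    rect_t r;
    r.x0 = (a.x0 > b.x0) ? a.x0 : b.x0;
    r.y0 = (a.y0 > b.y0) ? a.y0 : b.y0;
    r.x1 = (a.x1 < b.x1) ? a.x1 : b.x1;
    r.y1 = (a.y1 < b.y1) ? a.y1 : b.y1;
    if (r.x1 < r.x0) r.x1 = r.x0;                /* empty if no overlap */
    if (r.y1 < r.y0) r.y1 = r.y0;
    return r;
}

/* Before rasterising into the window's back buffer, the system would do:  */
/*   clip = rect_intersect(app_requested_clip, visible_region_bbox);       */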

On the other hand, other applications usually draw directly to the screen or to a back buffer provided by the system. In this case, a clipping region has already been set by the window manager, so there is no problem. The description you give of "most existing graphics systems" is simply incorrect and is not at all what happens (with normal GUI applications), except possibly in hobby operating systems in early development. The processing of pixels that will not be visible is mainly an issue with badly programmed applications, and games, which are expected to remain in the foreground while in use.


 Post subject: Re: OS Graphics
PostPosted: Wed Jul 31, 2013 10:43 am 
Offline
Member
Member

Joined: Sat Nov 21, 2009 5:11 pm
Posts: 852
Brendan wrote:
Why would a game want to simulate intentionally non-realistic HDR?

This shows that you don't have experience in developing games. As a game developer you need to be in total control of what appears on the screen. If your particular implementation of HDR hampers gameplay by making the player unable to see something important, I'd like to turn it off. The OS has no business meddling in someone's artistic decisions. You can't possibly foresee what kind of lighting someone will eventually want. In one of my applications, I need to colorize some pixels by using some other rendered pixels as a 1D texture index, but only when adjacent pixels differ, and only when the value stored in another rendered texture is greater than the current depth value. I also need to apply fog on some pixels (determined by a stencil bit) by indexing a 1D texture with a value determined by stored depth values, the screen position and stored focal widths (per pixel). This is easy with DirectX shaders, but would your system handle this? I'm guessing not.
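
For what it's worth, the per-pixel logic described above could be sketched on the CPU side roughly like this (buffer names and layout are invented; a real implementation would be a GPU pixel shader). The point is that it's arbitrary per-pixel decision making, not something a fixed set of drawing commands anticipates:

Code:
#include <stdint.h>

void colorize_edges(uint32_t *frame, const uint8_t *index_buf,
                    const float *aux_depth, const float *depth,
                    const uint32_t *palette1d,     /* 256-entry "1D texture" */
                    int w, int h)
{
    for (int y = 1; y < h - 1; y++) {
        for (int x = 1; x < w - 1; x++) {
            int i = y * w + x;
            /* "only when adjacent pixels differ" */
            int edge = index_buf[i] != index_buf[i - 1] ||
                       index_buf[i] != index_buf[i + 1] ||
                       index_buf[i] != index_buf[i - w] ||
                       index_buf[i] != index_buf[i + w];
            /* "only when the stored value is greater than the depth value" */
            if (edge && aux_depth[i] > depth[i])
                frame[i] = palette1d[index_buf[i]];  /* 1D texture lookup */
        }
    }
}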


 Post subject: Re: OS Graphics
PostPosted: Wed Jul 31, 2013 4:07 pm 
Offline
Member
Member
User avatar

Joined: Wed Jul 13, 2011 7:38 pm
Posts: 558
rdos wrote:
I have no general end-users that can do things like that anyway, so it is out-of-scope for my design.

This is the problem your arguments always present. You are arguing in favour of terribly constricted specifications and single-use environments because the product you ship is only used in one way at a time, which is very specific and great for embedded systems but is totally out of the question for everyone else here. We're trying to build operating systems that have some use in a modern general user environment, and you're arguing against our ideas with references to a design scheme that's several worlds away from what we're doing. Your product sells to specific users who need an antiquated mono-interface environment? Great. You've succeeded in your niche. The rest of us sell to the general public, and our products must be radically different than yours.

