OSDev.org

The Place to Start for Operating System Developers
 Post subject: Re: Graphics API and GUI
PostPosted: Tue Oct 06, 2015 10:46 am 

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

Ready4Dis wrote:
Quote:
If the OS's file system security is so lame that it can't prevent people from tampering with a game's textures, then the OS's file system security is probably so lame it can't prevent people from tampering with a game's executable, shared libraries, scripts, etc either.

So, then the installed application (or are you only using distributed apps?) (or graphics driver?) must lock the file so no modifications can take place to it while the game is running (basically, at any point after the file integrity check)? That would make sense and solve that issue, which makes the video driver being aware of its resource locations useful (like you said, one less hoop to jump through while loading). Maybe I don't think as far outside of the box as you, but it's near impossible to stop someone from tampering with files if they really want to; the key is that they can be checked during run-time (of course, that doesn't stop someone from editing the .exe file so it stops doing the checksum, but that's an entire topic on its own) and then disallowed from changing (although nothing stops you from writing a file system driver that sends the calls through an intercept app that then modifies said textures).


Most OSs do have at least some sort of file system permissions. The problem is that most OSs also let the root/admin user ignore them, so the existing file system permissions are effectively useless for the purpose of preventing the computer's owner from tampering with a game's files.

Of course the idea of an "all powerful root/admin" is stupid and broken (it completely disregards the principle of least privilege, is a huge shining beacon for brute force/dictionary "root password guessing" attacks, is a major hurdle for content providers, etc). It also doesn't make any sense for business use (e.g. where the administrator is not the computer owner in the first place, but is just some "random" employee). Needless to say, I won't be doing that.

Also note that my OS will use a versioning file system. This means that nothing can modify any file (you can only create a new version of the file); which means that the game/video driver can check "version 123" of the file and continue using that version after a new version of the file is created.

Ready4Dis wrote:
Quote:
For rendering, the video driver has to know how the reflective surface uses the cube. For determining what to update; the video driver only needs to know that the reflective surface uses/depends on the cube, which is a tiny subset of the information the video driver must know (for rendering purposes) anyway.

Yes, but does the graphics system also know that through a portal there is an animated object that is moving? That means the reflective surface needs to be updated as well, even though neither the cube map description nor the angle has changed. But it only needs to be updated if that portal is visible from the perspective of the object, otherwise it doesn't affect it.


Yes.

If the current frame depends on the reflective surface; and if the reflective surface depends on the cube map; and if the cube map depends on the moving/animated object; then whenever the video driver updates the moving animated object it knows it also has to update the cube map, the reflective surface and the current frame.

Ready4Dis wrote:
Or would the game engine have to notify the video driver that the texture is dirty and needs refreshing?


The game tells the video driver how the animated object changed. The video driver figures out what needs to be updated from that alone.

Ready4Dis wrote:
Quote:
In other words (makefiles!):
Code:
cubemap: cubemap_description
    update_cube_map

reflective_surface: reflective_surface_description cubemap
    update_reflective_surface


See my previous statement, that isn't nearly enough information to update the reflective surface.


That was enough information for what you described previously; but you've added a new dependency (a dependency on the moving animated object that wasn't mentioned before) and now it's not enough information.

Ready4Dis wrote:
Quote:
For video there are 2 choices:
"fixed frame rate, variable quality"; where reduced quality (including "grey blobs" in extreme conditions) is impossible to avoid whenever highest possible quality can't be achieved in the time before the frame's deadline (and where reduced quality is hopefully a temporary condition and gone before the user notices anyway).
"fixed quality, variable frame rate"; where the user is typically given a large number of "knobs" to diddle with (render distance, texture quality, amount of super-sampling, lighting quality, ...); and where the user is forced to make a compromise between "worse quality than necessary for average frames" and "unacceptable frame rates for complex frames" (which means it's impossible for the user to set all those "knobs" adequately and they're screwed regardless of what they do).


The third choice that lots of games use is of course dynamic LOD (level of detail). It may render things in the background slower (animation updates, inverse kinematics), use billboards, reduce geometry, turn off certain features, etc. These can't all be part of the video driver unless each app tells it specifically how to render different quality levels. If the frame rates start dipping it can drop from parallax mapping to normal mapping to bump mapping to just plain texturing. It can do more fancy features for up-close objects and use less intense features for things further away or out of focus, etc. I just don't see how you can dump that off as the job of the video driver without it knowing a LOT more information about how the game and artists designed the game.


Yes; some games do try to dynamically adjust detail in some way/s, but they're fighting against the design of APIs and tools that weren't designed for it and their solutions tend to end up being reactive rather than proactive causing hysteresis. For a worst case scenario consider alternating "complex, simple, complex, simple" frames - after a complex frame it reacts by reducing detail for the next/simple frame, and after simple frames it reacts by increasing detail for the next/complex frame.

I don't see what information would be needed about the game/app/GUI design.


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: Graphics API and GUI
PostPosted: Tue Oct 06, 2015 11:22 am 

Joined: Sat Nov 18, 2006 9:11 am
Posts: 571
Quote:
Most OSs do have at least some sort of file system permissions. The problem is that most OSs also let the root/admin user ignore them, so the existing file system permissions are effectively useless for the purpose of preventing the computer's owner from tampering with a game's files.

Of course the idea of an "all powerful root/admin" is stupid and broken (it completely disregards the principle of least privilege, is a huge shining beacon for brute force/dictionary "root password guessing" attacks, is a major hurdle for content providers, etc). It also doesn't make any sense for business use (e.g. where the administrator is not the computer owner in the first place, but is just some "random" employee). Needless to say, I won't be doing that.

Also note that my OS will use a versioning file system. This means that nothing can modify any file (you can only create a new version of the file); which means that the game/video driver can check "version 123" of the file and continue using that version after a new version of the file is created.


What is to stop someone from deleting the file first and then creating one with the same name? Who specifies the version number? Is the file version specified by the installer? If you don't have an admin, what if you want to uninstall an application? Do you track all files that have been modified or created by that application and remove them? What about files the user wants to keep?

Quote:
Yes.

If the current frame depends on the reflective surface; and if the reflective surface depends on the cube map; and if the cube map depends on the moving/animated object; then whenever the video driver updates the moving animated object it knows it also has to update the cube map, the reflective surface and the current frame.


But how does the graphics system know if the moving object is visible or hidden? What if you're on the other side of the portal and can't see the reflective surface? Will the graphics driver still update the cube map because it could possibly have changed? Just because it moved doesn't mean it needs to update for sure; how does the video driver know without knowing more about the engine? I guess in that case, if you make the rendering of the portal a stipulation on whether or not the reflective texture needs updating it might work, since the portal 'texture' will be updated if there is movement, and if it's not visible it won't be rendered (aka, no change). I still don't see how the video driver could know about an object and where it is in a scene though. What if said object moves from one side of the portal to another (changes rooms), does the app then have to notify the video driver that the dependencies have changed? It is a very intriguing idea, I'm just not sure how all the details would work out in real life.

Quote:
Yes; some games do try to dynamically adjust detail in some way/s, but they're fighting against the design of APIs and tools that weren't designed for it and their solutions tend to end up being reactive rather than proactive causing hysteresis. For a worst case scenario consider alternating "complex, simple, complex, simple" frames - after a complex frame it reacts by reducing detail for the next/simple frame, and after simple frames it reacts by increasing detail for the next/complex frame.

I haven't seen it that bad for some time now, most use an average of a few frames to see if there is a trend, but either way, if it is that difficult to get right for a specific case, it's going to be much more difficult (impossible?) to get it right for all cases. I know a few games that do things like reduced rendering quality while in motion (or fast motion), dynamic LOD for terrain based on frame rates, using different rendering (shader) techniques, etc. It is very difficult to predict how quickly the scene is going to render unless you can qualify different reasons for the speed changes. Typically I see (at least in more recent games) more subtle changes, not drastic changes where you see wildly varying frame rates from frame to frame. If I am playing an FPS and turn really fast, I don't want everything to turn into a grey blob; I would much rather have low quality rendering but still be able to see everything.

Quote:
I don't see what information would be needed about the game/app/GUI design.

If you store information in a BSP tree and render an indoor world, who makes the graphics calls? Does the app tell the video driver what to render (and I don't just mean "render outside scene", I mean render this object, that object, etc) or does the video driver know what to render automagically? I guess I'm confused as to exactly where you differentiate between the app telling the video driver what it needs done and what the video driver does on its own.


 Post subject: Re: Graphics API and GUI
PostPosted: Tue Oct 06, 2015 11:51 am 

Joined: Wed Jan 06, 2010 7:07 pm
Posts: 792
Brendan wrote:
For video there are 2 choices:
  • "fixed frame rate, variable quality"; where reduced quality (including "grey blobs" in extreme conditions) is impossible to avoid whenever highest possible quality can't be achieved in the time before the frame's deadline (and where reduced quality is hopefully a temporary condition and gone before the user notices anyway).
  • "fixed quality, variable frame rate"; where the user is typically given a large number of "knobs" to diddle with (render distance, texture quality, amount of super-sampling, lighting quality, ...); and where the user is forced to make a compromise between "worse quality than necessary for average frames" and "unacceptable frame rates for complex frames" (which means it's impossible for the user to set all those "knobs" adequately and they're screwed regardless of what they do).
And then there's option 3, "fixed quality, fixed frame rate," which is ideal for games, because the user will notice even a single frame of changing quality as stuttering or flickering. Games that try to pull the variable quality trick look just as bad as games that can't stick with a consistent framerate.

_________________
[www.abubalay.com]


 Post subject: Re: Graphics API and GUI
PostPosted: Tue Oct 06, 2015 12:24 pm 

Joined: Sat Nov 18, 2006 9:11 am
Posts: 571
Quote:
And then there's option 3, "fixed quality, fixed frame rate," which is ideal for games, because the user will notice even a single frame of changing quality as stuttering or flickering. Games that try to pull the variable quality trick look just as bad as games that can't stick with a consistent framerate.


That doesn't really work out too well if the frame rate drops below your fixed frame rate... I know when I wrote my game engine I ran the input/physics at a fixed rate in one thread but did all the animation and rendering at the maximum speed I could render in another. It meant that the game would respond to your inputs at the same speed regardless of your frame rate, but everything would look smoother if it was running faster (more in-between frames). Even at slower frame rates, the movement still 'felt' responsive.
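
The loop shape being described is essentially the classic "fixed timestep" pattern. A minimal sketch, collapsed into a single loop rather than the two threads described above (all names and the 100Hz step are made up for illustration):

Code:
#include <stdbool.h>

#define PHYSICS_DT 0.01   /* input/physics runs at a fixed 100 Hz */

extern double now_seconds(void);
extern void poll_input(void);
extern void step_physics(double dt);
extern void interpolate_state(double alpha); /* blend the last two physics states */
extern void render_frame(void);
extern bool keep_running(void);

void game_loop(void)
{
    double accumulator = 0.0;
    double previous = now_seconds();

    while (keep_running()) {
        double current = now_seconds();
        accumulator += current - previous;
        previous = current;

        /* Input/physics always advance in fixed steps, so the game "feels"
           the same regardless of how fast frames are actually drawn. */
        while (accumulator >= PHYSICS_DT) {
            poll_input();
            step_physics(PHYSICS_DT);
            accumulator -= PHYSICS_DT;
        }

        /* Rendering runs as fast as it can; interpolating between the last
           two physics states provides the extra "in between" frames that a
           faster machine allows. */
        interpolate_state(accumulator / PHYSICS_DT);
        render_frame();
    }
}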

And while you say you will notice a single frame of lower quality, I highly doubt you'd notice, if you spun your character around 180 degrees, that one frame in the middle was rendered at lower quality while a texture was being transferred from system RAM to VRAM. If there isn't much movement, the frame rates should be relatively consistent, so it shouldn't be switching quality levels dramatically enough to notice. Also, I'm sure there are plenty of games where you wouldn't even notice the drop from parallax mapping to normal mapping to bump mapping depending on the distance to the object, because as something gets further away, it is much harder to notice the difference and it's no longer in your focus. Everyone has seen the games that get it wrong, but nobody realizes when a game gets it right, because you just don't notice. I know when I was working on my parallax mapping it looked great up close, but you can't really tell any difference between it and other methods (that were faster to render) further away.


 Post subject: Re: Graphics API and GUI
PostPosted: Tue Oct 06, 2015 12:44 pm 

Joined: Wed Jan 06, 2010 7:07 pm
Posts: 792
Changes to level of detail as you get farther away are fine when tuned right, because the view is already changing during the transition. But a system that shows "missing texture" while things are still loading will end up looking even worse than this, which only shows low-res versions of textures to begin with and already looks awful.

You also really don't want a variable render framerate like you suggest, because it causes tearing and/or stuttering. What you want is to determine the maximum viable framerate once and then stick with that, picking it so the monitor's refresh rate is a multiple of it. You could perhaps change that (or the quality level) for different scenes, but that's a compromise and it's definitely the game's job, not something you want the video driver doing behind your back whenever it feels like.

_________________
[www.abubalay.com]


 Post subject: Re: Graphics API and GUI
PostPosted: Tue Oct 06, 2015 1:14 pm 

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

Ready4Dis wrote:
Quote:
If the video driver wants to reduce a texture's quality (e.g. to make it fit in a limited amount of VRAM) it can, and if the video driver wants to generate and use mip-maps it can. It's the video driver's responsibility to decide what the video driver wants to do. All normal software (applications, GUIs, games) only ever deal with the highest quality textures and have no reason to know or care what the video driver does with them.


I see; then is the user going to be able to set the graphics level per application? For example, one application has no problems running at super ultra awesome high quality, and another needs to run at low quality? I just think leaving too much up to the video driver is going to make things run slower than they need to. If you enable mip-mapping (via toggle, slider, whatever), what if the application only wanted certain things to use mip-maps, and others not to (just one example of course)?


I don't just want to avoid/remove end user hassle (e.g. user should be able to install a game without setting anything at all); I want to avoid/remove software developer hassle (e.g. the game/app shouldn't know or care about "graphics quality vs. performance" because that's the video driver's job).

"Variable quality, fixed frame rate" means nothing will run slower than it needs to (it's the quality of the scene/frame that might be worse than necessary). If the video driver decides it has enough time and/or memory to generate mip-maps then it generates mip-maps. There's no "enable via toggle/slider/whatever" and the application doesn't get a say (and doesn't need to deal with the hassle of bothering to decide if/when mip-maps are used).

Ready4Dis wrote:
Quote:
Part of my project is to fix the "many different file formats" problem; where new file formats are constantly being created and none are ever really deprecated/removed, leading to a continually expanding burden on software and/or "application X doesn't support file format Y" compatibility problems. Mostly I'll be having a small set of standard file formats that are mandatory (combined with "file format converters" functionality built into the VFS that auto-convert files from legacy/deprecated file formats into the mandatory file formats).


Yes, I like this idea, but it's hard for some things unless you have a very clearly defined internal representation for all the 'formats'. By that I mean a clearly defined bitmap structure that any format can load/save to/from, a text format that supports all the advanced features (bold/italics/underline, coloring, hyperlinks, embedded images?, fonts, etc), and so on. That idea I do like and plan on doing something very much like it in my own OS. I don't think each application needs to have its own jpeg loading routine or link to a library to load a file. It should just open it and get an image out, or tell the video driver to map a .jpeg into a texture without worrying about it.


Yes; but note that in my experience designing things like this takes a considerable amount of research and time, and tends to be intertwined with other things in ways that aren't immediately obvious. ;)

Ready4Dis wrote:
Quote:
Heh. The plan was to define a "monitor description" file format (capable of handling curved monitors, colour space correction, 3D displays, etc); and then write boot code to convert the monitor's EDID into this file format (and I did a whole bunch of research to design the "monitor description" file format, and wrote out most of the specification). However; to convert the colour space data from what EDID provides (co-ords for CIE primaries) into something useful involves using complex matrix equations to generate a colour space conversion matrix; and my boot code is designed for "80486SX or later" and I can't even assume an FPU or floating point is supported. This mostly means that to achieve what I want I need to implement a whole pile of maths routines (e.g. half an arbitrary precision maths library) in my boot code before I can even start implementing the code that generates the colour space conversion matrix.


I have written a few colour space conversion routines for a project I was working on. It supports HSV, YCbCr, and regular RGB. I had support for YUV but it's basically the same as YCbCr (redundant) so I removed it. I actually removed a few now that I am looking back: HSL was removed, YUV was removed, and conversion to CIE LUV and CIE LAB was removed... meh, oh well.


If you're given (e.g.) a predefined colour space conversion matrix it's fairly easy to do colour space conversion. Generating a colour space conversion matrix from arbitrary primaries is "less easy".

Ready4Dis wrote:
Do graphics from the 486 era even support EDID?


No; but there are far more recent 80486SX clones. One of my test machines (an eBox-2300SX) has full VBE with EDID and no FPU.

Ready4Dis wrote:
Is it really necessary to support that in the boot loader? If you ever want a hand with anything like that, I enjoy low level driver/graphics stuff more than kernel development ;). I've written a 3d rendering engine on a 486 sx (no FPU) and used tons of fixed point math before. Although, my colour conversion routines weren't meant for real-time so they are just floating point and no optimizations.


I don't know about necessary. The "EDID to my monitor description" conversion has to be done somewhere (even if that means keeping EDID around during boot so it can be handed to a utility after boot); and if the early boot code does it then all later code (including the code to select a video mode during boot and the code that generates graphics during boot) only has to worry about using my monitor description and doesn't have to support both EDID (in case my monitor description isn't present in the boot image/init RAM disk) and my monitor description.

If you're interested...

The idea is to find a set of 9 formulas that generates a matrix. The input variables for these 9 formulas are CIE co-ords for 3 primaries and white point, which is the 8 variables Xr, Yr, Xg, Yg, Xb, Yb, Xw, Yw (there is no Zw which is needed later and I forgot the work-around for that problem).

In other words, I'm looking for functions fn1 to fn9 in this matrix:
Code:
| fn1(Xr, Yr, Xg, Yg, Xb, Yb, Xw, Yw), fn2(Xr, Yr, Xg, Yg, Xb, Yb, Xw, Yw), fn3(Xr, Yr, Xg, Yg, Xb, Yb, Xw, Yw) |
| fn4(Xr, Yr, Xg, Yg, Xb, Yb, Xw, Yw), fn5(Xr, Yr, Xg, Yg, Xb, Yb, Xw, Yw), fn6(Xr, Yr, Xg, Yg, Xb, Yb, Xw, Yw) |
| fn7(Xr, Yr, Xg, Yg, Xb, Yb, Xw, Yw), fn8(Xr, Yr, Xg, Yg, Xb, Yb, Xw, Yw), fn9(Xr, Yr, Xg, Yg, Xb, Yb, Xw, Yw) |


Once these functions are found; I'd be able to plug the values from any monitor's EDID into them and generate a matrix that does "XYZ with D65 whitepoint to monitor's RGB colour space" that includes colour space conversion and chromatic adaption for that monitor.

The maths for generating the colour conversion matrix (without chromatic adaption) is described at the top of this web page; except it does the reverse of what I want (generates an RGB->XYZ conversion matrix) and needs to be inverted to get the XYZ->RGB matrix.

The maths for generating the chromatic adaption matrix is described at the top of this web page. I've chosen to use the Bradford method (ignore XYZ scaling and Von Kries).

Of course the set of 9 formulas would need to be simplified as much as possible; then any common sub-expressions identified and lifted out; then the whole mess would be analysed to determine the effective range and precision needed at each step (in preparation for implementing them as fixed point integer maths, hopefully).
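
For illustration, here's a plain floating-point sketch of that derivation (the RGB->XYZ matrix only, without the Bradford chromatic adaption step, and before any simplification or conversion to fixed point); the function names are made up:

Code:
/* Convert a chromaticity (x, y) with Y = 1 into XYZ. This also answers the
   "missing Zw" problem: Zw = (1 - xw - yw) / yw. */
static void xy_to_XYZ(double x, double y, double XYZ[3])
{
    XYZ[0] = x / y;
    XYZ[1] = 1.0;
    XYZ[2] = (1.0 - x - y) / y;
}

/* Invert a 3x3 matrix; returns 0 if the matrix is singular. */
static int invert3x3(const double m[3][3], double out[3][3])
{
    double det = m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
               - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
               + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
    if (det == 0.0)
        return 0;
    out[0][0] =  (m[1][1] * m[2][2] - m[1][2] * m[2][1]) / det;
    out[0][1] = -(m[0][1] * m[2][2] - m[0][2] * m[2][1]) / det;
    out[0][2] =  (m[0][1] * m[1][2] - m[0][2] * m[1][1]) / det;
    out[1][0] = -(m[1][0] * m[2][2] - m[1][2] * m[2][0]) / det;
    out[1][1] =  (m[0][0] * m[2][2] - m[0][2] * m[2][0]) / det;
    out[1][2] = -(m[0][0] * m[1][2] - m[0][2] * m[1][0]) / det;
    out[2][0] =  (m[1][0] * m[2][1] - m[1][1] * m[2][0]) / det;
    out[2][1] = -(m[0][0] * m[2][1] - m[0][1] * m[2][0]) / det;
    out[2][2] =  (m[0][0] * m[1][1] - m[0][1] * m[1][0]) / det;
    return 1;
}

/* Build the RGB->XYZ matrix from the monitor's primaries and white point
   (xr, yr, xg, yg, xb, yb, xw, yw from EDID); invert the result to get the
   XYZ->RGB matrix that's actually wanted. */
int make_rgb_to_xyz(double xr, double yr, double xg, double yg,
                    double xb, double yb, double xw, double yw,
                    double rgb_to_xyz[3][3])
{
    double R[3], G[3], B[3], W[3], P[3][3], Pinv[3][3], S[3];

    xy_to_XYZ(xr, yr, R);
    xy_to_XYZ(xg, yg, G);
    xy_to_XYZ(xb, yb, B);
    xy_to_XYZ(xw, yw, W);

    /* Columns of P are the primaries' XYZ values. */
    for (int i = 0; i < 3; i++) {
        P[i][0] = R[i];
        P[i][1] = G[i];
        P[i][2] = B[i];
    }
    if (!invert3x3(P, Pinv))
        return 0;

    /* S = P^-1 * W gives the scale factor for each primary... */
    for (int i = 0; i < 3; i++)
        S[i] = Pinv[i][0] * W[0] + Pinv[i][1] * W[1] + Pinv[i][2] * W[2];

    /* ...and scaling each column of P by S gives the RGB->XYZ matrix. */
    for (int i = 0; i < 3; i++) {
        rgb_to_xyz[i][0] = S[0] * P[i][0];
        rgb_to_xyz[i][1] = S[1] * P[i][1];
        rgb_to_xyz[i][2] = S[2] * P[i][2];
    }
    return 1;
}

Turning this into the "9 formulas" form is then a matter of expanding and simplifying these steps symbolically; the fixed point version follows from bounding the range and precision needed at each step.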

Please note that the only reason I haven't done this already is that I've been lazy. :roll:


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: Graphics API and GUI
PostPosted: Tue Oct 06, 2015 3:08 pm 

Joined: Sun Sep 19, 2010 10:05 pm
Posts: 1074
Brendan wrote:
I don't just want to avoid/remove end user hassle (e.g. user should be able to install a game without setting anything at all); I want to avoid/remove software developer hassle (e.g. the game/app shouldn't know or care about "graphics quality vs. performance" because that's the video driver's job).

"Variable quality, fixed frame rate" means nothing will run slower than it needs to (it's the quality of the scene/frame that might be worse than necessary). If the video driver decides it has enough time and/or memory to generate mip-maps then it generates mip-maps. There's no "enable via toggle/slider/whatever" and the application doesn't get a say (and doesn't need to deal with the hassle of bothering to decide if/when mip-maps are used).

This is similar to the approach that Netflix uses when streaming video, nowadays. They start with a low-quality/low-resolution video stream just to get things started, and then they seamlessly increase quality as long as bandwidth is available.

They also decrease quality when bandwidth is reduced, which I assume you would also want to do if, for some reason, the CPU becomes busy with some other task, or the GPU gets bogged down rendering too many things in a single frame...

Halo 2 was also notable in that it rendered with low quality textures until the high quality textures were loaded, which had the effect of textures popping in after about half a second every time the scene changed.

Both of these, to me, are borderline unacceptable, and really take away from the overall experience. But then again, so do stuttering, and long load times, so I'm not sure which I would prefer, if I had the choice...

EDIT: I should also mention that having the "Driver" handle all of the render logic, and only allowing the application to issue simple commands like "Load" and "Draw", is typically what a "3D rendering engine" is supposed to do, and there are dozens of engines out there of various quality. Moving this functionality to the driver may end up having the same problem that DirectX did in its early days -- all games will look pretty much the same, and new hardware features will be difficult to support, because taking full advantage of them may require changes to the API.

Just for the sake of argument, why not move this functionality to an OS level "tier", and let the driver simply communicate with the hardware? You can still force applications to communicate with the driver through the engine, if you want, but it would make the various drivers that you have to write a lot simpler, I would think.

_________________
Project: OZone
Source: GitHub
Current Task: LIB/OBJ file support
"The more they overthink the plumbing, the easier it is to stop up the drain." - Montgomery Scott


 Post subject: Re: Graphics API and GUI
PostPosted: Tue Oct 06, 2015 3:29 pm 

Joined: Sat Nov 18, 2006 9:11 am
Posts: 571
Rusky wrote:
Changes to level of detail as you get farther away are fine when tuned right, because the view is already changing during the transition. But a system that shows "missing texture" while things are still loading will end up looking even worse than this, which only shows low-res versions of textures to begin with and already looks awful.

You also really don't want a variable render framerate like you suggest, because it causes tearing and/or stuttering. What you want is to determine the maximum viable framerate once and then stick with that, picking it so the monitor's refresh rate is a multiple of it. You could perhaps change that (or the quality level) for different scenes, but that's a compromise and it's definitely the game's job, not something you want the video driver doing behind your back whenever it feels like.


Meh, it worked pretty well for me... also, if you render to a specific frame rate, like 30fps, and it misses and you are waiting on v-sync, you end up with 15fps. If your target was 60fps, and you miss, you end up with 30fps. If the graphics card averages 30fps, but dips slightly below and slightly over, would you really want to limit it to 15?


 Post subject: Re: Graphics API and GUI
PostPosted: Tue Oct 06, 2015 3:39 pm 

Joined: Sat Nov 18, 2006 9:11 am
Posts: 571
Quote:
I don't just want to avoid/remove end user hassle (e.g. user should be able to install a game without setting anything at all); I want to avoid/remove software developer hassle (e.g. the game/app shouldn't know or care about "graphics quality vs. performance" because that's the video driver's job).


I just can't see how that's going to work smoothly. If the video driver is rendering away, it might render the background at a really high quality, realize it doesn't have enough time until the next frame, and render the focused items at crappy quality, which would be exactly the opposite of what you would want to happen. I don't see a way around it without the application having a say in it, even if it's just a flag for 'importance' or similar.

Quote:
Yes; but note that in my experience designing things like this takes a considerable amount of research and time, and tends to be intertwined with other things in ways that aren't immediately obvious. ;)
If it was easy, we'd be running our own OS by now ;).

Quote:
No; but there are far more recent 80486SX clones. One of my test machines (an eBox-2300SX) has full VBE with EDID and no FPU.

Yes, but would you really do a conversion during boot, or after the graphics driver loaded? I understand grabbing the info early, but not using it for a splash screen.

Quote:
Once these functions are found; I'd be able to plug the values from any monitor's EDID into them and generate a matrix that does "XYZ with D65 whitepoint to monitor's RGB colour space" that includes colour space conversion and chromatic adaption for that monitor.

The maths for generating the colour conversion matrix (without chromatic adaption) is described at the top of this web page; except it does the reverse of what I want (generates an RGB->XYZ conversion matrix) and needs to be inverted to get the XYZ->RGB matrix.

The maths for generating the chromatic adaption matrix is described at the top of this web page. I've chosen to use the Bradford method (ignore XYZ scaling and Von Kries).

Of course the set of 9 formulas would need to be simplified as much as possible; then any common sub-expressions identified and lifted out; then the whole mess would be analysed to determine the effective range and precision needed at each step (in preparation for implementing them as fixed point integer maths, hopefully).

Yes, I have done the color space conversions before, but I didn't make it so D65 was default, I made it a variable as well so if someone didn't perceive color the same, they could select their own 'white' point (or if it was just personal preference). I had (probably still have somewhere) routines for converting xyz, rgb, xy, lab, srgb, yuv, yiv, hsb, and a few others as well that I can't recall off the top of my head. I had it for converting images and such. I was trying to figure out a way to also implement HDR with it, but never got that far. Let me know when you do figure it out though, I would be interested in seeing what you come up with and how you use it. Do you intend to render everything in a linear colour space or non-linear and then convert, etc?

I know the feeling, I haven't worked on my stuff in a long time.


 Post subject: Re: Graphics API and GUI
PostPosted: Tue Oct 06, 2015 3:44 pm 

Joined: Sat Nov 18, 2006 9:11 am
Posts: 571
SpyderTL wrote:
Brendan wrote:
I don't just want to avoid/remove end user hassle (e.g. user should be able to install a game without setting anything at all); I want to avoid/remove software developer hassle (e.g. the game/app shouldn't know or care about "graphics quality vs. performance" because that's the video driver's job).

"Variable quality, fixed frame rate" means nothing will run slower than it needs to (it's the quality of the scene/frame that might be worse than necessary). If the video driver decides it has enough time and/or memory to generate mip-maps then it generates mip-maps. There's no "enable via toggle/slider/whatever" and the application doesn't get a say (and doesn't need to deal with the hassle of bothering to decide if/when mip-maps are used).

This is similar to the approach that Netflix uses when streaming video, nowadays. They start with a low-quality/low-resolution video stream just to get things started, and then they seamlessly increase quality as long as bandwidth is available.

They also decrease quality when bandwidth is reduced, which I assume you would also want to do if, for some reason, the CPU becomes busy with some other task, or the GPU gets bogged down rendering too many things in a single frame...

Halo 2 was also notable in that it rendered with low quality textures until the high quality textures were loaded, which had the effect of textures popping in after about half a second every time the scene changed.

Both of these, to me, are borderline unacceptable, and really take away from the overall experience. But then again, so do stuttering, and long load times, so I'm not sure which I would prefer, if I had the choice...

EDIT: I should also mention that having the "Driver" handle all of the render logic, and only allowing the application to issue simple commands like "Load" and "Draw", is typically what a "3D rendering engine" is supposed to do, and there are dozens of engines out there of various quality. Moving this functionality to the driver may end up having the same problem that DirectX did in its early days -- all games will look pretty much the same, and new hardware features will be difficult to support, because taking full advantage of them may require changes to the API.

Just for the sake of argument, why not move this functionality to an OS level "tier", and let the driver simply communicate with the hardware? You can still force applications to communicate with the driver through the engine, if you want, but it would make the various drivers that you have to write a lot simpler, I would think.


Yes, I agree some games get it wrong, haha. Would you rather see lower quality or sit and wait? It's a hard one, and probably depends on the user and how long the wait would be. 1/2 second, I'd probably wait. For a huge MMORPG that has to stream custom skins over the internet each time you ran into another player, I would take reduced quality while it was streaming.

Yes, having a limited API in the video driver will limit future developments unless you give it some other workaround.


 Post subject: Re: Graphics API and GUI
PostPosted: Tue Oct 06, 2015 4:04 pm 

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

Ready4Dis wrote:
Quote:
Most OSs do have at least some sort of file system permissions. The problem is that most OSs also let the root/admin user ignore them, so the existing file system permissions are effectively useless for the purpose of preventing the computer's owner from tampering with a game's files.

Of course the idea of an "all powerful root/admin" is stupid and broken (it completely disregards the principle of least privilege, is a huge shining beacon for brute force/dictionary "root password guessing" attacks, is a major hurdle for content providers, etc). It also doesn't make any sense for business use (e.g. where the administrator is not the computer owner in the first place, but is just some "random" employee). Needless to say, I won't be doing that.

Also note that my OS will use a versioning file system. This means that nothing can modify any file (you can only create a new version of the file); which means that the game/video driver can check "version 123" of the file and continue using that version after a new version of the file is created.


What is to stop someone from deleting the file first and then creating one with the same name? Who specifies the version number? Is the file version specified by the installer? If you don't have an admin, what if you want to uninstall an application? Do you track all files that have been modified or created by that application and remove them? What about files the user wants to keep?


When a file is deleted it still exists, it's just marked as "deleted" (and garbage collected if/when disk space runs out, unless it's restored first). It can't actually be deleted because that breaks file system rollbacks.

The version number is determined by the VFS when the new file is closed (committed to persistent state) and is formed by concatenating a timestamp with that computer's "node ID" (so if 2 computers happen to create the same file at the same time the version numbers are still unique).
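
(As a trivial sketch of that scheme, with made-up field widths; the point is only that the timestamp and the node ID together keep versions unique across computers:)

Code:
#include <stdint.h>

/* Version ID: commit timestamp in the high bits, node ID in the low bits, so
   two computers committing a new version at the same instant still produce
   different version numbers. (Toy encoding; the top 16 timestamp bits are lost.) */
static uint64_t make_version_id(uint64_t commit_timestamp, uint16_t node_id)
{
    return (commit_timestamp << 16) | node_id;
}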

I do have an admin, I just don't have "all powerful admin".

Files and directories have an owner. Applications are users and have user IDs. An application is installed in its own directory ("/apps/bcos/myapp") and owns that directory. To uninstall an application you delete its directory. The user's files go in the user's directory ("/home/Brendan/"), including their application settings, files created by applications, etc; and these aren't affected when applications are removed.

Ready4Dis wrote:
Quote:
If the current frame depends on the reflective surface; and if the reflective surface depends on the cube map; and if the cube map depends on the moving/animated object; then whenever the video driver updates the moving animated object it knows it also has to update the cube map, the reflective surface and the current frame.


But how does the graphics system know if the moving object is visible or hidden?


During rendering you start with the root "thing". If it still has a lower level representation you do nothing. Otherwise you have to render it; which involves multiple steps that depend on its type (e.g. maybe vertex transformations, rasterisation, then applying textures to fragments). During that "applying textures" part; if the texture has a lower level representation you just use that; and otherwise you have to render the texture, which involves multiple steps that depend on its type (e.g. maybe vertex transformations, rasterisation, then applying textures). Basically it ends up being recursive, where everything that needs to be rendered is rendered (and nothing that is hidden is rendered), and everything that still has a lower level representation is recycled.

When a process changes the higher level description of a "thing", its lower level representation is invalidated/discarded (which forces that "thing" to be rendered again during rendering); but so is the lower level representation for anything that depends on that thing, and anything that depends on anything that depends on it, and so on (possibly and most probably, all the way up to the "root thing").

Your process does something to the moving/animated object (changing its higher level description); so the video driver invalidates/discards the lower level representation for the moving/animated object, and the cube map, and the reflective surface, and the screen. Then the video driver renders the next frame and realises it has to render the screen, and if the reflective surface is visible it realises it also has to render the reflective surface, but the reflective surface uses the cube map and the cube map's lower level representation is gone so it knows it has to render the cube map, and if the moving/animated object is visible in the cube map then its lower level representation is gone too so that also has to be rendered.

However; this is a simplification. Where I've said "lower level representation is invalidated/discarded" I don't really mean that the lower level representation is invalidated/discarded. Instead the video driver could determine how much the object changed, so that later (during rendering) it can decide whether to recycle the previous lower level description or to render it again based on how much it changed and how much time there is.
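
A rough sketch of that mechanism (invalidation walking "up" the dependency graph towards the root, rendering walking "down" it and recycling anything still cached); the structures and names are invented for illustration, and the visibility test is grossly simplified:

Code:
#include <stdbool.h>
#include <stddef.h>

struct node {
    void *description;        /* higher level description (mesh, file name, ...) */
    void *cached;             /* lower level representation, or NULL if invalid  */
    struct node **deps;       /* nodes this node depends on                      */
    size_t dep_count;
    struct node **dependants; /* nodes that depend on this node                  */
    size_t dependant_count;
};

extern void *render_node(struct node *n); /* produces a lower level representation    */
extern bool node_visible(struct node *n); /* e.g. bounding sphere vs. clipping planes */

/* Called when a process changes a node's description: throw away the cached
   representation of the node and of everything that (transitively) depends on it.
   (A real implementation would avoid re-walking already-invalid subtrees.) */
void invalidate(struct node *n)
{
    n->cached = NULL;
    for (size_t i = 0; i < n->dependant_count; i++)
        invalidate(n->dependants[i]);
}

/* Called at frame time, starting from the root "thing" (the screen). Anything
   still cached is recycled; anything invalidated (and actually visible) is
   re-rendered, which recursively pulls in whichever dependencies it uses. */
void *update(struct node *n)
{
    if (n->cached != NULL)
        return n->cached;
    if (!node_visible(n))
        return NULL;                      /* hidden: don't waste time on it */
    for (size_t i = 0; i < n->dep_count; i++)
        update(n->deps[i]);
    n->cached = render_node(n);
    return n->cached;
}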

Ready4Dis wrote:
Quote:
Yes; some games do try to dynamically adjust detail in some way/s, but they're fighting against the design of APIs and tools that weren't designed for it and their solutions tend to end up being reactive rather than proactive causing hysteresis. For a worst case scenario consider alternating "complex, simple, complex, simple" frames - after a complex frame it reacts by reducing detail for the next/simple frame, and after simple frames it reacts by increasing detail for the next/complex frame.

I haven't seen it that bad for some time now, most use an average of a few frames to see if there is a trend, but either way, if it is that difficult to get right for a specific case, it's going to be much more difficult (impossible?) to get it right for all cases.


It'd be fairly easy to analyse the scene just before rendering starts and estimate how much time each thing would take to update for the specific video card/GPU/video driver (unless you're trying to do it in a game and can't know the internal details of the specific video card/GPU/video driver).

Ready4Dis wrote:
I know a few games that do things like reduced rendering quality while in motion (or fast motion), dynamic LOD for terrain based on frame rates, using different rendering (shaders) techniques, etc. It is very difficult to predict how quickly the scene is going to render unless you can qualify different reasons for the speed changes. Typically I see (at least more recent games) make more subtle changes and not so drastic changes that you see wildly varying frame rates from frame to frame. If I am playing a FPS and turn really fast, I don't want everything to turn into a grey blob, I would much rather a low quality rendering but still able to see everything.


Um. Think of a sliding scale that ranges from "photorealistic" all the way to "grey blobs". On a high-end gaming machine with appropriate native video drivers you're going to get graphics near the "photorealistic" end of the scale. On an ancient 80486 with software rendering you're not; and given the choice between "grey blobs at 60 frame per second" and "photorealistic at 1 frame per hour" I have no doubt you'll be begging for grey blobs while playing that first person shooter. ;)

Ready4Dis wrote:
Quote:
I don't see what information would be needed about the game/app/GUI design.

If you store information in a BSP tree and render an indoor world, who does the graphics calls? Does the app tell the video driver what to render (and I don't just mean, render outside scene, I mean render this object, that object, etc) or does the video driver know what to render automagically? I guess i'm confused to exactly where you differentiate the app telling the video driver what it needs done and what the video driver does on its own.


The video driver is responsible for rendering (including occlusion via. z-buffer or whatever method the video driver felt like using), which means the app wouldn't store information in a BSP in the first place.

Each process provides a sub-graph of the global dependency graph; where each node has a type that determines the format of the node's description. For example, the node's type might be:
  • "2D texture from file" where the description is a file name
  • "2D texture from string" where the description is a UTF-8 string with some metadata (which font, etc)
  • "3D object" where the description is a mesh of vertices/polygons where each polygon has a reference to a texture (another node in the graph);
  • "3D space" where the description is a list of references to "3D object" nodes and their position/rotation within the 3D space
  • "2D from 3D space" where the description is the position/rotation of a camera and a reference to a "3D space" node
  • ....
The video driver does everything else; including loading data from files, asking the font engine to convert the UTF-8 string into whatever format the font engine provides, finding (e.g.) a "radius from origin to furthermost vertex" for bounding sphere tests on 3D objects, etc, before anything is rendered; including doing all the rendering; and including updating its own internal data when a process changes a node's description or adds/removes nodes. Of course in practice it's going to be a whole lot more complicated (more node types, better/more complete descriptions, etc). :)
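
For illustration only, the node types listed above might look something like this in C (every name and field here is invented, and the real descriptions would be richer):

Code:
#include <stddef.h>

enum node_type {
    NODE_TEXTURE_FROM_FILE,    /* description is a file name                       */
    NODE_TEXTURE_FROM_STRING,  /* description is a UTF-8 string plus font metadata */
    NODE_OBJECT_3D,            /* description is a mesh referencing texture nodes  */
    NODE_SPACE_3D,             /* description is a list of placed 3D object nodes  */
    NODE_CAMERA_2D_FROM_3D,    /* description is a camera plus a 3D space node     */
};

struct placed_object {
    unsigned object_node;      /* reference to a NODE_OBJECT_3D node */
    float position[3];
    float rotation[3];
};

struct node_description {
    enum node_type type;
    union {
        const char *file_name;                           /* NODE_TEXTURE_FROM_FILE   */
        struct { const char *utf8; int font_id; } text;  /* NODE_TEXTURE_FROM_STRING */
        struct { struct placed_object *objects;
                 size_t count; } space;                  /* NODE_SPACE_3D            */
        struct { float camera_pos[3];
                 float camera_rot[3];
                 unsigned space_node; } view;            /* NODE_CAMERA_2D_FROM_3D   */
        /* the NODE_OBJECT_3D mesh description is omitted for brevity */
    } u;
};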


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: Graphics API and GUI
PostPosted: Tue Oct 06, 2015 4:35 pm 

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

Ready4Dis wrote:
Quote:
I don't just want to avoid/remove end user hassle (e.g. user should be able to install a game without setting anything at all); I want to avoid/remove software developer hassle (e.g. the game/app shouldn't know or care about "graphics quality vs. performance" because that's the video driver's job).


I just can't see how that's going to work smoothly. If the video driver is rendering away, it might render the background at a really high quality, realize it doesn't have enough time until the next frame, and render the focused items at crappy quality, which would be exactly the opposite of what you would want to happen. I don't see a way around it without the application having a say in it, even if it's just a flag for 'importance' or similar.


You'd just want the video driver to be smarter than "very broken" (e.g. estimate the background is going to take too long to render in high quality before it starts rendering anything, using conservative estimates in case something takes longer than estimated, etc). How important something is depends on how much it changed since last time, its size on screen, how close to the middle of the screen it is, etc - it's simple heuristics.
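
Something as simple as a weighted score would do as a starting point (the fields and weights below are made up for illustration; they'd be tuned against real content):

Code:
struct visible_thing {
    float change_since_last_frame;  /* 0.0 = unchanged, 1.0 = completely different */
    float screen_area_fraction;     /* 0.0 .. 1.0 of the screen it covers          */
    float distance_from_centre;     /* 0.0 = centre of the screen, 1.0 = corner    */
};

/* Higher score = more important = rendered earlier and/or at higher quality. */
static float importance(const struct visible_thing *t)
{
    return 4.0f * t->change_since_last_frame
         + 2.0f * t->screen_area_fraction
         + 1.0f * (1.0f - t->distance_from_centre);
}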

Ready4Dis wrote:
Quote:
No; but there are far more recent 80486SX clones. One of my test machines (an eBox-2300SX) has full VBE with EDID and no FPU.

Yes, but would you really do a conversion during boot, or after the graphics driver loaded? I understand grabbing the info early, but not using it for a splash screen.


I'm planning to support device independent colour space, resolution independence, monitor size and shapes, dithering, (some) stereoscopy and smooth/vector fonts from the instant that the boot code sets a graphics video mode (and stops using the firmware's "text output" functions); and want a smooth transition (e.g. without a sudden change in screen content) when the boot code hands control of video off to the OS's video driver later during boot. It won't be a splash screen though - more like a background (with a gradient from blue to black and the OS name in large white letters) with the boot log in the foreground (made to look like it's an application's window).


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: Graphics API and GUI
PostPosted: Wed Oct 07, 2015 7:57 am 

Joined: Sat Nov 18, 2006 9:11 am
Posts: 571
So, we are back to you implementing an all-knowing game engine in your video driver :). So if the game doesn't store a BSP tree, octree, or some other high level description of the world it's rendering, how exactly do the physics work? Please don't tell me that's in the video driver as well ;). Also, if you load an overly large world that requires streaming (like minecraft or a very large open world game like grand theft auto), how do you describe this to the video driver? Do you just register every single object/texture/model via the video driver and it handles the loading/unloading of everything? It sounds interesting, and I really hope you get it done some day so I can see how it works in real life as it sounds like a much simpler thing to program a game in as long as it works well.

Quote:
It'd be fairly easy to analyse the scene just before rendering starts and estimate how much time each thing would take to update for the specific video card/GPU/video driver (unless you're trying to do it in a game and can't know the internal details of the specific video card/GPU/video driver).
You say it's easy, but it's not as easy as you think, just ask any major game engine company. Unless you are checking each and every object for occlusion behind each and every other object, you don't know how many texel units, shader units, etc are going to be required. If you simply assume they will all be visible, you might be lowering the quality a lot further than needed.

Quote:
The video driver is responsible for rendering (including occlusion via. z-buffer or whatever method the video driver felt like using)
Occlusion via z-buffer can only take place during rendering, not as a pre-process as the z-buffer isn't updated at that point. How does that stop you from rendering an object that is not visible? You still 'attempt' to render and the z-buffer check during rendering discards it, but you still sent the object to be rendered and it still had to draw the triangles and compare the z values to the z-buffer. This isn't a replacement for a proper scene-graph that can discard entire objects without sending them to be rendered.

Quote:
You'd just want the video driver to be smarter than "very broken" (e.g. estimate the background is going to take too long to render in high quality before it starts rendering anything, using conservative estimates in case something takes longer than estimated, etc). How important something is depends on how much it changed since last time, its size on screen, how close to the middle of the screen it is, etc - it's simple heuristics.

Maybe I'm missing something, but unless it knows what is going to end up visible on screen BEFORE it starts rendering (z-buffer occlusion is not before rendering), it really isn't going to make very good guesses.

Quote:
I'm planning to support device independent colour space, resolution independence, monitor size and shapes, dithering, (some) stereoscopy and smooth/vector fonts from the instant that the boot code sets a graphics video mode (and stops using the firmware's "text output" functions); and want a smooth transition (e.g. without a sudden change in screen content) when the boot code hands control of video off to the OS's video driver later during boot. It won't be a splash screen though - more like a background (with a gradient from blue to black and the OS name in large white letters) with the boot log in the foreground (made to look like it's an application's window).


I am looking towards a similar goal with colour space and resolution independence (to an extent, because I wouldn't want to render a full desktop on a 320x200 display the same as on a 4k display; you wouldn't be able to read/see anything). I am also not too concerned about my boot loader handling any of that. I don't mind having a background/splash screen, whatever you want to call it, that isn't perfectly colourized for the specific screen. Heck, depending on how quick the boot is, I might just wait for the graphics driver to load before even putting a splash screen up, at least if booting from a quick medium (hard drive?). I can see the need/want if booting from floppy or via a slower network connection. Last time I booted on real hardware I barely saw my boot log; I had to put delays in on real hardware or watch it in Bochs just to make sure it looked right, haha.


 Post subject: Re: Graphics API and GUI
PostPosted: Wed Oct 07, 2015 9:40 pm 

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

Ready4Dis wrote:
So, we are back to you implementing an all-knowing game engine in your video driver :). So if the game doesn't store a BSP tree, octree, or some other high level description of the world it's rendering, how exactly do the physics work? Please don't tell me that's in the video driver as well ;).


For basic fundamentals; every single useful program accepts input, processes that input in some way, and produces output. There are many possible types of input (command line args, files, keyboard, weather sensor, ...) and many possible types of output (executable's exit code, network packets sent, sound, haptic feedback, ...). A system is a collection of collaborating programs; where the output/s of one program is the input/s for other program/s; which means that for sane systems input and output requires mutual agreement between 2 or more programs.

The question here is about establishing a mutual agreement for one specific type of output (graphics); mostly in the form of a standardised graphics API that ensures the output of some programs (e.g. applications, games, GUIs) matches the input of other programs (e.g. video drivers).

Physics has nothing to do with graphics output. Physics isn't even output at all. Physics is part of processing.

Ready4Dis wrote:
Also, if you load an overly large world that requires streaming (like minecraft or a very large open world game like grand theft auto), how do you describe this to the video driver? Do you just register every single object/texture/model via the video driver and it handles the loading/unloading of everything? It sounds interesting, and I really hope you get it done some day so I can see how it works in real life as it sounds like a much simpler thing to program a game in as long as it works well.


You'd only need to tell the video driver about things that could possibly be seen by the camera (and for an "infinite" world like Minecraft there's no need to tell the video driver about the entire world); but you may tell the video driver more for whatever reason (e.g. so it can pre-load/pre-process data before its actually needed). This applies to all possible graphics APIs. The difference between mine and existing graphics APIs has nothing to do with this at all.

The difference between mine and existing graphics APIs is who manages state. For mine, the video driver manages more of the state and reduces the burden on applications/games; while for existing graphics APIs the video driver manages far less, which forces the application/game to do insidious micro-managing (which has the additional side effects of greatly increasing the overhead of the connection between application/game output and video driver input, because all that stupid micro-managing causes more traffic/communication; while also crippling the video driver's flexibility and its ability to make effective performance/optimisation decisions).

Ready4Dis wrote:
Quote:
It'd be fairly easy to analyse the scene just before rendering starts and estimate how much time each thing would take to update for the specific video card/GPU/video driver (unless you're trying to do it in a game and can't know the internal details of the specific video card/GPU/video driver).
You say it's easy, but it's not as easy as you think, just ask any major game engine company. Unless you are checking each and every object for occlusion behind each and every other object, you don't know how many texel units, shader units, etc are going to be required. If you simply assume they will all be visible, you might be lowering the quality a lot further than needed.


Um.. I say it's fairly easy when the video driver does it (and hard when the game does it); and you say it's hard when the game does it (just ask game developers!)?

Ready4Dis wrote:
Quote:
The video driver is responsible for rendering (including occlusion via. z-buffer or whatever method the video driver felt like using)
Occlusion via z-buffer can only take place during rendering, not as a pre-process, as the z-buffer isn't updated at that point. How does that stop you from rendering an object that is not visible? You still 'attempt' to render it and the z-buffer check during rendering discards it, but you still sent the object to be rendered, and it still had to draw the triangles and compare the z values against the z-buffer. This isn't a replacement for a proper scene graph that can discard entire objects without sending them to be rendered.


Z-buffer doesn't stop you from rendering objects; but it's the last/final step for occlusion (and was only mentioned as it makes BSP redundant and is extremely common). Earlier steps prevent you from rendering objects.

For a very simple example; you can give each 3D object a "bounding sphere" (e.g. distance from origin to furthermost vertex) and use that to test if the object is outside the camera's left/right/top/bottom/foreground/background clipping planes and cull most objects extremely early in the pipeline. It doesn't take much to realise you can do the same for entire collections of objects (and collections of collections of objects, and ...).
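As a rough illustration of that bounding-sphere test (the plane representation and function names here are made up for the example, not part of any particular driver):

Code:
/* Cull an object if its bounding sphere lies entirely outside any one of the
 * camera's six clipping planes (left/right/top/bottom/near/far). Planes are
 * stored as (normal, d) with normals pointing into the view volume, so a
 * signed distance below -radius means "completely outside".                */
#include <stdbool.h>

typedef struct { float x, y, z; }               vec3_t;
typedef struct { vec3_t n; float d; }           plane_t;  /* dot(n,p) + d = signed distance */
typedef struct { vec3_t centre; float radius; } sphere_t;

static float dot(vec3_t a, vec3_t b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

bool sphere_visible(const sphere_t *s, const plane_t frustum[6])
{
    for (int i = 0; i < 6; i++) {
        float dist = dot(frustum[i].n, s->centre) + frustum[i].d;
        if (dist < -s->radius)
            return false;        /* completely behind this plane: cull it   */
    }
    return true;                 /* possibly visible: keep it               */
}

Running the same test on a bounding sphere that encloses a whole group of objects is what lets entire collections be discarded with one comparison.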

Of course typically there's other culling steps too, like culling/clipping triangles to the camera's clipping planes and back face culling, which happen later in the pipeline (not at the beginning of the pipeline, but still well before you get anywhere near Z-buffer).
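Back-face culling itself is only a dot product per triangle; again just a sketch with made-up helper types, not any driver's actual code:

Code:
/* Back-face culling: skip a triangle whose face normal points away from the
 * camera. Vertices are assumed to be counter-clockwise when the triangle
 * faces the viewer.                                                         */
#include <stdbool.h>

typedef struct { float x, y, z; } vec3_t;

static vec3_t sub(vec3_t a, vec3_t b)   { return (vec3_t){ a.x-b.x, a.y-b.y, a.z-b.z }; }
static vec3_t cross(vec3_t a, vec3_t b) { return (vec3_t){ a.y*b.z - a.z*b.y,
                                                           a.z*b.x - a.x*b.z,
                                                           a.x*b.y - a.y*b.x }; }
static float  dot(vec3_t a, vec3_t b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

bool triangle_front_facing(vec3_t v0, vec3_t v1, vec3_t v2, vec3_t camera_pos)
{
    vec3_t normal = cross(sub(v1, v0), sub(v2, v0));   /* face normal           */
    vec3_t to_tri = sub(v0, camera_pos);               /* camera -> triangle    */
    return dot(normal, to_tri) < 0.0f;                 /* true if it faces us   */
}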

Ready4Dis wrote:
Quote:
You'd just want the video driver to be smarter than "very broken" (e.g. estimate the background is going to take too long to render in high quality before it starts rendering anything, using conservative estimates in case something takes longer than estimated, etc). How important something is depends on how much it changed since last time, its size on screen, how close to the middle of the screen it is, etc - it's simple heuristics.

Maybe I'm missing something, but unless it knows what is going to end up visible on screen BEFORE it starts rendering (z-buffer occlusion is not before rendering), it really isn't going to make very good guesses.


Yes; I think you've missed the majority of the graphics pipeline (everything that happens before the final "z-buffer test and mapping texels to fragments" stage at the end).

Ready4Dis wrote:
Quote:
I'm planning to support device independent colour space, resolution independence, monitor size and shapes, dithering, (some) stereoscopy and smooth/vector fonts from the instant that the boot code sets a graphics video mode (and stops using the firmware's "text output" functions); and want a smooth transition (e.g. without a sudden change in screen content) when the boot code hands control of video off to the OS's video driver later during boot. It won't be a splash screen though - more like a background (with a gradient from blue to black and the OS name in large white letters) with the boot log in the foreground (made to look like it's an application's window).


I am looking towards a similar goal with colour space and resolution independence (to an extent, because I wouldn't want to render a full desktop on a 320x200 display the same as on a 4K display; you wouldn't be able to read/see anything). I am also not too concerned about my boot loader handling any of that. I don't mind having a background/splash screen, whatever you want to call it, that isn't perfectly colourized for the specific screen. Heck, depending on how quick the boot is, I might just wait for the graphics driver to load before even putting a splash screen up, at least if booting from a fast medium (hard drive?). I can see the need/want if booting from a floppy or via a slower network connection. Last time I booted on real hardware I barely saw my boot log; I had to put delays in on real hardware or watch it in Bochs just to make sure it looked right, haha.


For me; the OS will probably have to wait for network and authentication before it can become part of its cluster and start loading its video driver from the distributed file system. There's also plenty of scope for "Oops, something went wrong during boot" where the OS never finishes booting and the user needs to know what failed (which is the primary reason to display the boot log during boot).


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: Graphics API and GUI
PostPosted: Thu Oct 08, 2015 5:14 am 
Offline
Member
Member

Joined: Sat Nov 18, 2006 9:11 am
Posts: 571
Quote:
Um.. I say it's fairly easy when the video driver does it (and hard when the game does it); and you say it's hard when the game does it (just ask game developers!)?
I've done a lot of graphics programming, before and after 3D accelerators were available, and I still don't think it's easy to do from either standpoint.

Quote:
Z-buffer doesn't stop you from rendering objects; but it's the last/final step for occlusion (and was only mentioned as it makes BSP redundant and is extremely common). Earlier steps prevent you from rendering objects.

For a very simple example; you can give each 3D object a "bounding sphere" (e.g. distance from origin to furthermost vertex) and use that to test if the object is outside the camera's left/right/top/bottom/foreground/background clipping planes and cull most objects extremely early in the pipeline. It doesn't take much to realise you can do the same for entire collections of objects (and collections of collections of objects, and ...).

Of course typically there's other culling steps too, like culling/clipping triangles to the camera's clipping planes and back face culling, which happen later in the pipeline (not at the beginning of the pipeline, but still well before you get anywhere near Z-buffer).


It doesn't make BSP redundant; a lot of BSP renderers can turn off Z checks entirely due to their nature, which saves the GPU from having to do read-backs (which are slow). Yes, that's the point of octrees, portals, BSPs, etc.: to only render what is needed, but which one you use depends on the type of world/objects you're rendering. Otherwise everyone would use the same exact thing. My point was, without these structures, how does the video driver know what is visible quickly? If you're saying the game doesn't need them because the video driver handles it, that just doesn't make sense; without this information the video driver doesn't have an efficient method to quickly deduce that an object is obstructed by another object. Back-face culling and clipping planes are all normal, but they're not a fast method; you are still processing the vertex data, animations, etc. Also, most BSP schemes use the BSP map and accompanying data in order to quickly do collision detection (like checking for the interaction of the player and an object). For that to work, it needs the transformed data and the BSP. If you push everything to the video driver, you still need your map for the game, meaning it's even slower still since you need to ask the video driver for information or keep two copies.
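To illustrate why a BSP renderer can skip depth tests: visiting the tree's half-spaces in back-to-front order relative to the camera means nearer polygons simply overdraw farther ones (the painter's algorithm). A rough sketch, with made-up types that aren't from any particular engine:

Code:
/* Back-to-front BSP traversal: no z-buffer reads are needed because the
 * recursion order already sorts polygons by depth relative to the camera.  */
#include <stddef.h>

typedef struct { float x, y, z; } vec3_t;

typedef struct bsp_node {
    vec3_t plane_n;            /* splitting plane normal                    */
    float  plane_d;            /* plane offset: dot(n,p) + d = 0            */
    struct bsp_node *front;    /* subtree on the normal's side              */
    struct bsp_node *back;     /* subtree on the opposite side              */
    /* ... polygons lying on this node's plane would be stored here ...     */
} bsp_node_t;

static float side_of(const bsp_node_t *n, vec3_t p)
{
    return n->plane_n.x*p.x + n->plane_n.y*p.y + n->plane_n.z*p.z + n->plane_d;
}

void draw_polygons_at(const bsp_node_t *node);   /* supplied by the renderer */

void bsp_draw_back_to_front(const bsp_node_t *node, vec3_t camera)
{
    if (node == NULL)
        return;
    if (side_of(node, camera) >= 0.0f) {              /* camera in front     */
        bsp_draw_back_to_front(node->back, camera);   /* far side first      */
        draw_polygons_at(node);
        bsp_draw_back_to_front(node->front, camera);  /* near side last      */
    } else {                                          /* camera behind plane */
        bsp_draw_back_to_front(node->front, camera);
        draw_polygons_at(node);
        bsp_draw_back_to_front(node->back, camera);
    }
}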

Quote:
Yes; I think you've missed the majority of the graphics pipeline (everything that happens before the final "z-buffer test and mapping texels to fragments" stage at the end).
No, I am quite familiar with the graphics pipeline; I have been doing 3D programming for about 15 years on and off. I think you are missing how much happens before it gets there and 'how' it happens. There are a lot more tests that a game engine needs to do in order to not overwhelm the graphics card, not just relying on frustum culling and back-face removal. Even those two things happen AFTER the geometry has been processed but before any rendering takes place. Also, I don't think you realize how tightly integrated the graphics pipeline and the physics pipeline are, and how much of a difference there is in rendering methods for different types of scenes. One size does not fit all, otherwise we wouldn't need different methodologies for rendering; so trying to make a one-size-fits-all in the video driver (in my opinion) seems pretty impossible, at least until graphics power is so great that it doesn't matter anymore.

Quote:
For me; the OS will probably have to wait for network and authentication before it can become part of its cluster, before it can start loading its video driver from the distributed file system. There's also plenty of scope for "Oops, something went wrong during boot" where the OS never finishes booting and the user needs to know what failed (which is the primary reason to display the boot log during boot).

I just meant the need for graphics mode for my boot log, not the need for the boot log. Of course I want to be able to see something if it fails, but it doesn't necessarily have to be graphical (especially if it happens before or during the initial graphics routines).

