Concise Way to Describe Colour Spaces
Re: Concise Way to Describe Colour Spaces
I'm well aware that games reuse rendering engines. What I'm wondering is how you plan to get a 3D rendering engine, suitable for games, 100% correct from the start, while also using the same video system for everything else. Because while most games do reuse a lot of code, they also heavily customize their engines, and those engines evolve rapidly with graphics hardware and advances in rendering techniques. Games also have much stricter performance requirements and need to make very specific trade-offs that make no sense for, say, word processors or web browsers.
For example, how does your transparent-network-of-hardware come into this? Games can't scale that way. If you want a game engine to use the network at all, it needs to be very, very tightly controlled, not left up to the kernel to decide where to run things that need to be done consistently within 16ms. What about your fully-abstracted color spaces? Games need much more flexibility when it comes to rendering- their goal is rarely realism, but a particular art style that takes advantage of the medium rather than ignoring it.
Re: Concise Way to Describe Colour Spaces
Hi,
The graphics API provided by most (all?) OSs is far too low level. The result is that software using the API has to deal with the hardware details for a very wide range of video cards (and most fail to do that by a massive margin), and the ability for the video driver to ensure the work is done in a "most optimal for the specific video card" way is destroyed.Rusky wrote:I'm well aware that games reuse rendering engines. What I'm wondering is how you plan to get a 3D rendering engine, suitable for games, 100% correct from the start, while also using the same video system for everything else. Because while most games do reuse a lot of code, they also heavily customize their engines, and those engines evolve rapidly with graphics hardware and advances in rendering techniques. Games also have much stricter performance requirements and need to make very specific trade-offs that make no sense for, say, word processors or web browsers.
My graphics API will be much higher level. Essentially; you give the driver details for a scene (where the camera is, where lights are, where "objects" are, the meshes/textures for each object, etc), and the video driver does whatever it likes to convert the details for a scene into pixels.
I don't know why you think something like graphics (which is about as "embarrassingly parallel" as you can get) is hard to do in parallel.Rusky wrote:For example, how does your transparent-network-of-hardware come into this? Games can't scale that way. If you want a game engine to use the network at all, it needs to be very, very tightly controlled, not left up to the kernel to decide where to run things that need to be done consistently within 16ms.
Each frame does not need to be done within 16 ms. Each frame has to be done before its deadline. If it takes 3 ms to render a frame then the video driver needs "details for a scene that's 3 ms or more in the future" so that it can be rendered before its deadline and displayed at exactly the right time; and if it takes 128 ms to render a frame then the video driver needs "details for a scene that's 128 ms or more in the future" so that it can be rendered before its deadline and displayed at exactly the right time. You can have a pipeline of stages where multiple frames are in progress at the same time, and even though it might take 128 ms for a frame to go from "description of scene" to "pixel data ready for monitor" you can start a new scene every 16 ms and finish a frame every 16 ms.
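As a rough sketch of that scheduling (the numbers here are only illustrative, not a real driver):

Code: Select all
/* Illustration only: with a 16 ms frame period and a 128 ms render latency,
   each frame just has to be *started* 128 ms before its display deadline.
   Many frames are in flight at once, but one still completes every 16 ms. */
#include <stdio.h>

#define FRAME_PERIOD_MS   16
#define RENDER_LATENCY_MS 128

int main(void)
{
    for (int frame = 0; frame < 4; frame++) {
        int deadline_ms = 1000 + frame * FRAME_PERIOD_MS;  /* when pixels must be ready */
        int start_ms    = deadline_ms - RENDER_LATENCY_MS; /* when rendering must begin */
        printf("frame %d: needs the scene for T=%d, starts rendering at T=%d\n",
               frame, deadline_ms, start_ms);
    }
    /* At any instant roughly RENDER_LATENCY_MS / FRAME_PERIOD_MS = 8 frames
       are in progress, which is the pipeline described above. */
    return 0;
}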
Game developers will get a renderer designed for realism; and if they don't like it they can go elsewhere.Rusky wrote:What about your fully-abstracted color spaces? Games need much more flexibility when it comes to rendering- their goal is rarely realism, but a particular art style that takes advantage of the medium rather than ignoring it.
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Re: Concise Way to Describe Colour Spaces
First, it's an open question how seriously the colors are skewed by a fluorescent lamp. It comes down to human perception, and I just don't have enough knowledge here. But from my experience this issue is not very important. For example, lamps with the spectrum shifted toward the longest wavelengths (the red area) produce a picture that is very close to the one produced by daylight. Maybe there are some very minor differences, but to my perception they are negligible (though I should say I haven't measured the difference in any systematic way). Octocontrabass wrote:Fluorescent lighting visibly skews colors because of the "holes" in the spectrum. If you ignore these holes, your model won't be able to simulate the difference between fluorescent light and daylight.
And about simulating such a difference: if I see practically the same colors under the fluorescent lamp's narrow bands, then it follows that, if we still need to correct some colors, we can add a few more bands and get much closer to the real picture. As was suggested before, we can use up to 8 bands. But again, I haven't studied the perception side of this.
Because the suggested way of describing wavelengths uses fixed bands, there shouldn't be much trouble if we decide to use different widths for the bands. Octocontrabass wrote:LED monitors typically have a narrow band of wavelengths for blue, and a wide band of wavelengths that are shared for green and red.
I want to get as close to reality as possible without a serious performance decrease. With bands we can simply multiply the surface reflection factor by the band amplitude and get the resulting light amplitude, whereas with XYZ such a conversion will be much more obscure. Octocontrabass wrote:The point of simulating a whole spectrum is to take into account color addition and subtraction effects that can't be simulated any other way. If you just want to represent all visible colors, the XYZ colorspace is already more than capable of handling that.
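To show what I mean, here is a minimal sketch of the band case (the band count and the structures are only an example, not a proposed format):

Code: Select all
/* Example only: fixed wavelength bands; reflection is a per-band multiply. */
#define NUM_BANDS 8

typedef struct { float amplitude[NUM_BANDS]; }   light_spectrum_t; /* emitted power per band   */
typedef struct { float reflectance[NUM_BANDS]; } surface_bands_t;  /* reflected fraction, 0..1 */

static light_spectrum_t reflect_bands(light_spectrum_t light, surface_bands_t surface)
{
    light_spectrum_t result;
    for (int i = 0; i < NUM_BANDS; i++)
        result.amplitude[i] = light.amplitude[i] * surface.reflectance[i];
    return result;
}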
My previous account (embryo) was accidentally deleted, so I have no choice but to use something new. But maybe it was a good lesson about software reliability.

Re: Concise Way to Describe Colour Spaces
It's a bad habit to tackle all possible problems in one place. If you create a general solution for all problems, it's almost guaranteed that there will be many problems you simply didn't pay enough attention to. In reality, most serious architects concentrate on some deeply studied part of a problem and provide a way to extend the base system if the missing parts turn out to be needed. In the image-processing example you try to guess what all users would need and, having no extensive image-processing experience, suggest generic solutions like zooming to suit all user needs. So I should warn you before it's too late: do not try to invent one thing for all cases. Brendan wrote:Most (all?) existing image editors have "zoom" for people that want to work on very fine details, and I'll probably have "generic zoom" that works with all applications.
Here you miss that reality itself changes. If I need more light when I'm repairing watches, I just use another lamp, and so the reality has changed. Now you have to deal with two cases of "as close to 100% correct as practically possible": the first without the lamp and the second with it, and both are 100% close to reality. So your reality of zoomable images fits only one of the two. Brendan wrote:Think of it like this. A "100% correct video system" models reality perfectly, but isn't achievable in practice due to both the overhead of processing that would be involved and the limitations of hardware. I'm trying to get "as close to 100% correct as practically possible"
Your system should be extensible and configurable. So, if a user needs more light, he just configures your system accordingly. If a developer needs more light in his application, then he just uses your extensibility API and produces software that much more accurately suits the exact case for which it is developed. Brendan wrote:Now; if I believed that some end users will need to constantly change monitor settings for my "as close to 100% correct as practically possible" video system; what exactly are you suggesting my video system should do about it?
No. The psychology of a discussion makes you concentrate on defending your approach and forget about the advantages that have already been accepted. So it looks to you as if everybody wants to point out your disadvantages, while in reality there is a consensus about the advantages and they are simply not discussed any more. Brendan wrote:I think that people in general have difficulty imagining a system that differs from existing software (and has different advantages and different disadvantages to existing software); and this leads to a tendency for people to focus on "perceived disadvantages" (that may or may not exist) while failing to take into account that even if there are some disadvantages the advantages may be more important anyway.
I do see the increased need for processing in the case of a wavelength-based light representation. But almost every advantage gained by representing reality as exactly as possible is achieved with the help of additional processor power. And the formulas for reflection are so simple that I thought they were obvious and omitted them. There are fixed bands, with some bands omitted from the variable-length array (meaning their value is zero). In the simple case we multiply the emitted band by the reflection factor to get the resulting emission amplitude. There are more complex cases, with light passing through the surface or being reflected in a particular direction (like a mirror), and in all those cases the formulas will look different. Brendan wrote:The video system will need to do millions of "texel lookups" and using variable sized texels makes that far more expensive; and I suspect you failed to provide the formulas I requested (which are also performed millions of times per frame) because they're very complicated and very expensive. On top of that, I don't see what the advantage is meant to be (given that XYZ is able to represent all visible colours anyway).
My previous account (embryo) was accidentally deleted, so I have no choice but to use something new. But maybe it was a good lesson about software reliability.

Re: Concise Way to Describe Colour Spaces
Here is a simulation of the effect. Most of the time, colors being skewed by different light sources isn't much of a concern, unless you're a graphic designer. On the other hand, graphic designers are the primary demographic for high-accuracy color manipulation.embryo2 wrote:First, it is an open question how seriously the colors are skewed by the fluorescent lamp. It's about human's perception and I just have not enough knowledge here. But from my experience I see this issue as not very important. For example, the lamps with the spectrum shifted to the highest wavelengths (red area) produce a picture that is very close to the daylight produced picture. May be there are some very minor differences, but my personal perception sees them as negligible (but I should tell I haven't measured the difference in a systematic way).
For XYZ, the conversion is exactly the same: multiply the surface reflection factor by the light source component amplitude and get the resulting reflection component amplitude.embryo2 wrote:I want to get as close to the reality as possible without serious performance decrease. In case of bands we can easily multiply the surface reflection factor by the band amplitude and get the resulting light amplitude. While with XYZ such conversion will be much more obscure.
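As a sketch (the type and variable names are only for illustration), the XYZ case is the same three multiplications:

Code: Select all
/* XYZ case: the same per-component multiply, with three fixed components
   instead of N bands. */
typedef struct { float X, Y, Z; } xyz_t;

static xyz_t reflect_xyz(xyz_t light, xyz_t surface_reflectance)
{
    xyz_t result = {
        light.X * surface_reflectance.X,
        light.Y * surface_reflectance.Y,
        light.Z * surface_reflectance.Z,
    };
    return result;
}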
Re: Concise Way to Describe Colour Spaces
Hi,
All GUIs that I know of allow a window to be resized, and this is something that's relatively simple and required. For my system (where it's all 3D) shifting the camera closer to a window is something that's also relatively simple and required (for 3D). By combining both of these things (e.g. resizing a window so that it's smaller while also shifting the camera closer so the window takes up more of the screen) you end up with "zoom". It is not hard and all the functionality is required anyway.embryo2 wrote:It's a bad habit to work with all possible problems in one place. If you create a general solution for all problems, then it's almost guaranteed that there will be many problems, that you just missed to pay enough attention. In reality most serious architects just concentrate on some deeply studied part of a problem and provide a way for extending base system in case of the need for the missed parts. In the example with image processing you try to guess what all users would need and, having no extensive image processing experience, suggest some generic solutions like zooming for them to suit all user needs. So, I should warn you before it's too late, do not try to invent one thing for all cases.Brendan wrote:Most (all?) existing image editors have "zoom" for people that want to work on very fine details, and I'll probably have "generic zoom" that works with all applications.
Um, what? If there are no light sources the only thing you're ever going to see is the colour black. To be useful, a GUI must have at least one light source (whether it's ambient light or a directional light) and either increasing the ambient light or adding another directional light are both things the GUI can do (and the video driver has to support). The extra lighting won't actually help anything or make any details clearer (because my "auto-iris" will just reduce the extra light anyway) in the same way that it's completely pointless and retarded in real life (because the iris in your eye will limit the amount of light entering your eye anyway), but it's possible.embryo2 wrote:Here you miss the changes in the reality. If I need more light when I'm repairing watches, I just use another lamp. So, the reality was changed. And now you have to deal with two cases of "as close to 100% correct as practically possible", first case is without the lamp and the second is with the lamp, both are 100% close to reality. So, your reality of zoomable images fits just one of two cases of 100% reality.Brendan wrote:Think of it like this. A "100% correct video system" models reality perfectly, but isn't achievable in practice due to both the overhead of processing that would be involved and the limitations of hardware. I'm trying to get "as close to 100% correct as practically possible"
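As a toy illustration of the "auto-iris" idea (the mid-grey target and the plain averaging are placeholders, not what the real driver would do):

Code: Select all
/* Toy auto-iris: scale the frame so its average luminance hits a fixed
   target, so adding extra light sources doesn't make anything brighter. */
static void auto_iris(float *luminance, int pixel_count)
{
    const float target = 0.18f;   /* arbitrary mid-grey target */
    float sum = 0.0f;

    for (int i = 0; i < pixel_count; i++)
        sum += luminance[i];
    if (sum <= 0.0f)
        return;                   /* scene is black; nothing to adjust */

    float gain = target * (float)pixel_count / sum;
    for (int i = 0; i < pixel_count; i++)
        luminance[i] *= gain;
}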
An application should be like a physical piece of paper in the real world. It shouldn't glow in the dark.embryo2 wrote:Your system should be extensible and configurable. So, if a user needs more light, he just configures your system accordingly. If a developer needs more light in his application, then he just uses your extensibility API and produces software that much more accurately suits the exact case, for which it is developed.Brendan wrote:Now; if I believed that some end users will need to constantly change monitor settings for my "as close to 100% correct as practically possible" video system; what exactly are you suggesting my video system should do about it?
To be perfectly honest; for software rendering I'll be pushing against the limits of what hardware is capable of without the additional overhead; and the only cases that I can think of where there would be any visible difference between your "wavelength+amplitude" and my XYZ are fluorescence and dispersion, neither of which has ever been supported in any real time renderer (and neither of which are things I intend to support in my renderer). embryo2 wrote:I see the increased need for processing in case of wavelength based light representation. But almost all advantages on the way of representing reality as exact as possible are achieved with the help of additional processor power. And formulas for the reflection is so simple, that I was thinking it's obvious and omitted them. There are fixed bands with some bands omitted in variable length array (it means the value is zero). We need to multiply emitted band by the reflection factor to get resulting emission amplitude in simple case. But there are more complex cases with the light passing through the surface or being reflected in a direction (like mirror), in all those cases the formulas will look different. Brendan wrote:The video system will need to do millions of "texel lookups" and using variable sized texels makes that far more expensive; and I suspect you failed to provide the formulas I requested (which are also performed millions of times per frame) because they're very complicated and very expensive. On top of that, I don't see what the advantage is meant to be (given that XYZ is able to represent all visible colours anyway).
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Re: Concise Way to Describe Colour Spaces
Quite to the contrary. Video drivers are already too high level to achieve optimal performance. OpenGL and DirectX manage too much state for the application, in ways that never quite match what it needs and that differ slightly across drivers. This makes rendering engines unable to use the graphics hardware at capacity, because they're CPU-bound trying to run all the redundant cruft in the driver. Brendan wrote:The graphics API provided by most (all?) OSs is far too low level. The result is that software using the API has to deal with the hardware details for a very wide range of video cards (and most fail to do that by a massive margin), and the ability for the video driver to ensure the work is done in a "most optimal for the specific video card" way is destroyed.
The current situation is that true cross-hardware compatibility is impossible for hobbyists, and so difficult for high-budget games that the hardware vendors actually include game-specific code for the most popular games. Graphics driver updates include whole libraries of replacement shaders and hacks for games that have been released since the hardware was sold.
The correct solution is not to continue sweeping this all under the rug by going higher level, but to provide a minimal cross-hardware API. This is, in fact, exactly where the industry is going with DirectX 12 and Vulkan (as well as, to a lesser degree, Apple's Metal and AMD's now-replaced-with-Vulkan Mantle). Vulkan/DX12 provide nothing more than buffer transfer, command queues, and a cross-architecture shader bytecode.
This makes many more optimizations available to rendering engines, especially using multiple cores, and all the higher-level code can now be shared between applications rather than being reimplemented in each driver.
This is a good interface for one particular rendering engine, but it cannot serve all applications that people might want to write and thus belongs in a higher layer, not as the only rendering API exposed by the OS. Beyond the fact that you've already given up on essentially all games and their development tools by limiting yourself to "realism at all costs," you're also making your OS impossible to use for graphics research, animated movies, and many types of scientific and medical applications.Brendan wrote:My graphics API will be much higher level. Essentially; you give the driver details for a scene (where the camera is, where lights are, where "objects" are, the meshes/textures for each object, etc), and the video driver does whatever it likes to convert the details for a scene into pixels.
...
Game developers will get a renderer designed for realism; and if they don't like it they can go elsewhere.
Pipelining only makes sense if you're not trying to react to user input in real time, because it trades low latency for higher throughput. While a typical human reaction time is only around 100-200ms, the tolerance is much lower when the human is the one initiating the action. If a player presses a button and the game doesn't react within 50ms or so they will notice an infuriating lag. It gets even worse with things like virtual reality, where you need closer to 20ms to be even acceptable. VR already has to pull tons of tricks to get there on sane operating systems, so offloading rendering to a different machine is going to make it completely impossible.Brendan wrote:I don't know why you think something like graphics (which is about as "embarrassingly parallel" as you can get) is hard to do in parallel.
Each frame does not need to be done within 16 ms. Each frame has to be done before its deadline. If it takes 3 ms to render a frame then the video driver needs "details for a scene that's 3 ms or more in the future" so that it can be rendered before its deadline and displayed at exactly the right time; and if it takes 128 ms to render a frame then the video driver needs "details for a scene that's 128 ms or more in the future" so that it can be rendered before its deadline and displayed at exactly the right time. You can have a pipeline of stages where multiple frames are in progress at the same time, and even though it might take 128 ms for a frame to go from "description of scene" to "pixel data ready for monitor" you can start a new scene every 16 ms and finish a frame every 16 ms.
Re: Concise Way to Describe Colour Spaces
Hi,
No. The old APIs were too low level, which caused the need for games to constantly use the API for every trivial little detail, and this "constant API pounding" (in conjunction with not being thread safe) became a major bottleneck. To cope they made APIs that were already too low level even lower level (and thread safe) so that the idiotic "constant API pounding" could be done more efficiently.Rusky wrote:Quite to the contrary. Video drivers are already too high level to achieve optimal performance. OpenGL and DirectX manage too much state for the application in ways that never quite match up with what it needs, and is slightly different across different drivers. This makes rendering engines unable to use the graphics hardware at capacity, because they're CPU-bound trying to run all the redundant cruft in the driver.Brendan wrote:The graphics API provided by most (all?) OSs is far too low level. The result is that software using the API has to deal with the hardware details for a very wide range of video cards (and most fail to do that by a massive margin), and the ability for the video driver to ensure the work is done in a "most optimal for the specific video card" way is destroyed.
For my system; the video driver will manage a huge amount of state, but the application will not so there won't be a huge amount of state going between application and video driver.
For example, the application will tell the video driver it wants to use the file "chicken.3d" (where that file will contain a skeleton, mesh and textures), and the video driver will load the file. The application does not load the file and only tells the video driver where to place the chicken in the 3D world (more specifically, where the "control points" that make up the chicken's skeleton have moved since last time, if they've moved).
If a chicken has 10 control points (tip of left and right toe, left and right ankle, left and right knee, head, tail, and so on) and each control point is nine 32-bit values (displacement from origin + front direction + top direction) plus nine more values (for the location and direction/s of the chicken's origin); and the scene has 1000 chickens but only half have moved since the previous "scene description", then it's about 201 KiB of data ("(10*9*4+9*4+4) * 500" plus some loose change - camera position, etc) going from application to video driver/s. Note: It may be less - e.g. if a chicken's origin changed but its control points didn't then only the chicken's origin would be sent, and vice versa.
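To make that arithmetic concrete (the field names and the 32-bit object ID are placeholders for the "loose change"; nothing here is a final format):

Code: Select all
#include <stdint.h>

/* One control point: nine 32-bit values = 36 bytes. */
typedef struct {
    float displacement[3];   /* displacement from the object's origin */
    float front[3];          /* front direction */
    float top[3];            /* top direction */
} control_point_t;

/* One moved chicken: 4 + 9*4 + 10*36 = 400 bytes. */
typedef struct {
    uint32_t        object_id;   /* example of per-object "loose change"   */
    float           origin[9];   /* location and direction/s of the origin */
    control_point_t point[10];   /* the chicken's 10 control points        */
} chicken_update_t;

/* 500 moved chickens * 400 bytes is roughly 200 KiB per "movement report",
   before the camera position and other loose change. */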
This does not happen once per frame. The application has no idea about "frames" at all; and everything is synchronised via. time stamps. The application says "at T=1234567 the control points for this list of chickens will be...", and if the video driver wants to create a frame for "T = 1234564" then it interpolates from the data it had from "T = 1234560" to the new data for "T = 1234567" to determine where the chickens will be at "T = 1234564".
If the application also has 5 foxes, then the control points for those foxes can be sent separately. For example, the application might send a "chicken movement report" at T=1234560, then send a "fox movement report" at T=1234568, then send another "chicken movement report" at T=1234570, then send another "fox movement report" at T=1234578, and so on. If the application also has one cow, then maybe the cow is moving very slowly and there's a "cow movement report" at T=1234500 and another at T=1234800. The video driver doesn't care, and just uses whatever information it has for each object at the time a frame is drawn, and updates the information it has when one of these "movement reports" arrives.
Also note that the application is sending future predictions; and can change its predictions. For example, at T=1234400 the application might send a "cow movement report" saying where it thinks the cow will be at T=1234800, but then at T=1234600 it might realise it was wrong and send another "cow movement report" saying where it thinks the cow will be at T=1234800 again. The video driver will interpolate from the positions used for the last seen frame so that if predictions change significantly after it's too late then you still get smooth movement (and not "teleporting cows").
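A bare-bones sketch of that interpolation (purely illustrative; a real driver would track something like this for every control point):

Code: Select all
/* The driver knows a value at t_prev, has a prediction for t_next, and
   needs the value at some frame time in between. */
typedef struct {
    long  t_prev, t_next;     /* timestamps of the last used and the next predicted data */
    float prev[3], next[3];   /* a control point's position at those two times            */
} tracked_point_t;

static void position_at(const tracked_point_t *p, long t, float out[3])
{
    float alpha = 1.0f;
    if (p->t_next != p->t_prev)
        alpha = (float)(t - p->t_prev) / (float)(p->t_next - p->t_prev);
    if (alpha < 0.0f) alpha = 0.0f;
    if (alpha > 1.0f) alpha = 1.0f;
    for (int i = 0; i < 3; i++)
        out[i] = p->prev[i] + alpha * (p->next[i] - p->prev[i]);
}

/* When a revised prediction arrives, "prev" is reset to whatever was used
   for the last displayed frame before "next" is replaced, so a changed
   prediction produces smooth movement instead of a teleporting cow. */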
Maybe the chickens are moving around a lot and the application sends a 200 KiB "chicken movement report" every 50 ms on average; maybe the foxes are moving slower and the application sends a 1 KiB "fox movement report" every 100 ms on average; and maybe it sends a 300 byte "cow movement report" every 150 ms on average. Maybe there's also a farmhouse and terrain that don't change so no data is sent for that. Maybe it adds up to an average of roughly 4 MiB of data per second going from the application to the video driver/s for the entire "camera + 1000 chickens + 5 foxes + 1 cow + farmhouse + terrain".
For my OS; writing video drivers will be hard (regardless of whether the video driver uses software rendering or GPU or some mixture); but because of the higher level interface writing games will be much much easier (and a lot less expensive).Rusky wrote:The current situation is that true cross-hardware compatibility is impossible for hobbyists, and so difficult for high-budget games that the hardware vendors actually include game-specific code for the most popular games. Graphics driver updates include whole libraries of replacement shaders and hacks for games that have been released since the hardware was sold.
More importantly, because of the higher level interface cross hardware compatibility won't be a problem. New video hardware (or a new driver for an existing video adapter) may dramatically improve the appearance of a 10 year old game (and it won't be like existing systems where a game released this year won't take advantage of capabilities that become available next year); and new games will work on old hardware (and it won't be like existing systems, where most games have come with a list of "required hardware" that only includes recent video cards).
Wrong. That's completely idiotic (and is the direction the industry is going). Lower level means that applications have a much stronger dependence on the hardware's/driver's capabilities, and this is what causes all of the compatibility problems (and all of the performance problems). Just wait until DirectX14 comes out and all of the games that were designed for DirectX12 don't run anymore and all of the people with older hardware can't play any new games. Note: If your goal is to force people to buy new hardware and new games every 3 years, then "lower level" is the way to do it and isn't idiotic at all. I am not interested in screwing consumers for the sake of profit.Rusky wrote:The correct solution is not to continue sweeping this all under the rug by going higher level, but to provide a minimal cross-hardware API. This is, in fact, exactly where the industry is going with DirectX 12 and Vulkan (as well as, to a lesser degree, Apple's Metal and AMD's now-replaced-with-Vulkan Mantle). Vulkan/DX12 provide nothing more than buffer transfer, command queues, and a cross-architecture shader bytecode.
Everything that runs on my OS will have to be designed for my OS (and this is not limited to video alone - it's literally everything); and (because the only way to make significant improvements is to make significant changes) the fact that existing games and existing development tools can't be ported is something I consider a huge advantage.Rusky wrote:This is a good interface for one particular rendering engine, but it cannot serve all applications that people might want to write and thus belongs in a higher layer, not as the only rendering API exposed by the OS. Beyond the fact that you've already given up on essentially all games and their development tools by limiting yourself to "realism at all costs," you're also making your OS impossible to use for graphics research, animated movies, and many types of scientific and medical applications.Brendan wrote:My graphics API will be much higher level. Essentially; you give the driver details for a scene (where the camera is, where lights are, where "objects" are, the meshes/textures for each object, etc), and the video driver does whatever it likes to convert the details for a scene into pixels.
...
Game developers will get a renderer designed for realism; and if they don't like it they can go elsewhere.
There will be things that are possible on my video system that are not possible on existing video systems; and there will be things that aren't possible on my video system that are possible on existing video systems. This is also a huge advantage; as it means that (for things that are possible on my video system that are not possible on existing video systems) my OS has no competition. The opposite (providing "exactly the same as existing systems") is extremely stupid as it means you're competing directly with established/mature OSs that provide identical (or better) functionality.
I wrote this in response to your "things that need to be done consistently within 16 ms" statement; to show that "within 16 ms" isn't true in general.Rusky wrote:Pipelining only makes sense if you're not trying to react to user input in real time, because it trades low latency for higher throughput. While a typical human reaction time is only around 100-200ms, the tolerance is much lower when the human is the one initiating the action. If a player presses a button and the game doesn't react within 50ms or so they will notice an infuriating lag. It gets even worse with things like virtual reality, where you need closer to 20ms to be even acceptable. VR already has to pull tons of tricks to get there on sane operating systems, so offloading rendering to a different machine is going to make it completely impossible.Brendan wrote:I don't know why you think something like graphics (which is about as "embarrassingly parallel" as you can get) is hard to do in parallel.
Each frame does not need to be done within 16 ms. Each frame has to be done before its deadline. If it takes 3 ms to render a frame then the video driver needs "details for a scene that's 3 ms or more in the future" so that it can be rendered before its deadline and displayed at exactly the right time; and if it takes 128 ms to render a frame then the video driver needs "details for a scene that's 128 ms or more in the future" so that it can be rendered before its deadline and displayed at exactly the right time. You can have a pipeline of stages where multiple frames are in progress at the same time, and even though it might take 128 ms for a frame to go from "description of scene" to "pixel data ready for monitor" you can start a new scene every 16 ms and finish a frame every 16 ms.
For some things the length of time between an event happening and the result being shown on the screen is 0 ms; and no video system can achieve that, so all video systems do it "as soon as they can" (hopefully before the end of the monitor's next vertical sync). For other things the length of time is far greater. For a typical scene there's a mixture of both. For example; the player fires a rifle and you get "muzzle flash" as soon as possible, but the mountains in the distance haven't changed for 30 seconds and the enemy soldier between the player and the mountains hasn't changed for 4 seconds.
Mostly what I'm talking about is something I'd be tempted to call "auto-generated impostors". For the mountains in the distance, maybe you can have 10 computers each doing 1/10th of the mountains in stunningly high detail that update the "impostor cache" within 200 ms. For the enemy soldier between the player and the mountains, maybe you can have a pipeline where one computer does vertex transformations, another does lighting, etc; where the "impostor cache" for the soldier is updated within 50 ms. For the player's muzzle flash, maybe that's done on the same computer as the video card, which is the same computer that manages the "impostor cache" (and tells the other computers what to render) and the same computer that builds the frame by combining impostors.
Mostly; there are multiple different ways to do graphics in parallel that are not mutually exclusive. The video driver analyses the scene and its impostor cache and determines how important updating each thing is and how much time there is, and decides which pieces to do itself and which pieces it should ask other renderers (not necessarily just other computers) to do; and if there isn't enough time to do all of it then detail is reduced and/or the video driver gets less fussy about recycling "not quite current" impostors.
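As a very rough sketch of the sort of bookkeeping I mean (the fields and thresholds are placeholders; as I said, this still needs research and prototypes):

Code: Select all
#include <stdbool.h>

/* Each cached impostor remembers the conditions it was rendered under. */
typedef struct {
    long  rendered_at_ms;   /* when it was generated                     */
    float view_angle;       /* camera-to-object angle it was drawn from  */
    float view_distance;    /* camera-to-object distance it was drawn at */
} impostor_t;

/* Recycle the impostor if it hasn't drifted too far and isn't too old for
   how important the object currently is (0 = background .. 1 = focus). */
static bool impostor_usable(const impostor_t *imp, long now_ms,
                            float angle_now, float distance_now, float importance)
{
    float angle_error = angle_now > imp->view_angle
                      ? angle_now - imp->view_angle
                      : imp->view_angle - angle_now;
    float scale_error = distance_now > imp->view_distance
                      ? distance_now / imp->view_distance
                      : imp->view_distance / distance_now;
    long  age_ms      = now_ms - imp->rendered_at_ms;

    float max_angle = 0.10f - 0.08f * importance;            /* radians      */
    float max_scale = 1.25f - 0.20f * importance;
    long  max_age   = (long)(200.0f - 150.0f * importance);  /* milliseconds */

    return angle_error <= max_angle && scale_error <= max_scale && age_ms <= max_age;
}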
Please note that exactly how it's going to work is something I'm still thinking about and still need to do research for; and it is very likely that I'll end up implementing a few different "software renderer" prototypes before I'm happy with it. All I'm saying is that distributing the load across multiple computers (and multiple renderers on the same computer) is not "impossible" at all.
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Re: Concise Way to Describe Colour Spaces
The problem is not the amount of state transfer, but the format and semantics of the state transfer.Brendan wrote:For my system; the video driver will manage a huge amount of state, but the application will not so there won't be a huge amount of state going between application and video driver.
Forcing all applications to use the same high-level API forces them to make tradeoffs that will always be sub-optimal, because the API has to cater to every use case. An API that forces 1) one particular concept of a model 2) to be loaded from a file or created using the API itself 3) into one particular concept of a 3D world 4) using one particular semantics of state transfer is doomed to failure from the start.
Games, animated movies, CAD tools, and scientific/medical applications all have different needs, even within each category. Games need to use custom shaders with custom data formats for different pieces of the game, with shortcuts and tricks to get "good enough" realism. Animated movies don't care at all about latency and instead need high throughput with extremely detailed scenes. CAD tools need to display the same data in many different ways, often custom-designed for the particular tool and far too specific to include in an OS API.
You can design your chicken model API to work well for one style of one of those categories. But then you've just written e.g. a game engine that's completely unsuitable for any of the others, and also eliminated any research into new rendering techniques, because only the video driver is given the tools necessary to do that.
This is a perfect example of the tradeoffs I'm talking about. It's a fine interface for batch rendering where you don't care about latency, but it's nonsense for real-time rendering that needs to synchronize user input, physics simulation, and rendering with the monitor's refresh rate to avoid stuttering and tearing.Brendan wrote:This does not happen once per frame. The application has no idea about "frames" at all; and everything is synchronised via. time stamps.
You've talked a lot about sending only the amount of data necessary, but that's got absolutely nothing to do with your high-level API. Existing rendering engines have already done that for years with OpenGL/DirectX, and will continue to do so with Vulkan/DX12. In fact, the lower-level APIs make it easier to do so, because they allow the renderer more control over what is loaded into VRAM at any particular time.
Your higher level API will, in fact, make it more difficult to send only the necessary data. Because it dictates the types and formats available, games like (as an extreme example) Minecraft will not be able to optimize the representation of the scene or trade off precision of things like positions/normals/texture coordinates/textures/normal maps. For that to work you need control over the shaders and data buffers.
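As one concrete (and hypothetical) example of the kind of tradeoff I mean, a block-world renderer can pack a vertex into 8 bytes instead of 30 or more, but only because it controls its own vertex layout and the shader that unpacks it:

Code: Select all
#include <stdint.h>

/* Hypothetical packed vertex for a voxel game: positions are small integers
   within a chunk, normals are one of six axis-aligned faces, and texture
   coordinates are 16-bit fixed point. 8 bytes per vertex instead of ~32. */
typedef struct {
    uint8_t  x, y, z;      /* position inside a 256-unit chunk        */
    uint8_t  face;         /* 0..5: which axis-aligned normal to use  */
    uint16_t u, v;         /* texture coordinates in 1/65536 units    */
} packed_block_vertex_t;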
Two problems. First, there's nothing to stop these even-higher-level video drivers from having even more room for inconsistency between them. Second, there's no way to know how to scale old games up to new hardware without the games specifying what they want.Brendan wrote:New video hardware (or a new driver for an existing video adapter) may dramatically improve the appearance of a 10 year old game (and it won't be like existing systems where a game released this year won't take advantage of capabilities that become available next year); and new games will work on old hardware (and it won't be like existing systems, where most games have come with a list of "required hardware" that only includes recent video cards).
For example, the shift from fixed-function pipelines to programmable pipelines looks like a tragedy to you, because it requires applications be updated to take advantage of the new functionality. Your API would allow the video driver to speed things up or maybe increase the resolution of various things. But how does your driver know to change a 10-year-old game's water from a partially transparent textured plane to a reflecting, rippling, normal mapped shader effect? For that matter, how does it know which water shader to use? Even in your imaginary class of games that are okay with a generic "renderer designed for realism," there is a lot of choice in atmospheric effects, etc. that newer games will want to specify and that old games didn't get a chance to.
This is nonsense. Lower level does not mean dependence on the specific features of the hardware or driver for the sake of forced upgrades, it means the API provides the minimum necessary to be independent of the specific features of the hardware, while still allowing applications to actually take advantage of consumers' hardware.Brendan wrote:Lower level means that applications have a much stronger dependence on the hardware's/driver's capabilities, and this is what causes all of the compatibility problems (and all of the performance problems). Just wait until DirectX14 comes out and all of the games that were designed for DirectX12 don't run anymore and all of the people with older hardware can't play any new games. Note: If your goal is to force people to buy new hardware and new games every 3 years, then "lower level" is the way to do it and isn't idiotic at all. I am not interested in screwing consumers for the sake of profit.
This does not hurt compatibility, either. Games written for old versions of DirectX and OpenGL still work, because the old API is still there. This will be even better with Vulkan/DX12, because the old APIs can have a single implementation on top of the new ones, instead of a different implementation for each driver, increasing consistency and reducing the need for shader replacement hacks and the like.
Going the other direction, while this is not always done in practice, nothing prevents you from including software fallbacks for new features (you want to do this anyway) to let new games run on old hardware. Further, domain-specific libraries (game engines, animation tools, etc) are much more equipped to scale things down to old hardware, when that's worth doing.
I'm not talking about already-written games and tools, I'm talking about people wanting to continue to create similar games and tools on your new-and-improved OS.Brendan wrote:the fact that existing games and existing development tools can't be ported is something I consider a huge advantage.
If we're talking about a 3D game with a player-controlled camera, then you're still spouting nonsense. If the camera moves at all, everything on the screen needs to be re-rendered. This means that most of the time you do need to render everything every 16ms based on user input from the previous frame, and trying to pipeline extremely realistic mountain backdrops is futile.Brendan wrote:For some things the length of time between an event happening and the result being shown on the screen is 0 ms; and no video system can achieve that, so all video systems do it "as soon as they can" (hopefully before the end of the monitor's next vertical sync). For other things the length of time is far greater. For a typical scene there's a mixture of both. For example; the player fires a rifle and you get "muzzle flash" as soon as possible, but the mountains in the distance haven't changed for 30 seconds and the enemy soldier between the player and the mountains hasn't changed for 4 seconds.
Re: Concise Way to Describe Colour Spaces
Hi,
If the camera rotates the impostors remain unchanged (but you might need new impostors for anything in "previously not seen" areas). If the camera and object get closer or get further apart the impostor gets bigger/smaller and you might want to redraw the impostor but (especially if the difference is minor and/or the object is too far away for fine details to matter) you could just scale the existing impostor. If the angle that the object is drawn at changes (e.g. because camera or object moved left/right/up/down, or if the object rotated) the object needs to be redrawn from a different angle; but if the difference in angles is minor then the old impostor can be "close enough".
Often impostors aren't "good enough" and do need to be redrawn, but it's far less often than "every object redrawn every frame".
Please note that this is not something I invented. "Static impostors" have been used in games for a very long time, and (some) game developers are doing auto-generated/dynamic impostors with OpenGL/DirectX.
For movies it's just going to be an application telling the video driver "Play this file" (but I haven't thought about or designed the file format for this yet). For CAD tools and scientific/medical applications they'll find a way to do "whatever", and I'm sure it'll be just as good regardless of whether they end up giving similar or different results or whether they use similar or different techniques.Rusky wrote:The problem is not the amount of state transfer, but the format and semantics of the state transfer.Brendan wrote:For my system; the video driver will manage a huge amount of state, but the application will not so there won't be a huge amount of state going between application and video driver.
Forcing all applications to use the same high-level API forces them to make tradeoffs that will always be sub-optimal, because the API has to cater to every use case. An API that forces 1) one particular concept of a model 2) to be loaded from a file or created using the API itself 3) into one particular concept of a 3D world 4) using one particular semantics of state transfer is doomed to failure from the start.
Games, animated movies, CAD tools, and scientific/medical applications all have different needs, even within each category. Games need to use custom shaders with custom data formats for different pieces of the game, with shortcuts and tricks to get "good enough" realism. Animated movies don't care at all about latency and instead need high throughput with extremely detailed scenes. CAD tools need to display the same data in many different ways, often custom-designed for the particular tool and far too specific to include in an OS API.
Games use custom shaders because the graphics API sucks. It's not necessary. It might be "desirable" in some cases, but that doesn't mean it's worth the pain.
I've avoided the need for game engines (well, more correctly the video system avoids the need for half of the game engine; and other parts, like physics, collision detection and scripting will be avoided by "services").Rusky wrote:You can design your chicken model API to work well for one style of one of those categories. But then you've just written e.g. a game engine that's completely unsuitable for any of the others, and also eliminated any research into new rendering techniques, because only the video driver is given the tools necessary to do that.
If someone wants to do research into new rendering techniques; they can write a renderer and do all the research they want (and test it on every single game and application that's written for the OS).
There's never a need to synchronise user input and physics with the frame rate. Sadly; there are a lot of incompetent and/or lazy and/or stupid game developers that have something like a "do { get_user_input(); update_AI(); do_physics(); update_screen(); }" main loop. These people need to be prevented from going near any computer ever again (and are the reason why gamers want "500 frames per second" just to get user input polled more frequently; and the reason why most games fail to use multiple threads effectively). Rusky wrote:This is a perfect example of the tradeoffs I'm talking about. It's a fine interface for batch rendering where you don't care about latency, but it's nonsense for real-time rendering that needs to synchronize user input, physics simulation, and rendering with the monitor's refresh rate to avoid stuttering and tearing. Brendan wrote:This does not happen once per frame. The application has no idea about "frames" at all; and everything is synchronised via. time stamps.
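For contrast, a sketch of the decoupling I mean (the names and the 5 ms step are placeholders): the simulation runs at its own rate and publishes timestamped movement reports, and never waits for, or even knows about, the frame rate.

Code: Select all
#include <stdio.h>

static void poll_input(void)         { /* read devices, queue events             */ }
static void step_physics(long dt_ms) { (void)dt_ms; /* advance the simulation    */ }
static void publish_state(long t_ms) { printf("movement report for T=%ld\n", t_ms); }
static void sleep_until(long t_ms)   { (void)t_ms; /* stand-in for a timer wait  */ }

int main(void)
{
    const long step_ms = 5;   /* simulation cadence, unrelated to the refresh rate */
    long t = 0;

    for (int i = 0; i < 10; i++) {    /* bounded so the sketch terminates */
        poll_input();
        step_physics(step_ms);
        publish_state(t + step_ms);   /* timestamped, like the reports described above */
        t += step_ms;
        sleep_until(t);
    }
    return 0;
}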
By shifting the renderer into the video driver the renderer gets far more control over what is loaded into VRAM than ever before.Rusky wrote:You've talked a lot about sending only the amount of data necessary, but that's got absolutely nothing to do with your high-level API. Existing rendering engines have already done that for years with OpenGL/DirectX, and will continue to do so with Vulkan/DX12. In fact, the lower-level APIs make it easier to do so, because they allow the renderer more control over what is loaded into VRAM at any particular time.
Games like (e.g.) Minecraft only need to tell the video driver when a block is placed or removed. Not only is this far simpler for the game but allows the video driver to optimise in ways "generic code for wildly different video hardware" can never hope to do.Rusky wrote:Your higher level API will, in fact, make it more difficult to send only the necessary data. Because it dictates the types and formats available, games like (as an extreme example) Minecraft will not be able to optimize the representation of the scene or trade off precision of things like positions/normals/texture coordinates/textures/normal maps. For that to work you need control over the shaders and data buffers.
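Purely as an illustration (this message layout is invented, not a proposed protocol), the entire per-edit traffic for such a game could be as small as:

#include <stdint.h>

/* Invented example message: one block placed or removed in a voxel world. */
struct block_edit {
    uint64_t effective_time_ns; /* when the change should become visible */
    int32_t  x, y, z;           /* which cell of the voxel grid changed */
    uint16_t block_type;        /* 0 = removed (air), otherwise a material ID */
};

Everything else (meshing neighbouring blocks, occlusion, lighting, level of detail) would be the video driver's problem, which is exactly the division of labour being argued for here.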
The first problem is easily handled with a "reference renderer" (e.g. a "not necessarily fast but generic" software renderer).Rusky wrote:Two problems. First, there's nothing to stop these even-higher-level video drivers from having even more room for inconsistency between them. Second, there's no way to know how to scale old games up to new hardware without the games specifying what they want.Brendan wrote:New video hardware (or a new driver for an existing video adapter) may dramatically improve the appearance of a 10 year old game (and it won't be like existing systems where a game released this year won't take advantage of capabilities that become available next year); and new games will work on old hardware (and it won't be like existing systems, where most games have come with a list of "required hardware" that only includes recent video cards).
Your second problem doesn't make any sense - when games specify what they want (via. a description of their scene) instead of how they want things done, there's no way to scale games up to new hardware without the games specifying what they want??
The game just tells the video driver the volume of the liquid, its colour/transparency and how reflective the surface is. Things like "rippling" will be done elsewhere via. control points (part of physics, not rendering). The game doesn't care if the video driver doesn't do reflections at all, or only does specular highlights with light sources, or does a reflection of the sky alone, or does reflections of everything. Fog is mostly the same, it just doesn't have a surface.Rusky wrote:For example, the shift from fixed-function pipelines to programmable pipelines looks like a tragedy to you, because it requires applications be updated to take advantage of the new functionality. Your API would allow the video driver to speed things up or maybe increase the resolution of various things. But how does your driver know to change a 10-year-old game's water from a partially transparent textured plane to a reflecting, rippling, normal mapped shader effect? For that matter, how does it know which water shader to use? Even in your imaginary class of games that are okay with a generic "renderer designed for realism," there is a lot of choice in atmospheric effects, etc. that newer games will want to specify and that old games didn't get a chance to.
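To make that concrete, the description might be no more than a handful of fields. The structure below is a made-up example (mesh_id and control_set_id are invented handle types), not a proposed format:

#include <stdint.h>

typedef uint32_t mesh_id;         /* invented handle types */
typedef uint32_t control_set_id;

/* Invented example of a declarative liquid description. */
struct liquid_volume {
    float          colour_xyz[3]; /* base colour in the device-independent space */
    float          transparency;  /* 0.0 = opaque, 1.0 = fully clear */
    float          reflectivity;  /* 0.0 = matte, 1.0 = mirror-like surface */
    mesh_id        volume;        /* the region the liquid occupies */
    control_set_id surface_points;/* control points the physics service may animate */
};

An old driver might draw this as a flat tinted plane; a newer one might add reflections and refraction; the game itself doesn't change.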
Sure; and the "minimum requirements" listed by every PC game is just a suggestion, and all modern games will run on ancient "fixed function pipeline" hardware because the API provides the minimum necessary to be independent of the specific features of the hardware...Rusky wrote:This is nonsense. Lower level does not mean dependence on the specific features of the hardware or driver for the sake of forced upgrades, it means the API provides the minimum necessary to be independent of the specific features of the hardware, while still allowing applications to actually take advantage of consumers' hardware.Brendan wrote:Lower level means that applications have a much stronger dependence on the hardware's/driver's capabilities, and this is what causes all of the compatibility problems (and all of the performance problems). Just wait until DirectX14 comes out and all of the games that were designed for DirectX12 don't run anymore and all of the people with older hardware can't play any new games. Note: If your goal is to force people to buy new hardware and new games every 3 years, then "lower level" is the way to do it and isn't idiotic at all. I am not interested in screwing consumers for the sake of profit.

Working the same as it does on old hardware is not the same as taking advantage of new hardware's capabilities.Rusky wrote:This does not hurt compatibility, either. Games written for old versions of DirectX and OpenGL still work, because the old API is still there. This will be even better with Vulkan/DX12, because the old APIs can have a single implementation on top of the new ones, instead of a different implementation for each driver, increasing consistency and reducing the need for shader replacement hacks and the like.
Yes, this is possible in theory. In practice it never happens because game developers are struggling just to deal with the idiotic/unnecessary complexity of the modern API without trying to cope with "N previous iterations" on top of that.Rusky wrote:Going the other direction, while this is not always done in practice, nothing prevents you from including software fallbacks for new features (you want to do this anyway) to let new games run on old hardware. Further, domain-specific libraries (game engines, animation tools, etc) are much more equipped to scale things down to old hardware, when that's worth doing.
Did you have any relevant examples that are significant enough for me to care?Rusky wrote:I'm not talking about already-written games and tools, I'm talking about people wanting to continue to create similar games and tools on your new-and-improved OS.Brendan wrote:the fact that existing games and existing development tools can't be ported is something I consider a huge advantage.
Wrong.Rusky wrote:If we're talking about a 3D game with a player-controlled camera, then you're still spouting nonsense. If the camera moves at all, everything on the screen needs to be re-rendered. This means that most of the time you do need to render everything every 16ms based on user input from the previous frame, and trying to pipeline extremely realistic mountain backdrops is futile.Brendan wrote:For some things the length of time between an event happening and the result being shown on the screen is 0 ms; and no video system can achieve that, so all video systems do it "as soon as they can" (hopefully before the end of the monitor's next vertical sync). For other things the length of time is far greater. For a typical scene there's a mixture of both. For example; the player fires a rifle and you get "muzzle flash" as soon as possible, but the mountains in the distance haven't changed for 30 seconds and the enemy soldier between the player and the mountains hasn't changed for 4 seconds.
If the camera rotates, the impostors remain unchanged (but you might need new impostors for anything in "previously not seen" areas). If the camera and object get closer together or further apart, the impostor gets bigger/smaller; you might want to redraw it, but (especially if the difference is minor and/or the object is too far away for fine details to matter) you could just scale the existing impostor. If the angle the object is viewed from changes (e.g. because the camera or object moved left/right/up/down, or the object rotated) the object needs to be redrawn from a different angle; but if the difference in angles is minor then the old impostor can be "close enough".
Often impostors aren't "good enough" and do need to be redrawn, but it's far less often than "every object redrawn every frame".
Please note that this is not something I invented. "Static impostors" have been used in games for a very long time, and (some) game developers are doing auto-generated/dynamic impostors with OpenGL/DirectX.
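For what it's worth, the reuse decision can be as small as a couple of comparisons. A minimal sketch with arbitrary example thresholds (vec3, angle_between() and direction_to() are assumed helpers, not from any real engine):

#include <stdbool.h>

typedef struct { float x, y, z; } vec3;
extern float angle_between(vec3 a, vec3 b);   /* radians */
extern vec3  direction_to(vec3 from, vec3 to);

struct impostor {
    vec3  object_pos;     /* where the object was when the impostor was rendered */
    vec3  view_dir;       /* camera-to-object direction it was rendered from */
    float baked_distance; /* camera-to-object distance it was rendered at */
};

/* Reuse the cached impostor if the viewing angle and apparent size have
   barely changed; the 5 degree / 20% thresholds are arbitrary examples. */
bool impostor_still_ok(const struct impostor *imp, vec3 camera_pos, float current_distance)
{
    float angle_error = angle_between(imp->view_dir, direction_to(camera_pos, imp->object_pos));
    float size_ratio  = imp->baked_distance / current_distance;
    return angle_error < (5.0f * 3.14159265f / 180.0f)
        && size_ratio > 0.8f && size_ratio < 1.25f;
}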
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Re: Concise Way to Describe Colour Spaces
A very close simulation is possible with just up to 8 wavelength bands. I also suppose that up to 8 bands is enough for the reflection factor to work properly. In the referenced article it was possible to show the color difference with only one band for the illumination source, while the number of reflection factors needed for a surface can be estimated at 4 to 6, so a maximum of 8 factors seems like a pretty good approximation.Octocontrabass wrote:Here is a simulation of the effect. Most of the time, colors being skewed by different light sources isn't much of a concern, unless you're a graphic designer. On the other hand, graphic designers are the primary demographic for high-accuracy color manipulation.
But with XYZ there's no way of representing that color skew, while with the wavelength approach it is absolutely possible. So the accuracy of the representation is higher than in the XYZ case.Octocontrabass wrote:For XYZ, the conversion is exactly the same: multiply the surface reflection factor by the light source component amplitude and get the resulting reflection component amplitude.
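For what it's worth, the per-band arithmetic being discussed is tiny. A minimal sketch (BANDS and the xyz_of_band[] weighting table are illustrative assumptions, not from any real renderer) of multiplying an illuminant by a surface's reflection factors and folding the result down to XYZ for display:

#define BANDS 8

/* Assumed weighting table: how much each band contributes to X, Y and Z.
   The values would come from the CIE observer functions; omitted here. */
extern const float xyz_of_band[BANDS][3];

void reflect_and_convert(const float light[BANDS],        /* illuminant amplitude per band */
                         const float reflectance[BANDS],  /* surface reflection factor per band */
                         float out_xyz[3])
{
    out_xyz[0] = out_xyz[1] = out_xyz[2] = 0.0f;
    for (int b = 0; b < BANDS; b++) {
        float reflected = light[b] * reflectance[b];  /* the same multiply as the XYZ case */
        out_xyz[0] += reflected * xyz_of_band[b][0];
        out_xyz[1] += reflected * xyz_of_band[b][1];
        out_xyz[2] += reflected * xyz_of_band[b][2];
    }
}

The argument for keeping the bands is that two surfaces with identical XYZ under one illuminant can produce different XYZ under another; once reflectances are premultiplied into a single XYZ triple per surface, that distinction is lost.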
My previous account (embryo) was accidentally deleted, so I had no choice but to use something new. But maybe it was a good lesson about software reliability.

Re: Concise Way to Describe Colour Spaces
I'm not against the 3D GUI, but I am against an approach that's too general. In your case the use of the auto-iris is too general, so I suggest some additional ways of controlling the brightness. I have no objection to the mechanics of moving the camera and changing the window size.Brendan wrote:All GUIs that I know of allow a window to be resized, and this is something that's relatively simple and required. For my system (where it's all 3D) shifting the camera closer to a window is something that's also relatively simple and required (for 3D). By combining both of these things (e.g. resizing a window so that it's smaller while also shifting the camera closer so the window takes up more of the screen) you end up with "zoom". It is not hard and all the functionality is required anyway.
So your claim about "as close as possible" is not true for real-world situations where brightness changes significantly. Human perception is based not only on brightness differences across the scene, but also on brightness differences over time. And maybe there are even more effects I'm still not aware of.Brendan wrote:The extra lighting won't actually help anything or make any details clearer (because my "auto-iris" will just reduce the extra light anyway)
As was pointed out for games, application requirements can change significantly. So your "one way for all" makes the idea of an attractive OS much less attractive.Brendan wrote:An application should be like a physical piece of paper in the real world. It shouldn't glow in the dark.
I am not insisting on the wavelength approach in any way, but I see some arguments that support it. For now I see no serious objection except the need for processor power, so I accept that objection and am ready to delay the introduction of wavelengths; but as a future way of modelling reality the wavelength approach seems very useful to me.Brendan wrote:To be perfectly honest; for software rendering I'll be pushing against the limits of what hardware is capable of without the additional overhead; and the only cases that I can think of where there would be any visible difference between your "wavelength+amplitude" and my XYZ are fluorescence and dispersion, neither of which has ever been supported in any real time renderer (and neither of which are things I intend to support in my renderer).
Re: Concise Way to Describe Colour Spaces
For such a goal there should be the equivalent of a very smart compiler that compiles bytecode with a lot of optimizations. There should also be an extensible and configurable high-level scene description, which can serve as the equivalent of the bytecode. Together, the "smart compiler" and a sufficiently rich "bytecode" can produce output that is acceptable to the majority of users. But the word "smart" here means something really monumental, comparable to AI. It should know a lot of rendering techniques, and it should be able to combine different algorithms consistently towards a final goal that should also be determined automatically. The amount of coding here is obvious. However, if your plans span 10-20 years, then of course you can try, and maybe at least some part of the task can be implemented in the final solution.Brendan wrote:For my OS; writing video drivers will be hard (regardless of whether the video driver uses software rendering or GPU or some mixture); but because of the higher level interface writing games will be much much easier (and a lot less expensive).
Re: Concise Way to Describe Colour Spaces
Hi,
Software (e.g. GUI, applications, games) controls how much light objects reflect, the position of the camera, the amount of ambient light, and the number and sizes of directional lights; the video driver controls how "brighter than the monitor can handle" gets mapped to the intensities that the monitor can handle and (if possible) the monitor's back light. There is nothing else that software can control, and nothing I could possibly do to make you happier.embryo2 wrote:I'm not against the 3D GUI, but I am against an approach that's too general. In your case the use of the auto-iris is too general, so I suggest some additional ways of controlling the brightness. I have no objection to the mechanics of moving the camera and changing the window size.Brendan wrote:All GUIs that I know of allow a window to be resized, and this is something that's relatively simple and required. For my system (where it's all 3D) shifting the camera closer to a window is something that's also relatively simple and required (for 3D). By combining both of these things (e.g. resizing a window so that it's smaller while also shifting the camera closer so the window takes up more of the screen) you end up with "zoom". It is not hard and all the functionality is required anyway.
If you've got a display that's actually capable of showing "brighter than the sun" images, then my video system won't need to use its auto-iris for that display. If you don't have a display that's capable of showing "brighter than the sun" images, then don't blame me if "auto-iris" only does the best job possible.embryo2 wrote:So your claim about "as close as possible" is not true for real-world situations where brightness changes significantly. Human perception is based not only on brightness differences across the scene, but also on brightness differences over time. And maybe there are even more effects I'm still not aware of.Brendan wrote:The extra lighting won't actually help anything or make any details clearer (because my "auto-iris" will just reduce the extra light anyway)
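As a rough sketch of what that mapping could look like (a uniform exposure scale; the function and its arguments are invented for illustration, and a real driver could use any tone-mapping curve it likes):

#include <stddef.h>

/* Minimal "auto-iris" sketch: scale scene luminance so the brightest pixel
   fits within the monitor's range. */
void auto_iris(float *pixels_xyz, size_t pixel_count, float monitor_max)
{
    float scene_max = 0.0f;
    for (size_t i = 0; i < pixel_count; i++)
        if (pixels_xyz[i * 3 + 1] > scene_max)    /* Y component = luminance */
            scene_max = pixels_xyz[i * 3 + 1];

    float gain = (scene_max > monitor_max) ? monitor_max / scene_max : 1.0f;
    for (size_t i = 0; i < pixel_count * 3; i++)
        pixels_xyz[i] *= gain;                    /* dim everything uniformly */
}

A driver that adapts the gain gradually rather than instantly could also approximate the "brightness difference in time" effect mentioned above, without the application being involved.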
Can you provide an example of "application requirements" that couldn't be met on my video system but actually matters enough to care about?embryo2 wrote:As was pointed out for games, application requirements can change significantly. So your "one way for all" makes the idea of an attractive OS much less attractive.Brendan wrote:An application should be like a physical piece of paper in the real world. It shouldn't glow in the dark.
I'm mostly going to provide:embryo2 wrote:For such a goal there should be the equivalent of a very smart compiler that compiles bytecode with a lot of optimizations. There should also be an extensible and configurable high-level scene description, which can serve as the equivalent of the bytecode. Together, the "smart compiler" and a sufficiently rich "bytecode" can produce output that is acceptable to the majority of users. But the word "smart" here means something really monumental, comparable to AI. It should know a lot of rendering techniques, and it should be able to combine different algorithms consistently towards a final goal that should also be determined automatically. The amount of coding here is obvious. However, if your plans span 10-20 years, then of course you can try, and maybe at least some part of the task can be implemented in the final solution.Brendan wrote:For my OS; writing video drivers will be hard (regardless of whether the video driver uses software rendering or GPU or some mixture); but because of the higher level interface writing games will be much much easier (and a lot less expensive).
- A device independent software renderer implemented as a service that uses CPUs (for both real-time rendering and offline rendering). This will also be used as a reference implementation.
- A "video driver template" that contains common code that any video driver would need; that just uses the software renderer service for rendering. The idea is that someone writing a native video driver for a specific video card would start with this, and add/replace pieces with code for the specific video card (rather than starting with nothing). This will also be the "raw framebuffer" generic video driver that's used by the OS.
Cheers,
Brendan
Re: Concise Way to Describe Colour Spaces
I'm not talking about pre-recorded videos, I'm talking about things like Pixar's render farms that are actually generating those videos.Brendan wrote:For movies it's just going to be an application telling the video driver "Play this file" (but I haven't thought about or designed the file format for this yet).
It's absolutely necessary. Custom shaders are just as important as custom CPU-side programs. You would call me insane if I proposed only giving access to the CPU through pre-existing code to process pre-existing file formats.Brendan wrote:Games use custom shaders because the graphics API sucks. It's not necessary. It might be "desirable" in some cases, but that doesn't mean it's worth the pain.
There is absolutely a need to synchronize user input and physics with the frame rate. There needs to be a consistent lag time between input and its processing, physics needs to happen at a consistent rate to get consistent results, and rendering needs to happen at a consistent rate to avoid jitter/tearing. Of course these rates don't have to be the same, but they do have to be synchronized. Many games already separate these rates and process input as fast as is reasonable, video at the monitor refresh rate, and physics at a lower, fixed rate for simulation consistency.Brendan wrote:There's never a need to synchronise user input and physics with the frame rate.
These gamers are wrong. Correctly-written games take into account all the input received since the last frame whether they're running at 15, 20, 30, or 60 frames per second. Doing it any faster than the video framerate is literally impossible to perceive and anyone who claims to do so is experiencing the placebo effect.Brendan wrote:(and are the reason why gamers want "500 frames per second" just to get user input polled more frequently; and the reason why most games fail to use multiple threads effectively).
The video driver doesn't have the correct information to utilize that control. It doesn't know the optimal format for the data nor which data is likely to be used next; a renderer that's part of a game does.Brendan wrote:By shifting the renderer into the video driver the renderer gets far more control over what is loaded into VRAM than ever before.
So you're going to include support specifically for voxel-based games in your API? Your standardized scene description format cannot possibly anticipate all the types of information games will want to specify in the future, nor can it possibly provide an optimal format for all the different things games care about specifying today.Brendan wrote:Games like (e.g.) Minecraft only need to tell the video driver when a block is placed or removed. Not only is this far simpler for the game but allows the video driver to optimise in ways "generic code for wildly different video hardware" can never hope to do.
...
when games specify what they want (via. a description of their scene) instead of how they want things done, there's no way to scale games up to new hardware without the games specifying what they want??
...
The game just tells the video driver the volume of the liquid, its colour/transparency and how reflective the surface is. Things like "rippling" will be done elsewhere via. control points (part of physics, not rendering).
Take my water example and explain how that will work- will you design a "Concise Way to Describe Bodies of Water" in all possible situations so that games can scale from "transparent textured plane" to "ripples and reflections using a shader" that takes into account all the possible ways games might want to do that, especially when there's not enough bandwidth to do ripples as a physics process so it must be done in the shader? How will this solution work when you have to specify a "Concise Way to Describe Piles of Dirt" and a "Concise Way to Describe Clouds" and a "Concise Way to Describe Plants" and a "Concise Way to Describe Alien Creatures" and a "Concise Way to Describe Stars" and a "Concise Way to Describe Cars" and a "Concise Way to Describe Spaceships" so that detail can automatically be added when new hardware is available?
Even if you could design good formats for every possible thing that might take advantage of new hardware, your format will never be optimal for what each particular game is doing, because it has to take into account every possibility. Letting games design their own formats is much better- and you don't even have the "waaaah everybody's using different image file formats" problem in that case.
This is the reason custom shaders are no more undesirable than custom CPU programs. What happens when somebody comes along with a new rendering technique that enables a new property of rendered objects that you hadn't incorporated before?
I never said the old OpenGL/DirectX APIs did this. They don't, because they added higher level features without including fallback support for them on older hardware (DirectX did a little bit better at this, though). I said Vulkan/DX12 can do this now, because the API is so much simpler that writing fallback support is also simpler. I also said domain-specific rendering engines have an easier time of scaling down the requirements for older hardware.Brendan wrote:Sure; and the "minimum requirements" listed by every PC game is just a suggestion, and all modern games will run on ancient "fixed function pipeline" hardware because the API provides the minimum necessary to be independent of the specific features of the hardware...
...
Working the same as it does on old hardware is not the same as taking advantage of new hardware's capabilities.
On the scaling-up side, we differ in our goals. I care about artistic integrity, and think games and movies should look the way their designers intended them, and that they should have the freedom to control what that look is, whether it's purposefully-blocky, shaded and outlined like a cartoon, or whatever more-subtle variation on reality they care to come up with. Thus, the only thing old games should do on new hardware is run faster and with higher resolution.
You don't care about any of that and think the video driver should redesign every game's look on every hardware update for the sake of "realism." I think this should only apply to applications that want it, like CAD tools or scientific imaging tools, that can use a specific library for that purpose, which will only need to be updated once when a new generation of hardware comes out, not again for every driver.
We agree on this point. We just disagree on which direction to go to solve it- you want a generic high level renderer, I want Vulkan/DX12 with many third-party renderers on top.Brendan wrote:the idiotic/unnecessary complexity of the modern API without trying to cope with "N previous iterations" on top of that.
I've been giving them this whole time. Any games that don't want to be "as realistic as possible" and actually have their own art style are out, as you already said. That includes, as part of a very large list, nearly every game made by or for Nintendo, Square Enix, Capcom, etc...Brendan wrote:Did you have any relevant examples that are significant enough for me to care?Rusky wrote:I'm not talking about already-written games and tools, I'm talking about people wanting to continue to create similar games and tools on your new-and-improved OS.
Imposters are a trick to improve performance when necessary, not the primary way things are drawn in games today. Parallelizing a game across a network by converting everything to imposters is not an improvement, it's throwing everything in the toilet for the sake of forcing your idea to work.Brendan wrote:Often impostors aren't "good enough" and do need to be redrawn, but it's far less often than "every object redrawn every frame".
Please note that this is not something I invented. "Static impostors" have been used in games for a very long time, and (some) game developers are doing auto-generated/dynamic impostors with OpenGL/DirectX.
The point is, in the worst case (which happens a lot) the camera will be moving and rotating and changing the perspective on everything visible, so you need to be prepared to handle this case even if it doesn't happen all the time. And if your render state is strewn across the network, you have absolutely no hope of handling this and games will just have imposter artifacts and lowered quality all the time.