OSDev.org

The Place to Start for Operating System Developers
 Post subject: Re: Graphics API and GUI
PostPosted: Wed Oct 14, 2015 4:30 pm 

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

kiznit wrote:
In fact, the only reason I engaged in this conversation is because I respect you based on your posts here on these forums. You have shown that you are smart, have a lot of experience and think things through. When someone like you starts talking about how he wants to change graphics rendering, I am interested in hearing what this is about and understanding it. So far, I have a hard time understanding what you are proposing. Hence why I am posting at all.


In that case; maybe I've been more defensive than necessary.

I should probably point out that the description I've given in this topic is simplified; and there are other aspects to the graphics system that I've tried to avoid mentioning in this topic (that have been mentioned in other topic/s in the past) in the hope of reducing confusion/misunderstandings. I'll mention the missing pieces if they become relevant.


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: Graphics API and GUI
PostPosted: Wed Oct 14, 2015 4:34 pm 

Joined: Mon Feb 02, 2015 7:11 pm
Posts: 898
Brendan wrote:
kiznit wrote:
5) Not future-proof: new hardware comes out and doesn't work with your RM abstraction (an example of this was the PowerVR with its tile-based rendering).
6) Not future-proof: new rendering engine algorithm comes along and doesn't map to your API (I guess that's leaky abstraction again).


Wrong. Higher level interfaces are less hardware dependent, and less likely to cause API issues.


I disagree with you. The higher level your interfaces are, the more leaky your abstraction is. History isn't on your side: entire APIs / engines have become obsolete because of hardware changes and new algorithms enabled by faster CPUs and memory. The only graphics API that seems flexible enough to survive and adapt is OpenGL. It's also hard to be more IM than OpenGL. Direct3D doesn't count as the API completely changes from one version to the next. So I guess this could be seen as a counter example, but I personally think that Direct3D was historically too much RM.

_________________
https://github.com/kiznit/rainbow-os


 Post subject: Re: Graphics API and GUI
PostPosted: Wed Oct 14, 2015 4:44 pm 

Joined: Wed Jan 06, 2010 7:07 pm
Posts: 792
Brendan wrote:
kiznit wrote:
2) CPU load: talking to the GPU hardware is a fixed cost whether you use RM or IM. Anything RM does on top requires extra CPU cycles.
Forget about GPU hardware. We'll probably all be using something closer to Xeon Phi well before my OS project has any support for GPU.
I doubt the Phi will replace dedicated graphics hardware, but even if it did this still applies, especially in a distributed system.

Brendan wrote:
kiznit wrote:
3) RM APIs do just that: they retain data. This means they use more memory.
Using more memory is far more practical than dragging all that data across the network every frame. Note that half of the data (meshes, textures) will be loaded from disk by the video driver and won't go anywhere near the application.
IM doesn't mean dragging all the data to the GPU every frame. Most data is loaded into VRAM once and then reused over an entire scene. Mostly what gets sent per-frame is transformations, which change every frame anyway.

_________________
[www.abubalay.com]


 Post subject: Re: Graphics API and GUI
PostPosted: Wed Oct 14, 2015 4:49 pm 

Joined: Mon Feb 02, 2015 7:11 pm
Posts: 898
As for the other points: you asked us what the problems with an RM API are. I've given a few. And they are still valid. They are cons of a retained mode API. They all apply to your system. You *will* have to send rendering data over the network. You *will* have to update that data. Even if you are super efficient at caching everything and sending minimal deltas, it is still work that you wouldn't have to do in an IM system. Not having a GPU and using Phi doesn't change anything: you are still spinning the CPUs doing stuff you don't need for an IM API.

Now, given that what you are trying to achieve is rendering graphics over a network connection, your replies make sense. If you are trying to send scene descriptions, dependency graphs and so on over a network pipe, then obviously a retained mode API is going to be more efficient than sending that data over and over.

What I do not see is this system being efficient at all (for games, that's where I am coming from). Synchronizing a dependency graph is far more work than synchronizing a few simulation entities with animation parameters.

_________________
https://github.com/kiznit/rainbow-os


 Post subject: Re: Graphics API and GUI
PostPosted: Wed Oct 14, 2015 9:18 pm 

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

kiznit wrote:
Are you targeting games with this? And/or are games going to be running over this networked system? If that is the case, then my answer is that it is orders of magnitude more efficient to synchronize the game simulation than it is to send the rendering graphs over and over. That's how game engines work. That's because it is the most efficient way.


It's likely that my plans are far more convoluted than you currently imagine. I'm not sure how familiar you are with my project from other posts, or where to start, but...

Imagine a word processor. Typically (on traditional OSs) this would be implemented as a single process; for my OS it's not. For my OS you'd have a "back end" process that deals with things like file IO and managing the document's data; plus a second "front end" process that deals with the user; plus a third process that does spell check. Processes are tied to a specific computer; and by splitting applications up into multiple processes like this the load is spread across multiple computers. There are other advantages, including making it easier to have 20 different applications (written by different people) all using the same spell checker at the same time, making it easier to have 2 or more "front ends" connected to the same "back end" (for multi-user apps), making it so that one company can write a new front end even when the original word processor was a proprietary application written by a different company, etc. There are also disadvantages (different programming paradigm, network lag/traffic, a much stronger need for fault tolerance, etc).

For the sake of an example, let's assume we're designing a game like Gnomoria (for no reason other than it's the game I played most recently). Just like the word processor; this would be split into cooperating processes and distributed across a LAN. You might have one process for the user interface, one process doing physics (mostly just liquids), one doing flora (tree, plant and grass growth), one for enemy AI, one or more for path finding, one for tracking/scheduling jobs for your gnomes to perform, one for mechanisms, etc. In this way a single game may be spread across 10 computers. For most games you'll notice there's almost always one or more limits to keep processing load from growing beyond a single computer's capabilities. For Gnomoria, there's 2 of them - map size is limited to (at most) 192*192 tiles; and there's a soft limit on population growth. By spreading load across multiple computers these limits can be pushed back - rather than limiting a kingdom to 192*192 on one computer, maybe this limit could be increased to 512*512 when running on 10 computers; and rather than limiting population to ~100 gnomes, maybe you can allow 500 gnomes.

Some of these processes may be running on the same computer, and some of them might be running on different computers. The programmer only sends messages between processes without caring where the processes are. The design of applications and games (and how they're split into multiple processes) has to take this into account. Essentially you need to find pieces where the processing load is high enough to justify the communication costs.

You also need to design messaging protocols to reduce communication. For example, when the spell checker is started you wouldn't want to have to send a dictionary to it from another process; you'd want to send a file name or (more likely) a language code and let it get the dictionary from disk itself; and you'd want to tell it the language you want as far in advance as possible so that it (hopefully) has time to finish loading the dictionary before you start asking it to check any spelling.
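For example, a minimal sketch of what such a protocol message might look like (the names and layout here are my own illustration, not an actual spec):

Code:
#include <stdint.h>

/* Hypothetical spell checker protocol: the front end announces the language
   as early as possible, so the checker can load its dictionary from disk
   before any words arrive; only small payloads ever cross the network. */
enum spell_msg_type {
    SPELL_SET_LANGUAGE = 1,   /* payload = language code, e.g. "en-AU" */
    SPELL_CHECK_WORD   = 2    /* payload = a UTF-8 word to check */
};

struct spell_msg {
    uint32_t type;            /* one of spell_msg_type */
    char     payload[32];     /* a short string; never a whole dictionary */
};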

Now, a detour...

In theory; when a computer boots the OS's boot code sets a default video mode (using firmware) then (later) the OS does PCI bus enumeration, finds video cards and starts their native drivers. In practice there's no native video drivers so the OS starts a "generic framebuffer" driver for each video card/monitor the firmware let it setup during boot, which is typically limited to one and only one monitor (because firmware isn't too fancy). Once the video driver is started it uses a file to figure out which "station" it's connected to and establishes a connection to that station's "station manager". The drivers for other human interface devices (keyboard, mouse, sound, etc) do the same. A station is basically where a user would sit. If one computer has 4 monitors and 2 keyboards, then maybe you configure one station for 3 monitors and one keyboard and another station for 1 monitor and 1 keyboard.

Of course it's a micro-kernel; all drivers are just processes, and processes communicate without caring which computer they're on. If you want multiple monitors but firmware (and lack of native drivers) limits the OS to one monitor per computer, then that's fine - just configure a station so that monitors from 2 or more computers connect to the same "station manager".

Once a station manager has enough human interface device drivers (not necessarily all of them) it does user login; and when a user logs in the station manager loads the user's preferences and starts whatever the user wants (e.g. a GUI for each virtual desktop). The station manager is the only thing that talks to human interface device drivers. Apps/games/GUI talk to the station manager. For example; (for video) an app sends a description of its graphics (as a dependency graph) to the station manager, which keeps a copy and sends a copy to each video driver. If a video driver crashes (or is updated) then another video driver is started and the station manager sends it a copy of those descriptions, without the application or GUI knowing the video driver had a problem and/or was changed. Of course you can have an application on one computer, a station manager on another computer, and 2 video drivers on 2 more computers.

Also; there isn't too much difference between boring applications (text editor, calculator) and 3D games - they all send 3D graphics to the station manager/video driver (and if someone wants to look at your calculator from the side I expect the buttons to protrude like "[" rather than being painted on a 2D surface and impossible to see from the side).

That should give you a reasonable idea of what the graphics API (or more correctly, the messaging protocol used for graphics) needs to be designed for.

So, latency...

Worst case latency (time between app sending anything and video driver receiving it) for a congested network can exceed 1/60th of a second. Due to the nature of the OS, you can expect the network to be fairly busy at best. Anything that involves "per frame" data going in either direction is completely unusable. The graphics is not 3D, it's 4D. An application doesn't say "chicken #3 is at (x, y, z)" it says "at time=123456 in the future, chicken #3 will be at (x, y, z)". The video driver figures out when the next frame will be displayed, calculates where objects will be at that time, then renders objects at the calculated locations. Applications/games (wherever possible) predict the future. A player presses a mouse button which starts a 20 ms "pulling the trigger" animation and the game knows a bullet will leave the gun 20 ms before it happens (and with 10 ms of lag, the video driver does the animation in 10 ms and the bullet leaves the gun at exactly the right time despite the lag).

"An object at rest will remain at rest unless acted on by an unbalanced force. An object in motion continues in motion with the same speed and in the same direction unless acted upon by an unbalanced force". We only need to tell the video driver when an object will be created, destroyed, or when it will be acted upon by an unbalanced force. This includes the camera.

Then there's bandwidth and memory consumption...

The descriptions of what the application wants to draw need to be small, not just because they consume network bandwidth but also because the descriptions are stored in the "station manager" and in all video card drivers. Textures and meshes are big, file names are small, give the video driver file names and let it load textures/meshes from disk itself. Alpha channel bitmaps are big, UTF8 strings are relatively small, give the video driver a UTF8 string and let it get the alpha channel bitmaps from the font engine itself.
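As a rough sketch of what those "small descriptions" could look like (these structures are illustrative assumptions only, not the actual protocol):

Code:
/* Hypothetical resource references: the application sends names and strings;
   the video driver fetches the heavy data itself (textures/meshes from disk,
   glyph bitmaps from the font engine). */
struct texture_ref {
    char file_name[64];   /* loaded from disk by the video driver, not the app */
};

struct text_item {
    char utf8[128];       /* a short UTF-8 string instead of alpha channel bitmaps */
};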

And finally, performance...

Ask for everything to be prefetched as soon as possible. While the user is fumbling about with your game's main menu you could be loading the most recently used save game while the video driver is loading graphics data. If the user loads a different save game, you can cancel/discard the "accidentally prefetched" data. If the video driver has more important things to be doing, IO priorities are your friend. Don't just give video driver the file name for a texture to load, also give it a default colour to use if/when the video driver hasn't been able to load the texture from disk before it needs to be displayed. For meshes do the same (but use a radius and a colour). I want to play a game now; I do not want to wait for all this graphics data to get loaded before I start blowing things up just so you can show me awesome high-poly monkey butts; and if I do want to wait I'm not too stupid to pause the game until the video driver catches up.
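One possible shape for such a request (the field names, sizes and priority hint are my assumptions, purely to illustrate the fallback idea):

Code:
#include <stdint.h>

/* Hypothetical prefetch request: carries a cheap stand-in the video driver can
   render immediately if the real mesh hasn't finished loading from disk yet. */
struct mesh_request {
    char    file_name[64];     /* mesh to prefetch */
    float   fallback_radius;   /* until loaded, draw a sphere of this radius... */
    uint8_t fallback_rgba[4];  /* ...filled with this flat colour */
    uint8_t io_priority;       /* hint: load now vs. whenever the disk is idle */
};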

Video driver should reduce quality to meet a frame's deadline if it has to; and graphics should be locked to the monitor's refresh rate. The app doesn't need to know when frames are due (it's using prediction) or what the screen resolution is. Video driver should also be smart enough to figure out if there's some other renderer somewhere that it can offload some work to (e.g. get them to do some distant impostors, up to just before rasterisation to avoid shoving pixels around); and don't forget about the prediction that's hiding latency (you can ask another renderer to start work 10 frames ahead if you want, and cancel later if the prediction changes). I don't want to be a guy sitting in a room surrounded by 10 computers where 9 of them are idle just because one is failing to delegate.

Now; with all of the above in mind; which do you think is better - immediate mode, or retained mode?


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: Graphics API and GUI
PostPosted: Wed Oct 14, 2015 9:55 pm 

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

kiznit wrote:
Brendan wrote:
kiznit wrote:
5) Not future-proof: new hardware comes out and doesn't work with your RM abstraction (an example of this was the PowerVR with its tile-based rendering).
6) Not future-proof: new rendering engine algorithm comes along and doesn't map to your API (I guess that's leaky abstraction again).


Wrong. Higher level interfaces are less hardware dependent, and less likely to cause API issues.


I disagree with you. The higher level your interfaces are, the more leaky your abstraction is. History isn't on your side: entire APIs / engines have become obsolete because of hardware changes and new algorithms enabled by faster CPUs and memory. The only graphics API that seems flexible enough to survive and adapt is OpenGL. It's also hard to be more IM than OpenGL. Direct3D doesn't count as the API completely changes from one version to the next. So I guess this could be seen as a counter example, but I personally think that Direct3D was historically too much RM.


The lowest level interface possible is giving applications direct access to the video card's memory, IO ports, IRQ/s, etc. This is also the least abstract - all the details have been leaked through a thin veil of "nothing".

The highest level interface is something like (e.g.) VRML. It's so abstract that it's impossible to tell anything about the underlying renderer or hardware being used. No lower level details leak through the abstraction.

You can pretend that the sun is cold, water runs up-hill, or whatever you like. It doesn't change anything.

Rusky wrote:
Brendan wrote:
kiznit wrote:
2) CPU load: talking to the GPU hardware is a fixed cost whether you use RM or IM. Anything RM does on top requires extra CPU cycles.
Forget about GPU hardware. We'll probably all be using something closer to Xeon Phi well before my OS project has any support for GPU.
I doubt the Phi will replace dedicated graphics hardware, but even if it did this still applies, especially in a distributed system.


To be perfectly honest; in the next ~10 years I expect discrete video to get slaughtered by integrated video and for Nvidia to die because of it; I expect AMD/ATI to continue their recent lack of success and eventually get split up and sold off as pieces to the highest bidder; and I expect Intel to bump AVX width a few more times (maybe up to 2048-bit) and then (after AMD and NVidia are gone) start wondering why they're wasting engineering effort to put 2 different types of "wide SIMD" processors on the same chip when there's no viable competition anyway and games are all designed for lower performance consoles (and ported to 80x86) and don't need half the power of GPUs in the first place.

kiznit wrote:
Now, given that what you are trying to achieve is rendering graphics over a network connection, your replies make sense. If you are trying to send scene descriptions, dependency graphs and so on over a network pipe, then obviously a retained mode API is going to be more efficient than sending that data over and over.


Yes!

kiznit wrote:
What I do not see is this system being efficient at all (for games, that's where I am coming from). Synchronizing a dependency graph is far more work than synchronizing a few simulation entities with animation parameters.


In terms of efficiency; I don't think it's possible for any of us to really know until it's been implemented, then re-implemented (with lessons learnt from the previous attempt), then tuned/tweaked/evolved for several years. I think it will be fine for casual gaming (without GPU support); even if it's not as good as (e.g.) Crysis running on a pair of high-end SLI cards.


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: Graphics API and GUI
PostPosted: Thu Oct 15, 2015 12:26 am 

Joined: Wed Jan 06, 2010 7:07 pm
Posts: 792
Brendan wrote:
The lowest level interface possible is giving applications direct access to the video card's memory, IO ports, IRQ/s, etc. This is also the least abstract - all the details have been leaked through a thin veil of "nothing".

The highest level interface is something like (e.g.) VRML. It's so abstract that it's impossible to tell anything about the underlying renderer or hardware being used. No lower level details leak through the abstraction.
It's not a linear scale, and VRML has precisely the kind of problems we've been talking about.

Brendan wrote:
To be perfectly honest; in the next ~10 years I expect discrete video to get slaughtered by integrated video and for Nvidia to die because of it; I expect AMD/ATI to continue their recent lack of success and eventually get split up and sold off as pieces to the highest bidder; and I expect Intel to bump AVX width a few more times (maybe up to 2048-bit) and then (after AMD and NVidia are gone) start wondering why they're wasting engineering effort to put 2 different types of "wide SIMD" processors on the same chip when there's no viable competition anyway and games are all designed for lower performance consoles (and ported to 80x86) and don't need half the power of GPUs in the first place.
Nonsense. Integrated graphics are nowhere near the power of even "lower performance consoles," and AVX is nowhere near what graphics-optimized hardware does. I do expect integrated graphics to get more and more powerful (you can already play old/low end games on newer Intel graphics), but discrete/console graphics will also keep pace and new games will continue to utilize them.

The fact that Gnomoria is your point of reference is rather telling. Not to disparage that style of games, but it's nowhere near as demanding as even a few-years-old "typical" 3D game.

Brendan wrote:
Processes are tied to a specific computer; and by splitting applications up into multiple processes like this the load is spread across multiple computers.
Perhaps it would make more sense to leave the video drivers as lower-level interfaces- not all the way down to direct hardware access, but in more of a sweet spot that's portable across a wide range of hardware both in time and in power. Like, say, OpenGL. This process would of course be tied to the rendering hardware, but so could various higher-level renderers that make different tradeoffs and can even be game-specific when necessary.

_________________
[www.abubalay.com]


 Post subject: Re: Graphics API and GUI
PostPosted: Thu Oct 15, 2015 6:04 am 

Joined: Sat Nov 18, 2006 9:11 am
Posts: 571
My intentions at this point are to just get Mantle/Vulkan drivers; I can put another layer in/on top later if I feel it is necessary, and since those are very close to the hardware, the drivers will be *relatively* small. Any parts that deal with loading textures from files or rendering a hierarchy list can be separate from the driver proper. Separate could mean in a different process, or it could mean loaded as a shared library so all graphics drivers have the same copy of this high level code mapped into their address spaces and then have a standard low level interface that it uses to communicate with the GPU. Either way, that's still a bit out for me to worry about, but there is nothing stopping me (or anyone) from implementing Brendan's ideas in my OS even if I choose a lower level API now, although I'm sure it will affect the design considerably. I do appreciate the inputs and descriptions from everyone as it helps me clarify my thoughts and gives me more information to base my decisions on.


 Post subject: Re: Graphics API and GUI
PostPosted: Thu Oct 15, 2015 9:53 am 

Joined: Mon Feb 02, 2015 7:11 pm
Posts: 898
Brendan wrote:
It's likely that my plans are far more convoluted than you currently imagine. I'm not sure how familiar you are with my project from other posts, or where to start, but...
(...)


Very interesting stuff about distributed computing. Thanks for taking the time to explain it all.

Brendan wrote:
So, latency...

Worst case latency (time between app sending anything and video driver receiving it) for a congested network can exceed 1/60th of a second. Due to the nature of the OS, you can expect the network to be fairly busy at best. Anything that involves "per frame" data going in either direction is completely unusable. The graphics is not 3D, it's 4D. An application doesn't say "chicken #3 is at (x, y, z)" it says "at time=123456 in the future, chicken #3 will be at (x, y, z)". The video driver figures out when the next frame will be displayed, calculates where objects will be at that time, then renders objects at the calculated locations. Applications/games (wherever possible) predict the future. A player presses a mouse button which starts a 20 ms "pulling the trigger" animation and the game knows a bullet will leave the gun 20 ms before it happens (and with 10 ms of lag, the video driver does the animation in 10 ms and the bullet leaves the gun at exactly the right time despite the lag).

"An object at rest will remain at rest unless acted on by an unbalanced force. An object in motion continues in motion with the same speed and in the same direction unless acted upon by an unbalanced force". We only need to tell the video driver when an object will be created, destroyed, or when it will be acted upon by an unbalanced force. This includes the camera.


You just described what everyone else calls a "game engine". This is why I commented earlier about you building a superset of existing game engines: you are building a game engine that is going to be used by all the apps / games running on your OS. When you tell your system where chicken #3 is going to be in the future, you are effectively sending simulation data (and not rendering data). It is a sensible approach already in use by a lot of games.

What distinction do you make between a game engine and your system (which I can't get around to calling a video driver just yet)?



Brendan wrote:
Now; with all of the above in mind; which do you think is better - immediate mode, or retained mode?


What you described above is a game engine. A game engine is a framework and pretty much intrinsically retained mode. It's not a question of which is better, it's no question at all!

_________________
https://github.com/kiznit/rainbow-os


 Post subject: Re: Graphics API and GUI
PostPosted: Thu Oct 15, 2015 10:14 am 

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

Rusky wrote:
Brendan wrote:
To be perfectly honest; in the next ~10 years I expect discrete video to get slaughtered by integrated video and for Nvidia to die because of it; I expect AMD/ATI to continue their recent lack of success and eventually get split up and sold off as pieces to the highest bidder; and I expect Intel to bump AVX width a few more times (maybe up to 2048-bit) and then (after AMD and NVidia are gone) start wondering why they're wasting engineering effort to put 2 different types of "wide SIMD" processors on the same chip when there's no viable competition anyway and games are all designed for lower performance consoles (and ported to 80x86) and don't need half the power of GPUs in the first place.
Nonsense. Integrated graphics are nowhere near the power of even "lower performance consoles," and AVX is nowhere near what graphics-optimized hardware does. I do expect integrated graphics to get more and more powerful (you can already play old/low end games on newer Intel graphics), but discrete/console graphics will also keep pace and new games will continue to utilize them.


A while ago I bought a new Haswell system with an AMD R9 290. The video card was annoyingly noisy and didn't support VGA output (needed for my KVMs) and wasn't too stable either. I ripped it out thinking I'd get a (more power efficient) Nvidia card, and just use the Intel GPU until the Nvidia card arrived. A day or 2 turned into a week, which turned into a month. I've been using Intel's GPU for about a year now, simply because I'm too lazy to go to the local computer shop (and because it does run everything I've thrown at it just fine, and there's no additional fan noise). I can't see the point in bothering with discrete video cards now (at least for Haswell); and Intel's GPUs are improving at a faster rate than NVidia/AMD.

The thing is, NVidia (and AMD and Intel) need to spend a lot on R&D to remain competitive. Less market share means less money for R&D which means less competitive products, which means less market share which means.... the downward spiral just gets faster and faster from there. My guess is that Nvidia has about 5 years before it reaches the beginning of its end (and 10 years until it reaches the end of its end). They'll probably end up doing smartphone GPUs at $1 per chip or something.

AMD is already showing signs of trouble - if their Zen cores aren't close to Skylake's performance I'm not sure they'll survive much beyond that; which is sad. I like AMD (partly because they don't cripple features for "product differentiation" like Intel) and want them to make Intel sweat.

Rusky wrote:
The fact that Gnomoria is your point of reference is rather telling. Not to disparage that style of games, but it's nowhere near as demanding as even a few-years-old "typical" 3D game.


It was just the game I played the most recently. Before that was Cities: Skylines, and before that Skyrim, and before that Minecraft. They all have limits on world size or population or AI complexity or something; and all could be less limited if they were able to use 2 or more computers. Note that I'm mostly talking about CPU load here. The graphics in Gnomoria are simple, but the pathfinding is known to cause the game to freeze for 2+ seconds when mechanisms change previously valid path/s.

Rusky wrote:
Brendan wrote:
Processes are tied to a specific computer; and by splitting applications up into multiple processes like this the load is spread across multiple computers.
Perhaps it would make more sense to leave the video drivers as lower-level interfaces- not all the way down to direct hardware access, but in more of a sweet spot that's portable across a wide range of hardware both in time and in power. Like, say, OpenGL. This process would of course be tied to the rendering hardware, but so could various higher-level renderers that make different tradeoffs and can even be game-specific when necessary.


Imagine you've got 4 computers with 4 monitors and 4 video drivers; one is doing software rendering on a normal quad-core desktop CPU, one has an ancient "fixed function" pipeline, one is a modern GPU with unified shaders, and one is using a set of Xeon Phi cards to do real-time ray casting. An application is sending its "4D" graphics data (3D + the time information that's necessary to hide lag); it doesn't know its data is being displayed on those 4 monitors and only describes a scene. Which version of OpenGL is the application using?


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: Graphics API and GUI
PostPosted: Thu Oct 15, 2015 10:37 am 

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

kiznit wrote:
Brendan wrote:
So, latency...

Worst case latency (time between app sending anything and video driver receiving it) for a congested network can exceed 1/60th of a second. Due to the nature of the OS, you can expect the network to be fairly busy at best. Anything that involves "per frame" data going in either direction is completely unusable. The graphics is not 3D, it's 4D. An application doesn't say "chicken #3 is at (x, y, z)" it says "at time=123456 in the future, chicken #3 will be at (x, y, z)". The video driver figures out when the next frame will be displayed, calculates where objects will be at that time, then renders objects at the calculated locations. Applications/games (wherever possible) predict the future. A player presses a mouse button which starts a 20 ms "pulling the trigger" animation and the game knows a bullet will leave the gun 20 ms before it happens (and with 10 ms of lag, the video driver does the animation in 10 ms and the bullet leaves the gun at exactly the right time despite the lag).

"An object at rest will remain at rest unless acted on by an unbalanced force. An object in motion continues in motion with the same speed and in the same direction unless acted upon by an unbalanced force". We only need to tell the video driver when an object will be created, destroyed, or when it will be acted upon by an unbalanced force. This includes the camera.


You just described what everyone else calls a "game engine". This is why I commented earlier about you building a superset of existing game engines: you are building a game engine that is going to be used by all the apps / games running on your OS. When you tell your system where chicken #3 is going to be in the future, you are effectively sending simulation data (and not rendering data). It is a sensible approach already in use by a lot of games.

What distinction do you make between a game engine and your system (which I can't get around to calling a video driver just yet)?


To me; a game engine is a much larger thing, handling physics, sound, user input devices, etc; and anything that only handles graphics (and doesn't handle any of those other things) isn't a game engine.

Existing systems are sort of like this:

Code:
     --------    --------    ----------
    | Sound  |  | Video  |  | Keyboard |
    | Driver |  | Driver |  | Driver   |
     --------   ---------    ----------
             \      |       /
              \     |      /
              -------------
             | Game Engine |
             |-------------|
             |    Game     |
              -------------


Where for my OS it'd be more like this:

Code:
     --------    --------    ----------
    | Sound  |  | Video  |  | Keyboard |
    | Driver |  | Driver |  | Driver   |
     --------   ---------    ----------
             \      |       /
              \     |      /
              --------------
             | OS's Station |
             | Manager      |
              --------------
                    |
                    |
              -------------------
             | Game "Front end"  |
              -------------------
                    |
                    |
              -------------------
             | Game world        |
              -------------------
                   /       \
                  /         \
         --------------    ---------   
        | Game Physics |  | Game AI |
         --------------    ---------



Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: Graphics API and GUI
PostPosted: Thu Oct 15, 2015 11:09 am 

Joined: Mon Feb 02, 2015 7:11 pm
Posts: 898
You are right, a game engine is a lot more than just synchronizing entities over a network. What I meant to say is that what you describe is a distributed simulation and that is at the core of a game engine.

A game engine is much more than a single box on your first diagram... It's multiple boxes: graphics engine, simulation engine, network engine, sound engine, physics engine, and so on. The term "game engine" is just a convenient term to talk about all these things as one. It's basically everything except the drivers/APIs and the gameplay specific stuff (but it does include the scripting engine / scripting support code).

_________________
https://github.com/kiznit/rainbow-os


 Post subject: Re: Graphics API and GUI
PostPosted: Thu Oct 15, 2015 11:29 am 

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

kiznit wrote:
You are right, a game engine is a lot more than just synchronizing entities over a network. What I meant to say is that what you describe is a distributed simulation and that is at the core of a game engine.

A game engine is much more than a single box on your first diagram... It's multiple boxes: graphics engine, simulation engine, network engine, sound engine, physics engine, and so on. The term "game engine" is just a convenient term to talk about all these things as one. It's basically everything except the drivers/APIs and the gameplay specific stuff (but it does include the scripting engine / scripting support code).


For my diagrams; each of the boxes represents a separate process (which may be split into many pieces internally).


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: Graphics API and GUI
PostPosted: Thu Oct 15, 2015 11:56 am 

Joined: Wed Jan 06, 2010 7:07 pm
Posts: 792
I always go for laptops with integrated graphics only, for precisely the reasons you mention. On the other hand, you still don't seem to be taking into account any more demanding games - I have an R9 270X (a high mid-range card) that I use for games, and it can pull off 1920x1080x60fps on high-to-highest quality settings (depending on the game), while I can't even hit 30fps on the lowest quality settings with lower resolutions with my Haswell's HD 4600.

Brendan wrote:
Intel's GPUs are improving at a faster rate than NVidia/AMD.
They're also playing catch-up. You can't extrapolate their future performance based on these first few generations that are far, far below what discrete cards are capable of. I love Intel GPUs for what they are and I hope they continue to improve, and I also would love AMD's CPU business to get back in the game (as an aside, their integrated graphics are a lot better than Intel's, and so are Nvidia's for their mobile chips), but I don't really see them competing in the same niche as discrete GPUs.

Brendan wrote:
They all have limits on world size or population or AI complexity or something; and all could be less limited if they were able to use 2 or more computers.
In some cases, sure.

Brendan wrote:
Which version of OpenGL is the application using?
I didn't intend to suggest OpenGL itself, merely something similar. However, I don't know that fully supporting fixed-function hardware is entirely worthwhile (programmable pipelines are over 15 years old today- what fixed-function hardware will even still work by the time your OS exists?), or that mixing rendering techniques for a single application makes any sense, but we've already been over that so ¯\_(ツ)_/¯.

_________________
[www.abubalay.com]


 Post subject: Re: Graphics API and GUI
PostPosted: Thu Oct 15, 2015 3:39 pm 

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

Rusky wrote:
I always go for laptops with integrated graphics only, for precisely the reasons you mention. On the other hand, you still don't seem to be taking into account any more demanding games - I have an R9 270X (a high mid-range card) that I use for games, and it can pull off 1920x1080x60fps on high-to-highest quality settings (depending on the game), while I can't even hit 30fps on the lowest quality settings with lower resolutions with my Haswell's HD 4600.

Brendan wrote:
Intel's GPUs are improving at a faster rate than NVidia/AMD.
They're also playing catch-up. You can't extrapolate their future performance based on these first few generations that are far, far below what discrete cards are capable of. I love Intel GPUs for what they are and I hope they continue to improve, and I also would love AMD's CPU business to get back in the game (as an aside, their integrated graphics are a lot better than Intel's, and so are Nvidia's for their mobile chips), but I don't really see them competing in the same niche as discrete GPUs.


You can extrapolate (but it's not a linear line - it's a curve).

In terms of raw performance Intel's GPUs will always be beaten; but performance is only one factor. Once you take heat/power and price into account things change; especially when you consider Intel's GPU as a "free extra" bundled with the CPU. Most people (where "most" aren't gamers) don't bother with discrete cards now, not because of performance but because of heat/power/price. Market share looks like this:

[image: GPU market share chart]

Low end gamers have already shifted (or are shifting) away from discrete cards. For example; if you take a look at Steam's survey you'll see Intel has ~27% of gamers (and that doesn't include AMD's integrated GPUs). The "medium end" gamers will be next. Game developers aren't stupid either - as Intel increases their market share more and more game developers are going to make sure their games at least work reasonably on Intel GPUs to avoid losing > ~27% of the potential market; and more games running "at least reasonably" on Intel isn't going to help NVidia's market share either.

Rusky wrote:
Brendan wrote:
Which version of OpenGL is the application using?
I didn't intend to suggest OpenGL itself, merely something similar. However, I don't know that fully supporting fixed-function hardware is entirely worthwhile (programmable pipelines are over 15 years old today- what fixed-function hardware will even still work by the time your OS exists?), or that mixing rendering techniques for a single application makes any sense, but we've already been over that so ¯\_(ツ)_/¯.


You can still buy systems with fixed function pipelines today; but they're small embedded things and/or thin clients. This is a reasonably important use case for my OS - e.g. get 12 monitors, glue a cheap thin client on the back of each monitor, add the "4*3 wall of monitors" to the LAN, and have a pool of headless Xeon servers in a back room.


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.

