OSDev.org

The Place to Start for Operating System Developers




 Post subject: drawing a GUI for my kernel, [ something like desktop ]
PostPosted: Fri Dec 26, 2014 11:52 am 

Joined: Thu Sep 18, 2008 11:17 pm
Posts: 37
So, I have an idea.
A GIF/BMP of my 'desktop' that is drawn pixel by pixel.

Then I calculate the 'displacements' of the other drawings so I don't redraw the entire screen pixel by pixel, and only modify what has actually changed.

How does that sound to people? Any better ideas?


 Post subject: Re: drawing a GUI for my kernel, [ something like desktop
PostPosted: Fri Dec 26, 2014 12:46 pm 

Joined: Sat Mar 31, 2012 3:07 am
Posts: 4591
Location: Chichester, UK
All GUIs work by only modifying the bits of the screen that have changed.


 Post subject: Re: drawing a GUI for my kernel, [ something like desktop
PostPosted: Fri Dec 26, 2014 7:04 pm 

Joined: Mon Jun 16, 2014 5:33 pm
Posts: 213
Location: Costa Rica
sweetgum wrote:
So, I have an idea.
A GIF/BMP of my 'desktop' that is drawn pixel by pixel.

Then I calculate the 'displacements' of the other drawings so I don't redraw the entire screen pixel by pixel, and only modify what has actually changed.

How does that sound to people? Any better ideas?


Well, that's the basic idea, but there's A LOT more to GUIs. How do you plan to manage animated objects, such as mouse pointers? The primary concept is to have structures/objects that define where the hell the corresponding graphical objects are, their properties, etc... In your back buffer, you would write your desktop first, then the background windows, with the rearmost drawn first. Then the active window, then the mouse pointer, etc... If you receive a graphical object with an alpha channel of less than 0xFF, you must calculate the effect it has on the appearance of the windows behind it. If a fully opaque object is received (0xFF alpha), everything underneath it is simply overwritten. Once all calculations are done, the back buffer is copied to VRAM. This is usually done at 50/60Hz. Problem solved.
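
Roughly, the per-pixel alpha blend works like this (just a minimal C++ sketch; the 32-bit ARGB layout and the names are assumptions of mine, not anybody's real code):
Code:
// Minimal sketch of back-to-front alpha blending for one pixel.
// Assumes 32-bit ARGB pixels; names are illustrative only.
#include <cstdint>

// Blend 'src' over 'dst' using src's alpha channel (0x00..0xFF).
static uint32_t blendOver(uint32_t src, uint32_t dst)
{
    uint32_t a = (src >> 24) & 0xFF;
    if (a == 0xFF) return src;          // fully opaque: replaces what's below
    if (a == 0x00) return dst;          // fully transparent: leaves it alone
    uint32_t out = 0;
    for (int shift = 0; shift < 24; shift += 8) {   // blend B, G, R channels
        uint32_t s = (src >> shift) & 0xFF;
        uint32_t d = (dst >> shift) & 0xFF;
        out |= (((s * a) + (d * (255 - a))) / 255) << shift;
    }
    return out | 0xFF000000;            // result is opaque
}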

_________________
Happy New Code!
Hello World in Brainfuck :D:
Code:
++++++++[>++++[>++>+++>+++>+<<<<-]>+>+>->>+[<]<-]>>.>---.+++++++..+++.>>.<-.<.+++.------.--------.>>+.>++.


 Post subject: Re: drawing a GUI for my kernel, [ something like desktop
PostPosted: Sat Dec 27, 2014 6:15 pm 

Joined: Sat Nov 21, 2009 5:11 pm
Posts: 852
So in other words, your OS spends its time mostly drawing the same invisible things over and over again?


 Post subject: Re: drawing a GUI for my kernel, [ something like desktop
PostPosted: Sat Dec 27, 2014 8:17 pm 

Joined: Mon Jun 16, 2014 5:33 pm
Posts: 213
Location: Costa Rica
Gigasoft wrote:
So in other words, your OS spends it time mostly drawing the same invisible things over and over again?


Oh no, that would be awful! Do you at least know (and understand) what back buffering is? You're confusing the concepts of "drawing invisible things" and back buffers. A common problem here is that applications report their image updates at different times from each other. The solution? Back buffering! When your OS receives an image update, it draws it to the back buffer, a memory area of the same size as VRAM but separate from it. Every so often, the back buffer is copied over to VRAM. This is not a cheap process, so it is done at a frequency that is fairly low from the CPU's point of view: 50/60Hz, that is, every 20/~16.66 milliseconds. This is one situation where -O3, -ffast-math, SSE, SSE2, SSE3, and AVX can help you a lot! When copy time is reached, you'll normally add the mouse pointer to the mix, and pray to SSE3 to do it well. After the copy is done, the process repeats, with the desktop being drawn first.

That is, however, the most basic of the solutions. A more complex (and better) design would register window and canvas objects everywhere. Once copy time is reached, the desktop is drawn into the back buffer, then all the windows/canvases, then the mouse pointer. This gives you an extra layer of flexibility and reliability. You can move a window by changing a field. You can have a fullscreen application by omitting everything but the fullscreen canvas and the mouse pointer. You can do some math-intensive work at copy time that improves reliability and look-and-feel; an example is the common way OSes draw the foreground window differently from the background ones. You could even use that back/foreground information to do what you describe, calculating which parts of a window are visible and displaying only those, though it should be benchmarked, as I'm not completely sure how it would affect performance.
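
In code, one copy-time pass could look roughly like this (a sketch only; Surface, Window and blit() are invented names, there's no clipping, all sizes are assumed to match, and a real compositor would do much more):
Code:
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical names for illustration only.
struct Surface { int w, h; std::vector<uint32_t> px; };
struct Window  { int x, y; Surface content; };

// Copy one window's pixels into the back buffer (no clipping, no alpha,
// just to show the order of operations).
static void blit(Surface& dst, const Window& win)
{
    for (int row = 0; row < win.content.h; ++row)
        std::memcpy(&dst.px[(win.y + row) * dst.w + win.x],
                    &win.content.px[row * win.content.w],
                    win.content.w * sizeof(uint32_t));
}

// One compositing pass, run at ~50/60Hz.
void composeFrame(Surface& backBuffer, uint32_t* vram,
                  const Surface& desktop,
                  const std::vector<Window>& windows,   // back-to-front order
                  const Window& pointer)
{
    backBuffer.px = desktop.px;                          // 1. desktop first
    for (const Window& w : windows) blit(backBuffer, w); // 2. windows
    blit(backBuffer, pointer);                           // 3. mouse pointer last
    std::memcpy(vram, backBuffer.px.data(),              // 4. present to VRAM
                backBuffer.px.size() * sizeof(uint32_t));
}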

Anyway, please rework your answer. Maybe I misunderstood you :wink: .

_________________
Happy New Code!
Hello World in Brainfuck :D:
Code:
++++++++[>++++[>++>+++>+++>+<<<<-]>+>+>->>+[<]<-]>>.>---.+++++++..+++.>>.<-.<.+++.------.--------.>>+.>++.


 Post subject: Re: drawing a GUI for my kernel, [ something like desktop
PostPosted: Sun Dec 28, 2014 4:43 pm 

Joined: Sat Nov 21, 2009 5:11 pm
Posts: 852
Whether you draw things to a back buffer or to VRAM, you are still drawing them. Many times per second, even if they are unchanging and even if they are covered up by other things. This method is suitable for games where everything is expected to be constantly moving anyway, but for most applications the display remains relatively static. For example, as I am typing this, the only things that change are the characters on the screen, the 20 pixel tall blinking cursor, and the list of animated GIF smilies. There is therefore no need to redraw the entire screen every time I press a key, much less 60 times a second.

In my OS, the window system always knows what portion of a window is visible, and it also keeps track of a dirty region for each window. Whenever an application thread becomes idle, the system checks if there is a window belonging to that thread that needs to be painted. If so, the application is directed to perform painting of that window, and all drawing calls are clipped to the intersection of the dirty and visible regions. An application can also manually initiate painting in an arbitrary portion of a window. Finally, it can scroll a rectangular portion of a window, using a VRAM to VRAM copy. A window can optionally be set to use double buffering, making updates atomic. The mouse cursor works in a different way. The system just saves the pixels that are under the cursor, and restores them when moving it. During painting, the cursor is hidden.
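
For the curious, the clipping itself is just integer rectangle intersection, something like this (a rough sketch; the names are illustrative, not my actual code):
Code:
// A sketch of rectangle clipping for dirty/visible regions.
#include <algorithm>
#include <cstdint>

struct Rect { int left, top, right, bottom; };   // half-open, in screen pixels

static Rect intersect(const Rect& a, const Rect& b)
{
    return { std::max(a.left,  b.left),  std::max(a.top,    b.top),
             std::min(a.right, b.right), std::min(a.bottom, b.bottom) };
}

static bool empty(const Rect& r) { return r.right <= r.left || r.bottom <= r.top; }

// Every drawing call is clipped to (dirty ∩ visible) before touching pixels.
void fillClipped(uint32_t* fb, int pitch, Rect target,
                 const Rect& dirty, const Rect& visible, uint32_t colour)
{
    Rect clip = intersect(intersect(dirty, visible), target);
    if (empty(clip)) return;                         // nothing to repaint
    for (int y = clip.top; y < clip.bottom; ++y)
        for (int x = clip.left; x < clip.right; ++x)
            fb[y * pitch + x] = colour;
}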


 Post subject: Re: drawing a GUI for my kernel, [ something like desktop
PostPosted: Sun Dec 28, 2014 8:44 pm 

Joined: Mon Jun 16, 2014 5:33 pm
Posts: 213
Location: Costa Rica
Gigasoft wrote:
Whether you draw things to a back buffer or to VRAM, you are still drawing them. Many times per second, even if they are unchanging and even if they are covered up by other things. This method is suitable for games where everything is expected to be constantly moving anyway, but for most applications the display remains relatively static. For example, as I am typing this, the only things that change are the characters on the screen, the 20 pixel tall blinking cursor, and the list of animated GIF smilies. There is therefore no need to redraw the entire screen every time I press a key, much less 60 times a second.

A single, simple question beats that whole paragraph: where does the game run? On your OS, of course! If the OS doesn't refresh the screen at 60Hz, how could the game (short of DOS-style direct VGA writes :| )?

Gigasoft wrote:
In my OS, the window system always knows what portion of a window is visible, and it also keeps track of a dirty region for each window. Whenever an application thread becomes idle, the system checks if there is a window belonging to that thread that needs to be painted. If so, the application is directed to perform painting of that window, and all drawing calls are clipped to the intersection of the dirty and visible regions. An application can also manually initiate painting in an arbitrary portion of a window. Finally, it can scroll a rectangular portion of a window, using a VRAM to VRAM copy. A window can optionally be set to use double buffering, making updates atomic. The mouse cursor works in a different way. The system just saves the pixels that are under the cursor, and restores them when moving it. During painting, the cursor is hidden.

There's a fatal error here. Try to move a window or the mouse and you get the same thing Windows does when you move a window while the system is overloaded: the trail of the movement remains on screen. That's because you'd need to do other extra computations (unmentioned in your design) to get everything right. Your design is "dirty"; it registers and manages raw memory regions as windows directly, instead of using objects that are later serialized into temporary, static graphical entities.

This design is theoretically possible, although it requires lots of FPU/MMX/SSE* instructions to calculate all the details, not to mention that you'd probably have to be a genius to design such algorithms. That overhead can easily eat all the system's performance. A back buffer may even be preceded by a second-level back buffer. It could contain a collection of window canvases, so that every so often, perhaps when a window is actively drawing, everything is serialized into the first-level back buffer, which is ultimately copied to VRAM.

Remember that GUIs are not just square, pixelated windows. They involve all sorts of graphical effects. Maybe, someday, a graphical effect you add won't work with your design, neither in theory nor in practice. A simple example? The Desktop Cube effect of Compiz. My design would simply replace the plain collection of objects that is now the second-level back buffer with trees and objects describing the scene. Either way, the second-level back buffer is always rendered somehow into the first-level back buffer, which goes to VRAM, or however graphics are output on a given platform.

BTW, did you say you could update the screen asynchronously? Never mind! Some standards specify an exact frequency at which data must be transferred. HDMI, for instance.

_________________
Happy New Code!
Hello World in Brainfuck :D:
Code:
++++++++[>++++[>++>+++>+++>+<<<<-]>+>+>->>+[<]<-]>>.>---.+++++++..+++.>>.<-.<.+++.------.--------.>>+.>++.


 Post subject: Re: drawing a GUI for my kernel, [ something like desktop
PostPosted: Mon Dec 29, 2014 8:24 am 

Joined: Sat Nov 21, 2009 5:11 pm
Posts: 852
KemyLand wrote:
A single, simple question beats that whole paragraph: where does the game run? On your OS, of course! If the OS doesn't refresh the screen at 60Hz, how could the game (short of DOS-style direct VGA writes

Any application can update any portion of its window at any time just fine. Other windows are unaffected by this.

KemyLand wrote:
There's a fatal error here. Try to move a window or the mouse and you get the same thing Windows does when you move a window while the system is overloaded: the trail of the movement remains on screen. That's because you'd need to do other extra computations (unmentioned in your design) to get everything right.

In the case of an overloaded system, I prefer the system to remain responsive rather than having to wait for it to redraw everything on the screen before I can do anything at all.

KemyLand wrote:
This design is theoretically possible, although it requires lots of FPU/MMX/SSE* instructions to calculate all the details, not to mention that you'd probably have to be a genius to design such algorithms. That overhead can easily eat all the system's performance.

Obviously, keeping track of where the windows are, what is visible and what must be redrawn is much faster than shoving millions of pixels around in the system repeatedly. There is no floating point arithmetic involved and the structures describing these regions are typically very small. They can basically be thought of as a list of rectangles. This does not require that all windows be rectangular. I could easily make windows with rounded corners if I wanted to.

KemyLand wrote:
Remember that GUIs are not just square, pixelated windows. They involve all sorts of graphical effects. Maybe, someday, a graphical effect you add won't work with your design, neither in theory nor in practice. A simple example? The Desktop Cube effect of Compiz.

For the Desktop Cube effect, one would simply replace all the top level windows with a large transparent window that manages the application windows in its own way. It would keep a separate buffer for each window, and redraw the composited view whenever one or more windows have changed. Then it would wait for a small amount of time before checking again. Transparent windows (and other effects, such as blurring) can be implemented by having the system draw everything that is in the transparent part of the window into a temporary surface which is passed to the application, which then draws its own UI on top.

KemyLand wrote:
BTW, did you say you could update the screen asynchronously? Never mind! Some standards specify an exact frequency at which data must be transferred. HDMI, for instance.

That does not hamper the ability to transfer things into the display chip's RAM. If you are talking about avoiding tearing, one can simply delay drawing until the next blanking period.
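
On plain VGA-style hardware, for example, waiting for the blanking period can be as simple as polling Input Status Register 1 (a rough sketch, assuming the usual inb() port-read helper):
Code:
#include <cstdint>

// Read one byte from an x86 I/O port (the usual OSDev helper).
static inline uint8_t inb(uint16_t port)
{
    uint8_t v;
    asm volatile ("inb %1, %0" : "=a"(v) : "Nd"(port));
    return v;
}

// Port 0x3DA, bit 3 is set while the vertical retrace is in progress.
void waitForVerticalRetrace()
{
    while (inb(0x3DA) & 0x08) { }        // let any current retrace finish
    while (!(inb(0x3DA) & 0x08)) { }     // then wait for the next one to begin
}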


 Post subject: Re: drawing a GUI for my kernel, [ something like desktop
PostPosted: Mon Dec 29, 2014 8:53 pm 

Joined: Mon Jun 16, 2014 5:33 pm
Posts: 213
Location: Costa Rica
Gigasoft wrote:
KemyLand wrote:
A single, simple question beats that whole paragraph: where does the game run? On your OS, of course! If the OS doesn't refresh the screen at 60Hz, how could the game (short of DOS-style direct VGA writes

Any application can update any portion of its window at any time just fine. Other windows are unaffected by this.

Let's debug these two programs:
Program 1:
  1. Drawing something at 60Hz... The PIT comes in! Going to Kernel!
  2. Finishes drawing.

Program 2:
  1. Drawing something at 60Hz... The PIT comes in! Going to Kernel!
  2. Finishes drawing.

Kernel (on PIT interrupt):
  1. ...
  2. Going to the next process! If 1 was running, 2 runs. If 2 was running, 1 runs.

If several processes were running, this is what the user sees:
  1. Let's start a game! (Begins process 1)
  2. Let's start...
  3. Let's start a game! (Begins process 2)
  4. WTF?! Both games are flickering at about 1/10th of a second!

Gigasoft wrote:
KemyLand wrote:
There's a fatal error here. Try to move a window or the mouse and you get the same thing Windows does when you move a window while the system is overloaded: the trail of the movement remains on screen. That's because you'd need to do other extra computations (unmentioned in your design) to get everything right.

In the case of an overloaded system, I prefer the system to remain responsive rather than having to wait for it to redraw everything on the screen before I can do anything at all.

How do you think the OS you were writing this post on does things? The system won't stay any more responsive this way, since graphics aren't the only work being done. 60Hz is just enough that the lag is unnoticeable on an overloaded system; obviously, if other things are being done, the lag will be noticed. You don't sit and wait for it! Do you remember interrupts exist? Anyway, I'm not sure if you are talking about an RTOS, because you're preferring responsiveness over performance :? .

Gigasoft wrote:
KemyLand wrote:
This design is theoretically possible, although it requires lots of FPU/MMX/SSE* instructions to calculate all the details, not to mention that you'd probably have to be a genius to design such algorithms. That overhead can easily eat all the system's performance.

Obviously, keeping track of where the windows are, what is visible and what must be redrawn is much faster than shoving millions of pixels around in the system repeatedly. There is no floating point arithmetic involved and the structures describing these regions are typically very small. They can basically be thought of as a list of rectangles. This does not require that all windows have to be rectangular. I could easily make windows with rounded corners if I wanted to.

Again, that's dirty and error prone. You cannot simply do:
Code:
class Window {
  // ...
  bool isThisWindowVisible; // a single visibility flag
  // ...
};

Because windows can overlap each other (the 3D-ish stacking), you can't just have a single variable. If one window overlaps another with a small X/Y offset, a visible area made of a vertical and/or horizontal slice appears. You would end up reserving a lot of dynamically allocated floats and doubles just for this. No floating point arithmetic, you say, eh? I don't know whether to laugh or cry. Do you at least remember that floating point computation is essential for both 2D and 3D graphics? And you can't tell me FP computation is somehow faster than integer computation. I will admit one thing: you should avoid redrawing minimized windows, but nothing more.

Gigasoft wrote:
KemyLand wrote:
Remember that GUIs are not just square, pixelated windows. They involve all sorts of graphical effects. Maybe, someday, a graphical effect you add won't work with your design, neither in theory nor in practice. A simple example? The Desktop Cube effect of Compiz.

For the Desktop Cube effect, one would simply replace all the top level windows with a large transparent window that manages the application windows in its own way. It would keep a separate buffer for each window, and redraw the composited view whenever one or more windows have changed. Then it would wait for a small amount of time before checking again. Transparent windows (and other effects, such as blurring) can be implemented by having the system draw everything that is in the transparent part of the window into a temporary surface which is passed to the application, which then draws its own UI on top.

I should repeat here that you said "FP arithmetic isn't needed". For a 3D effect, would you still say that [-X ?! These are simple mathematical laws. You need matrices and vectors for this kind of thing. The Cube effect graphically transforms the properties of windows: during a cube switch, the windows are multiplied by several matrices to get a transformed version that fits onto what appears to be a rotating 3D cube. You cannot just keep separate buffers for each window, because they're transformed in really complex ways. You can't constantly malloc()/free() (goodbye performance), nor depend on std::vector :wink: . Everything is unpredictable in this land!
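
For what it's worth, the kind of per-corner math I'm talking about looks roughly like this (a sketch; the rotation, the projection and all the names are just an illustration):
Code:
#include <cmath>

struct Vec3 { float x, y, z; };

// Rotate a window corner around the vertical (Y) axis, as a cube face would.
static Vec3 rotateY(Vec3 p, float angle)
{
    float c = std::cos(angle), s = std::sin(angle);
    return { p.x * c + p.z * s, p.y, -p.x * s + p.z * c };
}

// Simple perspective projection back to screen coordinates
// (eye sits 'focal' units in front of the screen plane).
static void project(Vec3 p, float focal, float cx, float cy,
                    float& sx, float& sy)
{
    sx = cx + focal * p.x / (p.z + focal);
    sy = cy + focal * p.y / (p.z + focal);
}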

Gigasoft"}
[quote="KemyLand wrote:
BTW, did you said you could update the screen asynchronously? Never mind! Some standards specificy a exact frequency for data to be transfered. HDMI for instance.

That does not hamper the ability to transfer things into the display chip's RAM. If you are talking about avoiding tearing, one can simply delay drawing until the next blanking period.
[/quote]
No, it doesn't harm the ability to do an unnecessary partial copy, but it does harm your model. Here, you are forced to update the screen every X Hz, and that's against your design =D> . Even VGA does this internally!

_________________
Happy New Code!
Hello World in Brainfuck :D:
Code:
++++++++[>++++[>++>+++>+++>+<<<<-]>+>+>->>+[<]<-]>>.>---.+++++++..+++.>>.<-.<.+++.------.--------.>>+.>++.


 Post subject: Re: drawing a GUI for my kernel, [ something like desktop
PostPosted: Mon Dec 29, 2014 10:21 pm 

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

sweetgum wrote:
How does that sound to people? Any better ideas?


For graphics, it's a set of layers: the video hardware itself, the video driver, some sort of virtual desktop layer, the GUI, the application/s, plus maybe some sort of widget library.

Note: The "virtual desktop layer" may serve 3 purposes. The first purpose is to combine physical screens into a single larger virtual screen (for example, maybe you've got a 400 mm wide monitor on the left and a 600 mm wide monitor on the right, and combine them to form a single virtual 1000 mm wide monitor). The second purpose is to have multiple desktops and allow the user to switch between them (for example, maybe you've one monitor and 12 virtual desktops, and the user can press "control+F1" to "control+F12" to switch between different virtual desktops; where each virtual desktop might be running a different GUI). The third purpose is to handle secure login (for example, maybe it's the only piece of software in the entire OS that's allowed to change the current userID), including starting whichever program the user's preference say the user wants as their GUI.

All of those pieces need to communicate in some way, and it needs to be fast. You do not want to push large amounts of raw pixel data through multiple layers. For example, if the application wants to display an animated GIF, then you don't want the application sending raw pixels to the GUI, which sends raw pixels to the virtual desktop layer, which sends raw pixels to the video driver/s. Instead, you want to split things into "control" and "data", and optimise the path that data takes. For example, maybe the application only tells the GUI the file name of the GIF and the location it wants it displayed in the application's window, and the GUI tells the virtual desktop layer similar information, which tells the video driver similar information; and the video driver itself loads the GIF and handles the animation (possibly including pre-processing it and caching it in video display memory so that you end up doing fast/hardware accelerated blitting for each frame of the animation). In that way the amount of "control" information is tiny (and only happens once rather than once per step in the animation), and the path data takes is "from video driver to video hardware" and not "from application all the way through multiple layers to video hardware, with a stop-over at Los Angeles to switch flights and get some nice bagels".
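
As a rough illustration (the struct, its fields and the command value are invented for the example, not a real protocol), a "control" message can be as small as this:
Code:
// Sketch of a tiny "control" message; names are hypothetical.
#include <cstdint>

enum class GuiCommand : uint32_t { ShowImageFile = 1 /* , ... */ };

struct ShowImageFileMsg {
    GuiCommand cmd;          // = GuiCommand::ShowImageFile
    uint32_t   windowId;     // which window/canvas the image belongs to
    int32_t    x, y;         // position within that window
    char       path[256];    // e.g. "/data/animation.gif"; the driver loads it
};
// Only this small struct travels through the layers; the pixel data never does.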

I'd also suggest doing it so that different pieces communicate in the same way. For example, if the GUI and virtual desktop layer communicate using a standardised messaging protocol over messaging, then the applications and GUI should communicate using the same standardised messaging protocol, so that you can have a full screen application talking directly to the virtual desktop layer with no GUI running at all (or maybe even a GUI running inside a window of another GUI). Note: "standardised messaging protocol over messaging" is just an example - it could be something using pipes, or sockets, or remote procedure calls or even maybe direct calls (and shared libraries) or whatever.

Also, your abstractions should be abstract. What I mean here is that device specific things should be hidden. Nothing (except the video driver/s) should need to know what the resolution or colour depth happens to be (for any of the monitors). The GUI and applications shouldn't be forced to care if they're currently being displayed by the virtual desktop layer or not; or if they're actually using one physical monitor or (e.g.) a massive 20 * 16 grid of screens. Applications shouldn't need to know if they're talking to the GUI or the virtual desktop layer.

Finally, whatever you're using for the communication between the pieces should be designed for hardware accelerated video drivers. In practice, this mostly means that the video driver will end up doing a whole lot of software rendering (until you're able to write native video drivers). If you don't do this in the beginning, then it becomes extremely difficult to support hardware accelerated video later on (without redesigning the communication/protocols and everything that uses them, most likely including throwing away most of your code and rewriting everything from scratch because it's too hard to retro-fit).

Basically what I'm saying here is that before you design a GUI itself, you want to design the video driver interface and communication between the pieces; then implement the video driver (all the software rendering, etc); then implement the virtual desktop layer.

Further thoughts:
  • Nothing prevents the video driver from doing HDR.
  • Nothing prevents the video driver from doing "hue shifting" for colour blind users (e.g. if the user can't tell the difference between green and blue, then when that user logs in the virtual desktop layer might tell the video driver/s that the user's profile wants hue shifting and the video driver/s might make greens more red).
  • Nothing prevents the video driver from rendering a frame in a higher resolution and then scaling down, to improve the perceived resolution. Note: This could be extremely advanced - e.g. taking into account the physical properties of the screen.
  • Nothing prevents the video driver from rendering a frame in a high colour depth (e.g. 48 bits per pixel) and then doing dithering to convert to whatever the video card actually wants, to improve the perceived range of colours.
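
For that last point, the dithering step could be as simple as an ordered dither per channel (a rough sketch; the function name and the 4x4 Bayer matrix choice are just an example):
Code:
// Rough sketch of ordered dithering from 16 bits per channel down to 8.
#include <cstdint>

static const int bayer4[4][4] = { { 0,  8,  2, 10},
                                  {12,  4, 14,  6},
                                  { 3, 11,  1,  9},
                                  {15,  7, 13,  5} };

// Convert one 16-bit channel value to 8 bits at screen position (x, y).
static uint8_t dither16to8(uint16_t v, int x, int y)
{
    // Threshold scaled into the 8 bits we are about to throw away.
    uint32_t threshold = (bayer4[y & 3][x & 3] * 256) / 16;
    uint32_t t = v + threshold;
    if (t > 0xFFFF) t = 0xFFFF;         // avoid overflowing to the next value
    return static_cast<uint8_t>(t >> 8);
}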


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: drawing a GUI for my kernel, [ something like desktop
PostPosted: Tue Dec 30, 2014 12:17 am 

Joined: Mon Jan 03, 2011 6:58 pm
Posts: 283
Brendan wrote:
...


Maybe you could preface posts like this with "If you're looking for A correct way to do this" or something similar; that would make your views/posts more acceptable/approachable.

Beyond that, for my own info:
Brendan wrote:
  • Nothing prevents the video driver from doing HDR.


Except Full-screen applications that wish to use Cel Shading instead.

Brendan wrote:
  • Nothing prevents the video driver from doing "hue shifting" for colour blind users (e.g. if the user can't tell the difference between green and blue, then when that user logs in the virtual desktop layer might tell the video driver/s that the user's profile wants hue shifting and the video driver/s might make greens more red).


Good point. Noted for my info.

Brendan wrote:
  • Nothing prevents the video driver from rendering a frame in a higher resolution and then scaling down, to improve the perceived resolution.

Except full screen applications that couldn't possibly render in real time at a higher resolution than they specify. (That resolution should be user-defined obviously, and be chosen from a list of resolutions that the user's monitor supports at all times)

Brendan wrote:
  • Nothing prevents the video driver from rendering a frame in a high colour depth (e.g. 48 bits per pixel) and then doing dithering to convert to whatever the video card actually wants, to improve the perceived range of colours.


See point above, except I would require all "image data" to be of a fixed format. (Currently, I would say a 32-bit [A8R8G8B8] format)...

On second thought, I'm not sure about that last point, simply because of devices like the Oculus Rift, which require more info than that...

- Monk


 Post subject: Re: drawing a GUI for my kernel, [ something like desktop
PostPosted: Tue Dec 30, 2014 2:47 pm 

Joined: Sat Nov 21, 2009 5:11 pm
Posts: 852
KemyLand wrote:
If several processes were running, this is what the user sees:
Let's start a game! (Begins process 1)
Let's start...
Let's start a game! (Begins process 2)
WTF?! Both games are flickering at about 1/10th of a second!

I have no idea what you are trying to say, but you seem to have a quite uninformed conception of how process scheduling works.
KemyLand wrote:
Because windows can overlap each other (the 3D-ish stacking), you can't just have a single variable. If one window overlaps another with a small X/Y offset, a visible area made of a vertical and/or horizontal slice appears. You would end up reserving a lot of dynamically allocated floats and doubles just for this. No floating point arithmetic, you say, eh? I don't know whether to laugh or cry. Do you at least remember that floating point computation is essential for both 2D and 3D graphics?

And where did I say that? I said a list of rectangles. Actually, it's a subdivision of screen space into vertical parts which are further subdivided horizontally. And the number of pixels on my screen is an integer, so there is no floating point math, or any other math. If there is a complicated 3D shape, or something dynamically generated, it can always be represented as a partially transparent rectangle.
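
A sketch of what I mean by that subdivision (the type names are made up for the example, not my actual code):
Code:
#include <vector>

struct Span { int x0, x1; };                           // horizontal run, pixels
struct Band { int y0, y1; std::vector<Span> spans; };  // vertical slice of screen
using Region = std::vector<Band>;                      // visible/dirty region

// Point-in-region test: everything is integer pixel arithmetic.
bool contains(const Region& r, int x, int y)
{
    for (const Band& b : r)
        if (y >= b.y0 && y < b.y1)
            for (const Span& s : b.spans)
                if (x >= s.x0 && x < s.x1) return true;
    return false;
}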

Quote:
Here, you are forced to update the screen every X Hz, and that's against your design =D> . Even VGA does this internally!

I am not forced to do anything. It is the display chip's job to handle the signal output. I do not have to spend time transferring the same image into VRAM that is already there for it to be able to output the same image over and over.


 Post subject: Re: drawing a GUI for my kernel, [ something like desktop
PostPosted: Tue Dec 30, 2014 2:59 pm 

Joined: Mon Jun 16, 2014 5:33 pm
Posts: 213
Location: Costa Rica
Gigasoft wrote:
KemyLand wrote:
If several processes were running, this is what the user sees:
Let's start a game! (Begins process 1)
Let's start...
Let's start a game! (Begins process 2)
WTF?! Both games are flickering at about 1/10th of a second!

I have no idea of what you are trying to say, but you seem to have a quite uninformed conception of how process scheduling works.

I must accept it wasn't an easily understandable example. What I meant is that if you don't have back buffers, the unpredictable nature of scheduling can make your graphics flicker and tear when drawn at high speed. You are the one who doesn't understand how scheduling works. You don't even provide a single argument justifying your point :roll: .

Gigasoft wrote:
KemyLand wrote:
Because windows can overlap each other (the 3D-ish stacking), you can't just have a single variable. If one window overlaps another with a small X/Y offset, a visible area made of a vertical and/or horizontal slice appears. You would end up reserving a lot of dynamically allocated floats and doubles just for this. No floating point arithmetic, you say, eh? I don't know whether to laugh or cry. Do you at least remember that floating point computation is essential for both 2D and 3D graphics?

And where did I say that? I said a list of rectangles. Actually, it's a subdivision of screen space into vertical parts which are further subdivided horizontally. And the number of pixels on my screen is an integer, so there is no floating point math, or any other math. If there is a complicated 3D shape, or something dynamically generated, it can always be represented as a partially transparent rectangle.

Are you still not accepting that ALL graphics require strong math? Please prove me wrong and write a graphical program where you don't use floating point math. In abstract graphics, everything is relative, thus FP math is required. See Brendan's answer: even he says that you cannot depend on pixels all the way down the graphics stack (if it can be called that :? ).

Gigasoft wrote:
KemyLand wrote:
Here, you are forced to update the screen every X Hz, and that's against your design =D> . Even VGA does this internally!

I am not forced to do anything. It is the display chip's job to handle the signal output. I do not have to spend time transferring the same image into VRAM that is already there for it to be able to output the same image over and over.

What I'm trying to say is that you won't be able to provide either "responsiveness" or "atomicity" with this model, as your image will be updated at a fixed frequency. Also, your model depends on VRAM. What will you do if a hypothetical device is not memory-mapped? If all the drawing has to be done through I/O communication, and the image has to be resent every tick (remember, everything here is theoretical, but it could exist...)? You'll be forced to follow my model.

_________________
Happy New Code!
Hello World in Brainfuck :D:
Code:
++++++++[>++++[>++>+++>+++>+<<<<-]>+>+>->>+[<]<-]>>.>---.+++++++..+++.>>.<-.<.+++.------.--------.>>+.>++.


 Post subject: Re: drawing a GUI for my kernel, [ something like desktop
PostPosted: Tue Dec 30, 2014 3:03 pm 

Joined: Mon Jun 16, 2014 5:33 pm
Posts: 213
Location: Costa Rica
This thread has become a harsh discussion between me and Gigasoft. Shouldn't we fork a separate thread, as the OP has a right to an answer (which only Brendan has given :? )?

_________________
Happy New Code!
Hello World in Brainfuck :D:
Code:
++++++++[>++++[>++>+++>+++>+<<<<-]>+>+>->>+[<]<-]>>.>---.+++++++..+++.>>.<-.<.+++.------.--------.>>+.>++.


 Post subject: Re: drawing a GUI for my kernel, [ something like desktop
PostPosted: Tue Dec 30, 2014 7:55 pm 

Joined: Mon Mar 25, 2013 7:01 pm
Posts: 5100
sweetgum wrote:
So, I have an idea.
A GIF/BMP of my 'desktop' that is drawn pixel by pixel.

Then I calculate the 'displacements' of the other drawings so I don't redraw the entire screen pixel by pixel, and only modify what has actually changed.

How does that sound to people? Any better ideas?

This is how most (all?) GUIs work, when hardware acceleration is unavailable or unsupported (or disabled by the user).

GUIs with hardware acceleration might give each window its own back-buffer and let the GPU composite them. This means a partially hidden window might receive updates for the part that is hidden, but most computers that support GPU-accelerated GUIs have the memory and PCI bandwidth to handle the extra load.




