OSDev.org

The Place to Start for Operating System Developers

All times are UTC - 6 hours




Post new topic Reply to topic  [ 12 posts ] 
 Post subject: QEMU VBE Get PMode interface
PostPosted: Sat Mar 10, 2018 4:21 pm 

Joined: Sat Sep 24, 2016 12:06 am
Posts: 90
So, I want to get the VBE protected-mode interface table (int 0x10, AX = 0x4F0A, BL = 0 — http://www.ctyme.com/intr/rb-0287.htm), but VBE returns AX = 0x0100. Has anyone run into this problem in QEMU?


 Post subject: Re: QEMU VBE Get PMode interface
PostPosted: Sat Mar 10, 2018 4:27 pm 

Joined: Sat Dec 27, 2014 9:11 am
Posts: 901
Location: Maadi, Cairo, Egypt
This function is optional. It doesn't need to be supported by the firmware.

On top of that, this function is used for bank switching and palettes, technologies that were deprecated in the 90s. Just use a linear framebuffer with high color or true color, which will also let you build a device-independent graphics API, independent of VBE, GOP, etc.
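For what it's worth, the device-independent approach can be sketched in a few lines. This is only an illustration (the struct and function names are made up, not from any real API): a clipped putpixel for a 32-bpp linear framebuffer, where the base address and pitch come from whatever the boot environment provides (VBE mode info, GOP, etc.):

```c
#include <stdint.h>
#include <stddef.h>

// Minimal sketch: a device-independent putpixel for a 32-bpp linear
// framebuffer. Base and pitch would come from VBE mode info, GOP, or
// whatever the boot environment hands you.
typedef struct {
    uint8_t  *base;   // start of the linear framebuffer
    uint32_t  pitch;  // bytes per scanline (may exceed width * 4)
    uint32_t  width;
    uint32_t  height;
} framebuffer_t;

static inline void fb_putpixel(framebuffer_t *fb, uint32_t x, uint32_t y,
                               uint32_t rgb)
{
    if (x >= fb->width || y >= fb->height)
        return;                                   // clip out-of-range writes
    uint32_t *row = (uint32_t *)(fb->base + (size_t)y * fb->pitch);
    row[x] = rgb;                                 // 0x00RRGGBB true colour
}
```

Everything above this function is hardware-specific; everything built on top of it doesn't have to be.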

_________________
You know your OS is advanced when you stop using the Intel programming guide as a reference.


 Post subject: Re: QEMU VBE Get PMode interface
PostPosted: Sun Mar 11, 2018 4:01 am 

Joined: Sat Sep 24, 2016 12:06 am
Posts: 90
I need the AX = 0x4F07 function, Set Display Start. That function is used to swap video pages (not buffers): e.g., I draw to page 0 while the user sees page 1; then I swap pages, so the user sees page 0 while I draw to page 1. That's how you get a flicker-free image on screen.


 Post subject: Re: QEMU VBE Get PMode interface
PostPosted: Sun Mar 11, 2018 4:12 am 

Joined: Sat Dec 27, 2014 9:11 am
Posts: 901
Location: Maadi, Cairo, Egypt
Don't use it. It'll make you depend on VBE and not a device-independent graphics API. Like I said, just use a linear framebuffer and implement pages/buffer/whatever you want to call them in software, not in hardware.
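A minimal sketch of what "pages in software" means (names are illustrative, not from any real API): render into an off-screen back buffer, then copy the finished frame to the linear framebuffer in one bulk pass, so the user never sees a half-drawn frame:

```c
#include <stdint.h>
#include <string.h>

// Software double buffering: draw into `back`, then present the whole
// finished frame to the LFB at once.
typedef struct {
    uint32_t *front;   // the hardware LFB (or a stand-in for testing)
    uint32_t *back;    // off-screen buffer the renderer writes into
    size_t    pixels;  // width * height
} swap_chain_t;

static void present(swap_chain_t *sc)
{
    // One bulk copy per frame; partial renders never reach the screen.
    memcpy(sc->front, sc->back, sc->pixels * sizeof(uint32_t));
}
```

The same code works whether the framebuffer came from VBE, GOP, or a native driver.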

_________________
You know your OS is advanced when you stop using the Intel programming guide as a reference.


 Post subject: Re: QEMU VBE Get PMode interface
PostPosted: Sun Mar 11, 2018 4:39 pm 

Joined: Sat Sep 24, 2016 12:06 am
Posts: 90
I'm using the LFB, but I get a flickering image. Did you read what I wrote?


 Post subject: Re: QEMU VBE Get PMode interface
PostPosted: Sun Mar 11, 2018 7:38 pm 

Joined: Fri Oct 27, 2006 9:42 am
Posts: 1925
Location: Athens, GA, USA
MrLolthe1st wrote:
I'm using the LFB, but I get a flickering image. Did you read what I wrote?


Are you double buffering, and synchronizing the video update to the vertical refresh? Both are pretty much necessities for smooth animation, even for just scrolling and the like.

It sounds like you are trying to use the VBE calls to double buffer by switching pages, but as omarrx024 points out, that isn't what the call does - it is for switching banks, and if you are using an LFB, then there aren't any banks. The code isn't doing what you think it does.

Also, the other point that omarrx024 is making isn't that the approach won't work on a system with a compatible BIOS, it is that few if any BIOSes from after around 2000 are compatible with it - it won't work on most systems at all. If this is OK to you, then all is good; however, you do need to consider if being limited to 20 year old hardware (or emulations of same) is worth it to you.

Note that the VBE approach does not, to the best of my knowledge, support vsync - that requires handling an interrupt in your own driver. This means that even with double buffering, you will still need to have your own GPU driver in order to prevent screen tearing (which is usually not all that noticeable when scrolling rendered text or moving windows, but can be very visible for more elaborate animation subjects).

EDIT: I stand corrected. According to the RBIL page for Int 10/AX=0x4F07, what it actually does is set where in the frame buffer the video adapter should begin the current active video frame. While this was apparently intended originally for selecting banks, it can indeed be used to define pages, sort of; but if it is like most BIOS routines (of any type), the performance is likely to be poor. Also, calling the routine with the argument BL=0x80 will indeed force the display reset to occur on the next available vertical refresh cycle.
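To make the RBIL description concrete (the helper and struct here are mine, purely illustrative): with BL=00h (or BL=80h to latch on the next retrace), CX is the first displayed pixel in the scanline and DX is the first displayed scanline. So flipping to page N of a mode that is `height` scanlines tall just means pointing DX at N screens' worth of scanlines:

```c
#include <stdint.h>

// Sketch: compute the CX/DX arguments for VBE 4F07h "Set Display Start"
// from a page index. Per RBIL, CX = first displayed pixel in scanline,
// DX = first displayed scanline.
typedef struct { uint16_t cx_pixel; uint16_t dx_scanline; } display_start_t;

static display_start_t page_to_display_start(uint16_t page,
                                             uint16_t height_px)
{
    display_start_t ds;
    ds.cx_pixel    = 0;                              // left edge of the line
    ds.dx_scanline = (uint16_t)(page * height_px);   // page N = N screens down
    return ds;
}
```

The actual int 0x10 call (and the real-mode plumbing to make it) is of course the hard part, as discussed below.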

Can someone with more experience on the topic give more information about this, please? I have a strong suspicion that the rather perfunctory explanation in RBIL is missing most of the really crucial details.

BTW, would you kindly give us a run-down of the host system hardware and software, as well as the QEMU configuration you are testing with? In your previous thread last week, you were very inconsistent in your statements about the target system - at one point you stated that you were targeting x86-64, but in the same sentence mentioned targeting the 486DX2 (which predates the long mode extensions for x86 by over a decade). I have the sense that we really need more information about your code and what you are trying to accomplish before we can give you usable advice.

One final thing: if you have a source code repository set up on GitHub or some similar free VCS host, please pass us a link to it (maybe adding it to your signature would be a good idea). If you don't... well, in that case, stop what you are doing and set one up, ASAP.

_________________
Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF
Ordo OS Project
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.


 Post subject: Re: QEMU VBE Get PMode interface
PostPosted: Sun Mar 11, 2018 10:31 pm 

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

Schol-R-LEA wrote:
EDIT: I stand corrected. According to the RBIL page for Int 10/AX=0x4F07, what it actually does is set where in the frame buffer the video adapter should begin the current active video frame. While this was apparently intended originally for selecting banks, it can indeed be used to define pages, sort of; but if it is like most BIOS routines (of any type), the performance is likely to be poor. Also, calling the routine with the argument BL=0x80 will indeed force the display reset to occur on the next available vertical refresh cycle.

Can someone with more experience on the topic give more information about this, please? I have a strong suspicion that the rather perfunctory explanation in RBIL is missing most of the really crucial details.


The only things I'd add are that the vertical sync is a "busy wait for vertical sync" and doesn't use an IRQ (and can waste up to 16.666 ms of CPU time waiting); that this function affects where the video card fetches data from video RAM when sending it to the monitor (which has nothing to do with how that video RAM happens to be mapped into the CPU's physical address space), and therefore it makes no difference whether you're using bank switching or an LFB; and that it's the caller's responsibility to make sure what it's planning to do is sane (e.g. that the video card actually has enough memory to support 2 frame buffers if the caller plans to use this function for "switch frame buffer on vertical sync" purposes).
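That last sanity check can be sketched directly (the function is illustrative, not from any spec): the VBE controller info block reports TotalMemory in 64 KiB units, so the caller should compare that against two full frames before committing to page flipping:

```c
#include <stdint.h>
#include <stdbool.h>

// Sketch: before using 4F07h for page flipping, verify the card has room
// for two full frames. VBE's TotalMemory field counts 64 KiB blocks.
static bool can_double_buffer(uint16_t total_memory_64k,
                              uint32_t pitch_bytes, uint32_t height)
{
    uint64_t vram  = (uint64_t)total_memory_64k * 64 * 1024;
    uint64_t frame = (uint64_t)pitch_bytes * height;
    return vram >= 2 * frame;
}
```

E.g. a 1024x768 32-bpp mode with a 4096-byte pitch needs 6 MiB for two frames, so a 4 MiB card fails the check while an 8 MiB card passes.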

Of course it's mostly not worth bothering with - it takes a lot of code to cover all the cases (e.g. "if( BIOS and not UEFI) { if (VBE3) { do annoying ugly mess to suit VBE3 } else if (VBE2) { do annoying ugly mess to suit VBE2 } else { do annoying ugly mess to suit real mode } } else { don't bother because it doesn't exist in UEFI }").


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: QEMU VBE Get PMode interface
PostPosted: Mon Mar 12, 2018 10:37 am 

Joined: Fri Oct 27, 2006 9:42 am
Posts: 1925
Location: Athens, GA, USA
Brendan wrote:
The only things I'd add is that the vertical sync is a "busy wait for vertical sync" and doesn't use an IRQ (and can waste up to 16.666 ms of CPU time waiting);


If you don't mind, I would like to confirm (for others mainly, but I will admit that I am unfamiliar with it myself) that you are specifically referring to the VBE function Int10/AX=0x4F07 when using the BL=0x80 option, and that (IIUC) vertical sync does (or at least can) issue an interrupt which a more sophisticated (and, you know, actually usable) video driver can trap.

I am pretty sure most older video cards - both un-accelerated cards and fixed accelerator cards - did issue a vsync interrupt, while GPU-based cards (and IGPU or APU video subsystems) synchronize the memory paging internally without requiring either explicit programmer action or busy waiting on the CPU. Is this correct?

If I am wrong here, how would polling for the vsync work? I can't imagine that card designers would find forcing a busy-wait on the developers acceptable, as it would make all those shiny acceleration features pretty close to worthless if the programmers couldn't do anything with them because they were stuck in a wait loop.

Brendan wrote:
Of course it's mostly not worth bothering with - it takes a lot of code to cover all the cases (e.g. "if( BIOS and not UEFI) { if (VBE3) { do annoying ugly mess to suit VBE3 } else if (VBE2) { do annoying ugly mess to suit VBE2 } else { do annoying ugly mess to suit real mode } } else { don't bother because it doesn't exist in UEFI }").


As flippant as this example might sound, it is actually an excellent point - for a programmer who has sufficient documentation for the GPU, using the VBE routines is not just less effective, it is also more effort, due to the inconsistent ways it was implemented and the fact that most video card and motherboard manufacturers stopped implementing the majority of VBE long ago as a pointless exercise. The OP is actually making things harder for themselves by going the VBE route.

The 'sufficient documentation' part is the main sticking point, as is the need for separate drivers for a variety of GPUs. While Intel has been providing full documentation on their IGPU subsystems for about 12 years (for all the good that does anyone, given how limited they are), AMD only started opening theirs up in (IIRC) 2014 (and appear to still keep the best goodies to themselves), while nVidia keep the necessary information under lock and key - though this hasn't stopped people like the Nouveau Group from reverse engineering large chunks of it. Still, most of the basic functions are pretty widely known, so unless you need the 3D accelerated functions, or want to program the GPU directly for high-performance gaming and simulation or GPGPU computing, you can get away with just the better known aspects of them for quite a while.

While all of that is a lot of work, MrLolthe1st (cute) will probably find that using VBE isn't worth the effort, while writing the GPU drivers will be. However, it is their call, not ours, all we can do is give advice.

_________________
Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF
Ordo OS Project
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.


 Post subject: Re: QEMU VBE Get PMode interface
PostPosted: Mon Mar 12, 2018 11:41 am 

Joined: Thu May 17, 2007 1:27 pm
Posts: 999
Modern GPUs do not need a vsync IRQ (or busy wait) to perform buffer swapping. They have double-buffered registers (including framebuffer base address, cursor plane base, overlay bases 1 to n, and so on) and some way to swap the register state on the next vsync.

The vsync IRQ is still useful for reporting to userspace that a frame is actually visible. Other than that, yes, the GPU itself (i.e. the command units, not the output pipelines) reports events via memory.

Usual disclaimer: The above is true for Intel GPUs; I expect it to be similar for other manufacturers but that is speculation.
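A toy model of that double-buffered register scheme (this is not real hardware; register names and behaviour are invented for illustration): software writes the pending framebuffer base at any time, and the output pipeline latches it into the active register only at vsync, so scanout never switches mid-frame:

```c
#include <stdint.h>
#include <stdbool.h>

// Toy model of a double-buffered scanout register: the active base only
// changes at (simulated) vsync, never in the middle of a frame.
typedef struct {
    uint64_t active_base;   // what the display is scanning out now
    uint64_t pending_base;  // what it will scan out after the next vsync
    bool     flip_armed;
} scanout_regs_t;

static void request_flip(scanout_regs_t *r, uint64_t new_base)
{
    r->pending_base = new_base;   // safe to write at any time
    r->flip_armed   = true;
}

static void on_vsync(scanout_regs_t *r)
{
    if (r->flip_armed) {
        r->active_base = r->pending_base;   // latch atomically at vsync
        r->flip_armed  = false;
    }
}
```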

_________________
managarm: Microkernel-based OS capable of running a Wayland desktop (Discord: https://discord.gg/7WB6Ur3). My OS-dev projects: [mlibc: Portable C library for managarm, qword, Linux, Sigma, ...] [LAI: AML interpreter] [xbstrap: Build system for OS distributions].


 Post subject: Re: QEMU VBE Get PMode interface
PostPosted: Mon Mar 12, 2018 11:58 am 

Joined: Wed Oct 27, 2010 4:53 pm
Posts: 1150
Location: Scotland
I use the BIOS for smooth hardware scrolling in any direction, but it isn't possible on all machines in their highest quality screen mode as they don't all have enough screen memory for that. My cheap netbooks can do it, but a very expensive laptop that I've tested can't (although that also means it can't do the job properly even if you write proper drivers for it).

When you call the BIOS to check for vertical sync, it simply doesn't return control to you until the retrace happens, which means you lose a chunk of processing time. But you can minimise that loss by running a timer each time the BIOS returns control to you, so that you don't call the BIOS again until the timer runs out, reducing the wait to a small fraction. It's not ideal, but it is viable, as long as you maintain the ability to switch back to real mode to call the BIOS.
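The timer trick above amounts to simple arithmetic (the constants and function name here are illustrative): after the BIOS returns at a retrace, arm a timer slightly shorter than one frame period, do useful work until it expires, and only then call the BIOS again, so the busy wait covers just the last sliver of the frame:

```c
#include <stdint.h>

// Sketch: how long to do useful work before calling the BIOS again,
// given the frame period and a safety margin to absorb timing jitter.
static uint32_t next_call_delay_us(uint32_t frame_period_us,
                                   uint32_t safety_margin_us)
{
    if (safety_margin_us >= frame_period_us)
        return 0;                           // degenerate: just busy-wait
    return frame_period_us - safety_margin_us;
}
```

At 60 Hz (a ~16666 µs frame) with a 1 ms margin, the BIOS only ever busy-waits for about the last millisecond of each frame.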

(I wrote half a real-mode emulator a few years ago with the intention of using that to call the BIOS for video calls while running in protected mode - it may be that this would enable you to multi-task the BIOS call while still doing useful work throughout the whole time it's waiting, but I've never reached the point where I can test that.)

If triple buffering is available, there may be an option to use the BIOS to return from the call straight away, with the switch between buffers made as soon as the retrace happens. But it's no use trying that with double buffering: you need to know when you can start writing to the freed buffer, and that has to wait until it's been switched away from, so that the mess you're making there isn't visible on the screen.
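The triple-buffer rotation described above can be sketched like this (a simplified model with invented names; real drivers track retrace completion as well): the displayed buffer, the buffer queued for the next retrace, and the buffer being drawn rotate each frame, so the renderer never has to wait before starting the next frame:

```c
// Toy triple-buffer rotation: after finishing a frame, queue it for the
// retrace, start drawing into the previously visible buffer, and let the
// previously queued buffer go on screen.
typedef struct {
    int shown;    // buffer currently scanned out
    int queued;   // buffer the hardware switches to at the next retrace
    int drawing;  // buffer the renderer writes into now
} tri_buffer_t;

static void tri_flip(tri_buffer_t *t)
{
    int finished = t->drawing;  // frame just completed
    t->drawing   = t->shown;    // old visible buffer becomes the new canvas
    t->shown     = t->queued;   // queued buffer goes on screen
    t->queued    = finished;    // finished frame waits for the retrace
}
```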

_________________
Help the people of Laos by liking - https://www.facebook.com/TheSBInitiative/?ref=py_c

MSB-OS: http://www.magicschoolbook.com/computing/os-project - direct machine code programming


 Post subject: Re: QEMU VBE Get PMode interface
PostPosted: Mon Mar 12, 2018 11:35 pm 

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

Schol-R-LEA wrote:
Brendan wrote:
The only things I'd add is that the vertical sync is a "busy wait for vertical sync" and doesn't use an IRQ (and can waste up to 16.666 ms of CPU time waiting);


If you don't mind, I would like to confirm (for others mainly, but I will admit that I am unfamiliar with it myself) that you are specifically referring to the VBE function Int10/AX=0x4F07 when using the BL=0x80 option, and that (IIUC) vertical sync does (or at least can) issue an interrupt which a more sophisticated (and, you know, actually usable) video driver can trap.


Yes - I was talking about the VBE function Int10/AX=0x4F07 when using the BL=0x80 option.

For native video drivers (not VBE) a vertical sync IRQ may or may not exist. In general ancient video cards used to have one (CGA, EGA), then VGA didn't bother, and some "(S)VGA clones" did and some didn't. Then video moved to PCI and most (all?) video cards used a PCI IRQ (possibly only when in "native mode" and not "legacy VGA mode"), then things like fixed function 3D accelerators (and GPUs) got added and video cards ended up supporting a vertical sync IRQ because they had an IRQ for other things anyway.

Of course I'm a little fuzzy on the exact details, because it's not something I care about - I'm an OS developer, not a video driver developer (see the last paragraph of this post).

Schol-R-LEA wrote:
If I am wrong here, how would polling for the vsync work? I can't imagine that card designers would find forcing a busy-wait on the developers acceptable, as it would make all those shiny acceleration features pretty close to worthless if the programmers couldn't do anything with them because they were stuck in a wait loop.


For games (especially poorly designed games) there's often an incredibly silly "game loop" that would (e.g.) get keyboard/mouse/network input, update stuff, generate the next frame of graphics, then busy wait for vertical sync and do a buffer flip; where the entire game and all timing is dependent on waiting for vertical sync. Also don't forget that for all graphically intensive video games there's a bizarre "if this game isn't using the CPU/s then nothing is using the CPU/s" assumption (multi-tasking is a feature that's almost always ignored by game developers). These things combined mean that often "busy wait for vertical sync" is what game developers wanted, and if there was a vertical sync IRQ then it'd mostly only be used for "busy wait until IRQ occurs".

Part of the reason for this is that often there's "global state" (e.g. where each entity is) that is modified by multiple pieces (by player controls, by enemy AI, by physics, etc) and also read by graphics; where doing everything serially avoids synchronisation difficulties (e.g. no need to worry about something modifying the data when graphics is trying to read it to generate the next frame).

Of course I think this is extremely bad/wasteful; but even now a lot of games are still written like this.
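The serial game loop being described looks roughly like this skeleton (all names are mine; the vsync wait is a stand-in): input, update, and render run strictly in sequence, which is exactly why nothing needs locking - and exactly why the whole game's timing hangs off the retrace:

```c
// Skeleton of the classic serial game loop: one thread, one timing
// source (the vsync wait), no synchronisation needed because nothing
// else ever touches the game state.
typedef struct { int frames_run; int entity_x; } game_state_t;

static void read_input(game_state_t *g)   { (void)g; /* poll devices   */ }
static void update(game_state_t *g)       { g->entity_x += 1; /* AI etc */ }
static void render(const game_state_t *g) { (void)g; /* draw the frame */ }
static void wait_vsync(void)              { /* busy-wait stand-in      */ }

static void run_frames(game_state_t *g, int n)
{
    for (int i = 0; i < n; i++) {
        read_input(g);    // everything serial: reads and writes of the
        update(g);        // global state can never race each other
        render(g);
        wait_vsync();     // the only clock the whole game has
        g->frames_run++;
    }
}
```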

Schol-R-LEA wrote:
Brendan wrote:
Of course it's mostly not worth bothering with - it takes a lot of code to cover all the cases (e.g. "if( BIOS and not UEFI) { if (VBE3) { do annoying ugly mess to suit VBE3 } else if (VBE2) { do annoying ugly mess to suit VBE2 } else { do annoying ugly mess to suit real mode } } else { don't bother because it doesn't exist in UEFI }").


As flippant as this example might sound, it is actually an excellent point - for a programmer who has sufficient documentation for the GPU, using the VBE routines is not just less effective, it is also more effort, due to the inconsistent ways it was implemented and the fact that most video card and motherboard manufacturers stopped implementing the majority of VBE long ago as a pointless exercise. The OP is actually making things harder for themselves by going the VBE route.

The 'sufficient documentation' part is the main sticking point, as is the need for separate drivers for a variety of GPUs. While Intel has been providing full documentation on their IGPU subsystems for about 12 years (for all the good that does anyone, given how limited they are), AMD only started opening theirs up in (IIRC) 2014 (and appear to still keep the best goodies to themselves), while nVidia keep the necessary information under lock and key - though this hasn't stopped people like the Nouveau Group from reverse engineering large chunks of it. Still, most of the basic functions are pretty widely known, so unless you need the 3D accelerated functions, or want to program the GPU directly for high-performance gaming and simulation or GPGPU computing, you can get away with just the better known aspects of them for quite a while.

While all of that is a lot of work, MrLolthe1st (cute) will probably find that using VBE isn't worth the effort, while writing the GPU drivers will be. However, it is their call, not ours, all we can do is give advice.


For video; I'd define several levels of support. For example:
  • Level 1: Boot loader sets up framebuffer using whatever makes sense for that boot loader; OS has no native video driver and just does software rendering for the frame buffer that the boot loader set up
  • Level 2: OS uses a native video driver that is capable of changing video modes and almost nothing else; and still uses software rendering for the frame buffer (that it set up itself)
  • ...
  • Level 999: OS uses a native video driver that has full support for everything; including shaders, GPGPU, movie decoders, SLI/crossfire, etc

My point here is that each level is a temporary stepping stone to the next level; and that "level 2" is superior to VBE and significantly easier than "level 999".

It's a huge mistake to say "I'll never bother with level 2, because level 999 is too hard"; and it's probably a mistake to spend so much time trying to make "level 1" as awesome as technically possible that you don't have any time left to think about reaching the (superior) "level 2".

I'd also say that the goal of an OS developer shouldn't be to write all drivers themselves (that's impractical due to time constraints alone). An OS developer's goal should be to provide enough to convince other people to write drivers for them; where once you reach a certain point (e.g. a small number of "level 2" video drivers for cases where it was easy, and everything else using generic "level 1" video support) things that make it easier for other people to write drivers (e.g. documentation/specifications, tools, etc) can be far more important than working on drivers yourself.


Cheers,

Brendan

_________________
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.


 Post subject: Re: QEMU VBE Get PMode interface
PostPosted: Tue Mar 13, 2018 10:52 am 

Joined: Fri Oct 27, 2006 9:42 am
Posts: 1925
Location: Athens, GA, USA
Brendan wrote:
For games (especially poorly designed games) there's often an incredibly silly "game loop" that would (e.g.) get keyboard/mouse/network input, update stuff, generate the next frame of graphics, then busy wait for vertical sync and do a buffer flip; where the entire game and all timing is dependent on waiting for vertical sync. Also don't forget that for all graphically intensive video games there's a bizarre "if this game isn't using the CPU/s then nothing is using the CPU/s" assumption (multi-tasking is a feature that's almost always ignored by game developers). These things combined mean that often "busy wait for vertical sync" is what game developers wanted, and if there was a vertical sync IRQ then it'd mostly only be used for "busy wait until IRQ occurs".

Part of the reason for this is that often there's "global state" (e.g. where each entity is) that is modified by multiple pieces (by player controls, by enemy AI, by physics, etc) and also read by graphics; where doing everything serially avoids synchronisation difficulties (e.g. no need to worry about something modifying the data when graphics is trying to read it to generate the next frame).

Of course I think this is extremely bad/wasteful; but even now a lot of games are still written like this.


While I certainly agree with you on this, I should point out that this wasn't just ignorance - game developers deliberately do this sort of thing, or did until recently anyway, mostly for reasons which were sort of sound in the past (mainly the belief that players would want all of the power of their system to be focused on the game they are currently playing, and the assumption that the game would need all of it anyway) but have been irrelevant for years if not decades.

When I took a course on game design in 2008 (it filled a major requirement, and I did learn a few things, but it was barely worth it - mostly, we focused on coaxing Torque3D into moving a few assets made by the professor in patterns dictated by the assignments), we were told flat out that trying to multithread a game was a Bad Thing, and that a game loop was the only way to go. A lot of online tutorials repeat this 'advice' even to this day, and until recently a newbie game dev who asked about using threads on most game dev fora would get the same ridicule that a newbie OS dev would get here if they asked about writing a kernel in VBScript.

This is why, until about four years ago, system-building 'gurus' such as Paul Heimlich and Linus Sebastian usually dismissed multicore performance as irrelevant for games, focusing on CPU clock speed and GPU performance over everything else when discussing gaming rigs.

Things have changed among the major commercial game designers, who have been slowly warming up to the idea of using more than one thread in one process (though many commercial games still don't), but indie devs often ignore - or are ignorant of - the issue, relying on the often poorly optimized out-of-the-box performance of the game engine they are using to be sufficient, which is why - among other things - PlayerUnknown's Battlegrounds (which, for those too sensible to care about the topic, was the runaway hit indie game of the past year) has such notoriously poor performance even on massively overpowered gaming rigs.

This is exacerbated by the fact that (according to what I have read), fitting multiple threads into most game engines works about as well as fitting afterburners onto most lawnmower engines.

[image: the Sky Cutter]

It is getting better, slowly, but it is still a major problem.

While some indie devs do tune their games, most don't, and it shows. Even some AAA houses do only minimal tuning initially, counting on being able to patch any issues later if they get a lot of complaints (or even leaving it to modders to do their work for them, as seen in such infamous dumpster fires as Aliens: Colonial Marines, Mass Effect Andromeda, or the PC version of Arkham Knight). This becomes all the more evident when a truly well-tuned game such as the 2016 Doom reboot comes along and shows just what a piss-poor job everyone else is doing with it.

But this is getting away from the thread topic. Sorry for the digression. And yes, I know that the Sky Cutter isn't actually made from a lawnmower, it was just made to look like one, but saying that earlier would have killed the joke.

_________________
Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF
Ordo OS Project
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.


Powered by phpBB © 2000, 2002, 2005, 2007 phpBB Group