Hi,
amn wrote:
So, with today's and yesterday's computer systems, we have an HDMI (back in the day, VGA) or another digital signal interface between a graphics card and a display device, typically an LCD. As far as I understand, the graphics card has some control over when to "refresh" the contents of the display (not video RAM, which is another aspect), but we typically talk about some applied refresh rate, a frequency at which the display in effect updates itself.
I am no electrical engineer, so I don't know if it is typically the graphics card that exclusively drives the display, or whether it is the display and/or the HDMI (as an example) subsystem in it that automatically polls the graphics card at regular intervals for updates from the framebuffer; these details notwithstanding, I wonder:
For current hardware, you can assume that the video card is mostly a pair of counters where each counter has a bunch of comparators. The first counter is incremented at a specific frequency (called the "pixel clock frequency"), and there are comparators that do certain things when that counter reaches certain values (start horizontal blanking, start horizontal sync, end horizontal sync, end horizontal blanking, etc). The comparator with the highest value resets the count back to zero and increments a "line counter". The line counter works mostly the same way - a bunch of comparators that do similar things when the line counter reaches certain values (start vertical blanking, start vertical sync, ..., reset the line counter back to zero).
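To make that a little more concrete, here's a rough simulation (in C - real hardware does this with dedicated counters and comparators, not a CPU loop) of that pair of counters, with the well-known 640x480 @ ~60 Hz timings plugged in as the comparator thresholds. Note that the refresh rate simply falls out of "pixel clock frequency divided by total pixels per frame" (25.175 MHz / (800 * 525) = ~59.94 Hz):

Code:
/* Sketch of the "pair of counters plus comparators" timing generator,
 * using the classic 640x480 @ ~60 Hz timings as an example. */
#include <stdio.h>

#define PIXEL_CLOCK_HZ 25175000UL   /* rate at which the pixel counter is incremented */

/* Horizontal comparator values (in pixels) */
#define H_ACTIVE       640          /* visible pixels */
#define H_SYNC_START   656          /* = 640 + 16 pixel front porch */
#define H_SYNC_END     752          /* = 656 + 96 pixel sync pulse */
#define H_TOTAL        800          /* = 752 + 48 pixel back porch; highest comparator, resets pixel counter */

/* Vertical comparator values (in lines) */
#define V_ACTIVE       480
#define V_SYNC_START   490          /* = 480 + 10 line front porch */
#define V_SYNC_END     492          /* = 490 + 2 line sync pulse */
#define V_TOTAL        525          /* = 492 + 33 line back porch; highest comparator, resets line counter */

int main(void) {
    unsigned pixel = 0, line = 0;

    /* Walk through exactly one frame's worth of pixel clock ticks */
    for (unsigned long ticks = 0; ticks < (unsigned long)H_TOTAL * V_TOTAL; ticks++) {
        int hsync   = (pixel >= H_SYNC_START && pixel < H_SYNC_END);
        int vsync   = (line  >= V_SYNC_START && line  < V_SYNC_END);
        int visible = (pixel < H_ACTIVE && line < V_ACTIVE);
        (void)hsync; (void)vsync; (void)visible;   /* real hardware drives output pins with these */

        if (++pixel == H_TOTAL) {                  /* highest horizontal comparator fires */
            pixel = 0;
            if (++line == V_TOTAL)                 /* highest vertical comparator fires - end of frame */
                line = 0;
        }
    }

    /* Refresh rate = pixel clock / pixels per frame */
    printf("Refresh rate = %.2f Hz\n",
           (double)PIXEL_CLOCK_HZ / ((double)H_TOTAL * V_TOTAL));   /* ~59.94 Hz */
    return 0;
}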
The monitor receives the resulting signals, and based on the timing and sync pulse polarities it has to auto-guess what the video mode is. Apart from that (especially for old CRT monitors), the monitor is relatively simple and just does what the video card's signals tell it to do. More importantly (originally) the monitor didn't have any reason to store any pixel values anywhere - you'd have an electron beam that charged up a few dots of phosphor, and that phosphor would glow for a little while. How long the phosphor glowed was part of the monitor's design and determined the refresh rate - in a way, you can think of "refresh rate" as "how often the phosphor dots need to be refreshed". If you refreshed the display too fast the phosphor dots would get "overcharged" and you'd get a horrible "everything too bright" picture, and if you refreshed the display too slowly you'd get a "too dark" picture (and if it's very bad, you'd get a flickering mess). Ideally (to avoid these problems), the video card uses a refresh rate that the monitor is designed for ("60 Hz or suffer").
Modern monitors are designed to behave the same way - in other words, they're designed to emulate phosphor dots that need refreshing. I'd assume this is mostly done with capacitors, where the video card's signals are used to charge the right pixel's capacitor/s quickly, and the capacitor/s (driving the LCD or LED or whatever) take "about 1/60th of a second" to discharge.
In a perfect world (Hah!) the whole lot would be completely redesigned so that the monitor has its own memory/frame buffer that doesn't need to be constantly refreshed, where the video card only tells the monitor which areas were modified (as soon as they're modified), and where the monitor can use whatever refresh rate it likes (possibly splitting the screen into many zones and refreshing all the zones in parallel, to get extremely high refresh rates without requiring an insane amount of bandwidth from video card to monitor).
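Purely as an illustration of that hypothetical redesign (nothing like this exists today - the names and fields are invented), the video card might send the monitor small packets describing only the area that changed:

Code:
/* Hypothetical sketch of the "monitor owns the frame buffer" idea: instead of
 * resending every pixel 60 times per second, the video card would only send a
 * packet like this when something actually changes. Invented for illustration;
 * no real cable/protocol works this way. */
#include <stdint.h>

struct damage_update {
    uint16_t x, y;            /* top-left corner of the modified rectangle */
    uint16_t width, height;   /* size of the modified rectangle */
    uint32_t format;          /* pixel format of the payload (e.g. 32-bpp ARGB) */
    /* ...followed by width * height pixels of payload... */
};

/* The monitor would store updates in its own memory and refresh its panel
 * (possibly many zones in parallel) at whatever rate it likes, independent
 * of when/how often the video card sends updates. */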
amn wrote:
Why, in this day and age, can't we switch over to a manual, software-initiated display update mechanism (not just framebuffer updates!) supported at the hardware level all the way to the display?
Mostly because "small evolutionary steps" are more cost-effective (e.g. easier for compatibility, etc) than "redesign the world".
However, please note that how software works is not restricted by the way hardware works. Just because the video card sends pixel data to the monitor at a specific rate (e.g. 60 frames per second) does not mean that software must update the frame buffer at that same rate.
More specifically, with modern (GPU) hardware one idea would be to give each application its own texture, and then let the video driver merge/"compose" all the textures (under control of the GUI) to generate the frame buffer. In this case each application can update its texture (possibly using a "push" model) whenever it feels like it without caring about refresh rate at all.
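As a sketch of that idea (the names, sizes and structures are invented for illustration, and it glosses over things like synchronisation, scaling and pixel formats), the driver's "compose" step might look something like this - running at whatever rate scan-out needs, regardless of how often each application touched its own texture:

Code:
/* Illustrative sketch of "one texture per application, composed by the video
 * driver under control of the GUI". Not any particular GUI/driver API. */
#include <stdint.h>
#include <string.h>

#define SCREEN_W 1024
#define SCREEN_H 768

struct app_texture {
    int x, y;                 /* where the GUI wants this application's output placed */
    int width, height;
    const uint32_t *pixels;   /* updated by the application whenever it feels like it */
};

/* Called by the video driver at ITS rate (e.g. once per 60 Hz scan-out),
 * independent of how often each application updated its texture. */
static void compose(uint32_t *frame_buffer, const struct app_texture *apps, int napps)
{
    memset(frame_buffer, 0, (size_t)SCREEN_W * SCREEN_H * sizeof(uint32_t));
    for (int i = 0; i < napps; i++) {
        for (int row = 0; row < apps[i].height; row++) {
            int fy = apps[i].y + row;
            if (fy < 0 || fy >= SCREEN_H) continue;
            for (int col = 0; col < apps[i].width; col++) {
                int fx = apps[i].x + col;
                if (fx < 0 || fx >= SCREEN_W) continue;
                frame_buffer[fy * SCREEN_W + fx] = apps[i].pixels[row * apps[i].width + col];
            }
        }
    }
}

int main(void) {
    static uint32_t fb[SCREEN_W * SCREEN_H];
    static uint32_t red[64 * 64];
    for (int i = 0; i < 64 * 64; i++) red[i] = 0x00FF0000;   /* one application's texture: a red square */

    struct app_texture apps[] = { { 100, 100, 64, 64, red } };
    compose(fb, apps, 1);     /* the driver does this at scan-out rate */
    return 0;
}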
Cheers,
Brendan