osmaster wrote:
As far as I understand the driver development is the problematic part. Isn't there any universal way of querying the driver from applications?
You can design an OS in which such a "universal way" can be implemented.
But usually applications just call a library API without any intent to talk to a specific driver. The library can be implemented by the OS developers, or it can be an independent product; an independent product can be shipped as part of the OS distribution or delivered separately.
It means that, from an application developer's point of view, drivers are hidden under an abstraction layer such as a virtual file system or a graphics API. Exposing a driver directly to application developers is usually a bad idea, because application developers want a ready-to-use solution rather than bothersome low-level details. Still, access to the driver level can sometimes help an application developer get better performance, in case the OS developers were too lazy to expose a consistent API with enough low-level detail. They usually describe such a decision as "security driven", but in fact it is just a lack of effort to produce a good and consistent design (time and resource constraints, or plain laziness).
osmaster wrote:
I looked at all that intel documentary for their graphics chip, it's so stupid and complicated. I think it should be something like this:
1. I set the card to a mode
2. I pass buffer of data containing some graphical information 2D,3D
3. I get some output back
It's a bad idea to build such highly specialized hardware. Before graphical information can finally be presented to a user, some processing has to happen in between, and processing means an algorithm. The set of graphics-related algorithms is very large and simply cannot all be implemented in hardware (there still aren't enough transistors). Also, if particular algorithms were baked into the hardware, that would narrow too much the range of problems the hardware is useful for, and as a consequence the market for it would be very small. Hardware with a very small market is usually a money-losing investment.
That's why there are general-purpose chips that can run different algorithms. But an algorithm requires a developer to implement it, and developers want sufficiently low-level ways to control the hardware. Those usually look like a plain old assembly language with an instruction set corresponding to the particular hardware, so in effect there is an assembly language for every graphics card. Instruction sets from different vendors can differ a lot, while within one product line of one vendor the differences may be small. In fact, the complexity of graphics drivers is largely a reflection of the differences among hardware from many vendors.

There is also the legacy problem: a vendor sees it is too costly to invent a new instruction set every generation and to retrain the developers who write software for its hardware. So legacy and the lack of standards keep the world complicated. If there were a unifying standard, we could develop graphics drivers just the way we write x86 code.
osmaster wrote:
So my question is about universalizing the writing of drivers. How is that thing programmed, I have no idea, it's so complicated?
The unifying standard should come first. Then you can write a universal driver. Or you can write an OS that hides the complexity from application developers behind a well-designed abstraction layer (usually it's called an architecture).