Hi,
mariuszp wrote:
Geri wrote:
you should have linked an ISO, so we could debug it.
https://github.com/madd-games/glidix/bl ... glidix.iso
Also, indeed, my scheduler does not support priorities, and the PIT is fixed at a frequency of 1000 Hz, even though some processes call
yield() to give up their CPU time... but then the next process would get much less CPU time than normal... that does sound like a pretty bad design. And drivers like the IDE driver and the keyboard driver are just hanging there waiting for commands and still getting CPU time.
So, if there's 100 tasks that happen to be in the order "video driver, GUI, application, keyboard driver, 96 other things", then:
- The user presses a key immediately after the keyboard driver finishes
- We wait for 99 task switches and up to 99 ms before the keyboard checks for a keypress
- The keyboard gets the keypress and sends it to the GUI
- We wait for 97 task switches and up to 97 ms before the GUI gets the keypress
- GUI receives the keypress, decides what to do with it and sends it to the application
- We wait for 1 task switch before the application gets the keypress
- The application receives the keypress and updates its window but runs out of time before it can tell the GUI to update the video
- We wait for 100 task switches and up to 100 ms before the application can tell the GUI to update the video
- We wait for 99 task switches and up to 99 ms before the GUI knows that video needs to be updated
- The GUI updates its video and tells the video driver to update the screen
- We wait for 99 task switches and up to 99 ms before the video driver knows the screen needs to be updated
- The video driver updates the screen
In this scenario, it took 495 task switches and up to 495 ms between the user pressing a key and the user getting visual feedback.
The alternative (with correctly set task priorities and pre-emption) is:
- The user presses a key, kernel does a task switch to keyboard driver immediately
- The keyboard gets the keypress and sends it to the GUI, then blocks (waiting for next IRQ); causing an immediate task switch to GUI
- GUI receives the keypress, decides what to do with it, sends it to the application and blocks (waiting for next event); causing an immediate task switch to application
- The application receives the keypress and updates its window. It doesn't run out of time because all higher priority tasks are blocked anyway. It tells the GUI to update the video which unblocks the (higher priority) GUI and causes an immediate task switch/pre-emption
- The GUI updates its video, tells the video driver to update the screen and blocks (waiting for the next event), causing an immediate task switch
- The video driver updates the screen
In this case it's 5 task switches instead of 495 task switches, and none of the other 96 tasks (that weren't involved) wasted any CPU time; so the time between the user pressing a key and the user getting visual feedback is more like 2 ms instead of up to 495 ms.
Note that (for your scheduler) reducing the time between task switches (e.g. from 1 ms to 0.1 ms) increases the chance that a task won't be able to finish something important before it runs out of time and has to wait for all the other tasks to have their turn, and also increases the time spent doing the task switches themselves; for both of these reasons it can severely reduce performance. Also, increasing the amount of time tasks are given (e.g. from 1 ms to 10 ms) increases the amount of time an important task has to wait while unimportant tasks waste CPU time, which can also severely reduce performance.
Basically, it's very bad, and changing the amount of time tasks are given won't change that.
mariuszp wrote:
I'm gonna be adding APIC support and here's a question: if I set APIC to single-shot mode, and that "shot" happens when interrupts are disabled, is it missed or does it get sent as soon as I enable interrupts? What about if interrupts are disabled, the shot happens, then I program the APIC again, and then enable interrupts? (I'll need to do that for calls such as wait() which would need to trigger the scheduler to schedule another task).
"Disabling" interrupts (e.g. with the CLI instruction) only causes interrupts to be postponed (until the CPU is able to receive them again) and doesn't cause IRQs to be discarded.
If the local APIC timer sends an IRQ while IRQs are disabled in the CPU, then that IRQ is postponed. If you reset the local APIC timer count and then enable IRQs, the CPU receives the postponed IRQ immediately after IRQs are enabled (i.e. after the count has already been reset).
This may cause race conditions, so your code has to either avoid the race conditions or deal with them if/when they occur. I don't think there's a way to effectively avoid the race conditions. To handle them if/when they occur; the local APIC timer IRQ handler can check the timer count and ignore the IRQ if the count is non-zero.
mariuszp wrote:
(EDIT: I may also add that calling memcpy() while interrupts are disabled also causes the swap to screen to take a few seconds, is that a Bochs problem or could the memcpy() implementation I 'stole' from PDCLib be really bad and I should use "rep stosb" instead?)
Why are IRQs disabled in the first place? How much data are you copying? How does PDCLib implement 'memcpy()'?
I'd be tempted to assume that you're talking about blitting pixel data from some buffer in RAM to display memory. In that case: IRQs shouldn't be disabled; you do nothing to prevent data that hasn't changed since last time from being "re-copied" to display memory again for no sane reason; and you shouldn't be using generic code (e.g. `memcpy()`) for a very special case.
Cheers,
Brendan