OSDev.org

The Place to Start for Operating System Developers
 Post subject: Using PIT Timer Frequency of 1 μs Instead of 1 ms
PostPosted: Thu Jun 29, 2017 6:22 pm 
Author: ~

Joined: Tue Mar 06, 2007 11:17 am
Posts: 1225
I have been thinking: why not use a 1 microsecond PIT tick instead of 1 millisecond? It would improve the timing resolution of the system.

Is this code good for setting the tick period to 1 microsecond (i.e. programming the PIT for a 1,000,000 Hz IRQ rate)?
Code:
;Inputs:
;   EAX -- Desired IRQ frequency in Hz
;   ECX -- 1 = round the countdown value up, anything else = truncate
;;
PIT_8253_4__Set_Frequency:
pushf
push wideax
push widebx
push widedx

;Calculate the countdown (reload) value for the requested frequency:
;
;              1193180
; COUNTDOWN = ---------
;             FREQUENCY
;
;;
  xor  edx,edx      ;Clear EDX so EDX:EAX forms the dividend
  xchg eax,ebx      ;Move the frequency argument into EBX
  mov  eax,1193180  ;PIT input clock, so EDX:EAX==1193180
  div  ebx          ;EDX:EAX / EBX -- quotient in EAX, remainder in EDX

  cmp ecx,1         ;Was rounding up requested?
  jne .noroundup    ;If not, keep the truncated quotient
  cmp edx,0
  je  .noroundup    ;If the remainder is 0 there is nothing to round
  inc eax           ;Otherwise round the countdown up by 1
 .noroundup:

;Write the countdown value to channel 0
;(Mode/Command byte 0x36 = channel 0, access lobyte/hibyte,
; mode 3 square wave, binary counting):
;;
  push wideax
  mov al,0x36   ;Channel 0, lobyte/hibyte, mode 3
  out 0x43,al   ;Write the PIT Mode/Command register (write-only)
  pop wideax

  out 0x40,al   ;Write the LSB to the PIT channel 0 data port
  shr wideax,8  ;Move the MSB of the countdown into AL
  out 0x40,al   ;Write the MSB to the PIT channel 0 data port



pop widedx
pop widebx
pop wideax
popf
retwide
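For comparison, here is a rough C sketch of the same countdown calculation. The constant and the round-up behaviour mirror the routine above; the function name and the clamping are mine, not from the original code:

Code:
#include <stdint.h>

#define PIT_BASE_HZ 1193180u   /* same input-clock constant the routine above uses */

/* Returns the channel 0 reload ("countdown") value for a requested IRQ
 * frequency; round_up != 0 mirrors the ECX==1 behaviour above. */
static uint16_t pit_reload_for_hz(uint32_t freq_hz, int round_up)
{
    uint32_t count;

    if (freq_hz == 0)
        freq_hz = 1;                     /* avoid a divide by zero */
    count = PIT_BASE_HZ / freq_hz;
    if (round_up && (PIT_BASE_HZ % freq_hz) != 0)
        count++;
    if (count > 65536)
        count = 65536;                   /* ~18.2 Hz is the slowest the PIT can go */
    if (count == 0)
        count = 1;
    return (uint16_t)count;              /* 65536 truncates to 0, which the PIT treats as 65536 */
}

Note that pit_reload_for_hz(1000000, 0) returns 1, i.e. the smallest possible reload value; whether that is a good idea is exactly what the replies below discuss.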




Code:
PIT_8253_4__Install:
pushf
push wideax
push widebx
push widecx

;Set our PIT timer tick to 1 microsecond
;(EAX = frequency in Hz, ECX = 0 means don't round up):
;;
  mov wideax,1000000
  mov widecx,0
  call PIT_8253_4__Set_Frequency

;Now install the ISR in the IDT
;(AL = interrupt number, EBX = entry point of our ISR):
;;
  mov al,0xF0               ;Install PIT at INT 0F0h
  mov ebx,PIT_8253_4__ISR
  call IDT_x86_32__InstallISR

;Finally, activate the IRQ by unmasking its corresponding bit
;in the Master PIC (bit 0 for IRQ 0, which is the PIT):
;;
  in  al,0x21      ;Read the current interrupt mask
  and al,11111110b ;Unmask bit 0
  out 0x21,al      ;Write the new interrupt mask

pop widecx
pop widebx
pop wideax
popf
retwide


 Post subject: Re: Using PIT Timer Frequency of 1 μs Instead of 1 ms
PostPosted: Thu Jun 29, 2017 6:33 pm 
Author: Brendan

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

~ wrote:
I have been thinking why not use 1 microsecond as the PIT frequency?


Because it typically takes 1 us to do an IO port write; so with an IRQ every 1 us you'd waste 100% of a CPU's time doing nothing but sending EOI to the master PIC chip. Of course in practice all that would happen is that you'd miss most of the IRQs and end up with an unpredictable mess.

~ wrote:
It would improve the timing quality of the system instead of using 1 millisecond.


Using "one shot mode" you can get 838 ns precision (the highest possible precision) out of the PIT chip without causing a massive disaster.


Cheers,

Brendan

 Post subject: Re: Using PIT Timer Frequency of 1 μs Instead of 1 ms
PostPosted: Thu Jun 29, 2017 7:53 pm 
Author: ~

Joined: Tue Mar 06, 2007 11:17 am
Posts: 1225
But a CPU above 4 MHz could probably handle an interrupt every microsecond (it runs anywhere from 4 to 4000 cycles per microsecond), unless it completely stalls for a long time while writing a port.

It would be surprising if a machine were equipped with a fast timer but an IRQ controller that cannot keep up with it.

What if the PIT is programmed to fire every 2, 4 or 8 μs? That would still be acceptable depending on the CPU speed, and could be adjusted. We could keep a system-wide PIT counter that accounts for that many μs per tick, which would still be very usable precision. That precision could probably be adjusted at run time depending on system load and needs.

It looks like with any PIT mode, being based on a 1.19 MHz clock, we would get around 840 ns resolution at best, but with some modes it seems we would basically need to poll the PIT, since we wouldn't get an IRQ.

And if our IRQ handler does nothing but increment a tick counter and signal that it's time to scan a list of threads from a table of function pointers (the scanning done outside the IRQ itself), then even if 1 μs doesn't work for a machine, 2, 4, 8 or at most 16 μs per tick should work via interrupts without swamping the PIT.
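A minimal C sketch of such a "counter plus flag" handler, with purely illustrative names, might look like this:

Code:
#include <stdint.h>

/* Illustrative globals; a real kernel may use different names. */
volatile uint64_t pit_ticks;          /* monotonically increasing tick count          */
volatile int      scheduler_pending;  /* polled by the scheduler loop outside the IRQ */

/* Body of the IRQ 0 handler: do nothing but count and set a flag.
 * Even with a body this small, the EOI write to the legacy PIC still
 * costs on the order of 1 us per interrupt. */
void pit_irq_handler(void)
{
    pit_ticks++;
    scheduler_pending = 1;
}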

 Post subject: Re: Using PIT Timer Frequency of 1 μs Instead of 1 ms
PostPosted: Thu Jun 29, 2017 8:27 pm 
Author: Brendan

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

~ wrote:
But for CPUs above 4 MHz the CPU could probably handle 4000 cycles per microsecond, unless it completely locks a long time while writing a port.


The ancient PIC chip and ancient PIT chip run at ancient ISA bus speeds; and the "~1 us delay" comes from the speed of the ancient ISA bus (e.g. 4 MHz ISA bus speed, with several bus cycles per IO port access).

~ wrote:
It would be surprising that the machine was equipped with a fast timer but an IRQ controller that cannot handle it.


Modern IRQ controllers (IO APIC, local APICs) and fast timers (HPET, local APIC timer) have no problem - they typically run at "CPU local bus speed" and not "vintage museum bus" speed.

~ wrote:
What if the PIT is programmed to be triggered every 2, 4 or 8 μs? It would still be acceptable depending on the CPU speed, and could be adjusted. We could keep a system PIT counter to account for that amount of μs per tick, and it would be a very usable precision.


With "divisor = 1" (1193.181333 KHz, about 838 ns between IRQs) the PIC chip probably won't be fast enough to handle it, and it'd be too unreliable to consider.

With "divisor = 2" (596.5906666 KHz, about 1.676 us between IRQs) the CPU would spend 1 us sending EOI to the PIC chip, then be able to do useful work for 0.676 us. This works out to about "1 - 0.676/1.676 = 60% of CPU time spent on EOIs".

With "divisor = 3" (397.727111 KHz, about 2.514 us between IRQs) the CPU would spend 1 us sending EOI to the PIC chip, then be able to do useful work for 1.514 us. This works out to about "1 - 1.514/2.514 = 40% of CPU time spent on EOIs".

With "divisor = 65536" (18.2 Hz, about 54925.4 us between IRQs) the CPU would spend 1 us sending EOI to the PIC chip, then be able to do useful work for 54924.4 us. This works out to about "1 - 54924.4/54925.4 = 0.0018% of CPU time spent on EOIs".
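The percentages above follow directly from the divisor. A quick throwaway calculation such as the following, assuming the ~1 us EOI cost stated above and the nominal PIT input clock, reproduces them:

Code:
#include <stdio.h>

/* Reproduces the estimates above: period = divisor / PIT input clock,
 * with ~1 us of every period assumed to be spent sending EOI. */
int main(void)
{
    const double base_hz = 1193182.0;                 /* nominal PIT input clock */
    const double eoi_us  = 1.0;                       /* assumed cost of one EOI */
    const unsigned divisors[] = { 1, 2, 3, 65536 };

    for (unsigned i = 0; i < sizeof divisors / sizeof divisors[0]; i++) {
        double period_us = divisors[i] * 1000000.0 / base_hz;
        double overhead  = 100.0 * eoi_us / period_us;
        printf("divisor %5u: %10.3f us between IRQs, ~%.4f%% of CPU time on EOI\n",
               divisors[i], period_us, overhead > 100.0 ? 100.0 : overhead);
    }
    return 0;
}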

~ wrote:
It looks like with any PIT mode based on 1.19MHz we would have around 840 ns, but with some modes we would basically need to poll the PIT somehow as we wouldn't have an IRQ.


Polling the PIT requires IO port accesses (to an ancient legacy ISA device) that cost 1 us each.

~ wrote:
And if our IRQ does nothing but increment a tick counter and signal that it's time again to scan a list of threads from a table of function pointers, done outside the IRQ itself, if 1 μs doesn't work for a machine, probably 2, 4, 8 or at most 16 μs per tick should really work via interrupts without clogging the PIT.


The sane alternative is to use "one shot mode": set the timer so that it expires when you actually need it to expire (e.g. the time until the next task switch happens or the time until the next sleeping task has to be woken up, whichever is sooner) with 838 ns precision; and avoid millions of pointless IRQs that you don't need and that waste a massive amount of CPU time for no sane reason.
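As a hedged sketch of what such "one shot" programming can look like in C (PIT mode 0 on channel 0, with an assumed outb() port-write helper; names and the rounding are illustrative only):

Code:
#include <stdint.h>

/* Assumed port-write helper (GCC/Clang inline asm). */
static inline void outb(uint16_t port, uint8_t val)
{
    __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
}

/* Arm PIT channel 0 in mode 0 ("interrupt on terminal count") so that a
 * single IRQ 0 fires roughly ns_from_now nanoseconds from now, then stops.
 * One PIT input period is ~838 ns and the count is 16 bits, so the longest
 * delay that can be programmed this way is about 54.9 ms. */
static void pit_arm_one_shot(uint64_t ns_from_now)
{
    uint32_t count = (uint32_t)(ns_from_now / 838);   /* ns -> PIT periods (approx.) */

    if (count == 0)
        count = 1;
    if (count > 65535)
        count = 65535;

    outb(0x43, 0x30);                        /* channel 0, lobyte/hibyte, mode 0 */
    outb(0x40, (uint8_t)(count & 0xFF));     /* low byte of the count  */
    outb(0x40, (uint8_t)(count >> 8));       /* high byte of the count */
}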


Cheers,

Brendan

 Post subject: Re: Using PIT Timer Frequency of 1 μs Instead of 1 ms
PostPosted: Thu Jun 29, 2017 11:10 pm 
Author: Kazinsal

Joined: Wed Jul 13, 2011 7:38 pm
Posts: 558
Most PITs and PIT emulations (consider that most machines have a Super I/O chip that emulates a PIT these days, if they're not even doing it on the PCH) don't have a reliably higher clock rate than about 500 Hz in my experience. This includes ESXi, which is an enterprise hypervisor, and is built to have incredibly high tolerances and run modern operating systems in a standard, cohesively designed environment.

If you need anything for sensitive timing and you're intent on using the PIT you should go back to the sandbox and let Daddy Linus and Uncle Nadella hold your hand.


 Post subject: Re: Using PIT Timer Frequency of 1 μs Instead of 1 ms
PostPosted: Fri Jun 30, 2017 12:08 am 
Author: ~

Joined: Tue Mar 06, 2007 11:17 am
Posts: 1225
Kazinsal wrote:
Most PITs and PIT emulations (consider that most machines have a Super I/O chip that emulates a PIT these days, if they're not even doing it on the PCH) don't have a reliably higher clock rate than about 500 Hz in my experience. This includes ESXi, which is an enterprise hypervisor, and is built to have incredibly high tolerances and run modern operating systems in a standard, cohesively designed environment.

If you need anything for sensitive timing and you're intent on using the PIT you should go back to the sandbox and let Daddy Linus and Uncle Nadella hold your hand.
A 1,000,000 Hz rate seems to work on a 2 GHz K8MM-V PC, but seems to fail on a 550 MHz ThinkPad 390X laptop. An 838,096 Hz rate also seems to fail on the 550 MHz machine. On that one, a 125,000 Hz rate works (8 μs per tick, 1/8 of the 1,000,000 Hz rate), so a way to detect the maximum usable rate without locking up could probably be developed. 250,000 Hz and 500,000 Hz rates also work on the 550 MHz machine.

This potential problem seems to affect only ISA devices.

There are many demos still being produced, and they still fail to run properly in all emulators, probably because of things like this. The PIT could still be fully emulated on top of the more modern timing mechanisms. It could even be implemented in hardware with an ISA-speed mode and a PCI-speed mode transparent to legacy code, for example through fixed PC configuration ports on some Super I/O PCI device.

But that's not the point.

I have just tested my code with the PIT set to a rate of around 1,000,000 Hz to produce nearly 1 μs ticks. The keyboard works as always.

It seems that, as with any processor, you do well to regulate how much work those devices are asked to do at any moment, depending on the tasks you want them to perform, and to record the current PIT speed at all times.

What I mean is that the system seems to work without issues if you do things like reprogramming the PIT to 1 ms ticks while really slow devices like the floppy disk are being accessed, and also record the current PIT speed in a global variable.

I think that's why all OSes feel much slower when the floppy is accessed: they must be taking care to run slowly enough to work on all existing motherboards. The code might run at the full 1 μs tick rate with the floppy and the PIT on some machines, but with closed-source OSes, and given the technical skill needed to adapt existing code, everyone prefers to slow the whole system down only when needed, mostly when accessing the floppy drive. The PIT must be slowed down in that case, but the slowdown only needs to be applied at the start and end of critical operations, mainly when doing things that generate interrupts from even slower devices like the floppy.

Plus, that's code that probably no one has touched in the decades since it was first written.

I will keep my kernel running with the PIT at 1 μs to see how all my machines behave, applying a dynamic slowdown when needed, for example when accessing the floppy (unless I can prove it unnecessary even on my 386DX). I want to see more of the actual effect of running a default PIT tick of 1 μs. At least now I know that if something fails, the interrupt rate is a probable cause.

In particular, my kernel currently seems to lock up if I perform floppy DOR operations without first setting the PIT to 1 ms and then back to 1 μs when done. If I wrap the DOR handling in a function that slows the PIT down at the start and speeds it back up at the end, then using a 1 μs PIT seems to be no problem. I also have an infinite loop that waits for the floppy IRQ to complete, using a global flag that the floppy IRQ handler sets to 1, obviously with interrupts enabled.

I should probably remove any and all infinite loops from my kernel and turn them into loops that time out and return an error, so it can never lock up; not necessarily just for this, but for any unexpected error. A good kernel probably has no infinite loops in it.


Last edited by ~ on Fri Jun 30, 2017 1:38 am, edited 6 times in total.

 Post subject: Re: Using PIT Timer Frequency of 1 μs Instead of 1 ms
PostPosted: Fri Jun 30, 2017 12:31 am 
Author: ~

Joined: Tue Mar 06, 2007 11:17 am
Posts: 1225
Brendan wrote:
The sane alternative is to use "one shot mode", set the timer so that it expires when you actually need it to expire (e.g. the time until the next task switch happens or the time until the next sleeping task has to be woken up, whichever is sooner) with 838 ns precision; and avoid millions of pointless IRQs that you don't need and that waste a massive amount of CPU time for no sane reason.


Cheers,

Brendan
I want to implement simple multithreading, the simplest possible form of multitasking, based on a fast timer, a global timer counter, and associated timing functions that handle counter wrap-around without failing, along with functions to register and clear timer handlers.

Then I want to have an array of function pointers and associated timeout structures.

Then I would have some sort of scheduler outside any IRQ or interrupt routines. It would use a loop with HLT instructions to avoid consuming too much CPU power, most probably as part of the main function of an external program, until I get mature scheduler code.

The scheduler would simply walk the elements of the function pointer array, and for any pointer that is not NULL, test the associated timeout structure. If the timeout interval has passed for a function pointer, the function is run.

A timer would be cleared simply by setting its associated element in the function array, or its timer handle, to NULL (value 0).

I want to be able to create an arbitrary number of timed function pointers and timer structures on the fly, for example to learn by porting old games and giving them timer-based multithreading. That's why I want a global PIT at that 1 μs speed, plus the individual timer structures used to decide whether any timer or function is due to run, all in the same main process, simply by running a thread scheduler over all of those timed functions, which in turn may register further timers from within those timed routines.

This is probably good enough as immature but functional code, as long as it doesn't fail and lets me add better functions to choose from in parallel, until I gradually come up with portable, scalable scheduling algorithms that are more mature and can be plugged in transparently on any supporting hardware.
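A rough C sketch of the structure being described (a fixed table of function pointer plus deadline entries, scanned by a polling loop that sleeps with HLT; all names and sizes are illustrative, not from the poster's code):

Code:
#include <stdint.h>
#include <stddef.h>

#define MAX_TIMED_FUNCS 64

struct timed_func {
    void   (*fn)(void);    /* NULL means "slot free / timer cleared"       */
    uint64_t deadline;     /* tick count at which fn should next run       */
    uint64_t period;       /* 0 = one-shot, otherwise re-arm after running */
};

extern volatile uint64_t pit_ticks;          /* incremented by the timer IRQ */
static struct timed_func timed[MAX_TIMED_FUNCS];

/* Wrap-around-safe "has the deadline passed" test. */
static int deadline_passed(uint64_t now, uint64_t deadline)
{
    return (int64_t)(now - deadline) >= 0;
}

/* The scheduler loop, run outside any IRQ handler. */
static void run_timed_funcs(void)
{
    for (;;) {
        uint64_t now = pit_ticks;
        for (size_t i = 0; i < MAX_TIMED_FUNCS; i++) {
            if (timed[i].fn && deadline_passed(now, timed[i].deadline)) {
                timed[i].fn();
                if (timed[i].period)
                    timed[i].deadline += timed[i].period;
                else
                    timed[i].fn = NULL;      /* clearing a timer = NULL the slot */
            }
        }
        __asm__ volatile ("hlt");            /* sleep until the next interrupt   */
    }
}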

 Post subject: Re: Using PIT Timer Frequency of 1 μs Instead of 1 ms
PostPosted: Fri Jun 30, 2017 2:27 am 
Author: ~

Joined: Tue Mar 06, 2007 11:17 am
Posts: 1225
VERY IMPORTANT NOTE: I have just added a slowdown to 1 ms for the KBC-controlling code.

Previously the kernel locked up while initializing, but that was due more to a bogus wait and excessive KBC polling than anything else. Depending on the PIT speed it eventually came unstuck and finished booting, but it took several seconds.

If the PIT is set to a high rate, it should be slowed down every time we need to initialize or command another ISA device, or wait for an IRQ from it, and sped back up afterwards.

Now, when I want to manipulate the KBC and wait for its status to show it's ready for a command, I wrap it in a function that starts by slowing the PIT down to 1 ms and ends by speeding it back up to 1 μs, and it no longer locks up, not even on the 550 MHz machine.

What I mean is that if we take special care to slow the PIT down while commanding other ISA devices (actually reading or writing them), the system probably won't suffer, and we can take measures, like recording the current PIT frequency, to know reliably how much time has passed in any active timers, just as during a regular system slowdown.
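One way to make "record the current PIT frequency" concrete is to accumulate elapsed time in nanoseconds using whatever period is currently programmed, so that temporary slowdowns don't corrupt the timeline. A small sketch, with illustrative names:

Code:
#include <stdint.h>

static volatile uint64_t ns_since_boot;
static volatile uint32_t ns_per_tick = 1000;   /* 1 us default period */

/* Call this from the same routine that reprograms the PIT divisor,
 * e.g. 1000 for 1 us mode and 1000000 for the temporary 1 ms mode. */
void pit_record_period_ns(uint32_t new_ns_per_tick)
{
    ns_per_tick = new_ns_per_tick;
}

/* Timer IRQ body: one tick of whatever length is currently programmed. */
void pit_tick(void)
{
    ns_since_boot += ns_per_tick;
}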

During normal use, like reading bulk floppy data (surprisingly), typing on the keyboard, and probably while using the mouse, no slowdowns are required; they're only needed in the special parts of the code where we initialize and command the devices for a task, not once they are working.

Now I only need to put my 386DX back together to see whether it also works there. If it does with these measures, then I can't call the approach a failure, but I need to see, or keep optimizing until it works with these settings.

I have updated my kernel code to reflect these improvements, testing with a default PIT period of 1 μs and slowdowns to 1 ms while initializing and commanding other ISA devices.

If you want to see the new PIT code and the slowdowns for the KBC and floppy code, I have updated it here:
BOOTCFG__v2017-06-16.zip

 Post subject: Re: Using PIT Timer Frequency of 1 μs Instead of 1 ms
PostPosted: Fri Jun 30, 2017 1:38 pm 
Author: Brendan

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

Yesterday, I tried to write a response to your posts. I wasn't able to post it. This morning, I tried to write a response to your posts again; and still wasn't able to post it. It's almost impossible to find words that adequately describe what I want to say that are acceptable to use in public. This is my third attempt.

You are completely and utterly incompetent.

You are ignoring all modern timers (HPET, local APIC timer, etc) that have all existed for over a decade now, and then complaining that someone should create a new "faster PIT". You were told that the PIT can not be reliably used at this speed, ignored that, and are now having to implement idiotic work-arounds because the PIT can not be reliably used at this speed. You were told how the PIT can be used to achieve better precision than you are getting now without the problems that you are having to work around, and ignored that too. You tested how much your crippled joke affects CPU performance, not with any kind of intelligent/sane test (e.g. one that measures the effect on anything that is CPU bound/heavy processing), but by testing the keyboard (which has never used a significant amount of CPU time in the entire history of computers) and by testing the floppy (which should be using the old ISA DMA controller and shouldn't be affected by CPU load at all). You've indicated that your floppy driver is disgusting (because its performance is affected, which implies that it's using PIO); and you've indicated that your keyboard code is disgusting (because you're polling "something").

If you were a beginner, I'd be able to excuse all of this. However; I have to assume that you've been dabbling in OS development for 10 years now (you joined these forums in 2007) and are not a beginner at all. You should have learnt something useful in those 10 years, but somehow you have not. It's incredible. It's like watching someone ride a bicycle in heavy rain, and by some bizarre freak of nature not being touched by a single drop of water.

You need to stop programming. You need to become a gardener, or find work in the fast food industry, or learn woodworking, or get enthusiastic about creating artwork, or... It doesn't really matter what it is. The only thing that matters is that whatever you choose has nothing to do with computers.


Cheers,

Brendan

 Post subject: Re: Using PIT Timer Frequency of 1 μs Instead of 1 ms
PostPosted: Fri Jun 30, 2017 5:42 pm 
Author: LtG

Joined: Thu Aug 13, 2015 4:57 pm
Posts: 384
Brendan wrote:
The only thing that matters is that whatever you choose has nothing to do with computers.


Healthcare? =)


As for the actual topic, beyond everything Brendan already said.. Tilde, you do realize that even with a modern 4 GHz CPU, if you are taking IRQs 1M times per second you only get 4k cycles for each, right? Out of those you need to EOI the IRQ (which uses some of the 4k cycles), and the CPU deals with all of the stalls (cache misses, at 200-300 cycles each for RAM, TLB misses, pipeline flushes, etc), which all eat more of those 4k cycles. So even if it worked reliably, nobody would still want 1M IRQs per second.

Note, it seems even the crappy OS's (Windows and Linux) might at some point stop the stupidity of interrupting _everything_ every x ms and instead interrupt only the stuff that needs to be interrupted, and only when it actually needs to be interrupted. What's the point of interrupting some process every 1 us if you know there's nothing better to do until 5 ms later? You're just going to interrupt it 5000 times for no reason. This does not improve latency, it does not improve responsiveness, it does not improve "pre-emptiveness", it only wastes resources for _NO_ gain.

And seriously, it's not that hard to do things a little bit better.. Linux and Windows have tons of legacy baggage, so changing things is a bit more difficult for them, but you're designing your own OS; make it good, not worse than Win/Lin..


 Post subject: Re: Using PIT Timer Frequency of 1 μs Instead of 1 ms
PostPosted: Fri Jun 30, 2017 6:46 pm 
Author: Brendan

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

LtG wrote:
Note, it seems even the crappy OS's (Windows and Linux) might at some point stop the stupidity of interrupting _everything_ every x ms and instead interrupting only the stuff that needs to be interrupted and doing so only when they actually need to be interrupted. What's the point of interrupting some process every 1us if you know there's nothing better to do until 5ms later, you're just going to interrupt it 5000 times for no reason. This does not improve latency, it does not improve responsiveness, it does not improve "pre-emptiveness", it only wastes resources for _NO_ gain.


I was curious so I did a little research.

For Linux (which is like a big lumbering walrus - a large amount of pre-existing kernel code that depends on "ticks" dating all the way back to the early 1990s), they started work on "tickless" in 2010 and reached "(almost) full tickless operation" in 2013. Of course the fastest "tick" Linux ever supported was 1 KHz (and not an absurd 1 MHz), so the benefits were much smaller, but they've reported ~1% less CPU time wasted and improved "jitter".

For FreeBSD the story is similar (started work in 2009, reached "full tickless" in 2013).

For other OSs; Solaris was tickless since Solaris 8 (2000), OS X has been tickless since 10.4 (2005), Windows has been tickless since Windows 8 (2012).

OpenBSD and NetBSD are the only ones (that I looked for) that aren't tickless. OpenBSD has a project to try to implement it (but they seem to be struggling with other issues too, like lack of fine grained locking, etc; so I'd assume that the weight of old code has been crushing their ability to move forward for a long time). For NetBSD, it seems that the developers consider the kernel's lack of "ticklessness" to be a bug.

LtG wrote:
And seriously, it's not that hard to do things a little bit better.. For Linux and Windows they have tons of legacy baggage, so changing things is a bit more difficult for them, but you're designing your own OS, make it good not worse than Win/Lin..


Yes; it's the legacy baggage (from an era before HPET, local APICs, power management, etc) that makes it hard for existing OSs. For people like us (without baggage and with the benefit of years of hindsight) it's much easier to get it right.


Cheers,

Brendan

 Post subject: Re: Using PIT Timer Frequency of 1 μs Instead of 1 ms
PostPosted: Fri Jun 30, 2017 9:13 pm 
Author: ~

Joined: Tue Mar 06, 2007 11:17 am
Posts: 1225
Brendan wrote:
Yes; it's the legacy baggage (from an era before HPET, local APICs, power management, etc) that makes it hard for existing OSs. For people like us (without baggage and with the benefit of years of hindsight) it's much easier to get it right.


Cheers,

Brendan
The information about tickless operating systems is interesting. It means we could keep track of how many PIT-based timers exist in the system. If at a given moment there aren't any, we could simply disable the PIT, and as soon as anything requests a timer, enable it, use it while needed, and then disable it again. Probably most of the time some sort of timer would be in use, so the PIT or another timer would stay active. Enabling the timer only when it's actually used would make an OS much more tickless. It's a feature to be tested and implemented properly once real applications that use timers appear.

__________________
As for the reasons why it takes so many decades to improve something like this: every group of developers has its own reasons and its own cultural and educational background.

Mine is that I was born in El Salvador and will probably always live there.

Why does that matter? Living in a particular country has nothing to do with what you know. I can learn anything as long as I'm given enough good, complete information. So I'm already doing my current personal best.

But the fact is that I always studied in public, not private, institutions. In the '70s and mid '80s those were excellent places, but since I was born in 1984, I only got an increasingly degraded education. All the education I received amounts to no more than what a kid gets in the first 4 to 6 years of formal schooling in most places of the world.

I didn't get a personal computer of my own until 2001. Before that, all the computers I had were partially damaged. Friends of mine brought over viruses, so I lost my 386; I was too ignorant to know that I could simply format and reinstall DOS and Windows 3.x, so I opened up my only hard disk and was left without a machine around 1999.

I was never taught Math, Physics, Electronics or Chemistry well enough for what is needed in practice, or compared to what was available in other local schools, so I have to learn them all by myself, unlike my older sister, my parents and everyone before them. At least I had good friends and good times that help me keep up this effort now, trying to make more good times possible.

I tried a 2-year hardware program locally (http://www.itca.edu.sv/) (http://archive.org/download/images_2017_2/GuiaEstudiantil2016Final.pdf -- see at least the career information on page 58), but it wasn't very good, because they only covered the most basic components once a week, followed by one practice session on what we had studied. They must be short of personnel, because they had only one good teacher, who had returned from Korea and was older than the rest; the others didn't teach anything new that even a minimal programmer like me didn't already know, and weren't very friendly. Mathematics was taught poorly, on the assumption that students would just use what they already knew, so I was left with no choice but to properly re-learn all of it by myself. They didn't even teach us to solder, micro-solder, make printed circuit boards, or handle surface-mount components. With an electronics class only once a week you can't expect a good education, so developing an OS and learning to program from good books and open source is actually more productive as things currently stand. It's a new program, so it will probably need to exist for at least 10 years before it can hope to produce real experts, unlike the programming program at the place I originally went to in 2001, which is kinder to its students, just as I have improved while learning over the years.

You will find that most people here have no time to study any technology in depth. Once people are working, there is hardly time for anything other than the daily preparation for whatever job they have, be it relatively good or bad.

At least my father knows electronics, because he has had to work with it since the days when there were only vacuum tubes, so he can understand algorithms from an electronics perspective, program microcontrollers, and build devices that collect heat from the Sun without using photovoltaic cells.

Here you will have a hard time finding anyone teaching system-level software technology, and the hardware side is dramatically more limited still.

Here, national forums like http://www.svcommunity.org/forum/, which reflect the general level of programmers in our country very well, are the ones that get the most popularity and attention, not websites like mine (or better ones) that try to explain things from the roots up.

____________________________________
So I have come to admire things from times that others call "the past" but that are still in use. I consider my generation to be like an unfinished platform, and everyone alive who hasn't yet done something is also part of that unfinished platform.

I appreciate standard things. Since nobody taught me the technology of the common PC, and since I didn't have many books or much expertise before I was 25 or 26, I have to start by learning how the things that helped me and that I liked most in my childhood actually work; things I now understand to be part of a PC platform that has tried to keep all of its hardware and software fully standardized, like the tools of the "more serious" sciences, where you will always find good instruments, not unpredictable, non-standard, and potentially crippled ones with no scale to govern their scientific use.

So I prefer to learn from generations whose people are mostly no longer alive (in the case of programming, from books of the 2000s, '90s and '80s, and mathematics books from all eras), where past technology can be more easily understood and where we already know how that "past" technology behaves.

I'm trying to learn math, electronics and how to implement PCI devices. I already bought the YMF... chips to add OPL to a Sound Blaster I want to make, but I don't know which circuit I can use to implement the PCI functions.

So keep in mind that with me you will always be able to see first-hand how a person from El Salvador, Central America, Latin America, or indeed most of the world, with only a "normal" education at the level of the first 4 to 6 years of formal schooling, does things, what such a modest level is capable of achieving, and how long it takes.

I think I'm doing well enough; I'm already doing my best. I know more than most programmers or ordinary people I encounter, although people like my older sister and my parents know much more than me on any topic I can think of. My older sister doesn't even live in this country anymore, but in Asia.

I'm learning, but as I have already proven to myself, it's good books, good websites, and learning by myself over time, as people of earlier generations did, that will make me improve. Living here isn't really a problem any more, since access to outside information has improved; but when people get new technology devices without knowing how past generations of devices worked, they just get distracted and lose interest in the internals.

As you can see, this is why I must seem so slow to you.

I think it's rather inconsiderate to post forum messages with mostly no technical content, so I'd better concentrate more and more on posting only messages with source code. Just as with file-download websites, what matters here is improving the knowledge and code of operating-system-level technology, as when contributing changes, additions and improvements to the code of an existing OS. So the ideal here is to post code and information, not rants; and if there are rants, to base them on code to improve, just as in any other cooperative project, where whatever someone contributes is an addition, and the clearer and more complete it is, the sooner and more immediately usable it is to OSDev.org and related collective systems.

____________________________________________
Just remember also that when programmable electronics, computers and games first appeared, many countries started learning how to make games. But countries like El Salvador, and even Mexico, seem not to have noticed. Nobody there learned programming as a science when processors first appeared. Nobody published a game from that era that is known to exist today, so nobody here has the foundational knowledge that could have been passed on to later students. Mexico did make many movies, many of them good enough that their concepts are used even in games today, but now almost none are made. So it's as if we have to build the foundations of computing and related technologies from scratch.

Programming is still the most accessible technology, since it doesn't require a specialized laboratory to produce great programs; but for the rest, like electronics and the physical principles needed to actually make electronic components and devices, there doesn't seem to be much. Maybe it's only me, but whenever I've asked around trying to learn more about programming, all I've ever heard about is people making database systems to track inventories of products or money.

As you can see, I'm not complaining. I will just keep learning and doing things as fast as I can, while knowing that there is a local disadvantage in the level of interest in making things like OSes or major applications; that interest is very hard to share with other locals, or is normally absent. It doesn't stop anyone, it only makes progress slow; not even difficult, just slow.

And in reality, it was originally a female Siamese cat, who came looking for me out of nowhere, that first inspired me to reach, and stay at, the level of operating systems and beyond, from 2004 to this day. She apparently attracted chickens and other cats to my house: 6 chickens have lived here, starting with a young rooster that couldn't crow yet, and 3 more ordinary cats have since appeared out of nowhere, plus another male Siamese cat who apparently had been searching for her, all of them at least 4 years after she came to my house. She became my best friend, loved me, always stayed by my side, and made me feel things in a way that simply inspired me to get interested in the OS level. If it hadn't been for her, I would by now have sunk to indescribable levels of ignorance and carelessness, like many others locally. It's incredible that an animal inspired me that much, while I still see just about everyone else locally uninterested in even starting to think about such things, even in careers about hardware design that are supposed to cover it but in reality are empty (they are just basic computer-maintenance courses). I think she showed more intelligence than I have. If she had been human she would have been much, much better at OS development than me, but still she demonstrated and taught me the best she had, so I came to understand this level in a logical, human way, where previously I simply had no idea. That's probably how things improve in some cases, when no other kind of help exists at the time. It probably also explains, in a kind way, why I seem so slow. I do my best and learn, and the information I produce seems good enough for its topics... it's to the best of my knowledge, and it's starting to work. Even courses from Coursera or Udacity would look somewhat like what I'm trying to do, explaining from the start.

Probably actually looking at me would help you see what category of person and what intellectual level I'm at, but it's probably better to just keep producing code and stay at the applied, truly professional level:


This kind of work will do a lot of good for everyone, and especially for countries like mine: when young people finally realize what this is about, with all the experience previously accumulated by as many people as possible, they will receive a lot of benefits. That's what matters, that everyone ends up better and knows more than before.

It will also keep alive the great things that really smart people have done. People from websites like this can become greatly appreciated in countries like El Salvador, even more than in other countries, because here people with that kind of knowledge are simply nowhere to be found. So I'm probably just among the first people in a country reached by projects like this, people who will learn and then teach others, local and online, to do important things for everyone's well-being thanks to public efforts like this.

But well, let's keep going, and keep doing a good job of learning and development now that I've written about the core reasons behind the way I work. It would be interesting if you tried to find more people from El Salvador so that you could exchange technical implementation plans. You are undoubtedly better at finding extremely high-quality answers, so you will either find people from El Salvador who are better than me, or realize that the parts of the world that are just starting to improve will look as clueless as me in practice. It's funny and useful, so it has to keep up.


Last edited by ~ on Sun Jul 02, 2017 6:24 pm, edited 1 time in total.

 Post subject: Re: Using PIT Timer Frequency of 1 μs Instead of 1 ms
PostPosted: Sat Jul 01, 2017 1:05 am 
Author: Brendan

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

~ wrote:
Brendan wrote:
Yes; it's the legacy baggage (from an era before HPET, local APICs, power management, etc) that makes it hard for existing OSs. For people like us (without baggage and with the benefit of years of hindsight) it's much easier to get it right.

The information about tick-less operating systems is interesting. It means that we could keep track of how many PIT counters there are in the system. If in a given moment there isn't any, we could simple disable it, but as soon as anything requests a timer, we could enable it, use it while needed, and then disable it again. But probably most of the time some sort of timer would be used, so the PIT or other timer would be activated. Enabling the timer only when actually used would make an OS much more tick-less. It would be a feature to be tested and well implemented when real applications that make use of timers arise.


You don't quite understand how it works. It's not just turning the timer off; it's changing the timer's count to make sure the IRQ only happens when it's necessary.

Imagine you have several things waiting for time to elapse. Maybe the scheduler wants to know when 10 ms has passed (because it wants to do a task switch then), maybe a device driver has to know when 50 ms has passed so it can do something with its device, maybe another task called "sleep()" and wants to wake up in 2 seconds time, and maybe there's some networking code that wants a time-out in 10 seconds time.

Also imagine that you've put all these "timer events" into a list, sorted in order of when they're supposed to happen (scheduler's "10 ms from now" event first, device driver's "50 ms from now" second, etc).

Now assume that a PIT IRQ just finished, so you look at this list of "timer events" and figure out which one happens next (the list is sorted, so it's always the first thing on the list). You see that the next one is in 10 ms time, so you configure the PIT to generate an IRQ in 10 ms time.

After 10 ms has passed the timer IRQ handler looks at the first thing on the list and knows "Hey, the scheduler's 10 ms has passed" and informs the scheduler (and the scheduler does a task switch and probably puts another "timer event" somewhere on the list); and then you look at the list again and see that the device driver is next, so you set the timer for "50-10 ms = 40 ms from now".

After that 40 ms has passed the timer IRQ handler looks at the first thing on the list and knows "Hey, the device driver's timer event has expired" and informs the device drivers; and then you look at the list again and see that you need to wake up a sleeping task next, so you set the timer to generate an IRQ in "2000 - 40 - 10 ms = 1950 ms".

Eventually (maybe) an IRQ occurs, and you handle the next "timer event" on the list and then find that the list is empty. In this case you just don't set the timer to generate another IRQ.

Sometimes you need to insert a new "timer event" into the list. When this happens, you check if the list was empty or if the new event is going to be the first on the list; and if it is you change the timer so an IRQ happens when the new event expires.

Sometimes, 2 or more events will expire at about the same time, and so you can merge them and handle them all with a single IRQ.

Sometimes a "timer event" will be cancelled before it expires, and you'll remove it from the list before it causes any IRQ.

Now assume that there's 1000 timer events that all expire within the same 10 seconds of time; but 400 of them get cancelled before they expire, and 100 get merged with other timer events. In this case you'd only need 500 IRQs for the entire 10 seconds. Also note that when setting the PIT you tell it a new "count of 838 ns periods", so all of those "timer events" get 838 ns precision.

Now compare that to what you're doing - in that 10 seconds of time you'd have about 6 million IRQs, and your "timer events" will have (at best) 1.676 us precision. It's significantly higher overhead (6 million IRQs instead of 500 IRQs) for worse precision.

Next assume that (one day) you add support for HPET or local APIC timer or something else. In these cases; everything works exactly the same (for the list of "timer events", etc), except you get better precision, still with about the same number of IRQs as before (e.g. the same "about 500 IRQs in 10 seconds" possibly with minor differences in merging events occur at almost the same time).
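A compact C sketch of that "sorted list of timer events" scheme (allocation, locking and the actual hardware programming are left out, and every name here is illustrative rather than from any particular kernel):

Code:
#include <stdint.h>
#include <stddef.h>

struct timer_event {
    uint64_t            expiry_ns;             /* absolute time it should fire */
    void               (*callback)(void *);
    void                *arg;
    struct timer_event  *next;
};

static struct timer_event *timer_list;         /* sorted, soonest expiry first */

extern uint64_t now_ns(void);                  /* assumed monotonic clock                 */
extern void     arm_timer_irq(uint64_t ns);    /* assumed: request one IRQ after ns (PIT, */
                                               /* HPET or local APIC timer underneath)    */

/* Insert keeping the list sorted; reprogram the hardware only if the new
 * event expires before everything already queued. */
void timer_event_add(struct timer_event *ev)
{
    struct timer_event **pp = &timer_list;

    while (*pp && (*pp)->expiry_ns <= ev->expiry_ns)
        pp = &(*pp)->next;
    ev->next = *pp;
    *pp = ev;
    if (timer_list == ev) {
        uint64_t now = now_ns();
        arm_timer_irq(ev->expiry_ns > now ? ev->expiry_ns - now : 1);
    }
}

/* Timer IRQ handler: run everything that has expired, then arm the hardware
 * for the next deadline, or leave it unarmed if the list is empty. */
void timer_irq(void)
{
    uint64_t now = now_ns();

    while (timer_list && timer_list->expiry_ns <= now) {
        struct timer_event *ev = timer_list;
        timer_list = ev->next;
        ev->callback(ev->arg);
    }
    if (timer_list)
        arm_timer_irq(timer_list->expiry_ns > now ? timer_list->expiry_ns - now : 1);
}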

Note 1: When the PIT chip is in "lobyte/hibyte mode" it takes 2 IO port writes to set a new count; and by using "lobyte only mode" instead this can be reduced to one IO port write. This leads to a strategy of switching to "lobyte only mode" when the delays needed are shorter than 214 us and inserting "dummy delays" to avoid switching out of "lobyte only mode" for longer delays if/when it helps to avoid switching back to "lobyte/hibyte mode" temporarily. For delays that are over 54 ms you'll need to insert dummy delays (even for "lobyte/hibyte mode") because the PIT can't handle longer delays.
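A sketch of the single-write re-arm from Note 1, assuming channel 0 has already been switched into "lobyte only" access mode (e.g. with a command byte of 0x10 for mode 0), and again assuming an outb() helper:

Code:
#include <stdint.h>

static inline void outb(uint16_t port, uint8_t val)
{
    __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
}

/* Re-arm channel 0 with a single IO port write.  In "lobyte only" mode the
 * count is limited to 1..255, i.e. delays up to roughly 255 * 838 ns = ~214 us. */
static void pit_rearm_short(uint8_t count)
{
    outb(0x40, count);    /* only the low byte is written in this access mode */
}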

Note 2: When you're doing this with the PIT it makes it hard to also use the PIT for accurately tracking "wall clock time". To fix that use the RTC chip (at a low frequency like maybe 4 Hz) to keep "wall clock time" synchronised (and then use NTP to keep the RTC synchronised if you can).

Note 3: All of the above only really applies if you actually want very precise timing. Most OSs that use/used a regular tick (including the OSs that used a regular tick initially but then switched to "tickless") worked reasonably well with a 1000 Hz (or slower) tick. For modern/fast computers you could probably go as high as 10 KHz without any real disadvantages. If you want you could do a crude benchmark to estimate the CPU's speed and then set the PIT frequency based on the result - e.g. maybe ranging from 100 Hz (on an ancient slow 80386) up to 10 KHz (for modern fast CPUs); so that you get better precision on fast CPUs and less overhead on slow CPUs.

~ wrote:
__________________
Talking about the technical reasons as to why it takes so many decades to improve something like this, every developing branch of people has its own reasons and cultural/education background.

...

As you can see, this is why I must seem so slow to you.


There are a lot of things in this world that I would change if I could, including the availability and affordability of a good education, and including the availability and affordability of computer hardware.

However; there is a huge difference between reasons and excuses, and a lot of these are excuses.

The fact is that (especially at the "hobbyist" level) for OS development you don't need to know anything about physics or chemistry, and a small amount of knowledge of electronics helps slightly (for low level things) but isn't essential. For mathematics, for the majority of software (excluding specialised fields, like 3D graphics, physics simulations, etc; but especially for OS development), you only need basic algebra. For programming itself; most people who are self-taught but then go to University afterwards (including myself) find that University teaches them almost nothing that they didn't already know or couldn't have learnt faster for free online. For "local knowledge", I've never actually met anyone in person that knows anything about OS development.

Far more important than all of these things is a reasonable ability to understand English (which you do seem to have) and Internet access (which you do seem to have); so that you can obtain and understand information that's available on the Internet (including things like CPU manuals, datasheets for devices, etc; and including "help sites" like osdev.org, stackoverflow.com, etc).

There's something else going wrong here.

Maybe it's work related (El Salvador really doesn't look good for "hours worked per week", or wages, or working conditions); maybe you're focusing on obsolete things (e.g. real mode, when even an old 80386 supports protected mode); maybe you're spending far too much time trying to teach people and not enough learning anything worth teaching; maybe it's something else. I don't know.


Cheers,

Brendan

 Post subject: Re: Using PIT Timer Frequency of 1 μs Instead of 1 ms
PostPosted: Sat Jul 01, 2017 2:07 am 
Author: LtG

Joined: Thu Aug 13, 2015 4:57 pm
Posts: 384
Brendan wrote:
Note 3: All of the above only really applies if you actually want very precise timing. Most OSs that use/used a regular tick (including the OSs that used a regular tick initially but then switch to "tickless") worked reasonably well with a 1000 Hz (or slower) tick. For modern/fast computers you could probably go has high as 10 KHz without any real disadvantages. If you want you could do a crude benchmark to estimate the CPU's speed and then set the PIT frequency based on the result - e.g. maybe ranging from 100 Hz (on ancient slow 80386) up to 10 KHz (for modern fast CPUs); so that you get better precision on fast CPUs and less overhead on slow CPUs.


Even though 1 kHz should pose no problems, I would be very hesitant to use such a high frequency. Suppose you have ten processes running: each gets 1 ms of time a hundred times a second, but that still means that between each 1 ms slice a process gets, 9-10 ms pass in which the other processes trash the caches, so you'll get worse performance than if you ran at 100 Hz.

Also, under most realistic conditions for a desktop there's rarely more than one or two processes ready to run; for me Firefox is the one that eats by far the most CPU, and that's because it's crappy..



As for the learning, "excuses", etc.. It's not just about knowledge; a good school might also teach people _how_ to think, which means the knowledge you have can be put to better use, as opposed to someone who has an equal amount of knowledge but hasn't quite learned how to think.

I'd also like to point out that focus helps a lot; I know I should focus more myself too. For instance, trying to support everything is pretty much pointless: you'll never get anything finished. While I hate to throw out my 32-bit PM code (I still haven't, and there's not even that much of it), I'm pretty sure I'm going to, and will focus solely on 64-bit LM, and the same with SMP. The reality is that I actually want to use 64-bit SMP myself, so that's where I should have started to begin with; 32-bit PM just seemed a bit more convenient. I can always add legacy support if I want, but for instance I have no plans to ever add floppy support; there's always going to be something more useful than floppies, so I'll never get around to it.

I understand that at least some of the legacy stuff is easier to handle, and in some cases is more "standardized" than the modern stuff, but usually the modern stuff is less "hacky"; it just requires more "scaffolding/framework" to operate. Back in the day you could nail some pieces of wood together and call it a house; these days the first couple of months are spent digging a huge hole and building massive cranes at the site, all just to start building the house, but I know which one I'd rather live in =)

(I know, I know, some people do prefer old style wooden houses..)


 Post subject: Re: Using PIT Timer Frequency of 1 μs Instead of 1 ms
PostPosted: Sat Jul 01, 2017 3:11 am 
Author: Brendan

Joined: Sat Jan 15, 2005 12:00 am
Posts: 8561
Location: At his keyboard!
Hi,

LtG wrote:
Brendan wrote:
Note 3: All of the above only really applies if you actually want very precise timing. Most OSs that use/used a regular tick (including the OSs that used a regular tick initially but then switch to "tickless") worked reasonably well with a 1000 Hz (or slower) tick. For modern/fast computers you could probably go has high as 10 KHz without any real disadvantages. If you want you could do a crude benchmark to estimate the CPU's speed and then set the PIT frequency based on the result - e.g. maybe ranging from 100 Hz (on ancient slow 80386) up to 10 KHz (for modern fast CPUs); so that you get better precision on fast CPUs and less overhead on slow CPUs.


Even though 1kHz should pose no problems I would be very hesitant to use such high frequency. Suppose you have ten processes running that means each gets 1ms of time a hundred times a second, but still means that between each 1ms they do get there's been 9-10 ms in between trashing caches, so you'll get worse performance than if you'd do 100Hz.

Also, under most realistic conditions for a desktop there's rarely more than one or two processes ready to run, for me Firefox is the one that eats by far the most CPU, and that's because it's crappy..


This only applies if the scheduler is garbage (round robin, 1 tick = 1 time slice). For a very simple example (without complicating things with task priorities and different scheduling algorithms), if a scheduler says "a time slice is always 50 ms", then it doesn't matter much if that's 5 ticks (at 10 ms per tick) or 500 ticks (at 100 us per tick).

More precise timing is usually for when tasks block (e.g. where the next task gets a left over fraction of a tick) and better time accounting (did this process that's constantly blocking/unblocking consume a total of 10 ms of CPU time or 9.145 ms of CPU time?) and things like "nano_delay()", and better file system timestamps.
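The time-slice point above can be sketched like this: the slice is defined in time and the tick merely subdivides it, so a faster tick doesn't shrink the slice. Names here are illustrative:

Code:
#include <stdint.h>

#define TIME_SLICE_NS  50000000ull               /* "a time slice is always 50 ms"    */

extern uint64_t tick_period_ns;                  /* 10 ms, 1 ms, 100 us, ... per tick */
static uint64_t slice_remaining_ns = TIME_SLICE_NS;

/* Called once per timer tick by the scheduler. */
void scheduler_on_tick(void)
{
    if (slice_remaining_ns <= tick_period_ns) {
        slice_remaining_ns = TIME_SLICE_NS;
        /* switch_to_next_task();  -- assumed scheduler entry point */
    } else {
        slice_remaining_ns -= tick_period_ns;
    }
}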

LtG wrote:
As for the learning, "excuses", etc.. It's not just about knowledge, a good school might also teach people _how_ to think, which means the knowledge you have can be put to better use as opposed to someone else who has equal amount of knowledge but hasn't quite learned how to think.

I'd also like to point out that focus helps a lot, I know I should focus more myself too. For instance trying to support everything is pretty much pointless, you'll never get anything finished. While I hate to throw (still haven't) out my 32-bit PM code (and there's not even that much of it), I'm pretty sure I'm going to and will focus solely on 64-bit LM, same with SMP. Reality is that I actually want to use 64-bit SMP myself, so that's where I should've started to begin with, 32-bit PM just seemed a bit more convenient. I can always add legacy support if I want, but for instance I have no plans to ever add floppy support, I'm sure there's always going to be something more useful than floppies so I'll never get around to it.

I understand that at least some of the legacy stuff is easier to handle, and in some cases is more "standardized" than modern stuff but usually the modern stuff is less "hacky", it just requires more "scaffolding/framework" to operate. Back in the day you could nail down pieces of wood and call it a house, these days the first couple of months are used to dig a huge hole, then they build massive cranes at the site, all to just start building a house, but I know which one I'd rather live in =)

(I know, I know, some people do prefer old style wooden houses..)


No, there's something much more bizarre going on here. If you look at ~'s posts from 10 years ago (mostly starting with PS/2 keyboard and mouse driver code) it's obvious that ~ wasn't a beginner back when he joined the forums; and if you compare those old posts to more recent posts it's like he's spent 10 years without gaining any knowledge/experience (and maybe even slowly going backwards in some ways).

For one random example, here are some posts by ~ from 2007 in a topic (from someone else) about detecting memory using probing; and here are some posts by ~ from last month (2017) in a topic (created by ~) about detecting memory using probing.


Cheers,

Brendan
