Hi,
bigbob wrote:
I can't confirm what Brendan said, because my OS is still a baby, but he is probably right.
On the other hand, CM and interrupts work well together in my experience.
I haven't emphasized that I do CM with FORTH, and in FORTH words (i.e. small programs) communicate via stacks (parameter, return, and float stacks).
In the timer interrupt I also watch for a Ctrl-C keypress, and simply switch those three stacks by changing a few pointers (to be exact, Ctrl-C sets the state of the current task to UNUSED).
It works well.
For a single task (where the scheduler becomes mostly irrelevant), on a single CPU (where things are far simpler), with a very limited number of simple devices (e.g. keyboard and text mode, where you don't have to worry about things like processing millions of pixels ASAP or handling several thousand packets per second), without power management (where you shift and/or postpone work to reduce heat and/or increase battery life), without any advanced features like full virtual memory management (with memory mapped files, swap space, copy on write, etc.) or prioritised IO (where you re-order things like disk reads/writes to improve performance for important things), and without any security (e.g. any user can trash anything); everything can seem like it works well regardless of how bad it is.
It's not until (e.g.) you're watching a movie in one window while writing a letter in another window while processing a lot of data in the background while the computer is handling HTTP/FTP/DHCP/whatever requests from all over the network while 7 of your 8 CPUs are unused/being wasted that you discover that one of the fundamental pieces of the OS has severe design flaws, and the only way to get acceptable performance is to break behaviour that all existing software depends on.
And really, it's the "break behaviour that all existing software depends on" part that you'd want to worry about - it's the sort of thing that (in severe cases) can completely destroy many years of work. E.g. you change the way scheduling is done and realise the file IO and GUI need to be modified to work properly with the new scheduling, then realise all the applications depend on the old scheduling, old file IO and old GUI, and then realise it's quicker/easier to rewrite all the applications.
bigbob wrote:
Multi-core is still a problem, though, because in FORTH the dictionary should be accessed by only one task at a time (i.e. a task has to finish changing the dictionary; there can't be a task switch in the middle, otherwise the dictionary would be corrupted).
For pre-emption; you'd just postpone task switches while the dictionary is being modified.
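As a very rough sketch (in C rather than FORTH, with a made-up schedule() and made-up flag names, so purely illustrative): the timer interrupt checks a "busy" flag, and if it's set it records that a switch is wanted instead of doing it; then the code that finishes modifying the dictionary does the postponed switch itself.
[code]
void schedule(void);                /* hypothetical: pick the next task and switch to it */

/* Deferred pre-emption sketch; in a real kernel these flags would be per-CPU
 * and you'd have to be careful about the small race windows around them. */
volatile int dictionary_busy = 0;   /* set while the dictionary is being modified */
volatile int switch_pending  = 0;   /* timer wanted a task switch but we deferred it */

void dictionary_lock(void)
{
    dictionary_busy = 1;
}

void dictionary_unlock(void)
{
    dictionary_busy = 0;
    if (switch_pending) {           /* the timer fired while we were busy */
        switch_pending = 0;
        schedule();                 /* do the postponed task switch now */
    }
}

void timer_irq_handler(void)
{
    if (dictionary_busy) {
        switch_pending = 1;         /* postpone: never switch mid-modification */
    } else {
        schedule();                 /* normal pre-emptive task switch */
    }
}
[/code]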
For multi-core there's other problems; like making sure other CPUs aren't reading from the dictionary while it's being modified, and making sure that other CPUs (that were waiting while a modification was being done) see something consistent when the modification is completed. The thing I'd worry about the most is scalability (e.g. most CPUs wasting most of their time waiting to access the dictionary). I'd expect in a system like this (where all CPUs require frequent access to the same "globally shared but modified" data) that using 2 CPUs will only make things 50% faster than a single CPU (and not twice as fast), using 4 CPUs will only make things 10% faster than 2 CPUs (and not twice as fast), and using 8 CPUs will actually give worse performance than using 4 CPUs.
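For illustration only (the names and code are mine, not from any actual FORTH implementation): the usual way to get the correctness part is a lock around every dictionary access, where the acquire/release ordering is what makes a finished modification look consistent to the next CPU - and the spinning in that lock is exactly where the scalability goes. A reader/writer lock helps a little (lookups can run in parallel) but every modification still serialises everyone.
[code]
#include <stdatomic.h>

/* One global lock protecting the shared dictionary (sketch only). */
static atomic_flag dict_lock = ATOMIC_FLAG_INIT;

void dict_acquire(void)
{
    /* Every CPU that wants to read or modify the dictionary waits here;
     * with many CPUs, most of them spend most of their time spinning. */
    while (atomic_flag_test_and_set_explicit(&dict_lock, memory_order_acquire))
        ;   /* spin (a real kernel would at least execute a "pause" here) */
}

void dict_release(void)
{
    /* Release ordering: all the stores done to the dictionary become visible
     * before the lock looks free, so the next CPU that acquires the lock
     * sees a consistent, fully-completed modification. */
    atomic_flag_clear_explicit(&dict_lock, memory_order_release);
}
[/code]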
To make performance tolerable on modern (multi-core) computers, I'd avoid "frequent access to the same globally shared but modified data" completely by giving each CPU its own separate dictionary. However; this would mean each CPU is more like a separate computer; where different tasks running on different CPUs communicate with something more like networking than function calls; and this type of thing requires software designed specifically for "isolated pieces that communicate" to be effective.
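As a sketch of what that might look like (Dictionary, Message and the mailbox layout are all invented for illustration): each CPU gets its own dictionary that nothing else ever touches, and tasks on different CPUs talk through per-CPU-pair mailboxes instead of calling each other's words directly.
[code]
#include <stdatomic.h>
#include <stdbool.h>

#define MAX_CPUS    8
#define QUEUE_SIZE  64

typedef struct Dictionary Dictionary;               /* opaque; owned by one CPU */
typedef struct { int type; void *payload; } Message;

/* One private dictionary per CPU: no locks, no waiting, no sharing. */
static Dictionary *per_cpu_dictionary[MAX_CPUS];

/* One single-producer/single-consumer mailbox per (sender, receiver) pair,
 * so tasks on different CPUs communicate more like networking than calls. */
typedef struct {
    Message     slots[QUEUE_SIZE];
    atomic_uint head;                /* advanced by the receiving CPU */
    atomic_uint tail;                /* advanced by the sending CPU   */
} Mailbox;

static Mailbox mailbox[MAX_CPUS][MAX_CPUS];

bool send_message(int from_cpu, int to_cpu, Message m)
{
    Mailbox *mb   = &mailbox[from_cpu][to_cpu];
    unsigned tail = atomic_load_explicit(&mb->tail, memory_order_relaxed);
    unsigned head = atomic_load_explicit(&mb->head, memory_order_acquire);

    if (tail - head == QUEUE_SIZE)
        return false;                /* mailbox full; sender has to retry later */

    mb->slots[tail % QUEUE_SIZE] = m;
    atomic_store_explicit(&mb->tail, tail + 1, memory_order_release);
    return true;
}
[/code]
The receiving side would poll (or be interrupted) and drain its mailboxes; the important point is that nothing in the hot path ever touches another CPU's dictionary.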
Cheers,
Brendan