
I'm new here, and am still learning the design part of this whole thing. Haha.
bluemoon wrote:
> You should at least have a process that runs the HLT instruction at the lowest priority, so that when all processes are sleeping and have given up their time slices, the CPU isn't kept busy switching tasks.

That makes sense. So something like this?
Code:
void busy_task()
{
    while(1)
        asm volatile ("hlt");
}
Code:
global busy_task:function
busy_task:
    hlt
    jmp busy_task
piranha wrote:
> It depends on how you design your scheduler and task system. For me, the kernel does have its own task (running in ring 0, obviously) that does:
> * Finishes cleaning up after tasks that have exited (they can't clear out everything that is their own)
> * Reaps sections of the kmalloc memory that have been completely freed.
> * Cleans up some data structures that are alloc'd by the kernel at bootup
> * And (in the future) performs swapping
> But really, the most important feature is that if all the tasks sleep, the scheduler has a place to go (the kernel task), so that it doesn't hang up inside the scheduler. I would recommend that the kernel have its own task; you'll find uses for it.

Oh, I see. E.g. you can't hold your own funeral.
Caleb1994 wrote:
> So, how should the kernel task (if it is going to do more than just take up extra time) keep running constantly without taking up too much CPU?

Each process can have a different priority. The simplest way is to set the PIT to a higher frequency, say 1000 Hz. When the scheduler switches to a process, it gives it a time-slice counter, which is decremented each time the PIT fires. Once the count reaches zero, or the process gives up its time slice (by setting the count directly to zero), the scheduler picks the next task.
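A minimal sketch of the tick-driven time-slice counter described above. The names (`sched_state`, `on_pit_tick`) and the refill value are placeholders for illustration, not code from the thread:

```c
#include <assert.h>

/* Hypothetical scheduler state: one counter for the running task. */
struct sched_state {
    unsigned timeslice;   /* ticks left for the current task */
    unsigned switches;    /* how many times we picked a new task */
};

/* Called from the PIT interrupt handler on every tick (e.g. at 1000 Hz). */
void on_pit_tick(struct sched_state *s)
{
    if (s->timeslice > 0) {
        s->timeslice--;   /* current task still has time left */
        return;
    }
    s->switches++;        /* slice exhausted: pick the next task here */
    s->timeslice = 35;    /* refill, e.g. 35 ticks = 35 ms at 1000 Hz */
}
```

A task that wants to yield just zeroes its own counter; the next tick then switches it out, exactly as bluemoon describes.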
bluemoon wrote:
> Each process can have a different priority. The simplest way is to set the PIT to a higher frequency, say 1000 Hz. When the scheduler switches to a process, it gives it a time-slice counter, which is decremented each time the PIT fires. Once the count reaches zero, or the process gives up its time slice (by setting the count directly to zero), the scheduler picks the next task.

Okay, I have quickly whipped together some basic code for my scheduler (like I said, it was VERY basic before. Wasn't really a scheduler at all, actually. More like a task switcher.) I read up on different algorithms and decided on this one: http://wiki.osdev.org/Scheduling_Algori ... ound_Robin It's not too complicated, but at the same time it still allows for priorities. What do you think of this setup? I don't want to get too far if someone who has done this before thinks it's going to be bad.
Code:
/*! Defines all the information needed by the scheduler about a
    certain task. */
typedef struct _task task_t;
typedef struct _runqueue run_queue_t;

struct _task
{
    u32 pid;            // Process ID
    u32 esp0;           // For the TSS, so that each process has its own kernel stack
    registers_t* regs;  // Saved register state
    pdir* dir;          // Page directory
    program_t* program; // A structure defining some higher-level attributes
    run_queue_t* queue; // The run queue this task is a part of
    task_t* next;       // Next task
};

/*! Defines the run queue of a priority */
struct _runqueue
{
    task_t* ready;   // Tasks ready to be run
    task_t* done;    // Tasks waiting to be renewed (time slice exhausted)
    u32 ready_count; // Number of tasks in the ready queue
};

/*! Defines all the information needed by the scheduler in one structure */
typedef struct _scheduler
{
    run_queue_t prior[SCHED_MAX_PRIOR+1]; // Run queues for the different priorities
    u32 timeslice;   // Number of ticks left in the current time slice
    task_t* current; // The task currently executing
} scheduler_t;
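The thread never shows how a task gets onto one of these lists. A possible sketch, using cut-down versions of the structures above (`runqueue_add` is a hypothetical helper, not from the thread):

```c
#include <assert.h>
#include <stddef.h>

typedef unsigned int u32;

/* Cut-down versions of the thread's structures, enough to show insertion. */
typedef struct _task task_t;
typedef struct _runqueue run_queue_t;

struct _task {
    u32 pid;
    run_queue_t* queue;
    task_t* next;
};

struct _runqueue {
    task_t* ready;
    task_t* done;
    u32 ready_count;
};

/* Hypothetical helper: push a task onto the front of a priority's ready list. */
void runqueue_add(run_queue_t* q, task_t* t)
{
    t->queue = q;        /* remember which queue owns this task */
    t->next = q->ready;  /* link in at the head... */
    q->ready = t;        /* ...which is O(1), at the cost of LIFO order */
    q->ready_count++;
}
```

Head insertion keeps it O(1); if you want round-robin order within a priority, you'd append at the tail instead (which wants a tail pointer in the run queue).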
Code:
void task_switch(registers_t* regs)
{
    if( task_prev_handler ) task_prev_handler(regs); // This should be the timer handler

    // Is this time slice finished?
    if( sched.timeslice != 0 ){
        --sched.timeslice; // No, decrement the counter and return
        return;
    }

    // Save the current context and remove the task from the ready queue
    task_t* task = sched.current;
    task->queue->ready = task->next;   // Set the next ready task
    task->next = task->queue->done;    // Push this task onto the done queue
    task->queue->done = task;          // Set the new head of the done queue
    task->queue->ready_count--;        // Decrement the ready count for that queue
    memcpy(task->regs, regs, sizeof(registers_t)); // Save the current register state
    sched.current = 0;

    // Check the priorities, starting with the highest, for ready tasks
    u32 i;
    for(i = 0; i <= SCHED_MAX_PRIOR; ++i){
        if( sched.prior[i].ready_count ){
            sched.current = sched.prior[i].ready; // Take the head of this ready queue
            break;
        } else if( sched.prior[i].done ) swap_queue(&sched.prior[i--]); // Swap the done queue in and recheck this priority
    }

    memcpy(regs, sched.current->regs, sizeof(registers_t)); // Load the saved registers
    set_kernel_stack(sched.current->esp0); // Load the new task's kernel stack pointer (does not affect the current stack)
    vmm_switch_dir(sched.current->dir);    // Switch to the new task's page directory
    sched.timeslice = SCHED_TIMESLICE(SCHED_TIMESLICE_LENGTH);
}
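`swap_queue` is referenced above but never shown. A plausible sketch, assuming it just moves the exhausted (done) list back to ready and restores the count (the structure layout is borrowed from the post above; the recount loop is my assumption):

```c
#include <assert.h>
#include <stddef.h>

typedef unsigned int u32;

typedef struct _task task_t;
struct _task { task_t* next; };

typedef struct _runqueue
{
    task_t* ready;   /* tasks ready to be run */
    task_t* done;    /* tasks whose time slice is exhausted */
    u32 ready_count;
} run_queue_t;

/* Hypothetical helper: move the done tasks back to ready.
   Assumes the ready list is empty when this is called. */
void swap_queue(run_queue_t* q)
{
    q->ready = q->done;
    q->done  = NULL;
    for (task_t* t = q->ready; t; t = t->next)
        q->ready_count++;   /* recount the tasks that are ready again */
}
```

Keeping a `done_count` alongside `ready_count` would avoid the recount loop; this version only tracks what the posted struct has.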
Code:
scheduler_t sched;

#define SCHED_MAX_PRIOR 3          // We have 4 priorities: 0 is the highest, 3 the lowest.
#define SCHED_TIMESLICE_LENGTH 35  // The recommended length is between 20 and 50 ms; 35 seemed a good middle ground.
// Number of ticks in a time slice (mostly to make things look nice; should be optimized out...)
// Note: multiply before dividing, or integer division truncates (length/1000) to zero.
#define SCHED_TIMESLICE(length) (((length) * TIMER_TICKS_PER_SECOND) / 1000)
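One pitfall with a macro like `SCHED_TIMESLICE`: the order of operations under integer division. Dividing the millisecond length by 1000 first truncates to zero for any length under a second, which would give every task a zero-tick slice. A quick check, assuming the PIT runs at 1000 Hz:

```c
#include <assert.h>

#define TIMER_TICKS_PER_SECOND 1000  /* assumed PIT frequency for this example */

/* Divide-first form: (length/1000) truncates to 0 for length < 1000. */
#define SCHED_TIMESLICE_BROKEN(length) (((length) / 1000) * TIMER_TICKS_PER_SECOND)

/* Multiply-first form: compute in ticks, then scale down. */
#define SCHED_TIMESLICE(length)        (((length) * TIMER_TICKS_PER_SECOND) / 1000)
```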
piranha wrote:
> I just have the kernel task sleep if it's not needed. Really, it invokes the scheduler when it has nothing to do, and it jumps to the next task.
> Other than that, if it's the only task that can run, then it'll eat up CPU time, but it doesn't matter at that point because... well, no other tasks can run.
> -JL

Sounds like what I was thinking. Sorry if these questions seem really basic. I want a firm grasp of what I want to (and need to) do before I get waist-deep in garbage code :O
OSwhatever wrote:
> You can make the kernel its own task if you think it's beneficial for you. It can help you, since it is a container for resources associated with the kernel; however, it becomes somewhat of a bastard, since not all rules apply to it.

Yes, that was my worry. I knew that there would be some restriction placed on tasks, so I wasn't sure how the kernel task would fit in, since it is "all powerful." Ha.
OSwhatever wrote:
> When it comes to the idle process or thread, they are quite simple to implement in the beginning. The other version is that you call the wait-for-interrupt instruction in the scheduler itself. Then you don't need the task switch to the idle thread; you camp in the thread that was last running.

In my current setup, waiting in the scheduler wouldn't work. My scheduler's task-switch function is one of the IRQ0 handlers, which means the EOI would not be sent to the PIC while I was sleeping, and therefore no more IRQs would fire. That doesn't sound good...
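That objection can be worked around: acknowledge IRQ0 before halting. A hedged sketch of the "camp in the scheduler" approach for an x86 PC with the legacy 8259 PIC; the `outb` helper and function name are assumptions, not from the thread, and this ignores details like nested interrupts:

```c
/* Assumed port-I/O helper; most hobby kernels define something like this. */
static inline void outb(unsigned short port, unsigned char val)
{
    asm volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
}

/* Called from the IRQ0 handler when no task is ready to run. */
void sched_idle_wait(void)
{
    outb(0x20, 0x20);          /* send EOI to the master PIC so IRQ0 can fire again */
    asm volatile ("sti; hlt"); /* enable interrupts and sleep until the next one;
                                  sti's one-instruction delay means no interrupt
                                  can slip in before the hlt */
    asm volatile ("cli");      /* back to the scheduler with interrupts off */
}
```

Keeping `sti` and `hlt` in one asm statement matters: it guarantees the compiler cannot place anything between them, so the wake-up interrupt can't arrive just before the halt.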
Caleb1994 wrote:
> Yes, that was my worry. I knew that there would be some restriction placed on tasks, so I wasn't sure how the kernel task would fit in, since it is "all powerful." Ha.

Well, the thing with the kernel task is that, while not all rules apply to it (e.g., it runs in ring 0), you have complete control over the code that is inside it. Eventually, the other processes will probably be running executable files, and so they need the protection; but the kernel task is in the kernel, and you write its code. The fact that the rules are different for the kernel task is not important (or detrimental).
piranha wrote:
> Well, the thing with the kernel task is that, while not all rules apply to it (e.g., it runs in ring 0), you have complete control over the code that is inside it.

True. I hadn't thought of it that way.
Caleb1994 wrote:
> On a side note, I realized something. The design I posted will not work, because when a run queue is exhausted (all tasks in that queue have run their time slices), the queue is swapped and rechecked for ready tasks. This means that if we have two tasks in, let's say, priority 0 (the highest), A and B, and one in priority 3 (the lowest), C, then C would not execute until A and B have finished, since every time A and B use up their time slices they are made ready again, and they have a higher priority.

If you do as you described, running the high-priority tasks and then not running them again until you've run the low-priority tasks, you're effectively running everything at the same priority level. I think you're forgetting that tasks generally block and temporarily disappear from the ready list, so your low-priority tasks will get a chance to run when the higher-priority tasks are blocked. Simple realtime schedulers generally work this way.
So I guess I will only swap the queues when all tasks in all priorities have finished their time slices. This can be done with a counter, since I know the number of priorities. Simple fix, but I figured I needed to post that for anyone else who might read this later. If someone has a better solution, lemme know. This is just what is "rolling off my fingertips" lol
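That counter idea might look something like the following sketch. The names (`prio_state_t`, `try_swap_all`) and the done-count field are mine for illustration; `SCHED_MAX_PRIOR` matches the code earlier in the thread. It swaps every queue at once, only after all priorities are exhausted:

```c
#include <assert.h>

#define SCHED_MAX_PRIOR 3   /* priorities 0 (highest) through 3 (lowest) */

/* Hypothetical per-priority state: counts instead of full task lists. */
typedef struct {
    unsigned ready_count;   /* tasks still ready at this priority */
    unsigned done_count;    /* tasks that have used up their slice */
} prio_state_t;

/* Swap done tasks back to ready only once EVERY priority is exhausted,
   so low-priority tasks are not starved by high-priority ones. */
int try_swap_all(prio_state_t prior[SCHED_MAX_PRIOR + 1])
{
    for (int i = 0; i <= SCHED_MAX_PRIOR; ++i)
        if (prior[i].ready_count != 0)
            return 0;               /* someone is still ready: no swap yet */
    for (int i = 0; i <= SCHED_MAX_PRIOR; ++i) {
        prior[i].ready_count = prior[i].done_count;
        prior[i].done_count = 0;
    }
    return 1;                       /* all queues replenished together */
}
```

Note the trade-off the next post points out: with this scheme, higher-priority tasks only run more often than lower-priority ones when they block; otherwise everyone gets one slice per round.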
OSwhatever wrote:
> You can make the kernel its own task if you think it's beneficial for you. It can help you, since it is a container for resources associated with the kernel; however, it becomes somewhat of a bastard, since not all rules apply to it.

Good thought. I use a kernel process called the primary task. It has many functions, such as being a container for kernel threads, a garbage collector for some data resources, the parent process for orphan processes, the manager for processes/users, and so on.