A Brief Analysis of the Linux Process Scheduling Functions
Reposted from: http://www.cnblogs.com/liangning/p/3885933.html
As is well known, process scheduling is carried out by the schedule() function, so let us start the analysis there. Its code is as follows (kernel/sched/core.c):
1 asmlinkage __visible void __sched schedule(void)
2 {
3     struct task_struct *tsk = current;
4
5     sched_submit_work(tsk);
6     __schedule();
7 }
8 EXPORT_SYMBOL(schedule);
Line 3 fetches the descriptor pointer of the current process and stores it in the local variable tsk. Line 6 calls __schedule(), which does the real scheduling work.
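schedule() itself is only a thin wrapper. The call to sched_submit_work() on line 5 makes sure that, if the task is about to sleep and still has plugged block I/O queued, that I/O is flushed first so it cannot deadlock behind the sleeping task. For reference, in kernels of this era the helper looks roughly like the sketch below (kernel/sched/core.c); treat it as an approximation rather than an exact quote:

static inline void sched_submit_work(struct task_struct *tsk)
{
	/* A task that is not about to sleep (or is blocked on an rt-mutex) needs no help. */
	if (!tsk->state || tsk_is_pi_blocked(tsk))
		return;
	/*
	 * If we are going to sleep and we have plugged IO queued,
	 * make sure to submit it to avoid deadlocks.
	 */
	if (blk_needs_flush_plug(tsk))
		blk_schedule_flush_plug(tsk);
}

__schedule(), which performs the actual scheduling, is shown next (kernel/sched/core.c):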
 1 static void __sched __schedule(void)
 2 {
 3     struct task_struct *prev, *next;
 4     unsigned long *switch_count;
 5     struct rq *rq;
 6     int cpu;
 7
 8 need_resched:
 9     preempt_disable();
10     cpu = smp_processor_id();
11     rq = cpu_rq(cpu);
12     rcu_note_context_switch(cpu);
13     prev = rq->curr;
14
15     schedule_debug(prev);
16
17     if (sched_feat(HRTICK))
18         hrtick_clear(rq);
19
20     /*
21      * Make sure that signal_pending_state()->signal_pending() below
22      * can't be reordered with __set_current_state(TASK_INTERRUPTIBLE)
23      * done by the caller to avoid the race with signal_wake_up().
24      */
25     smp_mb__before_spinlock();
26     raw_spin_lock_irq(&rq->lock);
27
28     switch_count = &prev->nivcsw;
29     if (prev->state && !(preempt_count() & PREEMPT_ACTIVE)) {
30         if (unlikely(signal_pending_state(prev->state, prev))) {
31             prev->state = TASK_RUNNING;
32         } else {
33             deactivate_task(rq, prev, DEQUEUE_SLEEP);
34             prev->on_rq = 0;
35
36             /*
37              * If a worker went to sleep, notify and ask workqueue
38              * whether it wants to wake up a task to maintain
39              * concurrency.
40              */
41             if (prev->flags & PF_WQ_WORKER) {
42                 struct task_struct *to_wakeup;
43
44                 to_wakeup = wq_worker_sleeping(prev, cpu);
45                 if (to_wakeup)
46                     try_to_wake_up_local(to_wakeup);
47             }
48         }
49         switch_count = &prev->nvcsw;
50     }
51
52     if (prev->on_rq || rq->skip_clock_update < 0)
53         update_rq_clock(rq);
54
55     next = pick_next_task(rq, prev);
56     clear_tsk_need_resched(prev);
57     clear_preempt_need_resched();
58     rq->skip_clock_update = 0;
59
60     if (likely(prev != next)) {
61         rq->nr_switches++;
62         rq->curr = next;
63         ++*switch_count;
64
65         context_switch(rq, prev, next); /* unlocks the rq */
66         /*
67          * The context switch have flipped the stack from under us
68          * and restored the local variables which were saved when
69          * this task called schedule() in the past. prev == current
70          * is still correct, but it can be moved to another cpu/rq.
71          */
72         cpu = smp_processor_id();
73         rq = cpu_rq(cpu);
74     } else
75         raw_spin_unlock_irq(&rq->lock);
76
77     post_schedule(rq);
78
79     sched_preempt_enable_no_resched();
80     if (need_resched())
81         goto need_resched;
82 }
Line 9 disables kernel preemption. Line 10 obtains the number of the current CPU. Line 11 obtains the run queue of the current CPU. Line 13 saves the descriptor pointer of the currently running process in the variable prev. Line 55 stores the descriptor pointer of the next process to be run in the variable next. Line 56 clears the current process's need-resched flag. Line 60 checks whether the current process and the chosen next process are the same; only if they differ does a real switch take place. Line 65 switches the context from the current process to the next one (the context has to be switched before the new process can run). We will look at context_switch() in a moment, but first a quick sketch of how the next task is picked on line 55.
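pick_next_task() walks the scheduler classes in priority order (stop, deadline, rt, fair, idle) and returns the first task one of them offers. The following is only a simplified sketch of that idea, not the literal kernel code (the real function in kernel/sched/core.c also contains a fast path for the common case where every runnable task belongs to the fair class):

/*
 * Simplified sketch of pick_next_task(): ask each scheduler class,
 * from highest to lowest priority, for a task to run. The idle class
 * always has something to offer, so the loop always returns.
 */
static inline struct task_struct *
pick_next_task(struct rq *rq, struct task_struct *prev)
{
	const struct sched_class *class;
	struct task_struct *p;

	for_each_class(class) {
		p = class->pick_next_task(rq, prev);
		if (p)
			return p;
	}

	BUG(); /* the idle class will always have a runnable task */
}

Now let us look at context_switch() (kernel/sched/core.c):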
 1 context_switch(struct rq *rq, struct task_struct *prev,
 2                struct task_struct *next)
 3 {
 4     struct mm_struct *mm, *oldmm;
 5
 6     prepare_task_switch(rq, prev, next);
 7
 8     mm = next->mm;
 9     oldmm = prev->active_mm;
10     /*
11      * For paravirt, this is coupled with an exit in switch_to to
12      * combine the page table reload and the switch backend into
13      * one hypercall.
14      */
15     arch_start_context_switch(prev);
16
17     if (!mm) {
18         next->active_mm = oldmm;
19         atomic_inc(&oldmm->mm_count);
20         enter_lazy_tlb(oldmm, next);
21     } else
22         switch_mm(oldmm, mm, next);
23
24     if (!prev->mm) {
25         prev->active_mm = NULL;
26         rq->prev_mm = oldmm;
27     }
28     /*
29      * Since the runqueue lock will be released by the next
30      * task (which is an invalid locking op but in the case
31      * of the scheduler it's an obvious special-case), so we
32      * do an early lockdep release here:
33      */
34 #ifndef __ARCH_WANT_UNLOCKED_CTXSW
35     spin_release(&rq->lock.dep_map, 1, _THIS_IP_);
36 #endif
37
38     context_tracking_task_switch(prev, next);
39     /* Here we just switch the register state and the stack. */
40     switch_to(prev, next, prev);
41
42     barrier();
43     /*
44      * this_rq must be evaluated again because prev may have moved
45      * CPUs since it called schedule(), thus the 'rq' on its stack
46      * frame will be invalid.
47      */
48     finish_task_switch(this_rq(), prev);
49 }
A context switch generally consists of two parts. One is the hardware context switch (the CPU registers: the register contents used by the current process must be saved, and those of the next process restored); the other is switching the process address space (put simply, the memory mappings of the program). A process's address space is described by the struct mm_struct reached from its process descriptor, so this function mainly manipulates that structure. Line 17: if mm, the address space of the next process to be scheduled, is NULL, the next task is a kernel thread; it has no address space of its own and simply borrows the one of the task that ran before it, so line 18 copies the previous task's active_mm pointer into the same field of the next task, which then keeps using that address space. Line 22: if the next task's mm is not NULL, it has an address space of its own, and switch_mm() is executed to switch the process page tables. Line 40 switches the hardware context of the process. The two task_struct fields this logic keys off are shown below, after which we turn to switch_to().
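For reference, here is a trimmed excerpt of the two fields involved as they appear in struct task_struct (include/linux/sched.h); the comments summarize the convention described above, and all unrelated fields are omitted:

struct task_struct {
	...
	struct mm_struct *mm;		/* address space owned by the task; NULL for a kernel thread */
	struct mm_struct *active_mm;	/* address space actually in use; a kernel thread borrows this
					   from whichever task ran before it, so no page-table switch
					   is needed when switching to it */
	...
};

The switch_to() macro used on line 40 is defined as follows (arch/x86/include/asm/switch_to.h):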
 1 #define switch_to(prev, next, last)                                     \
 2 do {                                                                    \
 3     /*                                                                  \
 4      * Context-switching clobbers all registers, so we clobber          \
 5      * them explicitly, via unused output variables.                    \
 6      * (EAX and EBP is not listed because EBP is saved/restored         \
 7      * explicitly for wchan access and EAX is the return value of       \
 8      * __switch_to())                                                   \
 9      */                                                                 \
10     unsigned long ebx, ecx, edx, esi, edi;                              \
11                                                                         \
12     asm volatile("pushfl\n\t"                /* save    flags */        \
13                  "pushl %%ebp\n\t"           /* save    EBP   */        \
14                  "movl %%esp,%[prev_sp]\n\t" /* save    ESP   */        \
15                  "movl %[next_sp],%%esp\n\t" /* restore ESP   */        \
16                  "movl $1f,%[prev_ip]\n\t"   /* save    EIP   */        \
17                  "pushl %[next_ip]\n\t"      /* restore EIP   */        \
18                  __switch_canary                                        \
19                  "jmp __switch_to\n"         /* regparm call  */        \
20                  "1:\t"                                                 \
21                  "popl %%ebp\n\t"            /* restore EBP   */        \
22                  "popfl\n"                   /* restore flags */        \
23                                                                         \
24                  /* output parameters */                                \
25                  : [prev_sp] "=m" (prev->thread.sp),                    \
26                    [prev_ip] "=m" (prev->thread.ip),                    \
27                    "=a" (last),                                         \
28                                                                         \
29                    /* clobbered output registers: */                    \
30                    "=b" (ebx), "=c" (ecx), "=d" (edx),                  \
31                    "=S" (esi), "=D" (edi)                               \
32                                                                         \
33                    __switch_canary_oparam                               \
34                                                                         \
35                    /* input parameters: */                              \
36                  : [next_sp]  "m" (next->thread.sp),                    \
37                    [next_ip]  "m" (next->thread.ip),                    \
38                                                                         \
39                    /* regparm parameters for __switch_to(): */          \
40                    [prev]     "a" (prev),                               \
41                    [next]     "d" (next)                                \
42                                                                         \
43                    __switch_canary_iparam                               \
44                                                                         \
45                  : /* reloaded segment registers */                     \
46                    "memory");                                           \
47 } while (0)
This macro uses inline assembly to switch the hardware context between the two processes. Lines 12-13 push the eflags and ebp registers onto the stack, because their values will be needed again when this process is eventually switched back in. Line 14 saves the current process's kernel stack pointer into its thread.sp field (in struct thread_struct, embedded in the process descriptor). Line 15 loads the next process's thread.sp into the esp register, switching to the next process's kernel stack; at this point the process switch has effectively taken place (switching the kernel stack is the hallmark of a process switch), and everything that executes from here on runs in the context of the new process. Line 16 stores the address of label 1 into the previous process's thread.ip, so that when that process is next switched in, it resumes at the code thread.ip points to. (In fact, whichever instruction you want the previous process to resume at when it is next scheduled, you store that instruction's address in its thread.ip. Saving a process's state differs from saving state across a function call: a function call saves registers by pushing them on the stack (the stack does not change) and restores them by popping; a process switch saves the registers into the process's thread structure and restores them from there when the switched-out process runs again. Since a process switch replaces even the kernel stack, the state has to be kept in a per-process data structure, where it cannot be lost and is easy to restore.) Line 17 pushes the next process's thread.ip onto the (now switched) kernel stack; execution will shortly resume at that address. Line 19 jumps into __switch_to(). Before looking at that function, note where these saved pointers actually live (excerpt below).
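The sp and ip fields referenced by the asm above belong to the architecture-specific struct thread_struct embedded in the task descriptor (task_struct.thread). A trimmed excerpt with just the fields that matter here (arch/x86/include/asm/processor.h, 32-bit; the many other fields are omitted):

struct thread_struct {
	struct desc_struct	tls_array[GDT_ENTRY_TLS_ENTRIES]; /* cached per-thread TLS descriptors      */
	unsigned long		sp0;	/* top of the kernel stack, loaded into the TSS via load_sp0()     */
	unsigned long		sp;	/* kernel esp saved by switch_to() when the task is switched out   */
	unsigned long		ip;	/* eip to resume at: label 1 above, or ret_from_fork for a new task */
	...
};

With that in mind, here is the code of __switch_to() (arch/x86/kernel/process_32.c):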
 1 __visible __notrace_funcgraph struct task_struct *
 2 __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
 3 {
 4     struct thread_struct *prev = &prev_p->thread,
 5                          *next = &next_p->thread;
 6     int cpu = smp_processor_id();
 7     struct tss_struct *tss = &per_cpu(init_tss, cpu);
 8     fpu_switch_t fpu;
 9
10     /* never put a printk in __switch_to... printk() calls wake_up*() indirectly */
11
12     fpu = switch_fpu_prepare(prev_p, next_p, cpu);
13
14     /*
15      * Reload esp0.
16      */
17     load_sp0(tss, next);
18
19     /*
20      * Save away %gs. No need to save %fs, as it was saved on the
21      * stack on entry. No need to save %es and %ds, as those are
22      * always kernel segments while inside the kernel. Doing this
23      * before setting the new TLS descriptors avoids the situation
24      * where we temporarily have non-reloadable segments in %fs
25      * and %gs. This could be an issue if the NMI handler ever
26      * used %fs or %gs (it does not today), or if the kernel is
27      * running inside of a hypervisor layer.
28      */
29     lazy_save_gs(prev->gs);
30
31     /*
32      * Load the per-thread Thread-Local Storage descriptor.
33      */
34     load_TLS(next, cpu);
35
36     /*
37      * Restore IOPL if needed. In normal use, the flags restore
38      * in the switch assembly will handle this. But if the kernel
39      * is running virtualized at a non-zero CPL, the popf will
40      * not restore flags, so it must be done in a separate step.
41      */
42     if (get_kernel_rpl() && unlikely(prev->iopl != next->iopl))
43         set_iopl_mask(next->iopl);
44
45     /*
46      * If it were not for PREEMPT_ACTIVE we could guarantee that the
47      * preempt_count of all tasks was equal here and this would not be
48      * needed.
49      */
50     task_thread_info(prev_p)->saved_preempt_count = this_cpu_read(__preempt_count);
51     this_cpu_write(__preempt_count, task_thread_info(next_p)->saved_preempt_count);
52
53     /*
54      * Now maybe handle debug registers and/or IO bitmaps
55      */
56     if (unlikely(task_thread_info(prev_p)->flags & _TIF_WORK_CTXSW_PREV ||
57                  task_thread_info(next_p)->flags & _TIF_WORK_CTXSW_NEXT))
58         __switch_to_xtra(prev_p, next_p, tss);
59
60     /*
61      * Leave lazy mode, flushing any hypercalls made here.
62      * This must be done before restoring TLS segments so
63      * the GDT and LDT are properly updated, and must be
64      * done before math_state_restore, so the TS bit is up
65      * to date.
66      */
67     arch_end_context_switch(next_p);
68
69     this_cpu_write(kernel_stack,
70                    (unsigned long)task_stack_page(next_p) +
71                    THREAD_SIZE - KERNEL_STACK_OFFSET);
72
73     /*
74      * Restore %gs if needed (which is common)
75      */
76     if (prev->gs | next->gs)
77         lazy_load_gs(next->gs);
78
79     switch_fpu_finish(next_p, fpu);
80
81     this_cpu_write(current_task, next_p);
82
83     return prev_p;
84 }
This function mainly performs some further housekeeping for the process that has just been switched in. For example, line 34 loads the thread-local storage (TLS) segment descriptors used by the process into this CPU's Global Descriptor Table (a sketch of the underlying helper follows below). The return statement on line 84 compiles into two machine instructions: one places the return value prev_p into the eax register, the other is a ret instruction that pops the word on top of the kernel stack into the eip register and resumes execution there. That word is precisely the pointer pushed at line 17 of the switch_to macro above. Normally the pushed pointer is the address of label 1 at line 20 of that macro, so after __switch_to() returns, execution continues from label 1.
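Back to line 34 for a moment: on a non-paravirtualized kernel, load_TLS() boils down to native_load_tls(), which copies the task's cached TLS descriptors into this CPU's GDT. It looks roughly like this (arch/x86/include/asm/desc.h); take it as a sketch of the idea:

static inline void native_load_tls(struct thread_struct *t, unsigned int cpu)
{
	struct desc_struct *gdt = get_cpu_gdt_table(cpu);
	unsigned int i;

	/*
	 * Copy the task's three TLS descriptors into this CPU's GDT, so the
	 * segments set up earlier with set_thread_area() keep pointing at
	 * the right thread-local storage after the switch.
	 */
	for (i = 0; i < GDT_ENTRY_TLS_ENTRIES; i++)
		gdt[GDT_ENTRY_TLS_MIN + i] = t->tls_array[i];
}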
Note that for a process which has been scheduled before, execution after __switch_to() returns resumes at label 1. A process freshly created with fork(), clone(), etc. and never scheduled, however, enters ret_from_fork() instead: when do_fork() creates the process, the address of ret_from_fork, rather than the address of label 1, is stored in the child's thread.ip, so the first switch to the child jumps into ret_from_fork. We will see this again when we analyze the fork system call.
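Concretely, this priming happens in copy_thread(), called from copy_process() on the do_fork() path. The fragment below is only an approximate sketch of the two assignments that matter for a new user task on 32-bit x86 (arch/x86/kernel/process_32.c); the surrounding code, and the kernel-thread case which uses ret_from_kernel_thread instead, are omitted:

	childregs = task_pt_regs(p);			/* pt_regs at the top of the child's kernel stack */
	p->thread.sp = (unsigned long) childregs;	/* the child's saved kernel stack pointer */
	p->thread.ip = (unsigned long) ret_from_fork;	/* the child's first switch_to() "returns" here */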