Steps in Context Switching

Problem Description

I am asked to describe the steps involved in a context switch (1) between two different processes and (2) between two different threads in the same process.

  1. During a context switch, the kernel will save the context of the old process in its PCB and then load the saved context of the new process scheduled to run.
  2. Context switching between two different threads in the same process can be scheduled by the operating system so that the threads appear to execute in parallel, and is thus usually faster than a context switch between two different processes.

Is this too general, or what would you add to explain the process more clearly?

Solution

It's much easier to explain those in reverse order because a process-switch always involves a thread-switch.

A typical thread context switch on a single-core CPU happens like this:

  1. All context switches are initiated by an 'interrupt'. This could be an actual hardware interrupt that runs a driver (e.g. from a network card, keyboard, memory-management or timer hardware), or a software call (a system call) that performs a hardware-interrupt-like call sequence to enter the OS. In the case of a driver interrupt, the OS provides an entry point that the driver can call instead of performing the 'normal' direct interrupt-return, so a driver can exit via the OS scheduler if it needs the OS to set a thread ready (e.g. it has signaled a semaphore).

  2. Non-trivial systems will have to initiate a hardware-protection-level change to enter a kernel-state so that the kernel code/data etc. can be accessed.

  3. Core state for the interrupted thread has to be saved. On a simple embedded system, this might just be pushing all registers onto the thread stack and saving the stack pointer in its Thread Control Block (TCB). A rough C sketch of this save/restore, and of the scheduler steps below, appears after this list.

  4. Many systems switch to an OS-dedicated stack at this stage so that the bulk of OS-internal stack requirements are not inflicted on the stack of every thread.

  5. It may be necessary to mark the thread stack position where the change to interrupt-state occurred to allow for nested interrupts.

  6. The driver/system call runs and may change the set of ready threads by adding/removing TCBs from the internal queues for the different thread priorities. For example, the network card driver may have set an event or signaled a semaphore that another thread was waiting on, so that thread will be added to the ready set, or the running thread may have called sleep() and so elected to remove itself from the ready set.

  7. The OS scheduler algorithm is run to decide which thread to run next, typically the highest-priority ready thread that is at the front of the queue for that priority. If the next-to-run thread belongs to a different process from the one that owned the previously-running thread, some extra stuff is needed here (see later).

  8. The saved stack pointer from the TCB for that thread is retrieved and loaded into the hardware stack pointer.

  9. The core state for the selected thread is restored. On my simple system, the registers would be popped from the stack of the selected thread. More complex systems will have to handle a return to user-level protection.

  10. An interrupt-return is performed, so transferring execution to the selected thread.
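
To make steps 3 and 6-9 a little more concrete, here is a minimal sketch in C of the scheduler pick and the stack-pointer handling around it, assuming a toy single-core kernel. Every name in it (the TCB layout, the ready queues, the cpu_* primitives) is hypothetical; a real kernel does the register save/restore and the interrupt return in architecture-specific assembly on the interrupt entry/exit path.

    /* Minimal sketch of steps 3 and 6-9 above for a toy single-core kernel.
     * All names here (tcb_t, the ready queues, the cpu_* stubs) are
     * hypothetical stand-ins for architecture-specific code. */

    #include <stdint.h>
    #include <stddef.h>

    #define NUM_PRIORITIES 8

    struct pcb;                            /* owning process, see the process switch later */

    typedef struct tcb {
        uint8_t    *stack_ptr;             /* saved stack pointer (steps 3 and 8) */
        int         priority;              /* index into the ready queues */
        struct tcb *next;                  /* link in its ready queue */
        struct pcb *owner;                 /* used by step 7 to spot a process change */
    } tcb_t;

    /* One FIFO of ready TCBs per priority; step 6 adds/removes entries here. */
    static tcb_t *ready_queue[NUM_PRIORITIES];
    static tcb_t *current_thread;

    /* Hypothetical assembly stubs: load the hardware stack pointer, then pop
     * the register set and perform the final interrupt return. */
    extern void cpu_load_stack_pointer(uint8_t *sp);      /* step 8 */
    extern void cpu_pop_context_and_return(void);         /* steps 9 and 10 */

    /* Step 7: pick the highest-priority ready thread, front of its queue.
     * The interrupted thread is assumed to have been re-queued (or blocked)
     * by the driver/system call in step 6 before this runs. */
    static tcb_t *schedule_next(void)
    {
        for (int p = NUM_PRIORITIES - 1; p >= 0; p--) {
            if (ready_queue[p] != NULL) {
                tcb_t *t = ready_queue[p];
                ready_queue[p] = t->next;                  /* dequeue from the front */
                return t;
            }
        }
        return current_thread;                             /* nothing ready: keep running */
    }

    /* Called on the way out of the interrupt or system call. The interrupt
     * entry is assumed to have already pushed the register set onto the
     * interrupted thread's stack (step 3); here we record its stack pointer,
     * pick the next thread and unwind onto its stack. In practice this tail
     * is written in assembly because it changes the stack under the code. */
    void os_exit_interrupt(uint8_t *interrupted_sp)
    {
        current_thread->stack_ptr = interrupted_sp;        /* finish step 3 */
        current_thread = schedule_next();                  /* step 7 */
        cpu_load_stack_pointer(current_thread->stack_ptr); /* step 8 */
        cpu_pop_context_and_return();                      /* steps 9 and 10 */
    }

The only point of the sketch is to show where the TCB, the per-priority ready queues and the saved stack pointer fit into the numbered steps; the protection-level change, the OS stack switch and nested-interrupt handling are all elided.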

In the case of a multicore CPU, things are more complex. The scheduler may decide that a thread currently running on another core needs to be stopped and replaced by a thread that has just become ready. It can do this by using its inter-processor driver to hardware-interrupt the core running the thread that has to be stopped. The complexity of this operation, on top of everything else, is a good reason to avoid writing OS kernels :)
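
As a rough illustration of that cross-core kick, the fragment below assumes a hypothetical send_ipi() primitive and some per-core bookkeeping; on x86 the interrupt would be raised through the local APIC, and other architectures have their own inter-processor interrupt mechanisms.

    /* Sketch of preempting a thread on another core. send_ipi() and the
     * per-core record of what is running are hypothetical; the interrupted
     * core then runs its normal interrupt path, ending in steps 7-10 above,
     * which is what actually swaps the threads. */

    #define NUM_CORES          4
    #define RESCHEDULE_VECTOR  0xF0          /* hypothetical IPI vector number */

    extern void send_ipi(int core, int vector);   /* hypothetical hardware stub */

    static int running_priority[NUM_CORES];       /* priority running on each core */

    /* Called when a thread of priority 'prio' has just become ready:
     * if another core is running something less important, interrupt it. */
    void maybe_preempt_another_core(int prio, int this_core)
    {
        for (int c = 0; c < NUM_CORES; c++) {
            if (c != this_core && running_priority[c] < prio) {
                send_ipi(c, RESCHEDULE_VECTOR);
                return;
            }
        }
    }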

A typical process context switch happens like this:

  1. Process context switches are initiated by a thread context switch, so all of the above, 1-10, is going to need to happen.

  2. At step 7 above, the scheduler decides to run a thread belonging to a different process from the one that owned the previously-running thread.

  3. The memory-management hardware has to be loaded with the address space for the new process, i.e. whatever selectors/segments/flags allow the threads of the new process to access its memory (see the sketch after this list).

  4. The context of any FPU hardware needs to be saved to, and restored from, the PCB.

  5. There may be other process-dedicated hardware that needs to be saved/restored.
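
A very rough sketch of steps 3 and 4, assuming an x86-style page-table base register and a flat, hypothetical PCB layout; real MMU and FPU handling is architecture-specific, and many kernels defer the FPU save/restore until the new process actually touches the FPU ('lazy' FPU switching).

    /* Sketch of the process-specific part of the switch (steps 3 and 4 above).
     * The pcb layout and the load_page_table_base()/fpu_save()/fpu_restore()
     * stubs are hypothetical stand-ins for architecture-specific code
     * (e.g. a CR3 reload and FXSAVE/FXRSTOR on x86). */

    #include <stdint.h>

    typedef struct pcb {
        uintptr_t page_table_base;   /* root of this process's address space */
        uint8_t   fpu_state[512];    /* saved FPU/SIMD context */
    } pcb_t;

    extern void load_page_table_base(uintptr_t root);  /* hypothetical MMU stub */
    extern void fpu_save(uint8_t *buf);                /* hypothetical FPU stubs */
    extern void fpu_restore(const uint8_t *buf);

    /* Called from the thread switch when the scheduler (step 2 above) has
     * picked a thread owned by a different process than the previous one. */
    void switch_process_context(pcb_t *prev, pcb_t *next)
    {
        /* Step 3: point the MMU at the new address space. On x86 this is a
         * CR3 reload, which also discards most cached TLB translations. */
        load_page_table_base(next->page_table_base);

        /* Step 4: swap the FPU context via the PCBs. Shown eagerly here;
         * many kernels do this lazily, on first FPU use by the new process. */
        fpu_save(prev->fpu_state);
        fpu_restore(next->fpu_state);
    }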

On any real system, the mechanisms are architecture-dependent and the above is a rough and incomplete guide to the implications of either context switch. There are other overheads generated by a process-switch that are not strictly part of the switch - there may be extra cache-flushes and page-faults after a process-switch since some of its memory may have been paged out in favour of pages belonging to the process owning the thread that was running before.
