Context switch

In computing, a context switch is the process of storing the state of a process or thread so that it can be restored and resume execution at a later point, and then restoring a different, previously saved, state.[1] This allows multiple processes to share a single central processing unit (CPU), and is an essential feature of a multiprogramming or multitasking operating system. On a traditional CPU, each process (a program in execution) uses the CPU registers to hold its data and its current execution state. In a multitasking operating system, however, the operating system switches between processes or threads so that multiple processes can make progress concurrently. At every switch, the operating system must save the state of the currently running process and then load the saved state of the next process that will run on the CPU. This sequence of operations, saving the state of the running process and loading the state of the next one, is called a context switch.

The precise meaning of the phrase "context switch" varies. In a multitasking context, it refers to the process of storing the system state for one task, so that task can be paused and another task resumed. A context switch can also occur as the result of an interrupt, such as when a task needs to access disk storage, freeing up CPU time for other tasks. Some operating systems also require a context switch to move between user mode and kernel mode tasks. The process of context switching can have a negative impact on system performance.[2]:28

Cost

Context switches are usually computationally intensive, and much of the design of operating systems is devoted to optimizing the use of context switches. Switching from one process to another requires a certain amount of time for doing the administration: saving and loading registers and memory maps, updating various tables and lists, and so on. What is actually involved in a context switch depends on the architecture, the operating system, and the number of resources shared (threads that belong to the same process share many resources, whereas unrelated non-cooperating processes share few).

For example, in the Linux kernel, context switching involves loading the corresponding process control block (PCB) stored in the PCB table in the kernel stack to retrieve information about the state of the new process. CPU state information, including the registers, stack pointer, and program counter, as well as memory management information such as segmentation tables and page tables (unless the old process shares its memory with the new one), are loaded from the PCB for the new process. To avoid incorrect address translation when the previous and current processes use different memory, the translation lookaside buffer (TLB) must be flushed. This hurts performance because, immediately after the flush, every memory reference misses in the now-empty TLB.[3][4]
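The bookkeeping described above can be illustrated with a simplified C sketch of a hypothetical context-switch routine. This is not actual Linux kernel code; pcb_t, struct mm_info, save_cpu_state, load_cpu_state, load_page_tables, and flush_tlb are placeholder names for the operations involved.

/* A simplified, hypothetical sketch; all names are placeholders. */
struct mm_info;                        /* page tables, segment tables, ...  */

typedef struct {
    unsigned long registers[16];       /* general-purpose registers         */
    unsigned long stack_pointer;
    unsigned long program_counter;
    struct mm_info *mm;                /* memory-management information     */
} pcb_t;

void save_cpu_state(pcb_t *p);         /* registers, SP, PC -> PCB          */
void load_cpu_state(const pcb_t *p);   /* PCB -> registers, SP, PC          */
void load_page_tables(struct mm_info *mm);
void flush_tlb(void);

void context_switch(pcb_t *prev, pcb_t *next)
{
    save_cpu_state(prev);              /* preserve the outgoing process     */

    if (prev->mm != next->mm) {        /* different address spaces?         */
        load_page_tables(next->mm);
        flush_tlb();                   /* discard stale translations; the
                                          next references will miss in the
                                          now-empty TLB                     */
    }

    load_cpu_state(next);              /* execution resumes where `next`
                                          was last suspended                */
}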

Furthermore, analogous context switching happens between user threads, notably green threads, and is often very lightweight, saving and restoring minimal context. In extreme cases, such as switching between goroutines in Go, a context switch is equivalent to a coroutine yield, which is only marginally more expensive than a subroutine call.
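To make this concrete, the following self-contained C sketch performs a user-level context switch with the POSIX <ucontext.h> interface (obsolescent but still available on common Unix-like systems). Green-thread and coroutine runtimes do something analogous, saving and restoring little more than a stack pointer and a handful of registers entirely in user space.

#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, task_ctx;

static void task(void)
{
    puts("in user-level task");
    /* Save this context's registers and stack pointer, restore main_ctx:
     * a context switch performed entirely in user space. */
    swapcontext(&task_ctx, &main_ctx);
    puts("task resumed");
}

int main(void)
{
    static char stack[64 * 1024];            /* stack for the second context */

    getcontext(&task_ctx);
    task_ctx.uc_stack.ss_sp   = stack;
    task_ctx.uc_stack.ss_size = sizeof stack;
    task_ctx.uc_link          = &main_ctx;   /* return here when task ends   */
    makecontext(&task_ctx, task, 0);

    puts("switching to task");
    swapcontext(&main_ctx, &task_ctx);       /* first switch into the task   */
    puts("back in main, switching again");
    swapcontext(&main_ctx, &task_ctx);       /* resume task where it stopped */
    puts("done");
    return 0;
}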

Switching cases

There are three potential triggers for a context switch:

Multitasking

Most commonly, within some scheduling scheme, one process must be switched out of the CPU so another process can run. This context switch can be triggered by the process making itself unrunnable, such as by waiting for an I/O or synchronization operation to complete. On a pre-emptive multitasking system, the scheduler may also switch out processes that are still runnable. To prevent other processes from being starved of CPU time, pre-emptive schedulers often configure a timer interrupt to fire when a process exceeds its time slice. This interrupt ensures that the scheduler will gain control to perform a context switch.
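The time-slice mechanism can be sketched as follows. This is a hypothetical fragment rather than code from any particular kernel; tick_handler, current, need_resched, and schedule are placeholder names.

/* Hypothetical per-process bookkeeping for time slicing. */
struct task {
    int time_slice;            /* remaining timer ticks before pre-emption */
};

extern struct task *current;   /* process currently running on this CPU    */
extern int need_resched;       /* checked on the return path from the kernel */
void schedule(void);           /* picks the next task and context-switches;
                                  called later when need_resched is set     */

/* Invoked by the periodic timer interrupt. */
void tick_handler(void)
{
    if (--current->time_slice <= 0) {
        current->time_slice = 0;
        need_resched = 1;      /* request a context switch; the switch itself
                                  happens when the kernel is about to return
                                  to the interrupted task                    */
    }
}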

Interrupt handling

Modern architectures are interrupt driven. This means that if the CPU requests data from a disk, for example, it does not need to busy-wait until the read is over; it can issue the request (to the I/O device) and continue with some other task. When the read is over, the CPU can be interrupted (in this case by hardware: the disk controller sends an interrupt request to the programmable interrupt controller, or PIC) and presented with the result of the read. To handle interrupts, a routine called an interrupt handler is installed, and it is this handler that services the interrupt from the disk.

When an interrupt occurs, the hardware automatically switches a part of the context (at least enough to allow the handler to return to the interrupted code). The handler may save additional context, depending on details of the particular hardware and software designs. Often only a minimal part of the context is changed in order to minimize the amount of time spent handling the interrupt. The kernel does not spawn or schedule a special process to handle interrupts, but instead the handler executes in the (often partial) context established at the beginning of interrupt handling. Once interrupt servicing is complete, the context in effect before the interrupt occurred is restored so that the interrupted process can resume execution in its proper state.
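The following C fragment sketches, in hedged form, a disk interrupt handler running in that partial context; ack_interrupt, io_complete, wake_up, disk_waiter, and need_resched are hypothetical placeholders, not a real driver API.

/* Hypothetical helpers; none of these names belong to a real kernel. */
struct task;
extern struct task *disk_waiter;      /* process sleeping on this disk read  */
void ack_interrupt(int irq);
void io_complete(void);
void wake_up(struct task *t);
extern int need_resched;

/* Runs in the (partial) context set up when the interrupt arrived; the
 * hardware has already saved enough state to return to the interrupted code. */
void disk_interrupt_handler(int irq)
{
    ack_interrupt(irq);               /* tell the controller/PIC it was seen */
    io_complete();                    /* record status, mark the buffer ready */
    if (disk_waiter) {
        wake_up(disk_waiter);         /* move the sleeper to the ready queue  */
        need_resched = 1;             /* a context switch may follow, but only
                                         after the handler returns            */
    }
    /* On return, the pre-interrupt context is restored. */
}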

User and kernel mode switching

When the system transitions between user mode and kernel mode, a context switch is not necessary; a mode transition is not by itself a context switch. However, depending on the operating system, a context switch may also take place at this time.

Steps

The state of the currently executing process must be saved so it can be restored when rescheduled for execution.

The process state includes all the registers that the process may be using, especially the program counter, plus any other operating system specific data that may be necessary. This is usually stored in a data structure called a process control block (PCB) or switchframe.

The PCB might be stored on a per-process stack in kernel memory (as opposed to the user-mode call stack), or there may be some specific operating system-defined data structure for this information. A handle to the PCB is added to a queue of processes that are ready to run, often called the ready queue.

Since the operating system has effectively suspended the execution of one process, it can then switch context by choosing a process from the ready queue and restoring its PCB. In doing so, the program counter from the PCB is loaded, and thus execution can continue in the chosen process. Process and thread priority can influence which process is chosen from the ready queue (i.e., it may be a priority queue).
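These steps can be summarized in a short, hypothetical C sketch; save_state, restore_state, ready_enqueue, and ready_dequeue_highest_priority are placeholder names for the operations described in this section.

/* Hypothetical types and helpers illustrating the steps above. */
typedef struct pcb pcb_t;
void   save_state(pcb_t *p);                /* registers, PC, SP -> PCB      */
void   restore_state(const pcb_t *p);       /* PCB -> registers, PC, SP      */
void   ready_enqueue(pcb_t *p);             /* add to the ready queue        */
pcb_t *ready_dequeue_highest_priority(void);/* may be a priority queue       */

void switch_out_and_in(pcb_t *current)
{
    save_state(current);                    /* 1: preserve the old process   */
    ready_enqueue(current);                 /* 2: it can run again later     */

    pcb_t *next = ready_dequeue_highest_priority();  /* 3: choose next       */
    restore_state(next);                    /* 4: loading next's PC means
                                               execution continues in `next` */
}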

Example

Consider a general arithmetic addition operation A = B + 1. The instruction is fetched into the instruction register and the program counter is incremented. A and B are read from memory into registers R1 and R2 respectively, the sum B + 1 is computed, and the result is written to R1. Because the operation consists only of sequential reads, a register computation, and a write, with no call that must wait on an external device, no context switch or wait takes place in this case.

However, some operations involve system calls that make the process wait or sleep, and these do lead to a context switch. A system call handler is used to switch to kernel mode. For example, a display(data x) function may need the value of x from disk, which requires a device driver running in kernel mode. The display() call therefore issues a READ, the process goes to sleep while waiting for the value of x to arrive from the disk, and the kernel context-switches to another runnable process. When the read completes, the sleeping process is woken by the system, is scheduled again, and resumes execution after the system call with the new value of x.
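A minimal user-space illustration of such a blocking system call is shown below; the file name data.bin is hypothetical, and whether read() actually blocks depends on whether the data is already cached.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char x;
    int fd = open("data.bin", O_RDONLY);   /* hypothetical file on disk */
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* read() traps into the kernel; if the block is not cached, the process
     * is put to sleep until the disk interrupt arrives, and the kernel
     * context-switches to another runnable process in the meantime. */
    if (read(fd, &x, 1) != 1) {
        perror("read");
        close(fd);
        return 1;
    }

    printf("x = %d\n", x);                 /* runs only after being woken up */
    close(fd);
    return 0;
}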

Performance

Context switching itself has a cost in performance, due to running the task scheduler, TLB flushes, and indirectly due to sharing the CPU cache between multiple tasks.[5] Switching between threads of a single process can be faster than between two separate processes, because threads share the same virtual memory maps, so a TLB flush is not necessary.[6]

The time to switch between two separate processes is called the process switching latency. The time to switch between two threads of the same process is called the thread switching latency. The time from when a hardware interrupt is generated to when the interrupt is serviced is called the interrupt latency.
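One common, rough way to estimate process switching latency is to bounce a token between two processes over a pair of pipes, as in the sketch below. Each round trip forces at least two context switches, so the printed figure is an upper bound that also includes pipe and system call overhead.

#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define ROUNDS 100000

int main(void)
{
    int p2c[2], c2p[2];              /* parent->child and child->parent pipes */
    char token = 'x';

    if (pipe(p2c) || pipe(c2p)) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return 1; }

    if (pid == 0) {                  /* child: echo the token back */
        for (int i = 0; i < ROUNDS; i++) {
            read(p2c[0], &token, 1);
            write(c2p[1], &token, 1);
        }
        _exit(0);
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < ROUNDS; i++) {
        write(p2c[1], &token, 1);    /* wakes the child; the parent then...  */
        read(c2p[0], &token, 1);     /* ...blocks, forcing at least 2 switches */
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    /* Each round trip includes two context switches plus pipe overhead. */
    printf("approx. %.0f ns per switch (upper bound)\n", ns / (2.0 * ROUNDS));
    return 0;
}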

Switching between two processes in a single address space operating system can be faster than switching between two processes in an operating system with private per-process address spaces.[7]

Hardware vs. software

Context switching can be performed primarily by software or hardware. Some processors, like the Intel 80386 and its successors,[8] have hardware support for context switches, by making use of a special data segment designated the task state segment (TSS). A task switch can be explicitly triggered with a CALL or JMP instruction targeted at a TSS descriptor in the global descriptor table. It can occur implicitly when an interrupt or exception is triggered if there is a task gate in the interrupt descriptor table (IDT). When a task switch occurs, the CPU can automatically load the new state from the TSS.

As with other tasks performed in hardware, one would expect this to be rather fast; however, mainstream operating systems, including Windows and Linux,[9] do not use this feature. This is mainly for two reasons:

  • Hardware context switching does not save all the registers (only general-purpose registers, not floating point registers — although the TS bit is automatically turned on in the CR0 control register, resulting in a fault when executing floating-point instructions and giving the OS the opportunity to save and restore the floating-point state as needed).
  • Associated performance issues, e.g., software context switching can be selective and store only those registers that need storing, whereas hardware context switching stores nearly all registers whether they are required or not.

References

  1. Douglas Comer; Timothy V. Fossum (1988). "4 Scheduling and Context Switching". Operating System Design. Vol. I: The XINU Approach (PC Edition). Prentice Hall. p. 67. ISBN 0-13-638180-4. Context switching lies at the heart of the process juggling act. It consists of stopping the current computation, saving enough information so it may be restarted later, and restarting another process.
  2. Tanenbaum, Andrew S.; Bos, Herbert (March 20, 2014). Modern Operating Systems (4th ed.). Pearson. ISBN 978-0133591620.
  3. IA-64 Linux Kernel: Design and Implementation, 4.7 Switching Address Spaces
  4. Operating Systems, 5.6 The Context Switch, p. 118
  5. Chuanpeng Li; Chen Ding; Kai Shen. Quantifying The Cost of Context Switch (PDF). ACM Federated Computing Research Conference, San Diego, 13-14 June 2007. Archived (PDF) from the original on 2017-08-13.
  6. Ulrich Drepper (9 October 2014). "Memory part 3: Virtual Memory". LWN.net.
  7. D.L. Sims. "Multiple and single address spaces: towards a middle ground". 1993. doi:10.1109/IWOOOS.1993.324906
  8. "Context Switch definition". Linfo.org. Archived from the original on 2010-02-18. Retrieved 2013-09-08.
  9. Bovet, Daniel Pierre; Cesati, Marco (2006). Understanding the Linux Kernel, Third Edition. O'Reilly Media. p. 104. ISBN 978-0-596-00565-8. Retrieved 2009-11-23.