COMPUTER FUNDAMENTALS | OPERATING SYSTEMS
DESIGNED FOR IB EXAMINATIONS
OBJECTIVES
A1.3.3 Compare different approaches to scheduling.
• Managing the execution of processes by allocating CPU time to optimize system performance
• First-come first-served, round robin, multilevel queue scheduling, priority scheduling
SECTION 1 | MANAGING THE EXECUTION OF PROCESSES
An operating system uses CPU scheduling to manage the execution of processes by deciding how processor time is allocated. Because a computer typically runs many processes at the same time, the CPU must be shared efficiently to optimize overall system performance.
Purpose of CPU Scheduling
The CPU can only execute one process at a time. CPU scheduling ensures that:
- Multiple processes can make progress
- The system remains responsive to user input
- Hardware resources are used efficiently
By carefully allocating CPU time, the operating system creates the illusion that many programs are running simultaneously.
How CPU Time Is Allocated
The operating system divides CPU time among active processes. This is usually achieved by:
- Selecting a process from a ready queue
- Allowing it to run for a short period of time
- Suspending it and switching to another process if needed
This rapid switching between processes is known as context switching and is fundamental to multitasking systems.
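The select-run-suspend cycle above can be sketched as a loop over a ready queue. This is a minimal illustration (the process names, times, and the `run_with_time_slices` helper are hypothetical); a real kernel also saves registers and memory state at every context switch:

```python
from collections import deque

def run_with_time_slices(processes, slice_ms, total_ms):
    """Rotate CPU time among processes; returns ms of CPU each received.

    `processes` maps a process name to the ms of work it still needs.
    Each context switch moves the running process to the back of the
    ready queue, creating the illusion of simultaneous execution.
    """
    ready = deque(processes)            # ready queue: process names
    remaining = dict(processes)         # ms of work left per process
    received = {name: 0 for name in processes}
    clock = 0
    while ready and clock < total_ms:
        name = ready.popleft()          # select the process at the front
        ran = min(slice_ms, remaining[name], total_ms - clock)
        clock += ran                    # process "runs" for ran ms
        received[name] += ran
        remaining[name] -= ran
        if remaining[name] > 0:         # context switch: back of the queue
            ready.append(name)
    return received

# Three processes share 30 ms of CPU in 5 ms slices: each makes progress.
print(run_with_time_slices({"A": 20, "B": 10, "C": 10}, 5, 30))
```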
Optimizing System Performance
When allocating CPU time, the operating system aims to optimize performance according to several criteria:
- Responsiveness: Interactive systems must respond quickly to user actions, such as typing or clicking.
- Throughput: The number of processes completed in a given period of time should be maximized, especially in batch systems.
- Fairness: All processes should receive a reasonable share of CPU time, preventing starvation.
- CPU Utilization: The processor should be kept as busy as possible without overloading the system.
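Two of these criteria can be quantified directly. The helpers below are an illustrative sketch (the timing values are made up): throughput counts completions within a window, and waiting time measures time spent in the ready queue.

```python
def throughput(finish_times_ms, window_ms):
    """Processes completed per second within the given window."""
    done = sum(1 for t in finish_times_ms if t <= window_ms)
    return done / (window_ms / 1000)

def average_waiting_time(start_times, arrival_times):
    """Mean time processes waited in the ready queue before first running."""
    waits = [s - a for s, a in zip(start_times, arrival_times)]
    return sum(waits) / len(waits)

# Four processes, three of which finish inside a 1-second window:
print(throughput([120, 450, 800, 1300], 1000))         # 3.0 per second
# Waits of 0, 20 and 70 ms average out to 30 ms:
print(average_waiting_time([0, 30, 90], [0, 10, 20]))  # 30.0
```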
Different systems may prioritize these goals differently. For example, a desktop computer emphasizes responsiveness, while a server may prioritize throughput.
Role of Scheduling Algorithms
To manage CPU time effectively, operating systems use scheduling algorithms. Each algorithm follows specific rules to decide which process runs next and for how long. The choice of scheduling approach has a significant impact on system performance, user experience, and fairness.
Managing the execution of processes through CPU scheduling is essential for efficient multitasking. By allocating CPU time carefully, the operating system optimizes performance and ensures smooth and reliable system operation.
Why does an operating system allocate CPU time between processes?
A. To store files and folders efficiently.
B. To allow multiple processes to run fairly and optimize system performance.
C. To translate programs into machine code.
D. To manage hardware device connections.
SECTION 2 | FIRST COME FIRST SERVED
First-Come, First-Served (FCFS) is one of the simplest CPU scheduling algorithms used by operating systems. As its name suggests, processes are executed in the order in which they arrive in the ready queue.
How FCFS Scheduling Works
When a process becomes ready to run, it is placed at the end of a queue. The CPU selects the process at the front of the queue and allows it to run until it completes or blocks (for example, waiting for input/output).
Key characteristics of FCFS scheduling:
- Processes are executed strictly in arrival order
- Once a process starts running, it keeps the CPU until it finishes or is blocked
- There is no pre-emption (the OS does not forcibly remove a running process from the CPU)
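Because there is no pre-emption, each process simply waits for the total burst time of everything ahead of it. A minimal sketch (burst times in ms are assumed for illustration):

```python
def fcfs_waiting_times(bursts):
    """Waiting time of each process when run strictly in arrival order.

    `bursts` is a list of CPU burst lengths, indexed by arrival order.
    Each process waits for the sum of all bursts queued ahead of it,
    then holds the CPU until it finishes (no pre-emption).
    """
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)   # waits for everything ahead in the queue
        elapsed += burst        # then runs its full burst to completion
    return waits

# A 24 ms process arrives first, followed by two 3 ms processes:
print(fcfs_waiting_times([24, 3, 3]))  # [0, 24, 27]
```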
Advantages of FCFS
FCFS has several benefits:
- Simplicity: Easy to understand and implement
- Fairness in Order: Processes are treated equally based on arrival time
- Low Overhead: Minimal scheduling decisions reduce system overhead
Disadvantages of FCFS
Despite its simplicity, FCFS has notable drawbacks:
- Poor Responsiveness: Short, interactive processes may be forced to wait behind long-running processes
- Convoy Effect: A long CPU-bound process can delay many shorter processes, reducing overall system performance
- Inefficient CPU Utilization: The system may appear slow, especially in interactive environments
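The convoy effect can be seen numerically by running the same bursts in two different arrival orders (values chosen for illustration): when the long CPU-bound process arrives first, the average wait is several times worse.

```python
def average_wait_fcfs(bursts):
    """Average waiting time under FCFS for bursts listed in arrival order."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)
        elapsed += burst
    return sum(waits) / len(waits)

# One long CPU-bound process (24 ms) and two short ones (3 ms each):
print(average_wait_fcfs([24, 3, 3]))  # long job first: (0+24+27)/3 = 17.0
print(average_wait_fcfs([3, 3, 24]))  # short jobs first: (0+3+6)/3 = 3.0
```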
When FCFS Is Appropriate
FCFS is best suited to simple batch processing systems where:
- Processes have similar execution times
- Responsiveness is not critical
- Implementation simplicity is a priority
First-Come, First-Served scheduling is straightforward and predictable, but it is generally unsuitable for modern interactive systems that require fast response times.
Which statement correctly describes First-Come, First-Served scheduling?
A. The process with the highest priority always runs first.
B. Processes run in arrival order and are not pre-empted.
C. Each process is given a fixed time slice.
D. Processes are placed into multiple queues.
SECTION 3 | ROUND ROBIN
Round Robin is a pre-emptive CPU scheduling algorithm designed to improve fairness and responsiveness in multitasking and time-sharing systems. Each process is given a small, fixed amount of CPU time, known as a time slice or quantum.
How Round Robin Scheduling Works
All ready processes are placed in a queue. The operating system selects the process at the front of the queue and allows it to run for one time slice:
- If the process finishes before the time slice expires, it leaves the system.
- If the process does not finish, it is pre-empted and moved to the back of the queue.
- The next process in the queue is then given CPU time.
This cycle repeats continuously, giving each process regular access to the CPU.
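The three rules above can be sketched as a queue rotation. This is a simplified model (burst times are assumed, and all processes are taken to be ready at time 0):

```python
from collections import deque

def round_robin_completion(bursts, quantum):
    """Completion time of each process under Round Robin.

    `bursts` maps a process name to its total CPU need. A process that
    does not finish within the quantum is pre-empted and moved to the
    back of the queue; one that finishes leaves the system.
    """
    queue = deque(bursts.items())   # (name, remaining work) pairs
    finish, clock = {}, 0
    while queue:
        name, remaining = queue.popleft()
        ran = min(quantum, remaining)
        clock += ran
        if remaining > ran:
            queue.append((name, remaining - ran))  # pre-empted
        else:
            finish[name] = clock                   # finished, leaves queue
    return finish

# With a quantum of 2: C finishes at 6, B at 9, A at 10.
print(round_robin_completion({"A": 5, "B": 3, "C": 2}, 2))
```

Note how the shortest process (C) finishes first even though it arrived last in the queue, which is why Round Robin keeps interactive tasks responsive.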
Key Characteristics of Round Robin
- Pre-emptive: The OS can interrupt a running process when its time slice ends.
- Fair Allocation: Each process receives an equal share of CPU time.
- Time-Sharing: Well suited to systems where many users or applications run concurrently.
Advantages of Round Robin
- Good Responsiveness: Interactive applications remain responsive because no process can monopolize the CPU.
- Fairness: All processes are treated equally regardless of execution time.
- Predictable Behaviour: Each process is guaranteed CPU access within a known time frame.
Disadvantages of Round Robin
- Context Switching Overhead: Frequent switching between processes can reduce efficiency.
- Time Slice Sensitivity: If the time slice is too short, overhead increases; if it is too long, the system behaves more like FCFS and becomes less responsive.
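This trade-off can be made concrete by counting pre-emptions for different quanta on the same workload (burst values are assumed for illustration):

```python
from collections import deque

def count_preemptions(bursts, quantum):
    """Count pre-emptions (forced context switches) under Round Robin."""
    queue, switches = deque(bursts), 0
    while queue:
        remaining = queue.popleft()
        if remaining > quantum:
            queue.append(remaining - quantum)  # pre-empted: pays a switch
            switches += 1
    return switches

workload = [10, 10, 10]  # three 10 ms bursts
print(count_preemptions(workload, 1))   # tiny slice: 27 pre-emptions
print(count_preemptions(workload, 10))  # large slice: 0 (behaves like FCFS)
```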
When Round Robin Is Appropriate
Round Robin is commonly used in:
- Interactive systems
- Multi-user operating systems
- Time-sharing environments
Round Robin scheduling balances fairness and responsiveness by dividing CPU time into equal slices, making it well suited for modern interactive systems.
Which statement correctly describes Round Robin scheduling?
A. Processes run until completion in arrival order.
B. The highest-priority process always runs first.
C. Each process gets a fixed time slice and is pre-empted if it does not finish.
D. Processes are assigned to permanent queues.
SECTION 4 | MULTILEVEL QUEUE SCHEDULING
Multilevel Queue Scheduling is a CPU scheduling approach that organizes processes into multiple separate queues based on their characteristics. Each queue is assigned its own scheduling policy, allowing the operating system to manage different types of processes more effectively.
How Multilevel Queue Scheduling Works
Processes are permanently assigned to a specific queue when they enter the system. The queue a process belongs to is typically determined by factors such as:
- Process type (system, interactive, batch)
- Priority level
- Resource requirements
Each queue can then apply its own scheduling algorithm. For example:
- Foreground (interactive) processes may use Round Robin
- Background (batch) processes may use FCFS
CPU Allocation Between Queues
There are two common approaches:
- Fixed Priority Scheduling: Higher-priority queues are always served before lower-priority queues.
- Time Sharing Between Queues: Each queue is given a fixed portion of CPU time.
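Fixed-priority selection between queues can be sketched as follows (the queue names and process names are hypothetical; the point is that a lower queue only runs when every queue above it is empty):

```python
from collections import deque

# Queues in fixed priority order: system > interactive > batch.
# Each process is permanently assigned to exactly one queue.
queues = {
    "system": deque([]),
    "interactive": deque(["browser", "editor"]),
    "batch": deque(["backup"]),
}

def next_process(queues):
    """Fixed-priority selection: serve the highest non-empty queue.

    Lower queues run only when every queue above them is empty, which
    is why starvation is a risk under this approach.
    """
    for name in ("system", "interactive", "batch"):
        if queues[name]:
            return queues[name].popleft()
    return None  # all queues empty: CPU is idle

print(next_process(queues))  # "browser" (interactive outranks batch)
print(next_process(queues))  # "editor"
print(next_process(queues))  # "backup" runs only once upper queues drain
```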
Advantages of Multilevel Queue Scheduling
- Efficient Management: Different types of processes are handled according to their needs.
- Improved Responsiveness: Interactive processes can be prioritized.
- Clear Separation: System and user processes can be isolated logically.
Disadvantages of Multilevel Queue Scheduling
- Inflexibility: Processes cannot move between queues.
- Risk of Starvation: Lower-priority queues may receive little or no CPU time.
When Multilevel Queue Scheduling Is Appropriate
This approach is suitable for systems where processes can be clearly categorized and where fixed priorities are acceptable.
Multilevel queue scheduling improves performance by separating processes into queues with different scheduling strategies, but it can be unfair if lower-priority queues are neglected.
Which statement correctly describes multilevel queue scheduling?
A. Processes move between queues based on their CPU usage.
B. All processes share the CPU equally.
C. Processes are permanently assigned to queues with different scheduling policies.
D. Processes run strictly in order of arrival.
SECTION 5 | PRIORITY SCHEDULING
Priority scheduling is a CPU scheduling approach in which each process is assigned a priority level, and the CPU is allocated to the process with the highest priority. This allows the operating system to ensure that important tasks receive CPU time before less critical ones.
How Priority Scheduling Works
Each process is given a numerical priority value when it enters the system. The scheduler then:
- Selects the highest-priority process from the ready queue
- Allocates CPU time to that process
- Chooses the next highest-priority process when the CPU becomes available
Priority scheduling can be either:
- Non-pre-emptive: A running process keeps the CPU until it finishes or blocks
- Pre-emptive: A running process can be interrupted if a higher-priority process arrives
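Selecting the highest-priority ready process is typically implemented with a priority queue. A sketch using Python's `heapq` module (the convention that a smaller number means higher priority, and the process names, are assumptions for illustration):

```python
import heapq

class PriorityScheduler:
    """Always dispatch the ready process with the best (lowest) priority."""

    def __init__(self):
        self._heap = []   # entries: (priority, arrival order, name)
        self._order = 0   # tie-breaker so equal priorities run FIFO

    def add(self, name, priority):
        """Place a process in the ready queue with its priority level."""
        heapq.heappush(self._heap, (priority, self._order, name))
        self._order += 1

    def dispatch(self):
        """Remove and return the highest-priority process, or None if idle."""
        if not self._heap:
            return None
        return heapq.heappop(self._heap)[2]

sched = PriorityScheduler()
sched.add("logger", priority=5)
sched.add("ui", priority=1)
sched.add("backup", priority=9)
print(sched.dispatch())  # "ui" runs first despite arriving second
print(sched.dispatch())  # "logger"
```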
Advantages of Priority Scheduling
- Supports Critical Tasks: Important system or real-time processes can be prioritized
- Flexible: Can be combined with other scheduling algorithms
- Efficient for Mixed Workloads: Useful when some tasks are more time-sensitive than others
Disadvantages of Priority Scheduling
- Starvation: Low-priority processes may never get CPU time if high-priority processes keep arriving
- Priority Inversion: A low-priority process may block a higher-priority one by holding a required resource
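Starvation is commonly mitigated with aging (see the glossary): the longer a process waits, the better its effective priority becomes. A sketch under the assumption that each round of waiting boosts a process by one level (names and base priorities are illustrative):

```python
def pick_with_aging(processes, waited_rounds, boost_per_round=1):
    """Pick the process with the best effective priority (lower = better).

    Effective priority = base priority - (rounds waited * boost), so a
    low-priority process that waits long enough eventually wins the CPU.
    """
    def effective(name):
        return processes[name] - waited_rounds[name] * boost_per_round
    return min(processes, key=effective)

procs = {"batch_job": 9, "ui_task": 2}  # base priorities (assumed)
# Initially the high-priority ui_task wins:
print(pick_with_aging(procs, {"batch_job": 0, "ui_task": 0}))  # ui_task
# After waiting 8 rounds, batch_job's effective priority (1) beats 2:
print(pick_with_aging(procs, {"batch_job": 8, "ui_task": 0}))  # batch_job
```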
When Priority Scheduling Is Appropriate
Priority scheduling is commonly used in:
- Real-time systems
- Systems with critical background services
- Environments where tasks have clearly defined importance levels
Priority scheduling improves responsiveness for important processes but must be carefully managed to ensure fairness.
Which statement correctly describes priority scheduling?
A. Processes run strictly in arrival order.
B. All processes receive equal CPU time.
C. The process with the highest priority is selected to use the CPU.
D. Processes are fixed into different queues permanently.
CPU Scheduling | The operating system process of deciding which process uses the CPU and for how long.
Process | A program that is currently loaded into memory and being executed by the CPU.
Ready Queue | A list of processes that are ready to use the CPU but are waiting for their turn.
Context Switching | The process of saving the state of a running process and loading the state of another process so the CPU can switch between them.
System Performance | A measure of how efficiently a computer system operates, including responsiveness, throughput, and CPU utilization.
Responsiveness | How quickly a system responds to user input or requests.
Throughput | The number of processes completed by the system in a given period of time.
Fairness | Ensuring that all processes receive an appropriate share of CPU time without starvation.
First-Come, First-Served (FCFS) | A non-pre-emptive scheduling algorithm where processes are executed in the order they arrive.
Convoy Effect | A situation in FCFS scheduling where short processes are delayed by a long-running process.
Pre-emptive Scheduling | A scheduling approach where the operating system can interrupt a running process to give CPU time to another process.
Non-Pre-emptive Scheduling | A scheduling approach where a process keeps the CPU until it finishes or blocks.
Round Robin Scheduling | A pre-emptive scheduling algorithm where each process is given a fixed time slice in turn.
Time Slice (Quantum) | A fixed amount of CPU time allocated to a process before it may be pre-empted.
Multilevel Queue Scheduling | A scheduling method that divides processes into multiple fixed queues, each with its own scheduling policy.
Priority Scheduling | A scheduling approach where the CPU is allocated to the process with the highest priority.
Priority Level | A value assigned to a process that determines its importance relative to other processes.
Starvation | A situation where a process never receives CPU time because other processes are always scheduled first.
Aging | A technique used in priority scheduling where a process’s priority increases the longer it waits, reducing starvation.
Open-Ended Questions – Approaches to CPU Scheduling
- Explain why an operating system must allocate CPU time between multiple processes.
- Describe how CPU scheduling helps optimize overall system performance.
- Explain how First-Come, First-Served (FCFS) scheduling works and identify one advantage and one disadvantage.
- Describe the convoy effect and explain why it can reduce system performance.
- Explain how Round Robin scheduling allocates CPU time and why it is suitable for interactive systems.
- Discuss how the choice of time slice affects the performance of Round Robin scheduling.
- Describe how multilevel queue scheduling organizes processes and manages CPU allocation.
- Explain one advantage and one disadvantage of multilevel queue scheduling.
- Describe how priority scheduling works and explain why starvation can occur.
- Explain how aging can be used to reduce starvation in priority scheduling.
COMING SOON
A1.1 COMPUTER HARDWARE AND OPERATION
☐ 1.1.1 FUNCTIONS OF THE CPU
☐ 1.1.2 ROLE OF THE GPU
☐ 1.1.3 CPU VS GPU
☐ 1.1.4 PURPOSE AND TYPES OF PRIMARY MEMORY
☐ 1.1.5 FETCH, DECODE AND EXECUTE CYCLE
☐ 1.1.6 PIPELINING IN MULTICORE ARCHITECTURES
☐ 1.1.7 SECONDARY MEMORY STORAGE
☐ 1.1.8 CONCEPTS OF DATA COMPRESSION
☐ 1.1.9 CLOUD COMPUTING
A1.2 DATA REPRESENTATION AND COMPUTER LOGIC
☐ 1.2.1 REPRESENTING DATA
☐ 1.2.2 HOW BINARY IS USED TO STORE DATA
☐ 1.2.3 LOGIC GATES
☐ 1.2.4 TRUTH TABLES, CIRCUITS, EXPRESSIONS AND K MAPS
☐ 1.2.5 LOGIC CIRCUIT DIAGRAMS - COMING SOON
A1.3 OPERATING SYSTEMS AND CONTROL SYSTEMS
☐ 1.3.1 ROLE OF OPERATING SYSTEMS
☐ 1.3.2 FUNCTIONS OF OPERATING SYSTEMS
➩ 1.3.3 APPROACHES TO SCHEDULING
☐ 1.3.4 INTERRUPT HANDLING
☐ 1.3.5 MULTITASKING
☐ 1.3.6 CONTROL SYSTEM COMPONENTS
☐ 1.3.7 CONTROL SYSTEM APPLICATIONS