COMPUTER SCIENCE CAFÉ
  • WORKBOOKS
  • BLOCKY GAMES
  • GCSE
    • CAMBRIDGE GCSE
  • IB
  • A LEVEL
  • LEARN TO CODE
  • ROBOTICS ENGINEERING
  • MORE
    • CLASS PROJECTS
    • Classroom Discussions
    • Useful Links
    • SUBSCRIBE
    • ABOUT US
    • CONTACT US
    • PRIVACY POLICY
ON THIS PAGE
SECTION 1 | OPERATING SYSTEMS
SECTION 2 | ALLOCATING RESOURCES
SECTION 3 | MULTI TASKING
SECTION 4 | DEDICATED OPERATING SYSTEMS
SECTION 5 | HIDING COMPLEXITY
ALSO IN THIS TOPIC
SYSTEM RESOURCES
YOU ARE HERE | OPERATING SYSTEMS
TOPIC 6 REVISION
KEY TERMINOLOGY
TOPIC 6 ANSWERS


RESOURCE MANAGEMENT | OPERATING SYSTEMS

Topics from the IB Computer Science Specification 2014
SECTION 1 | OPERATING SYSTEMS
An operating system (OS) is the software that manages the hardware and software resources of a computer. It acts as an intermediary between the computer hardware and the applications that run on the computer. The operating system provides the necessary support and services to run applications, manage memory, handle input/output operations, and perform other tasks. It is the first software that is loaded when the computer starts up, and it runs continuously in the background, providing the necessary resources and services to other programs. Examples of popular operating systems include Windows, macOS, Linux, and Android. The basic functions of an operating system include:

  • Managing files: An operating system is responsible for managing the file system, organizing files and directories, and providing access to files for both the user and the applications running on the computer.
  • Handling interrupts: The operating system handles interrupts generated by hardware components, such as the keyboard or mouse, and ensures that the computer responds in a timely manner.
  • Providing an interface: The operating system provides a user interface, such as a graphical user interface or command-line interface, allowing the user to interact with the computer and perform tasks. A graphical user interface (GUI) uses graphical elements, such as icons and windows, while a command-line interface (CLI) uses text-based commands. A GUI is more user-friendly and easier to use, but a CLI offers more control and is more efficient for advanced users.
  • Managing peripherals and drivers: The operating system manages peripheral devices, such as printers and storage devices, and provides drivers to allow the hardware to interact with the software.
  • Managing memory: The operating system manages the computer's memory, allocating memory to running applications and handling allocation and deallocation as needed. It uses algorithms such as first-fit and best-fit to decide how to allocate memory to running applications, and frees up memory when applications are closed.
  • Managing multitasking: The operating system manages multitasking, allowing multiple applications to run simultaneously and switching between them as needed.
  • Providing a platform for running applications: The operating system provides the underlying support and resources needed to run software.
  • Providing system security: The operating system provides security features, such as user authentication, access control, and data encryption, to protect the computer and its data from unauthorized access and attack. It also provides firewalls, antivirus software, and other security tools.
  • Managing user accounts: The operating system manages user accounts, allowing multiple users to log in and use the computer and managing the permissions and access rights of each user.
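As a concrete illustration of the first-fit and best-fit strategies mentioned under memory management, here is a minimal sketch in Python. The free-block list and helper names are illustrative only, not part of any real OS interface; a real allocator would also split and coalesce blocks.

```python
# Free memory modelled as a list of (start_address, size) blocks.

def first_fit(free_blocks, size):
    """Return the start address of the first free block large enough."""
    for start, block_size in free_blocks:
        if block_size >= size:
            return start
    return None  # no block large enough

def best_fit(free_blocks, size):
    """Return the start address of the smallest free block that fits."""
    candidates = [(block_size, start)
                  for start, block_size in free_blocks if block_size >= size]
    if not candidates:
        return None
    _, start = min(candidates)  # smallest adequate block wins
    return start

free = [(0, 100), (200, 30), (300, 60)]
print(first_fit(free, 50))  # 0   (first block big enough)
print(best_fit(free, 50))   # 300 (smallest block that still fits)
```

First-fit is faster because it stops at the first match; best-fit searches the whole list but tends to waste less space in each chosen block.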
CHECK YOUR KNOWLEDGE

Which of the following is NOT a basic function of an operating system?

A) Managing files and directories
B) Providing system security
C) Developing applications
D) Managing multitasking
EXPLANATION
The correct answer is C: "Developing applications." While operating systems provide a platform for running applications, they do not develop applications themselves. Operating systems focus on tasks such as file management, system security, and managing multitasking.
SECTION 2 | ALLOCATING RESOURCES
Operating systems deal with allocating storage and keeping track of programs in memory through a variety of techniques, including memory management, swapping, time-slicing, priority scheduling, and input/output operations. Here is a brief description of each of these techniques:
  • Memory management: Operating systems use memory management techniques to allocate memory to programs as needed. This can include managing the size and location of memory partitions, allocating memory to individual processes, and keeping track of memory usage to prevent over-allocation and system crashes.
  • Swapping: When a program's memory requirements exceed the available physical memory, the operating system can use swapping to transfer parts of the program from memory to disk storage, freeing up memory for other programs. This process can be automated by the operating system, which can swap out programs that have not been used for a certain amount of time or that are using excessive amounts of memory.
  • Time-slicing: Time-slicing is a technique used by operating systems to allow multiple programs to share the CPU by dividing its time among them. Each program is given a certain amount of CPU time, typically measured in milliseconds, before the operating system switches to the next program in the queue.
  • Priority scheduling: Priority scheduling is a technique used by operating systems to give higher priority to certain programs or processes, allowing them to receive more CPU time than lower priority programs. This can be useful for real-time applications or for ensuring that critical tasks are completed quickly.
  • Input/output operations: Operating systems manage input/output operations by providing programs with access to peripheral devices such as printers, keyboards, and displays. The operating system can allocate resources such as buffers and communication channels to each program and manage the flow of data between the program and the device.

Operating systems use a variety of techniques to allocate storage and manage programs in memory, including memory management, swapping, time-slicing, priority scheduling, and input/output operations. By using these techniques effectively, operating systems can improve system performance, manage resources efficiently, and prevent system crashes and other issues.
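The priority-scheduling technique described above can be sketched with a min-heap, where a lower number means higher priority. The process names and priority values below are illustrative:

```python
import heapq

# Ready queue as (priority, process_name) pairs; lower number = higher priority.
ready = [(2, "logger"), (0, "interrupt_handler"), (1, "ui_update")]
heapq.heapify(ready)  # arrange the list as a min-heap keyed on priority

order = []
while ready:
    priority, name = heapq.heappop(ready)  # highest-priority process runs first
    order.append(name)

print(order)  # ['interrupt_handler', 'ui_update', 'logger']
```

Real schedulers combine this with time-slicing and adjust priorities dynamically so low-priority processes are not starved.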

LOGICAL VS PHYSICAL MEMORY

Both logical memory and physical memory are ultimately referring to the same physical hardware. The difference between logical memory and physical memory lies in how the memory is perceived and managed by the operating system and the hardware.

Logical Memory (Virtual Memory)
  • User Perception: Logical memory is the memory address space that a process uses, as perceived by the user or programmer.
  • Virtual Address: It consists of virtual addresses that are translated to physical addresses by the operating system.
  • Abstraction: Logical memory provides an abstraction that gives the process the illusion of having a large, continuous memory space, even if the physical memory is smaller or fragmented.
Physical Memory
  • Actual Hardware: Physical memory refers to the real RAM (Random Access Memory) installed on the computer.
  • Physical Address: It is composed of physical addresses used by the hardware to access actual memory locations.
  • Direct Access: The operating system manages physical memory directly, using it to store data and instructions that processes need to execute.

​Summary
  • Logical Memory is an abstract representation of memory for processes, managed by the operating system to simplify memory usage.
  • Physical Memory is the real hardware, representing the actual physical RAM where data is stored.

WHAT IS PAGING

Paging is a memory management technique used by operating systems to efficiently manage and allocate computer memory. It allows the operating system to use physical memory (RAM) and secondary storage (like a hard disk or SSD) to create a virtual memory space that applications can use, giving the illusion of having more memory than physically available.

  • Definition: Paging is a way of storing and managing memory that divides the address space into fixed-sized blocks called pages. These pages are mapped to page frames in physical memory.
  • Logical vs. Physical Memory: Logical memory (virtual memory) used by applications is divided into pages, while physical memory (RAM) is divided into page frames of the same size.
  • Virtual Memory: Paging allows an operating system to extend available memory using secondary storage, enabling larger programs to run and providing better memory utilization.

How the Operating System Manages Paging
  1. Dividing Memory into Pages:
    • The virtual address space is divided into pages of a fixed size (commonly 4KB).
    • Physical memory is divided into page frames of the same size as the pages.
  2. Page Table
    • The page table is a data structure that keeps track of where virtual pages are located in physical memory.
    • Each process has its own page table, which maps virtual addresses (used by applications) to physical addresses (used by the memory hardware).
  3. Page Replacement
    • When physical memory is full, the operating system may need to swap out pages that are not currently in use to secondary storage to make space for new pages.
    • Page replacement algorithms (like Least Recently Used (LRU) or FIFO) decide which pages to swap out.
  4. Page Faults
    • When a program tries to access a page that is not currently in physical memory, a page fault occurs.
    • The operating system pauses the program, loads the required page from secondary storage into physical memory, updates the page table, and then resumes the program.
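The page table, page faults, and LRU replacement described above can be modelled in a few lines of Python. The page size, frame count, and address trace here are illustrative, not taken from any real system:

```python
from collections import OrderedDict

PAGE_SIZE = 4096
NUM_FRAMES = 3

page_table = OrderedDict()   # virtual page number -> physical frame (LRU order)
page_faults = 0

def access(virtual_address):
    """Translate a virtual address, loading the page on a page fault."""
    global page_faults
    page, offset = divmod(virtual_address, PAGE_SIZE)
    if page not in page_table:                         # page fault
        page_faults += 1
        if len(page_table) == NUM_FRAMES:              # physical memory full:
            _, frame = page_table.popitem(last=False)  # evict LRU page, reuse frame
        else:
            frame = len(page_table)                    # next free frame
        page_table[page] = frame
    page_table.move_to_end(page)                       # mark most recently used
    return page_table[page] * PAGE_SIZE + offset       # physical address

for addr in [0, 5000, 9000, 0, 13000, 5000]:
    access(addr)
print(page_faults)  # 5 (each first touch faults; page 1 faults again after eviction)
```

Note how the second access to address 0 is a hit, which protects page 0 from eviction when memory fills up; that is exactly the behaviour LRU is designed to produce.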

Advantages of Paging
  • Efficient Memory Usage: Pages can be placed anywhere in physical memory, making it easier to use scattered free memory efficiently.
  • Isolation: Each process has its own page table, providing memory isolation between processes, which enhances security and stability.
  • Virtual Memory: Paging allows the system to use more memory than the available physical memory, enabling the running of larger programs.

Summary

Paging allows the operating system to manage memory efficiently by dividing both physical and virtual memory into fixed-size pages. The page table, the translation lookaside buffer (TLB) that caches recent address translations, and page replacement mechanisms are crucial in handling paging, ensuring that processes can access memory effectively even when physical memory is limited.

WHAT IS THRASHING

Thrashing occurs in an operating system when the CPU spends most of its time swapping pages in and out of memory rather than executing actual processes. It happens when there is excessive paging, leading to low performance and system inefficiency.
​
Causes of Thrashing
  1. Insufficient Physical Memory: When there isn’t enough RAM to hold the working set of all running processes, the system frequently swaps pages between RAM and disk.
  2. Overloading Processes: Running too many processes simultaneously can exceed the available physical memory, causing frequent page faults.
  3. Poor Page Replacement: Ineffective page replacement algorithms can increase the rate of page faults, leading to more swapping.

Thrashing reduces system performance drastically, as more time is spent in managing memory than executing user programs.

WHAT IS TIME SLICING

Time slicing is a key technique used in modern operating systems to manage the execution of multiple processes effectively. It is an important concept in the context of multitasking, which allows multiple programs to run seemingly at the same time.

  • Definition: Time slicing is a method used by the operating system to allocate a fixed, small unit of CPU time (called a "time slice" or "quantum") to each process in a round-robin manner.
  • Context Switching: When a time slice expires, the CPU moves to the next process in the queue. This context switching allows for the sharing of CPU resources among multiple processes.
  • Time Quantum: The duration of each time slice is called a time quantum. This is usually in milliseconds and determines how much time each process gets to execute before the next process is given a turn.

How Time Slicing Works in Multitasking
  1. Process Queue
    • In multitasking, processes are organized into a ready queue.
    • All processes that are ready to execute but are waiting for CPU time are placed in this queue.
  2. Scheduler
    • The CPU scheduler is responsible for selecting which process from the queue gets the CPU next.
    • It gives each process a small time slice to execute its instructions.
  3. Round-Robin Scheduling
    • Time slicing is used in round-robin scheduling, where each process in the ready queue is given a time slice.
    • If a process does not complete within its time slice, it is preempted, and the next process in the queue is scheduled for execution.
    • If a process completes before its time slice ends, the CPU moves on to the next process without waiting.
  4. Context Switching
    • When a process is preempted (i.e., its time slice ends), the state of the process is saved.
    • The context switch involves saving the state of the current process and loading the state of the next process.
    • This allows each process to resume where it left off during its next time slice.
Benefits of Time Slicing
  • Multitasking: Time slicing allows the operating system to achieve multitasking by quickly switching between processes. To the user, this gives the illusion that multiple applications are running simultaneously.
  • Responsiveness: It ensures that all processes get a chance to execute, providing fairness and making the system more responsive. This is particularly important in interactive systems like desktop environments.
  • Resource Sharing: It allows for fair sharing of CPU resources among different processes, ensuring no single process can monopolize the CPU.
Example Scenario
  • Consider a system with three running processes: Process A, Process B, and Process C.
  • The operating system gives each process a time slice of 10 milliseconds.
  • The scheduler first gives Process A the CPU for 10 milliseconds. If Process A does not complete in that time, it is put back in the ready queue, and Process B is given the CPU for the next 10 milliseconds.
  • This continues for Process C, and then the cycle starts again with Process A.
  • By repeating this cycle, the system ensures that each process gets equal opportunity to use the CPU.
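The three-process scenario above can be simulated directly. The remaining execution times are illustrative; only the 10 ms quantum comes from the example:

```python
from collections import deque

QUANTUM = 10  # milliseconds

# Ready queue of (process_name, milliseconds_remaining) pairs.
ready_queue = deque([("A", 25), ("B", 10), ("C", 15)])
schedule = []

while ready_queue:
    name, remaining = ready_queue.popleft()
    run = min(QUANTUM, remaining)      # run for one quantum, or until done
    schedule.append((name, run))
    remaining -= run
    if remaining > 0:                  # preempted: back of the queue
        ready_queue.append((name, remaining))

print(schedule)
# [('A', 10), ('B', 10), ('C', 10), ('A', 10), ('C', 5), ('A', 5)]
```

Process B finishes within its first slice, so it never rejoins the queue, while A and C are repeatedly preempted and resumed, exactly as round-robin scheduling prescribes.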
Time Slicing vs. Context Switching
  • Time Slicing is about dividing CPU time into fixed intervals and sharing it among processes.
  • Context Switching is the action of saving the state of one process and loading the state of another. This occurs at the end of each time slice to allow the next process to use the CPU.
Challenges with Time Slicing
  • Time Quantum Size: Choosing the right time quantum is important.
    • If it’s too short, the CPU will spend too much time on context switching, leading to overhead and reduced performance.
    • If it’s too long, the system becomes less responsive, and certain processes may have to wait longer, resulting in a poor user experience.
Summary
​
Time slicing is a fundamental technique in multitasking operating systems that enables multiple processes to run concurrently by dividing CPU time into small slices and sharing them among processes. This provides fair use of CPU resources, ensures responsiveness, and gives the user the illusion of parallel execution.

WHAT IS ROUND ROBIN SCHEDULING

Round-Robin Scheduling is a simple and widely used CPU scheduling algorithm in operating systems. It is designed to manage multiple processes by giving each one an equal share of the CPU in a cyclic order.
​
  • Time Quantum: Each process is assigned a fixed time slice (also called a time quantum), which defines how long it can use the CPU before the next process gets a turn.
  • Equal CPU Time: The scheduler cycles through all the processes in the ready queue, giving each process its time quantum. If a process doesn’t finish during its time slice, it is preempted and placed at the end of the queue.
  • Fairness: This ensures fair allocation of CPU time and is effective for time-sharing systems, providing a responsive experience where each process gets regular access to the CPU.

​Round-Robin Scheduling is particularly beneficial for systems that need quick response times, as it ensures that no process can monopolize the CPU for an extended period.
CHECK YOUR KNOWLEDGE

Which of the following techniques is used by an operating system to manage multiple programs sharing the CPU?

A) Memory Swapping
B) Time-Slicing
C) Page Replacement
D) Input/Output Buffers
EXPLANATION
The correct answer is B: "Time-Slicing." Time-slicing is a technique where the CPU allocates a small, fixed amount of time to each program, allowing multiple programs to share CPU time efficiently. This approach supports multitasking by quickly switching between processes.
SECTION 3 | MULTI TASKING
The operating system handles multiple tasks that, to the user, often appear to run seamlessly and simultaneously. Below are some of the common management techniques operating systems use:
​
  • Scheduling: Scheduling is the process by which an operating system decides which program or process should run next on the CPU. Scheduling algorithms can be based on factors such as priority, time-sharing, and real-time requirements.
  • Policies: Operating systems can use policies to control how resources are allocated and used by programs and processes. Policies can include limits on memory usage, CPU time, and network bandwidth, as well as rules for handling errors and conflicts.
  • Multitasking: Multitasking is the ability of an operating system to run multiple programs or processes at the same time. This can be achieved through techniques such as time-sharing, priority scheduling, and parallel processing.
  • Virtual memory: Virtual memory is a technique used by operating systems to allow programs to use more memory than is physically available on the system. This is achieved by mapping memory addresses used by programs to different areas of physical memory or disk storage.
  • Paging: Paging is a technique used by operating systems to manage virtual memory by dividing it into fixed-size pages. When a program accesses a page that is not currently in physical memory, the operating system retrieves it from disk storage and places it in memory.
  • Interrupt: An interrupt is a signal sent to the CPU by a device or program to indicate that it requires attention. The operating system can use interrupt handling to manage input/output operations, respond to hardware failures, and manage system resources.
  • Polling: Polling is a technique used by operating systems to manage input/output operations by regularly checking the status of devices and peripherals to see if they require attention. This can be less efficient than interrupt handling, but is useful for certain types of devices and systems.

Operating systems use a variety of resource management techniques to manage system resources and ensure that programs and processes run efficiently and reliably. These techniques can include scheduling, policies, multitasking, virtual memory, paging, interrupt handling, and polling. By using these techniques effectively, operating systems can improve system performance, reduce errors and conflicts, and prevent system crashes and other issues.
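The polling technique described above can be illustrated with a short busy-wait loop. The Device class is a stand-in for real hardware; an actual OS would read a device status register instead:

```python
class Device:
    """Toy device that reports ready only after a fixed number of checks."""
    def __init__(self, ready_after):
        self.checks = 0
        self.ready_after = ready_after

    def ready(self):
        self.checks += 1
        return self.checks >= self.ready_after

device = Device(ready_after=3)
polls = 0
while not device.ready():   # busy-wait: CPU time spent just checking status
    polls += 1

print(polls)  # 2 (two checks came back not-ready before the third succeeded)
```

The wasted loop iterations are exactly why polling is less efficient than interrupts: with interrupts, the CPU does other work and the device signals when it needs attention.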
CHECK YOUR KNOWLEDGE

Which of the following best describes the concept of multitasking in an operating system?

A) Running multiple operating systems on the same computer.
B) Running multiple programs or processes simultaneously.
C) Installing applications from different sources.
D) Scheduling tasks to run only one at a time.
EXPLANATION
The correct answer is B: "Running multiple programs or processes simultaneously." Multitasking is the ability of an operating system to handle multiple programs at the same time, switching between them quickly to give the illusion of concurrent execution.
SECTION 4 | DEDICATED OPERATING SYSTEMS
A dedicated operating system (OS) is an operating system that is designed to run on a specific device or platform. Unlike general-purpose operating systems such as Windows, Linux, or macOS, which are designed to run on a wide range of devices, a dedicated OS is tailored to the specific hardware and software requirements of a particular device or platform.

A dedicated OS can be designed for a variety of devices, including smartphones, tablets, embedded systems, gaming consoles, and other specialized devices. These devices often have specific hardware requirements, such as sensors, touchscreens, and specialized input/output devices, that a dedicated OS can take advantage of.
  • Optimized performance: A dedicated OS can be designed to take full advantage of the hardware and software capabilities of the device, resulting in faster and more efficient performance. This can be especially important for devices with limited resources, such as smartphones, tablets, and embedded systems.
  • Improved security: A dedicated OS can be designed with security features that are specific to the device and its intended use. This can include encryption, authentication, and access controls, which can help protect the device and its data from unauthorized access or attacks.
  • Better user experience: A dedicated OS can be customized to meet the specific needs and preferences of the device's users, resulting in a better user experience. This can include features such as intuitive user interfaces, touchscreens, and voice recognition.
  • Simplified maintenance and support: A dedicated OS can be easier to maintain and support, as it is designed specifically for the device and its components. This can make it easier for developers to troubleshoot issues, release updates, and provide technical support to users.
  • Reduced costs: Developing a dedicated OS can be more cost-effective than using a commercial or off-the-shelf OS, especially for high-volume products. This can allow manufacturers to lower the cost of the device and make it more accessible to consumers.

Producing a dedicated OS for a device can offer several advantages, including optimized performance, improved security, better user experience, simplified maintenance and support, and reduced costs. These benefits can make it easier for manufacturers to create devices that meet the specific needs and preferences of their users, while also improving the overall quality and reliability of the device.

However, developing a dedicated OS can also be more time-consuming and expensive than using a commercial or off-the-shelf OS, as it requires specialized expertise and resources. Additionally, a dedicated OS may have limited compatibility with other devices or platforms, which can limit its usefulness for certain applications.
CHECK YOUR KNOWLEDGE

Which of the following is an advantage of a dedicated operating system?

A) It can be used on a wide range of devices.
B) It offers optimized performance tailored to specific hardware.
C) It is compatible with all other operating systems.
D) It can be easily modified by the user for any purpose.
EXPLANATION
The correct answer is B: "It offers optimized performance tailored to specific hardware." A dedicated operating system is designed specifically for certain devices, optimizing performance, efficiency, and often including device-specific features. This makes it ideal for devices with unique hardware requirements, such as embedded systems, smartphones, and gaming consoles.
SECTION 5 | HIDING COMPLEXITY
Operating systems hide the complexity of hardware from the user to make the system more intuitive and user-friendly. Some methods they use include:
  • Virtualization of real devices: An operating system can use virtualization to create virtual devices that mimic the functionality of real hardware devices, such as printers, scanners, and network adapters. This can simplify the programming and use of these devices, as they can be treated as software objects rather than complex hardware components.
  • Drive letters: An operating system can use drive letters to abstract the complexity of disk storage devices from users and applications. Instead of having to navigate complex file systems, users can access files and folders through a simple drive letter, such as C: or D:.
  • Virtual memory: An operating system can use virtual memory to provide applications with more memory than is physically available on the system. This allows applications to operate as if they have access to more memory, without requiring them to manage the complexities of physical memory.
  • Input devices: An operating system can provide a common interface for input devices such as keyboards, mice, and touchscreens. This can abstract the complexities of different input devices from applications, allowing them to receive input in a standard format.
  • Java Virtual Machine: The Java Virtual Machine (JVM) is a software layer that abstracts the complexities of hardware and operating systems from Java applications. The JVM provides a standardized environment for Java applications to run, regardless of the underlying hardware or operating system.

Operating systems can hide the complexity of hardware from users and applications in a variety of ways, including virtualization of real devices, drive letters, virtual memory, input devices, and the Java Virtual Machine. By abstracting the complexities of hardware, operating systems can simplify the programming and use of devices and resources, making them more accessible to users and developers.
CHECK YOUR KNOWLEDGE

How does an operating system hide the complexity of hardware from users?

A) By requiring users to manage each hardware component individually
B) By providing virtualization, abstraction layers, and interfaces like drive letters
C) By making users install all drivers manually for each device
D) By displaying raw binary data for hardware status
EXPLANATION
The correct answer is B: "By providing virtualization, abstraction layers, and interfaces like drive letters." Operating systems simplify user interaction with hardware by using methods like virtualization, abstraction, and standardized interfaces. This hides the underlying complexity, allowing users to interact with devices in a straightforward and intuitive manner.
Operating System (OS) | Software that manages computer hardware and software resources, acting as an intermediary between the hardware and applications.
File Management | Organizes files and directories, providing access to files for users and applications.
Interrupt Handling | Responds to signals from hardware or software, ensuring timely processing of tasks like keystrokes or mouse clicks.
User Interface (UI) | Allows user interaction with the computer; includes graphical (GUI) and command-line (CLI) interfaces.
Peripheral Management | Manages devices like printers and storage, using drivers to allow communication between hardware and software.
Memory Management | Allocates memory to applications, using algorithms (e.g., first-fit, best-fit) for efficient use and deallocation when programs close.
Multitasking | Allows multiple applications to run simultaneously, managed through techniques like time-slicing and priority scheduling.
System Security | Protects data and resources through user authentication, access control, and encryption.
User Accounts | Manages multiple users, defining access rights and permissions for each.
Resource Allocation | Distributes CPU time, memory, and I/O resources to programs using techniques like time-slicing, priority scheduling, and swapping.
Logical Memory (Virtual Memory) | Abstract memory space used by processes, translating to physical memory by the OS.
Physical Memory | The actual RAM hardware that stores data for running processes.
Paging | Divides memory into fixed-size pages; allows efficient use of memory by managing pages in both RAM and disk storage.
Page Table | Maps virtual pages to physical frames, tracking memory allocation for each process.
Page Fault | Occurs when a program accesses a page not in RAM, causing the OS to retrieve it from secondary storage.
Thrashing | When excessive paging reduces system performance, often due to limited physical memory.
Time Slicing | Allocates fixed CPU time to processes, enabling multitasking by switching between processes.
Round-Robin Scheduling | CPU scheduling algorithm that assigns equal time slices to each process in a cyclic order.
Scheduling | Determines the order of process execution, considering factors like priority and resource availability.
Virtual Memory | Expands available memory by using disk storage, allowing programs to use more memory than physically available.
Polling | The OS continuously checks device statuses, which is less efficient than interrupts but useful for some systems.
Dedicated Operating System | An OS tailored to specific devices (e.g., embedded systems), optimizing performance and security.
Virtualization | Creates virtual devices to simplify hardware interaction for users and applications.
Drive Letters | Abstraction of disk storage, making it easier for users to access files and folders.
Java Virtual Machine (JVM) | A platform-independent environment for Java applications, abstracting OS and hardware details.
  1. Provide an example of a scenario (e.g., machine learning, video rendering) and explain why a GPU's parallel processing capabilities are better suited for this task compared to a CPU. [4]
  2. Define bandwidth, and explain how it relates to data transfer rate, including examples like internet speed or network performance. [4]
  3. Provide a clear definition of cache memory and its purpose in improving data access speed. [2]
  4. Compare primary storage (e.g., RAM) and secondary storage (e.g., HDD, SSD) in terms of characteristics like volatility, speed, and purpose. [4]
  5. Provide an example (e.g., video rendering, data analysis) and explain why a single processor’s limited performance may be insufficient for such tasks. [4]
  6. Explain a scenario (e.g., low memory or insufficient CPU power) and describe the potential consequences, such as reduced system performance, crashes, or slow response time.[6]
  7. Discuss how the OS handles memory allocation, communicates with peripherals, and manages hardware interfaces to ensure smooth system functioning. [6]
  8. Outline OS resource management techniques. [2 Marks per term]
    Scheduling: Define how the OS decides which process runs and for how long, using methods like round-robin or priority scheduling.
    Policies: Define policies that control system resource usage, such as fairness and access control.
    Multitasking: Define how the OS allows multiple tasks or processes to be run concurrently.
    Virtual Memory: Define how the OS uses part of secondary storage to extend the available memory beyond the physical RAM.
    Paging: Define how the OS breaks memory into pages and uses page frames to manage memory efficiently.
    Interrupt: Define what interrupts are and how they help the CPU deal with urgent tasks.
    Polling: Define how the OS uses polling to regularly check the status of peripherals and respond to their needs.
  9. Explain how a dedicated OS can improve efficiency, reliability, and performance on a specific device by tailoring functionalities and minimizing unnecessary features.[6]