Operating System Notes

System Call

Definition:

  • A system call is a request made by a process to the operating system kernel to perform tasks that require special privileges or access to system resources.

Key Points:

  1. Interface between User Programs and Kernel:

    • System calls provide an interface for user-level applications to interact with the operating system kernel.
    • User programs use system calls to request services such as I/O operations, file management, process management, and network communication.
  2. Privileged Operations:

    • System calls enable user programs to perform privileged operations that require kernel-level access (e.g., accessing hardware, managing processes, reading/writing files).
  3. Controlled Entry Points:

    • System calls define controlled entry points into the kernel, ensuring that only authorized operations are performed by user programs.
  4. Examples of System Calls:

    • Process Control: fork(), exec(), exit(), wait()
    • File Management: open(), read(), write(), close()
    • Device Management: ioctl(), read(), write()
    • Memory Management: brk(), mmap()
    • Network Communication: socket(), connect(), send(), recv()
  5. Execution of System Calls:

    • When a user program executes a system call, it triggers a trap into the kernel (e.g., via the int 0x80 instruction on 32-bit x86 or the syscall instruction on x86-64), causing a switch from user mode to kernel mode.
    • The kernel identifies the requested system call from the system call number (typically passed in a register) and executes the corresponding kernel routine.
    • Upon completion, the kernel returns the result to the user program and switches back to user mode.
  6. Error Handling:

    • System calls return error codes to indicate success or failure of operations.
    • User programs should check the return value of each system call (and, on POSIX systems, inspect errno) to handle errors appropriately; see the sketch after this list.
  7. Security and Protection:

    • System calls enforce security and protection mechanisms, ensuring that user programs cannot directly access privileged resources without proper authorization.
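
As a concrete illustration of points 4 and 6, below is a minimal sketch (assuming a POSIX/Linux system; the file names are only examples) that copies a file using the open(), read(), write(), and close() system calls and checks every return value, reporting failures via errno:

```c
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    /* Open the source file; -1 signals failure and errno holds the cause. */
    int in = open("input.txt", O_RDONLY);
    if (in == -1) {
        fprintf(stderr, "open input.txt: %s\n", strerror(errno));
        return EXIT_FAILURE;
    }

    /* Create (or truncate) the destination file with mode 0644. */
    int out = open("copy.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (out == -1) {
        fprintf(stderr, "open copy.txt: %s\n", strerror(errno));
        close(in);
        return EXIT_FAILURE;
    }

    char buf[4096];
    ssize_t n;
    /* read() returns the number of bytes read, 0 at end of file, or -1 on error. */
    while ((n = read(in, buf, sizeof buf)) > 0) {
        if (write(out, buf, (size_t)n) != n) {  /* treat a short or failed write as an error */
            fprintf(stderr, "write: %s\n", strerror(errno));
            close(in);
            close(out);
            return EXIT_FAILURE;
        }
    }
    if (n == -1)
        fprintf(stderr, "read: %s\n", strerror(errno));

    close(in);
    close(out);
    return n == -1 ? EXIT_FAILURE : EXIT_SUCCESS;
}
```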

In summary, system calls are fundamental mechanisms that enable user programs to interact with the underlying operating system kernel and perform a wide range of tasks while maintaining security and resource protection. Each system call corresponds to a specific operation or service provided by the operating system, allowing user programs to leverage the full capabilities of the underlying hardware and software platform.


PCB (Process Control Block)

Definition:

  • The Process Control Block (PCB), also known as the Task Control Block (TCB), is a data structure used by the operating system to manage information about a process during its execution; a simplified struct sketch of its typical fields follows the key points below.

Key Points:

  • Process State: Represents the current state of the process (e.g., running, waiting, ready, terminated).
  • Process ID (PID): Unique identifier assigned to each process by the operating system.
  • Program Counter (PC): Points to the address of the next instruction to be executed for the process.
  • CPU Registers: The saved values of the CPU registers associated with the process (e.g., general-purpose registers, stack pointer), restored when the process resumes.
  • Process Priority: Indicates the priority level assigned to the process for scheduling purposes.
  • Memory Management Information: Includes information about memory allocation (e.g., base address, limit).
  • I/O Status Information: Tracks open files, pending I/O operations, and other I/O-related details.
  • Accounting Information: Records CPU usage, execution time, and other statistical data for the process.
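
The following is a simplified sketch of how these fields might be grouped into a C structure. The field names, types, and sizes are illustrative assumptions only; real kernels use much larger structures (e.g., Linux's struct task_struct):

```c
#include <stdint.h>

/* Possible process states. */
enum proc_state { PROC_NEW, PROC_READY, PROC_RUNNING, PROC_WAITING, PROC_TERMINATED };

/* Saved CPU context for one process (illustrative register set). */
struct cpu_context {
    uint64_t program_counter;   /* address of the next instruction to execute */
    uint64_t stack_pointer;     /* top of the process's stack */
    uint64_t general_regs[16];  /* saved general-purpose registers */
};

/* Simplified Process Control Block. */
struct pcb {
    int                pid;        /* unique process identifier */
    enum proc_state    state;      /* current scheduling state */
    int                priority;   /* scheduling priority */
    struct cpu_context context;    /* saved registers and program counter */

    /* Memory-management information (simple base/limit model). */
    uint64_t mem_base;
    uint64_t mem_limit;

    /* I/O status information: open file descriptors (fixed-size table here). */
    int open_files[16];

    /* Accounting information. */
    uint64_t cpu_time_used;        /* e.g., in clock ticks */

    struct pcb *next;              /* link for the scheduler's ready queue */
};
```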

Context Switching

Definition:

  • Context switching is the process of saving and restoring the state of a CPU so that multiple processes can be executed concurrently on a single CPU core.

Key Points:

  • Saving Process State: The current state of a process, including CPU registers and program counter, is saved into its PCB during a context switch (see the sketch after this list).
  • Loading Process State: The state of a different process is loaded from its PCB into the CPU registers to resume execution.
  • Scheduler Invocation: Context switches are initiated by the scheduler to allocate CPU time to different processes.
  • Efficiency Considerations: Context switching involves overhead due to saving and restoring process states, impacting system performance.
  • Preemption Handling: Context switches can occur preemptively (forced by the scheduler) or voluntarily (process yields CPU).
  • Concurrency Support: Context switching enables multitasking and concurrent execution of processes, improving system responsiveness.
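
Below is a highly simplified sketch of the save/restore logic performed during a context switch, reusing the hypothetical struct pcb from the PCB section above. The two helper functions are assumptions standing in for the architecture-specific assembly a real kernel would use:

```c
/* Assumes the struct pcb / struct cpu_context / enum proc_state definitions
 * sketched in the PCB section above. */

/* Hypothetical helpers provided by architecture-specific code:
 * save_cpu_context() copies the live CPU registers into a struct cpu_context,
 * restore_cpu_context() loads a saved context back into the CPU registers. */
void save_cpu_context(struct cpu_context *ctx);
void restore_cpu_context(const struct cpu_context *ctx);

/* Switch the CPU from the currently running process to the process
 * chosen next by the scheduler. */
void context_switch(struct pcb *current, struct pcb *next) {
    /* 1. Save the state of the running process into its PCB. */
    save_cpu_context(&current->context);
    current->state = PROC_READY;        /* or PROC_WAITING if it blocked */

    /* 2. Load the state of the next process from its PCB. */
    next->state = PROC_RUNNING;
    restore_cpu_context(&next->context);
    /* Execution now continues where `next` last left off. */
}
```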

In summary, the PCB (Process Control Block) is a data structure that stores essential information about a process, while context switching is the mechanism used to switch between processes efficiently, allowing for multitasking and concurrent execution of multiple processes on a single CPU core. Context switching involves saving and restoring process states, and it is a fundamental operation for process scheduling and management in modern operating systems.


Process Management

Definition:

  • Process management involves the creation, scheduling, execution, and termination of processes within an operating system.

Key Points:

  • Process Creation: Creating new processes from executable programs, e.g., via fork() and exec() on Unix-like systems (see the sketch after this list).
  • Process Scheduling: Allocating CPU time to processes using scheduling algorithms.
  • Process Execution: Loading and executing programs, managing process states (e.g., running, waiting, terminated).
  • Process Termination: Gracefully terminating processes and reclaiming allocated resources.
  • Process Communication: Facilitating communication and synchronization between processes.
  • Process Control: Managing process priorities, interruptions, and state transitions.
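
As an illustration of process creation, execution, and termination on a POSIX system, here is a minimal sketch using fork(), execl(), waitpid(), and exit status macros. The program run by the child (/bin/ls) is only an example:

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();               /* create a child process (a copy of the parent) */
    if (pid == -1) {
        perror("fork");
        return EXIT_FAILURE;
    }

    if (pid == 0) {
        /* Child: replace the process image with a new program. */
        execl("/bin/ls", "ls", "-l", (char *)NULL);
        perror("execl");              /* reached only if exec fails */
        _exit(EXIT_FAILURE);
    }

    /* Parent: wait for the child to terminate and collect its exit status. */
    int status;
    if (waitpid(pid, &status, 0) == -1) {
        perror("waitpid");
        return EXIT_FAILURE;
    }
    if (WIFEXITED(status))
        printf("child %d exited with status %d\n", (int)pid, WEXITSTATUS(status));

    return EXIT_SUCCESS;
}
```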

Memory Management

Definition:

  • Memory management refers to the management of computer memory to optimize its use by programs and processes.

Key Points:

  • Memory Allocation: Allocating memory to processes based on their needs.
  • Memory Deallocation: Reclaiming memory when it is no longer needed.
  • Memory Protection: Preventing unauthorized access to memory areas.
  • Memory Mapping: Mapping logical (virtual) addresses to physical addresses using techniques like paging and virtual memory (see the mmap() sketch after this list).
  • Memory Hierarchy: Utilizing different levels of memory (e.g., cache, RAM, disk) for efficient data storage and retrieval.
  • Memory Optimization: Optimizing memory usage to enhance system performance and reliability.
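
A small sketch of requesting and releasing memory from the kernel on a POSIX system using the mmap() and munmap() system calls. MAP_ANONYMOUS is a widely supported extension (Linux/BSD), and the 4096-byte size is just an example of one typical page:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    size_t len = 4096;   /* one typical page */

    /* Ask the kernel for an anonymous, private mapping (not backed by a file). */
    void *region = mmap(NULL, len, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (region == MAP_FAILED) {
        perror("mmap");
        return EXIT_FAILURE;
    }

    /* Use the mapped memory like any other buffer. */
    strcpy((char *)region, "hello from a mapped page");
    printf("%s\n", (char *)region);

    /* Return the memory to the kernel when done. */
    if (munmap(region, len) == -1) {
        perror("munmap");
        return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;
}
```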

Storage Management

Definition:

  • Storage management involves managing secondary storage devices (e.g., hard disks) to store and retrieve data efficiently.

Key Points:

  • File System Management: Creating, reading, writing, and deleting files and directories on storage devices (see the directory-listing sketch after this list).
  • Disk Management: Allocating and managing disk space, disk scheduling, and disk caching.
  • Storage Optimization: Optimizing storage usage to maximize performance and reliability.
  • Backup and Recovery: Implementing strategies to protect data against loss or corruption.
  • Data Integrity: Ensuring data integrity through mechanisms like file system journaling and RAID.
  • Storage Hierarchies: Utilizing different types of storage media (e.g., SSDs, magnetic disks) based on speed and cost considerations.
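
As a small illustration of file system management, here is a sketch that lists the entries of a directory and reports each file's size using the POSIX opendir(), readdir(), stat(), and closedir() interfaces. The directory path "." (the current directory) is only an example:

```c
#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>

int main(void) {
    DIR *dir = opendir(".");                 /* open the current directory */
    if (dir == NULL) {
        perror("opendir");
        return EXIT_FAILURE;
    }

    struct dirent *entry;
    while ((entry = readdir(dir)) != NULL) { /* iterate over directory entries */
        struct stat st;
        if (stat(entry->d_name, &st) == -1)  /* fetch metadata for each entry */
            continue;
        printf("%-20s %8lld bytes\n", entry->d_name, (long long)st.st_size);
    }

    closedir(dir);
    return EXIT_SUCCESS;
}
```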

In summary, process management involves handling the lifecycle of processes, memory management optimizes memory usage and allocation, and storage management ensures efficient storage and retrieval of data on secondary storage devices. These components are essential for the smooth operation of an operating system and the effective utilization of computer resources.
