An operating system (OS) is software that manages computer hardware. An operating system controls the allocation of resources and services such as memory, processors, devices and information.
Following are some of the important functions of an operating system:
Memory management refers to the management of primary memory, or main memory. Main memory provides fast storage that can be accessed directly by the CPU, so for a program to be executed, it must reside in main memory.
In a multiprogramming environment, the OS decides which process gets the processor, when, and for how much time. This function is called process scheduling.
I/O Operation: The I/O subsystem comprises the I/O devices and their corresponding driver software. Drivers hide the peculiarities of specific hardware devices from the user, since each device driver knows the peculiarities of its particular device.
A file system is normally organized into directories for easy navigation and usage. These directories may contain files and other directories. The operating system performs this file management.
Program execution: The operating system handles many kinds of activities, from user programs to system programs such as the printer spooler, name servers, file servers, etc. Each of these activities is encapsulated as a process.
Security -- Preventing unauthorized access to programs and data by means of passwords and similar techniques.
Control over system performance -- Recording delays between a request for a service and the response from the system.
Job accounting -- Keeping track of time and resources used by various jobs and users.
Error detecting aids -- Production of dumps, traces, error messages and other debugging and error detecting aids.
Coordination between other software and users -- Coordination and assignment of compilers, interpreters, assemblers and other software to the various users of the computer system.
Types of Operating System:
Batch operating system: The users of a batch operating system do not interact with the computer directly. Each user prepares a job on an off-line device such as punched cards and submits it to the computer operator. The operator then sorts the programs into batches with similar requirements.
Time-sharing operating systems: Time sharing is a technique which enables many people, located at various terminals, to use a particular computer system at the same time. Multiple jobs are executed by the CPU by switching between them, but the switches occur so frequently that each user receives an immediate response.
Distributed operating System: Distributed systems use multiple central processors to serve multiple real-time applications and multiple users. These processors are referred to as sites, nodes, computers and so on.
Network operating System: A Network Operating System runs on a server and provides the server the capability to manage data, users, groups, security, applications, and other networking functions. Example: Microsoft Windows Server 2003.
Real Time operating System: A real-time system is defined as a data processing system in which the time interval required to process and respond to inputs is so small that it controls the environment. Real-time processing is always online, whereas an online system need not be real time. Example: an air traffic control system.
Hard real-time systems: Hard real-time systems guarantee that critical tasks complete on time. In hard real-time systems, secondary storage is limited or missing, with data instead stored in ROM.
Soft real-time systems: Soft real-time systems are less restrictive. A critical real-time task gets priority over other tasks and retains that priority until it completes. Examples: multimedia, virtual reality.
Operating System - Properties:
Batch processing: Batch processing is a technique in which the operating system collects programs and data together in a batch before processing starts. The OS defines a job, which has a predefined sequence of commands, programs and data, as a single unit. When a job completes its execution, its memory is released and the output for the job is copied into an output spool for later printing or processing.
Multitasking: Multitasking refers to multiple jobs being executed by the CPU simultaneously by switching between them. Switches occur so frequently that users may interact with each program while it is running.
Multiprogramming: When two or more programs reside in memory at the same time, sharing the processor is referred to as multiprogramming. Multiprogramming assumes a single shared processor. Multiprogramming increases CPU utilization by organizing jobs so that the CPU always has one to execute.
Interactivity: Interactivity refers to the ability of a user to interact with the computer system.
Real Time System: Real-time systems are usually dedicated, embedded systems. Their operating systems typically read from and react to sensor data.
Distributed Environment: Distributed environment refers to multiple independent CPUs or processors in a computer system.
Spooling: Spooling is an acronym for simultaneous peripheral operations on line. Spooling refers to putting data of various I/O jobs in a buffer. This buffer is a special area in memory or hard disk which is accessible to I/O devices.
Process:
A process is a program in execution. The execution of a process must progress in a sequential fashion.
Object Program: Code to be executed.
Data: Data to be used for executing the program.
Resources: While executing the program, it may require some resources.
Status: The status of the process execution. A process can run to completion only when all requested resources have been allocated to it.
Program:
A program contains the instructions to be executed by the processor. A program occupies space at a single place in main memory and continues to stay there.
New: The process is being created.
Ready: The process is waiting to be assigned to a processor. Ready processes are waiting for the operating system to allocate the processor to them so that they can run.
Running: Process instructions are being executed.
Waiting: The process is waiting for some event to occur (such as the completion of an I/O operation).
Terminated: The process has finished execution.
Process Control Block:
Each process is represented in the operating system by a data structure called process control block (PCB) or task control block.
Pointer: Points to another process control block; the pointer is used for maintaining the scheduling list.
Process State: Process state may be new, ready, running, waiting and so on.
Program Counter: Program Counter indicates the address of the next instruction to be executed for this process.
CPU registers: CPU registers include general purpose register, stack pointers, index registers and accumulators etc.
Memory management information: This information may include the value of base and limit registers, the page tables etc. This information is useful for deallocating the memory when the process terminates.
Accounting information: This information includes the amount of CPU and real time used, time limits, job or process numbers, account numbers etc.
Process Scheduling:
The process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy.
Scheduling queues refer to queues of processes or devices. When a process enters the system, it is put into a job queue. A device queue is a queue of processes waiting for a particular I/O device.
A newly arrived process is put in the ready queue. Once the CPU is assigned to a process, then that process will execute. The process could issue an I/O request and then it would be placed in an I/O queue.
Schedulers: Schedulers are special system software which handle process scheduling in various ways.
Long Term Scheduler: It is also called job scheduler. Job scheduler selects processes from the queue and loads them into memory for execution. Time-sharing operating systems have no long term scheduler.
Short Term Scheduler: It is also called the CPU scheduler. The CPU scheduler selects, from among the processes that are ready to execute, the one to which it allocates the CPU. The short-term scheduler is also known as the dispatcher.
Medium Term Scheduler: Medium-term scheduling is part of swapping. A running process may become suspended if it makes an I/O request. Suspended processes cannot make any progress towards completion, so the medium-term scheduler removes them from memory, reducing the degree of multiprogramming.
Context Switch: A context switch is the mechanism that stores and restores the state, or context, of a CPU in the process control block so that process execution can be resumed from the same point at a later time. Using this technique, a context switcher enables multiple processes to share a single CPU.
Scheduling algorithms:
Throughput: number of processes that complete their execution per time unit
Turnaround time: the amount of time to execute a particular process, i.e. the sum of waiting time and execution time
Waiting time: the amount of time a process has been waiting in the ready queue, i.e. turnaround time minus execution (burst) time
Response time: amount of time it takes from when a request was submitted until the first response is produced, not output (for time-sharing environment)
First Come First Serve (FCFS):
Jobs are executed on a first come, first served basis. Easy to understand and implement, but poor in performance, as the average wait time is high.
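As a minimal sketch (the burst times here are invented for illustration), FCFS timing can be simulated by running each job to completion in arrival order:

```python
def fcfs(bursts):
    """Return per-process (waiting, turnaround) times for FCFS order.
    All processes are assumed to arrive at time 0."""
    waiting, turnaround, clock = [], [], 0
    for burst in bursts:
        waiting.append(clock)        # time spent waiting before this burst
        clock += burst               # the CPU runs the job to completion
        turnaround.append(clock)     # completion time, since arrival is 0
    return waiting, turnaround

w, t = fcfs([24, 3, 3])
print(sum(w) / len(w))               # average waiting time: 17.0
```

With the long job first, the two short jobs wait behind it (the convoy effect); submitting them in the order [3, 3, 24] drops the average wait to 3.0.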
Shortest Job First (SJF):
The processor should know in advance how much time each process will take. This algorithm associates with each process the length of the process's next CPU burst. When the CPU is available, it is assigned to the process that has the smallest next CPU burst. If the next CPU bursts of two processes are the same, FCFS scheduling is used.
The SJF algorithm can be either pre-emptive or nonpreemptive. The choice arises when a new process arrives at the ready queue while a previous process is still executing. The next CPU burst of the newly arrived process may be shorter than what is left of the currently executing process. A pre-emptive SJF algorithm will preempt the currently executing process, whereas a nonpreemptive SJF algorithm will allow the currently running process to finish its CPU burst.
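Under the same all-arrive-at-zero assumption, nonpreemptive SJF simply sorts jobs by burst length (the burst values below are made up):

```python
def sjf(bursts):
    """Average waiting time for nonpreemptive SJF, all jobs arriving at 0."""
    clock, total_wait = 0, 0
    for burst in sorted(bursts):     # shortest next CPU burst goes first
        total_wait += clock          # this job waited until `clock`
        clock += burst
    return total_wait / len(bursts)

print(sjf([6, 8, 7, 3]))             # 7.0; FCFS order would give 10.25
```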
Priority Based Scheduling:
A priority is associated with each process, and the CPU is allocated to the process with the highest priority. Equal-priority processes are scheduled in FCFS order. Generally the larger the CPU burst, the lower the priority, and vice versa.
Internally defined priorities use some measurable quantity or quantities to compute the priority of a process. External priorities are set by criteria outside the OS, such as the importance of the process, the type and amount of funds being paid for computer use.
A pre-emptive priority scheduling algorithm will preempt the CPU if the priority of the newly arrived process is higher than the priority of the currently running process. A nonpreemptive priority scheduling algorithm will simply put the new process at the head of the ready queue.
A major problem with priority scheduling algorithms is indefinite blocking, or starvation. A priority scheduling algorithm can leave some low priority processes waiting indefinitely.
A solution to the problem of indefinite blockage of low-priority processes is aging. Aging is a technique of gradually increasing the priority of processes that wait in the system for a long time.
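A sketch of aging (the process tuples and the aging rate are invented; lower numbers mean higher priority): each tick of waiting lowers a process's effective priority number, so a starved process eventually wins.

```python
def pick_next(ready, clock, aging_rate=1):
    """Select the process with the best effective priority.
    `ready` holds (name, base_priority, arrival_time) tuples."""
    def effective(proc):
        name, base, arrival = proc
        return base - aging_rate * (clock - arrival)  # waiting lowers the number
    return min(ready, key=effective)

ready = [("A", 5, 0), ("B", 2, 8), ("C", 9, 1)]
print(pick_next(ready, clock=10))    # A wins: it has waited since time 0
```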
Round Robin Scheduling:
Each process is given a fixed time to execute, called a quantum. The ready queue is kept as a FIFO queue of processes, and new processes are added to its tail. Once a process has executed for the given time period, it is preempted and another process executes for its time period. Context switching is used to save the states of preempted processes.
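The quantum-based switching described above can be sketched as a queue simulation (the burst times and quantum are invented):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Return completion time of each process id under Round Robin."""
    ready = deque(enumerate(bursts))           # FIFO ready queue: (pid, remaining)
    finish, clock = {}, 0
    while ready:
        pid, remaining = ready.popleft()
        run = min(quantum, remaining)
        clock += run
        if remaining > run:
            ready.append((pid, remaining - run))   # preempt, re-queue at tail
        else:
            finish[pid] = clock                    # burst done within the slice
    return finish

print(round_robin([24, 3, 3], quantum=4))      # {1: 7, 2: 10, 0: 30}
```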
Multi Queue Scheduling:
Multiple queues are maintained for processes. The processes are permanently assigned to one queue, generally based on some property of the process, such as memory size, process priority, or process type. Each queue has absolute priority over lower-priority queues and also each queue has its own scheduling algorithm.
Multi-Threading:
A thread is a flow of execution through the process code, with its own program counter, system registers and stack. A thread is also called a lightweight process. Threads provide a way to improve application performance through parallelism. Threads are implemented in the following two ways:
User Level Threads -- The application manages thread management; the kernel is not aware of the existence of threads. The thread library contains code for creating and destroying threads, for passing messages and data between threads, for scheduling thread execution, and for saving and restoring thread contexts. The application begins with a single thread and begins running in that thread.
Kernel Level Threads -- Thread management is done by the kernel. Kernel threads are supported directly by the operating system. Any application can be programmed to be multithreaded, and all of the threads within an application are supported within a single process. Kernel threads are generally slower to create and manage than user threads.
Multithreading Models -- Some operating systems provide a combined user-level and kernel-level thread facility; Solaris is a good example of this combined approach. In a combined system, multiple threads within the same application can run in parallel on multiple processors, and a blocking system call need not block the entire process. There are three multithreading models:
Many to Many Model -- In this model, many user-level threads are multiplexed onto a smaller or equal number of kernel threads. Developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor.
Many to One Model -- The many-to-one model maps many user-level threads to one kernel-level thread. Thread management is done in user space. Only one thread can access the kernel at a time, so multiple threads are unable to run in parallel on multiprocessors.
One to One Model -- There is a one-to-one relationship between user-level threads and kernel-level threads. This model allows another thread to run when a thread makes a blocking system call, and it supports multiple threads executing in parallel on multiprocessors.
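As a small illustration of threads sharing one process's memory (the counter workload is invented; Python's `threading` module wraps the platform's kernel-level threads):

```python
import threading

counter = 0
lock = threading.Lock()

def worker(iterations):
    """Each thread increments the shared counter under a lock."""
    global counter
    for _ in range(iterations):
        with lock:                   # serialize access to the shared variable
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                         # wait for all threads to finish
print(counter)                       # 4000: all increments are visible
```

All four threads see and update the same `counter` because they live in one process's address space; each has only its own stack and registers.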
Memory Management:
Memory management keeps track of each memory location, whether it is allocated to some process or free. Memory management provides protection by using two registers, a base register and a limit register. The base register holds the smallest legal physical memory address and the limit register specifies the size of the range.
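The base/limit check can be sketched as follows (the register values are illustrative, not from any real machine):

```python
def is_legal(address, base, limit):
    """An access is legal only within [base, base + limit)."""
    return base <= address < base + limit

# A process loaded at base 300040 with limit 120900 may touch
# addresses 300040 .. 420939; anything outside traps to the OS.
print(is_legal(300040, base=300040, limit=120900))   # True
print(is_legal(420940, base=300040, limit=120900))   # False: one past the end
```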
Binding of instructions and data to memory addresses can be done at compile time, load time or execution time.
Dynamic Loading: In dynamic loading, a routine of a program is not loaded until it is called by the program. The main program is loaded into memory and is executed. Other routines methods or modules are loaded on request.
Dynamic Linking: Linking is the process of collecting and combining various modules of code and data into an executable file that can be loaded into memory and executed. When the libraries are combined at load time, the linking is called static linking; when linking is done at execution time, it is called dynamic linking. In static linking, libraries are linked before execution, so the program code size becomes bigger, whereas in dynamic linking, libraries are linked at execution time, so the program code size remains smaller.
Logical versus Physical Address Space: An address generated by the CPU is a logical address, whereas an address actually available in the memory unit is a physical address. Virtual and physical addresses differ under an execution-time address-binding scheme. The run-time mapping from virtual to physical addresses is done by the memory management unit (MMU), which is a hardware device.
Swapping: Swapping is a mechanism in which a process can be swapped temporarily out of main memory to a backing store, and then brought back into memory for continued execution.
Main memory usually has two partitions - Low Memory -- Operating system resides in this memory and High Memory -- User processes then held in high memory.
Memory allocation mechanisms:
Single-partition allocation: In this type of allocation, the relocation-register scheme is used. The relocation register contains the value of the smallest physical address, whereas the limit register contains the range of logical addresses. Each logical address must be less than the limit register.
Multiple-partition allocation In this type of allocation, main memory is divided into a number of fixed-sized partitions where each partition should contain only one process. When a partition is free, a process is selected from the input queue and is loaded into the free partition.
Fragmentation: As processes are loaded into and removed from memory, the free memory space is broken into little pieces. After some time, processes cannot be allocated to memory blocks because the blocks are too small, and the memory blocks remain unused. This problem is known as fragmentation.
External fragmentation: The total memory space is enough to satisfy a request or to hold a process, but it is not contiguous, so it cannot be used.
Internal fragmentation: The memory block assigned to a process is bigger than requested. Some portion of memory is left unused, as it cannot be used by another process.
Paging: External fragmentation is avoided by using the paging technique. Paging is a technique in which memory is broken into blocks of the same size, called pages (the size is a power of 2, between 512 bytes and 8192 bytes). When a process is to be executed, its corresponding pages are loaded into any available memory frames. An address generated by the CPU is divided into:
Page number (p) -- page number is used as an index into a page table which contains base address of each page in physical memory.
Page offset (d) -- page offset is combined with base address to define the physical memory address.
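With a power-of-two page size, the split is just a division and a remainder. A sketch (the 4096-byte page size and the page-table contents are assumptions for illustration):

```python
PAGE_SIZE = 4096                     # an assumed page size, a power of 2

def translate(logical_address, page_table):
    """Map a logical address to a physical one via the page table,
    which maps page number -> frame number."""
    page = logical_address // PAGE_SIZE      # page number (p)
    offset = logical_address % PAGE_SIZE     # page offset (d)
    return page_table[page] * PAGE_SIZE + offset

# Page 2 (logical 8195 = 2 * 4096 + 3) resides in frame 3.
print(translate(8195, {0: 5, 1: 9, 2: 3}))   # 12291 = 3 * 4096 + 3
```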
Segmentation: Segmentation is a technique to break memory into logical pieces where each piece represents a group of related information. Segmentation can be implemented with or without paging. An address generated by the CPU is divided into:
Segment number (s) -- segment number is used as an index into a segment table which contains base address of each segment in physical memory and a limit of segment.
Segment offset (o) -- segment offset is first checked against limit and then is combined with base address to define the physical memory address.
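Segment translation differs from paging in that the offset is bounds-checked against the segment's limit. A sketch (the segment table values are invented):

```python
def seg_translate(s, offset, segment_table):
    """Map (segment number, offset) to a physical address.
    segment_table maps s -> (base, limit)."""
    base, limit = segment_table[s]
    if offset >= limit:                      # offset checked against limit first
        raise MemoryError("trap: segment offset out of range")
    return base + offset

table = {0: (1400, 1000), 1: (6300, 400)}
print(seg_translate(1, 53, table))           # 6353 = 6300 + 53
```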
Virtual Memory:
Virtual memory is a technique that allows the execution of processes that are not completely available in memory. Virtual memory is commonly implemented by demand paging. It can also be implemented in a segmentation system, or by demand segmentation.
Demand Paging:
A demand paging system is quite similar to a paging system with swapping. Rather than swapping the entire process into memory, however, a lazy swapper called the pager is used. When a process is to be swapped in, the pager guesses which pages will be used before the process is swapped out again. Thus, it avoids reading into memory pages that will not be used anyway, decreasing the swap time and the amount of physical memory needed.
Hardware support is required to distinguish between the pages that are in memory and the pages that are on disk, using a valid-invalid bit scheme. Access to a page marked invalid causes a page-fault trap. This trap is the result of the operating system's failure to bring the desired page into memory, and it is handled by the operating system.
Page Replacement Algorithm: Page replacement algorithms are the techniques by which the operating system decides which memory pages to swap out and write to disk when a page of memory needs to be allocated. Page replacement happens whenever a page fault occurs and no free page can be used for the allocation.
Reference String: The string of memory references is called reference string. Reference strings are generated artificially or by tracing a given system and recording the address of each memory reference.
First In First Out (FIFO) algorithm: Oldest page in main memory is the one which will be selected for replacement. Easy to implement, keep a list, replace pages from the tail and add new pages at the head.
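A sketch of FIFO replacement counting faults (the reference string is a common textbook example; 3 frames are assumed):

```python
from collections import deque

def fifo_faults(reference_string, frames):
    """Count page faults under FIFO replacement."""
    memory, faults = deque(), 0
    for page in reference_string:
        if page not in memory:               # page fault
            faults += 1
            if len(memory) == frames:
                memory.popleft()             # evict the oldest resident page
            memory.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(refs, 3))                  # 15 faults
```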
Optimal Page algorithm: An optimal page-replacement algorithm has the lowest page-fault rate of all algorithms. When a page needs to be swapped in, the operating system swaps out the page whose next use will occur farthest in the future.
Least Recently Used (LRU) algorithm: Page which has not been used for the longest time in main memory is the one which will be selected for replacement. Easy to implement, keep a list, replace pages by looking back into time.
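LRU can be sketched with a list kept in recency order standing in for the hardware bookkeeping (same reference string as the FIFO example above):

```python
def lru_faults(reference_string, frames):
    """Count page faults under LRU replacement."""
    memory, faults = [], 0                   # list ordered oldest -> most recent
    for page in reference_string:
        if page in memory:
            memory.remove(page)              # hit: refresh its recency below
        else:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)                # evict the least recently used page
        memory.append(page)                  # page is now the most recent
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))                   # 12 faults, versus 15 for FIFO
```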
Page Buffering algorithm: To get a process started quickly, a pool of free frames is kept. On a page fault, a page is selected to be replaced, the new page is written into a frame from the free pool, the page table is marked, and the process is restarted. The dirty page is then written out to disk and the frame holding the replaced page is placed in the free pool.
Least frequently Used (LFU) algorithm: The page with the smallest count is the one selected for replacement. This algorithm suffers from the situation in which a page is used heavily during the initial phase of a process, but then is never used again.
Most frequently Used(MFU) algorithm: This algorithm is based on the argument that the page with the smallest count was probably just brought in and has yet to be used.
I/O Hardware:
Computers operate many kinds of devices. A device communicates with a computer system by sending signals over a cable or even through the air. The device communicates with the machine via a connection point termed a port (for example, a serial port).
A bus is a set of wires and a rigidly defined protocol that specifies a set of messages that can be sent on the wires. When device A has a cable that plugs into device B, device B has a cable that plugs into device C, and device C plugs into a port on the computer, this arrangement is called a daisy chain.
Controller: A controller is a collection of electronics that can operate a port, a bus, or a device. The SCSI bus controller is often implemented as a separate circuit board (a host adapter) that plugs into the computer.
I/O port: An I/O port typically consists of four registers, called the status , control, data-in, and data-out registers.
Polling: Polling is a process by which a host waits for a controller response. It is a looping process, reading the status register over and over until the busy bit of the status register becomes clear.
I/O devices can be categorized into the following categories:
Human Readable devices are suitable for communicating with the computer user. Examples are printers, video display terminals, keyboard etc.
Machine Readable devices are suitable for communicating with electronic equipment. Examples are disk and tape drives, sensors, controllers and actuators.
Communication devices are suitable for communicating with remote devices. Examples are digital line drivers and modems.
Direct Memory Access (DMA):
A special control unit is used to transfer a block of data directly between an external device and main memory, without intervention by the processor. This approach is called Direct Memory Access (DMA). DMA can be used with either polling or interrupt software.
When used with an interrupt, the CPU is notified only after the entire block of data has been transferred. For each byte or word transferred, the DMA controller provides the memory address and all the bus signals controlling the data transfer. Handshaking between the DMA controller and the device controller is performed via wires using DMA-request and DMA-acknowledge signals.
Device Controllers:
Examples: network card, graphics adapter, disk controller, DVD-ROM controller, serial port, USB, sound card.
I/O Softwares:
Interrupts: The CPU hardware has an interrupt-request line, a wire which the CPU senses after executing every instruction. When the CPU detects that a controller has put a signal on the interrupt-request line, the CPU saves its state, such as the current value of the instruction pointer, and jumps to the interrupt handler routine at a fixed address. The interrupt handler determines the cause of the interrupt, performs the necessary processing, and executes a return-from-interrupt instruction to return the CPU to its previous execution state.
Most CPUs have two interrupt request lines:
non-maskable interrupt - Such kind of interrupts are reserved for events like unrecoverable memory errors.
maskable interrupt - Such interrupts can be switched off by the CPU before the execution of critical instructions that must not be interrupted.
Application I/O Interface: Application I/O Interface represents the structuring techniques and interfaces for the operating system to enable I/O devices to be treated in a standard, uniform way.
Following are the characteristics of I/O interfaces with respect to devices:
Character-stream / block, Sequential / random-access, Synchronous / asynchronous, Sharable / dedicated, Speed of operation, Read-write, read only, or write only
Clocks: Clocks are also called timers. The clock software, known as the clock driver, takes the form of a device driver, though a clock is neither a block device nor a character device.
Kernel I/O Subsystem: Kernel I/O Subsystem is responsible to provide many services related to I/O such as Scheduling, Buffering, Caching, Spooling and Device Reservation, Error Handling
Device driver: A device driver is a program or routine developed for an I/O device. A device driver implements I/O operations or behaviours for a specific class of devices. In the layered structure of the I/O system, the device driver lies between the interrupt handler and the device-independent I/O software.
File System:
File: A file is a named collection of related information that is recorded on secondary storage such as magnetic disks, magnetic tapes and optical disks. In general, a file is a sequence of bits, bytes, lines or records whose meaning is defined by the file's creator and user.
File Structure: File structure is a structure which follows a required format that the operating system can understand.
File Type: File type refers to the ability of the operating system to distinguish different types of files, such as text files, source files and binary files.
Ordinary files: These are the files that contain user information.
Directory files: These files contain lists of file names and other information related to those files.
Special files: These files represent physical devices like disks, terminals, printers, networks, tape drives, etc. These files are of two types: character special files (terminals and printers) and block special files (disks and tapes).
File Access Mechanisms: File access mechanism refers to the manner in which the records of a file may be accessed.
Sequential access: The information in the file is processed in order, one record after the other. Example: Compilers usually access files in this fashion.
Direct/Random access: Each record has its own address in the file, with the help of which it can be directly accessed for reading or writing. The records need not be in any sequence within the file, and they need not be in adjacent locations on the storage medium.
Indexed sequential access: An index is created for each file which contains pointers to various blocks. Index is searched sequentially and its pointer is used to access the file directly.
Space Allocation: Files are allocated disk space by the operating system.
Contiguous Allocation: Each file occupies a contiguous address space on disk, and the assigned disk addresses are in linear order. External fragmentation is a major issue with this allocation technique.
Linked Allocation: Each file carries a list of links to disk blocks, and the directory contains a link/pointer to the first block of a file. This is effective for sequential-access files but inefficient for direct-access files.
Indexed Allocation: Provides a solution to the problems of contiguous and linked allocation. An index block is created holding all the pointers to a file's blocks. Each file has its own index block, which stores the addresses of the disk space occupied by the file. The directory contains the addresses of the index blocks of files.
Security:
Security refers to providing a protection system to computer system resources such as CPU, memory, disk, software programs and most importantly data/information stored in the computer system.
Authentication:
Authentication refers to identifying each user of the system and associating the executing programs with those users. Operating systems generally identify/authenticate users in the following three ways:
Username / Password, User card/key, User attribute [fingerprint/ eye retina pattern/ signature]
One Time passwords:
In a one-time password system, a unique password is required every time the user tries to log into the system. Once a one-time password has been used, it cannot be used again. One-time passwords are implemented in various ways:
Random numbers, Secret key (User are provided a hardware device which can create a secret id mapped with user id), Network password (Some commercial applications send one time password to user on registered mobile/ email)
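As a sketch of the secret-key variant, here is an HMAC-based one-time password in the style of RFC 4226 (HOTP); the shared secret below is the RFC's published test key, not a real credential:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Derive a one-time code from a shared secret and a counter."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                  # dynamic truncation offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Each counter value yields a fresh code, so a captured code is useless later.
print(hotp(b"12345678901234567890", 0))      # 755224 (RFC 4226 test vector)
```

Both sides advance the counter after a successful login, which is what makes each code single-use.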
Program Threats:
The operating system's processes and kernel perform their designated tasks as instructed. If a user program makes these processes perform malicious tasks, this is known as a program threat. Following is a list of some well-known program threats:
Trojan Horse - Such a program traps user login credentials and stores them to send to a malicious user, who can later log into the computer and access system resources.
Trap Door - If a program which is designed to work as required has a security hole in its code and performs illegal actions without the user's knowledge, it is said to have a trap door.
Logic Bomb - A logic bomb is a program that misbehaves only when certain conditions are met; otherwise it works as a genuine program. This makes it harder to detect.
Virus - A virus, as the name suggests, can replicate itself on a computer system. Viruses are highly dangerous and can modify or delete user files and crash systems.
System Threats:
System threats refer to the misuse of system services and network connections to cause trouble for users. System threats can be used to launch program threats across a complete network; this is called a program attack. Following is a list of some well-known system threats:
Worm - A worm is a process which can choke down system performance by using system resources to extreme levels.
Port Scanning - Port scanning is a mechanism or means by which a hacker can detect system vulnerabilities in order to attack the system.
Denial of Service - Denial of service attacks normally prevent users from making legitimate use of the system.
Computer Security Classifications:
Type A: Highest level. Uses formal design specifications and verification techniques.
Type B: Provides a mandatory protection system. Has all the properties of a class C2 system and attaches a sensitivity label to each object. It is of three types:
B1 - Maintains the security label of each object in the system.
B2 - Extends the sensitivity labels to each system resource, such as storage objects; supports covert channels and auditing of events.
B3 - Allows creating lists or user groups for access control, to grant or revoke access to a given named object.
Type C: Provides protection and user accountability using audit capabilities. It is of two types:
C1 - Incorporates controls so that users can protect their private information and keep other users from accidentally reading or deleting their data. UNIX versions are mostly class C1.
C2 - Adds individual-level access control to the capabilities of a C1 level system.
Type D: Lowest level. Minimum protection. MS-DOS falls in this category.
Linux:
Linux is one of popular version of UNIX operating System. Components of Linux System
Kernel - Kernel is the core part of Linux. It is responsible for all major activities of this operating system. It is consists of various modules and it interacts directly with the underlying hardware.
System Library - System libraries are special functions or programs using which application programs or system utilities accesses Kernel's features. These libraries implements most of the functionalities of the operating system and do not requires kernel module's code access rights.
System Utility - System Utility programs are responsible to do specialized, individual level tasks.
Architecture:
Hardware layer - Hardware consists of all peripheral devices (RAM/ HDD/ CPU etc).
Kernel - Core component of Operating System, interacts directly with hardware, provides low level services to upper layer components.
Shell - An interface to kernel, hiding complexity of kernel's functions from users. Takes commands from user and executes kernel's functions.
Utilities - Utility programs giving user most of the functionalities of an operating systems.
Time-sharing operating systems: Time sharing is a technique which enables many people, located at various terminals, to use a particular computer system at the same time. Multiple jobs are executed by the CPU by switching between them, and the switches occur so frequently that each user receives an immediate response.
Distributed operating System: Distributed systems use multiple central processors to serve multiple real-time applications and multiple users. These processors are referred to as sites, nodes, computers and so on.
Network operating System: A network operating system runs on a server and provides the server with the capability to manage data, users, groups, security, applications and other networking functions. Example: Microsoft Windows Server 2003.
Real Time operating System: A real-time system is defined as a data processing system in which the time interval required to process and respond to inputs is so small that it controls the environment. Real-time processing is always online, whereas an online system need not be real time. Example: an air traffic control system.
Hard real-time systems: Hard real-time systems guarantee that critical tasks complete on time. In hard real-time systems, secondary storage is limited or absent, with data stored in ROM instead.
Soft real-time systems: Soft real-time systems are less restrictive. A critical real-time task gets priority over other tasks and retains that priority until it completes. Examples: multimedia and virtual reality applications.
Operating System - Properties:
Batch processing: Batch processing is a technique in which the operating system collects programs and data together in a batch before processing starts. The OS defines a job as a predefined sequence of commands, programs and data treated as a single unit. When a job completes its execution, its memory is released and the output for the job is copied into an output spool for later printing or processing.
Multitasking: Multitasking is the term for multiple jobs being executed by the CPU simultaneously by switching between them. Switches occur so frequently that users may interact with each program while it is running.
Multiprogramming: When two or more programs reside in memory at the same time, sharing the processor between them is referred to as multiprogramming. Multiprogramming assumes a single shared processor and increases CPU utilization by organizing jobs so that the CPU always has one to execute.
Interactivity: Interactivity means that a user is able to interact with the computer system.
Real Time System: Real-time systems are usually dedicated, embedded systems. Their operating systems typically read from and react to sensor data.
Distributed Environment: A distributed environment is one with multiple independent CPUs or processors in a computer system.
Spooling: Spooling is an acronym for simultaneous peripheral operations on line. Spooling refers to putting the data of various I/O jobs in a buffer, a special area in memory or on disk which is accessible to the I/O devices.
Process:
A process is a program in execution. The execution of a process must progress in a sequential fashion. A process includes:
Object Program: Code to be executed.
Data: Data to be used for executing the program.
Resources: While executing the program, it may require some resources.
Status: Tracks the status of the process execution. A process can run to completion only when all requested resources have been allocated to it.
Program:
A program contains the instructions to be executed by the processor. A program occupies a single place in main memory and stays there.
Process States:
New - The process is being created.
Ready - The process is waiting to be assigned to a processor. Ready processes are waiting to have the processor allocated to them by the operating system so that they can run.
Running: Process instructions are being executed (i.e. The process that is currently being executed).
Waiting: The process is waiting for some event to occur (such as the completion of an I/O operation).
Terminated: The process has finished execution.
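The state transitions above can be sketched as a small table; the event names (admit, dispatch, timeout, wait, event, exit) are illustrative, not taken from any particular kernel.

```python
# A minimal sketch of the five-state process model described above.
# State and event names are illustrative, not from any real kernel.
TRANSITIONS = {
    "new":        {"admit": "ready"},
    "ready":      {"dispatch": "running"},
    "running":    {"timeout": "ready",      # preempted by the scheduler
                   "wait": "waiting",       # e.g. blocked on I/O
                   "exit": "terminated"},
    "waiting":    {"event": "ready"},       # the awaited event occurred
    "terminated": {},
}

def step(state, event):
    """Return the next state, or raise KeyError if the transition is illegal."""
    return TRANSITIONS[state][event]
```

For example, a running process that issues an I/O request moves to waiting, and returns to ready once the I/O completes.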
Process Control Block:
Each process is represented in the operating system by a data structure called process control block (PCB) or task control block.
Pointer: Pointer points to another process control block. Pointer is used for maintaining the scheduling list.
Process State: Process state may be new, ready, running, waiting and so on.
Program Counter: Program Counter indicates the address of the next instruction to be executed for this process.
CPU registers: CPU registers include general-purpose registers, stack pointers, index registers, accumulators etc.
Memory management information: This information may include the value of base and limit registers, the page tables etc. This information is useful for deallocating the memory when the process terminates.
Accounting information: This information includes the amount of CPU and real time used, time limits, job or process numbers, account numbers etc.
Process Scheduling:
The process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy.
Scheduling queues are queues of processes or devices. When a process enters the system, it is put into the job queue. A device queue is a queue of processes waiting for a particular I/O device.
A newly arrived process is put in the ready queue. Once the CPU is assigned to a process, that process executes. The process may issue an I/O request and then be placed in an I/O queue.
Schedulers: Schedulers are special system software components which handle process scheduling in various ways.
Long Term Scheduler: It is also called the job scheduler. The job scheduler selects processes from the queue and loads them into memory for execution. Time-sharing operating systems have no long-term scheduler.
Short Term Scheduler: It is also called the CPU scheduler. The CPU scheduler selects a process from among the processes that are ready to execute and allocates the CPU to it. The short-term scheduler is also known as the dispatcher.
Medium Term Scheduler: Medium-term scheduling is part of swapping. A running process may become suspended if it makes an I/O request; a suspended process cannot make any progress towards completion. The medium-term scheduler removes such processes from memory, reducing the degree of multiprogramming.
Context Switch: A context switch is the mechanism to store and restore the state or context of a CPU in Process Control block so that a process execution can be resumed from the same point at a later time. Using this technique a context switcher enables multiple processes to share a single CPU.
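The PCB fields and the context-switch mechanism described above can be sketched as follows; the field names mirror the text, but the function is a simplified illustration (a real kernel does this in hardware-assisted C/assembly).

```python
# A sketch of the PCB fields listed above as a Python dataclass.
# Field names mirror the text; a real kernel stores this in C structs.
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int
    state: str = "new"            # new / ready / running / waiting / terminated
    program_counter: int = 0      # address of the next instruction
    registers: dict = field(default_factory=dict)   # saved CPU registers
    base: int = 0                 # memory-management info: base register
    limit: int = 0                #                         limit register
    cpu_time_used: float = 0.0    # accounting information

def context_switch(old: PCB, new: PCB, pc: int, regs: dict) -> int:
    """Save the old process's context into its PCB, restore the new one's.
    Returns the program counter the CPU should resume from."""
    old.program_counter, old.registers = pc, dict(regs)
    old.state, new.state = "ready", "running"
    return new.program_counter
```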
Scheduling algorithms:
Throughput: the number of processes that complete their execution per time unit.
Turnaround time: the amount of time to execute a particular process, i.e. the sum of its waiting time and execution time.
Waiting time: the amount of time a process has been waiting in the ready queue, i.e. turnaround time minus burst time.
Response time: the amount of time from when a request was submitted until the first response is produced, not the final output (for time-sharing environments).
First Come First Serve (FCFS):
Jobs are executed on first come, first serve basis. Easy to understand and implement. Poor in performance as average wait time is high.
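As a rough illustration of FCFS and the scheduling metrics above, the following sketch computes each process's waiting and turnaround time; the process tuples are made-up sample data.

```python
# A sketch of FCFS scheduling with the metrics defined above.
# Processes are (name, arrival_time, burst_time); all values illustrative.
def fcfs(processes):
    """Return {name: (waiting_time, turnaround_time)} under FCFS."""
    clock, results = 0, {}
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        clock = max(clock, arrival)       # CPU may sit idle until arrival
        waiting = clock - arrival
        clock += burst                    # run the job to completion
        results[name] = (waiting, clock - arrival)  # turnaround = wait + burst
    return results
```

Running this on three sample jobs shows the long first job inflating the average wait, the weakness the text notes.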
Shortest Job First (SJF):
The processor should know in advance how much time the process will take. This algorithm associates with each process the length of the process's next CPU burst. When the CPU is available, it is assigned to the process that has the smallest next CPU burst. If the next CPU bursts of two processes are the same, FCFS scheduling is used.
The SJF algorithm can be either pre-emptive or nonpreemptive. The choice arises when a new process arrives at the ready queue while a previous process is still executing. The next CPU burst of the newly arrived process may be shorter than what is left of the currently executing process. A pre-emptive SJF algorithm will preempt the currently executing process, whereas a nonpreemptive SJF algorithm will allow the currently running process to finish its CPU burst.
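A minimal sketch of the nonpreemptive variant, assuming burst times are known in advance as the text requires; the process tuples are sample data.

```python
# A sketch of nonpreemptive SJF: at each decision point, pick the
# ready process with the smallest next burst (FCFS breaks ties, per the text).
def sjf(processes):
    """processes: list of (name, arrival, burst); returns the finish order."""
    pending = sorted(processes, key=lambda p: p[1])  # order by arrival
    clock, order = 0, []
    while pending:
        ready = [p for p in pending if p[1] <= clock]
        if not ready:                      # CPU idle: jump to next arrival
            clock = pending[0][1]
            continue
        # smallest burst first; earlier arrival breaks ties (FCFS)
        job = min(ready, key=lambda p: (p[2], p[1]))
        pending.remove(job)
        clock += job[2]                    # run to completion (nonpreemptive)
        order.append(job[0])
    return order
```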
Priority Based Scheduling:
A priority is associated with each process, and the CPU is allocated to the process with the highest priority. Equal-priority processes are scheduled in FCFS order. Generally the larger the CPU burst, the lower the priority, and vice versa.
Internally defined priorities use some measurable quantity or quantities to compute the priority of a process. External priorities are set by criteria outside the OS, such as the importance of the process, the type and amount of funds being paid for computer use.
A pre-emptive priority scheduling algorithm will preempt the CPU if the priority of the newly arrived process is higher than the priority of the currently running process. A nonpreemptive priority scheduling algorithm will simply put the new process at the head of the ready queue.
A major problem with priority scheduling algorithms is indefinite blocking, or starvation. A priority scheduling algorithm can leave some low priority processes waiting indefinitely.
A solution to the problem of indefinite blockage of low-priority processes is aging. Aging is a technique of gradually increasing the priority of processes that wait in the system for a long time.
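The aging idea can be sketched as follows; the convention that a larger number means higher priority, and the aging step of 1 per scheduling pass, are illustrative choices.

```python
# A sketch of the aging technique described above: on each scheduling
# pass, every process left waiting has its priority nudged up, so a
# low-priority process cannot starve indefinitely.
# Convention (illustrative): larger number = higher priority.
def schedule_with_aging(procs, aging_step=1):
    """procs: {name: priority}. Pop the highest-priority process and
    age the rest; returns (chosen_name, updated_priorities)."""
    chosen = max(procs, key=lambda n: procs[n])
    rest = {n: p + aging_step for n, p in procs.items() if n != chosen}
    return chosen, rest
```

Repeated passes eventually raise any waiting process above newly arrived high-priority work.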
Round Robin Scheduling:
Each process is given a fixed time to execute, called a quantum. The ready queue is kept as a FIFO queue of processes, and new processes are added to its tail. Once a process has executed for the given time period, it is preempted and another process executes for its time period. Context switching is used to save the states of preempted processes.
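The round-robin mechanism above can be sketched as a simulation; all processes are assumed to arrive at time 0 for simplicity, and the burst values are sample data.

```python
# A sketch of round-robin with a fixed time quantum, using a FIFO
# ready queue as described above. Returns the sequence of CPU slices.
from collections import deque

def round_robin(bursts, quantum):
    """bursts: {name: burst_time}, all arriving at t=0."""
    queue = deque(bursts.items())
    slices = []
    while queue:
        name, left = queue.popleft()
        run = min(quantum, left)
        slices.append((name, run))
        if left - run > 0:                 # preempted: back to the tail
            queue.append((name, left - run))
    return slices
```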
Multi Queue Scheduling:
Multiple queues are maintained for processes. The processes are permanently assigned to one queue, generally based on some property of the process, such as memory size, process priority, or process type. Each queue has absolute priority over lower-priority queues and also each queue has its own scheduling algorithm.
Multi-Threading:
A thread is a flow of execution through the process's code, with its own program counter, system registers and stack. A thread is also called a lightweight process. Threads provide a way to improve application performance through parallelism. Threads are implemented in the following two ways:
User Level Threads -- The application handles thread management; the kernel is not aware of the existence of threads. The thread library contains code for creating and destroying threads, for passing messages and data between threads, for scheduling thread execution and for saving and restoring thread contexts. The application begins with a single thread and begins running in that thread.
Kernel Level Threads -- Thread management is done by the kernel. Kernel threads are supported directly by the operating system. Any application can be programmed to be multithreaded, and all of the threads within an application are supported within a single process. Kernel threads are generally slower to create and manage than user threads.
Multithreading Models -- Some operating systems provide a combined user-level and kernel-level thread facility; Solaris is a good example of this combined approach. In a combined system, multiple threads within the same application can run in parallel on multiple processors, and a blocking system call need not block the entire process. There are three multithreading models:
Many to Many Model -- In this model, many user-level threads are multiplexed onto a smaller or equal number of kernel threads. Developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor.
Many to One Model -- The many-to-one model maps many user-level threads to one kernel-level thread. Thread management is done in user space. Only one thread can access the kernel at a time, so multiple threads are unable to run in parallel on multiprocessors.
One to One Model -- There is a one-to-one relationship between user-level threads and kernel-level threads. This model allows another thread to run when a thread makes a blocking system call, and it supports multiple threads executing in parallel on multiprocessors.
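As a rough illustration of several threads cooperating within one process, the following uses Python's threading module, whose threads are kernel-backed (roughly the one-to-one model). Note that CPython's global interpreter lock limits true CPU parallelism, so this is a sketch of the thread API rather than a performance claim.

```python
# A minimal sketch of multiple threads sharing one process's memory.
import threading

def parallel_sum(chunks):
    """Sum each chunk in its own thread, then combine the results."""
    results = [0] * len(chunks)            # shared memory within the process

    def worker(i, chunk):
        results[i] = sum(chunk)            # each thread fills its own slot

    threads = [threading.Thread(target=worker, args=(i, c))
               for i, c in enumerate(chunks)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()                           # wait for every thread to finish
    return sum(results)
```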
Memory Management:
Memory management keeps track of each and every memory location, whether it is allocated to some process or free. Memory management provides protection by using two registers, a base register and a limit register: the base register holds the smallest legal physical memory address and the limit register specifies the size of the range.
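The base/limit protection check can be sketched as follows; this is a minimal illustration of the idea, not an actual MMU implementation.

```python
# A sketch of base/limit protection: a logical address is legal only if
# it falls inside [0, limit), and the hardware relocates it by adding
# the base register before it reaches memory.
def relocate(logical, base, limit):
    """Return the physical address, or raise on an out-of-range access."""
    if not 0 <= logical < limit:
        raise MemoryError("address outside the process's range (trap to OS)")
    return base + logical
```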
Binding of instructions and data to memory addresses can be done at compile time, load time or execution time.
Dynamic Loading: In dynamic loading, a routine of a program is not loaded until it is called by the program. The main program is loaded into memory and executed; other routines, methods or modules are loaded on request.
Dynamic Linking: Linking is the process of collecting and combining various modules of code and data into an executable file that can be loaded into memory and executed. When the libraries are combined at load time, the linking is called static linking; when the linking is done at execution time, it is called dynamic linking. In static linking, libraries are linked before execution, so the program code size becomes bigger, whereas in dynamic linking libraries are linked at execution time, so the program code size remains smaller.
Logical versus Physical Address Space: An address generated by the CPU is a logical address, whereas an address actually seen by the memory unit is a physical address. Logical and physical addresses differ under an execution-time address-binding scheme. The run-time mapping from virtual to physical addresses is done by the memory management unit (MMU), which is a hardware device.
Swapping: Swapping is a mechanism in which a process can be swapped temporarily out of main memory to a backing store, and then brought back into memory for continued execution.
Main memory usually has two partitions: Low Memory, where the operating system resides, and High Memory, where user processes are held.
Memory allocation mechanisms:
Single-partition allocation: In this type of allocation, a relocation-register scheme is used. The relocation register contains the value of the smallest physical address, whereas the limit register contains the range of logical addresses. Each logical address must be less than the limit register.
Multiple-partition allocation: In this type of allocation, main memory is divided into a number of fixed-sized partitions, where each partition should contain only one process. When a partition is free, a process is selected from the input queue and loaded into the free partition.
Fragmentation: As processes are loaded into and removed from memory, the free memory space is broken into little pieces. Over time it can happen that processes cannot be allocated to memory blocks because the blocks are too small, and the blocks remain unused. This problem is known as fragmentation.
External fragmentation: Total memory space is enough to satisfy a request or to hold a process, but it is not contiguous, so it cannot be used.
Internal fragmentation: The memory block assigned to a process is bigger than requested; some portion of it is left unused, as it cannot be used by another process.
Paging: External fragmentation is avoided by using the paging technique. Paging is a technique in which logical memory is broken into blocks of the same size called pages and physical memory into blocks of the same size called frames (the size is a power of 2, between 512 bytes and 8192 bytes). When a process is to be executed, its pages are loaded into any available memory frames. An address generated by the CPU is divided into:
Page number (p) -- page number is used as an index into a page table which contains base address of each page in physical memory.
Page offset (d) -- page offset is combined with base address to define the physical memory address.
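The page-number/page-offset split can be sketched for a hypothetical 4 KB page size; the page table contents below are sample values.

```python
# A sketch of paged address translation. With a power-of-two page size,
# the page number is the high bits and the offset the low bits.
PAGE_SIZE = 4096
OFFSET_BITS = 12                    # log2(PAGE_SIZE)

def split(logical):
    """Return (page_number, offset) for a logical address."""
    return logical >> OFFSET_BITS, logical & (PAGE_SIZE - 1)

def paged_translate(logical, page_table):
    """Index the page table with the page number, then add the offset
    to the base address of the frame holding that page."""
    page, offset = split(logical)
    return page_table[page] * PAGE_SIZE + offset
```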
Segmentation: Segmentation is a technique to break memory into logical pieces where each piece represents a group of related information. Segmentation can be implemented with or without paging. An address generated by the CPU is divided into:
Segment number (s) -- segment number is used as an index into a segment table which contains base address of each segment in physical memory and a limit of segment.
Segment offset (o) -- segment offset is first checked against limit and then is combined with base address to define the physical memory address.
Virtual Memory:
Virtual memory is a technique that allows the execution of processes which are not completely available in memory. Virtual memory is commonly implemented by demand paging; it can also be implemented in a segmentation system (demand segmentation).
Demand Paging:
A demand paging system is quite similar to a paging system with swapping. Rather than swapping the entire process into memory, however, we use a lazy swapper called a pager. When a process is to be swapped in, the pager guesses which pages will be used before the process is swapped out again. Thus, it avoids reading into memory pages that will not be used anyway, decreasing the swap time and the amount of physical memory needed.
Hardware support is required to distinguish between pages that are in memory and pages that are on disk, using the valid-invalid bit scheme. Access to a page marked invalid causes a page-fault trap. This trap results from the operating system's failure to bring the desired page into memory; the operating system handles it by loading the page and restarting the instruction.
Page Replacement Algorithm: Page replacement algorithms are the techniques by which the operating system decides which memory pages to swap out (write to disk) when a page of memory needs to be allocated. Page replacement happens whenever a page fault occurs and no free frame is available for the allocation.
Reference String: The string of memory references is called reference string. Reference strings are generated artificially or by tracing a given system and recording the address of each memory reference.
First In First Out (FIFO) algorithm: Oldest page in main memory is the one which will be selected for replacement. Easy to implement, keep a list, replace pages from the tail and add new pages at the head.
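A sketch of FIFO replacement counting page faults over a reference string; the reference string and frame count in the test are sample values.

```python
# A sketch of FIFO page replacement: on a fault with no free frame,
# evict the page that entered memory first.
from collections import deque

def fifo_faults(reference_string, num_frames):
    """Count page faults when replaying a reference string."""
    frames, faults = deque(), 0
    for page in reference_string:
        if page not in frames:
            faults += 1                    # page fault
            if len(frames) == num_frames:
                frames.popleft()           # oldest page out
            frames.append(page)
    return faults
```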
Optimal Page algorithm: An optimal page-replacement algorithm has the lowest page-fault rate of all algorithms. When a page needs to be swapped in, the operating system swaps out the page whose next use will occur farthest in the future.
Least Recently Used (LRU) algorithm: Page which has not been used for the longest time in main memory is the one which will be selected for replacement. Easy to implement, keep a list, replace pages by looking back into time.
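A sketch of LRU replacement counting page faults over a sample reference string; the list order stands in for the "looking back into time" bookkeeping.

```python
# A sketch of LRU page replacement: on a fault with no free frame,
# evict the page whose last use lies furthest in the past.
def lru_faults(reference_string, num_frames):
    """Count page faults; the frames list is kept in recency order,
    least recently used first."""
    frames, faults = [], 0
    for page in reference_string:
        if page in frames:
            frames.remove(page)            # hit: move to most-recent position
        else:
            faults += 1                    # page fault
            if len(frames) == num_frames:
                frames.pop(0)              # evict least recently used
        frames.append(page)
    return faults
```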
Page Buffering algorithm: To get a process started quickly, keep a pool of free frames. On a page fault, select a page to be replaced, write the new page into a frame from the free pool, mark the page table and restart the process. Then write the dirty page out to disk and place the frame holding the replaced page in the free pool.
Least Frequently Used (LFU) algorithm: The page with the smallest count is the one selected for replacement. This algorithm suffers from the situation in which a page is used heavily during the initial phase of a process but then is never used again.
Most Frequently Used (MFU) algorithm: This algorithm is based on the argument that the page with the smallest count was probably just brought in and has yet to be used.
I/O Hardware:
Computers operate many kinds of devices. A device communicates with a computer system by sending signals over a cable or even through the air. The device communicates with the machine via a connection point termed a port (for example, a serial port).
A bus is a set of wires with a rigidly defined protocol that specifies a set of messages that can be sent on the wires. When device A has a cable that plugs into device B, device B has a cable that plugs into device C, and device C plugs into a port on the computer, this arrangement is called a daisy chain.
Controller: A controller is a collection of electronics that can operate a port, a bus, or a device. The SCSI bus controller is often implemented as a separate circuit board (a host adapter) that plugs into the computer.
I/O port: An I/O port typically consists of four registers, called the status , control, data-in, and data-out registers.
Polling: Polling is a process by which the host waits for a controller response. It is a looping process: the host reads the status register over and over until the busy bit of the status register becomes clear.
I/O devices can be categorized as follows.
Human Readable devices are suitable for communicating with the computer user. Examples are printers, video display terminals, keyboard etc.
Machine Readable devices are suitable for communicating with electronic equipment. Examples are disk and tape drives, sensors, controllers and actuators.
Communication devices are suitable for communicating with remote devices. Examples are digital line drivers and modems.
Direct Memory Access (DMA):
A special control unit is used to transfer a block of data directly between an external device and main memory, without intervention by the processor. This approach is called Direct Memory Access (DMA). DMA can be used with either polling or interrupt software.
When used with an interrupt, the CPU is notified only after the entire block of data has been transferred. For each byte or word transferred, the DMA controller must provide the memory address and all the bus signals controlling the data transfer. Handshaking between the DMA controller and the device controller is performed via wires carrying the DMA-request and DMA-acknowledge signals.
Device Controllers:
Examples: network card, graphics adapter, disk controller, DVD-ROM controller, serial port, USB controller, sound card.
I/O Softwares:
Interrupts: The CPU hardware has an interrupt-request line, a wire the CPU senses after executing every instruction. When the CPU detects that a controller has put a signal on the interrupt-request line, it saves its state, such as the current value of the instruction pointer, and jumps to the interrupt handler routine at a fixed address. The interrupt handler determines the cause of the interrupt, performs the necessary processing and executes a return-from-interrupt instruction to return the CPU to its prior execution state.
Most CPUs have two interrupt request lines:
non-maskable interrupt - Such kind of interrupts are reserved for events like unrecoverable memory errors.
maskable interrupt - Such interrupts can be switched off by the CPU before the execution of critical instructions that must not be interrupted.
Application I/O Interface: Application I/O Interface represents the structuring techniques and interfaces for the operating system to enable I/O devices to be treated in a standard, uniform way.
Following are the characteristics of I/O interfaces with respect to devices:
Character-stream / block, Sequential / random-access, Synchronous / asynchronous, Sharable / dedicated, Speed of operation, Read-write, read only, or write only
Clocks: Clocks are also called timers. The clock software takes the form of a device driver, though a clock is neither a block device nor a character device. This clock software is the clock driver.
Kernel I/O Subsystem: The kernel I/O subsystem is responsible for providing many services related to I/O, such as scheduling, buffering, caching, spooling, device reservation and error handling.
Device driver: A device driver is a program or routine developed for an I/O device; it implements I/O operations or behaviours for a specific class of devices. In the layered structure of the I/O system, the device driver lies between the interrupt handlers and the device-independent I/O software.
File System:
File: A file is a named collection of related information that is recorded on secondary storage such as magnetic disks, magnetic tapes and optical disks. In general, a file is a sequence of bits, bytes, lines or records whose meaning is defined by the file's creator and user.
File Structure: A file structure is a structure that follows a required format which the operating system can understand.
File Type: File type refers to the ability of the operating system to distinguish different types of file, such as text files, source files and binary files.
Ordinary files: These are the files that contain user information.
Directory files: These files contain list of file names and other information related to these files.
Special files: These files represent physical device like disks, terminals, printers, networks, tape drive etc. These files are of two types - Character special files [terminals or printers] and Block special files [disks and tapes]
File Access Mechanisms: File access mechanism refers to the manner in which the records of a file may be accessed.
Sequential access: The information in the file is processed in order, one record after the other. Example: compilers usually access files in this fashion.
Direct/Random access: Each record has its own address in the file, by which it can be directly accessed for reading or writing. The records need not be in any sequence within the file, nor in adjacent locations on the storage medium.
Indexed sequential access: An index is created for each file which contains pointers to various blocks. Index is searched sequentially and its pointer is used to access the file directly.
Space Allocation: Files are allocated disk space by the operating system.
Contiguous Allocation: Each file occupies a contiguous address space on disk, and the assigned disk addresses are in linear order. External fragmentation is a major issue with this allocation technique.
Linked Allocation: Each file carries a list of links to disk blocks; the directory contains a link/pointer to the first block of a file. Effective for sequential-access files, but inefficient for direct-access files.
Indexed Allocation: Provides solutions to the problems of contiguous and linked allocation. An index block is created holding all the pointers to a file's blocks. Each file has its own index block, which stores the addresses of the disk space occupied by the file; the directory contains the addresses of the index blocks of the files.
Security:
Security refers to providing a protection system to computer system resources such as CPU, memory, disk, software programs and most importantly data/information stored in the computer system.
Authentication:
Authentication refers to identifying each user of the system and associating the executing programs with those users. Operating systems generally identify/authenticate users in the following three ways:
Username / Password, User card/key, User attribute [fingerprint/ eye retina pattern/ signature]
One Time passwords:
In a one-time password system, a unique password is required every time a user tries to log in to the system. Once a one-time password has been used, it cannot be used again. One-time passwords are implemented in various ways:
Random numbers, secret keys (users are provided a hardware device which can create a secret id mapped with the user id), network passwords (some commercial applications send a one-time password to the user on a registered mobile/email).
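The random-number approach can be sketched as follows; the 6-digit code format and the in-memory store are illustrative choices, not a production design.

```python
# A sketch of a random one-time password scheme: the server issues a
# fresh code per login attempt and invalidates it after one use.
import secrets

class OneTimePasswords:
    def __init__(self):
        self._issued = {}                  # user -> currently valid code

    def issue(self, user):
        code = f"{secrets.randbelow(10**6):06d}"   # 6-digit random code
        self._issued[user] = code
        return code                        # sent out of band (SMS / email)

    def verify(self, user, code):
        ok = self._issued.get(user) == code
        if ok:
            del self._issued[user]         # a used code never works again
        return ok
```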
Program Threats:
Operating system processes and the kernel perform their designated tasks as instructed. If a user program makes these processes perform malicious tasks, this is known as a program threat. Following is a list of some well-known program threats.
Trojan Horse - Such a program traps user login credentials and stores them to send to a malicious user, who can later log in to the computer and access system resources.
Trap Door - If a program that is designed to work as required has a security hole in its code and performs illegal actions without the user's knowledge, it is said to have a trap door.
Logic Bomb - A logic bomb is a program that misbehaves only when certain conditions are met; otherwise it works as a genuine program. This makes it harder to detect.
Virus - Viruses, as the name suggests, can replicate themselves on a computer system. They are highly dangerous and can modify or delete user files and crash systems.
System Threats:
System threats refer to the misuse of system services and network connections to put the user in trouble. System threats can be used to launch program threats on a complete network; this is called a program attack. Following is a list of some well-known system threats.
Worm - A worm is a process which can choke down a system's performance by using system resources to extreme levels.
Port Scanning - Port scanning is a mechanism by which a hacker can detect system vulnerabilities in order to attack the system.
Denial of Service - Denial-of-service attacks normally prevent users from making legitimate use of the system.
Computer Security Classifications:
Type A - Highest level. Uses formal design specifications and verification techniques.
Type B - Provides a mandatory protection system. Has all the properties of a class C2 system and attaches a sensitivity label to each object. It is of three types:
B1 - Maintains the security label of each object in the system.
B2 - Extends the sensitivity labels to each system resource, such as storage objects, and supports covert channels and auditing of events.
B3 - Allows creating lists or user groups for access control, to grant or revoke access to a given named object.
Type C - Provides protection and user accountability using audit capabilities. It is of two types:
C1 - Incorporates controls so that users can protect their private information and keep other users from accidentally reading or deleting their data. UNIX versions are mostly C1 class.
C2 - Adds individual-level access control to the capabilities of a C1 level system.
Type D - Lowest level. Minimum protection. MS-DOS falls in this category.
Linux:
Linux is a popular version of the UNIX operating system.
Components of Linux System:
Kernel - The kernel is the core part of Linux. It is responsible for all major activities of this operating system. It consists of various modules and interacts directly with the underlying hardware.
System Library - System libraries are special functions or programs by which application programs or system utilities access the kernel's features. These libraries implement most of the functionality of the operating system and do not require the kernel module's code access rights.
System Utility - System utility programs are responsible for specialized, individual-level tasks.
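As a small illustration of a system library wrapping a kernel feature (this sketch assumes a Unix-like system where the C library's symbols are loadable via `ctypes`): the C library's `getpid()` is a thin wrapper around the kernel's `getpid` system call, and Python's `os.getpid()` ultimately goes through the same path.

```python
import ctypes
import os

# Load the C system library (glibc on most Linux systems).
libc = ctypes.CDLL(None)

# getpid() in the C library wraps the kernel's getpid system call.
pid_via_library = libc.getpid()

# Python's os module is another layer over the same kernel feature.
pid_via_python = os.getpid()
```

Both calls report the same process ID, showing how application code reaches kernel functionality through library layers rather than invoking system calls directly.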
Architecture:
Hardware layer - Hardware consists of all peripheral devices (RAM/HDD/CPU etc.).
Kernel - Core component of the operating system; interacts directly with hardware and provides low-level services to upper-layer components.
Shell - An interface to the kernel, hiding the complexity of the kernel's functions from users. It takes commands from the user and executes the kernel's functions.
Utilities - Utility programs that give the user most of the functionality of an operating system.
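The shell's role above can be sketched as a single step of its read-and-execute loop (hypothetical helper name; a real shell adds parsing, pipes, job control and much more): read a command line, split it into a program name and arguments, and ask the kernel to run the program.

```python
import subprocess

def run_command(command_line):
    # A shell-like step: split the command line into program + arguments,
    # then ask the kernel (via fork/exec, wrapped by subprocess) to run it.
    result = subprocess.run(command_line.split(), capture_output=True, text=True)
    return result.stdout.strip()
```

Wrapping this in a loop that prompts the user, runs each line, and prints the output gives the skeleton of an interactive shell sitting between the user and the kernel.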