Old Wednesday, March 26, 2008
Janeeta's Avatar
Join Date: Dec 2006
Location: Karachi
Posts: 96
Thanks: 26
Thanked 120 Times in 39 Posts
Janeeta is on a distinguished road
Default operating system

Types of Operating systems

Within the broad family of operating systems, there are generally four types, categorised based on the types of computers they control and the sort of applications they support. The broad categories are:

Real-time operating systems: They are used to control machinery, scientific instruments and industrial systems. An RTOS typically has very little user-interface capability, and no end-user utilities, since the system will be a sealed box when delivered for use. A very important part of an RTOS is managing the resources of the computer so that a particular operation executes in precisely the same amount of time every time it occurs. In a complex machine, having a part move more quickly just because system resources are available may be just as catastrophic as having it not move at all because the system is busy.

Single-user, single-tasking operating system: As the name implies, this operating system is designed to manage the computer so that one user can effectively do one thing at a time. The Palm OS for Palm handheld computers is a good example of a modern single-user, single-tasking operating system.

Single-user, multi-tasking operating system: This is the type of operating system most people use on their desktop and laptop computers today. Windows 98 and the Mac OS are both examples of an operating system that will let a single user have several programs in operation at the same time. For example, it's entirely possible for a Windows user to be writing a note in a word processor while downloading a file from the Internet while printing the text of an e-mail message.

Multi-user operating systems: A multi-user operating system allows many different users to take advantage of the computer's resources simultaneously. The operating system must make sure that the requirements of the various users are balanced, and that each of the programs they are using has sufficient and separate resources so that a problem with one user doesn't affect the entire community of users. Unix, VMS, and mainframe operating systems, such as MVS, are examples of multi-user operating systems. It's important to differentiate here between multi-user operating systems and single-user operating systems that support networking. Windows 2000 and Novell Netware can each support hundreds or thousands of networked users, but the operating systems themselves aren't true multi-user operating systems. The system administrator is the only user for Windows 2000 or Netware; the network support and all the remote user logins the network enables are, in the overall plan of the operating system, programs being run by the administrative user.
Old Wednesday, March 26, 2008

An operating system exists to manage all the pieces of a complex system. Imagine what would happen if three programs running on a computer all tried to print their output simultaneously on the same printer: the first few lines might come from program one, the next few from program two, and so on, resulting in chaos. The operating system can bring order to this potential chaos by buffering all the output destined for the printer on the disk.

The primary task of an operating system is to keep track of who is using which resource, to grant resource requests, to account for usage, and to mediate conflicting requests from different programs and users.

When a computer has multiple users, the need for managing and protecting memory, input/output devices, and other resources is even more apparent. This need arises because it is usually necessary to share expensive resources such as tape drives and phototypesetters.

Operating systems perform basic tasks, such as recognizing input from the keyboard, sending output to the display screen, keeping track of files and directories on the disk, and controlling peripheral devices such as disk drives and printers.

For large systems, the operating system has even greater responsibilities and powers. It is like a traffic cop: it makes sure that different programs and users running at the same time do not interfere with each other. The operating system is also responsible for security, ensuring that unauthorized users do not access the system.

The set of software products that jointly controls the system's resources, and the programs using those resources, is known as the operating system.

Examples are Unix, Windows.
Old Wednesday, March 26, 2008

Although a thread must execute within a process, a process and its associated threads are different concepts: processes are used to group resources together, while threads are the entities scheduled for execution on the CPU.

A thread is a single sequential stream of execution within a process. Because threads have some of the properties of processes, they are sometimes called lightweight processes. Threads allow multiple streams of execution within a process, and in many respects they are a popular way to improve application performance through parallelism. The CPU switches rapidly back and forth among the threads, giving the illusion that the threads are running in parallel. Like a traditional process (i.e., a process with one thread), a thread can be in any of several states (Running, Blocked, Ready, or Terminated). Each thread has its own stack: since a thread will generally call different procedures, and thus have a different execution history, it needs a stack of its own. In an operating system with a thread facility, the basic unit of CPU utilization is a thread. A thread consists of a program counter (PC), a register set, and a stack space. Threads are not independent of one another the way processes are: a thread shares with the other threads of its process the code section, the data section, and OS resources (collectively known as a task), such as open files and signals.

Processes Vs Threads

As mentioned earlier, in many respects threads operate in the same way as processes. Some of the similarities and differences are:

  • Like processes, threads share the CPU, and only one thread is active (running) at a time.
  • Like processes, threads within a process execute sequentially.
  • Like processes, threads can create children.
  • And like processes, if one thread is blocked, another thread can run.


  • Unlike processes, threads are not independent of one another.
  • Unlike processes, all threads can access every address in the task.
  • Unlike processes, threads are designed to assist one another. Note that processes might or might not assist one another, because processes may originate from different users.

Why Threads?

Following are some reasons why threads are used in designing operating systems.
  • A process with multiple threads makes a great server, for example a print server.
  • Because threads can share common data, they do not need to use interprocess communication.
  • By their very nature, threads can take advantage of multiprocessors.
  • Threads are cheap, in the sense that:
    - They need only a stack and storage for registers, and are therefore cheap to create.
    - They use very little of the operating system's resources: threads do not need a new address space, global data, program code, or other operating system resources.

Context switching is fast when working with threads, because only the PC, SP, and registers have to be saved and/or restored.

But this cheapness does not come for free: the biggest drawback is that there is no protection between threads.
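The sharing described above can be seen in a short sketch. Python's threading module is used here purely as an illustration (the original text is not tied to any language): every thread writes into the same list because all threads of a process share one address space, so no interprocess communication is needed.

```python
import threading

results = []                 # shared data section: visible to every thread in the process
lock = threading.Lock()      # without this, concurrent appends could interleave unsafely

def worker(n):
    # each thread runs in the same address space, so no IPC is needed
    with lock:
        results.append(n * n)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))       # [0, 1, 4, 9]
```

Note that the lock is exactly the "protection between threads" that the operating system does not supply: the threads must cooperate voluntarily.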

User-Level Threads

User-level threads are implemented in user-level libraries, rather than via system calls, so thread switching does not need to call the operating system or cause an interrupt to the kernel. In fact, the kernel knows nothing about user-level threads and manages the containing processes as if they were single-threaded.


The most obvious advantage of this technique is that a user-level threads package can be implemented on an operating system that does not support threads. Some other advantages are:
  • User-level threads do not require modifications to the operating system.

Simple Representation:

Each thread is represented simply by a PC, registers, stack and a small control block, all stored in the user process address space.

Simple Management:

This simply means that creating a thread, switching between threads and synchronization between threads can all be done without intervention of the kernel.

Fast and Efficient:

Thread switching is not much more expensive than a procedure call.


Disadvantages:

There is a lack of coordination between threads and the operating system kernel. The process as a whole therefore gets one time slice, irrespective of whether it contains one thread or 1000 threads, and it is up to each thread to relinquish control to the other threads.

User-level threads require non-blocking system calls, i.e., a multithreaded kernel. Otherwise, the entire process will block in the kernel even if there are runnable threads left in the process. For example, if one thread causes a page fault, the whole process blocks.

Kernel-Level Threads

In this method, the kernel knows about and manages the threads. No runtime system is needed in this case. Instead of a thread table in each process, the kernel has a thread table that keeps track of all threads in the system. In addition, the kernel maintains the traditional process table to keep track of processes. The operating system kernel provides system calls to create and manage threads.


Because the kernel has full knowledge of all threads, the scheduler may decide to give more time to a process with a large number of threads than to a process with a small number of threads.

Kernel-level threads are especially good for applications that frequently block.


Disadvantages:

Kernel-level threads are slow and inefficient; thread operations are hundreds of times slower than with user-level threads. Since the kernel must manage and schedule threads as well as processes, it requires a full thread control block (TCB) for each thread to maintain information about that thread. As a result, there is significant overhead and increased kernel complexity.

Advantages of Threads over Multiple Processes

Context Switching: Threads are very inexpensive to create and destroy, and they are inexpensive to represent. For example, they require space to store the PC, the SP, and the general-purpose registers, but they do not require space for memory-management information, information about open files or I/O devices in use, etc. With so little context, it is much faster to switch between threads; in other words, a context switch is relatively cheap with threads.

Sharing: Threads allow the sharing of many resources that cannot be shared between processes, for example the code section, the data section, and operating system resources such as open files.

Disadvantages of Threads over Multiprocesses

Blocking: The major disadvantage is that if the kernel is single-threaded, a system call by one thread will block the whole process, and the CPU may be idle during the blocking period.

Security: Since there is extensive sharing among threads, there is a potential security problem. It is quite possible for one thread to overwrite the stack of another thread (or damage shared data), although this is very unlikely since threads are meant to cooperate on a single task.

Applications that Benefit from Threads

A proxy server satisfying requests for a number of computers on a LAN would benefit from a multi-threaded process. In general, any program that has to do more than one task at a time could benefit from multitasking. For example, a program that reads input, processes it, and writes output could have three threads, one for each task.

Applications that Cannot Benefit from Threads

Any sequential process that cannot be divided into parallel tasks will not benefit from threads, as the tasks would block until the previous one completes. For example, a program that displays the time of day would not benefit from multiple threads.
Old Wednesday, March 26, 2008
Default Process

Process State

The process state consists of everything necessary to resume the process's execution if it is somehow put aside temporarily. It consists of at least the following:

  • Code for the program.
  • Program's static data.
  • Program's dynamic data.
  • Program's procedure call stack.
  • Contents of general-purpose registers.
  • Contents of the program counter (PC).
  • Contents of the program status word (PSW).
  • Operating system resources in use.

Process state:

A process goes through a series of discrete process states.
New State: The process being created.

Running State: A process is said to be running if it has the CPU, that is, if it is actually using the CPU at that particular instant.

Blocked (or Waiting) State: A process is said to be blocked if it is waiting for some event to happen, such as an I/O completion, before it can proceed. Note that a blocked process is unable to run until some external event happens.

Ready State: A process is said to be ready if it could use the CPU were one available. A ready process is runnable but temporarily stopped to let another process run.

Terminated state: The process has finished execution.

Process table

To keep track of all the processes running on the system, the operating system maintains a process table.

The process table is a very simple data structure: a large array of structures, each containing several pieces of data. Some of the pieces of data associated with a process include:

  • The last values observed in the registers while the process was running.
  • The last value observed in the program counter while the process was running.
  • The state of the process (is it ready to go?).
  • The current working directory of the process.
  • The user ID of the user running the process.
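As a sketch, one process-table entry could be modelled like this in Python. The field names are illustrative only, not taken from any real kernel:

```python
from dataclasses import dataclass

@dataclass
class ProcessTableEntry:
    pid: int                  # process identifier
    registers: dict           # last values observed in the registers
    program_counter: int      # last value observed in the PC
    state: str                # "new", "ready", "running", "blocked", or "terminated"
    cwd: str                  # current working directory
    uid: int                  # user ID of the owning user

# the process table itself: simply an array of such structures
process_table = [
    ProcessTableEntry(pid=1, registers={"ax": 0}, program_counter=0x400,
                      state="ready", cwd="/", uid=0),
]
print(process_table[0].state)   # ready
```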
Old Wednesday, March 26, 2008
Default IPC (Interprocess Communication)


Interprocess communication (IPC) is a set of programming interfaces that allow a programmer to coordinate activities among different program processes that can run concurrently in an operating system. This allows a program to handle many user requests at the same time. Since even a single user request may result in multiple processes running in the operating system on the user's behalf, the processes need to communicate with each other. The IPC interfaces make this possible. Each IPC method has its own advantages and limitations so it is not unusual for a single program to use all of the IPC methods.

Race Conditions

In operating systems, processes that work together often share some common storage (main memory, a file, etc.) that each process can read and write. When two or more processes read or write shared data and the final result depends on exactly who runs precisely when, the situation is called a race condition. Concurrently executing threads that share data must synchronize their operations in order to avoid race conditions on that data: only one thread at a time should be allowed to examine and update a shared variable.

Race conditions are also possible inside the operating system itself. If the ready queue is implemented as a linked list, and the list is being manipulated during the handling of an interrupt, then interrupts must be disabled to prevent another interrupt from arriving before the first one completes. If interrupts are not disabled, the linked list could become corrupted.
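The classic lost-update race can be forced deterministically in a short Python sketch by separating the read and the write of a shared variable; the sleep stands in for an ill-timed context switch (the bank-deposit scenario is an illustrative example, not from the text above):

```python
import threading
import time

balance = 0  # shared variable

def deposit(amount):
    global balance
    current = balance              # read the shared variable
    time.sleep(0.1)                # ill-timed "context switch": the other thread reads 0 too
    balance = current + amount     # write back a stale value

t1 = threading.Thread(target=deposit, args=(50,))
t2 = threading.Thread(target=deposit, args=(50,))
t1.start(); t2.start()
t1.join(); t2.join()
print(balance)  # 50, not 100: one deposit was lost to the race
```

The final result depends on who runs precisely when, which is exactly the definition of a race condition above.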

Critical Section
How to avoid race conditions

The key to preventing trouble involving shared storage is to find some way to prohibit more than one process from reading and writing the shared data simultaneously. The part of the program where shared memory is accessed is called the Critical Section. To avoid race conditions and flawed results, one must identify the code in each thread that forms a critical section. The characteristic properties of such code are:
  • Code that references one or more variables in a "read-update-write" fashion while any of those variables is possibly being altered by another thread.
  • Code that alters one or more variables that are possibly being referenced in a "read-update-write" fashion by another thread.
  • Code that uses a data structure while any part of it is possibly being altered by another thread.
  • Code that alters any part of a data structure while it is possibly in use by another thread.

Here, the important point is that when one process is executing in its critical section on shared modifiable data, no other process is allowed to execute in its own critical section. Thus, the execution of critical sections by the processes is mutually exclusive in time.

Mutual Exclusion
A way of making sure that if one process is using shared modifiable data, the other processes will be excluded from doing the same thing.

Formally, while one process is executing on the shared variable, all other processes desiring to do so at that moment should be kept waiting; when that process has finished executing on the shared variable, one of the waiting processes should be allowed to proceed. In this fashion, each process executing on the shared data excludes all others from doing so simultaneously. This is called Mutual Exclusion.

Note that mutual exclusion needs to be enforced only when processes access shared modifiable data - when processes are performing operations that do not conflict with one another they should be allowed to proceed concurrently.
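A mutual-exclusion primitive makes the lost-update problem go away. The sketch below uses Python's threading.Lock as one possible mechanism (any mutex would do): every read-update-write of the shared counter happens inside a critical section, so no update is lost.

```python
import threading

counter = 0
mutex = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with mutex:        # enter critical section: only one thread at a time
            counter += 1   # read-update-write of shared modifiable data
                           # lock released on leaving the with-block

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000: no updates lost
```

Note that the threads proceed concurrently everywhere except inside the critical section, matching the point above that mutual exclusion is enforced only where shared modifiable data is touched.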

Mutual Exclusion Conditions
If we could arrange matters such that no two processes were ever in their critical sections simultaneously, we could avoid race conditions. We need four conditions to hold to have a good solution for the critical section problem (mutual exclusion).
  • No two processes may be simultaneously inside their critical sections.
  • No assumptions may be made about the relative speeds of processes or the number of CPUs.
  • No process outside its critical section should block other processes.
  • No process should have to wait arbitrarily long to enter its critical section.


Semaphores

A semaphore is a protected variable whose value can be accessed and altered only by the operations P and V, plus an initialization operation.

Binary semaphores can assume only the value 0 or the value 1; counting semaphores (also called general semaphores) can assume any nonnegative value.

The P (or wait, sleep, or down) operation on semaphore S, written as P(S) or wait(S), operates as follows:

P(S): IF S > 0
THEN S := S - 1
ELSE (wait on S)

The V (or signal or wakeup or up) operation on semaphore S, written as V(S) or signal (S), operates as follows:

V(S): IF (one or more processes are waiting on S)
THEN (let one of these processes proceed)
ELSE S := S +1
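The P and V pseudocode above can be realized directly. The sketch below is one possible Python implementation, built on a condition variable (the class name and demo are illustrative; real systems implement this in the kernel):

```python
import threading

class Semaphore:
    """Counting semaphore exposing the classic P (wait/down) and V (signal/up)."""

    def __init__(self, value=1):
        self._value = value                  # nonnegative counter S
        self._cond = threading.Condition()

    def P(self):
        # P(S): IF S > 0 THEN S := S - 1 ELSE wait on S
        with self._cond:
            while self._value == 0:
                self._cond.wait()
            self._value -= 1

    def V(self):
        # V(S): let a waiting process proceed if any, ELSE S := S + 1
        # (incrementing and then notifying one waiter has the same net effect)
        with self._cond:
            self._value += 1
            self._cond.notify()

# a binary semaphore (initial value 1) used for mutual exclusion
s = Semaphore(1)
shared = []

def worker(n):
    s.P()
    shared.append(n)   # critical section
    s.V()

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(shared))  # [0, 1, 2]
```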
Old Thursday, March 27, 2008
Default Scheduling Techniques

CPU Scheduler

Whenever the CPU becomes idle, it is the job of the CPU Scheduler ( a.k.a. the short-term scheduler ) to select another process from the ready queue to run next.

The storage structure for the ready queue and the algorithm used to select the next process are not necessarily a FIFO queue. There are several alternatives to choose from, as well as numerous adjustable parameters for each algorithm.

The assignment of physical processors to processes allows processors to accomplish work. The problem of determining when processors should be assigned and to which processes is called processor scheduling or CPU scheduling.

When more than one process is runnable, the operating system must decide which one to run first. The part of the operating system concerned with this decision is called the scheduler, and the algorithm it uses is called the scheduling algorithm.

Goals of Scheduling (Objectives)

Many objectives must be considered in the design of a scheduling discipline. In particular, a scheduler should consider fairness, efficiency, response time, turnaround time, throughput, etc. Some of these goals depend on the system in use, for example a batch system, an interactive system, or a real-time system, but there are also some goals that are desirable in all systems.

General Goals

Fairness

Fairness is important under all circumstances. A scheduler makes sure that each process gets its fair share of the CPU and that no process suffers indefinite postponement. Note that giving equivalent or equal time is not necessarily fair; think of safety control versus payroll at a nuclear plant.

Policy Enforcement
The scheduler has to make sure that system's policy is enforced. For example, if the local policy is safety then the safety control processes must be able to run whenever they want to, even if it means delay in payroll processes.

Efficiency

The scheduler should keep the system (in particular, the CPU) busy one hundred percent of the time when possible. If the CPU and all the input/output devices can be kept running all the time, more work gets done per second than if some components are idle.

Response Time

A scheduler should minimize the response time for interactive users.

Turnaround

A scheduler should minimize the time batch users must wait for output.

Throughput

A scheduler should maximize the number of jobs processed per unit time.

A little thought will show that some of these goals are contradictory. It can be shown that any scheduling algorithm that favors some class of jobs hurts another class of jobs. The amount of CPU time available is finite, after all.

Preemptive vs. Nonpreemptive Scheduling

CPU scheduling decisions take place under one of four conditions:

  • When a process switches from the running state to the waiting state, such as for an I/O request or invocation of the wait( ) system call.
  • When a process switches from the running state to the ready state, for example in response to an interrupt.
  • When a process switches from the waiting state to the ready state, say at completion of I/O or a return from wait( ).
  • When a process terminates.

  1. For conditions 1 and 4 there is no choice: a new process must be selected.
  2. For conditions 2 and 3 there is a choice: either continue running the current process, or select a different one.
  3. If scheduling takes place only under conditions 1 and 4, the system is said to be non-preemptive, or cooperative. Under these conditions, once a process starts running it keeps running until it either voluntarily blocks or finishes. Otherwise the system is said to be preemptive.
  4. Windows used non-preemptive scheduling up to Windows 3.x, and started using pre-emptive scheduling with Win95. Macs used non-preemptive scheduling prior to OSX, and pre-emptive scheduling since then. Note that pre-emptive scheduling is only possible on hardware that supports a timer interrupt.
  5. Note that pre-emptive scheduling can cause problems when two processes share data, because one process may get interrupted in the middle of updating shared data structures.
  6. Preemption can also be a problem if the kernel is busy implementing a system call (e.g. updating critical kernel data structures) when the preemption occurs. Most modern UNIXes deal with this problem by making the process wait until the system call has either completed or blocked before allowing the preemption. Unfortunately this solution is problematic for real-time systems, as real-time response can no longer be guaranteed.
  7. Some critical sections of code protect themselves from concurrency problems by disabling interrupts before entering the critical section and re-enabling interrupts on exiting the section. Needless to say, this should only be done in rare situations, and only on very short pieces of code that will finish quickly (usually just a few machine instructions).

First-Come First-Serve Scheduling, FCFS
FCFS is very simple - Just a FIFO queue, like customers waiting in line at the bank or the post office or at a copying machine.

Unfortunately, however, FCFS can yield some very long average wait times, particularly if the first process to get there takes a long time. For example, consider the following three processes:

Process Burst Time
P1 24
P2 3
P3 3

The average waiting time for the three processes is ( 0 + 24 + 27 ) / 3 = 17.0 ms
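The arithmetic can be checked with a few lines of Python (a sketch assuming all three processes arrive at time 0, in the order shown):

```python
def fcfs_wait_times(bursts):
    """Waiting time of each process under FCFS: each waits for all earlier bursts."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)   # this process waited for everything before it
        elapsed += burst
    return waits

waits = fcfs_wait_times([24, 3, 3])   # P1, P2, P3
print(waits)                          # [0, 24, 27]
print(sum(waits) / len(waits))        # 17.0 ms average
```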

Shortest-Job-First Scheduling, SJF
The idea behind the SJF algorithm is to pick the smallest, quickest job that needs to be done, get it out of the way first, and then pick the next smallest job to do next.

( Technically this algorithm picks a process based on the next shortest CPU burst, not the overall process time. )

Process Burst Time
P1 6
P2 8
P3 7
P4 3

In the case above the average wait time is ( 0 + 3 + 9 + 16 ) / 4 = 7.0 ms, ( as opposed to 10.25 ms for FCFS for the same processes. )
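Under the same assumptions (simultaneous arrival, known burst lengths), SJF is simply the FCFS arithmetic applied in sorted order; a sketch:

```python
def sjf_wait_times(bursts):
    """Waiting times under non-preemptive SJF: run the shortest burst first."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])  # shortest first
    waits, elapsed = [0] * len(bursts), 0
    for i in order:
        waits[i] = elapsed      # process i waited for all shorter bursts run before it
        elapsed += bursts[i]
    return waits

waits = sjf_wait_times([6, 8, 7, 3])   # P1..P4
print(waits)                           # [3, 16, 9, 0] -- P4 runs first, P2 last
print(sum(waits) / len(waits))         # 7.0 ms average
```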

Priority Scheduling

Priority scheduling is a more general case of SJF, in which each job is assigned a priority and the job with the highest priority gets scheduled first. ( SJF uses the inverse of the next expected burst time as its priority - The smaller the expected burst, the higher the priority. )

Note that in practice, priorities are implemented using integers within a fixed range, but there is no agreed-upon convention as to whether "high" priorities use large numbers or small numbers. Here, low numbers are used for high priorities, with 0 being the highest possible priority.

Process Burst Time Priority
P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2

Priorities can be assigned either internally or externally. Internal priorities are assigned by the OS using criteria such as average burst time, ratio of CPU to I/O activity, system resource use, and other factors available to the kernel. External priorities are assigned by users, based on the importance of the job, fees paid, politics, etc.

Priority scheduling can be either preemptive or non-preemptive.

Priority scheduling can suffer from a major problem known as indefinite blocking, or starvation, in which a low-priority task can wait forever because there are always some other jobs around that have higher priority.

  • If this problem is allowed to occur, then processes will either run eventually when the system load lightens (at, say, 2 a.m.), or will eventually be lost when the system is shut down or crashes. (There are rumors of jobs that have been stuck for years.)
  • One common solution to this problem is aging, in which priorities of jobs increase the longer they wait. Under this scheme a low-priority job will eventually get its priority raised high enough that it gets run.

Round Robin Scheduling
  • Round robin scheduling is similar to FCFS scheduling, except that each CPU burst is assigned a limit called the time quantum.
  • When a process is given the CPU, a timer is set for whatever value has been set for a time quantum.
  1. If the process finishes its burst before the time quantum expires, then it is swapped out of the CPU just as in the normal FCFS algorithm.
  2. If the timer goes off first, then the process is swapped out of the CPU and moved to the back of the ready queue.
  • The ready queue is maintained as a circular queue, so when all processes have had a turn, the scheduler gives the first process another turn, and so on.
  • RR scheduling can give the effect of all processes sharing the CPU equally, although the average wait time can be longer than with other scheduling algorithms. In the following example the average wait time is 5.66 ms.

Process Burst Time
P1 24
P2 3
P3 3
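The 5.66 ms figure can be reproduced by simulating the circular ready queue. The quantum is not stated above, so this sketch assumes the common textbook value of 4 ms, which does reproduce that average:

```python
from collections import deque

def rr_wait_times(bursts, quantum):
    """Waiting times under round robin, all processes arriving at time 0."""
    remaining = list(bursts)
    finish = [0] * len(bursts)
    queue, clock = deque(range(len(bursts))), 0
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)          # unfinished: back of the circular ready queue
        else:
            finish[i] = clock
    # with arrival at time 0, wait = completion time - burst time
    return [finish[i] - bursts[i] for i in range(len(bursts))]

waits = rr_wait_times([24, 3, 3], quantum=4)
print(waits)                          # [6, 4, 7]
print(sum(waits) / len(waits))        # 5.666... ms average
```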

Multilevel Queue Scheduling

When processes can be readily categorized, then multiple separate queues can be established, each implementing whatever scheduling algorithm is most appropriate for that type of job, and/or with different parametric adjustments.

Scheduling must also be done between queues, that is scheduling one queue to get time relative to other queues. Two common options are strict priority ( no job in a lower priority queue runs until all higher priority queues are empty ) and round-robin ( each queue gets a time slice in turn, possibly of different sizes. )

Note that under this algorithm jobs cannot switch from queue to queue: once they are assigned a queue, that is their queue until they finish.

Multilevel Feedback-Queue Scheduling

Multilevel feedback queue scheduling is similar to the ordinary multilevel queue scheduling described above, except jobs may be moved from one queue to another for a variety of reasons:
  • If the characteristics of a job change between CPU-intensive and I/O intensive, then it may be appropriate to switch a job from one queue to another.
  • Aging can also be incorporated, so that a job that has waited for a long time can get bumped up into a higher-priority queue for a while.

Multilevel feedback queue scheduling is the most flexible, because it can be tuned for any situation. But it is also the most complex to implement because of all the adjustable parameters. Some of the parameters which define one of these systems include:
  • The number of queues.
  • The scheduling algorithm for each queue.
  • The methods used to upgrade or demote processes from one queue to another. ( Which may be different. )
  • The method used to determine which queue a process enters initially.
Old Thursday, March 27, 2008
Default Segmentation

Segmented memory management

In a segmented memory management system the blocks to be replaced in main memory are potentially of unequal length and correspond to program and data "segments." A program segment might be, for example, a subroutine or procedure. A data segment might be a data structure or an array. In both cases, segments correspond to logical blocks of code or data. Segments, then, are "atomic," in the sense that either the whole segment should be in main memory, or none of it should be there. The segments may be placed anywhere in main memory, but the instructions or data in one segment should be contiguous.

Using segmented memory management, the memory controller needs to know where each segment starts and ends in physical memory. When segments are replaced, a single segment can only be replaced by a segment of the same size, or by a smaller segment. Over time this results in "memory fragmentation": many small segments reside in memory, with small gaps between them. Because the probability that two adjacent segments can be replaced simultaneously is quite low, large segments may not get a chance to be placed in memory very often. In systems with segmented memory management, segments are therefore occasionally "pushed together" to limit the amount of fragmentation and allow large segments to be loaded.

This organization appears to be efficient because an entire block of code is available to the processor. Also, it is easy for two processes to share the same code in a segmented memory system; if the same procedure is used by two processes concurrently, there need only be a single copy of the code segment in memory. (Each process would maintain its own, distinct data segment for the code to access, however.)

Segmented memory management is not as popular as paged memory management, however. In fact, most processors which presently claim to support segmented memory management actually support a hybrid of paged and segmented memory management, where the segments consist of multiples of fixed size blocks.

In a segmented system, the blocks of memory are variable in size. Each process may have one or more segments. The segments may be visible to the process, that is, it may be aware that there are code, data, stack, and heap segments (whereas pages are usually transparent). A segment table is used that specifies the base of the physical addresses associated with a segment and the valid range. The combination of this base and the offset specified by the address means that two words must be used by the MMU to identify the physical address.
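The base-and-limit lookup can be sketched in a few lines. The table layout below is a hypothetical example, not a real MMU's format: each entry holds a physical base and the largest valid offset, and an out-of-range offset traps.

```python
# Hypothetical segment table: segment number -> (physical base, limit).
SEGMENT_TABLE = {0: (0x4000, 0x0FFF),   # code segment
                 1: (0x8000, 0x01FF)}   # data segment

def translate(segment, offset):
    """Translate a (segment, offset) pair to a physical address."""
    base, limit = SEGMENT_TABLE[segment]
    if offset > limit:
        # access beyond the segment boundary is trapped
        raise MemoryError("segmentation fault: offset beyond segment limit")
    return base + offset

print(hex(translate(0, 0x10)))  # → 0x4010
```

This is the "two words" the text mentions: the MMU needs both the base (to add) and the limit (to check) for every reference.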

One advantage to the operating system is that once a segment has been allocated, there won't be any access faults from the process except those that actually try to access beyond the segment boundaries. Thus, once a process has started, it can run at memory rates until it returns control to the OS. Of course, this leads to the problem that if a segment is large, the time for a context switch can be excessive.

An access outside of the segment is detected as exceeding the valid range, and is trapped. Segmented virtual addressing is otherwise similar to paging. One problem with pure segmentation is external fragmentation (holes in the address map following a series of allocations and deallocations) that can lead to low memory utilization. External fragmentation is also called checkerboarding, and its correction requires a phase of memory compaction that reduces processing efficiency.
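The compaction phase mentioned above can be sketched as sliding every segment down to the bottom of memory so the checkerboard holes merge into one free region. Segment names and sizes here are made up for illustration.

```python
def compact(segments):
    """segments: list of (name, base, size), possibly with gaps between them.
    Returns the relocated segments and the base of the single free region."""
    next_base = 0
    relocated = []
    for name, _old_base, size in sorted(segments, key=lambda s: s[1]):
        relocated.append((name, next_base, size))  # slide the segment down
        next_base += size                          # segments stay contiguous
    return relocated, next_base

segs = [("A", 0, 100), ("B", 300, 50), ("C", 500, 200)]
print(compact(segs))
# → ([('A', 0, 100), ('B', 100, 50), ('C', 150, 200)], 350)
```

Note that every moved segment's addresses change, which is why compaction requires updating the segment table and costs processing time.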

Advantages of segmentation over paging:
  • Speed. Reloading segment registers to change address spaces is much faster than switching page tables.
  • Segment descriptor tables consume less memory than page tables.
  • x86 page table entries do not have an 'Executable' bit. With segmentation, you can make a region of memory executable (code) or not (data).
  • Segment size can be byte-granular (size 1 byte to 1Meg in units of 1 byte); pages are always page-granular (size 4K to 4Gig in units of 4K).
  • Segmentation lets you make the segment as large as necessary, with no excess (there is no internal fragmentation).

Advantages of paging over segmentation:
  • Page-based address translation turns non-contiguous physical addresses into contiguous virtual addresses (the external fragmentation of free memory does not matter).
  • Many CPUs support paging; only a few support segmentation.
  • Some things are easy to do with paging but hard to do with segmentation, unless you use multiple segments:
  1. Efficient support for sparse address spaces. You can have large 'holes' in the virtual address space, like the hole between the top of the heap and the top of the stack.
  2. Shared memory between tasks
  • Some things are easier to do with paging because an x86 page fault, unlike a general protection fault, stores the faulting address in register CR2:
  1. Demand-loading. No page of memory is allocated for a task until the task actually accesses the memory. This prevents a heavy load on the CPU when a task first starts up. It also conserves RAM.
  2. Memory-mapped files. This lets you read and write a file by reading and writing memory locations.
  • Swapping. If RAM runs low, the kernel can copy pages that haven't been accessed recently to a swapfile on the disk, to free RAM for more active tasks.
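The first paging advantage above — contiguous virtual addresses backed by scattered physical frames — can be sketched with a toy page table (the mappings are invented for illustration; a missing entry would correspond to a page fault).

```python
PAGE_SIZE = 4096

# Toy page table: virtual page number -> physical frame number.
# Pages 0, 1, 2 are contiguous virtually but scattered physically.
PAGE_TABLE = {0: 7, 1: 2, 2: 9}

def translate(vaddr):
    """Split a virtual address into page and offset, then look up the frame."""
    page, offset = divmod(vaddr, PAGE_SIZE)
    frame = PAGE_TABLE[page]          # KeyError here would model a page fault
    return frame * PAGE_SIZE + offset

print(translate(4097))  # virtual page 1, offset 1 -> frame 2 -> 8193
```

Because translation happens per fixed-size page, the external fragmentation of physical memory is invisible to the process.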
Old Thursday, March 27, 2008
Default File management

File management system

What is a file?

A file is a collection of data that is normally stored on a secondary storage device such as a hard disk or floppy diskette.

What are the typical operations performed on files?
An operating system must provide a number of operations associated with files so that users can safely store and retrieve data.

Typical operations are
  • Open
  • Close
  • Create
  • Copy
  • Rename
  • List
In addition, operations on single data elements within a file are supported by
  • Read
  • Write
  • Seek

What are File Control Blocks?

File control blocks (FCB), sometimes referred to as file descriptors, are data structures that hold information about a file. When an operating system needs to access a file, it creates an associated file control block to manage the file.

The structure of the file control block differs between operating systems, but most file control blocks include the following parts
  • Filename
  • Location of file on secondary storage
  • Length of file
  • Date and time of creation or last access
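As a plain record, such a block might look like the following. The field names are illustrative — real FCB layouts differ between operating systems, as the text notes.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class FileControlBlock:
    filename: str
    location: int          # e.g. first cluster on secondary storage
    length: int            # length of the file in bytes
    created: datetime      # date and time of creation
    last_access: datetime  # date and time of last access

fcb = FileControlBlock("notes.txt", location=42, length=2048,
                       created=datetime(2008, 3, 27, 9, 0),
                       last_access=datetime(2008, 3, 27, 9, 0))
print(fcb.filename, fcb.length)  # → notes.txt 2048
```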

What about how we name files?
Each operating system uses a specific convention or practice for naming files.

MS-DOS Uses eight-character file names, a dot, then a three-character extension that denotes the type of file. Filenames are not case-sensitive.

UNIX Filenames can be up to 254 characters long and are case-sensitive.

Windows Filenames can be up to 255 characters long and are not case-sensitive.

What are file types?
File types refer to classifying the content of a file, for example as a text file, executable program or data file.

In Windows operating systems, the file type is derived from the filename extension. Typical file types and their extensions are

File Extension File Type
.bas basic source program
.c c source program
.dll system library
.doc Word document
.exe executable program
.txt text file

Windows associates applications (programs) with specific file types. For example, the default application that opens to process a file of type .txt is the Notepad editor.

How does an operating system keep track of files?
The hard disk consists of a large number of sequentially numbered sectors. As files are created, free sectors are allocated to hold the file contents and marked as allocated.

To keep track of the sectors and whether they are allocated or free, and to which file they belong, the operating system maintains a number of tables.

What is a root file system?
When the operating system is first installed, it creates a root file system on the disk that specifies how many sectors are available and how they will be allocated.

The root file system is a table of entries like a directory. In general, this is a fixed size, and once full, no more entries can be added.

Each entry can be either a file or another directory table.

What does a root file system entry look like?
This is highly operating system specific, but an entry might look like,
  • Name of file
  • Beginning cluster number
  • Length of file in bytes
  • Type of file
  • Creation date and date last modified
  • File permissions (an access-control list)

What is a cluster?
To make things a little simpler than managing a large number of sectors, the operating system groups sectors together into a minimum allocation unit called a cluster. When a request to create a file occurs, the operating system allocates one cluster at a time until all the data is stored. This raises a question.

How are all the clusters of a file linked together?
The clusters of a file are linked together in a chain, with the last cluster marked to signify that there are no more clusters allocated to the file.

One of the problems of using clusters as a minimum storage allocation unit is the wastage of space. Consider a cluster allocation of two sectors, each sector storing 1024 bytes (or characters). This means a minimum storage allocation of 2048 bytes. If you stored a file containing the phrase “Hello”, then this would result in 2043 unused bytes in the cluster (most operating systems store the length of the file, so there is no need to use an end of file marker, which would occupy an additional byte).
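The arithmetic above generalises to any file size. A minimal sketch, assuming the same two-sector, 1024-byte-sector layout:

```python
SECTOR_SIZE = 1024
SECTORS_PER_CLUSTER = 2
cluster_size = SECTOR_SIZE * SECTORS_PER_CLUSTER   # 2048-byte minimum allocation

def wasted(file_size):
    """Bytes of slack in the last cluster of a file of `file_size` bytes."""
    used = -(-file_size // cluster_size) * cluster_size  # round up to whole clusters
    return used - file_size

print(wasted(len("Hello")))  # 5-byte file in a 2048-byte cluster → 2043
```

A file that exactly fills its clusters wastes nothing, while a file one byte over wastes almost a whole cluster — which is the trade-off the next paragraph weighs against smaller allocation units.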

You might consider that a smaller allocation size based on the size of a sector would be more efficient. However, smaller cluster sizes are more complex to manage and their tables take up more space (the table becomes larger and it takes more time to go through all the entries).

How is free space managed?
The operating system can maintain a table of cluster entries, and mark each cluster as either free or allocated. This was a technique used in the MS-DOS operating system.

Other operating systems maintain a linked list of free clusters, each free cluster pointing to the next free cluster. As clusters are allocated, they are removed from the free cluster list. When a file is deleted, the clusters that were allocated to it are added back to the free cluster list.
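The linked-list scheme can be sketched as follows (class and method names are invented for illustration): each free cluster records the number of the next free one, allocation pops from the head of the list, and deleting a file pushes its clusters back.

```python
class FreeList:
    """Free clusters kept as a linked list threaded through a dict."""

    def __init__(self, free_clusters):
        self.next = {}      # cluster -> next free cluster (or None at the end)
        self.head = None
        for c in reversed(free_clusters):
            self.release(c)

    def allocate(self):
        """Take a cluster off the front of the free list."""
        if self.head is None:
            raise MemoryError("disk full")
        c = self.head
        self.head = self.next.pop(c)
        return c

    def release(self, cluster):
        """Return a cluster (e.g. from a deleted file) to the free list."""
        self.next[cluster] = self.head
        self.head = cluster

fl = FreeList([3, 7, 8])
a = fl.allocate()          # cluster 3 is allocated to a file
fl.release(a)              # deleting the file returns cluster 3 to the list
print(a, fl.allocate())    # → 3 3
```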

What file systems are supported by Windows operating systems?
The Windows operating system supports the following file systems

The MS-DOS operating system introduced the File Allocation Table (FAT) system of keeping track of file entries and free clusters. Filenames were restricted to eight characters with an additional three characters signifying the file type. The FAT tables were stored at the beginning of the storage space.

A file allocation table (FAT) is a table that an operating system maintains on a hard disk that provides a map of the clusters (the basic units of logical storage on a hard disk) that a file has been stored in. When you write a new file to a hard disk, the file is stored in one or more clusters that are not necessarily next to each other; they may be rather widely scattered over the disk. A typical cluster size is 2,048 bytes, 4,096 bytes, or 8,192 bytes. The operating system creates a FAT entry for the new file that records where each cluster is located and their sequential order. When you read a file, the operating system reassembles the file from clusters and places it as an entire file where you want to read it. For example, if this is a long Web page, it may very well be stored on more than one cluster on your hard disk.
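Reassembling a file then means following the chain of table entries from the file's first cluster. A minimal sketch, with invented cluster numbers and `None` standing in for the real end-of-chain marker:

```python
# Toy FAT: each entry gives the next cluster of the file, or None at the end.
FAT = {5: 9, 9: 2, 2: None}   # a file stored in clusters 5 -> 9 -> 2

def clusters_of(first_cluster):
    """Follow the FAT chain and return the file's clusters in order."""
    chain, c = [], first_cluster
    while c is not None:
        chain.append(c)
        c = FAT[c]            # look up the next cluster in the table
    return chain

print(clusters_of(5))  # → [5, 9, 2]
```

The clusters come back in sequential file order even though they are scattered on the disk, which is exactly the reassembly described above.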

FAT32 is an updated version of the FAT system designed for Windows 98. It supports much larger disks and long filenames.

Windows NT introduced the NT File System (NTFS), designed to be more efficient at handling files than the FAT system. It spreads file tables throughout the disk, beginning at the center of the storage space, and supports file compression and long filenames.

What are access-control lists and file permissions?
In multi-user operating systems, files may be accessed by multiple users. Permission rights associated with folders (directories) and files are used to protect or restrict access to files. In UNIX these rights are known as Read, Write and Execute. In Windows NT and Windows 2000 (using the NTFS file-system), additional file permissions are available.

What is a symbolic link or shortcut?
A symbolic link is a filename that links to another file. Consider the case on a UNIX box where three different mail packages are available. The administrator wants users to reference the mail program simply with the command “mail”, so that filename is made to point to (or reference) the desired mail package.

When the administrator runs the command “mail”, the appropriate mail package that is referenced by the symbolic link runs.

In Windows, a similar capability is known as a shortcut.
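On a UNIX-like system this can be demonstrated with Python's standard `os.symlink` and `os.readlink` calls (the file names below are made up; a real administrator would link `/usr/bin/mail` to the chosen package):

```python
import os
import tempfile

d = tempfile.mkdtemp()                 # scratch directory for the demo
target = os.path.join(d, "mailx")      # stand-in for one of the mail packages
link = os.path.join(d, "mail")         # the name the administrator publishes

with open(target, "w") as f:
    f.write("#!/bin/sh\necho mailx\n")

os.symlink(target, link)               # "mail" now references "mailx"
print(os.path.basename(os.readlink(link)))  # → mailx
```

Repointing `mail` at a different package is just a matter of recreating the link; nothing that uses the name `mail` has to change.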

What is file-system integrity?
File-system integrity refers to whether the file-system contains errors. Errors often arise when a user turns off the computer without first shutting it down properly.

During the shutdown process, a flag can be written to the file-system. At startup, this flag can be detected, and if not present, means the computer system was not shut down correctly.

UNIX provides the fsck program to check the file-system. The Windows operating systems provide Scandisk or Chkdsk (checkdisk).

What is fragmentation and what does defragging a drive do?
When files are created and data written to them, the operating system allocates space for each file from the free cluster list. Over a period of time, the clusters allocated to a file may no longer be sequential (contiguous, or one after the other) but scattered over the disk. Defragmenting rearranges the clusters so that each file's clusters are contiguous again, which reduces disk head movement and speeds up file access.