
Operating system principles / Abraham Silberschatz, Peter Baer Galvin, Greg Gagne. 7th ed. Hoboken, NJ: John Wiley.

Contents: Pt. Overview: Introduction; System structures. Pt. Process management: Process concept; Multithreaded programming; Process scheduling. Pt. Process coordination: Synchronization; Deadlocks. Pt. Memory management: Memory-management strategies; Virtual-memory management. Pt. Storage management: File system; Implementing file systems; Secondary-storage structure. Pt. Distributed systems: Distributed operating systems; Distributed file systems; Distributed synchronization. Pt. Protection and security: System protection; System security. Pt. Special-purpose systems: Real-time systems; Multimedia systems. Pt. Case studies: The Linux system; Windows XP; Influential operating systems. App. (contents online): The Mach system; Windows.

Namely, the process whose turn it is: the waiting process can enter its critical section only when the other process updates the value of turn.

This algorithm does not provide strict alternation. It only sets turn to the value of the other process upon exiting its critical section. If this process wishes to enter its critical section again before the other process does, it repeats the process of entering its critical section and setting turn to the other process upon exiting. Assume two processes wish to enter their respective critical sections.

They both set their value of flag to true; however, only the thread whose turn it is can proceed, and the other thread waits. If bounded waiting were not preserved, it would therefore be possible for the waiting process to wait indefinitely while the first process repeatedly entered and exited its critical section.

The processes share the following variables: The structure of process Pi is shown in Figure 6. This algorithm satisfies the three conditions.
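For reference, a minimal C sketch of the two-process flag/turn algorithm discussed above (Peterson's solution); the volatile qualifier stands in for the atomicity and ordering assumptions the algorithm makes, which real hardware would need memory fences to honor:

    /* Shared by processes 0 and 1. */
    volatile int flag[2] = { 0, 0 };   /* flag[i] != 0: Pi wants in      */
    volatile int turn = 0;             /* which process yields first     */

    void enter_region(int i) {
        int j = 1 - i;                 /* the other process              */
        flag[i] = 1;                   /* announce intent                */
        turn = j;                      /* defer to the other process     */
        while (flag[j] && turn == j)
            ;                          /* busy-wait until it is our turn */
    }

    void leave_region(int i) {
        flag[i] = 0;                   /* allow the other process in     */
    }

A process brackets its critical section with enter_region(i) and leave_region(i).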


Before we show that the three conditions are satisfied, we give a brief explanation of what the algorithm does to ensure mutual exclusion. When a process i requires access to the critical section, it first sets its flag variable to want_in to indicate its desire, and then performs the entry steps sketched below. Given this description, we can reason about how the algorithm satisfies the requirements in the following manner. Notice that a process enters the critical section only if no other process has its flag set to in_cs. Since the process sets its own flag variable to in_cs before checking the status of other processes, we are guaranteed that no two processes will enter the critical section simultaneously.
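The steps referred to above match the classic Eisenberg and McGuire protocol; a C-style sketch for n processes, assuming atomic loads and stores of the shared variables (the enum names mirror the flag values used in the discussion):

    enum pstate { IDLE, WANT_IN, IN_CS };
    #define N 8                           /* number of processes (hypothetical) */

    volatile enum pstate flag[N];         /* all IDLE initially */
    volatile int turn = 0;

    void enter(int i) {
        int j;
        do {
            flag[i] = WANT_IN;
            j = turn;
            while (j != i)                /* wait until no active process   */
                j = (flag[j] != IDLE) ? turn : (j + 1) % N;  /* is ahead    */
            flag[i] = IN_CS;              /* tentatively claim the section  */
            j = 0;                        /* verify we are the only claimant */
            while (j < N && (j == i || flag[j] != IN_CS))
                j++;
        } while (j < N || (turn != i && flag[turn] != IDLE));
        turn = i;                         /* safe to enter                  */
    }

    void leave(int i) {
        int j = (turn + 1) % N;           /* pass the turn to the nearest   */
        while (flag[j] == IDLE)           /* non-idle process               */
            j = (j + 1) % N;
        turn = j;
        flag[i] = IDLE;
    }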

When several processes simultaneously set their flags to in_cs, all of them realize that there are competing processes, enter the next iteration of the outer while(1) loop, and reset their flag variables to want_in. Now the only process that will set its flag variable to in_cs is the process whose index is closest to turn. It is, however, possible that new processes whose index values are even closer to turn might decide to enter the critical section at this point and might therefore be able to set their flags to in_cs simultaneously.

These processes would then realize there are competing processes and might restart the process of entering the critical section. However, at each iteration the index values of processes that set their flag variables to in_cs become closer to turn, and eventually only one process, the one whose index is closest to turn, does so. This process then gets to enter the critical section. The bounded-waiting requirement is satisfied by the fact that when a process k desires to enter the critical section, its flag is no longer set to idle.

Therefore, any process whose index does not lie between turn and k cannot enter the critical section. In the meantime, all processes whose indexes fall between turn and k and that desire to enter the critical section will indeed enter it, because the system always makes progress and the turn value monotonically becomes closer to k.

Eventually, either turn becomes k or there are no processes whose index values lie between turn and k, and therefore process k gets to enter the critical section.

What other kinds of waiting are there in an operating system? Can busy waiting be avoided altogether? Busy waiting means that a process is waiting for a condition to be satisfied in a tight loop, without relinquishing the processor. Alternatively, a process could wait by relinquishing the processor and blocking on a condition, to be awakened at some appropriate time in the future.

Busy waiting can be avoided, but doing so incurs the overhead associated with putting a process to sleep and having to wake it up when the appropriate program state is reached. Spinlocks are not appropriate for single-processor systems because the condition that would break a process out of the spinlock can be set only by executing a different process. If the process is not relinquishing the processor, other processes do not get the opportunity to set the program condition required for the first process to make progress.

In a multiprocessor system, other processes execute on other processors and can thereby modify the program state and release the first process from the spinlock. If a user-level program is given the ability to disable interrupts, then it can disable the timer interrupt and prevent context switching from taking place, thereby allowing it to use the processor without letting other processes execute. Disabling interrupts is not sufficient in multiprocessor systems, since it only prevents other processes from executing on the processor on which interrupts were disabled; there are no limitations on what processes can execute on other processors, so the process disabling interrupts cannot guarantee mutually exclusive access to program state.
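To make the spinlock discussion concrete, a minimal busy-waiting lock sketched with C11 atomics (illustrative only, not taken from the text):

    #include <stdatomic.h>

    atomic_flag lock = ATOMIC_FLAG_INIT;

    void acquire(void) {
        while (atomic_flag_test_and_set(&lock))
            ;                        /* spin until the flag was clear */
    }

    void release(void) {
        atomic_flag_clear(&lock);    /* the next test_and_set will succeed */
    }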

For example, a server may wish to have only N socket connections at any point in time. As soon as N connections are made, the server will not accept another incoming connection until an existing connection is released. Explain how semaphores can be used by a server to limit the number of concurrent connections.

A semaphore is initialized to the number of allowable open socket connections. When a connection is accepted, the acquire method is called; when a connection is released, the release method is called.

A wait operation atomically decrements the value associated with a semaphore. If two wait operations are executed on a semaphore when its value is 1 and the two operations are not performed atomically, then it is possible that both operations might proceed to decrement the semaphore value, thereby violating mutual exclusion.
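Returning to the connection-limiting scheme, a minimal sketch with POSIX semaphores (the limit and the server callback names are hypothetical):

    #include <semaphore.h>

    #define MAX_CONNECTIONS 10       /* N, chosen arbitrarily for the sketch */

    sem_t conn_sem;

    void server_init(void) {
        sem_init(&conn_sem, 0, MAX_CONNECTIONS);  /* counting semaphore = N */
    }

    void on_accept(void) {
        sem_wait(&conn_sem);         /* blocks while all N connections are in use */
        /* ... service the new connection ... */
    }

    void on_release(void) {
        sem_post(&conn_sem);         /* an existing connection was closed */
    }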

The solution should exhibit minimal busy waiting. Here is pseudocode for implementing the operations:
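A C-style sketch of the standard approach: a short spinlock guards the semaphore's internals, while processes waiting on the semaphore itself sleep rather than spin. The queue helpers, block(), wakeup(), and current_process() are hypothetical kernel services, and a real kernel must release the guard and block atomically to avoid a lost wakeup:

    #include <stdatomic.h>

    typedef struct process process_t;              /* opaque process handle */
    typedef struct { process_t *head, *tail; } queue_t;

    /* Hypothetical kernel services. */
    process_t *current_process(void);
    void enqueue(queue_t *q, process_t *p);
    process_t *dequeue(queue_t *q);
    void block(void);
    void wakeup(process_t *p);

    typedef struct {
        int value;                  /* semaphore count                      */
        queue_t waiters;            /* processes sleeping on this semaphore */
        atomic_flag guard;          /* spinlock held only briefly           */
    } semaphore_t;

    void semaphore_wait(semaphore_t *s) {
        while (atomic_flag_test_and_set(&s->guard))
            ;                                        /* brief busy wait     */
        if (--s->value < 0) {
            enqueue(&s->waiters, current_process());
            atomic_flag_clear(&s->guard);
            block();                                 /* sleep, do not spin  */
        } else {
            atomic_flag_clear(&s->guard);
        }
    }

    void semaphore_signal(semaphore_t *s) {
        while (atomic_flag_test_and_set(&s->guard))
            ;                                        /* brief busy wait     */
        if (s->value++ < 0)
            wakeup(dequeue(&s->waiters));            /* resume one sleeper  */
        atomic_flag_clear(&s->guard);
    }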

A barbershop consists of a waiting room with n chairs and a barber room with one barber chair. If there are no customers to be served, the barber goes to sleep. If a customer enters the barbershop and all chairs are occupied, then the customer leaves the shop. If the barber is busy but chairs are available, then the customer sits in one of the free chairs. If the barber is asleep, the customer wakes up the barber. Write a program to coordinate the barber and the customers.

A semaphore can be implemented using the following monitor code:

Each condition variable is represented by a queue of threads waiting for the condition.

Each thread has a semaphore associated with its queue entry. When a thread performs a wait operation, it creates a new semaphore (initialized to zero), appends the semaphore to the queue associated with the condition variable, and performs a blocking semaphore decrement operation on the newly created semaphore. When a thread performs a signal on a condition variable, the first thread in the queue is awakened by performing an increment on the corresponding semaphore.
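A compact C sketch of that scheme using POSIX semaphores; the queue type and its helpers are written out for completeness, and the monitor lock is itself represented by a binary semaphore (all names are illustrative):

    #include <semaphore.h>
    #include <stddef.h>

    typedef struct waiter {
        sem_t sem;                       /* private semaphore, initially 0  */
        struct waiter *next;
    } waiter_t;

    typedef struct { waiter_t *head, *tail; } condvar_t;  /* FIFO of sleepers */

    static void enqueue(condvar_t *cv, waiter_t *w) {
        w->next = NULL;
        if (cv->tail) cv->tail->next = w; else cv->head = w;
        cv->tail = w;
    }

    static waiter_t *dequeue(condvar_t *cv) {
        waiter_t *w = cv->head;
        if (w && !(cv->head = w->next)) cv->tail = NULL;
        return w;
    }

    void cond_wait(condvar_t *cv, sem_t *monitor_lock) {
        waiter_t self;
        sem_init(&self.sem, 0, 0);       /* new semaphore initialized to 0  */
        enqueue(cv, &self);
        sem_post(monitor_lock);          /* release the monitor while asleep */
        sem_wait(&self.sem);             /* blocking decrement on own sem   */
        sem_wait(monitor_lock);          /* reacquire the monitor on wakeup */
    }

    void cond_signal(condvar_t *cv) {
        waiter_t *w = dequeue(cv);       /* first thread in the queue       */
        if (w)
            sem_post(&w->sem);           /* awaken it with an increment     */
    }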

Explain why this is true. Design a new scheme that is suitable for larger portions. These copy operations could be expensive if one were using large extents of memory for each buffer region. The increased cost of the copy operation means that the monitor is held for a longer period of time while a process is in the produce or consume operation, which decreases the overall throughput of the system.

This problem could be alleviated by storing pointers to buffer regions within the monitor instead of storing the buffer regions themselves. Copying a pointer is relatively inexpensive, and therefore the period of time that the monitor is held will be much shorter, thereby increasing the throughput of the monitor.

Propose a method for solving the readers-writers problem without causing starvation.

Throughput in the readers-writers problem is increased by favoring multiple readers over a single writer's exclusive access to the shared values. On the other hand, favoring readers could result in starvation for writers. Starvation in the readers-writers problem can be avoided by keeping timestamps associated with waiting processes. When a writer finishes its task, it wakes up the process that has been waiting the longest.

When a reader arrives and notices that another reader is accessing the database, it enters the critical section only if there are no waiting writers. These restrictions guarantee fairness.

The signal operation associated with monitors is not persistent in the following sense: if a signal is performed and there are no waiting threads, the signal is simply ignored, and a thread that performs a subsequent wait operation blocks.

With semaphores, by contrast, a future wait operation would immediately succeed because of the earlier increment. Suggest how the implementation described in Section 6.

If the signal operation were the last statement, then the lock could be transferred from the signalling process to the process that is the recipient of the signal.

Silberschatz, Galvin, Gagne: Operating System Concepts, 7th Edition - Student Companion Site

Otherwise, the signalling process would have to explicitly release the lock and the recipient of the signal would have to compete with all other processes to obtain the lock to make progress.

Write a monitor that allocates three identical line printers to these processes, using the priority numbers for deciding the order of allocation. Here is the pseudocode:

The sum of all unique numbers associated with all the processes currently accessing the file must be less than n. Write a monitor to coordinate access to the file. The pseudocode is as follows:

How would the solution to the preceding exercise differ with the two different ways in which signaling can be performed?

The solution to the previous exercise is correct under both situations. However, it could suffer from the problem that a process might be awakened only to find that it is still not possible for it to make forward progress, either because there was not sufficient slack to begin with when the process was awakened or because an intervening process got control, obtained the monitor, and started accessing the file.

Also note that the broadcast operation wakes up all of the waiting processes. If the signal also transfers control and the monitor from the current thread to the target, then one could check whether the target would indeed be able to make forward progress and perform the signal only if it were possible. Write a monitor using this scheme to implement the readers-writers problem. Explain why, in general, this construct cannot be implemented efficiently. What restrictions need to be put on the await statement so that it can be implemented efficiently?

Restrict the generality of B; see Kessels []. This requires considerable complexity and might also require some interaction with the compiler to evaluate the conditions at different points in time.

One could restrict the Boolean condition to be a disjunction of conjunctions, with each component being a simple check (equality or inequality with respect to a static value) on a program variable. The Boolean condition could then be communicated to the runtime system, which could perform the check every time it needs to determine which thread should be awakened. You may assume the existence of a real hardware clock that invokes a procedure tick in your monitor at regular intervals.
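A sketch of such an alarm-clock monitor using POSIX threads (the mutex plus condition variable stand in for the monitor; a broadcast wakes every sleeper so that each can re-check its own alarm time):

    #include <pthread.h>

    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  wakeup = PTHREAD_COND_INITIALIZER;
    static long now = 0;                 /* ticks elapsed so far */

    void delay(long ticks) {             /* called by client threads */
        pthread_mutex_lock(&m);
        long alarm = now + ticks;
        while (now < alarm)
            pthread_cond_wait(&wakeup, &m);
        pthread_mutex_unlock(&m);
    }

    void tick(void) {                    /* invoked by the hardware clock */
        pthread_mutex_lock(&m);
        now++;
        pthread_cond_broadcast(&wakeup); /* sleepers re-check their alarms */
        pthread_mutex_unlock(&m);
    }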

Solaris, Linux, and Windows use spinlocks as a synchronization mechanism only on multiprocessor systems, where other processes execute on other processors and can modify the program state to release the first process from the spinlock.

Why is this restriction necessary? If the transaction needs to be aborted, then the values of the updated data items need to be rolled back to the old values.

This requires the old values of the data entries to be logged before the updates are performed. A schedule refers to the execution sequence of the operations of one or more transactions. A serial schedule is one in which each transaction of a schedule is performed atomically.

If a schedule consists of two different transactions where consecutive operations from the different transactions access the same data and at least one of the operations is a write, then we have what is known as a conflict.

If a schedule can be transformed into a serial schedule by a series of swaps of nonconflicting operations, we say that such a schedule is conflict serializable. The two-phase locking protocol ensures conflict serializability because exclusive locks (which are used for write operations) must be acquired serially, without releasing any locks during the acquiring (growing) phase.

Other transactions that wish to acquire the same locks must wait for the first transaction to begin releasing locks. By requiring that all locks be acquired before any locks are released, we ensure that potential conflicts are avoided.

How does the system process transactions that were issued after the rolled-back transaction but that have timestamps smaller than the new timestamp of the rolled-back transaction?

If the transactions that were issued after the rolled-back transaction had accessed variables that were updated by the rolled-back transaction, then these transactions would have to be rolled back as well.

If they have not performed such operations (that is, there is no overlap with the rolled-back transaction in terms of the variables accessed), then these transactions are free to commit when appropriate.

Processes may ask for a number of these resources and, once finished, will return them. As an example, many commercial software packages provide a given number of licenses, indicating the number of applications that may run concurrently.

When the application is started, the license count is decremented. When the application is terminated, the license count is incremented. If all licenses are in use, requests to start the application are denied.

Such requests will be granted only when an existing license holder terminates the application and a license is returned. The maximum number of resources and the number of available resources are declared as follows: Do the following: Identify the data involved in the race condition.

Identify the location (or locations) in the code where the race condition occurs. Using a semaphore, fix the race condition. The data involved is the variable available_resources.

The code that decrements available_resources and the code that increments available_resources are the statements that could be involved in race conditions. Use a semaphore to represent the available_resources variable, and replace the increment and decrement operations with semaphore increment and semaphore decrement operations. This, however, leads to awkward programming for a process that wishes to obtain a number of resources at once. A better scheme allows a process to invoke decrease_count by simply calling decrease_count(count); the process returns from this call only when sufficient resources are available.
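A sketch of that blocking version using a mutex and condition variable (monitor-style) rather than a bare semaphore; MAX_RESOURCES is a placeholder value:

    #include <pthread.h>

    #define MAX_RESOURCES 5              /* hypothetical total */

    static int available_resources = MAX_RESOURCES;
    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  freed = PTHREAD_COND_INITIALIZER;

    /* Returns only once 'count' resources have been acquired. */
    void decrease_count(int count) {
        pthread_mutex_lock(&m);
        while (available_resources < count)
            pthread_cond_wait(&freed, &m);   /* block; do not busy-wait */
        available_resources -= count;
        pthread_mutex_unlock(&m);
    }

    void increase_count(int count) {
        pthread_mutex_lock(&m);
        available_resources += count;
        pthread_cond_broadcast(&freed);      /* waiters re-check availability */
        pthread_mutex_unlock(&m);
    }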

It is important that the students learn the three basic approaches to deadlock: prevention, avoidance, and detection (with recovery). It can be useful to pose a deadlock problem in human terms and ask why human systems never deadlock.

Can the students transfer this understanding of human systems to computer systems? Projects can involve simulation: ask the students to allocate the resources to prevent deadlock. The survey paper by Coffman, Elphick, and Shoshani [] is good supplemental reading, but you might also consider having the students go back to the papers by Havender [], Habermann [], and Holt [a].

The last two were published in CACM and so should be readily available.

Exercises

Show that the four necessary conditions for deadlock indeed hold in this example. State a simple rule for avoiding deadlocks in this system. The four necessary conditions for a deadlock are (1) mutual exclusion; (2) hold-and-wait; (3) no preemption; and (4) circular wait.

The mutual exclusion condition holds, as only one car can occupy a space in the roadway. Hold-and-wait occurs because each car holds its space while waiting to advance, and a car cannot be removed from its position in the roadway (no preemption). Lastly, there is indeed a circular wait, as each car is waiting for a subsequent car to advance; the circular wait is also easily observed from the graphic. A simple rule that would avoid this traffic deadlock is that a car may not advance into an intersection unless it is clear that it will be able to immediately clear the intersection.

Discuss how the four necessary conditions for deadlock indeed hold in this setting.


Discuss how deadlocks could be avoided by eliminating any one of the four conditions. Deadlock is possible because the four necessary conditions hold in the following manner: Deadlocks could be avoided by overcoming the conditions in the following manner:

Such synchronization objects may include mutexes, semaphores, condition variables, etc. We can prevent the deadlock by adding a sixth object F. This solution is known as containment: Compare this scheme with the circular-wait scheme of Section 7. This is probably not a good solution because it yields too large a scope; it is better to define a locking policy with as narrow a scope as possible.

a. Runtime overheads. b. System throughput. Answer: A deadlock-avoidance scheme tends to increase the runtime overheads due to the cost of keeping track of the current resource allocation. However, a deadlock-avoidance scheme allows for more concurrent use of resources than schemes that statically prevent the formation of deadlock.

In that sense, a deadlock-avoidance scheme could increase system throughput. Resources break or are replaced, new processes come and go, new re- sources are bought and added to the system.

a. Increase Available (new resources added).
b. Decrease Available (resource permanently removed from system).
c. Increase Max for one process (the process needs more resources than allowed; it may want more).
d. Decrease Max for one process (the process decides it does not need that many resources).
e. Increase the number of processes.
f. Decrease the number of processes.

a. Increase Available (new resources added) - This could safely be changed without any problems.
c. Increase Max for one process - This could have an effect on the system and introduce the possibility of deadlock.
d. Decrease Max for one process - This could safely be changed without any problems.
e. Increase the number of processes - This could be allowed assuming that resources were allocated to the new process(es) such that the system does not enter an unsafe state.
f. Decrease the number of processes - This could safely be changed without any problems.

Show that the system is deadlock-free. Suppose the system is deadlocked. This implies that each process is holding one resource and is waiting for one more.

Since there are three processes and four resources, one process must be able to obtain two resources. This process requires no more resources and therefore will return its resources when done.

Resources can be requested and released by processes only one at a time. Show that the system is deadlock free if the following two conditions hold: a. The maximum need of each process is between 1 and m resources. b. The sum of all maximum needs is less than m + n.

Using the terminology of Section 7, let Need_i = Max_i - Alloc_i. If the system were deadlocked, all m resources would be allocated, so Sum(Alloc_i) = m. By condition (b), Sum(Need_i) + Sum(Alloc_i) = Sum(Max_i) < m + n, and therefore Sum(Need_i) < n. It follows that some process P_i has Need_i = 0 and, by condition (a), holds at least one resource that it can eventually release. Hence the system cannot be in a deadlock state.

Assume that requests for chopsticks are made one at a time.

The following rule prevents deadlock:

Assume now that each philosopher requires three chopsticks to eat and that resource requests are still issued separately.

Describe some simple rules for determining whether a particular request could be satisfied without causing deadlock, given the current allocation of chopsticks to philosophers. When a philosopher makes a request for a chopstick, allocate the request if:

         Need
         A  B  C
    P0   7  4  3
    P1   0  2  0
    P2   6  0  0
    P3   0  1  1
    P4   4  3  1

If the value of Available is (2,3,0), we can see that a request from process P0 for (0,2,0) cannot be satisfied, as this lowers Available to (2,1,0) and no process could then safely finish.

What is the content of the matrix Need? Is the system in a safe state? If a request from process P1 arrives for (0,4,2,0), can the request be granted immediately? The values of Need for processes P0 through P4 are, respectively, (0,0,0,0), (0,7,5,0), (1,0,0,2), (0,0,2,0), and (0,6,4,2).

With Available equal to (1,5,2,0), either process P0 or P3 could run. Once process P3 runs, it releases its resources, which allows all other existing processes to run.

Yes, it can: this results in the value of Available being (1,1,0,0).

How could this assumption be violated? The optimistic assumption is that there will not be any form of circular wait in terms of resources allocated and processes making requests for them. This assumption could be violated if a circular wait does indeed occur in practice.
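Both of the preceding snapshot exercises rely on the Banker's safety test; a C sketch with hypothetical dimensions (N processes, M resource types):

    #include <stdbool.h>
    #include <string.h>

    #define N 5   /* processes (hypothetical) */
    #define M 4   /* resource types (hypothetical) */

    bool is_safe(const int avail[M], int alloc[N][M], int need[N][M]) {
        int  work[M];
        bool finish[N] = { false };
        memcpy(work, avail, sizeof work);

        for (;;) {
            bool advanced = false;
            for (int i = 0; i < N; i++) {
                if (finish[i]) continue;
                bool fits = true;
                for (int j = 0; j < M; j++)
                    if (need[i][j] > work[j]) { fits = false; break; }
                if (fits) {                      /* Pi can run to completion */
                    for (int j = 0; j < M; j++)
                        work[j] += alloc[i][j];  /* and return its resources */
                    finish[i] = true;
                    advanced = true;
                }
            }
            if (!advanced) break;                /* no further process can finish */
        }
        for (int i = 0; i < N; i++)
            if (!finish[i]) return false;        /* unsafe: someone is stuck */
        return true;                             /* a safe sequence exists */
    }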

This assumption could be violated if a circular-wait does indeed in practice. Create n threads that request and release resources from the banker. A banker will only grant the request if it leaves the system in a safe state. Ensure that access to shared data is thread-safe by employing Java thread synchronization as discussed in Section 7. Farmers in the two villages use this bridge to deliver their produce to the neighboring town.

The bridge can become deadlocked if both a northbound and a southbound farmer get on the bridge at the same time (Vermont farmers are stubborn and are unable to back up). Using semaphores, design an algorithm that prevents deadlock. Initially, do not be concerned about starvation (the situation in which northbound farmers prevent southbound farmers from using the bridge, or vice versa).
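One deadlock-free sketch with a single POSIX semaphore: at most one farmer is ever on the bridge, so a circular wait can never form (this serializes crossings and, as the exercise allows, ignores starvation):

    #include <semaphore.h>

    sem_t bridge;                     /* binary semaphore guarding the bridge */

    void init(void)  { sem_init(&bridge, 0, 1); }

    void cross(void) {                /* same protocol for both directions */
        sem_wait(&bridge);            /* wait until the bridge is empty */
        /* ... drive across ... */
        sem_post(&bridge);            /* let the next farmer on */
    }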

We want the student to learn about all of them.

Exercises

Internal fragmentation is the area in a region or a page that is not used by the job occupying that region or page. This space is unavailable for use by the system until that job is finished and the page or region is released. A compiler is used to generate the object code for individual modules, and a linkage editor is used to combine multiple object modules into a single program binary.

How does the linkage editor change the binding of instructions and data to memory addresses? What information needs to be passed from the compiler to the linkage editor to facilitate its memory-binding tasks? The linkage editor has to replace unresolved symbolic addresses with the actual addresses associated with the variables in the final program binary. To perform this, the modules should keep track of instructions that refer to unresolved symbols.

During linking, each module is assigned a sequence of addresses in the overall program binary, and when this has been performed, unresolved references to symbols exported by this binary can be patched in other modules, since every other module contains the list of instructions that need to be patched.

Which algorithm makes the most efficient use of memory? Data allocated in the heap segments of programs is an example of such allocated memory. What is required to support dynamic memory allocation in the following schemes:

Pure segmentation also suffers from external fragmentation, as a segment of a process is laid out contiguously in physical memory and fragmentation occurs as segments of dead processes are replaced by segments of new processes.

Segmentation, however, enables processes to share code; for instance, two different processes could share a code segment but have distinct data segments. Pure paging does not suffer from external fragmentation, but instead suffers from internal fragmentation.

Processes are allocated in page granularity and if a page is not completely utilized, it results in internal fragmentation and a corresponding wastage of space. Paging also enables processes to share code at the granularity of pages.

How could the operating system allow access to other memory? Why should it or should it not? An address on a paging system is a logical page number and an offset.

The physical page is found by searching a table based on the logical page number to produce a physical page number. Because the operating system controls the contents of this table, it can limit a process to accessing only those physical pages allocated to the process.

There is no way for a process to refer to a page it does not own because the page will not be in the page table. This is useful when two or more processes need to exchange data—they just read and write to the same physical addresses which may be at varying logical addresses.

This makes for very efficient interprocess communication. Paging requires more memory overhead to maintain the trans- lation structures. Segmentation requires just two registers per segment: Paging on the other hand requires one entry per page, and this entry provides the physical address in which the page is located.

Code is stored starting with a small fixed virtual address such as 0. The code segment is followed by the data segment that is used for storing the program variables. When the program starts executing, the stack is allocated at the other end of the virtual address space and is allowed to grow towards lower virtual addresses.

What is the significance of the above structure on the following schemes: This could be much higher than the actual memory requirements of the process. When a program needs to extend the stack or the heap, it needs to allocate a new page, but the corresponding page-table entry is preallocated.

If a memory reference takes a given number of nanoseconds, how long does a paged memory reference take?

If we add associative registers, and 75 percent of all page-table references are found in the associative registers, what is the effective memory reference time?
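Worked symbolically, since the single memory-reference time is not specified above (call it t, and take the associative-register lookup to be free, as the answer below assumes):

    paged reference (no associative registers) = 2t
        (one access for the page-table entry, one for the word itself)
    EAT = 0.75 * t + 0.25 * 2t = 1.25t

For instance, with an assumed t = 200 ns, a paged reference costs 400 ns and the effective access time is 250 ns.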

Assume that finding a page-table entry in the associative registers takes zero time if the entry is there.

Segmentation and paging are often combined in order to improve upon each other. Segmented paging is helpful when the page table becomes very large: a large contiguous section of the page table that is unused can be collapsed into a single segment-table entry with a page-table address of zero.

Paged segmentation handles the case of very long segments that require a lot of time for allocation: by paging the segments, we reduce wasted memory due to external fragmentation and also simplify the allocation.

Since segmentation is based on a logical division of memory rather than a physical one, segments of any size can be shared with only one entry in the segment tables of each user. With paging there must be a common entry in the page tables for each page that is shared.

    Segment  Base  Length
    0        -     -
    1        -     14
    2        90    -
    3        -     -
    4        -     96

What are the physical addresses for the following logical addresses? In certain situations the page tables could become large enough that, by paging the page tables, one could simplify the memory-allocation problem (by ensuring that everything is allocated as fixed-size pages rather than variable-sized chunks) and also enable the swapping of portions of the page table that are not currently used.

How many memory operations are performed when a user program executes a memory load operation? When a memory load operation is performed, three memory operations might be needed. The first is to translate the position where the page-table entry for the page can be found (since page tables themselves are paged); the second is to access the page-table entry itself; the third is the actual memory load operation.

Under what circumstances is one scheme preferable to the other? When a program occupies only a small portion of its large virtual address space, a hashed page table might be preferred due to its smaller size.

The disadvantage with hashed page tables, however, is the problem that arises due to conflicts in mapping multiple pages onto the same hashed page-table entry. If many pages map to the same entry, then traversing the list corresponding to that hash-table entry could incur a significant overhead; such overheads are minimal in the segmented paging scheme, where each page-table entry maintains information regarding only one page.

Describe all the steps that the Intel takes in translating a logical address into a physical address. What are the advantages to the operating system of hardware that provides such complicated memory translation hardware? Are there any disadvantages to this address-translation system?

If so, what are they? If not, why is it not used by every manufacturer? The selector is an index into the segment descriptor table. The seg- ment descriptor result plus the original offset is used to produce a linear address with a dir, page, and offset. The dir is an index into a page directory. The entry from the page directory selects the page table, and the page field is an index into the page table.

The entry from the page table, plus the offset, is the physical address. Such a page translation mechanism offers the flexibility to allow most operating systems to implement their memory scheme in hardware, instead of having to implement some parts in hardware and some in software.

Because it can be done in hardware, it is more efficient and the kernel is simpler. Address translation can take longer due to the multiple table lookups it can invoke. Caches help, but there will still be cache misses.

The objectives of this chapter are to explain these concepts and show how paging works. A simulation is probably the easiest way to allow the students to program several of the page-replacement algorithms and see how they really work. If an interactive graphics display can be used to display the simulation as it works, the students may be better able to understand how paging works.

Exercises

Assume that the page boundary is at and the move instruction is moving values from a source region of Assume that a page fault occurs while accessing location By this time the locations of

For every memory-access operation, the page table needs to be consulted to check whether the corresponding page is resident and whether the program has read or write privileges for accessing the page. These checks would have to be performed in hardware. A TLB could serve as a cache and improve the performance of the lookup operation.

What is the hardware support required to implement this feature? When two processes access the same set of program values (for instance, the code segment of the source binary), it is useful to map the corresponding pages into the virtual address spaces of the two programs in a write-protected manner. When a write does indeed take place, a copy must be made to allow the two programs to individually access the different copies without interfering with each other.

The hardware support required to implement this is simply the following: on each memory access, the page table is consulted to check whether the page is write-protected. If it is indeed write-protected, a trap occurs and the operating system can resolve the issue.

The computer has bytes of physical memory. The virtual memory is implemented by paging, and the page size is bytes. A user process generates the virtual address Explain how the system establishes the corresponding physical location.

Distinguish between software and hardware operations. The virtual address in binary form is Since the page size is , the page table size is The page table is held in registers.

It takes 8 milliseconds to service a page fault if an empty page is available or if the replaced page is not modified, and 20 milliseconds if the replaced page is modified. Memory-access time is a given number of nanoseconds. Assume that the page to be replaced is modified 70 percent of the time.

What is the maximum acceptable page-fault rate for an effective access time of no more than the stated number of nanoseconds?

What can you say about the system if you notice the following behavior? If the pointer is moving fast, then the program is accessing a large number of pages simultaneously. It is most likely that, during the period between the point at which the bit corresponding to a page is cleared and the point at which it is checked again, the page is accessed again and therefore cannot be replaced. This results in more scanning of the pages before a victim page is found.

If the pointer is moving slowly, then the virtual-memory system is finding candidate pages for replacement extremely efficiently, indicating that many of the resident pages are not being accessed.

Also discuss under what circumstances the opposite holds. Consider the following sequence of memory accesses in a system that can hold four pages in memory.

When page 5 is accessed, the least-frequently-used page-replacement algorithm would replace a page other than 1 and therefore would not incur a page fault when page 1 is accessed again. Consider a second sequence in a system that holds four pages in memory: the most-frequently-used page-replacement algorithm evicts page 4 while fetching page 5, while the LRU algorithm evicts page 1.

This is unlikely to happen much in practice. Assume that the free-frame pool is managed using the least-recently-used replacement policy. Answer the following questions: If a page fault occurs and the page does not exist in the free-frame pool, how is free space generated for the newly requested page? If a page fault occurs and the page exists in the free-frame pool, how are the resident page set and the free-frame pool managed to make space for the requested page? What does the system degenerate to if the number of resident pages is set to one?

What does the system degenerate to if the number of pages in the free-frame pool is zero? The accessed page is then moved to the resident set.

a. Install a faster CPU.
b. Install a bigger paging disk.
c. Increase the degree of multiprogramming.
d. Decrease the degree of multiprogramming.
e. Install more main memory.
f. Install a faster hard disk or multiple controllers with multiple hard disks.
g. Add prepaging to the page fetch algorithms.
h. Increase the page size.

The system obviously is spending most of its time paging, indicating over-allocation of memory. If the level of multiprogramming is reduced, resident processes would page-fault less frequently and CPU utilization would improve. Another way to improve performance would be to get more physical memory or a faster paging drum.

a. Get a faster CPU - No.
b. Get a bigger paging drum - No.
c. Increase the degree of multiprogramming - No.
d. Decrease the degree of multiprogramming - Yes.
e. Install more main memory - Likely to improve CPU utilization, as more pages can remain resident and not require paging to or from the disks.
f. Install a faster hard disk, or multiple controllers with multiple hard disks - Also an improvement: as the disk bottleneck is removed by faster response and more throughput to the disks, the CPU will get more data more quickly.
g. Add prepaging to the page fetch algorithms - Again, the CPU will get more data faster, so it will be more in use. This is only the case if the paging action is amenable to prefetching (i.e., if some of the accesses are sequential).
h. Increase the page size - Increasing the page size will result in fewer page faults if data is being accessed sequentially. If data access is more or less random, more paging action could ensue, because fewer pages can be kept in memory and more data is transferred per page fault. So this change is as likely to decrease utilization as it is to increase it.

The following page faults take place: the operating system will generate three page faults, with the third page replacing the page containing the instruction. If the instruction needs to be fetched again to repeat the trapped instruction, then this sequence of page faults will continue indefinitely. If the instruction is cached in a register, then it will be able to execute completely after the third page fault. What would you gain and what would you lose by using this policy rather than LRU or second-chance replacement?

Such an algorithm could be implemented with a reference bit: after every examination the bit is set to zero, and it is set back to one if the page is referenced. The algorithm would then select for replacement an arbitrary page from the set of pages unused since the last examination. The advantage of this algorithm is its simplicity: nothing other than a reference bit need be maintained. The disadvantage is that it ignores locality by using only a short time frame to decide whether to evict a page.

We can do this minimization by distributing heavily used pages evenly over all of memory, rather than having them compete for a small number of page frames. We can associate with each page frame a counter of the number of pages that are associated with that frame. Then, to replace a page, we search for the page frame with the smallest counter.

Define a page-replacement algorithm using this basic idea. Specifically address the problems of (1) what the initial value of the counters is, (2) when counters are increased, (3) when counters are decreased, and (4) how the page to be replaced is selected.

How many page faults occur for your algorithm for the following reference string with four page frames? What is the minimum number of page faults for an optimal page-replacement strategy for the reference string in part b with four page frames? Define a page-replacement algorithm addressing the problems of:

a. Initial value of the counters - 0.
b. Counters are increased - whenever a new page is associated with that frame.
c. Counters are decreased - whenever one of the pages associated with that frame is no longer required.
d. How the page to be replaced is selected - find a frame with the smallest counter; use FIFO for breaking ties.

Addresses are translated through a page table in main memory, with an access time of 1 microsecond per memory access.

Thus, each memory reference through the page table takes two accesses. To improve this time, we have added an associative memory that reduces access time to one memory reference if the page-table entry is in the associative memory.

Assume that 80 percent of the accesses are in the associative memory and that, of the remaining accesses, 10 percent (or 2 percent of the total) cause page faults. What is the effective memory access time?
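Setting this up symbolically, since the page-fault service time is not given above (call it F, in microseconds):

    EAT = 0.80 * 1 + 0.18 * 2 + 0.02 * F = 1.16 + 0.02F   (microseconds)

The 18 percent term covers references that miss the associative memory but do not fault (two one-microsecond accesses each); whatever F is, the fault term dominates, which is why even a 2 percent fault rate is ruinous.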

How does the system detect thrashing? Once it detects thrashing, what can the system do to eliminate this problem? Thrashing is caused by under-allocation of the minimum number of pages required by a process, forcing it to page-fault continuously.

The system can detect thrashing by evaluating the level of CPU utilization as compared to the level of multiprogramming. It can be eliminated by reducing the level of multiprogramming.

One representing data and another representing code? As an example, the code being accessed by a process may retain the same working set for a long period of time.

However, the data the code accesses may change, thus reflecting a change in the working set for data accesses. This could result in a large number of page faults.

However, once a process is scheduled, it is unlikely to generate page faults, since its resident set has been overestimated. Using Figure 9, perform coalescing whenever possible. The following allocation is made by the buddy system: each request is assigned a segment rounded up to the next power of two; the 60-byte request, for example, is assigned a 64-byte segment. After the allocation, the following segment sizes are available: After the releases of memory, the only segment in use would be the one still containing data.

The following segments will be free:

What could be done to address this scalability issue? This had long been a problem with the slab allocator: poor scalability with multiple CPUs. The issue comes from having to lock the global cache when it is being accessed.

This has the effect of serializing cache accesses on multiprocessor systems. Solaris has addressed this by introducing a per-CPU cache, rather than a single global cache.

What are the advantages of such a paging scheme? What modifications to the virtual memory system are needed to provide this functionality? The program could have a large code segment or use large-sized arrays as data.

The virtual memory system would then have to maintain multiple free lists of pages for the different sizes and should also need to have more complex code for address translation to take into account different page sizes. First, generate a random page- reference string where page numbers range from Apply the ran- dom page-reference string to each algorithm and record the number of page faults incurred by each algorithm.

Implement the replacement algorithms so that the number of page frames can vary. Assume that demand paging is used.

Design two programs that communicate with shared memory using the Win32 API as outlined in Section 9. The consumer process will then read and output the sequence from shared memory. In this instance, the producer process will be passed an integer parameter on the command line specifying the number of Catalan numbers to produce.

Everything is typically stored in files: The student should learn what a file is to the operating system and what the problems are (providing naming conventions to allow files to be found by user programs, protection).

Second, it may be difficult to motivate students to learn about directory structures that are not the ones on the system they are using. This can best be overcome if the students have two very different systems to consider, such as a single-user system for a microcomputer and a large, university time-shared system. Projects might include a report about the details of the file system for the local system.

It is also possible to write programs to implement a simple file system, either in memory (allocating a large block of memory that is used to simulate a disk) or on top of an existing file system. In many cases, the design of a file system is an interesting project of its own.

Exercises

What problems may occur if a new file is created in the same storage area or with the same absolute path name?

How can these problems be avoided? Let F1 be the old file and F2 be the new file. A user wishing to access F1 through an existing link will actually access F2. Note that the access protection for file F1 is used rather than the one associated with F2. This can be accomplished in several ways:

Should the operating system maintain a separate table for each user, or just maintain one table that contains references to files that are being accessed by all users at the current time?

If the same file is being accessed by two different programs or users, should there be separate entries in the open-file table? By keeping a central open-file table, the operating system can perform the following operation, which would be infeasible otherwise: consider a file that is currently being accessed by one or more processes. If the file is deleted, then it should not be removed from the disk until all processes accessing the file have closed it. This check can be performed only if there is centralized accounting of the number of processes accessing the file.

On the other hand, if two processes are accessing the file, separate state needs to be maintained to keep track of which parts of the file are being accessed by the two processes and of each process's current position.

This requires the operating system to maintain separate entries for the two processes. In many cases, separate programs might be willing to tolerate concurrent access to a file without needing to obtain locks and thereby guarantee mutual exclusion on the files.

Mutual exclusion could be guaranteed by other program structures such as memory locks or other forms of synchronization.

In such situations, mandatory locks would limit the flexibility in how files could be accessed and might also increase the overheads associated with accessing files.

By recording the name of the creating program, the operating system is able to implement features (such as automatic program invocation when the file is accessed) based on this information.

It does add overhead in the operating system and require space in the file descriptor, however. Automatic opening and closing of files relieves the user from the invocation of these functions, and thus makes it more convenient to the user; however, it requires more overhead than the case where explicit opening and closing is required.

When a block is accessed, the file system could prefetch the subsequent blocks in anticipation of future requests to these blocks. This prefetching optimization would reduce the waiting time experienced by the process for future requests.

An application that maintains a database of entries could benefit from such support. For instance, if a program is maintaining a student database, then accesses to the database cannot be modeled by any predetermined access pattern.

Accesses to the records are random, and locating the records would be more efficient if the operating system were to provide some form of tree-based index. The advantage is greater transparency, in the sense that the user does not need to be aware of mount points or create links in all scenarios.

The disadvantage, however, is that the filesystem containing the link might be mounted while the filesystem containing the target file might not be, in which case one cannot provide transparent access to the file; the error condition would expose to the user that the link is dead and that it does indeed cross filesystem boundaries.

Discuss the relative merits of each approach. With a single copy, several concurrent updates to a file may result in users obtaining incorrect information and in the file being left in an incorrect state. With multiple copies, there is storage waste, and the various copies may not be consistent with one another. The advantage is that the application can deal with the failure condition more intelligently if it realizes that it incurred an error while accessing a file stored in a remote filesystem.

The disadvantage, however, is the lack of uniformity in failure semantics and the resulting complexity in application code. UNIX consistency semantics requires updates to a file to be immediately available to other processes.

Supporting such semantics for shared files on remote file systems could result in the following inefficiencies:

The basic issues are the device directory, free-space management, and space allocation on a disk. A file is a collection of extents, with each extent corresponding to a contiguous set of blocks.

A key issue in such systems is the degree of variability in the size of the extents. What are the advantages and disadvantages of the following schemes: a. All extents are of the same size, and the size is predetermined. b. Extents can be of any size and are allocated dynamically. c. Extents can be of a few fixed sizes, and these sizes are predetermined. If all extents are of the same size, and the size is predetermined, then the block-allocation scheme is simple.

A simple bitmap or free list for extents would suffice. If the extents can be of any size and are allocated dynamically, then more complex allocation schemes are required: it might be difficult to find an extent of the appropriate size, and there might be external fragmentation. One could use the buddy-system allocator discussed in the previous chapters to design an appropriate allocator. When the extents can be of a few fixed sizes, and these sizes are predetermined, one would have to maintain a separate bitmap or free list for each possible size.

This scheme is of intermediate complexity and of intermediate flexibility in comparison to the earlier schemes. The advantage is that while accessing a block that is stored at the middle of a file, its location can be determined by chasing the pointers stored in the FAT as opposed to accessing all of the individual blocks of the file in a sequential manner to find the pointer to the target block.

Typically, most of the FAT can be cached in memory, so the chain can be followed with memory accesses alone instead of having to read the intervening disk blocks.

Suppose that the pointer to the free-space list is lost. Can the system reconstruct the free-space list? Consider a file system similar to the one used by UNIX with indexed allocation, and assume that none of the disk blocks is currently being cached. Suggest a scheme to ensure that the pointer is never lost as a result of memory failure.

Those remaining unallocated blocks could be relinked as the free-space list. The free-space-list pointer could also be stored on the disk, perhaps in several places.

For instance, a file system could allocate 4 KB of disk space as a single 4-KB block or as eight 512-byte blocks.

How could we take advantage of this flexibility to improve performance? What modifications would have to be made to the free-space management scheme in order to support this feature?

Such a scheme would decrease internal fragmentation: if a file is 5 KB, it could be allocated one 4-KB block and two contiguous 512-byte blocks. In addition to maintaining a bitmap of free blocks, one would also have to maintain extra state regarding which of the subblocks are currently in use inside a block.

The allocator would then have to examine this extra state to allocate subblocks, and coalesce the subblocks to obtain a larger block when all of its subblocks become free.

Answer: The primary difficulty that might arise is due to delayed updates of data and metadata.

Updates could be delayed in the hope that the same data might be updated in the future or that the updated data might be temporary and might be deleted in the near future.

However, if the system were to crash without having committed the delayed updates, then the consistency of the file system is destroyed. Assume that the information about each file is al- ready in memory. For each of the three allocation strategies contiguous, linked, and indexed , answer these questions: How is the logical-to-physical address mapping accomplished in this system? For the indexed allocation, assume that a file is always less than blocks long. If we are currently at logical block 10 the last block accessed was block 10 and want to access logical block 4, how many physical blocks must be read from the disk?

Let Z be the starting file address (block number). Contiguous: divide the logical address by the block size, with X and Y the resulting quotient and remainder, respectively. Add X to Z to obtain the physical block number; Y is the displacement into that block. Indexed: divide the logical address by the block size, with X and Y the resulting quotient and remainder, respectively. Get the index block into memory; the physical block address is contained in the index block at location X, and Y is the displacement into the desired physical block.
