Serially Reusable Resources in Operating Systems
Consumable resources (such as messages and data) differ from serially reusable resources. A process may request units of a consumable resource but will never release them. A process can release units of a consumable resource without ever acquiring them. Because such resources may have an unbounded number of units, and because allocated units are not released, they must be managed differently from serially reusable resources.
Chapter Questions. Chapter 6: 2, 3, 8, 9, 12. Chapter 8: 1, 13(a). Chapter 9: 10, 12, 13, 15. Chapter 11: 1.
SAMPLE QUESTIONS (including some questions from the Ph.D. Qualifying exam in OS):
Employing semaphores, solve the reader/writer problem giving the writer priority.
Describe the deadlock detection algorithm found on pages 266-268.
Considering the basic functions of a page table (for example: the need to locate, replace, or swap pages to/from secondary storage; memory protection and sharing), explain what the fields of a page table would be and the rationale for each field.
Given a system that uses the banker’s algorithm for avoiding deadlock and the resource state shown below, explain why it is a safe or unsafe state. Assume that the total amount of each system resource is given, where R_i means the amount of resource i.
Terms: preemptible resources, non-preemptible resources, serially reusable resource, paging, virtual memory, working set, priority preemption, memory fragmentation, quantum preemption, thrashing, page frame, page fault.
Discuss at least 4 weaknesses in the Banker’s Algorithm.
Compare and contrast what is meant by a process’ location space and its name space.
A memory management system must consider a number of problems; discuss code/data, fragmentation, memory partitioning, protection, and swapping.
Explain how a paging system works; be certain to include faults, thrashing, pages, frames, swapping, page replacement strategies, address translation, and anomalies.
Explain the mapping of a virtual address to a real address under a paging system; include faults, pages, page frames, swapping, and page replacement (be sure you are clear, and use diagrams if necessary).
Discuss deadlock: what is it? Provide an illustration, explain the necessary conditions for deadlock, and name the three basic approaches to handling deadlock.
Discuss/explain scheduling methodologies; include non-preemptive versus preemptive scheduling, FCFS, SPN, RR, and multilevel feedback queues.
There are three basic issues in defining a paging policy: when to fetch a page, when to remove/replace a page, and where to place pages. Discuss one aspect of each issue.
Consider the following experiment and explain the observations. A program is run by itself on a paging machine.
The program begins execution with its first procedure page. As it runs, the pages it needs are demand-paged into available page frames. The number of available page frames is much larger than the number of pages in the program. There is also a dial external to the computer that allows a person to set the maximum number of page frames the program may use. Initially, the dial is set to 2 frames and the program is run to completion. The dial is then set to 3 frames and again the program is run to completion. This process is continued until the dial is eventually set to the number of available page frames in real storage, and the program is run for the last time.
For each run, the execution time of the program is recorded. Observations: as the dial is changed from 2 to 3 to 4, the execution time improves dramatically. From 4 to 5 to 6, the execution time still improves each time, but less dramatically. With settings of 7 and higher, the execution time is essentially constant.
Assume a disk with 200 tracks and that the disk request queue has random requests in it. The requested tracks, in the order received, are: 55, 58, 39, 18, 90, 160, 150, 38, and 184. Assuming that the disk head is at track 100 and is moving toward track 200, show the sequence in which the requests will be serviced and the total number of tracks traversed for each of the following scheduling policies: FCFS, SSTF, and SCAN. (A sketch that computes these sequences appears after this list.)
(Exam) Semaphores: Using semaphores, implement a writer's-priority solution to the readers/writers problem.
(Exam) Deadlock: A system that uses the Banker's Algorithm for deadlock avoidance has five (5) processes (1, 2, 3, 4, and 5) and four (4) types of resources (A, B, C, and D). There are multiple units of each type. Is the following state safe or not? If it is, show how the processes can complete. If not, show how they can deadlock.
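The disk-scheduling question can be checked mechanically. The following C sketch (ours, not from the course materials) computes the service order and the total head movement for FCFS, SSTF, and SCAN, assuming the head starts at track 100 moving toward track 200 and that SCAN reverses direction after the last request on its path:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define N 9

static const int requests[N] = {55, 58, 39, 18, 90, 160, 150, 38, 184};
static const int start = 100;               /* initial head position */

/* Print a service order and the total number of tracks traversed. */
static void report(const char *name, const int order[], int n)
{
    int head = start, total = 0;
    printf("%s:", name);
    for (int i = 0; i < n; i++) {
        total += abs(order[i] - head);
        head = order[i];
        printf(" %d", order[i]);
    }
    printf("   (total movement: %d tracks)\n", total);
}

static int cmp_int(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;
}

int main(void)
{
    int order[N];

    /* FCFS: service requests in the order received. */
    memcpy(order, requests, sizeof requests);
    report("FCFS", order, N);

    /* SSTF: always pick the pending request closest to the current head. */
    int pending[N], head = start;
    memcpy(pending, requests, sizeof requests);
    for (int i = 0; i < N; i++) {
        int best = i;
        for (int j = i; j < N; j++)
            if (abs(pending[j] - head) < abs(pending[best] - head))
                best = j;
        int tmp = pending[i]; pending[i] = pending[best]; pending[best] = tmp;
        order[i] = pending[i];
        head = pending[i];
    }
    report("SSTF", order, N);

    /* SCAN: keep moving toward track 200, then sweep back down. */
    int sorted[N], k = 0;
    memcpy(sorted, requests, sizeof requests);
    qsort(sorted, N, sizeof(int), cmp_int);
    for (int i = 0; i < N; i++)             /* requests at or above the head, ascending */
        if (sorted[i] >= start) order[k++] = sorted[i];
    for (int i = N - 1; i >= 0; i--)        /* remaining requests, descending */
        if (sorted[i] < start) order[k++] = sorted[i];
    report("SCAN", order, N);

    return 0;
}
```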
Outline: resources, resource management, deadlock.
Resources. A resource is anything required by a process. Anything - the OS as resource manager. Process - owns the resource. Required - no resource, no progress; adaptive processes. Resources - sharable, serially reusable, and consumable.
Resource Management. Resource management - protection, economy, and convenience. Fairness defined by policy and provided by mechanism. The deadlock problem. Resource Managers.
Generically, most resource managers behave the same way. Specific differences arise (mainly) from resource characteristics and management policies. For example: printers can’t be shared, and non-sharing printer management is inconvenient and expensive. Resource Models. Resource managers model resources abstractly.
There are u units of the resource. Processes request up to u resource units. There are always 0 ≤ a ≤ u resource units available. These and other resource properties are stored in data structures maintained by the resource manager.
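As a concrete illustration, a manager might keep these properties in a per-resource descriptor along the lines of the C struct below; the layout and field names are hypothetical, not taken from the notes:

```c
#include <stdbool.h>

/* Hypothetical per-resource descriptor a manager might maintain. */
typedef struct resource {
    const char *name;        /* e.g. "printer" or "frame pool"          */
    unsigned    total;       /* u: total units of the resource          */
    unsigned    available;   /* a: units currently free, 0 <= a <= u    */
    bool        sharable;    /* can several processes hold it at once?  */
    bool        consumable;  /* do units disappear when used?           */
    /* ... plus an allocation queue, per-process holdings, policy data ... */
} resource_t;
```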
Resource Characteristics. A resource’s characteristics determine (in part) how it’s managed. There are lots of possible characteristics: resource durability (reusable vs consumable), resource multiplicity (static vs dynamic), and resource sharing (sharable vs serially reusable). Resource Durability. Resource durability determines how a resource reacts to being used. A reusable resource remains after being used.
Disks, networks, and CPUs are reusable resources. A consumable resource disappears after being used. Network packets and IPC messages are consumable resources. Resource Multiplicity. A static resource has a fixed or slowly changing number of units. Disks and CPUs are static resources. Reusable resources tend to be static.
A dynamic resource has a varying number of units. Consumable resources are necessarily dynamic: they have to be created and consumed. Resource Sharing. A sharable resource can be used by more than one process at the same time.
Input devices tend to be sharable resources. A serially reusable resource can be used by at most one process at a time. Output devices tend to be serially reusable. General Management. The typical resource management cycle is: request, grant, and release. Management Refinements. A request may fail, or the requestor could be blocked.
The requestor is unblocked when the resource is granted. If the request fails, the requestor can try again later. Requestors can also change their request: they might ask for less, or for a different but similar resource.
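A minimal sketch of such a synchronous manager for a multi-unit resource, assuming POSIX threads: request() blocks until units are granted, try_request() fails immediately so the caller can retry later or ask for something else, and release() wakes blocked requestors. The interface is illustrative only:

```c
#include <pthread.h>
#include <stdbool.h>

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  freed;      /* signaled when units are released */
    unsigned        available;  /* units currently free             */
} pool_t;

/* Blocking request: wait until 'units' can be granted. */
void request(pool_t *p, unsigned units)
{
    pthread_mutex_lock(&p->lock);
    while (p->available < units)            /* block until enough units exist */
        pthread_cond_wait(&p->freed, &p->lock);
    p->available -= units;
    pthread_mutex_unlock(&p->lock);
}

/* Non-blocking request: return false if the units aren't available right now. */
bool try_request(pool_t *p, unsigned units)
{
    bool granted = false;
    pthread_mutex_lock(&p->lock);
    if (p->available >= units) {
        p->available -= units;
        granted = true;
    }
    pthread_mutex_unlock(&p->lock);
    return granted;                         /* caller may retry or ask for less */
}

/* Release units and wake any blocked requestors. */
void release(pool_t *p, unsigned units)
{
    pthread_mutex_lock(&p->lock);
    p->available += units;
    pthread_cond_broadcast(&p->freed);
    pthread_mutex_unlock(&p->lock);
}
```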
Asynchronous Management. Asynchronous resource management provides non-blocking behavior: the requestor receives a request id; querying the request id eventually indicates a grant or a deny; the requestor can block on the id until the request is resolved. Synchronous resource management can also provide query functions.
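A toy sketch of the asynchronous style for a single multi-unit resource, again assuming POSIX threads: request_async() returns a request id immediately, query() polls it, wait_request() blocks on it, and release() grants pending requests as units come back. For brevity the sketch models only pending/granted (no deny path), and every name in it is made up:

```c
#include <pthread.h>

#define MAX_REQ 64

typedef enum { REQ_FREE, REQ_PENDING, REQ_GRANTED } req_status_t;

static pthread_mutex_t lock   = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  change = PTHREAD_COND_INITIALIZER;
static unsigned        available = 4;          /* units currently free    */
static req_status_t    status[MAX_REQ];        /* one slot per request id */
static unsigned        wanted[MAX_REQ];        /* units asked for per id  */

/* Issue a request and return its id without blocking. */
int request_async(unsigned units)
{
    pthread_mutex_lock(&lock);
    int id = 0;
    while (status[id] != REQ_FREE) id++;       /* find a free slot (no bounds check in this toy) */
    wanted[id] = units;
    if (available >= units) {                  /* grant immediately if possible */
        available -= units;
        status[id] = REQ_GRANTED;
    } else {
        status[id] = REQ_PENDING;
    }
    pthread_mutex_unlock(&lock);
    return id;
}

/* Poll a request's state. */
req_status_t query(int id)
{
    pthread_mutex_lock(&lock);
    req_status_t s = status[id];
    pthread_mutex_unlock(&lock);
    return s;
}

/* Block until the request is granted. */
void wait_request(int id)
{
    pthread_mutex_lock(&lock);
    while (status[id] != REQ_GRANTED)
        pthread_cond_wait(&change, &lock);
    pthread_mutex_unlock(&lock);
}

/* Return units and grant any pending requests that now fit. */
void release(int id)
{
    pthread_mutex_lock(&lock);
    available += wanted[id];
    status[id] = REQ_FREE;
    for (int i = 0; i < MAX_REQ; i++)
        if (status[i] == REQ_PENDING && available >= wanted[i]) {
            available -= wanted[i];
            status[i] = REQ_GRANTED;
        }
    pthread_cond_broadcast(&change);
    pthread_mutex_unlock(&lock);
}
```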
Sharing can lead to subtle race conditions. Set-Up. A system has one unit each of resources A and B, allocated synchronously.
Two threads concurrently execute:
t1: a = request(A); b = request(B)
t2: b = request(B); a = request(A)
What happens? Analysis. The resulting allocations depend on the threads’ interleaving.
Case 1: t1: a = request(A), t1: b = request(B), t2: b = request(B), t2: a = request(A). OK.
Case 2: t1: a = request(A), t2: b = request(B), t1: b = request(B), t2: a = request(A). ?
Case 1 seems fine, but what about case 2? Deadlock.
Each thread holds a resource needed by the other thread. Each thread relies on the other thread to make progress. Each thread can't make progress because of the other thread.
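Case 2 is easy to reproduce, for example with POSIX mutexes standing in for the single-unit resources A and B (a sketch, not course code). With the unlucky interleaving, each thread grabs one lock and then waits forever for the other:

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

static void *t1(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&A);         /* t1 holds A                      */
    sleep(1);                       /* encourage the bad interleaving  */
    pthread_mutex_lock(&B);         /* ... and waits for B             */
    puts("t1 got both");
    pthread_mutex_unlock(&B);
    pthread_mutex_unlock(&A);
    return NULL;
}

static void *t2(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&B);         /* t2 holds B                      */
    sleep(1);
    pthread_mutex_lock(&A);         /* ... and waits for A             */
    puts("t2 got both");
    pthread_mutex_unlock(&A);
    pthread_mutex_unlock(&B);
    return NULL;
}

int main(void)
{
    pthread_t x, y;
    pthread_create(&x, NULL, t1, NULL);
    pthread_create(&y, NULL, t2, NULL);
    pthread_join(x, NULL);          /* never returns: the threads deadlock */
    pthread_join(y, NULL);
    return 0;
}
```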
This is the deadlock condition. Deadlock Background. Deadlock is an old problem. Train track management; traffic gridlock. Both user processes and OSs can cause and suffer from deadlock.
Finite, non-sharable resources are at the heart of the problem. Deadlock is a global condition, which makes it difficult to find and deal with. Deadlock Handling.
There are four deadlock-handling techniques: prevent deadlock, avoid deadlock, detect and recover from deadlock, and use ad hoc techniques. Historically, ad hoc techniques have prevailed. Distributed and networked systems make ad hoc techniques less useful. Ad Hoc Techniques. Ad hoc techniques, a.k.a. the therapeutic reboot: the traditional technique for handling deadlock (or any other problem).
Detecting when to reboot can be tricky. It’s quiet in here, too quiet. There are no blinking lights, or they’re blinking too fast. May or may not work in distributed environments. Deadlock Prevention.
Deal with deadlock by making it impossible. This can range from tricky to impossible. User processes can always deadlock over their own resources. The idea is to eliminate the conditions under which deadlock arises.
Deadlock Conditions. The four deadlock conditions: exclusive resource access (no sharing); incremental allocation (each allocation is a potential blocking point for the process); circular waiting for resources; and no preemption (allocated resources can’t be taken back). These are necessary conditions; if any is missing, deadlock can’t occur. Preventing Deadlock. Eliminating any of the four conditions makes deadlock impossible. Given two non-sharable resources A and B (one unit each): t1 has A and blocks allocating B; t2 has B and blocks allocating A.
This is deadlock. How can it be prevented? Shared Resources. A shared resource can’t cause deadlock.
Processes use the resource without waiting. t1 doesn’t block on B; t2 doesn’t block on A. Virtualization makes resources sharable.
A printer is unsharable. Virtualizing the printer with a print queue makes it sharable.
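A rough sketch of that idea, assuming POSIX threads: user processes "print" by enqueuing a job and never touch the device, while a single spooler thread is the only code that drives the real (unsharable) printer. All names and details are illustrative:

```c
#include <pthread.h>
#include <stdio.h>

#define QSIZE 16

static const char     *queue[QSIZE];
static int             head, tail, count;
static pthread_mutex_t qlock    = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  nonempty = PTHREAD_COND_INITIALIZER;
static pthread_cond_t  nonfull  = PTHREAD_COND_INITIALIZER;

/* What user processes call: enqueue a job, don't touch the printer. */
void print(const char *job)
{
    pthread_mutex_lock(&qlock);
    while (count == QSIZE)                  /* only blocks if the queue is full */
        pthread_cond_wait(&nonfull, &qlock);
    queue[tail] = job;
    tail = (tail + 1) % QSIZE;
    count++;
    pthread_cond_signal(&nonempty);
    pthread_mutex_unlock(&qlock);
}

/* The spooler: the only thread that ever holds the real printer. */
void *spooler(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&qlock);
        while (count == 0)
            pthread_cond_wait(&nonempty, &qlock);
        const char *job = queue[head];
        head = (head + 1) % QSIZE;
        count--;
        pthread_cond_signal(&nonfull);
        pthread_mutex_unlock(&qlock);
        printf("printing: %s\n", job);      /* stands in for driving the device */
    }
    return NULL;
}
```

The printer is still used serially, but only by the spooler; ordinary processes no longer compete for it and block only if the bounded queue fills.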
Bulk Allocation. Incremental allocation causes two problems: each allocation request may block, and previously acquired resources are held during the block. Bulk allocation makes all allocations at once, without interruption. t1 allocates A and B with one request; so does t2. One of them loses and blocks. Analysis. Bulk allocation is similar to batch-job allocation policies. Bulk allocation is difficult to do in general.
Realistic general resource estimates are difficult to make; resources may depend on data. Bulk allocation leads to pessimistic allocation requests: better take extra, just in case. Ordered Allocation. Deadlocked processes wait for resources in a cycle. Prevent cycles by ordering allocations: totally order the resources, and require that a process holding R_i may request R_j only if i < j.
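A minimal sketch of the idea with POSIX mutexes standing in for single-unit resources: every thread acquires whatever subset it needs in increasing index order, so no cycle of waiting threads can form. The helper functions are hypothetical:

```c
#include <pthread.h>

#define NRES 4
static pthread_mutex_t res[NRES] = {
    PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
    PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER
};

/* Acquire the resources whose indices are listed in 'want' (given in any
 * order) by locking them in ascending index order. */
void acquire_ordered(const int *want, int n)
{
    int taken[NRES] = {0};
    for (int i = 0; i < n; i++)
        taken[want[i]] = 1;
    for (int i = 0; i < NRES; i++)      /* always lock the lowest index first */
        if (taken[i])
            pthread_mutex_lock(&res[i]);
}

/* Release everything that was acquired. */
void release_all(const int *want, int n)
{
    for (int i = 0; i < n; i++)
        pthread_mutex_unlock(&res[want[i]]);
}
```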