Thursday, August 27, 2009

RESOURCE-ALLOCATION GRAPH

A resource-allocation graph has a vertex for each process and each resource. A request edge Pi -> Rj means process Pi is waiting for an instance of resource Rj; an assignment edge Rj -> Pi means an instance of Rj has been allocated to Pi.
Example of a Resource Allocation Graph:

>P1 is holding an instance of R2 and requests an instance of R1.
>P2 is holding an instance of R1 and R2 and requests an instance of R3.
>P3 is holding an instance of R3.
>This is not a deadlock: P3 is waiting for nothing, so it can finish and release R3, after which P2 and then P1 can proceed.




Resource Allocation Graph With A Deadlock:


>P1 is holding an instance of R2 and requests an instance of R1.

>P2 is holding an instance of R2 and R1 and requests an instance of R3.

>P3 is holding an instance of R3 and requests an instance of R2.

>This is a deadlock: the cycle P1 -> R1 -> P2 -> R3 -> P3 -> R2 -> P1 cannot be broken, because every process on it is waiting.

Resource Allocation Graph With A Cycle But No Deadlock:



>P1 is holding an instance of R2 and requests an instance of R1.

>P2 is holding an instance of R1.

>P3 is holding an instance of R1 and requests an instance of R2.

>P4 is holding an instance of R2.


>This is not a deadlock: the cycle P1 -> R1 -> P3 -> R2 -> P1 can be broken, because P2 and P4 request nothing and will eventually release their instances of R1 and R2.
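The two examples above show why a plain cycle check is not enough when resources have multiple instances: both graphs contain a cycle, but only one is deadlocked. A minimal sketch (the node names match the examples; the adjacency-list representation is my own choice):

```python
def has_cycle(edges):
    """DFS cycle detection on a directed graph given as {node: [successors]}."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}
    def dfs(n):
        color[n] = GRAY
        for m in edges.get(n, ()):
            c = color.get(m, WHITE)
            if c == GRAY:            # back edge: cycle found
                return True
            if c == WHITE and dfs(m):
                return True
        color[n] = BLACK
        return False
    return any(color.get(n, WHITE) == WHITE and dfs(n) for n in list(edges))

# The deadlocked graph: request edges P -> R, assignment edges R -> P.
deadlocked = {
    "P1": ["R1"], "R1": ["P2"], "P2": ["R3"],
    "R3": ["P3"], "P3": ["R2"], "R2": ["P1", "P2"],
}
# The cycle-but-no-deadlock graph: P2 and P4 hold instances but request nothing.
cycle_only = {
    "P1": ["R1"], "R1": ["P2", "P3"], "P3": ["R2"],
    "R2": ["P1", "P4"], "P2": [], "P4": [],
}
print(has_cycle(deadlocked), has_cycle(cycle_only))  # True True
```

Since the check reports a cycle in both graphs, deciding deadlock with multi-instance resources needs the stronger detection algorithm described further down, which asks whether some process can still finish and release its instances.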



Resource-Allocation Graph For Deadlock Avoidance:



>P1 is holding R1 and has a claim edge to R2.

>P2 is requesting R1 and has a claim edge to R2.

>This is not a deadlock. (A claim edge Pi -> Rj, usually drawn dashed, indicates that Pi may request Rj in the future.)




Unsafe State In Resource-Allocation Graph:

>P1 is holding R1 and has a claim edge to R2.

>P2 is holding R2 and requests R1.

>This is not a deadlock yet, but the state is unsafe: if P1 turns its claim on R2 into a request, the cycle P1 -> R2 -> P2 -> R1 -> P1 forms.
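The avoidance rule behind these two examples can be sketched as code: grant a request only if converting it into an assignment edge leaves the graph, claim edges included, free of cycles. This is an illustrative model under the assumption of single-instance resources, not any particular OS implementation:

```python
def has_cycle(edges):
    """DFS cycle detection on {node: set(successors)}."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}
    def dfs(n):
        color[n] = GRAY
        for m in edges.get(n, ()):
            c = color.get(m, WHITE)
            if c == GRAY:
                return True
            if c == WHITE and dfs(m):
                return True
        color[n] = BLACK
        return False
    return any(color.get(n, WHITE) == WHITE and dfs(n) for n in list(edges))

def safe_to_grant(assign, requests, claims, proc, res):
    """Tentatively turn proc's claim/request on res into an assignment edge
    and check that the combined graph (claim edges included) stays acyclic."""
    trial = {}
    for graph in (assign, requests, claims):
        for node, succs in graph.items():
            trial.setdefault(node, set()).update(succs)
    trial.setdefault(proc, set()).discard(res)   # claim/request edge removed...
    trial.setdefault(res, set()).add(proc)       # ...replaced by assignment edge
    return not has_cycle(trial)

# The avoidance example above: P1 holds R1; P2 requests R1; both claim R2.
assign = {"R1": {"P1"}}
requests = {"P2": {"R1"}}
claims = {"P1": {"R2"}, "P2": {"R2"}}
print(safe_to_grant(assign, requests, claims, "P2", "R2"))  # False: unsafe
print(safe_to_grant(assign, requests, claims, "P1", "R2"))  # True: safe
```

Granting R2 to P2 produces exactly the unsafe cycle described above, so the avoider refuses it even though no deadlock exists yet; granting R2 to P1 is safe.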

Thursday, August 20, 2009

RECOVERY FROM DEADLOCK

• Abort all deadlocked processes: this method clearly breaks the deadlock cycle, but at great expense, since these processes may have computed for a long time, and the results of these partial computations must be discarded and probably recomputed later.
• Abort one process at a time until the deadlock cycle is eliminated: this method incurs considerable overhead, since after each process is aborted a deadlock-detection algorithm must be invoked to determine whether any processes are still deadlocked.
• Minimum cost: when choosing which process to abort, pick the victim whose termination costs the least (e.g., the least computation already done or the fewest resources held).
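The "abort one at a time" policy is itself a small loop: run detection, terminate the minimum-cost victim on the cycle, repeat. A toy sketch over a wait-for graph; the process names, costs, and graph are invented for illustration:

```python
def find_cycle(wait_for):
    """Return the set of nodes on some cycle of the wait-for graph, or None."""
    path, seen = [], set()
    def dfs(n):
        if n in path:
            return set(path[path.index(n):])
        if n in seen:
            return None
        seen.add(n)
        path.append(n)
        for m in wait_for.get(n, ()):
            cycle = dfs(m)
            if cycle:
                return cycle
        path.pop()
        return None
    for n in list(wait_for):
        cycle = dfs(n)
        if cycle:
            return cycle
    return None

def recover(wait_for, cost):
    """Abort minimum-cost victims until detection finds no cycle."""
    aborted = []
    while (cycle := find_cycle(wait_for)):
        victim = min(cycle, key=cost.get)      # minimum-cost victim
        aborted.append(victim)
        wait_for.pop(victim, None)             # abort: drop its wait edges
        for succs in wait_for.values():
            succs.discard(victim)              # nobody waits on it any more
    return aborted

wf = {"P1": {"P2"}, "P2": {"P3"}, "P3": {"P1"}}
print(recover(wf, {"P1": 10, "P2": 3, "P3": 7}))  # ['P2']
```

Aborting P2, the cheapest process on the cycle, is enough here; real systems must also weigh fairness, or the same victim may be selected (starved) every time.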

DEADLOCK DETECTION

• An algorithm that examines the state of the system to determine whether a deadlock has occurred
• An algorithm to recover from the deadlock
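For resources with multiple instances, the standard detection algorithm (as presented in Silberschatz's textbook) repeatedly finds a process whose outstanding request can be met by the currently available instances, pretends it finishes and releases everything, and reports the processes that can never proceed. A compact sketch with made-up matrices:

```python
def detect_deadlock(available, allocation, request):
    """available: free instances per resource type.
    allocation[i], request[i]: what process i holds / still wants."""
    work = list(available)
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, req in enumerate(request):
            # A process whose request fits in `work` can run to completion
            # and give everything back.
            if not finished[i] and all(r <= w for r, w in zip(req, work)):
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return [f"P{i}" for i, done in enumerate(finished) if not done]

# One resource type with 2 instances, none free:
# P0 holds 1 and wants 1 more; P1 holds 1 and wants 1 more -> deadlock.
print(detect_deadlock([0], [[1], [1]], [[1], [1]]))   # ['P0', 'P1']
# If P1 needs nothing more, it finishes and frees its instance: no deadlock.
print(detect_deadlock([0], [[1], [1]], [[1], [0]]))   # []
```

The second call shows the cycle-but-no-deadlock situation from the graphs above: a waiting cycle exists, but some process can still complete and break it.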

DEADLOCK PREVENTION

>By ensuring that at least one of the four necessary conditions (mutual exclusion, hold and wait, no preemption, circular wait) cannot hold, we can prevent the occurrence of a deadlock.
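The condition most commonly attacked in practice is circular wait: impose a total order on resources and require every thread to acquire them in ascending order, so no cycle of waits can form. A small sketch with invented resource names:

```python
import threading

locks = {name: threading.Lock() for name in ("R1", "R2", "R3")}
ORDER = {"R1": 1, "R2": 2, "R3": 3}   # the globally agreed total order

def acquire_in_order(names):
    """Acquire the named locks in ascending global order; return that order."""
    ordered = sorted(names, key=ORDER.get)
    for name in ordered:
        locks[name].acquire()
    return ordered

def release_all(names):
    for name in names:
        locks[name].release()

# A thread that needs R2 and R1 still locks R1 first, so two threads needing
# overlapping resources can never each hold one lock while waiting for the
# other's: the ordering makes a circular wait impossible.
held = acquire_in_order(["R2", "R1"])
print(held)          # ['R1', 'R2']
release_all(held)
```

This trades some concurrency (a thread may lock a resource earlier than it strictly needs it) for the guarantee that the circular-wait condition never holds.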

METHODS FOR HANDLING DEADLOCKS

• We can use a protocol to ensure that the system will never enter a deadlock state.
• We can allow the system to enter a deadlock state and then recover.
• We can ignore the problem altogether and pretend that deadlocks never occur in the system. This solution is the one used by most operating systems, including UNIX.
• Deadlock prevention is a set of methods for ensuring that at least one of the necessary conditions cannot hold.
• Deadlock avoidance, on the other hand, requires that the operating system be given in advance additional information concerning which resources a process will request and use during its lifetime.

DEADLOCK CHARACTERIZATION

A deadlock situation can arise if the following four conditions hold simultaneously in a system:

• Mutual exclusion: at least one resource must be held in a non-sharable mode; that is, only one process at a time can use the resource. If another process requests that resource, the requesting process must be delayed until the resource has been released.
• Hold and wait: there must exist a process that is holding at least one resource and is waiting to acquire additional resources that are currently being held by other processes.
• No preemption: resources cannot be preempted; that is, a resource can be released only voluntarily by the process holding it, after that process has completed its task.
• Circular wait: there must exist a set {P0, P1, ..., Pn} of waiting processes such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, ..., Pn-1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0.

Thursday, August 13, 2009

Multiprocessor Scheduling

In computer science, multiprocessor scheduling is an NP-complete optimization problem. The problem statement is: "Given a set J of jobs, where job ji has length li, and a number of processors m, what is the minimum possible time required to schedule all jobs in J on the m processors such that none overlap?" The applications of this problem are numerous but are, as the name suggests, most strongly associated with the scheduling of computational tasks in a multiprocessor environment.
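Because the exact problem is NP-complete, practical schedulers settle for heuristics. One classic is longest-processing-time (LPT) greedy scheduling: sort jobs by decreasing length and always hand the next job to the least-loaded processor. A sketch with arbitrary job lengths:

```python
import heapq

def lpt_makespan(job_lengths, m):
    """Greedy LPT assignment of jobs to m processors; returns the makespan."""
    loads = [0] * m                          # min-heap of per-processor loads
    for length in sorted(job_lengths, reverse=True):
        load = heapq.heappop(loads)          # least-loaded processor
        heapq.heappush(loads, load + length)
    return max(loads)

# Jobs of lengths 7,5,4,3,3 on 2 processors: LPT yields makespan 12,
# while the optimum is 11 ({7,4} vs {5,3,3}) -- greedy is good, not perfect.
print(lpt_makespan([7, 5, 4, 3, 3], 2))   # 12
```

LPT is provably within a factor of 4/3 - 1/(3m) of optimal, which is usually an acceptable trade for its simplicity.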

Real-Time Scheduling

>Static table-driven approach
>For periodic tasks.
>Input for analysis consists of: periodic arrival time, execution time, ending time, priorities.
>Inflexible to dynamic changes.
>General policy: earliest deadline first.
>Static priority-driven preemptive scheduling
>For use with non-RT systems: priority-based preemptive scheduling.
>Priority assignment based on real-time constraints.
>Example: rate monotonic algorithm.
>Dynamic planning-based scheduling
>After a task arrives, but before its execution begins, a schedule is prepared that includes the new as well as the existing tasks.
>If the new task can be accommodated without affecting the existing schedules, nothing is revised.
>Otherwise schedules are revised to accommodate the new task.
>Remember that new tasks may be rejected if their deadlines cannot be met.
>Dynamic best-effort scheduling
>Used in most commercial real-time systems of today.
>Tasks are aperiodic, so no static scheduling is possible.
>Some short-term scheduling discipline, such as shortest deadline first, is used.
>Until a task completes, we do not know whether it has met its deadline.
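The earliest/shortest-deadline-first policy named above can be sketched in a few lines: at each decision point, run the ready task whose deadline is closest. This is a simplified non-preemptive model; the task names and timings are invented:

```python
import heapq

def edf_order(tasks):
    """tasks: list of (name, arrival, exec_time, deadline).
    Returns the execution order under non-preemptive earliest-deadline-first."""
    tasks = sorted(tasks, key=lambda t: t[1])         # by arrival time
    ready, order, clock, i = [], [], 0, 0
    while i < len(tasks) or ready:
        while i < len(tasks) and tasks[i][1] <= clock:
            name, _, exec_time, deadline = tasks[i]
            heapq.heappush(ready, (deadline, name, exec_time))
            i += 1
        if not ready:
            clock = tasks[i][1]                        # idle until next arrival
            continue
        deadline, name, exec_time = heapq.heappop(ready)  # closest deadline wins
        clock += exec_time
        order.append(name)
    return order

jobs = [("A", 0, 10, 30), ("B", 0, 3, 10), ("C", 0, 10, 25)]
print(edf_order(jobs))   # ['B', 'C', 'A']
```

B runs first despite being submitted alongside longer jobs, because its deadline is the tightest; a preemptive EDF scheduler would additionally interrupt a running task when a tighter-deadline task arrives.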

Thread Scheduling

>An application can be implemented as a set of threads that cooperate and execute concurrently in the same address space. Criterion: performance improves when related threads run in parallel.
>Load sharing: pool of threads, pool of processors.
>Gang scheduling: Bunch of related threads scheduled together.
>Dedicated processor assignment: Each program gets as many processors as there are parallel threads.
>Dynamic scheduling: More like demand scheduling.
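"Load sharing" above means a common pool of ready threads served by a pool of processors. A user-level analogue can be sketched with Python's standard thread pool; the worker count and the job function are invented for the example:

```python
from concurrent.futures import ThreadPoolExecutor

def job(n):
    return n * n                 # stand-in for a unit of real work

# Four workers drain one shared queue of jobs: any idle worker takes the
# next ready job, which is exactly the load-sharing idea.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(job, range(8)))
print(results)   # [0, 1, 4, 9, 16, 25, 36, 49]
```

`map` preserves submission order in its results even though the jobs may complete out of order on different workers; gang scheduling, by contrast, would insist that related jobs run simultaneously rather than whenever a worker frees up.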

Monday, August 10, 2009

THREAD LIBRARY
[ ] The thread library schedules user-level threads to run on LWPs.
[ ] Thread management is done by a user-level thread library.
[ ] The thread library is responsible for scheduling user threads on the available schedulable entities; this makes context switching of threads very fast, as it avoids system calls. However, it increases complexity and the likelihood of priority inversion, as well as suboptimal scheduling without extensive (and expensive) coordination between the userland scheduler and the kernel scheduler.
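The user-level scheduling described above can be mimicked with generators: a toy "thread library" round-robins cooperative tasks entirely in user space, with no kernel involvement, which is why such context switches are cheap. This is purely a model; real libraries multiplex threads onto kernel-schedulable LWPs:

```python
from collections import deque

def task(name, steps):
    """A cooperative 'thread': each yield is a voluntary context switch."""
    for i in range(steps):
        yield f"{name}:{i}"

def run(tasks):
    """A user-space round-robin scheduler over cooperative tasks."""
    ready, trace = deque(tasks), []
    while ready:
        t = ready.popleft()
        try:
            trace.append(next(t))   # run the task until it yields
            ready.append(t)         # back of the ready queue
        except StopIteration:
            pass                    # task finished; drop it
    return trace

print(run([task("A", 2), task("B", 2)]))  # ['A:0', 'B:0', 'A:1', 'B:1']
```

The model also exposes the library's weakness: if one task blocked in a system call instead of yielding, every task in the process would stall, which is the "blocking problem" of the many-to-one model below.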
Multi-threading Models
Many-to-One Model
[ ]Many user-level threads mapped to single kernel thread
* Thread management is done in user space
* Blocking problem
* No support for running in parallel on MP
* Used on systems that do not support kernel threads.
* Green-threads library in Solaris 2
One-to-One Model
[ ] Each user-level thread maps to a kernel thread
* Creating a user thread requires creating the corresponding kernel thread
* Overhead
* Restricts the number of threads supported by the OS
•Examples
* Windows NT/2000
* OS/2
Many-to-Many Model
[ ] Multiplex many user-level threads to a smaller or equal number of kernel threads
[ ] Allows many user level threads to be mapped to many kernel threads.
[ ] Allows OS to create a sufficient number of kernel threads.
•Examples
* Solaris 2, IRIX, HP-UX, Tru64 UNIX
* Windows NT/2000 with the ThreadFiber package
Kernel Thread
[ ] Kernel maintains context information for the process and the threads
[ ] Supported by the Kernel
[ ] Slower to create and manage threads than are user threads
[ ] If a thread performs a blocking system call, then the kernel can schedule another thread in the application for execution.
[ ] Multiple threads are able to run in parallel on multiprocessors.
[ ] Scheduling is done on a thread basis
Examples
{}Windows 95/98/NT/2000
{}Solaris
{}Tru64 UNIX
{}BeOS
{}Linux
Thread
[ ] Lightweight process (LWP)
[ ] Basic unit of CPU utilization
[ ] A thread comprises a thread ID, a program counter, a register set, and a stack
[ ] A thread shares with other threads belonging to the same process its code section, data section, and other OS resources, such as open files and signals
[ ] A process with multiple threads can do more than one task at a time
Single-Threaded Process
Single threaded programs have one path of execution, and multi-threaded programs have two or more paths of execution. Single threaded programs can perform only one task at a time, and have to finish each task in sequence before they can start another. For most programs, one thread of execution is all you need, but sometimes it makes sense to use multiple threads in a program to accomplish multiple simultaneous tasks.
Multi-Threaded Process
Multithreading as a widespread programming and execution model allows multiple threads to exist within the context of a single process. These threads share the process' resources but are able to execute independently. The threaded programming model provides developers with a useful abstraction of concurrent execution. However, perhaps the most interesting application of the technology is when it is applied to a single process to enable parallel execution on a multiprocessor system.
Multithreading computers have hardware support to efficiently execute multiple threads. These are distinguished from multiprocessing systems (such as multi-core systems) in that the threads have to share the resources of a single core: the computing units, the CPU caches, and the translation lookaside buffer (TLB). Where multiprocessing systems include multiple complete processing units, multithreading aims to increase utilization of a single core by leveraging thread-level as well as instruction-level parallelism. As the two techniques are complementary, they are sometimes combined in systems with multiple multithreading CPUs and in CPUs with multiple multithreading cores.
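To make "threads share the process' resources" concrete: the threads below all update one counter in the shared address space, guarded by a lock so the increments do not race. A minimal sketch using Python's threading module; the counts are arbitrary:

```python
import threading

counter = 0                         # lives in the process's shared data section
lock = threading.Lock()

def worker(increments):
    global counter
    for _ in range(increments):
        with lock:                  # shared data needs mutual exclusion
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # 4000: every thread saw and updated the same variable
```

Without the lock, interleaved read-modify-write sequences could lose updates; with it, the final count is deterministic, illustrating both the benefit (shared state, no copying) and the cost (synchronization) of the threaded model.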
User Threads
[ ] Thread management done by user-level threads library
[ ] No support from the kernel (The kernel is not aware of the existence of threads)
[ ] Fast to create and manage
[ ] Block all threads for a blocking system call if the kernel is single threaded
[ ] Cannot take advantage of multi-processors
Three primary thread libraries:
* POSIX Pthreads
* Win32 threads
* Java threads
BENEFITS OF MULTITHREADED PROGRAMMING
 Responsiveness
 Resource Sharing
 Economy
 Utilization of MP Architectures