Tanenbaum chapter 2

Mutual Exclusion


Example: Using a queue in shared memory, while one process is executing ``add'', no other process should be allowed to access the queue data structure.
Example: In a file update, where records are read, modified and written back, no other process should be allowed access to a record while one process is changing it.
Example: On a single lane bridge, no other vehicle should be allowed on the bridge while a vehicle is on it.

These are all examples of access to a resource by a process. If no other process is allowed access during this time, then access must be mutually exclusive.

Critical section

Suppose you implemented mutual exclusion by locking out all other processes for the whole of a program's run: these examples would exclude too much. A critical section is the smallest part of the program that must have exclusive access to the resource(s).

Disabling interrupts

In an interrupt driven system, context switches from one process to another can only occur on interrupts (timer, I/O device, etc). If a process disables all interrupts then it cannot be switched out.

On entry to the critical section the process can disable all interrupts, and on exit from it can enable them again.

    int add(queue *q, char ch)
    {
        if (full(q))
            return 0;
        DISABLE INTERRUPTS
        if (q->empty) {
            q->empty = 0;
            q->front = q->end = 0;
        } else
            q->front = (q->front + 1) % SIZE;
        q->elmts[q->front] = ch;
        ENABLE INTERRUPTS
        return 1;
    }

This is fine when the O/S does it in supervisor mode. However, it is not something that an ordinary application should be allowed to do (it could hog the CPU), so it is not a general way of implementing exclusion.


Locks

A lock is something that can be set (locked) or unset (unlocked). Locks are used to indicate access to a resource: if the lock is set then a process is in its critical section for that resource, but if it is not set then no process is in its critical section for that resource.

A process tests the lock for access:

    if lock is not set then
        set the lock
        enter critical section
        ...
        leave critical section
        unset the lock
    else
        ...

There is a small problem here: two processes may both be trying to set the lock. P1 tests its value; P2 tests its value; P1 sets the lock and enters its critical section; P2 sets the lock and enters its critical section.

Testing and setting the lock must be an atomic or indivisible operation so that it can be guaranteed that this kind of race for the resource cannot take place.

    if test the lock and set the lock then
        enter critical section
        ...
        leave critical section
        unset the lock
    else
        ...
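C11's atomic_flag provides exactly this indivisible test-and-set (a modern facility, not part of the original notes); a minimal sketch:

```c
#include <stdatomic.h>

/* Sketch using C11 atomics (an assumption: these notes predate C11).
   atomic_flag_test_and_set performs the test and the set as one
   indivisible operation, so two processes cannot both observe "unset". */
int try_lock(atomic_flag *lock)
{
    /* the call returns the OLD value: 0 (clear) means we acquired it */
    return atomic_flag_test_and_set(lock) == 0;
}

void unset_lock_flag(atomic_flag *lock)
{
    atomic_flag_clear(lock);
}
```

The first caller gets the lock; a second caller sees the flag already set and fails, with no window between the test and the set.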

Lock file

A lock file is a file (usually empty) that two or more processes agree to use to signal exclusion: if the lock file exists then a process is in its critical section, but if it does not exist then no process is in its critical section for that resource.

On entry to its critical section a process checks for the file's existence and creates it if it does not exist. On exit it removes the file.

The Unix ``creat'' and ``unlink'' are atomic file operations (ie no other file operation on that file can take place during one of these). If you attempt to creat a file that already exists but does not have write permission, then creat fails.

    int test_and_set(char *lockfile)
    {
        int fd;

        /* mode 0: the created file has no permissions, so a
           second creat on it will fail */
        fd = creat(lockfile, 0);
        if (fd == -1)
            return 0;
        close(fd);
        return 1;
    }

    void unset_lock(char *lockfile)
    {
        unlink(lockfile);
    }

The principal disadvantage of this method is speed: creating and removing files are file system operations involving disk accesses, so they are slow compared to CPU operations.
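The same idea is usually written today with open() and the O_CREAT|O_EXCL flags, which fail atomically if the file already exists; a sketch (the flag combination is standard POSIX, though not mentioned in these notes):

```c
#include <fcntl.h>
#include <unistd.h>

/* Lock file via open(): O_CREAT|O_EXCL guarantees that exactly one of
   several simultaneous callers creates the file; the others get -1. */
int lock_test_and_set(const char *lockfile)
{
    int fd = open(lockfile, O_CREAT | O_EXCL | O_WRONLY, 0600);
    if (fd == -1)
        return 0;          /* file exists: someone else holds the lock */
    close(fd);
    return 1;              /* we created the file: lock acquired */
}

void lock_unset(const char *lockfile)
{
    unlink(lockfile);
}
```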

Test and set instruction

At the opposite extreme, there may be hardware support for locking by special purpose machine instructions. Typical is the test and set instruction.

A test and set instruction is guaranteed to be indivisible. It reads the value of a memory location into a register. After that it sets the memory location to a value. It achieves indivisibility by holding the bus during the read and successive write so that no other memory operations can be done.

    lock:
        if TSL(lock, 1) == 0 then
            lock succeeded
        else
            lock failed

    unlock:
        set lock to 0

Spin locks

A process wants access to a resource. It tests the lock and finds it to be set by another process. What does it do?

A process can ``spin'':

    while test_and_set(lock) fails
        ;

Spin locks are CPU intensive and perform no useful work. On a single CPU the spinning process will be context-switched in and out, consuming its time slices while waiting.
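A spin lock protecting a real critical section can be sketched with C11 atomics and POSIX threads (both assumptions relative to these 1995 notes): two threads each increment a shared counter many times, and the lock ensures no increments are lost.

```c
#include <pthread.h>
#include <stdatomic.h>

static atomic_flag spin = ATOMIC_FLAG_INIT;
static long counter;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        while (atomic_flag_test_and_set(&spin))
            ;                       /* spin until the flag was clear */
        counter++;                  /* critical section */
        atomic_flag_clear(&spin);   /* unlock */
    }
    return NULL;
}

long run_two_workers(void)
{
    pthread_t t1, t2;
    counter = 0;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return counter;    /* 200000 if no updates were lost */
}
```

Without the lock, the two read-modify-write sequences would interleave and some increments would be lost.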

On some multi-processor machines this can be used effectively: with 32 CPUs, useful work can continue without context switching on 31 CPUs while one is spinning. Spin locks are used on such machines when short waits are expected.

Less expensive than spinning is for the process to suspend in some way. It could sleep for a fixed period and try again, or it could suspend indefinitely until some other process wakes it up.


Semaphores

Semaphores are an extension of test-and-set in which the semaphore also handles the sleep/spin mechanism internally, so that it is invisible to the programmer.

Suppose three or more processes P1, P2, P3,... attempt to access an exclusive resource. P1 succeeds; P2 and P3 must both sleep.

A list of these sleeping processes would need to be maintained so that when P1 terminates one of P2 or P3 could be awoken.

When it in turn finishes, the remaining process on the sleeping list would be woken up.

A semaphore is a data-type with a value and a list that it maintains. When the value of the semaphore is one, the list is empty. When the value is zero, the list may be non-empty.

Two operations are allowed on the semaphore: UP and DOWN. DOWN is used to gain access to a resource, UP is used to release it.

    DOWN(process) {
        if value is one then
            set value to zero
            allow process to continue
        else
            place process on sleeping list
    }

    UP {
        increase value
        if the value was zero
            remove a process from sleeping list
            allow it to complete DOWN
    }

Usually the semaphore is initialised to 1, with an empty list, indicating that the resource is available.
Example: Processes P1, P2 and P3 attempt to access a resource controlled by a semaphore.

    sem = 1
    DOWN(P1)                sem = 0, P1 enters its critical section
    DOWN(P2)                P2 suspends because sem == 0
    DOWN(P3)                P3 suspends because sem == 0
    P1 exits its critical section
    UP(P1)                  sem = 1, wake up a process, say P2
    P2 completes DOWN(P2)   sem = 0, P2 enters its critical section
    P2 leaves its critical section
    UP(P2)                  sem = 1, wake up P3
    P3 completes DOWN(P3)   sem = 0, P3 enters its critical section
    P3 leaves its critical section
    UP(P3)                  sem = 1

At the end of this the semaphore has the value 1, so the resource is available.
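POSIX semaphores implement exactly this pair of operations: sem_wait is DOWN and sem_post is UP (a modern API, an assumption relative to these notes). Using sem_trywait, a DOWN that fails instead of sleeping, the state changes above can be observed from a single process:

```c
#include <semaphore.h>

/* Sketch with POSIX semaphores (assumed API, not the one in the notes).
   sem_trywait is a DOWN that returns failure rather than putting the
   caller on the sleeping list, so one process can watch the value. */
int down_try(sem_t *s)
{
    return sem_trywait(s) == 0;   /* 1: value was > 0 and is decremented */
}

void up(sem_t *s)
{
    sem_post(s);                  /* increment; wakes a sleeper if any */
}
```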

This is what the queue looks like with semaphores:

    int add(queue *q, char ch)
    {
        if (full(q))
            return 0;
        DOWN(sem, getpid())
        if (q->empty) {
            q->empty = 0;
            q->front = q->end = 0;
        } else
            q->front = (q->front + 1) % SIZE;
        q->elmts[q->front] = ch;
        UP(sem)
        return 1;
    }

Example: To control access to shared memory in L9.1, one process stored values immediately in the memory, while the other process went to sleep for 5 seconds.

This is not enough to guarantee in every case that the memory will be set before being accessed. This does:

    create sem with value 1
    DOWN(sem)
    if (fork() == 0) {
        DOWN(sem)
        access shared memory
        read from memory
    } else {
        create shared memory
        write to memory
        UP(sem)
    }

Non-binary semaphores

The semaphores above have two values: 1, the resource is available or 0, the resource is in use. There are examples where the resource may have higher initial availability, so the initial value could be higher.


Example: A table, queue or other list has 3 empty slots. Suppose only one process has write access to this, and only one process has remove access. The writing process can write up to three elements before the table becomes full and it has to suspend.

So an ``empty'' semaphore could be initialised to 3. The writing process could do three DOWN operations on it before having to suspend. Each time the removing process does an UP, another space becomes available and ``empty'' shows this value.
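The 3-slot table can be sketched with a POSIX counting semaphore (assumed API): ``empty'' starts at 3, each write DOWNs it, each removal UPs it. Using sem_trywait, the fourth write visibly fails where the real writer would suspend:

```c
#include <semaphore.h>

/* Counting semaphore sketch for the 3-slot table. */
int writer_claims_slot(sem_t *empty)
{
    return sem_trywait(empty) == 0;   /* 1: a slot was free and is claimed */
}

void remover_frees_slot(sem_t *empty)
{
    sem_post(empty);                  /* a slot becomes free again */
}
```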

Other mechanisms

There are many mechanisms for IPC and many mechanisms for controlling access to resources. Some have even been implemented.

The mechanisms presented are fairly low-level. Higher level abstractions include monitors, barrier synchronisation, logic variables as streams, etc, etc.

Deadlock

Reference: Tanenbaum Chapter 6


Example: Two processes wish to communicate with each other in a 2-way manner. Two pipes can be used for this. The two processes issue read/write requests:

    P1:              P2:
    read(pipe2)      read(pipe1)
    write(pipe1)     write(pipe2)

Each process blocks on its read, waiting for data that the other can only send after its own read completes, so neither ever reaches its write: deadlock.


When the ``give way to the right'' rule was in force, this was a common situation: four vehicles arrive at a crossroads together, each must wait for the vehicle on its right, and none can move.

Deadlock detection

A resource allocation graph is a graph showing processes, resources and the connections between them. Processes are modelled by circles, resources by squares. If a process controls a resource then there is an arrow from resource to process. If a process requests a resource then there is an arrow from process to resource.
For example, P1 is using R1 and has requested R2, while P2 is using R2.
Deadlock can now occur if P2 requests R1 - setting up a cycle.
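Detecting such a cycle is an ordinary depth-first search over the graph. A sketch (the four-node numbering is hypothetical, matching the P1/P2/R1/R2 example above):

```c
#include <string.h>

#define N 4   /* nodes 0..3: hypothetical numbering of P1, P2, R1, R2 */

/* DFS over an adjacency matrix; state: 0 = unvisited,
   1 = on the current path, 2 = finished. A back edge to a node on
   the current path means a cycle. */
static int dfs(int g[N][N], int v, int *state)
{
    state[v] = 1;
    for (int w = 0; w < N; w++) {
        if (!g[v][w]) continue;
        if (state[w] == 1) return 1;                  /* cycle found */
        if (state[w] == 0 && dfs(g, w, state)) return 1;
    }
    state[v] = 2;
    return 0;
}

/* Returns 1 if the resource allocation graph contains a cycle. */
int has_cycle(int g[N][N])
{
    int state[N];
    memset(state, 0, sizeof state);
    for (int v = 0; v < N; v++)
        if (state[v] == 0 && dfs(g, v, state))
            return 1;
    return 0;
}
```

With edges R1 to P1 (held), P1 to R2 (requested) and R2 to P2 (held) there is no cycle; adding P2 to R1 closes one.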

The total set of conditions for deadlock to occur are:

  1. Mutual exclusion. Each resource is either currently assigned to one process or is available.
  2. Hold and wait. Processes holding resources can request new ones.
  3. No preemption. Resources granted cannot be taken away, but must be released by the process holding them.
  4. Circular wait. There must be a cycle in the resource allocation graph.

Deadlock prevention

To prevent deadlocks from occurring, one of the four conditions must be disallowed.
  1. Mutual exclusion. Avoid exclusive access where possible, for example by spooling printer output so the printer itself is never held; but some resources, such as tape drives, are inherently unsharable.
  2. Hold and wait. Process must request all needed resources at one time. This results in wasted resources.

    Alternatively, a process must release all its current resources before requesting more. This is often not feasible.

  3. No Preemption. Make it possible for the O/S to make a process give up a resource. This may result in lost data.
  4. Circular wait. Make it impossible for cycles to occur in the graph by restricting the types of graphs that can be made. For example, give each resource a priority number. If a resource is held with a priority, then a new resource can only be requested of lower priority.
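The ordering rule in point 4 amounts to a check made before each request; a sketch (the priority numbers and the process_state record are hypothetical, following the text's ``only lower priority'' rule):

```c
#include <limits.h>

/* Each process records the lowest priority it currently holds.
   A new request is granted only for a strictly lower priority, so
   request edges always point "downward" and can never form a cycle. */
typedef struct {
    int lowest_held;   /* INT_MAX when no resources are held */
} process_state;

int request(process_state *p, int priority)
{
    if (priority >= p->lowest_held)
        return 0;               /* would violate the ordering: refused */
    p->lowest_held = priority;  /* granted */
    return 1;
}
```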

Deadlock avoidance

Deadlock prevention ensures that deadlocks can never occur. Deadlock avoidance attempts to ensure that resources are never allocated in a way that might cause deadlock.

There are situations that may give rise to cycles but in practice will not lead to deadlock. For example, P1 and P2 both want R1 and R2, but in this manner:

P1 needs R1 first, then R2, but with a slight overlap.

P2 needs R2 first, then R1 but with no overlap.
P1 can be given R1 and P2 can be given R2. P1 will require R2 before it can give up R1, but that is ok - P2 will give up R2 before it needs R1.

The Banker's algorithm can be used to ensure this, but it is too complicated for general use.

This page is http://pandonia.canberra.edu.au/OS/l9_2.html, copyright Jan Newmarch.
It is maintained by Jan Newmarch.
email: jan@ise.canberra.edu.au
Web: http://pandonia.canberra.edu.au/
Last modified: 16 September, 1995