Semaphores

A semaphore is a variable or abstract data type that provides a simple but useful abstraction for controlling access to a common resource by multiple processes in a parallel programming or multi-user environment.

A semaphore may be implemented in hardware or as a software tag variable whose value indicates the status of a common resource. Its purpose is to lock the resource while it is being used. A process that needs the resource checks the semaphore to determine the resource's status and then decides how to proceed. In multitasking operating systems, activities are synchronized using semaphore techniques.

A semaphore is like a record of how many units of a particular resource are available, coupled with operations to adjust that record safely (i.e., without race conditions) as units are acquired or become free and, if necessary, to wait until a unit of the resource becomes available. Semaphores are a useful tool for preventing race conditions; however, their use is by no means a guarantee that a program is free of these problems.

There are two types of semaphores:


 * 1) Binary semaphores: A binary semaphore has two operations associated with it (up/down, lock/unlock, or available/unavailable) and can take only two values, 0 and 1. Binary semaphores are used to acquire locks: the process in charge sets the semaphore to 1 when the resource is available and to 0 otherwise. This is the same functionality that mutexes provide.


 * 2) Counting semaphores: Semaphores that allow an arbitrary resource count are called counting semaphores. A counting semaphore may have a value greater than one and is typically used to allocate resources from a pool of identical resources.
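Both kinds can be sketched with Python's threading.Semaphore: initialized to 1 it behaves as a binary semaphore, and initialized to the pool size it is a counting semaphore. Below is a minimal sketch assuming a hypothetical pool of 3 identical resources:

```python
import threading

# Counting semaphore sized to a pool of 3 identical resources.
# (threading.Semaphore(1) would give the binary case.)
pool = threading.Semaphore(3)
guard = threading.Lock()   # protects the bookkeeping counters below
in_use = 0                 # resources currently held
peak = 0                   # maximum number held at any one time

def use_resource():
    global in_use, peak
    pool.acquire()             # claim one unit; blocks when all 3 are taken
    with guard:
        in_use += 1
        peak = max(peak, in_use)
    # ... use the resource here ...
    with guard:
        in_use -= 1
    pool.release()             # return the unit to the pool

threads = [threading.Thread(target=use_resource) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)  # never exceeds 3, however the 10 threads interleave
```

The semaphore itself enforces the pool limit; the extra lock exists only to record `peak` for demonstration.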

Mutex
A mutex and a binary semaphore are essentially similar: both can take the values 0 and 1. However, there is a significant difference between them, the concept of ownership, that makes mutexes better suited than binary semaphores to protecting a shared resource.

Mutex is short for 'Mutual Exclusion object'. A mutex allows multiple threads to share the same resource, such as a file, but not simultaneously. A mutex with a unique name is created when a program starts. When a thread needs the resource, it must lock the mutex, keeping other threads out. When the data is no longer needed, the mutex is unlocked.

A mutex can be unlocked only by the thread that locked it; thus a mutex has the concept of an owner.
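The ownership rule can be demonstrated with Python's threading.RLock, which records its owning thread and rejects a release from any other thread (a plain threading.Lock does not enforce this). A minimal sketch:

```python
import threading

# threading.RLock records an owner: only the thread that acquired it
# is allowed to release it.
mutex = threading.RLock()
errors = []

def intruder():
    try:
        mutex.release()        # this thread is not the owner -> RuntimeError
    except RuntimeError as e:
        errors.append(str(e))

mutex.acquire()                # the main thread becomes the owner
t = threading.Thread(target=intruder)
t.start()
t.join()
mutex.release()                # the owner may release it

print(len(errors))  # -> 1: the non-owner's release was rejected
```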

The differences between a binary semaphore and a mutex are:
 * A mutex is used exclusively for mutual exclusion, while a binary semaphore can be used for both mutual exclusion and synchronization.
 * Only the task that took a mutex can give it back, and a mutex cannot be given from an ISR (interrupt service routine).
 * Mutual-exclusion semaphores can be taken recursively: a task that already holds the mutex can take it again before finally releasing it.
 * A mutex provides an option to mark the task that takes it as DELETE_SAFE, meaning the task cannot be deleted while it holds the mutex.

Semantics and implementation
One important property of semaphore variables is that their value cannot be changed except by using the wait and signal operations. Counting semaphores are equipped with two operations, historically denoted V (also known as signal) and P (also known as wait); see below. Operation V increments the semaphore S, and operation P decrements it. The semantics of these operations are shown below. Square brackets are used to indicate atomic operations, i.e., operations which appear indivisible from the perspective of other processes.

The value of the semaphore S is the number of units of the resource that are currently available. The P operation busy-waits or sleeps until a resource protected by the semaphore becomes available, at which time the resource is immediately claimed. The V operation is the inverse: it makes a resource available again after the process has finished using it. A simple way to understand the wait and signal operations is:
 * 1) wait: Decrements the value of the semaphore variable by 1. If the value becomes negative, the process executing wait is blocked, i.e., added to the semaphore's queue.
 * 2) signal: Increments the value of the semaphore variable by 1. If the pre-increment value was negative (meaning there are processes waiting for a resource), it transfers a blocked process from the semaphore's waiting queue to the ready queue.

Many operating systems provide efficient semaphore primitives that unblock a waiting process when the semaphore is incremented, so processes do not waste time checking the semaphore value unnecessarily. The counting semaphore concept can be extended with the ability to claim or return more than one "unit" from the semaphore, a technique implemented in UNIX. The modified V and P operations are as follows:

function V(semaphore S, integer I):
    [S ← S + I]

function P(semaphore S, integer I):
    repeat:
        [if S >= I:
            S ← S - I
            break]
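These semantics can be sketched in Python with a lock and a condition variable. The class below (the name CountingSemaphore is invented here) implements the variant in which the count never goes negative: P sleeps on the condition variable instead of busy-waiting, and V wakes one sleeper.

```python
import threading

class CountingSemaphore:
    """Sketch of P (wait) and V (signal) using a lock + condition variable."""

    def __init__(self, value=0):
        self._value = value
        self._cond = threading.Condition()   # bundles a lock with wait/notify

    def P(self):
        # wait: claim one unit, sleeping (not spinning) while none is free
        with self._cond:
            while self._value == 0:
                self._cond.wait()
            self._value -= 1

    def V(self):
        # signal: return one unit and wake one waiting thread, if any
        with self._cond:
            self._value += 1
            self._cond.notify()

s = CountingSemaphore(2)
s.P(); s.P()      # both units claimed; a third P would block here
s.V()             # one unit returned
s.P()             # claims it again without blocking
print(s._value)   # -> 0
```

The `while` (rather than `if`) around `wait()` re-checks the count after each wakeup, which keeps the sketch correct even with spurious or competing wakeups.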

To avoid starvation, a semaphore has an associated queue of processes (usually first-in, first-out). If a process performs a P operation on a semaphore that has the value zero, the process is added to the semaphore's queue. When another process increments the semaphore by performing a V operation, and there are processes on the queue, one of them is removed from the queue and resumes execution. When processes have different priorities, the queue may be ordered by priority, so that the highest-priority process is taken from the queue first.

If the implementation does not ensure atomicity of the increment, decrement and comparison operations, then there is a risk of increments or decrements being forgotten, or of the semaphore value becoming negative. Atomicity may be achieved by using a machine instruction that can read, modify and write the semaphore in a single operation. In the absence of such a hardware instruction, an atomic operation may be synthesized through the use of a software mutual exclusion algorithm.

On uniprocessor systems, atomic operations can be ensured by temporarily suspending preemption or disabling hardware interrupts. This approach does not work on multiprocessor systems, where it is possible for two programs sharing a semaphore to run on different processors at the same time. To solve this problem in a multiprocessor system, a locking variable can be used to control access to the semaphore. The locking variable is manipulated using a test-and-set-lock (TSL) instruction.
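Python exposes no raw TSL instruction, but `Lock.acquire(blocking=False)` behaves like one for illustration: it atomically tests whether the lock is free and sets it in a single step, returning True to exactly one caller. A sketch of several threads racing on such a flag:

```python
import threading

# Emulated test-and-set: acquire(blocking=False) atomically tests the
# flag and sets it, succeeding for exactly one caller.
flag = threading.Lock()
winners = []

def try_enter(name):
    if flag.acquire(blocking=False):   # one thread sees "free" and claims it
        winners.append(name)           # the rest fall through immediately

threads = [threading.Thread(target=try_enter, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(winners))  # -> 1: only one thread won the race
```

A real TSL-based semaphore would spin on such a flag only briefly, just long enough to update the semaphore's value and queue, then clear it.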

Producer/Consumer problem
In the producer-consumer problem, one process (the producer) generates data items and another process (the consumer) receives and uses them. They communicate using a queue of maximum size N and are subject to the following conditions:
 * The consumer must wait for the producer to produce something if the queue is empty.
 * The producer must wait for the consumer to consume something if the queue is full.

The semaphore solution to the producer-consumer problem tracks the state of the queue with two semaphores: emptyCount, the number of empty places in the queue, and fullCount, the number of elements in the queue. To maintain integrity, emptyCount may be lower (but never higher) than the actual number of empty places in the queue, and fullCount may be higher (but never lower) than the actual number of items in the queue. Empty places and items represent two kinds of resources, empty boxes and full boxes, and the semaphores emptyCount and fullCount maintain control over these resources.

The binary semaphore useQueue ensures that the integrity of the state of the queue itself is not compromised, for example by two producers attempting to add items to an empty queue simultaneously, thereby corrupting its internal state. Alternatively, a mutex could be used in place of the binary semaphore.

The emptyCount is initially N, fullCount is initially 0, and useQueue is initially 1.

The producer does the following repeatedly:

produce:
    P(emptyCount)
    P(useQueue)
    putItemIntoQueue(item)
    V(useQueue)
    V(fullCount)

The consumer does the following repeatedly:

consume:
    P(fullCount)
    P(useQueue)
    item ← getItemFromQueue()
    V(useQueue)
    V(emptyCount)

Example. A single consumer enters its critical section. Since fullCount is 0, the consumer blocks. Several producers enter the producer critical section. No more than N producers may enter their critical section, since emptyCount constrains their entry. The producers, one at a time, gain access to the queue through useQueue and deposit items in the queue.

Once the first producer exits its critical section, fullCount is incremented, allowing one consumer to enter its critical section. Note that emptyCount may be much lower than the actual number of empty places in the queue, for example when many producers have decremented it but are waiting their turn on useQueue before filling empty places. Note that emptyCount + fullCount ≤ N always holds, with equality if and only if no producers or consumers are executing their critical sections.
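The three-semaphore scheme above translates directly into Python; a minimal sketch with one producer and one consumer sharing a queue of maximum size N = 5 (the acquire/release calls play the roles of P and V):

```python
import threading
from collections import deque

N = 5                                  # maximum queue size
queue = deque()
emptyCount = threading.Semaphore(N)    # empty places, initially N
fullCount = threading.Semaphore(0)     # items in the queue, initially 0
useQueue = threading.Semaphore(1)      # binary semaphore guarding the queue
consumed = []

def producer(items):
    for item in items:
        emptyCount.acquire()   # P(emptyCount): wait for an empty place
        useQueue.acquire()     # P(useQueue)
        queue.append(item)
        useQueue.release()     # V(useQueue)
        fullCount.release()    # V(fullCount): one more item available

def consumer(count):
    for _ in range(count):
        fullCount.acquire()    # P(fullCount): wait for an item
        useQueue.acquire()     # P(useQueue)
        consumed.append(queue.popleft())
        useQueue.release()     # V(useQueue)
        emptyCount.release()   # V(emptyCount): one more empty place

p = threading.Thread(target=producer, args=(list(range(20)),))
c = threading.Thread(target=consumer, args=(20,))
c.start(); p.start()
p.join(); c.join()
print(consumed == list(range(20)))  # -> True: all 20 items, in FIFO order
```

Because emptyCount starts at N, the producer can run at most 5 items ahead of the consumer, and because fullCount starts at 0, the consumer blocks whenever the queue is empty, exactly the two waiting conditions stated above.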