Summary of every problem and its solution we have previously discussed
We began with a problem called the producer-consumer problem, and we will end up with a robust solution called semaphores.
|There was a producer-consumer problem with a bounded buffer that can store at most N-1 items.|
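A minimal sketch of why such a buffer holds at most N-1 of its N slots: with only two indices, one slot must stay unused so that "full" can be told apart from "empty" (the names `in_ptr`/`out_ptr` are illustrative, not from the original notes):

```python
# Circular bounded buffer with two indices; one slot is sacrificed so that
# "full" ((in+1) % N == out) is distinguishable from "empty" (in == out).
N = 5
buffer = [None] * N
in_ptr = 0   # next slot the producer writes
out_ptr = 0  # next slot the consumer reads

def is_empty():
    return in_ptr == out_ptr

def is_full():
    # leaving one slot unused lets us tell "full" apart from "empty"
    return (in_ptr + 1) % N == out_ptr

def produce(item):
    """Insert an item if there is room; return False if the buffer is full."""
    global in_ptr
    if is_full():
        return False          # producer would have to wait here
    buffer[in_ptr] = item
    in_ptr = (in_ptr + 1) % N
    return True
```

Repeated calls to `produce` succeed exactly N-1 times before `is_full()` becomes true.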
To solve the above problem,
|A count variable is used, initialized to 0 (count = 0); its value is incremented when a new item is added and decremented when an item is removed. If the producer process runs count++ and the consumer process runs count-- separately, there is no problem, but if they run concurrently/in parallel, that can lead to data inconsistency, and a Race Condition problem arises, which states that the final output depends on the order in which the statements execute.|
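The race can be made concrete by spelling out the load/modify/store steps that count++ and count-- compile to. The following is a deterministic Python simulation of one unlucky interleaving (the register names are illustrative):

```python
# count++ and count-- are each three machine steps: load, modify, store.
# This interleaving shows the consumer's update being lost.
count = 5

register1 = count       # producer: load   (register1 = 5)
register2 = count       # consumer: load   (register2 = 5)
count = register2 - 1   # consumer: store  (count = 4)
count = register1 + 1   # producer: store  (count = 6) -- consumer's update is lost
```

After one increment and one decrement, the correct result would be 5, but this execution order leaves count at 6: the final output depends on the order of execution.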
So, to solve the Race Condition problem discussed above,
|We need to ensure that only one process at a time can manipulate the count variable, which is a shared variable, and for that we need a process synchronization or coordination mechanism. Several mechanisms exist, and to be a good solution to the producer-consumer (critical section) problem, each must satisfy the following conditions:
- Mutual Exclusion: only one process at a time may execute in its critical section.
- Progress: if no process is in its critical section, a process wishing to enter must not be postponed indefinitely.
- Bounded Waiting: there is a bound on how many times other processes may enter their critical sections after a process has requested entry.|
In addition to the above,
|At a given moment, more than one kernel-mode process may be executing in the critical section at once. This can lead to a Race Condition, so the OS code (kernel code) must have proper mechanisms to avoid such problems.
Example: X is a kernel data structure that contains the list of open files. If two processes open files simultaneously, their separate updates to this list can result in a Race Condition.|
However, broadly categorized, there are only 2 approaches for handling critical sections in the OS (kernel): non-preemptive kernels and preemptive kernels.
Note: Every solution is based on some kind of LOCK.
The following are a few mechanisms available to avoid or remove the critical section problem:
Solution: a modification to the semaphore in order to remove the “Busy Waiting” problem.
Instead of spinning, the waiting process should call sleep(): it enters a Waiting Queue and blocks itself. When the first process comes out of the critical section, the waiting process is woken via wakeup(), which moves it to the Ready Queue; after that, CPU scheduling algorithms give it control of the CPU.
A semaphore basically has two things: an integer value and a list (queue) of waiting processes.
Following is code for the modified version of wait() for a semaphore:
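The code itself is not in these notes, so here is a sketch of the classic blocking wait() as a Python simulation (the `Semaphore` class, process names, and the `block()` helper are illustrative stand-ins for the kernel's structures):

```python
# Simulation of the blocking (modified) wait(): instead of spinning,
# a process that finds the semaphore unavailable blocks itself.
from collections import deque

class Semaphore:
    def __init__(self, value):
        self.value = value   # may go negative once processes start waiting
        self.list = deque()  # waiting queue of blocked processes

def block(sem, process):
    sem.list.append(process)  # the process sleeps in the waiting queue

def wait(sem, process):
    sem.value -= 1
    if sem.value < 0:
        block(sem, process)   # no busy waiting: the process blocks itself
```

Note that value is decremented unconditionally, so a negative value records how many processes are blocked.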
Following is code for the modified version of signal() for a semaphore:
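Again the code is missing from the notes; this is a companion sketch of the blocking signal(), in the same illustrative Python simulation (the `wakeup()` helper stands in for moving a process to the Ready Queue):

```python
# Simulation of the modified signal(): if any process is blocked on the
# semaphore (value <= 0 after the increment), wake one of them up.
from collections import deque

class Semaphore:
    def __init__(self, value):
        self.value = value
        self.list = deque()  # waiting queue of blocked processes

def wakeup(process):
    # stand-in for moving the process to the Ready Queue
    return process

def signal(sem):
    sem.value += 1
    if sem.value <= 0:
        p = sem.list.popleft()  # someone was waiting: remove it from the queue
        wakeup(p)
        return p                # the process that was woken
    return None                 # nobody was waiting
```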
Differences between a normal semaphore and the modified semaphore:
- A normal semaphore busy-waits (spins) in wait(), and its value never goes below 0.
- The modified semaphore blocks the waiting process instead of spinning, and its value can become negative.
The modified version of the semaphore states that its value may become negative; when it is negative, its magnitude equals the number of processes waiting on the semaphore.
After discussing all the problems and solutions above, it is time to discuss a little bit about single-core and multi-core processors:
- Single-core processor = it is critical to run wait() and signal() atomically, so we again have a critical section problem. To solve this on single-processor systems, we can simply use the Disable Interrupt mechanism.
- Multi-core processor = if we use the Disable Interrupt mechanism here, interrupts must be disabled on every available core/processor in the system, which is difficult and also decreases performance. So there should be something like Compare_And_Swap() or TestAndSet(), so that wait() and signal() can be performed atomically.
Note: Busy waiting is still there, but not in the application's entry section; it is inside wait() and signal() themselves, because each of them has a critical section of its own. So we have limited busy waiting to those critical sections only. Moreover, these sections are short (if properly coded, no more than about 10 lines of code), so busy waiting lasts only a short time.
Note: In the modified version of semaphores, this small amount of busy waiting is retained deliberately, as a feature.