
COMP3511: Operating Systems (Spring 2014) Project 2: Synchronization

The purpose of this project is to understand basic concepts in synchronization by implementing primitives such as the lock, condition variable, and monitor (the semaphore is already implemented in Nachos), and then using them to solve the bounded buffer problem.

These concepts are easier to understand in terms of real-life experience, so imagine the following situation:

I run a free bicycle rental shop, and you are a customer who wants to be a free rider. You have to come to my shop, join the queue in front of it, and wait for your turn. If I still have bicycles when you enter the shop, you succeed; but my shop has only 50 bicycles, and everyone can rent only one.

Here I am the CPU, or the shared resource; you customers are the threads; and the queue in front of my shop is the ready queue. We use first-come-first-serve scheduling, which means the customers in the queue enter my shop one by one, sequentially.

Semaphore

Now suppose you come to my shop, and we have a conversation:

You :  Hi, do you have a bicycle to rent?
  I :  Sorry, all the bicycles are rented out.
You :  Oh, too bad.
  I :  Don't worry, leave me your phone number and I will call you when a bicycle becomes available; you won't have to come back and check every time.

Then you go back home, and I put you on the waiting list.

When a customer rents a bicycle, it is a P operation; when a customer returns a bicycle, it is a V operation. On a return, I also call the customer at the top of the waiting list to inform him/her that a bicycle is now available (equivalent to moving one waiting/sleeping thread to the ready queue). Even when the waiting list is empty, a semaphore still performs this step (actually doing nothing). Let's say you get my call and then come to the shop, expecting that this time you will get a bicycle:

You :  Hi, so now you have a bicycle for me?
  I :  Oh, sorry again!
You :  What? You just called me, and I am at the top of the waiting list.
  I :  Yes, you are at the top of the waiting list and you are the only one I called, but as you saw when you came here, there were people ahead of you in the queue outside this door, and they took the bicycles before you.

This conversation shows that in the V operation, the thread that is woken up and put into the ready queue is not guaranteed to be the first thread to get the CPU right after the V operation. So the semaphore's value at the moment the woken-up thread finally gets the CPU is undetermined; it depends on how the wake-up step is designed and what scheduling policy is used.

In Nachos, the semaphore is already implemented; please check the files "threads/synch.h" and "threads/synch.cc", especially the P and V implementations.

Declaration of Semaphore:

class Semaphore {
  public:
    Semaphore(char* debugName, int initialValue);   // set initial value
    ~Semaphore();                       // de-allocate semaphore
    char* getName() { return name;}         // debugging assist

    void P();    // these are the only operations on a semaphore
    void V();    // they are both *atomic*

  private:
    char* name;        // useful for debugging
    int value;         // semaphore value, always >= 0
    List *queueSem;       // threads waiting in P() for the value to be > 0
};

Note: There are two ways to implement a semaphore. In the normal case (the default in Nachos), the semaphore value cannot be negative. In the other case (check the above-mentioned files in the first project's folder), it can be negative. It is better to use the first implementation, since allowing a negative value may lead to problems if the Lock and Condition Variable are implemented on top of the semaphore.
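The difference between the two styles can be seen in a small single-threaded simulation of the second (negative-value) variant. This is hypothetical illustration code, not the Nachos implementation: "sleeping" is modeled by pushing a thread id onto a queue, and real P/V must of course be atomic.

```cpp
#include <queue>

// Hypothetical single-threaded simulation of the negative-value
// semaphore style: -value counts the sleeping threads.
struct NegSemaphore {
    int value;
    std::queue<int> sleepers;
    explicit NegSemaphore(int v) : value(v) {}

    void P(int tid) {
        --value;
        if (value < 0) sleepers.push(tid);  // no unit available: go to sleep
    }

    int V() {                               // returns the woken thread id, or -1
        ++value;
        if (!sleepers.empty()) {
            int t = sleepers.front();
            sleepers.pop();
            return t;
        }
        return -1;
    }
};
```

In the default Nachos style, by contrast, the value never drops below zero, and a woken thread re-checks the value before decrementing it.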

Lock

Declaration of Lock:

class Lock {
  public:
    Lock(char* debugName);          // initialize lock to be FREE
    ~Lock();                // deallocate lock
    char* getName() { return name; }    // debugging assist

    void Acquire(); // these are the only operations on a lock
    void Release(); // they are both *atomic*

    bool isHeldByCurrentThread();   // true if the current thread
                    // holds this lock.  Useful for
                    // checking in Release, and in
                    // Condition variable ops below.
    void PrintQueue();      // print queueLock
  private:
    char* name;             // for debugging
    // plus some other stuff you'll need to define

    Thread *holdingThread;  // the thread holding this lock
    bool isFree;            // lock state: free or not?
    List *queueLock;        // the waiting queue for this lock
};

A lock can be seen as a semaphore whose value can only be 0 or 1, which means my shop now has only one bicycle. The logic of the lock's Acquire and Release operations is simple.

Comparison with semaphore

  • A lock has the state variable isFree, indicating whether it is free. For safety, we want to make sure that the thread trying to release the lock (the lock state is set to free on release) is the thread currently holding it, so we need a variable holdingThread recording the thread that currently holds the lock. A semaphore has none of these considerations.

  • Think about the difference between "being woken up" and "getting the lock" for a thread. As mentioned before, being woken up guarantees you nothing, but getting the lock is like a reservation, which ensures that you will get the bicycle when you come to my shop in the future.

Now you can think about how the state variables isFree and holdingThread and the lock's waiting queue queueLock change in the implementation of the lock's Acquire and Release operations.

Note: There are also two ways to implement a lock: on top of a semaphore, or not. In project 2 we will not build the lock on top of the semaphore but implement it from scratch, which is more natural and safer.
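The Acquire/Release bookkeeping can be sketched as follows, with a std::mutex standing in for Nachos's interrupt disabling and a std::condition_variable standing in for putting a thread to sleep on queueLock. The member names mirror the declaration above; this is an illustration of the logic, not the Nachos solution you are asked to write.

```cpp
#include <condition_variable>
#include <mutex>
#include <thread>

// Sketch of a lock built from scratch (not on top of a semaphore).
class SimpleLock {
  public:
    void Acquire() {
        std::unique_lock<std::mutex> guard(m);
        while (!isFree)                       // lock busy: wait on queueLock
            queueLock.wait(guard);
        isFree = false;                       // mark the lock taken
        holdingThread = std::this_thread::get_id();
    }

    void Release() {
        std::lock_guard<std::mutex> guard(m);
        // only the holder may release (Nachos would ASSERT this)
        if (holdingThread != std::this_thread::get_id()) return;
        isFree = true;
        queueLock.notify_one();               // wake one waiter, if any
    }

    bool isHeldByCurrentThread() {
        std::lock_guard<std::mutex> guard(m);
        return !isFree && holdingThread == std::this_thread::get_id();
    }

  private:
    std::mutex m;
    std::condition_variable queueLock;        // stands in for the waiting queue
    bool isFree = true;
    std::thread::id holdingThread;
};
```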

Condition Variable and Monitor

Declaration of Condition Variable:

class Condition {
  public:
    Condition(char* debugName);     // initialize condition to 
                    // "no one waiting"
    ~Condition();           // deallocate the condition
    char* getName() { return (name); }

    void Wait(Lock *conditionLock);     // these are the 3 operations on 
                    // condition variables; releasing the 
                    // lock and going to sleep are 
                    // *atomic* in Wait()
    void Signal(Lock *conditionLock);   // conditionLock must be held by
    void Broadcast(Lock *conditionLock);// the currentThread for all of 
                    // these operations
    void PrintQueue();
  private:
    char* name;
    // plus some other stuff you'll need to define

    List *queueCond;
};

Literally, a condition variable just involves a condition and a queue of threads waiting for that condition to be satisfied. In synchronization, however, a condition variable refers to the combination of some condition with an associated lock, so that multiple threads can access a shared resource concurrently and safely. The condition reflects the natural constraint on resource access, while the lock ensures that only one thread at a time gets the chance to manipulate the shared resource (and after a thread gets the chance, it still has to check whether the condition is satisfied).

Let's use my shop example to illustrate this:

I don't want to sit in my shop and serve every customer myself, so I will set up some rules and then go enjoy my life. There is a database holding the list of customers on the waiting list, and two computers connected to it: one on my worktable (C1), which supports only read and delete operations, and one outside the shop (C2), which supports only insert operations. I leave one key in front of my shop; the customer who gets the key can open the door.

After getting in, a customer checks whether there are bicycles. If there are, he takes one, closes the door, leaves the key in front of the door, and enjoys his trip. Otherwise, he closes the door, leaves the key, goes to C2, inserts his name into the database, and goes home to wait. Notice that he is not waiting for the key; he is waiting for the condition "there are available bicycles" to be satisfied, so the waiting list is a condition queue, not a lock queue. The queue in front of my shop is the ready queue, competing for the key; a customer who fails to get the key is put into the lock queue.

A customer who comes back to return a bicycle also needs the key to open the door. After going inside and placing the bicycle in the right place, he has the responsibility to inform the customers on the waiting list that a bicycle is now available. So he goes to C1, reads who is on the waiting list, calls one of them, deletes that record from the database, closes the door, leaves the key, and just leaves. While returning a bicycle brings some waiting-list customer back to the ready queue, every time someone leaves my shop a lock-queue customer is also brought back to the ready queue.

The correspondence between formal concepts in synchronization and items in above example is:

Formal            Example
condition         bicycle number nonzero
associated lock   key
monitor           the whole self-serving system
condition queue   customers in the database

The correspondence between formal functions in synchronization and actions in above example is:

Formal                     Example
Lock::Acquire()            get key, enter shop
Lock::Release()            close door, leave key
Condition::Wait(Lock*)     close door, leave key, insert through C2, go home
Condition::Signal(Lock*)   delete through C1

With this comparison in mind, it is easier to understand all these synchronization operations. In the implementation, you have to think carefully about the order of actions inside each function. Think about why I put two computers in the example above: if a customer could insert through C1 inside the shop, which means the thread goes to sleep inside the monitor, then he would never leave the shop. In other words, the thread would stay inside the monitor and hold the monitor's lock forever, forming a deadlock.

So what is a monitor? Basically, a monitor is a data structure that wraps up shared data with condition variables and a lock. The lock grants permission to access the shared data, in other words to enter the monitor; whether a thread succeeds in reading/writing the shared data depends on whether the condition is satisfied. Many condition variables may therefore be associated with the same lock.

Note: Regarding the Condition::Signal(Lock*) function: after a thread wakes up another thread from the condition queue, whether it stays inside the monitor and keeps executing the following code, or leaves the monitor, releases the lock, and then waits for another chance to enter, depends on the design style of the condition variable. In the most widely used Mesa style, it stays inside the monitor and keeps holding the lock; it is up to the monitor's designer to decide when the thread leaves the monitor and releases the lock to other threads. The Condition::Broadcast(Lock*) function is the same as Condition::Signal(Lock*), except that it wakes up all the threads in the condition queue. Like the lock, the condition variable is implemented directly, not on top of the semaphore.

Please check "threads/synchlist.h" and "threads/synchlist.cc" to see how Nachos implements a simple monitor structure, and understand the SynchList::Append(void*) and SynchList::Remove(void*) functions.

Application: Bounded Buffer Problem

Declaration of Bounded Buffer:

struct Message
{
    char* creatorName;
    int content;
};


class BoundedBuffer
{
    public:
        BoundedBuffer(int bfSize);
        ~BoundedBuffer();

        void Produce(Message *msg);
        Message* Consume();

        void PrintBuffer(); // print buffer content
        void ShowAll();     // show all the relevant contents

    private:
        int size;       // bounded buffer size
        List *buffer;   // for saving Message
        Lock *lock;     // lock for entering the monitor, i.e., for attempting insertion/deletion of a message
        Condition *notEmpty;
        Condition *notFull;
};

Also known as the producer-consumer problem. The producer's job is to generate a piece of data, put it into the buffer, and start again. At the same time, the consumer is consuming the data (i.e., removing it from the buffer) one piece at a time. The problem is to make sure that the producer won't try to add data to the buffer if it is full, and that the consumer won't try to remove data from an empty buffer.
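The logic of the two-condition-variable solution can be sketched as follows, using standard C++ primitives in place of the Nachos Lock and Condition classes and an int payload in place of Message. The member names mirror the BoundedBuffer declaration above; this is an outline of the pattern under those substitutions, not the bdBuffer.cc solution itself.

```cpp
#include <condition_variable>
#include <list>
#include <mutex>

// Bounded buffer monitor: one lock, two condition variables.
class SketchBuffer {
  public:
    explicit SketchBuffer(int bfSize) : size(bfSize) {}

    void Produce(int msg) {
        std::unique_lock<std::mutex> guard(lock);    // enter the monitor
        while ((int)buffer.size() == size)           // buffer full: wait
            notFull.wait(guard);
        buffer.push_back(msg);
        notEmpty.notify_one();                       // signal notEmpty
    }

    int Consume() {
        std::unique_lock<std::mutex> guard(lock);    // enter the monitor
        while (buffer.empty())                       // buffer empty: wait
            notEmpty.wait(guard);
        int msg = buffer.front();
        buffer.pop_front();
        notFull.notify_one();                        // signal notFull
        return msg;
    }

  private:
    int size;                                        // bounded buffer size
    std::list<int> buffer;
    std::mutex lock;
    std::condition_variable notEmpty, notFull;
};
```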

In project 2, we will implement a bounded buffer using 2 condition variables; please check "threads/bdBuffer.h" and "threads/bdBuffer.cc". The basic functions are BoundedBuffer::Produce(Message*) and BoundedBuffer::Consume().

About The Project

Code can be downloaded from here.

After downloading, you can unzip the code with

tar -xvf os_nachos_proj2.tar

In threads/threadtest.cc, lines 47-93:

BoundedBuffer* buffer = new BoundedBuffer(5);

void
RandomBufferOperation(int mode)
{
    int num;
    for (num = 1; num < 11; )
    {
        if (Random()%2)
        {
            if (mode)
            {    
                Message *msg = new Message;
                msg->creatorName = currentThread->getName();
                msg->content = num;
                printf("\n*********** %s try to produce its %d message \n", currentThread->getName(), num);
                buffer->Produce(msg);
            } else {
                printf("\n*********** %s try to consume message\n", currentThread->getName());
                buffer->Consume();
            }
            buffer->ShowAll();
            num++;
        } else {
            currentThread->Yield();
        }
    }
}

void
BoundedBufferTest()
{
    RandomInit((int)time(0));    

    Thread *p1 = new Thread("Producer 1");
    p1->Fork(RandomBufferOperation, 1);

    Thread *c1 = new Thread("Consumer 1");
    c1->Fork(RandomBufferOperation, 0);

    Thread *p2 = new Thread("Producer 2");
    p2->Fork(RandomBufferOperation, 1);

    Thread *c2 = new Thread("Consumer 2");
    c2->Fork(RandomBufferOperation, 0);

}

There is a buffer with bounded size 5, plus 2 producers and 2 consumers; each time, a producer or a consumer is randomly selected to produce or consume a message. The program ends after each producer has produced 10 messages and each consumer has consumed 10 messages. The following is the beginning of a sample output:

*********** Consumer 1 try to consume message
Consumer 1 get BUFFER LOCK 
Consumer 1 release BUFFER LOCK 
BUFFER NOT EMPTY not satisfied, put Consumer 1 into BUFFER NOT EMPTY 's condition queue

BUFFER NOT EMPTY condition queue: Consumer 1    


*********** Consumer 2 try to consume message
Consumer 2 get BUFFER LOCK 
Consumer 2 release BUFFER LOCK 
BUFFER NOT EMPTY not satisfied, put Consumer 2 into BUFFER NOT EMPTY 's condition queue 

BUFFER NOT EMPTY condition queue: Consumer 1    Consumer 2  


*********** Producer 1 try to produce its 1 message 
Producer 1 get BUFFER LOCK 
Sucessfully produce message, now signal not empty!
BUFFER NOT EMPTY satisfied, put Consumer 1 from BUFFER NOT EMPTY 's condition queue into Ready Queue 
Producer 1 release BUFFER LOCK 

BUFFER LOCK queue: 

BUFFER NOT EMPTY condition queue: 
Consumer 2  
BUFFER NOT FULL condition queue: 

Ready list contents:
Producer 2, Consumer 1, 
Buffer content: 
    Message       Creator
     1            Producer 1


Consumer 1 get BUFFER LOCK 
Successfully consume message, now signal not full!
Consumer 1 release BUFFER LOCK 

BUFFER LOCK queue: 

BUFFER NOT EMPTY condition queue: 
Consumer 2  
BUFFER NOT FULL condition queue: 

Ready list contents:
Producer 1, Producer 2, 
Buffer content: 
    Message       Creator

Consumer 1 and Consumer 2 try to consume messages from the buffer, but at the beginning the buffer is empty, so their consume operations are blocked. Then Producer 1 produces a message and Consumer 1's blocked consume operation is resumed, while Consumer 2's consume operation remains blocked.

Requirements

1. Complete the following functions

  • Lock::Acquire() and Lock::Release() in "threads/synch.cc"
  • Condition::Wait(Lock*) in "threads/synch.cc"
  • BoundedBuffer::Produce(Message*) and BoundedBuffer::Consume() in "threads/bdBuffer.cc"

Note: A compiled version "sample_nachos" (built under Ubuntu 12.04) is provided for testing and for comparison with the "nachos" binary built from your code. A "sample_output" file is also provided for reference.

2. Output of the program: pick a moment from your output when some blocked function resumes, and explain what happened at that moment based on your output.

After completion of all the functions:

cd threads/
make
./nachos > output.txt

Then your output will be saved in file "output.txt".

Note: Due to the platform dependence of the Nachos code, if you get an error like "header file cannot be found" when running 'make' inside the 'threads/' folder, try running 'make' from outside the 'threads/' folder. The outer makefile compiles all the modules, including the threads module, while the makefile inside each module builds only that module; the make in each module still uses the configuration in Makefile.dep and Makefile.common outside the 'threads/' folder. Running 'make' outside the 'threads/' folder writes the correct header-file and library paths (which are platform dependent) into the Makefile inside each module; after that, running 'make' inside each module should work.

3. Answer the following questions briefly

  • What is the difference between the data structures implemented in "threads/synchlist.cc" and "threads/bdBuffer.cc"?
  • What would happen if, in the function BoundedBuffer::Produce(Message*), we changed the while into an if?

4. Upload "proj2_StuID.zip", which contains your code, output, and answers (preferably in .pdf format).

Useful Links:

  1. Synchronization Primitives: Implementation and Examples
  2. Locks and Condition Variables
  3. Nachos Code Sample1
  4. Nachos Code Sample2
  5. Nachos Code Sample3
  6. Nachos Code Sample4 (chinese)
  7. Good Material sdu.edu.cn (chinese)
