            +--------------------+
            |        CS 162      |
            | PROJECT 1: THREADS |
            |   DESIGN DOCUMENT  |
            +--------------------+

---- GROUP ----

>> Fill in the names and email addresses of your group members.

Brandt Sheets <bmsheets@berkeley.edu>
Bruce Zhang <brucezhang@berkeley.edu>
Derek Pierce <dap92@berkeley.edu>
Tayo Olukoya <tolukoya@berkeley.edu>

---- PRELIMINARIES ----

>> If you have any preliminary comments on your submission, notes for the
>> TAs, or extra credit, please give them here.

>> Please cite any offline or online sources you consulted while
>> preparing your submission, other than the Pintos documentation, course
>> text, lecture notes, and course staff.

                 ALARM CLOCK
                 ===========

---- DATA STRUCTURES ----

>> A1: Copy here the declaration of each new or changed `struct' or
>> `struct' member, global or static variable, `typedef', or
>> enumeration.  Identify the purpose of each in 25 words or less.

static struct list blocked_list;
This list maintains all threads that are currently blocked by
timer_sleep() and waiting to wake up.

New member of struct thread: int64_t sleep_ticks;
sleep_ticks stores the number of ticks the thread must wait before
waking, so that the timer interrupt handler can check and decrement
this value.

---- ALGORITHMS ----

>> A2: Briefly describe what happens in a call to timer_sleep(),
>> including the effects of the timer interrupt handler.

When timer_sleep() is called, the following happens:
1. Interrupts are disabled.
2. The current thread's sleep_ticks is set to the value passed to timer_sleep().
3. The thread is added to blocked_list and thread_block() is called on it.
4. Interrupts are re-enabled.

On each call to timer_interrupt(), the handler looks at each thread in
blocked_list and checks whether its sleep_ticks == 0. If so, it
unblocks that thread; otherwise it simply decrements sleep_ticks by one.

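The steps above can be modeled outside the kernel as a small, self-contained sketch. The names here (sleep_ticks, the *_model helpers, the boolean blocked flag) mirror our design but are stand-ins, not the real Pintos declarations: the flag stands in for thread_block()/thread_unblock(), and the plain array stands in for blocked_list.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Stand-in for struct thread: only the fields this sketch needs. */
struct thread {
  int64_t sleep_ticks;   /* Ticks left before the thread may wake. */
  bool blocked;          /* Stand-in for the THREAD_BLOCKED status. */
};

/* Model of timer_sleep(): record the wait and block the thread.
   In the kernel these steps run with interrupts disabled. */
static void timer_sleep_model(struct thread *t, int64_t ticks) {
  t->sleep_ticks = ticks;
  t->blocked = true;     /* Stand-in for thread_block(). */
}

/* Model of the timer_interrupt() pass over blocked_list: unblock a
   sleeper whose counter has reached zero, otherwise decrement it. */
static void timer_tick_model(struct thread *list[], int n) {
  for (int i = 0; i < n; i++) {
    struct thread *t = list[i];
    if (!t->blocked)
      continue;
    if (t->sleep_ticks == 0)
      t->blocked = false;  /* Stand-in for thread_unblock(). */
    else
      t->sleep_ticks--;
  }
}
```

In the real handler this loop already runs in interrupt context with interrupts off, so no further locking is needed there.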
>> A3: What steps are taken to minimize the amount of time spent in
>> the timer interrupt handler?

Rather than iterating through all threads to find blocked threads to
wake, the interrupt handler only examines the threads in blocked_list.

---- SYNCHRONIZATION ----

>> A4: How are race conditions avoided when multiple threads call
>> timer_sleep() simultaneously?

Race conditions are avoided by protecting the critical section of
code, namely the thread_block() call, by disabling interrupts
immediately before the call and re-enabling them immediately after.
This ensures that there cannot be multiple, simultaneous attempts to
add threads to blocked_list.

  71.  
  72. >> A5: How are race conditions avoided when a timer interrupt occurs
  73. >> during a call to timer_sleep()?
  74. To ensure that a given thread maintains the correct value of
  75. sleep_ticks, that value is also stored while interrupts are disabled.
  76. This way, there is no possibility of timer interrupt being called
  77. without the successfully decrementing the counter.
  78.  
---- RATIONALE ----

>> A6: Why did you choose this design?  In what ways is it superior to
>> another design you considered?

While it is generally undesirable to disable and re-enable interrupts,
doing so is preferable in this case to using a lock or semaphore,
because the timer interrupt is responsible for updating the
sleep_ticks values: if it remained enabled during the critical
section, it could increment ticks without also decrementing all the
appropriate sleep_ticks.

We also chose to create a new list of blocked threads to speed up
timer_interrupt(), which has to check for threads to unblock. An
alternative is to simply iterate through all existing threads, but
that approach is slower and unnecessary.

             PRIORITY SCHEDULING
             ===================

---- DATA STRUCTURES ----

>> B1: Copy here the declaration of each new or changed `struct' or
>> `struct' member, global or static variable, `typedef', or
>> enumeration.  Identify the purpose of each in 25 words or less.

static struct hash priorityHash;
Hash table whose hash function returns the priority of the thread
inserted into it.

static struct lock set_lock;
Lock used while a thread is switching priority.

static struct hash oldPriorities;
Stores the old priorities of threads that received a priority donation.

>> B2: Explain the data structure used to track priority donation.
>> Use ASCII art to diagram a nested donation.  (Alternately, submit a
>> .png file.)

A hash table holds the current priorities; when a thread receives a
donation, its entry in that hash table is updated. There is also a
hash table of original priorities, so when a thread finishes it can
revert to its original priority.

Nested donation example:
thread A with priority 5, thread B with priority 10, thread C with priority 20.
A holds lock 1; B needs lock 1 and holds lock 2; C needs lock 2.

Step 1. A <--donates-- B               C
Step 2. A              B <--donates-- C
Step 3. A <--donates C's priority-- B  C

---- ALGORITHMS ----

>> B3: How do you ensure that the highest priority thread waiting for
>> a lock, semaphore, or condition variable wakes up first?

All waiting threads are kept in the hash table, with their priorities
given by the hash function. The hash table is initialized with a less
function (comparator) that orders the highest priorities first, so
whenever a thread is pulled out of the hash table, it is the
highest-priority waiter.

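The selection rule above can be sketched as a comparator in the spirit of a Pintos less function. The struct and helpers below are simplified stand-ins (our real code keys a hash table by priority); the sketch only shows how a higher-priority-first comparison picks the thread to wake.

```c
#include <assert.h>
#include <stdbool.h>

/* Minimal stand-in for struct thread. */
struct thread {
  int priority;
};

/* Comparator in the style of a Pintos list_less_func: returns true
   when a should come before b, i.e. when a has higher priority, so
   the front of the ordered structure is always the thread to wake. */
static bool priority_higher(const struct thread *a, const struct thread *b) {
  return a->priority > b->priority;
}

/* Pick the waiter to wake: keep the maximum-priority thread seen. */
static struct thread *pick_waiter(struct thread *waiters[], int n) {
  struct thread *best = waiters[0];
  for (int i = 1; i < n; i++)
    if (priority_higher(waiters[i], best))
      best = waiters[i];
  return best;
}
```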
>> B4: Describe the sequence of events when a call to lock_acquire()
>> causes a priority donation.  How is nested donation handled?

First a low-priority thread acquires the lock, but before it can
release the lock it is preempted by a medium-priority thread. When a
high-priority thread then calls lock_acquire(), it cannot take the
lock because the low-priority thread has not been able to run, so the
high-priority thread donates its priority to the low-priority thread.
This gives the low-priority thread CPU time, which allows it to finish
and call lock_release(); once the lock is released, the high-priority
thread is able to run.

In general, if a thread tries to acquire a lock and finds it already
held, it checks the priority of the lock's holder. If that priority is
lower than its own, it donates its priority, and the holder calls
thread_set_priority() with the priority it received.

Nested donation: if the donor itself holds a lock that an even
higher-priority thread needs, the donor receives a donation and
changes its priority. When its priority changes, it checks whether it
is still waiting for the first thread to release a lock, and if so it
passes along the priority it just received.

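The donation chain just described (C donates to B, which passes the donation on to A) can be sketched with simplified stand-in structures. The field and function names here (waiting_on, base_priority, donate(), release()) are illustrative, not our actual Pintos code; the point is the walk along the chain of lock holders.

```c
#include <assert.h>
#include <stddef.h>

struct thread;

/* Stand-in for struct lock: just the owning thread. */
struct lock {
  struct thread *holder;
};

/* Stand-in for struct thread with the fields donation needs. */
struct thread {
  int priority;            /* Effective (possibly donated) priority. */
  int base_priority;       /* Original priority, restored on release. */
  struct lock *waiting_on; /* Lock this thread is blocked on, or NULL. */
};

/* Model of the donation step in lock_acquire(): walk the chain of
   holders, raising each one's effective priority, so a nested
   donation (C -> B -> A) reaches the thread at the end of the chain. */
static void donate(struct thread *donor, struct lock *l) {
  donor->waiting_on = l;
  for (struct thread *h = l->holder; h != NULL;
       h = h->waiting_on ? h->waiting_on->holder : NULL) {
    if (h->priority < donor->priority)
      h->priority = donor->priority;
  }
}

/* Model of lock_release(): the holder reverts to its old priority. */
static void release(struct lock *l) {
  l->holder->priority = l->holder->base_priority;
  l->holder = NULL;
}
```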
>> B5: Describe the sequence of events when lock_release() is called
>> on a lock that a higher-priority thread is waiting for.

Once lock_release() is called, the high-priority thread, which has
been sitting idle waiting for an opportunity to acquire the lock,
wakes up and calls lock_acquire(). Once it holds the lock, it is able
to run.

---- SYNCHRONIZATION ----

>> B6: Describe a potential race in thread_set_priority() and explain
>> how your implementation avoids it.  Can you use a lock to avoid
>> this race?

If a thread is pulled off the ready list while another thread is
setting its priority, a race condition can occur: depending on whether
the priority change has already taken effect, two different threads
could be chosen to run. A lock can be used to avoid this race, and we
use one: it is acquired when a thread begins to change its priority
and released once the change is complete.

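A minimal sketch of this locking pattern, using a POSIX mutex as a stand-in for our set_lock (the real code uses a Pintos struct lock): the priority update happens entirely while the lock is held, so any other code path that takes the same lock before reading priorities sees either the old value or the new one, never a half-applied change.

```c
#include <assert.h>
#include <pthread.h>

/* Simplified stand-in for struct thread. */
struct thread_rec {
  int priority;
};

/* Stand-in for our set_lock. */
static pthread_mutex_t set_lock = PTHREAD_MUTEX_INITIALIZER;

/* Model of thread_set_priority(): the update is performed only while
   set_lock is held, serializing it against any reader that also
   acquires set_lock. */
static void set_priority(struct thread_rec *t, int new_priority) {
  pthread_mutex_lock(&set_lock);
  t->priority = new_priority;
  pthread_mutex_unlock(&set_lock);
}
```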
---- RATIONALE ----

>> B7: Why did you choose this design?  In what ways is it superior to
>> another design you considered?

We considered building a priority queue to hold the thread priorities,
but decided that the built-in hash structure could do everything we
needed and would avoid bugs because it was given to us.

             ADVANCED SCHEDULER
             ==================

---- DATA STRUCTURES ----

>> C1: Copy here the declaration of each new or changed `struct' or
>> `struct' member, global or static variable, `typedef', or
>> enumeration.  Identify the purpose of each in 25 words or less.

---- ALGORITHMS ----

>> C2: Suppose threads A, B, and C have nice values 0, 1, and 2.  Each
>> has a recent_cpu value of 0.  Fill in the table below showing the
>> scheduling decision and the priority and recent_cpu values for each
>> thread after each given number of timer ticks:

timer  recent_cpu    priority   thread
ticks   A   B   C   A   B   C   to run
-----  --  --  --  --  --  --   ------
 0
 4
 8
12
16
20
24
28
32
36

>> C3: Did any ambiguities in the scheduler specification make values
>> in the table uncertain?  If so, what rule did you use to resolve
>> them?  Does this match the behavior of your scheduler?

>> C4: How is the way you divided the cost of scheduling between code
>> inside and outside interrupt context likely to affect performance?

---- RATIONALE ----

>> C5: Briefly critique your design, pointing out advantages and
>> disadvantages in your design choices.  If you were to have extra
>> time to work on this part of the project, how might you choose to
>> refine or improve your design?