Part 5 – Scheduling (10%)
14) Different criteria have been suggested for comparing CPU scheduling algorithms, and which algorithm is judged best can depend on which criterion is compared. There are five main criteria for process scheduling algorithms (Gagne et al, 2012), listed below:
• CPU Utilization
• Throughput
• Turnaround time
• Waiting time
• Response time
CPU Utilization
We want to keep the CPU as busy as possible. CPU utilization can range from 0 to 100 percent; in a real system it should range from approximately 40 percent for a lightly loaded system to 90 percent for a heavily loaded one (Gagne et al, 2012).
Throughput
If the CPU is busy executing processes, then work is being done in the system. One way to measure that work is the number of processes completed per time unit, which is called throughput. For short processes this might be ten processes per second; for long processes it might be one process per hour (Gagne et al, 2012).
Turnaround time
From the point of view of a particular process, the important criterion is how long it takes to execute that process. The interval between the time of submission and the time of completion is the turnaround time. Turnaround time is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU and doing I/O (Gagne et al, 2012).
Waiting time
The scheduling algorithm does not affect the time a process spends executing or doing I/O; it affects only the time a process spends waiting in the ready queue. Waiting time is the sum of the periods spent waiting in the ready queue (Gagne et al, 2012).
Response time
In an interactive system, turnaround time may not be the best criterion. A process can often produce some output fairly early and continue computing new results while previous results are being output to the user. This motivates another measure: the time from the submission of a request until the first response is produced. Response time is the time it takes to start responding, not the time it takes to output the response; turnaround time, by contrast, is generally limited by the speed of the output device (Gagne et al, 2012).
15) CPU scheduling algorithms are classified into two categories: pre-emptive and non-pre-emptive. In pre-emptive scheduling, a running process may be suspended in the middle of its execution: the operating system reclaims the CPU, preempting the process, and gives the CPU to another process, usually one with a higher priority (though not always, as round robin demonstrates). In non-pre-emptive scheduling, once a process is given the CPU it runs for as long as it needs it. A major disadvantage of non-pre-emptive scheduling is that a long-running process monopolizes the CPU and will not release it until it is done; this is made more dangerous by program failures, since a process stuck in an infinite loop would occupy the CPU and cause the system to stall. The disadvantage of pre-emptive scheduling is the added complexity it brings to kernel design and to the design of multiprocess applications that must synchronize (Aravind et al, 2010).
16) Different algorithms are used to decide which process in the ready queue is allocated the CPU, and in which order. Three common algorithms are:
• First-come-first-served (FCFS)
• Round Robin (RR)
• Shortest Remaining Time (SRT)
Below I show diagrams for the following scheduling algorithms:
FCFS
[Diagram omitted] (Tran, 1998)

Round robin with time quantum of 2
[Diagram omitted] (Tran, 1998)

Shortest Remaining Time
[Diagram omitted] (Tran, 1998)

Feedback scheduling with an initial time quantum of 2 and a next quantum of 4
[Diagram omitted] (Tran, 1998)

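Since Tran's diagrams are not reproduced here, the sketch below computes what such diagrams summarize: per-process turnaround and waiting times under FCFS and round robin with a time quantum of 2. The workload (process names, arrival times and burst lengths) is an illustrative assumption, not the data behind the original diagrams.

from collections import deque

# Hypothetical workload: name -> (arrival time, CPU burst length).
procs = {"A": (0, 5), "B": (1, 3), "C": (2, 8)}

def fcfs(procs):
    # Run each process to completion in order of arrival.
    t, finish = 0, {}
    for name, (arrive, burst) in sorted(procs.items(), key=lambda p: p[1][0]):
        t = max(t, arrive) + burst
        finish[name] = t
    return finish

def round_robin(procs, quantum=2):
    # Time-slice the ready queue; a preempted process rejoins at the back.
    remaining = {name: burst for name, (_, burst) in procs.items()}
    order = sorted(procs, key=lambda n: procs[n][0])
    queue, finish, t, i = deque(), {}, 0, 0
    while len(finish) < len(procs):
        while i < len(order) and procs[order[i]][0] <= t:
            queue.append(order[i]); i += 1
        if not queue:                     # CPU idle until the next arrival
            t = procs[order[i]][0]
            continue
        name = queue.popleft()
        run = min(quantum, remaining[name])
        t += run
        remaining[name] -= run
        while i < len(order) and procs[order[i]][0] <= t:
            queue.append(order[i]); i += 1
        if remaining[name]:
            queue.append(name)            # preempted: back of the queue
        else:
            finish[name] = t
    return finish

for label, result in [("FCFS", fcfs(procs)), ("RR q=2", round_robin(procs))]:
    for name, done in sorted(result.items()):
        arrive, burst = procs[name]
        print(label, name, "turnaround:", done - arrive,
              "waiting:", done - arrive - burst)
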
17) Exponential averaging is used to predict the length of the next CPU burst, approximating a shortest-process-first choice. Let tn denote the length of the nth (most recent) CPU burst and Tn denote the stored average of all the bursts up to that point. Using a weighting factor a with 0 < a < 1, tn is weighted by a and the history by 1 - a, giving an estimate of the next CPU burst. The predicted value Tn+1 is given by the following formula:
Tn+1 = a * tn + (1 - a) * Tn (Bhatt, 2010)
If a equals 1, past history is ignored: the predicted next burst equals the immediately preceding one. If a is set to 0, the immediate past utilization is ignored altogether. By choosing a in the range 0 to 1 we weigh the immediate past usage together with the earlier history of the process, with older bursts receiving geometrically decreasing weight (Bhatt, 2010).
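A minimal sketch of this formula; the burst history, weighting factor and initial guess are illustrative values:

def predict_next(bursts, a=0.5, initial_guess=10.0):
    # Fold each observed burst t(n) into the running prediction:
    # T(n+1) = a * t(n) + (1 - a) * T(n)
    prediction = initial_guess
    for t in bursts:
        prediction = a * t + (1 - a) * prediction
    return prediction

print(predict_next([6, 4, 6, 4, 13, 13, 13]))  # drifts toward recent bursts
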
Part 6 – Concurrency (20%)
18)
19)
20) There is a list of conditions that must all hold in a system at the same time for deadlock to occur. The conditions are:
1. Mutual Exclusion
2. Hold and Wait
3. No Preemption
4. Circular Wait
(Gagne et al, 2012)
Mutual Exclusion
At least one resource must be held in a non-shareable mode, which means that only one process can use it at any given time; if another process requests the resource, it must be delayed until the resource is released. One way around this is to use sharable resources such as read-only files: multiple processes can open a read-only file simultaneously (Gagne et al, 2012).
Hold and Wait
A process holds at least one resource while at the same time waiting to acquire other resources that are already held by other processes. A way to break this condition is to guarantee that whenever a process requests a resource, it does not hold any others. One protocol achieving this requires each process to request and be allocated all of its resources before it begins executing; this can be implemented by requiring the system calls that request resources for a process to precede all of its other system calls (Gagne et al, 2012).
No Preemption
A resource can be released only voluntarily by the process holding it, once that process has finished its task. To ensure this condition does not hold, one protocol is the following: if a process that is holding some resources requests another resource that cannot be immediately allocated to it, then all the resources it is currently holding are preempted. The preempted resources are added to the list of resources for which the process is waiting, and the process is restarted only when it can regain its old resources as well as the new ones it is requesting (Gagne et al, 2012).
Circular Wait
A set {P0, P1, ..., Pn} of waiting processes must exist such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, and so on, with Pn-1 waiting for a resource held by Pn and Pn waiting for a resource held by P0. The way around this is to impose a total ordering on all resource types and to ensure that each process requests resources in increasing order of enumeration (Gagne et al, 2012).
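A minimal sketch of this resource-ordering remedy using Python locks; the resource names and rank numbers are illustrative assumptions:

import threading

# Every resource gets a fixed rank; processes must acquire
# locks in increasing rank order, so no cycle can form.
locks = {"printer": (1, threading.Lock()),
         "disk":    (2, threading.Lock()),
         "tape":    (3, threading.Lock())}

def acquire_in_order(names):
    for rank, lock in sorted(locks[n] for n in names):
        lock.acquire()

def release_all(names):
    for rank, lock in sorted((locks[n] for n in names), reverse=True):
        lock.release()

def worker(names):
    acquire_in_order(names)   # both threads lock printer before disk
    # ... use the resources ...
    release_all(names)

t1 = threading.Thread(target=worker, args=({"printer", "disk"},))
t2 = threading.Thread(target=worker, args=({"disk", "printer"},))
t1.start(); t2.start(); t1.join(); t2.join()
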
21) There are two main methods for ensuring deadlocks do not occur: deadlock prevention and deadlock avoidance. If a system employs neither, a deadlock situation may arise. In that case the system may provide an algorithm that examines the state of the system to determine whether a deadlock has occurred, together with an algorithm to recover from the deadlock. The system detects deadlock by means of a wait-for graph: a deadlock exists if and only if the wait-for graph contains a cycle. An edge from Pi to Pj in the graph indicates that process Pi is waiting for process Pj to release a resource that Pi needs. Consequently, to detect deadlock the system must maintain the wait-for graph and periodically invoke an algorithm that searches the graph for a cycle (Gagne et al, 2012).
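A minimal sketch of that cycle search, with the wait-for graph held as an adjacency dictionary; the three-process example is illustrative:

def has_cycle(graph):
    # Depth-first search; an edge back into the recursion stack
    # means the wait-for graph contains a cycle, i.e. a deadlock.
    visited, on_stack = set(), set()

    def dfs(node):
        visited.add(node)
        on_stack.add(node)
        for nxt in graph.get(node, ()):
            if nxt in on_stack:
                return True
            if nxt not in visited and dfs(nxt):
                return True
        on_stack.discard(node)
        return False

    return any(dfs(n) for n in graph if n not in visited)

# P1 waits for P2, P2 waits for P3, P3 waits for P1: deadlocked.
wait_for = {"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}
print(has_cycle(wait_for))  # True
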
There are several ways a system can recover from a deadlock once one has occurred. One of these is process termination, which can be done in two ways. The first is to abort all the deadlocked processes. The advantage is that the deadlock cycle is broken immediately; the disadvantage is that the deadlocked processes may have been computing for a long time, and aborting them all at once discards those partial computations, which must be redone later. The second is to abort one process at a time until the deadlock cycle is eliminated. This is better in that not every process needs to recompute later, but it incurs considerable overhead: after each process is aborted, the deadlock-detection algorithm must be invoked again to check whether any of the remaining processes are still deadlocked (Gagne et al, 2012).
22) Priority inversion occurs when a higher-priority process needs to read or modify kernel data that is currently being used or accessed by a lower-priority process. Because such data is usually protected, the higher-priority process must wait until the lower-priority one is finished with it. The situation becomes even more complicated if the lower-priority process is itself preempted in favour of another process with a higher priority. Priority inversion can only occur in systems with more than two priorities, so one remedy would be to allow only two; that is usually inadequate for most operating systems, which instead avoid the problem by implementing a priority-inheritance protocol. Under this protocol, all processes accessing resources needed by a higher-priority process inherit the higher priority until they have finished with the resource; once they are done, their priorities revert to their original values (Gagne et al, 2012).
23) Dining philosophers is one of the classic synchronization problems. It is so widely discussed because it demonstrates how deadlock can arise in a system. The problem consists of five philosophers who alternately think and eat, and who must eat in order to think more. They are all eating Chinese food, but only five chopsticks are available; they sit in a circle with access to one chopstick on the left and one on the right. To eat, a philosopher must pick up the left chopstick, then the right chopstick, eat, and finally put both chopsticks down. A problem arises if all of them pick up their left chopstick at the same time: none of them can then pick up a right chopstick, and all of them starve. This is a classic example of deadlock. In terms of code, the chopstick is a shared resource that is not being protected, producing a race condition when a philosopher picks up a chopstick already in use. The way to solve this problem is to lock the chopsticks before the philosopher eats (Kann, 2003). The following pseudo code demonstrates this:
[Pseudocode omitted] (Shene, 2014)
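Since the original pseudo code is not reproduced above, here is a minimal sketch of one common locking solution, assuming each chopstick is a mutex and each philosopher picks up the lower-numbered chopstick first, so that no circular wait (and hence no deadlock) can form:

import threading

N = 5
chopsticks = [threading.Lock() for _ in range(N)]

def philosopher(i, meals=3):
    left, right = i, (i + 1) % N
    first, second = min(left, right), max(left, right)  # global lock order
    for _ in range(meals):
        # think ...
        with chopsticks[first]:
            with chopsticks[second]:
                pass  # eat ...

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
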
This solution ensures that no philosopher tries to take a chopstick that is already in use, and it also ensures that every philosopher gets a chance to eat.
24)
Part 7 – Memory Management (10%)
25) The main difference between logical and physical memory when implementing paging is that physical memory is broken into fixed-sized blocks called frames, while logical memory is broken into blocks of the same size called pages. When a process is to be executed, its pages are loaded from their source into whatever free memory frames are available. The source is either the file system or the backing store; the backing store is divided into fixed-sized blocks that are the same size as a memory frame, or into clusters of multiple frames (Gagne et al, 2012).
Execution Time
If a process may be moved during its execution from one memory segment to another, then binding must be delayed until run time. Special hardware is needed for this scheme to work; most general-purpose operating systems use execution-time binding (Dhotre and Puntambekar, 2008).
Compile Time
If it is known at compile time where the process will reside in memory, absolute code is generated at compile time. If the process's memory location later changes, the code must be recompiled (Dhotre and Puntambekar, 2008).
Load Time
If it is not known at compile time where the process will reside, the compiler must generate relocatable code, and the final binding is delayed until load time. If the starting address changes, only the user code needs to be reloaded to incorporate the changed value (Dhotre and Puntambekar, 2008).
26) Segmentation is a memory management scheme that divides a program into smaller blocks called segments. A segment can be a subroutine, a data area or an array. Segmentation is similar to dynamic partitioning in that segments are of unequal size; it eliminates internal fragmentation but does suffer from external fragmentation. Segments are formed at program translation time by grouping related items together. Segmentation is convenient because it matches the way programs and data are organized (Dhotre, 2008).
The diagram below shows an example of segmentation:
[Diagram omitted] (No Author, Unknown)
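A minimal sketch of the segment-table lookup such a diagram depicts; the base and limit values are illustrative:

# Each segment table entry maps a segment number to (base, limit).
segment_table = {0: (1400, 1000),
                 1: (6300, 400),
                 2: (4300, 1100)}

def translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:
        raise MemoryError("offset beyond segment limit")  # addressing error
    return base + offset

print(translate(2, 53))  # physical address 4353
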
Paging permits the physical address space of a process to be non-contiguous. The logical address space of a process is divided into many small fixed-sized blocks called pages, and physical memory is divided into blocks of the same size called frames; within a given system, pages and frames are the same size. When a process is executed, its pages are moved from secondary storage, which is usually a fast disk, into any available frames in physical memory (Gill, 2006).
The diagram below shows how paging works:
[Diagram omitted] (Gill, 2006)
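A minimal sketch of the page-number/offset translation described above; the page size and page-table contents are illustrative:

PAGE_SIZE = 4096                 # 4 KB pages: the low 12 bits are the offset
page_table = {0: 5, 1: 9, 2: 1}  # page number -> frame number

def translate(logical_address):
    page = logical_address // PAGE_SIZE
    offset = logical_address % PAGE_SIZE
    frame = page_table[page]     # a missing entry would model a page fault
    return frame * PAGE_SIZE + offset

print(hex(translate(0x1234)))    # page 1, offset 0x234 -> 0x9234 in frame 9
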
27a) The TLB (translation look-aside buffer) is associative, high-speed memory. Each entry in the TLB consists of two parts: a key (tag) and a value. When the associative memory is presented with an item, the item is compared with all the keys in the TLB simultaneously; if the key is found, the corresponding value is returned. The search is fast and adds little performance overhead. The TLB contains only a small number of the page-table entries. When a logical address is generated by the CPU, its page number is presented to the TLB. If the page number is found, its frame number is immediately available and is used to access memory. If the page number is not in the TLB, which is known as a TLB miss, a memory reference to the page table must be made, either automatically in hardware or via an interrupt to the operating system; once the frame number is obtained, it can be used to access memory (Gagne et al, 2012).
The diagram below shows how the TLB works:
[Diagram omitted] (Gagne et al, 2012)
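A minimal sketch of a TLB lookup in front of a page table, using least-recently-used eviction; the sizes, contents and reference string are illustrative:

from collections import OrderedDict

page_table = {p: p + 100 for p in range(16)}  # page -> frame
TLB_SIZE = 4
tlb = OrderedDict()
hits = misses = 0

def lookup(page):
    global hits, misses
    if page in tlb:
        hits += 1
        tlb.move_to_end(page)        # refresh LRU position
        return tlb[page]
    misses += 1                      # TLB miss: walk the page table
    frame = page_table[page]
    if len(tlb) >= TLB_SIZE:
        tlb.popitem(last=False)      # evict the least recently used entry
    tlb[page] = frame
    return frame

for p in [1, 2, 1, 3, 4, 5, 1]:
    lookup(p)
print(f"hits={hits} misses={misses}")  # hits=2 misses=5
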
27b) In order to compute the EAT (effective access time) we need to know how much time is needed to service a page fault. A page fault causes the sequence of events listed below:
[List of page-fault events omitted] (Gagne et al, 2012, p. 405)
Treating the TLB lookup time as negligible, a TLB hit costs one memory access while a TLB miss costs two (one to read the page table plus one for the data). If a memory access takes 100 nanoseconds, then with a hit ratio of 80% the effective access time is EAT = 0.80 x 100 + 0.20 x 200 = 120 ns. For the more realistic hit ratio of 99% it is EAT = 0.99 x 100 + 0.01 x 200 = 101 ns (Gagne et al, 2012).
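The same calculation as a minimal sketch:

def eat(hit_ratio, memory_ns=100):
    hit_cost = memory_ns           # TLB hit: one memory access
    miss_cost = 2 * memory_ns      # TLB miss: page-table access + data access
    return hit_ratio * hit_cost + (1 - hit_ratio) * miss_cost

print(eat(0.80))  # 120.0 ns
print(eat(0.99))  # 101.0 ns
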
28) When a page that the processor needs to execute is not present in main memory, this is called a page fault. To bring the required page into main memory, another page must be removed from main memory to free space for the new page; the page that is removed is called the victim page. The victim is selected by one of several page replacement algorithms. One widely discussed algorithm is optimal page replacement (Reddy, 2009).
The diagram below shows how it works:
[Diagram omitted] (Reddy, 2009)
This policy selects for replacement, when a page fault occurs, the page whose next reference lies furthest in the future. Its disadvantage is that it is impossible to implement, since it requires the operating system to have perfect knowledge of future references. Its advantage is that it incurs fewer page faults than any other algorithm and so minimizes the total cost of page replacement, which makes it a useful benchmark (Reddy, 2009).
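A minimal sketch of the optimal policy, counting page faults over an illustrative reference string with three frames:

def optimal_faults(refs, num_frames):
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)
            continue
        # Evict the resident page whose next use is furthest away
        # (or which is never referenced again).
        def next_use(p):
            future = refs[i + 1:]
            return future.index(p) if p in future else float("inf")
        frames.remove(max(frames, key=next_use))
        frames.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(optimal_faults(refs, 3))  # 9 page faults for this string
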
29) If the number of frames allocated to a low-priority process falls below the minimum required by the computer architecture, that process's execution must be suspended: its remaining pages are paged out and all its allocated frames are freed. If a process does not have enough frames to support its active pages, it will page-fault and must replace some page; but since all its pages are in active use, the replaced page is needed again immediately, leading to fault after fault. This high paging activity is called thrashing: a process is thrashing if it is spending more time paging than executing (Gagne et al, 2012).
Part 8 – I/O Management and Disk Scheduling (5%)
30) There are three different types of I/O buffering schemes:
1. Single Buffering
2. Double Buffering
3. Circular Buffering
Single Buffering
With single buffering, the operating system assigns a buffer in the system portion of main memory. The scheme is simple: input transfers are made into the system buffer, then after the transfer the block is moved into user space and the next block is requested. This allows the user process to work on one block of data while the next is being read in. The operating system is also able to swap the process out, since it keeps track of the assignment of system buffers to user processes (Dhotre, 2009).
The diagram below shows a single buffer scheme:
[Diagram omitted] (Dhotre, 2009)
Double Buffering
Double buffering is also called buffer swapping. With double buffering there are two buffers in the system: one holds data from the controller or driver while it waits to be retrieved by a higher level of the hierarchy, and the other is used to store data from the lower-level module. The improvement double buffering brings comes at the cost of increased complexity, and it may still be inadequate if the process performs rapid bursts of I/O (Dhotre, 2009).
The diagram below depicts double buffering within a driver:
[Diagram omitted] (Dhotre, 2009)
Circular Buffering
When more than two buffers are used at the same time, the collection of buffers is referred to as a circular buffer. Within it, the producer cannot pass the consumer, because that would overwrite buffers before they have been consumed: the producer can only fill up to buffer j - 1 while buffer j is being consumed (Dhotre, 2009).
The diagram below shows how the circular buffer works:
[Diagram omitted] (Dhotre, 2009)
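A minimal sketch of a bounded circular buffer enforcing that rule; the buffer size and contents are illustrative:

class CircularBuffer:
    def __init__(self, size):
        self.buf = [None] * size
        self.size = size
        self.head = 0    # next slot the consumer will read
        self.count = 0   # number of filled slots

    def produce(self, item):
        if self.count == self.size:
            raise BufferError("full: the producer may not pass the consumer")
        self.buf[(self.head + self.count) % self.size] = item
        self.count += 1

    def consume(self):
        if self.count == 0:
            raise BufferError("empty: nothing to consume")
        item = self.buf[self.head]
        self.head = (self.head + 1) % self.size
        self.count -= 1
        return item

ring = CircularBuffer(4)
for block in ["b0", "b1", "b2"]:
    ring.produce(block)
print(ring.consume(), ring.consume())  # b0 b1
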
31) There are three different disk scheduling algorithms that can be used. Those are:
• FIFO (first-in-first-out)/FCFS (first-come-first-served)
• SSTF (shortest seek time first)
• SCAN
FIFO is claimed to be the fairest of them all, but it is also said to be the slowest. The reason lies in the way FIFO works: requests are serviced strictly in the order of their arrival in the queue, favouring the longest-waiting request regardless of how far the disk head must move to service it. The diagram below shows the way FCFS works and why it is slow (Aravind, 2010):
[Diagram omitted] (Aravind, 2010)
In contrast, SSTF favours the request with the shortest seek time, that is, the request closest to the current head position. Servicing the nearest request first reduces total head movement and gives a better average service time than FCFS. However, the algorithm risks starvation, because requests near the current head position repeatedly overtake requests for distant cylinders. The diagram below shows the way SSTF works (Aravind, 2010):
[Diagram omitted] (Aravind, 2010)
SCAN scheduling differs from the two other algorithms listed. The disk head sweeps across the entire surface of the disk, going as far as the outermost cylinder before changing direction and coming back to the innermost cylinder, and it services each waiting request as the head reaches that request's location on its path back and forth across the disk. Consequently the head movement is less than under FCFS, and this approach is fairer than SSTF (Godbole, 2010).
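A minimal sketch of SCAN's service ordering; the request queue and head position are illustrative:

def scan_order(requests, head, direction="up"):
    # Service everything at or beyond the head in the current travel
    # direction, then reverse and sweep back.
    up = sorted(r for r in requests if r >= head)
    down = sorted((r for r in requests if r < head), reverse=True)
    return up + down if direction == "up" else down + up

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(scan_order(queue, head=53))
# [65, 67, 98, 122, 124, 183, 37, 14]
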
Part 9 – Security (5%)
32) There are many different ways a user can keep their computers and personal devices safe. User authentication is usually based on one or more of three things: something the user knows (for example a password), something the user has (for example a key) and something the user is (for example a fingerprint).
Passwords
The most common form of user authentication is the password. When a user enters a user ID or account name, they are then asked to enter a password. If the password matches the one stored by the system, the user is let in, because the system believes the account is being accessed by its owner. The advantage is that, ideally, only the user knows the password, so no one else can gain access. Passwords are more common than any other security measure because they are simple and easy for users to understand, but they are also very vulnerable: they can often be guessed easily, accidentally exposed by the user, or passed on to someone who is not authorized (Gagne et al, 2012).
Biometrics
Another way of securing a system is through biometrics, for example a computer or laptop that offers the option of using your fingerprint as a form of user authentication before allowing you onto the device. This is a strong form of authentication, and some devices allow both a password and a fingerprint, which makes it harder still for someone else to get into the device or system. But it is not as strong as it looks. The fingerprint scanner works by saving a template of your fingerprint on the system and matching against it when you log in, so if your system is not encrypted, someone who gains access to the saved templates can gain access to your device. Biometrics alone are also not sufficient to guarantee the identity of the actual user. Since the system can be penetrated to reach the saved authentication data, false negatives are something to consider as well: a false negative occurs when security software fails to detect an attack on the system, in which case the user would never know that anything had happened to their device (Gagne et al, 2012).
33) Hardening an operating system makes the system more secure against internal and external attacks. The specific methods vary, but the techniques are broadly the same across different operating systems. Some of the techniques used are:
• Removing unnecessary services, applications and protocols
• Configuring users, groups and permissions
• Configuring resource controls
It is important that an operating system be configured to run only the services required to perform its assigned tasks. For example, unless a host is acting as a mail or web server, there is no need for it to have SMTP or HTTP services running. Not running unnecessary services improves system security because it minimizes the surface exposed to external attack (Smyth, 2010).
Many modern operating systems can enforce stronger passwords to make user authentication as reliable as possible. These measures ensure the user has not chosen a weak, easily guessed password; other measures include enforcing regular password changes (for example once a week) and disabling a user's account after too many failed login attempts (Smyth, 2010). Operating systems also give administrators privileges that let them control what is shared between users of the same system and keep each user's area separate from the others. This prevents internal attacks and hacking attempts by ensuring that users of the same device cannot access anyone else's files.
Many modern operating systems provide regular updates that include patches and fixes. These are known as security updates, and they usually close loopholes and security threats that are being actively exploited (Smyth, 2010). They help prevent attacks by fixing the routes attackers use to access the system, making intrusion harder. Most operating systems have automatic updates turned on by default, but users have the option to turn them off; it is advised not to do so, as that leaves the system vulnerable.
34) The users of a system can be considered the biggest threat to system security, because a user can unknowingly download a virus that disrupts the system, causing it to break down or be penetrated. This can happen when users visit malicious websites that automatically download a virus, allowing hackers to control the system, or when users grant permission by mistake; some dangerous websites are known to have done this. The way to minimize this is to ensure the system runs software that includes internet security. Such software detects the threat and prevents the user from visiting the site or downloading files from it. For example, the Google Chrome web browser includes a feature that, when a user is about to visit a known dangerous website, blocks access and tells them to return to the previous page. A way to detect a security breach directly related to a user is to install anti-virus software; most of these packages allow the administrator to see which attacks have been attempted and stopped. If a correlation between a particular user and the threats is apparent, the administrator can talk to that user and take extra precautions to make sure they do not repeat what they have been doing. Once a security breach is detected, the administrator can run the anti-virus software to scan the system, destroy any malicious software and check whether anything has been damaged.