Part 5 - Scheduling

14) Process Scheduling Algorithms

1. CPU utilisation. The aim is to keep the CPU as busy as possible. Utilisation ranges from 0-100%; in a real system it should typically sit between 40% (lightly loaded) and 90% (heavily loaded).
2. Throughput. If the CPU is busy executing processes, work is being done. Throughput is the number of processes completed per time unit, e.g. ten per hour or 100 per second.
3. Turnaround time. From the point of view of a single process, this is the total time taken to execute it: the interval from the time the process is submitted to the time it is completed.
4. Waiting time. The scheduling algorithm does not affect how long a process spends executing or doing I/O; it affects only the amount of time a process spends waiting in the ready queue.
5. Response time. This is measured from the submission of a request up to when the first response is produced. It is the time taken to start responding, not the time taken to output the full response. P289
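
As a concrete illustration of turnaround and waiting time, the following sketch (a minimal example of my own; the burst times are invented, not from the notes) computes both metrics for processes that all arrive at time 0 and run first-come, first-served:

    # Minimal FCFS sketch: waiting and turnaround time for processes
    # that all arrive at t=0. Burst times are invented example values.
    def fcfs_metrics(burst_times):
        waiting, turnaround = [], []
        clock = 0
        for burst in burst_times:
            waiting.append(clock)        # time spent in the ready queue
            clock += burst               # the process runs to completion
            turnaround.append(clock)     # submission (t=0) to completion
        return waiting, turnaround

    waiting, turnaround = fcfs_metrics([24, 3, 3])
    print(waiting)     # [0, 24, 27] -> average waiting time 17
    print(turnaround)  # [24, 27, 30] -> average turnaround time 27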


15) Pre-Emptive & Non Pre-Emptive Scheduling

Under non pre-emptive scheduling, scheduling decisions are taken only when a process switches from the running state to the waiting state, or when a process terminates. Once the CPU has been assigned to a process, that process keeps it until it terminates or switches to the waiting state.

Pre-emptive scheduling additionally allows the CPU to be taken away when a process switches from the running state to the ready state (for example, on an interrupt) or when a process switches from the waiting state to the ready state. P288

16) Diagrams for Scheduling Algorithms


17) Exponential Averaging

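In outline (a standard formulation, sketched here since the section body is blank): shortest-job-first scheduling needs an estimate of the next CPU burst, and the usual estimator is an exponential average of the measured bursts, tau(n+1) = alpha * t(n) + (1 - alpha) * tau(n), where t(n) is the length of the most recent burst, tau(n) the previous prediction, and alpha (commonly 1/2) controls how heavily recent history is weighted. The burst values below are invented for illustration.

    # Exponential-averaging sketch for predicting the next CPU burst.
    def predict_bursts(bursts, alpha=0.5, tau0=10):
        tau = tau0
        for t in bursts:
            yield tau                            # prediction before this burst
            tau = alpha * t + (1 - alpha) * tau  # fold the observed burst in

    for guess in predict_bursts([6, 4, 6, 4, 13, 13, 13]):
        print(guess)  # 10, 8.0, 6.0, 6.0, 5.0, 9.0, 11.0
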
Part 6 - Concurrency


18) Race Condition

The critical section is a segment of code that each process has, in which the process may change common variables, update a table, and so on. Whenever one process is executing in its critical section, no other process may be executing in its own: no two processes can be inside their critical sections at the same time. p230

An atomic action is one that is indivisible: while it is carrying out its read or write, no other process or I/O device can manipulate the data involved until the operation is complete. It must also be carried out completely or not at all; it cannot stop part-way through. http://www.cs.nott.ac.uk/~nza/G52CON/lecture4.pdf

A race condition arises when several processes access and change the same data concurrently and the outcome of the execution depends on the particular order in which the accesses take place; with an unlucky ordering, the shared state ends up incorrect. p229
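
A minimal sketch of a race (my own hypothetical example): two threads perform an unsynchronised read-modify-write on a shared counter, so the final value depends on how their steps interleave.

    # Race-condition sketch: two threads do unsynchronised
    # read-modify-write on shared data, so updates can be lost.
    import threading

    counter = 0

    def worker():
        global counter
        for _ in range(100_000):
            tmp = counter       # read shared state
            counter = tmp + 1   # write back; the other thread may have
                                # updated counter in between, losing work

    threads = [threading.Thread(target=worker) for _ in range(2)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(counter)  # frequently less than 200000, and varies run to run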


19) Critical Section

A) Hardware solutions are based around “locking”: protecting critical regions with locks. If we are running on a single processor, all we need to do is prevent interrupts from occurring while a shared variable is being modified. p233

B) A semaphore is an integer variable that is accessed only through two operations, wait() and signal(). While one process is modifying a semaphore's value, no other process can modify that same value at the same time. In its simplest use a semaphore is just a lock protecting a shared resource: the lock must be acquired before the shared resource can be used. P238
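
A short sketch of this lock-like use with Python's threading.Semaphore (my illustration of the description above, not code from the book):

    # Binary-semaphore-as-lock sketch: acquire() plays wait(), release()
    # plays signal(); only one thread at a time touches the shared list.
    import threading

    sem = threading.Semaphore(1)   # 1 = resource currently free
    shared = []

    def worker(name):
        sem.acquire()              # wait(): block until the resource is free
        try:
            shared.append(name)    # critical section
        finally:
            sem.release()          # signal(): let the next waiter in

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(5)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(shared)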

C) Monitors were introduced to reduce the chance of the errors and deadlocks that arise when using semaphores by hand. A monitor is a group of routines protected by a mutual-exclusion lock: no routine can be executed until a thread acquires the lock, so only a single thread can execute inside the monitor at a time, and all other threads must wait for the running thread to give the lock up. A thread can, however, put its execution on hold inside the monitor and wait for an event to occur, which allows another thread to take control of the lock.
http://www.programmerinterview.com/index.php/operating-systems/monitors-vs-semaphores/
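
Python has no built-in monitor construct, but the same shape can be sketched as a class whose methods all share one lock, plus condition variables for the wait-for-an-event behaviour (a hypothetical bounded buffer of my own, not from the page cited):

    # Monitor sketch: one lock guards every method; a condition variable
    # lets a thread release the lock while it waits for an event.
    import threading

    class BoundedBuffer:
        def __init__(self, size):
            self._lock = threading.Lock()                 # the monitor lock
            self._not_empty = threading.Condition(self._lock)
            self._not_full = threading.Condition(self._lock)
            self._items, self._size = [], size

        def put(self, item):
            with self._lock:                              # enter the monitor
                while len(self._items) >= self._size:
                    self._not_full.wait()                 # sleep, lock released
                self._items.append(item)
                self._not_empty.notify()                  # wake a consumer

        def get(self):
            with self._lock:
                while not self._items:
                    self._not_empty.wait()
                item = self._items.pop(0)
                self._not_full.notify()                   # wake a producer
                return item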


20) Deadlocks

Deadlocks can occur only when all of the following circumstances hold:

– Mutex. Only one process at a time can use a resource; no two concurrent processes are in their critical section at the same time.
– Hold and wait. A process holds at least one resource while waiting to acquire at least one more, held by other processes.
– No pre-emption. Resources cannot be forcibly taken away; a resource is released only voluntarily by the process holding it.
– Circular wait. There is a chain of processes in which each process is waiting for a resource held by the next process in the chain, the last of which is waiting for a resource held by the first.

It is possible to prevent deadlocks by ensuring that at least one of these conditions cannot hold.

To prevent each circumstance we can:

– Mutex cannot be avoided, as it is necessary for the system to run.
– Hold and wait. This can be removed by requiring a process to request all of its resources before it begins executing, so that when it requests a resource it holds no others.
– No pre-emption. If a process requests a resource that cannot be allocated to it immediately, it must release all of the other resources it is holding.
– Circular wait. Order the resources and require that processes request them in that set order (see the sketch after this list).
Deadlocks lecture notes
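
A sketch of that last rule in code (hypothetical locks of my own): two threads that each need both locks always acquire them in one global order, so no cycle of waiting can form.

    # Circular-wait prevention sketch: impose a total order on the locks
    # and always acquire them in that order.
    import threading

    lock_a = threading.Lock()   # order 1
    lock_b = threading.Lock()   # order 2

    def worker(name):
        # Both threads take lock_a before lock_b. If one thread took them
        # in the opposite order, each could end up holding one lock while
        # waiting on the other: hold and wait plus circular wait.
        with lock_a:
            with lock_b:
                print(name, "holds both locks")

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(2)]
    for t in threads: t.start()
    for t in threads: t.join()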


21) Deadlock Detection

To detect deadlocks effectively, it is best to first determine how often a deadlock is likely to happen and how many processes will be affected when one does.

From this we can decide how, and how often, to search for possible deadlock situations. If deadlocks are frequent, the detection algorithm should be invoked frequently. In a normally behaving system, a practical approach is to run the algorithm at set intervals, such as every 2 hours, or whenever CPU utilisation drops below 50%.

The two ways in which we can detect deadlock are:

Single instance of each resource type;

If every resource has only one instance, we can use an algorithm based on a wait-for graph. A deadlock exists in the system if and only if the wait-for graph contains a cycle. To detect a deadlock, the system maintains the wait-for graph and periodically runs an algorithm that searches for a cycle in it. Such a cycle search requires on the order of n² operations, where n is the number of vertices in the graph. P358
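
A compact sketch of such a cycle search (the graph edges are my invented example; an edge P1 -> P2 means P1 is waiting for a resource held by P2):

    # Wait-for graph sketch: deadlock exists iff the graph has a cycle.
    def has_cycle(graph):
        done, on_path = set(), set()

        def dfs(node):
            on_path.add(node)
            for nxt in graph.get(node, []):
                if nxt in on_path:           # back to the current path: cycle
                    return True
                if nxt not in done and dfs(nxt):
                    return True
            on_path.discard(node)
            done.add(node)
            return False

        return any(dfs(p) for p in graph if p not in done)

    # P1 waits for P2, P2 waits for P3, P3 waits for P1: deadlocked.
    print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}))  # True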

Several instances of a resource type;

This algorithm is similar to the banker's algorithm. The detection algorithm investigates every possible allocation sequence for the processes that remain to be completed. The book's worked example uses five processes, three resource types and seven resource instances. P358/359
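
A sketch of that style of detection (the matrices below are a small invented example of my own, not the book's): repeatedly find a process whose outstanding request can be met from the available resources, assume it finishes and returns its allocation, and flag as deadlocked any process that can never be satisfied.

    # Multi-instance detection sketch: work is the pool of free resources;
    # a process whose request fits in work is assumed to finish and
    # release its allocation back into work.
    def detect_deadlock(available, allocation, request):
        n = len(allocation)
        work, finished = list(available), [False] * n
        progress = True
        while progress:
            progress = False
            for i in range(n):
                if not finished[i] and all(r <= w for r, w in zip(request[i], work)):
                    work = [w + a for w, a in zip(work, allocation[i])]
                    finished[i] = True
                    progress = True
        return [i for i in range(n) if not finished[i]]  # deadlocked set

    # Invented example: two resource types, three processes. P0 and P1
    # each hold what the other needs, so both are deadlocked.
    print(detect_deadlock(available=[0, 0],
                          allocation=[[1, 0], [0, 1], [0, 0]],
                          request=[[0, 1], [1, 0], [0, 0]]))  # [0, 1]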

22) Priority Inversion

A scheduling problem arises when a higher-priority process needs to read or modify data that is currently being accessed by a lower-priority process. Priority inversion is the situation in which a lower-priority process determines how long a higher-priority process must wait for the resource to become free. This is not a problem if there are only two priorities in the system, because one process will be using the resource while the other waits, so no third, middle-priority process can get in between them and prolong the wait. P241/242


23) Dining Philosophers

This is a classic problem used to demonstrate how deadlock can happen in a system. Five philosophers sit in a circle, all wanting to eat, but there are only five chopsticks available, one between each pair: each philosopher has one chopstick to his left and one to his right. To eat, a philosopher must pick up the left chopstick, then the right one, eat, and then put them both down. The problem arises if they all grab the left chopstick at the same time: each then waits forever for a right chopstick held by a neighbour. No two neighbours can eat together; it is a waiting game, just like process synchronisation. We can therefore use semaphores to attempt to solve the problem: treating each chopstick as a semaphore, a philosopher executes a wait operation to grab a chopstick and a signal operation to release it.
P223

We can also use the following pseudo code to solve the problem:

1. Move from thinking to hungry (to simplify the analysis we assume that philosophers always become hungry);
2. When hungry, randomly choose to try to pick up the left or right fork;
3. Wait until that fork is down, then pick it up;
4. If the other fork is free, pick it up; otherwise, put the original fork down (and return to step 1);
5. Eat (since in possession of both forks);
6. When finished eating, put both forks down in any order and return to thinking.
http://www.prismmodelchecker.org/tutorial/phil.php (reference for steps 1-6)
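
A runnable sketch of the semaphore approach (the right-then-left asymmetry for the last philosopher is one standard deadlock-avoidance fix, my addition rather than something in the notes):

    # Dining-philosophers sketch: each chopstick is a binary semaphore;
    # acquire() is the wait operation, release() the signal operation.
    # The last philosopher picks up right-then-left so that the
    # "everyone grabs their left chopstick" deadlock cannot occur.
    import threading

    N = 5
    chopsticks = [threading.Semaphore(1) for _ in range(N)]

    def philosopher(i):
        first, second = i, (i + 1) % N       # left chopstick, then right
        if i == N - 1:
            first, second = second, first    # break the symmetry
        for _ in range(3):                   # a few think/eat rounds
            chopsticks[first].acquire()      # wait(chopstick)
            chopsticks[second].acquire()
            print(f"philosopher {i} eats")
            chopsticks[second].release()     # signal(chopstick)
            chopsticks[first].release()

    threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
    for t in threads: t.start()
    for t in threads: t.join()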


24) Deadlock in Programming task


Part 7 - Memory Management

25) Logical/Physical memory

A logical address is one generated by the CPU, whereas the address seen by the memory unit's memory-address register is the physical address. - cite p355

Execution-time binding means that a process can be moved during its execution from one part of memory to another, so the binding of addresses is delayed until run time. Compile-time binding, by contrast, must know at compile time where in memory the process will reside; absolute code is then generated starting from that location, which can cause a lot of problems later if the starting location changes, because the code must be recompiled.

Comparing execution-time binding to load-time binding: load-time binding does not need the process's memory address at compile time, because the compiler can generate relocatable code. Binding is then not completed until load time, and if the starting address changes, the code simply needs to be reloaded.

Execution-time binding is therefore the most commonly used scheme, because it requires the least management (nothing needs recompiling or reloading when a process moves) and, unlike the other two, it produces logical addresses that differ from the physical addresses.

P354
26) Segmentation & Paging

External fragmentation happens when the available memory space is large enough to satisfy a request but the free spaces are not contiguous. We can end up with many chunks of free memory scattered between processes, which is a problem because if they were all together, we would have the space to run more processes. Internal fragmentation happens when the memory assigned to a process is larger than the memory it requested, leaving unused space inside the allocated block.

Segmentation views the logical address space as a collection of segments. It splits a program into blocks of unequal size. Each segment is identified by a number, and a location within it by its distance from the segment's base address (the offset), so a logical address is the pair (segment-number, offset). Segmentation can be affected by external fragmentation but not internal.

Diagram to show how segmentation works:
reference the book figure 8.9 p367


Paging is similar to segmentation in that both use non-contiguous memory allocation. However, paging is not affected by external fragmentation. Paging splits physical memory and processes into fixed-size blocks of the same size: physical memory blocks are called frames and logical memory blocks are called pages. 'When the process starts, pages get loaded into any available memory frames from their source.' cite page367 This source is split into fixed-size chunks that are the same size as the frames.

Diagram to show how paging works:
reference book p368 fig 8.11
P365 + 367
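
The page-number/offset split is plain arithmetic on the logical address; a small sketch (the page size and page-table contents are invented for illustration):

    # Paging translation sketch: a logical address splits into
    # (page number, offset); the page table maps page -> frame.
    PAGE_SIZE = 4096                   # 4 KB pages and frames (invented)
    page_table = {0: 5, 1: 2, 2: 7}    # page number -> frame number

    def translate(logical):
        page, offset = divmod(logical, PAGE_SIZE)
        frame = page_table[page]       # a missing entry would page-fault
        return frame * PAGE_SIZE + offset

    print(hex(translate(0x1234)))  # page 1, offset 0x234 -> 0x2234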


27) TLB – Translation look-aside buffer

a) Explain TLB

The TLB is high-speed associative memory whose entries each consist of a key and a value. When presented with an item, it is compared with all keys simultaneously, and if a match is found the corresponding value is returned. To stay fast, the TLB is kept small and holds only a limited number of page-table entries. When the CPU generates a logical address, the page number is presented to the TLB. If the page number is found (a TLB hit), its frame number is immediately available. Because the lookup happens within the CPU's instruction pipeline, it adds essentially no performance penalty compared with a system that does not use paging. If a TLB miss occurs, the page table is consulted instead, either in hardware or via an interrupt; once the frame number is retrieved we can use it to access memory, and we also add the entry to the TLB so that the next reference to the same page is fast.

Diagram showing how TLB works:
reference book p373 fig 8.14
b) Effective access time - EAT

Suppose it takes 100 nanoseconds to access memory, and treat the TLB lookup itself as taking negligible time (it happens inside the CPU pipeline). A TLB hit then costs one memory access (100 ns) and a miss costs two (200 ns: one for the page-table entry, one for the data). With a hit ratio of 80% we work it out as:
0.80 * 100 + 0.20 * 200 = 120 nanoseconds.

Using the same information but with a hit ratio of 99% we work it out as:
0.99 * 100 + 0.01 * 200 = 101 nanoseconds.
p374
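
The same arithmetic as a tiny function (figures from the example above):

    # Effective access time: a hit costs one memory access, a miss two.
    def eat(hit_ratio, mem_ns=100):
        return hit_ratio * mem_ns + (1 - hit_ratio) * 2 * mem_ns

    print(eat(0.80))  # 120.0 ns
    print(eat(0.99))  # 101.0 ns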

28) Page Replacement
Optimal page replacement is not practical because it is extremely hard to implement: it requires knowledge of future events, i.e. the whole of the rest of the reference string. It is therefore only used for comparison studies.
An alternative is the LRU (least recently used) algorithm. This method records the time of each page's last use and replaces the page that has not been used for the longest time. It is essentially optimal page replacement looking backward in time rather than forward. However, despite being a consistently good algorithm, its downside is that it requires hardware assistance to run.
One form of assistance is a counter, added to the CPU and incremented on every reference, so that we always have the last reference time of each page; we replace a page by searching the table for the one with the oldest time. This counter must be maintained when page tables are changed, and clock overflow has to be taken into account. We could instead use a stack to implement LRU: the most recently used page is kept at the top of the stack and the least recently used sits at the bottom. This form is appropriate for software implementations of LRU replacement.
Diagram showing how LRU works:

reference book p416 fig 9.15

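A sketch of LRU bookkeeping using an ordered map as the "stack" described above (the reference string is the classic textbook one; the frame count is chosen for illustration):

    # LRU sketch: an OrderedDict plays the stack; move_to_end marks a page
    # most recently used, and the opposite end holds the LRU victim.
    from collections import OrderedDict

    def lru_faults(references, frames):
        pages, faults = OrderedDict(), 0
        for p in references:
            if p in pages:
                pages.move_to_end(p)           # hit: now most recently used
            else:
                faults += 1                    # page fault
                if len(pages) >= frames:
                    pages.popitem(last=False)  # evict least recently used
                pages[p] = True
        return faults

    refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
    print(lru_faults(refs, frames=3))  # 12 faults
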
29) Thrashing
'A process is thrashing if it is spending more time paging than executing.' cite p426

If the number of frames allocated to a process falls below the minimum it requires, its execution must be suspended and its remaining pages paged out to free their frames. More generally, if a process does not have enough frames to hold the pages it is actively using, it will page-fault and a page must be replaced. But because all its pages are in active use, whichever page is replaced is needed again immediately, so it faults again, and again. The process ends up constantly paging when it should be executing, which is what we call thrashing.
P425
The operating system monitors CPU utilisation and, when it is low, increases the degree of multiprogramming by introducing a new process. Suppose global page replacement is used, which replaces pages without regard to which process they belong to. A process that needs more frames starts taking them from other processes; those processes need the stolen pages, so they fault too and take frames from yet other processes in turn. All this faulting queues up at the paging device, the ready queue empties, and CPU utilisation drops. Seeing the low utilisation, the scheduler increases the degree of multiprogramming again; the new process tries to start by taking frames from the running processes, leading to more faults and a longer paging queue, again dropping CPU utilisation, and so on until the system is thrashing.

We can say that as the degree of multiprogramming increases, CPU utilisation also increases, but only up to a maximum; if the degree of multiprogramming rises beyond that point, thrashing sets in and utilisation drops sharply. p450

We can see this in the following diagram:
reference book p426 fig 9.18

Part 8 - I/O Management and Disk Scheduling

30) I/O Buffering

Buffering for I/O devices is performed to cope with the speed mismatch between the producer and consumer of a data stream. It is also used to adapt between devices with different data-transfer sizes, for example breaking data down into smaller packets for sending across a network and reassembling them when they are received. Finally, it can be used to support copy semantics.

http://www.cs.uic.edu/~jbell/CourseNotes/OperatingSystems/13_IOSystems.html

Comparing single, double and circular buffering: single buffering assigns one buffer in main memory to an I/O request, whereas double buffering uses two, so a process can transfer data to or from one buffer while the operating system empties or fills the other. Circular buffering uses two or more buffers, each treated as a single unit in the circle, and is used when the I/O must keep up with the process.

http://cs.nyu.edu/courses/Fall12/CSCI-GA.2250-001/slides/Chapter11.pdf
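
A sketch of the circular arrangement (slot count and contents invented): a fixed ring of buffers in which the producer fills one index and the consumer drains another, both wrapping around.

    # Circular-buffer sketch: producer fills slot `head`, consumer drains
    # slot `tail`, and both indices wrap around modulo the ring size.
    class RingBuffer:
        def __init__(self, slots):
            self.buf = [None] * slots
            self.head = self.tail = self.count = 0

        def put(self, item):
            if self.count == len(self.buf):
                raise BufferError("ring full: producer must wait")
            self.buf[self.head] = item
            self.head = (self.head + 1) % len(self.buf)
            self.count += 1

        def get(self):
            if self.count == 0:
                raise BufferError("ring empty: consumer must wait")
            item = self.buf[self.tail]
            self.tail = (self.tail + 1) % len(self.buf)
            self.count -= 1
            return item

    ring = RingBuffer(4)
    for block in ("blk0", "blk1", "blk2"):
        ring.put(block)
    print(ring.get(), ring.get())  # blk0 blk1, in arrival order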

31) Disk Scheduling

On a magnetic disk, accessing data at different locations relies on the movement of a physical “arm” that carries the read-write head across the disk.

FIFO can often be slow because the head movement simply follows arrival order, so the jumps can be huge. With a queue of 52, 198, 13, 48 there are stretches, such as between 198 and 13, where the seek is long. Requests close to the head's current position can sit in the queue while the arm travels far away to serve an earlier arrival, so the arm ends up going back on itself.

Comparing this with SSTF, the jumps are more consistent: SSTF simply picks the pending request nearest to the current head position. Using the queue above, and assuming the head starts near the low end of the disk, it would go 13, 48, 52, 198, so the arm does not go back on itself. Comparing with SCAN: the head moves from one end of the disk to the other and then back, dealing with each request as it reaches it. Using the same queue, if starting at 52 and moving downward, it would go 52, 48, 13, then reach 0 (where it reverses and scans the other way), then 198 and whatever else lies on the way to the far end before reversing again.
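
The three orderings can be compared in a few lines (the request queue is the example above; the start position of 50 is invented, and note how it changes the SSTF order):

    # Disk-scheduling sketch: request service order under FIFO, SSTF
    # and SCAN. Queue from the example above; head start (50) invented.
    def fifo(queue, head):
        return list(queue)                   # arrival order

    def sstf(queue, head):
        pending, order = list(queue), []
        while pending:
            nearest = min(pending, key=lambda r: abs(r - head))
            pending.remove(nearest)
            order.append(nearest)
            head = nearest                   # move the head there
        return order

    def scan(queue, head):
        down = sorted(r for r in queue if r <= head)[::-1]  # toward 0
        up = sorted(r for r in queue if r > head)           # then back up
        return down + up

    queue = [52, 198, 13, 48]
    print(fifo(queue, 50))  # [52, 198, 13, 48]
    print(sstf(queue, 50))  # [52, 48, 13, 198] (52 wins the tie with 48)
    print(scan(queue, 50))  # [48, 13, 52, 198]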

http://moodle.mmu.ac.uk/pluginfile.php/1236531/mod_resource/content/2/Lecture%209%20IO%20Management.pdf


Part 9 - Security

32) User authentication

Pros:

What the user knows

– The admin or the user can reset the password whenever they want, decreasing the chance of unauthorised access by brute force.
– The admin can also set standards for passwords, i.e. must contain 1 capital letter, 3 numbers and be 8 characters long.
– Passwords in a database are usually hashed, so they are not stored in their plain-text form.
– One-time passwords can be used to decrease the chances of unauthorised user access and password sniffing. - p688

What the user holds

– Simple to implement.
– Can't be brute forced; access requires that particular card or token.

Attribute of the user

– Very hard to break into such a system. A user's attributes are essentially unique; the main exception is identical twins, whose features can sometimes be very similar, in which case another form of authentication or an extra check is needed for those particular people.


Cons:

What the user knows

– Passwords can be sniffed using a network monitoring tool such as Wireshark.
– Passwords can also be observed visually, such as by watching a user enter theirs, or overheard if a user has said it aloud.
– Some passwords can be cracked easily, depending on their difficulty, but ultimately any password can be brute forced given enough time.
– Can be captured by keylogging software if it has been installed on the computer system.

What the user holds

– Can easily be reproduced (e.g. a card can be cloned).
– Easy to lose, and therefore easy for whoever finds the item to use it.
– A problem with the chip, PIN or similar can often lock out a legitimate user.

Attribute of the user

– Expensive to implement.
– Can often lock out legitimate users due to problems such as a cut finger, contact lenses or diseases affecting the eyes, etc.
– Almost impossible to replace if compromised, since a user cannot be issued new attributes.


33) OS Hardening

Removing unnecessary services, applications and protocols

This involves removing or disabling services such as file and printer sharing, system and network management tools, and wireless, web and email services where they are not needed. Removing them is preferred to merely disabling them, because an attacker can reactivate a disabled service. This increases security because the attacker cannot compromise services that are not present, decreasing the number of access routes into the network.

Configuring users, groups and permissions

Configuring permissions on a network amounts to creating groups and assigning each group a different access level that allows it to carry out different tasks. Users are then assigned to groups according to their security level, such as staff, admin, root admin, etc. This greatly decreases the chance of infection or problems on the network, because certain user groups are not allowed to download, install or even access certain websites or areas. It is also a convenient way to manage a system's users: accounts can be added and removed in an instant, so if a user is infected or at risk they can simply be removed from the network immediately.

Configuring resource controls

This denies users unauthorised access in order to reduce security breaches. The administrator can deny read access to files and directories to protect the information in them, and can deny modification rights to reduce the possibility of configuration changes being made with system tools in ways that would weaken security. The OS can also be run inside a virtual environment to further contain breaches: the user, and any potential breach, cannot leave that environment.


http://books.google.co.uk/books?id=EimzA3Fn12UC&pg=SA4-PA2&lpg=SA4-PA2&dq=Removing+unnecessary+services,+applications+and+protocols&source=bl&ots=ZcNcjQHlte&sig=gwFgadBcyfvBdnn3_TutJ3F55Ck&hl=en&sa=X&ei=lEyGVNbjCMT6UMmYgDA&ved=0CDcQ6AEwAw#v=onepage&q=Removing%20unnecessary%20services%2C%20applications%20and%20protocols&f=false


34) Dealing with threats

How to minimise user threat

The first thing to do is to educate users on the dos and don'ts of using the internet and the network, which ties in with a security policy. We then minimise user threat by creating a security policy that states what users can and cannot do on the network; this should be updated regularly.

How to detect a security breach because of a user

To detect a breach we can use signature-based detection, which examines packets for certain strings, searching for them in the headers or contents. This works like anti-virus software, which scans packets for known viruses by matching the code inside them against code in a database. We can also use anomaly detection, which detects unusual patterns of behaviour within a system. This can be done by monitoring shell commands to pick up oddities coming from a user, or by detecting abnormalities in the user's usage, such as oddly timed logins, unusual use of certain applications and access to unusual websites.
- p692

What to do when the breach has been detected

Firstly we need to identify the attack, establishing what has been compromised and what type of attack it is, and then quarantine it. If the attack is severe then it is best to take the network offline and quarantine whatever is affected. If you have managed to detect it before the network has actually been breached, redirect it to a honeypot so you can gather information on the attack.

The next step is to remove the attack from the network by comparing the system before and after the attack. While removing the attack, make sure everything is recorded, as the attack is a crime, and make sure that whatever allowed the attack is fixed, i.e. close the open port, update your hardware and detection systems, etc. Finally, once this is done, the network can be put back online.

http://www.seculert.com/blog/2013/07/network-compromised-5-critical-steps-to-handling-a-security-breach.html