a guest
Feb 21st, 2020
1. Consider a magnetic disk drive with 8 surfaces, 512 tracks per surface, 64 sectors per track, a sector size of 1 KB, an average seek time of 8 ms, a track-to-track access time of 1.5 ms, and a rotation speed of 3,600 rpm.

- Cylinder capacity = 8 * 64 * 1 KB = 512 KB

- Seek time = 8 ms

- Rotational latency = 1/2 * (60/3600) s = 8.33 ms

- A 5 MB transfer needs 10 cylinders (5 MB / 512 KB per cylinder), so 9 track-to-track moves are required in addition to the transfer time. Reading one full cylinder takes one rotation per surface: 8 * (60/3600) s = 8 * 16.67 ms = 133.33 ms

- Total time = seek time + 10 * (per-cylinder transfer time + rotational latency) + 9 * (track-to-track access time) = 8 + 10 * (133.33 + 8.33) + 9 * 1.5 = 1438.2 ms

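The calculation can be checked with a short script; the 5 MB transfer size is inferred from the 10-cylinder figure in the notes.

```python
# Recompute the disk access time from question 1.
# Assumes a 5 MB transfer (10 cylinders x 512 KB).

surfaces = 8
sectors_per_track = 64
sector_kb = 1
seek_ms = 8.0
track_to_track_ms = 1.5
rpm = 3600

rotation_ms = 60_000 / rpm                                # 16.67 ms per revolution
rotational_latency_ms = rotation_ms / 2                   # 8.33 ms on average
cylinder_kb = surfaces * sectors_per_track * sector_kb    # 512 KB per cylinder
cylinder_transfer_ms = surfaces * rotation_ms             # 8 rotations = 133.33 ms

cylinders = (5 * 1024) // cylinder_kb                     # 10 cylinders for 5 MB
total_ms = (seek_ms
            + cylinders * (cylinder_transfer_ms + rotational_latency_ms)
            + (cylinders - 1) * track_to_track_ms)
print(round(total_ms, 1))
```

Note that the seek is counted once, the rotational latency once per cylinder, and the track-to-track time only between cylinders (9 times for 10 cylinders).
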
2. Advantages of RAID 5 over RAID 4

- RAID 5 is organized in a similar fashion to RAID 4. The difference is that RAID 5 distributes the parity strips across all disks; spreading the parity over every drive avoids the potential I/O bottleneck of RAID 4's single dedicated parity disk.

3. Compare direct mapping with associative mapping.

- DIRECT MAPPING is the simplest technique; it maps each block of main memory into only one possible cache line.

- ASSOCIATIVE MAPPING overcomes the disadvantage of direct mapping by permitting each main memory block to be loaded into any line of the cache.

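A minimal sketch of the two placement rules; the 4-line cache geometry is an assumed example, not from the notes:

```python
# Direct mapping: block j of main memory maps to exactly one cache line,
# line = j mod (number of cache lines).
# Associative mapping: block j may be placed in any line.

CACHE_LINES = 4   # illustrative cache size

def direct_mapped_line(block_number: int) -> int:
    """The single line that this block may occupy under direct mapping."""
    return block_number % CACHE_LINES

def associative_candidate_lines(block_number: int) -> list[int]:
    """Under associative mapping, every line is a candidate."""
    return list(range(CACHE_LINES))

print(direct_mapped_line(13))            # 13 mod 4 = 1
print(associative_candidate_lines(13))   # [0, 1, 2, 3]
```
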
4. Explain the relation between cache size and hit rate

- AS THE BLOCK SIZE increases from very small to larger sizes, the hit ratio will at first increase because of the principle of locality, which states that data in the vicinity of a referenced word are likely to be referenced in the near future: each larger block brings more useful data into the cache. As the blocks become even bigger, however, the newly fetched words are farther from the requested word and less likely to be needed soon, so the hit ratio will begin to decrease.
- The greater the cache size, the less likely cache misses are to occur. Since the hit rate is hits / (hits + misses), fewer misses for the same number of references mean a higher hit rate.

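The cache-size effect can be demonstrated with a small simulation; the LRU cache, sizes, and reference string here are illustrative assumptions:

```python
from collections import OrderedDict

def lru_hit_rate(refs, cache_size):
    """Hit rate of a fully associative LRU cache over a reference string."""
    cache = OrderedDict()
    hits = 0
    for block in refs:
        if block in cache:
            hits += 1
            cache.move_to_end(block)            # mark as most recently used
        else:
            if len(cache) >= cache_size:
                cache.popitem(last=False)       # evict least recently used
            cache[block] = True
    return hits / len(refs)                     # hit rate = hits / (hits + misses)

# A reference string with locality: loop over 8 blocks repeatedly.
refs = list(range(8)) * 10
small = lru_hit_rate(refs, 4)   # cache too small for the working set
large = lru_hit_rate(refs, 8)   # cache holds the whole working set
print(small, large)
```

With 4 lines the cyclic pattern thrashes (every reference misses), while 8 lines capture the working set and only the first pass misses.
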
5. Difference between logical cache and physical cache.

- A LOGICAL (VIRTUAL) CACHE stores data using virtual addresses. The processor accesses the cache directly, without going through the MMU.

- A PHYSICAL CACHE stores data using main memory physical addresses. A physical cache is located between the MMU and main memory.

- Logical cache access speed is greater than that of a physical cache, because the cache can respond before the MMU performs address translation.

6. Benefits of increasing bus width.

- THE WIDTH of the data bus is a key factor in determining overall system performance: the wider the bus, the more bits can be transferred at one time.

EXAMPLE: If the data bus is 32 bits wide and each instruction is 64 bits long, then the processor must access the memory module twice during each instruction cycle.

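The example above generalizes to a one-line formula, sketched here:

```python
import math

def memory_accesses_per_fetch(instruction_bits: int, bus_bits: int) -> int:
    """Number of bus transfers needed to fetch one instruction."""
    return math.ceil(instruction_bits / bus_bits)

print(memory_accesses_per_fetch(64, 32))   # 2 accesses, as in the example
print(memory_accesses_per_fetch(64, 64))   # 1 access with a wider bus
```
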
7. Purpose of registers MAR, MBR, IR, PC

- MAR: Contains the address of a location in memory

- MBR: Contains a word of data to be written to memory or the word most recently read

- IR: Contains the instruction most recently fetched

- PC: Contains the address of an instruction to be fetched

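The four registers cooperate in the instruction fetch cycle, sketched here as a toy example (the memory contents and addresses are made up):

```python
# A toy instruction fetch illustrating the roles of PC, MAR, MBR and IR.
memory = {0x100: 0xA1B2, 0x101: 0xC3D4}   # made-up instruction words

pc = 0x100            # PC: address of the instruction to fetch
mar = pc              # MAR receives the address to place on the address bus
mbr = memory[mar]     # MBR receives the word read from memory
ir = mbr              # IR holds the instruction just fetched, ready to decode
pc += 1               # PC advances to the next instruction

print(hex(ir), hex(pc))
```
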
FINAL

1. Briefly explain how the size of instructions affects the design of computer systems

2. Briefly explain how the number of operands in instructions affects the design of hardware

3. How to determine the width of the program counter in terms of bits

4. Differences between segmentation and paging:

- A page always has a fixed block size, whereas a segment is of variable size.
- The size of a page is decided or specified by the hardware. The size of a segment is specified by the user.
- In paging, the user provides only a single integer as the address, which is divided by the hardware into a page number and an offset. In segmentation, the user specifies the address in two quantities: a segment number and an offset.

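The address split in the last bullet can be sketched directly; the 4 KB page size is an assumption for illustration:

```python
PAGE_SIZE = 4096   # assumed page size; in practice fixed by the hardware

def split_paged_address(addr: int) -> tuple[int, int]:
    """Hardware divides a single linear address into (page number, offset)."""
    return addr // PAGE_SIZE, addr % PAGE_SIZE

# In segmentation the user supplies the two parts explicitly instead:
segment_number, offset = 3, 0x2A

print(split_paged_address(10000))   # (2, 1808): page 2, offset 1808
```
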
5. Briefly explain how the choice of addressing modes affects the design of computer hardware.

- ADDRESSING MODES are an aspect of the instruction set architecture in most CPU designs. The various addressing modes defined in a given instruction set architecture determine how machine language instructions in that architecture identify the operand of each instruction. An addressing mode specifies how to calculate the effective memory address of an operand, using information held in registers and/or constants contained within a machine instruction or elsewhere.

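A sketch of effective-address calculation for three common modes; the register contents and operand values are made up for illustration:

```python
# Effective-address (EA) calculation for a few common addressing modes.

registers = {"R1": 0x2000}   # made-up register file

def effective_address(mode: str, operand: int, reg: str = "R1") -> int:
    if mode == "direct":               # EA is the address field itself
        return operand
    elif mode == "register_indirect":  # EA is held in a register
        return registers[reg]
    elif mode == "displacement":       # EA = register contents + constant
        return registers[reg] + operand
    raise ValueError(f"unknown mode: {mode}")

print(hex(effective_address("direct", 0x0042)))
print(hex(effective_address("displacement", 0x10)))
```

Each extra mode the architecture supports costs hardware: the address-calculation path must implement every case of this dispatch.
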
6. Explain how I/O channels improve the performance of computer systems

- An important determinant of performance in many parallel programs is the time required to move data between memory and secondary storage, that is, the time required for input/output. An I/O channel is a processor dedicated to I/O: it executes the I/O operations on its own, so the CPU only initiates the transfer and is then free to continue with other work while the channel moves the data.

7. What are the 4 methods to deal with multiple interrupts? Explain them briefly.

a) PROVIDING MULTIPLE INTERRUPT LINES

The first method for dealing with multiple interrupts is to provide multiple interrupt lines between the processor and the I/O modules. This technique is impractical because it is hard to dedicate more than a few bus lines or processor pins to interrupt lines. Because of that, even if this technique were used, each line would likely have multiple I/O modules attached to it, and thus one of the other three techniques would still be needed on each line.

b) SOFTWARE POLL

In this technique, when the processor detects an interrupt, it branches to an interrupt-service routine that polls each I/O module to determine which module caused the interrupt. This can be done in two ways. First, the poll can take the form of a TEST I/O command: the processor raises TEST I/O, places the address of a particular I/O module on the address lines, and the I/O module responds positively if it set the interrupt. Second, each module can contain an addressable status register: the processor reads the status register of each I/O module to identify the module that caused the interrupt. Once the interrupting module is identified, the processor branches to a device-service routine specific to that device. The disadvantage of this technique is that it is time consuming.

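The status-register variant of the poll can be sketched as follows; the module names and layout are illustrative:

```python
# Sketch of a software poll: on an interrupt, read each module's
# addressable status register in turn until the requester is found.

class IOModule:
    def __init__(self, name: str, interrupting: bool = False):
        self.name = name
        self.status_register = 1 if interrupting else 0   # 1 = raised interrupt

def poll(modules):
    """Return the first module whose status register shows an interrupt."""
    for module in modules:      # time consuming: one read per module
        if module.status_register:
            return module
    return None

modules = [IOModule("disk"),
           IOModule("keyboard", interrupting=True),
           IOModule("printer")]
print(poll(modules).name)
```

The cost that makes this technique slow is visible in the loop: in the worst case every module's status register must be read before the requester is found.
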
c) DAISY CHAIN

The third technique is the DAISY CHAIN, which provides a hardware poll and is more efficient than a software poll. All I/O modules share a common interrupt request line, and the interrupt acknowledge line is daisy chained through the modules. When the processor senses an interrupt, it sends an interrupt acknowledge on this line; the signal propagates through the modules until it reaches the requesting module. The requesting module typically responds by placing a word on the data lines. That word is called a vector, and the whole technique is therefore also called a VECTORED INTERRUPT. The vector is used as a pointer to the appropriate device-service routine, which avoids the need to execute a general interrupt-service routine first.

d) BUS ARBITRATION

BUS ARBITRATION is a technique that also uses vectored interrupts. In this technique, an I/O module must first gain control of the bus before it can raise the interrupt request line, so only one module can raise the line at a time. When the processor detects the interrupt, it responds on the interrupt acknowledge line, and the requesting module then places its vector on the data lines.

8. Explain how the demand paging mechanism works

With demand paging, each page of a process is brought into memory only when it is needed, that is, on demand. As an example, imagine a very large process consisting of a long program plus a number of arrays of data. At any given time, only a small section of the program may need to execute, and perhaps only a few of the arrays are in use. It would then be a waste of time to load all of the process's pages when only a few pages will be used before the program is suspended.

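The mechanism can be sketched in a few lines; the backing-store contents and access pattern are made up:

```python
# Minimal demand-paging sketch: a page is loaded only on first reference.

backing_store = {0: "code", 1: "array A", 2: "array B"}   # pages on disk
page_table = {}          # resident pages only
loads = 0                # count of pages actually brought into memory

def access(page: int) -> str:
    """Return the page's contents, loading it from disk on a page fault."""
    global loads
    if page not in page_table:          # page fault: bring the page in on demand
        page_table[page] = backing_store[page]
        loads += 1
    return page_table[page]

access(0); access(0); access(1); access(0)
print(loads, sorted(page_table))        # only 2 of the 3 pages were ever loaded
```

Page 2 is never referenced and therefore never loaded, which is exactly the saving the example in the notes describes.
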
9. Briefly explain how using the pipeline mechanism increases the performance of computers.

- Pipelining increases the performance of a computer by increasing instruction throughput. Pipelining allows the next instructions to be fetched while the processor performs arithmetic operations, holding them in a buffer close to the processor until each instruction can be executed.

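The throughput gain can be quantified with the standard idealized model, assuming one cycle per stage and no hazards or stalls (an assumption, not from the notes):

```python
def cycles(instructions: int, stages: int, pipelined: bool) -> int:
    """Cycles to run `instructions` through a `stages`-stage processor."""
    if pipelined:
        # Fill the pipeline once, then one instruction completes per cycle.
        return stages + (instructions - 1)
    # Without pipelining, each instruction occupies all stages alone.
    return stages * instructions

print(cycles(100, 5, pipelined=False))   # 500 cycles
print(cycles(100, 5, pipelined=True))    # 104 cycles
```

For long instruction streams the speedup approaches the number of stages, which is why deeper pipelines raise throughput even though each individual instruction still takes the same time.
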