4CAT
Nov 13th, 2019

Read chapters 41 and 42 for Friday. In class he will go over them at breakneck speed.
Distributed OS will have less to do with distributed file systems and more to do with process scheduling.

Recap:
We talked about how to build a file system, based on a tree structure. Windows is drive-based; Unix is one tree hanging off the root.
Think back to the shell and which/where: grab PATH and run access() on each location. access() takes an absolute path and finds the inode behind it. How does it do that? It starts at the root and walks along the path, one component at a time.
Resolving any path in the file system works the same way. (Sketch below.)
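Roughly what that which-style lookup could look like in C. This is just an illustrative sketch, not the actual assignment code; the 4096-byte path buffer and the X_OK check are my own choices.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s command\n", argv[0]);
        return 1;
    }

    const char *path = getenv("PATH");
    if (!path)
        return 1;

    char *copy = strdup(path);                    /* strtok modifies its input */
    for (char *dir = strtok(copy, ":"); dir != NULL; dir = strtok(NULL, ":")) {
        char candidate[4096];
        snprintf(candidate, sizeof candidate, "%s/%s", dir, argv[1]);

        /* access() resolves the path (root inode, then component by
         * component) and checks for execute permission. */
        if (access(candidate, X_OK) == 0) {
            printf("%s\n", candidate);
            free(copy);
            return 0;
        }
    }
    free(copy);
    return 1;                                     /* not found on PATH */
}
```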

A file does not exist as such: it's an inode and some blocks. Everything else is a virtual representation for our benefit.
To wrap up file systems, he's going to drop down to the hardware for a bit.
Some of this is becoming less of a problem with SSDs.

Goal 1: look at the performance of hard drives.
Spoiler: they're slow. Even at 15,000 RPM, access times are still measured in milliseconds. That's a long time.
Then someone came up with RAID: use multiple disks to improve reliability and access time.

RAID 0
Put the first block of data on disk 1, the second block on disk 2, rinse and repeat. This is referred to as striping; building a striped array of disks is `RAID-0`.
If you lose one disk, you lose the whole array.
But you double your speed.
This is great, but what about redundancy? (Mapping sketch below.)
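A minimal sketch of the striping arithmetic, assuming two disks and a simple round-robin layout (both assumptions, not anything the lecture specified): a logical block number maps to a disk plus an offset on that disk.

```c
#include <stdio.h>

/* RAID-0 address mapping sketch: logical block b lives on
 * disk (b % N) at per-disk block (b / N). N = 2 is arbitrary here. */
#define NUM_DISKS 2

struct location { int disk; long block_on_disk; };

static struct location raid0_map(long logical_block)
{
    struct location loc;
    loc.disk = (int)(logical_block % NUM_DISKS);   /* which drive */
    loc.block_on_disk = logical_block / NUM_DISKS; /* where on that drive */
    return loc;
}

int main(void)
{
    for (long b = 0; b < 8; b++) {
        struct location loc = raid0_map(b);
        printf("logical block %ld -> disk %d, block %ld\n",
               b, loc.disk, loc.block_on_disk);
    }
    return 0;
}
```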

RAID 1
Second idea: put the first block of data on disks 1 and 2, the second block on disks 1 and 2, and so on.
Now if a drive dies, you can reconstruct it. This is referred to as mirroring (`RAID-1`): two copies of every disk. You pay for the redundancy by halving your usable storage. (Sketch below.)
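A toy sketch of mirroring, using two in-memory arrays as stand-in "disks" (all names and sizes here are invented): writes go to both copies, reads can come from either, and a replaced drive is rebuilt by copying from its surviving mirror.

```c
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE 16
#define NUM_BLOCKS 4

/* Two in-memory "disks" stand in for real drives. */
static char disk[2][NUM_BLOCKS][BLOCK_SIZE];

/* Every logical write is duplicated onto both members of the mirror. */
static void raid1_write(long block, const char data[BLOCK_SIZE])
{
    memcpy(disk[0][block], data, BLOCK_SIZE);
    memcpy(disk[1][block], data, BLOCK_SIZE);
}

/* A read can be served by either copy; alternating spreads the load. */
static void raid1_read(long block, char data[BLOCK_SIZE])
{
    memcpy(data, disk[block % 2][block], BLOCK_SIZE);
}

/* Rebuilding a replaced disk is just copying every block from its mirror. */
static void raid1_rebuild(int failed)
{
    int survivor = 1 - failed;
    for (long b = 0; b < NUM_BLOCKS; b++)
        memcpy(disk[failed][b], disk[survivor][b], BLOCK_SIZE);
}

int main(void)
{
    char msg[BLOCK_SIZE] = "hello mirror";
    char buf[BLOCK_SIZE];

    raid1_write(0, msg);
    memset(disk[0], 0, sizeof disk[0]);   /* pretend disk 0 died and was swapped */
    raid1_rebuild(0);                     /* rebuild it from disk 1 */
    raid1_read(0, buf);                   /* block 0 is served by the rebuilt disk */
    printf("%s\n", buf);                  /* prints "hello mirror" */
    return 0;
}
```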

RAID 0+1
I now have two good ideas, so let's combine them!
Combine two RAID-0 arrays by mirroring them with RAID-1.
With four 1 TB drives you get 2 TB of usable space and twice the read/write speed.
You can decide how you want to lay the writes out (see the addressing sketch below).
Block 0 goes on drives 1 and 2 (mirrored pairs, then striped: `RAID1+0`) or on drives 1 and 3 (striped arrays, then mirrored: `RAID0+1`).
There is RAID 100 too: a bunch of RAID-1 mirrors, striped with RAID-0, then striped again. lol?
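Sticking with the four-drive example, a sketch of how RAID 1+0 addressing could work, assuming disks 0/1 and 2/3 form the mirror pairs (the numbering is my own, not any particular controller's): stripe across the pairs, then land each block on both members of its pair.

```c
#include <stdio.h>

/* RAID 1+0 addressing sketch: 4 drives arranged as 2 mirror pairs,
 * striped RAID-0 style across the pairs. Layout is illustrative only. */
#define NUM_PAIRS 2   /* pair 0 = disks 0,1; pair 1 = disks 2,3 */

struct placement { int disk_a; int disk_b; long block_on_disk; };

static struct placement raid10_map(long logical_block)
{
    struct placement p;
    int pair = (int)(logical_block % NUM_PAIRS);   /* stripe across the pairs */
    p.disk_a = 2 * pair;                           /* both members of that pair */
    p.disk_b = 2 * pair + 1;
    p.block_on_disk = logical_block / NUM_PAIRS;
    return p;
}

int main(void)
{
    for (long b = 0; b < 6; b++) {
        struct placement p = raid10_map(b);
        printf("logical block %ld -> disks %d and %d, block %ld\n",
               b, p.disk_a, p.disk_b, p.block_on_disk);
    }
    return 0;
}
```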

There are also RAID 2, 3, and 4.
2 - Mostly of historical significance; only used in early machines. Striping across multiple disks at the bit level, not the block level.
3 - Like RAID 2, but at the byte level, with a dedicated parity disk.
4 - Like RAID 3, but at the block level: stripe more than 2 drives block by block, still with one dedicated parity disk.

RAID 5
Parity!
3 or more disks.
Take all but one of the disks and basically do RAID-0 across them.
For each bit position, XOR the bit on disk 1 with the bit on disk 2 and write the result to disk 3: an even number of 1s gives parity 0, an odd number gives parity 1.
This applies to more than 3 disks in exactly the same way: the parity is the XOR of the corresponding bits on all the data disks. (In RAID 5 the parity blocks are rotated across the disks rather than living on one dedicated drive.)
When a drive fails, we can still pretend it's there, using the magic of parity. (Sketch below.)
It's not expensive to write or maintain, either.
Problem: if you lose 2 drives you're done. Arrays are usually kept to around 5 disks because of this.
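A sketch of the parity trick for the three-disk case, done a byte at a time just to keep it short (real arrays work per block, and RAID 5 also rotates which disk holds the parity; the data here is made up): parity = d0 XOR d1, and a lost disk is recovered by XORing the survivors, since x XOR x = 0.

```c
#include <stdio.h>

/* Parity sketch for a 3-disk RAID-5-style stripe.
 * parity = d0 ^ d1; any single lost member can be recomputed by XORing
 * the two survivors, because x ^ x = 0. */
#define STRIPE_BYTES 8

static void compute_parity(const unsigned char *d0, const unsigned char *d1,
                           unsigned char *parity)
{
    for (int i = 0; i < STRIPE_BYTES; i++)
        parity[i] = d0[i] ^ d1[i];
}

/* Rebuild a lost data disk from the surviving data disk and the parity. */
static void reconstruct(const unsigned char *survivor, const unsigned char *parity,
                        unsigned char *lost)
{
    for (int i = 0; i < STRIPE_BYTES; i++)
        lost[i] = survivor[i] ^ parity[i];
}

int main(void)
{
    unsigned char d0[STRIPE_BYTES] = "disk 0!";
    unsigned char d1[STRIPE_BYTES] = "disk 1!";
    unsigned char parity[STRIPE_BYTES], rebuilt[STRIPE_BYTES];

    compute_parity(d0, d1, parity);
    reconstruct(d1, parity, rebuilt);      /* pretend disk 0 failed */

    printf("rebuilt disk 0: %s\n", rebuilt);
    return 0;
}
```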

RAID 6
Now we're starting to get somewhere useful.
Block-level striping with double distributed parity.
Double fault tolerance: it can survive two simultaneous failures, and a triple failure is pretty rare.
Example: 10 disks, with two disks' worth of capacity used for parity.

Throughout all of this, the OS sees the array as one disk. The controller reports a single disk and then juggles all of the above as needed.
When a server hard drive fails, no one but the sysadmin knows. Pretty neat.

No really: read the Fast File System chapter.