VolleMelk

Rebuild RAID on LS-WVL with Debian

May 17th, 2013
# Last thing we have to do: get the RAID array back together.

# Power down the NAS.
# Put back the drive you pulled before.
# Start netcat on your target PC (just to be sure).
# Power on the NAS.
# In a few minutes, you should be able to log in to the NAS (if not, check the boot log in netcat).

# --------------------------
# Update: 17-05-2013
# If you need to reconstruct a drive with the same partition layout:
# Log in as root and start parted:
parted

# Disk selection in parted
# The LS-WVL has two drive bays: the left bay (A) is /dev/sda and the right bay (B) is /dev/sdb.
# parted selects the first available drive (/dev/sda) at startup.
# First we need to read the partition table of the drive that contains your data. So if your data
# is on the right drive, run
select /dev/sdb
# If it's on the left drive, that drive is already selected. You can select it explicitly either way with
select /dev/sda
# You will get feedback like:
# (parted) select /dev/sdb
# Using /dev/sdb

# Now we get the partition table
unit s              # Use sectors as the unit, the most exact way of replicating the drive
print               # Print the partition table

# Mine for example is this:

# (parted) print
# Model: ATA ST3000DM001-9YN1 (scsi)
# Disk /dev/sda: 5860533168s
# Sector size (logical/physical): 512B/4096B
# Partition Table: gpt              # <<< HERE IS YOUR PARTITION TABLE

# Number  Start      End          Size         File system  Name     Flags
#  1      2048s      2002943s     2000896s     ext3         primary
#  2      2002944s   12003327s    10000384s                 primary
#  3      12003328s  12005375s    2048s                     primary
#  4      12005376s  12007423s    2048s                     primary
#  5      12007424s  14008319s    2000896s                  primary
#  6      14008320s  5843758319s  5829750000s               primary

# Now select the other (empty) drive
select /dev/sdb                     # Select /dev/sda if the LEFT drive is the empty one

# Create the partition table. The 'Partition Table' line of your previous 'print' output gives the label type.
mklabel gpt

# Recreate every partition using mkpart, in the same order.
# mkpart [name] [start] [end]          # See: http://www.gnu.org/software/parted/manual/html_node/mkpart.html
# For me, the commands were:
mkpart primary 2048 2002943
mkpart primary 2002944 12003327
mkpart primary 12003328 12005375
mkpart primary 12005376 12007423
mkpart primary 12007424 14008319
mkpart primary 14008320 5843758319
# USE YOUR OWN VALUES. ONLY FOR DEMONSTRATION.

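The six mkpart commands above can also be generated from the Start/End columns of the 'print' output, which avoids typos when a drive has many partitions. A minimal dry-run sketch (the sector pairs are the example values from the table above; substitute your own):

```shell
#!/bin/sh
# Sketch: generate the parted 'mkpart' commands from start:end sector pairs.
# The pairs below are the EXAMPLE values from the table above --
# replace them with the Start/End columns of your own 'print' output.
PAIRS="2048:2002943 2002944:12003327 12003328:12005375 \
12005376:12007423 12007424:14008319 14008320:5843758319"

for pair in $PAIRS; do
    start=${pair%%:*}
    end=${pair##*:}
    # Dry run: print the commands so you can review them before
    # pasting them into the (parted) prompt.
    echo "mkpart primary ${start}s ${end}s"
done
```

The explicit `s` suffix pins each value to sectors, so the commands stay correct even if `unit s` was not set first.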
# Now you can use 'print' again. Compare the two outputs. All three columns (Start, End and Size) should
# be EXACTLY the same. File system and Name will come later.
print

# If everything suits you, exit parted with
quit

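Comparing the two 'print' outputs by eye is error-prone; the Start/End/Size columns can be diffed instead. A sketch, using two rows of the example table from above as stand-in data (in practice, capture the real tables with `parted -s /dev/sdX unit s print`):

```shell
#!/bin/sh
# Sketch: verify that two drives have identical Start/End/Size columns.
# On the NAS you would capture the real tables first, e.g.:
#   parted -s /dev/sda unit s print > /tmp/sda.txt
#   parted -s /dev/sdb unit s print > /tmp/sdb.txt
# Here example rows from the table above stand in for both drives.
cat > /tmp/sda.txt <<'EOF'
Number  Start      End          Size         File system  Name     Flags
 1      2048s      2002943s     2000896s     ext3         primary
 2      2002944s   12003327s    10000384s                 primary
EOF
cp /tmp/sda.txt /tmp/sdb.txt

# Keep only the partition rows (they begin with a partition number)
# and their Start/End/Size columns.
grep -E '^ *[0-9]+ ' /tmp/sda.txt | awk '{print $2, $3, $4}' > /tmp/sda.rows
grep -E '^ *[0-9]+ ' /tmp/sdb.txt | awk '{print $2, $3, $4}' > /tmp/sdb.rows

if diff /tmp/sda.rows /tmp/sdb.rows >/dev/null; then
    echo "layouts match"
else
    echo "layouts differ"
fi
```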
# -------------------------------
# Rebuild RAID
# -------------------------------
# Log in as root, if you aren't already.

# You have 4 partitions: boot, root, data and swap.
# partition     device          drive
# boot          /dev/md0        /dev/sda1
# root          /dev/md1        /dev/sda2
# data          /dev/md2        /dev/sda6
# swap          /dev/md10       /dev/sda5
# (if you pulled/replaced the left drive, you will have sdbX instead of sdaX)

# Cross-check this table with your own NAS:
cat /proc/mdstat

# Example output:
# md1 : active raid1 sda2[0]
#       4999156 blocks super 1.2 [2/1] [U_]

# md1           - the md device
# sda2          - the member partition
# [2/1]         - 1 of 2 drives is working in the RAID array

# Normally:
# sda1 -> sdb1
# sda2 -> sdb2
# etc etc...

mdadm --manage /dev/md0 --add /dev/sdb1

# /dev/md0      - the md device we are managing
# /dev/sdb1     - the partition we are adding. Note that we are ADDing sdB1
# Do this for all 4 arrays:
# mdadm --manage /dev/md0 --add /dev/sdb1
# mdadm --manage /dev/md1 --add /dev/sdb2
# etc etc...

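Since the md-device-to-partition mapping is fixed (see the boot/root/data/swap table above), the four --add commands can be produced in one loop. A dry-run sketch that only prints the commands (remove the `echo` to actually run them):

```shell
#!/bin/sh
# Sketch: print the four mdadm commands for the LS-WVL layout above.
# md device => partition number on the new drive, per the table above.
NEWDISK=sdb     # use sda if the LEFT drive is the new one
for pair in md0:1 md1:2 md2:6 md10:5; do
    md=${pair%%:*}
    part=${pair##*:}
    # Dry run: review first, then drop the 'echo' to add for real.
    echo "mdadm --manage /dev/$md --add /dev/${NEWDISK}${part}"
done
```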
# You can check the progress of rebuilding the RAID array with:
cat /proc/mdstat

# For me, rebuilding 3TB took ~400 minutes.
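The [2/1] / [U_] markers can also be checked in a script, e.g. to see at a glance which arrays are still degraded. A sketch run against sample data shaped like the output above (on the NAS, point MDSTAT at the real /proc/mdstat):

```shell
#!/bin/sh
# Sketch: list md arrays that are still degraded (a '_' in the [..] status).
# Sample data shaped like the /proc/mdstat output above; on the NAS use
# MDSTAT=/proc/mdstat instead.
MDSTAT=/tmp/mdstat.sample
cat > "$MDSTAT" <<'EOF'
md1 : active raid1 sda2[0]
      4999156 blocks super 1.2 [2/1] [U_]
md2 : active raid1 sda6[0] sdb6[1]
      2914875000 blocks super 1.2 [2/2] [UU]
EOF

# The status field is the last [..] on the blocks line; '_' marks a
# missing or rebuilding member, 'U' a healthy one.
awk '/^md/ { dev = $1 }
     $NF ~ /^\[[U_]+\]$/ { if ($NF ~ /_/) print dev " is degraded: " $NF }' "$MDSTAT"
```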