alohamora

ansible ceph module: Unable to create OSD 1

Mar 18th, 2014

[root@ceph-node1 ceph]# ceph-deploy disk list ceph-node1
[ceph_deploy.cli][INFO ] Invoked (1.3.5): /usr/bin/ceph-deploy disk list ceph-node1
[ceph-node1][DEBUG ] connected to host: ceph-node1
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: CentOS 6.5 Final
[ceph_deploy.osd][DEBUG ] Listing disks on ceph-node1...
[ceph-node1][INFO ] Running command: ceph-disk list
[ceph-node1][DEBUG ] /dev/vda :
[ceph-node1][DEBUG ] /dev/vda1 swap, swap
[ceph-node1][DEBUG ] /dev/vda2 other, ext4, mounted on /
[ceph-node1][DEBUG ] /dev/vdb other, unknown
[ceph-node1][DEBUG ] /dev/vdc other, unknown
[root@ceph-node1 ceph]# ceph-deploy disk zap ceph-node1:/dev/vdb
[ceph_deploy.cli][INFO ] Invoked (1.3.5): /usr/bin/ceph-deploy disk zap ceph-node1:/dev/vdb
[ceph_deploy.osd][DEBUG ] zapping /dev/vdb on ceph-node1
[ceph-node1][DEBUG ] connected to host: ceph-node1
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: CentOS 6.5 Final
[ceph-node1][DEBUG ] zeroing last few blocks of device
[ceph-node1][INFO ] Running command: sgdisk --zap-all --clear --mbrtogpt -- /dev/vdb
[ceph-node1][DEBUG ] Creating new GPT entries.
[ceph-node1][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
[ceph-node1][DEBUG ] other utilities.
[ceph-node1][DEBUG ] The operation has completed successfully.
Unhandled exception in thread started by
Error in sys.excepthook:

Original exception was:
[root@ceph-node1 ceph]# ceph-deploy disk list ceph-node1
[ceph_deploy.cli][INFO ] Invoked (1.3.5): /usr/bin/ceph-deploy disk list ceph-node1
[ceph-node1][DEBUG ] connected to host: ceph-node1
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: CentOS 6.5 Final
[ceph_deploy.osd][DEBUG ] Listing disks on ceph-node1...
[ceph-node1][INFO ] Running command: ceph-disk list
[ceph-node1][DEBUG ] /dev/vda :
[ceph-node1][DEBUG ] /dev/vda1 swap, swap
[ceph-node1][DEBUG ] /dev/vda2 other, ext4, mounted on /
[ceph-node1][DEBUG ] /dev/vdb other, unknown
[ceph-node1][DEBUG ] /dev/vdc other, unknown
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]# ceph-deploy disk zap ceph-node1:/dev/vdb
[ceph_deploy.cli][INFO ] Invoked (1.3.5): /usr/bin/ceph-deploy disk zap ceph-node1:/dev/vdb
[ceph_deploy.osd][DEBUG ] zapping /dev/vdb on ceph-node1
[ceph-node1][DEBUG ] connected to host: ceph-node1
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: CentOS 6.5 Final
[ceph-node1][DEBUG ] zeroing last few blocks of device
[ceph-node1][INFO ] Running command: sgdisk --zap-all --clear --mbrtogpt -- /dev/vdb
[ceph-node1][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
[ceph-node1][DEBUG ] other utilities.
[ceph-node1][DEBUG ] The operation has completed successfully.
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]# ceph-deploy disk list ceph-node1
[ceph_deploy.cli][INFO ] Invoked (1.3.5): /usr/bin/ceph-deploy disk list ceph-node1
[ceph-node1][DEBUG ] connected to host: ceph-node1
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: CentOS 6.5 Final
[ceph_deploy.osd][DEBUG ] Listing disks on ceph-node1...
[ceph-node1][INFO ] Running command: ceph-disk list
[ceph-node1][DEBUG ] /dev/vda :
[ceph-node1][DEBUG ] /dev/vda1 swap, swap
[ceph-node1][DEBUG ] /dev/vda2 other, ext4, mounted on /
[ceph-node1][DEBUG ] /dev/vdb other, unknown
[ceph-node1][DEBUG ] /dev/vdc other, unknown
Unhandled exception in thread started by <function run_and_release at 0x9b3758>
Traceback (most recent call last):
File "/usr/lib/python2.6/site-packages/ceph_deploy/lib/remoto/lib/execnet/gateway_base.py", line 245, in run_and_release
with self._running_lock:
File "/usr/lib64/python2.6/threading.py", line 117, in acquire
me = _get_ident()
TypeError: 'NoneType' object is not callable
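
Note: this "'NoneType' object is not callable" traceback (and the shorter "Unhandled exception ... Error in sys.excepthook" fragment after the first zap) comes from the execnet library bundled inside ceph-deploy (see the gateway_base.py path) and appears to fire while the Python 2.6 interpreter is already shutting down, after the disk listing has been printed in full. A quick way to check whether it actually changes the outcome of the run is to look at ceph-deploy's exit status, e.g.:

    ceph-deploy disk list ceph-node1; echo "ceph-deploy exit status: $?"

If the status is 0, the traceback is most likely a shutdown race in the bundled library rather than a problem with the disks.
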
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]# ceph-deploy disk list ceph-node1
[ceph_deploy.cli][INFO ] Invoked (1.3.5): /usr/bin/ceph-deploy disk list ceph-node1
[ceph-node1][DEBUG ] connected to host: ceph-node1
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: CentOS 6.5 Final
[ceph_deploy.osd][DEBUG ] Listing disks on ceph-node1...
[ceph-node1][INFO ] Running command: ceph-disk list
[ceph-node1][DEBUG ] /dev/vda :
[ceph-node1][DEBUG ] /dev/vda1 swap, swap
[ceph-node1][DEBUG ] /dev/vda2 other, ext4, mounted on /
[ceph-node1][DEBUG ] /dev/vdb other, unknown
[ceph-node1][DEBUG ] /dev/vdc other, unknown
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]# ceph-deploy disk zap ceph-node1:/dev/vdc
[ceph_deploy.cli][INFO ] Invoked (1.3.5): /usr/bin/ceph-deploy disk zap ceph-node1:/dev/vdc
[ceph_deploy.osd][DEBUG ] zapping /dev/vdc on ceph-node1
[ceph-node1][DEBUG ] connected to host: ceph-node1
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: CentOS 6.5 Final
[ceph-node1][DEBUG ] zeroing last few blocks of device
[ceph-node1][INFO ] Running command: sgdisk --zap-all --clear --mbrtogpt -- /dev/vdc
[ceph-node1][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
[ceph-node1][DEBUG ] other utilities.
[ceph-node1][DEBUG ] The operation has completed successfully.
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]# ceph-deploy disk list ceph-node1
[ceph_deploy.cli][INFO ] Invoked (1.3.5): /usr/bin/ceph-deploy disk list ceph-node1
[ceph-node1][DEBUG ] connected to host: ceph-node1
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: CentOS 6.5 Final
[ceph_deploy.osd][DEBUG ] Listing disks on ceph-node1...
[ceph-node1][INFO ] Running command: ceph-disk list
[ceph-node1][DEBUG ] /dev/vda :
[ceph-node1][DEBUG ] /dev/vda1 swap, swap
[ceph-node1][DEBUG ] /dev/vda2 other, ext4, mounted on /
[ceph-node1][DEBUG ] /dev/vdb other, unknown
[ceph-node1][DEBUG ] /dev/vdc other, unknown
Unhandled exception in thread started by <function run_and_release at 0x24db758>
Traceback (most recent call last):
File "/usr/lib/python2.6/site-packages/ceph_deploy/lib/remoto/lib/execnet/gateway_base.py", line 245, in run_and_release
with self._running_lock:
File "/usr/lib64/python2.6/threading.py", line 117, in acquire
me = _get_ident()
TypeError: 'NoneType' object is not callable
[root@ceph-node1 ceph]# ceph-deploy disk list ceph-node1
[ceph_deploy.cli][INFO ] Invoked (1.3.5): /usr/bin/ceph-deploy disk list ceph-node1
[ceph-node1][DEBUG ] connected to host: ceph-node1
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: CentOS 6.5 Final
[ceph_deploy.osd][DEBUG ] Listing disks on ceph-node1...
[ceph-node1][INFO ] Running command: ceph-disk list
[ceph-node1][DEBUG ] /dev/vda :
[ceph-node1][DEBUG ] /dev/vda1 swap, swap
[ceph-node1][DEBUG ] /dev/vda2 other, ext4, mounted on /
[ceph-node1][DEBUG ] /dev/vdb other, unknown
[ceph-node1][DEBUG ] /dev/vdc other, unknown
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]# parted /dev/vdb
GNU Parted 2.1
Using /dev/vdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Model: Virtio Block Device (virtblk)
Disk /dev/vdb: 472GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number Start End Size File system Name Flags

(parted) q
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]# fdisk -l /dev/vdb

WARNING: GPT (GUID Partition Table) detected on '/dev/vdb'! The util fdisk doesn't support GPT. Use GNU Parted.


Disk /dev/vdb: 472.4 GB, 472446402560 bytes
256 heads, 63 sectors/track, 57213 cylinders
Units = cylinders of 16128 * 512 = 8257536 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Device Boot Start End Blocks Id System
/dev/vdb1 1 57214 461373439+ ee GPT
[root@ceph-node1 ceph]# fdisk -l /dev/vdc

WARNING: GPT (GUID Partition Table) detected on '/dev/vdc'! The util fdisk doesn't support GPT. Use GNU Parted.


Disk /dev/vdc: 483.2 GB, 483183820800 bytes
255 heads, 63 sectors/track, 58743 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Device Boot Start End Blocks Id System
/dev/vdc1 1 58744 471859199+ ee GPT
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]# parted -s /dev/vdc print
Model: Virtio Block Device (virtblk)
Disk /dev/vdc: 483GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number Start End Size File system Name Flags

[root@ceph-node1 ceph]# parted -s /dev/vdb print
Model: Virtio Block Device (virtblk)
Disk /dev/vdb: 472GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number Start End Size File system Name Flags

[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]# ceph-disk prepare --fs-type xfs --cluster ceph /dev/vdc
INFO:ceph-disk:Will colocate journal with data on /dev/vdc
The operation has completed successfully.
The operation has completed successfully.
mkfs.xfs: /dev/vdc1 contains a mounted filesystem
Usage: mkfs.xfs
/* blocksize */ [-b log=n|size=num]
/* data subvol */ [-d agcount=n,agsize=n,file,name=xxx,size=num,
(sunit=value,swidth=value|su=num,sw=num),
sectlog=n|sectsize=num
/* inode size */ [-i log=n|perblock=n|size=num,maxpct=n,attr=0|1|2,
projid32bit=0|1]
/* log subvol */ [-l agnum=n,internal,size=num,logdev=xxx,version=n
sunit=value|su=num,sectlog=n|sectsize=num,
lazy-count=0|1]
/* label */ [-L label (maximum 12 characters)]
/* naming */ [-n log=n|size=num,version=2|ci]
/* prototype file */ [-p fname]
/* quiet */ [-q]
/* realtime subvol */ [-r extsize=num,size=num,rtdev=xxx]
/* sectorsize */ [-s log=n|size=num]
/* version */ [-V]
devicename
<devicename> is required unless -d name=xxx is given.
<num> is xxx (bytes), xxxs (sectors), xxxb (fs blocks), xxxk (xxx KiB),
xxxm (xxx MiB), xxxg (xxx GiB), xxxt (xxx TiB) or xxxp (xxx PiB).
<value> is xxx (512 byte blocks).
ceph-disk: Error: Command '['mkfs', '-t', 'xfs', '-f', '-i', 'size=2048', '--', '/dev/vdc1']' returned non-zero exit status 1
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]#
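
Note: the prepare step above partitioned /dev/vdc, but mkfs.xfs refused to format /dev/vdc1 because something had already mounted it, and the disk list just below then reports the partition as an active osd.3. That pattern suggests an old OSD filesystem survived the sgdisk zap (sgdisk rewrites only the partition tables, not filesystem signatures deeper in the disk) and was auto-mounted, most likely by the ceph udev hotplug rules, before mkfs could run. A possible checklist before retrying, assuming the data on /dev/vdc is disposable (the mount point is whatever the mount command reports, not something shown in this log):

    mount | grep vdc                           # see whether /dev/vdc1 was auto-mounted and where
    umount /dev/vdc1                           # unmount the stale OSD filesystem
    ceph-deploy disk zap ceph-node1:/dev/vdc   # re-zap before the next prepare attempt
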
[root@ceph-node1 ceph]# ceph-deploy disk list ceph-node1
[ceph_deploy.cli][INFO ] Invoked (1.3.5): /usr/bin/ceph-deploy disk list ceph-node1
[ceph-node1][DEBUG ] connected to host: ceph-node1
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: CentOS 6.5 Final
[ceph_deploy.osd][DEBUG ] Listing disks on ceph-node1...
[ceph-node1][INFO ] Running command: ceph-disk list
[ceph-node1][DEBUG ] /dev/vda :
[ceph-node1][DEBUG ] /dev/vda1 swap, swap
[ceph-node1][DEBUG ] /dev/vda2 other, ext4, mounted on /
[ceph-node1][DEBUG ] /dev/vdb other, unknown
[ceph-node1][DEBUG ] /dev/vdc :
[ceph-node1][DEBUG ] /dev/vdc1 ceph data, active, cluster ceph, osd.3
[ceph-node1][DEBUG ] /dev/vdc2 ceph journal
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]#
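
Note: the listing above now shows /dev/vdc1 as "ceph data, active, cluster ceph, osd.3" even though the prepare step failed, so whatever is on that partition is an already-initialized OSD data directory. One way to see what it actually contains, assuming it is mounted at the usual ceph-disk location (the /var/lib/ceph/osd/ceph-3 path is the default convention, not something shown being mounted in this log):

    mount | grep vdc1                        # confirm where the partition is mounted
    cat /var/lib/ceph/osd/ceph-3/whoami      # OSD id recorded on the partition
    cat /var/lib/ceph/osd/ceph-3/ceph_fsid   # cluster fsid recorded on the partition
    grep fsid /etc/ceph/ceph.conf            # fsid of the cluster being deployed, for comparison

If the on-disk id or fsid does not match what the cluster expects, the partition is carrying state from an earlier deployment attempt.
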
[root@ceph-node1 ceph]# fdisk -l /dev/vdc

WARNING: GPT (GUID Partition Table) detected on '/dev/vdc'! The util fdisk doesn't support GPT. Use GNU Parted.


Disk /dev/vdc: 483.2 GB, 483183820800 bytes
255 heads, 63 sectors/track, 58743 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Device Boot Start End Blocks Id System
/dev/vdc1 1 58744 471859199+ ee GPT
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]# parted -s /dev/vdc print
Model: Virtio Block Device (virtblk)
Disk /dev/vdc: 483GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number Start End Size File system Name Flags
2 1049kB 105MB 104MB ceph journal
1 106MB 483GB 483GB xfs ceph data

[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]# ceph-disk activate --activate-key /var/lib/ceph/bootstrap-osd/ceph.keyring /dev/vdc1
INFO:ceph-disk:ceph osd.3 already mounted in position; unmounting ours.
=== osd.3 ===
Error ENOENT: osd.3 does not exist. create it before updating the crush map
failed: 'timeout 10 /usr/bin/ceph --name=osd.3 --keyring=/var/lib/ceph/osd/ceph-3/keyring osd crush create-or-move -- 3 0.44 root=default host=ceph-node1 '
[root@ceph-node1 ceph]# ceph osd tree
# id weight type name up/down reweight
-1 0 root default
-2 0 host ceph-node1

[root@ceph-node1 ceph]#
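
Note: the final failure ties the two observations together: the data directory on /dev/vdc1 identifies itself as osd.3, but ceph osd tree shows the cluster map contains no OSDs at all, which is why the crush create-or-move step fails with ENOENT. One possible recovery, assuming the on-disk osd.3 state is stale and /dev/vdc can be wiped (the mount point below is the usual ceph-disk default, not taken from this log):

    umount /var/lib/ceph/osd/ceph-3              # unmount the stale OSD directory
    dd if=/dev/zero of=/dev/vdc bs=1M count=200  # clear the old partition table and leftover xfs/journal signatures
    ceph-deploy disk zap ceph-node1:/dev/vdc     # lay down a fresh, empty GPT
    ceph-deploy osd create ceph-node1:/dev/vdc   # prepare and activate; the monitors assign a fresh OSD id

If the osd.3 data is supposed to be kept instead, the id would first have to be re-registered with the monitors (for example with ceph osd create, passing the OSD's uuid from its fsid file) before ceph-disk activate can place it in the crush map.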