# mount -t glusterfs -o noatime,direct-io-mode=disable,log-file=/var/log/gluster-client.log gluster.work.com:/worknas /media/
Mount failed. Please check the log file for more details.

# cat /var/log/gluster-client.log
[2014-12-19 02:23:52.604989] I [glusterfsd.c:1910:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.4.5 (/usr/sbin/glusterfs --log-file=/var/log/gluster-client.log --direct-io-mode=disable --fuse-mountopts=noatime --volfile-id=/worknas --volfile-server=gluster.work.com --fuse-mountopts=noatime /media/)
[2014-12-19 02:23:52.613109] I [mount.c:290:gf_fuse_mount] 0-glusterfs-fuse: direct mount failed (Invalid argument), retry to mount via fusermount
[2014-12-19 02:23:52.620220] E [mount.c:162:fuse_mount_fusermount] 0-glusterfs-fuse: failed to exec fusermount: No such file or directory
[2014-12-19 02:23:52.621844] E [mount.c:298:gf_fuse_mount] 0-glusterfs-fuse: mount of gluster.work.com:/worknas to /media/ (default_permissions,noatime,allow_other,max_read=131072) failed
[2014-12-19 02:23:52.621883] I [socket.c:3480:socket_init] 0-glusterfs: SSL support is NOT enabled
[2014-12-19 02:23:52.622013] I [socket.c:3495:socket_init] 0-glusterfs: using system polling thread
[2014-12-19 02:23:52.623829] E [glusterfsd.c:1744:daemonize] 0-daemonize: mount failed
[2014-12-19 02:23:52.664776] E [common-utils.c:211:gf_resolve_ip6] 0-resolver: getaddrinfo failed (Name or service not known)
[2014-12-19 02:23:52.664859] E [name.c:249:af_inet_client_get_remote_sockaddr] 0-worknas-client-2: DNS resolution failed on host gluster03.work.com
[2014-12-19 02:23:52.666913] E [common-utils.c:211:gf_resolve_ip6] 0-resolver: getaddrinfo failed (Name or service not known)
[2014-12-19 02:23:52.666964] E [name.c:249:af_inet_client_get_remote_sockaddr] 0-worknas-client-3: DNS resolution failed on host gluster04.work.com
[2014-12-19 02:23:52.666998] E [afr-common.c:3735:afr_notify] 0-worknas-replicate-1: All subvolumes are down. Going offline until atleast one of them comes back up.
Given volfile:
+------------------------------------------------------------------------------+
1: volume worknas-client-0
2:     type protocol/client
3:     option ping-timeout 30
4:     option frame-timeout 90
5:     option transport-type tcp
6:     option remote-subvolume /srv/gluster/xvdb1/brick
7:     option remote-host gluster01.work.com
8: end-volume
9:
10: volume worknas-client-1
11:     type protocol/client
12:     option ping-timeout 30
13:     option frame-timeout 90
14:     option transport-type tcp
15:     option remote-subvolume /srv/gluster/xvdb1/brick
16:     option remote-host gluster02.work.com
17: end-volume
18:
19: volume worknas-client-2
20:     type protocol/client
21:     option ping-timeout 30
22:     option frame-timeout 90
23:     option transport-type tcp
24:     option remote-subvolume /srv/gluster/xvdb1/brick
25:     option remote-host gluster03.work.com
26: end-volume
27:
28: volume worknas-client-3
29:     type protocol/client
30:     option ping-timeout 30
31:     option frame-timeout 90
32:     option transport-type tcp
33:     option remote-subvolume /srv/gluster/xvdb1/brick
34:     option remote-host gluster04.work.com
35: end-volume
36:
37: volume worknas-replicate-0
38:     type cluster/replicate
39:     option self-heal-readdir-size 2KB
40:     option eager-lock on
41:     option data-self-heal-algorithm diff
42:     option data-self-heal-window-size 2
43:     option data-self-heal on
44:     option metadata-self-heal on
45:     option background-self-heal-count 20
46:     subvolumes worknas-client-0 worknas-client-1
47: end-volume
48:
49: volume worknas-replicate-1
50:     type cluster/replicate
51:     option self-heal-readdir-size 2KB
52:     option eager-lock on
53:     option data-self-heal-algorithm diff
54:     option data-self-heal-window-size 2
55:     option data-self-heal on
56:     option metadata-self-heal on
57:     option background-self-heal-count 20
58:     subvolumes worknas-client-2 worknas-client-3
59: end-volume
60:
61: volume worknas-stripe-0
62:     type cluster/stripe
63:     subvolumes worknas-replicate-0 worknas-replicate-1
64: end-volume
65:
66: volume worknas-dht
67:     type cluster/distribute
68:     option readdir-optimize on
69:     option rebalance-stats on
70:     option min-free-disk 5%
71:     subvolumes worknas-stripe-0
72: end-volume
73:
74: volume worknas-write-behind
75:     type performance/write-behind
76:     option cache-size 4MB
77:     option flush-behind on
78:     subvolumes worknas-dht
79: end-volume
80:
81: volume worknas-read-ahead
82:     type performance/read-ahead
83:     subvolumes worknas-write-behind
84: end-volume
85:
86: volume worknas-io-cache
87:     type performance/io-cache
88:     option cache-timeout 60
89:     option max-file-size 2MB
90:     subvolumes worknas-read-ahead
91: end-volume
92:
93: volume worknas-quick-read
94:     type performance/quick-read
95:     subvolumes worknas-io-cache
96: end-volume
97:
98: volume worknas-open-behind
99:     type performance/open-behind
100:     subvolumes worknas-quick-read
101: end-volume
102:
103: volume worknas-md-cache
104:     type performance/md-cache
105:     subvolumes worknas-open-behind
106: end-volume
107:
108: volume worknas
109:     type debug/io-stats
110:     option count-fop-hits off
111:     option latency-measurement off
112:     option log-level WARNING
113:     subvolumes worknas-md-cache
114: end-volume

+------------------------------------------------------------------------------+
[2014-12-19 02:23:52.668649] W [socket.c:514:__socket_rwv] 0-worknas-client-1: readv failed (No data available)
[2014-12-19 02:23:52.670786] W [socket.c:514:__socket_rwv] 0-worknas-client-0: readv failed (No data available)
[2014-12-19 02:23:52.697580] W [glusterfsd.c:1002:cleanup_and_exit] (-->/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7f384200fefd] (-->/lib/x86_64-linux-gnu/libpthread.so.0(+0x8182) [0x7f38422e3182] (-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xd5) [0x7f3842dccef5]))) 0-: received signum (15), shutting down
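
The log shows two independent problems: the client cannot exec fusermount (the FUSE userspace tools are missing on this machine), and gluster03.work.com / gluster04.work.com do not resolve from the client, which takes worknas-replicate-1 offline. A minimal checklist sketch, assuming a Debian/Ubuntu client (suggested by the /lib/x86_64-linux-gnu paths in the backtrace); the package name and the /etc/hosts fallback are assumptions, not taken from this log:

# which fusermount || apt-get install fuse             # assumption: Debian/Ubuntu, where the fuse package ships fusermount
# getent hosts gluster03.work.com gluster04.work.com   # should print addresses; if empty, fix DNS or add /etc/hosts entries
# mount -t glusterfs -o noatime,direct-io-mode=disable,log-file=/var/log/gluster-client.log gluster.work.com:/worknas /media/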