Pasted by a guest, Jul 23rd, 2015

0> 2015-07-23 16:09:43.755199 7fa47ef00700 -1 os/FileStore.cc: In function 'virtual int FileStore::read(coll_t, const ghobject_t&, uint64_t, size_t, ceph::bufferlist&, uint32_t, bool)' thread 7fa47ef00700 time 2015-07-23 16:09:43.560568
os/FileStore.cc: 2850: FAILED assert(allow_eio || !m_filestore_fail_eio || got != -5)

ceph version 0.94.2 (5fb85614ca8f354284c713a2f9c610860720bbf3)
1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x8b) [0xbc2b8b]
2: (FileStore::read(coll_t, ghobject_t const&, unsigned long, unsigned long, ceph::buffer::list&, unsigned int, bool)+0xc98) [0x916d58]
3: (ReplicatedBackend::build_push_op(ObjectRecoveryInfo const&, ObjectRecoveryProgress const&, ObjectRecoveryProgress*, PushOp*, object_stat_sum_t*)+0x2c2) [0xa0da52]
4: (ReplicatedBackend::handle_pull(pg_shard_t, PullOp&, PushOp*)+0xe4) [0xa10224]
5: (ReplicatedBackend::do_pull(std::tr1::shared_ptr<OpRequest>)+0xd6) [0xa10506]
6: (ReplicatedBackend::handle_message(std::tr1::shared_ptr<OpRequest>)+0x3ce) [0xa1763e]
7: (ReplicatedPG::do_request(std::tr1::shared_ptr<OpRequest>&, ThreadPool::TPHandle&)+0x167) [0x840f57]
8: (OSD::dequeue_op(boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest>, ThreadPool::TPHandle&)+0x3d5) [0x6a0f85]
9: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x331) [0x6a14d1]
10: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x875) [0xbb2c85]
11: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0xbb4da0]
12: (()+0x8182) [0x7fa4a3bfe182]
13: (clone()+0x6d) [0x7fa4a216947d]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

--- logging levels ---
0/ 5 none
0/ 0 lockdep
0/ 0 context
0/ 0 crush
1/ 5 mds
1/ 5 mds_balancer
1/ 5 mds_locker
1/ 5 mds_log
1/ 5 mds_log_expire
1/ 5 mds_migrator
0/ 0 buffer
0/ 0 timer
0/ 0 filer
0/ 1 striper
0/ 0 objecter
0/ 0 rados
0/ 0 rbd
0/ 5 rbd_replay
0/ 0 journaler
0/ 5 objectcacher
0/ 0 client
0/ 0 osd
0/ 0 optracker
0/ 0 objclass
0/ 0 filestore
1/ 3 keyvaluestore
0/ 0 journal
0/ 0 ms
0/ 0 mon
0/ 0 monc
0/ 0 paxos
0/ 0 tp
0/ 0 auth
1/ 5 crypto
0/ 0 finisher
0/ 0 heartbeatmap
0/ 0 perfcounter
0/ 0 rgw
1/10 civetweb
1/ 5 javaclient
0/ 0 asok
0/ 0 throttle
0/ 0 refs
1/ 5 xio
-2/-2 (syslog threshold)
-1/-1 (stderr threshold)
max_recent 10000
max_new 1000
log_file /var/log/ceph/ceph-osd.5.log
--- end dump of recent events ---
2015-07-23 16:09:43.804462 7fa47ef00700 -1 *** Caught signal (Aborted) **
in thread 7fa47ef00700

ceph version 0.94.2 (5fb85614ca8f354284c713a2f9c610860720bbf3)
1: /usr/bin/ceph-osd() [0xacb3ba]
2: (()+0x10340) [0x7fa4a3c06340]
3: (gsignal()+0x39) [0x7fa4a20a5cc9]
4: (abort()+0x148) [0x7fa4a20a90d8]
5: (__gnu_cxx::__verbose_terminate_handler()+0x155) [0x7fa4a29b0535]
6: (()+0x5e6d6) [0x7fa4a29ae6d6]
7: (()+0x5e703) [0x7fa4a29ae703]
8: (()+0x5e922) [0x7fa4a29ae922]
9: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x278) [0xbc2d78]
10: (FileStore::read(coll_t, ghobject_t const&, unsigned long, unsigned long, ceph::buffer::list&, unsigned int, bool)+0xc98) [0x916d58]
11: (ReplicatedBackend::build_push_op(ObjectRecoveryInfo const&, ObjectRecoveryProgress const&, ObjectRecoveryProgress*, PushOp*, object_stat_sum_t*)+0x2c2) [0xa0da52]
12: (ReplicatedBackend::handle_pull(pg_shard_t, PullOp&, PushOp*)+0xe4) [0xa10224]
13: (ReplicatedBackend::do_pull(std::tr1::shared_ptr<OpRequest>)+0xd6) [0xa10506]
14: (ReplicatedBackend::handle_message(std::tr1::shared_ptr<OpRequest>)+0x3ce) [0xa1763e]
15: (ReplicatedPG::do_request(std::tr1::shared_ptr<OpRequest>&, ThreadPool::TPHandle&)+0x167) [0x840f57]
16: (OSD::dequeue_op(boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest>, ThreadPool::TPHandle&)+0x3d5) [0x6a0f85]
17: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x331) [0x6a14d1]
18: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x875) [0xbb2c85]
19: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0xbb4da0]
20: (()+0x8182) [0x7fa4a3bfe182]
21: (clone()+0x6d) [0x7fa4a216947d]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

--- begin dump of recent events ---
0> 2015-07-23 16:09:43.804462 7fa47ef00700 -1 *** Caught signal (Aborted) **
in thread 7fa47ef00700

ceph version 0.94.2 (5fb85614ca8f354284c713a2f9c610860720bbf3)
1: /usr/bin/ceph-osd() [0xacb3ba]
2: (()+0x10340) [0x7fa4a3c06340]
3: (gsignal()+0x39) [0x7fa4a20a5cc9]
4: (abort()+0x148) [0x7fa4a20a90d8]
5: (__gnu_cxx::__verbose_terminate_handler()+0x155) [0x7fa4a29b0535]
6: (()+0x5e6d6) [0x7fa4a29ae6d6]
7: (()+0x5e703) [0x7fa4a29ae703]
8: (()+0x5e922) [0x7fa4a29ae922]
9: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x278) [0xbc2d78]
10: (FileStore::read(coll_t, ghobject_t const&, unsigned long, unsigned long, ceph::buffer::list&, unsigned int, bool)+0xc98) [0x916d58]
11: (ReplicatedBackend::build_push_op(ObjectRecoveryInfo const&, ObjectRecoveryProgress const&, ObjectRecoveryProgress*, PushOp*, object_stat_sum_t*)+0x2c2) [0xa0da52]
12: (ReplicatedBackend::handle_pull(pg_shard_t, PullOp&, PushOp*)+0xe4) [0xa10224]
13: (ReplicatedBackend::do_pull(std::tr1::shared_ptr<OpRequest>)+0xd6) [0xa10506]
14: (ReplicatedBackend::handle_message(std::tr1::shared_ptr<OpRequest>)+0x3ce) [0xa1763e]
15: (ReplicatedPG::do_request(std::tr1::shared_ptr<OpRequest>&, ThreadPool::TPHandle&)+0x167) [0x840f57]
16: (OSD::dequeue_op(boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest>, ThreadPool::TPHandle&)+0x3d5) [0x6a0f85]
17: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x331) [0x6a14d1]
18: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x875) [0xbb2c85]
19: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0xbb4da0]
20: (()+0x8182) [0x7fa4a3bfe182]
21: (clone()+0x6d) [0x7fa4a216947d]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

--- logging levels ---
0/ 5 none
0/ 0 lockdep
0/ 0 context
0/ 0 crush
1/ 5 mds
1/ 5 mds_balancer
1/ 5 mds_locker
1/ 5 mds_log
1/ 5 mds_log_expire
1/ 5 mds_migrator
0/ 0 buffer
0/ 0 timer
0/ 0 filer
0/ 1 striper
0/ 0 objecter
0/ 0 rados
0/ 0 rbd
0/ 5 rbd_replay
0/ 0 journaler
0/ 5 objectcacher
0/ 0 client
0/ 0 osd
0/ 0 optracker
0/ 0 objclass
0/ 0 filestore
1/ 3 keyvaluestore
0/ 0 journal
0/ 0 ms
0/ 0 mon
0/ 0 monc
0/ 0 paxos
0/ 0 tp
0/ 0 auth
1/ 5 crypto
0/ 0 finisher
0/ 0 heartbeatmap
0/ 0 perfcounter
0/ 0 rgw
1/10 civetweb
1/ 5 javaclient
0/ 0 asok
0/ 0 throttle
0/ 0 refs
1/ 5 xio
-2/-2 (syslog threshold)
-1/-1 (stderr threshold)
max_recent 10000
max_new 1000
log_file /var/log/ceph/ceph-osd.5.log
--- end dump of recent events ---
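Note (not part of the original log): the failed assert `allow_eio || !m_filestore_fail_eio || got != -5` fires when `FileStore::read` gets back `-5` from the underlying filesystem while building a recovery push op. `-5` is the negated errno `EIO` ("Input/output error"), which usually points at a bad sector or failing disk under this OSD rather than a Ceph bug. A minimal Python check of that errno mapping:

```python
import errno
import os

# FileStore::read returns negative errno values on failure;
# the -5 in the assert is -EIO from the underlying read.
got = -5
assert -got == errno.EIO
print(os.strerror(-got))  # "Input/output error"
```

Checking the kernel log and SMART data on osd.5's disk would be the usual next step before assuming anything else.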