xs01log
  1. Nov 12 13:37:46 xs01 mgmtd: [5677]: info: CIB replace: master
  2. Nov 12 13:37:46 xs01 lrmd: [5673]: info: rsc:drbd0:0 start[210] (pid 964)
  3. Nov 12 13:37:46 xs01 lrmd: [5673]: info: RA output: (drbd0:0:start:stdout)
  4. Nov 12 13:37:46 xs01 lrmd: [5673]: info: RA output: (drbd0:0:start:stdout)
  5. Nov 12 13:37:46 xs01 lrmd: [5673]: info: RA output: (drbd0:0:start:stdout)
  6. Nov 12 13:37:46 xs01 mgmtd: [5677]: info: CIB query: cib
  7. Nov 12 13:37:47 xs01 kernel: [252386.645433] d-con drbd0: Starting worker thread (from drbdsetup [1004])
  8. Nov 12 13:37:47 xs01 kernel: [252386.645602] block drbd0: disk( Diskless -> Attaching )
  9. Nov 12 13:37:47 xs01 kernel: [252386.650430] d-con drbd0: Method to ensure write ordering: barrier
  10. Nov 12 13:37:47 xs01 kernel: [252386.650437] block drbd0: max BIO size = 131072
  11. Nov 12 13:37:47 xs01 kernel: [252386.650444] block drbd0: drbd_bm_resize called with capacity == 2339768520
  12. Nov 12 13:37:47 xs01 kernel: [252386.660184] block drbd0: resync bitmap: bits=292471065 words=4569861 pages=8926
  13. Nov 12 13:37:47 xs01 kernel: [252386.660195] block drbd0: size = 1116 GB (1169884260 KB)
  14. Nov 12 13:37:47 xs01 kernel: [252386.912837] block drbd0: bitmap READ of 8926 pages took 63 jiffies
  15. Nov 12 13:37:47 xs01 kernel: [252386.921164] block drbd0: recounting of set bits took additional 2 jiffies
  16. Nov 12 13:37:47 xs01 kernel: [252386.921169] block drbd0: 0 KB (0 bits) marked out-of-sync by on disk bit-map.
  17. Nov 12 13:37:47 xs01 kernel: [252386.921181] block drbd0: disk( Attaching -> Consistent )
  18. Nov 12 13:37:47 xs01 kernel: [252386.921185] block drbd0: attached to UUIDs A5D2F216E6C0E288:0000000000000000:9D9C5DAFE8FBE22D:9D9B5DAFE8FBE22D
  19. Nov 12 13:37:47 xs01 lrmd: [5673]: info: RA output: (drbd0:0:start:stdout)
  20. Nov 12 13:37:47 xs01 lrmd: [5673]: info: RA output: (drbd0:0:start:stdout)
  21. Nov 12 13:37:47 xs01 kernel: [252386.943022] d-con drbd0: conn( StandAlone -> Unconnected )
  22. Nov 12 13:37:47 xs01 kernel: [252386.943045] d-con drbd0: Starting receiver thread (from drbd_w_drbd0 [1005])
  23. Nov 12 13:37:47 xs01 kernel: [252386.943121] d-con drbd0: receiver (re)started
  24. Nov 12 13:37:47 xs01 kernel: [252386.943138] d-con drbd0: conn( Unconnected -> WFConnection )
  25. Nov 12 13:37:47 xs01 attrd: [5674]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-drbd0:0 (5)
  26. Nov 12 13:37:47 xs01 attrd: [5674]: notice: attrd_perform_update: Sent update 331: master-drbd0:0=5
  27. Nov 12 13:37:47 xs01 lrmd: [5673]: info: RA output: (drbd0:0:start:stdout)
  28. Nov 12 13:37:47 xs01 lrmd: [5673]: info: operation start[210] on drbd0:0 for client 5676: pid 964 exited with return code 0
  29. Nov 12 13:37:48 xs01 crmd: [5676]: info: process_lrm_event: LRM operation drbd0:0_start_0 (call=210, rc=0, cib-update=208, confirmed=true) ok
  30. Nov 12 13:37:48 xs01 lrmd: [5673]: info: rsc:drbd0:0 notify[211] (pid 1028)
  31. Nov 12 13:37:48 xs01 lrmd: [5673]: info: RA output: (drbd0:0:notify:stdout)
  32. Nov 12 13:37:48 xs01 lrmd: [5673]: info: operation notify[211] on drbd0:0 for client 5676: pid 1028 exited with return code 0
  33. Nov 12 13:37:48 xs01 crmd: [5676]: info: process_lrm_event: LRM operation drbd0:0_notify_0 (call=211, rc=0, cib-update=0, confirmed=true) ok
  34. Nov 12 13:37:48 xs01 mgmtd: [5677]: info: CIB query: cib
  35. Nov 12 13:37:49 xs01 lrmd: [5673]: info: rsc:drbd0:0 notify[212] (pid 1056)
  36. Nov 12 13:37:49 xs01 lrmd: [5673]: info: RA output: (drbd0:0:notify:stdout)
  37. Nov 12 13:37:49 xs01 lrmd: [5673]: info: operation notify[212] on drbd0:0 for client 5676: pid 1056 exited with return code 0
  38. Nov 12 13:37:49 xs01 crmd: [5676]: info: process_lrm_event: LRM operation drbd0:0_notify_0 (call=212, rc=0, cib-update=0, confirmed=true) ok
  39. Nov 12 13:37:49 xs01 lrmd: [5673]: info: rsc:drbd0:0 notify[213] (pid 1084)
  40. Nov 12 13:37:49 xs01 lrmd: [5673]: info: operation notify[213] on drbd0:0 for client 5676: pid 1084 exited with return code 0
  41. Nov 12 13:37:49 xs01 crmd: [5676]: info: process_lrm_event: LRM operation drbd0:0_notify_0 (call=213, rc=0, cib-update=0, confirmed=true) ok
  42. Nov 12 13:37:49 xs01 lrmd: [5673]: info: rsc:drbd0:0 promote[214] (pid 1107)
  43. Nov 12 13:37:50 xs01 kernel: [252389.003857] d-con drbd0: helper command: /sbin/drbdadm fence-peer drbd0
  44. Nov 12 13:37:50 xs01 stonith-ng: [5672]: info: initiate_remote_stonith_op: Initiating remote operation off for xs02: 9a4553b1-ce00-452c-83f2-323babe09022
  45. Nov 12 13:37:50 xs01 stonith-ng: [5672]: info: can_fence_host_with_device: Refreshing port list for stonith-ipmi-xs02
  46. Nov 12 13:37:50 xs01 stonith-ng: [5672]: info: can_fence_host_with_device: stonith-ipmi-xs02 can fence xs02: dynamic-list
  47. Nov 12 13:37:50 xs01 stonith-ng: [5672]: info: call_remote_stonith: Requesting that xs01 perform op off xs02
  48. Nov 12 13:37:50 xs01 stonith-ng: [5672]: info: can_fence_host_with_device: stonith-ipmi-xs02 can fence xs02: dynamic-list
  49. Nov 12 13:37:50 xs01 stonith-ng: [5672]: info: stonith_fence: Found 1 matching devices for 'xs02'
  50. Nov 12 13:37:50 xs01 stonith-ng: [5672]: info: stonith_command: Processed st_fence from xs01: rc=-1
  51. Nov 12 13:37:50 xs01 stonith-ng: [5672]: info: crm_new_peer: Node xs02 now has id: 134283530
  52. Nov 12 13:37:50 xs01 stonith-ng: [5672]: info: crm_new_peer: Node 134283530 is now known as xs02
  53. Nov 12 13:37:50 xs01 kernel: [252389.088093] d-con drbd0: Handshake successful: Agreed network protocol version 100
  54. Nov 12 13:37:50 xs01 kernel: [252389.088262] d-con drbd0: Peer authenticated using 20 bytes HMAC
  55. Nov 12 13:37:50 xs01 kernel: [252389.088298] d-con drbd0: conn( WFConnection -> WFReportParams )
  56. Nov 12 13:37:50 xs01 kernel: [252389.088301] d-con drbd0: Starting asender thread (from drbd_r_drbd0 [1017])
  57. Nov 12 13:37:50 xs01 mgmtd: [5677]: info: CIB query: cib
  58. Nov 12 13:37:50 xs01 external/ipmi(stonith-ipmi-xs02)[1152]: [1163]: debug: ipmitool output: Chassis Power Control: Down/Off
  59. Nov 12 13:37:51 xs01 stonith-ng: [5672]: notice: log_operation: Operation 'off' [1146] (call 0 from 02a982a9-d2b2-419d-b2dd-18faa40352ef) for host 'xs02' with device 'stonith-ipmi-xs02' returned: 0
  60. Nov 12 13:37:51 xs01 crmd: [5676]: notice: tengine_stonith_notify: Peer xs02 was terminated (off) by xs01 for xs01: OK (ref=9a4553b1-ce00-452c-83f2-323babe09022)
  61. Nov 12 13:37:51 xs01 stonith-ng: [5672]: info: log_operation: stonith-ipmi-xs02: Performing: stonith -t external/ipmi -T off xs02
  62. Nov 12 13:37:51 xs01 stonith-ng: [5672]: info: log_operation: stonith-ipmi-xs02: success: xs02 0
  63. Nov 12 13:37:51 xs01 stonith-ng: [5672]: notice: remote_op_done: Operation off of xs02 by xs01 for xs01[02a982a9-d2b2-419d-b2dd-18faa40352ef]: OK
  64. Nov 12 13:37:51 xs01 stonith_admin-fence-peer.sh[1165]: stonith_admin successfully fenced peer xs02.
  65. Nov 12 13:37:51 xs01 kernel: [252390.276555] d-con drbd0: helper command: /sbin/drbdadm fence-peer drbd0 exit code 7 (0x700)
  66. Nov 12 13:37:51 xs01 kernel: [252390.276560] d-con drbd0: fence-peer helper returned 7 (peer was stonithed)
  67. Nov 12 13:37:51 xs01 kernel: [252390.276567] d-con drbd0: Expected cstate < C_WF_REPORT_PARAMS
  68. Nov 12 13:37:51 xs01 kernel: [252390.276570] d-con drbd0: Expected cstate < C_WF_REPORT_PARAMS
  69. Nov 12 13:37:51 xs01 kernel: [252390.276572] d-con drbd0: Expected cstate < C_WF_REPORT_PARAMS
  70. Nov 12 13:37:51 xs01 lrmd: [5673]: info: RA output: (drbd0:0:promote:stderr) 0: State change failed: (-2) Need access to UpToDate data
  71. Nov 12 13:37:51 xs01 lrmd: [5673]: info: RA output: (drbd0:0:promote:stderr) Command 'drbdsetup primary 0
  72. Nov 12 13:37:51 xs01 lrmd: [5673]: info: RA output: (drbd0:0:promote:stderr) ' terminated with exit code 17
  73. Nov 12 13:37:51 xs01 kernel: [252390.276599] block drbd0: drbd_sync_handshake:
  74. Nov 12 13:37:51 xs01 kernel: [252390.276604] block drbd0: self A5D2F216E6C0E288:0000000000000000:9D9C5DAFE8FBE22D:9D9B5DAFE8FBE22D bits:0 flags:0
  75. Nov 12 13:37:51 xs01 kernel: [252390.276608] block drbd0: peer A5D2F216E6C0E288:0000000000000000:9D9C5DAFE8FBE22C:9D9B5DAFE8FBE22D bits:0 flags:0
  76. Nov 12 13:37:51 xs01 kernel: [252390.276612] block drbd0: uuid_compare()=0 by rule 40
  77. Nov 12 13:37:51 xs01 kernel: [252390.276622] block drbd0: peer( Unknown -> Secondary ) conn( WFReportParams -> Connected ) disk( Consistent -> UpToDate ) pdsk( DUnknown -> UpToDate )
  78. Nov 12 13:37:51 xs01 drbd(drbd0:0)[1107]: [1167]: ERROR: drbd0: Called drbdadm -c /etc/drbd.conf primary drbd0
  79. Nov 12 13:37:51 xs01 drbd(drbd0:0)[1107]: [1169]: ERROR: drbd0: Exit code 17
  80. Nov 12 13:37:51 xs01 drbd(drbd0:0)[1107]: [1171]: ERROR: drbd0: Command output:
  81. Nov 12 13:37:51 xs01 lrmd: [5673]: info: RA output: (drbd0:0:promote:stdout)
  82. Nov 12 13:37:51 xs01 drbd(drbd0:0)[1107]: [1173]: CRIT: Refusing to be promoted to Primary without UpToDate data
  83. Nov 12 13:37:51 xs01 corosync[5664]: [pcmk ] info: update_member: Node xs02 now has process list: 00000000000000000000000000151112 (1380626)
  84. Nov 12 13:37:51 xs01 lrmd: [5673]: info: operation promote[214] on drbd0:0 for client 5676: pid 1107 exited with return code 1
  85. Nov 12 13:37:51 xs01 crmd: [5676]: info: process_lrm_event: LRM operation drbd0:0_promote_0 (call=214, rc=1, cib-update=209, confirmed=true) unknown error
  86. Nov 12 13:37:51 xs01 corosync[5664]: [pcmk ] info: send_member_notification: Sending membership update 348 to 2 children
  87. Nov 12 13:37:51 xs01 cib: [5671]: info: ais_dispatch_message: Membership 348: quorum retained
  88. Nov 12 13:37:51 xs01 corosync[5664]: [pcmk ] info: update_member: Node xs02 now has process list: 00000000000000000000000000151312 (1381138)
  89. Nov 12 13:37:51 xs01 crmd: [5676]: info: ais_dispatch_message: Membership 348: quorum retained
  90. Nov 12 13:37:51 xs01 cib: [5671]: info: ais_dispatch_message: Membership 348: quorum retained
  91. Nov 12 13:37:51 xs01 corosync[5664]: [pcmk ] info: send_member_notification: Sending membership update 348 to 2 children
  92. Nov 12 13:37:51 xs01 crmd: [5676]: notice: crmd_peer_update: Status update: Client xs02/crmd now has status [offline] (DC=xs02)
  93. Nov 12 13:37:51 xs01 crmd: [5676]: info: crmd_peer_update: Got client status callback - our DC is dead
  94. Nov 12 13:37:51 xs01 crmd: [5676]: notice: do_state_transition: State transition S_NOT_DC -> S_ELECTION [ input=I_ELECTION cause=C_CRMD_STATUS_CALLBACK origin=crmd_peer_update ]
  95. Nov 12 13:37:51 xs01 crmd: [5676]: info: ais_dispatch_message: Membership 348: quorum retained
  96. Nov 12 13:37:51 xs01 crmd: [5676]: notice: crmd_peer_update: Status update: Client xs02/crmd now has status [online] (DC=<null>)
  97. Nov 12 13:37:55 xs01 kernel: [252394.150191] bnx2 0000:0b:00.1: eth1: NIC Copper Link is Down
  98. Nov 12 13:37:56 xs01 kernel: [252395.859498] bnx2 0000:0b:00.1: eth1: NIC Copper Link is Up, 100 Mbps full duplex, receive & transmit flow control ON
  99. Nov 12 13:37:59 xs01 corosync[5664]: [TOTEM ] A processor failed, forming new configuration.
  100. Nov 12 13:38:05 xs01 corosync[5664]: [CLM ] CLM CONFIGURATION CHANGE
  101. Nov 12 13:38:05 xs01 corosync[5664]: [CLM ] New Configuration:
  102. Nov 12 13:38:05 xs01 corosync[5664]: [CLM ] r(0) ip(10.1.1.135)
  103. Nov 12 13:38:05 xs01 corosync[5664]: [CLM ] Members Left:
  104. Nov 12 13:38:05 xs01 corosync[5664]: [CLM ] r(0) ip(10.1.1.136)
  105. Nov 12 13:38:05 xs01 corosync[5664]: [CLM ] Members Joined:
  106. Nov 12 13:38:05 xs01 corosync[5664]: [pcmk ] notice: pcmk_peer_update: Transitional membership event on ring 352: memb=1, new=0, lost=1
  107. Nov 12 13:38:05 xs01 corosync[5664]: [pcmk ] info: pcmk_peer_update: memb: xs01 117506314
  108. Nov 12 13:38:05 xs01 corosync[5664]: [pcmk ] info: pcmk_peer_update: lost: xs02 134283530
  109. Nov 12 13:38:05 xs01 corosync[5664]: [CLM ] CLM CONFIGURATION CHANGE
  110. Nov 12 13:38:05 xs01 corosync[5664]: [CLM ] New Configuration:
  111. Nov 12 13:38:05 xs01 corosync[5664]: [CLM ] r(0) ip(10.1.1.135)
  112. Nov 12 13:38:05 xs01 corosync[5664]: [CLM ] Members Left:
  113. Nov 12 13:38:05 xs01 corosync[5664]: [CLM ] Members Joined:
  114. Nov 12 13:38:05 xs01 corosync[5664]: [pcmk ] notice: pcmk_peer_update: Stable membership event on ring 352: memb=1, new=0, lost=0
  115. Nov 12 13:38:05 xs01 corosync[5664]: [pcmk ] info: pcmk_peer_update: MEMB: xs01 117506314
  116. Nov 12 13:38:05 xs01 corosync[5664]: [pcmk ] info: ais_mark_unseen_peer_dead: Node xs02 was not seen in the previous transition
  117. Nov 12 13:38:05 xs01 corosync[5664]: [pcmk ] info: update_member: Node 134283530/xs02 is now: lost
  118. Nov 12 13:38:05 xs01 corosync[5664]: [pcmk ] info: send_member_notification: Sending membership update 352 to 2 children
  119. Nov 12 13:38:05 xs01 corosync[5664]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
  120. Nov 12 13:38:05 xs01 corosync[5664]: [CPG ] chosen downlist: sender r(0) ip(10.1.1.135) ; members(old:2 left:1)
  121. Nov 12 13:38:05 xs01 cib: [5671]: notice: ais_dispatch_message: Membership 352: quorum lost
  122. Nov 12 13:38:05 xs01 corosync[5664]: [MAIN ] Completed service synchronization, ready to provide service.
  123. Nov 12 13:38:05 xs01 crmd: [5676]: notice: ais_dispatch_message: Membership 352: quorum lost
  124. Nov 12 13:38:05 xs01 cib: [5671]: info: crm_update_peer: Node xs02: id=134283530 state=lost (new) addr=r(0) ip(10.1.1.136) votes=1 born=340 seen=348 proc=00000000000000000000000000151312
  125. Nov 12 13:38:05 xs01 crmd: [5676]: info: ais_status_callback: status: xs02 is now lost (was member)
  126. Nov 12 13:38:05 xs01 cib: [5671]: info: cib_process_readwrite: We are now in R/W mode
  127. Nov 12 13:38:05 xs01 crmd: [5676]: info: crm_update_peer: Node xs02: id=134283530 state=lost (new) addr=r(0) ip(10.1.1.136) votes=1 born=340 seen=348 proc=00000000000000000000000000151312
  128. Nov 12 13:38:05 xs01 cib: [5671]: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/210, version=0.442.6): ok (rc=0)
  129. Nov 12 13:38:05 xs01 crmd: [5676]: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
  130. Nov 12 13:38:05 xs01 cib: [5671]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/211, version=0.442.7): ok (rc=0)
  131. Nov 12 13:38:05 xs01 crmd: [5676]: info: do_te_control: Registering TE UUID: d1569e85-c54d-4aae-899f-6cde2642f3be
  132. Nov 12 13:38:05 xs01 cib: [5671]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/213, version=0.442.8): ok (rc=0)
  133. Nov 12 13:38:05 xs01 crmd: [5676]: info: set_graph_functions: Setting custom graph functions
  134. Nov 12 13:38:05 xs01 cib: [5671]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/215, version=0.442.9): ok (rc=0)
  135. Nov 12 13:38:05 xs01 crmd: [5676]: info: do_dc_takeover: Taking over DC status for this partition
  136. Nov 12 13:38:05 xs01 cib: [5671]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/217, version=0.442.10): ok (rc=0)
  137. Nov 12 13:38:05 xs01 crmd: [5676]: info: join_make_offer: Making join offers based on membership 352
  138. Nov 12 13:38:05 xs01 crmd: [5676]: info: do_dc_join_offer_all: join-1: Waiting on 1 outstanding join acks
  139. Nov 12 13:38:05 xs01 crmd: [5676]: info: ais_dispatch_message: Membership 352: quorum still lost
  140. Nov 12 13:38:05 xs01 crmd: [5676]: info: crmd_ais_dispatch: Setting expected votes to 2
  141. Nov 12 13:38:05 xs01 crmd: [5676]: info: update_dc: Set DC to xs01 (3.0.6)
  142. Nov 12 13:38:05 xs01 crmd: [5676]: info: ais_dispatch_message: Membership 352: quorum still lost
  143. Nov 12 13:38:05 xs01 crmd: [5676]: info: crmd_ais_dispatch: Setting expected votes to 2
  144. Nov 12 13:38:05 xs01 crmd: [5676]: notice: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
  145. Nov 12 13:38:05 xs01 crmd: [5676]: info: do_dc_join_finalize: join-1: Syncing the CIB from xs01 to the rest of the cluster
  146. Nov 12 13:38:05 xs01 cib: [5671]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/221, version=0.442.11): ok (rc=0)
  147. Nov 12 13:38:05 xs01 cib: [5671]: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/222, version=0.442.11): ok (rc=0)
  148. Nov 12 13:38:05 xs01 cib: [5671]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/223, version=0.442.12): ok (rc=0)
  149. Nov 12 13:38:05 xs01 crmd: [5676]: info: do_dc_join_ack: join-1: Updating node state to member for xs01
  150. Nov 12 13:38:05 xs01 crmd: [5676]: info: erase_status_tag: Deleting xpath: //node_state[@uname='xs01']/lrm
  151. Nov 12 13:38:05 xs01 cib: [5671]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='xs01']/lrm (origin=local/crmd/224, version=0.442.13): ok (rc=0)
  152. Nov 12 13:38:05 xs01 crmd: [5676]: notice: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
  153. Nov 12 13:38:05 xs01 attrd: [5674]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
  154. Nov 12 13:38:05 xs01 cib: [5671]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/226, version=0.442.15): ok (rc=0)
  155. Nov 12 13:38:05 xs01 pengine: [5675]: notice: unpack_config: On loss of CCM Quorum: Ignore
  156. Nov 12 13:38:05 xs01 crmd: [5676]: info: abort_transition_graph: do_te_invoke:162 - Triggered transition abort (complete=1) : Peer Cancelled
  157. Nov 12 13:38:05 xs01 attrd: [5674]: notice: attrd_trigger_update: Sending flush op to all hosts for: fail-count-drbd0:0 (8)
  158. Nov 12 13:38:05 xs01 cib: [5671]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/228, version=0.442.17): ok (rc=0)
  159. Nov 12 13:38:05 xs01 pengine: [5675]: WARN: pe_fence_node: Node xs02 will be fenced because it is un-expectedly down
  160. Nov 12 13:38:05 xs01 crmd: [5676]: WARN: match_down_event: No match for shutdown action on xs02
  161. Nov 12 13:38:05 xs01 attrd: [5674]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-drbd0:0 (5)
  162. Nov 12 13:38:05 xs01 pengine: [5675]: WARN: determine_online_status: Node xs02 is unclean
  163. Nov 12 13:38:05 xs01 crmd: [5676]: info: te_update_diff: Stonith/shutdown of xs02 not matched
  164. Nov 12 13:38:05 xs01 attrd: [5674]: notice: attrd_trigger_update: Sending flush op to all hosts for: last-failure-vmdisk-pri:0 (1352722459)
  165. Nov 12 13:38:05 xs01 pengine: [5675]: WARN: unpack_rsc_op: Processing failed op drbd0:0_last_failure_0 on xs01: unknown error (1)
  166. Nov 12 13:38:05 xs01 crmd: [5676]: info: abort_transition_graph: te_update_diff:234 - Triggered transition abort (complete=1, tag=node_state, id=xs02, magic=NA, cib=0.442.16) : Node failure
  167. Nov 12 13:38:05 xs01 attrd: [5674]: notice: attrd_trigger_update: Sending flush op to all hosts for: last-failure-drbd0:0 (1352722653)
  168. Nov 12 13:38:05 xs01 pengine: [5675]: WARN: unpack_rsc_op: Processing failed op drbd0:1_last_failure_0 on xs02: unknown error (1)
  169. Nov 12 13:38:05 xs01 attrd: [5674]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
  170. Nov 12 13:38:05 xs01 pengine: [5675]: notice: common_apply_stickiness: ms_drbd0 can fail 999992 more times on xs01 before being forced off
  171. Nov 12 13:38:05 xs01 pengine: [5675]: notice: common_apply_stickiness: ms_drbd0 can fail 999992 more times on xs01 before being forced off
  172. Nov 12 13:38:05 xs01 pengine: [5675]: notice: common_apply_stickiness: ms_drbd0 can fail 999998 more times on xs02 before being forced off
  173. Nov 12 13:38:05 xs01 pengine: [5675]: notice: common_apply_stickiness: ms_drbd0 can fail 999998 more times on xs02 before being forced off
  174. Nov 12 13:38:05 xs01 pengine: [5675]: WARN: custom_action: Action stonith-ipmi-xs01_stop_0 on xs02 is unrunnable (offline)
  175. Nov 12 13:38:05 xs01 pengine: [5675]: WARN: custom_action: Marking node xs02 unclean
  176. Nov 12 13:38:05 xs01 pengine: [5675]: WARN: custom_action: Action stonith-ipmi-xs01_stop_0 on xs02 is unrunnable (offline)
  177. Nov 12 13:38:05 xs01 pengine: [5675]: WARN: custom_action: Marking node xs02 unclean
  178. Nov 12 13:38:05 xs01 pengine: [5675]: WARN: custom_action: Action drbd0:1_stop_0 on xs02 is unrunnable (offline)
  179. Nov 12 13:38:05 xs01 pengine: [5675]: WARN: custom_action: Marking node xs02 unclean
  180. Nov 12 13:38:05 xs01 pengine: [5675]: WARN: custom_action: Action drbd0:1_stop_0 on xs02 is unrunnable (offline)
  181. Nov 12 13:38:05 xs01 pengine: [5675]: WARN: custom_action: Marking node xs02 unclean
  182. Nov 12 13:38:05 xs01 pengine: [5675]: WARN: custom_action: Action drbd0:1_stop_0 on xs02 is unrunnable (offline)
  183. Nov 12 13:38:05 xs01 pengine: [5675]: WARN: custom_action: Marking node xs02 unclean
  184. Nov 12 13:38:05 xs01 pengine: [5675]: WARN: custom_action: Action drbd0:1_stop_0 on xs02 is unrunnable (offline)
  185. Nov 12 13:38:05 xs01 pengine: [5675]: WARN: custom_action: Marking node xs02 unclean
  186. Nov 12 13:38:05 xs01 pengine: [5675]: WARN: stage6: Scheduling Node xs02 for STONITH
  187. Nov 12 13:38:05 xs01 pengine: [5675]: notice: LogActions: Stop stonith-ipmi-xs01 (xs02)
  188. Nov 12 13:38:05 xs01 pengine: [5675]: notice: LogActions: Start dlm:0 (xs01 - blocked)
  189. Nov 12 13:38:05 xs01 pengine: [5675]: notice: LogActions: Start o2cb:0 (xs01 - blocked)
  190. Nov 12 13:38:05 xs01 pengine: [5675]: notice: LogActions: Demote drbd0:0 (Master -> Slave xs01)
  191. Nov 12 13:38:05 xs01 pengine: [5675]: notice: LogActions: Recover drbd0:0 (Master xs01)
  192. Nov 12 13:38:05 xs01 pengine: [5675]: notice: LogActions: Stop drbd0:1 (xs02)
  193. Nov 12 13:38:05 xs01 pengine: [5675]: notice: LogActions: Start vmdisk-pri:0 (xs01 - blocked)
  194. Nov 12 13:38:05 xs01 crmd: [5676]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
  195. Nov 12 13:38:05 xs01 crmd: [5676]: info: do_te_invoke: Processing graph 0 (ref=pe_calc-dc-1352738285-133) derived from /var/lib/pengine/pe-warn-52.bz2
  196. Nov 12 13:38:05 xs01 crmd: [5676]: info: te_rsc_command: Initiating action 74: notify drbd0:0_pre_notify_demote_0 on xs01 (local)
  197. Nov 12 13:38:05 xs01 lrmd: [5673]: info: rsc:drbd0:0 notify[215] (pid 1190)
  198. Nov 12 13:38:05 xs01 pengine: [5675]: WARN: process_pe_message: Transition 0: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-52.bz2
  199. Nov 12 13:38:05 xs01 pengine: [5675]: notice: process_pe_message: Configuration WARNINGs found during PE processing. Please run "crm_verify -L" to identify issues.
  200. Nov 12 13:38:05 xs01 lrmd: [5673]: info: operation notify[215] on drbd0:0 for client 5676: pid 1190 exited with return code 0
  201. Nov 12 13:38:05 xs01 crmd: [5676]: info: process_lrm_event: LRM operation drbd0:0_notify_0 (call=215, rc=0, cib-update=0, confirmed=true) ok
  202. Nov 12 13:38:05 xs01 crmd: [5676]: notice: te_fence_node: Executing reboot fencing operation (61) on xs02 (timeout=60000)
  203. Nov 12 13:38:05 xs01 stonith-ng: [5672]: info: initiate_remote_stonith_op: Initiating remote operation reboot for xs02: 719bd9e5-b039-4fc4-8646-baeea93e0772
  204. Nov 12 13:38:05 xs01 stonith-ng: [5672]: info: can_fence_host_with_device: stonith-ipmi-xs02 can fence xs02: dynamic-list
  205. Nov 12 13:38:05 xs01 stonith-ng: [5672]: info: call_remote_stonith: Requesting that xs01 perform op reboot xs02
  206. Nov 12 13:38:05 xs01 stonith-ng: [5672]: info: can_fence_host_with_device: stonith-ipmi-xs02 can fence xs02: dynamic-list
  207. Nov 12 13:38:05 xs01 stonith-ng: [5672]: info: stonith_fence: Found 1 matching devices for 'xs02'
  208. Nov 12 13:38:05 xs01 stonith-ng: [5672]: info: stonith_command: Processed st_fence from xs01: rc=-1
  209. Nov 12 13:38:05 xs01 external/ipmi(stonith-ipmi-xs02)[1217]: [1230]: debug: ipmitool output: Chassis Power Control: Up/On
  210. Nov 12 13:38:05 xs01 mgmtd: [5677]: info: CIB query: cib
  211. Nov 12 13:38:06 xs01 stonith-ng: [5672]: notice: log_operation: Operation 'reboot' [1212] (call 0 from f80eb6d2-7c12-47ad-98e3-fd0822447da0) for host 'xs02' with device 'stonith-ipmi-xs02' returned: 0
  213. Nov 12 13:38:06 xs01 crmd: [5676]: info: tengine_stonith_callback: StonithOp <st-reply st_origin="stonith_construct_async_reply" t="stonith-ng" st_op="reboot" st_remote_op="719bd9e5-b039-4fc4-8646-baeea93e0772" st_clientid="f80eb6d2-7c12-47ad-98e3-fd0822447da0" st_target="xs02" st_device_action="st_fence" st_callid="0" st_callopt="0" st_rc="0" st_output="Performing: stonith -t external/ipmi -T reset xs02 success: xs02 0 " src="xs01" seq="8" state="2" />
  216. Nov 12 13:38:06 xs01 stonith-ng: [5672]: info: log_operation: stonith-ipmi-xs02: Performing: stonith -t external/ipmi -T reset xs02
  217. Nov 12 13:38:06 xs01 crmd: [5676]: info: erase_status_tag: Deleting xpath: //node_state[@uname='xs02']/lrm
  218. Nov 12 13:38:06 xs01 stonith-ng: [5672]: info: log_operation: stonith-ipmi-xs02: success: xs02 0
  219. Nov 12 13:38:06 xs01 crmd: [5676]: info: erase_status_tag: Deleting xpath: //node_state[@uname='xs02']/transient_attributes
  220. Nov 12 13:38:06 xs01 stonith-ng: [5672]: notice: remote_op_done: Operation reboot of xs02 by xs01 for xs01[f80eb6d2-7c12-47ad-98e3-fd0822447da0]: OK
  221. Nov 12 13:38:06 xs01 crmd: [5676]: notice: crmd_peer_update: Status update: Client xs02/crmd now has status [offline] (DC=true)
  222. Nov 12 13:38:06 xs01 crmd: [5676]: notice: tengine_stonith_notify: Peer xs02 was terminated (reboot) by xs01 for xs01: OK (ref=719bd9e5-b039-4fc4-8646-baeea93e0772)
  223. Nov 12 13:38:06 xs01 crmd: [5676]: info: te_rsc_command: Initiating action 25: demote drbd0:0_demote_0 on xs01 (local)
  224. Nov 12 13:38:06 xs01 lrmd: [5673]: info: rsc:drbd0:0 demote[216] (pid 1231)
  225. Nov 12 13:38:06 xs01 crmd: [5676]: info: cib_fencing_updated: Fencing update 231 for xs02: complete
  226. Nov 12 13:38:06 xs01 cib: [5671]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='xs02']/lrm (origin=local/crmd/232, version=0.442.24): ok (rc=0)
  227. Nov 12 13:38:06 xs01 crmd: [5676]: info: abort_transition_graph: te_update_diff:320 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=stonith-ipmi-xs01_last_0, magic=0:0;5:18:0:984c6a8c-26eb-43cc-94d9-ec0b63f93dd7, cib=0.442.24) : Resource op removal
  229. Nov 12 13:38:06 xs01 cib: [5671]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='xs02']/transient_attributes (origin=local/crmd/233, version=0.442.25): ok (rc=0)
  231. Nov 12 13:38:06 xs01 crmd: [5676]: info: abort_transition_graph: te_update_diff:194 - Triggered transition abort (complete=0, tag=transient_attributes, id=xs02, magic=NA, cib=0.442.25) : Transient attribute: removal
  233. Nov 12 13:38:06 xs01 lrmd: [5673]: info: operation demote[216] on drbd0:0 for client 5676: pid 1231 exited with return code 0
  234. Nov 12 13:38:06 xs01 crmd: [5676]: info: process_lrm_event: LRM operation drbd0:0_demote_0 (call=216, rc=0, cib-update=235, confirmed=true) ok
  235. Nov 12 13:38:06 xs01 crmd: [5676]: info: te_rsc_command: Initiating action 75: notify drbd0:0_post_notify_demote_0 on xs01 (local)
  236. Nov 12 13:38:06 xs01 lrmd: [5673]: info: rsc:drbd0:0 notify[217] (pid 1254)
  237. Nov 12 13:38:06 xs01 lrmd: [5673]: info: RA output: (drbd0:0:notify:stdout)
  238. Nov 12 13:38:06 xs01 lrmd: [5673]: info: operation notify[217] on drbd0:0 for client 5676: pid 1254 exited with return code 0
  239. Nov 12 13:38:06 xs01 crmd: [5676]: info: process_lrm_event: LRM operation drbd0:0_notify_0 (call=217, rc=0, cib-update=0, confirmed=true) ok
  240. Nov 12 13:38:06 xs01 crmd: [5676]: notice: run_graph: ==== Transition 0 (Complete=16, Pending=0, Fired=0, Skipped=12, Incomplete=7, Source=/var/lib/pengine/pe-warn-52.bz2): Stopped
  241. Nov 12 13:38:06 xs01 crmd: [5676]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
  242. Nov 12 13:38:06 xs01 pengine: [5675]: notice: unpack_config: On loss of CCM Quorum: Ignore
  243. Nov 12 13:38:06 xs01 pengine: [5675]: WARN: unpack_rsc_op: Processing failed op drbd0:0_last_failure_0 on xs01: unknown error (1)
  244. Nov 12 13:38:06 xs01 pengine: [5675]: notice: common_apply_stickiness: ms_drbd0 can fail 999992 more times on xs01 before being forced off
  245. Nov 12 13:38:06 xs01 pengine: [5675]: notice: common_apply_stickiness: ms_drbd0 can fail 999992 more times on xs01 before being forced off
  246. Nov 12 13:38:06 xs01 pengine: [5675]: notice: LogActions: Start dlm:0 (xs01 - blocked)
  247. Nov 12 13:38:06 xs01 pengine: [5675]: notice: LogActions: Start o2cb:0 (xs01 - blocked)
  248. Nov 12 13:38:06 xs01 pengine: [5675]: notice: LogActions: Recover drbd0:0 (Slave xs01)
  249. Nov 12 13:38:06 xs01 pengine: [5675]: notice: LogActions: Start vmdisk-pri:0 (xs01 - blocked)
  250. Nov 12 13:38:06 xs01 crmd: [5676]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
  251. Nov 12 13:38:06 xs01 crmd: [5676]: info: do_te_invoke: Processing graph 1 (ref=pe_calc-dc-1352738286-139) derived from /var/lib/pengine/pe-input-898.bz2
  252. Nov 12 13:38:06 xs01 crmd: [5676]: info: te_rsc_command: Initiating action 67: notify drbd0:0_pre_notify_stop_0 on xs01 (local)
  253. Nov 12 13:38:06 xs01 lrmd: [5673]: info: rsc:drbd0:0 notify[218] (pid 1282)
  254. Nov 12 13:38:06 xs01 lrmd: [5673]: info: operation notify[218] on drbd0:0 for client 5676: pid 1282 exited with return code 0
  255. Nov 12 13:38:06 xs01 crmd: [5676]: info: process_lrm_event: LRM operation drbd0:0_notify_0 (call=218, rc=0, cib-update=0, confirmed=true) ok
  256. Nov 12 13:38:06 xs01 crmd: [5676]: info: te_rsc_command: Initiating action 2: stop drbd0:0_stop_0 on xs01 (local)
  257. Nov 12 13:38:06 xs01 lrmd: [5673]: info: rsc:drbd0:0 stop[219] (pid 1304)
  258. Nov 12 13:38:06 xs01 pengine: [5675]: notice: process_pe_message: Transition 1: PEngine Input stored in: /var/lib/pengine/pe-input-898.bz2
  259. Nov 12 13:38:07 xs01 mgmtd: [5677]: info: CIB query: cib
  260. Nov 12 13:38:10 xs01 kernel: [252409.584075] d-con drbd0: PingAck did not arrive in time.
  261. Nov 12 13:38:10 xs01 kernel: [252409.584098] d-con drbd0: peer( Secondary -> Unknown ) conn( Connected -> NetworkFailure ) pdsk( UpToDate -> DUnknown )
  262. Nov 12 13:38:10 xs01 kernel: [252409.584102] d-con drbd0: asender terminated
  263. Nov 12 13:38:10 xs01 kernel: [252409.584104] d-con drbd0: Terminating asender thread
  264. Nov 12 13:38:10 xs01 kernel: [252409.584137] d-con drbd0: conn( NetworkFailure -> Disconnecting )
  265. Nov 12 13:38:10 xs01 kernel: [252409.593995] d-con drbd0: Connection closed
  266. Nov 12 13:38:10 xs01 kernel: [252409.594008] d-con drbd0: conn( Disconnecting -> StandAlone )
  267. Nov 12 13:38:10 xs01 kernel: [252409.594010] d-con drbd0: receiver terminated
  268. Nov 12 13:38:10 xs01 kernel: [252409.594014] d-con drbd0: Terminating receiver thread
  269. Nov 12 13:38:10 xs01 kernel: [252409.594045] block drbd0: disk( UpToDate -> Failed )
  270. Nov 12 13:38:10 xs01 kernel: [252409.649085] block drbd0: disk( Failed -> Diskless )
  271. Nov 12 13:38:10 xs01 kernel: [252409.649269] block drbd0: drbd_bm_resize called with capacity == 0
  272. Nov 12 13:38:10 xs01 kernel: [252409.650545] d-con drbd0: Terminating worker thread
  273. Nov 12 13:38:10 xs01 lrmd: [5673]: info: RA output: (drbd0:0:stop:stdout)
  274. Nov 12 13:38:10 xs01 lrmd: [5673]: info: RA output: (drbd0:0:stop:stdout)
  275. Nov 12 13:38:10 xs01 crm_attribute: [1334]: info: Invoked: crm_attribute -N xs01 -n master-drbd0:0 -l reboot -D
  276. Nov 12 13:38:10 xs01 attrd: [5674]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-drbd0:0 (<null>)
  277. Nov 12 13:38:10 xs01 attrd: [5674]: notice: attrd_perform_update: Sent delete 344: node=xs01, attr=master-drbd0:0, id=<n/a>, set=(null), section=status
  278. Nov 12 13:38:10 xs01 crmd: [5676]: info: abort_transition_graph: te_update_diff:194 - Triggered transition abort (complete=0, tag=transient_attributes, id=xs01, magic=NA, cib=0.442.28) : Transient attribute: removal
  280. Nov 12 13:38:10 xs01 attrd: [5674]: notice: attrd_perform_update: Sent delete -22: node=xs01, attr=master-drbd0:0, id=<n/a>, set=(null), section=status
  281. Nov 12 13:38:10 xs01 lrmd: [5673]: info: RA output: (drbd0:0:stop:stdout)
  282. Nov 12 13:38:10 xs01 lrmd: [5673]: info: operation stop[219] on drbd0:0 for client 5676: pid 1304 exited with return code 0
  283. Nov 12 13:38:10 xs01 crmd: [5676]: info: process_lrm_event: LRM operation drbd0:0_stop_0 (call=219, rc=0, cib-update=237, confirmed=true) ok
  284. Nov 12 13:38:10 xs01 crmd: [5676]: notice: run_graph: ==== Transition 1 (Complete=9, Pending=0, Fired=0, Skipped=6, Incomplete=4, Source=/var/lib/pengine/pe-input-898.bz2): Stopped
  285. Nov 12 13:38:10 xs01 crmd: [5676]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
  286. Nov 12 13:38:10 xs01 pengine: [5675]: notice: unpack_config: On loss of CCM Quorum: Ignore
  287. Nov 12 13:38:10 xs01 pengine: [5675]: WARN: unpack_rsc_op: Processing failed op drbd0:0_last_failure_0 on xs01: unknown error (1)
  288. Nov 12 13:38:10 xs01 pengine: [5675]: notice: common_apply_stickiness: ms_drbd0 can fail 999992 more times on xs01 before being forced off
  289. Nov 12 13:38:10 xs01 pengine: [5675]: notice: common_apply_stickiness: ms_drbd0 can fail 999992 more times on xs01 before being forced off
  290. Nov 12 13:38:10 xs01 pengine: [5675]: notice: LogActions: Start dlm:0 (xs01 - blocked)
  291. Nov 12 13:38:10 xs01 crmd: [5676]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
  292. Nov 12 13:38:10 xs01 pengine: [5675]: notice: LogActions: Start o2cb:0 (xs01 - blocked)
  293. Nov 12 13:38:10 xs01 pengine: [5675]: notice: LogActions: Start drbd0:0 (xs01)
  294. Nov 12 13:38:10 xs01 pengine: [5675]: notice: LogActions: Start vmdisk-pri:0 (xs01 - blocked)
  295. Nov 12 13:38:10 xs01 crmd: [5676]: info: do_te_invoke: Processing graph 2 (ref=pe_calc-dc-1352738290-143) derived from /var/lib/pengine/pe-input-899.bz2
  296. Nov 12 13:38:10 xs01 crmd: [5676]: info: te_rsc_command: Initiating action 23: start drbd0:0_start_0 on xs01 (local)
  297. Nov 12 13:38:10 xs01 lrmd: [5673]: info: rsc:drbd0:0 start[220] (pid 1335)
  298. Nov 12 13:38:10 xs01 pengine: [5675]: notice: process_pe_message: Transition 2: PEngine Input stored in: /var/lib/pengine/pe-input-899.bz2
  299. Nov 12 13:38:10 xs01 lrmd: [5673]: info: RA output: (drbd0:0:start:stdout)
  300. Nov 12 13:38:10 xs01 lrmd: [5673]: info: RA output: (drbd0:0:start:stdout)
  301. Nov 12 13:38:10 xs01 lrmd: [5673]: info: RA output: (drbd0:0:start:stdout)
  302. Nov 12 13:38:11 xs01 mgmtd: [5677]: info: CIB query: cib
  303. Nov 12 13:38:11 xs01 kernel: [252410.920756] d-con drbd0: Starting worker thread (from drbdsetup [1375])
  304. Nov 12 13:38:11 xs01 kernel: [252410.920904] block drbd0: disk( Diskless -> Attaching )
  305. Nov 12 13:38:11 xs01 kernel: [252410.927922] d-con drbd0: Method to ensure write ordering: barrier
  306. Nov 12 13:38:11 xs01 kernel: [252410.927927] block drbd0: max BIO size = 131072
  307. Nov 12 13:38:11 xs01 kernel: [252410.927933] block drbd0: drbd_bm_resize called with capacity == 2339768520
  308. Nov 12 13:38:11 xs01 kernel: [252410.936953] block drbd0: resync bitmap: bits=292471065 words=4569861 pages=8926
  309. Nov 12 13:38:11 xs01 kernel: [252410.936960] block drbd0: size = 1116 GB (1169884260 KB)
  310. Nov 12 13:38:12 xs01 kernel: [252411.190356] block drbd0: bitmap READ of 8926 pages took 63 jiffies
  311. Nov 12 13:38:12 xs01 kernel: [252411.198094] block drbd0: recounting of set bits took additional 2 jiffies
  312. Nov 12 13:38:12 xs01 kernel: [252411.198099] block drbd0: 0 KB (0 bits) marked out-of-sync by on disk bit-map.
  313. Nov 12 13:38:12 xs01 kernel: [252411.198110] block drbd0: disk( Attaching -> Consistent )
  314. Nov 12 13:38:12 xs01 kernel: [252411.198114] block drbd0: attached to UUIDs A5D2F216E6C0E288:0000000000000000:9D9C5DAFE8FBE22D:9D9B5DAFE8FBE22D
  315. Nov 12 13:38:12 xs01 lrmd: [5673]: info: RA output: (drbd0:0:start:stdout)
  316. Nov 12 13:38:12 xs01 kernel: [252411.220935] d-con drbd0: conn( StandAlone -> Unconnected )
  317. Nov 12 13:38:12 xs01 kernel: [252411.220958] d-con drbd0: Starting receiver thread (from drbd_w_drbd0 [1376])
  318. Nov 12 13:38:12 xs01 kernel: [252411.221026] d-con drbd0: receiver (re)started
  319. Nov 12 13:38:12 xs01 kernel: [252411.221040] d-con drbd0: conn( Unconnected -> WFConnection )
  320. Nov 12 13:38:12 xs01 lrmd: [5673]: info: RA output: (drbd0:0:start:stdout)
  321. Nov 12 13:38:12 xs01 attrd: [5674]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-drbd0:0 (5)
  322. Nov 12 13:38:12 xs01 attrd: [5674]: notice: attrd_perform_update: Sent update 348: master-drbd0:0=5
  323. Nov 12 13:38:12 xs01 crmd: [5676]: info: abort_transition_graph: te_update_diff:176 - Triggered transition abort (complete=0, tag=nvpair, id=status-xs01-master-drbd0.0, name=master-drbd0:0, value=5, magic=NA, cib=0.442.30) : Transient attribute: update
  325. Nov 12 13:38:12 xs01 lrmd: [5673]: info: RA output: (drbd0:0:start:stdout)
  326. Nov 12 13:38:12 xs01 lrmd: [5673]: info: operation start[220] on drbd0:0 for client 5676: pid 1335 exited with return code 0
  327. Nov 12 13:38:12 xs01 crmd: [5676]: info: process_lrm_event: LRM operation drbd0:0_start_0 (call=220, rc=0, cib-update=239, confirmed=true) ok
  328. Nov 12 13:38:12 xs01 crmd: [5676]: info: te_rsc_command: Initiating action 65: notify drbd0:0_post_notify_start_0 on xs01 (local)
  329. Nov 12 13:38:12 xs01 lrmd: [5673]: info: rsc:drbd0:0 notify[221] (pid 1392)
  330. Nov 12 13:38:12 xs01 lrmd: [5673]: info: RA output: (drbd0:0:notify:stdout)
  331. Nov 12 13:38:12 xs01 lrmd: [5673]: info: operation notify[221] on drbd0:0 for client 5676: pid 1392 exited with return code 0
  332. Nov 12 13:38:12 xs01 crmd: [5676]: info: process_lrm_event: LRM operation drbd0:0_notify_0 (call=221, rc=0, cib-update=0, confirmed=true) ok
  333. Nov 12 13:38:12 xs01 crmd: [5676]: notice: run_graph: ==== Transition 2 (Complete=9, Pending=0, Fired=0, Skipped=1, Incomplete=0, Source=/var/lib/pengine/pe-input-899.bz2): Stopped
  334. Nov 12 13:38:12 xs01 crmd: [5676]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
  335. Nov 12 13:38:12 xs01 pengine: [5675]: notice: unpack_config: On loss of CCM Quorum: Ignore
  336. Nov 12 13:38:12 xs01 pengine: [5675]: WARN: unpack_rsc_op: Processing failed op drbd0:0_last_failure_0 on xs01: unknown error (1)
  337. Nov 12 13:38:12 xs01 pengine: [5675]: notice: common_apply_stickiness: ms_drbd0 can fail 999992 more times on xs01 before being forced off
  338. Nov 12 13:38:12 xs01 pengine: [5675]: notice: common_apply_stickiness: ms_drbd0 can fail 999992 more times on xs01 before being forced off
  339. Nov 12 13:38:12 xs01 pengine: [5675]: notice: LogActions: Start dlm:0 (xs01)
  340. Nov 12 13:38:12 xs01 pengine: [5675]: notice: LogActions: Start o2cb:0 (xs01)
  341. Nov 12 13:38:12 xs01 pengine: [5675]: notice: LogActions: Promote drbd0:0 (Slave -> Master xs01)
  342. Nov 12 13:38:12 xs01 pengine: [5675]: notice: LogActions: Start vmdisk-pri:0 (xs01)
  343. Nov 12 13:38:12 xs01 crmd: [5676]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
  344. Nov 12 13:38:12 xs01 crmd: [5676]: info: do_te_invoke: Processing graph 3 (ref=pe_calc-dc-1352738292-147) derived from /var/lib/pengine/pe-input-900.bz2
  345. Nov 12 13:38:12 xs01 crmd: [5676]: info: te_rsc_command: Initiating action 70: notify drbd0:0_pre_notify_promote_0 on xs01 (local)
  346. Nov 12 13:38:12 xs01 lrmd: [5673]: info: rsc:drbd0:0 notify[222] (pid 1420)
  347. Nov 12 13:38:12 xs01 pengine: [5675]: notice: process_pe_message: Transition 3: PEngine Input stored in: /var/lib/pengine/pe-input-900.bz2
  348. Nov 12 13:38:12 xs01 lrmd: [5673]: info: operation notify[222] on drbd0:0 for client 5676: pid 1420 exited with return code 0
  349. Nov 12 13:38:12 xs01 crmd: [5676]: info: process_lrm_event: LRM operation drbd0:0_notify_0 (call=222, rc=0, cib-update=0, confirmed=true) ok
  350. Nov 12 13:38:12 xs01 crmd: [5676]: info: te_rsc_command: Initiating action 25: promote drbd0:0_promote_0 on xs01 (local)
  351. Nov 12 13:38:12 xs01 lrmd: [5673]: info: rsc:drbd0:0 promote[223] (pid 1442)
  352. Nov 12 13:38:12 xs01 kernel: [252411.465490] d-con drbd0: helper command: /sbin/drbdadm fence-peer drbd0
  353. Nov 12 13:38:12 xs01 stonith-ng: [5672]: info: initiate_remote_stonith_op: Initiating remote operation off for xs02: edddf437-31b5-4c99-a2d7-562c7780a206
  354. Nov 12 13:38:12 xs01 stonith-ng: [5672]: info: can_fence_host_with_device: stonith-ipmi-xs02 can fence xs02: dynamic-list
  355. Nov 12 13:38:12 xs01 stonith-ng: [5672]: info: call_remote_stonith: Requesting that xs01 perform op off xs02
  356. Nov 12 13:38:12 xs01 stonith-ng: [5672]: info: can_fence_host_with_device: stonith-ipmi-xs02 can fence xs02: dynamic-list
  357. Nov 12 13:38:12 xs01 stonith-ng: [5672]: info: stonith_fence: Found 1 matching devices for 'xs02'
  358. Nov 12 13:38:12 xs01 stonith-ng: [5672]: info: stonith_command: Processed st_fence from xs01: rc=-1
  359. Nov 12 13:38:12 xs01 external/ipmi(stonith-ipmi-xs02)[1478]: [1489]: debug: ipmitool output: Chassis Power Control: Down/Off
  360. Nov 12 13:38:12 xs01 mgmtd: [5677]: info: CIB query: cib
  361. Nov 12 13:38:13 xs01 stonith-ng: [5672]: notice: log_operation: Operation 'off' [1473] (call 0 from a5f1800c-a1eb-4bb5-bd43-a0194b387e79) for host 'xs02' with device 'stonith-ipmi-xs02' returned: 0
  362. Nov 12 13:38:13 xs01 crmd: [5676]: notice: tengine_stonith_notify: Peer xs02 was terminated (off) by xs01 for xs01: OK (ref=edddf437-31b5-4c99-a2d7-562c7780a206)
  363. Nov 12 13:38:13 xs01 stonith-ng: [5672]: info: log_operation: stonith-ipmi-xs02: Performing: stonith -t external/ipmi -T off xs02
  364. Nov 12 13:38:13 xs01 stonith-ng: [5672]: info: log_operation: stonith-ipmi-xs02: success: xs02 0
  365. Nov 12 13:38:13 xs01 stonith-ng: [5672]: notice: remote_op_done: Operation off of xs02 by xs01 for xs01[a5f1800c-a1eb-4bb5-bd43-a0194b387e79]: OK
  366. Nov 12 13:38:13 xs01 stonith_admin-fence-peer.sh[1491]: stonith_admin successfully fenced peer xs02.
  367. Nov 12 13:38:13 xs01 kernel: [252412.685720] d-con drbd0: helper command: /sbin/drbdadm fence-peer drbd0 exit code 7 (0x700)
  368. Nov 12 13:38:13 xs01 kernel: [252412.685725] d-con drbd0: fence-peer helper returned 7 (peer was stonithed)
  369. Nov 12 13:38:13 xs01 kernel: [252412.685742] d-con drbd0: pdsk( DUnknown -> Outdated )
  370. Nov 12 13:38:13 xs01 kernel: [252412.685750] block drbd0: role( Secondary -> Primary ) disk( Consistent -> UpToDate )
  371. Nov 12 13:38:13 xs01 kernel: [252412.692228] block drbd0: new current UUID 7360EBBFA6D33021:A5D2F216E6C0E288:9D9C5DAFE8FBE22D:9D9B5DAFE8FBE22D
  372. Nov 12 13:38:13 xs01 lrmd: [5673]: info: RA output: (drbd0:0:promote:stdout)
  373. Nov 12 13:38:13 xs01 lrmd: [5673]: info: operation promote[223] on drbd0:0 for client 5676: pid 1442 exited with return code 0
  374. Nov 12 13:38:13 xs01 crmd: [5676]: info: process_lrm_event: LRM operation drbd0:0_promote_0 (call=223, rc=0, cib-update=241, confirmed=true) ok
  375. Nov 12 13:38:13 xs01 crmd: [5676]: info: te_rsc_command: Initiating action 71: notify drbd0:0_post_notify_promote_0 on xs01 (local)
  376. Nov 12 13:38:13 xs01 lrmd: [5673]: info: rsc:drbd0:0 notify[224] (pid 1495)
  377. Nov 12 13:38:13 xs01 lrmd: [5673]: info: RA output: (drbd0:0:notify:stdout)
  378. Nov 12 13:38:13 xs01 lrmd: [5673]: info: operation notify[224] on drbd0:0 for client 5676: pid 1495 exited with return code 0
  379. Nov 12 13:38:13 xs01 crmd: [5676]: info: process_lrm_event: LRM operation drbd0:0_notify_0 (call=224, rc=0, cib-update=0, confirmed=true) ok
  380. Nov 12 13:38:13 xs01 crmd: [5676]: info: te_rsc_command: Initiating action 26: monitor drbd0:0_monitor_10000 on xs01 (local)
  381. Nov 12 13:38:13 xs01 lrmd: [5673]: info: rsc:drbd0:0 monitor[225] (pid 1528)
  382. Nov 12 13:38:13 xs01 lrmd: [5673]: info: operation monitor[225] on drbd0:0 for client 5676: pid 1528 exited with return code 8
  383. Nov 12 13:38:13 xs01 crmd: [5676]: info: process_lrm_event: LRM operation drbd0:0_monitor_10000 (call=225, rc=8, cib-update=242, confirmed=false) master
  384. Nov 12 13:38:13 xs01 crmd: [5676]: info: te_rsc_command: Initiating action 7: start dlm:0_start_0 on xs01 (local)
  385. Nov 12 13:38:13 xs01 lrmd: [5673]: info: rsc:dlm:0 start[226] (pid 1555)
  386. Nov 12 13:38:13 xs01 cluster-dlm[1567]: main: dlm_controld master started
  387. Nov 12 13:38:13 xs01 corosync[5664]: [pcmk ] info: pcmk_notify: Enabling node notifications for child 1567 (0x7f6fc8003680)
  388. Nov 12 13:38:13 xs01 cluster-dlm: setup_misc_devices: found /dev/misc/dlm-control minor 58
  389. Nov 12 13:38:13 xs01 cluster-dlm: setup_misc_devices: found /dev/misc/dlm-monitor minor 57
  390. Nov 12 13:38:13 xs01 cluster-dlm: setup_misc_devices: found /dev/misc/dlm_plock minor 56
  391. Nov 12 13:38:13 xs01 cluster-dlm: setup_monitor: /dev/misc/dlm-monitor fd 10
  392. Nov 12 13:38:13 xs01 cluster-dlm: update_comms_nodes: /sys/kernel/config/dlm/cluster/comms: opendir failed: 2
  393. Nov 12 13:38:13 xs01 cluster-dlm: clear_configfs_spaces: /sys/kernel/config/dlm/cluster/spaces: opendir failed: 2
  394. Nov 12 13:38:13 xs01 cluster-dlm: [1567]: info: get_cluster_type: Cluster type is: 'openais'
  395. Nov 12 13:38:13 xs01 cluster-dlm: [1567]: info: init_ais_connection_classic: Creating connection to our Corosync plugin
  396. Nov 12 13:38:13 xs01 cluster-dlm: [1567]: info: init_ais_connection_classic: AIS connection established
  397. Nov 12 13:38:13 xs01 cluster-dlm: [1567]: info: get_ais_nodeid: Server details: id=117506314 uname=xs01 cname=pcmk
  398. Nov 12 13:38:13 xs01 cluster-dlm: detect_protocol: confdb_key_get error 11
  399. Nov 12 13:38:13 xs01 cluster-dlm: setup_cpg_daemon: setup_cpg_daemon 12
  400. Nov 12 13:38:13 xs01 cluster-dlm: log_config: dlm:controld conf 1 1 0 memb 117506314 join 117506314 left
  401. Nov 12 13:38:13 xs01 cluster-dlm: set_protocol: set_protocol member_count 1 propose daemon 1.1.1 kernel 1.1.1
  402. Nov 12 13:38:13 xs01 cluster-dlm: receive_protocol: run protocol from nodeid 117506314
  403. Nov 12 13:38:13 xs01 cluster-dlm: set_protocol: daemon run 1.1.1 max 1.1.1 kernel run 1.1.1 max 1.1.1
  404. Nov 12 13:38:13 xs01 cluster-dlm: [1567]: info: init_ais_connection_once: Connection to 'classic openais (with plugin)': established
  405. Nov 12 13:38:13 xs01 cluster-dlm: [1567]: info: crm_new_peer: Node xs01 now has id: 117506314
  406. Nov 12 13:38:13 xs01 cluster-dlm: [1567]: info: crm_new_peer: Node 117506314 is now known as xs01
  407. Nov 12 13:38:13 xs01 cluster-dlm: setup_plocks: plocks 14
  408. Nov 12 13:38:13 xs01 cluster-dlm: setup_plocks: plock cpg message size: 104 bytes
  409. Nov 12 13:38:13 xs01 cluster-dlm: update_cluster: Processing membership 352
  410. Nov 12 13:38:13 xs01 cluster-dlm: dlm_process_node: Adding address ip(10.1.1.135) to configfs for node 117506314
  411. Nov 12 13:38:13 xs01 cluster-dlm: add_configfs_node: set_configfs_node 117506314 10.1.1.135 local 1
  412. Nov 12 13:38:13 xs01 cluster-dlm: dlm_process_node: Added active node 117506314: born-on=348, last-seen=352, this-event=352, last-event=0
  413. Nov 12 13:38:13 xs01 cluster-dlm: [1567]: info: ais_dispatch_message: Membership 352: quorum still lost
  414. Nov 12 13:38:13 xs01 cluster-dlm: dlm_process_node: Skipped inactive node 134283530: born-on=340, last-seen=0, this-event=352, last-event=0
  415. Nov 12 13:38:13 xs01 cluster-dlm: [1567]: info: crm_update_peer: Node xs01: id=117506314 state=member (new) addr=r(0) ip(10.1.1.135) (new) votes=1 (new) born=348 seen=352 proc=00000000000000000000000000151312 (new)
  417. Nov 12 13:38:13 xs01 cluster-dlm: [1567]: info: crm_new_peer: Node xs02 now has id: 134283530
  418. Nov 12 13:38:13 xs01 cluster-dlm: [1567]: info: crm_new_peer: Node 134283530 is now known as xs02
  419. Nov 12 13:38:13 xs01 cluster-dlm: [1567]: info: crm_update_peer: Node xs02: id=134283530 state=lost (new) addr=r(0) ip(10.1.1.136) votes=1 born=340 seen=0 proc=00000000000000000000000000151312
  420. Nov 12 13:38:14 xs01 mgmtd: [5677]: info: CIB query: cib
  421. Nov 12 13:38:14 xs01 lrmd: [5673]: info: operation start[226] on dlm:0 for client 5676: pid 1555 exited with return code 0
  422. Nov 12 13:38:14 xs01 crmd: [5676]: info: process_lrm_event: LRM operation dlm:0_start_0 (call=226, rc=0, cib-update=243, confirmed=true) ok
  423. Nov 12 13:38:14 xs01 crmd: [5676]: info: te_rsc_command: Initiating action 8: monitor dlm:0_monitor_10000 on xs01 (local)
  424. Nov 12 13:38:14 xs01 lrmd: [5673]: info: rsc:dlm:0 monitor[227] (pid 1578)
  425. Nov 12 13:38:15 xs01 lrmd: [5673]: info: operation monitor[227] on dlm:0 for client 5676: pid 1578 exited with return code 0
  426. Nov 12 13:38:15 xs01 crmd: [5676]: info: process_lrm_event: LRM operation dlm:0_monitor_10000 (call=227, rc=0, cib-update=244, confirmed=false) ok
  427. Nov 12 13:38:15 xs01 crmd: [5676]: info: te_rsc_command: Initiating action 9: start o2cb:0_start_0 on xs01 (local)
  428. Nov 12 13:38:15 xs01 lrmd: [5673]: info: rsc:o2cb:0 start[228] (pid 1587)
  429. Nov 12 13:38:15 xs01 o2cb(o2cb:0)[1587]: [1598]: INFO: Stack glue driver not loaded
  430. Nov 12 13:38:15 xs01 o2cb(o2cb:0)[1587]: [1600]: INFO: Starting o2cb:0
  431. Nov 12 13:38:15 xs01 kernel: [252414.058576] ocfs2: Registered cluster interface user
  432. Nov 12 13:38:15 xs01 kernel: [252414.073168] OCFS2 Node Manager 1.5.0
  433. Nov 12 13:38:15 xs01 kernel: [252414.091556] OCFS2 1.5.0
  434. Nov 12 13:38:15 xs01 ocfs2_controld.pcmk: Core dumps enabled: /var/lib/openais
  435. Nov 12 13:38:15 xs01 corosync[5664]: [pcmk ] info: pcmk_notify: Enabling node notifications for child 1612 (0x7f6fc8013750)
  436. Nov 12 13:38:15 xs01 ocfs2_controld: Cluster connection established. Local node id: 117506314
  437. Nov 12 13:38:15 xs01 ocfs2_controld: Added Pacemaker as client 1 with fd 7
  438. Nov 12 13:38:15 xs01 ocfs2_controld: Initializing CKPT service (try 1)
  439. Nov 12 13:38:15 xs01 ocfs2_controld: Connected to CKPT service with handle 0x327b23c600000000
  440. Nov 12 13:38:15 xs01 ocfs2_controld: Opening checkpoint "ocfs2:controld:0701010a" (try 1)
  441. Nov 12 13:38:15 xs01 ocfs2_controld: Opened checkpoint "ocfs2:controld:0701010a" with handle 0x6633487300000000
  442. Nov 12 13:38:15 xs01 ocfs2_controld: Writing to section "daemon_max_protocol" on checkpoint "ocfs2:controld:0701010a" (try 1)
  443. Nov 12 13:38:15 xs01 ocfs2_controld: Creating section "daemon_max_protocol" on checkpoint "ocfs2:controld:0701010a" (try 1)
  444. Nov 12 13:38:15 xs01 ocfs2_controld: Created section "daemon_max_protocol" on checkpoint "ocfs2:controld:0701010a"
  445. Nov 12 13:38:15 xs01 ocfs2_controld: Writing to section "ocfs2_max_protocol" on checkpoint "ocfs2:controld:0701010a" (try 1)
  446. Nov 12 13:38:15 xs01 ocfs2_controld: Creating section "ocfs2_max_protocol" on checkpoint "ocfs2:controld:0701010a" (try 1)
  447. Nov 12 13:38:15 xs01 ocfs2_controld: Created section "ocfs2_max_protocol" on checkpoint "ocfs2:controld:0701010a"
  448. Nov 12 13:38:15 xs01 ocfs2_controld: [1612]: info: get_cluster_type: Cluster type is: 'openais'
  449. Nov 12 13:38:15 xs01 ocfs2_controld: [1612]: info: init_ais_connection_classic: Creating connection to our Corosync plugin
  450. Nov 12 13:38:15 xs01 ocfs2_controld: [1612]: info: init_ais_connection_classic: AIS connection established
  451. Nov 12 13:38:15 xs01 ocfs2_controld: [1612]: info: get_ais_nodeid: Server details: id=117506314 uname=xs01 cname=pcmk
  452. Nov 12 13:38:15 xs01 ocfs2_controld: [1612]: info: init_ais_connection_once: Connection to 'classic openais (with plugin)': established
  453. Nov 12 13:38:15 xs01 ocfs2_controld: [1612]: info: crm_new_peer: Node xs01 now has id: 117506314
  454. Nov 12 13:38:15 xs01 ocfs2_controld: [1612]: info: crm_new_peer: Node 117506314 is now known as xs01
  455. Nov 12 13:38:15 xs01 ocfs2_controld: Starting join for group "ocfs2:controld"
  456. Nov 12 13:38:15 xs01 ocfs2_controld: cpg_join succeeded
  457. Nov 12 13:38:15 xs01 ocfs2_controld: setup done
  458. Nov 12 13:38:15 xs01 ocfs2_controld: confchg called
  459. Nov 12 13:38:15 xs01 ocfs2_controld: ocfs2_controld (group "ocfs2:controld") confchg: members 1, left 0, joined 1
  460. Nov 12 13:38:15 xs01 ocfs2_controld: CPG is live, we are the first daemon
  461. Nov 12 13:38:15 xs01 ocfs2_controld: Opening checkpoint "ocfs2:controld" (try 1)
  462. Nov 12 13:38:15 xs01 ocfs2_controld: Opened checkpoint "ocfs2:controld" with handle 0x194e92eb00000001
  463. Nov 12 13:38:15 xs01 ocfs2_controld: Writing to section "daemon_protocol" on checkpoint "ocfs2:controld" (try 1)
  464. Nov 12 13:38:15 xs01 ocfs2_controld: Creating section "daemon_protocol" on checkpoint "ocfs2:controld" (try 1)
  465. Nov 12 13:38:15 xs01 ocfs2_controld: Created section "daemon_protocol" on checkpoint "ocfs2:controld"
  466. Nov 12 13:38:15 xs01 ocfs2_controld: Writing to section "ocfs2_protocol" on checkpoint "ocfs2:controld" (try 1)
  467. Nov 12 13:38:15 xs01 ocfs2_controld: Creating section "ocfs2_protocol" on checkpoint "ocfs2:controld" (try 1)
  468. Nov 12 13:38:15 xs01 ocfs2_controld: Created section "ocfs2_protocol" on checkpoint "ocfs2:controld"
  469. Nov 12 13:38:15 xs01 ocfs2_controld: Daemon protocol is 1.0
  470. Nov 12 13:38:15 xs01 ocfs2_controld: fs protocol is 1.0
  471. Nov 12 13:38:15 xs01 ocfs2_controld: Connecting to dlm_controld
  472. Nov 12 13:38:15 xs01 ocfs2_controld: Opening control device
  473. Nov 12 13:38:15 xs01 cluster-dlm: process_listener: client connection 5 fd 15
  474. Nov 12 13:38:15 xs01 ocfs2_controld: Starting to listen for mounters
  475. Nov 12 13:38:15 xs01 ocfs2_controld: new listening connection 4
  476. Nov 12 13:38:15 xs01 ocfs2_controld: [1612]: info: ais_dispatch_message: Membership 352: quorum still lost
  477. Nov 12 13:38:15 xs01 ocfs2_controld: [1612]: info: crm_update_peer: Node xs01: id=117506314 state=member (new) addr=r(0) ip(10.1.1.135) (new) votes=1 (new) born=348 seen=352 proc=00000000000000000000000000151312 (new)
  479. Nov 12 13:38:15 xs01 ocfs2_controld: [1612]: info: crm_new_peer: Node xs02 now has id: 134283530
  480. Nov 12 13:38:15 xs01 ocfs2_controld: [1612]: info: crm_new_peer: Node 134283530 is now known as xs02
  481. Nov 12 13:38:15 xs01 ocfs2_controld: [1612]: info: crm_update_peer: Node xs02: id=134283530 state=lost (new) addr=r(0) ip(10.1.1.136) votes=1 born=340 seen=0 proc=00000000000000000000000000151312
  482. Nov 12 13:38:15 xs01 mgmtd: [5677]: info: CIB query: cib
  483. Nov 12 13:38:17 xs01 lrmd: [5673]: info: operation start[228] on o2cb:0 for client 5676: pid 1587 exited with return code 0
  484. Nov 12 13:38:17 xs01 crmd: [5676]: info: process_lrm_event: LRM operation o2cb:0_start_0 (call=228, rc=0, cib-update=245, confirmed=true) ok
  485. Nov 12 13:38:17 xs01 crmd: [5676]: info: te_rsc_command: Initiating action 10: monitor o2cb:0_monitor_10000 on xs01 (local)
  486. Nov 12 13:38:17 xs01 lrmd: [5673]: info: rsc:o2cb:0 monitor[229] (pid 1624)
  487. Nov 12 13:38:17 xs01 lrmd: [5673]: info: operation monitor[229] on o2cb:0 for client 5676: pid 1624 exited with return code 0
  488. Nov 12 13:38:17 xs01 crmd: [5676]: info: process_lrm_event: LRM operation o2cb:0_monitor_10000 (call=229, rc=0, cib-update=246, confirmed=false) ok
  489. Nov 12 13:38:17 xs01 crmd: [5676]: info: te_rsc_command: Initiating action 51: start vmdisk-pri:0_start_0 on xs01 (local)
  490. Nov 12 13:38:17 xs01 lrmd: [5673]: info: rsc:vmdisk-pri:0 start[230] (pid 1638)
  491. Nov 12 13:38:17 xs01 Filesystem(vmdisk-pri:0)[1638]: [1675]: INFO: Running start for /dev/drbd/by-res/drbd0 on /vmdisk
  492. Nov 12 13:38:17 xs01 ocfs2_controld: new client connection 5
  493. Nov 12 13:38:17 xs01 ocfs2_controld: client msg
  494. Nov 12 13:38:17 xs01 ocfs2_controld: client message 0 from 5: MOUNT
  495. Nov 12 13:38:17 xs01 ocfs2_controld: start_mount: uuid "B0CE632E636744EDA5011D6501E78990", device "/dev/drbd0", service "ocfs2"
  496. Nov 12 13:38:17 xs01 ocfs2_controld: Adding service "ocfs2" to device "/dev/drbd0" uuid "B0CE632E636744EDA5011D6501E78990"
  497. Nov 12 13:38:17 xs01 ocfs2_controld: Starting join for group "ocfs2:B0CE632E636744EDA5011D6501E78990"
  498. Nov 12 13:38:17 xs01 ocfs2_controld: cpg_join succeeded
  499. Nov 12 13:38:17 xs01 ocfs2_controld: start_mount returns 0
  500. Nov 12 13:38:17 xs01 ocfs2_controld: confchg called
  501. Nov 12 13:38:17 xs01 ocfs2_controld: group "ocfs2:B0CE632E636744EDA5011D6501E78990" confchg: members 1, left 0, joined 1
  502. Nov 12 13:38:17 xs01 ocfs2_controld: Node 117506314 joins group ocfs2:B0CE632E636744EDA5011D6501E78990
  503. Nov 12 13:38:17 xs01 ocfs2_controld: This node joins group ocfs2:B0CE632E636744EDA5011D6501E78990
  504. Nov 12 13:38:17 xs01 ocfs2_controld: Filling node 117506314 to group ocfs2:B0CE632E636744EDA5011D6501E78990
  505. Nov 12 13:38:17 xs01 ocfs2_controld: Registering mountgroup B0CE632E636744EDA5011D6501E78990 with dlm_controld
  506. Nov 12 13:38:17 xs01 ocfs2_controld: Registering "B0CE632E636744EDA5011D6501E78990" with dlm_controld
  507. Nov 12 13:38:17 xs01 ocfs2_controld: message from dlmcontrol
  508. Nov 12 13:38:17 xs01 ocfs2_controld: Registration of "B0CE632E636744EDA5011D6501E78990" complete
  509. Nov 12 13:38:17 xs01 ocfs2_controld: Mountgroup B0CE632E636744EDA5011D6501E78990 successfully registered with dlm_controld
  510. Nov 12 13:38:17 xs01 ocfs2_controld: notify_mount_client sending 0 "OK"
  511. Nov 12 13:38:17 xs01 ocfs2_controld: Notified client: 1
  512. Nov 12 13:38:17 xs01 cluster-dlm: process_uevent: uevent: add@/kernel/dlm/B0CE632E636744EDA5011D6501E78990
  513. Nov 12 13:38:17 xs01 cluster-dlm: process_uevent: kernel: add@ B0CE632E636744EDA5011D6501E78990
  514. Nov 12 13:38:17 xs01 cluster-dlm: process_uevent: uevent: online@/kernel/dlm/B0CE632E636744EDA5011D6501E78990
  515. Nov 12 13:38:17 xs01 cluster-dlm: process_uevent: kernel: online@ B0CE632E636744EDA5011D6501E78990
  516. Nov 12 13:38:17 xs01 kernel: [252416.308668] dlm: Using TCP for communications
  517. Nov 12 13:38:17 xs01 cluster-dlm: log_config: dlm:ls:B0CE632E636744EDA5011D6501E78990 conf 1 1 0 memb 117506314 join 117506314 left
  518. Nov 12 13:38:17 xs01 cluster-dlm: add_change: B0CE632E636744EDA5011D6501E78990 add_change cg 1 joined nodeid 117506314
  519. Nov 12 13:38:17 xs01 cluster-dlm: add_change: B0CE632E636744EDA5011D6501E78990 add_change cg 1 we joined
  520. Nov 12 13:38:17 xs01 cluster-dlm: add_change: B0CE632E636744EDA5011D6501E78990 add_change cg 1 counts member 1 joined 1 remove 0 failed 0
  521. Nov 12 13:38:17 xs01 cluster-dlm: check_fencing_done: B0CE632E636744EDA5011D6501E78990 check_fencing done
  522. Nov 12 13:38:17 xs01 cluster-dlm: check_quorum_done: B0CE632E636744EDA5011D6501E78990 check_quorum disabled
  523. Nov 12 13:38:17 xs01 cluster-dlm: check_fs_done: B0CE632E636744EDA5011D6501E78990 check_fs done
  524. Nov 12 13:38:17 xs01 cluster-dlm: send_info: B0CE632E636744EDA5011D6501E78990 send_start cg 1 flags 1 data2 0 counts 0 1 1 0 0
  525. Nov 12 13:38:17 xs01 cluster-dlm: receive_start: B0CE632E636744EDA5011D6501E78990 receive_start 117506314:1 len 76
  526. Nov 12 13:38:17 xs01 cluster-dlm: match_change: B0CE632E636744EDA5011D6501E78990 match_change 117506314:1 matches cg 1
  527. Nov 12 13:38:17 xs01 cluster-dlm: wait_messages_done: B0CE632E636744EDA5011D6501E78990 wait_messages cg 1 got all 1
  528. Nov 12 13:38:17 xs01 cluster-dlm: start_kernel: B0CE632E636744EDA5011D6501E78990 start_kernel cg 1 member_count 1
  529. Nov 12 13:38:17 xs01 cluster-dlm: do_sysfs: write "3192944163" to "/sys/kernel/dlm/B0CE632E636744EDA5011D6501E78990/id"
  530. Nov 12 13:38:17 xs01 cluster-dlm: set_configfs_members: set_members mkdir "/sys/kernel/config/dlm/cluster/spaces/B0CE632E636744EDA5011D6501E78990/nodes/117506314"
  531. Nov 12 13:38:17 xs01 cluster-dlm: do_sysfs: write "1" to "/sys/kernel/dlm/B0CE632E636744EDA5011D6501E78990/control"
  532. Nov 12 13:38:17 xs01 cluster-dlm: do_sysfs: write "0" to "/sys/kernel/dlm/B0CE632E636744EDA5011D6501E78990/event_done"
  533. Nov 12 13:38:17 xs01 ocfs2_controld: client msg
  534. Nov 12 13:38:17 xs01 ocfs2_controld: client message 1 from 5: MRESULT
  535. Nov 12 13:38:17 xs01 ocfs2_controld: complete_mount: uuid "B0CE632E636744EDA5011D6501E78990", errcode "0", service "ocfs2"
  536. Nov 12 13:38:17 xs01 ocfs2_controld: client msg
  537. Nov 12 13:38:17 xs01 ocfs2_controld: client 5 fd 14 dead
  538. Nov 12 13:38:17 xs01 ocfs2_controld: client 5 fd -1 dead
  539. Nov 12 13:38:17 xs01 kernel: [252416.451333] ocfs2: Mounting device (147,0) on (node 1175063, slot 0) with ordered data mode.
  540. Nov 12 13:38:17 xs01 ocfs2_hb_ctl[1703]: ocfs2_hb_ctl /sbin/ocfs2_hb_ctl -P -d /dev/drbd0
  541. Nov 12 13:38:17 xs01 ocfs2_controld: new client connection 5
  542. Nov 12 13:38:17 xs01 ocfs2_controld: client msg
  543. Nov 12 13:38:17 xs01 ocfs2_controld: client message 6 from 5: LISTCLUSTERS
  544. Nov 12 13:38:17 xs01 ocfs2_controld: client msg
  545. Nov 12 13:38:17 xs01 ocfs2_controld: client 5 fd 14 dead
  546. Nov 12 13:38:17 xs01 ocfs2_controld: client 5 fd -1 dead
  547. Nov 12 13:38:17 xs01 lrmd: [5673]: info: operation start[230] on vmdisk-pri:0 for client 5676: pid 1638 exited with return code 0
  548. Nov 12 13:38:17 xs01 crmd: [5676]: info: process_lrm_event: LRM operation vmdisk-pri:0_start_0 (call=230, rc=0, cib-update=247, confirmed=true) ok
  549. Nov 12 13:38:17 xs01 crmd: [5676]: info: te_rsc_command: Initiating action 52: monitor vmdisk-pri:0_monitor_20000 on xs01 (local)
  550. Nov 12 13:38:17 xs01 lrmd: [5673]: info: rsc:vmdisk-pri:0 monitor[231] (pid 1710)
  551. Nov 12 13:38:17 xs01 lrmd: [5673]: info: operation monitor[231] on vmdisk-pri:0 for client 5676: pid 1710 exited with return code 0
  552. Nov 12 13:38:17 xs01 crmd: [5676]: info: process_lrm_event: LRM operation vmdisk-pri:0_monitor_20000 (call=231, rc=0, cib-update=248, confirmed=false) ok
  553. Nov 12 13:38:17 xs01 crmd: [5676]: notice: run_graph: ==== Transition 3 (Complete=23, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-900.bz2): Complete
  554. Nov 12 13:38:17 xs01 crmd: [5676]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
  555. Nov 12 13:38:18 xs01 mgmtd: [5677]: info: CIB query: cib