- Nov 12 13:37:46 xs01 mgmtd: [5677]: info: CIB replace: master
- Nov 12 13:37:46 xs01 lrmd: [5673]: info: rsc:drbd0:0 start[210] (pid 964)
- Nov 12 13:37:46 xs01 lrmd: [5673]: info: RA output: (drbd0:0:start:stdout)
- Nov 12 13:37:46 xs01 lrmd: [5673]: info: RA output: (drbd0:0:start:stdout)
- Nov 12 13:37:46 xs01 lrmd: [5673]: info: RA output: (drbd0:0:start:stdout)
- Nov 12 13:37:46 xs01 mgmtd: [5677]: info: CIB query: cib
- Nov 12 13:37:47 xs01 kernel: [252386.645433] d-con drbd0: Starting worker thread (from drbdsetup [1004])
- Nov 12 13:37:47 xs01 kernel: [252386.645602] block drbd0: disk( Diskless -> Attaching )
- Nov 12 13:37:47 xs01 kernel: [252386.650430] d-con drbd0: Method to ensure write ordering: barrier
- Nov 12 13:37:47 xs01 kernel: [252386.650437] block drbd0: max BIO size = 131072
- Nov 12 13:37:47 xs01 kernel: [252386.650444] block drbd0: drbd_bm_resize called with capacity == 2339768520
- Nov 12 13:37:47 xs01 kernel: [252386.660184] block drbd0: resync bitmap: bits=292471065 words=4569861 pages=8926
- Nov 12 13:37:47 xs01 kernel: [252386.660195] block drbd0: size = 1116 GB (1169884260 KB)
- Nov 12 13:37:47 xs01 kernel: [252386.912837] block drbd0: bitmap READ of 8926 pages took 63 jiffies
- Nov 12 13:37:47 xs01 kernel: [252386.921164] block drbd0: recounting of set bits took additional 2 jiffies
- Nov 12 13:37:47 xs01 kernel: [252386.921169] block drbd0: 0 KB (0 bits) marked out-of-sync by on disk bit-map.
- Nov 12 13:37:47 xs01 kernel: [252386.921181] block drbd0: disk( Attaching -> Consistent )
- Nov 12 13:37:47 xs01 kernel: [252386.921185] block drbd0: attached to UUIDs A5D2F216E6C0E288:0000000000000000:9D9C5DAFE8FBE22D:9D9B5DAFE8FBE22D
- Nov 12 13:37:47 xs01 lrmd: [5673]: info: RA output: (drbd0:0:start:stdout)
- Nov 12 13:37:47 xs01 lrmd: [5673]: info: RA output: (drbd0:0:start:stdout)
- Nov 12 13:37:47 xs01 kernel: [252386.943022] d-con drbd0: conn( StandAlone -> Unconnected )
- Nov 12 13:37:47 xs01 kernel: [252386.943045] d-con drbd0: Starting receiver thread (from drbd_w_drbd0 [1005])
- Nov 12 13:37:47 xs01 kernel: [252386.943121] d-con drbd0: receiver (re)started
- Nov 12 13:37:47 xs01 kernel: [252386.943138] d-con drbd0: conn( Unconnected -> WFConnection )
- Nov 12 13:37:47 xs01 attrd: [5674]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-drbd0:0 (5)
- Nov 12 13:37:47 xs01 attrd: [5674]: notice: attrd_perform_update: Sent update 331: master-drbd0:0=5
- Nov 12 13:37:47 xs01 lrmd: [5673]: info: RA output: (drbd0:0:start:stdout)
- Nov 12 13:37:47 xs01 lrmd: [5673]: info: operation start[210] on drbd0:0 for client 5676: pid 964 exited with return code 0
- Nov 12 13:37:48 xs01 crmd: [5676]: info: process_lrm_event: LRM operation drbd0:0_start_0 (call=210, rc=0, cib-update=208, confirmed=true) ok
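The start operation brings drbd0 from Diskless through Attaching to Consistent and leaves the connection waiting in WFConnection. A read-only sketch of how to confirm what the resource agent just set up (the resource name drbd0 is taken from the log; run on xs01):

    drbdadm cstate drbd0   # connection state, e.g. WFConnection
    drbdadm dstate drbd0   # disk state, e.g. Consistent/DUnknown
    drbdadm role drbd0     # replication role, e.g. Secondary/Unknown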
- Nov 12 13:37:48 xs01 lrmd: [5673]: info: rsc:drbd0:0 notify[211] (pid 1028)
- Nov 12 13:37:48 xs01 lrmd: [5673]: info: RA output: (drbd0:0:notify:stdout)
- Nov 12 13:37:48 xs01 lrmd: [5673]: info: operation notify[211] on drbd0:0 for client 5676: pid 1028 exited with return code 0
- Nov 12 13:37:48 xs01 crmd: [5676]: info: process_lrm_event: LRM operation drbd0:0_notify_0 (call=211, rc=0, cib-update=0, confirmed=true) ok
- Nov 12 13:37:48 xs01 mgmtd: [5677]: info: CIB query: cib
- Nov 12 13:37:49 xs01 lrmd: [5673]: info: rsc:drbd0:0 notify[212] (pid 1056)
- Nov 12 13:37:49 xs01 lrmd: [5673]: info: RA output: (drbd0:0:notify:stdout)
- Nov 12 13:37:49 xs01 lrmd: [5673]: info: operation notify[212] on drbd0:0 for client 5676: pid 1056 exited with return code 0
- Nov 12 13:37:49 xs01 crmd: [5676]: info: process_lrm_event: LRM operation drbd0:0_notify_0 (call=212, rc=0, cib-update=0, confirmed=true) ok
- Nov 12 13:37:49 xs01 lrmd: [5673]: info: rsc:drbd0:0 notify[213] (pid 1084)
- Nov 12 13:37:49 xs01 lrmd: [5673]: info: operation notify[213] on drbd0:0 for client 5676: pid 1084 exited with return code 0
- Nov 12 13:37:49 xs01 crmd: [5676]: info: process_lrm_event: LRM operation drbd0:0_notify_0 (call=213, rc=0, cib-update=0, confirmed=true) ok
- Nov 12 13:37:49 xs01 lrmd: [5673]: info: rsc:drbd0:0 promote[214] (pid 1107)
- Nov 12 13:37:50 xs01 kernel: [252389.003857] d-con drbd0: helper command: /sbin/drbdadm fence-peer drbd0
- Nov 12 13:37:50 xs01 stonith-ng: [5672]: info: initiate_remote_stonith_op: Initiating remote operation off for xs02: 9a4553b1-ce00-452c-83f2-323babe09022
- Nov 12 13:37:50 xs01 stonith-ng: [5672]: info: can_fence_host_with_device: Refreshing port list for stonith-ipmi-xs02
- Nov 12 13:37:50 xs01 stonith-ng: [5672]: info: can_fence_host_with_device: stonith-ipmi-xs02 can fence xs02: dynamic-list
- Nov 12 13:37:50 xs01 stonith-ng: [5672]: info: call_remote_stonith: Requesting that xs01 perform op off xs02
- Nov 12 13:37:50 xs01 stonith-ng: [5672]: info: can_fence_host_with_device: stonith-ipmi-xs02 can fence xs02: dynamic-list
- Nov 12 13:37:50 xs01 stonith-ng: [5672]: info: stonith_fence: Found 1 matching devices for 'xs02'
- Nov 12 13:37:50 xs01 stonith-ng: [5672]: info: stonith_command: Processed st_fence from xs01: rc=-1
- Nov 12 13:37:50 xs01 stonith-ng: [5672]: info: crm_new_peer: Node xs02 now has id: 134283530
- Nov 12 13:37:50 xs01 stonith-ng: [5672]: info: crm_new_peer: Node 134283530 is now known as xs02
- Nov 12 13:37:50 xs01 kernel: [252389.088093] d-con drbd0: Handshake successful: Agreed network protocol version 100
- Nov 12 13:37:50 xs01 kernel: [252389.088262] d-con drbd0: Peer authenticated using 20 bytes HMAC
- Nov 12 13:37:50 xs01 kernel: [252389.088298] d-con drbd0: conn( WFConnection -> WFReportParams )
- Nov 12 13:37:50 xs01 kernel: [252389.088301] d-con drbd0: Starting asender thread (from drbd_r_drbd0 [1017])
- Nov 12 13:37:50 xs01 mgmtd: [5677]: info: CIB query: cib
- Nov 12 13:37:50 xs01 external/ipmi(stonith-ipmi-xs02)[1152]: [1163]: debug: ipmitool output: Chassis Power Control: Down/Off
- Nov 12 13:37:51 xs01 stonith-ng: [5672]: notice: log_operation: Operation 'off' [1146] (call 0 from 02a982a9-d2b2-419d-b2dd-18faa40352ef) for host 'xs02' with device 'stonith-ipmi-xs02' returned: 0
- Nov 12 13:37:51 xs01 crmd: [5676]: notice: tengine_stonith_notify: Peer xs02 was terminated (off) by xs01 for xs01: OK (ref=9a4553b1-ce00-452c-83f2-323babe09022)
- Nov 12 13:37:51 xs01 stonith-ng: [5672]: info: log_operation: stonith-ipmi-xs02: Performing: stonith -t external/ipmi -T off xs02
- Nov 12 13:37:51 xs01 stonith-ng: [5672]: info: log_operation: stonith-ipmi-xs02: success: xs02 0
- Nov 12 13:37:51 xs01 stonith-ng: [5672]: notice: remote_op_done: Operation off of xs02 by xs01 for xs01[02a982a9-d2b2-419d-b2dd-18faa40352ef]: OK
- Nov 12 13:37:51 xs01 stonith_admin-fence-peer.sh[1165]: stonith_admin successfully fenced peer xs02.
- Nov 12 13:37:51 xs01 kernel: [252390.276555] d-con drbd0: helper command: /sbin/drbdadm fence-peer drbd0 exit code 7 (0x700)
- Nov 12 13:37:51 xs01 kernel: [252390.276560] d-con drbd0: fence-peer helper returned 7 (peer was stonithed)
- Nov 12 13:37:51 xs01 kernel: [252390.276567] d-con drbd0: Expected cstate < C_WF_REPORT_PARAMS
- Nov 12 13:37:51 xs01 kernel: [252390.276570] d-con drbd0: Expected cstate < C_WF_REPORT_PARAMS
- Nov 12 13:37:51 xs01 kernel: [252390.276572] d-con drbd0: Expected cstate < C_WF_REPORT_PARAMS
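The promote at 13:37:49 triggers DRBD's fence-peer handler, which powers xs02 off over IPMI and returns exit code 7 ("peer was stonithed"). By the time the handler returns, though, the replication link has already completed its handshake (WFReportParams at 13:37:50), so the kernel discards the fencing result; that is what the three "Expected cstate < C_WF_REPORT_PARAMS" lines mean. The wiring that produces this sequence looks roughly like the drbd.conf fragment below. This is a sketch: the resource name and helper script name are taken from the log, while the helper's directory and the section layout (DRBD 8.4 conventions, matching "network protocol version 100" above) are assumptions to verify against the real /etc/drbd.conf:

    resource drbd0 {
      disk {
        fencing resource-and-stonith;   # freeze I/O and call fence-peer when the peer is lost
      }
      handlers {
        # exit code 7 = "peer was stonithed", as logged above
        fence-peer "/usr/lib/drbd/stonith_admin-fence-peer.sh";   # directory assumed
      }
    }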
- Nov 12 13:37:51 xs01 lrmd: [5673]: info: RA output: (drbd0:0:promote:stderr) 0: State change failed: (-2) Need access to UpToDate data
- Nov 12 13:37:51 xs01 lrmd: [5673]: info: RA output: (drbd0:0:promote:stderr) Command 'drbdsetup primary 0
- Nov 12 13:37:51 xs01 lrmd: [5673]: info: RA output: (drbd0:0:promote:stderr) ' terminated with exit code 17
- Nov 12 13:37:51 xs01 kernel: [252390.276599] block drbd0: drbd_sync_handshake:
- Nov 12 13:37:51 xs01 kernel: [252390.276604] block drbd0: self A5D2F216E6C0E288:0000000000000000:9D9C5DAFE8FBE22D:9D9B5DAFE8FBE22D bits:0 flags:0
- Nov 12 13:37:51 xs01 kernel: [252390.276608] block drbd0: peer A5D2F216E6C0E288:0000000000000000:9D9C5DAFE8FBE22C:9D9B5DAFE8FBE22D bits:0 flags:0
- Nov 12 13:37:51 xs01 kernel: [252390.276612] block drbd0: uuid_compare()=0 by rule 40
- Nov 12 13:37:51 xs01 kernel: [252390.276622] block drbd0: peer( Unknown -> Secondary ) conn( WFReportParams -> Connected ) disk( Consistent -> UpToDate ) pdsk( DUnknown -> UpToDate )
- Nov 12 13:37:51 xs01 drbd(drbd0:0)[1107]: [1167]: ERROR: drbd0: Called drbdadm -c /etc/drbd.conf primary drbd0
- Nov 12 13:37:51 xs01 drbd(drbd0:0)[1107]: [1169]: ERROR: drbd0: Exit code 17
- Nov 12 13:37:51 xs01 drbd(drbd0:0)[1107]: [1171]: ERROR: drbd0: Command output:
- Nov 12 13:37:51 xs01 lrmd: [5673]: info: RA output: (drbd0:0:promote:stdout)
- Nov 12 13:37:51 xs01 drbd(drbd0:0)[1107]: [1173]: CRIT: Refusing to be promoted to Primary without UpToDate data
- Nov 12 13:37:51 xs01 corosync[5664]: [pcmk ] info: update_member: Node xs02 now has process list: 00000000000000000000000000151112 (1380626)
- Nov 12 13:37:51 xs01 lrmd: [5673]: info: operation promote[214] on drbd0:0 for client 5676: pid 1107 exited with return code 1
- Nov 12 13:37:51 xs01 crmd: [5676]: info: process_lrm_event: LRM operation drbd0:0_promote_0 (call=214, rc=1, cib-update=209, confirmed=true) unknown error
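The promote therefore fails with rc=1: drbdsetup primary exited 17 because the local disk was still only Consistent, and with the fencing result discarded DRBD cannot rule out that the (briefly reconnected) peer holds newer data, hence "Refusing to be promoted to Primary without UpToDate data". The refusal is reproducible by hand (sketch; never force-promote a Consistent node):

    drbdadm dstate drbd0    # Consistent/DUnknown here, so promotion is refused
    drbdadm primary drbd0   # fails with exit code 17, matching the RA stderr above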
- Nov 12 13:37:51 xs01 corosync[5664]: [pcmk ] info: send_member_notification: Sending membership update 348 to 2 children
- Nov 12 13:37:51 xs01 cib: [5671]: info: ais_dispatch_message: Membership 348: quorum retained
- Nov 12 13:37:51 xs01 corosync[5664]: [pcmk ] info: update_member: Node xs02 now has process list: 00000000000000000000000000151312 (1381138)
- Nov 12 13:37:51 xs01 crmd: [5676]: info: ais_dispatch_message: Membership 348: quorum retained
- Nov 12 13:37:51 xs01 cib: [5671]: info: ais_dispatch_message: Membership 348: quorum retained
- Nov 12 13:37:51 xs01 corosync[5664]: [pcmk ] info: send_member_notification: Sending membership update 348 to 2 children
- Nov 12 13:37:51 xs01 crmd: [5676]: notice: crmd_peer_update: Status update: Client xs02/crmd now has status [offline] (DC=xs02)
- Nov 12 13:37:51 xs01 crmd: [5676]: info: crmd_peer_update: Got client status callback - our DC is dead
- Nov 12 13:37:51 xs01 crmd: [5676]: notice: do_state_transition: State transition S_NOT_DC -> S_ELECTION [ input=I_ELECTION cause=C_CRMD_STATUS_CALLBACK origin=crmd_peer_update ]
- Nov 12 13:37:51 xs01 crmd: [5676]: info: ais_dispatch_message: Membership 348: quorum retained
- Nov 12 13:37:51 xs01 crmd: [5676]: notice: crmd_peer_update: Status update: Client xs02/crmd now has status [online] (DC=<null>)
- Nov 12 13:37:55 xs01 kernel: [252394.150191] bnx2 0000:0b:00.1: eth1: NIC Copper Link is Down
- Nov 12 13:37:56 xs01 kernel: [252395.859498] bnx2 0000:0b:00.1: eth1: NIC Copper Link is Up, 100 Mbps full duplex, receive & transmit flow control ON
- Nov 12 13:37:59 xs01 corosync[5664]: [TOTEM ] A processor failed, forming new configuration.
- Nov 12 13:38:05 xs01 corosync[5664]: [CLM ] CLM CONFIGURATION CHANGE
- Nov 12 13:38:05 xs01 corosync[5664]: [CLM ] New Configuration:
- Nov 12 13:38:05 xs01 corosync[5664]: [CLM ] r(0) ip(10.1.1.135)
- Nov 12 13:38:05 xs01 corosync[5664]: [CLM ] Members Left:
- Nov 12 13:38:05 xs01 corosync[5664]: [CLM ] r(0) ip(10.1.1.136)
- Nov 12 13:38:05 xs01 corosync[5664]: [CLM ] Members Joined:
- Nov 12 13:38:05 xs01 corosync[5664]: [pcmk ] notice: pcmk_peer_update: Transitional membership event on ring 352: memb=1, new=0, lost=1
- Nov 12 13:38:05 xs01 corosync[5664]: [pcmk ] info: pcmk_peer_update: memb: xs01 117506314
- Nov 12 13:38:05 xs01 corosync[5664]: [pcmk ] info: pcmk_peer_update: lost: xs02 134283530
- Nov 12 13:38:05 xs01 corosync[5664]: [CLM ] CLM CONFIGURATION CHANGE
- Nov 12 13:38:05 xs01 corosync[5664]: [CLM ] New Configuration:
- Nov 12 13:38:05 xs01 corosync[5664]: [CLM ] r(0) ip(10.1.1.135)
- Nov 12 13:38:05 xs01 corosync[5664]: [CLM ] Members Left:
- Nov 12 13:38:05 xs01 corosync[5664]: [CLM ] Members Joined:
- Nov 12 13:38:05 xs01 corosync[5664]: [pcmk ] notice: pcmk_peer_update: Stable membership event on ring 352: memb=1, new=0, lost=0
- Nov 12 13:38:05 xs01 corosync[5664]: [pcmk ] info: pcmk_peer_update: MEMB: xs01 117506314
- Nov 12 13:38:05 xs01 corosync[5664]: [pcmk ] info: ais_mark_unseen_peer_dead: Node xs02 was not seen in the previous transition
- Nov 12 13:38:05 xs01 corosync[5664]: [pcmk ] info: update_member: Node 134283530/xs02 is now: lost
- Nov 12 13:38:05 xs01 corosync[5664]: [pcmk ] info: send_member_notification: Sending membership update 352 to 2 children
- Nov 12 13:38:05 xs01 corosync[5664]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
- Nov 12 13:38:05 xs01 corosync[5664]: [CPG ] chosen downlist: sender r(0) ip(10.1.1.135) ; members(old:2 left:1)
- Nov 12 13:38:05 xs01 cib: [5671]: notice: ais_dispatch_message: Membership 352: quorum lost
- Nov 12 13:38:05 xs01 corosync[5664]: [MAIN ] Completed service synchronization, ready to provide service.
- Nov 12 13:38:05 xs01 crmd: [5676]: notice: ais_dispatch_message: Membership 352: quorum lost
- Nov 12 13:38:05 xs01 cib: [5671]: info: crm_update_peer: Node xs02: id=134283530 state=lost (new) addr=r(0) ip(10.1.1.136) votes=1 born=340 seen=348 proc=00000000000000000000000000151312
- Nov 12 13:38:05 xs01 crmd: [5676]: info: ais_status_callback: status: xs02 is now lost (was member)
- Nov 12 13:38:05 xs01 cib: [5671]: info: cib_process_readwrite: We are now in R/W mode
- Nov 12 13:38:05 xs01 crmd: [5676]: info: crm_update_peer: Node xs02: id=134283530 state=lost (new) addr=r(0) ip(10.1.1.136) votes=1 born=340 seen=348 proc=00000000000000000000000000151312
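With xs02 gone, corosync forms a single-node membership and quorum is lost, yet the cluster keeps managing resources; the policy engine confirms why just below ("On loss of CCM Quorum: Ignore"). That is the standard two-node arrangement, which in crm shell terms looks like this (values inferred from this log, not read from the actual CIB):

    crm configure property no-quorum-policy=ignore   # matches "On loss of CCM Quorum: Ignore"
    crm configure property stonith-enabled=true      # fencing is demonstrably active here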
- Nov 12 13:38:05 xs01 cib: [5671]: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/210, version=0.442.6): ok (rc=0)
- Nov 12 13:38:05 xs01 crmd: [5676]: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
- Nov 12 13:38:05 xs01 cib: [5671]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/211, version=0.442.7): ok (rc=0)
- Nov 12 13:38:05 xs01 crmd: [5676]: info: do_te_control: Registering TE UUID: d1569e85-c54d-4aae-899f-6cde2642f3be
- Nov 12 13:38:05 xs01 cib: [5671]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/213, version=0.442.8): ok (rc=0)
- Nov 12 13:38:05 xs01 crmd: [5676]: info: set_graph_functions: Setting custom graph functions
- Nov 12 13:38:05 xs01 cib: [5671]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/215, version=0.442.9): ok (rc=0)
- Nov 12 13:38:05 xs01 crmd: [5676]: info: do_dc_takeover: Taking over DC status for this partition
- Nov 12 13:38:05 xs01 cib: [5671]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/217, version=0.442.10): ok (rc=0)
- Nov 12 13:38:05 xs01 crmd: [5676]: info: join_make_offer: Making join offers based on membership 352
- Nov 12 13:38:05 xs01 crmd: [5676]: info: do_dc_join_offer_all: join-1: Waiting on 1 outstanding join acks
- Nov 12 13:38:05 xs01 crmd: [5676]: info: ais_dispatch_message: Membership 352: quorum still lost
- Nov 12 13:38:05 xs01 crmd: [5676]: info: crmd_ais_dispatch: Setting expected votes to 2
- Nov 12 13:38:05 xs01 crmd: [5676]: info: update_dc: Set DC to xs01 (3.0.6)
- Nov 12 13:38:05 xs01 crmd: [5676]: info: ais_dispatch_message: Membership 352: quorum still lost
- Nov 12 13:38:05 xs01 crmd: [5676]: info: crmd_ais_dispatch: Setting expected votes to 2
- Nov 12 13:38:05 xs01 crmd: [5676]: notice: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
- Nov 12 13:38:05 xs01 crmd: [5676]: info: do_dc_join_finalize: join-1: Syncing the CIB from xs01 to the rest of the cluster
- Nov 12 13:38:05 xs01 cib: [5671]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/221, version=0.442.11): ok (rc=0)
- Nov 12 13:38:05 xs01 cib: [5671]: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/222, version=0.442.11): ok (rc=0)
- Nov 12 13:38:05 xs01 cib: [5671]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/223, version=0.442.12): ok (rc=0)
- Nov 12 13:38:05 xs01 crmd: [5676]: info: do_dc_join_ack: join-1: Updating node state to member for xs01
- Nov 12 13:38:05 xs01 crmd: [5676]: info: erase_status_tag: Deleting xpath: //node_state[@uname='xs01']/lrm
- Nov 12 13:38:05 xs01 cib: [5671]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='xs01']/lrm (origin=local/crmd/224, version=0.442.13): ok (rc=0)
- Nov 12 13:38:05 xs01 crmd: [5676]: notice: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
- Nov 12 13:38:05 xs01 attrd: [5674]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
- Nov 12 13:38:05 xs01 cib: [5671]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/226, version=0.442.15): ok (rc=0)
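xs01 wins the election, takes over as DC, and syncs the CIB to the one-node partition. From the surviving node this is easy to confirm with the standard read-only tools (output shape varies by pacemaker version):

    crm_mon -1    # one-shot status; should report "Current DC: xs01"
    crm_node -p   # members of the current partition; only xs01 at this point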
- Nov 12 13:38:05 xs01 pengine: [5675]: notice: unpack_config: On loss of CCM Quorum: Ignore
- Nov 12 13:38:05 xs01 crmd: [5676]: info: abort_transition_graph: do_te_invoke:162 - Triggered transition abort (complete=1) : Peer Cancelled
- Nov 12 13:38:05 xs01 attrd: [5674]: notice: attrd_trigger_update: Sending flush op to all hosts for: fail-count-drbd0:0 (8)
- Nov 12 13:38:05 xs01 cib: [5671]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/228, version=0.442.17): ok (rc=0)
- Nov 12 13:38:05 xs01 pengine: [5675]: WARN: pe_fence_node: Node xs02 will be fenced because it is un-expectedly down
- Nov 12 13:38:05 xs01 crmd: [5676]: WARN: match_down_event: No match for shutdown action on xs02
- Nov 12 13:38:05 xs01 attrd: [5674]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-drbd0:0 (5)
- Nov 12 13:38:05 xs01 pengine: [5675]: WARN: determine_online_status: Node xs02 is unclean
- Nov 12 13:38:05 xs01 crmd: [5676]: info: te_update_diff: Stonith/shutdown of xs02 not matched
- Nov 12 13:38:05 xs01 attrd: [5674]: notice: attrd_trigger_update: Sending flush op to all hosts for: last-failure-vmdisk-pri:0 (1352722459)
- Nov 12 13:38:05 xs01 pengine: [5675]: WARN: unpack_rsc_op: Processing failed op drbd0:0_last_failure_0 on xs01: unknown error (1)
- Nov 12 13:38:05 xs01 crmd: [5676]: info: abort_transition_graph: te_update_diff:234 - Triggered transition abort (complete=1, tag=node_state, id=xs02, magic=NA, cib=0.442.16) : Node failure
- Nov 12 13:38:05 xs01 attrd: [5674]: notice: attrd_trigger_update: Sending flush op to all hosts for: last-failure-drbd0:0 (1352722653)
- Nov 12 13:38:05 xs01 pengine: [5675]: WARN: unpack_rsc_op: Processing failed op drbd0:1_last_failure_0 on xs02: unknown error (1)
- Nov 12 13:38:05 xs01 attrd: [5674]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
- Nov 12 13:38:05 xs01 pengine: [5675]: notice: common_apply_stickiness: ms_drbd0 can fail 999992 more times on xs01 before being forced off
- Nov 12 13:38:05 xs01 pengine: [5675]: notice: common_apply_stickiness: ms_drbd0 can fail 999992 more times on xs01 before being forced off
- Nov 12 13:38:05 xs01 pengine: [5675]: notice: common_apply_stickiness: ms_drbd0 can fail 999998 more times on xs02 before being forced off
- Nov 12 13:38:05 xs01 pengine: [5675]: notice: common_apply_stickiness: ms_drbd0 can fail 999998 more times on xs02 before being forced off
- Nov 12 13:38:05 xs01 pengine: [5675]: WARN: custom_action: Action stonith-ipmi-xs01_stop_0 on xs02 is unrunnable (offline)
- Nov 12 13:38:05 xs01 pengine: [5675]: WARN: custom_action: Marking node xs02 unclean
- Nov 12 13:38:05 xs01 pengine: [5675]: WARN: custom_action: Action stonith-ipmi-xs01_stop_0 on xs02 is unrunnable (offline)
- Nov 12 13:38:05 xs01 pengine: [5675]: WARN: custom_action: Marking node xs02 unclean
- Nov 12 13:38:05 xs01 pengine: [5675]: WARN: custom_action: Action drbd0:1_stop_0 on xs02 is unrunnable (offline)
- Nov 12 13:38:05 xs01 pengine: [5675]: WARN: custom_action: Marking node xs02 unclean
- Nov 12 13:38:05 xs01 pengine: [5675]: WARN: custom_action: Action drbd0:1_stop_0 on xs02 is unrunnable (offline)
- Nov 12 13:38:05 xs01 pengine: [5675]: WARN: custom_action: Marking node xs02 unclean
- Nov 12 13:38:05 xs01 pengine: [5675]: WARN: custom_action: Action drbd0:1_stop_0 on xs02 is unrunnable (offline)
- Nov 12 13:38:05 xs01 pengine: [5675]: WARN: custom_action: Marking node xs02 unclean
- Nov 12 13:38:05 xs01 pengine: [5675]: WARN: custom_action: Action drbd0:1_stop_0 on xs02 is unrunnable (offline)
- Nov 12 13:38:05 xs01 pengine: [5675]: WARN: custom_action: Marking node xs02 unclean
- Nov 12 13:38:05 xs01 pengine: [5675]: WARN: stage6: Scheduling Node xs02 for STONITH
- Nov 12 13:38:05 xs01 pengine: [5675]: notice: LogActions: Stop stonith-ipmi-xs01 (xs02)
- Nov 12 13:38:05 xs01 pengine: [5675]: notice: LogActions: Start dlm:0 (xs01 - blocked)
- Nov 12 13:38:05 xs01 pengine: [5675]: notice: LogActions: Start o2cb:0 (xs01 - blocked)
- Nov 12 13:38:05 xs01 pengine: [5675]: notice: LogActions: Demote drbd0:0 (Master -> Slave xs01)
- Nov 12 13:38:05 xs01 pengine: [5675]: notice: LogActions: Recover drbd0:0 (Master xs01)
- Nov 12 13:38:05 xs01 pengine: [5675]: notice: LogActions: Stop drbd0:1 (xs02)
- Nov 12 13:38:05 xs01 pengine: [5675]: notice: LogActions: Start vmdisk-pri:0 (xs01 - blocked)
- Nov 12 13:38:05 xs01 crmd: [5676]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
- Nov 12 13:38:05 xs01 crmd: [5676]: info: do_te_invoke: Processing graph 0 (ref=pe_calc-dc-1352738285-133) derived from /var/lib/pengine/pe-warn-52.bz2
- Nov 12 13:38:05 xs01 crmd: [5676]: info: te_rsc_command: Initiating action 74: notify drbd0:0_pre_notify_demote_0 on xs01 (local)
- Nov 12 13:38:05 xs01 lrmd: [5673]: info: rsc:drbd0:0 notify[215] (pid 1190)
- Nov 12 13:38:05 xs01 pengine: [5675]: WARN: process_pe_message: Transition 0: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-52.bz2
- Nov 12 13:38:05 xs01 pengine: [5675]: notice: process_pe_message: Configuration WARNINGs found during PE processing. Please run "crm_verify -L" to identify issues.
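The scheduler flags xs02 as unclean, schedules it for STONITH, and plans a demote/recover of drbd0:0; the warnings and the full transition input land in pe-warn-52.bz2. Both can be examined offline, e.g. (sketch; pacemaker 1.1 tooling is assumed to accept the .bz2 PE file directly, worth verifying on the installed version):

    crm_verify -L -V                                    # validate the live CIB, verbose
    crm_simulate -S -x /var/lib/pengine/pe-warn-52.bz2  # replay the stored scheduler input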
- Nov 12 13:38:05 xs01 lrmd: [5673]: info: operation notify[215] on drbd0:0 for client 5676: pid 1190 exited with return code 0
- Nov 12 13:38:05 xs01 crmd: [5676]: info: process_lrm_event: LRM operation drbd0:0_notify_0 (call=215, rc=0, cib-update=0, confirmed=true) ok
- Nov 12 13:38:05 xs01 crmd: [5676]: notice: te_fence_node: Executing reboot fencing operation (61) on xs02 (timeout=60000)
- Nov 12 13:38:05 xs01 stonith-ng: [5672]: info: initiate_remote_stonith_op: Initiating remote operation reboot for xs02: 719bd9e5-b039-4fc4-8646-baeea93e0772
- Nov 12 13:38:05 xs01 stonith-ng: [5672]: info: can_fence_host_with_device: stonith-ipmi-xs02 can fence xs02: dynamic-list
- Nov 12 13:38:05 xs01 stonith-ng: [5672]: info: call_remote_stonith: Requesting that xs01 perform op reboot xs02
- Nov 12 13:38:05 xs01 stonith-ng: [5672]: info: can_fence_host_with_device: stonith-ipmi-xs02 can fence xs02: dynamic-list
- Nov 12 13:38:05 xs01 stonith-ng: [5672]: info: stonith_fence: Found 1 matching devices for 'xs02'
- Nov 12 13:38:05 xs01 stonith-ng: [5672]: info: stonith_command: Processed st_fence from xs01: rc=-1
- Nov 12 13:38:05 xs01 external/ipmi(stonith-ipmi-xs02)[1217]: [1230]: debug: ipmitool output: Chassis Power Control: Up/On
- Nov 12 13:38:05 xs01 mgmtd: [5677]: info: CIB query: cib
- Nov 12 13:38:06 xs01 stonith-ng: [5672]: notice: log_operation: Operation 'reboot' [1212] (call 0 from f80eb6d2-7c12-47ad-98e3-fd0822447da0) for host 'xs02' with device 'stonith-ipmi-xs02' returned: 0
- Nov 12 13:38:06 xs01 crmd: [5676]: info: tengine_stonith_callback: StonithOp <st-reply st_origin="stonith_construct_async_reply" t="stonith-ng" st_op="reboot" st_remote_op="719bd9e5-b039-4fc4-8646-baeea93e0772" st_clientid="f80eb6d2-7c12-47ad-98e3-fd0822447da0" st_target="xs02" st_device_action="st_fence" st_callid="0" st_callopt="0" st_rc="0" st_output="Performing: stonith -t external/ipmi -T reset xs02 success: xs02 0 " src="xs01" seq="8" state="2" />
- Nov 12 13:38:06 xs01 stonith-ng: [5672]: info: log_operation: stonith-ipmi-xs02: Performing: stonith -t external/ipmi -T reset xs02
- Nov 12 13:38:06 xs01 crmd: [5676]: info: erase_status_tag: Deleting xpath: //node_state[@uname='xs02']/lrm
- Nov 12 13:38:06 xs01 stonith-ng: [5672]: info: log_operation: stonith-ipmi-xs02: success: xs02 0
- Nov 12 13:38:06 xs01 crmd: [5676]: info: erase_status_tag: Deleting xpath: //node_state[@uname='xs02']/transient_attributes
- Nov 12 13:38:06 xs01 stonith-ng: [5672]: notice: remote_op_done: Operation reboot of xs02 by xs01 for xs01[f80eb6d2-7c12-47ad-98e3-fd0822447da0]: OK
- Nov 12 13:38:06 xs01 crmd: [5676]: notice: crmd_peer_update: Status update: Client xs02/crmd now has status [offline] (DC=true)
- Nov 12 13:38:06 xs01 crmd: [5676]: notice: tengine_stonith_notify: Peer xs02 was terminated (reboot) by xs01 for xs01: OK (ref=719bd9e5-b039-4fc4-8646-baeea93e0772)
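Because xs02 is unclean, the transition engine fences it again, this time a reboot through the same stonith-ipmi-xs02 device. The same path can be exercised by hand with stonith_admin (disruptive; option spelling assumed from the pacemaker 1.1 CLI):

    stonith_admin --list xs02     # devices that claim they can fence xs02
    stonith_admin --reboot xs02   # what te_fence_node requested above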
- Nov 12 13:38:06 xs01 crmd: [5676]: info: te_rsc_command: Initiating action 25: demote drbd0:0_demote_0 on xs01 (local)
- Nov 12 13:38:06 xs01 lrmd: [5673]: info: rsc:drbd0:0 demote[216] (pid 1231)
- Nov 12 13:38:06 xs01 crmd: [5676]: info: cib_fencing_updated: Fencing update 231 for xs02: complete
- Nov 12 13:38:06 xs01 cib: [5671]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='xs02']/lrm (origin=local/crmd/232, version=0.442.24): ok (rc=0)
- Nov 12 13:38:06 xs01 crmd: [5676]: info: abort_transition_graph: te_update_diff:320 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=stonith-ipmi-xs01_last_0, magic=0:0;5:18:0:984c6a8c-26eb-43cc-94d9-ec0b63f93dd7, cib=0.442.24) : Resource op removal
- Nov 12 13:38:06 xs01 cib: [5671]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='xs02']/transient_attributes (origin=local/crmd/233, version=0.442.25): ok (rc=0)
- Nov 12 13:38:06 xs01 crmd: [5676]: info: abort_transition_graph: te_update_diff:194 - Triggered transition abort (complete=0, tag=transient_attributes, id=xs02, magic=NA, cib=0.442.25) : Transient attribute: removal
- Nov 12 13:38:06 xs01 lrmd: [5673]: info: operation demote[216] on drbd0:0 for client 5676: pid 1231 exited with return code 0
- Nov 12 13:38:06 xs01 crmd: [5676]: info: process_lrm_event: LRM operation drbd0:0_demote_0 (call=216, rc=0, cib-update=235, confirmed=true) ok
- Nov 12 13:38:06 xs01 crmd: [5676]: info: te_rsc_command: Initiating action 75: notify drbd0:0_post_notify_demote_0 on xs01 (local)
- Nov 12 13:38:06 xs01 lrmd: [5673]: info: rsc:drbd0:0 notify[217] (pid 1254)
- Nov 12 13:38:06 xs01 lrmd: [5673]: info: RA output: (drbd0:0:notify:stdout)
- Nov 12 13:38:06 xs01 lrmd: [5673]: info: operation notify[217] on drbd0:0 for client 5676: pid 1254 exited with return code 0
- Nov 12 13:38:06 xs01 crmd: [5676]: info: process_lrm_event: LRM operation drbd0:0_notify_0 (call=217, rc=0, cib-update=0, confirmed=true) ok
- Nov 12 13:38:06 xs01 crmd: [5676]: notice: run_graph: ==== Transition 0 (Complete=16, Pending=0, Fired=0, Skipped=12, Incomplete=7, Source=/var/lib/pengine/pe-warn-52.bz2): Stopped
- Nov 12 13:38:06 xs01 crmd: [5676]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
- Nov 12 13:38:06 xs01 pengine: [5675]: notice: unpack_config: On loss of CCM Quorum: Ignore
- Nov 12 13:38:06 xs01 pengine: [5675]: WARN: unpack_rsc_op: Processing failed op drbd0:0_last_failure_0 on xs01: unknown error (1)
- Nov 12 13:38:06 xs01 pengine: [5675]: notice: common_apply_stickiness: ms_drbd0 can fail 999992 more times on xs01 before being forced off
- Nov 12 13:38:06 xs01 pengine: [5675]: notice: common_apply_stickiness: ms_drbd0 can fail 999992 more times on xs01 before being forced off
- Nov 12 13:38:06 xs01 pengine: [5675]: notice: LogActions: Start dlm:0 (xs01 - blocked)
- Nov 12 13:38:06 xs01 pengine: [5675]: notice: LogActions: Start o2cb:0 (xs01 - blocked)
- Nov 12 13:38:06 xs01 pengine: [5675]: notice: LogActions: Recover drbd0:0 (Slave xs01)
- Nov 12 13:38:06 xs01 pengine: [5675]: notice: LogActions: Start vmdisk-pri:0 (xs01 - blocked)
- Nov 12 13:38:06 xs01 crmd: [5676]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
- Nov 12 13:38:06 xs01 crmd: [5676]: info: do_te_invoke: Processing graph 1 (ref=pe_calc-dc-1352738286-139) derived from /var/lib/pengine/pe-input-898.bz2
- Nov 12 13:38:06 xs01 crmd: [5676]: info: te_rsc_command: Initiating action 67: notify drbd0:0_pre_notify_stop_0 on xs01 (local)
- Nov 12 13:38:06 xs01 lrmd: [5673]: info: rsc:drbd0:0 notify[218] (pid 1282)
- Nov 12 13:38:06 xs01 lrmd: [5673]: info: operation notify[218] on drbd0:0 for client 5676: pid 1282 exited with return code 0
- Nov 12 13:38:06 xs01 crmd: [5676]: info: process_lrm_event: LRM operation drbd0:0_notify_0 (call=218, rc=0, cib-update=0, confirmed=true) ok
- Nov 12 13:38:06 xs01 crmd: [5676]: info: te_rsc_command: Initiating action 2: stop drbd0:0_stop_0 on xs01 (local)
- Nov 12 13:38:06 xs01 lrmd: [5673]: info: rsc:drbd0:0 stop[219] (pid 1304)
- Nov 12 13:38:06 xs01 pengine: [5675]: notice: process_pe_message: Transition 1: PEngine Input stored in: /var/lib/pengine/pe-input-898.bz2
- Nov 12 13:38:07 xs01 mgmtd: [5677]: info: CIB query: cib
- Nov 12 13:38:10 xs01 kernel: [252409.584075] d-con drbd0: PingAck did not arrive in time.
- Nov 12 13:38:10 xs01 kernel: [252409.584098] d-con drbd0: peer( Secondary -> Unknown ) conn( Connected -> NetworkFailure ) pdsk( UpToDate -> DUnknown )
- Nov 12 13:38:10 xs01 kernel: [252409.584102] d-con drbd0: asender terminated
- Nov 12 13:38:10 xs01 kernel: [252409.584104] d-con drbd0: Terminating asender thread
- Nov 12 13:38:10 xs01 kernel: [252409.584137] d-con drbd0: conn( NetworkFailure -> Disconnecting )
- Nov 12 13:38:10 xs01 kernel: [252409.593995] d-con drbd0: Connection closed
- Nov 12 13:38:10 xs01 kernel: [252409.594008] d-con drbd0: conn( Disconnecting -> StandAlone )
- Nov 12 13:38:10 xs01 kernel: [252409.594010] d-con drbd0: receiver terminated
- Nov 12 13:38:10 xs01 kernel: [252409.594014] d-con drbd0: Terminating receiver thread
- Nov 12 13:38:10 xs01 kernel: [252409.594045] block drbd0: disk( UpToDate -> Failed )
- Nov 12 13:38:10 xs01 kernel: [252409.649085] block drbd0: disk( Failed -> Diskless )
- Nov 12 13:38:10 xs01 kernel: [252409.649269] block drbd0: drbd_bm_resize called with capacity == 0
- Nov 12 13:38:10 xs01 kernel: [252409.650545] d-con drbd0: Terminating worker thread
- Nov 12 13:38:10 xs01 lrmd: [5673]: info: RA output: (drbd0:0:stop:stdout)
- Nov 12 13:38:10 xs01 lrmd: [5673]: info: RA output: (drbd0:0:stop:stdout)
- Nov 12 13:38:10 xs01 crm_attribute: [1334]: info: Invoked: crm_attribute -N xs01 -n master-drbd0:0 -l reboot -D
- Nov 12 13:38:10 xs01 attrd: [5674]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-drbd0:0 (<null>)
- Nov 12 13:38:10 xs01 attrd: [5674]: notice: attrd_perform_update: Sent delete 344: node=xs01, attr=master-drbd0:0, id=<n/a>, set=(null), section=status
- Nov 12 13:38:10 xs01 crmd: [5676]: info: abort_transition_graph: te_update_diff:194 - Triggered transition abort (complete=0, tag=transient_attributes, id=xs01, magic=NA, cib=0.442.28) : Transient attribute: removal
- Nov 12 13:38:10 xs01 attrd: [5674]: notice: attrd_perform_update: Sent delete -22: node=xs01, attr=master-drbd0:0, id=<n/a>, set=(null), section=status
- Nov 12 13:38:10 xs01 lrmd: [5673]: info: RA output: (drbd0:0:stop:stdout)
- Nov 12 13:38:10 xs01 lrmd: [5673]: info: operation stop[219] on drbd0:0 for client 5676: pid 1304 exited with return code 0
- Nov 12 13:38:10 xs01 crmd: [5676]: info: process_lrm_event: LRM operation drbd0:0_stop_0 (call=219, rc=0, cib-update=237, confirmed=true) ok
- Nov 12 13:38:10 xs01 crmd: [5676]: notice: run_graph: ==== Transition 1 (Complete=9, Pending=0, Fired=0, Skipped=6, Incomplete=4, Source=/var/lib/pengine/pe-input-898.bz2): Stopped
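Recovery then demotes and stops drbd0 on xs01. The kernel lines above are the stop in action: the freshly power-cycled peer stops answering ("PingAck did not arrive in time"), the connection falls back to StandAlone, the disk is detached (UpToDate -> Failed -> Diskless), and the RA deletes the master score. The whole teardown corresponds to a plain (assumed to be what the linbit RA's stop action runs, consistent with the logged state changes):

    drbdadm down drbd0   # disconnect and detach: Connected -> StandAlone, UpToDate -> Diskless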
- Nov 12 13:38:10 xs01 crmd: [5676]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
- Nov 12 13:38:10 xs01 pengine: [5675]: notice: unpack_config: On loss of CCM Quorum: Ignore
- Nov 12 13:38:10 xs01 pengine: [5675]: WARN: unpack_rsc_op: Processing failed op drbd0:0_last_failure_0 on xs01: unknown error (1)
- Nov 12 13:38:10 xs01 pengine: [5675]: notice: common_apply_stickiness: ms_drbd0 can fail 999992 more times on xs01 before being forced off
- Nov 12 13:38:10 xs01 pengine: [5675]: notice: common_apply_stickiness: ms_drbd0 can fail 999992 more times on xs01 before being forced off
- Nov 12 13:38:10 xs01 pengine: [5675]: notice: LogActions: Start dlm:0 (xs01 - blocked)
- Nov 12 13:38:10 xs01 crmd: [5676]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
- Nov 12 13:38:10 xs01 pengine: [5675]: notice: LogActions: Start o2cb:0 (xs01 - blocked)
- Nov 12 13:38:10 xs01 pengine: [5675]: notice: LogActions: Start drbd0:0 (xs01)
- Nov 12 13:38:10 xs01 pengine: [5675]: notice: LogActions: Start vmdisk-pri:0 (xs01 - blocked)
- Nov 12 13:38:10 xs01 crmd: [5676]: info: do_te_invoke: Processing graph 2 (ref=pe_calc-dc-1352738290-143) derived from /var/lib/pengine/pe-input-899.bz2
- Nov 12 13:38:10 xs01 crmd: [5676]: info: te_rsc_command: Initiating action 23: start drbd0:0_start_0 on xs01 (local)
- Nov 12 13:38:10 xs01 lrmd: [5673]: info: rsc:drbd0:0 start[220] (pid 1335)
- Nov 12 13:38:10 xs01 pengine: [5675]: notice: process_pe_message: Transition 2: PEngine Input stored in: /var/lib/pengine/pe-input-899.bz2
- Nov 12 13:38:10 xs01 lrmd: [5673]: info: RA output: (drbd0:0:start:stdout)
- Nov 12 13:38:10 xs01 lrmd: [5673]: info: RA output: (drbd0:0:start:stdout)
- Nov 12 13:38:10 xs01 lrmd: [5673]: info: RA output: (drbd0:0:start:stdout)
- Nov 12 13:38:11 xs01 mgmtd: [5677]: info: CIB query: cib
- Nov 12 13:38:11 xs01 kernel: [252410.920756] d-con drbd0: Starting worker thread (from drbdsetup [1375])
- Nov 12 13:38:11 xs01 kernel: [252410.920904] block drbd0: disk( Diskless -> Attaching )
- Nov 12 13:38:11 xs01 kernel: [252410.927922] d-con drbd0: Method to ensure write ordering: barrier
- Nov 12 13:38:11 xs01 kernel: [252410.927927] block drbd0: max BIO size = 131072
- Nov 12 13:38:11 xs01 kernel: [252410.927933] block drbd0: drbd_bm_resize called with capacity == 2339768520
- Nov 12 13:38:11 xs01 kernel: [252410.936953] block drbd0: resync bitmap: bits=292471065 words=4569861 pages=8926
- Nov 12 13:38:11 xs01 kernel: [252410.936960] block drbd0: size = 1116 GB (1169884260 KB)
- Nov 12 13:38:12 xs01 kernel: [252411.190356] block drbd0: bitmap READ of 8926 pages took 63 jiffies
- Nov 12 13:38:12 xs01 kernel: [252411.198094] block drbd0: recounting of set bits took additional 2 jiffies
- Nov 12 13:38:12 xs01 kernel: [252411.198099] block drbd0: 0 KB (0 bits) marked out-of-sync by on disk bit-map.
- Nov 12 13:38:12 xs01 kernel: [252411.198110] block drbd0: disk( Attaching -> Consistent )
- Nov 12 13:38:12 xs01 kernel: [252411.198114] block drbd0: attached to UUIDs A5D2F216E6C0E288:0000000000000000:9D9C5DAFE8FBE22D:9D9B5DAFE8FBE22D
- Nov 12 13:38:12 xs01 lrmd: [5673]: info: RA output: (drbd0:0:start:stdout)
- Nov 12 13:38:12 xs01 kernel: [252411.220935] d-con drbd0: conn( StandAlone -> Unconnected )
- Nov 12 13:38:12 xs01 kernel: [252411.220958] d-con drbd0: Starting receiver thread (from drbd_w_drbd0 [1376])
- Nov 12 13:38:12 xs01 kernel: [252411.221026] d-con drbd0: receiver (re)started
- Nov 12 13:38:12 xs01 kernel: [252411.221040] d-con drbd0: conn( Unconnected -> WFConnection )
- Nov 12 13:38:12 xs01 lrmd: [5673]: info: RA output: (drbd0:0:start:stdout)
- Nov 12 13:38:12 xs01 attrd: [5674]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-drbd0:0 (5)
- Nov 12 13:38:12 xs01 attrd: [5674]: notice: attrd_perform_update: Sent update 348: master-drbd0:0=5
- Nov 12 13:38:12 xs01 crmd: [5676]: info: abort_transition_graph: te_update_diff:176 - Triggered transition abort (complete=0, tag=nvpair, id=status-xs01-master-drbd0.0, name=master-drbd0:0, value=5, magic=NA, cib=0.442.30) : Transient attribute: update
- Nov 12 13:38:12 xs01 lrmd: [5673]: info: RA output: (drbd0:0:start:stdout)
- Nov 12 13:38:12 xs01 lrmd: [5673]: info: operation start[220] on drbd0:0 for client 5676: pid 1335 exited with return code 0
- Nov 12 13:38:12 xs01 crmd: [5676]: info: process_lrm_event: LRM operation drbd0:0_start_0 (call=220, rc=0, cib-update=239, confirmed=true) ok
- Nov 12 13:38:12 xs01 crmd: [5676]: info: te_rsc_command: Initiating action 65: notify drbd0:0_post_notify_start_0 on xs01 (local)
- Nov 12 13:38:12 xs01 lrmd: [5673]: info: rsc:drbd0:0 notify[221] (pid 1392)
- Nov 12 13:38:12 xs01 lrmd: [5673]: info: RA output: (drbd0:0:notify:stdout)
- Nov 12 13:38:12 xs01 lrmd: [5673]: info: operation notify[221] on drbd0:0 for client 5676: pid 1392 exited with return code 0
- Nov 12 13:38:12 xs01 crmd: [5676]: info: process_lrm_event: LRM operation drbd0:0_notify_0 (call=221, rc=0, cib-update=0, confirmed=true) ok
- Nov 12 13:38:12 xs01 crmd: [5676]: notice: run_graph: ==== Transition 2 (Complete=9, Pending=0, Fired=0, Skipped=1, Incomplete=0, Source=/var/lib/pengine/pe-input-899.bz2): Stopped
- Nov 12 13:38:12 xs01 crmd: [5676]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
- Nov 12 13:38:12 xs01 pengine: [5675]: notice: unpack_config: On loss of CCM Quorum: Ignore
- Nov 12 13:38:12 xs01 pengine: [5675]: WARN: unpack_rsc_op: Processing failed op drbd0:0_last_failure_0 on xs01: unknown error (1)
- Nov 12 13:38:12 xs01 pengine: [5675]: notice: common_apply_stickiness: ms_drbd0 can fail 999992 more times on xs01 before being forced off
- Nov 12 13:38:12 xs01 pengine: [5675]: notice: common_apply_stickiness: ms_drbd0 can fail 999992 more times on xs01 before being forced off
- Nov 12 13:38:12 xs01 pengine: [5675]: notice: LogActions: Start dlm:0 (xs01)
- Nov 12 13:38:12 xs01 pengine: [5675]: notice: LogActions: Start o2cb:0 (xs01)
- Nov 12 13:38:12 xs01 pengine: [5675]: notice: LogActions: Promote drbd0:0 (Slave -> Master xs01)
- Nov 12 13:38:12 xs01 pengine: [5675]: notice: LogActions: Start vmdisk-pri:0 (xs01)
- Nov 12 13:38:12 xs01 crmd: [5676]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
- Nov 12 13:38:12 xs01 crmd: [5676]: info: do_te_invoke: Processing graph 3 (ref=pe_calc-dc-1352738292-147) derived from /var/lib/pengine/pe-input-900.bz2
- Nov 12 13:38:12 xs01 crmd: [5676]: info: te_rsc_command: Initiating action 70: notify drbd0:0_pre_notify_promote_0 on xs01 (local)
- Nov 12 13:38:12 xs01 lrmd: [5673]: info: rsc:drbd0:0 notify[222] (pid 1420)
- Nov 12 13:38:12 xs01 pengine: [5675]: notice: process_pe_message: Transition 3: PEngine Input stored in: /var/lib/pengine/pe-input-900.bz2
- Nov 12 13:38:12 xs01 lrmd: [5673]: info: operation notify[222] on drbd0:0 for client 5676: pid 1420 exited with return code 0
- Nov 12 13:38:12 xs01 crmd: [5676]: info: process_lrm_event: LRM operation drbd0:0_notify_0 (call=222, rc=0, cib-update=0, confirmed=true) ok
- Nov 12 13:38:12 xs01 crmd: [5676]: info: te_rsc_command: Initiating action 25: promote drbd0:0_promote_0 on xs01 (local)
- Nov 12 13:38:12 xs01 lrmd: [5673]: info: rsc:drbd0:0 promote[223] (pid 1442)
- Nov 12 13:38:12 xs01 kernel: [252411.465490] d-con drbd0: helper command: /sbin/drbdadm fence-peer drbd0
- Nov 12 13:38:12 xs01 stonith-ng: [5672]: info: initiate_remote_stonith_op: Initiating remote operation off for xs02: edddf437-31b5-4c99-a2d7-562c7780a206
- Nov 12 13:38:12 xs01 stonith-ng: [5672]: info: can_fence_host_with_device: stonith-ipmi-xs02 can fence xs02: dynamic-list
- Nov 12 13:38:12 xs01 stonith-ng: [5672]: info: call_remote_stonith: Requesting that xs01 perform op off xs02
- Nov 12 13:38:12 xs01 stonith-ng: [5672]: info: can_fence_host_with_device: stonith-ipmi-xs02 can fence xs02: dynamic-list
- Nov 12 13:38:12 xs01 stonith-ng: [5672]: info: stonith_fence: Found 1 matching devices for 'xs02'
- Nov 12 13:38:12 xs01 stonith-ng: [5672]: info: stonith_command: Processed st_fence from xs01: rc=-1
- Nov 12 13:38:12 xs01 external/ipmi(stonith-ipmi-xs02)[1478]: [1489]: debug: ipmitool output: Chassis Power Control: Down/Off
- Nov 12 13:38:12 xs01 mgmtd: [5677]: info: CIB query: cib
- Nov 12 13:38:13 xs01 stonith-ng: [5672]: notice: log_operation: Operation 'off' [1473] (call 0 from a5f1800c-a1eb-4bb5-bd43-a0194b387e79) for host 'xs02' with device 'stonith-ipmi-xs02' returned: 0
- Nov 12 13:38:13 xs01 crmd: [5676]: notice: tengine_stonith_notify: Peer xs02 was terminated (off) by xs01 for xs01: OK (ref=edddf437-31b5-4c99-a2d7-562c7780a206)
- Nov 12 13:38:13 xs01 stonith-ng: [5672]: info: log_operation: stonith-ipmi-xs02: Performing: stonith -t external/ipmi -T off xs02
- Nov 12 13:38:13 xs01 stonith-ng: [5672]: info: log_operation: stonith-ipmi-xs02: success: xs02 0
- Nov 12 13:38:13 xs01 stonith-ng: [5672]: notice: remote_op_done: Operation off of xs02 by xs01 for xs01[a5f1800c-a1eb-4bb5-bd43-a0194b387e79]: OK
- Nov 12 13:38:13 xs01 stonith_admin-fence-peer.sh[1491]: stonith_admin successfully fenced peer xs02.
- Nov 12 13:38:13 xs01 kernel: [252412.685720] d-con drbd0: helper command: /sbin/drbdadm fence-peer drbd0 exit code 7 (0x700)
- Nov 12 13:38:13 xs01 kernel: [252412.685725] d-con drbd0: fence-peer helper returned 7 (peer was stonithed)
- Nov 12 13:38:13 xs01 kernel: [252412.685742] d-con drbd0: pdsk( DUnknown -> Outdated )
- Nov 12 13:38:13 xs01 kernel: [252412.685750] block drbd0: role( Secondary -> Primary ) disk( Consistent -> UpToDate )
- Nov 12 13:38:13 xs01 kernel: [252412.692228] block drbd0: new current UUID 7360EBBFA6D33021:A5D2F216E6C0E288:9D9C5DAFE8FBE22D:9D9B5DAFE8FBE22D
- Nov 12 13:38:13 xs01 lrmd: [5673]: info: RA output: (drbd0:0:promote:stdout)
- Nov 12 13:38:13 xs01 lrmd: [5673]: info: operation promote[223] on drbd0:0 for client 5676: pid 1442 exited with return code 0
- Nov 12 13:38:13 xs01 crmd: [5676]: info: process_lrm_event: LRM operation drbd0:0_promote_0 (call=223, rc=0, cib-update=241, confirmed=true) ok
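This second promote succeeds. The peer stays down, so the fence-peer helper's exit code 7 is honored this time: the peer disk is marked Outdated, the role flips Secondary -> Primary, and DRBD generates a new current UUID to record that xs01's data now diverges from the fenced peer. A read-only check of the resulting state:

    cat /proc/drbd       # expect cs:WFConnection ro:Primary/Unknown ds:UpToDate/Outdated
    drbdadm role drbd0   # Primary/Unknown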
- Nov 12 13:38:13 xs01 crmd: [5676]: info: te_rsc_command: Initiating action 71: notify drbd0:0_post_notify_promote_0 on xs01 (local)
- Nov 12 13:38:13 xs01 lrmd: [5673]: info: rsc:drbd0:0 notify[224] (pid 1495)
- Nov 12 13:38:13 xs01 lrmd: [5673]: info: RA output: (drbd0:0:notify:stdout)
- Nov 12 13:38:13 xs01 lrmd: [5673]: info: operation notify[224] on drbd0:0 for client 5676: pid 1495 exited with return code 0
- Nov 12 13:38:13 xs01 crmd: [5676]: info: process_lrm_event: LRM operation drbd0:0_notify_0 (call=224, rc=0, cib-update=0, confirmed=true) ok
- Nov 12 13:38:13 xs01 crmd: [5676]: info: te_rsc_command: Initiating action 26: monitor drbd0:0_monitor_10000 on xs01 (local)
- Nov 12 13:38:13 xs01 lrmd: [5673]: info: rsc:drbd0:0 monitor[225] (pid 1528)
- Nov 12 13:38:13 xs01 lrmd: [5673]: info: operation monitor[225] on drbd0:0 for client 5676: pid 1528 exited with return code 8
- Nov 12 13:38:13 xs01 crmd: [5676]: info: process_lrm_event: LRM operation drbd0:0_monitor_10000 (call=225, rc=8, cib-update=242, confirmed=false) master
- Nov 12 13:38:13 xs01 crmd: [5676]: info: te_rsc_command: Initiating action 7: start dlm:0_start_0 on xs01 (local)
- Nov 12 13:38:13 xs01 lrmd: [5673]: info: rsc:dlm:0 start[226] (pid 1555)
- Nov 12 13:38:13 xs01 cluster-dlm[1567]: main: dlm_controld master started
- Nov 12 13:38:13 xs01 corosync[5664]: [pcmk ] info: pcmk_notify: Enabling node notifications for child 1567 (0x7f6fc8003680)
- Nov 12 13:38:13 xs01 cluster-dlm: setup_misc_devices: found /dev/misc/dlm-control minor 58
- Nov 12 13:38:13 xs01 cluster-dlm: setup_misc_devices: found /dev/misc/dlm-monitor minor 57
- Nov 12 13:38:13 xs01 cluster-dlm: setup_misc_devices: found /dev/misc/dlm_plock minor 56
- Nov 12 13:38:13 xs01 cluster-dlm: setup_monitor: /dev/misc/dlm-monitor fd 10
- Nov 12 13:38:13 xs01 cluster-dlm: update_comms_nodes: /sys/kernel/config/dlm/cluster/comms: opendir failed: 2
- Nov 12 13:38:13 xs01 cluster-dlm: clear_configfs_spaces: /sys/kernel/config/dlm/cluster/spaces: opendir failed: 2
- Nov 12 13:38:13 xs01 cluster-dlm: [1567]: info: get_cluster_type: Cluster type is: 'openais'
- Nov 12 13:38:13 xs01 cluster-dlm: [1567]: info: init_ais_connection_classic: Creating connection to our Corosync plugin
- Nov 12 13:38:13 xs01 cluster-dlm: [1567]: info: init_ais_connection_classic: AIS connection established
- Nov 12 13:38:13 xs01 cluster-dlm: [1567]: info: get_ais_nodeid: Server details: id=117506314 uname=xs01 cname=pcmk
- Nov 12 13:38:13 xs01 cluster-dlm: detect_protocol: confdb_key_get error 11
- Nov 12 13:38:13 xs01 cluster-dlm: setup_cpg_daemon: setup_cpg_daemon 12
- Nov 12 13:38:13 xs01 cluster-dlm: log_config: dlm:controld conf 1 1 0 memb 117506314 join 117506314 left
- Nov 12 13:38:13 xs01 cluster-dlm: set_protocol: set_protocol member_count 1 propose daemon 1.1.1 kernel 1.1.1
- Nov 12 13:38:13 xs01 cluster-dlm: receive_protocol: run protocol from nodeid 117506314
- Nov 12 13:38:13 xs01 cluster-dlm: set_protocol: daemon run 1.1.1 max 1.1.1 kernel run 1.1.1 max 1.1.1
- Nov 12 13:38:13 xs01 cluster-dlm: [1567]: info: init_ais_connection_once: Connection to 'classic openais (with plugin)': established
- Nov 12 13:38:13 xs01 cluster-dlm: [1567]: info: crm_new_peer: Node xs01 now has id: 117506314
- Nov 12 13:38:13 xs01 cluster-dlm: [1567]: info: crm_new_peer: Node 117506314 is now known as xs01
- Nov 12 13:38:13 xs01 cluster-dlm: setup_plocks: plocks 14
- Nov 12 13:38:13 xs01 cluster-dlm: setup_plocks: plock cpg message size: 104 bytes
- Nov 12 13:38:13 xs01 cluster-dlm: update_cluster: Processing membership 352
- Nov 12 13:38:13 xs01 cluster-dlm: dlm_process_node: Adding address ip(10.1.1.135) to configfs for node 117506314
- Nov 12 13:38:13 xs01 cluster-dlm: add_configfs_node: set_configfs_node 117506314 10.1.1.135 local 1
- Nov 12 13:38:13 xs01 cluster-dlm: dlm_process_node: Added active node 117506314: born-on=348, last-seen=352, this-event=352, last-event=0
- Nov 12 13:38:13 xs01 cluster-dlm: [1567]: info: ais_dispatch_message: Membership 352: quorum still lost
- Nov 12 13:38:13 xs01 cluster-dlm: dlm_process_node: Skipped inactive node 134283530: born-on=340, last-seen=0, this-event=352, last-event=0
- Nov 12 13:38:13 xs01 cluster-dlm: [1567]: info: crm_update_peer: Node xs01: id=117506314 state=member (new) addr=r(0) ip(10.1.1.135) (new) votes=1 (new) born=348 seen=352 proc=00000000000000000000000000151312 (new)
- Nov 12 13:38:13 xs01 cluster-dlm: [1567]: info: crm_new_peer: Node xs02 now has id: 134283530
- Nov 12 13:38:13 xs01 cluster-dlm: [1567]: info: crm_new_peer: Node 134283530 is now known as xs02
- Nov 12 13:38:13 xs01 cluster-dlm: [1567]: info: crm_update_peer: Node xs02: id=134283530 state=lost (new) addr=r(0) ip(10.1.1.136) votes=1 born=340 seen=0 proc=00000000000000000000000000151312
- Nov 12 13:38:14 xs01 mgmtd: [5677]: info: CIB query: cib
- Nov 12 13:38:14 xs01 lrmd: [5673]: info: operation start[226] on dlm:0 for client 5676: pid 1555 exited with return code 0
- Nov 12 13:38:14 xs01 crmd: [5676]: info: process_lrm_event: LRM operation dlm:0_start_0 (call=226, rc=0, cib-update=243, confirmed=true) ok
- Nov 12 13:38:14 xs01 crmd: [5676]: info: te_rsc_command: Initiating action 8: monitor dlm:0_monitor_10000 on xs01 (local)
- Nov 12 13:38:14 xs01 lrmd: [5673]: info: rsc:dlm:0 monitor[227] (pid 1578)
- Nov 12 13:38:15 xs01 lrmd: [5673]: info: operation monitor[227] on dlm:0 for client 5676: pid 1578 exited with return code 0
- Nov 12 13:38:15 xs01 crmd: [5676]: info: process_lrm_event: LRM operation dlm:0_monitor_10000 (call=227, rc=0, cib-update=244, confirmed=false) ok
- Nov 12 13:38:15 xs01 crmd: [5676]: info: te_rsc_command: Initiating action 9: start o2cb:0_start_0 on xs01 (local)
- Nov 12 13:38:15 xs01 lrmd: [5673]: info: rsc:o2cb:0 start[228] (pid 1587)
- Nov 12 13:38:15 xs01 o2cb(o2cb:0)[1587]: [1598]: INFO: Stack glue driver not loaded
- Nov 12 13:38:15 xs01 o2cb(o2cb:0)[1587]: [1600]: INFO: Starting o2cb:0
- Nov 12 13:38:15 xs01 kernel: [252414.058576] ocfs2: Registered cluster interface user
- Nov 12 13:38:15 xs01 kernel: [252414.073168] OCFS2 Node Manager 1.5.0
- Nov 12 13:38:15 xs01 kernel: [252414.091556] OCFS2 1.5.0
- Nov 12 13:38:15 xs01 ocfs2_controld.pcmk: Core dumps enabled: /var/lib/openais
- Nov 12 13:38:15 xs01 corosync[5664]: [pcmk ] info: pcmk_notify: Enabling node notifications for child 1612 (0x7f6fc8013750)
- Nov 12 13:38:15 xs01 ocfs2_controld: Cluster connection established. Local node id: 117506314
- Nov 12 13:38:15 xs01 ocfs2_controld: Added Pacemaker as client 1 with fd 7
- Nov 12 13:38:15 xs01 ocfs2_controld: Initializing CKPT service (try 1)
- Nov 12 13:38:15 xs01 ocfs2_controld: Connected to CKPT service with handle 0x327b23c600000000
- Nov 12 13:38:15 xs01 ocfs2_controld: Opening checkpoint "ocfs2:controld:0701010a" (try 1)
- Nov 12 13:38:15 xs01 ocfs2_controld: Opened checkpoint "ocfs2:controld:0701010a" with handle 0x6633487300000000
- Nov 12 13:38:15 xs01 ocfs2_controld: Writing to section "daemon_max_protocol" on checkpoint "ocfs2:controld:0701010a" (try 1)
- Nov 12 13:38:15 xs01 ocfs2_controld: Creating section "daemon_max_protocol" on checkpoint "ocfs2:controld:0701010a" (try 1)
- Nov 12 13:38:15 xs01 ocfs2_controld: Created section "daemon_max_protocol" on checkpoint "ocfs2:controld:0701010a"
- Nov 12 13:38:15 xs01 ocfs2_controld: Writing to section "ocfs2_max_protocol" on checkpoint "ocfs2:controld:0701010a" (try 1)
- Nov 12 13:38:15 xs01 ocfs2_controld: Creating section "ocfs2_max_protocol" on checkpoint "ocfs2:controld:0701010a" (try 1)
- Nov 12 13:38:15 xs01 ocfs2_controld: Created section "ocfs2_max_protocol" on checkpoint "ocfs2:controld:0701010a"
- Nov 12 13:38:15 xs01 ocfs2_controld: [1612]: info: get_cluster_type: Cluster type is: 'openais'
- Nov 12 13:38:15 xs01 ocfs2_controld: [1612]: info: init_ais_connection_classic: Creating connection to our Corosync plugin
- Nov 12 13:38:15 xs01 ocfs2_controld: [1612]: info: init_ais_connection_classic: AIS connection established
- Nov 12 13:38:15 xs01 ocfs2_controld: [1612]: info: get_ais_nodeid: Server details: id=117506314 uname=xs01 cname=pcmk
- Nov 12 13:38:15 xs01 ocfs2_controld: [1612]: info: init_ais_connection_once: Connection to 'classic openais (with plugin)': established
- Nov 12 13:38:15 xs01 ocfs2_controld: [1612]: info: crm_new_peer: Node xs01 now has id: 117506314
- Nov 12 13:38:15 xs01 ocfs2_controld: [1612]: info: crm_new_peer: Node 117506314 is now known as xs01
- Nov 12 13:38:15 xs01 ocfs2_controld: Starting join for group "ocfs2:controld"
- Nov 12 13:38:15 xs01 ocfs2_controld: cpg_join succeeded
- Nov 12 13:38:15 xs01 ocfs2_controld: setup done
- Nov 12 13:38:15 xs01 ocfs2_controld: confchg called
- Nov 12 13:38:15 xs01 ocfs2_controld: ocfs2_controld (group "ocfs2:controld") confchg: members 1, left 0, joined 1
- Nov 12 13:38:15 xs01 ocfs2_controld: CPG is live, we are the first daemon
- Nov 12 13:38:15 xs01 ocfs2_controld: Opening checkpoint "ocfs2:controld" (try 1)
- Nov 12 13:38:15 xs01 ocfs2_controld: Opened checkpoint "ocfs2:controld" with handle 0x194e92eb00000001
- Nov 12 13:38:15 xs01 ocfs2_controld: Writing to section "daemon_protocol" on checkpoint "ocfs2:controld" (try 1)
- Nov 12 13:38:15 xs01 ocfs2_controld: Creating section "daemon_protocol" on checkpoint "ocfs2:controld" (try 1)
- Nov 12 13:38:15 xs01 ocfs2_controld: Created section "daemon_protocol" on checkpoint "ocfs2:controld"
- Nov 12 13:38:15 xs01 ocfs2_controld: Writing to section "ocfs2_protocol" on checkpoint "ocfs2:controld" (try 1)
- Nov 12 13:38:15 xs01 ocfs2_controld: Creating section "ocfs2_protocol" on checkpoint "ocfs2:controld" (try 1)
- Nov 12 13:38:15 xs01 ocfs2_controld: Created section "ocfs2_protocol" on checkpoint "ocfs2:controld"
- Nov 12 13:38:15 xs01 ocfs2_controld: Daemon protocol is 1.0
- Nov 12 13:38:15 xs01 ocfs2_controld: fs protocol is 1.0
- Nov 12 13:38:15 xs01 ocfs2_controld: Connecting to dlm_controld
- Nov 12 13:38:15 xs01 ocfs2_controld: Opening control device
- Nov 12 13:38:15 xs01 cluster-dlm: process_listener: client connection 5 fd 15
- Nov 12 13:38:15 xs01 ocfs2_controld: Starting to listen for mounters
- Nov 12 13:38:15 xs01 ocfs2_controld: new listening connection 4
- Nov 12 13:38:15 xs01 ocfs2_controld: [1612]: info: ais_dispatch_message: Membership 352: quorum still lost
- Nov 12 13:38:15 xs01 ocfs2_controld: [1612]: info: crm_update_peer: Node xs01: id=117506314 state=member (new) addr=r(0) ip(10.1.1.135) (new) votes=1 (new) born=348 seen=352 proc=00000000000000000000000000151312 (new)
- Nov 12 13:38:15 xs01 ocfs2_controld: [1612]: info: crm_new_peer: Node xs02 now has id: 134283530
- Nov 12 13:38:15 xs01 ocfs2_controld: [1612]: info: crm_new_peer: Node 134283530 is now known as xs02
- Nov 12 13:38:15 xs01 ocfs2_controld: [1612]: info: crm_update_peer: Node xs02: id=134283530 state=lost (new) addr=r(0) ip(10.1.1.136) votes=1 born=340 seen=0 proc=00000000000000000000000000151312
- Nov 12 13:38:15 xs01 mgmtd: [5677]: info: CIB query: cib
- Nov 12 13:38:17 xs01 lrmd: [5673]: info: operation start[228] on o2cb:0 for client 5676: pid 1587 exited with return code 0
- Nov 12 13:38:17 xs01 crmd: [5676]: info: process_lrm_event: LRM operation o2cb:0_start_0 (call=228, rc=0, cib-update=245, confirmed=true) ok
- Nov 12 13:38:17 xs01 crmd: [5676]: info: te_rsc_command: Initiating action 10: monitor o2cb:0_monitor_10000 on xs01 (local)
- Nov 12 13:38:17 xs01 lrmd: [5673]: info: rsc:o2cb:0 monitor[229] (pid 1624)
- Nov 12 13:38:17 xs01 lrmd: [5673]: info: operation monitor[229] on o2cb:0 for client 5676: pid 1624 exited with return code 0
- Nov 12 13:38:17 xs01 crmd: [5676]: info: process_lrm_event: LRM operation o2cb:0_monitor_10000 (call=229, rc=0, cib-update=246, confirmed=false) ok
- Nov 12 13:38:17 xs01 crmd: [5676]: info: te_rsc_command: Initiating action 51: start vmdisk-pri:0_start_0 on xs01 (local)
- Nov 12 13:38:17 xs01 lrmd: [5673]: info: rsc:vmdisk-pri:0 start[230] (pid 1638)
- Nov 12 13:38:17 xs01 Filesystem(vmdisk-pri:0)[1638]: [1675]: INFO: Running start for /dev/drbd/by-res/drbd0 on /vmdisk
- Nov 12 13:38:17 xs01 ocfs2_controld: new client connection 5
- Nov 12 13:38:17 xs01 ocfs2_controld: client msg
- Nov 12 13:38:17 xs01 ocfs2_controld: client message 0 from 5: MOUNT
- Nov 12 13:38:17 xs01 ocfs2_controld: start_mount: uuid "B0CE632E636744EDA5011D6501E78990", device "/dev/drbd0", service "ocfs2"
- Nov 12 13:38:17 xs01 ocfs2_controld: Adding service "ocfs2" to device "/dev/drbd0" uuid "B0CE632E636744EDA5011D6501E78990"
- Nov 12 13:38:17 xs01 ocfs2_controld: Starting join for group "ocfs2:B0CE632E636744EDA5011D6501E78990"
- Nov 12 13:38:17 xs01 ocfs2_controld: cpg_join succeeded
- Nov 12 13:38:17 xs01 ocfs2_controld: start_mount returns 0
- Nov 12 13:38:17 xs01 ocfs2_controld: confchg called
- Nov 12 13:38:17 xs01 ocfs2_controld: group "ocfs2:B0CE632E636744EDA5011D6501E78990" confchg: members 1, left 0, joined 1
- Nov 12 13:38:17 xs01 ocfs2_controld: Node 117506314 joins group ocfs2:B0CE632E636744EDA5011D6501E78990
- Nov 12 13:38:17 xs01 ocfs2_controld: This node joins group ocfs2:B0CE632E636744EDA5011D6501E78990
- Nov 12 13:38:17 xs01 ocfs2_controld: Filling node 117506314 to group ocfs2:B0CE632E636744EDA5011D6501E78990
- Nov 12 13:38:17 xs01 ocfs2_controld: Registering mountgroup B0CE632E636744EDA5011D6501E78990 with dlm_controld
- Nov 12 13:38:17 xs01 ocfs2_controld: Registering "B0CE632E636744EDA5011D6501E78990" with dlm_controld
- Nov 12 13:38:17 xs01 ocfs2_controld: message from dlmcontrol
- Nov 12 13:38:17 xs01 ocfs2_controld: Registration of "B0CE632E636744EDA5011D6501E78990" complete
- Nov 12 13:38:17 xs01 ocfs2_controld: Mountgroup B0CE632E636744EDA5011D6501E78990 successfully registered with dlm_controld
- Nov 12 13:38:17 xs01 ocfs2_controld: notify_mount_client sending 0 "OK"
- Nov 12 13:38:17 xs01 ocfs2_controld: Notified client: 1
- Nov 12 13:38:17 xs01 cluster-dlm: process_uevent: uevent: add@/kernel/dlm/B0CE632E636744EDA5011D6501E78990
- Nov 12 13:38:17 xs01 cluster-dlm: process_uevent: kernel: add@ B0CE632E636744EDA5011D6501E78990
- Nov 12 13:38:17 xs01 cluster-dlm: process_uevent: uevent: online@/kernel/dlm/B0CE632E636744EDA5011D6501E78990
- Nov 12 13:38:17 xs01 cluster-dlm: process_uevent: kernel: online@ B0CE632E636744EDA5011D6501E78990
- Nov 12 13:38:17 xs01 kernel: [252416.308668] dlm: Using TCP for communications
- Nov 12 13:38:17 xs01 cluster-dlm: log_config: dlm:ls:B0CE632E636744EDA5011D6501E78990 conf 1 1 0 memb 117506314 join 117506314 left
- Nov 12 13:38:17 xs01 cluster-dlm: add_change: B0CE632E636744EDA5011D6501E78990 add_change cg 1 joined nodeid 117506314
- Nov 12 13:38:17 xs01 cluster-dlm: add_change: B0CE632E636744EDA5011D6501E78990 add_change cg 1 we joined
- Nov 12 13:38:17 xs01 cluster-dlm: add_change: B0CE632E636744EDA5011D6501E78990 add_change cg 1 counts member 1 joined 1 remove 0 failed 0
- Nov 12 13:38:17 xs01 cluster-dlm: check_fencing_done: B0CE632E636744EDA5011D6501E78990 check_fencing done
- Nov 12 13:38:17 xs01 cluster-dlm: check_quorum_done: B0CE632E636744EDA5011D6501E78990 check_quorum disabled
- Nov 12 13:38:17 xs01 cluster-dlm: check_fs_done: B0CE632E636744EDA5011D6501E78990 check_fs done
- Nov 12 13:38:17 xs01 cluster-dlm: send_info: B0CE632E636744EDA5011D6501E78990 send_start cg 1 flags 1 data2 0 counts 0 1 1 0 0
- Nov 12 13:38:17 xs01 cluster-dlm: receive_start: B0CE632E636744EDA5011D6501E78990 receive_start 117506314:1 len 76
- Nov 12 13:38:17 xs01 cluster-dlm: match_change: B0CE632E636744EDA5011D6501E78990 match_change 117506314:1 matches cg 1
- Nov 12 13:38:17 xs01 cluster-dlm: wait_messages_done: B0CE632E636744EDA5011D6501E78990 wait_messages cg 1 got all 1
- Nov 12 13:38:17 xs01 cluster-dlm: start_kernel: B0CE632E636744EDA5011D6501E78990 start_kernel cg 1 member_count 1
- Nov 12 13:38:17 xs01 cluster-dlm: do_sysfs: write "3192944163" to "/sys/kernel/dlm/B0CE632E636744EDA5011D6501E78990/id"
- Nov 12 13:38:17 xs01 cluster-dlm: set_configfs_members: set_members mkdir "/sys/kernel/config/dlm/cluster/spaces/B0CE632E636744EDA5011D6501E78990/nodes/117506314"
- Nov 12 13:38:17 xs01 cluster-dlm: do_sysfs: write "1" to "/sys/kernel/dlm/B0CE632E636744EDA5011D6501E78990/control"
- Nov 12 13:38:17 xs01 cluster-dlm: do_sysfs: write "0" to "/sys/kernel/dlm/B0CE632E636744EDA5011D6501E78990/event_done"
- Nov 12 13:38:17 xs01 ocfs2_controld: client msg
- Nov 12 13:38:17 xs01 ocfs2_controld: client message 1 from 5: MRESULT
- Nov 12 13:38:17 xs01 ocfs2_controld: complete_mount: uuid "B0CE632E636744EDA5011D6501E78990", errcode "0", service "ocfs2"
- Nov 12 13:38:17 xs01 ocfs2_controld: client msg
- Nov 12 13:38:17 xs01 ocfs2_controld: client 5 fd 14 dead
- Nov 12 13:38:17 xs01 ocfs2_controld: client 5 fd -1 dead
- Nov 12 13:38:17 xs01 kernel: [252416.451333] ocfs2: Mounting device (147,0) on (node 1175063, slot 0) with ordered data mode.
- Nov 12 13:38:17 xs01 ocfs2_hb_ctl[1703]: ocfs2_hb_ctl /sbin/ocfs2_hb_ctl -P -d /dev/drbd0
- Nov 12 13:38:17 xs01 ocfs2_controld: new client connection 5
- Nov 12 13:38:17 xs01 ocfs2_controld: client msg
- Nov 12 13:38:17 xs01 ocfs2_controld: client message 6 from 5: LISTCLUSTERS
- Nov 12 13:38:17 xs01 ocfs2_controld: client msg
- Nov 12 13:38:17 xs01 ocfs2_controld: client 5 fd 14 dead
- Nov 12 13:38:17 xs01 ocfs2_controld: client 5 fd -1 dead
- Nov 12 13:38:17 xs01 lrmd: [5673]: info: operation start[230] on vmdisk-pri:0 for client 5676: pid 1638 exited with return code 0
- Nov 12 13:38:17 xs01 crmd: [5676]: info: process_lrm_event: LRM operation vmdisk-pri:0_start_0 (call=230, rc=0, cib-update=247, confirmed=true) ok
- Nov 12 13:38:17 xs01 crmd: [5676]: info: te_rsc_command: Initiating action 52: monitor vmdisk-pri:0_monitor_20000 on xs01 (local)
- Nov 12 13:38:17 xs01 lrmd: [5673]: info: rsc:vmdisk-pri:0 monitor[231] (pid 1710)
- Nov 12 13:38:17 xs01 lrmd: [5673]: info: operation monitor[231] on vmdisk-pri:0 for client 5676: pid 1710 exited with return code 0
- Nov 12 13:38:17 xs01 crmd: [5676]: info: process_lrm_event: LRM operation vmdisk-pri:0_monitor_20000 (call=231, rc=0, cib-update=248, confirmed=false) ok
- Nov 12 13:38:17 xs01 crmd: [5676]: notice: run_graph: ==== Transition 3 (Complete=23, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-900.bz2): Complete
- Nov 12 13:38:17 xs01 crmd: [5676]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
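With drbd0 Primary, the dependent stack comes up in order: dlm (dlm_controld), o2cb, then the OCFS2 mount of /dev/drbd/by-res/drbd0 on /vmdisk, and the DC settles in S_IDLE. Reconstructed from the resource names, agents, and monitor intervals in this log, the configuration behind that ordering looks roughly like the crm shell sketch below. The Filesystem agent, device, mountpoint, and intervals are taken from the log; the agent classes for dlm and o2cb and the clone/ordering structure are assumptions:

    primitive dlm ocf:pacemaker:controld \
        op monitor interval="10s"
    primitive o2cb ocf:ocfs2:o2cb \
        op monitor interval="10s"
    primitive vmdisk-pri ocf:heartbeat:Filesystem \
        params device="/dev/drbd/by-res/drbd0" directory="/vmdisk" fstype="ocfs2" \
        op monitor interval="20s"
    # drbd0 itself runs under the master/slave resource ms_drbd0 seen in the log;
    # dlm, o2cb and vmdisk-pri would be cloned and ordered/colocated after its Master role.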
- Nov 12 13:38:18 xs01 mgmtd: [5677]: info: CIB query: cib