- Apr 9 20:21:29 node1 heartbeat: [1890]: info: No log entry found in ha.cf -- use logd
- Apr 9 20:21:29 node1 heartbeat: [1890]: info: Enabling logging daemon
- Apr 9 20:21:29 node1 heartbeat: [1890]: info: logfile and debug file are those specified in logd config file (default /etc/logd.cf)
- Apr 9 20:21:29 node1 heartbeat: [1890]: info: **************************
- Apr 9 20:21:29 node1 heartbeat: [1890]: info: Configuration validated. Starting heartbeat 3.0.5
- Apr 9 20:21:29 node1 heartbeat: [2345]: info: heartbeat: version 3.0.5
- Apr 9 20:21:30 node1 heartbeat: [2345]: info: Heartbeat generation: 1333386521
- Apr 9 20:21:30 node1 heartbeat: [2345]: info: glib: UDP multicast heartbeat started for group 239.0.0.43 port 694 interface br0 (ttl=1 loop=0)
- Apr 9 20:21:30 node1 heartbeat: [2345]: info: glib: UDP Broadcast heartbeat started on port 694 (694) interface br1
- Apr 9 20:21:30 node1 heartbeat: [2345]: info: glib: UDP Broadcast heartbeat closed on port 694 interface br1 - Status: 1
- Apr 9 20:21:30 node1 heartbeat: [2345]: info: Local status now set to: 'up'
- Apr 9 20:21:30 node1 ntpd[2264]: bind() fd 27, family AF_INET6, port 123, scope 7, addr fe80::acf9:a3ff:fe76:4998, mcast=0 flags=0x11 fails: Cannot assign requested address
- Apr 9 20:21:30 node1 ntpd[2264]: unable to create socket on virbr0 (12) for fe80::acf9:a3ff:fe76:4998#123
- Apr 9 20:21:30 node1 ntpd[2264]: failed to initialize interface for address fe80::acf9:a3ff:fe76:4998
- Apr 9 20:21:30 node1 heartbeat: [2345]: info: Link node2:br0 up.
- Apr 9 20:21:30 node1 heartbeat: [2345]: info: Link quorumnode:br0 up.
- Apr 9 20:21:31 node1 heartbeat: [2345]: info: Comm_now_up(): updating status to active
- Apr 9 20:21:31 node1 heartbeat: [2345]: info: Local status now set to: 'active'
- Apr 9 20:21:31 node1 heartbeat: [2345]: info: Starting child client "/usr/lib/heartbeat/ccm" (112,122)
- Apr 9 20:21:31 node1 heartbeat: [2345]: info: Starting child client "/usr/lib/heartbeat/cib" (112,122)
- Apr 9 20:21:31 node1 heartbeat: [2345]: info: Starting child client "/usr/lib/heartbeat/lrmd -r" (0,0)
- Apr 9 20:21:31 node1 heartbeat: [2345]: info: Starting child client "/usr/lib/heartbeat/stonithd" (0,0)
- Apr 9 20:21:31 node1 heartbeat: [2345]: info: Starting child client "/usr/lib/heartbeat/attrd" (112,122)
- Apr 9 20:21:31 node1 heartbeat: [2345]: info: Starting child client "/usr/lib/heartbeat/crmd" (112,122)
- Apr 9 20:21:31 node1 heartbeat: [2345]: info: Starting child client "/usr/lib/heartbeat/dopd" (112,122)
- Apr 9 20:21:31 node1 heartbeat: [2345]: info: Status update for node quorumnode: status active
- Apr 9 20:21:31 node1 heartbeat: [2345]: info: Link node2:br1 up.
- Apr 9 20:21:31 node1 heartbeat: [2345]: info: Status update for node node2: status active
- Apr 9 20:21:31 node1 heartbeat: [2345]: info: Link node1:br1 up.
- Apr 9 20:21:31 node1 heartbeat: [2404]: info: Starting "/usr/lib/heartbeat/lrmd -r" as uid 0 gid 0 (pid 2404)
- Apr 9 20:21:31 node1 heartbeat: [2407]: info: Starting "/usr/lib/heartbeat/crmd" as uid 112 gid 122 (pid 2407)
- Apr 9 20:21:31 node1 heartbeat: [2402]: info: Starting "/usr/lib/heartbeat/ccm" as uid 112 gid 122 (pid 2402)
- Apr 9 20:21:31 node1 heartbeat: [2405]: info: Starting "/usr/lib/heartbeat/stonithd" as uid 0 gid 0 (pid 2405)
- Apr 9 20:21:31 node1 heartbeat: [2408]: info: Starting "/usr/lib/heartbeat/dopd" as uid 112 gid 122 (pid 2408)
- Apr 9 20:21:31 node1 heartbeat: [2403]: info: Starting "/usr/lib/heartbeat/cib" as uid 112 gid 122 (pid 2403)
- Apr 9 20:21:31 node1 heartbeat: [2406]: info: Starting "/usr/lib/heartbeat/attrd" as uid 112 gid 122 (pid 2406)
- Apr 9 20:21:31 node1 /usr/lib/heartbeat/dopd: [2408]: debug: PID=2408
- Apr 9 20:21:31 node1 /usr/lib/heartbeat/dopd: [2408]: debug: Signing in with heartbeat
- Apr 9 20:21:31 node1 /usr/lib/heartbeat/dopd: [2408]: debug: [We are node1]
- Apr 9 20:21:31 node1 /usr/lib/heartbeat/dopd: [2408]: debug: Setting message filter mode
- Apr 9 20:21:31 node1 /usr/lib/heartbeat/dopd: [2408]: debug: Setting message signal
- Apr 9 20:21:31 node1 /usr/lib/heartbeat/dopd: [2408]: debug: Waiting for messages...
- Apr 9 20:21:31 node1 ccm: [2402]: info: Hostname: node1
- Apr 9 20:21:31 node1 lrmd: [2404]: info: enabling coredumps
- Apr 9 20:21:31 node1 lrmd: [2404]: WARN: Core dumps could be lost if multiple dumps occur.
- Apr 9 20:21:31 node1 lrmd: [2404]: WARN: Consider setting non-default value in /proc/sys/kernel/core_pattern (or equivalent) for maximum supportability
- Apr 9 20:21:31 node1 lrmd: [2404]: WARN: Consider setting /proc/sys/kernel/core_uses_pid (or equivalent) to 1 for maximum supportability
- Apr 9 20:21:31 node1 lrmd: [2404]: info: Started.
- Apr 9 20:21:31 node1 cib: [2403]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/hacluster
- Apr 9 20:21:31 node1 attrd: [2406]: info: Invoked: /usr/lib/heartbeat/attrd
- Apr 9 20:21:31 node1 attrd: [2406]: notice: crm_cluster_connect: Connecting to cluster infrastructure: heartbeat
- Apr 9 20:21:31 node1 stonith-ng: [2405]: info: Invoked: /usr/lib/heartbeat/stonithd
- Apr 9 20:21:31 node1 heartbeat: [2345]: info: the send queue length from heartbeat to client ccm is set to 1024
- Apr 9 20:21:31 node1 stonith-ng: [2405]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/root
- Apr 9 20:21:31 node1 stonith-ng: [2405]: info: get_cluster_type: Assuming a 'heartbeat' based cluster
- Apr 9 20:21:31 node1 stonith-ng: [2405]: notice: crm_cluster_connect: Connecting to cluster infrastructure: heartbeat
- Apr 9 20:21:31 node1 attrd: [2406]: notice: main: Starting mainloop...
- Apr 9 20:21:31 node1 crmd: [2407]: info: Invoked: /usr/lib/heartbeat/crmd
- Apr 9 20:21:31 node1 crmd: [2407]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/hacluster
- Apr 9 20:21:31 node1 crmd: [2407]: info: main: CRM Hg Version: 9971ebba4494012a93c03b40a2c58ec0eb60f50c
- Apr 9 20:21:31 node1 crmd: [2407]: info: crmd_init: Starting crmd
- Apr 9 20:21:31 node1 cib: [2403]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
- Apr 9 20:21:31 node1 cib: [2403]: info: validate_with_relaxng: Creating RNG parser context
- Apr 9 20:21:31 node1 cib: [2403]: info: startCib: CIB Initialization completed successfully
- Apr 9 20:21:31 node1 cib: [2403]: info: get_cluster_type: Assuming a 'heartbeat' based cluster
- Apr 9 20:21:31 node1 cib: [2403]: notice: crm_cluster_connect: Connecting to cluster infrastructure: heartbeat
- Apr 9 20:21:31 node1 heartbeat: [2345]: info: the send queue length from heartbeat to client attrd is set to 1024
- Apr 9 20:21:31 node1 stonith-ng: [2405]: info: register_heartbeat_conn: Hostname: node1
- Apr 9 20:21:31 node1 stonith-ng: [2405]: info: register_heartbeat_conn: UUID: 1ab0690c-5aa0-4d9c-ae4e-b662e0ca54e5
- Apr 9 20:21:31 node1 stonith-ng: [2405]: info: main: Starting stonith-ng mainloop
- Apr 9 20:21:31 node1 heartbeat: [2345]: info: the send queue length from heartbeat to client stonith-ng is set to 1024
- Apr 9 20:21:31 node1 cib: [2403]: info: register_heartbeat_conn: Hostname: node1
- Apr 9 20:21:31 node1 cib: [2403]: info: register_heartbeat_conn: UUID: 1ab0690c-5aa0-4d9c-ae4e-b662e0ca54e5
- Apr 9 20:21:31 node1 heartbeat: [2345]: info: the send queue length from heartbeat to client cib is set to 1024
- Apr 9 20:21:31 node1 cib: [2403]: info: ccm_connect: Registering with CCM...
- Apr 9 20:21:31 node1 cib: [2403]: info: cib_init: Requesting the list of configured nodes
- Apr 9 20:21:32 node1 cib: [2403]: info: cib_init: Starting cib mainloop
- Apr 9 20:21:32 node1 cib: [2403]: info: cib_client_status_callback: Status update: Client node1/cib now has status [join]
- Apr 9 20:21:32 node1 cib: [2403]: info: crm_new_peer: Node 0 is now known as node1
- Apr 9 20:21:32 node1 cib: [2403]: info: crm_update_peer_proc: node1.cib is now online
- Apr 9 20:21:32 node1 cib: [2403]: WARN: cib_peer_callback: Discarding cib_apply_diff message (5688) from node2: not in our membership
- Apr 9 20:21:32 node1 crmd: [2407]: info: do_cib_control: CIB connection established
- Apr 9 20:21:32 node1 crmd: [2407]: info: get_cluster_type: Assuming a 'heartbeat' based cluster
- Apr 9 20:21:32 node1 crmd: [2407]: notice: crm_cluster_connect: Connecting to cluster infrastructure: heartbeat
- Apr 9 20:21:32 node1 cib: [2403]: info: cib_client_status_callback: Status update: Client node1/cib now has status [online]
- Apr 9 20:21:32 node1 cib: [2403]: info: cib_client_status_callback: Status update: Client node2/cib now has status [online]
- Apr 9 20:21:32 node1 cib: [2403]: info: crm_new_peer: Node 0 is now known as node2
- Apr 9 20:21:32 node1 crmd: [2407]: info: register_heartbeat_conn: Hostname: node1
- Apr 9 20:21:32 node1 cib: [2403]: info: crm_update_peer_proc: node2.cib is now online
- Apr 9 20:21:32 node1 crmd: [2407]: info: register_heartbeat_conn: UUID: 1ab0690c-5aa0-4d9c-ae4e-b662e0ca54e5
- Apr 9 20:21:32 node1 cib: [2403]: info: cib_client_status_callback: Status update: Client quorumnode/cib now has status [online]
- Apr 9 20:21:33 node1 heartbeat: [2345]: info: the send queue length from heartbeat to client crmd is set to 1024
- Apr 9 20:21:33 node1 cib: [2403]: info: crm_new_peer: Node 0 is now known as quorumnode
- Apr 9 20:21:33 node1 cib: [2403]: info: crm_update_peer_proc: quorumnode.cib is now online
- Apr 9 20:21:33 node1 cib: [2403]: info: cib_process_diff: Diff 5.116.26 -> 5.116.27 not applied to 5.116.0: current "num_updates" is less than required
- Apr 9 20:21:33 node1 cib: [2403]: info: cib_server_process_diff: Requesting re-sync from peer
- Apr 9 20:21:33 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.116.27 -> 5.116.28 (sync in progress)
- Apr 9 20:21:33 node1 crmd: [2407]: info: do_ha_control: Connected to the cluster
- Apr 9 20:21:33 node1 crmd: [2407]: info: do_ccm_control: CCM connection established... waiting for first callback
- Apr 9 20:21:33 node1 crmd: [2407]: info: do_started: Delaying start, no membership data (0000000000100000)
- Apr 9 20:21:33 node1 crmd: [2407]: info: crmd_init: Starting crmd's mainloop
- Apr 9 20:21:33 node1 crmd: [2407]: info: config_query_callback: Shutdown escalation occurs after: 1200000ms
- Apr 9 20:21:33 node1 crmd: [2407]: info: config_query_callback: Checking for expired actions every 900000ms
- Apr 9 20:21:33 node1 crmd: [2407]: notice: crmd_client_status_callback: Status update: Client node1/crmd now has status [online] (DC=false)
- Apr 9 20:21:33 node1 crmd: [2407]: info: crm_new_peer: Node 0 is now known as node1
- Apr 9 20:21:33 node1 crmd: [2407]: info: ais_status_callback: status: node1 is now unknown
- Apr 9 20:21:33 node1 crmd: [2407]: info: crm_update_peer_proc: node1.crmd is now online
- Apr 9 20:21:33 node1 crmd: [2407]: notice: crmd_peer_update: Status update: Client node1/crmd now has status [online] (DC=<null>)
- Apr 9 20:21:33 node1 crmd: [2407]: notice: crmd_client_status_callback: Status update: Client node1/crmd now has status [online] (DC=false)
- Apr 9 20:21:34 node1 crmd: [2407]: notice: crmd_client_status_callback: Status update: Client node2/crmd now has status [online] (DC=false)
- Apr 9 20:21:34 node1 crmd: [2407]: info: crm_new_peer: Node 0 is now known as node2
- Apr 9 20:21:34 node1 crmd: [2407]: info: ais_status_callback: status: node2 is now unknown
- Apr 9 20:21:34 node1 crmd: [2407]: info: crm_update_peer_proc: node2.crmd is now online
- Apr 9 20:21:34 node1 crmd: [2407]: notice: crmd_peer_update: Status update: Client node2/crmd now has status [online] (DC=<null>)
- Apr 9 20:21:34 node1 crmd: [2407]: notice: crmd_client_status_callback: Status update: Client quorumnode/crmd now has status [offline] (DC=false)
- Apr 9 20:21:34 node1 crmd: [2407]: info: crm_new_peer: Node 0 is now known as quorumnode
- Apr 9 20:21:34 node1 crmd: [2407]: info: ais_status_callback: status: quorumnode is now unknown
- Apr 9 20:21:34 node1 crmd: [2407]: info: do_started: Delaying start, no membership data (0000000000100000)
- Apr 9 20:21:34 node1 cib: [2403]: info: cib_replace_notify: Replaced: -1.-1.-1 -> 5.116.28 from node2
- Apr 9 20:21:35 node1 crmd: [2407]: info: mem_handle_event: Got an event OC_EV_MS_NEW_MEMBERSHIP from ccm
- Apr 9 20:21:35 node1 crmd: [2407]: info: mem_handle_event: instance=19, nodes=3, new=3, lost=0, n_idx=0, new_idx=0, old_idx=6
- Apr 9 20:21:35 node1 crmd: [2407]: info: crmd_ccm_msg_callback: Quorum (re)attained after event=NEW MEMBERSHIP (id=19)
- Apr 9 20:21:35 node1 crmd: [2407]: info: ccm_event_detail: NEW MEMBERSHIP: trans=19, nodes=3, new=3, lost=0 n_idx=0, new_idx=0, old_idx=6
- Apr 9 20:21:35 node1 crmd: [2407]: info: ccm_event_detail: #011CURRENT: node2 [nodeid=2, born=1]
- Apr 9 20:21:35 node1 crmd: [2407]: info: ccm_event_detail: #011CURRENT: quorumnode [nodeid=0, born=17]
- Apr 9 20:21:35 node1 crmd: [2407]: info: ccm_event_detail: #011CURRENT: node1 [nodeid=1, born=19]
- Apr 9 20:21:35 node1 crmd: [2407]: info: ccm_event_detail: #011NEW: node2 [nodeid=2, born=1]
- Apr 9 20:21:35 node1 crmd: [2407]: info: ccm_event_detail: #011NEW: quorumnode [nodeid=0, born=17]
- Apr 9 20:21:35 node1 crmd: [2407]: info: ccm_event_detail: #011NEW: node1 [nodeid=1, born=19]
- Apr 9 20:21:35 node1 crmd: [2407]: info: crm_get_peer: Node node2 now has id: 2
- Apr 9 20:21:35 node1 crmd: [2407]: info: ais_status_callback: status: node2 is now member (was unknown)
- Apr 9 20:21:35 node1 crmd: [2407]: info: crm_update_peer: Node node2: id=2 state=member (new) addr=(null) votes=-1 born=1 seen=19 proc=00000000000000000000000000000200
- Apr 9 20:21:35 node1 crmd: [2407]: info: crm_update_peer_proc: node2.ais is now online
- Apr 9 20:21:35 node1 crmd: [2407]: info: ais_status_callback: status: quorumnode is now member (was unknown)
- Apr 9 20:21:35 node1 crmd: [2407]: info: crm_update_peer: Node quorumnode: id=0 state=member (new) addr=(null) votes=-1 born=17 seen=19 proc=00000000000000000000000000000000
- Apr 9 20:21:35 node1 crmd: [2407]: info: crm_update_peer_proc: quorumnode.ais is now online
- Apr 9 20:21:35 node1 crmd: [2407]: info: crm_update_peer_proc: quorumnode.crmd is now online
- Apr 9 20:21:35 node1 crmd: [2407]: notice: crmd_peer_update: Status update: Client quorumnode/crmd now has status [online] (DC=<null>)
- Apr 9 20:21:35 node1 crmd: [2407]: info: crm_get_peer: Node node1 now has id: 1
- Apr 9 20:21:35 node1 crmd: [2407]: info: ais_status_callback: status: node1 is now member (was unknown)
- Apr 9 20:21:35 node1 crmd: [2407]: info: crm_update_peer: Node node1: id=1 state=member (new) addr=(null) votes=-1 born=19 seen=19 proc=00000000000000000000000000000200
- Apr 9 20:21:35 node1 crmd: [2407]: info: crm_update_peer_proc: node1.ais is now online
- Apr 9 20:21:35 node1 crmd: [2407]: info: do_started: The local CRM is operational
- Apr 9 20:21:35 node1 crmd: [2407]: info: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
- Apr 9 20:21:35 node1 cib: [2403]: info: mem_handle_event: Got an event OC_EV_MS_NEW_MEMBERSHIP from ccm
- Apr 9 20:21:35 node1 cib: [2403]: info: mem_handle_event: instance=19, nodes=3, new=3, lost=0, n_idx=0, new_idx=0, old_idx=6
- Apr 9 20:21:35 node1 cib: [2403]: info: cib_ccm_msg_callback: Processing CCM event=NEW MEMBERSHIP (id=19)
- Apr 9 20:21:35 node1 cib: [2403]: info: crm_get_peer: Node node2 now has id: 2
- Apr 9 20:21:35 node1 cib: [2403]: info: crm_update_peer: Node node2: id=2 state=member (new) addr=(null) votes=-1 born=1 seen=19 proc=00000000000000000000000000000100
- Apr 9 20:21:35 node1 cib: [2403]: info: crm_update_peer_proc: node2.ais is now online
- Apr 9 20:21:35 node1 cib: [2403]: info: crm_update_peer_proc: node2.crmd is now online
- Apr 9 20:21:35 node1 cib: [2403]: info: crm_update_peer: Node quorumnode: id=0 state=member (new) addr=(null) votes=-1 born=17 seen=19 proc=00000000000000000000000000000100
- Apr 9 20:21:35 node1 cib: [2403]: info: crm_update_peer_proc: quorumnode.ais is now online
- Apr 9 20:21:35 node1 cib: [2403]: info: crm_update_peer_proc: quorumnode.crmd is now online
- Apr 9 20:21:35 node1 cib: [2403]: info: crm_get_peer: Node node1 now has id: 1
- Apr 9 20:21:35 node1 cib: [2403]: info: crm_update_peer: Node node1: id=1 state=member (new) addr=(null) votes=-1 born=19 seen=19 proc=00000000000000000000000000000100
- Apr 9 20:21:35 node1 cib: [2403]: info: crm_update_peer_proc: node1.ais is now online
- Apr 9 20:21:35 node1 cib: [2403]: info: crm_update_peer_proc: node1.crmd is now online
- Apr 9 20:21:36 node1 crmd: [2407]: info: te_connect_stonith: Attempting connection to fencing daemon...
- Apr 9 20:21:37 node1 crmd: [2407]: info: te_connect_stonith: Connected
- Apr 9 20:21:37 node1 crmd: [2407]: info: update_dc: Set DC to node2 (3.0.5)
- Apr 9 20:21:38 node1 ntpd[2264]: synchronized to 10.52.0.33, stratum 3
- Apr 9 20:21:38 node1 ntpd[2264]: kernel time sync status change 2001
- Apr 9 20:22:52 node1 crmd: [2407]: info: update_attrd: Connecting to attrd...
- Apr 9 20:22:52 node1 crmd: [2407]: info: do_state_transition: State transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
- Apr 9 20:22:52 node1 attrd: [2406]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
- Apr 9 20:22:53 node1 crmd: [2407]: info: erase_xpath_callback: Deletion of "//node_state[@uname='node1']/transient_attributes": ok (rc=0)
- Apr 9 20:22:54 node1 crmd: [2407]: info: do_lrm_rsc_op: Performing key=13:233:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_sysadmin_notify:1_monitor_0 )
- Apr 9 20:22:54 node1 lrmd: [2404]: info: rsc:p_sysadmin_notify:1 probe[2] (pid 2863)
- Apr 9 20:22:54 node1 crmd: [2407]: info: do_lrm_rsc_op: Performing key=14:233:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_ping:1_monitor_0 )
- Apr 9 20:22:54 node1 lrmd: [2404]: info: rsc:p_ping:1 probe[3] (pid 2864)
- Apr 9 20:22:54 node1 crmd: [2407]: info: do_lrm_rsc_op: Performing key=15:233:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_vmstore:0_monitor_0 )
- Apr 9 20:22:54 node1 lrmd: [2404]: info: rsc:p_drbd_vmstore:0 probe[4] (pid 2865)
- Apr 9 20:22:54 node1 lrmd: [2404]: notice: lrmd_rsc_new(): No lrm_rprovider field in message
- Apr 9 20:22:54 node1 crmd: [2407]: info: do_lrm_rsc_op: Performing key=16:233:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=stonithnode1_monitor_0 )
- Apr 9 20:22:54 node1 lrmd: [2404]: info: rsc:stonithnode1 probe[5] (pid 2866)
- Apr 9 20:22:54 node1 lrmd: [2404]: notice: lrmd_rsc_new(): No lrm_rprovider field in message
- Apr 9 20:22:54 node1 crmd: [2407]: info: do_lrm_rsc_op: Performing key=17:233:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=stonithnode2_monitor_0 )
- Apr 9 20:22:54 node1 lrmd: [2404]: info: rsc:stonithnode2 probe[6] (pid 2867)
- Apr 9 20:22:54 node1 crmd: [2407]: info: do_lrm_rsc_op: Performing key=18:233:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_mount1:0_monitor_0 )
- Apr 9 20:22:54 node1 lrmd: [2404]: info: rsc:p_drbd_mount1:0 probe[7] (pid 2868)
- Apr 9 20:22:54 node1 crmd: [2407]: info: do_lrm_rsc_op: Performing key=19:233:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_mount2:1_monitor_0 )
- Apr 9 20:22:54 node1 lrmd: [2404]: notice: lrmd_rsc_new(): No lrm_rprovider field in message
- Apr 9 20:22:54 node1 crmd: [2407]: info: do_lrm_rsc_op: Performing key=20:233:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_libvirt-bin_monitor_0 )
- Apr 9 20:22:54 node1 crmd: [2407]: info: do_lrm_rsc_op: Performing key=21:233:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_fs_vmstore_monitor_0 )
- Apr 9 20:22:54 node1 crmd: [2407]: info: do_lrm_rsc_op: Performing key=22:233:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_vm_monitor_0 )
- Apr 9 20:22:54 node1 stonith-ng: [2405]: notice: stonith_device_action: Device stonithnode2 not found
- Apr 9 20:22:54 node1 stonith-ng: [2405]: info: stonith_command: Processed st_execute from lrmd: rc=-12
- Apr 9 20:22:54 node1 stonith-ng: [2405]: notice: stonith_device_action: Device stonithnode1 not found
- Apr 9 20:22:54 node1 stonith-ng: [2405]: info: stonith_command: Processed st_execute from lrmd: rc=-12
- Apr 9 20:22:54 node1 lrmd: [2404]: info: operation monitor[6] on stonithnode2 for client 2407: pid 2867 exited with return code 7
- Apr 9 20:22:54 node1 lrmd: [2404]: info: operation monitor[5] on stonithnode1 for client 2407: pid 2866 exited with return code 7
- Apr 9 20:22:54 node1 crmd: [2407]: info: process_lrm_event: LRM operation stonithnode2_monitor_0 (call=6, rc=7, cib-update=7, confirmed=true) not running
- Apr 9 20:22:54 node1 crmd: [2407]: info: process_lrm_event: LRM operation stonithnode1_monitor_0 (call=5, rc=7, cib-update=8, confirmed=true) not running
- Apr 9 20:22:54 node1 lrmd: [2404]: info: operation monitor[2] on p_sysadmin_notify:1 for client 2407: pid 2863 exited with return code 7
- Apr 9 20:22:54 node1 crmd: [2407]: info: process_lrm_event: LRM operation p_sysadmin_notify:1_monitor_0 (call=2, rc=7, cib-update=9, confirmed=true) not running
- Apr 9 20:22:54 node1 lrmd: [2404]: info: operation monitor[3] on p_ping:1 for client 2407: pid 2864 exited with return code 7
- Apr 9 20:22:54 node1 crmd: [2407]: info: process_lrm_event: LRM operation p_ping:1_monitor_0 (call=3, rc=7, cib-update=10, confirmed=true) not running
- Apr 9 20:22:54 node1 crm_attribute: [2935]: info: Invoked: crm_attribute -N node1 -n master-p_drbd_vmstore:0 -l reboot -D
- Apr 9 20:22:54 node1 crm_attribute: [2936]: info: Invoked: crm_attribute -N node1 -n master-p_drbd_mount1:0 -l reboot -D
- Apr 9 20:22:54 node1 lrmd: [2404]: info: operation monitor[4] on p_drbd_vmstore:0 for client 2407: pid 2865 exited with return code 7
- Apr 9 20:22:54 node1 lrmd: [2404]: info: operation monitor[7] on p_drbd_mount1:0 for client 2407: pid 2868 exited with return code 7
- Apr 9 20:22:54 node1 crmd: [2407]: info: process_lrm_event: LRM operation p_drbd_vmstore:0_monitor_0 (call=4, rc=7, cib-update=11, confirmed=true) not running
- Apr 9 20:22:54 node1 crmd: [2407]: info: process_lrm_event: LRM operation p_drbd_mount1:0_monitor_0 (call=7, rc=7, cib-update=12, confirmed=true) not running
- Apr 9 20:22:55 node1 lrmd: [2404]: info: rsc:p_drbd_mount2:1 probe[8] (pid 2937)
- Apr 9 20:22:55 node1 lrmd: [2404]: info: rsc:p_libvirt-bin probe[9] (pid 2938)
- Apr 9 20:22:55 node1 lrmd: [2404]: info: rsc:p_fs_vmstore probe[10] (pid 2939)
- Apr 9 20:22:55 node1 lrmd: [2404]: info: rsc:p_vm probe[11] (pid 2940)
- Apr 9 20:22:55 node1 lrmd: [2404]: info: operation monitor[9] on p_libvirt-bin for client 2407: pid 2938 exited with return code 0
- Apr 9 20:22:55 node1 crmd: [2407]: info: process_lrm_event: LRM operation p_libvirt-bin_monitor_0 (call=9, rc=0, cib-update=13, confirmed=true) ok
- Apr 9 20:22:55 node1 Filesystem[2939]: [2956]: WARNING: Couldn't find device [/dev/drbd0]. Expected /dev/??? to exist
- Apr 9 20:22:55 node1 lrmd: [2404]: info: operation monitor[10] on p_fs_vmstore for client 2407: pid 2939 exited with return code 7
- Apr 9 20:22:55 node1 crmd: [2407]: info: process_lrm_event: LRM operation p_fs_vmstore_monitor_0 (call=10, rc=7, cib-update=14, confirmed=true) not running
- Apr 9 20:22:55 node1 VirtualDomain[2940]: [3013]: INFO: Configuration file /mnt/storage/vmstore/config/vm.xml not readable during probe.
- Apr 9 20:22:55 node1 lrmd: [2404]: info: operation monitor[11] on p_vm for client 2407: pid 2940 exited with return code 7
- Apr 9 20:22:55 node1 crmd: [2407]: info: process_lrm_event: LRM operation p_vm_monitor_0 (call=11, rc=7, cib-update=15, confirmed=true) not running
- Apr 9 20:22:55 node1 crm_attribute: [3014]: info: Invoked: crm_attribute -N node1 -n master-p_drbd_mount2:1 -l reboot -D
- Apr 9 20:22:55 node1 lrmd: [2404]: info: operation monitor[8] on p_drbd_mount2:1 for client 2407: pid 2937 exited with return code 7
- Apr 9 20:22:55 node1 crmd: [2407]: info: process_lrm_event: LRM operation p_drbd_mount2:1_monitor_0 (call=8, rc=7, cib-update=16, confirmed=true) not running
- Apr 9 20:22:56 node1 attrd: [2406]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
- Apr 9 20:22:56 node1 attrd: [2406]: notice: attrd_perform_update: Sent update 12: probe_complete=true
- Apr 9 20:22:56 node1 attrd: [2406]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
- Apr 9 20:22:56 node1 attrd: [2406]: notice: attrd_perform_update: Sent update 15: probe_complete=true
- Apr 9 20:26:30 node1 ntpd[2264]: Listening on interface #13 virbr0, fe80::acf9:a3ff:fe76:4998#123 Enabled
- Apr 9 20:26:30 node1 ntpd[2264]: Listening on interface #14 tap0, fe80::7060:ceff:fe9b:7c46#123 Enabled
- Apr 9 20:26:30 node1 ntpd[2264]: new interface(s) found: waking up resolver
- Apr 9 20:30:09 node1 cib: [2403]: info: apply_xml_diff: Digest mis-match: expected f3816d6269e8cd580705d41fe50810d0, calculated ae0d5a39a0170ac9076f9bc4cd9deaa3
- Apr 9 20:30:09 node1 cib: [2403]: notice: cib_process_diff: Diff 5.116.64 -> 5.116.65 not applied to 5.116.64: Failed application of an update diff
- Apr 9 20:30:09 node1 cib: [2403]: info: cib_server_process_diff: Requesting re-sync from peer
- Apr 9 20:30:09 node1 crmd: [2407]: info: delete_resource: Removing resource p_drbd_vmstore:0 for 3951_mount2_resource (internal) on node1
- Apr 9 20:30:09 node1 crmd: [2407]: info: notify_deleted: Notifying 3951_mount2_resource on node1 that p_drbd_vmstore:0 was deleted
- Apr 9 20:30:09 node1 crmd: [2407]: WARN: decode_transition_key: Bad UUID (crm-resource-3951) in sscanf result (3) for 0:0:crm-resource-3951
- Apr 9 20:30:09 node1 crmd: [2407]: info: send_direct_ack: ACK'ing resource op p_drbd_vmstore:0_delete_60000 from 0:0:crm-resource-3951: lrm_invoke-lrmd-1334021409-5
- Apr 9 20:30:09 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.116.64 -> 5.116.65 (sync in progress)
- Apr 9 20:30:09 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.116.65 -> 5.117.1 (sync in progress)
- Apr 9 20:30:09 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.117.1 -> 5.117.2 (sync in progress)
- Apr 9 20:30:09 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.117.1 -> 5.117.2 (sync in progress)
- Apr 9 20:30:09 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.117.2 -> 5.117.3 (sync in progress)
- Apr 9 20:30:09 node1 cib: [2403]: info: cib_process_diff: Diff 5.117.3 -> 5.117.4 not applied to 5.116.64: current "epoch" is less than required
- Apr 9 20:30:09 node1 cib: [2403]: info: cib_server_process_diff: Requesting re-sync from peer
- Apr 9 20:30:09 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.117.4 -> 5.117.5 (sync in progress)
- Apr 9 20:30:09 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.117.5 -> 5.117.6 (sync in progress)
- Apr 9 20:30:09 node1 crmd: [2407]: notice: do_lrm_invoke: Not creating resource for a delete event: (null)
- Apr 9 20:30:09 node1 crmd: [2407]: info: notify_deleted: Notifying 3951_mount2_resource on node1 that p_drbd_vmstore:1 was deleted
- Apr 9 20:30:09 node1 crmd: [2407]: WARN: decode_transition_key: Bad UUID (crm-resource-3951) in sscanf result (3) for 0:0:crm-resource-3951
- Apr 9 20:30:09 node1 crmd: [2407]: info: send_direct_ack: ACK'ing resource op p_drbd_vmstore:1_delete_60000 from 0:0:crm-resource-3951: lrm_invoke-lrmd-1334021409-6
- Apr 9 20:30:09 node1 crmd: [2407]: WARN: decode_transition_key: Bad UUID (crm-resource-3951) in sscanf result (3) for 0:0:crm-resource-3951
- Apr 9 20:30:09 node1 crmd: [2407]: info: send_direct_ack: ACK'ing resource op p_drbd_vmstore:1_delete_60000 from 0:0:crm-resource-3951: lrm_invoke-lrmd-1334021409-7
- Apr 9 20:30:10 node1 crmd: [2407]: info: do_lrm_rsc_op: Performing key=10:238:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_vmstore:0_monitor_0 )
- Apr 9 20:30:10 node1 lrmd: [2404]: info: rsc:p_drbd_vmstore:0 probe[12] (pid 3952)
- Apr 9 20:30:10 node1 cib: [2403]: info: cib_replace_notify: Replaced: -1.-1.-1 -> 5.117.6 from node2
- Apr 9 20:30:10 node1 attrd: [2406]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
- Apr 9 20:30:10 node1 crmd: [2407]: info: config_query_callback: Shutdown escalation occurs after: 1200000ms
- Apr 9 20:30:10 node1 crmd: [2407]: info: config_query_callback: Checking for expired actions every 900000ms
- Apr 9 20:30:10 node1 crmd: [2407]: info: config_query_callback: Shutdown escalation occurs after: 1200000ms
- Apr 9 20:30:10 node1 crmd: [2407]: info: config_query_callback: Checking for expired actions every 900000ms
- Apr 9 20:30:10 node1 crm_attribute: [3981]: info: Invoked: crm_attribute -N node1 -n master-p_drbd_vmstore:0 -l reboot -D
- Apr 9 20:30:10 node1 lrmd: [2404]: info: operation monitor[12] on p_drbd_vmstore:0 for client 2407: pid 3952 exited with return code 7
- Apr 9 20:30:10 node1 crmd: [2407]: info: process_lrm_event: LRM operation p_drbd_vmstore:0_monitor_0 (call=12, rc=7, cib-update=26, confirmed=true) not running
- Apr 9 20:31:32 node1 cib: [2403]: info: cib_stats: Processed 139 operations (143.00us average, 0% utilization) in the last 10min
- Apr 9 20:31:58 node1 cib: [2403]: info: apply_xml_diff: Digest mis-match: expected 6eb7faf8513de7c395f08b4a5f5ec9b8, calculated b3c232c95807c8764d9ff63756b991ff
- Apr 9 20:31:58 node1 cib: [2403]: notice: cib_process_diff: Diff 5.118.6 -> 5.118.7 not applied to 5.118.6: Failed application of an update diff
- Apr 9 20:31:58 node1 cib: [2403]: info: cib_server_process_diff: Requesting re-sync from peer
- Apr 9 20:31:58 node1 crmd: [2407]: info: delete_resource: Removing resource p_drbd_vmstore:0 for 4363_mount2_resource (internal) on node1
- Apr 9 20:31:58 node1 crmd: [2407]: info: notify_deleted: Notifying 4363_mount2_resource on node1 that p_drbd_vmstore:0 was deleted
- Apr 9 20:31:58 node1 crmd: [2407]: WARN: decode_transition_key: Bad UUID (crm-resource-4363) in sscanf result (3) for 0:0:crm-resource-4363
- Apr 9 20:31:58 node1 crmd: [2407]: info: send_direct_ack: ACK'ing resource op p_drbd_vmstore:0_delete_60000 from 0:0:crm-resource-4363: lrm_invoke-lrmd-1334021518-9
- Apr 9 20:31:58 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.118.6 -> 5.118.7 (sync in progress)
- Apr 9 20:31:58 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.118.7 -> 5.119.1 (sync in progress)
- Apr 9 20:31:58 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.119.1 -> 5.119.2 (sync in progress)
- Apr 9 20:31:58 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.119.1 -> 5.119.2 (sync in progress)
- Apr 9 20:31:58 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.119.2 -> 5.119.3 (sync in progress)
- Apr 9 20:31:58 node1 crmd: [2407]: notice: do_lrm_invoke: Not creating resource for a delete event: (null)
- Apr 9 20:31:58 node1 crmd: [2407]: info: notify_deleted: Notifying 4363_mount2_resource on node1 that p_drbd_vmstore:1 was deleted
- Apr 9 20:31:58 node1 crmd: [2407]: WARN: decode_transition_key: Bad UUID (crm-resource-4363) in sscanf result (3) for 0:0:crm-resource-4363
- Apr 9 20:31:58 node1 crmd: [2407]: info: send_direct_ack: ACK'ing resource op p_drbd_vmstore:1_delete_60000 from 0:0:crm-resource-4363: lrm_invoke-lrmd-1334021518-10
- Apr 9 20:31:58 node1 crmd: [2407]: WARN: decode_transition_key: Bad UUID (crm-resource-4363) in sscanf result (3) for 0:0:crm-resource-4363
- Apr 9 20:31:58 node1 crmd: [2407]: info: send_direct_ack: ACK'ing resource op p_drbd_vmstore:1_delete_60000 from 0:0:crm-resource-4363: lrm_invoke-lrmd-1334021518-11
- Apr 9 20:31:59 node1 cib: [2403]: info: cib_process_diff: Diff 5.119.3 -> 5.119.4 not applied to 5.118.6: current "epoch" is less than required
- Apr 9 20:31:59 node1 cib: [2403]: info: cib_server_process_diff: Requesting re-sync from peer
- Apr 9 20:32:00 node1 crmd: [2407]: info: do_lrm_rsc_op: Performing key=10:242:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_vmstore:0_monitor_0 )
- Apr 9 20:32:00 node1 lrmd: [2404]: info: rsc:p_drbd_vmstore:0 probe[13] (pid 4369)
- Apr 9 20:32:00 node1 cib: [2403]: info: cib_replace_notify: Replaced: -1.-1.-1 -> 5.119.4 from node2
- Apr 9 20:32:00 node1 attrd: [2406]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
- Apr 9 20:32:00 node1 crmd: [2407]: info: config_query_callback: Shutdown escalation occurs after: 1200000ms
- Apr 9 20:32:00 node1 crmd: [2407]: info: config_query_callback: Checking for expired actions every 900000ms
- Apr 9 20:32:00 node1 attrd: [2406]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_drbd_vmstore:0 (10000)
- Apr 9 20:32:00 node1 attrd: [2406]: notice: attrd_perform_update: Sent update 22: master-p_drbd_vmstore:0=10000
- Apr 9 20:32:00 node1 lrmd: [2404]: info: operation monitor[13] on p_drbd_vmstore:0 for client 2407: pid 4369 exited with return code 0
- Apr 9 20:32:00 node1 crmd: [2407]: info: process_lrm_event: LRM operation p_drbd_vmstore:0_monitor_0 (call=13, rc=0, cib-update=35, confirmed=true) ok
- Apr 9 20:32:01 node1 crmd: [2407]: info: do_lrm_rsc_op: Performing key=40:243:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_vmstore:0_monitor_20000 )
- Apr 9 20:32:01 node1 lrmd: [2404]: info: rsc:p_drbd_vmstore:0 monitor[14] (pid 4397)
- Apr 9 20:32:01 node1 lrmd: [2404]: info: operation monitor[14] on p_drbd_vmstore:0 for client 2407: pid 4397 exited with return code 0
- Apr 9 20:32:01 node1 crmd: [2407]: info: process_lrm_event: LRM operation p_drbd_vmstore:0_monitor_20000 (call=14, rc=0, cib-update=36, confirmed=false) ok
- Apr 9 20:32:08 node1 cib: [2403]: info: apply_xml_diff: Digest mis-match: expected c7a260969809bd977ed6b4e461507213, calculated e3bf86196343f83f13b3f81c7bd413c5
- Apr 9 20:32:08 node1 cib: [2403]: notice: cib_process_diff: Diff 5.119.15 -> 5.119.16 not applied to 5.119.15: Failed application of an update diff
- Apr 9 20:32:08 node1 cib: [2403]: info: cib_server_process_diff: Requesting re-sync from peer
- Apr 9 20:32:08 node1 crmd: [2407]: info: delete_resource: Removing resource p_drbd_vmstore:0 for 4463_mount2_resource (internal) on node1
- Apr 9 20:32:08 node1 crmd: [2407]: info: lrm_remove_deleted_op: Removing op p_drbd_vmstore:0_monitor_20000:14 for deleted resource p_drbd_vmstore:0
- Apr 9 20:32:08 node1 crmd: [2407]: info: notify_deleted: Notifying 4463_mount2_resource on node1 that p_drbd_vmstore:0 was deleted
- Apr 9 20:32:08 node1 crmd: [2407]: WARN: decode_transition_key: Bad UUID (crm-resource-4463) in sscanf result (3) for 0:0:crm-resource-4463
- Apr 9 20:32:08 node1 crmd: [2407]: info: send_direct_ack: ACK'ing resource op p_drbd_vmstore:0_delete_60000 from 0:0:crm-resource-4463: lrm_invoke-lrmd-1334021528-13
- Apr 9 20:32:08 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.119.15 -> 5.119.16 (sync in progress)
- Apr 9 20:32:08 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.119.16 -> 5.120.1 (sync in progress)
- Apr 9 20:32:08 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.120.1 -> 5.120.2 (sync in progress)
- Apr 9 20:32:08 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.120.1 -> 5.120.2 (sync in progress)
- Apr 9 20:32:08 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.120.2 -> 5.120.3 (sync in progress)
- Apr 9 20:32:08 node1 cib: [2403]: info: cib_process_diff: Diff 5.120.3 -> 5.120.4 not applied to 5.119.15: current "epoch" is less than required
- Apr 9 20:32:08 node1 cib: [2403]: info: cib_server_process_diff: Requesting re-sync from peer
- Apr 9 20:32:08 node1 crmd: [2407]: notice: do_lrm_invoke: Not creating resource for a delete event: (null)
- Apr 9 20:32:08 node1 crmd: [2407]: info: notify_deleted: Notifying 4463_mount2_resource on node1 that p_drbd_vmstore:1 was deleted
- Apr 9 20:32:08 node1 crmd: [2407]: WARN: decode_transition_key: Bad UUID (crm-resource-4463) in sscanf result (3) for 0:0:crm-resource-4463
- Apr 9 20:32:08 node1 crmd: [2407]: info: send_direct_ack: ACK'ing resource op p_drbd_vmstore:1_delete_60000 from 0:0:crm-resource-4463: lrm_invoke-lrmd-1334021528-14
- Apr 9 20:32:08 node1 crmd: [2407]: WARN: decode_transition_key: Bad UUID (crm-resource-4463) in sscanf result (3) for 0:0:crm-resource-4463
- Apr 9 20:32:08 node1 crmd: [2407]: info: send_direct_ack: ACK'ing resource op p_drbd_vmstore:1_delete_60000 from 0:0:crm-resource-4463: lrm_invoke-lrmd-1334021528-15
- Apr 9 20:32:08 node1 crmd: [2407]: info: send_direct_ack: ACK'ing resource op p_drbd_vmstore:0_monitor_20000 from 1:245:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334021528-16
- Apr 9 20:32:08 node1 crmd: [2407]: info: do_lrm_rsc_op: Performing key=159:245:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_vmstore:0_notify_0 )
- Apr 9 20:32:08 node1 lrmd: [2404]: info: rsc:p_drbd_vmstore:0 notify[15] (pid 4464)
- Apr 9 20:32:08 node1 lrmd: [2404]: info: RA output: (p_drbd_vmstore:0:notify:stdout) drbdsetup 0 syncer --set-defaults --create-device --rate=34M
- Apr 9 20:32:08 node1 lrmd: [2404]: info: operation notify[15] on p_drbd_vmstore:0 for client 2407: pid 4464 exited with return code 0
- Apr 9 20:32:08 node1 crmd: [2407]: info: send_direct_ack: ACK'ing resource op p_drbd_vmstore:0_notify_0 from 159:245:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334021528-17
- Apr 9 20:32:08 node1 crmd: [2407]: info: process_lrm_event: LRM operation p_drbd_vmstore:0_notify_0 (call=15, rc=0, cib-update=0, confirmed=true) ok
- Apr 9 20:32:10 node1 crmd: [2407]: info: do_lrm_rsc_op: Performing key=10:246:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_vmstore:0_monitor_0 )
- Apr 9 20:32:10 node1 lrmd: [2404]: info: rsc:p_drbd_vmstore:0 probe[16] (pid 4496)
- Apr 9 20:32:10 node1 cib: [2403]: info: cib_replace_notify: Replaced: -1.-1.-1 -> 5.120.4 from node2
- Apr 9 20:32:10 node1 attrd: [2406]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_drbd_vmstore:0 (10000)
- Apr 9 20:32:10 node1 crmd: [2407]: info: config_query_callback: Shutdown escalation occurs after: 1200000ms
- Apr 9 20:32:10 node1 attrd: [2406]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
- Apr 9 20:32:10 node1 crmd: [2407]: info: config_query_callback: Checking for expired actions every 900000ms
- Apr 9 20:32:10 node1 lrmd: [2404]: info: operation monitor[16] on p_drbd_vmstore:0 for client 2407: pid 4496 exited with return code 0
- Apr 9 20:32:10 node1 crmd: [2407]: info: process_lrm_event: LRM operation p_drbd_vmstore:0_monitor_0 (call=16, rc=0, cib-update=46, confirmed=true) ok
- Apr 9 20:32:11 node1 crmd: [2407]: info: do_lrm_rsc_op: Performing key=40:247:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_vmstore:0_monitor_20000 )
- Apr 9 20:32:11 node1 lrmd: [2404]: info: rsc:p_drbd_vmstore:0 monitor[17] (pid 4524)
- Apr 9 20:32:11 node1 lrmd: [2404]: info: operation monitor[17] on p_drbd_vmstore:0 for client 2407: pid 4524 exited with return code 0
- Apr 9 20:32:11 node1 crmd: [2407]: info: process_lrm_event: LRM operation p_drbd_vmstore:0_monitor_20000 (call=17, rc=0, cib-update=47, confirmed=false) ok
- Apr 9 20:32:17 node1 cib: [2403]: info: apply_xml_diff: Digest mis-match: expected 5849452c68971ff82825919bc39ad90e, calculated a7dc750ac875169d270be7354e10a561
- Apr 9 20:32:17 node1 cib: [2403]: notice: cib_process_diff: Diff 5.120.16 -> 5.120.17 not applied to 5.120.16: Failed application of an update diff
- Apr 9 20:32:17 node1 cib: [2403]: info: cib_server_process_diff: Requesting re-sync from peer
- Apr 9 20:32:17 node1 crmd: [2407]: info: delete_resource: Removing resource p_drbd_mount1:0 for 4552_mount2_resource (internal) on node1
- Apr 9 20:32:17 node1 crmd: [2407]: info: notify_deleted: Notifying 4552_mount2_resource on node1 that p_drbd_mount1:0 was deleted
- Apr 9 20:32:17 node1 crmd: [2407]: WARN: decode_transition_key: Bad UUID (crm-resource-4552) in sscanf result (3) for 0:0:crm-resource-4552
- Apr 9 20:32:17 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.120.16 -> 5.120.17 (sync in progress)
- Apr 9 20:32:17 node1 crmd: [2407]: info: send_direct_ack: ACK'ing resource op p_drbd_mount1:0_delete_60000 from 0:0:crm-resource-4552: lrm_invoke-lrmd-1334021537-19
- Apr 9 20:32:17 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.120.17 -> 5.121.1 (sync in progress)
- Apr 9 20:32:17 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.121.1 -> 5.121.2 (sync in progress)
- Apr 9 20:32:17 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.121.1 -> 5.121.2 (sync in progress)
- Apr 9 20:32:17 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.121.2 -> 5.121.3 (sync in progress)
- Apr 9 20:32:17 node1 cib: [2403]: info: cib_process_diff: Diff 5.121.3 -> 5.121.4 not applied to 5.120.16: current "epoch" is less than required
- Apr 9 20:32:17 node1 cib: [2403]: info: cib_server_process_diff: Requesting re-sync from peer
- Apr 9 20:32:17 node1 crmd: [2407]: notice: do_lrm_invoke: Not creating resource for a delete event: (null)
- Apr 9 20:32:17 node1 crmd: [2407]: info: notify_deleted: Notifying 4552_mount2_resource on node1 that p_drbd_mount1:1 was deleted
- Apr 9 20:32:17 node1 crmd: [2407]: WARN: decode_transition_key: Bad UUID (crm-resource-4552) in sscanf result (3) for 0:0:crm-resource-4552
- Apr 9 20:32:17 node1 crmd: [2407]: info: send_direct_ack: ACK'ing resource op p_drbd_mount1:1_delete_60000 from 0:0:crm-resource-4552: lrm_invoke-lrmd-1334021537-20
- Apr 9 20:32:17 node1 crmd: [2407]: WARN: decode_transition_key: Bad UUID (crm-resource-4552) in sscanf result (3) for 0:0:crm-resource-4552
- Apr 9 20:32:17 node1 crmd: [2407]: info: send_direct_ack: ACK'ing resource op p_drbd_mount1:1_delete_60000 from 0:0:crm-resource-4552: lrm_invoke-lrmd-1334021537-21
- Apr 9 20:32:18 node1 cib: [2403]: info: cib_replace_notify: Replaced: -1.-1.-1 -> 5.121.4 from node2
- Apr 9 20:32:18 node1 attrd: [2406]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_drbd_vmstore:0 (10000)
- Apr 9 20:32:18 node1 crmd: [2407]: info: config_query_callback: Shutdown escalation occurs after: 1200000ms
- Apr 9 20:32:18 node1 crmd: [2407]: info: config_query_callback: Checking for expired actions every 900000ms
- Apr 9 20:32:18 node1 crmd: [2407]: info: config_query_callback: Shutdown escalation occurs after: 1200000ms
- Apr 9 20:32:18 node1 crmd: [2407]: info: config_query_callback: Checking for expired actions every 900000ms
- Apr 9 20:32:18 node1 attrd: [2406]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
- Apr 9 20:32:18 node1 crmd: [2407]: info: do_lrm_rsc_op: Performing key=12:251:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_mount1:0_monitor_0 )
- Apr 9 20:32:18 node1 lrmd: [2404]: info: rsc:p_drbd_mount1:0 probe[18] (pid 4554)
- Apr 9 20:32:18 node1 attrd: [2406]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_drbd_mount1:0 (10000)
- Apr 9 20:32:18 node1 attrd: [2406]: notice: attrd_perform_update: Sent update 33: master-p_drbd_mount1:0=10000
- Apr 9 20:32:18 node1 lrmd: [2404]: info: operation monitor[18] on p_drbd_mount1:0 for client 2407: pid 4554 exited with return code 0
- Apr 9 20:32:18 node1 crmd: [2407]: info: process_lrm_event: LRM operation p_drbd_mount1:0_monitor_0 (call=18, rc=0, cib-update=57, confirmed=true) ok
- Apr 9 20:32:19 node1 crmd: [2407]: info: do_lrm_rsc_op: Performing key=76:252:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_mount1:0_monitor_20000 )
- Apr 9 20:32:19 node1 lrmd: [2404]: info: rsc:p_drbd_mount1:0 monitor[19] (pid 4581)
- Apr 9 20:32:19 node1 lrmd: [2404]: info: operation monitor[19] on p_drbd_mount1:0 for client 2407: pid 4581 exited with return code 0
- Apr 9 20:32:19 node1 crmd: [2407]: info: process_lrm_event: LRM operation p_drbd_mount1:0_monitor_20000 (call=19, rc=0, cib-update=58, confirmed=false) ok
- Apr 9 20:32:21 node1 crmd: [2407]: notice: do_lrm_invoke: Not creating resource for a delete event: (null)
- Apr 9 20:32:21 node1 crmd: [2407]: info: notify_deleted: Notifying 4609_mount2_resource on node1 that p_drbd_mount2:0 was deleted
- Apr 9 20:32:21 node1 crmd: [2407]: WARN: decode_transition_key: Bad UUID (crm-resource-4609) in sscanf result (3) for 0:0:crm-resource-4609
- Apr 9 20:32:21 node1 crmd: [2407]: info: send_direct_ack: ACK'ing resource op p_drbd_mount2:0_delete_60000 from 0:0:crm-resource-4609: lrm_invoke-lrmd-1334021541-23
- Apr 9 20:32:21 node1 crmd: [2407]: WARN: decode_transition_key: Bad UUID (crm-resource-4609) in sscanf result (3) for 0:0:crm-resource-4609
- Apr 9 20:32:21 node1 crmd: [2407]: info: send_direct_ack: ACK'ing resource op p_drbd_mount2:0_delete_60000 from 0:0:crm-resource-4609: lrm_invoke-lrmd-1334021541-24
- Apr 9 20:32:22 node1 cib: [2403]: info: apply_xml_diff: Digest mis-match: expected 65e4ad017b3e64d10be5a7606dce9840, calculated 4d0cb2b0b6658edac47a946867278d2a
- Apr 9 20:32:22 node1 cib: [2403]: notice: cib_process_diff: Diff 5.123.1 -> 5.123.2 not applied to 5.123.1: Failed application of an update diff
- Apr 9 20:32:22 node1 cib: [2403]: info: cib_server_process_diff: Requesting re-sync from peer
- Apr 9 20:32:22 node1 crmd: [2407]: info: delete_resource: Removing resource p_drbd_mount2:1 for 4609_mount2_resource (internal) on node1
- Apr 9 20:32:22 node1 crmd: [2407]: info: notify_deleted: Notifying 4609_mount2_resource on node1 that p_drbd_mount2:1 was deleted
- Apr 9 20:32:22 node1 crmd: [2407]: WARN: decode_transition_key: Bad UUID (crm-resource-4609) in sscanf result (3) for 0:0:crm-resource-4609
- Apr 9 20:32:22 node1 crmd: [2407]: info: send_direct_ack: ACK'ing resource op p_drbd_mount2:1_delete_60000 from 0:0:crm-resource-4609: lrm_invoke-lrmd-1334021542-25
- Apr 9 20:32:22 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.123.1 -> 5.123.2 (sync in progress)
- Apr 9 20:32:22 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.123.1 -> 5.123.2 (sync in progress)
- Apr 9 20:32:22 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.123.2 -> 5.124.1 (sync in progress)
- Apr 9 20:32:22 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.124.1 -> 5.124.2 (sync in progress)
- Apr 9 20:32:22 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.124.2 -> 5.124.3 (sync in progress)
- Apr 9 20:32:22 node1 crmd: [2407]: info: config_query_callback: Shutdown escalation occurs after: 1200000ms
- Apr 9 20:32:22 node1 crmd: [2407]: info: config_query_callback: Checking for expired actions every 900000ms
- Apr 9 20:32:23 node1 cib: [2403]: info: cib_process_diff: Diff 5.124.3 -> 5.124.4 not applied to 5.123.1: current "epoch" is less than required
- Apr 9 20:32:23 node1 cib: [2403]: info: cib_server_process_diff: Requesting re-sync from peer
- Apr 9 20:32:23 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.124.4 -> 5.124.5 (sync in progress)
- Apr 9 20:32:24 node1 crmd: [2407]: info: do_lrm_rsc_op: Performing key=14:256:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_mount2:0_monitor_0 )
- Apr 9 20:32:24 node1 lrmd: [2404]: info: rsc:p_drbd_mount2:0 probe[20] (pid 4611)
- Apr 9 20:32:24 node1 cib: [2403]: info: cib_replace_notify: Replaced: -1.-1.-1 -> 5.124.5 from node2
- Apr 9 20:32:24 node1 attrd: [2406]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_drbd_vmstore:0 (10000)
- Apr 9 20:32:24 node1 crmd: [2407]: info: config_query_callback: Shutdown escalation occurs after: 1200000ms
- Apr 9 20:32:24 node1 attrd: [2406]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_drbd_mount1:0 (10000)
- Apr 9 20:32:24 node1 crmd: [2407]: info: config_query_callback: Checking for expired actions every 900000ms
- Apr 9 20:32:24 node1 attrd: [2406]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
- Apr 9 20:32:24 node1 attrd: [2406]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_drbd_mount2:0 (10000)
- Apr 9 20:32:24 node1 attrd: [2406]: notice: attrd_perform_update: Sent update 43: master-p_drbd_mount2:0=10000
- Apr 9 20:32:24 node1 lrmd: [2404]: info: operation monitor[20] on p_drbd_mount2:0 for client 2407: pid 4611 exited with return code 0
- Apr 9 20:32:24 node1 crmd: [2407]: info: process_lrm_event: LRM operation p_drbd_mount2:0_monitor_0 (call=20, rc=0, cib-update=68, confirmed=true) ok
- Apr 9 20:32:25 node1 crmd: [2407]: info: do_lrm_rsc_op: Performing key=109:257:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_mount2:0_monitor_20000 )
- Apr 9 20:32:25 node1 lrmd: [2404]: info: rsc:p_drbd_mount2:0 monitor[21] (pid 4640)
- Apr 9 20:32:25 node1 lrmd: [2404]: info: operation monitor[21] on p_drbd_mount2:0 for client 2407: pid 4640 exited with return code 0
- Apr 9 20:32:25 node1 crmd: [2407]: info: process_lrm_event: LRM operation p_drbd_mount2:0_monitor_20000 (call=21, rc=0, cib-update=69, confirmed=false) ok