- node2 heartbeat: [10682]: info: No log entry found in ha.cf -- use logd
- node2 heartbeat: [10682]: info: Enabling logging daemon
- node2 heartbeat: [10682]: info: logfile and debug file are those specified in logd config file (default /etc/logd.cf)
- node2 heartbeat: [10682]: info: **************************
- node2 heartbeat: [10682]: info: Configuration validated. Starting heartbeat 3.0.5
- node2 heartbeat: [10683]: info: heartbeat: version 3.0.5
- node2 heartbeat: [10683]: info: Heartbeat generation: 1328556158
- node2 heartbeat: [10683]: info: glib: UDP multicast heartbeat started for group 239.0.0.43 port 694 interface br0 (ttl=1 loop=0)
- node2 heartbeat: [10683]: info: glib: UDP Broadcast heartbeat started on port 694 (694) interface br1
- node2 heartbeat: [10683]: info: glib: UDP Broadcast heartbeat closed on port 694 interface br1 - Status: 1
- node2 heartbeat: [10683]: info: Local status now set to: 'up'
- node2 heartbeat: [10683]: info: Link node2:br1 up.
- node2 heartbeat: [10683]: info: Link node1:br0 up.
- node2 heartbeat: [10683]: info: Link node1:br1 up.
- node2 heartbeat: [10683]: info: Link quorumnode:br0 up.
- node2 heartbeat: [10683]: info: Status update for node quorumnode: status active
- node2 heartbeat: [10683]: info: Comm_now_up(): updating status to active
- node2 heartbeat: [10683]: info: Local status now set to: 'active'
- node2 heartbeat: [10683]: info: Starting child client "/usr/lib/heartbeat/ccm" (113,122)
- node2 heartbeat: [10683]: info: Starting child client "/usr/lib/heartbeat/cib" (113,122)
- node2 heartbeat: [10683]: info: Starting child client "/usr/lib/heartbeat/lrmd -r" (0,0)
- node2 heartbeat: [10683]: info: Starting child client "/usr/lib/heartbeat/stonithd" (0,0)
- node2 heartbeat: [10683]: info: Starting child client "/usr/lib/heartbeat/attrd" (113,122)
- node2 heartbeat: [10683]: info: Starting child client "/usr/lib/heartbeat/crmd" (113,122)
- node2 heartbeat: [10683]: info: Starting child client "/usr/lib/heartbeat/dopd" (113,122)
- node2 heartbeat: [10701]: info: Starting "/usr/lib/heartbeat/cib" as uid 113 gid 122 (pid 10701)
- node2 heartbeat: [10703]: info: Starting "/usr/lib/heartbeat/stonithd" as uid 0 gid 0 (pid 10703)
- node2 heartbeat: [10706]: info: Starting "/usr/lib/heartbeat/dopd" as uid 113 gid 122 (pid 10706)
- node2 heartbeat: [10704]: info: Starting "/usr/lib/heartbeat/attrd" as uid 113 gid 122 (pid 10704)
- node2 heartbeat: [10702]: info: Starting "/usr/lib/heartbeat/lrmd -r" as uid 0 gid 0 (pid 10702)
- node2 heartbeat: [10700]: info: Starting "/usr/lib/heartbeat/ccm" as uid 113 gid 122 (pid 10700)
- node2 heartbeat: [10705]: info: Starting "/usr/lib/heartbeat/crmd" as uid 113 gid 122 (pid 10705)
- node2 /usr/lib/heartbeat/dopd: [10706]: debug: PID=10706
- node2 /usr/lib/heartbeat/dopd: [10706]: debug: Signing in with heartbeat
- node2 attrd: [10704]: info: Invoked: /usr/lib/heartbeat/attrd
- node2 /usr/lib/heartbeat/dopd: [10706]: debug: [We are node2]
- node2 attrd: [10704]: notice: crm_cluster_connect: Connecting to cluster infrastructure: heartbeat
- node2 cib: [10701]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/hacluster
- node2 cib: [10701]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
- node2 crmd: [10705]: info: Invoked: /usr/lib/heartbeat/crmd
- node2 /usr/lib/heartbeat/dopd: [10706]: debug: Setting message filter mode
- node2 /usr/lib/heartbeat/dopd: [10706]: debug: Setting message signal
- node2 crmd: [10705]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/hacluster
- node2 /usr/lib/heartbeat/dopd: [10706]: debug: Waiting for messages...
- node2 crmd: [10705]: info: main: CRM Hg Version: 9971ebba4494012a93c03b40a2c58ec0eb60f50c
- node2 attrd: [10704]: notice: main: Starting mainloop...
- node2 crmd: [10705]: info: crmd_init: Starting crmd
- node2 ccm: [10700]: info: Hostname: node2
- node2 stonith-ng: [10703]: info: Invoked: /usr/lib/heartbeat/stonithd
- node2 lrmd: [10702]: info: enabling coredumps
- node2 cib: [10701]: info: validate_with_relaxng: Creating RNG parser context
- node2 lrmd: [10702]: WARN: Core dumps could be lost if multiple dumps occur.
- node2 lrmd: [10702]: WARN: Consider setting non-default value in /proc/sys/kernel/core_pattern (or equivalent) for maximum supportability
- node2 lrmd: [10702]: WARN: Consider setting /proc/sys/kernel/core_uses_pid (or equivalent) to 1 for maximum supportability
- node2 lrmd: [10702]: info: Started.
- node2 stonith-ng: [10703]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/root
- node2 heartbeat: [10683]: info: the send queue length from heartbeat to client attrd is set to 1024
- node2 stonith-ng: [10703]: info: get_cluster_type: Assuming a 'heartbeat' based cluster
- node2 stonith-ng: [10703]: notice: crm_cluster_connect: Connecting to cluster infrastructure: heartbeat
- node2 heartbeat: [10683]: info: the send queue length from heartbeat to client ccm is set to 1024
- node2 stonith-ng: [10703]: info: register_heartbeat_conn: Hostname: node2
- node2 stonith-ng: [10703]: info: register_heartbeat_conn: UUID: 9100538b-7a1f-41fd-9c1a-c6b4b1c32b18
- node2 stonith-ng: [10703]: info: main: Starting stonith-ng mainloop
- node2 cib: [10701]: info: startCib: CIB Initialization completed successfully
- node2 cib: [10701]: info: get_cluster_type: Assuming a 'heartbeat' based cluster
- node2 cib: [10701]: notice: crm_cluster_connect: Connecting to cluster infrastructure: heartbeat
- node2 heartbeat: [10683]: info: the send queue length from heartbeat to client stonith-ng is set to 1024
- node2 cib: [10701]: info: register_heartbeat_conn: Hostname: node2
- node2 cib: [10701]: info: register_heartbeat_conn: UUID: 9100538b-7a1f-41fd-9c1a-c6b4b1c32b18
- node2 cib: [10701]: info: ccm_connect: Registering with CCM...
- node2 cib: [10701]: WARN: ccm_connect: CCM Activation failed
- node2 cib: [10701]: WARN: ccm_connect: CCM Connection failed 1 times (30 max)
- node2 heartbeat: [10683]: info: the send queue length from heartbeat to client cib is set to 1024
- node2 crmd: [10705]: info: do_cib_control: Could not connect to the CIB service: connection failed
- node2 crmd: [10705]: WARN: do_cib_control: Couldn't complete CIB registration 1 times... pause and retry
- node2 crmd: [10705]: info: crmd_init: Starting crmd's mainloop
- node2 heartbeat: [10683]: info: Status update for node node1: status active
- node2 crmd: [10705]: info: crm_timer_popped: Wait Timer (I_NULL) just popped (2000ms)
- node2 cib: [10701]: info: ccm_connect: Registering with CCM...
- node2 cib: [10701]: WARN: ccm_connect: CCM Activation failed
- node2 cib: [10701]: WARN: ccm_connect: CCM Connection failed 2 times (30 max)
- node2 crmd: [10705]: info: do_cib_control: Could not connect to the CIB service: connection failed
- node2 crmd: [10705]: WARN: do_cib_control: Couldn't complete CIB registration 2 times... pause and retry
- node2 crmd: [10705]: info: crm_timer_popped: Wait Timer (I_NULL) just popped (2000ms)
- node2 cib: [10701]: info: ccm_connect: Registering with CCM...
- node2 cib: [10701]: info: cib_init: Requesting the list of configured nodes
- node2 cib: [10701]: info: cib_init: Starting cib mainloop
- node2 cib: [10701]: info: cib_client_status_callback: Status update: Client node2/cib now has status [join]
- node2 cib: [10701]: info: crm_new_peer: Node 0 is now known as node2
- node2 cib: [10701]: info: crm_update_peer_proc: node2.cib is now online
- node2 cib: [10701]: WARN: cib_peer_callback: Discarding cib_sync_one message (16f) from quorumnode: not in our membership
- node2 cib: [10701]: WARN: cib_peer_callback: Discarding cib_apply_diff message (a778) from node1: not in our membership
- node2 cib: [10701]: info: cib_client_status_callback: Status update: Client node2/cib now has status [online]
- node2 crmd: [10705]: info: do_cib_control: CIB connection established
- node2 crmd: [10705]: info: get_cluster_type: Assuming a 'heartbeat' based cluster
- node2 crmd: [10705]: notice: crm_cluster_connect: Connecting to cluster infrastructure: heartbeat
- node2 cib: [10701]: info: cib_client_status_callback: Status update: Client node1/cib now has status [online]
- node2 cib: [10701]: info: crm_new_peer: Node 0 is now known as node1
- node2 crmd: [10705]: info: register_heartbeat_conn: Hostname: node2
- node2 cib: [10701]: info: crm_update_peer_proc: node1.cib is now online
- node2 crmd: [10705]: info: register_heartbeat_conn: UUID: 9100538b-7a1f-41fd-9c1a-c6b4b1c32b18
- node2 cib: [10701]: info: cib_client_status_callback: Status update: Client quorumnode/cib now has status [online]
- node2 heartbeat: [10683]: info: the send queue length from heartbeat to client crmd is set to 1024
- node2 cib: [10701]: info: crm_new_peer: Node 0 is now known as quorumnode
- node2 cib: [10701]: info: crm_update_peer_proc: quorumnode.cib is now online
- node2 crmd: [10705]: info: do_ha_control: Connected to the cluster
- node2 crmd: [10705]: info: do_ccm_control: CCM connection established... waiting for first callback
- node2 crmd: [10705]: info: do_started: Delaying start, no membership data (0000000000100000)
- node2 crmd: [10705]: info: config_query_callback: Shutdown escalation occurs after: 300000ms
- node2 crmd: [10705]: info: config_query_callback: Checking for expired actions every 300000ms
- node2 crmd: [10705]: notice: crmd_client_status_callback: Status update: Client node2/crmd now has status [online] (DC=false)
- node2 cib: [10701]: info: cib_process_diff: Diff 11.360.211 -> 11.360.212 not applied to 11.360.0: current "num_updates" is less than required
- node2 cib: [10701]: info: cib_server_process_diff: Requesting re-sync from peer
- node2 cib: [10701]: notice: cib_server_process_diff: Not applying diff 11.360.212 -> 11.360.213 (sync in progress)
- node2 crmd: [10705]: info: crm_new_peer: Node 0 is now known as node2
- node2 crmd: [10705]: info: ais_status_callback: status: node2 is now unknown
- node2 crmd: [10705]: info: crm_update_peer_proc: node2.crmd is now online
- node2 crmd: [10705]: notice: crmd_peer_update: Status update: Client node2/crmd now has status [online] (DC=<null>)
- node2 crmd: [10705]: notice: crmd_client_status_callback: Status update: Client node2/crmd now has status [online] (DC=false)
- node2 crmd: [10705]: notice: crmd_client_status_callback: Status update: Client node1/crmd now has status [online] (DC=false)
- node2 crmd: [10705]: info: crm_new_peer: Node 0 is now known as node1
- node2 crmd: [10705]: info: ais_status_callback: status: node1 is now unknown
- node2 crmd: [10705]: info: crm_update_peer_proc: node1.crmd is now online
- node2 crmd: [10705]: notice: crmd_peer_update: Status update: Client node1/crmd now has status [online] (DC=<null>)
- node2 crmd: [10705]: notice: crmd_client_status_callback: Status update: Client quorumnode/crmd now has status [offline] (DC=false)
- node2 crmd: [10705]: info: crm_new_peer: Node 0 is now known as quorumnode
- node2 crmd: [10705]: info: ais_status_callback: status: quorumnode is now unknown
- node2 crmd: [10705]: info: do_started: Delaying start, no membership data (0000000000100000)
- node2 cib: [10701]: info: cib_replace_notify: Replaced: -1.-1.-1 -> 11.360.213 from node1
- node2 crmd: [10705]: info: mem_handle_event: Got an event OC_EV_MS_NEW_MEMBERSHIP from ccm
- node2 crmd: [10705]: info: mem_handle_event: instance=29, nodes=3, new=3, lost=0, n_idx=0, new_idx=0, old_idx=6
- node2 crmd: [10705]: info: crmd_ccm_msg_callback: Quorum (re)attained after event=NEW MEMBERSHIP (id=29)
- node2 crmd: [10705]: info: ccm_event_detail: NEW MEMBERSHIP: trans=29, nodes=3, new=3, lost=0 n_idx=0, new_idx=0, old_idx=6
- node2 crmd: [10705]: info: ccm_event_detail: 	CURRENT: node1 [nodeid=1, born=24]
- node2 crmd: [10705]: info: ccm_event_detail: 	CURRENT: quorumnode [nodeid=0, born=27]
- node2 crmd: [10705]: info: ccm_event_detail: 	CURRENT: node2 [nodeid=2, born=29]
- node2 crmd: [10705]: info: ccm_event_detail: 	NEW: node1 [nodeid=1, born=24]
- node2 crmd: [10705]: info: ccm_event_detail: 	NEW: quorumnode [nodeid=0, born=27]
- node2 crmd: [10705]: info: ccm_event_detail: 	NEW: node2 [nodeid=2, born=29]
- node2 crmd: [10705]: info: crm_get_peer: Node node1 now has id: 1
- node2 crmd: [10705]: info: ais_status_callback: status: node1 is now member (was unknown)
- node2 crmd: [10705]: info: crm_update_peer: Node node1: id=1 state=member (new) addr=(null) votes=-1 born=24 seen=29 proc=00000000000000000000000000000200
- node2 crmd: [10705]: info: crm_update_peer_proc: node1.ais is now online
- node2 crmd: [10705]: info: ais_status_callback: status: quorumnode is now member (was unknown)
- node2 crmd: [10705]: info: crm_update_peer: Node quorumnode: id=0 state=member (new) addr=(null) votes=-1 born=27 seen=29 proc=00000000000000000000000000000000
- node2 crmd: [10705]: info: crm_update_peer_proc: quorumnode.ais is now online
- node2 crmd: [10705]: info: crm_update_peer_proc: quorumnode.crmd is now online
- node2 crmd: [10705]: notice: crmd_peer_update: Status update: Client quorumnode/crmd now has status [online] (DC=<null>)
- node2 crmd: [10705]: info: crm_get_peer: Node node2 now has id: 2
- node2 crmd: [10705]: info: ais_status_callback: status: node2 is now member (was unknown)
- node2 crmd: [10705]: info: crm_update_peer: Node node2: id=2 state=member (new) addr=(null) votes=-1 born=29 seen=29 proc=00000000000000000000000000000200
- node2 crmd: [10705]: info: crm_update_peer_proc: node2.ais is now online
- node2 crmd: [10705]: info: do_started: The local CRM is operational
- node2 crmd: [10705]: info: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
- node2 cib: [10701]: info: mem_handle_event: Got an event OC_EV_MS_NEW_MEMBERSHIP from ccm
- node2 cib: [10701]: info: mem_handle_event: instance=29, nodes=3, new=3, lost=0, n_idx=0, new_idx=0, old_idx=6
- node2 cib: [10701]: info: cib_ccm_msg_callback: Processing CCM event=NEW MEMBERSHIP (id=29)
- node2 cib: [10701]: info: crm_get_peer: Node node1 now has id: 1
- node2 cib: [10701]: info: crm_update_peer: Node node1: id=1 state=member (new) addr=(null) votes=-1 born=24 seen=29 proc=00000000000000000000000000000100
- node2 cib: [10701]: info: crm_update_peer_proc: node1.ais is now online
- node2 cib: [10701]: info: crm_update_peer_proc: node1.crmd is now online
- node2 cib: [10701]: info: crm_update_peer: Node quorumnode: id=0 state=member (new) addr=(null) votes=-1 born=27 seen=29 proc=00000000000000000000000000000100
- node2 cib: [10701]: info: crm_update_peer_proc: quorumnode.ais is now online
- node2 cib: [10701]: info: crm_update_peer_proc: quorumnode.crmd is now online
- node2 cib: [10701]: info: crm_get_peer: Node node2 now has id: 2
- node2 cib: [10701]: info: crm_update_peer: Node node2: id=2 state=member (new) addr=(null) votes=-1 born=29 seen=29 proc=00000000000000000000000000000100
- node2 cib: [10701]: info: crm_update_peer_proc: node2.ais is now online
- node2 cib: [10701]: info: crm_update_peer_proc: node2.crmd is now online
- node2 crmd: [10705]: info: update_dc: Set DC to node1 (3.0.5)
- node2 crmd: [10705]: info: te_connect_stonith: Attempting connection to fencing daemon...
- node2 crmd: [10705]: info: te_connect_stonith: Connected
- node2 attrd: [10704]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
- node2 crmd: [10705]: info: update_attrd: Connecting to attrd...
- node2 crmd: [10705]: info: do_state_transition: State transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
- node2 crmd: [10705]: info: erase_xpath_callback: Deletion of "//node_state[@uname='node2']/transient_attributes": ok (rc=0)
- node2 crmd: [10705]: info: do_lrm_rsc_op: Performing key=12:363:7:7f2a906a-c70b-4e40-8417-73a05b76b811 op=p_drbd_vmstore:1_monitor_0 )
- node2 lrmd: [10702]: info: rsc:p_drbd_vmstore:1 probe[2] (pid 11065)
- node2 crmd: [10705]: info: do_lrm_rsc_op: Performing key=13:363:7:7f2a906a-c70b-4e40-8417-73a05b76b811 op=p_drbd_mount1:1_monitor_0 )
- node2 lrmd: [10702]: info: rsc:p_drbd_mount1:1 probe[3] (pid 11066)
- node2 crmd: [10705]: info: do_lrm_rsc_op: Performing key=14:363:7:7f2a906a-c70b-4e40-8417-73a05b76b811 op=p_drbd_mount2:1_monitor_0 )
- node2 lrmd: [10702]: info: rsc:p_drbd_mount2:1 probe[4] (pid 11069)
- node2 crmd: [10705]: info: do_lrm_rsc_op: Performing key=15:363:7:7f2a906a-c70b-4e40-8417-73a05b76b811 op=p_fs_vmstore_monitor_0 )
- node2 lrmd: [10702]: info: rsc:p_fs_vmstore probe[5] (pid 11070)
- node2 crmd: [10705]: info: do_lrm_rsc_op: Performing key=16:363:7:7f2a906a-c70b-4e40-8417-73a05b76b811 op=p_vm_webapps_monitor_0 )
- node2 lrmd: [10702]: notice: lrmd_rsc_new(): No lrm_rprovider field in message
- node2 crmd: [10705]: info: do_lrm_rsc_op: Performing key=17:363:7:7f2a906a-c70b-4e40-8417-73a05b76b811 op=p_libvirt-bin:0_monitor_0 )
- node2 crmd: [10705]: info: do_lrm_rsc_op: Performing key=18:363:7:7f2a906a-c70b-4e40-8417-73a05b76b811 op=p_sysadmin_notify:0_monitor_0 )
- node2 lrmd: [10702]: notice: lrmd_rsc_new(): No lrm_rprovider field in message
- node2 crmd: [10705]: info: do_lrm_rsc_op: Performing key=19:363:7:7f2a906a-c70b-4e40-8417-73a05b76b811 op=stonith-node1_monitor_0 )
- node2 lrmd: [10702]: info: rsc:stonith-node1 probe[9] (pid 11077)
- node2 lrmd: [10702]: notice: lrmd_rsc_new(): No lrm_rprovider field in message
- node2 crmd: [10705]: info: do_lrm_rsc_op: Performing key=20:363:7:7f2a906a-c70b-4e40-8417-73a05b76b811 op=stonith-node2_monitor_0 )
- node2 lrmd: [10702]: info: rsc:stonith-node2 probe[10] (pid 11079)
- node2 stonith-ng: [10703]: notice: stonith_device_action: Device stonith-node1 not found
- node2 stonith-ng: [10703]: info: stonith_command: Processed st_execute from lrmd: rc=-12
- node2 crmd: [10705]: info: do_lrm_rsc_op: Performing key=21:363:7:7f2a906a-c70b-4e40-8417-73a05b76b811 op=p_ping:0_monitor_0 )
- node2 lrmd: [10702]: info: operation monitor[9] on stonith-node1 for client 10705: pid 11077 exited with return code 7
- node2 stonith-ng: [10703]: notice: stonith_device_action: Device stonith-node2 not found
- node2 stonith-ng: [10703]: info: stonith_command: Processed st_execute from lrmd: rc=-12
- node2 crmd: [10705]: info: process_lrm_event: LRM operation stonith-node1_monitor_0 (call=9, rc=7, cib-update=8, confirmed=true) not running
- node2 lrmd: [10702]: info: operation monitor[10] on stonith-node2 for client 10705: pid 11079 exited with return code 7
- node2 crmd: [10705]: info: process_lrm_event: LRM operation stonith-node2_monitor_0 (call=10, rc=7, cib-update=9, confirmed=true) not running
- node2 lrmd: [10702]: info: operation monitor[5] on p_fs_vmstore for client 10705: pid 11070 exited with return code 7
- node2 crmd: [10705]: info: process_lrm_event: LRM operation p_fs_vmstore_monitor_0 (call=5, rc=7, cib-update=10, confirmed=true) not running
- node2 crm_attribute: [11176]: info: Invoked: crm_attribute -N node2 -n master-p_drbd_vmstore:1 -l reboot -D
- node2 crm_attribute: [11180]: info: Invoked: crm_attribute -N node2 -n master-p_drbd_mount2:1 -l reboot -D
- node2 crm_attribute: [11182]: info: Invoked: crm_attribute -N node2 -n master-p_drbd_mount1:1 -l reboot -D
- node2 lrmd: [10702]: info: operation monitor[2] on p_drbd_vmstore:1 for client 10705: pid 11065 exited with return code 7
- node2 crmd: [10705]: info: process_lrm_event: LRM operation p_drbd_vmstore:1_monitor_0 (call=2, rc=7, cib-update=11, confirmed=true) not running
- node2 lrmd: [10702]: info: operation monitor[4] on p_drbd_mount2:1 for client 10705: pid 11069 exited with return code 7
- node2 crmd: [10705]: info: process_lrm_event: LRM operation p_drbd_mount2:1_monitor_0 (call=4, rc=7, cib-update=12, confirmed=true) not running
- node2 lrmd: [10702]: info: operation monitor[3] on p_drbd_mount1:1 for client 10705: pid 11066 exited with return code 7
- node2 crmd: [10705]: info: process_lrm_event: LRM operation p_drbd_mount1:1_monitor_0 (call=3, rc=7, cib-update=13, confirmed=true) not running
- node2 lrmd: [10702]: info: rsc:p_vm_webapps probe[6] (pid 11183)
- node2 lrmd: [10702]: info: rsc:p_libvirt-bin:0 probe[7] (pid 11184)
- node2 lrmd: [10702]: info: rsc:p_sysadmin_notify:0 probe[8] (pid 11185)
- node2 lrmd: [10702]: info: RA output: (p_libvirt-bin:0:probe:stderr) process 11184: The last reference on a connection was dropped without closing the connection. This is a bug in an application. See dbus_connection_unref() documentation for details. Most likely, the application was supposed to call dbus_connection_close(), since this is a private connection.
- node2 lrmd: [10702]: info: operation monitor[7] on p_libvirt-bin:0 for client 10705: pid 11184 exited with return code 7
- node2 lrmd: [10702]: info: rsc:p_ping:0 probe[11] (pid 11189)
- node2 crmd: [10705]: info: process_lrm_event: LRM operation p_libvirt-bin:0_monitor_0 (call=7, rc=7, cib-update=14, confirmed=true) not running
- node2 lrmd: [10702]: info: RA output: (p_vm_webapps:probe:stderr) error: unable to connect to '/var/run/libvirt/libvirt-sock': Connection refused; error: failed to connect to the hypervisor
- node2 lrmd: [10702]: info: operation monitor[8] on p_sysadmin_notify:0 for client 10705: pid 11185 exited with return code 7
- node2 crmd: [10705]: info: process_lrm_event: LRM operation p_sysadmin_notify:0_monitor_0 (call=8, rc=7, cib-update=15, confirmed=true) not running
- node2 lrmd: [10702]: info: operation monitor[11] on p_ping:0 for client 10705: pid 11189 exited with return code 7
- node2 crmd: [10705]: info: process_lrm_event: LRM operation p_ping:0_monitor_0 (call=11, rc=7, cib-update=16, confirmed=true) not running
- node2 VirtualDomain[11183]: [11211]: INFO: Configuration file /mnt/storage/vmstore/config/webapps.xml not readable during probe.
- node2 lrmd: [10702]: info: operation monitor[6] on p_vm_webapps for client 10705: pid 11183 exited with return code 7
- node2 crmd: [10705]: info: process_lrm_event: LRM operation p_vm_webapps_monitor_0 (call=6, rc=7, cib-update=17, confirmed=true) not running
- node2 attrd: [10704]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
- node2 attrd: [10704]: notice: attrd_perform_update: Sent update 9: probe_complete=true
- node2 cib: [10701]: info: cib_stats: Processed 134 operations (522.00us average, 0% utilization) in the last 10min
- node2 cib: [10701]: info: cib_stats: Processed 30 operations (1333.00us average, 0% utilization) in the last 10min
- node2 cib: [10701]: info: cib_stats: Processed 30 operations (1000.00us average, 0% utilization) in the last 10min