- Nov 29 21:35:43 kvm00 lvm[11328]: Subthread finished
- Nov 29 21:35:43 kvm00 lvm[11328]: Joined child thread
- Nov 29 21:35:43 kvm00 lvm[11328]: ret == 0, errno = 0. removing client
- Nov 29 21:35:43 kvm00 lvm[11328]: add_to_lvmqueue: cmd=0x7f01d0045570. client=0x1df9b30, msg=(nil), len=0, csid=(nil), xid=67
- Nov 29 21:35:43 kvm00 lvm[11328]: process_work_item: free fd -1
- Nov 29 21:35:43 kvm00 lvm[11328]: LVM thread waiting for work
- Nov 29 21:35:43 kvm00 lvm[11328]: Got new connection on fd 5
- Nov 29 21:35:43 kvm00 lvm[11328]: Read on local socket 5, len = 25
- Nov 29 21:35:43 kvm00 lvm[11328]: creating pipe, [12, 13]
- Nov 29 21:35:43 kvm00 rsyslogd-2177: imuxsock begins to drop messages from pid 11328 due to rate-limiting
- Nov 29 21:36:16 kvm00 pacemakerd: [11006]: notice: update_node_processes: 0x25fbb30 Node 184619180 now known as kvm01, was:
- Nov 29 21:36:16 kvm00 stonith-ng: [11011]: info: crm_new_peer: Node kvm01 now has id: 184619180
- Nov 29 21:36:16 kvm00 stonith-ng: [11011]: info: crm_new_peer: Node 184619180 is now known as kvm01
- Nov 29 21:36:16 kvm00 crmd: [11015]: notice: crmd_peer_update: Status update: Client kvm01/crmd now has status [online] (DC=true)
- Nov 29 21:36:16 kvm00 crmd: [11015]: notice: do_state_transition: State transition S_IDLE -> S_INTEGRATION [ input=I_NODE_JOIN cause=C_FSA_INTERNAL origin=crmd_peer_update ]
- Nov 29 21:36:16 kvm00 crmd: [11015]: info: abort_transition_graph: do_te_invoke:169 - Triggered transition abort (complete=1) : Peer Halt
- Nov 29 21:36:16 kvm00 crmd: [11015]: info: do_dc_join_offer_all: join-2: Waiting on 2 outstanding join acks
- Nov 29 21:36:16 kvm00 crmd: [11015]: info: update_dc: Set DC to kvm00 (3.0.6)
- Nov 29 21:36:18 kvm00 crmd: [11015]: info: do_dc_join_offer_all: A new node joined the cluster
- Nov 29 21:36:18 kvm00 crmd: [11015]: info: do_dc_join_offer_all: join-3: Waiting on 2 outstanding join acks
- Nov 29 21:36:18 kvm00 crmd: [11015]: info: update_dc: Set DC to kvm00 (3.0.6)
- Nov 29 21:36:19 kvm00 crmd: [11015]: notice: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
- Nov 29 21:36:19 kvm00 crmd: [11015]: info: do_dc_join_finalize: join-3: Syncing the CIB from kvm00 to the rest of the cluster
- Nov 29 21:36:19 kvm00 cib: [11010]: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/48, version=0.302.33): ok (rc=0)
- Nov 29 21:36:19 kvm00 cib: [11010]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/49, version=0.302.34): ok (rc=0)
- Nov 29 21:36:19 kvm00 cib: [11010]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/50, version=0.302.35): ok (rc=0)
- Nov 29 21:36:19 kvm00 cib: [11010]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='kvm01']/transient_attributes (origin=kvm01/crmd/6, version=0.302.36): ok (rc=0)
- Nov 29 21:36:19 kvm00 crmd: [11015]: info: do_dc_join_ack: join-3: Updating node state to member for kvm01
- Nov 29 21:36:19 kvm00 crmd: [11015]: info: erase_status_tag: Deleting xpath: //node_state[@uname='kvm01']/lrm
- Nov 29 21:36:19 kvm00 crmd: [11015]: info: do_dc_join_ack: join-3: Updating node state to member for kvm00
- Nov 29 21:36:19 kvm00 crmd: [11015]: info: erase_status_tag: Deleting xpath: //node_state[@uname='kvm00']/lrm
- Nov 29 21:36:19 kvm00 cib: [11010]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='kvm01']/lrm (origin=local/crmd/51, version=0.302.37): ok (rc=0)
- Nov 29 21:36:19 kvm00 crmd: [11015]: notice: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
- Nov 29 21:36:19 kvm00 crmd: [11015]: info: abort_transition_graph: do_te_invoke:162 - Triggered transition abort (complete=1) : Peer Cancelled
- Nov 29 21:36:19 kvm00 attrd: [11013]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
- Nov 29 21:36:19 kvm00 attrd: [11013]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
- Nov 29 21:36:19 kvm00 cib: [11010]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='kvm00']/lrm (origin=local/crmd/53, version=0.302.39): ok (rc=0)
- Nov 29 21:36:19 kvm00 crmd: [11015]: info: abort_transition_graph: te_update_diff:320 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=p_dlm_controld:0_last_0, magic=0:0;16:0:0:9411dad7-fa00-4d39-9f25-f4c8c4d2c944, cib=0.302.39) : Resource op removal
- Nov 29 21:36:19 kvm00 crmd: [11015]: info: abort_transition_graph: te_update_diff:276 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=0.302.40) : LRM Refresh
- Nov 29 21:36:19 kvm00 cib: [11010]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/55, version=0.302.41): ok (rc=0)
- Nov 29 21:36:19 kvm00 cib: [11010]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/57, version=0.302.43): ok (rc=0)
- Nov 29 21:36:19 kvm00 pengine: [11014]: notice: unpack_config: On loss of CCM Quorum: Ignore
- Nov 29 21:36:19 kvm00 pengine: [11014]: notice: LogActions: Start p_iscsi:1#011(kvm01)
- Nov 29 21:36:19 kvm00 pengine: [11014]: notice: LogActions: Restart p_dlm_controld:0#011(Started kvm00)
- Nov 29 21:36:19 kvm00 pengine: [11014]: notice: LogActions: Start p_dlm_controld:1#011(kvm01)
- Nov 29 21:36:19 kvm00 pengine: [11014]: notice: LogActions: Restart p_gfs_controld:0#011(Started kvm00)
- Nov 29 21:36:19 kvm00 pengine: [11014]: notice: LogActions: Start p_gfs_controld:1#011(kvm01)
- Nov 29 21:36:19 kvm00 pengine: [11014]: notice: LogActions: Restart p_clvm:0#011(Started kvm00)
- Nov 29 21:36:19 kvm00 pengine: [11014]: notice: LogActions: Start p_clvm:1#011(kvm01)
- Nov 29 21:36:19 kvm00 pengine: [11014]: notice: LogActions: Restart p_vg0:0#011(Started kvm00)
- Nov 29 21:36:19 kvm00 pengine: [11014]: notice: LogActions: Start p_vg0:1#011(kvm01)
- Nov 29 21:36:19 kvm00 pengine: [11014]: notice: LogActions: Restart p_shared_gfs2:0#011(Started kvm00)
- Nov 29 21:36:19 kvm00 pengine: [11014]: notice: LogActions: Start p_shared_gfs2:1#011(kvm01)
- Nov 29 21:36:19 kvm00 crmd: [11015]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
- Nov 29 21:36:19 kvm00 crmd: [11015]: info: do_te_invoke: Processing graph 1 (ref=pe_calc-dc-1385757379-36) derived from /var/lib/pengine/pe-input-685.bz2
- Nov 29 21:36:19 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 10: monitor p_iscsi:1_monitor_0 on kvm01
- Nov 29 21:36:19 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 11: monitor p_dlm_controld:1_monitor_0 on kvm01
- Nov 29 21:36:19 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 12: monitor p_gfs_controld:1_monitor_0 on kvm01
- Nov 29 21:36:19 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 13: monitor p_clvm:1_monitor_0 on kvm01
- Nov 29 21:36:19 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 14: monitor p_vg0:1_monitor_0 on kvm01
- Nov 29 21:36:19 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 15: monitor p_shared_gfs2:1_monitor_0 on kvm01
- Nov 29 21:36:20 kvm00 pengine: [11014]: notice: process_pe_message: Transition 1: PEngine Input stored in: /var/lib/pengine/pe-input-685.bz2
- Nov 29 21:36:21 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 9: probe_complete probe_complete on kvm01 - no waiting
- Nov 29 21:36:21 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 19: start p_iscsi:1_start_0 on kvm01
- Nov 29 21:36:21 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 57: stop p_shared_gfs2:0_stop_0 on kvm00 (local)
- Nov 29 21:36:21 kvm00 lrmd: [11012]: info: cancel_op: operation monitor[19] on p_shared_gfs2:0 for client 11015, its parameters: fstype=[gfs2] CRM_meta_timeout=[20000] CRM_meta_name=[monitor] crm_feature_set=[3.0.6] device=[/dev/vg0/shared-gfs2] CRM_meta_notify=[false] CRM_meta_clone_node_max=[1] CRM_meta_clone=[0] CRM_meta_clone_max=[2] CRM_meta_interval=[120000] CRM_meta_globally_unique=[false] directory=[/shared00] cancelled
- Nov 29 21:36:21 kvm00 lrmd: [11012]: info: rsc:p_shared_gfs2:0 stop[20] (pid 11950)
- Nov 29 21:36:21 kvm00 crmd: [11015]: info: process_lrm_event: LRM operation p_shared_gfs2:0_monitor_120000 (call=19, status=1, cib-update=0, confirmed=true) Cancelled
- Nov 29 21:36:21 kvm00 Filesystem[11950]: INFO: Running stop for /dev/vg0/shared-gfs2 on /shared00
- Nov 29 21:36:21 kvm00 Filesystem[11950]: INFO: Trying to unmount /shared00
- Nov 29 21:36:21 kvm00 Filesystem[11950]: INFO: unmounted /shared00 successfully
- Nov 29 21:36:21 kvm00 lrmd: [11012]: info: operation stop[20] on p_shared_gfs2:0 for client 11015: pid 11950 exited with return code 0
- Nov 29 21:36:21 kvm00 crmd: [11015]: info: process_lrm_event: LRM operation p_shared_gfs2:0_stop_0 (call=20, rc=0, cib-update=61, confirmed=true) ok
- Nov 29 21:36:21 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 49: stop p_vg0:0_stop_0 on kvm00 (local)
- Nov 29 21:36:21 kvm00 lrmd: [11012]: info: cancel_op: operation monitor[17] on p_vg0:0 for client 11015, its parameters: CRM_meta_timeout=[60000] CRM_meta_name=[monitor] crm_feature_set=[3.0.6] CRM_meta_notify=[false] CRM_meta_clone_node_max=[1] CRM_meta_clone=[0] CRM_meta_clone_max=[2] volgrpname=[vg0] CRM_meta_interval=[60000] CRM_meta_globally_unique=[false] cancelled
- Nov 29 21:36:21 kvm00 lrmd: [11012]: info: rsc:p_vg0:0 stop[21] (pid 12021)
- Nov 29 21:36:21 kvm00 crmd: [11015]: info: process_lrm_event: LRM operation p_vg0:0_monitor_60000 (call=17, status=1, cib-update=0, confirmed=true) Cancelled
- Nov 29 21:36:21 kvm00 rsyslogd-2177: imuxsock lost 80 messages from pid 11328 due to rate-limiting
- Nov 29 21:36:21 kvm00 lvm[11328]: Got new connection on fd 5
- Nov 29 21:36:21 kvm00 lvm[11328]: Read on local socket 5, len = 25
- Nov 29 21:36:21 kvm00 lvm[11328]: creating pipe, [12, 13]
- Nov 29 21:36:21 kvm00 lvm[11328]: Creating pre&post thread
- Nov 29 21:36:21 kvm00 lvm[11328]: Created pre&post thread, state = 0
- Nov 29 21:36:21 kvm00 lvm[11328]: in sub thread: client = 0x7f01d003b520
- Nov 29 21:36:21 kvm00 lvm[11328]: doing PRE command LOCK_VG 'V_vg0' at 1 (client=0x7f01d003b520)
- Nov 29 21:36:21 kvm00 lvm[11328]: lock_resource 'V_vg0', flags=0, mode=3
- Nov 29 21:36:21 kvm00 lvm[11328]: lock_resource returning 0, lock_id=1
- Nov 29 21:36:21 kvm00 lvm[11328]: Writing status 0 down pipe 13
- Nov 29 21:36:21 kvm00 lvm[11328]: read on PIPE 12: 4 bytes: status: 0
- Nov 29 21:36:21 kvm00 lvm[11328]: background routine status was 0, sock_client=0x7f01d003b520
- Nov 29 21:36:21 kvm00 lvm[11328]: Waiting to do post command - state = 0
- Nov 29 21:36:21 kvm00 lvm[11328]: distribute command: XID = 71, flags=0x1 (LOCAL)
- Nov 29 21:36:21 kvm00 lvm[11328]: add_to_lvmqueue: cmd=0x7f01d003b740. client=0x7f01d003b520, msg=0x7f01d0045570, len=25, csid=(nil), xid=71
- Nov 29 21:36:21 kvm00 lvm[11328]: process_work_item: local
- Nov 29 21:36:21 kvm00 lvm[11328]: process_local_command: LOCK_VG (0x33) msg=0x7f01d003d440, msglen =25, client=0x7f01d003b520
- Nov 29 21:36:21 kvm00 lvm[11328]: do_lock_vg: resource 'V_vg0', cmd = 0x1 LCK_VG (READ|VG), flags = 0x0 ( ), critical_section = 0
- Nov 29 21:36:21 kvm00 lvm[11328]: Invalidating cached metadata for VG vg0
- Nov 29 21:36:21 kvm00 lvm[11328]: Reply from node a0110ac: 0 bytes
- Nov 29 21:36:21 kvm00 lvm[11328]: Got 1 replies, expecting: 1
- Nov 29 21:36:21 kvm00 lvm[11328]: LVM thread waiting for work
- Nov 29 21:36:21 kvm00 lvm[11328]: Got post command condition...
- Nov 29 21:36:21 kvm00 lvm[11328]: Waiting for next pre command
- Nov 29 21:36:21 kvm00 lvm[11328]: read on PIPE 12: 4 bytes: status: 0
- Nov 29 21:36:21 kvm00 lvm[11328]: background routine status was 0, sock_client=0x7f01d003b520
- Nov 29 21:36:21 kvm00 lvm[11328]: Send local reply
- Nov 29 21:36:21 kvm00 lvm[11328]: Read on local socket 5, len = 31
- Nov 29 21:36:21 kvm00 lvm[11328]: Got pre command condition...
- Nov 29 21:36:21 kvm00 lvm[11328]: Writing status 0 down pipe 13
- Nov 29 21:36:21 kvm00 lvm[11328]: Waiting to do post command - state = 0
- Nov 29 21:36:21 kvm00 lvm[11328]: read on PIPE 12: 4 bytes: status: 0
- Nov 29 21:36:21 kvm00 lvm[11328]: background routine status was 0, sock_client=0x7f01d003b520
- Nov 29 21:36:21 kvm00 lvm[11328]: distribute command: XID = 72, flags=0x1 (LOCAL)
- Nov 29 21:36:21 kvm00 lvm[11328]: add_to_lvmqueue: cmd=0x7f01d0045570. client=0x7f01d003b520, msg=0x7f01d003b740, len=31, csid=(nil), xid=72
- Nov 29 21:36:21 kvm00 lvm[11328]: process_work_item: local
- Nov 29 21:36:21 kvm00 lvm[11328]: process_local_command: SYNC_NAMES (0x2d) msg=0x7f01d003b770, msglen =31, client=0x7f01d003b520
- Nov 29 21:36:21 kvm00 lvm[11328]: Syncing device names
- Nov 29 21:36:21 kvm00 lvm[11328]: Reply from node a0110ac: 0 bytes
- Nov 29 21:36:21 kvm00 lvm[11328]: Got 1 replies, expecting: 1
- Nov 29 21:36:21 kvm00 lvm[11328]: LVM thread waiting for work
- Nov 29 21:36:21 kvm00 lvm[11328]: Got post command condition...
- Nov 29 21:36:21 kvm00 lvm[11328]: Waiting for next pre command
- Nov 29 21:36:21 kvm00 lvm[11328]: read on PIPE 12: 4 bytes: status: 0
- Nov 29 21:36:21 kvm00 lvm[11328]: background routine status was 0, sock_client=0x7f01d003b520
- Nov 29 21:36:21 kvm00 lvm[11328]: Send local reply
- Nov 29 21:36:21 kvm00 lvm[11328]: Read on local socket 5, len = 31
- Nov 29 21:36:21 kvm00 lvm[11328]: Got pre command condition...
- Nov 29 21:36:21 kvm00 lvm[11328]: Writing status 0 down pipe 13
- Nov 29 21:36:21 kvm00 lvm[11328]: Waiting to do post command - state = 0
- Nov 29 21:36:21 kvm00 lvm[11328]: read on PIPE 12: 4 bytes: status: 0
- Nov 29 21:36:21 kvm00 lvm[11328]: background routine status was 0, sock_client=0x7f01d003b520
- Nov 29 21:36:21 kvm00 lvm[11328]: distribute command: XID = 73, flags=0x1 (LOCAL)
- Nov 29 21:36:21 kvm00 lvm[11328]: add_to_lvmqueue: cmd=0x7f01d0045570. client=0x7f01d003b520, msg=0x7f01d003b740, len=31, csid=(nil), xid=73
- Nov 29 21:36:21 kvm00 lvm[11328]: process_work_item: local
- Nov 29 21:36:21 kvm00 lvm[11328]: process_local_command: SYNC_NAMES (0x2d) msg=0x7f01d003b770, msglen =31, client=0x7f01d003b520
- Nov 29 21:36:21 kvm00 lvm[11328]: Syncing device names
- Nov 29 21:36:21 kvm00 lvm[11328]: Reply from node a0110ac: 0 bytes
- Nov 29 21:36:21 kvm00 lvm[11328]: Got 1 replies, expecting: 1
- Nov 29 21:36:21 kvm00 lvm[11328]: LVM thread waiting for work
- Nov 29 21:36:21 kvm00 lvm[11328]: Got post command condition...
- Nov 29 21:36:21 kvm00 lvm[11328]: Waiting for next pre command
- Nov 29 21:36:21 kvm00 lvm[11328]: read on PIPE 12: 4 bytes: status: 0
- Nov 29 21:36:21 kvm00 lvm[11328]: background routine status was 0, sock_client=0x7f01d003b520
- Nov 29 21:36:21 kvm00 lvm[11328]: Send local reply
- Nov 29 21:36:21 kvm00 lvm[11328]: Read on local socket 5, len = 31
- Nov 29 21:36:21 kvm00 lvm[11328]: Got pre command condition...
- Nov 29 21:36:21 kvm00 lvm[11328]: Writing status 0 down pipe 13
- Nov 29 21:36:21 kvm00 lvm[11328]: Waiting to do post command - state = 0
- Nov 29 21:36:21 kvm00 lvm[11328]: read on PIPE 12: 4 bytes: status: 0
- Nov 29 21:36:21 kvm00 lvm[11328]: background routine status was 0, sock_client=0x7f01d003b520
- Nov 29 21:36:21 kvm00 lvm[11328]: distribute command: XID = 74, flags=0x1 (LOCAL)
- Nov 29 21:36:21 kvm00 lvm[11328]: add_to_lvmqueue: cmd=0x7f01d0045570. client=0x7f01d003b520, msg=0x7f01d003b740, len=31, csid=(nil), xid=74
- Nov 29 21:36:21 kvm00 lvm[11328]: process_work_item: local
- Nov 29 21:36:21 kvm00 lvm[11328]: process_local_command: SYNC_NAMES (0x2d) msg=0x7f01d003b770, msglen =31, client=0x7f01d003b520
- Nov 29 21:36:21 kvm00 lvm[11328]: Syncing device names
- Nov 29 21:36:21 kvm00 lvm[11328]: Reply from node a0110ac: 0 bytes
- Nov 29 21:36:21 kvm00 lvm[11328]: Got 1 replies, expecting: 1
- Nov 29 21:36:21 kvm00 lvm[11328]: LVM thread waiting for work
- Nov 29 21:36:21 kvm00 lvm[11328]: Got post command condition...
- Nov 29 21:36:21 kvm00 lvm[11328]: Waiting for next pre command
- Nov 29 21:36:21 kvm00 lvm[11328]: read on PIPE 12: 4 bytes: status: 0
- Nov 29 21:36:21 kvm00 lvm[11328]: background routine status was 0, sock_client=0x7f01d003b520
- Nov 29 21:36:21 kvm00 lvm[11328]: Send local reply
- Nov 29 21:36:21 kvm00 lvm[11328]: Read on local socket 5, len = 31
- Nov 29 21:36:21 kvm00 lvm[11328]: check_all_clvmds_running
- Nov 29 21:36:21 kvm00 lvm[11328]: down_callback. node 167841964, state = 3
- Nov 29 21:36:21 kvm00 lvm[11328]: Got pre command condition...
- Nov 29 21:36:21 kvm00 lvm[11328]: Writing status 0 down pipe 13
- Nov 29 21:36:21 kvm00 lvm[11328]: Waiting to do post command - state = 0
- Nov 29 21:36:21 kvm00 lvm[11328]: read on PIPE 12: 4 bytes: status: 0
- Nov 29 21:36:21 kvm00 lvm[11328]: background routine status was 0, sock_client=0x7f01d003b520
- Nov 29 21:36:21 kvm00 lvm[11328]: distribute command: XID = 75, flags=0x0 ()
- Nov 29 21:36:21 kvm00 lvm[11328]: num_nodes = 1
- Nov 29 21:36:21 kvm00 lvm[11328]: add_to_lvmqueue: cmd=0x7f01d003d530. client=0x7f01d003b520, msg=0x7f01d003b740, len=31, csid=(nil), xid=75
- Nov 29 21:36:21 kvm00 lvm[11328]: Sending message to all cluster nodes
- Nov 29 21:36:21 kvm00 lvm[11328]: process_work_item: local
- Nov 29 21:36:21 kvm00 lvm[11328]: process_local_command: SYNC_NAMES (0x2d) msg=0x7f01d003b770, msglen =31, client=0x7f01d003b520
- Nov 29 21:36:21 kvm00 lvm[11328]: Syncing device names
- Nov 29 21:36:21 kvm00 lvm[11328]: Reply from node a0110ac: 0 bytes
- Nov 29 21:36:21 kvm00 lvm[11328]: Got 1 replies, expecting: 1
- Nov 29 21:36:21 kvm00 lvm[11328]: LVM thread waiting for work
- Nov 29 21:36:21 kvm00 lvm[11328]: Got post command condition...
- Nov 29 21:36:21 kvm00 lvm[11328]: Waiting for next pre command
- Nov 29 21:36:21 kvm00 lvm[11328]: read on PIPE 12: 4 bytes: status: 0
- Nov 29 21:36:21 kvm00 lvm[11328]: background routine status was 0, sock_client=0x7f01d003b520
- Nov 29 21:36:21 kvm00 lvm[11328]: Send local reply
- Nov 29 21:36:21 kvm00 lvm[11328]: Read on local socket 5, len = 25
- Nov 29 21:36:21 kvm00 lvm[11328]: Got pre command condition...
- Nov 29 21:36:21 kvm00 lvm[11328]: doing PRE command LOCK_VG 'V_vg0' at 6 (client=0x7f01d003b520)
- Nov 29 21:36:21 kvm00 lvm[11328]: unlock_resource: V_vg0 lockid: 1
- Nov 29 21:36:21 kvm00 lvm[11328]: Writing status 0 down pipe 13
- Nov 29 21:36:21 kvm00 lvm[11328]: read on PIPE 12: 4 bytes: status: 0
- Nov 29 21:36:21 kvm00 lvm[11328]: background routine status was 0, sock_client=0x7f01d003b520
- Nov 29 21:36:21 kvm00 lvm[11328]: Waiting to do post command - state = 0
- Nov 29 21:36:21 kvm00 lvm[11328]: distribute command: XID = 76, flags=0x1 (LOCAL)
- Nov 29 21:36:21 kvm00 lvm[11328]: add_to_lvmqueue: cmd=0x7f01d0045570. client=0x7f01d003b520, msg=0x7f01d003b740, len=25, csid=(nil), xid=76
- Nov 29 21:36:21 kvm00 lvm[11328]: process_work_item: local
- Nov 29 21:36:21 kvm00 lvm[11328]: process_local_command: LOCK_VG (0x33) msg=0x7f01d003b770, msglen =25, client=0x7f01d003b520
- Nov 29 21:36:21 kvm00 lvm[11328]: do_lock_vg: resource 'V_vg0', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
- Nov 29 21:36:21 kvm00 lvm[11328]: Invalidating cached metadata for VG vg0
- Nov 29 21:36:21 kvm00 lvm[11328]: Reply from node a0110ac: 0 bytes
- Nov 29 21:36:21 kvm00 lvm[11328]: Got 1 replies, expecting: 1
- Nov 29 21:36:21 kvm00 lvm[11328]: 167841964 got message from nodeid 167841964 for 0. len 31
- Nov 29 21:36:21 kvm00 lvm[11328]: LVM thread waiting for work
- Nov 29 21:36:21 kvm00 lvm[11328]: Got post command condition...
- Nov 29 21:36:21 kvm00 lvm[11328]: Waiting for next pre command
- Nov 29 21:36:21 kvm00 lvm[11328]: read on PIPE 12: 4 bytes: status: 0
- Nov 29 21:36:21 kvm00 lvm[11328]: background routine status was 0, sock_client=0x7f01d003b520
- Nov 29 21:36:21 kvm00 lvm[11328]: Send local reply
- Nov 29 21:36:21 kvm00 lvm[11328]: Read on local socket 5, len = 0
- Nov 29 21:36:21 kvm00 lvm[11328]: EOF on local socket: inprogress=0
- Nov 29 21:36:21 kvm00 lvm[11328]: Waiting for child thread
- Nov 29 21:36:21 kvm00 lvm[11328]: Got pre command condition...
- Nov 29 21:36:21 kvm00 lvm[11328]: Subthread finished
- Nov 29 21:36:21 kvm00 lvm[11328]: Joined child thread
- Nov 29 21:36:21 kvm00 lvm[11328]: ret == 0, errno = 0. removing client
- Nov 29 21:36:21 kvm00 lvm[11328]: add_to_lvmqueue: cmd=0x7f01d0045570. client=0x7f01d003b520, msg=(nil), len=0, csid=(nil), xid=76
- Nov 29 21:36:21 kvm00 lvm[11328]: process_work_item: free fd -1
- Nov 29 21:36:21 kvm00 lvm[11328]: LVM thread waiting for work
- Nov 29 21:36:21 kvm00 LVM[12021]: INFO: Deactivating volume group vg0
- Nov 29 21:36:21 kvm00 lvm[11328]: Got new connection on fd 5
- Nov 29 21:36:21 kvm00 lvm[11328]: Read on local socket 5, len = 25
- Nov 29 21:36:21 kvm00 lvm[11328]: creating pipe, [12, 13]
- Nov 29 21:36:21 kvm00 lvm[11328]: Creating pre&post thread
- Nov 29 21:36:21 kvm00 lvm[11328]: Created pre&post thread, state = 0
- Nov 29 21:36:21 kvm00 lvm[11328]: in sub thread: client = 0x7f01d003b520
- Nov 29 21:36:21 kvm00 lvm[11328]: doing PRE command LOCK_VG 'V_vg0' at 1 (client=0x7f01d003b520)
- Nov 29 21:36:21 kvm00 lvm[11328]: lock_resource 'V_vg0', flags=0, mode=3
- Nov 29 21:36:21 kvm00 lvm[11328]: lock_resource returning 0, lock_id=1
- Nov 29 21:36:21 kvm00 lvm[11328]: Writing status 0 down pipe 13
- Nov 29 21:36:21 kvm00 lvm[11328]: Waiting to do post command - state = 0
- Nov 29 21:36:21 kvm00 lvm[11328]: read on PIPE 12: 4 bytes: status: 0
- Nov 29 21:36:21 kvm00 lvm[11328]: background routine status was 0, sock_client=0x7f01d003b520
- Nov 29 21:36:21 kvm00 lvm[11328]: distribute command: XID = 77, flags=0x1 (LOCAL)
- Nov 29 21:36:21 kvm00 lvm[11328]: add_to_lvmqueue: cmd=0x7f01d003b740. client=0x7f01d003b520, msg=0x7f01d0045570, len=25, csid=(nil), xid=77
- Nov 29 21:36:21 kvm00 lvm[11328]: process_work_item: local
- Nov 29 21:36:21 kvm00 lvm[11328]: process_local_command: LOCK_VG (0x33) msg=0x7f01d003d440, msglen =25, client=0x7f01d003b520
- Nov 29 21:36:21 kvm00 lvm[11328]: do_lock_vg: resource 'V_vg0', cmd = 0x1 LCK_VG (READ|VG), flags = 0x0 ( ), critical_section = 0
- Nov 29 21:36:21 kvm00 lvm[11328]: Invalidating cached metadata for VG vg0
- Nov 29 21:36:21 kvm00 lvm[11328]: Reply from node a0110ac: 0 bytes
- Nov 29 21:36:21 kvm00 lvm[11328]: Got 1 replies, expecting: 1
- Nov 29 21:36:21 kvm00 lvm[11328]: LVM thread waiting for work
- Nov 29 21:36:21 kvm00 lvm[11328]: Got post command condition...
- Nov 29 21:36:21 kvm00 lvm[11328]: Waiting for next pre command
- Nov 29 21:36:21 kvm00 lvm[11328]: read on PIPE 12: 4 bytes: status: 0
- Nov 29 21:36:21 kvm00 lvm[11328]: background routine status was 0, sock_client=0x7f01d003b520
- Nov 29 21:36:21 kvm00 lvm[11328]: Send local reply
- Nov 29 21:36:21 kvm00 lvm[11328]: Read on local socket 5, len = 31
- Nov 29 21:36:21 kvm00 lvm[11328]: Got pre command condition...
- Nov 29 21:36:21 kvm00 lvm[11328]: Writing status 0 down pipe 13
- Nov 29 21:36:21 kvm00 lvm[11328]: Waiting to do post command - state = 0
- Nov 29 21:36:21 kvm00 lvm[11328]: read on PIPE 12: 4 bytes: status: 0
- Nov 29 21:36:21 kvm00 lvm[11328]: background routine status was 0, sock_client=0x7f01d003b520
- Nov 29 21:36:21 kvm00 lvm[11328]: distribute command: XID = 78, flags=0x1 (LOCAL)
- Nov 29 21:36:21 kvm00 lvm[11328]: add_to_lvmqueue: cmd=0x7f01d0045570. client=0x7f01d003b520, msg=0x7f01d003b740, len=31, csid=(nil), xid=78
- Nov 29 21:36:21 kvm00 lvm[11328]: process_work_item: local
- Nov 29 21:36:21 kvm00 lvm[11328]: process_local_command: SYNC_NAMES (0x2d) msg=0x7f01d003b770, msglen =31, client=0x7f01d003b520
- Nov 29 21:36:21 kvm00 lvm[11328]: Syncing device names
- Nov 29 21:36:21 kvm00 lvm[11328]: Reply from node a0110ac: 0 bytes
- Nov 29 21:36:21 kvm00 lvm[11328]: Got 1 replies, expecting: 1
- Nov 29 21:36:21 kvm00 lvm[11328]: LVM thread waiting for work
- Nov 29 21:36:21 kvm00 lvm[11328]: Got post command condition...
- Nov 29 21:36:21 kvm00 lvm[11328]: Waiting for next pre command
- Nov 29 21:36:21 kvm00 lvm[11328]: read on PIPE 12: 4 bytes: status: 0
- Nov 29 21:36:21 kvm00 lvm[11328]: background routine status was 0, sock_client=0x7f01d003b520
- Nov 29 21:36:21 kvm00 lvm[11328]: Send local reply
- Nov 29 21:36:21 kvm00 lvm[11328]: Read on local socket 5, len = 31
- Nov 29 21:36:21 kvm00 lvm[11328]: Got pre command condition...
- Nov 29 21:36:21 kvm00 lvm[11328]: Writing status 0 down pipe 13
- Nov 29 21:36:21 kvm00 lvm[11328]: Waiting to do post command - state = 0
- Nov 29 21:36:21 kvm00 lvm[11328]: read on PIPE 12: 4 bytes: status: 0
- Nov 29 21:36:21 kvm00 lvm[11328]: background routine status was 0, sock_client=0x7f01d003b520
- Nov 29 21:36:21 kvm00 lvm[11328]: distribute command: XID = 79, flags=0x1 (LOCAL)
- Nov 29 21:36:21 kvm00 lvm[11328]: add_to_lvmqueue: cmd=0x7f01d0045570. client=0x7f01d003b520, msg=0x7f01d003b740, len=31, csid=(nil), xid=79
- Nov 29 21:36:21 kvm00 lvm[11328]: process_work_item: local
- Nov 29 21:36:21 kvm00 lvm[11328]: process_local_command: SYNC_NAMES (0x2d) msg=0x7f01d003b770, msglen =31, client=0x7f01d003b520
- Nov 29 21:36:21 kvm00 lvm[11328]: Syncing device names
- Nov 29 21:36:21 kvm00 lvm[11328]: Reply from node a0110ac: 0 bytes
- Nov 29 21:36:21 kvm00 lvm[11328]: Got 1 replies, expecting: 1
- Nov 29 21:36:21 kvm00 lvm[11328]: LVM thread waiting for work
- Nov 29 21:36:21 kvm00 rsyslogd-2177: imuxsock begins to drop messages from pid 11328 due to rate-limiting
- Nov 29 21:36:21 kvm00 LVM[12021]: INFO: 0 logical volume(s) in volume group "vg0" now active
- Nov 29 21:36:21 kvm00 lrmd: [11012]: info: operation stop[21] on p_vg0:0 for client 11015: pid 12021 exited with return code 0
- Nov 29 21:36:21 kvm00 crmd: [11015]: info: process_lrm_event: LRM operation p_vg0:0_stop_0 (call=21, rc=0, cib-update=62, confirmed=true) ok
- Nov 29 21:36:21 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 41: stop p_clvm:0_stop_0 on kvm00 (local)
- Nov 29 21:36:21 kvm00 lrmd: [11012]: info: cancel_op: operation monitor[15] on p_clvm:0 for client 11015, its parameters: daemon_timeout=[30] CRM_meta_timeout=[30000] CRM_meta_name=[monitor] crm_feature_set=[3.0.6] CRM_meta_notify=[false] CRM_meta_clone_node_max=[1] CRM_meta_clone=[0] CRM_meta_clone_max=[2] CRM_meta_interval=[60000] CRM_meta_globally_unique=[false] cancelled
- Nov 29 21:36:21 kvm00 lrmd: [11012]: info: rsc:p_clvm:0 stop[22] (pid 12056)
- Nov 29 21:36:21 kvm00 crmd: [11015]: info: process_lrm_event: LRM operation p_clvm:0_monitor_60000 (call=15, status=1, cib-update=0, confirmed=true) Cancelled
- Nov 29 21:36:21 kvm00 clvmd[12056]: INFO: Stopping p_clvm:0
- Nov 29 21:36:21 kvm00 clvmd[12056]: INFO: Stopping clvmd
- Nov 29 21:36:22 kvm00 lrmd: [11012]: info: operation stop[22] on p_clvm:0 for client 11015: pid 12056 exited with return code 0
- Nov 29 21:36:22 kvm00 crmd: [11015]: info: process_lrm_event: LRM operation p_clvm:0_stop_0 (call=22, rc=0, cib-update=63, confirmed=true) ok
- Nov 29 21:36:22 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 33: stop p_gfs_controld:0_stop_0 on kvm00 (local)
- Nov 29 21:36:22 kvm00 lrmd: [11012]: info: cancel_op: operation monitor[13] on p_gfs_controld:0 for client 11015, its parameters: CRM_meta_timeout=[20000] CRM_meta_name=[monitor] crm_feature_set=[3.0.6] CRM_meta_notify=[false] CRM_meta_clone_node_max=[1] CRM_meta_clone=[0] daemon=[gfs_controld.pcmk] CRM_meta_clone_max=[2] CRM_meta_interval=[10000] CRM_meta_globally_unique=[false] cancelled
- Nov 29 21:36:22 kvm00 lrmd: [11012]: info: rsc:p_gfs_controld:0 stop[23] (pid 12077)
- Nov 29 21:36:22 kvm00 crmd: [11015]: info: process_lrm_event: LRM operation p_gfs_controld:0_monitor_10000 (call=13, status=1, cib-update=0, confirmed=true) Cancelled
- Nov 29 21:36:22 kvm00 gfs_controld.pcmk[11275]: [11275]: notice: terminate_ais_connection: Disconnecting from Corosync
- Nov 29 21:36:22 kvm00 lrmd: [11012]: info: RA output: (p_gfs_controld:0:stop:stderr) gfs_controld.pcmk: no process found
- Nov 29 21:36:22 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 20: monitor p_iscsi:1_monitor_120000 on kvm01
- Nov 29 21:36:23 kvm00 lrmd: [11012]: info: operation stop[23] on p_gfs_controld:0 for client 11015: pid 12077 exited with return code 0
- Nov 29 21:36:23 kvm00 crmd: [11015]: info: process_lrm_event: LRM operation p_gfs_controld:0_stop_0 (call=23, rc=0, cib-update=64, confirmed=true) ok
- Nov 29 21:36:23 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 25: stop p_dlm_controld:0_stop_0 on kvm00 (local)
- Nov 29 21:36:23 kvm00 lrmd: [11012]: info: cancel_op: operation monitor[11] on p_dlm_controld:0 for client 11015, its parameters: CRM_meta_timeout=[20000] CRM_meta_name=[monitor] crm_feature_set=[3.0.6] CRM_meta_notify=[false] CRM_meta_clone_node_max=[1] CRM_meta_clone=[0] daemon=[dlm_controld.pcmk] CRM_meta_clone_max=[2] CRM_meta_interval=[10000] CRM_meta_globally_unique=[false] cancelled
- Nov 29 21:36:23 kvm00 lrmd: [11012]: info: rsc:p_dlm_controld:0 stop[24] (pid 12084)
- Nov 29 21:36:23 kvm00 crmd: [11015]: info: process_lrm_event: LRM operation p_dlm_controld:0_monitor_10000 (call=11, status=1, cib-update=0, confirmed=true) Cancelled
- Nov 29 21:36:23 kvm00 dlm_controld.pcmk: [11231]: notice: terminate_ais_connection: Disconnecting from Corosync
- Nov 29 21:36:23 kvm00 kernel: [19093.065220] dlm: closing connection to node 184619180
- Nov 29 21:36:23 kvm00 kernel: [19093.065278] dlm: closing connection to node 167841964
- Nov 29 21:36:23 kvm00 lrmd: [11012]: info: RA output: (p_dlm_controld:0:stop:stderr) dlm_controld.pcmk: no process found
- Nov 29 21:36:24 kvm00 lrmd: [11012]: info: operation stop[24] on p_dlm_controld:0 for client 11015: pid 12084 exited with return code 0
- Nov 29 21:36:24 kvm00 crmd: [11015]: info: process_lrm_event: LRM operation p_dlm_controld:0_stop_0 (call=24, rc=0, cib-update=65, confirmed=true) ok
- Nov 29 21:36:24 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 26: start p_dlm_controld:0_start_0 on kvm00 (local)
- Nov 29 21:36:24 kvm00 lrmd: [11012]: info: rsc:p_dlm_controld:0 start[25] (pid 12090)
- Nov 29 21:36:24 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 27: start p_dlm_controld:1_start_0 on kvm01
- Nov 29 21:36:24 kvm00 lrmd: [11012]: info: RA output: (p_dlm_controld:0:start:stderr) dlm_controld.pcmk: no process found
- Nov 29 21:36:24 kvm00 cluster-dlm: [12100]: info: get_cluster_type: Cluster type is: 'openais'
- Nov 29 21:36:24 kvm00 cluster-dlm: [12100]: info: init_ais_connection_classic: Creating connection to our Corosync plugin
- Nov 29 21:36:24 kvm00 cluster-dlm: [12100]: info: init_ais_connection_classic: AIS connection established
- Nov 29 21:36:24 kvm00 cluster-dlm: [12100]: info: get_ais_nodeid: Server details: id=167841964 uname=kvm00 cname=pcmk
- Nov 29 21:36:24 kvm00 cluster-dlm: [12100]: info: init_ais_connection_once: Connection to 'classic openais (with plugin)': established
- Nov 29 21:36:24 kvm00 cluster-dlm: [12100]: info: crm_new_peer: Node kvm00 now has id: 167841964
- Nov 29 21:36:24 kvm00 cluster-dlm: [12100]: info: crm_new_peer: Node 167841964 is now known as kvm00
- Nov 29 21:36:24 kvm00 corosync[3570]: [pcmk ] info: pcmk_notify: Enabling node notifications for child 12100 (0x1850ba0)
- Nov 29 21:36:24 kvm00 cluster-dlm: [12100]: notice: ais_dispatch_message: Membership 1292: quorum acquired
- Nov 29 21:36:24 kvm00 cluster-dlm: [12100]: info: crm_update_peer: Node kvm00: id=167841964 state=member (new) addr=r(0) ip(172.16.1.10) (new) votes=1 (new) born=1292 seen=1292 proc=00000000000000000000000000000000
- Nov 29 21:36:24 kvm00 cluster-dlm: [12100]: info: crm_new_peer: Node kvm01 now has id: 184619180
- Nov 29 21:36:24 kvm00 cluster-dlm: [12100]: info: crm_new_peer: Node 184619180 is now known as kvm01
- Nov 29 21:36:24 kvm00 cluster-dlm: [12100]: info: crm_update_peer: Node kvm01: id=184619180 state=member (new) addr=r(0) ip(172.16.1.11) votes=1 born=1284 seen=1292 proc=00000000000000000000000000000000
- Nov 29 21:36:25 kvm00 lrmd: [11012]: info: operation start[25] on p_dlm_controld:0 for client 11015: pid 12090 exited with return code 0
- Nov 29 21:36:25 kvm00 crmd: [11015]: info: process_lrm_event: LRM operation p_dlm_controld:0_start_0 (call=25, rc=0, cib-update=66, confirmed=true) ok
- Nov 29 21:36:25 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 1: monitor p_dlm_controld:0_monitor_10000 on kvm00 (local)
- Nov 29 21:36:25 kvm00 lrmd: [11012]: info: rsc:p_dlm_controld:0 monitor[26] (pid 12108)
- Nov 29 21:36:25 kvm00 lrmd: [11012]: info: operation monitor[26] on p_dlm_controld:0 for client 11015: pid 12108 exited with return code 0
- Nov 29 21:36:25 kvm00 crmd: [11015]: info: process_lrm_event: LRM operation p_dlm_controld:0_monitor_10000 (call=26, rc=0, cib-update=67, confirmed=false) ok
- Nov 29 21:36:25 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 28: monitor p_dlm_controld:1_monitor_10000 on kvm01
- Nov 29 21:36:25 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 34: start p_gfs_controld:0_start_0 on kvm00 (local)
- Nov 29 21:36:25 kvm00 lrmd: [11012]: info: rsc:p_gfs_controld:0 start[27] (pid 12115)
- Nov 29 21:36:25 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 35: start p_gfs_controld:1_start_0 on kvm01
- Nov 29 21:36:25 kvm00 lrmd: [11012]: info: RA output: (p_gfs_controld:0:start:stderr) gfs_controld.pcmk: no process found
- Nov 29 21:36:25 kvm00 gfs_controld[12125]: gfs_controld 3.0.12 started
- Nov 29 21:36:25 kvm00 cluster-gfs: [12125]: info: get_cluster_type: Cluster type is: 'openais'
- Nov 29 21:36:25 kvm00 cluster-gfs: [12125]: info: init_ais_connection_classic: Creating connection to our Corosync plugin
- Nov 29 21:36:25 kvm00 cluster-gfs: [12125]: info: init_ais_connection_classic: AIS connection established
- Nov 29 21:36:25 kvm00 cluster-gfs: [12125]: info: get_ais_nodeid: Server details: id=167841964 uname=kvm00 cname=pcmk
- Nov 29 21:36:25 kvm00 cluster-gfs: [12125]: info: init_ais_connection_once: Connection to 'classic openais (with plugin)': established
- Nov 29 21:36:25 kvm00 cluster-gfs: [12125]: debug: crm_new_peer: Creating entry for node kvm00/167841964
- Nov 29 21:36:25 kvm00 cluster-gfs: [12125]: info: crm_new_peer: Node kvm00 now has id: 167841964
- Nov 29 21:36:25 kvm00 cluster-gfs: [12125]: info: crm_new_peer: Node 167841964 is now known as kvm00
- Nov 29 21:36:25 kvm00 cluster-gfs: [12125]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/pcmk
- Nov 29 21:36:25 kvm00 corosync[3570]: [pcmk ] info: pcmk_notify: Enabling node notifications for child 12125 (0x18397b0)
- Nov 29 21:36:25 kvm00 gfs_controld[12125]: [12125]: notice: ais_dispatch_message: Membership 1292: quorum acquired
- Nov 29 21:36:25 kvm00 gfs_controld[12125]: [12125]: info: crm_update_peer: Node kvm00: id=167841964 state=member (new) addr=r(0) ip(172.16.1.10) (new) votes=1 (new) born=1292 seen=1292 proc=00000000000000000000000000000000
- Nov 29 21:36:25 kvm00 gfs_controld[12125]: [12125]: debug: crm_new_peer: Creating entry for node kvm01/184619180
- Nov 29 21:36:25 kvm00 gfs_controld[12125]: [12125]: info: crm_new_peer: Node kvm01 now has id: 184619180
- Nov 29 21:36:25 kvm00 gfs_controld[12125]: [12125]: info: crm_new_peer: Node 184619180 is now known as kvm01
- Nov 29 21:36:25 kvm00 gfs_controld[12125]: [12125]: info: crm_update_peer: Node kvm01: id=184619180 state=member (new) addr=r(0) ip(172.16.1.11) votes=1 born=1284 seen=1292 proc=00000000000000000000000000000000
- Nov 29 21:36:26 kvm00 lrmd: [11012]: info: operation start[27] on p_gfs_controld:0 for client 11015: pid 12115 exited with return code 0
- Nov 29 21:36:26 kvm00 crmd: [11015]: info: process_lrm_event: LRM operation p_gfs_controld:0_start_0 (call=27, rc=0, cib-update=68, confirmed=true) ok
- Nov 29 21:36:26 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 2: monitor p_gfs_controld:0_monitor_10000 on kvm00 (local)
- Nov 29 21:36:26 kvm00 lrmd: [11012]: info: rsc:p_gfs_controld:0 monitor[28] (pid 12160)
- Nov 29 21:36:26 kvm00 lrmd: [11012]: info: operation monitor[28] on p_gfs_controld:0 for client 11015: pid 12160 exited with return code 0
- Nov 29 21:36:26 kvm00 crmd: [11015]: info: process_lrm_event: LRM operation p_gfs_controld:0_monitor_10000 (call=28, rc=0, cib-update=69, confirmed=false) ok
- Nov 29 21:36:26 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 36: monitor p_gfs_controld:1_monitor_10000 on kvm01
- Nov 29 21:36:26 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 42: start p_clvm:0_start_0 on kvm00 (local)
- Nov 29 21:36:26 kvm00 lrmd: [11012]: info: rsc:p_clvm:0 start[29] (pid 12167)
- Nov 29 21:36:26 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 43: start p_clvm:1_start_0 on kvm01
- Nov 29 21:36:26 kvm00 clvmd[12167]: INFO: Starting p_clvm:0
- Nov 29 21:36:26 kvm00 clvmd[12178]: CLVMD started
- Nov 29 21:36:26 kvm00 clvmd[12178]: Can't open cluster manager socket: No such file or directory
- Nov 29 21:36:26 kvm00 kernel: [19096.194090] dlm: Using TCP for communications
- Nov 29 21:36:26 kvm00 udevd[12039]: kernel-provided name 'dlm_clvmd' and NAME= 'misc/dlm_clvmd' disagree, please use SYMLINK+= or change the kernel to provide the proper name
- Nov 29 21:36:26 kvm00 kernel: [19096.198561] dlm: connecting to 184619180
- Nov 29 21:36:26 kvm00 kernel: [19096.198771] dlm: got connection from 184619180
- Nov 29 21:36:27 kvm00 clvmd[12178]: Created DLM lockspace for CLVMD.
- Nov 29 21:36:27 kvm00 clvmd[12178]: DLM initialisation complete
- Nov 29 21:36:27 kvm00 clvmd[12178]: Our local node id is 167841964
- Nov 29 21:36:27 kvm00 clvmd[12178]: Connected to Corosync
- Nov 29 21:36:27 kvm00 clvmd[12178]: Cluster LVM daemon started - connected to Corosync
- Nov 29 21:36:27 kvm00 clvmd[12178]: Cluster ready, doing some more initialisation
- Nov 29 21:36:27 kvm00 clvmd[12178]: starting LVM thread
- Nov 29 21:36:27 kvm00 clvmd[12178]: LVM thread function started
- Nov 29 21:36:27 kvm00 lvm[12178]: Sub thread ready for work.
- Nov 29 21:36:27 kvm00 lvm[12178]: LVM thread waiting for work
- Nov 29 21:36:27 kvm00 lvm[12178]: clvmd ready for work
- Nov 29 21:36:27 kvm00 lvm[12178]: Using timeout of 60 seconds
- Nov 29 21:36:27 kvm00 lvm[12178]: confchg callback. 1 joined, 0 left, 2 members
- Nov 29 21:36:29 kvm00 lrmd: [11012]: info: operation start[29] on p_clvm:0 for client 11015: pid 12167 exited with return code 0
- Nov 29 21:36:29 kvm00 crmd: [11015]: info: process_lrm_event: LRM operation p_clvm:0_start_0 (call=29, rc=0, cib-update=70, confirmed=true) ok
- Nov 29 21:36:29 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 5: monitor p_clvm:0_monitor_60000 on kvm00 (local)
- Nov 29 21:36:29 kvm00 lrmd: [11012]: info: rsc:p_clvm:0 monitor[30] (pid 12201)
- Nov 29 21:36:29 kvm00 lrmd: [11012]: info: operation monitor[30] on p_clvm:0 for client 11015: pid 12201 exited with return code 0
- Nov 29 21:36:29 kvm00 crmd: [11015]: info: process_lrm_event: LRM operation p_clvm:0_monitor_60000 (call=30, rc=0, cib-update=71, confirmed=false) ok
- Nov 29 21:36:29 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 44: monitor p_clvm:1_monitor_60000 on kvm01
- Nov 29 21:36:29 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 50: start p_vg0:0_start_0 on kvm00 (local)
- Nov 29 21:36:29 kvm00 lrmd: [11012]: info: rsc:p_vg0:0 start[31] (pid 12205)
- Nov 29 21:36:29 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 51: start p_vg0:1_start_0 on kvm01
- Nov 29 21:36:29 kvm00 LVM[12205]: INFO: Activating volume group vg0
- Nov 29 21:36:29 kvm00 lvm[12178]: Got new connection on fd 5
- Nov 29 21:36:29 kvm00 lvm[12178]: Read on local socket 5, len = 29
- Nov 29 21:36:29 kvm00 lvm[12178]: check_all_clvmds_running
- Nov 29 21:36:29 kvm00 lvm[12178]: down_callback. node 184619180, state = 3
- Nov 29 21:36:29 kvm00 lvm[12178]: down_callback. node 167841964, state = 3
- Nov 29 21:36:29 kvm00 lvm[12178]: creating pipe, [13, 14]
- Nov 29 21:36:29 kvm00 lvm[12178]: Creating pre&post thread
- Nov 29 21:36:29 kvm00 lvm[12178]: Created pre&post thread, state = 0
- Nov 29 21:36:29 kvm00 lvm[12178]: in sub thread: client = 0xda9b80
- Nov 29 21:36:29 kvm00 lvm[12178]: doing PRE command LOCK_VG 'P_#global' at 4 (client=0xda9b80)
- Nov 29 21:36:29 kvm00 lvm[12178]: lock_resource 'P_#global', flags=0, mode=4
- Nov 29 21:36:29 kvm00 lvm[12178]: lock_resource returning 0, lock_id=1
- Nov 29 21:36:29 kvm00 lvm[12178]: Writing status 0 down pipe 14
- Nov 29 21:36:29 kvm00 lvm[12178]: Waiting to do post command - state = 0
- Nov 29 21:36:29 kvm00 lvm[12178]: read on PIPE 13: 4 bytes: status: 0
- Nov 29 21:36:29 kvm00 lvm[12178]: background routine status was 0, sock_client=0xda9b80
- Nov 29 21:36:29 kvm00 lvm[12178]: distribute command: XID = 0, flags=0x0 ()
- Nov 29 21:36:29 kvm00 lvm[12178]: num_nodes = 2
- Nov 29 21:36:29 kvm00 lvm[12178]: add_to_lvmqueue: cmd=0xdaa240. client=0xda9b80, msg=0xda9c90, len=29, csid=(nil), xid=0
- Nov 29 21:36:29 kvm00 lvm[12178]: Sending message to all cluster nodes
- Nov 29 21:36:29 kvm00 lvm[12178]: process_work_item: local
- Nov 29 21:36:29 kvm00 lvm[12178]: process_local_command: LOCK_VG (0x33) msg=0xda9fe0, msglen =29, client=0xda9b80
- Nov 29 21:36:29 kvm00 lvm[12178]: do_lock_vg: resource 'P_#global', cmd = 0x4 LCK_VG (WRITE|VG), flags = 0x0 ( ), critical_section = 0
- Nov 29 21:36:29 kvm00 lvm[12178]: Refreshing context
- Nov 29 21:36:29 kvm00 lvm[12178]: 167841964 got message from nodeid 167841964 for 0. len 29
- Nov 29 21:36:29 kvm00 lvm[12178]: 167841964 got message from nodeid 184619180 for 167841964. len 18
- Nov 29 21:36:29 kvm00 lvm[12178]: Reply from node b0110ac: 0 bytes
- Nov 29 21:36:29 kvm00 lvm[12178]: Got 1 replies, expecting: 2
- Nov 29 21:36:29 kvm00 lvm[12178]: Reply from node a0110ac: 0 bytes
- Nov 29 21:36:29 kvm00 lvm[12178]: Got 2 replies, expecting: 2
- Nov 29 21:36:29 kvm00 lvm[12178]: LVM thread waiting for work
- Nov 29 21:36:29 kvm00 lvm[12178]: Got post command condition...
- Nov 29 21:36:29 kvm00 lvm[12178]: Waiting for next pre command
- Nov 29 21:36:29 kvm00 lvm[12178]: read on PIPE 13: 4 bytes: status: 0
- Nov 29 21:36:29 kvm00 lvm[12178]: background routine status was 0, sock_client=0xda9b80
- Nov 29 21:36:29 kvm00 lvm[12178]: Send local reply
- Nov 29 21:36:29 kvm00 lvm[12178]: Read on local socket 5, len = 25
- Nov 29 21:36:29 kvm00 lvm[12178]: Got pre command condition...
- Nov 29 21:36:29 kvm00 lvm[12178]: doing PRE command LOCK_VG 'V_vg0' at 1 (client=0xda9b80)
- Nov 29 21:36:29 kvm00 lvm[12178]: lock_resource 'V_vg0', flags=0, mode=3
- Nov 29 21:36:29 kvm00 lvm[12178]: lock_resource returning 0, lock_id=3
- Nov 29 21:36:29 kvm00 lvm[12178]: Writing status 0 down pipe 14
- Nov 29 21:36:29 kvm00 lvm[12178]: read on PIPE 13: 4 bytes: status: 0
- Nov 29 21:36:29 kvm00 lvm[12178]: background routine status was 0, sock_client=0xda9b80
- Nov 29 21:36:29 kvm00 lvm[12178]: Waiting to do post command - state = 0
- Nov 29 21:36:29 kvm00 lvm[12178]: distribute command: XID = 1, flags=0x1 (LOCAL)
- Nov 29 21:36:29 kvm00 lvm[12178]: add_to_lvmqueue: cmd=0xdd5900. client=0xda9b80, msg=0xda9c90, len=25, csid=(nil), xid=1
- Nov 29 21:36:29 kvm00 lvm[12178]: process_work_item: local
- Nov 29 21:36:29 kvm00 lvm[12178]: process_local_command: LOCK_VG (0x33) msg=0xdd5940, msglen =25, client=0xda9b80
- Nov 29 21:36:29 kvm00 lvm[12178]: do_lock_vg: resource 'V_vg0', cmd = 0x1 LCK_VG (READ|VG), flags = 0x0 ( ), critical_section = 0
- Nov 29 21:36:29 kvm00 lvm[12178]: Invalidating cached metadata for VG vg0
- Nov 29 21:36:29 kvm00 lvm[12178]: Reply from node a0110ac: 0 bytes
- Nov 29 21:36:29 kvm00 lvm[12178]: Got 1 replies, expecting: 1
- Nov 29 21:36:29 kvm00 lvm[12178]: LVM thread waiting for work
- Nov 29 21:36:29 kvm00 lvm[12178]: Got post command condition...
- Nov 29 21:36:29 kvm00 lvm[12178]: Waiting for next pre command
- Nov 29 21:36:29 kvm00 lvm[12178]: read on PIPE 13: 4 bytes: status: 0
- Nov 29 21:36:29 kvm00 lvm[12178]: background routine status was 0, sock_client=0xda9b80
- Nov 29 21:36:29 kvm00 lvm[12178]: Send local reply
- Nov 29 21:36:29 kvm00 lvm[12178]: Read on local socket 5, len = 31
- Nov 29 21:36:29 kvm00 lvm[12178]: check_all_clvmds_running
- Nov 29 21:36:29 kvm00 lvm[12178]: down_callback. node 184619180, state = 3
- Nov 29 21:36:29 kvm00 lvm[12178]: down_callback. node 167841964, state = 3
- Nov 29 21:36:29 kvm00 lvm[12178]: Got pre command condition...
- Nov 29 21:36:29 kvm00 lvm[12178]: Writing status 0 down pipe 14
- Nov 29 21:36:29 kvm00 lvm[12178]: Waiting to do post command - state = 0
- Nov 29 21:36:29 kvm00 lvm[12178]: read on PIPE 13: 4 bytes: status: 0
- Nov 29 21:36:29 kvm00 lvm[12178]: background routine status was 0, sock_client=0xda9b80
- Nov 29 21:36:29 kvm00 lvm[12178]: distribute command: XID = 2, flags=0x0 ()
- Nov 29 21:36:29 kvm00 lvm[12178]: num_nodes = 2
- Nov 29 21:36:29 kvm00 lvm[12178]: add_to_lvmqueue: cmd=0xd9f910. client=0xda9b80, msg=0xda9c90, len=31, csid=(nil), xid=2
- Nov 29 21:36:29 kvm00 lvm[12178]: Sending message to all cluster nodes
- Nov 29 21:36:29 kvm00 lvm[12178]: process_work_item: local
- Nov 29 21:36:29 kvm00 lvm[12178]: process_local_command: SYNC_NAMES (0x2d) msg=0xdd5900, msglen =31, client=0xda9b80
- Nov 29 21:36:29 kvm00 lvm[12178]: Syncing device names
- Nov 29 21:36:29 kvm00 lvm[12178]: Reply from node a0110ac: 0 bytes
- Nov 29 21:36:29 kvm00 lvm[12178]: Got 1 replies, expecting: 2
- Nov 29 21:36:29 kvm00 lvm[12178]: LVM thread waiting for work
- Nov 29 21:36:29 kvm00 lvm[12178]: 167841964 got message from nodeid 167841964 for 0. len 31
- Nov 29 21:36:29 kvm00 lvm[12178]: 167841964 got message from nodeid 184619180 for 167841964. len 18
- Nov 29 21:36:29 kvm00 lvm[12178]: Reply from node b0110ac: 0 bytes
- Nov 29 21:36:29 kvm00 lvm[12178]: Got 2 replies, expecting: 2
- Nov 29 21:36:29 kvm00 lvm[12178]: Got post command condition...
- Nov 29 21:36:29 kvm00 lvm[12178]: Waiting for next pre command
- Nov 29 21:36:29 kvm00 lvm[12178]: read on PIPE 13: 4 bytes: status: 0
- Nov 29 21:36:29 kvm00 lvm[12178]: background routine status was 0, sock_client=0xda9b80
- Nov 29 21:36:29 kvm00 lvm[12178]: Send local reply
- Nov 29 21:36:29 kvm00 lvm[12178]: Read on local socket 5, len = 25
- Nov 29 21:36:29 kvm00 lvm[12178]: Got pre command condition...
- Nov 29 21:36:29 kvm00 lvm[12178]: doing PRE command LOCK_VG 'V_vg0' at 6 (client=0xda9b80)
- Nov 29 21:36:29 kvm00 lvm[12178]: unlock_resource: V_vg0 lockid: 3
- Nov 29 21:36:29 kvm00 lvm[12178]: Writing status 0 down pipe 14
- Nov 29 21:36:29 kvm00 lvm[12178]: Waiting to do post command - state = 0
- Nov 29 21:36:29 kvm00 lvm[12178]: read on PIPE 13: 4 bytes: status: 0
- Nov 29 21:36:29 kvm00 lvm[12178]: background routine status was 0, sock_client=0xda9b80
- Nov 29 21:36:29 kvm00 lvm[12178]: distribute command: XID = 3, flags=0x1 (LOCAL)
- Nov 29 21:36:29 kvm00 lvm[12178]: add_to_lvmqueue: cmd=0xdd5900. client=0xda9b80, msg=0xda9c90, len=25, csid=(nil), xid=3
- Nov 29 21:36:29 kvm00 lvm[12178]: process_work_item: local
- Nov 29 21:36:29 kvm00 lvm[12178]: process_local_command: LOCK_VG (0x33) msg=0xda9fe0, msglen =25, client=0xda9b80
- Nov 29 21:36:29 kvm00 lvm[12178]: do_lock_vg: resource 'V_vg0', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
- Nov 29 21:36:29 kvm00 lvm[12178]: Invalidating cached metadata for VG vg0
- Nov 29 21:36:29 kvm00 lvm[12178]: Reply from node a0110ac: 0 bytes
- Nov 29 21:36:29 kvm00 lvm[12178]: Got 1 replies, expecting: 1
- Nov 29 21:36:29 kvm00 lvm[12178]: LVM thread waiting for work
- Nov 29 21:36:29 kvm00 lvm[12178]: Got post command condition...
- Nov 29 21:36:29 kvm00 lvm[12178]: Waiting for next pre command
- Nov 29 21:36:29 kvm00 lvm[12178]: read on PIPE 13: 4 bytes: status: 0
- Nov 29 21:36:29 kvm00 lvm[12178]: background routine status was 0, sock_client=0xda9b80
- Nov 29 21:36:29 kvm00 lvm[12178]: Send local reply
- Nov 29 21:36:29 kvm00 lvm[12178]: Read on local socket 5, len = 29
- Nov 29 21:36:29 kvm00 lvm[12178]: check_all_clvmds_running
- Nov 29 21:36:29 kvm00 lvm[12178]: down_callback. node 184619180, state = 3
- Nov 29 21:36:29 kvm00 lvm[12178]: down_callback. node 167841964, state = 3
- Nov 29 21:36:29 kvm00 lvm[12178]: Got pre command condition...
- Nov 29 21:36:29 kvm00 lvm[12178]: doing PRE command LOCK_VG 'P_#global' at 6 (client=0xda9b80)
- Nov 29 21:36:29 kvm00 lvm[12178]: unlock_resource: P_#global lockid: 1
- Nov 29 21:36:29 kvm00 lvm[12178]: Writing status 0 down pipe 14
- Nov 29 21:36:29 kvm00 lvm[12178]: Waiting to do post command - state = 0
- Nov 29 21:36:29 kvm00 lvm[12178]: read on PIPE 13: 4 bytes: status: 0
- Nov 29 21:36:29 kvm00 lvm[12178]: background routine status was 0, sock_client=0xda9b80
- Nov 29 21:36:29 kvm00 lvm[12178]: distribute command: XID = 4, flags=0x0 ()
- Nov 29 21:36:29 kvm00 lvm[12178]: num_nodes = 2
- Nov 29 21:36:29 kvm00 lvm[12178]: add_to_lvmqueue: cmd=0xd997f0. client=0xda9b80, msg=0xda9c90, len=29, csid=(nil), xid=4
- Nov 29 21:36:29 kvm00 lvm[12178]: Sending message to all cluster nodes
- Nov 29 21:36:29 kvm00 lvm[12178]: process_work_item: local
- Nov 29 21:36:29 kvm00 lvm[12178]: process_local_command: LOCK_VG (0x33) msg=0xda9fb0, msglen =29, client=0xda9b80
- Nov 29 21:36:29 kvm00 lvm[12178]: do_lock_vg: resource 'P_#global', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
- Nov 29 21:36:29 kvm00 lvm[12178]: Refreshing context
- Nov 29 21:36:29 kvm00 lvm[12178]: 167841964 got message from nodeid 167841964 for 0. len 29
- Nov 29 21:36:29 kvm00 lvm[12178]: 167841964 got message from nodeid 184619180 for 0. len 29
- Nov 29 21:36:29 kvm00 lvm[12178]: add_to_lvmqueue: cmd=0x7f36c0000920. client=0x69fd20, msg=0x7f36c50ba5fc, len=29, csid=0x7fff1305ad9c, xid=0
- Nov 29 21:36:29 kvm00 lvm[12178]: Reply from node a0110ac: 0 bytes
- Nov 29 21:36:29 kvm00 lvm[12178]: Got 1 replies, expecting: 2
- Nov 29 21:36:29 kvm00 lvm[12178]: process_work_item: remote
- Nov 29 21:36:29 kvm00 lvm[12178]: process_remote_command LOCK_VG (0x33) for clientid 0xd000000 XID 0 on node b0110ac
- Nov 29 21:36:29 kvm00 lvm[12178]: do_lock_vg: resource 'P_#global', cmd = 0x4 LCK_VG (WRITE|VG), flags = 0x0 ( ), critical_section = 0
- Nov 29 21:36:29 kvm00 lvm[12178]: Refreshing context
- Nov 29 21:36:29 kvm00 lvm[12178]: 167841964 got message from nodeid 184619180 for 167841964. len 18
- Nov 29 21:36:29 kvm00 lvm[12178]: Reply from node b0110ac: 0 bytes
- Nov 29 21:36:29 kvm00 lvm[12178]: Got 2 replies, expecting: 2
- Nov 29 21:36:29 kvm00 lvm[12178]: Got post command condition...
- Nov 29 21:36:29 kvm00 lvm[12178]: Waiting for next pre command
- Nov 29 21:36:29 kvm00 lvm[12178]: read on PIPE 13: 4 bytes: status: 0
- Nov 29 21:36:29 kvm00 lvm[12178]: background routine status was 0, sock_client=0xda9b80
- Nov 29 21:36:29 kvm00 lvm[12178]: Send local reply
- Nov 29 21:36:29 kvm00 lvm[12178]: Read on local socket 5, len = 0
- Nov 29 21:36:29 kvm00 lvm[12178]: EOF on local socket: inprogress=0
- Nov 29 21:36:29 kvm00 lvm[12178]: Waiting for child thread
- Nov 29 21:36:29 kvm00 lvm[12178]: Got pre command condition...
- Nov 29 21:36:29 kvm00 lvm[12178]: Subthread finished
- Nov 29 21:36:29 kvm00 lvm[12178]: Joined child thread
- Nov 29 21:36:29 kvm00 lvm[12178]: ret == 0, errno = 0. removing client
- Nov 29 21:36:29 kvm00 lvm[12178]: add_to_lvmqueue: cmd=0x7f36c00008b0. client=0xda9b80, msg=(nil), len=0, csid=(nil), xid=4
- Nov 29 21:36:29 kvm00 LVM[12205]: INFO: Reading all physical volumes. This may take a while... Found volume group "vg0" using metadata type lvm2
- Nov 29 21:36:29 kvm00 lvm[12178]: Got new connection on fd 13
- Nov 29 21:36:29 kvm00 lvm[12178]: Read on local socket 13, len = 25
- Nov 29 21:36:29 kvm00 lvm[12178]: creating pipe, [14, 15]
- Nov 29 21:36:29 kvm00 lvm[12178]: Creating pre&post thread
- Nov 29 21:36:29 kvm00 lvm[12178]: Created pre&post thread, state = 0
- Nov 29 21:36:29 kvm00 lvm[12178]: in sub thread: client = 0x7f36c0000990
- Nov 29 21:36:29 kvm00 lvm[12178]: doing PRE command LOCK_VG 'V_vg0' at 1 (client=0x7f36c0000990)
- Nov 29 21:36:29 kvm00 lvm[12178]: lock_resource 'V_vg0', flags=0, mode=3
- Nov 29 21:36:29 kvm00 lvm[12178]: lock_resource returning 0, lock_id=1
- Nov 29 21:36:29 kvm00 lvm[12178]: Writing status 0 down pipe 15
- Nov 29 21:36:29 kvm00 lvm[12178]: Waiting to do post command - state = 0
- Nov 29 21:36:29 kvm00 lvm[12178]: read on PIPE 14: 4 bytes: status: 0
- Nov 29 21:36:29 kvm00 lvm[12178]: background routine status was 0, sock_client=0x7f36c0000990
- Nov 29 21:36:29 kvm00 lvm[12178]: distribute command: XID = 5, flags=0x1 (LOCAL)
- Nov 29 21:36:29 kvm00 lvm[12178]: add_to_lvmqueue: cmd=0x7f36c0000bb0. client=0x7f36c0000990, msg=0x7f36c00008f0, len=25, csid=(nil), xid=5
- Nov 29 21:36:29 kvm00 lvm[12178]: process_work_item: free fd -1
- Nov 29 21:36:29 kvm00 lvm[12178]: process_work_item: local
- Nov 29 21:36:29 kvm00 lvm[12178]: process_local_command: LOCK_VG (0x33) msg=0x7f36c0000bf0, msglen =25, client=0x7f36c0000990
- Nov 29 21:36:29 kvm00 lvm[12178]: do_lock_vg: resource 'V_vg0', cmd = 0x1 LCK_VG (READ|VG), flags = 0x0 ( ), critical_section = 0
- Nov 29 21:36:29 kvm00 lvm[12178]: Invalidating cached metadata for VG vg0
- Nov 29 21:36:29 kvm00 lvm[12178]: Reply from node a0110ac: 0 bytes
- Nov 29 21:36:29 kvm00 lvm[12178]: Got 1 replies, expecting: 1
- Nov 29 21:36:29 kvm00 lvm[12178]: LVM thread waiting for work
- Nov 29 21:36:29 kvm00 lvm[12178]: Got post command condition...
- Nov 29 21:36:29 kvm00 lvm[12178]: Waiting for next pre command
- Nov 29 21:36:29 kvm00 lvm[12178]: read on PIPE 14: 4 bytes: status: 0
- Nov 29 21:36:29 kvm00 lvm[12178]: background routine status was 0, sock_client=0x7f36c0000990
- Nov 29 21:36:29 kvm00 lvm[12178]: Send local reply
- Nov 29 21:36:29 kvm00 lvm[12178]: 167841964 got message from nodeid 167841964 for 184619180. len 18
- Nov 29 21:36:29 kvm00 lvm[12178]: 167841964 got message from nodeid 184619180 for 0. len 31
- Nov 29 21:36:29 kvm00 lvm[12178]: add_to_lvmqueue: cmd=0x7f36c00008b0. client=0x69fd20, msg=0x7f36c50ba87c, len=31, csid=0x7fff1305ad9c, xid=0
- Nov 29 21:36:29 kvm00 lvm[12178]: process_work_item: remote
- Nov 29 21:36:29 kvm00 rsyslogd-2177: imuxsock begins to drop messages from pid 12178 due to rate-limiting
- Nov 29 21:36:29 kvm00 LVM[12205]: INFO: 3 logical volume(s) in volume group "vg0" now active
- Nov 29 21:36:30 kvm00 lrmd: [11012]: info: operation start[31] on p_vg0:0 for client 11015: pid 12205 exited with return code 0
- Nov 29 21:36:30 kvm00 crmd: [11015]: info: process_lrm_event: LRM operation p_vg0:0_start_0 (call=31, rc=0, cib-update=72, confirmed=true) ok
- Nov 29 21:36:30 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 6: monitor p_vg0:0_monitor_60000 on kvm00 (local)
- Nov 29 21:36:30 kvm00 lrmd: [11012]: info: rsc:p_vg0:0 monitor[32] (pid 12263)
- Nov 29 21:36:30 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 52: monitor p_vg0:1_monitor_60000 on kvm01
- Nov 29 21:36:30 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 58: start p_shared_gfs2:0_start_0 on kvm00 (local)
- Nov 29 21:36:30 kvm00 lrmd: [11012]: info: rsc:p_shared_gfs2:0 start[33] (pid 12273)
- Nov 29 21:36:30 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 59: start p_shared_gfs2:1_start_0 on kvm01
- Nov 29 21:36:30 kvm00 Filesystem[12273]: INFO: Running start for /dev/vg0/shared-gfs2 on /shared00
- Nov 29 21:36:30 kvm00 lrmd: [11012]: info: operation monitor[32] on p_vg0:0 for client 11015: pid 12263 exited with return code 0
- Nov 29 21:36:30 kvm00 crmd: [11015]: info: process_lrm_event: LRM operation p_vg0:0_monitor_60000 (call=32, rc=0, cib-update=73, confirmed=false) ok
- Nov 29 21:36:30 kvm00 lrmd: [11012]: info: RA output: (p_shared_gfs2:0:start:stderr) FATAL: Module scsi_hostadapter not found.
- Nov 29 21:36:30 kvm00 kernel: [19099.785019] GFS2: fsid=: Trying to join cluster "lock_dlm", "pcmk:pcmk"
- Nov 29 21:36:30 kvm00 kernel: [19099.788346] GFS2: fsid=pcmk:pcmk.0: Joined cluster. Now mounting FS...
- Nov 29 21:36:30 kvm00 kernel: [19099.834355] GFS2: fsid=pcmk:pcmk.0: jid=0, already locked for use
- Nov 29 21:36:30 kvm00 kernel: [19099.834357] GFS2: fsid=pcmk:pcmk.0: jid=0: Looking at journal...
- Nov 29 21:36:30 kvm00 kernel: [19099.866688] GFS2: fsid=pcmk:pcmk.0: jid=0: Done
- Nov 29 21:36:30 kvm00 kernel: [19099.866729] GFS2: fsid=pcmk:pcmk.0: jid=1: Trying to acquire journal lock...
- Nov 29 21:36:30 kvm00 kernel: [19099.867518] GFS2: fsid=pcmk:pcmk.0: jid=1: Looking at journal...
- Nov 29 21:36:30 kvm00 kernel: [19099.929070] GFS2: fsid=pcmk:pcmk.0: jid=1: Done
- Nov 29 21:36:30 kvm00 lrmd: [11012]: info: operation start[33] on p_shared_gfs2:0 for client 11015: pid 12273 exited with return code 0
- Nov 29 21:36:30 kvm00 crmd: [11015]: info: process_lrm_event: LRM operation p_shared_gfs2:0_start_0 (call=33, rc=0, cib-update=74, confirmed=true) ok
- Nov 29 21:36:30 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 3: monitor p_shared_gfs2:0_monitor_120000 on kvm00 (local)
- Nov 29 21:36:30 kvm00 lrmd: [11012]: info: rsc:p_shared_gfs2:0 monitor[34] (pid 12337)
- Nov 29 21:36:30 kvm00 lrmd: [11012]: info: operation monitor[34] on p_shared_gfs2:0 for client 11015: pid 12337 exited with return code 0
- Nov 29 21:36:30 kvm00 crmd: [11015]: info: process_lrm_event: LRM operation p_shared_gfs2:0_monitor_120000 (call=34, rc=0, cib-update=75, confirmed=false) ok
- Nov 29 21:36:30 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 60: monitor p_shared_gfs2:1_monitor_120000 on kvm01
- Nov 29 21:36:30 kvm00 crmd: [11015]: notice: run_graph: ==== Transition 1 (Complete=58, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-685.bz2): Complete
- Nov 29 21:36:30 kvm00 crmd: [11015]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]