  1. Nov 29 21:35:43 kvm00 lvm[11328]: Subthread finished
  2. Nov 29 21:35:43 kvm00 lvm[11328]: Joined child thread
  3. Nov 29 21:35:43 kvm00 lvm[11328]: ret == 0, errno = 0. removing client
  4. Nov 29 21:35:43 kvm00 lvm[11328]: add_to_lvmqueue: cmd=0x7f01d0045570. client=0x1df9b30, msg=(nil), len=0, csid=(nil), xid=67
  5. Nov 29 21:35:43 kvm00 lvm[11328]: process_work_item: free fd -1
  6. Nov 29 21:35:43 kvm00 lvm[11328]: LVM thread waiting for work
  7. Nov 29 21:35:43 kvm00 lvm[11328]: Got new connection on fd 5
  8. Nov 29 21:35:43 kvm00 lvm[11328]: Read on local socket 5, len = 25
  9. Nov 29 21:35:43 kvm00 lvm[11328]: creating pipe, [12, 13]
  10. Nov 29 21:35:43 kvm00 rsyslogd-2177: imuxsock begins to drop messages from pid 11328 due to rate-limiting
  11. Nov 29 21:36:16 kvm00 pacemakerd: [11006]: notice: update_node_processes: 0x25fbb30 Node 184619180 now known as kvm01, was:
  12. Nov 29 21:36:16 kvm00 stonith-ng: [11011]: info: crm_new_peer: Node kvm01 now has id: 184619180
  13. Nov 29 21:36:16 kvm00 stonith-ng: [11011]: info: crm_new_peer: Node 184619180 is now known as kvm01
  14. Nov 29 21:36:16 kvm00 crmd: [11015]: notice: crmd_peer_update: Status update: Client kvm01/crmd now has status [online] (DC=true)
  15. Nov 29 21:36:16 kvm00 crmd: [11015]: notice: do_state_transition: State transition S_IDLE -> S_INTEGRATION [ input=I_NODE_JOIN cause=C_FSA_INTERNAL origin=crmd_peer_update ]
  16. Nov 29 21:36:16 kvm00 crmd: [11015]: info: abort_transition_graph: do_te_invoke:169 - Triggered transition abort (complete=1) : Peer Halt
  17. Nov 29 21:36:16 kvm00 crmd: [11015]: info: do_dc_join_offer_all: join-2: Waiting on 2 outstanding join acks
  18. Nov 29 21:36:16 kvm00 crmd: [11015]: info: update_dc: Set DC to kvm00 (3.0.6)
  19. Nov 29 21:36:18 kvm00 crmd: [11015]: info: do_dc_join_offer_all: A new node joined the cluster
  20. Nov 29 21:36:18 kvm00 crmd: [11015]: info: do_dc_join_offer_all: join-3: Waiting on 2 outstanding join acks
  21. Nov 29 21:36:18 kvm00 crmd: [11015]: info: update_dc: Set DC to kvm00 (3.0.6)
  22. Nov 29 21:36:19 kvm00 crmd: [11015]: notice: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
  23. Nov 29 21:36:19 kvm00 crmd: [11015]: info: do_dc_join_finalize: join-3: Syncing the CIB from kvm00 to the rest of the cluster
  24. Nov 29 21:36:19 kvm00 cib: [11010]: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/48, version=0.302.33): ok (rc=0)
  25. Nov 29 21:36:19 kvm00 cib: [11010]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/49, version=0.302.34): ok (rc=0)
  26. Nov 29 21:36:19 kvm00 cib: [11010]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/50, version=0.302.35): ok (rc=0)
  27. Nov 29 21:36:19 kvm00 cib: [11010]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='kvm01']/transient_attributes (origin=kvm01/crmd/6, version=0.302.36): ok (rc=0)
  28. Nov 29 21:36:19 kvm00 crmd: [11015]: info: do_dc_join_ack: join-3: Updating node state to member for kvm01
  29. Nov 29 21:36:19 kvm00 crmd: [11015]: info: erase_status_tag: Deleting xpath: //node_state[@uname='kvm01']/lrm
  30. Nov 29 21:36:19 kvm00 crmd: [11015]: info: do_dc_join_ack: join-3: Updating node state to member for kvm00
  31. Nov 29 21:36:19 kvm00 crmd: [11015]: info: erase_status_tag: Deleting xpath: //node_state[@uname='kvm00']/lrm
  32. Nov 29 21:36:19 kvm00 cib: [11010]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='kvm01']/lrm (origin=local/crmd/51, version=0.302.37): ok (rc=0)
  33. Nov 29 21:36:19 kvm00 crmd: [11015]: notice: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
  34. Nov 29 21:36:19 kvm00 crmd: [11015]: info: abort_transition_graph: do_te_invoke:162 - Triggered transition abort (complete=1) : Peer Cancelled
  35. Nov 29 21:36:19 kvm00 attrd: [11013]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
  36. Nov 29 21:36:19 kvm00 attrd: [11013]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
  37. Nov 29 21:36:19 kvm00 cib: [11010]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='kvm00']/lrm (origin=local/crmd/53, version=0.302.39): ok (rc=0)
  38. Nov 29 21:36:19 kvm00 crmd: [11015]: info: abort_transition_graph: te_update_diff:320 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=p_dlm_controld:0_last_0, magic=0:0;16:0:0:9411dad7-fa00-4d39-9f25-f4c8c4d2c944, cib=0.302.39) : Resource op removal
  39. Nov 29 21:36:19 kvm00 crmd: [11015]: info: abort_transition_graph: te_update_diff:276 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=0.302.40) : LRM Refresh
  40. Nov 29 21:36:19 kvm00 cib: [11010]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/55, version=0.302.41): ok (rc=0)
  41. Nov 29 21:36:19 kvm00 cib: [11010]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/57, version=0.302.43): ok (rc=0)
  42. Nov 29 21:36:19 kvm00 pengine: [11014]: notice: unpack_config: On loss of CCM Quorum: Ignore
  43. Nov 29 21:36:19 kvm00 pengine: [11014]: notice: LogActions: Start p_iscsi:1#011(kvm01)
  44. Nov 29 21:36:19 kvm00 pengine: [11014]: notice: LogActions: Restart p_dlm_controld:0#011(Started kvm00)
  45. Nov 29 21:36:19 kvm00 pengine: [11014]: notice: LogActions: Start p_dlm_controld:1#011(kvm01)
  46. Nov 29 21:36:19 kvm00 pengine: [11014]: notice: LogActions: Restart p_gfs_controld:0#011(Started kvm00)
  47. Nov 29 21:36:19 kvm00 pengine: [11014]: notice: LogActions: Start p_gfs_controld:1#011(kvm01)
  48. Nov 29 21:36:19 kvm00 pengine: [11014]: notice: LogActions: Restart p_clvm:0#011(Started kvm00)
  49. Nov 29 21:36:19 kvm00 pengine: [11014]: notice: LogActions: Start p_clvm:1#011(kvm01)
  50. Nov 29 21:36:19 kvm00 pengine: [11014]: notice: LogActions: Restart p_vg0:0#011(Started kvm00)
  51. Nov 29 21:36:19 kvm00 pengine: [11014]: notice: LogActions: Start p_vg0:1#011(kvm01)
  52. Nov 29 21:36:19 kvm00 pengine: [11014]: notice: LogActions: Restart p_shared_gfs2:0#011(Started kvm00)
  53. Nov 29 21:36:19 kvm00 pengine: [11014]: notice: LogActions: Start p_shared_gfs2:1#011(kvm01)
  54. Nov 29 21:36:19 kvm00 crmd: [11015]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
  55. Nov 29 21:36:19 kvm00 crmd: [11015]: info: do_te_invoke: Processing graph 1 (ref=pe_calc-dc-1385757379-36) derived from /var/lib/pengine/pe-input-685.bz2
  56. Nov 29 21:36:19 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 10: monitor p_iscsi:1_monitor_0 on kvm01
  57. Nov 29 21:36:19 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 11: monitor p_dlm_controld:1_monitor_0 on kvm01
  58. Nov 29 21:36:19 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 12: monitor p_gfs_controld:1_monitor_0 on kvm01
  59. Nov 29 21:36:19 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 13: monitor p_clvm:1_monitor_0 on kvm01
  60. Nov 29 21:36:19 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 14: monitor p_vg0:1_monitor_0 on kvm01
  61. Nov 29 21:36:19 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 15: monitor p_shared_gfs2:1_monitor_0 on kvm01
  62. Nov 29 21:36:20 kvm00 pengine: [11014]: notice: process_pe_message: Transition 1: PEngine Input stored in: /var/lib/pengine/pe-input-685.bz2
  63. Nov 29 21:36:21 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 9: probe_complete probe_complete on kvm01 - no waiting
  64. Nov 29 21:36:21 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 19: start p_iscsi:1_start_0 on kvm01
  65. Nov 29 21:36:21 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 57: stop p_shared_gfs2:0_stop_0 on kvm00 (local)
  66. Nov 29 21:36:21 kvm00 lrmd: [11012]: info: cancel_op: operation monitor[19] on p_shared_gfs2:0 for client 11015, its parameters: fstype=[gfs2] CRM_meta_timeout=[20000] CRM_meta_name=[monitor] crm_feature_set=[3.0.6] device=[/dev/vg0/shared-gfs2] CRM_meta_notify=[false] CRM_meta_clone_node_max=[1] CRM_meta_clone=[0] CRM_meta_clone_max=[2] CRM_meta_interval=[120000] CRM_meta_globally_unique=[false] directory=[/shared00] cancelled
  67. Nov 29 21:36:21 kvm00 lrmd: [11012]: info: rsc:p_shared_gfs2:0 stop[20] (pid 11950)
  68. Nov 29 21:36:21 kvm00 crmd: [11015]: info: process_lrm_event: LRM operation p_shared_gfs2:0_monitor_120000 (call=19, status=1, cib-update=0, confirmed=true) Cancelled
  69. Nov 29 21:36:21 kvm00 Filesystem[11950]: INFO: Running stop for /dev/vg0/shared-gfs2 on /shared00
  70. Nov 29 21:36:21 kvm00 Filesystem[11950]: INFO: Trying to unmount /shared00
  71. Nov 29 21:36:21 kvm00 Filesystem[11950]: INFO: unmounted /shared00 successfully
  72. Nov 29 21:36:21 kvm00 lrmd: [11012]: info: operation stop[20] on p_shared_gfs2:0 for client 11015: pid 11950 exited with return code 0
  73. Nov 29 21:36:21 kvm00 crmd: [11015]: info: process_lrm_event: LRM operation p_shared_gfs2:0_stop_0 (call=20, rc=0, cib-update=61, confirmed=true) ok
  74. Nov 29 21:36:21 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 49: stop p_vg0:0_stop_0 on kvm00 (local)
  75. Nov 29 21:36:21 kvm00 lrmd: [11012]: info: cancel_op: operation monitor[17] on p_vg0:0 for client 11015, its parameters: CRM_meta_timeout=[60000] CRM_meta_name=[monitor] crm_feature_set=[3.0.6] CRM_meta_notify=[false] CRM_meta_clone_node_max=[1] CRM_meta_clone=[0] CRM_meta_clone_max=[2] volgrpname=[vg0] CRM_meta_interval=[60000] CRM_meta_globally_unique=[false] cancelled
  76. Nov 29 21:36:21 kvm00 lrmd: [11012]: info: rsc:p_vg0:0 stop[21] (pid 12021)
  77. Nov 29 21:36:21 kvm00 crmd: [11015]: info: process_lrm_event: LRM operation p_vg0:0_monitor_60000 (call=17, status=1, cib-update=0, confirmed=true) Cancelled
  78. Nov 29 21:36:21 kvm00 rsyslogd-2177: imuxsock lost 80 messages from pid 11328 due to rate-limiting
  79. Nov 29 21:36:21 kvm00 lvm[11328]: Got new connection on fd 5
  80. Nov 29 21:36:21 kvm00 lvm[11328]: Read on local socket 5, len = 25
  81. Nov 29 21:36:21 kvm00 lvm[11328]: creating pipe, [12, 13]
  82. Nov 29 21:36:21 kvm00 lvm[11328]: Creating pre&post thread
  83. Nov 29 21:36:21 kvm00 lvm[11328]: Created pre&post thread, state = 0
  84. Nov 29 21:36:21 kvm00 lvm[11328]: in sub thread: client = 0x7f01d003b520
  85. Nov 29 21:36:21 kvm00 lvm[11328]: doing PRE command LOCK_VG 'V_vg0' at 1 (client=0x7f01d003b520)
  86. Nov 29 21:36:21 kvm00 lvm[11328]: lock_resource 'V_vg0', flags=0, mode=3
  87. Nov 29 21:36:21 kvm00 lvm[11328]: lock_resource returning 0, lock_id=1
  88. Nov 29 21:36:21 kvm00 lvm[11328]: Writing status 0 down pipe 13
  89. Nov 29 21:36:21 kvm00 lvm[11328]: read on PIPE 12: 4 bytes: status: 0
  90. Nov 29 21:36:21 kvm00 lvm[11328]: background routine status was 0, sock_client=0x7f01d003b520
  91. Nov 29 21:36:21 kvm00 lvm[11328]: Waiting to do post command - state = 0
  92. Nov 29 21:36:21 kvm00 lvm[11328]: distribute command: XID = 71, flags=0x1 (LOCAL)
  93. Nov 29 21:36:21 kvm00 lvm[11328]: add_to_lvmqueue: cmd=0x7f01d003b740. client=0x7f01d003b520, msg=0x7f01d0045570, len=25, csid=(nil), xid=71
  94. Nov 29 21:36:21 kvm00 lvm[11328]: process_work_item: local
  95. Nov 29 21:36:21 kvm00 lvm[11328]: process_local_command: LOCK_VG (0x33) msg=0x7f01d003d440, msglen =25, client=0x7f01d003b520
  96. Nov 29 21:36:21 kvm00 lvm[11328]: do_lock_vg: resource 'V_vg0', cmd = 0x1 LCK_VG (READ|VG), flags = 0x0 ( ), critical_section = 0
  97. Nov 29 21:36:21 kvm00 lvm[11328]: Invalidating cached metadata for VG vg0
  98. Nov 29 21:36:21 kvm00 lvm[11328]: Reply from node a0110ac: 0 bytes
  99. Nov 29 21:36:21 kvm00 lvm[11328]: Got 1 replies, expecting: 1
  100. Nov 29 21:36:21 kvm00 lvm[11328]: LVM thread waiting for work
  101. Nov 29 21:36:21 kvm00 lvm[11328]: Got post command condition...
  102. Nov 29 21:36:21 kvm00 lvm[11328]: Waiting for next pre command
  103. Nov 29 21:36:21 kvm00 lvm[11328]: read on PIPE 12: 4 bytes: status: 0
  104. Nov 29 21:36:21 kvm00 lvm[11328]: background routine status was 0, sock_client=0x7f01d003b520
  105. Nov 29 21:36:21 kvm00 lvm[11328]: Send local reply
  106. Nov 29 21:36:21 kvm00 lvm[11328]: Read on local socket 5, len = 31
  107. Nov 29 21:36:21 kvm00 lvm[11328]: Got pre command condition...
  108. Nov 29 21:36:21 kvm00 lvm[11328]: Writing status 0 down pipe 13
  109. Nov 29 21:36:21 kvm00 lvm[11328]: Waiting to do post command - state = 0
  110. Nov 29 21:36:21 kvm00 lvm[11328]: read on PIPE 12: 4 bytes: status: 0
  111. Nov 29 21:36:21 kvm00 lvm[11328]: background routine status was 0, sock_client=0x7f01d003b520
  112. Nov 29 21:36:21 kvm00 lvm[11328]: distribute command: XID = 72, flags=0x1 (LOCAL)
  113. Nov 29 21:36:21 kvm00 lvm[11328]: add_to_lvmqueue: cmd=0x7f01d0045570. client=0x7f01d003b520, msg=0x7f01d003b740, len=31, csid=(nil), xid=72
  114. Nov 29 21:36:21 kvm00 lvm[11328]: process_work_item: local
  115. Nov 29 21:36:21 kvm00 lvm[11328]: process_local_command: SYNC_NAMES (0x2d) msg=0x7f01d003b770, msglen =31, client=0x7f01d003b520
  116. Nov 29 21:36:21 kvm00 lvm[11328]: Syncing device names
  117. Nov 29 21:36:21 kvm00 lvm[11328]: Reply from node a0110ac: 0 bytes
  118. Nov 29 21:36:21 kvm00 lvm[11328]: Got 1 replies, expecting: 1
  119. Nov 29 21:36:21 kvm00 lvm[11328]: LVM thread waiting for work
  120. Nov 29 21:36:21 kvm00 lvm[11328]: Got post command condition...
  121. Nov 29 21:36:21 kvm00 lvm[11328]: Waiting for next pre command
  122. Nov 29 21:36:21 kvm00 lvm[11328]: read on PIPE 12: 4 bytes: status: 0
  123. Nov 29 21:36:21 kvm00 lvm[11328]: background routine status was 0, sock_client=0x7f01d003b520
  124. Nov 29 21:36:21 kvm00 lvm[11328]: Send local reply
  125. Nov 29 21:36:21 kvm00 lvm[11328]: Read on local socket 5, len = 31
  126. Nov 29 21:36:21 kvm00 lvm[11328]: Got pre command condition...
  127. Nov 29 21:36:21 kvm00 lvm[11328]: Writing status 0 down pipe 13
  128. Nov 29 21:36:21 kvm00 lvm[11328]: Waiting to do post command - state = 0
  129. Nov 29 21:36:21 kvm00 lvm[11328]: read on PIPE 12: 4 bytes: status: 0
  130. Nov 29 21:36:21 kvm00 lvm[11328]: background routine status was 0, sock_client=0x7f01d003b520
  131. Nov 29 21:36:21 kvm00 lvm[11328]: distribute command: XID = 73, flags=0x1 (LOCAL)
  132. Nov 29 21:36:21 kvm00 lvm[11328]: add_to_lvmqueue: cmd=0x7f01d0045570. client=0x7f01d003b520, msg=0x7f01d003b740, len=31, csid=(nil), xid=73
  133. Nov 29 21:36:21 kvm00 lvm[11328]: process_work_item: local
  134. Nov 29 21:36:21 kvm00 lvm[11328]: process_local_command: SYNC_NAMES (0x2d) msg=0x7f01d003b770, msglen =31, client=0x7f01d003b520
  135. Nov 29 21:36:21 kvm00 lvm[11328]: Syncing device names
  136. Nov 29 21:36:21 kvm00 lvm[11328]: Reply from node a0110ac: 0 bytes
  137. Nov 29 21:36:21 kvm00 lvm[11328]: Got 1 replies, expecting: 1
  138. Nov 29 21:36:21 kvm00 lvm[11328]: LVM thread waiting for work
  139. Nov 29 21:36:21 kvm00 lvm[11328]: Got post command condition...
  140. Nov 29 21:36:21 kvm00 lvm[11328]: Waiting for next pre command
  141. Nov 29 21:36:21 kvm00 lvm[11328]: read on PIPE 12: 4 bytes: status: 0
  142. Nov 29 21:36:21 kvm00 lvm[11328]: background routine status was 0, sock_client=0x7f01d003b520
  143. Nov 29 21:36:21 kvm00 lvm[11328]: Send local reply
  144. Nov 29 21:36:21 kvm00 lvm[11328]: Read on local socket 5, len = 31
  145. Nov 29 21:36:21 kvm00 lvm[11328]: Got pre command condition...
  146. Nov 29 21:36:21 kvm00 lvm[11328]: Writing status 0 down pipe 13
  147. Nov 29 21:36:21 kvm00 lvm[11328]: Waiting to do post command - state = 0
  148. Nov 29 21:36:21 kvm00 lvm[11328]: read on PIPE 12: 4 bytes: status: 0
  149. Nov 29 21:36:21 kvm00 lvm[11328]: background routine status was 0, sock_client=0x7f01d003b520
  150. Nov 29 21:36:21 kvm00 lvm[11328]: distribute command: XID = 74, flags=0x1 (LOCAL)
  151. Nov 29 21:36:21 kvm00 lvm[11328]: add_to_lvmqueue: cmd=0x7f01d0045570. client=0x7f01d003b520, msg=0x7f01d003b740, len=31, csid=(nil), xid=74
  152. Nov 29 21:36:21 kvm00 lvm[11328]: process_work_item: local
  153. Nov 29 21:36:21 kvm00 lvm[11328]: process_local_command: SYNC_NAMES (0x2d) msg=0x7f01d003b770, msglen =31, client=0x7f01d003b520
  154. Nov 29 21:36:21 kvm00 lvm[11328]: Syncing device names
  155. Nov 29 21:36:21 kvm00 lvm[11328]: Reply from node a0110ac: 0 bytes
  156. Nov 29 21:36:21 kvm00 lvm[11328]: Got 1 replies, expecting: 1
  157. Nov 29 21:36:21 kvm00 lvm[11328]: LVM thread waiting for work
  158. Nov 29 21:36:21 kvm00 lvm[11328]: Got post command condition...
  159. Nov 29 21:36:21 kvm00 lvm[11328]: Waiting for next pre command
  160. Nov 29 21:36:21 kvm00 lvm[11328]: read on PIPE 12: 4 bytes: status: 0
  161. Nov 29 21:36:21 kvm00 lvm[11328]: background routine status was 0, sock_client=0x7f01d003b520
  162. Nov 29 21:36:21 kvm00 lvm[11328]: Send local reply
  163. Nov 29 21:36:21 kvm00 lvm[11328]: Read on local socket 5, len = 31
  164. Nov 29 21:36:21 kvm00 lvm[11328]: check_all_clvmds_running
  165. Nov 29 21:36:21 kvm00 lvm[11328]: down_callback. node 167841964, state = 3
  166. Nov 29 21:36:21 kvm00 lvm[11328]: Got pre command condition...
  167. Nov 29 21:36:21 kvm00 lvm[11328]: Writing status 0 down pipe 13
  168. Nov 29 21:36:21 kvm00 lvm[11328]: Waiting to do post command - state = 0
  169. Nov 29 21:36:21 kvm00 lvm[11328]: read on PIPE 12: 4 bytes: status: 0
  170. Nov 29 21:36:21 kvm00 lvm[11328]: background routine status was 0, sock_client=0x7f01d003b520
  171. Nov 29 21:36:21 kvm00 lvm[11328]: distribute command: XID = 75, flags=0x0 ()
  172. Nov 29 21:36:21 kvm00 lvm[11328]: num_nodes = 1
  173. Nov 29 21:36:21 kvm00 lvm[11328]: add_to_lvmqueue: cmd=0x7f01d003d530. client=0x7f01d003b520, msg=0x7f01d003b740, len=31, csid=(nil), xid=75
  174. Nov 29 21:36:21 kvm00 lvm[11328]: Sending message to all cluster nodes
  175. Nov 29 21:36:21 kvm00 lvm[11328]: process_work_item: local
  176. Nov 29 21:36:21 kvm00 lvm[11328]: process_local_command: SYNC_NAMES (0x2d) msg=0x7f01d003b770, msglen =31, client=0x7f01d003b520
  177. Nov 29 21:36:21 kvm00 lvm[11328]: Syncing device names
  178. Nov 29 21:36:21 kvm00 lvm[11328]: Reply from node a0110ac: 0 bytes
  179. Nov 29 21:36:21 kvm00 lvm[11328]: Got 1 replies, expecting: 1
  180. Nov 29 21:36:21 kvm00 lvm[11328]: LVM thread waiting for work
  181. Nov 29 21:36:21 kvm00 lvm[11328]: Got post command condition...
  182. Nov 29 21:36:21 kvm00 lvm[11328]: Waiting for next pre command
  183. Nov 29 21:36:21 kvm00 lvm[11328]: read on PIPE 12: 4 bytes: status: 0
  184. Nov 29 21:36:21 kvm00 lvm[11328]: background routine status was 0, sock_client=0x7f01d003b520
  185. Nov 29 21:36:21 kvm00 lvm[11328]: Send local reply
  186. Nov 29 21:36:21 kvm00 lvm[11328]: Read on local socket 5, len = 25
  187. Nov 29 21:36:21 kvm00 lvm[11328]: Got pre command condition...
  188. Nov 29 21:36:21 kvm00 lvm[11328]: doing PRE command LOCK_VG 'V_vg0' at 6 (client=0x7f01d003b520)
  189. Nov 29 21:36:21 kvm00 lvm[11328]: unlock_resource: V_vg0 lockid: 1
  190. Nov 29 21:36:21 kvm00 lvm[11328]: Writing status 0 down pipe 13
  191. Nov 29 21:36:21 kvm00 lvm[11328]: read on PIPE 12: 4 bytes: status: 0
  192. Nov 29 21:36:21 kvm00 lvm[11328]: background routine status was 0, sock_client=0x7f01d003b520
  193. Nov 29 21:36:21 kvm00 lvm[11328]: Waiting to do post command - state = 0
  194. Nov 29 21:36:21 kvm00 lvm[11328]: distribute command: XID = 76, flags=0x1 (LOCAL)
  195. Nov 29 21:36:21 kvm00 lvm[11328]: add_to_lvmqueue: cmd=0x7f01d0045570. client=0x7f01d003b520, msg=0x7f01d003b740, len=25, csid=(nil), xid=76
  196. Nov 29 21:36:21 kvm00 lvm[11328]: process_work_item: local
  197. Nov 29 21:36:21 kvm00 lvm[11328]: process_local_command: LOCK_VG (0x33) msg=0x7f01d003b770, msglen =25, client=0x7f01d003b520
  198. Nov 29 21:36:21 kvm00 lvm[11328]: do_lock_vg: resource 'V_vg0', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
  199. Nov 29 21:36:21 kvm00 lvm[11328]: Invalidating cached metadata for VG vg0
  200. Nov 29 21:36:21 kvm00 lvm[11328]: Reply from node a0110ac: 0 bytes
  201. Nov 29 21:36:21 kvm00 lvm[11328]: Got 1 replies, expecting: 1
  202. Nov 29 21:36:21 kvm00 lvm[11328]: 167841964 got message from nodeid 167841964 for 0. len 31
  203. Nov 29 21:36:21 kvm00 lvm[11328]: LVM thread waiting for work
  204. Nov 29 21:36:21 kvm00 lvm[11328]: Got post command condition...
  205. Nov 29 21:36:21 kvm00 lvm[11328]: Waiting for next pre command
  206. Nov 29 21:36:21 kvm00 lvm[11328]: read on PIPE 12: 4 bytes: status: 0
  207. Nov 29 21:36:21 kvm00 lvm[11328]: background routine status was 0, sock_client=0x7f01d003b520
  208. Nov 29 21:36:21 kvm00 lvm[11328]: Send local reply
  209. Nov 29 21:36:21 kvm00 lvm[11328]: Read on local socket 5, len = 0
  210. Nov 29 21:36:21 kvm00 lvm[11328]: EOF on local socket: inprogress=0
  211. Nov 29 21:36:21 kvm00 lvm[11328]: Waiting for child thread
  212. Nov 29 21:36:21 kvm00 lvm[11328]: Got pre command condition...
  213. Nov 29 21:36:21 kvm00 lvm[11328]: Subthread finished
  214. Nov 29 21:36:21 kvm00 lvm[11328]: Joined child thread
  215. Nov 29 21:36:21 kvm00 lvm[11328]: ret == 0, errno = 0. removing client
  216. Nov 29 21:36:21 kvm00 lvm[11328]: add_to_lvmqueue: cmd=0x7f01d0045570. client=0x7f01d003b520, msg=(nil), len=0, csid=(nil), xid=76
  217. Nov 29 21:36:21 kvm00 lvm[11328]: process_work_item: free fd -1
  218. Nov 29 21:36:21 kvm00 lvm[11328]: LVM thread waiting for work
  219. Nov 29 21:36:21 kvm00 LVM[12021]: INFO: Deactivating volume group vg0
  220. Nov 29 21:36:21 kvm00 lvm[11328]: Got new connection on fd 5
  221. Nov 29 21:36:21 kvm00 lvm[11328]: Read on local socket 5, len = 25
  222. Nov 29 21:36:21 kvm00 lvm[11328]: creating pipe, [12, 13]
  223. Nov 29 21:36:21 kvm00 lvm[11328]: Creating pre&post thread
  224. Nov 29 21:36:21 kvm00 lvm[11328]: Created pre&post thread, state = 0
  225. Nov 29 21:36:21 kvm00 lvm[11328]: in sub thread: client = 0x7f01d003b520
  226. Nov 29 21:36:21 kvm00 lvm[11328]: doing PRE command LOCK_VG 'V_vg0' at 1 (client=0x7f01d003b520)
  227. Nov 29 21:36:21 kvm00 lvm[11328]: lock_resource 'V_vg0', flags=0, mode=3
  228. Nov 29 21:36:21 kvm00 lvm[11328]: lock_resource returning 0, lock_id=1
  229. Nov 29 21:36:21 kvm00 lvm[11328]: Writing status 0 down pipe 13
  230. Nov 29 21:36:21 kvm00 lvm[11328]: Waiting to do post command - state = 0
  231. Nov 29 21:36:21 kvm00 lvm[11328]: read on PIPE 12: 4 bytes: status: 0
  232. Nov 29 21:36:21 kvm00 lvm[11328]: background routine status was 0, sock_client=0x7f01d003b520
  233. Nov 29 21:36:21 kvm00 lvm[11328]: distribute command: XID = 77, flags=0x1 (LOCAL)
  234. Nov 29 21:36:21 kvm00 lvm[11328]: add_to_lvmqueue: cmd=0x7f01d003b740. client=0x7f01d003b520, msg=0x7f01d0045570, len=25, csid=(nil), xid=77
  235. Nov 29 21:36:21 kvm00 lvm[11328]: process_work_item: local
  236. Nov 29 21:36:21 kvm00 lvm[11328]: process_local_command: LOCK_VG (0x33) msg=0x7f01d003d440, msglen =25, client=0x7f01d003b520
  237. Nov 29 21:36:21 kvm00 lvm[11328]: do_lock_vg: resource 'V_vg0', cmd = 0x1 LCK_VG (READ|VG), flags = 0x0 ( ), critical_section = 0
  238. Nov 29 21:36:21 kvm00 lvm[11328]: Invalidating cached metadata for VG vg0
  239. Nov 29 21:36:21 kvm00 lvm[11328]: Reply from node a0110ac: 0 bytes
  240. Nov 29 21:36:21 kvm00 lvm[11328]: Got 1 replies, expecting: 1
  241. Nov 29 21:36:21 kvm00 lvm[11328]: LVM thread waiting for work
  242. Nov 29 21:36:21 kvm00 lvm[11328]: Got post command condition...
  243. Nov 29 21:36:21 kvm00 lvm[11328]: Waiting for next pre command
  244. Nov 29 21:36:21 kvm00 lvm[11328]: read on PIPE 12: 4 bytes: status: 0
  245. Nov 29 21:36:21 kvm00 lvm[11328]: background routine status was 0, sock_client=0x7f01d003b520
  246. Nov 29 21:36:21 kvm00 lvm[11328]: Send local reply
  247. Nov 29 21:36:21 kvm00 lvm[11328]: Read on local socket 5, len = 31
  248. Nov 29 21:36:21 kvm00 lvm[11328]: Got pre command condition...
  249. Nov 29 21:36:21 kvm00 lvm[11328]: Writing status 0 down pipe 13
  250. Nov 29 21:36:21 kvm00 lvm[11328]: Waiting to do post command - state = 0
  251. Nov 29 21:36:21 kvm00 lvm[11328]: read on PIPE 12: 4 bytes: status: 0
  252. Nov 29 21:36:21 kvm00 lvm[11328]: background routine status was 0, sock_client=0x7f01d003b520
  253. Nov 29 21:36:21 kvm00 lvm[11328]: distribute command: XID = 78, flags=0x1 (LOCAL)
  254. Nov 29 21:36:21 kvm00 lvm[11328]: add_to_lvmqueue: cmd=0x7f01d0045570. client=0x7f01d003b520, msg=0x7f01d003b740, len=31, csid=(nil), xid=78
  255. Nov 29 21:36:21 kvm00 lvm[11328]: process_work_item: local
  256. Nov 29 21:36:21 kvm00 lvm[11328]: process_local_command: SYNC_NAMES (0x2d) msg=0x7f01d003b770, msglen =31, client=0x7f01d003b520
  257. Nov 29 21:36:21 kvm00 lvm[11328]: Syncing device names
  258. Nov 29 21:36:21 kvm00 lvm[11328]: Reply from node a0110ac: 0 bytes
  259. Nov 29 21:36:21 kvm00 lvm[11328]: Got 1 replies, expecting: 1
  260. Nov 29 21:36:21 kvm00 lvm[11328]: LVM thread waiting for work
  261. Nov 29 21:36:21 kvm00 lvm[11328]: Got post command condition...
  262. Nov 29 21:36:21 kvm00 lvm[11328]: Waiting for next pre command
  263. Nov 29 21:36:21 kvm00 lvm[11328]: read on PIPE 12: 4 bytes: status: 0
  264. Nov 29 21:36:21 kvm00 lvm[11328]: background routine status was 0, sock_client=0x7f01d003b520
  265. Nov 29 21:36:21 kvm00 lvm[11328]: Send local reply
  266. Nov 29 21:36:21 kvm00 lvm[11328]: Read on local socket 5, len = 31
  267. Nov 29 21:36:21 kvm00 lvm[11328]: Got pre command condition...
  268. Nov 29 21:36:21 kvm00 lvm[11328]: Writing status 0 down pipe 13
  269. Nov 29 21:36:21 kvm00 lvm[11328]: Waiting to do post command - state = 0
  270. Nov 29 21:36:21 kvm00 lvm[11328]: read on PIPE 12: 4 bytes: status: 0
  271. Nov 29 21:36:21 kvm00 lvm[11328]: background routine status was 0, sock_client=0x7f01d003b520
  272. Nov 29 21:36:21 kvm00 lvm[11328]: distribute command: XID = 79, flags=0x1 (LOCAL)
  273. Nov 29 21:36:21 kvm00 lvm[11328]: add_to_lvmqueue: cmd=0x7f01d0045570. client=0x7f01d003b520, msg=0x7f01d003b740, len=31, csid=(nil), xid=79
  274. Nov 29 21:36:21 kvm00 lvm[11328]: process_work_item: local
  275. Nov 29 21:36:21 kvm00 lvm[11328]: process_local_command: SYNC_NAMES (0x2d) msg=0x7f01d003b770, msglen =31, client=0x7f01d003b520
  276. Nov 29 21:36:21 kvm00 lvm[11328]: Syncing device names
  277. Nov 29 21:36:21 kvm00 lvm[11328]: Reply from node a0110ac: 0 bytes
  278. Nov 29 21:36:21 kvm00 lvm[11328]: Got 1 replies, expecting: 1
  279. Nov 29 21:36:21 kvm00 lvm[11328]: LVM thread waiting for work
  280. Nov 29 21:36:21 kvm00 rsyslogd-2177: imuxsock begins to drop messages from pid 11328 due to rate-limiting
  281. Nov 29 21:36:21 kvm00 LVM[12021]: INFO: 0 logical volume(s) in volume group "vg0" now active
  282. Nov 29 21:36:21 kvm00 lrmd: [11012]: info: operation stop[21] on p_vg0:0 for client 11015: pid 12021 exited with return code 0
  283. Nov 29 21:36:21 kvm00 crmd: [11015]: info: process_lrm_event: LRM operation p_vg0:0_stop_0 (call=21, rc=0, cib-update=62, confirmed=true) ok
  284. Nov 29 21:36:21 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 41: stop p_clvm:0_stop_0 on kvm00 (local)
  285. Nov 29 21:36:21 kvm00 lrmd: [11012]: info: cancel_op: operation monitor[15] on p_clvm:0 for client 11015, its parameters: daemon_timeout=[30] CRM_meta_timeout=[30000] CRM_meta_name=[monitor] crm_feature_set=[3.0.6] CRM_meta_notify=[false] CRM_meta_clone_node_max=[1] CRM_meta_clone=[0] CRM_meta_clone_max=[2] CRM_meta_interval=[60000] CRM_meta_globally_unique=[false] cancelled
  286. Nov 29 21:36:21 kvm00 lrmd: [11012]: info: rsc:p_clvm:0 stop[22] (pid 12056)
  287. Nov 29 21:36:21 kvm00 crmd: [11015]: info: process_lrm_event: LRM operation p_clvm:0_monitor_60000 (call=15, status=1, cib-update=0, confirmed=true) Cancelled
  288. Nov 29 21:36:21 kvm00 clvmd[12056]: INFO: Stopping p_clvm:0
  289. Nov 29 21:36:21 kvm00 clvmd[12056]: INFO: Stopping clvmd
  290. Nov 29 21:36:22 kvm00 lrmd: [11012]: info: operation stop[22] on p_clvm:0 for client 11015: pid 12056 exited with return code 0
  291. Nov 29 21:36:22 kvm00 crmd: [11015]: info: process_lrm_event: LRM operation p_clvm:0_stop_0 (call=22, rc=0, cib-update=63, confirmed=true) ok
  292. Nov 29 21:36:22 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 33: stop p_gfs_controld:0_stop_0 on kvm00 (local)
  293. Nov 29 21:36:22 kvm00 lrmd: [11012]: info: cancel_op: operation monitor[13] on p_gfs_controld:0 for client 11015, its parameters: CRM_meta_timeout=[20000] CRM_meta_name=[monitor] crm_feature_set=[3.0.6] CRM_meta_notify=[false] CRM_meta_clone_node_max=[1] CRM_meta_clone=[0] daemon=[gfs_controld.pcmk] CRM_meta_clone_max=[2] CRM_meta_interval=[10000] CRM_meta_globally_unique=[false] cancelled
  294. Nov 29 21:36:22 kvm00 lrmd: [11012]: info: rsc:p_gfs_controld:0 stop[23] (pid 12077)
  295. Nov 29 21:36:22 kvm00 crmd: [11015]: info: process_lrm_event: LRM operation p_gfs_controld:0_monitor_10000 (call=13, status=1, cib-update=0, confirmed=true) Cancelled
  296. Nov 29 21:36:22 kvm00 gfs_controld.pcmk[11275]: [11275]: notice: terminate_ais_connection: Disconnecting from Corosync
  297. Nov 29 21:36:22 kvm00 lrmd: [11012]: info: RA output: (p_gfs_controld:0:stop:stderr) gfs_controld.pcmk: no process found
  298. Nov 29 21:36:22 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 20: monitor p_iscsi:1_monitor_120000 on kvm01
  299. Nov 29 21:36:23 kvm00 lrmd: [11012]: info: operation stop[23] on p_gfs_controld:0 for client 11015: pid 12077 exited with return code 0
  300. Nov 29 21:36:23 kvm00 crmd: [11015]: info: process_lrm_event: LRM operation p_gfs_controld:0_stop_0 (call=23, rc=0, cib-update=64, confirmed=true) ok
  301. Nov 29 21:36:23 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 25: stop p_dlm_controld:0_stop_0 on kvm00 (local)
  302. Nov 29 21:36:23 kvm00 lrmd: [11012]: info: cancel_op: operation monitor[11] on p_dlm_controld:0 for client 11015, its parameters: CRM_meta_timeout=[20000] CRM_meta_name=[monitor] crm_feature_set=[3.0.6] CRM_meta_notify=[false] CRM_meta_clone_node_max=[1] CRM_meta_clone=[0] daemon=[dlm_controld.pcmk] CRM_meta_clone_max=[2] CRM_meta_interval=[10000] CRM_meta_globally_unique=[false] cancelled
  303. Nov 29 21:36:23 kvm00 lrmd: [11012]: info: rsc:p_dlm_controld:0 stop[24] (pid 12084)
  304. Nov 29 21:36:23 kvm00 crmd: [11015]: info: process_lrm_event: LRM operation p_dlm_controld:0_monitor_10000 (call=11, status=1, cib-update=0, confirmed=true) Cancelled
  305. Nov 29 21:36:23 kvm00 dlm_controld.pcmk: [11231]: notice: terminate_ais_connection: Disconnecting from Corosync
  306. Nov 29 21:36:23 kvm00 kernel: [19093.065220] dlm: closing connection to node 184619180
  307. Nov 29 21:36:23 kvm00 kernel: [19093.065278] dlm: closing connection to node 167841964
  308. Nov 29 21:36:23 kvm00 lrmd: [11012]: info: RA output: (p_dlm_controld:0:stop:stderr) dlm_controld.pcmk: no process found
  309. Nov 29 21:36:24 kvm00 lrmd: [11012]: info: operation stop[24] on p_dlm_controld:0 for client 11015: pid 12084 exited with return code 0
  310. Nov 29 21:36:24 kvm00 crmd: [11015]: info: process_lrm_event: LRM operation p_dlm_controld:0_stop_0 (call=24, rc=0, cib-update=65, confirmed=true) ok
  311. Nov 29 21:36:24 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 26: start p_dlm_controld:0_start_0 on kvm00 (local)
  312. Nov 29 21:36:24 kvm00 lrmd: [11012]: info: rsc:p_dlm_controld:0 start[25] (pid 12090)
  313. Nov 29 21:36:24 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 27: start p_dlm_controld:1_start_0 on kvm01
  314. Nov 29 21:36:24 kvm00 lrmd: [11012]: info: RA output: (p_dlm_controld:0:start:stderr) dlm_controld.pcmk: no process found
  315. Nov 29 21:36:24 kvm00 cluster-dlm: [12100]: info: get_cluster_type: Cluster type is: 'openais'
  316. Nov 29 21:36:24 kvm00 cluster-dlm: [12100]: info: init_ais_connection_classic: Creating connection to our Corosync plugin
  317. Nov 29 21:36:24 kvm00 cluster-dlm: [12100]: info: init_ais_connection_classic: AIS connection established
  318. Nov 29 21:36:24 kvm00 cluster-dlm: [12100]: info: get_ais_nodeid: Server details: id=167841964 uname=kvm00 cname=pcmk
  319. Nov 29 21:36:24 kvm00 cluster-dlm: [12100]: info: init_ais_connection_once: Connection to 'classic openais (with plugin)': established
  320. Nov 29 21:36:24 kvm00 cluster-dlm: [12100]: info: crm_new_peer: Node kvm00 now has id: 167841964
  321. Nov 29 21:36:24 kvm00 cluster-dlm: [12100]: info: crm_new_peer: Node 167841964 is now known as kvm00
  322. Nov 29 21:36:24 kvm00 corosync[3570]: [pcmk ] info: pcmk_notify: Enabling node notifications for child 12100 (0x1850ba0)
  323. Nov 29 21:36:24 kvm00 cluster-dlm: [12100]: notice: ais_dispatch_message: Membership 1292: quorum acquired
  324. Nov 29 21:36:24 kvm00 cluster-dlm: [12100]: info: crm_update_peer: Node kvm00: id=167841964 state=member (new) addr=r(0) ip(172.16.1.10) (new) votes=1 (new) born=1292 seen=1292 proc=00000000000000000000000000000000
  325. Nov 29 21:36:24 kvm00 cluster-dlm: [12100]: info: crm_new_peer: Node kvm01 now has id: 184619180
  326. Nov 29 21:36:24 kvm00 cluster-dlm: [12100]: info: crm_new_peer: Node 184619180 is now known as kvm01
  327. Nov 29 21:36:24 kvm00 cluster-dlm: [12100]: info: crm_update_peer: Node kvm01: id=184619180 state=member (new) addr=r(0) ip(172.16.1.11) votes=1 born=1284 seen=1292 proc=00000000000000000000000000000000
  328. Nov 29 21:36:25 kvm00 lrmd: [11012]: info: operation start[25] on p_dlm_controld:0 for client 11015: pid 12090 exited with return code 0
  329. Nov 29 21:36:25 kvm00 crmd: [11015]: info: process_lrm_event: LRM operation p_dlm_controld:0_start_0 (call=25, rc=0, cib-update=66, confirmed=true) ok
  330. Nov 29 21:36:25 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 1: monitor p_dlm_controld:0_monitor_10000 on kvm00 (local)
  331. Nov 29 21:36:25 kvm00 lrmd: [11012]: info: rsc:p_dlm_controld:0 monitor[26] (pid 12108)
  332. Nov 29 21:36:25 kvm00 lrmd: [11012]: info: operation monitor[26] on p_dlm_controld:0 for client 11015: pid 12108 exited with return code 0
  333. Nov 29 21:36:25 kvm00 crmd: [11015]: info: process_lrm_event: LRM operation p_dlm_controld:0_monitor_10000 (call=26, rc=0, cib-update=67, confirmed=false) ok
  334. Nov 29 21:36:25 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 28: monitor p_dlm_controld:1_monitor_10000 on kvm01
  335. Nov 29 21:36:25 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 34: start p_gfs_controld:0_start_0 on kvm00 (local)
  336. Nov 29 21:36:25 kvm00 lrmd: [11012]: info: rsc:p_gfs_controld:0 start[27] (pid 12115)
  337. Nov 29 21:36:25 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 35: start p_gfs_controld:1_start_0 on kvm01
  338. Nov 29 21:36:25 kvm00 lrmd: [11012]: info: RA output: (p_gfs_controld:0:start:stderr) gfs_controld.pcmk: no process found
  339. Nov 29 21:36:25 kvm00 gfs_controld[12125]: gfs_controld 3.0.12 started
  340. Nov 29 21:36:25 kvm00 cluster-gfs: [12125]: info: get_cluster_type: Cluster type is: 'openais'
  341. Nov 29 21:36:25 kvm00 cluster-gfs: [12125]: info: init_ais_connection_classic: Creating connection to our Corosync plugin
  342. Nov 29 21:36:25 kvm00 cluster-gfs: [12125]: info: init_ais_connection_classic: AIS connection established
  343. Nov 29 21:36:25 kvm00 cluster-gfs: [12125]: info: get_ais_nodeid: Server details: id=167841964 uname=kvm00 cname=pcmk
  344. Nov 29 21:36:25 kvm00 cluster-gfs: [12125]: info: init_ais_connection_once: Connection to 'classic openais (with plugin)': established
  345. Nov 29 21:36:25 kvm00 cluster-gfs: [12125]: debug: crm_new_peer: Creating entry for node kvm00/167841964
  346. Nov 29 21:36:25 kvm00 cluster-gfs: [12125]: info: crm_new_peer: Node kvm00 now has id: 167841964
  347. Nov 29 21:36:25 kvm00 cluster-gfs: [12125]: info: crm_new_peer: Node 167841964 is now known as kvm00
  348. Nov 29 21:36:25 kvm00 cluster-gfs: [12125]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/pcmk
  349. Nov 29 21:36:25 kvm00 corosync[3570]: [pcmk ] info: pcmk_notify: Enabling node notifications for child 12125 (0x18397b0)
  350. Nov 29 21:36:25 kvm00 gfs_controld[12125]: [12125]: notice: ais_dispatch_message: Membership 1292: quorum acquired
  351. Nov 29 21:36:25 kvm00 gfs_controld[12125]: [12125]: info: crm_update_peer: Node kvm00: id=167841964 state=member (new) addr=r(0) ip(172.16.1.10) (new) votes=1 (new) born=1292 seen=1292 proc=00000000000000000000000000000000
  352. Nov 29 21:36:25 kvm00 gfs_controld[12125]: [12125]: debug: crm_new_peer: Creating entry for node kvm01/184619180
  353. Nov 29 21:36:25 kvm00 gfs_controld[12125]: [12125]: info: crm_new_peer: Node kvm01 now has id: 184619180
  354. Nov 29 21:36:25 kvm00 gfs_controld[12125]: [12125]: info: crm_new_peer: Node 184619180 is now known as kvm01
  355. Nov 29 21:36:25 kvm00 gfs_controld[12125]: [12125]: info: crm_update_peer: Node kvm01: id=184619180 state=member (new) addr=r(0) ip(172.16.1.11) votes=1 born=1284 seen=1292 proc=00000000000000000000000000000000
  356. Nov 29 21:36:26 kvm00 lrmd: [11012]: info: operation start[27] on p_gfs_controld:0 for client 11015: pid 12115 exited with return code 0
  357. Nov 29 21:36:26 kvm00 crmd: [11015]: info: process_lrm_event: LRM operation p_gfs_controld:0_start_0 (call=27, rc=0, cib-update=68, confirmed=true) ok
  358. Nov 29 21:36:26 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 2: monitor p_gfs_controld:0_monitor_10000 on kvm00 (local)
  359. Nov 29 21:36:26 kvm00 lrmd: [11012]: info: rsc:p_gfs_controld:0 monitor[28] (pid 12160)
  360. Nov 29 21:36:26 kvm00 lrmd: [11012]: info: operation monitor[28] on p_gfs_controld:0 for client 11015: pid 12160 exited with return code 0
  361. Nov 29 21:36:26 kvm00 crmd: [11015]: info: process_lrm_event: LRM operation p_gfs_controld:0_monitor_10000 (call=28, rc=0, cib-update=69, confirmed=false) ok
  362. Nov 29 21:36:26 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 36: monitor p_gfs_controld:1_monitor_10000 on kvm01
  363. Nov 29 21:36:26 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 42: start p_clvm:0_start_0 on kvm00 (local)
  364. Nov 29 21:36:26 kvm00 lrmd: [11012]: info: rsc:p_clvm:0 start[29] (pid 12167)
  365. Nov 29 21:36:26 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 43: start p_clvm:1_start_0 on kvm01
  366. Nov 29 21:36:26 kvm00 clvmd[12167]: INFO: Starting p_clvm:0
  367. Nov 29 21:36:26 kvm00 clvmd[12178]: CLVMD started
  368. Nov 29 21:36:26 kvm00 clvmd[12178]: Can't open cluster manager socket: No such file or directory
  369. Nov 29 21:36:26 kvm00 kernel: [19096.194090] dlm: Using TCP for communications
  370. Nov 29 21:36:26 kvm00 udevd[12039]: kernel-provided name 'dlm_clvmd' and NAME= 'misc/dlm_clvmd' disagree, please use SYMLINK+= or change the kernel to provide the proper name
  371. Nov 29 21:36:26 kvm00 kernel: [19096.198561] dlm: connecting to 184619180
  372. Nov 29 21:36:26 kvm00 kernel: [19096.198771] dlm: got connection from 184619180
  373. Nov 29 21:36:27 kvm00 clvmd[12178]: Created DLM lockspace for CLVMD.
  374. Nov 29 21:36:27 kvm00 clvmd[12178]: DLM initialisation complete
  375. Nov 29 21:36:27 kvm00 clvmd[12178]: Our local node id is 167841964
  376. Nov 29 21:36:27 kvm00 clvmd[12178]: Connected to Corosync
  377. Nov 29 21:36:27 kvm00 clvmd[12178]: Cluster LVM daemon started - connected to Corosync
  378. Nov 29 21:36:27 kvm00 clvmd[12178]: Cluster ready, doing some more initialisation
  379. Nov 29 21:36:27 kvm00 clvmd[12178]: starting LVM thread
  380. Nov 29 21:36:27 kvm00 clvmd[12178]: LVM thread function started
  381. Nov 29 21:36:27 kvm00 lvm[12178]: Sub thread ready for work.
  382. Nov 29 21:36:27 kvm00 lvm[12178]: LVM thread waiting for work
  383. Nov 29 21:36:27 kvm00 lvm[12178]: clvmd ready for work
  384. Nov 29 21:36:27 kvm00 lvm[12178]: Using timeout of 60 seconds
  385. Nov 29 21:36:27 kvm00 lvm[12178]: confchg callback. 1 joined, 0 left, 2 members
  386. Nov 29 21:36:29 kvm00 lrmd: [11012]: info: operation start[29] on p_clvm:0 for client 11015: pid 12167 exited with return code 0
  387. Nov 29 21:36:29 kvm00 crmd: [11015]: info: process_lrm_event: LRM operation p_clvm:0_start_0 (call=29, rc=0, cib-update=70, confirmed=true) ok
  388. Nov 29 21:36:29 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 5: monitor p_clvm:0_monitor_60000 on kvm00 (local)
  389. Nov 29 21:36:29 kvm00 lrmd: [11012]: info: rsc:p_clvm:0 monitor[30] (pid 12201)
  390. Nov 29 21:36:29 kvm00 lrmd: [11012]: info: operation monitor[30] on p_clvm:0 for client 11015: pid 12201 exited with return code 0
  391. Nov 29 21:36:29 kvm00 crmd: [11015]: info: process_lrm_event: LRM operation p_clvm:0_monitor_60000 (call=30, rc=0, cib-update=71, confirmed=false) ok
  392. Nov 29 21:36:29 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 44: monitor p_clvm:1_monitor_60000 on kvm01
  393. Nov 29 21:36:29 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 50: start p_vg0:0_start_0 on kvm00 (local)
  394. Nov 29 21:36:29 kvm00 lrmd: [11012]: info: rsc:p_vg0:0 start[31] (pid 12205)
  395. Nov 29 21:36:29 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 51: start p_vg0:1_start_0 on kvm01
  396. Nov 29 21:36:29 kvm00 LVM[12205]: INFO: Activating volume group vg0
  397. Nov 29 21:36:29 kvm00 lvm[12178]: Got new connection on fd 5
  398. Nov 29 21:36:29 kvm00 lvm[12178]: Read on local socket 5, len = 29
  399. Nov 29 21:36:29 kvm00 lvm[12178]: check_all_clvmds_running
  400. Nov 29 21:36:29 kvm00 lvm[12178]: down_callback. node 184619180, state = 3
  401. Nov 29 21:36:29 kvm00 lvm[12178]: down_callback. node 167841964, state = 3
  402. Nov 29 21:36:29 kvm00 lvm[12178]: creating pipe, [13, 14]
  403. Nov 29 21:36:29 kvm00 lvm[12178]: Creating pre&post thread
  404. Nov 29 21:36:29 kvm00 lvm[12178]: Created pre&post thread, state = 0
  405. Nov 29 21:36:29 kvm00 lvm[12178]: in sub thread: client = 0xda9b80
  406. Nov 29 21:36:29 kvm00 lvm[12178]: doing PRE command LOCK_VG 'P_#global' at 4 (client=0xda9b80)
  407. Nov 29 21:36:29 kvm00 lvm[12178]: lock_resource 'P_#global', flags=0, mode=4
  408. Nov 29 21:36:29 kvm00 lvm[12178]: lock_resource returning 0, lock_id=1
  409. Nov 29 21:36:29 kvm00 lvm[12178]: Writing status 0 down pipe 14
  410. Nov 29 21:36:29 kvm00 lvm[12178]: Waiting to do post command - state = 0
  411. Nov 29 21:36:29 kvm00 lvm[12178]: read on PIPE 13: 4 bytes: status: 0
  412. Nov 29 21:36:29 kvm00 lvm[12178]: background routine status was 0, sock_client=0xda9b80
  413. Nov 29 21:36:29 kvm00 lvm[12178]: distribute command: XID = 0, flags=0x0 ()
  414. Nov 29 21:36:29 kvm00 lvm[12178]: num_nodes = 2
  415. Nov 29 21:36:29 kvm00 lvm[12178]: add_to_lvmqueue: cmd=0xdaa240. client=0xda9b80, msg=0xda9c90, len=29, csid=(nil), xid=0
  416. Nov 29 21:36:29 kvm00 lvm[12178]: Sending message to all cluster nodes
  417. Nov 29 21:36:29 kvm00 lvm[12178]: process_work_item: local
  418. Nov 29 21:36:29 kvm00 lvm[12178]: process_local_command: LOCK_VG (0x33) msg=0xda9fe0, msglen =29, client=0xda9b80
  419. Nov 29 21:36:29 kvm00 lvm[12178]: do_lock_vg: resource 'P_#global', cmd = 0x4 LCK_VG (WRITE|VG), flags = 0x0 ( ), critical_section = 0
  420. Nov 29 21:36:29 kvm00 lvm[12178]: Refreshing context
  421. Nov 29 21:36:29 kvm00 lvm[12178]: 167841964 got message from nodeid 167841964 for 0. len 29
  422. Nov 29 21:36:29 kvm00 lvm[12178]: 167841964 got message from nodeid 184619180 for 167841964. len 18
  423. Nov 29 21:36:29 kvm00 lvm[12178]: Reply from node b0110ac: 0 bytes
  424. Nov 29 21:36:29 kvm00 lvm[12178]: Got 1 replies, expecting: 2
  425. Nov 29 21:36:29 kvm00 lvm[12178]: Reply from node a0110ac: 0 bytes
  426. Nov 29 21:36:29 kvm00 lvm[12178]: Got 2 replies, expecting: 2
  427. Nov 29 21:36:29 kvm00 lvm[12178]: LVM thread waiting for work
  428. Nov 29 21:36:29 kvm00 lvm[12178]: Got post command condition...
  429. Nov 29 21:36:29 kvm00 lvm[12178]: Waiting for next pre command
  430. Nov 29 21:36:29 kvm00 lvm[12178]: read on PIPE 13: 4 bytes: status: 0
  431. Nov 29 21:36:29 kvm00 lvm[12178]: background routine status was 0, sock_client=0xda9b80
  432. Nov 29 21:36:29 kvm00 lvm[12178]: Send local reply
  433. Nov 29 21:36:29 kvm00 lvm[12178]: Read on local socket 5, len = 25
  434. Nov 29 21:36:29 kvm00 lvm[12178]: Got pre command condition...
  435. Nov 29 21:36:29 kvm00 lvm[12178]: doing PRE command LOCK_VG 'V_vg0' at 1 (client=0xda9b80)
  436. Nov 29 21:36:29 kvm00 lvm[12178]: lock_resource 'V_vg0', flags=0, mode=3
  437. Nov 29 21:36:29 kvm00 lvm[12178]: lock_resource returning 0, lock_id=3
  438. Nov 29 21:36:29 kvm00 lvm[12178]: Writing status 0 down pipe 14
  439. Nov 29 21:36:29 kvm00 lvm[12178]: read on PIPE 13: 4 bytes: status: 0
  440. Nov 29 21:36:29 kvm00 lvm[12178]: background routine status was 0, sock_client=0xda9b80
  441. Nov 29 21:36:29 kvm00 lvm[12178]: Waiting to do post command - state = 0
  442. Nov 29 21:36:29 kvm00 lvm[12178]: distribute command: XID = 1, flags=0x1 (LOCAL)
  443. Nov 29 21:36:29 kvm00 lvm[12178]: add_to_lvmqueue: cmd=0xdd5900. client=0xda9b80, msg=0xda9c90, len=25, csid=(nil), xid=1
  444. Nov 29 21:36:29 kvm00 lvm[12178]: process_work_item: local
  445. Nov 29 21:36:29 kvm00 lvm[12178]: process_local_command: LOCK_VG (0x33) msg=0xdd5940, msglen =25, client=0xda9b80
  446. Nov 29 21:36:29 kvm00 lvm[12178]: do_lock_vg: resource 'V_vg0', cmd = 0x1 LCK_VG (READ|VG), flags = 0x0 ( ), critical_section = 0
  447. Nov 29 21:36:29 kvm00 lvm[12178]: Invalidating cached metadata for VG vg0
  448. Nov 29 21:36:29 kvm00 lvm[12178]: Reply from node a0110ac: 0 bytes
  449. Nov 29 21:36:29 kvm00 lvm[12178]: Got 1 replies, expecting: 1
  450. Nov 29 21:36:29 kvm00 lvm[12178]: LVM thread waiting for work
  451. Nov 29 21:36:29 kvm00 lvm[12178]: Got post command condition...
  452. Nov 29 21:36:29 kvm00 lvm[12178]: Waiting for next pre command
  453. Nov 29 21:36:29 kvm00 lvm[12178]: read on PIPE 13: 4 bytes: status: 0
  454. Nov 29 21:36:29 kvm00 lvm[12178]: background routine status was 0, sock_client=0xda9b80
  455. Nov 29 21:36:29 kvm00 lvm[12178]: Send local reply
  456. Nov 29 21:36:29 kvm00 lvm[12178]: Read on local socket 5, len = 31
  457. Nov 29 21:36:29 kvm00 lvm[12178]: check_all_clvmds_running
  458. Nov 29 21:36:29 kvm00 lvm[12178]: down_callback. node 184619180, state = 3
  459. Nov 29 21:36:29 kvm00 lvm[12178]: down_callback. node 167841964, state = 3
  460. Nov 29 21:36:29 kvm00 lvm[12178]: Got pre command condition...
  461. Nov 29 21:36:29 kvm00 lvm[12178]: Writing status 0 down pipe 14
  462. Nov 29 21:36:29 kvm00 lvm[12178]: Waiting to do post command - state = 0
  463. Nov 29 21:36:29 kvm00 lvm[12178]: read on PIPE 13: 4 bytes: status: 0
  464. Nov 29 21:36:29 kvm00 lvm[12178]: background routine status was 0, sock_client=0xda9b80
  465. Nov 29 21:36:29 kvm00 lvm[12178]: distribute command: XID = 2, flags=0x0 ()
  466. Nov 29 21:36:29 kvm00 lvm[12178]: num_nodes = 2
  467. Nov 29 21:36:29 kvm00 lvm[12178]: add_to_lvmqueue: cmd=0xd9f910. client=0xda9b80, msg=0xda9c90, len=31, csid=(nil), xid=2
  468. Nov 29 21:36:29 kvm00 lvm[12178]: Sending message to all cluster nodes
  469. Nov 29 21:36:29 kvm00 lvm[12178]: process_work_item: local
  470. Nov 29 21:36:29 kvm00 lvm[12178]: process_local_command: SYNC_NAMES (0x2d) msg=0xdd5900, msglen =31, client=0xda9b80
  471. Nov 29 21:36:29 kvm00 lvm[12178]: Syncing device names
  472. Nov 29 21:36:29 kvm00 lvm[12178]: Reply from node a0110ac: 0 bytes
  473. Nov 29 21:36:29 kvm00 lvm[12178]: Got 1 replies, expecting: 2
  474. Nov 29 21:36:29 kvm00 lvm[12178]: LVM thread waiting for work
  475. Nov 29 21:36:29 kvm00 lvm[12178]: 167841964 got message from nodeid 167841964 for 0. len 31
  476. Nov 29 21:36:29 kvm00 lvm[12178]: 167841964 got message from nodeid 184619180 for 167841964. len 18
  477. Nov 29 21:36:29 kvm00 lvm[12178]: Reply from node b0110ac: 0 bytes
  478. Nov 29 21:36:29 kvm00 lvm[12178]: Got 2 replies, expecting: 2
  479. Nov 29 21:36:29 kvm00 lvm[12178]: Got post command condition...
  480. Nov 29 21:36:29 kvm00 lvm[12178]: Waiting for next pre command
  481. Nov 29 21:36:29 kvm00 lvm[12178]: read on PIPE 13: 4 bytes: status: 0
  482. Nov 29 21:36:29 kvm00 lvm[12178]: background routine status was 0, sock_client=0xda9b80
  483. Nov 29 21:36:29 kvm00 lvm[12178]: Send local reply
  484. Nov 29 21:36:29 kvm00 lvm[12178]: Read on local socket 5, len = 25
  485. Nov 29 21:36:29 kvm00 lvm[12178]: Got pre command condition...
  486. Nov 29 21:36:29 kvm00 lvm[12178]: doing PRE command LOCK_VG 'V_vg0' at 6 (client=0xda9b80)
  487. Nov 29 21:36:29 kvm00 lvm[12178]: unlock_resource: V_vg0 lockid: 3
  488. Nov 29 21:36:29 kvm00 lvm[12178]: Writing status 0 down pipe 14
  489. Nov 29 21:36:29 kvm00 lvm[12178]: Waiting to do post command - state = 0
  490. Nov 29 21:36:29 kvm00 lvm[12178]: read on PIPE 13: 4 bytes: status: 0
  491. Nov 29 21:36:29 kvm00 lvm[12178]: background routine status was 0, sock_client=0xda9b80
  492. Nov 29 21:36:29 kvm00 lvm[12178]: distribute command: XID = 3, flags=0x1 (LOCAL)
  493. Nov 29 21:36:29 kvm00 lvm[12178]: add_to_lvmqueue: cmd=0xdd5900. client=0xda9b80, msg=0xda9c90, len=25, csid=(nil), xid=3
  494. Nov 29 21:36:29 kvm00 lvm[12178]: process_work_item: local
  495. Nov 29 21:36:29 kvm00 lvm[12178]: process_local_command: LOCK_VG (0x33) msg=0xda9fe0, msglen =25, client=0xda9b80
  496. Nov 29 21:36:29 kvm00 lvm[12178]: do_lock_vg: resource 'V_vg0', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
  497. Nov 29 21:36:29 kvm00 lvm[12178]: Invalidating cached metadata for VG vg0
  498. Nov 29 21:36:29 kvm00 lvm[12178]: Reply from node a0110ac: 0 bytes
  499. Nov 29 21:36:29 kvm00 lvm[12178]: Got 1 replies, expecting: 1
  500. Nov 29 21:36:29 kvm00 lvm[12178]: LVM thread waiting for work
  501. Nov 29 21:36:29 kvm00 lvm[12178]: Got post command condition...
  502. Nov 29 21:36:29 kvm00 lvm[12178]: Waiting for next pre command
  503. Nov 29 21:36:29 kvm00 lvm[12178]: read on PIPE 13: 4 bytes: status: 0
  504. Nov 29 21:36:29 kvm00 lvm[12178]: background routine status was 0, sock_client=0xda9b80
  505. Nov 29 21:36:29 kvm00 lvm[12178]: Send local reply
  506. Nov 29 21:36:29 kvm00 lvm[12178]: Read on local socket 5, len = 29
  507. Nov 29 21:36:29 kvm00 lvm[12178]: check_all_clvmds_running
  508. Nov 29 21:36:29 kvm00 lvm[12178]: down_callback. node 184619180, state = 3
  509. Nov 29 21:36:29 kvm00 lvm[12178]: down_callback. node 167841964, state = 3
  510. Nov 29 21:36:29 kvm00 lvm[12178]: Got pre command condition...
  511. Nov 29 21:36:29 kvm00 lvm[12178]: doing PRE command LOCK_VG 'P_#global' at 6 (client=0xda9b80)
  512. Nov 29 21:36:29 kvm00 lvm[12178]: unlock_resource: P_#global lockid: 1
  513. Nov 29 21:36:29 kvm00 lvm[12178]: Writing status 0 down pipe 14
  514. Nov 29 21:36:29 kvm00 lvm[12178]: Waiting to do post command - state = 0
  515. Nov 29 21:36:29 kvm00 lvm[12178]: read on PIPE 13: 4 bytes: status: 0
  516. Nov 29 21:36:29 kvm00 lvm[12178]: background routine status was 0, sock_client=0xda9b80
  517. Nov 29 21:36:29 kvm00 lvm[12178]: distribute command: XID = 4, flags=0x0 ()
  518. Nov 29 21:36:29 kvm00 lvm[12178]: num_nodes = 2
  519. Nov 29 21:36:29 kvm00 lvm[12178]: add_to_lvmqueue: cmd=0xd997f0. client=0xda9b80, msg=0xda9c90, len=29, csid=(nil), xid=4
  520. Nov 29 21:36:29 kvm00 lvm[12178]: Sending message to all cluster nodes
  521. Nov 29 21:36:29 kvm00 lvm[12178]: process_work_item: local
  522. Nov 29 21:36:29 kvm00 lvm[12178]: process_local_command: LOCK_VG (0x33) msg=0xda9fb0, msglen =29, client=0xda9b80
  523. Nov 29 21:36:29 kvm00 lvm[12178]: do_lock_vg: resource 'P_#global', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
  524. Nov 29 21:36:29 kvm00 lvm[12178]: Refreshing context
  525. Nov 29 21:36:29 kvm00 lvm[12178]: 167841964 got message from nodeid 167841964 for 0. len 29
  526. Nov 29 21:36:29 kvm00 lvm[12178]: 167841964 got message from nodeid 184619180 for 0. len 29
  527. Nov 29 21:36:29 kvm00 lvm[12178]: add_to_lvmqueue: cmd=0x7f36c0000920. client=0x69fd20, msg=0x7f36c50ba5fc, len=29, csid=0x7fff1305ad9c, xid=0
  528. Nov 29 21:36:29 kvm00 lvm[12178]: Reply from node a0110ac: 0 bytes
  529. Nov 29 21:36:29 kvm00 lvm[12178]: Got 1 replies, expecting: 2
  530. Nov 29 21:36:29 kvm00 lvm[12178]: process_work_item: remote
  531. Nov 29 21:36:29 kvm00 lvm[12178]: process_remote_command LOCK_VG (0x33) for clientid 0xd000000 XID 0 on node b0110ac
  532. Nov 29 21:36:29 kvm00 lvm[12178]: do_lock_vg: resource 'P_#global', cmd = 0x4 LCK_VG (WRITE|VG), flags = 0x0 ( ), critical_section = 0
  533. Nov 29 21:36:29 kvm00 lvm[12178]: Refreshing context
  534. Nov 29 21:36:29 kvm00 lvm[12178]: 167841964 got message from nodeid 184619180 for 167841964. len 18
  535. Nov 29 21:36:29 kvm00 lvm[12178]: Reply from node b0110ac: 0 bytes
  536. Nov 29 21:36:29 kvm00 lvm[12178]: Got 2 replies, expecting: 2
  537. Nov 29 21:36:29 kvm00 lvm[12178]: Got post command condition...
  538. Nov 29 21:36:29 kvm00 lvm[12178]: Waiting for next pre command
  539. Nov 29 21:36:29 kvm00 lvm[12178]: read on PIPE 13: 4 bytes: status: 0
  540. Nov 29 21:36:29 kvm00 lvm[12178]: background routine status was 0, sock_client=0xda9b80
  541. Nov 29 21:36:29 kvm00 lvm[12178]: Send local reply
  542. Nov 29 21:36:29 kvm00 lvm[12178]: Read on local socket 5, len = 0
  543. Nov 29 21:36:29 kvm00 lvm[12178]: EOF on local socket: inprogress=0
  544. Nov 29 21:36:29 kvm00 lvm[12178]: Waiting for child thread
  545. Nov 29 21:36:29 kvm00 lvm[12178]: Got pre command condition...
  546. Nov 29 21:36:29 kvm00 lvm[12178]: Subthread finished
  547. Nov 29 21:36:29 kvm00 lvm[12178]: Joined child thread
  548. Nov 29 21:36:29 kvm00 lvm[12178]: ret == 0, errno = 0. removing client
  549. Nov 29 21:36:29 kvm00 lvm[12178]: add_to_lvmqueue: cmd=0x7f36c00008b0. client=0xda9b80, msg=(nil), len=0, csid=(nil), xid=4
  550. Nov 29 21:36:29 kvm00 LVM[12205]: INFO: Reading all physical volumes. This may take a while... Found volume group "vg0" using metadata type lvm2
  551. Nov 29 21:36:29 kvm00 lvm[12178]: Got new connection on fd 13
  552. Nov 29 21:36:29 kvm00 lvm[12178]: Read on local socket 13, len = 25
  553. Nov 29 21:36:29 kvm00 lvm[12178]: creating pipe, [14, 15]
  554. Nov 29 21:36:29 kvm00 lvm[12178]: Creating pre&post thread
  555. Nov 29 21:36:29 kvm00 lvm[12178]: Created pre&post thread, state = 0
  556. Nov 29 21:36:29 kvm00 lvm[12178]: in sub thread: client = 0x7f36c0000990
  557. Nov 29 21:36:29 kvm00 lvm[12178]: doing PRE command LOCK_VG 'V_vg0' at 1 (client=0x7f36c0000990)
  558. Nov 29 21:36:29 kvm00 lvm[12178]: lock_resource 'V_vg0', flags=0, mode=3
  559. Nov 29 21:36:29 kvm00 lvm[12178]: lock_resource returning 0, lock_id=1
  560. Nov 29 21:36:29 kvm00 lvm[12178]: Writing status 0 down pipe 15
  561. Nov 29 21:36:29 kvm00 lvm[12178]: Waiting to do post command - state = 0
  562. Nov 29 21:36:29 kvm00 lvm[12178]: read on PIPE 14: 4 bytes: status: 0
  563. Nov 29 21:36:29 kvm00 lvm[12178]: background routine status was 0, sock_client=0x7f36c0000990
  564. Nov 29 21:36:29 kvm00 lvm[12178]: distribute command: XID = 5, flags=0x1 (LOCAL)
  565. Nov 29 21:36:29 kvm00 lvm[12178]: add_to_lvmqueue: cmd=0x7f36c0000bb0. client=0x7f36c0000990, msg=0x7f36c00008f0, len=25, csid=(nil), xid=5
  566. Nov 29 21:36:29 kvm00 lvm[12178]: process_work_item: free fd -1
  567. Nov 29 21:36:29 kvm00 lvm[12178]: process_work_item: local
  568. Nov 29 21:36:29 kvm00 lvm[12178]: process_local_command: LOCK_VG (0x33) msg=0x7f36c0000bf0, msglen =25, client=0x7f36c0000990
  569. Nov 29 21:36:29 kvm00 lvm[12178]: do_lock_vg: resource 'V_vg0', cmd = 0x1 LCK_VG (READ|VG), flags = 0x0 ( ), critical_section = 0
  570. Nov 29 21:36:29 kvm00 lvm[12178]: Invalidating cached metadata for VG vg0
  571. Nov 29 21:36:29 kvm00 lvm[12178]: Reply from node a0110ac: 0 bytes
  572. Nov 29 21:36:29 kvm00 lvm[12178]: Got 1 replies, expecting: 1
  573. Nov 29 21:36:29 kvm00 lvm[12178]: LVM thread waiting for work
  574. Nov 29 21:36:29 kvm00 lvm[12178]: Got post command condition...
  575. Nov 29 21:36:29 kvm00 lvm[12178]: Waiting for next pre command
  576. Nov 29 21:36:29 kvm00 lvm[12178]: read on PIPE 14: 4 bytes: status: 0
  577. Nov 29 21:36:29 kvm00 lvm[12178]: background routine status was 0, sock_client=0x7f36c0000990
  578. Nov 29 21:36:29 kvm00 lvm[12178]: Send local reply
  579. Nov 29 21:36:29 kvm00 lvm[12178]: 167841964 got message from nodeid 167841964 for 184619180. len 18
  580. Nov 29 21:36:29 kvm00 lvm[12178]: 167841964 got message from nodeid 184619180 for 0. len 31
  581. Nov 29 21:36:29 kvm00 lvm[12178]: add_to_lvmqueue: cmd=0x7f36c00008b0. client=0x69fd20, msg=0x7f36c50ba87c, len=31, csid=0x7fff1305ad9c, xid=0
  582. Nov 29 21:36:29 kvm00 lvm[12178]: process_work_item: remote
  583. Nov 29 21:36:29 kvm00 rsyslogd-2177: imuxsock begins to drop messages from pid 12178 due to rate-limiting
  584. Nov 29 21:36:29 kvm00 LVM[12205]: INFO: 3 logical volume(s) in volume group "vg0" now active
  585. Nov 29 21:36:30 kvm00 lrmd: [11012]: info: operation start[31] on p_vg0:0 for client 11015: pid 12205 exited with return code 0
  586. Nov 29 21:36:30 kvm00 crmd: [11015]: info: process_lrm_event: LRM operation p_vg0:0_start_0 (call=31, rc=0, cib-update=72, confirmed=true) ok
  587. Nov 29 21:36:30 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 6: monitor p_vg0:0_monitor_60000 on kvm00 (local)
  588. Nov 29 21:36:30 kvm00 lrmd: [11012]: info: rsc:p_vg0:0 monitor[32] (pid 12263)
  589. Nov 29 21:36:30 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 52: monitor p_vg0:1_monitor_60000 on kvm01
  590. Nov 29 21:36:30 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 58: start p_shared_gfs2:0_start_0 on kvm00 (local)
  591. Nov 29 21:36:30 kvm00 lrmd: [11012]: info: rsc:p_shared_gfs2:0 start[33] (pid 12273)
  592. Nov 29 21:36:30 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 59: start p_shared_gfs2:1_start_0 on kvm01
  593. Nov 29 21:36:30 kvm00 Filesystem[12273]: INFO: Running start for /dev/vg0/shared-gfs2 on /shared00
  594. Nov 29 21:36:30 kvm00 lrmd: [11012]: info: operation monitor[32] on p_vg0:0 for client 11015: pid 12263 exited with return code 0
  595. Nov 29 21:36:30 kvm00 crmd: [11015]: info: process_lrm_event: LRM operation p_vg0:0_monitor_60000 (call=32, rc=0, cib-update=73, confirmed=false) ok
  596. Nov 29 21:36:30 kvm00 lrmd: [11012]: info: RA output: (p_shared_gfs2:0:start:stderr) FATAL: Module scsi_hostadapter not found.
  597. Nov 29 21:36:30 kvm00 kernel: [19099.785019] GFS2: fsid=: Trying to join cluster "lock_dlm", "pcmk:pcmk"
  598. Nov 29 21:36:30 kvm00 kernel: [19099.788346] GFS2: fsid=pcmk:pcmk.0: Joined cluster. Now mounting FS...
  599. Nov 29 21:36:30 kvm00 kernel: [19099.834355] GFS2: fsid=pcmk:pcmk.0: jid=0, already locked for use
  600. Nov 29 21:36:30 kvm00 kernel: [19099.834357] GFS2: fsid=pcmk:pcmk.0: jid=0: Looking at journal...
  601. Nov 29 21:36:30 kvm00 kernel: [19099.866688] GFS2: fsid=pcmk:pcmk.0: jid=0: Done
  602. Nov 29 21:36:30 kvm00 kernel: [19099.866729] GFS2: fsid=pcmk:pcmk.0: jid=1: Trying to acquire journal lock...
  603. Nov 29 21:36:30 kvm00 kernel: [19099.867518] GFS2: fsid=pcmk:pcmk.0: jid=1: Looking at journal...
  604. Nov 29 21:36:30 kvm00 kernel: [19099.929070] GFS2: fsid=pcmk:pcmk.0: jid=1: Done
  605. Nov 29 21:36:30 kvm00 lrmd: [11012]: info: operation start[33] on p_shared_gfs2:0 for client 11015: pid 12273 exited with return code 0
  606. Nov 29 21:36:30 kvm00 crmd: [11015]: info: process_lrm_event: LRM operation p_shared_gfs2:0_start_0 (call=33, rc=0, cib-update=74, confirmed=true) ok
  607. Nov 29 21:36:30 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 3: monitor p_shared_gfs2:0_monitor_120000 on kvm00 (local)
  608. Nov 29 21:36:30 kvm00 lrmd: [11012]: info: rsc:p_shared_gfs2:0 monitor[34] (pid 12337)
  609. Nov 29 21:36:30 kvm00 lrmd: [11012]: info: operation monitor[34] on p_shared_gfs2:0 for client 11015: pid 12337 exited with return code 0
  610. Nov 29 21:36:30 kvm00 crmd: [11015]: info: process_lrm_event: LRM operation p_shared_gfs2:0_monitor_120000 (call=34, rc=0, cib-update=75, confirmed=false) ok
  611. Nov 29 21:36:30 kvm00 crmd: [11015]: info: te_rsc_command: Initiating action 60: monitor p_shared_gfs2:1_monitor_120000 on kvm01
  612. Nov 29 21:36:30 kvm00 crmd: [11015]: notice: run_graph: ==== Transition 1 (Complete=58, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-685.bz2): Complete
  613. Nov 29 21:36:30 kvm00 crmd: [11015]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]