amartin

VCS pending DC log

Feb 11th, 2013
  1. Feb 10 23:39:50 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_git-daemon_monitor_0 on vcs1.example.com is unrunnable (pending)
  2. Feb 10 23:39:50 [29810] vcsquorum pengine: warning: custom_action: Action p_ip_vcs_monitor_0 on vcs1.example.com is unrunnable (pending)
  3. Feb 10 23:39:50 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_monitor_0 on vcs1.example.com is unrunnable (pending)
  4. Feb 10 23:39:50 [29810] vcsquorum pengine: warning: custom_action: Action p_ping:0_monitor_0 on vcs1.example.com is unrunnable (pending)
  5. Feb 10 23:39:50 [29810] vcsquorum pengine: warning: custom_action: Action p_sysadmin_notify:0_monitor_0 on vcs1.example.com is unrunnable (pending)
  6. Feb 10 23:39:50 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcs1_stop_0 on vcs0 is unrunnable (offline)
  7. Feb 10 23:39:50 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcs1_stop_0 on vcs0 is unrunnable (offline)
  8. Feb 10 23:39:50 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcsquorum_stop_0 on vcs0 is unrunnable (offline)
  9. Feb 10 23:39:50 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcsquorum_stop_0 on vcs0 is unrunnable (offline)
  10. Feb 10 23:39:50 [29810] vcsquorum pengine: warning: custom_action: Action p_fs_vcs_stop_0 on vcs0 is unrunnable (offline)
  11. Feb 10 23:39:50 [29810] vcsquorum pengine: warning: custom_action: Action p_fs_vcs_stop_0 on vcs0 is unrunnable (offline)
  12. Feb 10 23:39:50 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_svn_stop_0 on vcs0 is unrunnable (offline)
  13. Feb 10 23:39:50 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_svn_stop_0 on vcs0 is unrunnable (offline)
  14. Feb 10 23:39:50 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_git-daemon_stop_0 on vcs0 is unrunnable (offline)
  15. Feb 10 23:39:50 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_git-daemon_stop_0 on vcs0 is unrunnable (offline)
  16. Feb 10 23:39:50 [29810] vcsquorum pengine: warning: custom_action: Action p_ip_vcs_stop_0 on vcs0 is unrunnable (offline)
  17. Feb 10 23:39:50 [29810] vcsquorum pengine: warning: custom_action: Action p_ip_vcs_stop_0 on vcs0 is unrunnable (offline)
  18. Feb 10 23:39:50 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
  19. Feb 10 23:39:50 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
  20. Feb 10 23:39:50 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
  21. Feb 10 23:39:50 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
  22. Feb 10 23:39:50 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
  23. Feb 10 23:39:50 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
  24. Feb 10 23:39:50 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
  25. Feb 10 23:39:50 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
  26. Feb 10 23:39:50 [29810] vcsquorum pengine: warning: custom_action: Action p_ping:0_stop_0 on vcs0 is unrunnable (offline)
  27. Feb 10 23:39:50 [29810] vcsquorum pengine: warning: custom_action: Action p_ping:0_stop_0 on vcs0 is unrunnable (offline)
  28. Feb 10 23:39:50 [29810] vcsquorum pengine: warning: stage6: Scheduling Node vcs0 for STONITH
  29. Feb 10 23:39:50 [29810] vcsquorum pengine: notice: LogActions: Stop stonithvcs1 (vcs0)
  30. Feb 10 23:39:50 [29810] vcsquorum pengine: notice: LogActions: Stop stonithvcsquorum (vcs0)
  31. Feb 10 23:39:50 [29810] vcsquorum pengine: notice: LogActions: Stop p_fs_vcs (vcs0)
  32. Feb 10 23:39:50 [29810] vcsquorum pengine: notice: LogActions: Stop p_daemon_svn (vcs0)
  33. Feb 10 23:39:50 [29810] vcsquorum pengine: notice: LogActions: Stop p_daemon_git-daemon (vcs0)
  34. Feb 10 23:39:50 [29810] vcsquorum pengine: notice: LogActions: Stop p_ip_vcs (vcs0)
  35. Feb 10 23:39:50 [29810] vcsquorum pengine: notice: LogActions: Demote p_drbd_vcs:0 (Master -> Stopped vcs0)
  36. Feb 10 23:39:50 [29810] vcsquorum pengine: notice: LogActions: Stop p_ping:0 (vcs0)
  37. Feb 10 23:39:50 [29811] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
  38. Feb 10 23:39:50 [29810] vcsquorum pengine: warning: process_pe_message: Calculated Transition 52: (null)
  39. Feb 10 23:39:50 [29811] vcsquorum crmd: info: do_te_invoke: Processing graph 52 (ref=pe_calc-dc-1360561190-115) derived from (null)
  40. Feb 10 23:39:50 [29811] vcsquorum crmd: notice: te_fence_node: Executing reboot fencing operation (59) on vcs0 (timeout=60000)
  41. Feb 10 23:39:50 [29807] vcsquorum stonith-ng: notice: stonith_command: Client crmd.29811.58bfa638 wants to fence (reboot) 'vcs0' with device '(any)'
  42. Feb 10 23:39:50 [29807] vcsquorum stonith-ng: notice: initiate_remote_stonith_op: Initiating remote operation reboot for vcs0: 280220e0-2334-45c8-af3d-670b559f0f4f (0)
  43. Feb 10 23:39:50 [29807] vcsquorum stonith-ng: info: stonith_command: Processed st_fence from crmd.29811: Operation now in progress (-115)
  44. Feb 10 23:39:50 [29807] vcsquorum stonith-ng: info: stonith_command: Processed st_query from vcsquorum: OK (0)
  45. Feb 10 23:39:56 [29807] vcsquorum stonith-ng: error: remote_op_done: Operation reboot of vcs0 by vcsquorum for crmd.29811@vcsquorum.280220e0: Timer expired
  46. Feb 10 23:39:56 [29811] vcsquorum crmd: notice: tengine_stonith_callback: Stonith operation 6/59:52:0:e3eca602-12d5-435f-9f7d-7836f6f41012: Timer expired (-62)
  47. Feb 10 23:39:56 [29811] vcsquorum crmd: notice: tengine_stonith_callback: Stonith operation 6 for vcs0 failed (Timer expired): aborting transition.
  48. Feb 10 23:39:56 [29811] vcsquorum crmd: info: abort_transition_graph: tengine_stonith_callback:447 - Triggered transition abort (complete=0) : Stonith failed
  49. Feb 10 23:39:56 [29811] vcsquorum crmd: notice: tengine_stonith_notify: Peer vcs0 was not terminated (st_notify_fence) by vcsquorum for vcsquorum: Timer expired (ref=280220e0-2334-45c8-af3d-670b559f0f4f) by client crmd.29811
  50. Feb 10 23:39:56 [29811] vcsquorum crmd: notice: run_graph: Transition 52 (Complete=6, Pending=0, Fired=0, Skipped=18, Incomplete=7, Source=unknown): Stopped
  51. Feb 10 23:39:56 [29811] vcsquorum crmd: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
  52. Feb 10 23:39:56 [29810] vcsquorum pengine: warning: pe_fence_node: Node vcs0 will be fenced because stonithvcs1 is thought to be active there
  53. Feb 10 23:39:56 [29810] vcsquorum pengine: notice: unpack_rsc_op: Operation monitor found resource p_drbd_vcs:0 active in master mode on vcs0
  54. Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action p_fs_vcs_monitor_0 on vcs1.example.com is unrunnable (pending)
  55. Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_svn_monitor_0 on vcs1.example.com is unrunnable (pending)
  56. Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_git-daemon_monitor_0 on vcs1.example.com is unrunnable (pending)
  57. Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action p_ip_vcs_monitor_0 on vcs1.example.com is unrunnable (pending)
  58. Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_monitor_0 on vcs1.example.com is unrunnable (pending)
  59. Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action p_ping:0_monitor_0 on vcs1.example.com is unrunnable (pending)
  60. Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action p_sysadmin_notify:0_monitor_0 on vcs1.example.com is unrunnable (pending)
  61. Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcs1_stop_0 on vcs0 is unrunnable (offline)
  62. Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcs1_stop_0 on vcs0 is unrunnable (offline)
  63. Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcsquorum_stop_0 on vcs0 is unrunnable (offline)
  64. Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcsquorum_stop_0 on vcs0 is unrunnable (offline)
  65. Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action p_fs_vcs_stop_0 on vcs0 is unrunnable (offline)
  66. Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action p_fs_vcs_stop_0 on vcs0 is unrunnable (offline)
  67. Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_svn_stop_0 on vcs0 is unrunnable (offline)
  68. Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_svn_stop_0 on vcs0 is unrunnable (offline)
  69. Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_git-daemon_stop_0 on vcs0 is unrunnable (offline)
  70. Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_git-daemon_stop_0 on vcs0 is unrunnable (offline)
  71. Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action p_ip_vcs_stop_0 on vcs0 is unrunnable (offline)
  72. Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action p_ip_vcs_stop_0 on vcs0 is unrunnable (offline)
  73. Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
  74. Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
  75. Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
  76. Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
  77. Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
  78. Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
  79. Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
  80. Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
  81. Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action p_ping:0_stop_0 on vcs0 is unrunnable (offline)
  82. Feb 10 23:39:56 [29810] vcsquorum pengine: warning: custom_action: Action p_ping:0_stop_0 on vcs0 is unrunnable (offline)
  83. Feb 10 23:39:56 [29810] vcsquorum pengine: warning: stage6: Scheduling Node vcs0 for STONITH
  84. Feb 10 23:39:56 [29810] vcsquorum pengine: notice: LogActions: Stop stonithvcs1 (vcs0)
  85. Feb 10 23:39:56 [29810] vcsquorum pengine: notice: LogActions: Stop stonithvcsquorum (vcs0)
  86. Feb 10 23:39:56 [29810] vcsquorum pengine: notice: LogActions: Stop p_fs_vcs (vcs0)
  87. Feb 10 23:39:56 [29810] vcsquorum pengine: notice: LogActions: Stop p_daemon_svn (vcs0)
  88. Feb 10 23:39:56 [29810] vcsquorum pengine: notice: LogActions: Stop p_daemon_git-daemon (vcs0)
  89. Feb 10 23:39:56 [29810] vcsquorum pengine: notice: LogActions: Stop p_ip_vcs (vcs0)
  90. Feb 10 23:39:56 [29810] vcsquorum pengine: notice: LogActions: Demote p_drbd_vcs:0 (Master -> Stopped vcs0)
  91. Feb 10 23:39:56 [29810] vcsquorum pengine: notice: LogActions: Stop p_ping:0 (vcs0)
  92. Feb 10 23:39:56 [29811] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
  93. Feb 10 23:39:56 [29810] vcsquorum pengine: warning: process_pe_message: Calculated Transition 53: (null)
  94. Feb 10 23:39:56 [29811] vcsquorum crmd: info: do_te_invoke: Processing graph 53 (ref=pe_calc-dc-1360561196-116) derived from (null)
  95. Feb 10 23:39:56 [29811] vcsquorum crmd: notice: te_fence_node: Executing reboot fencing operation (59) on vcs0 (timeout=60000)
  96. Feb 10 23:39:56 [29807] vcsquorum stonith-ng: notice: stonith_command: Client crmd.29811.58bfa638 wants to fence (reboot) 'vcs0' with device '(any)'
  97. Feb 10 23:39:56 [29807] vcsquorum stonith-ng: notice: initiate_remote_stonith_op: Initiating remote operation reboot for vcs0: 4571a1e1-e2d9-4a15-98a8-6fdfebacde0e (0)
  98. Feb 10 23:39:56 [29807] vcsquorum stonith-ng: info: stonith_command: Processed st_fence from crmd.29811: Operation now in progress (-115)
  99. Feb 10 23:39:56 [29807] vcsquorum stonith-ng: info: stonith_command: Processed st_query from vcsquorum: OK (0)
  100. Feb 10 23:40:02 [29807] vcsquorum stonith-ng: error: remote_op_done: Operation reboot of vcs0 by vcsquorum for crmd.29811@vcsquorum.4571a1e1: Timer expired
  101. Feb 10 23:40:02 [29811] vcsquorum crmd: notice: tengine_stonith_callback: Stonith operation 7/59:53:0:e3eca602-12d5-435f-9f7d-7836f6f41012: Timer expired (-62)
  102. Feb 10 23:40:02 [29811] vcsquorum crmd: notice: tengine_stonith_callback: Stonith operation 7 for vcs0 failed (Timer expired): aborting transition.
  103. Feb 10 23:40:02 [29811] vcsquorum crmd: info: abort_transition_graph: tengine_stonith_callback:447 - Triggered transition abort (complete=0) : Stonith failed
  104. Feb 10 23:40:02 [29811] vcsquorum crmd: notice: tengine_stonith_notify: Peer vcs0 was not terminated (st_notify_fence) by vcsquorum for vcsquorum: Timer expired (ref=4571a1e1-e2d9-4a15-98a8-6fdfebacde0e) by client crmd.29811
  105. Feb 10 23:40:02 [29811] vcsquorum crmd: notice: run_graph: Transition 53 (Complete=6, Pending=0, Fired=0, Skipped=18, Incomplete=7, Source=unknown): Stopped
  106. Feb 10 23:40:02 [29811] vcsquorum crmd: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
  107. Feb 10 23:40:02 [29810] vcsquorum pengine: warning: pe_fence_node: Node vcs0 will be fenced because stonithvcs1 is thought to be active there
  108. Feb 10 23:40:02 [29810] vcsquorum pengine: notice: unpack_rsc_op: Operation monitor found resource p_drbd_vcs:0 active in master mode on vcs0
  109. Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action p_fs_vcs_monitor_0 on vcs1.example.com is unrunnable (pending)
  110. Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_svn_monitor_0 on vcs1.example.com is unrunnable (pending)
  111. Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_git-daemon_monitor_0 on vcs1.example.com is unrunnable (pending)
  112. Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action p_ip_vcs_monitor_0 on vcs1.example.com is unrunnable (pending)
  113. Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_monitor_0 on vcs1.example.com is unrunnable (pending)
  114. Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action p_ping:0_monitor_0 on vcs1.example.com is unrunnable (pending)
  115. Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action p_sysadmin_notify:0_monitor_0 on vcs1.example.com is unrunnable (pending)
  116. Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcs1_stop_0 on vcs0 is unrunnable (offline)
  117. Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcs1_stop_0 on vcs0 is unrunnable (offline)
  118. Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcsquorum_stop_0 on vcs0 is unrunnable (offline)
  119. Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcsquorum_stop_0 on vcs0 is unrunnable (offline)
  120. Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action p_fs_vcs_stop_0 on vcs0 is unrunnable (offline)
  121. Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action p_fs_vcs_stop_0 on vcs0 is unrunnable (offline)
  122. Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_svn_stop_0 on vcs0 is unrunnable (offline)
  123. Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_svn_stop_0 on vcs0 is unrunnable (offline)
  124. Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_git-daemon_stop_0 on vcs0 is unrunnable (offline)
  125. Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_git-daemon_stop_0 on vcs0 is unrunnable (offline)
  126. Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action p_ip_vcs_stop_0 on vcs0 is unrunnable (offline)
  127. Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action p_ip_vcs_stop_0 on vcs0 is unrunnable (offline)
  128. Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
  129. Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
  130. Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
  131. Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
  132. Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
  133. Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
  134. Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
  135. Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
  136. Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action p_ping:0_stop_0 on vcs0 is unrunnable (offline)
  137. Feb 10 23:40:02 [29810] vcsquorum pengine: warning: custom_action: Action p_ping:0_stop_0 on vcs0 is unrunnable (offline)
  138. Feb 10 23:40:02 [29810] vcsquorum pengine: warning: stage6: Scheduling Node vcs0 for STONITH
  139. Feb 10 23:40:02 [29810] vcsquorum pengine: notice: LogActions: Stop stonithvcs1 (vcs0)
  140. Feb 10 23:40:02 [29810] vcsquorum pengine: notice: LogActions: Stop stonithvcsquorum (vcs0)
  141. Feb 10 23:40:02 [29810] vcsquorum pengine: notice: LogActions: Stop p_fs_vcs (vcs0)
  142. Feb 10 23:40:02 [29810] vcsquorum pengine: notice: LogActions: Stop p_daemon_svn (vcs0)
  143. Feb 10 23:40:02 [29810] vcsquorum pengine: notice: LogActions: Stop p_daemon_git-daemon (vcs0)
  144. Feb 10 23:40:02 [29810] vcsquorum pengine: notice: LogActions: Stop p_ip_vcs (vcs0)
  145. Feb 10 23:40:02 [29810] vcsquorum pengine: notice: LogActions: Demote p_drbd_vcs:0 (Master -> Stopped vcs0)
  146. Feb 10 23:40:02 [29810] vcsquorum pengine: notice: LogActions: Stop p_ping:0 (vcs0)
  147. Feb 10 23:40:02 [29811] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
  148. Feb 10 23:40:02 [29810] vcsquorum pengine: warning: process_pe_message: Calculated Transition 54: (null)
  149. Feb 10 23:40:02 [29811] vcsquorum crmd: info: do_te_invoke: Processing graph 54 (ref=pe_calc-dc-1360561202-117) derived from (null)
  150. Feb 10 23:40:02 [29811] vcsquorum crmd: notice: te_fence_node: Executing reboot fencing operation (59) on vcs0 (timeout=60000)
  151. Feb 10 23:40:02 [29807] vcsquorum stonith-ng: notice: stonith_command: Client crmd.29811.58bfa638 wants to fence (reboot) 'vcs0' with device '(any)'
  152. Feb 10 23:40:02 [29807] vcsquorum stonith-ng: notice: initiate_remote_stonith_op: Initiating remote operation reboot for vcs0: 4421ca1b-055e-4651-8c6d-14887a7f6b9f (0)
  153. Feb 10 23:40:02 [29807] vcsquorum stonith-ng: info: stonith_command: Processed st_fence from crmd.29811: Operation now in progress (-115)
  154. Feb 10 23:40:02 [29807] vcsquorum stonith-ng: info: stonith_command: Processed st_query from vcsquorum: OK (0)
  155. Feb 10 23:40:08 [29807] vcsquorum stonith-ng: error: remote_op_done: Operation reboot of vcs0 by vcsquorum for crmd.29811@vcsquorum.4421ca1b: Timer expired
  156. Feb 10 23:40:08 [29811] vcsquorum crmd: notice: tengine_stonith_callback: Stonith operation 8/59:54:0:e3eca602-12d5-435f-9f7d-7836f6f41012: Timer expired (-62)
  157. Feb 10 23:40:08 [29811] vcsquorum crmd: notice: tengine_stonith_callback: Stonith operation 8 for vcs0 failed (Timer expired): aborting transition.
  158. Feb 10 23:40:08 [29811] vcsquorum crmd: info: abort_transition_graph: tengine_stonith_callback:447 - Triggered transition abort (complete=0) : Stonith failed
  159. Feb 10 23:40:08 [29811] vcsquorum crmd: notice: tengine_stonith_notify: Peer vcs0 was not terminated (st_notify_fence) by vcsquorum for vcsquorum: Timer expired (ref=4421ca1b-055e-4651-8c6d-14887a7f6b9f) by client crmd.29811
  160. Feb 10 23:40:08 [29811] vcsquorum crmd: notice: run_graph: Transition 54 (Complete=6, Pending=0, Fired=0, Skipped=18, Incomplete=7, Source=unknown): Stopped
  161. Feb 10 23:40:08 [29811] vcsquorum crmd: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
  162. Feb 10 23:40:08 [29810] vcsquorum pengine: warning: pe_fence_node: Node vcs0 will be fenced because stonithvcs1 is thought to be active there
  163. Feb 10 23:40:08 [29810] vcsquorum pengine: notice: unpack_rsc_op: Operation monitor found resource p_drbd_vcs:0 active in master mode on vcs0
  164. Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action p_fs_vcs_monitor_0 on vcs1.example.com is unrunnable (pending)
  165. Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_svn_monitor_0 on vcs1.example.com is unrunnable (pending)
  166. Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_git-daemon_monitor_0 on vcs1.example.com is unrunnable (pending)
  167. Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action p_ip_vcs_monitor_0 on vcs1.example.com is unrunnable (pending)
  168. Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_monitor_0 on vcs1.example.com is unrunnable (pending)
  169. Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action p_ping:0_monitor_0 on vcs1.example.com is unrunnable (pending)
  170. Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action p_sysadmin_notify:0_monitor_0 on vcs1.example.com is unrunnable (pending)
  171. Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcs1_stop_0 on vcs0 is unrunnable (offline)
  172. Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcs1_stop_0 on vcs0 is unrunnable (offline)
  173. Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcsquorum_stop_0 on vcs0 is unrunnable (offline)
  174. Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcsquorum_stop_0 on vcs0 is unrunnable (offline)
  175. Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action p_fs_vcs_stop_0 on vcs0 is unrunnable (offline)
  176. Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action p_fs_vcs_stop_0 on vcs0 is unrunnable (offline)
  177. Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_svn_stop_0 on vcs0 is unrunnable (offline)
  178. Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_svn_stop_0 on vcs0 is unrunnable (offline)
  179. Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_git-daemon_stop_0 on vcs0 is unrunnable (offline)
  180. Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_git-daemon_stop_0 on vcs0 is unrunnable (offline)
  181. Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action p_ip_vcs_stop_0 on vcs0 is unrunnable (offline)
  182. Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action p_ip_vcs_stop_0 on vcs0 is unrunnable (offline)
  183. Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
  184. Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
  185. Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
  186. Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
  187. Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
  188. Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
  189. Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
  190. Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
  191. Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action p_ping:0_stop_0 on vcs0 is unrunnable (offline)
  192. Feb 10 23:40:08 [29810] vcsquorum pengine: warning: custom_action: Action p_ping:0_stop_0 on vcs0 is unrunnable (offline)
  193. Feb 10 23:40:08 [29810] vcsquorum pengine: warning: stage6: Scheduling Node vcs0 for STONITH
  194. Feb 10 23:40:08 [29810] vcsquorum pengine: notice: LogActions: Stop stonithvcs1 (vcs0)
  195. Feb 10 23:40:08 [29810] vcsquorum pengine: notice: LogActions: Stop stonithvcsquorum (vcs0)
  196. Feb 10 23:40:08 [29810] vcsquorum pengine: notice: LogActions: Stop p_fs_vcs (vcs0)
  197. Feb 10 23:40:08 [29810] vcsquorum pengine: notice: LogActions: Stop p_daemon_svn (vcs0)
  198. Feb 10 23:40:08 [29810] vcsquorum pengine: notice: LogActions: Stop p_daemon_git-daemon (vcs0)
  199. Feb 10 23:40:08 [29810] vcsquorum pengine: notice: LogActions: Stop p_ip_vcs (vcs0)
  200. Feb 10 23:40:08 [29810] vcsquorum pengine: notice: LogActions: Demote p_drbd_vcs:0 (Master -> Stopped vcs0)
  201. Feb 10 23:40:08 [29810] vcsquorum pengine: notice: LogActions: Stop p_ping:0 (vcs0)
  202. Feb 10 23:40:08 [29810] vcsquorum pengine: warning: process_pe_message: Calculated Transition 55: (null)
  203. Feb 10 23:40:08 [29811] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
  204. Feb 10 23:40:08 [29811] vcsquorum crmd: info: do_te_invoke: Processing graph 55 (ref=pe_calc-dc-1360561208-118) derived from (null)
  205. Feb 10 23:40:08 [29811] vcsquorum crmd: notice: te_fence_node: Executing reboot fencing operation (59) on vcs0 (timeout=60000)
  206. Feb 10 23:40:08 [29807] vcsquorum stonith-ng: notice: stonith_command: Client crmd.29811.58bfa638 wants to fence (reboot) 'vcs0' with device '(any)'
  207. Feb 10 23:40:08 [29807] vcsquorum stonith-ng: notice: initiate_remote_stonith_op: Initiating remote operation reboot for vcs0: 2c7d2267-17bb-4345-a7bf-084e31c6be8d (0)
  208. Feb 10 23:40:08 [29807] vcsquorum stonith-ng: info: stonith_command: Processed st_fence from crmd.29811: Operation now in progress (-115)
  209. Feb 10 23:40:08 [29807] vcsquorum stonith-ng: info: stonith_command: Processed st_query from vcsquorum: OK (0)
  210. Feb 10 23:40:14 [29807] vcsquorum stonith-ng: error: remote_op_done: Operation reboot of vcs0 by vcsquorum for crmd.29811@vcsquorum.2c7d2267: Timer expired
  211. Feb 10 23:40:14 [29811] vcsquorum crmd: notice: tengine_stonith_callback: Stonith operation 9/59:55:0:e3eca602-12d5-435f-9f7d-7836f6f41012: Timer expired (-62)
  212. Feb 10 23:40:14 [29811] vcsquorum crmd: notice: tengine_stonith_callback: Stonith operation 9 for vcs0 failed (Timer expired): aborting transition.
  213. Feb 10 23:40:14 [29811] vcsquorum crmd: info: abort_transition_graph: tengine_stonith_callback:447 - Triggered transition abort (complete=0) : Stonith failed
  214. Feb 10 23:40:14 [29811] vcsquorum crmd: notice: tengine_stonith_notify: Peer vcs0 was not terminated (st_notify_fence) by vcsquorum for vcsquorum: Timer expired (ref=2c7d2267-17bb-4345-a7bf-084e31c6be8d) by client crmd.29811
  215. Feb 10 23:40:14 [29811] vcsquorum crmd: notice: run_graph: Transition 55 (Complete=6, Pending=0, Fired=0, Skipped=18, Incomplete=7, Source=unknown): Stopped
  216. Feb 10 23:40:14 [29811] vcsquorum crmd: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
  217. Feb 10 23:40:14 [29810] vcsquorum pengine: warning: pe_fence_node: Node vcs0 will be fenced because stonithvcs1 is thought to be active there
  218. Feb 10 23:40:14 [29810] vcsquorum pengine: notice: unpack_rsc_op: Operation monitor found resource p_drbd_vcs:0 active in master mode on vcs0
  219. Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action p_fs_vcs_monitor_0 on vcs1.example.com is unrunnable (pending)
  220. Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_svn_monitor_0 on vcs1.example.com is unrunnable (pending)
  221. Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_git-daemon_monitor_0 on vcs1.example.com is unrunnable (pending)
  222. Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action p_ip_vcs_monitor_0 on vcs1.example.com is unrunnable (pending)
  223. Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_monitor_0 on vcs1.example.com is unrunnable (pending)
  224. Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action p_ping:0_monitor_0 on vcs1.example.com is unrunnable (pending)
  225. Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action p_sysadmin_notify:0_monitor_0 on vcs1.example.com is unrunnable (pending)
  226. Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcs1_stop_0 on vcs0 is unrunnable (offline)
  227. Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcs1_stop_0 on vcs0 is unrunnable (offline)
  228. Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcsquorum_stop_0 on vcs0 is unrunnable (offline)
  229. Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcsquorum_stop_0 on vcs0 is unrunnable (offline)
  230. Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action p_fs_vcs_stop_0 on vcs0 is unrunnable (offline)
  231. Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action p_fs_vcs_stop_0 on vcs0 is unrunnable (offline)
  232. Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_svn_stop_0 on vcs0 is unrunnable (offline)
  233. Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_svn_stop_0 on vcs0 is unrunnable (offline)
  234. Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_git-daemon_stop_0 on vcs0 is unrunnable (offline)
  235. Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_git-daemon_stop_0 on vcs0 is unrunnable (offline)
  236. Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action p_ip_vcs_stop_0 on vcs0 is unrunnable (offline)
  237. Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action p_ip_vcs_stop_0 on vcs0 is unrunnable (offline)
  238. Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
  239. Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
  240. Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
  241. Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
  242. Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
  243. Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
  244. Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
  245. Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
  246. Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action p_ping:0_stop_0 on vcs0 is unrunnable (offline)
  247. Feb 10 23:40:14 [29810] vcsquorum pengine: warning: custom_action: Action p_ping:0_stop_0 on vcs0 is unrunnable (offline)
  248. Feb 10 23:40:14 [29810] vcsquorum pengine: warning: stage6: Scheduling Node vcs0 for STONITH
  249. Feb 10 23:40:14 [29810] vcsquorum pengine: notice: LogActions: Stop stonithvcs1 (vcs0)
  250. Feb 10 23:40:14 [29810] vcsquorum pengine: notice: LogActions: Stop stonithvcsquorum (vcs0)
  251. Feb 10 23:40:14 [29810] vcsquorum pengine: notice: LogActions: Stop p_fs_vcs (vcs0)
  252. Feb 10 23:40:14 [29810] vcsquorum pengine: notice: LogActions: Stop p_daemon_svn (vcs0)
  253. Feb 10 23:40:14 [29810] vcsquorum pengine: notice: LogActions: Stop p_daemon_git-daemon (vcs0)
  254. Feb 10 23:40:14 [29810] vcsquorum pengine: notice: LogActions: Stop p_ip_vcs (vcs0)
  255. Feb 10 23:40:14 [29810] vcsquorum pengine: notice: LogActions: Demote p_drbd_vcs:0 (Master -> Stopped vcs0)
  256. Feb 10 23:40:14 [29810] vcsquorum pengine: notice: LogActions: Stop p_ping:0 (vcs0)
  257. Feb 10 23:40:14 [29811] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
  258. Feb 10 23:40:14 [29810] vcsquorum pengine: warning: process_pe_message: Calculated Transition 56: (null)
  259. Feb 10 23:40:14 [29811] vcsquorum crmd: info: do_te_invoke: Processing graph 56 (ref=pe_calc-dc-1360561214-119) derived from (null)
  260. Feb 10 23:40:14 [29811] vcsquorum crmd: notice: te_fence_node: Executing reboot fencing operation (59) on vcs0 (timeout=60000)
  261. Feb 10 23:40:14 [29807] vcsquorum stonith-ng: notice: stonith_command: Client crmd.29811.58bfa638 wants to fence (reboot) 'vcs0' with device '(any)'
  262. Feb 10 23:40:14 [29807] vcsquorum stonith-ng: notice: initiate_remote_stonith_op: Initiating remote operation reboot for vcs0: 4937ed3c-30c2-4795-94e8-36f4e2c0fa52 (0)
  263. Feb 10 23:40:14 [29807] vcsquorum stonith-ng: info: stonith_command: Processed st_fence from crmd.29811: Operation now in progress (-115)
  264. Feb 10 23:40:14 [29807] vcsquorum stonith-ng: info: stonith_command: Processed st_query from vcsquorum: OK (0)
  265. Feb 10 23:40:20 [29807] vcsquorum stonith-ng: error: remote_op_done: Operation reboot of vcs0 by vcsquorum for crmd.29811@vcsquorum.4937ed3c: Timer expired
  266. Feb 10 23:40:20 [29811] vcsquorum crmd: notice: tengine_stonith_callback: Stonith operation 10/59:56:0:e3eca602-12d5-435f-9f7d-7836f6f41012: Timer expired (-62)
  267. Feb 10 23:40:20 [29811] vcsquorum crmd: notice: tengine_stonith_callback: Stonith operation 10 for vcs0 failed (Timer expired): aborting transition.
  268. Feb 10 23:40:20 [29811] vcsquorum crmd: info: abort_transition_graph: tengine_stonith_callback:447 - Triggered transition abort (complete=0) : Stonith failed
  269. Feb 10 23:40:20 [29811] vcsquorum crmd: notice: tengine_stonith_notify: Peer vcs0 was not terminated (st_notify_fence) by vcsquorum for vcsquorum: Timer expired (ref=4937ed3c-30c2-4795-94e8-36f4e2c0fa52) by client crmd.29811
  270. Feb 10 23:40:20 [29811] vcsquorum crmd: notice: run_graph: Transition 56 (Complete=6, Pending=0, Fired=0, Skipped=18, Incomplete=7, Source=unknown): Stopped
  271. Feb 10 23:40:20 [29811] vcsquorum crmd: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
  272. Feb 10 23:40:20 [29810] vcsquorum pengine: warning: pe_fence_node: Node vcs0 will be fenced because stonithvcs1 is thought to be active there
  273. Feb 10 23:40:20 [29810] vcsquorum pengine: notice: unpack_rsc_op: Operation monitor found resource p_drbd_vcs:0 active in master mode on vcs0
  274. Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action p_fs_vcs_monitor_0 on vcs1.example.com is unrunnable (pending)
  275. Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_svn_monitor_0 on vcs1.example.com is unrunnable (pending)
  276. Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_git-daemon_monitor_0 on vcs1.example.com is unrunnable (pending)
  277. Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action p_ip_vcs_monitor_0 on vcs1.example.com is unrunnable (pending)
  278. Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_monitor_0 on vcs1.example.com is unrunnable (pending)
  279. Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action p_ping:0_monitor_0 on vcs1.example.com is unrunnable (pending)
  280. Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action p_sysadmin_notify:0_monitor_0 on vcs1.example.com is unrunnable (pending)
  281. Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcs1_stop_0 on vcs0 is unrunnable (offline)
  282. Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcs1_stop_0 on vcs0 is unrunnable (offline)
  283. Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcsquorum_stop_0 on vcs0 is unrunnable (offline)
  284. Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcsquorum_stop_0 on vcs0 is unrunnable (offline)
  285. Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action p_fs_vcs_stop_0 on vcs0 is unrunnable (offline)
  286. Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action p_fs_vcs_stop_0 on vcs0 is unrunnable (offline)
  287. Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_svn_stop_0 on vcs0 is unrunnable (offline)
  288. Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_svn_stop_0 on vcs0 is unrunnable (offline)
  289. Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_git-daemon_stop_0 on vcs0 is unrunnable (offline)
  290. Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_git-daemon_stop_0 on vcs0 is unrunnable (offline)
  291. Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action p_ip_vcs_stop_0 on vcs0 is unrunnable (offline)
  292. Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action p_ip_vcs_stop_0 on vcs0 is unrunnable (offline)
  293. Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
  294. Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
  295. Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
  296. Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
  297. Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
  298. Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
  299. Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
  300. Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
  301. Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action p_ping:0_stop_0 on vcs0 is unrunnable (offline)
  302. Feb 10 23:40:20 [29810] vcsquorum pengine: warning: custom_action: Action p_ping:0_stop_0 on vcs0 is unrunnable (offline)
  303. Feb 10 23:40:20 [29810] vcsquorum pengine: warning: stage6: Scheduling Node vcs0 for STONITH
  304. Feb 10 23:40:20 [29810] vcsquorum pengine: notice: LogActions: Stop stonithvcs1 (vcs0)
  305. Feb 10 23:40:20 [29810] vcsquorum pengine: notice: LogActions: Stop stonithvcsquorum (vcs0)
  306. Feb 10 23:40:20 [29810] vcsquorum pengine: notice: LogActions: Stop p_fs_vcs (vcs0)
  307. Feb 10 23:40:20 [29810] vcsquorum pengine: notice: LogActions: Stop p_daemon_svn (vcs0)
  308. Feb 10 23:40:20 [29810] vcsquorum pengine: notice: LogActions: Stop p_daemon_git-daemon (vcs0)
  309. Feb 10 23:40:20 [29810] vcsquorum pengine: notice: LogActions: Stop p_ip_vcs (vcs0)
  310. Feb 10 23:40:20 [29810] vcsquorum pengine: notice: LogActions: Demote p_drbd_vcs:0 (Master -> Stopped vcs0)
  311. Feb 10 23:40:20 [29810] vcsquorum pengine: notice: LogActions: Stop p_ping:0 (vcs0)
  312. Feb 10 23:40:20 [29811] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
  313. Feb 10 23:40:20 [29810] vcsquorum pengine: warning: process_pe_message: Calculated Transition 57: (null)
  314. Feb 10 23:40:20 [29811] vcsquorum crmd: info: do_te_invoke: Processing graph 57 (ref=pe_calc-dc-1360561220-120) derived from (null)
  315. Feb 10 23:40:20 [29811] vcsquorum crmd: notice: te_fence_node: Executing reboot fencing operation (59) on vcs0 (timeout=60000)
  316. Feb 10 23:40:20 [29807] vcsquorum stonith-ng: notice: stonith_command: Client crmd.29811.58bfa638 wants to fence (reboot) 'vcs0' with device '(any)'
  317. Feb 10 23:40:20 [29807] vcsquorum stonith-ng: notice: initiate_remote_stonith_op: Initiating remote operation reboot for vcs0: 5b5d0013-9ded-4330-9f15-226eb95f061a (0)
  318. Feb 10 23:40:20 [29807] vcsquorum stonith-ng: info: stonith_command: Processed st_fence from crmd.29811: Operation now in progress (-115)
  319. Feb 10 23:40:20 [29807] vcsquorum stonith-ng: info: stonith_command: Processed st_query from vcsquorum: OK (0)
  320. Feb 10 23:40:26 [29807] vcsquorum stonith-ng: error: remote_op_done: Operation reboot of vcs0 by vcsquorum for crmd.29811@vcsquorum.5b5d0013: Timer expired
  321. Feb 10 23:40:26 [29811] vcsquorum crmd: notice: tengine_stonith_callback: Stonith operation 11/59:57:0:e3eca602-12d5-435f-9f7d-7836f6f41012: Timer expired (-62)
  322. Feb 10 23:40:26 [29811] vcsquorum crmd: notice: tengine_stonith_callback: Stonith operation 11 for vcs0 failed (Timer expired): aborting transition.
  323. Feb 10 23:40:26 [29811] vcsquorum crmd: info: abort_transition_graph: tengine_stonith_callback:447 - Triggered transition abort (complete=0) : Stonith failed
  324. Feb 10 23:40:26 [29811] vcsquorum crmd: notice: tengine_stonith_notify: Peer vcs0 was not terminated (st_notify_fence) by vcsquorum for vcsquorum: Timer expired (ref=5b5d0013-9ded-4330-9f15-226eb95f061a) by client crmd.29811
  325. Feb 10 23:40:26 [29811] vcsquorum crmd: notice: run_graph: Transition 57 (Complete=6, Pending=0, Fired=0, Skipped=18, Incomplete=7, Source=unknown): Stopped
  326. Feb 10 23:40:26 [29811] vcsquorum crmd: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
  327. Feb 10 23:40:27 [29810] vcsquorum pengine: warning: pe_fence_node: Node vcs0 will be fenced because stonithvcs1 is thought to be active there
  328. Feb 10 23:40:27 [29810] vcsquorum pengine: notice: unpack_rsc_op: Operation monitor found resource p_drbd_vcs:0 active in master mode on vcs0
  329. Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action p_fs_vcs_monitor_0 on vcs1.example.com is unrunnable (pending)
  330. Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_svn_monitor_0 on vcs1.example.com is unrunnable (pending)
  331. Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_git-daemon_monitor_0 on vcs1.example.com is unrunnable (pending)
  332. Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action p_ip_vcs_monitor_0 on vcs1.example.com is unrunnable (pending)
  333. Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_monitor_0 on vcs1.example.com is unrunnable (pending)
  334. Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action p_ping:0_monitor_0 on vcs1.example.com is unrunnable (pending)
  335. Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action p_sysadmin_notify:0_monitor_0 on vcs1.example.com is unrunnable (pending)
  336. Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcs1_stop_0 on vcs0 is unrunnable (offline)
  337. Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcs1_stop_0 on vcs0 is unrunnable (offline)
  338. Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcsquorum_stop_0 on vcs0 is unrunnable (offline)
  339. Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action stonithvcsquorum_stop_0 on vcs0 is unrunnable (offline)
  340. Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action p_fs_vcs_stop_0 on vcs0 is unrunnable (offline)
  341. Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action p_fs_vcs_stop_0 on vcs0 is unrunnable (offline)
  342. Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_svn_stop_0 on vcs0 is unrunnable (offline)
  343. Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_svn_stop_0 on vcs0 is unrunnable (offline)
  344. Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_git-daemon_stop_0 on vcs0 is unrunnable (offline)
  345. Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action p_daemon_git-daemon_stop_0 on vcs0 is unrunnable (offline)
  346. Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action p_ip_vcs_stop_0 on vcs0 is unrunnable (offline)
  347. Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action p_ip_vcs_stop_0 on vcs0 is unrunnable (offline)
  348. Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
  349. Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
  350. Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
  351. Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
  352. Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
  353. Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
  354. Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_demote_0 on vcs0 is unrunnable (offline)
  355. Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_stop_0 on vcs0 is unrunnable (offline)
  356. Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action p_ping:0_stop_0 on vcs0 is unrunnable (offline)
  357. Feb 10 23:40:27 [29810] vcsquorum pengine: warning: custom_action: Action p_ping:0_stop_0 on vcs0 is unrunnable (offline)
  358. Feb 10 23:40:27 [29810] vcsquorum pengine: warning: stage6: Scheduling Node vcs0 for STONITH
  359. Feb 10 23:40:27 [29810] vcsquorum pengine: notice: LogActions: Stop stonithvcs1 (vcs0)
  360. Feb 10 23:40:27 [29810] vcsquorum pengine: notice: LogActions: Stop stonithvcsquorum (vcs0)
  361. Feb 10 23:40:27 [29810] vcsquorum pengine: notice: LogActions: Stop p_fs_vcs (vcs0)
  362. Feb 10 23:40:27 [29810] vcsquorum pengine: notice: LogActions: Stop p_daemon_svn (vcs0)
  363. Feb 10 23:40:27 [29810] vcsquorum pengine: notice: LogActions: Stop p_daemon_git-daemon (vcs0)
  364. Feb 10 23:40:27 [29810] vcsquorum pengine: notice: LogActions: Stop p_ip_vcs (vcs0)
  365. Feb 10 23:40:27 [29810] vcsquorum pengine: notice: LogActions: Demote p_drbd_vcs:0 (Master -> Stopped vcs0)
  366. Feb 10 23:40:27 [29810] vcsquorum pengine: notice: LogActions: Stop p_ping:0 (vcs0)
  367. Feb 10 23:40:27 [29810] vcsquorum pengine: warning: process_pe_message: Calculated Transition 58: (null)
  368. Feb 10 23:40:27 [29811] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
  369. Feb 10 23:40:27 [29811] vcsquorum crmd: info: do_te_invoke: Processing graph 58 (ref=pe_calc-dc-1360561226-121) derived from (null)
  370. Feb 10 23:40:27 [29811] vcsquorum crmd: notice: te_fence_node: Executing reboot fencing operation (59) on vcs0 (timeout=60000)
  371. Feb 10 23:40:27 [29807] vcsquorum stonith-ng: notice: stonith_command: Client crmd.29811.58bfa638 wants to fence (reboot) 'vcs0' with device '(any)'
  372. Feb 10 23:40:27 [29807] vcsquorum stonith-ng: notice: initiate_remote_stonith_op: Initiating remote operation reboot for vcs0: 42e5ef92-424e-4fae-a95a-02b55029880f (0)
  373. Feb 10 23:40:27 [29807] vcsquorum stonith-ng: info: stonith_command: Processed st_fence from crmd.29811: Operation now in progress (-115)
  374. Feb 10 23:40:27 [29807] vcsquorum stonith-ng: info: stonith_command: Processed st_query from vcsquorum: OK (0)
  375. Feb 10 23:40:33 [29807] vcsquorum stonith-ng: error: remote_op_done: Operation reboot of vcs0 by vcsquorum for crmd.29811@vcsquorum.42e5ef92: Timer expired
  376. Feb 10 23:40:33 [29811] vcsquorum crmd: notice: tengine_stonith_callback: Stonith operation 12/59:58:0:e3eca602-12d5-435f-9f7d-7836f6f41012: Timer expired (-62)
  377. Feb 10 23:40:33 [29811] vcsquorum crmd: notice: tengine_stonith_callback: Stonith operation 12 for vcs0 failed (Timer expired): aborting transition.
  378. Feb 10 23:40:33 [29811] vcsquorum crmd: info: abort_transition_graph: tengine_stonith_callback:447 - Triggered transition abort (complete=0) : Stonith failed
  379. Feb 10 23:40:33 [29811] vcsquorum crmd: notice: tengine_stonith_notify: Peer vcs0 was not terminated (st_notify_fence) by vcsquorum for vcsquorum: Timer expired (ref=42e5ef92-424e-4fae-a95a-02b55029880f) by client crmd.29811
  380. Feb 10 23:40:33 [29811] vcsquorum crmd: notice: run_graph: Transition 58 (Complete=6, Pending=0, Fired=0, Skipped=18, Incomplete=7, Source=unknown): Stopped
  381. Feb 10 23:40:33 [29811] vcsquorum crmd: notice: too_many_st_failures: Too many failures to fence vcs0 (11), giving up
  382. Feb 10 23:40:33 [26090] vcsquorum corosync notice [QUORUM] This node is within the non-primary component and will NOT provide any services.
  383. Feb 10 23:40:33 [26090] vcsquorum corosync notice [QUORUM] Members[1]: 755053578
  384. Feb 10 23:40:33 [26090] vcsquorum corosync notice [QUORUM] Members[1]: 755053578
  385. Feb 10 23:40:33 [29811] vcsquorum crmd: notice: pcmk_quorum_notification: Membership 63836: quorum lost (1)
  386. Feb 10 23:40:33 [29811] vcsquorum crmd: notice: corosync_mark_unseen_peer_dead: Node -1425984502/vcs1.example.com was not seen in the previous transition
  387. Feb 10 23:40:33 [29811] vcsquorum crmd: notice: crm_update_peer_state: corosync_mark_unseen_peer_dead: Node vcs1.example.com[2868982794] - state is now lost
  388. Feb 10 23:40:33 [29811] vcsquorum crmd: info: peer_update_callback: vcs1.example.com is now lost (was member)
  389. Feb 10 23:40:33 [26090] vcsquorum corosync notice [TOTEM ] A processor joined or left the membership and a new membership (192.168.1.45:63836) was formed.
  390. Feb 10 23:40:33 [26090] vcsquorum corosync notice [MAIN ] Completed service synchronization, ready to provide service.
  391. Feb 10 23:40:33 [29811] vcsquorum crmd: info: pcmk_quorum_notification: Membership 63836: quorum still lost (1)
  392. Feb 10 23:40:33 [29805] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/104, version=2.100.39): OK (rc=0)
  393. Feb 10 23:40:33 [29805] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/105, version=2.100.40): OK (rc=0)
  394. Feb 10 23:40:33 [29805] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/107, version=2.100.42): OK (rc=0)
  395. Feb 10 23:41:40 [26090] vcsquorum corosync notice [QUORUM] Members[2]: 755053578 -1442761718
  396. Feb 10 23:41:40 [29811] vcsquorum crmd: info: pcmk_quorum_notification: Membership 63840: quorum still lost (2)
  397. Feb 10 23:41:40 [29811] vcsquorum crmd: notice: crm_update_peer_state: pcmk_quorum_notification: Node vcs0[2852205578] - state is now member
  398. Feb 10 23:41:40 [29811] vcsquorum crmd: info: peer_update_callback: vcs0 is now member (was (null))
  399. Feb 10 23:41:40 [26090] vcsquorum corosync notice [TOTEM ] A processor joined or left the membership and a new membership (192.168.1.45:63840) was formed.
  400. Feb 10 23:41:40 [29811] vcsquorum crmd: info: send_ais_text: Peer overloaded or membership in flux: Re-sending message (Attempt 1 of 20)
  401. Feb 10 23:41:40 [26090] vcsquorum corosync notice [QUORUM] This node is within the primary component and will provide service.
  402. Feb 10 23:41:40 [26090] vcsquorum corosync notice [QUORUM] Members[2]: 755053578 -1442761718
  403. Feb 10 23:41:40 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 2
  404. Feb 10 23:41:40 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 2
  405. Feb 10 23:41:40 [26090] vcsquorum corosync notice [MAIN ] Completed service synchronization, ready to provide service.
  406. Feb 10 23:41:40 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 7
  407. Feb 10 23:41:40 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 7
  408. Feb 10 23:41:40 [29805] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/109, version=2.100.44): OK (rc=0)
  409. Feb 10 23:41:40 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: a
  410. Feb 10 23:41:40 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: c
  411. Feb 10 23:41:41 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: e
  412. Feb 10 23:41:41 [29811] vcsquorum crmd: notice: pcmk_quorum_notification: Membership 63840: quorum acquired (2)
  413. Feb 10 23:41:41 [29807] vcsquorum stonith-ng: info: pcmk_cpg_membership: Joined[2.0] stonith-ng.-1442761718
  414. Feb 10 23:41:41 [29807] vcsquorum stonith-ng: info: pcmk_cpg_membership: Member[2.0] stonith-ng.755053578
  415. Feb 10 23:41:41 [29807] vcsquorum stonith-ng: info: pcmk_cpg_membership: Member[2.1] stonith-ng.-1442761718
  416. Feb 10 23:41:41 [29807] vcsquorum stonith-ng: info: crm_update_peer_proc: pcmk_cpg_membership: Node vcs0[-1442761718] - corosync-cpg is now online
  417. Feb 10 23:41:41 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 18
  418. Feb 10 23:41:41 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 18
  419. Feb 10 23:41:41 [29805] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/111, version=2.100.46): OK (rc=0)
  420. Feb 10 23:41:41 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 1c
  421. Feb 10 23:41:41 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 1c
  422. Feb 10 23:41:41 [29805] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/112, version=2.100.47): OK (rc=0)
  423. Feb 10 23:41:41 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 1d
  424. Feb 10 23:41:41 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 1d
  425. Feb 10 23:41:41 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 1d
  426. Feb 10 23:41:41 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 1f
  427. Feb 10 23:41:41 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 1f
  428. Feb 10 23:41:41 [29805] vcsquorum cib: info: pcmk_cpg_membership: Joined[2.0] cib.-1442761718
  429. Feb 10 23:41:41 [29805] vcsquorum cib: info: pcmk_cpg_membership: Member[2.0] cib.755053578
  430. Feb 10 23:41:41 [29805] vcsquorum cib: info: pcmk_cpg_membership: Member[2.1] cib.-1442761718
  431. Feb 10 23:41:41 [29805] vcsquorum cib: info: crm_update_peer_proc: pcmk_cpg_membership: Node vcs0[-1442761718] - corosync-cpg is now online
  432. Feb 10 23:41:42 [29811] vcsquorum crmd: info: pcmk_cpg_membership: Joined[2.0] crmd.-1442761718
  433. Feb 10 23:41:42 [29811] vcsquorum crmd: info: pcmk_cpg_membership: Member[2.0] crmd.755053578
  434. Feb 10 23:41:42 [29811] vcsquorum crmd: info: pcmk_cpg_membership: Member[2.1] crmd.-1442761718
  435. Feb 10 23:41:42 [29811] vcsquorum crmd: info: crm_update_peer_proc: pcmk_cpg_membership: Node vcs0[-1442761718] - corosync-cpg is now online
  436. Feb 10 23:41:42 [29811] vcsquorum crmd: info: peer_update_callback: Client vcs0/peer now has status [online] (DC=true)
  437. Feb 10 23:41:42 [29811] vcsquorum crmd: info: peer_update_callback: Node return implies stonith of vcs0 (action 59) completed
  438. Feb 10 23:41:42 [29811] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcs0']/lrm
  439. Feb 10 23:41:42 [29811] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcs0']/transient_attributes
  440. Feb 10 23:41:42 [29811] vcsquorum crmd: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_INTEGRATION [ input=I_NODE_JOIN cause=C_FSA_INTERNAL origin=peer_update_callback ]
  441. Feb 10 23:41:42 [29811] vcsquorum crmd: info: abort_transition_graph: do_te_invoke:163 - Triggered transition abort (complete=1) : Peer Halt
  442. Feb 10 23:41:42 [29811] vcsquorum crmd: notice: too_many_st_failures: Too many failures to fence vcs0 (11), giving up
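Note: crmd has recorded 11 failed fencing attempts against vcs0 and now refuses to schedule any more, so the node's state can never be confirmed cleanly and the same too_many_st_failures message repeats on every transition below. The fencing history held by stonith-ng shows which device failed and why; a minimal check, assuming the stock Pacemaker 1.1.x stonith_admin CLI:

    # List the fencing operations attempted against vcs0 and their results
    stonith_admin --history vcs0
    # List the fencing devices currently registered with stonith-ng
    stonith_admin --list-registered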
  443. Feb 10 23:41:42 [29811] vcsquorum crmd: info: join_make_offer: Making join offers based on membership 63840
  444. Feb 10 23:41:42 [29811] vcsquorum crmd: info: do_dc_join_offer_all: join-2: Waiting on 2 outstanding join acks
  445. Feb 10 23:41:42 [29811] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
  446. Feb 10 23:41:42 [29805] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcs0']/lrm (origin=local/crmd/114, version=2.100.49): OK (rc=0)
  447. Feb 10 23:41:42 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 26
  448. Feb 10 23:41:42 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 26
  449. Feb 10 23:41:42 [29805] vcsquorum cib: warning: cib_process_diff: Diff 2.100.0 -> 2.100.1 from vcs0 not applied to 2.100.49: current "num_updates" is greater than required
  450. Feb 10 23:41:42 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 2a 2c 2e 30
  451. Feb 10 23:41:42 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 2a 2e
  452. Feb 10 23:41:42 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 2a
  453. Feb 10 23:41:42 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 2a
  454. Feb 10 23:41:42 [29805] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcs0']/transient_attributes (origin=local/crmd/115, version=2.100.50): OK (rc=0)
  455. Feb 10 23:41:42 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 34
  456. Feb 10 23:41:42 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 34
  457. Feb 10 23:41:42 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 37
  458. Feb 10 23:41:42 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 38
  459. Feb 10 23:41:42 [29805] vcsquorum cib: info: cib_process_request: Operation complete: op cib_sync_one for section 'all' (origin=vcs0/vcs0/(null), version=2.100.51): OK (rc=0)
  460. Feb 10 23:41:42 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 3a 3c 3e 40 42 44
  461. Feb 10 23:41:42 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 3a 3e 42
  462. Feb 10 23:41:42 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 3a 42
  463. Feb 10 23:41:42 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 42
  464. Feb 10 23:41:43 [29811] vcsquorum crmd: info: do_dc_join_offer_all: A new node joined the cluster
  465. Feb 10 23:41:43 [29811] vcsquorum crmd: info: do_dc_join_offer_all: join-3: Waiting on 2 outstanding join acks
  466. Feb 10 23:41:43 [29811] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
  467. Feb 10 23:41:43 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 47
  468. Feb 10 23:41:43 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 47
  469. Feb 10 23:41:43 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 49
  470. Feb 10 23:41:44 [29811] vcsquorum crmd: info: crm_update_peer_expected: do_dc_join_filter_offer: Node vcs0[-1442761718] - expected state is now member
  471. Feb 10 23:41:44 [29811] vcsquorum crmd: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
  472. Feb 10 23:41:44 [29811] vcsquorum crmd: info: do_dc_join_finalize: join-3: Syncing the CIB from vcsquorum to the rest of the cluster
  473. Feb 10 23:41:44 [29805] vcsquorum cib: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/119, version=2.100.51): OK (rc=0)
  474. Feb 10 23:41:44 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 4d 4f 51 53 55 57
  475. Feb 10 23:41:44 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 4f 53 57
  476. Feb 10 23:41:44 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 53 59
  477. Feb 10 23:41:44 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 53
  478. Feb 10 23:41:44 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 53
  479. Feb 10 23:41:44 [29811] vcsquorum crmd: info: do_dc_join_ack: join-3: Updating node state to member for vcsquorum
  480. Feb 10 23:41:44 [29811] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcsquorum']/lrm
  481. Feb 10 23:41:44 [29811] vcsquorum crmd: info: do_dc_join_ack: join-3: Updating node state to member for vcs0
  482. Feb 10 23:41:44 [29811] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcs0']/lrm
  483. Feb 10 23:41:44 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 5a 5c 5e
  484. Feb 10 23:41:44 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 5a 5e
  485. Feb 10 23:41:44 [29805] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/120, version=2.100.52): OK (rc=0)
  486. Feb 10 23:41:44 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 5e
  487. Feb 10 23:41:44 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 63
  488. Feb 10 23:41:44 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 63
  489. Feb 10 23:41:44 [29805] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/121, version=2.100.53): OK (rc=0)
  490. Feb 10 23:41:44 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 66
  491. Feb 10 23:41:44 [29805] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcs0']/transient_attributes (origin=vcs0/crmd/8, version=2.100.54): OK (rc=0)
  492. Feb 10 23:41:44 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 68
  493. Feb 10 23:41:44 [29805] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcsquorum']/lrm (origin=local/crmd/122, version=2.100.55): OK (rc=0)
  494. Feb 10 23:41:44 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 6a 6c 6e
  495. Feb 10 23:41:44 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 6c
  496. Feb 10 23:41:44 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 6c
  497. Feb 10 23:41:44 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 70 72 74 76 78
  498. Feb 10 23:41:44 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 72 76
  499. Feb 10 23:41:44 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 72
  500. Feb 10 23:41:44 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 72
  501. Feb 10 23:41:44 [29805] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcs0']/lrm (origin=local/crmd/124, version=2.100.57): OK (rc=0)
  502. Feb 10 23:41:44 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 7a
  503. Feb 10 23:41:44 [26090] vcsquorum corosync notice [TOTEM ] Retransmit List: 7c
  504. Feb 10 23:41:44 [29811] vcsquorum crmd: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
  505. Feb 10 23:41:44 [29811] vcsquorum crmd: info: abort_transition_graph: do_te_invoke:156 - Triggered transition abort (complete=1) : Peer Cancelled
  506. Feb 10 23:41:44 [29811] vcsquorum crmd: notice: too_many_st_failures: Too many failures to fence vcs0 (11), giving up
  507. Feb 10 23:41:44 [29809] vcsquorum attrd: notice: attrd_local_callback: Sending full refresh (origin=crmd)
  508. Feb 10 23:41:44 [29809] vcsquorum attrd: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
  509. Feb 10 23:41:44 [26090] vcsquorum corosync error [TOTEM ] Marking ringid 0 interface 192.168.1.45 FAULTY
  510. Feb 10 23:41:44 [29805] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/126, version=2.100.59): OK (rc=0)
  511. Feb 10 23:41:44 [29805] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/128, version=2.100.61): OK (rc=0)
  512. Feb 10 23:41:45 [26090] vcsquorum corosync notice [TOTEM ] Automatically recovered ring 0
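Note: the FAULTY/recovered pair above is corosync's redundant ring protocol marking ring 0 (192.168.1.45; a second ring on 192.168.7.45 comes up after the restart below) as failed and then auto-recovering it, consistent with the burst of Retransmit List messages preceding it. Ring state can be inspected, and a stuck ring re-enabled by hand, with standard corosync-cfgtool usage:

    # Show the status of each totem ring on this node
    corosync-cfgtool -s
    # Re-enable any ring marked FAULTY (harmless if already recovered)
    corosync-cfgtool -r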
  513. Feb 10 23:41:46 [26090] vcsquorum corosync notice [SERV ] Unloading all Corosync service engines.
  514. Feb 10 23:41:46 [26090] vcsquorum corosync info [QB ] withdrawing server sockets
  515. Feb 10 23:41:46 [26090] vcsquorum corosync notice [SERV ] Service engine unloaded: corosync vote quorum service v1.0
  516. Feb 10 23:41:46 [26090] vcsquorum corosync info [QB ] withdrawing server sockets
  517. Feb 10 23:41:46 [26090] vcsquorum corosync notice [SERV ] Service engine unloaded: corosync configuration map access
  518. Feb 10 23:41:46 [29803] vcsquorum pacemakerd: error: cfg_connection_destroy: Connection destroyed
  519. Feb 10 23:41:46 [29803] vcsquorum pacemakerd: notice: pcmk_shutdown_worker: Shutting down Pacemaker
  520. Feb 10 23:41:46 [29803] vcsquorum pacemakerd: notice: stop_child: Stopping crmd: Sent -15 to process 29811
  521. Feb 10 23:41:46 [29811] vcsquorum crmd: info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
  522. Feb 10 23:41:46 [29811] vcsquorum crmd: notice: crm_shutdown: Requesting shutdown, upper limit is 1200000ms
  523. Feb 10 23:41:46 [29811] vcsquorum crmd: info: do_shutdown_req: Sending shutdown request to vcsquorum
  524. Feb 10 23:41:46 [26090] vcsquorum corosync info [QB ] withdrawing server sockets
  525. Feb 10 23:41:46 [26090] vcsquorum corosync notice [SERV ] Service engine unloaded: corosync configuration service
  526. Feb 10 23:41:46 [29811] vcsquorum crmd: error: pcmk_cpg_dispatch: Connection to the CPG API failed: 2
  527. Feb 10 23:41:46 [29811] vcsquorum crmd: info: crmd_ais_destroy: connection closed
  528. Feb 10 23:41:46 [29805] vcsquorum cib: error: pcmk_cpg_dispatch: Connection to the CPG API failed: 2
  529. Feb 10 23:41:46 [29805] vcsquorum cib: error: cib_ais_destroy: Corosync connection lost! Exiting.
  530. Feb 10 23:41:46 [29805] vcsquorum cib: info: terminate_cib: cib_ais_destroy: Exiting fast...
  531. Feb 10 23:41:46 [29805] vcsquorum cib: info: qb_ipcs_us_withdraw: withdrawing server sockets
  532. Feb 10 23:41:46 [29805] vcsquorum cib: info: qb_ipcs_us_withdraw: withdrawing server sockets
  533. Feb 10 23:41:46 [29807] vcsquorum stonith-ng: error: crm_ipc_read: Connection to cib_rw failed
  534. Feb 10 23:41:46 [29807] vcsquorum stonith-ng: error: mainloop_gio_callback: Connection to cib_rw[0x9ca7e0] closed (I/O condition=17)
  535. Feb 10 23:41:46 [29805] vcsquorum cib: info: qb_ipcs_us_withdraw: withdrawing server sockets
  536. Feb 10 23:41:46 [29809] vcsquorum attrd: error: crm_ipc_read: Connection to cib_rw failed
  537. Feb 10 23:41:46 [29809] vcsquorum attrd: error: mainloop_gio_callback: Connection to cib_rw[0x23e2860] closed (I/O condition=17)
  538. Feb 10 23:41:46 [29809] vcsquorum attrd: error: attrd_cib_connection_destroy: Connection to the CIB terminated...
  539. Feb 10 23:41:46 [29811] vcsquorum crmd: error: crm_ipc_read: Connection to cib_shm failed
  540. Feb 10 23:41:46 [29811] vcsquorum crmd: error: mainloop_gio_callback: Connection to cib_shm[0xbc6490] closed (I/O condition=17)
  541. Feb 10 23:41:46 [29811] vcsquorum crmd: error: crmd_cib_connection_destroy: Connection to the CIB terminated...
  542. Feb 10 23:41:46 [29811] vcsquorum crmd: error: do_log: FSA: Input I_ERROR from crmd_cib_connection_destroy() received in state S_POLICY_ENGINE
  543. Feb 10 23:41:46 [29811] vcsquorum crmd: warning: do_state_transition: State transition S_POLICY_ENGINE -> S_RECOVERY [ input=I_ERROR cause=C_FSA_INTERNAL origin=crmd_cib_connection_destroy ]
  544. Feb 10 23:41:46 [29811] vcsquorum crmd: error: do_recover: Action A_RECOVER (0000000001000000) not supported
  545. Feb 10 23:41:46 [29811] vcsquorum crmd: warning: do_election_vote: Not voting in election, we're in state S_RECOVERY
  546. Feb 10 23:41:46 [29811] vcsquorum crmd: info: do_dc_release: DC role released
  547. Feb 10 23:41:46 [29811] vcsquorum crmd: info: pe_ipc_destroy: Connection to the Policy Engine released
  548. Feb 10 23:41:46 [29811] vcsquorum crmd: info: do_te_control: Transitioner is now inactive
  549. Feb 10 23:41:46 [29811] vcsquorum crmd: error: do_log: FSA: Input I_TERMINATE from do_recover() received in state S_RECOVERY
  550. Feb 10 23:41:46 [29811] vcsquorum crmd: info: do_state_transition: State transition S_RECOVERY -> S_TERMINATE [ input=I_TERMINATE cause=C_FSA_INTERNAL origin=do_recover ]
  551. Feb 10 23:41:46 [29811] vcsquorum crmd: info: do_shutdown: Disconnecting STONITH...
  552. Feb 10 23:41:46 [29811] vcsquorum crmd: info: tengine_stonith_connection_destroy: Fencing daemon disconnected
  553. Feb 10 23:41:46 [29811] vcsquorum crmd: info: lrmd_api_disconnect: Disconnecting from lrmd service
  554. Feb 10 23:41:46 [29811] vcsquorum crmd: info: lrmd_connection_destroy: connection destroyed
  555. Feb 10 23:41:46 [29811] vcsquorum crmd: info: lrm_connection_destroy: LRM Connection disconnected
  556. Feb 10 23:41:46 [29811] vcsquorum crmd: info: do_lrm_control: Disconnected from the LRM
  557. Feb 10 23:41:46 [29808] vcsquorum lrmd: info: lrmd_ipc_destroy: LRMD client disconnecting 0x6bbc00 - name: crmd id: 30675725-cb57-4d05-a2da-34d192311002
  558. Feb 10 23:41:46 [29811] vcsquorum crmd: info: crm_cluster_disconnect: Disconnecting from cluster infrastructure: corosync
  559. Feb 10 23:41:46 [29811] vcsquorum crmd: notice: terminate_cs_connection: Disconnecting from Corosync
  560. Feb 10 23:41:46 [29811] vcsquorum crmd: info: crm_cluster_disconnect: Disconnected from corosync
  561. Feb 10 23:41:46 [29811] vcsquorum crmd: info: do_ha_control: Disconnected from the cluster
  562. Feb 10 23:41:46 [29811] vcsquorum crmd: info: do_cib_control: Disconnecting CIB
  563. Feb 10 23:41:46 [29811] vcsquorum crmd: info: qb_ipcs_us_withdraw: withdrawing server sockets
  564. Feb 10 23:41:46 [29811] vcsquorum crmd: info: do_exit: Performing A_EXIT_0 - gracefully exiting the CRMd
  565. Feb 10 23:41:46 [29811] vcsquorum crmd: error: do_exit: Could not recover from internal error
  566. Feb 10 23:41:46 [29811] vcsquorum crmd: info: do_exit: [crmd] stopped (2)
  567. Feb 10 23:41:46 [29811] vcsquorum crmd: info: free_mem: Dropping I_PENDING: [ state=S_TERMINATE cause=C_FSA_INTERNAL origin=do_election_vote ]
  568. Feb 10 23:41:46 [29811] vcsquorum crmd: info: free_mem: Dropping I_RELEASE_SUCCESS: [ state=S_TERMINATE cause=C_FSA_INTERNAL origin=do_dc_release ]
  569. Feb 10 23:41:46 [29811] vcsquorum crmd: info: free_mem: Dropping I_TERMINATE: [ state=S_TERMINATE cause=C_FSA_INTERNAL origin=do_stop ]
  570. Feb 10 23:41:46 [29811] vcsquorum crmd: info: lrmd_api_disconnect: Disconnecting from lrmd service
  571. Feb 10 23:41:46 [29811] vcsquorum crmd: info: crm_xml_cleanup: Cleaning up memory from libxml2
  572. Feb 10 23:41:46 [29803] vcsquorum pacemakerd: error: pcmk_child_exit: Child process cib exited (pid=29805, rc=64)
  573. Feb 10 23:41:46 [29803] vcsquorum pacemakerd: error: pcmk_child_exit: Child process attrd exited (pid=29809, rc=1)
  574. Feb 10 23:41:46 [29803] vcsquorum pacemakerd: error: pcmk_child_exit: Child process crmd exited (pid=29811, rc=2)
  575. Feb 10 23:41:46 [29803] vcsquorum pacemakerd: notice: stop_child: Stopping pengine: Sent -15 to process 29810
  576. Feb 10 23:41:46 [29807] vcsquorum stonith-ng: error: pcmk_cpg_dispatch: Connection to the CPG API failed: 2
  577. Feb 10 23:41:46 [29807] vcsquorum stonith-ng: error: stonith_peer_ais_destroy: AIS connection terminated
  578. Feb 10 23:41:46 [29807] vcsquorum stonith-ng: info: stonith_shutdown: Terminating with 1 clients
  579. Feb 10 23:41:46 [29807] vcsquorum stonith-ng: info: qb_ipcs_us_withdraw: withdrawing server sockets
  580. Feb 10 23:41:46 [29807] vcsquorum stonith-ng: info: crm_xml_cleanup: Cleaning up memory from libxml2
  581. Feb 10 23:41:46 [29807] vcsquorum stonith-ng: info: main: Done
  582. Feb 10 23:41:46 [29808] vcsquorum lrmd: error: crm_ipc_read: Connection to stonith-ng failed
  583. Feb 10 23:41:46 [29808] vcsquorum lrmd: error: mainloop_gio_callback: Connection to stonith-ng[0x6c3830] closed (I/O condition=17)
  584. Feb 10 23:41:46 [29808] vcsquorum lrmd: error: stonith_connection_destroy_cb: LRMD lost STONITH connection
  585. Feb 10 23:41:46 [26090] vcsquorum corosync info [QB ] withdrawing server sockets
  586. Feb 10 23:41:46 [29803] vcsquorum pacemakerd: error: cpg_connection_destroy: Connection destroyed
  587. Feb 10 23:41:46 [26090] vcsquorum corosync notice [SERV ] Service engine unloaded: corosync cluster closed process group service v1.01
  588. Feb 10 23:41:46 [29803] vcsquorum pacemakerd: info: pcmk_child_exit: Child process stonith-ng exited (pid=29807, rc=0)
  589. Feb 10 23:41:46 [29803] vcsquorum pacemakerd: error: send_cpg_message: Sending message via cpg FAILED: (rc=9) Bad handle
  590. Feb 10 23:41:46 [29803] vcsquorum pacemakerd: info: pcmk_child_exit: Child process pengine exited (pid=29810, rc=0)
  591. Feb 10 23:41:46 [29803] vcsquorum pacemakerd: error: send_cpg_message: Sending message via cpg FAILED: (rc=9) Bad handle
  592. Feb 10 23:41:46 [29808] vcsquorum lrmd: info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
  593. Feb 10 23:41:46 [29808] vcsquorum lrmd: info: lrmd_shutdown: Terminating with 0 clients
  594. Feb 10 23:41:46 [29808] vcsquorum lrmd: info: qb_ipcs_us_withdraw: withdrawing server sockets
  595. Feb 10 23:41:46 [29803] vcsquorum pacemakerd: notice: stop_child: Stopping lrmd: Sent -15 to process 29808
  596. Feb 10 23:41:46 [29803] vcsquorum pacemakerd: info: pcmk_child_exit: Child process lrmd exited (pid=29808, rc=0)
  597. Feb 10 23:41:46 [29803] vcsquorum pacemakerd: error: send_cpg_message: Sending message via cpg FAILED: (rc=9) Bad handle
  598. Feb 10 23:41:46 [29803] vcsquorum pacemakerd: notice: pcmk_shutdown_worker: Shutdown complete
  599. Feb 10 23:41:46 [29803] vcsquorum pacemakerd: info: qb_ipcs_us_withdraw: withdrawing server sockets
  600. Feb 10 23:41:46 [29803] vcsquorum pacemakerd: info: main: Exiting pacemakerd
  601. Feb 10 23:41:46 [26090] vcsquorum corosync info [QB ] withdrawing server sockets
  602. Feb 10 23:41:46 [26090] vcsquorum corosync notice [SERV ] Service engine unloaded: corosync cluster quorum service v0.1
  603. Feb 10 23:41:46 [26090] vcsquorum corosync notice [SERV ] Service engine unloaded: corosync profile loading service
  604. Feb 10 23:41:46 [26090] vcsquorum corosync notice [MAIN ] Corosync Cluster Engine exiting normally
  605. Feb 10 23:43:05 [1099] vcsquorum corosync notice [TOTEM ] Initializing transport (UDP/IP Multicast).
  606. Feb 10 23:43:05 [1099] vcsquorum corosync notice [TOTEM ] Initializing transmit/receive security (NSS) crypto: none hash: none
  607. Feb 10 23:43:05 [1099] vcsquorum corosync notice [TOTEM ] Initializing transport (UDP/IP Multicast).
  608. Feb 10 23:43:05 [1099] vcsquorum corosync notice [TOTEM ] Initializing transmit/receive security (NSS) crypto: none hash: none
  609. Feb 10 23:43:05 [1099] vcsquorum corosync notice [TOTEM ] The network interface [192.168.1.45] is now up.
  610. Feb 10 23:43:05 [1099] vcsquorum corosync notice [SERV ] Service engine loaded: corosync configuration map access [0]
  611. Feb 10 23:43:05 [1099] vcsquorum corosync info [QB ] server name: cmap
  612. Feb 10 23:43:05 [1099] vcsquorum corosync notice [SERV ] Service engine loaded: corosync configuration service [1]
  613. Feb 10 23:43:05 [1099] vcsquorum corosync info [QB ] server name: cfg
  614. Feb 10 23:43:05 [1099] vcsquorum corosync notice [SERV ] Service engine loaded: corosync cluster closed process group service v1.01 [2]
  615. Feb 10 23:43:05 [1099] vcsquorum corosync info [QB ] server name: cpg
  616. Feb 10 23:43:05 [1099] vcsquorum corosync notice [SERV ] Service engine loaded: corosync profile loading service [4]
  617. Feb 10 23:43:05 [1099] vcsquorum corosync notice [QUORUM] Using quorum provider corosync_votequorum
  618. Feb 10 23:43:05 [1099] vcsquorum corosync notice [SERV ] Service engine loaded: corosync vote quorum service v1.0 [5]
  619. Feb 10 23:43:05 [1099] vcsquorum corosync info [QB ] server name: votequorum
  620. Feb 10 23:43:05 [1099] vcsquorum corosync notice [SERV ] Service engine loaded: corosync cluster quorum service v0.1 [3]
  621. Feb 10 23:43:05 [1099] vcsquorum corosync info [QB ] server name: quorum
  622. Feb 10 23:43:05 [1099] vcsquorum corosync notice [TOTEM ] The network interface [192.168.7.45] is now up.
  623. Feb 10 23:43:05 [1099] vcsquorum corosync notice [QUORUM] Members[1]: 755053578
  624. Feb 10 23:43:05 [1099] vcsquorum corosync notice [TOTEM ] A processor joined or left the membership and a new membership (192.168.1.45:63844) was formed.
  625. Feb 10 23:43:05 [1099] vcsquorum corosync notice [MAIN ] Completed service synchronization, ready to provide service.
  626. Feb 10 23:43:06 [1099] vcsquorum corosync notice [QUORUM] Members[3]: 755053578 -1442761718 -1425984502
  627. Feb 10 23:43:06 [1099] vcsquorum corosync notice [TOTEM ] A processor joined or left the membership and a new membership (192.168.1.45:63852) was formed.
  628. Feb 10 23:43:06 [1099] vcsquorum corosync notice [QUORUM] This node is within the primary component and will provide service.
  629. Feb 10 23:43:06 [1099] vcsquorum corosync notice [QUORUM] Members[3]: 755053578 -1442761718 -1425984502
  630. Feb 10 23:43:06 [1099] vcsquorum corosync notice [MAIN ] Completed service synchronization, ready to provide service.
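Note: corosync is back under a new pid (1099, previously 26090) and re-forms membership 63852 with all three nodes before pacemakerd starts. The membership corosync actually holds at runtime can be read out of its cmap database; a sketch, assuming the corosync 2.x tooling this log comes from:

    # Dump the runtime membership keys (node ids and join status)
    corosync-cmapctl | grep members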
  631. Feb 10 23:43:07 [1197] vcsquorum pacemakerd: info: read_config: User configured file based logging and explicitly disabled syslog.
  632. Feb 10 23:43:07 [1197] vcsquorum pacemakerd: notice: main: Starting Pacemaker 1.1.8 (Build: 1f8858c): generated-manpages agent-manpages ncurses libqb-logging libqb-ipc lha-fencing upstart systemd corosync-native snmp libesmtp
  633. Feb 10 23:43:07 [1197] vcsquorum pacemakerd: info: main: Maximum core file size is: 18446744073709551615
  634. Feb 10 23:43:07 [1197] vcsquorum pacemakerd: info: qb_ipcs_us_publish: server name: pacemakerd
  635. Feb 10 23:43:07 [1197] vcsquorum pacemakerd: notice: update_node_processes: 0xfa83b0 Node 755053578 now known as vcsquorum, was:
  636. Feb 10 23:43:07 [1197] vcsquorum pacemakerd: info: start_child: Forked child 1199 for process cib
  637. Feb 10 23:43:07 [1197] vcsquorum pacemakerd: info: start_child: Forked child 1201 for process stonith-ng
  638. Feb 10 23:43:07 [1197] vcsquorum pacemakerd: info: start_child: Forked child 1202 for process lrmd
  639. Feb 10 23:43:07 [1197] vcsquorum pacemakerd: info: start_child: Forked child 1203 for process attrd
  640. Feb 10 23:43:07 [1197] vcsquorum pacemakerd: info: start_child: Forked child 1204 for process pengine
  641. Feb 10 23:43:07 [1197] vcsquorum pacemakerd: info: start_child: Forked child 1205 for process crmd
  642. Feb 10 23:43:07 [1197] vcsquorum pacemakerd: info: main: Starting mainloop
  643. Feb 10 23:43:07 [1201] vcsquorum stonith-ng: info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/root
  644. Feb 10 23:43:07 [1201] vcsquorum stonith-ng: info: get_cluster_type: Cluster type is: 'corosync'
  645. Feb 10 23:43:07 [1201] vcsquorum stonith-ng: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
  646. Feb 10 23:43:07 [1201] vcsquorum stonith-ng: info: crm_get_peer: Node <null> now has id: 755053578
  647. Feb 10 23:43:07 [1201] vcsquorum stonith-ng: info: crm_update_peer_proc: init_cpg_connection: Node (null)[755053578] - corosync-cpg is now online
  648. Feb 10 23:43:07 [1201] vcsquorum stonith-ng: info: init_cs_connection_once: Connection to 'corosync': established
  649. Feb 10 23:43:07 [1201] vcsquorum stonith-ng: info: crm_get_peer: Node 755053578 is now known as vcsquorum
  650. Feb 10 23:43:07 [1201] vcsquorum stonith-ng: info: crm_get_peer: Node 755053578 has uuid 755053578
  651. Feb 10 23:43:07 [1197] vcsquorum pacemakerd: notice: update_node_processes: 0x11ac800 Node 2868982794 now known as vcs1, was:
  652. Feb 10 23:43:07 [1197] vcsquorum pacemakerd: notice: update_node_processes: 0xfa8b90 Node 2852205578 now known as vcs0, was:
  653. Feb 10 23:43:07 [1201] vcsquorum stonith-ng: info: crm_ipc_connect: Could not establish cib_rw connection: Connection refused (111)
  654. Feb 10 23:43:07 [1199] vcsquorum cib: info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
  655. Feb 10 23:43:07 [1203] vcsquorum attrd: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
  656. Feb 10 23:43:07 [1203] vcsquorum attrd: notice: main: Starting mainloop...
  657. Feb 10 23:43:07 [1199] vcsquorum cib: info: get_cluster_type: Cluster type is: 'corosync'
  658. Feb 10 23:43:07 [1199] vcsquorum cib: info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.xml (digest: /var/lib/pacemaker/cib/cib.xml.sig)
  659. Feb 10 23:43:07 [1199] vcsquorum cib: info: validate_with_relaxng: Creating RNG parser context
  660. Feb 10 23:43:07 [1202] vcsquorum lrmd: info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/root
  661. Feb 10 23:43:07 [1202] vcsquorum lrmd: info: qb_ipcs_us_publish: server name: lrmd
  662. Feb 10 23:43:07 [1202] vcsquorum lrmd: info: main: Starting
  663. Feb 10 23:43:07 [1205] vcsquorum crmd: info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
  664. Feb 10 23:43:07 [1205] vcsquorum crmd: notice: main: CRM Git Version: 1f8858c
  665. Feb 10 23:43:07 [1205] vcsquorum crmd: info: get_cluster_type: Cluster type is: 'corosync'
  666. Feb 10 23:43:07 [1205] vcsquorum crmd: info: crm_ipc_connect: Could not establish cib_shm connection: Connection refused (111)
  667. Feb 10 23:43:07 [1199] vcsquorum cib: info: startCib: CIB Initialization completed successfully
  668. Feb 10 23:43:07 [1199] vcsquorum cib: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
  669. Feb 10 23:43:07 [1199] vcsquorum cib: info: crm_get_peer: Node <null> now has id: 755053578
  670. Feb 10 23:43:07 [1199] vcsquorum cib: info: crm_update_peer_proc: init_cpg_connection: Node (null)[755053578] - corosync-cpg is now online
  671. Feb 10 23:43:07 [1199] vcsquorum cib: info: init_cs_connection_once: Connection to 'corosync': established
  672. Feb 10 23:43:07 [1199] vcsquorum cib: info: crm_get_peer: Node 755053578 is now known as vcsquorum
  673. Feb 10 23:43:07 [1199] vcsquorum cib: info: crm_get_peer: Node 755053578 has uuid 755053578
  674. Feb 10 23:43:07 [1199] vcsquorum cib: info: qb_ipcs_us_publish: server name: cib_ro
  675. Feb 10 23:43:07 [1199] vcsquorum cib: info: qb_ipcs_us_publish: server name: cib_rw
  676. Feb 10 23:43:07 [1199] vcsquorum cib: info: qb_ipcs_us_publish: server name: cib_shm
  677. Feb 10 23:43:07 [1199] vcsquorum cib: info: cib_init: Starting cib mainloop
  678. Feb 10 23:43:07 [1199] vcsquorum cib: info: pcmk_cpg_membership: Joined[0.0] cib.755053578
  679. Feb 10 23:43:07 [1199] vcsquorum cib: info: pcmk_cpg_membership: Member[0.0] cib.755053578
  680. Feb 10 23:43:07 [1199] vcsquorum cib: info: crm_get_peer: Node <null> now has id: 2852205578
  681. Feb 10 23:43:07 [1199] vcsquorum cib: info: pcmk_cpg_membership: Member[0.1] cib.-1442761718
  682. Feb 10 23:43:07 [1199] vcsquorum cib: info: crm_update_peer_proc: pcmk_cpg_membership: Node (null)[-1442761718] - corosync-cpg is now online
  683. Feb 10 23:43:07 [1199] vcsquorum cib: info: crm_get_peer: Node <null> now has id: 2868982794
  684. Feb 10 23:43:07 [1199] vcsquorum cib: info: pcmk_cpg_membership: Member[0.2] cib.-1425984502
  685. Feb 10 23:43:07 [1199] vcsquorum cib: info: crm_update_peer_proc: pcmk_cpg_membership: Node (null)[-1425984502] - corosync-cpg is now online
  686. Feb 10 23:43:08 [1201] vcsquorum stonith-ng: notice: setup_cib: Watching for stonith topology changes
  687. Feb 10 23:43:08 [1201] vcsquorum stonith-ng: info: qb_ipcs_us_publish: server name: stonith-ng
  688. Feb 10 23:43:08 [1201] vcsquorum stonith-ng: info: main: Starting stonith-ng mainloop
  689. Feb 10 23:43:08 [1201] vcsquorum stonith-ng: info: pcmk_cpg_membership: Joined[0.0] stonith-ng.755053578
  690. Feb 10 23:43:08 [1201] vcsquorum stonith-ng: info: pcmk_cpg_membership: Member[0.0] stonith-ng.755053578
  691. Feb 10 23:43:08 [1201] vcsquorum stonith-ng: info: crm_get_peer: Node <null> now has id: 2852205578
  692. Feb 10 23:43:08 [1201] vcsquorum stonith-ng: info: pcmk_cpg_membership: Member[0.1] stonith-ng.-1442761718
  693. Feb 10 23:43:08 [1201] vcsquorum stonith-ng: info: crm_update_peer_proc: pcmk_cpg_membership: Node (null)[-1442761718] - corosync-cpg is now online
  694. Feb 10 23:43:08 [1201] vcsquorum stonith-ng: info: crm_get_peer: Node <null> now has id: 2868982794
  695. Feb 10 23:43:08 [1201] vcsquorum stonith-ng: info: pcmk_cpg_membership: Member[0.2] stonith-ng.-1425984502
  696. Feb 10 23:43:08 [1201] vcsquorum stonith-ng: info: crm_update_peer_proc: pcmk_cpg_membership: Node (null)[-1425984502] - corosync-cpg is now online
  697. Feb 10 23:43:08 [1201] vcsquorum stonith-ng: info: crm_get_peer: Node 2852205578 is now known as vcs0
  698. Feb 10 23:43:08 [1201] vcsquorum stonith-ng: info: crm_get_peer: Node 2852205578 has uuid 2852205578
  699. Feb 10 23:43:08 [1201] vcsquorum stonith-ng: info: crm_get_peer: Node 2868982794 is now known as vcs1
  700. Feb 10 23:43:08 [1201] vcsquorum stonith-ng: info: crm_get_peer: Node 2868982794 has uuid 2868982794
  701. Feb 10 23:43:08 [1205] vcsquorum crmd: info: do_cib_control: CIB connection established
  702. Feb 10 23:43:08 [1205] vcsquorum crmd: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
  703. Feb 10 23:43:08 [1205] vcsquorum crmd: info: crm_get_peer: Node <null> now has id: 755053578
  704. Feb 10 23:43:08 [1205] vcsquorum crmd: info: crm_update_peer_proc: init_cpg_connection: Node (null)[755053578] - corosync-cpg is now online
  705. Feb 10 23:43:08 [1205] vcsquorum crmd: info: init_cs_connection_once: Connection to 'corosync': established
  706. Feb 10 23:43:08 [1205] vcsquorum crmd: info: crm_get_peer: Node 755053578 is now known as vcsquorum
  707. Feb 10 23:43:08 [1205] vcsquorum crmd: info: peer_update_callback: vcsquorum is now (null)
  708. Feb 10 23:43:08 [1205] vcsquorum crmd: info: crm_get_peer: Node 755053578 has uuid 755053578
  709. Feb 10 23:43:08 [1205] vcsquorum crmd: notice: init_quorum_connection: Quorum acquired
  710. Feb 10 23:43:08 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/3, version=2.100.1): OK (rc=0)
  711. Feb 10 23:43:08 [1205] vcsquorum crmd: info: do_ha_control: Connected to the cluster
  712. Feb 10 23:43:08 [1205] vcsquorum crmd: info: lrmd_api_connect: Connecting to lrmd
  713. Feb 10 23:43:08 [1202] vcsquorum lrmd: info: lrmd_ipc_accept: Accepting client connection: 0x2244e00 pid=1205 for uid=997 gid=0
  714. Feb 10 23:43:08 [1199] vcsquorum cib: info: crm_get_peer: Node 2852205578 is now known as vcs0
  715. Feb 10 23:43:08 [1199] vcsquorum cib: info: crm_get_peer: Node 2852205578 has uuid 2852205578
  716. Feb 10 23:43:08 [1205] vcsquorum crmd: info: do_started: Delaying start, no membership data (0000000000100000)
  717. Feb 10 23:43:08 [1199] vcsquorum cib: info: cib_process_diff: Diff 2.101.2 -> 2.101.3 from vcs0 not applied to 2.100.1: current "epoch" is less than required
  718. Feb 10 23:43:08 [1199] vcsquorum cib: info: cib_server_process_diff: Requesting re-sync from peer
  719. Feb 10 23:43:08 [1205] vcsquorum crmd: info: pcmk_quorum_notification: Membership 63852: quorum retained (3)
  720. Feb 10 23:43:08 [1205] vcsquorum crmd: notice: crm_update_peer_state: pcmk_quorum_notification: Node vcsquorum[755053578] - state is now member
  721. Feb 10 23:43:08 [1205] vcsquorum crmd: info: peer_update_callback: vcsquorum is now member (was (null))
  722. Feb 10 23:43:08 [1205] vcsquorum crmd: info: crm_get_peer: Node <null> now has id: 2852205578
  723. Feb 10 23:43:08 [1205] vcsquorum crmd: info: pcmk_quorum_notification: Obtaining name for new node 2852205578
  724. Feb 10 23:43:08 [1205] vcsquorum crmd: notice: corosync_node_name: Inferred node name 'vcs0.example.com' for nodeid 2852205578 from DNS
  725. Feb 10 23:43:08 [1205] vcsquorum crmd: info: crm_get_peer: Node 2852205578 is now known as vcs0.example.com
  726. Feb 10 23:43:08 [1205] vcsquorum crmd: info: peer_update_callback: vcs0.example.com is now (null)
  727. Feb 10 23:43:08 [1205] vcsquorum crmd: info: crm_get_peer: Node 2852205578 has uuid 2852205578
  728. Feb 10 23:43:08 [1205] vcsquorum crmd: notice: crm_update_peer_state: pcmk_quorum_notification: Node vcs0.example.com[2852205578] - state is now member
  729. Feb 10 23:43:08 [1205] vcsquorum crmd: info: peer_update_callback: vcs0.example.com is now member (was (null))
  730. Feb 10 23:43:08 [1205] vcsquorum crmd: info: crm_get_peer: Node <null> now has id: 2868982794
  731. Feb 10 23:43:08 [1205] vcsquorum crmd: info: pcmk_quorum_notification: Obtaining name for new node 2868982794
  732. Feb 10 23:43:08 [1205] vcsquorum crmd: notice: corosync_node_name: Inferred node name 'vcs1.example.com' for nodeid 2868982794 from DNS
  733. Feb 10 23:43:08 [1205] vcsquorum crmd: info: crm_get_peer: Node 2868982794 is now known as vcs1.example.com
  734. Feb 10 23:43:08 [1205] vcsquorum crmd: info: peer_update_callback: vcs1.example.com is now (null)
  735. Feb 10 23:43:08 [1205] vcsquorum crmd: info: crm_get_peer: Node 2868982794 has uuid 2868982794
  736. Feb 10 23:43:08 [1205] vcsquorum crmd: notice: crm_update_peer_state: pcmk_quorum_notification: Node vcs1.example.com[2868982794] - state is now member
  737. Feb 10 23:43:08 [1205] vcsquorum crmd: info: peer_update_callback: vcs1.example.com is now member (was (null))
  738. Feb 10 23:43:08 [1205] vcsquorum crmd: info: qb_ipcs_us_publish: server name: crmd
  739. Feb 10 23:43:08 [1205] vcsquorum crmd: notice: do_started: The local CRM is operational
  740. Feb 10 23:43:08 [1205] vcsquorum crmd: info: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
  741. Feb 10 23:43:08 [1199] vcsquorum cib: info: cib_process_replace: Digest matched on replace from vcs0: 15070d3a94b6c6b977ed638be996c276
  742. Feb 10 23:43:08 [1199] vcsquorum cib: info: cib_process_replace: Replaced 2.100.1 with 2.101.3 from vcs0
  743. Feb 10 23:43:08 [1199] vcsquorum cib: info: cib_replace_notify: Replaced: 2.100.1 -> 2.101.3 from vcs0
  744. Feb 10 23:43:09 [1205] vcsquorum crmd: info: pcmk_cpg_membership: Joined[0.0] crmd.755053578
  745. Feb 10 23:43:09 [1205] vcsquorum crmd: info: pcmk_cpg_membership: Member[0.0] crmd.755053578
  746. Feb 10 23:43:09 [1205] vcsquorum crmd: info: pcmk_cpg_membership: Member[0.1] crmd.-1442761718
  747. Feb 10 23:43:09 [1205] vcsquorum crmd: info: crm_update_peer_proc: pcmk_cpg_membership: Node vcs0.example.com[-1442761718] - corosync-cpg is now online
  748. Feb 10 23:43:09 [1205] vcsquorum crmd: info: peer_update_callback: Client vcs0.example.com/peer now has status [online] (DC=<null>)
  749. Feb 10 23:43:09 [1205] vcsquorum crmd: info: pcmk_cpg_membership: Member[0.2] crmd.-1425984502
  750. Feb 10 23:43:09 [1205] vcsquorum crmd: info: crm_update_peer_proc: pcmk_cpg_membership: Node vcs1.example.com[-1425984502] - corosync-cpg is now online
  751. Feb 10 23:43:09 [1205] vcsquorum crmd: info: peer_update_callback: Client vcs1.example.com/peer now has status [online] (DC=<null>)
  752. Feb 10 23:43:29 [1205] vcsquorum crmd: info: crm_timer_popped: Election Trigger (I_DC_TIMEOUT) just popped (20000ms)
  753. Feb 10 23:43:29 [1205] vcsquorum crmd: warning: do_log: FSA: Input I_DC_TIMEOUT from crm_timer_popped() received in state S_PENDING
  754. Feb 10 23:43:29 [1205] vcsquorum crmd: info: do_state_transition: State transition S_PENDING -> S_ELECTION [ input=I_DC_TIMEOUT cause=C_TIMER_POPPED origin=crm_timer_popped ]
  755. Feb 10 23:43:29 [1205] vcsquorum crmd: crit: crm_get_peer: Node vcs0.example.com and vcs0 share the same cluster node id '2852205578'!
  756. Feb 10 23:43:29 [1205] vcsquorum crmd: info: crm_get_peer: Node vcs0 now has id: 2852205578
  757. Feb 10 23:43:29 [1205] vcsquorum crmd: info: crm_get_peer: Node 2852205578 is now known as vcs0
  758. Feb 10 23:43:29 [1205] vcsquorum crmd: info: peer_update_callback: vcs0 is now (null)
  759. Feb 10 23:43:29 [1205] vcsquorum crmd: info: crm_get_peer: Node 2852205578 has uuid 2852205578
  760. Feb 10 23:43:29 [1205] vcsquorum crmd: error: crmd_ais_dispatch: Receiving messages from a node we think is dead: vcs0[-1442761718]
  761. Feb 10 23:43:29 [1205] vcsquorum crmd: info: crm_update_peer_proc: crmd_ais_dispatch: Node vcs0[-1442761718] - corosync-cpg is now online
  762. Feb 10 23:43:29 [1205] vcsquorum crmd: info: peer_update_callback: Client vcs0/peer now has status [online] (DC=<null>)
  763. Feb 10 23:43:29 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 7 (current: 2, owner: 2852205578): Processed vote from vcs0 (Peer is not part of our cluster)
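Note: the crm_get_peer crit line above is the root of the election loop that follows. crmd inferred the names vcs0.example.com and vcs1.example.com from DNS (the corosync_node_name "Inferred node name ... from DNS" lines at 23:43:08), while the peers announce themselves over CPG as plain vcs0 and vcs1, so each node id carries two peer-cache entries and every incoming vote is rejected with "Peer is not part of our cluster". The usual remedy is to pin the names in corosync.conf so Pacemaker never falls back to DNS; a sketch of a corosync 2.x nodelist, with the node ids taken from this log and the vcs0/vcs1 ring addresses left as placeholders because they never appear here:

    nodelist {
        node {
            ring0_addr: 192.168.1.45      # vcsquorum, per the log
            name: vcsquorum
            nodeid: 755053578
        }
        node {
            ring0_addr: <vcs0-ring0-ip>   # placeholder, address not shown in this log
            name: vcs0
            nodeid: 2852205578
        }
        node {
            ring0_addr: <vcs1-ring0-ip>   # placeholder, address not shown in this log
            name: vcs1
            nodeid: 2868982794
        }
    }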
  764. Feb 10 23:44:50 [1199] vcsquorum cib: info: crm_get_peer: Node 2868982794 is now known as vcs1
  765. Feb 10 23:44:50 [1199] vcsquorum cib: info: crm_get_peer: Node 2868982794 has uuid 2868982794
  766. Feb 10 23:45:06 [1205] vcsquorum crmd: warning: do_log: FSA: Input I_JOIN_OFFER from route_message() received in state S_ELECTION
  767. Feb 10 23:45:06 [1205] vcsquorum crmd: crit: crm_get_peer: Node vcs1.example.com and vcs1 share the same cluster node id '2868982794'!
  768. Feb 10 23:45:06 [1205] vcsquorum crmd: info: crm_get_peer: Node vcs1 now has id: 2868982794
  769. Feb 10 23:45:06 [1205] vcsquorum crmd: info: crm_get_peer: Node 2868982794 is now known as vcs1
  770. Feb 10 23:45:06 [1205] vcsquorum crmd: info: peer_update_callback: vcs1 is now (null)
  771. Feb 10 23:45:06 [1205] vcsquorum crmd: info: crm_get_peer: Node 2868982794 has uuid 2868982794
  772. Feb 10 23:45:06 [1205] vcsquorum crmd: error: crmd_ais_dispatch: Receiving messages from a node we think is dead: vcs1[-1425984502]
  773. Feb 10 23:45:06 [1205] vcsquorum crmd: info: crm_update_peer_proc: crmd_ais_dispatch: Node vcs1[-1425984502] - corosync-cpg is now online
  774. Feb 10 23:45:06 [1205] vcsquorum crmd: info: peer_update_callback: Client vcs1/peer now has status [online] (DC=<null>)
  775. Feb 10 23:45:06 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 6 (current: 3, owner: 2868982794): Processed vote from vcs1 (Peer is not part of our cluster)
  776. Feb 10 23:45:06 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 8 (current: 3, owner: 2852205578): Processed vote from vcs0 (Peer is not part of our cluster)
  777. Feb 10 23:45:06 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 9 (current: 3, owner: 2852205578): Processed vote from vcs0 (Peer is not part of our cluster)
  778. Feb 10 23:45:29 [1205] vcsquorum crmd: error: crm_timer_popped: Election Timeout (I_ELECTION_DC) just popped in state S_ELECTION! (120000ms)
  779. Feb 10 23:45:29 [1205] vcsquorum crmd: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_TIMER_POPPED origin=crm_timer_popped ]
  780. Feb 10 23:45:29 [1205] vcsquorum crmd: info: do_te_control: Registering TE UUID: 4e8cb7a7-66f8-4877-98c7-2c096796e92d
  781. Feb 10 23:45:29 [1205] vcsquorum crmd: info: set_graph_functions: Setting custom graph functions
  782. Feb 10 23:45:29 [1205] vcsquorum crmd: info: do_dc_takeover: Taking over DC status for this partition
  783. Feb 10 23:45:29 [1199] vcsquorum cib: info: cib_process_readwrite: We are now in R/W mode
  784. Feb 10 23:45:29 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/6, version=2.101.12): OK (rc=0)
  785. Feb 10 23:45:29 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/7, version=2.101.13): OK (rc=0)
  786. Feb 10 23:45:29 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/9, version=2.101.14): OK (rc=0)
  787. Feb 10 23:45:29 [1205] vcsquorum crmd: info: join_make_offer: Making join offers based on membership 63852
  788. Feb 10 23:45:29 [1205] vcsquorum crmd: info: do_dc_join_offer_all: join-1: Waiting on 3 outstanding join acks
  789. Feb 10 23:45:29 [1205] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
  790. Feb 10 23:45:29 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/11, version=2.101.15): OK (rc=0)
  791. Feb 10 23:45:29 [1205] vcsquorum crmd: info: crm_update_peer_expected: do_dc_join_filter_offer: Node vcsquorum[755053578] - expected state is now member
  792. Feb 10 23:47:06 [1205] vcsquorum crmd: warning: crmd_ha_msg_filter: Another DC detected: vcs0 (op=join_offer)
  793. Feb 10 23:47:06 [1205] vcsquorum crmd: info: do_state_transition: State transition S_INTEGRATION -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=crmd_ha_msg_filter ]
  794. Feb 10 23:47:06 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 7 (current: 4, owner: 2868982794): Processed vote from vcs1 (Peer is not part of our cluster)
  795. Feb 10 23:47:06 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 10 (current: 4, owner: 2852205578): Processed vote from vcs0 (Peer is not part of our cluster)
  796. Feb 10 23:47:06 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 11 (current: 4, owner: 2852205578): Processed vote from vcs0 (Peer is not part of our cluster)
  797. Feb 10 23:47:06 [1205] vcsquorum crmd: warning: do_log: FSA: Input I_JOIN_REQUEST from route_message() received in state S_ELECTION
  798. Feb 10 23:47:06 [1205] vcsquorum crmd: warning: do_log: FSA: Input I_JOIN_REQUEST from route_message() received in state S_ELECTION
  799. Feb 10 23:47:06 [1199] vcsquorum cib: warning: cib_process_diff: Diff 2.101.15 -> 2.101.16 from vcs0 not applied to 2.101.16: current "num_updates" is greater than required
  800. Feb 10 23:47:06 [1199] vcsquorum cib: warning: cib_process_diff: Diff 2.101.16 -> 2.101.17 from vcs0 not applied to 2.101.17: current "num_updates" is greater than required
  801. Feb 10 23:47:06 [1199] vcsquorum cib: warning: cib_process_diff: Diff 2.101.17 -> 2.101.18 from vcs0 not applied to 2.101.18: current "num_updates" is greater than required
  802. Feb 10 23:47:06 [1199] vcsquorum cib: warning: cib_process_diff: Diff 2.101.18 -> 2.101.19 from vcs0 not applied to 2.101.19: current "num_updates" is greater than required
  803. Feb 10 23:49:06 [1205] vcsquorum crmd: error: crm_timer_popped: Election Timeout (I_ELECTION_DC) just popped in state S_ELECTION! (120000ms)
  804. Feb 10 23:49:06 [1205] vcsquorum crmd: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_TIMER_POPPED origin=crm_timer_popped ]
  805. Feb 10 23:49:06 [1205] vcsquorum crmd: info: do_dc_takeover: Taking over DC status for this partition
  806. Feb 10 23:49:06 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/14, version=2.101.20): OK (rc=0)
  807. Feb 10 23:49:06 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/15, version=2.101.21): OK (rc=0)
  808. Feb 10 23:49:06 [1199] vcsquorum cib: warning: cib_process_diff: Diff 2.101.19 -> 2.101.20 from vcs1 not applied to 2.101.21: current "num_updates" is greater than required
  809. Feb 10 23:49:06 [1199] vcsquorum cib: warning: cib_process_diff: Diff 2.101.20 -> 2.101.21 from vcs1 not applied to 2.101.21: current "num_updates" is greater than required
  810. Feb 10 23:49:06 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/17, version=2.101.28): OK (rc=0)
  811. Feb 10 23:49:06 [1205] vcsquorum crmd: info: do_dc_join_offer_all: join-2: Waiting on 3 outstanding join acks
  812. Feb 10 23:49:06 [1205] vcsquorum crmd: warning: crmd_ha_msg_filter: Another DC detected: vcs0 (op=join_offer)
  813. Feb 10 23:49:06 [1205] vcsquorum crmd: info: do_state_transition: State transition S_INTEGRATION -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=crmd_ha_msg_filter ]
  814. Feb 10 23:49:06 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 8 (current: 5, owner: 2868982794): Processed vote from vcs1 (Peer is not part of our cluster)
  815. Feb 10 23:49:06 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 12 (current: 5, owner: 2852205578): Processed vote from vcs0 (Peer is not part of our cluster)
  816. Feb 10 23:49:06 [1205] vcsquorum crmd: warning: do_log: FSA: Input I_JOIN_REQUEST from route_message() received in state S_ELECTION
  817. Feb 10 23:49:06 [1205] vcsquorum crmd: warning: do_log: FSA: Input I_JOIN_OFFER from route_message() received in state S_ELECTION
  818. Feb 10 23:49:06 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 13 (current: 6, owner: 2852205578): Processed vote from vcs0 (Peer is not part of our cluster)
  819. Feb 10 23:49:06 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/19, version=2.101.29): OK (rc=0)
  820. Feb 10 23:49:06 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 14 (current: 6, owner: 2852205578): Processed vote from vcs0 (Peer is not part of our cluster)
  821. Feb 10 23:51:06 [1205] vcsquorum crmd: info: crmd_ha_msg_filter: Another DC detected: vcs0 (op=join_offer)
  822. Feb 10 23:51:06 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 9 (current: 6, owner: 2868982794): Processed vote from vcs1 (Peer is not part of our cluster)
  823. Feb 10 23:51:06 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 15 (current: 6, owner: 2852205578): Processed vote from vcs0 (Peer is not part of our cluster)
  824. Feb 10 23:51:06 [1205] vcsquorum crmd: warning: do_log: FSA: Input I_JOIN_REQUEST from route_message() received in state S_ELECTION
  825. Feb 10 23:51:06 [1205] vcsquorum crmd: error: crm_timer_popped: Election Timeout (I_ELECTION_DC) just popped in state S_ELECTION! (120000ms)
  826. Feb 10 23:51:06 [1205] vcsquorum crmd: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_TIMER_POPPED origin=crm_timer_popped ]
  827. Feb 10 23:51:06 [1205] vcsquorum crmd: info: do_dc_takeover: Taking over DC status for this partition
  828. Feb 10 23:51:06 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/21, version=2.101.38): OK (rc=0)
  829. Feb 10 23:51:06 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/22, version=2.101.39): OK (rc=0)
  830. Feb 10 23:51:06 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/24, version=2.101.40): OK (rc=0)
  831. Feb 10 23:51:06 [1205] vcsquorum crmd: info: do_dc_join_offer_all: join-3: Waiting on 3 outstanding join acks
  832. Feb 10 23:51:06 [1205] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
  833. Feb 10 23:51:06 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/26, version=2.101.41): OK (rc=0)
  834. Feb 10 23:53:06 [1205] vcsquorum crmd: warning: crmd_ha_msg_filter: Another DC detected: vcs0 (op=join_offer)
  835. Feb 10 23:53:06 [1205] vcsquorum crmd: info: do_state_transition: State transition S_INTEGRATION -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=crmd_ha_msg_filter ]
  836. Feb 10 23:53:06 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 10 (current: 7, owner: 2868982794): Processed vote from vcs1 (Peer is not part of our cluster)
  837. Feb 10 23:53:06 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 7 (current: 7, owner: 755053578): Processed no-vote from vcs0 (Peer is not part of our cluster)
  838. Feb 10 23:55:06 [1205] vcsquorum crmd: error: crm_timer_popped: Election Timeout (I_ELECTION_DC) just popped in state S_ELECTION! (120000ms)
  839. Feb 10 23:55:06 [1205] vcsquorum crmd: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_TIMER_POPPED origin=crm_timer_popped ]
  840. Feb 10 23:55:06 [1205] vcsquorum crmd: info: do_dc_takeover: Taking over DC status for this partition
  841. Feb 10 23:55:06 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/29, version=2.101.50): OK (rc=0)
  842. Feb 10 23:55:06 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/30, version=2.101.51): OK (rc=0)
  843. Feb 10 23:55:06 [1199] vcsquorum cib: warning: cib_process_diff: Diff 2.101.49 -> 2.101.50 from vcs1 not applied to 2.101.51: current "num_updates" is greater than required
  844. Feb 10 23:55:06 [1199] vcsquorum cib: warning: cib_process_diff: Diff 2.101.50 -> 2.101.51 from vcs1 not applied to 2.101.51: current "num_updates" is greater than required
  845. Feb 10 23:55:06 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/32, version=2.101.54): OK (rc=0)
  846. Feb 10 23:55:06 [1205] vcsquorum crmd: info: do_dc_join_offer_all: join-4: Waiting on 3 outstanding join acks
  847. Feb 10 23:55:06 [1205] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
  848. Feb 10 23:55:06 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/34, version=2.101.55): OK (rc=0)
  849. Feb 10 23:58:06 [1205] vcsquorum crmd: error: crm_timer_popped: Integration Timer (I_INTEGRATED) just popped in state S_INTEGRATION! (180000ms)
  850. Feb 10 23:58:06 [1205] vcsquorum crmd: info: crm_timer_popped: Welcomed: 2, Integrated: 1
  851. Feb 10 23:58:06 [1205] vcsquorum crmd: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_TIMER_POPPED origin=crm_timer_popped ]
  852. Feb 10 23:58:06 [1205] vcsquorum crmd: warning: do_state_transition: Progressed to state S_FINALIZE_JOIN after C_TIMER_POPPED
  853. Feb 10 23:58:06 [1205] vcsquorum crmd: warning: do_state_transition: 2 cluster nodes failed to respond to the join offer.
  854. Feb 10 23:58:06 [1205] vcsquorum crmd: info: ghash_print_node: Welcome reply not received from: vcs1.example.com 4
  855. Feb 10 23:58:06 [1205] vcsquorum crmd: info: ghash_print_node: Welcome reply not received from: vcs0.example.com 4
  856. Feb 10 23:58:06 [1205] vcsquorum crmd: info: do_dc_join_finalize: join-4: Syncing the CIB from vcsquorum to the rest of the cluster
  857. Feb 10 23:58:06 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/37, version=2.101.55): OK (rc=0)
  858. Feb 10 23:58:06 [1205] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcsquorum']/transient_attributes
  859. Feb 10 23:58:06 [1205] vcsquorum crmd: info: update_attrd: Connecting to attrd... 5 retries remaining
  860. Feb 10 23:58:06 [1205] vcsquorum crmd: info: do_dc_join_ack: join-4: Updating node state to member for vcsquorum
  861. Feb 10 23:58:06 [1205] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcsquorum']/lrm
  862. Feb 10 23:58:06 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/38, version=2.101.56): OK (rc=0)
  863. Feb 10 23:58:06 [1199] vcsquorum cib: info: cib_process_replace: Digest matched on replace from vcs1: 09b83e9a5a32e2ad5270161ca7dc6a3c
  864. Feb 10 23:58:06 [1199] vcsquorum cib: warning: cib_process_replace: Replacement 2.101.55 from vcs1 not applied to 2.101.56: current num_updates is greater than the replacement
  865. Feb 10 23:58:06 [1199] vcsquorum cib: warning: cib_diff_notify: Update (client: crmd, call:54): 2.101.56 -> 2.101.55 (Update was older than existing configuration)
  866. Feb 10 23:58:06 [1199] vcsquorum cib: warning: cib_process_diff: Diff 2.101.55 -> 2.101.56 from vcs1 not applied to 2.101.56: current "num_updates" is greater than required
  867. Feb 10 23:58:06 [1199] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: --- 2.101.59
  868. Feb 10 23:58:06 [1199] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: +++ 2.102.1
  869. Feb 10 23:58:06 [1199] vcsquorum cib: notice: cib:diff: -- <node uname="vcsquorum" id="755053578" />
  870. Feb 10 23:58:06 [1199] vcsquorum cib: notice: cib:diff: ++ <node id="755053578" uname="vcsquorum.example.com" />
  871. Feb 10 23:58:06 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcsquorum']/transient_attributes (origin=local/crmd/39, version=2.102.4): OK (rc=0)
  872. Feb 10 23:58:06 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcsquorum']/lrm (origin=local/crmd/40, version=2.102.5): OK (rc=0)
  873. Feb 10 23:58:06 [1205] vcsquorum crmd: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
  874. Feb 10 23:58:06 [1205] vcsquorum crmd: warning: do_state_transition: Only 1 of 3 cluster nodes are eligible to run resources - continue 2
  875. Feb 10 23:58:06 [1205] vcsquorum crmd: info: abort_transition_graph: do_te_invoke:156 - Triggered transition abort (complete=1) : Peer Cancelled
  876. Feb 10 23:58:06 [1203] vcsquorum attrd: notice: attrd_local_callback: Sending full refresh (origin=crmd)
  877. Feb 10 23:58:06 [1205] vcsquorum crmd: info: abort_transition_graph: te_update_diff:126 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=2.103.1) : Non-status change
  878. Feb 10 23:58:07 [1199] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: --- 2.102.6
  879. Feb 10 23:58:07 [1199] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: +++ 2.103.1
  880. Feb 10 23:58:07 [1199] vcsquorum cib: notice: cib:diff: -- <node uname="vcsquorum.example.com" id="755053578" />
  881. Feb 10 23:58:07 [1199] vcsquorum cib: notice: cib:diff: ++ <node id="755053578" uname="vcsquorum" />
  882. Feb 10 23:58:07 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/42, version=2.103.1): OK (rc=0)
  883. Feb 10 23:58:07 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/44, version=2.103.3): OK (rc=0)
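
Note the node entry for id 755053578 flipping between the short name "vcsquorum" and the FQDN "vcsquorum.example.com" in the two diffs above; the same oscillation repeats throughout this log and keeps bumping the CIB version. A quick way to surface that kind of inconsistency from a list of names seen in the CIB (hypothetical helper):

    def short_name(name):
        """'vcsquorum.example.com' -> 'vcsquorum'."""
        return name.split(".", 1)[0]

    def find_name_conflicts(names_seen):
        """Group names by short form; more than one spelling is a conflict."""
        by_short = {}
        for name in names_seen:
            by_short.setdefault(short_name(name), set()).add(name)
        return {k: v for k, v in by_short.items() if len(v) > 1}

    print(find_name_conflicts(["vcsquorum", "vcsquorum.example.com", "vcs1"]))
    # -> {'vcsquorum': {'vcsquorum', 'vcsquorum.example.com'}}
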
  884. Feb 10 23:58:07 [1204] vcsquorum pengine: warning: custom_action: Action p_fs_vcs_monitor_0 on vcs1 is unrunnable (pending)
  885. Feb 10 23:58:07 [1204] vcsquorum pengine: warning: custom_action: Action p_daemon_svn_monitor_0 on vcs1 is unrunnable (pending)
  886. Feb 10 23:58:07 [1204] vcsquorum pengine: warning: custom_action: Action p_daemon_git-daemon_monitor_0 on vcs1 is unrunnable (pending)
  887. Feb 10 23:58:07 [1204] vcsquorum pengine: warning: custom_action: Action p_ip_vcs_monitor_0 on vcs1 is unrunnable (pending)
  888. Feb 10 23:58:07 [1204] vcsquorum pengine: warning: custom_action: Action p_drbd_vcs:0_monitor_0 on vcs1 is unrunnable (pending)
  889. Feb 10 23:58:07 [1204] vcsquorum pengine: warning: custom_action: Action p_ping:0_monitor_0 on vcs1 is unrunnable (pending)
  890. Feb 10 23:58:07 [1204] vcsquorum pengine: warning: custom_action: Action p_sysadmin_notify:0_monitor_0 on vcs1 is unrunnable (pending)
  891. Feb 10 23:58:07 [1204] vcsquorum pengine: warning: custom_action: Action stonithvcs1_stop_0 on vcs0 is unrunnable (pending)
  892. Feb 10 23:58:07 [1204] vcsquorum pengine: warning: custom_action: Action stonithvcs1_stop_0 on vcs0 is unrunnable (pending)
  893. Feb 10 23:58:07 [1204] vcsquorum pengine: warning: custom_action: Action stonithvcsquorum_stop_0 on vcs0 is unrunnable (pending)
  894. Feb 10 23:58:07 [1204] vcsquorum pengine: warning: custom_action: Action stonithvcsquorum_stop_0 on vcs0 is unrunnable (pending)
  895. Feb 10 23:58:07 [1204] vcsquorum pengine: notice: LogActions: Stop stonithvcs1 (vcs0)
  896. Feb 10 23:58:07 [1204] vcsquorum pengine: notice: LogActions: Stop stonithvcsquorum (vcs0)
  897. Feb 10 23:58:07 [1205] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
  898. Feb 10 23:58:07 [1205] vcsquorum crmd: info: do_te_invoke: Processing graph 0 (ref=pe_calc-dc-1360562287-25) derived from /var/lib/pacemaker/pengine/pe-input-872.bz2
  899. Feb 10 23:58:07 [1205] vcsquorum crmd: notice: run_graph: Transition 0 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-872.bz2): Complete
  900. Feb 10 23:58:07 [1205] vcsquorum crmd: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
  901. Feb 10 23:58:07 [1204] vcsquorum pengine: notice: process_pe_message: Calculated Transition 0: /var/lib/pacemaker/pengine/pe-input-872.bz2
  902. Feb 11 00:00:23 [1199] vcsquorum cib: warning: cib_process_diff: Diff 2.103.3 -> 2.103.4 from vcs1 not applied to 2.103.4: current "num_updates" is greater than required
  903. Feb 11 00:00:23 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 11 (current: 7, owner: 2868982794): Processed vote from vcs1 (Peer is not part of our cluster)
  904. Feb 11 00:00:23 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 16 (current: 7, owner: 2852205578): Processed vote from vcs0 (Peer is not part of our cluster)
  905. Feb 11 00:00:23 [1199] vcsquorum cib: info: cib_replace_notify: Replaced: 2.103.4 -> 2.103.5 from vcs0
  906. Feb 11 00:00:23 [1205] vcsquorum crmd: notice: do_state_transition: State transition S_IDLE -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
  907. Feb 11 00:00:23 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 8 (current: 8, owner: 755053578): Processed no-vote from vcs0 (Peer is not part of our cluster)
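
"Peer is not part of our cluster" means the vote arrived from a node that is not in the local membership, so it is logged and discarded rather than counted; with every remote vote discarded, the elections below can only end when their timers pop. A simplified model (not crmd's actual implementation):

    def process_vote(sender, membership, tally):
        """Count a vote only if the sender is in our membership."""
        if sender not in membership:
            print("Processed vote from %s (Peer is not part of our cluster)"
                  % sender)
            return tally
        tally[sender] = tally.get(sender, 0) + 1
        return tally

    tally = {}
    process_vote("vcs1", {"vcsquorum"}, tally)       # discarded
    process_vote("vcsquorum", {"vcsquorum"}, tally)  # counted
    print(tally)                                     # {'vcsquorum': 1}
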
  908. Feb 11 00:00:23 [1199] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: --- 2.103.4
  909. Feb 11 00:00:23 [1199] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: +++ 2.103.5
  910. Feb 11 00:00:23 [1199] vcsquorum cib: notice: cib:diff: -- <cib num_updates="4" />
  911. Feb 11 00:00:23 [1199] vcsquorum cib: notice: cib:diff: ++ <cib epoch="103" num_updates="5" admin_epoch="2" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="vcsquorum" update-client="crmd" cib-last-written="Sun Feb 10 23:58:06 2013" have-quorum="1" dc-uuid="755053578" />
  912. Feb 11 00:00:23 [1199] vcsquorum cib: warning: cib_process_diff: Diff 2.103.4 -> 2.103.5 from vcs1 not applied to 2.103.5: current "num_updates" is greater than required
  913. Feb 11 00:00:23 [1199] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: --- 2.103.5
  914. Feb 11 00:00:23 [1199] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: +++ 2.104.1
  915. Feb 11 00:00:23 [1199] vcsquorum cib: notice: cib:diff: -- <node uname="vcsquorum" id="755053578" />
  916. Feb 11 00:00:23 [1199] vcsquorum cib: notice: cib:diff: ++ <node id="755053578" uname="vcsquorum.example.com" />
  917. Feb 11 00:00:23 [1199] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: --- 2.104.2
  918. Feb 11 00:00:23 [1199] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: +++ 2.105.1
  919. Feb 11 00:00:23 [1199] vcsquorum cib: notice: cib:diff: -- <node uname="vcsquorum.example.com" id="755053578" />
  920. Feb 11 00:00:23 [1199] vcsquorum cib: notice: cib:diff: ++ <node id="755053578" uname="vcsquorum" />
  921. Feb 11 00:00:23 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/47, version=2.105.1): OK (rc=0)
  922. Feb 11 00:00:23 [1203] vcsquorum attrd: warning: attrd_cib_callback: Update shutdown=(null) failed: No such device or address
  923. Feb 11 00:00:23 [1203] vcsquorum attrd: warning: attrd_cib_callback: Update terminate=(null) failed: No such device or address
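
"No such device or address" is strerror(ENXIO), the error the attrd update calls keep returning here, consistent with the target node's status section being absent from the CIB. Quick check of the errno mapping (on Linux):

    import errno, os
    print(errno.ENXIO, os.strerror(errno.ENXIO))  # 6 No such device or address
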
  924. Feb 11 00:00:37 [1199] vcsquorum cib: warning: cib_process_diff: Diff 2.105.2 -> 2.105.3 from vcs1 not applied to 2.105.3: current "num_updates" is greater than required
  925. Feb 11 00:00:37 [1199] vcsquorum cib: info: cib_replace_notify: Replaced: 2.105.3 -> 2.105.4 from vcs0
  926. Feb 11 00:00:37 [1199] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: --- 2.105.3
  927. Feb 11 00:00:37 [1199] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: +++ 2.105.4
  928. Feb 11 00:00:37 [1199] vcsquorum cib: notice: cib:diff: -- <cib num_updates="3" />
  929. Feb 11 00:00:37 [1199] vcsquorum cib: notice: cib:diff: ++ <cib epoch="105" num_updates="4" admin_epoch="2" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="vcsquorum" update-client="crmd" cib-last-written="Mon Feb 11 00:00:23 2013" have-quorum="1" dc-uuid="755053578" />
  930. Feb 11 00:00:37 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 9 (current: 9, owner: 755053578): Processed no-vote from vcs0 (Peer is not part of our cluster)
  931. Feb 11 00:00:37 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 12 (current: 9, owner: 2868982794): Processed vote from vcs1 (Peer is not part of our cluster)
  932. Feb 11 00:00:37 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/49, version=2.105.5): OK (rc=0)
  933. Feb 11 00:00:37 [1203] vcsquorum attrd: warning: attrd_cib_callback: Update shutdown=(null) failed: No such device or address
  934. Feb 11 00:00:37 [1199] vcsquorum cib: warning: cib_process_diff: Diff 2.105.3 -> 2.105.4 from vcs1 not applied to 2.105.6: current "num_updates" is greater than required
  935. Feb 11 00:00:37 [1199] vcsquorum cib: warning: cib_process_diff: Diff 2.105.4 -> 2.106.1 from vcs1 not applied to 2.105.6: current "num_updates" is greater than required
  936. Feb 11 00:00:37 [1199] vcsquorum cib: info: cib_process_diff: Diff 2.106.1 -> 2.106.2 from vcs1 not applied to 2.105.6: current "epoch" is less than required
  937. Feb 11 00:00:37 [1199] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
  938. Feb 11 00:00:37 [1203] vcsquorum attrd: warning: attrd_cib_callback: Update terminate=(null) failed: No such device or address
  939. Feb 11 00:00:56 [1199] vcsquorum cib: info: cib_process_diff: Diff 2.106.2 -> 2.106.3 from vcs1 not applied to 2.105.7: current "epoch" is less than required
  940. Feb 11 00:00:56 [1199] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
  941. Feb 11 00:00:56 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 13 (current: 9, owner: 2868982794): Processed vote from vcs1 (Peer is not part of our cluster)
  942. Feb 11 00:00:56 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 17 (current: 9, owner: 2852205578): Processed vote from vcs0 (Peer is not part of our cluster)
  943. Feb 11 00:00:56 [1199] vcsquorum cib: info: cib_replace_notify: Replaced: 2.105.7 -> 2.105.8 from vcs0
  944. Feb 11 00:00:56 [1199] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: --- 2.105.7
  945. Feb 11 00:00:56 [1199] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: +++ 2.105.8
  946. Feb 11 00:00:56 [1199] vcsquorum cib: notice: cib:diff: -- <cib num_updates="7" />
  947. Feb 11 00:00:56 [1199] vcsquorum cib: notice: cib:diff: ++ <cib epoch="105" num_updates="8" admin_epoch="2" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="vcsquorum" update-client="crmd" cib-last-written="Mon Feb 11 00:00:23 2013" have-quorum="1" dc-uuid="755053578" />
  948. Feb 11 00:00:56 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 10 (current: 10, owner: 755053578): Processed no-vote from vcs0 (Peer is not part of our cluster)
  949. Feb 11 00:00:56 [1199] vcsquorum cib: info: cib_process_diff: Diff 2.106.3 -> 2.106.4 from vcs1 not applied to 2.105.8: current "epoch" is less than required
  950. Feb 11 00:00:56 [1199] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
  951. Feb 11 00:00:56 [1199] vcsquorum cib: info: cib_process_diff: Diff 2.106.4 -> 2.106.5 from vcs1 not applied to 2.105.8: current "epoch" is less than required
  952. Feb 11 00:00:56 [1199] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
  953. Feb 11 00:00:56 [1199] vcsquorum cib: info: cib_process_diff: Diff 2.106.5 -> 2.106.6 from vcs1 not applied to 2.105.8: current "epoch" is less than required
  954. Feb 11 00:00:56 [1199] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
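
The repeated pairs above show the same decision each time: a diff that cannot be applied would normally trigger a full re-sync from the peer, but a CIB in R/W (master) mode considers itself authoritative and skips the refresh. Condensed (illustrative, not the cib daemon's code):

    def on_diff_not_applied(local_is_rw):
        """Decide whether to pull a full CIB copy after a diff fails to apply."""
        if local_is_rw:
            return "Not requesting full refresh in R/W mode"
        return "Requesting re-sync from peer"  # see the restart at 00:26:50

    print(on_diff_not_applied(True))
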
  955. Feb 11 00:00:57 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/51, version=2.105.9): OK (rc=0)
  956. Feb 11 00:00:57 [1203] vcsquorum attrd: warning: attrd_cib_callback: Update shutdown=(null) failed: No such device or address
  957. Feb 11 00:00:57 [1203] vcsquorum attrd: warning: attrd_cib_callback: Update terminate=(null) failed: No such device or address
  958. Feb 11 00:01:06 [1199] vcsquorum cib: info: cib_process_diff: Diff 2.106.6 -> 2.106.7 from vcs1 not applied to 2.105.11: current "epoch" is less than required
  959. Feb 11 00:01:06 [1199] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
  960. Feb 11 00:01:06 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 14 (current: 10, owner: 2868982794): Processed vote from vcs1 (Peer is not part of our cluster)
  961. Feb 11 00:01:06 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 18 (current: 10, owner: 2852205578): Processed vote from vcs0 (Peer is not part of our cluster)
  962. Feb 11 00:01:06 [1199] vcsquorum cib: info: cib_replace_notify: Replaced: 2.105.11 -> 2.105.12 from vcs0
  963. Feb 11 00:01:06 [1199] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: --- 2.105.11
  964. Feb 11 00:01:06 [1199] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: +++ 2.105.12
  965. Feb 11 00:01:06 [1199] vcsquorum cib: notice: cib:diff: -- <cib num_updates="11" />
  966. Feb 11 00:01:06 [1199] vcsquorum cib: notice: cib:diff: ++ <cib epoch="105" num_updates="12" admin_epoch="2" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="vcsquorum" update-client="crmd" cib-last-written="Mon Feb 11 00:00:23 2013" have-quorum="1" dc-uuid="755053578" />
  967. Feb 11 00:01:06 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 11 (current: 11, owner: 755053578): Processed no-vote from vcs0 (Peer is not part of our cluster)
  968. Feb 11 00:01:06 [1199] vcsquorum cib: info: cib_process_diff: Diff 2.106.7 -> 2.106.8 from vcs1 not applied to 2.105.12: current "epoch" is less than required
  969. Feb 11 00:01:06 [1199] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
  970. Feb 11 00:01:06 [1199] vcsquorum cib: info: cib_process_diff: Diff 2.106.8 -> 2.106.9 from vcs1 not applied to 2.105.12: current "epoch" is less than required
  971. Feb 11 00:01:06 [1199] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
  972. Feb 11 00:01:06 [1199] vcsquorum cib: info: cib_process_diff: Diff 2.106.9 -> 2.106.10 from vcs1 not applied to 2.105.12: current "epoch" is less than required
  973. Feb 11 00:01:06 [1199] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
  974. Feb 11 00:01:06 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/53, version=2.105.13): OK (rc=0)
  975. Feb 11 00:01:06 [1203] vcsquorum attrd: warning: attrd_cib_callback: Update shutdown=(null) failed: No such device or address
  976. Feb 11 00:01:06 [1203] vcsquorum attrd: warning: attrd_cib_callback: Update terminate=(null) failed: No such device or address
  977. Feb 11 00:01:26 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 19 (current: 11, owner: 2852205578): Processed vote from vcs0 (Peer is not part of our cluster)
  978. Feb 11 00:02:23 [1199] vcsquorum cib: info: cib_process_diff: Diff 2.106.10 -> 2.106.11 from vcs1 not applied to 2.105.14: current "epoch" is less than required
  979. Feb 11 00:02:23 [1205] vcsquorum crmd: error: crm_timer_popped: Election Timeout (I_ELECTION_DC) just popped in state S_ELECTION! (120000ms)
  980. Feb 11 00:02:23 [1205] vcsquorum crmd: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_TIMER_POPPED origin=crm_timer_popped ]
  981. Feb 11 00:02:23 [1205] vcsquorum crmd: info: do_dc_takeover: Taking over DC status for this partition
  982. Feb 11 00:02:23 [1199] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
  983. Feb 11 00:02:23 [1199] vcsquorum cib: info: cib_process_diff: Diff 2.106.11 -> 2.106.12 from vcs1 not applied to 2.105.14: current "epoch" is less than required
  984. Feb 11 00:02:23 [1199] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
  985. Feb 11 00:02:23 [1199] vcsquorum cib: info: cib_process_diff: Diff 2.106.12 -> 2.106.13 from vcs1 not applied to 2.105.14: current "epoch" is less than required
  986. Feb 11 00:02:23 [1199] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
  987. Feb 11 00:02:23 [1199] vcsquorum cib: info: cib_process_diff: Diff 2.106.13 -> 2.106.14 from vcs1 not applied to 2.105.14: current "epoch" is less than required
  988. Feb 11 00:02:23 [1199] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
  989. Feb 11 00:02:23 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/55, version=2.105.15): OK (rc=0)
  990. Feb 11 00:02:23 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/56, version=2.105.16): OK (rc=0)
  991. Feb 11 00:02:23 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/58, version=2.105.17): OK (rc=0)
  992. Feb 11 00:02:23 [1205] vcsquorum crmd: info: do_dc_join_offer_all: join-5: Waiting on 3 outstanding join acks
  993. Feb 11 00:02:23 [1205] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
  994. Feb 11 00:02:23 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/60, version=2.105.18): OK (rc=0)
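
The takeover above is timer-driven: the 120000ms election timer pops in S_ELECTION, the resulting I_ELECTION_DC input moves the FSA to S_INTEGRATION, and the node assumes DC; the 180000ms integration timer later forces S_INTEGRATION -> S_FINALIZE_JOIN the same way. A toy version of that state table (not crmd's):

    TRANSITIONS = {
        ("S_ELECTION", "I_ELECTION_DC"): "S_INTEGRATION",
        ("S_INTEGRATION", "I_INTEGRATED"): "S_FINALIZE_JOIN",
        ("S_FINALIZE_JOIN", "I_FINALIZED"): "S_POLICY_ENGINE",
    }

    def timer_popped(state, input_):
        new_state = TRANSITIONS[(state, input_)]
        print("State transition %s -> %s [ input=%s cause=C_TIMER_POPPED ]"
              % (state, new_state, input_))
        return new_state

    state = timer_popped("S_ELECTION", "I_ELECTION_DC")
    state = timer_popped(state, "I_INTEGRATED")
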
  995. Feb 11 00:03:26 [1199] vcsquorum cib: info: cib_process_diff: Diff 2.106.14 -> 2.106.15 from vcs0 not applied to 2.105.18: current "epoch" is less than required
  996. Feb 11 00:03:26 [1199] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
  997. Feb 11 00:03:26 [1205] vcsquorum crmd: warning: crmd_ha_msg_filter: Another DC detected: vcs0 (op=join_offer)
  998. Feb 11 00:03:26 [1205] vcsquorum crmd: info: do_state_transition: State transition S_INTEGRATION -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=crmd_ha_msg_filter ]
  999. Feb 11 00:03:26 [1199] vcsquorum cib: info: cib_process_diff: Diff 2.106.15 -> 2.106.16 from vcs0 not applied to 2.105.18: current "epoch" is less than required
  1000. Feb 11 00:03:26 [1199] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
  1001. Feb 11 00:03:26 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 15 (current: 12, owner: 2868982794): Processed vote from vcs1 (Peer is not part of our cluster)
  1002. Feb 11 00:03:26 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 12 (current: 12, owner: 755053578): Processed no-vote from vcs0 (Peer is not part of our cluster)
  1003. Feb 11 00:03:26 [1205] vcsquorum crmd: warning: do_log: FSA: Input I_JOIN_REQUEST from route_message() received in state S_ELECTION
  1004. Feb 11 00:03:26 [1199] vcsquorum cib: info: cib_process_diff: Diff 2.106.16 -> 2.106.17 from vcs0 not applied to 2.105.18: current "epoch" is less than required
  1005. Feb 11 00:03:26 [1199] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
  1006. Feb 11 00:03:26 [1199] vcsquorum cib: info: cib_process_diff: Diff 2.106.17 -> 2.106.18 from vcs0 not applied to 2.105.18: current "epoch" is less than required
  1007. Feb 11 00:03:26 [1199] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
  1008. Feb 11 00:05:26 [1205] vcsquorum crmd: error: crm_timer_popped: Election Timeout (I_ELECTION_DC) just popped in state S_ELECTION! (120000ms)
  1009. Feb 11 00:05:26 [1205] vcsquorum crmd: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_TIMER_POPPED origin=crm_timer_popped ]
  1010. Feb 11 00:05:26 [1205] vcsquorum crmd: info: do_dc_takeover: Taking over DC status for this partition
  1011. Feb 11 00:05:26 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/63, version=2.105.19): OK (rc=0)
  1012. Feb 11 00:05:26 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/64, version=2.105.20): OK (rc=0)
  1013. Feb 11 00:05:26 [1199] vcsquorum cib: info: cib_process_diff: Diff 2.106.18 -> 2.106.19 from vcs1 not applied to 2.105.20: current "epoch" is less than required
  1014. Feb 11 00:05:26 [1199] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
  1015. Feb 11 00:05:26 [1199] vcsquorum cib: info: cib_process_diff: Diff 2.106.19 -> 2.106.20 from vcs1 not applied to 2.105.20: current "epoch" is less than required
  1016. Feb 11 00:05:26 [1199] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
  1017. Feb 11 00:05:26 [1199] vcsquorum cib: info: cib_process_diff: Diff 2.106.20 -> 2.106.21 from vcs1 not applied to 2.105.20: current "epoch" is less than required
  1018. Feb 11 00:05:26 [1199] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
  1019. Feb 11 00:05:26 [1199] vcsquorum cib: info: cib_process_diff: Diff 2.106.21 -> 2.106.22 from vcs1 not applied to 2.105.20: current "epoch" is less than required
  1020. Feb 11 00:05:26 [1199] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
  1021. Feb 11 00:05:26 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/66, version=2.105.21): OK (rc=0)
  1022. Feb 11 00:05:26 [1205] vcsquorum crmd: info: do_dc_join_offer_all: join-6: Waiting on 3 outstanding join acks
  1023. Feb 11 00:05:26 [1205] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
  1024. Feb 11 00:05:26 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/68, version=2.105.22): OK (rc=0)
  1025. Feb 11 00:06:47 [1205] vcsquorum crmd: info: handle_shutdown_request: Creating shutdown request for vcs0 (state=S_INTEGRATION)
  1026. Feb 11 00:06:47 [1203] vcsquorum attrd: warning: get_corosync_uuid: Node vcs0 is not yet known by corosync
  1027. Feb 11 00:06:47 [1203] vcsquorum attrd: warning: crm_get_peer: Cannot obtain a UUID for node 0/vcs0
  1028. Feb 11 00:06:47 [1199] vcsquorum cib: info: cib_process_diff: Diff 2.106.22 -> 2.106.23 from vcs1 not applied to 2.105.24: current "epoch" is less than required
  1029. Feb 11 00:06:47 [1199] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
  1030. Feb 11 00:06:47 [1199] vcsquorum cib: info: cib_process_diff: Diff 2.106.23 -> 2.106.24 from vcs1 not applied to 2.105.24: current "epoch" is less than required
  1031. Feb 11 00:06:47 [1199] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
  1032. Feb 11 00:06:47 [1203] vcsquorum attrd: warning: attrd_cib_callback: Update shutdown=(null) failed: No such device or address
  1033. Feb 11 00:06:47 [1203] vcsquorum attrd: warning: attrd_cib_callback: Update shutdown=(null) failed: No such device or address
  1034. Feb 11 00:08:26 [1205] vcsquorum crmd: error: crm_timer_popped: Integration Timer (I_INTEGRATED) just popped in state S_INTEGRATION! (180000ms)
  1035. Feb 11 00:08:26 [1205] vcsquorum crmd: info: crm_timer_popped: Welcomed: 2, Integrated: 1
  1036. Feb 11 00:08:26 [1205] vcsquorum crmd: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_TIMER_POPPED origin=crm_timer_popped ]
  1037. Feb 11 00:08:26 [1205] vcsquorum crmd: warning: do_state_transition: Progressed to state S_FINALIZE_JOIN after C_TIMER_POPPED
  1038. Feb 11 00:08:26 [1205] vcsquorum crmd: warning: do_state_transition: 2 cluster nodes failed to respond to the join offer.
  1039. Feb 11 00:08:26 [1205] vcsquorum crmd: info: ghash_print_node: Welcome reply not received from: vcs1.example.com 6
  1040. Feb 11 00:08:26 [1205] vcsquorum crmd: info: ghash_print_node: Welcome reply not received from: vcs0.example.com 6
  1041. Feb 11 00:08:26 [1205] vcsquorum crmd: info: do_dc_join_finalize: join-6: Syncing the CIB from vcsquorum to the rest of the cluster
  1042. Feb 11 00:08:26 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/71, version=2.105.24): OK (rc=0)
  1043. Feb 11 00:08:26 [1205] vcsquorum crmd: info: do_dc_join_ack: join-6: Updating node state to member for vcsquorum
  1044. Feb 11 00:08:26 [1205] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcsquorum']/lrm
  1045. Feb 11 00:08:26 [1199] vcsquorum cib: info: cib_process_replace: Digest matched on replace from vcs1: 34fe9008b3da5710da50d9618da3bfd5
  1046. Feb 11 00:08:26 [1199] vcsquorum cib: info: cib_process_replace: Replaced 2.105.24 with 2.106.24 from vcs1
  1047. Feb 11 00:08:26 [1199] vcsquorum cib: info: cib_replace_notify: Local-only Replace: 2.106.24 from vcs1
  1048. Feb 11 00:08:26 [1205] vcsquorum crmd: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
  1049. Feb 11 00:08:26 [1199] vcsquorum cib: notice: log_cib_diff: cib:diff: Local-only Change: 2.106.24
  1050. Feb 11 00:08:26 [1199] vcsquorum cib: notice: cib:diff: -- <node uname="vcsquorum" id="755053578" />
  1051. Feb 11 00:08:26 [1199] vcsquorum cib: notice: cib:diff: -- <node_state join="down" id="2868982794" />
  1052. Feb 11 00:08:26 [1199] vcsquorum cib: notice: cib:diff: -- <node_state uname="vcsquorum" join="member" id="755053578" />
  1053. Feb 11 00:08:26 [1199] vcsquorum cib: notice: cib:diff: ++ <node id="755053578" uname="vcsquorum.example.com" />
  1054. Feb 11 00:08:26 [1199] vcsquorum cib: notice: cib:diff: ++ <node_state id="2868982794" uname="vcs1" in_ccm="true" crmd="online" join="member" crm-debug-origin="do_cib_replaced" expected="member" />
  1055. Feb 11 00:08:26 [1199] vcsquorum cib: notice: cib:diff: ++ <node_state id="755053578" uname="vcsquorum.example.com" in_ccm="true" crmd="online" join="down" crm-debug-origin="do_cib_replaced" expected="member" />
  1056. Feb 11 00:08:26 [1205] vcsquorum crmd: notice: do_election_count_vote: Election 13 (current: 13, owner: 755053578): Processed no-vote from vcs0 (Peer is not part of our cluster)
  1057. Feb 11 00:08:26 [1199] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: --- 2.106.30
  1058. Feb 11 00:08:26 [1199] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: +++ 2.107.1
  1059. Feb 11 00:08:26 [1199] vcsquorum cib: notice: cib:diff: -- <node uname="vcsquorum.example.com" id="755053578" />
  1060. Feb 11 00:08:26 [1199] vcsquorum cib: notice: cib:diff: ++ <node id="755053578" uname="vcsquorum" />
  1061. Feb 11 00:08:26 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/72, version=2.107.1): OK (rc=0)
  1062. Feb 11 00:08:26 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcsquorum']/lrm (origin=local/crmd/73, version=2.107.2): OK (rc=0)
  1063. Feb 11 00:08:26 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/75, version=2.107.4): OK (rc=0)
  1064. Feb 11 00:08:26 [1203] vcsquorum attrd: warning: attrd_cib_callback: Update shutdown=(null) failed: No such device or address
  1065. Feb 11 00:08:26 [1203] vcsquorum attrd: warning: attrd_cib_callback: Update terminate=(null) failed: No such device or address
  1066. Feb 11 00:08:27 [1205] vcsquorum crmd: info: pcmk_cpg_membership: Left[1.0] crmd.-1442761718
  1067. Feb 11 00:08:27 [1205] vcsquorum crmd: info: crm_update_peer_proc: pcmk_cpg_membership: Node vcs0[-1442761718] - corosync-cpg is now offline
  1068. Feb 11 00:08:27 [1205] vcsquorum crmd: info: peer_update_callback: Client vcs0/peer now has status [offline] (DC=true)
  1069. Feb 11 00:08:27 [1205] vcsquorum crmd: warning: match_down_event: No match for shutdown action on 2852205578
  1070. Feb 11 00:08:27 [1205] vcsquorum crmd: notice: peer_update_callback: Stonith/shutdown of vcs0 not matched
  1071. Feb 11 00:08:27 [1205] vcsquorum crmd: info: crm_update_peer_expected: peer_update_callback: Node vcs0[-1442761718] - expected state is now down
  1072. Feb 11 00:08:27 [1205] vcsquorum crmd: info: abort_transition_graph: peer_update_callback:211 - Triggered transition abort (complete=1) : Node failure
  1073. Feb 11 00:08:27 [1205] vcsquorum crmd: info: pcmk_cpg_membership: Member[1.0] crmd.755053578
  1074. Feb 11 00:08:27 [1205] vcsquorum crmd: info: pcmk_cpg_membership: Member[1.1] crmd.-1425984502
  1075. Feb 11 00:08:27 [1201] vcsquorum stonith-ng: info: pcmk_cpg_membership: Left[1.0] stonith-ng.-1442761718
  1076. Feb 11 00:08:27 [1201] vcsquorum stonith-ng: info: crm_update_peer_proc: pcmk_cpg_membership: Node vcs0[-1442761718] - corosync-cpg is now offline
  1077. Feb 11 00:08:27 [1201] vcsquorum stonith-ng: info: pcmk_cpg_membership: Member[1.0] stonith-ng.755053578
  1078. Feb 11 00:08:27 [1201] vcsquorum stonith-ng: info: pcmk_cpg_membership: Member[1.1] stonith-ng.-1425984502
  1079. Feb 11 00:08:27 [1199] vcsquorum cib: info: pcmk_cpg_membership: Left[1.0] cib.-1442761718
  1080. Feb 11 00:08:27 [1199] vcsquorum cib: info: crm_update_peer_proc: pcmk_cpg_membership: Node vcs0[-1442761718] - corosync-cpg is now offline
  1081. Feb 11 00:08:27 [1199] vcsquorum cib: info: pcmk_cpg_membership: Member[1.0] cib.755053578
  1082. Feb 11 00:08:27 [1199] vcsquorum cib: info: pcmk_cpg_membership: Member[1.1] cib.-1425984502
  1083. Feb 11 00:10:26 [1205] vcsquorum crmd: error: crm_timer_popped: Election Timeout (I_ELECTION_DC) just popped in state S_ELECTION! (120000ms)
  1084. Feb 11 00:10:26 [1205] vcsquorum crmd: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_TIMER_POPPED origin=crm_timer_popped ]
  1085. Feb 11 00:10:26 [1205] vcsquorum crmd: info: do_dc_takeover: Taking over DC status for this partition
  1086. Feb 11 00:10:26 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/78, version=2.107.10): OK (rc=0)
  1087. Feb 11 00:10:26 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/79, version=2.107.11): OK (rc=0)
  1088. Feb 11 00:10:26 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/81, version=2.107.12): OK (rc=0)
  1089. Feb 11 00:10:26 [1205] vcsquorum crmd: info: do_dc_join_offer_all: join-7: Waiting on 3 outstanding join acks
  1090. Feb 11 00:10:26 [1205] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
  1091. Feb 11 00:10:26 [1199] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/83, version=2.107.13): OK (rc=0)
  1092. Feb 11 00:11:40 [1197] vcsquorum pacemakerd: info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
  1093. Feb 11 00:11:40 [1197] vcsquorum pacemakerd: notice: pcmk_shutdown_worker: Shutting down Pacemaker
  1094. Feb 11 00:11:40 [1197] vcsquorum pacemakerd: notice: stop_child: Stopping crmd: Sent -15 to process 1205
  1095. Feb 11 00:11:40 [1205] vcsquorum crmd: info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
  1096. Feb 11 00:11:40 [1205] vcsquorum crmd: notice: crm_shutdown: Requesting shutdown, upper limit is 1200000ms
  1097. Feb 11 00:11:40 [1205] vcsquorum crmd: info: do_shutdown_req: Sending shutdown request to vcsquorum
  1098. Feb 11 00:11:40 [1205] vcsquorum crmd: info: handle_shutdown_request: Creating shutdown request for vcsquorum (state=S_INTEGRATION)
  1099. Feb 11 00:11:40 [1203] vcsquorum attrd: notice: attrd_trigger_update: Sending flush op to all hosts for: shutdown (1360563100)
  1100. Feb 11 00:11:40 [1203] vcsquorum attrd: notice: attrd_perform_update: Sent update 16: shutdown=1360563100
  1101. Feb 11 00:11:40 [1203] vcsquorum attrd: notice: attrd_ais_dispatch: Update relayed from vcs1
  1102. Feb 11 00:11:40 [1203] vcsquorum attrd: notice: attrd_trigger_update: Sending flush op to all hosts for: shutdown (1360563100)
  1103. Feb 11 00:11:40 [1203] vcsquorum attrd: notice: attrd_perform_update: Sent update 18: shutdown=1360563100
  1104. Feb 11 00:11:40 [1205] vcsquorum crmd: info: handle_request: Shutting ourselves down (DC)
  1105. Feb 11 00:11:40 [1205] vcsquorum crmd: warning: do_log: FSA: Input I_STOP from route_message() received in state S_INTEGRATION
  1106. Feb 11 00:11:40 [1205] vcsquorum crmd: info: do_state_transition: State transition S_INTEGRATION -> S_STOPPING [ input=I_STOP cause=C_HA_MESSAGE origin=route_message ]
  1107. Feb 11 00:11:40 [1205] vcsquorum crmd: info: do_dc_release: DC role released
  1108. Feb 11 00:11:40 [1205] vcsquorum crmd: info: pe_ipc_destroy: Connection to the Policy Engine released
  1109. Feb 11 00:11:40 [1205] vcsquorum crmd: info: do_te_control: Transitioner is now inactive
  1110. Feb 11 00:11:40 [1205] vcsquorum crmd: info: do_shutdown: Disconnecting STONITH...
  1111. Feb 11 00:11:40 [1205] vcsquorum crmd: info: tengine_stonith_connection_destroy: Fencing daemon disconnected
  1112. Feb 11 00:11:40 [1205] vcsquorum crmd: info: lrmd_api_disconnect: Disconnecting from lrmd service
  1113. Feb 11 00:11:40 [1205] vcsquorum crmd: info: lrmd_connection_destroy: connection destroyed
  1114. Feb 11 00:11:40 [1205] vcsquorum crmd: info: lrm_connection_destroy: LRM Connection disconnected
  1115. Feb 11 00:11:40 [1205] vcsquorum crmd: info: do_lrm_control: Disconnected from the LRM
  1116. Feb 11 00:11:40 [1205] vcsquorum crmd: info: crm_cluster_disconnect: Disconnecting from cluster infrastructure: corosync
  1117. Feb 11 00:11:40 [1202] vcsquorum lrmd: info: lrmd_ipc_destroy: LRMD client disconnecting 0x2244e00 - name: crmd id: 7be9e918-747d-4ec8-9fdb-9537fb2d250f
  1118. Feb 11 00:11:40 [1205] vcsquorum crmd: notice: terminate_cs_connection: Disconnecting from Corosync
  1119. Feb 11 00:11:40 [1205] vcsquorum crmd: info: crm_cluster_disconnect: Disconnected from corosync
  1120. Feb 11 00:11:40 [1205] vcsquorum crmd: info: do_ha_control: Disconnected from the cluster
  1121. Feb 11 00:11:40 [1205] vcsquorum crmd: info: do_cib_control: Disconnecting CIB
  1122. Feb 11 00:11:40 [1205] vcsquorum crmd: info: crmd_cib_connection_destroy: Connection to the CIB terminated...
  1123. Feb 11 00:11:40 [1205] vcsquorum crmd: info: qb_ipcs_us_withdraw: withdrawing server sockets
  1124. Feb 11 00:11:40 [1205] vcsquorum crmd: info: do_exit: Performing A_EXIT_0 - gracefully exiting the CRMd
  1125. Feb 11 00:11:40 [1205] vcsquorum crmd: info: do_exit: [crmd] stopped (0)
  1126. Feb 11 00:11:40 [1205] vcsquorum crmd: info: free_mem: Dropping I_RELEASE_SUCCESS: [ state=S_STOPPING cause=C_FSA_INTERNAL origin=do_dc_release ]
  1127. Feb 11 00:11:40 [1205] vcsquorum crmd: info: free_mem: Dropping I_TERMINATE: [ state=S_STOPPING cause=C_FSA_INTERNAL origin=do_stop ]
  1128. Feb 11 00:11:40 [1205] vcsquorum crmd: info: lrmd_api_disconnect: Disconnecting from lrmd service
  1129. Feb 11 00:11:40 [1205] vcsquorum crmd: info: crm_xml_cleanup: Cleaning up memory from libxml2
  1130. Feb 11 00:11:40 [1197] vcsquorum pacemakerd: info: pcmk_child_exit: Child process crmd exited (pid=1205, rc=0)
  1131. Feb 11 00:11:40 [1197] vcsquorum pacemakerd: notice: stop_child: Stopping pengine: Sent -15 to process 1204
  1132. Feb 11 00:11:40 [1197] vcsquorum pacemakerd: info: pcmk_child_exit: Child process pengine exited (pid=1204, rc=0)
  1133. Feb 11 00:11:40 [1197] vcsquorum pacemakerd: notice: stop_child: Stopping attrd: Sent -15 to process 1203
  1134. Feb 11 00:11:40 [1203] vcsquorum attrd: notice: main: Exiting...
  1135. Feb 11 00:11:40 [1203] vcsquorum attrd: notice: main: Disconnecting client 0x1168af0, pid=1205...
  1136. Feb 11 00:11:40 [1197] vcsquorum pacemakerd: info: pcmk_child_exit: Child process attrd exited (pid=1203, rc=0)
  1137. Feb 11 00:11:40 [1197] vcsquorum pacemakerd: notice: stop_child: Stopping lrmd: Sent -15 to process 1202
  1138. Feb 11 00:11:40 [1202] vcsquorum lrmd: info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
  1139. Feb 11 00:11:40 [1202] vcsquorum lrmd: info: lrmd_shutdown: Terminating with 0 clients
  1140. Feb 11 00:11:40 [1202] vcsquorum lrmd: info: qb_ipcs_us_withdraw: withdrawing server sockets
  1141. Feb 11 00:11:40 [1197] vcsquorum pacemakerd: info: pcmk_child_exit: Child process lrmd exited (pid=1202, rc=0)
  1142. Feb 11 00:11:40 [1197] vcsquorum pacemakerd: notice: stop_child: Stopping stonith-ng: Sent -15 to process 1201
  1143. Feb 11 00:11:40 [1201] vcsquorum stonith-ng: info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
  1144. Feb 11 00:11:40 [1201] vcsquorum stonith-ng: info: stonith_shutdown: Terminating with 0 clients
  1145. Feb 11 00:11:40 [1201] vcsquorum stonith-ng: info: qb_ipcs_us_withdraw: withdrawing server sockets
  1146. Feb 11 00:11:40 [1201] vcsquorum stonith-ng: info: crm_xml_cleanup: Cleaning up memory from libxml2
  1147. Feb 11 00:11:40 [1201] vcsquorum stonith-ng: info: main: Done
  1148. Feb 11 00:11:40 [1197] vcsquorum pacemakerd: info: pcmk_child_exit: Child process stonith-ng exited (pid=1201, rc=0)
  1149. Feb 11 00:11:40 [1197] vcsquorum pacemakerd: notice: stop_child: Stopping cib: Sent -15 to process 1199
  1150. Feb 11 00:11:40 [1199] vcsquorum cib: info: crm_ipcs_send: Event 321 failed, size=1798, to=0x1942a80[1201], queue=1, rc=-32: <notify t="cib_notify" subt="cib_diff_notify" cib_op="cib_apply_diff" cib_rc="0" cib_object_type="diff"><cib_generation>
  1151. Feb 11 00:11:40 [1199] vcsquorum cib: info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
  1152. Feb 11 00:11:40 [1199] vcsquorum cib: info: cib_shutdown: Waiting on 2 clients to disconnect (1)
  1153. Feb 11 00:11:40 [1199] vcsquorum cib: info: cib_shutdown: Waiting on 1 clients to disconnect (0)
  1154. Feb 11 00:11:40 [1199] vcsquorum cib: info: cib_shutdown: All clients disconnected (0)
  1155. Feb 11 00:11:40 [1199] vcsquorum cib: info: terminate_cib: initiate_exit: Disconnecting from cluster infrastructure
  1156. Feb 11 00:11:40 [1199] vcsquorum cib: info: crm_cluster_disconnect: Disconnecting from cluster infrastructure: corosync
  1157. Feb 11 00:11:40 [1199] vcsquorum cib: notice: terminate_cs_connection: Disconnecting from Corosync
  1158. Feb 11 00:11:40 [1199] vcsquorum cib: info: terminate_cs_connection: No Quorum connection
  1159. Feb 11 00:11:40 [1199] vcsquorum cib: info: crm_cluster_disconnect: Disconnected from corosync
  1160. Feb 11 00:11:40 [1199] vcsquorum cib: info: terminate_cib: initiate_exit: Exiting from mainloop...
  1161. Feb 11 00:11:40 [1199] vcsquorum cib: info: cib_shutdown: Disconnected 3 clients
  1162. Feb 11 00:11:40 [1199] vcsquorum cib: info: cib_shutdown: All clients disconnected (0)
  1163. Feb 11 00:11:40 [1199] vcsquorum cib: info: terminate_cib: initiate_exit: Disconnecting from cluster infrastructure
  1164. Feb 11 00:11:40 [1199] vcsquorum cib: info: crm_cluster_disconnect: Disconnecting from cluster infrastructure: corosync
  1165. Feb 11 00:11:40 [1199] vcsquorum cib: notice: terminate_cs_connection: Disconnecting from Corosync
  1166. Feb 11 00:11:40 [1199] vcsquorum cib: info: terminate_cs_connection: No CPG connection
  1167. Feb 11 00:11:40 [1199] vcsquorum cib: info: terminate_cs_connection: No Quorum connection
  1168. Feb 11 00:11:40 [1199] vcsquorum cib: info: crm_cluster_disconnect: Disconnected from corosync
  1169. Feb 11 00:11:40 [1199] vcsquorum cib: info: terminate_cib: initiate_exit: Exiting from mainloop...
  1170. Feb 11 00:11:40 [1199] vcsquorum cib: info: qb_ipcs_us_withdraw: withdrawing server sockets
  1171. Feb 11 00:11:40 [1199] vcsquorum cib: info: qb_ipcs_us_withdraw: withdrawing server sockets
  1172. Feb 11 00:11:40 [1199] vcsquorum cib: info: qb_ipcs_us_withdraw: withdrawing server sockets
  1173. Feb 11 00:11:40 [1197] vcsquorum pacemakerd: info: pcmk_child_exit: Child process cib exited (pid=1199, rc=0)
  1174. Feb 11 00:11:40 [1197] vcsquorum pacemakerd: notice: pcmk_shutdown_worker: Shutdown complete
  1175. Feb 11 00:11:40 [1197] vcsquorum pacemakerd: info: qb_ipcs_us_withdraw: withdrawing server sockets
  1176. Feb 11 00:11:40 [1197] vcsquorum pacemakerd: info: main: Exiting pacemakerd
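
The shutdown sequence above stops the children in the reverse of the order they are forked at startup (cib, stonith-ng, lrmd, attrd, pengine, crmd), sending each SIGTERM and waiting for its exit before moving on. Sketched (order taken from the log; the helper is hypothetical):

    import signal

    START_ORDER = ["cib", "stonith-ng", "lrmd", "attrd", "pengine", "crmd"]

    def stop_children(pids):
        """Stop children in reverse start order, as pacemakerd does above."""
        for name in reversed(START_ORDER):
            print("Stopping %s: Sent -%d to process %d"
                  % (name, signal.SIGTERM, pids[name]))

    stop_children({"cib": 1199, "stonith-ng": 1201, "lrmd": 1202,
                   "attrd": 1203, "pengine": 1204, "crmd": 1205})
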
  1177. Feb 11 00:26:49 [2043] vcsquorum pacemakerd: info: read_config: User configured file based logging and explicitly disabled syslog.
  1178. Feb 11 00:26:49 [2043] vcsquorum pacemakerd: notice: main: Starting Pacemaker 1.1.8 (Build: 1f8858c): generated-manpages agent-manpages ncurses libqb-logging libqb-ipc lha-fencing upstart systemd corosync-native snmp libesmtp
  1179. Feb 11 00:26:49 [2043] vcsquorum pacemakerd: info: main: Maximum core file size is: 18446744073709551615
  1180. Feb 11 00:26:49 [2043] vcsquorum pacemakerd: info: qb_ipcs_us_publish: server name: pacemakerd
  1181. Feb 11 00:26:49 [2043] vcsquorum pacemakerd: notice: update_node_processes: 0x1d1f370 Node 755053578 now known as vcsquorum, was:
  1182. Feb 11 00:26:49 [2043] vcsquorum pacemakerd: info: start_child: Forked child 2045 for process cib
  1183. Feb 11 00:26:49 [2043] vcsquorum pacemakerd: info: start_child: Forked child 2047 for process stonith-ng
  1184. Feb 11 00:26:49 [2043] vcsquorum pacemakerd: info: start_child: Forked child 2048 for process lrmd
  1185. Feb 11 00:26:49 [2047] vcsquorum stonith-ng: info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/root
  1186. Feb 11 00:26:49 [2047] vcsquorum stonith-ng: info: get_cluster_type: Cluster type is: 'corosync'
  1187. Feb 11 00:26:49 [2047] vcsquorum stonith-ng: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
  1188. Feb 11 00:26:49 [2043] vcsquorum pacemakerd: info: start_child: Forked child 2049 for process attrd
  1189. Feb 11 00:26:49 [2043] vcsquorum pacemakerd: info: start_child: Forked child 2050 for process pengine
  1190. Feb 11 00:26:49 [2043] vcsquorum pacemakerd: info: start_child: Forked child 2051 for process crmd
  1191. Feb 11 00:26:49 [2043] vcsquorum pacemakerd: info: main: Starting mainloop
  1192. Feb 11 00:26:49 [2043] vcsquorum pacemakerd: notice: update_node_processes: 0x1f237c0 Node 2868982794 now known as vcs1, was:
  1193. Feb 11 00:26:49 [2045] vcsquorum cib: info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
  1194. Feb 11 00:26:49 [2045] vcsquorum cib: notice: main: Using new config location: /var/lib/pacemaker/cib
  1195. Feb 11 00:26:49 [2048] vcsquorum lrmd: info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/root
  1196. Feb 11 00:26:49 [2048] vcsquorum lrmd: info: qb_ipcs_us_publish: server name: lrmd
  1197. Feb 11 00:26:49 [2048] vcsquorum lrmd: info: main: Starting
  1198. Feb 11 00:26:49 [2045] vcsquorum cib: info: get_cluster_type: Cluster type is: 'corosync'
  1199. Feb 11 00:26:49 [2045] vcsquorum cib: info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.xml (digest: /var/lib/pacemaker/cib/cib.xml.sig)
  1200. Feb 11 00:26:49 [2045] vcsquorum cib: warning: retrieveCib: Cluster configuration not found: /var/lib/pacemaker/cib/cib.xml
  1201. Feb 11 00:26:49 [2045] vcsquorum cib: warning: readCibXmlFile: Primary configuration corrupt or unusable, trying backup...
  1202. Feb 11 00:26:49 [2045] vcsquorum cib: warning: readCibXmlFile: Continuing with an empty configuration.
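
At this restart the on-disk CIB is simply gone, so after the primary file and backup both fail the daemon continues with an empty configuration at version 0.0.0, which is why it accepts a full replace from vcs1 (0.4.5) moments later. The fallback chain, condensed (illustrative helper):

    import os

    def load_cib(primary, backups=()):
        """Try primary, then backups; fall back to an empty configuration."""
        for path in (primary, *backups):
            if os.path.exists(path):
                with open(path) as f:
                    return f.read()
        return '<cib admin_epoch="0" epoch="0" num_updates="0"/>'

    cib_xml = load_cib("/var/lib/pacemaker/cib/cib.xml")
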
  1203. Feb 11 00:26:49 [2045] vcsquorum cib: info: validate_with_relaxng: Creating RNG parser context
  1204. Feb 11 00:26:49 [2049] vcsquorum attrd: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
  1205. Feb 11 00:26:49 [2047] vcsquorum stonith-ng: info: crm_get_peer: Node <null> now has id: 755053578
  1206. Feb 11 00:26:49 [2047] vcsquorum stonith-ng: info: crm_update_peer_proc: init_cpg_connection: Node (null)[755053578] - corosync-cpg is now online
  1207. Feb 11 00:26:49 [2047] vcsquorum stonith-ng: info: init_cs_connection_once: Connection to 'corosync': established
  1208. Feb 11 00:26:49 [2047] vcsquorum stonith-ng: info: crm_get_peer: Node 755053578 is now known as vcsquorum
  1209. Feb 11 00:26:49 [2047] vcsquorum stonith-ng: info: crm_get_peer: Node 755053578 has uuid 755053578
  1210. Feb 11 00:26:49 [2047] vcsquorum stonith-ng: info: crm_ipc_connect: Could not establish cib_rw connection: Connection refused (111)
  1211. Feb 11 00:26:49 [2051] vcsquorum crmd: info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
  1212. Feb 11 00:26:49 [2051] vcsquorum crmd: notice: main: CRM Git Version: 1f8858c
  1213. Feb 11 00:26:49 [2051] vcsquorum crmd: info: get_cluster_type: Cluster type is: 'corosync'
  1214. Feb 11 00:26:49 [2051] vcsquorum crmd: info: crm_ipc_connect: Could not establish cib_shm connection: Connection refused (111)
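
"Connection refused (111)" is ECONNREFUSED: stonith-ng and crmd start before the cib daemon has published its cib_ro/cib_rw/cib_shm sockets (they appear a few lines below), so the callers simply retry until the connect succeeds. A generic retry loop in the same spirit (not crm_ipc_connect itself):

    import socket, time

    def connect_with_retry(path, attempts=5, delay=0.2):
        """Retry a Unix-socket connect until the server publishes it."""
        for _ in range(attempts):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                s.connect(path)
                return s
            except ConnectionRefusedError:  # errno 111 on Linux
                s.close()
                time.sleep(delay)
        raise ConnectionRefusedError("gave up waiting for %s" % path)

    # e.g. connect_with_retry("/var/run/cib_rw")  # path is illustrative
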
  1215. Feb 11 00:26:49 [2049] vcsquorum attrd: notice: main: Starting mainloop...
  1216. Feb 11 00:26:49 [2045] vcsquorum cib: info: startCib: CIB Initialization completed successfully
  1217. Feb 11 00:26:49 [2045] vcsquorum cib: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
  1218. Feb 11 00:26:49 [2045] vcsquorum cib: info: crm_get_peer: Node <null> now has id: 755053578
  1219. Feb 11 00:26:49 [2045] vcsquorum cib: info: crm_update_peer_proc: init_cpg_connection: Node (null)[755053578] - corosync-cpg is now online
  1220. Feb 11 00:26:49 [2045] vcsquorum cib: info: init_cs_connection_once: Connection to 'corosync': established
  1221. Feb 11 00:26:49 [2045] vcsquorum cib: info: crm_get_peer: Node 755053578 is now known as vcsquorum
  1222. Feb 11 00:26:49 [2045] vcsquorum cib: info: crm_get_peer: Node 755053578 has uuid 755053578
  1223. Feb 11 00:26:49 [2045] vcsquorum cib: info: qb_ipcs_us_publish: server name: cib_ro
  1224. Feb 11 00:26:49 [2045] vcsquorum cib: info: qb_ipcs_us_publish: server name: cib_rw
  1225. Feb 11 00:26:49 [2045] vcsquorum cib: info: qb_ipcs_us_publish: server name: cib_shm
  1226. Feb 11 00:26:49 [2045] vcsquorum cib: info: cib_init: Starting cib mainloop
  1227. Feb 11 00:26:49 [2045] vcsquorum cib: info: pcmk_cpg_membership: Joined[0.0] cib.755053578
  1228. Feb 11 00:26:49 [2045] vcsquorum cib: info: pcmk_cpg_membership: Member[0.0] cib.755053578
  1229. Feb 11 00:26:49 [2045] vcsquorum cib: info: crm_get_peer: Node <null> now has id: 2868982794
  1230. Feb 11 00:26:49 [2045] vcsquorum cib: info: pcmk_cpg_membership: Member[0.1] cib.-1425984502
  1231. Feb 11 00:26:49 [2045] vcsquorum cib: info: crm_update_peer_proc: pcmk_cpg_membership: Node (null)[-1425984502] - corosync-cpg is now online
  1232. Feb 11 00:26:50 [2047] vcsquorum stonith-ng: notice: setup_cib: Watching for stonith topology changes
  1233. Feb 11 00:26:50 [2047] vcsquorum stonith-ng: info: qb_ipcs_us_publish: server name: stonith-ng
  1234. Feb 11 00:26:50 [2047] vcsquorum stonith-ng: info: main: Starting stonith-ng mainloop
  1235. Feb 11 00:26:50 [2047] vcsquorum stonith-ng: info: pcmk_cpg_membership: Joined[0.0] stonith-ng.755053578
  1236. Feb 11 00:26:50 [2047] vcsquorum stonith-ng: info: pcmk_cpg_membership: Member[0.0] stonith-ng.755053578
  1237. Feb 11 00:26:50 [2047] vcsquorum stonith-ng: info: crm_get_peer: Node <null> now has id: 2868982794
  1238. Feb 11 00:26:50 [2047] vcsquorum stonith-ng: info: pcmk_cpg_membership: Member[0.1] stonith-ng.-1425984502
  1239. Feb 11 00:26:50 [2047] vcsquorum stonith-ng: info: crm_update_peer_proc: pcmk_cpg_membership: Node (null)[-1425984502] - corosync-cpg is now online
  1240. Feb 11 00:26:50 [2047] vcsquorum stonith-ng: info: crm_get_peer: Node 2868982794 is now known as vcs1
  1241. Feb 11 00:26:50 [2047] vcsquorum stonith-ng: info: crm_get_peer: Node 2868982794 has uuid 2868982794
  1242. Feb 11 00:26:50 [2051] vcsquorum crmd: info: do_cib_control: CIB connection established
  1243. Feb 11 00:26:50 [2051] vcsquorum crmd: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
  1244. Feb 11 00:26:50 [2051] vcsquorum crmd: info: crm_get_peer: Node <null> now has id: 755053578
  1245. Feb 11 00:26:50 [2051] vcsquorum crmd: info: crm_update_peer_proc: init_cpg_connection: Node (null)[755053578] - corosync-cpg is now online
  1246. Feb 11 00:26:50 [2051] vcsquorum crmd: info: init_cs_connection_once: Connection to 'corosync': established
  1247. Feb 11 00:26:50 [2051] vcsquorum crmd: info: crm_get_peer: Node 755053578 is now known as vcsquorum
  1248. Feb 11 00:26:50 [2051] vcsquorum crmd: info: peer_update_callback: vcsquorum is now (null)
  1249. Feb 11 00:26:50 [2051] vcsquorum crmd: info: crm_get_peer: Node 755053578 has uuid 755053578
  1250. Feb 11 00:26:50 [2051] vcsquorum crmd: notice: init_quorum_connection: Quorum acquired
  1251. Feb 11 00:26:50 [2045] vcsquorum cib: info: crm_get_peer: Node 2868982794 is now known as vcs1
  1252. Feb 11 00:26:50 [2045] vcsquorum cib: info: crm_get_peer: Node 2868982794 has uuid 2868982794
  1253. Feb 11 00:26:50 [2045] vcsquorum cib: info: cib_process_diff: Diff 0.4.4 -> 0.4.5 from vcs1 not applied to 0.0.0: current "epoch" is less than required
  1254. Feb 11 00:26:50 [2045] vcsquorum cib: info: cib_server_process_diff: Requesting re-sync from peer
  1255. Feb 11 00:26:50 [2045] vcsquorum cib: info: cib_process_replace: Digest matched on replace from vcs1: 8fbc8cf845733fc66fed998ebf09acc6
  1256. Feb 11 00:26:50 [2045] vcsquorum cib: info: cib_process_replace: Replaced 0.0.0 with 0.4.5 from vcs1
  1257. Feb 11 00:26:50 [2045] vcsquorum cib: info: cib_replace_notify: Replaced: 0.0.0 -> 0.4.5 from vcs1
  1258. Feb 11 00:26:50 [2051] vcsquorum crmd: info: do_ha_control: Connected to the cluster
  1259. Feb 11 00:26:50 [2051] vcsquorum crmd: info: lrmd_api_connect: Connecting to lrmd
  1260. Feb 11 00:26:50 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/3, version=0.4.6): OK (rc=0)
  1261. Feb 11 00:26:50 [2048] vcsquorum lrmd: info: lrmd_ipc_accept: Accepting client connection: 0x23d1c00 pid=2051 for uid=997 gid=0
  1262. Feb 11 00:26:50 [2051] vcsquorum crmd: info: do_started: Delaying start, no membership data (0000000000100000)
  1263. Feb 11 00:26:50 [2051] vcsquorum crmd: info: pcmk_quorum_notification: Membership 63852: quorum retained (3)
  1264. Feb 11 00:26:50 [2051] vcsquorum crmd: notice: crm_update_peer_state: pcmk_quorum_notification: Node vcsquorum[755053578] - state is now member
  1265. Feb 11 00:26:50 [2051] vcsquorum crmd: info: peer_update_callback: vcsquorum is now member (was (null))
  1266. Feb 11 00:26:50 [2051] vcsquorum crmd: info: crm_get_peer: Node <null> now has id: 2852205578
  1267. Feb 11 00:26:50 [2051] vcsquorum crmd: info: pcmk_quorum_notification: Obtaining name for new node 2852205578
  1268. Feb 11 00:26:50 [2051] vcsquorum crmd: notice: corosync_node_name: Inferred node name 'vcs0.example.com' for nodeid 2852205578 from DNS
  1269. Feb 11 00:26:50 [2051] vcsquorum crmd: info: crm_get_peer: Node 2852205578 is now known as vcs0.example.com
  1270. Feb 11 00:26:50 [2051] vcsquorum crmd: info: peer_update_callback: vcs0.example.com is now (null)
  1271. Feb 11 00:26:50 [2051] vcsquorum crmd: info: crm_get_peer: Node 2852205578 has uuid 2852205578
  1272. Feb 11 00:26:50 [2051] vcsquorum crmd: notice: crm_update_peer_state: pcmk_quorum_notification: Node vcs0.example.com[2852205578] - state is now member
  1273. Feb 11 00:26:50 [2051] vcsquorum crmd: info: peer_update_callback: vcs0.example.com is now member (was (null))
  1274. Feb 11 00:26:50 [2051] vcsquorum crmd: info: crm_get_peer: Node <null> now has id: 2868982794
  1275. Feb 11 00:26:50 [2051] vcsquorum crmd: info: pcmk_quorum_notification: Obtaining name for new node 2868982794
  1276. Feb 11 00:26:50 [2051] vcsquorum crmd: notice: corosync_node_name: Inferred node name 'vcs1.example.com' for nodeid 2868982794 from DNS
  1277. Feb 11 00:26:50 [2051] vcsquorum crmd: info: crm_get_peer: Node 2868982794 is now known as vcs1.example.com
  1278. Feb 11 00:26:50 [2051] vcsquorum crmd: info: peer_update_callback: vcs1.example.com is now (null)
  1279. Feb 11 00:26:50 [2051] vcsquorum crmd: info: crm_get_peer: Node 2868982794 has uuid 2868982794
  1280. Feb 11 00:26:50 [2051] vcsquorum crmd: notice: crm_update_peer_state: pcmk_quorum_notification: Node vcs1.example.com[2868982794] - state is now member
  1281. Feb 11 00:26:50 [2051] vcsquorum crmd: info: peer_update_callback: vcs1.example.com is now member (was (null))
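
These "Inferred node name ... from DNS" lines look like the source of the naming trouble: with no name configured for a nodeid, crmd falls back to a reverse lookup of the node's address, which returns the FQDN ("vcs0.example.com", "vcs1.example.com") while other layers keep using the short names, matching the flapping unames seen throughout this log. The fallback, roughly (stand-in helper; addresses are illustrative):

    import socket

    def node_name(address, configured):
        """Prefer a configured name; otherwise infer one via reverse DNS."""
        if address in configured:
            return configured[address]
        name, _aliases, _addrs = socket.gethostbyaddr(address)
        print("Inferred node name '%s' for %s from DNS" % (name, address))
        return name

    # node_name("192.0.2.11", {}) would print and return an FQDN, whereas
    # node_name("192.0.2.11", {"192.0.2.11": "vcs1"}) returns "vcs1".
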
  1282. Feb 11 00:26:50 [2051] vcsquorum crmd: info: qb_ipcs_us_publish: server name: crmd
  1283. Feb 11 00:26:50 [2051] vcsquorum crmd: notice: do_started: The local CRM is operational
  1284. Feb 11 00:26:50 [2051] vcsquorum crmd: info: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
  1285. Feb 11 00:26:51 [2051] vcsquorum crmd: info: pcmk_cpg_membership: Joined[0.0] crmd.755053578
  1286. Feb 11 00:26:51 [2051] vcsquorum crmd: info: pcmk_cpg_membership: Member[0.0] crmd.755053578
  1287. Feb 11 00:26:51 [2051] vcsquorum crmd: info: pcmk_cpg_membership: Member[0.1] crmd.-1425984502
  1288. Feb 11 00:26:51 [2051] vcsquorum crmd: info: crm_update_peer_proc: pcmk_cpg_membership: Node vcs1.example.com[-1425984502] - corosync-cpg is now online
  1289. Feb 11 00:26:51 [2051] vcsquorum crmd: info: peer_update_callback: Client vcs1.example.com/peer now has status [online] (DC=<null>)
  1290. Feb 11 00:27:11 [2051] vcsquorum crmd: info: crm_timer_popped: Election Trigger (I_DC_TIMEOUT) just popped (20000ms)
  1291. Feb 11 00:27:11 [2051] vcsquorum crmd: warning: do_log: FSA: Input I_DC_TIMEOUT from crm_timer_popped() received in state S_PENDING
  1292. Feb 11 00:27:11 [2051] vcsquorum crmd: info: do_state_transition: State transition S_PENDING -> S_ELECTION [ input=I_DC_TIMEOUT cause=C_TIMER_POPPED origin=crm_timer_popped ]
  1293. Feb 11 00:29:11 [2051] vcsquorum crmd: error: crm_timer_popped: Election Timeout (I_ELECTION_DC) just popped in state S_ELECTION! (120000ms)
  1294. Feb 11 00:29:11 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_TIMER_POPPED origin=crm_timer_popped ]
  1295. Feb 11 00:29:11 [2051] vcsquorum crmd: info: do_te_control: Registering TE UUID: 6a6761a2-ec2f-492c-a18c-394db5ac6dfc
  1296. Feb 11 00:29:11 [2051] vcsquorum crmd: info: set_graph_functions: Setting custom graph functions
  1297. Feb 11 00:29:11 [2051] vcsquorum crmd: info: do_dc_takeover: Taking over DC status for this partition
  1298. Feb 11 00:29:11 [2045] vcsquorum cib: info: cib_process_readwrite: We are now in R/W mode
  1299. Feb 11 00:29:11 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/7, version=0.4.8): OK (rc=0)
  1300. Feb 11 00:29:11 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/8, version=0.4.9): OK (rc=0)
  1301. Feb 11 00:29:11 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/10, version=0.4.10): OK (rc=0)
  1302. Feb 11 00:29:11 [2051] vcsquorum crmd: info: join_make_offer: Making join offers based on membership 63852
  1303. Feb 11 00:29:11 [2051] vcsquorum crmd: info: do_dc_join_offer_all: join-1: Waiting on 2 outstanding join acks
  1304. Feb 11 00:29:11 [2051] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
  1305. Feb 11 00:29:11 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/12, version=0.4.11): OK (rc=0)
  1306. Feb 11 00:29:11 [2051] vcsquorum crmd: info: crm_update_peer_expected: do_dc_join_filter_offer: Node vcsquorum[755053578] - expected state is now member
  1307. Feb 11 00:29:50 [2045] vcsquorum cib: info: cib_process_replace: Digest matched on replace from vcs1: 2d4b3f9280b830e9f7ebac276e35345c
  1308. Feb 11 00:29:50 [2045] vcsquorum cib: info: cib_process_replace: Replaced 0.4.11 with 0.4.11 from vcs1
  1309. Feb 11 00:29:50 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update probe_complete=(null) failed: No such device or address
  1310. Feb 11 00:32:11 [2051] vcsquorum crmd: error: crm_timer_popped: Integration Timer (I_INTEGRATED) just popped in state S_INTEGRATION! (180000ms)
  1311. Feb 11 00:32:11 [2051] vcsquorum crmd: info: crm_timer_popped: Welcomed: 1, Integrated: 1
  1312. Feb 11 00:32:11 [2051] vcsquorum crmd: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_TIMER_POPPED origin=crm_timer_popped ]
  1313. Feb 11 00:32:11 [2051] vcsquorum crmd: warning: do_state_transition: Progressed to state S_FINALIZE_JOIN after C_TIMER_POPPED
  1314. Feb 11 00:32:11 [2051] vcsquorum crmd: warning: do_state_transition: 1 cluster nodes failed to respond to the join offer.
  1315. Feb 11 00:32:11 [2051] vcsquorum crmd: info: ghash_print_node: Welcome reply not received from: vcs1.example.com 1
  1316. Feb 11 00:32:11 [2051] vcsquorum crmd: info: do_dc_join_finalize: join-1: Syncing the CIB from vcsquorum to the rest of the cluster
  1317. Feb 11 00:32:11 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/15, version=0.4.18): OK (rc=0)
  1318. Feb 11 00:32:11 [2051] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcsquorum']/transient_attributes
  1319. Feb 11 00:32:11 [2051] vcsquorum crmd: info: update_attrd: Connecting to attrd... 5 retries remaining
  1320. Feb 11 00:32:11 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: --- 0.4.18
  1321. Feb 11 00:32:11 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: +++ 0.5.1
  1322. Feb 11 00:32:11 [2045] vcsquorum cib: notice: cib:diff: -- <node uname="vcsquorum.example.com" id="755053578" />
  1323. Feb 11 00:32:11 [2045] vcsquorum cib: notice: cib:diff: ++ <node id="755053578" uname="vcsquorum" />
  1324. Feb 11 00:32:11 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/16, version=0.5.1): OK (rc=0)
  1325. Feb 11 00:32:11 [2051] vcsquorum crmd: info: do_dc_join_ack: join-1: Updating node state to member for vcsquorum
  1326. Feb 11 00:32:11 [2051] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcsquorum']/lrm
  1327. Feb 11 00:32:11 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcsquorum']/transient_attributes (origin=local/crmd/17, version=0.5.2): OK (rc=0)
  1328. Feb 11 00:32:11 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcsquorum']/lrm (origin=local/crmd/18, version=0.5.3): OK (rc=0)
  1329. Feb 11 00:32:11 [2051] vcsquorum crmd: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
  1330. Feb 11 00:32:11 [2051] vcsquorum crmd: warning: do_state_transition: Only 1 of 2 cluster nodes are eligible to run resources - continue 1
  1331. Feb 11 00:32:11 [2051] vcsquorum crmd: info: abort_transition_graph: do_te_invoke:156 - Triggered transition abort (complete=1) : Peer Cancelled
  1332. Feb 11 00:32:11 [2049] vcsquorum attrd: notice: attrd_local_callback: Sending full refresh (origin=crmd)
  1333. Feb 11 00:32:11 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:126 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=0.6.1) : Non-status change
  1334. Feb 11 00:32:11 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: --- 0.5.4
  1335. Feb 11 00:32:11 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: +++ 0.6.1
  1336. Feb 11 00:32:11 [2045] vcsquorum cib: notice: cib:diff: -- <node uname="vcs1" id="2868982794" />
  1337. Feb 11 00:32:11 [2045] vcsquorum cib: notice: cib:diff: ++ <node id="2868982794" uname="vcs1.example.com" />
  1338. Feb 11 00:32:11 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/20, version=0.6.1): OK (rc=0)
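
Note: the nodes section keeps flipping between short names (vcs1, vcsquorum) and FQDNs (vcs1.example.com), and that name split is what later produces the "share the same cluster node id" criticals below. One common remedy is to pin each node's name in the corosync.conf nodelist so every daemon resolves the same uname; a hypothetical fragment (addresses are illustrative, the nodeids are the ones seen in this log):

    nodelist {
        node {
            ring0_addr: 192.168.1.43    # illustrative address
            nodeid: 2852205578
            name: vcs0
        }
        node {
            ring0_addr: 192.168.1.44    # illustrative address
            nodeid: 2868982794
            name: vcs1
        }
        node {
            ring0_addr: 192.168.1.45    # illustrative address
            nodeid: 755053578
            name: vcsquorum
        }
    }
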
  1339. Feb 11 00:32:11 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/22, version=0.6.3): OK (rc=0)
  1340. Feb 11 00:32:11 [2050] vcsquorum pengine: error: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
  1341. Feb 11 00:32:11 [2050] vcsquorum pengine: error: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
  1342. Feb 11 00:32:11 [2050] vcsquorum pengine: error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
  1343. Feb 11 00:32:11 [2050] vcsquorum pengine: notice: stage6: Delaying fencing operations until there are resources to manage
  1344. Feb 11 00:32:11 [2050] vcsquorum pengine: notice: process_pe_message: Calculated Transition 0: /var/lib/pacemaker/pengine/pe-input-0.bz2
  1345. Feb 11 00:32:11 [2050] vcsquorum pengine: notice: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.
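
Note: the unpack_resources errors recurring through this log are Pacemaker refusing to start any resource while fencing is unconfigured. A minimal sketch of the two usual ways forward, assuming the crm shell that accompanies this Pacemaker 1.1.8 build (disabling STONITH is only defensible on throwaway test clusters):

    # Option A (test clusters only): switch fencing off so resources can start
    crm configure property stonith-enabled=false

    # Option B: define real STONITH resources, then re-run the check the
    # pengine itself recommends:
    crm_verify -L -V
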
  1346. Feb 11 00:32:11 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
  1347. Feb 11 00:32:11 [2051] vcsquorum crmd: info: do_te_invoke: Processing graph 0 (ref=pe_calc-dc-1360564331-8) derived from /var/lib/pacemaker/pengine/pe-input-0.bz2
  1348. Feb 11 00:32:11 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 4: probe_complete probe_complete on vcsquorum (local) - no waiting
  1349. Feb 11 00:32:11 [2049] vcsquorum attrd: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
  1350. Feb 11 00:32:11 [2051] vcsquorum crmd: notice: run_graph: Transition 0 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-0.bz2): Complete
  1351. Feb 11 00:32:11 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
  1352. Feb 11 00:32:11 [2049] vcsquorum attrd: notice: attrd_perform_update: Sent update 5: probe_complete=true
  1353. Feb 11 00:32:11 [2045] vcsquorum cib: warning: cib_process_diff: Diff 0.4.18 -> 0.4.19 from vcs1 not applied to 0.6.4: current "epoch" is greater than required
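
Note: every "not applied" warning compares dotted CIB versions of the form admin_epoch.epoch.num_updates; a peer's diff is only applied when the local CIB sits exactly at the version the diff was generated against. A minimal sketch of that ordering, inferred from these log messages rather than lifted from Pacemaker's source:

    # Python sketch: treat CIB versions as (admin_epoch, epoch, num_updates)
    def parse(v):
        return tuple(int(x) for x in v.split("."))  # "0.4.18" -> (0, 4, 18)

    def diff_applies(local, diff_from):
        # apply only if the local CIB is exactly where the diff starts
        return parse(local) == parse(diff_from)

    print(diff_applies("0.6.4", "0.4.18"))  # False: local epoch 6 > required 4
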
  1354. Feb 11 00:35:21 [2051] vcsquorum crmd: crit: crm_get_peer: Node vcs1.example.com and vcs1 share the same cluster node id '2868982794'!
  1355. Feb 11 00:35:21 [2051] vcsquorum crmd: info: crm_get_peer: Node vcs1 now has id: 2868982794
  1356. Feb 11 00:35:21 [2051] vcsquorum crmd: info: crm_get_peer: Node 2868982794 is now known as vcs1
  1357. Feb 11 00:35:21 [2051] vcsquorum crmd: info: peer_update_callback: vcs1 is now (null)
  1358. Feb 11 00:35:21 [2051] vcsquorum crmd: info: crm_get_peer: Node 2868982794 has uuid 2868982794
  1359. Feb 11 00:35:21 [2051] vcsquorum crmd: error: crmd_ais_dispatch: Receiving messages from a node we think is dead: vcs1[-1425984502]
  1360. Feb 11 00:35:21 [2051] vcsquorum crmd: info: crm_update_peer_proc: crmd_ais_dispatch: Node vcs1[-1425984502] - corosync-cpg is now online
  1361. Feb 11 00:35:21 [2051] vcsquorum crmd: info: peer_update_callback: Client vcs1/peer now has status [online] (DC=true)
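
Note: the ids 2868982794 and -1425984502 used interchangeably for vcs1 are one and the same 32-bit corosync nodeid, printed unsigned in some messages and signed in others (likewise 2852205578 / -1442761718 for vcs0; 755053578 stays positive because it fits in a signed 32-bit int). The arithmetic:

    # Python: reinterpret the unsigned nodeid as a signed 32-bit integer
    nodeid = 2868982794
    print(nodeid - 2**32)                  # -1425984502
    assert (nodeid - 2**32) % 2**32 == nodeid
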
  1362. Feb 11 00:35:21 [2051] vcsquorum crmd: warning: match_down_event: No match for shutdown action on 2868982794
  1363. Feb 11 00:35:21 [2051] vcsquorum crmd: notice: peer_update_callback: Stonith/shutdown of vcs1 not matched
  1364. Feb 11 00:35:21 [2051] vcsquorum crmd: info: crm_update_peer_expected: peer_update_callback: Node vcs1[-1425984502] - expected state is now down
  1365. Feb 11 00:35:21 [2051] vcsquorum crmd: info: abort_transition_graph: peer_update_callback:211 - Triggered transition abort (complete=1) : Node failure
  1366. Feb 11 00:35:21 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
  1367. Feb 11 00:35:21 [2045] vcsquorum cib: warning: cib_process_diff: Diff 0.4.19 -> 0.6.1 from vcs1 not applied to 0.6.4: current "epoch" is greater than required
  1368. Feb 11 00:35:21 [2051] vcsquorum crmd: warning: do_state_transition: Only 1 of 2 cluster nodes are eligible to run resources - continue 1
  1369. Feb 11 00:35:21 [2051] vcsquorum crmd: notice: do_election_count_vote: Election 3 (current: 2, owner: 2868982794): Processed vote from vcs1 (Peer is not part of our cluster)
  1370. Feb 11 00:35:21 [2050] vcsquorum pengine: error: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
  1371. Feb 11 00:35:21 [2050] vcsquorum pengine: error: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
  1372. Feb 11 00:35:21 [2050] vcsquorum pengine: error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
  1373. Feb 11 00:35:21 [2050] vcsquorum pengine: notice: stage6: Delaying fencing operations until there are resources to manage
  1374. Feb 11 00:35:21 [2050] vcsquorum pengine: notice: process_pe_message: Calculated Transition 1: /var/lib/pacemaker/pengine/pe-input-1.bz2
  1375. Feb 11 00:35:21 [2050] vcsquorum pengine: notice: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.
  1376. Feb 11 00:35:21 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
  1377. Feb 11 00:35:21 [2051] vcsquorum crmd: info: do_te_invoke: Processing graph 1 (ref=pe_calc-dc-1360564521-10) derived from /var/lib/pacemaker/pengine/pe-input-1.bz2
  1378. Feb 11 00:35:21 [2045] vcsquorum cib: warning: cib_process_diff: Diff 0.6.1 -> 0.7.1 from vcs1 not applied to 0.6.5: current "num_updates" is greater than required
  1379. Feb 11 00:35:21 [2051] vcsquorum crmd: notice: run_graph: Transition 1 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-1.bz2): Complete
  1380. Feb 11 00:35:21 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
  1381. Feb 11 00:35:21 [2045] vcsquorum cib: info: cib_process_diff: Diff 0.7.1 -> 0.7.2 from vcs1 not applied to 0.6.5: current "epoch" is less than required
  1382. Feb 11 00:35:21 [2045] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
  1383. Feb 11 00:35:21 [2045] vcsquorum cib: info: cib_process_diff: Diff 0.7.2 -> 0.7.3 from vcs1 not applied to 0.6.5: current "epoch" is less than required
  1384. Feb 11 00:35:21 [2045] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
  1385. Feb 11 00:36:06 [2045] vcsquorum cib: info: cib_process_diff: Diff 0.7.3 -> 0.7.4 from vcs1 not applied to 0.6.5: current "epoch" is less than required
  1386. Feb 11 00:36:06 [2045] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
  1387. Feb 11 00:36:06 [2051] vcsquorum crmd: notice: do_election_count_vote: Election 4 (current: 2, owner: 2868982794): Processed vote from vcs1 (Peer is not part of our cluster)
  1388. Feb 11 00:36:06 [2045] vcsquorum cib: info: cib_process_diff: Diff 0.7.4 -> 0.7.5 from vcs1 not applied to 0.6.5: current "epoch" is less than required
  1389. Feb 11 00:36:06 [2045] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
  1390. Feb 11 00:36:06 [2045] vcsquorum cib: info: cib_process_diff: Diff 0.7.5 -> 0.7.6 from vcs1 not applied to 0.6.5: current "epoch" is less than required
  1391. Feb 11 00:36:06 [2045] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
  1392. Feb 11 00:36:06 [2045] vcsquorum cib: info: cib_process_diff: Diff 0.7.6 -> 0.7.7 from vcs1 not applied to 0.6.5: current "epoch" is less than required
  1393. Feb 11 00:36:06 [2045] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
  1394. Feb 11 00:36:06 [2045] vcsquorum cib: info: cib_process_diff: Diff 0.7.7 -> 0.7.8 from vcs1 not applied to 0.6.5: current "epoch" is less than required
  1395. Feb 11 00:36:06 [2045] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
  1396. Feb 11 00:36:23 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:126 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=0.8.1) : Non-status change
  1397. Feb 11 00:36:23 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
  1398. Feb 11 00:36:23 [2051] vcsquorum crmd: warning: do_state_transition: Only 1 of 2 cluster nodes are eligible to run resources - continue 1
  1399. Feb 11 00:36:23 [2045] vcsquorum cib: info: cib_replace_notify: Replaced: 0.6.5 -> 0.8.1 from <null>
  1400. Feb 11 00:36:23 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: --- 0.6.5
  1401. Feb 11 00:36:23 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: +++ 0.8.1
  1402. Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- <cluster_property_set id="cib-bootstrap-options" >
  1403. Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.8-1f8858c" />
  1404. Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="corosync" />
  1405. Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- </cluster_property_set>
  1406. Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- <node id="2868982794" uname="vcs1.example.com" />
  1407. Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- <node id="755053578" uname="vcsquorum" />
  1408. Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- <node id="2852205578" uname="vcs0.example.com" />
  1409. Feb 11 00:36:23 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
  1410. Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- <node_state id="2868982794" uname="vcs1" in_ccm="true" crmd="online" crm-debug-origin="peer_update_callback" join="down" expected="member" >
  1411. Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- <transient_attributes id="2868982794" >
  1412. Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- <instance_attributes id="status-2868982794" >
  1413. Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- <nvpair id="status-2868982794-probe_complete" name="probe_complete" value="true" />
  1414. Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- </instance_attributes>
  1415. Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- </transient_attributes>
  1416. Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- <lrm id="2868982794" >
  1417. Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- <lrm_resources />
  1418. Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- </lrm>
  1419. Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- </node_state>
  1420. Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- <node_state id="755053578" uname="vcsquorum" in_ccm="true" crmd="online" join="member" crm-debug-origin="do_state_transition" expected="member" >
  1421. Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- <lrm id="755053578" >
  1422. Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- <lrm_resources />
  1423. Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- </lrm>
  1424. Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- <transient_attributes id="755053578" >
  1425. Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- <instance_attributes id="status-755053578" >
  1426. Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- <nvpair id="status-755053578-probe_complete" name="probe_complete" value="true" />
  1427. Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- </instance_attributes>
  1428. Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- </transient_attributes>
  1429. Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- </node_state>
  1430. Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- <node_state id="2852205578" uname="vcs0.example.com" in_ccm="true" crmd="offline" join="down" crm-debug-origin="do_state_transition" />
  1431. Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: ++ <cib epoch="8" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="vcsquorum" update-client="crmd" cib-last-written="Mon Feb 11 00:32:11 2013" have-quorum="1" dc-uuid="755053578" />
  1432. Feb 11 00:36:23 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_erase for section 'all' (origin=local/cibadmin/2, version=0.8.1): OK (rc=0)
  1433. Feb 11 00:36:23 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Local-only Change: 0.9.1
  1434. Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: -- <cib admin_epoch="0" epoch="8" num_updates="1" />
  1435. Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: ++ <node id="2868982794" uname="vcs1" />
  1436. Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: ++ <node id="755053578" uname="vcsquorum" />
  1437. Feb 11 00:36:23 [2045] vcsquorum cib: notice: cib:diff: ++ <node id="2852205578" uname="vcs0.example.com" />
  1438. Feb 11 00:36:23 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/28, version=0.9.1): OK (rc=0)
  1439. Feb 11 00:36:23 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update shutdown=(null) failed: No such device or address
  1440. Feb 11 00:36:23 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update terminate=(null) failed: No such device or address
  1441. Feb 11 00:36:41 [2045] vcsquorum cib: warning: cib_process_diff: Diff 0.7.8 -> 0.7.9 from vcs1 not applied to 0.9.3: current "epoch" is greater than required
  1442. Feb 11 00:36:41 [2045] vcsquorum cib: warning: cib_process_diff: Diff 0.7.9 -> 0.7.10 from vcs1 not applied to 0.9.3: current "epoch" is greater than required
  1443. Feb 11 00:36:41 [2051] vcsquorum crmd: notice: do_election_count_vote: Election 5 (current: 3, owner: 2868982794): Processed vote from vcs1 (Peer is not part of our cluster)
  1444. Feb 11 00:36:41 [2045] vcsquorum cib: warning: cib_process_diff: Diff 0.7.10 -> 0.7.11 from vcs1 not applied to 0.9.3: current "epoch" is greater than required
  1445. Feb 11 00:36:41 [2045] vcsquorum cib: warning: cib_process_diff: Diff 0.7.11 -> 0.7.12 from vcs1 not applied to 0.9.3: current "epoch" is greater than required
  1446. Feb 11 00:36:41 [2045] vcsquorum cib: warning: cib_process_diff: Diff 0.7.12 -> 0.7.13 from vcs1 not applied to 0.9.3: current "epoch" is greater than required
  1447. Feb 11 00:37:21 [2045] vcsquorum cib: warning: cib_process_diff: Diff 0.7.13 -> 0.7.14 from vcs1 not applied to 0.9.3: current "epoch" is greater than required
  1448. Feb 11 00:37:21 [2045] vcsquorum cib: warning: cib_process_diff: Diff 0.7.14 -> 0.7.15 from vcs1 not applied to 0.9.3: current "epoch" is greater than required
  1449. Feb 11 00:37:21 [2045] vcsquorum cib: warning: cib_process_diff: Diff 0.7.15 -> 0.8.1 from vcs1 not applied to 0.9.3: current "epoch" is greater than required
  1450. Feb 11 00:37:21 [2045] vcsquorum cib: warning: cib_process_diff: Diff 0.8.1 -> 0.9.1 from vcs1 not applied to 0.9.3: current "epoch" is greater than required
  1451. Feb 11 00:38:23 [2051] vcsquorum crmd: error: crm_timer_popped: Election Timeout (I_ELECTION_DC) just popped in state S_ELECTION! (120000ms)
  1452. Feb 11 00:38:23 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_TIMER_POPPED origin=crm_timer_popped ]
  1453. Feb 11 00:38:23 [2051] vcsquorum crmd: info: do_dc_takeover: Taking over DC status for this partition
  1454. Feb 11 00:38:23 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/30, version=0.9.4): OK (rc=0)
  1455. Feb 11 00:38:23 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/31, version=0.9.5): OK (rc=0)
  1456. Feb 11 00:38:23 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: --- 0.9.5
  1457. Feb 11 00:38:23 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: +++ 0.10.1
  1458. Feb 11 00:38:23 [2045] vcsquorum cib: notice: cib:diff: -- <cib admin_epoch="0" epoch="9" num_updates="5" />
  1459. Feb 11 00:38:23 [2045] vcsquorum cib: notice: cib:diff: ++ <cluster_property_set id="cib-bootstrap-options" >
  1460. Feb 11 00:38:23 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.8-1f8858c" />
  1461. Feb 11 00:38:23 [2045] vcsquorum cib: notice: cib:diff: ++ </cluster_property_set>
  1462. Feb 11 00:38:23 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/34, version=0.10.1): OK (rc=0)
  1463. Feb 11 00:38:23 [2051] vcsquorum crmd: info: do_dc_join_offer_all: join-2: Waiting on 2 outstanding join acks
  1464. Feb 11 00:38:23 [2051] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
  1465. Feb 11 00:38:23 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Local-only Change: 0.11.1
  1466. Feb 11 00:38:23 [2045] vcsquorum cib: notice: cib:diff: -- <cib admin_epoch="0" epoch="10" num_updates="1" />
  1467. Feb 11 00:38:23 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="corosync" />
  1468. Feb 11 00:38:23 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/37, version=0.11.1): OK (rc=0)
  1469. Feb 11 00:39:56 [1099] vcsquorum corosync notice [QUORUM] Members[2]: 755053578 -1425984502
  1470. Feb 11 00:39:56 [2051] vcsquorum crmd: info: pcmk_quorum_notification: Membership 63856: quorum retained (2)
  1471. Feb 11 00:39:56 [2051] vcsquorum crmd: notice: crm_update_peer_state: pcmk_quorum_notification: Node vcs1[2868982794] - state is now member
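
Note: the [QUORUM] Members[...] lines come from corosync's votequorum service; with three voting members (vcsquorum plus the two VCS nodes), quorum survives as long as any two are present, which is why "quorum retained (2)" appears here. A minimal corosync.conf sketch of such a setup (assumed, not this cluster's actual file):

    quorum {
        provider: corosync_votequorum
        expected_votes: 3
    }
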
  1472. Feb 11 00:39:56 [2051] vcsquorum crmd: info: peer_update_callback: vcs1 is now member (was (null))
  1473. Feb 11 00:39:56 [2051] vcsquorum crmd: notice: corosync_mark_unseen_peer_dead: Node -1425984502/vcs1.example.com was not seen in the previous transition
  1474. Feb 11 00:39:56 [2051] vcsquorum crmd: notice: crm_update_peer_state: corosync_mark_unseen_peer_dead: Node vcs1.example.com[2868982794] - state is now lost
  1475. Feb 11 00:39:56 [2051] vcsquorum crmd: info: peer_update_callback: vcs1.example.com is now lost (was member)
  1476. Feb 11 00:39:56 [2051] vcsquorum crmd: notice: corosync_mark_unseen_peer_dead: Node -1442761718/vcs0.example.com was not seen in the previous transition
  1477. Feb 11 00:39:56 [2051] vcsquorum crmd: notice: crm_update_peer_state: corosync_mark_unseen_peer_dead: Node vcs0.example.com[2852205578] - state is now lost
  1478. Feb 11 00:39:56 [2051] vcsquorum crmd: info: peer_update_callback: vcs0.example.com is now lost (was member)
  1479. Feb 11 00:39:56 [2051] vcsquorum crmd: info: send_ais_text: Peer overloaded or membership in flux: Re-sending message (Attempt 1 of 20)
  1480. Feb 11 00:39:56 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/42, version=0.11.2): OK (rc=0)
  1481. Feb 11 00:39:56 [2045] vcsquorum cib: info: send_ais_text: Peer overloaded or membership in flux: Re-sending message (Attempt 1 of 20)
  1482. Feb 11 00:39:56 [1099] vcsquorum corosync notice [TOTEM ] A processor joined or left the membership and a new membership (192.168.1.45:63856) was formed.
  1483. Feb 11 00:39:56 [1099] vcsquorum corosync notice [MAIN ] Completed service synchronization, ready to provide service.
  1484. Feb 11 00:39:57 [2051] vcsquorum crmd: info: join_make_offer: Making join offers based on membership 63856
  1485. Feb 11 00:39:57 [2051] vcsquorum crmd: info: do_dc_join_offer_all: join-3: Waiting on 2 outstanding join acks
  1486. Feb 11 00:39:57 [2051] vcsquorum crmd: info: do_dc_join_offer_all: join-4: Waiting on 2 outstanding join acks
  1487. Feb 11 00:39:57 [2051] vcsquorum crmd: warning: crmd_ha_msg_filter: Another DC detected: vcs1 (op=noop)
  1488. Feb 11 00:39:57 [2051] vcsquorum crmd: warning: crmd_ha_msg_filter: Another DC detected: vcs1 (op=join_offer)
  1489. Feb 11 00:39:57 [2051] vcsquorum crmd: warning: crmd_ha_msg_filter: Another DC detected: vcs1 (op=join_offer)
  1490. Feb 11 00:39:57 [2051] vcsquorum crmd: info: do_state_transition: State transition S_INTEGRATION -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=crmd_ha_msg_filter ]
  1491. Feb 11 00:39:57 [2045] vcsquorum cib: warning: cib_process_diff: Diff 0.9.1 -> 0.9.2 from vcs1 not applied to 0.11.3: current "epoch" is greater than required
  1492. Feb 11 00:39:57 [2051] vcsquorum crmd: warning: do_log: FSA: Input I_JOIN_OFFER from route_message() received in state S_ELECTION
  1493. Feb 11 00:39:57 [2051] vcsquorum crmd: warning: do_log: FSA: Input I_JOIN_OFFER from route_message() received in state S_ELECTION
  1494. Feb 11 00:39:57 [2045] vcsquorum cib: warning: cib_process_diff: Diff 0.9.2 -> 0.9.3 from vcs1 not applied to 0.11.3: current "epoch" is greater than required
  1495. Feb 11 00:39:57 [2051] vcsquorum crmd: info: do_election_count_vote: Election 6 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  1496. Feb 11 00:39:57 [2051] vcsquorum crmd: info: do_election_count_vote: Election 7 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  1497. Feb 11 00:39:57 [2051] vcsquorum crmd: info: do_election_count_vote: Election 8 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  1498. Feb 11 00:39:57 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
  1499. Feb 11 00:39:57 [2051] vcsquorum crmd: info: do_dc_takeover: Taking over DC status for this partition
  1500. Feb 11 00:39:57 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/44, version=0.11.4): OK (rc=0)
  1501. Feb 11 00:39:57 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/45, version=0.11.5): OK (rc=0)
  1502. Feb 11 00:39:57 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_sync_one for section 'all' (origin=vcs1/vcs1/(null), version=0.11.5): OK (rc=0)
  1503. Feb 11 00:39:57 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/47, version=0.11.6): OK (rc=0)
  1504. Feb 11 00:39:57 [2051] vcsquorum crmd: info: do_dc_join_offer_all: join-5: Waiting on 2 outstanding join acks
  1505. Feb 11 00:39:57 [2051] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
  1506. Feb 11 00:39:57 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/49, version=0.11.7): OK (rc=0)
  1507. Feb 11 00:39:57 [2051] vcsquorum crmd: info: crm_update_peer_expected: do_dc_join_filter_offer: Node vcs1[-1425984502] - expected state is now member
  1508. Feb 11 00:39:57 [2051] vcsquorum crmd: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
  1509. Feb 11 00:39:57 [2051] vcsquorum crmd: info: do_dc_join_finalize: join-5: Syncing the CIB from vcsquorum to the rest of the cluster
  1510. Feb 11 00:39:57 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/52, version=0.11.8): OK (rc=0)
  1511. Feb 11 00:39:57 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/53, version=0.11.9): OK (rc=0)
  1512. Feb 11 00:39:57 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/54, version=0.11.10): OK (rc=0)
  1513. Feb 11 00:39:57 [2051] vcsquorum crmd: info: do_dc_join_ack: join-5: Updating node state to member for vcsquorum
  1514. Feb 11 00:39:57 [2051] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcsquorum']/lrm
  1515. Feb 11 00:39:57 [2051] vcsquorum crmd: info: do_dc_join_ack: join-5: Updating node state to member for vcs1
  1516. Feb 11 00:39:57 [2051] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcs1']/lrm
  1517. Feb 11 00:39:57 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcsquorum']/lrm (origin=local/crmd/55, version=0.11.12): OK (rc=0)
  1518. Feb 11 00:39:57 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcs1']/lrm (origin=local/crmd/57, version=0.11.14): OK (rc=0)
  1519. Feb 11 00:39:57 [2051] vcsquorum crmd: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
  1520. Feb 11 00:39:57 [2051] vcsquorum crmd: info: abort_transition_graph: do_te_invoke:156 - Triggered transition abort (complete=1) : Peer Cancelled
  1521. Feb 11 00:39:57 [2049] vcsquorum attrd: notice: attrd_local_callback: Sending full refresh (origin=crmd)
  1522. Feb 11 00:39:57 [2049] vcsquorum attrd: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
  1523. Feb 11 00:39:57 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/59, version=0.11.16): OK (rc=0)
  1524. Feb 11 00:39:57 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/61, version=0.11.19): OK (rc=0)
  1525. Feb 11 00:39:57 [2050] vcsquorum pengine: error: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
  1526. Feb 11 00:39:57 [2050] vcsquorum pengine: error: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
  1527. Feb 11 00:39:57 [2050] vcsquorum pengine: error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
  1528. Feb 11 00:39:57 [2050] vcsquorum pengine: notice: stage6: Delaying fencing operations until there are resources to manage
  1529. Feb 11 00:39:57 [2050] vcsquorum pengine: notice: process_pe_message: Calculated Transition 2: /var/lib/pacemaker/pengine/pe-input-2.bz2
  1530. Feb 11 00:39:57 [2050] vcsquorum pengine: notice: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.
  1531. Feb 11 00:39:57 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
  1532. Feb 11 00:39:57 [2051] vcsquorum crmd: info: do_te_invoke: Processing graph 2 (ref=pe_calc-dc-1360564797-34) derived from /var/lib/pacemaker/pengine/pe-input-2.bz2
  1533. Feb 11 00:39:57 [2051] vcsquorum crmd: notice: run_graph: Transition 2 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-2.bz2): Complete
  1534. Feb 11 00:39:57 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
  1535. Feb 11 00:40:09 [1099] vcsquorum corosync notice [QUORUM] Members[3]: 755053578 -1442761718 -1425984502
  1536. Feb 11 00:40:09 [2051] vcsquorum crmd: info: pcmk_quorum_notification: Membership 63864: quorum retained (3)
  1537. Feb 11 00:40:09 [2051] vcsquorum crmd: notice: crm_update_peer_state: pcmk_quorum_notification: Node vcs0.example.com[2852205578] - state is now member
  1538. Feb 11 00:40:09 [2051] vcsquorum crmd: info: peer_update_callback: vcs0.example.com is now member (was lost)
  1539. Feb 11 00:40:09 [2051] vcsquorum crmd: info: send_ais_text: Peer overloaded or membership in flux: Re-sending message (Attempt 1 of 20)
  1540. Feb 11 00:40:09 [1099] vcsquorum corosync notice [TOTEM ] A processor joined or left the membership and a new membership (192.168.1.45:63864) was formed.
  1541. Feb 11 00:40:09 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/63, version=0.11.22): OK (rc=0)
  1542. Feb 11 00:40:09 [2045] vcsquorum cib: info: send_ais_text: Peer overloaded or membership in flux: Re-sending message (Attempt 1 of 20)
  1543. Feb 11 00:40:09 [1099] vcsquorum corosync notice [MAIN ] Completed service synchronization, ready to provide service.
  1544. Feb 11 00:40:14 [2043] vcsquorum pacemakerd: notice: update_node_processes: 0x1d1fd20 Node 2852205578 now known as vcs0, was:
  1545. Feb 11 00:40:14 [2047] vcsquorum stonith-ng: info: pcmk_cpg_membership: Joined[1.0] stonith-ng.-1442761718
  1546. Feb 11 00:40:14 [2047] vcsquorum stonith-ng: info: pcmk_cpg_membership: Member[1.0] stonith-ng.755053578
  1547. Feb 11 00:40:14 [2047] vcsquorum stonith-ng: info: crm_get_peer: Node <null> now has id: 2852205578
  1548. Feb 11 00:40:14 [2047] vcsquorum stonith-ng: info: pcmk_cpg_membership: Member[1.1] stonith-ng.-1442761718
  1549. Feb 11 00:40:14 [2047] vcsquorum stonith-ng: info: crm_update_peer_proc: pcmk_cpg_membership: Node (null)[-1442761718] - corosync-cpg is now online
  1550. Feb 11 00:40:14 [2047] vcsquorum stonith-ng: info: pcmk_cpg_membership: Member[1.2] stonith-ng.-1425984502
  1551. Feb 11 00:40:14 [2045] vcsquorum cib: info: pcmk_cpg_membership: Joined[1.0] cib.-1442761718
  1552. Feb 11 00:40:14 [2045] vcsquorum cib: info: pcmk_cpg_membership: Member[1.0] cib.755053578
  1553. Feb 11 00:40:14 [2045] vcsquorum cib: info: crm_get_peer: Node <null> now has id: 2852205578
  1554. Feb 11 00:40:14 [2045] vcsquorum cib: info: pcmk_cpg_membership: Member[1.1] cib.-1442761718
  1555. Feb 11 00:40:14 [2045] vcsquorum cib: info: crm_update_peer_proc: pcmk_cpg_membership: Node (null)[-1442761718] - corosync-cpg is now online
  1556. Feb 11 00:40:14 [2045] vcsquorum cib: info: pcmk_cpg_membership: Member[1.2] cib.-1425984502
  1557. Feb 11 00:40:15 [2047] vcsquorum stonith-ng: info: crm_get_peer: Node 2852205578 is now known as vcs0
  1558. Feb 11 00:40:15 [2051] vcsquorum crmd: info: pcmk_cpg_membership: Joined[1.0] crmd.-1442761718
  1559. Feb 11 00:40:15 [2047] vcsquorum stonith-ng: info: crm_get_peer: Node 2852205578 has uuid 2852205578
  1560. Feb 11 00:40:15 [2051] vcsquorum crmd: info: pcmk_cpg_membership: Member[1.0] crmd.755053578
  1561. Feb 11 00:40:15 [2051] vcsquorum crmd: info: pcmk_cpg_membership: Member[1.1] crmd.-1442761718
  1562. Feb 11 00:40:15 [2051] vcsquorum crmd: info: crm_update_peer_proc: pcmk_cpg_membership: Node vcs0.example.com[-1442761718] - corosync-cpg is now online
  1563. Feb 11 00:40:15 [2051] vcsquorum crmd: info: peer_update_callback: Client vcs0.example.com/peer now has status [online] (DC=true)
  1564. Feb 11 00:40:15 [2051] vcsquorum crmd: warning: match_down_event: No match for shutdown action on 2852205578
  1565. Feb 11 00:40:15 [2051] vcsquorum crmd: info: pcmk_cpg_membership: Member[1.2] crmd.-1425984502
  1566. Feb 11 00:40:15 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_IDLE -> S_INTEGRATION [ input=I_NODE_JOIN cause=C_FSA_INTERNAL origin=peer_update_callback ]
  1567. Feb 11 00:40:15 [2051] vcsquorum crmd: info: abort_transition_graph: do_te_invoke:163 - Triggered transition abort (complete=1) : Peer Halt
  1568. Feb 11 00:40:15 [2051] vcsquorum crmd: info: join_make_offer: Making join offers based on membership 63864
  1569. Feb 11 00:40:15 [2051] vcsquorum crmd: info: do_dc_join_offer_all: join-6: Waiting on 3 outstanding join acks
  1570. Feb 11 00:40:15 [2051] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
  1571. Feb 11 00:40:15 [2045] vcsquorum cib: info: crm_get_peer: Node 2852205578 is now known as vcs0
  1572. Feb 11 00:40:15 [2045] vcsquorum cib: info: crm_get_peer: Node 2852205578 has uuid 2852205578
  1573. Feb 11 00:40:15 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_sync_one for section 'all' (origin=vcs0/vcs0/(null), version=0.11.24): OK (rc=0)
  1574. Feb 11 00:40:15 [2045] vcsquorum cib: warning: cib_process_diff: Diff 0.0.0 -> 0.0.1 from vcs0 not applied to 0.11.24: current "epoch" is greater than required
  1575. Feb 11 00:40:15 [2051] vcsquorum crmd: crit: crm_get_peer: Node vcs0.example.com and vcs0 share the same cluster node id '2852205578'!
  1576. Feb 11 00:40:15 [2051] vcsquorum crmd: info: crm_get_peer: Node vcs0 now has id: 2852205578
  1577. Feb 11 00:40:15 [2051] vcsquorum crmd: info: crm_get_peer: Node 2852205578 is now known as vcs0
  1578. Feb 11 00:40:15 [2051] vcsquorum crmd: info: peer_update_callback: vcs0 is now (null)
  1579. Feb 11 00:40:15 [2051] vcsquorum crmd: info: crm_get_peer: Node 2852205578 has uuid 2852205578
  1580. Feb 11 00:40:15 [2051] vcsquorum crmd: error: crmd_ais_dispatch: Receiving messages from a node we think is dead: vcs0[-1442761718]
  1581. Feb 11 00:40:15 [2051] vcsquorum crmd: info: crm_update_peer_proc: crmd_ais_dispatch: Node vcs0[-1442761718] - corosync-cpg is now online
  1582. Feb 11 00:40:15 [2051] vcsquorum crmd: info: peer_update_callback: Client vcs0/peer now has status [online] (DC=true)
  1583. Feb 11 00:40:15 [2051] vcsquorum crmd: warning: match_down_event: No match for shutdown action on 2852205578
  1584. Feb 11 00:40:15 [2051] vcsquorum crmd: notice: peer_update_callback: Stonith/shutdown of vcs0 not matched
  1585. Feb 11 00:40:15 [2051] vcsquorum crmd: info: crm_update_peer_expected: peer_update_callback: Node vcs0[-1442761718] - expected state is now down
  1586. Feb 11 00:40:15 [2051] vcsquorum crmd: info: abort_transition_graph: peer_update_callback:211 - Triggered transition abort (complete=1) : Node failure
  1587. Feb 11 00:40:16 [2051] vcsquorum crmd: info: do_dc_join_offer_all: A new node joined the cluster
  1588. Feb 11 00:40:16 [2051] vcsquorum crmd: info: do_dc_join_offer_all: join-7: Waiting on 3 outstanding join acks
  1589. Feb 11 00:40:16 [2051] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
  1590. Feb 11 00:40:36 [2051] vcsquorum crmd: notice: do_election_count_vote: Election 2 (current: 11, owner: 2852205578): Processed vote from vcs0 (Peer is not part of our cluster)
  1591. Feb 11 00:43:15 [2051] vcsquorum crmd: error: crm_timer_popped: Integration Timer (I_INTEGRATED) just popped in state S_INTEGRATION! (180000ms)
  1592. Feb 11 00:43:15 [2051] vcsquorum crmd: info: crm_timer_popped: Welcomed: 1, Integrated: 2
  1593. Feb 11 00:43:15 [2051] vcsquorum crmd: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_TIMER_POPPED origin=crm_timer_popped ]
  1594. Feb 11 00:43:15 [2051] vcsquorum crmd: warning: do_state_transition: Progressed to state S_FINALIZE_JOIN after C_TIMER_POPPED
  1595. Feb 11 00:43:15 [2051] vcsquorum crmd: warning: do_state_transition: 1 cluster nodes failed to respond to the join offer.
  1596. Feb 11 00:43:15 [2051] vcsquorum crmd: info: ghash_print_node: Welcome reply not received from: vcs0.example.com 7
  1597. Feb 11 00:43:15 [2051] vcsquorum crmd: info: do_dc_join_finalize: join-7: Syncing the CIB from vcsquorum to the rest of the cluster
  1598. Feb 11 00:43:15 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/69, version=0.11.29): OK (rc=0)
  1599. Feb 11 00:43:15 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/70, version=0.11.30): OK (rc=0)
  1600. Feb 11 00:43:15 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/71, version=0.11.31): OK (rc=0)
  1601. Feb 11 00:43:15 [2051] vcsquorum crmd: info: do_dc_join_ack: join-7: Updating node state to member for vcsquorum
  1602. Feb 11 00:43:15 [2051] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcsquorum']/lrm
  1603. Feb 11 00:43:15 [2051] vcsquorum crmd: info: do_dc_join_ack: join-7: Updating node state to member for vcs1
  1604. Feb 11 00:43:15 [2051] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcs1']/lrm
  1605. Feb 11 00:43:15 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcsquorum']/lrm (origin=local/crmd/72, version=0.11.32): OK (rc=0)
  1606. Feb 11 00:43:15 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcs1']/lrm (origin=local/crmd/74, version=0.11.34): OK (rc=0)
  1607. Feb 11 00:43:15 [2051] vcsquorum crmd: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
  1608. Feb 11 00:43:15 [2051] vcsquorum crmd: warning: do_state_transition: Only 2 of 3 cluster nodes are eligible to run resources - continue 1
  1609. Feb 11 00:43:15 [2051] vcsquorum crmd: info: abort_transition_graph: do_te_invoke:156 - Triggered transition abort (complete=1) : Peer Cancelled
  1610. Feb 11 00:43:15 [2045] vcsquorum cib: warning: cib_process_diff: Diff 0.11.32 -> 0.11.33 from vcs0 not applied to 0.11.35: current "num_updates" is greater than required
  1611. Feb 11 00:43:15 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: --- 0.11.36
  1612. Feb 11 00:43:15 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: +++ 0.12.1
  1613. Feb 11 00:43:15 [2045] vcsquorum cib: notice: cib:diff: -- <node uname="vcs0.example.com" id="2852205578" />
  1614. Feb 11 00:43:15 [2045] vcsquorum cib: notice: cib:diff: ++ <node id="2852205578" uname="vcs0" />
  1615. Feb 11 00:43:15 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/76, version=0.12.1): OK (rc=0)
  1616. Feb 11 00:43:15 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:126 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=0.12.1) : Non-status change
  1617. Feb 11 00:43:15 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/78, version=0.12.3): OK (rc=0)
  1618. Feb 11 00:43:15 [2049] vcsquorum attrd: notice: attrd_local_callback: Sending full refresh (origin=crmd)
  1619. Feb 11 00:43:15 [2049] vcsquorum attrd: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
  1620. Feb 11 00:43:15 [2050] vcsquorum pengine: error: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
  1621. Feb 11 00:43:15 [2050] vcsquorum pengine: error: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
  1622. Feb 11 00:43:15 [2050] vcsquorum pengine: error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
  1623. Feb 11 00:43:15 [2050] vcsquorum pengine: notice: stage6: Delaying fencing operations until there are resources to manage
  1624. Feb 11 00:43:15 [2050] vcsquorum pengine: notice: process_pe_message: Calculated Transition 3: /var/lib/pacemaker/pengine/pe-input-3.bz2
  1625. Feb 11 00:43:15 [2050] vcsquorum pengine: notice: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.
  1626. Feb 11 00:43:15 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
  1627. Feb 11 00:43:15 [2051] vcsquorum crmd: info: do_te_invoke: Processing graph 3 (ref=pe_calc-dc-1360564995-47) derived from /var/lib/pacemaker/pengine/pe-input-3.bz2
  1628. Feb 11 00:43:15 [2051] vcsquorum crmd: notice: run_graph: Transition 3 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-3.bz2): Complete
  1629. Feb 11 00:43:15 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
  1630. Feb 11 00:43:15 [2045] vcsquorum cib: warning: cib_process_diff: Diff 0.11.33 -> 0.11.34 from vcs0 not applied to 0.12.5: current "epoch" is greater than required
  1631. Feb 11 00:45:36 [2045] vcsquorum cib: info: cib_process_replace: Digest matched on replace from vcs0: ccde8dddbecc04ef5e0f9d36a1a27e9c
  1632. Feb 11 00:45:36 [2045] vcsquorum cib: warning: cib_process_replace: Replacement 0.11.34 from vcs0 not applied to 0.12.6: current epoch is greater than the replacement
  1633. Feb 11 00:45:36 [2045] vcsquorum cib: warning: cib_diff_notify: Update (client: crmd, call:15): 0.12.6 -> 0.11.34 (Update was older than existing configuration)
  1634. Feb 11 00:45:36 [2045] vcsquorum cib: warning: cib_process_diff: Diff 0.11.34 -> 0.12.1 from vcs0 not applied to 0.12.6: current "epoch" is greater than required
  1635. Feb 11 00:45:36 [2045] vcsquorum cib: warning: cib_process_diff: Diff 0.12.1 -> 0.12.2 from vcs0 not applied to 0.12.6: current "num_updates" is greater than required
  1636. Feb 11 00:45:36 [2045] vcsquorum cib: warning: cib_process_diff: Diff 0.12.2 -> 0.12.3 from vcs0 not applied to 0.12.6: current "num_updates" is greater than required
  1637. Feb 11 00:45:36 [2045] vcsquorum cib: warning: cib_process_diff: Diff 0.12.3 -> 0.12.4 from vcs0 not applied to 0.12.6: current "num_updates" is greater than required
  1638. Feb 11 00:45:36 [2045] vcsquorum cib: warning: cib_process_diff: Diff 0.12.4 -> 0.13.1 from vcs0 not applied to 0.12.6: current "num_updates" is greater than required
  1639. Feb 11 00:45:36 [2045] vcsquorum cib: info: cib_process_diff: Diff 0.13.1 -> 0.13.2 from vcs0 not applied to 0.12.6: current "epoch" is less than required
  1640. Feb 11 00:45:36 [2045] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
  1641. Feb 11 00:45:36 [2045] vcsquorum cib: info: cib_process_diff: Diff 0.13.2 -> 0.13.3 from vcs0 not applied to 0.12.6: current "epoch" is less than required
  1642. Feb 11 00:45:36 [2045] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
  1643. Feb 11 00:45:36 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_sync_one for section 'all' (origin=vcs1/vcs1/(null), version=0.12.6): OK (rc=0)
  1644. Feb 11 00:45:36 [2045] vcsquorum cib: info: cib_process_diff: Diff 0.13.3 -> 0.13.4 from vcs0 not applied to 0.12.7: current "epoch" is less than required
  1645. Feb 11 00:45:36 [2045] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
  1646. Feb 11 00:45:37 [2045] vcsquorum cib: info: cib_process_diff: Diff 0.13.4 -> 0.13.5 from vcs0 not applied to 0.12.7: current "epoch" is less than required
  1647. Feb 11 00:45:37 [2045] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
  1648. Feb 11 00:45:37 [2045] vcsquorum cib: info: cib_process_diff: Diff 0.13.5 -> 0.13.6 from vcs0 not applied to 0.12.9: current "epoch" is less than required
  1649. Feb 11 00:45:37 [2045] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
  1650. Feb 11 00:45:45 [2051] vcsquorum crmd: info: handle_shutdown_request: Creating shutdown request for vcs0 (state=S_IDLE)
  1651. Feb 11 00:45:45 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update shutdown=(null) failed: No such device or address
  1652. Feb 11 00:45:45 [2045] vcsquorum cib: info: cib_process_diff: Diff 0.13.6 -> 0.13.7 from vcs0 not applied to 0.12.9: current "epoch" is less than required
  1653. Feb 11 00:45:45 [2045] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
  1654. Feb 11 00:45:45 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update shutdown=(null) failed: No such device or address
  1655. Feb 11 00:45:45 [2045] vcsquorum cib: info: cib_process_diff: Diff 0.13.7 -> 0.13.8 from vcs0 not applied to 0.12.9: current "epoch" is less than required
  1656. Feb 11 00:45:45 [2045] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
  1657. Feb 11 00:45:45 [2051] vcsquorum crmd: info: pcmk_cpg_membership: Left[2.0] crmd.-1442761718
  1658. Feb 11 00:45:45 [2051] vcsquorum crmd: info: crm_update_peer_proc: pcmk_cpg_membership: Node vcs0[-1442761718] - corosync-cpg is now offline
  1659. Feb 11 00:45:45 [2051] vcsquorum crmd: info: peer_update_callback: Client vcs0/peer now has status [offline] (DC=true)
  1660. Feb 11 00:45:45 [2051] vcsquorum crmd: warning: match_down_event: No match for shutdown action on 2852205578
  1661. Feb 11 00:45:45 [2051] vcsquorum crmd: notice: peer_update_callback: Stonith/shutdown of vcs0 not matched
  1662. Feb 11 00:45:45 [2051] vcsquorum crmd: info: abort_transition_graph: peer_update_callback:211 - Triggered transition abort (complete=1) : Node failure
  1663. Feb 11 00:45:45 [2051] vcsquorum crmd: info: pcmk_cpg_membership: Member[2.0] crmd.755053578
  1664. Feb 11 00:45:45 [2051] vcsquorum crmd: info: pcmk_cpg_membership: Member[2.1] crmd.-1425984502
  1665. Feb 11 00:45:45 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
  1666. Feb 11 00:45:45 [2051] vcsquorum crmd: warning: do_state_transition: Only 2 of 3 cluster nodes are eligible to run resources - continue 1
  1667. Feb 11 00:45:45 [2050] vcsquorum pengine: error: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
  1668. Feb 11 00:45:45 [2050] vcsquorum pengine: error: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
  1669. Feb 11 00:45:45 [2050] vcsquorum pengine: error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
  1670. Feb 11 00:45:45 [2050] vcsquorum pengine: notice: stage6: Delaying fencing operations until there are resources to manage
  1671. Feb 11 00:45:45 [2050] vcsquorum pengine: notice: process_pe_message: Calculated Transition 4: /var/lib/pacemaker/pengine/pe-input-4.bz2
  1672. Feb 11 00:45:45 [2050] vcsquorum pengine: notice: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.
  1673. Feb 11 00:45:45 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
  1674. Feb 11 00:45:45 [2051] vcsquorum crmd: info: do_te_invoke: Processing graph 4 (ref=pe_calc-dc-1360565145-48) derived from /var/lib/pacemaker/pengine/pe-input-4.bz2
  1675. Feb 11 00:45:45 [2051] vcsquorum crmd: notice: run_graph: Transition 4 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-4.bz2): Complete
  1676. Feb 11 00:45:45 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
  1677. Feb 11 00:45:46 [2047] vcsquorum stonith-ng: info: pcmk_cpg_membership: Left[2.0] stonith-ng.-1442761718
  1678. Feb 11 00:45:46 [2047] vcsquorum stonith-ng: info: crm_update_peer_proc: pcmk_cpg_membership: Node vcs0[-1442761718] - corosync-cpg is now offline
  1679. Feb 11 00:45:46 [2047] vcsquorum stonith-ng: info: pcmk_cpg_membership: Member[2.0] stonith-ng.755053578
  1680. Feb 11 00:45:46 [2047] vcsquorum stonith-ng: info: pcmk_cpg_membership: Member[2.1] stonith-ng.-1425984502
  1681. Feb 11 00:45:46 [2045] vcsquorum cib: info: pcmk_cpg_membership: Left[2.0] cib.-1442761718
  1682. Feb 11 00:45:46 [2045] vcsquorum cib: info: crm_update_peer_proc: pcmk_cpg_membership: Node vcs0[-1442761718] - corosync-cpg is now offline
  1683. Feb 11 00:45:46 [2045] vcsquorum cib: info: pcmk_cpg_membership: Member[2.0] cib.755053578
  1684. Feb 11 00:45:46 [2045] vcsquorum cib: info: pcmk_cpg_membership: Member[2.1] cib.-1425984502
  1685. Feb 11 00:46:05 [1099] vcsquorum corosync notice [QUORUM] Members[2]: 755053578 -1425984502
  1686. Feb 11 00:46:05 [2051] vcsquorum crmd: info: pcmk_quorum_notification: Membership 63868: quorum retained (2)
  1687. Feb 11 00:46:05 [2051] vcsquorum crmd: notice: corosync_mark_unseen_peer_dead: Node -1442761718/vcs0.example.com was not seen in the previous transition
  1688. Feb 11 00:46:05 [2051] vcsquorum crmd: notice: crm_update_peer_state: corosync_mark_unseen_peer_dead: Node vcs0.example.com[2852205578] - state is now lost
  1689. Feb 11 00:46:05 [2051] vcsquorum crmd: info: peer_update_callback: vcs0.example.com is now lost (was member)
  1690. Feb 11 00:46:05 [2051] vcsquorum crmd: info: send_ais_text: Peer overloaded or membership in flux: Re-sending message (Attempt 1 of 20)
  1691. Feb 11 00:46:05 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/83, version=0.12.11): OK (rc=0)
  1692. Feb 11 00:46:05 [2045] vcsquorum cib: info: send_ais_text: Peer overloaded or membership in flux: Re-sending message (Attempt 1 of 20)
  1693. Feb 11 00:46:05 [1099] vcsquorum corosync notice [TOTEM ] A processor joined or left the membership and a new membership (192.168.1.45:63868) was formed.
  1694. Feb 11 00:46:05 [1099] vcsquorum corosync notice [MAIN ] Completed service synchronization, ready to provide service.
  1695. Feb 11 00:56:49 [2051] vcsquorum crmd: info: handle_shutdown_request: Creating shutdown request for vcs1 (state=S_IDLE)
  1696. Feb 11 00:56:49 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:176 - Triggered transition abort (complete=1, tag=nvpair, id=status-2868982794-shutdown, name=shutdown, value=1360565809, magic=NA, cib=0.12.13) : Transient attribute: update
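
Note: the shutdown transient attribute carries a plain Unix timestamp (1360565809), the same value embedded in the pe_calc reference a few lines below; decoding it agrees with the log's own wall clock:

    # Python: the attribute value is seconds since the epoch
    import datetime
    print(datetime.datetime.fromtimestamp(1360565809))
    # prints 2013-02-11 00:56:49 on a host in this cluster's timezone
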
  1697. Feb 11 00:56:49 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
  1698. Feb 11 00:56:49 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update shutdown=(null) failed: No such device or address
  1699. Feb 11 00:56:49 [2050] vcsquorum pengine: error: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
  1700. Feb 11 00:56:49 [2050] vcsquorum pengine: error: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
  1701. Feb 11 00:56:49 [2050] vcsquorum pengine: error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
  1702. Feb 11 00:56:49 [2050] vcsquorum pengine: notice: stage6: Delaying fencing operations until there are resources to manage
  1703. Feb 11 00:56:49 [2050] vcsquorum pengine: notice: stage6: Scheduling Node vcs1 for shutdown
  1704. Feb 11 00:56:49 [2050] vcsquorum pengine: notice: process_pe_message: Calculated Transition 5: /var/lib/pacemaker/pengine/pe-input-5.bz2
  1705. Feb 11 00:56:49 [2050] vcsquorum pengine: notice: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.
  1706. Feb 11 00:56:49 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
  1707. Feb 11 00:56:49 [2051] vcsquorum crmd: info: do_te_invoke: Processing graph 5 (ref=pe_calc-dc-1360565809-50) derived from /var/lib/pacemaker/pengine/pe-input-5.bz2
  1708. Feb 11 00:56:49 [2051] vcsquorum crmd: info: te_crm_command: Executing crm-event (7): do_shutdown on vcs1
  1709. Feb 11 00:56:49 [2051] vcsquorum crmd: info: crm_update_peer_expected: te_crm_command: Node vcs1[-1425984502] - expected state is now down
  1710. Feb 11 00:56:49 [2051] vcsquorum crmd: notice: run_graph: Transition 5 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-5.bz2): Complete
  1711. Feb 11 00:56:49 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
  1712. Feb 11 00:56:49 [2051] vcsquorum crmd: info: pcmk_cpg_membership: Left[3.0] crmd.-1425984502
  1713. Feb 11 00:56:49 [2051] vcsquorum crmd: info: crm_update_peer_proc: pcmk_cpg_membership: Node vcs1[-1425984502] - corosync-cpg is now offline
  1714. Feb 11 00:56:49 [2051] vcsquorum crmd: info: peer_update_callback: Client vcs1/peer now has status [offline] (DC=true)
  1715. Feb 11 00:56:49 [2051] vcsquorum crmd: notice: peer_update_callback: do_shutdown of vcs1 (op 7) is complete
  1716. Feb 11 00:56:49 [2051] vcsquorum crmd: info: pcmk_cpg_membership: Member[3.0] crmd.755053578
  1717. Feb 11 00:56:49 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_IDLE -> S_INTEGRATION [ input=I_NODE_JOIN cause=C_FSA_INTERNAL origin=check_join_state ]
  1718. Feb 11 00:56:49 [2051] vcsquorum crmd: info: abort_transition_graph: do_te_invoke:163 - Triggered transition abort (complete=1) : Peer Halt
  1719. Feb 11 00:56:49 [2051] vcsquorum crmd: info: join_make_offer: Making join offers based on membership 63868
  1720. Feb 11 00:56:49 [2051] vcsquorum crmd: info: do_dc_join_offer_all: join-8: Waiting on 1 outstanding join acks
  1721. Feb 11 00:56:49 [2047] vcsquorum stonith-ng: info: pcmk_cpg_membership: Left[3.0] stonith-ng.-1425984502
  1722. Feb 11 00:56:49 [2047] vcsquorum stonith-ng: info: crm_update_peer_proc: pcmk_cpg_membership: Node vcs1[-1425984502] - corosync-cpg is now offline
  1723. Feb 11 00:56:49 [2047] vcsquorum stonith-ng: info: pcmk_cpg_membership: Member[3.0] stonith-ng.755053578
  1724. Feb 11 00:56:49 [2051] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
  1725. Feb 11 00:56:49 [2045] vcsquorum cib: info: pcmk_cpg_membership: Left[3.0] cib.-1425984502
  1726. Feb 11 00:56:49 [2045] vcsquorum cib: info: crm_update_peer_proc: pcmk_cpg_membership: Node vcs1[-1425984502] - corosync-cpg is now offline
  1727. Feb 11 00:56:49 [2045] vcsquorum cib: info: pcmk_cpg_membership: Member[3.0] cib.755053578
  1728. Feb 11 00:56:49 [2051] vcsquorum crmd: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
  1729. Feb 11 00:56:49 [2051] vcsquorum crmd: info: do_dc_join_finalize: join-8: Syncing the CIB from vcsquorum to the rest of the cluster
  1730. Feb 11 00:56:49 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/88, version=0.12.14): OK (rc=0)
  1731. Feb 11 00:56:49 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/89, version=0.12.15): OK (rc=0)
  1732. Feb 11 00:56:49 [2051] vcsquorum crmd: info: do_dc_join_ack: join-8: Updating node state to member for vcsquorum
  1733. Feb 11 00:56:49 [2051] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcsquorum']/lrm
  1734. Feb 11 00:56:49 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcsquorum']/lrm (origin=local/crmd/90, version=0.12.16): OK (rc=0)
  1735. Feb 11 00:56:49 [2051] vcsquorum crmd: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
  1736. Feb 11 00:56:49 [2051] vcsquorum crmd: info: abort_transition_graph: do_te_invoke:156 - Triggered transition abort (complete=1) : Peer Cancelled
  1737. Feb 11 00:56:49 [2049] vcsquorum attrd: notice: attrd_local_callback: Sending full refresh (origin=crmd)
  1738. Feb 11 00:56:49 [2049] vcsquorum attrd: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
  1739. Feb 11 00:56:49 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/92, version=0.12.18): OK (rc=0)
  1740. Feb 11 00:56:49 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/94, version=0.12.20): OK (rc=0)
  1741. Feb 11 00:56:50 [2050] vcsquorum pengine: error: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
  1742. Feb 11 00:56:50 [2050] vcsquorum pengine: error: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
  1743. Feb 11 00:56:50 [2050] vcsquorum pengine: error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
  1744. Feb 11 00:56:50 [2050] vcsquorum pengine: notice: stage6: Delaying fencing operations until there are resources to manage
  1745. Feb 11 00:56:50 [2050] vcsquorum pengine: notice: process_pe_message: Calculated Transition 6: /var/lib/pacemaker/pengine/pe-input-6.bz2
  1746. Feb 11 00:56:50 [2050] vcsquorum pengine: notice: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.
  1747. Feb 11 00:56:50 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
  1748. Feb 11 00:56:50 [2051] vcsquorum crmd: info: do_te_invoke: Processing graph 6 (ref=pe_calc-dc-1360565810-56) derived from /var/lib/pacemaker/pengine/pe-input-6.bz2
  1749. Feb 11 00:56:50 [2051] vcsquorum crmd: notice: run_graph: Transition 6 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-6.bz2): Complete
  1750. Feb 11 00:56:50 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
  1751. Feb 11 00:56:56 [1099] vcsquorum corosync notice [QUORUM] This node is within the non-primary component and will NOT provide any services.
  1752. Feb 11 00:56:56 [1099] vcsquorum corosync notice [QUORUM] Members[1]: 755053578
  1753. Feb 11 00:56:56 [2051] vcsquorum crmd: notice: pcmk_quorum_notification: Membership 63872: quorum lost (1)
  1754. Feb 11 00:56:56 [2051] vcsquorum crmd: notice: corosync_mark_unseen_peer_dead: Node -1425984502/vcs1 was not seen in the previous transition
  1755. Feb 11 00:56:56 [2051] vcsquorum crmd: notice: crm_update_peer_state: corosync_mark_unseen_peer_dead: Node vcs1[2868982794] - state is now lost
  1756. Feb 11 00:56:56 [2051] vcsquorum crmd: info: peer_update_callback: vcs1 is now lost (was member)
  1757. Feb 11 00:56:56 [2051] vcsquorum crmd: info: send_ais_text: Peer overloaded or membership in flux: Re-sending message (Attempt 1 of 20)
  1758. Feb 11 00:56:56 [1099] vcsquorum corosync notice [QUORUM] Members[1]: 755053578
  1759. Feb 11 00:56:56 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/97, version=0.12.22): OK (rc=0)
  1760. Feb 11 00:56:56 [1099] vcsquorum corosync notice [TOTEM ] A processor joined or left the membership and a new membership (192.168.1.45:63872) was formed.
  1761. Feb 11 00:56:56 [2045] vcsquorum cib: info: send_ais_text: Peer overloaded or membership in flux: Re-sending message (Attempt 1 of 20)
  1762. Feb 11 00:56:56 [1099] vcsquorum corosync notice [MAIN ] Completed service synchronization, ready to provide service.
  1763. Feb 11 00:56:57 [2051] vcsquorum crmd: info: pcmk_quorum_notification: Membership 63872: quorum still lost (1)
  1764. Feb 11 00:56:57 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/98, version=0.12.23): OK (rc=0)
  1765. Feb 11 00:56:57 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/100, version=0.12.25): OK (rc=0)
  1766. Feb 11 00:57:02 [2051] vcsquorum crmd: info: pcmk_quorum_notification: Membership 63876: quorum still lost (2)
  1767. Feb 11 00:57:02 [2051] vcsquorum crmd: notice: crm_update_peer_state: pcmk_quorum_notification: Node vcs1[2868982794] - state is now member
  1768. Feb 11 00:57:02 [2051] vcsquorum crmd: info: peer_update_callback: vcs1 is now member (was lost)
  1769. Feb 11 00:57:02 [2051] vcsquorum crmd: info: send_ais_text: Peer overloaded or membership in flux: Re-sending message (Attempt 1 of 20)
  1770. Feb 11 00:57:02 [1099] vcsquorum corosync notice [QUORUM] Members[2]: 755053578 -1425984502
  1771. Feb 11 00:57:02 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/102, version=0.12.27): OK (rc=0)
  1772. Feb 11 00:57:02 [2045] vcsquorum cib: info: send_ais_text: Peer overloaded or membership in flux: Re-sending message (Attempt 1 of 20)
  1773. Feb 11 00:57:02 [1099] vcsquorum corosync notice [TOTEM ] A processor joined or left the membership and a new membership (192.168.1.45:63876) was formed.
  1774. Feb 11 00:57:02 [1099] vcsquorum corosync notice [QUORUM] This node is within the primary component and will provide service.
  1775. Feb 11 00:57:02 [1099] vcsquorum corosync notice [QUORUM] Members[2]: 755053578 -1425984502
  1776. Feb 11 00:57:02 [1099] vcsquorum corosync notice [MAIN ] Completed service synchronization, ready to provide service.
  1777. Feb 11 00:57:03 [2051] vcsquorum crmd: notice: pcmk_quorum_notification: Membership 63876: quorum acquired (2)
  1778. Feb 11 00:57:03 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/104, version=0.12.29): OK (rc=0)
  1779. Feb 11 00:57:03 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/105, version=0.12.30): OK (rc=0)
  1780. Feb 11 00:57:10 [2047] vcsquorum stonith-ng: info: pcmk_cpg_membership: Joined[4.0] stonith-ng.-1425984502
  1781. Feb 11 00:57:10 [2047] vcsquorum stonith-ng: info: pcmk_cpg_membership: Member[4.0] stonith-ng.755053578
  1782. Feb 11 00:57:10 [2047] vcsquorum stonith-ng: info: pcmk_cpg_membership: Member[4.1] stonith-ng.-1425984502
  1783. Feb 11 00:57:10 [2047] vcsquorum stonith-ng: info: crm_update_peer_proc: pcmk_cpg_membership: Node vcs1[-1425984502] - corosync-cpg is now online
  1784. Feb 11 00:57:10 [2045] vcsquorum cib: info: pcmk_cpg_membership: Joined[4.0] cib.-1425984502
  1785. Feb 11 00:57:10 [2045] vcsquorum cib: info: pcmk_cpg_membership: Member[4.0] cib.755053578
  1786. Feb 11 00:57:10 [2045] vcsquorum cib: info: pcmk_cpg_membership: Member[4.1] cib.-1425984502
  1787. Feb 11 00:57:10 [2045] vcsquorum cib: info: crm_update_peer_proc: pcmk_cpg_membership: Node vcs1[-1425984502] - corosync-cpg is now online
  1788. Feb 11 00:57:11 [2051] vcsquorum crmd: info: pcmk_cpg_membership: Joined[4.0] crmd.-1425984502
  1789. Feb 11 00:57:11 [2051] vcsquorum crmd: info: pcmk_cpg_membership: Member[4.0] crmd.755053578
  1790. Feb 11 00:57:11 [2051] vcsquorum crmd: info: pcmk_cpg_membership: Member[4.1] crmd.-1425984502
  1791. Feb 11 00:57:11 [2051] vcsquorum crmd: info: crm_update_peer_proc: pcmk_cpg_membership: Node vcs1[-1425984502] - corosync-cpg is now online
  1792. Feb 11 00:57:11 [2051] vcsquorum crmd: info: peer_update_callback: Client vcs1/peer now has status [online] (DC=true)
  1793. Feb 11 00:57:11 [2051] vcsquorum crmd: warning: match_down_event: No match for shutdown action on 2868982794
  1794. Feb 11 00:57:11 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_IDLE -> S_INTEGRATION [ input=I_NODE_JOIN cause=C_FSA_INTERNAL origin=peer_update_callback ]
  1795. Feb 11 00:57:11 [2051] vcsquorum crmd: info: abort_transition_graph: do_te_invoke:163 - Triggered transition abort (complete=1) : Peer Halt
  1796. Feb 11 00:57:11 [2051] vcsquorum crmd: info: join_make_offer: Making join offers based on membership 63876
  1797. Feb 11 00:57:11 [2051] vcsquorum crmd: info: do_dc_join_offer_all: join-9: Waiting on 2 outstanding join acks
  1798. Feb 11 00:57:11 [2051] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
  1799. Feb 11 00:57:11 [2045] vcsquorum cib: info: cib_process_diff: Diff 0.13.0 -> 0.13.1 from vcs1 not applied to 0.12.32: current "epoch" is less than required
  1800. Feb 11 00:57:11 [2045] vcsquorum cib: warning: cib_server_process_diff: Not requesting full refresh in R/W mode
  1801. Feb 11 00:57:12 [2051] vcsquorum crmd: info: do_dc_join_offer_all: A new node joined the cluster
  1802. Feb 11 00:57:12 [2051] vcsquorum crmd: info: do_dc_join_offer_all: join-10: Waiting on 2 outstanding join acks
  1803. Feb 11 00:57:12 [2051] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
  1804. Feb 11 00:57:13 [2051] vcsquorum crmd: info: crm_update_peer_expected: do_dc_join_filter_offer: Node vcs1[-1425984502] - expected state is now member
  1805. Feb 11 00:57:13 [2051] vcsquorum crmd: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
  1806. Feb 11 00:57:13 [2051] vcsquorum crmd: info: do_dc_join_finalize: join-10: Syncing the CIB from vcs1 to the rest of the cluster
  1807. Feb 11 00:57:13 [2045] vcsquorum cib: info: cib_process_replace: Digest matched on replace from vcs1: 40b422e480d7c38434ee5f3cca92f438
  1808. Feb 11 00:57:13 [2045] vcsquorum cib: info: cib_process_replace: Replaced 0.12.32 with 0.13.1 from vcs1
  1809. Feb 11 00:57:13 [2045] vcsquorum cib: info: cib_replace_notify: Replaced: 0.12.32 -> 0.13.1 from vcs1
  1810. Feb 11 00:57:13 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: --- 0.12.32
  1811. Feb 11 00:57:13 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: +++ 0.13.1
  1812. Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: -- <node uname="vcs1" id="2868982794" />
  1813. Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: -- <node uname="vcsquorum" id="755053578" />
  1814. Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: -- <node_state id="2868982794" uname="vcs1" in_ccm="true" crmd="online" join="down" crm-debug-origin="peer_update_callback" expected="down" >
  1815. Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: -- <transient_attributes id="2868982794" >
  1816. Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: -- <instance_attributes id="status-2868982794" >
  1817. Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: -- <nvpair id="status-2868982794-probe_complete" name="probe_complete" value="true" />
  1818. Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: -- <nvpair id="status-2868982794-shutdown" name="shutdown" value="1360565809" />
  1819. Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: -- </instance_attributes>
  1820. Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: -- </transient_attributes>
  1821. Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: -- <lrm id="2868982794" >
  1822. Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: -- <lrm_resources />
  1823. Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: -- </lrm>
  1824. Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: -- </node_state>
  1825. Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: -- <node_state id="755053578" uname="vcsquorum" in_ccm="true" crmd="online" join="member" expected="member" crm-debug-origin="post_cache_update" >
  1826. Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: -- <transient_attributes id="755053578" >
  1827. Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: -- <instance_attributes id="status-755053578" >
  1828. Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: -- <nvpair id="status-755053578-probe_complete" name="probe_complete" value="true" />
  1829. Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: -- </instance_attributes>
  1830. Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: -- </transient_attributes>
  1831. Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: -- <lrm id="755053578" >
  1832. Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: -- <lrm_resources />
  1833. Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: -- </lrm>
  1834. Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: -- </node_state>
  1835. Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: -- <node_state id="2852205578" uname="vcs0" in_ccm="false" crmd="offline" join="down" crm-debug-origin="post_cache_update" expected="down" />
  1836. Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: ++ <node id="2868982794" uname="vcs1.example.com" />
  1837. Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: ++ <node id="755053578" uname="vcsquorum.example.com" />
  1838. Feb 11 00:57:13 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=vcs1/vcs1/110, version=0.13.1): OK (rc=0)
  1839. Feb 11 00:57:13 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update shutdown=(null) failed: No such device or address
  1840. Feb 11 00:57:13 [2051] vcsquorum crmd: info: do_dc_join_ack: join-10: Updating node state to member for vcsquorum
  1841. Feb 11 00:57:13 [2051] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcsquorum']/lrm
  1842. Feb 11 00:57:13 [2051] vcsquorum crmd: info: do_dc_join_ack: join-10: Updating node state to member for vcs1
  1843. Feb 11 00:57:13 [2051] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcs1']/lrm
  1844. Feb 11 00:57:13 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Local-only Change: 0.14.1
  1845. Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: -- <node uname="vcsquorum.example.com" id="755053578" />
  1846. Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: ++ <node id="755053578" uname="vcsquorum" />
  1847. Feb 11 00:57:13 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/111, version=0.14.1): OK (rc=0)
  1848. Feb 11 00:57:13 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Local-only Change: 0.15.1
  1849. Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: -- <node uname="vcs1.example.com" id="2868982794" />
  1850. Feb 11 00:57:13 [2045] vcsquorum cib: notice: cib:diff: ++ <node id="2868982794" uname="vcs1" />
  1851. Feb 11 00:57:13 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/112, version=0.15.1): OK (rc=0)
  1852. Feb 11 00:57:13 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcs1']/transient_attributes (origin=vcs1/crmd/8, version=0.15.2): OK (rc=0)
  1853. Feb 11 00:57:13 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update terminate=(null) failed: No such device or address
  1854. Feb 11 00:57:13 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_sync_one for section 'all' (origin=vcs1/vcs1/(null), version=0.15.2): OK (rc=0)
  1855. Feb 11 00:57:13 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcsquorum']/lrm (origin=local/crmd/113, version=0.15.3): OK (rc=0)
  1856. Feb 11 00:57:13 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcs1']/lrm (origin=local/crmd/115, version=0.15.5): OK (rc=0)
  1857. Feb 11 00:57:13 [2051] vcsquorum crmd: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
  1858. Feb 11 00:57:13 [2051] vcsquorum crmd: info: abort_transition_graph: do_te_invoke:156 - Triggered transition abort (complete=1) : Peer Cancelled
  1859. Feb 11 00:57:13 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/117, version=0.15.7): OK (rc=0)
  1860. Feb 11 00:57:13 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/119, version=0.15.9): OK (rc=0)
  1861. Feb 11 00:57:13 [2049] vcsquorum attrd: notice: attrd_local_callback: Sending full refresh (origin=crmd)
  1862. Feb 11 00:57:13 [2049] vcsquorum attrd: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
  1863. Feb 11 00:57:14 [2050] vcsquorum pengine: error: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
  1864. Feb 11 00:57:14 [2050] vcsquorum pengine: error: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
  1865. Feb 11 00:57:14 [2050] vcsquorum pengine: error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
  1866. Feb 11 00:57:14 [2050] vcsquorum pengine: notice: stage6: Delaying fencing operations until there are resources to manage
  1867. Feb 11 00:57:14 [2050] vcsquorum pengine: notice: process_pe_message: Calculated Transition 7: /var/lib/pacemaker/pengine/pe-input-7.bz2
  1868. Feb 11 00:57:14 [2050] vcsquorum pengine: notice: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.
  1869. Feb 11 00:57:14 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
  1870. Feb 11 00:57:14 [2051] vcsquorum crmd: info: do_te_invoke: Processing graph 7 (ref=pe_calc-dc-1360565834-70) derived from /var/lib/pacemaker/pengine/pe-input-7.bz2
  1871. Feb 11 00:57:14 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 2: probe_complete probe_complete on vcs1 - no waiting
  1872. Feb 11 00:57:14 [2051] vcsquorum crmd: notice: run_graph: Transition 7 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-7.bz2): Complete
  1873. Feb 11 00:57:14 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
  1874. Feb 11 00:58:17 [1099] vcsquorum corosync notice [QUORUM] Members[3]: 755053578 -1442761718 -1425984502
  1875. Feb 11 00:58:17 [2051] vcsquorum crmd: info: pcmk_quorum_notification: Membership 63880: quorum retained (3)
  1876. Feb 11 00:58:17 [2051] vcsquorum crmd: notice: crm_update_peer_state: pcmk_quorum_notification: Node vcs0[2852205578] - state is now member
  1877. Feb 11 00:58:17 [2051] vcsquorum crmd: info: peer_update_callback: vcs0 is now member (was (null))
  1878. Feb 11 00:58:17 [2051] vcsquorum crmd: info: send_ais_text: Peer overloaded or membership in flux: Re-sending message (Attempt 1 of 20)
  1879. Feb 11 00:58:17 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/122, version=0.15.14): OK (rc=0)
  1880. Feb 11 00:58:17 [2045] vcsquorum cib: info: send_ais_text: Peer overloaded or membership in flux: Re-sending message (Attempt 1 of 20)
  1881. Feb 11 00:58:17 [1099] vcsquorum corosync notice [TOTEM ] A processor joined or left the membership and a new membership (192.168.1.45:63880) was formed.
  1882. Feb 11 00:58:17 [1099] vcsquorum corosync notice [MAIN ] Completed service synchronization, ready to provide service.
  1883. Feb 11 00:58:28 [2047] vcsquorum stonith-ng: info: pcmk_cpg_membership: Joined[5.0] stonith-ng.-1442761718
  1884. Feb 11 00:58:28 [2047] vcsquorum stonith-ng: info: pcmk_cpg_membership: Member[5.0] stonith-ng.755053578
  1885. Feb 11 00:58:28 [2047] vcsquorum stonith-ng: info: pcmk_cpg_membership: Member[5.1] stonith-ng.-1442761718
  1886. Feb 11 00:58:28 [2047] vcsquorum stonith-ng: info: crm_update_peer_proc: pcmk_cpg_membership: Node vcs0[-1442761718] - corosync-cpg is now online
  1887. Feb 11 00:58:28 [2047] vcsquorum stonith-ng: info: pcmk_cpg_membership: Member[5.2] stonith-ng.-1425984502
  1888. Feb 11 00:58:28 [2045] vcsquorum cib: info: pcmk_cpg_membership: Joined[5.0] cib.-1442761718
  1889. Feb 11 00:58:28 [2045] vcsquorum cib: info: pcmk_cpg_membership: Member[5.0] cib.755053578
  1890. Feb 11 00:58:28 [2045] vcsquorum cib: info: pcmk_cpg_membership: Member[5.1] cib.-1442761718
  1891. Feb 11 00:58:28 [2045] vcsquorum cib: info: crm_update_peer_proc: pcmk_cpg_membership: Node vcs0[-1442761718] - corosync-cpg is now online
  1892. Feb 11 00:58:28 [2045] vcsquorum cib: info: pcmk_cpg_membership: Member[5.2] cib.-1425984502
  1893. Feb 11 00:58:29 [2051] vcsquorum crmd: info: pcmk_cpg_membership: Joined[5.0] crmd.-1442761718
  1894. Feb 11 00:58:29 [2051] vcsquorum crmd: info: pcmk_cpg_membership: Member[5.0] crmd.755053578
  1895. Feb 11 00:58:29 [2051] vcsquorum crmd: info: pcmk_cpg_membership: Member[5.1] crmd.-1442761718
  1896. Feb 11 00:58:29 [2051] vcsquorum crmd: info: crm_update_peer_proc: pcmk_cpg_membership: Node vcs0[-1442761718] - corosync-cpg is now online
  1897. Feb 11 00:58:29 [2051] vcsquorum crmd: info: peer_update_callback: Client vcs0/peer now has status [online] (DC=true)
  1898. Feb 11 00:58:29 [2051] vcsquorum crmd: warning: match_down_event: No match for shutdown action on 2852205578
  1899. Feb 11 00:58:29 [2051] vcsquorum crmd: info: pcmk_cpg_membership: Member[5.2] crmd.-1425984502
  1900. Feb 11 00:58:29 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_IDLE -> S_INTEGRATION [ input=I_NODE_JOIN cause=C_FSA_INTERNAL origin=peer_update_callback ]
  1901. Feb 11 00:58:29 [2051] vcsquorum crmd: info: abort_transition_graph: do_te_invoke:163 - Triggered transition abort (complete=1) : Peer Halt
  1902. Feb 11 00:58:29 [2051] vcsquorum crmd: info: join_make_offer: Making join offers based on membership 63880
  1903. Feb 11 00:58:29 [2051] vcsquorum crmd: info: do_dc_join_offer_all: join-11: Waiting on 3 outstanding join acks
  1904. Feb 11 00:58:29 [2051] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
  1905. Feb 11 00:58:29 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_sync_one for section 'all' (origin=vcs0/vcs0/(null), version=0.15.16): OK (rc=0)
  1906. Feb 11 00:58:29 [2045] vcsquorum cib: warning: cib_process_diff: Diff 0.0.0 -> 0.0.1 from vcs0 not applied to 0.15.16: current "epoch" is greater than required
  1907. Feb 11 00:58:30 [2051] vcsquorum crmd: info: do_dc_join_offer_all: A new node joined the cluster
  1908. Feb 11 00:58:30 [2051] vcsquorum crmd: info: do_dc_join_offer_all: join-12: Waiting on 3 outstanding join acks
  1909. Feb 11 00:58:30 [2051] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
  1910. Feb 11 00:58:31 [2051] vcsquorum crmd: info: crm_update_peer_expected: do_dc_join_filter_offer: Node vcs0[-1442761718] - expected state is now member
  1911. Feb 11 00:58:31 [2051] vcsquorum crmd: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
  1912. Feb 11 00:58:31 [2051] vcsquorum crmd: info: do_dc_join_finalize: join-12: Syncing the CIB from vcsquorum to the rest of the cluster
  1913. Feb 11 00:58:31 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/127, version=0.15.16): OK (rc=0)
  1914. Feb 11 00:58:31 [2051] vcsquorum crmd: info: do_dc_join_ack: join-12: Updating node state to member for vcsquorum
  1915. Feb 11 00:58:31 [2051] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcsquorum']/lrm
  1916. Feb 11 00:58:31 [2051] vcsquorum crmd: info: do_dc_join_ack: join-12: Updating node state to member for vcs0
  1917. Feb 11 00:58:31 [2051] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcs0']/lrm
  1918. Feb 11 00:58:31 [2051] vcsquorum crmd: info: do_dc_join_ack: join-12: Updating node state to member for vcs1
  1919. Feb 11 00:58:31 [2051] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcs1']/lrm
  1920. Feb 11 00:58:31 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/128, version=0.15.17): OK (rc=0)
  1921. Feb 11 00:58:31 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/129, version=0.15.18): OK (rc=0)
  1922. Feb 11 00:58:31 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/130, version=0.15.19): OK (rc=0)
  1923. Feb 11 00:58:31 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcs0']/transient_attributes (origin=vcs0/crmd/9, version=0.15.20): OK (rc=0)
  1924. Feb 11 00:58:31 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcsquorum']/lrm (origin=local/crmd/131, version=0.15.22): OK (rc=0)
  1925. Feb 11 00:58:31 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcs0']/lrm (origin=local/crmd/133, version=0.15.24): OK (rc=0)
  1926. Feb 11 00:58:31 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcs1']/lrm (origin=local/crmd/135, version=0.15.26): OK (rc=0)
  1927. Feb 11 00:58:31 [2051] vcsquorum crmd: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
  1928. Feb 11 00:58:31 [2051] vcsquorum crmd: info: abort_transition_graph: do_te_invoke:156 - Triggered transition abort (complete=1) : Peer Cancelled
  1929. Feb 11 00:58:31 [2049] vcsquorum attrd: notice: attrd_local_callback: Sending full refresh (origin=crmd)
  1930. Feb 11 00:58:31 [2049] vcsquorum attrd: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
  1931. Feb 11 00:58:31 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/137, version=0.15.28): OK (rc=0)
  1932. Feb 11 00:58:31 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/139, version=0.15.31): OK (rc=0)
  1933. Feb 11 00:58:31 [2050] vcsquorum pengine: error: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
  1934. Feb 11 00:58:31 [2050] vcsquorum pengine: error: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
  1935. Feb 11 00:58:31 [2050] vcsquorum pengine: error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
  1936. Feb 11 00:58:31 [2050] vcsquorum pengine: notice: stage6: Delaying fencing operations until there are resources to manage
  1937. Feb 11 00:58:31 [2050] vcsquorum pengine: notice: process_pe_message: Calculated Transition 8: /var/lib/pacemaker/pengine/pe-input-8.bz2
  1938. Feb 11 00:58:31 [2050] vcsquorum pengine: notice: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.
  1939. Feb 11 00:58:31 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
  1940. Feb 11 00:58:31 [2051] vcsquorum crmd: info: do_te_invoke: Processing graph 8 (ref=pe_calc-dc-1360565911-85) derived from /var/lib/pacemaker/pengine/pe-input-8.bz2
  1941. Feb 11 00:58:31 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 2: probe_complete probe_complete on vcs0 - no waiting
  1942. Feb 11 00:58:31 [2051] vcsquorum crmd: notice: run_graph: Transition 8 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-8.bz2): Complete
  1943. Feb 11 00:58:31 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
  1944. Feb 11 00:59:33 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:126 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=0.16.1) : Non-status change
  1945. Feb 11 00:59:33 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
  1946. Feb 11 00:59:33 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: --- 0.15.36
  1947. Feb 11 00:59:33 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: +++ 0.16.1
  1948. Feb 11 00:59:33 [2045] vcsquorum cib: notice: cib:diff: -- <cib admin_epoch="0" epoch="15" num_updates="36" />
  1949. Feb 11 00:59:33 [2045] vcsquorum cib: notice: cib:diff: ++ <instance_attributes id="nodes-755053578" >
  1950. Feb 11 00:59:33 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="nodes-755053578-standby" name="standby" value="on" />
  1951. Feb 11 00:59:33 [2045] vcsquorum cib: notice: cib:diff: ++ </instance_attributes>
  1952. Feb 11 00:59:33 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crm_attribute/5, version=0.16.1): OK (rc=0)
  1953. Feb 11 00:59:33 [2050] vcsquorum pengine: error: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
  1954. Feb 11 00:59:33 [2050] vcsquorum pengine: error: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
  1955. Feb 11 00:59:33 [2050] vcsquorum pengine: error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
  1956. Feb 11 00:59:33 [2050] vcsquorum pengine: notice: stage6: Delaying fencing operations until there are resources to manage
  1957. Feb 11 00:59:33 [2050] vcsquorum pengine: notice: process_pe_message: Calculated Transition 9: /var/lib/pacemaker/pengine/pe-input-9.bz2
  1958. Feb 11 00:59:33 [2050] vcsquorum pengine: notice: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.
  1959. Feb 11 00:59:33 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
  1960. Feb 11 00:59:33 [2051] vcsquorum crmd: info: do_te_invoke: Processing graph 9 (ref=pe_calc-dc-1360565973-87) derived from /var/lib/pacemaker/pengine/pe-input-9.bz2
  1961. Feb 11 00:59:33 [2051] vcsquorum crmd: notice: run_graph: Transition 9 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-9.bz2): Complete
  1962. Feb 11 00:59:33 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
  1963. Feb 11 01:00:45 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:126 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=0.17.1) : Non-status change
  1964. Feb 11 01:00:45 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
  1965. Feb 11 01:00:45 [2045] vcsquorum cib: info: cib_replace_notify: Local-only Replace: 0.17.1 from <null>
  1966. Feb 11 01:00:45 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Local-only Change: 0.17.1
  1967. Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: -- <cib admin_epoch="0" epoch="16" num_updates="1" />
  1968. Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="cib-bootstrap-options-stonith-enabled" name="stonith-enabled" value="true" />
  1969. Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="cib-bootstrap-options-no-quorum-policy" name="no-quorum-policy" value="freeze" />
  1970. Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ <primitive class="stonith" id="stonithvcs0" type="external/webpowerswitch" >
  1971. Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ <instance_attributes id="stonithvcs0-instance_attributes" >
  1972. Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="stonithvcs0-instance_attributes-wps_ipaddr" name="wps_ipaddr" value="192.168.7.100" />
  1973. Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="stonithvcs0-instance_attributes-wps_port" name="wps_port" value="2" />
  1974. Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="stonithvcs0-instance_attributes-wps_username" name="wps_username" value="xxx" />
  1975. Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="stonithvcs0-instance_attributes-wps_password" name="wps_password" value="xxx" />
  1976. Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="stonithvcs0-instance_attributes-hostname_to_stonith" name="hostname_to_stonith" value="vcs0" />
  1977. Feb 11 01:00:45 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
  1978. Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ </instance_attributes>
  1979. Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ </primitive>
  1980. Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ <primitive class="stonith" id="stonithvcs1" type="external/webpowerswitch" >
  1981. Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ <instance_attributes id="stonithvcs1-instance_attributes" >
  1982. Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="stonithvcs1-instance_attributes-wps_ipaddr" name="wps_ipaddr" value="192.168.7.200" />
  1983. Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="stonithvcs1-instance_attributes-wps_port" name="wps_port" value="2" />
  1984. Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="stonithvcs1-instance_attributes-wps_username" name="wps_username" value="xxx" />
  1985. Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="stonithvcs1-instance_attributes-wps_password" name="wps_password" value="xxx" />
  1986. Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="stonithvcs1-instance_attributes-hostname_to_stonith" name="hostname_to_stonith" value="vcs1" />
  1987. Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ </instance_attributes>
  1988. Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ </primitive>
  1989. Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ <primitive class="stonith" id="stonithvcsquorum" type="external/webpowerswitch" >
  1990. Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ <instance_attributes id="stonithvcsquorum-instance_attributes" >
  1991. Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="stonithvcsquorum-instance_attributes-wps_ipaddr" name="wps_ipaddr" value="192.168.7.101" />
  1992. Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="stonithvcsquorum-instance_attributes-wps_port" name="wps_port" value="2" />
  1993. Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="stonithvcsquorum-instance_attributes-wps_username" name="wps_username" value="xxx" />
  1994. Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="stonithvcsquorum-instance_attributes-wps_password" name="wps_password" value="xxx" />
  1995. Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="stonithvcsquorum-instance_attributes-hostname_to_stonith" name="hostname_to_stonith" value="vcsquorum" />
  1996. Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ </instance_attributes>
  1997. Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ </primitive>
  1998. Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ <group id="g_vcs" >
  1999. Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ <primitive class="ocf" id="p_fs_vcs" provider="heartbeat" type="Filesystem" >
  2000. Feb 11 01:00:45 [2045] vcsquorum cib: notice: cib:diff: ++ <instance_attributes id="p_fs_vcs-instance_attributes" >
  2001. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="p_fs_vcs-instance_attributes-device" name="device" value="/dev/drbd0" />
  2002. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="p_fs_vcs-instance_attributes-directory" name="directory" value="/mnt/storage" />
  2003. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="p_fs_vcs-instance_attributes-fstype" name="fstype" value="ext4" />
  2004. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="p_fs_vcs-instance_attributes-options" name="options" value="noatime" />
  2005. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </instance_attributes>
  2006. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <operations >
  2007. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <op id="p_fs_vcs-start-0" interval="0" name="start" timeout="60" />
  2008. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <op id="p_fs_vcs-stop-0" interval="0" name="stop" timeout="60" />
  2009. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <op id="p_fs_vcs-monitor-20" interval="20" name="monitor" timeout="40" />
  2010. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </operations>
  2011. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </primitive>
  2012. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <primitive class="lsb" id="p_daemon_svn" type="svn" >
  2013. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <operations >
  2014. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <op id="p_daemon_svn-monitor-30s" interval="30s" name="monitor" />
  2015. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </operations>
  2016. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </primitive>
  2017. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <primitive class="lsb" id="p_daemon_git-daemon" type="git-daemon" >
  2018. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <operations >
  2019. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <op id="p_daemon_git-daemon-monitor-30s" interval="30s" name="monitor" />
  2020. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </operations>
  2021. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </primitive>
  2022. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <primitive class="ocf" id="p_ip_vcs" provider="heartbeat" type="IPaddr2" >
  2023. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <instance_attributes id="p_ip_vcs-instance_attributes" >
  2024. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="p_ip_vcs-instance_attributes-ip" name="ip" value="192.168.1.22" />
  2025. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="p_ip_vcs-instance_attributes-cidr_netmask" name="cidr_netmask" value="16" />
  2026. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="p_ip_vcs-instance_attributes-nic" name="nic" value="eth1" />
  2027. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </instance_attributes>
  2028. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <operations >
  2029. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <op id="p_ip_vcs-monitor-30s" interval="30s" name="monitor" />
  2030. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </operations>
  2031. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </primitive>
  2032. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </group>
  2033. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <master id="ms_drbd_vcs" >
  2034. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <meta_attributes id="ms_drbd_vcs-meta_attributes" >
  2035. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="ms_drbd_vcs-meta_attributes-master-max" name="master-max" value="1" />
  2036. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="ms_drbd_vcs-meta_attributes-master-node-max" name="master-node-max" value="1" />
  2037. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="ms_drbd_vcs-meta_attributes-clone-max" name="clone-max" value="2" />
  2038. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="ms_drbd_vcs-meta_attributes-clone-node-max" name="clone-node-max" value="1" />
  2039. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="ms_drbd_vcs-meta_attributes-notify" name="notify" value="true" />
  2040. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </meta_attributes>
  2041. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <primitive class="ocf" id="p_drbd_vcs" provider="linbit" type="drbd" >
  2042. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <instance_attributes id="p_drbd_vcs-instance_attributes" >
  2043. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="p_drbd_vcs-instance_attributes-drbd_resource" name="drbd_resource" value="vcs" />
  2044. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </instance_attributes>
  2045. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <operations >
  2046. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <op id="p_drbd_vcs-start-0" interval="0" name="start" timeout="240" />
  2047. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <op id="p_drbd_vcs-stop-0" interval="0" name="stop" timeout="100" />
  2048. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <op id="p_drbd_vcs-monitor-10" interval="10" name="monitor" role="Master" timeout="90" />
  2049. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <op id="p_drbd_vcs-monitor-20" interval="20" name="monitor" role="Slave" timeout="60" />
  2050. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </operations>
  2051. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </primitive>
  2052. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </master>
  2053. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <clone id="cl_ping" >
  2054. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <meta_attributes id="cl_ping-meta_attributes" >
  2055. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="cl_ping-meta_attributes-interleave" name="interleave" value="true" />
  2056. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </meta_attributes>
  2057. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <primitive class="ocf" id="p_ping" provider="pacemaker" type="ping" >
  2058. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <instance_attributes id="p_ping-instance_attributes" >
  2059. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="p_ping-instance_attributes-name" name="name" value="p_ping" />
  2060. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="p_ping-instance_attributes-host_list" name="host_list" value="192.168.0.128 192.168.0.129 192.168.0.33 192.168.0.1 192.168.0.127" />
  2061. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="p_ping-instance_attributes-dampen" name="dampen" value="25s" />
  2062. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="p_ping-instance_attributes-multiplier" name="multiplier" value="1000" />
  2063. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </instance_attributes>
  2064. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <operations >
  2065. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <op id="p_ping-start-0" interval="0" name="start" timeout="60" />
  2066. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <op id="p_ping-monitor-10s" interval="10s" name="monitor" timeout="60" />
  2067. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </operations>
  2068. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </primitive>
  2069. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </clone>
  2070. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <clone id="cl_sysadmin_notify" >
  2071. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <primitive class="ocf" id="p_sysadmin_notify" provider="heartbeat" type="MailTo" >
  2072. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <instance_attributes id="p_sysadmin_notify-instance_attributes" >
  2073. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="p_sysadmin_notify-instance_attributes-email" name="email" value="sysadmin-alert@xes-inc.com" />
  2074. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </instance_attributes>
  2075. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <instance_attributes id="p_sysadmin_notify-instance_attributes-0" >
  2076. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="p_sysadmin_notify-instance_attributes-0-subject" name="subject" value="VCS Pacemaker Change" />
  2077. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </instance_attributes>
  2078. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <operations >
  2079. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <op id="p_sysadmin_notify-start-0" interval="0" name="start" timeout="30" />
  2080. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <op id="p_sysadmin_notify-stop-0" interval="0" name="stop" timeout="30" />
  2081. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <op id="p_sysadmin_notify-monitor-10" interval="10" name="monitor" timeout="30" />
  2082. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </operations>
  2083. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </primitive>
  2084. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </clone>
  2085. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <rsc_location id="loc_run_on_most_connected" rsc="g_vcs" >
  2086. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <rule boolean-op="or" id="loc_run_on_most_connected-rule" score="-INFINITY" >
  2087. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <expression attribute="p_ping" id="loc_run_on_most_connected-expression" operation="not_defined" />
  2088. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <expression attribute="p_ping" id="loc_run_on_most_connected-expression-0" operation="lte" value="0" />
  2089. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </rule>
  2090. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ </rsc_location>
  2091. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <rsc_location id="loc_st_vcs0" node="vcs0" rsc="stonithvcs0" score="-INFINITY" />
  2092. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <rsc_location id="loc_st_vcs1" node="vcs1" rsc="stonithvcs1" score="-INFINITY" />
  2093. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <rsc_location id="loc_st_vcsquorum" node="vcsquorum" rsc="stonithvcsquorum" score="-INFINITY" />
  2094. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <rsc_colocation id="c_drbd_fs_services" rsc="g_vcs" score="INFINITY" with-rsc="ms_drbd_vcs" with-rsc-role="Master" />
  2095. Feb 11 01:00:46 [2045] vcsquorum cib: notice: cib:diff: ++ <rsc_order first="ms_drbd_vcs" first-action="promote" id="o_drbd_fs_services" score="INFINITY" then="g_vcs" then-action="start" />
  2096. Feb 11 01:00:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=local/cibadmin/2, version=0.17.1): OK (rc=0)
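
The cib:diff ++ block above is the raw CIB XML pushed by the cib_replace at 01:00:45-46. For readability, an approximate crmsh rendering of its key pieces, reconstructed from the diff lines (a sketch for orientation, not output captured from this cluster):

    property stonith-enabled=true no-quorum-policy=freeze
    primitive p_drbd_vcs ocf:linbit:drbd \
        params drbd_resource=vcs \
        op start interval=0 timeout=240 op stop interval=0 timeout=100 \
        op monitor interval=10 role=Master timeout=90 \
        op monitor interval=20 role=Slave timeout=60
    ms ms_drbd_vcs p_drbd_vcs \
        meta master-max=1 master-node-max=1 clone-max=2 \
        clone-node-max=1 notify=true
    group g_vcs p_fs_vcs p_daemon_svn p_daemon_git-daemon p_ip_vcs
    colocation c_drbd_fs_services inf: g_vcs ms_drbd_vcs:Master
    order o_drbd_fs_services inf: ms_drbd_vcs:promote g_vcs:start
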
  2097. Feb 11 01:00:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/144, version=0.17.2): OK (rc=0)
  2098. Feb 11 01:00:46 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update shutdown=(null) failed: No such device or address
  2099. Feb 11 01:00:46 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update terminate=(null) failed: No such device or address
  2100. Feb 11 01:01:06 [2051] vcsquorum crmd: info: do_election_count_vote: Election 2 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2101. Feb 11 01:01:26 [2051] vcsquorum crmd: info: do_election_count_vote: Election 3 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2102. Feb 11 01:01:46 [2051] vcsquorum crmd: info: do_election_count_vote: Election 4 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2103. Feb 11 01:02:06 [2051] vcsquorum crmd: info: do_election_count_vote: Election 5 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2104. Feb 11 01:02:26 [2051] vcsquorum crmd: info: do_election_count_vote: Election 6 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2105. Feb 11 01:02:45 [2051] vcsquorum crmd: error: crm_timer_popped: Election Timeout (I_ELECTION_DC) just popped in state S_ELECTION! (120000ms)
  2106. Feb 11 01:02:45 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_TIMER_POPPED origin=crm_timer_popped ]
  2107. Feb 11 01:02:45 [2051] vcsquorum crmd: info: do_dc_takeover: Taking over DC status for this partition
  2108. Feb 11 01:02:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/146, version=0.17.5): OK (rc=0)
  2109. Feb 11 01:02:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/147, version=0.17.6): OK (rc=0)
  2110. Feb 11 01:02:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/149, version=0.17.7): OK (rc=0)
  2111. Feb 11 01:02:46 [2051] vcsquorum crmd: info: do_dc_join_offer_all: join-13: Waiting on 3 outstanding join acks
  2112. Feb 11 01:02:46 [2051] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
  2113. Feb 11 01:02:46 [2051] vcsquorum crmd: info: do_election_count_vote: Election 7 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2114. Feb 11 01:02:46 [2051] vcsquorum crmd: info: do_state_transition: State transition S_INTEGRATION -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_election_count_vote ]
  2115. Feb 11 01:02:46 [2051] vcsquorum crmd: info: do_election_count_vote: Election 8 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2116. Feb 11 01:02:46 [2051] vcsquorum crmd: warning: do_log: FSA: Input I_JOIN_REQUEST from route_message() received in state S_ELECTION
  2117. Feb 11 01:02:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/151, version=0.17.8): OK (rc=0)
  2118. Feb 11 01:02:46 [2051] vcsquorum crmd: warning: join_query_callback: No DC for join-13
  2119. Feb 11 01:02:46 [2051] vcsquorum crmd: warning: do_log: FSA: Input I_JOIN_REQUEST from route_message() received in state S_ELECTION
  2120. Feb 11 01:03:06 [2051] vcsquorum crmd: info: do_election_count_vote: Election 9 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2121. Feb 11 01:03:26 [2051] vcsquorum crmd: info: do_election_count_vote: Election 10 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2122. Feb 11 01:03:46 [2051] vcsquorum crmd: info: do_election_count_vote: Election 11 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2123. Feb 11 01:04:06 [2051] vcsquorum crmd: info: do_election_count_vote: Election 12 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2124. Feb 11 01:04:26 [2051] vcsquorum crmd: info: do_election_count_vote: Election 13 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2125. Feb 11 01:04:46 [2051] vcsquorum crmd: error: crm_timer_popped: Election Timeout (I_ELECTION_DC) just popped in state S_ELECTION! (120000ms)
  2126. Feb 11 01:04:46 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_TIMER_POPPED origin=crm_timer_popped ]
  2127. Feb 11 01:04:46 [2051] vcsquorum crmd: info: do_dc_takeover: Taking over DC status for this partition
  2128. Feb 11 01:04:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/154, version=0.17.9): OK (rc=0)
  2129. Feb 11 01:04:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/155, version=0.17.10): OK (rc=0)
  2130. Feb 11 01:04:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/157, version=0.17.11): OK (rc=0)
  2131. Feb 11 01:04:46 [2051] vcsquorum crmd: info: do_dc_join_offer_all: join-14: Waiting on 3 outstanding join acks
  2132. Feb 11 01:04:46 [2051] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
  2133. Feb 11 01:04:46 [2051] vcsquorum crmd: info: do_election_count_vote: Election 14 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2134. Feb 11 01:04:46 [2051] vcsquorum crmd: info: do_state_transition: State transition S_INTEGRATION -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_election_count_vote ]
  2135. Feb 11 01:04:46 [2051] vcsquorum crmd: info: do_election_count_vote: Election 15 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2136. Feb 11 01:04:46 [2051] vcsquorum crmd: warning: do_log: FSA: Input I_JOIN_REQUEST from route_message() received in state S_ELECTION
  2137. Feb 11 01:04:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/159, version=0.17.12): OK (rc=0)
  2138. Feb 11 01:04:46 [2051] vcsquorum crmd: warning: join_query_callback: No DC for join-14
  2139. Feb 11 01:04:46 [2051] vcsquorum crmd: warning: do_log: FSA: Input I_JOIN_REQUEST from route_message() received in state S_ELECTION
  2140. Feb 11 01:05:06 [2051] vcsquorum crmd: info: do_election_count_vote: Election 16 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2141. Feb 11 01:05:26 [2051] vcsquorum crmd: info: do_election_count_vote: Election 17 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2142. Feb 11 01:05:46 [2051] vcsquorum crmd: info: do_election_count_vote: Election 18 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2143. Feb 11 01:06:06 [2051] vcsquorum crmd: info: do_election_count_vote: Election 19 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2144. Feb 11 01:06:26 [2051] vcsquorum crmd: info: do_election_count_vote: Election 20 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2145. Feb 11 01:06:46 [2051] vcsquorum crmd: error: crm_timer_popped: Election Timeout (I_ELECTION_DC) just popped in state S_ELECTION! (120000ms)
  2146. Feb 11 01:06:46 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_TIMER_POPPED origin=crm_timer_popped ]
  2147. Feb 11 01:06:46 [2051] vcsquorum crmd: info: do_dc_takeover: Taking over DC status for this partition
  2148. Feb 11 01:06:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/162, version=0.17.13): OK (rc=0)
  2149. Feb 11 01:06:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/163, version=0.17.14): OK (rc=0)
  2150. Feb 11 01:06:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/165, version=0.17.15): OK (rc=0)
  2151. Feb 11 01:06:46 [2051] vcsquorum crmd: info: do_dc_join_offer_all: join-15: Waiting on 3 outstanding join acks
  2152. Feb 11 01:06:46 [2051] vcsquorum crmd: info: do_election_count_vote: Election 21 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2153. Feb 11 01:06:46 [2051] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
  2154. Feb 11 01:06:46 [2051] vcsquorum crmd: info: do_state_transition: State transition S_INTEGRATION -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_election_count_vote ]
  2155. Feb 11 01:06:46 [2051] vcsquorum crmd: info: do_election_count_vote: Election 22 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2156. Feb 11 01:06:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/167, version=0.17.16): OK (rc=0)
  2157. Feb 11 01:06:46 [2051] vcsquorum crmd: warning: do_log: FSA: Input I_JOIN_REQUEST from route_message() received in state S_ELECTION
  2158. Feb 11 01:06:46 [2051] vcsquorum crmd: warning: join_query_callback: No DC for join-15
  2159. Feb 11 01:06:46 [2051] vcsquorum crmd: warning: do_log: FSA: Input I_JOIN_REQUEST from route_message() received in state S_ELECTION
  2160. Feb 11 01:07:06 [2051] vcsquorum crmd: info: do_election_count_vote: Election 23 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2161. Feb 11 01:07:26 [2051] vcsquorum crmd: info: do_election_count_vote: Election 24 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2162. Feb 11 01:07:46 [2051] vcsquorum crmd: info: do_election_count_vote: Election 25 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2163. Feb 11 01:08:06 [2051] vcsquorum crmd: info: do_election_count_vote: Election 26 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2164. Feb 11 01:08:26 [2051] vcsquorum crmd: info: do_election_count_vote: Election 27 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2165. Feb 11 01:08:46 [2051] vcsquorum crmd: error: crm_timer_popped: Election Timeout (I_ELECTION_DC) just popped in state S_ELECTION! (120000ms)
  2166. Feb 11 01:08:46 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_TIMER_POPPED origin=crm_timer_popped ]
  2167. Feb 11 01:08:46 [2051] vcsquorum crmd: info: do_dc_takeover: Taking over DC status for this partition
  2168. Feb 11 01:08:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/170, version=0.17.17): OK (rc=0)
  2169. Feb 11 01:08:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/171, version=0.17.18): OK (rc=0)
  2170. Feb 11 01:08:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/173, version=0.17.19): OK (rc=0)
  2171. Feb 11 01:08:46 [2051] vcsquorum crmd: info: do_dc_join_offer_all: join-16: Waiting on 3 outstanding join acks
  2172. Feb 11 01:08:46 [2051] vcsquorum crmd: info: do_election_count_vote: Election 28 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2173. Feb 11 01:08:46 [2051] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
  2174. Feb 11 01:08:46 [2051] vcsquorum crmd: info: do_state_transition: State transition S_INTEGRATION -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_election_count_vote ]
  2175. Feb 11 01:08:46 [2051] vcsquorum crmd: info: do_election_count_vote: Election 29 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2176. Feb 11 01:08:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/175, version=0.17.20): OK (rc=0)
  2177. Feb 11 01:08:46 [2051] vcsquorum crmd: warning: join_query_callback: No DC for join-16
  2178. Feb 11 01:08:46 [2051] vcsquorum crmd: warning: do_log: FSA: Input I_JOIN_REQUEST from route_message() received in state S_ELECTION
  2179. Feb 11 01:08:46 [2051] vcsquorum crmd: warning: do_log: FSA: Input I_JOIN_REQUEST from route_message() received in state S_ELECTION
  2180. Feb 11 01:09:06 [2051] vcsquorum crmd: info: do_election_count_vote: Election 30 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2181. Feb 11 01:09:26 [2051] vcsquorum crmd: info: do_election_count_vote: Election 31 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2182. Feb 11 01:09:46 [2051] vcsquorum crmd: info: do_election_count_vote: Election 32 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2183. Feb 11 01:10:06 [2051] vcsquorum crmd: info: do_election_count_vote: Election 33 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2184. Feb 11 01:10:26 [2051] vcsquorum crmd: info: do_election_count_vote: Election 34 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2185. Feb 11 01:10:46 [2051] vcsquorum crmd: error: crm_timer_popped: Election Timeout (I_ELECTION_DC) just popped in state S_ELECTION! (120000ms)
  2186. Feb 11 01:10:46 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_TIMER_POPPED origin=crm_timer_popped ]
  2187. Feb 11 01:10:46 [2051] vcsquorum crmd: info: do_dc_takeover: Taking over DC status for this partition
  2188. Feb 11 01:10:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/178, version=0.17.21): OK (rc=0)
  2189. Feb 11 01:10:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/179, version=0.17.22): OK (rc=0)
  2190. Feb 11 01:10:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/181, version=0.17.23): OK (rc=0)
  2191. Feb 11 01:10:46 [2051] vcsquorum crmd: info: do_dc_join_offer_all: join-17: Waiting on 3 outstanding join acks
  2192. Feb 11 01:10:46 [2051] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
  2193. Feb 11 01:10:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/183, version=0.17.24): OK (rc=0)
  2194. Feb 11 01:10:46 [2051] vcsquorum crmd: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
  2195. Feb 11 01:10:46 [2051] vcsquorum crmd: info: do_dc_join_finalize: join-17: Syncing the CIB from vcsquorum to the rest of the cluster
  2196. Feb 11 01:10:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/186, version=0.17.24): OK (rc=0)
  2197. Feb 11 01:10:46 [2051] vcsquorum crmd: info: do_dc_join_ack: join-17: Updating node state to member for vcsquorum
  2198. Feb 11 01:10:46 [2051] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcsquorum']/lrm
  2199. Feb 11 01:10:46 [2051] vcsquorum crmd: info: do_dc_join_ack: join-17: Updating node state to member for vcs0
  2200. Feb 11 01:10:46 [2051] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcs0']/lrm
  2201. Feb 11 01:10:46 [2051] vcsquorum crmd: info: do_dc_join_ack: join-17: Updating node state to member for vcs1
  2202. Feb 11 01:10:46 [2051] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcs1']/lrm
  2203. Feb 11 01:10:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/187, version=0.17.25): OK (rc=0)
  2204. Feb 11 01:10:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/188, version=0.17.30): OK (rc=0)
  2205. Feb 11 01:10:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/189, version=0.17.31): OK (rc=0)
  2206. Feb 11 01:10:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcsquorum']/lrm (origin=local/crmd/190, version=0.17.32): OK (rc=0)
  2207. Feb 11 01:10:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcs0']/lrm (origin=local/crmd/192, version=0.17.34): OK (rc=0)
  2208. Feb 11 01:10:46 [2051] vcsquorum crmd: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
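
join-17 is the first join round since 01:02 that actually completes: for once, no competing vote from vcs1 lands between the DC takeover and integration, so the FSA reaches S_FINALIZE_JOIN instead of bouncing back to S_ELECTION and abandoning the join ("No DC for join-13" through join-16 above). When chasing a vote loop like this, a reasonable first check is whether all three nodes really sit in one membership; a hedged sketch, assuming crm_node from the same Pacemaker build:

    # Print the members of this node's partition; vcs0, vcs1 and
    # vcsquorum should each appear exactly once.
    crm_node --partition
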
  2209. Feb 11 01:10:46 [2051] vcsquorum crmd: info: abort_transition_graph: do_te_invoke:156 - Triggered transition abort (complete=1) : Peer Cancelled
  2210. Feb 11 01:10:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcs1']/lrm (origin=local/crmd/194, version=0.17.36): OK (rc=0)
  2211. Feb 11 01:10:46 [2049] vcsquorum attrd: notice: attrd_local_callback: Sending full refresh (origin=crmd)
  2212. Feb 11 01:10:46 [2049] vcsquorum attrd: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
  2213. Feb 11 01:10:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/196, version=0.17.39): OK (rc=0)
  2214. Feb 11 01:10:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/198, version=0.17.41): OK (rc=0)
  2215. Feb 11 01:10:47 [2050] vcsquorum pengine: notice: LogActions: Start stonithvcs0 (vcs1)
  2216. Feb 11 01:10:47 [2050] vcsquorum pengine: notice: LogActions: Start stonithvcs1 (vcs0)
  2217. Feb 11 01:10:47 [2050] vcsquorum pengine: notice: LogActions: Start stonithvcsquorum (vcs0)
  2218. Feb 11 01:10:47 [2050] vcsquorum pengine: notice: LogActions: Start p_drbd_vcs:0 (vcs1)
  2219. Feb 11 01:10:47 [2050] vcsquorum pengine: notice: LogActions: Start p_drbd_vcs:1 (vcs0)
  2220. Feb 11 01:10:47 [2050] vcsquorum pengine: notice: LogActions: Start p_ping:0 (vcs1)
  2221. Feb 11 01:10:47 [2050] vcsquorum pengine: notice: LogActions: Start p_ping:1 (vcs0)
  2222. Feb 11 01:10:47 [2050] vcsquorum pengine: notice: LogActions: Start p_sysadmin_notify:0 (vcs1)
  2223. Feb 11 01:10:47 [2050] vcsquorum pengine: notice: LogActions: Start p_sysadmin_notify:1 (vcs0)
  2224. Feb 11 01:10:47 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
  2225. Feb 11 01:10:47 [2050] vcsquorum pengine: notice: process_pe_message: Calculated Transition 10: /var/lib/pacemaker/pengine/pe-input-10.bz2
  2226. Feb 11 01:10:47 [2051] vcsquorum crmd: info: do_te_invoke: Processing graph 10 (ref=pe_calc-dc-1360566647-146) derived from /var/lib/pacemaker/pengine/pe-input-10.bz2
  2227. Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 26: monitor stonithvcs0_monitor_0 on vcsquorum (local)
  2228. Feb 11 01:10:47 [2048] vcsquorum lrmd: info: process_lrmd_get_rsc_info: Resource 'stonithvcs0' not found (0 active resources)
  2229. Feb 11 01:10:47 [2048] vcsquorum lrmd: info: process_lrmd_rsc_register: Added 'stonithvcs0' to the rsc list (1 active resources)
  2230. Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 15: monitor stonithvcs0_monitor_0 on vcs1
  2231. Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 4: monitor stonithvcs0_monitor_0 on vcs0
  2232. Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 27: monitor stonithvcs1_monitor_0 on vcsquorum (local)
  2233. Feb 11 01:10:47 [2048] vcsquorum lrmd: info: process_lrmd_get_rsc_info: Resource 'stonithvcs1' not found (1 active resources)
  2234. Feb 11 01:10:47 [2047] vcsquorum stonith-ng: info: stonith_device_action: Device stonithvcs0 not found
  2235. Feb 11 01:10:47 [2047] vcsquorum stonith-ng: info: stonith_command: Processed st_execute from lrmd.2048: No such device (-19)
  2236. Feb 11 01:10:47 [2048] vcsquorum lrmd: info: process_lrmd_rsc_register: Added 'stonithvcs1' to the rsc list (2 active resources)
  2237. Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 16: monitor stonithvcs1_monitor_0 on vcs1
  2238. Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 5: monitor stonithvcs1_monitor_0 on vcs0
  2239. Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 28: monitor stonithvcsquorum_monitor_0 on vcsquorum (local)
  2240. Feb 11 01:10:47 [2047] vcsquorum stonith-ng: info: stonith_device_action: Device stonithvcs1 not found
  2241. Feb 11 01:10:47 [2047] vcsquorum stonith-ng: info: stonith_command: Processed st_execute from lrmd.2048: No such device (-19)
  2242. Feb 11 01:10:47 [2048] vcsquorum lrmd: info: process_lrmd_get_rsc_info: Resource 'stonithvcsquorum' not found (2 active resources)
  2243. Feb 11 01:10:47 [2048] vcsquorum lrmd: info: process_lrmd_rsc_register: Added 'stonithvcsquorum' to the rsc list (3 active resources)
  2244. Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 17: monitor stonithvcsquorum_monitor_0 on vcs1
  2245. Feb 11 01:10:47 [2047] vcsquorum stonith-ng: info: stonith_device_action: Device stonithvcsquorum not found
  2246. Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 6: monitor stonithvcsquorum_monitor_0 on vcs0
  2247. Feb 11 01:10:47 [2047] vcsquorum stonith-ng: info: stonith_command: Processed st_execute from lrmd.2048: No such device (-19)
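
The three "No such device (-19)" results are probe noise rather than failures: the transition probes every stonith resource on every node before any start action runs, and stonith-ng correctly answers that stonithvcs0/stonithvcs1/stonithvcsquorum are not registered yet. A hedged way to watch registration catch up once the start actions (36-38 below) complete, assuming the stock stonith_admin tool:

    # List the fencing devices currently registered with stonith-ng;
    # empty during the probe phase, three entries after the starts.
    stonith_admin --list-registered
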
  2248. Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 29: monitor p_fs_vcs_monitor_0 on vcsquorum (local)
  2249. Feb 11 01:10:47 [2048] vcsquorum lrmd: info: process_lrmd_get_rsc_info: Resource 'p_fs_vcs' not found (3 active resources)
  2250. Feb 11 01:10:47 [2048] vcsquorum lrmd: info: process_lrmd_rsc_register: Added 'p_fs_vcs' to the rsc list (4 active resources)
  2251. Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 18: monitor p_fs_vcs_monitor_0 on vcs1
  2252. Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 7: monitor p_fs_vcs_monitor_0 on vcs0
  2253. Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 30: monitor p_daemon_svn_monitor_0 on vcsquorum (local)
  2254. Feb 11 01:10:47 [2048] vcsquorum lrmd: info: process_lrmd_get_rsc_info: Resource 'p_daemon_svn' not found (4 active resources)
  2255. Feb 11 01:10:47 [2048] vcsquorum lrmd: info: process_lrmd_rsc_register: Added 'p_daemon_svn' to the rsc list (5 active resources)
  2256. Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 19: monitor p_daemon_svn_monitor_0 on vcs1
  2257. Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 8: monitor p_daemon_svn_monitor_0 on vcs0
  2258. Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 31: monitor p_daemon_git-daemon_monitor_0 on vcsquorum (local)
  2259. Feb 11 01:10:47 [2048] vcsquorum lrmd: info: process_lrmd_get_rsc_info: Resource 'p_daemon_git-daemon' not found (5 active resources)
  2260. Feb 11 01:10:47 [2048] vcsquorum lrmd: info: process_lrmd_rsc_register: Added 'p_daemon_git-daemon' to the rsc list (6 active resources)
  2261. Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 20: monitor p_daemon_git-daemon_monitor_0 on vcs1
  2262. Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 9: monitor p_daemon_git-daemon_monitor_0 on vcs0
  2263. Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 32: monitor p_ip_vcs_monitor_0 on vcsquorum (local)
  2264. Feb 11 01:10:47 [2048] vcsquorum lrmd: info: process_lrmd_get_rsc_info: Resource 'p_ip_vcs' not found (6 active resources)
  2265. Feb 11 01:10:47 [2048] vcsquorum lrmd: info: process_lrmd_rsc_register: Added 'p_ip_vcs' to the rsc list (7 active resources)
  2266. Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 21: monitor p_ip_vcs_monitor_0 on vcs1
  2267. Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 10: monitor p_ip_vcs_monitor_0 on vcs0
  2268. Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 33: monitor p_drbd_vcs:0_monitor_0 on vcsquorum (local)
  2269. Feb 11 01:10:47 [2048] vcsquorum lrmd: info: process_lrmd_get_rsc_info: Resource 'p_drbd_vcs' not found (7 active resources)
  2270. Feb 11 01:10:47 [2048] vcsquorum lrmd: info: process_lrmd_get_rsc_info: Resource 'p_drbd_vcs:0' not found (7 active resources)
  2271. Feb 11 01:10:47 [2048] vcsquorum lrmd: info: process_lrmd_rsc_register: Added 'p_drbd_vcs' to the rsc list (8 active resources)
  2272. Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 22: monitor p_drbd_vcs:0_monitor_0 on vcs1
  2273. Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 11: monitor p_drbd_vcs:1_monitor_0 on vcs0
  2274. Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 34: monitor p_ping:0_monitor_0 on vcsquorum (local)
  2275. Feb 11 01:10:47 [2048] vcsquorum lrmd: info: process_lrmd_get_rsc_info: Resource 'p_ping' not found (8 active resources)
  2276. Feb 11 01:10:47 [2048] vcsquorum lrmd: info: process_lrmd_get_rsc_info: Resource 'p_ping:0' not found (8 active resources)
  2277. Feb 11 01:10:47 [2048] vcsquorum lrmd: info: process_lrmd_rsc_register: Added 'p_ping' to the rsc list (9 active resources)
  2278. Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 23: monitor p_ping:0_monitor_0 on vcs1
  2279. Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 12: monitor p_ping:1_monitor_0 on vcs0
  2280. Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 35: monitor p_sysadmin_notify:0_monitor_0 on vcsquorum (local)
  2281. Feb 11 01:10:47 [2048] vcsquorum lrmd: info: process_lrmd_get_rsc_info: Resource 'p_sysadmin_notify' not found (9 active resources)
  2282. Feb 11 01:10:47 [2048] vcsquorum lrmd: info: process_lrmd_get_rsc_info: Resource 'p_sysadmin_notify:0' not found (9 active resources)
  2283. Feb 11 01:10:47 [2048] vcsquorum lrmd: info: process_lrmd_rsc_register: Added 'p_sysadmin_notify' to the rsc list (10 active resources)
  2284. Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 24: monitor p_sysadmin_notify:0_monitor_0 on vcs1
  2285. Feb 11 01:10:47 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 13: monitor p_sysadmin_notify:1_monitor_0 on vcs0
  2286. Filesystem[2687]: 2013/02/11_01:10:47 WARNING: Couldn't find device [/dev/drbd0]. Expected /dev/??? to exist
  2287. Filesystem[2687]: 2013/02/11_01:10:47 WARNING: Couldn't find device [/dev/drbd0]. Expected /dev/??? to exist
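
The Filesystem agent warning comes from the local p_fs_vcs probe (call=17, result below): at probe time DRBD has not been started anywhere, so /dev/drbd0 does not exist and the agent rightly reports "not running". On a node that is supposed to carry the device, a quick manual check, assuming 2013-era DRBD 8.x (an assumption; the log never names the DRBD version):

    # DRBD 8.x exposes resource state via procfs; "Unconfigured"
    # or a missing line for drbd0 matches the warning above.
    cat /proc/drbd
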
  2288. Feb 11 01:10:47 [2051] vcsquorum crmd: info: process_lrm_event: LRM operation stonithvcs0_monitor_0 (call=5, rc=7, cib-update=201, confirmed=true) not running
  2289. Feb 11 01:10:47 [2051] vcsquorum crmd: info: process_lrm_event: LRM operation stonithvcs1_monitor_0 (call=9, rc=7, cib-update=202, confirmed=true) not running
  2290. Feb 11 01:10:47 [2051] vcsquorum crmd: info: process_lrm_event: LRM operation stonithvcsquorum_monitor_0 (call=13, rc=7, cib-update=203, confirmed=true) not running
  2291. Feb 11 01:10:47 [2051] vcsquorum crmd: notice: process_lrm_event: LRM operation p_daemon_git-daemon_monitor_0 (call=25, rc=7, cib-update=204, confirmed=true) not running
  2292. Feb 11 01:10:47 [2051] vcsquorum crmd: info: process_lrm_event: Result: * git-daemon is not running
  2293. Feb 11 01:10:48 [2051] vcsquorum crmd: info: services_os_action_execute: Managed MailTo_meta-data_0 process 2755 exited with rc=0
  2294. Feb 11 01:10:48 [2051] vcsquorum crmd: notice: process_lrm_event: LRM operation p_sysadmin_notify_monitor_0 (call=44, rc=7, cib-update=205, confirmed=true) not running
  2295. Feb 11 01:10:48 [2051] vcsquorum crmd: info: process_lrm_event: Result: stopped
  2296. Feb 11 01:10:48 [2051] vcsquorum crmd: notice: process_lrm_event: LRM operation p_daemon_svn_monitor_0 (call=21, rc=7, cib-update=206, confirmed=true) not running
  2297. Feb 11 01:10:48 [2051] vcsquorum crmd: info: process_lrm_event: Result: svnserve is not running.
  2298. Feb 11 01:10:49 [2051] vcsquorum crmd: info: services_os_action_execute: Managed ping_meta-data_0 process 2812 exited with rc=0
  2299. Feb 11 01:10:49 [2051] vcsquorum crmd: notice: process_lrm_event: LRM operation p_ping_monitor_0 (call=39, rc=7, cib-update=207, confirmed=true) not running
  2300. Feb 11 01:10:50 [2051] vcsquorum crmd: info: services_os_action_execute: Managed Filesystem_meta-data_0 process 2818 exited with rc=0
  2301. Feb 11 01:10:50 [2051] vcsquorum crmd: notice: process_lrm_event: LRM operation p_fs_vcs_monitor_0 (call=17, rc=7, cib-update=208, confirmed=true) not running
  2302. Feb 11 01:10:51 [2051] vcsquorum crmd: info: services_os_action_execute: Managed IPaddr2_meta-data_0 process 2823 exited with rc=0
  2303. Feb 11 01:10:51 [2051] vcsquorum crmd: notice: process_lrm_event: LRM operation p_ip_vcs_monitor_0 (call=29, rc=7, cib-update=209, confirmed=true) not running
  2304. Feb 11 01:10:52 [2051] vcsquorum crmd: info: services_os_action_execute: Managed drbd_meta-data_0 process 2827 exited with rc=0
  2305. Feb 11 01:10:52 [2051] vcsquorum crmd: notice: process_lrm_event: LRM operation p_drbd_vcs_monitor_0 (call=34, rc=7, cib-update=210, confirmed=true) not running
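
Every probe above finishing with rc=7 is the healthy outcome: 7 is OCF_NOT_RUNNING, the expected answer to a one-off monitor_0 on a node where the resource is not active, and it is what lets the PE schedule the start actions that follow. After the dust settles, placement can be confirmed with a hedged one-liner (crm_resource is part of the same toolset; this exact invocation is not in the log):

    # Report which node(s) currently run the DRBD clone.
    crm_resource --resource p_drbd_vcs --locate
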
  2306. Feb 11 01:10:52 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 14: probe_complete probe_complete on vcs1 - no waiting
  2307. Feb 11 01:10:52 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 3: probe_complete probe_complete on vcs0 - no waiting
  2308. Feb 11 01:10:52 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 25: probe_complete probe_complete on vcsquorum (local) - no waiting
  2309. Feb 11 01:10:52 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 36: start stonithvcs0_start_0 on vcs1
  2310. Feb 11 01:10:52 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 37: start stonithvcs1_start_0 on vcs0
  2311. Feb 11 01:10:52 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 38: start stonithvcsquorum_start_0 on vcs0
  2312. Feb 11 01:10:52 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 43: start p_drbd_vcs:0_start_0 on vcs1
  2313. Feb 11 01:10:52 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 45: start p_drbd_vcs:1_start_0 on vcs0
  2314. Feb 11 01:10:52 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 71: start p_ping:0_start_0 on vcs1
  2315. Feb 11 01:10:52 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 73: start p_ping:1_start_0 on vcs0
  2316. Feb 11 01:10:52 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 79: start p_sysadmin_notify:0_start_0 on vcs1
  2317. Feb 11 01:10:52 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 81: start p_sysadmin_notify:1_start_0 on vcs0
  2318. Feb 11 01:10:53 [2051] vcsquorum crmd: warning: status_from_rc: Action 79 (p_sysadmin_notify:0_start_0) on vcs1 failed (target: 0 vs. rc: 1): Error
  2319. Feb 11 01:10:53 [2051] vcsquorum crmd: warning: update_failcount: Updating failcount for p_sysadmin_notify on vcs1 after failed start: rc=1 (update=INFINITY, time=1360566653)
  2320. Feb 11 01:10:53 [2051] vcsquorum crmd: info: abort_transition_graph: match_graph_event:276 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=p_sysadmin_notify_last_failure_0, magic=0:1;79:10:0:6a6761a2-ec2f-492c-a18c-394db5ac6dfc, cib=0.17.76) : Event failed
  2321. Feb 11 01:10:53 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update fail-count-p_sysadmin_notify=(null) failed: No such device or address
  2322. Feb 11 01:10:53 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:176 - Triggered transition abort (complete=0, tag=nvpair, id=status-2868982794-fail-count-p_sysadmin_notify, name=fail-count-p_sysadmin_notify, value=INFINITY, magic=NA, cib=0.17.77) : Transient attribute: update
  2323. Feb 11 01:10:53 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:176 - Triggered transition abort (complete=0, tag=nvpair, id=status-2868982794-last-failure-p_sysadmin_notify, name=last-failure-p_sysadmin_notify, value=1360566653, magic=NA, cib=0.17.78) : Transient attribute: update
  2324. Feb 11 01:10:53 [2051] vcsquorum crmd: warning: status_from_rc: Action 81 (p_sysadmin_notify:1_start_0) on vcs0 failed (target: 0 vs. rc: 1): Error
  2325. Feb 11 01:10:53 [2051] vcsquorum crmd: warning: update_failcount: Updating failcount for p_sysadmin_notify on vcs0 after failed start: rc=1 (update=INFINITY, time=1360566653)
  2326. Feb 11 01:10:53 [2051] vcsquorum crmd: info: abort_transition_graph: match_graph_event:276 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=p_sysadmin_notify_last_failure_0, magic=0:1;81:10:0:6a6761a2-ec2f-492c-a18c-394db5ac6dfc, cib=0.17.79) : Event failed
  2327. Feb 11 01:10:53 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update last-failure-p_sysadmin_notify=(null) failed: No such device or address
  2328. Feb 11 01:10:53 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:176 - Triggered transition abort (complete=0, tag=nvpair, id=status-2852205578-fail-count-p_sysadmin_notify, name=fail-count-p_sysadmin_notify, value=INFINITY, magic=NA, cib=0.17.80) : Transient attribute: update
  2329. Feb 11 01:10:53 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:176 - Triggered transition abort (complete=0, tag=nvpair, id=status-2852205578-last-failure-p_sysadmin_notify, name=last-failure-p_sysadmin_notify, value=1360566653, magic=NA, cib=0.17.82) : Transient attribute: update
  2330. Feb 11 01:10:53 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update fail-count-p_sysadmin_notify=(null) failed: No such device or address
  2331. Feb 11 01:10:53 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update last-failure-p_sysadmin_notify=(null) failed: No such device or address
  2332. Feb 11 01:10:54 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:176 - Triggered transition abort (complete=0, tag=nvpair, id=status-2868982794-master-p_drbd_vcs, name=master-p_drbd_vcs, value=5, magic=NA, cib=0.17.84) : Transient attribute: update
  2333. Feb 11 01:10:54 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update master-p_drbd_vcs=(null) failed: No such device or address
  2334. Feb 11 01:10:54 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:176 - Triggered transition abort (complete=0, tag=nvpair, id=status-2852205578-master-p_drbd_vcs, name=master-p_drbd_vcs, value=5, magic=NA, cib=0.17.86) : Transient attribute: update
  2335. Feb 11 01:10:54 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 99: notify p_drbd_vcs:0_post_notify_start_0 on vcs1
  2336. Feb 11 01:10:54 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 100: notify p_drbd_vcs:1_post_notify_start_0 on vcs0
  2337. Feb 11 01:10:54 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update master-p_drbd_vcs=(null) failed: No such device or address
  2338. Feb 11 01:11:04 [2051] vcsquorum crmd: notice: run_graph: Transition 10 (Complete=55, Pending=0, Fired=0, Skipped=6, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-10.bz2): Stopped
  2339. Feb 11 01:11:04 [2051] vcsquorum crmd: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
  2340. Feb 11 01:11:04 [2050] vcsquorum pengine: warning: unpack_rsc_op: Processing failed op start for p_sysadmin_notify:0 on vcs1: unknown error (1)
  2341. Feb 11 01:11:04 [2050] vcsquorum pengine: warning: unpack_rsc_op: Processing failed op start for p_sysadmin_notify:1 on vcs0: unknown error (1)
  2342. Feb 11 01:11:04 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
  2343. Feb 11 01:11:04 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
  2344. Feb 11 01:11:04 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
  2345. Feb 11 01:11:04 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
  2346. Feb 11 01:11:04 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
  2347. Feb 11 01:11:04 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
  2348. Feb 11 01:11:04 [2050] vcsquorum pengine: notice: LogActions: Promote p_drbd_vcs:0 (Slave -> Master vcs1)
  2349. Feb 11 01:11:04 [2050] vcsquorum pengine: notice: LogActions: Stop p_sysadmin_notify:0 (vcs1)
  2350. Feb 11 01:11:04 [2050] vcsquorum pengine: notice: LogActions: Stop p_sysadmin_notify:1 (vcs0)
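
The chain here: both p_sysadmin_notify starts failed with rc=1, update_failcount pushed fail-count to INFINITY (logged as 1000000), common_apply_stickiness therefore bans cl_sysadmin_notify from both nodes, and the PE stops both clone instances. The ban persists until the fail-count is cleared; the attrd "Update ...=(null) failed: No such device or address" warnings interleaved above appear to be benign write-back noise from this Pacemaker 1.1.8-era attrd. A hedged cleanup sketch for after the MailTo agent's real problem is fixed (commands assumed from the stock CLI, not taken from the log):

    # Reset fail-count and re-probe the clone on each banned node.
    crm_resource --cleanup --resource cl_sysadmin_notify --node vcs0
    crm_resource --cleanup --resource cl_sysadmin_notify --node vcs1
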
  2351. Feb 11 01:11:04 [2050] vcsquorum pengine: notice: process_pe_message: Calculated Transition 11: /var/lib/pacemaker/pengine/pe-input-11.bz2
  2352. Feb 11 01:11:04 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
  2353. Feb 11 01:11:04 [2051] vcsquorum crmd: info: do_te_invoke: Processing graph 11 (ref=pe_calc-dc-1360566664-191) derived from /var/lib/pacemaker/pengine/pe-input-11.bz2
  2354. Feb 11 01:11:04 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 51: monitor p_ping_monitor_10000 on vcs1
  2355. Feb 11 01:11:04 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 54: monitor p_ping_monitor_10000 on vcs0
  2356. Feb 11 01:11:04 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 84: notify p_drbd_vcs_pre_notify_promote_0 on vcs1
  2357. Feb 11 01:11:04 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 86: notify p_drbd_vcs_pre_notify_promote_0 on vcs0
  2358. Feb 11 01:11:04 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 1: stop p_sysadmin_notify_stop_0 on vcs1
  2359. Feb 11 01:11:04 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 2: stop p_sysadmin_notify_stop_0 on vcs0
  2360. Feb 11 01:11:04 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 20: promote p_drbd_vcs_promote_0 on vcs1
  2361. Feb 11 01:11:12 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 85: notify p_drbd_vcs_post_notify_promote_0 on vcs1
  2362. Feb 11 01:11:12 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 87: notify p_drbd_vcs_post_notify_promote_0 on vcs0
  2363. Feb 11 01:11:12 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 21: monitor p_drbd_vcs_monitor_10000 on vcs1
  2364. Feb 11 01:11:12 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 24: monitor p_drbd_vcs_monitor_20000 on vcs0
  2365. Feb 11 01:11:12 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:176 - Triggered transition abort (complete=0, tag=nvpair, id=status-2868982794-master-p_drbd_vcs, name=master-p_drbd_vcs, value=10, magic=NA, cib=0.17.97) : Transient attribute: update
  2366. Feb 11 01:11:12 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:176 - Triggered transition abort (complete=0, tag=nvpair, id=status-2852205578-master-p_drbd_vcs, name=master-p_drbd_vcs, value=10000, magic=NA, cib=0.17.98) : Transient attribute: update
  2367. Feb 11 01:11:12 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update master-p_drbd_vcs=(null) failed: No such device or address
  2368. Feb 11 01:11:12 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update master-p_drbd_vcs=(null) failed: No such device or address
  2369. Feb 11 01:11:14 [2051] vcsquorum crmd: notice: run_graph: Transition 11 (Complete=21, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-11.bz2): Complete
  2370. Feb 11 01:11:14 [2051] vcsquorum crmd: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
  2371. Feb 11 01:11:14 [2050] vcsquorum pengine: warning: unpack_rsc_op: Processing failed op start for p_sysadmin_notify:0 on vcs1: unknown error (1)
  2372. Feb 11 01:11:14 [2050] vcsquorum pengine: warning: unpack_rsc_op: Processing failed op start for p_sysadmin_notify:0 on vcs0: unknown error (1)
  2373. Feb 11 01:11:14 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
  2374. Feb 11 01:11:14 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
  2375. Feb 11 01:11:14 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
  2376. Feb 11 01:11:14 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
  2377. Feb 11 01:11:14 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
  2378. Feb 11 01:11:14 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
  2379. Feb 11 01:11:14 [2050] vcsquorum pengine: notice: LogActions: Demote p_drbd_vcs:0 (Master -> Slave vcs1)
  2380. Feb 11 01:11:14 [2050] vcsquorum pengine: notice: LogActions: Promote p_drbd_vcs:1 (Slave -> Master vcs0)
  2381. Feb 11 01:11:14 [2050] vcsquorum pengine: notice: process_pe_message: Calculated Transition 12: /var/lib/pacemaker/pengine/pe-input-12.bz2
  2382. Feb 11 01:11:14 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
  2383. Feb 11 01:11:14 [2051] vcsquorum crmd: info: do_te_invoke: Processing graph 12 (ref=pe_calc-dc-1360566674-203) derived from /var/lib/pacemaker/pengine/pe-input-12.bz2
  2384. Feb 11 01:11:14 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 2: cancel p_drbd_vcs_cancel_10000 on vcs1
  2385. Feb 11 01:11:14 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 4: cancel p_drbd_vcs_cancel_20000 on vcs0
  2386. Feb 11 01:11:14 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 88: notify p_drbd_vcs_pre_notify_demote_0 on vcs1
  2387. Feb 11 01:11:14 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 90: notify p_drbd_vcs_pre_notify_demote_0 on vcs0
  2388. Feb 11 01:11:14 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:271 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=p_drbd_vcs_monitor_10000, magic=0:8;21:11:8:6a6761a2-ec2f-492c-a18c-394db5ac6dfc, cib=0.17.105) : Resource op removal
  2389. Feb 11 01:11:14 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:271 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=p_drbd_vcs_monitor_20000, magic=0:0;24:11:0:6a6761a2-ec2f-492c-a18c-394db5ac6dfc, cib=0.17.106) : Resource op removal
  2390. Feb 11 01:11:14 [2051] vcsquorum crmd: notice: run_graph: Transition 12 (Complete=5, Pending=0, Fired=0, Skipped=12, Incomplete=10, Source=/var/lib/pacemaker/pengine/pe-input-12.bz2): Stopped
  2391. Feb 11 01:11:14 [2051] vcsquorum crmd: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
  2392. Feb 11 01:11:14 [2050] vcsquorum pengine: warning: unpack_rsc_op: Processing failed op start for p_sysadmin_notify:0 on vcs1: unknown error (1)
  2393. Feb 11 01:11:14 [2050] vcsquorum pengine: warning: unpack_rsc_op: Processing failed op start for p_sysadmin_notify:0 on vcs0: unknown error (1)
  2394. Feb 11 01:11:14 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
  2395. Feb 11 01:11:14 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
  2396. Feb 11 01:11:14 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
  2397. Feb 11 01:11:14 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
  2398. Feb 11 01:11:14 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
  2399. Feb 11 01:11:14 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
  2400. Feb 11 01:11:14 [2050] vcsquorum pengine: notice: LogActions: Demote p_drbd_vcs:0 (Master -> Slave vcs1)
  2401. Feb 11 01:11:14 [2050] vcsquorum pengine: notice: LogActions: Promote p_drbd_vcs:1 (Slave -> Master vcs0)
  2402. Feb 11 01:11:14 [2050] vcsquorum pengine: notice: process_pe_message: Calculated Transition 13: /var/lib/pacemaker/pengine/pe-input-13.bz2
  2403. Feb 11 01:11:14 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
  2404. Feb 11 01:11:14 [2051] vcsquorum crmd: info: do_te_invoke: Processing graph 13 (ref=pe_calc-dc-1360566674-208) derived from /var/lib/pacemaker/pengine/pe-input-13.bz2
  2405. Feb 11 01:11:14 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 86: notify p_drbd_vcs_pre_notify_demote_0 on vcs1
  2406. Feb 11 01:11:14 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 88: notify p_drbd_vcs_pre_notify_demote_0 on vcs0
  2407. Feb 11 01:11:14 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 19: demote p_drbd_vcs_demote_0 on vcs1
  2408. Feb 11 01:11:15 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 87: notify p_drbd_vcs_post_notify_demote_0 on vcs1
  2409. Feb 11 01:11:15 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 89: notify p_drbd_vcs_post_notify_demote_0 on vcs0
  2410. Feb 11 01:11:15 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 82: notify p_drbd_vcs_pre_notify_promote_0 on vcs1
  2411. Feb 11 01:11:15 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 84: notify p_drbd_vcs_pre_notify_promote_0 on vcs0
  2412. Feb 11 01:11:15 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 24: promote p_drbd_vcs_promote_0 on vcs0
  2413. Feb 11 01:11:24 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 83: notify p_drbd_vcs_post_notify_promote_0 on vcs1
  2414. Feb 11 01:11:24 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 85: notify p_drbd_vcs_post_notify_promote_0 on vcs0
  2415. Feb 11 01:11:24 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 21: monitor p_drbd_vcs_monitor_20000 on vcs1
  2416. Feb 11 01:11:24 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 25: monitor p_drbd_vcs_monitor_10000 on vcs0
  2417. Feb 11 01:11:24 [2051] vcsquorum crmd: notice: run_graph: Transition 13 (Complete=25, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-13.bz2): Complete
  2418. Feb 11 01:11:24 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
  2419. Feb 11 01:11:28 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update p_ping=(null) failed: No such device or address
  2420. Feb 11 01:11:28 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:176 - Triggered transition abort (complete=1, tag=nvpair, id=status-2868982794-p_ping, name=p_ping, value=5000, magic=NA, cib=0.17.111) : Transient attribute: update
  2421. Feb 11 01:11:28 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
  2422. Feb 11 01:11:28 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:176 - Triggered transition abort (complete=1, tag=nvpair, id=status-2852205578-p_ping, name=p_ping, value=5000, magic=NA, cib=0.17.112) : Transient attribute: update
  2423. Feb 11 01:11:28 [2050] vcsquorum pengine: warning: unpack_rsc_op: Processing failed op start for p_sysadmin_notify:0 on vcs1: unknown error (1)
  2424. Feb 11 01:11:28 [2050] vcsquorum pengine: warning: unpack_rsc_op: Processing failed op start for p_sysadmin_notify:0 on vcs0: unknown error (1)
  2425. Feb 11 01:11:28 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
  2426. Feb 11 01:11:28 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
  2427. Feb 11 01:11:28 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
  2428. Feb 11 01:11:28 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
  2429. Feb 11 01:11:28 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
  2430. Feb 11 01:11:28 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
  2431. Feb 11 01:11:28 [2050] vcsquorum pengine: notice: LogActions: Start p_fs_vcs (vcs0)
  2432. Feb 11 01:11:28 [2050] vcsquorum pengine: notice: LogActions: Start p_daemon_svn (vcs0)
  2433. Feb 11 01:11:28 [2050] vcsquorum pengine: notice: LogActions: Start p_daemon_git-daemon (vcs0)
  2434. Feb 11 01:11:28 [2050] vcsquorum pengine: notice: LogActions: Start p_ip_vcs (vcs0)
  2435. Feb 11 01:11:28 [2050] vcsquorum pengine: notice: process_pe_message: Calculated Transition 14: /var/lib/pacemaker/pengine/pe-input-14.bz2
  2436. Feb 11 01:11:28 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
  2437. Feb 11 01:11:28 [2051] vcsquorum crmd: info: do_te_invoke: Processing graph 14 (ref=pe_calc-dc-1360566688-221) derived from /var/lib/pacemaker/pengine/pe-input-14.bz2
  2438. Feb 11 01:11:28 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 16: start p_fs_vcs_start_0 on vcs0
  2439. Feb 11 01:11:29 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 17: monitor p_fs_vcs_monitor_20000 on vcs0
  2440. Feb 11 01:11:29 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 18: start p_daemon_svn_start_0 on vcs0
  2441. Feb 11 01:11:30 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 19: monitor p_daemon_svn_monitor_30000 on vcs0
  2442. Feb 11 01:11:30 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 20: start p_daemon_git-daemon_start_0 on vcs0
  2443. Feb 11 01:11:30 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 21: monitor p_daemon_git-daemon_monitor_30000 on vcs0
  2444. Feb 11 01:11:30 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 22: start p_ip_vcs_start_0 on vcs0
  2445. Feb 11 01:11:31 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 23: monitor p_ip_vcs_monitor_30000 on vcs0
  2446. Feb 11 01:11:31 [2051] vcsquorum crmd: notice: run_graph: Transition 14 (Complete=10, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-14.bz2): Complete
  2447. Feb 11 01:11:31 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
  2448. Feb 11 01:12:22 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section constraints (origin=vcs1/crm_resource/3, version=0.17.121): OK (rc=0)
  2449. Feb 11 01:12:22 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:126 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=0.18.1) : Non-status change
  2450. Feb 11 01:12:22 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
  2451. Feb 11 01:12:22 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: --- 0.17.121
  2452. Feb 11 01:12:22 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: +++ 0.18.1
  2453. Feb 11 01:12:22 [2045] vcsquorum cib: notice: cib:diff: -- <cib admin_epoch="0" epoch="17" num_updates="121" />
  2454. Feb 11 01:12:22 [2045] vcsquorum cib: notice: cib:diff: ++ <rsc_location id="cli-standby-g_vcs" rsc="g_vcs" >
  2455. Feb 11 01:12:22 [2045] vcsquorum cib: notice: cib:diff: ++ <rule id="cli-standby-rule-g_vcs" score="-INFINITY" boolean-op="and" >
  2456. Feb 11 01:12:22 [2045] vcsquorum cib: notice: cib:diff: ++ <expression id="cli-standby-expr-g_vcs" attribute="#uname" operation="eq" value="vcs0" type="string" />
  2457. Feb 11 01:12:22 [2045] vcsquorum cib: notice: cib:diff: ++ </rule>
  2458. Feb 11 01:12:22 [2045] vcsquorum cib: notice: cib:diff: ++ </rsc_location>
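
This cli-standby-g_vcs rule (score -INFINITY for #uname eq vcs0, injected by crm_resource on vcs1) is the constraint a resource move leaves behind: it is what forces the whole g_vcs stack, and with it the DRBD master, from vcs0 over to vcs1 in transitions 15-16, and it stays in the CIB until removed. A hedged removal sketch, assuming the 1.1-era flag spelling:

    # Drop the constraints created by the move (spelling varies:
    # -U/--un-move on 1.1.x, --clear on newer releases).
    crm_resource -U --resource g_vcs
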
  2459. Feb 11 01:12:22 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section constraints (origin=vcs1/crm_resource/4, version=0.18.1): OK (rc=0)
  2460. Feb 11 01:12:22 [2050] vcsquorum pengine: warning: unpack_rsc_op: Processing failed op start for p_sysadmin_notify:0 on vcs1: unknown error (1)
  2461. Feb 11 01:12:22 [2050] vcsquorum pengine: warning: unpack_rsc_op: Processing failed op start for p_sysadmin_notify:0 on vcs0: unknown error (1)
  2462. Feb 11 01:12:22 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
  2463. Feb 11 01:12:22 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
  2464. Feb 11 01:12:22 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
  2465. Feb 11 01:12:22 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
  2466. Feb 11 01:12:22 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
  2467. Feb 11 01:12:22 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
  2468. Feb 11 01:12:22 [2050] vcsquorum pengine: notice: LogActions: Move p_fs_vcs (Started vcs0 -> vcs1)
  2469. Feb 11 01:12:22 [2050] vcsquorum pengine: notice: LogActions: Move p_daemon_svn (Started vcs0 -> vcs1)
  2470. Feb 11 01:12:22 [2050] vcsquorum pengine: notice: LogActions: Move p_daemon_git-daemon (Started vcs0 -> vcs1)
  2471. Feb 11 01:12:22 [2050] vcsquorum pengine: notice: LogActions: Move p_ip_vcs (Started vcs0 -> vcs1)
  2472. Feb 11 01:12:22 [2050] vcsquorum pengine: notice: LogActions: Promote p_drbd_vcs:0 (Slave -> Master vcs1)
  2473. Feb 11 01:12:22 [2050] vcsquorum pengine: notice: LogActions: Demote p_drbd_vcs:1 (Master -> Slave vcs0)
  2474. Feb 11 01:12:22 [2050] vcsquorum pengine: notice: process_pe_message: Calculated Transition 15: /var/lib/pacemaker/pengine/pe-input-15.bz2
  2475. Feb 11 01:12:22 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
  2476. Feb 11 01:12:22 [2051] vcsquorum crmd: info: do_te_invoke: Processing graph 15 (ref=pe_calc-dc-1360566742-230) derived from /var/lib/pacemaker/pengine/pe-input-15.bz2
  2477. Feb 11 01:12:22 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 29: stop p_ip_vcs_stop_0 on vcs0
  2478. Feb 11 01:12:22 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 2: cancel p_drbd_vcs_cancel_20000 on vcs1
  2479. Feb 11 01:12:22 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 8: cancel p_drbd_vcs_cancel_10000 on vcs0
  2480. Feb 11 01:12:22 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 100: notify p_drbd_vcs_pre_notify_demote_0 on vcs1
  2481. Feb 11 01:12:22 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 102: notify p_drbd_vcs_pre_notify_demote_0 on vcs0
  2482. Feb 11 01:12:22 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:271 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=p_drbd_vcs_monitor_20000, magic=0:0;21:13:0:6a6761a2-ec2f-492c-a18c-394db5ac6dfc, cib=0.18.2) : Resource op removal
  2483. Feb 11 01:12:22 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:271 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=p_drbd_vcs_monitor_10000, magic=0:8;25:13:8:6a6761a2-ec2f-492c-a18c-394db5ac6dfc, cib=0.18.3) : Resource op removal
  2484. Feb 11 01:12:22 [2051] vcsquorum crmd: notice: run_graph: Transition 15 (Complete=7, Pending=0, Fired=0, Skipped=26, Incomplete=10, Source=/var/lib/pacemaker/pengine/pe-input-15.bz2): Stopped
  2485. Feb 11 01:12:22 [2051] vcsquorum crmd: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
  2486. Feb 11 01:12:22 [2050] vcsquorum pengine: warning: unpack_rsc_op: Processing failed op start for p_sysadmin_notify:0 on vcs1: unknown error (1)
  2487. Feb 11 01:12:22 [2050] vcsquorum pengine: warning: unpack_rsc_op: Processing failed op start for p_sysadmin_notify:0 on vcs0: unknown error (1)
  2488. Feb 11 01:12:22 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
  2489. Feb 11 01:12:22 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
  2490. Feb 11 01:12:22 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
  2491. Feb 11 01:12:22 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
  2492. Feb 11 01:12:22 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
  2493. Feb 11 01:12:22 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
  2494. Feb 11 01:12:22 [2050] vcsquorum pengine: notice: LogActions: Move p_fs_vcs (Started vcs0 -> vcs1)
  2495. Feb 11 01:12:22 [2050] vcsquorum pengine: notice: LogActions: Move p_daemon_svn (Started vcs0 -> vcs1)
  2496. Feb 11 01:12:22 [2050] vcsquorum pengine: notice: LogActions: Move p_daemon_git-daemon (Started vcs0 -> vcs1)
  2497. Feb 11 01:12:22 [2050] vcsquorum pengine: notice: LogActions: Start p_ip_vcs (vcs1)
  2498. Feb 11 01:12:22 [2050] vcsquorum pengine: notice: LogActions: Promote p_drbd_vcs:0 (Slave -> Master vcs1)
  2499. Feb 11 01:12:22 [2050] vcsquorum pengine: notice: LogActions: Demote p_drbd_vcs:1 (Master -> Slave vcs0)
  2500. Feb 11 01:12:22 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
  2501. Feb 11 01:12:22 [2050] vcsquorum pengine: notice: process_pe_message: Calculated Transition 16: /var/lib/pacemaker/pengine/pe-input-16.bz2
  2502. Feb 11 01:12:22 [2051] vcsquorum crmd: info: do_te_invoke: Processing graph 16 (ref=pe_calc-dc-1360566742-236) derived from /var/lib/pacemaker/pengine/pe-input-16.bz2
  2503. Feb 11 01:12:22 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 23: stop p_daemon_git-daemon_stop_0 on vcs0
  2504. Feb 11 01:12:22 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 96: notify p_drbd_vcs_pre_notify_demote_0 on vcs1
  2505. Feb 11 01:12:22 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 98: notify p_drbd_vcs_pre_notify_demote_0 on vcs0
  2506. Feb 11 01:12:22 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 20: stop p_daemon_svn_stop_0 on vcs0
  2507. Feb 11 01:12:24 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 17: stop p_fs_vcs_stop_0 on vcs0
  2508. Feb 11 01:12:25 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 37: demote p_drbd_vcs_demote_0 on vcs0
  2509. Feb 11 01:12:25 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 97: notify p_drbd_vcs_post_notify_demote_0 on vcs1
  2510. Feb 11 01:12:25 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 99: notify p_drbd_vcs_post_notify_demote_0 on vcs0
  2511. Feb 11 01:12:25 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 92: notify p_drbd_vcs_pre_notify_promote_0 on vcs1
  2512. Feb 11 01:12:25 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 94: notify p_drbd_vcs_pre_notify_promote_0 on vcs0
  2513. Feb 11 01:12:25 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 34: promote p_drbd_vcs_promote_0 on vcs1
  2514. Feb 11 01:12:33 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 93: notify p_drbd_vcs_post_notify_promote_0 on vcs1
  2515. Feb 11 01:12:33 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 95: notify p_drbd_vcs_post_notify_promote_0 on vcs0
  2516. Feb 11 01:12:33 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 18: start p_fs_vcs_start_0 on vcs1
  2517. Feb 11 01:12:33 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 35: monitor p_drbd_vcs_monitor_10000 on vcs1
  2518. Feb 11 01:12:33 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 39: monitor p_drbd_vcs_monitor_20000 on vcs0
  2519. Feb 11 01:12:43 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 19: monitor p_fs_vcs_monitor_20000 on vcs1
  2520. Feb 11 01:12:43 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 21: start p_daemon_svn_start_0 on vcs1
  2521. Feb 11 01:12:44 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 22: monitor p_daemon_svn_monitor_30000 on vcs1
  2522. Feb 11 01:12:44 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 24: start p_daemon_git-daemon_start_0 on vcs1
  2523. Feb 11 01:12:44 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 25: monitor p_daemon_git-daemon_monitor_30000 on vcs1
  2524. Feb 11 01:12:44 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 26: start p_ip_vcs_start_0 on vcs1
  2525. Feb 11 01:12:45 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 27: monitor p_ip_vcs_monitor_30000 on vcs1
  2526. Feb 11 01:12:45 [2051] vcsquorum crmd: notice: run_graph: Transition 16 (Complete=40, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-16.bz2): Complete
  2527. Feb 11 01:12:45 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
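
[Editor: transition 16 completes the failover in about 23 seconds, in the expected order for a DRBD-backed group: services and filesystem down on vcs0, DRBD demoted there and promoted on vcs1, then filesystem, svn, git-daemon and the IP brought up on vcs1 with recurring monitors re-armed. The end state could be spot-checked like this (the bare DRBD resource name "vcs" is a guess inferred from p_drbd_vcs):

    crm_mon -1          # g_vcs and the DRBD master should both report vcs1
    drbdadm role vcs    # on vcs1 this should print Primary/Secondary
]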
  2528. Feb 11 01:13:23 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:176 - Triggered transition abort (complete=1, tag=nvpair, id=status-2868982794-master-p_drbd_vcs, name=master-p_drbd_vcs, value=10000, magic=NA, cib=0.18.20) : Transient attribute: update
  2529. Feb 11 01:13:23 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
  2530. Feb 11 01:13:23 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update master-p_drbd_vcs=(null) failed: No such device or address
  2531. Feb 11 01:13:23 [2050] vcsquorum pengine: warning: unpack_rsc_op: Processing failed op start for p_sysadmin_notify:0 on vcs1: unknown error (1)
  2532. Feb 11 01:13:23 [2050] vcsquorum pengine: warning: unpack_rsc_op: Processing failed op start for p_sysadmin_notify:0 on vcs0: unknown error (1)
  2533. Feb 11 01:13:23 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
  2534. Feb 11 01:13:23 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
  2535. Feb 11 01:13:23 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
  2536. Feb 11 01:13:23 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
  2537. Feb 11 01:13:23 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
  2538. Feb 11 01:13:23 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
  2539. Feb 11 01:13:23 [2050] vcsquorum pengine: notice: process_pe_message: Calculated Transition 17: /var/lib/pacemaker/pengine/pe-input-17.bz2
  2540. Feb 11 01:13:23 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
  2541. Feb 11 01:13:23 [2051] vcsquorum crmd: info: do_te_invoke: Processing graph 17 (ref=pe_calc-dc-1360566803-260) derived from /var/lib/pacemaker/pengine/pe-input-17.bz2
  2542. Feb 11 01:13:23 [2051] vcsquorum crmd: notice: run_graph: Transition 17 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-17.bz2): Complete
  2543. Feb 11 01:13:23 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
  2544. Feb 11 01:13:49 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:126 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=0.19.1) : Non-status change
  2545. Feb 11 01:13:49 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
  2546. Feb 11 01:13:49 [2045] vcsquorum cib: info: cib_replace_notify: Replaced: 0.18.21 -> 0.19.1 from vcs1
  2547. Feb 11 01:13:49 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: --- 0.18.21
  2548. Feb 11 01:13:49 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: +++ 0.19.1
  2549. Feb 11 01:13:49 [2045] vcsquorum cib: notice: cib:diff: -- <rsc_location id="cli-standby-g_vcs" rsc="g_vcs" >
  2550. Feb 11 01:13:49 [2045] vcsquorum cib: notice: cib:diff: -- <rule id="cli-standby-rule-g_vcs" score="-INFINITY" boolean-op="and" >
  2551. Feb 11 01:13:49 [2045] vcsquorum cib: notice: cib:diff: -- <expression id="cli-standby-expr-g_vcs" attribute="#uname" operation="eq" value="vcs0" type="string" />
  2552. Feb 11 01:13:49 [2045] vcsquorum cib: notice: cib:diff: -- </rule>
  2553. Feb 11 01:13:49 [2045] vcsquorum cib: notice: cib:diff: -- </rsc_location>
  2554. Feb 11 01:13:49 [2045] vcsquorum cib: notice: cib:diff: ++ <cib epoch="19" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="vcs1" update-client="crm_resource" cib-last-written="Mon Feb 11 01:12:22 2013" have-quorum="1" dc-uuid="755053578" />
  2555. Feb 11 01:13:49 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=vcs1/cibadmin/2, version=0.19.1): OK (rc=0)
  2556. Feb 11 01:13:49 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
  2557. Feb 11 01:13:49 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/220, version=0.19.2): OK (rc=0)
  2558. Feb 11 01:13:49 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update shutdown=(null) failed: No such device or address
  2559. Feb 11 01:13:49 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update p_ping=(null) failed: No such device or address
  2560. Feb 11 01:13:49 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update terminate=(null) failed: No such device or address
  2561. Feb 11 01:13:49 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update fail-count-p_sysadmin_notify=(null) failed: No such device or address
  2562. Feb 11 01:13:49 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update master-p_drbd_vcs=(null) failed: No such device or address
  2563. Feb 11 01:13:49 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update last-failure-p_sysadmin_notify=(null) failed: No such device or address
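
[Editor: the burst of attrd warnings above follows directly from the full CIB replace (0.18.21 -> 0.19.1): attrd tries to push out its transient attributes (shutdown, p_ping, master-p_drbd_vcs, the p_sysadmin_notify fail counts), but the status entries they lived in were discarded with the replaced CIB, so the cib daemon answers ENXIO ("No such device or address"). After a replace this is usually harmless noise. Any one of the attributes can be checked by hand, e.g.:

    # query the DRBD master score in the status section for vcs1
    crm_attribute --type status --node vcs1 --name master-p_drbd_vcs --query
]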
  2564. Feb 11 01:14:09 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section constraints (origin=vcs1/crm_resource/3, version=0.19.5): OK (rc=0)
  2565. Feb 11 01:14:09 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: --- 0.19.5
  2566. Feb 11 01:14:09 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: +++ 0.20.1
  2567. Feb 11 01:14:09 [2045] vcsquorum cib: notice: cib:diff: -- <cib admin_epoch="0" epoch="19" num_updates="5" />
  2568. Feb 11 01:14:09 [2045] vcsquorum cib: notice: cib:diff: ++ <rsc_location id="cli-standby-g_vcs" rsc="g_vcs" >
  2569. Feb 11 01:14:09 [2045] vcsquorum cib: notice: cib:diff: ++ <rule id="cli-standby-rule-g_vcs" score="-INFINITY" boolean-op="and" >
  2570. Feb 11 01:14:09 [2045] vcsquorum cib: notice: cib:diff: ++ <expression id="cli-standby-expr-g_vcs" attribute="#uname" operation="eq" value="vcs1" type="string" />
  2571. Feb 11 01:14:09 [2045] vcsquorum cib: notice: cib:diff: ++ </rule>
  2572. Feb 11 01:14:09 [2045] vcsquorum cib: notice: cib:diff: ++ </rsc_location>
  2573. Feb 11 01:14:09 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section constraints (origin=vcs1/crm_resource/4, version=0.20.1): OK (rc=0)
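
[Editor: the two constraint diffs in this stretch are crm_resource at work: cli-standby-g_vcs is the score=-INFINITY rule crm_resource writes to push a resource off its current node, and update-client="crm_resource" in the diff confirms the origin. At 01:13:49 the rule banning vcs0 was deleted and at 01:14:09 a new one banning vcs1 appeared, i.e. the operator is steering g_vcs back toward vcs0. Forgotten cli-* rules are a classic cause of "the resource refuses to move back"; a sketch for finding and removing them (--un-move per this era's crm_resource; newer builds call it --clear):

    cibadmin -Q -o constraints | grep cli-     # list CLI-generated location rules
    crm_resource --un-move --resource g_vcs    # drop the ones created by --move
]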
  2574. Feb 11 01:14:09 [2051] vcsquorum crmd: info: do_election_count_vote: Election 35 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2575. Feb 11 01:14:29 [2051] vcsquorum crmd: info: do_election_count_vote: Election 36 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2576. Feb 11 01:14:49 [2051] vcsquorum crmd: info: do_election_count_vote: Election 37 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2577. Feb 11 01:15:09 [2051] vcsquorum crmd: info: do_election_count_vote: Election 38 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2578. Feb 11 01:15:29 [2051] vcsquorum crmd: info: do_election_count_vote: Election 39 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2579. Feb 11 01:15:49 [2051] vcsquorum crmd: error: crm_timer_popped: Election Timeout (I_ELECTION_DC) just popped in state S_ELECTION! (120000ms)
  2580. Feb 11 01:15:49 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_TIMER_POPPED origin=crm_timer_popped ]
  2581. Feb 11 01:15:49 [2051] vcsquorum crmd: info: do_dc_takeover: Taking over DC status for this partition
  2582. Feb 11 01:15:49 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/222, version=0.20.2): OK (rc=0)
  2583. Feb 11 01:15:49 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/223, version=0.20.3): OK (rc=0)
  2584. Feb 11 01:15:49 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/225, version=0.20.4): OK (rc=0)
  2585. Feb 11 01:15:49 [2051] vcsquorum crmd: info: do_dc_join_offer_all: join-18: Waiting on 3 outstanding join acks
  2586. Feb 11 01:15:49 [2051] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
  2587. Feb 11 01:15:49 [2051] vcsquorum crmd: info: do_election_count_vote: Election 40 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2588. Feb 11 01:15:49 [2051] vcsquorum crmd: info: do_state_transition: State transition S_INTEGRATION -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_election_count_vote ]
  2589. Feb 11 01:15:49 [2051] vcsquorum crmd: info: do_election_count_vote: Election 41 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2590. Feb 11 01:15:49 [2051] vcsquorum crmd: warning: do_log: FSA: Input I_JOIN_REQUEST from route_message() received in state S_ELECTION
  2591. Feb 11 01:15:49 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/227, version=0.20.5): OK (rc=0)
  2592. Feb 11 01:15:49 [2051] vcsquorum crmd: warning: join_query_callback: No DC for join-18
  2593. Feb 11 01:15:49 [2051] vcsquorum crmd: warning: do_log: FSA: Input I_JOIN_REQUEST from route_message() received in state S_ELECTION
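
[Editor: here the paste's subject shows up: the DC election never settles. The 120 s election timer pops in S_ELECTION, vcsquorum takes over and offers join-18, but before any node can be integrated another vote arrives from vcs1 and throws the FSA straight back into S_ELECTION, hence "No DC for join-18" and the dropped I_JOIN_REQUEST inputs. vcs1 then keeps re-voting every 20 s, so the same cycle repeats below for join-19, join-20 and join-21. While it loops, one might at least verify that membership itself is stable (a sketch; exact tooling depends on the corosync generation in use):

    corosync-quorumtool -s    # node list and quorum as corosync sees them
    corosync-cfgtool -s       # ring status, per node
    crm_mon -1                # the Current DC line should name a node
]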
  2594. Feb 11 01:16:09 [2051] vcsquorum crmd: info: do_election_count_vote: Election 42 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2595. Feb 11 01:16:29 [2051] vcsquorum crmd: info: do_election_count_vote: Election 43 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2596. Feb 11 01:16:49 [2051] vcsquorum crmd: info: do_election_count_vote: Election 44 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2597. Feb 11 01:17:09 [2051] vcsquorum crmd: info: do_election_count_vote: Election 45 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2598. Feb 11 01:17:29 [2051] vcsquorum crmd: info: do_election_count_vote: Election 46 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2599. Feb 11 01:17:49 [2051] vcsquorum crmd: error: crm_timer_popped: Election Timeout (I_ELECTION_DC) just popped in state S_ELECTION! (120000ms)
  2600. Feb 11 01:17:49 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_TIMER_POPPED origin=crm_timer_popped ]
  2601. Feb 11 01:17:49 [2051] vcsquorum crmd: info: do_dc_takeover: Taking over DC status for this partition
  2602. Feb 11 01:17:49 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/230, version=0.20.6): OK (rc=0)
  2603. Feb 11 01:17:49 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/231, version=0.20.7): OK (rc=0)
  2604. Feb 11 01:17:49 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/233, version=0.20.8): OK (rc=0)
  2605. Feb 11 01:17:49 [2051] vcsquorum crmd: info: do_dc_join_offer_all: join-19: Waiting on 3 outstanding join acks
  2606. Feb 11 01:17:49 [2051] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
  2607. Feb 11 01:17:49 [2051] vcsquorum crmd: info: do_election_count_vote: Election 47 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2608. Feb 11 01:17:49 [2051] vcsquorum crmd: info: do_state_transition: State transition S_INTEGRATION -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_election_count_vote ]
  2609. Feb 11 01:17:49 [2051] vcsquorum crmd: info: do_election_count_vote: Election 48 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2610. Feb 11 01:17:49 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/235, version=0.20.9): OK (rc=0)
  2611. Feb 11 01:17:49 [2051] vcsquorum crmd: warning: do_log: FSA: Input I_JOIN_REQUEST from route_message() received in state S_ELECTION
  2612. Feb 11 01:17:49 [2051] vcsquorum crmd: warning: join_query_callback: No DC for join-19
  2613. Feb 11 01:17:49 [2051] vcsquorum crmd: warning: do_log: FSA: Input I_JOIN_REQUEST from route_message() received in state S_ELECTION
  2614. Feb 11 01:18:09 [2051] vcsquorum crmd: info: do_election_count_vote: Election 49 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2615. Feb 11 01:18:29 [2051] vcsquorum crmd: info: do_election_count_vote: Election 50 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2616. Feb 11 01:18:49 [2051] vcsquorum crmd: info: do_election_count_vote: Election 51 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2617. Feb 11 01:19:09 [2051] vcsquorum crmd: info: do_election_count_vote: Election 52 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2618. Feb 11 01:19:29 [2051] vcsquorum crmd: info: do_election_count_vote: Election 53 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2619. Feb 11 01:19:49 [2051] vcsquorum crmd: error: crm_timer_popped: Election Timeout (I_ELECTION_DC) just popped in state S_ELECTION! (120000ms)
  2620. Feb 11 01:19:49 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_TIMER_POPPED origin=crm_timer_popped ]
  2621. Feb 11 01:19:49 [2051] vcsquorum crmd: info: do_dc_takeover: Taking over DC status for this partition
  2622. Feb 11 01:19:49 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/238, version=0.20.10): OK (rc=0)
  2623. Feb 11 01:19:49 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/239, version=0.20.11): OK (rc=0)
  2624. Feb 11 01:19:49 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/241, version=0.20.12): OK (rc=0)
  2625. Feb 11 01:19:49 [2051] vcsquorum crmd: info: do_dc_join_offer_all: join-20: Waiting on 3 outstanding join acks
  2626. Feb 11 01:19:49 [2051] vcsquorum crmd: info: do_election_count_vote: Election 54 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2627. Feb 11 01:19:49 [2051] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
  2628. Feb 11 01:19:49 [2051] vcsquorum crmd: info: do_state_transition: State transition S_INTEGRATION -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_election_count_vote ]
  2629. Feb 11 01:19:49 [2051] vcsquorum crmd: info: do_election_count_vote: Election 55 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2630. Feb 11 01:19:49 [2051] vcsquorum crmd: warning: do_log: FSA: Input I_JOIN_REQUEST from route_message() received in state S_ELECTION
  2631. Feb 11 01:19:49 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/243, version=0.20.13): OK (rc=0)
  2632. Feb 11 01:19:49 [2051] vcsquorum crmd: warning: join_query_callback: No DC for join-20
  2633. Feb 11 01:19:49 [2051] vcsquorum crmd: warning: do_log: FSA: Input I_JOIN_REQUEST from route_message() received in state S_ELECTION
  2634. Feb 11 01:20:09 [2051] vcsquorum crmd: info: do_election_count_vote: Election 56 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2635. Feb 11 01:20:29 [2051] vcsquorum crmd: info: do_election_count_vote: Election 57 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2636. Feb 11 01:20:49 [2051] vcsquorum crmd: info: do_election_count_vote: Election 58 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2637. Feb 11 01:21:09 [2051] vcsquorum crmd: info: do_election_count_vote: Election 59 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2638. Feb 11 01:21:29 [2051] vcsquorum crmd: info: do_election_count_vote: Election 60 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2639. Feb 11 01:21:49 [2051] vcsquorum crmd: error: crm_timer_popped: Election Timeout (I_ELECTION_DC) just popped in state S_ELECTION! (120000ms)
  2640. Feb 11 01:21:49 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_TIMER_POPPED origin=crm_timer_popped ]
  2641. Feb 11 01:21:49 [2051] vcsquorum crmd: info: do_dc_takeover: Taking over DC status for this partition
  2642. Feb 11 01:21:49 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/246, version=0.20.14): OK (rc=0)
  2643. Feb 11 01:21:49 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/247, version=0.20.15): OK (rc=0)
  2644. Feb 11 01:21:49 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/249, version=0.20.16): OK (rc=0)
  2645. Feb 11 01:21:49 [2051] vcsquorum crmd: info: do_dc_join_offer_all: join-21: Waiting on 3 outstanding join acks
  2646. Feb 11 01:21:49 [2051] vcsquorum crmd: info: do_election_count_vote: Election 61 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2647. Feb 11 01:21:49 [2051] vcsquorum crmd: info: do_state_transition: State transition S_INTEGRATION -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_election_count_vote ]
  2648. Feb 11 01:21:49 [2051] vcsquorum crmd: warning: do_log: FSA: Input I_JOIN_OFFER from route_message() received in state S_ELECTION
  2649. Feb 11 01:21:49 [2051] vcsquorum crmd: info: do_election_count_vote: Election 62 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2650. Feb 11 01:21:49 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/251, version=0.20.17): OK (rc=0)
  2651. Feb 11 01:21:49 [2051] vcsquorum crmd: warning: do_log: FSA: Input I_JOIN_REQUEST from route_message() received in state S_ELECTION
  2652. Feb 11 01:22:08 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section constraints (origin=vcs1/crm_resource/3, version=0.20.18): OK (rc=0)
  2653. Feb 11 01:22:08 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section constraints (origin=vcs1/crm_resource/4, version=0.20.19): OK (rc=0)
  2654. Feb 11 01:22:09 [2051] vcsquorum crmd: info: do_election_count_vote: Election 63 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2655. Feb 11 01:22:29 [2051] vcsquorum crmd: info: do_election_count_vote: Election 64 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2656. Feb 11 01:22:47 [2045] vcsquorum cib: info: cib_replace_notify: Replaced: 0.20.19 -> 0.21.1 from vcs0
  2657. Feb 11 01:22:47 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: --- 0.20.19
  2658. Feb 11 01:22:47 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: +++ 0.21.1
  2659. Feb 11 01:22:47 [2045] vcsquorum cib: notice: cib:diff: -- <rsc_location id="cli-standby-g_vcs" rsc="g_vcs" >
  2660. Feb 11 01:22:47 [2045] vcsquorum cib: notice: cib:diff: -- <rule id="cli-standby-rule-g_vcs" score="-INFINITY" boolean-op="and" >
  2661. Feb 11 01:22:47 [2045] vcsquorum cib: notice: cib:diff: -- <expression id="cli-standby-expr-g_vcs" attribute="#uname" operation="eq" value="vcs1" type="string" />
  2662. Feb 11 01:22:47 [2045] vcsquorum cib: notice: cib:diff: -- </rule>
  2663. Feb 11 01:22:47 [2045] vcsquorum cib: notice: cib:diff: -- </rsc_location>
  2664. Feb 11 01:22:47 [2045] vcsquorum cib: notice: cib:diff: ++ <cib epoch="21" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="vcs1" update-client="crm_resource" cib-last-written="Mon Feb 11 01:14:09 2013" have-quorum="1" dc-uuid="755053578" />
  2665. Feb 11 01:22:47 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=vcs0/cibadmin/2, version=0.21.1): OK (rc=0)
  2666. Feb 11 01:22:47 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/253, version=0.21.2): OK (rc=0)
  2667. Feb 11 01:22:47 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update shutdown=(null) failed: No such device or address
  2668. Feb 11 01:22:47 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update p_ping=(null) failed: No such device or address
  2669. Feb 11 01:22:47 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update terminate=(null) failed: No such device or address
  2670. Feb 11 01:22:47 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update fail-count-p_sysadmin_notify=(null) failed: No such device or address
  2671. Feb 11 01:22:47 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update master-p_drbd_vcs=(null) failed: No such device or address
  2672. Feb 11 01:22:47 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update last-failure-p_sysadmin_notify=(null) failed: No such device or address
  2673. Feb 11 01:22:55 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: --- 0.21.4
  2674. Feb 11 01:22:55 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: +++ 0.22.1
  2675. Feb 11 01:22:55 [2045] vcsquorum cib: notice: cib:diff: -- <cib admin_epoch="0" epoch="21" num_updates="4" />
  2676. Feb 11 01:22:55 [2045] vcsquorum cib: notice: cib:diff: ++ <meta_attributes id="g_vcs-meta_attributes" >
  2677. Feb 11 01:22:55 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="g_vcs-meta_attributes-target-role" name="target-role" value="Stopped" />
  2678. Feb 11 01:22:55 [2045] vcsquorum cib: notice: cib:diff: ++ </meta_attributes>
  2679. Feb 11 01:22:55 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_replace for section resources (origin=vcs0/cibadmin/2, version=0.22.1): OK (rc=0)
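
[Editor: this resources-section replace from vcs0 sets target-role=Stopped on g_vcs, the standard way of requesting a resource stop; with the DC election still unresolved, nothing can act on it yet. The same meta attribute can be flipped without shipping XML (a sketch):

    crm_resource --resource g_vcs --meta --set-parameter target-role --parameter-value Stopped
    crm_resource --resource g_vcs --meta --set-parameter target-role --parameter-value Started
]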
  2680. Feb 11 01:23:07 [2051] vcsquorum crmd: info: do_election_count_vote: Election 65 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2681. Feb 11 01:23:27 [2051] vcsquorum crmd: info: do_election_count_vote: Election 66 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2682. Feb 11 01:23:47 [2051] vcsquorum crmd: info: do_election_count_vote: Election 67 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2683. Feb 11 01:23:49 [2051] vcsquorum crmd: error: crm_timer_popped: Election Timeout (I_ELECTION_DC) just popped in state S_ELECTION! (120000ms)
  2684. Feb 11 01:23:49 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_TIMER_POPPED origin=crm_timer_popped ]
  2685. Feb 11 01:23:49 [2051] vcsquorum crmd: info: do_dc_takeover: Taking over DC status for this partition
  2686. Feb 11 01:23:49 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/255, version=0.22.2): OK (rc=0)
  2687. Feb 11 01:23:49 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/256, version=0.22.3): OK (rc=0)
  2688. Feb 11 01:23:49 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/258, version=0.22.4): OK (rc=0)
  2689. Feb 11 01:23:49 [2051] vcsquorum crmd: info: do_dc_join_offer_all: join-22: Waiting on 3 outstanding join acks
  2690. Feb 11 01:23:49 [2051] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
  2691. Feb 11 01:23:50 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/260, version=0.22.5): OK (rc=0)
  2692. Feb 11 01:23:50 [2051] vcsquorum crmd: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
  2693. Feb 11 01:23:50 [2051] vcsquorum crmd: info: do_dc_join_finalize: join-22: Syncing the CIB from vcsquorum to the rest of the cluster
  2694. Feb 11 01:23:50 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/263, version=0.22.5): OK (rc=0)
  2695. Feb 11 01:23:50 [2051] vcsquorum crmd: info: do_dc_join_ack: join-22: Updating node state to member for vcsquorum
  2696. Feb 11 01:23:50 [2051] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcsquorum']/lrm
  2697. Feb 11 01:23:50 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/264, version=0.22.6): OK (rc=0)
  2698. Feb 11 01:23:50 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/265, version=0.22.7): OK (rc=0)
  2699. Feb 11 01:23:50 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/266, version=0.22.8): OK (rc=0)
  2700. Feb 11 01:23:50 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcsquorum']/lrm (origin=local/crmd/267, version=0.22.9): OK (rc=0)
  2701. Feb 11 01:23:52 [2051] vcsquorum crmd: info: do_dc_join_ack: join-22: Updating node state to member for vcs0
  2702. Feb 11 01:23:52 [2051] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcs0']/lrm
  2703. Feb 11 01:23:52 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcs0']/lrm (origin=local/crmd/269, version=0.22.21): OK (rc=0)
  2704. Feb 11 01:23:52 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update p_ping=(null) failed: No such device or address
  2705. Feb 11 01:23:52 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update fail-count-p_sysadmin_notify=(null) failed: No such device or address
  2706. Feb 11 01:23:52 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update master-p_drbd_vcs=(null) failed: No such device or address
  2707. Feb 11 01:23:52 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update last-failure-p_sysadmin_notify=(null) failed: No such device or address
  2708. Feb 11 01:23:54 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update p_ping=(null) failed: No such device or address
  2709. Feb 11 01:23:54 [2051] vcsquorum crmd: info: do_dc_join_ack: join-22: Updating node state to member for vcs1
  2710. Feb 11 01:23:54 [2051] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcs1']/lrm
  2711. Feb 11 01:23:54 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcs1']/lrm (origin=local/crmd/271, version=0.22.34): OK (rc=0)
  2712. Feb 11 01:23:54 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update fail-count-p_sysadmin_notify=(null) failed: No such device or address
  2713. Feb 11 01:23:54 [2051] vcsquorum crmd: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
  2714. Feb 11 01:23:54 [2051] vcsquorum crmd: info: abort_transition_graph: do_te_invoke:156 - Triggered transition abort (complete=1) : Peer Cancelled
  2715. Feb 11 01:23:54 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update master-p_drbd_vcs=(null) failed: No such device or address
  2716. Feb 11 01:23:54 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/273, version=0.22.36): OK (rc=0)
  2717. Feb 11 01:23:54 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/275, version=0.22.38): OK (rc=0)
  2718. Feb 11 01:23:54 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update last-failure-p_sysadmin_notify=(null) failed: No such device or address
  2719. Feb 11 01:23:54 [2049] vcsquorum attrd: notice: attrd_local_callback: Sending full refresh (origin=crmd)
  2720. Feb 11 01:23:54 [2049] vcsquorum attrd: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
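
[Editor: at 01:23:50 the election finally converges: join-22 is offered, vcsquorum, vcs0 and vcs1 are each updated back to member, every node's stale lrm status is erased, and attrd refloods probe_complete, roughly ten minutes after the first S_ELECTION entry at 01:13:49. With a DC seated, the very next policy-engine run (transition 18 below) enforces the target-role=Stopped queued up earlier. Whether a DC is seated can be asked directly (a sketch):

    crmadmin -D    # ask the local crmd which node it currently believes is DC
]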
  2721. Feb 11 01:23:55 [2050] vcsquorum pengine: warning: unpack_rsc_op: Processing failed op start for p_sysadmin_notify:0 on vcs1: unknown error (1)
  2722. Feb 11 01:23:55 [2050] vcsquorum pengine: warning: unpack_rsc_op: Processing failed op start for p_sysadmin_notify:0 on vcs0: unknown error (1)
  2723. Feb 11 01:23:55 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
  2724. Feb 11 01:23:55 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
  2725. Feb 11 01:23:55 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
  2726. Feb 11 01:23:55 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
  2727. Feb 11 01:23:55 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
  2728. Feb 11 01:23:55 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
  2729. Feb 11 01:23:55 [2050] vcsquorum pengine: notice: LogActions: Stop p_fs_vcs (vcs1)
  2730. Feb 11 01:23:55 [2050] vcsquorum pengine: notice: LogActions: Stop p_daemon_svn (vcs1)
  2731. Feb 11 01:23:55 [2050] vcsquorum pengine: notice: LogActions: Stop p_daemon_git-daemon (vcs1)
  2732. Feb 11 01:23:55 [2050] vcsquorum pengine: notice: LogActions: Stop p_ip_vcs (vcs1)
  2733. Feb 11 01:23:55 [2050] vcsquorum pengine: notice: process_pe_message: Calculated Transition 18: /var/lib/pacemaker/pengine/pe-input-18.bz2
  2734. Feb 11 01:23:55 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
  2735. Feb 11 01:23:55 [2051] vcsquorum crmd: info: do_te_invoke: Processing graph 18 (ref=pe_calc-dc-1360567435-320) derived from /var/lib/pacemaker/pengine/pe-input-18.bz2
  2736. Feb 11 01:23:55 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 23: stop p_ip_vcs_stop_0 on vcs1
  2737. Feb 11 01:23:55 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 22: stop p_daemon_git-daemon_stop_0 on vcs1
  2738. Feb 11 01:23:55 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 21: stop p_daemon_svn_stop_0 on vcs1
  2739. Feb 11 01:23:57 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 20: stop p_fs_vcs_stop_0 on vcs1
  2740. Feb 11 01:23:57 [2051] vcsquorum crmd: notice: run_graph: Transition 18 (Complete=7, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-18.bz2): Complete
  2741. Feb 11 01:23:57 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
  2742. Feb 11 01:24:05 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:126 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=0.23.1) : Non-status change
  2743. Feb 11 01:24:05 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
  2744. Feb 11 01:24:05 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: --- 0.22.46
  2745. Feb 11 01:24:05 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: +++ 0.23.1
  2746. Feb 11 01:24:05 [2045] vcsquorum cib: notice: cib:diff: -- <nvpair value="Stopped" id="g_vcs-meta_attributes-target-role" />
  2747. Feb 11 01:24:05 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="g_vcs-meta_attributes-target-role" name="target-role" value="Started" />
  2748. Feb 11 01:24:05 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_replace for section resources (origin=vcs0/cibadmin/2, version=0.23.1): OK (rc=0)
  2749. Feb 11 01:24:05 [2050] vcsquorum pengine: warning: unpack_rsc_op: Processing failed op start for p_sysadmin_notify:0 on vcs1: unknown error (1)
  2750. Feb 11 01:24:05 [2050] vcsquorum pengine: warning: unpack_rsc_op: Processing failed op start for p_sysadmin_notify:0 on vcs0: unknown error (1)
  2751. Feb 11 01:24:05 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
  2752. Feb 11 01:24:05 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
  2753. Feb 11 01:24:05 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
  2754. Feb 11 01:24:05 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
  2755. Feb 11 01:24:05 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
  2756. Feb 11 01:24:05 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
  2757. Feb 11 01:24:05 [2050] vcsquorum pengine: notice: LogActions: Start p_fs_vcs (vcs1)
  2758. Feb 11 01:24:05 [2050] vcsquorum pengine: notice: LogActions: Start p_daemon_svn (vcs1)
  2759. Feb 11 01:24:05 [2050] vcsquorum pengine: notice: LogActions: Start p_daemon_git-daemon (vcs1)
  2760. Feb 11 01:24:05 [2050] vcsquorum pengine: notice: LogActions: Start p_ip_vcs (vcs1)
  2761. Feb 11 01:24:05 [2050] vcsquorum pengine: notice: process_pe_message: Calculated Transition 19: /var/lib/pacemaker/pengine/pe-input-19.bz2
  2762. Feb 11 01:24:05 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
  2763. Feb 11 01:24:05 [2051] vcsquorum crmd: info: do_te_invoke: Processing graph 19 (ref=pe_calc-dc-1360567445-325) derived from /var/lib/pacemaker/pengine/pe-input-19.bz2
  2764. Feb 11 01:24:05 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 16: start p_fs_vcs_start_0 on vcs1
  2765. Feb 11 01:24:06 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 17: monitor p_fs_vcs_monitor_20000 on vcs1
  2766. Feb 11 01:24:06 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 18: start p_daemon_svn_start_0 on vcs1
  2767. Feb 11 01:24:06 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 19: monitor p_daemon_svn_monitor_30000 on vcs1
  2768. Feb 11 01:24:06 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 20: start p_daemon_git-daemon_start_0 on vcs1
  2769. Feb 11 01:24:06 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 21: monitor p_daemon_git-daemon_monitor_30000 on vcs1
  2770. Feb 11 01:24:06 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 22: start p_ip_vcs_start_0 on vcs1
  2771. Feb 11 01:24:07 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 23: monitor p_ip_vcs_monitor_30000 on vcs1
  2772. Feb 11 01:24:07 [2051] vcsquorum crmd: notice: run_graph: Transition 19 (Complete=10, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-19.bz2): Complete
  2773. Feb 11 01:24:07 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
  2774. Feb 11 01:24:34 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:126 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=0.24.1) : Non-status change
  2775. Feb 11 01:24:34 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
  2776. Feb 11 01:24:34 [2045] vcsquorum cib: info: cib_replace_notify: Replaced: 0.23.9 -> 0.24.1 from vcs0
  2777. Feb 11 01:24:34 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: --- 0.23.9
  2778. Feb 11 01:24:34 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: +++ 0.24.1
  2779. Feb 11 01:24:34 [2045] vcsquorum cib: notice: cib:diff: -- <meta_attributes id="g_vcs-meta_attributes" >
  2780. Feb 11 01:24:34 [2045] vcsquorum cib: notice: cib:diff: -- <nvpair id="g_vcs-meta_attributes-target-role" name="target-role" value="Started" />
  2781. Feb 11 01:24:34 [2045] vcsquorum cib: notice: cib:diff: -- </meta_attributes>
  2782. Feb 11 01:24:34 [2045] vcsquorum cib: notice: cib:diff: ++ <cib epoch="24" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="vcs0" update-client="cibadmin" cib-last-written="Mon Feb 11 01:24:05 2013" have-quorum="1" dc-uuid="755053578" />
  2783. Feb 11 01:24:34 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=vcs0/cibadmin/2, version=0.24.1): OK (rc=0)
  2784. Feb 11 01:24:34 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
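
[Editor: note what restarts the election each time: a cib_replace for section 'all' makes the receiving crmd run do_cib_replaced, which forces I_ELECTION (first at 01:13:49, again here). The section-scoped replaces in this same log (resources, at 01:22:55 and 01:24:05) have no such side effect. If the operator's tooling can target a section instead of pushing the whole CIB, the election churn is avoided; a sketch, with resources.xml standing in for whatever file is being loaded:

    # replace only the resources section rather than the entire CIB
    cibadmin --replace -o resources --xml-file resources.xml
]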
  2785. Feb 11 01:24:34 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/280, version=0.24.2): OK (rc=0)
  2786. Feb 11 01:24:34 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update shutdown=(null) failed: No such device or address
  2787. Feb 11 01:24:34 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update p_ping=(null) failed: No such device or address
  2788. Feb 11 01:24:35 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update terminate=(null) failed: No such device or address
  2789. Feb 11 01:24:35 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update fail-count-p_sysadmin_notify=(null) failed: No such device or address
  2790. Feb 11 01:24:35 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update master-p_drbd_vcs=(null) failed: No such device or address
  2791. Feb 11 01:24:35 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update last-failure-p_sysadmin_notify=(null) failed: No such device or address
  2792. Feb 11 01:24:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section constraints (origin=vcs0/crm_resource/3, version=0.24.5): OK (rc=0)
  2793. Feb 11 01:24:46 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: --- 0.24.5
  2794. Feb 11 01:24:46 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: +++ 0.25.1
  2795. Feb 11 01:24:46 [2045] vcsquorum cib: notice: cib:diff: -- <cib admin_epoch="0" epoch="24" num_updates="5" />
  2796. Feb 11 01:24:46 [2045] vcsquorum cib: notice: cib:diff: ++ <rsc_location id="cli-standby-g_vcs" rsc="g_vcs" >
  2797. Feb 11 01:24:46 [2045] vcsquorum cib: notice: cib:diff: ++ <rule id="cli-standby-rule-g_vcs" score="-INFINITY" boolean-op="and" >
  2798. Feb 11 01:24:46 [2045] vcsquorum cib: notice: cib:diff: ++ <expression id="cli-standby-expr-g_vcs" attribute="#uname" operation="eq" value="vcs1" type="string" />
  2799. Feb 11 01:24:46 [2045] vcsquorum cib: notice: cib:diff: ++ </rule>
  2800. Feb 11 01:24:46 [2045] vcsquorum cib: notice: cib:diff: ++ </rsc_location>
  2801. Feb 11 01:24:46 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section constraints (origin=vcs0/crm_resource/4, version=0.25.1): OK (rc=0)
  2802. Feb 11 01:24:54 [2051] vcsquorum crmd: info: do_election_count_vote: Election 68 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2803. Feb 11 01:25:14 [2051] vcsquorum crmd: info: do_election_count_vote: Election 69 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2804. Feb 11 01:25:34 [2051] vcsquorum crmd: info: do_election_count_vote: Election 70 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2805. Feb 11 01:25:36 [2045] vcsquorum cib: info: cib_replace_notify: Local-only Replace: 0.26.1 from vcs0
  2806. Feb 11 01:25:36 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Local-only Change: 0.26.1
  2807. Feb 11 01:25:36 [2045] vcsquorum cib: notice: cib:diff: -- <rsc_location id="cli-standby-g_vcs" rsc="g_vcs" >
  2808. Feb 11 01:25:36 [2045] vcsquorum cib: notice: cib:diff: -- <rule id="cli-standby-rule-g_vcs" score="-INFINITY" boolean-op="and" >
  2809. Feb 11 01:25:36 [2045] vcsquorum cib: notice: cib:diff: -- <expression id="cli-standby-expr-g_vcs" attribute="#uname" operation="eq" value="vcs1" type="string" />
  2810. Feb 11 01:25:36 [2045] vcsquorum cib: notice: cib:diff: -- </rule>
  2811. Feb 11 01:25:36 [2045] vcsquorum cib: notice: cib:diff: -- </rsc_location>
  2812. Feb 11 01:25:36 [2045] vcsquorum cib: notice: cib:diff: ++ <cib epoch="26" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="vcs0" update-client="crm_resource" cib-last-written="Mon Feb 11 01:24:46 2013" have-quorum="1" dc-uuid="755053578" />
  2813. Feb 11 01:25:36 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=vcs0/cibadmin/2, version=0.26.1): OK (rc=0)
  2814. Feb 11 01:25:36 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/282, version=0.26.2): OK (rc=0)
  2815. Feb 11 01:25:36 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update shutdown=(null) failed: No such device or address
  2816. Feb 11 01:25:36 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update p_ping=(null) failed: No such device or address
  2817. Feb 11 01:25:36 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update terminate=(null) failed: No such device or address
  2818. Feb 11 01:25:36 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update fail-count-p_sysadmin_notify=(null) failed: No such device or address
  2819. Feb 11 01:25:36 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update master-p_drbd_vcs=(null) failed: No such device or address
  2820. Feb 11 01:25:36 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update last-failure-p_sysadmin_notify=(null) failed: No such device or address
  2821. Feb 11 01:25:49 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: --- 0.26.4
  2822. Feb 11 01:25:49 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: +++ 0.27.1
  2823. Feb 11 01:25:49 [2045] vcsquorum cib: notice: cib:diff: -- <cib admin_epoch="0" epoch="26" num_updates="4" />
  2824. Feb 11 01:25:49 [2045] vcsquorum cib: notice: cib:diff: ++ <meta_attributes id="g_vcs-meta_attributes" >
  2825. Feb 11 01:25:49 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="g_vcs-meta_attributes-target-role" name="target-role" value="Stopped" />
  2826. Feb 11 01:25:49 [2045] vcsquorum cib: notice: cib:diff: ++ </meta_attributes>
  2827. Feb 11 01:25:49 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_replace for section resources (origin=vcs0/cibadmin/2, version=0.27.1): OK (rc=0)
  2828. Feb 11 01:25:56 [2051] vcsquorum crmd: info: do_election_count_vote: Election 71 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2829. Feb 11 01:26:16 [2051] vcsquorum crmd: info: do_election_count_vote: Election 72 (owner: 2868982794) pass: vote from vcs1 (Uptime)
  2830. Feb 11 01:26:34 [2051] vcsquorum crmd: error: crm_timer_popped: Election Timeout (I_ELECTION_DC) just popped in state S_ELECTION! (120000ms)
  2831. Feb 11 01:26:34 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_TIMER_POPPED origin=crm_timer_popped ]
  2832. Feb 11 01:26:34 [2051] vcsquorum crmd: info: do_dc_takeover: Taking over DC status for this partition
  2833. Feb 11 01:26:34 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/284, version=0.27.2): OK (rc=0)
  2834. Feb 11 01:26:34 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/285, version=0.27.3): OK (rc=0)
  2835. Feb 11 01:26:34 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/287, version=0.27.4): OK (rc=0)
  2836. Feb 11 01:26:34 [2051] vcsquorum crmd: info: do_dc_join_offer_all: join-23: Waiting on 3 outstanding join acks
  2837. Feb 11 01:26:34 [2051] vcsquorum crmd: info: update_dc: Set DC to vcsquorum (3.0.6)
  2838. Feb 11 01:26:35 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/289, version=0.27.5): OK (rc=0)
  2839. Feb 11 01:26:35 [2051] vcsquorum crmd: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
  2840. Feb 11 01:26:35 [2051] vcsquorum crmd: info: do_dc_join_finalize: join-23: Syncing the CIB from vcsquorum to the rest of the cluster
  2841. Feb 11 01:26:35 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/292, version=0.27.5): OK (rc=0)
  2842. Feb 11 01:26:35 [2051] vcsquorum crmd: info: do_dc_join_ack: join-23: Updating node state to member for vcsquorum
  2843. Feb 11 01:26:35 [2051] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcsquorum']/lrm
  2844. Feb 11 01:26:35 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/293, version=0.27.6): OK (rc=0)
  2845. Feb 11 01:26:35 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/294, version=0.27.7): OK (rc=0)
  2846. Feb 11 01:26:35 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/295, version=0.27.8): OK (rc=0)
  2847. Feb 11 01:26:35 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcsquorum']/lrm (origin=local/crmd/296, version=0.27.9): OK (rc=0)
  2848. Feb 11 01:26:37 [2051] vcsquorum crmd: info: do_dc_join_ack: join-23: Updating node state to member for vcs0
  2849. Feb 11 01:26:37 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update p_ping=(null) failed: No such device or address
  2850. Feb 11 01:26:37 [2051] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcs0']/lrm
  2851. Feb 11 01:26:37 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcs0']/lrm (origin=local/crmd/298, version=0.27.21): OK (rc=0)
  2852. Feb 11 01:26:37 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update fail-count-p_sysadmin_notify=(null) failed: No such device or address
  2853. Feb 11 01:26:37 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update master-p_drbd_vcs=(null) failed: No such device or address
  2854. Feb 11 01:26:37 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update last-failure-p_sysadmin_notify=(null) failed: No such device or address
  2855. Feb 11 01:26:39 [2051] vcsquorum crmd: info: do_dc_join_ack: join-23: Updating node state to member for vcs1
  2856. Feb 11 01:26:39 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update p_ping=(null) failed: No such device or address
  2857. Feb 11 01:26:39 [2051] vcsquorum crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vcs1']/lrm
  2858. Feb 11 01:26:39 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vcs1']/lrm (origin=local/crmd/300, version=0.27.34): OK (rc=0)
  2859. Feb 11 01:26:39 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update fail-count-p_sysadmin_notify=(null) failed: No such device or address
  2860. Feb 11 01:26:39 [2051] vcsquorum crmd: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
  2861. Feb 11 01:26:39 [2051] vcsquorum crmd: info: abort_transition_graph: do_te_invoke:156 - Triggered transition abort (complete=1) : Peer Cancelled
  2862. Feb 11 01:26:39 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update master-p_drbd_vcs=(null) failed: No such device or address
  2863. Feb 11 01:26:39 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/302, version=0.27.36): OK (rc=0)
  2864. Feb 11 01:26:39 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/304, version=0.27.38): OK (rc=0)
  2865. Feb 11 01:26:39 [2049] vcsquorum attrd: warning: attrd_cib_callback: Update last-failure-p_sysadmin_notify=(null) failed: No such device or address
  2866. Feb 11 01:26:39 [2049] vcsquorum attrd: notice: attrd_local_callback: Sending full refresh (origin=crmd)
  2867. Feb 11 01:26:39 [2049] vcsquorum attrd: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
  2868. Feb 11 01:26:40 [2050] vcsquorum pengine: warning: unpack_rsc_op: Processing failed op start for p_sysadmin_notify:0 on vcs1: unknown error (1)
  2869. Feb 11 01:26:40 [2050] vcsquorum pengine: warning: unpack_rsc_op: Processing failed op start for p_sysadmin_notify:0 on vcs0: unknown error (1)
  2870. Feb 11 01:26:40 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
  2871. Feb 11 01:26:40 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
  2872. Feb 11 01:26:40 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
  2873. Feb 11 01:26:40 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
  2874. Feb 11 01:26:40 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
  2875. Feb 11 01:26:40 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
  2876. Feb 11 01:26:40 [2050] vcsquorum pengine: notice: LogActions: Stop p_fs_vcs (vcs1)
  2877. Feb 11 01:26:40 [2050] vcsquorum pengine: notice: LogActions: Stop p_daemon_svn (vcs1)
  2878. Feb 11 01:26:40 [2050] vcsquorum pengine: notice: LogActions: Stop p_daemon_git-daemon (vcs1)
  2879. Feb 11 01:26:40 [2050] vcsquorum pengine: notice: LogActions: Stop p_ip_vcs (vcs1)
  2880. Feb 11 01:26:40 [2050] vcsquorum pengine: notice: process_pe_message: Calculated Transition 20: /var/lib/pacemaker/pengine/pe-input-20.bz2
  2881. Feb 11 01:26:40 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
  2882. Feb 11 01:26:40 [2051] vcsquorum crmd: info: do_te_invoke: Processing graph 20 (ref=pe_calc-dc-1360567600-349) derived from /var/lib/pacemaker/pengine/pe-input-20.bz2
  2883. Feb 11 01:26:40 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 23: stop p_ip_vcs_stop_0 on vcs1
  2884. Feb 11 01:26:40 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 22: stop p_daemon_git-daemon_stop_0 on vcs1
  2885. Feb 11 01:26:40 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 21: stop p_daemon_svn_stop_0 on vcs1
  2886. Feb 11 01:26:42 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 20: stop p_fs_vcs_stop_0 on vcs1
  2887. Feb 11 01:26:42 [2051] vcsquorum crmd: notice: run_graph: Transition 20 (Complete=7, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-20.bz2): Complete
  2888. Feb 11 01:26:42 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
  2889. Feb 11 01:27:00 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:126 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=0.28.1) : Non-status change
  2890. Feb 11 01:27:00 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
  2891. Feb 11 01:27:00 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: --- 0.27.46
  2892. Feb 11 01:27:00 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: +++ 0.28.1
  2893. Feb 11 01:27:00 [2045] vcsquorum cib: notice: cib:diff: -- <nvpair value="Stopped" id="g_vcs-meta_attributes-target-role" />
  2894. Feb 11 01:27:00 [2045] vcsquorum cib: notice: cib:diff: ++ <nvpair id="g_vcs-meta_attributes-target-role" name="target-role" value="Started" />
  2895. Feb 11 01:27:00 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_replace for section resources (origin=vcs0/cibadmin/2, version=0.28.1): OK (rc=0)
  2896. Feb 11 01:27:00 [2050] vcsquorum pengine: warning: unpack_rsc_op: Processing failed op start for p_sysadmin_notify:0 on vcs1: unknown error (1)
  2897. Feb 11 01:27:00 [2050] vcsquorum pengine: warning: unpack_rsc_op: Processing failed op start for p_sysadmin_notify:0 on vcs0: unknown error (1)
  2898. Feb 11 01:27:00 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
  2899. Feb 11 01:27:00 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
  2900. Feb 11 01:27:00 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
  2901. Feb 11 01:27:00 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
  2902. Feb 11 01:27:00 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
  2903. Feb 11 01:27:00 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
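
NOTE: 1000000 is Pacemaker's INFINITY score. With the default start-failure-is-fatal=true, the earlier failed start of p_sysadmin_notify:0 pushed the clone's fail-count to INFINITY on both nodes, so the policy engine bans cl_sysadmin_notify everywhere until the failures are cleared, e.g. with:

    # clear the fail-counts so the clone is allowed to run again
    crm_resource --cleanup --resource cl_sysadmin_notify
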
  2904. Feb 11 01:27:00 [2050] vcsquorum pengine: notice: LogActions: Start p_fs_vcs (vcs1)
  2905. Feb 11 01:27:00 [2050] vcsquorum pengine: notice: LogActions: Start p_daemon_svn (vcs1)
  2906. Feb 11 01:27:00 [2050] vcsquorum pengine: notice: LogActions: Start p_daemon_git-daemon (vcs1)
  2907. Feb 11 01:27:00 [2050] vcsquorum pengine: notice: LogActions: Start p_ip_vcs (vcs1)
  2908. Feb 11 01:27:00 [2050] vcsquorum pengine: notice: process_pe_message: Calculated Transition 21: /var/lib/pacemaker/pengine/pe-input-21.bz2
  2909. Feb 11 01:27:00 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
  2910. Feb 11 01:27:00 [2051] vcsquorum crmd: info: do_te_invoke: Processing graph 21 (ref=pe_calc-dc-1360567620-354) derived from /var/lib/pacemaker/pengine/pe-input-21.bz2
  2911. Feb 11 01:27:00 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 16: start p_fs_vcs_start_0 on vcs1
  2912. Feb 11 01:27:02 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 17: monitor p_fs_vcs_monitor_20000 on vcs1
  2913. Feb 11 01:27:02 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 18: start p_daemon_svn_start_0 on vcs1
  2914. Feb 11 01:27:02 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 19: monitor p_daemon_svn_monitor_30000 on vcs1
  2915. Feb 11 01:27:02 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 20: start p_daemon_git-daemon_start_0 on vcs1
  2916. Feb 11 01:27:02 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 21: monitor p_daemon_git-daemon_monitor_30000 on vcs1
  2917. Feb 11 01:27:02 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 22: start p_ip_vcs_start_0 on vcs1
  2918. Feb 11 01:27:03 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 23: monitor p_ip_vcs_monitor_30000 on vcs1
  2919. Feb 11 01:27:03 [2051] vcsquorum crmd: notice: run_graph: Transition 21 (Complete=10, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-21.bz2): Complete
  2920. Feb 11 01:27:03 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
  2921. Feb 11 01:27:10 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_delete for section constraints (origin=vcs0/crm_resource/3, version=0.28.10): OK (rc=0)
  2922. Feb 11 01:27:10 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:126 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=0.29.1) : Non-status change
  2923. Feb 11 01:27:10 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
  2924. Feb 11 01:27:10 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: --- 0.28.10
  2925. Feb 11 01:27:10 [2045] vcsquorum cib: notice: log_cib_diff: cib:diff: Diff: +++ 0.29.1
  2926. Feb 11 01:27:10 [2045] vcsquorum cib: notice: cib:diff: -- <cib admin_epoch="0" epoch="28" num_updates="10" />
  2927. Feb 11 01:27:10 [2045] vcsquorum cib: notice: cib:diff: ++ <rsc_location id="cli-standby-g_vcs" rsc="g_vcs" >
  2928. Feb 11 01:27:10 [2045] vcsquorum cib: notice: cib:diff: ++ <rule id="cli-standby-rule-g_vcs" score="-INFINITY" boolean-op="and" >
  2929. Feb 11 01:27:10 [2045] vcsquorum cib: notice: cib:diff: ++ <expression id="cli-standby-expr-g_vcs" attribute="#uname" operation="eq" value="vcs1" type="string" />
  2930. Feb 11 01:27:10 [2045] vcsquorum cib: notice: cib:diff: ++ </rule>
  2931. Feb 11 01:27:10 [2045] vcsquorum cib: notice: cib:diff: ++ </rsc_location>
  2932. Feb 11 01:27:10 [2045] vcsquorum cib: info: cib_process_request: Operation complete: op cib_modify for section constraints (origin=vcs0/crm_resource/4, version=0.29.1): OK (rc=0)
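
NOTE: The cib_delete/cib_modify pair from origin vcs0/crm_resource is the footprint of a resource move: crm_resource first clears any stale cli-* constraints, then injects the cli-standby-g_vcs rsc_location shown above, whose -INFINITY rule on "#uname eq vcs1" bans g_vcs from its current node and triggers the failover to vcs0 that follows. A hedged reconstruction (the exact command line is an assumption):

    # ban g_vcs from the node it currently runs on (vcs1);
    # pacemaker names the resulting constraint cli-standby-g_vcs
    crm_resource --resource g_vcs --move
    # the constraint can later be removed again, e.g. with
    # crmsh's "crm resource unmigrate g_vcs"
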
  2933. Feb 11 01:27:10 [2050] vcsquorum pengine: warning: unpack_rsc_op: Processing failed op start for p_sysadmin_notify:0 on vcs1: unknown error (1)
  2934. Feb 11 01:27:11 [2050] vcsquorum pengine: warning: unpack_rsc_op: Processing failed op start for p_sysadmin_notify:0 on vcs0: unknown error (1)
  2935. Feb 11 01:27:11 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
  2936. Feb 11 01:27:11 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
  2937. Feb 11 01:27:11 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
  2938. Feb 11 01:27:11 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
  2939. Feb 11 01:27:11 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
  2940. Feb 11 01:27:11 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
  2941. Feb 11 01:27:11 [2050] vcsquorum pengine: notice: LogActions: Move p_fs_vcs (Started vcs1 -> vcs0)
  2942. Feb 11 01:27:11 [2050] vcsquorum pengine: notice: LogActions: Move p_daemon_svn (Started vcs1 -> vcs0)
  2943. Feb 11 01:27:11 [2050] vcsquorum pengine: notice: LogActions: Move p_daemon_git-daemon (Started vcs1 -> vcs0)
  2944. Feb 11 01:27:11 [2050] vcsquorum pengine: notice: LogActions: Move p_ip_vcs (Started vcs1 -> vcs0)
  2945. Feb 11 01:27:11 [2050] vcsquorum pengine: notice: LogActions: Demote p_drbd_vcs:0 (Master -> Slave vcs1)
  2946. Feb 11 01:27:11 [2050] vcsquorum pengine: notice: LogActions: Promote p_drbd_vcs:1 (Slave -> Master vcs0)
  2947. Feb 11 01:27:11 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
  2948. Feb 11 01:27:11 [2050] vcsquorum pengine: notice: process_pe_message: Calculated Transition 22: /var/lib/pacemaker/pengine/pe-input-22.bz2
  2949. Feb 11 01:27:11 [2051] vcsquorum crmd: info: do_te_invoke: Processing graph 22 (ref=pe_calc-dc-1360567630-363) derived from /var/lib/pacemaker/pengine/pe-input-22.bz2
  2950. Feb 11 01:27:11 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 29: stop p_ip_vcs_stop_0 on vcs1
  2951. Feb 11 01:27:11 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 1: cancel p_drbd_vcs_cancel_10000 on vcs1
  2952. Feb 11 01:27:11 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 7: cancel p_drbd_vcs_cancel_20000 on vcs0
  2953. Feb 11 01:27:11 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 100: notify p_drbd_vcs_pre_notify_demote_0 on vcs1
  2954. Feb 11 01:27:11 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 102: notify p_drbd_vcs_pre_notify_demote_0 on vcs0
  2955. Feb 11 01:27:11 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:271 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=p_drbd_vcs_monitor_20000, magic=0:0;39:16:0:6a6761a2-ec2f-492c-a18c-394db5ac6dfc, cib=0.29.2) : Resource op removal
  2956. Feb 11 01:27:11 [2051] vcsquorum crmd: info: abort_transition_graph: te_update_diff:271 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=p_drbd_vcs_monitor_10000, magic=0:8;35:16:8:6a6761a2-ec2f-492c-a18c-394db5ac6dfc, cib=0.29.3) : Resource op removal
  2957. Feb 11 01:27:11 [2051] vcsquorum crmd: notice: run_graph: Transition 22 (Complete=7, Pending=0, Fired=0, Skipped=26, Incomplete=10, Source=/var/lib/pacemaker/pengine/pe-input-22.bz2): Stopped
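
NOTE: Transition 22 did not fail; it was aborted mid-flight. Cancelling the role-specific DRBD monitors removes their operation records from the CIB status section, and each removal (the two "Resource op removal" aborts above) invalidates the in-progress graph, so the crmd stops it (Skipped=26, Incomplete=10) and asks the policy engine for a fresh calculation, which becomes transition 23 below.
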
  2958. Feb 11 01:27:11 [2051] vcsquorum crmd: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
  2959. Feb 11 01:27:11 [2050] vcsquorum pengine: warning: unpack_rsc_op: Processing failed op start for p_sysadmin_notify:0 on vcs1: unknown error (1)
  2960. Feb 11 01:27:11 [2050] vcsquorum pengine: warning: unpack_rsc_op: Processing failed op start for p_sysadmin_notify:0 on vcs0: unknown error (1)
  2961. Feb 11 01:27:11 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
  2962. Feb 11 01:27:11 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
  2963. Feb 11 01:27:11 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs0 after 1000000 failures (max=1000000)
  2964. Feb 11 01:27:11 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
  2965. Feb 11 01:27:11 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
  2966. Feb 11 01:27:11 [2050] vcsquorum pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from vcs1 after 1000000 failures (max=1000000)
  2967. Feb 11 01:27:11 [2050] vcsquorum pengine: notice: LogActions: Move p_fs_vcs (Started vcs1 -> vcs0)
  2968. Feb 11 01:27:11 [2050] vcsquorum pengine: notice: LogActions: Move p_daemon_svn (Started vcs1 -> vcs0)
  2969. Feb 11 01:27:11 [2050] vcsquorum pengine: notice: LogActions: Move p_daemon_git-daemon (Started vcs1 -> vcs0)
  2970. Feb 11 01:27:11 [2050] vcsquorum pengine: notice: LogActions: Start p_ip_vcs (vcs0)
  2971. Feb 11 01:27:11 [2050] vcsquorum pengine: notice: LogActions: Demote p_drbd_vcs:0 (Master -> Slave vcs1)
  2972. Feb 11 01:27:11 [2050] vcsquorum pengine: notice: LogActions: Promote p_drbd_vcs:1 (Slave -> Master vcs0)
  2973. Feb 11 01:27:11 [2051] vcsquorum crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
  2974. Feb 11 01:27:11 [2050] vcsquorum pengine: notice: process_pe_message: Calculated Transition 23: /var/lib/pacemaker/pengine/pe-input-23.bz2
  2975. Feb 11 01:27:11 [2051] vcsquorum crmd: info: do_te_invoke: Processing graph 23 (ref=pe_calc-dc-1360567631-369) derived from /var/lib/pacemaker/pengine/pe-input-23.bz2
  2976. Feb 11 01:27:11 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 23: stop p_daemon_git-daemon_stop_0 on vcs1
  2977. Feb 11 01:27:11 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 96: notify p_drbd_vcs_pre_notify_demote_0 on vcs1
  2978. Feb 11 01:27:11 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 98: notify p_drbd_vcs_pre_notify_demote_0 on vcs0
  2979. Feb 11 01:27:11 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 20: stop p_daemon_svn_stop_0 on vcs1
  2980. Feb 11 01:27:13 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 17: stop p_fs_vcs_stop_0 on vcs1
  2981. Feb 11 01:27:13 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 33: demote p_drbd_vcs_demote_0 on vcs1
  2982. Feb 11 01:27:13 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 97: notify p_drbd_vcs_post_notify_demote_0 on vcs1
  2983. Feb 11 01:27:13 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 99: notify p_drbd_vcs_post_notify_demote_0 on vcs0
  2984. Feb 11 01:27:13 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 92: notify p_drbd_vcs_pre_notify_promote_0 on vcs1
  2985. Feb 11 01:27:13 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 94: notify p_drbd_vcs_pre_notify_promote_0 on vcs0
  2986. Feb 11 01:27:13 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 38: promote p_drbd_vcs_promote_0 on vcs0
  2987. Feb 11 01:27:13 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 93: notify p_drbd_vcs_post_notify_promote_0 on vcs1
  2988. Feb 11 01:27:13 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 95: notify p_drbd_vcs_post_notify_promote_0 on vcs0
  2989. Feb 11 01:27:13 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 18: start p_fs_vcs_start_0 on vcs0
  2990. Feb 11 01:27:13 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 35: monitor p_drbd_vcs_monitor_20000 on vcs1
  2991. Feb 11 01:27:13 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 39: monitor p_drbd_vcs_monitor_10000 on vcs0
  2992. Feb 11 01:27:14 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 19: monitor p_fs_vcs_monitor_20000 on vcs0
  2993. Feb 11 01:27:14 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 21: start p_daemon_svn_start_0 on vcs0
  2994. Feb 11 01:27:14 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 22: monitor p_daemon_svn_monitor_30000 on vcs0
  2995. Feb 11 01:27:14 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 24: start p_daemon_git-daemon_start_0 on vcs0
  2996. Feb 11 01:27:14 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 25: monitor p_daemon_git-daemon_monitor_30000 on vcs0
  2997. Feb 11 01:27:14 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 26: start p_ip_vcs_start_0 on vcs0
  2998. Feb 11 01:27:15 [2051] vcsquorum crmd: info: te_rsc_command: Initiating action 27: monitor p_ip_vcs_monitor_30000 on vcs0
  2999. Feb 11 01:27:16 [2051] vcsquorum crmd: notice: run_graph: Transition 23 (Complete=40, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-23.bz2): Complete
  3000. Feb 11 01:27:16 [2051] vcsquorum crmd: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
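
NOTE: Transition 23 is the full failover choreography for a DRBD master/slave pair: stop the dependent group on vcs1, pre/post-notify around the demote on vcs1, pre/post-notify around the promote on vcs0, then restart the group on vcs0 and re-establish the role-specific monitors (10s on the Master, 20s on the Slave, matching the monitor_10000/monitor_20000 operations above). The notify actions imply notify=true on the master resource. A minimal crmsh sketch consistent with these operation names (the drbd_resource value and the ms id are assumptions, not taken from the log):

    primitive p_drbd_vcs ocf:linbit:drbd \
        params drbd_resource="vcs" \
        op monitor interval="10s" role="Master" \
        op monitor interval="20s" role="Slave"
    ms ms_drbd_vcs p_drbd_vcs \
        meta master-max="1" master-node-max="1" \
             clone-max="2" clone-node-max="1" notify="true"
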