IPADDR waakakakak
adhioutlined, Mar 16th, 2015
Mar 17 09:38:17 [4071] cib: debug: crm_client_new: Connecting 8091458 for uid=0 gid=0 pid=7946 id=792c02ce-26c5-4d6c-b739-9ae5a2fb07a6
Mar 17 09:38:17 [4071] cib: debug: handle_new_connection: IPC credentials authenticated (4071-7946-24)
Mar 17 09:38:17 [4071] cib: debug: qb_ipcs_us_connect: connecting to client (4071-7946-24)
Mar 17 09:38:17 [4071] cib: debug: _sock_add_to_mainloop: added 24 to poll loop (liveness)
Mar 17 09:38:17 [4071] cib: debug: _process_request_: recv from client connection failed (4071-7946-24): Connection reset by peer (131)
Mar 17 09:38:17 [4071] cib: error: qb_ipcs_dispatch_connection_request: request returned error (4071-7946-24): Connection reset by peer (131)
Mar 17 09:38:17 [4071] cib: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(4071-7946-24) state:2
Mar 17 09:38:17 [4071] cib: debug: crm_client_destroy: Destroying 0 events
Mar 17 09:38:26 [4073] lrmd: debug: recurring_action_timer: Scheduling another invokation of dumtest_monitor_120000
Dummy(dumtest)[7949]: 2015/03/17_09:38:26 DEBUG: dumtest monitor : 0
Mar 17 09:38:26 [4073] lrmd: debug: operation_finished: dumtest_monitor_120000:7949 - exited with rc=0
Mar 17 09:38:26 [4073] lrmd: debug: operation_finished: dumtest_monitor_120000:7949:stderr [ -- empty -- ]
Mar 17 09:38:26 [4073] lrmd: debug: operation_finished: dumtest_monitor_120000:7949:stdout [ -- empty -- ]
Mar 17 09:38:26 [4073] lrmd: debug: log_finished: finished - rsc:dumtest action:monitor call_id:13 pid:7949 exit-code:0 exec-time:0ms queue-time:0ms
Mar 17 09:38:31 [4071] cib: debug: crm_client_new: Connecting 8091458 for uid=0 gid=0 pid=7959 id=9a9bbad5-d16f-6d6a-ccf8-e60e65dcf332
Mar 17 09:38:31 [4071] cib: debug: handle_new_connection: IPC credentials authenticated (4071-7959-24)
Mar 17 09:38:31 [4071] cib: debug: qb_ipcs_us_connect: connecting to client (4071-7959-24)
Mar 17 09:38:31 [4071] cib: debug: _sock_add_to_mainloop: added 24 to poll loop (liveness)
Mar 17 09:38:31 [4076] crmd: debug: crm_client_new: Connecting 8eb9f08 for uid=0 gid=0 pid=7959 id=e0a16909-0964-e443-d93d-f42416b05d02
Mar 17 09:38:31 [4076] crmd: debug: handle_new_connection: IPC credentials authenticated (4076-7959-31)
Mar 17 09:38:31 [4076] crmd: debug: qb_ipcs_us_connect: connecting to client (4076-7959-31)
Mar 17 09:38:31 [4076] crmd: debug: _sock_add_to_mainloop: added 31 to poll loop (liveness)
Mar 17 09:38:31 [4076] crmd: info: delete_resource: Removing resource dicoba for e0a16909-0964-e443-d93d-f42416b05d02 (internal) on omni1
Mar 17 09:38:31 [4073] lrmd: debug: process_lrmd_message: Processed lrmd_rsc_unregister operation from 3b650d49-6b19-45f7-d287-f28bf9f52852: rc=0, reply=1, notify=1, exit=134520064
Mar 17 09:38:31 [4076] crmd: debug: delete_rsc_entry: sync: Sending delete op for dicoba
Mar 17 09:38:31 [4076] crmd: info: notify_deleted: Notifying e0a16909-0964-e443-d93d-f42416b05d02 on omni1 that dicoba was deleted
Mar 17 09:38:31 [4076] crmd: debug: create_operation_update: send_direct_ack: Updating resource dicoba after delete op complete (interval=60000)
Mar 17 09:38:31 [4076] crmd: debug: send_direct_ack: ACK'ing resource op dicoba_delete_60000 from 7959:0:0:xxxxxxxx-xrsc-opxx-xcrm-resourcexxxx: lrm_invoke-lrmd-1426559911-82
Mar 17 09:38:31 [4076] crmd: debug: notify_deleted: Triggering a refresh after e0a16909-0964-e443-d93d-f42416b05d02 deleted dicoba from the LRM
Mar 17 09:38:31 [4071] cib: info: cib_process_request: Forwarding cib_delete operation for section //node_state[@uname='omni1']//lrm_resource[@id='dicoba'] to master (origin=local/crmd/98)
Mar 17 09:38:31 [4071] cib: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//cluster_property_set//nvpair[@name='last-lrm-refresh'] (/cib/configuration/crm_config/cluster_property_set/nvpair[5])
Mar 17 09:38:31 [4076] crmd: debug: find_nvpair_attr_delegate: Match <nvpair id="cib-bootstrap-options-last-lrm-refresh" name="last-lrm-refresh" value="1426475585"/>
Mar 17 09:38:31 [4071] cib: info: cib_process_request: Forwarding cib_modify operation for section crm_config to master (origin=local/crmd/100)
Mar 17 09:38:31 [4071] cib: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='omni1']//lrm_resource[@id='dicoba'] (/cib/status/node_state/lrm/lrm_resources/lrm_resource[1])
Mar 17 09:38:31 [4071] cib: info: cib_perform_op: Diff: --- 0.54.14 2
Mar 17 09:38:31 [4071] cib: info: cib_perform_op: Diff: +++ 0.54.15 6cb995985cc3cf1d553670501456c6f1
Mar 17 09:38:31 [4071] cib: info: cib_perform_op: -- /cib/status/node_state[@id='40']/lrm[@id='40']/lrm_resources/lrm_resource[@id='dicoba']
Mar 17 09:38:31 [4071] cib: info: cib_perform_op: + /cib: @num_updates=15
Mar 17 09:38:31 [4071] cib: info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='omni1']//lrm_resource[@id='dicoba']: OK (rc=0, origin=omni1/crmd/98, version=0.54.15)
Mar 17 09:38:31 [4071] cib: info: cib_perform_op: Diff: --- 0.54.15 2
Mar 17 09:38:31 [4071] cib: info: cib_perform_op: Diff: +++ 0.55.0 (null)
Mar 17 09:38:31 [4071] cib: info: cib_perform_op: + /cib: @epoch=55, @num_updates=0
Mar 17 09:38:31 [4071] cib: info: cib_perform_op: + /cib/configuration/crm_config/cluster_property_set[@id='cib-bootstrap-options']/nvpair[@id='cib-bootstrap-options-last-lrm-refresh']: @value=1426559911
Mar 17 09:38:31 [4071] cib: debug: activateCibXml: Triggering CIB write for cib_modify op
Mar 17 09:38:31 [4071] cib: info: cib_process_request: Completed cib_modify operation for section crm_config: OK (rc=0, origin=omni1/crmd/100, version=0.55.0)
Mar 17 09:38:31 [4076] crmd: debug: te_update_diff: Processing (cib_delete) diff: 0.54.14 -> 0.54.15 (S_IDLE)
Mar 17 09:38:31 [4076] crmd: info: abort_transition_graph: Transition aborted by deletion of lrm_resource[@id='dicoba']: Resource state removal (cib=0.54.15, source=te_update_diff:429, path=/cib/status/node_state[@id='40']/lrm[@id='40']/lrm_resources/lrm_resource[@id='dicoba'], 1)
Mar 17 09:38:31 [4076] crmd: debug: te_update_diff: Processing (cib_modify) diff: 0.54.15 -> 0.55.0 (S_IDLE)
Mar 17 09:38:31 [4076] crmd: info: abort_transition_graph: Transition aborted by cib-bootstrap-options-last-lrm-refresh, last-lrm-refresh=1426559911: Non-status change (modify cib=0.55.0, source=te_update_diff:383, path=/cib/configuration/crm_config/cluster_property_set[@id='cib-bootstrap-options']/nvpair[@id='cib-bootstrap-options-last-lrm-refresh'], 1)
Mar 17 09:38:31 [4076] crmd: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_IDLE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Mar 17 09:38:31 [4076] crmd: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Mar 17 09:38:31 [4076] crmd: debug: do_state_transition: All 1 cluster nodes are eligible to run resources.
Mar 17 09:38:31 [4076] crmd: debug: do_pe_invoke: Query 101: Requesting the current CIB: S_POLICY_ENGINE
Mar 17 09:38:31 [4076] crmd: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Mar 17 09:38:31 [4076] crmd: debug: do_pe_invoke: Query 102: Requesting the current CIB: S_POLICY_ENGINE
Mar 17 09:38:31 [4072] stonith-ng: debug: xml_patch_version_check: Can apply patch 0.54.15 to 0.54.14
Mar 17 09:38:31 [4072] stonith-ng: debug: xml_patch_version_check: Can apply patch 0.55.0 to 0.54.15
Mar 17 09:38:31 [4075] pengine: debug: unpack_config: STONITH timeout: 60000
Mar 17 09:38:31 [4075] pengine: debug: unpack_config: STONITH of failed nodes is disabled
Mar 17 09:38:31 [4075] pengine: debug: unpack_config: Stop all active resources: false
Mar 17 09:38:31 [4075] pengine: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
Mar 17 09:38:31 [4075] pengine: debug: unpack_config: Default stickiness: 0
Mar 17 09:38:31 [4075] pengine: notice: unpack_config: On loss of CCM Quorum: Ignore
Mar 17 09:38:31 [4075] pengine: debug: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Mar 17 09:38:31 [4075] pengine: info: determine_online_status: Node omni1 is online
Mar 17 09:38:31 [4075] pengine: info: native_print: dicoba (ocf::heartbeat:IPaddr): Stopped
Mar 17 09:38:31 [4075] pengine: info: native_print: dumtest (ocf::pacemaker:Dummy): Started omni1
Mar 17 09:38:31 [4075] pengine: info: get_failcount_full: dicoba has failed INFINITY times on omni1
Mar 17 09:38:31 [4075] pengine: warning: common_apply_stickiness: Forcing dicoba away from omni1 after 1000000 failures (max=1000000)
Mar 17 09:38:31 [4075] pengine: debug: common_apply_stickiness: Resource dumtest: preferring current location (node=omni1, weight=100)
Mar 17 09:38:31 [4075] pengine: debug: native_assign_node: All nodes for resource dicoba are unavailable, unclean or shutting down (omni1: 1, -1000000)
Mar 17 09:38:31 [4075] pengine: debug: native_assign_node: Could not allocate a node for dicoba
Mar 17 09:38:31 [4075] pengine: info: native_color: Resource dicoba cannot run anywhere
Mar 17 09:38:31 [4075] pengine: debug: native_assign_node: Assigning omni1 to dumtest
Mar 17 09:38:31 [4075] pengine: debug: native_create_probe: Probing dicoba on omni1 (Stopped)
Mar 17 09:38:31 [4075] pengine: info: LogActions: Leave dicoba (Stopped)
Mar 17 09:38:31 [4075] pengine: info: LogActions: Leave dumtest (Started omni1)
Mar 17 09:38:31 [4075] pengine: notice: process_pe_message: Calculated Transition 64: /opt/var/lib/pacemaker/pengine/pe-input-38.bz2
Mar 17 09:38:31 [4076] crmd: debug: do_pe_invoke_callback: Invoking the PE: query=102, ref=pe_calc-dc-1426559911-83, seq=44, quorate=0
Mar 17 09:38:31 [4076] crmd: debug: config_query_callback: Call 103 : Parsing CIB options
Mar 17 09:38:31 [4076] crmd: warning: throttle_num_cores: Couldn't read /proc/cpuinfo, assuming a single processor: No such file or directory (2)
Mar 17 09:38:31 [4076] crmd: debug: config_query_callback: Shutdown escalation occurs after: 1200000ms
Mar 17 09:38:31 [4076] crmd: debug: config_query_callback: Checking for expired actions every 900000ms
Mar 17 09:38:31 [4076] crmd: debug: s_crmd_fsa: Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Mar 17 09:38:31 [4076] crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Mar 17 09:38:31 [4076] crmd: debug: unpack_graph: Unpacked transition 64: 3 actions in 3 synapses
Mar 17 09:38:31 [4076] crmd: info: do_te_invoke: Processing graph 64 (ref=pe_calc-dc-1426559911-83) derived from /opt/var/lib/pacemaker/pengine/pe-input-38.bz2
Mar 17 09:38:31 [4076] crmd: notice: te_rsc_command: Initiating action 5: monitor dicoba_monitor_0 on omni1 (local)
Mar 17 09:38:31 [4073] lrmd: info: process_lrmd_get_rsc_info: Resource 'dicoba' not found (1 active resources)
Mar 17 09:38:31 [4073] lrmd: debug: process_lrmd_message: Processed lrmd_rsc_info operation from 3b650d49-6b19-45f7-d287-f28bf9f52852: rc=0, reply=0, notify=0, exit=134520064
Mar 17 09:38:31 [4073] lrmd: info: process_lrmd_rsc_register: Added 'dicoba' to the rsc list (2 active resources)
Mar 17 09:38:31 [4073] lrmd: debug: process_lrmd_message: Processed lrmd_rsc_register operation from 3b650d49-6b19-45f7-d287-f28bf9f52852: rc=0, reply=1, notify=1, exit=134520064
Mar 17 09:38:31 [4073] lrmd: debug: process_lrmd_message: Processed lrmd_rsc_info operation from 3b650d49-6b19-45f7-d287-f28bf9f52852: rc=0, reply=0, notify=0, exit=134520064
Mar 17 09:38:31 [4076] crmd: info: do_lrm_rsc_op: Performing key=5:64:7:9e7f7079-4fae-4be9-9598-a1544ee5a6ed op=dicoba_monitor_0
Mar 17 09:38:31 [4073] lrmd: debug: process_lrmd_message: Processed lrmd_rsc_exec operation from 3b650d49-6b19-45f7-d287-f28bf9f52852: rc=18, reply=1, notify=0, exit=134520064
Mar 17 09:38:31 [4073] lrmd: debug: log_execute: executing - rsc:dicoba action:monitor call_id:18
Mar 17 09:38:31 [4076] crmd: debug: run_graph: Transition 64 (Complete=0, Pending=1, Fired=1, Skipped=0, Incomplete=2, Source=/opt/var/lib/pacemaker/pengine/pe-input-38.bz2): In-progress
Mar 17 09:38:31 [4071] cib: info: write_cib_contents: Archived previous version as /opt/var/lib/pacemaker/cib/cib-58.raw
Mar 17 09:38:31 [4071] cib: debug: write_cib_contents: Writing CIB to disk
Mar 17 09:38:31 [4071] cib: info: write_cib_contents: Wrote version 0.55.0 of the CIB to disk (digest: bbe4a3c3285a251c819354c365827e57)
Mar 17 09:38:31 [4071] cib: debug: write_cib_contents: Wrote digest bbe4a3c3285a251c819354c365827e57 to disk
Mar 17 09:38:31 [4071] cib: info: retrieveCib: Reading cluster configuration from: /opt/var/lib/pacemaker/cib/cib.JzaaJp (digest: /opt/var/lib/pacemaker/cib/cib.KzaaJp)
Mar 17 09:38:31 [4071] cib: debug: write_cib_contents: Activating /opt/var/lib/pacemaker/cib/cib.JzaaJp
Mar 17 09:38:31 [4074] attrd: debug: crm_client_new: Connecting 8223940 for uid=0 gid=0 pid=7959 id=e1d99065-3531-432e-80a2-abf71671be69
Mar 17 09:38:31 [4074] attrd: debug: handle_new_connection: IPC credentials authenticated (4074-7959-19)
Mar 17 09:38:31 [4074] attrd: debug: qb_ipcs_us_connect: connecting to client (4074-7959-19)
Mar 17 09:38:31 [4074] attrd: debug: _sock_add_to_mainloop: added 19 to poll loop (liveness)
Mar 17 09:38:31 [4074] attrd: debug: attrd_client_message: Broadcasting fail-count-dicoba[omni1] = (null) (writer)
Mar 17 09:38:31 [4074] attrd: info: attrd_peer_update: Setting fail-count-dicoba[omni1]: INFINITY -> (null) from omni1
Mar 17 09:38:31 [4074] attrd: debug: write_attribute: Update: omni1[fail-count-dicoba]=(null) (40 40 40 omni1)
Mar 17 09:38:31 [4074] attrd: info: write_attribute: Sent update 6 with 1 changes for fail-count-dicoba, id=<n/a>, set=(null)
Mar 17 09:38:31 [4071] cib: info: cib_process_request: Forwarding cib_modify operation for section status to master (origin=local/attrd/6)
Mar 17 09:38:31 [4071] cib: debug: cib_process_modify: Destroying /cib/status/node_state/transient_attributes/instance_attributes/nvpair[3]
Mar 17 09:38:31 [4071] cib: info: cib_perform_op: Diff: --- 0.55.0 2
Mar 17 09:38:31 [4071] cib: info: cib_perform_op: Diff: +++ 0.55.1 (null)
Mar 17 09:38:31 [4071] cib: info: cib_perform_op: -- /cib/status/node_state[@id='40']/transient_attributes[@id='40']/instance_attributes[@id='status-40']/nvpair[@id='status-40-fail-count-dicoba']
Mar 17 09:38:31 [4071] cib: info: cib_perform_op: + /cib: @num_updates=1
Mar 17 09:38:31 [4071] cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=omni1/attrd/6, version=0.55.1)
Mar 17 09:38:31 [4072] stonith-ng: debug: xml_patch_version_check: Can apply patch 0.55.1 to 0.55.0
Mar 17 09:38:31 [4074] attrd: info: attrd_cib_callback: Update 6 for fail-count-dicoba: OK (0)
Mar 17 09:38:31 [4074] attrd: info: attrd_cib_callback: Update 6 for fail-count-dicoba[omni1]=(null): OK (0)
Mar 17 09:38:31 [4076] crmd: debug: te_update_diff: Processing (cib_modify) diff: 0.55.0 -> 0.55.1 (S_TRANSITION_ENGINE)
Mar 17 09:38:31 [4076] crmd: debug: update_abort_priority: Abort priority upgraded from 0 to 1000000
Mar 17 09:38:31 [4076] crmd: debug: update_abort_priority: Abort action done superseded by restart: Transient attribute change
Mar 17 09:38:31 [4076] crmd: notice: abort_transition_graph: Transition aborted by deletion of nvpair[@id='status-40-fail-count-dicoba']: Transient attribute change (cib=0.55.1, source=te_update_diff:391, path=/cib/status/node_state[@id='40']/transient_attributes[@id='40']/instance_attributes[@id='status-40']/nvpair[@id='status-40-fail-count-dicoba'], 0)
Mar 17 09:38:31 [4076] crmd: debug: run_graph: Transition 64 (Complete=0, Pending=1, Fired=0, Skipped=1, Incomplete=1, Source=/opt/var/lib/pacemaker/pengine/pe-input-38.bz2): In-progress
Mar 17 09:38:31 [4071] cib: debug: _sock_connection_liveliness: LIVENESS: fd 24 event 1 conn (4071-7959-24)
Mar 17 09:38:31 [4071] cib: debug: _sock_connection_liveliness: EOF conn (4071-7959-24)
Mar 17 09:38:31 [4071] cib: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(4071-7959-24) state:2
Mar 17 09:38:31 [4071] cib: debug: crm_client_destroy: Destroying 0 events
Mar 17 09:38:31 [4076] crmd: debug: _sock_connection_liveliness: LIVENESS: fd 31 event 1 conn (4076-7959-31)
Mar 17 09:38:31 [4076] crmd: debug: _sock_connection_liveliness: EOF conn (4076-7959-31)
Mar 17 09:38:31 [4076] crmd: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(4076-7959-31) state:2
Mar 17 09:38:31 [4076] crmd: debug: crm_client_destroy: Destroying 0 events
Mar 17 09:38:31 [4074] attrd: debug: _process_request_: recv from client connection failed (4074-7959-19): Connection reset by peer (131)
Mar 17 09:38:31 [4074] attrd: error: qb_ipcs_dispatch_connection_request: request returned error (4074-7959-19): Connection reset by peer (131)
Mar 17 09:38:31 [4074] attrd: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(4074-7959-19) state:2
Mar 17 09:38:31 [4074] attrd: debug: crm_client_destroy: Destroying 0 events
Mar 17 09:38:31 [4073] lrmd: debug: operation_finished: dicoba_monitor_0:7961 - exited with rc=7
Mar 17 09:38:31 [4073] lrmd: debug: operation_finished: dicoba_monitor_0:7961:stderr [ -- empty -- ]
Mar 17 09:38:31 [4073] lrmd: debug: operation_finished: dicoba_monitor_0:7961:stdout [ -- empty -- ]
Mar 17 09:38:31 [4073] lrmd: debug: log_finished: finished - rsc:dicoba action:monitor call_id:18 pid:7961 exit-code:7 exec-time:58ms queue-time:0ms
Mar 17 09:38:31 [4076] crmd: debug: create_operation_update: do_update_resource: Updating resource dicoba after monitor op complete (interval=0)
Mar 17 09:38:31 [4076] crmd: notice: process_lrm_event: Operation dicoba_monitor_0: not running (node=omni1, call=18, rc=7, cib-update=104, confirmed=true)
Mar 17 09:38:31 [4076] crmd: debug: update_history_cache: Updating history for 'dicoba' with monitor op
Mar 17 09:38:31 [4071] cib: info: cib_process_request: Forwarding cib_modify operation for section status to master (origin=local/crmd/104)
Mar 17 09:38:31 [4071] cib: info: cib_perform_op: Diff: --- 0.55.1 2
Mar 17 09:38:31 [4071] cib: info: cib_perform_op: Diff: +++ 0.55.2 (null)
Mar 17 09:38:31 [4071] cib: info: cib_perform_op: + /cib: @num_updates=2
Mar 17 09:38:31 [4071] cib: info: cib_perform_op: ++ /cib/status/node_state[@id='40']/lrm[@id='40']/lrm_resources: <lrm_resource id="dicoba" type="IPaddr" class="ocf" provider="heartbeat"/>
Mar 17 09:38:31 [4071] cib: info: cib_perform_op: ++ <lrm_rsc_op id="dicoba_last_0" operation_key="dicoba_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.9" transition-key="5:64:7:9e7f7079-4fae-4be9-9598-a1544ee5a6ed" transition-magic="0:7;5:64:7:9e7f7079-4fae-4be9-9598-a1544ee5a6ed" call-id="18" rc-code="7" op-status="0" interval="0" last-run="1426559911" last-rc-change="1426559911" exec-
Mar 17 09:38:31 [4071] cib: info: cib_perform_op: ++ </lrm_resource>
Mar 17 09:38:31 [4071] cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=omni1/crmd/104, version=0.55.2)
Mar 17 09:38:31 [4072] stonith-ng: debug: xml_patch_version_check: Can apply patch 0.55.2 to 0.55.1
Mar 17 09:38:31 [4076] crmd: debug: te_update_diff: Processing (cib_modify) diff: 0.55.1 -> 0.55.2 (S_TRANSITION_ENGINE)
Mar 17 09:38:31 [4076] crmd: info: match_graph_event: Action dicoba_monitor_0 (5) confirmed on omni1 (rc=0)
Mar 17 09:38:31 [4076] crmd: notice: te_rsc_command: Initiating action 4: probe_complete probe_complete-omni1 on omni1 (local) - no waiting
Mar 17 09:38:31 [4076] crmd: debug: attrd_update_delegate: Sent update: probe_complete=true for omni1
Mar 17 09:38:31 [4076] crmd: info: te_rsc_command: Action 4 confirmed - no wait
Mar 17 09:38:31 [4076] crmd: debug: run_graph: Transition 64 (Complete=1, Pending=0, Fired=1, Skipped=1, Incomplete=0, Source=/opt/var/lib/pacemaker/pengine/pe-input-38.bz2): In-progress
Mar 17 09:38:31 [4076] crmd: notice: run_graph: Transition 64 (Complete=2, Pending=0, Fired=0, Skipped=1, Incomplete=0, Source=/opt/var/lib/pacemaker/pengine/pe-input-38.bz2): Stopped
Mar 17 09:38:31 [4076] crmd: debug: te_graph_trigger: Transition 64 is now complete
Mar 17 09:38:31 [4076] crmd: debug: notify_crmd: Processing transition completion in state S_TRANSITION_ENGINE
Mar 17 09:38:31 [4076] crmd: debug: notify_crmd: Transition 64 status: restart - Transient attribute change
Mar 17 09:38:31 [4076] crmd: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_TRANSITION_ENGINE cause=C_FSA_INTERNAL origin=notify_crmd ]
Mar 17 09:38:31 [4076] crmd: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
Mar 17 09:38:31 [4076] crmd: debug: do_state_transition: All 1 cluster nodes are eligible to run resources.
Mar 17 09:38:31 [4076] crmd: debug: do_pe_invoke: Query 105: Requesting the current CIB: S_POLICY_ENGINE
Mar 17 09:38:31 [4076] crmd: debug: do_pe_invoke_callback: Invoking the PE: query=105, ref=pe_calc-dc-1426559911-86, seq=44, quorate=0
Mar 17 09:38:31 [4074] attrd: debug: attrd_client_message: Broadcasting probe_complete[omni1] = true (writer)
Mar 17 09:38:31 [4075] pengine: debug: unpack_config: STONITH timeout: 60000
Mar 17 09:38:31 [4075] pengine: debug: unpack_config: STONITH of failed nodes is disabled
Mar 17 09:38:31 [4075] pengine: debug: unpack_config: Stop all active resources: false
Mar 17 09:38:31 [4075] pengine: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
Mar 17 09:38:31 [4075] pengine: debug: unpack_config: Default stickiness: 0
Mar 17 09:38:31 [4075] pengine: notice: unpack_config: On loss of CCM Quorum: Ignore
Mar 17 09:38:31 [4075] pengine: debug: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Mar 17 09:38:31 [4075] pengine: info: determine_online_status: Node omni1 is online
Mar 17 09:38:31 [4075] pengine: info: native_print: dicoba (ocf::heartbeat:IPaddr): Stopped
Mar 17 09:38:31 [4075] pengine: info: native_print: dumtest (ocf::pacemaker:Dummy): Started omni1
Mar 17 09:38:31 [4075] pengine: debug: common_apply_stickiness: Resource dumtest: preferring current location (node=omni1, weight=100)
Mar 17 09:38:31 [4075] pengine: debug: native_assign_node: Assigning omni1 to dicoba
Mar 17 09:38:31 [4075] pengine: debug: native_assign_node: Assigning omni1 to dumtest
Mar 17 09:38:31 [4075] pengine: notice: LogActions: Start dicoba (omni1)
Mar 17 09:38:31 [4075] pengine: info: LogActions: Leave dumtest (Started omni1)
Mar 17 09:38:31 [4075] pengine: notice: process_pe_message: Calculated Transition 65: /opt/var/lib/pacemaker/pengine/pe-input-39.bz2
Mar 17 09:38:31 [4076] crmd: debug: s_crmd_fsa: Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Mar 17 09:38:31 [4076] crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Mar 17 09:38:31 [4076] crmd: debug: unpack_graph: Unpacked transition 65: 1 actions in 1 synapses
Mar 17 09:38:31 [4076] crmd: info: do_te_invoke: Processing graph 65 (ref=pe_calc-dc-1426559911-86) derived from /opt/var/lib/pacemaker/pengine/pe-input-39.bz2
Mar 17 09:38:31 [4076] crmd: notice: te_rsc_command: Initiating action 5: start dicoba_start_0 on omni1 (local)
Mar 17 09:38:31 [4076] crmd: debug: do_lrm_rsc_op: Stopped 0 recurring operations in preparation for dicoba_start_0
Mar 17 09:38:31 [4076] crmd: info: do_lrm_rsc_op: Performing key=5:65:0:9e7f7079-4fae-4be9-9598-a1544ee5a6ed op=dicoba_start_0
Mar 17 09:38:31 [4073] lrmd: debug: process_lrmd_message: Processed lrmd_rsc_exec operation from 3b650d49-6b19-45f7-d287-f28bf9f52852: rc=19, reply=1, notify=0, exit=134520064
Mar 17 09:38:31 [4073] lrmd: info: log_execute: executing - rsc:dicoba action:start call_id:19
Mar 17 09:38:31 [4076] crmd: debug: run_graph: Transition 65 (Complete=0, Pending=1, Fired=1, Skipped=0, Incomplete=0, Source=/opt/var/lib/pacemaker/pengine/pe-input-39.bz2): In-progress
IPaddr(dicoba)[7991]: 2015/03/17_09:38:32 DEBUG: Using calculated broadcast for 192.168.24.173: 192.168.24.255
IPaddr(dicoba)[7991]: 2015/03/17_09:38:32 ERROR: Could not add 192.168.24.173 to e1000g0: rc=1
Mar 17 09:38:32 [4073] lrmd: debug: operation_finished: dicoba_start_0:7991 - exited with rc=1
Mar 17 09:38:32 [4073] lrmd: notice: operation_finished: dicoba_start_0:7991:stderr [ Converted dotted-quad netmask to CIDR as: 24 ]
Mar 17 09:38:32 [4073] lrmd: notice: operation_finished: dicoba_start_0:7991:stderr [ touch: cannot touch '/opt/var/run/resource-agents/IPaddr-e1000g0': Permission denied ]
Mar 17 09:38:32 [4073] lrmd: notice: operation_finished: dicoba_start_0:7991:stderr [ ifconfig: cannot plumb e1000g0: Insufficient user authorizations ]
Mar 17 09:38:32 [4073] lrmd: debug: operation_finished: dicoba_start_0:7991:stdout [ ERROR: 'ifconfig e1000g0 plumb' failed. ]
Mar 17 09:38:32 [4073] lrmd: info: log_finished: finished - rsc:dicoba action:start call_id:19 pid:7991 exit-code:1 exec-time:507ms queue-time:0ms
Mar 17 09:38:32 [4076] crmd: debug: create_operation_update: do_update_resource: Updating resource dicoba after start op complete (interval=0)
Mar 17 09:38:36 [4071] cib: info: cib_process_ping: Reporting our current digest to omni1: 85ad91897b25ac3d021722424bc3549f for 0.55.2 (8354170 0)
Mar 17 09:38:37 [4076] crmd: info: action_synced_wait: Managed IPaddr_meta-data_0 process 8076 exited with rc=0
Mar 17 09:38:37 [4076] crmd: notice: process_lrm_event: Operation dicoba_start_0: unknown error (node=omni1, call=19, rc=1, cib-update=106, confirmed=true)
Mar 17 09:38:37 [4076] crmd: notice: process_lrm_event: omni1-dicoba_start_0:19 [ Converted dotted-quad netmask to CIDR as: 24\ntouch: cannot touch '/opt/var/run/resource-agents/IPaddr-e1000g0': Permission denied\nifconfig: cannot plumb e1000g0: Insufficient user authorizations\n ]
Mar 17 09:38:37 [4076] crmd: debug: update_history_cache: Updating history for 'dicoba' with start op
Mar 17 09:38:37 [4071] cib: info: cib_process_request: Forwarding cib_modify operation for section status to master (origin=local/crmd/106)
Mar 17 09:38:37 [4071] cib: info: cib_perform_op: Diff: --- 0.55.2 2
Mar 17 09:38:37 [4071] cib: info: cib_perform_op: Diff: +++ 0.55.3 (null)
Mar 17 09:38:37 [4071] cib: info: cib_perform_op: + /cib: @num_updates=3
Mar 17 09:38:37 [4071] cib: info: cib_perform_op: + /cib/status/node_state[@id='40']/lrm[@id='40']/lrm_resources/lrm_resource[@id='dicoba']/lrm_rsc_op[@id='dicoba_last_0']: @operation_key=dicoba_start_0, @operation=start, @transition-key=5:65:0:9e7f7079-4fae-4be9-9598-a1544ee5a6ed, @transition-magic=0:1;5:65:0:9e7f7079-4fae-4be9-9598-a1544ee5a6ed, @call-id=19, @rc-code=1, @exec-time=507
Mar 17 09:38:37 [4071] cib: info: cib_perform_op: ++ /cib/status/node_state[@id='40']/lrm[@id='40']/lrm_resources/lrm_resource[@id='dicoba']: <lrm_rsc_op id="dicoba_last_failure_0" operation_key="dicoba_start_0" operation="start" crm-debug-origin="do_update_resource" crm_feature_set="3.0.9" transition-key="5:65:0:9e7f7079-4fae-4be9-9598-a1544ee5a6ed" transition-magic="0:1;5:65:0:9e7f7079-4fae-4be9-9598-a1544ee5a6ed" call-id="19" rc-code="1" op-status="0" interval="0" last-run="1426559911" last
Mar 17 09:38:37 [4071] cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=omni1/crmd/106, version=0.55.3)
Mar 17 09:38:37 [4072] stonith-ng: debug: xml_patch_version_check: Can apply patch 0.55.3 to 0.55.2
Mar 17 09:38:37 [4076] crmd: debug: te_update_diff: Processing (cib_modify) diff: 0.55.2 -> 0.55.3 (S_TRANSITION_ENGINE)
Mar 17 09:38:37 [4076] crmd: warning: status_from_rc: Action 5 (dicoba_start_0) on omni1 failed (target: 0 vs. rc: 1): Error
Mar 17 09:38:37 [4076] crmd: warning: update_failcount: Updating failcount for dicoba on omni1 after failed start: rc=1 (update=INFINITY, time=1426559917)
Mar 17 09:38:37 [4076] crmd: debug: attrd_update_delegate: Sent update: fail-count-dicoba=INFINITY for omni1
Mar 17 09:38:37 [4074] attrd: debug: attrd_client_message: Broadcasting fail-count-dicoba[omni1] = INFINITY (writer)
Mar 17 09:38:37 [4076] crmd: debug: attrd_update_delegate: Sent update: last-failure-dicoba=1426559917 for omni1
Mar 17 09:38:37 [4074] attrd: debug: attrd_client_message: Broadcasting last-failure-dicoba[omni1] = 1426559917 (writer)
Mar 17 09:38:37 [4074] attrd: info: attrd_peer_update: Setting fail-count-dicoba[omni1]: (null) -> INFINITY from omni1
Mar 17 09:38:37 [4074] attrd: debug: write_attribute: Update: omni1[fail-count-dicoba]=INFINITY (40 40 40 omni1)
Mar 17 09:38:37 [4074] attrd: info: write_attribute: Sent update 7 with 1 changes for fail-count-dicoba, id=<n/a>, set=(null)
Mar 17 09:38:37 [4074] attrd: info: attrd_peer_update: Setting last-failure-dicoba[omni1]: 1426505769 -> 1426559917 from omni1
Mar 17 09:38:37 [4074] attrd: debug: write_attribute: Update: omni1[last-failure-dicoba]=1426559917 (40 40 40 omni1)
Mar 17 09:38:37 [4074] attrd: info: write_attribute: Sent update 8 with 1 changes for last-failure-dicoba, id=<n/a>, set=(null)
Mar 17 09:38:37 [4071] cib: info: cib_process_request: Forwarding cib_modify operation for section status to master (origin=local/attrd/7)
Mar 17 09:38:37 [4076] crmd: debug: update_abort_priority: Abort priority upgraded from 0 to 1
Mar 17 09:38:37 [4076] crmd: debug: update_abort_priority: Abort action done superseded by restart: Event failed
Mar 17 09:38:37 [4071] cib: info: cib_process_request: Forwarding cib_modify operation for section status to master (origin=local/attrd/8)
Mar 17 09:38:37 [4071] cib: info: cib_perform_op: Diff: --- 0.55.3 2
Mar 17 09:38:37 [4071] cib: info: cib_perform_op: Diff: +++ 0.55.4 (null)
Mar 17 09:38:37 [4071] cib: info: cib_perform_op: + /cib: @num_updates=4
Mar 17 09:38:37 [4071] cib: info: cib_perform_op: ++ /cib/status/node_state[@id='40']/transient_attributes[@id='40']/instance_attributes[@id='status-40']: <nvpair id="status-40-fail-count-dicoba" name="fail-count-dicoba" value="INFINITY"/>
Mar 17 09:38:37 [4071] cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=omni1/attrd/7, version=0.55.4)
Mar 17 09:38:37 [4072] stonith-ng: debug: xml_patch_version_check: Can apply patch 0.55.4 to 0.55.3
Mar 17 09:38:37 [4071] cib: info: cib_perform_op: Diff: --- 0.55.4 2
Mar 17 09:38:37 [4071] cib: info: cib_perform_op: Diff: +++ 0.55.5 (null)
Mar 17 09:38:37 [4071] cib: info: cib_perform_op: + /cib: @num_updates=5
Mar 17 09:38:37 [4076] crmd: notice: abort_transition_graph: Transition aborted by dicoba_start_0 'modify' on omni1: Event failed (magic=0:1;5:65:0:9e7f7079-4fae-4be9-9598-a1544ee5a6ed, cib=0.55.3, source=match_graph_event:350, 0)
Mar 17 09:38:37 [4076] crmd: info: match_graph_event: Action dicoba_start_0 (5) confirmed on omni1 (rc=4)
Mar 17 09:38:37 [4074] attrd: info: attrd_cib_callback: Update 7 for fail-count-dicoba: OK (0)
Mar 17 09:38:37 [4074] attrd: info: attrd_cib_callback: Update 7 for fail-count-dicoba[omni1]=INFINITY: OK (0)
Mar 17 09:38:37 [4076] crmd: warning: update_failcount: Updating failcount for dicoba on omni1 after failed start: rc=1 (update=INFINITY, time=1426559917)
Mar 17 09:38:37 [4071] cib: info: cib_perform_op: + /cib/status/node_state[@id='40']/transient_attributes[@id='40']/instance_attributes[@id='status-40']/nvpair[@id='status-40-last-failure-dicoba']: @value=1426559917
Mar 17 09:38:37 [4076] crmd: debug: attrd_update_delegate: Sent update: fail-count-dicoba=INFINITY for omni1
Mar 17 09:38:37 [4076] crmd: debug: attrd_update_delegate: Sent update: last-failure-dicoba=1426559917 for omni1
Mar 17 09:38:37 [4076] crmd: info: process_graph_event: Detected action (65.5) dicoba_start_0.19=unknown error: failed
Mar 17 09:38:37 [4076] crmd: warning: status_from_rc: Action 5 (dicoba_start_0) on omni1 failed (target: 0 vs. rc: 1): Error
Mar 17 09:38:37 [4071] cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=omni1/attrd/8, version=0.55.5)
Mar 17 09:38:37 [4072] stonith-ng: debug: xml_patch_version_check: Can apply patch 0.55.5 to 0.55.4
Mar 17 09:38:37 [4074] attrd: info: attrd_cib_callback: Update 8 for last-failure-dicoba: OK (0)
  258. Mar 17 09:38:37 [4074] attrd: info: attrd_cib_callback: Update 8 for last-failure-dicoba[omni1]=1426559917: OK (0)
  259. Mar 17 09:38:37 [4074] attrd: debug: attrd_client_message: Broadcasting fail-count-dicoba[omni1] = INFINITY (writer)
  260. Mar 17 09:38:37 [4074] attrd: debug: attrd_client_message: Broadcasting last-failure-dicoba[omni1] = 1426559917 (writer)
  261. Mar 17 09:38:37 [4076] crmd: warning: update_failcount: Updating failcount for dicoba on omni1 after failed start: rc=1 (update=INFINITY, time=1426559917)
  262. Mar 17 09:38:37 [4074] attrd: debug: attrd_client_message: Broadcasting fail-count-dicoba[omni1] = INFINITY (writer)
  263. Mar 17 09:38:37 [4076] crmd: debug: attrd_update_delegate: Sent update: fail-count-dicoba=INFINITY for omni1
  264. Mar 17 09:38:37 [4074] attrd: debug: attrd_client_message: Broadcasting last-failure-dicoba[omni1] = 1426559917 (writer)
  265. Mar 17 09:38:37 [4076] crmd: debug: attrd_update_delegate: Sent update: last-failure-dicoba=1426559917 for omni1
  266. Mar 17 09:38:37 [4076] crmd: info: abort_transition_graph: Transition aborted by dicoba_start_0 'create' on (null): Event failed (magic=0:1;5:65:0:9e7f7079-4fae-4be9-9598-a1544ee5a6ed, cib=0.55.3, source=match_graph_event:350, 0)
  267. Mar 17 09:38:37 [4076] crmd: info: match_graph_event: Action dicoba_start_0 (5) confirmed on omni1 (rc=4)
  268. Mar 17 09:38:37 [4076] crmd: warning: update_failcount: Updating failcount for dicoba on omni1 after failed start: rc=1 (update=INFINITY, time=1426559917)
  269. Mar 17 09:38:37 [4074] attrd: debug: attrd_client_message: Broadcasting fail-count-dicoba[omni1] = INFINITY (writer)
  270. Mar 17 09:38:37 [4076] crmd: debug: attrd_update_delegate: Sent update: fail-count-dicoba=INFINITY for omni1
  271. Mar 17 09:38:37 [4074] attrd: debug: attrd_client_message: Broadcasting last-failure-dicoba[omni1] = 1426559917 (writer)
  272. Mar 17 09:38:37 [4076] crmd: debug: attrd_update_delegate: Sent update: last-failure-dicoba=1426559917 for omni1
  273. Mar 17 09:38:37 [4076] crmd: info: process_graph_event: Detected action (65.5) dicoba_start_0.19=unknown error: failed
  274. Mar 17 09:38:37 [4076] crmd: debug: te_update_diff: Processing (cib_modify) diff: 0.55.3 -> 0.55.4 (S_TRANSITION_ENGINE)
  275. Mar 17 09:38:37 [4076] crmd: debug: update_abort_priority: Abort priority upgraded from 1 to 1000000
  276. Mar 17 09:38:37 [4076] crmd: debug: update_abort_priority: 'Event failed' abort superseded by Transient attribute change
  277. Mar 17 09:38:37 [4076] crmd: notice: abort_transition_graph: Transition aborted by status-40-fail-count-dicoba, fail-count-dicoba=INFINITY: Transient attribute change (create cib=0.55.4, source=te_update_diff:391, path=/cib/status/node_state[@id='40']/transient_attributes[@id='40']/instance_attributes[@id='status-40'], 0)
  278. Mar 17 09:38:37 [4076] crmd: debug: te_update_diff: Processing (cib_modify) diff: 0.55.4 -> 0.55.5 (S_TRANSITION_ENGINE)
  279. Mar 17 09:38:37 [4076] crmd: info: abort_transition_graph: Transition aborted by status-40-last-failure-dicoba, last-failure-dicoba=1426559917: Transient attribute change (modify cib=0.55.5, source=te_update_diff:391, path=/cib/status/node_state[@id='40']/transient_attributes[@id='40']/instance_attributes[@id='status-40']/nvpair[@id='status-40-last-failure-dicoba'], 0)
  280. Mar 17 09:38:37 [4076] crmd: notice: run_graph: Transition 65 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/opt/var/lib/pacemaker/pengine/pe-input-39.bz2): Complete
  281. Mar 17 09:38:37 [4076] crmd: debug: te_graph_trigger: Transition 65 is now complete
  282. Mar 17 09:38:37 [4076] crmd: debug: notify_crmd: Processing transition completion in state S_TRANSITION_ENGINE
  283. Mar 17 09:38:37 [4076] crmd: debug: notify_crmd: Transition 65 status: restart - Transient attribute change
  284. Mar 17 09:38:37 [4076] crmd: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_TRANSITION_ENGINE cause=C_FSA_INTERNAL origin=notify_crmd ]
  285. Mar 17 09:38:37 [4076] crmd: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
  286. Mar 17 09:38:37 [4076] crmd: debug: do_state_transition: All 1 cluster nodes are eligible to run resources.
  287. Mar 17 09:38:37 [4076] crmd: debug: do_pe_invoke: Query 107: Requesting the current CIB: S_POLICY_ENGINE
  288. Mar 17 09:38:37 [4076] crmd: debug: do_pe_invoke_callback: Invoking the PE: query=107, ref=pe_calc-dc-1426559917-88, seq=44, quorate=0
  289. Mar 17 09:38:37 [4075] pengine: debug: unpack_config: STONITH timeout: 60000
  290. Mar 17 09:38:37 [4075] pengine: debug: unpack_config: STONITH of failed nodes is disabled
  291. Mar 17 09:38:37 [4075] pengine: debug: unpack_config: Stop all active resources: false
  292. Mar 17 09:38:37 [4075] pengine: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
  293. Mar 17 09:38:37 [4075] pengine: debug: unpack_config: Default stickiness: 0
  294. Mar 17 09:38:37 [4075] pengine: notice: unpack_config: On loss of CCM Quorum: Ignore
  295. Mar 17 09:38:37 [4075] pengine: debug: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
  296. Mar 17 09:38:37 [4075] pengine: info: determine_online_status: Node omni1 is online
  297. Mar 17 09:38:37 [4075] pengine: debug: determine_op_status: dicoba_start_0 on omni1 returned 'unknown error' (1) instead of the expected value: 'ok' (0)
  298. Mar 17 09:38:37 [4075] pengine: warning: unpack_rsc_op_failure: Processing failed op start for dicoba on omni1: unknown error (1)
  299. Mar 17 09:38:37 [4075] pengine: debug: determine_op_status: dicoba_start_0 on omni1 returned 'unknown error' (1) instead of the expected value: 'ok' (0)
  300. Mar 17 09:38:37 [4075] pengine: warning: unpack_rsc_op_failure: Processing failed op start for dicoba on omni1: unknown error (1)
  301. Mar 17 09:38:37 [4075] pengine: info: native_print: dicoba (ocf::heartbeat:IPaddr): FAILED omni1
  302. Mar 17 09:38:37 [4075] pengine: info: native_print: dumtest (ocf::pacemaker:Dummy): Started omni1
  303. Mar 17 09:38:37 [4075] pengine: debug: common_apply_stickiness: Resource dicoba: preferring current location (node=omni1, weight=100)
  304. Mar 17 09:38:37 [4075] pengine: info: get_failcount_full: dicoba has failed INFINITY times on omni1
  305. Mar 17 09:38:37 [4075] pengine: warning: common_apply_stickiness: Forcing dicoba away from omni1 after 1000000 failures (max=1000000)
  306. Mar 17 09:38:37 [4075] pengine: debug: common_apply_stickiness: Resource dumtest: preferring current location (node=omni1, weight=100)
  307. Mar 17 09:38:37 [4075] pengine: debug: native_assign_node: All nodes for resource dicoba are unavailable, unclean or shutting down (omni1: 1, -1000000)
  308. Mar 17 09:38:37 [4075] pengine: debug: native_assign_node: Could not allocate a node for dicoba
  309. Mar 17 09:38:37 [4075] pengine: info: native_color: Resource dicoba cannot run anywhere
  310. Mar 17 09:38:37 [4075] pengine: debug: native_assign_node: Assigning omni1 to dumtest
  311. Mar 17 09:38:37 [4075] pengine: notice: LogActions: Stop dicoba (omni1)
  312. Mar 17 09:38:37 [4075] pengine: info: LogActions: Leave dumtest (Started omni1)
  313. Mar 17 09:38:37 [4075] pengine: notice: process_pe_message: Calculated Transition 66: /opt/var/lib/pacemaker/pengine/pe-input-40.bz2
  314. Mar 17 09:38:37 [4076] crmd: debug: s_crmd_fsa: Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
  315. Mar 17 09:38:37 [4076] crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
  316. Mar 17 09:38:37 [4076] crmd: debug: unpack_graph: Unpacked transition 66: 2 actions in 2 synapses
  317. Mar 17 09:38:37 [4076] crmd: info: do_te_invoke: Processing graph 66 (ref=pe_calc-dc-1426559917-88) derived from /opt/var/lib/pacemaker/pengine/pe-input-40.bz2
  318. Mar 17 09:38:37 [4076] crmd: notice: te_rsc_command: Initiating action 2: stop dicoba_stop_0 on omni1 (local)
  319. Mar 17 09:38:37 [4076] crmd: debug: do_lrm_rsc_op: Stopped 0 recurring operations in preparation for dicoba_stop_0
  320. Mar 17 09:38:37 [4076] crmd: info: do_lrm_rsc_op: Performing key=2:66:0:9e7f7079-4fae-4be9-9598-a1544ee5a6ed op=dicoba_stop_0
  321. Mar 17 09:38:37 [4073] lrmd: debug: process_lrmd_message: Processed lrmd_rsc_exec operation from 3b650d49-6b19-45f7-d287-f28bf9f52852: rc=20, reply=1, notify=0, exit=134520064
  322. Mar 17 09:38:37 [4073] lrmd: info: log_execute: executing - rsc:dicoba action:stop call_id:20
  323. Mar 17 09:38:37 [4076] crmd: debug: run_graph: Transition 66 (Complete=0, Pending=1, Fired=1, Skipped=0, Incomplete=1, Source=/opt/var/lib/pacemaker/pengine/pe-input-40.bz2): In-progress
  324. Mar 17 09:38:37 [4073] lrmd: debug: operation_finished: dicoba_stop_0:8084 - exited with rc=0
  325. Mar 17 09:38:37 [4073] lrmd: debug: operation_finished: dicoba_stop_0:8084:stderr [ -- empty -- ]
  326. Mar 17 09:38:37 [4073] lrmd: debug: operation_finished: dicoba_stop_0:8084:stdout [ -- empty -- ]
  327. Mar 17 09:38:37 [4073] lrmd: info: log_finished: finished - rsc:dicoba action:stop call_id:20 pid:8084 exit-code:0 exec-time:41ms queue-time:0ms
  328. Mar 17 09:38:37 [4076] crmd: debug: create_operation_update: do_update_resource: Updating resource dicoba after stop op complete (interval=0)
  329. Mar 17 09:38:37 [4076] crmd: notice: process_lrm_event: Operation dicoba_stop_0: ok (node=omni1, call=20, rc=0, cib-update=108, confirmed=true)
  330. Mar 17 09:38:37 [4076] crmd: debug: update_history_cache: Updating history for 'dicoba' with stop op
  331. Mar 17 09:38:37 [4071] cib: info: cib_process_request: Forwarding cib_modify operation for section status to master (origin=local/crmd/108)
  332. Mar 17 09:38:37 [4071] cib: info: cib_perform_op: Diff: --- 0.55.5 2
  333. Mar 17 09:38:37 [4071] cib: info: cib_perform_op: Diff: +++ 0.55.6 (null)
  334. Mar 17 09:38:37 [4071] cib: info: cib_perform_op: + /cib: @num_updates=6
  335. Mar 17 09:38:37 [4071] cib: info: cib_perform_op: + /cib/status/node_state[@id='40']/lrm[@id='40']/lrm_resources/lrm_resource[@id='dicoba']/lrm_rsc_op[@id='dicoba_last_0']: @operation_key=dicoba_stop_0, @operation=stop, @transition-key=2:66:0:9e7f7079-4fae-4be9-9598-a1544ee5a6ed, @transition-magic=0:0;2:66:0:9e7f7079-4fae-4be9-9598-a1544ee5a6ed, @call-id=20, @rc-code=0, @last-run=1426559917, @last-rc-change=1426559917, @exec-time=41
  336. Mar 17 09:38:37 [4071] cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=omni1/crmd/108, version=0.55.6)
  337. Mar 17 09:38:37 [4072] stonith-ng: debug: xml_patch_version_check: Can apply patch 0.55.6 to 0.55.5
  338. Mar 17 09:38:37 [4076] crmd: debug: te_update_diff: Processing (cib_modify) diff: 0.55.5 -> 0.55.6 (S_TRANSITION_ENGINE)
  339. Mar 17 09:38:37 [4076] crmd: info: match_graph_event: Action dicoba_stop_0 (2) confirmed on omni1 (rc=0)
  340. Mar 17 09:38:37 [4076] crmd: debug: te_pseudo_action: Pseudo action 3 fired and confirmed
  341. Mar 17 09:38:37 [4076] crmd: debug: run_graph: Transition 66 (Complete=1, Pending=0, Fired=1, Skipped=0, Incomplete=0, Source=/opt/var/lib/pacemaker/pengine/pe-input-40.bz2): In-progress
  342. Mar 17 09:38:37 [4076] crmd: notice: run_graph: Transition 66 (Complete=2, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/opt/var/lib/pacemaker/pengine/pe-input-40.bz2): Complete
  343. Mar 17 09:38:37 [4076] crmd: debug: te_graph_trigger: Transition 66 is now complete
  344. Mar 17 09:38:37 [4076] crmd: debug: notify_crmd: Processing transition completion in state S_TRANSITION_ENGINE
  345. Mar 17 09:38:37 [4076] crmd: debug: notify_crmd: Transition 66 status: done - <null>
  346. Mar 17 09:38:37 [4076] crmd: debug: s_crmd_fsa: Processing I_TE_SUCCESS: [ state=S_TRANSITION_ENGINE cause=C_FSA_INTERNAL origin=notify_crmd ]
  347. Mar 17 09:38:37 [4076] crmd: info: do_log: FSA: Input I_TE_SUCCESS from notify_crmd() received in state S_TRANSITION_ENGINE
  348. Mar 17 09:38:37 [4076] crmd: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
  349. Mar 17 09:38:37 [4076] crmd: debug: do_state_transition: Starting PEngine Recheck Timer
  350. Mar 17 09:38:37 [4076] crmd: debug: crm_timer_start: Started PEngine Recheck Timer (I_PE_CALC:900000ms), src=192
  351. Mar 17 09:38:42 [4071] cib: info: cib_process_ping: Reporting our current digest to omni1: ab68e34fa794b2769e8118204051f89c for 0.55.6 (8354170 0)
  352. Mar 17 09:40:26 [4073] lrmd: debug: recurring_action_timer: Scheduling another invokation of dumtest_monitor_120000
  353. Dummy(dumtest)[8114]: 2015/03/17_09:40:26 DEBUG: dumtest monitor : 0
  354. Mar 17 09:40:26 [4073] lrmd: debug: operation_finished: dumtest_monitor_120000:8114 - exited with rc=0
  355. Mar 17 09:40:26 [4073] lrmd: debug: operation_finished: dumtest_monitor_120000:8114:stderr [ -- empty -- ]
  356. Mar 17 09:40:26 [4073] lrmd: debug: operation_finished: dumtest_monitor_120000:8114:stdout [ -- empty -- ]
  357. Mar 17 09:40:26 [4073] lrmd: debug: log_finished: finished - rsc:dumtest action:monitor call_id:13 pid:8114 exit-code:0 exec-time:0ms queue-time:0ms
  358. Mar 17 09:40:27 [4071] cib: debug: crm_client_new: Connecting 8350128 for uid=0 gid=0 pid=8124 id=8513e091-dc4c-c7fd-ec39-d0c199cf0c6b
  359. Mar 17 09:40:27 [4071] cib: debug: handle_new_connection: IPC credentials authenticated (4071-8124-24)
  360. Mar 17 09:40:27 [4071] cib: debug: qb_ipcs_us_connect: connecting to client (4071-8124-24)
  361. Mar 17 09:40:27 [4071] cib: debug: _sock_add_to_mainloop: added 24 to poll loop (liveness)
  362. Mar 17 09:40:27 [4071] cib: debug: _process_request_: recv from client connection failed (4071-8124-24): Connection reset by peer (131)
  363. Mar 17 09:40:27 [4071] cib: error: qb_ipcs_dispatch_connection_request: request returned error (4071-8124-24): Connection reset by peer (131)
  364. Mar 17 09:40:27 [4071] cib: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(4071-8124-24) state:2
  365. Mar 17 09:40:27 [4071] cib: debug: crm_client_destroy: Destroying 0 events
  366. Mar 17 09:40:27 [4071] cib: debug: crm_client_new: Connecting 8350128 for uid=0 gid=0 pid=8127 id=a1a71be7-5cad-4505-fb3c-c36f36cbfecc
  367. Mar 17 09:40:27 [4071] cib: debug: handle_new_connection: IPC credentials authenticated (4071-8127-24)
  368. Mar 17 09:40:27 [4071] cib: debug: qb_ipcs_us_connect: connecting to client (4071-8127-24)
  369. Mar 17 09:40:27 [4071] cib: debug: _sock_add_to_mainloop: added 24 to poll loop (liveness)
  370. Mar 17 09:40:27 [4071] cib: debug: _process_request_: recv from client connection failed (4071-8127-24): Connection reset by peer (131)
  371. Mar 17 09:40:27 [4071] cib: error: qb_ipcs_dispatch_connection_request: request returned error (4071-8127-24): Connection reset by peer (131)
  372. Mar 17 09:40:27 [4071] cib: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(4071-8127-24) state:2
  373. Mar 17 09:40:27 [4071] cib: debug: crm_client_destroy: Destroying 0 events
  374. Mar 17 09:40:27 [4071] cib: debug: crm_client_new: Connecting 8350128 for uid=0 gid=0 pid=8130 id=a2dd6652-87f6-4210-c6b6-f35ae9d0693d
  375. Mar 17 09:40:27 [4071] cib: debug: handle_new_connection: IPC credentials authenticated (4071-8130-24)
  376. Mar 17 09:40:27 [4071] cib: debug: qb_ipcs_us_connect: connecting to client (4071-8130-24)
  377. Mar 17 09:40:27 [4071] cib: debug: _sock_add_to_mainloop: added 24 to poll loop (liveness)
  378. Mar 17 09:40:27 [4071] cib: info: cib_process_request: Forwarding cib_apply_diff operation for section 'all' to master (origin=local/cibadmin/2)
  379. Mar 17 09:40:27 [4071] cib: error: crm_element_value: Couldn't find admin_epoch in NULL
  380. Mar 17 09:40:27 [4071] cib: error: crm_abort: crm_element_value: Triggered assert at xml.c:6047 : data != NULL
  381. Mar 17 09:40:27 [4071] cib: error: crm_element_value: Couldn't find epoch in NULL
  382. Mar 17 09:40:27 [4071] cib: error: crm_abort: crm_element_value: Triggered assert at xml.c:6047 : data != NULL
  383. Mar 17 09:40:27 [4071] cib: error: crm_element_value: Couldn't find num_updates in NULL
  384. Mar 17 09:40:27 [4071] cib: error: crm_abort: crm_element_value: Triggered assert at xml.c:6047 : data != NULL
  385. Mar 17 09:40:27 [4071] cib: error: crm_element_value: Couldn't find admin_epoch in NULL
  386. Mar 17 09:40:27 [4071] cib: error: crm_abort: crm_element_value: Triggered assert at xml.c:6047 : data != NULL
  387. Mar 17 09:40:27 [4071] cib: error: crm_element_value: Couldn't find epoch in NULL
  388. Mar 17 09:40:27 [4071] cib: error: crm_abort: crm_element_value: Triggered assert at xml.c:6047 : data != NULL
  389. Mar 17 09:40:27 [4071] cib: error: crm_element_value: Couldn't find num_updates in NULL
  390. Mar 17 09:40:27 [4071] cib: error: crm_abort: crm_element_value: Triggered assert at xml.c:6047 : data != NULL
  391. Mar 17 09:40:27 [4071] cib: debug: xml_patch_version_check: Can apply patch 0.55.7 to 0.55.6
  392. Mar 17 09:40:27 [4071] cib: info: cib_perform_op: Diff: --- 0.55.6 2
  393. Mar 17 09:40:27 [4071] cib: info: cib_perform_op: Diff: +++ 0.56.7 f759878d31330d7fdfd6e0439f23bea1
  394. Mar 17 09:40:27 [4071] cib: info: cib_perform_op: + /cib: @epoch=56, @num_updates=7
  395. Mar 17 09:40:27 [4071] cib: info: cib_process_request: Completed cib_apply_diff operation for section 'all': OK (rc=0, origin=omni1/cibadmin/2, version=0.56.7)
  396. Mar 17 09:40:27 [4072] stonith-ng: debug: xml_patch_version_check: Can apply patch 0.56.7 to 0.55.6
  397. Mar 17 09:40:27 [4071] cib: debug: _process_request_: recv from client connection failed (4071-8130-24): Connection reset by peer (131)
  398. Mar 17 09:40:27 [4071] cib: error: qb_ipcs_dispatch_connection_request: request returned error (4071-8130-24): Connection reset by peer (131)
  399. Mar 17 09:40:27 [4071] cib: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(4071-8130-24) state:2
  400. Mar 17 09:40:27 [4071] cib: debug: crm_client_destroy: Destroying 0 events
  401. Mar 17 09:40:27 [4076] crmd: debug: te_update_diff: Processing (cib_apply_diff) diff: 0.55.6 -> 0.56.7 (S_IDLE)
  402. Mar 17 09:40:27 [4071] cib: debug: crm_client_new: Connecting 835ba58 for uid=0 gid=0 pid=8131 id=a71c659d-f9cf-e07b-b901-d5d81a63e583
  403. Mar 17 09:40:27 [4071] cib: debug: handle_new_connection: IPC credentials authenticated (4071-8131-24)
  404. Mar 17 09:40:27 [4071] cib: debug: qb_ipcs_us_connect: connecting to client (4071-8131-24)
  405. Mar 17 09:40:27 [4071] cib: debug: _sock_add_to_mainloop: added 24 to poll loop (liveness)
  406. Mar 17 09:40:27 [4071] cib: debug: _process_request_: recv from client connection failed (4071-8131-24): Connection reset by peer (131)
  407. Mar 17 09:40:27 [4071] cib: error: qb_ipcs_dispatch_connection_request: request returned error (4071-8131-24): Connection reset by peer (131)
  408. Mar 17 09:40:27 [4071] cib: debug: qb_ipcs_disconnect: qb_ipcs_disconnect(4071-8131-24) state:2
  409. Mar 17 09:40:27 [4071] cib: debug: crm_client_destroy: Destroying 0 events
  410. Mar 17 09:40:32 [4071] cib: info: cib_process_ping: Reporting our current digest to omni1: f759878d31330d7fdfd6e0439f23bea1 for 0.56.7 (8351c98 0)
  411. M