Untitled
a guest
Apr 26th, 2013
Apr 27 06:58:56 node-dbpool01 cib: [1091]: info: cib_replace_notify: Replaced: 0.36.19 -> 0.37.1 from <null>
Apr 27 06:58:56 node-dbpool01 cib: [1091]: info: cib:diff: - <cib admin_epoch="0" epoch="36" num_updates="19" />
Apr 27 06:58:56 node-dbpool01 cib: [1091]: info: cib:diff: + <cib epoch="37" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.6" update-origin="node-dbpool01" update-client="cibadmin" cib-last-written="Sat Apr 27 06:57:57 2013" have-quorum="0" dc-uuid="node-dbpool01" >
Apr 27 06:58:56 node-dbpool01 cib: [1091]: info: cib:diff: + <configuration >
Apr 27 06:58:56 node-dbpool01 cib: [1091]: info: cib:diff: + <resources >
Apr 27 06:58:56 node-dbpool01 cib: [1091]: info: cib:diff: + <primitive class="ocf" id="pgPool" provider="heartbeat" type="pgpool" __crm_diff_marker__="added:top" >
Apr 27 06:58:56 node-dbpool01 cib: [1091]: info: cib:diff: + <instance_attributes id="pgPool-instance_attributes" >
Apr 27 06:58:56 node-dbpool01 cib: [1091]: info: cib:diff: + <nvpair id="pgPool-instance_attributes-logfile" name="logfile" value="/var/log/postgresql/pgpool" />
Apr 27 06:58:56 node-dbpool01 cib: [1091]: info: cib:diff: + </instance_attributes>
Apr 27 06:58:56 node-dbpool01 cib: [1091]: info: cib:diff: + </primitive>
Apr 27 06:58:56 node-dbpool01 cib: [1091]: info: cib:diff: + </resources>
Apr 27 06:58:56 node-dbpool01 crmd: [1096]: info: abort_transition_graph: te_update_diff:126 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=0.37.1) : Non-status change
Apr 27 06:58:56 node-dbpool01 cib: [1091]: info: cib:diff: + </configuration>
Apr 27 06:58:56 node-dbpool01 attrd: [1094]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Apr 27 06:58:56 node-dbpool01 crmd: [1096]: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Apr 27 06:58:56 node-dbpool01 cib: [1091]: info: cib:diff: + </cib>
Apr 27 06:58:56 node-dbpool01 crmd: [1096]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Apr 27 06:58:56 node-dbpool01 cib: [1091]: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=local/cibadmin/2, version=0.37.1): ok (rc=0)
Apr 27 06:58:56 node-dbpool01 cib: [1091]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/135, version=0.37.2): ok (rc=0)
Apr 27 06:58:56 node-dbpool01 crmd: [1096]: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Apr 27 06:58:56 node-dbpool01 crmd: [1096]: info: do_dc_takeover: Taking over DC status for this partition
Apr 27 06:58:56 node-dbpool01 cib: [1091]: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/138, version=0.37.4): ok (rc=0)
Apr 27 06:58:56 node-dbpool01 cib: [1091]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/139, version=0.37.5): ok (rc=0)
Apr 27 06:58:56 node-dbpool01 attrd: [1094]: notice: attrd_trigger_update: Sending flush op to all hosts for: last-failure-pgPool (1367034624)
Apr 27 06:58:56 node-dbpool01 cib: [1091]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/141, version=0.37.7): ok (rc=0)
Apr 27 06:58:56 node-dbpool01 cib: [1091]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/143, version=0.37.9): ok (rc=0)
Apr 27 06:58:56 node-dbpool01 crmd: [1096]: info: do_dc_join_offer_all: join-5: Waiting on 1 outstanding join acks
Apr 27 06:58:56 node-dbpool01 crmd: [1096]: info: ais_dispatch_message: Membership 116: quorum still lost
Apr 27 06:58:56 node-dbpool01 crmd: [1096]: info: crmd_ais_dispatch: Setting expected votes to 2
Apr 27 06:58:56 node-dbpool01 crmd: [1096]: info: ais_dispatch_message: Membership 116: quorum still lost
Apr 27 06:58:56 node-dbpool01 cib: [1091]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/146, version=0.37.10): ok (rc=0)
Apr 27 06:58:56 node-dbpool01 crmd: [1096]: info: crmd_ais_dispatch: Setting expected votes to 2
Apr 27 06:58:56 node-dbpool01 crmd: [1096]: info: update_dc: Set DC to node-dbpool01 (3.0.6)
Apr 27 06:58:56 node-dbpool01 cib: [1091]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/148, version=0.37.11): ok (rc=0)
Apr 27 06:58:56 node-dbpool01 crmd: [1096]: notice: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Apr 27 06:58:56 node-dbpool01 crmd: [1096]: info: do_dc_join_finalize: join-5: Syncing the CIB from node-dbpool01 to the rest of the cluster
Apr 27 06:58:56 node-dbpool01 cib: [1091]: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/150, version=0.37.11): ok (rc=0)
Apr 27 06:58:56 node-dbpool01 cib: [1091]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/151, version=0.37.12): ok (rc=0)
Apr 27 06:58:56 node-dbpool01 crmd: [1096]: info: do_dc_join_ack: join-5: Updating node state to member for node-dbpool01
Apr 27 06:58:56 node-dbpool01 crmd: [1096]: info: erase_status_tag: Deleting xpath: //node_state[@uname='node-dbpool01']/lrm
Apr 27 06:58:56 node-dbpool01 cib: [1091]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node-dbpool01']/lrm (origin=local/crmd/152, version=0.37.13): ok (rc=0)
Apr 27 06:58:56 node-dbpool01 crmd: [1096]: notice: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Apr 27 06:58:56 node-dbpool01 crmd: [1096]: info: abort_transition_graph: do_te_invoke:162 - Triggered transition abort (complete=1) : Peer Cancelled
Apr 27 06:58:56 node-dbpool01 attrd: [1094]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
Apr 27 06:58:56 node-dbpool01 attrd: [1094]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Apr 27 06:58:56 node-dbpool01 cib: [1091]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/154, version=0.37.15): ok (rc=0)
Apr 27 06:58:56 node-dbpool01 cib: [1091]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/156, version=0.37.17): ok (rc=0)
Apr 27 06:58:56 node-dbpool01 pengine: [1095]: notice: unpack_config: On loss of CCM Quorum: Ignore
Apr 27 06:58:56 node-dbpool01 pengine: [1095]: notice: LogActions: Start pgPool#011(node-dbpool01)
Apr 27 06:58:56 node-dbpool01 crmd: [1096]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Apr 27 06:58:56 node-dbpool01 crmd: [1096]: info: do_te_invoke: Processing graph 14 (ref=pe_calc-dc-1367035136-58) derived from /var/lib/pengine/pe-input-92.bz2
Apr 27 06:58:56 node-dbpool01 crmd: [1096]: info: te_rsc_command: Initiating action 7: start pgPool_start_0 on node-dbpool01 (local)
Apr 27 06:58:56 node-dbpool01 attrd: [1094]: notice: attrd_trigger_update: Sending flush op to all hosts for: last-failure-pgPool (1367034624)
Apr 27 06:58:56 node-dbpool01 lrmd: [1093]: info: rsc:pgPool start[12] (pid 6283)
Apr 27 06:58:56 node-dbpool01 pengine: [1095]: notice: process_pe_message: Transition 14: PEngine Input stored in: /var/lib/pengine/pe-input-92.bz2
Apr 27 06:58:56 node-dbpool01 pgpool(pgPool)[6283]: INFO: pgPool: /usr/sbin/pgpool -f /etc/pgpool2/pgpool.conf AS postgres
Apr 27 06:58:56 node-dbpool01 lrmd: [1093]: info: operation start[12] on pgPool for client 1096: pid 6283 exited with return code 0
Apr 27 06:58:56 node-dbpool01 crmd: [1096]: info: process_lrm_event: LRM operation pgPool_start_0 (call=12, rc=0, cib-update=158, confirmed=true) ok
Apr 27 06:58:56 node-dbpool01 crmd: [1096]: notice: run_graph: ==== Transition 14 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-92.bz2): Complete
Apr 27 06:58:56 node-dbpool01 crmd: [1096]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
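The crmd's path through this log (S_IDLE -> S_POLICY_ENGINE -> S_ELECTION -> S_INTEGRATION -> S_FINALIZE_JOIN -> S_POLICY_ENGINE -> S_TRANSITION_ENGINE -> S_IDLE) is easiest to follow if you pull out just the `do_state_transition` lines. A minimal sketch of how one might do that with Python's `re` module, using the first few transition lines from this log as sample input (the function and regex names are illustrative, not part of any Pacemaker tooling):

```python
import re

# First three crmd state-transition lines from the log above.
log_lines = [
    "Apr 27 06:58:56 node-dbpool01 crmd: [1096]: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]",
    "Apr 27 06:58:56 node-dbpool01 crmd: [1096]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]",
    "Apr 27 06:58:56 node-dbpool01 crmd: [1096]: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]",
]

# Capture "FROM -> TO" plus the FSA input that triggered the transition.
TRANSITION_RE = re.compile(r"State transition (\S+) -> (\S+) \[ input=(\S+)")

def extract_transitions(lines):
    """Return (from_state, to_state, fsa_input) for each transition line."""
    out = []
    for line in lines:
        m = TRANSITION_RE.search(line)
        if m:
            out.append(m.groups())
    return out

for frm, to, inp in extract_transitions(log_lines):
    print(f"{frm} -> {to} ({inp})")
```

Run against the full log, this prints the whole election-and-join cycle in order, which makes it obvious that the `cibadmin` replace of the CIB (the `cib:diff` block adding the `pgPool` primitive) is what forced the DC re-election before the resource was started.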