  1. Aug 28 17:32:50 [693] pm1 corosync notice [MAIN ] Corosync Cluster Engine ('2.3.5'): started and ready to provide service.
  2. Aug 28 17:32:50 [693] pm1 corosync info [MAIN ] Corosync built-in features: pie relro bindnow
  3. Aug 28 17:32:50 [693] pm1 corosync notice [TOTEM ] Initializing transport (UDP/IP Unicast).
  4. Aug 28 17:32:50 [693] pm1 corosync notice [TOTEM ] Initializing transmit/receive security (NSS) crypto: none hash: none
  5. Aug 28 17:32:51 [693] pm1 corosync notice [TOTEM ] The network interface [192.168.122.113] is now up.
  6. Aug 28 17:32:51 [693] pm1 corosync notice [SERV ] Service engine loaded: corosync configuration map access [0]
  7. Aug 28 17:32:51 [693] pm1 corosync info [QB ] server name: cmap
  8. Aug 28 17:32:51 [693] pm1 corosync notice [SERV ] Service engine loaded: corosync configuration service [1]
  9. Aug 28 17:32:51 [693] pm1 corosync info [QB ] server name: cfg
  10. Aug 28 17:32:51 [693] pm1 corosync notice [SERV ] Service engine loaded: corosync cluster closed process group service v1.01 [2]
  11. Aug 28 17:32:51 [693] pm1 corosync info [QB ] server name: cpg
  12. Aug 28 17:32:51 [693] pm1 corosync notice [SERV ] Service engine loaded: corosync profile loading service [4]
  13. Aug 28 17:32:51 [693] pm1 corosync notice [QUORUM] Using quorum provider corosync_votequorum
  14. Aug 28 17:32:51 [693] pm1 corosync notice [VOTEQ ] Waiting for all cluster members. Current votes: 1 expected_votes: 2
  15. Aug 28 17:32:51 [693] pm1 corosync notice [SERV ] Service engine loaded: corosync vote quorum service v1.0 [5]
  16. Aug 28 17:32:51 [693] pm1 corosync info [QB ] server name: votequorum
  17. Aug 28 17:32:51 [693] pm1 corosync notice [SERV ] Service engine loaded: corosync cluster quorum service v0.1 [3]
  18. Aug 28 17:32:51 [693] pm1 corosync info [QB ] server name: quorum
  19. Aug 28 17:32:51 [693] pm1 corosync notice [TOTEM ] adding new UDPU member {192.168.122.172}
  20. Aug 28 17:32:51 [693] pm1 corosync notice [TOTEM ] adding new UDPU member {192.168.122.113}
  21. Aug 28 17:32:51 [693] pm1 corosync notice [TOTEM ] A new membership (192.168.122.113:984) was formed. Members joined: 3232266865
  22. Aug 28 17:32:51 [693] pm1 corosync notice [VOTEQ ] Waiting for all cluster members. Current votes: 1 expected_votes: 2
  23. Aug 28 17:32:51 [693] pm1 corosync notice [VOTEQ ] Waiting for all cluster members. Current votes: 1 expected_votes: 2
  24. Aug 28 17:32:51 [693] pm1 corosync notice [VOTEQ ] Waiting for all cluster members. Current votes: 1 expected_votes: 2
  25. Aug 28 17:32:51 [693] pm1 corosync notice [QUORUM] Members[1]: -1062700431
  26. Aug 28 17:32:51 [693] pm1 corosync notice [MAIN ] Completed service synchronization, ready to provide service.
  27. Aug 28 17:32:53 [693] pm1 corosync notice [TOTEM ] A new membership (192.168.122.113:988) was formed. Members joined: 3232266924
  28. Aug 28 17:32:53 [693] pm1 corosync notice [VOTEQ ] Waiting for all cluster members. Current votes: 1 expected_votes: 2
  29. Aug 28 17:32:53 [693] pm1 corosync notice [QUORUM] This node is within the primary component and will provide service.
  30. Aug 28 17:32:53 [693] pm1 corosync notice [QUORUM] Members[2]: -1062700431 -1062700372
  31. Aug 28 17:32:53 [693] pm1 corosync notice [MAIN ] Completed service synchronization, ready to provide service.
  32. Set r/w permissions for uid=108, gid=113 on /var/log/cluster/corosync.log
  33. Aug 28 17:33:55 [711] pm1 pacemakerd: notice: mcp_read_config: Configured corosync to accept connections from group 113: OK (1)
  34. Aug 28 17:33:55 [711] pm1 pacemakerd: notice: main: Starting Pacemaker 1.1.12 (Build: 561c4cf): ncurses libqb-logging libqb-ipc lha-fencing nagios corosync-native acls
  35. Aug 28 17:33:55 [711] pm1 pacemakerd: info: main: Maximum core file size is: 18446744073709551615
  36. Aug 28 17:33:55 [711] pm1 pacemakerd: info: qb_ipcs_us_publish: server name: pacemakerd
  37. Aug 28 17:33:58 [711] pm1 pacemakerd: info: corosync_node_name: Unable to get node name for nodeid 3232266865
  38. Aug 28 17:33:58 [711] pm1 pacemakerd: notice: get_node_name: Could not obtain a node name for corosync nodeid 3232266865
  39. Aug 28 17:33:58 [711] pm1 pacemakerd: info: crm_get_peer: Created entry 1822bbd1-3fbe-4518-af96-2393e16ee197/0xf3c7d0 for node (null)/3232266865 (1 total)
  40. Aug 28 17:33:58 [711] pm1 pacemakerd: info: crm_get_peer: Node 3232266865 has uuid 3232266865
  41. Aug 28 17:33:58 [711] pm1 pacemakerd: info: crm_update_peer_proc: cluster_connect_cpg: Node (null)[3232266865] - corosync-cpg is now online
  42. Aug 28 17:33:59 [711] pm1 pacemakerd: notice: cluster_connect_quorum: Quorum acquired
  43. Aug 28 17:34:01 [711] pm1 pacemakerd: info: corosync_node_name: Unable to get node name for nodeid 3232266865
  44. Aug 28 17:34:01 [711] pm1 pacemakerd: notice: get_node_name: Defaulting to uname -n for the local corosync node name
  45. Aug 28 17:34:01 [711] pm1 pacemakerd: info: crm_get_peer: Node 3232266865 is now known as pm1
  46. Aug 28 17:34:02 [711] pm1 pacemakerd: info: start_child: Using uid=108 and group=113 for process cib
  47. Aug 28 17:34:02 [711] pm1 pacemakerd: info: start_child: Forked child 717 for process cib
  48. Aug 28 17:34:03 [717] pm1 cib: info: crm_log_init: Changed active directory to /usr/var/lib/heartbeat/cores/hacluster
  49. Aug 28 17:34:03 [717] pm1 cib: info: get_cluster_type: Verifying cluster type: 'corosync'
  50. Aug 28 17:34:03 [717] pm1 cib: info: get_cluster_type: Assuming an active 'corosync' cluster
  51. Aug 28 17:34:03 [717] pm1 cib: info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.xml (digest: /var/lib/pacemaker/cib/cib.xml.sig)
  52. Aug 28 17:34:03 [711] pm1 pacemakerd: info: start_child: Forked child 718 for process stonith-ng
  53. Aug 28 17:34:04 [717] pm1 cib: info: validate_with_relaxng: Creating RNG parser context
  54. Aug 28 17:34:04 [711] pm1 pacemakerd: info: start_child: Forked child 719 for process lrmd
  55. Aug 28 17:34:05 [717] pm1 cib: info: startCib: CIB Initialization completed successfully
  56. Aug 28 17:34:05 [717] pm1 cib: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
  57. Aug 28 17:34:05 [711] pm1 pacemakerd: info: start_child: Using uid=108 and group=113 for process attrd
  58. Aug 28 17:34:05 [711] pm1 pacemakerd: info: start_child: Forked child 720 for process attrd
  59. Aug 28 17:34:06 [718] pm1 stonithd: info: crm_log_init: Changed active directory to /usr/var/lib/heartbeat/cores/root
  60. Aug 28 17:34:06 [718] pm1 stonithd: info: get_cluster_type: Verifying cluster type: 'corosync'
  61. Aug 28 17:34:06 [718] pm1 stonithd: info: get_cluster_type: Assuming an active 'corosync' cluster
  62. Aug 28 17:34:06 [718] pm1 stonithd: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
  63. Aug 28 17:34:06 [720] pm1 attrd: info: crm_log_init: Changed active directory to /usr/var/lib/heartbeat/cores/hacluster
  64. Aug 28 17:34:06 [720] pm1 attrd: info: main: Starting up
  65. Aug 28 17:34:06 [720] pm1 attrd: info: get_cluster_type: Verifying cluster type: 'corosync'
  66. Aug 28 17:34:06 [720] pm1 attrd: info: get_cluster_type: Assuming an active 'corosync' cluster
  67. Aug 28 17:34:06 [720] pm1 attrd: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
  68. Aug 28 17:34:08 [717] pm1 cib: info: corosync_node_name: Unable to get node name for nodeid 3232266865
  69. Aug 28 17:34:08 [717] pm1 cib: notice: get_node_name: Could not obtain a node name for corosync nodeid 3232266865
  70. Aug 28 17:34:08 [717] pm1 cib: info: crm_get_peer: Created entry 23a79809-77d0-4e1d-9a82-adc13c20e2e6/0x1ff3d20 for node (null)/3232266865 (1 total)
  71. Aug 28 17:34:08 [717] pm1 cib: info: crm_get_peer: Node 3232266865 has uuid 3232266865
  72. Aug 28 17:34:08 [717] pm1 cib: info: crm_update_peer_proc: cluster_connect_cpg: Node (null)[3232266865] - corosync-cpg is now online
  73. Aug 28 17:34:08 [717] pm1 cib: info: init_cs_connection_once: Connection to 'corosync': established
  74. Aug 28 17:34:08 [719] pm1 pacemaker_remoted: info: crm_log_init: Changed active directory to /usr/var/lib/heartbeat/cores/root
  75. Aug 28 17:34:08 [719] pm1 pacemaker_remoted: info: qb_ipcs_us_publish: server name: lrmd
  76. Aug 28 17:34:08 [719] pm1 pacemaker_remoted: info: main: Starting
  77. Aug 28 17:34:09 [711] pm1 pacemakerd: info: start_child: Using uid=108 and group=113 for process pengine
  78. Aug 28 17:34:09 [711] pm1 pacemakerd: info: start_child: Forked child 721 for process pengine
  79. Aug 28 17:34:09 [718] pm1 stonithd: info: corosync_node_name: Unable to get node name for nodeid 3232266865
  80. Aug 28 17:34:09 [718] pm1 stonithd: notice: get_node_name: Could not obtain a node name for corosync nodeid 3232266865
  81. Aug 28 17:34:09 [718] pm1 stonithd: info: crm_get_peer: Created entry 7d7dbe71-b8cb-40db-b6d5-e64c22913cff/0x15d5910 for node (null)/3232266865 (1 total)
  82. Aug 28 17:34:09 [718] pm1 stonithd: info: crm_get_peer: Node 3232266865 has uuid 3232266865
  83. Aug 28 17:34:09 [718] pm1 stonithd: info: crm_update_peer_proc: cluster_connect_cpg: Node (null)[3232266865] - corosync-cpg is now online
  84. Aug 28 17:34:09 [718] pm1 stonithd: info: init_cs_connection_once: Connection to 'corosync': established
  85. Aug 28 17:34:09 [721] pm1 pengine: info: crm_log_init: Changed active directory to /usr/var/lib/heartbeat/cores/hacluster
  86. Aug 28 17:34:09 [721] pm1 pengine: info: qb_ipcs_us_publish: server name: pengine
  87. Aug 28 17:34:09 [721] pm1 pengine: info: main: Starting pengine
  88. Aug 28 17:34:09 [717] pm1 cib: info: corosync_node_name: Unable to get node name for nodeid 3232266865
  89. Aug 28 17:34:09 [717] pm1 cib: notice: get_node_name: Defaulting to uname -n for the local corosync node name
  90. Aug 28 17:34:09 [717] pm1 cib: info: crm_get_peer: Node 3232266865 is now known as pm1
  91. Aug 28 17:34:10 [720] pm1 attrd: info: corosync_node_name: Unable to get node name for nodeid 3232266865
  92. Aug 28 17:34:10 [720] pm1 attrd: notice: get_node_name: Could not obtain a node name for corosync nodeid 3232266865
  93. Aug 28 17:34:10 [720] pm1 attrd: info: crm_get_peer: Created entry 8372954d-fa38-4e7d-bc61-c8b96d321d2f/0x1a98440 for node (null)/3232266865 (1 total)
  94. Aug 28 17:34:10 [720] pm1 attrd: info: crm_get_peer: Node 3232266865 has uuid 3232266865
  95. Aug 28 17:34:10 [720] pm1 attrd: info: crm_update_peer_proc: cluster_connect_cpg: Node (null)[3232266865] - corosync-cpg is now online
  96. Aug 28 17:34:10 [720] pm1 attrd: notice: crm_update_peer_state: attrd_peer_change_cb: Node (null)[3232266865] - state is now member (was (null))
  97. Aug 28 17:34:10 [720] pm1 attrd: info: init_cs_connection_once: Connection to 'corosync': established
  98. Aug 28 17:34:10 [718] pm1 stonithd: info: corosync_node_name: Unable to get node name for nodeid 3232266865
  99. Aug 28 17:34:10 [718] pm1 stonithd: notice: get_node_name: Defaulting to uname -n for the local corosync node name
  100. Aug 28 17:34:10 [718] pm1 stonithd: info: crm_get_peer: Node 3232266865 is now known as pm1
  101. Aug 28 17:34:10 [720] pm1 attrd: info: corosync_node_name: Unable to get node name for nodeid 3232266865
  102. Aug 28 17:34:10 [720] pm1 attrd: notice: get_node_name: Defaulting to uname -n for the local corosync node name
  103. Aug 28 17:34:10 [720] pm1 attrd: info: crm_get_peer: Node 3232266865 is now known as pm1
  104. Aug 28 17:34:11 [711] pm1 pacemakerd: info: start_child: Using uid=108 and group=113 for process crmd
  105. Aug 28 17:34:11 [711] pm1 pacemakerd: info: start_child: Forked child 722 for process crmd
  106. Aug 28 17:34:11 [717] pm1 cib: info: qb_ipcs_us_publish: server name: cib_ro
  107. Aug 28 17:34:11 [717] pm1 cib: info: qb_ipcs_us_publish: server name: cib_rw
  108. Aug 28 17:34:11 [717] pm1 cib: info: qb_ipcs_us_publish: server name: cib_shm
  109. Aug 28 17:34:11 [717] pm1 cib: info: cib_init: Starting cib mainloop
  110. Aug 28 17:34:11 [717] pm1 cib: info: pcmk_cpg_membership: Joined[0.0] cib.3232266865
  111. Aug 28 17:34:12 [722] pm1 crmd: info: crm_log_init: Changed active directory to /usr/var/lib/heartbeat/cores/hacluster
  112. Aug 28 17:34:12 [722] pm1 crmd: notice: main: CRM Git Version: 561c4cf
  113. Aug 28 17:34:12 [722] pm1 crmd: info: do_log: FSA: Input I_STARTUP from crmd_init() received in state S_STARTING
  114. Aug 28 17:34:12 [722] pm1 crmd: info: get_cluster_type: Verifying cluster type: 'corosync'
  115. Aug 28 17:34:12 [722] pm1 crmd: info: get_cluster_type: Assuming an active 'corosync' cluster
  116. Aug 28 17:34:12 [720] pm1 attrd: info: main: Cluster connection active
  117. Aug 28 17:34:12 [720] pm1 attrd: info: qb_ipcs_us_publish: server name: attrd
  118. Aug 28 17:34:12 [720] pm1 attrd: info: main: Accepting attribute updates
  119. Aug 28 17:34:12 [711] pm1 pacemakerd: info: main: Starting mainloop
  120. Aug 28 17:34:12 [711] pm1 pacemakerd: info: pcmk_quorum_notification: Membership 988: quorum retained (2)
  121. Aug 28 17:34:13 [711] pm1 pacemakerd: notice: crm_update_peer_state: pcmk_quorum_notification: Node pm1[3232266865] - state is now member (was (null))
  122. Aug 28 17:34:14 [717] pm1 cib: info: pcmk_cpg_membership: Member[0.0] cib.3232266865
  123. Aug 28 17:34:14 [717] pm1 cib: info: pcmk_cpg_membership: Joined[1.0] cib.3232266924
  124. Aug 28 17:34:14 [711] pm1 pacemakerd: info: corosync_node_name: Unable to get node name for nodeid 3232266924
  125. Aug 28 17:34:15 [711] pm1 pacemakerd: notice: get_node_name: Could not obtain a node name for corosync nodeid 3232266924
  126. Aug 28 17:34:15 [711] pm1 pacemakerd: info: crm_get_peer: Created entry 22b73928-314f-4076-897a-32479bff5c44/0xf3e800 for node (null)/3232266924 (2 total)
  127. Aug 28 17:34:15 [711] pm1 pacemakerd: info: crm_get_peer: Node 3232266924 has uuid 3232266924
  128. Aug 28 17:34:15 [711] pm1 pacemakerd: info: pcmk_quorum_notification: Obtaining name for new node 3232266924
  129. Aug 28 17:34:15 [717] pm1 cib: info: pcmk_cpg_membership: Member[1.0] cib.3232266865
  130. Aug 28 17:34:16 [711] pm1 pacemakerd: info: corosync_node_name: Unable to get node name for nodeid 3232266924
  131. Aug 28 17:34:17 [717] pm1 cib: info: corosync_node_name: Unable to get node name for nodeid 3232266924
  132. Aug 28 17:34:17 [717] pm1 cib: notice: get_node_name: Could not obtain a node name for corosync nodeid 3232266924
  133. Aug 28 17:34:17 [717] pm1 cib: info: crm_get_peer: Created entry 6b2762f1-ab8b-4295-99a6-e5b0f87dbc88/0x1ff5f60 for node (null)/3232266924 (2 total)
  134. Aug 28 17:34:17 [717] pm1 cib: info: crm_get_peer: Node 3232266924 has uuid 3232266924
  135. Aug 28 17:34:17 [717] pm1 cib: info: pcmk_cpg_membership: Member[1.1] cib.3232266924
  136. Aug 28 17:34:17 [717] pm1 cib: info: crm_update_peer_proc: pcmk_cpg_membership: Node (null)[3232266924] - corosync-cpg is now online
  137. Aug 28 17:34:17 [720] pm1 attrd: info: attrd_cib_connect: Connected to the CIB after 1 attempts
  138. Aug 28 17:34:17 [720] pm1 attrd: info: main: CIB connection active
  139. Aug 28 17:34:17 [720] pm1 attrd: info: pcmk_cpg_membership: Joined[0.0] attrd.3232266865
  140. Aug 28 17:34:17 [718] pm1 stonithd: notice: setup_cib: Watching for stonith topology changes
  141. Aug 28 17:34:17 [718] pm1 stonithd: info: qb_ipcs_us_publish: server name: stonith-ng
  142. Aug 28 17:34:17 [718] pm1 stonithd: info: main: Starting stonithd mainloop
  143. Aug 28 17:34:17 [718] pm1 stonithd: info: pcmk_cpg_membership: Joined[0.0] stonithd.3232266865
  144. Aug 28 17:34:17 [722] pm1 crmd: info: do_cib_control: CIB connection established
  145. Aug 28 17:34:17 [722] pm1 crmd: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
  146. Aug 28 17:34:17 [717] pm1 cib: info: write_cib_contents: Archived previous version as /var/lib/pacemaker/cib/cib-32.raw
  147. Aug 28 17:34:18 [711] pm1 pacemakerd: info: corosync_node_name: Unable to get node name for nodeid 3232266924
  148. Aug 28 17:34:18 [711] pm1 pacemakerd: notice: get_node_name: Could not obtain a node name for corosync nodeid 3232266924
  149. Aug 28 17:34:18 [711] pm1 pacemakerd: notice: crm_update_peer_state: pcmk_quorum_notification: Node (null)[3232266924] - state is now member (was (null))
  150. Aug 28 17:34:18 [711] pm1 pacemakerd: info: crm_get_peer: Node 3232266924 is now known as pm2
  151. Aug 28 17:34:18 [717] pm1 cib: info: write_cib_contents: Wrote version 0.30.0 of the CIB to disk (digest: 393ee90f5ad0b66d8421751a7ec7eaf7)
  152. Aug 28 17:34:18 [720] pm1 attrd: info: pcmk_cpg_membership: Member[0.0] attrd.3232266865
  153. Aug 28 17:34:18 [720] pm1 attrd: info: pcmk_cpg_membership: Joined[1.0] attrd.3232266924
  154. Aug 28 17:34:19 [718] pm1 stonithd: info: pcmk_cpg_membership: Member[0.0] stonithd.3232266865
  155. Aug 28 17:34:19 [718] pm1 stonithd: info: init_cib_cache_cb: Updating device list from the cib: init
  156. Aug 28 17:34:19 [718] pm1 stonithd: info: cib_devices_update: Updating devices to version 0.30.0
  157. Aug 28 17:34:19 [718] pm1 stonithd: notice: unpack_config: On loss of CCM Quorum: Ignore
  158. Aug 28 17:34:19 [722] pm1 crmd: info: corosync_node_name: Unable to get node name for nodeid 3232266865
  159. Aug 28 17:34:19 [722] pm1 crmd: notice: get_node_name: Could not obtain a node name for corosync nodeid 3232266865
  160. Aug 28 17:34:19 [722] pm1 crmd: info: crm_get_peer: Created entry 0918f5e3-6fe4-497e-b31c-9b691ca7dbcc/0x1892270 for node (null)/3232266865 (1 total)
  161. Aug 28 17:34:19 [722] pm1 crmd: info: crm_get_peer: Node 3232266865 has uuid 3232266865
  162. Aug 28 17:34:19 [722] pm1 crmd: info: crm_update_peer_proc: cluster_connect_cpg: Node (null)[3232266865] - corosync-cpg is now online
  163. Aug 28 17:34:19 [722] pm1 crmd: info: init_cs_connection_once: Connection to 'corosync': established
  164. Aug 28 17:34:19 [718] pm1 stonithd: info: pcmk_cpg_membership: Joined[1.0] stonithd.3232266924
  165. Aug 28 17:34:19 [717] pm1 cib: info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.nlS2hT (digest: /var/lib/pacemaker/cib/cib.88LM5T)
  166. Aug 28 17:34:19 [722] pm1 crmd: info: corosync_node_name: Unable to get node name for nodeid 3232266865
  167. Aug 28 17:34:19 [722] pm1 crmd: notice: get_node_name: Defaulting to uname -n for the local corosync node name
  168. Aug 28 17:34:19 [722] pm1 crmd: info: crm_get_peer: Node 3232266865 is now known as pm1
  169. Aug 28 17:34:19 [722] pm1 crmd: info: peer_update_callback: pm1 is now (null)
  170. Aug 28 17:34:22 [720] pm1 attrd: info: pcmk_cpg_membership: Member[1.0] attrd.3232266865
  171. Aug 28 17:34:23 [718] pm1 stonithd: info: pcmk_cpg_membership: Member[1.0] stonithd.3232266865
  172. Aug 28 17:34:23 [720] pm1 attrd: info: corosync_node_name: Unable to get node name for nodeid 3232266924
  173. Aug 28 17:34:23 [720] pm1 attrd: notice: get_node_name: Could not obtain a node name for corosync nodeid 3232266924
  174. Aug 28 17:34:23 [720] pm1 attrd: info: crm_get_peer: Created entry 9da41128-303a-4990-a17d-f0e612987632/0x1b1a630 for node (null)/3232266924 (2 total)
  175. Aug 28 17:34:23 [720] pm1 attrd: info: crm_get_peer: Node 3232266924 has uuid 3232266924
  176. Aug 28 17:34:23 [720] pm1 attrd: info: pcmk_cpg_membership: Member[1.1] attrd.3232266924
  177. Aug 28 17:34:23 [720] pm1 attrd: info: crm_update_peer_proc: pcmk_cpg_membership: Node (null)[3232266924] - corosync-cpg is now online
  178. Aug 28 17:34:23 [720] pm1 attrd: notice: crm_update_peer_state: attrd_peer_change_cb: Node (null)[3232266924] - state is now member (was (null))
  179. Aug 28 17:34:23 [722] pm1 crmd: notice: cluster_connect_quorum: Quorum acquired
  180. Aug 28 17:34:25 [718] pm1 stonithd: info: corosync_node_name: Unable to get node name for nodeid 3232266924
  181. Aug 28 17:34:25 [718] pm1 stonithd: notice: get_node_name: Could not obtain a node name for corosync nodeid 3232266924
  182. Aug 28 17:34:25 [718] pm1 stonithd: info: crm_get_peer: Created entry 15eed942-3a9e-4161-8013-d6778cf374a0/0x166ff40 for node (null)/3232266924 (2 total)
  183. Aug 28 17:34:25 [718] pm1 stonithd: info: crm_get_peer: Node 3232266924 has uuid 3232266924
  184. Aug 28 17:34:25 [718] pm1 stonithd: info: pcmk_cpg_membership: Member[1.1] stonithd.3232266924
  185. Aug 28 17:34:25 [718] pm1 stonithd: info: crm_update_peer_proc: pcmk_cpg_membership: Node (null)[3232266924] - corosync-cpg is now online
  186. Aug 28 17:34:25 [717] pm1 cib: info: cib_process_request: Forwarding cib_modify operation for section nodes to master (origin=local/crmd/3)
  187. Aug 28 17:34:26 [718] pm1 stonithd: info: corosync_node_name: Unable to get node name for nodeid 3232266865
  188. Aug 28 17:34:26 [718] pm1 stonithd: notice: get_node_name: Defaulting to uname -n for the local corosync node name
  189. Aug 28 17:34:26 [718] pm1 stonithd: info: crm_get_peer: Node 3232266924 is now known as pm2
  190. Aug 28 17:34:26 [722] pm1 crmd: info: do_ha_control: Connected to the cluster
  191. Aug 28 17:34:26 [722] pm1 crmd: info: lrmd_ipc_connect: Connecting to lrmd
  192. Aug 28 17:34:26 [722] pm1 crmd: info: do_lrm_control: LRM connection established
  193. Aug 28 17:34:26 [722] pm1 crmd: info: do_started: Delaying start, no membership data (0000000000100000)
  194. Aug 28 17:34:26 [722] pm1 crmd: info: pcmk_quorum_notification: Membership 988: quorum retained (2)
  195. Aug 28 17:34:26 [717] pm1 cib: info: corosync_node_name: Unable to get node name for nodeid 3232266865
  196. Aug 28 17:34:26 [717] pm1 cib: notice: get_node_name: Defaulting to uname -n for the local corosync node name
  197. Aug 28 17:34:26 [717] pm1 cib: info: crm_get_peer: Node 3232266924 is now known as pm2
  198. Aug 28 17:34:29 [717] pm1 cib: info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=pm2/crmd/3, version=0.30.0)
  199. Aug 28 17:34:30 [722] pm1 crmd: notice: crm_update_peer_state: pcmk_quorum_notification: Node pm1[3232266865] - state is now member (was (null))
  200. Aug 28 17:34:30 [722] pm1 crmd: info: peer_update_callback: pm1 is now member (was (null))
  201. Aug 28 17:34:32 [722] pm1 crmd: info: corosync_node_name: Unable to get node name for nodeid 3232266924
  202. Aug 28 17:34:32 [722] pm1 crmd: notice: get_node_name: Could not obtain a node name for corosync nodeid 3232266924
  203. Aug 28 17:34:32 [722] pm1 crmd: info: crm_get_peer: Created entry b4fa4bbe-9170-44e5-9a0b-934ce754670a/0x1994df0 for node (null)/3232266924 (2 total)
  204. Aug 28 17:34:32 [722] pm1 crmd: info: crm_get_peer: Node 3232266924 has uuid 3232266924
  205. Aug 28 17:34:32 [722] pm1 crmd: info: pcmk_quorum_notification: Obtaining name for new node 3232266924
  206. Aug 28 17:34:32 [717] pm1 cib: info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=pm1/crmd/3, version=0.30.0)
  207. Aug 28 17:34:33 [722] pm1 crmd: info: corosync_node_name: Unable to get node name for nodeid 3232266924
  208. Aug 28 17:34:33 [722] pm1 crmd: info: corosync_node_name: Unable to get node name for nodeid 3232266924
  209. Aug 28 17:34:33 [722] pm1 crmd: notice: get_node_name: Could not obtain a node name for corosync nodeid 3232266924
  210. Aug 28 17:34:33 [722] pm1 crmd: notice: crm_update_peer_state: pcmk_quorum_notification: Node (null)[3232266924] - state is now member (was (null))
  211. Aug 28 17:34:34 [722] pm1 crmd: info: corosync_node_name: Unable to get node name for nodeid 3232266865
  212. Aug 28 17:34:34 [722] pm1 crmd: notice: get_node_name: Defaulting to uname -n for the local corosync node name
  213. Aug 28 17:34:34 [722] pm1 crmd: info: do_started: Delaying start, Config not read (0000000000000040)
  214. Aug 28 17:34:34 [722] pm1 crmd: info: qb_ipcs_us_publish: server name: crmd
  215. Aug 28 17:34:34 [722] pm1 crmd: notice: do_started: The local CRM is operational
  216. Aug 28 17:34:34 [722] pm1 crmd: info: do_log: FSA: Input I_PENDING from do_started() received in state S_STARTING
  217. Aug 28 17:34:34 [722] pm1 crmd: notice: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
  218. Aug 28 17:34:35 [722] pm1 crmd: info: pcmk_cpg_membership: Joined[0.0] crmd.3232266865
  219. Aug 28 17:34:36 [722] pm1 crmd: info: pcmk_cpg_membership: Member[0.0] crmd.3232266865
  220. Aug 28 17:34:37 [722] pm1 crmd: info: corosync_node_name: Unable to get node name for nodeid 3232266924
  221. Aug 28 17:34:37 [722] pm1 crmd: notice: get_node_name: Could not obtain a node name for corosync nodeid 3232266924
  222. Aug 28 17:34:37 [722] pm1 crmd: info: pcmk_cpg_membership: Member[0.1] crmd.3232266924
  223. Aug 28 17:34:37 [722] pm1 crmd: info: crm_update_peer_proc: pcmk_cpg_membership: Node (null)[3232266924] - corosync-cpg is now online
  224. Aug 28 17:34:37 [722] pm1 crmd: info: crm_get_peer: Node 3232266924 is now known as pm2
  225. Aug 28 17:34:37 [722] pm1 crmd: info: peer_update_callback: pm2 is now member
  226. Aug 28 17:34:55 [722] pm1 crmd: info: crm_timer_popped: Election Trigger (I_DC_TIMEOUT) just popped (20000ms)
  227. Aug 28 17:34:55 [722] pm1 crmd: warning: do_log: FSA: Input I_DC_TIMEOUT from crm_timer_popped() received in state S_PENDING
  228. Aug 28 17:34:55 [722] pm1 crmd: info: do_state_transition: State transition S_PENDING -> S_ELECTION [ input=I_DC_TIMEOUT cause=C_TIMER_POPPED origin=crm_timer_popped ]
  229. Aug 28 17:35:00 [722] pm1 crmd: info: election_count_vote: Election 1 (owner: 3232266924) lost: vote from pm2 (Uptime)
  230. Aug 28 17:35:00 [722] pm1 crmd: notice: do_state_transition: State transition S_ELECTION -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_election_count_vote ]
  231. Aug 28 17:35:00 [722] pm1 crmd: info: do_dc_release: DC role released
  232. Aug 28 17:35:00 [722] pm1 crmd: info: do_te_control: Transitioner is now inactive
  233. Aug 28 17:35:00 [722] pm1 crmd: info: do_log: FSA: Input I_RELEASE_SUCCESS from do_dc_release() received in state S_PENDING
  234. Aug 28 17:35:04 [722] pm1 crmd: info: do_log: FSA: Input I_PENDING from do_election_count_vote() received in state S_PENDING
  235. Aug 28 17:35:09 [722] pm1 crmd: info: election_count_vote: Election 2 (owner: 3232266924) lost: vote from pm2 (Uptime)
  236. Aug 28 17:35:09 [722] pm1 crmd: info: do_log: FSA: Input I_PENDING from do_election_count_vote() received in state S_PENDING
  237. Aug 28 17:35:18 [717] pm1 cib: info: cib_process_request: Completed cib_modify operation for section cib: OK (rc=0, origin=pm2/crmd/7, version=0.30.0)
  238. Aug 28 17:35:20 [717] pm1 cib: info: cib_process_request: Completed cib_modify operation for section crm_config: OK (rc=0, origin=pm2/crmd/9, version=0.30.0)
  239. Aug 28 17:35:20 [722] pm1 crmd: info: update_dc: Set DC to pm2 (3.0.9)
  240. Aug 28 17:35:24 [717] pm1 cib: info: cib_process_request: Completed cib_modify operation for section crm_config: OK (rc=0, origin=pm2/crmd/11, version=0.30.0)
  241. Aug 28 17:35:25 [722] pm1 crmd: info: election_count_vote: Election 3 (owner: 3232266924) lost: vote from pm2 (Uptime)
  242. Aug 28 17:35:25 [722] pm1 crmd: info: update_dc: Unset DC. Was pm2
  243. Aug 28 17:35:25 [722] pm1 crmd: info: do_log: FSA: Input I_PENDING from do_election_count_vote() received in state S_PENDING
  244. Aug 28 17:35:27 [722] pm1 crmd: info: update_dc: Set DC to pm2 (3.0.9)
  245. Aug 28 17:35:30 [717] pm1 cib: info: cib_process_ping: Reporting our current digest to pm2: f6a8c0b8c443ce3e955e1e5091934c10 for 0.30.0 (0x1dff030 0)
  246. Aug 28 17:35:34 [722] pm1 crmd: notice: throttle_handle_load: High CPU load detected: 2.470000
  247. Aug 28 17:35:34 [722] pm1 crmd: info: throttle_send_command: Updated throttle state to 0100
  248. Aug 28 17:35:49 [717] pm1 cib: info: cib_process_replace: Digest matched on replace from pm2: f6a8c0b8c443ce3e955e1e5091934c10
  249. Aug 28 17:35:49 [717] pm1 cib: info: cib_process_replace: Replaced 0.30.0 with 0.30.0 from pm2
  250. Aug 28 17:35:49 [717] pm1 cib: info: cib_process_request: Completed cib_replace operation for section 'all': OK (rc=0, origin=pm2/crmd/15, version=0.30.0)
  251. Aug 28 17:35:50 [717] pm1 cib: info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=pm2/crmd/16, version=0.30.0)
  252. Aug 28 17:35:50 [717] pm1 cib: info: write_cib_contents: Archived previous version as /var/lib/pacemaker/cib/cib-33.raw
  253. Aug 28 17:35:50 [717] pm1 cib: info: write_cib_contents: Wrote version 0.30.0 of the CIB to disk (digest: 27a256069bf792d95e73e9885d60f458)
  254. Aug 28 17:35:50 [717] pm1 cib: info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.sPsA3Z (digest: /var/lib/pacemaker/cib/cib.BRwcH1)
  255. Aug 28 17:35:53 [717] pm1 cib: info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=pm2/crmd/17, version=0.30.0)
  256. Aug 28 17:35:57 [722] pm1 crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='pm1']/transient_attributes
  257. Aug 28 17:35:57 [722] pm1 crmd: info: update_attrd_helper: Connecting to attrd... 5 retries remaining
  258. Aug 28 17:35:57 [717] pm1 cib: info: cib_process_request: Forwarding cib_delete operation for section //node_state[@uname='pm1']/transient_attributes to master (origin=local/crmd/9)
  259. Aug 28 17:35:58 [720] pm1 attrd: info: attrd_client_message: Starting an election to determine the writer
  260. Aug 28 17:35:59 [717] pm1 cib: info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='pm2']/transient_attributes: OK (rc=0, origin=pm2/crmd/18, version=0.30.0)
  261. Aug 28 17:36:00 [722] pm1 crmd: info: do_log: FSA: Input I_NOT_DC from do_cl_join_finalize_respond() received in state S_PENDING
  262. Aug 28 17:36:00 [722] pm1 crmd: notice: do_state_transition: State transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
  263. Aug 28 17:36:00 [717] pm1 cib: info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='pm1']/transient_attributes: OK (rc=0, origin=pm1/crmd/9, version=0.30.0)
  264. Aug 28 17:36:01 [720] pm1 attrd: info: corosync_node_name: Unable to get node name for nodeid 3232266865
  265. Aug 28 17:36:01 [720] pm1 attrd: notice: get_node_name: Defaulting to uname -n for the local corosync node name
  266. Aug 28 17:36:01 [720] pm1 attrd: info: attrd_client_message: Updating all attributes
  267. Aug 28 17:36:05 [722] pm1 crmd: notice: throttle_handle_load: High CPU load detected: 2.460000
  268. Aug 28 17:36:05 [717] pm1 cib: info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='pm2']/lrm: OK (rc=0, origin=pm2/crmd/19, version=0.30.0)
  269. Aug 28 17:36:06 [717] pm1 cib: info: cib_perform_op: Diff: --- 0.30.0 2
  270. Aug 28 17:36:06 [717] pm1 cib: info: cib_perform_op: Diff: +++ 0.30.1 (null)
  271. Aug 28 17:36:06 [717] pm1 cib: info: cib_perform_op: + /cib: @num_updates=1
  272. Aug 28 17:36:06 [717] pm1 cib: info: cib_perform_op: ++ /cib/status: <node_state id="3232266924" uname="pm2" in_ccm="true" crmd="online" crm-debug-origin="do_lrm_query_internal" join="member" expected="member"/>
  273. Aug 28 17:36:06 [717] pm1 cib: info: cib_perform_op: ++ <lrm id="3232266924">
  274. Aug 28 17:36:06 [717] pm1 cib: info: cib_perform_op: ++ <lrm_resources/>
  275. Aug 28 17:36:06 [717] pm1 cib: info: cib_perform_op: ++ </lrm>
  276. Aug 28 17:36:06 [717] pm1 cib: info: cib_perform_op: ++ </node_state>
  277. Aug 28 17:36:06 [717] pm1 cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=pm2/crmd/20, version=0.30.1)
  278. Aug 28 17:36:11 [717] pm1 cib: info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='pm1']/lrm: OK (rc=0, origin=pm2/crmd/21, version=0.30.1)
  279. Aug 28 17:36:12 [717] pm1 cib: info: cib_perform_op: Diff: --- 0.30.1 2
  280. Aug 28 17:36:12 [717] pm1 cib: info: cib_perform_op: Diff: +++ 0.30.2 (null)
  281. Aug 28 17:36:12 [717] pm1 cib: info: cib_perform_op: + /cib: @num_updates=2
  282. Aug 28 17:36:12 [717] pm1 cib: info: cib_perform_op: ++ /cib/status: <node_state id="3232266865" uname="pm1" in_ccm="true" crmd="online" crm-debug-origin="do_lrm_query_internal" join="member" expected="member"/>
  283. Aug 28 17:36:12 [717] pm1 cib: info: cib_perform_op: ++ <lrm id="3232266865">
  284. Aug 28 17:36:12 [717] pm1 cib: info: cib_perform_op: ++ <lrm_resources/>
  285. Aug 28 17:36:12 [717] pm1 cib: info: cib_perform_op: ++ </lrm>
  286. Aug 28 17:36:12 [717] pm1 cib: info: cib_perform_op: ++ </node_state>
  287. Aug 28 17:36:12 [717] pm1 cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=pm2/crmd/22, version=0.30.2)
  288. Aug 28 17:36:13 [717] pm1 cib: info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=pm2/crmd/25, version=0.30.2)
  289. Aug 28 17:36:14 [717] pm1 cib: info: cib_perform_op: Diff: --- 0.30.2 2
  290. Aug 28 17:36:15 [717] pm1 cib: info: cib_perform_op: Diff: +++ 0.30.3 (null)
  291. Aug 28 17:36:15 [717] pm1 cib: info: cib_perform_op: + /cib: @num_updates=3
  292. Aug 28 17:36:15 [717] pm1 cib: info: cib_perform_op: + /cib/status/node_state[@id='3232266924']: @crm-debug-origin=do_state_transition
  293. Aug 28 17:36:15 [717] pm1 cib: info: cib_perform_op: + /cib/status/node_state[@id='3232266865']: @crm-debug-origin=do_state_transition
  294. Aug 28 17:36:15 [717] pm1 cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=pm2/crmd/26, version=0.30.3)
  295. Aug 28 17:36:16 [717] pm1 cib: info: cib_perform_op: Diff: --- 0.30.3 2
  296. Aug 28 17:36:16 [720] pm1 attrd: info: attrd_peer_update: Setting shutdown[pm1]: (null) -> 0 from pm1
  297. Aug 28 17:36:16 [720] pm1 attrd: info: crm_get_peer: Node 3232266924 is now known as pm2
  298. Aug 28 17:36:16 [717] pm1 cib: info: cib_perform_op: Diff: +++ 0.30.4 (null)
  299. Aug 28 17:36:17 [717] pm1 cib: info: cib_perform_op: + /cib: @num_updates=4, @dc-uuid=3232266924
  300. Aug 28 17:36:17 [717] pm1 cib: info: cib_process_request: Completed cib_modify operation for section cib: OK (rc=0, origin=pm2/crmd/27, version=0.30.4)
  301. Aug 28 17:36:20 [720] pm1 attrd: info: election_count_vote: Election 1 (owner: 3232266924) pass: vote from pm2 (Uptime)
  302. Aug 28 17:36:23 [717] pm1 cib: info: cib_perform_op: Diff: --- 0.30.4 2
  303. Aug 28 17:36:23 [717] pm1 cib: info: cib_perform_op: Diff: +++ 0.30.5 (null)
  304. Aug 28 17:36:23 [717] pm1 cib: info: cib_perform_op: + /cib: @num_updates=5
  305. Aug 28 17:36:23 [717] pm1 cib: info: cib_perform_op: + /cib/status/node_state[@id='3232266924']: @crm-debug-origin=do_update_resource
  306. Aug 28 17:36:23 [717] pm1 cib: info: cib_perform_op: ++ /cib/status/node_state[@id='3232266924']/lrm[@id='3232266924']/lrm_resources: <lrm_resource id="HA_IP" type="IPaddr2" class="ocf" provider="heartbeat"/>
  307. Aug 28 17:36:23 [717] pm1 cib: info: cib_perform_op: ++ <lrm_rsc_op id="HA_IP_last_0" operation_key="HA_IP_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.9" transition-key="6:0:7:2d8d46e0-7b9b-4943-99ac-c0c1f8880ebb" transition-magic="0:7;6:0:7:2d8d46e0-7b9b-4943-99ac-c0c1f8880ebb" call-id="5" rc-code="7" op-status="0" interval="0" last-run="1440776180" last-rc-change="144
  308. Aug 28 17:36:23 [717] pm1 cib: info: cib_perform_op: ++ </lrm_resource>
  309. Aug 28 17:36:23 [717] pm1 cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=pm2/crmd/30, version=0.30.5)
  310. Aug 28 17:36:28 [719] pm1 pacemaker_remoted: info: process_lrmd_get_rsc_info: Resource 'HA_IP' not found (0 active resources)
  311. Aug 28 17:36:28 [719] pm1 pacemaker_remoted: info: process_lrmd_rsc_register: Added 'HA_IP' to the rsc list (1 active resources)
  312. Aug 28 17:36:28 [722] pm1 crmd: info: do_lrm_rsc_op: Performing key=4:0:7:2d8d46e0-7b9b-4943-99ac-c0c1f8880ebb op=HA_IP_monitor_0
  313. Aug 28 17:36:28 [720] pm1 attrd: info: attrd_peer_update: Setting shutdown[pm2]: (null) -> 0 from pm2
  314. Aug 28 17:36:30 [722] pm1 crmd: info: services_os_action_execute: Managed IPaddr2_meta-data_0 process 769 exited with rc=0
  315. Aug 28 17:36:30 [722] pm1 crmd: notice: process_lrm_event: Operation HA_IP_monitor_0: not running (node=pm1, call=5, rc=7, cib-update=10, confirmed=true)
  316. Aug 28 17:36:30 [717] pm1 cib: info: cib_process_request: Forwarding cib_modify operation for section status to master (origin=local/crmd/10)
  317. Aug 28 17:36:32 [717] pm1 cib: info: cib_process_ping: Reporting our current digest to pm2: 98d6feffbd4f04c93783e89edff6f0f5 for 0.30.5 (0x1e032b0 0)
  318. Aug 28 17:36:35 [722] pm1 crmd: notice: throttle_handle_load: High CPU load detected: 3.180000
  319. Aug 28 17:36:38 [717] pm1 cib: info: cib_perform_op: Diff: --- 0.30.5 2
  320. Aug 28 17:36:39 [717] pm1 cib: info: cib_perform_op: Diff: +++ 0.30.6 (null)
  321. Aug 28 17:36:39 [717] pm1 cib: info: cib_perform_op: + /cib: @num_updates=6
  322. Aug 28 17:36:39 [717] pm1 cib: info: cib_perform_op: + /cib/status/node_state[@id='3232266865']: @crm-debug-origin=do_update_resource
  323. Aug 28 17:36:39 [717] pm1 cib: info: cib_perform_op: ++ /cib/status/node_state[@id='3232266865']/lrm[@id='3232266865']/lrm_resources: <lrm_resource id="HA_IP" type="IPaddr2" class="ocf" provider="heartbeat"/>
  324. Aug 28 17:36:39 [717] pm1 cib: info: cib_perform_op: ++ <lrm_rsc_op id="HA_IP_last_0" operation_key="HA_IP_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.9" transition-key="4:0:7:2d8d46e0-7b9b-4943-99ac-c0c1f8880ebb" transition-magic="0:7;4:0:7:2d8d46e0-7b9b-4943-99ac-c0c1f8880ebb" call-id="5" rc-code="7" op-status="0" interval="0" last-run="1440776188" last-rc-change="144
  325. Aug 28 17:36:39 [717] pm1 cib: info: cib_perform_op: ++ </lrm_resource>
  326. Aug 28 17:36:39 [717] pm1 cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=pm1/crmd/10, version=0.30.6)
  327. Aug 28 17:36:42 [717] pm1 cib: info: cib_perform_op: Diff: --- 0.30.6 2
  328. Aug 28 17:36:42 [717] pm1 cib: info: cib_perform_op: Diff: +++ 0.30.7 (null)
  329. Aug 28 17:36:42 [717] pm1 cib: info: cib_perform_op: + /cib: @num_updates=7
  330. Aug 28 17:36:42 [717] pm1 cib: info: cib_perform_op: ++ /cib/status/node_state[@id='3232266924']: <transient_attributes id="3232266924"/>
  331. Aug 28 17:36:42 [717] pm1 cib: info: cib_perform_op: ++ <instance_attributes id="status-3232266924">
  332. Aug 28 17:36:42 [717] pm1 cib: info: cib_perform_op: ++ <nvpair id="status-3232266924-shutdown" name="shutdown" value="0"/>
  333. Aug 28 17:36:42 [717] pm1 cib: info: cib_perform_op: ++ </instance_attributes>
  334. Aug 28 17:36:42 [717] pm1 cib: info: cib_perform_op: ++ </transient_attributes>
  335. Aug 28 17:36:42 [717] pm1 cib: info: cib_perform_op: ++ /cib/status/node_state[@id='3232266865']: <transient_attributes id="3232266865"/>
  336. Aug 28 17:36:42 [717] pm1 cib: info: cib_perform_op: ++ <instance_attributes id="status-3232266865">
  337. Aug 28 17:36:42 [717] pm1 cib: info: cib_perform_op: ++ <nvpair id="status-3232266865-shutdown" name="shutdown" value="0"/>
  338. Aug 28 17:36:42 [717] pm1 cib: info: cib_perform_op: ++ </instance_attributes>
  339. Aug 28 17:36:42 [717] pm1 cib: info: cib_perform_op: ++ </transient_attributes>
  340. Aug 28 17:36:42 [717] pm1 cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=pm2/attrd/2, version=0.30.7)
  341. Aug 28 17:36:43 [722] pm1 crmd: info: do_lrm_rsc_op: Performing key=7:0:0:2d8d46e0-7b9b-4943-99ac-c0c1f8880ebb op=HA_IP_start_0
  342. Aug 28 17:36:43 [719] pm1 pacemaker_remoted: info: log_execute: executing - rsc:HA_IP action:start call_id:6
  343. IPaddr2(HA_IP)[773]: 2015/08/28_17:36:43 INFO: Adding inet address 192.168.122.3/24 with broadcast address 192.168.122.255 to device eth0 (with label eth0:ha_ip)
  344. IPaddr2(HA_IP)[773]: 2015/08/28_17:36:43 INFO: Bringing device eth0 up
  345. IPaddr2(HA_IP)[773]: 2015/08/28_17:36:43 INFO: /usr/libexec/heartbeat/send_arp -i 200 -r 5 -p /var/run/resource-agents/send_arp-192.168.122.3 eth0 192.168.122.3 auto not_used not_used
  346. Aug 28 17:36:43 [719] pm1 pacemaker_remoted: info: log_finished: finished - rsc:HA_IP action:start call_id:6 pid:773 exit-code:0 exec-time:27ms queue-time:0ms
  347. Aug 28 17:36:43 [722] pm1 crmd: info: services_os_action_execute: Managed IPaddr2_meta-data_0 process 839 exited with rc=0
  348. Aug 28 17:36:43 [722] pm1 crmd: notice: process_lrm_event: Operation HA_IP_start_0: ok (node=pm1, call=6, rc=0, cib-update=11, confirmed=true)
  349. Aug 28 17:36:43 [717] pm1 cib: info: cib_perform_op: Diff: --- 0.30.7 2
  350. Aug 28 17:36:43 [717] pm1 cib: info: cib_perform_op: Diff: +++ 0.30.8 (null)
  351. Aug 28 17:36:43 [717] pm1 cib: info: cib_perform_op: + /cib: @num_updates=8
  352. Aug 28 17:36:43 [717] pm1 cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=pm2/attrd/3, version=0.30.8)
  353. Aug 28 17:36:43 [717] pm1 cib: info: cib_process_request: Forwarding cib_modify operation for section status to master (origin=local/crmd/11)
  354. Aug 28 17:36:46 [717] pm1 cib: info: cib_perform_op: Diff: --- 0.30.8 2
  355. Aug 28 17:36:46 [717] pm1 cib: info: cib_perform_op: Diff: +++ 0.30.9 (null)
  356. Aug 28 17:36:46 [717] pm1 cib: info: cib_perform_op: + /cib: @num_updates=9
  357. Aug 28 17:36:46 [717] pm1 cib: info: cib_perform_op: + /cib/status/node_state[@id='3232266865']/lrm[@id='3232266865']/lrm_resources/lrm_resource[@id='HA_IP']/lrm_rsc_op[@id='HA_IP_last_0']: @operation_key=HA_IP_start_0, @operation=start, @transition-key=7:0:0:2d8d46e0-7b9b-4943-99ac-c0c1f8880ebb, @transition-magic=0:0;7:0:0:2d8d46e0-7b9b-4943-99ac-c0c1f8880ebb, @call-id=6, @rc-code=0, @last-run=1440776203, @last-rc-change=1440776203, @exec-time=27
  358. Aug 28 17:36:46 [717] pm1 cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=pm1/crmd/11, version=0.30.9)
  359. Aug 28 17:36:46 [720] pm1 attrd: info: election_complete: Election election-attrd complete
  360. Aug 28 17:36:48 [717] pm1 cib: info: cib_process_request: Forwarding cib_modify operation for section status to master (origin=local/attrd/2)
  361. Aug 28 17:36:48 [720] pm1 attrd: info: write_attribute: Sent update 2 with 2 changes for shutdown, id=<n/a>, set=(null)
  362. Aug 28 17:36:50 [717] pm1 cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=pm1/attrd/2, version=0.30.9)
  363. Aug 28 17:36:51 [717] pm1 cib: info: cib_process_ping: Reporting our current digest to pm2: ff3830932ed971563e140ab072127ffa for 0.30.9 (0x1e032b0 0)
  364. Aug 28 17:36:52 [722] pm1 crmd: info: do_lrm_rsc_op: Performing key=7:1:0:2d8d46e0-7b9b-4943-99ac-c0c1f8880ebb op=HA_IP_monitor_10000
  365. Aug 28 17:36:52 [720] pm1 attrd: info: write_attribute: Sent update 3 with 2 changes for terminate, id=<n/a>, set=(null)
  366. Aug 28 17:36:52 [717] pm1 cib: info: cib_process_request: Forwarding cib_modify operation for section status to master (origin=local/attrd/3)
  367. Aug 28 17:36:52 [720] pm1 attrd: info: attrd_cib_callback: Update 2 for shutdown: OK (0)
  368. Aug 28 17:36:52 [720] pm1 attrd: info: attrd_cib_callback: Update 2 for shutdown[pm1]=0: OK (0)
  369. Aug 28 17:36:52 [720] pm1 attrd: info: attrd_cib_callback: Update 2 for shutdown[pm2]=0: OK (0)
  370. Aug 28 17:36:53 [722] pm1 crmd: notice: process_lrm_event: Operation HA_IP_monitor_10000: ok (node=pm1, call=7, rc=0, cib-update=12, confirmed=false)
  371. Aug 28 17:36:53 [717] pm1 cib: info: cib_process_request: Forwarding cib_modify operation for section status to master (origin=local/crmd/12)
  372. Aug 28 17:36:55 [717] pm1 cib: info: cib_perform_op: Diff: --- 0.30.9 2
  373. Aug 28 17:36:55 [720] pm1 attrd: info: attrd_peer_update: Setting probe_complete[pm2]: (null) -> true from pm2
  374. Aug 28 17:36:55 [717] pm1 cib: info: cib_perform_op: Diff: +++ 0.30.10 (null)
  375. Aug 28 17:36:55 [717] pm1 cib: info: cib_perform_op: + /cib: @num_updates=10
  376. Aug 28 17:36:56 [717] pm1 cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=pm1/attrd/3, version=0.30.10)
  377. Aug 28 17:36:56 [720] pm1 attrd: info: write_attribute: Sent update 4 with 1 changes for probe_complete, id=<n/a>, set=(null)
  378. Aug 28 17:36:56 [720] pm1 attrd: info: attrd_cib_callback: Update 3 for terminate: OK (0)
  379. Aug 28 17:36:56 [717] pm1 cib: info: cib_perform_op: Diff: --- 0.30.10 2
  380. Aug 28 17:36:56 [720] pm1 attrd: info: attrd_cib_callback: Update 3 for terminate[pm1]=(null): OK (0)
  381. Aug 28 17:36:56 [720] pm1 attrd: info: attrd_cib_callback: Update 3 for terminate[pm2]=(null): OK (0)
  382. Aug 28 17:36:57 [717] pm1 cib: info: cib_perform_op: Diff: +++ 0.30.11 (null)
  383. Aug 28 17:36:57 [717] pm1 cib: info: cib_perform_op: + /cib: @num_updates=11
  384. Aug 28 17:36:57 [717] pm1 cib: info: cib_perform_op: ++ /cib/status/node_state[@id='3232266865']/lrm[@id='3232266865']/lrm_resources/lrm_resource[@id='HA_IP']: <lrm_rsc_op id="HA_IP_monitor_10000" operation_key="HA_IP_monitor_10000" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.9" transition-key="7:1:0:2d8d46e0-7b9b-4943-99ac-c0c1f8880ebb" transition-magic="0:0;7:1:0:2d8d46e0-7b9b-4943-99ac-c0c1f8880ebb" call-id="7" rc-code="0" op-status="0" interval="10000" la
  385. Aug 28 17:36:57 [717] pm1 cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=pm1/crmd/12, version=0.30.11)
  386. Aug 28 17:36:57 [717] pm1 cib: info: cib_process_request: Forwarding cib_modify operation for section status to master (origin=local/attrd/4)
  387. Aug 28 17:37:01 [717] pm1 cib: info: cib_perform_op: Diff: --- 0.30.11 2
  388. Aug 28 17:37:01 [717] pm1 cib: info: cib_perform_op: Diff: +++ 0.30.12 (null)
  389. Aug 28 17:37:01 [717] pm1 cib: info: cib_perform_op: + /cib: @num_updates=12
  390. Aug 28 17:37:01 [717] pm1 cib: info: cib_perform_op: ++ /cib/status/node_state[@id='3232266924']/transient_attributes[@id='3232266924']/instance_attributes[@id='status-3232266924']: <nvpair id="status-3232266924-probe_complete" name="probe_complete" value="true"/>
  391. Aug 28 17:37:01 [717] pm1 cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=pm1/attrd/4, version=0.30.12)
  392. Aug 28 17:37:03 [720] pm1 attrd: info: attrd_cib_callback: Update 4 for probe_complete: OK (0)
  393. Aug 28 17:37:03 [720] pm1 attrd: info: attrd_cib_callback: Update 4 for probe_complete[pm2]=true: OK (0)
  394. Aug 28 17:37:05 [722] pm1 crmd: notice: throttle_handle_load: High CPU load detected: 3.620000
  395. Aug 28 17:37:06 [717] pm1 cib: info: cib_process_ping: Reporting our current digest to pm2: d41b0b83de884625269e5e9cfe5d98e5 for 0.30.12 (0x1e032b0 0)
  396. Aug 28 17:37:07 [720] pm1 attrd: info: attrd_peer_update: Setting probe_complete[pm1]: (null) -> true from pm1
  397. Aug 28 17:37:09 [717] pm1 cib: info: cib_process_request: Forwarding cib_modify operation for section status to master (origin=local/attrd/5)
  398. Aug 28 17:37:09 [720] pm1 attrd: info: write_attribute: Sent update 5 with 2 changes for probe_complete, id=<n/a>, set=(null)
  399. Aug 28 17:37:11 [717] pm1 cib: info: cib_perform_op: Diff: --- 0.30.12 2
  400. Aug 28 17:37:11 [717] pm1 cib: info: cib_perform_op: Diff: +++ 0.30.13 (null)
  401. Aug 28 17:37:11 [717] pm1 cib: info: cib_perform_op: + /cib: @num_updates=13
  402. Aug 28 17:37:11 [717] pm1 cib: info: cib_perform_op: ++ /cib/status/node_state[@id='3232266865']/transient_attributes[@id='3232266865']/instance_attributes[@id='status-3232266865']: <nvpair id="status-3232266865-probe_complete" name="probe_complete" value="true"/>
  403. Aug 28 17:37:11 [717] pm1 cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=pm1/attrd/5, version=0.30.13)
  404. Aug 28 17:37:12 [720] pm1 attrd: info: attrd_cib_callback: Update 5 for probe_complete: OK (0)
  405. Aug 28 17:37:12 [720] pm1 attrd: info: attrd_cib_callback: Update 5 for probe_complete[pm1]=true: OK (0)
  406. Aug 28 17:37:12 [720] pm1 attrd: info: attrd_cib_callback: Update 5 for probe_complete[pm2]=true: OK (0)
  407. Aug 28 17:37:17 [717] pm1 cib: info: cib_process_ping: Reporting our current digest to pm2: 713c504f7b8c5b42878c256ccc2ee047 for 0.30.13 (0x1e032b0 0)
  408. Aug 28 17:37:35 [722] pm1 crmd: notice: throttle_handle_load: High CPU load detected: 2.690000
  409. Aug 28 17:38:05 [722] pm1 crmd: notice: throttle_handle_load: High CPU load detected: 1.630000
  410. Aug 28 17:38:35 [722] pm1 crmd: info: throttle_handle_load: Moderate CPU load detected: 0.990000
  411. Aug 28 17:38:35 [722] pm1 crmd: info: throttle_send_command: Updated throttle state to 0010
  412. Aug 28 17:39:05 [722] pm1 crmd: info: throttle_send_command: Updated throttle state to 0001
  413. Aug 28 17:39:35 [722] pm1 crmd: info: throttle_send_command: Updated throttle state to 0000
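
Note: the corosync.conf that produced this startup is not part of the paste, but the messages above pin down its main settings: UDPU (unicast) transport, two nodelist members at 192.168.122.113 and 192.168.122.172, the corosync_votequorum provider with expected_votes: 2, no crypto or hash, and logging to /var/log/cluster/corosync.log. The repeated "Unable to get node name for nodeid" warnings, followed by "Defaulting to uname -n", also suggest the nodelist entries carry no name: attribute, so Pacemaker falls back to the hostnames pm1/pm2. A minimal sketch consistent with those messages (an assumption, not the actual file) might look like:

    # Sketch only: values inferred from the log above, not copied from the real corosync.conf
    totem {
        version: 2
        transport: udpu
        crypto_cipher: none
        crypto_hash: none
    }

    nodelist {
        node {
            ring0_addr: 192.168.122.113
        }
        node {
            ring0_addr: 192.168.122.172
        }
    }

    quorum {
        provider: corosync_votequorum
        expected_votes: 2
        # or two_node: 1, which implies expected_votes: 2; the log does not distinguish
    }

    logging {
        to_logfile: yes
        logfile: /var/log/cluster/corosync.log
    }

Similarly, the HA_IP resource seen in the CIB diffs is an ocf:heartbeat:IPaddr2 primitive that adds 192.168.122.3/24 on eth0 and is monitored every 10 seconds (interval="10000"), and stonithd reports "On loss of CCM Quorum: Ignore", i.e. no-quorum-policy=ignore. Assuming pcs is the management tool (the paste does not say, and nic may well be auto-detected rather than set), a roughly equivalent definition would be:

    # Hypothetical reconstruction of the cluster configuration, based only on the log
    pcs property set no-quorum-policy=ignore
    pcs resource create HA_IP ocf:heartbeat:IPaddr2 ip=192.168.122.3 cidr_netmask=24 nic=eth0 op monitor interval=10s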