amartin

DRBD Won't Promote

Apr 9th, 2012
Apr 9 20:21:29 node1 heartbeat: [1890]: info: No log entry found in ha.cf -- use logd
Apr 9 20:21:29 node1 heartbeat: [1890]: info: Enabling logging daemon
Apr 9 20:21:29 node1 heartbeat: [1890]: info: logfile and debug file are those specified in logd config file (default /etc/logd.cf)
Apr 9 20:21:29 node1 heartbeat: [1890]: info: **************************
Apr 9 20:21:29 node1 heartbeat: [1890]: info: Configuration validated. Starting heartbeat 3.0.5
Apr 9 20:21:29 node1 heartbeat: [2345]: info: heartbeat: version 3.0.5
Apr 9 20:21:30 node1 heartbeat: [2345]: info: Heartbeat generation: 1333386521
Apr 9 20:21:30 node1 heartbeat: [2345]: info: glib: UDP multicast heartbeat started for group 239.0.0.43 port 694 interface br0 (ttl=1 loop=0)
Apr 9 20:21:30 node1 heartbeat: [2345]: info: glib: UDP Broadcast heartbeat started on port 694 (694) interface br1
Apr 9 20:21:30 node1 heartbeat: [2345]: info: glib: UDP Broadcast heartbeat closed on port 694 interface br1 - Status: 1
Apr 9 20:21:30 node1 heartbeat: [2345]: info: Local status now set to: 'up'
Apr 9 20:21:30 node1 ntpd[2264]: bind() fd 27, family AF_INET6, port 123, scope 7, addr fe80::acf9:a3ff:fe76:4998, mcast=0 flags=0x11 fails: Cannot assign requested address
Apr 9 20:21:30 node1 ntpd[2264]: unable to create socket on virbr0 (12) for fe80::acf9:a3ff:fe76:4998#123
Apr 9 20:21:30 node1 ntpd[2264]: failed to initialize interface for address fe80::acf9:a3ff:fe76:4998
Apr 9 20:21:30 node1 heartbeat: [2345]: info: Link node2:br0 up.
Apr 9 20:21:30 node1 heartbeat: [2345]: info: Link quorumnode:br0 up.
Apr 9 20:21:31 node1 heartbeat: [2345]: info: Comm_now_up(): updating status to active
Apr 9 20:21:31 node1 heartbeat: [2345]: info: Local status now set to: 'active'
Apr 9 20:21:31 node1 heartbeat: [2345]: info: Starting child client "/usr/lib/heartbeat/ccm" (112,122)
Apr 9 20:21:31 node1 heartbeat: [2345]: info: Starting child client "/usr/lib/heartbeat/cib" (112,122)
Apr 9 20:21:31 node1 heartbeat: [2345]: info: Starting child client "/usr/lib/heartbeat/lrmd -r" (0,0)
Apr 9 20:21:31 node1 heartbeat: [2345]: info: Starting child client "/usr/lib/heartbeat/stonithd" (0,0)
Apr 9 20:21:31 node1 heartbeat: [2345]: info: Starting child client "/usr/lib/heartbeat/attrd" (112,122)
Apr 9 20:21:31 node1 heartbeat: [2345]: info: Starting child client "/usr/lib/heartbeat/crmd" (112,122)
Apr 9 20:21:31 node1 heartbeat: [2345]: info: Starting child client "/usr/lib/heartbeat/dopd" (112,122)
Apr 9 20:21:31 node1 heartbeat: [2345]: info: Status update for node quorumnode: status active
Apr 9 20:21:31 node1 heartbeat: [2345]: info: Link node2:br1 up.
Apr 9 20:21:31 node1 heartbeat: [2345]: info: Status update for node node2: status active
Apr 9 20:21:31 node1 heartbeat: [2345]: info: Link node1:br1 up.
Apr 9 20:21:31 node1 heartbeat: [2404]: info: Starting "/usr/lib/heartbeat/lrmd -r" as uid 0 gid 0 (pid 2404)
Apr 9 20:21:31 node1 heartbeat: [2407]: info: Starting "/usr/lib/heartbeat/crmd" as uid 112 gid 122 (pid 2407)
Apr 9 20:21:31 node1 heartbeat: [2402]: info: Starting "/usr/lib/heartbeat/ccm" as uid 112 gid 122 (pid 2402)
Apr 9 20:21:31 node1 heartbeat: [2405]: info: Starting "/usr/lib/heartbeat/stonithd" as uid 0 gid 0 (pid 2405)
Apr 9 20:21:31 node1 heartbeat: [2408]: info: Starting "/usr/lib/heartbeat/dopd" as uid 112 gid 122 (pid 2408)
Apr 9 20:21:31 node1 heartbeat: [2403]: info: Starting "/usr/lib/heartbeat/cib" as uid 112 gid 122 (pid 2403)
Apr 9 20:21:31 node1 heartbeat: [2406]: info: Starting "/usr/lib/heartbeat/attrd" as uid 112 gid 122 (pid 2406)
Apr 9 20:21:31 node1 /usr/lib/heartbeat/dopd: [2408]: debug: PID=2408
Apr 9 20:21:31 node1 /usr/lib/heartbeat/dopd: [2408]: debug: Signing in with heartbeat
Apr 9 20:21:31 node1 /usr/lib/heartbeat/dopd: [2408]: debug: [We are node1]
Apr 9 20:21:31 node1 /usr/lib/heartbeat/dopd: [2408]: debug: Setting message filter mode
Apr 9 20:21:31 node1 /usr/lib/heartbeat/dopd: [2408]: debug: Setting message signal
Apr 9 20:21:31 node1 /usr/lib/heartbeat/dopd: [2408]: debug: Waiting for messages...
Apr 9 20:21:31 node1 ccm: [2402]: info: Hostname: node1
Apr 9 20:21:31 node1 lrmd: [2404]: info: enabling coredumps
Apr 9 20:21:31 node1 lrmd: [2404]: WARN: Core dumps could be lost if multiple dumps occur.
Apr 9 20:21:31 node1 lrmd: [2404]: WARN: Consider setting non-default value in /proc/sys/kernel/core_pattern (or equivalent) for maximum supportability
Apr 9 20:21:31 node1 lrmd: [2404]: WARN: Consider setting /proc/sys/kernel/core_uses_pid (or equivalent) to 1 for maximum supportability
Apr 9 20:21:31 node1 lrmd: [2404]: info: Started.
Apr 9 20:21:31 node1 cib: [2403]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/hacluster
Apr 9 20:21:31 node1 attrd: [2406]: info: Invoked: /usr/lib/heartbeat/attrd
Apr 9 20:21:31 node1 attrd: [2406]: notice: crm_cluster_connect: Connecting to cluster infrastructure: heartbeat
Apr 9 20:21:31 node1 stonith-ng: [2405]: info: Invoked: /usr/lib/heartbeat/stonithd
Apr 9 20:21:31 node1 heartbeat: [2345]: info: the send queue length from heartbeat to client ccm is set to 1024
Apr 9 20:21:31 node1 stonith-ng: [2405]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/root
Apr 9 20:21:31 node1 stonith-ng: [2405]: info: get_cluster_type: Assuming a 'heartbeat' based cluster
Apr 9 20:21:31 node1 stonith-ng: [2405]: notice: crm_cluster_connect: Connecting to cluster infrastructure: heartbeat
Apr 9 20:21:31 node1 attrd: [2406]: notice: main: Starting mainloop...
Apr 9 20:21:31 node1 crmd: [2407]: info: Invoked: /usr/lib/heartbeat/crmd
Apr 9 20:21:31 node1 crmd: [2407]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/hacluster
Apr 9 20:21:31 node1 crmd: [2407]: info: main: CRM Hg Version: 9971ebba4494012a93c03b40a2c58ec0eb60f50c
Apr 9 20:21:31 node1 crmd: [2407]: info: crmd_init: Starting crmd
Apr 9 20:21:31 node1 cib: [2403]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
Apr 9 20:21:31 node1 cib: [2403]: info: validate_with_relaxng: Creating RNG parser context
Apr 9 20:21:31 node1 cib: [2403]: info: startCib: CIB Initialization completed successfully
Apr 9 20:21:31 node1 cib: [2403]: info: get_cluster_type: Assuming a 'heartbeat' based cluster
Apr 9 20:21:31 node1 cib: [2403]: notice: crm_cluster_connect: Connecting to cluster infrastructure: heartbeat
Apr 9 20:21:31 node1 heartbeat: [2345]: info: the send queue length from heartbeat to client attrd is set to 1024
Apr 9 20:21:31 node1 stonith-ng: [2405]: info: register_heartbeat_conn: Hostname: node1
Apr 9 20:21:31 node1 stonith-ng: [2405]: info: register_heartbeat_conn: UUID: 1ab0690c-5aa0-4d9c-ae4e-b662e0ca54e5
Apr 9 20:21:31 node1 stonith-ng: [2405]: info: main: Starting stonith-ng mainloop
Apr 9 20:21:31 node1 heartbeat: [2345]: info: the send queue length from heartbeat to client stonith-ng is set to 1024
Apr 9 20:21:31 node1 cib: [2403]: info: register_heartbeat_conn: Hostname: node1
Apr 9 20:21:31 node1 cib: [2403]: info: register_heartbeat_conn: UUID: 1ab0690c-5aa0-4d9c-ae4e-b662e0ca54e5
Apr 9 20:21:31 node1 heartbeat: [2345]: info: the send queue length from heartbeat to client cib is set to 1024
Apr 9 20:21:31 node1 cib: [2403]: info: ccm_connect: Registering with CCM...
Apr 9 20:21:31 node1 cib: [2403]: info: cib_init: Requesting the list of configured nodes
Apr 9 20:21:32 node1 cib: [2403]: info: cib_init: Starting cib mainloop
Apr 9 20:21:32 node1 cib: [2403]: info: cib_client_status_callback: Status update: Client node1/cib now has status [join]
Apr 9 20:21:32 node1 cib: [2403]: info: crm_new_peer: Node 0 is now known as node1
Apr 9 20:21:32 node1 cib: [2403]: info: crm_update_peer_proc: node1.cib is now online
Apr 9 20:21:32 node1 cib: [2403]: WARN: cib_peer_callback: Discarding cib_apply_diff message (5688) from node2: not in our membership
Apr 9 20:21:32 node1 crmd: [2407]: info: do_cib_control: CIB connection established
Apr 9 20:21:32 node1 crmd: [2407]: info: get_cluster_type: Assuming a 'heartbeat' based cluster
Apr 9 20:21:32 node1 crmd: [2407]: notice: crm_cluster_connect: Connecting to cluster infrastructure: heartbeat
Apr 9 20:21:32 node1 cib: [2403]: info: cib_client_status_callback: Status update: Client node1/cib now has status [online]
Apr 9 20:21:32 node1 cib: [2403]: info: cib_client_status_callback: Status update: Client node2/cib now has status [online]
Apr 9 20:21:32 node1 cib: [2403]: info: crm_new_peer: Node 0 is now known as node2
Apr 9 20:21:32 node1 crmd: [2407]: info: register_heartbeat_conn: Hostname: node1
Apr 9 20:21:32 node1 cib: [2403]: info: crm_update_peer_proc: node2.cib is now online
Apr 9 20:21:32 node1 crmd: [2407]: info: register_heartbeat_conn: UUID: 1ab0690c-5aa0-4d9c-ae4e-b662e0ca54e5
Apr 9 20:21:32 node1 cib: [2403]: info: cib_client_status_callback: Status update: Client quorumnode/cib now has status [online]
Apr 9 20:21:33 node1 heartbeat: [2345]: info: the send queue length from heartbeat to client crmd is set to 1024
Apr 9 20:21:33 node1 cib: [2403]: info: crm_new_peer: Node 0 is now known as quorumnode
Apr 9 20:21:33 node1 cib: [2403]: info: crm_update_peer_proc: quorumnode.cib is now online
Apr 9 20:21:33 node1 cib: [2403]: info: cib_process_diff: Diff 5.116.26 -> 5.116.27 not applied to 5.116.0: current "num_updates" is less than required
Apr 9 20:21:33 node1 cib: [2403]: info: cib_server_process_diff: Requesting re-sync from peer
Apr 9 20:21:33 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.116.27 -> 5.116.28 (sync in progress)
Apr 9 20:21:33 node1 crmd: [2407]: info: do_ha_control: Connected to the cluster
Apr 9 20:21:33 node1 crmd: [2407]: info: do_ccm_control: CCM connection established... waiting for first callback
Apr 9 20:21:33 node1 crmd: [2407]: info: do_started: Delaying start, no membership data (0000000000100000)
Apr 9 20:21:33 node1 crmd: [2407]: info: crmd_init: Starting crmd's mainloop
Apr 9 20:21:33 node1 crmd: [2407]: info: config_query_callback: Shutdown escalation occurs after: 1200000ms
Apr 9 20:21:33 node1 crmd: [2407]: info: config_query_callback: Checking for expired actions every 900000ms
Apr 9 20:21:33 node1 crmd: [2407]: notice: crmd_client_status_callback: Status update: Client node1/crmd now has status [online] (DC=false)
Apr 9 20:21:33 node1 crmd: [2407]: info: crm_new_peer: Node 0 is now known as node1
Apr 9 20:21:33 node1 crmd: [2407]: info: ais_status_callback: status: node1 is now unknown
Apr 9 20:21:33 node1 crmd: [2407]: info: crm_update_peer_proc: node1.crmd is now online
Apr 9 20:21:33 node1 crmd: [2407]: notice: crmd_peer_update: Status update: Client node1/crmd now has status [online] (DC=<null>)
Apr 9 20:21:33 node1 crmd: [2407]: notice: crmd_client_status_callback: Status update: Client node1/crmd now has status [online] (DC=false)
Apr 9 20:21:34 node1 crmd: [2407]: notice: crmd_client_status_callback: Status update: Client node2/crmd now has status [online] (DC=false)
Apr 9 20:21:34 node1 crmd: [2407]: info: crm_new_peer: Node 0 is now known as node2
Apr 9 20:21:34 node1 crmd: [2407]: info: ais_status_callback: status: node2 is now unknown
Apr 9 20:21:34 node1 crmd: [2407]: info: crm_update_peer_proc: node2.crmd is now online
Apr 9 20:21:34 node1 crmd: [2407]: notice: crmd_peer_update: Status update: Client node2/crmd now has status [online] (DC=<null>)
Apr 9 20:21:34 node1 crmd: [2407]: notice: crmd_client_status_callback: Status update: Client quorumnode/crmd now has status [offline] (DC=false)
Apr 9 20:21:34 node1 crmd: [2407]: info: crm_new_peer: Node 0 is now known as quorumnode
Apr 9 20:21:34 node1 crmd: [2407]: info: ais_status_callback: status: quorumnode is now unknown
Apr 9 20:21:34 node1 crmd: [2407]: info: do_started: Delaying start, no membership data (0000000000100000)
Apr 9 20:21:34 node1 cib: [2403]: info: cib_replace_notify: Replaced: -1.-1.-1 -> 5.116.28 from node2
Apr 9 20:21:35 node1 crmd: [2407]: info: mem_handle_event: Got an event OC_EV_MS_NEW_MEMBERSHIP from ccm
Apr 9 20:21:35 node1 crmd: [2407]: info: mem_handle_event: instance=19, nodes=3, new=3, lost=0, n_idx=0, new_idx=0, old_idx=6
Apr 9 20:21:35 node1 crmd: [2407]: info: crmd_ccm_msg_callback: Quorum (re)attained after event=NEW MEMBERSHIP (id=19)
Apr 9 20:21:35 node1 crmd: [2407]: info: ccm_event_detail: NEW MEMBERSHIP: trans=19, nodes=3, new=3, lost=0 n_idx=0, new_idx=0, old_idx=6
Apr 9 20:21:35 node1 crmd: [2407]: info: ccm_event_detail: #011CURRENT: node2 [nodeid=2, born=1]
Apr 9 20:21:35 node1 crmd: [2407]: info: ccm_event_detail: #011CURRENT: quorumnode [nodeid=0, born=17]
Apr 9 20:21:35 node1 crmd: [2407]: info: ccm_event_detail: #011CURRENT: node1 [nodeid=1, born=19]
Apr 9 20:21:35 node1 crmd: [2407]: info: ccm_event_detail: #011NEW: node2 [nodeid=2, born=1]
Apr 9 20:21:35 node1 crmd: [2407]: info: ccm_event_detail: #011NEW: quorumnode [nodeid=0, born=17]
Apr 9 20:21:35 node1 crmd: [2407]: info: ccm_event_detail: #011NEW: node1 [nodeid=1, born=19]
Apr 9 20:21:35 node1 crmd: [2407]: info: crm_get_peer: Node node2 now has id: 2
Apr 9 20:21:35 node1 crmd: [2407]: info: ais_status_callback: status: node2 is now member (was unknown)
Apr 9 20:21:35 node1 crmd: [2407]: info: crm_update_peer: Node node2: id=2 state=member (new) addr=(null) votes=-1 born=1 seen=19 proc=00000000000000000000000000000200
Apr 9 20:21:35 node1 crmd: [2407]: info: crm_update_peer_proc: node2.ais is now online
Apr 9 20:21:35 node1 crmd: [2407]: info: ais_status_callback: status: quorumnode is now member (was unknown)
Apr 9 20:21:35 node1 crmd: [2407]: info: crm_update_peer: Node quorumnode: id=0 state=member (new) addr=(null) votes=-1 born=17 seen=19 proc=00000000000000000000000000000000
Apr 9 20:21:35 node1 crmd: [2407]: info: crm_update_peer_proc: quorumnode.ais is now online
Apr 9 20:21:35 node1 crmd: [2407]: info: crm_update_peer_proc: quorumnode.crmd is now online
Apr 9 20:21:35 node1 crmd: [2407]: notice: crmd_peer_update: Status update: Client quorumnode/crmd now has status [online] (DC=<null>)
Apr 9 20:21:35 node1 crmd: [2407]: info: crm_get_peer: Node node1 now has id: 1
Apr 9 20:21:35 node1 crmd: [2407]: info: ais_status_callback: status: node1 is now member (was unknown)
Apr 9 20:21:35 node1 crmd: [2407]: info: crm_update_peer: Node node1: id=1 state=member (new) addr=(null) votes=-1 born=19 seen=19 proc=00000000000000000000000000000200
Apr 9 20:21:35 node1 crmd: [2407]: info: crm_update_peer_proc: node1.ais is now online
Apr 9 20:21:35 node1 crmd: [2407]: info: do_started: The local CRM is operational
Apr 9 20:21:35 node1 crmd: [2407]: info: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
Apr 9 20:21:35 node1 cib: [2403]: info: mem_handle_event: Got an event OC_EV_MS_NEW_MEMBERSHIP from ccm
Apr 9 20:21:35 node1 cib: [2403]: info: mem_handle_event: instance=19, nodes=3, new=3, lost=0, n_idx=0, new_idx=0, old_idx=6
Apr 9 20:21:35 node1 cib: [2403]: info: cib_ccm_msg_callback: Processing CCM event=NEW MEMBERSHIP (id=19)
Apr 9 20:21:35 node1 cib: [2403]: info: crm_get_peer: Node node2 now has id: 2
Apr 9 20:21:35 node1 cib: [2403]: info: crm_update_peer: Node node2: id=2 state=member (new) addr=(null) votes=-1 born=1 seen=19 proc=00000000000000000000000000000100
Apr 9 20:21:35 node1 cib: [2403]: info: crm_update_peer_proc: node2.ais is now online
Apr 9 20:21:35 node1 cib: [2403]: info: crm_update_peer_proc: node2.crmd is now online
Apr 9 20:21:35 node1 cib: [2403]: info: crm_update_peer: Node quorumnode: id=0 state=member (new) addr=(null) votes=-1 born=17 seen=19 proc=00000000000000000000000000000100
Apr 9 20:21:35 node1 cib: [2403]: info: crm_update_peer_proc: quorumnode.ais is now online
Apr 9 20:21:35 node1 cib: [2403]: info: crm_update_peer_proc: quorumnode.crmd is now online
Apr 9 20:21:35 node1 cib: [2403]: info: crm_get_peer: Node node1 now has id: 1
Apr 9 20:21:35 node1 cib: [2403]: info: crm_update_peer: Node node1: id=1 state=member (new) addr=(null) votes=-1 born=19 seen=19 proc=00000000000000000000000000000100
Apr 9 20:21:35 node1 cib: [2403]: info: crm_update_peer_proc: node1.ais is now online
Apr 9 20:21:35 node1 cib: [2403]: info: crm_update_peer_proc: node1.crmd is now online
Apr 9 20:21:36 node1 crmd: [2407]: info: te_connect_stonith: Attempting connection to fencing daemon...
Apr 9 20:21:37 node1 crmd: [2407]: info: te_connect_stonith: Connected
Apr 9 20:21:37 node1 crmd: [2407]: info: update_dc: Set DC to node2 (3.0.5)
Apr 9 20:21:38 node1 ntpd[2264]: synchronized to 10.52.0.33, stratum 3
Apr 9 20:21:38 node1 ntpd[2264]: kernel time sync status change 2001
Apr 9 20:22:52 node1 crmd: [2407]: info: update_attrd: Connecting to attrd...
Apr 9 20:22:52 node1 crmd: [2407]: info: do_state_transition: State transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
Apr 9 20:22:52 node1 attrd: [2406]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
Apr 9 20:22:53 node1 crmd: [2407]: info: erase_xpath_callback: Deletion of "//node_state[@uname='node1']/transient_attributes": ok (rc=0)
Apr 9 20:22:54 node1 crmd: [2407]: info: do_lrm_rsc_op: Performing key=13:233:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_sysadmin_notify:1_monitor_0 )
Apr 9 20:22:54 node1 lrmd: [2404]: info: rsc:p_sysadmin_notify:1 probe[2] (pid 2863)
Apr 9 20:22:54 node1 crmd: [2407]: info: do_lrm_rsc_op: Performing key=14:233:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_ping:1_monitor_0 )
Apr 9 20:22:54 node1 lrmd: [2404]: info: rsc:p_ping:1 probe[3] (pid 2864)
Apr 9 20:22:54 node1 crmd: [2407]: info: do_lrm_rsc_op: Performing key=15:233:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_vmstore:0_monitor_0 )
Apr 9 20:22:54 node1 lrmd: [2404]: info: rsc:p_drbd_vmstore:0 probe[4] (pid 2865)
Apr 9 20:22:54 node1 lrmd: [2404]: notice: lrmd_rsc_new(): No lrm_rprovider field in message
Apr 9 20:22:54 node1 crmd: [2407]: info: do_lrm_rsc_op: Performing key=16:233:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=stonithnode1_monitor_0 )
Apr 9 20:22:54 node1 lrmd: [2404]: info: rsc:stonithnode1 probe[5] (pid 2866)
Apr 9 20:22:54 node1 lrmd: [2404]: notice: lrmd_rsc_new(): No lrm_rprovider field in message
Apr 9 20:22:54 node1 crmd: [2407]: info: do_lrm_rsc_op: Performing key=17:233:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=stonithnode2_monitor_0 )
Apr 9 20:22:54 node1 lrmd: [2404]: info: rsc:stonithnode2 probe[6] (pid 2867)
Apr 9 20:22:54 node1 crmd: [2407]: info: do_lrm_rsc_op: Performing key=18:233:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_mount1:0_monitor_0 )
Apr 9 20:22:54 node1 lrmd: [2404]: info: rsc:p_drbd_mount1:0 probe[7] (pid 2868)
Apr 9 20:22:54 node1 crmd: [2407]: info: do_lrm_rsc_op: Performing key=19:233:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_mount2:1_monitor_0 )
Apr 9 20:22:54 node1 lrmd: [2404]: notice: lrmd_rsc_new(): No lrm_rprovider field in message
Apr 9 20:22:54 node1 crmd: [2407]: info: do_lrm_rsc_op: Performing key=20:233:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_libvirt-bin_monitor_0 )
Apr 9 20:22:54 node1 crmd: [2407]: info: do_lrm_rsc_op: Performing key=21:233:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_fs_vmstore_monitor_0 )
Apr 9 20:22:54 node1 crmd: [2407]: info: do_lrm_rsc_op: Performing key=22:233:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_vm_monitor_0 )
Apr 9 20:22:54 node1 stonith-ng: [2405]: notice: stonith_device_action: Device stonithnode2 not found
Apr 9 20:22:54 node1 stonith-ng: [2405]: info: stonith_command: Processed st_execute from lrmd: rc=-12
Apr 9 20:22:54 node1 stonith-ng: [2405]: notice: stonith_device_action: Device stonithnode1 not found
Apr 9 20:22:54 node1 stonith-ng: [2405]: info: stonith_command: Processed st_execute from lrmd: rc=-12
Apr 9 20:22:54 node1 lrmd: [2404]: info: operation monitor[6] on stonithnode2 for client 2407: pid 2867 exited with return code 7
Apr 9 20:22:54 node1 lrmd: [2404]: info: operation monitor[5] on stonithnode1 for client 2407: pid 2866 exited with return code 7
Apr 9 20:22:54 node1 crmd: [2407]: info: process_lrm_event: LRM operation stonithnode2_monitor_0 (call=6, rc=7, cib-update=7, confirmed=true) not running
Apr 9 20:22:54 node1 crmd: [2407]: info: process_lrm_event: LRM operation stonithnode1_monitor_0 (call=5, rc=7, cib-update=8, confirmed=true) not running
Apr 9 20:22:54 node1 lrmd: [2404]: info: operation monitor[2] on p_sysadmin_notify:1 for client 2407: pid 2863 exited with return code 7
Apr 9 20:22:54 node1 crmd: [2407]: info: process_lrm_event: LRM operation p_sysadmin_notify:1_monitor_0 (call=2, rc=7, cib-update=9, confirmed=true) not running
Apr 9 20:22:54 node1 lrmd: [2404]: info: operation monitor[3] on p_ping:1 for client 2407: pid 2864 exited with return code 7
Apr 9 20:22:54 node1 crmd: [2407]: info: process_lrm_event: LRM operation p_ping:1_monitor_0 (call=3, rc=7, cib-update=10, confirmed=true) not running
Apr 9 20:22:54 node1 crm_attribute: [2935]: info: Invoked: crm_attribute -N node1 -n master-p_drbd_vmstore:0 -l reboot -D
Apr 9 20:22:54 node1 crm_attribute: [2936]: info: Invoked: crm_attribute -N node1 -n master-p_drbd_mount1:0 -l reboot -D
Apr 9 20:22:54 node1 lrmd: [2404]: info: operation monitor[4] on p_drbd_vmstore:0 for client 2407: pid 2865 exited with return code 7
Apr 9 20:22:54 node1 lrmd: [2404]: info: operation monitor[7] on p_drbd_mount1:0 for client 2407: pid 2868 exited with return code 7
Apr 9 20:22:54 node1 crmd: [2407]: info: process_lrm_event: LRM operation p_drbd_vmstore:0_monitor_0 (call=4, rc=7, cib-update=11, confirmed=true) not running
Apr 9 20:22:54 node1 crmd: [2407]: info: process_lrm_event: LRM operation p_drbd_mount1:0_monitor_0 (call=7, rc=7, cib-update=12, confirmed=true) not running
Apr 9 20:22:55 node1 lrmd: [2404]: info: rsc:p_drbd_mount2:1 probe[8] (pid 2937)
Apr 9 20:22:55 node1 lrmd: [2404]: info: rsc:p_libvirt-bin probe[9] (pid 2938)
Apr 9 20:22:55 node1 lrmd: [2404]: info: rsc:p_fs_vmstore probe[10] (pid 2939)
Apr 9 20:22:55 node1 lrmd: [2404]: info: rsc:p_vm probe[11] (pid 2940)
Apr 9 20:22:55 node1 lrmd: [2404]: info: operation monitor[9] on p_libvirt-bin for client 2407: pid 2938 exited with return code 0
Apr 9 20:22:55 node1 crmd: [2407]: info: process_lrm_event: LRM operation p_libvirt-bin_monitor_0 (call=9, rc=0, cib-update=13, confirmed=true) ok
Apr 9 20:22:55 node1 Filesystem[2939]: [2956]: WARNING: Couldn't find device [/dev/drbd0]. Expected /dev/??? to exist
Apr 9 20:22:55 node1 lrmd: [2404]: info: operation monitor[10] on p_fs_vmstore for client 2407: pid 2939 exited with return code 7
Apr 9 20:22:55 node1 crmd: [2407]: info: process_lrm_event: LRM operation p_fs_vmstore_monitor_0 (call=10, rc=7, cib-update=14, confirmed=true) not running
Apr 9 20:22:55 node1 VirtualDomain[2940]: [3013]: INFO: Configuration file /mnt/storage/vmstore/config/vm.xml not readable during probe.
Apr 9 20:22:55 node1 lrmd: [2404]: info: operation monitor[11] on p_vm for client 2407: pid 2940 exited with return code 7
Apr 9 20:22:55 node1 crmd: [2407]: info: process_lrm_event: LRM operation p_vm_monitor_0 (call=11, rc=7, cib-update=15, confirmed=true) not running
Apr 9 20:22:55 node1 crm_attribute: [3014]: info: Invoked: crm_attribute -N node1 -n master-p_drbd_mount2:1 -l reboot -D
Apr 9 20:22:55 node1 lrmd: [2404]: info: operation monitor[8] on p_drbd_mount2:1 for client 2407: pid 2937 exited with return code 7
Apr 9 20:22:55 node1 crmd: [2407]: info: process_lrm_event: LRM operation p_drbd_mount2:1_monitor_0 (call=8, rc=7, cib-update=16, confirmed=true) not running
Apr 9 20:22:56 node1 attrd: [2406]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Apr 9 20:22:56 node1 attrd: [2406]: notice: attrd_perform_update: Sent update 12: probe_complete=true
Apr 9 20:22:56 node1 attrd: [2406]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Apr 9 20:22:56 node1 attrd: [2406]: notice: attrd_perform_update: Sent update 15: probe_complete=true
Apr 9 20:26:30 node1 ntpd[2264]: Listening on interface #13 virbr0, fe80::acf9:a3ff:fe76:4998#123 Enabled
Apr 9 20:26:30 node1 ntpd[2264]: Listening on interface #14 tap0, fe80::7060:ceff:fe9b:7c46#123 Enabled
Apr 9 20:26:30 node1 ntpd[2264]: new interface(s) found: waking up resolver
Apr 9 20:30:09 node1 cib: [2403]: info: apply_xml_diff: Digest mis-match: expected f3816d6269e8cd580705d41fe50810d0, calculated ae0d5a39a0170ac9076f9bc4cd9deaa3
Apr 9 20:30:09 node1 cib: [2403]: notice: cib_process_diff: Diff 5.116.64 -> 5.116.65 not applied to 5.116.64: Failed application of an update diff
Apr 9 20:30:09 node1 cib: [2403]: info: cib_server_process_diff: Requesting re-sync from peer
Apr 9 20:30:09 node1 crmd: [2407]: info: delete_resource: Removing resource p_drbd_vmstore:0 for 3951_mount2_resource (internal) on node1
Apr 9 20:30:09 node1 crmd: [2407]: info: notify_deleted: Notifying 3951_mount2_resource on node1 that p_drbd_vmstore:0 was deleted
Apr 9 20:30:09 node1 crmd: [2407]: WARN: decode_transition_key: Bad UUID (crm-resource-3951) in sscanf result (3) for 0:0:crm-resource-3951
Apr 9 20:30:09 node1 crmd: [2407]: info: send_direct_ack: ACK'ing resource op p_drbd_vmstore:0_delete_60000 from 0:0:crm-resource-3951: lrm_invoke-lrmd-1334021409-5
Apr 9 20:30:09 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.116.64 -> 5.116.65 (sync in progress)
Apr 9 20:30:09 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.116.65 -> 5.117.1 (sync in progress)
Apr 9 20:30:09 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.117.1 -> 5.117.2 (sync in progress)
Apr 9 20:30:09 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.117.1 -> 5.117.2 (sync in progress)
Apr 9 20:30:09 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.117.2 -> 5.117.3 (sync in progress)
Apr 9 20:30:09 node1 cib: [2403]: info: cib_process_diff: Diff 5.117.3 -> 5.117.4 not applied to 5.116.64: current "epoch" is less than required
Apr 9 20:30:09 node1 cib: [2403]: info: cib_server_process_diff: Requesting re-sync from peer
Apr 9 20:30:09 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.117.4 -> 5.117.5 (sync in progress)
Apr 9 20:30:09 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.117.5 -> 5.117.6 (sync in progress)
Apr 9 20:30:09 node1 crmd: [2407]: notice: do_lrm_invoke: Not creating resource for a delete event: (null)
Apr 9 20:30:09 node1 crmd: [2407]: info: notify_deleted: Notifying 3951_mount2_resource on node1 that p_drbd_vmstore:1 was deleted
Apr 9 20:30:09 node1 crmd: [2407]: WARN: decode_transition_key: Bad UUID (crm-resource-3951) in sscanf result (3) for 0:0:crm-resource-3951
Apr 9 20:30:09 node1 crmd: [2407]: info: send_direct_ack: ACK'ing resource op p_drbd_vmstore:1_delete_60000 from 0:0:crm-resource-3951: lrm_invoke-lrmd-1334021409-6
Apr 9 20:30:09 node1 crmd: [2407]: WARN: decode_transition_key: Bad UUID (crm-resource-3951) in sscanf result (3) for 0:0:crm-resource-3951
Apr 9 20:30:09 node1 crmd: [2407]: info: send_direct_ack: ACK'ing resource op p_drbd_vmstore:1_delete_60000 from 0:0:crm-resource-3951: lrm_invoke-lrmd-1334021409-7
Apr 9 20:30:10 node1 crmd: [2407]: info: do_lrm_rsc_op: Performing key=10:238:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_vmstore:0_monitor_0 )
Apr 9 20:30:10 node1 lrmd: [2404]: info: rsc:p_drbd_vmstore:0 probe[12] (pid 3952)
Apr 9 20:30:10 node1 cib: [2403]: info: cib_replace_notify: Replaced: -1.-1.-1 -> 5.117.6 from node2
Apr 9 20:30:10 node1 attrd: [2406]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Apr 9 20:30:10 node1 crmd: [2407]: info: config_query_callback: Shutdown escalation occurs after: 1200000ms
Apr 9 20:30:10 node1 crmd: [2407]: info: config_query_callback: Checking for expired actions every 900000ms
Apr 9 20:30:10 node1 crmd: [2407]: info: config_query_callback: Shutdown escalation occurs after: 1200000ms
Apr 9 20:30:10 node1 crmd: [2407]: info: config_query_callback: Checking for expired actions every 900000ms
Apr 9 20:30:10 node1 crm_attribute: [3981]: info: Invoked: crm_attribute -N node1 -n master-p_drbd_vmstore:0 -l reboot -D
Apr 9 20:30:10 node1 lrmd: [2404]: info: operation monitor[12] on p_drbd_vmstore:0 for client 2407: pid 3952 exited with return code 7
Apr 9 20:30:10 node1 crmd: [2407]: info: process_lrm_event: LRM operation p_drbd_vmstore:0_monitor_0 (call=12, rc=7, cib-update=26, confirmed=true) not running
Apr 9 20:31:32 node1 cib: [2403]: info: cib_stats: Processed 139 operations (143.00us average, 0% utilization) in the last 10min
Apr 9 20:31:58 node1 cib: [2403]: info: apply_xml_diff: Digest mis-match: expected 6eb7faf8513de7c395f08b4a5f5ec9b8, calculated b3c232c95807c8764d9ff63756b991ff
Apr 9 20:31:58 node1 cib: [2403]: notice: cib_process_diff: Diff 5.118.6 -> 5.118.7 not applied to 5.118.6: Failed application of an update diff
Apr 9 20:31:58 node1 cib: [2403]: info: cib_server_process_diff: Requesting re-sync from peer
Apr 9 20:31:58 node1 crmd: [2407]: info: delete_resource: Removing resource p_drbd_vmstore:0 for 4363_mount2_resource (internal) on node1
Apr 9 20:31:58 node1 crmd: [2407]: info: notify_deleted: Notifying 4363_mount2_resource on node1 that p_drbd_vmstore:0 was deleted
Apr 9 20:31:58 node1 crmd: [2407]: WARN: decode_transition_key: Bad UUID (crm-resource-4363) in sscanf result (3) for 0:0:crm-resource-4363
Apr 9 20:31:58 node1 crmd: [2407]: info: send_direct_ack: ACK'ing resource op p_drbd_vmstore:0_delete_60000 from 0:0:crm-resource-4363: lrm_invoke-lrmd-1334021518-9
Apr 9 20:31:58 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.118.6 -> 5.118.7 (sync in progress)
Apr 9 20:31:58 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.118.7 -> 5.119.1 (sync in progress)
Apr 9 20:31:58 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.119.1 -> 5.119.2 (sync in progress)
Apr 9 20:31:58 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.119.1 -> 5.119.2 (sync in progress)
Apr 9 20:31:58 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.119.2 -> 5.119.3 (sync in progress)
Apr 9 20:31:58 node1 crmd: [2407]: notice: do_lrm_invoke: Not creating resource for a delete event: (null)
Apr 9 20:31:58 node1 crmd: [2407]: info: notify_deleted: Notifying 4363_mount2_resource on node1 that p_drbd_vmstore:1 was deleted
Apr 9 20:31:58 node1 crmd: [2407]: WARN: decode_transition_key: Bad UUID (crm-resource-4363) in sscanf result (3) for 0:0:crm-resource-4363
Apr 9 20:31:58 node1 crmd: [2407]: info: send_direct_ack: ACK'ing resource op p_drbd_vmstore:1_delete_60000 from 0:0:crm-resource-4363: lrm_invoke-lrmd-1334021518-10
Apr 9 20:31:58 node1 crmd: [2407]: WARN: decode_transition_key: Bad UUID (crm-resource-4363) in sscanf result (3) for 0:0:crm-resource-4363
Apr 9 20:31:58 node1 crmd: [2407]: info: send_direct_ack: ACK'ing resource op p_drbd_vmstore:1_delete_60000 from 0:0:crm-resource-4363: lrm_invoke-lrmd-1334021518-11
Apr 9 20:31:59 node1 cib: [2403]: info: cib_process_diff: Diff 5.119.3 -> 5.119.4 not applied to 5.118.6: current "epoch" is less than required
Apr 9 20:31:59 node1 cib: [2403]: info: cib_server_process_diff: Requesting re-sync from peer
Apr 9 20:32:00 node1 crmd: [2407]: info: do_lrm_rsc_op: Performing key=10:242:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_vmstore:0_monitor_0 )
Apr 9 20:32:00 node1 lrmd: [2404]: info: rsc:p_drbd_vmstore:0 probe[13] (pid 4369)
Apr 9 20:32:00 node1 cib: [2403]: info: cib_replace_notify: Replaced: -1.-1.-1 -> 5.119.4 from node2
Apr 9 20:32:00 node1 attrd: [2406]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Apr 9 20:32:00 node1 crmd: [2407]: info: config_query_callback: Shutdown escalation occurs after: 1200000ms
Apr 9 20:32:00 node1 crmd: [2407]: info: config_query_callback: Checking for expired actions every 900000ms
Apr 9 20:32:00 node1 attrd: [2406]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_drbd_vmstore:0 (10000)
Apr 9 20:32:00 node1 attrd: [2406]: notice: attrd_perform_update: Sent update 22: master-p_drbd_vmstore:0=10000
Apr 9 20:32:00 node1 lrmd: [2404]: info: operation monitor[13] on p_drbd_vmstore:0 for client 2407: pid 4369 exited with return code 0
Apr 9 20:32:00 node1 crmd: [2407]: info: process_lrm_event: LRM operation p_drbd_vmstore:0_monitor_0 (call=13, rc=0, cib-update=35, confirmed=true) ok
Apr 9 20:32:01 node1 crmd: [2407]: info: do_lrm_rsc_op: Performing key=40:243:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_vmstore:0_monitor_20000 )
Apr 9 20:32:01 node1 lrmd: [2404]: info: rsc:p_drbd_vmstore:0 monitor[14] (pid 4397)
Apr 9 20:32:01 node1 lrmd: [2404]: info: operation monitor[14] on p_drbd_vmstore:0 for client 2407: pid 4397 exited with return code 0
Apr 9 20:32:01 node1 crmd: [2407]: info: process_lrm_event: LRM operation p_drbd_vmstore:0_monitor_20000 (call=14, rc=0, cib-update=36, confirmed=false) ok
Apr 9 20:32:08 node1 cib: [2403]: info: apply_xml_diff: Digest mis-match: expected c7a260969809bd977ed6b4e461507213, calculated e3bf86196343f83f13b3f81c7bd413c5
Apr 9 20:32:08 node1 cib: [2403]: notice: cib_process_diff: Diff 5.119.15 -> 5.119.16 not applied to 5.119.15: Failed application of an update diff
Apr 9 20:32:08 node1 cib: [2403]: info: cib_server_process_diff: Requesting re-sync from peer
Apr 9 20:32:08 node1 crmd: [2407]: info: delete_resource: Removing resource p_drbd_vmstore:0 for 4463_mount2_resource (internal) on node1
Apr 9 20:32:08 node1 crmd: [2407]: info: lrm_remove_deleted_op: Removing op p_drbd_vmstore:0_monitor_20000:14 for deleted resource p_drbd_vmstore:0
Apr 9 20:32:08 node1 crmd: [2407]: info: notify_deleted: Notifying 4463_mount2_resource on node1 that p_drbd_vmstore:0 was deleted
Apr 9 20:32:08 node1 crmd: [2407]: WARN: decode_transition_key: Bad UUID (crm-resource-4463) in sscanf result (3) for 0:0:crm-resource-4463
Apr 9 20:32:08 node1 crmd: [2407]: info: send_direct_ack: ACK'ing resource op p_drbd_vmstore:0_delete_60000 from 0:0:crm-resource-4463: lrm_invoke-lrmd-1334021528-13
Apr 9 20:32:08 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.119.15 -> 5.119.16 (sync in progress)
Apr 9 20:32:08 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.119.16 -> 5.120.1 (sync in progress)
Apr 9 20:32:08 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.120.1 -> 5.120.2 (sync in progress)
Apr 9 20:32:08 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.120.1 -> 5.120.2 (sync in progress)
Apr 9 20:32:08 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.120.2 -> 5.120.3 (sync in progress)
Apr 9 20:32:08 node1 cib: [2403]: info: cib_process_diff: Diff 5.120.3 -> 5.120.4 not applied to 5.119.15: current "epoch" is less than required
Apr 9 20:32:08 node1 cib: [2403]: info: cib_server_process_diff: Requesting re-sync from peer
Apr 9 20:32:08 node1 crmd: [2407]: notice: do_lrm_invoke: Not creating resource for a delete event: (null)
Apr 9 20:32:08 node1 crmd: [2407]: info: notify_deleted: Notifying 4463_mount2_resource on node1 that p_drbd_vmstore:1 was deleted
Apr 9 20:32:08 node1 crmd: [2407]: WARN: decode_transition_key: Bad UUID (crm-resource-4463) in sscanf result (3) for 0:0:crm-resource-4463
Apr 9 20:32:08 node1 crmd: [2407]: info: send_direct_ack: ACK'ing resource op p_drbd_vmstore:1_delete_60000 from 0:0:crm-resource-4463: lrm_invoke-lrmd-1334021528-14
Apr 9 20:32:08 node1 crmd: [2407]: WARN: decode_transition_key: Bad UUID (crm-resource-4463) in sscanf result (3) for 0:0:crm-resource-4463
Apr 9 20:32:08 node1 crmd: [2407]: info: send_direct_ack: ACK'ing resource op p_drbd_vmstore:1_delete_60000 from 0:0:crm-resource-4463: lrm_invoke-lrmd-1334021528-15
Apr 9 20:32:08 node1 crmd: [2407]: info: send_direct_ack: ACK'ing resource op p_drbd_vmstore:0_monitor_20000 from 1:245:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334021528-16
Apr 9 20:32:08 node1 crmd: [2407]: info: do_lrm_rsc_op: Performing key=159:245:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_vmstore:0_notify_0 )
Apr 9 20:32:08 node1 lrmd: [2404]: info: rsc:p_drbd_vmstore:0 notify[15] (pid 4464)
Apr 9 20:32:08 node1 lrmd: [2404]: info: RA output: (p_drbd_vmstore:0:notify:stdout) drbdsetup 0 syncer --set-defaults --create-device --rate=34M
Apr 9 20:32:08 node1 lrmd: [2404]: info: operation notify[15] on p_drbd_vmstore:0 for client 2407: pid 4464 exited with return code 0
Apr 9 20:32:08 node1 crmd: [2407]: info: send_direct_ack: ACK'ing resource op p_drbd_vmstore:0_notify_0 from 159:245:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28: lrm_invoke-lrmd-1334021528-17
Apr 9 20:32:08 node1 crmd: [2407]: info: process_lrm_event: LRM operation p_drbd_vmstore:0_notify_0 (call=15, rc=0, cib-update=0, confirmed=true) ok
Apr 9 20:32:10 node1 crmd: [2407]: info: do_lrm_rsc_op: Performing key=10:246:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_vmstore:0_monitor_0 )
Apr 9 20:32:10 node1 lrmd: [2404]: info: rsc:p_drbd_vmstore:0 probe[16] (pid 4496)
Apr 9 20:32:10 node1 cib: [2403]: info: cib_replace_notify: Replaced: -1.-1.-1 -> 5.120.4 from node2
Apr 9 20:32:10 node1 attrd: [2406]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_drbd_vmstore:0 (10000)
Apr 9 20:32:10 node1 crmd: [2407]: info: config_query_callback: Shutdown escalation occurs after: 1200000ms
Apr 9 20:32:10 node1 attrd: [2406]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Apr 9 20:32:10 node1 crmd: [2407]: info: config_query_callback: Checking for expired actions every 900000ms
Apr 9 20:32:10 node1 lrmd: [2404]: info: operation monitor[16] on p_drbd_vmstore:0 for client 2407: pid 4496 exited with return code 0
Apr 9 20:32:10 node1 crmd: [2407]: info: process_lrm_event: LRM operation p_drbd_vmstore:0_monitor_0 (call=16, rc=0, cib-update=46, confirmed=true) ok
Apr 9 20:32:11 node1 crmd: [2407]: info: do_lrm_rsc_op: Performing key=40:247:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_vmstore:0_monitor_20000 )
Apr 9 20:32:11 node1 lrmd: [2404]: info: rsc:p_drbd_vmstore:0 monitor[17] (pid 4524)
Apr 9 20:32:11 node1 lrmd: [2404]: info: operation monitor[17] on p_drbd_vmstore:0 for client 2407: pid 4524 exited with return code 0
Apr 9 20:32:11 node1 crmd: [2407]: info: process_lrm_event: LRM operation p_drbd_vmstore:0_monitor_20000 (call=17, rc=0, cib-update=47, confirmed=false) ok
Apr 9 20:32:17 node1 cib: [2403]: info: apply_xml_diff: Digest mis-match: expected 5849452c68971ff82825919bc39ad90e, calculated a7dc750ac875169d270be7354e10a561
Apr 9 20:32:17 node1 cib: [2403]: notice: cib_process_diff: Diff 5.120.16 -> 5.120.17 not applied to 5.120.16: Failed application of an update diff
Apr 9 20:32:17 node1 cib: [2403]: info: cib_server_process_diff: Requesting re-sync from peer
Apr 9 20:32:17 node1 crmd: [2407]: info: delete_resource: Removing resource p_drbd_mount1:0 for 4552_mount2_resource (internal) on node1
Apr 9 20:32:17 node1 crmd: [2407]: info: notify_deleted: Notifying 4552_mount2_resource on node1 that p_drbd_mount1:0 was deleted
Apr 9 20:32:17 node1 crmd: [2407]: WARN: decode_transition_key: Bad UUID (crm-resource-4552) in sscanf result (3) for 0:0:crm-resource-4552
Apr 9 20:32:17 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.120.16 -> 5.120.17 (sync in progress)
Apr 9 20:32:17 node1 crmd: [2407]: info: send_direct_ack: ACK'ing resource op p_drbd_mount1:0_delete_60000 from 0:0:crm-resource-4552: lrm_invoke-lrmd-1334021537-19
Apr 9 20:32:17 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.120.17 -> 5.121.1 (sync in progress)
Apr 9 20:32:17 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.121.1 -> 5.121.2 (sync in progress)
Apr 9 20:32:17 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.121.1 -> 5.121.2 (sync in progress)
Apr 9 20:32:17 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.121.2 -> 5.121.3 (sync in progress)
Apr 9 20:32:17 node1 cib: [2403]: info: cib_process_diff: Diff 5.121.3 -> 5.121.4 not applied to 5.120.16: current "epoch" is less than required
Apr 9 20:32:17 node1 cib: [2403]: info: cib_server_process_diff: Requesting re-sync from peer
Apr 9 20:32:17 node1 crmd: [2407]: notice: do_lrm_invoke: Not creating resource for a delete event: (null)
Apr 9 20:32:17 node1 crmd: [2407]: info: notify_deleted: Notifying 4552_mount2_resource on node1 that p_drbd_mount1:1 was deleted
Apr 9 20:32:17 node1 crmd: [2407]: WARN: decode_transition_key: Bad UUID (crm-resource-4552) in sscanf result (3) for 0:0:crm-resource-4552
Apr 9 20:32:17 node1 crmd: [2407]: info: send_direct_ack: ACK'ing resource op p_drbd_mount1:1_delete_60000 from 0:0:crm-resource-4552: lrm_invoke-lrmd-1334021537-20
Apr 9 20:32:17 node1 crmd: [2407]: WARN: decode_transition_key: Bad UUID (crm-resource-4552) in sscanf result (3) for 0:0:crm-resource-4552
Apr 9 20:32:17 node1 crmd: [2407]: info: send_direct_ack: ACK'ing resource op p_drbd_mount1:1_delete_60000 from 0:0:crm-resource-4552: lrm_invoke-lrmd-1334021537-21
Apr 9 20:32:18 node1 cib: [2403]: info: cib_replace_notify: Replaced: -1.-1.-1 -> 5.121.4 from node2
Apr 9 20:32:18 node1 attrd: [2406]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_drbd_vmstore:0 (10000)
Apr 9 20:32:18 node1 crmd: [2407]: info: config_query_callback: Shutdown escalation occurs after: 1200000ms
Apr 9 20:32:18 node1 crmd: [2407]: info: config_query_callback: Checking for expired actions every 900000ms
Apr 9 20:32:18 node1 crmd: [2407]: info: config_query_callback: Shutdown escalation occurs after: 1200000ms
Apr 9 20:32:18 node1 crmd: [2407]: info: config_query_callback: Checking for expired actions every 900000ms
Apr 9 20:32:18 node1 attrd: [2406]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Apr 9 20:32:18 node1 crmd: [2407]: info: do_lrm_rsc_op: Performing key=12:251:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_mount1:0_monitor_0 )
Apr 9 20:32:18 node1 lrmd: [2404]: info: rsc:p_drbd_mount1:0 probe[18] (pid 4554)
Apr 9 20:32:18 node1 attrd: [2406]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_drbd_mount1:0 (10000)
Apr 9 20:32:18 node1 attrd: [2406]: notice: attrd_perform_update: Sent update 33: master-p_drbd_mount1:0=10000
Apr 9 20:32:18 node1 lrmd: [2404]: info: operation monitor[18] on p_drbd_mount1:0 for client 2407: pid 4554 exited with return code 0
Apr 9 20:32:18 node1 crmd: [2407]: info: process_lrm_event: LRM operation p_drbd_mount1:0_monitor_0 (call=18, rc=0, cib-update=57, confirmed=true) ok
Apr 9 20:32:19 node1 crmd: [2407]: info: do_lrm_rsc_op: Performing key=76:252:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_mount1:0_monitor_20000 )
Apr 9 20:32:19 node1 lrmd: [2404]: info: rsc:p_drbd_mount1:0 monitor[19] (pid 4581)
Apr 9 20:32:19 node1 lrmd: [2404]: info: operation monitor[19] on p_drbd_mount1:0 for client 2407: pid 4581 exited with return code 0
Apr 9 20:32:19 node1 crmd: [2407]: info: process_lrm_event: LRM operation p_drbd_mount1:0_monitor_20000 (call=19, rc=0, cib-update=58, confirmed=false) ok
Apr 9 20:32:21 node1 crmd: [2407]: notice: do_lrm_invoke: Not creating resource for a delete event: (null)
Apr 9 20:32:21 node1 crmd: [2407]: info: notify_deleted: Notifying 4609_mount2_resource on node1 that p_drbd_mount2:0 was deleted
Apr 9 20:32:21 node1 crmd: [2407]: WARN: decode_transition_key: Bad UUID (crm-resource-4609) in sscanf result (3) for 0:0:crm-resource-4609
Apr 9 20:32:21 node1 crmd: [2407]: info: send_direct_ack: ACK'ing resource op p_drbd_mount2:0_delete_60000 from 0:0:crm-resource-4609: lrm_invoke-lrmd-1334021541-23
Apr 9 20:32:21 node1 crmd: [2407]: WARN: decode_transition_key: Bad UUID (crm-resource-4609) in sscanf result (3) for 0:0:crm-resource-4609
Apr 9 20:32:21 node1 crmd: [2407]: info: send_direct_ack: ACK'ing resource op p_drbd_mount2:0_delete_60000 from 0:0:crm-resource-4609: lrm_invoke-lrmd-1334021541-24
Apr 9 20:32:22 node1 cib: [2403]: info: apply_xml_diff: Digest mis-match: expected 65e4ad017b3e64d10be5a7606dce9840, calculated 4d0cb2b0b6658edac47a946867278d2a
Apr 9 20:32:22 node1 cib: [2403]: notice: cib_process_diff: Diff 5.123.1 -> 5.123.2 not applied to 5.123.1: Failed application of an update diff
Apr 9 20:32:22 node1 cib: [2403]: info: cib_server_process_diff: Requesting re-sync from peer
Apr 9 20:32:22 node1 crmd: [2407]: info: delete_resource: Removing resource p_drbd_mount2:1 for 4609_mount2_resource (internal) on node1
Apr 9 20:32:22 node1 crmd: [2407]: info: notify_deleted: Notifying 4609_mount2_resource on node1 that p_drbd_mount2:1 was deleted
Apr 9 20:32:22 node1 crmd: [2407]: WARN: decode_transition_key: Bad UUID (crm-resource-4609) in sscanf result (3) for 0:0:crm-resource-4609
Apr 9 20:32:22 node1 crmd: [2407]: info: send_direct_ack: ACK'ing resource op p_drbd_mount2:1_delete_60000 from 0:0:crm-resource-4609: lrm_invoke-lrmd-1334021542-25
Apr 9 20:32:22 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.123.1 -> 5.123.2 (sync in progress)
Apr 9 20:32:22 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.123.1 -> 5.123.2 (sync in progress)
Apr 9 20:32:22 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.123.2 -> 5.124.1 (sync in progress)
Apr 9 20:32:22 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.124.1 -> 5.124.2 (sync in progress)
Apr 9 20:32:22 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.124.2 -> 5.124.3 (sync in progress)
Apr 9 20:32:22 node1 crmd: [2407]: info: config_query_callback: Shutdown escalation occurs after: 1200000ms
Apr 9 20:32:22 node1 crmd: [2407]: info: config_query_callback: Checking for expired actions every 900000ms
Apr 9 20:32:23 node1 cib: [2403]: info: cib_process_diff: Diff 5.124.3 -> 5.124.4 not applied to 5.123.1: current "epoch" is less than required
Apr 9 20:32:23 node1 cib: [2403]: info: cib_server_process_diff: Requesting re-sync from peer
Apr 9 20:32:23 node1 cib: [2403]: notice: cib_server_process_diff: Not applying diff 5.124.4 -> 5.124.5 (sync in progress)
Apr 9 20:32:24 node1 crmd: [2407]: info: do_lrm_rsc_op: Performing key=14:256:7:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_mount2:0_monitor_0 )
Apr 9 20:32:24 node1 lrmd: [2404]: info: rsc:p_drbd_mount2:0 probe[20] (pid 4611)
Apr 9 20:32:24 node1 cib: [2403]: info: cib_replace_notify: Replaced: -1.-1.-1 -> 5.124.5 from node2
Apr 9 20:32:24 node1 attrd: [2406]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_drbd_vmstore:0 (10000)
Apr 9 20:32:24 node1 crmd: [2407]: info: config_query_callback: Shutdown escalation occurs after: 1200000ms
Apr 9 20:32:24 node1 attrd: [2406]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_drbd_mount1:0 (10000)
Apr 9 20:32:24 node1 crmd: [2407]: info: config_query_callback: Checking for expired actions every 900000ms
Apr 9 20:32:24 node1 attrd: [2406]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Apr 9 20:32:24 node1 attrd: [2406]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_drbd_mount2:0 (10000)
Apr 9 20:32:24 node1 attrd: [2406]: notice: attrd_perform_update: Sent update 43: master-p_drbd_mount2:0=10000
Apr 9 20:32:24 node1 lrmd: [2404]: info: operation monitor[20] on p_drbd_mount2:0 for client 2407: pid 4611 exited with return code 0
Apr 9 20:32:24 node1 crmd: [2407]: info: process_lrm_event: LRM operation p_drbd_mount2:0_monitor_0 (call=20, rc=0, cib-update=68, confirmed=true) ok
Apr 9 20:32:25 node1 crmd: [2407]: info: do_lrm_rsc_op: Performing key=109:257:0:a0d8c7f2-ab52-4894-bf64-ea2a75b29c28 op=p_drbd_mount2:0_monitor_20000 )
Apr 9 20:32:25 node1 lrmd: [2404]: info: rsc:p_drbd_mount2:0 monitor[21] (pid 4640)
Apr 9 20:32:25 node1 lrmd: [2404]: info: operation monitor[21] on p_drbd_mount2:0 for client 2407: pid 4640 exited with return code 0
Apr 9 20:32:25 node1 crmd: [2407]: info: process_lrm_event: LRM operation p_drbd_mount2:0_monitor_20000 (call=21, rc=0, cib-update=69, confirmed=false) ok
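
For reference, a minimal sketch of commands one could run on node1 to inspect the promotion state shown in the log above. The resource and attribute names are taken from the log itself; the bare DRBD resource name "vmstore" is an assumption inferred from p_drbd_vmstore, so substitute the names from your own drbd.conf:

    cat /proc/drbd                   # kernel view: connection state and Primary/Secondary roles
    drbdadm role vmstore             # role of the DRBD resource (resource name assumed)
    crm_mon -1                       # one-shot Pacemaker status, including master/slave sets
    crm_attribute -N node1 -n master-p_drbd_vmstore:0 -l reboot -G
                                     # query the master score the log shows being deleted (-D)
                                     # by the RA and later re-set to 10000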