storage1 corosync log during migration
Posted by amartin on Dec 4th, 2012
Dec 03 08:31:17 [3165] storage1 crmd: info: pcmk_cpg_membership: Left[3.0] crmd.-1811860470
Dec 03 08:31:17 [3165] storage1 crmd: info: crm_update_peer_proc: pcmk_cpg_membership: Node storage0[-1811860470] - corosync-cpg is now offline
Dec 03 08:31:17 [3165] storage1 crmd: info: peer_update_callback: Client storage0/peer now has status [offline] (DC=storage0)
Dec 03 08:31:17 [3165] storage1 crmd: notice: peer_update_callback: Got client status callback - our DC is dead
Dec 03 08:31:17 [3165] storage1 crmd: info: pcmk_cpg_membership: Member[3.0] crmd.402732042
Dec 03 08:31:17 [3165] storage1 crmd: info: pcmk_cpg_membership: Member[3.1] crmd.-1795083254
Dec 03 08:31:17 [3165] storage1 crmd: notice: do_state_transition: State transition S_NOT_DC -> S_ELECTION [ input=I_ELECTION cause=C_CRMD_STATUS_CALLBACK origin=peer_update_callback ]
Dec 03 08:31:17 [3165] storage1 crmd: info: do_election_count_vote: Election 2 (owner: 402732042) pass: vote from storagequorum (Uptime)
Dec 03 08:31:17 [3165] storage1 crmd: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Dec 03 08:31:17 [3165] storage1 crmd: info: do_te_control: Registering TE UUID: 636c188b-692d-46b5-a377-912f1e978ec8
Dec 03 08:31:17 [3165] storage1 crmd: info: set_graph_functions: Setting custom graph functions
Dec 03 08:31:17 [3165] storage1 crmd: info: do_dc_takeover: Taking over DC status for this partition
Dec 03 08:31:17 [3159] storage1 cib: info: cib_process_readwrite: We are now in R/W mode
Dec 03 08:31:17 [3159] storage1 cib: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/49, version=0.233.122): OK (rc=0)
Dec 03 08:31:17 [3159] storage1 cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/50, version=0.233.123): OK (rc=0)
Dec 03 08:31:17 [3159] storage1 cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/52, version=0.233.124): OK (rc=0)
Dec 03 08:31:17 [3165] storage1 crmd: info: join_make_offer: Making join offers based on membership 1842136
Dec 03 08:31:17 [3165] storage1 crmd: info: do_dc_join_offer_all: join-1: Waiting on 2 outstanding join acks
Dec 03 08:31:17 [3165] storage1 crmd: info: update_dc: Set DC to storage1 (3.0.6)
Dec 03 08:31:17 [3159] storage1 cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/54, version=0.233.125): OK (rc=0)
Dec 03 08:31:17 [3165] storage1 crmd: info: crm_update_peer_expected: do_dc_join_filter_offer: Node storage1[-1795083254] - expected state is now member
Dec 03 08:31:17 [3165] storage1 crmd: info: crm_update_peer_expected: do_dc_join_filter_offer: Node storagequorum[402732042] - expected state is now member
Dec 03 08:31:17 [3165] storage1 crmd: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Dec 03 08:31:17 [3165] storage1 crmd: info: do_dc_join_finalize: join-1: Syncing the CIB from storage1 to the rest of the cluster
Dec 03 08:31:17 [3159] storage1 cib: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/57, version=0.233.125): OK (rc=0)
Dec 03 08:31:17 [3159] storage1 cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/58, version=0.233.126): OK (rc=0)
Dec 03 08:31:17 [3159] storage1 cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/59, version=0.233.127): OK (rc=0)
Dec 03 08:31:18 [3165] storage1 crmd: info: services_os_action_execute: Managed MailTo_meta-data_0 process 2006 exited with rc=0
Dec 03 08:31:19 [3165] storage1 crmd: info: services_os_action_execute: Managed drbd_meta-data_0 process 2014 exited with rc=0
Dec 03 08:31:20 [3165] storage1 crmd: info: services_os_action_execute: Managed ping_meta-data_0 process 2020 exited with rc=0
Dec 03 08:31:20 [3165] storage1 crmd: info: pcmk_cpg_membership: Joined[4.0] crmd.-1811860470
Dec 03 08:31:20 [3165] storage1 crmd: info: pcmk_cpg_membership: Member[4.0] crmd.402732042
Dec 03 08:31:20 [3165] storage1 crmd: info: pcmk_cpg_membership: Member[4.1] crmd.-1811860470
Dec 03 08:31:20 [3165] storage1 crmd: info: crm_update_peer_proc: pcmk_cpg_membership: Node storage0[-1811860470] - corosync-cpg is now online
Dec 03 08:31:20 [3165] storage1 crmd: info: peer_update_callback: Client storage0/peer now has status [online] (DC=true)
Dec 03 08:31:20 [3165] storage1 crmd: warning: match_down_event: No match for shutdown action on 2483106826
Dec 03 08:31:20 [3165] storage1 crmd: info: pcmk_cpg_membership: Member[4.2] crmd.-1795083254
Dec 03 08:31:20 [3165] storage1 crmd: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_INTEGRATION [ input=I_NODE_JOIN cause=C_FSA_INTERNAL origin=peer_update_callback ]
Dec 03 08:31:20 [3165] storage1 crmd: info: do_dc_join_offer_all: join-2: Waiting on 3 outstanding join acks
Dec 03 08:31:20 [3165] storage1 crmd: info: do_dc_join_offer_all: A new node joined the cluster
Dec 03 08:31:20 [3165] storage1 crmd: info: do_dc_join_offer_all: join-3: Waiting on 3 outstanding join acks
Dec 03 08:31:20 [3165] storage1 crmd: info: update_dc: Set DC to storage1 (3.0.6)
Dec 03 08:31:20 [3165] storage1 crmd: info: crm_update_peer_expected: do_dc_join_filter_offer: Node storage0[-1811860470] - expected state is now member
Dec 03 08:31:20 [3165] storage1 crmd: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Dec 03 08:31:20 [3165] storage1 crmd: info: do_dc_join_finalize: join-3: Syncing the CIB from storage1 to the rest of the cluster
Dec 03 08:31:20 [3159] storage1 cib: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/63, version=0.233.132): OK (rc=0)
Dec 03 08:31:20 [3159] storage1 cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/64, version=0.233.133): OK (rc=0)
Dec 03 08:31:20 [3165] storage1 crmd: info: do_dc_join_ack: join-3: Updating node state to member for storage0
Dec 03 08:31:20 [3165] storage1 crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='storage0']/lrm
Dec 03 08:31:20 [3165] storage1 crmd: info: do_dc_join_ack: join-3: Updating node state to member for storage1
Dec 03 08:31:20 [3165] storage1 crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='storage1']/lrm
Dec 03 08:31:20 [3159] storage1 cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/65, version=0.233.134): OK (rc=0)
Dec 03 08:31:20 [3165] storage1 crmd: info: do_dc_join_ack: join-3: Updating node state to member for storagequorum
Dec 03 08:31:20 [3165] storage1 crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='storagequorum']/lrm
Dec 03 08:31:20 [3159] storage1 cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/66, version=0.233.135): OK (rc=0)
Dec 03 08:31:20 [3159] storage1 cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='storage0']/lrm (origin=local/crmd/67, version=0.233.136): OK (rc=0)
Dec 03 08:31:20 [3159] storage1 cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='storage1']/lrm (origin=local/crmd/69, version=0.233.138): OK (rc=0)
Dec 03 08:31:20 [3165] storage1 crmd: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Dec 03 08:31:20 [3165] storage1 crmd: info: abort_transition_graph: do_te_invoke:156 - Triggered transition abort (complete=1) : Peer Cancelled
Dec 03 08:31:20 [3163] storage1 attrd: notice: attrd_local_callback: Sending full refresh (origin=crmd)
Dec 03 08:31:20 [3163] storage1 attrd: notice: attrd_trigger_update: Sending flush op to all hosts for: p_ping (5000)
Dec 03 08:31:20 [3163] storage1 attrd: notice: attrd_trigger_update: Sending flush op to all hosts for: master-p_drbd_drives (10000)
Dec 03 08:31:20 [3165] storage1 crmd: info: abort_transition_graph: te_update_diff:227 - Triggered transition abort (complete=1) : LRM Refresh
Dec 03 08:31:20 [3159] storage1 cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='storagequorum']/lrm (origin=local/crmd/71, version=0.233.140): OK (rc=0)
Dec 03 08:31:20 [3165] storage1 crmd: info: abort_transition_graph: te_update_diff:271 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=stonithstorage0_last_0, magic=0:7;23:0:7:9dba7597-539d-48c6-b05d-645ebde5ec47, cib=0.233.140) : Resource op removal
Dec 03 08:31:20 [3165] storage1 crmd: info: abort_transition_graph: te_update_diff:227 - Triggered transition abort (complete=1) : LRM Refresh
Dec 03 08:31:20 [3159] storage1 cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/73, version=0.233.142): OK (rc=0)
Dec 03 08:31:20 [3163] storage1 attrd: notice: attrd_trigger_update: Sending flush op to all hosts for: fail-count-p_sysadmin_notify (INFINITY)
Dec 03 08:31:20 [3159] storage1 cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/75, version=0.233.145): OK (rc=0)
Dec 03 08:31:20 [3163] storage1 attrd: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Dec 03 08:31:20 [3164] storage1 pengine: warning: unpack_rsc_op: Processing failed op start for p_sysadmin_notify:0 on storage1: unknown error (1)
Dec 03 08:31:20 [3164] storage1 pengine: notice: unpack_rsc_op: Preventing p_daemon_nmbd from re-starting on storagequorum: operation monitor failed 'not installed' (rc=5)
Dec 03 08:31:20 [3164] storage1 pengine: notice: unpack_rsc_op: Preventing p_daemon_smbd from re-starting on storagequorum: operation monitor failed 'not installed' (rc=5)
Dec 03 08:31:20 [3164] storage1 pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from storage1 after 1000000 failures (max=1000000)
Dec 03 08:31:20 [3164] storage1 pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from storage1 after 1000000 failures (max=1000000)
Dec 03 08:31:20 [3164] storage1 pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from storage1 after 1000000 failures (max=1000000)
Dec 03 08:31:20 [3163] storage1 attrd: notice: attrd_trigger_update: Sending flush op to all hosts for: last-failure-p_sysadmin_notify (1354165704)
Dec 03 08:31:20 [3164] storage1 pengine: notice: LogActions: Start stonithstorage1 (storage0)
Dec 03 08:31:20 [3164] storage1 pengine: notice: LogActions: Start stonithstoragequorum (storage0)
Dec 03 08:31:20 [3164] storage1 pengine: notice: LogActions: Start p_fs_drives (storage1)
Dec 03 08:31:20 [3164] storage1 pengine: notice: LogActions: Start p_fs_bind_opt (storage1)
Dec 03 08:31:20 [3164] storage1 pengine: notice: LogActions: Start p_exportfs_storage (storage1)
Dec 03 08:31:20 [3164] storage1 pengine: notice: LogActions: Start p_daemon_smbd (storage1)
Dec 03 08:31:20 [3164] storage1 pengine: notice: LogActions: Start p_daemon_nmbd (storage1)
Dec 03 08:31:20 [3164] storage1 pengine: notice: LogActions: Start p_ip_storage (storage1)
Dec 03 08:31:20 [3164] storage1 pengine: notice: LogActions: Start p_ip_drives0 (storage1)
Dec 03 08:31:20 [3164] storage1 pengine: notice: LogActions: Start p_ip_drives1 (storage1)
Dec 03 08:31:20 [3164] storage1 pengine: notice: LogActions: Start p_ip_drives2 (storage1)
Dec 03 08:31:20 [3164] storage1 pengine: notice: LogActions: Start p_ip_drives3 (storage1)
Dec 03 08:31:20 [3164] storage1 pengine: notice: LogActions: Promote p_drbd_drives:0 (Slave -> Master storage1)
Dec 03 08:31:20 [3164] storage1 pengine: notice: LogActions: Start p_drbd_drives:1 (storage0)
Dec 03 08:31:20 [3164] storage1 pengine: notice: LogActions: Start p_ping:1 (storage0)
Dec 03 08:31:20 [3164] storage1 pengine: notice: LogActions: Start p_daemon_nfs-kernel-server:1 (storage0)
Dec 03 08:31:20 [3164] storage1 pengine: notice: LogActions: Start p_sysadmin_notify:0 (storage0)
Dec 03 08:31:20 [3164] storage1 pengine: notice: process_pe_message: Calculated Transition 0: /var/lib/pacemaker/pengine/pe-input-27.bz2
Dec 03 08:31:20 [3165] storage1 crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Dec 03 08:31:20 [3165] storage1 crmd: info: do_te_invoke: Processing graph 0 (ref=pe_calc-dc-1354545080-34) derived from /var/lib/pacemaker/pengine/pe-input-27.bz2
Dec 03 08:31:20 [3165] storage1 crmd: info: te_rsc_command: Initiating action 7: monitor stonithstorage0_monitor_0 on storage0
Dec 03 08:31:20 [3165] storage1 crmd: info: te_rsc_command: Initiating action 8: monitor stonithstorage1_monitor_0 on storage0
Dec 03 08:31:20 [3165] storage1 crmd: info: te_rsc_command: Initiating action 9: monitor stonithstoragequorum_monitor_0 on storage0
Dec 03 08:31:20 [3165] storage1 crmd: info: te_rsc_command: Initiating action 10: monitor p_fs_drives_monitor_0 on storage0
Dec 03 08:31:20 [3165] storage1 crmd: info: te_rsc_command: Initiating action 11: monitor p_fs_bind_opt_monitor_0 on storage0
Dec 03 08:31:20 [3165] storage1 crmd: info: te_rsc_command: Initiating action 12: monitor p_exportfs_storage_monitor_0 on storage0
Dec 03 08:31:20 [3165] storage1 crmd: info: te_rsc_command: Initiating action 13: monitor p_daemon_smbd_monitor_0 on storage0
Dec 03 08:31:20 [3165] storage1 crmd: info: te_rsc_command: Initiating action 14: monitor p_daemon_nmbd_monitor_0 on storage0
Dec 03 08:31:20 [3165] storage1 crmd: info: te_rsc_command: Initiating action 15: monitor p_ip_storage_monitor_0 on storage0
Dec 03 08:31:20 [3165] storage1 crmd: info: te_rsc_command: Initiating action 16: monitor p_ip_drives0_monitor_0 on storage0
Dec 03 08:31:20 [3165] storage1 crmd: info: te_rsc_command: Initiating action 17: monitor p_ip_drives1_monitor_0 on storage0
Dec 03 08:31:20 [3165] storage1 crmd: info: te_rsc_command: Initiating action 18: monitor p_ip_drives2_monitor_0 on storage0
Dec 03 08:31:20 [3165] storage1 crmd: info: te_rsc_command: Initiating action 19: monitor p_ip_drives3_monitor_0 on storage0
Dec 03 08:31:20 [3165] storage1 crmd: info: te_rsc_command: Initiating action 2: cancel p_drbd_drives_cancel_20000 on storage1 (local)
Dec 03 08:31:20 [3162] storage1 lrmd: info: cancel_recurring_action: Cancelling operation p_drbd_drives_monitor_20000
Dec 03 08:31:20 [3165] storage1 crmd: info: te_rsc_command: Initiating action 20: monitor p_drbd_drives_monitor_0 on storage0
Dec 03 08:31:20 [3165] storage1 crmd: info: te_rsc_command: Initiating action 21: monitor p_ping_monitor_0 on storage0
Dec 03 08:31:20 [3165] storage1 crmd: info: te_rsc_command: Initiating action 22: monitor p_daemon_nfs-kernel-server_monitor_0 on storage0
Dec 03 08:31:20 [3165] storage1 crmd: info: te_rsc_command: Initiating action 23: monitor p_sysadmin_notify_monitor_0 on storage0
Dec 03 08:31:20 [3159] storage1 cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='storage0']/transient_attributes (origin=storage0/crmd/8, version=0.233.151): OK (rc=0)
Dec 03 08:31:20 [3165] storage1 crmd: info: abort_transition_graph: te_update_diff:194 - Triggered transition abort (complete=0, tag=transient_attributes, id=2483106826, magic=NA, cib=0.233.151) : Transient attribute: removal
Dec 03 08:31:20 [3165] storage1 crmd: info: process_lrm_event: LRM operation p_drbd_drives_monitor_20000 (call=110, status=1, cib-update=0, confirmed=false) Cancelled
Dec 03 08:31:20 [3159] storage1 cib: warning: cib_process_request: Operation complete: op cib_modify for section status (origin=storage0/attrd/144, version=0.233.151): No such device or address (rc=-6)
Dec 03 08:31:20 [3159] storage1 cib: warning: cib_process_request: Operation complete: op cib_modify for section status (origin=storage0/attrd/146, version=0.233.151): No such device or address (rc=-6)
Dec 03 08:31:20 [3159] storage1 cib: warning: cib_process_request: Operation complete: op cib_modify for section status (origin=storage0/attrd/148, version=0.233.151): No such device or address (rc=-6)
Dec 03 08:31:20 [3159] storage1 cib: warning: cib_process_request: Operation complete: op cib_modify for section status (origin=storage0/attrd/150, version=0.233.151): No such device or address (rc=-6)
Dec 03 08:31:20 [3159] storage1 cib: warning: cib_process_request: Operation complete: op cib_modify for section status (origin=storage0/attrd/152, version=0.233.151): No such device or address (rc=-6)
Dec 03 08:31:20 [3159] storage1 cib: warning: cib_process_request: Operation complete: op cib_modify for section status (origin=storage0/attrd/155, version=0.233.151): No such device or address (rc=-6)
Dec 03 08:31:20 [3165] storage1 crmd: info: abort_transition_graph: te_update_diff:271 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=p_drbd_drives_monitor_20000, magic=0:0;53:17:0:df650c4d-2ebf-4de5-b427-88012ac23369, cib=0.233.152) : Resource op removal
Dec 03 08:31:21 [3165] storage1 crmd: warning: status_from_rc: Action 13 (p_daemon_smbd_monitor_0) on storage0 failed (target: 7 vs. rc: 0): Error
Dec 03 08:31:21 [3165] storage1 crmd: info: abort_transition_graph: match_graph_event:276 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=p_daemon_smbd_last_failure_0, magic=0:0;13:0:7:636c188b-692d-46b5-a377-912f1e978ec8, cib=0.233.161) : Event failed
Dec 03 08:31:21 [3165] storage1 crmd: warning: status_from_rc: Action 14 (p_daemon_nmbd_monitor_0) on storage0 failed (target: 7 vs. rc: 0): Error
Dec 03 08:31:21 [3165] storage1 crmd: info: abort_transition_graph: match_graph_event:276 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=p_daemon_nmbd_last_failure_0, magic=0:0;14:0:7:636c188b-692d-46b5-a377-912f1e978ec8, cib=0.233.162) : Event failed
Dec 03 08:31:22 [3165] storage1 crmd: warning: status_from_rc: Action 10 (p_fs_drives_monitor_0) on storage0 failed (target: 7 vs. rc: 0): Error
Dec 03 08:31:22 [3165] storage1 crmd: info: abort_transition_graph: match_graph_event:276 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=p_fs_drives_last_failure_0, magic=0:0;10:0:7:636c188b-692d-46b5-a377-912f1e978ec8, cib=0.233.163) : Event failed
Dec 03 08:31:22 [3165] storage1 crmd: warning: status_from_rc: Action 11 (p_fs_bind_opt_monitor_0) on storage0 failed (target: 7 vs. rc: 0): Error
Dec 03 08:31:22 [3165] storage1 crmd: info: abort_transition_graph: match_graph_event:276 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=p_fs_bind_opt_last_failure_0, magic=0:0;11:0:7:636c188b-692d-46b5-a377-912f1e978ec8, cib=0.233.164) : Event failed
Dec 03 08:31:23 [3165] storage1 crmd: warning: status_from_rc: Action 15 (p_ip_storage_monitor_0) on storage0 failed (target: 7 vs. rc: 0): Error
Dec 03 08:31:23 [3165] storage1 crmd: info: abort_transition_graph: match_graph_event:276 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=p_ip_storage_last_failure_0, magic=0:0;15:0:7:636c188b-692d-46b5-a377-912f1e978ec8, cib=0.233.165) : Event failed
Dec 03 08:31:23 [3165] storage1 crmd: warning: status_from_rc: Action 16 (p_ip_drives0_monitor_0) on storage0 failed (target: 7 vs. rc: 0): Error
Dec 03 08:31:23 [3165] storage1 crmd: info: abort_transition_graph: match_graph_event:276 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=p_ip_drives0_last_failure_0, magic=0:0;16:0:7:636c188b-692d-46b5-a377-912f1e978ec8, cib=0.233.166) : Event failed
Dec 03 08:31:23 [3165] storage1 crmd: warning: status_from_rc: Action 17 (p_ip_drives1_monitor_0) on storage0 failed (target: 7 vs. rc: 0): Error
Dec 03 08:31:23 [3165] storage1 crmd: info: abort_transition_graph: match_graph_event:276 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=p_ip_drives1_last_failure_0, magic=0:0;17:0:7:636c188b-692d-46b5-a377-912f1e978ec8, cib=0.233.167) : Event failed
Dec 03 08:31:23 [3165] storage1 crmd: warning: status_from_rc: Action 18 (p_ip_drives2_monitor_0) on storage0 failed (target: 7 vs. rc: 0): Error
Dec 03 08:31:23 [3165] storage1 crmd: info: abort_transition_graph: match_graph_event:276 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=p_ip_drives2_last_failure_0, magic=0:0;18:0:7:636c188b-692d-46b5-a377-912f1e978ec8, cib=0.233.168) : Event failed
Dec 03 08:31:23 [3165] storage1 crmd: warning: status_from_rc: Action 19 (p_ip_drives3_monitor_0) on storage0 failed (target: 7 vs. rc: 0): Error
Dec 03 08:31:23 [3165] storage1 crmd: info: abort_transition_graph: match_graph_event:276 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=p_ip_drives3_last_failure_0, magic=0:0;19:0:7:636c188b-692d-46b5-a377-912f1e978ec8, cib=0.233.169) : Event failed
Dec 03 08:31:24 [3165] storage1 crmd: warning: status_from_rc: Action 23 (p_sysadmin_notify_monitor_0) on storage0 failed (target: 7 vs. rc: 0): Error
Dec 03 08:31:24 [3165] storage1 crmd: info: abort_transition_graph: match_graph_event:276 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=p_sysadmin_notify_last_failure_0, magic=0:0;23:0:7:636c188b-692d-46b5-a377-912f1e978ec8, cib=0.233.170) : Event failed
Dec 03 08:31:25 [3165] storage1 crmd: warning: status_from_rc: Action 20 (p_drbd_drives_monitor_0) on storage0 failed (target: 7 vs. rc: 8): Error
Dec 03 08:31:25 [3165] storage1 crmd: info: abort_transition_graph: match_graph_event:276 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=p_drbd_drives_last_failure_0, magic=0:8;20:0:7:636c188b-692d-46b5-a377-912f1e978ec8, cib=0.233.171) : Event failed
Dec 03 08:31:25 [3165] storage1 crmd: warning: status_from_rc: Action 22 (p_daemon_nfs-kernel-server_monitor_0) on storage0 failed (target: 7 vs. rc: 0): Error
Dec 03 08:31:25 [3165] storage1 crmd: info: abort_transition_graph: match_graph_event:276 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=p_daemon_nfs-kernel-server_last_failure_0, magic=0:0;22:0:7:636c188b-692d-46b5-a377-912f1e978ec8, cib=0.233.172) : Event failed
Dec 03 08:31:26 [3165] storage1 crmd: warning: status_from_rc: Action 12 (p_exportfs_storage_monitor_0) on storage0 failed (target: 7 vs. rc: 0): Error
Dec 03 08:31:26 [3165] storage1 crmd: info: abort_transition_graph: match_graph_event:276 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=p_exportfs_storage_last_failure_0, magic=0:0;12:0:7:636c188b-692d-46b5-a377-912f1e978ec8, cib=0.233.173) : Event failed
Dec 03 08:31:26 [3165] storage1 crmd: warning: status_from_rc: Action 8 (stonithstorage1_monitor_0) on storage0 failed (target: 7 vs. rc: 0): Error
Dec 03 08:31:26 [3165] storage1 crmd: info: abort_transition_graph: match_graph_event:276 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=stonithstorage1_last_failure_0, magic=0:0;8:0:7:636c188b-692d-46b5-a377-912f1e978ec8, cib=0.233.174) : Event failed
Dec 03 08:31:26 [3165] storage1 crmd: warning: status_from_rc: Action 9 (stonithstoragequorum_monitor_0) on storage0 failed (target: 7 vs. rc: 0): Error
Dec 03 08:31:26 [3165] storage1 crmd: info: abort_transition_graph: match_graph_event:276 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=stonithstoragequorum_last_failure_0, magic=0:0;9:0:7:636c188b-692d-46b5-a377-912f1e978ec8, cib=0.233.175) : Event failed
Dec 03 08:31:31 [3165] storage1 crmd: warning: status_from_rc: Action 21 (p_ping_monitor_0) on storage0 failed (target: 7 vs. rc: 0): Error
Dec 03 08:31:31 [3165] storage1 crmd: info: abort_transition_graph: match_graph_event:276 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=p_ping_last_failure_0, magic=0:0;21:0:7:636c188b-692d-46b5-a377-912f1e978ec8, cib=0.233.176) : Event failed
Dec 03 08:31:31 [3165] storage1 crmd: info: te_rsc_command: Initiating action 6: probe_complete probe_complete on storage0 - no waiting
Dec 03 08:31:31 [3165] storage1 crmd: notice: run_graph: Transition 0 (Complete=23, Pending=0, Fired=0, Skipped=43, Incomplete=13, Source=/var/lib/pacemaker/pengine/pe-input-27.bz2): Stopped
Dec 03 08:31:31 [3165] storage1 crmd: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
Dec 03 08:31:31 [3164] storage1 pengine: warning: unpack_rsc_op: Processing failed op start for p_sysadmin_notify:0 on storage1: unknown error (1)
Dec 03 08:31:31 [3164] storage1 pengine: notice: unpack_rsc_op: Preventing p_daemon_nmbd from re-starting on storagequorum: operation monitor failed 'not installed' (rc=5)
Dec 03 08:31:31 [3164] storage1 pengine: notice: unpack_rsc_op: Preventing p_daemon_smbd from re-starting on storagequorum: operation monitor failed 'not installed' (rc=5)
Dec 03 08:31:31 [3164] storage1 pengine: notice: unpack_rsc_op: Operation monitor found resource p_drbd_drives:1 active in master mode on storage0
Dec 03 08:31:31 [3164] storage1 pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from storage1 after 1000000 failures (max=1000000)
Dec 03 08:31:31 [3164] storage1 pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from storage1 after 1000000 failures (max=1000000)
Dec 03 08:31:31 [3164] storage1 pengine: warning: common_apply_stickiness: Forcing cl_sysadmin_notify away from storage1 after 1000000 failures (max=1000000)
##############################################################################################
## Starting to move resources from storage0 to storage1
##############################################################################################
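##############################################################################################
## The command that initiated this move is not captured in this log. With the crmsh shell
## shipped alongside Pacemaker 1.1, a planned migration like the one below is typically
## kicked off with one of the following (assumed here for illustration only):
##
##     crm node standby storage0                   # evacuate every resource from storage0
##     crm resource migrate p_fs_drives storage1   # move one resource (and its dependents)
##
## Both are standard crmsh commands; whether either was used here is an assumption.
##############################################################################################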
Dec 03 08:31:31 [3164] storage1 pengine: notice: LogActions: Move p_fs_drives (Started storage0 -> storage1)
Dec 03 08:31:31 [3164] storage1 pengine: notice: LogActions: Move p_fs_bind_opt (Started storage0 -> storage1)
Dec 03 08:31:31 [3164] storage1 pengine: notice: LogActions: Move p_exportfs_storage (Started storage0 -> storage1)
Dec 03 08:31:31 [3164] storage1 pengine: notice: LogActions: Move p_daemon_smbd (Started storage0 -> storage1)
Dec 03 08:31:31 [3164] storage1 pengine: notice: LogActions: Move p_daemon_nmbd (Started storage0 -> storage1)
Dec 03 08:31:31 [3164] storage1 pengine: notice: LogActions: Move p_ip_storage (Started storage0 -> storage1)
Dec 03 08:31:31 [3164] storage1 pengine: notice: LogActions: Move p_ip_drives0 (Started storage0 -> storage1)
Dec 03 08:31:31 [3164] storage1 pengine: notice: LogActions: Move p_ip_drives1 (Started storage0 -> storage1)
Dec 03 08:31:31 [3164] storage1 pengine: notice: LogActions: Move p_ip_drives2 (Started storage0 -> storage1)
Dec 03 08:31:31 [3164] storage1 pengine: notice: LogActions: Move p_ip_drives3 (Started storage0 -> storage1)
Dec 03 08:31:31 [3164] storage1 pengine: notice: LogActions: Promote p_drbd_drives:0 (Slave -> Master storage1)
Dec 03 08:31:31 [3164] storage1 pengine: notice: LogActions: Demote p_drbd_drives:1 (Master -> Slave storage0)
Dec 03 08:31:31 [3164] storage1 pengine: notice: process_pe_message: Calculated Transition 1: /var/lib/pacemaker/pengine/pe-input-28.bz2
Dec 03 08:31:31 [3165] storage1 crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Dec 03 08:31:31 [3165] storage1 crmd: warning: destroy_action: Cancelling timer for action 2 (src=97)
Dec 03 08:31:31 [3165] storage1 crmd: info: do_te_invoke: Processing graph 1 (ref=pe_calc-dc-1354545091-55) derived from /var/lib/pacemaker/pengine/pe-input-28.bz2
Dec 03 08:31:31 [3165] storage1 crmd: info: te_rsc_command: Initiating action 41: stop p_ip_drives3_stop_0 on storage0
Dec 03 08:31:31 [3165] storage1 crmd: info: te_rsc_command: Initiating action 84: monitor p_ping_monitor_10000 on storage0
Dec 03 08:31:31 [3165] storage1 crmd: info: te_rsc_command: Initiating action 93: monitor p_daemon_nfs-kernel-server_monitor_30000 on storage0
Dec 03 08:31:31 [3165] storage1 crmd: info: te_rsc_command: Initiating action 100: monitor p_sysadmin_notify_monitor_10000 on storage0
Dec 03 08:31:31 [3165] storage1 crmd: info: te_rsc_command: Initiating action 125: notify p_drbd_drives_pre_notify_demote_0 on storage1 (local)
Dec 03 08:31:31 [3165] storage1 crmd: info: te_rsc_command: Initiating action 127: notify p_drbd_drives_pre_notify_demote_0 on storage0
Dec 03 08:31:31 [3165] storage1 crmd: notice: process_lrm_event: LRM operation p_drbd_drives_notify_0 (call=131, rc=0, cib-update=0, confirmed=true) ok
Dec 03 08:31:31 [3165] storage1 crmd: info: te_rsc_command: Initiating action 38: stop p_ip_drives2_stop_0 on storage0
Dec 03 08:31:31 [3165] storage1 crmd: info: te_rsc_command: Initiating action 35: stop p_ip_drives1_stop_0 on storage0
Dec 03 08:31:31 [3165] storage1 crmd: info: te_rsc_command: Initiating action 32: stop p_ip_drives0_stop_0 on storage0
Dec 03 08:31:31 [3165] storage1 crmd: info: te_rsc_command: Initiating action 29: stop p_ip_storage_stop_0 on storage0
Dec 03 08:31:31 [3165] storage1 crmd: info: te_rsc_command: Initiating action 26: stop p_daemon_nmbd_stop_0 on storage0
Dec 03 08:32:06 [3165] storage1 crmd: info: te_rsc_command: Initiating action 23: stop p_daemon_smbd_stop_0 on storage0
Dec 03 08:32:07 [3165] storage1 crmd: info: te_rsc_command: Initiating action 20: stop p_exportfs_storage_stop_0 on storage0
Dec 03 08:32:07 [3165] storage1 crmd: info: te_rsc_command: Initiating action 17: stop p_fs_bind_opt_stop_0 on storage0
Dec 03 08:32:07 [3165] storage1 crmd: info: te_rsc_command: Initiating action 14: stop p_fs_drives_stop_0 on storage0
Dec 03 08:32:42 [3165] storage1 crmd: info: te_rsc_command: Initiating action 53: demote p_drbd_drives_demote_0 on storage0
Dec 03 08:32:42 [3165] storage1 crmd: info: te_rsc_command: Initiating action 126: notify p_drbd_drives_post_notify_demote_0 on storage1 (local)
Dec 03 08:32:42 [3165] storage1 crmd: info: te_rsc_command: Initiating action 128: notify p_drbd_drives_post_notify_demote_0 on storage0
Dec 03 08:32:42 [3165] storage1 crmd: notice: process_lrm_event: LRM operation p_drbd_drives_notify_0 (call=134, rc=0, cib-update=0, confirmed=true) ok
Dec 03 08:32:42 [3165] storage1 crmd: info: te_rsc_command: Initiating action 121: notify p_drbd_drives_pre_notify_promote_0 on storage1 (local)
Dec 03 08:32:42 [3165] storage1 crmd: info: te_rsc_command: Initiating action 123: notify p_drbd_drives_pre_notify_promote_0 on storage0
Dec 03 08:32:42 [3165] storage1 crmd: notice: process_lrm_event: LRM operation p_drbd_drives_notify_0 (call=137, rc=0, cib-update=0, confirmed=true) ok
Dec 03 08:32:42 [3165] storage1 crmd: info: te_rsc_command: Initiating action 50: promote p_drbd_drives_promote_0 on storage1 (local)
Dec 03 08:32:42 [3165] storage1 crmd: notice: process_lrm_event: LRM operation p_drbd_drives_promote_0 (call=140, rc=0, cib-update=82, confirmed=true) ok
Dec 03 08:32:42 [3165] storage1 crmd: info: te_rsc_command: Initiating action 122: notify p_drbd_drives_post_notify_promote_0 on storage1 (local)
Dec 03 08:32:42 [3165] storage1 crmd: info: te_rsc_command: Initiating action 124: notify p_drbd_drives_post_notify_promote_0 on storage0
Dec 03 08:32:42 [3165] storage1 crmd: notice: process_lrm_event: LRM operation p_drbd_drives_notify_0 (call=143, rc=0, cib-update=0, confirmed=true) ok
Dec 03 08:32:42 [3165] storage1 crmd: info: te_rsc_command: Initiating action 15: start p_fs_drives_start_0 on storage1 (local)
Dec 03 08:32:42 [3165] storage1 crmd: info: te_rsc_command: Initiating action 51: monitor p_drbd_drives_monitor_10000 on storage1 (local)
Dec 03 08:32:42 [3165] storage1 crmd: info: te_rsc_command: Initiating action 55: monitor p_drbd_drives_monitor_20000 on storage0
Filesystem[2355]: 2012/12/03_08:32:42 INFO: Running start for /dev/drbd0 on /mnt/storage
Dec 03 08:32:42 [3165] storage1 crmd: notice: process_lrm_event: LRM operation p_drbd_drives_monitor_10000 (call=148, rc=8, cib-update=83, confirmed=false) master
Dec 03 08:32:42 [3165] storage1 crmd: info: process_lrm_event: Result:
Dec 03 08:32:42 [3162] storage1 lrmd: notice: operation_finished: p_fs_drives_start_0:2355 [ FATAL: Module scsi_hostadapter not found. ]
Dec 03 08:32:43 [3165] storage1 crmd: info: services_os_action_execute: Managed Filesystem_meta-data_0 process 2435 exited with rc=0
Dec 03 08:32:43 [3165] storage1 crmd: notice: process_lrm_event: LRM operation p_fs_drives_start_0 (call=146, rc=0, cib-update=84, confirmed=true) ok
Dec 03 08:32:43 [3165] storage1 crmd: info: te_rsc_command: Initiating action 16: monitor p_fs_drives_monitor_20000 on storage1 (local)
Dec 03 08:32:43 [3165] storage1 crmd: info: te_rsc_command: Initiating action 18: start p_fs_bind_opt_start_0 on storage1 (local)
Dec 03 08:32:43 [3165] storage1 crmd: notice: process_lrm_event: LRM operation p_fs_drives_monitor_20000 (call=152, rc=0, cib-update=85, confirmed=false) ok
Filesystem[2441]: 2012/12/03_08:32:43 INFO: Running start for /mnt/storage/home/shared/opt on /opt
Dec 03 08:32:43 [3162] storage1 lrmd: notice: operation_finished: p_fs_bind_opt_start_0:2441 [ FATAL: Module scsi_hostadapter not found. ]
Dec 03 08:32:43 [3165] storage1 crmd: notice: process_lrm_event: LRM operation p_fs_bind_opt_start_0 (call=154, rc=0, cib-update=86, confirmed=true) ok
Dec 03 08:32:43 [3165] storage1 crmd: info: te_rsc_command: Initiating action 19: monitor p_fs_bind_opt_monitor_20000 on storage1 (local)
Dec 03 08:32:43 [3165] storage1 crmd: info: te_rsc_command: Initiating action 21: start p_exportfs_storage_start_0 on storage1 (local)
exportfs[2520]: 2012/12/03_08:32:43 INFO: Directory /mnt/storage is not exported to 10.0.0.0/255.0.0.0 (stopped).
exportfs[2520]: 2012/12/03_08:32:43 INFO: Exporting file system ...
exportfs[2520]: 2012/12/03_08:32:44 INFO: exporting 10.0.0.0/255.0.0.0:/mnt/storage
Dec 03 08:32:44 [3165] storage1 crmd: notice: process_lrm_event: LRM operation p_fs_bind_opt_monitor_20000 (call=158, rc=0, cib-update=87, confirmed=false) ok
exportfs[2520]: 2012/12/03_08:32:44 INFO: File system exported
Dec 03 08:32:45 [3165] storage1 crmd: info: services_os_action_execute: Managed exportfs_meta-data_0 process 2582 exited with rc=0
Dec 03 08:32:45 [3165] storage1 crmd: notice: process_lrm_event: LRM operation p_exportfs_storage_start_0 (call=160, rc=0, cib-update=88, confirmed=true) ok
Dec 03 08:32:45 [3165] storage1 crmd: info: te_rsc_command: Initiating action 22: monitor p_exportfs_storage_monitor_30000 on storage1 (local)
Dec 03 08:32:45 [3165] storage1 crmd: info: te_rsc_command: Initiating action 24: start p_daemon_smbd_start_0 on storage1 (local)
exportfs[2585]: 2012/12/03_08:32:45 INFO: Directory /mnt/storage is exported to 10.0.0.0/255.0.0.0 (started).
Dec 03 08:32:45 [3165] storage1 crmd: notice: process_lrm_event: LRM operation p_exportfs_storage_monitor_30000 (call=164, rc=0, cib-update=89, confirmed=false) ok
Dec 03 08:32:45 [3162] storage1 lrmd: info: upstart_job_exec_done: Call to start passed: type '(o)' /com/ubuntu/Upstart/jobs/smbd/_
Dec 03 08:32:45 [3165] storage1 crmd: notice: process_lrm_event: LRM operation p_daemon_smbd_start_0 (call=166, rc=0, cib-update=90, confirmed=true) ok
Dec 03 08:32:45 [3165] storage1 crmd: info: te_rsc_command: Initiating action 25: monitor p_daemon_smbd_monitor_30000 on storage1 (local)
Dec 03 08:32:45 [3165] storage1 crmd: info: te_rsc_command: Initiating action 27: start p_daemon_nmbd_start_0 on storage1 (local)
Dec 03 08:32:45 [3162] storage1 lrmd: info: get_first_instance: Result: /com/ubuntu/Upstart/jobs/smbd/_
Dec 03 08:32:45 [3162] storage1 lrmd: info: upstart_job_property: Calling GetAll on /com/ubuntu/Upstart/jobs/smbd/_
Dec 03 08:32:45 [3162] storage1 lrmd: info: upstart_job_property: Call to GetAll passed: type '(a{sv})' 1
Dec 03 08:32:45 [3162] storage1 lrmd: info: upstart_job_property: Got value 'running' for /com/ubuntu/Upstart/jobs/smbd/_[state]
Dec 03 08:32:45 [3162] storage1 lrmd: info: upstart_job_running: State of smbd: running
Dec 03 08:32:45 [3162] storage1 lrmd: info: upstart_job_running: smbd is running
Dec 03 08:32:45 [3165] storage1 crmd: notice: process_lrm_event: LRM operation p_daemon_smbd_monitor_30000 (call=170, rc=0, cib-update=91, confirmed=false) ok
Dec 03 08:32:45 [3162] storage1 lrmd: info: upstart_job_exec_done: Call to start passed: type '(o)' /com/ubuntu/Upstart/jobs/nmbd/_
Dec 03 08:32:45 [3165] storage1 crmd: notice: process_lrm_event: LRM operation p_daemon_nmbd_start_0 (call=172, rc=0, cib-update=92, confirmed=true) ok
Dec 03 08:32:45 [3165] storage1 crmd: info: te_rsc_command: Initiating action 28: monitor p_daemon_nmbd_monitor_30000 on storage1 (local)
Dec 03 08:32:45 [3165] storage1 crmd: info: te_rsc_command: Initiating action 30: start p_ip_storage_start_0 on storage1 (local)
Dec 03 08:32:45 [3162] storage1 lrmd: info: get_first_instance: Result: /com/ubuntu/Upstart/jobs/nmbd/_
Dec 03 08:32:45 [3162] storage1 lrmd: info: upstart_job_property: Calling GetAll on /com/ubuntu/Upstart/jobs/nmbd/_
Dec 03 08:32:45 [3162] storage1 lrmd: info: upstart_job_property: Call to GetAll passed: type '(a{sv})' 1
Dec 03 08:32:45 [3162] storage1 lrmd: info: upstart_job_property: Got value 'running' for /com/ubuntu/Upstart/jobs/nmbd/_[state]
Dec 03 08:32:45 [3162] storage1 lrmd: info: upstart_job_running: State of nmbd: running
Dec 03 08:32:45 [3162] storage1 lrmd: info: upstart_job_running: nmbd is running
Dec 03 08:32:45 [3165] storage1 crmd: notice: process_lrm_event: LRM operation p_daemon_nmbd_monitor_30000 (call=176, rc=0, cib-update=93, confirmed=false) ok
IPaddr2[2606]: 2012/12/03_08:32:45 INFO: ip -f inet addr add 10.10.1.38/16 brd 10.10.255.255 dev eth3
IPaddr2[2606]: 2012/12/03_08:32:45 INFO: ip link set eth3 up
IPaddr2[2606]: 2012/12/03_08:32:45 INFO: /usr/lib/heartbeat/send_arp -i 200 -r 5 -p /var/run/resource-agents/send_arp-10.10.1.38 eth3 10.10.1.38 auto not_used not_used
Dec 03 08:32:46 [3165] storage1 crmd: info: services_os_action_execute: Managed IPaddr2_meta-data_0 process 2655 exited with rc=0
Dec 03 08:32:46 [3165] storage1 crmd: notice: process_lrm_event: LRM operation p_ip_storage_start_0 (call=178, rc=0, cib-update=94, confirmed=true) ok
Dec 03 08:32:46 [3165] storage1 crmd: info: te_rsc_command: Initiating action 31: monitor p_ip_storage_monitor_30000 on storage1 (local)
Dec 03 08:32:46 [3165] storage1 crmd: info: te_rsc_command: Initiating action 33: start p_ip_drives0_start_0 on storage1 (local)
Dec 03 08:32:46 [3165] storage1 crmd: notice: process_lrm_event: LRM operation p_ip_storage_monitor_30000 (call=182, rc=0, cib-update=95, confirmed=false) ok
IPaddr2[2661]: 2012/12/03_08:32:46 INFO: ip -f inet addr add 10.10.0.3/16 brd 10.10.255.255 dev eth3
IPaddr2[2661]: 2012/12/03_08:32:46 INFO: ip link set eth3 up
IPaddr2[2661]: 2012/12/03_08:32:46 INFO: /usr/lib/heartbeat/send_arp -i 200 -r 5 -p /var/run/resource-agents/send_arp-10.10.0.3 eth3 10.10.0.3 auto not_used not_used
Dec 03 08:32:46 [3165] storage1 crmd: notice: process_lrm_event: LRM operation p_ip_drives0_start_0 (call=184, rc=0, cib-update=96, confirmed=true) ok
Dec 03 08:32:46 [3165] storage1 crmd: info: te_rsc_command: Initiating action 34: monitor p_ip_drives0_monitor_30000 on storage1 (local)
Dec 03 08:32:46 [3165] storage1 crmd: info: te_rsc_command: Initiating action 36: start p_ip_drives1_start_0 on storage1 (local)
Dec 03 08:32:46 [3165] storage1 crmd: notice: process_lrm_event: LRM operation p_ip_drives0_monitor_30000 (call=188, rc=0, cib-update=97, confirmed=false) ok
IPaddr2[2752]: 2012/12/03_08:32:46 INFO: ip -f inet addr add 10.10.251.5/16 brd 10.10.255.255 dev eth3
IPaddr2[2752]: 2012/12/03_08:32:46 INFO: ip link set eth3 up
IPaddr2[2752]: 2012/12/03_08:32:46 INFO: /usr/lib/heartbeat/send_arp -i 200 -r 5 -p /var/run/resource-agents/send_arp-10.10.251.5 eth3 10.10.251.5 auto not_used not_used
Dec 03 08:32:46 [3165] storage1 crmd: notice: process_lrm_event: LRM operation p_ip_drives1_start_0 (call=190, rc=0, cib-update=98, confirmed=true) ok
Dec 03 08:32:46 [3165] storage1 crmd: info: te_rsc_command: Initiating action 37: monitor p_ip_drives1_monitor_30000 on storage1 (local)
Dec 03 08:32:46 [3165] storage1 crmd: info: te_rsc_command: Initiating action 39: start p_ip_drives2_start_0 on storage1 (local)
Dec 03 08:32:46 [3165] storage1 crmd: notice: process_lrm_event: LRM operation p_ip_drives1_monitor_30000 (call=194, rc=0, cib-update=99, confirmed=false) ok
IPaddr2[2830]: 2012/12/03_08:32:46 INFO: ip -f inet addr add 10.10.251.6/16 brd 10.10.255.255 dev eth3
IPaddr2[2830]: 2012/12/03_08:32:46 INFO: ip link set eth3 up
IPaddr2[2830]: 2012/12/03_08:32:46 INFO: /usr/lib/heartbeat/send_arp -i 200 -r 5 -p /var/run/resource-agents/send_arp-10.10.251.6 eth3 10.10.251.6 auto not_used not_used
Dec 03 08:32:46 [3165] storage1 crmd: notice: process_lrm_event: LRM operation p_ip_drives2_start_0 (call=196, rc=0, cib-update=100, confirmed=true) ok
Dec 03 08:32:46 [3165] storage1 crmd: info: te_rsc_command: Initiating action 40: monitor p_ip_drives2_monitor_30000 on storage1 (local)
Dec 03 08:32:46 [3165] storage1 crmd: info: te_rsc_command: Initiating action 42: start p_ip_drives3_start_0 on storage1 (local)
Dec 03 08:32:46 [3165] storage1 crmd: notice: process_lrm_event: LRM operation p_ip_drives2_monitor_30000 (call=200, rc=0, cib-update=101, confirmed=false) ok
IPaddr2[2907]: 2012/12/03_08:32:46 INFO: ip -f inet addr add 10.10.0.7/16 brd 10.10.255.255 dev eth3
IPaddr2[2907]: 2012/12/03_08:32:46 INFO: ip link set eth3 up
IPaddr2[2907]: 2012/12/03_08:32:46 INFO: /usr/lib/heartbeat/send_arp -i 200 -r 5 -p /var/run/resource-agents/send_arp-10.10.0.7 eth3 10.10.0.7 auto not_used not_used
Dec 03 08:32:46 [3165] storage1 crmd: notice: process_lrm_event: LRM operation p_ip_drives3_start_0 (call=202, rc=0, cib-update=102, confirmed=true) ok
Dec 03 08:32:46 [3165] storage1 crmd: info: te_rsc_command: Initiating action 43: monitor p_ip_drives3_monitor_30000 on storage1 (local)
Dec 03 08:32:46 [3165] storage1 crmd: notice: process_lrm_event: LRM operation p_ip_drives3_monitor_30000 (call=206, rc=0, cib-update=103, confirmed=false) ok
Dec 03 08:32:46 [3165] storage1 crmd: notice: run_graph: Transition 1 (Complete=62, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-28.bz2): Complete
Dec 03 08:32:46 [3165] storage1 crmd: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
exportfs[3347]: 2012/12/03_08:33:15 INFO: Directory /mnt/storage is exported to 10.0.0.0/255.0.0.0 (started).
Dec 03 08:33:15 [3162] storage1 lrmd: info: get_first_instance: Result: /com/ubuntu/Upstart/jobs/smbd/_
Dec 03 08:33:15 [3162] storage1 lrmd: info: upstart_job_property: Calling GetAll on /com/ubuntu/Upstart/jobs/smbd/_
Dec 03 08:33:15 [3162] storage1 lrmd: info: upstart_job_property: Call to GetAll passed: type '(a{sv})' 1
Dec 03 08:33:15 [3162] storage1 lrmd: info: upstart_job_property: Got value 'running' for /com/ubuntu/Upstart/jobs/smbd/_[state]
Dec 03 08:33:15 [3162] storage1 lrmd: info: upstart_job_running: State of smbd: running
Dec 03 08:33:15 [3162] storage1 lrmd: info: upstart_job_running: smbd is running
Dec 03 08:33:15 [3162] storage1 lrmd: info: get_first_instance: Result: /com/ubuntu/Upstart/jobs/nmbd/_
Dec 03 08:33:15 [3162] storage1 lrmd: info: upstart_job_property: Calling GetAll on /com/ubuntu/Upstart/jobs/nmbd/_
Dec 03 08:33:15 [3162] storage1 lrmd: info: upstart_job_property: Call to GetAll passed: type '(a{sv})' 1
Dec 03 08:33:15 [3162] storage1 lrmd: info: upstart_job_property: Got value 'running' for /com/ubuntu/Upstart/jobs/nmbd/_[state]
Dec 03 08:33:15 [3162] storage1 lrmd: info: upstart_job_running: State of nmbd: running
Dec 03 08:33:15 [3162] storage1 lrmd: info: upstart_job_running: nmbd is running
exportfs[3771]: 2012/12/03_08:33:45 INFO: Directory /mnt/storage is exported to 10.0.0.0/255.0.0.0 (started).
Dec 03 08:33:45 [3162] storage1 lrmd: info: get_first_instance: Result: /com/ubuntu/Upstart/jobs/smbd/_
Dec 03 08:33:45 [3162] storage1 lrmd: info: upstart_job_property: Calling GetAll on /com/ubuntu/Upstart/jobs/smbd/_
Dec 03 08:33:45 [3162] storage1 lrmd: info: upstart_job_property: Call to GetAll passed: type '(a{sv})' 1
Dec 03 08:33:45 [3162] storage1 lrmd: info: upstart_job_property: Got value 'running' for /com/ubuntu/Upstart/jobs/smbd/_[state]
Dec 03 08:33:45 [3162] storage1 lrmd: info: upstart_job_running: State of smbd: running
Dec 03 08:33:45 [3162] storage1 lrmd: info: upstart_job_running: smbd is running
Dec 03 08:33:45 [3162] storage1 lrmd: info: get_first_instance: Result: /com/ubuntu/Upstart/jobs/nmbd/_
Dec 03 08:33:45 [3162] storage1 lrmd: info: upstart_job_property: Calling GetAll on /com/ubuntu/Upstart/jobs/nmbd/_
Dec 03 08:33:45 [3162] storage1 lrmd: info: upstart_job_property: Call to GetAll passed: type '(a{sv})' 1
Dec 03 08:33:45 [3162] storage1 lrmd: info: upstart_job_property: Got value 'running' for /com/ubuntu/Upstart/jobs/nmbd/_[state]
Dec 03 08:33:45 [3162] storage1 lrmd: info: upstart_job_running: State of nmbd: running