- 2013-08-05 11:32:49,442 INFO [agent.manager.AgentManagerImpl] (AgentTaskPool-1:null) Investigating why host 7 has disconnected with event AgentDisconnected
- 2013-08-05 11:32:49,443 DEBUG [agent.manager.AgentManagerImpl] (AgentTaskPool-1:null) checking if agent (7) is alive
- 2013-08-05 11:32:49,448 DEBUG [agent.transport.Request] (AgentTaskPool-1:null) Seq 7-1858602374: Sending { Cmd , MgmtId: 90520732148205, via: 7, Ver: v1, Flags: 100011, [{"CheckHealthCommand":{"wait":50}}] }
- 2013-08-05 11:32:49,451 INFO [agent.manager.AgentAttache] (AgentTaskPool-1:null) Seq 7-1858602374: Unable to send due to Resource [Host:7] is unreachable: Host 7: Channel is closed
- 2013-08-05 11:32:49,451 DEBUG [agent.manager.AgentAttache] (AgentTaskPool-1:null) Seq 7-1858602374: Cancelling.
- 2013-08-05 11:32:49,451 WARN [agent.manager.AgentManagerImpl] (AgentTaskPool-1:null) Resource [Host:7] is unreachable: Host 7: Channel is closed
- 2013-08-05 11:32:49,452 DEBUG [cloud.ha.HighAvailabilityManagerImpl] (AgentTaskPool-1:null) SimpleInvestigator unable to determine the state of the host. Moving on.
- 2013-08-05 11:32:49,452 DEBUG [cloud.ha.HighAvailabilityManagerImpl] (AgentTaskPool-1:null) XenServerInvestigator unable to determine the state of the host. Moving on.
- 2013-08-05 11:32:49,453 DEBUG [cloud.ha.UserVmDomRInvestigator] (AgentTaskPool-1:null) checking if agent (7) is alive
- 2013-08-05 11:32:49,454 DEBUG [cloud.ha.UserVmDomRInvestigator] (AgentTaskPool-1:null) sending ping from (1) to agent's host ip address (192.168.122.32)
- 2013-08-05 11:32:49,456 DEBUG [agent.transport.Request] (AgentTaskPool-1:null) Seq 1-204538710: Sending { Cmd , MgmtId: 90520732148205, via: 1, Ver: v1, Flags: 100011, [{"PingTestCommand":{"_computingHostIp":"192.168.122.32","wait":20}}] }
- 2013-08-05 11:32:49,506 DEBUG [agent.transport.Request] (AgentManager-Handler-11:null) Seq 1-204538710: Processing: { Ans: , MgmtId: 90520732148205, via: 1, Ver: v1, Flags: 10, [{"Answer":{"result":true,"wait":0}}] }
- 2013-08-05 11:32:49,507 DEBUG [agent.transport.Request] (AgentTaskPool-1:null) Seq 1-204538710: Received: { Ans: , MgmtId: 90520732148205, via: 1, Ver: v1, Flags: 10, { Answer } }
- 2013-08-05 11:32:49,507 DEBUG [cloud.ha.AbstractInvestigatorImpl] (AgentTaskPool-1:null) host (192.168.122.32) has been successfully pinged, returning that host is up
- 2013-08-05 11:32:49,507 DEBUG [cloud.ha.UserVmDomRInvestigator] (AgentTaskPool-1:null) ping from (1) to agent's host ip address (192.168.122.32) successful, returning that agent is disconnected
- 2013-08-05 11:32:49,507 DEBUG [cloud.ha.HighAvailabilityManagerImpl] (AgentTaskPool-1:null) null was able to determine host 7 is in Disconnected
- 2013-08-05 11:32:49,507 INFO [agent.manager.AgentManagerImpl] (AgentTaskPool-1:null) The state determined is Disconnected
- 2013-08-05 11:32:49,508 WARN [agent.manager.AgentManagerImpl] (AgentTaskPool-1:null) Agent is disconnected but the host is still up: 7-n2
- 2013-08-05 11:32:49,512 INFO [agent.manager.AgentManagerImpl] (AgentTaskPool-1:null) Host 7 is disconnecting with event AgentDisconnected
- 2013-08-05 11:32:49,514 DEBUG [agent.manager.AgentManagerImpl] (AgentTaskPool-1:null) The next status of agent 7is Alert, current status is Up
- 2013-08-05 11:32:49,514 DEBUG [agent.manager.AgentManagerImpl] (AgentTaskPool-1:null) Deregistering link for 7 with state Alert
- 2013-08-05 11:32:49,514 DEBUG [agent.manager.AgentManagerImpl] (AgentTaskPool-1:null) Remove Agent : 7
- 2013-08-05 11:32:49,514 DEBUG [agent.manager.ConnectedAgentAttache] (AgentTaskPool-1:null) Processing Disconnect.
- 2013-08-05 11:32:49,514 DEBUG [agent.manager.AgentAttache] (AgentTaskPool-1:null) Seq 7-1858600962: Sending disconnect to class com.cloud.network.security.SecurityGroupListener
- 2013-08-05 11:32:49,515 DEBUG [agent.manager.AgentManagerImpl] (AgentTaskPool-1:null) Sending Disconnect to listener: com.cloud.hypervisor.xen.discoverer.XcpServerDiscoverer_EnhancerByCloudStack_dc08546e
- 2013-08-05 11:32:49,515 DEBUG [agent.manager.AgentManagerImpl] (AgentTaskPool-1:null) Sending Disconnect to listener: com.cloud.network.NetworkManagerImpl_EnhancerByCloudStack_b45df077
- 2013-08-05 11:32:49,515 DEBUG [agent.manager.AgentManagerImpl] (AgentTaskPool-1:null) Sending Disconnect to listener: com.cloud.storage.secondary.SecondaryStorageListener
- 2013-08-05 11:32:49,515 DEBUG [agent.manager.AgentManagerImpl] (AgentTaskPool-1:null) Sending Disconnect to listener: com.cloud.network.security.SecurityGroupListener
- 2013-08-05 11:32:49,515 DEBUG [agent.manager.AgentManagerImpl] (AgentTaskPool-1:null) Sending Disconnect to listener: com.cloud.storage.listener.StoragePoolMonitor
- 2013-08-05 11:32:49,515 DEBUG [agent.manager.AgentManagerImpl] (AgentTaskPool-1:null) Sending Disconnect to listener: com.cloud.vm.ClusteredVirtualMachineManagerImpl_EnhancerByCloudStack_e11ea17b
- 2013-08-05 11:32:49,515 DEBUG [agent.manager.AgentManagerImpl] (AgentTaskPool-1:null) Sending Disconnect to listener: com.cloud.storage.LocalStoragePoolListener
- 2013-08-05 11:32:49,515 DEBUG [agent.manager.AgentManagerImpl] (AgentTaskPool-1:null) Sending Disconnect to listener: com.cloud.network.SshKeysDistriMonitor
- 2013-08-05 11:32:49,516 DEBUG [agent.manager.AgentManagerImpl] (AgentTaskPool-1:null) Sending Disconnect to listener: com.cloud.network.router.VirtualNetworkApplianceManagerImpl_EnhancerByCloudStack_7a900e1c
- 2013-08-05 11:32:49,516 DEBUG [agent.manager.AgentManagerImpl] (AgentTaskPool-1:null) Sending Disconnect to listener: com.cloud.network.SshKeysDistriMonitor
- 2013-08-05 11:32:49,516 DEBUG [agent.manager.AgentManagerImpl] (AgentTaskPool-1:null) Sending Disconnect to listener: com.cloud.network.router.VpcVirtualNetworkApplianceManagerImpl_EnhancerByCloudStack_f573d63f
- 2013-08-05 11:32:49,516 DEBUG [agent.manager.AgentManagerImpl] (AgentTaskPool-1:null) Sending Disconnect to listener: com.cloud.storage.upload.UploadListener
- 2013-08-05 11:32:49,517 DEBUG [agent.manager.AgentManagerImpl] (AgentTaskPool-1:null) Sending Disconnect to listener: com.cloud.storage.download.DownloadListener
- 2013-08-05 11:32:49,518 DEBUG [agent.manager.AgentManagerImpl] (AgentTaskPool-1:null) Sending Disconnect to listener: com.cloud.agent.manager.AgentMonitor
- 2013-08-05 11:32:49,518 DEBUG [agent.manager.AgentManagerImpl] (AgentTaskPool-1:null) Sending Disconnect to listener: com.cloud.capacity.StorageCapacityListener
- 2013-08-05 11:32:49,518 DEBUG [agent.manager.AgentManagerImpl] (AgentTaskPool-1:null) Sending Disconnect to listener: com.cloud.capacity.ComputeCapacityListener
- 2013-08-05 11:32:49,518 DEBUG [agent.manager.AgentManagerImpl] (AgentTaskPool-1:null) Sending Disconnect to listener: com.cloud.network.NetworkUsageManagerImpl$DirectNetworkStatsListener
- 2013-08-05 11:32:49,518 DEBUG [cloud.network.NetworkUsageManagerImpl] (AgentTaskPool-1:null) Disconnected called on 7 with status Alert
- 2013-08-05 11:32:49,518 DEBUG [agent.manager.AgentManagerImpl] (AgentTaskPool-1:null) Sending Disconnect to listener: com.cloud.consoleproxy.ConsoleProxyListener
- 2013-08-05 11:32:49,520 DEBUG [cloud.host.Status] (AgentTaskPool-1:null) Transition:[Resource state = Enabled, Agent event = AgentDisconnected, Host id = 7, name = n2]
- 2013-08-05 11:32:49,552 DEBUG [cloud.host.Status] (AgentTaskPool-1:null) Agent status update: [id = 7; name = n2; old status = Up; event = AgentDisconnected; new status = Alert; old update count = 99; new update count = 100]
- 2013-08-05 11:32:49,552 DEBUG [agent.manager.ClusteredAgentManagerImpl] (AgentTaskPool-1:null) Notifying other nodes of to disconnect
- 2013-08-05 11:32:49,555 WARN [cloud.ha.HighAvailabilityManagerImpl] (AgentTaskPool-1:null) Scheduling restart for VMs on host 7
- 2013-08-05 11:32:49,562 DEBUG [cloud.ha.HighAvailabilityManagerImpl] (AgentTaskPool-1:null) Notifying HA Mgr of to restart vm 19-ca8d1be8-7927-4eff-b9f8-5a68e389e733
- 2013-08-05 11:32:49,567 INFO [cloud.ha.HighAvailabilityManagerImpl] (AgentTaskPool-1:null) Schedule vm for HA: VM[User|ca8d1be8-7927-4eff-b9f8-5a68e389e733]
- 2013-08-05 11:32:49,572 INFO [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-0:work-104) Processing HAWork[104-HA-19-Running-Investigating]
- 2013-08-05 11:32:49,575 INFO [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-0:work-104) HA on VM[User|ca8d1be8-7927-4eff-b9f8-5a68e389e733]
- 2013-08-05 11:32:49,576 DEBUG [cloud.ha.HighAvailabilityManagerImpl] (AgentTaskPool-1:null) Notifying HA Mgr of to restart vm 23-28a63a18-9969-4546-9463-f1c8828831b3
- 2013-08-05 11:32:49,581 INFO [cloud.ha.HighAvailabilityManagerImpl] (AgentTaskPool-1:null) Schedule vm for HA: VM[User|28a63a18-9969-4546-9463-f1c8828831b3]
- 2013-08-05 11:32:49,585 INFO [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-1:work-105) Processing HAWork[105-HA-23-Running-Investigating]
- 2013-08-05 11:32:49,588 INFO [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-1:work-105) HA on VM[User|28a63a18-9969-4546-9463-f1c8828831b3]
- 2013-08-05 11:32:49,589 DEBUG [cloud.ha.HighAvailabilityManagerImpl] (AgentTaskPool-1:null) Notifying HA Mgr of to restart vm 25-c88924e9-a8c9-4705-acc8-3237ffcf009d
- 2013-08-05 11:32:49,591 DEBUG [cloud.ha.CheckOnAgentInvestigator] (HA-Worker-0:work-104) Unable to reach the agent for VM[User|ca8d1be8-7927-4eff-b9f8-5a68e389e733]: Resource [Host:7] is unreachable: Host 7: Host with specified id is not in the right state: Alert
- 2013-08-05 11:32:49,594 INFO [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-0:work-104) SimpleInvestigator found VM[User|ca8d1be8-7927-4eff-b9f8-5a68e389e733]to be alive? null
- 2013-08-05 11:32:49,594 INFO [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-0:work-104) XenServerInvestigator found VM[User|ca8d1be8-7927-4eff-b9f8-5a68e389e733]to be alive? null
- 2013-08-05 11:32:49,594 DEBUG [cloud.ha.UserVmDomRInvestigator] (HA-Worker-0:work-104) testing if VM[User|ca8d1be8-7927-4eff-b9f8-5a68e389e733] is alive
- 2013-08-05 11:32:49,597 INFO [cloud.ha.HighAvailabilityManagerImpl] (AgentTaskPool-1:null) Schedule vm for HA: VM[User|c88924e9-a8c9-4705-acc8-3237ffcf009d]
- 2013-08-05 11:32:49,602 INFO [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-2:work-106) Processing HAWork[106-HA-25-Running-Investigating]
- 2013-08-05 11:32:49,604 INFO [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-2:work-106) HA on VM[User|c88924e9-a8c9-4705-acc8-3237ffcf009d]
- 2013-08-05 11:32:49,606 DEBUG [agent.manager.ClusteredAgentManagerImpl] (AgentTaskPool-1:null) Notifying other nodes of to disconnect
- 2013-08-05 11:32:49,596 DEBUG [cloud.ha.CheckOnAgentInvestigator] (HA-Worker-1:work-105) Unable to reach the agent for VM[User|28a63a18-9969-4546-9463-f1c8828831b3]: Resource [Host:7] is unreachable: Host 7: Host with specified id is not in the right state: Alert
- 2013-08-05 11:32:49,606 INFO [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-1:work-105) SimpleInvestigator found VM[User|28a63a18-9969-4546-9463-f1c8828831b3]to be alive? null
- 2013-08-05 11:32:49,607 INFO [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-1:work-105) XenServerInvestigator found VM[User|28a63a18-9969-4546-9463-f1c8828831b3]to be alive? null
- 2013-08-05 11:32:49,607 DEBUG [cloud.ha.UserVmDomRInvestigator] (HA-Worker-1:work-105) testing if VM[User|28a63a18-9969-4546-9463-f1c8828831b3] is alive
- 2013-08-05 11:32:49,612 DEBUG [cloud.ha.CheckOnAgentInvestigator] (HA-Worker-2:work-106) Unable to reach the agent for VM[User|c88924e9-a8c9-4705-acc8-3237ffcf009d]: Resource [Host:7] is unreachable: Host 7: Host with specified id is not in the right state: Alert
- 2013-08-05 11:32:49,612 INFO [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-2:work-106) SimpleInvestigator found VM[User|c88924e9-a8c9-4705-acc8-3237ffcf009d]to be alive? null
- 2013-08-05 11:32:49,612 INFO [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-2:work-106) XenServerInvestigator found VM[User|c88924e9-a8c9-4705-acc8-3237ffcf009d]to be alive? null
- 2013-08-05 11:32:49,613 DEBUG [cloud.ha.UserVmDomRInvestigator] (HA-Worker-2:work-106) testing if VM[User|c88924e9-a8c9-4705-acc8-3237ffcf009d] is alive
- 2013-08-05 11:32:49,623 DEBUG [agent.transport.Request] (HA-Worker-0:work-104) Seq 1-204538711: Sending { Cmd , MgmtId: 90520732148205, via: 1, Ver: v1, Flags: 100011, [{"PingTestCommand":{"_routerIp":"169.254.1.243","_privateIp":"192.168.122.112","wait":20}}] }
- 2013-08-05 11:32:49,628 DEBUG [agent.transport.Request] (HA-Worker-2:work-106) Seq 1-204538712: Sending { Cmd , MgmtId: 90520732148205, via: 1, Ver: v1, Flags: 100011, [{"PingTestCommand":{"_routerIp":"169.254.1.243","_privateIp":"192.168.122.170","wait":20}}] }
- 2013-08-05 11:32:49,632 DEBUG [agent.transport.Request] (HA-Worker-1:work-105) Seq 1-204538713: Sending { Cmd , MgmtId: 90520732148205, via: 1, Ver: v1, Flags: 100011, [{"PingTestCommand":{"_routerIp":"169.254.1.243","_privateIp":"192.168.122.133","wait":20}}] }
- 2013-08-05 11:32:54,097 DEBUG [agent.transport.Request] (AgentManager-Handler-8:null) Seq 1-204538711: Processing: { Ans: , MgmtId: 90520732148205, via: 1, Ver: v1, Flags: 10, [{"Answer":{"result":false,"details":"PING 192.168.122.112 (192.168.122.112): 56 data bytes64 bytes from 192.168.122.151: Destination Host UnreachableVr HL TOS Len ID Flg off TTL Pro cks Src Dst Data 4 5 00 5400 0000 0 0040 40 01 50c4 192.168.122.151 192.168.122.112 --- 192.168.122.112 ping statistics ---1 packets transmitted, 0 packets received, 100% packet lossUnable to ping the vm, exiting","wait":0}}] }
- 2013-08-05 11:32:54,098 DEBUG [agent.transport.Request] (HA-Worker-0:work-104) Seq 1-204538711: Received: { Ans: , MgmtId: 90520732148205, via: 1, Ver: v1, Flags: 10, { Answer } }
- 2013-08-05 11:32:54,098 DEBUG [agent.manager.AgentManagerImpl] (HA-Worker-0:work-104) Details from executing class com.cloud.agent.api.PingTestCommand: PING 192.168.122.112 (192.168.122.112): 56 data bytes64 bytes from 192.168.122.151: Destination Host UnreachableVr HL TOS Len ID Flg off TTL Pro cks Src Dst Data 4 5 00 5400 0000 0 0040 40 01 50c4 192.168.122.151 192.168.122.112 --- 192.168.122.112 ping statistics ---1 packets transmitted, 0 packets received, 100% packet lossUnable to ping the vm, exiting
- 2013-08-05 11:32:54,099 DEBUG [cloud.ha.UserVmDomRInvestigator] (HA-Worker-0:work-104) VM[User|ca8d1be8-7927-4eff-b9f8-5a68e389e733] could not be pinged, returning that it is unknown
- 2013-08-05 11:32:54,099 DEBUG [cloud.ha.UserVmDomRInvestigator] (HA-Worker-0:work-104) Returning null since we're unable to determine state of VM[User|ca8d1be8-7927-4eff-b9f8-5a68e389e733]
- 2013-08-05 11:32:54,099 INFO [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-0:work-104) null found VM[User|ca8d1be8-7927-4eff-b9f8-5a68e389e733]to be alive? null
- 2013-08-05 11:32:54,100 DEBUG [cloud.ha.ManagementIPSystemVMInvestigator] (HA-Worker-0:work-104) Not a System Vm, unable to determine state of VM[User|ca8d1be8-7927-4eff-b9f8-5a68e389e733] returning null
- 2013-08-05 11:32:54,101 DEBUG [cloud.ha.ManagementIPSystemVMInvestigator] (HA-Worker-0:work-104) Testing if VM[User|ca8d1be8-7927-4eff-b9f8-5a68e389e733] is alive
- 2013-08-05 11:32:54,107 DEBUG [cloud.ha.ManagementIPSystemVMInvestigator] (HA-Worker-0:work-104) Unable to find a management nic, cannot ping this system VM, unable to determine state of VM[User|ca8d1be8-7927-4eff-b9f8-5a68e389e733] returning null
- 2013-08-05 11:32:54,107 INFO [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-0:work-104) null found VM[User|ca8d1be8-7927-4eff-b9f8-5a68e389e733]to be alive? null
- 2013-08-05 11:32:54,108 DEBUG [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-0:work-104) Fencing off VM that we don't know the state of
- 2013-08-05 11:32:54,108 DEBUG [cloud.ha.XenServerFencer] (HA-Worker-0:work-104) Don't know how to fence non XenServer hosts KVM
- 2013-08-05 11:32:54,108 INFO [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-0:work-104) Fencer null returned null
- 2013-08-05 11:32:54,115 DEBUG [agent.transport.Request] (HA-Worker-0:work-104) Seq 1-204538714: Sending { Cmd , MgmtId: 90520732148205, via: 1, Ver: v1, Flags: 100011, [{"FenceCommand":{"vmName":"i-2-19-VM","hostGuid":"945a8003-3d66-335b-b49d-dc4e6e40f46e-LibvirtComputingResource","hostIp":"192.168.122.32","inSeq":false,"wait":0}}] }
- 2013-08-05 11:32:54,201 DEBUG [agent.transport.Request] (AgentManager-Handler-10:null) Seq 1-204538714: Processing: { Ans: , MgmtId: 90520732148205, via: 1, Ver: v1, Flags: 10, [{"FenceAnswer":{"result":false,"details":"Heart is still beating...","wait":0}}] }
- 2013-08-05 11:32:54,202 DEBUG [agent.transport.Request] (HA-Worker-0:work-104) Seq 1-204538714: Received: { Ans: , MgmtId: 90520732148205, via: 1, Ver: v1, Flags: 10, { FenceAnswer } }
- 2013-08-05 11:32:54,202 DEBUG [cloud.ha.KVMFencer] (HA-Worker-0:work-104) Unable to fence off VM[User|ca8d1be8-7927-4eff-b9f8-5a68e389e733] on Host[-7-Routing]
- 2013-08-05 11:32:54,203 INFO [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-0:work-104) Fencer KVMFenceBuilder returned false
- 2013-08-05 11:32:54,203 DEBUG [ovm.hypervisor.OvmFencer] (HA-Worker-0:work-104) Don't know how to fence non Ovm hosts KVM
- 2013-08-05 11:32:54,203 INFO [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-0:work-104) Fencer OvmFenceBuilder returned null
- 2013-08-05 11:32:54,204 DEBUG [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-0:work-104) We were unable to fence off the VM VM[User|ca8d1be8-7927-4eff-b9f8-5a68e389e733]
- 2013-08-05 11:32:54,210 INFO [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-0:work-104) Rescheduling HAWork[104-HA-19-Running-Investigating] to try again at Mon Aug 05 11:43:08 CEST 2013
- 2013-08-05 11:32:54,372 DEBUG [cloud.server.StatsCollector] (StatsCollector-1:null) StorageCollector is running...
- 2013-08-05 11:32:54,443 DEBUG [agent.transport.Request] (StatsCollector-1:null) Seq 10-312148543: Received: { Ans: , MgmtId: 90520732148205, via: 10, Ver: v1, Flags: 10, { GetStorageStatsAnswer } }
- 2013-08-05 11:32:54,610 DEBUG [agent.transport.Request] (StatsCollector-1:null) Seq 1-204538715: Received: { Ans: , MgmtId: 90520732148205, via: 1, Ver: v1, Flags: 10, { GetStorageStatsAnswer } }
- 2013-08-05 11:32:57,535 DEBUG [agent.manager.AgentManagerImpl] (AgentManager-Handler-14:null) SeqA 9-49969: Processing Seq 9-49969: { Cmd , MgmtId: -1, via: 9, Ver: v1, Flags: 11, [{"ConsoleProxyLoadReportCommand":{"_proxyVmId":28,"_loadInfo":"{\n \"connections\": []\n}","wait":0}}] }
- 2013-08-05 11:32:57,538 DEBUG [agent.manager.AgentManagerImpl] (AgentManager-Handler-14:null) SeqA 9-49969: Sending Seq 9-49969: { Ans: , MgmtId: 90520732148205, via: 9, Ver: v1, Flags: 100010, [{"AgentControlAnswer":{"result":true,"wait":0}}] }
- 2013-08-05 11:32:58,383 DEBUG [agent.transport.Request] (AgentManager-Handler-13:null) Seq 1-204538713: Processing: { Ans: , MgmtId: 90520732148205, via: 1, Ver: v1, Flags: 10, [{"Answer":{"result":false,"details":"timeout","wait":0}}] }
- 2013-08-05 11:32:58,383 DEBUG [agent.transport.Request] (HA-Worker-1:work-105) Seq 1-204538713: Received: { Ans: , MgmtId: 90520732148205, via: 1, Ver: v1, Flags: 10, { Answer } }
- 2013-08-05 11:32:58,383 DEBUG [agent.manager.AgentManagerImpl] (HA-Worker-1:work-105) Details from executing class com.cloud.agent.api.PingTestCommand: timeout
- 2013-08-05 11:32:58,384 DEBUG [cloud.ha.UserVmDomRInvestigator] (HA-Worker-1:work-105) VM[User|28a63a18-9969-4546-9463-f1c8828831b3] could not be pinged, returning that it is unknown
- 2013-08-05 11:32:58,384 DEBUG [cloud.ha.UserVmDomRInvestigator] (HA-Worker-1:work-105) Returning null since we're unable to determine state of VM[User|28a63a18-9969-4546-9463-f1c8828831b3]
- 2013-08-05 11:32:58,384 INFO [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-1:work-105) null found VM[User|28a63a18-9969-4546-9463-f1c8828831b3]to be alive? null
- 2013-08-05 11:32:58,385 DEBUG [cloud.ha.ManagementIPSystemVMInvestigator] (HA-Worker-1:work-105) Not a System Vm, unable to determine state of VM[User|28a63a18-9969-4546-9463-f1c8828831b3] returning null
- 2013-08-05 11:32:58,385 DEBUG [cloud.ha.ManagementIPSystemVMInvestigator] (HA-Worker-1:work-105) Testing if VM[User|28a63a18-9969-4546-9463-f1c8828831b3] is alive
- 2013-08-05 11:32:58,388 DEBUG [cloud.ha.ManagementIPSystemVMInvestigator] (HA-Worker-1:work-105) Unable to find a management nic, cannot ping this system VM, unable to determine state of VM[User|28a63a18-9969-4546-9463-f1c8828831b3] returning null
- 2013-08-05 11:32:58,388 INFO [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-1:work-105) null found VM[User|28a63a18-9969-4546-9463-f1c8828831b3]to be alive? null
- 2013-08-05 11:32:58,389 DEBUG [agent.transport.Request] (AgentManager-Handler-1:null) Seq 1-204538712: Processing: { Ans: , MgmtId: 90520732148205, via: 1, Ver: v1, Flags: 10, [{"Answer":{"result":false,"details":"timeout","wait":0}}] }
- 2013-08-05 11:32:58,390 DEBUG [agent.transport.Request] (HA-Worker-2:work-106) Seq 1-204538712: Received: { Ans: , MgmtId: 90520732148205, via: 1, Ver: v1, Flags: 10, { Answer } }
- 2013-08-05 11:32:58,390 DEBUG [agent.manager.AgentManagerImpl] (HA-Worker-2:work-106) Details from executing class com.cloud.agent.api.PingTestCommand: timeout
- 2013-08-05 11:32:58,390 DEBUG [cloud.ha.UserVmDomRInvestigator] (HA-Worker-2:work-106) VM[User|c88924e9-a8c9-4705-acc8-3237ffcf009d] could not be pinged, returning that it is unknown
- 2013-08-05 11:32:58,391 DEBUG [cloud.ha.UserVmDomRInvestigator] (HA-Worker-2:work-106) Returning null since we're unable to determine state of VM[User|c88924e9-a8c9-4705-acc8-3237ffcf009d]
- 2013-08-05 11:32:58,391 INFO [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-2:work-106) null found VM[User|c88924e9-a8c9-4705-acc8-3237ffcf009d]to be alive? null
- 2013-08-05 11:32:58,391 DEBUG [cloud.ha.ManagementIPSystemVMInvestigator] (HA-Worker-2:work-106) Not a System Vm, unable to determine state of VM[User|c88924e9-a8c9-4705-acc8-3237ffcf009d] returning null
- 2013-08-05 11:32:58,391 DEBUG [cloud.ha.ManagementIPSystemVMInvestigator] (HA-Worker-2:work-106) Testing if VM[User|c88924e9-a8c9-4705-acc8-3237ffcf009d] is alive
- 2013-08-05 11:32:58,394 DEBUG [cloud.ha.ManagementIPSystemVMInvestigator] (HA-Worker-2:work-106) Unable to find a management nic, cannot ping this system VM, unable to determine state of VM[User|c88924e9-a8c9-4705-acc8-3237ffcf009d] returning null
- 2013-08-05 11:32:58,394 INFO [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-2:work-106) null found VM[User|c88924e9-a8c9-4705-acc8-3237ffcf009d]to be alive? null
- 2013-08-05 11:32:58,398 DEBUG [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-1:work-105) Fencing off VM that we don't know the state of
- 2013-08-05 11:32:58,398 DEBUG [cloud.ha.XenServerFencer] (HA-Worker-1:work-105) Don't know how to fence non XenServer hosts KVM
- 2013-08-05 11:32:58,401 INFO [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-1:work-105) Fencer null returned null
- 2013-08-05 11:32:58,404 DEBUG [agent.transport.Request] (HA-Worker-1:work-105) Seq 1-204538716: Sending { Cmd , MgmtId: 90520732148205, via: 1, Ver: v1, Flags: 100011, [{"FenceCommand":{"vmName":"i-2-23-VM","hostGuid":"945a8003-3d66-335b-b49d-dc4e6e40f46e-LibvirtComputingResource","hostIp":"192.168.122.32","inSeq":false,"wait":0}}] }
- 2013-08-05 11:32:58,404 DEBUG [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-2:work-106) Fencing off VM that we don't know the state of
- 2013-08-05 11:32:58,405 DEBUG [cloud.ha.XenServerFencer] (HA-Worker-2:work-106) Don't know how to fence non XenServer hosts KVM
- 2013-08-05 11:32:58,405 INFO [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-2:work-106) Fencer null returned null
- 2013-08-05 11:32:58,408 DEBUG [agent.transport.Request] (HA-Worker-2:work-106) Seq 1-204538717: Sending { Cmd , MgmtId: 90520732148205, via: 1, Ver: v1, Flags: 100011, [{"FenceCommand":{"vmName":"i-2-25-VM","hostGuid":"945a8003-3d66-335b-b49d-dc4e6e40f46e-LibvirtComputingResource","hostIp":"192.168.122.32","inSeq":false,"wait":0}}] }
- 2013-08-05 11:32:58,535 DEBUG [agent.transport.Request] (AgentManager-Handler-3:null) Seq 1-204538716: Processing: { Ans: , MgmtId: 90520732148205, via: 1, Ver: v1, Flags: 10, [{"FenceAnswer":{"result":false,"details":"Heart is still beating...","wait":0}}] }
- 2013-08-05 11:32:58,535 DEBUG [agent.transport.Request] (HA-Worker-1:work-105) Seq 1-204538716: Received: { Ans: , MgmtId: 90520732148205, via: 1, Ver: v1, Flags: 10, { FenceAnswer } }
- 2013-08-05 11:32:58,535 DEBUG [cloud.ha.KVMFencer] (HA-Worker-1:work-105) Unable to fence off VM[User|28a63a18-9969-4546-9463-f1c8828831b3] on Host[-7-Routing]
- 2013-08-05 11:32:58,536 INFO [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-1:work-105) Fencer KVMFenceBuilder returned false
- 2013-08-05 11:32:58,535 DEBUG [agent.transport.Request] (AgentManager-Handler-4:null) Seq 1-204538717: Processing: { Ans: , MgmtId: 90520732148205, via: 1, Ver: v1, Flags: 10, [{"FenceAnswer":{"result":false,"details":"Heart is still beating...","wait":0}}] }
- 2013-08-05 11:32:58,536 DEBUG [ovm.hypervisor.OvmFencer] (HA-Worker-1:work-105) Don't know how to fence non Ovm hosts KVM
- 2013-08-05 11:32:58,537 INFO [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-1:work-105) Fencer OvmFenceBuilder returned null
- 2013-08-05 11:32:58,537 DEBUG [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-1:work-105) We were unable to fence off the VM VM[User|28a63a18-9969-4546-9463-f1c8828831b3]
- 2013-08-05 11:32:58,539 DEBUG [agent.transport.Request] (HA-Worker-2:work-106) Seq 1-204538717: Received: { Ans: , MgmtId: 90520732148205, via: 1, Ver: v1, Flags: 10, { FenceAnswer } }
- 2013-08-05 11:32:58,539 DEBUG [cloud.ha.KVMFencer] (HA-Worker-2:work-106) Unable to fence off VM[User|c88924e9-a8c9-4705-acc8-3237ffcf009d] on Host[-7-Routing]
- 2013-08-05 11:32:58,545 INFO [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-2:work-106) Fencer KVMFenceBuilder returned false
- 2013-08-05 11:32:58,547 INFO [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-1:work-105) Rescheduling HAWork[105-HA-23-Running-Investigating] to try again at Mon Aug 05 11:43:12 CEST 2013
- 2013-08-05 11:32:58,548 DEBUG [ovm.hypervisor.OvmFencer] (HA-Worker-2:work-106) Don't know how to fence non Ovm hosts KVM
- 2013-08-05 11:32:58,549 INFO [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-2:work-106) Fencer OvmFenceBuilder returned null
- 2013-08-05 11:32:58,551 DEBUG [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-2:work-106) We were unable to fence off the VM VM[User|c88924e9-a8c9-4705-acc8-3237ffcf009d]
- 2013-08-05 11:32:58,555 INFO [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-2:work-106) Rescheduling HAWork[106-HA-25-Running-Investigating] to try again at Mon Aug 05 11:43:12 CEST 2013
- 2013-08-05 11:43:49,650 INFO [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-3:work-105) Processing HAWork[105-HA-23-Running-Investigating]
- 2013-08-05 11:43:49,656 INFO [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-4:work-104) Processing HAWork[104-HA-19-Running-Investigating]
- 2013-08-05 11:43:49,661 INFO [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-3:work-105) HA on VM[User|28a63a18-9969-4546-9463-f1c8828831b3]
- 2013-08-05 11:43:49,661 INFO [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-4:work-104) HA on VM[User|ca8d1be8-7927-4eff-b9f8-5a68e389e733]
- 2013-08-05 11:43:49,666 DEBUG [cloud.ha.CheckOnAgentInvestigator] (HA-Worker-3:work-105) Unable to reach the agent for VM[User|28a63a18-9969-4546-9463-f1c8828831b3]: Resource [Host:7] is unreachable: Host 7: Host with specified id is not in the right state: Alert
- 2013-08-05 11:43:49,667 INFO [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-3:work-105) SimpleInvestigator found VM[User|28a63a18-9969-4546-9463-f1c8828831b3]to be alive? null
- 2013-08-05 11:43:49,667 INFO [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-3:work-105) XenServerInvestigator found VM[User|28a63a18-9969-4546-9463-f1c8828831b3]to be alive? null
- 2013-08-05 11:43:49,667 DEBUG [cloud.ha.UserVmDomRInvestigator] (HA-Worker-3:work-105) testing if VM[User|28a63a18-9969-4546-9463-f1c8828831b3] is alive
- 2013-08-05 11:43:49,667 DEBUG [cloud.ha.CheckOnAgentInvestigator] (HA-Worker-4:work-104) Unable to reach the agent for VM[User|ca8d1be8-7927-4eff-b9f8-5a68e389e733]: Resource [Host:7] is unreachable: Host 7: Host with specified id is not in the right state: Alert
- 2013-08-05 11:43:49,668 INFO [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-4:work-104) SimpleInvestigator found VM[User|ca8d1be8-7927-4eff-b9f8-5a68e389e733]to be alive? null
- 2013-08-05 11:43:49,668 INFO [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-4:work-104) XenServerInvestigator found VM[User|ca8d1be8-7927-4eff-b9f8-5a68e389e733]to be alive? null
- 2013-08-05 11:43:49,668 DEBUG [cloud.ha.UserVmDomRInvestigator] (HA-Worker-4:work-104) testing if VM[User|ca8d1be8-7927-4eff-b9f8-5a68e389e733] is alive
- 2013-08-05 11:43:49,677 DEBUG [agent.transport.Request] (HA-Worker-3:work-105) Seq 1-204538739: Sending { Cmd , MgmtId: 90520732148205, via: 1, Ver: v1, Flags: 100011, [{"PingTestCommand":{"_routerIp":"169.254.1.243","_privateIp":"192.168.122.133","wait":20}}] }
- 2013-08-05 11:43:49,678 DEBUG [agent.transport.Request] (HA-Worker-4:work-104) Seq 1-204538740: Sending { Cmd , MgmtId: 90520732148205, via: 1, Ver: v1, Flags: 100011, [{"PingTestCommand":{"_routerIp":"169.254.1.243","_privateIp":"192.168.122.112","wait":20}}] }
- 2013-08-05 11:43:51,203 DEBUG [agent.manager.AgentManagerImpl] (AgentManager-Handler-6:null) Ping from 9
- 2013-08-05 11:43:52,703 DEBUG [agent.manager.AgentManagerImpl] (AgentManager-Handler-9:null) SeqA 9-50081: Processing Seq 9-50081: { Cmd , MgmtId: -1, via: 9, Ver: v1, Flags: 11, [{"ConsoleProxyLoadReportCommand":{"_proxyVmId":28,"_loadInfo":"{\n \"connections\": []\n}","wait":0}}] }
- 2013-08-05 11:43:52,706 DEBUG [agent.manager.AgentManagerImpl] (AgentManager-Handler-9:null) SeqA 9-50081: Sending Seq 9-50081: { Ans: , MgmtId: 90520732148205, via: 9, Ver: v1, Flags: 100010, [{"AgentControlAnswer":{"result":true,"wait":0}}] }
- 2013-08-05 11:43:54,243 INFO [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-0:work-106) Processing HAWork[106-HA-25-Running-Investigating]
- 2013-08-05 11:43:54,250 INFO [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-0:work-106) HA on VM[User|c88924e9-a8c9-4705-acc8-3237ffcf009d]
- 2013-08-05 11:43:54,257 DEBUG [cloud.ha.CheckOnAgentInvestigator] (HA-Worker-0:work-106) Unable to reach the agent for VM[User|c88924e9-a8c9-4705-acc8-3237ffcf009d]: Resource [Host:7] is unreachable: Host 7: Host with specified id is not in the right state: Alert
- 2013-08-05 11:43:54,258 INFO [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-0:work-106) SimpleInvestigator found VM[User|c88924e9-a8c9-4705-acc8-3237ffcf009d]to be alive? null
- 2013-08-05 11:43:54,259 INFO [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-0:work-106) XenServerInvestigator found VM[User|c88924e9-a8c9-4705-acc8-3237ffcf009d]to be alive? null
- 2013-08-05 11:43:54,259 DEBUG [cloud.ha.UserVmDomRInvestigator] (HA-Worker-0:work-106) testing if VM[User|c88924e9-a8c9-4705-acc8-3237ffcf009d] is alive
- 2013-08-05 11:43:54,266 DEBUG [agent.transport.Request] (HA-Worker-0:work-106) Seq 1-204538741: Sending { Cmd , MgmtId: 90520732148205, via: 1, Ver: v1, Flags: 100011, [{"PingTestCommand":{"_routerIp":"169.254.1.243","_privateIp":"192.168.122.170","wait":20}}] }
- 2013-08-05 11:43:54,315 DEBUG [agent.transport.Request] (AgentManager-Handler-5:null) Seq 1-204538739: Processing: { Ans: , MgmtId: 90520732148205, via: 1, Ver: v1, Flags: 10, [{"Answer":{"result":false,"details":"PING 192.168.122.133 (192.168.122.133): 56 data bytes64 bytes from 192.168.122.151: Destination Host UnreachableVr HL TOS Len ID Flg off TTL Pro cks Src Dst Data 4 5 00 5400 0000 0 0040 40 01 3bc4 192.168.122.151 192.168.122.133 --- 192.168.122.133 ping statistics ---1 packets transmitted, 0 packets received, 100% packet lossUnable to ping the vm, exiting","wait":0}}] }
- 2013-08-05 11:43:54,316 DEBUG [agent.transport.Request] (HA-Worker-3:work-105) Seq 1-204538739: Received: { Ans: , MgmtId: 90520732148205, via: 1, Ver: v1, Flags: 10, { Answer } }
- 2013-08-05 11:43:54,316 DEBUG [agent.manager.AgentManagerImpl] (HA-Worker-3:work-105) Details from executing class com.cloud.agent.api.PingTestCommand: PING 192.168.122.133 (192.168.122.133): 56 data bytes64 bytes from 192.168.122.151: Destination Host UnreachableVr HL TOS Len ID Flg off TTL Pro cks Src Dst Data 4 5 00 5400 0000 0 0040 40 01 3bc4 192.168.122.151 192.168.122.133 --- 192.168.122.133 ping statistics ---1 packets transmitted, 0 packets received, 100% packet lossUnable to ping the vm, exiting
- 2013-08-05 11:43:54,316 DEBUG [cloud.ha.UserVmDomRInvestigator] (HA-Worker-3:work-105) VM[User|28a63a18-9969-4546-9463-f1c8828831b3] could not be pinged, returning that it is unknown
- 2013-08-05 11:43:54,316 DEBUG [cloud.ha.UserVmDomRInvestigator] (HA-Worker-3:work-105) Returning null since we're unable to determine state of VM[User|28a63a18-9969-4546-9463-f1c8828831b3]
- 2013-08-05 11:43:54,316 INFO [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-3:work-105) null found VM[User|28a63a18-9969-4546-9463-f1c8828831b3]to be alive? null
- 2013-08-05 11:43:54,317 DEBUG [cloud.ha.ManagementIPSystemVMInvestigator] (HA-Worker-3:work-105) Not a System Vm, unable to determine state of VM[User|28a63a18-9969-4546-9463-f1c8828831b3] returning null
- 2013-08-05 11:43:54,317 DEBUG [cloud.ha.ManagementIPSystemVMInvestigator] (HA-Worker-3:work-105) Testing if VM[User|28a63a18-9969-4546-9463-f1c8828831b3] is alive
- 2013-08-05 11:43:54,316 DEBUG [agent.transport.Request] (AgentManager-Handler-7:null) Seq 1-204538740: Processing: { Ans: , MgmtId: 90520732148205, via: 1, Ver: v1, Flags: 10, [{"Answer":{"result":false,"details":"PING 192.168.122.112 (192.168.122.112): 56 data bytes64 bytes from 192.168.122.151: Destination Host UnreachableVr HL TOS Len ID Flg off TTL Pro cks Src Dst Data 4 5 00 5400 0000 0 0040 40 01 50c4 192.168.122.151 192.168.122.112 --- 192.168.122.112 ping statistics ---1 packets transmitted, 0 packets received, 100% packet lossUnable to ping the vm, exiting","wait":0}}] }
- 2013-08-05 11:43:54,317 DEBUG [agent.transport.Request] (HA-Worker-4:work-104) Seq 1-204538740: Received: { Ans: , MgmtId: 90520732148205, via: 1, Ver: v1, Flags: 10, { Answer } }
- 2013-08-05 11:43:54,317 DEBUG [agent.manager.AgentManagerImpl] (HA-Worker-4:work-104) Details from executing class com.cloud.agent.api.PingTestCommand: PING 192.168.122.112 (192.168.122.112): 56 data bytes64 bytes from 192.168.122.151: Destination Host UnreachableVr HL TOS Len ID Flg off TTL Pro cks Src Dst Data 4 5 00 5400 0000 0 0040 40 01 50c4 192.168.122.151 192.168.122.112 --- 192.168.122.112 ping statistics ---1 packets transmitted, 0 packets received, 100% packet lossUnable to ping the vm, exiting
- 2013-08-05 11:43:54,318 DEBUG [cloud.ha.UserVmDomRInvestigator] (HA-Worker-4:work-104) VM[User|ca8d1be8-7927-4eff-b9f8-5a68e389e733] could not be pinged, returning that it is unknown
- 2013-08-05 11:43:54,318 DEBUG [cloud.ha.UserVmDomRInvestigator] (HA-Worker-4:work-104) Returning null since we're unable to determine state of VM[User|ca8d1be8-7927-4eff-b9f8-5a68e389e733]
- 2013-08-05 11:43:54,318 INFO [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-4:work-104) null found VM[User|ca8d1be8-7927-4eff-b9f8-5a68e389e733]to be alive? null
- 2013-08-05 11:43:54,318 DEBUG [cloud.ha.ManagementIPSystemVMInvestigator] (HA-Worker-4:work-104) Not a System Vm, unable to determine state of VM[User|ca8d1be8-7927-4eff-b9f8-5a68e389e733] returning null
- 2013-08-05 11:43:54,318 DEBUG [cloud.ha.ManagementIPSystemVMInvestigator] (HA-Worker-4:work-104) Testing if VM[User|ca8d1be8-7927-4eff-b9f8-5a68e389e733] is alive
- 2013-08-05 11:43:54,322 DEBUG [cloud.ha.ManagementIPSystemVMInvestigator] (HA-Worker-3:work-105) Unable to find a management nic, cannot ping this system VM, unable to determine state of VM[User|28a63a18-9969-4546-9463-f1c8828831b3] returning null
- 2013-08-05 11:43:54,322 INFO [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-3:work-105) null found VM[User|28a63a18-9969-4546-9463-f1c8828831b3]to be alive? null
- 2013-08-05 11:43:54,322 DEBUG [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-3:work-105) Fencing off VM that we don't know the state of
- 2013-08-05 11:43:54,323 DEBUG [cloud.ha.XenServerFencer] (HA-Worker-3:work-105) Don't know how to fence non XenServer hosts KVM
- 2013-08-05 11:43:54,323 INFO [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-3:work-105) Fencer null returned null
- 2013-08-05 11:43:54,322 DEBUG [cloud.ha.ManagementIPSystemVMInvestigator] (HA-Worker-4:work-104) Unable to find a management nic, cannot ping this system VM, unable to determine state of VM[User|ca8d1be8-7927-4eff-b9f8-5a68e389e733] returning null
- 2013-08-05 11:43:54,323 INFO [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-4:work-104) null found VM[User|ca8d1be8-7927-4eff-b9f8-5a68e389e733]to be alive? null
- 2013-08-05 11:43:54,324 DEBUG [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-4:work-104) Fencing off VM that we don't know the state of
- 2013-08-05 11:43:54,324 DEBUG [cloud.ha.XenServerFencer] (HA-Worker-4:work-104) Don't know how to fence non XenServer hosts KVM
- 2013-08-05 11:43:54,324 INFO [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-4:work-104) Fencer null returned null
- 2013-08-05 11:43:54,328 DEBUG [agent.transport.Request] (HA-Worker-3:work-105) Seq 1-204538742: Sending { Cmd , MgmtId: 90520732148205, via: 1, Ver: v1, Flags: 100011, [{"FenceCommand":{"vmName":"i-2-23-VM","hostGuid":"945a8003-3d66-335b-b49d-dc4e6e40f46e-LibvirtComputingResource","hostIp":"192.168.122.32","inSeq":false,"wait":0}}] }
- 2013-08-05 11:43:54,329 DEBUG [agent.transport.Request] (HA-Worker-4:work-104) Seq 1-204538743: Sending { Cmd , MgmtId: 90520732148205, via: 1, Ver: v1, Flags: 100011, [{"FenceCommand":{"vmName":"i-2-19-VM","hostGuid":"945a8003-3d66-335b-b49d-dc4e6e40f46e-LibvirtComputingResource","hostIp":"192.168.122.32","inSeq":false,"wait":0}}] }
- 2013-08-05 11:43:54,407 DEBUG [agent.transport.Request] (AgentManager-Handler-11:null) Seq 1-204538743: Processing: { Ans: , MgmtId: 90520732148205, via: 1, Ver: v1, Flags: 10, [{"FenceAnswer":{"result":true,"wait":0}}] }
- 2013-08-05 11:43:54,407 DEBUG [agent.transport.Request] (AgentManager-Handler-8:null) Seq 1-204538742: Processing: { Ans: , MgmtId: 90520732148205, via: 1, Ver: v1, Flags: 10, [{"FenceAnswer":{"result":true,"wait":0}}] }
- 2013-08-05 11:43:54,407 DEBUG [agent.transport.Request] (HA-Worker-3:work-105) Seq 1-204538742: Received: { Ans: , MgmtId: 90520732148205, via: 1, Ver: v1, Flags: 10, { FenceAnswer } }
- 2013-08-05 11:43:54,408 INFO [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-3:work-105) Fencer KVMFenceBuilder returned true
- 2013-08-05 11:43:54,408 DEBUG [agent.transport.Request] (HA-Worker-4:work-104) Seq 1-204538743: Received: { Ans: , MgmtId: 90520732148205, via: 1, Ver: v1, Flags: 10, { FenceAnswer } }
- 2013-08-05 11:43:54,408 INFO [cloud.ha.HighAvailabilityManagerImpl] (HA-Worker-4:work-104) Fencer KVMFenceBuilder returned true
- 2013-08-05 11:43:54,416 DEBUG [cloud.capacity.CapacityManagerImpl] (HA-Worker-4:work-104) VM state transitted from :Running to Stopping with event: StopRequestedvm's original host id: 1 new host id: 7 host id before state transition: 7
- 2013-08-05 11:43:54,416 DEBUG [cloud.capacity.CapacityManagerImpl] (HA-Worker-3:work-105) VM state transitted from :Running to Stopping with event: StopRequestedvm's original host id: 1 new host id: 7 host id before state transition: 7
- 2013-08-05 11:43:54,420 WARN [cloud.vm.VirtualMachineManagerImpl] (HA-Worker-4:work-104) Unable to stop vm, agent unavailable: com.cloud.exception.AgentUnavailableException: Resource [Host:7] is unreachable: Host 7: Host with specified id is not in the right state: Alert
- 2013-08-05 11:43:54,421 WARN [cloud.vm.VirtualMachineManagerImpl] (HA-Worker-4:work-104) Unable to actually stop VM[User|ca8d1be8-7927-4eff-b9f8-5a68e389e733] but continue with release because it's a force stop
- 2013-08-05 11:43:54,420 WARN [cloud.vm.VirtualMachineManagerImpl] (HA-Worker-3:work-105) Unable to stop vm, agent unavailable: com.cloud.exception.AgentUnavailableException: Resource [Host:7] is unreachable: Host 7: Host with specified id is not in the right state: Alert
- 2013-08-05 11:43:54,421 WARN [cloud.vm.VirtualMachineManagerImpl] (HA-Worker-3:work-105) Unable to actually stop VM[User|28a63a18-9969-4546-9463-f1c8828831b3] but continue with release because it's a force stop
- 2013-08-05 11:43:54,423 DEBUG [cloud.vm.VirtualMachineManagerImpl] (HA-Worker-3:work-105) VM[User|28a63a18-9969-4546-9463-f1c8828831b3] is stopped on the host. Proceeding to release resource held.
- 2013-08-05 11:43:54,424 DEBUG [cloud.vm.VirtualMachineManagerImpl] (HA-Worker-4:work-104) VM[User|ca8d1be8-7927-4eff-b9f8-5a68e389e733] is stopped on the host. Proceeding to release resource held.
- 2013-08-05 11:43:54,436 DEBUG [cloud.network.NetworkManagerImpl] (HA-Worker-4:work-104) Changing active number of nics for network id=204 on -1
- 2013-08-05 11:43:54,436 DEBUG [cloud.network.NetworkManagerImpl] (HA-Worker-3:work-105) Changing active number of nics for network id=204 on -1
- 2013-08-05 11:43:54,440 DEBUG [cloud.network.NetworkManagerImpl] (HA-Worker-4:work-104) Asking VirtualRouter to release Nic[66-19-78f7d9c1-9668-486f-8351-323d2471b34b-192.168.122.112]
- 2013-08-05 11:43:54,440 DEBUG [cloud.network.NetworkManagerImpl] (HA-Worker-4:work-104) Asking Ovs to release Nic[66-19-78f7d9c1-9668-486f-8351-323d2471b34b-192.168.122.112]
- 2013-08-05 11:43:54,441 DEBUG [cloud.network.NetworkManagerImpl] (HA-Worker-4:work-104) Asking SecurityGroupProvider to release Nic[66-19-78f7d9c1-9668-486f-8351-323d2471b34b-192.168.122.112]
- 2013-08-05 11:43:54,441 DEBUG [cloud.network.NetworkManagerImpl] (HA-Worker-4:work-104) Asking VpcVirtualRouter to release Nic[66-19-78f7d9c1-9668-486f-8351-323d2471b34b-192.168.122.112]
- 2013-08-05 11:43:54,441 DEBUG [cloud.network.NetworkManagerImpl] (HA-Worker-4:work-104) Asking NiciraNvp to release Nic[66-19-78f7d9c1-9668-486f-8351-323d2471b34b-192.168.122.112]
- 2013-08-05 11:43:54,441 DEBUG [cloud.network.NetworkManagerImpl] (HA-Worker-3:work-105) Asking VirtualRouter to release Nic[74-23-884e62bb-9cd6-4880-8232-13692d151df3-192.168.122.133]
- 2013-08-05 11:43:54,441 DEBUG [cloud.network.NetworkManagerImpl] (HA-Worker-3:work-105) Asking Ovs to release Nic[74-23-884e62bb-9cd6-4880-8232-13692d151df3-192.168.122.133]
- 2013-08-05 11:43:54,442 DEBUG [cloud.network.NetworkManagerImpl] (HA-Worker-3:work-105) Asking SecurityGroupProvider to release Nic[74-23-884e62bb-9cd6-4880-8232-13692d151df3-192.168.122.133]
- 2013-08-05 11:43:54,442 DEBUG [cloud.network.NetworkManagerImpl] (HA-Worker-3:work-105) Asking VpcVirtualRouter to release Nic[74-23-884e62bb-9cd6-4880-8232-13692d151df3-192.168.122.133]
- 2013-08-05 11:43:54,442 DEBUG [cloud.network.NetworkManagerImpl] (HA-Worker-3:work-105) Asking NiciraNvp to release Nic[74-23-884e62bb-9cd6-4880-8232-13692d151df3-192.168.122.133]
- 2013-08-05 11:43:54,443 DEBUG [network.element.NiciraNvpElement] (HA-Worker-3:work-105) Checking if NiciraNvpElement can handle service Connectivity on network defaultGuestNetwork
- 2013-08-05 11:43:54,443 DEBUG [cloud.vm.VirtualMachineManagerImpl] (HA-Worker-3:work-105) Successfully released network resources for the vm VM[User|28a63a18-9969-4546-9463-f1c8828831b3]
- 2013-08-05 11:43:54,443 DEBUG [cloud.vm.VirtualMachineManagerImpl] (HA-Worker-3:work-105) Successfully released storage resources for the vm VM[User|28a63a18-9969-4546-9463-f1c8828831b3]
- 2013-08-05 11:43:54,441 DEBUG [network.element.NiciraNvpElement] (HA-Worker-4:work-104) Checking if NiciraNvpElement can handle service Connectivity on network defaultGuestNetwork
- 2013-08-05 11:43:54,444 DEBUG [cloud.vm.VirtualMachineManagerImpl] (HA-Worker-4:work-104) Successfully released network resources for the vm VM[User|ca8d1be8-7927-4eff-b9f8-5a68e389e733]
- 2013-08-05 11:43:54,444 DEBUG [cloud.vm.VirtualMachineManagerImpl] (HA-Worker-4:work-104) Successfully released storage resources for the vm VM[User|ca8d1be8-7927-4eff-b9f8-5a68e389e733]
- 2013-08-05 11:43:54,508 DEBUG [cloud.capacity.CapacityManagerImpl] (HA-Worker-4:work-104) VM state transitted from :Stopping to Stopped with event: OperationSucceededvm's original host id: 1 new host id: null host id before state transition: 7
- 2013-08-05 11:43:54,509 DEBUG [cloud.capacity.CapacityManagerImpl] (HA-Worker-3:work-105) VM state transitted from :Stopping to Stopped with event: OperationSucceededvm's original host id: 1 new host id: null host id before state transition: 7
- 2013-08-05 11:43:54,513 DEBUG [cloud.capacity.CapacityManagerImpl] (HA-Worker-4:work-104) Hosts's actual total CPU: 3299 and CPU after applying overprovisioning: 3299
- 2013-08-05 11:43:54,514 DEBUG [cloud.capacity.CapacityManagerImpl] (HA-Worker-4:work-104) release cpu from host: 7, old used: 1500,reserved: 0, actual total: 3299, total with overprovisioning: 3299; new used: 1000,reserved:500; movedfromreserved: false,moveToReserveredtrue
- 2013-08-05 11:43:54,514 DEBUG [cloud.capacity.CapacityManagerImpl] (HA-Worker-4:work-104) release mem from host: 7, old used: 1610612736,reserved: 0, total: 4145119232; new used: 1073741824,reserved:536870912; movedfromreserved: false,moveToReserveredtrue
- 2013-08-05 11:43:54,516 DEBUG [cloud.capacity.CapacityManagerImpl] (HA-Worker-3:work-105) Hosts's actual total CPU: 3299 and CPU after applying overprovisioning: 3299
- 2013-08-05 11:43:54,517 DEBUG [cloud.capacity.CapacityManagerImpl] (HA-Worker-3:work-105) release cpu from host: 7, old used: 1000,reserved: 500, actual total: 3299, total with overprovisioning: 3299; new used: 500,reserved:1000; movedfromreserved: false,moveToReserveredtrue
- 2013-08-05 11:43:54,517 DEBUG [cloud.capacity.CapacityManagerImpl] (HA-Worker-3:work-105) release mem from host: 7, old used: 1073741824,reserved: 536870912, total: 4145119232; new used: 536870912,reserved:1073741824; movedfromreserved: false,moveToReserveredtrue
- 2013-08-05 11:43:54,539 DEBUG [cloud.capacity.CapacityManagerImpl] (HA-Worker-4:work-104) VM state transitted from :Stopped to Starting with event: StartRequestedvm's original host id: 1 new host id: null host id before state transition: null
- 2013-08-05 11:43:54,539 DEBUG [cloud.vm.VirtualMachineManagerImpl] (HA-Worker-4:work-104) Successfully transitioned to start state for VM[User|ca8d1be8-7927-4eff-b9f8-5a68e389e733] reservation id = 9d90ccf4-4f8e-4517-960a-858a0dbd3eeb
- 2013-08-05 11:43:54,543 DEBUG [cloud.vm.VirtualMachineManagerImpl] (HA-Worker-4:work-104) Trying to deploy VM, vm has dcId: 1 and podId: 1
- 2013-08-05 11:43:54,543 DEBUG [cloud.vm.VirtualMachineManagerImpl] (HA-Worker-4:work-104) Deploy avoids pods: null, clusters: null, hosts: null
- 2013-08-05 11:43:54,546 DEBUG [cloud.capacity.CapacityManagerImpl] (HA-Worker-3:work-105) VM state transitted from :Stopped to Starting with event: StartRequestedvm's original host id: 1 new host id: null host id before state transition: null
- 2013-08-05 11:43:54,546 DEBUG [cloud.vm.VirtualMachineManagerImpl] (HA-Worker-3:work-105) Successfully transitioned to start state for VM[User|28a63a18-9969-4546-9463-f1c8828831b3] reservation id = 9dd9a9fd-6910-4477-867d-7d8365e1db5b