[CODE]PVEA1:

Jan 16 23:17:01 pvea1 CRON[3192939]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Jan 16 23:17:01 pvea1 CRON[3192940]: (root) CMD (cd / && run-parts --report /etc/cron.hourly)
Jan 16 23:17:01 pvea1 CRON[3192939]: pam_unix(cron:session): session closed for user root
Jan 16 23:17:37 pvea1 kernel: tg3 0000:02:00.0 eno3: Link is down
Jan 16 23:17:37 pvea1 kernel: tg3 0000:01:00.1 eno2: Link is down
Jan 16 23:17:37 pvea1 kernel: vmbr2: port 2(eno3) entered disabled state
Jan 16 23:17:37 pvea1 corosync[4222]: [KNET ] link: host: 2 link: 0 is down
Jan 16 23:17:37 pvea1 corosync[4222]: [KNET ] host: host: 2 (passive) best link: 0 (pri: 1)
Jan 16 23:17:37 pvea1 corosync[4222]: [KNET ] host: host: 2 has no active links
Jan 16 23:17:37 pvea1 corosync[4222]: [TOTEM ] Token has not been received in 2250 ms
Jan 16 23:17:38 pvea1 kernel: vmbr1: port 1(eno2) entered disabled state
Jan 16 23:17:38 pvea1 corosync[4222]: [TOTEM ] A processor failed, forming new configuration: token timed out (3000ms), waiting 3600ms for consensus.
Jan 16 23:17:39 pvea1 kernel: tg3 0000:02:00.0 eno3: Link is up at 100 Mbps, full duplex
Jan 16 23:17:39 pvea1 kernel: tg3 0000:02:00.0 eno3: Flow control is on for TX and on for RX
Jan 16 23:17:39 pvea1 kernel: tg3 0000:02:00.0 eno3: EEE is enabled
Jan 16 23:17:39 pvea1 kernel: vmbr2: port 2(eno3) entered blocking state
Jan 16 23:17:39 pvea1 kernel: vmbr2: port 2(eno3) entered forwarding state
Jan 16 23:17:40 pvea1 kernel: tg3 0000:01:00.1 eno2: Link is up at 100 Mbps, full duplex
Jan 16 23:17:40 pvea1 kernel: tg3 0000:01:00.1 eno2: Flow control is on for TX and on for RX
Jan 16 23:17:40 pvea1 kernel: tg3 0000:01:00.1 eno2: EEE is enabled
Jan 16 23:17:40 pvea1 kernel: vmbr1: port 1(eno2) entered blocking state
Jan 16 23:17:40 pvea1 kernel: vmbr1: port 1(eno2) entered forwarding state
Jan 16 23:17:42 pvea1 corosync[4222]: [QUORUM] Sync members[1]: 1
Jan 16 23:17:42 pvea1 corosync[4222]: [QUORUM] Sync left[1]: 2
Jan 16 23:17:42 pvea1 corosync[4222]: [VOTEQ ] waiting for quorum device Qdevice poll (but maximum for 30000 ms)
Jan 16 23:17:42 pvea1 corosync[4222]: [TOTEM ] A new membership (1.103) was formed. Members left: 2
Jan 16 23:17:42 pvea1 corosync[4222]: [TOTEM ] Failed to receive the leave message. failed: 2
Jan 16 23:17:42 pvea1 pmxcfs[4217]: [dcdb] notice: members: 1/4217
Jan 16 23:17:42 pvea1 pmxcfs[4217]: [status] notice: members: 1/4217
Jan 16 23:17:43 pvea1 pmxcfs[4217]: [status] notice: cpg_send_message retry 10
Jan 16 23:17:44 pvea1 pmxcfs[4217]: [status] notice: cpg_send_message retry 20
Jan 16 23:17:45 pvea1 pmxcfs[4217]: [status] notice: cpg_send_message retry 30
Jan 16 23:17:46 pvea1 pmxcfs[4217]: [status] notice: cpg_send_message retry 40
Jan 16 23:17:46 pvea1 corosync[4222]: [QUORUM] Members[1]: 1
Jan 16 23:17:46 pvea1 corosync[4222]: [MAIN ] Completed service synchronization, ready to provide service.
Jan 16 23:17:46 pvea1 pmxcfs[4217]: [status] notice: cpg_send_message retried 42 times
Jan 16 23:17:46 pvea1 pve-ha-crm[4435]: node 'pvea2': state changed from 'online' => 'unknown'
Jan 16 23:18:13 pvea1 pvescheduler[3787580]: INFO: Finished Backup of VM 100 (00:18:07)
Jan 16 23:18:16 pvea1 pvescheduler[3787580]: INFO: Backup job finished successfully
Jan 16 23:18:36 pvea1 pve-ha-crm[4435]: service 'vm:101': state changed from 'started' to 'fence'
Jan 16 23:18:36 pvea1 pve-ha-crm[4435]: node 'pvea2': state changed from 'unknown' => 'fence'
Jan 16 23:18:36 pvea1 pve-ha-crm[4435]: lost lock 'ha_agent_pvea2_lock - can't get cfs lock
Jan 16 23:18:36 pvea1 pve-ha-crm[4435]: successfully acquired lock 'ha_agent_pvea2_lock'
Jan 16 23:18:36 pvea1 pve-ha-crm[4435]: fencing: acknowledged - got agent lock for node 'pvea2'
Jan 16 23:18:36 pvea1 pve-ha-crm[4435]: node 'pvea2': state changed from 'fence' => 'unknown'
Jan 16 23:18:37 pvea1 postfix/pickup[3784564]: 029391D719: uid=0 from=<root>
Jan 16 23:18:37 pvea1 postfix/cleanup[3433470]: 029391D719: message-id=<20240116221837.029391D719@pvea1.au.lan>
Jan 16 23:18:37 pvea1 postfix/qmgr[4208]: 029391D719: from=<root@pvea1.au.lan>, size=1225, nrcpt=1 (queue active)
Jan 16 23:18:37 pvea1 postfix/pickup[3784564]: 13C921D71A: uid=0 from=<root>
Jan 16 23:18:37 pvea1 postfix/cleanup[3433470]: 13C921D71A: message-id=<20240116221837.13C921D71A@pvea1.au.lan>
Jan 16 23:18:37 pvea1 pve-ha-crm[4435]: service 'vm:101': state changed from 'fence' to 'recovery'
Jan 16 23:18:37 pvea1 pve-ha-crm[4435]: recover service 'vm:101' from fenced node 'pvea2' to node 'pvea1'
Jan 16 23:18:37 pvea1 pve-ha-crm[4435]: service 'vm:101': state changed from 'recovery' to 'started' (node = pvea1)
Jan 16 23:18:37 pvea1 postfix/qmgr[4208]: 13C921D71A: from=<root@pvea1.au.lan>, size=1291, nrcpt=1 (queue active)
Jan 16 23:18:37 pvea1 proxmox-mail-fo[3433531]: pvea1 proxmox-mail-forward[3433531]: forward mail to <nope@nope.nope>
Jan 16 23:18:37 pvea1 proxmox-mail-fo[3433533]: pvea1 proxmox-mail-forward[3433533]: forward mail to <nope@nope.nope>
Jan 16 23:18:37 pvea1 postfix/pickup[3784564]: 423031D71B: uid=65534 from=<root>
Jan 16 23:18:37 pvea1 postfix/cleanup[3433470]: 423031D71B: message-id=<20240116221837.13C921D71A@pvea1.au.lan>
Jan 16 23:18:37 pvea1 postfix/local[3433474]: 029391D719: to=<root@pvea1.au.lan>, orig_to=<root>, relay=local, delay=0.37, delays=0.18/0.13/0/0.06, dsn=2.0.0, status=sent (delivered to command: /usr/bin/proxmox-mail-forward)
Jan 16 23:18:37 pvea1 postfix/local[3433530]: 13C921D71A: to=<root@pvea1.au.lan>, orig_to=<root>, relay=local, delay=0.26, delays=0.14/0.07/0/0.06, dsn=2.0.0, status=sent (delivered to command: /usr/bin/proxmox-mail-forward)
Jan 16 23:18:37 pvea1 postfix/qmgr[4208]: 029391D719: removed
Jan 16 23:18:37 pvea1 postfix/qmgr[4208]: 13C921D71A: removed
Jan 16 23:18:37 pvea1 postfix/qmgr[4208]: 423031D71B: from=<root@pvea1.au.lan>, size=1456, nrcpt=1 (queue active)
Jan 16 23:18:37 pvea1 postfix/pickup[3784564]: 4B4AE1D71C: uid=65534 from=<root>
Jan 16 23:18:37 pvea1 postfix/cleanup[3433470]: 4B4AE1D71C: message-id=<20240116221837.029391D719@pvea1.au.lan>
Jan 16 23:18:37 pvea1 postfix/qmgr[4208]: 4B4AE1D71C: from=<root@pvea1.au.lan>, size=1390, nrcpt=1 (queue active)
Jan 16 23:18:37 pvea1 postfix/smtp[3433541]: 423031D71B: to=<nope@nope.nope>, relay=none, delay=0.24, delays=0.09/0.01/0.15/0, dsn=5.4.4, status=bounced (Host or domain name not found. Name service error for name=agcolatecnovite.it type=AAAA: Host not found)
Jan 16 23:18:37 pvea1 postfix/smtp[3433548]: 4B4AE1D71C: to=<nope@nope.nope>, relay=none, delay=0.25, delays=0.12/0.01/0.12/0, dsn=5.4.4, status=bounced (Host or domain name not found. Name service error for name=agcolatecnovite.it type=AAAA: Host not found)
Jan 16 23:18:37 pvea1 postfix/qmgr[4208]: 423031D71B: removed
Jan 16 23:18:37 pvea1 postfix/cleanup[3433470]: 7A8501D98D: message-id=<20240116221837.7A8501D98D@pvea1.au.lan>
Jan 16 23:18:37 pvea1 postfix/qmgr[4208]: 4B4AE1D71C: removed
Jan 16 23:18:37 pvea1 postfix/cleanup[3433470]: 7AA581D98E: message-id=<20240116221837.7AA581D98E@pvea1.au.lan>
Jan 16 23:18:41 pvea1 pve-ha-lrm[4446]: watchdog active
Jan 16 23:18:41 pvea1 pve-ha-lrm[4446]: status change wait_for_agent_lock => active
Jan 16 23:18:41 pvea1 pve-ha-lrm[3433667]: starting service vm:101
Jan 16 23:18:41 pvea1 pve-ha-lrm[3433668]: start VM 101: UPID:pvea1:003464C4:0A5FEC70:65A700C1:qmstart:101:root@pam:
Jan 16 23:18:41 pvea1 pve-ha-lrm[3433667]: <root@pam> starting task UPID:pvea1:003464C4:0A5FEC70:65A700C1:qmstart:101:root@pam:
Jan 16 23:18:43 pvea1 systemd[1]: Started 101.scope.
Jan 16 23:18:43 pvea1 kernel: device tap101i0 entered promiscuous mode
Jan 16 23:18:44 pvea1 kernel: vmbr0v2: port 3(fwpr101p0) entered blocking state
Jan 16 23:18:44 pvea1 kernel: vmbr0v2: port 3(fwpr101p0) entered disabled state
Jan 16 23:18:44 pvea1 kernel: device fwpr101p0 entered promiscuous mode
Jan 16 23:18:44 pvea1 kernel: vmbr0v2: port 3(fwpr101p0) entered blocking state
Jan 16 23:18:44 pvea1 kernel: vmbr0v2: port 3(fwpr101p0) entered forwarding state
Jan 16 23:18:44 pvea1 kernel: fwbr101i0: port 1(fwln101i0) entered blocking state
Jan 16 23:18:44 pvea1 kernel: fwbr101i0: port 1(fwln101i0) entered disabled state
Jan 16 23:18:44 pvea1 kernel: device fwln101i0 entered promiscuous mode
Jan 16 23:18:44 pvea1 kernel: fwbr101i0: port 1(fwln101i0) entered blocking state
Jan 16 23:18:44 pvea1 kernel: fwbr101i0: port 1(fwln101i0) entered forwarding state
Jan 16 23:18:44 pvea1 kernel: fwbr101i0: port 2(tap101i0) entered blocking state
Jan 16 23:18:44 pvea1 kernel: fwbr101i0: port 2(tap101i0) entered disabled state
Jan 16 23:18:44 pvea1 kernel: fwbr101i0: port 2(tap101i0) entered blocking state
Jan 16 23:18:44 pvea1 kernel: fwbr101i0: port 2(tap101i0) entered forwarding state
Jan 16 23:18:44 pvea1 pve-ha-lrm[3433667]: <root@pam> end task UPID:pvea1:003464C4:0A5FEC70:65A700C1:qmstart:101:root@pam: OK
Jan 16 23:18:44 pvea1 pve-ha-lrm[3433667]: service status vm:101 started
Jan 16 23:19:03 pvea1 pvescheduler[3459945]: VM 101 qmp command failed - VM 101 qmp command 'guest-ping' failed - got timeout
Jan 16 23:19:03 pvea1 pvescheduler[3459945]: QEMU Guest Agent is not running - VM 101 qmp command 'guest-ping' failed - got timeout
Jan 16 23:19:22 pvea1 pvescheduler[3459945]: 101-0: got unexpected replication job error - command '/usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=pvea2' root@192.168.29.30 pvecm mtunnel -migration_network 10.99.99.2/24 -get_migration_ip' failed: exit code 255
Jan 16 23:19:22 pvea1 postfix/pickup[3784564]: 69A621D71E: uid=0 from=<root>
Jan 16 23:19:22 pvea1 postfix/cleanup[3433470]: 69A621D71E: message-id=<20240116221922.69A621D71E@pvea1.au.lan>
Jan 16 23:19:22 pvea1 postfix/qmgr[4208]: 69A621D71E: from=<root@pvea1.au.lan>, size=756, nrcpt=1 (queue active)
Jan 16 23:19:22 pvea1 proxmox-mail-fo[3470338]: pvea1 proxmox-mail-forward[3470338]: forward mail to <nope@nope.nope>
Jan 16 23:19:22 pvea1 postfix/pickup[3784564]: 7683D1D5AD: uid=65534 from=<root>
Jan 16 23:19:22 pvea1 postfix/cleanup[3433583]: 7683D1D5AD: message-id=<20240116221922.69A621D71E@pvea1.au.lan>
Jan 16 23:19:22 pvea1 postfix/local[3433474]: 69A621D71E: to=<root@pvea1.au.lan>, orig_to=<root>, relay=local, delay=0.08, delays=0.05/0/0/0.03, dsn=2.0.0, status=sent (delivered to command: /usr/bin/proxmox-mail-forward)
Jan 16 23:19:22 pvea1 postfix/qmgr[4208]: 69A621D71E: removed
Jan 16 23:19:22 pvea1 postfix/qmgr[4208]: 7683D1D5AD: from=<root@pvea1.au.lan>, size=921, nrcpt=1 (queue active)
Jan 16 23:19:22 pvea1 postfix/smtp[3433541]: 7683D1D5AD: to=<nope@nope.nope>, relay=none, delay=0.15, delays=0.03/0/0.12/0, dsn=5.4.4, status=bounced (Host or domain name not found. Name service error for name=agcolatecnovite.it type=AAAA: Host not found)
Jan 16 23:19:22 pvea1 postfix/qmgr[4208]: 7683D1D5AD: removed
Jan 16 23:19:22 pvea1 postfix/cleanup[3433470]: 994321D98F: message-id=<20240116221922.994321D98F@pvea1.au.lan>
Jan 16 23:20:03 pvea1 pvescheduler[3476774]: 100-0: got unexpected replication job error - command '/usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=pvea2' root@192.168.29.30 pvecm mtunnel -migration_network 10.99.99.2/24 -get_migration_ip' failed: exit code 255
Jan 16 23:20:03 pvea1 postfix/pickup[3784564]: A7BFE1CE34: uid=0 from=<root>
Jan 16 23:20:03 pvea1 postfix/cleanup[3433583]: A7BFE1CE34: message-id=<20240116222003.A7BFE1CE34@pvea1.au.lan>
Jan 16 23:20:03 pvea1 postfix/qmgr[4208]: A7BFE1CE34: from=<root@pvea1.au.lan>, size=760, nrcpt=1 (queue active)
Jan 16 23:20:03 pvea1 proxmox-mail-fo[3476895]: pvea1 proxmox-mail-forward[3476895]: forward mail to <nope@nope.nope>
Jan 16 23:20:03 pvea1 postfix/pickup[3784564]: BCE741D71F: uid=65534 from=<root>
Jan 16 23:20:03 pvea1 postfix/cleanup[3433470]: BCE741D71F: message-id=<20240116222003.A7BFE1CE34@pvea1.au.lan>
Jan 16 23:20:03 pvea1 postfix/local[3433530]: A7BFE1CE34: to=<root@pvea1.au.lan>, orig_to=<root>, relay=local, delay=0.11, delays=0.04/0/0/0.06, dsn=2.0.0, status=sent (delivered to command: /usr/bin/proxmox-mail-forward)
Jan 16 23:20:03 pvea1 postfix/qmgr[4208]: A7BFE1CE34: removed
Jan 16 23:20:03 pvea1 postfix/qmgr[4208]: BCE741D71F: from=<root@pvea1.au.lan>, size=925, nrcpt=1 (queue active)
Jan 16 23:20:03 pvea1 postfix/smtp[3433548]: BCE741D71F: to=<nope@nope.nope>, relay=none, delay=0.15, delays=0.02/0.01/0.12/0, dsn=5.4.4, status=bounced (Host or domain name not found. Name service error for name=agcolatecnovite.it type=AAAA: Host not found)
Jan 16 23:20:03 pvea1 postfix/qmgr[4208]: BCE741D71F: removed
Jan 16 23:20:03 pvea1 postfix/cleanup[3433583]: E22061CE35: message-id=<20240116222003.E22061CE35@pvea1.au.lan>
Jan 16 23:20:40 pvea1 kernel: tg3 0000:01:00.1 eno2: Link is down
Jan 16 23:20:41 pvea1 kernel: vmbr1: port 1(eno2) entered disabled state
Jan 16 23:20:41 pvea1 kernel: tg3 0000:02:00.0 eno3: Link is down
Jan 16 23:20:42 pvea1 kernel: vmbr2: port 2(eno3) entered disabled state
Jan 16 23:20:43 pvea1 kernel: tg3 0000:02:00.0 eno3: Link is up at 1000 Mbps, full duplex
Jan 16 23:20:43 pvea1 kernel: tg3 0000:02:00.0 eno3: Flow control is on for TX and on for RX
Jan 16 23:20:43 pvea1 kernel: tg3 0000:02:00.0 eno3: EEE is enabled
Jan 16 23:20:43 pvea1 kernel: vmbr2: port 2(eno3) entered blocking state
Jan 16 23:20:43 pvea1 kernel: vmbr2: port 2(eno3) entered forwarding state
Jan 16 23:20:44 pvea1 kernel: tg3 0000:01:00.1 eno2: Link is up at 1000 Mbps, full duplex
Jan 16 23:20:44 pvea1 kernel: tg3 0000:01:00.1 eno2: Flow control is on for TX and on for RX
Jan 16 23:20:44 pvea1 kernel: tg3 0000:01:00.1 eno2: EEE is enabled
Jan 16 23:20:44 pvea1 kernel: vmbr1: port 1(eno2) entered blocking state
Jan 16 23:20:44 pvea1 kernel: vmbr1: port 1(eno2) entered forwarding state
Jan 16 23:20:44 pvea1 corosync[4222]: [KNET ] rx: host: 2 link: 0 is up
Jan 16 23:20:44 pvea1 corosync[4222]: [KNET ] link: Resetting MTU for link 0 because host 2 joined
Jan 16 23:20:44 pvea1 corosync[4222]: [KNET ] host: host: 2 (passive) best link: 0 (pri: 1)
Jan 16 23:20:44 pvea1 corosync[4222]: [KNET ] pmtud: Global data MTU changed to: 1397
Jan 16 23:20:46 pvea1 corosync[4222]: [QUORUM] Sync members[2]: 1 2
Jan 16 23:20:46 pvea1 corosync[4222]: [QUORUM] Sync joined[1]: 2
Jan 16 23:20:46 pvea1 corosync[4222]: [VOTEQ ] waiting for quorum device Qdevice poll (but maximum for 30000 ms)
Jan 16 23:20:46 pvea1 corosync[4222]: [TOTEM ] A new membership (1.108) was formed. Members joined: 2
Jan 16 23:20:46 pvea1 corosync[4222]: [QUORUM] Members[2]: 1 2
Jan 16 23:20:46 pvea1 corosync[4222]: [MAIN ] Completed service synchronization, ready to provide service.
Jan 16 23:20:48 pvea1 pmxcfs[4217]: [dcdb] notice: members: 1/4217, 2/12277
Jan 16 23:20:48 pvea1 pmxcfs[4217]: [dcdb] notice: starting data syncronisation
Jan 16 23:20:48 pvea1 pmxcfs[4217]: [status] notice: members: 1/4217, 2/12277
Jan 16 23:20:48 pvea1 pmxcfs[4217]: [status] notice: starting data syncronisation
Jan 16 23:20:48 pvea1 pmxcfs[4217]: [dcdb] notice: received sync request (epoch 1/4217/0000000D)
Jan 16 23:20:48 pvea1 pmxcfs[4217]: [status] notice: received sync request (epoch 1/4217/0000000D)
Jan 16 23:20:48 pvea1 pmxcfs[4217]: [dcdb] notice: received all states
Jan 16 23:20:48 pvea1 pmxcfs[4217]: [dcdb] notice: leader is 1/4217
Jan 16 23:20:48 pvea1 pmxcfs[4217]: [dcdb] notice: synced members: 1/4217
Jan 16 23:20:48 pvea1 pmxcfs[4217]: [dcdb] notice: start sending inode updates
Jan 16 23:20:48 pvea1 pmxcfs[4217]: [dcdb] notice: sent all (11) updates
Jan 16 23:20:48 pvea1 pmxcfs[4217]: [dcdb] notice: all data is up to date
Jan 16 23:20:48 pvea1 pmxcfs[4217]: [status] notice: received all states
Jan 16 23:20:48 pvea1 pmxcfs[4217]: [status] notice: all data is up to date
Jan 16 23:20:50 pvea1 pmxcfs[4217]: [status] notice: received log
Jan 16 23:20:50 pvea1 pmxcfs[4217]: [status] notice: received log
Jan 16 23:20:56 pvea1 pve-ha-crm[4435]: node 'pvea2': state changed from 'unknown' => 'online'
Jan 16 23:20:56 pvea1 pve-ha-crm[4435]: migrate service 'vm:101' to node 'pvea2' (running)
Jan 16 23:20:56 pvea1 pve-ha-crm[4435]: service 'vm:101': state changed from 'started' to 'migrate' (node = pvea1, target = pvea2)
Jan 16 23:21:01 pvea1 pve-ha-lrm[3481555]: <root@pam> starting task UPID:pvea1:00351FD4:0A60230D:65A7014D:qmigrate:101:root@pam:
Jan 16 23:21:06 pvea1 pve-ha-lrm[3481555]: Task 'UPID:pvea1:00351FD4:0A60230D:65A7014D:qmigrate:101:root@pam:' still active, waiting
Jan 16 23:21:11 pvea1 pve-ha-lrm[3481555]: Task 'UPID:pvea1:00351FD4:0A60230D:65A7014D:qmigrate:101:root@pam:' still active, waiting
Jan 16 23:21:16 pvea1 pve-ha-lrm[3481555]: Task 'UPID:pvea1:00351FD4:0A60230D:65A7014D:qmigrate:101:root@pam:' still active, waiting
Jan 16 23:21:21 pvea1 pve-ha-lrm[3481555]: Task 'UPID:pvea1:00351FD4:0A60230D:65A7014D:qmigrate:101:root@pam:' still active, waiting
Jan 16 23:21:21 pvea1 pve-ha-lrm[3481556]: zfs error: could not find any snapshots to destroy; check snapshot names.
Jan 16 23:21:24 pvea1 pmxcfs[4217]: [status] notice: received log
Jan 16 23:21:25 pvea1 pmxcfs[4217]: [status] notice: received log
Jan 16 23:21:26 pvea1 pve-ha-lrm[3481555]: Task 'UPID:pvea1:00351FD4:0A60230D:65A7014D:qmigrate:101:root@pam:' still active, waiting
[/CODE]