root@pve1:/etc/pve/qemu-server# journalctl --since "2024-02-1 18:55:00"
Feb 01 18:57:03 pve1 kernel: zd16: p1
Feb 01 18:57:03 pve1 pvedaemon[1279]: <root@pam> end task UPID:pve1:00000896:0000441B:65BC2D12:qmrestore:110:root@pam: OK
Feb 01 18:58:15 pve1 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories...
Feb 01 18:58:15 pve1 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Feb 01 18:58:15 pve1 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories.
Feb 01 18:58:15 pve1 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Feb 01 18:58:44 pve1 pvedaemon[1279]: <root@pam> starting task UPID:pve1:0000168C:00017D42:65BC3034:qmstart:110:root@pam:
Feb 01 18:58:44 pve1 pvedaemon[5772]: start VM 110: UPID:pve1:0000168C:00017D42:65BC3034:qmstart:110:root@pam:
Feb 01 18:58:44 pve1 kernel: Histogram named irq intervals (ms, count, last_update_jiffy)
Feb 01 18:58:44 pve1 kernel: Total: 0
Feb 01 18:58:44 pve1 kernel: Histogram named deferred intervals (ms, count, last_update_jiffy)
Feb 01 18:58:44 pve1 kernel: Total: 0
Feb 01 18:58:44 pve1 kernel: Histogram named irq to deferred intervals (ms, count, last_update_jiffy)
Feb 01 18:58:44 pve1 kernel: Total: 0
Feb 01 18:58:44 pve1 kernel: Histogram named encoder/vbi read() intervals (ms, count, last_update_jiffy)
Feb 01 18:58:44 pve1 kernel: Total: 0
Feb 01 18:58:44 pve1 kernel: Histogram named encoder/vbi poll() intervals (ms, count, last_update_jiffy)
Feb 01 18:58:44 pve1 kernel: Total: 0
Feb 01 18:58:44 pve1 kernel: Histogram named encoder/vbi read() intervals (ms, count, last_update_jiffy)
Feb 01 18:58:44 pve1 kernel: Total: 0
Feb 01 18:58:44 pve1 kernel: Histogram named encoder/vbi poll() intervals (ms, count, last_update_jiffy)
Feb 01 18:58:44 pve1 kernel: Total: 0
Feb 01 18:58:44 pve1 kernel: pcieport 0000:00:1c.3: Enabling MPC IRBNCE
Feb 01 18:58:44 pve1 kernel: pcieport 0000:00:1c.3: Intel PCH root port ACS workaround enabled
Feb 01 18:58:45 pve1 kernel: pcieport 0000:00:1c.4: Enabling MPC IRBNCE
Feb 01 18:58:45 pve1 kernel: pcieport 0000:00:1c.4: Intel PCH root port ACS workaround enabled
Feb 01 18:58:45 pve1 systemd[1]: Created slice qemu.slice - Slice /qemu.
Feb 01 18:58:45 pve1 systemd[1]: Started 110.scope.
Feb 01 18:58:46 pve1 kernel: tap110i0: entered promiscuous mode
Feb 01 18:58:46 pve1 kernel: vmbr0: port 2(tap110i0) entered blocking state
Feb 01 18:58:46 pve1 kernel: vmbr0: port 2(tap110i0) entered disabled state
Feb 01 18:58:46 pve1 kernel: tap110i0: entered allmulticast mode
Feb 01 18:58:46 pve1 kernel: vmbr0: port 2(tap110i0) entered blocking state
Feb 01 18:58:46 pve1 kernel: vmbr0: port 2(tap110i0) entered forwarding state
Feb 01 18:58:46 pve1 kernel: tap110i1: entered promiscuous mode
Feb 01 18:58:46 pve1 kernel: vmbr1: port 2(tap110i1) entered blocking state
Feb 01 18:58:46 pve1 kernel: vmbr1: port 2(tap110i1) entered disabled state
Feb 01 18:58:46 pve1 kernel: tap110i1: entered allmulticast mode
Feb 01 18:58:46 pve1 kernel: igb 0000:05:00.0 eno2: entered promiscuous mode
Feb 01 18:58:46 pve1 kernel: vmbr1: port 2(tap110i1) entered blocking state
Feb 01 18:58:46 pve1 kernel: vmbr1: port 2(tap110i1) entered forwarding state
Feb 01 18:58:47 pve1 kernel: tap110i2: entered promiscuous mode
Feb 01 18:58:47 pve1 kernel: vmbr1: port 3(tap110i2) entered blocking state
Feb 01 18:58:47 pve1 kernel: vmbr1: port 3(tap110i2) entered disabled state
Feb 01 18:58:47 pve1 kernel: tap110i2: entered allmulticast mode
Feb 01 18:58:47 pve1 kernel: vmbr1: port 3(tap110i2) entered blocking state
Feb 01 18:58:47 pve1 kernel: vmbr1: port 3(tap110i2) entered forwarding state
Feb 01 18:58:51 pve1 kernel: kvm invoked oom-killer: gfp_mask=0x140dc2(GFP_HIGHUSER|__GFP_COMP|__GFP_ZERO), order=0, oom_score_adj=0
Feb 01 18:58:51 pve1 kernel: CPU: 5 PID: 5786 Comm: kvm Tainted: P OE 6.5.11-7-pve #1
Feb 01 18:58:51 pve1 kernel: Hardware name: Intel Corporation S1200RP_SE/S1200RP_SE, BIOS S1200RP.86B.03.04.0002.110820161604 11/08/2016
Feb 01 18:58:51 pve1 kernel: Call Trace:
Feb 01 18:58:51 pve1 kernel: <TASK>
Feb 01 18:58:51 pve1 kernel: dump_stack_lvl+0x48/0x70
Feb 01 18:58:51 pve1 kernel: dump_stack+0x10/0x20
Feb 01 18:58:51 pve1 kernel: dump_header+0x4f/0x260
Feb 01 18:58:51 pve1 kernel: oom_kill_process+0x10d/0x1c0
Feb 01 18:58:51 pve1 kernel: out_of_memory+0x270/0x560
Feb 01 18:58:51 pve1 kernel: __alloc_pages+0x114f/0x12e0
Feb 01 18:58:51 pve1 kernel: __folio_alloc+0x1d/0x60
Feb 01 18:58:51 pve1 kernel: ? policy_node+0x69/0x80
Feb 01 18:58:51 pve1 kernel: vma_alloc_folio+0x9f/0x3a0
Feb 01 18:58:51 pve1 kernel: do_anonymous_page+0x76/0x3c0
Feb 01 18:58:51 pve1 kernel: __handle_mm_fault+0xb50/0xc30
Feb 01 18:58:51 pve1 kernel: handle_mm_fault+0x164/0x360
Feb 01 18:58:51 pve1 kernel: __get_user_pages+0x1f5/0x630
Feb 01 18:58:51 pve1 kernel: ? sysvec_apic_timer_interrupt+0xa6/0xd0
Feb 01 18:58:51 pve1 kernel: __gup_longterm_locked+0x27e/0xc20
Feb 01 18:58:51 pve1 kernel: ? __domain_mapping+0x280/0x4a0
Feb 01 18:58:51 pve1 kernel: pin_user_pages_remote+0x7a/0xb0
Feb 01 18:58:51 pve1 kernel: vaddr_get_pfns+0x78/0x290 [vfio_iommu_type1]
Feb 01 18:58:51 pve1 kernel: vfio_pin_pages_remote+0x370/0x4e0 [vfio_iommu_type1]
Feb 01 18:58:51 pve1 kernel: ? intel_iommu_iotlb_sync_map+0x8f/0x100
Feb 01 18:58:51 pve1 kernel: vfio_iommu_type1_ioctl+0x10c7/0x1af0 [vfio_iommu_type1]
Feb 01 18:58:51 pve1 kernel: vfio_fops_unl_ioctl+0x6b/0x380 [vfio]
Feb 01 18:58:51 pve1 kernel: ? __fget_light+0xa5/0x120
Feb 01 18:58:51 pve1 kernel: __x64_sys_ioctl+0xa3/0xf0
Feb 01 18:58:51 pve1 kernel: do_syscall_64+0x5b/0x90
Feb 01 18:58:51 pve1 kernel: ? __rseq_handle_notify_resume+0xa5/0x4d0
Feb 01 18:58:51 pve1 kernel: ? task_mm_cid_work+0x1a1/0x240
Feb 01 18:58:51 pve1 kernel: ? exit_to_user_mode_prepare+0xa5/0x190
Feb 01 18:58:51 pve1 kernel: ? syscall_exit_to_user_mode+0x37/0x60
Feb 01 18:58:51 pve1 kernel: ? do_syscall_64+0x67/0x90
Feb 01 18:58:51 pve1 kernel: ? exc_page_fault+0x94/0x1b0
Feb 01 18:58:51 pve1 kernel: entry_SYSCALL_64_after_hwframe+0x6e/0xd8
Feb 01 18:58:51 pve1 kernel: RIP: 0033:0x7f847df28b5b
Feb 01 18:58:51 pve1 kernel: Code: 00 48 89 44 24 18 31 c0 48 8d 44 24 60 c7 04 24 10 00 00 00 48 89 44 24 08 48 8d 44 24 20 48 89 44 24 10 b8 10 00 00 00 0f 05 <89> c2 3d 00 >
Feb 01 18:58:51 pve1 kernel: RSP: 002b:00007ffde4f57870 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
Feb 01 18:58:51 pve1 kernel: RAX: ffffffffffffffda RBX: 000056221343f7b0 RCX: 00007f847df28b5b
Feb 01 18:58:51 pve1 kernel: RDX: 00007ffde4f578d0 RSI: 0000000000003b71 RDI: 0000000000000034
Feb 01 18:58:51 pve1 kernel: RBP: 0000000100000000 R08: 0000000000000000 R09: ffffffffffffffff
Feb 01 18:58:51 pve1 kernel: R10: 0000000180000000 R11: 0000000000000246 R12: 0000000180000000
Feb 01 18:58:51 pve1 kernel: R13: 0000000180000000 R14: 00007ffde4f578d0 R15: 000056221343f7b0
Feb 01 18:58:51 pve1 kernel: </TASK>
Feb 01 18:58:51 pve1 kernel: Mem-Info:
Feb 01 18:58:51 pve1 kernel: active_anon:1976429 inactive_anon:202522 isolated_anon:0
active_file:49 inactive_file:4 isolated_file:0
unevictable:3453 dirty:66 writeback:14
slab_reclaimable:11310 slab_unreclaimable:99089
mapped:15584 shmem:12828 pagetables:6219
sec_pagetables:0 bounce:0
kernel_misc_reclaimable:0
free:35759 free_pcp:612 free_cma:0
Feb 01 18:58:51 pve1 kernel: Node 0 active_anon:7905716kB inactive_anon:810088kB active_file:196kB inactive_file:16kB unevictable:13812kB isolated(anon):0kB isolated(file):0kB>
Feb 01 18:58:51 pve1 kernel: Node 0 DMA free:13312kB boost:0kB min:60kB low:72kB high:84kB reserved_highatomic:0KB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_f>
Feb 01 18:58:51 pve1 kernel: lowmem_reserve[]: 0 2098 15833 15833 15833
Feb 01 18:58:51 pve1 kernel: Node 0 DMA32 free:65436kB boost:0kB min:8948kB low:11184kB high:13420kB reserved_highatomic:2048KB active_anon:2034896kB inactive_anon:23148kB act>
Feb 01 18:58:51 pve1 kernel: lowmem_reserve[]: 0 0 13734 13734 13734
Feb 01 18:58:51 pve1 kernel: Node 0 Normal free:64288kB boost:0kB min:58568kB low:73208kB high:87848kB reserved_highatomic:6144KB active_anon:5910752kB inactive_anon:747008kB >
Feb 01 18:58:51 pve1 kernel: lowmem_reserve[]: 0 0 0 0 0
Feb 01 18:58:51 pve1 kernel: Node 0 DMA: 0*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 1*1024kB (U) 2*2048kB (UM) 2*4096kB (M) = 13312kB
Feb 01 18:58:51 pve1 kernel: Node 0 DMA32: 116*4kB (MH) 212*8kB (MH) 219*16kB (UMH) 210*32kB (UMH) 169*64kB (UMH) 130*128kB (UMH) 61*256kB (UMH) 19*512kB (UMH) 0*1024kB 0*2048>
Feb 01 18:58:51 pve1 kernel: Node 0 Normal: 292*4kB (UMEH) 615*8kB (UMEH) 622*16kB (MEH) 636*32kB (UMEH) 255*64kB (UMEH) 88*128kB (UMH) 0*256kB 0*512kB 0*1024kB 0*2048kB 0*409>
Feb 01 18:58:51 pve1 kernel: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
Feb 01 18:58:51 pve1 kernel: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
Feb 01 18:58:51 pve1 kernel: 15884 total pagecache pages
Feb 01 18:58:51 pve1 kernel: 0 pages in swap cache
Feb 01 18:58:51 pve1 kernel: Free swap = 0kB
Feb 01 18:58:51 pve1 kernel: Total swap = 0kB
Feb 01 18:58:51 pve1 kernel: 4178361 pages RAM
Feb 01 18:58:51 pve1 kernel: 0 pages HighMem/MovableOnly
Feb 01 18:58:51 pve1 kernel: 103122 pages reserved
Feb 01 18:58:51 pve1 kernel: 0 pages hwpoisoned
Feb 01 18:58:51 pve1 kernel: Tasks state (memory values in pages):
Feb 01 18:58:51 pve1 kernel: [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
Feb 01 18:58:51 pve1 kernel: [ 495] 0 495 15492 768 147456 0 -250 systemd-journal
Feb 01 18:58:51 pve1 kernel: [ 509] 0 509 6781 928 73728 0 -1000 systemd-udevd
Feb 01 18:58:51 pve1 kernel: [ 755] 0 755 19796 288 53248 0 0 pvefw-logger
Feb 01 18:58:51 pve1 kernel: [ 758] 103 758 1969 512 53248 0 0 rpcbind
Feb 01 18:58:51 pve1 kernel: [ 774] 102 774 2304 544 57344 0 -900 dbus-daemon
Feb 01 18:58:51 pve1 kernel: [ 778] 0 778 38187 320 61440 0 -1000 lxcfs
Feb 01 18:58:51 pve1 kernel: [ 780] 0 780 69539 448 90112 0 0 pve-lxc-syscall
Feb 01 18:58:51 pve1 kernel: [ 783] 0 783 55444 640 81920 0 0 rsyslogd
Feb 01 18:58:51 pve1 kernel: [ 784] 0 784 1766 211 53248 0 0 ksmtuned
Feb 01 18:58:51 pve1 kernel: [ 786] 0 786 2981 832 57344 0 0 smartd
Feb 01 18:58:51 pve1 kernel: [ 787] 0 787 1327 192 53248 0 0 qmeventd
Feb 01 18:58:51 pve1 kernel: [ 799] 0 799 6338 960 73728 0 0 systemd-logind
Feb 01 18:58:51 pve1 kernel: [ 800] 0 800 583 224 45056 0 -1000 watchdog-mux
Feb 01 18:58:51 pve1 kernel: [ 805] 0 805 60164 864 102400 0 0 zed
Feb 01 18:58:51 pve1 kernel: [ 945] 0 945 1256 288 40960 0 0 lxc-monitord
Feb 01 18:58:51 pve1 kernel: [ 949] 0 949 1223 288 53248 0 0 upsmon
Feb 01 18:58:51 pve1 kernel: [ 963] 0 963 2166 448 61440 0 0 upsmon
Feb 01 18:58:51 pve1 kernel: [ 964] 0 964 3333 396 57344 0 0 iscsid
Feb 01 18:58:51 pve1 kernel: [ 968] 0 968 3459 3279 61440 0 -17 iscsid
Feb 01 18:58:51 pve1 kernel: [ 981] 0 981 1468 448 49152 0 0 agetty
Feb 01 18:58:51 pve1 kernel: [ 989] 0 989 3853 1344 73728 0 -1000 sshd
Feb 01 18:58:51 pve1 kernel: [ 1020] 101 1020 4715 586 57344 0 0 chronyd
Feb 01 18:58:51 pve1 kernel: [ 1024] 101 1024 2633 499 57344 0 0 chronyd
Feb 01 18:58:51 pve1 kernel: [ 1052] 0 1052 1656 863 45056 0 0 apache2
Feb 01 18:58:51 pve1 kernel: [ 1054] 33 1054 1886 737 49152 0 0 apache2
Feb 01 18:58:51 pve1 kernel: [ 1055] 33 1055 499746 802 278528 0 0 apache2
Feb 01 18:58:51 pve1 kernel: [ 1056] 33 1056 499746 802 278528 0 0 apache2
Feb 01 18:58:51 pve1 kernel: [ 1128] 0 1128 165457 690 172032 0 0 rrdcached
Feb 01 18:58:51 pve1 kernel: [ 1143] 0 1143 133296 15229 389120 0 0 pmxcfs
Feb 01 18:58:51 pve1 kernel: [ 1234] 0 1234 10664 613 69632 0 0 master
Feb 01 18:58:51 pve1 kernel: [ 1235] 106 1235 10763 576 73728 0 0 pickup
Feb 01 18:58:51 pve1 kernel: [ 1236] 106 1236 10810 672 69632 0 0 qmgr
Feb 01 18:58:51 pve1 kernel: [ 1241] 0 1241 1652 512 49152 0 0 cron
Feb 01 18:58:51 pve1 kernel: [ 1249] 0 1249 72426 24066 303104 0 0 pve-firewall
Feb 01 18:58:51 pve1 kernel: [ 1251] 0 1251 72364 24938 327680 0 0 pvestatd
Feb 01 18:58:51 pve1 kernel: [ 1254] 0 1254 615 256 40960 0 0 bpfilter_umh
Feb 01 18:58:51 pve1 kernel: [ 1277] 0 1277 91367 33899 413696 0 0 pvedaemon
Feb 01 18:58:51 pve1 kernel: [ 1278] 0 1278 94218 35532 450560 0 0 pvedaemon worke
Feb 01 18:58:51 pve1 kernel: [ 1279] 0 1279 94215 35756 462848 0 0 pvedaemon worke
Feb 01 18:58:51 pve1 kernel: [ 1280] 0 1280 94208 35564 450560 0 0 pvedaemon worke
Feb 01 18:58:51 pve1 kernel: [ 1285] 0 1285 87916 27484 372736 0 0 pve-ha-crm
Feb 01 18:58:51 pve1 kernel: [ 1292] 33 1292 91720 34192 413696 0 0 pveproxy
Feb 01 18:58:51 pve1 kernel: [ 1293] 33 1293 95303 36720 466944 0 0 pveproxy worker
Feb 01 18:58:51 pve1 kernel: [ 1294] 33 1294 95084 36656 462848 0 0 pveproxy worker
Feb 01 18:58:51 pve1 kernel: [ 1295] 33 1295 95086 36560 462848 0 0 pveproxy worker
Feb 01 18:58:51 pve1 kernel: [ 1301] 33 1301 20207 12890 184320 0 0 spiceproxy
Feb 01 18:58:51 pve1 kernel: [ 1302] 33 1302 20447 13115 184320 0 0 spiceproxy work
Feb 01 18:58:51 pve1 kernel: [ 1303] 0 1303 87784 27364 380928 0 0 pve-ha-lrm
Feb 01 18:58:51 pve1 kernel: [ 1314] 0 1314 86716 28062 364544 0 0 pvescheduler
Feb 01 18:58:51 pve1 kernel: [ 2764] 0 2764 4534 2048 77824 0 0 sshd
Feb 01 18:58:51 pve1 kernel: [ 2766] 0 2766 4494 1984 77824 0 0 sshd
Feb 01 18:58:51 pve1 kernel: [ 2770] 0 2770 4773 1504 73728 0 100 systemd
Feb 01 18:58:51 pve1 kernel: [ 2771] 0 2771 42339 1186 94208 0 100 (sd-pam)
Feb 01 18:58:51 pve1 kernel: [ 2794] 0 2794 2092 864 57344 0 0 bash
Feb 01 18:58:51 pve1 kernel: [ 2795] 0 2795 661 416 40960 0 0 sftp-server
Feb 01 18:58:51 pve1 kernel: [ 5728] 0 5728 1366 320 53248 0 0 sleep
Feb 01 18:58:51 pve1 kernel: [ 5772] 0 5772 96056 35070 446464 0 0 task UPID:pve1:
Feb 01 18:58:51 pve1 kernel: [ 5781] 0 5781 96089 34878 438272 0 0 task UPID:pve1:
Feb 01 18:58:51 pve1 kernel: [ 5782] 0 5782 77886 3260 249856 0 0 kvm
Feb 01 18:58:51 pve1 kernel: [ 5786] 0 5786 2317766 1913420 15642624 0 0 kvm
Feb 01 18:58:51 pve1 kernel: oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=qemu.slice,mems_allowed=0,global_oom,task_memcg=/qemu.slice/110.scope,task=kvm,pid=5786>
Feb 01 18:58:51 pve1 kernel: Out of memory: Killed process 5786 (kvm) total-vm:9271064kB, anon-rss:7651888kB, file-rss:1792kB, shmem-rss:0kB, UID:0 pgtables:15276kB oom_score_>
Feb 01 18:58:51 pve1 kernel: vmbr1: port 3(tap110i2) entered disabled state
Feb 01 18:58:51 pve1 kernel: tap110i2 (unregistering): left allmulticast mode
Feb 01 18:58:51 pve1 kernel: vmbr1: port 3(tap110i2) entered disabled state
Feb 01 18:58:51 pve1 kernel: vmbr1: port 2(tap110i1) entered disabled state
Feb 01 18:58:51 pve1 kernel: tap110i1 (unregistering): left allmulticast mode
Feb 01 18:58:51 pve1 kernel: vmbr1: port 2(tap110i1) entered disabled state
Feb 01 18:58:51 pve1 kernel: igb 0000:05:00.0 eno2: left promiscuous mode
Feb 01 18:58:51 pve1 systemd[1]: 110.scope: A process of this unit has been killed by the OOM killer.
Feb 01 18:58:51 pve1 systemd[1]: 110.scope: Failed with result 'oom-kill'.
Feb 01 18:58:51 pve1 systemd[1]: 110.scope: Consumed 5.277s CPU time.
Feb 01 18:58:51 pve1 kernel: vmbr0: port 2(tap110i0) entered disabled state
Feb 01 18:58:51 pve1 kernel: tap110i0 (unregistering): left allmulticast mode
Feb 01 18:58:51 pve1 kernel: vmbr0: port 2(tap110i0) entered disabled state
Feb 01 18:58:51 pve1 pvestatd[1251]: VM 110 qmp command failed - VM 110 not running
Feb 01 18:58:51 pve1 pvedaemon[1278]: VM 110 qmp command failed - VM 110 qmp command 'query-proxmox-support' failed - unable to connect to VM 110 qmp socket - Connection refus>
Feb 01 18:58:51 pve1 pvedaemon[5772]: start failed: QEMU exited with code 1
Feb 01 18:58:51 pve1 pvedaemon[1279]: <root@pam> end task UPID:pve1:0000168C:00017D42:65BC3034:qmstart:110:root@pam: start failed: QEMU exited with code 1
Feb 01 18:58:51 pve1 systemd[1]: qemu.slice: A process of this unit has been killed by the OOM killer.
Feb 01 18:59:21 pve1 pvedaemon[1279]: <root@pam> successful auth for user 'root@pam'