proxmoxpve01oom
Dec 13 19:17:01 pve01 CRON[1102116]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Dec 13 19:17:01 pve01 CRON[1102117]: (root) CMD (cd / && run-parts --report /etc/cron.hourly)
Dec 13 19:17:01 pve01 CRON[1102116]: pam_unix(cron:session): session closed for user root
Dec 13 19:36:22 pve01 pmxcfs[1752]: [dcdb] notice: data verification successful
Dec 13 19:56:18 pve01 systemd: systemd-journald.service: State 'stop-watchdog' timed out. Killing.
Dec 13 19:56:18 pve01 systemd: systemd-journald.service: Killing process 653 (systemd-journal) with signal SIGKILL.
Dec 13 19:56:18 pve01 kernel: oom_reaper: reaped process 653 (systemd-journal), now anon-rss:0kB, file-rss:64kB, shmem-rss:0kB
Dec 13 19:56:18 pve01 systemd: systemd-journald.service: Processes still around after SIGKILL. Ignoring.
Dec 13 19:56:18 pve01 systemd: systemd-journald.service: State 'final-sigterm' timed out. Killing.
Dec 13 19:56:18 pve01 systemd: systemd-journald.service: Killing process 653 (systemd-journal) with signal SIGKILL.
Dec 13 19:56:18 pve01 systemd: systemd-journald.service: Main process exited, code=killed, status=9/KILL
Dec 13 19:56:18 pve01 systemd: systemd-journald.service: Failed with result 'watchdog'.
Dec 13 19:56:18 pve01 systemd: systemd-journald.service: Consumed 3.790s CPU time.
Dec 13 19:56:18 pve01 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 19:56:18 pve01 systemd: Stopped systemd-journald.service - Journal Service.
Dec 13 19:56:18 pve01 systemd: systemd-journald.service: Consumed 3.790s CPU time.
Dec 13 19:56:18 pve01 kernel: systemd invoked oom-killer: gfp_mask=0x40dc0(GFP_KERNEL|__GFP_COMP|__GFP_ZERO), order=2, oom_score_adj=0
Dec 13 19:56:18 pve01 kernel: CPU: 11 PID: 1 Comm: systemd Tainted: P O 6.5.11-7-pve #1
Dec 13 19:56:18 pve01 kernel: Hardware name: FUJITSU PRIMERGY RX2520 M1/D3169-A1, BIOS V4.6.5.4 R1.17.0 for D3169-A1x 02/09/2016
Dec 13 19:56:18 pve01 kernel: Call Trace:
Dec 13 19:56:18 pve01 kernel: <TASK>
Dec 13 19:56:18 pve01 kernel: dump_stack_lvl+0x48/0x70
Dec 13 19:56:18 pve01 kernel: dump_stack+0x10/0x20
Dec 13 19:56:18 pve01 kernel: dump_header+0x4f/0x260
Dec 13 19:56:18 pve01 kernel: oom_kill_process+0x10d/0x1c0
Dec 13 19:56:18 pve01 kernel: out_of_memory+0x270/0x560
Dec 13 19:56:18 pve01 kernel: __alloc_pages+0x114f/0x12e0
Dec 13 19:56:18 pve01 kernel: __kmalloc_large_node+0x7e/0x160
Dec 13 19:56:18 pve01 kernel: kmalloc_large+0x22/0xc0
Dec 13 19:56:18 pve01 kernel: bpf_check+0x7c/0x2d40
Dec 13 19:56:18 pve01 kernel: ? do_user_addr_fault+0x238/0x6a0
Dec 13 19:56:18 pve01 kernel: ? exc_page_fault+0x94/0x1b0
Dec 13 19:56:18 pve01 kernel: ? strncpy_from_user+0xa8/0x170
Dec 13 19:56:18 pve01 kernel: ? __pfx_read_tsc+0x10/0x10
Dec 13 19:56:18 pve01 kernel: bpf_prog_load+0x862/0xbf0
Dec 13 19:56:18 pve01 kernel: __sys_bpf+0x777/0x2680
Dec 13 19:56:18 pve01 kernel: ? __handle_mm_fault+0x6cc/0xc30
Dec 13 19:56:18 pve01 kernel: __x64_sys_bpf+0x1a/0x30
Dec 13 19:56:18 pve01 kernel: do_syscall_64+0x5b/0x90
Dec 13 19:56:18 pve01 kernel: ? exit_to_user_mode_prepare+0x39/0x190
Dec 13 19:56:18 pve01 kernel: ? irqentry_exit_to_user_mode+0x17/0x20
Dec 13 19:56:18 pve01 kernel: ? irqentry_exit+0x43/0x50
Dec 13 19:56:18 pve01 kernel: ? exc_page_fault+0x94/0x1b0
Dec 13 19:56:18 pve01 kernel: entry_SYSCALL_64_after_hwframe+0x6e/0xd8
Dec 13 19:56:18 pve01 kernel: RIP: 0033:0x7fad04b39559
Dec 13 19:56:18 pve01 kernel: Code: 08 89 e8 5b 5d c3 66 2e 0f 1f 84 00 00 00 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 77 08 0d 00 f7 d8 64 89 01 48
Dec 13 19:56:18 pve01 kernel: RSP: 002b:00007ffe99da5238 EFLAGS: 00000246 ORIG_RAX: 0000000000000141
Dec 13 19:56:18 pve01 kernel: RAX: ffffffffffffffda RBX: 0000557f394ba560 RCX: 00007fad04b39559
Dec 13 19:56:18 pve01 kernel: RDX: 0000000000000090 RSI: 00007ffe99da5240 RDI: 0000000000000005
Dec 13 19:56:18 pve01 kernel: RBP: 00007ffe99da5240 R08: 0000000000000000 R09: 0000003f0000000f
Dec 13 19:56:18 pve01 kernel: R10: 0000000000000004 R11: 0000000000000246 R12: 0000000000000002
Dec 13 19:56:18 pve01 kernel: R13: 0000000000000006 R14: 00007ffe99da5310 R15: 0000557f394b7020
Dec 13 19:56:18 pve01 kernel: </TASK>
Dec 13 19:56:18 pve01 kernel: Mem-Info:
Dec 13 19:56:18 pve01 kernel: active_anon:713273 inactive_anon:4735294 isolated_anon:0
 active_file:0 inactive_file:0 isolated_file:0
 unevictable:38194 dirty:4 writeback:14
 slab_reclaimable:7109 slab_unreclaimable:1144890
 mapped:21001 shmem:17723 pagetables:15051
 sec_pagetables:12163 bounce:0
 kernel_misc_reclaimable:0
 free:457262 free_pcp:36 free_cma:0
Dec 13 19:56:18 pve01 kernel: Node 0 active_anon:19402448kB inactive_anon:2392056kB active_file:116kB inactive_file:560kB unevictable:152776kB isolated(anon):0kB isolated(file):0kB mapped:84652kB dirty:16kB writeback:56kB shmem:70892kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 303104kB writeback_tmp:0kB kernel_stack:7952kB pagetables:60204kB sec_pagetables:48652kB all_unreclaimable? no
Dec 13 19:56:18 pve01 kernel: Node 0 DMA free:11264kB boost:0kB min:20kB low:32kB high:44kB reserved_highatomic:0KB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15996kB managed:15360kB mlocked:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
Dec 13 19:56:18 pve01 kernel: lowmem_reserve[]: 0 1862 48038 48038 48038
Dec 13 19:56:18 pve01 kernel: Node 0 DMA32 free:186004kB boost:0kB min:2616kB low:4520kB high:6424kB reserved_highatomic:2048KB active_anon:1355136kB inactive_anon:276344kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:2031304kB managed:1965496kB mlocked:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
Dec 13 19:56:18 pve01 kernel: lowmem_reserve[]: 0 0 46175 46175 46175
Dec 13 19:56:18 pve01 kernel: Node 0 Normal free:1631216kB boost:129024kB min:193964kB low:241248kB high:288532kB reserved_highatomic:2048KB active_anon:17941688kB inactive_anon:2221476kB active_file:0kB inactive_file:40kB unevictable:152776kB writepending:72kB present:48234496kB managed:47292404kB mlocked:152776kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
Dec 13 19:56:18 pve01 kernel: lowmem_reserve[]: 0 0 0 0 0
Dec 13 19:56:18 pve01 kernel: Node 0 DMA: 0*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 1*1024kB (U) 1*2048kB (M) 2*4096kB (M) = 11264kB
Dec 13 19:56:18 pve01 kernel: Node 0 DMA32: 872*4kB (ME) 17*8kB (ME) 6*16kB (UME) 7*32kB (UME) 5*64kB (UME) 8*128kB (UME) 144*256kB (UME) 143*512kB (UM) 67*1024kB (UE) 1*2048kB (M) 0*4096kB = 186024kB
Dec 13 19:56:18 pve01 kernel: Node 0 Normal: 6571*4kB (UM) 200568*8kB (UE) 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 1630828kB
Dec 13 19:56:18 pve01 kernel: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
Dec 13 19:56:18 pve01 kernel: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
Dec 13 19:56:18 pve01 kernel: 22682 total pagecache pages
Dec 13 19:56:18 pve01 kernel: 0 pages in swap cache
Dec 13 19:56:18 pve01 kernel: Free swap = 0kB
Dec 13 19:56:18 pve01 kernel: Total swap = 0kB
Dec 13 19:56:18 pve01 kernel: 12570449 pages RAM
Dec 13 19:56:18 pve01 kernel: 0 pages HighMem/MovableOnly
Dec 13 19:56:18 pve01 kernel: 252134 pages reserved
Dec 13 19:56:18 pve01 kernel: 0 pages hwpoisoned
Dec 13 19:56:18 pve01 kernel: Tasks state (memory values in pages):
Dec 13 19:56:18 pve01 kernel: [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
Dec 13 19:56:18 pve01 kernel: [ 680] 0 680 6867 1024 73728 0 -1000 systemd-udevd
Dec 13 19:56:18 pve01 kernel: [ 1349] 103 1349 1969 544 49152 0 0 rpcbind
Dec 13 19:56:18 pve01 kernel: [ 1351] 102 1351 2293 672 53248 0 -900 dbus-daemon
Dec 13 19:56:18 pve01 kernel: [ 1355] 0 1355 38187 320 61440 0 -1000 lxcfs
Dec 13 19:56:18 pve01 kernel: [ 1357] 0 1357 1766 435 49152 0 0 ksmtuned
Dec 13 19:56:18 pve01 kernel: [ 1358] 0 1358 69539 448 86016 0 0 pve-lxc-syscall
Dec 13 19:56:18 pve01 kernel: [ 1365] 0 1365 55444 672 77824 0 0 rsyslogd
Dec 13 19:56:18 pve01 kernel: [ 1366] 0 1366 1327 352 49152 0 0 qmeventd
Dec 13 19:56:18 pve01 kernel: [ 1371] 0 1371 3063 840 57344 0 0 smartd
Dec 13 19:56:18 pve01 kernel: [ 1372] 0 1372 4163 1088 69632 0 0 systemd-logind
Dec 13 19:56:18 pve01 kernel: [ 1373] 0 1373 583 256 36864 0 -1000 watchdog-mux
Dec 13 19:56:18 pve01 kernel: [ 1380] 0 1380 60197 960 86016 0 0 zed
Dec 13 19:56:18 pve01 kernel: [ 1620] 0 1620 1256 320 49152 0 0 lxc-monitord
Dec 13 19:56:18 pve01 kernel: [ 1632] 112 1632 6120 1856 86016 0 0 snmpd
Dec 13 19:56:18 pve01 kernel: [ 1643] 0 1643 3333 396 61440 0 0 iscsid
Dec 13 19:56:18 pve01 kernel: [ 1644] 0 1644 3459 3343 65536 0 -17 iscsid
Dec 13 19:56:18 pve01 kernel: [ 1653] 0 1653 3852 1376 69632 0 -1000 sshd
Dec 13 19:56:18 pve01 kernel: [ 1656] 0 1656 1468 416 53248 0 0 agetty
Dec 13 19:56:18 pve01 kernel: [ 1692] 101 1692 4715 746 61440 0 0 chronyd
Dec 13 19:56:18 pve01 kernel: [ 1696] 101 1696 2633 530 61440 0 0 chronyd
Dec 13 19:56:18 pve01 kernel: [ 1725] 0 1725 181857 756 180224 0 0 rrdcached
Dec 13 19:56:18 pve01 kernel: [ 1752] 0 1752 211885 15847 446464 0 0 pmxcfs
Dec 13 19:56:18 pve01 kernel: [ 1823] 0 1823 10664 613 69632 0 0 master
Dec 13 19:56:18 pve01 kernel: [ 1825] 106 1825 10774 768 73728 0 0 qmgr
Dec 13 19:56:18 pve01 kernel: [ 1830] 0 1830 139331 41162 397312 0 0 corosync
Dec 13 19:56:18 pve01 kernel: [ 1831] 0 1831 1652 576 53248 0 0 cron
Dec 13 19:56:18 pve01 kernel: [ 1848] 0 1848 72412 24657 319488 0 0 pve-firewall
Dec 13 19:56:18 pve01 kernel: [ 1851] 0 1851 72366 25413 323584 0 0 pvestatd
Dec 13 19:56:18 pve01 kernel: [ 1853] 0 1853 615 224 40960 0 0 bpfilter_umh
Dec 13 19:56:18 pve01 kernel: [ 1878] 0 1878 91374 33964 413696 0 0 pvedaemon
Dec 13 19:56:18 pve01 kernel: [ 1885] 0 1885 87928 27919 376832 0 0 pve-ha-crm
Dec 13 19:56:18 pve01 kernel: [ 1925] 33 1925 91726 35620 458752 0 0 pveproxy
Dec 13 19:56:18 pve01 kernel: [ 1961] 33 1961 20197 14208 200704 0 0 spiceproxy
Dec 13 19:56:18 pve01 kernel: [ 1963] 0 1963 87791 27823 380928 0 0 pve-ha-lrm
Dec 13 19:56:18 pve01 kernel: [ 1978] 0 1978 3290 1255 61440 0 0 swtpm
Dec 13 19:56:18 pve01 kernel: [ 1986] 0 1986 7804326 6309420 52912128 0 0 kvm
Dec 13 19:56:18 pve01 kernel: [ 2281] 0 2281 86713 28151 368640 0 0 pvescheduler
Dec 13 19:56:18 pve01 kernel: [ 119516] 0 119516 93573 35149 430080 0 0 pvedaemon worke
Dec 13 19:56:18 pve01 kernel: [ 121139] 0 121139 93573 35149 430080 0 0 pvedaemon worke
Dec 13 19:56:18 pve01 kernel: [ 128336] 0 128336 93573 34989 430080 0 0 pvedaemon worke
Dec 13 19:56:18 pve01 kernel: [ 744165] 0 744165 19796 480 49152 0 0 pvefw-logger
Dec 13 19:56:18 pve01 kernel: [ 744169] 33 744169 20247 13131 184320 0 0 spiceproxy work
Dec 13 19:56:18 pve01 kernel: [ 744171] 33 744171 91759 35021 421888 0 0 pveproxy worker
Dec 13 19:56:18 pve01 kernel: [ 744172] 33 744172 91759 35053 421888 0 0 pveproxy worker
Dec 13 19:56:18 pve01 kernel: [ 744173] 33 744173 91759 35021 421888 0 0 pveproxy worker
Dec 13 19:56:18 pve01 kernel: [1087653] 106 1087653 10763 800 73728 0 0 pickup
Dec 13 19:56:18 pve01 kernel: [1111509] 0 1111509 88521 28247 380928 0 0 pvescheduler
Dec 13 19:56:18 pve01 kernel: [1111510] 0 1111510 88521 28151 393216 0 0 pvescheduler
Dec 13 19:56:18 pve01 kernel: [1111723] 0 1111723 1366 352 49152 0 0 sleep
Dec 13 19:56:18 pve01 kernel: [1111769] 0 1111769 72412 23635 303104 0 0 pve-firewall
Dec 13 19:56:18 pve01 kernel: oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=init.scope,mems_allowed=0,global_oom,task_memcg=/qemu.slice/102.scope,task=kvm,pid=1986,uid=0
Dec 13 19:56:18 pve01 kernel: Out of memory: Killed process 1986 (kvm) total-vm:31217304kB, anon-rss:25235120kB, file-rss:2560kB, shmem-rss:0kB, UID:0 pgtables:51672kB oom_score_adj:0
Dec 13 19:56:18 pve01 systemd: Starting systemd-journald.service - Journal Service...
Dec 13 19:56:18 pve01 systemd: 102.scope: A process of this unit has been killed by the OOM killer.
Dec 13 19:56:18 pve01 systemd: 102.scope: Failed with result 'oom-kill'.
Dec 13 19:56:18 pve01 systemd: 102.scope: Consumed 1w 4d 43min 40.536s CPU time.
Dec 13 19:56:18 pve01 kernel: zd16: p1
Dec 13 19:56:18 pve01 systemd-journald[1111771]: Journal started
Dec 13 19:56:18 pve01 systemd-journald[1111771]: System Journal (/var/log/journal/5e3855d3c35e41a7bb73a557f49189db) is 284.8M, max 4.0G, 3.7G free.
Dec 13 19:56:18 pve01 systemd[1]: systemd-journald.service: Watchdog timeout (limit 3min)!
Dec 13 19:56:18 pve01 pve-ha-lrm[1963]: loop take too long (49 seconds)
Dec 13 19:56:18 pve01 systemd: Started systemd-journald.service - Journal Service.
Dec 13 19:56:18 pve01 systemd[1]: systemd-journald.service: Killing process 653 (systemd-journal) with signal SIGABRT.
Dec 13 19:56:18 pve01 pvestatd[1851]: closing with write buffer at /usr/share/perl5/IO/Multiplex.pm line 928.
Dec 13 19:56:18 pve01 pvestatd[1851]: VM 102 qmp command failed - VM 102 qmp command 'query-proxmox-support' failed - got timeout
Dec 13 19:56:18 pve01 pve-ha-lrm[1963]: loop take too long (52 seconds)
Dec 13 19:56:18 pve01 pvestatd[1851]: got timeout
Dec 13 19:56:18 pve01 pvestatd[1851]: unable to activate storage 'local' - directory '/var/lib/vz' does not exist or is unreachable
Dec 13 19:56:18 pve01 pvestatd[1851]: command 'zfs get -o value -Hp available,used rpool/data' failed: got timeout
Dec 13 19:56:18 pve01 pve-ha-lrm[1963]: loop take too long (65 seconds)
Dec 13 19:56:18 pve01 pve-firewall[1848]: firewall update time (209.158 seconds)
Dec 13 19:56:18 pve01 pve-ha-lrm[1963]: loop take too long (62 seconds)
Dec 13 19:56:18 pve01 pve-ha-lrm[1963]: loop take too long (55 seconds)
Dec 13 19:56:18 pve01 pve-firewall[1848]: firewall update time (196.009 seconds)
Dec 13 19:56:18 pve01 pve-ha-crm[1885]: loop take too long (32 seconds)
Dec 13 19:56:18 pve01 pvestatd[1851]: proxmox-backup-client failed: Error: channel closed
Dec 13 19:56:18 pve01 pve-ha-lrm[1963]: loop take too long (69 seconds)
Dec 13 19:56:18 pve01 pve-firewall[1848]: firewall update time (31.817 seconds)
Dec 13 19:56:18 pve01 pvestatd[1851]: status update time (456.473 seconds)
Dec 13 19:56:18 pve01 pve-firewall[1848]: firewall update time (9.455 seconds)
Dec 13 19:56:18 pve01 pve-firewall[1848]: firewall update time (5.962 seconds)
Dec 13 19:56:18 pve01 pve-firewall[1848]: firewall update time (8.533 seconds)
Dec 13 19:56:18 pve01 pve-firewall[1848]: firewall update time (7.883 seconds)
Dec 13 19:56:18 pve01 pvescheduler[1111510]: jobs: 'file-jobs_cfg'-locked command timed out - aborting
Dec 13 19:56:18 pve01 pve-firewall[1848]: firewall update time (8.454 seconds)
Dec 13 19:56:19 pve01 kernel: fwbr102i0: port 2(tap102i0) entered disabled state
Dec 13 19:56:19 pve01 kernel: tap102i0 (unregistering): left allmulticast mode
Dec 13 19:56:19 pve01 kernel: fwbr102i0: port 2(tap102i0) entered disabled state
Dec 13 19:56:20 pve01 qmeventd[1111804]: Starting cleanup for 102
Dec 13 19:56:20 pve01 kernel: fwbr102i0: port 1(fwln102i0) entered disabled state
Dec 13 19:56:20 pve01 kernel: vmbr0: port 2(fwpr102p0) entered disabled state
Dec 13 19:56:20 pve01 kernel: fwln102i0 (unregistering): left allmulticast mode
Dec 13 19:56:20 pve01 kernel: fwln102i0 (unregistering): left promiscuous mode
Dec 13 19:56:20 pve01 kernel: fwbr102i0: port 1(fwln102i0) entered disabled state
Dec 13 19:56:20 pve01 kernel: fwpr102p0 (unregistering): left allmulticast mode
Dec 13 19:56:20 pve01 kernel: fwpr102p0 (unregistering): left promiscuous mode
Dec 13 19:56:20 pve01 kernel: vmbr0: port 2(fwpr102p0) entered disabled state
Dec 13 19:56:21 pve01 qmeventd[1111804]: Finished cleanup for 102
Dec 13 19:56:21 pve01 kernel: oom_reaper: reaped process 1986 (kvm), now anon-rss:272kB, file-rss:468kB, shmem-rss:0kB
Dec 13 19:56:22 pve01 systemd[1]: qemu.slice: A process of this unit has been killed by the OOM killer.
Dec 13 19:57:19 pve01 pvestatd[1851]: status update time (110.661 seconds)
Dec 13 19:57:28 pve01 kernel: zd64: p1 p2
Dec 13 19:57:28 pve01 kernel: zd32: p1 p2 p3 p4
Dec 13 20:02:26 pve01 systemd[1]: Starting apt-daily.service - Daily apt download activities...