=== # pveversion -V

proxmox-ve: 5.1-38 (running kernel: 4.13.13-5-pve)
pve-manager: 5.1-43 (running version: 5.1-43/bdb08029)
pve-kernel-4.13.13-2-pve: 4.13.13-33
pve-kernel-4.10.15-1-pve: 4.10.15-15
pve-kernel-4.13.8-3-pve: 4.13.8-30
pve-kernel-4.13.13-5-pve: 4.13.13-38
pve-kernel-4.10.17-3-pve: 4.10.17-23
libpve-http-server-perl: 2.0-8
lvm2: 2.02.168-pve6
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-19
qemu-server: 5.0-20
pve-firmware: 2.0-3
libpve-common-perl: 5.0-25
libpve-guest-common-perl: 2.0-14
libpve-access-control: 5.0-7
libpve-storage-perl: 5.0-17
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-3
pve-docs: 5.1-16
pve-qemu-kvm: 2.9.1-6
pve-container: 2.0-18
pve-firewall: 3.0-5
pve-ha-manager: 2.0-4
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.1.1-2
lxcfs: 2.0.8-1
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.7.4-pve2~bpo9
openvswitch-switch: 2.7.0-2




=== /var/log/kern.log

Feb 13 08:45:10 pve kernel: [60173.977143] INFO: task jbd2/dm-5-8:438 blocked for more than 120 seconds.
Feb 13 08:45:10 pve kernel: [60173.977895] Tainted: P O 4.13.13-5-pve #1
Feb 13 08:45:10 pve kernel: [60173.978731] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Feb 13 08:45:10 pve kernel: [60173.979640] jbd2/dm-5-8 D 0 438 2 0x00000000
...
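The warning above fires whenever a task sits in uninterruptible sleep longer than the kernel's hung-task timeout (120 s by default, matching the message). As a hedged diagnostic sketch only — tuning the knob changes the log noise, not the underlying I/O stall on dm-5 — the timeout can be inspected and adjusted via the standard sysctl interface (the example value is illustrative, and the commands need root):

```shell
# Show the current hung-task timeout in seconds (120 by default)
cat /proc/sys/kernel/hung_task_timeout_secs

# Raise the timeout while investigating, to cut down repeated warnings
# (300 is an illustrative value, not a recommendation)
sysctl -w kernel.hung_task_timeout_secs=300

# Or, as the kernel message itself suggests, disable the warning entirely
echo 0 > /proc/sys/kernel/hung_task_timeout_secs
```

Note that disabling the message only hides the symptom; the jbd2 journal thread is still blocked on storage.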



=== /var/log/messages

Feb 13 06:25:10 pve liblogging-stdlog: [origin software="rsyslogd" swVersion="8.24.0" x-pid="1293" x-info="http://www.rsyslog.com"] rsyslogd was HUPed
Feb 13 06:25:10 pve liblogging-stdlog: [origin software="rsyslogd" swVersion="8.24.0" x-pid="1293" x-info="http://www.rsyslog.com"] rsyslogd was HUPed
Feb 13 08:45:10 pve kernel: [60173.979640] jbd2/dm-5-8 D 0 438 2 0x00000000
Feb 13 08:45:10 pve kernel: [60173.980362] Call Trace:
Feb 13 08:45:10 pve kernel: [60173.981136] __schedule+0x3cc/0x850
Feb 13 08:45:10 pve kernel: [60173.981971] schedule+0x36/0x80
Feb 13 08:45:10 pve kernel: [60173.982777] io_schedule+0x16/0x40
Feb 13 08:45:10 pve kernel: [60173.983541] wait_on_page_bit_common+0xf3/0x180
Feb 13 08:45:10 pve kernel: [60173.984308] ? page_cache_tree_insert+0xc0/0xc0
Feb 13 08:45:10 pve kernel: [60173.985046] __filemap_fdatawait_range+0x114/0x180
Feb 13 08:45:10 pve kernel: [60173.985759] ? submit_bio+0x73/0x150
Feb 13 08:45:10 pve kernel: [60173.986674] ? submit_bio+0x73/0x150
Feb 13 08:45:10 pve kernel: [60173.987400] ? jbd2_journal_write_metadata_buffer+0x249/0x3a0
Feb 13 08:45:10 pve kernel: [60173.988104] ? jbd2_journal_begin_ordered_truncate+0xb0/0xb0
Feb 13 08:45:10 pve kernel: [60173.988684] filemap_fdatawait_keep_errors+0x27/0x50
Feb 13 08:45:10 pve kernel: [60173.989398] jbd2_journal_commit_transaction+0x8a5/0x16d0
Feb 13 08:45:10 pve kernel: [60173.990397] ? finish_task_switch+0x14e/0x200
Feb 13 08:45:10 pve kernel: [60173.991380] kjournald2+0xd2/0x270
Feb 13 08:45:10 pve kernel: [60173.992081] ? kjournald2+0xd2/0x270
Feb 13 08:45:10 pve kernel: [60173.992742] ? wait_woken+0x80/0x80
Feb 13 08:45:10 pve kernel: [60173.993401] kthread+0x109/0x140
Feb 13 08:45:10 pve kernel: [60173.994026] ? commit_timeout+0x10/0x10
Feb 13 08:45:10 pve kernel: [60173.994579] ? kthread_create_on_node+0x70/0x70
Feb 13 08:45:10 pve kernel: [60173.995096] ret_from_fork+0x1f/0x30
Feb 13 08:45:10 pve kernel: [60173.997511] pve-firewall D 0 2370 1 0x00000000
Feb 13 08:45:10 pve kernel: [60173.998032] Call Trace:
Feb 13 08:45:10 pve kernel: [60173.998513] __schedule+0x3cc/0x850
Feb 13 08:45:10 pve kernel: [60173.998983] schedule+0x36/0x80
Feb 13 08:45:10 pve kernel: [60173.999393] io_schedule+0x16/0x40
Feb 13 08:45:10 pve kernel: [60173.999802] wait_on_page_bit+0xf6/0x130
Feb 13 08:45:10 pve kernel: [60174.000234] ? page_cache_tree_insert+0xc0/0xc0
Feb 13 08:45:10 pve kernel: [60174.000643] truncate_inode_pages_range+0x44e/0x830
Feb 13 08:45:10 pve kernel: [60174.001075] ? __filemap_fdatawrite_range+0xd4/0x100
Feb 13 08:45:10 pve kernel: [60174.001681] truncate_inode_pages_final+0x4d/0x60
Feb 13 08:45:10 pve kernel: [60174.002139] ext4_evict_inode+0x156/0x5c0
Feb 13 08:45:10 pve kernel: [60174.002569] evict+0xc7/0x1a0
Feb 13 08:45:10 pve kernel: [60174.003053] iput+0x1c3/0x220
Feb 13 08:45:10 pve kernel: [60174.003539] dentry_unlink_inode+0xc1/0x160
Feb 13 08:45:10 pve kernel: [60174.003930] __dentry_kill+0xbe/0x160
Feb 13 08:45:10 pve kernel: [60174.004380] dput+0x138/0x1f0
Feb 13 08:45:10 pve kernel: [60174.004755] SyS_rename+0x297/0x410
Feb 13 08:45:10 pve kernel: [60174.005113] entry_SYSCALL_64_fastpath+0x33/0xa3
Feb 13 08:45:10 pve kernel: [60174.005608] RIP: 0033:0x7f7981186d17
Feb 13 08:45:10 pve kernel: [60174.006034] RSP: 002b:00007ffdf3df5198 EFLAGS: 00000246 ORIG_RAX: 0000000000000052
Feb 13 08:45:10 pve kernel: [60174.006438] RAX: ffffffffffffffda RBX: 000055934a3dc010 RCX: 00007f7981186d17
Feb 13 08:45:10 pve kernel: [60174.006848] RDX: 0000000000000032 RSI: 000055934e1e3720 RDI: 000055934e1f0070
Feb 13 08:45:10 pve kernel: [60174.007261] RBP: 000055934a3e18f8 R08: 0000000000000200 R09: 0000000000000009
Feb 13 08:45:10 pve kernel: [60174.007806] R10: 0000000000000000 R11: 0000000000000246 R12: 000055934a3e1900
Feb 13 08:45:10 pve kernel: [60174.008166] R13: 000055934b901e18 R14: 000055934e1e3720 R15: 0000000000000000
Feb 13 08:47:10 pve kernel: [60294.808883] jbd2/dm-5-8 D 0 438 2 0x00000000
Feb 13 08:47:10 pve kernel: [60294.809482] Call Trace:
Feb 13 08:47:10 pve kernel: [60294.809871] __schedule+0x3cc/0x850
Feb 13 08:47:10 pve kernel: [60294.810316] schedule+0x36/0x80
Feb 13 08:47:10 pve kernel: [60294.810785] io_schedule+0x16/0x40
Feb 13 08:47:10 pve kernel: [60294.811345] wait_on_page_bit_common+0xf3/0x180
Feb 13 08:47:10 pve kernel: [60294.811994] ? page_cache_tree_insert+0xc0/0xc0
Feb 13 08:47:10 pve kernel: [60294.812618] __filemap_fdatawait_range+0x114/0x180
Feb 13 08:47:10 pve kernel: [60294.813087] ? submit_bio+0x73/0x150
Feb 13 08:47:10 pve kernel: [60294.813515] ? submit_bio+0x73/0x150
Feb 13 08:47:10 pve kernel: [60294.814003] ? jbd2_journal_write_metadata_buffer+0x249/0x3a0
Feb 13 08:47:10 pve kernel: [60294.814430] ? jbd2_journal_begin_ordered_truncate+0xb0/0xb0
Feb 13 08:47:10 pve kernel: [60294.814964] filemap_fdatawait_keep_errors+0x27/0x50
Feb 13 08:47:10 pve kernel: [60294.815552] jbd2_journal_commit_transaction+0x8a5/0x16d0
Feb 13 08:47:10 pve kernel: [60294.816067] ? finish_task_switch+0x14e/0x200
Feb 13 08:47:10 pve kernel: [60294.816492] kjournald2+0xd2/0x270
Feb 13 08:47:10 pve kernel: [60294.817020] ? kjournald2+0xd2/0x270
Feb 13 08:47:10 pve kernel: [60294.817454] ? wait_woken+0x80/0x80
Feb 13 08:47:10 pve kernel: [60294.817837] kthread+0x109/0x140
Feb 13 08:47:10 pve kernel: [60294.818228] ? commit_timeout+0x10/0x10
Feb 13 08:47:10 pve kernel: [60294.818625] ? kthread_create_on_node+0x70/0x70
Feb 13 08:47:10 pve kernel: [60294.818974] ret_from_fork+0x1f/0x30
Feb 13 08:47:10 pve kernel: [60294.820740] pve-firewall D 0 2370 1 0x00000000
Feb 13 08:47:10 pve kernel: [60294.821206] Call Trace:
Feb 13 08:47:10 pve kernel: [60294.821697] __schedule+0x3cc/0x850
Feb 13 08:47:10 pve kernel: [60294.822292] schedule+0x36/0x80
Feb 13 08:47:10 pve kernel: [60294.822938] io_schedule+0x16/0x40
Feb 13 08:47:10 pve kernel: [60294.823591] wait_on_page_bit+0xf6/0x130
Feb 13 08:47:10 pve kernel: [60294.824237] ? page_cache_tree_insert+0xc0/0xc0
Feb 13 08:47:10 pve kernel: [60294.824890] truncate_inode_pages_range+0x44e/0x830
Feb 13 08:47:10 pve kernel: [60294.825548] ? __filemap_fdatawrite_range+0xd4/0x100
Feb 13 08:47:10 pve kernel: [60294.826249] truncate_inode_pages_final+0x4d/0x60
Feb 13 08:47:10 pve kernel: [60294.826913] ext4_evict_inode+0x156/0x5c0
Feb 13 08:47:10 pve kernel: [60294.827589] evict+0xc7/0x1a0
Feb 13 08:47:10 pve kernel: [60294.828218] iput+0x1c3/0x220
Feb 13 08:47:10 pve kernel: [60294.828866] dentry_unlink_inode+0xc1/0x160
Feb 13 08:47:10 pve kernel: [60294.829514] __dentry_kill+0xbe/0x160
Feb 13 08:47:10 pve kernel: [60294.830140] dput+0x138/0x1f0
Feb 13 08:47:10 pve kernel: [60294.830755] SyS_rename+0x297/0x410
Feb 13 08:47:10 pve kernel: [60294.831378] entry_SYSCALL_64_fastpath+0x33/0xa3
Feb 13 08:47:10 pve kernel: [60294.831964] RIP: 0033:0x7f7981186d17
Feb 13 08:47:10 pve kernel: [60294.832583] RSP: 002b:00007ffdf3df5198 EFLAGS: 00000246 ORIG_RAX: 0000000000000052
Feb 13 08:47:10 pve kernel: [60294.833209] RAX: ffffffffffffffda RBX: 000055934a3dc010 RCX: 00007f7981186d17
Feb 13 08:47:10 pve kernel: [60294.833851] RDX: 0000000000000032 RSI: 000055934e1e3720 RDI: 000055934e1f0070
Feb 13 08:47:10 pve kernel: [60294.834472] RBP: 000055934a3e18f8 R08: 0000000000000200 R09: 0000000000000009
Feb 13 08:47:10 pve kernel: [60294.835072] R10: 0000000000000000 R11: 0000000000000246 R12: 000055934a3e1900
Feb 13 08:47:10 pve kernel: [60294.835688] R13: 000055934b901e18 R14: 000055934e1e3720 R15: 0000000000000000
Feb 13 08:49:11 pve kernel: [60415.638469] jbd2/dm-5-8 D 0 438 2 0x00000000
Feb 13 08:49:11 pve kernel: [60415.638886] Call Trace:
Feb 13 08:49:11 pve kernel: [60415.639320] __schedule+0x3cc/0x850
Feb 13 08:49:11 pve kernel: [60415.639695] schedule+0x36/0x80
Feb 13 08:49:11 pve kernel: [60415.640111] io_schedule+0x16/0x40
Feb 13 08:49:11 pve kernel: [60415.640497] wait_on_page_bit_common+0xf3/0x180
Feb 13 08:49:11 pve kernel: [60415.640949] ? page_cache_tree_insert+0xc0/0xc0
Feb 13 08:49:11 pve kernel: [60415.641436] __filemap_fdatawait_range+0x114/0x180
Feb 13 08:49:11 pve kernel: [60415.641970] ? submit_bio+0x73/0x150
Feb 13 08:49:11 pve kernel: [60415.642387] ? submit_bio+0x73/0x150
Feb 13 08:49:11 pve kernel: [60415.642841] ? jbd2_journal_write_metadata_buffer+0x249/0x3a0
Feb 13 08:49:11 pve kernel: [60415.643264] ? jbd2_journal_begin_ordered_truncate+0xb0/0xb0
Feb 13 08:49:11 pve kernel: [60415.643726] filemap_fdatawait_keep_errors+0x27/0x50
Feb 13 08:49:11 pve kernel: [60415.644159] jbd2_journal_commit_transaction+0x8a5/0x16d0
Feb 13 08:49:11 pve kernel: [60415.644579] ? finish_task_switch+0x14e/0x200
Feb 13 08:49:11 pve kernel: [60415.645023] kjournald2+0xd2/0x270
Feb 13 08:49:11 pve kernel: [60415.645491] ? kjournald2+0xd2/0x270
Feb 13 08:49:11 pve kernel: [60415.646052] ? wait_woken+0x80/0x80
Feb 13 08:49:11 pve kernel: [60415.646490] kthread+0x109/0x140
Feb 13 08:49:11 pve kernel: [60415.646873] ? commit_timeout+0x10/0x10
Feb 13 08:49:11 pve kernel: [60415.647239] ? kthread_create_on_node+0x70/0x70
Feb 13 08:49:11 pve kernel: [60415.647617] ret_from_fork+0x1f/0x30
Feb 13 08:49:11 pve kernel: [60415.649156] systemd-journal D 0 482 1 0x00000104
Feb 13 08:49:11 pve kernel: [60415.649672] Call Trace:
Feb 13 08:49:11 pve kernel: [60415.650153] __schedule+0x3cc/0x850
Feb 13 08:49:11 pve kernel: [60415.650679] schedule+0x36/0x80
Feb 13 08:49:11 pve kernel: [60415.651149] jbd2_log_wait_commit+0x98/0x120
Feb 13 08:49:11 pve kernel: [60415.651587] ? wait_woken+0x80/0x80
Feb 13 08:49:11 pve kernel: [60415.652046] jbd2_complete_transaction+0x5b/0xa0
Feb 13 08:49:11 pve kernel: [60415.652483] ext4_sync_file+0x1cf/0x3a0
Feb 13 08:49:11 pve kernel: [60415.652873] vfs_fsync_range+0x4b/0xb0
Feb 13 08:49:11 pve kernel: [60415.653272] do_fsync+0x3d/0x70
Feb 13 08:49:11 pve kernel: [60415.653673] SyS_fsync+0x10/0x20
Feb 13 08:49:11 pve kernel: [60415.654106] do_syscall_64+0x5b/0xc0
Feb 13 08:49:11 pve kernel: [60415.654506] entry_SYSCALL64_slow_path+0x8/0x8
Feb 13 08:49:11 pve kernel: [60415.654875] RIP: 0033:0x7f782b9046ed
Feb 13 08:49:11 pve kernel: [60415.655266] RSP: 002b:00007ffe3c0172f0 EFLAGS: 00000293 ORIG_RAX: 000000000000004a
Feb 13 08:49:11 pve kernel: [60415.655635] RAX: ffffffffffffffda RBX: 00005565f83c6c30 RCX: 00007f782b9046ed
Feb 13 08:49:11 pve kernel: [60415.656025] RDX: 00007f782c37d000 RSI: 0000000000000000 RDI: 0000000000000015
Feb 13 08:49:11 pve kernel: [60415.656417] RBP: 0000000000000001 R08: e1966ac35cfecfcc R09: 0000000000000000
Feb 13 08:49:11 pve kernel: [60415.656790] R10: 50dc7e97e946c954 R11: 0000000000000293 R12: 00005565f83c6c30
Feb 13 08:49:11 pve kernel: [60415.657164] R13: 00007ffe3c017440 R14: 00007ffe3c017438 R15: 00007ffe3c017440
Feb 13 08:49:11 pve kernel: [60415.659091] pve-firewall D 0 2370 1 0x00000000
Feb 13 08:49:11 pve kernel: [60415.659502] Call Trace:
Feb 13 08:49:11 pve kernel: [60415.659889] __schedule+0x3cc/0x850
Feb 13 08:49:11 pve kernel: [60415.660363] schedule+0x36/0x80
Feb 13 08:49:11 pve kernel: [60415.660937] io_schedule+0x16/0x40
Feb 13 08:49:11 pve kernel: [60415.661370] wait_on_page_bit+0xf6/0x130
Feb 13 08:49:11 pve kernel: [60415.661892] ? page_cache_tree_insert+0xc0/0xc0
Feb 13 08:49:11 pve kernel: [60415.662328] truncate_inode_pages_range+0x44e/0x830
Feb 13 08:49:11 pve kernel: [60415.662802] ? __filemap_fdatawrite_range+0xd4/0x100
Feb 13 08:49:11 pve kernel: [60415.663216] truncate_inode_pages_final+0x4d/0x60
Feb 13 08:49:11 pve kernel: [60415.663623] ext4_evict_inode+0x156/0x5c0
Feb 13 08:49:11 pve kernel: [60415.664026] evict+0xc7/0x1a0
Feb 13 08:49:11 pve kernel: [60415.664416] iput+0x1c3/0x220
Feb 13 08:49:11 pve kernel: [60415.664813] dentry_unlink_inode+0xc1/0x160
Feb 13 08:49:11 pve kernel: [60415.665235] __dentry_kill+0xbe/0x160
Feb 13 08:49:11 pve kernel: [60415.665699] dput+0x138/0x1f0
Feb 13 08:49:11 pve kernel: [60415.666347] SyS_rename+0x297/0x410
Feb 13 08:49:11 pve kernel: [60415.666981] entry_SYSCALL_64_fastpath+0x33/0xa3
Feb 13 08:49:11 pve kernel: [60415.667381] RIP: 0033:0x7f7981186d17
Feb 13 08:49:11 pve kernel: [60415.667749] RSP: 002b:00007ffdf3df5198 EFLAGS: 00000246 ORIG_RAX: 0000000000000052
Feb 13 08:49:11 pve kernel: [60415.668148] RAX: ffffffffffffffda RBX: 000055934a3dc010 RCX: 00007f7981186d17
Feb 13 08:49:11 pve kernel: [60415.668596] RDX: 0000000000000032 RSI: 000055934e1e3720 RDI: 000055934e1f0070
Feb 13 08:49:11 pve kernel: [60415.668978] RBP: 000055934a3e18f8 R08: 0000000000000200 R09: 0000000000000009
Feb 13 08:49:11 pve kernel: [60415.669378] R10: 0000000000000000 R11: 0000000000000246 R12: 000055934a3e1900
Feb 13 08:49:11 pve kernel: [60415.669836] R13: 000055934b901e18 R14: 000055934e1e3720 R15: 0000000000000000
Feb 13 08:51:12 pve kernel: [60536.469377] jbd2/dm-5-8 D 0 438 2 0x00000000
Feb 13 08:51:12 pve kernel: [60536.470115] Call Trace:
Feb 13 08:51:12 pve kernel: [60536.470813] __schedule+0x3cc/0x850
Feb 13 08:51:12 pve kernel: [60536.471474] schedule+0x36/0x80
Feb 13 08:51:12 pve kernel: [60536.472185] io_schedule+0x16/0x40
Feb 13 08:51:12 pve kernel: [60536.472881] wait_on_page_bit_common+0xf3/0x180
Feb 13 08:51:12 pve kernel: [60536.473591] ? page_cache_tree_insert+0xc0/0xc0
Feb 13 08:51:12 pve kernel: [60536.474299] __filemap_fdatawait_range+0x114/0x180
Feb 13 08:51:12 pve kernel: [60536.474990] ? submit_bio+0x73/0x150
Feb 13 08:51:12 pve kernel: [60536.475666] ? submit_bio+0x73/0x150
Feb 13 08:51:12 pve kernel: [60536.476336] ? jbd2_journal_write_metadata_buffer+0x249/0x3a0
Feb 13 08:51:12 pve kernel: [60536.476901] ? jbd2_journal_begin_ordered_truncate+0xb0/0xb0
Feb 13 08:51:12 pve kernel: [60536.477367] filemap_fdatawait_keep_errors+0x27/0x50
Feb 13 08:51:12 pve kernel: [60536.477775] jbd2_journal_commit_transaction+0x8a5/0x16d0
Feb 13 08:51:12 pve kernel: [60536.478199] ? finish_task_switch+0x14e/0x200
Feb 13 08:51:12 pve kernel: [60536.478604] kjournald2+0xd2/0x270
Feb 13 08:51:12 pve kernel: [60536.478991] ? kjournald2+0xd2/0x270
Feb 13 08:51:12 pve kernel: [60536.479382] ? wait_woken+0x80/0x80
Feb 13 08:51:12 pve kernel: [60536.479823] kthread+0x109/0x140
Feb 13 08:51:12 pve kernel: [60536.480252] ? commit_timeout+0x10/0x10
Feb 13 08:51:12 pve kernel: [60536.480626] ? kthread_create_on_node+0x70/0x70
Feb 13 08:51:12 pve kernel: [60536.480993] ret_from_fork+0x1f/0x30
Feb 13 08:51:12 pve kernel: [60536.482605] systemd-journal D 0 482 1 0x00000104
Feb 13 08:51:12 pve kernel: [60536.483015] Call Trace:
Feb 13 08:51:12 pve kernel: [60536.483424] __schedule+0x3cc/0x850
Feb 13 08:51:12 pve kernel: [60536.483937] schedule+0x36/0x80
Feb 13 08:51:12 pve kernel: [60536.484376] jbd2_log_wait_commit+0x98/0x120
Feb 13 08:51:12 pve kernel: [60536.484831] ? wait_woken+0x80/0x80
Feb 13 08:51:12 pve kernel: [60536.485279] jbd2_complete_transaction+0x5b/0xa0
Feb 13 08:51:12 pve kernel: [60536.485728] ext4_sync_file+0x1cf/0x3a0
Feb 13 08:51:12 pve kernel: [60536.486157] vfs_fsync_range+0x4b/0xb0
Feb 13 08:51:12 pve kernel: [60536.486668] do_fsync+0x3d/0x70
Feb 13 08:51:12 pve kernel: [60536.487108] SyS_fsync+0x10/0x20
Feb 13 08:51:12 pve kernel: [60536.487611] do_syscall_64+0x5b/0xc0
Feb 13 08:51:12 pve kernel: [60536.488072] entry_SYSCALL64_slow_path+0x8/0x8
Feb 13 08:51:12 pve kernel: [60536.488466] RIP: 0033:0x7f782b9046ed
Feb 13 08:51:12 pve kernel: [60536.488845] RSP: 002b:00007ffe3c0172f0 EFLAGS: 00000293 ORIG_RAX: 000000000000004a
Feb 13 08:51:12 pve kernel: [60536.489309] RAX: ffffffffffffffda RBX: 00005565f83c6c30 RCX: 00007f782b9046ed
Feb 13 08:51:12 pve kernel: [60536.489777] RDX: 00007f782c37d000 RSI: 0000000000000000 RDI: 0000000000000015
Feb 13 08:51:12 pve kernel: [60536.490223] RBP: 0000000000000001 R08: e1966ac35cfecfcc R09: 0000000000000000
Feb 13 08:51:12 pve kernel: [60536.490667] R10: 50dc7e97e946c954 R11: 0000000000000293 R12: 00005565f83c6c30
Feb 13 08:51:12 pve kernel: [60536.491115] R13: 00007ffe3c017440 R14: 00007ffe3c017438 R15: 00007ffe3c017440
Feb 13 08:51:12 pve kernel: [60536.493298] pve-firewall D 0 2370 1 0x00000000
Feb 13 08:51:12 pve kernel: [60536.493680] Call Trace:
Feb 13 08:51:12 pve kernel: [60536.494168] __schedule+0x3cc/0x850
Feb 13 08:51:12 pve kernel: [60536.494579] schedule+0x36/0x80
Feb 13 08:51:12 pve kernel: [60536.495031] io_schedule+0x16/0x40
Feb 13 08:51:12 pve kernel: [60536.495509] wait_on_page_bit+0xf6/0x130
Feb 13 08:51:12 pve kernel: [60536.495972] ? page_cache_tree_insert+0xc0/0xc0
Feb 13 08:51:12 pve kernel: [60536.496430] truncate_inode_pages_range+0x44e/0x830
Feb 13 08:51:12 pve kernel: [60536.496846] ? __filemap_fdatawrite_range+0xd4/0x100
Feb 13 08:51:12 pve kernel: [60536.497244] truncate_inode_pages_final+0x4d/0x60
Feb 13 08:51:12 pve kernel: [60536.497718] ext4_evict_inode+0x156/0x5c0
Feb 13 08:51:12 pve kernel: [60536.498250] evict+0xc7/0x1a0
Feb 13 08:51:12 pve kernel: [60536.498645] iput+0x1c3/0x220
Feb 13 08:51:12 pve kernel: [60536.499072] dentry_unlink_inode+0xc1/0x160
Feb 13 08:51:12 pve kernel: [60536.499508] __dentry_kill+0xbe/0x160
Feb 13 08:51:12 pve kernel: [60536.499901] dput+0x138/0x1f0
Feb 13 08:51:12 pve kernel: [60536.500310] SyS_rename+0x297/0x410
Feb 13 08:51:12 pve kernel: [60536.500705] entry_SYSCALL_64_fastpath+0x33/0xa3
Feb 13 08:51:12 pve kernel: [60536.501123] RIP: 0033:0x7f7981186d17
Feb 13 08:51:12 pve kernel: [60536.501554] RSP: 002b:00007ffdf3df5198 EFLAGS: 00000246 ORIG_RAX: 0000000000000052
Feb 13 08:51:12 pve kernel: [60536.501954] RAX: ffffffffffffffda RBX: 000055934a3dc010 RCX: 00007f7981186d17
Feb 13 08:51:12 pve kernel: [60536.502450] RDX: 0000000000000032 RSI: 000055934e1e3720 RDI: 000055934e1f0070
Feb 13 08:51:12 pve kernel: [60536.502833] RBP: 000055934a3e18f8 R08: 0000000000000200 R09: 0000000000000009
Feb 13 08:51:12 pve kernel: [60536.503240] R10: 0000000000000000 R11: 0000000000000246 R12: 000055934a3e1900
Feb 13 08:51:12 pve kernel: [60536.503736] R13: 000055934b901e18 R14: 000055934e1e3720 R15: 0000000000000000
Feb 13 09:29:40 pve kernel: [62844.488044] e1000e: eth0 NIC Link is Down
Feb 13 09:29:40 pve kernel: [62844.490194] vmbr0: port 1(eth0) entered disabled state
Feb 13 09:29:42 pve kernel: [62846.446813] e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
Feb 13 09:29:42 pve kernel: [62846.448517] vmbr0: port 1(eth0) entered blocking state
Feb 13 09:29:42 pve kernel: [62846.448983] vmbr0: port 1(eth0) entered forwarding state
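All of the tasks in the traces above (jbd2/dm-5-8, pve-firewall, systemd-journal) are in uninterruptible sleep, shown as state `D` in the log lines. Rather than waiting for the 120-second hung-task warnings, such tasks can be spotted live with a small `ps` one-liner; a sketch, assuming a procps-style `ps` (the `wchan` column, which hints at the kernel function a task is stuck in, is Linux-specific):

```shell
# List processes currently in uninterruptible (D) sleep, keeping the header
# row, plus the kernel wait channel (wchan) to hint at the stuck I/O path.
ps -eo state,pid,comm,wchan | awk 'NR == 1 || $1 ~ /^D/'
```

On a healthy system this normally prints only the header; during the stalls logged above, jbd2 and the fsync/rename callers would show up here with I/O-related wait channels.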



=== # journalctl -fxe

-- Unit pve-container@109.service has begun starting up.
Feb 13 02:17:34 pve.example.com kernel: EXT4-fs warning (device loop3): ext4_multi_mount_protect:324: MMP interval 42 higher than expected, please wait.
Feb 13 02:17:42 pve.example.com pvedaemon[30283]: starting lxc vnc proxy UPID:pve:0000764B:003858CE:5A823CB6:vncproxy:109:root@pam:
Feb 13 02:17:42 pve.example.com pvedaemon[2434]: <root@pam> starting task UPID:pve:0000764B:003858CE:5A823CB6:vncproxy:109:root@pam:
Feb 13 02:18:10 pve.example.com pvedaemon[2434]: <root@pam> end task UPID:pve:0000764B:003858CE:5A823CB6:vncproxy:109:root@pam: OK
Feb 13 02:18:19 pve.example.com kernel: EXT4-fs (loop3): recovery complete
Feb 13 02:18:19 pve.example.com kernel: EXT4-fs (loop3): mounted filesystem with ordered data mode. Opts: (null)
Feb 13 02:18:20 pve.example.com kernel: IPv6: ADDRCONF(NETDEV_UP): veth109i0: link is not ready
Feb 13 02:18:20 pve.example.com ovs-vsctl[30378]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port veth109i0
Feb 13 02:18:20 pve.example.com ovs-vsctl[30378]: ovs|00002|db_ctl_base|ERR|no port named veth109i0
Feb 13 02:18:20 pve.example.com ovs-vsctl[30379]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl del-port fwln109i0
Feb 13 02:18:20 pve.example.com ovs-vsctl[30379]: ovs|00002|db_ctl_base|ERR|no port named fwln109i0
Feb 13 02:18:20 pve.example.com ovs-vsctl[30380]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl add-port vmbr1 veth109i0 tag=212
Feb 13 02:18:20 pve.example.com kernel: device veth109i0 entered promiscuous mode
Feb 13 02:18:20 pve.example.com kernel: eth0: renamed from vethPLF92E
Feb 13 02:18:21 pve.example.com systemd[1]: Started PVE LXC Container: 109.
-- Subject: Unit pve-container@109.service has finished start-up
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- Unit pve-container@109.service has finished starting up.
--
-- The start-up result is done.
Feb 13 02:18:21 pve.example.com pvedaemon[2433]: <root@pam> end task UPID:pve:00007625:00385579:5A823CAD:vzstart:109:root@pam: OK
Feb 13 02:18:21 pve.example.com pvestatd[2376]: modified cpu set for lxc/103: 1,3
Feb 13 02:18:21 pve.example.com pvestatd[2376]: modified cpu set for lxc/104: 5,7
Feb 13 02:18:21 pve.example.com pvestatd[2376]: status update time (42.929 seconds)
Feb 13 02:18:21 pve.example.com pmxcfs[2305]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/pve/local: -1
Feb 13 02:18:21 pve.example.com pmxcfs[2305]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/pve/ISO: -1
Feb 13 02:18:21 pve.example.com pmxcfs[2305]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/pve/data: -1
Feb 13 02:18:21 pve.example.com pmxcfs[2305]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/pve/datenklo: -1
Feb 13 02:18:21 pve.example.com pmxcfs[2305]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/pve/backup: -1
Feb 13 02:18:26 pve.example.com pvedaemon[2433]: <root@pam> starting task UPID:pve:00007858:00386A36:5A823CE2:vncproxy:109:root@pam:
Feb 13 02:18:26 pve.example.com pvedaemon[30808]: starting lxc vnc proxy UPID:pve:00007858:00386A36:5A823CE2:vncproxy:109:root@pam:
Feb 13 02:20:42 pve.example.com rrdcached[2263]: queue_thread_main: rrd_update_r (/var/lib/rrdcached/db/pve2-storage/pve/data) failed with status -1. (/var/lib/rrdcached/db/pve2-storage/pve/data: illegal attempt to update using time 1518484548 when last update time is 1518484701 (minimum one second step))
Feb 13 02:20:42 pve.example.com rrdcached[2263]: queue_thread_main: rrd_update_r (/var/lib/rrdcached/db/pve2-storage/pve/datenklo) failed with status -1. (/var/lib/rrdcached/db/pve2-storage/pve/datenklo: illegal attempt to update using time 1518484548 when last update time is 1518484701 (minimum one second step))
Feb 13 02:20:42 pve.example.com rrdcached[2263]: queue_thread_main: rrd_update_r (/var/lib/rrdcached/db/pve2-storage/pve/backup) failed with status -1. (/var/lib/rrdcached/db/pve2-storage/pve/backup: illegal attempt to update using time 1518484548 when last update time is 1518484701 (minimum one second step))
Feb 13 02:20:42 pve.example.com rrdcached[2263]: queue_thread_main: rrd_update_r (/var/lib/rrdcached/db/pve2-storage/pve/ISO) failed with status -1. (/var/lib/rrdcached/db/pve2-storage/pve/ISO: illegal attempt to update using time 1518484548 when last update time is 1518484701 (minimum one second step))
Feb 13 02:20:42 pve.example.com rrdcached[2263]: queue_thread_main: rrd_update_r (/var/lib/rrdcached/db/pve2-storage/pve/local) failed with status -1. (/var/lib/rrdcached/db/pve2-storage/pve/local: illegal attempt to update using time 1518484548 when last update time is 1518484701 (minimum one second step))
Feb 13 02:22:57 pve.example.com pveproxy[11547]: worker exit
Feb 13 02:22:57 pve.example.com pveproxy[2487]: worker 11547 finished
Feb 13 02:22:57 pve.example.com pveproxy[2487]: starting 1 worker(s)
Feb 13 02:22:57 pve.example.com pveproxy[2487]: worker 32320 started
Feb 13 02:30:37 pve.example.com sshd[1864]: Accepted publickey for root from 1.2.3.4 port 1876 ssh2: ED25519 SHA256:RSXI6CUxxx...
Feb 13 02:30:37 pve.example.com sshd[1864]: pam_unix(sshd:session): session opened for user root by (uid=0)
Feb 13 02:30:37 pve.example.com systemd-logind[1234]: New session 36 of user root.

[...]

Feb 13 05:21:28 pve.example.com ntpd[1841]: peer 144.76.102.204 now invalid
Feb 13 05:39:34 pve.example.com ntpd[1841]: peer 144.76.102.204 now valid
Feb 13 06:03:36 pve.example.com rrdcached[2263]: flushing old values
Feb 13 06:03:36 pve.example.com rrdcached[2263]: rotating journals
Feb 13 06:03:36 pve.example.com rrdcached[2263]: started new journal /var/lib/rrdcached/journal/rrd.journal.1518498216.200060
Feb 13 06:03:36 pve.example.com rrdcached[2263]: removing old journal /var/lib/rrdcached/journal/rrd.journal.1518491016.200033
Feb 13 06:17:01 pve.example.com CRON[19104]: pam_unix(cron:session): session opened for user root by (uid=0)
Feb 13 06:17:01 pve.example.com CRON[19105]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Feb 13 06:17:01 pve.example.com CRON[19104]: pam_unix(cron:session): session closed for user root
Feb 13 06:25:01 pve.example.com CRON[20838]: pam_unix(cron:session): session opened for user root by (uid=0)
Feb 13 06:25:01 pve.example.com CRON[20839]: (root) CMD (test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily ))
Feb 13 06:25:07 pve.example.com systemd[1]: Reloading PVE API Proxy Server.
-- Subject: Unit pveproxy.service has begun reloading its configuration
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- Unit pveproxy.service has begun reloading its configuration
Feb 13 06:25:08 pve.example.com pveproxy[20920]: send HUP to 2487
Feb 13 06:25:08 pve.example.com pveproxy[2487]: received signal HUP
Feb 13 06:25:08 pve.example.com pveproxy[2487]: server closing
Feb 13 06:25:08 pve.example.com pveproxy[2487]: server shutdown (restart)
Feb 13 06:25:08 pve.example.com systemd[1]: Reloaded PVE API Proxy Server.
-- Subject: Unit pveproxy.service has finished reloading its configuration
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- Unit pveproxy.service has finished reloading its configuration
--
-- The result is done.
Feb 13 06:25:08 pve.example.com systemd[1]: Reloading PVE SPICE Proxy Server.
-- Subject: Unit spiceproxy.service has begun reloading its configuration
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- Unit spiceproxy.service has begun reloading its configuration
Feb 13 06:25:08 pve.example.com spiceproxy[20925]: send HUP to 2507
Feb 13 06:25:08 pve.example.com spiceproxy[2507]: received signal HUP
Feb 13 06:25:08 pve.example.com spiceproxy[2507]: server closing
Feb 13 06:25:08 pve.example.com spiceproxy[2507]: server shutdown (restart)
Feb 13 06:25:08 pve.example.com systemd[1]: Reloaded PVE SPICE Proxy Server.
-- Subject: Unit spiceproxy.service has finished reloading its configuration
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- Unit spiceproxy.service has finished reloading its configuration
--
-- The result is done.
Feb 13 06:25:09 pve.example.com pveproxy[2487]: restarting server
Feb 13 06:25:09 pve.example.com pveproxy[2487]: starting 3 worker(s)
Feb 13 06:25:09 pve.example.com pveproxy[2487]: worker 20939 started
Feb 13 06:25:09 pve.example.com pveproxy[2487]: worker 20940 started
Feb 13 06:25:09 pve.example.com pveproxy[2487]: worker 20941 started
Feb 13 06:25:09 pve.example.com spiceproxy[2507]: restarting server
Feb 13 06:25:09 pve.example.com spiceproxy[2507]: starting 1 worker(s)
Feb 13 06:25:09 pve.example.com spiceproxy[2507]: worker 20942 started
Feb 13 06:25:09 pve.example.com systemd[1]: Stopping Proxmox VE firewall logger...
-- Subject: Unit pvefw-logger.service has begun shutting down
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- Unit pvefw-logger.service has begun shutting down.
Feb 13 06:25:09 pve.example.com pvefw-logger[1000]: received terminate request (signal)
Feb 13 06:25:09 pve.example.com pvefw-logger[1000]: stopping pvefw logger
Feb 13 06:25:09 pve.example.com systemd[1]: Stopped Proxmox VE firewall logger.
-- Subject: Unit pvefw-logger.service has finished shutting down
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- Unit pvefw-logger.service has finished shutting down.
Feb 13 06:25:09 pve.example.com systemd[1]: Starting Proxmox VE firewall logger...
-- Subject: Unit pvefw-logger.service has begun start-up
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- Unit pvefw-logger.service has begun starting up.
Feb 13 06:25:09 pve.example.com pvefw-logger[20956]: starting pvefw logger
Feb 13 06:25:09 pve.example.com systemd[1]: Started Proxmox VE firewall logger.
-- Subject: Unit pvefw-logger.service has finished start-up
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- Unit pvefw-logger.service has finished starting up.
--
-- The start-up result is done.
Feb 13 06:25:10 pve.example.com liblogging-stdlog[1293]: [origin software="rsyslogd" swVersion="8.24.0" x-pid="1293" x-info="http://www.rsyslog.com"] rsyslogd was HUPed
Feb 13 06:25:10 pve.example.com liblogging-stdlog[1293]: [origin software="rsyslogd" swVersion="8.24.0" x-pid="1293" x-info="http://www.rsyslog.com"] rsyslogd was HUPed
Feb 13 06:25:14 pve.example.com pveproxy[32320]: worker exit
Feb 13 06:25:14 pve.example.com pveproxy[4632]: worker exit
Feb 13 06:25:14 pve.example.com pveproxy[6270]: worker exit
Feb 13 06:25:14 pve.example.com pveproxy[2487]: worker 6270 finished
Feb 13 06:25:14 pve.example.com pveproxy[2487]: worker 4632 finished
Feb 13 06:25:14 pve.example.com pveproxy[2487]: worker 32320 finished
Feb 13 06:25:14 pve.example.com spiceproxy[2511]: worker exit
Feb 13 06:25:14 pve.example.com spiceproxy[2507]: worker 2511 finished
Feb 13 06:25:15 pve.example.com CRON[20838]: pam_unix(cron:session): session closed for user root
Feb 13 06:35:11 pve.example.com ntpd[1841]: peer 176.9.1.211 now invalid
Feb 13 06:36:25 pve.example.com ntpd[1841]: peer 85.10.199.217 now invalid
Feb 13 06:38:02 pve.example.com systemd[1]: Starting Daily apt upgrade and clean activities...
  439. -- Subject: Unit apt-daily-upgrade.service has begun start-up
  440. -- Defined-By: systemd
  441. -- Support: https://www.debian.org/support
  442. --
  443. -- Unit apt-daily-upgrade.service has begun starting up.
  444. Feb 13 06:38:02 pve.example.com systemd[1]: Started Daily apt upgrade and clean activities.
  445. -- Subject: Unit apt-daily-upgrade.service has finished start-up
  446. -- Defined-By: systemd
  447. -- Support: https://www.debian.org/support
  448. --
  449. -- Unit apt-daily-upgrade.service has finished starting up.
  450. --
  451. -- The start-up result is done.
  452. Feb 13 06:38:02 pve.example.com systemd[1]: apt-daily-upgrade.timer: Adding 8min 7.515037s random time.
  453. Feb 13 06:38:02 pve.example.com systemd[1]: apt-daily-upgrade.timer: Adding 59min 45.064872s random time.
  454. Feb 13 06:53:22 pve.example.com ntpd[1841]: peer 176.9.1.211 now valid
  455. Feb 13 07:03:36 pve.example.com rrdcached[2263]: flushing old values
  456. Feb 13 07:03:36 pve.example.com rrdcached[2263]: rotating journals
  457. Feb 13 07:03:36 pve.example.com rrdcached[2263]: started new journal /var/lib/rrdcached/journal/rrd.journal.1518501816.200058
  458. Feb 13 07:03:36 pve.example.com rrdcached[2263]: removing old journal /var/lib/rrdcached/journal/rrd.journal.1518494616.200050
  459. Feb 13 07:04:17 pve.example.com ntpd[1841]: peer 85.10.199.217 now valid
  460. Feb 13 07:13:07 pve.example.com ntpd[1841]: peer 85.10.199.217 now invalid
  461. Feb 13 07:17:01 pve.example.com CRON[32463]: pam_unix(cron:session): session opened for user root by (uid=0)
  462. Feb 13 07:17:01 pve.example.com CRON[32464]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
  463. Feb 13 07:17:01 pve.example.com CRON[32463]: pam_unix(cron:session): session closed for user root
  464. Feb 13 07:41:30 pve.example.com ntpd[1841]: peer 85.10.199.217 now valid
  465. Feb 13 07:50:35 pve.example.com ntpd[1841]: peer 85.10.199.217 now invalid
  466. Feb 13 08:03:36 pve.example.com rrdcached[2263]: flushing old values
  467. Feb 13 08:03:36 pve.example.com rrdcached[2263]: rotating journals
  468. Feb 13 08:03:36 pve.example.com rrdcached[2263]: started new journal /var/lib/rrdcached/journal/rrd.journal.1518505416.200058
  469. Feb 13 08:03:36 pve.example.com rrdcached[2263]: removing old journal /var/lib/rrdcached/journal/rrd.journal.1518498216.200060
  470. Feb 13 08:08:41 pve.example.com ntpd[1841]: peer 85.10.199.217 now valid
  471. Feb 13 08:17:01 pve.example.com CRON[13246]: pam_unix(cron:session): session opened for user root by (uid=0)
  472. Feb 13 08:17:01 pve.example.com CRON[13247]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
  473. Feb 13 08:17:01 pve.example.com CRON[13246]: pam_unix(cron:session): session closed for user root
  474.  
  475.  
  476.  
  477. === # dmesg (several hours later)
  478.  
  479. [94374.491688] systemd[1]: Stopped Journal Service.
  480. [94374.492567] systemd[1]: Starting Journal Service...
  481. [94464.737068] systemd[1]: systemd-journald.service: Start operation timed out. Terminating.
  482. [94554.985595] systemd[1]: systemd-journald.service: State 'stop-final-sigterm' timed out. Killing.
  483. [94554.986188] systemd[1]: systemd-journald.service: Killing process 17082 (systemd-journal) with signal SIGKILL.
  484. [94554.986809] systemd[1]: systemd-journald.service: Killing process 482 (systemd-journal) with signal SIGKILL.
  485. [94554.987363] systemd[1]: systemd-journald.service: Killing process 654 (systemd-journal) with signal SIGKILL.
  486. [94554.987812] systemd[1]: systemd-journald.service: Killing process 772 (systemd-journal) with signal SIGKILL.
  487. [94554.988233] systemd[1]: systemd-journald.service: Killing process 1061 (systemd-journal) with signal SIGKILL.
  488. [94554.988678] systemd[1]: systemd-journald.service: Killing process 1468 (systemd-journal) with signal SIGKILL.
  489. [94554.989097] systemd[1]: systemd-journald.service: Killing process 1509 (systemd-journal) with signal SIGKILL.
  490. [94554.989499] systemd[1]: systemd-journald.service: Killing process 1837 (systemd-journal) with signal SIGKILL.
  491. [94554.989863] systemd[1]: systemd-journald.service: Killing process 2198 (systemd-journal) with signal SIGKILL.
  492. [94645.234127] systemd[1]: systemd-journald.service: Processes still around after final SIGKILL. Entering failed mode.
  493. [94645.234708] systemd[1]: Failed to start Journal Service.
  494. [94645.235214] systemd[1]: systemd-journald.service: Unit entered failed state.
  495. [94645.235884] systemd[1]: systemd-journald.service: Failed with result 'timeout'.
  496. [94645.236818] systemd[1]: systemd-journald.service: Service has no hold-off time, scheduling restart.
  497. [94645.237558] systemd[1]: Stopped Journal Service.
  498. [94645.238370] systemd[1]: Starting Journal Service...
  499. [94735.482659] systemd[1]: systemd-journald.service: Start operation timed out. Terminating.
  500. [94825.731198] systemd[1]: systemd-journald.service: State 'stop-final-sigterm' timed out. Killing.
  501. [94825.731865] systemd[1]: systemd-journald.service: Killing process 17966 (systemd-journal) with signal SIGKILL.
  502. [94825.732385] systemd[1]: systemd-journald.service: Killing process 482 (systemd-journal) with signal SIGKILL.
  503. [94825.732875] systemd[1]: systemd-journald.service: Killing process 654 (systemd-journal) with signal SIGKILL.
  504. [94825.733326] systemd[1]: systemd-journald.service: Killing process 772 (systemd-journal) with signal SIGKILL.
  505. [94825.733737] systemd[1]: systemd-journald.service: Killing process 1061 (systemd-journal) with signal SIGKILL.
  506. [94825.734131] systemd[1]: systemd-journald.service: Killing process 1468 (systemd-journal) with signal SIGKILL.
  507. [94825.734617] systemd[1]: systemd-journald.service: Killing process 1509 (systemd-journal) with signal SIGKILL.
  508. [94825.734999] systemd[1]: systemd-journald.service: Killing process 1837 (systemd-journal) with signal SIGKILL.
  509. [94825.735366] systemd[1]: systemd-journald.service: Killing process 2198 (systemd-journal) with signal SIGKILL.
  510. [94915.979727] systemd[1]: systemd-journald.service: Processes still around after final SIGKILL. Entering failed mode.
  511. [94915.980386] systemd[1]: Failed to start Journal Service.
  512. [94915.980822] systemd[1]: systemd-journald.service: Unit entered failed state.
  513. [94915.981252] systemd[1]: systemd-journald.service: Failed with result 'timeout'.
  514. [94915.982228] systemd[1]: systemd-journald.service: Service has no hold-off time, scheduling restart.
  515. [94915.982742] systemd[1]: Stopped Journal Service.
  516. [94915.983583] systemd[1]: Starting Journal Service...
  517. etc. (the timed-out start / SIGKILL / failed-restart cycle repeats)
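The repeating timeout / SIGKILL / restart cycle above is what systemd logs when journald's processes are stuck in uninterruptible I/O (state D): SIGKILL cannot reap a process that is blocked inside the kernel, which matches the jbd2 hung-task warning in kern.log. One quick way to gauge how long the loop has been running is to count the failed start attempts in a saved dmesg dump (the sample file and path below are hypothetical, shaped like the excerpt above):

```shell
# Hypothetical sample in the same shape as the dmesg excerpt above
cat > /tmp/dmesg.sample <<'EOF'
[94464.737068] systemd[1]: systemd-journald.service: Start operation timed out. Terminating.
[94645.234708] systemd[1]: Failed to start Journal Service.
[94735.482659] systemd[1]: systemd-journald.service: Start operation timed out. Terminating.
[94915.980386] systemd[1]: Failed to start Journal Service.
EOF

# Each failed cycle logs exactly one "Start operation timed out" line
grep -c 'Start operation timed out' /tmp/dmesg.sample
```

On the live system, `dmesg | grep -c 'Start operation timed out'` gives the same count directly.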
  518.  
  519.  
  520.  
  521. === # pvesm status
  522.  
  523. Name      Type  Status       Total       Used  Available       %
  524. ISO       dir   active    52403200    6745148   45658052  12.87%
  525. backup    dir   active   754286748  236744848  517541900  31.39%
  526. data      dir   active  1073217536  383752412  689465124  35.76%
  527. datenklo  dir   active   826930684  492482752  334447932  59.56%
  528. local     dir   active    51475068    3828092   45339488   7.44%
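None of the storages is close to full, so this does not look like an out-of-space problem. For quick triage of `pvesm status` output, a one-line awk filter can flag storages above a chosen usage threshold (the 50% cutoff and the sample file path below are arbitrary):

```shell
# Sample copy of the table above (hypothetical file path)
cat > /tmp/pvesm.sample <<'EOF'
Name Type Status Total Used Available %
ISO dir active 52403200 6745148 45658052 12.87%
backup dir active 754286748 236744848 517541900 31.39%
data dir active 1073217536 383752412 689465124 35.76%
datenklo dir active 826930684 492482752 334447932 59.56%
local dir active 51475068 3828092 45339488 7.44%
EOF

# Print storages whose usage (column 7, e.g. "59.56%") exceeds 50%;
# awk's $7+0 coerces the string "59.56%" to the number 59.56
awk 'NR>1 && $7+0 > 50 {print $1, $7}' /tmp/pvesm.sample
```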
  529.  
  530.  
  531.  
  532.  
  533. === # lvs -a -o+devices
  534.  
  535. LV              VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices
  536. ISO             pve -wi-ao---- 50.00g                                                      /dev/sda4(15361)
  537. backup          pve -wi-ao---- 719.70g                                                     /dev/sda4(290305)
  538. data            pve rwi-aor--- 1.00t                                        100.00         data_rimage_0(0),data_rimage_1(0)
  539. [data_rimage_0] pve iwi-aor--- 1.00t                                                       /dev/sdb2(0)
  540. [data_rimage_1] pve iwi-aor--- 1.00t                                                       /dev/sda4(28162)
  541. [data_rimage_1] pve iwi-aor--- 1.00t                                                       /dev/sda2(2)
  542. [data_rmeta_0]  pve ewi-aor--- 4.00m                                                       /dev/sdb2(262144)
  543. [data_rmeta_1]  pve ewi-aor--- 4.00m                                                       /dev/sda4(28161)
  544. datenklo        pve -wi-ao---- 789.01g                                                     /dev/sdb2(262145)
  545. root            pve rwi-aor--- 50.00g                                       100.00         root_rimage_0(0),root_rimage_1(0)
  546. [root_rimage_0] pve iwi-aor--- 50.00g                                                      /dev/sda4(2560)
  547. [root_rimage_1] pve iwi-aor--- 50.00g                                                      /dev/sdb1(1)
  548. [root_rimage_1] pve iwi-aor--- 50.00g                                                      /dev/sda2(0)
  549. [root_rmeta_0]  pve ewi-aor--- 4.00m                                                       /dev/sda4(15360)
  550. [root_rmeta_1]  pve ewi-aor--- 4.00m                                                       /dev/sdb1(0)
  551. swap            pve -wi-ao---- 10.00g                                                      /dev/sda4(0)
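The `rwi-aor---` rows (`data`, `root`) are LVM RAID1 mirrors, and the 100.00 in the Cpy%Sync column says both are fully synced, so neither mirror is mid-rebuild. To pull just the RAID LVs and their sync percentage out of a saved dump, a small awk filter works (the sample file and path are hypothetical; on a live system something like `lvs -a -o name,attr,copy_percent` should report the same, though exact field names depend on the lvm2 version):

```shell
# Trimmed sample in the same shape as the `lvs` output above
cat > /tmp/lvs.sample <<'EOF'
ISO pve -wi-ao---- 50.00g /dev/sda4(15361)
data pve rwi-aor--- 1.00t 100.00 data_rimage_0(0),data_rimage_1(0)
root pve rwi-aor--- 50.00g 100.00 root_rimage_0(0),root_rimage_1(0)
swap pve -wi-ao---- 10.00g /dev/sda4(0)
EOF

# Top-level RAID LVs have an attr string starting with "rwi";
# in these rows column 5 is the Cpy%Sync value
awk '$3 ~ /^rwi/ {print $1, $5 "%"}' /tmp/lvs.sample
```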
  552.  
  553.  
  554.  
  555. === /var/log/lxc/lxc-monitord.log
  556.  
  557. lxc-monitord 20170918223148.527 INFO lxc_monitord - lxc_monitord.c:lxc_monitord_sock_accept:222 - Accepted client file descriptor 7. Number of accepted file descriptors is now 1.
  558. lxc-monitord 20170918223149.785 DEBUG lxc_monitord - lxc_monitord.c:lxc_monitord_fifo_create:108 - lxc-monitord already running on lxcpath /var/lib/lxc.
  559. lxc-monitord 20170918223149.788 INFO lxc_monitord - lxc_monitord.c:lxc_monitord_sock_accept:222 - Accepted client file descriptor 7. Number of accepted file descriptors is now 1.
  560. lxc-monitord 20170919130752.947 INFO lxc_monitor - monitor.c:lxc_monitor_sock_name:201 - using monitor socket name "lxc/ad055575fe28ddd5//var/lib/lxc" (length of socket name 33 must be <= 105)
  561. lxc-monitord 20170919130752.954 NOTICE lxc_monitord - lxc_monitord.c:main:456 - lxc-monitord with pid 2561 is now monitoring lxcpath /var/lib/lxc.
  562. lxc-monitord 20170919165333.264 DEBUG lxc_monitord - lxc_monitord.c:lxc_monitord_fifo_create:109 - lxc-monitord already running on lxcpath /var/lib/lxc.
  563. lxc-monitord 20170919165333.264 INFO lxc_monitord - lxc_monitord.c:lxc_monitord_sock_accept:223 - Accepted client file descriptor 7. Number of accepted file descriptors is now 1.
  564. lxc-monitord 20171210220519.957 INFO lxc_monitor - monitor.c:lxc_monitor_sock_name:201 - using monitor socket name "lxc/ad055575fe28ddd5//var/lib/lxc" (length of socket name 33 must be <= 105)
  565. lxc-monitord 20171210220519.964 NOTICE lxc_monitord - lxc_monitord.c:main:456 - lxc-monitord with pid 2063 is now monitoring lxcpath /var/lib/lxc.
  566. lxc-monitord 20171223155336.414 INFO lxc_monitor - monitor.c:lxc_monitor_sock_name:201 - using monitor socket name "lxc/ad055575fe28ddd5//var/lib/lxc" (length of socket name 33 must be <= 105)
  567. lxc-monitord 20171223155336.421 NOTICE lxc_monitord - lxc_monitord.c:main:456 - lxc-monitord with pid 2054 is now monitoring lxcpath /var/lib/lxc.
  568. lxc-monitord 20171225120333.958 INFO lxc_monitor - monitor.c:lxc_monitor_sock_name:201 - using monitor socket name "lxc/ad055575fe28ddd5//var/lib/lxc" (length of socket name 33 must be <= 105)
  569. lxc-monitord 20171225120333.997 NOTICE lxc_monitord - lxc_monitord.c:main:456 - lxc-monitord with pid 2083 is now monitoring lxcpath /var/lib/lxc.
  570. lxc-monitord 20171225121825.796 INFO lxc_monitor - monitor.c:lxc_monitor_sock_name:201 - using monitor socket name "lxc/ad055575fe28ddd5//var/lib/lxc" (length of socket name 33 must be <= 105)
  571. lxc-monitord 20171225121825.959 NOTICE lxc_monitord - lxc_monitord.c:main:456 - lxc-monitord with pid 2079 is now monitoring lxcpath /var/lib/lxc.
  572. lxc-monitord 20180121171324.177 INFO lxc_monitor - monitor.c:lxc_monitor_sock_name:201 - using monitor socket name "lxc/ad055575fe28ddd5//var/lib/lxc" (length of socket name 33 must be <= 105)
  573. lxc-monitord 20180121171324.194 NOTICE lxc_monitord - lxc_monitord.c:main:456 - lxc-monitord with pid 2024 is now monitoring lxcpath /var/lib/lxc.
  574. lxc-monitord 20180121172726.257 INFO lxc_monitor - monitor.c:lxc_monitor_sock_name:201 - using monitor socket name "lxc/ad055575fe28ddd5//var/lib/lxc" (length of socket name 33 must be <= 105)
  575. lxc-monitord 20180121172726.270 NOTICE lxc_monitord - lxc_monitord.c:main:456 - lxc-monitord with pid 1992 is now monitoring lxcpath /var/lib/lxc.
  576. lxc-monitord 20180127134334.423 INFO lxc_monitor - monitor.c:lxc_monitor_sock_name:201 - using monitor socket name "lxc/ad055575fe28ddd5//var/lib/lxc" (length of socket name 33 must be <= 105)
  577. lxc-monitord 20180127134334.431 NOTICE lxc_monitord - lxc_monitord.c:main:456 - lxc-monitord with pid 2038 is now monitoring lxcpath /var/lib/lxc.
  578. lxc-monitord 20180127135921.737 INFO lxc_monitor - monitor.c:lxc_monitor_sock_name:201 - using monitor socket name "lxc/ad055575fe28ddd5//var/lib/lxc" (length of socket name 33 must be <= 105)
  579. lxc-monitord 20180127135921.744 NOTICE lxc_monitord - lxc_monitord.c:main:456 - lxc-monitord with pid 1903 is now monitoring lxcpath /var/lib/lxc.
  580. [long run of NUL bytes (^@) trimmed — the log file was zero-padded, consistent with an unclean shutdown] lxc-monitord 20180212150335.403 INFO lxc_monitor - monitor.c:lxc_monitor_sock_name:201 - using monitor socket name "lxc/ad055575fe28ddd5//var/lib/lxc" (length of socket name 33 must be <= 105)
  581. lxc-monitord 20180212150335.416 NOTICE lxc_monitord - lxc_monitord.c:main:456 - lxc-monitord with pid 1951 is now monitoring lxcpath /var/lib/lxc.
  582. lxc-monitord 20180213140315.532 DEBUG lxc_monitord - lxc_monitord.c:lxc_monitord_fifo_create:109 - lxc-monitord already running on lxcpath /var/lib/lxc.
  583. lxc-monitord 20180213140315.533 INFO lxc_monitord - lxc_monitord.c:lxc_monitord_sock_accept:223 - Accepted client file descriptor 7. Number of accepted file descriptors is now 1.
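The stretch of NUL bytes (`^@`) earlier in this log means the file was zero-padded because the machine went down, or the filesystem stalled, while the log was open for writing. Counting NULs is a cheap way to check which logs were hit (the sample file below is hypothetical):

```shell
# Build a sample file containing four embedded NUL bytes
printf 'log line one\n\0\0\0\0log line two\n' > /tmp/lxc.sample

# tr -dc '\0' deletes everything except NUL bytes; wc -c counts what is left
tr -dc '\0' < /tmp/lxc.sample | wc -c
```

Running the same pipeline over the files in /var/log/lxc/ would show which of them were truncated mid-write.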