ceph kclient boot messages
[Wed May 10 14:04:54 2023] libceph: loaded (mon/osd proto 15/24)
[Wed May 10 14:04:54 2023] ceph: loaded (mds proto 32)
[Wed May 10 14:04:54 2023] libceph: mon2 (1)192.168.32.67:6789 session established
[Wed May 10 14:04:54 2023] libceph: mon0 (1)192.168.32.65:6789 session established
[Wed May 10 14:04:54 2023] libceph: mon0 (1)192.168.32.65:6789 session established
[Wed May 10 14:04:54 2023] libceph: mon1 (1)192.168.32.66:6789 session established
[Wed May 10 14:04:54 2023] libceph: mon0 (1)192.168.32.65:6789 session established
[Wed May 10 14:04:54 2023] libceph: mon1 (1)192.168.32.66:6789 session established
[Wed May 10 14:04:54 2023] libceph: client210370165 fsid e4ece518-f2cb-4708-b00f-b6bf511e91d9
[Wed May 10 14:04:54 2023] libceph: client210356766 fsid e4ece518-f2cb-4708-b00f-b6bf511e91d9
[Wed May 10 14:04:54 2023] libceph: client210295802 fsid e4ece518-f2cb-4708-b00f-b6bf511e91d9
[Wed May 10 14:04:54 2023] libceph: client210370170 fsid e4ece518-f2cb-4708-b00f-b6bf511e91d9
[Wed May 10 14:04:54 2023] libceph: mon2 (1)192.168.32.67:6789 session established
[Wed May 10 14:04:54 2023] libceph: mon2 (1)192.168.32.67:6789 session established
[Wed May 10 14:04:54 2023] libceph: mon0 (1)192.168.32.65:6789 session established
[Wed May 10 14:04:54 2023] libceph: mon2 (1)192.168.32.67:6789 session established
[Wed May 10 14:04:54 2023] libceph: client210370160 fsid e4ece518-f2cb-4708-b00f-b6bf511e91d9
[Wed May 10 14:04:54 2023] libceph: client210356771 fsid e4ece518-f2cb-4708-b00f-b6bf511e91d9
[Wed May 10 14:04:54 2023] libceph: client210295807 fsid e4ece518-f2cb-4708-b00f-b6bf511e91d9
[Wed May 10 14:04:54 2023] libceph: client210295812 fsid e4ece518-f2cb-4708-b00f-b6bf511e91d9
[Wed May 10 14:04:54 2023] libceph: client210370175 fsid e4ece518-f2cb-4708-b00f-b6bf511e91d9
[Wed May 10 14:04:54 2023] libceph: client210295817 fsid e4ece518-f2cb-4708-b00f-b6bf511e91d9
[Wed May 10 14:04:56 2023] ISO 9660 Extensions: Microsoft Joliet Level 3
[Wed May 10 14:04:56 2023] ISO 9660 Extensions: RRIP_1991A
[Wed May 10 14:05:12 2023] WARNING: CPU: 3 PID: 34 at fs/ceph/caps.c:689 ceph_add_cap+0x53e/0x550 [ceph]
[Wed May 10 14:05:12 2023] Modules linked in: ceph libceph dns_resolver nls_utf8 isofs cirrus drm_shmem_helper drm_kms_helper syscopyarea sysfillrect sysimgblt intel_rapl_msr intel_rapl_common fb_sys_fops virtio_net iTCO_wdt net_failover iTCO_vendor_support drm pcspkr failover virtio_balloon joydev lpc_ich i2c_i801 nfsd nfs_acl lockd grace auth_rpcgss sunrpc xfs libcrc32c sr_mod cdrom sg crct10dif_pclmul ahci crc32_pclmul libahci crc32c_intel libata ghash_clmulni_intel serio_raw virtio_blk virtio_console virtio_scsi dm_mirror dm_region_hash dm_log dm_mod fuse
[Wed May 10 14:05:12 2023] CPU: 3 PID: 34 Comm: kworker/3:0 Not tainted 4.18.0-486.el8.x86_64 #1
[Wed May 10 14:05:12 2023] Hardware name: Red Hat KVM/RHEL-AV, BIOS 1.16.0-3.module_el8.7.0+3346+68867adb 04/01/2014
[Wed May 10 14:05:12 2023] Workqueue: ceph-msgr ceph_con_workfn [libceph]
[Wed May 10 14:05:12 2023] RIP: 0010:ceph_add_cap+0x53e/0x550 [ceph]
[Wed May 10 14:05:12 2023] Code: c0 48 c7 c7 c0 39 74 c0 e8 6c 7c 5d c7 0f 0b 44 89 7c 24 04 e9 7e fc ff ff 44 8b 7c 24 04 e9 68 fe ff ff 0f 0b e9 c9 fc ff ff <0f> 0b e9 0a fe ff ff 0f 0b e9 12 fe ff ff 0f 0b 66 90 0f 1f 44 00
[Wed May 10 14:05:12 2023] RSP: 0018:ffffa4c980d87b48 EFLAGS: 00010203
[Wed May 10 14:05:12 2023] RAX: 0000000000000000 RBX: 0000000000000005 RCX: dead000000000200
[Wed May 10 14:05:12 2023] RDX: ffff8cfc56dbf7d0 RSI: ffff8cfc56dbf7d0 RDI: ffff8cfc56dbf7c8
[Wed May 10 14:05:12 2023] RBP: ffff8cfc45503970 R08: ffff8cfc56dbf7d0 R09: 0000000000000001
[Wed May 10 14:05:12 2023] R10: ffff8cfc42658780 R11: 00000000ffff8ce0 R12: 0000000000000155
[Wed May 10 14:05:12 2023] R13: ffff8cfc42658780 R14: ffff8cfc42658788 R15: 0000000000000001
[Wed May 10 14:05:12 2023] FS: 0000000000000000(0000) GS:ffff8cfdb7d80000(0000) knlGS:0000000000000000
[Wed May 10 14:05:12 2023] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[Wed May 10 14:05:12 2023] CR2: 00007f7b7b013570 CR3: 0000000104cf4000 CR4: 00000000003506e0
[Wed May 10 14:05:12 2023] Call Trace:
[Wed May 10 14:05:12 2023] ceph_handle_caps+0xdf2/0x1780 [ceph]
[Wed May 10 14:05:12 2023] mds_dispatch+0x13a/0x670 [ceph]
[Wed May 10 14:05:12 2023] ceph_con_process_message+0x79/0x140 [libceph]
[Wed May 10 14:05:12 2023] ? calc_signature+0xdf/0x110 [libceph]
[Wed May 10 14:05:12 2023] ceph_con_v1_try_read+0x5d7/0xf30 [libceph]
[Wed May 10 14:05:12 2023] ? available_idle_cpu+0x41/0x50
[Wed May 10 14:05:12 2023] ceph_con_workfn+0x329/0x680 [libceph]
[Wed May 10 14:05:12 2023] process_one_work+0x1a7/0x360
[Wed May 10 14:05:12 2023] worker_thread+0x30/0x390
[Wed May 10 14:05:12 2023] ? create_worker+0x1a0/0x1a0
[Wed May 10 14:05:12 2023] kthread+0x134/0x150
[Wed May 10 14:05:12 2023] ? set_kthread_struct+0x50/0x50
[Wed May 10 14:05:12 2023] ret_from_fork+0x35/0x40
[Wed May 10 14:05:12 2023] ---[ end trace 03a1d82065fdafbd ]---
[Wed May 10 14:05:56 2023] ISO 9660 Extensions: Microsoft Joliet Level 3
[Wed May 10 14:05:56 2023] ISO 9660 Extensions: RRIP_1991A
[Wed May 10 16:01:24 2023] ceph: mds1 reconnect start
[Wed May 10 16:01:24 2023] ceph: mds1 reconnect start
[Wed May 10 16:01:24 2023] ceph: mds1 reconnect start
[Wed May 10 16:01:24 2023] ceph: mds1 reconnect start
[Wed May 10 16:01:24 2023] ceph: mds1 reconnect start
[Wed May 10 16:01:24 2023] ceph: mds1 reconnect start
[Wed May 10 16:01:25 2023] ceph: mds1 reconnect success
[Wed May 10 16:01:25 2023] ceph: mds1 reconnect success
[Wed May 10 16:01:25 2023] ceph: mds1 reconnect success
[Wed May 10 16:01:25 2023] ceph: mds1 reconnect success
[Wed May 10 16:01:25 2023] ceph: mds1 reconnect success
[Wed May 10 16:01:25 2023] ceph: mds1 reconnect success
[Wed May 10 16:02:12 2023] ceph: mds1 caps stale
[Wed May 10 16:02:19 2023] ceph: mds1 caps renewed
[Wed May 10 16:02:19 2023] ceph: mds1 caps renewed
[Wed May 10 16:02:19 2023] ceph: mds1 caps renewed
[Wed May 10 16:02:19 2023] ceph: mds1 caps renewed
[Wed May 10 16:02:19 2023] ceph: mds1 caps renewed
[Wed May 10 16:02:19 2023] ceph: mds1 caps renewed
[Wed May 10 16:03:06 2023] ceph: update_snap_trace error -5
[Wed May 10 16:03:06 2023] ceph: ceph_update_snap_trace failed to blocklist (3)192.168.48.142:0: -13
[Wed May 10 16:03:06 2023] ------------[ cut here ]------------
[Wed May 10 16:03:06 2023] ceph_update_snap_trace: do remount to continue after corrupted snaptrace is fixed
[Wed May 10 16:03:06 2023] WARNING: CPU: 0 PID: 5143 at fs/ceph/snap.c:841 ceph_update_snap_trace.cold.21+0x68/0x137 [ceph]
[Wed May 10 16:03:06 2023] Modules linked in: ceph libceph dns_resolver nls_utf8 isofs cirrus drm_shmem_helper drm_kms_helper syscopyarea sysfillrect sysimgblt intel_rapl_msr intel_rapl_common fb_sys_fops virtio_net iTCO_wdt net_failover iTCO_vendor_support drm pcspkr failover virtio_balloon joydev lpc_ich i2c_i801 nfsd nfs_acl lockd grace auth_rpcgss sunrpc xfs libcrc32c sr_mod cdrom sg crct10dif_pclmul ahci crc32_pclmul libahci crc32c_intel libata ghash_clmulni_intel serio_raw virtio_blk virtio_console virtio_scsi dm_mirror dm_region_hash dm_log dm_mod fuse
[Wed May 10 16:03:06 2023] CPU: 0 PID: 5143 Comm: kworker/0:0 Tainted: G W --------- - - 4.18.0-486.el8.x86_64 #1
[Wed May 10 16:03:06 2023] Hardware name: Red Hat KVM/RHEL-AV, BIOS 1.16.0-3.module_el8.7.0+3346+68867adb 04/01/2014
[Wed May 10 16:03:06 2023] Workqueue: ceph-msgr ceph_con_workfn [libceph]
[Wed May 10 16:03:06 2023] RIP: 0010:ceph_update_snap_trace.cold.21+0x68/0x137 [ceph]
[Wed May 10 16:03:06 2023] Code: c6 74 c0 48 c7 c5 10 5e 74 c0 48 89 c8 49 89 e8 48 89 c2 44 89 0c 24 48 c7 c6 60 bf 73 c0 48 c7 c7 48 61 74 c0 e8 08 e7 5b c7 <0f> 0b 44 8b 0c 24 e9 6c e8 fe ff 44 89 ce 48 c7 c7 60 5e 74 c0 44
[Wed May 10 16:03:06 2023] RSP: 0018:ffffa4c981fbbbb8 EFLAGS: 00010286
[Wed May 10 16:03:06 2023] RAX: 0000000000000000 RBX: ffff8cfc4649a089 RCX: 0000000000000027
[Wed May 10 16:03:06 2023] RDX: 0000000000000027 RSI: 00000000ffff7fff RDI: ffff8cfdb7c1e690
[Wed May 10 16:03:06 2023] RBP: ffffffffc0745e10 R08: 0000000000000000 R09: c0000000ffff7fff
[Wed May 10 16:03:06 2023] R10: 0000000000000001 R11: ffffa4c981fbb9d0 R12: ffff8cfc623c6b48
[Wed May 10 16:03:06 2023] R13: 0000000000000000 R14: ffff8cfc623c6b18 R15: ffff8cfb65855f00
[Wed May 10 16:03:06 2023] FS: 0000000000000000(0000) GS:ffff8cfdb7c00000(0000) knlGS:0000000000000000
[Wed May 10 16:03:06 2023] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[Wed May 10 16:03:06 2023] CR2: 00007fedace4d008 CR3: 000000025f67c000 CR4: 00000000003506f0
[Wed May 10 16:03:06 2023] Call Trace:
[Wed May 10 16:03:06 2023] ceph_handle_snap+0x188/0x4e0 [ceph]
[Wed May 10 16:03:06 2023] mds_dispatch+0x17a/0x670 [ceph]
[Wed May 10 16:03:06 2023] ceph_con_process_message+0x79/0x140 [libceph]
[Wed May 10 16:03:06 2023] ? calc_signature+0xdf/0x110 [libceph]
[Wed May 10 16:03:06 2023] ceph_con_v1_try_read+0x5d7/0xf30 [libceph]
[Wed May 10 16:03:06 2023] ? available_idle_cpu+0x41/0x50
[Wed May 10 16:03:06 2023] ceph_con_workfn+0x329/0x680 [libceph]
[Wed May 10 16:03:06 2023] process_one_work+0x1a7/0x360
[Wed May 10 16:03:06 2023] ? create_worker+0x1a0/0x1a0
[Wed May 10 16:03:06 2023] worker_thread+0x30/0x390
[Wed May 10 16:03:06 2023] ? create_worker+0x1a0/0x1a0
[Wed May 10 16:03:06 2023] kthread+0x134/0x150
[Wed May 10 16:03:06 2023] ? set_kthread_struct+0x50/0x50
[Wed May 10 16:03:06 2023] ret_from_fork+0x35/0x40
[Wed May 10 16:03:06 2023] ---[ end trace 03a1d82065fdafbe ]---
[Wed May 10 16:03:06 2023] ceph: corrupt snap message from mds1
[Wed May 10 16:03:06 2023] header: 00000000: 05 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
[Wed May 10 16:03:06 2023] header: 00000010: 12 03 7f 00 01 00 00 01 00 00 00 00 00 00 00 00 ................
[Wed May 10 16:03:06 2023] header: 00000020: 00 00 00 00 02 01 00 00 00 00 00 00 00 01 00 00 ................
[Wed May 10 16:03:06 2023] header: 00000030: 00 98 0d 60 93 ...`.
[Wed May 10 16:03:06 2023] front: 00000000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
[Wed May 10 16:03:06 2023] front: 00000010: 0c 00 00 00 88 00 00 00 d1 c0 71 38 00 01 00 00 ..........q8....
[Wed May 10 16:03:06 2023] front: 00000020: 22 c8 71 38 00 01 00 00 d7 c7 71 38 00 01 00 00 ".q8......q8....
[Wed May 10 16:03:06 2023] front: 00000030: d9 c7 71 38 00 01 00 00 d4 c7 71 38 00 01 00 00 ..q8......q8....
[Wed May 10 16:03:06 2023] front: 00000040: f1 c0 71 38 00 01 00 00 d4 c0 71 38 00 01 00 00 ..q8......q8....
[Wed May 10 16:03:06 2023] front: 00000050: 20 c8 71 38 00 01 00 00 1d c8 71 38 00 01 00 00 .q8......q8....
[Wed May 10 16:03:06 2023] front: 00000060: ec c0 71 38 00 01 00 00 d6 c0 71 38 00 01 00 00 ..q8......q8....
[Wed May 10 16:03:06 2023] front: 00000070: ef c0 71 38 00 01 00 00 6a 11 2d 1a 00 01 00 00 ..q8....j.-.....
[Wed May 10 16:03:06 2023] front: 00000080: 01 00 00 00 00 00 00 00 01 00 00 00 00 00 00 00 ................
[Wed May 10 16:03:06 2023] front: 00000090: ee 01 00 00 00 00 00 00 01 00 00 00 00 00 00 00 ................
[Wed May 10 16:03:06 2023] front: 000000a0: 00 00 00 00 00 00 00 00 01 00 00 00 00 00 00 00 ................
[Wed May 10 16:03:06 2023] front: 000000b0: 01 09 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
[Wed May 10 16:03:06 2023] front: 000000c0: 01 00 00 00 00 00 00 00 02 09 00 00 00 00 00 00 ................
[Wed May 10 16:03:06 2023] front: 000000d0: 05 00 00 00 00 00 00 00 01 09 00 00 00 00 00 00 ................
[Wed May 10 16:03:06 2023] front: 000000e0: ff 08 00 00 00 00 00 00 fd 08 00 00 00 00 00 00 ................
[Wed May 10 16:03:06 2023] front: 000000f0: fb 08 00 00 00 00 00 00 f9 08 00 00 00 00 00 00 ................
[Wed May 10 16:03:06 2023] footer: 00000000: ca 39 06 07 00 00 00 00 00 00 00 00 42 06 63 61 .9..........B.ca
[Wed May 10 16:03:06 2023] footer: 00000010: 7b 4b 5d 2d 05 {K]-.
[Wed May 10 16:03:06 2023] ceph: ceph_do_invalidate_pages: inode 1001a2d116a.fffffffffffffffe is shut down
[Wed May 10 16:03:07 2023] WARNING: CPU: 3 PID: 5069 at fs/ceph/mds_client.c:4623 check_session_state+0x67/0x70 [ceph]
[Wed May 10 16:03:07 2023] Modules linked in: ceph libceph dns_resolver nls_utf8 isofs cirrus drm_shmem_helper drm_kms_helper syscopyarea sysfillrect sysimgblt intel_rapl_msr intel_rapl_common fb_sys_fops virtio_net iTCO_wdt net_failover iTCO_vendor_support drm pcspkr failover virtio_balloon joydev lpc_ich i2c_i801 nfsd nfs_acl lockd grace auth_rpcgss sunrpc xfs libcrc32c sr_mod cdrom sg crct10dif_pclmul ahci crc32_pclmul libahci crc32c_intel libata ghash_clmulni_intel serio_raw virtio_blk virtio_console virtio_scsi dm_mirror dm_region_hash dm_log dm_mod fuse
[Wed May 10 16:03:07 2023] CPU: 3 PID: 5069 Comm: kworker/3:1 Tainted: G W --------- - - 4.18.0-486.el8.x86_64 #1
[Wed May 10 16:03:07 2023] Hardware name: Red Hat KVM/RHEL-AV, BIOS 1.16.0-3.module_el8.7.0+3346+68867adb 04/01/2014
[Wed May 10 16:03:07 2023] Workqueue: events delayed_work [ceph]
[Wed May 10 16:03:07 2023] RIP: 0010:check_session_state+0x67/0x70 [ceph]
[Wed May 10 16:03:07 2023] Code: 3f 36 ed c8 48 39 d0 0f 88 5a 6a 00 00 89 d8 5b e9 6e ea 2c c8 48 83 7f 10 00 74 f1 48 8b 07 48 8b 00 8b 40 28 83 f8 04 74 e3 <0f> 0b eb df 0f 1f 44 00 00 0f 1f 44 00 00 41 57 41 56 49 89 f6 41
[Wed May 10 16:03:07 2023] RSP: 0018:ffffa4c9844b7e48 EFLAGS: 00010202
[Wed May 10 16:03:07 2023] RAX: 0000000000000006 RBX: 0000000000000000 RCX: 0000000000000007
[Wed May 10 16:03:07 2023] RDX: 0000000000000005 RSI: ffff8cfc469e07cc RDI: ffff8cfc469e0000
[Wed May 10 16:03:07 2023] RBP: ffff8cfc469e0000 R08: 0000000000000000 R09: 000073746e657665
[Wed May 10 16:03:07 2023] R10: 8080808080808080 R11: ffffffff8965c148 R12: ffff8cfc491d5130
[Wed May 10 16:03:07 2023] R13: 0000000000000000 R14: ffff8cfc491d5008 R15: ffff8cfc491d5000
[Wed May 10 16:03:07 2023] FS: 0000000000000000(0000) GS:ffff8cfdb7d80000(0000) knlGS:0000000000000000
[Wed May 10 16:03:07 2023] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[Wed May 10 16:03:07 2023] CR2: 00007fedace4d008 CR3: 00000001d4506000 CR4: 00000000003506e0
[Wed May 10 16:03:07 2023] Call Trace:
[Wed May 10 16:03:07 2023] delayed_work+0x143/0x240 [ceph]
[Wed May 10 16:03:07 2023] process_one_work+0x1a7/0x360
[Wed May 10 16:03:07 2023] ? create_worker+0x1a0/0x1a0
[Wed May 10 16:03:07 2023] worker_thread+0x30/0x390
[Wed May 10 16:03:07 2023] ? create_worker+0x1a0/0x1a0
[Wed May 10 16:03:07 2023] kthread+0x134/0x150
[Wed May 10 16:03:07 2023] ? set_kthread_struct+0x50/0x50
[Wed May 10 16:03:07 2023] ret_from_fork+0x35/0x40
[Wed May 10 16:03:07 2023] ---[ end trace 03a1d82065fdafbf ]---
[Wed May 10 16:03:09 2023] ceph: mds1 caps renewed
[Wed May 10 16:03:09 2023] ceph: mds1 caps renewed
[Wed May 10 16:03:09 2023] ceph: mds1 caps renewed
[Wed May 10 16:03:09 2023] ceph: mds1 caps renewed
[Wed May 10 16:03:09 2023] ceph: mds1 caps renewed
[Wed May 10 16:03:09 2023] ceph: mds1 recovery completed
[Wed May 10 16:03:09 2023] ceph: mds1 recovery completed
[Wed May 10 16:03:09 2023] ceph: mds1 recovery completed
[Wed May 10 16:03:09 2023] ceph: mds1 recovery completed
[Wed May 10 16:03:09 2023] ceph: mds1 recovery completed
[Wed May 10 16:03:09 2023] ceph: mds1 recovery completed
[Wed May 10 16:04:10 2023] ceph: ceph_do_invalidate_pages: inode 20011a825dd.fffffffffffffffe is shut down
[Wed May 10 16:04:10 2023] ceph: ceph_do_invalidate_pages: inode 200118b9384.fffffffffffffffe is shut down
[Wed May 10 16:04:10 2023] ceph: ceph_do_invalidate_pages: inode 1001c664b26.fffffffffffffffe is shut down
[Wed May 10 16:04:10 2023] ceph: ceph_do_invalidate_pages: inode 1001c665c34.fffffffffffffffe is shut down
[Wed May 10 16:04:10 2023] ceph: ceph_do_invalidate_pages: inode 1001c66a2b6.fffffffffffffffe is shut down
[Wed May 10 16:04:10 2023] ceph: ceph_do_invalidate_pages: inode 2000e6d3a15.fffffffffffffffe is shut down
[Wed May 10 16:04:10 2023] ceph: ceph_do_invalidate_pages: inode 100219a7adf.fffffffffffffffe is shut down
[Wed May 10 16:04:10 2023] ceph: ceph_do_invalidate_pages: inode 10023fb4137.fffffffffffffffe is shut down
[Wed May 10 16:04:10 2023] ceph: ceph_do_invalidate_pages: inode 10024484920.fffffffffffffffe is shut down
[Wed May 10 16:04:10 2023] ceph: ceph_do_invalidate_pages: inode 2000e6d3a4b.fffffffffffffffe is shut down
[Wed May 10 16:04:12 2023] libceph: mds1 (1)192.168.32.74:6801 socket closed (con state OPEN)
[Wed May 10 16:04:12 2023] libceph: mds1 (1)192.168.32.74:6801 socket closed (con state OPEN)
[Wed May 10 16:04:12 2023] libceph: mds1 (1)192.168.32.74:6801 socket closed (con state OPEN)
[Wed May 10 16:04:12 2023] libceph: mds1 (1)192.168.32.74:6801 socket closed (con state OPEN)
[Wed May 10 16:04:12 2023] libceph: mds1 (1)192.168.32.74:6801 socket closed (con state OPEN)
[Wed May 10 16:04:13 2023] libceph: wrong peer, want (1)192.168.32.74:6801/3586644765, got (1)192.168.32.74:6801/3778319964
[Wed May 10 16:04:13 2023] libceph: mds1 (1)192.168.32.74:6801 wrong peer at address
[Wed May 10 16:04:13 2023] libceph: wrong peer, want (1)192.168.32.74:6801/3586644765, got (1)192.168.32.74:6801/3778319964
[Wed May 10 16:04:13 2023] libceph: mds1 (1)192.168.32.74:6801 wrong peer at address
[Wed May 10 16:04:13 2023] libceph: wrong peer, want (1)192.168.32.74:6801/3586644765, got (1)192.168.32.74:6801/3778319964
[Wed May 10 16:04:13 2023] libceph: mds1 (1)192.168.32.74:6801 wrong peer at address
[Wed May 10 16:04:13 2023] libceph: wrong peer, want (1)192.168.32.74:6801/3586644765, got (1)192.168.32.74:6801/3778319964
[Wed May 10 16:04:13 2023] libceph: mds1 (1)192.168.32.74:6801 wrong peer at address
[Wed May 10 16:04:13 2023] libceph: wrong peer, want (1)192.168.32.74:6801/3586644765, got (1)192.168.32.74:6801/3778319964
[Wed May 10 16:04:13 2023] libceph: mds1 (1)192.168.32.74:6801 wrong peer at address
[Wed May 10 16:04:14 2023] libceph: wrong peer, want (1)192.168.32.74:6801/3586644765, got (1)192.168.32.74:6801/3778319964
[Wed May 10 16:04:14 2023] libceph: mds1 (1)192.168.32.74:6801 wrong peer at address
[Wed May 10 16:04:14 2023] libceph: wrong peer, want (1)192.168.32.74:6801/3586644765, got (1)192.168.32.74:6801/3778319964
[Wed May 10 16:04:14 2023] libceph: mds1 (1)192.168.32.74:6801 wrong peer at address
[Wed May 10 16:04:14 2023] libceph: wrong peer, want (1)192.168.32.74:6801/3586644765, got (1)192.168.32.74:6801/3778319964
[Wed May 10 16:04:14 2023] libceph: mds1 (1)192.168.32.74:6801 wrong peer at address
[Wed May 10 16:04:14 2023] libceph: wrong peer, want (1)192.168.32.74:6801/3586644765, got (1)192.168.32.74:6801/3778319964
[Wed May 10 16:04:14 2023] libceph: mds1 (1)192.168.32.74:6801 wrong peer at address
[Wed May 10 16:04:14 2023] libceph: wrong peer, want (1)192.168.32.74:6801/3586644765, got (1)192.168.32.74:6801/3778319964
[Wed May 10 16:04:14 2023] libceph: mds1 (1)192.168.32.74:6801 wrong peer at address
[Wed May 10 16:04:20 2023] ceph: mds1 reconnect start
[Wed May 10 16:04:20 2023] ceph: mds1 reconnect start
[Wed May 10 16:04:20 2023] ceph: mds1 reconnect start
[Wed May 10 16:04:20 2023] ceph: mds1 reconnect start
[Wed May 10 16:04:20 2023] ceph: mds1 reconnect start
[Wed May 10 16:04:23 2023] ceph: mds1 reconnect success
[Wed May 10 16:04:23 2023] ceph: mds1 reconnect success
[Wed May 10 16:04:23 2023] ceph: mds1 reconnect success
[Wed May 10 16:04:23 2023] ceph: mds1 reconnect success
[Wed May 10 16:04:23 2023] ceph: mds1 reconnect success
[Wed May 10 16:06:14 2023] ceph: mds1 caps stale
[Wed May 10 16:06:14 2023] ceph: mds1 caps stale
[Wed May 10 16:06:14 2023] ceph: mds1 caps stale
[Wed May 10 16:06:14 2023] ceph: mds1 caps stale
[Wed May 10 16:06:14 2023] ceph: mds1 caps stale
[Wed May 10 16:06:45 2023] ceph: mds1 caps renewed
[Wed May 10 16:06:45 2023] ceph: mds1 caps renewed
[Wed May 10 16:06:45 2023] ceph: mds1 caps renewed
[Wed May 10 16:06:45 2023] ceph: mds1 caps renewed
[Wed May 10 16:06:45 2023] ceph: mds1 caps renewed
[Wed May 10 16:06:45 2023] ceph: mds1 recovery completed
[Wed May 10 16:06:45 2023] ceph: mds1 recovery completed
[Wed May 10 16:06:45 2023] ceph: mds1 recovery completed
[Wed May 10 16:06:45 2023] ceph: mds1 recovery completed
[Wed May 10 16:06:45 2023] ceph: mds1 recovery completed
[Wed May 10 16:22:24 2023] WARNING: CPU: 1 PID: 5392 at fs/ceph/caps.c:689 ceph_add_cap+0x53e/0x550 [ceph]
[Wed May 10 16:22:24 2023] Modules linked in: ceph libceph dns_resolver nls_utf8 isofs cirrus drm_shmem_helper drm_kms_helper syscopyarea sysfillrect sysimgblt intel_rapl_msr intel_rapl_common fb_sys_fops virtio_net iTCO_wdt net_failover iTCO_vendor_support drm pcspkr failover virtio_balloon joydev lpc_ich i2c_i801 nfsd nfs_acl lockd grace auth_rpcgss sunrpc xfs libcrc32c sr_mod cdrom sg crct10dif_pclmul ahci crc32_pclmul libahci crc32c_intel libata ghash_clmulni_intel serio_raw virtio_blk virtio_console virtio_scsi dm_mirror dm_region_hash dm_log dm_mod fuse
[Wed May 10 16:22:24 2023] CPU: 1 PID: 5392 Comm: kworker/1:0 Tainted: G W --------- - - 4.18.0-486.el8.x86_64 #1
[Wed May 10 16:22:24 2023] Hardware name: Red Hat KVM/RHEL-AV, BIOS 1.16.0-3.module_el8.7.0+3346+68867adb 04/01/2014
[Wed May 10 16:22:24 2023] Workqueue: ceph-msgr ceph_con_workfn [libceph]
[Wed May 10 16:22:24 2023] RIP: 0010:ceph_add_cap+0x53e/0x550 [ceph]
[Wed May 10 16:22:24 2023] Code: c0 48 c7 c7 c0 39 74 c0 e8 6c 7c 5d c7 0f 0b 44 89 7c 24 04 e9 7e fc ff ff 44 8b 7c 24 04 e9 68 fe ff ff 0f 0b e9 c9 fc ff ff <0f> 0b e9 0a fe ff ff 0f 0b e9 12 fe ff ff 0f 0b 66 90 0f 1f 44 00
[Wed May 10 16:22:24 2023] RSP: 0018:ffffa4c984a97b48 EFLAGS: 00010207
[Wed May 10 16:22:24 2023] RAX: 0000000000000000 RBX: 0000000000000005 RCX: dead000000000200
[Wed May 10 16:22:24 2023] RDX: ffff8cfc56d7d7d0 RSI: ffff8cfc56d7d7d0 RDI: ffff8cfc56d7d7c8
[Wed May 10 16:22:24 2023] RBP: ffff8cfc45507570 R08: ffff8cfc56d7d7d0 R09: 0000000000000001
[Wed May 10 16:22:24 2023] R10: ffff8cfce4206f00 R11: 00000000ffff8ce0 R12: 0000000000000155
[Wed May 10 16:22:24 2023] R13: ffff8cfce4206f00 R14: ffff8cfce4206f08 R15: 0000000000000001
[Wed May 10 16:22:24 2023] FS: 0000000000000000(0000) GS:ffff8cfdb7c80000(0000) knlGS:0000000000000000
[Wed May 10 16:22:24 2023] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[Wed May 10 16:22:24 2023] CR2: 00007f7b61a629f0 CR3: 00000001049f0000 CR4: 00000000003506e0
[Wed May 10 16:22:24 2023] Call Trace:
[Wed May 10 16:22:24 2023] ceph_handle_caps+0xdf2/0x1780 [ceph]
[Wed May 10 16:22:24 2023] mds_dispatch+0x13a/0x670 [ceph]
[Wed May 10 16:22:24 2023] ceph_con_process_message+0x79/0x140 [libceph]
[Wed May 10 16:22:24 2023] ? calc_signature+0xdf/0x110 [libceph]
[Wed May 10 16:22:24 2023] ceph_con_v1_try_read+0x5d7/0xf30 [libceph]
[Wed May 10 16:22:24 2023] ceph_con_workfn+0x329/0x680 [libceph]
[Wed May 10 16:22:24 2023] process_one_work+0x1a7/0x360
[Wed May 10 16:22:24 2023] ? create_worker+0x1a0/0x1a0
[Wed May 10 16:22:24 2023] worker_thread+0x30/0x390
[Wed May 10 16:22:24 2023] ? create_worker+0x1a0/0x1a0
[Wed May 10 16:22:24 2023] kthread+0x134/0x150
[Wed May 10 16:22:24 2023] ? set_kthread_struct+0x50/0x50
[Wed May 10 16:22:24 2023] ret_from_fork+0x35/0x40
[Wed May 10 16:22:24 2023] ---[ end trace 03a1d82065fdafc0 ]---
[Wed May 10 17:54:14 2023] libceph: mon0 (1)192.168.32.65:6789 session established
[Wed May 10 17:54:14 2023] libceph: client210426330 fsid e4ece518-f2cb-4708-b00f-b6bf511e91d9
[Wed May 10 17:54:16 2023] libceph: mon2 (1)192.168.32.67:6789 session established
[Wed May 10 17:54:16 2023] libceph: client210350757 fsid e4ece518-f2cb-4708-b00f-b6bf511e91d9