xfs-issue-fio_subpage_blocksize
rharjani, Aug 7th, 2020 (edited)
root@qemu:/home/qemu# xfs_info /dev/loop0
meta-data=/dev/loop0             isize=512    agcount=4, agsize=3932160 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=0
data     =                       bsize=1024   blocks=15728640, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=1024   blocks=10240, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
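The xfs_info output above describes a 1k block size filesystem (bsize=1024) on a loop device, i.e. a block size smaller than the 4k page size, which is the "subpage blocksize" case named in the title. A minimal sketch of how such a filesystem could be set up follows; the backing file path and image size are illustrative assumptions, not taken from the report:

# Hypothetical setup sketch: paths and image size are assumptions, not from the report.
# blocks=15728640 at bsize=1024 in the xfs_info above works out to a 15 GiB filesystem.
truncate -s 15G /tmp/xfs-1k.img
losetup /dev/loop0 /tmp/xfs-1k.img
mkfs.xfs -b size=1024 -m reflink=0 /dev/loop0   # 1k blocks, reflink off, as in xfs_info
mount /dev/loop0 /mnt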
  13. <dmesg>
  14. root@qemu:/home/qemu# [ 631.063793] kworker/dying (177) used greatest stack depth: 10640 bytes left
  15. [ 3074.163020] INFO: task kworker/u32:0:1649 blocked for more than 122 seconds.
  16. [ 3074.166288] Not tainted 5.8.0-rc7+ #7
  17. [ 3074.168428] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  18. [ 3074.171951] kworker/u32:0 D11376 1649 2 0x00004000
  19. [ 3074.173814] Workqueue: xfs-cil/loop0 xlog_cil_push_work
  20. [ 3074.175335] Call Trace:
  21. [ 3074.176207] __schedule+0x405/0xa30
  22. [ 3074.177291] schedule+0x4f/0x100
  23. [ 3074.178477] xlog_state_get_iclog_space+0x1fc/0x350
  24. [ 3074.180012] ? wake_up_q+0xa0/0xa0
  25. [ 3074.181068] xlog_write+0x14b/0x9d0
  26. [ 3074.182211] xlog_cil_push_work+0x2ff/0x610
  27. [ 3074.183470] ? _raw_spin_unlock_irq+0x28/0x50
  28. [ 3074.184836] process_one_work+0x23c/0x5b0
  29. [ 3074.186054] worker_thread+0x1e9/0x3b0
  30. [ 3074.187215] ? process_one_work+0x5b0/0x5b0
  31. [ 3074.188691] kthread+0x14c/0x190
  32. [ 3074.189822] ? kthread_park+0x90/0x90
  33. [ 3074.191053] ret_from_fork+0x22/0x30
  34. [ 3074.192199] INFO: task kworker/8:2:1703 blocked for more than 122 seconds.
  35. [ 3074.194208] Not tainted 5.8.0-rc7+ #7
  36. [ 3074.195581] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  37. [ 3074.197725] kworker/8:2 D13312 1703 2 0x00004000
  38. [ 3074.199276] Workqueue: xfs-sync/loop0 xfs_log_worker
  39. [ 3074.200724] Call Trace:
  40. [ 3074.201399] __schedule+0x405/0xa30
  41. [ 3074.202730] ? trace_hardirqs_on+0x55/0x110
  42. [ 3074.204126] ? wait_for_completion+0x7e/0x110
  43. [ 3074.205391] schedule+0x4f/0x100
  44. [ 3074.206408] schedule_timeout+0x1e8/0x300
  45. [ 3074.207734] ? wait_for_completion+0x7e/0x110
  46. [ 3074.209126] ? _raw_spin_unlock_irq+0x28/0x50
  47. [ 3074.210459] ? wait_for_completion+0xa1/0x110
  48. [ 3074.211871] ? __this_cpu_preempt_check+0x13/0x20
  49. [ 3074.213243] ? lockdep_hardirqs_on+0xa3/0x120
  50. [ 3074.214636] ? _raw_spin_unlock_irq+0x28/0x50
  51. [ 3074.215927] ? trace_hardirqs_on+0x55/0x110
  52. [ 3074.217194] ? wait_for_completion+0x7e/0x110
  53. [ 3074.218586] wait_for_completion+0xa9/0x110
  54. [ 3074.219848] __flush_work+0x238/0x450
  55. [ 3074.221047] ? flush_workqueue_prep_pwqs+0x150/0x150
  56. [ 3074.222609] ? wait_for_completion+0x47/0x110
  57. [ 3074.223926] flush_work+0x10/0x20
  58. [ 3074.225049] xlog_cil_force_lsn+0x9b/0x280
  59. [ 3074.226866] ? debug_smp_processor_id+0x17/0x20
  60. [ 3074.228354] ? xfs_log_worker+0x35/0x100
  61. [ 3074.229640] xfs_log_force+0x95/0x260
  62. [ 3074.230782] xfs_log_worker+0x35/0x100
  63. [ 3074.231949] process_one_work+0x23c/0x5b0
  64. [ 3074.233222] worker_thread+0x50/0x3b0
  65. [ 3074.234331] ? process_one_work+0x5b0/0x5b0
  66. [ 3074.235699] kthread+0x14c/0x190
  67. [ 3074.236909] ? kthread_park+0x90/0x90
  68. [ 3074.238019] ret_from_fork+0x22/0x30
  69. [ 3074.239225]
  70. [ 3074.239225] Showing all locks held in the system:
  71. [ 3074.241025] 1 lock held by khungtaskd/94:
  72. [ 3074.242641] #0: ffffffff82c7eb60 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x23/0x17e
  73. [ 3074.245343] 4 locks held by kworker/u32:7/182:
  74. [ 3074.246661] #0: ffff888a04548548 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work+0x1be/0x5b0
  75. [ 3074.249263] #1: ffffc9000180fe40 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work+0x1be/0x5b0
  76. [ 3074.252470] #2: ffff8889f4d310e8 (&type->s_umount_key#36){++++}-{3:3}, at: trylock_super+0x1b/0x50
  77. [ 3074.255260] #3: ffff8889f4d35ae8 (&sbi->s_writepages_rwsem){.+.+}-{0:0}, at: do_writepages+0x43/0xe0
  78. [ 3074.257865] 1 lock held by in:imklog/592:
  79. [ 3074.259095] #0: ffff8889e2e7b140 (&f->f_pos_lock){+.+.}-{3:3}, at: __fdget_pos+0x4e/0x60
  80. [ 3074.261624] 1 lock held by loop0/978:
  81. [ 3074.262891] #0: ffff8889f4d35ae8 (&sbi->s_writepages_rwsem){.+.+}-{0:0}, at: do_writepages+0x43/0xe0
  82. [ 3074.265695] 2 locks held by kworker/u32:0/1649:
  83. [ 3074.267105] #0: ffff8889acd30548 ((wq_completion)xfs-cil/loop0){+.+.}-{0:0}, at: process_one_work+0x1be/0x5b0
  84. [ 3074.269871] #1: ffffc900022c3e40 ((work_completion)(&cil->xc_push_work)){+.+.}-{0:0}, at: process_one_work+0x1be/0x5b0
  85. [ 3074.272942] 2 locks held by kworker/8:2/1703:
  86. [ 3074.274282] #0: ffff8889af47e948 ((wq_completion)xfs-sync/loop0){+.+.}-{0:0}, at: process_one_work+0x1be/0x5b0
  87. [ 3074.277014] #1: ffffc90001d9be40 ((work_completion)(&(&log->l_work)->work)){+.+.}-{0:0}, at: process_one_work+0x1be/0x5b0
  88. [ 3074.280118] 2 locks held by fio/1770:
  89. [ 3074.281312] #0: ffff8889e3be9480 (sb_writers#11){.+.+}-{0:0}, at: vfs_write+0x1c0/0x220
  90. [ 3074.283589] #1: ffff8889828c2420 (&sb->s_type->i_mutex_key#15){++++}-{3:3}, at: xfs_ilock+0x105/0x2a0
  91. [ 3074.286180] 2 locks held by fio/1771:
  92. [ 3074.287279] #0: ffff8889e3be9480 (sb_writers#11){.+.+}-{0:0}, at: vfs_write+0x1c0/0x220
  93. [ 3074.289804] #1: ffff88899828b420 (&sb->s_type->i_mutex_key#15){++++}-{3:3}, at: xfs_ilock+0x105/0x2a0
  94. [ 3074.292591] 2 locks held by fio/1772:
  95. [ 3074.293760] #0: ffff8889e3be9480 (sb_writers#11){.+.+}-{0:0}, at: vfs_write+0x1c0/0x220
  96. [ 3074.296008] #1: ffff88899e35b420 (&sb->s_type->i_mutex_key#15){++++}-{3:3}, at: xfs_ilock+0x105/0x2a0
  97. [ 3074.298643] 2 locks held by fio/1773:
  98. [ 3074.299815] #0: ffff8889e3be9480 (sb_writers#11){.+.+}-{0:0}, at: vfs_write+0x1c0/0x220
  99. [ 3074.302015] #1: ffff88867e61c420 (&sb->s_type->i_mutex_key#15){++++}-{3:3}, at: xfs_ilock+0x105/0x2a0
  100. [ 3074.304673] 2 locks held by fio/1774:
  101. [ 3074.305817] #0: ffff8889e3be9480 (sb_writers#11){.+.+}-{0:0}, at: vfs_write+0x1c0/0x220
  102. [ 3074.308164] #1: ffff8888cb5ab420 (&sb->s_type->i_mutex_key#15){++++}-{3:3}, at: xfs_ilock+0x105/0x2a0
  103. [ 3074.310868] 2 locks held by fio/1775:
  104. [ 3074.311992] #0: ffff8889e3be9480 (sb_writers#11){.+.+}-{0:0}, at: vfs_write+0x1c0/0x220
  105. [ 3074.314267] #1: ffff88866ec1c420 (&sb->s_type->i_mutex_key#15){++++}-{3:3}, at: xfs_ilock+0x105/0x2a0
  106. [ 3074.316987] 2 locks held by fio/1776:
  107. [ 3074.318316] #0: ffff8889e3be9480 (sb_writers#11){.+.+}-{0:0}, at: vfs_write+0x1c0/0x220
  108. [ 3074.320647] #1: ffff888673e69420 (&sb->s_type->i_mutex_key#15){++++}-{3:3}, at: xfs_ilock+0x105/0x2a0
  109. [ 3074.323368] 2 locks held by fio/1777:
  110. [ 3074.324670] #0: ffff8889e3be9480 (sb_writers#11){.+.+}-{0:0}, at: vfs_write+0x1c0/0x220
  111. [ 3074.327062] #1: ffff8889828c0420 (&sb->s_type->i_mutex_key#15){++++}-{3:3}, at: xfs_ilock+0x105/0x2a0
  112. [ 3074.329867]
  113. [ 3074.330469] =============================================
  114. [ 3074.330469]
  115. [ 8791.060171] kworker/dying (182) used greatest stack depth: 10352 bytes left
  116.  
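For reference, the lock dump shows eight fio processes (PIDs 1770 through 1777) blocked holding sb_writers and the inode lock via vfs_write -> xfs_ilock, while the xfs-cil worker waits in xlog_state_get_iclog_space and the xfs-sync worker waits flushing the CIL. A multi-job buffered-write fio run along the following lines could produce that picture; the options are an assumed sketch, not the original job:

# Hypothetical fio invocation: all options are assumptions, not the original job file.
# Eight buffered pwrite()-based writers, matching the eight blocked fio tasks above.
fio --name=subpage-writers --directory=/mnt --numjobs=8 \
    --rw=randwrite --ioengine=psync --bs=4k --size=1G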