[ 403.596588] ======================================================
[ 403.596618] [ INFO: possible circular locking dependency detected ]
[ 403.596649] 3.5.3-00283-g190cb64-dirty #83 Not tainted
[ 403.596679] -------------------------------------------------------
[ 403.596679] kworker/u:4/482 is trying to acquire lock:
[ 403.596710]  (&wl->mutex){+.+.+.}, at: [<bf1718cc>] wl1271_connection_loss_work+0x20/0x7c [wlcore]
[ 403.596923]
but task is already holding lock:
[ 403.596954]  ((&(&wl->connection_loss_work)->work)){+.+...}, at: [<c0042aa4>] process_one_work+0x1d8/0x438
[ 403.597045]
which lock already depends on the new lock.

[ 403.597045]
the existing dependency chain (in reverse order) is:
[ 403.597076]
-> #1 ((&(&wl->connection_loss_work)->work)){+.+...}:
[ 403.597137]        [<c00619f0>] lock_acquire+0x60/0x74
[ 403.597167]        [<c0043980>] wait_on_work+0x3c/0xdc
[ 403.597198]        [<c0043ad4>] __cancel_work_timer+0xb4/0x100
[ 403.597259]        [<bf173178>] wl1271_op_bss_info_changed+0x620/0xb8c [wlcore]
[ 403.597381]        [<bf11d4c0>] ieee80211_bss_info_change_notify+0x150/0x170 [mac80211]
[ 403.597656]        [<bf143334>] ieee80211_assoc_success+0x3b4/0x568 [mac80211]
[ 403.597839]
-> #0 (&wl->mutex){+.+.+.}:
[ 403.597900]        [<c0060de4>] __lock_acquire+0xf3c/0x16f0
[ 403.597930]        [<c00619f0>] lock_acquire+0x60/0x74
[ 403.597961]        [<c048f3a0>] mutex_lock_nested+0x44/0x2fc
[ 403.598022]        [<bf1718cc>] wl1271_connection_loss_work+0x20/0x7c [wlcore]
[ 403.598144]        [<c0042b28>] process_one_work+0x25c/0x438
[ 403.598175]        [<c0042ed8>] worker_thread+0x1a8/0x2dc
[ 403.598205]        [<c00475ec>] kthread+0x80/0x90
[ 403.598266]        [<c000e10c>] kernel_thread_exit+0x0/0x8
[ 403.598297]
other info that might help us debug this:

[ 403.598327]  Possible unsafe locking scenario:

[ 403.598358]        CPU0                    CPU1
[ 403.598358]        ----                    ----
[ 403.598388]   lock((&(&wl->connection_loss_work)->work));
[ 403.598419]                                lock(&wl->mutex);
[ 403.598449]                                lock((&(&wl->connection_loss_work)->work));
[ 403.598480]   lock(&wl->mutex);
[ 403.598510]
 *** DEADLOCK ***

[ 403.598541] 2 locks held by kworker/u:4/482:
[ 403.598571]  #0:  (wiphy_name(local->hw.wiphy)){.+.+.+}, at: [<c0042aa4>] process_one_work+0x1d8/0x438
[ 403.598632]  #1:  ((&(&wl->connection_loss_work)->work)){+.+...}, at: [<c0042aa4>] process_one_work+0x1d8/0x438
[ 403.598693]
stack backtrace:
[ 403.598754] [<c00124bc>] (unwind_backtrace+0x0/0xe0) from [<c0487a44>] (print_circular_bug+0x260/0x2ac)
[ 403.598815] [<c0487a44>] (print_circular_bug+0x260/0x2ac) from [<c0060de4>] (__lock_acquire+0xf3c/0x16f0)
[ 403.598846] [<c0060de4>] (__lock_acquire+0xf3c/0x16f0) from [<c00619f0>] (lock_acquire+0x60/0x74)
[ 403.598907] [<c00619f0>] (lock_acquire+0x60/0x74) from [<c048f3a0>] (mutex_lock_nested+0x44/0x2fc)
[ 403.599029] [<c048f3a0>] (mutex_lock_nested+0x44/0x2fc) from [<bf1718cc>] (wl1271_connection_loss_work+0x20/0x7c [wlcore])
[ 403.599151] [<bf1718cc>] (wl1271_connection_loss_work+0x20/0x7c [wlcore]) from [<c0042b28>] (process_one_work+0x25c/0x438)
[ 403.599212] [<c0042b28>] (process_one_work+0x25c/0x438) from [<c0042ed8>] (worker_thread+0x1a8/0x2dc)
[ 403.599273] [<c0042ed8>] (worker_thread+0x1a8/0x2dc) from [<c00475ec>] (kthread+0x80/0x90)
[ 403.599304] [<c00475ec>] (kthread+0x80/0x90) from [<c000e10c>] (kernel_thread_exit+0x0/0x8)
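
What lockdep is reporting above is a mutex-vs-sync-cancel inversion: dependency #0 shows the connection-loss work handler taking wl->mutex, while dependency #1 shows a path through wl1271_op_bss_info_changed that holds wl->mutex and waits synchronously for that same work item to finish (__cancel_work_timer / wait_on_work). Below is a minimal, hypothetical kernel-module sketch of that pattern; it is NOT the wlcore code, and drv_mutex, loss_work, loss_work_fn and bss_info_changed are illustrative stand-ins for wl->mutex, wl->connection_loss_work and the two driver paths named in the report.

/* Minimal, hypothetical sketch of the inversion above -- not the wlcore code. */
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/workqueue.h>

static DEFINE_MUTEX(drv_mutex);        /* stand-in for wl->mutex */
static struct delayed_work loss_work;  /* stand-in for wl->connection_loss_work */

/* Dependency #0 in the report: the work handler takes the driver mutex. */
static void loss_work_fn(struct work_struct *work)
{
	mutex_lock(&drv_mutex);
	/* ... handle the connection loss ... */
	mutex_unlock(&drv_mutex);
}

/*
 * Dependency #1 in the report: with the driver mutex held, wait
 * synchronously for the work to finish.  If loss_work_fn() is already
 * running and blocked on drv_mutex, neither side can make progress.
 */
static void bss_info_changed(void)
{
	mutex_lock(&drv_mutex);
	cancel_delayed_work_sync(&loss_work);  /* lockdep flags this ordering */
	mutex_unlock(&drv_mutex);
}

static int __init demo_init(void)
{
	INIT_DELAYED_WORK(&loss_work, loss_work_fn);
	schedule_delayed_work(&loss_work, 0);
	bss_info_changed();                    /* may really deadlock: demo only */
	return 0;
}

static void __exit demo_exit(void)
{
	cancel_delayed_work_sync(&loss_work);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");

A common way out of this class of report is to do the synchronous cancel before taking (or after dropping) the mutex, so that neither path waits on the other while holding the lock.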