sched-Fix-select_idle_sibling-bouncing-cow-syndrome.patch

a guest
May 12th, 2013
256
0
Never
Not a member of Pastebin yet? Sign Up, it unlocks many cool features!
text 2.18 KB | None | 0 0
From e0a79f529d5ba2507486d498b25da40911d95cf6 Mon Sep 17 00:00:00 2001
From: Mike Galbraith <bitbucket@online.de>
Date: Mon, 28 Jan 2013 11:19:25 +0000
Subject: sched: Fix select_idle_sibling() bouncing cow syndrome

If the previous CPU is cache affine and idle, select it.

The current implementation simply traverses the sd_llc domain,
taking the first idle CPU encountered, which walks buddy pairs
hand in hand over the package, inflicting excruciating pain.

1 tbench pair (worst case) in a 10 core + SMT package:

  pre   15.22 MB/sec 1 procs
  post 252.01 MB/sec 1 procs

Signed-off-by: Mike Galbraith <bitbucket@online.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1359371965.5783.127.camel@marge.simpson.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8dbee9f..ed18c74 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3252,25 +3252,18 @@ find_idlest_cpu(struct sched_group *group, struct task_struct *p, int this_cpu)
  */
 static int select_idle_sibling(struct task_struct *p, int target)
 {
-	int cpu = smp_processor_id();
-	int prev_cpu = task_cpu(p);
 	struct sched_domain *sd;
 	struct sched_group *sg;
-	int i;
+	int i = task_cpu(p);
 
-	/*
-	 * If the task is going to be woken-up on this cpu and if it is
-	 * already idle, then it is the right target.
-	 */
-	if (target == cpu && idle_cpu(cpu))
-		return cpu;
+	if (idle_cpu(target))
+		return target;
 
 	/*
-	 * If the task is going to be woken-up on the cpu where it previously
-	 * ran and if it is currently idle, then it the right target.
+	 * If the prevous cpu is cache affine and idle, don't be stupid.
 	 */
-	if (target == prev_cpu && idle_cpu(prev_cpu))
-		return prev_cpu;
+	if (i != target && cpus_share_cache(i, target) && idle_cpu(i))
+		return i;
 
 	/*
 	 * Otherwise, iterate the domains and find an elegible idle cpu.
@@ -3284,7 +3277,7 @@ static int select_idle_sibling(struct task_struct *p, int target)
 			goto next;
 
 		for_each_cpu(i, sched_group_cpus(sg)) {
-			if (!idle_cpu(i))
+			if (i == target || !idle_cpu(i))
 				goto next;
 		}
 
--