Return-Path: <fbarrat@linux.ibm.com>
Received: from g01zcilapp002.ahe.pok.ibm.com ([unix socket])
        by g01zcilapp002 (Cyrus v2.3.11) with LMTPA;
        Tue, 31 Jul 2018 09:24:55 -0400
X-Sieve: CMU Sieve 2.3
Received: from localhost (localhost [127.0.0.1])
        by g01zcilapp002.ahe.pok.ibm.com (Postfix) with ESMTP id B05D92607C;
        Tue, 31 Jul 2018 09:24:55 -0400 (EDT)
X-Virus-Scanned: amavisd-new at linux.ibm.com
X-Spam-Flag: NO
X-Spam-Score: -1
X-Spam-Level:
X-Spam-Status: No, score=-1 tagged_above=-9999 required=6.2
        tests=[ALL_TRUSTED=-1] autolearn=disabled
Received: from g01zcilapp002.ahe.pok.ibm.com ([127.0.0.1])
        by localhost (g01zcilapp002.ahe.pok.ibm.com [127.0.0.1]) (amavisd-new, port 10024)
        with LMTP id 9-GI_G-McCEv; Tue, 31 Jul 2018 09:24:55 -0400 (EDT)
Received: from g01zcilapp001.ahe.pok.ibm.com (g01zcilapp001.ahe.pok.ibm.com [9.63.16.68])
        by g01zcilapp002.ahe.pok.ibm.com (Postfix) with ESMTP id 2D01926064;
        Tue, 31 Jul 2018 09:24:55 -0400 (EDT)
Received: from b06cxnps4074.portsmouth.uk.ibm.com (d06relay11.portsmouth.uk.ibm.com [9.149.109.196])
        by g01zcilapp001.ahe.pok.ibm.com (Postfix) with ESMTP id ECFE313E002;
        Tue, 31 Jul 2018 09:24:54 -0400 (EDT)
Received: from d06av26.portsmouth.uk.ibm.com (d06av26.portsmouth.uk.ibm.com [9.149.105.62])
        by b06cxnps4074.portsmouth.uk.ibm.com (8.14.9/8.14.9/NCO v10.0) with ESMTP id w6VDOsGF23593046
        (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
        Tue, 31 Jul 2018 13:24:54 GMT
Received: from d06av26.portsmouth.uk.ibm.com (unknown [127.0.0.1])
        by IMSVA (Postfix) with ESMTP id 013C5AE04D;
        Tue, 31 Jul 2018 16:24:56 +0100 (BST)
Received: from d06av26.portsmouth.uk.ibm.com (unknown [127.0.0.1])
        by IMSVA (Postfix) with ESMTP id DCFBDAE055;
        Tue, 31 Jul 2018 16:24:54 +0100 (BST)
Received: from borneo.ttt.fr.ibm.com (unknown [9.143.107.186])
        by d06av26.portsmouth.uk.ibm.com (Postfix) with ESMTP;
        Tue, 31 Jul 2018 16:24:54 +0100 (BST)
From: Frederic Barrat <fbarrat@linux.ibm.com>
To: linuxppc-dev@lists.ozlabs.org, vaibhav@linux.ibm.com, npiggin@gmail.com
Cc: felix@linux.ibm.com, clombard@linux.ibm.com
Subject: [PATCH] powerpc/64s/radix: Fix missing global invalidations when removing copro
Date: Tue, 31 Jul 2018 15:24:52 +0200
Message-Id: <20180731132452.15994-1-fbarrat@linux.ibm.com>
X-Mailer: git-send-email 2.17.1
X-TM-AS-GCONF: 00

With the optimizations for TLB invalidation from commit 0cef77c7798a
("powerpc/64s/radix: flush remote CPUs out of single-threaded
mm_cpumask"), the scope of a TLBI (global vs. local) can now be
influenced by the value of the 'copros' counter of the memory context.

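To make that concrete, here is a minimal, compilable userspace sketch of
the scope rule described above: a flush may stay local only when no
copro is attached and the context is live on a single CPU. The names
mm_model and flush_is_local() are invented for illustration and are not
the kernel API; the real decision is made by flush_all_mm() and its
helpers in the radix TLB code.

/* Editorial sketch, not kernel code: minimal model of the local/global
 * flush decision. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct mm_model {
	atomic_int copros;      /* coprocessor (nMMU/PSL) users attached */
	atomic_int active_cpus; /* CPUs the context is active on         */
};

/* A local flush (tlbiel) is only acceptable when no copro is attached
 * and the context runs on a single CPU; otherwise a global tlbie is
 * required so the nMMU/PSL see the invalidation too. */
static bool flush_is_local(struct mm_model *mm)
{
	if (atomic_load(&mm->copros) > 0)
		return false;
	return atomic_load(&mm->active_cpus) == 1;
}

int main(void)
{
	struct mm_model mm = { .copros = 1, .active_cpus = 1 };

	/* With a copro attached, the flush must be global... */
	printf("copros=1: %s\n", flush_is_local(&mm) ? "local" : "global");

	/* ...but once 'copros' is decremented, the same check picks a
	 * local flush, which is the hazard discussed below. */
	atomic_fetch_sub(&mm.copros, 1);
	printf("copros=0: %s\n", flush_is_local(&mm) ? "local" : "global");
	return 0;
}
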
When calling mm_context_remove_copro(), the 'copros' counter is
decremented before flushing. This can have the unintended side effect
of sending local TLBIs when we explicitly need global invalidations,
breaking any nMMU user in a bad and unpredictable way.

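For reference, the pre-patch ordering has this shape (taken from the
'-' lines of the diff below, with an editorial comment marking where
the scope decision goes wrong):

c = atomic_dec_if_positive(&mm->context.copros);
/* Detect imbalance between add and remove */
WARN_ON(c < 0);

if (c == 0 && radix_enabled()) {
	/* 'copros' already dropped to 0 above, so flush_all_mm() may
	 * pick a local flush here, leaving the nMMU/PSL stale. */
	flush_all_mm(mm);
	dec_mm_active_cpus(mm);
}
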
Fix it by flushing first, before updating the 'copros' counter, so
that invalidations will be global.

Fixes: 0cef77c7798a ("powerpc/64s/radix: flush remote CPUs out of single-threaded mm_cpumask")
Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
---
 arch/powerpc/include/asm/mmu_context.h | 33 ++++++++++++++++----------
 1 file changed, 21 insertions(+), 12 deletions(-)

diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h
index 79d570cbf332..b2f89b621b15 100644
--- a/arch/powerpc/include/asm/mmu_context.h
+++ b/arch/powerpc/include/asm/mmu_context.h
@@ -143,24 +143,33 @@ static inline void mm_context_remove_copro(struct mm_struct *mm)
 {
 	int c;
 
-	c = atomic_dec_if_positive(&mm->context.copros);
-
-	/* Detect imbalance between add and remove */
-	WARN_ON(c < 0);
-
 	/*
-	 * Need to broadcast a global flush of the full mm before
-	 * decrementing active_cpus count, as the next TLBI may be
-	 * local and the nMMU and/or PSL need to be cleaned up.
-	 * Should be rare enough so that it's acceptable.
+	 * When removing the last copro, we need to broadcast a global
+	 * flush of the full mm, as the next TLBI may be local and the
+	 * nMMU and/or PSL need to be cleaned up.
+	 *
+	 * Both the 'copros' and 'active_cpus' counts are looked at in
+	 * flush_all_mm() to determine the scope (local/global) of the
+	 * TLBIs, so we need to flush first before decrementing
+	 * 'copros'. If this API is used by several callers for the
+	 * same context, it can lead to over-flushing. It's hopefully
+	 * not common enough to be a problem.
 	 *
 	 * Skip on hash, as we don't know how to do the proper flush
 	 * for the time being. Invalidations will remain global if
-	 * used on hash.
+	 * used on hash. Note that we can't drop 'copros' either, as
+	 * it could make some invalidations local with no flush
+	 * in-between.
 	 */
-	if (c == 0 && radix_enabled()) {
+	if (radix_enabled()) {
 		flush_all_mm(mm);
-		dec_mm_active_cpus(mm);
+
+		c = atomic_dec_if_positive(&mm->context.copros);
+		/* Detect imbalance between add and remove */
+		WARN_ON(c < 0);
+
+		if (c == 0)
+			dec_mm_active_cpus(mm);
 	}
 }
 #else
-- 
2.17.1