commit b5b6c6b1f3d9d274428f7e69c8d491caaac3c7d0
Author: Breno Leitao <breno.leitao@gmail.com>
Date: Wed Jun 21 17:01:33 2017 -0400

powerpc/kernel: Disassociate FP and VEC laziness
Currently, if an application only uses FP, the VEC registers will
continue to be saved and loaded on context switches regardless of
whether they are being used or not.

This change disassociates the two, i.e., it stops restoring VEC if it
is not used, and likewise for FP (if only VEC is being used).

To do so, we rely on load_vec and load_fp: if they overflow to zero,
we stop restoring the registers until an exception happens and
load_{fp,vec} become positive again.
Signed-off-by: Breno Leitao <leitao@debian.org>
Signed-off-by: Gustavo Romero <gromero@linux.vnet.ibm.com>
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index 5d6af58270e6..9a4bd01094ef 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -243,8 +243,6 @@ static int restore_fp(struct task_struct *tsk) { return 0; }
 #endif /* CONFIG_PPC_FPU */
 #ifdef CONFIG_ALTIVEC
-#define loadvec(thr) ((thr).load_vec)
-
 static void __giveup_altivec(struct task_struct *tsk)
 {
 	unsigned long msr;
@@ -323,7 +321,6 @@ static int restore_altivec(struct task_struct *tsk)
 	return 0;
 }
 #else
-#define loadvec(thr) 0
 static inline int restore_altivec(struct task_struct *tsk) { return 0; }
 #endif /* CONFIG_ALTIVEC */
@@ -506,9 +503,16 @@ EXPORT_SYMBOL(giveup_all);
 void restore_math(struct pt_regs *regs)
 {
 	unsigned long msr;
+	u8 load_fp = current->thread.load_fp;
+	u8 load_vec = current->thread.load_vec;
-	if (!msr_tm_active(regs->msr) &&
-		!current->thread.load_fp && !loadvec(current->thread))
+	/* If in TM mode, force restore of vec and fp.
+	 * If not in TM and !load_vec and !load_fp, return now
+	 */
+	if (msr_tm_active(regs->msr)) {
+		load_vec = 1;
+		load_fp = 1;
+	} else if (!load_fp && !load_vec)
 		return;
 	msr = regs->msr;
@@ -518,10 +522,10 @@ void restore_math(struct pt_regs *regs)
 	 * Only reload if the bit is not set in the user MSR, the bit BEING set
 	 * indicates that the registers are hot
 	 */
-	if ((!(msr & MSR_FP)) && restore_fp(current))
+	if (load_fp && (!(msr & MSR_FP)) && restore_fp(current))
 		msr |= MSR_FP | current->thread.fpexc_mode;
-	if ((!(msr & MSR_VEC)) && restore_altivec(current))
+	if (load_vec && (!(msr & MSR_VEC)) && restore_altivec(current))
 		msr |= MSR_VEC;
 	if ((msr & (MSR_FP | MSR_VEC)) == (MSR_FP | MSR_VEC) &&

commit 4055dd873df463db78461fda8849b4e07076f1ee
Author: Breno Leitao <breno.leitao@gmail.com>
Date: Wed Jun 21 15:16:42 2017 -0400

powerpc/kernel: Avoid redundancies on giveup_all
Currently giveup_all() calls __giveup_fpu(), __giveup_altivec() and
__giveup_vsx(), but __giveup_vsx() calls __giveup_fpu() and
__giveup_altivec() again, redundantly.

Besides giving up FPU and Altivec, __giveup_vsx() also clears MSR_VSX
in the MSR, but this is already done by __giveup_{fpu,altivec}().
As VSX cannot be enabled alone (without FP and/or VEC enabled), this
is also redundant.

This change speeds up giveup_all() by only about 3%, but since
giveup_all() is called very frequently, around 8x per CPU per second,
the change might be worthwhile.
Signed-off-by: Breno Leitao <leitao@debian.org>
Signed-off-by: Gustavo Romero <gromero@linux.vnet.ibm.com>
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index 2ad725ef4368..5d6af58270e6 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -494,10 +494,6 @@ void giveup_all(struct task_struct *tsk)
 	if (usermsr & MSR_VEC)
 		__giveup_altivec(tsk);
 #endif
-#ifdef CONFIG_VSX
-	if (usermsr & MSR_VSX)
-		__giveup_vsx(tsk);
-#endif
 #ifdef CONFIG_SPE
 	if (usermsr & MSR_SPE)
 		__giveup_spe(tsk);