| author | Paolo Bonzini <pbonzini@redhat.com> | 2023-02-15 12:33:28 -0500 |
|---|---|---|
| committer | Paolo Bonzini <pbonzini@redhat.com> | 2023-02-15 12:33:28 -0500 |
| commit | 33436335e93a1788a58443fc99c5ab320ce4b9d9 | |
| tree | d92f88768c8dbd00f8b65164f47a8a091768b95a | /arch/x86/kernel/cpu/resctrl/rdtgroup.c |
| parent | 27b025ebb0f6092d5c0a88de2ab73545bc1c496e | |
| parent | c39cea6f38eefe356d64d0bc1e1f2267e282cdd3 | |
Merge tag 'kvm-riscv-6.3-1' of https://github.com/kvm-riscv/linux into HEAD

KVM/riscv changes for 6.3

- Fix wrong usage of PGDIR_SIZE to check page sizes
- Fix privilege mode setting in kvm_riscv_vcpu_trap_redirect()
- Redirect illegal instruction traps to guest
- SBI PMU support for guest
Diffstat (limited to 'arch/x86/kernel/cpu/resctrl/rdtgroup.c')
| -rw-r--r-- | arch/x86/kernel/cpu/resctrl/rdtgroup.c | 12 |
|---|---|---|

1 file changed, 11 insertions, 1 deletion
diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
index e5a48f05e787..5993da21d822 100644
--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
@@ -580,8 +580,10 @@ static int __rdtgroup_move_task(struct task_struct *tsk,
 	/*
 	 * Ensure the task's closid and rmid are written before determining if
 	 * the task is current that will decide if it will be interrupted.
+	 * This pairs with the full barrier between the rq->curr update and
+	 * resctrl_sched_in() during context switch.
 	 */
-	barrier();
+	smp_mb();
 
 	/*
 	 * By now, the task's closid and rmid are set. If the task is current
@@ -2402,6 +2404,14 @@ static void rdt_move_group_tasks(struct rdtgroup *from, struct rdtgroup *to,
 			WRITE_ONCE(t->rmid, to->mon.rmid);
 
 			/*
+			 * Order the closid/rmid stores above before the loads
+			 * in task_curr(). This pairs with the full barrier
+			 * between the rq->curr update and resctrl_sched_in()
+			 * during context switch.
+			 */
+			smp_mb();
+
+			/*
 			 * If the task is on a CPU, set the CPU in the mask.
 			 * The detection is inaccurate as tasks might move or
 			 * schedule before the smp function call takes place.
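
The two hunks are the two halves of a store-buffering pairing: the task mover publishes the new closid/rmid and then checks task_curr(), while the context-switch path makes the task current and then reads the closid/rmid in resctrl_sched_in(). Below is a minimal userspace C11 sketch of that pairing, not the kernel code; the names closid_updated, task_is_current, mover() and switcher() are illustrative stand-ins. With a full fence on both sides, at least one side is guaranteed to observe the other's store, which is the property the fix relies on.

```c
/*
 * Userspace analogue (not the kernel code) of the smp_mb() pairing above,
 * written with C11 atomics. Each thread stores its flag, issues a full
 * fence, then loads the other thread's flag; the fences forbid the
 * store-buffering outcome where both loads miss the other's store.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int closid_updated;   /* stands in for the closid/rmid stores */
static atomic_int task_is_current;  /* stands in for the rq->curr update    */
static int mover_saw_current, switcher_saw_update;

/* Models __rdtgroup_move_task()/rdt_move_group_tasks(): store, smp_mb(), load. */
static void *mover(void *arg)
{
	(void)arg;
	atomic_store_explicit(&closid_updated, 1, memory_order_relaxed);
	atomic_thread_fence(memory_order_seq_cst);      /* the added smp_mb() */
	mover_saw_current =
		atomic_load_explicit(&task_is_current, memory_order_relaxed);
	return NULL;
}

/* Models the context-switch path: rq->curr update, full barrier, resctrl_sched_in(). */
static void *switcher(void *arg)
{
	(void)arg;
	atomic_store_explicit(&task_is_current, 1, memory_order_relaxed);
	atomic_thread_fence(memory_order_seq_cst);      /* barrier already present in schedule() */
	switcher_saw_update =
		atomic_load_explicit(&closid_updated, memory_order_relaxed);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, mover, NULL);
	pthread_create(&b, NULL, switcher, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);

	/*
	 * With both fences in place, 0/0 is impossible: at least one side
	 * observed the other's store.
	 */
	printf("mover saw current=%d, switcher saw update=%d\n",
	       mover_saw_current, switcher_saw_update);
	return 0;
}
```

Note that the context-switch side gains no new barrier; as the added comments state, the existing full barrier between the rq->curr update and resctrl_sched_in() already provides that half of the pairing, so the patch only strengthens the resctrl side (barrier() to smp_mb(), plus the new smp_mb() in rdt_move_group_tasks()).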
