[PATCH v2 03/11] asm-generic/mmiowb: Mark accesses to fix KCSAN warnings

Rohan McLure rmclure at linux.ibm.com
Wed May 10 13:31:09 AEST 2023


Prior to this patch, data races of the following forms are detectable
by KCSAN:

[1] Asynchronous calls to mmiowb_set_pending() from an interrupt context
    or otherwise outside of a critical section
[2] Interrupted critical sections, where the interrupt will itself
    acquire a lock

In case [1], the calling context does not itself require an mmiowb()
call to be issued; if it did, it would issue one itself. Such
asynchronous calls to mmiowb_set_pending() are therefore either
idempotent or no-ops.
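
As an illustration of [1], consider a hypothetical interrupt handler
(the device, register and handler names below are made up; the accessor
behaviour is the generic one):

#include <linux/interrupt.h>
#include <linux/io.h>

#define FOO_IRQ_STATUS	0x04	/* hypothetical register offset */
#define FOO_IRQ_ACK	0x01	/* hypothetical ack value       */

struct foo_dev {
	void __iomem *regs;	/* hypothetical MMIO mapping */
};

static irqreturn_t foo_irq_handler(int irq, void *data)
{
	struct foo_dev *fd = data;

	/*
	 * On ARCH_HAS_MMIOWB architectures this writel() ends up calling
	 * mmiowb_set_pending(). If the interrupted context holds no
	 * spinlock, nesting_count is 0 and the store is skipped (no-op);
	 * if it does hold one, the store writes the same value the
	 * interrupted context would write itself (idempotent).
	 */
	writel(FOO_IRQ_ACK, fd->regs + FOO_IRQ_STATUS);

	return IRQ_HANDLED;
}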

In case [2], irrespective of when the interrupt occurs, the interrupt
will acquire and release its locks prior to returning, so nesting_count
remains balanced. In the worst case, a critical section executing
mmiowb_spin_unlock() observes an mmiowb to be pending and is then
interrupted; if the interrupt performs MMIO under its own lock, it
issues the mmiowb() itself, and the interrupted context's subsequent
mmiowb() is extraneous. This data race is clearly innocuous.
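
The balanced-nesting claim in [2] can be seen from a sketch of such an
interrupt (hypothetical names again; same includes as the previous
sketch, plus linux/spinlock.h):

struct bar_dev {
	spinlock_t lock;	/* hypothetical */
	void __iomem *regs;	/* hypothetical */
};

static irqreturn_t bar_irq_handler(int irq, void *data)
{
	struct bar_dev *bd = data;

	spin_lock(&bd->lock);	/* mmiowb_spin_lock(): nesting_count++  */
	writel(1, bd->regs);	/* mmiowb_set_pending()                 */
	spin_unlock(&bd->lock);	/* mmiowb_spin_unlock(): issues mmiowb()
				 * if pending, then nesting_count--     */

	return IRQ_HANDLED;	/* nesting_count is as it was on entry  */
}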

Mark all potentially asynchronous memory accesses with READ_ONCE or
WRITE_ONCE, including the increments and decrements of nesting_count.
This removes the KCSAN warnings at consumers' call sites.
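
For reference, the resulting pattern (a generic sketch, not the patch
itself; assumes struct mmiowb_state *ms = __mmiowb_state() as above):

/* Plain access: KCSAN reports a data race if this can race. */
ms->nesting_count++;

/*
 * Marked equivalent: one marked load plus one marked store. KCSAN
 * treats marked accesses as intentional concurrency and stops
 * warning. The read-modify-write is still non-atomic, which is fine
 * here because an interrupt always rebalances nesting_count before
 * returning to the interrupted context.
 */
WRITE_ONCE(ms->nesting_count, READ_ONCE(ms->nesting_count) + 1);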

Signed-off-by: Rohan McLure <rmclure at linux.ibm.com>
Reported-by: Michael Ellerman <mpe at ellerman.id.au>
Reported-by: Gautam Menghani <gautam at linux.ibm.com>
Tested-by: Gautam Menghani <gautam at linux.ibm.com>
Acked-by: Arnd Bergmann <arnd at arndb.de>
---
v2: Remove extraneous READ_ONCE in mmiowb_set_pending for nesting_count
---
 include/asm-generic/mmiowb.h | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/include/asm-generic/mmiowb.h b/include/asm-generic/mmiowb.h
index 5698fca3bf56..6dea28c8835b 100644
--- a/include/asm-generic/mmiowb.h
+++ b/include/asm-generic/mmiowb.h
@@ -37,25 +37,29 @@ static inline void mmiowb_set_pending(void)
 	struct mmiowb_state *ms = __mmiowb_state();
 
 	if (likely(ms->nesting_count))
-		ms->mmiowb_pending = ms->nesting_count;
+		WRITE_ONCE(ms->mmiowb_pending, ms->nesting_count);
 }
 
 static inline void mmiowb_spin_lock(void)
 {
 	struct mmiowb_state *ms = __mmiowb_state();
-	ms->nesting_count++;
+
+	/* Increment need not be atomic. Nesting is balanced across interrupts. */
+	WRITE_ONCE(ms->nesting_count, READ_ONCE(ms->nesting_count) + 1);
 }
 
 static inline void mmiowb_spin_unlock(void)
 {
 	struct mmiowb_state *ms = __mmiowb_state();
+	u16 pending = READ_ONCE(ms->mmiowb_pending);
 
-	if (unlikely(ms->mmiowb_pending)) {
-		ms->mmiowb_pending = 0;
+	WRITE_ONCE(ms->mmiowb_pending, 0);
+	if (unlikely(pending)) {
 		mmiowb();
 	}
 
-	ms->nesting_count--;
+	/* Decrement need not be atomic. Nesting is balanced across interrupts. */
+	WRITE_ONCE(ms->nesting_count, READ_ONCE(ms->nesting_count) - 1);
 }
 #else
 #define mmiowb_set_pending()		do { } while (0)
-- 
2.37.2


