author    Linus Torvalds <torvalds@linux-foundation.org>  2022-12-12 07:47:15 -0800
committer Linus Torvalds <torvalds@linux-foundation.org>  2022-12-12 07:47:15 -0800
commit    1fab45ab6e823f9d7e5bc9520b2aa6564d6d58a7 (patch)
tree      0fed32c7ec3b36f8050c49281c3161ec3834df9a
parent    830b3c68c1fb1e9176028d02ef86f3cf76aa2476 (diff)
parent    87492c06e68d802852c7ba76b4d3fde50807d72a (diff)
download  linux-1fab45ab6e823f9d7e5bc9520b2aa6564d6d58a7.tar.gz
          linux-1fab45ab6e823f9d7e5bc9520b2aa6564d6d58a7.tar.bz2
          linux-1fab45ab6e823f9d7e5bc9520b2aa6564d6d58a7.zip
Merge tag 'rcu.2022.12.02a' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu
Pull RCU updates from Paul McKenney:

 - Documentation updates. This is the second in a series from an ongoing
   review of the RCU documentation.

 - Miscellaneous fixes.

 - Introduce a default-off Kconfig option that depends on RCU_NOCB_CPU
   that, on CPUs mentioned in the nohz_full or rcu_nocbs boot-argument
   CPU lists, causes call_rcu() to introduce delays. These delays result
   in significant power savings on nearly idle Android and ChromeOS
   systems. These savings range from a few percent to more than ten
   percent.

   This series also includes several commits that change call_rcu() to a
   new call_rcu_hurry() function that avoids these delays in a few
   cases, for example, where timely wakeups are required. Several of
   these are outside of RCU and thus have acks and reviews from the
   relevant maintainers.

 - Create an srcu_read_lock_nmisafe() and an srcu_read_unlock_nmisafe()
   for architectures that support NMIs, but which do not provide
   NMI-safe this_cpu_inc(). These NMI-safe SRCU functions are required
   by the upcoming lockless printk() work by John Ogness et al.

 - Changes providing minor but important increases in torture-test
   coverage for the new RCU polled-grace-period APIs.

 - Changes to torture scripting that avoid redundant kernel builds, thus
   providing about a 30% speedup for the torture.sh acceptance test.

* tag 'rcu.2022.12.02a' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu: (49 commits)
  net: devinet: Reduce refcount before grace period
  net: Use call_rcu_hurry() for dst_release()
  workqueue: Make queue_rcu_work() use call_rcu_hurry()
  percpu-refcount: Use call_rcu_hurry() for atomic switch
  scsi/scsi_error: Use call_rcu_hurry() instead of call_rcu()
  rcu/rcutorture: Use call_rcu_hurry() where needed
  rcu/rcuscale: Use call_rcu_hurry() for async reader test
  rcu/sync: Use call_rcu_hurry() instead of call_rcu
  rcuscale: Add laziness and kfree tests
  rcu: Shrinker for lazy rcu
  rcu: Refactor code a bit in rcu_nocb_do_flush_bypass()
  rcu: Make call_rcu() lazy to save power
  rcu: Implement lockdep_rcu_enabled for !CONFIG_DEBUG_LOCK_ALLOC
  srcu: Debug NMI safety even on archs that don't require it
  srcu: Explain the reason behind the read side critical section on GP start
  srcu: Warn when NMI-unsafe API is used in NMI
  arch/s390: Add ARCH_HAS_NMI_SAFE_THIS_CPU_OPS Kconfig option
  arch/loongarch: Add ARCH_HAS_NMI_SAFE_THIS_CPU_OPS Kconfig option
  rcu: Fix __this_cpu_read() lockdep warning in rcu_force_quiescent_state()
  rcu-tasks: Make grace-period-age message human-readable
  ...
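For illustration, a minimal sketch of the call_rcu()/call_rcu_hurry()
split this series introduces. The struct foo type and the foo_release()
function are invented for the example; call_rcu(), call_rcu_hurry(),
kfree(), and container_of() are the real kernel APIs::

	struct foo {
		struct rcu_head rh;
		/* ... payload ... */
	};

	static void foo_free_cb(struct rcu_head *rhp)
	{
		/* Invoked only after a grace period has elapsed. */
		kfree(container_of(rhp, struct foo, rh));
	}

	static void foo_release(struct foo *fp, bool timely_wakeup_needed)
	{
		if (timely_wakeup_needed)
			/* Bypass laziness, as the wakeup-sensitive callers above do. */
			call_rcu_hurry(&fp->rh, foo_free_cb);
		else
			/* Allow lazy batching for power savings on nearly idle systems. */
			call_rcu(&fp->rh, foo_free_cb);
	}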
-rw-r--r--  Documentation/RCU/arrayRCU.rst | 165
-rw-r--r--  Documentation/RCU/checklist.rst | 244
-rw-r--r--  Documentation/RCU/index.rst | 1
-rw-r--r--  Documentation/RCU/listRCU.rst | 174
-rw-r--r--  Documentation/RCU/lockdep.rst | 4
-rw-r--r--  arch/Kconfig | 3
-rw-r--r--  arch/arm64/Kconfig | 1
-rw-r--r--  arch/loongarch/Kconfig | 1
-rw-r--r--  arch/s390/Kconfig | 1
-rw-r--r--  arch/x86/Kconfig | 1
-rw-r--r--  drivers/scsi/scsi_error.c | 2
-rw-r--r--  include/linux/kvm_host.h | 2
-rw-r--r--  include/linux/rcupdate.h | 14
-rw-r--r--  include/linux/rcutiny.h | 4
-rw-r--r--  include/linux/rcutree.h | 4
-rw-r--r--  include/linux/slab.h | 11
-rw-r--r--  include/linux/srcu.h | 63
-rw-r--r--  include/linux/srcutree.h | 5
-rw-r--r--  kernel/rcu/Kconfig | 11
-rw-r--r--  kernel/rcu/rcu.h | 8
-rw-r--r--  kernel/rcu/rcuscale.c | 69
-rw-r--r--  kernel/rcu/rcutorture.c | 72
-rw-r--r--  kernel/rcu/srcutree.c | 100
-rw-r--r--  kernel/rcu/sync.c | 2
-rw-r--r--  kernel/rcu/tasks.h | 2
-rw-r--r--  kernel/rcu/tiny.c | 2
-rw-r--r--  kernel/rcu/tree.c | 152
-rw-r--r--  kernel/rcu/tree.h | 12
-rw-r--r--  kernel/rcu/tree_exp.h | 2
-rw-r--r--  kernel/rcu/tree_nocb.h | 259
-rw-r--r--  kernel/rcu/tree_plugin.h | 5
-rw-r--r--  kernel/workqueue.c | 2
-rw-r--r--  lib/percpu-refcount.c | 3
-rw-r--r--  net/core/dst.c | 2
-rw-r--r--  net/ipv4/devinet.c | 19
-rwxr-xr-x  tools/testing/selftests/rcutorture/bin/config2csv.sh | 3
-rwxr-xr-x  tools/testing/selftests/rcutorture/bin/config_override.sh | 3
-rwxr-xr-x  tools/testing/selftests/rcutorture/bin/configcheck.sh | 3
-rwxr-xr-x  tools/testing/selftests/rcutorture/bin/configinit.sh | 3
-rwxr-xr-x  tools/testing/selftests/rcutorture/bin/kvm-again.sh | 49
-rwxr-xr-x  tools/testing/selftests/rcutorture/bin/kvm-assign-cpus.sh | 3
-rwxr-xr-x  tools/testing/selftests/rcutorture/bin/kvm-build.sh | 3
-rwxr-xr-x  tools/testing/selftests/rcutorture/bin/kvm-end-run-stats.sh | 3
-rwxr-xr-x  tools/testing/selftests/rcutorture/bin/kvm-recheck.sh | 2
-rwxr-xr-x  tools/testing/selftests/rcutorture/bin/kvm-remote.sh | 13
-rwxr-xr-x  tools/testing/selftests/rcutorture/bin/kvm-test-1-run-batch.sh | 3
-rwxr-xr-x  tools/testing/selftests/rcutorture/bin/kvm-test-1-run-qemu.sh | 5
-rwxr-xr-x  tools/testing/selftests/rcutorture/bin/kvm-test-1-run.sh | 3
-rwxr-xr-x  tools/testing/selftests/rcutorture/bin/kvm-transform.sh | 68
-rwxr-xr-x  tools/testing/selftests/rcutorture/bin/kvm.sh | 3
-rwxr-xr-x  tools/testing/selftests/rcutorture/bin/parse-build.sh | 3
-rwxr-xr-x  tools/testing/selftests/rcutorture/bin/torture.sh | 145
52 files changed, 1159 insertions, 578 deletions
diff --git a/Documentation/RCU/arrayRCU.rst b/Documentation/RCU/arrayRCU.rst
deleted file mode 100644
index a5f2ff8fc54c..000000000000
--- a/Documentation/RCU/arrayRCU.rst
+++ /dev/null
@@ -1,165 +0,0 @@
-.. _array_rcu_doc:
-
-Using RCU to Protect Read-Mostly Arrays
-=======================================
-
-Although RCU is more commonly used to protect linked lists, it can
-also be used to protect arrays. Three situations are as follows:
-
-1. :ref:`Hash Tables <hash_tables>`
-
-2. :ref:`Static Arrays <static_arrays>`
-
-3. :ref:`Resizable Arrays <resizable_arrays>`
-
-Each of these three situations involves an RCU-protected pointer to an
-array that is separately indexed. It might be tempting to consider use
-of RCU to instead protect the index into an array, however, this use
-case is **not** supported. The problem with RCU-protected indexes into
-arrays is that compilers can play way too many optimization games with
-integers, which means that the rules governing handling of these indexes
-are far more trouble than they are worth. If RCU-protected indexes into
-arrays prove to be particularly valuable (which they have not thus far),
-explicit cooperation from the compiler will be required to permit them
-to be safely used.
-
-That aside, each of the three RCU-protected pointer situations are
-described in the following sections.
-
-.. _hash_tables:
-
-Situation 1: Hash Tables
-------------------------
-
-Hash tables are often implemented as an array, where each array entry
-has a linked-list hash chain. Each hash chain can be protected by RCU
-as described in listRCU.rst. This approach also applies to other
-array-of-list situations, such as radix trees.
-
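A minimal sketch of this hash-chain pattern follows; struct myobj,
mytable, NR_BUCKETS, and hashfn() are invented for the example, while
rcu_read_lock(), rcu_read_unlock(), and hlist_for_each_entry_rcu() are
the real APIs::

	#define NR_BUCKETS 16

	struct myobj {
		struct hlist_node node;
		int key;
	};

	static struct hlist_head mytable[NR_BUCKETS];

	static unsigned int hashfn(int key)
	{
		return (unsigned int)key % NR_BUCKETS;
	}

	/* Reader: traverses one hash chain under RCU protection. */
	static bool mytable_key_present(int key)
	{
		struct myobj *p;
		bool present = false;

		rcu_read_lock();
		hlist_for_each_entry_rcu(p, &mytable[hashfn(key)], node) {
			if (p->key == key) {
				present = true;
				break;
			}
		}
		rcu_read_unlock();
		return present;
	}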
-.. _static_arrays:
-
-Situation 2: Static Arrays
---------------------------
-
-Static arrays, where the data (rather than a pointer to the data) is
-located in each array element, and where the array is never resized,
-have not been used with RCU. Rik van Riel recommends using seqlock in
-this situation, which would also have minimal read-side overhead as long
-as updates are rare.
-
-Quick Quiz:
- Why is it so important that updates be rare when using seqlock?
-
-:ref:`Answer to Quick Quiz <answer_quick_quiz_seqlock>`
-
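A minimal sketch of the seqlock approach follows; struct mydata, the
array, and its seqlock are invented for the example::

	#define NR_ENTRIES 16

	struct mydata {
		int a;
		int b;
	};

	static struct mydata data[NR_ENTRIES];
	static DEFINE_SEQLOCK(mydata_lock);

	/* Reader: retries if an update raced with the read. */
	static struct mydata mydata_get(int i)
	{
		struct mydata ret;
		unsigned int seq;

		do {
			seq = read_seqbegin(&mydata_lock);
			ret = data[i];
		} while (read_seqretry(&mydata_lock, seq));
		return ret;
	}

	/* Updater: must be rare, or readers can livelock (see Quick Quiz). */
	static void mydata_set(int i, struct mydata v)
	{
		write_seqlock(&mydata_lock);
		data[i] = v;
		write_sequnlock(&mydata_lock);
	}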
-.. _resizable_arrays:
-
-Situation 3: Resizable Arrays
-------------------------------
-
-Use of RCU for resizable arrays is demonstrated by the grow_ary()
-function formerly used by the System V IPC code. The array is used
-to map from semaphore, message-queue, and shared-memory IDs to the data
-structure that represents the corresponding IPC construct. The grow_ary()
-function does not acquire any locks; instead its caller must hold the
-ids->sem semaphore.
-
-The grow_ary() function, shown below, does some limit checks, allocates a
-new ipc_id_ary, copies the old to the new portion of the new, initializes
-the remainder of the new, updates the ids->entries pointer to point to
-the new array, and invokes ipc_rcu_putref() to free up the old array.
-Note that rcu_assign_pointer() is used to update the ids->entries pointer,
-which includes any memory barriers required on whatever architecture
-you are running on::
-
- static int grow_ary(struct ipc_ids* ids, int newsize)
- {
- struct ipc_id_ary* new;
- struct ipc_id_ary* old;
- int i;
- int size = ids->entries->size;
-
- if(newsize > IPCMNI)
- newsize = IPCMNI;
- if(newsize <= size)
- return newsize;
-
- new = ipc_rcu_alloc(sizeof(struct kern_ipc_perm *)*newsize +
- sizeof(struct ipc_id_ary));
- if(new == NULL)
- return size;
- new->size = newsize;
- memcpy(new->p, ids->entries->p,
- sizeof(struct kern_ipc_perm *)*size +
- sizeof(struct ipc_id_ary));
- for(i=size;i<newsize;i++) {
- new->p[i] = NULL;
- }
- old = ids->entries;
-
- /*
- * Use rcu_assign_pointer() to make sure the memcpyed
- * contents of the new array are visible before the new
- * array becomes visible.
- */
- rcu_assign_pointer(ids->entries, new);
-
- ipc_rcu_putref(old);
- return newsize;
- }
-
-The ipc_rcu_putref() function decrements the array's reference count
-and then, if the reference count has dropped to zero, uses call_rcu()
-to free the array after a grace period has elapsed.
-
-The array is traversed by the ipc_lock() function. This function
-indexes into the array under the protection of rcu_read_lock(),
-using rcu_dereference() to pick up the pointer to the array so
-that it may later safely be dereferenced -- memory barriers are
-required on the Alpha CPU. Since the size of the array is stored
-with the array itself, there can be no array-size mismatches, so
-a simple check suffices. The pointer to the structure corresponding
-to the desired IPC object is placed in "out", with NULL indicating
-a non-existent entry. After acquiring "out->lock", the "out->deleted"
-flag indicates whether the IPC object is in the process of being
-deleted, and, if not, the pointer is returned::
-
- struct kern_ipc_perm* ipc_lock(struct ipc_ids* ids, int id)
- {
- struct kern_ipc_perm* out;
- int lid = id % SEQ_MULTIPLIER;
- struct ipc_id_ary* entries;
-
- rcu_read_lock();
- entries = rcu_dereference(ids->entries);
- if(lid >= entries->size) {
- rcu_read_unlock();
- return NULL;
- }
- out = entries->p[lid];
- if(out == NULL) {
- rcu_read_unlock();
- return NULL;
- }
- spin_lock(&out->lock);
-
- /* ipc_rmid() may have already freed the ID while ipc_lock
- * was spinning: here verify that the structure is still valid
- */
- if (out->deleted) {
- spin_unlock(&out->lock);
- rcu_read_unlock();
- return NULL;
- }
- return out;
- }
-
-.. _answer_quick_quiz_seqlock:
-
-Answer to Quick Quiz:
- Why is it so important that updates be rare when using seqlock?
-
- The reason that it is important that updates be rare when
- using seqlock is that frequent updates can livelock readers.
- One way to avoid this problem is to assign a seqlock for
- each array entry rather than to the entire array.
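A minimal sketch of that per-entry alternative follows; struct entry and
NR_ENTRIES are again invented for the example. Because each entry has
its own seqlock, an update to entry i forces retries only on concurrent
readers of that same entry::

	#define NR_ENTRIES 16

	struct entry {
		seqlock_t lock;
		int val;
	};

	static struct entry table[NR_ENTRIES];

	/* Must run before any readers or updaters. */
	static void table_init(void)
	{
		int i;

		for (i = 0; i < NR_ENTRIES; i++)
			seqlock_init(&table[i].lock);
	}

	static int entry_read(int i)
	{
		unsigned int seq;
		int v;

		do {
			seq = read_seqbegin(&table[i].lock);
			v = table[i].val;
		} while (read_seqretry(&table[i].lock, seq));
		return v;
	}

	static void entry_write(int i, int v)
	{
		write_seqlock(&table[i].lock);
		table[i].val = v;
		write_sequnlock(&table[i].lock);
	}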
diff --git a/Documentation/RCU/checklist.rst b/Documentation/RCU/checklist.rst
index 048c5bc1f813..cc361fb01ed4 100644
--- a/Documentation/RCU/checklist.rst
+++ b/Documentation/RCU/checklist.rst
@@ -32,8 +32,8 @@ over a rather long period of time, but improvements are always welcome!
for lockless updates. This does result in the mildly
counter-intuitive situation where rcu_read_lock() and
rcu_read_unlock() are used to protect updates, however, this
- approach provides the same potential simplifications that garbage
- collectors do.
+ approach can provide the same simplifications to certain types
+ of lockless algorithms that garbage collectors do.
1. Does the update code have proper mutual exclusion?
@@ -49,12 +49,12 @@ over a rather long period of time, but improvements are always welcome!
them -- even x86 allows later loads to be reordered to precede
earlier stores), and be prepared to explain why this added
complexity is worthwhile. If you choose #c, be prepared to
- explain how this single task does not become a major bottleneck on
- big multiprocessor machines (for example, if the task is updating
- information relating to itself that other tasks can read, there
- by definition can be no bottleneck). Note that the definition
- of "large" has changed significantly: Eight CPUs was "large"
- in the year 2000, but a hundred CPUs was unremarkable in 2017.
+ explain how this single task does not become a major bottleneck
+ on large systems (for example, if the task is updating information
+ relating to itself that other tasks can read, there by definition
+ can be no bottleneck). Note that the definition of "large" has
+ changed significantly: Eight CPUs was "large" in the year 2000,
+ but a hundred CPUs was unremarkable in 2017.
2. Do the RCU read-side critical sections make proper use of
rcu_read_lock() and friends? These primitives are needed
@@ -97,33 +97,38 @@ over a rather long period of time, but improvements are always welcome!
b. Proceed as in (a) above, but also maintain per-element
locks (that are acquired by both readers and writers)
- that guard per-element state. Of course, fields that
- the readers refrain from accessing can be guarded by
- some other lock acquired only by updaters, if desired.
+ that guard per-element state. Fields that the readers
+ refrain from accessing can be guarded by some other lock
+ acquired only by updaters, if desired.
- This works quite well, also.
+ This also works quite well.
c. Make updates appear atomic to readers. For example,
pointer updates to properly aligned fields will
appear atomic, as will individual atomic primitives.
Sequences of operations performed under a lock will *not*
appear to be atomic to RCU readers, nor will sequences
- of multiple atomic primitives.
+ of multiple atomic primitives. One alternative is to
+ move multiple individual fields to a separate structure,
+ thus solving the multiple-field problem by imposing an
+ additional level of indirection.
This can work, but is starting to get a bit tricky.
- d. Carefully order the updates and the reads so that
- readers see valid data at all phases of the update.
- This is often more difficult than it sounds, especially
- given modern CPUs' tendency to reorder memory references.
- One must usually liberally sprinkle memory barriers
- (smp_wmb(), smp_rmb(), smp_mb()) through the code,
- making it difficult to understand and to test.
-
- It is usually better to group the changing data into
- a separate structure, so that the change may be made
- to appear atomic by updating a pointer to reference
- a new structure containing updated values.
+ d. Carefully order the updates and the reads so that readers
+ see valid data at all phases of the update. This is often
+ more difficult than it sounds, especially given modern
+ CPUs' tendency to reorder memory references. One must
+ usually liberally sprinkle memory-ordering operations
+ through the code, making it difficult to understand and
+ to test. Where it works, it is better to use things
+ like smp_store_release() and smp_load_acquire(), but in
+ some cases the smp_mb() full memory barrier is required.
+
+ As noted earlier, it is usually better to group the
+ changing data into a separate structure, so that the
+ change may be made to appear atomic by updating a pointer
+ to reference a new structure containing updated values.
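A minimal sketch of this grouped-fields approach; struct shared_state,
cur_state, and gp_lock are invented for the example, and the spinlock
stands in for whatever update-side mutual exclusion item 1 requires::

	struct shared_state {
		int a;
		int b;	/* Readers must see a and b from the same update. */
	};

	static struct shared_state __rcu *cur_state;
	static DEFINE_SPINLOCK(gp_lock);

	/* Reader sees either the old pair or the new pair, never a mix.
	 * Assumes cur_state was initialized at boot. */
	static void read_state(int *a, int *b)
	{
		struct shared_state *p;

		rcu_read_lock();
		p = rcu_dereference(cur_state);
		*a = p->a;
		*b = p->b;
		rcu_read_unlock();
	}

	/* Updater publishes a fully initialized replacement structure. */
	static int update_state(int a, int b)
	{
		struct shared_state *newp, *oldp;

		newp = kmalloc(sizeof(*newp), GFP_KERNEL);
		if (!newp)
			return -ENOMEM;
		newp->a = a;
		newp->b = b;
		spin_lock(&gp_lock);
		oldp = rcu_dereference_protected(cur_state,
						 lockdep_is_held(&gp_lock));
		rcu_assign_pointer(cur_state, newp);
		spin_unlock(&gp_lock);
		synchronize_rcu();	/* Wait for pre-existing readers. */
		kfree(oldp);
		return 0;
	}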
4. Weakly ordered CPUs pose special challenges. Almost all CPUs
are weakly ordered -- even x86 CPUs allow later loads to be
@@ -188,26 +193,29 @@ over a rather long period of time, but improvements are always welcome!
when publicizing a pointer to a structure that can
be traversed by an RCU read-side critical section.
-5. If call_rcu() or call_srcu() is used, the callback function will
- be called from softirq context. In particular, it cannot block.
- If you need the callback to block, run that code in a workqueue
- handler scheduled from the callback. The queue_rcu_work()
- function does this for you in the case of call_rcu().
+5. If any of call_rcu(), call_srcu(), call_rcu_tasks(),
+ call_rcu_tasks_rude(), or call_rcu_tasks_trace() is used,
+ the callback function may be invoked from softirq context,
+ and in any case with bottom halves disabled. In particular,
+ this callback function cannot block. If you need the callback
+ to block, run that code in a workqueue handler scheduled from
+ the callback. The queue_rcu_work() function does this for you
+ in the case of call_rcu().
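A minimal sketch of the queue_rcu_work() approach; struct defer and
defer_work_fn() are invented for the example, while INIT_RCU_WORK(),
queue_rcu_work(), to_rcu_work(), and system_wq are the real APIs::

	struct defer {
		struct rcu_work rwork;
		/* ... state needed by the blocking cleanup ... */
	};

	static void defer_work_fn(struct work_struct *work)
	{
		struct defer *d = container_of(to_rcu_work(work),
					       struct defer, rwork);

		/* Workqueue context after a grace period: blocking is OK. */
		kfree(d);
	}

	static void defer_free_after_gp(struct defer *d)
	{
		INIT_RCU_WORK(&d->rwork, defer_work_fn);
		queue_rcu_work(system_wq, &d->rwork);
	}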
6. Since synchronize_rcu() can block, it cannot be called
from any sort of irq context. The same rule applies
- for synchronize_srcu(), synchronize_rcu_expedited(), and
- synchronize_srcu_expedited().
+ for synchronize_srcu(), synchronize_rcu_expedited(),
+ synchronize_srcu_expedited(), synchronize_rcu_tasks(),
+ synchronize_rcu_tasks_rude(), and synchronize_rcu_tasks_trace().
The expedited forms of these primitives have the same semantics
- as the non-expedited forms, but expediting is both expensive and
- (with the exception of synchronize_srcu_expedited()) unfriendly
- to real-time workloads. Use of the expedited primitives should
- be restricted to rare configuration-change operations that would
- not normally be undertaken while a real-time workload is running.
- However, real-time workloads can use rcupdate.rcu_normal kernel
- boot parameter to completely disable expedited grace periods,
- though this might have performance implications.
+ as the non-expedited forms, but expediting is more CPU intensive.
+ Use of the expedited primitives should be restricted to rare
+ configuration-change operations that would not normally be
+ undertaken while a real-time workload is running. Note that
+ IPI-sensitive real-time workloads can use the rcupdate.rcu_normal
+ kernel boot parameter to completely disable expedited grace
+ periods, though this might have performance implications.
In particular, if you find yourself invoking one of the expedited
primitives repeatedly in a loop, please do everyone a favor:
@@ -215,8 +223,9 @@ over a rather long period of time, but improvements are always welcome!
a single non-expedited primitive to cover the entire batch.
This will very likely be faster than the loop containing the
expedited primitive, and will be much much easier on the rest
- of the system, especially to real-time workloads running on
- the rest of the system.
+ of the system, especially to real-time workloads running on the
+ rest of the system. Alternatively, instead use asynchronous
+ primitives such as call_rcu().
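A minimal sketch of this batching advice; the struct update type, the
update_one() helper, and the list are invented for the example::

	struct update {
		struct list_head node;
		/* ... description of one pending change ... */
	};

	/* One non-expedited grace period covers the entire batch,
	 * instead of one expedited grace period per element. */
	static void apply_updates(struct list_head *updates)
	{
		struct update *p;

		list_for_each_entry(p, updates, node)
			update_one(p);	/* Invented helper. */
		synchronize_rcu();
	}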
7. As of v4.20, a given kernel implements only one RCU flavor, which
is RCU-sched for PREEMPTION=n and RCU-preempt for PREEMPTION=y.
@@ -239,7 +248,8 @@ over a rather long period of time, but improvements are always welcome!
the corresponding readers must use rcu_read_lock_trace() and
rcu_read_unlock_trace(). If an updater uses call_rcu_tasks_rude()
or synchronize_rcu_tasks_rude(), then the corresponding readers
- must use anything that disables interrupts.
+ must use anything that disables preemption, for example,
+ preempt_disable() and preempt_enable().
Mixing things up will result in confusion and broken kernels, and
has even resulted in an exploitable security issue. Therefore,
@@ -253,15 +263,16 @@ over a rather long period of time, but improvements are always welcome!
that this usage is safe is that readers can use anything that
disables BH when updaters use call_rcu() or synchronize_rcu().
-8. Although synchronize_rcu() is slower than is call_rcu(), it
- usually results in simpler code. So, unless update performance is
- critically important, the updaters cannot block, or the latency of
- synchronize_rcu() is visible from userspace, synchronize_rcu()
- should be used in preference to call_rcu(). Furthermore,
- kfree_rcu() usually results in even simpler code than does
- synchronize_rcu() without synchronize_rcu()'s multi-millisecond
- latency. So please take advantage of kfree_rcu()'s "fire and
- forget" memory-freeing capabilities where it applies.
+8. Although synchronize_rcu() is slower than is call_rcu(),
+ it usually results in simpler code. So, unless update
+ performance is critically important, the updaters cannot block,
+ or the latency of synchronize_rcu() is visible from userspace,
+ synchronize_rcu() should be used in preference to call_rcu().
+ Furthermore, kfree_rcu() and kvfree_rcu() usually result
+ in even simpler code than does synchronize_rcu() without
+ synchronize_rcu()'s multi-millisecond latency. So please take
+ advantage of kfree_rcu()'s and kvfree_rcu()'s "fire and forget"
+ memory-freeing capabilities where it applies.
An especially important property of the synchronize_rcu()
primitive is that it automatically self-limits: if grace periods
@@ -271,8 +282,8 @@ over a rather long period of time, but improvements are always welcome!
cases where grace periods are delayed, as failing to do so can
result in excessive realtime latencies or even OOM conditions.
- Ways of gaining this self-limiting property when using call_rcu()
- include:
+ Ways of gaining this self-limiting property when using call_rcu(),
+ kfree_rcu(), or kvfree_rcu() include:
a. Keeping a count of the number of data-structure elements
used by the RCU-protected data structure, including
@@ -304,18 +315,21 @@ over a rather long period of time, but improvements are always welcome!
here is that superuser already has lots of ways to crash
the machine.
- d. Periodically invoke synchronize_rcu(), permitting a limited
- number of updates per grace period. Better yet, periodically
- invoke rcu_barrier() to wait for all outstanding callbacks.
+ d. Periodically invoke rcu_barrier(), permitting a limited
+ number of updates per grace period.
- The same cautions apply to call_srcu() and kfree_rcu().
+ The same cautions apply to call_srcu(), call_rcu_tasks(),
+ call_rcu_tasks_rude(), and call_rcu_tasks_trace(). This is
+ why there is an srcu_barrier(), rcu_barrier_tasks(),
+	rcu_barrier_tasks_rude(), and rcu_barrier_tasks_trace(),
+ respectively.
- Note that although these primitives do take action to avoid memory
- exhaustion when any given CPU has too many callbacks, a determined
- user could still exhaust memory. This is especially the case
- if a system with a large number of CPUs has been configured to
- offload all of its RCU callbacks onto a single CPU, or if the
- system has relatively little free memory.
+ Note that although these primitives do take action to avoid
+ memory exhaustion when any given CPU has too many callbacks,
+ a determined user or administrator can still exhaust memory.
+ This is especially the case if a system with a large number of
+ CPUs has been configured to offload all of its RCU callbacks onto
+ a single CPU, or if the system has relatively little free memory.
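A minimal sketch contrasting the synchronize_rcu() and kfree_rcu()
styles from this item; struct foo and its list are invented for the
example, and both functions assume that the caller holds the update-side
lock::

	struct foo {
		struct list_head list;
		struct rcu_head rh;
	};

	static void remove_foo_sync(struct foo *fp)
	{
		list_del_rcu(&fp->list);
		synchronize_rcu();	/* Simple and self-limiting, but blocks. */
		kfree(fp);
	}

	static void remove_foo_async(struct foo *fp)
	{
		list_del_rcu(&fp->list);
		kfree_rcu(fp, rh);	/* "Fire and forget": no blocking. */
	}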
9. All RCU list-traversal primitives, which include
rcu_dereference(), list_for_each_entry_rcu(), and
@@ -344,14 +358,14 @@ over a rather long period of time, but improvements are always welcome!
and you don't hold the appropriate update-side lock, you *must*
use the "_rcu()" variants of the list macros. Failing to do so
will break Alpha, cause aggressive compilers to generate bad code,
- and confuse people trying to read your code.
+ and confuse people trying to understand your code.
11. Any lock acquired by an RCU callback must be acquired elsewhere
- with softirq disabled, e.g., via spin_lock_irqsave(),
- spin_lock_bh(), etc. Failing to disable softirq on a given
- acquisition of that lock will result in deadlock as soon as
- the RCU softirq handler happens to run your RCU callback while
- interrupting that acquisition's critical section.
+ with softirq disabled, e.g., via spin_lock_bh(). Failing to
+ disable softirq on a given acquisition of that lock will result
+ in deadlock as soon as the RCU softirq handler happens to run
+ your RCU callback while interrupting that acquisition's critical
+ section.
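A minimal sketch of this rule; mylock and the state it protects are
invented for the example::

	static DEFINE_SPINLOCK(mylock);

	/* Process context: must disable BH while holding mylock. */
	static void update_shared_state(void)
	{
		spin_lock_bh(&mylock);
		/* ... update state also used by the RCU callback ... */
		spin_unlock_bh(&mylock);
	}

	/* RCU callback, invoked from softirq context, so a plain
	 * spin_lock() cannot deadlock against update_shared_state(). */
	static void my_rcu_cb(struct rcu_head *rhp)
	{
		spin_lock(&mylock);
		/* ... clean up that same state ... */
		spin_unlock(&mylock);
	}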
12. RCU callbacks can be and are executed in parallel. In many cases,
the callback code simply wrappers around kfree(), so that this
@@ -372,7 +386,17 @@ over a rather long period of time, but improvements are always welcome!
for some real-time workloads, this is the whole point of using
the rcu_nocbs= kernel boot parameter.
-13. Unlike other forms of RCU, it *is* permissible to block in an
+ In addition, do not assume that callbacks queued in a given order
+ will be invoked in that order, even if they all are queued on the
+ same CPU. Furthermore, do not assume that same-CPU callbacks will
+ be invoked serially. For example, in recent kernels, CPUs can be
+ switched between offloaded and de-offloaded callback invocation,
+ and while a given CPU is undergoing such a switch, its callbacks
+ might be concurrently invoked by that CPU's softirq handler and
+ that CPU's rcuo kthread. At such times, that CPU's callbacks
+ might be executed both concurrently and out of order.
+
+13. Unlike most flavors of RCU, it *is* permissible to block in an
SRCU read-side critical section (demarked by srcu_read_lock()
and srcu_read_unlock()), hence the "SRCU": "sleepable RCU".
Please note that if you don't need to sleep in read-side critical
@@ -412,6 +436,12 @@ over a rather long period of time, but improvements are always welcome!
never sends IPIs to other CPUs, so it is easier on
real-time workloads than is synchronize_rcu_expedited().
+ It is also permissible to sleep in RCU Tasks Trace read-side
+	critical sections, which are delimited by rcu_read_lock_trace() and
+ rcu_read_unlock_trace(). However, this is a specialized flavor
+ of RCU, and you should not use it without first checking with
+ its current users. In most cases, you should instead use SRCU.
+
Note that rcu_assign_pointer() relates to SRCU just as it does to
other forms of RCU, but instead of rcu_dereference() you should
use srcu_dereference() in order to avoid lockdep splats.
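A minimal sketch of an SRCU reader; the my_srcu domain, struct foo, and
my_ptr are invented for the example::

	DEFINE_SRCU(my_srcu);

	struct foo {
		int a;
	};

	static struct foo __rcu *my_ptr;

	static int read_foo(void)
	{
		struct foo *p;
		int idx, ret = 0;

		idx = srcu_read_lock(&my_srcu);
		p = srcu_dereference(my_ptr, &my_srcu); /* Not rcu_dereference()! */
		if (p)
			ret = p->a;	/* Sleeping would also be legal here. */
		srcu_read_unlock(&my_srcu, idx);
		return ret;
	}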
@@ -442,50 +472,62 @@ over a rather long period of time, but improvements are always welcome!
find problems as follows:
CONFIG_PROVE_LOCKING:
- check that accesses to RCU-protected data
- structures are carried out under the proper RCU
- read-side critical section, while holding the right
- combination of locks, or whatever other conditions
- are appropriate.
+ check that accesses to RCU-protected data structures
+ are carried out under the proper RCU read-side critical
+ section, while holding the right combination of locks,
+ or whatever other conditions are appropriate.
CONFIG_DEBUG_OBJECTS_RCU_HEAD:
- check that you don't pass the
- same object to call_rcu() (or friends) before an RCU
- grace period has elapsed since the last time that you
- passed that same object to call_rcu() (or friends).
+ check that you don't pass the same object to call_rcu()
+ (or friends) before an RCU grace period has elapsed
+ since the last time that you passed that same object to
+ call_rcu() (or friends).
__rcu sparse checks:
- tag the pointer to the RCU-protected data
- structure with __rcu, and sparse will warn you if you
- access that pointer without the services of one of the
- variants of rcu_dereference().
+ tag the pointer to the RCU-protected data structure
+		with __rcu, and sparse will warn you if you access that
+		pointer without the services of one of the variants of
+		rcu_dereference().