tracing_mutex/lib.rs
//! Mutexes can deadlock each other, but you can avoid this by always acquiring your locks in a
//! consistent order. This crate provides tracing to ensure that you do.
//!
//! This crate tracks a virtual "stack" of locks that the current thread holds, and whenever a new
//! lock is acquired, a dependency is created from the last lock to the new one. These dependencies
//! together form a graph. As long as that graph does not contain any cycles, your program is
//! guaranteed to never deadlock.
//!
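//! For example, the sketch below (using the [`stdsync`] wrappers, and assuming the tracing variant
//! is active, as it is in builds with debug assertions) acquires two locks in opposite orders. The
//! second ordering would close a cycle in the dependency graph, so it is rejected:
//!
//! ```no_run
//! use tracing_mutex::stdsync::Mutex;
//!
//! let a = Mutex::new(());
//! let b = Mutex::new(());
//!
//! {
//!     // First ordering: this records the dependency a → b.
//!     let _a = a.lock().unwrap();
//!     let _b = b.lock().unwrap();
//! }
//!
//! {
//!     // Reverse ordering: adding b → a would create a cycle, so this acquisition is
//!     // reported (by default, with a panic) instead of silently risking a deadlock.
//!     let _b = b.lock().unwrap();
//!     let _a = a.lock().unwrap();
//! }
//! ```
//!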
//! # Panics
//!
//! The primary method by which this crate signals an invalid lock acquisition order is by
//! panicking. When acquiring a lock would create a cycle in the dependency graph, the thread
//! panics instead. This panic does not poison the underlying mutex.
//!
//! The conflicting dependency is not added to the graph, so future attempts at locking should
//! succeed as normal.
//!
//! You can suppress panics by calling [`suppress_panics`]. The crate then prints the cycle to
//! stderr instead of panicking. This is useful for incrementally adopting tracing-mutex in a large
//! codebase compiled with `panic=abort`, as it allows you to continue running your program even
//! when a cycle is detected.
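//!
//! A minimal sketch of that opt-out, assuming [`suppress_panics`] takes no arguments, is to call it
//! once during program startup:
//!
//! ```no_run
//! // From this point on, detected cycles are printed to stderr instead of causing a panic.
//! tracing_mutex::suppress_panics();
//! ```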
//!
//! # Structure
//!
//! Each module in this crate exposes wrappers for a specific base mutex with dependency tracking
//! added. This includes [`stdsync`], which provides wrappers for the locks in the standard
//! library, and more depending on enabled compile-time features. More back-ends may be added as
//! features in the future.
//!
//! # Feature flags
//!
//! `tracing-mutex` uses feature flags to reduce the impact of this crate on both your compile time
//! and runtime overhead. Below are the available flags. Modules are annotated with the features
//! they require.
//!
//! - `backtraces`: Enables capturing backtraces of mutex dependencies, to make it easier to
//! determine what sequence of events would trigger a deadlock. This is enabled by default, but if
//! the performance overhead is unacceptable, it can be disabled by disabling default features.
//!
//! - `lock_api`: Enables the wrapper lock for [`lock_api`][lock_api] locks
//!
//! - `parking_lot`: Enables wrapper types for [`parking_lot`][parking_lot] mutexes
//!
//! - `experimental`: Enables experimental features. Experimental features are intended for trying
//! out new APIs before committing to them. As such, breaking changes may be introduced between
//! otherwise semver-compatible versions, and the MSRV does not apply to experimental features.
//!
//! # Performance considerations
//!
//! Tracing a mutex adds overhead to certain mutex operations in order to do the required
//! bookkeeping. The overhead of each operation is as follows.
//!
//! - **Acquiring a lock** locks the global dependency graph temporarily to check if the new lock
//! would introduce a cyclic dependency. This crate uses the algorithm proposed in ["A Dynamic
//! Topological Sort Algorithm for Directed Acyclic Graphs" by David J. Pearce and Paul H.J.
//! Kelly][paper] to detect cycles as efficiently as possible. In addition, a thread-local lock set
//! is updated with the new lock.
//!
//! - **Releasing a lock** updates a thread-local lock set to remove the released lock.
//!
//! - **Allocating a lock** performs an atomic update to a shared counter.
//!
//! - **Deallocating a mutex** temporarily locks the global dependency graph to remove the lock
//! entry from it.
//!
//! These operations have been reasonably optimized, but the performance penalty may still be too
//! much for production use. In those cases, it may be beneficial to instead use debug-only versions
//! (such as [`stdsync::Mutex`]) which evaluate to a tracing mutex when debug assertions are
//! enabled, and to the underlying mutex when they're not.
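//!
//! A minimal sketch of that pattern, assuming the wrapper mirrors the `std::sync::Mutex` API:
//!
//! ```no_run
//! // Debug builds get a tracing mutex; release builds get the plain standard-library mutex.
//! use tracing_mutex::stdsync::Mutex;
//!
//! let shared = Mutex::new(Vec::new());
//! shared.lock().unwrap().push(42);
//! ```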
//!
//! For ease of debugging, this crate will, by default, capture a backtrace when establishing a new
//! dependency between two mutexes. This has an additional overhead of over 60%. If this additional
//! debugging aid is not required, it can be disabled by disabling default features.
//!
//! [paper]: https://whileydave.com/publications/pk07_jea/
//! [lock_api]: https://docs.rs/lock_api/0.4/lock_api/index.html
//! [parking_lot]: https://docs.rs/parking_lot/0.12.1/parking_lot/
#![cfg_attr(docsrs, feature(doc_cfg))]
use std::cell::RefCell;
use std::fmt;
use std::marker::PhantomData;
use std::ops::Deref;
use std::ops::DerefMut;
use std::sync::Mutex;
use std::sync::MutexGuard;
use std::sync::OnceLock;
use std::sync::PoisonError;
use std::sync::atomic::AtomicUsize;
use std::sync::atomic::Ordering;

#[cfg(feature = "lock_api")]
#[cfg_attr(docsrs, doc(cfg(feature = "lock_api")))]
#[deprecated = "The top-level re-export `lock_api` is deprecated. Use `tracing_mutex::lockapi::raw` instead"]
pub use lock_api;
#[cfg(feature = "parking_lot")]
#[cfg_attr(docsrs, doc(cfg(feature = "parking_lot")))]
#[deprecated = "The top-level re-export `parking_lot` is deprecated. Use `tracing_mutex::parkinglot::raw` instead"]
pub use parking_lot;

use graph::DiGraph;
use reporting::Dep;
use reporting::Reportable;

mod graph;
#[cfg(any(feature = "lock_api", feature = "lockapi"))]
#[cfg_attr(docsrs, doc(cfg(feature = "lock_api")))]
#[cfg_attr(
    all(not(docsrs), feature = "lockapi", not(feature = "lock_api")),
    deprecated = "The `lockapi` feature has been renamed `lock_api`"
)]
pub mod lockapi;
#[cfg(any(feature = "parking_lot", feature = "parkinglot"))]
#[cfg_attr(docsrs, doc(cfg(feature = "parking_lot")))]
#[cfg_attr(
    all(not(docsrs), feature = "parkinglot", not(feature = "parking_lot")),
    deprecated = "The `parkinglot` feature has been renamed `parking_lot`"
)]
pub mod parkinglot;
mod reporting;
pub mod stdsync;
pub mod util;

pub use reporting::suppress_panics;

thread_local! {
    /// Stack to track which locks are held
    ///
    /// Assuming that locks are roughly released in the reverse order in which they were acquired,
    /// a stack should be more efficient to keep track of the current state than a set would be.
    static HELD_LOCKS: RefCell<Vec<usize>> = const { RefCell::new(Vec::new()) };
}

/// Dedicated ID type for Mutexes
///
/// # Unstable
///
/// This type is currently private to prevent usage while the exact implementation is figured out,
/// but it will likely be public in the future.
struct MutexId(usize);

impl MutexId {
    /// Get a new, unique, mutex ID.
    ///
    /// This ID is guaranteed to be unique within the runtime of the program.
    ///
    /// # Panics
    ///
    /// This function may panic when there are no more mutex IDs available. The number of mutex IDs
    /// is `usize::MAX - 1`, which should be plenty for most practical applications.
    pub fn new() -> Self {
        // Counter for Mutex IDs. Atomic avoids the need for locking.
        static ID_SEQUENCE: AtomicUsize = AtomicUsize::new(0);

        ID_SEQUENCE
            .fetch_update(Ordering::SeqCst, Ordering::SeqCst, |id| id.checked_add(1))
            .map(Self)
            .expect("Mutex ID wraparound happened, results unreliable")
    }

    pub fn value(&self) -> usize {
        self.0
    }

    /// Get a borrowed guard for this lock.
    ///
    /// This method adds this mutex ID to the dependency graph as needed, and adds the lock to the
    /// list of locks currently held by this thread.
    ///
    /// # Panics
    ///
    /// This method panics if the new dependency would introduce a cycle.
    pub fn get_borrowed(&self) -> BorrowedMutex<'_> {
        self.mark_held();
        BorrowedMutex {
            id: self,
            _not_send: PhantomData,
        }
    }

    /// Mark this lock as held for the purposes of dependency tracking.
    ///
    /// # Panics
    ///
    /// This method panics if the new dependency would introduce a cycle.
    pub fn mark_held(&self) {
        let opt_cycle = HELD_LOCKS.with(|locks| {
            if let Some(&previous) = locks.borrow().last() {
                let mut graph = get_dependency_graph();

                graph.add_edge(previous, self.value(), Dep::capture).err()
            } else {
                None
            }
        });

        if let Some(cycle) = opt_cycle {
            reporting::report_cycle(&cycle);
        }

        HELD_LOCKS.with(|locks| locks.borrow_mut().push(self.value()));
    }

    /// Mark this lock as released for the purposes of dependency tracking.
    ///
    /// # Safety
    ///
    /// This method should only be called by the thread that previously marked this lock as held,
    /// e.g. from the drop handler of the corresponding [`BorrowedMutex`].
    pub unsafe fn mark_released(&self) {
        HELD_LOCKS.with(|locks| {
            let mut locks = locks.borrow_mut();

            for (i, &lock) in locks.iter().enumerate().rev() {
                if lock == self.value() {
                    locks.remove(i);
                    return;
                }
            }

            // Drop impls shouldn't panic but if this happens something is seriously broken.
            unreachable!("Tried to drop lock for mutex {:?} but it wasn't held", self)
        });
    }

    /// Execute the given closure while this lock is marked as held.
    pub fn with_held<T>(&self, f: impl FnOnce() -> T) -> T {
        // Note: we MUST construct the RAII guard, we cannot simply mark held + mark released, as
        // f() may panic and corrupt our state.
        let _guard = self.get_borrowed();
        f()
    }
}

impl Default for MutexId {
    fn default() -> Self {
        Self::new()
    }
}

impl fmt::Debug for MutexId {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "MutexID({:?})", self.0)
    }
}

impl Drop for MutexId {
    fn drop(&mut self) {
        get_dependency_graph().remove_node(self.value());
    }
}

/// `const`-compatible version of [`crate::MutexId`].
///
/// This struct can be used similarly to the normal mutex ID, but to be const-compatible its ID is
/// generated on first use. This allows it to be used as the mutex ID for mutexes with a `const`
/// constructor.
///
/// This type can be largely replaced once std::lazy gets stabilized.
struct LazyMutexId {
    inner: OnceLock<MutexId>,
}

impl LazyMutexId {
    pub const fn new() -> Self {
        Self {
            inner: OnceLock::new(),
        }
    }
}

impl fmt::Debug for LazyMutexId {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{:?}", self.deref())
    }
}

impl Default for LazyMutexId {
    fn default() -> Self {
        Self::new()
    }
}

impl Deref for LazyMutexId {
    type Target = MutexId;

    fn deref(&self) -> &Self::Target {
        self.inner.get_or_init(MutexId::new)
    }
}

/// Borrowed mutex ID
///
/// This type should be used as part of a mutex guard wrapper. It can be acquired through
/// [`MutexId::get_borrowed`] and will automatically mark the mutex as not borrowed when it is
/// dropped.
///
/// This type is intentionally [`!Send`](std::marker::Send) because the ownership tracking is based
/// on a thread-local stack, which doesn't work if a guard is released on a different thread from
/// the one where it was acquired.
#[derive(Debug)]
struct BorrowedMutex<'a> {
    /// Reference to the mutex we're borrowing from
    id: &'a MutexId,
    /// This value serves no purpose but to make the type [`!Send`](std::marker::Send)
    _not_send: PhantomData<MutexGuard<'static, ()>>,
}

/// Drop a lock held by the current thread.
///
/// # Panics
///
/// This function panics if the lock does not appear to be held by this thread. If that happens, it
/// indicates a serious design flaw in this library.
impl Drop for BorrowedMutex<'_> {
    fn drop(&mut self) {
        // Safety: the only way to get a BorrowedMutex is by locking the mutex.
        unsafe { self.id.mark_released() };
    }
}

/// Get a reference to the current dependency graph
fn get_dependency_graph() -> impl DerefMut<Target = DiGraph<usize, Dep>> {
    static DEPENDENCY_GRAPH: OnceLock<Mutex<DiGraph<usize, Dep>>> = OnceLock::new();

    DEPENDENCY_GRAPH
        .get_or_init(Default::default)
        .lock()
        .unwrap_or_else(PoisonError::into_inner)
}

#[cfg(test)]
mod tests {
    use rand::seq::SliceRandom;
    use rand::thread_rng;

    use super::*;

    #[test]
    fn test_next_mutex_id() {
        let initial = MutexId::new();
        let next = MutexId::new();

        // Can't assert N + 1 because multiple threads may be running tests
        assert!(initial.0 < next.0);
    }

    #[test]
    fn test_lazy_mutex_id() {
        let a = LazyMutexId::new();
        let b = LazyMutexId::new();
        let c = LazyMutexId::new();

        let mut graph = get_dependency_graph();
        assert!(graph.add_edge(a.value(), b.value(), Dep::capture).is_ok());
        assert!(graph.add_edge(b.value(), c.value(), Dep::capture).is_ok());

        // Creating an edge c → a should fail as it introduces a cycle.
        assert!(graph.add_edge(c.value(), a.value(), Dep::capture).is_err());

        // Drop graph handle so we can drop vertices without deadlocking
        drop(graph);

        drop(b);

        // If b's destructor ran correctly we can now add an edge from c to a.
        assert!(
            get_dependency_graph()
                .add_edge(c.value(), a.value(), Dep::capture)
                .is_ok()
        );
    }

    /// Test creating a cycle, then panicking.
    #[test]
    #[should_panic]
    fn test_mutex_id_conflict() {
        let ids = [MutexId::new(), MutexId::new(), MutexId::new()];

        for i in 0..3 {
            let _first_lock = ids[i].get_borrowed();
            let _second_lock = ids[(i + 1) % 3].get_borrowed();
        }
    }

    /// Fuzz the global dependency graph by fake-acquiring lots of mutexes in a valid order.
    ///
    /// This test generates all possible forward edges in a 100-node graph consisting of natural
    /// numbers, shuffles them, then adds them to the graph. This will always be a valid directed
    /// acyclic graph because there is a trivial order (the natural numbers), but because the edges
    /// are added in a random order the DiGraph will still occasionally need to reorder nodes.
    #[test]
    fn fuzz_mutex_id() {
        const NUM_NODES: usize = 100;

        let ids: Vec<MutexId> = (0..NUM_NODES).map(|_| Default::default()).collect();

        let mut edges = Vec::with_capacity(NUM_NODES * NUM_NODES);
        for i in 0..NUM_NODES {
            for j in i..NUM_NODES {
                if i != j {
                    edges.push((i, j));
                }
            }
        }

        edges.shuffle(&mut thread_rng());

        for (x, y) in edges {
            // Acquire the mutexes, smallest first to ensure a cycle-free graph
            let _ignored = ids[x].get_borrowed();
            let _ = ids[y].get_borrowed();
        }
    }
}
409}