Mutexes can deadlock each other, but you can avoid this by always acquiring your locks in a consistent order. This crate provides tracing to ensure that you do.
This crate tracks a virtual “stack” of locks that the current thread holds, and whenever a new lock is acquired, a dependency is created from the last lock to the new one. These dependencies together form a graph. As long as that graph does not contain any cycles, your program is guaranteed to never deadlock.
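As a quick illustration, here is a minimal sketch of consistent lock ordering with the tracing wrappers. It assumes that stdsync::Mutex (described below) mirrors the std::sync::Mutex API, with lock() returning a LockResult, which is how a drop-in replacement would behave.

```rust
// Sketch: two locks that are always taken in the same order (config before data).
// The dependency graph then only ever contains the edge config -> data, so no
// cycle can form and no panic occurs.
use tracing_mutex::stdsync::Mutex;

fn main() {
    let config = Mutex::new(String::from("default"));
    let data = Mutex::new(Vec::<u32>::new());

    // Every code path acquires `config` first, then `data`.
    {
        let cfg = config.lock().unwrap();
        let mut d = data.lock().unwrap();
        d.push(cfg.len() as u32);
    } // Both guards dropped here, popping the virtual lock "stack".

    // Later acquisitions in the same order reuse the existing config -> data
    // edge; the graph stays acyclic.
    let _cfg = config.lock().unwrap();
    let _d = data.lock().unwrap();
}
```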
§Panics
The primary way this crate signals an invalid lock acquisition order is by panicking. When acquiring a lock would create a cycle in the dependency graph, the acquiring thread panics instead. This panic will not poison the underlying mutex.
This conflicting dependency is not added to the graph, so future attempts at locking should succeed as normal.
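A minimal sketch of this behaviour, assuming a build with debug assertions enabled (so that stdsync::Mutex resolves to its tracing variant) and the same std-style lock() API as above:

```rust
use std::panic::{catch_unwind, AssertUnwindSafe};
use tracing_mutex::stdsync::Mutex;

fn main() {
    let a = Mutex::new(1);
    let b = Mutex::new(2);

    // Establish the dependency a -> b.
    {
        let _ga = a.lock().unwrap();
        let _gb = b.lock().unwrap();
    }

    // Acquiring in the opposite order would add b -> a and close a cycle,
    // so the second lock() call panics instead of risking a deadlock.
    let result = catch_unwind(AssertUnwindSafe(|| {
        let _gb = b.lock().unwrap();
        let _ga = a.lock().unwrap(); // panics here in a tracing build
    }));
    assert!(result.is_err());
}
```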
§Structure
Each module in this crate exposes wrappers for a specific base mutex with dependency tracking added. This includes stdsync, which provides wrappers for the base locks in the standard library, and more depending on enabled compile-time features. More back-ends may be added as features in the future.
§Feature flags
tracing-mutex uses feature flags to limit its impact on both your compile time and your runtime overhead. The available flags are listed below. Modules are annotated with the features they require.
- backtraces: Enables capturing backtraces of mutex dependencies, to make it easier to determine what sequence of events would trigger a deadlock. This is enabled by default, but if the performance overhead is unacceptable, it can be disabled by disabling default features.
- lockapi: Enables wrapper types for lock_api locks.
- parkinglot: Enables wrapper types for parking_lot mutexes.
§Performance considerations
Tracing a mutex adds overhead to certain mutex operations in order to do the required bookkeeping. The overhead of each operation is listed below.
- Acquiring a lock locks the global dependency graph temporarily to check if the new lock would introduce a cyclic dependency. This crate uses the algorithm proposed in “A Dynamic Topological Sort Algorithm for Directed Acyclic Graphs” by David J. Pearce and Paul H.J. Kelly to detect cycles as efficiently as possible. In addition, a thread-local lock set is updated with the new lock.
- Releasing a lock updates a thread-local lock set to remove the released lock.
- Allocating a lock performs an atomic update to a shared counter.
- Deallocating a mutex temporarily locks the global dependency graph to remove its entry from the dependency graph.
These operations have been reasonably optimized, but the performance penalty may yet be too much for production use. In those cases, it may be beneficial to instead use debug-only versions (such as stdsync::Mutex), which evaluate to a tracing mutex when debug assertions are enabled and to the underlying mutex when they’re not.
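A sketch of that pattern, assuming the stdsync::Mutex alias keeps the std::sync::Mutex API in both configurations:

```rust
// In builds with debug assertions this is a tracing mutex; in release builds
// it resolves to the plain standard-library mutex, so the tracing overhead
// disappears without changing any call sites.
use tracing_mutex::stdsync::Mutex;

struct Cache {
    entries: Mutex<Vec<String>>,
}

impl Cache {
    fn insert(&self, entry: String) {
        // Lock ordering is only checked when debug assertions are enabled.
        self.entries.lock().unwrap().push(entry);
    }
}
```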
For ease of debugging, this crate will, by default, capture a backtrace when establishing a new dependency between two mutexes. This has an additional overhead of over 60%. If this additional debugging aid is not required, it can be disabled by disabling default features.
§Re-exports
pub use lock_api;
pub use parking_lot;
§Modules
- lockapi: Wrapper implementations for lock_api.
- parkinglot: Wrapper types and type aliases for tracing parking_lot mutexes.
- stdsync: Tracing mutex wrappers for locks found in std::sync.