pub struct Stressor { /* private fields */ }
Designed to ‘exercise’ the filesystem and thrash the dirent cache. It works cyclically through 10000 files (the dirent cache holds 8000).
On each iteration it will either read, write or rewrite a file. Reads are unaligned and random; writes are append-only, each up to 128kB in length. A write error causes the file to be truncated to zero bytes.
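As a rough illustration only, here is a minimal sketch of one such iteration. The file path, the `MAX_IO_LEN` constant and the `rand_u64` helper are all assumptions made for the example; the real `Stressor` internals are private:

```rust
use std::collections::hash_map::RandomState;
use std::fs::OpenOptions;
use std::hash::{BuildHasher, Hasher};
use std::io::{Read, Seek, SeekFrom, Write};
use std::path::Path;

// Illustrative cap matching the "up to 128kB" described above.
const MAX_IO_LEN: u64 = 128 * 1024;

/// Dependency-free random u64, via std's randomly seeded hasher.
fn rand_u64() -> u64 {
    RandomState::new().build_hasher().finish()
}

/// One iteration on one file: an unaligned read at a random offset, or an
/// append-only write of random length; a failed write truncates to zero.
fn step(path: &Path) -> std::io::Result<()> {
    let mut f = OpenOptions::new()
        .read(true)
        .append(true)
        .create(true)
        .open(path)?;
    let len = f.metadata()?.len();

    if rand_u64() % 2 == 0 && len > 0 {
        // Unaligned random read: any byte offset, up to 128kB.
        let off = rand_u64() % len;
        let mut buf = vec![0u8; MAX_IO_LEN.min(len - off) as usize];
        f.seek(SeekFrom::Start(off))?;
        f.read_exact(&mut buf)?;
    } else {
        // Append-only write of 1..=128kB; on error, truncate to zero bytes.
        let chunk = vec![0xA5u8; (1 + rand_u64() % MAX_IO_LEN) as usize];
        if f.write_all(&chunk).is_err() {
            f.set_len(0)?;
        }
    }
    Ok(())
}

fn main() -> std::io::Result<()> {
    step(&std::env::temp_dir().join("stressor-demo"))
}
```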
The file order is partially permuted on each pass: only those files known not to be in the dirent cache are shuffled, which reduces cyclic write/delete patterns.
The truncation of files when we hit the target limit is also randomized to avoid introducing cyclic patterns (n writes, truncate, n writes, truncate, …).
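A sketch of how both randomizations might look, again with illustrative names (`DIRENT_CACHE`, `permute_uncached` and the 1-in-4 truncation odds are assumptions, not the crate's actual API):

```rust
use std::collections::hash_map::RandomState;
use std::hash::{BuildHasher, Hasher};

const NUM_FILES: usize = 10_000;
const DIRENT_CACHE: usize = 8_000;

/// Dependency-free random u64, via std's randomly seeded hasher.
fn rand_u64() -> u64 {
    RandomState::new().build_hasher().finish()
}

/// Fisher-Yates shuffle over only the tail of the visit order: entries at
/// indices >= DIRENT_CACHE are presumed to have fallen out of the dirent
/// cache, so only they move between passes.
fn permute_uncached(order: &mut [usize]) {
    for i in ((DIRENT_CACHE + 1)..order.len()).rev() {
        let j = DIRENT_CACHE + (rand_u64() as usize) % (i - DIRENT_CACHE + 1);
        order.swap(i, j);
    }
}

/// Once the target byte count is exceeded, truncate with some probability
/// rather than unconditionally, so truncations avoid a fixed cadence.
fn should_truncate(bytes_written: u64, target_bytes: u64) -> bool {
    bytes_written > target_bytes && rand_u64() % 4 == 0
}

fn main() {
    let mut order: Vec<usize> = (0..NUM_FILES).collect();
    permute_uncached(&mut order);
    // The cached prefix keeps its order; only the tail was shuffled.
    assert_eq!(order[..DIRENT_CACHE], (0..DIRENT_CACHE).collect::<Vec<_>>()[..]);
    println!("truncate now? {}", should_truncate(1_600_000_000, 1_500_000_000));
}
```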
Despite these measures, we still expect to see some cyclic behaviour. Each operation has equal probability of being a read or a write. It takes about 3 writes to reach 150kB (the average file size before we hit the target of 1.5GB), so in steady state we can assume n reads, n writes and n/3 truncates per pass:
n + n + n/3 = 10000
7n/3 = 10000
n = 30000/7 ≈ 4285
On average, 4285/3 ≈ 1430 (roughly 1500) files are truncated each pass. At the 150kB average file size, that comes to around 220MB, or ~14% of the 1.5GB target. In the best case it would therefore take about 7 passes to rewrite all the data, but in practice we can assume most of the data will be rewritten within a few dozen passes.
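As a back-of-envelope check of these figures, restating the constants from the prose (10000 files, ~150kB average size, 1.5GB target) purely for the arithmetic:

```rust
fn main() {
    let files = 10_000.0_f64;
    let avg_file_bytes = 150e3; // ~150kB average file size
    let target_bytes = 1.5e9; // 1.5GB target

    let n = files * 3.0 / 7.0; // solves n + n + n/3 = 10000
    let truncates = n / 3.0;
    let rewritten = truncates * avg_file_bytes;

    println!("reads or writes per pass: {n:.0}"); // ~4286
    println!("truncates per pass:       {truncates:.0}"); // ~1429
    println!(
        "rewritten per pass: {:.0} MB ({:.0}% of target)",
        rewritten / 1e6,                  // ~214 MB
        100.0 * rewritten / target_bytes  // ~14%
    );
}
```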
Anecdotally, each pass takes between 2 and 20 seconds, depending on system load, allocation strategy and fragmentation.