shared_buffer/lib.rs
// Copyright 2018 The Fuchsia Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

//! Utilities for safely operating on memory shared between untrusting
//! processes.
//!
//! `shared-buffer` provides support for safely operating on memory buffers
//! which are shared with another process which is untrusted. The Rust memory
//! model assumes that only code running in the current process - and thus
//! either trusted or generated by Rust itself - operates on a given region of
//! memory. As a result, simply treating a region of memory to which another,
//! untrusted process has read or write access as equivalent to normal process
//! memory is unsafe. This crate provides the `SharedBuffer` type, which has
//! methods that allow safe access to such memory.
//!
//! Examples of issues that could arise if shared memory were treated as normal
//! memory include:
//! - Unintentionally leaking sensitive values to another process
//! - Allowing other processes to cause an invalid sequence of memory to be
//!   interpreted as a given type
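//!
//! # Examples
//!
//! A minimal sketch of the intended flow. Here a local array stands in for
//! the backing memory; in practice, the pointer would come from mapping a
//! region shared with another process.
//!
//! ```no_run
//! use shared_buffer::SharedBuffer;
//!
//! let mut backing = [0u8; 8];
//! // Safety: for this sketch, the local array stands in for a mapped,
//! // readable, writable region that outlives the SharedBuffer.
//! let mut buf = unsafe { SharedBuffer::new(backing.as_mut_ptr(), backing.len()) };
//!
//! assert_eq!(buf.write(&[1, 2, 3, 4]), 4);
//! // Make the writes visible to the other process before signalling it.
//! buf.release_writes();
//! ```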

// NOTES(joshlf) on implementation: We need to worry about the following issues:
// - If another process has write access to a given region of memory, then
//   arbitrary writes may happen at any time. Thus, it is never safe to access
//   this memory through any Rust type other than a raw pointer, or else the
//   compiler might allow operations or make optimizations based on the
//   assumption that the memory is either owned (in the case of a mutable
//   reference) or immutable (in the case of an immutable reference). In either
//   of these cases, any such allowance or optimization would be unsound. For
//   example, the compiler might decide that, after having written a T to a
//   particular memory location, it is safe to read that memory location and
//   treat it as a T. This would cause undefined behavior if the other process
//   modified that memory location in the meantime. Perhaps more fundamentally,
//   both mutable and immutable references guarantee that nobody else is
//   modifying this memory other than me (and not even me, in the case of an
//   immutable reference). On this basis alone, it is clear that neither
//   reference is compatible with foreign write access to the referent.
// - If another process has read access to a given region of memory, then it
//   cannot affect the correctness of a Rust program. However, it can do things
//   that do not technically violate correctness, but are still undesirable. The
//   canonical example is reading memory which contains sensitive information.
//   Even if the programmer were to construct a mutable reference to such memory
//   and write a value to it which the programmer intended to be shared with the
//   other process, the compiler might use the fact that it had exclusive access
//   to the memory (so says the mutable reference...) to store any arbitrary
//   value in the memory temporarily. So long as it's not observable from the
//   Rust program, it preserves the semantics of the program. Of course, it *is*
//   observable from the other process, and there are no guarantees on what the
//   compiler might decide to store there, including any value currently in your
//   memory space, including particularly sensitive values. As a result, while
//   read-only access doesn't violate the correctness of a Rust program, it's
//   still worth handling carefully.
//
// In order to address both of these issues, our approach is simple: never treat
// the memory as anything other than a raw pointer. Do not construct any
// references, mutable or immutable, even temporarily, and even if they are
// never used. This basically boils down to only accessing the memory using the
// various functions from core::ptr which operate directly on raw pointers.

// NOTE(joshlf):
// - Since you must assume that the other process might be writing to the
//   memory, there's no technical requirement to have exclusive access. E.g., we
//   could safely implement Clone, have write and write_at take immutable
//   references, etc. (see here for a discussion of the soundness of using
//   copy_nonoverlapping simultaneously in multiple threads:
//   https://users.rust-lang.org/t/copy-nonoverlapping-concurrently/18353).
//   However, this would be confusing because it would depart from the Rust
//   idiom. Instead, we provide SharedBuffer, which has ownership semantics
//   analogous to Vec, and SharedBufferSlice and SharedBufferSliceMut, which
//   have reference semantics analogous to immutable and mutable slice
//   references. Similarly, write, write_at, and release_writes take mutable
//   references. That said, the types still provide slicing methods (and could
//   soundly provide Clone); there's no point not to.
// - Since all access to these buffers must go through the methods of
//   SharedBuffer, correct code may not construct a reference to this memory.
//   Thus, the references to dst and src passed to read, read_at, write, and
//   write_at cannot overlap with the buffer itself, and so it's safe to use
//   ptr::copy_nonoverlapping.
// - Note on volatility and observability: The memory in a SharedBuffer is
//   either allocated by this process and then sent to another process, or
//   allocated by another process and sent to this process. However, on Fuchsia,
//   what's actually shared is a VMO, which is then mapped into the address
//   space. While LLVM is almost certainly guaranteed to treat the mapping
//   syscall as opaque, and thus to be unable to prove to itself that the
//   returned memory is not shared, it is worth hedging against that reasoning
//   being wrong. If LLVM were, for some reason, to decide that mapping a VMO
//   resulted in uniquely owned memory, it would be able to reason that writes
//   to that memory could never be observed by other threads, and so if the
//   writes were not observed by the _current_ thread, they could be elided
//   altogether since they could have no effect. In order to hedge against this
//   possibility, and to ensure that LLVM definitely cannot take this line of
//   reasoning, we volatile write the pointer when we first construct the
//   SharedBuffer. LLVM must conclude that it doesn't know who else is using
//   the memory once a pointer to it has been written in a volatile manner, and
//   so must assume that all future writes must be observable. This single
//   volatile write, which happens at most once per message (although more
//   likely once when the connection is first established), has minimal
//   performance overhead.

// TODO(joshlf):
// - Create a variant for read-only memory
// - Create a variant for write-only memory?

#![no_std]

use core::marker::PhantomData;
use core::ops::{Bound, Range, RangeBounds};
use core::ptr;
use core::sync::atomic::{fence, Ordering};

// A buffer with no ownership or reference semantics. It is the caller's
// responsibility to wrap this type in a type which provides ownership or
// reference semantics, and to only call methods when appropriate.
#[derive(Debug)]
struct SharedBufferInner {
    // invariant: '(buf as usize) + len' doesn't overflow usize
    buf: *mut u8,
    len: usize,
}

impl SharedBufferInner {
    fn read_at(&self, offset: usize, dst: &mut [u8]) -> usize {
        if let Some(to_copy) = overlap(offset, dst.len(), self.len) {
            // Since overlap returned Some, we're guaranteed that 'offset +
            // to_copy <= self.len'. That in turn means that, so long as the
            // invariant holds that '(self.buf as usize) + self.len' doesn't
            // overflow usize, then this call to offset_from won't overflow, and
            // neither will the call to copy_nonoverlapping.
            let base = offset_from(self.buf, offset);
            unsafe { ptr::copy_nonoverlapping(base, dst.as_mut_ptr(), to_copy) };
            to_copy
        } else {
            panic!("byte offset {} out of range for SharedBuffer of length {}", offset, self.len);
        }
    }

    fn write_at(&self, offset: usize, src: &[u8]) -> usize {
        if let Some(to_copy) = overlap(offset, src.len(), self.len) {
            // Since overlap returned Some, we're guaranteed that 'offset +
            // to_copy <= self.len'. That in turn means that, so long as the
            // invariant holds that '(self.buf as usize) + self.len' doesn't
            // overflow usize, then this call to offset_from won't overflow, and
            // neither will the call to copy_nonoverlapping.
            let base = offset_from(self.buf, offset);
            unsafe { ptr::copy_nonoverlapping(src.as_ptr(), base, to_copy) };
            to_copy
        } else {
            panic!("byte offset {} out of range for SharedBuffer of length {}", offset, self.len);
        }
    }

    fn slice<R: RangeBounds<usize>>(&self, range: R) -> SharedBufferInner {
        let range = canonicalize_range_infallible(self.len, range);
        SharedBufferInner { buf: offset_from(self.buf, range.start), len: range.end - range.start }
    }

    fn split_at(&self, idx: usize) -> (SharedBufferInner, SharedBufferInner) {
        assert!(idx <= self.len, "split index out of bounds");
        let a = SharedBufferInner { buf: self.buf, len: idx };
        let b = SharedBufferInner { buf: offset_from(self.buf, idx), len: self.len - idx };
        (a, b)
    }
}

// Verifies that 'offset' is in range of range_len (that 'offset <= range_len'),
// and returns the amount of overlap between a copy of length 'copy_len'
// starting at 'offset' and a buffer of length 'range_len'. The number it
// returns is guaranteed to be less than or equal to 'range_len'.
//
// overlap is guaranteed to be correct for any three usize values.
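//
// For example, overlap(6, 4, 8) == Some(2): a 4-byte copy starting at offset
// 6 of an 8-byte buffer only has bytes [6, 8) available to it.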
fn overlap(offset: usize, copy_len: usize, range_len: usize) -> Option<usize> {
    if offset > range_len {
        None
    } else if offset.checked_add(copy_len).map(|sum| sum <= range_len).unwrap_or(false) {
        // if 'offset + copy_len' overflows usize, then 'offset + copy_len >
        // range_len', so we unwrap_or(false)
        Some(copy_len)
    } else {
        Some(range_len - offset)
    }
}

// Like the offset method on primitive pointers, but for unsigned offsets. Both
// the 'offset' and 'add' methods on primitive pointers have the limitation that
// the offset cannot overflow an isize or else it will cause UB. The offset_from
// function has no such restriction.
//
// The caller must guarantee that '(ptr as usize) + offset' doesn't overflow
// usize.
fn offset_from(ptr: *mut u8, offset: usize) -> *mut u8 {
    // just in case our logic is wrong, better to catch it at runtime than
    // invoke UB
    (ptr as usize).checked_add(offset).unwrap() as *mut u8
}

// Return the inclusive equivalent of the bound.
fn canonicalize_lower_bound(bound: Bound<&usize>) -> usize {
    match bound {
        Bound::Included(x) => *x,
        Bound::Excluded(x) => *x + 1,
        Bound::Unbounded => 0,
    }
}

// Return the exclusive equivalent of the bound, verifying that it is in range
// of len.
fn canonicalize_upper_bound(len: usize, bound: Bound<&usize>) -> Option<usize> {
    let bound = match bound {
        Bound::Included(x) => *x + 1,
        Bound::Excluded(x) => *x,
        Bound::Unbounded => len,
    };
    if bound > len {
        return None;
    }
    Some(bound)
}

// Return the inclusive-exclusive equivalent of the bound, verifying that it is
// in range of len, and panicking if it is not or if the range is nonsensical.
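//
// For example, canonicalize_range_infallible(10, 2..=5) == 2..6, and
// canonicalize_range_infallible(10, ..) == 0..10.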
fn canonicalize_range_infallible<R: RangeBounds<usize>>(len: usize, range: R) -> Range<usize> {
    let lower = canonicalize_lower_bound(range.start_bound());
    let upper =
        canonicalize_upper_bound(len, range.end_bound()).expect("slice range out of bounds");
    assert!(lower <= upper, "invalid range");
    lower..upper
}

/// A shared region of memory.
///
/// A `SharedBuffer` is a view into a region of memory to which another process
/// has access. It provides methods to access this memory in a way that
/// preserves memory safety. From the perspective of the current process, it
/// owns its memory (analogous to a `Vec`).
///
/// Since the buffer is shared by an untrusted process, it is never valid to
/// assume that a given region of the buffer will not change in between method
/// calls. Even if no thread in this process wrote anything to the buffer, the
/// other process might have.
///
/// # Unmapping
///
/// `SharedBuffer`s do nothing when dropped. In order to avoid leaking memory,
/// use the `consume` method to consume the `SharedBuffer` and get back the
/// underlying pointer and length, and unmap the memory manually.
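///
/// A sketch of that cleanup path, assuming a hypothetical `unmap` function
/// provided by the surrounding program:
///
/// ```ignore
/// let (ptr, len) = buf.consume();
/// // Safety: no methods can be called on the buffer after `consume`.
/// unsafe { unmap(ptr, len) };
/// ```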
#[derive(Debug)]
pub struct SharedBuffer {
    inner: SharedBufferInner,
}

impl SharedBuffer {
    /// Create a new `SharedBuffer` from a raw buffer.
    ///
    /// `new` creates a new `SharedBuffer` from the provided buffer and length,
    /// taking ownership of the memory.
    ///
    /// # Safety
    ///
    /// Memory in a shared buffer must never be accessed except through the
    /// methods of `SharedBuffer`. It must not be treated as normal memory, and
    /// pointers to it must not be passed to unsafe code which is designed to
    /// operate on normal memory. It must be guaranteed that, for the lifetime
    /// of the `SharedBuffer`, the memory region is mapped, readable, and
    /// writable.
    ///
    /// If any of these guarantees are violated, it may cause undefined
    /// behavior.
    #[inline]
    pub unsafe fn new(buf: *mut u8, len: usize) -> SharedBuffer {
        // Write the pointer and the length using a volatile write so that LLVM
        // must assume that the memory has escaped, and that all future writes
        // to it are observable. See the NOTE above for more details.
        let mut scratch = (ptr::null_mut(), 0);
        ptr::write_volatile(&mut scratch, (buf, len));

        // Acquire any writes to the buffer that happened in a different thread
        // or process already so they are visible without having to call the
        // acquire_writes method.
        fence(Ordering::Acquire);
        SharedBuffer { inner: SharedBufferInner { buf, len } }
    }

    /// Read bytes from the buffer.
    ///
    /// Read up to `dst.len()` bytes from the buffer, returning how many bytes
    /// were read. The only thing that can cause fewer bytes to be read than
    /// requested is if `dst` is larger than the buffer itself.
    ///
    /// A call to `read` is only guaranteed to happen after an operation in
    /// another thread or process if the mechanism used to signal the other
    /// process has well-defined memory ordering semantics. Otherwise, the
    /// `acquire_writes` method must be called before `read` and after receiving
    /// a signal from the other process in order to provide such ordering
    /// guarantees. In practice, this means that `acquire_writes` should be the
    /// first read operation that happens after receiving a signal from another
    /// process that the memory may be read. See the `acquire_writes`
    /// documentation for more details.
    #[inline]
    pub fn read(&self, dst: &mut [u8]) -> usize {
        self.inner.read_at(0, dst)
    }

    /// Read bytes from the buffer at an offset.
    ///
    /// Read up to `dst.len()` bytes starting at `offset` into the buffer,
    /// returning how many bytes were read. The only thing that can cause fewer
    /// bytes to be read than requested is if there are fewer than `dst.len()`
    /// bytes available starting at `offset` within the buffer.
    ///
    /// A call to `read_at` is only guaranteed to happen after an operation in
    /// another thread or process if the mechanism used to signal the other
    /// process has well-defined memory ordering semantics. Otherwise, the
    /// `acquire_writes` method must be called before `read_at` and after
    /// receiving a signal from the other process in order to provide such
    /// ordering guarantees. In practice, this means that `acquire_writes`
    /// should be the first read operation that happens after receiving a signal
    /// from another process that the memory may be read. See the
    /// `acquire_writes` documentation for more details.
    ///
    /// # Panics
    ///
    /// `read_at` panics if `offset` is greater than the length of the buffer.
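    ///
    /// # Examples
    ///
    /// A minimal sketch, using a local array as stand-in backing memory:
    ///
    /// ```no_run
    /// # use shared_buffer::SharedBuffer;
    /// let mut backing = [0u8; 8];
    /// let buf = unsafe { SharedBuffer::new(backing.as_mut_ptr(), backing.len()) };
    /// let mut dst = [0u8; 8];
    /// // Only 4 bytes are available starting at offset 4.
    /// assert_eq!(buf.read_at(4, &mut dst), 4);
    /// ```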
    #[inline]
    pub fn read_at(&self, offset: usize, dst: &mut [u8]) -> usize {
        self.inner.read_at(offset, dst)
    }

    /// Write bytes to the buffer.
    ///
    /// Write up to `src.len()` bytes into the buffer, returning how many bytes
    /// were written. The only thing that can cause fewer bytes to be written
    /// than requested is if `src` is larger than the buffer itself.
    ///
    /// A call to `write` is only guaranteed to happen before an operation in
    /// another thread or process if the mechanism used to signal the other
    /// process has well-defined memory ordering semantics. Otherwise, the
    /// `release_writes` method must be called after `write` and before
    /// signalling the other process in order to provide such ordering
    /// guarantees. In practice, this means that `release_writes` should be the
    /// last write operation that happens before signalling another process that
    /// the memory may be read. See the `release_writes` documentation for more
    /// details.
    #[inline]
    pub fn write(&self, src: &[u8]) -> usize {
        self.inner.write_at(0, src)
    }

    /// Write bytes to the buffer at an offset.
    ///
    /// Write up to `src.len()` bytes starting at `offset` into the buffer,
    /// returning how many bytes were written. The only thing that can cause
    /// fewer bytes to be written than requested is if there are fewer than
    /// `src.len()` bytes available starting at `offset` within the buffer.
    ///
    /// A call to `write_at` is only guaranteed to happen before an operation in
    /// another thread or process if the mechanism used to signal the other
    /// process has well-defined memory ordering semantics. Otherwise, the
    /// `release_writes` method must be called after `write_at` and before
    /// signalling the other process in order to provide such ordering
    /// guarantees. In practice, this means that `release_writes` should be the
    /// last write operation that happens before signalling another process that
    /// the memory may be read. See the `release_writes` documentation for more
    /// details.
    ///
    /// # Panics
    ///
    /// `write_at` panics if `offset` is greater than the length of the buffer.
    #[inline]
    pub fn write_at(&self, offset: usize, src: &[u8]) -> usize {
        self.inner.write_at(offset, src)
    }

    /// Acquire all writes performed by the other process.
    ///
    /// On some systems (such as Fuchsia, currently), the communication
    /// mechanism used for signalling a process that memory is readable does not
    /// have well-defined synchronization semantics. On those systems, this
    /// method MUST be called after receiving such a signal, or else writes
    /// performed before that signal are not guaranteed to be observed by this
    /// process.
    ///
    /// `acquire_writes` acquires any writes performed on this buffer or any
    /// slice within the buffer.
    ///
    /// # Note on Fuchsia
    ///
    /// Zircon, the Fuchsia kernel, will likely eventually have well-defined
    /// semantics around the synchronization behavior of various syscalls. Once
    /// that happens, calling this method in Fuchsia programs may become
    /// optional. This work is tracked in [https://fxbug.dev/42107145].
    ///
    /// [https://fxbug.dev/42107145]: #
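    ///
    /// # Examples
    ///
    /// A sketch of the receive path, assuming `wait_for_signal` is a
    /// hypothetical signalling mechanism without well-defined memory ordering:
    ///
    /// ```ignore
    /// wait_for_signal();
    /// // Make the other process's writes visible before reading them.
    /// buf.acquire_writes();
    /// let mut bytes = [0u8; 64];
    /// let n = buf.read(&mut bytes);
    /// ```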
    // TODO(joshlf): Replace with link once issues are public.
    #[inline]
    pub fn acquire_writes(&self) {
        fence(Ordering::Acquire);
    }

    /// Release all writes performed so far.
    ///
    /// On some systems (such as Fuchsia, currently), the communication
    /// mechanism used for signalling the other process that memory is readable
    /// does not have well-defined synchronization semantics. On those systems,
    /// this method MUST be called before such signalling, or else writes
    /// performed before that signal are not guaranteed to be observed by the
    /// other process.
    ///
    /// `release_writes` releases any writes performed on this buffer or any
    /// slice within the buffer.
    ///
    /// # Note on Fuchsia
    ///
    /// Zircon, the Fuchsia kernel, will likely eventually have well-defined
    /// semantics around the synchronization behavior of various syscalls. Once
    /// that happens, calling this method in Fuchsia programs may become
    /// optional. This work is tracked in [https://fxbug.dev/42107145].
    ///
    /// [https://fxbug.dev/42107145]: #
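    ///
    /// # Examples
    ///
    /// A sketch of the send path, assuming `signal_other_process` is a
    /// hypothetical signalling mechanism without well-defined memory ordering:
    ///
    /// ```ignore
    /// buf.write(&[1, 2, 3, 4]);
    /// // Make the writes visible to the other process before signalling it.
    /// buf.release_writes();
    /// signal_other_process();
    /// ```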
    // TODO(joshlf): Replace with link once issues are public.
    #[inline]
    pub fn release_writes(&mut self) {
        fence(Ordering::Release);
    }

    /// The number of bytes in this `SharedBuffer`.
    #[inline]
    pub fn len(&self) -> usize {
        self.inner.len
    }

    /// Create a slice of the original `SharedBuffer`.
    ///
    /// Just like the slicing operation on array and slice references, `slice`
    /// constructs a `SharedBufferSlice` which points to the same memory as the
    /// original `SharedBuffer`, but starting at index `from` (inclusive) and
    /// ending at index `to` (exclusive).
    ///
    /// # Panics
    ///
    /// `slice` panics if `range` is out of bounds of `self` or if `range` is
    /// nonsensical (its lower bound is larger than its upper bound).
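    ///
    /// # Examples
    ///
    /// A minimal sketch, using a local array as stand-in backing memory:
    ///
    /// ```no_run
    /// # use shared_buffer::SharedBuffer;
    /// let mut backing = [0u8; 8];
    /// let buf = unsafe { SharedBuffer::new(backing.as_mut_ptr(), backing.len()) };
    /// let middle = buf.slice(2..6);
    /// assert_eq!(middle.len(), 4);
    /// ```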
    #[inline]
    pub fn slice<'a, R: RangeBounds<usize>>(&'a self, range: R) -> SharedBufferSlice<'a> {
        SharedBufferSlice { inner: self.inner.slice(range), _marker: PhantomData }
    }

    /// Create a mutable slice of the original `SharedBuffer`.
    ///
    /// Just like the mutable slicing operation on array and slice references,
    /// `slice_mut` constructs a `SharedBufferSliceMut` which points to the same
    /// memory as the original `SharedBuffer`, but starting at index `from`
    /// (inclusive) and ending at index `to` (exclusive).
    ///
    /// # Panics
    ///
    /// `slice_mut` panics if `range` is out of bounds of `self` or if `range`
    /// is nonsensical (its lower bound is larger than its upper bound).
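    ///
    /// # Examples
    ///
    /// A minimal sketch, using a local array as stand-in backing memory:
    ///
    /// ```no_run
    /// # use shared_buffer::SharedBuffer;
    /// let mut backing = [0u8; 8];
    /// let mut buf = unsafe { SharedBuffer::new(backing.as_mut_ptr(), backing.len()) };
    /// let half = buf.slice_mut(..4);
    /// // Writes are truncated to the slice's 4-byte length.
    /// assert_eq!(half.write(&[0xff; 8]), 4);
    /// ```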
    #[inline]
    pub fn slice_mut<'a, R: RangeBounds<usize>>(
        &'a mut self,
        range: R,
    ) -> SharedBufferSliceMut<'a> {
        SharedBufferSliceMut { inner: self.inner.slice(range), _marker: PhantomData }
    }

    /// Create two non-overlapping slices of the original `SharedBuffer`.
    ///
    /// Just like the `split_at` method on array and slice references,
    /// `split_at` constructs one `SharedBufferSlice` which represents bytes
    /// `[0, idx)`, and one which represents bytes `[idx, len)`, where `len` is
    /// the length of the buffer.
    ///
    /// # Panics
    ///
    /// `split_at` panics if `idx > self.len()`.
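    ///
    /// # Examples
    ///
    /// A minimal sketch, using a local array as stand-in backing memory:
    ///
    /// ```no_run
    /// # use shared_buffer::SharedBuffer;
    /// let mut backing = [0u8; 8];
    /// let buf = unsafe { SharedBuffer::new(backing.as_mut_ptr(), backing.len()) };
    /// let (left, right) = buf.split_at(4);
    /// assert_eq!(left.len(), 4);
    /// assert_eq!(right.len(), 4);
    /// ```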
    #[inline]
    pub fn split_at<'a>(&'a self, idx: usize) -> (SharedBufferSlice<'a>, SharedBufferSlice<'a>) {
        let (a, b) = self.inner.split_at(idx);
        let a = SharedBufferSlice { inner: a, _marker: PhantomData };
        let b = SharedBufferSlice { inner: b, _marker: PhantomData };
        (a, b)
    }

    /// Create two non-overlapping mutable slices of the original `SharedBuffer`.
    ///
    /// Just like the `split_at_mut` method on array and slice references,
    /// `split_at_mut` constructs one `SharedBufferSliceMut` which represents
    /// bytes `[0, idx)`, and one which represents bytes `[idx, len)`, where
    /// `len` is the length of the buffer.
    ///
    /// # Panics
    ///
    /// `split_at_mut` panics if `idx > self.len()`.
    #[inline]
    pub fn split_at_mut<'a>(
        &'a mut self,
        idx: usize,
    ) -> (SharedBufferSliceMut<'a>, SharedBufferSliceMut<'a>) {
        let (a, b) = self.inner.split_at(idx);
        let a = SharedBufferSliceMut { inner: a, _marker: PhantomData };
        let b = SharedBufferSliceMut { inner: b, _marker: PhantomData };
        (a, b)
    }

    /// Get the buffer pointer and length so that the memory can be freed.
    ///
    /// This method is an alternative to calling `consume` if relinquishing
    /// ownership of the object is infeasible (for example, when the object is a
    /// struct field and thus can't be moved out of the struct). Since it allows
    /// the object to continue existing, it must be used with care (see the
    /// "Safety" section below).
    ///
    /// # Safety
    ///
    /// The returned pointer must *only* be used to free the memory. Since the
    /// memory is shared by another process, using it as a normal raw pointer to
    /// normal memory owned by this process is unsound.
    ///
    /// If the pointer is used for this purpose, then the caller must ensure
    /// that no methods will be called on the object after the call to
    /// `as_ptr_len`. The only scenario in which the object may be used again is
    /// if the caller does nothing at all with the return value of this method
    /// (although that would be kind of pointless...).
    pub fn as_ptr_len(&mut self) -> (*mut u8, usize) {
        (self.inner.buf, self.inner.len)
    }

    /// Consume the `SharedBuffer`, returning the underlying buffer pointer and
    /// length.
    ///
    /// Since `SharedBuffer`s do nothing on drop, the only way to ensure that
    /// resources are not leaked is to `consume` a `SharedBuffer` and then unmap
    /// the memory manually.
    #[inline]
    pub fn consume(self) -> (*mut u8, usize) {
        (self.inner.buf, self.inner.len)
    }
}

impl Drop for SharedBuffer {
    fn drop(&mut self) {
        // Release any writes performed after the last call to
        // self.release_writes().
        fence(Ordering::Release);
    }
}

/// An immutable slice into a `SharedBuffer`.
///
/// A `SharedBufferSlice` is created with `SharedBuffer::slice`,
/// `SharedBufferSlice::slice`, or `SharedBufferSliceMut::slice`.
#[derive(Debug)]
pub struct SharedBufferSlice<'a> {
    inner: SharedBufferInner,
    _marker: PhantomData<&'a ()>,
}

impl<'a> SharedBufferSlice<'a> {
    /// Read bytes from the buffer.
    ///
    /// Read up to `dst.len()` bytes from the buffer, returning how many bytes
    /// were read. The only thing that can cause fewer bytes to be read than
    /// requested is if `dst` is larger than the buffer itself.
    ///
    /// A call to `read` is only guaranteed to happen after an operation in
    /// another thread or process if the mechanism used to signal the other
    /// process has well-defined memory ordering semantics. Otherwise, the
    /// `acquire_writes` method must be called before `read` and after receiving
    /// a signal from the other process in order to provide such ordering
    /// guarantees. In practice, this means that `acquire_writes` should be the
    /// first read operation that happens after receiving a signal from another
    /// process that the memory may be read. See the `acquire_writes`
    /// documentation for more details.
    #[inline]
    pub fn read(&self, dst: &mut [u8]) -> usize {
        self.inner.read_at(0, dst)
    }

    /// Read bytes from the buffer at an offset.
    ///
    /// Read up to `dst.len()` bytes starting at `offset` into the buffer,
    /// returning how many bytes were read. The only thing that can cause fewer
    /// bytes to be read than requested is if there are fewer than `dst.len()`
    /// bytes available starting at `offset` within the buffer.
    ///
    /// A call to `read_at` is only guaranteed to happen after an operation in
    /// another thread or process if the mechanism used to signal the other
    /// process has well-defined memory ordering semantics. Otherwise, the
    /// `acquire_writes` method must be called before `read_at` and after
    /// receiving a signal from the other process in order to provide such
    /// ordering guarantees. In practice, this means that `acquire_writes`
    /// should be the first read operation that happens after receiving a signal
    /// from another process that the memory may be read. See the
    /// `acquire_writes` documentation for more details.
    ///
    /// # Panics
    ///
    /// `read_at` panics if `offset` is greater than the length of the buffer.
    #[inline]
    pub fn read_at(&self, offset: usize, dst: &mut [u8]) -> usize {
        self.inner.read_at(offset, dst)
    }

    /// Acquire all writes performed by the other process.
    ///
    /// On some systems (such as Fuchsia, currently), the communication
    /// mechanism used for signalling a process that memory is readable does not
    /// have well-defined synchronization semantics. On those systems, this
    /// method MUST be called after receiving such a signal, or else writes
    /// performed before that signal are not guaranteed to be observed by this
    /// process.
    ///
    /// `acquire_writes` acquires any writes performed on this buffer or any
    /// slice within the buffer.
    ///
    /// # Note on Fuchsia
    ///
    /// Zircon, the Fuchsia kernel, will likely eventually have well-defined
    /// semantics around the synchronization behavior of various syscalls. Once
    /// that happens, calling this method in Fuchsia programs may become
    /// optional. This work is tracked in [https://fxbug.dev/42107145].
    ///
    /// [https://fxbug.dev/42107145]: #
    // TODO(joshlf): Replace with link once issues are public.
    #[inline]
    pub fn acquire_writes(&self) {
        fence(Ordering::Acquire);
    }

    /// Create a sub-slice of this `SharedBufferSlice`.
    ///
    /// Just like the slicing operation on array and slice references, `slice`
    /// constructs a new `SharedBufferSlice` which points to the same memory as
    /// the original, but starting at index `from` (inclusive) and ending at
    /// index `to` (exclusive).
    ///
    /// # Panics
    ///
    /// `slice` panics if `range` is out of bounds of `self` or if `range` is
    /// nonsensical (its lower bound is larger than its upper bound).
    #[inline]
    pub fn slice<R: RangeBounds<usize>>(&self, range: R) -> SharedBufferSlice<'a> {
        SharedBufferSlice { inner: self.inner.slice(range), _marker: PhantomData }
    }

    /// Split this `SharedBufferSlice` in two.
    ///
    /// Just like the `split_at` method on array and slice references,
    /// `split_at` constructs one `SharedBufferSlice` which represents bytes
    /// `[0, idx)`, and one which represents bytes `[idx, len)`, where `len` is
    /// the length of the buffer slice.
    ///
    /// # Panics
    ///
    /// `split_at` panics if `idx > self.len()`.
    #[inline]
    pub fn split_at(&self, idx: usize) -> (SharedBufferSlice<'a>, SharedBufferSlice<'a>) {
        let (a, b) = self.inner.split_at(idx);
        let a = SharedBufferSlice { inner: a, _marker: PhantomData };
        let b = SharedBufferSlice { inner: b, _marker: PhantomData };
        (a, b)
    }

    /// The number of bytes in this `SharedBufferSlice`.
    #[inline]
    pub fn len(&self) -> usize {
        self.inner.len
    }
}

/// A mutable slice into a `SharedBuffer`.
///
/// A `SharedBufferSliceMut` is created with `SharedBuffer::slice_mut` or
/// `SharedBufferSliceMut::slice_mut`.
#[derive(Debug)]
pub struct SharedBufferSliceMut<'a> {
    inner: SharedBufferInner,
    _marker: PhantomData<&'a ()>,
}

impl<'a> SharedBufferSliceMut<'a> {
    /// Read bytes from the buffer.
    ///
    /// Read up to `dst.len()` bytes from the buffer, returning how many bytes
    /// were read. The only thing that can cause fewer bytes to be read than
    /// requested is if `dst` is larger than the buffer itself.
    ///
    /// A call to `read` is only guaranteed to happen after an operation in
    /// another thread or process if the mechanism used to signal the other
    /// process has well-defined memory ordering semantics. Otherwise, the
    /// `acquire_writes` method must be called before `read` and after receiving
    /// a signal from the other process in order to provide such ordering
    /// guarantees. In practice, this means that `acquire_writes` should be the
    /// first read operation that happens after receiving a signal from another
    /// process that the memory may be read. See the `acquire_writes`
    /// documentation for more details.
    #[inline]
    pub fn read(&self, dst: &mut [u8]) -> usize {
        self.inner.read_at(0, dst)
    }

    /// Read bytes from the buffer at an offset.
    ///
    /// Read up to `dst.len()` bytes starting at `offset` into the buffer,
    /// returning how many bytes were read. The only thing that can cause fewer
    /// bytes to be read than requested is if there are fewer than `dst.len()`
    /// bytes available starting at `offset` within the buffer.
    ///
    /// A call to `read_at` is only guaranteed to happen after an operation in
    /// another thread or process if the mechanism used to signal the other
    /// process has well-defined memory ordering semantics. Otherwise, the
    /// `acquire_writes` method must be called before `read_at` and after
    /// receiving a signal from the other process in order to provide such
    /// ordering guarantees. In practice, this means that `acquire_writes`
    /// should be the first read operation that happens after receiving a signal
    /// from another process that the memory may be read. See the
    /// `acquire_writes` documentation for more details.
    ///
    /// # Panics
    ///
    /// `read_at` panics if `offset` is greater than the length of the buffer.
    #[inline]
    pub fn read_at(&self, offset: usize, dst: &mut [u8]) -> usize {
        self.inner.read_at(offset, dst)
    }

    /// Write bytes to the buffer.
    ///
    /// Write up to `src.len()` bytes into the buffer, returning how many bytes
    /// were written. The only thing that can cause fewer bytes to be written
    /// than requested is if `src` is larger than the buffer itself.
    ///
    /// A call to `write` is only guaranteed to happen before an operation in
    /// another thread or process if the mechanism used to signal the other
    /// process has well-defined memory ordering semantics. Otherwise, the
    /// `release_writes` method must be called after `write` and before
    /// signalling the other process in order to provide such ordering
    /// guarantees. In practice, this means that `release_writes` should be the
    /// last write operation that happens before signalling another process that
    /// the memory may be read. See the `release_writes` documentation for more
    /// details.
    #[inline]
    pub fn write(&self, src: &[u8]) -> usize {
        self.inner.write_at(0, src)
    }

    /// Write bytes to the buffer at an offset.
    ///
    /// Write up to `src.len()` bytes starting at `offset` into the buffer,
    /// returning how many bytes were written. The only thing that can cause
    /// fewer bytes to be written than requested is if there are fewer than
    /// `src.len()` bytes available starting at `offset` within the buffer.
    ///
    /// A call to `write_at` is only guaranteed to happen before an operation in
    /// another thread or process if the mechanism used to signal the other
    /// process has well-defined memory ordering semantics. Otherwise, the
    /// `release_writes` method must be called after `write_at` and before
    /// signalling the other process in order to provide such ordering
    /// guarantees. In practice, this means that `release_writes` should be the
    /// last write operation that happens before signalling another process that
    /// the memory may be read. See the `release_writes` documentation for more
    /// details.
    ///
    /// # Panics
    ///
    /// `write_at` panics if `offset` is greater than the length of the buffer.
    #[inline]
    pub fn write_at(&self, offset: usize, src: &[u8]) -> usize {
        self.inner.write_at(offset, src)
    }

    /// Acquire all writes performed by the other process.
    ///
    /// On some systems (such as Fuchsia, currently), the communication
    /// mechanism used for signalling a process that memory is readable does not
    /// have well-defined synchronization semantics. On those systems, this
    /// method MUST be called after receiving such a signal, or else writes
    /// performed before that signal are not guaranteed to be observed by this
    /// process.
    ///
    /// `acquire_writes` acquires any writes performed on this buffer or any
    /// slice within the buffer.
    ///
    /// # Note on Fuchsia
    ///
    /// Zircon, the Fuchsia kernel, will likely eventually have well-defined
    /// semantics around the synchronization behavior of various syscalls. Once
    /// that happens, calling this method in Fuchsia programs may become
    /// optional. This work is tracked in [https://fxbug.dev/42107145].
    ///
    /// [https://fxbug.dev/42107145]: #
    // TODO(joshlf): Replace with link once issues are public.
    #[inline]
    pub fn acquire_writes(&self) {
        fence(Ordering::Acquire);
    }

    /// Release all writes performed so far.
    ///
    /// On some systems (such as Fuchsia, currently), the communication
    /// mechanism used for signalling the other process that memory is readable
    /// does not have well-defined synchronization semantics. On those systems,
    /// this method MUST be called before such signalling, or else writes
    /// performed before that signal are not guaranteed to be observed by the
    /// other process.
    ///
    /// `release_writes` releases any writes performed on this slice or any
    /// sub-slice of this slice.
    ///
    /// # Note on Fuchsia
    ///
    /// Zircon, the Fuchsia kernel, will likely eventually have well-defined
    /// semantics around the synchronization behavior of various syscalls. Once
    /// that happens, calling this method in Fuchsia programs may become
    /// optional. This work is tracked in [https://fxbug.dev/42107145].
    ///
    /// [https://fxbug.dev/42107145]: #
    // TODO(joshlf): Replace with link once issues are public.
    #[inline]
    pub fn release_writes(&mut self) {
        fence(Ordering::Release);
    }

    /// Create a sub-slice of this `SharedBufferSliceMut`.
    ///
    /// Just like the slicing operation on array and slice references, `slice`
    /// constructs a new `SharedBufferSlice` which points to the same memory as
    /// the original, but starting at index `from` (inclusive) and ending at
    /// index `to` (exclusive).
    ///
    /// # Panics
    ///
    /// `slice` panics if `range` is out of bounds of `self` or if `range` is
    /// nonsensical (its lower bound is larger than its upper bound).
    #[inline]
    pub fn slice<R: RangeBounds<usize>>(&self, range: R) -> SharedBufferSlice<'a> {
        SharedBufferSlice { inner: self.inner.slice(range), _marker: PhantomData }
    }

    /// Create a mutable sub-slice of this `SharedBufferSliceMut`.
    ///
    /// Just like the mutable slicing operation on array and slice references,
    /// `slice_mut` constructs a new `SharedBufferSliceMut` which points to the
    /// same memory as the original, but starting at index `from` (inclusive)
    /// and ending at index `to` (exclusive).
    ///
    /// # Panics
    ///
    /// `slice_mut` panics if `range` is out of bounds of `self` or if `range`
    /// is nonsensical (its lower bound is larger than its upper bound).
    #[inline]
    pub fn slice_mut<R: RangeBounds<usize>>(&mut self, range: R) -> SharedBufferSliceMut<'a> {
        SharedBufferSliceMut { inner: self.inner.slice(range), _marker: PhantomData }
    }

    /// Split this `SharedBufferSliceMut` into two immutable slices.
    ///
    /// Just like the `split_at` method on array and slice references,
    /// `split_at` constructs one `SharedBufferSlice` which represents bytes
    /// `[0, idx)`, and one which represents bytes `[idx, len)`, where `len` is
    /// the length of the buffer slice.
    ///
    /// # Panics
    ///
    /// `split_at` panics if `idx > self.len()`.
    #[inline]
    pub fn split_at(&self, idx: usize) -> (SharedBufferSlice<'a>, SharedBufferSlice<'a>) {
        let (a, b) = self.inner.split_at(idx);
        let a = SharedBufferSlice { inner: a, _marker: PhantomData };
        let b = SharedBufferSlice { inner: b, _marker: PhantomData };
        (a, b)
    }

    /// Split this `SharedBufferSliceMut` in two.
    ///
    /// Just like the `split_at_mut` method on array and slice references,
    /// `split_at_mut` constructs one `SharedBufferSliceMut` which represents
    /// bytes `[0, idx)`, and one which represents bytes `[idx, len)`, where
    /// `len` is the length of the buffer slice.
    ///
    /// # Panics
    ///
    /// `split_at_mut` panics if `idx > self.len()`.
    #[inline]
    pub fn split_at_mut(
        &mut self,
        idx: usize,
    ) -> (SharedBufferSliceMut<'a>, SharedBufferSliceMut<'a>) {
        let (a, b) = self.inner.split_at(idx);
        let a = SharedBufferSliceMut { inner: a, _marker: PhantomData };
        let b = SharedBufferSliceMut { inner: b, _marker: PhantomData };
        (a, b)
    }

    /// The number of bytes in this `SharedBufferSliceMut`.
    #[inline]
    pub fn len(&self) -> usize {
        self.inner.len
    }
}

// Send and Sync implementations. Send and Sync are definitely safe since
// SharedBufferXXX are all written under the assumption that a remote process is
// concurrently modifying the memory. However, we aim to provide a Rust-like API
// with lifetimes and an immutable/mutable distinction, so the real question is
// whether Send and Sync make sense by analogy to normal Rust types. Insofar as
// SharedBuffer is analogous to [u8], SharedBufferSlice is analogous to &[u8],
// and SharedBufferSliceMut is analogous to &mut [u8], the answer is yes - all
// of those types implement both Send and Sync.

unsafe impl Send for SharedBuffer {}
unsafe impl Sync for SharedBuffer {}
unsafe impl<'a> Send for SharedBufferSlice<'a> {}
unsafe impl<'a> Sync for SharedBufferSlice<'a> {}
unsafe impl<'a> Send for SharedBufferSliceMut<'a> {}
unsafe impl<'a> Sync for SharedBufferSliceMut<'a> {}

#[cfg(test)]
mod tests {
    use core::{mem, ptr};

    use super::{overlap, SharedBuffer};

    // use the referent as the backing memory for a SharedBuffer
    unsafe fn buf_from_ref<T>(x: &mut T) -> SharedBuffer {
        let size = mem::size_of::<T>();
        SharedBuffer::new(x as *mut _ as *mut u8, size)
    }

    #[test]
    fn test_buf() {
        // initialize some memory and turn it into a SharedBuffer
        const ONE: [u8; 8] = [0, 1, 2, 3, 4, 5, 6, 7];
        let mut buf_memory = ONE;
        let buf = unsafe { buf_from_ref(&mut buf_memory) };

        // we read the same initial contents back
        let mut bytes = [0u8; 8];
        assert_eq!(buf.read(&mut bytes[..]), 8);
        assert_eq!(bytes, ONE);

        // when we write new contents, we read those back
        const TWO: [u8; 8] = [7, 6, 5, 4, 3, 2, 1, 0];
        assert_eq!(buf.write(&TWO[..]), 8);
        assert_eq!(buf.read(&mut bytes[..]), 8);
        assert_eq!(bytes, TWO);

        // even with a bigger buffer, we still only read 8 bytes
        let mut bytes = [0u8; 16];
        assert_eq!(buf.read(&mut bytes[..]), 8);
        // starting at offset 4, there are only 4 bytes left, so we only read 4
        // bytes
        assert_eq!(buf.read_at(4, &mut bytes[..]), 4);
    }

    #[test]
    fn test_slice() {
        // various slices give us the lengths we expect
        let buf = unsafe { SharedBuffer::new(ptr::null_mut(), 10) };
        let tmp = buf.slice(..);
        assert_eq!(tmp.len(), 10);
        let tmp = buf.slice(..10);
        assert_eq!(tmp.len(), 10);
        let tmp = buf.slice(5..10);
        assert_eq!(tmp.len(), 5);
        let tmp = buf.slice(0..0);
        assert_eq!(tmp.len(), 0);
        let tmp = buf.slice(10..10);
        assert_eq!(tmp.len(), 0);

        // initialize some memory and turn it into a SharedBuffer
        const INIT: [u8; 8] = [0, 1, 2, 3, 4, 5, 6, 7];
        let mut buf_memory = INIT;
        let buf = unsafe { buf_from_ref(&mut buf_memory) };

        // we read the same initial contents back
        let mut bytes = [0u8; 8];
        assert_eq!(buf.read_at(0, &mut bytes[..]), 8);
        assert_eq!(bytes, INIT);

        // create a slice to the second half of the SharedBuffer
        let buf2 = buf.slice(4..8);

        // now we read back only the second half of the original SharedBuffer
        bytes = [0; 8];
        assert_eq!(buf2.read(&mut bytes[..]), 4);
        assert_eq!(bytes, [4, 5, 6, 7, 0, 0, 0, 0]);
    }

    #[test]
    fn test_split() {
        // various splits give us the lengths we expect
        let buf = unsafe { SharedBuffer::new(ptr::null_mut(), 10) };
        let (tmp1, tmp2) = buf.split_at(10);
        assert_eq!(tmp1.len(), 10);
        assert_eq!(tmp2.len(), 0);
        let (tmp1, tmp2) = buf.split_at(5);
        assert_eq!(tmp1.len(), 5);
        assert_eq!(tmp2.len(), 5);
        let (tmp1, tmp2) = buf.split_at(0);
        assert_eq!(tmp1.len(), 0);
        assert_eq!(tmp2.len(), 10);

        // initialize some memory and turn it into a SharedBuffer
        const INIT: [u8; 8] = [0, 1, 2, 3, 4, 5, 6, 7];
        let mut buf_memory = INIT;
        let mut buf = unsafe { buf_from_ref(&mut buf_memory) };

        // we read the same initial contents back
        let mut bytes = [0u8; 8];
        assert_eq!(buf.read_at(0, &mut bytes[..]), 8);
        assert_eq!(bytes, INIT);

        // split in two equal-sized halves
        let (buf1, buf2) = buf.split_at_mut(4);

        // now we read back the halves separately
        bytes = [0; 8];
        assert_eq!(buf1.read(&mut bytes[..4]), 4);
        assert_eq!(buf2.read(&mut bytes[4..]), 4);
        assert_eq!(bytes, [0, 1, 2, 3, 4, 5, 6, 7]);

        // use the mutable slices to write to the buffer
        assert_eq!(buf1.write(&[7, 6, 5, 4]), 4);
        assert_eq!(buf2.write(&[3, 2, 1, 0]), 4);

        // split again into equal-sized quarters
        let ((buf1, buf2), (buf3, buf4)) = (buf1.split_at(2), buf2.split_at(2));

        // now we read back the quarters separately
        bytes = [0; 8];
        assert_eq!(buf1.read(&mut bytes[..2]), 2);
        assert_eq!(buf2.read(&mut bytes[2..4]), 2);
        assert_eq!(buf3.read(&mut bytes[4..6]), 2);
        assert_eq!(buf4.read(&mut bytes[6..]), 2);
        assert_eq!(bytes, [7, 6, 5, 4, 3, 2, 1, 0]);
    }

    #[test]
    fn test_overlap() {
        // overlap(offset, copy_len, range_len)

        // first branch: offset > range_len
        assert_eq!(overlap(10, 4, 8), None);

        // middle branch: offset + copy_len <= range_len
        assert_eq!(overlap(0, 4, 8), Some(4));
        assert_eq!(overlap(4, 4, 8), Some(4));

        // middle branch: 'offset + copy_len' overflows usize
        assert_eq!(overlap(4, ::core::usize::MAX, 8), Some(4));

        // last branch: else
        assert_eq!(overlap(6, 4, 8), Some(2));
        assert_eq!(overlap(8, 4, 8), Some(0));
    }

    #[test]
    #[should_panic]
    fn test_panic_read_at() {
        let buf = unsafe { SharedBuffer::new(ptr::null_mut(), 10) };
        // "byte offset 11 out of range for SharedBuffer of length 10"
        buf.read_at(11, &mut [][..]);
    }

    #[test]
    #[should_panic]
    fn test_panic_write_at() {
        let buf = unsafe { SharedBuffer::new(ptr::null_mut(), 10) };
        // "byte offset 11 out of range for SharedBuffer of length 10"
        buf.write_at(11, &[][..]);
    }

    #[test]
    #[should_panic]
    fn test_panic_slice_1() {
        let buf = unsafe { SharedBuffer::new(ptr::null_mut(), 10) };
        // "slice range out of bounds"
        buf.slice(0..11);
    }

    #[test]
    #[should_panic]
    fn test_panic_slice_2() {
        let buf = unsafe { SharedBuffer::new(ptr::null_mut(), 10) };
        // "invalid range"
        #[allow(clippy::reversed_empty_ranges)]
        buf.slice(6..5);
    }
}