class AllocateVmo
Defined at line 104 of file fidling/gen/sdk/fidl/fuchsia.hardware.sysmem/fuchsia.hardware.sysmem/cpp/fidl/fuchsia.hardware.sysmem/cpp/markers.h
Request a new memory allocation of `size` on the heap. For heaps that don't
permit CPU access to the buffer data, this will create a VMO with an
official size, but which never has any physical pages. For such heaps,
the VMO is effectively used as an opaque buffer identifier.
The `buffer_collection_id` + `buffer_index` are retrievable from any
sysmem-provided VMO that's derived from the returned `vmo` using
[`fuchsia.sysmem2/Allocator.GetVmoInfo`].
The [`fuchsia.hardware.sysmem/Heap`] server must ensure that if all
handles to `vmo` + descendants of `vmo` are closed, the
[`fuchsia.hardware.sysmem/Heap`] server will clean up any state
associated with `vmo` even in the absence of any call to `DeleteVmo`.
The [`fuchsia.hardware.sysmem/Heap`] server must create `vmo` as a
ZX_VMO_CHILD_SLICE (or ZX_VMO_CHILD_REFERENCE) of a parent VMO retained
by the [`fuchsia.hardware.sysmem/Heap`] server, with the
[`fuchsia.hardware.sysmem/Heap`] server waiting on ZX_VMO_ZERO_CHILDREN
to trigger cleanup. The [`fuchsia.hardware.sysmem/Heap`] server must not
retain any handles to `vmo` or descendants of `vmo` or VMAR mappings to
`vmo` as that would prevent ZX_VMO_ZERO_CHILDREN from being signaled.
However, the server is free to keep handles to the server's parent VMO,
VMAR mappings to the server's parent VMO, or similar via separate (from
`vmo`) child VMOs, as long as those can be cleaned up synchronously
during `DeleteVmo` (absent process failures).
As long as the caller doesn't crash, the caller guarantees that
[`fuchsia.hardware.sysmem/Heap.DeleteVmo`] will be passed `vmo` later
with ZX_VMO_ZERO_CHILDREN already signaled on `vmo`, and with `vmo`
being the only remaining handle to the VMO (assuming the heap server did
not itself retain any handle to `vmo`).
Upon noticing ZX_VMO_ZERO_CHILDREN on the server's parent VMO, the
server should clean up any resources associated with `vmo`.
The heap server can create any associated resources (including any
hardware-specific resources) during this call, and clean them up upon
noticing ZX_VMO_ZERO_CHILDREN on the parent VMO retained by the server.
The [`fuchsia.hardware.sysmem/Heap`] channel client end closing should
not trigger any per-VMO cleanup. Instead, per-VMO cleanup should be
driven by the ZX_VMO_ZERO_CHILDREN signal. This way, all buffer-associated
resources stay valid until it's no longer possible for any client to be
using or referring to the buffer. The risk of cleaning up early is that
a client may still be using the buffer and/or an associated resource
despite the [`fuchsia.hardware.sysmem/Heap`] client end closing.
+ request `size` The size in bytes, aligned up to a page boundary. In
contrast, `settings.buffer_settings.size_bytes` is the logical size in
bytes and is not rounded up to a page boundary.
+ request `settings` These are the sysmem settings applicable to the
buffer. A heap is encouraged to ignore this parameter entirely unless
it has a specific need to look at it.
+ For example, if a [`fuchsia.hardware.sysmem/Heap`] server also
allocates some sort of internal image resource to go with the
allocated VMO, the heap server may need to look at
`settings.image_format_constraints`.
+ As another example, a [`fuchsia.hardware.sysmem/Heap`] server may
support both contiguous and non-contiguous allocations, in which
case the heap server would need to look at
`settings.buffer_settings.is_physically_contiguous`.
+ However, if a [`fuchsia.hardware.sysmem/Heap`] server is allocating
a buffer of the requested size with no dependence on settings, the
[`fuchsia.hardware.sysmem/Heap`] server should simply ignore settings
(rather than validating settings or similar).
+ request `buffer_collection_id` This can be obtained later from a
sysmem-provided VMO using [`fuchsia.sysmem2/Allocator.GetVmoInfo`], or
at any time from a [`fuchsia.sysmem2/BufferCollectionToken`],
[`fuchsia.sysmem2/BufferCollection`], or
[`fuchsia.sysmem2/BufferCollectionTokenGroup`] associated with the
logical buffer collection. However, care must be taken to avoid trying
to use [`fuchsia.sysmem2/Allocator.GetVmoInfo`] (or any other sysmem
call) on a VMO newly allocated within this call, since sysmem doesn't
know about the VMO until this call returns. In addition, any call back
into sysmem during this call will deadlock, since sysmem waits for this
call to complete before processing
[`fuchsia.sysmem2/Allocator.GetVmoInfo`] (or any other call), so the
mistake will be fairly obvious; remove the offending call to
[`fuchsia.sysmem2/Allocator.GetVmoInfo`] or similar.
+ request `buffer_index` This can be obtained later from a
sysmem-provided VMO using [`fuchsia.sysmem2/Allocator.GetVmoInfo`]. See
the previous paragraph regarding not calling
[`fuchsia.sysmem2/Allocator.GetVmoInfo`] during this call.
- response `vmo` The allocated VMO; see above for relevant requirements.
Public Members
static const bool kHasClientToServer
static const bool kHasClientToServerBody
static const bool kHasServerToClient
static const bool kHasServerToClientBody
static const bool kHasNonEmptyUserFacingResponse
static const bool kHasDomainError
static const bool kHasFrameworkError
static const uint64_t kOrdinal