template <>
class WireWeakOnewayBufferClientImpl
Defined at line 21604 of file fidling/gen/sdk/fidl/fuchsia.sysmem2/fuchsia.sysmem2/cpp/fidl/fuchsia.sysmem2/cpp/wire_messaging.h
Public Methods
::fidl::OneWayStatus Release ()
###### On a [`fuchsia.sysmem2/BufferCollectionToken`] channel:
Normally a participant will convert a `BufferCollectionToken` into a
[`fuchsia.sysmem2/BufferCollection`], but a participant can instead send
`Release` via the token (and then close the channel immediately, or
shortly thereafter in response to the server closing the server end), which
avoids causing buffer collection failure. Without a prior `Release`,
closing the `BufferCollectionToken` client end will cause buffer
collection failure.
###### On a [`fuchsia.sysmem2/BufferCollection`] channel:
By default the server handles unexpected closure of a
[`fuchsia.sysmem2/BufferCollection`] client end (without `Release`
first) by failing the buffer collection. Partly this is to expedite
closing VMO handles to reclaim memory when any participant fails. If a
participant would like to cleanly close a `BufferCollection` without
causing buffer collection failure, the participant can send `Release`
before closing the `BufferCollection` client end. The `Release` can
occur before or after `SetConstraints`. If before `SetConstraints`, the
buffer collection won't require constraints from this node in order to
allocate. If after `SetConstraints`, the constraints are retained and
aggregated, despite the lack of `BufferCollection` connection at the
time of constraints aggregation.
###### On a [`fuchsia.sysmem2/BufferCollectionTokenGroup`] channel:
By default, unexpected closure of a `BufferCollectionTokenGroup` client
end (without `Release` first) will trigger failure of the buffer
collection. To close a `BufferCollectionTokenGroup` channel without
failing the buffer collection, ensure that AllChildrenPresent() has been
sent, and send `Release` before closing the `BufferCollectionTokenGroup`
client end.
If `Release` occurs before
[`fuchsia.sysmem2/BufferCollectionTokenGroup.AllChildrenPresent`], the
buffer collection will fail (triggered by reception of `Release` without
prior `AllChildrenPresent`). This is intentionally not analogous to how
[`fuchsia.sysmem2/BufferCollection.Release`] without
[`fuchsia.sysmem2/BufferCollection.SetConstraints`] first doesn't cause
buffer collection failure. For a `BufferCollectionTokenGroup`, clean
close requires `AllChildrenPresent` (if not already sent), then
`Release`, then close client end.
If `Release` occurs after `AllChildrenPresent`, the children and all
their constraints remain intact (just as they would if the
`BufferCollectionTokenGroup` channel had remained open), and the client
end close doesn't trigger buffer collection failure.
###### On all [`fuchsia.sysmem2/Node`] channels (any of the above):
For brevity, the per-channel-protocol paragraphs above ignore the
separate failure domain created by
[`fuchsia.sysmem2/BufferCollectionToken.SetDispensable`] or
[`fuchsia.sysmem2/BufferCollection.AttachToken`]. When a client end
unexpectedly closes (without `Release` first) and that client end is
under a failure domain, instead of failing the whole buffer collection,
the failure domain is failed, but the buffer collection itself is
isolated from failure of the failure domain. Such failure domains can be
nested, in which case only the inner-most failure domain in which the
`Node` resides fails.
Caller provides the backing storage for FIDL message.
::fidl::OneWayStatus SetName (::fuchsia_sysmem2::wire::NodeSetNameRequest NodeSetNameRequest)
Set a name for VMOs in this buffer collection.
If the name doesn't fit in ZX_MAX_NAME_LEN, the name of the VMO itself
will be truncated to fit. The name of the VMO will be suffixed with the
buffer index within the collection (if the suffix fits within
ZX_MAX_NAME_LEN). The name specified here (without truncation) will be
listed in the inspect data.
The name only affects VMOs allocated after the name is set; this call
does not rename existing VMOs. If multiple clients set different names
then the larger priority value will win. Setting a new name with the
same priority as a prior name doesn't change the name.
All table fields are currently required.
+ request `priority` The name is only set if this is the first `SetName`
or if `priority` is greater than any previous `priority` value in
prior `SetName` calls across all `Node`(s) of this buffer collection.
+ request `name` The name for VMOs created under this buffer collection.
Caller provides the backing storage for FIDL message.
::fidl::OneWayStatus SetDebugClientInfo (::fuchsia_sysmem2::wire::NodeSetDebugClientInfoRequest NodeSetDebugClientInfoRequest)
Set information about the current client that can be used by sysmem to
help diagnose leaking memory and allocation stalls waiting for a
participant to send [`fuchsia.sysmem2/BufferCollection.SetConstraints`].
This sets the debug client info on this [`fuchsia.sysmem2/Node`] and all
`Node`(s) derived from this `Node`, unless overridden by
[`fuchsia.sysmem2/Allocator.SetDebugClientInfo`] or a later
[`fuchsia.sysmem2/Node.SetDebugClientInfo`].
Sending [`fuchsia.sysmem2/Allocator.SetDebugClientInfo`] once per
`Allocator` is the most efficient way to ensure that all
[`fuchsia.sysmem2/Node`](s) will have at least some debug client info
set, and is also more efficient than separately sending the same debug
client info via [`fuchsia.sysmem2/Node.SetDebugClientInfo`] for each
created [`fuchsia.sysmem2/Node`].
Also used when verbose logging is enabled (see `SetVerboseLogging`) to
indicate which client is closing their channel first, leading to subtree
failure (which can be normal if the purpose of the subtree is over, but
if happening earlier than expected, the client-channel-specific name can
help diagnose where the failure is first coming from, from sysmem's
point of view).
All table fields are currently required.
+ request `name` This can be an arbitrary string, but the current
process name (see `fsl::GetCurrentProcessName`) is a good default.
+ request `id` This can be an arbitrary id, but the current process ID
(see `fsl::GetCurrentProcessKoid`) is a good default.
Caller provides the backing storage for FIDL message.
::fidl::OneWayStatus SetDebugTimeoutLogDeadline (::fuchsia_sysmem2::wire::NodeSetDebugTimeoutLogDeadlineRequest NodeSetDebugTimeoutLogDeadlineRequest)
Sysmem logs a warning if sysmem hasn't seen
[`fuchsia.sysmem2/BufferCollection.SetConstraints`] from all clients
within 5 seconds after creation of a new collection.
Clients can call this method to change when the log is printed. If
multiple clients set the deadline, it's unspecified which deadline will
take effect.
In most cases the default works well.
All table fields are currently required.
+ request `deadline` The time at which sysmem will start trying to log
the warning, unless all constraints are with sysmem by then.
Caller provides the backing storage for FIDL message.
::fidl::OneWayStatus SetVerboseLogging ()
This enables verbose logging for the buffer collection.
Verbose logging includes constraints set via
[`fuchsia.sysmem2/BufferCollection.SetConstraints`] from each client
along with info set via [`fuchsia.sysmem2/Node.SetDebugClientInfo`] (or
[`fuchsia.sysmem2/Allocator.SetDebugClientInfo`]) and the structure of
the tree of `Node`(s).
Normally sysmem prints only a single line complaint when aggregation
fails, with just the specific detailed reason that aggregation failed,
with little surrounding context. While this is often enough to diagnose
a problem if only a small change was made and everything was working
before the small change, it's often not particularly helpful for getting
a new buffer collection to work for the first time. Especially with
more complex trees of nodes, involving things like
[`fuchsia.sysmem2/BufferCollection.AttachToken`],
[`fuchsia.sysmem2/BufferCollectionToken.SetDispensable`],
[`fuchsia.sysmem2/BufferCollectionTokenGroup`] nodes, and associated
subtrees of nodes, verbose logging may help in diagnosing what the tree
looks like and why it's failing a logical allocation, or why a tree or
subtree is failing sooner than expected.
The intent of the extra logging is to be acceptable from a performance
point of view, under the assumption that verbose logging is only enabled
on a low number of buffer collections. If we're not tracking down a bug,
we shouldn't send this message.
Caller provides the backing storage for FIDL message.
::fidl::OneWayStatus SetWeak ()
Sets the current [`fuchsia.sysmem2/Node`] and all child `Node`(s)
created after this message to weak, which means that a client's `Node`
client end (or a child created after this message) is not alone
sufficient to keep allocated VMOs alive.
All VMOs obtained from weak `Node`(s) are weak sysmem VMOs. See also
`close_weak_asap`.
This message is only permitted before the `Node` becomes ready for
allocation (else the server closes the channel with `ZX_ERR_BAD_STATE`):
* `BufferCollectionToken`: any time
* `BufferCollection`: before `SetConstraints`
* `BufferCollectionTokenGroup`: before `AllChildrenPresent`
Currently, no conversion from strong `Node` to weak `Node` after ready
for allocation is provided, but a client can simulate that by creating
an additional `Node` before allocation and setting that additional
`Node` to weak, and then potentially at some point later sending
`Release` and closing the client end of the client's strong `Node`, but
keeping the client's weak `Node`.
Zero strong `Node`(s) and zero strong VMO handles will result in buffer
collection failure (all `Node` client end(s) will see
`ZX_CHANNEL_PEER_CLOSED` and all `close_weak_asap` `client_end`(s) will
see `ZX_EVENTPAIR_PEER_CLOSED`), but sysmem (intentionally) won't notice
this situation until all `Node`(s) are ready for allocation. For initial
allocation to succeed, at least one strong `Node` is required to exist
at allocation time, but after that client receives VMO handles, that
client can `BufferCollection.Release` and close the client end without
causing this type of failure.
This implies [`fuchsia.sysmem2/Node.SetWeakOk`] as well, but does not
imply `SetWeakOk` with `for_children_also` true, which can be sent
separately as appropriate.
Caller provides the backing storage for FIDL message.
::fidl::OneWayStatus SetWeakOk (::fuchsia_sysmem2::wire::NodeSetWeakOkRequest NodeSetWeakOkRequest)
This indicates to sysmem that the client is prepared to pay attention to
`close_weak_asap`.
If sent, this message must be before
[`fuchsia.sysmem2/BufferCollection.WaitForAllBuffersAllocated`].
All participants using a weak [`fuchsia.sysmem2/BufferCollection`] must
send this message before `WaitForAllBuffersAllocated`, or a parent
`Node` must have sent [`fuchsia.sysmem2/Node.SetWeakOk`] with
`for_child_nodes_also` true, else the `WaitForAllBuffersAllocated` will
trigger buffer collection failure.
This message is necessary because weak sysmem VMOs have not always been
a thing, so older clients are not aware of the need to pay attention to
`close_weak_asap` `ZX_EVENTPAIR_PEER_CLOSED` and close all remaining
sysmem weak VMO handles asap. By having this message and requiring
participants to indicate their acceptance of this aspect of the overall
protocol, we avoid situations where an older client is delivered a weak
VMO without any way for sysmem to get that VMO to close quickly later
(and on a per-buffer basis).
A participant that doesn't handle `close_weak_asap` and also doesn't
retrieve any VMO handles via `WaitForAllBuffersAllocated` doesn't need
to send `SetWeakOk` (and doesn't need to have a parent `Node` send
`SetWeakOk` with `for_child_nodes_also` true either). However, if that
same participant has a child/delegate which does retrieve VMOs, that
child/delegate will need to send `SetWeakOk` before
`WaitForAllBuffersAllocated`.
+ request `for_child_nodes_also` If present and true, this means direct
child nodes of this node created after this message plus all
descendants of those nodes will behave as if `SetWeakOk` was sent on
those nodes. Any child node of this node that was created before this
message is not included. This setting is "sticky" in the sense that a
subsequent `SetWeakOk` without this bool set to true does not reset
the server-side bool. If this creates a problem for a participant, a
workaround is to `SetWeakOk` with `for_child_nodes_also` true on child
tokens instead, as appropriate. A participant should only set
`for_child_nodes_also` true if the participant can really promise to
obey `close_weak_asap` both for its own weak VMO handles, and for all
weak VMO handles held by participants holding the corresponding child
`Node`(s). When `for_child_nodes_also` is set, descendant `Node`(s)
which are using sysmem(1) can be weak, despite the clients of those
sysmem(1) `Node`(s) not having any direct way to `SetWeakOk` or any
direct way to find out about `close_weak_asap`. This only applies to
descendants of this `Node` which are using sysmem(1), not to this
`Node` when converted directly from a sysmem2 token to a sysmem(1)
token, which will fail allocation unless an ancestor of this `Node`
specified `for_child_nodes_also` true.
Caller provides the backing storage for FIDL message.
::fidl::OneWayStatus AttachNodeTracking (::fuchsia_sysmem2::wire::NodeAttachNodeTrackingRequest NodeAttachNodeTrackingRequest)
The server_end will be closed after this `Node` and any child nodes have
released their buffer counts, making those counts available for
reservation by a different `Node` via
[`fuchsia.sysmem2/BufferCollection.AttachToken`].
The `Node` buffer counts may not be released until the entire tree of
`Node`(s) is closed or failed, because
[`fuchsia.sysmem2/BufferCollection.Release`] followed by channel close
does not immediately un-reserve the `Node` buffer counts. Instead, the
`Node` buffer counts remain reserved until the orphaned node is later
cleaned up.
If the `Node` exceeds a fairly large number of attached eventpair server
ends, a log message will indicate this and the `Node` (and the
appropriate sub-tree) will fail.
The `server_end` will remain open when
[`fuchsia.sysmem2/Allocator.BindSharedCollection`] converts a
[`fuchsia.sysmem2/BufferCollectionToken`] into a
[`fuchsia.sysmem2/BufferCollection`].
This message can also be used with a
[`fuchsia.sysmem2/BufferCollectionTokenGroup`].
Caller provides the backing storage for FIDL message.
::fidl::OneWayStatus Duplicate (::fuchsia_sysmem2::wire::BufferCollectionTokenDuplicateRequest BufferCollectionTokenDuplicateRequest)
Create an additional [`fuchsia.sysmem2/BufferCollectionToken`] from this
one, referring to the same buffer collection.
The created token is a child of this token in the
[`fuchsia.sysmem2/Node`] hierarchy.
This method can be used to add a participant, by transferring the newly
created token to another participant.
This one-way message can be used instead of the two-way
[`fuchsia.sysmem2/BufferCollectionToken.DuplicateSync`] FIDL call in
performance-sensitive cases where it would be undesirable to wait for
sysmem to respond to
[`fuchsia.sysmem2/BufferCollectionToken.DuplicateSync`] or when the
client code isn't structured to make it easy to duplicate all the needed
tokens at once.
After sending one or more `Duplicate` messages, and before sending the
newly created child tokens to other participants (or to other
[`fuchsia.sysmem2/Allocator`] channels), the client must send a
[`fuchsia.sysmem2/Node.Sync`] and wait for the `Sync` response. The
`Sync` call can be made on the token, or on the `BufferCollection`
obtained by passing this token to `BindSharedCollection`. Either will
ensure that the server knows about the tokens created via `Duplicate`
before the other participant sends the token to the server via separate
`Allocator` channel.
All tokens must be turned in via
[`fuchsia.sysmem2/Allocator.BindSharedCollection`] or
[`fuchsia.sysmem2/Node.Release`] for a `BufferCollection` to
successfully allocate buffers.
All table fields are currently required.
+ request `rights_attenuation_mask` The rights bits that are zero in
this mask will be absent in the buffer VMO rights obtainable via the
client end of `token_request`. This allows an initiator or
intermediary participant to attenuate the rights available to a
delegate participant. This does not allow a participant to gain rights
that the participant doesn't already have. The value
`ZX_RIGHT_SAME_RIGHTS` can be used to specify that no attenuation
should be applied.
+ These values for rights_attenuation_mask result in no attenuation:
+ `ZX_RIGHT_SAME_RIGHTS` (preferred)
+ 0xFFFFFFFF (this is reasonable when an attenuation mask is
computed)
+ 0 (deprecated - do not use 0 - an ERROR will go to the log)
+ request `token_request` is the server end of a `BufferCollectionToken`
channel. The client end of this channel acts as another participant in
the shared buffer collection.
Caller provides the backing storage for FIDL message.
::fidl::OneWayStatus SetDispensable ()
Set this [`fuchsia.sysmem2/BufferCollectionToken`] to dispensable.
When the `BufferCollectionToken` is converted to a
[`fuchsia.sysmem2/BufferCollection`], the dispensable status applies to
the `BufferCollection` also.
Normally, if a client closes a [`fuchsia.sysmem2/BufferCollection`]
client end without having sent
[`fuchsia.sysmem2/BufferCollection.Release`] first, the
`BufferCollection` [`fuchsia.sysmem2/Node`] will fail, which also
propagates failure to the parent [`fuchsia.sysmem2/Node`] and so on up
to the root `Node`, which fails the whole buffer collection. In
contrast, a dispensable `Node` can fail after buffers are allocated
without causing failure of its parent in the [`fuchsia.sysmem2/Node`]
hierarchy.
The dispensable `Node` participates in constraints aggregation along
with its parent before buffer allocation. If the dispensable `Node`
fails before buffers are allocated, the failure propagates to the
dispensable `Node`'s parent.
After buffers are allocated, failure of the dispensable `Node` (or any
child of the dispensable `Node`) does not propagate to the dispensable
`Node`'s parent. Failure does propagate from a normal child of a
dispensable `Node` to the dispensable `Node`. Failure of a child is
blocked from reaching its parent if the child is attached using
[`fuchsia.sysmem2/BufferCollection.AttachToken`], or if the child is
dispensable and the failure occurred after allocation.
A dispensable `Node` can be used in cases where a participant needs to
provide constraints, but after buffers are allocated, the participant
can fail without causing buffer collection failure from the parent
`Node`'s point of view.
In contrast, `BufferCollection.AttachToken` can be used to create a
`BufferCollectionToken` which does not participate in constraints
aggregation with its parent `Node`, and whose failure at any time does
not propagate to its parent `Node`, and whose potential delay providing
constraints does not prevent the parent `Node` from completing its
buffer allocation.
An initiator (creator of the root `Node` using
[`fuchsia.sysmem2/Allocator.AllocateSharedCollection`]) may in some
scenarios choose to initially use a dispensable `Node` for a first
instance of a participant, and then later if the first instance of that
participant fails, a new second instance of that participant may be given
a `BufferCollectionToken` created with `AttachToken`.
Normally a client will `SetDispensable` on a `BufferCollectionToken`
shortly before sending the dispensable `BufferCollectionToken` to a
delegate participant. Because `SetDispensable` prevents propagation of
child `Node` failure to parent `Node`(s), if the client was relying on
noticing child failure via failure of the parent `Node` retained by the
client, the client may instead need to notice failure via other means.
If other means aren't available/convenient, the client can instead
retain the dispensable `Node` and create a child `Node` under that to
send to the delegate participant, retaining this `Node` in order to
notice failure of the subtree rooted at this `Node` via this `Node`'s
ZX_CHANNEL_PEER_CLOSED signal, and take whatever action is appropriate
(e.g. starting a new instance of the delegate participant and handing it
a `BufferCollectionToken` created using
[`fuchsia.sysmem2/BufferCollection.AttachToken`], or propagate failure
and clean up in a client-specific way).
While it is possible (and potentially useful) to `SetDispensable` on a
direct child of a `BufferCollectionTokenGroup` `Node`, it isn't possible
to later replace a failed dispensable `Node` that was a direct child of
a `BufferCollectionTokenGroup` with a new token using `AttachToken`
(since there's no `AttachToken` on a group). Instead, to enable
`AttachToken` replacement in this case, create an additional
non-dispensable token that's a direct child of the group and make the
existing dispensable token a child of the additional token. This way,
the additional token that is a direct child of the group has
`BufferCollection.AttachToken` which can be used to replace the failed
dispensable token.
`SetDispensable` on an already-dispensable token is idempotent.
Caller provides the backing storage for FIDL message.
::fidl::OneWayStatus CreateBufferCollectionTokenGroup (::fuchsia_sysmem2::wire::BufferCollectionTokenCreateBufferCollectionTokenGroupRequest BufferCollectionTokenCreateBufferCollectionTokenGroupRequest)
Create a logical OR among a set of tokens, called a
[`fuchsia.sysmem2/BufferCollectionTokenGroup`].
Most sysmem clients and many participants don't need to care about this
message or about `BufferCollectionTokenGroup`(s). However, in some cases
a participant wants to attempt to include one set of delegate
participants, but if constraints don't combine successfully that way,
fall back to a different (possibly overlapping) set of delegate
participants, and/or fall back to a less demanding strategy (in terms of
how strict the [`fuchsia.sysmem2/BufferCollectionConstraints`] are,
across all involved delegate participants). In such cases, a
`BufferCollectionTokenGroup` is useful.
A `BufferCollectionTokenGroup` is used to create a 1 of N OR among N
child [`fuchsia.sysmem2/BufferCollectionToken`](s). The child tokens
which are not selected during aggregation will fail (close), which a
potential participant should notice when their `BufferCollection`
channel client endpoint sees PEER_CLOSED, allowing the participant to
clean up the speculative usage that didn't end up happening (this is
similar to a normal `BufferCollection` server end closing on failure to
allocate a logical buffer collection or later async failure of a buffer
collection).
See comments on protocol `BufferCollectionTokenGroup`.
Any `rights_attenuation_mask` or `AttachToken`/`SetDispensable` to be
applied to the whole group can be achieved with a
`BufferCollectionToken` for this purpose as a direct parent of the
`BufferCollectionTokenGroup`.
All table fields are currently required.
+ request `group_request` The server end of a
`BufferCollectionTokenGroup` channel to be served by sysmem.
Caller provides the backing storage for FIDL message.