fidl_fuchsia_sysmem2

Struct BufferCollectionTokenSynchronousProxy

pub struct BufferCollectionTokenSynchronousProxy { /* private fields */ }

Implementations

impl BufferCollectionTokenSynchronousProxy

pub fn new(channel: Channel) -> Self

pub fn into_channel(self) -> Channel

pub fn wait_for_event( &self, deadline: MonotonicInstant, ) -> Result<BufferCollectionTokenEvent, Error>

Waits until an event arrives and returns it. It is safe for other threads to make concurrent requests while waiting for an event.

pub fn sync(&self, ___deadline: MonotonicInstant) -> Result<(), Error>

Ensure that previous messages have been received server side. This is particularly useful after previous messages that created new tokens, because a token must be known to the sysmem server before sending the token to another participant.

Calling [fuchsia.sysmem2/BufferCollectionToken.Sync] on a token that isn’t/wasn’t a valid token risks the Sync stalling forever. See [fuchsia.sysmem2/Allocator.ValidateBufferCollectionToken] for one way to mitigate the possibility of a hostile/fake [fuchsia.sysmem2/BufferCollectionToken] at the cost of one round trip. Another way is to pass the token to [fuchsia.sysmem2/Allocator.BindSharedCollection], which also validates the token as part of exchanging it for a [fuchsia.sysmem2/BufferCollection] channel, and [fuchsia.sysmem2/BufferCollection.Sync] can then be used without risk of stalling.

After creating one or more fuchsia.sysmem2/BufferCollectionToken and then starting and completing a Sync, it’s then safe to send the BufferCollectionToken client ends to other participants knowing the server will recognize the tokens when they’re sent by the other participants to sysmem in a [fuchsia.sysmem2/Allocator.BindSharedCollection] message. This is an efficient way to create tokens while avoiding unnecessary round trips.

Other options include waiting for each [fuchsia.sysmem2/BufferCollectionToken.Duplicate] to complete individually (using separate call to Sync after each), or calling [fuchsia.sysmem2/BufferCollection.Sync] after a token has been converted to a BufferCollection via [fuchsia.sysmem2/Allocator.BindSharedCollection], or using [fuchsia.sysmem2/BufferCollectionToken.DuplicateSync] which includes the sync step and can create multiple tokens at once.

pub fn release(&self) -> Result<(), Error>

On a [fuchsia.sysmem2/BufferCollectionToken] channel:

Normally a participant will convert a BufferCollectionToken into a [fuchsia.sysmem2/BufferCollection], but a participant can instead send Release via the token (and then close the channel immediately or shortly later in response to server closing the server end), which avoids causing buffer collection failure. Without a prior Release, closing the BufferCollectionToken client end will cause buffer collection failure.

On a [fuchsia.sysmem2/BufferCollection] channel:

By default the server handles unexpected closure of a [fuchsia.sysmem2/BufferCollection] client end (without Release first) by failing the buffer collection. Partly this is to expedite closing VMO handles to reclaim memory when any participant fails. If a participant would like to cleanly close a BufferCollection without causing buffer collection failure, the participant can send Release before closing the BufferCollection client end. The Release can occur before or after SetConstraints. If before SetConstraints, the buffer collection won’t require constraints from this node in order to allocate. If after SetConstraints, the constraints are retained and aggregated, despite the lack of BufferCollection connection at the time of constraints aggregation.

On a [fuchsia.sysmem2/BufferCollectionTokenGroup] channel:

By default, unexpected closure of a BufferCollectionTokenGroup client end (without Release first) will trigger failure of the buffer collection. To close a BufferCollectionTokenGroup channel without failing the buffer collection, ensure that AllChildrenPresent() has been sent, and send Release before closing the BufferCollectionTokenGroup client end.

If Release occurs before [fuchsia.sysmem2/BufferCollectionTokenGroup.AllChildrenPresent], the buffer collection will fail (triggered by reception of Release without prior AllChildrenPresent). This is intentionally not analogous to how [fuchsia.sysmem2/BufferCollection.Release] without [fuchsia.sysmem2/BufferCollection.SetConstraints] first doesn’t cause buffer collection failure. For a BufferCollectionTokenGroup, clean close requires AllChildrenPresent (if not already sent), then Release, then closing the client end.

If Release occurs after AllChildrenPresent, the children and all their constraints remain intact (just as they would if the BufferCollectionTokenGroup channel had remained open), and the client end close doesn’t trigger buffer collection failure.
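The clean-close sequence for a group channel can be sketched as a tiny state machine. This is an illustrative model with invented names, not the sysmem implementation:

```rust
// Model of the BufferCollectionTokenGroup clean-close rules described above.
// All type and method names here are invented for illustration.

#[derive(Debug, PartialEq)]
enum CollectionOutcome {
    Ok,
    BufferCollectionFailure,
}

#[derive(Default)]
struct GroupChannel {
    all_children_present_sent: bool,
    release_sent: bool,
}

impl GroupChannel {
    fn all_children_present(&mut self) {
        self.all_children_present_sent = true;
    }

    fn release(&mut self) -> CollectionOutcome {
        // Release before AllChildrenPresent intentionally fails the collection.
        if !self.all_children_present_sent {
            return CollectionOutcome::BufferCollectionFailure;
        }
        self.release_sent = true;
        CollectionOutcome::Ok
    }

    fn close_client_end(self) -> CollectionOutcome {
        // Unexpected closure (no prior AllChildrenPresent + Release) triggers
        // buffer collection failure.
        if self.all_children_present_sent && self.release_sent {
            CollectionOutcome::Ok
        } else {
            CollectionOutcome::BufferCollectionFailure
        }
    }
}
```

The only sequence that closes the group without failing the collection is AllChildrenPresent, then Release, then close.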

On all [fuchsia.sysmem2/Node] channels (any of the above):

For brevity, the per-channel-protocol paragraphs above ignore the separate failure domain created by [fuchsia.sysmem2/BufferCollectionToken.SetDispensable] or [fuchsia.sysmem2/BufferCollection.AttachToken]. When a client end unexpectedly closes (without Release first) and that client end is under a failure domain, instead of failing the whole buffer collection, the failure domain is failed, but the buffer collection itself is isolated from failure of the failure domain. Such failure domains can be nested, in which case only the inner-most failure domain in which the Node resides fails.
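The nested failure-domain rule can be sketched as a walk up the Node tree that stops at the nearest enclosing domain boundary. The tree representation is invented for illustration; it is not how sysmem stores nodes:

```rust
// Sketch of the nested failure-domain rule: when a Node fails, the failure
// propagates upward only until it reaches the inner-most failure-domain
// boundary (a Node attached via SetDispensable or AttachToken).

struct Node {
    parent: Option<usize>,    // index of the parent in the node arena
    is_domain_boundary: bool, // attached via SetDispensable/AttachToken
}

/// Returns the root of the inner-most failure domain containing `node`:
/// the highest Node (inclusive) that a failure of `node` can reach.
fn failure_domain_root(nodes: &[Node], mut node: usize) -> usize {
    loop {
        if nodes[node].is_domain_boundary {
            return node; // failure stops here; the parent is isolated
        }
        match nodes[node].parent {
            Some(p) => node = p,
            None => return node, // reached the root: the whole collection fails
        }
    }
}
```

With nested domains, a failing leaf only takes down the subtree under its nearest boundary; everything above that boundary survives.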

pub fn set_name(&self, payload: &NodeSetNameRequest) -> Result<(), Error>

Set a name for VMOs in this buffer collection.

If the name doesn’t fit in ZX_MAX_NAME_LEN, the name of the VMO itself will be truncated to fit. The name of the VMO will be suffixed with the buffer index within the collection (if the suffix fits within ZX_MAX_NAME_LEN). The name specified here (without truncation) will be listed in the inspect data.

The name only affects VMOs allocated after the name is set; this call does not rename existing VMOs. If multiple clients set different names then the larger priority value will win. Setting a new name with the same priority as a prior name doesn’t change the name.

All table fields are currently required.

  • request priority The name is only set if this is the first SetName or if priority is greater than any previous priority value in prior SetName calls across all Node(s) of this buffer collection.
  • request name The name for VMOs created under this buffer collection.
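The priority rule and the per-VMO naming can be modeled roughly as follows. This is an illustrative sketch with invented names; the exact suffix format and truncation order are sysmem implementation details not specified here:

```rust
// Model of SetName semantics: the highest priority wins, a tie keeps the
// earlier name, and each VMO's name carries a buffer-index suffix truncated
// to ZX_MAX_NAME_LEN. Names below are invented for illustration.

const ZX_MAX_NAME_LEN: usize = 32; // real Zircon value, includes trailing NUL

struct CollectionName {
    priority: Option<u32>,
    name: String,
}

impl CollectionName {
    fn new() -> Self {
        CollectionName { priority: None, name: String::new() }
    }

    fn set_name(&mut self, priority: u32, name: &str) {
        // Only the first SetName, or a strictly greater priority, takes effect.
        if self.priority.map_or(true, |prev| priority > prev) {
            self.priority = Some(priority);
            self.name = name.to_string();
        }
    }

    fn vmo_name(&self, buffer_index: usize) -> String {
        // Invented suffix format; truncated to leave room for the NUL.
        let full = format!("{}:{}", self.name, buffer_index);
        full.chars().take(ZX_MAX_NAME_LEN - 1).collect()
    }
}
```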

pub fn set_debug_client_info( &self, payload: &NodeSetDebugClientInfoRequest, ) -> Result<(), Error>

Set information about the current client that can be used by sysmem to help diagnose leaking memory and allocation stalls waiting for a participant to send [fuchsia.sysmem2/BufferCollection.SetConstraints].

This sets the debug client info on this [fuchsia.sysmem2/Node] and all Node(s) derived from this Node, unless overridden by [fuchsia.sysmem2/Allocator.SetDebugClientInfo] or a later [fuchsia.sysmem2/Node.SetDebugClientInfo].

Sending [fuchsia.sysmem2/Allocator.SetDebugClientInfo] once per Allocator is the most efficient way to ensure that all fuchsia.sysmem2/Node will have at least some debug client info set, and is also more efficient than separately sending the same debug client info via [fuchsia.sysmem2/Node.SetDebugClientInfo] for each created [fuchsia.sysmem2/Node].

Also used when verbose logging is enabled (see SetVerboseLogging) to indicate which client is closing their channel first, leading to subtree failure (which can be normal if the purpose of the subtree is over, but if happening earlier than expected, the client-channel-specific name can help diagnose where the failure is first coming from, from sysmem’s point of view).

All table fields are currently required.

  • request name This can be an arbitrary string, but the current process name (see fsl::GetCurrentProcessName) is a good default.
  • request id This can be an arbitrary id, but the current process ID (see fsl::GetCurrentProcessKoid) is a good default.

pub fn set_debug_timeout_log_deadline( &self, payload: &NodeSetDebugTimeoutLogDeadlineRequest, ) -> Result<(), Error>

Sysmem logs a warning if sysmem hasn’t seen [fuchsia.sysmem2/BufferCollection.SetConstraints] from all clients within 5 seconds after creation of a new collection.

Clients can call this method to change when the log is printed. If multiple clients set the deadline, it’s unspecified which deadline will take effect.

In most cases the default works well.

All table fields are currently required.

  • request deadline The time at which sysmem will start trying to log the warning, unless all constraints are with sysmem by then.

pub fn set_verbose_logging(&self) -> Result<(), Error>

This enables verbose logging for the buffer collection.

Verbose logging includes constraints set via [fuchsia.sysmem2/BufferCollection.SetConstraints] from each client along with info set via [fuchsia.sysmem2/Node.SetDebugClientInfo] (or [fuchsia.sysmem2/Allocator.SetDebugClientInfo]) and the structure of the tree of Node(s).

Normally sysmem prints only a single line complaint when aggregation fails, with just the specific detailed reason that aggregation failed, with little surrounding context. While this is often enough to diagnose a problem if only a small change was made and everything was working before the small change, it’s often not particularly helpful for getting a new buffer collection to work for the first time. Especially with more complex trees of nodes, involving things like [fuchsia.sysmem2/BufferCollection.AttachToken], [fuchsia.sysmem2/BufferCollectionToken.SetDispensable], [fuchsia.sysmem2/BufferCollectionTokenGroup] nodes, and associated subtrees of nodes, verbose logging may help in diagnosing what the tree looks like and why it’s failing a logical allocation, or why a tree or subtree is failing sooner than expected.

The intent of the extra logging is to be acceptable from a performance point of view, under the assumption that verbose logging is only enabled on a low number of buffer collections. If we’re not tracking down a bug, we shouldn’t send this message.

pub fn get_node_ref( &self, ___deadline: MonotonicInstant, ) -> Result<NodeGetNodeRefResponse, Error>

This gets a handle that can be used as a parameter to [fuchsia.sysmem2/Node.IsAlternateFor] called on any [fuchsia.sysmem2/Node]. This handle is only for use as proof that the client obtained this handle from this Node.

Because this is a get not a set, no [fuchsia.sysmem2/Node.Sync] is needed between the GetNodeRef and the call to IsAlternateFor, despite the two calls typically being on different channels.

See also [fuchsia.sysmem2/Node.IsAlternateFor].

All table fields are currently required.

  • response node_ref This handle can be sent via IsAlternateFor on a different Node channel, to prove that the client obtained the handle from this Node.

pub fn is_alternate_for( &self, payload: NodeIsAlternateForRequest, ___deadline: MonotonicInstant, ) -> Result<NodeIsAlternateForResult, Error>

Check whether the calling [fuchsia.sysmem2/Node] is in a subtree rooted at a different child token of a common parent [fuchsia.sysmem2/BufferCollectionTokenGroup], in relation to the passed-in node_ref.

This call is for assisting with admission control de-duplication, and with debugging.

The node_ref must be obtained using [fuchsia.sysmem2/Node.GetNodeRef].

The node_ref can be a duplicated handle; it’s not necessary to call GetNodeRef for every call to [fuchsia.sysmem2/Node.IsAlternateFor].

If the calling token may not actually be a valid token at all, due to a potentially hostile/untrusted provider of the token, call [fuchsia.sysmem2/Allocator.ValidateBufferCollectionToken] first, instead of potentially getting stuck indefinitely if IsAlternateFor never responds because the calling token isn’t a real token (not really talking to sysmem). Another option is to call [fuchsia.sysmem2/Allocator.BindSharedCollection] with this token first, which also validates the token along with converting it to a [fuchsia.sysmem2/BufferCollection], then call IsAlternateFor.

All table fields are currently required.

  • response is_alternate
    • true: The first parent node in common between the calling node and the node_ref Node is a BufferCollectionTokenGroup. This means that the calling Node and the node_ref Node will not have both their constraints apply - rather sysmem will choose one or the other of the constraints - never both. This is because only one child of a BufferCollectionTokenGroup is selected during logical allocation, with only that one child’s subtree contributing to constraints aggregation.
    • false: The first parent node in common between the calling Node and the node_ref Node is not a BufferCollectionTokenGroup. Currently, this means the first parent node in common is a BufferCollectionToken or BufferCollection (regardless of whether Release has been sent). This means that the calling Node and the node_ref Node may have both their constraints apply during constraints aggregation of the logical allocation, if both Node(s) are selected by any parent BufferCollectionTokenGroup(s) involved. In this case, there is no BufferCollectionTokenGroup that will directly prevent the two Node(s) from both being selected and their constraints both aggregated, but even when false, one or both Node(s) may still be eliminated from consideration if one or both Node(s) has a direct or indirect parent BufferCollectionTokenGroup which selects a child subtree other than the subtree containing the calling Node or node_ref Node.
  • error [fuchsia.sysmem2/Error.NOT_FOUND] The node_ref wasn’t associated with the same buffer collection as the calling Node. Another reason for this error is if the node_ref is an [zx.Handle:EVENT] handle with sufficient rights, but isn’t actually a real node_ref obtained from GetNodeRef.
  • error [fuchsia.sysmem2/Error.PROTOCOL_DEVIATION] The caller passed a node_ref that isn’t a [zx.Handle:EVENT] handle, or doesn’t have the needed rights expected on a real node_ref.
  • No other failing status codes are returned by this call. However, sysmem may add additional codes in future, so the client should have sensible default handling for any failing status code.
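The decision rule above can be sketched as a lowest-common-ancestor walk. The tree representation is invented for illustration; sysmem’s internals are not specified here:

```rust
// Sketch of the is_alternate decision: collect one Node's ancestor chain,
// then walk up from the other Node; the first (lowest) ancestor in common
// decides the answer — true iff it is a BufferCollectionTokenGroup.
use std::collections::HashSet;

#[derive(PartialEq)]
enum NodeKind { Token, Collection, Group }

struct TreeNode { parent: Option<usize>, kind: NodeKind }

fn is_alternate_for(nodes: &[TreeNode], a: usize, b: usize) -> bool {
    // Ancestors of `a`, inclusive of `a` itself.
    let mut seen = HashSet::new();
    let mut cur = Some(a);
    while let Some(i) = cur {
        seen.insert(i);
        cur = nodes[i].parent;
    }
    // Walk up from `b` until we hit the first common node.
    let mut cur = Some(b);
    while let Some(i) = cur {
        if seen.contains(&i) {
            return nodes[i].kind == NodeKind::Group;
        }
        cur = nodes[i].parent;
    }
    false // different trees; the real call returns NOT_FOUND instead
}
```

Two children of the same group are alternates (only one will be selected); nodes whose first common ancestor is a token or collection are not.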

pub fn get_buffer_collection_id( &self, ___deadline: MonotonicInstant, ) -> Result<NodeGetBufferCollectionIdResponse, Error>

Get the buffer collection ID. This ID is also available from [fuchsia.sysmem2/Allocator.GetVmoInfo] (along with the buffer_index within the collection).

This call is mainly useful in situations where we can’t convey a [fuchsia.sysmem2/BufferCollectionToken] or [fuchsia.sysmem2/BufferCollection] directly, but can only convey a VMO handle, which can be joined back up with a BufferCollection client end that was created via a different path. Prefer to convey a BufferCollectionToken or BufferCollection directly when feasible.

Trusting a buffer_collection_id value from a source other than sysmem is analogous to trusting a koid value from a source other than zircon. Both should be avoided unless really necessary, and both require caution. In some situations it may be reasonable to refer to a pre-established BufferCollection by buffer_collection_id via a protocol for efficiency reasons, but an incoming value purporting to be a buffer_collection_id is not sufficient alone to justify granting the sender of the buffer_collection_id any capability. The sender must first prove to a receiver that the sender has/had a VMO or has/had a BufferCollectionToken to the same collection by sending a handle that sysmem confirms is a valid sysmem handle and which sysmem maps to the buffer_collection_id value. The receiver should take care to avoid assuming that a sender had a BufferCollectionToken in cases where the sender has only proven that the sender had a VMO.

  • response buffer_collection_id This ID is unique per buffer collection per boot. Each buffer is uniquely identified by the buffer_collection_id and buffer_index together.

pub fn set_weak(&self) -> Result<(), Error>

Sets the current [fuchsia.sysmem2/Node] and all child Node(s) created after this message to weak, which means that a client’s Node client end (or a child created after this message) is not alone sufficient to keep allocated VMOs alive.

All VMOs obtained from weak Node(s) are weak sysmem VMOs. See also close_weak_asap.

This message is only permitted before the Node becomes ready for allocation (else the server closes the channel with ZX_ERR_BAD_STATE):

  • BufferCollectionToken: any time
  • BufferCollection: before SetConstraints
  • BufferCollectionTokenGroup: before AllChildrenPresent

Currently, no conversion from strong Node to weak Node after ready for allocation is provided, but a client can simulate that by creating an additional Node before allocation and setting that additional Node to weak, and then potentially at some point later sending Release and closing the client end of the client’s strong Node, but keeping the client’s weak Node.

Zero strong Node(s) and zero strong VMO handles will result in buffer collection failure (all Node client end(s) will see ZX_CHANNEL_PEER_CLOSED and all close_weak_asap client_end(s) will see ZX_EVENTPAIR_PEER_CLOSED), but sysmem (intentionally) won’t notice this situation until all Node(s) are ready for allocation. For initial allocation to succeed, at least one strong Node is required to exist at allocation time, but after that client receives VMO handles, that client can BufferCollection.Release and close the client end without causing this type of failure.

This implies [fuchsia.sysmem2/Node.SetWeakOk] as well, but does not imply SetWeakOk with for_child_nodes_also true, which can be sent separately as appropriate.
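Two of the rules above can be sketched in miniature. Names are invented; this models the documented behavior, not the sysmem implementation:

```rust
// Model of SetWeak timing and strong/weak accounting: SetWeak is only
// permitted before the Node is ready for allocation, and allocated buffers
// stay alive only while at least one strong Node client end or strong VMO
// handle remains. Weak handles alone do not keep buffers alive.

#[derive(Debug, PartialEq)]
enum SetWeakResult {
    Ok,
    BadState, // models the server closing the channel with ZX_ERR_BAD_STATE
}

struct NodeState {
    ready_for_allocation: bool,
    weak: bool,
}

impl NodeState {
    fn set_weak(&mut self) -> SetWeakResult {
        if self.ready_for_allocation {
            return SetWeakResult::BadState;
        }
        self.weak = true;
        SetWeakResult::Ok
    }
}

/// Buffers survive only while something strong exists.
fn buffers_alive(strong_nodes: u32, strong_vmos: u32) -> bool {
    strong_nodes > 0 || strong_vmos > 0
}
```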

pub fn set_weak_ok(&self, payload: NodeSetWeakOkRequest) -> Result<(), Error>

This indicates to sysmem that the client is prepared to pay attention to close_weak_asap.

If sent, this message must be before [fuchsia.sysmem2/BufferCollection.WaitForAllBuffersAllocated].

All participants using a weak [fuchsia.sysmem2/BufferCollection] must send this message before WaitForAllBuffersAllocated, or a parent Node must have sent [fuchsia.sysmem2/Node.SetWeakOk] with for_child_nodes_also true, else the WaitForAllBuffersAllocated will trigger buffer collection failure.

This message is necessary because weak sysmem VMOs have not always been a thing, so older clients are not aware of the need to pay attention to close_weak_asap ZX_EVENTPAIR_PEER_CLOSED and close all remaining sysmem weak VMO handles asap. By having this message and requiring participants to indicate their acceptance of this aspect of the overall protocol, we avoid situations where an older client is delivered a weak VMO without any way for sysmem to get that VMO to close quickly later (and on a per-buffer basis).

A participant that doesn’t handle close_weak_asap and also doesn’t retrieve any VMO handles via WaitForAllBuffersAllocated doesn’t need to send SetWeakOk (and doesn’t need to have a parent Node send SetWeakOk with for_child_nodes_also true either). However, if that same participant has a child/delegate which does retrieve VMOs, that child/delegate will need to send SetWeakOk before WaitForAllBuffersAllocated.

  • request for_child_nodes_also If present and true, this means direct child nodes of this node created after this message plus all descendants of those nodes will behave as if SetWeakOk was sent on those nodes. Any child node of this node that was created before this message is not included. This setting is “sticky” in the sense that a subsequent SetWeakOk without this bool set to true does not reset the server-side bool. If this creates a problem for a participant, a workaround is to SetWeakOk with for_child_nodes_also true on child tokens instead, as appropriate. A participant should only set for_child_nodes_also true if the participant can really promise to obey close_weak_asap both for its own weak VMO handles, and for all weak VMO handles held by participants holding the corresponding child Node(s). When for_child_nodes_also is set, descendant Node(s) which are using sysmem(1) can be weak, despite the clients of those sysmem(1) Node(s) not having any direct way to SetWeakOk or any direct way to find out about close_weak_asap. This only applies to descendants of this Node which are using sysmem(1), not to this Node when converted directly from a sysmem2 token to a sysmem(1) token, which will fail allocation unless an ancestor of this Node specified for_child_nodes_also true.
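The “sticky” behavior of for_child_nodes_also can be modeled as follows (illustrative sketch, invented names):

```rust
// Model of the sticky for_child_nodes_also bool: once set true, a later
// SetWeakOk without it does not reset the server-side state.

#[derive(Default)]
struct WeakOkState {
    weak_ok: bool,
    for_child_nodes_also: bool,
}

impl WeakOkState {
    fn set_weak_ok(&mut self, for_child_nodes_also: bool) {
        self.weak_ok = true;
        // Sticky: true can be set, but a later false does not clear it.
        self.for_child_nodes_also |= for_child_nodes_also;
    }

    /// Whether a child created after the messages so far inherits weak-ok.
    fn new_child_inherits(&self) -> bool {
        self.for_child_nodes_also
    }
}
```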

pub fn attach_node_tracking( &self, payload: NodeAttachNodeTrackingRequest, ) -> Result<(), Error>

The server_end will be closed after this Node and any child nodes have released their buffer counts, making those counts available for reservation by a different Node via [fuchsia.sysmem2/BufferCollection.AttachToken].

The Node buffer counts may not be released until the entire tree of Node(s) is closed or failed, because [fuchsia.sysmem2/BufferCollection.Release] followed by channel close does not immediately un-reserve the Node buffer counts. Instead, the Node buffer counts remain reserved until the orphaned node is later cleaned up.

If the Node exceeds a fairly large number of attached eventpair server ends, a log message will indicate this and the Node (and the appropriate sub-tree) will fail.

The server_end will remain open when [fuchsia.sysmem2/Allocator.BindSharedCollection] converts a [fuchsia.sysmem2/BufferCollectionToken] into a [fuchsia.sysmem2/BufferCollection].

This message can also be used with a [fuchsia.sysmem2/BufferCollectionTokenGroup].

pub fn duplicate_sync( &self, payload: &BufferCollectionTokenDuplicateSyncRequest, ___deadline: MonotonicInstant, ) -> Result<BufferCollectionTokenDuplicateSyncResponse, Error>

Create additional [fuchsia.sysmem2/BufferCollectionToken](s) from this one, referring to the same buffer collection.

The created tokens are children of this token in the [fuchsia.sysmem2/Node] hierarchy.

This method can be used to add more participants, by transferring the newly created tokens to additional participants.

A new token will be returned for each entry in the rights_attenuation_masks array.

If the called token may not actually be a valid token due to a potentially hostile/untrusted provider of the token, consider using [fuchsia.sysmem2/Allocator.ValidateBufferCollectionToken] first instead of potentially getting stuck indefinitely if [fuchsia.sysmem2/BufferCollectionToken.DuplicateSync] never responds due to the calling token not being a real token.

In contrast to [fuchsia.sysmem2/BufferCollectionToken.Duplicate], no separate [fuchsia.sysmem2/Node.Sync] is needed after calling this method, because the sync step is included in this call, at the cost of a round trip during this call.

All tokens must be turned in to sysmem via [fuchsia.sysmem2/Allocator.BindSharedCollection] or [fuchsia.sysmem2/Node.Release] for a BufferCollection to successfully allocate buffers (or to logically allocate buffers in the case of subtrees involving [fuchsia.sysmem2/BufferCollection.AttachToken]).

All table fields are currently required.

  • request rights_attenuation_mask In each entry of rights_attenuation_masks, rights bits that are zero will be absent in the buffer VMO rights obtainable via the corresponding returned token. This allows an initiator or intermediary participant to attenuate the rights available to a participant. This does not allow a participant to gain rights that the participant doesn’t already have. The value ZX_RIGHT_SAME_RIGHTS can be used to specify that no attenuation should be applied.
  • response tokens The client ends of each newly created token.

pub fn duplicate( &self, payload: BufferCollectionTokenDuplicateRequest, ) -> Result<(), Error>

Create an additional [fuchsia.sysmem2/BufferCollectionToken] from this one, referring to the same buffer collection.

The created token is a child of this token in the [fuchsia.sysmem2/Node] hierarchy.

This method can be used to add a participant, by transferring the newly created token to another participant.

This one-way message can be used instead of the two-way [fuchsia.sysmem2/BufferCollectionToken.DuplicateSync] FIDL call in performance sensitive cases where it would be undesirable to wait for sysmem to respond to [fuchsia.sysmem2/BufferCollectionToken.DuplicateSync] or when the client code isn’t structured to make it easy to duplicate all the needed tokens at once.

After sending one or more Duplicate messages, and before sending the newly created child tokens to other participants (or to other [fuchsia.sysmem2/Allocator] channels), the client must send a [fuchsia.sysmem2/Node.Sync] and wait for the Sync response. The Sync call can be made on the token, or on the BufferCollection obtained by passing this token to BindSharedCollection. Either will ensure that the server knows about the tokens created via Duplicate before the other participant sends the token to the server via separate Allocator channel.

All tokens must be turned in via [fuchsia.sysmem2/Allocator.BindSharedCollection] or [fuchsia.sysmem2/Node.Release] for a BufferCollection to successfully allocate buffers.

All table fields are currently required.

  • request rights_attenuation_mask The rights bits that are zero in this mask will be absent in the buffer VMO rights obtainable via the client end of token_request. This allows an initiator or intermediary participant to attenuate the rights available to a delegate participant. This does not allow a participant to gain rights that the participant doesn’t already have. The value ZX_RIGHT_SAME_RIGHTS can be used to specify that no attenuation should be applied.
    • These values for rights_attenuation_mask result in no attenuation:
      • ZX_RIGHT_SAME_RIGHTS (preferred)
      • 0xFFFFFFFF (this is reasonable when an attenuation mask is computed)
      • 0 (deprecated - do not use 0 - an ERROR will go to the log)
  • request token_request is the server end of a BufferCollectionToken channel. The client end of this channel acts as another participant in the shared buffer collection.
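The attenuation semantics can be sketched as a plain bitwise model. The ZX_RIGHT_* values below are the real Zircon constants; the function itself is an illustrative sketch of the rule described above, not sysmem’s implementation:

```rust
// Rights attenuation: the child token's obtainable VMO rights are the
// parent's rights AND-ed with the mask, so a mask can only remove rights,
// never add them. ZX_RIGHT_SAME_RIGHTS (or an all-ones mask) means "no
// attenuation".

const ZX_RIGHT_SAME_RIGHTS: u32 = 1 << 31;
const ZX_RIGHT_READ: u32 = 1 << 2;
const ZX_RIGHT_WRITE: u32 = 1 << 3;
const ZX_RIGHT_MAP: u32 = 1 << 5;

fn attenuate(parent_rights: u32, rights_attenuation_mask: u32) -> u32 {
    if rights_attenuation_mask == ZX_RIGHT_SAME_RIGHTS
        || rights_attenuation_mask == u32::MAX
    {
        parent_rights // no attenuation
    } else {
        parent_rights & rights_attenuation_mask
    }
}
```

For example, an initiator holding read/write/map rights can hand a delegate a read-only view by passing a mask without ZX_RIGHT_WRITE; a mask naming rights the parent lacks grants nothing extra.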

pub fn set_dispensable(&self) -> Result<(), Error>

Set this [fuchsia.sysmem2/BufferCollectionToken] to dispensable.

When the BufferCollectionToken is converted to a [fuchsia.sysmem2/BufferCollection], the dispensable status applies to the BufferCollection also.

Normally, if a client closes a [fuchsia.sysmem2/BufferCollection] client end without having sent [fuchsia.sysmem2/BufferCollection.Release] first, the BufferCollection [fuchsia.sysmem2/Node] will fail, which also propagates failure to the parent [fuchsia.sysmem2/Node] and so on up to the root Node, which fails the whole buffer collection. In contrast, a dispensable Node can fail after buffers are allocated without causing failure of its parent in the [fuchsia.sysmem2/Node] hierarchy.

The dispensable Node participates in constraints aggregation along with its parent before buffer allocation. If the dispensable Node fails before buffers are allocated, the failure propagates to the dispensable Node’s parent.

After buffers are allocated, failure of the dispensable Node (or any child of the dispensable Node) does not propagate to the dispensable Node’s parent. Failure does propagate from a normal child of a dispensable Node to the dispensable Node. Failure of a child is blocked from reaching its parent if the child is attached using [fuchsia.sysmem2/BufferCollection.AttachToken], or if the child is dispensable and the failure occurred after allocation.

A dispensable Node can be used in cases where a participant needs to provide constraints, but after buffers are allocated, the participant can fail without causing buffer collection failure from the parent Node’s point of view.

In contrast, BufferCollection.AttachToken can be used to create a BufferCollectionToken which does not participate in constraints aggregation with its parent Node, and whose failure at any time does not propagate to its parent Node, and whose potential delay providing constraints does not prevent the parent Node from completing its buffer allocation.

An initiator (creator of the root Node using [fuchsia.sysmem2/Allocator.AllocateSharedCollection]) may in some scenarios choose to initially use a dispensable Node for a first instance of a participant, and then later if the first instance of that participant fails, a new second instance of that participant may be given a BufferCollectionToken created with AttachToken.

Normally a client will SetDispensable on a BufferCollectionToken shortly before sending the dispensable BufferCollectionToken to a delegate participant. Because SetDispensable prevents propagation of child Node failure to parent Node(s), if the client was relying on noticing child failure via failure of the parent Node retained by the client, the client may instead need to notice failure via other means. If other means aren’t available/convenient, the client can instead retain the dispensable Node and create a child Node under that to send to the delegate participant, retaining this Node in order to notice failure of the subtree rooted at this Node via this Node’s ZX_CHANNEL_PEER_CLOSED signal, and take whatever action is appropriate (e.g. starting a new instance of the delegate participant and handing it a BufferCollectionToken created using [fuchsia.sysmem2/BufferCollection.AttachToken], or propagate failure and clean up in a client-specific way).

While it is possible (and potentially useful) to SetDispensable on a direct child of a BufferCollectionTokenGroup Node, it isn’t possible to later replace a failed dispensable Node that was a direct child of a BufferCollectionTokenGroup with a new token using AttachToken (since there’s no AttachToken on a group). Instead, to enable AttachToken replacement in this case, create an additional non-dispensable token that’s a direct child of the group and make the existing dispensable token a child of the additional token. This way, the additional token that is a direct child of the group has BufferCollection.AttachToken which can be used to replace the failed dispensable token.

SetDispensable on an already-dispensable token is idempotent.

pub fn create_buffer_collection_token_group( &self, payload: BufferCollectionTokenCreateBufferCollectionTokenGroupRequest, ) -> Result<(), Error>

Create a logical OR among a set of tokens, called a [fuchsia.sysmem2/BufferCollectionTokenGroup].

Most sysmem clients and many participants don’t need to care about this message or about BufferCollectionTokenGroup(s). However, in some cases a participant wants to attempt to include one set of delegate participants, but if constraints don’t combine successfully that way, fall back to a different (possibly overlapping) set of delegate participants, and/or fall back to a less demanding strategy (in terms of how strict the [fuchsia.sysmem2/BufferCollectionConstraints] are, across all involved delegate participants). In such cases, a BufferCollectionTokenGroup is useful.

A BufferCollectionTokenGroup is used to create a 1 of N OR among N child [fuchsia.sysmem2/BufferCollectionToken](s). The child tokens which are not selected during aggregation will fail (close), which a potential participant should notice when their BufferCollection channel client endpoint sees PEER_CLOSED, allowing the participant to clean up the speculative usage that didn’t end up happening (this is similar to a normal BufferCollection server end closing on failure to allocate a logical buffer collection or later async failure of a buffer collection).

See comments on protocol BufferCollectionTokenGroup.

Any rights_attenuation_mask or AttachToken/SetDispensable to be applied to the whole group can be achieved by inserting a BufferCollectionToken, used only for this purpose, as a direct parent of the BufferCollectionTokenGroup.

All table fields are currently required.

  • request group_request: the server end of a BufferCollectionTokenGroup channel to be served by sysmem.
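The 1-of-N OR aggregation behavior described above can be modeled roughly as follows. This is plain Rust, not the sysmem bindings; `select_group_child` and the boolean "constraints combined successfully" flag are illustrative stand-ins for sysmem's internal constraint aggregation:

```rust
// Illustrative model (not the sysmem API): a BufferCollectionTokenGroup
// expresses a 1-of-N OR among its child tokens. Aggregation selects one
// child whose constraints combine successfully with the rest of the
// collection; every unselected child's channel fails (PEER_CLOSED), and
// the corresponding participant cleans up its speculative usage.
fn select_group_child<'a>(
    children: &[(&'a str, bool)], // (participant name, constraints combine successfully?)
) -> (Option<&'a str>, Vec<&'a str>) {
    // Pick the first child whose constraints combined successfully.
    let selected = children.iter().find(|&&(_, ok)| ok).map(|&(name, _)| name);
    // Every unselected child token fails (its channel is closed).
    let closed = children
        .iter()
        .map(|&(name, _)| name)
        .filter(|&name| Some(name) != selected)
        .collect();
    (selected, closed)
}

fn main() {
    // The preferred delegate set fails to combine; the group falls back to
    // the less demanding strategy, and the preferred child's channel closes.
    let children = [("preferred", false), ("fallback", true)];
    let (selected, closed) = select_group_child(&children);
    assert_eq!(selected, Some("fallback"));
    assert_eq!(closed, vec!["preferred"]);
    println!("selected: {:?}, closed: {:?}", selected, closed);
}
```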

Trait Implementations§

Source§

impl Debug for BufferCollectionTokenSynchronousProxy

Source§

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter. Read more
Source§

impl SynchronousProxy for BufferCollectionTokenSynchronousProxy

Source§

type Proxy = BufferCollectionTokenProxy

The async proxy for the same protocol.
Source§

type Protocol = BufferCollectionTokenMarker

The protocol which this Proxy controls.
Source§

fn from_channel(inner: Channel) -> Self

Create a proxy over the given channel.
Source§

fn into_channel(self) -> Channel

Convert the proxy back into a channel.
Source§

fn as_channel(&self) -> &Channel

Get a reference to the proxy’s underlying channel. Read more
§

fn is_closed(&self) -> Result<bool, Status>

Returns true if the proxy has received the PEER_CLOSED signal. Read more

Auto Trait Implementations§

Blanket Implementations§

Source§

impl<T> Any for T
where T: 'static + ?Sized,

Source§

fn type_id(&self) -> TypeId

Gets the TypeId of self. Read more
Source§

impl<T> Borrow<T> for T
where T: ?Sized,

Source§

fn borrow(&self) -> &T

Immutably borrows from an owned value. Read more
Source§

impl<T> BorrowMut<T> for T
where T: ?Sized,

Source§

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value. Read more
§

impl<T, D> Encode<Ambiguous1, D> for T
where D: ResourceDialect,

§

unsafe fn encode( self, _encoder: &mut Encoder<'_, D>, _offset: usize, _depth: Depth, ) -> Result<(), Error>

Encodes the object into the encoder’s buffers. Any handles stored in the object are swapped for Handle::INVALID. Read more
§

impl<T, D> Encode<Ambiguous2, D> for T
where D: ResourceDialect,

§

unsafe fn encode( self, _encoder: &mut Encoder<'_, D>, _offset: usize, _depth: Depth, ) -> Result<(), Error>

Encodes the object into the encoder’s buffers. Any handles stored in the object are swapped for Handle::INVALID. Read more
Source§

impl<T> From<T> for T

Source§

fn from(t: T) -> T

Returns the argument unchanged.

§

impl<T> Instrument for T

§

fn instrument(self, span: Span) -> Instrumented<Self>

Instruments this type with the provided [Span], returning an Instrumented wrapper. Read more
§

fn in_current_span(self) -> Instrumented<Self>

Instruments this type with the current Span, returning an Instrumented wrapper. Read more
Source§

impl<T, U> Into<U> for T
where U: From<T>,

Source§

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

§

impl<T> Pointable for T

§

const ALIGN: usize = _

The alignment of the pointer.
§

type Init = T

The type for initializers.
§

unsafe fn init(init: <T as Pointable>::Init) -> usize

Initializes a value with the given initializer. Read more
§

unsafe fn deref<'a>(ptr: usize) -> &'a T

Dereferences the given pointer. Read more
§

unsafe fn deref_mut<'a>(ptr: usize) -> &'a mut T

Mutably dereferences the given pointer. Read more
§

unsafe fn drop(ptr: usize)

Drops the object pointed to by the given pointer. Read more
Source§

impl<T, U> TryFrom<U> for T
where U: Into<T>,

Source§

type Error = Infallible

The type returned in the event of a conversion error.
Source§

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.
Source§

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

Source§

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.
Source§

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.
§

impl<T> WithSubscriber for T

§

fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>
where S: Into<Dispatch>,

Attaches the provided Subscriber to this type, returning a [WithDispatch] wrapper. Read more
§

fn with_current_subscriber(self) -> WithDispatch<Self>

Attaches the current default Subscriber to this type, returning a [WithDispatch] wrapper. Read more