pub struct BufferCollectionSynchronousProxy { /* private fields */ }
Implementations
impl BufferCollectionSynchronousProxy
pub fn new(channel: Channel) -> Self
pub fn into_channel(self) -> Channel
pub fn wait_for_event(
    &self,
    deadline: MonotonicInstant,
) -> Result<BufferCollectionEvent, Error>
Waits until an event arrives and returns it. It is safe for other threads to make concurrent requests while waiting for an event.
pub fn sync(&self, ___deadline: MonotonicInstant) -> Result<(), Error>
Ensure that previous messages, including Duplicate() messages on a token, collection, or group, have been received server side.
Calling BufferCollectionToken.Sync() on a token that isn’t/wasn’t a valid sysmem token risks the Sync() hanging forever. See ValidateBufferCollectionToken() for one way to mitigate the possibility of a hostile/fake BufferCollectionToken at the cost of one round trip. Another way is to pass the token to BindSharedCollection(), which also validates the token as part of exchanging it for a BufferCollection channel, and BufferCollection Sync() can then be used.
After a Sync(), it’s then safe to send the client end of token_request to another participant knowing the server will recognize the token when it’s sent into BindSharedCollection() by the other participant.
Other options include waiting for each token.Duplicate() to complete individually (using separate call to token.Sync() after each), or calling Sync() on BufferCollection after the token has been turned in via BindSharedCollection().
Another way to mitigate is to avoid calling Sync() on the token, and instead later deal with potential failure of BufferCollection.Sync() if the original token was invalid. This option can be preferable from a performance point of view, but requires client code to delay sending tokens duplicated from this token until after client code has converted the duplicating token to a BufferCollection and received successful response from BufferCollection.Sync().
Prefer using BufferCollection.Sync() instead, when feasible (see above). When BufferCollection.Sync() isn’t feasible, the caller must already know that this token is/was valid, or BufferCollectionToken.Sync() may hang forever. See ValidateBufferCollectionToken() to check token validity first if the token isn’t already known to be (is/was) valid.
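For illustration, a minimal sketch of using sync() from this synchronous proxy (the `collection` binding, the `zx::` paths, and the infinite deadline are assumptions for the sketch, not part of these docs):

// Sketch: ensure earlier messages (e.g. token Duplicate()s sent before
// this BufferCollection was bound) have reached sysmem. Assumes
// `collection: BufferCollectionSynchronousProxy` is already bound.
collection.sync(zx::MonotonicInstant::INFINITE)?;
// It is now safe to hand previously duplicated tokens to other participants.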
pub fn close(&self) -> Result<(), Error>
On a BufferCollectionToken channel:
Normally a participant will convert a BufferCollectionToken into a BufferCollection view, but a participant is also free to Close() the token (and then close the channel immediately or shortly later in response to server closing its end), which avoids causing logical buffer collection failure. Normally an unexpected token channel close will cause logical buffer collection failure (the only exceptions being certain cases involving AttachToken() or SetDispensable()).
On a BufferCollection channel:
By default the server handles unexpected failure of a BufferCollection by failing the whole logical buffer collection. Partly this is to expedite closing VMO handles to reclaim memory when any participant fails. If a participant would like to cleanly close a BufferCollection view without causing logical buffer collection failure, the participant can send Close() before closing the client end of the BufferCollection channel. If this is the last BufferCollection view, the logical buffer collection will still go away. The Close() can occur before or after SetConstraints(). If before SetConstraints(), the buffer collection won’t require constraints from this node in order to allocate. If after SetConstraints(), the constraints are retained and aggregated along with any subsequent logical allocation(s), despite the lack of channel connection.
On a BufferCollectionTokenGroup channel:
By default, unexpected failure of a BufferCollectionTokenGroup will trigger failure of the logical BufferCollectionTokenGroup and will propagate failure to its parent. To close a BufferCollectionTokenGroup channel without failing the logical group or propagating failure, send Close() before closing the channel client endpoint.
If Close() occurs before AllChildrenPresent(), the logical buffer collection will still fail despite the Close() (because sysmem can’t be sure whether all relevant children were created, so it’s ambiguous whether all relevant constraints will be provided to sysmem). If Close() occurs after AllChildrenPresent(), the children and all their constraints remain intact (just as they would if the BufferCollectionTokenGroup channel had remained open), and the close doesn’t trigger or propagate failure.
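As a hedged illustration of the BufferCollection case (the `collection` binding is assumed):

// Sketch: cleanly detach this view without failing the whole logical
// buffer collection.
collection.close()?;
// Closing the channel client end is now not treated as unexpected failure.
drop(collection);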
pub fn set_name(&self, priority: u32, name: &str) -> Result<(), Error>
Set a name for VMOs in this buffer collection. The name may be truncated. The name only affects VMOs allocated after it’s set - this call does not rename existing VMOs. If multiple clients set different names, the name set with the larger priority value wins.
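For example (a sketch; the priority value and name are illustrative only):

// Sketch: name VMOs allocated after this point; a competing SetName()
// with a larger priority value would win instead.
collection.set_name(100, "my-app:video-frames")?;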
pub fn set_debug_client_info(&self, name: &str, id: u64) -> Result<(), Error>
Set information about the current client that can be used by sysmem to help debug leaking memory and hangs waiting for constraints. |name| can be an arbitrary string, but the current process name (see fsl::GetCurrentProcessName()) is a good default. |id| can be an arbitrary id, but the current process ID (see fsl::GetCurrentProcessKoid()) is a good default.
Also used when verbose logging is enabled (see SetVerboseLogging()) to indicate which client is closing their channel first, leading to sub-tree failure (which can be normal if the purpose of the sub-tree is over, but if happening earlier than expected, the client-channel-specific name can help diagnose where the failure is first coming from, from sysmem’s point of view).
By default (unless overridden by this message or using Allocator.SetDebugClientInfo()), a Node will copy info from its parent Node at the time the child Node is created. While this can be better than nothing, it’s often better for each participant to use Node.SetDebugClientInfo() or Allocator.SetDebugClientInfo() to keep the info directly relevant to the current client. Also, SetVerboseLogging() can be used to help disambiguate if a Node is suspected of having info that was copied from its parent.
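A hedged sketch (the component name and koid below are placeholders; the fsl:: helpers named above are the C++ equivalents):

// Sketch: identify this client to sysmem for debug output.
let process_koid: u64 = 12345; // placeholder: use the real process koid
collection.set_debug_client_info("my_component.cm", process_koid)?;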
pub fn set_debug_timeout_log_deadline(&self, deadline: i64) -> Result<(), Error>
Sysmem logs a warning if not all clients have set constraints 5 seconds after creating a collection. Clients can call this method to change when the log is printed. If multiple clients set the deadline, it’s unspecified which deadline will take effect.
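A sketch (the zx:: duration/instant helper names are an assumption here; the parameter is a monotonic deadline in nanoseconds):

// Sketch: move the "constraints not yet set" warning out to 30 seconds
// from now.
let deadline = zx::MonotonicInstant::after(zx::MonotonicDuration::from_seconds(30));
collection.set_debug_timeout_log_deadline(deadline.into_nanos())?;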
pub fn set_verbose_logging(&self) -> Result<(), Error>
Verbose logging includes constraints set via SetConstraints() from each client along with info set via SetDebugClientInfo() and the structure of the tree of Node(s).
Normally sysmem prints only a single line complaint when aggregation fails, with just the specific detailed reason that aggregation failed, with minimal context. While this is often enough to diagnose a problem if only a small change was made and the system had been working before the small change, it’s often not particularly helpful for getting a new buffer collection to work for the first time. Especially with more complex trees of nodes, involving things like AttachToken(), SetDispensable(), BufferCollectionTokenGroup nodes, and associated sub-trees of nodes, verbose logging may help in diagnosing what the tree looks like and why it’s failing a logical allocation, or why a tree or sub-tree is failing sooner than expected.
The intent of the extra logging is to be acceptable from a performance point of view, if only enabled on a low number of buffer collections. If we’re not tracking down a bug, we shouldn’t send this message.
If too many participants leave verbose logging enabled, we may end up needing to require that system-wide sysmem verbose logging be permitted via some other setting, to avoid sysmem spamming the log too much due to this message.
This may be a NOP for some nodes due to intentional policy associated with the node, if we don’t trust a node enough to let it turn on verbose logging.
pub fn get_node_ref(
    &self,
    ___deadline: MonotonicInstant,
) -> Result<Event, Error>
This gets an event handle that can be used as a parameter to IsAlternateFor() called on any Node. The client will not be granted the right to signal this event, as this handle should only be used as proof that the client obtained this handle from this Node.
Because this is a get, not a set, no Sync() is needed between the GetNodeRef() and the call to IsAlternateFor(), despite the two calls potentially being on different channels.
See also IsAlternateFor().
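For example (a sketch; the `collection` binding and `zx::` path are assumed):

// Sketch: obtain a proof-of-origin handle to pass to IsAlternateFor()
// on some other Node; no Sync() is needed in between.
let node_ref = collection.get_node_ref(zx::MonotonicInstant::INFINITE)?;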
pub fn is_alternate_for(
    &self,
    node_ref: Event,
    ___deadline: MonotonicInstant,
) -> Result<NodeIsAlternateForResult, Error>
This checks whether the calling node is in a subtree rooted at a different child token of a common parent BufferCollectionTokenGroup, in relation to the passed-in node_ref.
This call is for assisting with admission control de-duplication, and with debugging.
The node_ref must be obtained using GetNodeRef() of a BufferCollectionToken, BufferCollection, or BufferCollectionTokenGroup.
The node_ref can be a duplicated handle; it’s not necessary to call GetNodeRef() for every call to IsAlternateFor().
If a calling token may not actually be a valid token at all due to a potentially hostile/untrusted provider of the token, call ValidateBufferCollectionToken() first instead of potentially getting stuck indefinitely if IsAlternateFor() never responds due to a calling token not being a real token (not really talking to sysmem). Another option is to call BindSharedCollection with this token first which also validates the token along with converting it to a BufferCollection, then call BufferCollection IsAlternateFor().
error values:
ZX_ERR_NOT_FOUND means the node_ref wasn’t found within the same logical buffer collection as the calling Node. Before logical allocation, and within the same logical allocation sub-tree, this essentially means that the node_ref was never part of this logical buffer collection: before logical allocation, all node_refs that come into existence remain in existence at least until logical allocation (including Node(s) that have done a Close() and closed their channel), and for ZX_ERR_NOT_FOUND to be returned, this Node’s channel needs to still be connected server side, which won’t be the case if the whole logical allocation has failed.
After logical allocation, or in a different logical allocation sub-tree, there are additional potential reasons for this error. For example, a different logical allocation (separated from this Node’s logical allocation by an AttachToken() or SetDispensable()) can fail its sub-tree, deleting those Node(s), or a BufferCollectionTokenGroup may exist and may select a different child sub-tree than the sub-tree the node_ref is in, causing deletion of the node_ref Node. The only time sysmem keeps a Node around after that Node has no corresponding channel is when Close() is used and the Node’s sub-tree has not yet failed.
Another reason for this error is if the node_ref is an eventpair handle with sufficient rights, but isn’t actually a real node_ref obtained from GetNodeRef().
ZX_ERR_INVALID_ARGS means the caller passed a node_ref that isn’t an eventpair handle, or doesn’t have the needed rights expected on a real node_ref.
No other failing status codes are returned by this call. However, sysmem may add additional codes in future, so the client should have sensible default handling for any failing status code.
On success, is_alternate has the following meaning:
- true - The first parent node in common between the calling node and the node_ref Node is a BufferCollectionTokenGroup. This means that the calling Node and the node_ref Node will not have both their constraints apply - rather sysmem will choose one or the other of the constraints - never both. This is because only one child of a BufferCollectionTokenGroup is selected during logical allocation, with only that one child’s sub-tree contributing to constraints aggregation.
- false - The first parent node in common between the calling Node and the node_ref Node is not a BufferCollectionTokenGroup. Currently, this means the first parent node in common is a BufferCollectionToken or BufferCollection (regardless of whether Close() has been sent on it). This means that the calling Node and the node_ref Node may have both their constraints apply during constraints aggregation of the logical allocation, if both Node(s) are selected by any parent BufferCollectionTokenGroup(s) involved. In this case, there is no BufferCollectionTokenGroup that will directly prevent the two Node(s) from both being selected and their constraints both aggregated. Even when false, however, one or both Node(s) may still be eliminated from consideration if one or both has a direct or indirect parent BufferCollectionTokenGroup which selects a child sub-tree other than the sub-tree containing the calling Node or node_ref Node.
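Putting the above together, a hedged sketch of the call (the `collection` and `node_ref` bindings are assumed, and the response field name follows the generated bindings):

// Sketch: `node_ref` was obtained via get_node_ref() on some other Node
// in the same logical buffer collection (possibly a duplicated handle).
match collection.is_alternate_for(node_ref, zx::MonotonicInstant::INFINITE)? {
    Ok(response) => {
        if response.is_alternate {
            // constraints of the two Nodes will never both be aggregated
        } else {
            // both Nodes' constraints may apply in the same aggregation
        }
    }
    Err(status) => {
        // e.g. zx::sys::ZX_ERR_NOT_FOUND; see the error values above
    }
}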
pub fn set_constraints(
    &self,
    has_constraints: bool,
    constraints: &BufferCollectionConstraints,
) -> Result<(), Error>
Provide BufferCollectionConstraints to the logical BufferCollection.
A participant may only call SetConstraints() once.
Sometimes the initiator is a participant only in the sense of wanting to keep an eye on success/failure to populate with buffers, and zx.Status on failure. In that case, has_constraints can be false, and constraints will be ignored.
VMO handles will not be provided to the client that sends null constraints - that can be intentional for an initiator that doesn’t need VMO handles. Not having VMO handles doesn’t prevent the initiator from adjusting which portion of a buffer is considered valid and similar, but the initiator can’t hold a VMO handle open to prevent the logical BufferCollection from cleaning up if the logical BufferCollection needs to go away regardless of the initiator’s degree of involvement for whatever reason.
For population of buffers to be attempted, all holders of a BufferCollection client channel need to call SetConstraints() before sysmem will attempt to allocate buffers.
has_constraints: if false, the constraints are effectively null, and constraints are ignored. The sender of null constraints won’t get any VMO handles in BufferCollectionInfo, but can still find out how many buffers were allocated and can still refer to buffers by their buffer_index.
constraints: the constraints on the buffer collection.
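As a hedged sketch (not taken from these docs): BufferCollectionConstraints is a large struct, so the sketch assumes the generated types implement Default; if they don’t, every field must be filled in explicitly. The usage bits and counts below are illustrative only.

// Sketch: a participant that will map buffers for CPU read/write.
let constraints = BufferCollectionConstraints {
    usage: BufferUsage {
        cpu: CPU_USAGE_READ_OFTEN | CPU_USAGE_WRITE_OFTEN,
        ..Default::default() // assumption: Default is available
    },
    min_buffer_count_for_camping: 2,
    ..Default::default() // assumption: Default is available
};
collection.set_constraints(true, &constraints)?;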
pub fn wait_for_buffers_allocated(
    &self,
    ___deadline: MonotonicInstant,
) -> Result<(i32, BufferCollectionInfo2), Error>
This request completes when buffers have been allocated, or responds with failure detail if allocation has been attempted but failed.
The following must occur before buffers will be allocated:
- All BufferCollectionToken(s) of the logical buffer collection must be turned in via BindSharedCollection().
- All BufferCollection(s) of the logical BufferCollection must have had SetConstraints() sent to them.
Returns ZX_OK if successful.
Returns ZX_ERR_NO_MEMORY if the request is valid but cannot be fulfilled due to resource exhaustion.
Returns ZX_ERR_ACCESS_DENIED if the caller is not permitted to obtain the buffers it requested.
Returns ZX_ERR_INVALID_ARGS if the request is malformed.
Returns ZX_ERR_NOT_SUPPORTED if the request is valid but cannot be satisfied, perhaps due to hardware limitations.
buffer_collection_info has the VMO handles and other related info.
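A hedged sketch of handling the result (field names follow the generated BufferCollectionInfo2 struct and are an assumption here):

// Sketch: block until allocation succeeds or fails.
let (status, buffer_collection_info) =
    collection.wait_for_buffers_allocated(zx::MonotonicInstant::INFINITE)?;
if status != zx::sys::ZX_OK {
    // allocation was attempted but failed; see the status codes above
}
// On ZX_OK, the first buffer_collection_info.buffer_count entries of
// buffer_collection_info.buffers hold the VMO handles.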
pub fn check_buffers_allocated(
    &self,
    ___deadline: MonotonicInstant,
) -> Result<i32, Error>
This returns the same result code as WaitForBuffersAllocated() if the buffer collection has been allocated or has failed, or ZX_ERR_UNAVAILABLE if WaitForBuffersAllocated() would block.
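A sketch of non-blocking use (the status constant path follows the zx crate):

// Sketch: poll allocation status without blocking on allocation itself
// (the deadline still bounds the round trip to sysmem).
let status = collection.check_buffers_allocated(zx::MonotonicInstant::INFINITE)?;
if status == zx::sys::ZX_ERR_UNAVAILABLE {
    // buffers are not allocated yet; WaitForBuffersAllocated() would block
}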
pub fn set_constraints_aux_buffers(
    &self,
    constraints: &BufferCollectionConstraintsAuxBuffers,
) -> Result<(), Error>
pub fn get_aux_buffers(
    &self,
    ___deadline: MonotonicInstant,
) -> Result<(i32, BufferCollectionInfo2), Error>
pub fn attach_token(
    &self,
    rights_attenuation_mask: u32,
    token_request: ServerEnd<BufferCollectionTokenMarker>,
) -> Result<(), Error>
Create a new token, for trying to add a new participant to an existing collection, if the existing collection’s buffer counts, constraints, and participants allow.
This can be useful in replacing a failed participant, and/or in adding/re-adding a participant after buffers have already been allocated.
Failure of an attached token / collection does not propagate to the parent of the attached token. Failure does propagate from a normal child of a dispensable token to the dispensable token. Failure of a child is blocked from reaching its parent if the child is attached, or if the child is dispensable and the failure occurred after logical allocation.
An initiator may in some scenarios choose to initially use a dispensable token for a given instance of a participant, and then later, if the first instance of that participant fails, a new second instance of that participant may be given a token created with AttachToken().
From the point of view of the client end of the BufferCollectionToken channel, the token acts like any other token. The client can Duplicate() the token as needed, and can send the token to a different process. The token should be converted to a BufferCollection channel as normal by calling BindSharedCollection(). SetConstraints() should be called on that BufferCollection channel.
A success result from WaitForBuffersAllocated() means the new participant’s constraints were satisfiable using the already-existing buffer collection, the already-established BufferCollectionInfo including image format constraints, and the already-existing other participants and their buffer counts. A failure result means the new participant’s constraints cannot be satisfied using the existing buffer collection and its already-logically-allocated participants. Creating a new collection instead may allow all participants’ constraints to be satisfied, assuming SetDispensable() is used in place of AttachToken(), or a normal token is used.
A token created with AttachToken() performs constraints aggregation with all constraints currently in effect on the buffer collection, plus the attached token under consideration plus child tokens under the attached token which are not themselves an attached token or under such a token.
Allocation of buffer_count to min_buffer_count_for_camping etc is first-come first-served, but a child can’t logically allocate before all its parents have sent SetConstraints().
See also SetDispensable(), which in contrast to AttachToken(), has the created token + children participate in constraints aggregation along with its parent.
The newly created token needs to be Sync()ed to sysmem before the new token can be passed to BindSharedCollection(). The Sync() of the new token can be accomplished with BufferCollection.Sync() on this BufferCollection. Alternately BufferCollectionToken.Sync() on the new token also works. A BufferCollectionToken.Sync() can be started after any BufferCollectionToken.Duplicate() messages have been sent via the newly created token, to also sync those additional tokens to sysmem using a single round-trip.
These values for rights_attenuation_mask result in no attenuation (note that 0 is not on this list; 0 will output an ERROR to the system log to help diagnose the bug in client code):
- ZX_RIGHT_SAME_RIGHTS (preferred)
- 0xFFFFFFFF (this is reasonable when an attenuation mask is computed)
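Putting the pieces above together, a hedged sketch (the endpoint-creation signature and the zx:: constant path follow current bindings and are an assumption here):

// Sketch: create the new token's channel, attach with no attenuation,
// then Sync() before handing the client end to the new participant.
let (token_client, token_server) =
    fidl::endpoints::create_endpoints::<BufferCollectionTokenMarker>();
collection.attach_token(zx::sys::ZX_RIGHT_SAME_RIGHTS, token_server)?;
collection.sync(zx::MonotonicInstant::INFINITE)?;
// `token_client` can now be sent to the new participant, which passes it
// to Allocator.BindSharedCollection() and then calls SetConstraints().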
pub fn attach_lifetime_tracking(
    &self,
    server_end: EventPair,
    buffers_remaining: u32,
) -> Result<(), Error>
AttachLifetimeTracking() is intended to allow a client to wait until an old logical buffer collection is fully or mostly deallocated before attempting allocation of a new logical buffer collection.
Attach an eventpair endpoint to the logical buffer collection, so that the server_end will be closed when the number of buffers allocated drops to ‘buffers_remaining’. The server_end won’t close until after logical allocation has completed.
If logical allocation fails, such as for an attached sub-tree (using AttachToken()), the server_end will close during that failure regardless of the number of buffers potentially allocated in the overall logical buffer collection.
The lifetime signalled by this event includes asynchronous cleanup of allocated buffers, and this asynchronous cleanup cannot occur until all holders of VMO handles to the buffers have closed those VMO handles. Therefore clients should take care not to become blocked forever waiting for ZX_EVENTPAIR_PEER_CLOSED to be signalled, especially if any of the participants using the logical buffer collection are less trusted or less reliable.
The buffers_remaining parameter allows waiting for all but buffers_remaining buffers to be fully deallocated. This can be useful in situations where a known number of buffers are intentionally not closed so that the data can continue to be used, such as for keeping the last available video picture displayed in the UI even if the video stream was using protected output buffers. It’s outside the scope of the BufferCollection interface (at least for now) to determine how many buffers may be held without closing, but it’ll typically be in the range 0-2.
This mechanism is meant to be compatible with other protocols providing a similar AttachLifetimeTracking() mechanism, in that duplicates of the same event can be sent to more than one AttachLifetimeTracking(), and the ZX_EVENTPAIR_PEER_CLOSED will be signalled when all the lifetime over conditions are met (all holders of duplicates have closed their handle(s)).
There is no way to cancel an attach. Closing the client end of the eventpair doesn’t subtract from the number of pending attach(es).
Closing the client’s end doesn’t result in any action by the server. If the server listens to events from the client end at all, it is for debug logging only.
The server intentionally doesn’t “trust” any bits signalled by the client. This mechanism intentionally uses only ZX_EVENTPAIR_PEER_CLOSED which can’t be triggered early, and is only triggered when all handles to server_end are closed. No meaning is associated with any of the other signal bits, and clients should functionally ignore any other signal bits on either end of the eventpair or its peer.
The server_end may lack ZX_RIGHT_SIGNAL or ZX_RIGHT_SIGNAL_PEER, but must have ZX_RIGHT_DUPLICATE (and must have ZX_RIGHT_TRANSFER to transfer without causing BufferCollection channel failure).
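A hedged sketch (the eventpair-creation signature follows the current zx crate and is an assumption here):

// Sketch: be notified when all buffers have been deallocated.
let (client_end, server_end) = zx::EventPair::create();
collection.attach_lifetime_tracking(server_end, 0)?;
// Later, wait for ZX_EVENTPAIR_PEER_CLOSED on `client_end`; it is
// signalled only after logical allocation completes and the buffer count
// drops to the requested `buffers_remaining` (0 here).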
Trait Implementations
impl SynchronousProxy for BufferCollectionSynchronousProxy
type Proxy = BufferCollectionProxy
type Protocol = BufferCollectionMarker
The protocol which this Proxy controls.