pub struct BufferCollectionProxy { /* private fields */ }
Implementations§
impl BufferCollectionProxy
pub fn new(channel: AsyncChannel) -> Self
Create a new Proxy for fuchsia.sysmem2/BufferCollection.
pub fn take_event_stream(&self) -> BufferCollectionEventStream
Get a Stream of events from the remote end of the protocol.
§Panics
Panics if the event stream was already taken.
pub fn sync(&self) -> QueryResponseFut<(), DefaultFuchsiaResourceDialect>
Ensure that previous messages have been received server side. This is particularly useful after previous messages that created new tokens, because a token must be known to the sysmem server before sending the token to another participant.
Calling [fuchsia.sysmem2/BufferCollectionToken.Sync] on a token that isn’t/wasn’t a valid token risks the Sync stalling forever. See [fuchsia.sysmem2/Allocator.ValidateBufferCollectionToken] for one way to mitigate the possibility of a hostile/fake [fuchsia.sysmem2/BufferCollectionToken] at the cost of one round trip. Another way is to pass the token to [fuchsia.sysmem2/Allocator.BindSharedCollection], which also validates the token as part of exchanging it for a [fuchsia.sysmem2/BufferCollection] channel, and [fuchsia.sysmem2/BufferCollection.Sync] can then be used without risk of stalling.
After creating one or more [fuchsia.sysmem2/BufferCollectionToken](s) and then starting and completing a Sync, it’s then safe to send the BufferCollectionToken client ends to other participants knowing the server will recognize the tokens when they’re sent by the other participants to sysmem in a [fuchsia.sysmem2/Allocator.BindSharedCollection] message. This is an efficient way to create tokens while avoiding unnecessary round trips.
Other options include waiting for each [fuchsia.sysmem2/BufferCollectionToken.Duplicate] to complete individually (using a separate call to Sync after each), or calling [fuchsia.sysmem2/BufferCollection.Sync] after a token has been converted to a BufferCollection via [fuchsia.sysmem2/Allocator.BindSharedCollection], or using [fuchsia.sysmem2/BufferCollectionToken.DuplicateSync] which includes the sync step and can create multiple tokens at once.
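For illustration, a minimal sketch (not part of the upstream docs) of using Sync before handing a token to another participant. The helper name is hypothetical, and new_token stands for a token created earlier on this collection’s tree (for example via AttachToken or a prior Duplicate).

```rust
use fidl::endpoints::ClientEnd;
use fidl_fuchsia_sysmem2::{BufferCollectionProxy, BufferCollectionTokenMarker};

// One Sync round trip on the BufferCollection is enough for the sysmem server
// to recognize any token created earlier on this tree before it is shared.
async fn sync_then_share(
    collection: &BufferCollectionProxy,
    new_token: ClientEnd<BufferCollectionTokenMarker>, // hypothetical: created earlier
) -> Result<ClientEnd<BufferCollectionTokenMarker>, fidl::Error> {
    collection.sync().await?; // server has now seen all previously sent messages
    Ok(new_token) // safe to send to another participant
}
```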
pub fn release(&self) -> Result<(), Error>
§On a [fuchsia.sysmem2/BufferCollectionToken] channel:
Normally a participant will convert a BufferCollectionToken into a [fuchsia.sysmem2/BufferCollection], but a participant can instead send Release via the token (and then close the channel immediately or shortly later in response to the server closing the server end), which avoids causing buffer collection failure. Without a prior Release, closing the BufferCollectionToken client end will cause buffer collection failure.
§On a [fuchsia.sysmem2/BufferCollection] channel:
By default the server handles unexpected closure of a [fuchsia.sysmem2/BufferCollection] client end (without Release first) by failing the buffer collection. Partly this is to expedite closing VMO handles to reclaim memory when any participant fails. If a participant would like to cleanly close a BufferCollection without causing buffer collection failure, the participant can send Release before closing the BufferCollection client end. The Release can occur before or after SetConstraints. If before SetConstraints, the buffer collection won’t require constraints from this node in order to allocate. If after SetConstraints, the constraints are retained and aggregated, despite the lack of BufferCollection connection at the time of constraints aggregation.
§On a [fuchsia.sysmem2/BufferCollectionTokenGroup] channel:
By default, unexpected closure of a BufferCollectionTokenGroup client end (without Release first) will trigger failure of the buffer collection. To close a BufferCollectionTokenGroup channel without failing the buffer collection, ensure that AllChildrenPresent() has been sent, and send Release before closing the BufferCollectionTokenGroup client end.
If Release occurs before [fuchsia.sysmem2/BufferCollectionTokenGroup.AllChildrenPresent], the buffer collection will fail (triggered by reception of Release without prior AllChildrenPresent). This is intentionally not analogous to how [fuchsia.sysmem2/BufferCollection.Release] without [fuchsia.sysmem2/BufferCollection.SetConstraints] first doesn’t cause buffer collection failure. For a BufferCollectionTokenGroup, clean close requires AllChildrenPresent (if not already sent), then Release, then closing the client end.
If Release occurs after AllChildrenPresent, the children and all their constraints remain intact (just as they would if the BufferCollectionTokenGroup channel had remained open), and the client end close doesn’t trigger buffer collection failure.
§On all [fuchsia.sysmem2/Node] channels (any of the above):
For brevity, the per-channel-protocol paragraphs above ignore the separate failure domain created by [fuchsia.sysmem2/BufferCollectionToken.SetDispensable] or [fuchsia.sysmem2/BufferCollection.AttachToken]. When a client end unexpectedly closes (without Release first) and that client end is under a failure domain, instead of failing the whole buffer collection, the failure domain is failed, but the buffer collection itself is isolated from failure of the failure domain. Such failure domains can be nested, in which case only the inner-most failure domain in which the Node resides fails.
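A minimal sketch (not part of the upstream docs) of the clean-close sequence described above, assuming an already-bound BufferCollectionProxy; the helper name is hypothetical.

```rust
use fidl_fuchsia_sysmem2::BufferCollectionProxy;

// Send Release before dropping the proxy so closing the client end is not
// treated as buffer collection failure.
fn close_cleanly(collection: BufferCollectionProxy) -> Result<(), fidl::Error> {
    collection.release()?; // one-way message; no response to wait for
    drop(collection);      // closing the channel now does not fail the collection
    Ok(())
}
```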
pub fn set_name(&self, payload: &NodeSetNameRequest) -> Result<(), Error>
Set a name for VMOs in this buffer collection.
If the name doesn’t fit in ZX_MAX_NAME_LEN, the name of the vmo itself will be truncated to fit. The name of the vmo will be suffixed with the buffer index within the collection (if the suffix fits within ZX_MAX_NAME_LEN). The name specified here (without truncation) will be listed in the inspect data.
The name only affects VMOs allocated after the name is set; this call does not rename existing VMOs. If multiple clients set different names then the larger priority value will win. Setting a new name with the same priority as a prior name doesn’t change the name.
All table fields are currently required.
- request priority: The name is only set if this is the first SetName or if priority is greater than any previous priority value in prior SetName calls across all Node(s) of this buffer collection.
- request name: The name for VMOs created under this buffer collection.
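A minimal sketch (not part of the upstream docs); NodeSetNameRequest is a FIDL table, so unset fields are filled via Default. The priority value and name here are placeholders.

```rust
use fidl_fuchsia_sysmem2::{BufferCollectionProxy, NodeSetNameRequest};

// Name the collection's VMOs; the higher priority across participants wins.
fn name_collection(collection: &BufferCollectionProxy) -> Result<(), fidl::Error> {
    collection.set_name(&NodeSetNameRequest {
        priority: Some(100),                          // placeholder priority
        name: Some("example-scanout-buffers".into()), // placeholder name
        ..Default::default()
    })
}
```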
pub fn set_debug_client_info( &self, payload: &NodeSetDebugClientInfoRequest, ) -> Result<(), Error>
Set information about the current client that can be used by sysmem to help diagnose leaking memory and allocation stalls waiting for a participant to send [fuchsia.sysmem2/BufferCollection.SetConstraints].
This sets the debug client info on this [fuchsia.sysmem2/Node] and all Node(s) derived from this Node, unless overridden by [fuchsia.sysmem2/Allocator.SetDebugClientInfo] or a later [fuchsia.sysmem2/Node.SetDebugClientInfo].
Sending [fuchsia.sysmem2/Allocator.SetDebugClientInfo] once per Allocator is the most efficient way to ensure that all [fuchsia.sysmem2/Node](s) will have at least some debug client info set, and is also more efficient than separately sending the same debug client info via [fuchsia.sysmem2/Node.SetDebugClientInfo] for each created [fuchsia.sysmem2/Node].
Also used when verbose logging is enabled (see SetVerboseLogging) to indicate which client is closing their channel first, leading to subtree failure (which can be normal if the purpose of the subtree is over, but if happening earlier than expected, the client-channel-specific name can help diagnose where the failure is first coming from, from sysmem’s point of view).
All table fields are currently required.
- request name: This can be an arbitrary string, but the current process name (see fsl::GetCurrentProcessName) is a good default.
- request id: This can be an arbitrary id, but the current process ID (see fsl::GetCurrentProcessKoid) is a good default.
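A minimal sketch (not part of the upstream docs); the name and id shown are placeholders, where real clients typically send the current process name and koid.

```rust
use fidl_fuchsia_sysmem2::{BufferCollectionProxy, NodeSetDebugClientInfoRequest};

// Tag this node (and nodes later derived from it) for sysmem's debug output.
fn tag_for_debugging(collection: &BufferCollectionProxy) -> Result<(), fidl::Error> {
    collection.set_debug_client_info(&NodeSetDebugClientInfoRequest {
        name: Some("example-component".to_string()), // placeholder; usually the process name
        id: Some(12345),                             // placeholder; usually the process koid
        ..Default::default()
    })
}
```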
pub fn set_debug_timeout_log_deadline( &self, payload: &NodeSetDebugTimeoutLogDeadlineRequest, ) -> Result<(), Error>
Sysmem logs a warning if sysmem hasn’t seen [fuchsia.sysmem2/BufferCollection.SetConstraints] from all clients within 5 seconds after creation of a new collection.
Clients can call this method to change when the log is printed. If multiple clients set the deadline, it’s unspecified which deadline will take effect.
In most cases the default works well.
All table fields are currently required.
- request deadline: The time at which sysmem will start trying to log the warning, unless all constraints are with sysmem by then.
pub fn set_verbose_logging(&self) -> Result<(), Error>
This enables verbose logging for the buffer collection.
Verbose logging includes constraints set via [fuchsia.sysmem2/BufferCollection.SetConstraints] from each client along with info set via [fuchsia.sysmem2/Node.SetDebugClientInfo] (or [fuchsia.sysmem2/Allocator.SetDebugClientInfo]) and the structure of the tree of Node(s).
Normally sysmem prints only a single line complaint when aggregation fails, with just the specific detailed reason that aggregation failed, with little surrounding context. While this is often enough to diagnose a problem if only a small change was made and everything was working before the small change, it’s often not particularly helpful for getting a new buffer collection to work for the first time. Especially with more complex trees of nodes, involving things like [fuchsia.sysmem2/BufferCollection.AttachToken], [fuchsia.sysmem2/BufferCollectionToken.SetDispensable], [fuchsia.sysmem2/BufferCollectionTokenGroup] nodes, and associated subtrees of nodes, verbose logging may help in diagnosing what the tree looks like and why it’s failing a logical allocation, or why a tree or subtree is failing sooner than expected.
The intent of the extra logging is to be acceptable from a performance point of view, under the assumption that verbose logging is only enabled on a low number of buffer collections. If we’re not tracking down a bug, we shouldn’t send this message.
pub fn get_node_ref( &self, ) -> QueryResponseFut<NodeGetNodeRefResponse, DefaultFuchsiaResourceDialect>
This gets a handle that can be used as a parameter to [fuchsia.sysmem2/Node.IsAlternateFor] called on any [fuchsia.sysmem2/Node]. This handle is only for use as proof that the client obtained this handle from this Node.
Because this is a get not a set, no [fuchsia.sysmem2/Node.Sync] is needed between the GetNodeRef and the call to IsAlternateFor, despite the two calls typically being on different channels.
See also [fuchsia.sysmem2/Node.IsAlternateFor].
All table fields are currently required.
- response node_ref: This handle can be sent via IsAlternateFor on a different Node channel, to prove that the client obtained the handle from this Node.
pub fn is_alternate_for( &self, payload: NodeIsAlternateForRequest, ) -> QueryResponseFut<NodeIsAlternateForResult, DefaultFuchsiaResourceDialect>
Check whether the calling [fuchsia.sysmem2/Node] is in a subtree rooted at a different child token of a common parent [fuchsia.sysmem2/BufferCollectionTokenGroup], in relation to the passed-in node_ref.
This call is for assisting with admission control de-duplication, and with debugging.
The node_ref must be obtained using [fuchsia.sysmem2/Node.GetNodeRef].
The node_ref can be a duplicated handle; it’s not necessary to call GetNodeRef for every call to [fuchsia.sysmem2/Node.IsAlternateFor].
If a calling token may not actually be a valid token at all due to a potentially hostile/untrusted provider of the token, call [fuchsia.sysmem2/Allocator.ValidateBufferCollectionToken] first instead of potentially getting stuck indefinitely if IsAlternateFor never responds due to a calling token not being a real token (not really talking to sysmem). Another option is to call [fuchsia.sysmem2/Allocator.BindSharedCollection] with this token first which also validates the token along with converting it to a [fuchsia.sysmem2/BufferCollection], then call IsAlternateFor.
All table fields are currently required.
- response is_alternate
  - true: The first parent node in common between the calling node and the node_ref Node is a BufferCollectionTokenGroup. This means that the calling Node and the node_ref Node will not have both their constraints apply - rather sysmem will choose one or the other of the constraints - never both. This is because only one child of a BufferCollectionTokenGroup is selected during logical allocation, with only that one child’s subtree contributing to constraints aggregation.
  - false: The first parent node in common between the calling Node and the node_ref Node is not a BufferCollectionTokenGroup. Currently, this means the first parent node in common is a BufferCollectionToken or BufferCollection (regardless of whether Release has been sent). This means that the calling Node and the node_ref Node may have both their constraints apply during constraints aggregation of the logical allocation, if both Node(s) are selected by any parent BufferCollectionTokenGroup(s) involved. In this case, there is no BufferCollectionTokenGroup that will directly prevent the two Node(s) from both being selected and their constraints both aggregated, but even when false, one or both Node(s) may still be eliminated from consideration if one or both Node(s) has a direct or indirect parent BufferCollectionTokenGroup which selects a child subtree other than the subtree containing the calling Node or node_ref Node.
- error [fuchsia.sysmem2/Error.NOT_FOUND]: The node_ref wasn’t associated with the same buffer collection as the calling Node. Another reason for this error is if the node_ref is a [zx.Handle:EVENT] handle with sufficient rights, but isn’t actually a real node_ref obtained from GetNodeRef.
- error [fuchsia.sysmem2/Error.PROTOCOL_DEVIATION]: The caller passed a node_ref that isn’t a [zx.Handle:EVENT] handle, or doesn’t have the needed rights expected on a real node_ref.
- No other failing status codes are returned by this call. However, sysmem may add additional codes in future, so the client should have sensible default handling for any failing status code.
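A minimal sketch (not part of the upstream docs) combining GetNodeRef and IsAlternateFor across two proxies assumed to belong to the same buffer collection tree; anyhow is used only to keep error plumbing short, and the helper name is hypothetical.

```rust
use fidl_fuchsia_sysmem2::{BufferCollectionProxy, NodeIsAlternateForRequest};

// Ask whether node_a and node_b sit under different children of a common
// BufferCollectionTokenGroup (i.e. are alternates of each other).
async fn nodes_are_alternates(
    node_a: &BufferCollectionProxy,
    node_b: &BufferCollectionProxy,
) -> Result<bool, anyhow::Error> {
    let node_ref = node_a
        .get_node_ref()
        .await?
        .node_ref
        .ok_or_else(|| anyhow::anyhow!("node_ref missing from response"))?;
    let response = node_b
        .is_alternate_for(NodeIsAlternateForRequest {
            node_ref: Some(node_ref),
            ..Default::default()
        })
        .await?
        .map_err(|e| anyhow::anyhow!("IsAlternateFor failed: {e:?}"))?;
    Ok(response.is_alternate.unwrap_or(false))
}
```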
pub fn get_buffer_collection_id( &self, ) -> QueryResponseFut<NodeGetBufferCollectionIdResponse, DefaultFuchsiaResourceDialect>
Get the buffer collection ID. This ID is also available from [fuchsia.sysmem2/Allocator.GetVmoInfo] (along with the buffer_index within the collection).
This call is mainly useful in situations where we can’t convey a [fuchsia.sysmem2/BufferCollectionToken] or [fuchsia.sysmem2/BufferCollection] directly, but can only convey a VMO handle, which can be joined back up with a BufferCollection client end that was created via a different path. Prefer to convey a BufferCollectionToken or BufferCollection directly when feasible.
Trusting a buffer_collection_id value from a source other than sysmem is analogous to trusting a koid value from a source other than zircon. Both should be avoided unless really necessary, and both require caution. In some situations it may be reasonable to refer to a pre-established BufferCollection by buffer_collection_id via a protocol for efficiency reasons, but an incoming value purporting to be a buffer_collection_id is not sufficient alone to justify granting the sender of the buffer_collection_id any capability. The sender must first prove to a receiver that the sender has/had a VMO or has/had a BufferCollectionToken to the same collection by sending a handle that sysmem confirms is a valid sysmem handle and which sysmem maps to the buffer_collection_id value. The receiver should take care to avoid assuming that a sender had a BufferCollectionToken in cases where the sender has only proven that the sender had a VMO.
- response buffer_collection_id: This ID is unique per buffer collection per boot. Each buffer is uniquely identified by the buffer_collection_id and buffer_index together.
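A minimal sketch (not part of the upstream docs) of reading the ID; the helper name is hypothetical.

```rust
use fidl_fuchsia_sysmem2::BufferCollectionProxy;

// Fetch the per-boot-unique collection ID so a VMO conveyed over some other
// protocol can later be matched back to this collection.
async fn collection_id(
    collection: &BufferCollectionProxy,
) -> Result<Option<u64>, fidl::Error> {
    Ok(collection.get_buffer_collection_id().await?.buffer_collection_id)
}
```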
pub fn set_weak(&self) -> Result<(), Error>
Sets the current [fuchsia.sysmem2/Node] and all child Node(s) created after this message to weak, which means that a client’s Node client end (or a child created after this message) is not alone sufficient to keep allocated VMOs alive.
All VMOs obtained from weak Node(s) are weak sysmem VMOs. See also close_weak_asap.
This message is only permitted before the Node becomes ready for allocation (else the server closes the channel with ZX_ERR_BAD_STATE):
- BufferCollectionToken: any time
- BufferCollection: before SetConstraints
- BufferCollectionTokenGroup: before AllChildrenPresent
Currently, no conversion from strong Node to weak Node after ready for allocation is provided, but a client can simulate that by creating an additional Node before allocation and setting that additional Node to weak, and then potentially at some point later sending Release and closing the client end of the client’s strong Node, but keeping the client’s weak Node.
Zero strong Node(s) and zero strong VMO handles will result in buffer collection failure (all Node client end(s) will see ZX_CHANNEL_PEER_CLOSED and all close_weak_asap client_end(s) will see ZX_EVENTPAIR_PEER_CLOSED), but sysmem (intentionally) won’t notice this situation until all Node(s) are ready for allocation. For initial allocation to succeed, at least one strong Node is required to exist at allocation time, but after that client receives VMO handles, that client can BufferCollection.Release and close the client end without causing this type of failure.
This implies [fuchsia.sysmem2/Node.SetWeakOk] as well, but does not imply SetWeakOk with for_children_also true, which can be sent separately as appropriate.
pub fn set_weak_ok(&self, payload: NodeSetWeakOkRequest) -> Result<(), Error>
This indicates to sysmem that the client is prepared to pay attention to close_weak_asap.
If sent, this message must be before [fuchsia.sysmem2/BufferCollection.WaitForAllBuffersAllocated].
All participants using a weak [fuchsia.sysmem2/BufferCollection] must send this message before WaitForAllBuffersAllocated, or a parent Node must have sent [fuchsia.sysmem2/Node.SetWeakOk] with for_child_nodes_also true, else the WaitForAllBuffersAllocated will trigger buffer collection failure.
This message is necessary because weak sysmem VMOs have not always been a thing, so older clients are not aware of the need to pay attention to close_weak_asap ZX_EVENTPAIR_PEER_CLOSED and close all remaining sysmem weak VMO handles asap. By having this message and requiring participants to indicate their acceptance of this aspect of the overall protocol, we avoid situations where an older client is delivered a weak VMO without any way for sysmem to get that VMO to close quickly later (and on a per-buffer basis).
A participant that doesn’t handle close_weak_asap and also doesn’t retrieve any VMO handles via WaitForAllBuffersAllocated doesn’t need to send SetWeakOk (and doesn’t need to have a parent Node send SetWeakOk with for_child_nodes_also true either). However, if that same participant has a child/delegate which does retrieve VMOs, that child/delegate will need to send SetWeakOk before WaitForAllBuffersAllocated.
- request for_child_nodes_also: If present and true, this means direct child nodes of this node created after this message plus all descendants of those nodes will behave as if SetWeakOk was sent on those nodes. Any child node of this node that was created before this message is not included. This setting is “sticky” in the sense that a subsequent SetWeakOk without this bool set to true does not reset the server-side bool. If this creates a problem for a participant, a workaround is to SetWeakOk with for_child_nodes_also true on child tokens instead, as appropriate. A participant should only set for_child_nodes_also true if the participant can really promise to obey close_weak_asap both for its own weak VMO handles, and for all weak VMO handles held by participants holding the corresponding child Node(s). When for_child_nodes_also is set, descendent Node(s) which are using sysmem(1) can be weak, despite the clients of those sysmem(1) Node(s) not having any direct way to SetWeakOk or any direct way to find out about close_weak_asap. This only applies to descendents of this Node which are using sysmem(1), not to this Node when converted directly from a sysmem2 token to a sysmem(1) token, which will fail allocation unless an ancestor of this Node specified for_child_nodes_also true.
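A minimal sketch (not part of the upstream docs) of a participant opting in to weak VMO semantics; the helper name is hypothetical.

```rust
use fidl_fuchsia_sysmem2::{BufferCollectionProxy, NodeSetWeakOkRequest};

// Opt in to close_weak_asap handling before WaitForAllBuffersAllocated;
// for_child_nodes_also extends the opt-in to child nodes created afterwards.
fn accept_weak_vmos(collection: &BufferCollectionProxy) -> Result<(), fidl::Error> {
    collection.set_weak_ok(NodeSetWeakOkRequest {
        for_child_nodes_also: Some(true),
        ..Default::default()
    })
}
```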
pub fn attach_node_tracking( &self, payload: NodeAttachNodeTrackingRequest, ) -> Result<(), Error>
The server_end will be closed after this Node and any child nodes have released their buffer counts, making those counts available for reservation by a different Node via [fuchsia.sysmem2/BufferCollection.AttachToken].
The Node buffer counts may not be released until the entire tree of Node(s) is closed or failed, because [fuchsia.sysmem2/BufferCollection.Release] followed by channel close does not immediately un-reserve the Node buffer counts. Instead, the Node buffer counts remain reserved until the orphaned node is later cleaned up.
If the Node exceeds a fairly large number of attached eventpair server ends, a log message will indicate this and the Node (and the appropriate) sub-tree will fail.
The server_end will remain open when [fuchsia.sysmem2/Allocator.BindSharedCollection] converts a [fuchsia.sysmem2/BufferCollectionToken] into a [fuchsia.sysmem2/BufferCollection].
This message can also be used with a [fuchsia.sysmem2/BufferCollectionTokenGroup].
pub fn set_constraints( &self, payload: BufferCollectionSetConstraintsRequest, ) -> Result<(), Error>
Provide [fuchsia.sysmem2/BufferCollectionConstraints] to the buffer collection.
A participant may only call [fuchsia.sysmem2/BufferCollection.SetConstraints] up to once per [fuchsia.sysmem2/BufferCollection].
For buffer allocation to be attempted, all holders of a BufferCollection client end need to call SetConstraints before sysmem will attempt to allocate buffers.
- request constraints: These are the constraints on the buffer collection imposed by the sending client/participant. The constraints field is not required to be set. If not set, the client is not setting any actual constraints, but is indicating that the client has no constraints to set. A client that doesn’t set the constraints field won’t receive any VMO handles, but can still find out how many buffers were allocated and can still refer to buffers by their buffer_index.
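A minimal sketch (not part of the upstream docs) of a participant that wants to CPU-read at least two buffers. The CPU_USAGE_READ constant name and the choice of fields filled in are assumptions; real participants typically also set buffer_memory_constraints and image/VMO format constraints.

```rust
use fidl_fuchsia_sysmem2::{
    BufferCollectionConstraints, BufferCollectionProxy, BufferCollectionSetConstraintsRequest,
    BufferUsage, CPU_USAGE_READ,
};

// Send this participant's constraints; every other participant must do the
// same (or Release) before sysmem attempts allocation.
fn send_constraints(collection: &BufferCollectionProxy) -> Result<(), fidl::Error> {
    collection.set_constraints(BufferCollectionSetConstraintsRequest {
        constraints: Some(BufferCollectionConstraints {
            usage: Some(BufferUsage {
                cpu: Some(CPU_USAGE_READ), // assumed constant name for the CPU-read usage bit
                ..Default::default()
            }),
            min_buffer_count_for_camping: Some(2),
            ..Default::default()
        }),
        ..Default::default()
    })
}
```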
pub fn wait_for_all_buffers_allocated( &self, ) -> QueryResponseFut<BufferCollectionWaitForAllBuffersAllocatedResult, DefaultFuchsiaResourceDialect>
Wait until all buffers are allocated.
This FIDL call completes when buffers have been allocated, or completes with some failure detail if allocation has been attempted but failed.
The following must occur before buffers will be allocated:
- All [fuchsia.sysmem2/BufferCollectionToken](s) of the buffer collection must be turned in via BindSharedCollection to get a [fuchsia.sysmem2/BufferCollection] (for brevity, this is assuming [fuchsia.sysmem2/BufferCollection.AttachToken] isn’t being used), or have had [fuchsia.sysmem2/BufferCollectionToken.Release] sent to them.
- All [fuchsia.sysmem2/BufferCollection](s) of the buffer collection must have had [fuchsia.sysmem2/BufferCollection.SetConstraints] sent to them, or had [fuchsia.sysmem2/BufferCollection.Release] sent to them.
- result buffer_collection_info: The VMO handles and other related info.
- error [fuchsia.sysmem2/Error.NO_MEMORY]: The request is valid but cannot be fulfilled due to resource exhaustion.
- error [fuchsia.sysmem2/Error.PROTOCOL_DEVIATION]: The request is malformed.
- error [fuchsia.sysmem2/Error.CONSTRAINTS_INTERSECTION_EMPTY]: The request is valid but cannot be satisfied, perhaps due to hardware limitations. This can happen if participants have incompatible constraints (empty intersection, roughly speaking). See the log for more info. In cases where a participant could potentially be treated as optional, see [BufferCollectionTokenGroup]. When using [fuchsia.sysmem2/BufferCollection.AttachToken], this will be the error code if there aren’t enough buffers in the pre-existing collection to satisfy the constraints set on the attached token and any sub-tree of tokens derived from the attached token.
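A minimal sketch (not part of the upstream docs) of waiting for allocation and extracting the BufferCollectionInfo; anyhow is used only to keep error plumbing short, and the helper name is hypothetical.

```rust
use fidl_fuchsia_sysmem2::{BufferCollectionInfo, BufferCollectionProxy};

// Wait for allocation (or logical allocation) and return the resulting info,
// which carries the VMO handles and settings.
async fn wait_for_buffers(
    collection: &BufferCollectionProxy,
) -> Result<BufferCollectionInfo, anyhow::Error> {
    let response = collection
        .wait_for_all_buffers_allocated()
        .await? // FIDL transport error
        .map_err(|e| anyhow::anyhow!("allocation failed: {e:?}"))?; // sysmem Error
    response
        .buffer_collection_info
        .ok_or_else(|| anyhow::anyhow!("buffer_collection_info missing"))
}
```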
pub fn check_all_buffers_allocated( &self, ) -> QueryResponseFut<BufferCollectionCheckAllBuffersAllocatedResult, DefaultFuchsiaResourceDialect>
Checks whether all the buffers have been allocated, in a polling fashion.
- If the buffer collection has been allocated, returns success.
- If the buffer collection failed allocation, returns the same [fuchsia.sysmem2/Error] as [fuchsia.sysmem2/BufferCollection.WaitForAllBuffersAllocated] would return.
- error [fuchsia.sysmem2/Error.PENDING]: The buffer collection hasn’t attempted allocation yet. This means that WaitForAllBuffersAllocated would not respond quickly.
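A minimal sketch (not part of the upstream docs) of polling; it assumes the Rust bindings expose the FIDL PENDING member as Error::Pending (standard FIDL enum naming), and the helper name is hypothetical.

```rust
use fidl_fuchsia_sysmem2::{BufferCollectionProxy, Error as SysmemError};

// Returns Ok(true) once buffers are allocated, Ok(false) while allocation has
// not yet been attempted, and surfaces any other sysmem error.
async fn buffers_ready(collection: &BufferCollectionProxy) -> Result<bool, anyhow::Error> {
    match collection.check_all_buffers_allocated().await? {
        Ok(()) => Ok(true),
        Err(SysmemError::Pending) => Ok(false),
        Err(e) => Err(anyhow::anyhow!("allocation failed: {e:?}")),
    }
}
```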
pub fn attach_token( &self, payload: BufferCollectionAttachTokenRequest, ) -> Result<(), Error>
Create a new token to add a new participant to an existing logical buffer collection, if the existing collection’s buffer counts, constraints, and participants allow.
This can be useful in replacing a failed participant, and/or in adding/re-adding a participant after buffers have already been allocated.
When [fuchsia.sysmem2/BufferCollection.AttachToken] is used, the subtree rooted at the attached [fuchsia.sysmem2/BufferCollectionToken] goes through the normal procedure of setting constraints or closing [fuchsia.sysmem2/Node](s), and then appearing to allocate buffers from clients’ point of view, despite the possibility that all the buffers were actually allocated previously. This process is called “logical allocation”. Most instances of “allocation” in docs for other messages can also be read as “allocation or logical allocation” while remaining valid, but we just say “allocation” in most places for brevity/clarity of explanation, with the details of “logical allocation” left for the docs here on AttachToken.
Failure of an attached Node does not propagate to the parent of the attached Node. More generally, failure of a child Node is blocked from reaching its parent Node if the child is attached, or if the child is dispensable and the failure occurred after logical allocation (see [fuchsia.sysmem2/BufferCollectionToken.SetDispensable]).
A participant may in some scenarios choose to initially use a dispensable token for a given instance of a delegate participant, and then later if the first instance of that delegate participant fails, a new second instance of that delegate participant may be given a token created with AttachToken.
From the point of view of the [fuchsia.sysmem2/BufferCollectionToken] client end, the token acts like any other token. The client can [fuchsia.sysmem2/BufferCollectionToken.Duplicate] the token as needed, and can send the token to a different process/participant. The BufferCollectionToken Node should be converted to a BufferCollection Node as normal by sending [fuchsia.sysmem2/Allocator.BindSharedCollection], or can be closed without causing subtree failure by sending [fuchsia.sysmem2/BufferCollectionToken.Release]. Assuming the former, the [fuchsia.sysmem2/BufferCollection.SetConstraints] message or [fuchsia.sysmem2/BufferCollection.Release] message should be sent to the BufferCollection.
Within the subtree, a success result from [fuchsia.sysmem2/BufferCollection.WaitForAllBuffersAllocated] means the subtree participants’ constraints were satisfiable using the already-existing buffer collection, the already-established [fuchsia.sysmem2/BufferCollectionInfo] including image format constraints, and the already-existing other participants (already added via successful logical allocation) and their specified buffer counts in their constraints. A failure result means the new participants’ constraints cannot be satisfied using the existing buffer collection and its already-added participants. Creating a new collection instead may allow all participants’ constraints to be satisfied, assuming SetDispensable is used in place of AttachToken, or a normal token is used.
A token created with AttachToken performs constraints aggregation with all constraints currently in effect on the buffer collection, plus the attached token under consideration plus child tokens under the attached token which are not themselves an attached token or under such a token. Further subtrees under this subtree are considered for logical allocation only after this subtree has completed logical allocation.
Assignment of existing buffers to participants’ [fuchsia.sysmem2/BufferCollectionConstraints.min_buffer_count_for_camping] etc is first-come first-served, but a child can’t logically allocate before all its parents have sent SetConstraints.
See also [fuchsia.sysmem2/BufferCollectionToken.SetDispensable], which in contrast to AttachToken, has the created token Node + child Node(s) (in the created subtree but not in any subtree under this subtree) participate in constraints aggregation along with its parent during the parent’s allocation or logical allocation.
Similar to [fuchsia.sysmem2/BufferCollectionToken.Duplicate], the newly created token needs to be [fuchsia.sysmem2/Node.Sync]ed to sysmem before the new token can be passed to BindSharedCollection. The Sync of the new token can be accomplished with [fuchsia.sysmem2/BufferCollection.Sync] after converting the created BufferCollectionToken to a BufferCollection. Alternately, [fuchsia.sysmem2/BufferCollectionToken.Sync] on the new token also works. Or using [fuchsia.sysmem2/BufferCollectionToken.DuplicateSync] works. As usual, a BufferCollectionToken.Sync can be started after any BufferCollectionToken.Duplicate messages have been sent via the newly created token, to also sync those additional tokens to sysmem using a single round-trip.
All table fields are currently required.
- request rights_attenuation_mask: This allows attenuating the VMO rights of the subtree. These values for rights_attenuation_mask result in no attenuation (note that 0 is not on this list):
  - ZX_RIGHT_SAME_RIGHTS (preferred)
  - 0xFFFFFFFF (this is reasonable when an attenuation mask is computed)
- request token_request: The server end of the BufferCollectionToken channel. The client retains the client end.
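A minimal sketch (not part of the upstream docs) of attaching a new token. The fidl::Rights::SAME_RIGHTS constant and the tuple-returning create_endpoints are assumptions about the fidl crate version in use (older versions return a Result); the helper name is hypothetical.

```rust
use fidl::endpoints::{create_endpoints, ClientEnd};
use fidl_fuchsia_sysmem2::{
    BufferCollectionAttachTokenRequest, BufferCollectionProxy, BufferCollectionTokenMarker,
};

// Attach a new token to an already-allocated collection with no rights
// attenuation; the returned client end still needs a Sync before being handed
// to another participant.
fn attach_new_participant(
    collection: &BufferCollectionProxy,
) -> Result<ClientEnd<BufferCollectionTokenMarker>, fidl::Error> {
    let (token_client, token_server) = create_endpoints::<BufferCollectionTokenMarker>();
    collection.attach_token(BufferCollectionAttachTokenRequest {
        rights_attenuation_mask: Some(fidl::Rights::SAME_RIGHTS), // no attenuation
        token_request: Some(token_server),
        ..Default::default()
    })?;
    Ok(token_client)
}
```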
pub fn attach_lifetime_tracking( &self, payload: BufferCollectionAttachLifetimeTrackingRequest, ) -> Result<(), Error>
Set up an eventpair to be signalled (ZX_EVENTPAIR_PEER_CLOSED) when buffers have been allocated and only the specified number of buffers (or fewer) remain in the buffer collection.
[fuchsia.sysmem2/BufferCollection.AttachLifetimeTracking] allows a client to wait until an old buffer collection is fully or mostly deallocated before attempting allocation of a new buffer collection. The eventpair is only signalled when the buffers of this collection have been fully deallocated (not just un-referenced by clients, but all the memory consumed by those buffers has been fully reclaimed/recycled), or when allocation or logical allocation fails for the tree or subtree including this [fuchsia.sysmem2/BufferCollection].
The eventpair won’t be signalled until allocation or logical allocation has completed; until then, the collection’s current buffer count is ignored.
If logical allocation fails for an attached subtree (using [fuchsia.sysmem2/BufferCollection.AttachToken]), the server end of the eventpair will close during that failure regardless of the number of buffers potentially allocated in the overall buffer collection. This is for logical allocation consistency with normal allocation.
The lifetime signalled by this event includes asynchronous cleanup of
allocated buffers, and this asynchronous cleanup cannot occur until all
holders of VMO handles to the buffers have closed those VMO handles.
Therefore, clients should take care not to become blocked forever
waiting for ZX_EVENTPAIR_PEER_CLOSED
to be signalled if any of the
participants using the logical buffer collection (including the waiter
itself) are less trusted, less reliable, or potentially blocked by the
wait itself. Waiting asynchronously is recommended. Setting a deadline
for the client wait may be prudent, depending on details of how the
collection and/or its VMOs are used or shared. Failure to allocate a
new/replacement buffer collection is better than getting stuck forever.
The sysmem server itself intentionally does not perform any waiting on
already-failed collections’ VMOs to finish cleaning up before attempting
a new allocation, and the sysmem server intentionally doesn’t retry
allocation if a new allocation fails due to out of memory, even if that
failure is potentially due to continued existence of an old collection’s
VMOs. This AttachLifetimeTracking
message is how an initiator can
mitigate too much overlap of old VMO lifetimes with new VMO lifetimes,
as long as the waiting client is careful to not create a deadlock.
Continued existence of old collections that are still cleaning up is not
the only reason that a new allocation may fail due to insufficient
memory, even if the new allocation is allocating physically contiguous
buffers. Overall system memory pressure can also be the cause of failure
to allocate a new collection. See also [fuchsia.memorypressure/Provider].
AttachLifetimeTracking is meant to be compatible with other protocols with a similar AttachLifetimeTracking message; duplicates of the same eventpair handle (server end) can be sent via more than one AttachLifetimeTracking message to different protocols, and the ZX_EVENTPAIR_PEER_CLOSED will be signalled for the client end when all the conditions are met (all holders of duplicates have closed their server end handle(s)). Also, thanks to how eventpair endpoints work, the client end can (also) be duplicated without preventing the ZX_EVENTPAIR_PEER_CLOSED signal.
The server intentionally doesn’t “trust” any signals set on the server_end. This mechanism intentionally uses only ZX_EVENTPAIR_PEER_CLOSED set on the client end, which can’t be set “early”, and is only set when all handles to the server end eventpair are closed. No meaning is associated with any of the other signals, and clients should ignore any other signal bits on either end of the eventpair.
The server_end may lack ZX_RIGHT_SIGNAL or ZX_RIGHT_SIGNAL_PEER, but must have ZX_RIGHT_DUPLICATE (and must have ZX_RIGHT_TRANSFER to transfer without causing BufferCollection channel failure).
All table fields are currently required.
- request server_end: This eventpair handle will be closed by the sysmem server when buffers have been allocated initially and the number of buffers is then less than or equal to buffers_remaining.
- request buffers_remaining: Wait for all but buffers_remaining (or fewer) buffers to be fully deallocated. A number greater than zero can be useful in situations where a known number of buffers are intentionally not closed so that the data can continue to be used, such as for keeping the last available video frame displayed in the UI even if the video stream was using protected output buffers. It’s outside the scope of the BufferCollection interface (at least for now) to determine how many buffers may be held without closing, but it’ll typically be in the range 0-2.
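A minimal sketch (not part of the upstream docs) of attaching lifetime tracking, assuming a Fuchsia target where the zx crate's EventPair is the eventpair type the binding accepts (older zx crate versions return a Result from create); the helper name is hypothetical.

```rust
use fidl_fuchsia_sysmem2::{BufferCollectionAttachLifetimeTrackingRequest, BufferCollectionProxy};

// Hand sysmem one end of an eventpair; ZX_EVENTPAIR_PEER_CLOSED on the
// retained end signals that every buffer of this collection has been fully
// deallocated (buffers_remaining == 0).
fn track_lifetime(collection: &BufferCollectionProxy) -> Result<zx::EventPair, fidl::Error> {
    let (client_end, server_end) = zx::EventPair::create();
    collection.attach_lifetime_tracking(BufferCollectionAttachLifetimeTrackingRequest {
        server_end: Some(server_end),
        buffers_remaining: Some(0), // wait until every buffer is reclaimed
        ..Default::default()
    })?;
    Ok(client_end) // wait on this for ZX_EVENTPAIR_PEER_CLOSED
}
```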
Trait Implementations§
impl BufferCollectionProxyInterface for BufferCollectionProxy
type SyncResponseFut = QueryResponseFut<()>
type GetNodeRefResponseFut = QueryResponseFut<NodeGetNodeRefResponse>
type IsAlternateForResponseFut = QueryResponseFut<Result<NodeIsAlternateForResponse, Error>>
type GetBufferCollectionIdResponseFut = QueryResponseFut<NodeGetBufferCollectionIdResponse>
type WaitForAllBuffersAllocatedResponseFut = QueryResponseFut<Result<BufferCollectionWaitForAllBuffersAllocatedResponse, Error>>
type CheckAllBuffersAllocatedResponseFut = QueryResponseFut<Result<(), Error>>
fn sync(&self) -> Self::SyncResponseFut
fn release(&self) -> Result<(), Error>
fn set_name(&self, payload: &NodeSetNameRequest) -> Result<(), Error>
fn set_debug_client_info( &self, payload: &NodeSetDebugClientInfoRequest, ) -> Result<(), Error>
fn set_debug_timeout_log_deadline( &self, payload: &NodeSetDebugTimeoutLogDeadlineRequest, ) -> Result<(), Error>
fn set_verbose_logging(&self) -> Result<(), Error>
fn get_node_ref(&self) -> Self::GetNodeRefResponseFut
fn is_alternate_for( &self, payload: NodeIsAlternateForRequest, ) -> Self::IsAlternateForResponseFut
fn get_buffer_collection_id(&self) -> Self::GetBufferCollectionIdResponseFut
fn set_weak(&self) -> Result<(), Error>
fn set_weak_ok(&self, payload: NodeSetWeakOkRequest) -> Result<(), Error>
fn attach_node_tracking( &self, payload: NodeAttachNodeTrackingRequest, ) -> Result<(), Error>
fn set_constraints( &self, payload: BufferCollectionSetConstraintsRequest, ) -> Result<(), Error>
fn wait_for_all_buffers_allocated( &self, ) -> Self::WaitForAllBuffersAllocatedResponseFut
fn check_all_buffers_allocated( &self, ) -> Self::CheckAllBuffersAllocatedResponseFut
fn attach_token( &self, payload: BufferCollectionAttachTokenRequest, ) -> Result<(), Error>
fn attach_lifetime_tracking( &self, payload: BufferCollectionAttachLifetimeTrackingRequest, ) -> Result<(), Error>
impl Clone for BufferCollectionProxy
fn clone(&self) -> BufferCollectionProxy
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.
impl Debug for BufferCollectionProxy
impl Proxy for BufferCollectionProxy
type Protocol = BufferCollectionMarker
The protocol which this Proxy controls.
fn from_channel(inner: AsyncChannel) -> Self
fn into_channel(self) -> Result<AsyncChannel, Self>
fn as_channel(&self) -> &AsyncChannel
fn into_client_end(self) -> Result<ClientEnd<Self::Protocol>, Self>
Auto Trait Implementations§
impl Freeze for BufferCollectionProxy
impl !RefUnwindSafe for BufferCollectionProxy
impl Send for BufferCollectionProxy
impl Sync for BufferCollectionProxy
impl Unpin for BufferCollectionProxy
impl !UnwindSafe for BufferCollectionProxy
Blanket Implementations§
impl<T> BorrowMut<T> for T
where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
impl<T> CloneToUninit for T
where
    T: Clone,
unsafe fn clone_to_uninit(&self, dst: *mut T)
This is a nightly-only experimental API. (clone_to_uninit)