Struct fidl_fuchsia_sysmem2::NodeProxy

pub struct NodeProxy { /* private fields */ }

Implementations§

impl NodeProxy

pub fn new(channel: AsyncChannel) -> Self

Create a new Proxy for fuchsia.sysmem2/Node.

pub fn take_event_stream(&self) -> NodeEventStream

Get a Stream of events from the remote end of the protocol.

§Panics

Panics if the event stream was already taken.

pub fn sync(&self) -> QueryResponseFut<()>

Ensure that previous messages have been received server side. This is particularly useful after previous messages that created new tokens, because a token must be known to the sysmem server before sending the token to another participant.

Calling [fuchsia.sysmem2/BufferCollectionToken.Sync] on a token that isn’t/wasn’t a valid token risks the Sync stalling forever. See [fuchsia.sysmem2/Allocator.ValidateBufferCollectionToken] for one way to mitigate the possibility of a hostile/fake [fuchsia.sysmem2/BufferCollectionToken] at the cost of one round trip. Another way is to pass the token to [fuchsia.sysmem2/Allocator/BindSharedCollection], which also validates the token as part of exchanging it for a [fuchsia.sysmem2/BufferCollection] channel, and [fuchsia.sysmem2/BufferCollection.Sync] can then be used without risk of stalling.

After creating one or more fuchsia.sysmem2/BufferCollectionToken and then starting and completing a Sync, it’s then safe to send the BufferCollectionToken client ends to other participants knowing the server will recognize the tokens when they’re sent by the other participants to sysmem in a [fuchsia.sysmem2/Allocator.BindSharedCollection] message. This is an efficient way to create tokens while avoiding unnecessary round trips.

Other options include waiting for each [fuchsia.sysmem2/BufferCollectionToken.Duplicate] to complete individually (using separate call to Sync after each), or calling [fuchsia.sysmem2/BufferCollection.Sync] after a token has been converted to a BufferCollection via [fuchsia.sysmem2/Allocator.BindSharedCollection], or using [fuchsia.sysmem2/BufferCollectionToken.DuplicateSync] which includes the sync step and can create multiple tokens at once.
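
As an illustrative sketch (the helper name is hypothetical, not part of the bindings), a client can complete a Sync before handing previously created tokens to other participants:

// Illustrative sketch: ensure sysmem has seen everything sent so far on this
// Node before passing newly created tokens to other participants.
async fn sync_before_sharing(node: &fidl_fuchsia_sysmem2::NodeProxy) -> Result<(), fidl::Error> {
    // sync() returns a QueryResponseFut<()>; awaiting it completes the round trip.
    node.sync().await?;
    Ok(())
}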

pub fn release(&self) -> Result<(), Error>

§On a [fuchsia.sysmem2/BufferCollectionToken] channel:

Normally a participant will convert a BufferCollectionToken into a [fuchsia.sysmem2/BufferCollection], but a participant can instead send Release via the token (and then close the channel immediately or shortly later in response to server closing the server end), which avoids causing buffer collection failure. Without a prior Release, closing the BufferCollectionToken client end will cause buffer collection failure.

§On a [fuchsia.sysmem2/BufferCollection] channel:

By default the server handles unexpected closure of a [fuchsia.sysmem2/BufferCollection] client end (without Release first) by failing the buffer collection. Partly this is to expedite closing VMO handles to reclaim memory when any participant fails. If a participant would like to cleanly close a BufferCollection without causing buffer collection failure, the participant can send Release before closing the BufferCollection client end. The Release can occur before or after SetConstraints. If before SetConstraints, the buffer collection won’t require constraints from this node in order to allocate. If after SetConstraints, the constraints are retained and aggregated, despite the lack of BufferCollection connection at the time of constraints aggregation.

§On a [fuchsia.sysmem2/BufferCollectionTokenGroup] channel:

By default, unexpected closure of a BufferCollectionTokenGroup client end (without Release first) will trigger failure of the buffer collection. To close a BufferCollectionTokenGroup channel without failing the buffer collection, ensure that AllChildrenPresent() has been sent, and send Release before closing the BufferCollectionTokenGroup client end.

If Release occurs before [fuchsia.sysmem2/BufferCollectionTokenGroup.AllChildrenPresent], the buffer collection will fail (triggered by reception of Release without prior AllChildrenPresent). This is intentionally not analogous to how [fuchsia.sysmem2/BufferCollection.Release] without [fuchsia.sysmem2/BufferCollection.SetConstraints] first doesn't cause buffer collection failure. For a BufferCollectionTokenGroup, clean close requires AllChildrenPresent (if not already sent), then Release, then close of the client end.

If Release occurs after AllChildrenPresent, the children and all their constraints remain intact (just as they would if the BufferCollectionTokenGroup channel had remained open), and the client end close doesn’t trigger buffer collection failure.

§On all [fuchsia.sysmem2/Node] channels (any of the above):

For brevity, the per-channel-protocol paragraphs above ignore the separate failure domain created by [fuchsia.sysmem2/BufferCollectionToken.SetDispensable] or [fuchsia.sysmem2/BufferCollection.AttachToken]. When a client end unexpectedly closes (without Release first) and that client end is under a failure domain, instead of failing the whole buffer collection, the failure domain is failed, but the buffer collection itself is isolated from failure of the failure domain. Such failure domains can be nested, in which case only the inner-most failure domain in which the Node resides fails.
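
A minimal sketch of the clean-close path (the helper name is hypothetical); for a BufferCollectionTokenGroup, AllChildrenPresent must have been sent first, as described above:

// Illustrative sketch: send Release, then let dropping the proxy close the
// channel, so this Node's closure does not fail the buffer collection.
fn close_node_cleanly(node: fidl_fuchsia_sysmem2::NodeProxy) -> Result<(), fidl::Error> {
    node.release()?; // one-way message; no response to await
    drop(node);      // closing the client end after Release is the clean-close path
    Ok(())
}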

pub fn set_name(&self, payload: &NodeSetNameRequest) -> Result<(), Error>

Set a name for VMOs in this buffer collection.

If the name doesn’t fit in ZX_MAX_NAME_LEN, the name of the vmo itself will be truncated to fit. The name of the vmo will be suffixed with the buffer index within the collection (if the suffix fits within ZX_MAX_NAME_LEN). The name specified here (without truncation) will be listed in the inspect data.

The name only affects VMOs allocated after the name is set; this call does not rename existing VMOs. If multiple clients set different names then the larger priority value will win. Setting a new name with the same priority as a prior name doesn’t change the name.

All table fields are currently required.

  • request priority The name is only set if this is the first SetName or if priority is greater than any previous priority value in prior SetName calls across all Node(s) of this buffer collection.
  • request name The name for VMOs created under this buffer collection.
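
A hedged sketch of one way to call SetName from Rust; the priority and name values are arbitrary examples, and the request table is built with the usual ..Default::default() pattern for FIDL tables (exact field types per the generated bindings):

use fidl_fuchsia_sysmem2::{NodeProxy, NodeSetNameRequest};

// Illustrative sketch: label this collection's VMOs for debugging/inspect.
fn name_collection_vmos(node: &NodeProxy) -> Result<(), fidl::Error> {
    node.set_name(&NodeSetNameRequest {
        priority: Some(100),
        name: Some("example-collection".to_string()),
        ..Default::default()
    })
}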

pub fn set_debug_client_info( &self, payload: &NodeSetDebugClientInfoRequest, ) -> Result<(), Error>

Set information about the current client that can be used by sysmem to help diagnose leaking memory and allocation stalls waiting for a participant to send [fuchsia.sysmem2/BufferCollection.SetConstraints].

This sets the debug client info on this [fuchsia.sysmem2/Node] and all Node(s) derived from this Node, unless overridden by [fuchsia.sysmem2/Allocator.SetDebugClientInfo] or a later [fuchsia.sysmem2/Node.SetDebugClientInfo].

Sending [fuchsia.sysmem2/Allocator.SetDebugClientInfo] once per Allocator is the most efficient way to ensure that all fuchsia.sysmem2/Node will have at least some debug client info set, and is also more efficient than separately sending the same debug client info via [fuchsia.sysmem2/Node.SetDebugClientInfo] for each created [fuchsia.sysmem2/Node].

Also used when verbose logging is enabled (see SetVerboseLogging) to indicate which client is closing their channel first, leading to subtree failure (which can be normal if the purpose of the subtree is over, but if happening earlier than expected, the client-channel-specific name can help diagnose where the failure is first coming from, from sysmem’s point of view).

All table fields are currently required.

  • request name This can be an arbitrary string, but the current process name (see fsl::GetCurrentProcessName) is a good default.
  • request id This can be an arbitrary id, but the current process ID (see fsl::GetCurrentProcessKoid) is a good default.
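
A hedged sketch of setting debug client info; the name and id parameters are placeholders for the caller's process name and koid, as suggested above, and the u64 id type is an assumption from the generated bindings:

use fidl_fuchsia_sysmem2::{NodeProxy, NodeSetDebugClientInfoRequest};

// Illustrative sketch: attribute this Node in sysmem's logs and inspect output.
fn tag_node_for_debug(
    node: &NodeProxy,
    process_name: &str,
    process_koid: u64,
) -> Result<(), fidl::Error> {
    node.set_debug_client_info(&NodeSetDebugClientInfoRequest {
        name: Some(process_name.to_string()),
        id: Some(process_koid),
        ..Default::default()
    })
}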

pub fn set_debug_timeout_log_deadline( &self, payload: &NodeSetDebugTimeoutLogDeadlineRequest, ) -> Result<(), Error>

Sysmem logs a warning if sysmem hasn’t seen [fuchsia.sysmem2/BufferCollection.SetConstraints] from all clients within 5 seconds after creation of a new collection.

Clients can call this method to change when the log is printed. If multiple clients set the deadline, it’s unspecified which deadline will take effect.

In most cases the default works well.

All table fields are currently required.

  • request deadline The time at which sysmem will start trying to log the warning, unless all constraints are with sysmem by then.
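
A hedged sketch of moving the deadline; this assumes the deadline field is a monotonic timestamp expressed in nanoseconds (an i64 in the generated bindings), so verify the exact type before copying this pattern:

use fidl_fuchsia_sysmem2::{NodeProxy, NodeSetDebugTimeoutLogDeadlineRequest};

// Illustrative sketch: move the "constraints not yet seen" warning to a later
// monotonic deadline. The i64-nanoseconds type for `deadline` is an assumption.
fn extend_constraints_warning_deadline(node: &NodeProxy, deadline_ns: i64) -> Result<(), fidl::Error> {
    node.set_debug_timeout_log_deadline(&NodeSetDebugTimeoutLogDeadlineRequest {
        deadline: Some(deadline_ns),
        ..Default::default()
    })
}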

pub fn set_verbose_logging(&self) -> Result<(), Error>

This enables verbose logging for the buffer collection.

Verbose logging includes constraints set via [fuchsia.sysmem2/BufferCollection.SetConstraints] from each client along with info set via [fuchsia.sysmem2/Node.SetDebugClientInfo] (or [fuchsia.sysmem2/Allocator.SetDebugClientInfo]) and the structure of the tree of Node(s).

Normally sysmem prints only a single line complaint when aggregation fails, with just the specific detailed reason that aggregation failed, with little surrounding context. While this is often enough to diagnose a problem if only a small change was made and everything was working before the small change, it’s often not particularly helpful for getting a new buffer collection to work for the first time. Especially with more complex trees of nodes, involving things like [fuchsia.sysmem2/BufferCollection.AttachToken], [fuchsia.sysmem2/BufferCollectionToken.SetDispensable], [fuchsia.sysmem2/BufferCollectionTokenGroup] nodes, and associated subtrees of nodes, verbose logging may help in diagnosing what the tree looks like and why it’s failing a logical allocation, or why a tree or subtree is failing sooner than expected.

The intent of the extra logging is to be acceptable from a performance point of view, under the assumption that verbose logging is only enabled on a low number of buffer collections. If we’re not tracking down a bug, we shouldn’t send this message.
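
A minimal sketch (the wrapper name is hypothetical); per the note above, this should only be sent while actively debugging a collection:

// Illustrative sketch: opt this collection into verbose aggregation logging.
fn enable_verbose_sysmem_logging(node: &fidl_fuchsia_sysmem2::NodeProxy) -> Result<(), fidl::Error> {
    node.set_verbose_logging()
}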

pub fn get_node_ref(&self) -> QueryResponseFut<NodeGetNodeRefResponse>

This gets a handle that can be used as a parameter to [fuchsia.sysmem2/Node.IsAlternateFor] called on any [fuchsia.sysmem2/Node]. This handle is only for use as proof that the client obtained this handle from this Node.

Because this is a get not a set, no [fuchsia.sysmem2/Node.Sync] is needed between the GetNodeRef and the call to IsAlternateFor, despite the two calls typically being on different channels.

See also [fuchsia.sysmem2/Node.IsAlternateFor].

All table fields are currently required.

  • response node_ref This handle can be sent via IsAlternateFor on a different Node channel, to prove that the client obtained the handle from this Node.

pub fn is_alternate_for( &self, payload: NodeIsAlternateForRequest, ) -> QueryResponseFut<NodeIsAlternateForResult>

Check whether the calling [fuchsia.sysmem2/Node] is in a subtree rooted at a different child token of a common parent [fuchsia.sysmem2/BufferCollectionTokenGroup], in relation to the passed-in node_ref.

This call is for assisting with admission control de-duplication, and with debugging.

The node_ref must be obtained using [fuchsia.sysmem2/Node.GetNodeRef].

The node_ref can be a duplicated handle; it’s not necessary to call GetNodeRef for every call to [fuchsia.sysmem2/Node.IsAlternateFor].

If a calling token may not actually be a valid token at all due to a potentially hostile/untrusted provider of the token, call [fuchsia.sysmem2/Allocator.ValidateBufferCollectionToken] first instead of potentially getting stuck indefinitely if IsAlternateFor never responds due to a calling token not being a real token (not really talking to sysmem). Another option is to call [fuchsia.sysmem2/Allocator.BindSharedCollection] with this token first which also validates the token along with converting it to a [fuchsia.sysmem2/BufferCollection], then call IsAlternateFor.

All table fields are currently required.

  • response is_alternate
    • true: The first parent node in common between the calling node and the node_ref Node is a BufferCollectionTokenGroup. This means that the calling Node and the node_ref Node will not have both their constraints apply - rather sysmem will choose one or the other of the constraints - never both. This is because only one child of a BufferCollectionTokenGroup is selected during logical allocation, with only that one child’s subtree contributing to constraints aggregation.
    • false: The first parent node in common between the calling Node and the node_ref Node is not a BufferCollectionTokenGroup. Currently, this means the first parent node in common is a BufferCollectionToken or BufferCollection (regardless of whether Release has been sent). This means that the calling Node and the node_ref Node may have both their constraints apply during constraints aggregation of the logical allocation, if both Node(s) are selected by any parent BufferCollectionTokenGroup(s) involved. In this case, there is no BufferCollectionTokenGroup that will directly prevent the two Node(s) from both being selected and their constraints both aggregated, but even when false, one or both Node(s) may still be eliminated from consideration if one or both Node(s) has a direct or indirect parent BufferCollectionTokenGroup which selects a child subtree other than the subtree containing the calling Node or node_ref Node.
  • error [fuchsia.sysmem2/Error.NOT_FOUND] The node_ref wasn’t associated with the same buffer collection as the calling Node. Another reason for this error is if the node_ref is an [zx.Handle.EVENT] handle with sufficient rights, but isn’t actually a real node_ref obtained from GetNodeRef.
  • error [fuchsia.sysmem2/Error.PROTOCOL_DEVIATION] The caller passed a node_ref that isn’t a [zx.Handle:EVENT] handle, or doesn’t have the needed rights expected on a real node_ref.
  • No other failing status codes are returned by this call. However, sysmem may add additional codes in future, so the client should have sensible default handling for any failing status code.
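
A hedged sketch that combines GetNodeRef and IsAlternateFor across two Node proxies; the .expect() handling and the assumption that the required table fields are populated are for brevity only:

use fidl_fuchsia_sysmem2::{NodeIsAlternateForRequest, NodeProxy};

// Illustrative sketch: does `a` sit under a different child of a common
// BufferCollectionTokenGroup than `b`?
async fn nodes_are_alternates(a: &NodeProxy, b: &NodeProxy) -> Result<bool, fidl::Error> {
    // Fetch a node_ref event from `a` as proof that we hold `a`'s channel.
    let node_ref = a.get_node_ref().await?.node_ref.expect("node_ref is a required field");
    // Ask `b`; no Sync is needed between the two calls (see above).
    let result = b
        .is_alternate_for(NodeIsAlternateForRequest {
            node_ref: Some(node_ref),
            ..Default::default()
        })
        .await?;
    // The outer Result is FIDL transport; the inner one carries sysmem2's
    // NOT_FOUND / PROTOCOL_DEVIATION errors, unwrapped here for brevity.
    Ok(result
        .expect("IsAlternateFor returned a sysmem2 error")
        .is_alternate
        .expect("is_alternate is a required field"))
}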

pub fn get_buffer_collection_id( &self, ) -> QueryResponseFut<NodeGetBufferCollectionIdResponse>

Get the buffer collection ID. This ID is also available from [fuchsia.sysmem2/Allocator.GetVmoInfo] (along with the buffer_index within the collection).

This call is mainly useful in situations where we can’t convey a [fuchsia.sysmem2/BufferCollectionToken] or [fuchsia.sysmem2/BufferCollection] directly, but can only convey a VMO handle, which can be joined back up with a BufferCollection client end that was created via a different path. Prefer to convey a BufferCollectionToken or BufferCollection directly when feasible.

Trusting a buffer_collection_id value from a source other than sysmem is analogous to trusting a koid value from a source other than zircon. Both should be avoided unless really necessary, and both require caution. In some situations it may be reasonable to refer to a pre-established BufferCollection by buffer_collection_id via a protocol for efficiency reasons, but an incoming value purporting to be a buffer_collection_id is not sufficient alone to justify granting the sender of the buffer_collection_id any capability. The sender must first prove to a receiver that the sender has/had a VMO or has/had a BufferCollectionToken to the same collection by sending a handle that sysmem confirms is a valid sysmem handle and which sysmem maps to the buffer_collection_id value. The receiver should take care to avoid assuming that a sender had a BufferCollectionToken in cases where the sender has only proven that the sender had a VMO.

  • response buffer_collection_id This ID is unique per buffer collection per boot. Each buffer is uniquely identified by the buffer_collection_id and buffer_index together.
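
A hedged sketch of reading the ID back (the u64 type for buffer_collection_id is an assumption based on its description as a per-boot unique ID):

// Illustrative sketch: fetch the boot-unique collection ID, e.g. to correlate
// VMOs reported by Allocator.GetVmoInfo with this collection.
async fn buffer_collection_id(node: &fidl_fuchsia_sysmem2::NodeProxy) -> Result<u64, fidl::Error> {
    let response = node.get_buffer_collection_id().await?;
    Ok(response.buffer_collection_id.expect("buffer_collection_id is a required field"))
}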

pub fn set_weak(&self) -> Result<(), Error>

Sets the current [fuchsia.sysmem2/Node] and all child Node(s) created after this message to weak, which means that a client’s Node client end (or a child created after this message) is not alone sufficient to keep allocated VMOs alive.

All VMOs obtained from weak Node(s) are weak sysmem VMOs. See also close_weak_asap.

This message is only permitted before the Node becomes ready for allocation (else the server closes the channel with ZX_ERR_BAD_STATE):

  • BufferCollectionToken: any time
  • BufferCollection: before SetConstraints
  • BufferCollectionTokenGroup: before AllChildrenPresent

Currently, no conversion from strong Node to weak Node after ready for allocation is provided, but a client can simulate that by creating an additional Node before allocation and setting that additional Node to weak, and then potentially at some point later sending Release and closing the client end of the client’s strong Node, but keeping the client’s weak Node.

Zero strong Node(s) and zero strong VMO handles will result in buffer collection failure (all Node client end(s) will see ZX_CHANNEL_PEER_CLOSED and all close_weak_asap client_end(s) will see ZX_EVENTPAIR_PEER_CLOSED), but sysmem (intentionally) won’t notice this situation until all Node(s) are ready for allocation. For initial allocation to succeed, at least one strong Node is required to exist at allocation time, but after that client receives VMO handles, that client can BufferCollection.Release and close the client end without causing this type of failure.

This implies [fuchsia.sysmem2/Node.SetWeakOk] as well, but does not imply SetWeakOk with for_child_nodes_also true, which can be sent separately as appropriate.
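
A minimal sketch (the helper name is hypothetical); as listed above, the call must happen before this Node becomes ready for allocation:

// Illustrative sketch: make this Node (and children created afterwards) weak,
// so VMOs obtained through it do not on their own keep the buffers alive.
fn mark_node_weak(node: &fidl_fuchsia_sysmem2::NodeProxy) -> Result<(), fidl::Error> {
    node.set_weak()
}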

pub fn set_weak_ok(&self, payload: NodeSetWeakOkRequest) -> Result<(), Error>

This indicates to sysmem that the client is prepared to pay attention to close_weak_asap.

If sent, this message must be before [fuchsia.sysmem2/BufferCollection.WaitForAllBuffersAllocated].

All participants using a weak [fuchsia.sysmem2/BufferCollection] must send this message before WaitForAllBuffersAllocated, or a parent Node must have sent [fuchsia.sysmem2/Node.SetWeakOk] with for_child_nodes_also true, else the WaitForAllBuffersAllocated will trigger buffer collection failure.

This message is necessary because weak sysmem VMOs have not always been a thing, so older clients are not aware of the need to pay attention to close_weak_asap ZX_EVENTPAIR_PEER_CLOSED and close all remaining sysmem weak VMO handles asap. By having this message and requiring participants to indicate their acceptance of this aspect of the overall protocol, we avoid situations where an older client is delivered a weak VMO without any way for sysmem to get that VMO to close quickly later (and on a per-buffer basis).

A participant that doesn’t handle close_weak_asap and also doesn’t retrieve any VMO handles via WaitForAllBuffersAllocated doesn’t need to send SetWeakOk (and doesn’t need to have a parent Node send SetWeakOk with for_child_nodes_also true either). However, if that same participant has a child/delegate which does retrieve VMOs, that child/delegate will need to send SetWeakOk before WaitForAllBuffersAllocated.

  • request for_child_nodes_also If present and true, this means direct child nodes of this node created after this message plus all descendants of those nodes will behave as if SetWeakOk was sent on those nodes. Any child node of this node that was created before this message is not included. This setting is “sticky” in the sense that a subsequent SetWeakOk without this bool set to true does not reset the server-side bool. If this creates a problem for a participant, a workaround is to SetWeakOk with for_child_nodes_also true on child tokens instead, as appropriate. A participant should only set for_child_nodes_also true if the participant can really promise to obey close_weak_asap both for its own weak VMO handles, and for all weak VMO handles held by participants holding the corresponding child Node(s). When for_child_nodes_also is set, descendant Node(s) which are using sysmem(1) can be weak, despite the clients of those sysmem(1) Node(s) not having any direct way to SetWeakOk or any direct way to find out about close_weak_asap. This only applies to descendants of this Node which are using sysmem(1), not to this Node when converted directly from a sysmem2 token to a sysmem(1) token, which will fail allocation unless an ancestor of this Node specified for_child_nodes_also true.
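
A hedged sketch of opting in to weak-VMO handling for this Node and for child Nodes created after this message; the field name is taken from the request table above:

use fidl_fuchsia_sysmem2::{NodeProxy, NodeSetWeakOkRequest};

// Illustrative sketch: promise to honor close_weak_asap for this Node and for
// child Nodes created after this message. Must precede WaitForAllBuffersAllocated.
fn accept_weak_vmos(node: &NodeProxy) -> Result<(), fidl::Error> {
    node.set_weak_ok(NodeSetWeakOkRequest {
        for_child_nodes_also: Some(true),
        ..Default::default()
    })
}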

pub fn attach_node_tracking( &self, payload: NodeAttachNodeTrackingRequest, ) -> Result<(), Error>

The server_end will be closed after this Node and any child nodes have released their buffer counts, making those counts available for reservation by a different Node via [fuchsia.sysmem2/BufferCollection.AttachToken].

The Node buffer counts may not be released until the entire tree of Node(s) is closed or failed, because [fuchsia.sysmem2/BufferCollection.Release] followed by channel close does not immediately un-reserve the Node buffer counts. Instead, the Node buffer counts remain reserved until the orphaned node is later cleaned up.

If the Node exceeds a fairly large number of attached eventpair server ends, a log message will indicate this and the Node (and the appropriate sub-tree) will fail.

The server_end will remain open when [fuchsia.sysmem2/Allocator.BindSharedCollection] converts a [fuchsia.sysmem2/BufferCollectionToken] into a [fuchsia.sysmem2/BufferCollection].

This message can also be used with a [fuchsia.sysmem2/BufferCollectionTokenGroup].
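
A hedged sketch of attaching a tracking eventpair; the fidl::EventPair::create() call and the server_end field type are assumptions based on the eventpair handle described above, so check the generated bindings before relying on this:

use fidl_fuchsia_sysmem2::{NodeAttachNodeTrackingRequest, NodeProxy};

// Illustrative sketch: keep one end of an eventpair; its PEER_CLOSED signal
// fires once this Node (and its children) have released their buffer counts.
fn track_node_teardown(node: &NodeProxy) -> Result<fidl::EventPair, fidl::Error> {
    // Assumption: EventPair::create() returns the two ends directly.
    let (watcher, server_end) = fidl::EventPair::create();
    node.attach_node_tracking(NodeAttachNodeTrackingRequest {
        server_end: Some(server_end),
        ..Default::default()
    })?;
    Ok(watcher)
}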

Trait Implementations§

impl Clone for NodeProxy

fn clone(&self) -> NodeProxy

Returns a copy of the value. Read more

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source. Read more

impl Debug for NodeProxy

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter. Read more

impl NodeProxyInterface for NodeProxy

impl Proxy for NodeProxy

type Protocol = NodeMarker

The protocol which this Proxy controls.

fn from_channel(inner: AsyncChannel) -> Self

Create a proxy over the given channel.

fn into_channel(self) -> Result<AsyncChannel, Self>

Attempt to convert the proxy back into a channel. Read more

fn as_channel(&self) -> &AsyncChannel

Get a reference to the proxy’s underlying channel. Read more

fn into_client_end(self) -> Result<ClientEnd<Self::Protocol>, Self>

Attempt to convert the proxy back into a client end. Read more

fn is_closed(&self) -> bool

Returns true if the proxy has received the PEER_CLOSED signal.

fn on_closed(&self) -> OnSignals<'_, Unowned<'_, Handle>>

Returns a future that completes when the proxy receives the PEER_CLOSED signal.

Auto Trait Implementations§

Blanket Implementations§

impl<T> Any for T
where T: 'static + ?Sized,

fn type_id(&self) -> TypeId

Gets the TypeId of self. Read more

impl<T> Borrow<T> for T
where T: ?Sized,

fn borrow(&self) -> &T

Immutably borrows from an owned value. Read more

impl<T> BorrowMut<T> for T
where T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value. Read more

impl<T> CloneToUninit for T
where T: Clone,

default unsafe fn clone_to_uninit(&self, dst: *mut T)

🔬This is a nightly-only experimental API. (clone_to_uninit)
Performs copy-assignment from self to dst. Read more

impl<T> Encode<Ambiguous1> for T

unsafe fn encode( self, _encoder: &mut Encoder<'_>, _offset: usize, _depth: Depth, ) -> Result<(), Error>

Encodes the object into the encoder’s buffers. Any handles stored in the object are swapped for Handle::INVALID. Read more

impl<T> Encode<Ambiguous2> for T

unsafe fn encode( self, _encoder: &mut Encoder<'_>, _offset: usize, _depth: Depth, ) -> Result<(), Error>

Encodes the object into the encoder’s buffers. Any handles stored in the object are swapped for Handle::INVALID. Read more

impl<T> From<T> for T

fn from(t: T) -> T

Returns the argument unchanged.

impl<T> Instrument for T

fn instrument(self, span: Span) -> Instrumented<Self>

Instruments this type with the provided [Span], returning an Instrumented wrapper. Read more

fn in_current_span(self) -> Instrumented<Self>

Instruments this type with the current Span, returning an Instrumented wrapper. Read more

impl<T, U> Into<U> for T
where U: From<T>,

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<T> Pointable for T

const ALIGN: usize = _

The alignment of pointer.

type Init = T

The type for initializers.

unsafe fn init(init: <T as Pointable>::Init) -> usize

Initializes a with the given initializer. Read more

unsafe fn deref<'a>(ptr: usize) -> &'a T

Dereferences the given pointer. Read more

unsafe fn deref_mut<'a>(ptr: usize) -> &'a mut T

Mutably dereferences the given pointer. Read more

unsafe fn drop(ptr: usize)

Drops the object pointed to by the given pointer. Read more

impl<T> ToOwned for T
where T: Clone,

type Owned = T

The resulting type after obtaining ownership.

fn to_owned(&self) -> T

Creates owned data from borrowed data, usually by cloning. Read more

fn clone_into(&self, target: &mut T)

Uses borrowed data to replace owned data, usually by cloning. Read more

impl<T, U> TryFrom<U> for T
where U: Into<T>,

type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.

impl<T> WithSubscriber for T

fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>
where S: Into<Dispatch>,

Attaches the provided Subscriber to this type, returning a [WithDispatch] wrapper. Read more

fn with_current_subscriber(self) -> WithDispatch<Self>

Attaches the current default Subscriber to this type, returning a [WithDispatch] wrapper. Read more