template <>

class WireWeakOnewayClientImpl

Defined at line 24503 of file fidling/gen/sdk/fidl/fuchsia.sysmem2/fuchsia.sysmem2/cpp/fidl/fuchsia.sysmem2/cpp/wire_messaging.h

Public Methods

::fidl::OneWayStatus Release ()

###### On a [`fuchsia.sysmem2/BufferCollectionToken`] channel:

Normally a participant will convert a `BufferCollectionToken` into a

[`fuchsia.sysmem2/BufferCollection`], but a participant can instead send

`Release` via the token (and then close the channel immediately or

shortly later in response to the server closing the server end), which

avoids causing buffer collection failure. Without a prior `Release`,

closing the `BufferCollectionToken` client end will cause buffer

collection failure.

###### On a [`fuchsia.sysmem2/BufferCollection`] channel:

By default the server handles unexpected closure of a

[`fuchsia.sysmem2/BufferCollection`] client end (without `Release`

first) by failing the buffer collection. Partly this is to expedite

closing VMO handles to reclaim memory when any participant fails. If a

participant would like to cleanly close a `BufferCollection` without

causing buffer collection failure, the participant can send `Release`

before closing the `BufferCollection` client end. The `Release` can

occur before or after `SetConstraints`. If before `SetConstraints`, the

buffer collection won't require constraints from this node in order to

allocate. If after `SetConstraints`, the constraints are retained and

aggregated, despite the lack of `BufferCollection` connection at the

time of constraints aggregation.

###### On a [`fuchsia.sysmem2/BufferCollectionTokenGroup`] channel:

By default, unexpected closure of a `BufferCollectionTokenGroup` client

end (without `Release` first) will trigger failure of the buffer

collection. To close a `BufferCollectionTokenGroup` channel without

failing the buffer collection, ensure that `AllChildrenPresent` has been

sent, and send `Release` before closing the `BufferCollectionTokenGroup`

client end.

If `Release` occurs before

[`fuchsia.sysmem2/BufferCollectionTokenGroup.AllChildrenPresent`], the

buffer collection will fail (triggered by reception of `Release` without

prior `AllChildrenPresent`). This is intentionally not analogous to how

[`fuchsia.sysmem2/BufferCollection.Release`] without

[`fuchsia.sysmem2/BufferCollection.SetConstraints`] first doesn't cause

buffer collection failure. For a `BufferCollectionTokenGroup`, clean

close requires `AllChildrenPresent` (if not already sent), then

`Release`, then close client end.

If `Release` occurs after `AllChildrenPresent`, the children and all

their constraints remain intact (just as they would if the

`BufferCollectionTokenGroup` channel had remained open), and the client

end close doesn't trigger buffer collection failure.

###### On all [`fuchsia.sysmem2/Node`] channels (any of the above):

For brevity, the per-channel-protocol paragraphs above ignore the

separate failure domain created by

[`fuchsia.sysmem2/BufferCollectionToken.SetDispensable`] or

[`fuchsia.sysmem2/BufferCollection.AttachToken`]. When a client end

unexpectedly closes (without `Release` first) and that client end is

under a failure domain, instead of failing the whole buffer collection,

the failure domain is failed, but the buffer collection itself is

isolated from failure of the failure domain. Such failure domains can be

nested, in which case only the inner-most failure domain in which the

`Node` resides fails.

Allocates 32 bytes of message buffer on the stack. No heap allocation necessary.
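
For illustration, a minimal sketch of the clean-close pattern described above for a token, assuming `token` is an already-bound `fidl::WireClient<fuchsia_sysmem2::BufferCollectionToken>` (binding setup omitted; the helper name is hypothetical):

```cpp
#include <fidl/fuchsia.sysmem2/cpp/wire.h>

// Cleanly close a token without failing the buffer collection. `token` is
// assumed to be an already-bound
// fidl::WireClient<fuchsia_sysmem2::BufferCollectionToken>.
void CleanlyCloseToken(
    fidl::WireClient<fuchsia_sysmem2::BufferCollectionToken>& token) {
  // Sending Release first tells sysmem this closure is intentional, so
  // dropping the client end afterwards won't fail the collection.
  fidl::OneWayStatus status = token->Release();
  if (!status.ok()) {
    // The channel is already closed; nothing more to do here.
    return;
  }
  // The client end can now be closed (e.g. by destroying the binding)
  // without triggering buffer collection failure.
}
```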

::fidl::OneWayStatus SetName (::fuchsia_sysmem2::wire::NodeSetNameRequest NodeSetNameRequest)

Set a name for VMOs in this buffer collection.

If the name doesn't fit in `ZX_MAX_NAME_LEN`, the name of the VMO itself
will be truncated to fit. The name of the VMO will be suffixed with the
buffer index within the collection (if the suffix fits within
`ZX_MAX_NAME_LEN`). The name specified here (without truncation) will be

listed in the inspect data.

The name only affects VMOs allocated after the name is set; this call

does not rename existing VMOs. If multiple clients set different names,
the name set with the larger `priority` value wins. Setting a new name
with the same priority as a prior name doesn't change the name.

All table fields are currently required.

+ request `priority` The name is only set if this is the first `SetName`

or if `priority` is greater than any previous `priority` value in

prior `SetName` calls across all `Node`(s) of this buffer collection.

+ request `name` The name for VMOs created under this buffer collection.

Allocates 144 bytes of message buffer on the stack. No heap allocation necessary.
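
A minimal sketch of a `SetName` call using the wire table builder, assuming `collection` is an already-bound `fidl::WireClient<fuchsia_sysmem2::BufferCollection>`; the priority value and name here are arbitrary examples:

```cpp
#include <fidl/fuchsia.sysmem2/cpp/wire.h>

void NameCollectionVmos(
    fidl::WireClient<fuchsia_sysmem2::BufferCollection>& collection) {
  fidl::Arena arena;
  auto request = fuchsia_sysmem2::wire::NodeSetNameRequest::Builder(arena)
                     // A higher priority wins over names set by other
                     // participants with lower priority values.
                     .priority(100u)
                     .name(fidl::StringView(arena, "example-collection"))
                     .Build();
  fidl::OneWayStatus status = collection->SetName(std::move(request));
  if (!status.ok()) {
    // One-way call failed; the channel is already closed.
  }
}
```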

::fidl::OneWayStatus SetDebugClientInfo (::fuchsia_sysmem2::wire::NodeSetDebugClientInfoRequest NodeSetDebugClientInfoRequest)

Set information about the current client that can be used by sysmem to

help diagnose leaking memory and allocation stalls waiting for a

participant to send [`fuchsia.sysmem2/BufferCollection.SetConstraints`].

This sets the debug client info on this [`fuchsia.sysmem2/Node`] and all

`Node`(s) derived from this `Node`, unless overridden by

[`fuchsia.sysmem2/Allocator.SetDebugClientInfo`] or a later

[`fuchsia.sysmem2/Node.SetDebugClientInfo`].

Sending [`fuchsia.sysmem2/Allocator.SetDebugClientInfo`] once per

`Allocator` is the most efficient way to ensure that all

[`fuchsia.sysmem2/Node`](s) will have at least some debug client info

set, and is also more efficient than separately sending the same debug

client info via [`fuchsia.sysmem2/Node.SetDebugClientInfo`] for each

created [`fuchsia.sysmem2/Node`].

Also used when verbose logging is enabled (see `SetVerboseLogging`) to

indicate which client is closing their channel first, leading to subtree

failure (which can be normal if the purpose of the subtree is over, but

if happening earlier than expected, the client-channel-specific name can

help diagnose where the failure is first coming from, from sysmem's

point of view).

All table fields are currently required.

+ request `name` This can be an arbitrary string, but the current

process name (see `fsl::GetCurrentProcessName`) is a good default.

+ request `id` This can be an arbitrary id, but the current process ID

(see `fsl::GetCurrentProcessKoid`) is a good default.

Allocates 344 bytes of message buffer on the stack. No heap allocation necessary.
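
A sketch of sending debug client info, assuming an already-bound wire client; the helper and its parameters are hypothetical, and a real client would typically pass the values suggested above (`fsl::GetCurrentProcessName` / `fsl::GetCurrentProcessKoid`):

```cpp
#include <fidl/fuchsia.sysmem2/cpp/wire.h>

#include <string_view>

// `node` is assumed to be an already-bound
// fidl::WireClient<fuchsia_sysmem2::BufferCollection>; the same pattern
// applies to the other Node flavors.
void SetDebugInfo(fidl::WireClient<fuchsia_sysmem2::BufferCollection>& node,
                  std::string_view name, uint64_t id) {
  fidl::Arena arena;
  auto request =
      fuchsia_sysmem2::wire::NodeSetDebugClientInfoRequest::Builder(arena)
          .name(fidl::StringView(arena, name))
          .id(id)
          .Build();
  fidl::OneWayStatus status = node->SetDebugClientInfo(std::move(request));
  (void)status;  // Failure only means the channel already closed.
}
```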

::fidl::OneWayStatus SetDebugTimeoutLogDeadline (::fuchsia_sysmem2::wire::NodeSetDebugTimeoutLogDeadlineRequest NodeSetDebugTimeoutLogDeadlineRequest)

Sysmem logs a warning if sysmem hasn't seen

[`fuchsia.sysmem2/BufferCollection.SetConstraints`] from all clients

within 5 seconds after creation of a new collection.

Clients can call this method to change when the log is printed. If

multiple clients set the deadline, it's unspecified which deadline will

take effect.

In most cases the default works well.

All table fields are currently required.

+ request `deadline` The time at which sysmem will start trying to log

the warning, unless all constraints are with sysmem by then.

Allocates 64 bytes of message buffer on the stack. No heap allocation necessary.
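
A sketch of moving the warning deadline out to 30 seconds from now (an arbitrary example), assuming `node` is an already-bound `fidl::WireClient<fuchsia_sysmem2::BufferCollection>`:

```cpp
#include <fidl/fuchsia.sysmem2/cpp/wire.h>

#include <zircon/syscalls.h>
#include <zircon/time.h>

// Push the "constraints not yet seen" warning out to 30 seconds from now,
// e.g. for a collection that is intentionally populated slowly.
void ExtendDebugLogDeadline(
    fidl::WireClient<fuchsia_sysmem2::BufferCollection>& node) {
  fidl::Arena arena;
  auto request =
      fuchsia_sysmem2::wire::NodeSetDebugTimeoutLogDeadlineRequest::Builder(
          arena)
          .deadline(zx_deadline_after(ZX_SEC(30)))
          .Build();
  fidl::OneWayStatus status =
      node->SetDebugTimeoutLogDeadline(std::move(request));
  (void)status;  // One-way; failure just means the channel already closed.
}
```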

::fidl::OneWayStatus SetVerboseLogging ()

This enables verbose logging for the buffer collection.

Verbose logging includes constraints set via

[`fuchsia.sysmem2/BufferCollection.SetConstraints`] from each client

along with info set via [`fuchsia.sysmem2/Node.SetDebugClientInfo`] (or

[`fuchsia.sysmem2/Allocator.SetDebugClientInfo`]) and the structure of

the tree of `Node`(s).

Normally sysmem prints only a single-line complaint when aggregation

fails, with just the specific detailed reason that aggregation failed,

with little surrounding context. While this is often enough to diagnose

a problem if only a small change was made and everything was working

before the small change, it's often not particularly helpful for getting

a new buffer collection to work for the first time. Especially with

more complex trees of nodes, involving things like

[`fuchsia.sysmem2/BufferCollection.AttachToken`],

[`fuchsia.sysmem2/BufferCollectionToken.SetDispensable`],

[`fuchsia.sysmem2/BufferCollectionTokenGroup`] nodes, and associated

subtrees of nodes, verbose logging may help in diagnosing what the tree

looks like and why it's failing a logical allocation, or why a tree or

subtree is failing sooner than expected.

The intent of the extra logging is to be acceptable from a performance

point of view, under the assumption that verbose logging is only enabled

on a low number of buffer collections. If we're not tracking down a bug,

we shouldn't send this message.

Allocates 32 bytes of message buffer on the stack. No heap allocation necessary.
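
In line with that guidance, a sketch that only sends `SetVerboseLogging` behind a debug switch; the flag name and helper are hypothetical, and `collection` is an already-bound `fidl::WireClient<fuchsia_sysmem2::BufferCollection>`:

```cpp
#include <fidl/fuchsia.sysmem2/cpp/wire.h>

// Hypothetical build-time switch; enable only while actively debugging an
// aggregation failure, per the guidance above.
constexpr bool kDebugSysmemAggregation = false;

void MaybeEnableVerboseLogging(
    fidl::WireClient<fuchsia_sysmem2::BufferCollection>& collection) {
  if (kDebugSysmemAggregation) {
    fidl::OneWayStatus status = collection->SetVerboseLogging();
    (void)status;
  }
}
```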

::fidl::OneWayStatus SetWeak ()

Sets the current [`fuchsia.sysmem2/Node`] and all child `Node`(s)

created after this message to weak, which means that a client's `Node`

client end (or a child created after this message) is not alone

sufficient to keep allocated VMOs alive.

All VMOs obtained from weak `Node`(s) are weak sysmem VMOs. See also

`close_weak_asap`.

This message is only permitted before the `Node` becomes ready for

allocation (else the server closes the channel with `ZX_ERR_BAD_STATE`):

* `BufferCollectionToken`: any time

* `BufferCollection`: before `SetConstraints`

* `BufferCollectionTokenGroup`: before `AllChildrenPresent`

Currently, no conversion from strong `Node` to weak `Node` after ready

for allocation is provided, but a client can simulate that by creating

an additional `Node` before allocation and setting that additional

`Node` to weak, and then potentially at some point later sending

`Release` and closing the client end of the client's strong `Node`, but

keeping the client's weak `Node`.

Zero strong `Node`(s) and zero strong VMO handles will result in buffer

collection failure (all `Node` client end(s) will see

`ZX_CHANNEL_PEER_CLOSED` and all `close_weak_asap` `client_end`(s) will

see `ZX_EVENTPAIR_PEER_CLOSED`), but sysmem (intentionally) won't notice

this situation until all `Node`(s) are ready for allocation. For initial

allocation to succeed, at least one strong `Node` is required to exist

at allocation time, but after that client receives VMO handles, that

client can `BufferCollection.Release` and close the client end without

causing this type of failure.

This implies [`fuchsia.sysmem2/Node.SetWeakOk`] as well, but does not

imply `SetWeakOk` with `for_children_also` true, which can be sent

separately as appropriate.

Allocates 32 bytes of message buffer on the stack. No heap allocation necessary.
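
A minimal sketch, assuming `collection` is an already-bound `fidl::WireClient<fuchsia_sysmem2::BufferCollection>` that has not yet sent `SetConstraints`:

```cpp
#include <fidl/fuchsia.sysmem2/cpp/wire.h>

// Make this node weak. For a BufferCollection this must be sent before
// SetConstraints (otherwise the server closes the channel with
// ZX_ERR_BAD_STATE).
void MakeNodeWeak(
    fidl::WireClient<fuchsia_sysmem2::BufferCollection>& collection) {
  fidl::OneWayStatus status = collection->SetWeak();
  if (!status.ok()) {
    return;
  }
  // SetWeak implies SetWeakOk for this node only; SetWeakOk with
  // for_child_nodes_also true must be sent separately if child nodes also
  // need it (see SetWeakOk below).
}
```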

::fidl::OneWayStatus SetWeakOk (::fuchsia_sysmem2::wire::NodeSetWeakOkRequest NodeSetWeakOkRequest)

This indicates to sysmem that the client is prepared to pay attention to

`close_weak_asap`.

If sent, this message must be before

[`fuchsia.sysmem2/BufferCollection.WaitForAllBuffersAllocated`].

All participants using a weak [`fuchsia.sysmem2/BufferCollection`] must

send this message before `WaitForAllBuffersAllocated`, or a parent

`Node` must have sent [`fuchsia.sysmem2/Node.SetWeakOk`] with

`for_child_nodes_also` true, else the `WaitForAllBuffersAllocated` will

trigger buffer collection failure.

This message is necessary because weak sysmem VMOs have not always been

a thing, so older clients are not aware of the need to pay attention to

`close_weak_asap` `ZX_EVENTPAIR_PEER_CLOSED` and close all remaining

sysmem weak VMO handles asap. By having this message and requiring

participants to indicate their acceptance of this aspect of the overall

protocol, we avoid situations where an older client is delivered a weak

VMO without any way for sysmem to get that VMO to close quickly later

(and on a per-buffer basis).

A participant that doesn't handle `close_weak_asap` and also doesn't

retrieve any VMO handles via `WaitForAllBuffersAllocated` doesn't need

to send `SetWeakOk` (and doesn't need to have a parent `Node` send

`SetWeakOk` with `for_child_nodes_also` true either). However, if that

same participant has a child/delegate which does retrieve VMOs, that

child/delegate will need to send `SetWeakOk` before

`WaitForAllBuffersAllocated`.

+ request `for_child_nodes_also` If present and true, this means direct

child nodes of this node created after this message plus all

descendants of those nodes will behave as if `SetWeakOk` was sent on

those nodes. Any child node of this node that was created before this

message is not included. This setting is "sticky" in the sense that a

subsequent `SetWeakOk` without this bool set to true does not reset

the server-side bool. If this creates a problem for a participant, a

workaround is to `SetWeakOk` with `for_child_nodes_also` true on child

tokens instead, as appropriate. A participant should only set

`for_child_nodes_also` true if the participant can really promise to

obey `close_weak_asap` both for its own weak VMO handles, and for all

weak VMO handles held by participants holding the corresponding child

`Node`(s). When `for_child_nodes_also` is set, descendant `Node`(s)

which are using sysmem(1) can be weak, despite the clients of those

sysmem(1) `Node`(s) not having any direct way to `SetWeakOk` or any

direct way to find out about `close_weak_asap`. This only applies to

descendants of this `Node` which are using sysmem(1), not to this

`Node` when converted directly from a sysmem2 token to a sysmem(1)

token, which will fail allocation unless an ancestor of this `Node`

specified `for_child_nodes_also` true.

Allocates 56 bytes of message buffer on the stack. No heap allocation necessary.
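
A sketch of opting in to `close_weak_asap` handling, including the `for_child_nodes_also` field described above; the helper name is hypothetical, and `collection` is an already-bound `fidl::WireClient<fuchsia_sysmem2::BufferCollection>`:

```cpp
#include <fidl/fuchsia.sysmem2/cpp/wire.h>

// Opt in to close_weak_asap handling for this node and for child nodes
// created after this message. Must be sent before
// WaitForAllBuffersAllocated.
void AcceptWeakVmos(
    fidl::WireClient<fuchsia_sysmem2::BufferCollection>& collection) {
  fidl::Arena arena;
  auto request = fuchsia_sysmem2::wire::NodeSetWeakOkRequest::Builder(arena)
                     // Only set this if this participant can promise that
                     // all holders of child nodes obey close_weak_asap too.
                     .for_child_nodes_also(true)
                     .Build();
  fidl::OneWayStatus status = collection->SetWeakOk(std::move(request));
  (void)status;
}
```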

::fidl::OneWayStatus AttachNodeTracking (::fuchsia_sysmem2::wire::NodeAttachNodeTrackingRequest NodeAttachNodeTrackingRequest)

The `server_end` will be closed after this `Node` and any child nodes

have released their buffer counts, making those counts available for

reservation by a different `Node` via

[`fuchsia.sysmem2/BufferCollection.AttachToken`].

The `Node` buffer counts may not be released until the entire tree of

`Node`(s) is closed or failed, because

[`fuchsia.sysmem2/BufferCollection.Release`] followed by channel close

does not immediately un-reserve the `Node` buffer counts. Instead, the

`Node` buffer counts remain reserved until the orphaned node is later

cleaned up.

If the `Node` exceeds a fairly large number of attached eventpair server

ends, a log message will indicate this and the `Node` (and the
appropriate sub-tree) will fail.

The `server_end` will remain open when

[`fuchsia.sysmem2/Allocator.BindSharedCollection`] converts a

[`fuchsia.sysmem2/BufferCollectionToken`] into a

[`fuchsia.sysmem2/BufferCollection`].

This message can also be used with a

[`fuchsia.sysmem2/BufferCollectionTokenGroup`].

Allocates 56 bytes of message buffer on the stack. No heap allocation necessary.
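
A sketch of attaching a node-tracking eventpair, assuming `node` is an already-bound `fidl::WireClient<fuchsia_sysmem2::BufferCollection>`; the helper name is hypothetical:

```cpp
#include <fidl/fuchsia.sysmem2/cpp/wire.h>

#include <lib/zx/eventpair.h>
#include <zircon/assert.h>

// Attach an eventpair whose server end sysmem closes once this node's
// buffer counts have been released. The caller keeps the returned client
// end and waits (ideally asynchronously) for ZX_EVENTPAIR_PEER_CLOSED.
zx::eventpair TrackNodeBufferCounts(
    fidl::WireClient<fuchsia_sysmem2::BufferCollection>& node) {
  zx::eventpair client_end, server_end;
  ZX_ASSERT(zx::eventpair::create(0, &client_end, &server_end) == ZX_OK);
  fidl::Arena arena;
  auto request =
      fuchsia_sysmem2::wire::NodeAttachNodeTrackingRequest::Builder(arena)
          .server_end(std::move(server_end))
          .Build();
  fidl::OneWayStatus status = node->AttachNodeTracking(std::move(request));
  (void)status;
  return client_end;
}
```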

::fidl::OneWayStatus SetConstraints (::fuchsia_sysmem2::wire::BufferCollectionSetConstraintsRequest BufferCollectionSetConstraintsRequest)

Provide [`fuchsia.sysmem2/BufferCollectionConstraints`] to the buffer

collection.

A participant may only call

[`fuchsia.sysmem2/BufferCollection.SetConstraints`] up to once per

[`fuchsia.sysmem2/BufferCollection`].

For buffer allocation to be attempted, all holders of a

`BufferCollection` client end need to call `SetConstraints` before

sysmem will attempt to allocate buffers.

+ request `constraints` These are the constraints on the buffer

collection imposed by the sending client/participant. The

`constraints` field is not required to be set. If not set, the client

is not setting any actual constraints, but is indicating that the

client has no constraints to set. A client that doesn't set the

`constraints` field won't receive any VMO handles, but can still find

out how many buffers were allocated and can still refer to buffers by

their `buffer_index`.

Allocates 16 bytes of response buffer on the stack. Request is heap-allocated.
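
A sketch of sending constraints, assuming `collection` is an already-bound `fidl::WireClient<fuchsia_sysmem2::BufferCollection>`. The specific fields, values, and the `kCpuUsageRead` constant (as generated from sysmem2's `CPU_USAGE_READ`) are illustrative assumptions, not a complete or recommended set of constraints:

```cpp
#include <fidl/fuchsia.sysmem2/cpp/wire.h>

// Send simple CPU-read constraints asking to camp on two buffers.
void SendConstraints(
    fidl::WireClient<fuchsia_sysmem2::BufferCollection>& collection) {
  fidl::Arena arena;
  auto constraints =
      fuchsia_sysmem2::wire::BufferCollectionConstraints::Builder(arena)
          .usage(fuchsia_sysmem2::wire::BufferUsage::Builder(arena)
                     .cpu(fuchsia_sysmem2::kCpuUsageRead)  // assumed constant
                     .Build())
          .min_buffer_count_for_camping(2)
          .Build();
  auto request =
      fuchsia_sysmem2::wire::BufferCollectionSetConstraintsRequest::Builder(
          arena)
          .constraints(std::move(constraints))
          .Build();
  fidl::OneWayStatus status = collection->SetConstraints(std::move(request));
  (void)status;
}
```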

::fidl::OneWayStatus AttachToken (::fuchsia_sysmem2::wire::BufferCollectionAttachTokenRequest BufferCollectionAttachTokenRequest)

Create a new token to add a new participant to an existing logical

buffer collection, if the existing collection's buffer counts,

constraints, and participants allow.

This can be useful in replacing a failed participant, and/or in

adding/re-adding a participant after buffers have already been

allocated.

When [`fuchsia.sysmem2/BufferCollection.AttachToken`] is used, the
subtree rooted at the attached [`fuchsia.sysmem2/BufferCollectionToken`]

goes through the normal procedure of setting constraints or closing

[`fuchsia.sysmem2/Node`](s), and then appearing to allocate buffers from

clients' point of view, despite the possibility that all the buffers

were actually allocated previously. This process is called "logical

allocation". Most instances of "allocation" in docs for other messages

can also be read as "allocation or logical allocation" while remaining

valid, but we just say "allocation" in most places for brevity/clarity

of explanation, with the details of "logical allocation" left for the

docs here on `AttachToken`.

Failure of an attached `Node` does not propagate to the parent of the

attached `Node`. More generally, failure of a child `Node` is blocked

from reaching its parent `Node` if the child is attached, or if the

child is dispensable and the failure occurred after logical allocation

(see [`fuchsia.sysmem2/BufferCollectionToken.SetDispensable`]).

A participant may in some scenarios choose to initially use a

dispensable token for a given instance of a delegate participant, and

then later if the first instance of that delegate participant fails, a

new second instance of that delegate participant may be given a token

created with `AttachToken`.

From the point of view of the [`fuchsia.sysmem2/BufferCollectionToken`]

client end, the token acts like any other token. The client can

[`fuchsia.sysmem2/BufferCollectionToken.Duplicate`] the token as needed,

and can send the token to a different process/participant. The

`BufferCollectionToken` `Node` should be converted to a

`BufferCollection` `Node` as normal by sending

[`fuchsia.sysmem2/Allocator.BindSharedCollection`], or can be closed

without causing subtree failure by sending

[`fuchsia.sysmem2/BufferCollectionToken.Release`]. Assuming the former,

the [`fuchsia.sysmem2/BufferCollection.SetConstraints`] message or

[`fuchsia.sysmem2/BufferCollection.Release`] message should be sent to

the `BufferCollection`.

Within the subtree, a success result from

[`fuchsia.sysmem2/BufferCollection.WaitForAllBuffersAllocated`] means

the subtree participants' constraints were satisfiable using the

already-existing buffer collection, the already-established

[`fuchsia.sysmem2/BufferCollectionInfo`] including image format

constraints, and the already-existing other participants (already added

via successful logical allocation) and their specified buffer counts in

their constraints. A failure result means the new participants'

constraints cannot be satisfied using the existing buffer collection and

its already-added participants. Creating a new collection instead may

allow all participants' constraints to be satisfied, assuming

`SetDispensable` is used in place of `AttachToken`, or a normal token is

used.

A token created with `AttachToken` performs constraints aggregation with

all constraints currently in effect on the buffer collection, plus the

attached token under consideration plus child tokens under the attached

token which are not themselves an attached token or under such a token.

Further subtrees under this subtree are considered for logical

allocation only after this subtree has completed logical allocation.

Assignment of existing buffers to participants'

[`fuchsia.sysmem2/BufferCollectionConstraints.min_buffer_count_for_camping`]

etc. is first-come first-served, but a child can't logically allocate

before all its parents have sent `SetConstraints`.

See also [`fuchsia.sysmem2/BufferCollectionToken.SetDispensable`], which

in contrast to `AttachToken`, has the created token `Node` + child

`Node`(s) (in the created subtree but not in any subtree under this

subtree) participate in constraints aggregation along with its parent

during the parent's allocation or logical allocation.

Similar to [`fuchsia.sysmem2/BufferCollectionToken.Duplicate`], the

newly created token needs to be [`fuchsia.sysmem2/Node.Sync`]ed to

sysmem before the new token can be passed to `BindSharedCollection`. The

`Sync` of the new token can be accomplished with

[`fuchsia.sysmem2/BufferCollection.Sync`] after converting the created

`BufferCollectionToken` to a `BufferCollection`. Alternately,

[`fuchsia.sysmem2/BufferCollectionToken.Sync`] on the new token also

works. Or using [`fuchsia.sysmem2/BufferCollectionToken.DuplicateSync`]

works. As usual, a `BufferCollectionToken.Sync` can be started after any

`BufferCollectionToken.Duplicate` messages have been sent via the newly

created token, to also sync those additional tokens to sysmem using a

single round-trip.

All table fields are currently required.

+ request `rights_attenuation_mask` This allows attenuating the VMO

rights of the subtree. These values for `rights_attenuation_mask`

result in no attenuation (note that 0 is not on this list):

+ `ZX_RIGHT_SAME_RIGHTS` (preferred)

+ 0xFFFFFFFF (this is reasonable when an attenuation mask is computed)

+ request `token_request` The server end of the `BufferCollectionToken`

channel. The client retains the client end.

Allocates 64 bytes of message buffer on the stack. No heap allocation necessary.
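
A sketch of attaching a new token and returning its client end, assuming `collection` is an already-bound `fidl::WireClient<fuchsia_sysmem2::BufferCollection>`; note the required sync before `BindSharedCollection`, per the docs above. The helper name is hypothetical:

```cpp
#include <fidl/fuchsia.sysmem2/cpp/wire.h>

#include <lib/zx/result.h>

// Attach a new token to an already-allocated collection, e.g. to replace a
// failed participant.
zx::result<fidl::ClientEnd<fuchsia_sysmem2::BufferCollectionToken>>
AttachNewParticipant(
    fidl::WireClient<fuchsia_sysmem2::BufferCollection>& collection) {
  zx::result endpoints =
      fidl::CreateEndpoints<fuchsia_sysmem2::BufferCollectionToken>();
  if (endpoints.is_error()) {
    return endpoints.take_error();
  }
  fidl::Arena arena;
  auto request =
      fuchsia_sysmem2::wire::BufferCollectionAttachTokenRequest::Builder(arena)
          .rights_attenuation_mask(ZX_RIGHT_SAME_RIGHTS)  // no attenuation
          .token_request(std::move(endpoints->server))
          .Build();
  fidl::OneWayStatus status = collection->AttachToken(std::move(request));
  if (!status.ok()) {
    return zx::error(status.status());
  }
  // The new token still needs to be synced to sysmem (e.g. via
  // BufferCollection.Sync on `collection`) before it is passed to
  // Allocator.BindSharedCollection.
  return zx::ok(std::move(endpoints->client));
}
```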

::fidl::OneWayStatus AttachLifetimeTracking (::fuchsia_sysmem2::wire::BufferCollectionAttachLifetimeTrackingRequest BufferCollectionAttachLifetimeTrackingRequest)

Set up an eventpair to be signalled (`ZX_EVENTPAIR_PEER_CLOSED`) when

buffers have been allocated and only the specified number of buffers (or

fewer) remain in the buffer collection.

[`fuchsia.sysmem2/BufferCollection.AttachLifetimeTracking`] allows a

client to wait until an old buffer collection is fully or mostly

deallocated before attempting allocation of a new buffer collection. The

eventpair is only signalled when the buffers of this collection have

been fully deallocated (not just un-referenced by clients, but all the

memory consumed by those buffers has been fully reclaimed/recycled), or

when allocation or logical allocation fails for the tree or subtree

including this [`fuchsia.sysmem2/BufferCollection`].

The eventpair won't be signalled until allocation or logical allocation

has completed; until then, the collection's current buffer count is

ignored.

If logical allocation fails for an attached subtree (using

[`fuchsia.sysmem2/BufferCollection.AttachToken`]), the server end of the

eventpair will close during that failure regardless of the number of

buffers potentially allocated in the overall buffer collection. This is

for logical allocation consistency with normal allocation.

The lifetime signalled by this event includes asynchronous cleanup of

allocated buffers, and this asynchronous cleanup cannot occur until all

holders of VMO handles to the buffers have closed those VMO handles.

Therefore, clients should take care not to become blocked forever

waiting for `ZX_EVENTPAIR_PEER_CLOSED` to be signalled if any of the

participants using the logical buffer collection (including the waiter

itself) are less trusted, less reliable, or potentially blocked by the

wait itself. Waiting asynchronously is recommended. Setting a deadline

for the client wait may be prudent, depending on details of how the

collection and/or its VMOs are used or shared. Failure to allocate a

new/replacement buffer collection is better than getting stuck forever.

The sysmem server itself intentionally does not perform any waiting on

already-failed collections' VMOs to finish cleaning up before attempting

a new allocation, and the sysmem server intentionally doesn't retry

allocation if a new allocation fails due to out of memory, even if that

failure is potentially due to continued existence of an old collection's

VMOs. This `AttachLifetimeTracking` message is how an initiator can

mitigate too much overlap of old VMO lifetimes with new VMO lifetimes,

as long as the waiting client is careful to not create a deadlock.

Continued existence of old collections that are still cleaning up is not

the only reason that a new allocation may fail due to insufficient

memory, even if the new allocation is allocating physically contiguous

buffers. Overall system memory pressure can also be the cause of failure

to allocate a new collection. See also

[`fuchsia.memorypressure/Provider`].

`AttachLifetimeTracking` is meant to be compatible with other protocols

with a similar `AttachLifetimeTracking` message; duplicates of the same

`eventpair` handle (server end) can be sent via more than one

`AttachLifetimeTracking` message to different protocols, and the

`ZX_EVENTPAIR_PEER_CLOSED` will be signalled for the client end when all

the conditions are met (all holders of duplicates have closed their

server end handle(s)). Also, thanks to how eventpair endpoints work, the

client end can (also) be duplicated without preventing the

`ZX_EVENTPAIR_PEER_CLOSED` signal.

The server intentionally doesn't "trust" any signals set on the

`server_end`. This mechanism intentionally uses only

`ZX_EVENTPAIR_PEER_CLOSED` set on the client end, which can't be set

"early", and is only set when all handles to the server end eventpair

are closed. No meaning is associated with any of the other signals, and

clients should ignore any other signal bits on either end of the

`eventpair`.

The `server_end` may lack `ZX_RIGHT_SIGNAL` or `ZX_RIGHT_SIGNAL_PEER`,

but must have `ZX_RIGHT_DUPLICATE` (and must have `ZX_RIGHT_TRANSFER` to

transfer without causing `BufferCollection` channel failure).

All table fields are currently required.

+ request `server_end` This eventpair handle will be closed by the

sysmem server when buffers have been allocated initially and the

number of buffers is then less than or equal to `buffers_remaining`.

+ request `buffers_remaining` Wait for all but `buffers_remaining` (or

fewer) buffers to be fully deallocated. A number greater than zero can

be useful in situations where a known number of buffers are

intentionally not closed so that the data can continue to be used,

such as for keeping the last available video frame displayed in the UI

even if the video stream was using protected output buffers. It's

outside the scope of the `BufferCollection` interface (at least for

now) to determine how many buffers may be held without closing, but

it'll typically be in the range 0-2.

Allocates 64 bytes of message buffer on the stack. No heap allocation necessary.
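
A sketch of attaching lifetime tracking, assuming `collection` is an already-bound `fidl::WireClient<fuchsia_sysmem2::BufferCollection>`; the caller would wait asynchronously on the returned client end, per the deadlock guidance above. The helper name is hypothetical:

```cpp
#include <fidl/fuchsia.sysmem2/cpp/wire.h>

#include <lib/zx/eventpair.h>
#include <zircon/assert.h>

// Ask sysmem to close the server end once buffers have been allocated and
// all but `buffers_remaining` of them are fully deallocated. The caller
// keeps the returned client end and waits (asynchronously) for
// ZX_EVENTPAIR_PEER_CLOSED on it.
zx::eventpair TrackCollectionLifetime(
    fidl::WireClient<fuchsia_sysmem2::BufferCollection>& collection,
    uint32_t buffers_remaining) {
  zx::eventpair client_end, server_end;
  ZX_ASSERT(zx::eventpair::create(0, &client_end, &server_end) == ZX_OK);
  fidl::Arena arena;
  auto request = fuchsia_sysmem2::wire::
      BufferCollectionAttachLifetimeTrackingRequest::Builder(arena)
          .server_end(std::move(server_end))
          .buffers_remaining(buffers_remaining)
          .Build();
  fidl::OneWayStatus status =
      collection->AttachLifetimeTracking(std::move(request));
  (void)status;
  return client_end;
}
```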