template <>
class WireSyncBufferClientImpl

Defined at line 2494 of file fidling/gen/sdk/fidl/fuchsia.sysmem/fuchsia.sysmem/cpp/fidl/fuchsia.sysmem/cpp/wire_messaging.h

Public Methods

::fidl::WireUnownedResult< ::fuchsia_sysmem::Node::Sync> Sync ()

Ensure that previous messages, including Duplicate() messages on a token, collection, or group, have been received server side.

Calling BufferCollectionToken.Sync() on a token that isn't/wasn't a valid sysmem token risks the Sync() hanging forever. See ValidateBufferCollectionToken() for one way to mitigate the possibility of a hostile/fake BufferCollectionToken at the cost of one round trip. Another way is to pass the token to BindSharedCollection(), which also validates the token as part of exchanging it for a BufferCollection channel, and BufferCollection Sync() can then be used.

After a Sync(), it's then safe to send the client end of token_request to another participant knowing the server will recognize the token when it's sent into BindSharedCollection() by the other participant.

Other options include waiting for each token.Duplicate() to complete individually (using a separate call to token.Sync() after each), or calling Sync() on the BufferCollection after the token has been turned in via BindSharedCollection().

Another way to mitigate is to avoid calling Sync() on the token, and instead later deal with potential failure of BufferCollection.Sync() if the original token was invalid. This option can be preferable from a performance point of view, but requires client code to delay sending tokens duplicated from this token until after client code has converted the duplicating token to a BufferCollection and received a successful response from BufferCollection.Sync().

Prefer using BufferCollection.Sync() instead, when feasible (see above). When BufferCollection.Sync() isn't feasible, the caller must already know that this token is/was valid, or BufferCollectionToken.Sync() may hang forever. See ValidateBufferCollectionToken() to check token validity first if the token isn't already known to be (is/was) valid.

Caller provides the backing storage for FIDL message via an argument to `.buffer()`.
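For illustration, a minimal sketch of calling Sync() through this caller-allocating client, assuming a connected `fuchsia_sysmem::BufferCollectionToken` client end; the `token_end` name, the include paths, and the surrounding function are hypothetical:

```cpp
#include <fidl/fuchsia.sysmem/cpp/wire.h>  // Generated wire bindings (path assumed).
#include <utility>

void SyncToken(fidl::ClientEnd<fuchsia_sysmem::BufferCollectionToken> token_end) {
  fidl::WireSyncClient<fuchsia_sysmem::BufferCollectionToken> token(std::move(token_end));
  fidl::Arena arena;  // Backing storage handed to the buffered client via .buffer().
  auto result = token.buffer(arena)->Sync();
  if (!result.ok()) {
    // Transport-level failure. If the token wasn't a real sysmem token, Sync()
    // could instead hang; see ValidateBufferCollectionToken() above.
  }
}
```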

::fidl::OneWayStatus Close ()

On a BufferCollectionToken channel:

Normally a participant will convert a BufferCollectionToken into a BufferCollection view, but a participant is also free to Close() the token (and then close the channel immediately or shortly later in response to server closing its end), which avoids causing logical buffer collection failure. Normally an unexpected token channel close will cause logical buffer collection failure (the only exceptions being certain cases involving AttachToken() or SetDispensable()).

On a BufferCollection channel:

By default the server handles unexpected failure of a BufferCollection by failing the whole logical buffer collection. Partly this is to expedite closing VMO handles to reclaim memory when any participant fails. If a participant would like to cleanly close a BufferCollection view without causing logical buffer collection failure, the participant can send Close() before closing the client end of the BufferCollection channel. If this is the last BufferCollection view, the logical buffer collection will still go away. The Close() can occur before or after SetConstraints(). If before SetConstraints(), the buffer collection won't require constraints from this node in order to allocate. If after SetConstraints(), the constraints are retained and aggregated along with any subsequent logical allocation(s), despite the lack of channel connection.

On a BufferCollectionTokenGroup channel:

By default, unexpected failure of a BufferCollectionTokenGroup will trigger failure of the logical BufferCollectionTokenGroup and will propagate failure to its parent. To close a BufferCollectionTokenGroup channel without failing the logical group or propagating failure, send Close() before closing the channel client endpoint.

If Close() occurs before AllChildrenPresent(), the logical buffer collection will still fail despite the Close() (because sysmem can't be sure whether all relevant children were created, so it's ambiguous whether all relevant constraints will be provided to sysmem). If Close() occurs after AllChildrenPresent(), the children and all their constraints remain intact (just as they would if the BufferCollectionTokenGroup channel had remained open), and the close doesn't trigger or propagate failure.

Caller provides the backing storage for FIDL message via an argument to `.buffer()`.
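As a sketch, reusing the buffered-client pattern from the Sync() example above with a hypothetical `fidl::WireSyncClient<fuchsia_sysmem::BufferCollection>` named `collection`, a participant might cleanly drop its view like this:

```cpp
// Cleanly close a BufferCollection view without failing the whole logical
// buffer collection: send Close() first, then let the channel close.
fidl::Arena arena;
fidl::OneWayStatus status = collection.buffer(arena)->Close();
if (status.ok()) {
  // Now safe to close (or drop) the client end of the BufferCollection channel.
}
```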

::fidl::OneWayStatus SetName (uint32_t priority, ::fidl::StringView name)

Set a name for VMOs in this buffer collection. The name may be truncated to a shorter length. The name only affects VMOs allocated after it's set; this call does not rename existing VMOs. If multiple clients set different names, the name set with the larger priority value wins.

Caller provides the backing storage for FIDL message via an argument to `.buffer()`.
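A short sketch, again using the hypothetical `collection` client from above; the priority value and name are arbitrary example inputs:

```cpp
fidl::Arena arena;
// The higher priority value wins if multiple clients set different names.
fidl::OneWayStatus status =
    collection.buffer(arena)->SetName(100u, fidl::StringView("example-capture-pool"));
```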

::fidl::OneWayStatus SetDebugClientInfo (::fidl::StringView name, uint64_t id)

Set information about the current client that can be used by sysmem to help debug leaking memory and hangs waiting for constraints. |name| can be an arbitrary string, but the current process name (see fsl::GetCurrentProcessName()) is a good default. |id| can be an arbitrary id, but the current process ID (see fsl::GetCurrentProcessKoid()) is a good default.

Also used when verbose logging is enabled (see SetVerboseLogging()) to indicate which client is closing their channel first, leading to sub-tree failure (which can be normal if the purpose of the sub-tree is over, but if happening earlier than expected, the client-channel-specific name can help diagnose where the failure is first coming from, from sysmem's point of view).

By default (unless overridden by this message or using Allocator.SetDebugClientInfo()), a Node will copy info from its parent Node at the time the child Node is created. While this can be better than nothing, it's often better for each participant to use Node.SetDebugClientInfo() or Allocator.SetDebugClientInfo() to keep the info directly relevant to the current client. Also, SetVerboseLogging() can be used to help disambiguate if a Node is suspected of having info that was copied from its parent.

Caller provides the backing storage for FIDL message via an argument to `.buffer()`.
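A sketch using placeholder values; as noted above, the current process name and koid (e.g. via fsl::GetCurrentProcessName() and fsl::GetCurrentProcessKoid()) would be typical real inputs:

```cpp
fidl::Arena arena;
// Placeholder name/id; real clients would typically pass the process name and koid.
fidl::OneWayStatus status = token.buffer(arena)->SetDebugClientInfo(
    fidl::StringView("example-component"), /*id=*/42u);
```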

::fidl::OneWayStatus SetDebugTimeoutLogDeadline (int64_t deadline)

Sysmem logs a warning if not all clients have set constraints 5 seconds after creating a collection. Clients can call this method to change when the log is printed. If multiple clients set the deadline, it's unspecified which deadline will take effect.

Caller provides the backing storage for FIDL message via an argument to `.buffer()`.
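A sketch that pushes the warning out to 30 seconds from now; the interpretation of |deadline| as an absolute monotonic time in nanoseconds is an assumption here:

```cpp
#include <lib/zx/clock.h>

fidl::Arena arena;
// Assumption: |deadline| is an absolute ZX_CLOCK_MONOTONIC time in nanoseconds.
int64_t deadline = (zx::clock::get_monotonic() + zx::sec(30)).get();
fidl::OneWayStatus status =
    collection.buffer(arena)->SetDebugTimeoutLogDeadline(deadline);
```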

::fidl::OneWayStatus SetVerboseLogging ()

Verbose logging includes constraints set via SetConstraints() from each client along with info set via SetDebugClientInfo() and the structure of the tree of Node(s).

Normally sysmem prints only a single line complaint when aggregation fails, with just the specific detailed reason that aggregation failed, with minimal context. While this is often enough to diagnose a problem if only a small change was made and the system had been working before the small change, it's often not particularly helpful for getting a new buffer collection to work for the first time. Especially with more complex trees of nodes, involving things like AttachToken(), SetDispensable(), BufferCollectionTokenGroup nodes, and associated sub-trees of nodes, verbose logging may help in diagnosing what the tree looks like and why it's failing a logical allocation, or why a tree or sub-tree is failing sooner than expected.

The intent of the extra logging is to be acceptable from a performance point of view, if only enabled on a low number of buffer collections. If we're not tracking down a bug, we shouldn't send this message.

If too many participants leave verbose logging enabled, we may end up needing to require that system-wide sysmem verbose logging be permitted via some other setting, to avoid sysmem spamming the log too much due to this message.

This may be a NOP for some nodes due to intentional policy associated with the node, if we don't trust a node enough to let it turn on verbose logging.

Caller provides the backing storage for FIDL message via an argument to `.buffer()`.
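A sketch; per the note above, this is only worth sending while actively debugging a failing allocation:

```cpp
fidl::Arena arena;
// Enable verbose sysmem logging for this node while debugging; not for production.
fidl::OneWayStatus status = token.buffer(arena)->SetVerboseLogging();
```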

::fidl::WireUnownedResult< ::fuchsia_sysmem::Node::GetNodeRef> GetNodeRef ()

This gets an event handle that can be used as a parameter to IsAlternateFor() called on any Node. The client will not be granted the right to signal this event, as this handle should only be used as proof that the client obtained this handle from this Node.

Because this is a get not a set, no Sync() is needed between the GetNodeRef() and the call to IsAlternateFor(), despite the two calls potentially being on different channels.

See also IsAlternateFor().

Caller provides the backing storage for FIDL message via an argument to `.buffer()`.
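A sketch of fetching the node ref event; the response field name `node_ref` is assumed from the FIDL definition:

```cpp
#include <lib/zx/event.h>

fidl::Arena arena;
auto result = token.buffer(arena)->GetNodeRef();
zx::event node_ref;
if (result.ok()) {
  // The event carries no signal rights; it only proves which Node it came from.
  node_ref = std::move(result->node_ref);
}
```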

::fidl::WireUnownedResult< ::fuchsia_sysmem::Node::IsAlternateFor> IsAlternateFor (::zx::event && node_ref)

This checks whether the calling node is in a subtree rooted at a different child token of a common parent BufferCollectionTokenGroup, in relation to the passed-in node_ref.

This call is for assisting with admission control de-duplication, and with debugging.

The node_ref must be obtained using GetNodeRef() of a BufferCollectionToken, BufferCollection, or BufferCollectionTokenGroup.

The node_ref can be a duplicated handle; it's not necessary to call GetNodeRef() for every call to IsAlternateFor().

If a calling token may not actually be a valid token at all due to a potentially hostile/untrusted provider of the token, call ValidateBufferCollectionToken() first instead of potentially getting stuck indefinitely if IsAlternateFor() never responds due to a calling token not being a real token (not really talking to sysmem). Another option is to call BindSharedCollection() with this token first, which also validates the token along with converting it to a BufferCollection; then call BufferCollection IsAlternateFor().

Error values:

ZX_ERR_NOT_FOUND means the node_ref wasn't found within the same logical buffer collection as the calling Node. Before logical allocation and within the same logical allocation sub-tree, this essentially means that the node_ref was never part of this logical buffer collection, since before logical allocation all node_refs that come into existence remain in existence at least until logical allocation (including Node(s) that have done a Close() and closed their channel), and for ZX_ERR_NOT_FOUND to be returned, this Node's channel needs to still be connected server side, which won't be the case if the whole logical allocation has failed. After logical allocation or in a different logical allocation sub-tree there are additional potential reasons for this error. For example, a different logical allocation (separated from this Node's logical allocation by an AttachToken() or SetDispensable()) can fail its sub-tree, deleting those Node(s), or a BufferCollectionTokenGroup may exist and may select a different child sub-tree than the sub-tree the node_ref is in, causing deletion of the node_ref Node. The only time sysmem keeps a Node around after that Node has no corresponding channel is when Close() is used and the Node's sub-tree has not yet failed.

Another reason for this error is if the node_ref is an eventpair handle with sufficient rights, but isn't actually a real node_ref obtained from GetNodeRef().

ZX_ERR_INVALID_ARGS means the caller passed a node_ref that isn't an eventpair handle, or doesn't have the needed rights expected on a real node_ref.

No other failing status codes are returned by this call. However, sysmem may add additional codes in future, so the client should have sensible default handling for any failing status code.

On success, is_alternate has the following meaning:

* true - The first parent node in common between the calling node and the node_ref Node is a BufferCollectionTokenGroup. This means that the calling Node and the node_ref Node will _not_ have both their constraints apply - rather sysmem will choose one or the other of the constraints - never both. This is because only one child of a BufferCollectionTokenGroup is selected during logical allocation, with only that one child's sub-tree contributing to constraints aggregation.

* false - The first parent node in common between the calling Node and the node_ref Node is not a BufferCollectionTokenGroup. Currently, this means the first parent node in common is a BufferCollectionToken or BufferCollection (regardless of whether it has been Close()ed or not). This means that the calling Node and the node_ref Node _may_ have both their constraints apply during constraints aggregation of the logical allocation, if both Node(s) are selected by any parent BufferCollectionTokenGroup(s) involved. In this case, there is no BufferCollectionTokenGroup that will directly prevent the two Node(s) from both being selected and their constraints both aggregated, but even when false, one or both Node(s) may still be eliminated from consideration if one or both Node(s) has a direct or indirect parent BufferCollectionTokenGroup which selects a child sub-tree other than the sub-tree containing the calling Node or node_ref Node.

Caller provides the backing storage for FIDL message via an argument to `.buffer()`.
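A sketch tying GetNodeRef() and IsAlternateFor() together, reusing the hypothetical `collection` client and the `node_ref` event obtained above; the result-unwrapping shape shown (transport check, then domain error check) follows the common wire-bindings pattern and may differ slightly from the generated header:

```cpp
fidl::Arena arena;
auto result = collection.buffer(arena)->IsAlternateFor(std::move(node_ref));
if (!result.ok()) {
  // Transport error (e.g. channel closed).
} else if (result->is_error()) {
  // ZX_ERR_NOT_FOUND / ZX_ERR_INVALID_ARGS as described above.
} else if (result->value()->is_alternate) {
  // The two nodes sit under different children of a common
  // BufferCollectionTokenGroup; their constraints will never both apply.
}
```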