template <>

class WireWeakSyncClientImpl

Defined at line 25637 of file fidling/gen/sdk/fidl/fuchsia.media/fuchsia.media/cpp/fidl/fuchsia.media/cpp/wire_messaging.h

Public Methods

::fidl::WireResult< ::fuchsia_media::AudioRenderer::SendPacket> SendPacket (const ::fuchsia_media::wire::StreamPacket & packet)

Sends a packet to the service. The response is sent when the service is done with the associated payload memory.

`packet` must be valid for the current buffer set, otherwise the service will close the connection.

Allocates 88 bytes of message buffer on the stack. No heap allocation necessary.

::fidl::WireResult< ::fuchsia_media::AudioRenderer::DiscardAllPackets> DiscardAllPackets ()

Discards packets previously sent via `SendPacket` or `SendPacketNoReply` and not yet released. The response is sent after all packets have been released.

Allocates 32 bytes of message buffer on the stack. No heap allocation necessary.

::fidl::WireResult< ::fuchsia_media::AudioRenderer::GetReferenceClock> GetReferenceClock ()

Retrieves the stream's reference clock. The returned handle will have READ, DUPLICATE and TRANSFER rights, and will refer to a zx::clock that is MONOTONIC and CONTINUOUS.

Allocates 40 bytes of message buffer on the stack. No heap allocation necessary.

::fidl::WireResult< ::fuchsia_media::AudioRenderer::GetMinLeadTime> GetMinLeadTime ()

While it is possible to call `GetMinLeadTime` before `SetPcmStreamType`, there's little reason to do so. This is because lead time is a function of format/rate, so lead time will be recalculated after `SetPcmStreamType`.

If min lead time events are enabled before `SetPcmStreamType` (with `EnableMinLeadTimeEvents(true)`), then an event will be generated in response to `SetPcmStreamType`.

Allocates 40 bytes of message buffer on the stack. No heap allocation necessary.

::fidl::WireResult< ::fuchsia_media::AudioRenderer::Play> Play (int64_t reference_time, int64_t media_time)

Immediately puts the AudioRenderer into a playing state. Starts the advance of the media timeline, using specific values provided by the caller (or default values if not specified). In an optional callback, returns the timestamp values ultimately used -- these set the ongoing relationship between the media and reference timelines (i.e., how to translate between the domain of presentation timestamps, and the realm of local system time).

Local system time is specified in units of nanoseconds; media_time is specified in the units defined by the user in the `SetPtsUnits` function, or nanoseconds if `SetPtsUnits` is not called.

The act of placing an AudioRenderer into the playback state establishes a relationship between 1) the user-defined media (or presentation) timeline for this particular AudioRenderer, and 2) the real-world system reference timeline. To communicate how to translate between timelines, the Play() callback provides an equivalent timestamp in each time domain. The first value ('reference_time') is given in terms of this renderer's reference clock; the second value ('media_time') is what media instant exactly corresponds to that local time. Restated, the frame at 'media_time' in the audio stream should be presented at 'reference_time' according to the reference clock.

Note: on calling this API, media_time immediately starts advancing. It is possible (if uncommon) for a caller to specify a system time that is far in the past, or far into the future. This, along with the specified media time, is simply used to determine what media time corresponds to 'now', and THAT media time is then intersected with presentation timestamps of packets already submitted, to determine which media frames should be presented next.

With the corresponding reference_time and media_time values, a user can translate arbitrary time values from one timeline into the other. After calling `SetPtsUnits(pts_per_sec_numerator, pts_per_sec_denominator)` and given the 'ref_start' and 'media_start' values from `Play`, then for any 'ref_time':

    media_time = ( (ref_time - ref_start) / 1e9
                   * (pts_per_sec_numerator / pts_per_sec_denominator) )
                 + media_start

Conversely, for any presentation timestamp 'media_time':

    ref_time = ( (media_time - media_start)
                 * (pts_per_sec_denominator / pts_per_sec_numerator)
                 * 1e9 )
               + ref_start
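The two conversions can be sketched as stand-alone C++ helpers. These are illustrative only (the function names are invented, not part of the generated bindings), and the integer arithmetic assumes the intermediate products stay within int64_t:

```cpp
#include <cstdint>

// media_time = ((ref_time - ref_start) / 1e9 * (num / den)) + media_start,
// rearranged so the whole computation stays in integer arithmetic.
int64_t MediaTimeFromRef(int64_t ref_time, int64_t ref_start,
                         int64_t media_start, int64_t pts_num, int64_t pts_den) {
  return (ref_time - ref_start) * pts_num / (pts_den * 1'000'000'000) +
         media_start;
}

// ref_time = ((media_time - media_start) * (den / num) * 1e9) + ref_start.
int64_t RefTimeFromMedia(int64_t media_time, int64_t ref_start,
                         int64_t media_start, int64_t pts_num, int64_t pts_den) {
  return (media_time - media_start) * pts_den * 1'000'000'000 / pts_num +
         ref_start;
}
```

For example, with 48000 PTS units per second (`SetPtsUnits(48000, 1)`), a reference time one second after ref_start maps to media_time 48000 past media_start.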

Users, depending on their use case, may optionally choose not to specify one or both of these timestamps. A timestamp may be omitted by supplying the special value `NO_TIMESTAMP`. The AudioRenderer automatically deduces any omitted timestamp value using the following rules:

Reference Time

If 'reference_time' is omitted, the AudioRenderer will select a "safe" reference time to begin presentation, based on the minimum lead times for the output devices that are currently bound to this AudioRenderer. For example, if an AudioRenderer is bound to an internal audio output requiring at least 3 mSec of lead time, and an HDMI output requiring at least 75 mSec of lead time, the AudioRenderer might (if 'reference_time' is omitted) select a reference time 80 mSec from now.
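As a rough model of that selection (the helper and its parameters are invented here; the real policy is internal to the AudioRenderer), the "safe" start time is now plus the largest bound-device lead time plus a small scheduling margin:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Picks a presentation start time far enough in the future to satisfy the
// largest minimum lead time among the currently bound outputs, plus a margin.
int64_t SafeReferenceTimeNs(int64_t now_ns,
                            const std::vector<int64_t>& lead_times_ns,
                            int64_t margin_ns) {
  int64_t max_lead_ns = 0;
  for (int64_t lead : lead_times_ns) {
    max_lead_ns = std::max(max_lead_ns, lead);
  }
  return now_ns + max_lead_ns + margin_ns;
}
```

With the 3 mSec and 75 mSec outputs from the example and a 5 mSec margin, this yields a start time 80 mSec from now.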

Media Time

If media_time is omitted, the AudioRenderer will select one of two values.

- If the AudioRenderer is resuming from the paused state, and packets have not been discarded since being paused, then the AudioRenderer will use a media_time corresponding to the instant at which the presentation became paused.
- If the AudioRenderer is being placed into a playing state for the first time following startup or a 'discard packets' operation, the initial media_time will be set to the PTS of the first payload in the pending packet queue. If the pending queue is empty, initial media_time will be set to zero.
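The deduction rules above can be sketched as a pure function (a hypothetical helper; the sentinel below is a placeholder standing in for the library's `NO_TIMESTAMP` constant):

```cpp
#include <cstdint>
#include <optional>
#include <vector>

// Placeholder sentinel; real code would use the fuchsia.media NO_TIMESTAMP.
constexpr int64_t kNoTimestamp = INT64_MIN;

// Returns the media_time Play() would use: the caller's value if supplied,
// the paused instant when resuming without an intervening discard, otherwise
// the first pending packet's PTS (or zero if the queue is empty).
int64_t DeduceMediaTime(int64_t requested, bool resuming_from_pause,
                        std::optional<int64_t> paused_media_time,
                        const std::vector<int64_t>& pending_pts) {
  if (requested != kNoTimestamp) {
    return requested;
  }
  if (resuming_from_pause && paused_media_time.has_value()) {
    return *paused_media_time;
  }
  return pending_pts.empty() ? 0 : pending_pts.front();
}
```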

Return Value

When requested, the AudioRenderer will return the 'reference_time' and 'media_time' which were selected and used (whether they were explicitly specified or not) in the return value of the play call.

Examples

1. A user has queued some audio using `SendPacket` and simply wishes it to start playing as soon as possible. The user may call Play without providing explicit timestamps -- `Play(NO_TIMESTAMP, NO_TIMESTAMP)`.

2. A user has queued some audio using `SendPacket`, and wishes to start playback at a specified 'reference_time', in sync with some other media stream, either initially or after discarding packets. The user would call `Play(reference_time, NO_TIMESTAMP)`.

3. A user has queued some audio using `SendPacket`. The first of these packets has a PTS of zero, and the user wishes playback to begin as soon as possible, but wishes to skip all of the audio content between PTS 0 and PTS 'media_time'. The user would call `Play(NO_TIMESTAMP, media_time)`.

4. A user has queued some audio using `SendPacket` and wants to present this media in sync with another player on a different device. The coordinator of the group of distributed players sends an explicit message to each player telling them to begin presentation of audio at PTS 'media_time', at the time (based on the group's shared reference clock) 'reference_time'. Here the user would call `Play(reference_time, media_time)`.

Allocates 64 bytes of message buffer on the stack. No heap allocation necessary.

::fidl::WireResult< ::fuchsia_media::AudioRenderer::Pause> Pause ()

Immediately puts the AudioRenderer into the paused state and then reports the relationship between the media and reference timelines which was established (if requested).

If the AudioRenderer is already in the paused state when this is called, the previously-established timeline values are returned (if requested).

Allocates 48 bytes of message buffer on the stack. No heap allocation necessary.