template <>
class NaturalClientImpl
Defined at line 5963 of file fidling/gen/sdk/fidl/fuchsia.media/fuchsia.media/cpp/fidl/fuchsia.media/cpp/natural_messaging.h
Public Methods
::fidl::internal::NaturalThenable< ::fuchsia_media::AudioRenderer::SendPacket> SendPacket (const ::fidl::Request< ::fuchsia_media::AudioRenderer::SendPacket> & request)
Sends a packet to the service. The response is sent when the service is
done with the associated payload memory.
`packet` must be valid for the current buffer set, otherwise the service
will close the connection.
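For illustration only, a minimal sketch of driving `SendPacket` through the async natural client. The header path, the request field name `packet`, and the `Then` continuation follow the usual new-C++-bindings patterns but are not taken from this generated header, so treat them as assumptions.
```cpp
// Illustrative sketch; names other than the method itself are assumptions.
#include <fidl/fuchsia.media/cpp/fidl.h>

#include <utility>

void SendOnePacket(fidl::Client<fuchsia_media::AudioRenderer>& renderer,
                   fuchsia_media::StreamPacket packet) {
  // `packet` must describe a region inside a buffer previously registered
  // with AddPayloadBuffer, or the service closes the connection.
  renderer->SendPacket({{.packet = std::move(packet)}})
      .Then([](fidl::Result<fuchsia_media::AudioRenderer::SendPacket>& result) {
        if (result.is_ok()) {
          // The reply means the service is done with the payload memory;
          // the region described by the packet may now be reused.
        }
      });
}
```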
::fidl::internal::NaturalThenable< ::fuchsia_media::AudioRenderer::DiscardAllPackets> DiscardAllPackets ()
Discards packets previously sent via `SendPacket` or `SendPacketNoReply`
and not yet released. The response is sent after all packets have been
released.
::fidl::internal::NaturalThenable< ::fuchsia_media::AudioRenderer::GetReferenceClock> GetReferenceClock ()
Retrieves the stream's reference clock. The returned handle will have READ, DUPLICATE
and TRANSFER rights, and will refer to a zx::clock that is MONOTONIC and CONTINUOUS.
::fidl::internal::NaturalThenable< ::fuchsia_media::AudioRenderer::GetMinLeadTime> GetMinLeadTime ()
While it is possible to call `GetMinLeadTime` before `SetPcmStreamType`,
there's little reason to do so. This is because lead time is a function
of format/rate, so lead time will be recalculated after `SetPcmStreamType`.
If min lead time events are enabled before `SetPcmStreamType` (with
`EnableMinLeadTimeEvents(true)`), then an event will be generated in
response to `SetPcmStreamType`.
::fidl::internal::NaturalThenable< ::fuchsia_media::AudioRenderer::Play> Play (const ::fidl::Request< ::fuchsia_media::AudioRenderer::Play> & request)
Immediately puts the AudioRenderer into a playing state. Starts the advance
of the media timeline, using specific values provided by the caller (or
default values if not specified). In an optional callback, returns the
timestamp values ultimately used -- these set the ongoing relationship
between the media and reference timelines (i.e., how to translate between
the domain of presentation timestamps, and the realm of local system
time).
Local system time is specified in units of nanoseconds; media_time is
specified in the units defined by the user in the `SetPtsUnits` function,
or nanoseconds if `SetPtsUnits` is not called.
The act of placing an AudioRenderer into the playback state establishes a
relationship between 1) the user-defined media (or presentation) timeline
for this particular AudioRenderer, and 2) the real-world system reference
timeline. To communicate how to translate between timelines, the Play()
callback provides an equivalent timestamp in each time domain. The first
value ('reference_time') is given in terms of this renderer's reference
clock; the second value ('media_time') is what media instant exactly
corresponds to that local time. Restated, the frame at 'media_time' in
the audio stream should be presented at 'reference_time' according to
the reference clock.
Note: on calling this API, media_time immediately starts advancing. It is
possible (if uncommon) for a caller to specify a system time that is
far in the past, or far into the future. This, along with the specified
media time, is simply used to determine what media time corresponds to
'now', and THAT media time is then intersected with presentation
timestamps of packets already submitted, to determine which media frames
should be presented next.
With the corresponding reference_time and media_time values, a user can
translate arbitrary time values from one timeline into the other. After
calling `SetPtsUnits(pts_per_sec_numerator, pts_per_sec_denominator)` and
given the 'ref_start' and 'media_start' values from `Play`, then for
any 'ref_time':
media_time = ( (ref_time - ref_start) / 1e9
* (pts_per_sec_numerator / pts_per_sec_denominator) )
+ media_start
Conversely, for any presentation timestamp 'media_time':
ref_time = ( (media_time - media_start)
* (pts_per_sec_denominator / pts_per_sec_numerator)
* 1e9 )
+ ref_start
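As a worked illustration of these two formulas (not part of the generated API), the following standalone snippet converts between the timelines, assuming millisecond PTS units and hypothetical start values returned by `Play`:
```cpp
// Worked version of the two conversion formulas above, assuming millisecond
// PTS units (SetPtsUnits(1000, 1)) and hypothetical start values from Play.
#include <cstdint>
#include <cstdio>

int main() {
  const double pts_per_sec_numerator = 1000.0;  // ticks per second (ms ticks)
  const double pts_per_sec_denominator = 1.0;
  const int64_t ref_start = 5'000'000'000;      // ns, as returned by Play
  const int64_t media_start = 0;                // ms, as returned by Play

  // media_time for an arbitrary reference-clock reading (first formula).
  const int64_t ref_time = 5'250'000'000;       // ns: 250 ms after ref_start
  const double media_time =
      (ref_time - ref_start) / 1e9 *
          (pts_per_sec_numerator / pts_per_sec_denominator) +
      media_start;
  std::printf("media_time = %.0f ms\n", media_time);  // 250 ms

  // ref_time for an arbitrary presentation timestamp (second formula).
  const double media_time2 = 400.0;             // ms
  const double ref_time2 =
      (media_time2 - media_start) *
          (pts_per_sec_denominator / pts_per_sec_numerator) * 1e9 +
      ref_start;
  std::printf("ref_time = %.0f ns\n", ref_time2);     // 5,400,000,000 ns
}
```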
Users, depending on their use case, may optionally choose not to specify
one or both of these timestamps. A timestamp may be omitted by supplying
the special value '`NO_TIMESTAMP`'. The AudioRenderer automatically deduces
any omitted timestamp value using the following rules:
Reference Time
If 'reference_time' is omitted, the AudioRenderer will select a "safe"
reference time to begin presentation, based on the minimum lead times for
the output devices that are currently bound to this AudioRenderer. For
example, if an AudioRenderer is bound to an internal audio output
requiring at least 3 mSec of lead time, and an HDMI output requiring at
least 75 mSec of lead time, the AudioRenderer might (if 'reference_time'
is omitted) select a reference time 80 mSec from now.
Media Time
If media_time is omitted, the AudioRenderer will select one of two
values.
- If the AudioRenderer is resuming from the paused state, and packets
have not been discarded since being paused, then the AudioRenderer will
use a media_time corresponding to the instant at which the presentation
became paused.
- If the AudioRenderer is being placed into a playing state for the first
time following startup or a 'discard packets' operation, the initial
media_time will be set to the PTS of the first payload in the pending
packet queue. If the pending queue is empty, initial media_time will be
set to zero.
Return Value
When requested, the AudioRenderer will return the 'reference_time' and
'media_time' which were selected and used (whether they were explicitly
specified or not) in the return value of the play call.
Examples
1. A user has queued some audio using `SendPacket` and simply wishes it
to start playing as soon as possible. The user may call Play without
providing explicit timestamps -- `Play(NO_TIMESTAMP, NO_TIMESTAMP)`.
2. A user has queued some audio using `SendPacket`, and wishes to start
playback at a specified 'reference_time', in sync with some other media
stream, either initially or after discarding packets. The user would call
`Play(reference_time, NO_TIMESTAMP)`.
3. A user has queued some audio using `SendPacket`. The first of these
packets has a PTS of zero, and the user wishes playback to begin as soon
as possible, but wishes to skip all of the audio content between PTS 0
and PTS 'media_time'. The user would call
`Play(NO_TIMESTAMP, media_time)`.
4. A user has queued some audio using `SendPacket` and wants to present
this media in sync with another player on a different device. The
coordinator of the group of distributed players sends an explicit
message to each player telling them to begin presentation of audio at
PTS 'media_time', at the time (based on the group's shared reference
clock) 'reference_time'. Here the user would call
`Play(reference_time, media_time)`.
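For illustration, a sketch of examples 1 and 4 using the async natural client; the constant `fuchsia_media::kNoTimestamp` and the request/response field names are assumed from the FIDL definition rather than taken from this header.
```cpp
// Illustrative sketch; kNoTimestamp and the field names are assumptions.
#include <fidl/fuchsia.media/cpp/fidl.h>

#include <cstdint>

// Example 1: start as soon as possible and let the renderer pick both values.
void StartPlayback(fidl::Client<fuchsia_media::AudioRenderer>& renderer) {
  renderer
      ->Play({{.reference_time = fuchsia_media::kNoTimestamp,
               .media_time = fuchsia_media::kNoTimestamp}})
      .Then([](fidl::Result<fuchsia_media::AudioRenderer::Play>& result) {
        if (result.is_ok()) {
          // The values the renderer actually used; together they define the
          // ongoing mapping between the media and reference timelines.
          int64_t ref_start = result->reference_time();
          int64_t media_start = result->media_time();
          (void)ref_start;
          (void)media_start;
        }
      });
}

// Example 4: both values dictated by an external coordinator.
void StartSynchronized(fidl::Client<fuchsia_media::AudioRenderer>& renderer,
                       int64_t reference_time, int64_t media_time) {
  renderer->Play({{.reference_time = reference_time, .media_time = media_time}})
      .Then([](fidl::Result<fuchsia_media::AudioRenderer::Play>&) {});
}
```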
::fidl::internal::NaturalThenable< ::fuchsia_media::AudioRenderer::Pause> Pause ()
Immediately puts the AudioRenderer into the paused state and then reports
the relationship between the media and reference timelines which was
established (if requested).
If the AudioRenderer is already in the paused state when this is called,
the previously-established timeline values are returned (if requested).
::fit::result< ::fidl::OneWayError> AddPayloadBuffer (::fidl::Request< ::fuchsia_media::AudioRenderer::AddPayloadBuffer> request)
Adds a payload buffer to the current buffer set associated with the
connection. A `StreamPacket` struct references a payload buffer in the
current set by ID using the `StreamPacket.payload_buffer_id` field.
A buffer with ID `id` must not be in the current set when this method is
invoked, otherwise the service will close the connection.
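A hedged sketch of registering a payload buffer, assuming the request fields are named `id` and `payload_buffer` as in the FIDL definition:
```cpp
// Illustrative sketch; the field names `id` and `payload_buffer` are assumptions.
#include <fidl/fuchsia.media/cpp/fidl.h>
#include <lib/zx/vmo.h>

#include <utility>

zx_status_t AddBuffer(fidl::Client<fuchsia_media::AudioRenderer>& renderer,
                      uint64_t size_bytes) {
  zx::vmo vmo;
  if (zx_status_t status = zx::vmo::create(size_bytes, 0, &vmo); status != ZX_OK) {
    return status;
  }
  // One-way call: the result only carries transport errors; there is no reply.
  fit::result<fidl::OneWayError> result =
      renderer->AddPayloadBuffer({{.id = 0, .payload_buffer = std::move(vmo)}});
  return result.is_ok() ? ZX_OK : ZX_ERR_PEER_CLOSED;
}
```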
::fit::result< ::fidl::OneWayError> RemovePayloadBuffer (const ::fidl::Request< ::fuchsia_media::AudioRenderer::RemovePayloadBuffer> & request)
Removes a payload buffer from the current buffer set associated with the
connection.
A buffer with ID `id` must exist in the current set when this method is
invoked, otherwise the service will close the connection.
::fit::result< ::fidl::OneWayError> SendPacketNoReply (const ::fidl::Request< ::fuchsia_media::AudioRenderer::SendPacketNoReply> & request)
Sends a packet to the service. This interface doesn't define how the
client knows when the sink is done with the associated payload memory.
The inheriting interface must define that.
`packet` must be valid for the current buffer set, otherwise the service
will close the connection.
::fit::result< ::fidl::OneWayError> EndOfStream ()
Indicates the stream has ended. The precise semantics of this method are
determined by the inheriting interface.
::fit::result< ::fidl::OneWayError> DiscardAllPacketsNoReply ()
Discards packets previously sent via `SendPacket` or `SendPacketNoReply`
and not yet released.
::fit::result< ::fidl::OneWayError> BindGainControl (::fidl::Request< ::fuchsia_media::AudioRenderer::BindGainControl> request)
Binds to the gain control for this AudioRenderer.
::fit::result< ::fidl::OneWayError> SetPtsUnits (const ::fidl::Request< ::fuchsia_media::AudioRenderer::SetPtsUnits> & request)
Sets the units used by the presentation (media) timeline. By default, PTS units are
nanoseconds (as if this were called with numerator of 1e9 and denominator of 1).
This ratio must lie between 1/60 (1 tick per minute) and 1e9/1 (1ns per tick).
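For illustration, a few representative (numerator, denominator) choices expressed as plain constants; the values, not the API call itself, are the point here.
```cpp
// Representative tick-rate choices; each pair is (ticks, per-this-many-seconds).
#include <cstdint>

struct PtsUnits {
  uint32_t numerator;
  uint32_t denominator;
};

// One PTS tick per audio frame at 48 kHz: 48000 ticks per second.
constexpr PtsUnits kFrameTicks48k{48000, 1};
// One PTS tick per millisecond: 1000 ticks per second.
constexpr PtsUnits kMillisecondTicks{1000, 1};
// The default when SetPtsUnits is never called: nanosecond ticks.
constexpr PtsUnits kNanosecondTicks{1'000'000'000, 1};
// The slowest allowed rate: one tick per minute.
constexpr PtsUnits kSlowestAllowed{1, 60};
```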
::fit::result< ::fidl::OneWayError> SetPtsContinuityThreshold (const ::fidl::Request< ::fuchsia_media::AudioRenderer::SetPtsContinuityThreshold> & request)
Sets the maximum threshold (in seconds) between explicit user-provided PTS
and expected PTS (determined using interpolation). Beyond this threshold,
a stream is no longer considered 'continuous' by the renderer.
Defaults to an interval of half a PTS 'tick', using the currently-defined PTS units.
Most users should not need to change this value from its default.
Example:
A user is playing back 48KHz audio from a container, which also contains
video and needs to be synchronized with the audio. The timestamps are
provided explicitly per packet by the container, and expressed in mSec
units. This means that a single tick of the media timeline (1 mSec)
represents exactly 48 frames of audio. The application in this scenario
delivers packets of audio to the AudioRenderer, each with exactly 470
frames of audio, and each with an explicit timestamp set to the best
possible representation of the presentation time (given this media
clock's resolution). So, starting from zero, the timestamps would be:
[ 0, 10, 20, 29, 39, 49, 59, 69, 78, 88, ... ]
In this example, attempting to use the presentation time to compute the
starting frame number of the audio in the packet would be wrong the
majority of the time. The first timestamp is correct (by definition), but
it will be 24 packets before the timestamps and frame numbers come back
into alignment (the 24th packet would start with the 11280th audio frame
and have a PTS of exactly 235).
One way to fix this situation is to set the PTS continuity threshold
(henceforth, CT) for the stream to be equal to 1/2 of the time taken by
the number of frames contained within a single tick of the media clock,
rounded up. In this scenario, that would be 24.0 frames of audio, or 500
uSec. Any packet whose expected PTS was within +/-CT of the explicitly
provided PTS would be considered a continuation of the previously
delivered audio. For this example, calling 'SetPtsContinuityThreshold(0.0005)'
would work well.
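The arithmetic behind that 0.0005-second figure, as a standalone snippet:
```cpp
// Arithmetic behind the example above: 48 kHz audio with millisecond PTS
// ticks; the threshold is half a tick's worth of frames, expressed in seconds.
#include <cstdio>

int main() {
  const double frames_per_second = 48000.0;
  const double pts_ticks_per_second = 1000.0;                               // ms ticks
  const double frames_per_tick = frames_per_second / pts_ticks_per_second;  // 48
  const double threshold_frames = frames_per_tick / 2.0;                    // 24
  const double threshold_seconds = threshold_frames / frames_per_second;    // 0.0005
  std::printf("SetPtsContinuityThreshold(%g)\n", threshold_seconds);        // 500 uSec
}
```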
Other possible uses:
Users who are scheduling audio explicitly, relative to a clock which has
not been configured as the reference clock, can use this value to control
the maximum acceptable synchronization error before a discontinuity is
introduced. E.g., if a user is scheduling audio based on a recovered
common media clock, and has not published that clock as the reference
clock, and they set the CT to 20 mSec, then up to 20 mSec of drift error
can accumulate before the AudioRenderer deliberately inserts a
presentation discontinuity to account for the error.
Users who need to deal with a container whose timestamps may be
even less correct than +/- 1/2 of a PTS tick may set this value to
something larger. This should be the maximum level of inaccuracy present
in the container timestamps, if known. Failing that, it could be set to
the maximum tolerable level of drift error before absolute timestamps are
explicitly obeyed. Finally, a user could set this number to a very large
value (86400.0 seconds, for example) to effectively cause *all*
timestamps to be ignored after the first, thus treating all audio as
continuous with previously delivered packets. Conversely, users who wish
to *always* explicitly schedule their audio packets exactly may specify
a CT of 0.
Note: explicitly specifying high-frequency PTS units reduces the default
continuity threshold accordingly. Internally, this threshold is stored as an
integer number of subframes (each 1/8192 of a frame). The default threshold is
computed as follows:
RoundUp((AudioFPS/PTSTicksPerSec) * 4096) / (AudioFPS * 8192)
For this reason, specifying PTS units with a frequency greater than 8192x
the frame rate (or NOT calling SetPtsUnits at all, leaving the default PTS
unit of 1 nanosecond) will result in a default continuity threshold of zero.
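Evaluating that formula for the earlier example (48 kHz audio, millisecond PTS ticks) reproduces the half-tick default:
```cpp
// Evaluating the default-threshold formula for the earlier scenario
// (48 kHz audio, millisecond PTS ticks), with ceil() standing in for RoundUp.
#include <cmath>
#include <cstdio>

int main() {
  const double audio_fps = 48000.0;
  const double pts_ticks_per_sec = 1000.0;
  const double subframes = std::ceil((audio_fps / pts_ticks_per_sec) * 4096);  // 196608
  const double default_threshold = subframes / (audio_fps * 8192);             // seconds
  std::printf("default threshold = %g s\n", default_threshold);  // 0.0005 s (half a tick)
}
```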
::fit::result< ::fidl::OneWayError> SetReferenceClock (::fidl::Request< ::fuchsia_media::AudioRenderer::SetReferenceClock> request)
Sets the reference clock that controls this renderer's playback rate. If the input
parameter is a valid zx::clock, it must have READ, DUPLICATE, TRANSFER rights and
refer to a clock that is both MONOTONIC and CONTINUOUS. If instead an invalid clock
is passed (such as the uninitialized `zx::clock()`), this indicates that the stream
will use a 'flexible' clock generated by AudioCore that tracks the audio device.
`SetReferenceClock` cannot be called once `SetPcmStreamType` is called. It also
cannot be called a second time (even if the renderer format has not yet been set).
If a client wants a reference clock that is initially `CLOCK_MONOTONIC` but may
diverge at some later time, they should create a clone of the monotonic clock, set
this as the stream's reference clock, then rate-adjust it subsequently as needed.
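A hedged sketch of opting into the 'flexible' clock by passing an invalid handle; the request field name `reference_clock` is assumed from the FIDL definition.
```cpp
// Illustrative sketch; the field name `reference_clock` is an assumption.
#include <fidl/fuchsia.media/cpp/fidl.h>
#include <lib/zx/clock.h>

void UseFlexibleClock(fidl::Client<fuchsia_media::AudioRenderer>& renderer) {
  // Must happen before SetPcmStreamType, and at most once per renderer.
  // A default-constructed (invalid) zx::clock selects the 'flexible' clock.
  fit::result<fidl::OneWayError> result =
      renderer->SetReferenceClock({{.reference_clock = zx::clock()}});
  if (!result.is_ok()) {
    // Handle transport error.
  }
}
```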
::fit::result< ::fidl::OneWayError> SetUsage (const ::fidl::Request< ::fuchsia_media::AudioRenderer::SetUsage> & request)
Sets the usage of the render stream. This method may not be called after
`SetPcmStreamType` is called. The default usage is `MEDIA`.
::fit::result< ::fidl::OneWayError> SetUsage2 (const ::fidl::Request< ::fuchsia_media::AudioRenderer::SetUsage2> & request)
Sets the usage of the render stream. This method may not be called after
`SetPcmStreamType` is called. The default usage is `MEDIA`.
::fit::result< ::fidl::OneWayError> SetPcmStreamType (const ::fidl::Request< ::fuchsia_media::AudioRenderer::SetPcmStreamType> & request)
Sets the type of the stream to be delivered by the client. Using this method implies
that the stream encoding is `AUDIO_ENCODING_LPCM`.
This must be called before `Play` or `PlayNoReply`. After a call to `SetPcmStreamType`,
the client must then send an `AddPayloadBuffer` request before calling the various
`StreamSink` methods such as `SendPacket`/`SendPacketNoReply`.
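For illustration, a sketch of declaring a 2-channel, 48 kHz float LPCM stream; the `AudioStreamType` field and enum spellings are assumed from the FIDL definitions rather than taken from this header.
```cpp
// Illustrative sketch; the AudioStreamType field and enum spellings are assumptions.
#include <fidl/fuchsia.media/cpp/fidl.h>

void ConfigureStream(fidl::Client<fuchsia_media::AudioRenderer>& renderer) {
  fit::result<fidl::OneWayError> result = renderer->SetPcmStreamType({{
      .type = fuchsia_media::AudioStreamType{{
          .sample_format = fuchsia_media::AudioSampleFormat::kFloat,
          .channels = 2,
          .frames_per_second = 48000,
      }},
  }});
  if (!result.is_ok()) {
    // Handle transport error.
  }
}
```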
::fit::result< ::fidl::OneWayError> EnableMinLeadTimeEvents (const ::fidl::Request< ::fuchsia_media::AudioRenderer::EnableMinLeadTimeEvents> & request)
Enables or disables notifications about changes to the minimum clock lead
time (in nanoseconds) for this AudioRenderer. Calling this method with
'enabled' set to true will trigger an immediate `OnMinLeadTimeChanged`
event with the current minimum lead time for the AudioRenderer. If the
value changes, an `OnMinLeadTimeChanged` event will be raised with the
new value. This behavior will continue until the user calls
`EnableMinLeadTimeEvents(false)`.
The minimum clock lead time is the amount of time ahead of the reference
clock's understanding of "now" that packets need to arrive (relative to
the playback clock transformation) in order for the mixer to be able to
mix the packet. For example:
+ Let the PTS of packet X be P(X)
+ Let the function which transforms PTS -> RefClock be R(p) (this
  function is determined by the call to Play(...))
+ Let the minimum lead time be MLT
If R(P(X)) < RefClock.Now() + MLT, then the packet is late, and some (or
all) of the packet's payload will need to be skipped in order to present
the packet at the scheduled time.
The value `min_lead_time_nsec = 0` is a special value which indicates
that the AudioRenderer is not yet routed to an output device. If `Play`
is called before the AudioRenderer is routed, any played packets will be
dropped. Clients should wait until `min_lead_time_nsec > 0` before
calling `Play`.
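The lateness test above, written out as a small helper (illustrative only):
```cpp
// A packet is late if its reference-clock presentation time falls inside
// the next `min_lead_time_nsec` nanoseconds.
#include <cstdint>

// `ref_presentation_time` is R(P(X)): the packet's PTS mapped onto the
// reference clock by the transformation established by Play().
bool IsPacketLate(int64_t ref_presentation_time, int64_t ref_now,
                  int64_t min_lead_time_nsec) {
  return ref_presentation_time < ref_now + min_lead_time_nsec;
}
```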
::fit::result< ::fidl::OneWayError> PlayNoReply (const ::fidl::Request< ::fuchsia_media::AudioRenderer::PlayNoReply> & request)
::fit::result< ::fidl::OneWayError> PauseNoReply ()