pub struct StreamProcessorSynchronousProxy { /* private fields */ }

Implementations

impl StreamProcessorSynchronousProxy

pub fn new(channel: Channel) -> Self

pub fn into_channel(self) -> Channel

pub fn wait_for_event(&self, deadline: Time) -> Result<StreamProcessorEvent, Error>

Waits until an event arrives and returns it. It is safe for other threads to make concurrent requests while waiting for an event.
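
A minimal usage sketch, assuming the generated bindings live in the fidl_fuchsia_media crate and that Time here is fuchsia_zircon::Time (both are assumptions about the surrounding crates, not part of this API surface):

use fidl_fuchsia_media::{StreamProcessorEvent, StreamProcessorSynchronousProxy};
use fuchsia_zircon as zx;

// Sketch: wrap a client channel in a synchronous proxy and block until the
// server sends the next StreamProcessor event.
fn next_event(channel: fidl::Channel) -> Result<StreamProcessorEvent, fidl::Error> {
    let proxy = StreamProcessorSynchronousProxy::new(channel);
    // Other threads may keep making requests on this proxy while we wait here.
    proxy.wait_for_event(zx::Time::INFINITE)
}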

pub fn enable_on_stream_failed(&self) -> Result<(), Error>

Permits the server to use OnStreamFailed() instead of just closing the whole StreamProcessor channel on stream failure.

If the server hasn’t seen this message by the time a stream fails, the server will close the StreamProcessor channel instead of sending OnStreamFailed().

pub fn set_input_buffer_partial_settings(&self, input_settings: StreamBufferPartialSettings) -> Result<(), Error>

This is the replacement for SetInputBufferSettings().

When the client is using sysmem to allocate buffers, this message is used instead of SetInputBufferSettings()+AddInputBuffer(). A single SetInputBufferPartialSettings() provides the StreamProcessor with the client-specified input settings and a BufferCollectionToken which the StreamProcessor will use to convey constraints to sysmem. Both the client and the StreamProcessor will be informed of the allocated buffers directly by sysmem via their BufferCollection channel (not via the StreamProcessor channel).

The client must not QueueInput…() until after sysmem informs the client that buffer allocation has completed and was successful.

The server should be prepared to see QueueInput…() before it has necessarily heard from sysmem that the buffers are allocated; the server must tolerate either ordering. The QueueInput…() messages and the notification of sysmem allocation completion arrive on different channels, so the client having heard that allocation is complete doesn’t mean the server knows that yet. However, the server can expect that allocation is in fact complete and can expect to get the allocation information from sysmem immediately upon requesting it.
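
A hedged sketch of the input-side ordering described above. Building the StreamBufferPartialSettings table (including the sysmem BufferCollectionToken) is assumed to happen elsewhere, since its fields belong to fuchsia.media rather than to this proxy:

use fidl_fuchsia_media::{StreamBufferPartialSettings, StreamProcessorSynchronousProxy};

// Sketch: hand the client-chosen input settings (carrying the sysmem token)
// to the StreamProcessor.
fn configure_input(
    proxy: &StreamProcessorSynchronousProxy,
    input_settings: StreamBufferPartialSettings,
) -> Result<(), fidl::Error> {
    proxy.set_input_buffer_partial_settings(input_settings)?;
    // Wait on the client's own sysmem BufferCollection channel (not shown)
    // for successful allocation before sending any QueueInput...() message.
    Ok(())
}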

pub fn set_output_buffer_partial_settings(&self, output_settings: StreamBufferPartialSettings) -> Result<(), Error>

This is the replacement for SetOutputBufferSettings().

When the client is using sysmem to allocate buffers, this message is used instead of SetOutputBufferSettings()+AddOutputBuffer(). A single SetOutputBufferPartialSettings() provides the StreamProcessor with the client-specified output settings and a BufferCollectionToken which the StreamProcessor will use to convey constraints to sysmem. Both the client and the StreamProcessor will be informed of the allocated buffers directly by sysmem via their BufferCollection channel (not via the StreamProcessor channel).

Configuring output buffers is required after OnOutputConstraints() is received by the client with buffer_constraints_action_required true and stream_lifetime_ordinal equal to the client’s current stream_lifetime_ordinal (even if there is an active stream), and is permitted any time there is no current stream.

Closing the current stream occurs on the StreamControl ordering domain, so after a CloseCurrentStream() or FlushEndOfStreamAndCloseStream(), a subsequent Sync() completion must be received by the client before the client knows that there’s no longer a current stream.

See also CompleteOutputBufferPartialSettings().

pub fn complete_output_buffer_partial_settings(&self, buffer_lifetime_ordinal: u64) -> Result<(), Error>

After SetOutputBufferPartialSettings(), the server won’t send OnOutputConstraints(), OnOutputFormat(), OnOutputPacket(), or OnOutputEndOfStream() until after the client sends CompleteOutputBufferPartialSettings().

Some clients may be able to send CompleteOutputBufferPartialSettings() immediately after SetOutputBufferPartialSettings(). In that case the client must be prepared to receive output without yet knowing the buffer count or packet count; such clients may internally delay processing the received output until they have heard from sysmem (which is when the client learns the buffer count and packet count).

Other clients may first wait for sysmem to allocate, prepare to receive output, and then send CompleteOutputBufferPartialSettings().
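
A sketch of the "complete immediately" option described above; the caller is assumed to pass the same buffer_lifetime_ordinal that the settings table carries:

use fidl_fuchsia_media::{StreamBufferPartialSettings, StreamProcessorSynchronousProxy};

// Sketch: send the output partial settings and complete them right away,
// accepting that output may arrive before sysmem has reported the buffer
// count and packet count to the client.
fn configure_output_eagerly(
    proxy: &StreamProcessorSynchronousProxy,
    output_settings: StreamBufferPartialSettings,
    buffer_lifetime_ordinal: u64, // assumed to match the ordinal in output_settings
) -> Result<(), fidl::Error> {
    proxy.set_output_buffer_partial_settings(output_settings)?;
    // The server holds back OnOutputConstraints()/OnOutputFormat()/
    // OnOutputPacket()/OnOutputEndOfStream() until this is sent.
    proxy.complete_output_buffer_partial_settings(buffer_lifetime_ordinal)
}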

pub fn flush_end_of_stream_and_close_stream(&self, stream_lifetime_ordinal: u64) -> Result<(), Error>

This message is optional.

This message is only valid after QueueInputEndOfStream() for this stream. The stream_lifetime_ordinal input parameter must match the stream_lifetime_ordinal of the QueueInputEndOfStream(), else the server will close the channel.

A client can use this message to flush through (not discard) the last input data of a stream so that the stream processor server generates corresponding output data for all the input data before the server moves on to the next stream, without forcing the client to wait for OnOutputEndOfStream() before queueing data of another stream.

The difference between QueueInputEndOfStream() and FlushEndOfStreamAndCloseStream(): QueueInputEndOfStream() is a promise from the client that there will not be any more input data for the stream (and this info is needed by some stream processors for the stream processor to ever emit the very last output data). The QueueInputEndOfStream() having been sent doesn’t prevent the client from later completely discarding the rest of the current stream by closing the current stream (with or without a stream switch). In contrast, FlushEndOfStreamAndCloseStream() is a request from the client that all the previously-queued input data be processed including the logical “EndOfStream” showing up as OnOutputEndOfStream() (in success case) before moving on to any newer stream - this essentially changes the close-stream handling from discard to flush-through for this stream only.

A client using this message can start providing input data for a new stream without that causing discard of old stream data. That’s the purpose of this message - to allow a client to flush through (not discard) the old stream’s last data (instead of the default when closing or switching streams which is discard).

Because the old stream is not done processing yet and the old stream’s data is not being discarded, the client must be prepared to continue to process OnOutputConstraints() messages until the stream_lifetime_ordinal is done. The client will know the stream_lifetime_ordinal is done when OnOutputEndOfStream(), OnStreamFailed(), or the StreamProcessor channel closes.
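
A sketch of the flush-through stream switch this enables (the helper and its parameters are illustrative, not part of the protocol):

use fidl_fuchsia_media::StreamProcessorSynchronousProxy;

// Sketch: flush the old stream through instead of discarding it, so input for
// the next stream can be queued immediately afterwards.
fn flush_old_stream(
    proxy: &StreamProcessorSynchronousProxy,
    old_stream_lifetime_ordinal: u64,
) -> Result<(), fidl::Error> {
    // Only valid after QueueInputEndOfStream() for this same stream.
    proxy.flush_end_of_stream_and_close_stream(old_stream_lifetime_ordinal)?;
    // The client can now queue input for a new stream, while still handling
    // OnOutput...() events for the old stream until OnOutputEndOfStream(),
    // OnStreamFailed(), or channel closure.
    Ok(())
}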

pub fn close_current_stream(&self, stream_lifetime_ordinal: u64, release_input_buffers: bool, release_output_buffers: bool) -> Result<(), Error>

This “closes” the current stream, leaving no current stream. In addition, this message can optionally release input buffers or output buffers.

If there has never been any active stream, the stream_lifetime_ordinal must be zero or the server will close the channel. If there has been an active stream, the stream_lifetime_ordinal must be the most recent active stream whether that stream is still active or not. Else the server will close the channel.

Sending this message multiple times without any new active stream in between is not an error. This allows a client to close the current stream to stop wasting processing power on a stream the user no longer cares about, and then later decide that buffers should be released and send this message again with release_input_buffers and/or release_output_buffers true to get the buffers released, if the client wants to avoid overlap in resource usage between old buffers and new buffers (not all clients do).

See also Sync().
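
For example, a sketch of abandoning the current stream while keeping buffers configured (the helper name is illustrative):

use fidl_fuchsia_media::StreamProcessorSynchronousProxy;

// Sketch: stop work on a stream the user no longer cares about, but keep the
// input and output buffers configured for a later stream.
fn abandon_current_stream(
    proxy: &StreamProcessorSynchronousProxy,
    stream_lifetime_ordinal: u64, // most recent active stream, or 0 if none ever
) -> Result<(), fidl::Error> {
    proxy.close_current_stream(stream_lifetime_ordinal, false, false)
}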

pub fn sync(&self, ___deadline: Time) -> Result<(), Error>

On completion, all previous StreamProcessor calls have done what they’re going to do server-side, except for processing of data queued using QueueInputPacket().

The main purpose of this call is to enable the client to wait for a CloseCurrentStream() with release_input_buffers and/or release_output_buffers set to true to take effect before the client allocates new buffers and re-sets-up input and/or output buffers. This de-overlapping of resource usage can be worthwhile for media buffers, which can consume resource types whose overall pools aren’t necessarily vast in comparison to the resources consumed, especially if a client is reconfiguring buffers multiple times.

Note that Sync() prior to allocating new media buffers is not alone sufficient to achieve non-overlap of media buffer resource usage system wide, but it can be a useful part of achieving that.

The Sync() transits the Output ordering domain and the StreamControl ordering domain, but not the InputData ordering domain.

This request can be used to avoid hitting kMaxInFlightStreams, which is presently 10. A client that keeps at most 8 streams in flight will comfortably stay under the limit of 10. While the protocol permits repeated SetInputBufferSettings() and the like, a client that spams the channel can expect the channel to just close if the server or the channel itself gets too far behind.
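
A sketch of the de-overlap pattern described above, assuming the same crate paths as in the earlier sketches and an infinite deadline:

use fidl_fuchsia_media::StreamProcessorSynchronousProxy;
use fuchsia_zircon as zx;

// Sketch: release the old buffers, then wait for the release to take effect
// server-side before allocating replacement buffers.
fn release_buffers_and_wait(
    proxy: &StreamProcessorSynchronousProxy,
    stream_lifetime_ordinal: u64,
) -> Result<(), fidl::Error> {
    proxy.close_current_stream(stream_lifetime_ordinal, true, true)?;
    // Completes only after previous calls (other than processing of queued
    // input) have taken effect server-side.
    proxy.sync(zx::Time::INFINITE)?;
    // From the StreamProcessor's point of view it is now safe to set up new buffers.
    Ok(())
}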

pub fn recycle_output_packet(&self, available_output_packet: &PacketHeader) -> Result<(), Error>

After the client is done with an output packet, the client needs to tell the stream processor that the output packet can be re-used for more output, via this method.

It’s not permitted to recycle an output packet that’s already free with the stream processor server. It’s permitted but discouraged for a client to recycle an output packet that has been deallocated by an explicit or implicit output buffer de-configuration. See buffer_lifetime_ordinal for more on that. A server must ignore any such stale RecycleOutputPacket() calls.
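
A minimal sketch; obtaining the PacketHeader of the packet being returned (typically from the corresponding output packet) is assumed to happen elsewhere:

use fidl_fuchsia_media::{PacketHeader, StreamProcessorSynchronousProxy};

// Sketch: hand a consumed output packet back so the server can reuse it.
fn return_output_packet(
    proxy: &StreamProcessorSynchronousProxy,
    header: &PacketHeader, // header of the packet the client just finished with
) -> Result<(), fidl::Error> {
    proxy.recycle_output_packet(header)
}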

pub fn queue_input_format_details(&self, stream_lifetime_ordinal: u64, format_details: &FormatDetails) -> Result<(), Error>

If the input format details are still the same as specified during StreamProcessor creation, this message is unnecessary and does not need to be sent.

If the stream doesn’t exist yet, this message creates the stream.

The server won’t send OnOutputConstraints() until after the client has sent at least one QueueInput* message.

All servers must permit QueueInputFormatDetails() at the start of a stream without failing, as long as the new format is supported by the StreamProcessor instance. Technically this allows for a server to only support the exact input format set during StreamProcessor creation, and that is by design. A client that tries to switch formats and gets a StreamProcessor channel failure should try again one more time with a fresh StreamProcessor instance created with CodecFactory using the new input format during creation, before giving up.

These format details override the format details specified during stream processor creation for this stream only. The next stream will default back to the format details set during stream processor creation.

This message is permitted at the start of the first stream (just like at the start of any stream). The format specified need not match what was specified during stream processor creation, but if it doesn’t match, the StreamProcessor channel might close as described above.
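
A sketch of overriding the input format at the start of a stream; building the FormatDetails table is out of scope here since its fields are defined by fuchsia.media:

use fidl_fuchsia_media::{FormatDetails, StreamProcessorSynchronousProxy};

// Sketch: override the creation-time input format for this stream only. If the
// server doesn't support the new format, expect the channel to close; the
// suggested recovery is one retry with a fresh StreamProcessor created for the
// new format.
fn start_stream_with_format(
    proxy: &StreamProcessorSynchronousProxy,
    stream_lifetime_ordinal: u64,
    format_details: &FormatDetails,
) -> Result<(), fidl::Error> {
    proxy.queue_input_format_details(stream_lifetime_ordinal, format_details)
}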

pub fn queue_input_packet(&self, packet: &Packet) -> Result<(), Error>

This message queues input data to the stream processor for processing.

If the stream doesn’t exist yet, this message creates the new stream.

The server won’t send OnOutputConstraints() until after the client has sent at least one QueueInput* message.

The client must continue to deliver input data via this message even if the stream processor has not yet generated the first OnOutputConstraints(), and even if the StreamProcessor is generating OnFreeInputPacket() for previously-queued input packets. The client must keep queueing input data as long as it has free input packets, to be assured that the server will ever generate the first OnOutputConstraints().
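
A sketch of delivering input; filling each Packet (buffer index, offsets, stream_lifetime_ordinal, and so on) is assumed to happen elsewhere:

use fidl_fuchsia_media::{Packet, StreamProcessorSynchronousProxy};

// Sketch: keep queueing input packets while free packets are available, even
// before the first OnOutputConstraints() has been seen.
fn queue_all(
    proxy: &StreamProcessorSynchronousProxy,
    packets: &[Packet], // already-filled input packets
) -> Result<(), fidl::Error> {
    for packet in packets {
        proxy.queue_input_packet(packet)?;
    }
    Ok(())
}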

pub fn queue_input_end_of_stream(&self, stream_lifetime_ordinal: u64) -> Result<(), Error>

Inform the server that all QueueInputPacket() messages for this stream have been sent.

If the stream isn’t closed first (by the client, or by OnStreamFailed(), or StreamProcessor channel closing), there will later be a corresponding OnOutputEndOfStream().

The corresponding OnOutputEndOfStream() message will be generated only if the server finishes processing the stream before the server sees the client close the stream (such as by starting a new stream). A way to force the server to finish the stream before closing is to use FlushEndOfStreamAndCloseStream() after QueueInputEndOfStream() before any new stream. Another way to force the server to finish the stream before closing is to wait for the OnOutputEndOfStream() before taking any action that closes the stream.

In addition to serving as an “EndOfStream” marker to make it obvious client-side when all input data has been processed, if a client never sends QueueInputEndOfStream(), no amount of waiting will necessarily result in all input data getting processed through to the output. Some stream processors have some internally-delayed data which only gets pushed through by additional input data or by this EndOfStream marker. In that sense, this message can be viewed as a flush-through at InputData domain level, but the flush-through only takes effect if the stream processor even gets that far before the stream is just closed at StreamControl domain level. This message is not alone sufficient to act as an overall flush-through at StreamControl level. For that, send this message first and then send FlushEndOfStreamAndCloseStream() (at which point it becomes possible to queue input data for a new stream without causing discard of this older stream’s data), or wait for the OnOutputEndOfStream() before closing the current stream.

If a client sends QueueInputPacket(), QueueInputFormatDetails(), or QueueInputEndOfStream() for this stream after the first QueueInputEndOfStream() for this stream, the server should close the StreamProcessor channel.
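
A sketch of the "wait for OnOutputEndOfStream() before closing" option, assuming the same crate paths as in the earlier sketches; the event variant names follow the protocol's event names and are assumptions about the generated StreamProcessorEvent enum:

use fidl_fuchsia_media::{StreamProcessorEvent, StreamProcessorSynchronousProxy};
use fuchsia_zircon as zx;

// Sketch: mark end of input, then block on events until the stream finishes
// (or fails) before doing anything that would close the stream.
fn finish_stream(
    proxy: &StreamProcessorSynchronousProxy,
    stream_lifetime_ordinal: u64,
) -> Result<(), fidl::Error> {
    proxy.queue_input_end_of_stream(stream_lifetime_ordinal)?;
    loop {
        match proxy.wait_for_event(zx::Time::INFINITE)? {
            // The stream is over in either case.
            StreamProcessorEvent::OnOutputEndOfStream { .. } => return Ok(()),
            StreamProcessorEvent::OnStreamFailed { .. } => return Ok(()),
            _other => {
                // Output constraints, formats, packets, and free-input-packet
                // notifications would be handled here.
            }
        }
    }
}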

Trait Implementations

impl Debug for StreamProcessorSynchronousProxy

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

impl SynchronousProxy for StreamProcessorSynchronousProxy

type Proxy = StreamProcessorProxy

The async proxy for the same protocol.

type Protocol = StreamProcessorMarker

The protocol which this Proxy controls.

fn from_channel(inner: Channel) -> Self

Create a proxy over the given channel.

fn into_channel(self) -> Channel

Convert the proxy back into a channel.

fn as_channel(&self) -> &Channel

Get a reference to the proxy’s underlying channel.

Auto Trait Implementations

Blanket Implementations

impl<T> Any for T
where T: 'static + ?Sized,

fn type_id(&self) -> TypeId

Gets the TypeId of self.

impl<T> Borrow<T> for T
where T: ?Sized,

fn borrow(&self) -> &T

Immutably borrows from an owned value.

impl<T> BorrowMut<T> for T
where T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

impl<T> Encode<Ambiguous1> for T

unsafe fn encode(self, _encoder: &mut Encoder<'_>, _offset: usize, _depth: Depth) -> Result<(), Error>

Encodes the object into the encoder’s buffers. Any handles stored in the object are swapped for Handle::INVALID.

impl<T> Encode<Ambiguous2> for T

unsafe fn encode(self, _encoder: &mut Encoder<'_>, _offset: usize, _depth: Depth) -> Result<(), Error>

Encodes the object into the encoder’s buffers. Any handles stored in the object are swapped for Handle::INVALID.

impl<T> From<T> for T

fn from(t: T) -> T

Returns the argument unchanged.

impl<T> Instrument for T

fn instrument(self, span: Span) -> Instrumented<Self>

Instruments this type with the provided Span, returning an Instrumented wrapper.

fn in_current_span(self) -> Instrumented<Self>

Instruments this type with the current Span, returning an Instrumented wrapper.

impl<T, U> Into<U> for T
where U: From<T>,

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<T> Pointable for T

const ALIGN: usize = _

The alignment of pointer.

type Init = T

The type for initializers.

unsafe fn init(init: <T as Pointable>::Init) -> usize

Initializes a with the given initializer.

unsafe fn deref<'a>(ptr: usize) -> &'a T

Dereferences the given pointer.

unsafe fn deref_mut<'a>(ptr: usize) -> &'a mut T

Mutably dereferences the given pointer.

unsafe fn drop(ptr: usize)

Drops the object pointed to by the given pointer.

impl<T, U> TryFrom<U> for T
where U: Into<T>,

type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.

impl<T> WithSubscriber for T

fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>
where S: Into<Dispatch>,

Attaches the provided Subscriber to this type, returning a WithDispatch wrapper.

fn with_current_subscriber(self) -> WithDispatch<Self>

Attaches the current default Subscriber to this type, returning a WithDispatch wrapper.