Are there any constraints on timed metadata raw size in Roku? - streaming

Below are my findings:
When playing an HLS stream whose timed metadata raw size is less than 8 KB, we receive the raw data and are able to display it during playback.
When playing an HLS stream whose timed metadata raw size is more than 8 KB, we do not receive the raw data and cannot display it during playback. When the same HLS stream is played on other platforms (website, iOS, Android), the timed metadata can be read.

Related

Can a valid Ogg/Opus stream contain repeated Opus headers?

The definition of an Ogg/Opus stream requires two headers at the beginning of the stream. These headers are required by decoders, so it is impossible for one to pick up a long-running stream in the middle. If the Opus headers could be repeated periodically, it would be possible for a receiver to be attached to an Opus stream anywhere, and start decoding when the headers appeared, but I haven't found anything in the RFCs or other docs that would allow this.
It would be possible to insert headers by ending and restarting the stream, thus breaking the stream into a number of short pieces, but I don't know if existing decoders would output the result as a continuous stream of decoded audio or not.
Is there a way to structure a long-running Ogg/Opus stream so that existing players (e.g., VLC) could play it correctly regardless of where they happen to pick it up?

Getting bytes/sec on a Guzzle transfer

Is it possible to calculate the bytes/sec on GET requests created by Guzzle 4 or 5?
If so, how is this done?
To answer your question: yes, it is possible to calculate the data rate.
Assuming Guzzle 5 is being used, from the Guzzle documentation on Events, regarding the ProgressEvent:
You can access the emitted progress values using the corresponding public properties of the event object:
$downloadSize: The number of bytes that will be downloaded (if known)
$downloaded: The number of bytes that have been downloaded
$uploadSize: The number of bytes that will be uploaded (if known)
$uploaded: The number of bytes that have been uploaded
In theory, using either an EventSubscriber (neat, tidy, and recommended) or closures passed to the event emitter (not so neat), the approach is:
Start a timer on the BeforeEvent
Use the timer value and the ProgressEvent::downloaded to calculate a data rate.
Stop the timer within the CompleteEvent

Setting an AudioUnit's SampleRate to 16000 Hz without using an AudioSession

I would like to request a 16 kHz sample rate without using an audio session, but have so far not been successful.
Querying the hardware via AudioUnitGetProperty / SampleRate before configuring the mic / speakers reveals an mSampleRate of 0, which is described in the documentation as implying that any sample rate is supported, but it is also stated that the hardware will actually give you the nearest one that it supports. After I request 16 kHz and query the hardware again, the mSampleRate is 44100 for both mic and speakers. Querying the input/output scope of the input/output busses using AudioUnitGetProperty / SampleRate returns 0 in all cases, as does the equivalent query with StreamFormat instead. Querying with AudioSessionGetProperty / CurrentHardwareSampleRate, despite the session not being configured or initialized, returns 44100. Everything works as expected when using an audio session, but the documentation does not describe one as necessary, except for submitting to the App Store, which I am not doing.
It is also not clear to me whether, when using an audio session and requesting 16 kHz, the session quietly converts to 16 kHz between the input and output scopes of the input bus, or whether the hardware does actually support "any" sample rate as mentioned in the documentation. It would also be nice to have a list of sample rates supported by the hardware - it's hard to understand why there isn't a queryable list.
I'm interested in any relevant documentation that describes how to do this (without a session), or explains exactly which sample rates I can set the output scope of the input bus to. I have seen discussion in various threads here that it must be a downshift of 44.1 kHz, but I have so far not found any documentation supporting that assertion.
Many thanks.
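For reference, here is a minimal C sketch of what the question describes: creating a RemoteIO unit and requesting a 16 kHz client-side stream format without configuring an audio session. The bus numbering (1 = mic input, 0 = speaker output), the mono 16-bit PCM layout, the createRemoteIOAt16k name, and the omission of error checking are illustrative assumptions; as described above, the hardware may still run at 44.1 kHz without a session, so the read-back format can differ from what was requested.

    // Minimal sketch (assumptions: RemoteIO unit, bus 1 = mic input,
    // bus 0 = speaker output, error handling omitted). This only sets the
    // unit's client-side stream formats; without an audio session the
    // hardware itself may keep running at 44.1 kHz.
    #include <AudioToolbox/AudioToolbox.h>

    static AudioUnit createRemoteIOAt16k(void) {
        AudioComponentDescription desc = {
            .componentType         = kAudioUnitType_Output,
            .componentSubType      = kAudioUnitSubType_RemoteIO,
            .componentManufacturer = kAudioUnitManufacturer_Apple
        };
        AudioUnit unit = NULL;
        AudioComponentInstanceNew(AudioComponentFindNext(NULL, &desc), &unit);

        // Enable recording on bus 1; playback on bus 0 is enabled by default.
        UInt32 enable = 1;
        AudioUnitSetProperty(unit, kAudioOutputUnitProperty_EnableIO,
                             kAudioUnitScope_Input, 1, &enable, sizeof(enable));

        // 16 kHz, mono, 16-bit signed integer PCM.
        AudioStreamBasicDescription fmt = {0};
        fmt.mSampleRate       = 16000.0;
        fmt.mFormatID         = kAudioFormatLinearPCM;
        fmt.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
        fmt.mChannelsPerFrame = 1;
        fmt.mBitsPerChannel   = 16;
        fmt.mBytesPerFrame    = 2;
        fmt.mFramesPerPacket  = 1;
        fmt.mBytesPerPacket   = 2;

        // Format we want to read from the mic (output scope of input bus)
        // and the format we will feed to the speaker (input scope of output bus).
        AudioUnitSetProperty(unit, kAudioUnitProperty_StreamFormat,
                             kAudioUnitScope_Output, 1, &fmt, sizeof(fmt));
        AudioUnitSetProperty(unit, kAudioUnitProperty_StreamFormat,
                             kAudioUnitScope_Input, 0, &fmt, sizeof(fmt));

        AudioUnitInitialize(unit);

        // Read back what the unit actually accepted; mSampleRate may still
        // differ from 16000 depending on the hardware.
        AudioStreamBasicDescription actual = {0};
        UInt32 size = sizeof(actual);
        AudioUnitGetProperty(unit, kAudioUnitProperty_StreamFormat,
                             kAudioUnitScope_Output, 1, &actual, &size);
        return unit;
    }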

How to send Audio data manually using udp sockets

I am working on a video chat application using UDP sockets.
I am able to capture raw audio data, which is huge in size. Since this is a chat application, I need to be able to transfer this audio data continuously.
The problem is that the audio data is large, so the socket MTU is not allowing me to transfer it.
I am looking for a way to split this data up, send it through the sockets, capture it at the other end, and recombine it to reproduce the voice data.
Please guide me on how to do this using UDP sockets.
With UDP you have to take care of transmission order yourself (UDP datagram number 1 could be received AFTER UDP datagram number 2) and of lost packets (UDP does not guarantee delivery of a datagram).
You should use TCP for big transfers where the order of the packets matters.
About the MTU, you don't have to care if it is smaller than the size of the data you're going to send: the OS will fragment the datagram into multiple IP packets and reassemble them at the other end for you.
Just split the data into blocks of at most about 64 KB (roughly the maximum payload of a single UDP datagram, and hence of a single send() call) and loop until your data is completely transmitted.
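As an illustration of the "split it up and loop" suggestion, here is a minimal C sketch using POSIX sockets. The 1400-byte chunk size (kept under a typical Ethernet MTU so the audio datagrams avoid IP fragmentation), the 4-byte sequence-number header, and the send_audio name are illustrative assumptions, not part of any standard; chunks up to roughly 64 KB would also work, with the OS fragmenting them as described above.

    // Minimal sketch: send a large audio buffer as a series of UDP datagrams,
    // each prefixed with a sequence number so the receiver can detect
    // reordering and loss. Assumes `sock` was created with
    // socket(AF_INET, SOCK_DGRAM, 0) and `dest`/`dest_len` describe the peer.
    #include <arpa/inet.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>

    #define CHUNK_PAYLOAD 1400   /* keep each datagram under a typical MTU */

    static int send_audio(int sock, const struct sockaddr *dest, socklen_t dest_len,
                          const uint8_t *data, size_t len) {
        uint8_t packet[4 + CHUNK_PAYLOAD];
        uint32_t seq = 0;

        for (size_t off = 0; off < len; off += CHUNK_PAYLOAD, seq++) {
            size_t chunk = len - off;
            if (chunk > CHUNK_PAYLOAD)
                chunk = CHUNK_PAYLOAD;

            uint32_t seq_be = htonl(seq);      /* sequence number, network order */
            memcpy(packet, &seq_be, 4);
            memcpy(packet + 4, data + off, chunk);

            if (sendto(sock, packet, 4 + chunk, 0, dest, dest_len) < 0) {
                perror("sendto");
                return -1;
            }
        }
        return 0;
    }

On the receiving side, the sequence number lets you detect reordering and loss; for live voice it is usually better to drop or conceal missing chunks than to wait for them.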

dvb: is it possible to have audio and video in a single 188 byte packet?

To me this seems impossible, but I can't be sure. Can somebody confirm this? If it is possible, how?
Thanks
No. According to the MPEG-2 Systems standard, each transport stream packet belongs to exactly one PID (Packet Identifier), which corresponds to a unique component (either audio or video). Hence, it is not possible to put data from two streams within a single 188-byte packet.
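To see why, it helps to look at the 4-byte transport stream packet header itself: it carries exactly one 13-bit PID per 188-byte packet. Below is a minimal C sketch of extracting it (the example packet bytes are made up for illustration).

    // Minimal sketch: parse the MPEG-2 transport stream packet header.
    // The header carries exactly one 13-bit PID, which is why a single
    // 188-byte packet can only belong to one elementary stream (audio OR video).
    #include <stdint.h>
    #include <stdio.h>

    #define TS_PACKET_SIZE 188

    /* Returns the PID of a TS packet, or -1 if the sync byte is wrong. */
    static int ts_packet_pid(const uint8_t pkt[TS_PACKET_SIZE]) {
        if (pkt[0] != 0x47)                     /* sync_byte must be 0x47 */
            return -1;
        return ((pkt[1] & 0x1F) << 8) | pkt[2]; /* 13-bit packet identifier */
    }

    int main(void) {
        uint8_t pkt[TS_PACKET_SIZE] = { 0x47, 0x01, 0x00 /* PID 0x100 */ };
        printf("PID: 0x%X\n", (unsigned)ts_packet_pid(pkt)); /* prints PID: 0x100 */
        return 0;
    }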