Twilio Voice Timestamps - Flutter

Our application uses the Twilio Voice SDKs for iOS, Android, and Web. Our use case relies on precise device synchronization and timestamping. We are playing an audio stream on multiple adjacent devices (in a Twilio conference call), and we need that audio playback to be in sync. Most of the time it works great, but every now and then one of the devices falls a little behind and throws off the whole experience. We want to detect when a device is falling behind (receiving packets late) so we can temporarily mute it and preserve the user experience we are going for.
We believe that Twilio Voice uses Web Real-Time Communication (WebRTC) and the Real-time Transport Protocol (RTP) under the hood. We also believe RTP carries timestamp information for when packets are sent out and when packets are received.
We are looking for any suggestions for how we might read this timestamp information (both sent & received) to detect device synchronization issues.
Our iOS and Android clients are built using Flutter and Dart, so any way to look at this packet information using Dart would be great. If not, we can use native channels through Swift and Kotlin. For the web, we would need a way to look at this timestamp data using JavaScript.
If possible, we'd like to access this information through the SDK, but I don't see anything about timestamps in Twilio's Voice documentation. If it isn't possible, we might have to sniff packets on the devices and look at the RTP packets coming from Twilio to see what information is available. As long as this does not break Twilio's terms of service, of course :)

Even if you could get this information, I don't think it will be useful. The timestamp field in RTP has little to do with real time. In voice, it's actually a sample offset into the audio stream. With a typical narrowband codec with a fixed bit rate and no silence suppression, it's completely predictable from the RTP sequence number. For example, with 20 ms packets of G.711 it will increment by exactly 160 each packet (8000 samples/s × 0.02 s = 160 samples).
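To make that concrete, here is a tiny sketch (TypeScript, since the question mentions JavaScript for the web) of just how predictable the field is:

```typescript
// Sketch: predicting the RTP timestamp of a G.711 packet from its
// sequence number. 8000 samples/s * 0.020 s = 160 samples per packet.
const SAMPLES_PER_PACKET = 8000 * 0.020; // 160

function expectedRtpTimestamp(
  baseTimestamp: number, // RTP timestamp of a reference packet
  baseSeq: number,       // sequence number of that reference packet
  seq: number            // sequence number of the packet of interest
): number {
  // Real code must handle 16-bit sequence number and 32-bit timestamp
  // wraparound; omitted here for brevity.
  return baseTimestamp + (seq - baseSeq) * SAMPLES_PER_PACKET;
}
```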
RTP receivers expect there to be random variation between the receipt time of a packet and its timestamp, known as jitter. It is introduced by delays at the sender, in the network, and at the receiver, which is why receivers use jitter buffers to reduce the likelihood of buffer underrun during playout. The definition of jitter for RTCP, the interarrival jitter, is a calculation that measures exactly this: the variation between the (predictable) RTP timestamp and the measured wallclock arrival time at the receiver.
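You don't need to compute this yourself in the browser, though: WebRTC already surfaces the receiver-side jitter and loss counters through the standard getStats() API. A sketch of reading them, assuming you can obtain a reference to the RTCPeerConnection underneath the Twilio Voice JS SDK (whether the SDK exposes one is an assumption here, not a documented API):

```typescript
// Sketch: reading the browser-computed interarrival jitter for the
// incoming audio. Assumes `pc` is the underlying RTCPeerConnection;
// how to get it from the Twilio SDK is an assumption, not a documented API.
async function logInboundAudioStats(pc: RTCPeerConnection): Promise<void> {
  const report = await pc.getStats();
  report.forEach((stats) => {
    if (stats.type === "inbound-rtp" && stats.kind === "audio") {
      // `jitter` is in seconds; the two counters are cumulative.
      console.log(
        `jitter=${stats.jitter}s lost=${stats.packetsLost} received=${stats.packetsReceived}`
      );
    }
  });
}
```

Polling this every second or so and muting a device whose jitter or loss spikes is probably the closest you can get to your goal without packet sniffing.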
Maybe you need something more like an NTP-style protocol between your clients and your server.
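If you go that route, the core of the exchange is small. A minimal sketch, assuming a hypothetical askServerForTime helper that performs one request/response round trip to your server:

```typescript
// Sketch of an NTP-style clock offset estimate. `askServerForTime` is a
// hypothetical helper returning the server's clock reading (ms) for one
// round trip over whatever channel you have (WebSocket, HTTP, ...).
async function estimateClockOffsetMs(
  askServerForTime: () => Promise<number>
): Promise<number> {
  const t0 = Date.now();                       // client transmit time
  const serverTime = await askServerForTime(); // server's clock, mid round trip
  const t3 = Date.now();                       // client receive time
  // Assume the reply arrived halfway through the round trip:
  return serverTime - (t0 + t3) / 2;           // server clock minus client clock
}
```

Averaging several round trips and discarding outliers gives a fairly stable offset; with a shared clock, each device can schedule playback (or mute itself) at an agreed wall-clock instant.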

Related

Movesense Peak detection trigger

On the Maxim EKG chip, INT2B can be set as a peak-detection trigger. How do I send a Bluetooth notification, preferably with a timestamp, as soon as possible through the Nordic platform? Thank you.
In the Movesense architecture there is no direct access to the hardware; everything happens via the Movesense API (defined in MovesenseCoreLib/resources/movesense-api as .yaml files).
The Maxim MAX30003 peak detection is accessed via the /Meas/HR resource, which gives a notification each time a peak is detected. That resource can be subscribed to directly from the mobile (via the MDS library, see movesense-mobile-lib). The delay from the actual peak to the mobile notification should be relatively constant and dominated by the Maxim chip's detection delay (read: I have not measured it). The BLE connection itself adds a delay of roughly 20-100 ms depending on the BLE connection parameters etc. This is the way I'd go, since later, when we add a timestamp to /Meas/HR (it has already been requested), it's a simple modification to use the included timestamp.
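Until /Meas/HR carries a timestamp natively, a pragmatic option is to stamp the notifications on arrival in the mobile app and subtract an assumed fixed latency. A rough sketch; `subscribeHr` is a hypothetical wrapper around whatever subscribe call your MDS binding exposes, the payload shape is approximate, and the latency constant is something you would calibrate yourself:

```typescript
// Sketch: timestamping /Meas/HR notifications on arrival. `subscribeHr`
// is a hypothetical stand-in for your MDS binding's subscribe call; only
// the timestamping logic is the point here.
type HrNotification = { average: number; rrData: number[] }; // approximate shape

declare function subscribeHr(onPeak: (n: HrNotification) => void): void;

// Assumption: detection delay + BLE delay is roughly constant, so
// arrival time minus an estimated fixed latency approximates peak time.
const ASSUMED_LATENCY_MS = 60; // calibrate against a reference signal

subscribeHr((n) => {
  const peakTimeMs = Date.now() - ASSUMED_LATENCY_MS;
  console.log(`peak at ~${peakTimeMs} ms, rr=${n.rrData}`);
});
```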
Alternatively, you can write your own sensor app (firmware) with its own API that subscribes to /Meas/HR and, for each notification, does a GET to /Time/Detailed and returns just the timestamp each time a peak is detected. As a starting point I'd recommend taking the jumpmeter_app sample and modifying it accordingly.
Full disclaimer: I work for the Movesense team.

How to set up a media server infrastructure?

I need to set up a media server infrastructure to support live streaming. I have endless questions about this, as the area is totally new to me. I have done the research, but I received so many conflicting answers that I don't know whom to believe.
Context:
Wowza
Wowza Engine
Audio and video live streaming
15 x 20-minute live streams per day
Between 7 and 15 concurrent live streams may happen at the same moment in time
720p quality is sufficient
Every live stream will be viewed by only 1 to 5 viewers
Viewers will watch the stream in an internet browser of their choice; ideally they should also be able to watch on their phones (even if it's via the website in the phone's browser)
Choppy/buffering streams are not acceptable
Streams do not need to be recorded or stored
Footage may be taken from webcams or phones
The audience is in the US (and so are the publishers of the live streams)
Questions:
1) Do I need the Wowza Transcoder?
Some suppliers told me I need the transcoder only if I require adaptive bitrate.
Others told me I need the transcoder only if I need to stream to iPhones or other Apple devices.
Others told me I need transcoders because I want to run concurrent live streams and would consequently need one transcoder licence per concurrent live stream.
Others told me that concurrent live streams (multiple channels?) can happen even if I do not buy transcoder licences.
At this stage I do not know whom to believe. The Wowza documentation says transcoders are required to convert incoming streams from one format to another and to provide adaptive bitrate, but I am still not sure.
2) Can I host the website at a web hosting provider and buy (and consume) the media server from somewhere else?
- For example, can I host the website on TSOHost but buy the media server from primcast or serverroom.net?
3) If the answer to the above is yes, will the bandwidth of both hosting providers be consumed when transmitting a live stream?
4) Since footage is taken either from phones or from webcams, which software do the users need to install in order to transmit the footage?
5) For 15 x 20 minute live streams per day, how much bandwidth is consumed? How do I calculate that?
6) Do I need adaptive bitrate streaming? Or is it required only if the audience can be expected to have bad internet speed?
7) Does adaptive bitrate streaming require special software on the encoding side, or do the regular Adobe Flash Media Live Encoder and Wowza GoCoder do the trick?
Thank you in advance. If you know a freelance expert I can hire, send me their details :P
Quite a few questions; I'll try to add some answers (and you can of course contact me outside of SO).
1, Do you need Wowza Transcoder?
If the streams come from software that can send multiple bitrates, like the Flash Media Live Encoder, which is capable of sending the same stream in 3 different qualities, then you don't. Alternatively, you can use free software like ffmpeg on the publisher side to avoid server-side transcoding, but the cost is more CPU load on the publisher's machine and of course more upstream bandwidth. Or you can receive a single stream on the server, produce the different qualities there with ffmpeg, and feed those into Wowza Streaming Engine. But if you are not cost-sensitive and want a robust and simple solution, the Transcoder AddOn is recommended.
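To illustrate the publisher-side option, here is a rough Node/TypeScript sketch that fans one source out to three renditions with ffmpeg. The URLs, stream names, and bitrates are placeholders for your own setup:

```typescript
// Sketch: push the same source to the media server as three separate
// renditions via ffmpeg, so no server-side transcoder is needed.
import { spawn } from "child_process";

const source = "rtmp://localhost/live/source"; // placeholder input
const renditions = [
  { name: "720p", size: "1280x720", bitrate: "2000k" },
  { name: "480p", size: "854x480",  bitrate: "1000k" },
  { name: "240p", size: "426x240",  bitrate: "400k"  },
];

for (const r of renditions) {
  // One ffmpeg process per rendition for clarity; a production setup
  // would use a single process with multiple outputs to decode once.
  spawn("ffmpeg", [
    "-i", source,
    "-c:v", "libx264", "-b:v", r.bitrate, "-s", r.size,
    "-c:a", "aac", "-b:a", "96k",
    "-f", "flv", `rtmp://wowza.example.com/live/stream_${r.name}`, // placeholder output
  ], { stdio: "inherit" });
}
```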
2, Can I host the website at a web hosting provider and buy (and consume) the media server from somewhere else?
Sure you can, this is a typical scenario. In your website you can embed a player like JW Player or similar and simply set it up to pull the stream from anywhere else. If you want to make sure that your streams are not reachable from other sites using the same technique, you can use (my) Wrench for authentication or build something similar.
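The embed itself can be as small as this sketch (the element ID and stream URL are placeholders; check JW Player's docs for the exact config your version expects):

```typescript
// Sketch: a minimal JW Player embed pointing at the media server.
// The element ID and URL are placeholders.
declare const jwplayer: (id: string) => { setup(config: object): void };

jwplayer("playerDiv").setup({
  file: "https://media.example.com/live/mystream/playlist.m3u8",
  width: "100%",
  aspectratio: "16:9",
});
```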
3, will the bandwidth of both hosting providers be consumed when transmitting a live stream?
No, the player will receive the stream directly from the media server, not via the website's hosting provider.
4, Footage
What is footage?
5, Bandwidth
Multiply the stream's bytes per second by the number of seconds, by the number of streams, and by the number of viewers (plus one for the ingest leg).
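For example, a back-of-the-envelope for the numbers in the question (the 2 Mbps figure for 720p is an assumption; substitute your actual encoder bitrate):

```typescript
// Back-of-the-envelope bandwidth estimate. The 720p bitrate is assumed.
const BITRATE_BPS = 2_000_000;  // ~2 Mbps for 720p (assumption)
const STREAM_SECONDS = 20 * 60; // each stream runs 20 minutes
const STREAMS_PER_DAY = 15;
const VIEWERS_PER_STREAM = 5;   // worst case from the question

const bytesPerStream = (BITRATE_BPS / 8) * STREAM_SECONDS; // 300 MB
// Ingest (publisher -> server) plus egress (server -> each viewer):
const dailyBytes =
  STREAMS_PER_DAY * bytesPerStream * (1 + VIEWERS_PER_STREAM);
console.log(`${(dailyBytes / 1e9).toFixed(1)} GB per day`); // 27.0 GB per day
```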
6, Adaptive
You need adaptivity if the bandwidth varies, so for mobile devices it is highly recommended, but it is good for everyone generally: network speed can drop at any time, and if you don't want buffering spinners, you need this.
7, Does adaptive bitrate streaming require special software on the encoding side?
No, it's not the encoding side, it's the player and media server side. If multiple bitrate renditions are available on the media server and the chosen technology and player support it, then you get adaptivity.

Initiating comms to an embedded 3G device

I have an Arduino-based device interfaced to a 3G modem, which I use to record data from several sensors in a remote environment. I would like to be able to send commands and stream some data from the device every now and then back to my standard network-connected PC. If the remote device were connected to Wi-Fi or another local area network this would be relatively straightforward, but as the device connects over 3G, it sits behind the 3G carrier's NAT, and so establishing a connection to the device becomes difficult.
The device can, of course, open a TCP connection to my host PC any time it wishes; the problem is telling the device when I want it to do so. I need some way of getting a message to the device to notify it that I would like it to initiate a connection to my PC.
I've been reading up on the NAT traversal techniques that app developers use to initiate P2P comms between two devices both behind NATs, such as UDP and TCP 'hole punching', but this method seems rather too complex for my Arduino system. Another general idea is to have the device poll a web server periodically, looking for a signal to initiate a connection, but I'm not sure how much traffic (and data usage cost) this would generate, as the device would have to poll every 10 seconds or so to make sure it initiates its connection within a reasonable time frame of the request being posted on the web server it polls.
Is there any commonly used method of achieving something like this? Any general ideas or insight would be much appreciated.
Thanks,
James
I think the solution will depend largely on your particular applications and requirements.
There are several ways to achieve this type of functionality and it looks like you have covered some of them already. The most common are:
have the device poll the server. This may be OK depending on the response times you need. If you need to poll as regularly as you suggest above, then I imagine power may matter more to you than data rates, especially if you are battery powered. With a typical 3G data plan the polling itself will have a negligible data overhead, I would think (a minimal poll-loop sketch follows this answer).
have the server send an SMS which then triggers the device to contact the server. You need to make sure the variable SMS delivery time is acceptable for you, and you also have to be aware that SMS delivery is not guaranteed, so you would have to build in some sort of delivery check at a higher layer (or into your application).
use some low-cost Android-based device for your 3G connectivity and leverage Google's push notification mechanism
It is worth noting that server polling typically gets very bad press, as it seems intuitively wasteful to have the client and the server constantly checking for messages, especially when the actual messages are fairly infrequent. However, underneath most push solutions there is still a pull mechanism in the background, albeit generally a very efficient one that may, for example, piggyback on other messages between the network and the mobile device and hence have minimal power and data overhead. Personally, I would say that if you do not have major concerns about battery/power or about the load polling might put on your servers, it is worth exploring whether the simplicity of a polling solution outweighs its disadvantages.
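For a feel for how light the poll loop can be, here is a sketch in TypeScript (on the Arduino this would be a tiny HTTP GET over the modem instead; the endpoint URL and response convention are placeholders). A long-polling variant, where the server holds the request open until a command arrives, reduces both latency and request count:

```typescript
// Sketch: device-side poll loop. The URL is a placeholder; the server
// answers 204 when idle, 200 with a command body when there is work.
const POLL_URL = "https://example.com/device/42/command"; // placeholder

async function pollForever(): Promise<void> {
  while (true) {
    try {
      const res = await fetch(POLL_URL);
      if (res.status === 200) {
        const cmd = await res.text();
        console.log(`command received: ${cmd}`); // open the TCP link here
      }
      // 204: nothing to do; an idle poll costs only a few hundred
      // bytes of headers on a typical data plan.
    } catch {
      // transient network error: just try again on the next round
    }
    await new Promise((resolve) => setTimeout(resolve, 10_000)); // every 10 s
  }
}

pollForever();
```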

triggering an event simultaneously on multiple iOS devices

I would like to trigger an event (e.g. play music) on multiple iOS devices at the exact same time (to within milliseconds).
My approach is to keep a socket connection open, send a timestamp to the iOS devices (10 seconds ahead of the current time), and trigger the event on each device at that timestamp.
The problem is that the devices' clocks might differ by 1 or 2 seconds, and that would cause a desynchronization. And even if the timestamps point to the same time on each device (AFAIK), they are not millisecond-accurate.
Is there any way to trigger an event simultaneously on multiple devices, or an approach which should be followed?
Don't send the data over the Internet. You can't assume the connection latency will be low enough for your needs. Use Bluetooth instead. You can do this with GameKit, with dns-sd, or with a library like HHServices.
Pick a device that will act as the controller. Apple provides sample code for doing so with GameKit, but it's not difficult to think up your own method. When you want to trigger the action, the controller sends a packet over Bluetooth to the other devices.
I doubt you need lower latency than that, but if you do: have the controller send packets to each connected device to ascertain the latency of each connection, have the devices send their timestamps back to the controller, and the controller should then be able to calculate, for each device, a local timestamp that occurs at the same instant.
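A sketch of that last scheme, for one device; `getDeviceTime` is a hypothetical helper that wraps one Bluetooth round trip to the device:

```typescript
// Sketch: the controller computes, for one device, the device-local time
// at which to fire so that all devices act at the same instant.
// `getDeviceTime` is a hypothetical helper wrapping one round trip.
async function fireTimeForDevice(
  getDeviceTime: () => Promise<number>, // returns the device's clock (ms)
  leadTimeMs = 500                      // must exceed the worst-case latency
): Promise<number> {
  const t0 = Date.now();                    // controller send time
  const deviceTime = await getDeviceTime(); // device clock, mid round trip
  const t3 = Date.now();                    // controller receive time
  const offset = deviceTime - (t0 + t3) / 2; // device clock minus controller clock
  // The same controller-clock instant, expressed on the device's clock:
  return Date.now() + leadTimeMs + offset;
}
```

The controller sends each device its own fire time; each device then simply waits until its local clock reaches that value. Averaging a few round trips per device tightens the estimate.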

External Accessory reading problem

I need to receive data periodically through a Bluetooth External Accessory.
I implemented an event-driven model on top of the External Accessory's streams. However, the initial transmission from Bluetooth is always delayed. For example, if each packet is 15 bytes long, the stream delegate does not fire until about 150 bytes have arrived.
Will polling help?
EDIT:
Also, I found it hard to recover the session after the app switches back from the background to the foreground. Trying to open the session again fails. Any idea?
Read every available byte when NSStreamEventHasBytesAvailable arrives.
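The point is to drain the stream's buffer in a loop each time the event fires, not to read a single packet's worth and return. For illustration only, here is the same pattern with a Node.js Readable (on iOS you would loop read(_:maxLength:) while hasBytesAvailable is true):

```typescript
// Illustration of the "drain on event" pattern using a Node.js Readable;
// the iOS equivalent loops inputStream.read(...) while hasBytesAvailable.
import { Readable } from "stream";

function drainOnReadable(stream: Readable): void {
  stream.on("readable", () => {
    // Read until the internal buffer is empty, not just one chunk.
    let chunk: Buffer | null;
    while ((chunk = stream.read()) !== null) {
      console.log(`got ${chunk.length} bytes`);
    }
  });
}
```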
Did you develop your own Bluetooth accessory? Maybe the MCU only flushes its buffer after every 150 bytes.
You also mentioned the initial transmission. Note that once the Bluetooth device is paired and connected to the iPhone, it has to go through an identification process, handshaking a secret certificate. This can take a few seconds, even 10, depending on signal quality. This may be the cause of the delay.