VoIP delta spikes below 20ms, causing the jitter to change - sip

I am trying to do some measurements on VoIP. I am using OpenSIPS, RTPProxy, and SIPp for testing.
Everything works fine as expected, but I only have a question regarding the delta time.
Below is a screenshot I got from Wireshark's RTP stream analysis.
Why do I have these spikes below 20 ms?
I am playing 8kulaw in a SIPp XML scenario, where 8kulaw has the following characteristics:
8kulaw.wav: RIFF (little-endian) data, WAVE audio, ITU G.711 mu-law,
mono 8000 Hz
Much appreciated!

The "RTP Stream Analysis" from wireshark is giving you hints on the quality of the stream.
Your Max Delta value is 20.15 and occurs at packet 2008.
This will indicate the time between 2 packets which in your use-case are supposed to be spaced by exactly 20ms.
So the maximum difference is very short and should definitly not affect the quality of the stream. Usually, this is used on receiver (for incoming stream): on sender, there is usually no internal latency. This probably explains why you have so short "Max Delta".
The spikes you see are pretty big, but this is mostly because the scale is very short. Not because the stream is bad.
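For reference, this is roughly how the delta and jitter figures relate (a minimal sketch of my own, not Wireshark's code, using made-up arrival times for a G.711 stream with 160 timestamp units per 20 ms packet):

    /* Delta is simply the gap between consecutive arrivals; jitter is the
     * RFC 3550 interarrival-jitter estimate that Wireshark also reports. */
    #include <math.h>
    #include <stdio.h>

    #define CLOCK_RATE 8000.0   /* G.711 mu-law sample rate */

    int main(void)
    {
        /* Hypothetical capture data: arrival time (s) and RTP timestamp. */
        double   arrival[] = { 0.000, 0.0198, 0.0405, 0.0601 };
        unsigned rtp_ts[]  = { 0, 160, 320, 480 };
        int n = 4;

        double max_delta = 0.0, jitter = 0.0;

        for (int i = 1; i < n; i++) {
            /* Delta: what the "Delta (ms)" column graphs. */
            double delta = arrival[i] - arrival[i - 1];
            if (delta > max_delta)
                max_delta = delta;

            /* RFC 3550 jitter, computed in RTP timestamp units. */
            double d = (arrival[i] - arrival[i - 1]) * CLOCK_RATE
                     - (double)(rtp_ts[i] - rtp_ts[i - 1]);
            jitter += (fabs(d) - jitter) / 16.0;

            printf("pkt %d: delta = %.1f ms, jitter = %.2f ms\n",
                   i, delta * 1000.0, jitter / CLOCK_RATE * 1000.0);
        }
        printf("max delta = %.1f ms\n", max_delta * 1000.0);
        return 0;
    }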

Related

UDP stream with little packets

I have a small network with a client and a server, and I'm testing the frame rate while changing the packet size. Specifically, I have an image; by changing a threshold I extract keypoints and descriptors and then send a fixed number of packets (with different sizes for different thresholds). Problems occur when the UDP packets are below the MTU size: the reception rate decreases and the frame rate tends to become constant. I verified with Wireshark that my reception times are correct, so it isn't a problem in the server code.
This is the graph for the same image sent 30 times per threshold, with the threshold going from 40 to 170 in steps of 10.
I can't post the image, so this is the link.
Thanks for the responses.
I don't think anyone will be interested in this answer, but we came to the conclusion that the problem lies in the Wi-Fi dongle's drivers.
The transmission window does not go below a certain time threshold, so below a certain amount of data the time stays constant and the rate therefore decreases.

iOS: Bad Mic input latency measurement result

I'm running a test to measure the basic latency of my iPhone app, and the result was disappointing: 50ms for a play-through test app. The app just picks up mic input and plays it out using the same render callback, no other audio units or processing involved. Therefore, the results seemed too bad for such a basic scenario. I need some pointers to see if the result makes sense or I had design flaws in my test.
The basic idea of the test was to have three roles:
1. My finger snap as the reference sound source.
2. A simple iOS play-thru app (using the built-in mic) as the first listener to #1.
3. A Mac (with a USB mic and Audacity) as the second listener to #1 and the only listener to the iOS output (through a speaker connected via the iOS headphone jack).
Then, with Audacity in recording mode, the Mac would pick up both the sound from my fingers and its "clone" from the iOS speaker in close range. Finally I simply visually observe the waveform in Audacity's recorded track and measure the time interval between the peaks of the two recorded snaps.
This was by no means a super accurate measurement, but at least the innate latency of the Mac recording pipeline should have been cancelled out this way, so the error should mainly come from the peak-distance measurement, which I assume is much smaller than the audio pipeline latency and can be ignored.
I was expecting 20ms or lower latency, but clearly the result gave me 50~60ms.
My ASBD uses kAudioFormatFlagsCanonical as the format flags and kAudioFormatLinearPCM as the format ID.
50 ms is about 4 ms more than the duration of 2 audio buffers (one output, one input) of size 1024 at a sample rate of 44.1 kHz.
17 ms is around 5 ms more than the duration of 2 buffers of length 256.
So it looks like the iOS audio latency is around 5 ms plus the duration of the two buffers (the audio output buffer duration plus the time it takes to fill the input buffer) ... on your particular iOS device.
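For reference, the buffer arithmetic behind those figures:

    2 × 1024 frames / 44100 Hz ≈ 46.4 ms, and 50 ms − 46.4 ms ≈ 4 ms
    2 ×  256 frames / 44100 Hz ≈ 11.6 ms, and 17 ms − 11.6 ms ≈ 5 ms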
A few iOS devices may support even shorter audio buffer sizes of 128 samples.
You can use Core Audio and set up the audio session to have very low latency.
You can request a smaller buffer using AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration, ...).
Using smaller buffers causes the audio callback to happen more often while grabbing smaller chunks of audio. Keep in mind that this is merely a suggestion to the audio system; iOS will choose a suitable callback interval based on your sample rate and integer powers of 2.
Once you set the buffer duration, you can get the actual buffer duration the system will use with AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareIOBufferDuration, ...).
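Roughly, those calls fit together like this (a minimal sketch using the old AudioSession C API referenced above, now deprecated in favour of AVAudioSession; the 256-frame target and the skipped error handling are my own simplifications):

    #include <AudioToolbox/AudioToolbox.h>
    #include <stdio.h>

    void configureLowLatencyIO(void)
    {
        /* Initialize the session (NULL run loop and listener for brevity). */
        AudioSessionInitialize(NULL, NULL, NULL, NULL);

        /* Ask for ~5.8 ms of I/O buffer (256 frames at 44.1 kHz). */
        Float32 preferred = 256.0f / 44100.0f;
        OSStatus status = AudioSessionSetProperty(
            kAudioSessionProperty_PreferredHardwareIOBufferDuration,
            sizeof(preferred), &preferred);
        if (status != noErr) { /* handle the error */ }

        AudioSessionSetActive(true);

        /* The request is only a hint: read back what the system chose. */
        Float32 actual = 0;
        UInt32 size = sizeof(actual);
        AudioSessionGetProperty(
            kAudioSessionProperty_CurrentHardwareIOBufferDuration,
            &size, &actual);

        printf("actual IO buffer duration: %f s (~%d frames at 44.1 kHz)\n",
               actual, (int)(actual * 44100.0f + 0.5f));
    }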
I'll summarize Paul R's comments as the answer, which has solved my problem:
50 ms corresponds to a total buffer size of around 2048 at a 44.1 kHz sample rate, which doesn't seem unreasonable given that you have both a record and a playback path.
I don't know that the buffer size is 2048, and there may be more than one buffer in your record-playback loopback test, but it seems that the effective total buffer size in your test is probably on the order of 2048, which doesn't seem unreasonable. Of course, if you're only interested in record latency, as the title of your question suggests, then you'll need to find a way to tease that out separately from playback latency.

How to measure how fast I'm sending the UDP datagrams?

I have two questions about the actual sending speed using sendto() with C socket programming.
I did a little socket programming, and I'm sending UDP datagrams back to back, with no spacing (pausing) between sendto() calls, in a for loop. Is it reasonable to use clock_gettime() to get the elapsed time and calculate the actual sending rate? What actually determines the sending speed: the CPU's frequency, or the network interface that I'm using? My understanding is that it should be the slower of the two. And with clock_gettime(), can I get a reasonably good estimate of this sending speed? Say we get this sending speed and denote it by S.
Suppose I'm sending the UDP datagrams from a PC through a 100 Mbps Ethernet interface to a router. What's the actual arrival rate at the router? If S is greater than 100 Mbps, then the arrival rate will be around 100 Mbps, right? And if S is less than 100 Mbps, then the arrival rate should be S, right? Or should it still be 100 Mbps? I'm a little confused.
The reason I'm doing this is that I want to find the maximum burst of UDP datagrams I can send in a row to the router (given a certain bandwidth limit of the outgoing link) without dropping any datagrams. Any idea how to do some tests to get this?
A million things affect the speed and the number of dropped packets. I recommend you write a C program that varies the sending speed, burst length, etc., measures speed and dropped packets, and outputs the results to something you can graph, such as a CSV file.
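A minimal sketch of that kind of program (the receiver address, port, and sizes are made up; note this measures how fast the application hands datagrams to the kernel, not necessarily the rate at which they leave the wire):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <time.h>
    #include <unistd.h>

    #define PAYLOAD_SIZE 1400   /* bytes per datagram (below a typical MTU) */
    #define BURST_COUNT  10000  /* datagrams per burst */

    int main(void)
    {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        if (sock < 0) { perror("socket"); return 1; }

        struct sockaddr_in dst;
        memset(&dst, 0, sizeof dst);
        dst.sin_family = AF_INET;
        dst.sin_port   = htons(9000);                      /* hypothetical receiver */
        inet_pton(AF_INET, "192.0.2.10", &dst.sin_addr);   /* TEST-NET address */

        char payload[PAYLOAD_SIZE] = {0};
        struct timespec start, end;

        clock_gettime(CLOCK_MONOTONIC, &start);
        for (int i = 0; i < BURST_COUNT; i++) {
            if (sendto(sock, payload, sizeof payload, 0,
                       (struct sockaddr *)&dst, sizeof dst) < 0) {
                perror("sendto");
                break;
            }
        }
        clock_gettime(CLOCK_MONOTONIC, &end);

        double secs = (end.tv_sec - start.tv_sec)
                    + (end.tv_nsec - start.tv_nsec) / 1e9;
        double mbps = BURST_COUNT * PAYLOAD_SIZE * 8.0 / secs / 1e6;
        printf("sent %d datagrams in %.3f s -> about %.1f Mbit/s handed to the stack\n",
               BURST_COUNT, secs, mbps);

        close(sock);
        return 0;
    }

Varying PAYLOAD_SIZE and BURST_COUNT (and adding a receiver that counts what actually arrives) gives the speed-versus-loss data the answer describes.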

Data transfer rates for iOS

We are developing an application that transfers data from iOS to a server.
On our latest test, the upload speed at both the beginning and end of the transfer was 0.46 Mbps, and the amount of data transferred was 14.5 MB. That should take about 4 minutes according to the math; it took 6 minutes and 19 seconds. Is that a standard amount of time for that data to be transferred, or is this an issue with the code?
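For reference, the arithmetic at the quoted rate (ignoring protocol overhead):

    14.5 MB × 8 ≈ 116 Mbit
    116 Mbit / 0.46 Mbps ≈ 252 s ≈ 4.2 minutes expected
    measured: 6 min 19 s = 379 s, i.e. an effective rate of roughly 116 / 379 ≈ 0.31 Mbps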
You can lose 3-15% in TCP overhead, and that's before you account for any packet retransmission due to errors or packet loss. Your actual transmission time is enough to indicate that you probably are experiencing delay in data transmission related to more than just TCP overhead. http://www.w3.org/Protocols/HTTP/Performance/Nagle/summary.html is one good reference for some detailed metrics on TCP overhead.
You should run a tcpdump on the server side to get a more accurate picture of what's occurring.

Bandwidth measurement by minimum data transfer

I intend to write an application in which I will need to calculate the network bandwidth along with latency and packet loss rate. One of the constraints is to measure the bandwidth passively (using the application data itself).
What I have read online and understood from a few existing applications is that almost all of them use active probing techniques (that is, generating a flow of probe packets) and use the time difference between the arrival of the first and last packets to calculate the bandwidth.
The main problems with such a technique are that it floods the network with probe packets, takes longer to run, and is not scalable (since the application needs to run at both ends).
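For what it's worth, the estimate from such a probe train is simply the amount of probe data divided by the time it took to arrive; with made-up numbers:

    bandwidth ≈ (bytes in probe train × 8) / (t_last − t_first)
    e.g. 100 packets × 1400 bytes × 8 ≈ 1.12 Mbit arriving over 15 ms → ≈ 75 Mbit/s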
One of the suggestions was to calculate the RTT of a packet by echoing it back to the sender and calculate the bandwidth using the following equation:
Bandwidth <= (Receive Buffer size)/RTT.
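As a purely hypothetical example of that bound, with a 64 KB receive buffer and a 50 ms RTT:

    64 KB × 8 = 524288 bits
    524288 bits / 0.050 s ≈ 10.5 Mbit/s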
I am not sure how accurate this could be, as the receiver may not always echo the packet back promptly enough to give a correct RTT. Using ICMP alone may not always work either, as many servers disable it.
My main application runs over a TCP connection, so I am interested in using the TCP connection to measure the actual bandwidth offered over a particular period of time. I would really appreciate it if anybody could suggest a simple technique (a reliable formula) to measure the bandwidth of a TCP connection.
It is only possible to know the available bandwidth by probing the network. This is because an 80% utilized link will still return echo packets without delay, i.e. it will appear to be 0% occupied.
If you instead just wish to measure the bandwidth your application is using, that is much easier. E.g. keep a record of the amount of data you have transferred in the last second, divided into 10 ms intervals.
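A minimal sketch of that bookkeeping (names and structure are my own): a ring of 100 buckets of 10 ms each, covering the last second:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    #define BUCKETS   100   /* 100 buckets x 10 ms = the last 1 second */
    #define BUCKET_MS 10

    static uint64_t bucket_bytes[BUCKETS];
    static uint64_t bucket_id[BUCKETS];    /* which 10 ms slot a bucket holds */

    static uint64_t now_slot(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * (1000 / BUCKET_MS)
             + (uint64_t)ts.tv_nsec / (BUCKET_MS * 1000000L);
    }

    /* Call this from the send (or receive) path with the byte count. */
    void bw_record(size_t nbytes)
    {
        uint64_t slot = now_slot();
        size_t idx = slot % BUCKETS;
        if (bucket_id[idx] != slot) {      /* stale bucket: reuse it */
            bucket_id[idx] = slot;
            bucket_bytes[idx] = 0;
        }
        bucket_bytes[idx] += nbytes;
    }

    /* Sum every bucket that falls within the last second. */
    uint64_t bw_bytes_last_second(void)
    {
        uint64_t slot = now_slot(), total = 0;
        for (size_t i = 0; i < BUCKETS; i++)
            if (slot - bucket_id[i] < BUCKETS)
                total += bucket_bytes[i];
        return total;
    }

    int main(void)
    {
        bw_record(1500);   /* e.g. one full-size packet just went out */
        printf("bytes in the last second: %llu\n",
               (unsigned long long)bw_bytes_last_second());
        return 0;
    }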
Active probing techniques and their variants are bandwidth estimation algorithms. You don't want to use these algorithms to measure bandwidth; note the difference between 'measure' and 'estimate'.
If you want to use TCP to measure bandwidth, you should be aware that TCP throughput is influenced by latency.
The easiest way to measure bandwidth using TCP is to send TCP traffic and measure the bandwidth actually transferred, but that will flood the network. None of the non-flooding algorithms are reliable on high-speed networks. In addition, non-flooding algorithms assume the channel is clear of other traffic; if there is other traffic on the channel, the results will be skewed. I'm not surprised if the results don't make sense.