Data transfer rates for iOS (iOS 5)

We are developing an application that transfers data from iOS to a server.
On our latest test the upload speed at both the beginning and end of the transfer was 0.46 Mbps and the amount of data transferred was 14.5 MB. According to the math that should take about 4 minutes, but it took 6 minutes and 19 seconds. Is that a normal amount of time for that much data to be transferred, or does this point to an issue in our code?

You can lose 3-15% to TCP overhead, and that's before you account for any retransmissions due to errors or packet loss. Your actual transmission time is long enough to suggest that the delay comes from more than TCP overhead alone. http://www.w3.org/Protocols/HTTP/Performance/Nagle/summary.html is one good reference with detailed metrics on TCP overhead.
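As a rough sanity check, here is a minimal Python sketch using only the figures from the question; the shortfall works out to roughly a third of the measured link speed, which is more than TCP overhead alone would explain:

```
# Sanity check using the numbers from the question.
payload_mb = 14.5                 # MB uploaded
link_mbps = 0.46                  # measured upload speed, Mbit/s
actual_s = 6 * 60 + 19            # observed transfer time, seconds

ideal_s = payload_mb * 8 / link_mbps          # ~252 s, i.e. about 4.2 minutes
effective_mbps = payload_mb * 8 / actual_s    # ~0.31 Mbit/s actually achieved

print(f"ideal {ideal_s:.0f} s vs actual {actual_s} s")
print(f"effective rate {effective_mbps:.2f} Mbit/s, "
      f"{(1 - effective_mbps / link_mbps) * 100:.0f}% below the link speed")
```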
You should run a tcpdump on the server side to get a more accurate picture of what's occurring.

Related

VoIP delta spikes below 20ms, causing the jitter to change

I am trying to do some measurements on VoIP. I am using OpenSIPS, RTPProxy, and SIPp for testing.
Everything works fine as expected, but I only have a question regarding the delta time.
Below is a screenshot I got from Wireshark RTP streams' analysis.
Why do I have these spikes below the 20ms?
I am playing an 8kulaw.wav file in a SIPp XML scenario; the file has the following characteristics:
8kulaw.wav: RIFF (little-endian) data, WAVE audio, ITU G.711 mu-law,
mono 8000 Hz
Much appreciated!
The "RTP Stream Analysis" from wireshark is giving you hints on the quality of the stream.
Your Max Delta value is 20.15 and occurs at packet 2008.
This will indicate the time between 2 packets which in your use-case are supposed to be spaced by exactly 20ms.
So the maximum difference is very short and should definitly not affect the quality of the stream. Usually, this is used on receiver (for incoming stream): on sender, there is usually no internal latency. This probably explains why you have so short "Max Delta".
The spikes you see are pretty big, but this is mostly because the scale is very short. Not because the stream is bad.
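To make the "delta" column concrete, here is a minimal Python sketch with made-up timestamps (not taken from your capture) showing what the analysis window computes:

```
# Not Wireshark's code -- just an illustration of what "delta" means: the gap
# between consecutive packet arrival times. For G.711 paced at 20 ms it should
# hover around 20, and small dips below 20 ms are expected jitter, not a problem.
arrival_times_s = [0.000, 0.020, 0.039, 0.061, 0.080]   # made-up capture timestamps

deltas_ms = [round((b - a) * 1000, 2) for a, b in zip(arrival_times_s, arrival_times_s[1:])]
print(deltas_ms)        # [20.0, 19.0, 22.0, 19.0]
print(max(deltas_ms))   # the "Max Delta" figure Wireshark reports
```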

Sending images through sockets

I have an idea for a client-server program. The client handles only input and sends it to the server. The server handles the input and the logic, then sends the resulting image of the program back to the client, which draws it on the user's screen. It uses UDP; slight artefacts in the image are tolerated.
How fast can those images travel through the Internet? Can they travel at least 5 times a second? I don't have 2 computers at hand to test it.
EDIT: One more question - how reliable is the UDP protocol? How many pixels would arrive corrupted - say, 10% on average?
EDIT2: For example, I have a 320x200, 32-bit image (red, green, blue + alpha). That's ~2 million bits. How long does it take for the image to arrive from the server to the client if my ping is X, my upload speed is Y Mbps and my download speed is Z Mbps?
The answers to your questions depend heavily on the internet connections of the machines involved. In particular, if the program is heavily graphical, the bandwidth used by the images may be fairly substantial, especially if your client is on a mobile device connecting through the cellular telephony system.
If you have plenty of bandwidth, 5 round trips per second should be achievable most of the time if both client and server are in the U.S., or both are in Europe. There are, for example, interactive computer games that depend on having 4-5 round trips per second for smooth play, and only occasionally have glitches as a result. If client and server are on different continents, and especially if they are on opposite sides of the world, this may be more difficult, as speed of light delays start using a significant proportion of the available transmission time. In the worst case, say between China and Argentina, theoretical speed of light delays alone limit the network to less than 8 round trips per second, so with real network and bandwidth limitations, 5 round trips per second could be difficult to achieve.
The reliability of UDP depends substantially on how congested the connection is. On an uncongested network connection, you'd probably lose 1% of the packets or less. On a very congested network connection, it might be a lot worse - I've seen situations where 80% of the packets were lost.
On an uncongested network, the time for an image to travel from the server to the client would be
(ping time)/2 + (image size)/((1 - packet overhead) * (minimum bandwidth))
Packet overhead is only a few percent, so you can almost ignore that factor. Minimum bandwidth is the smaller of the server's upload bandwidth and the client's download bandwidth. Note that the image size can be reduced substantially through compression. Don't forget, though, that you also need to allow time for the input to travel from the client to the server, which adds another (ping time)/2 at a minimum.
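To put numbers on this, here is a hypothetical worked example in Python using the 320x200, 32-bit frame from the question; the ping and bandwidth values are assumed placeholders for X, Y and Z, not measurements:

```
# Worked example with the 320x200, 32-bit image from the question. The ping and
# bandwidth figures (X, Y, Z) are placeholder assumptions.
image_bits = 320 * 200 * 32              # ~2.05 Mbit uncompressed
ping_s = 0.050                           # X: round-trip time
upload_mbps, download_mbps = 5.0, 20.0   # Y and Z
overhead = 0.03                          # rough UDP/IP header overhead fraction

min_bw_bps = min(upload_mbps, download_mbps) * 1_000_000
one_way_s = ping_s / 2 + image_bits / ((1 - overhead) * min_bw_bps)
print(f"one uncompressed frame, one way: {one_way_s * 1000:.0f} ms")   # ~447 ms
# Well over the 200 ms budget for 5 frames per second -- hence the advice to compress.
```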

Should I adjust SO_RCVBUF to a very small size when incoming data is very small?

If a TCP server has thousands of concurrent connections,
and each TCP client sends very small amounts of data occasionally,
should I adjust SO_RCVBUF to a very small size to save precious memory?
P.S. The size of the data is less than 30 bytes,
and the interval between transfers is larger than 5 seconds.
Definitely not. You won't be able to decrease it below the platform minimum anyway, which will be at least 8k.
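If you want to see what your own platform does with a small request, a minimal Python sketch is enough (the 1 KB figure is arbitrary):

```
import socket

# Quick check of what your platform actually does with a tiny SO_RCVBUF request:
# most kernels silently round it up to a minimum (and Linux additionally doubles
# the requested value to account for bookkeeping overhead).
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1024)    # ask for 1 KB
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))   # the size you actually got
s.close()
```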

iPhone 4S - BLE data transfer speed

I've been tinkering with the BLE (Bluetooth Low Energy) connectivity classes quite a bit lately and haven't been able to make them transfer data any faster than 1 KB per 5 seconds. I believe the documentation says the max speed is 60 bytes per 20 milliseconds. Counting the data transfer and the ACK after each set of packets, I believe we should be able to go as fast as 1.5 KB per second, so my code is around 7-8 times slower than it should be.
I'm just wondering if anyone has been able to do data transfer in BLE as fast as the documentation says it should be able to do. What sort of speed are you getting if faster than mine?
Thanks a lot
See Apple's guidelines and you will see that a connection parameter update request is required to speed up your connection.
https://developer.apple.com/hardwaredrivers/BluetoothDesignGuidelines.pdf
I use min = 20 ms, max = 40 ms.
I hope I could help
Roman
If you can use a higher MTU size (negotiated by iOS), you can increase the bandwidth even more, because the 4-byte L2CAP header and the 3-byte ATT header then only need to be transmitted in one packet instead of in every packet.
If you are able to transmit 6 packets per connection interval, you can fit in 35 extra bytes per connection interval (the 7-byte header is still there for the first packet). The MTU-sized ATT packet can also be split over several connection intervals, increasing the throughput by 7 more bytes per connection interval (it just takes longer to reassemble the packet). The maximum MTU size allowed by ATT is 515 bytes (the maximum ATT payload of 512 bytes plus the 3-byte header for opcode and handle).
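For a rough feel of the arithmetic, here is a back-of-the-envelope Python sketch; the 20 ms interval and 6 packets per interval are assumptions taken from this thread, not figures guaranteed by any particular stack:

```
# Back-of-the-envelope version of the arithmetic above. The packet count and
# interval are assumptions, not guarantees from any stack.
conn_interval_s = 0.020    # connection interval after the parameter update
packets = 6                # link-layer packets assumed per connection interval
ll_payload = 27            # bytes of L2CAP + ATT data per link-layer packet
header = 7                 # 4-byte L2CAP header + 3-byte ATT header

per_interval_default = packets * (ll_payload - header)   # header paid in every packet
per_interval_big_mtu = packets * ll_payload - header     # header paid once per ATT packet

for label, n in (("23-byte MTU", per_interval_default), ("larger MTU", per_interval_big_mtu)):
    print(f"{label}: {n} B/interval = {n / conn_interval_s / 1024:.2f} KB/s")
```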

Bandwidth measurement by minimum data transfer

I intend to write an application where I will be needing to calculate the network bandwidth along with latency and packet loss rate. One of the constraints is to passively measure the bandwidth (using the application data itself).
What I have read online and understood from a few existing applications is that almost all of them use active probing techniques (that is, generating a flow of probe packets) and use the time difference between arrival of the first and last packets to calculate the bandwidth.
The main problems with such a technique are that it floods the network with probe packets, takes longer to run, and is not scalable (since the application has to run at both ends).
One of the suggestions was to calculate the RTT of a packet by echoing it back to the sender and calculate the bandwidth using the following equation:
Bandwidth <= (Receive Buffer size)/RTT.
I am not sure how accurate this could be, as the receiver may not always echo the packet back promptly enough to give a correct RTT. Using ICMP alone may not always work either, as many servers disable it.
My main application runs over a TCP connection so I am interested in using the TCP connection to measure the actual bandwidth offered over a particular period of time. I would really appreciate if anybody could suggest a simple technique (reliable formula) to measure the bandwidth for a TCP connection.
It is only possible to know the available bandwidth by probing the network. This is because an 80% utilized link will still return echo packets without delay, i.e. it will appear to be 0% utilized.
If you instead just want to measure the bandwidth your application is using, that is much easier: for example, keep a record of the amount of data you have transferred in the last second, divided into 10 ms intervals.
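As an illustration of that bookkeeping, here is a minimal Python sketch (the class name and the 1 s / 10 ms window sizes are just illustrative choices):

```
import time
from collections import deque

class ThroughputMeter:
    """Rolling count of bytes seen in the last second, bucketed into 10 ms slots.
    A minimal sketch of the bookkeeping described above, not a library API."""

    def __init__(self, window_s=1.0, bucket_s=0.01):
        self.bucket_s = bucket_s
        self.n_buckets = int(window_s / bucket_s)
        self.buckets = deque([0] * self.n_buckets, maxlen=self.n_buckets)
        self.current = int(time.monotonic() / bucket_s)

    def _advance(self):
        now = int(time.monotonic() / self.bucket_s)
        for _ in range(min(now - self.current, self.n_buckets)):
            self.buckets.append(0)          # push empty slots for idle 10 ms intervals
        self.current = now

    def record(self, nbytes):
        self._advance()
        self.buckets[-1] += nbytes          # bytes sent/received in the current slot

    def bytes_per_second(self):
        self._advance()
        return sum(self.buckets)            # the window is 1 s, so the sum is B/s

# Usage: call meter.record(len(chunk)) around every send()/recv(), and poll
# meter.bytes_per_second() wherever the figure is needed.
```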
Active probing and its variants are bandwidth estimation algorithms. You don't want to use those algorithms to measure bandwidth; note the difference between 'measure' and 'estimate'.
If you want to use TCP to measure bandwidth, be aware that TCP throughput is influenced by latency.
The easiest way to measure bandwidth over TCP is to send TCP traffic and measure the throughput achieved, but that floods the network. None of the non-flooding algorithms is reliable on a high-speed network, and they assume the channel is clear of other traffic; if there is other traffic on the channel, the result will be skewed. I'm not surprised if the result doesn't make sense.