iPhone 4S - BLE data transfer speed

I've been tinkering with the BLE (Bluetooth Low Energy) connectivity classes quite a bit lately and haven't been able to make them transfer data any faster than 1 KB per 5 seconds. I believe the documentation says the max speed is 60 bytes per 20 milliseconds. Counting the data transfer plus the ACK after each set of packets, I believe we should be able to go as fast as 1.5 KB per second. So my code is around 7-8 times slower than it should be.
I'm just wondering if anyone has been able to do data transfer over BLE as fast as the documentation says is possible. What sort of speed are you getting, if faster than mine?
Thanks a lot
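
For reference, here is a minimal Python sketch of the arithmetic in the question; the 20 ms interval, 60-byte payload, and the assumption that ACK turnaround halves the effective rate are the question's own figures, not anything taken from Apple documentation.

```python
# Back-of-the-envelope BLE throughput, using the figures from the question.
# Assumptions (from the question, not from Apple documentation):
#   - 60 bytes can be sent every 20 ms connection interval
#   - waiting for an ACK after each set of packets roughly halves the rate

payload_bytes = 60          # bytes per connection interval (question's figure)
interval_s = 0.020          # 20 ms connection interval
ack_penalty = 0.5           # question's assumption: ACKs cost ~half the bandwidth

raw_rate = payload_bytes / interval_s            # 3000 B/s, i.e. ~3 KB/s
effective_rate = raw_rate * ack_penalty          # ~1.5 KB/s
observed_rate = 1024 / 5                         # ~205 B/s (1 KB per 5 seconds)

print(f"theoretical: {raw_rate:.0f} B/s, with ACKs: {effective_rate:.0f} B/s")
print(f"observed: {observed_rate:.0f} B/s "
      f"({effective_rate / observed_rate:.1f}x slower than expected)")
```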

Look at Apple's guidelines and you will see that a connection update request is required to speed up your connection.
https://developer.apple.com/hardwaredrivers/BluetoothDesignGuidelines.pdf
I use min = 20 ms, max = 40 ms.
I hope this helps
Roman

If you are able to use a higher MTU size (negotiated by iOS), then you can increase the bandwidth even more, because the 4-byte L2CAP header and the 3-byte ATT header are only transmitted once per MTU rather than in every packet.
If you can transmit 6 packets per connection interval, then you can fit in 35 extra bytes per connection interval (the 7-byte header is still there for the first packet). The MTU could also be split over several connection intervals, increasing the throughput by 7 more bytes per connection interval (it just takes longer to reassemble the packet). The max MTU size allowed by ATT is 515 bytes (the max ATT value size of 512 bytes plus a 3-byte header for opcode and handle).
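
A minimal sketch of the 35-byte figure above, assuming (for illustration) the pre-4.2 link-layer payload of 27 bytes per packet and the 7 bytes of combined ATT + L2CAP headers mentioned in the answer:

```python
# Rough sketch of the header arithmetic from the answer above.
# Assumption for illustration: each link-layer packet carries 27 bytes of
# payload above the link layer, and the ATT (3 B) + L2CAP (4 B) headers
# together take 7 bytes.

HEADER = 3 + 4                     # ATT + L2CAP headers, bytes
PACKET_PAYLOAD = 27                # bytes available per packet (assumed)
packets_per_interval = 6

# Default MTU: every packet is its own ATT PDU, so every packet pays the header.
per_packet_mtu = packets_per_interval * (PACKET_PAYLOAD - HEADER)

# Larger negotiated MTU: one ATT PDU spans all packets in the interval,
# so only the first packet pays the header.
large_mtu = packets_per_interval * PACKET_PAYLOAD - HEADER

print(per_packet_mtu, large_mtu, large_mtu - per_packet_mtu)  # difference: 35 bytes
```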

Related

Ideal UDP packet size on a reliable (short) network for efficient data transfer

I have a question about the UDP protocol.
I want to stream data from a particle sensor (Xilinx FPGA dev board) to a Raspberry Pi (or Windows 10 laptop) via a UDP byte socket as binary data. I don't care if some packets are lost, because there are many more particles coming anyway... but if a packet gets lost, the information for the particles in that packet should be lost as a whole (no particle's data should be split across packets).
The connection is a short LAN cable over a 1 Gbit/s Ethernet port.
The "minimum" amount of data is 192 bits (24 bytes) per particle (16 x 12-bit values) and the maximum number of particles is 3300 per second.
So I have to transfer at most 24 x 3300 = 79,200 bytes/s plus headers etc.
The maximum packet size of UDP is 65,507 bytes.
As I understand it, the packet size has to be divisible by 24 bytes in my application.
Which leaves me with a packet size range of 24 to 65,496 bytes.
But if the concentration is lower, I don't want to wait minutes until a packet is filled and ready to be sent.
What would you suggest in regard to repetition rate and packet size?
E.g. a 1008-byte packet would have to be sent about every 13 ms at maximum particle concentration.
best regards
I just tested a UDP socket with a 1024-byte buffer, which works nicely with strings or bit vectors for single test packets.
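
No answer was posted, but here is a minimal sketch of the batching trade-off described above: 24-byte particle records are packed into packets of up to 1008 bytes (42 particles), and a partially filled packet is flushed after a timeout so low concentrations don't wait indefinitely. The destination address and the 50 ms flush timeout are illustrative assumptions.

```python
import socket
import time

# Illustrative assumptions: destination address and flush timeout are made up.
DEST = ("192.168.1.10", 5005)
RECORD_SIZE = 24                   # one particle = 24 bytes (from the question)
PACKET_SIZE = 42 * RECORD_SIZE     # 1008 bytes, ~13 ms at 3300 particles/s
FLUSH_AFTER_S = 0.05               # flush a partial packet after 50 ms (assumed)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
buffer = bytearray()
last_send = time.monotonic()

def submit(particle: bytes) -> None:
    """Queue one 24-byte particle record; send when the packet is full."""
    global last_send
    assert len(particle) == RECORD_SIZE
    buffer.extend(particle)
    if len(buffer) >= PACKET_SIZE:
        sock.sendto(bytes(buffer), DEST)
        buffer.clear()
        last_send = time.monotonic()

def flush_if_stale() -> None:
    """Send whatever has accumulated if the flush timeout has elapsed."""
    global last_send
    if buffer and time.monotonic() - last_send >= FLUSH_AFTER_S:
        sock.sendto(bytes(buffer), DEST)
        buffer.clear()
        last_send = time.monotonic()
```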

ADXL345 Accelerometer data use on I2C (Beaglebone Black)

Background Information
I am trying to make sure I will be able to run two ADXL345 accelerometers on the same I2C bus.
To my understanding, the bus can transmit up to 400 kbit/s in fast mode.
In order to send 1 byte of data, there are 20 extra bits of overhead.
There are 6 bytes per accelerometer reading (XLow, XHigh, YLow, YHigh, ZLow, ZHigh).
I need to do 1000 readings per second with both accelerometers.
Thus, my total data used per second is 336 kbit/s, which is within my limit of 400 kbit/s.
I am not sure if I am doing these calculations correctly.
Question:
How much data am I transmitting per second with two accelerometers reading 1000 times per second on i2c?
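
For reference, a quick sketch of the calculation as posed in the question (the 20-bits-of-overhead-per-byte figure is the asker's own assumption, which the answer below refines):

```python
# The question's own estimate: 20 overhead bits for every data byte.
overhead_bits_per_byte = 20          # asker's assumption
bits_per_byte = 8 + overhead_bits_per_byte
bytes_per_reading = 6                # XLow .. ZHigh
readings_per_second = 1000
accelerometers = 2

total_bits = bytes_per_reading * bits_per_byte * readings_per_second * accelerometers
print(total_bits)  # 336000 bits/s, under the 400 kbit/s fast-mode limit
```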
Your math seems to be a bit off; for this accelerometer (from the datasheet: https://www.sparkfun.com/datasheets/Sensors/Accelerometer/ADXL345.pdf), in order to read the 6 bytes of XYZ sample data, you need to perform a 6-byte burst read of the data registers. In terms of data transfer this means a write of the register address (0x32, the first data register) to the accelerometer, then a burst read of 6 bytes continuously. Each of these two transfers requires sending first the I2C device address and the R/W bit, plus an ACK/NAK per byte (including the address bytes), as well as START/repeated START/STOP conditions. So, overall, an individual transfer to get a single sample (i.e., a single XYZ acceleration vector) looks as follows:
Start (*) | Device Address: 0x1D (7) | Write: 0 (1) | ACK (1) | Register Address: 0x32 (8) | ACK (1) | Repeated Start (*) | Device Address: 0x1D (7) | Read: 1 (1) | ACK (1) | DATA0 (8) | ACK (1) | DATA1 (8) | ACK (1) | ... | DATA5 (8) | NAK (1) | Stop (*)
If we add all that up, we get 81 + 3 = 84 bits that need to be transmitted. Note first that the START, repeated START and STOP might not actually take a bit's worth of time each, but for simplicity we can assume they do. Note also that while the device address is only 7 bits, you always need to append the READ/WRITE bit, so an I2C address phase is always 8 bits plus ACK/NAK, i.e., 9 bits in total. Note also that the I2C maximum transfer rate really defines the maximum SCL speed the device can handle, so in fast mode SCL is at most 400 kHz (thus 400 kbit/s at most, but because of the protocol overhead you get less real data). Thus, 84 bits at 400 kHz means we can transfer a sample in 0.21 ms, or ~4700 samples/sec, assuming no gaps or breaks in transmission.
Since you need to read 2 samples every 1 ms (2 accelerometers, so 84 bits x 2 = 168 bits per millisecond, or 168 kbit/s at a 1 kHz sampling rate), this should at least be possible in fast-mode I2C. However, you will need to be careful that you are making full use of the I2C controller. Depending on the software layer you are working at, it might be difficult to issue I2C burst reads fast enough (i.e., 2 burst-read transactions within 1 ms). Using the FIFO on the accelerometer would significantly relax the latency requirement: instead of having 1 ms to issue two burst reads, you can wait up to 32 ms to issue 64 burst reads (since you have 2 accelerometers). But since you need to issue a new burst read for each sample, you'll have to be careful about the delay introduced by software between calls to whatever API you're using to perform the I2C transactions.
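
As a concrete illustration, here is a minimal sketch of such a burst read on a Beaglebone-style Linux host using the smbus2 library. The bus number is an assumption, the device address 0x1D matches the answer above (it is 0x53 if the ALT ADDRESS pin is tied low), and the register addresses follow the ADXL345 datasheet.

```python
from smbus2 import SMBus
import struct

# Assumptions for illustration: I2C bus 1, device address 0x1D as in the answer.
I2C_BUS = 1
ADXL345_ADDR = 0x1D
REG_POWER_CTL = 0x2D     # power control register
REG_DATAX0 = 0x32        # first of the six data registers (X0..Z1)

with SMBus(I2C_BUS) as bus:
    # Take the part out of standby so it starts measuring (Measure bit).
    bus.write_byte_data(ADXL345_ADDR, REG_POWER_CTL, 0x08)

    # One burst read: write the register address, then read 6 bytes back.
    # This is the write-then-read transaction counted as ~84 bits above.
    raw = bus.read_i2c_block_data(ADXL345_ADDR, REG_DATAX0, 6)

    # The ADXL345 returns little-endian signed 16-bit values for X, Y, Z.
    x, y, z = struct.unpack("<hhh", bytes(raw))
    print(x, y, z)
```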

Increased MTU but still can't send large UDP packets

First, a little info on what I'm trying to achieve here. I'm using a Texas Instruments EVM6678LE board, and what I am trying to do is increase the UDP transfer rate between the board and my PC.
I've increased the MTU on my PC through netsh > interface > ipv4 to 15,000. But when I ping the board from my PC I am only able to ping up to "ping 192.168.2.100 -l 10194"; if I ping with 10195 bytes or more, I get a request timeout. Is this a limitation of my PC or something?
Does anyone have any idea what could be the cause of this? Any advice or suggestions at all would be welcome. The only way I could think of to increase the transfer rate is increasing the per-packet size, which reduces overhead. At 10k bytes per packet I get a rate of around 9.1 MB/s, and I'm trying to attain 25 MB/s.
Thanks!
Increasing the MTU on your PC may not prevent fragmentation. I don't know exactly what controls this, but your network card or driver can fragment the packet even when the MTU is not reached. Use a sniffer like Wireshark to see how the packets are actually sent.
About the timeout: it is possible that your board rejects fragmented pings (because of ping-of-death protection). There is also a possibility that its packet buffer is 10 kB (10240 bytes) long and it can't receive larger packets. Also, make sure that the receiving endpoint has a matching MTU.
Anyway, if you are trying to increase the transfer rate, you are on the wrong track. The overhead for UDP is 8 bytes, for IP 20 bytes, and for Ethernet 18 bytes, which makes a total of 46 bytes (oh, coincidence: 46 + 10194 is exactly 10240). With 46 bytes of overhead, a 1024-byte MTU is 95.5% payload, a 4096-byte MTU is 98.9%, and a 16384-byte MTU is 99.7%. That means you gain about +3.4% transfer rate going from a 1024 to a 4096 MTU, and another +0.8% from 4096 to 16384. The gain is negligible, and you should just leave the MTU at the common default of 1500.
Anyway, going from 9.1 MB/s to 25 MB/s just by changing the MTU is IMPOSSIBLE (if it were, why wouldn't the PC default be higher?). My guess is that you are using Fast Ethernet (100BASE-T) and are already transferring at near full bandwidth. To get higher rates, you need Gigabit Ethernet (1000BASE-T), which means both hardware endpoints must support 1000BASE-T.
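
A quick sketch of the payload-efficiency arithmetic in that answer (the 46-byte combined UDP/IP/Ethernet overhead is taken from the answer above):

```python
# Payload efficiency as a function of MTU, per the answer's figures.
OVERHEAD = 8 + 20 + 18   # UDP + IP + Ethernet headers = 46 bytes

def efficiency(mtu: int) -> float:
    """Fraction of each MTU-sized frame that is actual payload."""
    return (mtu - OVERHEAD) / mtu

for mtu in (1024, 1500, 4096, 16384):
    print(f"MTU {mtu:5d}: {efficiency(mtu):.1%} payload")

# Going from 1024 to 4096 buys only a few percent; from 4096 to 16384,
# well under one percent, hence "leave the MTU at 1500".
print(f"gain 1024 -> 4096:  {efficiency(4096) - efficiency(1024):+.1%}")
print(f"gain 4096 -> 16384: {efficiency(16384) - efficiency(4096):+.1%}")
```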

Data transfer rates for iOS

We are developing an application that transfers data from iOS to a server.
On our latest test the upload speed at both the beginning and end of the transfer was 0.46 Mbps and the amount of data transferred was 14.5 MB. That should take about 4 minutes according to the math, but it took 6 minutes and 19 seconds. Is that a standard amount of time for that data to be transferred, or is this an issue with the code?
You can lose 3-15% to TCP overhead, and that's before you account for any retransmissions due to errors or packet loss. Your actual transmission time is enough to indicate that you are probably experiencing delay related to more than just TCP overhead. http://www.w3.org/Protocols/HTTP/Performance/Nagle/summary.html is a good reference with detailed metrics on TCP overhead.
You should run tcpdump on the server side to get a more accurate picture of what's occurring.
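
To make the comparison concrete, here is a small sketch of the arithmetic using the figures from the question (14.5 MB at 0.46 Mbps, 6 minutes 19 seconds actual):

```python
# Expected vs. actual transfer time, using the question's numbers.
data_mb = 14.5                 # megabytes transferred
link_mbps = 0.46               # measured upload speed, megabits per second
actual_s = 6 * 60 + 19         # 6 minutes 19 seconds = 379 s

expected_s = data_mb * 8 / link_mbps          # ~252 s, i.e. about 4.2 minutes
effective_mbps = data_mb * 8 / actual_s       # ~0.31 Mbps actually achieved
loss_pct = (1 - effective_mbps / link_mbps) * 100

print(f"expected: {expected_s:.0f} s, actual: {actual_s} s")
print(f"effective rate: {effective_mbps:.2f} Mbps "
      f"(~{loss_pct:.0f}% below the measured link speed, "
      f"more than typical 3-15% TCP overhead)")
```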

Converting from bandwidth to traffic gives different results depending on the operator's position?

This must be a stupid question, but nevertheless I find it curious:
Say I have a steady download of 128 Kbps.
How much disk space is going to be consumed after an hour, in megabytes?
128 x 60 x 60 / 8 / 1024 = 56.25 MB
But
128 x 60 x 60 / 1000 / 8 = 57.6 MB
So what is the correct way to calculate this?
Thanks!
In one calculation you're dividing by 1000, but in the other you're dividing by 1024, so there shouldn't be any surprise that you get different numbers.
Officially, the International Electrotechnical Commission standards body has tried to push "kibibyte" as an alternative to "kilobyte" when you're talking about the 1024-based version. But if you use it, people will laugh at you.
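
A small sketch of both conversions side by side (128 Kbps and one hour are the question's figures; which divisor is "correct" depends only on whether you define a megabyte as 1024 x 1024 or 1000 x 1000 bytes):

```python
# The question's two calculations, differing only in the 1000 vs 1024 divisor.
kbps = 128                    # steady download rate, kilobits per second
seconds = 60 * 60             # one hour

kilobits = kbps * seconds     # 460,800 kilobits downloaded in an hour
kilobytes = kilobits / 8      # 57,600 "KB"

print(kilobytes / 1024)       # 56.25 -> megabytes if 1 MB = 1024 KB (binary)
print(kilobytes / 1000)       # 57.6  -> megabytes if 1 MB = 1000 KB (decimal)
```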
Please remember that there is overhead in any transmission. There can be dropped packets, etc. There is also generally some upstream traffic as your PC acknowledges receipt of packets. Finally, since packets can be received out of order, the packets themselves contain "extra" data to allow the receiver to reconstruct the data in the proper order.
Ok, I found out an official explanation from Symantec on the matter:
http://seer.entsupport.symantec.com/docs/274171.htm
It seems the idea is to convert from bits to bytes as early as possible in the calculation, and then the usual division by 1024 applies.
I just hope it's a standard procedure and not a Symantec-imposed one :).