AudioQueue buffer size for streaming AAC audio - iPhone

I am playing MP3 audio from a network stream, and sometimes gaps are present when using a Wi-Fi connection.
I have decreased the buffer size, but I am wondering what the best method is for calculating buffer size.
My MP3 stream is 64 kbit/s.
I am using 3 buffers of 64 * 1024 bytes each.
The number of packet descriptions is 512.
Thanks a lot
Thierry

I found the answer!
The best values for me are:
Number of buffers: 3
Buffer size: 32 * 768 (24576 bytes)
Max packet descriptions: 4096

I wonder why you chose only 24576 bytes.
Isn't that too small for a buffer?
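For reference, a buffer's playback duration follows directly from the stream bitrate, so these values can be sanity-checked with a little arithmetic. A minimal sketch (the 64 kbit/s figure comes from the question above):

    #include <stdio.h>

    int main(void) {
        const double bitrate     = 64000.0;       // 64 kbit/s MP3 stream
        const double bytesPerSec = bitrate / 8.0; // 8000 bytes of audio per second

        // The question's original buffer size vs. the answer's smaller one.
        const int sizes[] = { 64 * 1024, 32 * 768 };
        for (int i = 0; i < 2; i++)
            printf("%6d-byte buffer holds ~%.1f s of audio\n",
                   sizes[i], sizes[i] / bytesPerSec);
        return 0;
    }

At 64 kbit/s, 24576 bytes is roughly 3 seconds of audio per buffer, so it is not as small as it looks; three such buffers hold about 9 seconds of stream.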

Related

OPUS packet size

I have an application that reads Opus packets from a file. The file contains Opus packets in Ogg format. My application sends each Opus packet every 20 milliseconds (this is configurable).
Every 20 ms it sends packets ranging from 200 to 400 bytes in size, with an average of roughly 300 bytes.
Is sending 300 bytes every 20 ms reasonable, or is it too much data? Can somebody help me understand how to calculate how much data (in bytes) I can send to the remote party per 20 ms?
300 bytes/packet × 8 bits/byte / 20 ms/packet = 120 kbit/s
That is enough for good quality stereo music. Depending on the quality that you need, or if you are only sending mono or voice, you could potentially reduce the bitrate of the encoder. However if you are reading from an Ogg Opus file then the packets are already encoded, so it is too late to reduce the bitrate of the encoder unless you decode the packets and re-encode them at a lower bitrate.
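The same arithmetic, as a minimal sketch (packet size and interval taken from the question):

    #include <stdio.h>

    int main(void) {
        const int    packetBytes = 300;  // average Opus packet size from the question
        const double intervalMs  = 20.0; // one packet every 20 ms

        double kbps = packetBytes * 8.0 / intervalMs; // bits per millisecond == kbit/s
        printf("%.0f kbit/s\n", kbps);                // prints 120
        return 0;
    }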

Processing audio signals in Matlab vs Sensor

I'm trying to do target recognition using the target's acoustic signal. I tested my code in MATLAB; now I'm trying to port it to C so I can test it in TinyOS using a sensor simulator.
In MATLAB I used WAV recordings (16 bits per sample, 44.1 kHz sample rate). For example, a recording of a certain object, let's say a cat sound of about 0:01 duration, gives me a total of 36864 samples of type int16, with a size of 73728 bytes.
On the sensor side I have a Mica2: a 10-bit ADC (but I'll use 8 bits), an 8 MHz microprocessor, and 4 KB of RAM. This means that when I detect an object, I'll fill the buffer with 4000 samples of type uint8_t (using an 8 kHz sample rate and an 8-bit ADC).
So my question is:
In MATLAB I used a large number of samples to represent the target audio signal (36864 samples), but on the sensor I'm limited to only 4000 samples. Would that be enough to record the whole target sound?
Thank you very much, highly appreciate your advice
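The arithmetic behind the two sample counts, as a quick sketch (all numbers restated from the question above):

    #include <stdio.h>

    int main(void) {
        // MATLAB setup: 36864 int16 samples at 44.1 kHz
        printf("MATLAB: %.2f s of audio, %d bytes\n", 36864 / 44100.0, 36864 * 2);
        // Mica2 setup: 4000 uint8_t samples at 8 kHz
        printf("Mica2:  %.2f s of audio, %d bytes\n", 4000 / 8000.0, 4000);
        return 0;
    }

So the sensor buffer covers about 0.5 s of audio versus roughly 0.84 s in MATLAB; whether 0.5 s is enough depends on how long the target sound actually lasts.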

iOS: Bad Mic input latency measurement result

I ran a test to measure the basic latency of my iPhone app, and the result was disappointing: 50 ms for a play-through test app. The app just picks up mic input and plays it out through the same render callback, with no other audio units or processing involved. The result therefore seemed too bad for such a basic scenario. I need some pointers to see whether the result makes sense or whether there were design flaws in my test.
The basic idea of the test was to have three roles:
1. My finger snap as the reference sound source.
2. A simple iOS play-thru app (using the built-in mic) as the first listener to #1.
3. A Mac (with a USB mic and Audacity) as the second listener to #1 and the only listener to the iOS output (through a speaker connected via the iOS headphone jack).
Then, with Audacity in recording mode, the Mac would pick up both the sound of my finger snap and its "clone" from the iOS speaker at close range. Finally, I simply observed the waveform in Audacity's recorded track and measured the time interval between the peaks of the two recorded snaps.
This was by no means a super-accurate measurement, but at least the innate latency of the Mac recording pipeline should have been cancelled out this way. The error should therefore come mainly from the peak-distance measurement, which I assume is much smaller than the audio pipeline latency and can be ignored.
I was expecting 20 ms or lower latency, but the result was clearly 50-60 ms.
My ASBD uses kAudioFormatFlagsCanonical and kAudioFormatLinearPCM as format.
50 ms is about 4 ms more than the duration of 2 audio buffers (one output, one input) of 1024 samples at a sample rate of 44.1 kHz.
17 ms is around 5 ms more than the duration of 2 buffers of length 256.
So it looks like the iOS audio latency is around 5 ms plus the duration of the two buffers (the audio output buffer duration plus the time it takes to fill the input buffer) ... on your particular iOS device.
A few iOS devices may support even shorter audio buffer sizes of 128 samples.
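That model is easy to check numerically; a minimal sketch (the ~5 ms overhead is the empirical constant inferred above):

    #include <stdio.h>

    int main(void) {
        const double sampleRate = 44100.0;
        const double overheadMs = 5.0; // empirical fixed overhead from the answer
        const int    bufferSizes[] = { 1024, 256, 128 };

        for (int i = 0; i < 3; i++) {
            double bufferMs = bufferSizes[i] / sampleRate * 1000.0;
            // input buffer fill time + output buffer duration + fixed overhead
            printf("%4d samples -> ~%.1f ms round trip\n",
                   bufferSizes[i], 2.0 * bufferMs + overheadMs);
        }
        return 0;
    }

This predicts roughly 51 ms for 1024-sample buffers and 17 ms for 256-sample buffers, matching the measurements quoted above.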
You can use Core Audio and set up the audio session to have very low latency.
You can request a smaller buffer using AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration, ...).
Using smaller buffers causes the audio callback to happen more often while grabbing smaller chunks of audio. Keep in mind that this is merely a suggestion to the audio system; iOS will choose a suitable callback interval based on your sample rate and integer powers of 2.
Once you set the buffer duration, you can get the actual buffer duration the system will use via AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareIOBufferDuration, ...).
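Spelled out, those two calls might look like the following sketch, which uses the C-based Audio Session API and assumes the session has already been initialized with AudioSessionInitialize and made active:

    #include <AudioToolbox/AudioToolbox.h>

    // Request a ~5 ms I/O buffer; iOS treats this as a hint and rounds it.
    Float32 preferredDuration = 0.005f; // seconds
    AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration,
                            sizeof(preferredDuration), &preferredDuration);

    // Read back the duration the system actually granted.
    Float32 actualDuration = 0.0f;
    UInt32  size = sizeof(actualDuration);
    AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareIOBufferDuration,
                            &size, &actualDuration);

Note that this C API was later deprecated in favor of AVAudioSession's setPreferredIOBufferDuration:error: and IOBufferDuration.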
I'll summarize Paul R's comments as the answer, which has solved my problem:
50 ms corresponds to a total buffer size of around 2048 samples at a 44.1 kHz sample rate, which doesn't seem unreasonable given that you have both a record and a playback path.
I don't know that the buffer size is 2048, and there may be more than one buffer in your record-playback loopback test, but it seems that the effective total buffer size in your test is probably on the order of 2048, which doesn't seem unreasonable. Of course, if you're only interested in record latency, as the title of your question suggests, then you'll need to find a way to tease that out separately from playback latency.

iPhone 4S - BLE data transfer speed

I've been tinkering with the BLE (Bluetooth Low Energy) connectivity classes quite a bit lately and haven't been able to make them transfer data any faster than 1 KB per 5 seconds. I believe the documentation says the max speed is 60 bytes per 20 milliseconds. Counting the ACK transfer after each set of packets, I believe we should be able to go as fast as 1.5 KB per second. So my code is around 7-8 times slower than it should be.
I'm just wondering if anyone has been able to do data transfer over BLE as fast as the documentation says it should be able to go. What sort of speeds are you getting, if faster than mine?
Thanks a lot
See Apple's guidelines, and you will see that a connection parameter update request is required to speed up your connection:
https://developer.apple.com/hardwaredrivers/BluetoothDesignGuidelines.pdf
I use min = 20 ms, max = 40 ms.
I hope this helps.
Roman
If you are able to use a larger MTU (negotiated by iOS), then you can increase the bandwidth even more, because the 4-byte L2CAP header and 3-byte ATT header would not be transmitted more than once per ATT packet.
If you are able to transmit 6 packets per connection interval, then you can fit in 35 extra bytes per connection interval (the 7-byte header would still be there for the first packet). The MTU could also be split over several connection intervals, increasing the throughput by 7 more bytes per connection interval (it just takes longer to reassemble the packet). The maximum MTU size allowed by ATT is 515 bytes (a maximum ATT payload of 512 bytes plus a 3-byte header for opcode and handle).
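As a rough sanity check, the attainable throughput follows from the connection parameters. A minimal sketch; the packets-per-interval and payload figures below are assumptions that vary by device and iOS version:

    #include <stdio.h>

    int main(void) {
        const double intervalMs     = 20.0; // minimum connection interval from above
        const int    packetsPerConn = 6;    // assumed packets per connection interval
        const int    payloadBytes   = 20;   // data left after 4-byte L2CAP + 3-byte ATT headers

        double kbPerSec = packetsPerConn * payloadBytes * (1000.0 / intervalMs) / 1024.0;
        printf("~%.1f KB/s\n", kbPerSec); // ~5.9 KB/s under these assumptions
        return 0;
    }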

Is there an 'optimal' buffer size when using send()?

Let's say you're transferring a file of arbitrary length in chunks over TCP/IP:
while ((n = read(fd, buffer, LENGTH)) > 0)
    send(mysocket, buffer, n, flags); /* send only the bytes actually read */
My question is: what would be the optimal value of LENGTH? Or does it not matter at all? I've seen everything from 256 bytes to 8192 bytes being used.
It depends on what you mean by optimal. For optimal use of the bandwidth, you want to maximize the packet size, so send at least the network packet size (which on Ethernet is usually about 1500 bytes). If you are reading from disk, 4096 or 8192 bytes would be a good value.
If your buffer size translates into packet size, then shorter buffers are better -- less to retransmit in the event of a packet error.
ATM took this to the extreme with a 53-byte cell.
But depending upon your library, it might be doing some buffering of its own and setting its packet size independently. YMMV.
If you are sending large amounts of data over a high latency connection, you can get better throughput with a larger send buffer. Here is a good explanation:
http://www.onlamp.com/pub/a/onlamp/2005/11/17/tcp_tuning.html
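For that high-latency case, the knob under discussion is the socket send buffer. A minimal sketch (sock is assumed to be an existing connected TCP socket, and the 4 MB value is only an example; size it to your bandwidth-delay product):

    #include <sys/socket.h>

    /* Enlarge the kernel send buffer so more data can be in flight
       on a high-latency link (see the TCP tuning article above). */
    int sndbuf = 4 * 1024 * 1024; /* bytes; tune to bandwidth x delay */
    setsockopt(sock, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof(sndbuf));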