How to implement audio streaming with <50 millisecond latency on iPhone

I need to implement audio streaming on iPhone with latency lower than 50 milliseconds.
Any ideas on how I can make it work?
I bumped into:
http://cocoawithlove.com/2009/06/revisiting-old-post-streaming-and.html
But it's very important to me to know that the latency will be very low.
thanks

One way to minimize latency on the receiving end is to use the RemoteIO Audio Unit with very short buffers, and feed it raw PCM audio, or the output of a decoder for an audio format that is extremely cheap to decode and uses small packets.
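As a concrete starting point, here is a minimal sketch of requesting a short I/O buffer through AVAudioSession before starting the RemoteIO unit. The 5 ms figure is illustrative; the system treats the request as a hint and may grant a longer duration:

    import AVFoundation

    // Request a short hardware I/O buffer before starting the RemoteIO unit.
    // The 5 ms value is only a hint; the system may grant a longer duration
    // depending on the device and application state.
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.playAndRecord, mode: .measurement)
        try session.setPreferredIOBufferDuration(0.005) // 5 ms hint
        try session.setActive(true)
        print("Granted I/O buffer: \(session.ioBufferDuration * 1000) ms")
    } catch {
        print("Audio session setup failed: \(error)")
    }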
You pretty much need complete control over the entire network source and path, including hand-picking all the equipment, as any router or access point can completely destroy latency by buffering packets, prioritizing other traffic, etc.
You probably want to use UDP for the IP protocol, with a packet size tuned to your network equipment and to the audio buffer size.
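To illustrate the sending side, here is a sketch using Apple's Network framework; the host, port, and the choice of sending each audio buffer as one datagram are placeholder assumptions you would tune to your own setup:

    import Foundation
    import Network

    // Hypothetical sender: pushes each small PCM buffer as one UDP datagram.
    // Host and port are placeholders; the packet size should be tuned to
    // the audio buffer size and the network path's MTU.
    let connection = NWConnection(host: "192.168.1.10", port: 9000, using: .udp)
    connection.start(queue: .global(qos: .userInteractive))

    func sendPacket(_ pcm: Data) {
        connection.send(content: pcm, completion: .contentProcessed { error in
            if let error = error {
                print("send failed: \(error)")
            }
        })
    }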

Related

Streaming Live audio to the browser - Alternatives to the Web Audio API?

I am attempting to stream live audio from an iOS device to a web browser. The iOS device sends small, mono WAV files (as they are recorded) through a WebSocket. Once the client receives the WAV files, I have the Web Audio API decode and schedule them accordingly.
This gets me about 99% of the way there, except I can hear clicks between each audio chunk. After some reading around, I have realized the likely source of my problem: the audio is being recorded at a sample rate of only 4 kHz and this cannot be changed. It appears that the Web Audio API's decodeAudioData() function does not handle sample rates other than 44.1 kHz with exact precision, resulting in gaps between chunks.
I have tried literally everything I could find about this problem (scriptProcessorNodes, adjusting the timing, creating new buffers, even manually upsampling) and none of them have worked. At this point I am about to abandon the Web Audio API.
Is the Web Audio API appropriate for this?
Is there a better alternative for what I am trying to accomplish?
Any help/suggestions are appreciated, thanks!
At last! AudioFeeder.js works wonders. I just specify the sampling rate of 4 kHz, feed it raw 32-bit PCM data, and it outputs a consistent stream of seamless audio! It even has built-in buffer handling events, so there is no need to set up loops or timeouts to schedule chunk playback. I did have to tweak it a bit, though, to connect it to the rest of my web audio nodes rather than just context.destination.
Note: AudioFeeder does automatically upsample to the audio context sampling rate. Going from 4 kHz to 44.1 kHz did introduce some pretty gnarly-sounding artifacts in the high end, but a 48 dB/octave lowpass filter (four 12 dB/octave filters in series) at 2 kHz got rid of them. I chose 2 kHz because, thanks to Harry Nyquist, I know that a sampling rate of 4 kHz couldn't possibly have produced frequencies above 2 kHz in the original file.
All hail Brion Vibber

Difference between playing a video stream and a video file on a browser

I was reading about thin clients and streaming video. How is streaming different from downloading a file locally and then playing it in a browser? I mean, internally, how does streaming work? Does streaming take less CPU and memory than playing from a file?
The concept behind streaming is very simple: essentially, the server sends the video either byte by byte or in 'chunks', and the client receives the bytes or chunks into a first-in-first-out queue, then plays them in the order they are received (and at the speed required to play the video properly).
More sophisticated streaming techniques allow the client to switch between different bit rate encodings while downloading the chunks of a file. This means that if network conditions change during playback, the client can choose a lower or higher bit rate for the next chunk it downloads. This is referred to as Adaptive Bit Rate streaming.
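To make the idea concrete, here is an illustrative Swift sketch of the chunk-selection logic; the rendition bitrates and the 20% headroom factor are made-up numbers:

    // Pick the highest rendition whose bitrate fits within the measured
    // throughput, keeping some headroom for variance.
    let availableBitrates = [400_000, 1_200_000, 3_000_000] // bits per second

    func nextChunkBitrate(measuredThroughput: Double) -> Int {
        let budget = measuredThroughput * 0.8 // keep 20% headroom
        return availableBitrates.last { Double($0) <= budget } ?? availableBitrates[0]
    }

    // On a ~2 Mbit/s connection the client picks the 1.2 Mbit/s rendition:
    print(nextChunkBitrate(measuredThroughput: 2_000_000)) // prints 1200000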
Advantages of streaming include fast video start-up and seeking, better utilisation of bandwidth, and no need to download the whole video if the user decides to seek or stop watching.
The following article gives a very good overview: http://www.jwplayer.com/blog/what-is-video-streaming/

Audio hardware latency on the iPhone

I'm currently developing an app which plays an audio file (MP3, though I can change to WAV to reduce decoding time) and records audio at the same time.
For synchronization purposes, I want to estimate the exact time when audio started playing.
Using AudioQueue to control each buffer, I can estimate the time when the first buffer was drained. My questions are:
What is the hardware delay between AudioQueue buffers being drained and the audio actually being played?
Is there a lower level API (specifically, AudioUnit), that has better performance (in hardware latency measures)?
Is it possible to place an upper limit on hardware latency using AudioQueue, with or without decoding the buffer? 5 ms seems like something I can work with; more than that will require a different approach.
Thanks!
The Audio Queue API runs on top of Audio Units, so the RemoteIO Audio Unit using raw uncompressed audio will allow a lower and more deterministic latency. The minimum RemoteIO buffer duration that can be set on some iOS devices (using the Audio Session API) is about 6 to 24 milliseconds, depending on application state. That may set a lower limit on both play and record latency, depending on what events you are using for your latency measurement points.
Decoding compressed audio can add roughly one to two orders of magnitude more latency, measured from the start of decoding.
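As a starting point for the measurement question, here is a small sketch that reads back the latency figures iOS reports through AVAudioSession; these are OS-reported estimates, not measured guarantees:

    import AVFoundation

    // Read the system's own latency estimates; useful for a first guess at
    // when a drained buffer actually reaches the speaker.
    let session = AVAudioSession.sharedInstance()
    let outputMs = session.outputLatency * 1000
    let bufferMs = session.ioBufferDuration * 1000
    print("output latency ≈ \(outputMs) ms, I/O buffer ≈ \(bufferMs) ms")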

Packet order for UDP video streaming

I'm developing a kind of proxy for video streaming and I'm now dealing with an issue related to packets received out of order (without losses). This may be the reason why there are frequent noises in the video playback.
Do you know by chance if VLC is able to reorder packets? If so, it would mean that the noise in playback has some other cause; if not, I should develop an additional layer that ensures packets are received in the correct order.
Thanks.
Assuming that you are talking about RTP over UDP, AFAIK VLC uses the live555 libraries for client-side RTSP/RTP functionality, and live555 has a built-in jitter buffer that should take care of re-ordering for you. I can't recall the size of the jitter buffer offhand, but 100 ms seems to ring a bell.
In case you didn't know: when developing media streaming applications (especially over UDP), it is important to increase the size of the receiver buffer. If it fills up, packets get dropped, which could explain your artifacts.
Also, UDP being unreliable means that you will experience artifacts if packets are lost/corrupted and you have no suitable mechanism to deal with it.
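If you do end up writing that reordering layer yourself, here is a minimal sketch keyed by an RTP-style sequence number; a real jitter buffer would also bound its depth in time (the ~100 ms mentioned above) and handle 16-bit sequence wraparound, both omitted here:

    import Foundation

    // Minimal reorder buffer: hold packets until the next expected sequence
    // number arrives, then release them in order.
    struct ReorderBuffer {
        private var pending: [UInt16: Data] = [:]
        private var nextSeq: UInt16 = 0

        mutating func insert(seq: UInt16, packet: Data) {
            pending[seq] = packet
        }

        // Pop packets in order for as long as the next expected one is here.
        mutating func drain() -> [Data] {
            var out: [Data] = []
            while let packet = pending.removeValue(forKey: nextSeq) {
                out.append(packet)
                nextSeq &+= 1 // wrapping increment
            }
            return out
        }
    }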

Streaming audio from a microphone on a Mac to an iPhone

I'm working on a personal project where the iPhone connects to a server-type application running on a Mac. The iPhone sends and receives textual/ASCII data via standard sockets. I now need to stream the microphone from the Mac to the iPhone. I've done some work with AudioServices before but wanted to check my thoughts here before getting too deep.
I'm thinking I can:
1. Create an Audio Queue in the standard Cocoa application on the Mac.
2. In my Audio Queue Callback function, rather than writing it to a file, write it to another socket I open for audio streaming.
3. On the iPhone, receive the raw sampled/encoded audio data from the TCP stream and dump it into an Audio Queue Player which outputs to headphone/speaker.
I know this is no small task and I've greatly simplified what I need to do but could it be as easy as that?
Thanks for any help you can provide,
Stateful
This looks broadly sensible, but you'll almost certainly need to do a few more things:
Buffering. On the "recording" end, you probably don't want to block the audio queue if the buffer is full. On the "playback" end, I don't think you can just pass buffers into the queue (IIRC you'll need to buffer the data until you get a callback).
Concurrency. I'm pretty sure AQ callbacks happen on their own thread, so you'll need some sort of locking/barriers around your buffer accesses; see the ring buffer sketch after this list.
Buffer pools, if memory allocation ends up being a big overhead.
Compression. AQ might be able to give you "IMA4" frames (IMA ADPCM 4:1, or so); I'm not sure if it does hardware MP3 decompression on the iPhone.
Packetization, if e.g. you need to interleave voice chat with text chat.
EDIT: Playback sync (or whatever you're supposed to call it). You need to be able to handle different effective audio clock rates, whether it's due to a change in latency or something else. Skype does it by changing playback speed (with pitch-correction).
EDIT: Packet loss. You might be able to get away with using TCP over a short link, but that depends a lot on the quality of your wireless network. UDP is a minor pain to get right (especially if you have to detect an MTU hole).
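As a sketch of the buffering and concurrency points above, assuming you take the lock-based route (on the real-time audio thread a lock-free ring is generally safer, but this matches the locking suggestion):

    import Foundation

    // Fixed-capacity byte ring buffer guarded by a lock, shared between the
    // socket reader and the audio queue callback.
    final class AudioRingBuffer {
        private var storage: [UInt8]
        private var readIndex = 0, writeIndex = 0, count = 0
        private let lock = NSLock()

        init(capacity: Int) {
            storage = [UInt8](repeating: 0, count: capacity)
        }

        // Returns false instead of blocking when full, so the writer can
        // drop or stall without wedging the audio callback.
        func write(_ bytes: [UInt8]) -> Bool {
            lock.lock(); defer { lock.unlock() }
            guard count + bytes.count <= storage.count else { return false }
            for b in bytes {
                storage[writeIndex] = b
                writeIndex = (writeIndex + 1) % storage.count
            }
            count += bytes.count
            return true
        }

        func read(upTo maxBytes: Int) -> [UInt8] {
            lock.lock(); defer { lock.unlock() }
            var out: [UInt8] = []
            while out.count < maxBytes && count > 0 {
                out.append(storage[readIndex])
                readIndex = (readIndex + 1) % storage.count
                count -= 1
            }
            return out
        }
    }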
Depending on your data rates, it might be worthwhile going for the lower-level (BSD) socket API and potentially even using readv()/writev().
If all you want is an "online radio" service and you don't care about the protocol used, it might be easier to use AVPlayer/MPMoviePlayer to play audio from a URL instead. This involves implementing a server which speaks Apple's HTTP Live Streaming protocol; I believe Apple has some sample code that does this.