Play and render stream using audio queues - iPhone

I'm currently playing a stream in my iOS app, but one feature we'd like to add is visualization of the output wave. I use an output audio queue to play the stream, but have found no way to read the output buffer. Can this be achieved with audio queues, or must it be done with a lower-level API?

To visualize, you presumably need PCM (uncompressed) data, so if you're pushing some compressed format into the queue like MP3 or AAC, then you never see the data you need. If you were working with PCM (maybe you're uncompressing it yourself with the Audio Conversion APIs), then you could visualize before putting samples into the queue. But then the problem would be latency - you want to visualize samples when they play, not when they go into the queue.
For latency reasons alone, you probably want to be using audio units.
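
For illustration, a minimal sketch of the pre-enqueue approach described above, assuming 16-bit mono PCM and a hypothetical updateWaveform() display hook (not a system API). Note that it measures the signal when the buffer is queued, not when it is heard, which is exactly the latency problem mentioned above:

    #include <AudioToolbox/AudioToolbox.h>
    #include <math.h>

    // Hypothetical display hook, not a system API.
    extern void updateWaveform(float rmsLevel);

    // Compute an RMS level from a buffer of 16-bit mono PCM samples
    // just before handing the buffer to the output queue.
    static void enqueuePCMBuffer(AudioQueueRef queue, AudioQueueBufferRef buf)
    {
        const SInt16 *samples = (const SInt16 *)buf->mAudioData;
        UInt32 count = buf->mAudioDataByteSize / sizeof(SInt16);

        double sumSquares = 0.0;
        for (UInt32 i = 0; i < count; i++) {
            double s = samples[i] / 32768.0;   // normalize to [-1, 1]
            sumSquares += s * s;
        }
        float rms = (count > 0) ? (float)sqrt(sumSquares / count) : 0.0f;

        updateWaveform(rms);   // fires at enqueue time, ahead of playback
        AudioQueueEnqueueBuffer(queue, buf, 0, NULL);
    }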

It can't actually be done with audio queues; to achieve this, I need to implement the streamer with audio units.

Related

How to get samples from AudioQueue Services on iOS

I'm trying to get samples from AudioQueue to show the spectrum of the music (like in iTunes) on the iPhone.
I've read a lot of posts, but almost all of them ask about getting samples when recording, not when playing :(
I'm using AudioQueue Services for streaming audio. Please help me understand the following points:
1/ Where can I get access to the samples (PCM, not MP3; I'm using an MP3 stream)?
2/ Should I collect samples in my own buffer to apply the FFT?
3/ Is it possible to get frequencies without an FFT transformation?
4/ How can I synchronize my FFT window in the buffer with the currently playing samples?
thanks,
update:
AudioQueueProcessingTapNew works fine for me on iOS 6+. But what about iOS 5?
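
For the iOS 6+ case, a minimal sketch of installing such a processing tap is below; the callback receives decoded PCM even when the queue is playing MP3. Error handling is trimmed for brevity:

    #include <AudioToolbox/AudioToolbox.h>

    // Tap callback: runs as the queue plays, with access to decoded samples.
    static void tapCallback(void *inClientData,
                            AudioQueueProcessingTapRef inAQTap,
                            UInt32 inNumberFrames,
                            AudioTimeStamp *ioTimeStamp,
                            AudioQueueProcessingTapFlags *ioFlags,
                            UInt32 *outNumberFrames,
                            AudioBufferList *ioData)
    {
        // Fills ioData with uncompressed samples, even for an MP3 stream.
        AudioQueueProcessingTapGetSourceAudio(inAQTap, inNumberFrames, ioTimeStamp,
                                              ioFlags, outNumberFrames, ioData);
        // ioData now holds PCM in the format reported by AudioQueueProcessingTapNew;
        // copy frames out here for the FFT / spectrum display.
    }

    static AudioQueueProcessingTapRef installTap(AudioQueueRef queue)
    {
        AudioQueueProcessingTapRef tap = NULL;
        UInt32 maxFrames = 0;
        AudioStreamBasicDescription processingFormat = {0};
        OSStatus err = AudioQueueProcessingTapNew(queue, tapCallback, NULL,
                                                  kAudioQueueProcessingTap_PostEffects,
                                                  &maxFrames, &processingFormat, &tap);
        return (err == noErr) ? tap : NULL;
    }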
For playing audio, the idea is to get at the samples before you feed them to the Audio Queue callback. You may need to convert any compressed audio file format into raw PCM samples beforehand. This can be done using one of the AVFoundation converter or file reader services.
You can then copy frames of data from the same source used to feed the Audio Queue callback buffers, and apply your FFT or other DSP for visualization to them.
You can use either FFTs or a bank of band-pass filters to get frequency info, but the FFT is very efficient at this.
Synchronization needs to be done by trial and error, as Apple does not specify exact audio and display latencies, which may differ between iOS devices and OS versions anyway. But short Audio Queue buffers or using the RemoteIO Audio Unit may give you better control of the audio latency, and OpenGL ES will give you better control of the graphics latency.
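
As a concrete example of the FFT step, here is a sketch using the Accelerate framework's vDSP routines on one block of mono float samples; a real app would create the FFTSetup once and reuse it:

    #include <Accelerate/Accelerate.h>

    // Magnitude spectrum for one block of mono float samples.
    // log2n = 10 gives a 1024-sample window; magnitudes needs n/2 slots.
    static void computeSpectrum(const float *samples, float *magnitudes,
                                vDSP_Length log2n)
    {
        vDSP_Length n = 1 << log2n;
        vDSP_Length half = n / 2;

        // In a real app, create this once at startup and reuse it.
        FFTSetup setup = vDSP_create_fftsetup(log2n, kFFTRadix2);

        float realp[half], imagp[half];
        DSPSplitComplex split = { realp, imagp };

        // Pack the real input into the split-complex form vDSP expects.
        vDSP_ctoz((const DSPComplex *)samples, 2, &split, 1, half);

        // In-place forward real FFT (output is scaled by 2 relative to the
        // textbook DFT; fine for a relative spectrum display).
        vDSP_fft_zrip(setup, &split, 1, log2n, FFT_FORWARD);

        // Squared magnitude per bin; bin i covers i * sampleRate / n Hz.
        vDSP_zvmags(&split, 1, magnitudes, 1, half);

        vDSP_destroy_fftsetup(setup);
    }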

Is it possible to access decoded audio data from Audio Queue Services?

I have an app in the App Store for streaming compressed music files over the network. I'm using Audio Queue Services to handle the playback of the files.
I would like to do some processing on the files as they are streamed. I tested the data in the audio buffers, but it is not decompressed audio. If I'm streaming an mp3 file, the buffers are just little slices of mp3 data.
My question is: is it ever possible to access the uncompressed PCM data when using Audio Queue Services? Am I forced to switch to using Audio Units?
You could use Audio Converter Services to do your own MP3-to-PCM conversion, apply your effect, then put PCM into the queue instead of MP3. It's not going to be terribly easy, but you'd still be spared the threading challenges of doing this directly with audio units (which would require you to do your own conversion anyway, and then probably use a CARingBuffer to send samples between the download/convert thread and the realtime callback thread from the I/O unit).
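
A rough sketch of that conversion path, assuming the MP3 packets have already been parsed out of the network stream (the MP3Source struct and its fields are illustrative, not system API), with the converter created elsewhere via AudioConverterNew from an MP3 source format to a 16-bit PCM destination format:

    #include <AudioToolbox/AudioToolbox.h>

    // Hypothetical source state: MP3 packets parsed from the download
    // (e.g. by AudioFileStream). None of these fields are system API.
    typedef struct {
        const void *packetData;                      // contiguous MP3 packet bytes
        AudioStreamPacketDescription *packetDescs;   // one per packet
        UInt32 packetCount;
        UInt32 nextPacket;
    } MP3Source;

    // Converter input proc: hand over the next compressed packet on demand.
    static OSStatus supplyMP3Packets(AudioConverterRef conv, UInt32 *ioNumPackets,
                                     AudioBufferList *ioData,
                                     AudioStreamPacketDescription **outPacketDescs,
                                     void *inUserData)
    {
        MP3Source *src = (MP3Source *)inUserData;
        if (src->nextPacket >= src->packetCount) {
            *ioNumPackets = 0;      // no data ready yet / end of stream
            return noErr;
        }
        AudioStreamPacketDescription *desc = &src->packetDescs[src->nextPacket];
        ioData->mNumberBuffers = 1;
        ioData->mBuffers[0].mData = (char *)src->packetData + desc->mStartOffset;
        ioData->mBuffers[0].mDataByteSize = desc->mDataByteSize;
        if (outPacketDescs) *outPacketDescs = desc;
        *ioNumPackets = 1;
        src->nextPacket++;
        return noErr;
    }

    // Pull up to maxFrames of 16-bit PCM out of the converter;
    // returns the number of frames actually decoded.
    static UInt32 decodeToPCM(AudioConverterRef conv, MP3Source *src,
                              SInt16 *pcmOut, UInt32 maxFrames, UInt32 channels)
    {
        AudioBufferList out = { 1 };
        out.mBuffers[0].mNumberChannels = channels;
        out.mBuffers[0].mDataByteSize = maxFrames * channels * sizeof(SInt16);
        out.mBuffers[0].mData = pcmOut;

        UInt32 ioPackets = maxFrames;   // for PCM output, packets == frames
        AudioConverterFillComplexBuffer(conv, supplyMP3Packets, src,
                                        &ioPackets, &out, NULL);
        return ioPackets;
    }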

Audio processing for iPhone. Which layer to use

I want to apply an audio filter to the user's voice on the iPhone.
The filter is quite heavy and needs many audio samples to reach the desired quality. I do not want to apply the filter in real time, but I would like near-real-time performance: the processing should happen in parallel with the recording, as the necessary samples are collected, so that when the user stops recording they can hear the distorted sound after only a few seconds.
My questions are:
1. Which is the right technology layer for this task, e.g. audio units?
2. What are the steps involved?
3. What are the key concepts and API methods to use?
4. I want to capture the user's voice. What are the right recording settings for this? If my filter alters the frequency content, should I record a wider range?
5. How can I collect the necessary samples for my filter? How should I handle the audio data? I mean, depending on the recording settings, how is the data packed?
6. How can I write the final audio recording to a file?
Thanks in advance!
If you find a delay of over a hundred milliseconds acceptable, you can use the Audio Queue API, which is a bit simpler than using the RemoteIO Audio Unit, for both capture and playback. You can process the buffers in your own NSOperationQueue as they come in from the audio queue, and either save the processed results to a file or just keep them in memory if there is room.
For Question 4: If your audio filter is linear, then you won't need any wider frequency range. If you are doing non-linear filtering, all bets are off.
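
A sketch of that capture-and-process pattern, using a GCD queue in plain C rather than NSOperationQueue; applyHeavyFilter() stands in for the filter described in the question, and processingQueue is assumed to be created once with dispatch_queue_create():

    #include <AudioToolbox/AudioToolbox.h>
    #include <dispatch/dispatch.h>
    #include <stdlib.h>
    #include <string.h>

    // Hypothetical offline filter, standing in for the heavy effect above.
    extern void applyHeavyFilter(SInt16 *samples, UInt32 count);

    // Created once, e.g. dispatch_queue_create("filter", DISPATCH_QUEUE_SERIAL).
    static dispatch_queue_t processingQueue;

    // Input-queue callback: copy the captured buffer, hand the copy to a
    // worker queue, and re-enqueue the buffer so recording never stalls.
    static void inputCallback(void *inUserData, AudioQueueRef inAQ,
                              AudioQueueBufferRef inBuffer,
                              const AudioTimeStamp *inStartTime,
                              UInt32 inNumPackets,
                              const AudioStreamPacketDescription *inPacketDesc)
    {
        UInt32 byteSize = inBuffer->mAudioDataByteSize;
        SInt16 *copy = malloc(byteSize);
        memcpy(copy, inBuffer->mAudioData, byteSize);

        dispatch_async(processingQueue, ^{
            applyHeavyFilter(copy, byteSize / sizeof(SInt16));
            // ...append the processed samples to a file or in-memory buffer...
            free(copy);
        });

        AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
    }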

Is there a "simple" way to play linear PCM audio on the iPhone?

I'm making a media playback app which gets given uncompressed linear PCM (über raw) audio from a third-party decoder, but I'm going crazy when it comes to just playing back the most simple audio format I can imagine..
The final app will get the PCM data progressively as the source file streams from a server, so I looked at Audio Queues initially (since I can seemingly just give it bytes on the go), but that turned out to be mindfudge - especially since Apple's own Audio Queue sample code seems to go off on a magical field trip of OpenGL and needless encapsulation..
I ended up using an AVAudioPlayer during (non-streaming) prototyping; I currently create a WAVE header and add it to the beginning of the PCM data to get AVAudioPlayer to accept it (since it can't take raw PCM).
Obviously that's no use for streaming (WAVE sets the entire file length in the header, so data can't be added on-the-go)..
..essentially: is there a way to just give iPhone OS some PCM bytes and have it play them?
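
For reference, the canonical 44-byte header the asker describes prepending looks like the sketch below (standard WAVE field layout, little-endian values, which matches ARM); both size fields bake in the total data length up front, which is why the trick can't work for an open-ended stream:

    #include <stdint.h>
    #include <string.h>

    // Build a canonical 44-byte WAVE header for a fixed-length PCM payload.
    static void writeWavHeader(uint8_t h[44], uint32_t dataBytes,
                               uint32_t sampleRate, uint16_t channels,
                               uint16_t bitsPerSample)
    {
        uint32_t byteRate   = sampleRate * channels * bitsPerSample / 8;
        uint16_t blockAlign = channels * bitsPerSample / 8;
        uint32_t riffSize   = 36 + dataBytes;            // total file size - 8

        memcpy(h,      "RIFF", 4);  memcpy(h + 4, &riffSize, 4);
        memcpy(h + 8,  "WAVEfmt ", 8);
        uint32_t fmtSize = 16;      memcpy(h + 16, &fmtSize, 4);
        uint16_t pcm = 1;           memcpy(h + 20, &pcm, 2); // PCM format tag
        memcpy(h + 22, &channels, 2);
        memcpy(h + 24, &sampleRate, 4);
        memcpy(h + 28, &byteRate, 4);
        memcpy(h + 32, &blockAlign, 2);
        memcpy(h + 34, &bitsPerSample, 2);
        memcpy(h + 36, "data", 4);  memcpy(h + 40, &dataBytes, 4); // fixed length!
    }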
You should revisit the AudioQueue code. Once you strip away all the guff you should have about 2 pages of code plus a callback in which you can supply the raw PCM. This callback is also synchronous with your main loop, so you don't even have to worry about locking.
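
A minimal sketch of that stripped-down shape, assuming a hypothetical pullDecodedPCM() that drains the third-party decoder's output; three 32 KB buffers are primed so the queue always has data in flight:

    #include <AudioToolbox/AudioToolbox.h>

    // Hypothetical source of decoded PCM from the third-party decoder;
    // returns the number of bytes actually copied.
    extern UInt32 pullDecodedPCM(void *dest, UInt32 maxBytes);

    // Output callback: refill each buffer with raw PCM and re-enqueue it.
    static void outputCallback(void *inUserData, AudioQueueRef inAQ,
                               AudioQueueBufferRef inBuffer)
    {
        UInt32 got = pullDecodedPCM(inBuffer->mAudioData,
                                    inBuffer->mAudioDataBytesCapacity);
        inBuffer->mAudioDataByteSize = got;
        // When got == 0 the stream has ended; a real app would stop here.
        AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
    }

    static AudioQueueRef startPCMQueue(void)
    {
        // 44.1 kHz, 16-bit, stereo interleaved linear PCM.
        AudioStreamBasicDescription fmt = {0};
        fmt.mSampleRate       = 44100.0;
        fmt.mFormatID         = kAudioFormatLinearPCM;
        fmt.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger
                              | kLinearPCMFormatFlagIsPacked;
        fmt.mChannelsPerFrame = 2;
        fmt.mBitsPerChannel   = 16;
        fmt.mBytesPerFrame    = 4;
        fmt.mFramesPerPacket  = 1;
        fmt.mBytesPerPacket   = 4;

        AudioQueueRef queue = NULL;
        AudioQueueNewOutput(&fmt, outputCallback, NULL, NULL, NULL, 0, &queue);

        // Prime a few buffers so playback starts with data in flight.
        for (int i = 0; i < 3; i++) {
            AudioQueueBufferRef buf = NULL;
            AudioQueueAllocateBuffer(queue, 32 * 1024, &buf);
            outputCallback(NULL, queue, buf);
        }
        AudioQueueStart(queue, NULL);
        return queue;
    }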

Playback MP3 using RemoteIO and AudioUnits on iPhone... possible?

I want to playback an mp3 rather than an uncompressed file using RemoteIO / AudioUnit.
Using uncompressed files obviously uses far too much disk space (30MB vs 3MB for mp3).
Is this even possible? If so, can you provide a little code headstart?
Thanks a million.
How low-level do you want to go? You could use the AudioHardware API:
err = AudioDeviceAddIOProc(deviceID, ioProc, self);
and in your ioProc fill the buffers yourself, listen for hardware changes, and deal with real-time threading and a lot of other low-level stuff.
I'm assuming you're already decoding the mp3 data with AudioConverter and know how to do that.
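
An ioProc in the style this answer refers to looks roughly like the sketch below. Note that AudioDeviceAddIOProc belongs to the desktop Core Audio HAL (and has long been deprecated in favor of AudioDeviceCreateIOProcID); on the iPhone the equivalent role is played by the RemoteIO Audio Unit's render callback:

    #include <CoreAudio/CoreAudio.h>

    // Runs on a real-time thread: no locks, no allocation, no file I/O here.
    static OSStatus ioProc(AudioDeviceID inDevice,
                           const AudioTimeStamp *inNow,
                           const AudioBufferList *inInputData,
                           const AudioTimeStamp *inInputTime,
                           AudioBufferList *outOutputData,
                           const AudioTimeStamp *inOutputTime,
                           void *inClientData)
    {
        // Fill outOutputData->mBuffers[i].mData with decoded PCM frames here.
        return noErr;
    }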
Yes, it is possible. But it requires using multiple audio APIs and multiple threads.
Due to the real-time constraints of the Audio Unit buffer callback thread, you will have to do the conversion of compressed files to raw PCM samples outside the Audio Unit callback. You could use Extended Audio File Services or AVAssetReader to do the conversion to uncompressed samples. However, you don't need to uncompress the entire file at once: a short uncompressed buffer of a fraction of a second will likely do, as long as you keep filling it far enough ahead of the Audio Unit callback's buffer consumption rate. This can be done in a separate timer-driven thread that monitors buffer consumption and decompresses just enough audio accordingly, perhaps into a ring buffer or circular FIFO.
What you will end up with will be similar to a rewrite of the Audio Queue API layer, but with more customizability.
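
A minimal sketch of the kind of circular FIFO described here, single-producer/single-consumer so the decoder thread and the Audio Unit render callback never need a lock (the 64K-sample capacity is an arbitrary assumption; it must be a power of two for the index masking to work):

    #include <stdatomic.h>
    #include <stdint.h>

    #define RING_CAPACITY (1 << 16)   // 64K samples of slack, an assumption

    typedef struct {
        int16_t data[RING_CAPACITY];
        _Atomic uint32_t writePos;    // only the decoder thread advances this
        _Atomic uint32_t readPos;     // only the render callback advances this
    } PCMRing;

    // Decoder thread: push freshly uncompressed samples; returns count accepted.
    static uint32_t ringWrite(PCMRing *r, const int16_t *src, uint32_t count)
    {
        uint32_t w = atomic_load(&r->writePos), rd = atomic_load(&r->readPos);
        uint32_t space = RING_CAPACITY - (w - rd);
        if (count > space) count = space;
        for (uint32_t i = 0; i < count; i++)
            r->data[(w + i) & (RING_CAPACITY - 1)] = src[i];
        atomic_store(&r->writePos, w + count);
        return count;
    }

    // Render callback: pull samples; returns count delivered (zero-fill the rest).
    static uint32_t ringRead(PCMRing *r, int16_t *dst, uint32_t count)
    {
        uint32_t w = atomic_load(&r->writePos), rd = atomic_load(&r->readPos);
        uint32_t avail = w - rd;
        if (count > avail) count = avail;
        for (uint32_t i = 0; i < count; i++)
            dst[i] = r->data[(rd + i) & (RING_CAPACITY - 1)];
        atomic_store(&r->readPos, rd + count);
        return count;
    }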