I have an app in the App Store for streaming compressed music files over the network. I'm using Audio Queue Services to handle the playback of the files.
I would like to do some processing on the files as they are streamed. I inspected the data in the audio buffers, but it is not decompressed audio: if I'm streaming an MP3 file, the buffers are just small slices of MP3 data.
My question is: is it ever possible to access the uncompressed PCM data when using Audio Queue Services? Am I forced to switch to using Audio Units?
You could use Audio Converter Services to do your own MP3-to-PCM conversion, apply your effect, and then put PCM into the queue instead of MP3. It won't be terribly easy, but you'd still be spared the threading challenges of doing this directly with Audio Units (which would require you to do your own conversion anyway, and then probably use a CARingBuffer to pass samples between the download/convert thread and the real-time callback thread of the I/O unit).
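A rough sketch of that pipeline, assuming the incoming stream has already been parsed (e.g. with Audio File Stream Services) so you have MP3 packet data and packet descriptions on hand. The MP3FeedState struct and the function names here are made up for illustration, and a real version would create the converter once and keep it around:

    #include <AudioToolbox/AudioToolbox.h>

    // Hypothetical holder for the MP3 packets downloaded and parsed so far
    // (e.g. via AudioFileStreamParseBytes).
    typedef struct {
        void                         *mp3Bytes;
        UInt32                        mp3ByteCount;
        AudioStreamPacketDescription *packetDescs;
        UInt32                        packetCount;
    } MP3FeedState;

    // The converter calls this back whenever it needs more compressed input.
    static OSStatus ProvideMP3Packets(AudioConverterRef converter, UInt32 *ioNumPackets,
                                      AudioBufferList *ioData,
                                      AudioStreamPacketDescription **outPacketDescs,
                                      void *inUserData)
    {
        MP3FeedState *state = (MP3FeedState *)inUserData;
        if (state->packetCount == 0) {        // nothing buffered: end this fill pass
            *ioNumPackets = 0;
            return -1;                        // any nonzero status stops the fill call
        }
        ioData->mNumberBuffers = 1;
        ioData->mBuffers[0].mData = state->mp3Bytes;
        ioData->mBuffers[0].mDataByteSize = state->mp3ByteCount;
        if (outPacketDescs) *outPacketDescs = state->packetDescs;
        *ioNumPackets = state->packetCount;
        state->packetCount = 0;               // hand the slice over exactly once
        return noErr;
    }

    // Download/processing thread: convert a slice, apply the effect, enqueue PCM.
    // mp3Format comes from the stream parser; pcmFormat is the LPCM format the
    // queue was created with (instead of the MP3 format).
    void ConvertAndEnqueue(MP3FeedState *feed,
                           const AudioStreamBasicDescription *mp3Format,
                           const AudioStreamBasicDescription *pcmFormat,
                           AudioQueueRef queue, AudioQueueBufferRef qbuf)
    {
        AudioConverterRef converter;
        AudioConverterNew(mp3Format, pcmFormat, &converter);

        AudioBufferList pcm = { 0 };
        pcm.mNumberBuffers = 1;
        pcm.mBuffers[0].mNumberChannels = pcmFormat->mChannelsPerFrame;
        pcm.mBuffers[0].mData           = qbuf->mAudioData;
        pcm.mBuffers[0].mDataByteSize   = qbuf->mAudioDataBytesCapacity;

        UInt32 outFrames = qbuf->mAudioDataBytesCapacity / pcmFormat->mBytesPerFrame;
        AudioConverterFillComplexBuffer(converter, ProvideMP3Packets, feed,
                                        &outFrames, &pcm, NULL);

        // pcm.mBuffers[0].mData now holds uncompressed samples: apply the effect
        // here, then enqueue the buffer (LPCM needs no packet descriptions).
        qbuf->mAudioDataByteSize = outFrames * pcmFormat->mBytesPerFrame;
        AudioQueueEnqueueBuffer(queue, qbuf, 0, NULL);
        AudioConverterDispose(converter);
    }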
Related
I need to play very large audio files from disk, and streaming them into memory in chunks would be the most efficient approach.
Must I use OpenAL for this or is there another option?
I would recommend using Audio Queues. They're simple to use, well documented, and high level. And I'd recommend using Extended Audio File Services to get the audio data out of the files.
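A minimal sketch of the Extended Audio File Services side, assuming you simply want 16-bit stereo LPCM out of whatever the file contains (the function name and read size are arbitrary):

    #include <AudioToolbox/AudioToolbox.h>

    static void ReadFileAsPCM(CFURLRef fileURL)
    {
        ExtAudioFileRef file;
        if (ExtAudioFileOpenURL(fileURL, &file) != noErr) return;

        // Ask for 16-bit interleaved stereo LPCM at 44.1 kHz regardless of what
        // the file actually holds; Extended Audio File Services converts for us.
        AudioStreamBasicDescription clientFormat = { 0 };
        clientFormat.mSampleRate       = 44100.0;
        clientFormat.mFormatID         = kAudioFormatLinearPCM;
        clientFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
        clientFormat.mChannelsPerFrame = 2;
        clientFormat.mBitsPerChannel   = 16;
        clientFormat.mBytesPerFrame    = 4;   // 2 channels * 2 bytes
        clientFormat.mFramesPerPacket  = 1;
        clientFormat.mBytesPerPacket   = 4;
        ExtAudioFileSetProperty(file, kExtAudioFileProperty_ClientDataFormat,
                                sizeof(clientFormat), &clientFormat);

        enum { kFramesPerRead = 4096 };
        SInt16 samples[kFramesPerRead * 2];

        for (;;) {
            AudioBufferList bufList;
            bufList.mNumberBuffers              = 1;
            bufList.mBuffers[0].mNumberChannels = 2;
            bufList.mBuffers[0].mDataByteSize   = sizeof(samples);
            bufList.mBuffers[0].mData           = samples;

            UInt32 frames = kFramesPerRead;
            if (ExtAudioFileRead(file, &frames, &bufList) != noErr || frames == 0)
                break;                        // error or end of file

            // 'frames' frames of PCM now sit in 'samples': copy them into an
            // AudioQueueBufferRef and enqueue it, or process them as needed.
        }
        ExtAudioFileDispose(file);
    }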
On iOS, for uncompressed audio data, you can mmap huge files, and then just copy the samples from memory to the RemoteIO callback buffers. The RemoteIO Audio Unit can be configured for even lower latency than Audio Queues.
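A sketch of that idea, assuming a headerless file of 16-bit mono PCM and a RemoteIO unit already configured for that format (the struct and function names are invented; a real version would also cope with file headers and looping):

    #include <AudioToolbox/AudioToolbox.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <string.h>

    typedef struct {
        SInt16 *samples;      // points into the mmap'd region
        size_t  totalFrames;
        size_t  nextFrame;
    } MappedAudio;

    static int MapAudioFile(const char *path, MappedAudio *m)
    {
        int fd = open(path, O_RDONLY);
        if (fd < 0) return -1;
        struct stat st;
        fstat(fd, &st);
        void *base = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        close(fd);                            // the mapping keeps the file available
        if (base == MAP_FAILED) return -1;
        m->samples     = (SInt16 *)base;
        m->totalFrames = (size_t)st.st_size / sizeof(SInt16);
        m->nextFrame   = 0;
        return 0;
    }

    // RemoteIO render callback: the kernel pages the file in on demand and we
    // just copy samples out of the mapping into the output buffer.
    static OSStatus RenderFromMappedFile(void *inRefCon,
                                         AudioUnitRenderActionFlags *ioActionFlags,
                                         const AudioTimeStamp *inTimeStamp,
                                         UInt32 inBusNumber, UInt32 inNumberFrames,
                                         AudioBufferList *ioData)
    {
        MappedAudio *m = (MappedAudio *)inRefCon;
        SInt16 *out = (SInt16 *)ioData->mBuffers[0].mData;

        UInt32 available = (UInt32)(m->totalFrames - m->nextFrame);
        UInt32 n = inNumberFrames < available ? inNumberFrames : available;
        memcpy(out, m->samples + m->nextFrame, n * sizeof(SInt16));
        m->nextFrame += n;

        if (n < inNumberFrames)               // ran off the end: pad with silence
            memset(out + n, 0, (inNumberFrames - n) * sizeof(SInt16));
        return noErr;
    }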
Definitely use an Audio Unit; it's Apple's lowest-level audio API, so it's perfect for low latency.
You should first initialize an Audio Unit with the RemoteIO subtype, then read buffers from your file in a loop and hand them to the unit from its render callback.
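The RemoteIO setup usually looks something like the sketch below, assuming 16-bit mono LPCM and a render callback you supply yourself (MyRenderCallback and StartRemoteIO are placeholder names):

    #include <AudioToolbox/AudioToolbox.h>

    // Your own callback with the AURenderCallback signature; it fills ioData
    // with PCM every time the hardware asks for more audio.
    extern OSStatus MyRenderCallback(void *inRefCon,
                                     AudioUnitRenderActionFlags *ioActionFlags,
                                     const AudioTimeStamp *inTimeStamp,
                                     UInt32 inBusNumber, UInt32 inNumberFrames,
                                     AudioBufferList *ioData);

    static AudioUnit StartRemoteIO(void *userData)
    {
        // Find and instantiate the RemoteIO output unit.
        AudioComponentDescription desc = { 0 };
        desc.componentType         = kAudioUnitType_Output;
        desc.componentSubType      = kAudioUnitSubType_RemoteIO;
        desc.componentManufacturer = kAudioUnitManufacturer_Apple;
        AudioComponent comp = AudioComponentFindNext(NULL, &desc);

        AudioUnit unit;
        AudioComponentInstanceNew(comp, &unit);

        // Describe what we'll feed it: 44.1 kHz, 16-bit, mono LPCM.
        AudioStreamBasicDescription fmt = { 0 };
        fmt.mSampleRate       = 44100.0;
        fmt.mFormatID         = kAudioFormatLinearPCM;
        fmt.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
        fmt.mChannelsPerFrame = 1;
        fmt.mBitsPerChannel   = 16;
        fmt.mBytesPerFrame    = 2;
        fmt.mFramesPerPacket  = 1;
        fmt.mBytesPerPacket   = 2;
        AudioUnitSetProperty(unit, kAudioUnitProperty_StreamFormat,
                             kAudioUnitScope_Input, 0, &fmt, sizeof(fmt));

        // Attach the render callback to the output bus (bus 0).
        AURenderCallbackStruct cb = { MyRenderCallback, userData };
        AudioUnitSetProperty(unit, kAudioUnitProperty_SetRenderCallback,
                             kAudioUnitScope_Input, 0, &cb, sizeof(cb));

        AudioUnitInitialize(unit);
        AudioOutputUnitStart(unit);
        return unit;
    }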
I'm a DirectSound programmer new to iOS. I want to implement the ability to play streaming multichannel audio, sometimes looping back to a specified point when the stream finishes playing (think of a song that has a little intro ditty that is played once, then the song loops indefinitely, skipping that intro).
With DirectSound and libvorbis, at least, I'd feed a chunk of the OGG data into the libvorbis decoder, it'd spit out some PCM, and I'd fill the buffer and queue it up to play right after the current sound buffer is finished, swapping between two buffers.
I'm probably looking at using some kind of hardware-supported format on iOS, like AAC. What APIs should I be using that will let me do multichannel audio and loop points? Any input is appreciated, thanks!
The iOS AVAssetReader class can be used to read compressed audio file data into a PCM buffer. Either the Audio Queue API (simpler) or the RemoteIO Audio Unit (lower latency) can be used to play buffers of PCM data.
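A sketch of the AVAssetReader half, assuming you want 16-bit interleaved PCM out; DecodeToPCM is a made-up name, and loop points are only hinted at in a comment:

    #import <AVFoundation/AVFoundation.h>

    static void DecodeToPCM(NSURL *assetURL)
    {
        AVURLAsset *asset = [AVURLAsset URLAssetWithURL:assetURL options:nil];
        AVAssetTrack *track = [[asset tracksWithMediaType:AVMediaTypeAudio] firstObject];

        // Ask the reader to decode to 16-bit interleaved little-endian LPCM.
        NSDictionary *pcmSettings = @{
            AVFormatIDKey:               @(kAudioFormatLinearPCM),
            AVLinearPCMBitDepthKey:      @16,
            AVLinearPCMIsFloatKey:       @NO,
            AVLinearPCMIsBigEndianKey:   @NO,
            AVLinearPCMIsNonInterleaved: @NO,
        };

        NSError *error = nil;
        AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset error:&error];
        AVAssetReaderTrackOutput *output =
            [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:track
                                                       outputSettings:pcmSettings];
        [reader addOutput:output];
        [reader startReading];

        CMSampleBufferRef sampleBuffer;
        while ((sampleBuffer = [output copyNextSampleBuffer]) != NULL) {
            CMBlockBufferRef block = CMSampleBufferGetDataBuffer(sampleBuffer);
            size_t length = CMBlockBufferGetDataLength(block);

            // Copy the PCM bytes out; from here they go into your playback
            // buffers. For loop points, one option is to note the loop start
            // time and create a new reader whose timeRange begins there.
            void *pcm = malloc(length);
            CMBlockBufferCopyDataBytes(block, 0, length, pcm);
            free(pcm);
            CFRelease(sampleBuffer);
        }
    }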
I'm currently playing a stream in my iOS app, but one feature we'd like to add is visualization of the output waveform. I use an output audio queue to play the stream, but have found no way to read the output buffer. Can this be achieved with audio queues, or does it have to be done with a lower-level API?
To visualize, you presumably need PCM (uncompressed) data, so if you're pushing some compressed format into the queue like MP3 or AAC, then you never see the data you need. If you were working with PCM (maybe you're uncompressing it yourself with the Audio Converter APIs), then you could visualize before putting samples into the queue. But then the problem would be latency - you want to visualize samples when they play, not when they go into the queue.
For latency reasons alone, you probably want to be using audio units.
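If you do end up feeding the queue PCM you've decoded yourself, the measurement itself is just a pass over the samples before you enqueue them, along these lines (assuming 16-bit samples; you would still need to delay the drawing by roughly the amount of audio already sitting in the queue so the picture lines up with what's heard):

    #include <math.h>
    #include <stdint.h>
    #include <stddef.h>

    // RMS level of a block of 16-bit PCM, normalized to 0.0 .. 1.0.
    static float RMSLevel(const int16_t *samples, size_t count)
    {
        double sum = 0.0;
        for (size_t i = 0; i < count; i++) {
            double s = samples[i] / 32768.0;   // scale to -1.0 .. 1.0
            sum += s * s;
        }
        return count ? (float)sqrt(sum / count) : 0.0f;
    }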
It can't actually be done with audio queues; to do this I'd need to implement the streamer with Audio Units instead.
I'm making a media playback app which gets given uncompressed linear PCM (über raw) audio from a third-party decoder, but I'm going crazy when it comes to just playing back the simplest audio format I can imagine.
The final app will get the PCM data progressively as the source file streams from a server, so I looked at Audio Queues initially (since I can seemingly just give them bytes on the go), but that turned out to be mindfudge - especially since Apple's own Audio Queue sample code seems to go off on a magical field trip of OpenGL and needless encapsulation.
I ended up using an AVAudioPlayer during (non-streaming) prototyping; I currently create a WAVE header and prepend it to the PCM data to get AVAudioPlayer to accept it (since it can't take raw PCM).
Obviously that's no use for streaming (WAVE stores the entire file length in the header, so data can't be appended on the go).
..essentially: is there a way to just give iPhone OS some PCM bytes and have it play them?
You should revisit the AudioQueue code. Once you strip away all the guff you should have about two pages of code plus a callback in which you can supply the raw PCM. If you pass your run loop to AudioQueueNewOutput, that callback is also synchronous with your main loop, so you don't even have to worry about locking.
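Stripped down to roughly that shape, a queue playing raw 16-bit mono PCM looks something like this sketch; GetNextPCMChunk stands in for wherever your bytes come from, and the buffer count and sizes are arbitrary:

    #include <AudioToolbox/AudioToolbox.h>
    #include <string.h>

    // Placeholder for your own source (network, third-party decoder, etc.).
    extern UInt32 GetNextPCMChunk(void *dest, UInt32 maxBytes);

    static void FillBuffer(void *userData, AudioQueueRef queue, AudioQueueBufferRef buffer)
    {
        UInt32 bytes = GetNextPCMChunk(buffer->mAudioData, buffer->mAudioDataBytesCapacity);
        if (bytes == 0) {                     // nothing yet: enqueue silence to keep the queue alive
            bytes = buffer->mAudioDataBytesCapacity;
            memset(buffer->mAudioData, 0, bytes);
        }
        buffer->mAudioDataByteSize = bytes;
        AudioQueueEnqueueBuffer(queue, buffer, 0, NULL);   // LPCM needs no packet descriptions
    }

    static AudioQueueRef StartPCMQueue(void)
    {
        AudioStreamBasicDescription fmt = { 0 };
        fmt.mSampleRate       = 44100.0;
        fmt.mFormatID         = kAudioFormatLinearPCM;
        fmt.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
        fmt.mChannelsPerFrame = 1;
        fmt.mBitsPerChannel   = 16;
        fmt.mBytesPerFrame    = 2;
        fmt.mFramesPerPacket  = 1;
        fmt.mBytesPerPacket   = 2;

        AudioQueueRef queue;
        // Passing the main run loop is what makes the callback synchronous with
        // your main loop, so no locking is needed around shared state.
        AudioQueueNewOutput(&fmt, FillBuffer, NULL,
                            CFRunLoopGetMain(), kCFRunLoopCommonModes, 0, &queue);

        for (int i = 0; i < 3; i++) {         // prime a few buffers (~0.1 s each)
            AudioQueueBufferRef buffer;
            AudioQueueAllocateBuffer(queue, 8192, &buffer);
            FillBuffer(NULL, queue, buffer);
        }
        AudioQueueStart(queue, NULL);
        return queue;
    }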
I want to play back an MP3, rather than an uncompressed file, using RemoteIO / AudioUnit.
Using uncompressed files obviously takes far too much disk space (30 MB vs. 3 MB for MP3).
Is this even possible? If so, can you provide a little code headstart?
Thanks a million.
How low-level do you want to go? You could use the AudioHardware API:
err = AudioDeviceAddIOProc(deviceID, ioProc, self);   // register ioProc as a realtime I/O callback on the device
and in your ioProc fill the buffers yourself, listen for hardware changes, and deal with real-time threading and a lot of other low-level stuff.
I'm assuming you're already decoding the mp3 data with AudioConverter and know how to do that.
Yes, it is possible. But it requires multiple audio API use and multiple threads.
Due to the real-time constraints of the Audio Unit buffer callback thread, you will have to do the conversion of compressed files to raw PCM samples outside the Audio Unit callback. You could use Extended Audio File Services or AVAssetReader to do the conversion to uncompressed samples outside the Audio Unit callback. However, you don't need to uncompress the entire file at once. A short uncompressed buffer of a fraction of a second will likely do, as long as you keep filling it far enough ahead of the Audio Unit callback's buffer consumption rate. This can be done in a separate timer-driven thread that monitors buffer consumption and decompresses just enough audio accordingly, perhaps into a ring buffer or circular FIFO.
What you will end up with will be similar to a rewrite of the Audio Queue API layer, but with more customizability.
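As an illustration of the FIFO piece, here is a bare-bones single-producer/single-consumer ring of 16-bit samples: the decode thread calls RingWrite and the render callback is RingRender (all names and sizes are made up, and a production version would want more careful memory ordering, stereo handling, and underrun reporting):

    #include <AudioToolbox/AudioToolbox.h>
    #include <stdatomic.h>
    #include <stdint.h>
    #include <string.h>

    #define kRingFrames 65536u                // power of two, ~1.5 s of mono 44.1 kHz

    typedef struct {
        SInt16           samples[kRingFrames];
        _Atomic uint32_t head;                // next frame to read (render thread)
        _Atomic uint32_t tail;                // next frame to write (decode thread)
    } PCMRing;

    // Decode thread: push freshly uncompressed samples, staying ahead of playback.
    static UInt32 RingWrite(PCMRing *ring, const SInt16 *src, UInt32 frames)
    {
        uint32_t head = atomic_load(&ring->head);
        uint32_t tail = atomic_load(&ring->tail);
        UInt32 space = kRingFrames - (tail - head) - 1;
        if (frames > space) frames = space;
        for (UInt32 i = 0; i < frames; i++)
            ring->samples[(tail + i) % kRingFrames] = src[i];
        atomic_store(&ring->tail, tail + frames);
        return frames;                        // how many frames actually fit
    }

    // Render callback: pull whatever is available, pad with silence on underrun.
    static OSStatus RingRender(void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber,
                               UInt32 inNumberFrames, AudioBufferList *ioData)
    {
        PCMRing *ring = (PCMRing *)inRefCon;
        SInt16 *out = (SInt16 *)ioData->mBuffers[0].mData;
        uint32_t head = atomic_load(&ring->head);
        uint32_t tail = atomic_load(&ring->tail);
        UInt32 available = tail - head;
        UInt32 n = inNumberFrames < available ? inNumberFrames : available;
        for (UInt32 i = 0; i < n; i++)
            out[i] = ring->samples[(head + i) % kRingFrames];
        if (n < inNumberFrames)
            memset(out + n, 0, (inNumberFrames - n) * sizeof(SInt16));
        atomic_store(&ring->head, head + n);
        return noErr;
    }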