Core Audio - CARingBuffer to read an audio file for callback - iPhone

Does anyone have a good example of using CARingBuffer to buffer a large audio file and how to read it in a callback?
Should it be reading the audio file in a secondary thread? How do I pause loading the audio file until the loaded buffers have been played (i.e., how do I pre-queue the audio file)? CAPlayThrough seems close, but it only streams audio from a microphone.
Thanks!

You can find an example that uses this ring buffer if you download the example code of the book Learning Core Audio here (under the downloads tab). Jump to the chapter 8 example in a folder called CH08_AUGraphInput.
However, if you are simply reading audio from a file, then an (extra) ring buffer seems like overkill. A ring buffer comes in handy when you have real-time (or near real-time) input and output (see chapter 8 in the said book for a more detailed explanation of when a ring buffer is necessary; note that the chapter 8 example plays audio immediately after recording it from a mic, which isn't what you want to do).
The reason I said an extra ring buffer is that Core Audio already gives you an Audio Queue (which can be thought of as a ring buffer, or at least, in your case, it replaces the need for one: you populate it with data, it plays the data, then it fires a callback informing you that the data you supplied has been played). The Apple documentation offers a good explanation of this.
In your case, since you are simply reading audio from a file, you can easily control the throughput of the audio. For example, you can pause it by blocking the thread that reads data from the audio file.
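To make the Audio Queue idea concrete, here is a rough sketch of an output callback that refills a buffer from a file. The Player struct and its fields are hypothetical names for illustration, and error handling is elided.

```c
#include <AudioToolbox/AudioToolbox.h>

/* Hypothetical player state shared with the callback */
typedef struct {
    AudioFileID audioFile;
    SInt64      packetPosition;
    UInt32      numPacketsToRead;
    Boolean     isDone;
} Player;

/* Audio queue output callback: refill the buffer that just finished
 * playing with the next packets from the file, then re-enqueue it. */
static void AQOutputCallback(void *inUserData, AudioQueueRef inAQ,
                             AudioQueueBufferRef inBuffer)
{
    Player *player = (Player *)inUserData;
    if (player->isDone) return;

    UInt32 numBytes   = inBuffer->mAudioDataBytesCapacity;
    UInt32 numPackets = player->numPacketsToRead;
    /* Passing NULL for packet descriptions assumes constant-bitrate audio */
    OSStatus err = AudioFileReadPacketData(player->audioFile, false,
                                           &numBytes, NULL,
                                           player->packetPosition,
                                           &numPackets, inBuffer->mAudioData);
    if (err == noErr && numPackets > 0) {
        inBuffer->mAudioDataByteSize = numBytes;
        AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
        player->packetPosition += numPackets;
    } else {
        AudioQueueStop(inAQ, false);  /* let queued buffers finish playing */
        player->isDone = true;
    }
}
```

You would pass a callback like this to AudioQueueNewOutput, prime a few buffers by calling it yourself, and then call AudioQueueStart; the playback chapter of the book mentioned above walks through this pattern in detail.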
For a simple example of what I'm talking about, see this example I created on GitHub. For a more advanced example, see Matt Gallagher's famous example.

Generally, for audio playback, anything that can block or take an unbounded amount of time (in particular, file or disk I/O) should be done in a secondary thread. So you want to read the audio file's data in a producer thread, and consume the data in your IOProc or RemoteIO callback.
Synchronization becomes an issue with multiple threads, but if you have only one reader and one writer generally it isn't too hard. In fact, CARingBuffer is thread safe for this case.
The general flow should look like this:
From the main thread:
1. Create the producer thread
2. Tell it which file to process
From the producer thread:
1. Open the specified file
2. Fill the empty space in the ring buffer with audio data
3. Wait until signaled or a timeout occurs, then go back to #2
In your IOProc/callback:
1. Read data from the ring buffer
2. Signal the producer that more data is needed
Posting complete code to do this here would be much too long to read, but here are a few pointers to get you started, plus a skeletal sketch of the producer thread below. None of these are for the iPhone, but the principles are the same.
https://github.com/sbooth/SFBAudioEngine/blob/master/Player/AudioPlayer.cpp
http://www.snoize.com/
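By way of illustration, the producer loop under the flow above might look roughly like the following sketch. OpenAudioFile, RingBufferHasSpace, and ReadFileIntoRingBuffer are hypothetical placeholders for your own file and ring-buffer code, not real API calls, and error handling is elided.

```c
#include <pthread.h>
#include <stdbool.h>

/* Hypothetical helpers standing in for your file/ring-buffer code */
extern void OpenAudioFile(const char *path);
extern bool RingBufferHasSpace(void);
extern bool ReadFileIntoRingBuffer(void);  /* returns false at end of file */

typedef struct {
    pthread_mutex_t mutex;
    pthread_cond_t  dataNeeded;  /* signaled by the render callback */
    const char     *path;
    bool            done;
} ProducerState;

void *ProducerThread(void *arg)
{
    ProducerState *s = (ProducerState *)arg;
    OpenAudioFile(s->path);                       /* step 1 */

    pthread_mutex_lock(&s->mutex);
    while (!s->done) {
        while (RingBufferHasSpace() && !s->done)  /* step 2 */
            s->done = !ReadFileIntoRingBuffer();

        if (!s->done)                             /* step 3 */
            pthread_cond_wait(&s->dataNeeded, &s->mutex);
    }
    pthread_mutex_unlock(&s->mutex);
    return NULL;
}
```

The IOProc/callback would read from the ring buffer and then signal dataNeeded. One caveat: locking a mutex on the real-time audio thread risks priority inversion, so many implementations signal with a semaphore instead of a condition variable.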

Related

ALSA - managing async IO

I have a device that is continuously putting out PCM data. Under certain circumstances I want to record this output. To this end I have a process that waits for the signal to record, and when it gets it, it starts a thread (via pthread_create). This thread opens the PCM device and starts recording using snd_async_add_pcm_handler. The handler function uses snd_pcm_readi to read any available data from the PCM stream and write it to disk.
All well and good - except
Once this starts running, my calling process stops getting any cycles. It should continue listening for the next event, which would signal it to stop recording. Watching the execution, I see it slow and then halt once the PCM recording starts. If I don't start the recording, the app runs normally and continues to respond.
So it seems I'm left with two avenues:
1. find gaps in the recording process and usleep (or similar) to give the calling app time to respond
2. attempt to exit the recording using other means
I've failed to make any headway using #1, so I'm working on #2. It's known that the audio sample will start with low amplitude, go high for a second or two, and then go low again. I'm using a rolling average to trap this low-high-low pattern and am then attempting to close the async IO.
Where it stands: once the IO is supposed to be stopped, I've tried calling snd_async_del_handler, but this crashes the app with a simple 'IO possible' message. I've also tried calling snd_pcm_drop, but this doesn't close the async IO, so it crashes the next time it tries to read. Combining the two in either order gives similar results.
What step(s) have I missed?
The snd_async_* functions are strongly deprecated, because they don't work with all kinds of devices, are hard to use, and don't have any advantages in practice.
Please note that the ALSA API is not thread safe, i.e., calls for one PCM device must all come from one thread, or be synchronized.
What you should do is to have an event loop (using poll) in your thread that handles both the PCM device and the exit message from the main thread.
For this, the PCM device should be run in non-blocking mode, and its pollable file descriptors obtained with the snd_pcm_poll_descriptors* functions.
To send a message to the thread, use any mechanism with a pollable file handle, such as a pipe or eventfd.
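As a rough sketch of that structure, assuming the stop message arrives on the read end of a pipe (exitFd) and with error handling elided:

```c
#include <alsa/asoundlib.h>
#include <poll.h>

/* Capture loop driven by poll(): wakes up for PCM data or a stop message */
void capture_loop(snd_pcm_t *pcm, int exitFd)
{
    int npcm = snd_pcm_poll_descriptors_count(pcm);
    struct pollfd fds[npcm + 1];

    snd_pcm_poll_descriptors(pcm, fds, npcm);
    fds[npcm].fd     = exitFd;          /* read end of the pipe */
    fds[npcm].events = POLLIN;

    for (;;) {
        poll(fds, npcm + 1, -1);

        if (fds[npcm].revents & POLLIN)
            break;                      /* main thread asked us to stop */

        unsigned short revents;
        snd_pcm_poll_descriptors_revents(pcm, fds, npcm, &revents);
        if (revents & POLLIN) {
            short buf[2 * 512];         /* assumes 16-bit stereo frames */
            snd_pcm_sframes_t n = snd_pcm_readi(pcm, buf, 512);
            if (n > 0) {
                /* ... write n frames to disk ... */
            }
        }
    }
    /* No async handler is involved, so stopping is now straightforward */
    snd_pcm_drop(pcm);
}
```

The main thread then stops the recording simply by writing a byte to the other end of the pipe.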

AVCaptureSession, multiple AVCaptureAudioDataOutputs

Environment
iPhone
armv7 / SDK 6.0
Xcode 4.5
Use-case
Based on the AVCam sample
Capture A/V into a file using AVCaptureMovieFileOutput
Add an additional AVCaptureAudioDataOutput to intercept the audio being written to the file while recording
How-to
Add Video input to the Capture session
Add Audio input to the Capture session
Add File Output to the Capture session
Add Audio Output to the Capture session
Configure
Start recording
The problem
It seems the outputs are mutually exclusive: either I get data written to the disk, or I get the AVCaptureAudioDataOutput capture delegate called. When AVCaptureMovieFileOutput is added (order doesn't matter), the AVCaptureAudioDataOutput delegate is not called.
How can this be solved? How can I get AVCaptureAudioDataOutput to trigger its delegate/selector while, at the same time, AVCaptureMovieFileOutput is used to write data to the disk?
Can this be done in any way other than using a lower-level API such as AVAssetWriter et al.?
Any help will be appreciated!
AVAssetWriter is to be used in conjunction with AVAssetWriterInputPixelBufferAdaptor; a good example of how this can be achieved can be found here.
Then, upon each 'AVCaptureAudioDataOutputSampleBufferDelegate' invocation, the raw audio buffer can be propagated out for further processing (in parallel to having the data written to disk).
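On the audio side, the raw PCM can be pulled out of the CMSampleBufferRef with the Core Media C API. A sketch of a helper you might call from captureOutput:didOutputSampleBuffer:fromConnection: follows; error handling is elided.

```c
#include <CoreMedia/CoreMedia.h>

/* Extract the PCM from a sample buffer delivered by
 * AVCaptureAudioDataOutput. A single-buffer AudioBufferList on the
 * stack assumes the samples are interleaved (one AudioBuffer). */
void ProcessAudioSampleBuffer(CMSampleBufferRef sampleBuffer)
{
    AudioBufferList  bufferList;
    CMBlockBufferRef blockBuffer = NULL;

    OSStatus err = CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
        sampleBuffer,
        NULL,               /* size needed: not required here */
        &bufferList,
        sizeof(bufferList),
        NULL, NULL,         /* default allocators */
        kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
        &blockBuffer);

    if (err == noErr) {
        for (UInt32 i = 0; i < bufferList.mNumberBuffers; i++) {
            AudioBuffer buf = bufferList.mBuffers[i];
            /* buf.mData holds buf.mDataByteSize bytes of raw PCM */
        }
        CFRelease(blockBuffer);  /* retained on our behalf by the call */
    }
}
```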

Waveform representation of any audio in iPhone

I have to draw a waveform for an audio file (CMK.mp3) in my application.
For this I have tried this Solution
That solution uses AVAssetReader, which takes too much time to display the waveform.
Can anyone please help? Is there any other way to display the waveform more quickly?
Thanks
AVAssetReader is the only way to read an AVAsset, so there is no way around that. You will want to tune the code so it processes the asset without incurring unwanted overhead. I have not tried that code yet, but I intend to use it to build a sample project to share on GitHub once I have the time, hopefully soon.
My approach to tune it will be to do the following:
Eliminate all Objective-C method calls and use plain C instead
Move all work to a secondary queue, off the main queue, and use a block to call back once finished
One obstacle with rendering a waveform is that you cannot have more than one AVAssetReader running at a time, at least as of the last time I tried (that may have changed with iOS 6). A new reader cancels the other, and that interrupts playback, so you need to do your work in sequence. I do that with queues.
In an audio app that I built, it reads each CMSampleBufferRef into a CMBufferQueueRef, which can hold multiple sample buffers (see copyNextSampleBuffer on AVAssetReaderOutput). You can configure the queue to give you enough time to process a waveform after an AVAssetReader finishes reading an asset, so that the current playback does not exhaust the contents of the CMBufferQueueRef before you start reading more buffers into it for the next track. That will be my approach when I attempt it. I just have to be careful not to use too much memory by making the buffer too big, or so big that it causes issues with playback. I just do not know how long processing the waveform will take, and I will test on my older iPods and an iPhone 4 before trying it on my iPhone 5 to see if they all perform well.
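For reference, CMBufferQueue is a plain C API, so the queueing side of that approach can stay out of Objective-C entirely. A minimal sketch follows; the capacity of 30 buffers is an arbitrary choice for illustration, and error handling is elided.

```c
#include <CoreMedia/CoreMedia.h>

static CMBufferQueueRef gSampleQueue = NULL;

/* Create a queue that holds CMSampleBuffers in arrival order */
OSStatus CreateSampleQueue(void)
{
    return CMBufferQueueCreate(kCFAllocatorDefault, 30,
        CMBufferQueueGetCallbacksForUnsortedSampleBuffers(),
        &gSampleQueue);
}

/* Producer side: feed buffers from copyNextSampleBuffer into the queue */
OSStatus EnqueueSample(CMSampleBufferRef sampleBuffer)
{
    return CMBufferQueueEnqueue(gSampleQueue, sampleBuffer);
}

/* Consumer side: returns NULL when empty; the caller must CFRelease */
CMSampleBufferRef DequeueSample(void)
{
    return (CMSampleBufferRef)CMBufferQueueDequeueAndRetain(gSampleQueue);
}
```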
Be sure to stay as close to C as possible. Calls to Objective-C resources during this processing will incur potential thread switching and other run-time overhead costs which are significant enough to be noticeable. You will want to avoid that. What I may do is set up Key-Value Observing (KVO) to trigger the AVAssetReader to start the next task quickly so that I can maintain gapless playback between tracks.
Once I start my audio experiments I will put them on GitHub. I've created a repository where I will do this work. If you are interested you can "watch" that repo so you will know when I start committing updates to it.
https://github.com/brennanMKE/Audio

How can I monitor an mp3 live stream to detect corruption?

Once a month the MP3 stream messes up, and the only way to tell it has messed up is by listening to it as it streams. Is there a script, program, or tool I can use to monitor the live stream at a given URL and send some kind of flag when it corrupts?
What happens is it normally plays a song or some music, but once a month, every month, at random, the stream corrupts and starts playing random chipmunk-like trash audio. Any ideas on this? I am just getting started at this, with no idea at all.
Typically, this will happen when you play a track of the wrong sample rate.
Most (all that I've seen) SHOUTcast/Icecast encoders (going straight from files) will compress to MP3 just fine, but assume a fixed sample rate of whatever they are configured for, typically 44.1kHz. If you drop in a 48kHz track, or a 22.05kHz track, it will play at the wrong speed and cause all sorts of random issues with the stream.
The problem is easy enough to verify: simply create a file with a different sample rate and test it. I suspect you will reproduce the problem. If that is the case, to my knowledge there is no way to detect it automatically, since your stream isn't actually corrupt... it just sounds wrong. You will have to scan all of your files for sample rate; FFmpeg in a script should be able to help you with that.
Now, if the problem actually is a corrupt MP3 stream, then you have problems on your encoding side. I suspect simply swapping out whatever DLL or module you're using with a recent stable version of LAME will help.
To detect a genuinely corrupt MP3 stream, your encoder must be using CRC. If you enable it, you can read through the header of each frame to find the CRC and then run the check on the audio data. If you get an error (or several frames with errors), you can trigger a warning.
You can find information on the MP3 stream header here:
http://www.mp3-tech.org/programmer/frame_header.html
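As a starting point, here is a sketch of locating the CRC in a frame, based on the header layout documented at that link. Recomputing the CRC-16 over the header bits and side information is left as the next step, and error handling is elided.

```c
#include <stdint.h>
#include <stdbool.h>

/* Inspect an MPEG audio frame header. The header starts with an 11-bit
 * sync word (all ones); the "protection" bit (the low bit of the second
 * byte) is 0 when a 16-bit CRC immediately follows the 4-byte header. */
bool FrameHasCRC(const uint8_t *frame, uint16_t *storedCRC)
{
    /* Validate the sync word: 0xFF, then the top three bits set */
    if (frame[0] != 0xFF || (frame[1] & 0xE0) != 0xE0)
        return false;   /* not a frame header at all */

    if ((frame[1] & 0x01) != 0)
        return false;   /* protection bit set: this frame has no CRC */

    /* The stored CRC-16 sits in the two bytes after the header; verify
     * it by recomputing the CRC over the last 16 header bits and the
     * side information, as described at mp3-tech.org. */
    if (storedCRC)
        *storedCRC = (uint16_t)((frame[4] << 8) | frame[5]);
    return true;
}
```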

Loading & Selecting audio files into Audio Units

I'm trying to build a render callback function that will load a variety of short sound files and (according to my custom logic) put them in my mixer unit's ioData AudioBufferList. How do I load an AIFF or CAF file into the program and appropriately import its samples into ioData?
See the Extended Audio File Services Reference, particularly ExtAudioFileOpenURL and ExtAudioFileRead. Remember not to do anything too time-consuming in the render callback (opening a file, for example, may be considered time-consuming; allocating memory definitely is).
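As a sketch of the loading side, done ahead of time rather than in the render callback, and assuming the mixer wants 16-bit interleaved stereo PCM (error handling mostly elided):

```c
#include <AudioToolbox/AudioToolbox.h>

/* Read up to *ioFrames frames of 16-bit interleaved stereo PCM into
 * ioData, whose mBuffers[0].mData must point at preallocated memory.
 * Call this ahead of time; the render callback should only copy from
 * the resulting memory. */
OSStatus LoadAudioFile(CFURLRef url, UInt32 *ioFrames, AudioBufferList *ioData)
{
    ExtAudioFileRef file = NULL;
    OSStatus err = ExtAudioFileOpenURL(url, &file);
    if (err != noErr) return err;

    /* Ask ExtAudioFile to convert whatever is in the file (AIFF, CAF,
     * compressed, ...) to the client format we want to receive */
    AudioStreamBasicDescription fmt = {0};
    fmt.mSampleRate       = 44100.0;
    fmt.mFormatID         = kAudioFormatLinearPCM;
    fmt.mFormatFlags      = kAudioFormatFlagIsSignedInteger
                          | kAudioFormatFlagIsPacked;
    fmt.mChannelsPerFrame = 2;
    fmt.mBitsPerChannel   = 16;
    fmt.mBytesPerFrame    = 4;   /* 2 channels x 2 bytes, interleaved */
    fmt.mBytesPerPacket   = 4;
    fmt.mFramesPerPacket  = 1;
    err = ExtAudioFileSetProperty(file,
                                  kExtAudioFileProperty_ClientDataFormat,
                                  sizeof(fmt), &fmt);
    if (err == noErr)
        err = ExtAudioFileRead(file, ioFrames, ioData);

    ExtAudioFileDispose(file);
    return err;
}
```

On return, *ioFrames holds the number of frames actually read, and the render callback can then copy slices of that memory into the mixer's ioData as your custom logic dictates.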