ALSA - managing async IO - CentOS

I have a device that is continuously putting out PCM data. Under certain circumstances I want to record this output. To this end I have a process that waits for the signal to record; when it gets it, it starts a thread (via pthread_create). This thread opens the PCM device and starts recording using snd_async_add_pcm_handler. The handler function uses snd_pcm_readi to grab whatever data is available in the PCM stream and write it to disk.
All well and good - except
Once this starts running, my calling process stops getting any cycles. It should continue listening for the next event, which would signal it to stop recording. Watching the execution, I see it slow and then halt once the PCM recording starts. If I don't start the recording, the app runs normally and continues to respond.
So it seems like I'm left with two avenues:
find gaps in the recording process and usleep (or similar) to give the calling app time to respond
attempt to exit the recording using other means
I've failed to make any headway using #1 so I'm working on #2. It's known that the audio sample will start with low amplitude, go high for a second or two and then go low again. I'm using a rolling average to trap this low-high-low and am attempting to close the async IO.
Where it stands: once the IO is supposed to stop, I've tried calling snd_async_del_handler, but this crashes the app with a terse 'I/O possible' message. I've also tried calling snd_pcm_drop, but this doesn't close the async IO, so it crashes the next time it tries to read. Combining the two in either order gives similar results.
What step(s) have I missed?

The snd_async_* functions are strongly deprecated, because they don't work with all kinds of devices, are hard to use, and don't have any advantages in practice.
Please note that the ALSA API is not thread safe, i.e., calls for one PCM device must all be from one thread, or synchronized.
What you should do is to have an event loop (using poll) in your thread that handles both the PCM device and the exit message from the main thread.
For this, the PCM device should be run in non-blocking mode, and its poll file descriptors obtained with the snd_pcm_poll_descriptors* functions.
To send a message to the thread, use any mechanism with a pollable file descriptor, such as a pipe or an eventfd.
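What that loop can look like, as a minimal sketch (device setup, error handling, sample format, and channel count are assumptions, not taken from the question):

```c
/*
 * Sketch of the suggested capture loop, assuming the capture handle `pcm`
 * was opened with SND_PCM_NONBLOCK and configured (rate, format, channels)
 * elsewhere. `stop_fd` is an eventfd created by the main thread; `out_fd`
 * is the file being recorded to. Error handling is abbreviated.
 */
#include <alsa/asoundlib.h>
#include <sys/eventfd.h>
#include <poll.h>
#include <unistd.h>
#include <stdint.h>

#define CHANNELS 2            /* assumed channel count */

static void capture_loop(snd_pcm_t *pcm, int stop_fd, int out_fd)
{
    int npcm = snd_pcm_poll_descriptors_count(pcm);
    struct pollfd fds[npcm + 1];

    snd_pcm_poll_descriptors(pcm, fds, npcm);
    fds[npcm].fd = stop_fd;                    /* watched alongside the PCM fds */
    fds[npcm].events = POLLIN;

    snd_pcm_start(pcm);

    for (;;) {
        if (poll(fds, npcm + 1, -1) < 0)
            break;

        if (fds[npcm].revents & POLLIN) {      /* main thread asked us to stop */
            uint64_t value;
            read(stop_fd, &value, sizeof value);
            break;
        }

        unsigned short revents;
        snd_pcm_poll_descriptors_revents(pcm, fds, npcm, &revents);
        if (revents & POLLIN) {
            short buf[4096 * CHANNELS];        /* interleaved S16 samples */
            snd_pcm_sframes_t frames = snd_pcm_readi(pcm, buf, 4096);
            if (frames > 0)
                write(out_fd, buf, frames * CHANNELS * sizeof(short));
            else if (frames == -EPIPE)
                snd_pcm_prepare(pcm);          /* recover from an overrun */
        }
    }

    snd_pcm_drop(pcm);                         /* stop capturing, discard pending data */
}
```

The main thread creates the eventfd with eventfd(0, 0) and hands it to the recording thread; when it wants recording to end, it simply writes a 64-bit value of 1 to that descriptor. poll wakes up, the loop exits, and the thread can close the device and return, with no asynchronous handlers to tear down.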

Related

Multithreading: best method for lossy thread notifications in Swift?

I have a high-priority audio thread that runs periodically and should do minimal synchronization.
Sometimes the main thread needs to ensure that at least one audio cycle has passed and certain parameters have been picked up, before sending the next batch of parameters. For example, when disabling an audio node the main thread needs to wait until the next cycle when the disabling command is picked up and the node shuts itself down.
At times it is important for the main thread to wait until the command is fully executed, but other times it's not important, so nobody might be listening to the sync event. Hence the "lossy" scenario.
So what is the best way of notifying other threads about an event with minimal overhead and possibly in a "lossy" way?
I can't think of a way to use a semaphore for this task. Are there any canonical ways of achieving this? It looks like Java's notifyAll() works precisely this way; if so, what synchronization mechanism is used behind notifyAll()?
Edit: I've been thinking, is there such a thing as "send me a semaphore in a queue and I'll signal it"? It seems a bit too complicated, but theoretically it could do the job. Are there any simpler tools for the same task?
As a rule, you never want to block the main thread (or, at least, not for more than a few milliseconds). If the response might ever take longer than that, then rather than actually waiting, we would adopt asynchronous patterns and let the main thread proceed. Sure, if you need to prevent user interaction in the meantime, we'd do that, but we wouldn't block the main thread.
The key concern is that if an app blocks the main thread for too long, you have a bad UX (where the app appears to freeze) and you risk having your app killed by the watchdog process. I would therefore not advise using semaphores (or any other similar mechanisms) to have the main thread wait for something from your audio engine controller.
So, for example, let's say the main thread wants to tell the audio engine to pause playback, but you want the UI to "wait" for that to be acknowledged and handled. Instead of actually waiting, we would set up some asynchronous pattern: the main thread notifies the audio engine that it wants it to pause, and the audio controller then notifies the main thread when that request has been processed, via some callback mechanism (e.g., a delegate protocol, a completion handler closure, etc.). If you happen to need to prevent user interaction during the intervening time, you'd disable the relevant controls and show a UIActivityIndicatorView (i.e., a spinner) or something like that, which would be removed when the completion handler is called.
Now, you used the term "lossy", but that generally conveys that you don't mind the request getting lost, and I'm assuming that is not really the case. I'm assuming you don't want the request itself to be lost, but rather that the main thread doesn't care about the response, confident that the audio controller will get to it when it can. In that case, you'd probably still give this sort of request to the audio controller with a callback mechanism, but the main thread just wouldn't avail itself of it.
Now, if you have a sequence of commands that you want the audio engine to process in order, then the audio controller might have a private, internal queue for these requests, configured not to start subsequent requests until the prior ones have finished. The main thread shouldn't have to worry about whether the required audio cycle has been processed. It should just send whatever requests are appropriate, and the audio controller should handle them in the desired order and timing.
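For illustration only, here is roughly what such an internal command handoff can look like when written as a single-producer/single-consumer ring in C (the opcode field, the ring size, and the optional done callback are my assumptions; on Apple platforms you would likely express the same idea in Swift or reuse an existing lock-free FIFO):

```c
/* Sketch: a single-producer (main thread) / single-consumer (audio thread)
 * command ring. The audio thread drains it once per cycle; each command may
 * carry an optional completion callback that callers are free to ignore,
 * which gives the "fire and forget" (lossy acknowledgement) behaviour.
 * Names and sizes here are illustrative, not from the original question. */
#include <stdatomic.h>
#include <stddef.h>

typedef struct {
    int   opcode;                 /* e.g. a hypothetical CMD_DISABLE_NODE */
    void *arg;
    void (*done)(void *arg);      /* may be NULL if nobody is listening */
} command_t;

#define RING_SIZE 64              /* must be a power of two */

typedef struct {
    command_t       slots[RING_SIZE];
    _Atomic size_t  head;         /* written by producer (main thread)  */
    _Atomic size_t  tail;         /* written by consumer (audio thread) */
} command_ring_t;

/* Main thread: enqueue without blocking; returns 0 if the ring is full. */
static int ring_push(command_ring_t *r, command_t cmd)
{
    size_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
    size_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);
    if (head - tail == RING_SIZE)
        return 0;                                /* full; caller decides what to do */
    r->slots[head & (RING_SIZE - 1)] = cmd;
    atomic_store_explicit(&r->head, head + 1, memory_order_release);
    return 1;
}

/* Audio thread: drain everything that is pending at the start of a cycle. */
static void ring_drain(command_ring_t *r)
{
    size_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
    size_t head = atomic_load_explicit(&r->head, memory_order_acquire);
    while (tail != head) {
        command_t *c = &r->slots[tail & (RING_SIZE - 1)];
        /* apply_command(c->opcode, c->arg);  -- engine-specific work goes here */
        if (c->done)
            c->done(c->arg);      /* in real code, bounce this back to the main thread */
        tail++;
    }
    atomic_store_explicit(&r->tail, tail, memory_order_release);
}
```

The main thread calls ring_push and moves on; if it cares about acknowledgement it supplies a done callback, and if it does not, it leaves done as NULL and the notification is simply dropped, which is the lossy behaviour described above.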

Music player process

I was reading a book which says that a processor with a single core and no hyper-threading can run only one process at a time. This raises a question: we do so many operations on a PC, and there are always background processes running, so why doesn't the music player stop for a short while in between? I know the CPU is pretty fast, but a music player usually plays continuously without any observable break. Can anyone clarify this behavior?
1) A single-core CPU without hyperthreading can, as you say, only run one process at a time. Multiple processes are handled by context switching: the CPU runs one process, then switches to the next, and the next, and eventually back to the first, and so on. How often a given process is scheduled depends on many factors, process priority being one of them. (Back in the day it was often necessary to run WinAmp with elevated priority to avoid glitches; nowadays this is not needed, as CPUs are a lot faster.)
2) So, with this in mind, how come it still sounds great and without glitches?
When processing audio, the CPU feeds the sound device with samples by putting them either in a hardware buffer on the sound card or in RAM. The sound processor does not get its data directly from the CPU; instead, it reads the samples from one of these two buffers. As long as there are samples in the buffer we are good, even though the CPU is off doing something else.
The size of the hardware buffer differs between sound cards. Some (older) sound cards do not have an onboard buffer at all, and there the buffer in RAM comes into play instead.
Running out of samples is called a buffer underrun. Even on modern computers this can happen; for example, if you start a heavy process while running your audio player, the CPU may not be able to switch back in time, and you can clearly hear glitches and gaps in the sound.
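As a rough worked example (buffer size and sample rate here are purely illustrative):

```c
/* Illustrative numbers only: how much playback time one buffer holds. */
#include <stdio.h>

int main(void)
{
    const unsigned frames      = 4096;     /* frames held by the buffer (assumed) */
    const unsigned sample_rate = 44100;    /* Hz */

    double ms = 1000.0 * frames / sample_rate;   /* about 92.9 ms */
    printf("One %u-frame buffer at %u Hz holds %.1f ms of audio.\n",
           frames, sample_rate, ms);
    /* As long as the player is scheduled again and refills the buffer within
     * roughly 93 ms, playback never stalls; miss that deadline and you get a
     * buffer underrun. */
    return 0;
}
```

In other words, the player only needs to be scheduled every 90 ms or so to keep such a buffer topped up, which is an eternity for a modern CPU; an underrun only happens when it misses that deadline.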
This is because the operating system does preemptive multi-tasking. The process is in fact being interrupted, but only for a very short time, not long enough for a human to notice. Another reason is that the audio card has a playback buffer, which allows playback to continue while data is fed to it in chunks. So even while the process feeding the card with data is briefly interrupted, playback can still occur.
This is handled by the Operating System Scheduler.
The scheduler will allocate a time slice to each process (this may be a few milliseconds) and will allow a process to execute what it needs to for that length of time. The length allocated is determined by the algorithm used by the OS (i.e., short-term scheduling, long-term, etc.). The reason you do not notice this is that the CPU operates at such high frequencies, e.g., 1 GHz, which makes multitasking on a single core/thread transparent to the user.
http://en.wikipedia.org/wiki/Scheduling_(computing)
http://web.cs.wpi.edu/~cs3013/c07/lectures/Section05-Scheduling.pdf

Waveform representation of any audio in iPhone

I have to draw a waveform for an audio file (CMK.mp3) in my application.
For this I have tried this Solution
That solution uses AVAssetReader, which takes too much time to display the waveform.
Can anyone please help? Is there another way to display the waveform more quickly?
Thanks
AVAssetReader is the only way to read an AVAsset, so there is no way around that. You will want to tune the code to process it without incurring unwanted overhead. I have not tried that code yet, but I intend to use it to build a sample project to share on GitHub once I have the time, hopefully soon.
My approach to tune it will be to do the following:
Eliminate all Objective-C method calls and use C only instead
Move all work to a secondary queue off the main queue and use a block to call back once finished
One obstacle with rendering a waveform is that you cannot have more than one AVAssetReader running at a time, at least as of the last time I tried (it may possibly have changed with iOS 6). A new reader cancels the other, and that interrupts playback, so you need to do your work in sequence. I do that with queues.
In an audio app that I built, I read each CMSampleBufferRef into a CMBufferQueueRef, which can hold multiple sample buffers (see copyNextSampleBuffer on AVAssetReaderOutput). You can configure the queue to give you enough headroom to process a waveform after an AVAssetReader finishes reading an asset, so that the current playback does not exhaust the contents of the CMBufferQueueRef before you start reading more buffers into it for the next track. That will be my approach when I attempt it. I just have to be careful not to use too much memory by making the buffer too big, or to make it so big that it causes issues with playback. I just do not know how long it will take to process the waveform, so I will test it on my older iPods and an iPhone 4 before trying it on my iPhone 5 to see whether they all perform well.
Be sure to stay as close to C as possible. Calls to Objective-C resources during this processing will incur potential thread switching and other run-time overhead costs which are significant enough to be noticeable. You will want to avoid that. What I may do is set up Key-Value Observing (KVO) to trigger the AVAssetReader to start the next task quickly so that I can maintain gapless playback between tracks.
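As an illustration of the "stay in C" advice, the per-sample work for a waveform can be a plain loop over the 16-bit samples pulled out of each CMSampleBufferRef (for example via CMSampleBufferGetDataBuffer and CMBlockBufferGetDataPointer); the bin size and the assumption of interleaved SInt16 data below are mine:

```c
/* Sketch: reduce a block of 16-bit PCM samples to min/max pairs, one pair per
 * output bin (e.g. per screen pixel), with no Objective-C calls in the hot
 * loop. The bin size and int16_t sample format are assumptions. */
#include <limits.h>
#include <stdint.h>
#include <stddef.h>

typedef struct { int16_t min; int16_t max; } WaveformBin;

static size_t build_waveform_bins(const int16_t *samples, size_t count,
                                  size_t samples_per_bin, WaveformBin *out)
{
    size_t bins = 0;
    for (size_t i = 0; i < count; i += samples_per_bin) {
        int16_t lo = INT16_MAX, hi = INT16_MIN;
        size_t end = i + samples_per_bin;
        if (end > count)
            end = count;
        for (size_t j = i; j < end; j++) {
            if (samples[j] < lo) lo = samples[j];
            if (samples[j] > hi) hi = samples[j];
        }
        out[bins].min = lo;      /* one min/max pair per bin */
        out[bins].max = hi;
        bins++;
    }
    return bins;                 /* number of pairs written to `out` */
}
```

The background queue would run this over each buffer and hand the resulting bins to the main queue for drawing.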
Once I start my audio experiments I will put them on GitHub. I've created a repository where I will do this work. If you are interested you can "watch" that repo so you will know when I start committing updates to it.
https://github.com/brennanMKE/Audio

Core Audio - CARingBuffer to read an audio file for callback

Does anyone have a good example of using CARingBuffer to buffer a large audio file and how to read it in a callback?
Should it be reading the audio file in a secondary thread? How do I pause loading the audio file until the loaded buffers have been played (how do I pre-queue the audio file)? CAPlayThrough seems close but is only streaming audio from a microphone.
Thanks!
You can find an example that uses this ring buffer if you download the example code of the book Learning Core Audio here (under the downloads tab). Jump to the chapter 8 example in a folder called CH08_AUGraphInput.
However, if you are simply reading audio from a file, then using an (extra) ring buffer seems like overkill. A ring buffer comes in handy when you have real-time (or near real-time) input and output (read chapter 8 of the said book for a more detailed explanation of when a ring buffer is necessary; note that the chapter 8 example is about playing audio immediately after recording it from a mic, which isn't what you want to do).
The reason I said an extra ring buffer is that Core Audio already provides an audio queue (which can be thought of as a ring buffer; or at least, in your case, it replaces the need for one: you populate it with data, it plays the data, then it fires a callback informing you that the data you supplied has been played). The Apple documentation offers a good explanation of this.
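As a sketch of that callback model (the struct, the buffer and packet sizes, and the assumption of a constant-bitrate format are illustrative, not taken from the book's example):

```c
/*
 * Sketch of an Audio Queue output callback: the queue hands back a buffer it
 * has finished playing; we refill it from the file and enqueue it again.
 * Assumes a constant-bitrate format, so no packet descriptions are needed.
 */
#include <AudioToolbox/AudioToolbox.h>

typedef struct {
    AudioFileID file;          /* opened with AudioFileOpenURL elsewhere */
    SInt64      nextPacket;    /* next packet index to read */
    Boolean     done;
} PlayerState;

static void HandleOutputBuffer(void *inUserData,
                               AudioQueueRef inAQ,
                               AudioQueueBufferRef inBuffer)
{
    PlayerState *player = (PlayerState *)inUserData;
    if (player->done)
        return;

    UInt32 numBytes   = inBuffer->mAudioDataBytesCapacity; /* in: capacity, out: bytes read */
    UInt32 numPackets = 4096;                               /* illustrative request */

    OSStatus err = AudioFileReadPacketData(player->file, false, &numBytes,
                                           NULL, player->nextPacket,
                                           &numPackets, inBuffer->mAudioData);
    if (err == noErr && numPackets > 0) {
        inBuffer->mAudioDataByteSize = numBytes;
        player->nextPacket += numPackets;
        AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);   /* give it back to the queue */
    } else {
        player->done = true;
        AudioQueueStop(inAQ, false);                        /* let queued audio drain */
    }
}
```

You prime the queue by calling this callback yourself for each allocated buffer before AudioQueueStart, which is also the "pre-queue the audio file" step asked about above.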
In your case, if you are simply reading audio from a file, then you can easily control the throughput of the audio from the file. You can pause it by blocking the thread that reads data from the audio file for example.
For a simple example of what I'm talking about, see this example I created on github. For a more advanced example, see Matt Gallagher's famous example.
Generally for audio playback anything that can block or take an unbounded amount of time (in particular file or disk IO) should be done in a secondary thread. So you want to read the audio file's data in a producer thread, and consume the data in your IOProc or RemoteIO callback.
Synchronization becomes an issue with multiple threads, but if you have only one reader and one writer generally it isn't too hard. In fact, CARingBuffer is thread safe for this case.
The general flow should look like:
From the main thread:
Create the producer thread
Tell it which file to process
From the producer thread:
Open the specified file
Fill the empty space in the ring buffer with audio data
Wait until signaled or a timeout happens, and go back to #2
In your IOProc/callback:
Read data from the ring buffer
Signal the producer that more data is needed
Posting code to do this here would be much too long to read, but here are a few pointers to get you started. None of these are for the iPhone, but the principles are the same.
https://github.com/sbooth/SFBAudioEngine/blob/master/Player/AudioPlayer.cpp
http://www.snoize.com/
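Still, here is a minimal sketch of the handshake described above. For readability it uses a mutex-protected byte FIFO instead of CARingBuffer, so in a real IOProc you would swap in a lock-free ring to keep the audio thread from ever waiting on a lock; the file path, chunk size, and FIFO size are illustrative:

```c
/*
 * Minimal sketch of the producer/consumer flow. The shared buffer is a
 * mutex-protected byte FIFO for clarity; in real code the consumer side
 * would read from a lock-free ring such as CARingBuffer.
 */
#include <pthread.h>
#include <string.h>
#include <stdio.h>

#define FIFO_BYTES (256 * 1024)

static char            g_fifo[FIFO_BYTES];
static size_t          g_used;                     /* bytes currently buffered */
static pthread_mutex_t g_lock      = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  g_need_data = PTHREAD_COND_INITIALIZER;

/* Producer thread: open the file, keep the FIFO topped up, and sleep until
 * the consumer signals that it has drained some data. */
static void *producer_main(void *path)
{
    FILE *f = fopen((const char *)path, "rb");     /* 1. open the specified file */
    if (!f)
        return NULL;

    char chunk[16384];
    size_t n;
    while ((n = fread(chunk, 1, sizeof chunk, f)) > 0) {   /* disk IO outside the lock */
        pthread_mutex_lock(&g_lock);
        while (FIFO_BYTES - g_used < n)            /* 3. wait until space frees up */
            pthread_cond_wait(&g_need_data, &g_lock);
        memcpy(g_fifo + g_used, chunk, n);         /* 2. fill the empty space */
        g_used += n;
        pthread_mutex_unlock(&g_lock);
    }
    fclose(f);
    return NULL;
}

/* Consumer (stand-in for the IOProc/callback): copy out what it needs and
 * wake the producer. Returns the bytes delivered; less than `want` means an
 * underrun (fill the remainder with silence in real code). */
static size_t consume(void *dst, size_t want)
{
    pthread_mutex_lock(&g_lock);
    size_t got = want < g_used ? want : g_used;
    memcpy(dst, g_fifo, got);
    memmove(g_fifo, g_fifo + got, g_used - got);   /* naive compaction for brevity */
    g_used -= got;
    pthread_cond_signal(&g_need_data);             /* "more data, please" */
    pthread_mutex_unlock(&g_lock);
    return got;
}
```

Since there is exactly one reader and one writer, replacing the FIFO with CARingBuffer removes the locking from the consumer side while keeping the same overall flow.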

App stops responding to user input whilst task is ongoing. Any way to prevent this?

When my app gets to a certain line of code in Flite, it takes about two minutes to get through that line, converting the written text into speech to be played back.
During this process, the app stops responding to any user input, handling it only once it has finished with the Flite code. Obviously this is an inconvenience. Is there any way to prevent it?
You should do any long processing in a background thread, not in the UI run loop, using something like NSOperationQueue, plus a completion callback to inform the UI when the processing is done.
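For instance, the same pattern expressed with Grand Central Dispatch, whose C API reads almost like pseudocode (the Flite and UI function names below are hypothetical placeholders, not real Flite or UIKit APIs):

```c
/*
 * Sketch of the same idea using Grand Central Dispatch (a C-level alternative
 * to NSOperationQueue). synthesize_with_flite() and speech_ready_update_ui()
 * are hypothetical placeholders for the slow Flite call and the UI update.
 */
#include <dispatch/dispatch.h>

extern void *synthesize_with_flite(const char *text);   /* hypothetical, the ~2 minute call */
extern void  speech_ready_update_ui(void *audio);       /* hypothetical UI callback */

void start_speech_synthesis(const char *text)
{
    /* Hop off the main thread so the UI keeps responding to input.
     * Note: `text` must stay valid until the block runs (or be copied). */
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        void *audio = synthesize_with_flite(text);       /* the long-running work */

        /* ...then hop back to the main queue to update the UI when done. */
        dispatch_async(dispatch_get_main_queue(), ^{
            speech_ready_update_ui(audio);
        });
    });
}
```

The UI can disable its controls and show a spinner when start_speech_synthesis is called, then re-enable them in the completion callback.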