I want to modify Apple's aurioTouch sample code to generate the waveform from an audio file instead of rendering the waveform from the mic input. I tried to do it, but I am not able to understand where and what changes to make. Can anyone guide me on how this can be achieved?
Thanks,
Look inside the render callback for a function named AudioUnitRender.
The render callback happens whenever the speakers are hungry for data.
IIRC aurioTouch simply grabs however many samples are required from the microphone using this function.
Of course, the first time round it will fail because there will be nothing waiting.
Anyway, just comment out this call and instead fill the buffer yourself with samples from your file (which I think you would want to load into memory in advance; you probably don't want file I/O clogging a high-priority thread).
That means you will probably need to create some sort of AudioFile class and pass a reference to an instance of it when you set up the render callback. That way you will be able to access the data from within the render callback (which is a vanilla C function, i.e. not a member of a class, so it has no other way to access class data -- unless you want to do something horrible with file-level variables).
Make sure you declare this AudioFile *audioFile property as nonatomic, if it is a property; you don't want your render callback to be kept waiting because some other thread is inside the object and consequently holds a lock on it. A sketch of the idea follows.
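To make that concrete, here is a minimal sketch of what the modified render callback could look like. Everything here is an assumption made for illustration: the SoundFileData struct, its fields, and the mono 16-bit integer sample format (the real format depends on the stream format you set on the audio unit). Treat it as the shape of the idea, not a drop-in patch for aurioTouch.

#import <AudioToolbox/AudioToolbox.h>

// Hypothetical container for the file's samples, decoded into memory in advance.
typedef struct {
    SInt16 *samples;     // all samples from the file
    UInt32  sampleCount; // total number of samples
    UInt32  readIndex;   // where the next callback continues reading
} SoundFileData;

static OSStatus FileRenderCallback(void                       *inRefCon,
                                   AudioUnitRenderActionFlags *ioActionFlags,
                                   const AudioTimeStamp       *inTimeStamp,
                                   UInt32                      inBusNumber,
                                   UInt32                      inNumberFrames,
                                   AudioBufferList            *ioData)
{
    // inRefCon is whatever you passed as inputProcRefCon when registering the callback.
    SoundFileData *file = (SoundFileData *)inRefCon;
    SInt16 *out = (SInt16 *)ioData->mBuffers[0].mData;

    // Where aurioTouch would call AudioUnitRender() to pull mic samples,
    // copy samples out of the preloaded file instead.
    for (UInt32 i = 0; i < inNumberFrames; i++) {
        out[i] = (file->readIndex < file->sampleCount)
                     ? file->samples[file->readIndex++]
                     : 0;   // pad with silence once the file runs out
    }
    return noErr;
}

The same SoundFileData pointer (or your AudioFile instance) is what you would hand over as the refCon when you fill in the AURenderCallbackStruct.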
I am successfully using an MTAudioProcessingTap in Swift on macOS to manipulate my audio routing for both live playback and export. However, the specific routing that should occur at runtime depends on the user's choices. What I would like to be able to do is pass a pair of Ints to the MTAudioProcessingTapProcessCallback when I create the tap, so that I can use a single callback definition that uses those Ints to determine how to do the routing. The problem is that the callback is a C function pointer that can't capture context.
I thought maybe I could use the clientInfo parameter of the MTAudioProcessingTapCallbacks to hold the values I need, but I can't find any documentation on how I can access this parameter from within the MTAudioProcessingTapProcessCallback.
I have 32 possible routing combinations, and unfortunately the only other option I see at this point is declaring 32 separate MTAudioProcessingTapProcessCallbacks, and then selecting which to use when I create the tap. But it would be so much easier for me if I could just have a single MTAudioProcessingTapProcessCallback that makes a simple decision based on passed-in data.
I figured out how it works. In order to access the data inside clientInfo from within the process callback:
1. Inside the MTAudioProcessingTapInitCallback, initialize tapStorageOut with the clientInfo pointer.
2. Inside the process callback, use MTAudioProcessingTapGetStorage(tap) to get that pointer back and access the data.
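For illustration, here is roughly how those two callbacks fit together, written as plain C callbacks for brevity (the question is Swift, but MTAudioProcessingTapGetStorage is the same call there). The RoutingInfo struct and its two fields are invented for the example.

#import <MediaToolbox/MediaToolbox.h>

typedef struct { int fromChannel; int toChannel; } RoutingInfo;   // hypothetical routing data

static void TapInit(MTAudioProcessingTapRef tap, void *clientInfo, void **tapStorageOut)
{
    // Stash the clientInfo pointer in the tap's storage so the process callback can find it.
    *tapStorageOut = clientInfo;
}

static void TapProcess(MTAudioProcessingTapRef tap, CMItemCount numberFrames,
                       MTAudioProcessingTapFlags flags, AudioBufferList *bufferListInOut,
                       CMItemCount *numberFramesOut, MTAudioProcessingTapFlags *flagsOut)
{
    // Recover the routing data that was passed as clientInfo when the tap was created.
    RoutingInfo *routing = (RoutingInfo *)MTAudioProcessingTapGetStorage(tap);

    MTAudioProcessingTapGetSourceAudio(tap, numberFrames, bufferListInOut,
                                       flagsOut, NULL, numberFramesOut);

    // ... shuffle channels in bufferListInOut according to routing->fromChannel / toChannel ...
    (void)routing;
}

// When creating the tap, your routing data goes in the clientInfo slot of the callbacks struct:
//   MTAudioProcessingTapCallbacks callbacks = { kMTAudioProcessingTapCallbacksVersion_0,
//       &myRouting, TapInit, NULL, NULL, NULL, TapProcess };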
I have a Matlab GUI in which some callback functions take a long time to execute. In addition, these functions include the following code:
drawnow('expose');
pause(handles.data.delay);
I want to prevent those callback executions from being interrupted, in order to avoid data inconsistency if the user presses other buttons. Thus, I modify the figure settings as follows:
set(handles.figure, 'BusyAction','cancel', 'Interruptible','off');
However, the callbacks are still interrupted. How can I avoid it?
Note: I think the problem is that I need to propagate the 'BusyAction' and 'Interruptible' values to all the controls in my GUI. Is there any way to do this automatically? For example, by modifying the default values before generating the GUI.
The fastest and cleanest way to propagate any property to all UI objects is with findobj:
set(findobj('Type','uicontrol'), 'BusyAction','cancel', 'Interruptible','off');
I know that a block is a reusable chunk of executable code in Objective-C. Is there a reason I shouldn't put that same chunk of code in a function and just call the function when I need that code to run?
It depends on what you're trying to accomplish. One of the cool things about blocks is that they capture local scope. You can achieve the same end result with a function, but you end up having to do something like pass around a context object full of relevant values. With a block, you can do this:
int num1 = 42;
void (^myBlock)(void) = ^{
    NSLog(@"num1 is %d", num1);
};
num1 = 0; // changed after the block is created
// Sometime later, in a different scope
myBlock(); // prints "num1 is 42"
So simply by using the variable num1, the block captures its value at the time myBlock was defined.
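For contrast, here is a rough sketch of the function-pointer version of the same thing: since a plain function cannot capture num1, you end up inventing a context struct and passing it alongside the function pointer. All of the names here are made up for the example.

#include <stdio.h>

typedef struct { int num1; } CallbackContext;          // the "context object" you have to invent

static void LogNum(void *context)                      // plain C function: captures nothing
{
    CallbackContext *ctx = (CallbackContext *)context; // the value has to be smuggled in by hand
    printf("num1 is %d\n", ctx->num1);
}

// Stand-in for any API that takes a callback plus an opaque context pointer.
static void InvokeLater(void (*callback)(void *), void *context)
{
    callback(context);
}

int main(void)
{
    CallbackContext ctx = { 42 };
    InvokeLater(LogNum, &ctx);   // prints "num1 is 42"
    return 0;
}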
From Apple's documentation:
Blocks are a useful alternative to traditional callback functions for two main reasons:
1. They allow you to write code at the point of invocation that is executed later in the context of the method implementation. Blocks are thus often parameters of framework methods.
2. They allow access to local variables. Rather than using callbacks requiring a data structure that embodies all the contextual information you need to perform an operation, you simply access local variables directly.
As Brad Larson comments in response to this answer:
Blocks will let you define actions that take place in response to an event, but rather than have you write a separate method or function, they allow you to write the handling code right where you set up the listener for that event. This can save a mess of code and make your application much more organized.
A good example I can give you is an alert view: it is much nicer to decide, at the time I create the alert view, what will happen when it is dismissed, instead of writing a delegate method and waiting for it to be called. That is much easier to understand and implement; a sketch of the pattern follows.
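As a hedged sketch of that idea, using UIAlertController (which replaced the delegate-based UIAlertView); assume this runs inside a view controller, and deleteItem is a hypothetical method of that controller:

UIAlertController *alert = [UIAlertController alertControllerWithTitle:@"Delete item?"
                                                               message:nil
                                                        preferredStyle:UIAlertControllerStyleAlert];

[alert addAction:[UIAlertAction actionWithTitle:@"Delete"
                                          style:UIAlertActionStyleDestructive
                                        handler:^(UIAlertAction *action) {
    // The handling code lives right where the alert is created,
    // instead of in a separate delegate method such as alertView:clickedButtonAtIndex:.
    [self deleteItem];   // hypothetical method on the presenting view controller
}]];

[self presentViewController:alert animated:YES completion:nil];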
The following function calls are deprecated in OpenAL 1.1; what is a proper replacement? The only answer I found on Google was "write your own function!" ;-)
alutLoadWAVFile
alutUnloadWAV
There are 8 file loading functions in ALUT (not including the three deprecated functions alutLoadWAVFile, alutLoadWAVMemory, and alutUnloadWAV).
The prefix of the function determines where the data is going; four of them start with alutCreateBuffer (create a new buffer and put the sound data into it), and the other four start with alutLoadMemory (allocate a new memory region and put the sound data into it).
The suffix of the function determines where the data comes from. Your options are FromFile (from a file!), FromFileImage (from a memory region), HelloWorld (fixed internal data of someone saying "Hello, world!"), and Waveform (generate a waveform).
I believe the correct replacement for alutLoadWAVFile would therefore be alutCreateBufferFromFile.
However, I would not use this blindly - it's suitable for short sound clips, but for something like a music track you probably want to load the data in chunks and queue up multiple buffers, to ease the memory load.
These functions are all covered in the alut documentation, by the way.
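A minimal sketch of the non-deprecated path, assuming alutInit has already been called (the header path and error handling vary slightly by platform):

#include <stdio.h>
#include <AL/alut.h>   // on some platforms this lives at <ALUT/alut.h>

ALuint LoadClip(const char *path)
{
    ALuint buffer = alutCreateBufferFromFile(path);   // replaces alutLoadWAVFile + alBufferData
    if (buffer == AL_NONE) {
        fprintf(stderr, "ALUT error: %s\n", alutGetErrorString(alutGetError()));
        return AL_NONE;
    }

    ALuint source;
    alGenSources(1, &source);
    alSourcei(source, AL_BUFFER, (ALint)buffer);      // attach the buffer to a source
    alSourcePlay(source);
    return buffer;
}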
"write your own" is pretty much the correct answer.
You can usually get away with using the deprecated functions since most implementations still include the WAV file handling functions, with one notable exception being iOS, for which you'd need to use audio file services.
I'd suggest making a standard prototype for "load wav file" and then depending on the OS, use a different loading routine. You can just stub it with a call to alutLoadWAVFile for systems known to still support it.
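Something along these lines for the single-prototype idea; the iOS branch is only sketched, since there you would read the PCM bytes with Audio File Services (e.g. AudioFileOpenURL / AudioFileReadBytes) and hand them to alBufferData yourself:

#if defined(__APPLE__)
#include <TargetConditionals.h>
#endif

#if TARGET_OS_IPHONE
#include <OpenAL/al.h>
#else
#include <AL/alut.h>               // or <ALUT/alut.h>, depending on the platform
#endif

// One prototype for the whole codebase; the implementation differs per platform.
ALuint LoadWAVIntoBuffer(const char *path)
{
#if TARGET_OS_IPHONE
    // iOS: no ALUT; load the samples via Audio File Services and call
    // alBufferData(buffer, format, data, size, freq). Omitted here for brevity.
    (void)path;
    return AL_NONE;
#else
    return alutCreateBufferFromFile(path);   // or alutLoadWAVFile on systems that still ship it
#endif
}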
I am designing a music visualiser application for the iPhone.
I was thinking of doing this by picking up data via the iPhone's mic, running a Fourier Transform on it and then creating visualisations.
The best example of this I have been able to find is aurioTouch, which produces a perfect graph based on FFT data. However, I have been struggling to understand / replicate aurioTouch in my own project.
I am unable to understand where exactly aurioTouch picks up the data from the microphone before it does the FFT.
Also, are there any other examples of code that I could use to do this in my project? Or any other tips?
Since I am planning to use the mic input myself, I thought your question was a good opportunity to get familiar with the relevant sample code.
I will retrace the steps I took reading back through the code:
1. Starting off in SpectrumAnalysis.cpp (since it is obvious the audio has to get to this class somehow), you can see that the class method SpectrumAnalysisProcess has a 2nd input argument, const int32_t* inTimeSig -- a promising starting point, since the input time signal is what we are looking for.
2. Using the right-click menu item Find in project on this method, you can see that apart from the obvious definition & declaration, this method is used only inside the FFTBufferManager::ComputeFFT method, where it gets mAudioBuffer as its 2nd argument (the inTimeSig from step 1). Looking for this class data member gives more than 2 or 3 results, but most of them are again just definitions/memory allocations etc. The interesting search result is where mAudioBuffer is used as an argument to memcpy, inside the method FFTBufferManager::GrabAudioData.
3. Again using the search option, we see that FFTBufferManager::GrabAudioData is called only once, inside a method called PerformThru. This method has an input argument called ioData (sounds promising) of type AudioBufferList.
4. Looking for PerformThru, we see it is used in the following line: inputProc.inputProc = PerformThru; - we're almost there: it looks like registering a callback function. Looking at the type of inputProc, we indeed see it is AURenderCallbackStruct - that's it. The callback is called by the audio framework, which is responsible for feeding it with samples.
You will probably have to read the documentation for AURenderCallbackStruct (or, better yet, the Audio Unit Hosting Guide) to get a deeper understanding, but I hope this gives you a good starting point.
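Putting those pieces together, the shape of the mechanism is roughly as follows. This is simplified from memory rather than copied from aurioTouch: rioUnit is the RemoteIO audio unit, bus 1 is its input (microphone) side, and passing the unit's address as the refCon is just an assumption for the sketch.

#import <AudioToolbox/AudioToolbox.h>

static OSStatus PerformThru(void                       *inRefCon,
                            AudioUnitRenderActionFlags *ioActionFlags,
                            const AudioTimeStamp       *inTimeStamp,
                            UInt32                      inBusNumber,
                            UInt32                      inNumberFrames,
                            AudioBufferList            *ioData)
{
    AudioUnit rioUnit = *(AudioUnit *)inRefCon;   // assumption: refCon points at the RemoteIO unit

    // Pull inNumberFrames of microphone samples into ioData (bus 1 = the input side of RemoteIO).
    OSStatus err = AudioUnitRender(rioUnit, ioActionFlags, inTimeStamp, 1, inNumberFrames, ioData);
    if (err) { return err; }

    // ioData now holds the time-domain signal; in aurioTouch this is what eventually reaches
    // FFTBufferManager::GrabAudioData and then SpectrumAnalysisProcess.
    return noErr;
}

// Registration, done once at setup time (sketch):
//   AURenderCallbackStruct inputProc = { PerformThru, &rioUnit };
//   AudioUnitSetProperty(rioUnit, kAudioUnitProperty_SetRenderCallback,
//                        kAudioUnitScope_Input, 0, &inputProc, sizeof(inputProc));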