What does the "Mute" button do in Apple's aurioTouch2 sample code?

I am modifying Apple's code from the aurioTouch2 example on their developer site. Currently I am trying to fully understand how the app works. I see that the app writes 0's to the buffers using the SilenceData function when mute is on. However, it seems to me that the data has already been processed by then, and when using the app I see no difference whether mute is on or off. What am I missing? What purpose does mute serve?
From the end of the PerformThru method (the input callback):
if (THIS->mute == YES) { SilenceData(ioData); }
From aurioHelper.cpp:
void SilenceData(AudioBufferList *inData)
{
    // Zero out every buffer in the list
    for (UInt32 i = 0; i < inData->mNumberBuffers; i++)
        memset(inData->mBuffers[i].mData, 0, inData->mBuffers[i].mDataByteSize);
}
AurioTouch2 Sample Code

You are correct, all that's doing is zeroing out the buffer. The reason it's important is that it's possible for the mData member to be uninitialized (i.e. random), which would result in horribly loud buzzing noises if it were left alone. It's possible that it would make no difference, but you shouldn't leave that to chance.
If you're ever in a situation where you'd like to produce silence, make sure you zero your buffer (instead of just leaving it as-is).

First, I found that the mute button does work. When I hold the phone up to my ear I can hear that the sound from the mic is being played through to the receiver. With mute on there is no sound. Before, I was expecting sound from the speaker (not the receiver). That part of the problem is solved.
Second, the Remote I/O unit puts the microphone input data in the ioData buffers. Before, I was expecting that there would be another callback for the output to the speaker, but I think because there is not one, the Remote I/O unit just uses the same ioData and plays it out through the receiver. Thus zeroing out ioData (after processing the microphone input data for use by the app) results in silence at the receiver, i.e. the mute function. Any confirmation or clarification is appreciated.
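That matches the structure of the sample: there is a single render callback on the Remote I/O unit, which both pulls the mic samples in and leaves in ioData whatever should be played out. A minimal sketch of that pattern (simplified, not a verbatim excerpt; names like EngineState and RenderSketch are illustrative):

#include <AudioUnit/AudioUnit.h>
#include <string.h>

typedef struct { AudioUnit rioUnit; Boolean mute; } EngineState; // illustrative

// Render callback on the Remote I/O unit: whatever is left in ioData
// when it returns is what the hardware plays out.
static OSStatus RenderSketch(void *inRefCon,
                             AudioUnitRenderActionFlags *ioActionFlags,
                             const AudioTimeStamp *inTimeStamp,
                             UInt32 inBusNumber,
                             UInt32 inNumberFrames,
                             AudioBufferList *ioData)
{
    EngineState *state = (EngineState *)inRefCon;

    // Pull the microphone samples from the input bus (bus 1) into ioData.
    OSStatus err = AudioUnitRender(state->rioUnit, ioActionFlags, inTimeStamp,
                                   1, inNumberFrames, ioData);
    if (err) return err;

    // ... hand the samples to the app for analysis/drawing here ...

    // Zeroing ioData now silences playback without affecting the analysis,
    // which already happened above.
    if (state->mute)
        for (UInt32 i = 0; i < ioData->mNumberBuffers; i++)
            memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);

    return noErr;
}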

Related

8-channel async mic recording in MATLAB

I wanted to record a sequence of sounds (using an 8-channel mic array).
MATLAB's audiorecorder system object does not support async recording with more than 2 channels.
By async, I mean the following: the user presses a key (handled by a GUI event handler) to start the recording; when the user presses a key again, the system saves the current recording and the user starts the next audio in the sequence.
I can record 8 channels from MATLAB using the audioDeviceReader system object, but for that I need to call it for each frame, so I would have to create a parallel process that communicates with both the event handler and the audioDeviceReader.
I don't have much experience with parallel programming. Should I look into audiorecorder's code and see if it can be trivially changed to support 8 channels (if that were possible, I think they would have already done it)? Or should I write code that spawns a parallel process exposing record and stop functions wrapping audioDeviceReader, which can interface with the event listener similarly to audiorecorder? If so, how should I proceed?
Well, surprisingly, removing the num-channels error check in the library code worked. :)

Queuing and looping buffers in OpenAL

I have a question about queueing buffers in OpenAL.
I have two wave files, let's say for an engine. The first is the sound of the engine starting and the second is the engine running.
What I'm looking for is a way to create a source that plays sound 1 once and then loops sound 2 until alSourceStop() is called.
Is something like this even possible?
Thanks for your help :)
Hans
Here is some code where I stream audio using OpenAL ... the salient line is
alSourcei(streaming_source[ii], AL_BUFFER, 0);
It's written for Linux, so OS X may require a tweak to the header file locations:
https://github.com/scottstensland/render-audio-openal
Let me know if you need anything explained ... enjoy
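For the specific play-once-then-loop behavior, one approach (a sketch under assumptions, not taken from that repo: it assumes source, startBuf and loopBuf are already created and filled with your two wave files) is to queue both buffers and then keep re-queueing only the running-engine buffer as the source finishes each one:

#include <AL/al.h>

// Queue the start sound followed by the running sound, then play.
void engine_start(ALuint source, ALuint startBuf, ALuint loopBuf)
{
    ALuint queue[2] = { startBuf, loopBuf };
    // AL_LOOPING would repeat the whole queue, start sound included,
    // so keep it off and loop manually instead.
    alSourcei(source, AL_LOOPING, AL_FALSE);
    alSourceQueueBuffers(source, 2, queue);
    alSourcePlay(source);
}

// Call this regularly (e.g. once per frame) until you call alSourceStop().
void engine_pump(ALuint source, ALuint loopBuf)
{
    ALint processed = 0;
    alGetSourcei(source, AL_BUFFERS_PROCESSED, &processed);
    while (processed-- > 0) {
        ALuint done;
        alSourceUnqueueBuffers(source, 1, &done);
        // Only the running-engine buffer goes back on the queue; the
        // start buffer is dropped after its single pass.
        alSourceQueueBuffers(source, 1, &loopBuf);
    }
}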

Getting accurate time from FFmpeg with Objective-C (Audio Queue Services)

My iPhone app plays an audio file using FFmpeg.
I'm getting the elapsed time (to show to the user) from the playing audio, converting it to minutes and seconds, like so:
AudioTimeStamp currentTimeStamp;
AudioQueueGetCurrentTime(audioQueue, NULL, &currentTimeStamp, NULL);
// mSampleTime divided by the sample rate gives seconds since the queue started
getFFMPEGtime = currentTimeStamp.mSampleTime / self.basicAudioDescription.mSampleRate;
self.currentAudioTime = [NSString stringWithFormat: @"%02d:%02d",
                         (int)getFFMPEGtime / 60,
                         (int)getFFMPEGtime % 60];
Everything works fine, but when I scrub back or forward to play another portion of the song, the elapsed time goes back to zero, no matter what the current position is. The timer always zeroes out.
I know I'm supposed to do some math to keep track of the old time and the new time, maybe constructing another clock, perhaps implementing another callback function, etc. I'm not sure which way to go.
My questions are:
1) What's the best approach to keeping track of the elapsed time when going back/forward in a song, so the clock doesn't always go back to zero?
2) Should I look deeper into the FFmpeg functions, or should I stick with Objective-C and Cocoa Touch to solve this problem?
Please, I need some advice/ideas from experienced programmers. I'm stuck. Thanks beforehand!
If you're doing this to show elapsed time in the media to the user (which the fact you're turning it into a formatted string suggests), then you should instead query the playback system you're using for the time it thinks it is at in the file. The audio queue is completely unaware of your media so you'll have to duplicate work done by the playback system to map it to a media time. Timestamps from the audio queue are more like a high accuracy clock for things like A/V sync.
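If you do stay with the audio queue timestamps, one common pattern for the scrubbing problem (a sketch, not from any sample: seekOffsetSeconds is a value you store yourself whenever the user scrubs and the queue is restarted at the new file position) is to add the stored seek offset to the queue's elapsed time:

#include <AudioToolbox/AudioToolbox.h>

// Media time = where we last seeked to + how long the restarted queue
// has been running since then.
static double currentMediaTimeSeconds(AudioQueueRef queue,
                                      double sampleRate,
                                      double seekOffsetSeconds)
{
    AudioTimeStamp ts;
    OSStatus err = AudioQueueGetCurrentTime(queue, NULL, &ts, NULL);
    if (err != noErr)
        return seekOffsetSeconds; // queue not running yet: show the seek point
    return seekOffsetSeconds + ts.mSampleTime / sampleRate;
}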

Get input from the keyboard while displaying an AVI with MATLAB

Hi all,
I wrote a short program that displays an AVI file. I need the program to get input from the keyboard while the movie is running (not after it ends).
This is my code:
figure('MenuBar','none')
set(gcf,'Color','white')
set(gca,'Color','white');
set(gca,'XColor','white');
set(gca,'YColor','white');
m = aviread('c:/t1.avi');
a = 30:1:100;
b = 100:-1:30;
c = [a b a b a b a b a b];  % to run the movie back and forth
movie(m,c)                  % runs the movie
Thank you for any help
Ariel
Maybe you can insert your video in a uipanel (or another suitable GUI item) and use the KeyPressFcn callback.
Have a look at this: Callback Sequencing and Interruption (I don't know if it works, but it's probably worth trying).
As far as I know, multi-threading and parallel processing capabilities in MATLAB are limited; however, it appears there are remedies. This article describes combining MATLAB and C++ code through MEX files.
Now I have to admit that I have never tried this, so I can't really claim it would work in your case, but it would be a good place to start.
Unless movie() has been designed to watch for input, I think you will have to multithread, which, from one of the other answers, sounds a bit complicated.
You could play a short section of the video, then run some code to check for inputs, and then play the next bit of the video. I'm not sure you can count on things the user types while the video plays going into the input buffer, though.
The solution is to use winopen:
winopen('c:/filename.avi')
This command opens the movie in the system media player and then keeps running the following commands in the MATLAB script; it doesn't wait for the movie to end, so the movie runs in the background.
Thanks everyone,
Ariel

Creating an iPhone music Visualiser based on Fourier Transform

I am designing a music visualiser application for the iPhone.
I was thinking of doing this by picking up data via the iPhone's mic, running a Fourier transform on it, and then creating visualisations.
The best example I have been able to find of this is aurioTouch, which produces a perfect graph based on FFT data. However, I have been struggling to understand / replicate aurioTouch in my own project.
I am unable to understand where exactly aurioTouch picks up the data from the microphone before it does the FFT.
Also, are there any other code examples that I could use to do this in my project? Or any other tips?
Since I am planning to use the mic input myself, I thought your question was a good opportunity to get familiar with a relevant piece of sample code.
I will trace the steps backwards through the code:
1. Starting off in SpectrumAnalysis.cpp (since it is obvious the audio has to get to this class somehow), you can see that the class method SpectrumAnalysisProcess has a 2nd input argument, const int32_t* inTimeSig. That sounds like a promising starting point, since the input time signal is what we are looking for.
2. Using the right-click menu item Find in project on this method, you can see that, except for the obvious definition & declaration, this method is used only inside the FFTBufferManager::ComputeFFT method, where it gets mAudioBuffer as its 2nd argument (the inTimeSig from step 1). Looking for this class data member gives more than 2 or 3 results, but most of them are again just definitions, memory allocations, etc. The interesting search result is where mAudioBuffer is used as an argument to memcpy, inside the method FFTBufferManager::GrabAudioData.
3. Again using the search option, we see that FFTBufferManager::GrabAudioData is called only once, inside a method called PerformThru. This method has an input argument called ioData (sounds promising) of type AudioBufferList.
4. Looking for PerformThru, we see it is used in the following line: inputProc.inputProc = PerformThru; we're almost there: it looks like registering a callback function. Looking at the type of inputProc, we indeed see it is AURenderCallbackStruct. That's it: the callback is called by the audio framework, which is responsible for feeding it samples (see the sketch after these steps).
You will probably have to read the documentation for AURenderCallbackStruct (or better, the Audio Unit Hosting guide) to get a deeper understanding, but I hope this gives you a good starting point.
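For reference, the registration in step 4 looks roughly like this (a simplified sketch of the pattern, not a verbatim excerpt from aurioTouch; RegisterInputCallback and engineState are illustrative names):

#include <AudioUnit/AudioUnit.h>

// The callback the framework will feed with samples (aurioTouch's PerformThru).
static OSStatus PerformThru(void *inRefCon,
                            AudioUnitRenderActionFlags *ioActionFlags,
                            const AudioTimeStamp *inTimeStamp,
                            UInt32 inBusNumber,
                            UInt32 inNumberFrames,
                            AudioBufferList *ioData);

static OSStatus RegisterInputCallback(AudioUnit rioUnit, void *engineState)
{
    AURenderCallbackStruct inputProc;
    inputProc.inputProc = PerformThru;        // called once per render cycle
    inputProc.inputProcRefCon = engineState;  // handed back as inRefCon

    // Install the callback on the Remote I/O unit's input scope.
    return AudioUnitSetProperty(rioUnit,
                                kAudioUnitProperty_SetRenderCallback,
                                kAudioUnitScope_Input,
                                0, // output element
                                &inputProc,
                                sizeof(inputProc));
}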