Getting accurate time from FFmpeg with Objective-C (Audio Queue Services) - iPhone

My iPhone app plays an audio file using FFmpeg.
I'm getting the elapsed time (to show to the user) from the playing audio, in minutes and seconds after converting from the microseconds given by FFmpeg, like so:
AudioTimeStamp currentTimeStamp;
AudioQueueGetCurrentTime (audioQueue, NULL, &currentTimeStamp, NULL);
getFFMPEGtime = currentTimeStamp.mSampleTime/self.basicAudioDescription.mSampleRate;
self.currentAudioTime = [NSString stringWithFormat:@"%02d:%02d",
                         (int)getFFMPEGtime / (int)60000000,
                         (int)((getFFMPEGtime % 60000000) / 1000000)];
Everything works fine, but when I scrub back or forward to play another portion of the song, the elapsed time goes back to zero, no matter what the current position is. The timer always zeroes out.
I know I'm supposed to do some math to keep track of the old time and the new time, maybe constructing another clock, perhaps implementing another callback function, etc. I'm not sure which way I should go.
My questions are:
1) What's the best approach to keep track of the elapsed time when going back/forward in a song, so the clock doesn't always go back to zero?
2) Should I look deeply into FFmpeg functions or should I stick with Objective-C and Cocoa Touch to solve this problem?
Please, I need some advice/ideas from experienced programmers. I'm stuck. Thanks in advance!

If you're doing this to show elapsed time in the media to the user (which the fact you're turning it into a formatted string suggests), then you should instead query the playback system you're using for the position it thinks it is at in the file. The audio queue is completely unaware of your media, so you'd have to duplicate work done by the playback system to map queue time back to a media time. Timestamps from the audio queue are more like a high-accuracy clock for things like A/V sync.
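If you do stay with the audio queue timestamps, one common workaround is to remember the media position you seek to and add the queue's elapsed time on top of it. Below is a minimal, untested sketch of that idea, reusing the audioQueue and basicAudioDescription from the question; seekOffsetSeconds is a hypothetical property you would set to the new file position (in seconds) every time the user scrubs, assuming you stop and restart the queue on a scrub (which is what makes mSampleTime go back to zero).

// Hypothetical property: the file position (in seconds) the queue starts playing from.
// @property (nonatomic) NSTimeInterval seekOffsetSeconds;

- (NSTimeInterval)currentMediaTimeInSeconds {
    AudioTimeStamp ts;
    OSStatus err = AudioQueueGetCurrentTime(audioQueue, NULL, &ts, NULL);
    if (err != noErr) {
        return self.seekOffsetSeconds;   // queue not running yet; show the seek position
    }
    // mSampleTime counts from when the queue was (re)started, not from the start
    // of the file, so add the position playback started from.
    return self.seekOffsetSeconds + ts.mSampleTime / self.basicAudioDescription.mSampleRate;
}

- (NSString *)formattedMediaTime {
    int totalSeconds = (int)[self currentMediaTimeInSeconds];
    return [NSString stringWithFormat:@"%02d:%02d", totalSeconds / 60, totalSeconds % 60];
}

Note that the value here is already in seconds (sample time divided by sample rate), so the minutes/seconds split divides by 60 rather than by 60000000.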

Related

SPI bit banging; MCP3208; Raspberry Pi; error

I am using a Raspberry Pi 2 board with Raspbian loaded. I need to do SPI by bit banging and interface an MCP3208.
I have taken code from GitHub. It is written for the MCP3008 (10-bit ADC).
The only change I made in the code is that instead of calling:
adcValue = recvBits(12, clkPin, misoPin)
I called adcValue = recvBits(14, clkPin, misoPin), since I have to receive 14 bits of data.
Problem: it keeps sending random data ranging from 0 to 10700, even though the data should be at most 4095. It means I am not reading the data correctly.
I think the problem is that the MCP3208 has a max clock frequency of 2 MHz, but in the code there is no delay between two consecutive data reads or writes. I think I need to add a delay of about 0.5 µs on each clock transition, since I am operating at 1 MHz.
For a small delay I am currently reading Accurate Delays on the Raspberry Pi
Excerpt:
...when we need accurate short delays in the order of microseconds, it’s not always the best way, so to combat this, after studying the BCM2835 ARM Peripherals manual and chatting to others, I’ve come up with a hybrid solution for wiringPi. What I do now is for delays of under 100μS I use the hardware timer (which appears to be otherwise unused), and poll it in a busy-loop, but for delays of 100μS or more, then I resort to the standard nanosleep(2) call.
I finally found some py code to simplify reading from the 3208 thanks to RaresPlescan.
https://github.com/RaresPlescan/daisypi/blob/master/sense/mcp3208/adc_3.py
I had a data logger built on the Pi that was using a 3008. The COTS data logger I was trying to replicate had better resolution, so I started looking for a 12-bit ADC and found the 3208. I literally swapped the 3008 out for the 3208, and with this guy's code I have achieved better resolution than the COTS data logger.
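As a side note on the timing discussion above, the busy-wait idea from the wiringPi excerpt can be sketched in plain C. This is an illustration only, not code from the linked repository: it polls clock_gettime on the monotonic clock rather than the BCM2835 hardware timer the excerpt mentions, and the cost of the clock call itself puts a floor on how short the delay can really be.

#include <time.h>

/* Busy-wait for roughly `usec` microseconds on the monotonic clock.
   Intended for delays of a few microseconds, where nanosleep() overshoots badly. */
static void delay_us_busy(long usec)
{
    struct timespec start, now;
    clock_gettime(CLOCK_MONOTONIC, &start);
    do {
        clock_gettime(CLOCK_MONOTONIC, &now);
    } while ((now.tv_sec - start.tv_sec) * 1000000L
             + (now.tv_nsec - start.tv_nsec) / 1000L < usec);
}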

What does the "Mute" Button Do in Apple's AurioTouch2 sample code?

I am modifying Apple's code from the AurioTouch2 example on their developer site. Currently I am trying to fully understand the function of the app. I see that the app writes 0's to the buffers using the SilenceData method when mute is on. However, it seems to me that the data has already been processed, and when using the app I see no difference whether mute is on or off. What am I missing - what purpose does mute serve?
from the end of the performThu method (the input callback):
if (THIS->mute == YES) { SilenceData(ioData); }
from aurioHelper.cpp
void SilenceData(AudioBufferList *inData)
{
    for (UInt32 i = 0; i < inData->mNumberBuffers; i++)
        memset(inData->mBuffers[i].mData, 0, inData->mBuffers[i].mDataByteSize);
}
AurioTouch2 Sample Code
You are correct, all that's doing is zeroing out the buffer. The reason it's important is that it's possible for the mData member to be uninitialized (i.e. random), which would result in horribly loud buzzing noises if it was left alone. It's possible that it would make no difference, but you shouldn't really leave that to chance.
If you're ever in a situation where you'd like to produce silence, make sure you zero your buffer (instead of just leaving it).
First, I found that the mute button does work. When I hold the phone up to my ear I can hear that the sound from the mic is being played through to the receiver. With mute on there is no sound. Before, I was expecting sound from the speaker (not the receiver). That part of the problem is solved.
Second, the remote io unit puts the microphone input data in the ioData buffers. Before, I was expecting that there would be another callback for the output to the speaker, but I think because there is not one, the remote io unit just uses the same ioData and plays it out of the receiver (speaker). Thus zeroing out the ioData (after processing the microphone input data for use by the app) results in silence at the receiver (i.e. the mute function). Any confirmation or clarification is appreciated.

Perl system call mplayer, transition between videos varies

I'm only a few weeks into Perl, and I am trying to run the code below:
sub runVideo {
    system('mplayer -fs video1.mpeg2 video2.mpeg2');
    return;
}
runVideo();
system('some other processes in background&');
runVideo();
Basically I run video1 and video2 two times: the first time is just the videos, the second time with some applications running in the background. It doesn't matter what apps are running, since I'm running the videos in fullscreen mode.
Problem:
On the first run, the transition from video1 to video2 takes about 1-2 seconds.
While on the second run, the transition from video1 to video2 takes less than a second.
Question:
Why does the transition time differ? Could it be that the videos are still in memory, so they take a shorter time to load?
What alternatives or workarounds are there to get the same transition time?
The answer is likely in caching effects. Either the video, or the codecs required to play it, weren't in memory for video2. But of course the second time you do it, they are.
There are a couple of things you can try—depending on the exact reason the delay is a problem:
You can try the -fixed-vo option to mplayer (if you're using mplayer 1.x; it's the default in 2.x, I believe). This will prevent the jarring vo deinit/reinit cycle.
You can (and probably should) run mplayer in -slave mode (also probably with -idle). This will give you much more control over it.
You can pre-cache whatever data is taking a while (a small C sketch of this option appears after these suggestions). The way to do this on Unix-like systems is posix_fadvise(int fd, off_t offset, off_t len, int advice) with an advice of POSIX_FADV_WILLNEED. Alternatively, on Linux, readahead(int fd, off64_t offset, size_t count). Or finally, by an mmap on the file, followed by madvise(void *addr, size_t length, int advice) with advice of MADV_WILLNEED. Unfortunately, none of posix_fadvise, readahead, and madvise are exported by the POSIX module, so you'll have to find another module (check CPAN) or resort to Inline or XS. Or open/sysread (less efficient).
You can combine your videos together. That should completely eliminate transition time.
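For the pre-caching option, here is a minimal C sketch of the posix_fadvise approach described above. It is an illustration only: the file names are the placeholders from the question, and from Perl you would reach the same call via Inline, XS, or a CPAN wrapper, as noted.

#define _POSIX_C_SOURCE 200112L   /* for posix_fadvise */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Ask the kernel to start pulling the whole file into the page cache. */
static int precache(const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0) {
        perror(path);
        return -1;
    }
    /* offset 0, len 0 means "the whole file"; WILLNEED kicks off readahead. */
    int rc = posix_fadvise(fd, 0, 0, POSIX_FADV_WILLNEED);
    close(fd);
    return rc;
}

int main(void)
{
    precache("video1.mpeg2");   /* placeholder names from the question */
    precache("video2.mpeg2");
    return 0;
}

Running something like this before the first runVideo() call should warm the page cache, so the first transition behaves more like the second.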

How to handle class methods being called again before they are finished?

What is the best way to handle this situation on an iPhone device: my program ramps the pitch of a sound between two values. A button press calls a method that has a while loop that does the ramping in small increments. It will take some time to finish. In the meantime the user has pressed another button, calling the same method. Now I want the loop in the first call to stop and the second to start from the current state. Here is something like what the method looks like:
- (void)changePitchSample:(float)newPitch {
    float oldPitch = channel.pitch;
    if (oldPitch > newPitch) {
        while (channel.pitch > newPitch) {
            channel.pitch = channel.pitch - 0.001;
        }
    }
    else if (oldPitch < newPitch) {
        while (channel.pitch < newPitch) {
            channel.pitch = channel.pitch + 0.001;
        }
    }
}
Now how do I best handle the situation where the method is called again? Do I need some kind of multithreading? I do not need two processes going at the same time, so it seems there must be some easier solution that I cannot find (being new to this language).
Any help greatly appreciated!
You cannot do this like that. While your loop is running no events will be processed, so if the user pushes the button again nothing will happen before your loop is finished. Also, like this you can’t control the speed of your ramp. I’d suggest using an NSTimer. In your changePitchSample: method you store the new pitch somewhere (don’t overwrite the old one) and start a timer that fires once. When the timer fires you increment your pitch, and if it is still less than the new pitch you restart the timer.
Have a look at NSOperation and the Concurrency Programming Guide. You can first start your operation to increase the pitch and also store the operation object. On the second call you can call [operation cancel] to stop the previous operation, then start a second operation to e.g. decrease the pitch and store the new object.
Btw: What you are doing right now is very bad since you "block the main thread". Calculations that take some time should not be executed directly on the main thread. You should probably also have a look at NSTimer to make your code independent of the processor speed.
Don't use a while loop; it blocks everything else. Use a timer and a state machine. The timer can call the state machine at the rate at which you want things to change. The state machine can look at the last ramp value and the time of the last button hit (or even an array of UI event times) and decide whether and how much to ramp the volume during the next time step (logic is often just a pile of if and select/case statements if the control algorithm isn't amenable to a nice table). Then the state machine can call the object or routine that handles the actual sound level.
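To make the timer suggestion concrete, here is a minimal, untested sketch. It assumes the same channel object with a pitch property as in the question, plus two hypothetical properties (targetPitch and rampTimer) added to hold the ramp state between timer fires.

// Hypothetical properties for the ramp state:
// @property (nonatomic) float targetPitch;
// @property (nonatomic, strong) NSTimer *rampTimer;

- (void)changePitchSample:(float)newPitch {
    self.targetPitch = newPitch;          // remember where the ramp should end up
    [self.rampTimer invalidate];          // cancel any ramp already in progress
    self.rampTimer = [NSTimer scheduledTimerWithTimeInterval:0.01   // ~100 steps per second
                                                       target:self
                                                     selector:@selector(rampStep:)
                                                     userInfo:nil
                                                      repeats:YES];
}

- (void)rampStep:(NSTimer *)timer {
    const float step = 0.001f;            // pitch change per tick, as in the question
    float diff = self.targetPitch - channel.pitch;
    if (fabsf(diff) <= step) {            // close enough: snap to the target and stop
        channel.pitch = self.targetPitch;
        [timer invalidate];
        self.rampTimer = nil;
    } else {
        channel.pitch += (diff > 0 ? step : -step);
    }
}

Because each step happens on a timer fire, the main thread stays responsive, and a new button press simply retargets the ramp instead of starting a competing loop.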

Xcode - OpenAL - Getting the current time of a playing sound

Hey everyone, I was wondering if someone could point me in the right direction for creating a function in my OpenAL singleton class that returns the current time of the sound.
Any Ideas? Thanks!
(Current time 'while the sound is playing')
Hi, can you elaborate on what you mean by the current time of the sound?
Do you mean what the actual time is when the sound is played, or how long the audio clip is? Please specify.
If you mean the actual time the sound was played, then you can use
CFAbsoluteTimeGetCurrent()
to get the current time expressed as the number of seconds (as a double value, so fractional seconds will be there) since January 1, 2001 00:00:00 GMT.
So basically what you will get is the current time at the moment the function is called.
If you would like to know when an event occurred within your application, then you can use the
-[UIEvent timestamp]
property, which will be the most accurate representation of when the sound was played.
To use whichever of those functions you decide to choose (if this is actually what you are talking about), you return it like so:
- (double)returnTime {
    double timeValue = CFAbsoluteTimeGetCurrent();
    return timeValue;
}
And then you can call this function and store the time like so
double storeValue = [self returnTime];
in one go. You can then use this variable to print it to the console or whatever you like.
Hope this helps. Again please specify what you meant by the current time of the sound.
Pk