MPMoviePlayer setCurrentPlayBackTime not working - iphone

I am playing a video with MPMoviePlayerController. When I pause the video and then tap a button, I want to skip the video forward by some amount of time. Does anyone know how to seek forward from the current time? If so, what is the minimum amount I can seek by: milliseconds, or seconds?

Seeking very much depends on your content. The factors influencing the seekable positions are, to name the major points: the content format (local/progressive-download MP4 versus HTTP Live Streaming/M3U8), the i-frame frequency, and the TS chunk size (for M3U8). See Wikipedia's explanation of i-frames.
MPMoviePlayerController itself does not impose additional limitations.
To get very exact seeking, use MP4 with a high i-frame frequency. Note that this will dramatically increase the encoded video size.

Related

How to sync between recording and input live video stream?

I am working on a project in which I receive raw frames from some input video devices and try to write those frames to a video file using the FFmpeg libraries.
I have no control over the frame rate I get from my input sources, and this frame rate also varies at run time.
My problem is how to sync the recorded video with the incoming video. Depending on the frame rate I set in FFmpeg versus the actual frame rate I am receiving, playback of the recorded video is either faster or slower than the input video.
I tried to add timestamps (as numOfFrames) to the encoded video as per the following link, but that didn't help:
ffmpeg speed encoding problem
Please tell me a way to synchronize the two. This is my first time with FFmpeg or any multimedia library, so any examples will be highly appreciated.
I am using the DirectShow ISampleGrabber interface to capture the frames.
Thank You
So I finally figured out how to do this. Here is how.
First, I was taking frames from the PREVIEW pin of the source filter, which does not attach timestamps to the frames, so one should take frames from the CAPTURE pin of the source filter instead. Then, in the SampleCB callback function, we can get the time using IMediaSample::GetTime(). However, this function returns the time in units of 100 ns, while FFmpeg requires it in units of 1/time_base, where time_base is the desired frame rate.
So the DirectShow timestamp needs to be converted to FFmpeg units first; then we can set the pts in FFmpeg's AVFrame::pts field. One more thing to consider is that the first frame of the video should have a timestamp of 0 in FFmpeg, so that needs to be taken care of while converting from the DirectShow timestamp to the FFmpeg one.
Thank You

Rhythm (sound change) detection on iPhone

Sorry for my weak English.
I have some AIFF and MP3 tunes to play out loud on the iPhone, and I need to do some 'sound change' detection that I could use to drive visualisations (a jumping man or similar).
How can I do this 'easily' on the iPhone, and how can I do it 'well'? Should I use an FFT, or something different?
I have seen some sound visualisations before, but none of them seemed very good (they did not react clearly to changes in the music).
Thanks
What are you currently using for playback? I would use audio queues or audio units - both give you access to buffers of audio samples as they're passed through to the hardware. At that point it's a matter of detecting peaks in the sound above a certain threshold, which you can do in a variety of different ways.
An FFT will not help you because you're interested in the time-domain waveform of the sound (its amplitude over time) not its frequency-domain characteristics (the relative strengths of different frequencies in different time windows.)

Rewind 30 seconds audio and play 2X speed

I have a query regarding playing audio: can I rewind the audio 30 seconds and play it at 2X speed, like the Podcasts app on the iPhone does?
Thanks
If you want to speed up sound without making the pitch high and squeaky, you may need to license a commercial DSP time-pitch stretch library such as Dirac, et al. (There may be some open-source code to do this in Audacity, but I am unaware of a working iOS port of it.)

Play mp3 file smoothly upon dragging a scroll using AVToolbox or openAL

I have been facing this for many days now, but I have not reached any conclusion.
My problem: I want to play an mp3 file, but not simply by tapping a play button.
This is how I want to play it:
There is a slider that I can drag with my finger. I want the mp3 to play at the speed with which I am dragging, so that dragging quickly gives a fast-forwarding effect (a funny, sped-up voice) and dragging the slider slowly makes the output slow.
The problem is that the sound output is not smooth; the voice is very distorted and disturbed. I want the output to be smoother.
Please help; any suggestions are welcome. At present I am using AVAudioPlayer and seeking to a time value derived from the slider position (this does not seem feasible, though).
I feel it is possible only with OpenAL and no other way, because with OpenAL we can modify the frequency (pitch) of the sound file.
Can someone please point me to an OpenAL implementation for the iPhone? I have never played a sound file using OpenAL.
Help!!
You won't be able to do it with AVAudioPlayer, as it does not support pitch operations.
You can load and decode the entire track into memory for playback with OpenAL (which supports pitch), or you can do realtime loading/decoding and pitch changing using Audio Units (MUCH lower level, and more complicated, though).

AudioQueue gaps in playback

I'm struggling with an AudioQueue audio player I implemented. I initially thought it was truncating the first half of the audio it played, but upon loading larger files I notice gaps every 1/2 to 1 second. I've run it in debug and confirmed that I'm loading the queue correctly with audio (there are no big zero regions loaded in the queue). It plays without issue (no gaps) on the simulator, but on the device I get gaps, as if it's missing every other chunk of audio.

In my app I decompress, then pull audio from an in-memory NSMutableData object and feed this data into the audio queue. I have a corresponding implementation in the same app that plays WAVE audio, and that one works without issue on both long and short audio clips. Comparing the WAVE implementation to the one that does decompression, the only difference between the two is how I discover the audio metadata and where I get the audio samples for enqueuing. In the WAVE implementation I use AudioFileGetProperty and AudioFileReadPackets to get this data; in the other case I derive the data beforehand using cached ivars loaded during callbacks from my decompressor. The metadata matches for both the compressed and WAVE implementations.

I've run the code in Instruments and I don't see anything taking more than 1 ms in my audio packet delivery/enqueuing logic during playback. I'm completely lost. Please speak up if you have any idea how to solve this.
I finally resolved this issue. I found that if I skip the first 44 bytes of the audio (the exact size of a WAVE header), it plays correctly on the device. It plays correctly on the simulator regardless of whether I skip the 44 bytes or not. Strange, and I'm not sure why, but that's the way it works.