How to sync a recording with a live input video stream? - encoding

I am working on a project in which I receive raw frames from some input video devices and write those frames to a video file using the FFmpeg library.
I have no control over the frame rate I get from my input sources, and that frame rate also varies at run-time.
My problem is how to keep the recorded video in sync with the incoming video. Depending on the frame rate I set in FFmpeg versus the actual frame rate I am receiving, playback of the recorded video is either faster or slower than the input video.
I tried to add timestamps (as numOfFrames) to the encoded video as per the following link, but that didn't help:
ffmpeg speed encoding problem
Please tell me a way to synchronize the two. This is my first time working with FFmpeg or any multimedia library, so any examples would be highly appreciated.
I am using the DirectShow ISampleGrabber interface to capture the frames.
Thank You

So finally I figured out how to do this. Here is how.
First, I was taking frames from the PREVIEW pin of the source filter, which does not give timestamps to frames. Instead, one should take frames from the capture pin of the source filter. Then, in the SampleCB callback function, we can get the time using IMediaSample::GetTime(). This function returns time in units of 100 ns, whereas FFmpeg requires it in units of 1/time_base, where time_base corresponds to the desired frame rate.
So the DirectShow timestamp needs to be converted to FFmpeg units first; then we can set it in the AVFrame::pts field. One more thing to consider: the first frame of the video should have a timestamp of 0 in FFmpeg, so that needs to be taken care of while converting from the DirectShow timestamp to the FFmpeg one.
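For illustration, a minimal sketch of that conversion, assuming the encoder's time_base is known. av_rescale_q does the unit conversion, and the static variable (a name of my own, not from any library) remembers the first DirectShow timestamp so the output starts at pts 0:

#include <cstdint>
extern "C" {
#include <libavutil/mathematics.h>   // av_rescale_q
#include <libavutil/rational.h>
}

static int64_t g_firstSampleTime = -1;   // DirectShow time of the first frame

// sampleTime comes from IMediaSample::GetTime() inside SampleCB (100 ns units).
int64_t DirectShowTimeToPts(int64_t sampleTime, AVRational encoderTimeBase)
{
    if (g_firstSampleTime < 0)
        g_firstSampleTime = sampleTime;   // the first frame must map to pts 0

    // DirectShow reference time: 1 unit = 100 ns, i.e. a time base of 1/10,000,000.
    AVRational directShowBase = { 1, 10000000 };
    return av_rescale_q(sampleTime - g_firstSampleTime, directShowBase, encoderTimeBase);
}

The result goes straight into frame->pts before the frame is handed to the encoder.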
Thank You

Related

Right choice to play audio and video content

What I'm doing:
I need to play audio and video files that are not natively supported by Apple on iPhone/iPad, for example mkv/mka files, which may contain several audio tracks.
I'm using libffmpeg to find audio and video streams in media file.
Video is decoded with avcodec_decode_video2 and audio with avcodec_decode_audio3.
The results of each function are as follows:
avcodec_decode_video2 - fills an AVFrame structure which encapsulates information about the decoded video frame from the packet; specifically, it has a data field which is a pointer to the picture planes.
avcodec_decode_audio3 - fills a buffer of int16_t samples, which I guess is the raw audio data.
So basically I've done all this and am successfully decoding the media content.
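For context, a minimal sketch of the decode loop described above might look like this. It uses the legacy FFmpeg API the question names (avcodec_decode_video2 and avcodec_decode_audio3 have since been superseded by the send/receive-packet API), and the stream indices and buffer handling are illustrative:

extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
}

void DecodeLoop(AVFormatContext* fmt, AVCodecContext* videoCtx,
                AVCodecContext* audioCtx, int videoStream, int audioStream)
{
    AVPacket pkt;
    AVFrame* frame = avcodec_alloc_frame();   // av_frame_alloc() in later FFmpeg
    static int16_t samples[AVCODEC_MAX_AUDIO_FRAME_SIZE / sizeof(int16_t)];

    while (av_read_frame(fmt, &pkt) >= 0) {
        if (pkt.stream_index == videoStream) {
            int gotPicture = 0;
            // Fills `frame`; frame->data[] then points at the picture planes.
            avcodec_decode_video2(videoCtx, frame, &gotPicture, &pkt);
            if (gotPicture) { /* hand frame->data to the renderer */ }
        } else if (pkt.stream_index == audioStream) {
            int outSize = sizeof(samples);
            // Fills `samples` with interleaved int16_t PCM.
            avcodec_decode_audio3(audioCtx, samples, &outSize, &pkt);
            if (outSize > 0) { /* hand PCM to the audio playback layer */ }
        }
        av_free_packet(&pkt);
    }
    av_free(frame);
}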
What I have to do:
I have to play the audio and video accordingly using Apple's services. The playback I need to perform should support mixing of audio channels while playing video; e.g. say an mkv file contains two audio tracks and a video track. So I would like to know which service would be the appropriate choice for me. My research suggests that the AudioQueue service might be useful for audio playback, and probably AVFoundation for video.
Please help me find the right technology for my case, i.e. video playback + audio playback with possible audio channel mixing.
You are on the right path. If you are only playing audio (not recording at all) then I would use AudioQueues; it will do the mixing for you. If you are recording then you should use AudioUnits; take a look at the MixerHost example project from Apple. For video I recommend using OpenGL. Assuming the image buffer is in YUV420, you can render it with a simple two-pass shader setup. I believe there is an Apple example project showing how to do this. In any case, you could render any pixel format using OpenGL and a shader to convert the pixel format to RGBA. Hope this helps.
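As a rough illustration of the shader approach (a common one-pass variant, with the three YUV420 planes uploaded as separate single-channel luminance textures; the uniform names are mine), the fragment shader could look like this:

static const char* kYuvToRgbFragmentShader = R"(
precision mediump float;
varying vec2 v_texCoord;
uniform sampler2D u_texY;   // full-resolution Y plane
uniform sampler2D u_texU;   // half-resolution U plane
uniform sampler2D u_texV;   // half-resolution V plane

void main() {
    float y = texture2D(u_texY, v_texCoord).r;
    float u = texture2D(u_texU, v_texCoord).r - 0.5;
    float v = texture2D(u_texV, v_texCoord).r - 0.5;
    // BT.601 YUV -> RGB conversion.
    gl_FragColor = vec4(y + 1.402 * v,
                        y - 0.344 * u - 0.714 * v,
                        y + 1.772 * u,
                        1.0);
}
)";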

MPMoviePlayer setCurrentPlayBackTime not working

I am playing a video with MPMoviePlayerController, and when I pause the video and tap a button, I want to skip the video forward by some amount of time. Does anyone know how to seek forward from the current time? If yes, what is the minimum time by which I can skip forward, e.g. is it milliseconds or seconds?
Seeking very much depends on your content. The factors influencing the skippable durations are: content format (MP4 local/progressive download vs. HTTP stream/M3U8), i-frame frequency, and TS chunk size (for M3U8), to name the major points. See Wikipedia's explanation of i-frames.
MPMoviePlayerController itself does not impose additional limitations.
To get very exact seeking, use MP4 with a high i-frame frequency. Note that this will dramatically increase the encoded video size.

How do i pause video recording with iPhone SDK?

I see there is an app called iFile with a pause feature while recording video. How do they do this? I tried using the AVMutableComposition classes: when the user pauses, I cut a new clip and then merge the clips at the end, but the processing time to merge the videos is undesirable.
Can someone give me other good ideas on how to do this? I noticed the iFile approach is very seamless.
Thanks
Here are some ideas. I have not tried either of these.
If you are using an AVAssetWriter to write your captured images, then you can simply drop the frames while paused. You will need to keep track of the last presentation time stamp (PTS) that was used, and then calculate the next image's PTS based on that last time stamp when you start recording again (see the sketch just below). Doing this with audio as well might be a little trickier.
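A minimal sketch of that bookkeeping, using only the Core Media C API (the variable names and the surrounding capture/AVAssetWriter plumbing are assumptions on my part):

#include <CoreMedia/CoreMedia.h>

static CMTime g_pauseOffset = kCMTimeZero;  // total paused time accumulated so far
static CMTime g_lastPts;                    // original PTS of the last written sample
static bool   g_justResumed = false;        // set true when recording resumes

// Shift each sample's PTS back by the accumulated paused duration
// before appending it to the writer input.
CMTime AdjustedPts(CMSampleBufferRef sample)
{
    CMTime pts = CMSampleBufferGetPresentationTimeStamp(sample);
    if (g_justResumed) {
        // Grow the offset by the gap the pause introduced (ideally minus
        // one frame duration, omitted here for brevity).
        g_pauseOffset = CMTimeAdd(g_pauseOffset, CMTimeSubtract(pts, g_lastPts));
        g_justResumed = false;
    }
    g_lastPts = pts;
    return CMTimeSubtract(pts, g_pauseOffset);
}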
An alternate method would be to use empty edits. I am not sure how you would insert an empty edit in the middle of a track using AVAssetWriter; I know you can insert them at the beginning and end. Using AVMutableCompositionTrack you could use insertEmptyTimeRange:, where the time range is constructed like this:
CMTime delta = CMTimeSubtract(new_sample_time, last_sample_time);
CMTimeRange range = CMTimeRangeMake(last_sample_time, delta);
where new_sample_time is the time of the first sample after un-pausing, and last_sample_time is the time of the last sample before pausing. Again, with audio this may be a little tricky, as an audio buffer generally contains 1024 samples, and the CMTime returned by CMSampleBufferGetPresentationTimeStamp is the time of the first sample.
Hope this helps or leads you to a solution.

Converting the sample rate of an audio file on iPhone

I'd like to change the pitch of an audio file by changing the sample rate programmatically. I am recording the file using AVAudioRecorder. I have noticed a settings parameter within AVAudioPlayer; however, it is read-only. Can anyone lend a helping hand? :)
You could manipulate the data the recording process returns; this is generally the way to go for DSP.
A simple change in a sound's speed (and hence its pitch) can be done with resampling.
Take a look here
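For instance, a naive linear-interpolation resampler over 16-bit mono PCM might look like the following. This is a toy sketch for illustration; a production resampler would apply a proper low-pass (e.g. windowed-sinc) filter to avoid aliasing:

#include <cstdint>
#include <cstddef>
#include <vector>

// ratio > 1.0 speeds the sound up (raising pitch), ratio < 1.0 slows it
// down (lowering pitch), when the output is played at the original rate.
std::vector<int16_t> Resample(const std::vector<int16_t>& in, double ratio)
{
    std::vector<int16_t> out;
    if (in.size() < 2 || ratio <= 0.0) return out;
    out.reserve(static_cast<size_t>(in.size() / ratio) + 1);

    for (double pos = 0.0; pos + 1.0 < in.size(); pos += ratio) {
        size_t i = static_cast<size_t>(pos);
        double frac = pos - i;
        // Linear interpolation between neighbouring input samples.
        double s = in[i] * (1.0 - frac) + in[i + 1] * frac;
        out.push_back(static_cast<int16_t>(s));
    }
    return out;
}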

AudioQueue gaps in playback

I'm struggling with an AudioQueue audio player I implemented. I initially thought it was truncating the first half of the audio it played, but upon loading larger files I noticed gaps every 1/2 to 1 second. I've run it in debug and confirmed that I'm loading the queue correctly with audio (there are no big zero regions loaded in the queue). It plays without issue (no gaps) on the simulator, but on device I get gaps, as if it's missing every other chunk of audio.
In my app I decompress and then pull audio from an in-memory NSMutableData object, and feed this data into the audio queue. I have a corresponding implementation in the same app that plays WAV audio, and that one works without issue on long and short audio clips. The only difference between the two is how I discover the audio metadata and where I get the audio samples for enqueuing: in the WAV implementation I use AudioFileGetProperty and AudioFileReadPackets to get this data, while in the other case I derive the data beforehand using cached ivars loaded during callbacks from my decompressor. The metadata matches for both the compressed and WAV implementations.
I've run the code in Instruments and I don't see anything taking more than 1 ms in my audio packet delivery/enqueuing logic during playback. I'm completely lost. Please speak up if you have any idea how to solve this.
I finally resolved this issue. I found that if I skip the first 44 bytes (the exact size of a WAV header) of the audio, then it plays correctly on the device. It plays correctly on the simulator regardless of whether I skip the 44 bytes or not. Strange, and I'm not sure why, but that's the way it works.
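In code, the fix might look roughly like this when filling AudioQueue buffers (a sketch under my own naming; the 44-byte constant is the canonical RIFF/fmt/data header size for PCM WAV):

#include <AudioToolbox/AudioToolbox.h>
#include <algorithm>
#include <cstdint>
#include <cstring>

static const size_t kWavHeaderSize = 44;   // canonical PCM WAV header size

// Copy one chunk of decoded audio into an AudioQueue buffer and enqueue it,
// skipping the stray header bytes on the very first chunk.
void EnqueueChunk(AudioQueueRef queue, AudioQueueBufferRef buf,
                  const uint8_t* data, size_t size, bool isFirstChunk)
{
    if (isFirstChunk && size > kWavHeaderSize) {
        data += kWavHeaderSize;
        size -= kWavHeaderSize;
    }
    size_t n = std::min(size, (size_t)buf->mAudioDataBytesCapacity);
    memcpy(buf->mAudioData, data, n);
    buf->mAudioDataByteSize = (UInt32)n;
    AudioQueueEnqueueBuffer(queue, buf, 0, NULL);   // 0 packet descriptions for PCM
}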