Set seek time in AVQueuePlayer on iPhone

I am developing an application in which I want to play two videos at the same time. That is not possible with MPMoviePlayer, so I have used AVQueuePlayer to play the videos. I can play video successfully, but the problem is jumping to a particular time. For that there is the seekToTime: method, which takes a variable of the CMTime datatype.
I am able to jump to 1, 2, or 3 seconds, etc. My problem is that I want to jump to 1.2, 1.3, 1.4 seconds and so on, but I am not able to move the video to those times.
If anyone knows the solution to this problem, please help me solve it.

The seekToTime: method, however, is tuned for performance rather than precision. If you need to move the playhead precisely, use seekToTime:toleranceBefore:toleranceAfter: instead (see the reference documentation).
toleranceBefore: the accuracy permitted before the time to which you would like to move the playback cursor, i.e.
[time - toleranceBefore]
toleranceAfter: the accuracy permitted after the time to which you would like to move the playback cursor, i.e.
[time + toleranceAfter]
In other words, both parameters represent the margin of inaccuracy; pass zero for both to seek as precisely as possible.
I have seen another question on SO using self.player.currentItem.asset.duration to get the duration.
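For example, here is a minimal sketch in Swift (assuming player is your AVQueuePlayer) that seeks precisely to 1.2 seconds:

    import AVFoundation

    // A timescale of 600 is the conventional choice for video
    // (it evenly divides 24, 25, and 30 fps frame durations).
    let target = CMTime(seconds: 1.2, preferredTimescale: 600)

    // Zero tolerances force a frame-accurate seek, trading speed for precision.
    player.seek(to: target, toleranceBefore: .zero, toleranceAfter: .zero)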
Good Luck.

Related

NSTimer and DispatchQueue start a little late when using AirPods

My app plays speech through AVPlayer and uses a Timer to insert a short pause in the middle of playback so that users can read along.
It works well on simulators and real devices.
However, when AirPods are paired, the timer starts about 0.2 seconds late.
Why does this happen and how can I fix it?
I tried both Timer.scheduledTimer and DispatchQueue.main.asyncAfter, but both showed the same symptom.
I gather that you are starting playback and simultaneously starting a timer, but when you do this with Bluetooth earphones, the two don't seem to be in sync. If that's the case, I might suggest not using your own timer, but rather letting the AVPlayer tell you where it is.
For example, you might use addPeriodicTimeObserver(forInterval:queue:using:). The AVPlayer will tell you when it has reached a particular point in the playback.
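A minimal sketch (assuming player is your AVPlayer; updateReadAlongUI(at:) is a hypothetical stand-in for whatever drives your read-along UI):

    import AVFoundation

    // Hypothetical: advance the read-along highlight to `seconds`.
    func updateReadAlongUI(at seconds: Double) {
    }

    // Observe media time, not wall-clock time, so Bluetooth output
    // latency no longer matters: the callback tracks actual playback.
    let interval = CMTime(seconds: 0.1, preferredTimescale: 600)

    let token = player.addPeriodicTimeObserver(forInterval: interval,
                                               queue: .main) { time in
        // `time` is the player's current playback position.
        updateReadAlongUI(at: time.seconds)
    }

    // Remove the observer when playback ends:
    // player.removeTimeObserver(token)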

AVAudioPlayerNode - Get Player State?

In an iOS project I am using AVAudioPlayerNode in conjunction with AVAudioEngine and an AVAudioUnitTimePitch. Everything works peachy. However, I was wondering if there is a way to figure out the player's current state (e.g. isPlaying, isPaused), or at least the playback position.
While AVAudioPlayer at least lets you read the currentTime property, I could not figure out how to get that information from AVAudioPlayerNode. I tried playing around with the nodeTimeForPlayerTime and playerTimeForNodeTime methods described in the Swift documentation, but I couldn't make any progress.
Any help would be highly appreciated.
Since AVAudioPlayerNode is designed around an audio stream, it doesn't necessarily keep track of the time within a particular file. However, it does keep a running total of how long it has been playing all audio. This timer doesn't reset with each file; to change it, you must explicitly tell the player where you want to start counting from.
So to find how long the player has been playing, you can compute:
    let seconds = Double(player.lastRenderTime?.sampleTime ?? 0)
                  / file.fileFormat.sampleRate
To get the timer to reset after each file, you must explicitly reset the player's current time. To do this, use the playAtTime: method.
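A slightly more robust sketch converts the node's render time into the player's own timeline (both calls return nil when the engine isn't rendering):

    import AVFoundation

    // Current position of an AVAudioPlayerNode, in seconds,
    // or nil if the node is not currently rendering.
    func currentTime(of player: AVAudioPlayerNode) -> Double? {
        guard let nodeTime = player.lastRenderTime,
              let playerTime = player.playerTime(forNodeTime: nodeTime) else {
            return nil
        }
        return Double(playerTime.sampleTime) / playerTime.sampleRate
    }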
If you would like an example, check one out here: https://github.com/danielmj/AEAudioPlayer

Playing a real time video stream from iPhone camera on a 20 second delay

I am trying to see if it is possible to record a video from the iPhone's camera and write this to a file. I then want the video to start playing on the screen a set time after. This all needs to happen continuously. For example, I want the video on the screen to always be 20 seconds behind what the camera is recording.
Some background:
I have a friend who is a coach and would like for his players to be able to see their last play. This could be accomplished by a feed going to a TV from an iPad always 20 seconds behind what is recorded. This needs to continually run until practice is over. (I would connect the iPad to the TV either with a cable or AirPlay to an Apple TV). The video would never need to be saved and should just be discarded after playing.
Is this even possible with the APIs AVFoundation offers? Will the iPhone let you write to a file and read from a file at the same time to accomplish this? Any other better way to accomplish this?
Thanks for your time.
Instead of writing to a file, how about saving your frames in a circular buffer big enough to hold X seconds of video?
The way I would start to do this would be to look at what's provided in AVCaptureVideoDataOutput and its delegate methods (where you can get the frame data from).
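A rough sketch of that idea (assumptions: ~30 fps capture, and a DelayedPreview class of my own invention; in practice you would likely need to deep-copy each frame's pixel buffer, since the capture pipeline recycles its buffer pool):

    import AVFoundation

    final class DelayedPreview: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
        // Ring buffer sized for ~20 seconds of video at 30 fps.
        private var frames: [CMSampleBuffer] = []
        private let capacity = 20 * 30

        func captureOutput(_ output: AVCaptureOutput,
                           didOutput sampleBuffer: CMSampleBuffer,
                           from connection: AVCaptureConnection) {
            frames.append(sampleBuffer)
            if frames.count > capacity {
                // The oldest frame is now ~20 s old: display it, then discard it.
                display(frames.removeFirst())
            }
        }

        private func display(_ buffer: CMSampleBuffer) {
            // Hand the frame to an AVSampleBufferDisplayLayer (for example).
        }
    }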

How do I pause video recording with the iPhone SDK?

I see there is an app called iFile with a pause feature while recording video. How do they do this? I tried using the AVMutableComposition classes: when the user pauses, I cut a new video and then merge the videos at the end, but the processing time to merge them is undesirable.
Can someone give me other good ideas on how to do this? I noticed the iFile way is very seamless.
Thanks
Here are some ideas. I have not tried either of these.
If you are using an AVAssetWriter to write your captured frames, then you can simply drop the frames while paused. You will need to keep track of the last presentation time stamp (PTS) that was used, and then calculate the next frame's PTS from that last time stamp when you start recording again. Doing this with audio as well might be a little trickier.
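A sketch of that PTS-shifting idea (my own illustration, not iFile's implementation; videoInput is an assumed AVAssetWriterInput and the pause flag comes from your UI):

    import AVFoundation

    var pauseOffset = CMTime.zero        // total captured time skipped while paused
    var lastWrittenPTS = CMTime.invalid  // PTS of the last frame actually written
    var justResumed = false

    func handle(_ sampleBuffer: CMSampleBuffer,
                isPaused: Bool,
                videoInput: AVAssetWriterInput) {
        if isPaused {
            justResumed = true           // drop frames while paused
            return
        }

        let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)

        if justResumed, lastWrittenPTS.isValid {
            // Grow the offset by the gap the pause created. (In practice you
            // would subtract one nominal frame duration so frames stay contiguous.)
            pauseOffset = CMTimeAdd(pauseOffset, CMTimeSubtract(pts, lastWrittenPTS))
            justResumed = false
        }

        // Retime the buffer so the written track contains no gap.
        var timing = CMSampleTimingInfo(
            duration: CMSampleBufferGetDuration(sampleBuffer),
            presentationTimeStamp: CMTimeSubtract(pts, pauseOffset),
            decodeTimeStamp: .invalid)
        var retimed: CMSampleBuffer?
        CMSampleBufferCreateCopyWithNewTiming(allocator: kCFAllocatorDefault,
                                              sampleBuffer: sampleBuffer,
                                              sampleTimingEntryCount: 1,
                                              sampleTimingArray: &timing,
                                              sampleBufferOut: &retimed)

        if let retimed = retimed, videoInput.isReadyForMoreMediaData {
            videoInput.append(retimed)
            lastWrittenPTS = pts
        }
    }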
An alternate method would be to use empty edits. I am not sure how you would insert an empty edit in the middle of a track using AVAssetWriter; I know you can insert them at the beginning and end. Using AVMutableCompositionTrack you could use insertEmptyTimeRange:, where the time range is constructed like this:
    CMTime delta = CMTimeSubtract(new_sample_time, last_sample_time);
    CMTimeRange range = CMTimeRangeMake(last_sample_time, delta);
Where new_sample_time is the time of the first sample after un-pausing, and last_sample_time is the time of the last sample before pausing. Again with audio this may be a little tricky as the buffer for audio generally contains 1024 samples. The CMTime returned by CMSampleBufferGetPresentationTimeStamp is the time of the first sample.
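In Swift, the corresponding call would look something like this (track, newSampleTime, and lastSampleTime are all assumed names for your AVMutableCompositionTrack and the two timestamps above):

    import AVFoundation

    // Cover the paused interval with an empty edit.
    let delta = CMTimeSubtract(newSampleTime, lastSampleTime)
    track.insertEmptyTimeRange(CMTimeRange(start: lastSampleTime, duration: delta))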
Hope this helps or leads you to a solution.

Play mp3 file smoothly upon dragging a slider using AudioToolbox or OpenAL

I have been facing this for many days, but I have not reached any conclusion.
My problem is: I want to play an mp3 file, but not simply by tapping a play button.
This is the way I want to play it:
*There is a slider that I can drag with my finger. I want the mp3 to play at the speed at which I drag the finger, so that dragging quickly gives a fast-forwarding effect (a funny, sped-up voice) and dragging slowly makes the output slow.*
The problem is that the sound output is not coming out smooth; the voice is very distorted and choppy.
I want the output to be smoother.
Please help. Any suggestions? Presently I am using AVAudioPlayer and passing a time value based on the slider input to play the file. (It does not seem to be feasible, though.)
I feel that it is possible only with OpenAL and no other way, because with OpenAL we can modify the frequency (pitch) of the sound file.
Can someone please point me to an OpenAL implementation for iPhone? I have never played a sound file using OpenAL.
Help!!
You won't be able to do it with AVAudioPlayer, as it does not support pitch operations.
You can load and decode the entire track into memory for playback with OpenAL (which supports pitch), or you can do realtime loading/decoding and pitch changing using Audio Units (MUCH lower level, and more complicated, though).
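For what it's worth, here is a sketch of the same idea using AVAudioEngine with AVAudioUnitVarispeed, a higher-level alternative to OpenAL/Audio Units that the answer above does not mention ("track.mp3" and the velocity mapping are placeholders):

    import AVFoundation

    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    let varispeed = AVAudioUnitVarispeed()   // changes rate and pitch together

    let file = try AVAudioFile(forReading: URL(fileURLWithPath: "track.mp3"))

    engine.attach(player)
    engine.attach(varispeed)
    engine.connect(player, to: varispeed, format: file.processingFormat)
    engine.connect(varispeed, to: engine.mainMixerNode, format: file.processingFormat)

    try engine.start()
    player.scheduleFile(file, at: nil)
    player.play()

    // Call from the slider's drag handler, mapping drag velocity to rate.
    func sliderMoved(velocity: Float) {
        varispeed.rate = max(0.25, min(4.0, abs(velocity)))  // supported range 0.25...4.0
    }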