How to play a video slowly for marking - iPhone

I am creating an application for coaching, and I am stuck on marking up the video. I chose ffmpeg to convert the video into image frames, but that causes delays as well as memory issues. I need to let the user play the video slowly, frame by frame. Is there any other way to do that without converting to images? V1 Golf does this very quickly. Please help.

I would try converting video frames on a separate thread, extracting a few frames ahead as images in the background when the user enters 'slow motion mode'.
Here is an example for a single frame, which you should be able to repeat quickly for the others: Video frame capture by NSOperation.
This should reduce delays, since frames can be converted while the user is still viewing the earlier ones.
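As a rough illustration of that idea, here is a minimal sketch (not the linked NSOperation example) that asks AVAssetImageGenerator to pre-extract the next few frames on a background queue; the asset parameter, the start time, and the 30 fps assumption are just placeholders.

    #import <AVFoundation/AVFoundation.h>
    #import <UIKit/UIKit.h>

    // Pre-extract the next ten frames (assuming ~30 fps) starting at 'start'.
    - (void)preloadFramesOfAsset:(AVAsset *)asset from:(CMTime)start
    {
        AVAssetImageGenerator *generator =
            [AVAssetImageGenerator assetImageGeneratorWithAsset:asset];
        generator.requestedTimeToleranceBefore = kCMTimeZero;   // exact frames
        generator.requestedTimeToleranceAfter  = kCMTimeZero;

        NSMutableArray *times = [NSMutableArray array];
        for (int i = 0; i < 10; i++) {
            [times addObject:[NSValue valueWithCMTime:CMTimeAdd(start, CMTimeMake(i, 30))]];
        }

        // Frames are delivered asynchronously, off the main thread.
        [generator generateCGImagesAsynchronouslyForTimes:times
            completionHandler:^(CMTime requestedTime, CGImageRef image, CMTime actualTime,
                                AVAssetImageGeneratorResult result, NSError *error) {
                if (result == AVAssetImageGeneratorSucceeded) {
                    UIImage *frame = [UIImage imageWithCGImage:image];
                    // Cache 'frame' so it is ready when the user steps to it.
                }
            }];
    }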

Related

How to repeat video track using AVMutableComposition

I want to repeat a 1-second video for 10 seconds. I used AVMutableComposition; my code is attached below.
When I try it with a video recorded by Apple's default camera app, it works fine.
But I need a specific kind of video, so I built a customized camera and made a 1-second video (frame rate: 5 fps, H.264 codec).
With this video I get black frames.
I am not sure what the problem is. Please help.
I solved it myself: I had not translated the target view's coordinates when applying the transform to the video track, so the video did not show up properly.
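For reference, this is not the asker's original code (which is not shown here), just a minimal sketch of the repeat step: it appends the same 1-second range ten times and carries over the source track's transform, since an orientation transform applied without adjusting the coordinates is what produces the black frames. videoURL is a placeholder.

    // Sketch only; "videoURL" stands in for the URL of the 1-second clip.
    AVAsset *asset = [AVAsset assetWithURL:videoURL];
    AVAssetTrack *sourceTrack =
        [[asset tracksWithMediaType:AVMediaTypeVideo] firstObject];

    AVMutableComposition *composition = [AVMutableComposition composition];
    AVMutableCompositionTrack *videoTrack =
        [composition addMutableTrackWithMediaType:AVMediaTypeVideo
                                 preferredTrackID:kCMPersistentTrackID_Invalid];

    CMTimeRange oneSecond = CMTimeRangeMake(kCMTimeZero, CMTimeMake(1, 1));
    NSError *error = nil;
    for (int i = 0; i < 10; i++) {
        // Append the same one-second range back to back.
        [videoTrack insertTimeRange:oneSecond
                            ofTrack:sourceTrack
                             atTime:CMTimeMake(i, 1)
                              error:&error];
    }

    // Keep the source orientation; a transform applied without translating the
    // coordinates can leave the rendered video outside the frame (black output).
    videoTrack.preferredTransform = sourceTrack.preferredTransform;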

record video in cocos2d iOS game, low resolution for video and high resolution for normal cases

I am using cocos2d's CCRenderTexture to record video of my game. Recording video at Retina display resolution costs a lot of CPU and memory, so I want to use a low resolution for the video recording but keep the Retina resolution for normal gameplay. Is that possible?
I've tried calling [[CCDirector sharedDirector] enableRetinaDisplay:NO]; while recording the video, but it doesn't seem to work; the generated output is completely wrong.
This is not feasible.
You'd have to render each frame twice, once on the screen, then onto the render texture. A serious drop in framerate is inevitable even if you lower the resolution of the render texture somehow.
The reason is simply that you'll also have to write each render texture as an image to flash memory. This is extremely slow. You'll also end up with a huge amount of data. If each (PNG/JPG) image file ends up being a reasonably small 50 KB then one second of recorded data at 60 fps will consume 3 Megabytes of flash memory. One minute would be around 180 Megabytes.
To record a demo of your game, most games follow the simple principle of recording the user input and then playing it back as if the user had just issued those commands. This requires careful planning, no breaking changes when updating the app (or else invalidating old demos), and no use of non-deterministic randomizers (i.e. ones seeded with the time).
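To make that concrete, here is a hedged sketch of the record-and-replay idea; RecordedEvent, recordedEvents, recordingStartTime, and handleSimulatedTouchAt: are made-up names for illustration, not cocos2d API.

    #import <QuartzCore/QuartzCore.h>   // CACurrentMediaTime()

    @interface RecordedEvent : NSObject
    @property (nonatomic) NSTimeInterval timestamp;   // seconds since recording began
    @property (nonatomic) CGPoint touchLocation;
    @end

    @implementation RecordedEvent
    @end

    // While the user plays: store each touch with its time offset.
    - (void)recordTouchAt:(CGPoint)location
    {
        RecordedEvent *e = [RecordedEvent new];
        e.timestamp = CACurrentMediaTime() - self.recordingStartTime;
        e.touchLocation = location;
        [self.recordedEvents addObject:e];
    }

    // During playback: re-issue the same touches at the same offsets. This only
    // reproduces the session if the game logic is deterministic (fixed random
    // seed, no time-dependent behaviour).
    - (void)replayEvents
    {
        for (RecordedEvent *e in self.recordedEvents) {
            dispatch_after(dispatch_time(DISPATCH_TIME_NOW,
                                         (int64_t)(e.timestamp * NSEC_PER_SEC)),
                           dispatch_get_main_queue(), ^{
                [self handleSimulatedTouchAt:e.touchLocation];
            });
        }
    }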
If you need to record a demo for making a trailer video, there's plenty of screengrabbing solutions around. Some even specialize in grabbing iPhone video, either from the device (usually requires a source code/library component) or from the Simulator.
You should check out Kamcord SDK for recording game play. Check at http://kamcord.com/
Kamcord has a built-in gameplay video and audio recording technology for iOS. It allows you, the game developer, to capture gameplay videos with an API. Your users can then replay and share these gameplay videos via YouTube, Facebook, Twitter, and email.

How do I pause video at the exact moment I capture a photo?

I am using AVFoundation to display video in my UIView via an AVCaptureVideoPreviewLayer.
I then use AVCaptureStillImageOutput's -captureStillImageAsynchronouslyFromConnection: to capture a still image from the video, with the AVCaptureSessionPresetPhoto preset.
I am freezing the video using AVCaptureSession's -stopRunning in the -captureStillImageAsynchronouslyFromConnection: completion block mentioned earlier. However, by then it's too late; the video keeps running while the still image is being taken, so the freeze happens a second or two later and there is a visible jump when I display the image.
How can I freeze the video at the exact moment the photo is taken?
Almost a year later... Your approach is all wrong. Instead of trying to pause the video at the precise moment the image is captured, why not pause the video and then capture that paused image? To a user it makes no difference, and as a developer you don't have to worry about exact precision.
To reiterate: if you pause the video, flash a white overlay, and play a shutter click, the user will think you have captured that exact frame whether you actually have or not. In fact, you could consider pausing the video to be the same as capturing an image without saving it.
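In code, the "freeze first, then capture" idea might look roughly like the sketch below; previewLayer, stillImageOutput, capturedImage, and flashScreenWhite are assumed to already exist in your controller, and 1108 is the system camera-shutter sound ID.

    #import <AVFoundation/AVFoundation.h>
    #import <AudioToolbox/AudioToolbox.h>

    - (void)takePhoto
    {
        // 1. Freeze the on-screen preview immediately (iOS 6+).
        self.previewLayer.connection.enabled = NO;

        // 2. Give the usual feedback: white flash plus shutter sound.
        [self flashScreenWhite];                 // hypothetical helper
        AudioServicesPlaySystemSound(1108);      // camera shutter sound

        // 3. Capture a still while the preview stays frozen; to the user the
        //    frozen frame and the saved photo are indistinguishable.
        AVCaptureConnection *connection =
            [self.stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
        [self.stillImageOutput captureStillImageAsynchronouslyFromConnection:connection
            completionHandler:^(CMSampleBufferRef sampleBuffer, NSError *error) {
                if (sampleBuffer) {
                    NSData *jpeg = [AVCaptureStillImageOutput
                                    jpegStillImageNSDataRepresentation:sampleBuffer];
                    self.capturedImage = [UIImage imageWithData:jpeg];
                }
                // Re-enable the preview when you are ready to shoot again:
                // self.previewLayer.connection.enabled = YES;
            }];
    }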

Capture video without displaying the actual video feed

So I have an application that can currently capture video with the front-facing iPhone camera and then do some processing on the video feed in real time. What I'm trying to do, however, is make this process run in the background and put other controls onscreen. So, for example, I'd like to run the camera and process the image feed, but have the user see a black screen with some buttons on it. Any ideas on how to do this?
Just so we get terminology right, by "in the background", you mean running the camera capture while your application is in the foreground, but not displaying the actual video feed. This is possible, but I wanted to make clear that if you move your whole application into the background you will not have access to the camera then.
There are a few ways to do this, but the one that I've spent the most time with is grabbing frames of video (or photos) via AV Foundation. Using an AVCaptureDevice and AVCaptureSession, you can grab the frames of video and route them to an encoder for saving to disk or for processing using your own custom code. None of this requires the camera feed to be displayed onscreen, so you can put up whatever interface you like and do this video recording or photo capture without any onscreen indication.
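A minimal sketch of that setup, assuming a view controller that adopts AVCaptureVideoDataOutputSampleBufferDelegate and keeps the session in a property, might look like this; note that no preview layer is ever created or attached to the view.

    - (void)startHiddenCapture
    {
        self.session = [[AVCaptureSession alloc] init];
        self.session.sessionPreset = AVCaptureSessionPreset640x480;

        // Pick the front-facing camera.
        AVCaptureDevice *camera = nil;
        for (AVCaptureDevice *device in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]) {
            if (device.position == AVCaptureDevicePositionFront) camera = device;
        }

        NSError *error = nil;
        AVCaptureDeviceInput *input =
            [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
        [self.session addInput:input];

        // Route frames to a delegate callback instead of a preview layer.
        AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
        output.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey :
                                      @(kCVPixelFormatType_32BGRA) };
        [output setSampleBufferDelegate:self
                                  queue:dispatch_queue_create("video.processing", NULL)];
        [self.session addOutput:output];

        [self.session startRunning];
        // The screen can show anything you like (e.g. a black view with buttons);
        // frames arrive in captureOutput:didOutputSampleBuffer:fromConnection:.
    }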
I would caution that you should make it explicit to your users what you are doing, so that you do not run the risk of violating someone's privacy. Apple does not react kindly to those who do this (for good reason).
I encapsulate a lot of this within my open source GPUImage video and photo processing framework, so you could look at the code for the GPUImageVideoCamera class there to see how I configure the capture inputs. I hand the video frames off to OpenGL ES for the application of filters and other processing operations, but you could ignore that portion of it if you just wanted to do your own encoding or processing.
Here's example code from Apple's docs:
http://developer.apple.com/library/ios/#samplecode/PhotoPicker/Introduction/Intro.html
It also shows how to customize the camera interface.

Playing a real time video stream from iPhone camera on a 20 second delay

I am trying to see if it is possible to record a video from the iPhone's camera and write this to a file. I then want the video to start playing on the screen a set time after. This all needs to happen continuously. For example, I want the video on the screen to always be 20 seconds behind what the camera is recording.
Some background:
I have a friend who is a coach and would like for his players to be able to see their last play. This could be accomplished by a feed going to a TV from an iPad always 20 seconds behind what is recorded. This needs to continually run until practice is over. (I would connect the iPad to the TV either with a cable or AirPlay to an Apple TV). The video would never need to be saved and should just be discarded after playing.
Is this even possible with the APIs AVFoundation offers? Will the iPhone let you write to a file and read from a file at the same time to accomplish this? Any other better way to accomplish this?
Thanks for your time.
Instead of writing to a file, how about saving your frames in a circular buffer big enough to hold X seconds of video?
The way I would start to do this would be to look at what's provided in AVCaptureVideoDataOutput and its delegate methods (where you can get the frame data from).
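A rough sketch of that delegate is below, assuming a mutable frameBuffer array, a displayImageView, and a hypothetical imageFromSampleBuffer: helper. Frames are converted to downscaled UIImages right away, because retaining 20 seconds of raw sample buffers would exhaust both the capture pipeline's small buffer pool and memory; in practice you would also cap the frame rate and resolution.

    static const NSTimeInterval kDelaySeconds = 20.0;
    static const int kFramesPerSecond = 15;            // assumed capture/display rate

    - (void)captureOutput:(AVCaptureOutput *)output
        didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
               fromConnection:(AVCaptureConnection *)connection
    {
        // Convert to a (downscaled) UIImage immediately rather than retaining
        // the sample buffer itself.
        UIImage *frame = [self imageFromSampleBuffer:sampleBuffer];  // hypothetical helper

        dispatch_async(dispatch_get_main_queue(), ^{
            [self.frameBuffer addObject:frame];

            NSUInteger capacity = (NSUInteger)(kDelaySeconds * kFramesPerSecond);
            if (self.frameBuffer.count > capacity) {
                // The oldest frame is now ~20 seconds old: show it, then drop it.
                self.displayImageView.image = self.frameBuffer.firstObject;
                [self.frameBuffer removeObjectAtIndex:0];
            }
        });
    }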