I am working on an application that requires tight control of the flow of time during a movie recording.
Apple says the iPhone 5 can capture HD video at up to 30 fps. If I shoot a video and play it in QuickTime, I see a variable frame rate that reaches 30 fps at some moments, yet QuickTime reports the video as being 29.75 fps.
As far as I understand, for each second of video an integer number of frames should be displayed, not a fractional number. I first thought this could be related to dropped frames, so I designed a method to measure them and found that for every second of video, the iPhone drops 1 to 4 frames. I also discovered that every time a frame is dropped, the iPhone simply copies the previous frame to fill the gap. So in theory, dropping a frame should make no difference in the total number of frames the movie contains.
So, this is my problem: what is this 29.75 fps telling me? How is this number obtained?
It's not so much that x frames are shown per second; rather, each frame is shown for 1/x seconds. NTSC (the TV standard in the US, Japan, and elsewhere) runs at 29.97 fps, so each frame is shown for a bit more than 3/100ths of a second before the next frame is drawn. In your case, each frame is displayed for roughly 0.0336 seconds (1/29.75) before the next one is shown.
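As a back-of-the-envelope check (assuming, and this is just a guess, that QuickTime reports total frames divided by total duration), a handful of dropped frames per second is enough to produce a fractional average:

```
#import <Foundation/Foundation.h>

int main(int argc, char *argv[]) {
    @autoreleasepool {
        // Hypothetical numbers: a 20-second clip in which the camera
        // delivered 595 frames instead of the nominal 600 (30 fps).
        double totalFrames = 595.0;
        double durationSeconds = 20.0;

        double averageFPS = totalFrames / durationSeconds;  // 29.75
        double frameDuration = 1.0 / averageFPS;            // ~0.0336 s

        NSLog(@"average fps: %.2f, frame duration: %.4f s",
              averageFPS, frameDuration);
    }
    return 0;
}
```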
Related
I am making a video of a simulation in NetLogo. The total length of the video is around 30 minutes. When I play the movie, it works fine for the first couple of minutes, then the picture starts distorting, and after some time a black screen appears. I tried making the video with different frame rates (i.e. 6, 12, and 24 fps), but every time I got the same behavior. Any suggestions?
I am selecting a video clip from the iPhone camera roll using UIImagePickerController (alongside the AVFoundation framework), and I have set it up so the user can adjust the length of the video by trimming it. Is there a way to set the maximum and minimum length the user can trim the video to? For example, I want the clip to have a maximum length of 15 seconds and a minimum length of 15 seconds as well.
What's the best way of going about doing this?
`imagePickerController.videoMaximumDuration = 15.0f;` // limits video length to 15 seconds.
where imagePickerController is an instance of UIImagePickerController.
The videoMaximumDuration property restricts the length of the video in both directions. If you are recording video, an alert will pop up saying you cannot record more than 15 seconds. If you are selecting a video file from your library, it first checks the length of the video; if it is longer than 15 seconds, an alert will pop up saying the video is too long, with two options: Use or Cancel. If you choose Use, the picker trims the video to the first 15 seconds.
UIImagePickerController has a videoMaximumDuration property that you can set.
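For completeness, a minimal sketch of wiring this up (assuming ARC and a presenting view controller that adopts UIImagePickerControllerDelegate and UINavigationControllerDelegate):

```
#import <UIKit/UIKit.h>
#import <MobileCoreServices/MobileCoreServices.h>

- (void)presentVideoPicker
{
    UIImagePickerController *picker = [[UIImagePickerController alloc] init];
    picker.sourceType = UIImagePickerControllerSourceTypePhotoLibrary;
    picker.mediaTypes = @[(NSString *)kUTTypeMovie];  // videos only
    picker.allowsEditing = YES;                       // shows the trim UI
    picker.videoMaximumDuration = 15.0;               // cap at 15 seconds
    picker.delegate = self;
    [self presentViewController:picker animated:YES completion:nil];
}
```

Note that UIKit only exposes a maximum; there is no built-in minimum duration, so enforcing the 15-second lower bound would have to be done by checking the trimmed asset's duration yourself in the delegate callback.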
I am using cocos2d's CCRenderTexture to record video of my game. But recording video at Retina resolution costs a lot of CPU and memory, so I want to use a lower resolution for video recording while keeping Retina resolution for normal gameplay. Is that possible?
I've tried `[[CCDirector sharedDirector] enableRetinaDisplay:NO];` while recording video, but it doesn't seem to work; the generated output is totally wrong.
This is not feasible.
You'd have to render each frame twice, once on the screen, then onto the render texture. A serious drop in framerate is inevitable even if you lower the resolution of the render texture somehow.
The reason is simply that you'll also have to write each render texture as an image to flash memory, which is extremely slow, and you'll end up with a huge amount of data. If each (PNG/JPG) image file ends up being a reasonably small 50 KB, then one second of recorded data at 60 fps will consume 3 megabytes of flash memory. One minute would be around 180 megabytes.
To record a demo of your game, most games follow the simple principle of recording the user input and then playing that input back as if the user were issuing the commands live; a sketch of the idea follows below. This requires careful planning, no breaking changes when updating the app (or else invalidating old demos), and no non-deterministic randomness (i.e. nothing seeded with the current time).
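A rough sketch of the record-the-input idea (the class and command names here are hypothetical; the point is that commands are stored with timestamps and replayed deterministically):

```
#import <Foundation/Foundation.h>
#import <stdlib.h>

// Hypothetical command record: what the player did, and when.
@interface DemoEvent : NSObject
@property (nonatomic) NSTimeInterval timestamp;  // seconds since demo start
@property (nonatomic, copy) NSString *command;   // e.g. @"jump", @"moveLeft"
@end
@implementation DemoEvent
@end

@interface DemoRecorder : NSObject
@property (nonatomic, strong) NSMutableArray *events;
@property (nonatomic) NSTimeInterval startTime;
@end

@implementation DemoRecorder

- (void)beginRecording {
    self.events = [NSMutableArray array];
    self.startTime = [NSDate timeIntervalSinceReferenceDate];
    srand(12345);  // fixed seed: playback sees the same "random" values
}

- (void)recordCommand:(NSString *)command {
    DemoEvent *e = [[DemoEvent alloc] init];
    e.timestamp = [NSDate timeIntervalSinceReferenceDate] - self.startTime;
    e.command = command;
    [self.events addObject:e];
}

// During playback: re-seed with the same value, then feed each command
// back to the game at its recorded offset. As long as the game logic is
// deterministic, the simulation replays identically.

@end
```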
If you need to record a demo for making a trailer video, there are plenty of screen-grabbing solutions around. Some even specialize in grabbing iPhone video, either from the device (usually requiring a source code/library component) or from the Simulator.
You should check out the Kamcord SDK for recording gameplay; see http://kamcord.com/.
Kamcord has built-in gameplay video and audio recording technology for iOS. It allows you, the game developer, to capture gameplay videos through an API. Your users can then replay and share these videos via YouTube, Facebook, Twitter, and email.
I am trying to see if it is possible to record a video from the iPhone's camera and write this to a file. I then want the video to start playing on the screen a set time after. This all needs to happen continuously. For example, I want the video on the screen to always be 20 seconds behind what the camera is recording.
Some background:
I have a friend who is a coach and would like for his players to be able to see their last play. This could be accomplished by a feed going to a TV from an iPad always 20 seconds behind what is recorded. This needs to continually run until practice is over. (I would connect the iPad to the TV either with a cable or AirPlay to an Apple TV). The video would never need to be saved and should just be discarded after playing.
Is this even possible with the APIs AVFoundation offers? Will the iPhone let you write to a file and read from it at the same time to accomplish this? Is there a better way to accomplish this?
Thanks for your time.
Instead of writing to a file, how about saving your frames in a circular buffer big enough to hold X seconds of video?
The way I would start is to look at what AVCaptureVideoDataOutput and its delegate methods provide (that's where you get the frame data from); a sketch of the buffering follows below.
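A sketch of the ring-buffer idea (this compresses each frame to JPEG so 20 seconds of frames fits in memory; the class name and capacity are illustrative, not a tuned implementation):

```
#import <AVFoundation/AVFoundation.h>
#import <UIKit/UIKit.h>

@interface DelayedFrameBuffer : NSObject <AVCaptureVideoDataOutputSampleBufferDelegate>
@property (nonatomic, strong) NSMutableArray *frames;  // oldest frame first
@property (nonatomic, strong) CIContext *ciContext;
@property (nonatomic) NSUInteger capacity;             // e.g. 20 s * 30 fps = 600
@end

@implementation DelayedFrameBuffer

- (instancetype)initWithCapacity:(NSUInteger)capacity {
    if ((self = [super init])) {
        _capacity = capacity;
        _frames = [NSMutableArray arrayWithCapacity:capacity];
        _ciContext = [CIContext contextWithOptions:nil];
    }
    return self;
}

// Called by AVCaptureVideoDataOutput for every captured frame.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    if (!pixelBuffer) return;

    // Compress right away: holding on to raw sample buffers would starve
    // the capture pipeline's small internal pool, and 20 s of raw frames
    // would not fit in memory anyway.
    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    CGImageRef cgImage = [self.ciContext createCGImage:ciImage
                                              fromRect:ciImage.extent];
    if (!cgImage) return;
    NSData *jpeg = UIImageJPEGRepresentation([UIImage imageWithCGImage:cgImage], 0.6);
    CGImageRelease(cgImage);
    if (!jpeg) return;

    @synchronized (self.frames) {
        [self.frames addObject:jpeg];
        if (self.frames.count > self.capacity) {
            [self.frames removeObjectAtIndex:0];  // evict the oldest frame
        }
    }
}

// Playback side: once the buffer is full, popping the oldest frame at the
// capture rate yields a constant ~20-second delay.
- (NSData *)nextDelayedFrame {
    @synchronized (self.frames) {
        if (self.frames.count < self.capacity) return nil;  // still filling
        NSData *frame = self.frames.firstObject;
        [self.frames removeObjectAtIndex:0];
        return frame;
    }
}

@end
```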
I am creating an application for coaching, and I am stuck on drawing markings over video. I chose ffmpeg to convert the video into image frames, but that introduces a time delay as well as memory issues. I need to let the user play the video slowly, frame by frame. Is there another way to do this without image conversion? V1 Golf does this very quickly. Please help me.
I would convert video frames on a separate thread, extracting a few frames ahead as images in the background once the user enters 'slow motion mode'.
Here is an example for one frame, so you should be quick with the others: Video frame capture by NSOperation.
This should reduce delays, and frames can be converted while the user is still viewing the preceding ones.
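If frame extraction is still needed, one native route worth sketching is AVFoundation's AVAssetImageGenerator, which avoids ffmpeg entirely (the method below and its parameters are illustrative):

```
#import <AVFoundation/AVFoundation.h>

// Pre-fetch a handful of exact frames ahead of the current position while
// the user is in slow-motion mode. Start time, count, and fps are supplied
// by the caller.
- (void)prefetchFramesFromAsset:(AVAsset *)asset
                      startTime:(CMTime)start
                     frameCount:(NSUInteger)count
                            fps:(int32_t)fps
{
    AVAssetImageGenerator *generator =
        [[AVAssetImageGenerator alloc] initWithAsset:asset];
    // Zero tolerance asks for the exact frame, not the nearest keyframe.
    generator.requestedTimeToleranceBefore = kCMTimeZero;
    generator.requestedTimeToleranceAfter  = kCMTimeZero;

    NSMutableArray *times = [NSMutableArray arrayWithCapacity:count];
    for (NSUInteger i = 0; i < count; i++) {
        CMTime t = CMTimeAdd(start, CMTimeMake((int64_t)i, fps));
        [times addObject:[NSValue valueWithCMTime:t]];
    }

    [generator generateCGImagesAsynchronouslyForTimes:times
        completionHandler:^(CMTime requestedTime, CGImageRef image,
                            CMTime actualTime,
                            AVAssetImageGeneratorResult result,
                            NSError *error)
    {
        if (result == AVAssetImageGeneratorSucceeded && image != NULL) {
            // Cache the CGImage (e.g. keyed by requestedTime) so stepping
            // frame by frame never waits on decoding.
        }
    }];
}
```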