Correct video for lens distortion in MATLAB?

I have a video that was taken with a GoPro and I would like to get rid of the fisheye distortion. I know I can remove the fisheye with the GoPro software, but I want to do this using MATLAB instead.
I know there's http://www.mathworks.com/help/vision/ref/undistortimage.html, which applies to images, but how would I apply it to a full video? The video has 207 frames (it's a short video, about 5-6 seconds).
Thank you very much!

Can't you just sample your video stream at 24 fps (using e.g. ffmpeg), apply your MATLAB routine one frame at a time, then rebuild the video stream in MATLAB itself?

You can apply undistortImage to each frame of the video. If the video is saved to a file, you can use vision.VideoFileReader to read it one frame at a time, calling undistortImage on each frame. You can then write the undistorted frame to a different file using vision.VideoFileWriter, or display it using vision.VideoPlayer.
Of course, this is all assuming that you have calibrated your camera beforehand using the Camera Calibrator App.
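For example, here is a minimal sketch of that loop, assuming calibration produced a cameraParameters object named cameraParams and the footage is in a file called gopro.mp4 (both names are placeholders):

% Undistort every frame of a video, one frame at a time.
fileReader = vision.VideoFileReader('gopro.mp4');
fileInfo = info(fileReader);
fileWriter = vision.VideoFileWriter('gopro_undistorted.avi', ...
    'FrameRate', fileInfo.VideoFrameRate);

while ~isDone(fileReader)
    frame = step(fileReader);                           % read one frame
    undistorted = undistortImage(frame, cameraParams);  % remove lens distortion
    step(fileWriter, undistorted);                      % write it back out
end

release(fileReader);
release(fileWriter);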

Related

Creating a video skim of a YUV video in MATLAB

I have the Y, U, V components of some frames and now I want to create a video from these components. To be more specific, I want to create a video summary (skim) of a video. So far I have determined (after some processing) which frames need to be in the final video skim, and now I want to generate the skim from these frames.
Thanks.
Edit:
I have created the video skim, but I suspect it might not be the most efficient method. What I am doing: first I store the required frames' Y, U, V components in a cell array, then I write them out with MATLAB's fwrite function.
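For reference, a minimal sketch of that approach, assuming skimFrames is a cell array of structs with uint8 fields Y, U, and V (all hypothetical names), written out as raw planar YUV with fwrite:

fid = fopen('skim.yuv', 'w');
for k = 1:numel(skimFrames)
    fwrite(fid, skimFrames{k}.Y', 'uint8');   % transpose: fwrite walks
    fwrite(fid, skimFrames{k}.U', 'uint8');   % arrays column-major, but
    fwrite(fid, skimFrames{k}.V', 'uint8');   % raw YUV files are row-major
end
fclose(fid);

The resulting .yuv file can then be played or encoded with a tool such as ffmpeg, provided you tell it the frame size and chroma subsampling.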

How can I render a moving circle over a video which positions itself based on data from a text file?

I have some video (.mp4), and some text data which includes the XY coordinates of a circle that I wish to draw over the video's frames and render a new video.
I have been able to do this in MATLAB using the Computer Vision Toolbox, however the video formats I can use are extremely limited... I need another method.
Use the insertShape function in the Computer Vision System Toolbox.
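For example, here is a sketch of that approach using base MATLAB's VideoReader/VideoWriter for I/O (which typically handle .mp4 directly), assuming circle_positions.txt holds one x,y pair per frame and a fixed radius of 20 pixels (all placeholder assumptions):

coords = dlmread('circle_positions.txt');   % one [x y] row per frame
reader = VideoReader('input.mp4');
writer = VideoWriter('output.avi');
open(writer);

k = 1;
while hasFrame(reader)
    frame = readFrame(reader);
    frame = insertShape(frame, 'Circle', [coords(k,:) 20], ...
        'LineWidth', 3, 'Color', 'yellow');  % [x y radius], radius assumed
    writeVideo(writer, frame);
    k = k + 1;
end
close(writer);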

Opencv open 3d stereo video file and output to display

Is there anyone working on extracting the data from a 3D stereo video (e.g. 3D Blu-ray) using OpenCV? Some documentation states that .avi is the only supported video file format in OpenCV. If you have done this, or know how, would you mind giving me a tutorial? (e.g. should a frame of a 3D stereo video be an image of 2 views plus one depth map, or 2 images of 2 views and several depth maps?) How do I read this information?
Another question: is there any API in OpenCV that can control the output from the graphics card's ports? I mean, if I have a graphics card with two DVI ports, would it be possible for the monitor connected to A-DVI to display the left-side image of the 3D stereo video while B-DVI displays the right-side image?

Replace the first frame of a video with another frame using MATLAB

I am using MATLAB for a video processing task.
When I tried to create an object for the grayscale video (the video has a .FLV extension) using MATLAB's mmreader function, it showed a warning saying that it could not determine the number of frames.
Then, when I tried to read the video using read(mmreader), it showed an error saying there was not enough memory for 3246 frames.
Therefore I just read the first frame of the video, edited it, and am now trying to put it back in place of the video's first frame.
Things that I have done:
Read and extracted the first frame of a grayscale video (the file has a .FLV extension) using MATLAB's mmreader function.
I embedded some text in the frame.
Things that I want to do and need your help:
Replace the first frame of the source video with the edited frame (i.e. the frame with the embedded text in it)
Save and play the video (one possible approach is sketched below).
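A rough sketch of one way to do this, assuming the edited frame is in a variable named editedFrame (placeholder). Note that MATLAB cannot splice a single frame into an existing compressed file in place, so the whole video has to be re-written; reading one frame at a time also sidesteps the memory error above. VideoWriter handles the output, since mmreader only reads:

vidObj = mmreader('input.flv');       % VideoReader in newer releases
writer = VideoWriter('output.avi');
writer.FrameRate = vidObj.FrameRate;
open(writer);

writeVideo(writer, editedFrame);      % edited frame replaces frame 1
k = 2;
while true
    try
        frame = read(vidObj, k);      % one frame at a time to limit memory
    catch
        break;                        % ran past the last frame
    end
    writeVideo(writer, frame);
    k = k + 1;
end
close(writer);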

Approach for recording grayscale video on iPhone?

I am building an iPhone app that needs to record grayscale video and save it to the camera roll. I'm stumped as to how best to approach this.
I am thinking along the following lines:
Use a shader and OpenGL to transform the video to grayscale
Use AVFoundation (AVAssetWriter with an AVAssetWriterInputPixelBufferAdaptor) to write the video to the file.
My questions are:
Is this the right approach (simplest, best performance)?
If so, what would be the best way to go from OpenGL output to a CVPixelBufferRef input for the AVAssetWriterInputPixelBufferAdaptor?
If not, what would be a better approach?
Any nudge in the right direction is much appreciated!
In general, I'd agree with this approach. Doing your processing in an OpenGL ES 2.0 shader should be the most performant way of doing video frame alteration like this, but it won't be very simple. Fortunately, you can start from a pre-existing template that already does this.
You can use the sample application I wrote here (and explained here) as a base. I use custom shaders in this example to track colors in an image, but you could easily alter this to convert the video frames to grayscale (I even saw someone do this once). The code for feeding camera video into a texture and processing it could be used verbatim from that sample.
In one of the display options within that application, I render the processed image first to a framebuffer object, then use glReadPixels() to pull the resulting image back into bytes that I can work with on the CPU. You could use this to get the raw image data back after the GPU has processed a frame, then feed those bytes into CVPixelBufferCreateWithBytes() to generate your CVPixelBufferRef for writing to disk.
(Edit: 2/29/2012) As an update to this, I just implemented this kind of video recording in my open source GPUImage framework, so I can comment on the specific performance for the encoding part of this. It turns out that you can capture video from the camera, perform live filtering on it, grab it from OpenGL ES using glReadPixels(), and write that out as live H.264 video in 640x480 frames on an iPhone 4 at 30 FPS (the maximum camera framerate).
There were a few things that I needed to do in order to get this recording speed. You need to make sure that you set your AVAssetWriterInputPixelBufferAdaptor to use kCVPixelFormatType_32BGRA as its color format for input pixel buffers. Then, you'll need to re-render your RGBA scene using a color-swizzling shader to provide BGRA output when using glReadPixels(). Without this color setting, your video recording framerates will drop to 5-8 FPS on an iPhone 4, where with it they are easily hitting 30 FPS. You can look at the GPUImageMovieWriter class source code to see more about how I did this.
Using the GPUImage framework, your above filtering and encoding task can be handled by simply creating a GPUImageVideoCamera, attaching a target of a GPUImageSaturationFilter with the saturation set to 0, and then attaching a GPUImageMovieWriter as a target of that. The framework will handle the OpenGL ES interactions for you. I've done this, and it works well on all iOS devices I've tested.