OpenCV: open a 3D stereo video file and output to display

Is anyone working on extracting the data from a 3D stereo video (e.g. a 3D Blu-ray) using OpenCV? From some documentation, it appears that .avi is the only video file format OpenCV supports. If you have done this, or know how, would you mind giving me a tutorial? For example, is a frame of a 3D stereo video a single image containing the two views plus one depth map, or is it two images (one per view) plus some depth maps? How do I read this information?
Another question: is there any API in OpenCV that can control the output from a graphics card's ports? I mean, if I have a graphics card with two DVI ports, would it be possible for the monitor connected to the A-DVI port to display the left view of the 3D stereo video while the B-DVI port displays the right view?
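For what it's worth, many stereo videos are frame-packed side-by-side or top-bottom rather than carrying an explicit depth map, though the exact layout depends on the source (Blu-ray 3D, for instance, uses a different encoding), so check yours. Assuming a side-by-side packing, each decoded frame simply splits into the two views. A minimal NumPy sketch, with a synthetic array standing in for a frame you would normally get from cv2.VideoCapture:

```python
import numpy as np

def split_side_by_side(frame):
    """Split a side-by-side stereo frame into left and right views.

    Assumes the left view occupies the left half of the frame, which is a
    common (but not universal) packing for 3D video; check your source.
    """
    h, w = frame.shape[:2]
    half = w // 2
    left = frame[:, :half]
    right = frame[:, half:]
    return left, right

# In practice the frames would come from e.g. cv2.VideoCapture("movie.avi");
# here a synthetic 4x8 grayscale frame stands in for a decoded frame.
frame = np.arange(32, dtype=np.uint8).reshape(4, 8)
left, right = split_side_by_side(frame)
print(left.shape, right.shape)  # (4, 4) (4, 4)
```

Routing the two views to separate DVI outputs is outside OpenCV's scope; that is a job for the display/driver layer, not the vision library.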

Related

Extracting similar image patches captured from 2 different lenses of the same camera

Example Image
I have two images captured with my iPhone 8 Plus, each at a different zoom level of the iPhone camera (one was captured at 1x zoom and the other at 2x). Now I want to extract the image patch in the 1x picture that corresponds to the zoomed-in image, i.e. the 2x image. How would one go about doing that?
I understand that SIFT features may be helpful, but is there a way I could use the camera extrinsic and intrinsic matrices to find the desired region of interest?
(I'm just looking for hints)
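As a hint along the intrinsics route: if the zoom is a pure focal-length change about the same optical center (no camera translation, which is only approximately true for the iPhone's two physical lenses), pixels in the two images are related by x_wide = K_wide · K_tele⁻¹ · x_tele, so mapping the 2x image's corners through that matrix gives the region of interest in the 1x image. A sketch with hypothetical intrinsic values (substitute your calibrated matrices):

```python
import numpy as np

def roi_in_wide_image(K_wide, K_tele, tele_size):
    """Map the corners of the telephoto (2x) image into the wide (1x) image.

    Assumes the zoom is a pure focal-length change about the same optical
    center (no translation), so pixels map via x_wide = K_wide @ inv(K_tele)
    @ x_tele. Returns the bounding box (xmin, ymin, xmax, ymax).
    """
    w, h = tele_size
    corners = np.array([[0, 0, 1], [w, 0, 1],
                        [w, h, 1], [0, h, 1]], dtype=float).T
    H = K_wide @ np.linalg.inv(K_tele)  # 3x3 map between the two images
    mapped = H @ corners
    mapped /= mapped[2]                 # back to inhomogeneous pixel coords
    xs, ys = mapped[0], mapped[1]
    return tuple(round(float(v), 2) for v in
                 (xs.min(), ys.min(), xs.max(), ys.max()))

# Hypothetical example: 1x camera with focal length 1000 px, 2x camera with
# 2000 px, both with the principal point at the center of a 4032x3024 image.
K1 = np.array([[1000., 0, 2016.], [0, 1000., 1512.], [0, 0, 1.]])
K2 = np.array([[2000., 0, 2016.], [0, 2000., 1512.], [0, 0, 1.]])
roi = roi_in_wide_image(K1, K2, (4032, 3024))
print(roi)  # (1008.0, 756.0, 3024.0, 2268.0)
```

With these idealized values the answer is the central half-width, half-height crop, which matches the intuition for a 2x zoom; on real iPhone images the lens offset and distortion mean feature matching (SIFT/ORB plus a homography) will refine this initial estimate.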

Superimpose masked video onto video in Matlab

I currently have a video that has been selected out with a mask and saved as a separate video; the regions outside the mask are all black. I want to superimpose this masked video onto another video of the same dimensions, replacing the black pixels with the pixels of the underlying video. Is this sort of thing possible in Matlab?
Any help would be greatly appreciated!
One easy way would be to extract frames from both videos and replace all the black pixels of the first video with the corresponding pixel values from the second video.
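The per-frame replacement above is a one-liner with logical indexing; here is the idea as a NumPy sketch (the MATLAB version is the same logic with `all(frame == 0, 3)` and logical indexing). It assumes "black" means all channels exactly 0, as produced by the masking step; note that genuinely black pixels inside the subject would also be replaced.

```python
import numpy as np

def composite(masked_frame, background_frame):
    """Replace pure-black pixels of the masked frame with background pixels.

    Frames are HxWx3 uint8 arrays of identical size; a pixel counts as
    "black" only when all three channels are exactly 0.
    """
    is_black = np.all(masked_frame == 0, axis=-1, keepdims=True)
    return np.where(is_black, background_frame, masked_frame)

fg = np.zeros((2, 2, 3), dtype=np.uint8)
fg[0, 0] = [255, 0, 0]                      # one non-black foreground pixel
bg = np.full((2, 2, 3), 7, dtype=np.uint8)  # uniform background
out = composite(fg, bg)
print(out[0, 0].tolist(), out[1, 1].tolist())  # [255, 0, 0] [7, 7, 7]
```

A more robust variant would carry the original binary mask alongside the video instead of re-deriving it from black pixels.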

What image file type is expected by Matlab Stereo Camera Calibrator app?

The Matlab Calibration Workflow documentation just says "capture images," but it doesn't say how, or what file format is required.
My Matlab script has the line:
load('handshakeStereoParams.mat');
...and this .mat file is, I believe, generated by Matlab's Stereo Camera Calibrator app.
The Mathworks documentation on the Stereo Camera Calibration app does give specific advice on image formats:
Use uncompressed images or lossless compression formats such as PNG.
There's also a great deal more information on the details of what sort of images you need, under the "Image, Camera, and Pattern Preparation" subheading, in the expandable sections.
"Capture images" means take images of the checkerboard pattern with the cameras that you are trying to calibrate. Generally, you can load images of any format that imread can read into the Stereo Camera Calibrator app. However, it is much better not to use a format with lossy compression, like JPEG, because the calibration artifacts affect the calibration accuracy. So if you can configure your camera to save the images as PNG or BMP, you should do that. And if your camera only lets you use JPEG, then turn the image quality up to 100%.

Correct Video for lens distortion in Matlab?

I have a video that was taken with a GoPro and I would like to get rid of the fisheye distortion. I know I can get rid of the fisheye with the gopro software, but I want to do this using Matlab instead.
I know there's http://www.mathworks.com/help/vision/ref/undistortimage.html, which applies to images, but how would I apply it to a full video? The video has 207 frames (it's a short video of about 5-6 seconds).
Thank you very much!
Can't you just sample your video stream at 24 fps (using e.g. ffmpeg), apply your Matlab routine one frame at a time, then rebuild the video stream in Matlab itself?
You can apply undistortImage to each frame of the video. If the video is saved to a file, you can use vision.VideoFileReader to read it one frame at a time, and then you call undistortImage. Then you can write the undistorted frame to a different file using vision.VideoFileWriter, or you can display it using vision.VideoPlayer.
Of course, this is all assuming that you have calibrated your camera beforehand using the Camera Calibrator App.
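To illustrate what that per-frame loop does, here is a simplified NumPy sketch: a one-parameter radial model stands in for the full calibrated model that undistortImage (or OpenCV's cv2.undistort) applies, and the video-level operation is just a map over frames. The k1 coefficient and nearest-neighbour sampling are simplifications for brevity, not what the toolbox actually uses.

```python
import numpy as np

def undistort_frame(frame, k1=-0.3):
    """Remove simple one-parameter radial distortion from a frame.

    A simplified stand-in for undistortImage / cv2.undistort: for each
    output pixel we evaluate the forward radial model to find where to
    sample the distorted frame (nearest-neighbour, for brevity).
    """
    h, w = frame.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    xn, yn = (xs - cx) / cx, (ys - cy) / cy   # normalized coordinates
    scale = 1 + k1 * (xn**2 + yn**2)          # radial distortion model
    src_x = np.clip(xn * scale * cx + cx, 0, w - 1).round().astype(int)
    src_y = np.clip(yn * scale * cy + cy, 0, h - 1).round().astype(int)
    return frame[src_y, src_x]

def undistort_video(frames, k1=-0.3):
    """Per-frame loop: the video-level job is just a map over the frames."""
    return [undistort_frame(f, k1) for f in frames]

# All 207 frames would be processed the same way; two synthetic ones suffice.
frames = [np.random.randint(0, 256, (8, 8), dtype=np.uint8) for _ in range(2)]
out = undistort_video(frames, k1=0.0)  # k1 = 0 -> identity mapping
print(all(np.array_equal(a, b) for a, b in zip(frames, out)))  # True
```

In MATLAB the reading and writing around this loop is exactly what the answer describes: vision.VideoFileReader in, undistortImage per frame, vision.VideoFileWriter out.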

How can I render a moving circle over a video which positions itself based on data from a text file?

I have some video (.mp4), and some text data which includes the XY coordinates of a circle that I wish to draw over the video's frames and render a new video.
I have been able to do this in MATLAB using the Computer Vision Toolbox; however, the video formats I can use are extremely limited, so I need another method.
Use the insertShape function in the Computer Vision System Toolbox.
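If MATLAB's supported video formats are the blocker, the same overlay is easy outside MATLAB: read the (x, y) pairs from the text file, draw one circle per frame, and re-encode. Here is a NumPy sketch of the per-frame drawing (in practice you would use cv2.circle with cv2.VideoCapture/cv2.VideoWriter for .mp4; the hand-rolled circle below just keeps the example self-contained):

```python
import numpy as np

def draw_circle(frame, cx, cy, radius, value=255):
    """Draw a filled circle onto a frame (a NumPy stand-in for MATLAB's
    insertShape or OpenCV's cv2.circle). Modifies and returns `frame`."""
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    frame[(xs - cx)**2 + (ys - cy)**2 <= radius**2] = value
    return frame

def annotate(frames, coords):
    """Overlay one circle per frame; coords is assumed to be a list of
    (x, y) pairs parsed from the text file, one pair per frame."""
    return [draw_circle(f.copy(), x, y, radius=1)
            for f, (x, y) in zip(frames, coords)]

frames = [np.zeros((5, 5), dtype=np.uint8) for _ in range(2)]
out = annotate(frames, [(2, 2), (0, 0)])
print(out[0][2, 2], out[1][0, 0])  # 255 255
```

The copy inside annotate keeps the source frames untouched, so the annotated sequence can be written to a new video while the original is preserved.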