I would like to capture live video in MATLAB and take picture frames from it, on which image processing will be done to identify certain colours in the environment.
I do not have the Image Acquisition Toolbox; I only have the Image Processing Toolbox. The methods I know of all use Image Acquisition Toolbox functions.
Try these File Exchange (FEX) options:
Simple Video Camera Frame Grabber Toolkit
VCAPG2
Video Adaptor device (webcam) setup for MATLAB
I'm working on stereo vision and I am using data from The KITTI Vision Benchmark Suite. The calibration parameters they provide are very different from the parameters that the Stereo Camera Calibrator app produces, and I couldn't find a way to use their data in MATLAB.
I also tried to use their calibration images in the Stereo Camera Calibrator app, which looks like this:
[screenshot: Stereo Camera Calibrator app in MATLAB]
But when I run the calibration it fails with this error:

Unable to estimate camera parameters. Images may contain severe lens
distortion or the 3-D orientations of the calibration pattern are too
similar across images. If calibration pattern orientations are too
similar, try removing very similar images or adding additional images
with the pattern in varied orientations.
Please help me if you have any idea how to solve this.
The MATLAB Calibration Workflow documentation just says "capture images,"
but it doesn't say how, or what file format is required.
My MATLAB script has the line:
load('handshakeStereoParams.mat');
...and I believe this .mat file is generated by MATLAB's Stereo Camera Calibrator app.
The MathWorks documentation on the Stereo Camera Calibrator app does give specific advice on image formats:
Use uncompressed images or lossless compression formats such as PNG.
There's also a great deal more information on the details of what sort of images you need, under the "Image, Camera, and Pattern Preparation" subheading, in the expandable sections.
"Capture images" means take images of the checkerboard pattern with the cameras that you are trying to calibrate. Generally, you can load images of any format that imread can read into the Stereo Camera Calibrator app. However, it is much better not to use a format with lossy compression, like JPEG, because the compression artifacts affect the calibration accuracy. So if you can configure your camera to save the images as PNG or BMP, you should do that. And if your camera only lets you use JPEG, then turn the image quality up to 100%.
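Besides the app, you can also run the same calibration programmatically once you have the PNG image pairs on disk. Here is a minimal sketch; the folder names, file names, and the 25 mm square size are all assumptions you would replace with your own:

```matlab
% Sketch (assumed folder layout): programmatic stereo calibration from PNGs.
leftImages  = imageDatastore(fullfile('calib', 'left'),  'FileExtensions', '.png');
rightImages = imageDatastore(fullfile('calib', 'right'), 'FileExtensions', '.png');

% Detect checkerboard corners in both image sets.
[imagePoints, boardSize] = detectCheckerboardPoints( ...
    leftImages.Files, rightImages.Files);

% World coordinates of the pattern corners (square size in millimeters).
squareSize  = 25;  % assumed -- measure your printed pattern
worldPoints = generateCheckerboardPoints(boardSize, squareSize);

% Estimate the stereo parameters.
I = readimage(leftImages, 1);
stereoParams = estimateCameraParameters(imagePoints, worldPoints, ...
    'ImageSize', [size(I, 1), size(I, 2)]);
```

The resulting stereoParams object is the same kind of thing the app exports, so you can save it to a .mat file and load it later.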
I have a video that was taken with a GoPro and I would like to get rid of the fisheye distortion. I know I can get rid of the fisheye with the gopro software, but I want to do this using Matlab instead.
I know there's this http://www.mathworks.com/help/vision/ref/undistortimage.html that applies to images; however, how would I apply it to a full video? The video has 207 frames (it's a short 5-6 second video).
Thank you very much!
Can't you just sample your video stream at 24 fps (using e.g. ffmpeg), apply your MATLAB routine one frame at a time, then rebuild the video stream in MATLAB itself?
You can apply undistortImage to each frame of the video. If the video is saved to a file, you can use vision.VideoFileReader to read it one frame at a time, and then you call undistortImage. Then you can write the undistorted frame to a different file using vision.VideoFileWriter, or you can display it using vision.VideoPlayer.
Of course, this is all assuming that you have calibrated your camera beforehand using the Camera Calibrator App.
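Putting the steps above together, a frame-by-frame loop might look like this. This is a sketch: the calibration .mat file, the variable name cameraParams inside it, and the video file names are assumptions for your setup:

```matlab
% Sketch: undistort every frame of a saved video.
% 'goproCalibration.mat' is assumed to contain cameraParams
% exported from the Camera Calibrator app.
load('goproCalibration.mat');  % provides cameraParams (assumed)

reader  = vision.VideoFileReader('gopro.mp4');   % input file name assumed
vidInfo = info(reader);
writer  = vision.VideoFileWriter('undistorted.avi', ...
    'FrameRate', vidInfo.VideoFrameRate);

while ~isDone(reader)
    frame = step(reader);                        % read one frame
    undistorted = undistortImage(frame, cameraParams);
    step(writer, undistorted);                   % write the corrected frame
end

release(reader);
release(writer);
```

For 207 frames this should only take a few seconds; you could also swap the writer for vision.VideoPlayer to preview the result instead of saving it.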
I have some video (.mp4), and some text data which includes the XY coordinates of a circle that I wish to draw over the video's frames and render a new video.
I have been able to do this in MATLAB using the Computer Vision Toolbox; however, the video formats I can use are extremely limited... I need another method.
Use the insertShape function in the Computer Vision System Toolbox.
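A minimal sketch of that approach follows. The file names, the one-[x y]-row-per-frame layout of the text data, and the 20-pixel radius are assumptions; insertShape takes a [x y r] triple for a circle:

```matlab
% Sketch (assumed file names and data layout): draw one circle per frame.
% 'circles.txt' is assumed to hold one "x y" row per video frame.
xy = readmatrix('circles.txt');        % N-by-2 matrix of circle centers

reader = VideoReader('input.mp4');
writer = VideoWriter('annotated.avi'); % pick a profile VideoWriter supports
writer.FrameRate = reader.FrameRate;
open(writer);

radius = 20;  % assumed radius in pixels
k = 1;
while hasFrame(reader)
    frame = readFrame(reader);
    frame = insertShape(frame, 'Circle', [xy(k, :), radius], ...
        'Color', 'red', 'LineWidth', 3);
    writeVideo(writer, frame);
    k = k + 1;
end
close(writer);
```

Since VideoReader/VideoWriter handle the I/O here, the set of usable container formats is whatever those two functions support on your platform, independent of the Computer Vision Toolbox.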
Is there anyone working on extracting the data from a 3D stereo video using OpenCV (e.g. a 3D Blu-ray)? Some documentation states that .avi is the only supported video file format in OpenCV. If you are, or you know someone who is, would you mind giving me a tutorial on how to do that? (e.g. Is a frame of a 3D stereo video an image of two views plus one depth map? Or is a frame of a 3D stereo video two images of two views and some depth maps?) How do I read this information?
Another question: is there any API in OpenCV that can control the output from the graphics card's ports? I mean, if I have a graphics card with two DVI ports, would it be possible for the monitor connected to DVI port A to display the left image of the 3D stereo video while the one on DVI port B displays the right image?