I'm doing a project that requires acquiring real-world coordinates from a camera. The first thing I need to do is calibrate the camera. I use the Camera Calibrator app from the MATLAB Computer Vision Toolbox and about 40 sample images for the calibration. All the samples were taken with a Logitech C922. But after calibration, the result looks very wrong, as you can see in the image below.
The image looks more distorted than the original. I have also tried to calibrate using OpenCV, but the result is the same. Does anyone know what is wrong and why this happens?
I am sorry if these questions are really beginner level; camera calibration is very new to me and I was not able to find answers.
Thank you in advance!
First, you really need to figure out what 'calibration' means.
It's clear that the picture shows the undistorted image, since the lines on the chessboard and those in the background are quite straight. Without undistortion, the chessboard in the center would look squeezed in the radial direction. Check the 'Show Original' button in the bottom-left corner of your picture, click it, and compare the two images to see the difference.
What the calibrator does is estimate the intrinsic/extrinsic parameters and the distortion coefficients, and, if you wish, undistort the pictures you gave it. It has already done its job.
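Since you mention having tried OpenCV as well, here is a minimal sketch of the same pipeline there, just to show what is actually being estimated: collect chessboard corners, estimate the camera matrix and distortion coefficients, then undistort one image to verify. The file names, board size, and square size below are placeholders, not values from your setup.

// Sketch only: calibrate from chessboard images and undistort one of them with OpenCV.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <string>
#include <vector>

int main() {
    const cv::Size boardSize(9, 6);   // inner corners of the chessboard (assumption)
    const float squareSize = 25.0f;   // square size in mm (assumption)

    std::vector<std::vector<cv::Point2f>> imagePoints;
    std::vector<std::vector<cv::Point3f>> objectPoints;
    cv::Size imageSize;

    // One ideal board in world coordinates (Z = 0 plane).
    std::vector<cv::Point3f> board;
    for (int r = 0; r < boardSize.height; ++r)
        for (int c = 0; c < boardSize.width; ++c)
            board.emplace_back(c * squareSize, r * squareSize, 0.0f);

    // Placeholder list of calibration images (you used about 40).
    std::vector<std::string> files = {"img01.png", "img02.png" /* ... */};
    for (const auto& f : files) {
        cv::Mat gray = cv::imread(f, cv::IMREAD_GRAYSCALE);
        if (gray.empty()) continue;
        imageSize = gray.size();
        std::vector<cv::Point2f> corners;
        if (cv::findChessboardCorners(gray, boardSize, corners)) {
            cv::cornerSubPix(gray, corners, cv::Size(11, 11), cv::Size(-1, -1),
                             cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.01));
            imagePoints.push_back(corners);
            objectPoints.push_back(board);
        }
    }

    // Estimate intrinsics and distortion coefficients (what the calibrator computes).
    cv::Mat cameraMatrix, distCoeffs;
    std::vector<cv::Mat> rvecs, tvecs;
    double rms = cv::calibrateCamera(objectPoints, imagePoints, imageSize,
                                     cameraMatrix, distCoeffs, rvecs, tvecs);
    std::cout << "RMS reprojection error (pixels): " << rms << std::endl;

    // Undistort one image to check the result visually.
    cv::Mat img = cv::imread(files[0]), undistorted;
    cv::undistort(img, undistorted, cameraMatrix, distCoeffs);
    cv::imwrite("undistorted.png", undistorted);
    return 0;
}

If the reported RMS reprojection error is well below a pixel, the calibration itself is usually fine, and what you are seeing in the app is simply the undistorted view described above.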
If I have understood correctly, 3D 360 photos are created from a panorama photo, so I guess it should be possible to create a 3D photo (non-360) from a normal photo. But how? I did not find anything on Google. Any idea what I should search for?
So far, if nothing is available (which I doubt), I'll try to show a duplicate of the same photo to each eye, with one copy shifted a little to the right and the other shifted a little to the left. But I suspect the real distortion algorithm is much more complicated.
Note: I'm also receiving answers here: https://plus.google.com/u/0/115463690952639951338/posts/4KdqFcqUTT9
I am in no way certain of this, but my intuition on how 3D 360 images are created in GoogleVR is this:
As you take a panorama image, it actually takes a series of images. As you turn the phone around, the perspective changes slightly with each image, not only by angle, but also offset (except in the unlikely event you spin the phone around its own axis). When it stitches together the final image, it creates one image for each eye, picking suitable images from the series so that it creates a 3D effect when viewed together. The same "area" of the image for each eye comes from a different source image.
You can't do anything similar with a single image. It's the multitude of images produced, each with a different perspective coming from the turning of the phone, that enables the algorithm to create a 3D image.
A 2D image lacks a dimension and cannot simply be converted to 3D, but there are clever workarounds. For example, the Google Pixel, even though it doesn't have two cameras, can make an image seem 3D by applying a machine-learning algorithm that creates the effect of perspective and depth through selective blurring.
3D photos can't be taken with a normal camera, but you can take 360 photos with one. There are many apps that let you do this, and there are also algorithms to do it programmatically.
The rectification function in MATLAB seems to be producing the wrong output. Can anyone let me know if I am getting the right result?
Left Image
Right Image
Anaglyph of unrectified images
Anaglyph of rectified images
Here is my code:
% Grab one frame from each camera
leftImageSnapshot = getsnapshot(handles.vidL);
imshow(leftImageSnapshot);
rightImageSnapshot = getsnapshot(handles.vidR);
imshow(rightImageSnapshot);
% Rectify the pair using the stereo calibration parameters, then overlay in red-cyan
[I1Rect, I2Rect] = rectifyStereoImages(leftImageSnapshot, rightImageSnapshot, ...
    stereoParams, 'OutputView', 'valid');
imshowpair(I1Rect, I2Rect, 'falsecolor', 'ColorChannels', 'red-cyan');
I was following this link for image rectification. After rectification, the images are supposed to look as if the cameras were parallel, but in my case vertical disparity is still present in the image.
I am trying to obtain a disparity map, for which the vertical disparity needs to be removed.
My best guess would be that your cameras were moved after you did the calibration. Once you calibrate, the position and orientation of the cameras relative to each other must not change. If they do, your stereoParams are no longer valid.
To see what went wrong, do the calibration again using the Stereo Camera Calibrator app, and then click the "Show Rectified" button at the lower left corner of the main image pane. It will show you a rectified pair of calibration images. If those look OK, then your cameras have moved, and you have to take the calibration images again and recalibrate. If the rectified calibration images look bad, then something is wrong with the calibration itself.
By the way, there is a stereoAnaglyph function, which you can use to create a red-cyan anaglyph.
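If you want to sanity-check the same step outside MATLAB, the equivalent rectification in OpenCV looks roughly like this. This is only a sketch: the intrinsics, distortion coefficients, R, and T are assumed to come from your own stereo calibration (stored here in a hypothetical stereo_calib.yml file), not from the MATLAB stereoParams object.

// Sketch: rectify a stereo pair with OpenCV given stereo calibration results.
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat left = cv::imread("left.png"), right = cv::imread("right.png");  // placeholder images

    // Placeholder calibration file; M1/D1/M2/D2/R/T are assumed to be your own results.
    cv::Mat cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2, R, T;
    cv::FileStorage fs("stereo_calib.yml", cv::FileStorage::READ);
    fs["M1"] >> cameraMatrix1; fs["D1"] >> distCoeffs1;
    fs["M2"] >> cameraMatrix2; fs["D2"] >> distCoeffs2;
    fs["R"] >> R; fs["T"] >> T;

    // Compute the rectification transforms and projection matrices.
    cv::Mat R1, R2, P1, P2, Q;
    cv::stereoRectify(cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2,
                      left.size(), R, T, R1, R2, P1, P2, Q);

    // Build the remapping tables and warp both images.
    cv::Mat map1x, map1y, map2x, map2y;
    cv::initUndistortRectifyMap(cameraMatrix1, distCoeffs1, R1, P1, left.size(),
                                CV_32FC1, map1x, map1y);
    cv::initUndistortRectifyMap(cameraMatrix2, distCoeffs2, R2, P2, right.size(),
                                CV_32FC1, map2x, map2y);

    cv::Mat leftRect, rightRect;
    cv::remap(left, leftRect, map1x, map1y, cv::INTER_LINEAR);
    cv::remap(right, rightRect, map2x, map2y, cv::INTER_LINEAR);

    // In a correctly rectified pair, corresponding features lie on the same row,
    // i.e. there should be no vertical disparity left.
    cv::imwrite("left_rect.png", leftRect);
    cv::imwrite("right_rect.png", rightRect);
    return 0;
}

The row-alignment check at the end is exactly what you are verifying visually with the anaglyph.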
I am new to OpenCV and need to know which OpenCV methods can detect different shapes (circle, square, rectangle, triangle, ellipse) in a camera-captured image on the iPhone.
So could someone point me in the right direction (references/articles/anything) as to which techniques are best for getting this done?
Thanks..
iOmi
First, you will probably need to look at an edge detector such as Canny to extract the shapes into a binary image (although this may be expensive on the iPhone).
For circles, I would have a look at HoughCircles.
For squares and rectangles, you should look at the findContours method and the sample code squares.cpp in the samples directory of your OpenCV download.
With a quick Google search I was able to find an article about detecting shapes in C#, which roughly corresponds to the methods you would use in another language with the OpenCV library.
I have not used OpenCV on iOS, but I hope this will help get you started.
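To tie these suggestions together, here is a rough C++ sketch of that pipeline (blur, HoughCircles for circles, Canny plus findContours and approxPolyDP for polygons). The input file name and all thresholds are placeholders and will need tuning for your images:

// Sketch of the pipeline described above; parameter values are rough guesses.
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    cv::Mat img = cv::imread("shapes.jpg");   // placeholder input image
    cv::Mat gray, blurred, edges;
    cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
    cv::GaussianBlur(gray, blurred, cv::Size(5, 5), 1.5);

    // Circles: Hough transform on the blurred grayscale image.
    std::vector<cv::Vec3f> circles;
    cv::HoughCircles(blurred, circles, cv::HOUGH_GRADIENT, 1, blurred.rows / 8,
                     100, 30, 10, 0);
    std::cout << "Circles found: " << circles.size() << std::endl;

    // Polygons: Canny edges, contours, then approximate each contour.
    cv::Canny(blurred, edges, 50, 150);
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(edges, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    for (const auto& c : contours) {
        std::vector<cv::Point> poly;
        cv::approxPolyDP(c, poly, 0.02 * cv::arcLength(c, true), true);
        if (poly.size() == 3)
            std::cout << "triangle" << std::endl;
        else if (poly.size() == 4)
            std::cout << "square or rectangle" << std::endl;  // compare side lengths to tell apart
        else if (poly.size() > 6)
            std::cout << "circle or ellipse candidate" << std::endl;
    }
    return 0;
}

Distinguishing a square from a rectangle is then a matter of checking the aspect ratio of the 4-point polygon, and an ellipse candidate can be confirmed with fitEllipse on the contour.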
I'm looking for code to detect edges in a photo based on contrast.
Basically, the user will roughly paint the mask with their finger on an iPhone, iPod, or iPad. Then the code would detect the edges and snap the mask to them.
Thanks for your help!
http://www.image-y.com/before.jpg
http://www.image-y.com/after.jpg
I recommend taking a look at OpenCV, which can also be compiled for iOS (see https://github.com/aptogo/OpenCVForiPhone). A nice addition (with explanations) is this article: http://b2cloud.com.au/tutorial/uiimage-pre-processing-category.
Once you have gained a basic understanding of what you can do with OpenCV, I'd personally try some kind of thresholding and contour detection (take a look at cv::findContours). Afterwards you could filter the found contours using the input given by your user.
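As a rough sketch of that idea (not a finished implementation): threshold the photo, find contours, and keep only the contours that overlap the user's roughly painted mask. The file names and the userMask image are placeholders for whatever your app actually provides, and both images are assumed to have the same size:

// Sketch: refine a roughly painted mask by snapping it to thresholded contours.
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat photo = cv::imread("before.jpg");                                      // placeholder photo
    cv::Mat userMask = cv::imread("finger_paint_mask.png", cv::IMREAD_GRAYSCALE);  // placeholder rough mask

    // Binarize the photo (Otsu picks the threshold automatically).
    cv::Mat gray, bw;
    cv::cvtColor(photo, gray, cv::COLOR_BGR2GRAY);
    cv::threshold(gray, bw, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(bw, contours, cv::RETR_LIST, cv::CHAIN_APPROX_SIMPLE);

    // Keep only contours that intersect the user's rough mask.
    cv::Mat refined = cv::Mat::zeros(photo.size(), CV_8UC1);
    for (size_t i = 0; i < contours.size(); ++i) {
        cv::Mat contourMask = cv::Mat::zeros(photo.size(), CV_8UC1);
        cv::drawContours(contourMask, contours, static_cast<int>(i), cv::Scalar(255), cv::FILLED);
        cv::Mat overlap;
        cv::bitwise_and(contourMask, userMask, overlap);
        if (cv::countNonZero(overlap) > 0)
            refined |= contourMask;
    }
    cv::imwrite("refined_mask.png", refined);
    return 0;
}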
I have a video taken at an angle to the axis of a circular body. Since it was taken from an unknown angle, the circle appears as an ellipse.
How can I find the camera's offset angle from the video? Also, is it correct to apply the same transformation to all the frames in the video, since the camera was in a fixed location?
For a super easy fix, go back to the scene and take the video again. This time, make sure the circle looks like a circle.
That being said, this is an interesting topic in academia, and there are various solutions/articles aimed at this kind of problem. Based on your reputation, I believe you already know that, but I still wanted to give Stack Overflow members a shot at answering it. So here it goes.
For an easy fix, you can start with this function and guess the camera location by trial and error until you find an acceptable transformation for your image (a frame of the video). The function does not work right out of the box; you have to debug it a little.
If you have access to the (virtual) scene of the image, you can take a new image. Based on mutual feature points between the new image and the original image, register the two images (and obtain the transformation) (ex1, ex2).
Finally, apply the same transformation to each frame of the video.
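Not the exact method referenced above, but one concrete way to sketch this with OpenCV: fit an ellipse to the rim of the circular body, estimate the tilt from the axis ratio (under an orthographic approximation, minor/major = cos(tilt)), build an affine warp that maps the ellipse back to a circle, and apply that one warp to every frame. The video file name is a placeholder, and the rim points are synthesized here just so the sketch runs; in practice you would get them from contour detection.

// Sketch: estimate tilt from the ellipse and warp every frame so it becomes a circle.
#include <opencv2/opencv.hpp>
#include <cmath>
#include <iostream>
#include <vector>

int main() {
    // Placeholder rim points: a synthetic ellipse so the sketch runs end to end.
    std::vector<cv::Point2f> rimPoints;
    for (int i = 0; i < 36; ++i) {
        float t = i * 10.0f * static_cast<float>(CV_PI) / 180.0f;
        rimPoints.emplace_back(320.0f + 100.0f * std::cos(t),
                               240.0f +  60.0f * std::sin(t));
    }
    cv::RotatedRect e = cv::fitEllipse(rimPoints);

    float a = e.size.width  / 2.0f;   // semi-axis along the ellipse's local x direction
    float b = e.size.height / 2.0f;   // semi-axis along the ellipse's local y direction
    float r = std::max(a, b);         // target circle radius

    // Under an orthographic approximation, minor/major = cos(tilt angle).
    double tiltDeg = std::acos(std::min(a, b) / r) * 180.0 / CV_PI;
    std::cout << "Estimated camera tilt: " << tiltDeg << " degrees" << std::endl;

    // Directions of the ellipse axes in the image.
    float th = static_cast<float>(e.angle * CV_PI / 180.0);
    cv::Point2f u(std::cos(th), std::sin(th));
    cv::Point2f v(-std::sin(th), std::cos(th));

    // Affine map taking the ellipse onto a circle of radius r about the same center.
    cv::Point2f src[3] = { e.center, e.center + a * u, e.center + b * v };
    cv::Point2f dst[3] = { e.center, e.center + r * u, e.center + r * v };
    cv::Mat M = cv::getAffineTransform(src, dst);

    // Apply the same warp to every frame of the video.
    cv::VideoCapture cap("input.mp4");   // placeholder file name
    cv::Mat frame, corrected;
    while (cap.read(frame)) {
        cv::warpAffine(frame, corrected, M, frame.size());
        // ... use or write 'corrected' here ...
    }
    return 0;
}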
To answer your second question: although the camera location is fixed, there may be objects moving in the scene, so applying the same transformation to every frame will only correct the objects that are still, which is not ideal. In the end, it depends on what the aims of the project are and how this partial correction affects them.