Why are the point clouds obtained from a stereo camera (left and right camera images) only on a flat plane, with no depth? - stereo-3d

Can anyone tell me why the point clouds obtained from a stereo camera (left and right camera images) using OpenCV C++ lie only on a flat plane, with no depth?
[side image]
[frontal image]

I was able to resolve the issue. It was due to bad calibration; I got better point clouds once the calibration was improved.
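For anyone hitting the same symptom: the depth recovered from a disparity map is Z = f·B/d, so a point cloud that collapses onto a single plane usually means the disparities (or the reprojection matrix produced by a bad calibration) are nearly constant. A minimal NumPy sketch, with a made-up focal length and baseline, illustrates the relationship:

```python
import numpy as np

# Hypothetical stereo parameters (not from the question):
f = 700.0        # focal length in pixels
baseline = 0.06  # distance between the two cameras, in metres

# A disparity map that varies across the image...
disparity = np.array([[10.0, 20.0],
                      [40.0, 80.0]])
depth = f * baseline / disparity   # Z = f * B / d

# ...gives varying depth:
assert depth.min() < depth.max()

# But if a bad calibration makes the disparities collapse to a constant,
# every pixel lands at the same depth -- a flat plane:
flat = f * baseline / np.full((2, 2), 15.0)
assert np.allclose(flat, flat[0, 0])
```

Checking the spread of your disparity values before reprojecting is a quick way to tell whether the problem is in matching or in the calibration itself.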

Related

iPhone TrueDepth front camera inaccurate face tracking - skewed transformation

I am using an app that was developed using the ARKit framework. More specifically, I am interested in the 3D facial mesh and in the face's orientation and position with respect to the phone's front camera.
That said, I record videos of subjects performing in front of the front camera. During these recordings, I have noticed that some videos resulted in inaccurate transformations, with the face being placed behind the camera and the rotation skewed (not an orthogonal basis).
I do not have a deep understanding of how the TrueDepth camera combines all its sensors to track and reconstruct the 3D facial structure, so I do not know what could potentially cause this issue. Although I have experimented with different setups (different subjects, with and without a mirror, screen on and off, etc.), I still have not been able to identify the source of the inaccurate transformations. Could the camera angle be interfering with the mirror somehow?
Below I have attached two recordings of myself that resulted in incorrect (above) and correct (below) estimated transformations.
Do you have any idea of what might be the problem? Thank you in advance.

export 3d scene to make a stitched image

I am developing an app which creates spherical panoramas, using ARKit. I made a button and named it Capture. Every time the user taps the Capture button, the app takes a snapshot, then creates a plane from the device's point of view and uses the snapshot image as the diffuse material for that plane.
My end goal is to export all those planes, stitched into one image, to make a spherical panorama. Can anyone guide me in the right direction?
I've tried using OpenCV, but it doesn't work when I take photos of ceilings or the floor. It also uses a lot of CPU and memory. So far, after spending more than a month, I'm only able to create a regular panorama with OpenCV, and only by stitching images in small batches and then stitching those stitched images to make the final image. It works OK when you place your phone on a tripod: as long as the camera doesn't move much along the x, y and z axes, it works.
So I guess the only two options I'm left with are exporting the ARKit scene with multiple planes (with photos on them) or using the phone's gyro data to stitch the images.
I'm guessing that using gyro data to stitch images will be extremely complicated in itself. Can anyone point me in the right direction?
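If you do go the orientation-data route, the core operation is mapping each pixel through the camera's rotation onto the sphere of an equirectangular panorama. A rough sketch of that mapping, with a hypothetical focal length and principal point (none of these values come from the question):

```python
import numpy as np

# Hypothetical camera intrinsics: focal length in pixels, principal point.
f, cx, cy = 500.0, 320.0, 240.0

def pixel_to_equirect(u, v, R):
    """Map pixel (u, v) through camera rotation R onto equirectangular
    (longitude, latitude) coordinates, in radians."""
    ray = np.array([(u - cx) / f, (v - cy) / f, 1.0])  # back-project pixel
    ray = R @ ray                                       # rotate into world frame
    ray /= np.linalg.norm(ray)                          # unit direction on sphere
    lon = np.arctan2(ray[0], ray[2])
    lat = np.arcsin(ray[1])
    return lon, lat

# With an identity rotation, the principal point maps to (0, 0):
lon, lat = pixel_to_equirect(cx, cy, np.eye(3))
```

Looping this over every pixel of every captured frame (with each frame's rotation from ARKit's camera transform) fills in the panorama; blending overlaps and handling poles are the hard parts that stitching libraries normally do for you.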

Stereo Vision for Obstacle Detection

I'm working on a stereo vision project with HALCON/.NET. My project is to scan the surface of a metal plate. Is it possible to detect small holes (1-3 mm) in it with stereo vision?
If you are somewhat familiar with epipolar geometry and MRF optimization, you can have a look at this classic paper on 'Depth Estimation from Video'.
http://www.cad.zju.edu.cn/home/bao/pub/Consistent_Depth_Maps_Recovery_from_a_Video_Sequence.pdf
For camera calibration, you can use their ACTS software from here -
http://www.zjucvg.net/acts/acts.html
It accepts a video sequence and generates camera parameters and depth maps.
I hope it helps!
Yes, it is definitely possible to detect it - but I doubt you need stereo vision for it. Stereo vision is only useful when you want to recover 3D information (depth) from a scene.
Detection and classification can be achieved through deep learning methods too, and it will probably be more intuitive that way - but it depends on how distinctive your 'hole' is compared to the background of your scene. A problem of a similar nature has been discussed in this paper.
The same problem persists for stereo vision: if the background of your scene has features similar to what you are trying to 'detect', it will create problems during stereo matching.
Even a simple 'edge' detector in a monocular vision system will run into the same problem.
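To make the background-ambiguity point concrete, here is a toy example: a gradient-magnitude edge detector (a crude stand-in for Sobel or Canny) fires on the rim of a synthetic dark hole, but it would fire just as readily on any background texture with similar contrast. All values are invented for illustration:

```python
import numpy as np

# Synthetic "metal plate": uniform bright surface with a small dark hole.
plate = np.full((40, 40), 200.0)
plate[18:22, 18:22] = 30.0   # a hole a few pixels across

# Gradient-magnitude edge response (a crude stand-in for Sobel/Canny):
gy, gx = np.gradient(plate)
edges = np.hypot(gx, gy) > 50

# The hole's rim fires the detector; locate the responses:
ys, xs = np.nonzero(edges)
```

On a clean plate this works, but any scratch or texture that produces a similar gradient will trigger the same response, which is exactly the ambiguity described above.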

Matlab Camera Calibration: saving chessboard calibration images

I'm currently trying to calibrate a camera and use those data to calibrate a projector. The first steps of this process are printing a chessboard, generating chessboard calibration images, and using them to calibrate the camera. The MATLAB documentation is quite thorough, but it doesn't mention how one can generate their own chessboard calibration images. I can only assume this is a fairly simple thing to do, but I'm new to MATLAB and haven't figured it out yet, so any help would be greatly appreciated.
You generate the calibration images by taking pictures of the chessboard with your camera.
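As for producing the board to photograph in the first place, a printable chessboard pattern is easy to generate programmatically. A sketch in NumPy (Python rather than MATLAB, and the sizes here are arbitrary; MATLAB's own tooling also ships a printable pattern):

```python
import numpy as np

def make_chessboard(rows=7, cols=10, square_px=80):
    """Generate a printable chessboard image as a uint8 array.
    rows x cols squares gives (rows-1) x (cols-1) inner corners."""
    tile = np.indices((rows, cols)).sum(axis=0) % 2        # 0/1 checker pattern
    board = np.kron(tile, np.ones((square_px, square_px)))  # upscale each square
    return (board * 255).astype(np.uint8)

board = make_chessboard()
```

The calibration images themselves are then simply photos of this printed board, taken from many angles and distances, as the answer says.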
Also, if you have a recent version of MATLAB with the Computer Vision System Toolbox, try the Camera Calibrator app.
https://sites.google.com/site/visheshpanjabihomepage/codes
The second set of codes will help you collect the images for the camera calibration toolbox by Dr. Bouguet (the link is given on that page).

Camera Calibration on MATLAB

I am very new to camera calibration and I have been trying to work with the camera calibration app from MATLAB's computer vision toolbox.
So I followed the steps they suggested on the website and so far so good, I was able to obtain the intrinsic parameters of the camera.
So now, I am a bit confused about what I should do with the cameraParameters object that was created when the calibration was done.
So my questions are:
(1) What should I do with the cameraParameter object that was created?
(2) How do I use this object when I am using the camera to capture images of something?
(3) Do I need the checkerboard around each time I capture images for my experiment?
(4) Should the camera be placed at the same spot each time?
I am sorry if those questions are really beginner level, camera calibration is new to me and I was not able to find my answers.
Thank you so much for your help.
I assume you are working with just one camera, so only the intrinsic parameters of the camera are in play.
(1), (2) Once your camera is calibrated, you use these parameters to undistort the image. Cameras don't capture the scene exactly as it is in reality, because the lens distorts it a bit, and the calibration parameters are used to correct the images. More on Wikipedia.
As for when you need to recalibrate the camera (3): if you set up the camera and don't change its focus, you can keep using the same calibration parameters, but once you change the focal distance a recalibration is necessary.
(4) As long as you don't change the focal distance and you are not using a stereo camera system, you can move your camera freely.
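For intuition about what undistortion actually corrects: the calibration's distortion coefficients describe how the lens bends normalised image coordinates, and undistortion inverts that mapping per pixel. A small sketch of the standard radial model, with invented coefficients and intrinsics:

```python
import numpy as np

# Hypothetical intrinsics and radial distortion coefficients:
k1, k2 = -0.2, 0.05            # radial terms (negative k1 = barrel distortion)
f, cx, cy = 800.0, 320.0, 240.0  # focal length (px) and principal point

def distort(u, v):
    """Apply the radial distortion model to an ideal pixel (u, v)."""
    x, y = (u - cx) / f, (v - cy) / f     # normalised coordinates
    r2 = x * x + y * y
    scale = 1 + k1 * r2 + k2 * r2 ** 2    # radial scaling of the ray
    return cx + f * x * scale, cy + f * y * scale

# The principal point is unaffected; points further from it shift more:
assert distort(cx, cy) == (cx, cy)
```

Undistortion functions solve the inverse of this mapping for every output pixel, which is why they need the calibrated coefficients.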
What you are looking for are two separate calibration steps: alignment of the depth image to the color image, and conversion from depth to a point cloud. Both functions are provided by the Windows SDK, and there are MATLAB wrappers that call these SDK functions. You may want to do your own calibration only if you are not satisfied with the manufacturer's calibration information stored on the Kinect. Usually the error is within 1-2 pixels in the 2D alignment, and 4 mm in 3D.
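The depth-to-point-cloud step itself is just back-projection through the depth camera's pinhole intrinsics. A NumPy sketch, with made-up Kinect-like intrinsic values:

```python
import numpy as np

# Hypothetical depth-camera intrinsics (in practice, read from the device):
fx, fy, cx, cy = 365.0, 365.0, 256.0, 212.0

def depth_to_points(depth):
    """Back-project a depth image (metres per pixel) to an Nx3 point cloud."""
    v, u = np.indices(depth.shape)     # pixel row/column grids
    z = depth
    x = (u - cx) * z / fx              # pinhole back-projection
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# A flat wall 2 m away becomes a planar cloud at z = 2:
pts = depth_to_points(np.full((424, 512), 2.0))
```

The SDK functions do essentially this, plus the extrinsic transform that aligns the result with the color camera.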
When you calibrate a single camera, you can use the resulting cameraParameters object for several things. First, you can remove the effects of lens distortion using the undistortImage function, which requires cameraParameters. There is also a function called extrinsics, which you can use to locate your calibrated camera in the world relative to some reference object (e.g. a checkerboard). Here's an example of how you can use a single camera to measure planar objects.
A depth sensor, like the Kinect, is a bit of a different animal. It already gives you the depth in real units at each pixel. Additional calibration of the Kinect is useful if you want a more precise 3D reconstruction than what it gives you out of the box.
It would generally be helpful if you could tell us more about what it is you are trying to accomplish with your experiments.