Camera calibration of the Kinect IR camera with the Caltech vision toolbox - MATLAB

I am trying to calibrate the IR camera of the new Kinect v2 sensor, following all the steps from http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/example.html
The problem I am having is the following:
The IR image looks fine, but once I put it through the program, the image I get is mostly white (bright). See the pictures below.
Anyone encountered this issue before?
Thanks

You are not reading the IR image pixels correctly. The format is 16 bits per pixel, with only the high 10 bits used (see the specification here). You are probably visualizing them as if they were 8 bpp images, and therefore they end up white-saturated.
The simplest thing you can do is downshift the values by 8 bits (i.e. divide by 256) before interpreting them as a "standard" 8 bpp image.
However, in MATLAB you can simply use imagesc to display them with colour scaling. Both options are sketched below.
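A minimal sketch, assuming the raw frame is already in MATLAB as a uint16 matrix named ir16 (a hypothetical variable name):

    ir8 = uint8(bitshift(ir16, -8));   % downshift by 8 bits, i.e. divide by 256
    figure; imshow(ir8);               % now displays correctly as an 8 bpp image

    % Alternatively, let imagesc apply colour scaling to the raw 16-bit values
    figure; imagesc(ir16); colormap(gray); axis image; colorbar;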

Related

Finding corners in a low-resolution image (checkerboard)

I need some help with corner detection.
I printed a checkerboard and captured an image of it with a webcam. The problem is that the webcam has a low resolution, so it does not find all the corners. I therefore increased the number of corners to search for. Now it finds all the corners, but several different ones for the same physical corner.
All points are stored in a matrix, so I don't know which element belongs to which point.
(I cannot use the checkerboard function because it is not available in my MATLAB version.)
I am currently using the MATLAB function corner.
My question:
Is it possible to search for the extrema of all the point clouds to get one point for each corner? Or does somebody have an idea what else I could do? --> Please see the attached photo
Thanks for your help!
Looking at the image, my guess is that the false positives of the corner detection are caused by compression artifacts introduced by the lossy compression algorithm used by your webcam's image-acquisition software. You can clearly spot ringing artifacts around the edges of the checkerboard fields.
You could try two different things:
Check in your webcam's acquisition software whether you can disable the compression or switch to a lossless one.
Working with the image you already have, try to alleviate the impact of the compression by binarising the image with a simple thresholding operation (which, in the case of a checkerboard, does not even mean losing information, since the image is intrinsically binary).
In case you want to go for option 2), I would suggest the following steps (a short sketch follows the list). Let's assume the variable storing your image is called img:
Look at the distribution of grey values using e.g. the imhist function, like so: imhist(img)
Ideally you will see a clean bimodal distribution with no overlap. Choose an intensity value I in the middle between the two peaks.
Then simply binarise by assigning img(img < I) = 0; img(img >= I) = 255 (assuming img is of type uint8; using >= ensures no pixel keeps its original value).
Then run the corner algorithm again and see whether the outliers have disappeared.
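Putting the steps together, a minimal sketch (the threshold of 128 is a hypothetical value you would read off the histogram; the detector is run on a copy so the original image is preserved):

    imhist(img);                     % step 1: inspect the grey-value histogram
    I = 128;                         % step 2: hypothetical value between the two peaks

    bw = img;                        % work on a copy of the image
    bw(img <  I) = 0;                % step 3: everything below I becomes black
    bw(img >= I) = 255;              % everything at or above I becomes white

    C = corner(bw);                  % step 4: re-run the corner detector
    imshow(bw); hold on;
    plot(C(:,1), C(:,2), 'r*');      % overlay the detected corners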

Automatic assembly of multiple Kinect 2 grayscale depth images into a depth map

The project is about measuring different objects under the Kinect 2. The image-acquisition code sample from the SDK was adapted to save the depth information over its whole range, without clamping the values to 0-255.
The Kinect is mounted on a beam hoist and moved in a straight line over the model. At every stop, multiple images are taken, the mean is calculated, and the error correction is applied. Afterwards the images are put together to form one big depth map instead of multiple depth images.
Because of the reduced image size (chosen to limit the influence of the noisy edges), every image is 350x300 pixels. For the moment, the test is done with three images to be stitched together. As in the final program, I know in which direction the images are taken. Due to the beam hoist, there is no rotation, only translation.
In MATLAB the images are saved as matrices with depth values going from 0 to 8000. As I could only find ideas on how to treat images, the depth maps are converted into images with a colorbar; then only the colour part is saved and fed into the stitching script, i.e. not the axes and the grey border around the image.
The stitching algorithm doesn't work. It seems to me that the grayscale images don't have enough contrast for the algorithms to work with. Maybe I am just searching in the wrong direction?
Any ideas on how to treat this challenge?
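For reference, the conversion step I use looks roughly like this (a sketch; D stands for one of the 350x300 depth matrices):

    figure; imagesc(D, [0 8000]); colormap(jet); colorbar;
    axis image off                       % hide the axes so only the colour part remains
    F = getframe(gca);                   % grab the axes content, not the figure border
    imwrite(F.cdata, 'depth_tile.png');  % hypothetical file name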

How do I interpret an Intel RealSense camera depth map in MATLAB?

I was able to view and capture the image from the depth stream of an F200 Intel RealSense camera in MATLAB (using the webcam from the Hardware Support Package). However, it does not look the same as it does in the Camera Explorer.
What I see from MATLAB -
I have also linked Depth.mat that contains the image in the variable "D".
The image is returned as a 3-dimensional array of uint8. I assumed that the depth stream is a larger number broken into bytes across the planes, so I tried bit-shifting each plane and adding it to the next while taking care of the data types. I then displayed the result using imagesc, but did not get a proper depth image.
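For reference, my attempt looked roughly like this (a sketch that assumes the first two planes hold the low and high bytes, which may well be the wrong interpretation):

    p1 = uint16(D(:,:,1));                % assumed low byte
    p2 = uint16(D(:,:,2));                % assumed high byte
    depth = bitor(bitshift(p2, 8), p1);   % depth = p2*256 + p1
    imagesc(depth); colormap(jet); colorbar;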
How do I properly interpret this image? Or, is there an alternate way to capture images in MATLAB?

Record screen in real time using MATLAB?

I am using an optical microscope and camera to record some videos which are post-processed in MATLAB.
Real-time acquisition and pixel statistics would be extremely helpful, because some of what I am looking at absorbs very little light (I am using transmission mode). For example, a blank (background) sample gives me an average pixel value across a 512x512 CCD array of something like 144 (grayscale), while an actual sample might have an average value of 140 or so. This subtle shift in pixel intensity would be useful in helping me focus the microscope.
Unfortunately, my camera setup is not supported by MATLAB, so I cannot use the Image Acquisition Toolbox for real time. So I was wondering: is there a way I could 'fake' real-time image acquisition by selecting, say, a rectangle of my current desktop (the rectangle that is the video output of the microscope's camera) for MATLAB to record in real time?
Thanks
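One possible direction - a sketch, not a tested solution - is to call Java's java.awt.Robot from within MATLAB to grab a rectangle of the desktop, polling the camera window a few times per second. The screen coordinates below are hypothetical:

    robot = java.awt.Robot();
    rect  = java.awt.Rectangle(100, 100, 512, 512);  % hypothetical region of the camera window
    buf   = robot.createScreenCapture(rect);         % java.awt.image.BufferedImage
    tmp   = [tempname '.png'];
    javax.imageio.ImageIO.write(buf, 'png', java.io.File(tmp));
    frame = imread(tmp);                             % back in MATLAB as an RGB uint8 array
    g     = rgb2gray(frame);
    fprintf('mean grey value: %.2f\n', mean(double(g(:))));  % the statistic of interest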

Calculating corresponding pixels

I have a computer vision setup with two cameras. One of these cameras is a time-of-flight camera: it gives me the depth of the scene at every pixel. The other is a standard camera giving me a colour image of the scene.
We would like to use the depth information to remove some areas from the colour image. We plan on object, person, and hand tracking in the colour image and want to remove far-away background pixels with the help of the time-of-flight camera. It is not yet certain whether the cameras can be aligned in a parallel setup.
We could use OpenCV or MATLAB for the calculations.
I have read a lot about rectification, epipolar geometry, etc., but I still have trouble seeing the steps I have to take to calculate the correspondence for every pixel.
What approach would you use, and which functions can be used? Into which steps would you divide the problem? Is there a tutorial or sample code available somewhere?
Update: We plan on doing an automatic calibration using known markers placed in the scene.
If you want robust correspondences, you should consider SIFT. There are several implementations for MATLAB - I use the VLFeat library by Vedaldi and Fulkerson.
If you really need fast performance (and I think you don't), you should think about using OpenCV's SURF detector.
If you have any other questions, do ask. This other answer of mine might be useful.
PS: By correspondences, I'm assuming you want to find the coordinates of the projections of the same 3D point in both of your images - i.e. the pixel coordinates (i,j) of u_A in image A and u_B in image B that are projections of the same point in 3D. A sketch of this with VLFeat follows below.
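For illustration, a minimal sketch of the VLFeat route (it assumes VLFeat is installed and vl_setup has been run; the image file names are hypothetical):

    Ia = single(rgb2gray(imread('viewA.png')));  % vl_sift expects single-precision greyscale
    Ib = single(rgb2gray(imread('viewB.png')));
    [fa, da] = vl_sift(Ia);                      % keypoint frames and descriptors
    [fb, db] = vl_sift(Ib);
    [matches, scores] = vl_ubcmatch(da, db);     % nearest-neighbour descriptor matching
    uA = fa(1:2, matches(1,:));                  % pixel coordinates u_A in image A
    uB = fb(1:2, matches(2,:));                  % corresponding coordinates u_B in image B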