Aligning two images - MATLAB

I have two images of the same shoe sole, one taken with a scanning machine and another with a digital camera. I want to scale one of the images so that it can be easily aligned with the other without having to do it all by hand.
My thought was to use edge detection, connect all the points on the outside of the shoe, scale one image to fit right inside the other, and then scale the original image at the same rate.
I've messed around with different tools in the Image Processing Toolbox in MATLAB, but am making no progress.
Is there a better way to go about this?

My advice would be to first use the function activecontour to obtain the outer contour of the shoe in both images. Then use the function procrustes with the binary images as input.
[~, CameraFittedToScan] = procrustes(Scan,Camera);
This transforms the camera image to best fit the scanned image. If the scan and camera images are not the same size, adjust that first using the function imresize.
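A minimal sketch of that pipeline, assuming scan.png and camera.png are grayscale images of the sole (the file names, the mask initialisation, and the use of boundary points rather than the raw binary masks are all assumptions; procrustes expects two equally sized point sets):
scan   = imread('scan.png');                  % assumed grayscale
camera = imread('camera.png');
camera = imresize(camera, size(scan));        % bring both to the same size

% Segment the sole with an active contour grown from a rough inner mask.
init = false(size(scan));
init(50:end-50, 50:end-50) = true;
scanMask   = activecontour(scan,   init, 300);
cameraMask = activecontour(camera, init, 300);

% Trace the outer boundary of each mask as an n-by-2 list of points.
b1 = bwboundaries(scanMask);   b1 = b1{1};
b2 = bwboundaries(cameraMask); b2 = b2{1};

% procrustes needs equally many points, so resample the boundaries.
n  = min(size(b1,1), size(b2,1));
b1 = b1(round(linspace(1, size(b1,1), n)), :);
b2 = b2(round(linspace(1, size(b2,1), n)), :);

% Scale, rotation and translation that best map the camera outline
% onto the scan outline.
[~, CameraFittedToScan, tform] = procrustes(b1, b2);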

Related

Find corners in a low-resolution image (checkerboard)

I need some help with corner detection.
I printed a checkerboard and captured an image of it with a webcam. The problem is that the webcam has a low resolution, so it does not find all corners. I therefore increased the number of corners to search for; now it finds all corners, but several different points for the same corner.
All points are stored in a matrix, so I don't know which element belongs to which corner.
(I cannot use the checkerboard function because that function is not available in my MATLAB version.)
I am currently using the MATLAB function corner.
My question:
Is it possible to find the extremum of each point cloud to get one point per corner? Or does somebody have another idea of what I could do? Please see the attached photo.
Thanks for your help!
Looking at the image my guess is that the false positives of the corner detection are caused by compression artifacts introduced by the lossy compression algorithm used by your webcam's image acquisition software. You can clearly spot ringing artifacts around the edges of the checkerboard fields.
You could try two different things:
Check in your webcam's acquisition software whether you can disable the compression or change to a lossless compression
Working with the image you already have, you could try to alleviate the impact of the compression by binarising the image using a simple thresholding operation (which in the case of a checkerboard would not even mean losing information, since the image is intrinsically binary).
In case you want to go for option 2), I would suggest the following steps (a code sketch follows the list). Let's assume the variable storing your image is called img:
Look at the distribution of grey values using e.g. the imhist function, like so: imhist(img)
Ideally you would see a clean bimodal distribution with no overlap. Choose an intensity value I in the middle of the two peaks.
Then simply binarise by assigning img(img < I) = 0; img(img >= I) = 255 (assuming img is of type uint8; using >= ensures pixels exactly at I are assigned too).
Then run the corner algorithm again and see whether the outliers have disappeared.
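Putting these steps together, a minimal sketch (the threshold of 128 is a hypothetical placeholder; read the real value off the histogram):
imhist(img);                  % inspect the grey-value distribution first
I = 128;                      % hypothetical threshold between the two peaks
img(img <  I) = 0;            % below the threshold -> black
img(img >= I) = 255;          % at or above it      -> white
C = corner(img);              % re-run corner detection on the binarised image
imshow(img); hold on;
plot(C(:,1), C(:,2), 'r*');   % overlay the detected corners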

Automatic assembly of multiple Kinect 2 grayscale depth images to a depth map

The project is about measuring different objects with the Kinect 2. The image acquisition code sample from the SDK was adapted to save the depth information over its whole range, rather than limiting the values to 0-255.
The Kinect is mounted on a beam hoist and moved in a straight line over the model. At every stop, multiple images are taken, the mean calculated and the error correction applied. Afterwards the images are put together to have a big depth map instead of multiple depth images.
To limit the influence of the noisy edges, every image is reduced to a size of 350x300 pixels. For the moment, the test is done with three images to be put together. As in the final program, I know in which direction the images are taken. Due to the beam hoist there is no rotation, only translation.
In MATLAB the images are saved as matrices with depth values ranging from 0 to 8000. As I could only find approaches that work on images, the depth maps are rendered as color images with a colorbar; only the color part is saved and fed into the stitching script, i.e. not the axes and the grey border around the image.
The stitching algorithm doesn't work. It seems to me that the grayscale images don't have enough contrast for the algorithms to operate on. Maybe I am just searching in the wrong direction?
Any ideas on how to treat this challenge?
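Since the rig produces pure translation, one direction worth trying (a sketch under stated assumptions, not a tested answer) is to skip the color rendering entirely and register the raw depth matrices with normalized cross-correlation. Assuming d1 and d2 are overlapping 350x300 depth matrices, d2 continues the sweep to the right, and the vertical shift is non-negative:
strip = double(d2(:, 1:100));                 % left strip of d2 as template
c = normxcorr2(strip, double(d1));            % correlate against d1
[~, idx]   = max(c(:));
[ypk, xpk] = ind2sub(size(c), idx);
yoff = ypk - size(strip, 1);                  % top-left of d2 within d1, 0-based
xoff = xpk - size(strip, 2);                  % (assumed non-negative here)

% Paste both into one larger depth map; d2 overwrites the overlap.
mosaic = nan(max(size(d1,1), yoff + size(d2,1)), xoff + size(d2,2));
mosaic(1:size(d1,1), 1:size(d1,2)) = d1;
mosaic(yoff+1:yoff+size(d2,1), xoff+1:xoff+size(d2,2)) = d2;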

MATLAB texture mapping: Resize the image using imresize or built-in resize?

I'm plotting a 3D box and I use texture mapping to map images onto the six sides. I use the following lines:
Z=zeros(height,width);
surface(Z, 'FaceColor','texturemap','EdgeColor','none','Cdata',image);
hold on;
And then the next side, and so on. What I have done so far is resize the images using imresize:
image = imresize(image,[height width]);
My question is whether there will be a big difference, in terms of resolution and speed, if I just use the original-size image for texture mapping. Is it maybe even better not to use imresize? The thing is, I'd have to change some code in between these lines and come up with some other solutions, but if the resolution of the mapped images were better without imresize, it would be totally worth it.
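For reference, a minimal sketch of the two variants side by side (height, width, and the file name are hypothetical; with 'FaceColor','texturemap' the CData is stretched over the surface whatever its size, so both calls are valid):
height = 100; width = 200;                    % hypothetical box dimensions
img = imread('side.png');                     % hypothetical texture image
Z = zeros(height, width);

% Variant 1: pre-resize the texture to the surface grid.
figure;
surface(Z, 'FaceColor','texturemap', 'EdgeColor','none', ...
    'CData', imresize(img, [height width]));

% Variant 2: let texturemap stretch the full-resolution image itself.
figure;
surface(Z, 'FaceColor','texturemap', 'EdgeColor','none', ...
    'CData', img);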

multiple image stitching

I have successfully implemented an algorithm that uses RANSAC to calculate a transformation aligning matched features between two images. After that I can stitch the images. But now I am trying to do this for multiple images.
I can compute the transformation for each pair of images and stitch them together, but I want one result stitched as a whole. Is it possible?
I think you need to map all images into one place defined by a specific 'destination image'.
I.e., pick a certain image (it should probably be in the middle of the pack in terms of where the camera points) and compute the transformation between that destination image and every other image.
Then map every image into the destination space.
I guess you could also map all images into some other destination space/projection -- but you need something more than RANSAC for this.
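A minimal sketch of the composition step, assuming images is a cell array of RGB uint8 images and pairTform{i} is the projective2d your RANSAC step produces mapping image i+1 into the frame of image i (all names and the direction convention are assumptions; for brevity this anchors everything at the first image rather than the middle one suggested above):
n = numel(images);
toRef = cell(1, n);
toRef{1} = projective2d(eye(3));              % image 1 is the destination
for k = 2:n
    % Chain the pairwise transforms: image k -> frame k-1 -> ... -> frame 1.
    toRef{k} = projective2d(pairTform{k-1}.T * toRef{k-1}.T);
end

% Common output view large enough to hold every warped image.
xl = zeros(n,2); yl = zeros(n,2);
for k = 1:n
    [xl(k,:), yl(k,:)] = outputLimits(toRef{k}, ...
        [1 size(images{k},2)], [1 size(images{k},1)]);
end
panoView = imref2d([ceil(max(yl(:))-min(yl(:))) ceil(max(xl(:))-min(xl(:)))], ...
    [min(xl(:)) max(xl(:))], [min(yl(:)) max(yl(:))]);

% Warp each image into the shared frame; later images overwrite earlier ones.
pano = zeros([panoView.ImageSize 3], 'uint8');
for k = 1:n
    warped = imwarp(images{k}, toRef{k}, 'OutputView', panoView);
    mask   = imwarp(true(size(images{k},1), size(images{k},2)), ...
        toRef{k}, 'OutputView', panoView);
    pano(repmat(mask, 1, 1, 3)) = warped(repmat(mask, 1, 1, 3));
end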

I need help compensating for the shifting of images when trying to create a grid with one image and apply it on another

I have two images of yeast plates:
Permissive:
Xgal:
The two images should be in the same spot and roughly the same size. I am trying to use one of the images to generate a grid and then apply that grid to the other image. The grid is made by looking at the colonies on the permissive plate, which should have 1536 colonies on it. The problem is that the camera that was used to take the images moves a bit up and down, and the images can also be shifted slightly due to the plate not being in exactly the same place.
This means that when I use the permissive plate to generate the grid, the grid is shifted on the Xgal plate. Does anyone know a way in which I can compensate for this? I am using Perl with the GD module. Any advice would be greatly appreciated. Thank you.
I've done this in other languages in relation to motion analysis. You can mathematically determine the shift in position between two images using cross correlation.
Fortunately, you may not need to actually do the maths :) You could use something like ImageMagick, which provides a lot of image processing functions for you and is Perl-scriptable. Scripts already exist for tasks very much like yours.
If you have only a few pairs of images and, as in the examples, they are very different in appearance, then an alternative to Tim Barrass's method would be (see the example command after the list):
Open the first image in GIMP and find the co-ordinates of a landmark feature
Open the second image in GIMP and find the co-ordinates of the same landmark
Calculate the offset
Shift the second image using ImageMagick's convert command with the -affine option. Set the parameters sx=sy=1.0, rx=ry=0.0, tx = negative horizontal offset, ty = negative vertical offset.
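For example, if the landmark sits 12 px further right and 7 px further down in the second image (the offsets and file names are hypothetical):
convert second.png -affine 1.0,0.0,0.0,1.0,-12,-7 -transform second_shifted.png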