Two cameras take two images of a wooden plank. The images overlap on the plank, and I need to stitch them together so that the result looks natural and preferably seamless to the human eye for inspection purposes. The images are cropped to the same size and masked to remove the background and most of the non-overlapping areas, but the plank can have a slight tilt on the conveyor belt.
Currently I'm using the normxcorr2 function on the general overlap area, following the ideas from the MATLAB tutorial for normxcorr2, to try to locate one image inside the other and work out an overlay offset. However, this fails quite often, as normxcorr2 returns a zero offset, resulting in a bad stitch:
c = normxcorr2(plank_part1,plank_part2);
% Find the peak in the cross-correlation:
[ypeak, xpeak] = find(c==max(c(:)));
% Account for the padding that normxcorr2 adds:
yoffSet = ypeak-size(plank_part1,1);
xoffSet = xpeak-size(plank_part1,2);
[xoffSet,yoffSet]

ans =

     0     0
It would seem normxcorr2 cannot always find the correct overlay of the images, or any overlay at all(?), even though I try to make it easier by increasing the grayscale contrast with histeq. My guess is that the amount of "gray-ish" area from the sapwood overwhelms the distinct knots, which are the important parts to stitch properly.
Does anyone know of a way to make this stitching process more reliable, maybe with some more preprocessing, or any other MATLAB functions that would make this work better?
P.S. I cannot use anything but freely accessible scripts, as anything else would probably raise license/copyright issues for my project.
Thank you for your time in trying to help!
You should look at the following link. The term that you should be looking for is image registration. There are more advanced methods than normxcorr2.
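As a hedged starting point (assuming the Image Processing Toolbox is available), phase correlation via imregcorr is one such method; it can also estimate the slight rotation from the tilt on the conveyor belt. A minimal sketch, reusing the variable names from the question:
% Sketch: intensity-based registration by phase correlation; 'rigid'
% estimates both the translation and the slight tilt.
fixed  = plank_part2;
moving = plank_part1;
tform  = imregcorr(moving, fixed, 'rigid');
aligned = imwarp(moving, tform, 'OutputView', imref2d(size(fixed)));
imshowpair(fixed, aligned, 'blend')   % visual check of the alignment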
I need some help with corner detection.
I printed a checkerboard and captured an image of it with a webcam. The problem is that the webcam has a low resolution, so it does not find all corners. I therefore increased the number of corners to search for. Now it finds all corners, but several different points for the same corner.
All points are stored in a matrix, so I don't know which element belongs to which corner.
(I cannot use the checkerboard function because the function is not available in my MATLAB version.)
I am currently using the MATLAB function corner.
My question:
Is it possible to search for the extrema of all the point clouds to get one point for each corner? Or does somebody have an idea of what I could do? Please see the attached photo.
Thanks for your help!
Looking at the image my guess is that the false positives of the corner detection are caused by compression artifacts introduced by the lossy compression algorithm used by your webcam's image acquisition software. You can clearly spot ringing artifacts around the edges of the checkerboard fields.
You could try two different things:
Check in your webcam's acquisition software whether you can disable the compression or change to a lossless compression
Working with the image you already have, you could try to alleviate the impact of the compression by binarising the image using a simple thresholding operation (which, in the case of a checkerboard, would not even mean losing information, since the image is intrinsically binary).
In case you want to go for option 2, I would suggest the following steps. Let's assume the variable storing your image is called img:
Look at the distribution of grey values using e.g. the imhist function, like so: imhist(img)
Ideally you would see a clean bimodal distribution with no overlap. Choose an intensity value I in the middle of the two peaks.
Then simply binarize by assigning img(img < I) = 0; img(img >= I) = 255 (assuming img is of type uint8).
Then run the corner algorithm again and see if the outliers have disappeared.
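Put together, a minimal sketch of these steps (the threshold 128 is a placeholder; read the real value off the histogram):
imhist(img)                  % step 1: inspect the grey-value distribution
I = 128;                     % step 2: placeholder, pick a value between the two peaks
img(img <  I) = 0;           % step 3: binarize...
img(img >= I) = 255;         % ...to a black-and-white checkerboard
C = corner(img);             % step 4: rerun the corner detector
imshow(img); hold on
plot(C(:,1), C(:,2), 'r*')   % overlay the detected corners for inspection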
I have several images that I would like to clean of artifacts. They show different animals, but the animals appear as if they were folded (look at the image attached). The folds are straight and go through the wings as well; they are just hard to see, but they are there. I would like to remove the folds while preserving the information in the picture (the structure and color of the wings).
I am using MATLAB right now and I have tried several methods, but nothing seems to work.
Initially I tried to see if I could find anything using an FFT, but I do not see any structure in the spectrum that I could remove. I tried several edge detection methods (like Sobel, etc.), but the problem is that the edge detection always finds the edges of the wings (because they are stronger) rather than the straight lines. I was wondering if anyone has any ideas about how to proceed with this problem? I am not attaching any code because none of the methods I have tried (and described) are working.
Thank you for the help in advance.
I'll leave this bit here as a starting point for anyone who knows how to erase those lines without affecting the quality of the image:
a = imread('https://i.stack.imgur.com/WpFAA.jpg');
b = abs(diff(double(a),1,2));               % horizontal gradient (double avoids uint8 saturation)
b = max(b,[],3);                            % strongest response across the colour channels
c = imerode(b,strel('rectangle',[200,1]));  % keep only edges that run vertically for ~200 px
I think you should use a 2-dimensional Fast Fourier Transform.
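If you want to give the spectrum another look, a minimal sketch (assuming the image from the snippet above is RGB):
a = imread('https://i.stack.imgur.com/WpFAA.jpg');
F = fftshift(fft2(im2double(rgb2gray(a))));
imshow(log(1 + abs(F)), [])   % periodic folds should appear as bright off-centre spots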
It might be easier to first try GIMP / Photoshop and see whether a filter can resolve it.
I'm guessing the CCD sensor got broken (it looks too good to be an old scanner problem). Maybe there was electrical distortion while the camera sensor was being read out. Such signals in theory have a repeating nature.
I don't think this was caused by a wrong color depth/color space translation.
If you like to code, then you might also write a custom pixel-based filter in which you take x vertical pixels (say 20 or so) and compare them to the next vertical row of 20 pixels. Compare in HSL (L for lightness), not RGB.
From all pixels, calculate brightness changes this way.
Then, per pixel, check that H (hue) is within the range of the nearby pixels, and take the slope average of their brightness (e.g. take 30 pixels horizontally, calculate the average brightness of the first 10 and the last 10 pixels, and apply that brightness to centre pixel 15; try values like 30, 15, 10 and see what works well).
Since you have whole streaks that appear brighter/darker, such a filter would smooth that effect out. The difficulty is to preserve the other patterns (the wings are less distorted); knowing what colour space the sensor used might allow a better choice than HSL, maybe HSV or so.
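A loose sketch of that idea (the window sizes are guesses, the hue check is omitted, and HSV stands in for HSL):
a   = imread('https://i.stack.imgur.com/WpFAA.jpg');
hsv = rgb2hsv(im2double(a));
% 30-pixel horizontal window: average the first 10 and last 10 pixels and
% assign that brightness to the centre pixel, ignoring the 10 in the middle.
k = [ones(1,10) zeros(1,10) ones(1,10)] / 20;
hsv(:,:,3) = imfilter(hsv(:,:,3), k, 'replicate');
out = hsv2rgb(hsv);
imshowpair(a, out, 'montage')   % compare original and smoothed result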
Here are some images taken from experiments which show a bubble caused by spheres moving in liquid.
Now I want to get the area of the bubble in every image using MATLAB. The first thing that comes to my mind is edge detection. So I tried the following code:
A = imread('D:\1.jpg');
BW1 = edge(A,'sobel');
figure, imshow(BW1)
to get the cavity edge of the picture, which was then cropped manually. As the picture shows, the result (below) doesn't satisfy the requirements. Also, I still don't know how to get the area of the bubble.
So, can someone tell me what I should do?
I think you should use background subtraction and try a simple segmentation.
You could use regionprops to get the area of the bubble:
https://www.mathworks.com/help/images/ref/regionprops.html
I feel like it should work pretty well. If you have a hard time obtaining a clean segmentation, you could probably improve the experimental setup to increase the contrast of the bubble with respect to the background, by choosing a background as dark as possible and using some lateral illumination to leverage the diffusion of the light by the bubble.
Finally, the segmentation should be performed in a region of interest (ROI), since you know the bubble is confined within the tank.
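A minimal sketch of that pipeline, assuming grayscale frames bg (tank without the bubble) and im (with the bubble):
d  = imabsdiff(im, bg);            % background subtraction
bw = imbinarize(d);                % simple global (Otsu) threshold
stats = regionprops(bw, 'Area');   % per-blob pixel areas
bubbleArea = max([stats.Area])     % assume the largest blob is the bubble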
As for the issue of getting an accurate cavity edge, the Computer Vision System Toolbox has the vision.ForegroundDetector object, which implements a variant of Stauffer and Grimson's GMM background subtraction. The implementation is very fast, leveraging multiple cores. Check out this example of how to use background subtraction.
As for the issue of finding the area of the bubble, use the bwarea command (https://www.mathworks.com/help/images/ref/bwarea.html); it will sum up all the white pixels in the image.
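Roughly, assuming frames is a cell array of grayscale frames from the experiment (the training-frame count is a guess):
detector = vision.ForegroundDetector('NumTrainingFrames', 10);
for k = 1:numel(frames)
    fg = step(detector, frames{k});   % logical foreground mask for frame k
end
area = bwarea(fg)                     % area of the mask from the last frame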
I believe background subtraction is the most efficient method to calculate this bubble area. Note that you may need to use opening and closing techniques afterwards to filter out other regions (see imopen and imclose at https://uk.mathworks.com/help/images/ref/imopen.html), and afterwards you can apply bwarea to calculate the area, as in the sketch below. You could also use the impixelinfo command to compare the intensity level of the bubbles and the other areas, and then threshold the image to extract the bubbles. This works only if the same threshold level applies to all images. Further, it is possible to combine all these techniques, depending entirely on your images, to achieve better results.
Other shape-based techniques can also be used to extract the bubble region's area.
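For instance, a hedged sketch of the thresholding route (assuming uint8 grayscale images; the level 128 and the structuring-element sizes are placeholders you would tune with impixelinfo):
bw = im > 128;                      % placeholder threshold picked with impixelinfo
bw = imopen(bw, strel('disk', 2));  % opening removes small bright specks
bw = imclose(bw, strel('disk', 5)); % closing fills gaps inside the bubble
area = bwarea(bw)                   % area of the remaining bubble region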
I have seen many tutorials where people blend two images placed on top of each other very nicely in Photoshop. For example, here are two images that are placed on top of each other:
Then in Photoshop after some work, the edges (around the smaller image) will be erased and two images are nicely mixed.
For example, this is a possible end result:
As can be seen, there is no edge and the two images are very nicely blended, without blurring.
Can someone point me to an article or post that shows the math behind it? If there is MATLAB code that can do it, that would be even better. Or at least, can someone tell me the correct term for this so I can do a Google search on the topic?
Straight alpha blending alone is not sufficient, as it will perform a uniform mixing of the two images.
To achieve nice-looking results, you will need to define an alpha map, i.e. an image of the same size where you adjust the degree of transparency depending on the image that should dominate.
To obtain the mask, you can draw it by hand, for example as a filled outline, as a path or a polygon. Then you have to strongly blur this mask to get a smooth blend.
It looks very difficult (if not impossible) to automate this, as no software can guess what you want to enhance.
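For reference, a minimal sketch of that manual route, assuming A (background) and B (inserted image) are the same size and of type double in [0,1]:
mask  = roipoly(A);                    % hand-draw the region where B should dominate
alpha = imgaussfilt(double(mask), 25); % strong blur creates a smooth transition band
out   = (1 - alpha).*A + alpha.*B;     % per-pixel weighted sum (implicit expansion over channels)
imshow(out)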
The term you are looking for is alpha blending.
https://en.wikipedia.org/wiki/Alpha_compositing#Alpha_blending
The maths behind it boils down to alpha-weighted sums.
MATLAB provides the function imfuse to achieve this:
https://de.mathworks.com/help/images/ref/imfuse.html
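For example:
C = imfuse(A, B, 'blend');   % 'blend' overlays A and B with uniform 50% transparency
imshow(C)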
Edit: (as it still seems to be unclear)
Let's say you have 2 images A and B which you want to blend.
You put one image over the other, so for each coordinate you have 2 RGB tuples.
Now you need to define the weight of both images. Will you only see the colour of image A or B, or in which ratio will you mix them?
This is done by alpha values.
So all you need is a 2d function that defines the mixing ratio for each pixel.
Usually you have values between 0 and 1 where 0 shows one image, 1 shows the other image, 0.5 will mix them both equally and so on...
Just read the article I have linked. It gives you a clear mathematical definition. I can't provide more detail than that.
If you have problems understanding that I urge you to read a book on image processing fundamentals.
I have two images of yeast plates:
Permissive:
Xgal:
The two images should be in the same spot and roughly the same size. I am trying to use one of the images to generate a grid and then apply that grid to the other image. The grid is made by looking at the colonies on the permissive plate; the plate should have 1536 colonies on it. The problem is that the camera that was used to take the images moves a bit up and down, and the images can also be shifted slightly due to the other plate not being in exactly the same place.
This then means that when I use the permissive plate to generate the grid on the Xgal plate, the grid shifts. Does anyone know a way in which I can compensate for this? I am using Perl with the GD module. Any advice would be greatly appreciated. Thank you
I've done this in other languages in relation to motion analysis. You can mathematically determine the shift in position between two images using cross correlation.
Fortunately, you may not need to actually do the maths :) You could use something like ImageMagick, which provides a lot of image processing functions for you and is Perl-scriptable. Independently, scripts already exist for tasks very much like yours -- see.
If you have only a few pairs of images and, as in the examples, they are very different in appearance, then an alternative to Tim Barrass's method would be:
Open the first image in gimp, find the co-ordinates of a landmark feature
Open the second image in gimp, find the co-ordinates of the same landmark
Calculate the offset
Shift the second image using ImageMagick's convert command with the affine option. Set the parameters sx = sy = 1.0, rx = ry = 0.0, tx = negative horizontal offset, ty = negative vertical offset.
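With hypothetical offsets of 12 px right and 7 px down (and made-up file names), the call might look like this in ImageMagick 6 syntax, where -affine takes the values sx,rx,ry,sy,tx,ty:
convert xgal.png -affine 1.0,0.0,0.0,1.0,-12,-7 -transform xgal_shifted.png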