mathematical equations for blending two images - matlab

I have seen many tutorials in which people blend two images placed on top of each other very nicely in Photoshop. For example, here are two images placed on top of each other:
Then in Photoshop, after some work, the edges (around the smaller image) are erased and the two images are nicely mixed.
For example, this is a possible end result:
As can be seen, there is no edge and the two images are blended very nicely, without blurring.
Can someone point me to an article or post that shows the math behind it? If there is MATLAB code that can do it, that would be even better. Or at least, can someone tell me the correct term for this so I can search for the topic?

Straight alpha blending alone is not sufficient, as it will perform a uniform mixing of the two images.
To achieve nice-looking results, you will need to define an alpha map, i.e. an image of the same size in which you adjust the degree of transparency depending on which image should dominate.
To obtain the mask, you can draw it by hand, for example as a filled outline, a path, or a polygon. Then you have to strongly blur this mask to get a smooth blend.
It looks very difficult (if not impossible) to automate this, as no software can guess what you want to enhance.
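For illustration, here is a minimal MATLAB sketch of this recipe; the file names and polygon coordinates are made-up placeholders, and the blur size is something you would tune by eye:
A = im2double(imread('background.jpg'));   % placeholder file names
B = im2double(imread('foreground.jpg'));   % must be the same size as A
% draw the alpha map by hand, e.g. as a filled polygon (coordinates are examples)
alpha = double(poly2mask([100 400 400 100], [100 100 300 300], size(A,1), size(A,2)));
alpha = imgaussfilt(alpha, 40);            % strongly blur the mask for a smooth transition
alpha = repmat(alpha, [1 1 3]);            % one weight per colour channel
out = alpha .* B + (1 - alpha) .* A;       % alpha-weighted sum
imshow(out)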

The term you are looking for is alpha blending.
https://en.wikipedia.org/wiki/Alpha_compositing#Alpha_blending
The maths behind it boils down to alpha-weighted sums.
MATLAB provides the function imfuse to achieve this:
https://de.mathworks.com/help/images/ref/imfuse.html
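For a quick uniform blend (no custom alpha map), a minimal call might look like this; 'blend' mixes the two images at equal weight:
C = imfuse(A, B, 'blend', 'Scaling', 'joint');
imshow(C)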
Edit: (as it still seems to be unclear)
Let's say you have 2 images A and B which you want to blend.
You put one image over the other, so for each coordinate you have 2 RGB tuples.
Now you need to define the weight of both images. Will you only see the colour of image A or of image B, or in which ratio will you mix them?
This is done by alpha values.
So all you need is a 2D function that defines the mixing ratio for each pixel.
Usually you have values between 0 and 1, where 0 shows one image, 1 shows the other image, 0.5 mixes them both equally, and so on...
Just read the article I have linked. It gives you a clear mathematical definition. I can't provide more detail than that.
If you have problems understanding that, I urge you to read a book on image processing fundamentals.
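As a one-line illustration of that definition in MATLAB (alpha may be a scalar or a per-pixel map with values in [0, 1]):
out = alpha .* im2double(A) + (1 - alpha) .* im2double(B);  % 0 shows B, 1 shows A, 0.5 mixes equally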

Related

Image processing/restoration in Matlab

I have several images that I would like to correct for artifacts. They show different animals, but they look as if they were folded (see the attached image). The folds are straight and they go through the wings as well; they are just hard to see, but they are there. I would like to remove the folds while preserving the information in the picture (structure and colour of the wings).
I am using MATLAB right now and I have tried several methods, but nothing seems to work.
Initially I tried to see whether I could spot anything using an FFT, but I do not see a structure in the spectrum that I can remove. I tried several edge detection methods (like Sobel, etc.), but the problem is that the edge detection always finds the edges of the wings (because they are stronger) rather than the straight lines. I was wondering if anyone has any ideas about how to proceed with this problem? I am not attaching any code because none of the methods I have tried (and described) works.
Thank you for the help in advance.
I'll leave this bit here for anyone who knows how to erase those lines without affecting the quality of the image:
a = im2double(imread('https://i.stack.imgur.com/WpFAA.jpg')); % double, so differences can go negative
b = abs(diff(a,1,2));                       % horizontal intensity differences
b = max(b,[],3);                            % strongest response over the colour channels
c = imerode(b,strel('rectangle',[200,1]));  % keep only edges that persist vertically for ~200 px
I think you should use a 2-dimensional Fast Fourier Transform.
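A minimal sketch of that approach, assuming the folds produce visible peaks in the spectrum (the asker reports they may not); the notch coordinates are placeholders you would find by inspection:
I = im2double(rgb2gray(imread('https://i.stack.imgur.com/WpFAA.jpg')));
F = fftshift(fft2(I));               % centred 2D spectrum
imshow(log(1 + abs(F)), [])          % inspect for bright off-centre peaks
mask = ones(size(F));
% mask(200:210, 300:310) = 0;        % notch out the peaks found above (example coordinates)
I2 = real(ifft2(ifftshift(F .* mask)));
imshow(I2)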
It might be easier to first try GIMP / Photoshop, to see if an existing filter can resolve it.
I'm guessing the CCD sensor was broken (it looks too good to be an old-scanner problem). Maybe there was electrical distortion while the camera sensor was being read. Such signals in theory have a repeating nature.
I don't think this was caused by a wrong color depth / color space translation.
If you like to code, you might also write a custom pixel-based filter in which you take x vertical pixels (say 20 or so) and compare them to the next vertical column of 20 pixels. Compare in HSL (L for lightness), not RGB.
Calculate brightness changes this way for all pixels.
Then, per pixel, check that H (hue) is within the range of nearby pixels and take a sloped average of their brightness (e.g. take 30 pixels horizontally, calculate the average brightness of the first 10 and the last 10 pixels, and apply that brightness to centre pixel 15; try values around 30, 15, 10 to find what works well).
Since you have whole strokes that appear brighter/darker, such a filter would smooth that effect out. The difficulty is to preserve the other patterns (the wings are less distorted); knowing which color space the sensor used might allow a better choice than HSL, maybe HSV or so.
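A rough MATLAB sketch of that sliding-window idea, using the V channel of HSV as the brightness; the 30/10 window sizes are the suggested starting values, and the hue-consistency check is left as a comment:
rgb = im2double(imread('https://i.stack.imgur.com/WpFAA.jpg'));
hsv = rgb2hsv(rgb);
V = hsv(:,:,3);
% average of the first 10 and last 10 pixels of a 30-pixel horizontal window
k = [ones(1,10) zeros(1,10) ones(1,10)] / 20;
Vs = imfilter(V, k, 'replicate');
% to preserve the wings, only apply Vs where the hue of nearby pixels is
% consistent; this sketch applies it everywhere for simplicity
hsv(:,:,3) = Vs;
imshow(hsv2rgb(hsv))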

Stitching overlaying images from different cameras in Matlab

Two cameras take two images of a wooden plank. The images overlap on the plank, and I need to stitch them together so that the result looks natural and preferably seamless to the human eye, for inspection purposes. The images are cropped to the same size and masked to remove the background and most of the non-overlapping areas, but the plank can have a slight tilt on the conveyor belt.
Currently I'm using the normxcorr2 function on the general overlap area, following the MATLAB tutorial for normxcorr2, to try to identify one image in the other and work out an overlay offset. However, this quite often fails, as normxcorr2 returns a zero offset, resulting in bad stitching:
c = normxcorr2(plank_part1, plank_part2);
% Find the peak in the cross-correlation:
[ypeak, xpeak] = find(c == max(c(:)), 1);
% Account for the padding that normxcorr2 adds:
yoffSet = ypeak - size(plank_part1, 1);
xoffSet = xpeak - size(plank_part1, 2);
[xoffSet, yoffSet]
ans =
0 0
It would seem normxcorr2 cannot always find the correct overlay of the images, or any overlay at all(?), even though I try to make it easier by increasing the grayscale contrast with histeq. My guess is that the amount of "gray-ish" area from the sapwood overwhelms the distinct knots, which are the important parts to stitch properly.
Does anyone know a way to either improve the success rate of this stitching process, maybe with some more preprocessing, or any other MATLAB functions/skills to make this work better?
P.S. I cannot use anything but freely accessible scripts, as anything else would probably raise license/copyright issues for my project.
Thank you for your time in trying to help!
The term that you should be looking for is image registration. There are more advanced methods than normxcorr2.
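For example, here is a feature-based registration sketch; it assumes the Computer Vision Toolbox, with plank_part1 and plank_part2 being the grayscale images from the question:
% detect and match local features in both images
pts1 = detectSURFFeatures(plank_part1);
pts2 = detectSURFFeatures(plank_part2);
[f1, v1] = extractFeatures(plank_part1, pts1);
[f2, v2] = extractFeatures(plank_part2, pts2);
pairs = matchFeatures(f1, f2);
% estimate the transform; the RANSAC step inside rejects bad matches
tform = estimateGeometricTransform(v2(pairs(:,2)), v1(pairs(:,1)), 'similarity');
registered = imwarp(plank_part2, tform, 'OutputView', imref2d(size(plank_part1)));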

Compare two nonlinear transformed (monochromatic) images

Given are two monochromatic images of the same size. Both are pre-aligned/anchored to one common point. Some points of the original image moved to new positions in the new image, but not in a linear fashion.
Below you see a picture of an overlay of the original (red) and transformed (green) image. What I am looking for now is a measure of how much the individual points shifted.
At first I thought of a simple average correlation of the whole matrix, or some kind of phase correlation, but I was wondering whether there is a better way of doing this.
I already found that link, but it didn't help that much. Currently I am implementing this in MATLAB, but that shouldn't matter, I guess.
Update: For clarity, I have hundreds of these image pairs and I want to compare how similar each pair is. It doesn't have to be the fanciest algorithm; rather, it should be easy to implement and yield a good estimate of similarity.
An unorthodox approach uses RASL to align an image pair. A Python implementation is here: https://github.com/welch/rasl and it also provides a link to the RASL authors' original MATLAB implementation.
You can give RASL a pair of related images, and it will solve for the transformation (scaling, rotation, translation; you choose) that best overlays the pixels in the images. A transformation parameter vector is found for each image, and the difference in parameters tells how "far apart" they are (in terms of transform parameters).
This is not the intended use of RASL, which is designed to align large collections of related images while being indifferent to changes in alignment and illumination. But I just tried it out on a pair of jittered images and it worked quickly and well.
I may add a shell command that explicitly does this (I'm the author of the Python implementation) if I receive encouragement :) (today, you'd need to write a few lines of Python to load your images and return the resulting alignment difference).
You can try using optical flow: http://www.mathworks.com/discovery/optical-flow.html
It is usually used to measure the movement of objects from frame T to frame T+1, but you can also use it in your case. You would get a map that tells you the "offset" by which each point in image 1 moved in image 2.
Then, if you want a metric that gives you a "distance" between the images, you can average the values of the flow map, or something similar.
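A minimal sketch with MATLAB's optical flow objects (Computer Vision Toolbox), treating the two monochromatic images img1 and img2 as consecutive frames:
opticFlow = opticalFlowFarneback;
estimateFlow(opticFlow, img1);         % initialise with the first image
flow = estimateFlow(opticFlow, img2);  % flow from img1 to img2
similarity = mean(flow.Magnitude(:))   % one crude "distance" number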

Detecting shape from the predefined shape and cropping the background

I have several images of a pugmark with a lot of irrelevant background region. I cannot use intensity-based algorithms to separate the background from the foreground.
I have tried several methods. One of them is detecting the object in a homogeneous-intensity image,
but this does not work with rough-texture images like:
http://img803.imageshack.us/img803/4654/p1030076b.jpg
http://imageshack.us/a/img802/5982/cub1.jpg
http://imageshack.us/a/img42/6530/cub2.jpg
There could be three possible approaches:
1) Reduce the roughness of the image to obtain a smoother texture, i.e. a flatter surface.
2) Detect the pugmark-like shape in these images by defining a rough pugmark shape in a database, and then remove the background to obtain an image like http://i.imgur.com/W0MFYmQ.png
3) Detect the regions with depth and separate them from the background based on the differences in their depths.
Please tell me whether any of these methods would work and, if so, how to implement them.
I have a hunch that this problem could benefit from using polynomial texture maps.
See here: http://www.hpl.hp.com/research/ptm/
You might want to consider top-down information in the process. See, for example, this work.
It looks like you're close enough to the pugmark, so I think you should be able to detect pugmarks using the Viola-Jones algorithm. Maybe a PCA-like algorithm such as Eigenfaces would work too; even if you're not trying to recognize a particular pugmark, it can still be used to tell whether or not there is a pugmark in the image.
Have you tried edge detection on your image? I guess it should be possible to fine-tune the Canny edge detector thresholds to get rid of the noise (if that's not good enough, low-pass filter your image first), then do shape recognition on what remains (you would then be in the field of geometric feature learning and structural matching). Viola-Jones and possibly a PCA-like algorithm would be my first try, though.
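A minimal sketch of that suggestion; the Gaussian sigma and the Canny thresholds are placeholders to tune per image:
I = rgb2gray(imread('http://img803.imageshack.us/img803/4654/p1030076b.jpg'));
I = imgaussfilt(I, 2);              % low-pass filter first to suppress the rough texture
BW = edge(I, 'canny', [0.05 0.2]);  % [low high] thresholds
imshow(BW)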

I need help compensating for the shifting of images when trying to create a grid with one image and apply it on another

I have two images of yeast plates:
Permissive:
Xgal:
The two images should be in the same spot and roughly the same size. I am trying to use one of the images to generate a grid and then apply that grid to the other image. The grid is made by looking at the colonies on the permissive plate; the plate should have 1536 colonies on it. The problem is that the camera used to take the images moves a bit up and down, and the images can also be shifted slightly because the other plate is not in exactly the same place.
This means that when I use the permissive plate to generate the grid for the Xgal plate, the grid shifts. Does anyone know a way in which I can compensate for this? I am using Perl with the GD module. Any advice would be greatly appreciated. Thank you.
I've done this in other languages in relation to motion analysis. You can mathematically determine the shift in position between two images using cross-correlation.
Fortunately, you may not need to actually do the maths :) You could use something like ImageMagick, which provides a lot of image processing functions for you and is Perl-scriptable. Independent scripts already exist for tasks very much like yours.
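If you do want the maths, here is a sketch of estimating the translation with correlation-based registration in MATLAB (imregcorr, Image Processing Toolbox), since the rest of this thread uses MATLAB; the file names are placeholders:
fixed  = rgb2gray(imread('permissive.png'));
moving = rgb2gray(imread('xgal.png'));
tform  = imregcorr(moving, fixed, 'translation');
shift  = tform.T(3, 1:2)   % [tx ty] offset to apply when transferring the grid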
If you have only a few pairs of images and, as in the examples, they are very different in appearance, then an alternative to Tim Barrass's method would be:
1) Open the first image in GIMP and find the coordinates of a landmark feature.
2) Open the second image in GIMP and find the coordinates of the same landmark.
3) Calculate the offset.
4) Shift the second image using ImageMagick's convert command with the affine option. Set the parameters sx = sy = 1.0, rx = ry = 0.0, tx = the negative horizontal offset, ty = the negative vertical offset.