I am trying to erode a small shape, a region in an MR image, but I can't find a suitable structuring element: a disk doesn't work since the object is very small, and a square doesn't give a good result.
The image:
Thanks in advance.
It's really difficult when objects are this small. At this scale, you don't have many possibilities:
square 3x3 (8-neighborhood, the biggest)
disk 3x3, which is then just a simple cross (4-neighborhood)
segments of length 3 with four possible orientations, but then you impose an orientation and you will get artifacts
There is one last possibility that is more complicated to implement: a hexagon. You have to use two different masks, depending on the row you process. It will give you a result between the cross and the square.
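If you are working in MATLAB with the Image Processing Toolbox, here is a minimal sketch comparing the two simplest options, assuming bw stands in for your small binary region:

    % Minimal sketch (assumes the Image Processing Toolbox); bw stands in for the small binary region.
    bw = false(16);                      % toy object in place of the real MR region
    bw(6:11, 6:11) = true;

    seSquare = strel('square', 3);       % 8-neighborhood
    seCross  = strel('diamond', 1);      % the "disk" at this scale, i.e. a 4-neighborhood cross

    erodedSquare = imerode(bw, seSquare);
    erodedCross  = imerode(bw, seCross);

    % See how much of the small object survives each erosion.
    fprintf('original: %d px, square: %d px, cross: %d px\n', ...
            nnz(bw), nnz(erodedSquare), nnz(erodedCross));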
By the way, what do you want to achieve? Maybe there is another solution than erosion.
I have seen many tutorials where people blend two images, placed on top of each other, very nicely in Photoshop. For example, here are two images placed on top of each other:
Then in Photoshop after some work, the edges (around the smaller image) will be erased and two images are nicely mixed.
For example, this is a possible end result:
As can be seen, there is no visible edge and the two images are blended very nicely, without blurring.
Can someone point me to an article or post that shows the math behind this? If there is MATLAB code that can do it, that would be even better. Or at least tell me the correct term for it, so I can search the topic on Google.
Straight alpha blending alone is not sufficient, as it will perform a uniform mixing of the two images.
To achieve nice-looking results, you will need to define an alpha map, i.e. an image of the same size where you adjust the degree of transparency depending on the image that should dominate.
To obtain the mask, you can draw it by hand, for example as a filled outline, as a path or a polygon. Then you have to strongly blur this mask to get a smooth blend.
It looks very difficult (if not impossible) to automate this, as no software can guess what you want to enhance.
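A minimal MATLAB sketch of this idea, assuming same-sized RGB images and using a filled circle as a stand-in for the hand-drawn mask (file names and the blur sigma are placeholders):

    % Sketch: blend image B into image A through a hand-defined, strongly blurred alpha map.
    A = im2double(imread('background.jpg'));   % placeholder file names, same size, RGB
    B = im2double(imread('foreground.jpg'));

    % Stand-in for the hand-drawn mask: a filled circle where B should dominate.
    [h, w, ~] = size(A);
    [X, Y] = meshgrid(1:w, 1:h);
    mask = double(hypot(X - w/2, Y - h/2) < min(h, w)/4);

    % A strong blur turns the hard outline into a smooth alpha map in [0, 1].
    alphaMap = imgaussfilt(mask, 25);          % sigma = 25 is just a value to tune by eye
    alphaMap = repmat(alphaMap, [1 1 3]);      % one weight per colour channel

    blended = alphaMap .* B + (1 - alphaMap) .* A;
    imshow(blended);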
The term you are looking for is alpha blending.
https://en.wikipedia.org/wiki/Alpha_compositing#Alpha_blending
The math behind it boils down to alpha-weighted sums.
Matlab provides the function imfuse to achieve this:
https://de.mathworks.com/help/images/ref/imfuse.html
Edit: (as it still seems to be unclear)
Let's say you have two images A and B which you want to blend.
You put one image over the other, so for each coordinate you have two RGB tuples.
Now you need to define the weight of both images: will you only see the colour of image A, only the colour of image B, or some ratio in between?
This is done by alpha values.
So all you need is a 2d function that defines the mixing ratio for each pixel.
Usually you have values between 0 and 1 where 0 shows one image, 1 shows the other image, 0.5 will mix them both equally and so on...
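For illustration, a small MATLAB sketch of that weighted sum with a constant alpha (file names are placeholders; in your case alpha can vary per pixel):

    % A and B: same-sized images; alpha: mixing ratio in [0, 1].
    A = im2double(imread('A.png'));   % placeholder file names
    B = im2double(imread('B.png'));

    alpha = 0.5;                      % 1 -> only A, 0 -> only B, 0.5 -> equal mix
    C = alpha .* A + (1 - alpha) .* B;
    imshow(C);

    % The built-in shortcut for a 50/50 mix:
    % C = imfuse(A, B, 'blend');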
Just read the article I have linked. It gives you a clear mathematical definition. I can't provide more detail than that.
If you have problems understanding that I urge you to read a book on image processing fundamentals.
Given are two monochromatic images of the same size. Both are prealigned/anchored to one common point. Some points of the original image have moved to a new position in the new image, but not in a linear fashion.
Below you see a picture of an overlay of the original (red) and transformed image (green). What I am looking for now is a measure of "how much did the "individual" points shift".
At first I thought of a simple average correlation of the whole matrix or some kind of phase correlation, but I was wondering whether there is a better way of doing so.
I already found that link, but it didn't help that much. Currently I implement this in Matlab, but this shouldn't be the point I guess.
Update: For clarity, I have hundreds of these image pairs and I want to compare how similar each pair is. It doesn't have to be the fanciest algorithm; it should rather be easy to implement and yield a good estimate of similarity.
An unorthodox approach uses RASL to align an image pair. A Python implementation is here: https://github.com/welch/rasl, and it also provides a link to the RASL authors' original MATLAB implementation.
You can give RASL a pair of related images, and it will solve for the transformation (scaling, rotation, translation, you choose) that best overlays the pixels in the images. A transformation parameter vector is found for each image, and the difference in parameters tells how "far apart" they are (in terms of transform parameters).
This is not the intended use of RASL, which is designed to align large collections of related images while being indifferent to changes in alignment and illumination. But I just tried it out on a pair of jittered images and it worked quickly and well.
I may add a shell command that explicitly does this (I'm the author of the python implementation) if I receive encouragement :) (today, you'd need to write a few lines of python to load your images and return the resulting alignment difference).
You can try using Optical Flow: http://www.mathworks.com/discovery/optical-flow.html.
It is usually used to measure the movement of objects from frame T to frame T+1, but you can also use it in your case. You would get a map that tells you the "offset" by which each point in Image1 moved to reach its position in Image2.
Then, if you want a metric that gives you a "distance" between the images, you can for example average the per-pixel offsets (flow magnitudes) or something similar.
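A rough MATLAB sketch of that route, assuming the Computer Vision Toolbox (opticalFlowLK is just one of several available estimators, and the noise threshold is a value to tune):

    % im1, im2: the two monochromatic images, loaded elsewhere as grayscale arrays.
    flowEstimator = opticalFlowLK('NoiseThreshold', 0.009);
    estimateFlow(flowEstimator, im1);          % prime the estimator with the first image
    flow = estimateFlow(flowEstimator, im2);   % flow from im1 to im2

    % flow.Vx / flow.Vy hold the per-pixel offsets; one crude similarity score:
    meanShift = mean(flow.Magnitude(:));
    fprintf('mean per-pixel shift: %.3f pixels\n', meanShift);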
Which method is commonly used to evaluate the remaining 'boundary' pixels after an initial segmentation (based on thresholds)?
I thought about classification based on a standard deviation from the threshold values, but I don't know if that is common practice in image analysis. This would be a region-growing method, but based on the answer to this question (http://www.mathworks.com/matlabcentral/answers/53351-how-can-i-segment-a-color-image-with-region-growing) it is not sensible to use the region-growing algorithm. Someone suggested imdilate, but that method seems arbitrary: it is useful when enhancing images for aesthetic purposes or to improve visibility. For my problem the assignment of the pixels has to be correct, because I have to do measurements on these extracted objects/features, and a few pixels make a huge difference.
What I was looking for:
A way to collect the boundary pixels of the BW image from the first segmentation (which I found: http://nl.mathworks.com/help/images/ref/bwboundaries.html)
A decision rule (nearest neighbor?) to classify those boundary pixels. It would be helpful if there were multiple methods to do this, because that makes a relative accuracy check of the classification possible.
I would really appreciate the input/advice from someone with more experience in this area to point me in the right direction (functions, tutorials, etc.).
Thank you!
What will work for you depends very much on the images you have. There is no one-size-fits-all algorithm.
First, you need to answer the question: Given a pixel close to a segmented feature, what would make you believe that this pixel belongs to the feature? Also: what is "close"?
The answer to the second question determines your search area. Here, imdilate is useful to identify candidate pixels (i.e. you dilate your feature, subtract the feature, and you are left with a ring of candidate pixels around each feature). If you test on all pixels, the risk is not so much that it could take forever, but that for some images, your region growing mechanism expands to the entire image.
The answer to the first question determines what algorithm you'll use. Do you look for a gradient, i.e. "if pixel p is closer in intensity to the adjacent feature than to most of its neighbors, then I take it"? Do you look for texture? Do you look for a local threshold (hysteresis thresholding)? The answer, again, depends very much on the images you are segmenting. Make sure you test on a large set of images, because what may look good on one image may totally fail on a different one.
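As a hedged sketch of the candidate-ring idea combined with one possible intensity-based decision rule (the file name and the first-pass threshold are placeholders; the rule itself is only an example, not the answer for your images):

    % img: grayscale image; bw: binary mask from the first segmentation.
    img = im2double(imread('slice.png'));   % placeholder file name
    bw  = img > 0.5;                        % placeholder first-pass threshold

    % Candidate pixels: a one-pixel ring around each feature.
    ring = imdilate(bw, strel('disk', 1)) & ~bw;

    % Example rule: accept a candidate if its intensity is closer to the mean
    % intensity of the features than to the mean intensity of the background.
    featureMean    = mean(img(bw));
    backgroundMean = mean(img(~bw & ~ring));
    candidates     = find(ring);
    accept = abs(img(candidates) - featureMean) < abs(img(candidates) - backgroundMean);

    bwRefined = bw;
    bwRefined(candidates(accept)) = true;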
I have two images of yeast plates:
Permissive:
Xgal:
The two images should be in the same spot and roughly the same size. I am trying to use one of the images to generate a grid and then apply that grid to the other image. The grid is made by looking at the colonies on the permissive plate; the plate should have 1536 colonies on it. The problem is that the camera that was used to take the images moves a bit up and down, and the images can also be shifted slightly because the plate is not in exactly the same place each time.
This means that when I use the permissive plate to generate the grid, the grid is shifted on the xgal plate. Does anyone know a way in which I can compensate for this? I am using Perl with the GD module. Any advice would be greatly appreciated. Thank you.
I've done this in other languages in relation to motion analysis. You can mathematically determine the shift in position between two images using cross correlation.
Fortunately, you may not need to actually do the maths :) You could use something like ImageMagick, which provides a lot of image processing functions for you and is Perl-scriptable. Scripts already exist for tasks very much like yours.
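If you ever want to do the cross-correlation yourself, here is a rough sketch of the idea in MATLAB (normxcorr2 from the Image Processing Toolbox; the file names, patch position and RGB assumption are placeholders), just to illustrate the maths that such scripts do for you:

    % Illustration of the cross-correlation maths in MATLAB (Image Processing Toolbox).
    fixed  = rgb2gray(imread('permissive.png'));   % placeholder file names, assumed RGB photos
    moving = rgb2gray(imread('xgal.png'));

    % Cut a distinctive patch out of the second image and find it in the first.
    r0 = 201; c0 = 201;                            % patch position: placeholder values
    template = moving(r0:r0+199, c0:c0+199);

    xc = normxcorr2(template, fixed);
    [~, peak] = max(xc(:));
    [ypeak, xpeak] = ind2sub(size(xc), peak);

    % Shift of the second plate relative to the first:
    yshift = (ypeak - size(template, 1) + 1) - r0;
    xshift = (xpeak - size(template, 2) + 1) - c0;
    fprintf('shift: %d px down, %d px right\n', yshift, xshift);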
If you have only a few pairs of images and, as in the examples, they are very different in appearance, then an alternative to Tim Barrass's method would be:
Open the first image in gimp, find the co-ordinates of a landmark feature
Open the second image in gimp, find the co-ordinates of the same landmark
Calculate the offset
Shift the second image using ImageMagick's convert command with the affine option. Set the parameters sx=sy=1.0, rx=ry=0.0, tx= negative horizontal offset, ty= negative vertical offset
I need to compare two or more images to calculate how much a point shifted in the x and y direction. How do I go about doing this in MATLAB?
What you are looking for is an "Optical Flow" algorithm. There are many around, some faster but less accurate, some slower and more accurate.
Click here to find a MATLAB optical flow implementation (Lucas Kanade).
Gilad's suggestion of a Lucas-Kanade tracker/optical flow calculator is really good, and is what I would use. It does, however, have the drawback of not working very well if the scene has changed too much.
If the scenes are indeed very different (say you moved and rotated the camera quite a lot) you would have to find your corresponding points in some other way. One example could be to use a SIFT descriptor to find image features in the two images and then determine which points correspond to each other. If you know the camera matrices of the two images then it becomes quite easy.
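A rough MATLAB sketch of that feature-matching route, using SURF from the Computer Vision Toolbox as a stand-in for SIFT (the images are assumed to be grayscale and already loaded):

    % im1, im2: the two grayscale images, loaded elsewhere.
    pts1 = detectSURFFeatures(im1);
    pts2 = detectSURFFeatures(im2);

    [feat1, validPts1] = extractFeatures(im1, pts1);
    [feat2, validPts2] = extractFeatures(im2, pts2);

    pairs    = matchFeatures(feat1, feat2);
    matched1 = validPts1(pairs(:, 1));
    matched2 = validPts2(pairs(:, 2));

    % Per-point shift in x and y (columns: [dx dy]); the median is a robust summary.
    shifts      = matched2.Location - matched1.Location;
    medianShift = median(shifts, 1);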