I followed the 2-D Watershed example in Mathworks.com to separate the connected objects, like the image below:
The code is summarized as:
bw = imread('some_binary_image.tif');
D = -bwdist(~bw);   % negative distance transform of the foreground
D(~bw) = -Inf;      % make the background its own catchment basin
L = watershed(D);
The result is:
The particle in the center has been separated into two. Are there any ways to avoid the over-segmentation here?
Thanks, lennon310. Chessboard does work well for most of my images, but there are still some cases where it doesn't. For example, the following binary image:
Using chessboard will result in:
As I have hundreds of images, it seems difficult to find one combination of parameters that works for all of them. I am wondering if I need to combine the good results obtained from chessboard, cityblock, etc...
Use max(abs(x1-x2), abs(y1-y2)) as the distance metric (chessboard), and use an eight-connected neighborhood in the watershed function:
bw = im2bw(I);
D = -bwdist(~bw, 'chessboard');
imagesc(D)
D(~bw) = -Inf;
L = watershed(D, 8);
figure, imagesc(L)
Result:
I have been dealing with the same issue for a while. For me, the solution was to use a marker-based watershed method instead. Look for the watershed examples on Steve's MATLAB blog: http://blogs.mathworks.com/steve/
The method given in this post worked best for me: http://blogs.mathworks.com/steve/2013/11/19/watershed-transform-question-from-tech-support/
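In case the links go stale: the gist of that post is a marker-based watershed built from the distance transform. A rough sketch, assuming a binary image bw; the extended-minima depth (2 here) is a knob you will likely need to tune per data set:
D = -bwdist(~bw);            % distance transform, as in the basic example
mask = imextendedmin(D, 2);  % markers: keep only minima deeper than 2
D2 = imimposemin(D, mask);   % suppress every other (spurious) minimum
Ld = watershed(D2);
bw2 = bw;
bw2(Ld == 0) = 0;            % watershed ridge lines cut the touching objects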
Now, in an ideal world, we would be able to segment everything properly using a single method. But watershed will over- or under-segment some particles no matter which method you use (unless you supply the markers manually). So, currently I am using a semi-automatic segmentation method; i.e., use watershed to segment the image as well as possible, then take that image into MS Paint and edit it manually to correct whatever under- or over-segmentation remains.
Region growing seems to have been used by some people in the past, but my image processing knowledge is limited, so I can't help you out with that. It would be great if anyone could post something about how to use region growing to segment such an image.
Hope this helps.
I am a newbie here, so please excuse me for asking a straightforward question without having the right background to frame it.
For my question above, can anyone help me create various shapes in MATLAB?
I know how to make a simple triangle or rectangle in MATLAB.
What I am looking for is how to create animal patterns in MATLAB. All I need is the boundary (outer outline),
like from a bird or butterfly, something like the picture below.
Butterfly wing:
Can anyone give me tips or links to help me?
And yes, I also do not have any code yet. I am totally lost on how to make the pattern in MATLAB.
My real purpose is to add a mesh pattern to the wings. I have the code for the mesh; all I need is code to make the wing shape.
If you already have the image created by another program, you can import it into MATLAB using imread. If you then want a binary boundary, you can use im2bw.
threshold = 0.7; % you can play with this to get what you want
binary_img = im2bw(imread('PATH\TO\IMAGE.jpg'), threshold);
In MATLAB versions starting with R2016a there is another function, imbinarize, that you might want to have a look at.
As for creating patterns from scratch, as already mentioned in the comments, MATLAB should not be your first choice, unless, of course, you have a well-defined mathematical equation, or a problem whose solution becomes the boundary. For that you can look into fimplicit, fplot, etc.
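If you do go the equation route, a tiny illustration: the classic parametric "butterfly curve" (Temple Fay's), drawn with fplot. This only demonstrates the technique; your own wing outline would need its own equation or a traced image boundary, and the parametric form of fplot requires R2016a or newer.
f = @(t) exp(cos(t)) - 2*cos(4*t) - sin(t/12).^5;   % Fay's butterfly curve
fplot(@(t) sin(t).*f(t), @(t) cos(t).*f(t), [0 12*pi]);
axis equal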
Given are two monochromatic images of the same size. Both are pre-aligned/anchored to one common point. Some points of the original image have moved to new positions in the new image, but not in a linear fashion.
Below you see a picture of an overlay of the original (red) and transformed (green) image. What I am looking for now is a measure of how much the individual points shifted.
At first I thought of a simple average correlation of the whole matrix, or some kind of phase correlation, but I was wondering whether there is a better way of doing this.
I already found that link, but it didn't help that much. Currently I am implementing this in MATLAB, but that shouldn't matter, I guess.
Update, for clarity: I have hundreds of these image pairs, and I want to compare how similar each pair is. It doesn't have to be the fanciest algorithm; it should rather be easy to implement and yield a good estimate of similarity.
An unorthodox approach uses RASL to align an image pair. A Python implementation is here: https://github.com/welch/rasl, and it also provides a link to the RASL authors' original MATLAB implementation.
You can give RASL a pair of related images, and it will solve for the transformation (scaling, rotation, translation; you choose) that best overlays the pixels in the images. A transformation parameter vector is found for each image, and the difference in parameters tells you how "far apart" the images are (in terms of transform parameters).
This is not the intended use of RASL, which is designed to align large collections of related images while being robust to changes in alignment and illumination. But I just tried it on a pair of jittered images and it worked quickly and well.
I may add a shell command that explicitly does this (I'm the author of the Python implementation) if I receive encouragement :) (Today, you'd need to write a few lines of Python to load your images and return the resulting alignment difference.)
You can try using Optical Flow. http://www.mathworks.com/discovery/optical-flow.html .
It is usually used to measure the movement of objects from frame T to frame T+1, but you can also use it in your case. You would get a map that tells you the offset by which each point in Image1 moved to reach Image2.
Then, if you want a metric that gives you a "distance" between the images, you can perhaps average the offset magnitudes or something similar.
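A minimal sketch of that idea, assuming the Computer Vision System Toolbox (R2015a or newer) and two grayscale images im1 and im2 of the same size:
opticFlow = opticalFlowFarneback;      % Horn-Schunck etc. are also available
estimateFlow(opticFlow, im1);          % prime the estimator with the first image
flow = estimateFlow(opticFlow, im2);   % flow field from im1 to im2
meanShift = mean(flow.Magnitude(:));   % one scalar "distance" per image pair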
I have several binary images which represent a partial map of an area (~4m radius) and were taken ~0.2m apart, for example:
(Sorry for the different axis limits.)
If you look closely, you'll see that the first image is about 20 cm to the right.
I want to be able to create a map of the area from several pictures like this.
I've tried several methods, such as MATLAB's register, but couldn't find a good algorithm for this purpose. Any ideas on how to approach this?
Thanks in advance!
Two possible routes:
Use imregister. This does registration based on image intensity. You will probably want a rigid transform.
However, this will require your data to be an image (matrix), which it doesn't look like it currently is.
Alternatively, you can use control points. These are common (labelled) points in each image which provide a reference to determine the transform.
MATLAB has a built-in tool for picking control points, cpselect. However, again, this requires image data. You may be better off writing your own function to do this, or just selecting control points manually.
Once you have control points, you can determine the transform between them using fitgeotrans.
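A minimal sketch of the control-point route, assuming two same-size binary maps fixed and moving; the coordinates below are placeholders for points you would pick with cpselect or by hand:
movingPoints = [10 20; 40 80; 90 30];   % placeholder coordinates
fixedPoints  = [15 20; 45 80; 95 30];   % same features, shifted ~5 px
tform = fitgeotrans(movingPoints, fixedPoints, 'nonreflectivesimilarity');
registered = imwarp(moving, tform, 'OutputView', imref2d(size(fixed)));
map = fixed | registered;               % fuse the two aligned maps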
Has anyone ever used the vision.PeopleDetector object from the Computer Vision System Toolbox in MATLAB?
I've installed it and tried to apply to images I have.
Although it detects people in the training image, it detects nothing in real photos. Either it doesn't detect people at all, or it detects people in parts of the image where they are not present.
Could anyone share the experience of using this function?
Thanks a lot!
Here is a sample image:
The vision.PeopleDetector object does indeed detect upright standing people in images. However, like most computer vision algorithms it is not 100% accurate. Can you post a sample image where it fails?
There are several things you can try to improve performance.
Try changing the ClassificationModel parameter to 'UprightPeople_96x48'. There are two models that come with the object, trained on different data sets.
How big (in pixels) are the people in your image? If you use the default 'UprightPeople_128x64' model, then you will not be able to detect a person smaller than 128x64 pixels. Similarly, for the 'UprightPeople_96x48' model the smallest size person you can detect is 96x48. If the people in your image are smaller than that, you can up-sample the image using imresize.
Try reducing the ClassificationThreshold parameter to get more detections.
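For example, a sketch combining those knobs, assuming your image is already in im; the threshold and scale factor are illustrative, not tuned:
detector = vision.PeopleDetector( ...
    'ClassificationModel',     'UprightPeople_96x48', ...
    'ClassificationThreshold', 0.5);   % lower threshold => more detections
imLarge = imresize(im, 2);             % up-sample if the people are small
bboxes = step(detector, imLarge);
out = insertObjectAnnotation(imLarge, 'rectangle', bboxes, 'person');
imshow(out)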
Edit:
Some thoughts on your particular image. My guess would be that the people detector is not working well here because it was not trained on this kind of image. The training sets for both models consist of natural images of pedestrians. Ironically, the fact that your image has a perfectly clean background may be throwing the detector off.
If this image is typical of what you have to deal with, then I have a few suggestions. One possibility is to use simple thresholding to segment out the people. The other is to use vision.CascadeObjectDetector to detect the faces or the upper bodies, which happens to work perfectly on this image:
im = imread('postures.jpg');
detector = vision.CascadeObjectDetector('ClassificationModel', 'UpperBody');
bboxes = step(detector, im);
im2 = insertObjectAnnotation(im, 'rectangle', bboxes, 'person', 'Color', 'red');
imshow(im2);
I'm trying to write code that helps me in my biology work.
The concept is to analyze a video file of contracting cells in a tissue:
Example 1
Example 2: youtube.com/watch?v=uG_WOdGw6Rk
And plot out the following:
Count of beats per minute.
Strength of beat.
Regularity of beating.
So I wrote MATLAB code that loops through a video, compares each frame against the one that follows it, checks whether there were any changes between frames, and plots these changes as a curve.
Example of my code's results:
Core of the current code I wrote:
amp = [];                                   % differences between frames
time = [];                                  % matching time stamps (s)
firstFrame = rgb2gray(read(vidObj, 1));
vid = im2bw(firstFrame, graythresh(firstFrame)); % first frame as the reference
for i = 2:totalframes
    compared = read(vidObj, i);
    ref = rgb2gray(compared);               % convert to gray
    level = graythresh(ref);                % calculate threshold (Otsu)
    compared = im2bw(ref, level);           % convert to binary
    differ = sum(sum(imabsdiff(vid, compared))); % sum of difference between 2 frames
    if (differ ~= 0) && ~any(amp == differ) % 0 = no change happened, so don't record that
        amp(end+1) = differ;                % save difference to array amp
        time(end+1) = i/framerate;          % save to time array in seconds, so both can be filtered later
        vid = compared;                     % current frame becomes the reference for the next comparison
    end
end
figure, plot(time, amp);                    % time belongs on the x-axis
=====================
So that's my code, but is there a way I can improve it to get better results?
I get the feeling that imabsdiff is not exactly what I should use, because my video contains a lot of noise that strongly affects my results, and I think all my amp data is actually bogus!
Also, I can currently only extract the beating rate from this, by counting peaks. How can I improve my code to get all the required data out of it?
Thanks, I really appreciate your help. This is a small portion of the code; if you need more info, please let me know.
You say you are trying to write a "simple code", but this is not really a simple problem. If you want to measure the motion accurately, you should use an optical flow algorithm or look at the deformation field from a registration algorithm.
EDIT: As Matt is saying, and as we see from your curve, your method is suitable for extracting the number of beats and the regularity. To accurately find the strength of the beats, however, you need to calculate the movement of the cells (more movement = stronger beat). Unfortunately, this is not straightforward, and that is why I gave you links to two algorithms that can calculate the movement for you.
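For instance, a minimal sketch of the registration route, assuming a recent Image Processing Toolbox (imregdemons, R2014b or newer) and two consecutive grayscale frames f1 and f2:
[D, ~] = imregdemons(f2, f1, 100, 'AccumulatedFieldSmoothing', 1.3);
mag = hypot(D(:,:,1), D(:,:,2));   % per-pixel displacement magnitude
strength = mean(mag(:));           % one "movement" number per frame pair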
A few fairly simple things to try that might help:
I would look in detail at what your thresholding is doing, and whether that's really what you want to do. I don't know what graythresh does exactly, but it's possible it's lumping different features that you would want to distinguish into the same pixel values. Have you tried plotting the differences between images without thresholding? Or you could threshold into multiple classes, rather than just black and white.
If noise is the main problem, you could try smoothing the images before taking the difference, so that differences due to noise are evened out but differences in large features, caused by motion, remain (see the sketch after this list).
You could try edge-detecting your images before taking the difference.
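To make the smoothing suggestion concrete, a minimal sketch using fspecial/imfilter, where gray1 and gray2 are consecutive grayscale frames; the kernel size and sigma are illustrative:
h = fspecial('gaussian', [7 7], 2);    % illustrative kernel size and sigma
g1 = imfilter(gray1, h, 'replicate');
g2 = imfilter(gray2, h, 'replicate');
differ = sum(sum(imabsdiff(g1, g2)));  % noise evens out, large motion remains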
As a previous answerer mentioned, you could also look into motion-tracking and registration algorithms, which would estimate the actual motion between each image, rather than just telling you whether the images are different or not. I think this is a decent summary on Wikipedia: http://en.wikipedia.org/wiki/Video_tracking. But they can be rather complicated.
I think if all you need is to find the time and period of contractions, though, then you wouldn't necessarily need detailed motion tracking or deformable registration between images. All you need to know is when they change significantly. (The "strength" of a contraction is another matter; to define that rigorously you probably would need to know the actual motion going on.)
What are the structures we see in the video? For example, what is the big dark object in the lower part of the image? This object would be relatively easy to track, but would data from this object be relevant for cell contraction?
Is this image from a light microscope? At what magnification? What is the scale?
From the video it looks like there are several motions and regions of motion. So should you focus on a smaller or larger area to get your measurements? Per-cell contraction or region contraction? From experience I know that changing what you do at the microscope might be much better than complex image processing ;)
I had success with Gunn and Nixon's dual snake for a similar problem:
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.64.6831
I placed the first approximation in the first frame by hand and used the segmentation result as the starting curve for the next frame, and so on. My implementation of this is from 2000 and I only have it on paper, but if you find Gunn and Nixon's paper interesting I can probably find my code and scan it.
@Matt suggested smoothing and edge detection to improve your results. This is good advice. You can combine smoothing, thresholding and edge detection in one function call, the Canny edge detector. Then you can dilate the edges to get greater overlap between frames. Little overlap will probably mean a big movement between frames. You can use this the same way as before to find the beat. You can then make a second pass and add up all the dilated edge images belonging to one beat. This should give you an idea of the area traced out by the cells as they move through a contraction. Maybe this can be used as a useful measure for the contraction of a large cluster of cells.
I don't have access to Matlab and the Image Processing Toolbox now, so I can't give you tested code. Here are some hints: http://www.mathworks.se/help/toolbox/images/ref/edge.html , http://www.mathworks.se/help/toolbox/images/ref/imdilate.html and http://www.mathworks.se/help/toolbox/images/ref/imadd.html.
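For what it's worth, an untested sketch of what those calls might look like, with g1 and g2 being two consecutive grayscale frames:
e1 = imdilate(edge(g1, 'canny'), strel('disk', 2));  % smooth + threshold + edges, then dilate
e2 = imdilate(edge(g2, 'canny'), strel('disk', 2));
overlap = sum(e1(:) & e2(:));            % little overlap suggests a big movement
traced = imadd(double(e1), double(e2));  % second pass: accumulate over all frames in a beat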