Get length of irregular object in a BW or RGB picture and draw it into picture for control - matlab

I face a well-known problem which I am not able to solve.
I have the picture of a root (http://cl.ly/image/2W3C0a3X0a3Y). From this picture, I would like to know the length of the longest root (first problem) and the proportions of big roots and small roots in % (using the diameter as the criterion, which is the second problem). It is important that I can distinguish between fine and big roots, since this is more or less the aim of the study (their proportions are compared between different species). Lastly, I would like to draw a line along the measured longest root to check whether everything was measured correctly.
For the length of the longest root, I tried to use regionprops(), which is not optimal since, if I got this right, it assumes an ellipse as the basic shape.
However, the things I could really need support with are in fact:
How can I get the length of the longest root (start point should be the place where the longest root leaves the main root with the biggest diameter)?
Is it possible to distinguish between fine and big roots, and can I get their proportions? (The coin, the round object in the image, is the size reference.)
Can I draw properties like length and diameter into the picture?
I found out how to draw the centroids of ellipses and such, but I just don't understand how to do it with the proposed values.
I hope this is not a duplicate and that this question does not already exist somewhere else; if it does, I am sorry for that.
I would like to thank the people on this forum, you do a great job and everybody with a question can be lucky to have you here.
Thank you for the help,
Phillip
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
EDIT
I followed the proposed solution; the code so far is as follows:
clc
clear all
close all
img = imread('root_test.jpg');
% convert to L*a*b* and separate the channels
labTransformation = makecform('srgb2lab');
labI = applycform(img, labTransformation);
l = labI(:,:,1);
a = labI(:,:,2);
b = labI(:,:,3);
% binarize the L channel (Otsu threshold), invert so the roots are foreground,
% and remove small components
level = graythresh(l);
bw = im2bw(l, level);
bw = ~bw;
bw = bwareaopen(bw, 200);
% dilate to join minor features, then fill holes
se = strel('disk', 5);
bw2 = imdilate(bw, se);
bw2 = imfill(bw2, 'holes');
% thin and skeletonize
bw3 = bwmorph(bw2, 'thin', 5);
I4 = bwmorph(bw3, 'skel', 200);
%se = strel('disk', 10); % this step is only for better visibility of the line
%bw4 = imdilate(I4, se);
D = bwdist(I4);
This gets me to the skeleton picture - which is great progress, thank you for that!
I am a little bit lost at the point where I have to calculate the distances. How can I tell MATLAB to calculate the distance from all the small roots to the main root (and how do I define the main root)? For this I have to work with the diameters first, right?
Could you maybe give me one or two more hints on how to accomplish the distance/length problem?
Thank you for the great help till here!
Phillip
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
EDIT2
OK, I managed to separate the single root parts. This is not what your edit proposed, but it is at least something. I have the summed length of all roots as well - not too bad. But even with the (I assume) super easy step-by-step explanation, I have never built such a tree. I stopped at the point at which I have to select an invisible point - the rest is too advanced for me.
I don't want to waste more of your time, and I am very thankful for the help you have already given me. But I suppose I am too MATLAB-stupid to accomplish this :)
Thanks! Keep going like this, it is really helpful.
Phillip

As a pre-starting point, I don't see the need for a resolution of 3439x2439 for that image; it doesn't seem to add anything important to the problem, so I simply worked with a resized version of 800x567 (although there should be (nearly) no problem applying this answer to the larger version). Also, you mention regionprops, but I didn't see any description of how you got your binary image, so let us start from the beginning.
I considered your image in the LAB colorspace, then binarized the L channel by Otsu, applied a dilation on this result considering the foreground as black (the same could be done by applying an erosion instead), and finally removed small components. The L channel gives a better representation of your image than the more direct luma formula, leading to an easier segmentation. The dilation (or erosion) is done to join minor features, since there are quite a few ramifications that appear to be irrelevant. This produced the following image:
At this point we could attempt using the distance transform combined with the grey-tone anchored skeleton (see Soille's book on morphology, and/or "Order Independent Homotopic Thinning for Binary and Grey Tone Anchored Skeletons" by Ranwez and Soille). But, since the latter is not easily available, I will consider something simpler here. If we perform hole filling in the image above followed by thinning and pruning, we get a rough sketch of the connections between the many roots. The following image shows the result of this step composed with the original image (and dilated for better visualization):
As expected, the thinned image takes "shortcuts" due to the hole filling. But, if such step wasn't performed, then we would end up with cycles in this image -- something I want to avoid here. Nevertheless, it seems to provide a decent approximation to the size of the actual roots.
Now we need to calculate the sizes of the branches (or roots). The first thing is deciding where the main root is. This can be done by using the above binary image before the dilation and considering the distance transform, but that will not be done here -- my interest is only to show the feasibility of calculating those lengths. Supposing you know where your main root is, we need to find a path from a given root to it, and then the size of this path is the size of this root. Observe that if we eliminate the branch points from the thinned image, we get a nice set of connected components:
Assuming each end point is the end of a root, then the size of a root is the shortest path to the main root, and the path is composed of a set of connected components in the image just shown. Now you can find the largest one, the second largest, and all the others that can be calculated by this process.
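As a minimal sketch of this step (assuming I4 is the thinned/pruned skeleton from your EDIT; the 3x3 dilation of the branch points is just to make sure neighbouring segments are really disconnected):
branchPts = bwmorph(I4, 'branchpoints');
segments = I4 & ~imdilate(branchPts, ones(3)); % drop the (dilated) branch points
L = bwlabel(segments);                         % one integer label per branch
numBranches = max(L(:));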
EDIT:
In order to make the last step clear, first let us label all the branches found (open the image in a new tab for better visualization):
Now, the "digital" length of each branch is simply the amount of pixels in the component. You can later translate this value to a "real-world" length by considering the object added to the image. Note that at this point there is no need to depend on Image Processing algorithms at all, we can construct a tree from this representation and work there. The tree is built in the following manner: 1) find the branching point in the skeleton that belongs to the main root (this is the "invisible point" between the labels 15, 16, and 17 in the above image); 2) create an edge from that point to each branch connected to it; 3) assign a weight to the edge according to the amount of pixels needed to travel till the start of the other branch; 4) repeat with the new starting branches. For instance, at the initial point, it takes 0 pixels to reach the beginning of the branches 15, 16, and 17. Then, to reach from the beginning of the branch 15 till its end, it takes the size (number of pixels) of the branch 15. At this point we have nothing else to visit in this path, so we create a leaf node. The same process is repeated for all the other branches. For instance, here is the complete tree for this labeling (the dual representation of the following tree is much more space-efficient):
Now you find the largest weighted path -- which corresponds to the size of the largest root -- and so on.
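To illustrate the length calculation itself, a minimal sketch, assuming L is the labelled branch image from the sketch above and that you measure the coin manually (coinDiameter_px and coinDiameter_mm are assumed reference values, not something computed before):
stats = regionprops(L, 'Area');                 % pixel count of each branch
branchLen_px = [stats.Area];                    % "digital" length per branch
mmPerPixel = coinDiameter_mm / coinDiameter_px; % scale taken from the coin
branchLen_mm = branchLen_px * mmPerPixel;
[longest_mm, idx] = max(branchLen_mm);          % longest single branch
The length of the longest root is then the sum of branchLen_mm over the branches on the heaviest path of the tree, not just the single longest branch.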

Related

Interpolation between two images with different pixelsize

For my application, I want to interpolate between two images (CT to PET).
Therefore I map between them like this:
[X,Y,Z] = ndgrid(linspace(1,size(imagedata_ct,1),size_pet(1)), ...
                 linspace(1,size(imagedata_ct,2),size_pet(2)), ...
                 linspace(1,size(imagedata_ct,3),size_pet(3)));
new_imageData_CT=interp3(imagedata_ct,X,Y,Z,'nearest',-1024);
The size of my new image new_imageData_CT matches the PET image. The problem is that the data in my new image are not correctly scaled, so the image is compressed. I think the reason is that the voxel size of the two images is different and is not taken into account in the interpolation. For example:
CT image size : 512x512x1027
CT voxel size[mm] : 1.5x1.5x0.6
PET image size : 192x126x128
PET voxel size[mm] : 2.6x2.6x3.12
So how could I take care about the voxel size regarding to the interpolation?
You need to perform a matching in the patient coordinate system, but there is more to consider than just the resolution and the voxel size. You need to synchronize the positions (and maybe the orientations also, but this is unlikely) of the two volumes.
You may find this thread helpful to find out which DICOM Tags describe the volume and how to calculate transformation matrices to use for transforming between the patient (x, y, z in millimeters) and volume (x, y, z in column, row, slice number).
You have to make sure that the volume positions are comparable, as the positions of the slices in the CT and PET do not necessarily refer to the same origin. The easy way to do this is to compare the DICOM attribute Frame Of Reference UID (0020,0052) of the CT and PET slices. For all slices that share the same Frame Of Reference UID, the position of the slice in the DICOM header refers to the same origin.
If the datasets do not contain this tag, it is going to be much more difficult, unless you just put it as an assumption. There are methods to deduce the matching slices of two different volumes from the contents of the pixel data referred to as "registration" but this is a science of its own. See the link from Hugues Fontenelle.
BTW: In your example, you are not going to find a matching voxel in both volumes for each position, as the volumes cover different physical extents. E.g. for the x-direction:
CT: 512 * 1.5 = 768 millimeters
PET: 192 * 2.6 = 499 millimeters
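Just to illustrate the voxel-size part (this ignores origin and orientation, which, as said above, you must verify via the DICOM tags, so the shared-origin assumption here is mine; variable names follow the question), a sketch that resamples the CT onto the PET grid in millimetre coordinates could look like this:
ctVox  = [1.5 1.5 0.6];   % CT voxel size in mm
petVox = [2.6 2.6 3.12];  % PET voxel size in mm
% physical (mm) coordinates of the CT voxel centres
[Xc,Yc,Zc] = ndgrid((0:size(imagedata_ct,1)-1)*ctVox(1), ...
                    (0:size(imagedata_ct,2)-1)*ctVox(2), ...
                    (0:size(imagedata_ct,3)-1)*ctVox(3));
% physical (mm) coordinates of the PET voxel centres (the query points)
[Xq,Yq,Zq] = ndgrid((0:size_pet(1)-1)*petVox(1), ...
                    (0:size_pet(2)-1)*petVox(2), ...
                    (0:size_pet(3)-1)*petVox(3));
% resample the CT onto the PET grid; points outside the CT volume get -1024 (air)
new_imageData_CT = interpn(Xc,Yc,Zc, double(imagedata_ct), Xq,Yq,Zq, 'nearest', -1024);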
I'll let someone else answer the question, but I think that you're asking the wrong one. I lack context of course, but at first glance Matlab isn't the right tool for the job.
Have a look at ITK (C++ library with python wrappers), and the "Multi-modal 3D image registration" article.
Try 3DSlicer (it has a GUI for the previous tool)
Try FreeSurfer (similar, focused on brain scans)
After you've done that registration step, you could export the resulting images (now of identical size and spacing), and continue with your interpolation in Matlab if you wish (or with the same tools).
There is a module in Slicer called PETCTFusion which aligns the PET scan to the CT image.
You can install it in the new version of Slicer.
In the module's Display panel shown below, options to select a colorizing scheme for the PET dataset are provided:
Grey will provide white to black colorization, with black indicating the highest count values.
Heat will provide a warm color scale, with dark red the lowest and white the highest count values.
Spectrum will provide a color scale that goes from cooler (dark blue) on the low-count end to white at the highest.
This panel also provides a means to adjust the window and level of both PET and CT volumes.
I normally use the resampleinplace tool after the registration. You can find it in the Registration package, under Resample Image.
Look at the screenshot here:
If you would like to know more about the PETCTFUSION, there is a link below:
https://www.slicer.org/wiki/Modules:PETCTFusion-Documentation-3.6
Since Slicer is compatible with Python, you can use the Python interactor to run your own code too.
And let me know if you face any problems.

counting the number of objects on image with MatLab

I need to count the number of pieces of chalk in an image with MATLAB. I tried converting my image to grayscale and then locating the borders. I also tried converting my image to a binary image and doing different morphological operations on it, but I didn't get the desired result. Maybe I did something wrong. Please help me!
My image:
You can use the fact that chalk is colorful and the separators are gray. Use rgb2hsv to convert the image to HSV color space, and take the saturation component. Threshold that, and then try using morphology to separate the chalk pieces.
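A minimal sketch of that idea (the 0.3 threshold and the structuring element size are guesses you would have to tune):
rgb = imread('http://i.stack.imgur.com/RWBDS.jpg');
hsv = rgb2hsv(rgb);
sat = hsv(:,:,2);                        % saturation channel
chalkMask = sat > 0.3;                   % chalk is colorful, separators are gray
chalkMask = imopen(chalkMask, strel('rectangle', [25 5])); % break thin bridges
cc = bwconncomp(chalkMask);
numChalks = cc.NumObjects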
This is also not a full solution, but hopefully it can provide a starting point for you or someone else.
Like Dima, I noticed the chalk is brightly colored while the dividers are almost gray. I thought you could try to isolate gray pixels (where a gray pixel has red = blue = green) and go from there. I tried applying filters and doing morphological operations but couldn't find something satisfactory. Still, I hope this helps.
mim = imread('http://i.stack.imgur.com/RWBDS.jpg');
%we average all 3 color channels (note this isn't exactly equivalent to
%rgb2gray)
grayscale = mean(double(mim), 3);
%now we say a pixel is gray if all channels (r,g,b) are within some threshold
%of that mean (there's probably a better way to do this); working in double
%avoids uint8 saturation in the subtraction
my_gray_thresh = 25;
dmim = double(mim);
graymask = (abs(dmim(:,:,1) - grayscale) < my_gray_thresh) ...
         & (abs(dmim(:,:,2) - grayscale) < my_gray_thresh) ...
         & (abs(dmim(:,:,3) - grayscale) < my_gray_thresh);
figure(1)
imshow(graymask);
OK, so I spent a little time working on this, but unfortunately I'm out of time today and I apologize for the incomplete answer; maybe this will get you started (if you need more help, I'll edit this post over the weekend to give you a more complete answer :))
Here's the code-
% read the chalk image (the same one as above) and process each color channel
RWBDS = imread('http://i.stack.imgur.com/RWBDS.jpg');
for i = 1:3
    I = RWBDS(:,:,i);
    % opening/closing by reconstruction with a roughly chalk-shaped structuring element
    se = strel('rectangle', [265,50]);
    Io = imopen(I, se);
    Ie = imerode(I, se);
    Iobr = imreconstruct(Ie, I);
    Iobrd = imdilate(Iobr, se);
    Iobrcbr = imreconstruct(imcomplement(Iobrd), imcomplement(Iobr));
    Iobrcbr = imcomplement(Iobrcbr);
    % regional maxima of the reconstruction give the foreground (chalk) markers
    Iobrcbrm = imregionalmax(Iobrcbr);
    % clean the markers up with a smaller rectangle
    se2 = strel('rectangle', [150,50]);
    Io2 = imerode(Iobrcbrm, se2);
    Ie2 = imdilate(Io2, se2);
    fgm{i} = Ie2;
end
fgm_final = fgm{1}+fgm{2}+fgm{3};
figure, imagesc(fgm_final);
It does still pick up the edges on the side of the image, but from here you can use connected components (bwconncomp) and regionprops to get the lengths of the major and minor axes, and by looking at the ratios of the objects get rid of those edges, as sketched below.
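A rough sketch of that last step (the aspect-ratio range is an assumption you would have to tune):
cc = bwconncomp(fgm_final > 0);
stats = regionprops(cc, 'MajorAxisLength', 'MinorAxisLength');
ratios = [stats.MajorAxisLength] ./ [stats.MinorAxisLength];
keep = ratios > 4 & ratios < 20;   % assumed chalk-like aspect ratios
numChalks = nnz(keep)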
Anyways good luck!
EDIT:
I played with the code a tiny bit more, and updated the code above with the new results. In cases when I was able to get rid of the side "noise" it also got rid of the side chalks. I figured I'd just leave both in.
What I did: In most cases a conversion to HSV color space is the way to go, but as shown by @rayryeng this is not the way to go here. Hue works really well when there is one type of color - if, for example, all chalks were red. (Logically you would think that going with the color channel would be better, but this is not the case.) In this case, however, the only thing all chalks have in common is their relative shape. My solution basically used this concept by setting the structuring element se to something of the basic shape and ratio of the chalk and performing morphological operations - as you originally guessed was the way to go.
For more details, I suggest you read matlab's documentation on these specific functions.
And I'll let you figure out how to get the last chalk based on what I've given you :)

Location based segmentation of objects in an image (in Matlab)

I've been working on an image segmentation problem and can't seem to get a good idea for my most recent problem.
This is what I have at the moment:
Click here for image. (This is only a generic example.)
Is there a robust algorithm that can automatically discard the right square as not belonging to the group of the other four squares (that I know should always be stacked more or less on top of each other) ?
It can sometimes be the case that one of the stacked boxes is not found (so there's a gap), or that the bogus box is on the left side.
Your input is greatly appreciated.
If you have a way of producing BW images like your example:
s = regionprops(BW, 'centroid');
centroids = cat(1, s.Centroid);
xpos = centroids(:,1);
xpos should then be the x-positions of the boxes.
From here you have multiple ways to go, depending on whether you always have just one separated box and one set of grouped boxes or not. For the "one bogus box far away, rest closely grouped" case (away from Matlab, so this is unchecked) you could even do something as simple as:
d = abs(xpos-median(xpos));
bogusbox = centroids(d==max(d),:);
imshow(BW);
hold on;
plot(bogusbox(1),bogusbox(2),'r*');
Making something that's robust for your actual use case which I am assuming doesn't consist of neat boxes is another matter; as suggested in comments, you need some idea of how close together the positioning of your good boxes is, and how separate the bogus box(es) will be.
For example, you could use other regionprops measurements such as 'BoundingBox' or 'Extrema' and define some sort of measurement of how much the boxes overlap in x relative to each other, then group using that (this could be made to work even if you have multiple stacks in an image).
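As a rough sketch of that idea (the 50% overlap threshold is an assumption):
s  = regionprops(BW, 'BoundingBox');
bb = cat(1, s.BoundingBox);              % [x y width height] for each box
x1 = bb(:,1);  x2 = bb(:,1) + bb(:,3);   % left and right edges in x
n  = numel(x1);
overlap = zeros(n);
for i = 1:n
    for j = 1:n
        w = min(x2(i), x2(j)) - max(x1(i), x1(j)); % shared width in x
        overlap(i,j) = w / min(bb(i,3), bb(j,3));  % relative to the narrower box
    end
end
% a box belongs to the stack if it overlaps at least 50% with some other box
% (the > 1 is because every box fully overlaps itself)
stacked = sum(overlap > 0.5, 2) > 1;
bogusIdx = find(~stacked);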

Morphological separation of two connected boundaries

I've got a question regarding the following scenario.
As I post-processed an image, I obtained a contour which is unfortunately doubly connected, as you can see at the bottom line. To make it obvious: what I want is just the outer line.
Therefore I zoomed in and marked the line I want in the large image.
What I want from this selection is only the outer part, which I've marked in green in the next picture. Sorry for my bad drawing skills. ;)
I am using MATLAB with the IPT. I also tried bwmorph with the hbreak option, but it threw an error.
How do I solve that problem?
If you were successful could you please tell me a bit more about it?
Thank you in advance!
Sincerely
It seems your input image is a bit different from the one you posted, since I couldn't directly collect the branch points (there were too many of them). So, to start handling your problem, I consider a thinning followed by branch point detection. I also dilate the branch points and remove them from the thinned image; this guarantees that there is in fact no (4- or 8-) connection between the different segments in the initial image.
f = im2bw(imread('http://i.imgur.com/yeFyF.png'), 0);
g = bwmorph(f, 'thin', Inf);
h = g & ~bwmorph(bwmorph(g, 'branchpoints'), 'dilate');
Since h holds disconnected segments, the following operation collects the end points of all the segments:
u = bwmorph(h, 'endpoints');
Now to actually solve your problem I did some quick analysis on what you want to discard. Consider two distinct segments, a and b, in h. We say a and b overlap if the end points of one are contained in the other. By contained I simply mean that the starting x point of one is smaller than or equal to the other's, and the ending x point is greater than or equal to it. In your case, the "mountain" overlaps with the segment that you wish to remove. To determine which of them to remove, consider their area. But, since these are segments, area is a meaningless term. To handle that, I connected the end points of a segment and used as its area simply the number of interior points. As you can clearly notice, the area of the overlapped segment at the bottom is very small, so we say it is basically a line and discard it, while keeping the "mountain" segment. For this step the image u is of fundamental importance, since with it you have a clear indication of where to start and stop tracking a contour. If you used the image as is, you would have trouble determining where to start and stop collecting the points of a contour (i.e., the raster order would give you incorrect overlapping indications).
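A rough sketch of that test, using the bounding box for the x-containment check and ConvexArea as a crude stand-in for "connect the end points and count the interior points" (the 0.1 area ratio is an assumption):
st = regionprops(h, 'BoundingBox', 'ConvexArea');
bb = cat(1, st.BoundingBox);
x1 = bb(:,1);  x2 = bb(:,1) + bb(:,3);
areas = [st.ConvexArea];
lbl = bwlabel(h);
keep = true(1, numel(st));
for i = 1:numel(st)
    for j = 1:numel(st)
        % j overlaps i if its x-range is contained in i's x-range
        if i ~= j && x1(i) <= x1(j) && x2(i) >= x2(j) && areas(j) < 0.1*areas(i)
            keep(j) = false;   % j is essentially a line: discard it
        end
    end
end
hClean = ismember(lbl, find(keep));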
To reconstruct the segment as a single one (currently you have three of them), consider the points you discarded from g when building h, and use those that don't belong to the now-removed bottom segment.
I'd also use bwmorph
%# find the branch point
branchImg = bwmorph(img,'branchpoints');
%# grow the pixel to 3x3
branchImg = imdilate(branchImg,ones(3));
%# hide the branch point
noBranchImg = img & ~branchImg;
%# label the three lines
lblImg = bwlabel(noBranchImg);
%# in the original image, mask label #3
%# note that it may not always be #3 that you want to mask
finalImg = img;
finalImg(lblImg==3) = 0;
%# show the result
imshow(finalImg)

artifacts in processed images

This question is related to my previous post Image Processing Algorithm in Matlab on Stack Overflow, for which I already got the results that I wanted.
But now I am facing another problem: I am getting some artifacts in the processed images. In my original images (a stack of 600 images) I can't see any artifacts; please see the original image of a fingernail:
But in my 10 processed results I can see these lines:
I really don't know where they come from.
Also, if they come from the camera's sensor, why can't I see them in my original images? Any idea?
Edit:
I have added the following code suggested by @Jonas. It reduces the artifacts, but does not completely remove them.
%averaging of images
im = D{1}(:,:);
for i = 2:100
    im = imadd(im, D{i}(:,:));
end
im = im/100;
imshow(im,[]);
for i = 1:100
    SD{i}(:,:) = imsubtract(D{i}(:,:), im(:,:));
end
@belisarius has asked for more images, so I am going to upload 4 images of my finger with the speckle pattern and 4 images of the black background, size 1280x1024:
And here is the black background:
Your artifacts are in fact present in your original image, although not visible.
Code in Mathematica:
i = Import@"http://i.stack.imgur.com/5hM3u.png"
EntropyFilter[i, 1]
The lines are faint, but you can see them by binarization with a very low level threshold:
Binarize[i, .001]
As for what is causing them, I can only speculate. I would start tracing from the camera output itself. Also, you may post two or three images "as they come straight from the camera" to allow us some experimenting.
The camera you're using most likely has a CMOS chip. Since these have independent column (and possibly row) amplifiers, which may have slightly different electronic properties, you can get the signal from one column more amplified than from another.
Depending on the camera, this variability in column intensity can be stable. In that case, you're in luck: take ~100 dark images (tape something over the lens), average them, and then subtract them from each image before running the analysis. This should make the lines disappear. If the lines do not disappear (or if there are additional lines), use the post-processing scheme proposed by Amro to remove the lines after binarization.
EDIT
Here's how you'd do the background subtraction, assuming that you have taken 100 dark images and stored them in a cell array D with 100 elements:
% take the mean; convert to double for safety reasons
meanImg = mean( double( cat(3,D{:}) ), 3);
% then you can subtract the mean from the original (non-dark-frame) image
correctedImage = rawImage - meanImg; %(maybe you need to re-cast the meanImg first)
Here is an answer that, in my opinion, will remove the lines more gently than the above-mentioned methods:
im = imread('image.png');    % Original image
imFiltered = im;             % The filtered image will end up here
imChanged = false(size(im)); % To document the filter performance
% 1)
% Compute the histograms for each column in the lower part of the image
% (where the columns are most clear) and compute the mean and std of each
% bin in the histogram.
histograms = hist(double(im(501:520,:)), 0:255);
colMean = mean(histograms, 2);
colStd = std(histograms, 0, 2);
% 2)
% Now loop through each gray level above zero and...
for grayLevel = 1:255
    % Find the columns where the number of 'grayLevel' pixels is larger than
    % mean_n_graylevel + 3*std_n_graylevel - that is, columns that contain
    % statistically 'many' pixels with the current 'grayLevel'.
    lineColumns = find(histograms(grayLevel+1,:) > colMean(grayLevel+1) + 3*colStd(grayLevel+1));
    % Now remove all 'grayLevel' pixels in lineColumns in the original image
    if ~isempty(lineColumns)
        for col = lineColumns
            imFiltered(:,col) = im(:,col) .* uint8(~(im(:,col)==grayLevel));
            imChanged(:,col) = im(:,col)==grayLevel;
        end
    end
end
imshow(imChanged)
figure, imshow(imFiltered)
Here is the image after filtering
And this shows the pixels affected by the filter
You could use some sort of morphological opening to remove the thin vertical lines:
img = imread('image.png');
SE = strel('line',2,0);
img2 = imdilate(imerode(img,SE),SE);
subplot(121), imshow(img)
subplot(122), imshow(img2)
The structuring element used was:
>> SE.getnhood
ans =
1 1 1
Without really digging into your image processing, I can think of two reasons for this to happen:
The processing introduced these artifacts. This is unlikely, but it's an option. Check your algorithm and your code.
This is a side-effect because your processing reduced the dynamic range of the picture, just like quantization. So in fact, these artifacts may have already been in the picture itself prior to the processing, but they couldn't be noticed because their level was very close to the background level.
As for the source of these artifacts, it might even be the camera itself.
This is a VERY interesting question. I used to deal with this type of problem with live IR imagers (video systems). We actually had algorithms built into the cameras to deal with this problem prior to the user ever seeing or getting their hands on the image. A couple of questions:
1) Are you dealing with RAW images, or with already pre-processed grayscale (or RGB) images?
2) What is your ultimate goal with these images? Is the goal simply to get rid of the lines regardless of the resulting quality in the rest of the image, or is the point to preserve the absolute best image quality? Are you going to perform other processing afterwards?
I agree that those lines are most likely in ALL of your images. There are two reasons for lines like these showing up in an image. One is a bright scene in which the op-amps for some columns get saturated, causing whole columns of your image to take on the brightest value the camera can output. The other is bad op-amps or ADCs (analog-to-digital converters) themselves (most likely not an ADC, as normally there is essentially one ADC for the whole sensor, which would make the whole image bad - not your case). The saturation case is actually much more difficult to deal with (and I don't think this is your problem). Note: too much saturation on a sensor can cause bad pixels and columns to arise in your sensor (which is why they say never to point your camera at the sun).
The bad-column problem can be dealt with. In another answer above, someone had you averaging images. While this may be good for finding out where the bad columns (or bad single pixels, or the noise matrix of your sensor) are - and you would have to average while pointing the camera at black, white, essentially solid colors - it isn't the correct answer for getting rid of them. By the way, what I am describing with the black and white images, averaging, finding bad pixels, etc. is called calibrating your sensor.
OK, so assuming you are able to get this calibration data, you WILL be able to find out which columns are bad, even single pixels.
If you have this data, one way that you could erase the columns is:
for each bad column
    for each pixel (x, y) on the bad column
        pixel(x, y) = Average(pixel(x+1,y), pixel(x+1,y-1), pixel(x+1,y+1),
                              pixel(x-1,y), pixel(x-1,y-1), pixel(x-1,y+1))
What this essentially does is replace the bad pixel with a new pixel which is the average of the 6 remaining good pixels around it. The above is an over-simplified version of an algorithm. There are certainly cases where a single bad pixel could be right next to the bad column and shouldn't be used for averaging, or where two or three bad columns sit right next to each other. One could imagine that you would calculate the values for a bad column, then consider that column good in order to move on to the next bad column, etc.
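As a rough MATLAB sketch of that idea (badCols is an assumed, pre-calibrated list of bad column indices and img an assumed grayscale image):
imgFixed = double(img);
k = [1; 1; 1] / 3;                        % 3-row vertical averaging window
for c = badCols(:)'
    left  = max(c-1, 1);                  % clamp at the image borders
    right = min(c+1, size(img,2));
    % average of the 6 good neighbours: 3 from the left column, 3 from the right
    avgL = imfilter(double(img(:,left)),  k, 'replicate');
    avgR = imfilter(double(img(:,right)), k, 'replicate');
    imgFixed(:,c) = (avgL + avgR) / 2;
end
imgFixed = uint8(imgFixed);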
Now, the reason I asked about RAW versus B/W or RGB: if you were processing a RAW image, depending on the build of the sensor itself, it could be that only one sub-pixel (if you will) of the Bayer-filtered image sensor has the bad op-amp. If you could detect this, then you wouldn't necessarily have to throw out the other good sub-pixels' data. Secondly, if you are using an RGB sensor to take a grayscale photo and you shot it in RAW, then you may be able to calculate your own grayscale pixels. Many sensors, when giving back a grayscale image from an RGB sensor, will simply pass back the green pixel as the overall pixel. This is because it really serves as the luminance of the image, which is why most image sensors implement 2 green sub-pixels for every r or b sub-pixel. If this is what they are doing (not ALL sensors do this), then you may have better luck getting rid of just the bad channel column and performing your own grayscale conversion using:
gray = (0.299*r + 0.587*g + 0.114*b)
Apologies for the long winded answer, but I hope this is still informational to someone :-)
Since you cannot see the lines in the original image, they are either there with a low intensity difference compared to the original range of the image, or they were added by your processing algorithm.
The shape of the disturbance hints at the first option... (Unless you have an algorithm that processes each row separately.)
It seems like your sensor's columns are not uniform. Try taking a picture without the finger (background only) using the same exposure (and other) settings, then subtract it from the photo of the finger (prior to other processing). (Make sure the background is uniform before taking both images.)