I have an image which looks like this:
The (blue) background has the value zero and the (red) ring has a "large" value (compared to the rest of the image). I want to plot only the orange part of the sample. However, due to the finite resolution of the image, the edges still appear as shown here:
As you can see, especially the white regions (yes, there are a few) above are hard to see due to all the noise from the edges.
Is there a good algorithm (preferably in MATLAB) which can help me clean up these images?
Find the binary mask for the ring
Dilate the mask a bit using imdilate and strel
Use the inverted mask to 'and out' the ring and the region around it, as in the sketch below
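A minimal MATLAB sketch of these three steps (the threshold fraction and the structuring-element size are assumptions you will need to tune for your data):

mask = img > 0.5 * max(img(:));          % 1) binary mask for the bright ring (assumed threshold)
mask = imdilate(mask, strel('disk', 5)); % 2) grow the mask past the edge halo
img(mask) = 0;                           % 3) 'and out': zero the ring and its surroundings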
The image above has been processed to remove its background and increase contrast with im2bw. I now want to identify and measure the two elongated black regions at the top and bottom centre of the image. This is the result:
If I use imfill(I,'holes'), one of them does not get identified.
I would also like to identify the boundaries, so that I can measure the area of these regions and find their respective "weighted centroid".
What I want to achieve is something that allows me to measure an angle between the orientation of the elongated black regions in different frames, as pictured in the sketch below (the red line indicates the position of the top black region in a previous frame).
In this answer, I'll be using DIPimage 3, an image analysis toolbox for MATLAB (disclosure: I'm an author). However, the filters applied are quite simple, so it should be no problem to implement this using other toolboxes instead.
The original image is very noisy. Simply thresholding that image leads to a noisy binary image that is very difficult to work with. I'm suggesting you filter the original image to highlight the structures of interest first, before thresholding and measuring.
Because we're interested in detecting lines, we'll use the Laplace of Gaussian filter. It is important to tune the sigma parameter to match the width of the lines to be detected. After applying the Laplace filter, dark lines will appear bright, and bright lines will appear dark. The bright dot in the middle of the image will also be enhanced, but appear dark.
img = readim('https://i.stack.imgur.com/0LzF3m.png');
img = img{1}; % all three channels of PNG file are identical, take one
out = laplace(img,10); % Laplace of Gaussian, sigma = 10 (tuned to the line width)
This image is straightforward to threshold.
out = out > 0.25;
Finally, we'll measure the orientation of these two lines as the angle under which the projection is largest.
msr = measure(out,[],'feret'); % Feret diameters and angles
angle = msr.Feret(:,4)         % angle of the maximum projection
Output (angle in radians, 0 is to the right, pi/2 is down):
angle =
-1.7575
-1.7714
I have an image like this. Note that the regions are not perfectly shaped: there is a rectangle-like region and an ellipse-like region. I have segmented the ellipse-like region using some algorithm; the segmented region is the bright one, and the border (red rectangle) is the dark one.
Finally, I need to obtain the red rectangle-like region.
Can you suggest an algorithm to do this?
I see that you have made some real progress on your segmentation. Because you already have an idea of the location of the elements you want to segment, you should use a watershed with constraints/markers:
Your actual segmentation represents the inner markers.
You dilate it with a big structuring element (bigger than the inter-disk space).
You take the contour of the dilation, and that's your outer markers.
You compute the gradient of the original image.
You apply the watershed on the gradient image, using the markers you have just computed (see the sketch below).
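A minimal sketch of this marker-based watershed in MATLAB, assuming seg is your existing binary segmentation and img the original grayscale image (the structuring-element size is an assumption):

inner = seg;                                       % inner markers: the existing segmentation
outer = bwperim(imdilate(seg, strel('disk', 25))); % contour of the big dilation = outer markers
g  = imgradient(img);                              % gradient magnitude of the original image
g2 = imimposemin(g, inner | outer);                % force the markers to be the only minima
L  = watershed(g2);                                % constrained watershed on the gradient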
[EDIT] As the segmentation you provided does not match the original image (different dimensions), I had to roughly simulate a simple segmentation, using this image (the red lines being the segmentation you already have). And I got this result.
I have two stereo-rectified edge images of closed curves. Is it possible to find the disparity (along the x-axis in image coordinates) between the edge images and do a 3D reconstruction, given that I know the camera matrix? I am using MATLAB for the process. I will not be able to use a window-based technique, since these are binary images and a window-based technique requires texture. The question is: how will I compute the disparity between the edge images? The images are available at the following links. Left edge image https://www.dropbox.com/s/g5g22f6b0vge9ct/edge_left.jpg?dl=0 Right edge image https://www.dropbox.com/s/wjmu3pugldzo2gw/edge_right.jpg?dl=0
For this type of images, you can easily map each edge pixel from the left image to its counterpart in the right image, and therefore calculate the disparity for those pixels as usual.
The mapping can be done in various ways, depending on how typical these images are. For example, you could use a DTW-like approach to match curvatures.
For all other pixels in the image, you just don't have any information.
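As a crude stand-in for a full DTW match, and assuming the images really are rectified so that corresponding edge pixels share a row, one could pair edge pixels row by row in left-to-right order (L and R are the binary edge images; this is a sketch, not a tested solution):

disp_map = nan(size(L));                     % disparity defined only where edges exist
for y = 1:size(L,1)
    xl = find(L(y,:));  xr = find(R(y,:));   % edge columns in each image on this row
    n = min(numel(xl), numel(xr));
    for k = 1:n                              % pair them in left-to-right order
        disp_map(y, xl(k)) = xl(k) - xr(k);  % disparity = x_left - x_right
    end
end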
#Photon: Thanks for the suggestion. I did what you suggested and matched each edge pixel in the left and right image in a DTW-like fashion. But there are some matched pixels whose y-coordinates differ by 1 or 2 pixels, even though the images are properly rectified. So I calculated the depth for those differing edge pixels (up to a 2-pixel difference along the y-axis) by averaging, using a least-squares method. But I ended up getting this space curve (https://www.dropbox.com/s/xbg2q009fjji0qd/false_edge.jpg?dl=0) when it actually should have looked like this (https://www.dropbox.com/s/0ib06yvzf3k9dny/true_edge.jpg?dl=0), which was obtained using the RGB images. I couldn't think of any other reason why this would be the case, since I compared by traversing along the 408 edge pixels.
Consider that I have a colored image like this, in which the outline is not complete (there are gaps between the lines). I want to be able to fill the area between the lines with one color or another. This is actually a binary image which I got after applying a Canny edge detector to the corresponding grayscale image.
I tried first dilating the image and then eroding it, but the result is not good enough. I want to be able to preserve the thickness of the root.
Any help would be greatly appreciated.
Original Image
Image after edge detection and some manual removal of pixels
Using the information in the edge image, I thought I would try to extract pixels of a certain color from the original image. For every white pixel in the edited image, I used a search space in the original image along the same horizontal line. I used different thresholds for R, G, and B, and I ended up with this:
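A hypothetical reconstruction of that search (the variable names, window half-width, and RGB thresholds are all made up for illustration; edges is the edited edge image and rgb the original color image):

[rows, cols] = find(edges);                  % white pixels in the edge image
mask = false(size(edges));
w = 5;                                       % assumed half-width of the horizontal search window
for k = 1:numel(rows)
    y  = rows(k);
    xs = max(1, cols(k)-w) : min(size(edges,2), cols(k)+w);
    r = rgb(y, xs, 1);  g = rgb(y, xs, 2);  b = rgb(y, xs, 3);
    hit = r > 120 & g > 100 & b < 90;        % assumed thresholds for the root color
    mask(y, xs(hit)) = true;
end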
I'm not sure what your original image looks like. It would be helpful to see it.
You have gaps between the lines because a line in your original image has two edges, one on each side, and the Canny algorithm detects them both. The Canny edge detection algorithm has at its heart the application of two Sobel kernels to calculate the gradient, one for detecting horizontal edges and one for detecting vertical edges:
-1 0 +1
-2 0 +2
-1 0 +1
and
+1 +2 +1
 0  0  0
-1 -2 -1
These kernels will produce peaks on both sides of the line, one positive and one negative. You can exclude one side of the line by excluding the corresponding peak: after taking the gradient in each direction, truncate any values below zero (set them to zero) to remove the second peak. Then continue with the Canny edge detection as usual. This will result in the detection of only a single edge for each line instead of the two that you are seeing now.
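Since MATLAB's built-in edge(...,'canny') does not expose the gradient for modification, here is a hedged sketch of the one-sided gradient step (the sign of the truncation decides which side of each line survives):

sx = fspecial('sobel')';                       % horizontal-gradient Sobel kernel
sy = fspecial('sobel');                        % vertical-gradient Sobel kernel
gx = imfilter(double(img), sx, 'replicate');
gy = imfilter(double(img), sy, 'replicate');
gx(gx < 0) = 0;                                % truncate negative peaks: keep one side only
gy(gy < 0) = 0;
gmag = hypot(gx, gy);                          % continue Canny (non-max suppression, hysteresis) on this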
I'll add a third approach now that I have seen the image. It looks like most of the information is in the green channel.
Green channel image
This image gives you a decent result if you simply apply a threshold.
Thresholded image with a somewhat arbitrary threshold
You can then either clean this image up by itself or use your edge image. To clean it up with the edge image you produced, remove any white pixels that are more than a certain distance from one of your detected edges: create a Euclidean distance map from your edge image and use it to set any white pixel farther than a certain distance from an edge to black.
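A minimal sketch of that cleanup, assuming rgb is the original image and edges your edge image (the threshold value and the 10-pixel cutoff are assumptions):

g  = rgb(:,:,2);            % green channel carries most of the signal
bw = g > 100;               % somewhat arbitrary threshold on a uint8 channel, as noted above
d  = bwdist(edges);         % Euclidean distance map: distance to the nearest edge pixel
bw(d > 10) = false;         % set white pixels far from any edge to black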
If you are still collecting images you may want to try to position the camera in a way to avoid the bottom of the jar (or whatever this is).
You could attempt to use a line-scanning methodology. Start at the side and scan horizontally. When you hit an edge, you assume you are entering a root and start setting the pixels to white. When you hit another edge, you assume you are leaving the root and stop. There will be some fringe cases, and you may want to add additional checks, such as limiting the allowed thickness of a root.
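A hypothetical line-scan fill along those lines (edges is the binary edge image; maxThick is an assumed cap on root thickness):

filled = false(size(edges));
maxThick = 30;                               % assumed maximum root thickness in pixels
for y = 1:size(edges,1)
    inside = false;  thick = 0;
    for x = 1:size(edges,2)
        if edges(y,x), inside = ~inside; thick = 0; end    % toggle at each edge crossing
        if inside
            thick = thick + 1;
            if thick > maxThick, inside = false; else, filled(y,x) = true; end
        end
    end
end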
You could also do a flood-fill-style algorithm where you take a seed point in a root and travel up the root, filling it in.
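For the flood-fill variant, imfill with a seed point is the natural building block, though it only helps where the outline happens to be closed (the seed coordinates here are made up):

seed   = [120 200];              % assumed [row col] seed point inside a root
filled = imfill(edges, seed);    % flood-fills from the seed up the root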
I'm not sure how well these would work, as it depends on the image, and I have not tested them.
Hi, I'm attempting to filter an image with 4 objects inside using MATLAB. My first image had a black background with white objects, so it was clear to me how to filter each object out: find the large white sections using bwlabel and separate them from the image.
The next image has noise in it, though. Now I have an image with white lines running through my objects, and the objects are actually connected to each other. How could I filter out these lines in MATLAB? What about salt-and-pepper noise? Are there MATLAB functions that can do this?
Filtering noise can be done in several ways. A typical noise-filtering procedure will be something like threshold > median filtering > blurring > threshold. However, information about the type of noise can be very important for proper noise filtering. For example, since you have lines in your image, you can try to use a Hough transform to detect them and take them out of play (see houghlines). Another approach is to use RANSAC. For salt-and-pepper noise, use medfilt2 with a window size that captures the noise characteristics (for example, a 3x3 window will deal well with noise fluctuations that are 1 pixel big).
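A minimal sketch of that pipeline (the window size, blur sigma, and peak count are assumptions to tune):

f  = medfilt2(img, [3 3]);        % median filter handles the salt & pepper
f  = imgaussfilt(f, 1);           % light blur
bw = imbinarize(f);               % final threshold
% optionally detect the lines with the Hough transform so they can be removed
[H, theta, rho] = hough(bw);
peaks = houghpeaks(H, 5);
lines = houghlines(bw, theta, rho, peaks);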
If you can live with distorting the objects a bit, you can use a closing (morphological) filter with a bit of contrast stretching. You'll need the Image Processing Toolbox, but here's the general idea.
Blur to kill the lines; otherwise the closing filter will erase your objects. You can use fspecial to create a Gaussian filter and imfilter to apply it.
Apply the closing filter to the image using imclose with a mask that's bigger than your noise, but smaller than the object pieces (I used a 3x3 diamond in my example).
Threshold your image using im2bw so that every pixel gets turned to pure black or pure white based on a threshold level. A sketch of these steps follows below.
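A minimal sketch of those three steps (the Gaussian size and the 0.5 threshold level are assumptions):

h  = fspecial('gaussian', [5 5], 1);   % 1) Gaussian blur to kill the thin lines
f  = imfilter(img, h, 'replicate');
se = strel('diamond', 1);              % 2) 3x3 diamond mask, as in the example
f  = imclose(f, se);                   %    morphological closing
bw = im2bw(f, 0.5);                    % 3) threshold to pure black and white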
I've attached an example I had to do for a school project. In my case, the background was white and the objects black, and I applied the contrast stretching between the two morphological passes. You can't really see the gray after the first pass, but it was there (hence the necessity for thresholding).
You can of course do the closing directly (dilation followed by erosion) and then threshold. Notice how this filtering distorts the objects.
FYI, salt-and-pepper noise is usually cleaned up with a median (or moving-average) filter, but that will leave the image grayscale. For my project I needed pure black and white (for bwlabel), and the morphological filters worked great to completely obliterate the noise.