Suppress heatmap non-maxima in segmentation with UNet - neural-network

I'm using U-Net for image segmentation.
The model was trained on images that can contain up to 4 different classes; the training classes never overlap.
The output of the U-Net is a heatmap (with float values between 0 and 1) for each of these 4 classes.
Now I have two problems:
1. For a certain class, how do I segment (draw contours) in the original image only at the points where the heatmap has significant values? (In the image below is an example: the values in the centre are significant, while the values on the left are not. If I draw the segmentation of the entire image without any additional operation, both are included.)
2. Downstream of the first point, how do I avoid drawing contours for two superimposed classes in the original image? (Perhaps by drawing only the class with the higher values in the corresponding heatmap, as in the sketch below.)
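For reference, a minimal Matlab-style sketch of what I mean by both steps, assuming the four heatmaps are stacked in a rows-by-cols-by-4 array H and that 0.5 is only an illustrative threshold:
H = cat(3, h1, h2, h3, h4);            % the 4 class heatmaps (h1..h4 assumed), values in [0,1]
thr = 0.5;                             % "significant value" threshold, to be tuned
[maxVal, clsIdx] = max(H, [], 3);      % per-pixel winning class and its score
for c = 1:4
    mask = (clsIdx == c) & (maxVal > thr);   % class c wins AND its value is significant
    B = bwboundaries(mask);                  % contours for class c only
    % ... draw the boundaries in B on the original image
end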

Related

Automatic detection of B/W image against colored background. What should I do when local thresholding doesn't work?

I have an image which consists of a black and white document against a heterogeneous colored background. I need to automatically detect the document in my image.
I have tried Otsu's method and a local thresholding method, but neither was successful. Edge detectors like Canny and Sobel didn't work either.
Can anyone suggest some method to automatically detect the document?
Here's an example starting image:
After using various threshold methods, I was able to get the following output:
What follows is an automated global method for isolating an area of low color saturation (e.g. a b/w page) against a colored background. It may work well as an alternative when approaches based on adaptive thresholding of grayscale-converted images fail.
First, we load an RGB image I, convert from RGB to HSV, and isolate the saturation channel:
I = imread('/path/to/image.jpg');
Ihsv = rgb2hsv(I); % convert image to HSV
Isat = Ihsv(:,:,2); % keep only saturation channel
In general, a good first step when deciding how to proceed with any object detection task is to examine the distribution of pixel values. In this case, these values represent the color saturation levels at each point in our image:
% Visualize saturation value distribution
imhist(Isat); box off
From this histogram, we can see that there appear to be at least 3 distinct peaks. Given that our target is a black and white sheet of paper, we’re looking to isolate saturation values at the lower end of the spectrum. This means we want to find a threshold that separates the lower 1-2 peaks from the higher values.
One way to do this in an automated way is through Gaussian Mixture Modeling (GMM). GMM can be slow, but since you’re processing images offline I assume this is not an issue. We’ll use Matlab’s fitgmdist function here and attempt to fit 3 Gaussians to the saturation image:
% Find threshold for calling ROI using GMM
n_gauss = 3; % number of Gaussians to fit
gmm_opt = statset('MaxIter', 1e3); % max iterations to converge
gmmf = fitgmdist(Isat(:), n_gauss, 'Options', gmm_opt);
Next, we use the GMM fit to classify each pixel and visualize the results of our GMM classification:
% Classify pixels using GMM
gmm_class = cluster(gmmf, Isat(:));
% Plot histogram, colored by class
hold on
bin_edges = linspace(0,1,256);
for j=1:n_gauss, histogram(Isat(gmm_class==j), bin_edges); end
In this example, we can see that the GMM ended up grouping the 2 far left peaks together (blue class) and split the higher values into two classes (yellow and red). Note: your colors might be different, since GMM is sensitive to random initial conditions. For our use here, this is probably fine, but we can check that the blue class does in fact capture the object we’d like to isolate by visualizing the image, with pixels colored by class:
% Visualize classes as image
im_class = reshape(gmm_class ,size(Isat));
imagesc(im_class); axis image off
So it seems like our GMM segmentation on saturation values gets us in the right ballpark - grouping the document pixels (blue) together. But notice that we still have two problems to fix. First, the big bar across the bottom is also included in the same class with the document. Second, the text printed on the page is not being included in the document class. But don't worry, we can fix these problems by applying some filters on the GMM-grouped image.
First, we’ll isolate the class we want, then do some morphological operations to low-pass filter and fill gaps in the objects.
Isat_bw = im_class == find(gmmf.mu == min(gmmf.mu)); %isolate desired class
opened = imopen(Isat_bw, strel('disk',3)); % morphological opening to remove small specks
closed = imclose(opened, strel('disk',50)); % morphological closing to fill gaps in the page
imshow(closed)
Next, we’ll use a size filter to isolate the document ROI from the big object at the bottom. I’ll assume that your document will never fill the entire width of the image and that any solid objects bigger than the sheet of paper are not wanted. We can use the regionprops function to give us statistics about the objects we detect and, in this case, we’ll just return the objects’ major axis length and corresponding pixels:
% Size filtering
props = regionprops(closed,'PixelIdxList','MajorAxisLength');
[~,ridx] = min([props.MajorAxisLength]);
output_im = zeros(numel(closed),1);
output_im(props(ridx).PixelIdxList) = 1;
output_im = reshape(output_im, size(closed));
% Display final mask
imshow(output_im)
Finally, we are left with output_im - a binary mask for a single solid object corresponding to the document. If this particular size filtering rule doesn’t work well on your other images, it should be possible to find a set of values for other features reported by regionprops (e.g. total area, minor axis length, etc.) that give reliable results.
A side-by-side comparison of the original and the final masked image shows that this approach produces pretty good results for your sample image, but some of the parameters (like the size exclusion rules) may need to be tuned if results for other images aren't quite as nice.
% Display final image
image([I I.*uint8(output_im)]); axis image; axis off
One final note: be aware that the GMM algorithm is sensitive to random initial conditions, and therefore might randomly fail, or produce undesirable results. Because of this, it's important to have some kind of quality control measures in place to ensure that these random failures are detected. One possibility is to use the posterior probabilities of the GMM model to form some kind of criteria for rejecting a certain fit, but that’s beyond the scope of this answer.
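For example, one (purely illustrative) criterion could be built from the maximum posterior probability per pixel; the 0.9 and 0.2 cut-offs below are arbitrary placeholders:
post = posterior(gmmf, Isat(:));         % N-by-3 posterior class probabilities
certainty = max(post, [], 2);            % confidence of each pixel's assignment
frac_uncertain = mean(certainty < 0.9);  % fraction of ambiguously assigned pixels
if frac_uncertain > 0.2
    warning('GMM fit looks ambiguous - consider refitting with new initial conditions');
end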

Image Segmentation Using Prior Shape Information Matlab

I have a series of medical images from which I am attempting to segment out and analyze the ECG tracings in Matlab (the green, spiking line in the image below):
I have so far been successful in doing this on a small set of images using color thresholding and region properties. My problem is that almost all aspects of this feature of interest can change depending on the manufacturer of the machine used to produce the images and the behavior of the user operating it (over which I have 0 control).
Potentially differing attributes include line position in the image (which can change to be almost anywhere in the image), amplitude, frequency, and even color (which can be changed to match the color of the large white surface under the line in the above image). This makes it extremely difficult to create a robust segmentation solution for all images relying only on "simple" methods (color segmentation, region properties, edge detection etc).
Would it be straightforward to train a classifier to identify the general shape of this line and segment it out? Alternatively, is there another way to search and segment an image using prior shape information?
If you are currently applying an arbitrary threshold, you can look at various techniques for dynamic thresholding (for example, one that applies the concept to edge detection).
You could also try thresholding on a different representation of the image, such as HSL or HSV (I am assuming you are currently thresholding on the RGB values).
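A rough sketch of that idea, with placeholder threshold values that would need tuning for your actual screenshots:
Ihsv = rgb2hsv(I);                                    % I is the RGB screenshot
Hc = Ihsv(:,:,1); Sc = Ihsv(:,:,2); Vc = Ihsv(:,:,3); % hue, saturation, value
% e.g. keep bright, saturated, green-ish pixels for the ECG trace
mask = (Hc > 0.2 & Hc < 0.45) & (Sc > 0.4) & (Vc > 0.5);
mask = bwareaopen(mask, 50);                          % drop small speckles
imshow(mask)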
You may use a classifier and active contour model to segment the desired region. An example can be found here: http://pratondo.staff.telkomuniversity.ac.id/2016/01/14/robust-edge-stop-functions-for-edge-based-active-contour-models-in-medical-image-segmentation/

Smooth circular data - Matlab

I am currently doing some image segmentation on a bone qCT picture, see for instance images below.
I am trying to find the different borders in the picture, for instance the outer border separating the bone from the noisy background. In this analysis I am getting a list of points (vec(1,:) containing the x values and vec(2,:) containing the y values) in random order.
To get them into order I am using a block of code which effectively takes the first point vec(1,1),vec(2,1), finds the closest point among the rest of the points in the vector, and then repeats.
Now my problem is that I want to smooth the data but how do I do that as the points lie in a circular formation? (I do have the Curve Fitting Toolbox)
Not exactly a smoothing procedure, but a way to simplify your data would be to compute the boundary of the convex hull of the data.
K = convhull(O(1,:), O(2,:)); % O is the 2-by-N matrix of boundary points (x in row 1, y in row 2)
plot(O(1,K), O(2,K));
You could also consider using alpha shapes if you want more control.
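For example (a sketch only, with an arbitrary shrink factor), Matlab's boundary function gives a tighter, possibly non-convex outline than convhull, using the same 2-by-N point matrix O as above:
k = boundary(O(1,:)', O(2,:)', 0.8);   % shrink factor: 0 gives the convex hull, 1 the tightest fit
plot(O(1,k), O(2,k));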

matlab: remove small edges and simplify a histology image

I have an image like this:
What I want to do is to find the outer edge of this cell and the inner edge in the cell between the two parts of different colors.
But this image contains too much detail, I think. Is there any way to simplify this image, remove those small edges, and find the edges I want?
I have tried the edge function provided by matlab, but it only finds the outer edge and is disturbed by all the small detailed edges.
This is very challenging work due to the ambiguous boundaries and the tiny difference between the red and green intensities. If you want to implement the segmentation very precisely and meet medical requirements, Shai's k-means plus graph cuts may be one of the very few options (the EM algorithm may be an alternative). If you have a large database of similar images, some machine learning methods might help. Otherwise, I just wrote a very simple piece of code to roughly extract the internal red region for you. The boundary is not that accurate, since some of the green regions are also included.
I1 = I;                      % keep a copy of the original RGB image
I = rgb2hsv(I);
I = I(:,:,1);                % hue channel: relatively large margin between green and red
I = I.*(I < 0.25);           % keep only the low-hue (red-ish) pixels
I = imdilate(I, true(5));    % dilate to connect nearby fragments
% I = imfill(I,'holes');     % depends on what your definition of the inner boundary is
bw = bwconncomp(I);
ar = bw.PixelIdxList;
% find the largest connected component
n = 0;
for i = 1:length(ar)
    if length(ar{i}) > n
        n = length(ar{i});
        num = i;
    end
end
bw1 = bwlabel(I);
% mask the original image with the largest component
bwfinal(:,:,1) = (bw1==num).*double(I1(:,:,1));
bwfinal(:,:,2) = (bw1==num).*double(I1(:,:,2));
bwfinal(:,:,3) = (bw1==num).*double(I1(:,:,3));
bwfinal = uint8(bwfinal);
imshow(bwfinal)
It seems to me you have three dominant colors in the image:
1. blue-ish - the background (but also present inside the cell as "noise")
2. green-ish - one part of the cell
3. red-ish - the second part of the cell
If these three colors are distinct enough, you may try and segment the image using k-means and Graph cuts.
First stage - use k-means to associate each pixels with one of three dominant colors. Apply k-means to the colors of the image (each pixel is a 3-vector in your chosen color space). Run k-means with k=3, keep for each pixel its distance to centroids.
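A minimal sketch of this first stage (variable names are illustrative; I is the loaded RGB image):
X = double(reshape(I, [], 3));                       % one row per pixel, 3 color values
[pixLabel, C, ~, D] = kmeans(X, 3, 'Replicates', 3); % D: distance of every pixel to each centroid
labelIm = reshape(pixLabel, size(I,1), size(I,2));
imagesc(labelIm); axis image off                     % inspect the 3-color clustering
% keep D - it supplies the data costs for the graph-cut stages below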
Second stage - separate cell from background. Do a binary segmentation using graph-cut. The data cost for each pixel is either the distance to the background color (if pixel is labeled "background"), or the minimal distance to the other two colors (if pixel is labeled "foreground"). Use image contrast to set the pair-wise weights for the smoothness term.
Third stage - separate the two parts of the cell. Again do a binary segmentation using graph-cut but this time work only on pixels marked as "cell" in the previous stage. The data term for pixels that the k-means assigned to background but are labeled as cell should be zero for all labels (these are the "noise" pixels inside the cell).
You may find my matlab wrapper for graph-cuts useful for this task.

5-dimensional plotting in matlab for classification

I want to create a 5-dimensional plot in matlab. I have two files in my workspace. One is "data" (150x4): 150 observations, each with 4 features. Since I want to classify them, I have another file called "labels" (150x1) that holds a label for each observation in the data file. In other words, the labels are the classes of the data, and I have 3 classes: 1, 2, 3.
I want to plot this classification, but I can't figure out how.
You need to think about what kind of plot you want to see. 5 dimensions are difficult to visualize, unless of course, your hyper-dimensional monitor is working. Mine never came back from the repair shop. (That should teach me for sending it out.)
Seriously, 5 dimensional data really can be difficult to visualize. The usual solution is to plot points in a 2-d space (the screen coordinates of a figure, for example. This is what plot essentially does.) Then use various attributes of the points plotted to show the other three dimensions. This is what Chernoff faces do for you. If you have the stats toolbox, then it looks like glyphplot will help you out. Or you can plot in 3-d, then use two attributes to show the other two dimensions.
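For example (a sketch only, using the 150x4 "data" matrix from the question):
glyphplot(data, 'standardize', 'column');   % star plot, one glyph per observation
figure
glyphplot(data, 'glyph', 'face');           % Chernoff faces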
Another idea is to plot points in 2-d to show two of the dimensions, then use color to indicate the other three dimensions. Thus, the RGB assigned to that marker will be defined by the other three dimensions. Of course, that means you must be able to visualize what the RGB coordinates of a color represent, so you need to understand color as it is represented in an RGB space.
You can use scatter3 to plot your data, using three features of data as dimensions, the fourth as color, and the class as different markers
figure, hold on
markerList = 'o*+';                      % one marker per class
dataClass = labels;                      % the class labels from the question (150x1)
nClasses  = numel(unique(dataClass));    % 3 classes here
for iClass = 1:nClasses
    classIdx = dataClass == iClass;
    scatter3(data(classIdx,1), data(classIdx,2), data(classIdx,3), [], data(classIdx,4), ...
        'marker', markerList(iClass));
end
view(3)                                  % make sure the held axes use a 3-D view
When you use color to represent one of the features, I suggest to use a good colormap, such as pmkmp from the Matlab File Exchange instead of the default jet.
Alternatively, you can use e.g. mdscale to transform your higher-dimensional data to 2D for standard plotting.
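For example (a rough sketch, assuming the "data" (150x4) and "labels" (150x1) variables from the question and the Statistics Toolbox):
D = pdist(data);                   % pairwise distances between the 150 observations
Y = mdscale(D, 2);                 % multidimensional scaling down to 2 dimensions
gscatter(Y(:,1), Y(:,2), labels);  % 2-D scatter colored by class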
There's a model called SOM (Self-organizing Maps) which builds a 2-D image of a multidimensional space.
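A quick sketch of that idea with Matlab's SOM functions (assuming the Neural Network / Deep Learning Toolbox and the question's 150x4 "data" matrix; the map size is arbitrary):
net = selforgmap([10 10]);    % 10x10 grid of neurons
net = train(net, data');      % selforgmap expects features-by-samples
plotsomhits(net, data');      % how many observations map to each neuron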