I have a series of medical images from which I am attempting to segment out and analyze the ECG tracings in Matlab (the green, spiking line in the image below):
I have so far been successful in doing this on a small set of images using color thresholding and region properties. My problem is that almost every aspect of this feature of interest can change depending on the manufacturer of the machine used to produce the images and the behavior of the user operating it (over which I have no control).
Potentially differing attributes include line position (which can be almost anywhere in the image), amplitude, frequency, and even color (which can be changed to match the color of the large white surface under the line in the above image). This makes it extremely difficult to create a robust segmentation solution for all images relying only on "simple" methods (color segmentation, region properties, edge detection, etc.).
Would it be straightforward to train a classifier to identify the general shape of this line and segment it out? Alternatively, is there another way to search and segment an image using prior shape information?
If you are currently applying an arbitrary threshold, you can look at various techniques for dynamic thresholding (for example, a technique that applies the concept to edge detection).
What you could also try is to threshold on a different representation of the image, such as HSL or HSV, as I am assuming you are thresholding on the RGB values.
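For instance, here is a minimal sketch of an HSV-based threshold for a green trace; the filename and the hue/saturation bounds are assumptions you would tune for your images:
img = imread('ecg_frame.png'); % hypothetical filename
hsv = rgb2hsv(img);
mask = hsv(:,:,1) > 0.25 & hsv(:,:,1) < 0.45 ... % green-ish hue band (a guess)
    & hsv(:,:,2) > 0.4; % sufficiently saturated
imshow(mask)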
You may use a classifier and active contour model to segment the desired region. An example can be found here: http://pratondo.staff.telkomuniversity.ac.id/2016/01/14/robust-edge-stop-functions-for-edge-based-active-contour-models-in-medical-image-segmentation/
Related
I have an image which consists of a black and white document against a heterogeneous colored background. I need to automatically detect the document in my image.
I have tried Otsu's method and a local thresholding method, but neither was successful. Edge detectors like Canny and Sobel didn't work either.
Can anyone suggest some method to automatically detect the document?
Here's an example starting image:
After using various threshold methods, I was able to get the following output:
What follows is an automated global method for isolating an area of low color saturation (e.g. a b/w page) against a colored background. This may work well as an alternative approach when other approaches based on adaptive thresholding of grayscale-converted images fail.
First, we load an RGB image I, convert from RGB to HSV, and isolate the saturation channel:
I = imread('/path/to/image.jpg');
Ihsv = rgb2hsv(I); % convert image to HSV
Isat = Ihsv(:,:,2); % keep only saturation channel
In general, a good first step when deciding how to proceed with any object detection task is to examine the distribution of pixel values. In this case, these values represent the color saturation levels at each point in our image:
% Visualize saturation value distribution
imhist(Isat); box off
From this histogram, we can see that there appear to be at least 3 distinct peaks. Given that our target is a black and white sheet of paper, we’re looking to isolate saturation values at the lower end of the spectrum. This means we want to find a threshold that separates the lower 1-2 peaks from the higher values.
One way to do this in an automated way is through Gaussian Mixture Modeling (GMM). GMM can be slow, but since you’re processing images offline I assume this is not an issue. We’ll use Matlab’s fitgmdist function here and attempt to fit 3 Gaussians to the saturation image:
% Find threshold for calling ROI using GMM
n_gauss = 3; % number of Gaussians to fit
gmm_opt = statset('MaxIter', 1e3); % max iterations to converge
gmmf = fitgmdist(Isat(:), n_gauss, 'Options', gmm_opt);
Next, we use the GMM fit to classify each pixel and visualize the results of our GMM classification:
% Classify pixels using GMM
gmm_class = cluster(gmmf, Isat(:));
% Plot histogram, colored by class
hold on
bin_edges = linspace(0,1,256);
for j=1:n_gauss, histogram(Isat(gmm_class==j), bin_edges); end
In this example, we can see that the GMM ended up grouping the 2 far left peaks together (blue class) and split the higher values into two classes (yellow and red). Note: your colors might be different, since GMM is sensitive to random initial conditions. For our use here, this is probably fine, but we can check that the blue class does in fact capture the object we’d like to isolate by visualizing the image, with pixels colored by class:
% Visualize classes as image
im_class = reshape(gmm_class, size(Isat));
imagesc(im_class); axis image off
So it seems like our GMM segmentation on saturation values gets us in the right ballpark - grouping the document pixels (blue) together. But notice that we still have two problems to fix. First, the big bar across the bottom is also included in the same class with the document. Second, the text printed on the page is not being included in the document class. But don't worry, we can fix these problems by applying some filters on the GMM-grouped image.
First, we’ll isolate the class we want, then do some morphological operations to low-pass filter and fill gaps in the objects.
Isat_bw = im_class == find(gmmf.mu == min(gmmf.mu)); % isolate the class with the lowest mean saturation
opened = imopen(Isat_bw, strel('disk',3)); % morphological open to remove small specks
closed = imclose(opened, strel('disk',50)); % morphological close to fill gaps
imshow(closed)
Next, we’ll use a size filter to isolate the document ROI from the big object at the bottom. I’ll assume that your document will never fill the entire width of the image and that any solid objects bigger than the sheet of paper are not wanted. We can use the regionprops function to give us statistics about the objects we detect and, in this case, we’ll just return the objects’ major axis length and corresponding pixels:
% Size filtering
props = regionprops(closed,'PixelIdxList','MajorAxisLength');
[~,ridx] = min([props.MajorAxisLength]);
output_im = zeros(numel(closed),1);
output_im(props(ridx).PixelIdxList) = 1;
output_im = reshape(output_im, size(closed));
% Display final mask
imshow(output_im)
Finally, we are left with output_im - a binary mask for a single solid object corresponding to the document. If this particular size filtering rule doesn’t work well on your other images, it should be possible to find a set of values for other features reported by regionprops (e.g. total area, minor axis length, etc.) that give reliable results.
A side-by-side comparison of the original and the final masked image shows that this approach produces pretty good results for your sample image, but some of the parameters (like the size exclusion rules) may need to be tuned if results for other images aren't quite as nice.
% Display final image
image([I I.*uint8(output_im)]); axis image; axis off
One final note: be aware that the GMM algorithm is sensitive to random initial conditions and can therefore randomly fail or produce undesirable results. Because of this, it's important to have some quality control measures in place to detect these random failures. One possibility is to use the posterior probabilities of the GMM model to form some kind of rejection criterion for a given fit, but that's beyond the scope of this answer.
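For what it's worth, a minimal sketch of one such check (the 0.9 cutoff is an arbitrary assumption) could average each pixel's posterior probability for its assigned class and flag low-confidence fits:
p = posterior(gmmf, Isat(:)); % per-pixel posterior probability for each class
conf = mean(max(p, [], 2)); % mean confidence of the assigned classes
if conf < 0.9 % hypothetical cutoff; tune empirically
    warning('GMM fit may be unreliable; consider refitting');
end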
I need to calibrate a camera according to soccer field white lines.
In order to do so I used Canny for edge detection and HoughLinesP to get the white line vectors. The location of the camera is not fixed and the picture may also contain the crowd. In this case the crowd may be very noisy for HoughLinesP, thus I thought to extract the ROI of the field from the image.
I've converted the image to HSV and used inRange on the green color.
Now, what is the best way to get the ROI?
Link to example for a noisy image - Source and after InRange
In this approach, you can use a clustering method like DBSCAN to get rid of the noise and select the biggest region. Then you can use OpenCV to describe that region with a contour, with lines, or with a convex hull.
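Here is a rough MATLAB sketch of that idea (an OpenCV version would be analogous). It assumes a binary field mask from your inRange step; dbscan requires R2019a+, and the filename and clustering parameters are guesses:
mask = imread('field_mask.png') > 0; % hypothetical binary mask from inRange
mask = imresize(mask, 0.25, 'nearest'); % downsample: DBSCAN on all pixels is slow
[r, c] = find(mask); % coordinates of foreground pixels
idx = dbscan([r c], 5, 50); % epsilon and minpts need tuning
counts = accumarray(idx(idx > 0), 1); % cluster sizes (idx == -1 is noise)
[~, biggest] = max(counts);
roi = false(size(mask));
roi(sub2ind(size(mask), r(idx == biggest), c(idx == biggest))) = true;
roi = bwconvhull(roi); % convex hull of the biggest (field) region
imshow(roi)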
By the way, you may want to take a look at recent line detectors
We use field, line, and circle information together with some problem-specific assumptions, plus lens undistortion and model fitting, for auto-calibration of static cameras; however, there will always be noisy cases that require minimal user input.
I have an image like this:
What I want to do is to find the outer edge of this cell and the inner edge between the two differently colored parts of the cell.
But this image contains too much detail, I think. Is there any way to simplify this image, remove those small edges, and find the edges I want?
I have tried the edge function provided by MATLAB, but it only finds the outer edge and is disturbed by all the fine-detail edges.
This is a very challenging task due to the ambiguous boundaries and the tiny difference between the red and green intensities. If you want to implement the segmentation very precisely and meet medical requirements, Shai's k-means plus graph cuts may be one of the very few options (the EM algorithm may be an alternative). If you have a large database of similar images, some machine learning methods might help. Otherwise, here is some very simple code that roughly extracts the internal red region for you. The boundary is not that accurate, since some of the green regions are also included.
I1 = I; % keep a copy of the original RGB image
I = rgb2hsv(I);
I = I(:,:,1); % hue channel: relatively large margin between green and red
I = I.*(I < 0.25); % keep only the low-hue (red-ish) pixels
I = imdilate(I, true(5)); % dilate to bridge small gaps
% I = imfill(I,'holes'); % depends on what your definition of the inner boundary is
bw = bwconncomp(I);
ar = bw.PixelIdxList;
% find the largest labeled area
n = 0;
for i = 1:length(ar)
    if length(ar{i}) > n
        n = length(ar{i});
        num = i;
    end
end
bw1 = bwlabel(I);
% keep only the largest component in each RGB channel
bwfinal(:,:,1) = (bw1==num).*double(I1(:,:,1));
bwfinal(:,:,2) = (bw1==num).*double(I1(:,:,2));
bwfinal(:,:,3) = (bw1==num).*double(I1(:,:,3));
bwfinal = uint8(bwfinal);
imshow(bwfinal)
It seems to me you have three dominant colors in the image:
1. blue-ish: the background (but also present inside the cell as "noise")
2. green-ish: one part of the cell
3. red-ish: the second part of the cell
If these three colors are distinct enough, you may try and segment the image using k-means and Graph cuts.
First stage - use k-means to associate each pixel with one of the three dominant colors. Apply k-means to the colors of the image (each pixel is a 3-vector in your chosen color space). Run k-means with k=3 and keep, for each pixel, its distance to the centroids; see the sketch below.
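A minimal sketch of this first stage, assuming an RGB image I (a perceptual space like L*a*b* may separate the colors better than RGB):
X = reshape(im2double(I), [], 3); % each pixel as a 3-vector
[idx, ~, ~, D] = kmeans(X, 3, 'Replicates', 3); % D: distance of each pixel to all 3 centroids
labels = reshape(idx, size(I,1), size(I,2)); % per-pixel color class
imagesc(labels); axis image off % inspect the three color groups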
Second stage - separate the cell from the background. Do a binary segmentation using graph-cut. The data cost for each pixel is either the distance to the background color (if the pixel is labeled "background") or the minimal distance to the other two colors (if the pixel is labeled "foreground"). Use image contrast to set the pairwise weights for the smoothness term.
Third stage - separate the two parts of the cell. Again do a binary segmentation using graph-cut but this time work only on pixels marked as "cell" in the previous stage. The data term for pixels that the k-means assigned to background but are labeled as cell should be zero for all labels (these are the "noise" pixels inside the cell).
You may find my matlab wrapper for graph-cuts useful for this task.
Hi, I'm attempting to filter an image with 4 objects in it using MATLAB. My first image had a black background with white objects, so it was straightforward to segment each object out by finding these large white sections using bwlabel.
The next image has noise in it, though. Now I have an image with white lines running through my objects, and the objects are actually connected to each other. How could I filter out these lines in MATLAB? What about salt and pepper noise? Are there MATLAB functions that can do this?
Filtering noise can be done in several ways. A typical noise filtering procedure is something like threshold > median filtering > blurring > threshold. However, information about the type of noise is very important for proper noise filtration. For example, since you have lines in your image, you can try a Hough transform to detect them and take them out of play (or houghlines). Another approach is to implement RANSAC. For salt & pepper noise, use medfilt2 with a window size that captures the noise characteristics (for example, a 3x3 window will deal well with noise fluctuations that are 1 pixel big).
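As an illustration, a minimal medfilt2 sketch (the filename is a placeholder and the window size an assumption):
img = imread('noisy_objects.png'); % hypothetical filename
if size(img,3) == 3, img = rgb2gray(img); end % medfilt2 expects a 2-D image
clean = medfilt2(img, [3 3]); % window sized to 1-pixel noise speckles
imshowpair(img, clean, 'montage')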
If you can live with distorting the objects a bit, you can use a closing (morphological) filter with a bit of contrast stretching. You'll need the Image Processing Toolbox, but here's the general idea (a rough code sketch follows the steps below).
Blur to kill the lines; otherwise the closing filter will erase your objects. You can use fspecial to create a Gaussian filter and imfilter to apply it.
Apply the closing filter to the image using imclose with a mask that's bigger than your noise but smaller than the object pieces (I used a 3x3 diamond in my example).
Threshold your image using im2bw so that every pixel gets turned to pure black or pure white based on a threshold.
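Here is a minimal sketch of those three steps; the filename, kernel sizes, and threshold choice are assumptions to tune:
img = im2double(rgb2gray(imread('objects.png'))); % hypothetical input image
g = fspecial('gaussian', [7 7], 2); % Gaussian kernel to soften the lines
blurred = imfilter(img, g, 'replicate');
closed = imclose(blurred, strel('diamond', 1)); % 3x3 diamond: bigger than noise, smaller than objects
bw = im2bw(closed, graythresh(closed)); % pure black/white (imbinarize in newer MATLAB)
imshow(bw)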
I've attached an example I had to do for a school project. In my case, the background was white and objects black and I stretched between the erosion and dilation. You can't really see the gray after the erosion, but it was there (hence the necessity for thresholding).
You can of course directly do the closing (erosion followed by dilation) and then threshold. Notice how this filtering distorts the objects.
FYI, salt-and-pepper noise is often cleaned up with a smoothing (e.g. moving average) filter, but that leaves the image grayscale. For my project, I needed pure black and white (for bwlabel), and the morphological filters worked great to completely obliterate the noise.
I've scanned an old photo with a paper texture pattern and I would like to remove the texture as much as possible without lowering the image quality. Is there a way, perhaps using the Image Processing Toolbox in MATLAB?
I've tried applying an FFT transformation (using a Photoshop plugin), but I couldn't find any clear white spots to paint over. Perhaps the pattern is not regular enough for this method?
You can see the sample below. If you need the full image I can upload it somewhere.
Unfortunately, you're pretty much stuck in the spatial domain, as the pattern isn't really repetitive enough for Fourier analysis to be of use.
As @Jonas and @michid have pointed out, filtering will help you with a problem like this. With filtering, you face a trade-off between the amount of detail you want to keep and the amount of noise (or unwanted image components) you want to remove. For example, the median filter used by @Jonas removes the paper texture completely (even the round scratch near the bottom edge of the image) but it also removes all texture within the eyes, hair, face and background (although we don't really care about the background so much, it's the foreground that matters). You'll also see a slight decrease in image contrast, which is usually undesirable. This gives the image an artificial look.
Here's how I would handle this problem (a rough code sketch follows the steps):
Detect the paper texture pattern:
Apply Gaussian blur to the image (use a large kernel to make sure that all the paper texture information is destroyed)
Calculate the image difference between the blurred and original images
Apply Gaussian blur to the difference image (use a small 3x3 kernel)
Threshold the above pattern using an empirically-determined threshold. This yields a binary image that can be used as a mask.
Use median filtering (as mentioned by #Jonas) to replace only the parts of the image that correspond to the paper pattern.
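A rough sketch of these steps (the filename, blur sizes, and threshold are assumptions to determine empirically):
img = im2double(rgb2gray(imread('old_photo.jpg'))); % hypothetical filename
blurred = imgaussfilt(img, 8); % large blur destroys the paper texture
pattern = img - blurred; % difference isolates the high-frequency texture
pattern = imgaussfilt(pattern, 0.5); % small (~3x3) blur of the difference image
mask = abs(pattern) > 0.04; % empirically-determined threshold
filtered = medfilt2(img, [15 15], 'symmetric'); % median-filtered version of the image
out = img; out(mask) = filtered(mask); % replace only the texture pixels
imshow(out)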
Paper texture pattern (before thresholding):
You want as little actual image information as possible to be present in the above image. You'll see that you can very faintly make out the edge of the face (this isn't good, but it's the best I have time for). You also want this paper texture image to be as even as possible (so that thresholding gives equal results across the image). Again, the right-hand side of the image above is slightly darker, meaning that thresholding it well will be difficult.
Final image:
The result isn't perfect, but it has completely removed the highly-visible paper texture pattern while preserving more high-frequency content than the simpler filtering approaches.
EDIT
The filled-in areas are typically plain-colored and thus stand out a bit if you look at the image very closely. You could also try adding some low-strength zero-mean Gaussian noise to the filled-in areas to make them look more realistic. You'd have to pick the noise variance to match the background. Determining it empirically may be good enough.
Here's the processed image with the noise added:
Note that the parts where the paper pattern was removed are more difficult to see because the added Gaussian noise is masking them. I used the same Gaussian distribution for the entire image but if you want to be more sophisticated you can use different distributions for the face, background, etc.
A median filter can help you a bit:
img = imread('http://i.stack.imgur.com/JzJMS.jpg');
%# convert rgb to grayscale
img = rgb2gray(img);
%# apply median filter
fimg = medfilt2(img,[15 15]);
%# show
imshow(fimg,[])
Note that you may want to pad the image first to avoid edge effects.
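For instance, medfilt2 accepts a padding option directly, which may be enough instead of an explicit padarray call:
fimg = medfilt2(img, [15 15], 'symmetric'); % mirror-pad borders to reduce edge artifacts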
EDIT: A smaller filter kernel than [15 15] will preserve image texture better, but will leave more visible traces of the filtering.
Well, I have tried a different approach using anisotropic diffusion, with the second diffusion coefficient, which operates on wider areas.
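If you want to try this in a recent MATLAB release, imdiffusefilt (R2018a+) implements Perona-Malik anisotropic diffusion, and its 'quadratic' conduction method corresponds to the second diffusion coefficient, which favors wide homogeneous regions. The filename and iteration count below are assumptions:
img = rgb2gray(imread('old_photo.jpg')); % imdiffusefilt expects a grayscale image
out = imdiffusefilt(img, 'ConductionMethod', 'quadratic', 'NumberOfIterations', 10);
imshow(out)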
Here is the output I got:
From what I can see in the picture, the noise has a relatively high frequency compared to the image itself, so applying a low-pass filter should work. Have a look at the power spectrum abs(fft2(...)) to determine the cutoff frequency.
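A minimal sketch for inspecting the spectrum of an image (the filename is a placeholder):
img = im2double(rgb2gray(imread('old_photo.jpg')));
F = fftshift(fft2(img)); % move zero frequency to the center
imagesc(log(1 + abs(F))); axis image % log scale makes the structure visible
colormap gray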