Can someone help me turn this image into a black-and-white (not grayscale) image where the particles are black and the background is white (or vice versa)?
It is not as simple as thresholding the image, since the background varies in intensity; subtracting a (Gaussian) blurred version improves the situation, but not enough.
best
Markus
I suggest you use high-pass filtering to remove the slow background variations, and then apply a threshold.
I have tried a very simple form of high-pass filter: convolve with a constant matrix (this is a low-pass filter) and then subtract the result from the original image.
See example result.
im = double(imread('tmp.jpg'));
im = im./max(im(:)); % normalize original image
N = 200; % select N of the order of background color spatial variations
imf = filter2(ones(N)/N^2,im); % normalized low-pass filter
imf = im - imf; % high-pass filter
imf = imf-min(imf(:)); % normalize between 0...
imf = imf/max(imf(:)); % ... and 1
threshold = .4; % select as appropriate
imft = imf < threshold; % dark particles map to 1
imagesc(imft), colormap(gray), axis image
I am building a pyramid of images: I take a large image and build successively smaller versions of it. I use interpolation to reduce the image, and I need to understand which interpolation method loses the least information between levels. This is what I mean by interpolation quality.
I am looking at horizontal gradients. Please tell me how good this criterion is, or whether there is something better.
Blurred = imfilter(img, PSF); % blur the original image
Blurred = im2double(Blurred);
Blurred2 = imresize(Blurred, [300 300], "Method", "bicubic"); % reduction with imresize
[x0,y0] = meshgrid(1:360,1:360);
[x, y] = meshgrid(1:1.2:360, 1:1.2:360);
Blurred3 = interp2(x0, y0, Blurred, x, y, "spline"); % reduction with interp2
gradX = diff(Blurred,1,1);
gradY = diff(Blurred,1,2);
gradX2 = diff(Blurred2,1,1);
gradY2 = diff(Blurred2,1,2);
gradX3 = diff(Blurred3,1,1);
gradY3 = diff(Blurred3,1,2);
[h, cx]=imhist(gradX);
[h2, cx2]=imhist(gradX2);
[h3, cx3]=imhist(gradX3);
h=log10(h);
h2 = log10(h2);
h3 = log10(h3);
figure, plot(cx, h)
hold on
plot(cx2, h2);
plot(cx3, h3);
hold off
You're using the finite difference approximation to the derivative. The units in gradX are intensity units/pixel, with "pixel" the distance between pixels (which is assumed to be 1). When you rescale your image, you increase the pixel size, but in the derivative you're still assuming the distance between pixels is 1. Thus, the values in gradX2 are larger than those in gradX. You'd have to normalize by the image width to correct for this effect.
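For example, a minimal sketch of that normalization, reusing the variable names from the code above:
scale = size(Blurred, 2) / size(Blurred2, 2); % e.g. 360/300 = 1.2, the increase in pixel spacing
gradX2_norm = gradX2 / scale; % gradient per original-image pixel, now comparable to gradX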
But still, after normalization, I don't see how this is a measure of quality of the interpolation. The right question would be: how well can I reconstruct Blurred from Blurred2? I'm assuming here that Blurred has been blurred just sufficiently to avoid aliasing when resampling the image.
I would apply a 2nd round of interpolation to Blurred2 to recover an image of the same size as Blurred, then compare the two images using MSE or similar error measure.
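A minimal sketch of that comparison, assuming Blurred is a grayscale double image and Blurred2 comes from imresize as above:
Reconstructed = imresize(Blurred2, size(Blurred), 'bicubic'); % upsample back to the original size
mse = mean((Blurred(:) - Reconstructed(:)).^2); % mean squared reconstruction error
psnr_dB = 10*log10(1/mse); % PSNR, assuming intensities in [0,1]
A lower MSE (higher PSNR) after this round trip indicates that less information was lost by the interpolation method.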
I'm trying to make a confusion matrix for some classification problem.
So far, I've managed to make a 10 by 10 matrix that stores the accuracy of my estimation for the classification problem.
Now I want to display it as a 10 by 10 square image in which a cell is darker when the corresponding matrix entry is higher (the values range from 0 to 100).
I've never done graphics with Matlab before.
Any help would be greatly appreciated.
Thanks.
I would use image. For example:
img = 100*rand(4,4); % Your data (values from 0 to 100)
img = img./100*64; % The `image` function indexes directly into the (64-entry) colormap
image(img); % or `image(img')` depending on how you want the axes
colormap('gray'); % For grayscale images
axis equal; % For square pixels
Or for inverted colours you can change the one line to:
img = 64-img./100*64;
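A possible alternative, starting from the raw 0-100 matrix (a sketch, not from the answer above): imagesc scales the data onto the colormap automatically, so the manual scaling can be skipped:
imagesc(img, [0 100]); % map the value range 0-100 onto the full colormap
colormap(flipud(gray)); % flip the gray colormap so that higher values appear darker
colorbar;
axis equal tight;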
I want to remove the blood vessels in these images (please suggest a method) and then detect the microaneurysms (the small red dots in the images). Below is my image after enhancement:
You could do something like this:
A = imread('tFKeD.jpg');
BW = im2bw(A); % binarize (the input is assumed to be nearly binary already)
IM2 = imcomplement(BW); % invert the image so that your objects are 1
se = strel('diamond',3); % create a morphological structuring element
BW2 = imdilate(IM2,se); % dilate to connect the vessel fragments
L = bwlabel(BW2); % label your objects
E = regionprops(L,'area'); % get the respective areas
Area = cell2mat(struct2cell(E)); % convert to a vector
[~,largestObject] = max(Area); % find the object with the largest area
vessel = (L == largestObject); % the largest object is the vessel tree
imshow(vessel)
I suggest the following approach:
Take a threshold on the input image and keep all the values below the threshold.
This will generate a mask, in which the blood vessels and the microaneurysms are marked in white, and the rest is marked in black.
The threshold value can be determined by using the image histogram.
From looking at the image, it seems that the threshold should be low (due to the fact that the blood vessels and the microaneurysms are relatively dark).
Calculate the connected components in the image using bwconncomp.
Perform noise cleaning on the mask from the previous stage. This can be done with morphological operations (such as imclose), or by zeroing out connected components that are too small to be classified as microaneurysms.
The biggest connected component should represent the blood vessels - remove it from your mask.
The output of this stage will be the desired result; a rough sketch of these steps is given below.
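A rough MATLAB sketch of these steps (the file name, threshold, and size limit are assumptions that would need tuning on the actual image):
I = im2double(imread('enhanced.png')); % placeholder file name for the enhanced image, assumed grayscale
T = 0.3; % low threshold, chosen by inspecting the histogram (imhist(I))
mask = I < T; % blood vessels and microaneurysms become white
mask = imclose(mask, strel('disk', 2)); % morphological noise cleaning
CC = bwconncomp(mask); % connected components
numPixels = cellfun(@numel, CC.PixelIdxList); % size of each component
[~, biggest] = max(numPixels);
mask(CC.PixelIdxList{biggest}) = false; % remove the largest component (the blood vessels)
mask = bwareaopen(mask, 5); % drop components too small to be microaneurysms
imshow(mask) % the remaining blobs are microaneurysm candidates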
I have almost 1000 images from a similar dataset; all of them have a black background and one object (a skin cancer mole). The problem is that the objects are in different orientations, and I want the objects in all images to have the same orientation.
MATLAB code would be appreciated.
Answer found:
Check the two images of a skin lesion (mole) shown below:
We assume the code runs inside a loop over the folder containing the 1000 images:
Code:
% I1= first image
I=imread(I1); %% Image input
BW = im2bw(I,0.004);
%%%%% Getting the biggest region (in case segmentation gives unclear results)
[L, num] = bwlabel(BW, 8);
count_pixels_per_obj = sum(bsxfun(@eq, L(:), 1:num)); % pixel count for each labeled object
[~,ind] = max(count_pixels_per_obj);
biggest_region = (L==ind);
%%%%%%
%%% getting orientation
s = regionprops(biggest_region, 'Orientation');
data=s.Orientation;
%%% Orientation end
Y = imrotate(I, -data, 'bilinear', 'loose'); % rotate by the negative orientation angle
figure;
imshow(Y)
%%% Now the rotated image will be shown as
%% Now write the image in any other folder
%% that have all images aligned
imwrite(Y, 'test.jpg');
%% Hope it will save time for other. Thanks
I'm trying to apply an algorithm only to a specific region of an image. I tried imfreehand, but I was not able to achieve that with this function.
So, is there some way, when running my code, to apply the operations only to a specific region of the image in MATLAB?
Thanks.
Using a mask defined by any of the "imroi" functions (imfreehand and imellipse included), you can use roifilt2 to filter just the ROI with a given filter or function.
First, define the area:
imshow(I); % display your image
h = imfreehand; % now pick the region
BW = createMask(h); % make the binary (BW) mask from the ROI
Then, use roifilt2 in one of the following ways -
Define a filter and apply it:
H = fspecial('unsharp');
I2 = roifilt2(H,I,BW);
Apply a given function to the roi:
I2 = roifilt2(I, BW, @histeq);
Apply a given function to the roi, specifying parameters:
fh = @(I) histeq(I,5); % define the function
I2 = roifilt2(I, BW, fh);
The last is equivalent to calling I2 = histeq(I,5); but it only operates on the defined roi.
Edited to add:
If you want to call multiple functions on the roi, it may be easiest to define your own function, which takes an image input (and optionally, other parameters), applies the appropriate filters/functions to the image, and outputs a final image - you would then call "myfunc" in the same way as "histeq" above.
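For example, a minimal sketch of such a function ("myfunc" and the chosen operations are just placeholders):
function out = myfunc(in)
out = medfilt2(in); % e.g. median filtering ...
out = histeq(out); % ... followed by histogram equalization
end
Then, with I and BW defined as above:
I2 = roifilt2(I, BW, @myfunc);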
You can try roipoly.
There is an example on SO here.
Here's an example:
img = imread('peppers.png'); % load the image we want to use (a standard MATLAB demo image)
[BW,xi,yi] = roipoly(img); % draw a polygonal region of interest
BW = repmat(uint8(BW),[1,1,3]); % make a 3-channel mask to handle RGB
selIMG = img.*BW; % apply the mask to retain the roi and set all else to 0
imshow(selIMG)
se = strel('disk',3);
erosion = imerode(selIMG,se);
result_image = imsubtract(selIMG,erosion);
imshow(result_image)
Edit
On erode: as the MATLAB documentation explains, imerode picks the lowest value from the surrounding pixels (imdilate does the opposite). This means that the original treatment in my answer is inadequate for imerode; it would be better to set the pixels outside the selection to the maximum of the color scale. Here is an example of how to do this "manually":
img = imread('peppers.png'); % load the image we want to use (a standard MATLAB demo image)
[BW,xi,yi] = roipoly(img); % logical mask which contains the pixels of interest
nBW = uint8(~BW); % inverse of the logical mask, to pick the surrounding pixels
surroundingMaxedOut = repmat(255*nBW,[1,1,3]); % set the surrounding pixels to the maximum value
nBW = repmat(nBW,[1,1,3]); % make the mask 3-channel to pick the surrounding pixels
BW = repmat(uint8(BW),[1,1,3]); % make the mask 3-channel to handle RGB
selIMG = img.*BW; % pick the image region of interest
selIMG = selIMG + surroundingMaxedOut; % final selection with the surrounding pixels maxed out
imshow(selIMG) % inspect the selection
se = strel('disk',3);
erosion = imerode(selIMG,se); % apply erosion
finalIMG = img.*nBW + BW.*erosion; % insert the eroded selection back into the original image
imshow(finalIMG)
As other answers show, MATLAB has routines that handle these operations implicitly and are more efficient, not least in terms of memory management; however, this example gives you more control, so you can see what is happening.