I want to segment my grayscale image as follows:
from skimage import io, color
from skimage.segmentation import slic, mark_boundaries
import matplotlib.pyplot as plt
img = io.imread(curr_img_path)
gray = color.rgb2gray(img)
assignment1 = slic(image=gray, n_segments=500, sigma=2, max_iter=100)
I am looking at the segmented image using
fig, ax = plt.subplots(2, 2, figsize=(10, 10), sharex=True, sharey=True)
ax[0, 0].imshow(mark_boundaries(gray, assignment1))
plt.show()
My problem: this shows me a regular grid, like a chessboard. I do not understand why; the docs say it is possible with grayscale images. By the way, my image has shape (352, 1216) and dtype float64, and there is no error message or anything else. I would be glad for any help.
While the compactness parameter can be left at its default value for images in Lab space, that default is too high for RGB and grayscale images.
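A minimal sketch with an explicit, much smaller compactness (0.1 is just a starting value to tune by eye; on scikit-image versions older than 0.19, replace channel_axis=None with multichannel=False):
from skimage import io, color
from skimage.segmentation import slic, mark_boundaries
import matplotlib.pyplot as plt

img = io.imread(curr_img_path)
gray = color.rgb2gray(img)

# the default compactness=10 suits Lab-space colour images;
# for grayscale a value around 0.1 lets segments follow intensity
segments = slic(gray, n_segments=500, compactness=0.1, sigma=2,
                channel_axis=None)

plt.imshow(mark_boundaries(gray, segments))
plt.show()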
Following the example from the documentation page of the centerCropWindow2d function, I am trying to dynamically crop an image based on a 'scale' value that is set by the user. In the end, this code would be used in a loop that would scale an image at different increments, and compare the landmarks between them using feature detection and extraction methods.
I wrote some test code to try to isolate one instance of this user-specified image cropping:
file = 'frameCropped000001.png';
image = imread(file);
scale = 1.5;
scaled_width = scale * 900;
scaled_height = scale * 635;
target_size = [scaled_width scaled_height];
scale_window = centerCropWindow2d(size(image), target_size);
image2 = imcrop(image, scale_window);
figure;
imshow(image);
figure;
imshow(image2);
but I am met with this error:
Error using centerCropWindow2d (line 30)
Expected input to be integer-valued.
Error in testRIA (line 20)
scale_window = centerCropWindow2d(size(image), target_size);
Is there no way to use this function the way I explained above? If not, what is the easiest way to "scale" an image without just resizing it (that is, if I scale it by 0.5, the image stays the same size but is zoomed in by 2x)?
Thank you in advance.
I didn't take into account that the height and width for some scales would NOT be whole integers. Since MATLAB cannot crop images at fractional pixel positions, the "Expected input to be integer-valued." error popped up.
I solved my issue by applying floor() (MATLAB's counterpart of Math.floor()) to the 'scaled_width' and 'scaled_height' variables.
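A minimal sketch of the fixed crop (note that centerCropWindow2d expects the target size as [height width], so swap the two values if your numbers are the other way round):
file = 'frameCropped000001.png';
image = imread(file);

scale = 1.5;
% floor() forces whole-pixel dimensions, which centerCropWindow2d requires
scaled_width  = floor(scale * 900);
scaled_height = floor(scale * 635);

scale_window = centerCropWindow2d(size(image), [scaled_width scaled_height]);
image2 = imcrop(image, scale_window);
imshow(image2);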
I have an image like this:
My goal is to get the output shown under "background normalization" at this link.
Following the above link, I did the following:
(1) I first dilate the image to get the background,
(2) then try to remove it via normalized division.
I got the background:
However, when I try to do the normalized division, I get this :
(black borders added to make the boundary of the image clear)
this is my code:
image = imread('image.png');
image = rgb2gray(image);
se = offsetstrel('ball',9,9);
dilatedI = imdilate(image,se);
output = imdivide(image,dilatedI);
imshow(output,[]);
using
imshow(output)
just gives a black image.
I thought it might be a type-conversion issue, but based on the resources mentioned earlier, I am uncertain whether that is the case...
Any advice would be appreciated
Just make sure you don't do integer division! Your images are an integer type, so element-wise division rounds to the nearest whole number (for uint8 data, 4/5 returns 1 and 1/5 returns 0), never a floating-point value. Just convert to double before dividing:
image = imread('https://i.stack.imgur.com/bIVRT.png');
% image = rgb2gray(image); % not needed: the image hosted online is already grayscale, not RGB
se = offsetstrel('ball',21,21);
dilatedI = imdilate(image,se);
output = imdivide(double(image),double(dilatedI));
figure
subplot(121)
imshow(image);
subplot(122)
imshow(output);
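To see the rounding pitfall in isolation, a tiny made-up example (the values are arbitrary):
a = uint8(200);
b = uint8(220);
disp(a / b)                  % uint8 division rounds to the nearest integer: displays 1
disp(double(a) / double(b))  % 0.9091, the fraction you actually want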
I'm trying to subtract the background of an image using two images.
Image A is the background and image B is an image with things over the background.
I'm normalizing the images but I don't get the expected result.
Here's the code:
a = rgb2gray(im);
b = rgb2gray(im2);
resA = ((a - min(a(:)))./(max(a(:))-min(a(:))));
resB = ((b - min(b(:)))./(max(b(:))-min(b(:))));
resAbs = abs(resB-resA);
imshow(resAbs);
The resulting image is completely dark. Thanks to the answer from user saeed masoomi, I realized that this was because of the data type, so now I have the following code:
a = rgb2gray(im);
b = rgb2gray(im2);
resA = im2double(a);
resB = im2double(b);
resAbs = imsubtract(resB,resA);
imshow(resAbs,[]);
The resulting image is not well filtered: there are parts of image B that don't appear but should.
If I try doing this without normalizing, I still have the same problem.
The only difference between images A and B is the arms, which appear only in image B, so they should show up without any cut.
Can you see something wrong? Maybe I should filter with a threshold?
Do not normalize the two images. Background subtraction is typically done with identical camera settings, so the two images are directly comparable. If the background image doesn't have a bright object in it, normalizing like you do would brighten it w.r.t. the second image. The intensities are no longer comparable, and you'd see differences where there were none.
If you recorded the background image with different camera settings (different exposure time, illumination, etc) then background subtraction is a lot more complicated than you think. You'd have to apply an optimization scheme to make the two images comparable, such that their difference is sparse. You'd have to look through the literature for that, it's not at all trivial.
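Under the first assumption (identical camera settings), the subtraction itself reduces to something like this sketch, where the 0.1 threshold is a made-up starting value to tune:
a = im2double(rgb2gray(im));   % background only
b = im2double(rgb2gray(im2));  % background + objects
d = abs(b - a);                % note: no per-image normalization
mask = d > 0.1;                % threshold to suppress sensor noise; tune it
imshow(mask);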
Hi, please pay attention to your data type: MATLAB loads images as unsigned 8-bit integers (uint8, 0 to 255), which cannot hold float values such as 0.1 or 0.2, so a result like 1.2 is stored as 1.
Your computation goes wrong in uint8, as this example shows:
max=uint8(255); %uint8
min=uint8(20); %uint8
data=uint8(40); %uint8
normalized=(data-min)/(max-min) %uint8
The output will be:
normalized =
  uint8
   0
Oops! You may think this output should be 0.0851, but it's not, because the data type is uint8, so the result is 0. I guess that is why all your data is zero (the result image is dark). To prevent this mistake, MATLAB has a handy function named im2double, which converts uint8 to double and rescales all the data to lie between 0 and 1.
I2 = im2double(I) converts the intensity image I to double precision, rescaling the data if necessary. I can be a grayscale intensity image, a truecolor image, or a binary image.
So we can rewrite your code as below:
a = rgb2gray(im);
b = rgb2gray(im2);
resA = im2double(a);
resB = im2double(b);
resAbs = abs(resB - resA); % subtract the double copies so negative differences are not clipped
imshow(resAbs,[])
Edit: if the output image is still dark, check whether the two images actually differ:
if isempty(nonzeros(resAbs))
    disp('The two images are identical -> something is wrong')
else
    disp('The two images differ -> normal')
end
I have an image and a subimage which is cropped out of the original image.
Here's the code I have written so far:
val1 = imread(img);
val2 = imread(img_w);
gray1 = rgb2gray(val1);%grayscaling both images
gray2 = rgb2gray(val2);
matchingval = normxcorr2(gray2,gray1); % normalized cross-correlation: template (subimage) first, then the search image
[max_c,imax] = max(abs(matchingval(:)));
After this I am stuck. I have no idea how to make the whole image grayscale except for the subimage, which should stay in color.
How do I do this?
Thank you.
If you know what the coordinates are for your image, you can always just use rgb2gray on just the section of interest.
For instance, I tried this on an image just now:
im(500:1045,500:1200,1)=rgb2gray(im(500:1045,500:1200,1:3));
im(500:1045,500:1200,2)=rgb2gray(im(500:1045,500:1200,1:3));
im(500:1045,500:1200,3)=rgb2gray(im(500:1045,500:1200,1:3));
Here I took the rows (500 to 1045), the columns (500 to 1200), and the RGB depth (1 to 3) of the image and applied rgb2gray to just that section. I did it three times because the output of rgb2gray is a 2-D matrix while a color image is a 3-D matrix, so I needed to overwrite the region layer by layer.
This worked for me, making only part of the image gray but leaving the rest in color.
One issue to keep in mind: a color image has three dimensions while a grayscale image needs only two, so combining them means the grayscale data must sit inside a 3-D matrix.
Depending on what you want to do, this technique may or may not help.
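As a side note (not part of the answer above), repmat expresses that 2-D-to-3-D replication directly; a tiny sketch:
g = rgb2gray(im);           % 2-D grayscale
g3 = repmat(g, [1 1 3]);    % the same data replicated into an RGB-shaped 3-D array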
Judging from your code, you are reading the image and the subimage in MATLAB. What you need to know are the coordinates of where you extracted the subimage. Once you do that, simply take your original colour image, convert that to grayscale, then duplicate this image in the third dimension three times. You need to do this so that you can place colour pixels in this image.
A grayscale image, stored as RGB, has all three components equal. Duplicating the grayscale image in the third dimension three times therefore creates the RGB version of the grayscale image. Once you do that, simply use the row and column coordinates of where you extracted the subimage and place the subimage into the equivalent RGB grayscale image.
As such, given your colour image that is stored in img and your subimage stored in imgsub, and specifying the rows and columns of where you extracted the subimage in row1,col1 and row2,col2 - with row1,col1 being the top left corner of the subimage and row2,col2 is the bottom right corner, do this:
img_gray = rgb2gray(img);                        % 2-D grayscale version
img_gray = cat(3, img_gray, img_gray, img_gray); % replicate into an RGB-shaped array
img_gray(row1:row2, col1:col2, :) = imgsub;      % place the colour subimage back
To demonstrate this, let's try this with an image in MATLAB. We'll use the onion.png image that's part of the image processing toolbox in MATLAB. Therefore:
img = imread('onion.png');
Let's also define our row1,col1,row2,col2:
row1 = 50;
row2 = 90;
col1 = 80;
col2 = 150;
Let's get the subimage:
imgsub = img(row1:row2,col1:col2,:);
Running the above code, this is the image we get:
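For convenience, here is the same demo collected into one runnable script (identical logic to the pieces above):
img = imread('onion.png');                       % demo image shipped with MATLAB
row1 = 50; row2 = 90; col1 = 80; col2 = 150;
imgsub = img(row1:row2, col1:col2, :);           % the colour subimage

img_gray = rgb2gray(img);
img_gray = cat(3, img_gray, img_gray, img_gray); % RGB-shaped grayscale canvas
img_gray(row1:row2, col1:col2, :) = imgsub;      % paste the colour patch back
imshow(img_gray);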
I took the same example as rayryeng's answer and tried to solve by HSV conversion.
The basic idea is to set the second plane, i.e. the saturation plane, to 0 (so that the whole image becomes grayscale), then write the original saturation values back into that plane only for the subimage area, so that region alone keeps its color.
Code:
img = imread('onion.png');
img = rgb2hsv(img);                           % work in HSV space
sPlane = zeros(size(img(:,:,1)));             % zero saturation everywhere = grayscale
sPlane(50:90,80:150) = img(50:90,80:150,2);   % keep the original saturation in the subimage
img(:,:,2) = sPlane;
img = hsv2rgb(img);                           % back to RGB for display
imshow(img);
Output: (Same as rayryeng's output)
Related Answer with more details here
I'm looking to take in an image of 162x193 pixels and scale it down by 0.125, i.e. 162/8 = 20.25 and 193/8 = 24.125, so I would like a picture of size 20x24. The only problem I'm currently having is that the imresize function rounds up the pixel dimensions: I get an image of size 21x25 instead of 20x24. Is there any way of getting 20x24, or is this something I'm going to have to live with? Here is some code:
% Read in original image
imageBig = imread(strcat('train/',files(i).name));
% Resize the image
image = imresize(imageBig,0.125);
disp(size(image));
It appears that when the scale argument is provided, imresize ceils the output dimensions, as your results show. So an obvious fix is to compute the rounded dimensions yourself and pass those as the output size.
Code
% Scaling ratio
scale1 = 0.125;

% Get the scaled-down version with explicitly rounded dimensions
[M,N,~] = size(imageBig);
image = imresize(imageBig,[round(scale1*M) round(scale1*N)]);
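If you want to guarantee rounding down for every size (round and floor happen to agree for a 162x193 input), floor works the same way:
[M,N,~] = size(imageBig);
image = imresize(imageBig, floor(scale1*[M N]));  % 20x24 for a 162x193 input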