I have this image of some edges:
I managed to link disconnected white edges, skeletonize, dilate the edges a bit, crop, and label the complement, all using morphological operations in scikit-image.
Below is the resulting cropped and labeled image. Colorbar added to show range.
Now I would like to use dilation again to eliminate the black edges (I know the thickness, since I previously dilated the skeleton using disk(3)).
I tried dilated = dilation(cropped, disk(4))
but I get this error message:
C:\Users\...\Continuum\Anaconda\lib\site-packages\skimage\util\dtype.py:103:
UserWarning: Possible sign loss when converting negative image of type int32 to positive image of type uint8.
"%s to positive image of type %s." % (dtypeobj_in, dtypeobj))
C:\Users\...\AppData\Local\Continuum\Anaconda\lib\site-packages\skimage\util\dtype.py:107:
UserWarning: Possible precision loss when converting from int32 to uint8
"%s to %s" % (dtypeobj_in, dtypeobj))
I am not sure I understand. Since the range is 0-4, I expect no negative values. If I run print cropped.min(), cropped.max() I get 0 4
Can anyone suggest what the problem is and if there is any way around it?
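For reference, here is a minimal sketch of the situation (the labeled array below is a made-up stand-in for cropped). Since the labels 0-4 fit in uint8, casting explicitly before dilating avoids the implicit int32-to-uint8 conversion that appears to trigger the warning:

```python
import numpy as np
from skimage.morphology import dilation, disk

# Made-up stand-in for the labeled image `cropped` (int32, labels 0-4,
# as produced by skimage.measure.label).
cropped = np.zeros((30, 30), dtype=np.int32)
cropped[5:12, 5:12] = 2
cropped[15:25, 15:25] = 4

# Cast to uint8 explicitly before dilating; the labels 0-4 fit without
# loss, so no implicit conversion (and no warning) should occur.
dilated = dilation(cropped.astype(np.uint8), disk(4))
```
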
I hope you might have a suggestion for my struggle (MATLAB). This is the result of the imfill holes function, but it left a lot of segments unfilled. Is there anything I can try to fix this?
imfill(BW1,'holes')
You may use bwmorph with the 'bridge' argument to bridge small gaps before imfill.
'bridge'
Bridges unconnected pixels, that is, sets 0-valued pixels to 1 if they have two nonzero neighbors that are not connected.
Here is a code sample:
I = imread('holes.jpg');
% Remove top and bottom "white frame".
I = I(7:end-6, :);
% Convert to binary image with manual threshold (could be that binarize is required only due to JPEG artifacts).
I = imbinarize(I, 0.3);
% Bridges unconnected pixels (morphological operation).
J = bwmorph(I, 'bridge');
% Fill holes
K = imfill(J, 'holes');
Remarks:
Please consider posting the original image (before fill).
Consider posting images in PNG format instead of JPEG - JPEG compression loses information and creates compression artifacts.
The image you have posted has a "white frame", that is probably not part of the original image. I decided to keep the left and right parts for filling the shapes next to the borders.
Result:
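For anyone porting this to Python, roughly the same bridge-then-fill idea can be sketched with scipy.ndimage; binary_closing stands in for bwmorph's 'bridge', and the tiny test image below is made up:

```python
import numpy as np
from scipy import ndimage

# Made-up binary image: a one-pixel-thick square outline with a small gap,
# so a plain hole fill would leak out through the gap.
bw = np.zeros((20, 20), dtype=bool)
bw[5, 5:15] = True      # top edge (gap added below)
bw[14, 5:15] = True     # bottom edge
bw[5:15, 5] = True      # left edge
bw[5:15, 14] = True     # right edge
bw[5, 9] = False        # the gap

# Close small gaps first (rough analogue of bwmorph(I, 'bridge')),
# then fill the now-enclosed holes.
closed = ndimage.binary_closing(bw, structure=np.ones((3, 3)))
filled = ndimage.binary_fill_holes(closed)
```
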
I have imported an image. I have converted it to double precision and performed some filtering on it.
When I plot the result with imshow, the double image is too dark. But when I use imshowpair to plot the original and the final image, both images are correctly displayed.
I have tried to use uint8, im2uint8, multiply by 255 and then use those functions, but the only way to obtain the correct image is using imshowpair.
What can I do?
It sounds like the majority of your intensities / colour data fall outside the dynamic range that imshow accepts when displaying double data.
I also see that you're using im2double, but im2double simply converts the image to double; if the image is already double, nothing happens. The problem is probably the way you are filtering the images. Are you doing some sort of edge detection? The reason you're getting dark images is most likely that the majority of your intensities are negative, or hovering around 0. When displaying double images, imshow assumes the dynamic range of intensities is [0,1].
Therefore, one way to resolve your problem is to do:
imshow(im,[]);
This rescales the display range so that the smallest value is mapped to 0 and the largest to 1.
If you'd like a more permanent solution, consider creating a new output variable that does this for you:
out = (im - min(im(:))) / (max(im(:)) - min(im(:)));
This performs the same scaling that imshow does when displaying the data for you. You can now just do:
imshow(out);
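In NumPy terms (for anyone comparing with Python), the same min-max shift looks like this; the little array is made up:

```python
import numpy as np

# Made-up filtered result with negative values, like the situation described.
im = np.array([[-2.0, 0.0],
               [ 1.0, 3.0]])

# The same rescaling imshow(im, []) applies for display:
# smallest value -> 0, largest value -> 1.
out = (im - im.min()) / (im.max() - im.min())
```
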
I am new to MATLAB and to image processing, and I am having some issues normalizing an image, but I am not sure why.
In my code I store the image as a black and white image in lim3, then:
minvalue = min(min(min(lim3)));
maxvalue = max(max(max(lim3)));
normimg = (lim3-minvalue)*255/(maxvalue-minvalue);
Unfortunately, this gives a new image that is exactly the same as lim3, but I am not sure why. Ideally, I don't want to use the histeq function, so if someone could explain how to fix this code to get it to work, I would appreciate it.
All of the people above in the comments have raised very good points, but if you want a tl;dr answer, here are the most important points:
If your minimum and maximum values are 0 and 255 respectively for all colour channels, this means that the minimum and maximum colour values are black and white respectively. This is the same situation if your image is a single-channel / grayscale image. As such, if you try to normalize your output image, it will look the same, as you would be multiplying and dividing by the same scale. However, just as a minor note, your above code will work if your image is grayscale. I would also get rid of the superfluous nested min/max calls.
You need to make sure that your image is cast to double before you do this scaling, as you will most likely generate floating point numbers. Should your scaled values be < 1, they will inadvertently be truncated to 0. In general, you will lose precision when normalizing the intensities, as the type of the image is most likely uint8. You also need to remember to cast back to uint8 when you're done, as that was the original type of the image before the cast. You can do this casting yourself, or you can use im2double, as this essentially does what you want under the hood, but normalizes the image's intensities to the range [0,1].
As such, if you really really really really... really... want to use your code above, you'd have to do something like this:
lim3 = double(lim3); %// Cast to double
minvalue = min(lim3(:)); %// Note the change here
maxvalue = max(lim3(:)); %// Got rid of superfluous nested min/max calls
normimg = uint8((lim3-minvalue)*255/(maxvalue-minvalue)); %// Cast back to uint8
This code will work if the image you are reading in is grayscale.
Bonus - For Colour Images
However, if you want to apply the above to colour images, I don't recommend the above approach. The reason is that you will see no difference if the overall minimum and maximum values are already 0 and 255 respectively. What I would recommend you do is normalize each colour plane separately, so that each plane is pushed to the full range on its own rather than being bound to the minimum and maximum over the whole image.
As such, I would recommend you do something like this:
lim3 = double(lim3); %// Cast to double
normimg = uint8(zeros(size(lim3))); %// Allocate output image
for idx = 1 : 3
    chan = lim3(:,:,idx);
    minvalue = min(chan(:));
    maxvalue = max(chan(:));
    normimg(:,:,idx) = uint8((chan-minvalue)*255/(maxvalue-minvalue)); %// Cast back to uint8
end
The above code accesses each colour plane individually, normalizes the plane, then puts the result in the output image normimg. I would recommend you use the above approach instead if you want to see any contrast differences for colour images.
I have computed an image with values between 0 and 255. When I use imageview(), the image is correctly displayed, in grey levels, but when I want to save this image or display it with imshow, I have a white image, or sometimes some black pixels here and there:
Whereas with imageview():
Can some one help me?
I think that you should use imshow(uint8(image)); on the image before displaying it.
Matlab expects images of type double to be in the 0..1 range and images that are uint8 to be in the 0..255 range. You can convert the range yourself (changing values in the process), do an explicit cast (and potentially lose precision), or instruct Matlab to use the minimum and maximum values found in the image matrix as the black and white values to scale to when visualising.
See the following example with an uint8 image present in Matlab:
im = imread('moon.tif');
figure; imshow(im);
figure; imshow(double(im));
figure; imshow(double(im), []);
figure; imshow(im2double(im));
I'm trying to implement a morphological method for image colors from the article "Probabilistic pseudo-morphology for grayscale and color images". At one point, we compute the PCA of the entire image and evaluate a Chebyshev inequality (equation 11 in the paper: http://perso.telecom-paristech.fr/~bloch/P6Image/Projets/pseudoMorphology/Caliman-PR2014.pdf) for each of the 3 components, which gives us 3 pairs of vectors. We next have to represent these vectors back in RGB space. I don't understand how we do that. Can someone help me?
Looking at the paper, I'm not sure which representation you're talking about. I'm guessing Fig. 16, but I'm not sure. There's a note in the caption of Fig. 16 that's helpful: "(For interpretation of the references to color in this figure caption, the reader is referred to the web version of this article.)"
Possible answer: if you have a matrix of size A = (y_pixels,x_pixels,3), then you can display this as an RGB image via:
A = rand(100,100,3);
figure()
imshow(A)
Note that your matrix must be scaled in the range [0..1].
It seems easy to map your PCA scores for each pixel onto such a matrix and simply display that as RGB via imshow. Does that solve your problem?
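A rough NumPy sketch of that last step (random numbers stand in for the PCA scores):

```python
import numpy as np

# Made-up PCA scores per pixel: shape (rows, cols, 3), values unbounded.
rng = np.random.default_rng(0)
scores = rng.normal(size=(100, 100, 3))

# Rescale each of the 3 components independently into [0, 1] so the
# result can be displayed directly as an RGB image (e.g. with imshow).
rgb = np.empty_like(scores)
for c in range(3):
    chan = scores[:, :, c]
    rgb[:, :, c] = (chan - chan.min()) / (chan.max() - chan.min())
```
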