This code produces an error:
validateattributes(I, {'logical','uint8', .....
Can anyone tell me why this happens?
I = imread('flowers.jpg');
I=im2double(I);
points = detectSURFFeatures(I);
imshow(I); hold on;
plot(points.selectStrongest(10));
detectSURFFeatures takes grayscale images. If your image is RGB, you should call rgb2gray first.
The error message should be telling you something to that effect.
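For example, a minimal sketch of the fix (assuming flowers.jpg is an RGB image):
I = imread('flowers.jpg');
Igray = rgb2gray(I);               % detectSURFFeatures needs a 2-D grayscale image
points = detectSURFFeatures(Igray);
imshow(Igray); hold on;
plot(points.selectStrongest(10));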
I have written some lines of code using the following function:
adaptivethreshold(IM,ws,c)
and it gives me a mask bw. I multiply this mask with my original image bb and show the result.
clear
clc
bb=dicomread('30421573');
figure(1)
imagesc(bb)
bw=adaptivethreshold(bb,50,128);
imaa=double(bw).*double(bb);
figure(2)
image(imaa)
The original image and the result are shown below:
It does not seem to be giving me any mask for my image. Is there any way I can extract those yellow parts from my results?
Try to create the mask first, before applying it to the image, i.e.
bw=adaptivethreshold(bb,50,128);
BW = imbinarize(bb,bw);
imaa=double(bw).*double(BW);
figure(2)
image(imaa)
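If the goal is to extract only the highlighted (yellow) regions from the original image, here is a minimal sketch, assuming adaptivethreshold returns a logical mask bw of the same size as bb:
bw = adaptivethreshold(bb, 50, 128);   % logical mask from adaptivethreshold
extracted = double(bb);
extracted(~bw) = 0;                    % keep only the masked pixels, zero out the rest
figure(3)
imagesc(extracted)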
Very new to MATLAB, just figuring some things out, and I have a question. I am basically trying to filter/blur an image using conv2(), but I get an all-white image when I display it with imshow().
I am reading the image in with
testImage = imread('test.bmp');
This is a uint8 grayscale image.
I am trying to convolve the image with a 4 x 4 matrix.
fourByFour = ones(4);
When I actually execute this, I get an all-white image from imshow():
convolvedImage = conv2(testImage, fourByFour);
I would expect a filtered (blurred) image, not an entirely white one.
Any help would be appreciated.
I don't have your test image, so I will explain using another image. By definition, conv2 returns the two-dimensional convolution.
So please look at this little code:
clc;% clear the screen
clear all;% clear everything
close all;% close all figures
test = imread('test.bmp');
% read the test image: .bmp format, size 294x294x3, class uint8
fourByFour = ones(4); % define 4 * 4 matrix all ones
convolvedImage = conv2(test(:,:,1), fourByFour);
% convolve the ones kernel with the image; conv2 needs a 2-D input, so we apply it to a single channel
figure, imshow(convolvedImage,[])
This is my command window output:
I'm using MATLAB 2017a, and if I use conv2(test, fourByFour); instead of conv2(test(:,:,1), fourByFour); the error is:
Error using conv2
N-D arrays are not supported.
So we should pay attention to the class type and the dimensions. One more thing: in your command window you can type edit conv2 to read the details of this function and how to use it, but never actually edit it :). Thanks
test = imread('test.bmp');
% read the test image: .bmp format, size 294x294x3, class uint8
fourByFour = ones(4);
% define 4 * 4 matrix all ones
test=rgb2gray(test);
% convert the colour image to grayscale
convolvedImage = conv2(double(test), double(fourByFour));
%perform the convolution
figure, imshow(convolvedImage,[])
% display the image
When I try to run
BW = edge(im,'canny')
where im is my image (256x256 uint8).
This is the error I get:
Error using gradient (line 3)
Not enough input arguments.
Error in edge>smoothGradient (line 709)
derivGaussKernel = gradient(gaussKernel);
Error in edge (line 213)
[dx, dy] = smoothGradient(a, sigma);
Error in ps_1_1 (line 2)
BW = edge(im,'canny')
The function works fine for me when I tested it, so I think you are probably passing an image to the function which is not grayscale (i.e. each pixel having a single gray value). If that is not the case either, try to reinstall the toolbox, because, as antony mentioned in the comments, the function works fine. In any case, be sure to read the edge documentation carefully.
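For instance, a quick sanity check along these lines (the file name is just a placeholder) makes sure edge receives a 2-D grayscale image:
im = imread('myimage.png');   % placeholder file name, use your own image
if size(im,3) > 1
    im = rgb2gray(im);        % edge expects a 2-D grayscale input
end
BW = edge(im, 'canny');
imshow(BW)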
Hi all, I am using the montage command of MATLAB to display images. However, I am facing a problem. The commands that I use are given below:
dirOutput = dir('C:\Users\DELL\Desktop\book chapter\Journal chan vese\robust contour initialization\book for document\4 phase\*.jpg');
fileNames = {dirOutput.name}'
montage(fileNames, 'Size', [1 6]);
export_fig combined1.jpg -r300
I have 6 images (all grayscale). However, the command prompt immediately throws an error like this:
Error using montage>getImagesFromFiles (line 349)
FILENAMES must contain images that are the same size.
Error in montage>parse_inputs (line 225)
[I,cmap] = getImagesFromFiles(varargin{1});
Error in montage (line 112)
[I,cmap,mSize,indices,displayRange] = parse_inputs(varargin{:});
Error in montage_pics (line 3)
montage(fileNames, 'Size', [1 6]);
I am even uploading some of my images here:
As can be seen clearly, all the images look grayscale. I then read the image sizes and they are as follows:
1. 128x128, 2. 128x128x3, 3. 128x128x3, 4. 128x128, 5. 128x128x3, 6. 128x128x3.
So some of the images are in fact treated as colour images.
My question is how to use the montage command for such images. Another problem is that the montage command always needs images of the same size, so I wanted to avoid these loopholes.
Of course I could use a software tool to convert the images to the required format, but that is a bad way to work. I believe the code below, if added to my original code, will solve this problem:
%Read Each Image
I=imread('image');
I=imresize(I,[128 128]);
I=I(:,:,1);
%Apply montage command
However, I have failed to integrate this code into my original code. Please help me solve this problem. Thanks in advance for your valuable suggestions and help.
To use montage you have to make sure that:
the size matches
the datatype matches
all images are grayscale (or all images are RGB, but do not mix)
images={'eight.tif','fabric.png','football.jpg'};
%intended size
ssize=128;
%preallocation
IALL=zeros(ssize,ssize,1,numel(images));
for idx=1:numel(images)
%get image, ensure double to avoid issues with different colour depths
I=im2double(imread(images{idx}));
%resize
I=imresize(I,[ssize,ssize]);
%if rgb, change to gray
if size(I,3)>1 %rgb image
I=rgb2gray(I);
end
%insert
IALL(:,:,:,idx)=I;
end
montage(IALL);
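To plug this into the original code from the question, something like the following sketch should work; the folder variable is a placeholder for the path used in the dir call above, and the loop body is the same as in the snippet above:
folder = 'C:\path\to\your\images';            % placeholder for your image folder
dirOutput = dir(fullfile(folder, '*.jpg'));
fileNames = fullfile(folder, {dirOutput.name});
ssize = 128;
IALL = zeros(ssize, ssize, 1, numel(fileNames));
for idx = 1:numel(fileNames)
    I = im2double(imread(fileNames{idx}));
    I = imresize(I, [ssize, ssize]);
    if size(I,3) > 1
        I = rgb2gray(I);                      % mixed grayscale/RGB: force grayscale
    end
    IALL(:,:,:,idx) = I;
end
montage(IALL, 'Size', [1 numel(fileNames)]);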
How can I use the qtdecomp(Image,threshold) function in MATLAB to find a quadtree decomposition of an RGB image?
I tried this:
Ig = rgb2gray(I); % I is the rgb image
S = qtdecomp(I,.27);
but I get this error:
??? Error using ==> qtdecomp>ParseInputs
A must be two-dimensional
Error in ==> qtdecomp at 88
[A, func, params, minDim, maxDim] = ParseInputs(varargin{:});
Also I get this error:
??? Error using ==> qtdecomp>ParseInputs
Size of A is not a multiple of the maximum block dimension
Can someone tell me how I can do it?
One obvious error... In the code above you are still passing the original RGB image I to the QTDECOMP function. You have to pass Ig instead:
S = qtdecomp(Ig,.27);
There are two problems with the code in your original post:
1) The first error is because you need to supply Ig, the greyscale version of your image, to the qtdecomp function, rather than the colour version I:
S = qtdecomp(Ig, .27);
2) The second error is because for the function qtdecomp, the image's size needs to be square and a power of 2. I suggest you resize the image in an image editor. For example, if your image is 1500x1300, you probably want to resize or crop it to 1024x1024, or perhaps 2048x2048.
You can find the size of the greyscale version of your image with this MATLAB command:
size(Ig)
To crop it to the 1024x1024 in the top-left corner, you can run this MATLAB command:
Ig = Ig(1:1024, 1:1024);
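Putting both fixes together, a minimal sketch (assuming I is an RGB image that is at least 1024x1024):
Ig = rgb2gray(I);            % qtdecomp needs a 2-D grayscale image
Ig = Ig(1:1024, 1:1024);     % crop so the size is square and a power of 2
S = qtdecomp(Ig, .27);       % sparse matrix encoding the quadtree decomposition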