How to work with quantiz in MATLAB/Octave?

I have code which resizes an image and computes its entropy. The resize works correctly, but I cannot get quantiz to work. The function throws this error:
Error using quantiz (line 31)
The input signal must be a real vector.
How can I fix it? (grayImage is a 2D array with values from 0 to 255.)
normalImage = imread("porshe.bmp");
grayImage = rgb2gray(normalImage);
width = 750;
height = 500;
ebase = entropy(grayImage);
resizedImage2 = imresize(grayImage,0.5);
resizedImage4 = imresize(grayImage,0.25);
er2 = entropy(resizedImage2);
er4 = entropy(resizedImage4);
quantizImage8 = quantiz(grayImage,8);
quantizImage16 = quantiz(grayImage,16);
quantizImage64 = quantiz(grayImage,64);
imshow(quantizImage64)
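For reference, quantiz expects a real vector signal plus a partition vector of decision boundaries, while the code above passes a 2D uint8 matrix and a scalar level count, which is what triggers the error. Below is a rough sketch of one way to quantize the grayscale image, assuming the Communications Toolbox quantiz (the Octave communications package version should accept the same arguments) and 0-255 pixel values:

% Sketch: quantize grayImage to 8 levels with quantiz.
nLevels = 8;
sig = double(grayImage(:));              % quantiz needs a real vector, not a matrix
edges = linspace(0, 255, nLevels+1);
partition = edges(2:end-1);              % nLevels-1 interior decision boundaries
codebook = linspace(0, 255, nLevels);    % one representative value per region
[~, quants] = quantiz(sig, partition, codebook);
quantizImage8 = reshape(quants, size(grayImage));  % back to the original 2D shape
imshow(uint8(quantizImage8))

The same pattern, with a different nLevels, would replace the 16- and 64-level quantiz calls above.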

Related

How do I use the screenRect function in MATLAB correctly?

I run the following code as I am trying to display numbers at random locations on the screen:
% Screen parameters
center = [screenRect(3)/2 screenRect(4)/2];
[mainwin, screenRect] = Screen(0, 'OpenWindow');
% potential locations to place sequences
nrow = 6; ncolumn = 6; cellsize = 100;
for ncells = 1:nrow*ncolumn
    xnum = (mod(ncells-1, ncolumn)+1)-3.5;
    ynum = ceil(ncells/nrow)-3.5;
    cellcenter(ncells,1) = center(1)+xnum.*cellsize;
    cellcenter(ncells,2) = center(2)+ynum.*cellsize;
end
% randomise position of numbers
cellindex = Shuffle(1:nrow*ncolumn); % randomize the position of the sequences within the grid specified earlier
itemloc = [cellcenter(cellindex(1),1)-cellsize/2, cellcenter(cellindex(1),2)-cellsize/2, cellcenter(cellindex(1),1)+cellsize/2, cellcenter(cellindex(1),2)+cellsize/2];
I get the following error:
Unrecognized function or variable 'screenRect'.
Error in First_draft (line 4)
center = [screenRect(3)/2 screenRect(4)/2];
If you are sure that you installed the toolbox correctly, then first obtain the screenRect variable using the Screen function:
[mainwin, screenRect] = Screen(0, 'OpenWindow');
Then use the screenRect variable:
center = [screenRect(3)/2 screenRect(4)/2];

How to get pixels per meter in MATLAB

I have a vector shapefile, in units of meters, representing the boundary of Germany. I am converting it into raster format, with each pixel representing 300 meters. After conversion I checked the image information using imfinfo() in MATLAB, but the result reports the unit as "Inch". I am quite confused and do not know how to convert inches to meters as the pixel size unit. Could you please give me some idea?
% Code
R6 = shaperead('B6c.shp');
%Nord
XN6 = double(R6(4).X); YN6 = double(R6(4).Y);
XN6min = min(XN6(XN6>0)); XNmax = max(XN6);
YN6min = min(YN6(YN6>0)); YNmax = max(YN6);
%Bayern
XB6 = double(R6(7).X); YB6 = double(R6(7).Y);
XB6min = min(XB6(XB6>0)); XB6max = max(XB6);
YB6min = min(YB6(YB6>0)); YB6max = max(YB6);
%Schleswig-Holstein
XSH6 = double(R6(9).X); YSH6 = double(R6(9).Y);
XSH6min = min(XSH6(XSH6>0)); XSH6max = max(XSH6);
YSH6min = min(YSH6(YSH6>0)); YSH6max = max(YSH6);
%Sachsen
XS6 = double(R6(6).X); YS6 = double(R6(6).Y);
XS6min = min(XS6(XS6>0)); XS6max = max(XS6);
YS6min = min(YS6(YS6>0)); YS6max = max(YS6);
dx = round(XS6max-XN6min);
dy = round(YSH6max-YB6min);
M = round(dx/300); N = round(dy/300);
A6 = zeros(M,N); %initiating image matrix based on 4 limiting States
%transformation from world to pixel coordinates
xpix_bw =(((XBW-XN6min)*M)/dx)';
ypix_bw =(((YBW-YB6min)*N)/dy)';
xbw6=round(xpix_bw); xbw6=xbw6(~isnan(xbw6));
ybw6=round(ypix_bw); ybw6=ybw6(~isnan(ybw6));
%line drawing
for i = 1:1:length(xbw6)-1
    j = i+1;
    x1 = xbw6(i); x2 = xbw6(j); y1 = ybw6(i); y2 = ybw6(j);
    nn = atan2((y2-y1),(x2-x1)); % azimuthal angle
    if x2 == x1
        l = abs(y2-y1);
    else
        l = round((x2-x1)/cos(nn)); % horizontal distance
    end
    xx = zeros(l,1); % empty column
    yy = zeros(l,1); % empty column
    % creating line along slope distance
    for k = 1:1:l
        xx(k) = round(x1+cos(nn)*k);
        yy(k) = round(y1+sin(nn)*k);
        A6(xx(k)+1,yy(k)+1) = 256;
    end
end
imwrite(A6, 'Untitled_0506_300.tif', 'Resolution', 300);
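As a side note, the 'Resolution' option of imwrite is specified in dots per inch, which is why imfinfo reports the unit as "Inch"; a plain TIFF has no concept of meters. If the 300 m pixel size should actually live in the file, one option is to write a GeoTIFF with a map raster reference object (Mapping Toolbox). Below is a hedged sketch only: it assumes the shapefile coordinates are in a projected CRS in meters, and the EPSG code 25832 (ETRS89 / UTM zone 32N) is just a guess that must be replaced by the shapefile's actual projection code.

% Sketch: store the 300 m pixel size in a georeferenced TIFF instead of
% relying on imwrite's 'Resolution' option (which is in dots per inch).
A6geo = A6';                                    % A6 above has x along rows and y along columns
xlimits = [XN6min XS6max];                      % world x-extent in meters
ylimits = [YB6min YSH6max];                     % world y-extent in meters
R = maprefcells(xlimits, ylimits, size(A6geo)); % cell size comes out near 300 m by construction
% R.ColumnsStartFrom defaults to 'south', matching how ypix was computed above.
geotiffwrite('Untitled_0506_300m.tif', uint8(A6geo), R, 'CoordRefSysCode', 25832); % EPSG code is an assumption

Alternatively, the pixel size can simply be recorded as metadata alongside a plain TIFF: by construction it is dx/M meters per pixel horizontally and dy/N vertically.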

How to avoid fliplr in the below code?

I am trying to split a region in an image into left and right halves, but I am excluding a certain percentage of the columns in the center from each side.
So I have to get the keep indexes for both the left and the right side.
Currently I use fliplr to reverse the array indexes of the right side, take (1:n_indices), and then fliplr back to the original order.
Can I avoid fliplr in the code below?
img1 = imread('sample4.png');
keepPercent = 0.9; %90 on both sides
columnsWithAllZeros = all(img1 == 0);
left_idx = find(~columnsWithAllZeros,1,'first');
right_idx = find(~columnsWithAllZeros,1,'last');
cent_idx = floor(mean([left_idx,right_idx]));
left_to_cent_idxs = left_idx:cent_idx;
cent_to_right_idxs = cent_idx+1:right_idx;
cent_to_right_idxs = fliplr(cent_to_right_idxs); % flip
num_leftKeep_idxs = floor(keepPercent *length(left_to_cent_idxs));
num_rightKeep_idxs = floor(keepPercent *length(cent_to_right_idxs));
right_keepImg_idxs = left_to_cent_idxs(1:num_leftKeep_idxs);
left_keepImg_idxs = cent_to_right_idxs(1:num_rightKeep_idxs);
left_keepImg_idxs = fliplr(left_keepImg_idxs); % flip back (this is not needed, I know)
[nrow, ncol] = size(img1); % image size, needed for the masks below
[leftBrain_img, rightBrain_img] = deal(zeros(nrow, ncol, 'logical'));
leftBrain_img(:,left_keepImg_idxs) = img1(:,left_keepImg_idxs);
rightBrain_img(:,right_keepImg_idxs) = img1(:,right_keepImg_idxs);
rightBrain_img = cast(rightBrain_img,'uint16') .*img1;
leftBrain_img = cast(leftBrain_img,'uint16') .*img1;
figure,
subplot(131), imshow(img1,[])
subplot(132), imshow(rightBrain_img,[])
subplot(133), imshow(leftBrain_img,[])
The sample image is available here
Thanks,
Gopi
That could be done, just as @rahnema1 said. But the question is why even do it that way when it can be done in a much faster and simpler way. Have a look at this code:
img1 = imread('sample4.png');
keepPercent = 0.9; %90 on both sides
columnsWithAllZeros = all(img1 == 0);
leavepercent=1-keepPercent;
idx=minmax(find(columnsWithAllZeros==0));
cent_idx = floor(mean(idx));
left_keepImg_idxs1=idx(1):cent_idx-floor(leavepercent*(cent_idx-idx(1)+1));
right_keepImg_idxs1=cent_idx+1+floor(leavepercent*(idx(2)-cent_idx+1)):idx(2);
[leftBrain_img, rightBrain_img] =deal(zeros(512, 512, 'logical'));
leftBrain_img(:,left_keepImg_idxs1) = img1(:,left_keepImg_idxs1);
rightBrain_img(:,right_keepImg_idxs1) = img1(:,right_keepImg_idxs1);
rightBrain_img = cast(rightBrain_img,'uint16') .*img1;
leftBrain_img = cast(leftBrain_img,'uint16') .*img1;
figure,
subplot(131), imshow(img1,[])
subplot(132), imshow(rightBrain_img,[])
subplot(133), imshow(leftBrain_img,[])
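One caveat about the answer above: minmax comes from the Neural Network / Deep Learning Toolbox. If it is not available, the same two end points can be obtained with plain find calls, for example:

% Sketch: toolbox-free replacement for minmax(find(columnsWithAllZeros==0))
nonZeroCols = ~columnsWithAllZeros;
idx = [find(nonZeroCols, 1, 'first'), find(nonZeroCols, 1, 'last')];

The index arithmetic that follows is unchanged.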

Best way to isolate rectangular object

I have the following image and I would like to segment the rectangular object in the middle. I implemented the code below, but I cannot isolate the object. What functions or approaches can I use to isolate the rectangular object in the image?
im = imread('image.jpg');
% convert image to grayscale,
imHSV = rgb2hsv(im);
imGray = rgb2gray(im);
imSat = imHSV(:,:,2);
imHue = imHSV(:,:,1);
imVal = imHSV(:,:,3);
background = imopen(im,strel('disk',15));
I2 = im - background;
% detect edge using sobel algorithm
[~, threshold] = edge(imGray, 'sobel');
fudgeFactor = .5;
imEdge = edge(imGray,'sobel', threshold * fudgeFactor);
%figure, imshow(imEdge);
% split image into colour channels
redIM = im(:,:,1);
greenIM = im(:,:,2);
blueIM = im(:,:,3);
% convert image to binary image (using thresholding)
imBlobs = and((imSat < 0.6),(imHue < 0.6));
imBlobs = and(imBlobs, ((redIM + greenIM + blueIM) > 150));
imBlobs = imfill(~imBlobs,4);
imBlobs = bwareaopen(imBlobs,50);
figure,imshow(imBlobs);
In this example, you can leverage the fact that the rectangle contains blue in all of its corners to build a good initial mask.
Use thresholding to locate the blue pixels in the image and create an initial mask.
Given this initial mask, find its corners using min and max operations.
Connect the corners with lines to obtain a rectangle.
Fill the rectangle using imfill.
Code example:
% convert image to binary image (using thresholding)
redIM = im(:,:,1);
greenIM = im(:,:,2);
blueIM = im(:,:,3);
mask = blueIM > redIM*2 & blueIM > greenIM*2;
%noise cleaning
mask = imopen(mask,strel('disk',3));
%find the corners of the rectangle
[Y, X] = ind2sub(size(mask),find(mask));
minYCoords = find(Y==min(Y));
maxYCoords = find(Y==max(Y));
minXCoords = find(X==min(X));
maxXCoords = find(X==max(X));
%top corners
topRightInd = find(X(minYCoords)==max(X(minYCoords)),1,'last');
topLeftInd = find(Y(minXCoords)==min(Y(minXCoords)),1,'last');
p1 = [Y(minYCoords(topRightInd)) X((minYCoords(topRightInd)))];
p2 = [Y(minXCoords(topLeftInd)) X((minXCoords(topLeftInd)))];
%bottom corners
bottomRightInd = find(Y(maxXCoords)==max(Y(maxXCoords)),1,'last');
bottomLeftInd = find(X(minYCoords)==min(X(minYCoords)),1,'last');
p3 = [Y(maxXCoords(bottomRightInd)) X((maxXCoords(bottomRightInd)))];
p4 = [Y(maxYCoords(bottomLeftInd)) X((maxYCoords(bottomLeftInd)))];
%connect between the corners with lines
l1Inds = drawline(p1,p2,size(mask));
l2Inds = drawline(p3,p4,size(mask));
maskOut = mask;
maskOut([l1Inds,l2Inds]) = 1;
%fill the rectangle which was created
midP = ceil((p1+p2+p3+p4)./4);
maskOut = imfill(maskOut,midP);
%present the final result
figure,imshow(maskOut);
Final Result:
Intermediate results (1 - after thresholding, 2 - after adding the connecting lines):
*The drawline function is taken from the linked drawline webpage.
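As an alternative to connecting the corners with a custom drawline and then filling, the convex hull of the thresholded corner blobs should also give a filled rectangle, since the four corners span it. A short sketch using bwconvhull from the Image Processing Toolbox:

% Sketch: fill the rectangle via the convex hull of the blue corner mask.
mask = blueIM > redIM*2 & blueIM > greenIM*2;  % same blue threshold as above
mask = imopen(mask, strel('disk', 3));         % clean small noise
maskOut = bwconvhull(mask);                    % convex hull spans the four corners
figure, imshow(maskOut);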

Resize Frame for Optical Flow

I have a problem with optical flow: if the frame size has been manipulated in any way, it gives me an error. There are two options: either change the resolution of the video at the beginning, or somehow change the frame size in a way that optical flow will still work. I want to add a cascade object detector for the nose, mouth and eyes in further development, so I need a solution that works for individual regions without setting up optical flow separately for each region, especially since a bounding box does not have a fixed size and displaces itself slightly from frame to frame. Here is my code so far; the error is that it is exceeding matrix dimensions.
faceDetector = vision.CascadeObjectDetector();
vidObj = vision.VideoFileReader('MEXTest.mp4','ImageColorSpace','Intensity','VideoOutputDataType','uint8');
converter = vision.ImageDataTypeConverter;
opticalFlow = vision.OpticalFlow('ReferenceFrameDelay', 1);
opticalFlow.OutputValue = 'Horizontal and vertical components in complex form';
shapeInserter = vision.ShapeInserter('Shape','Lines','BorderColor','Custom','CustomBorderColor', 255);
vidPlayer = vision.VideoPlayer('Name','Motion Vector');
while ~isDone(vidObj)
    frame = step(vidObj);
    fraRes = imresize(frame,0.5);
    fbbox = step(faceDetector,fraRes);
    I = imcrop(fraRes,fbbox);
    im = step(converter,I);
    of = step(opticalFlow,im);
    lines = videooptflowlines(of, 20);
    if ~isempty(lines)
        out = step(shapeInserter,im,lines);
        step(vidPlayer,out);
    end
end
release(vidPlayer);
release(vidObj);
UPDATE: I edited the function for optical flow which creates the lines, and this sorts out some of the size issues; however, it is necessary to input this manually for each object (so if there is any other way, let me know). I think the best solution would be to set a fixed size for the CascadeObjectDetector. Does anyone know how to do this, or have any other idea?
faceDetector = vision.CascadeObjectDetector(); %I need fixed size for this
faceDetector.MinSize = [150 150];
vidRead = vision.VideoFileReader('MEXTest.mp4','ImageColorSpace','Intensity','VideoOutputDataType','uint8');
convert = vision.ImageDataTypeConverter;
optFlo = vision.OpticalFlow('ReferenceFrameDelay', 1);
optFlo.OutputValue = 'Horizontal and vertical components in complex form';
shapeInserter = vision.ShapeInserter('Shape','Lines','BorderColor','Custom', 'CustomBorderColor', 255);
while ~isDone(vidRead)
    frame = step(vidRead);
    fraRes = imresize(frame,0.3);
    fraSin = im2single(fraRes);
    bbox = step(faceDetector,fraSin);
    I = imcrop(fraSin, bbox);
    im = step(convert, I);
    release(optFlo);
    of = step(optFlo, im);
    lines = optfloo(of, 50); % use videooptflowlines instead of optfloo
    out = step(shapeInserter, im, lines);
    imshow(out);
end
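One way to avoid the size mismatch altogether, regardless of how the detected bounding box moves or resizes, is to bring the cropped face region to a fixed size before it reaches the optical flow step. Below is a rough sketch of the loop body under that idea; the 120x120 working size, the skip-on-no-detection behavior, and the use of only the first detection are assumptions, and videooptflowlines is the example helper already used in the first version of the code.

while ~isDone(vidRead)
    frame = step(vidRead);
    fraSin = im2single(imresize(frame, 0.3));
    bbox = step(faceDetector, fraSin);
    if isempty(bbox)
        continue;                               % no face in this frame, skip it
    end
    I = imcrop(fraSin, bbox(1,:));              % keep only the first detection
    im = imresize(step(convert, I), [120 120]); % fixed size: optical flow input never changes
    of = step(optFlo, im);
    lines = videooptflowlines(of, 50);
    if ~isempty(lines)
        out = step(shapeInserter, im, lines);
        imshow(out);
    end
end

Because the input to optFlo now has constant dimensions, the release(optFlo) call inside the loop should no longer be necessary.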