Matlab imcrop returns an image size zero - matlab

I am trying to take a sample of an image using imcrop, but I am having issues.
patchsize = 35
[sample_width, sample_height] = size(sample);
max_width = sample_width-patchsize;
max_height = sample_height-patchsize;
x = randi([0, max_width]);
y = randi([0, max_height]);
patch = imcrop(sample,[x y patchsize-1 patchsize-1]);
The next part of my code attempts to access the pixels in patch, but I get an error: Attempted to access patch(1,1); index out of bounds because size(patch)=[0,0]
so I added,
disp(size(patch))
Apparently the size of patch is 0 0, but I don't understand why this is the case. Based on answers to other questions about imcrop on Stack Overflow, I also tried
patch = imcrop(sample, [x y x+patchsize-1 y+patchsize-1])
This also resulted in patch having size zero.
Note: the size did not always come back as zero. In the first case, depending on the random integers x and y, patch would sometimes have size patchsize, but not 100% of the time; in the second example the sizes were all over the board. I cannot understand where this inconsistency is coming from.
Update: The problem was coming from the part of my code:
[sample_width, sample_height] = size(sample);
should have been:
[sample_width, sample_height, depth] = size(sample);

The problem with the original code is in:
[sample_width, sample_height] = size(sample);
Should be:
[sample_height, sample_width, sample_depth] = size(sample);
RGB images have 3 dimensions. When size is called with only two outputs, the second output collapses all remaining dimensions into one, so in the original code sample_height ends up holding width*3. Note also that size returns the number of rows (height) first and the number of columns (width) second.
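For completeness, a minimal sketch (not from the original post) of drawing a random patch once the dimensions are unpacked correctly; plain indexing sidesteps imcrop's rectangle semantics, and the variable names follow the question:
patchsize = 35;
[sample_height, sample_width, ~] = size(sample);
x = randi([1, sample_width - patchsize + 1]);    % column of the top-left corner
y = randi([1, sample_height - patchsize + 1]);   % row of the top-left corner
patch = sample(y:y+patchsize-1, x:x+patchsize-1, :);   % patchsize-by-patchsize(-by-3) patch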

How to perform an orthographic projection on a z-Buffer image in Matlab?

I am facing the same problem as mentioned in this post, but with MATLAB rather than OpenGL: Depth as distance to camera plane in GLSL
I have a depth image rendered from the Z-Buffer from 3ds max. I was not able to get an orthographic representation of the z-buffer. For a better understanding, I will use the same sketch as made by the previous post:
      *               |--*
     /                |
    /                 |
   C-----*            C-----*
    \                 |
     \                |
      *               |--*
The 3 asterisks are pixels and the C is the camera. The lines from the
asterisks are the "depth". In the first case, I get the distance from the pixel to the camera. In the second, I wish to get the distance from each pixel to the plane.
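For intuition only (this is the underlying geometry, not the projection-matrix route tried below): the plane distance of a pixel is its ray distance multiplied by the cosine of the angle between that pixel's viewing ray and the optical axis. A hypothetical per-pixel sketch, assuming a pinhole model where cx, cy are the principal point, fx, fy the focal lengths in pixels, and d_ray an image of per-pixel distances to the camera:
[u, v] = meshgrid(1:size(d_ray,2), 1:size(d_ray,1));
cosTheta = 1 ./ sqrt(((u - cx)./fx).^2 + ((v - cy)./fy).^2 + 1);
d_plane = d_ray .* cosTheta;   % distance of each pixel to the camera plane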
The settings of my camera are the following:
WIDTH = 512;
HEIGHT = 424;
FOV = 89.971;
aspect_ratio = WIDTH/HEIGHT;
%clipping planes
near = 500;
far = 5000;
I calculate the frustum settings as follows:
%calculate frustum settings
top = tan((FOV/2)*5000)
bottom = -top
right = top*aspect_ratio
left = -top*aspect_ratio
And set the projection matrix like this:
%Generate matrix
O_p = [2/(right-left) 0 0 -((right+left)/(right-left)); ...
0 2/(top-bottom) 0 -((top+bottom)/(top-bottom));...
0 0 -2/(far-near) -(far+near)/(far-near);...
0 0 0 1];
After this I read in the depth image, which was saved as a 48-bit RGB image where each channel holds the same values, so only one channel needs to be used.
%Read in image
img = imread('KinectImage.png');
%Keep only one channel (all channels hold the same information)
c1 = img(:,:,1);
The pixel values have to be inverted, since the closer an object is to the camera, the brighter its pixel. If a pixel is 0 (nothing was rendered there), it is set to 2^16 so that, after the bit complement, its value is still 0.
%Inverse bits that are not zero, so that the z-image has the correct values
c1(c1 == 0) = 2^16
c1_cmp = bitcmp(c1);
To apply the matrix to each z-buffer value, I reshape the image into a one-dimensional vector and build a row of the form [0 0 z 1] for every element.
c1_cmp1d = squeeze(reshape(c1_cmp,[512*424,1]));
converted = double([zeros(WIDTH*HEIGHT,1) zeros(WIDTH*HEIGHT,1) c1_cmp1d zeros(WIDTH*HEIGHT,1)]) * double(O_p);
After that, I pick out the 4th element of each result vector and reshape it into an image.
img_con = converted(:,4);
img_con = reshape(img_con,[424,512]);
However, the effect that the z-buffer is not orthographic is still there. Did I get something wrong? Is my calculation flawed, or did I make a mistake somewhere?
Depth Image coming from 3ds max
After the computation (the white is still "0", but the color axis has changed)
It would be great to achieve this directly in 3ds Max, which would resolve the issue; however, I was not able to find such a setting for the z-buffer. Thus, I want to solve it using MATLAB.

Matlab: Can't find a circle with imfindcircles [duplicate]

I want to understand how imfindcircles works, so I created a simple image with black background and a single white circle. The image is 640x480 and the circle has a diameter of 122 pixels:
I tried to use imfindcircles to detect the circle. I tried various forms of the image (uint8 RGB, uint8 grayscale, double grayscale) as well as the inverted version of each, and various values for minR and maxR. I got no result in all cases:
minR = 40;
maxR = 80;
Irgb = imread('example_circle.png');
Irgbr = 255 - Irgb;
I = rgb2gray(Irgb);
Ir = 255 - I;
Id = double(I)/255;
Ird = 1 - Id;
[c1,r1] = imfindcircles(I,[minR maxR]);
[c2,r2] = imfindcircles(Ir,[minR maxR]);
[c3,r3] = imfindcircles(Id,[minR maxR]);
[c4,r4] = imfindcircles(Ird,[minR maxR]);
[c5,r5] = imfindcircles(Irgb,[minR maxR]);
[c6,r6] = imfindcircles(Irgbr,[minR maxR]);
disp([length(r1) length(r2) length(r3) length(r4) length(r5) length(r6)]);
The output is:
0 0 0 0 0 0
How am I supposed to use the function to find the circle?
The imfindcircles function has a 'Sensitivity' parameter:
As you increase the sensitivity factor, imfindcircles detects more circular objects, including weak and partially obscured circles. Higher sensitivity values also increase the risk of false detection.
By setting the Sensitivity to a higher value, you get more potential circles. You could tune this parameter to always give you one circle, e.g. 0.95 seems to work fine in this specific case. This is probably not very robust though.
[c1, r1] = imfindcircles(Irgb,[40,80], 'Sensitivity', 0.95)
If you know that there will always be exactly one circle, you can set the Sensitivity to 1, which returns all potential circles. Then, use the metric output, which gives you the calculated strength of a detected circle. As you know there will be exactly one circle, you can just take the strongest one, which is always the first row.
[c, r] = imfindcircles(Irgb,[40,80], 'Sensitivity', 1);
c1 = c(1,:);
r1 = r(1,:);
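For reference, the metric output mentioned above is the third return value of imfindcircles; a small sketch of using it explicitly (same image and radius range as above):
[c, r, metric] = imfindcircles(Irgb, [40 80], 'Sensitivity', 1);
[~, idx] = max(metric);   % circles come back sorted by metric, so idx is 1 here
c1 = c(idx, :);
r1 = r(idx);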

Changing image aspect ratio of interpolated RGB image. Square to rectangular

I have some code which takes a fisheye image and converts it to a rectangular image in each RGB channel. I am having trouble with the fact that the output image is square instead of rectangular (this means the image is distorted, compressed horizontally). I have tried changing the output matrix to a more suitable format, without success. Besides this, I have also discovered that for the code to work the input image must be square, like 500x500. Any idea how to solve this issue? This is the code:
The code is inspired by Prakash Manandhar "Polar To/From Rectangular Transform of Images" file exchange on mathworks.
EDIT. Code now works.
function imP = FISHCOLOR2(imR)
rMin=0.1;
rMax=1;
[Mr, Nr, Dr] = size(imR); % size of rectangular image
xRc = (Mr+1)/2; % co-ordinates of the center of the image
yRc = (Nr+1)/2;
sx = (Mr-1)/2; % scale factors
sy = (Nr-1)/2;
reduced_dim = min(size(imR,1),size(imR,2));
imR = imresize(imR,[reduced_dim reduced_dim]);
M=size(imR,1);N=size(imR,2);
dr = (rMax - rMin)/(M-1);
dth = 2*pi/N;
r=rMin:dr:rMin+(M-1)*dr;
th=(0:dth:(N-1)*dth)';
[r,th]=meshgrid(r,th);
x=r.*cos(th);
y=r.*sin(th);
xR = x*sx + xRc;
yR = y*sy + yRc;
for k=1:Dr % colors
imP(:,:,k) = interp2(imR(:,:,k), xR, yR); % add k channel
end
imP = imresize(imP,[size(imP,1), size(imP,2)/3]);
imP = imrotate(imP,270);
SOLVED
Input image <- Image link
Output image <- Image link
PART A
To remove the requirement of a square input image, you may resize the input image into a square one with this -
%%// Resize the input image to make it square
reduced_dim = min(size(imR,1),size(imR,2));
imR = imresize(imR,[reduced_dim reduced_dim]);
A few points I would like to raise here, though, about resizing the image to make it square. This is a quick and dirty approach, and it distorts the image when the input is far from square, which you may not want. Many such non-square images have blackish borders along the boundaries; if you can remove those, either with some image-processing step or by manual photoshopping, that would be ideal. After that, even if the image is still not square, imresize can be considered a safe option.
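If it helps, here is one possible (untested) sketch of that border removal, assuming an RGB input whose borders are essentially black and whose fisheye content is brighter; the threshold is an assumption to be tuned per image:
%%// Hypothetical border trim: keep only rows/columns containing non-dark pixels
gray = rgb2gray(imR);
mask = gray > 10;
rows = find(any(mask, 2));
cols = find(any(mask, 1));
imR = imR(rows(1):rows(end), cols(1):cols(end), :);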
PART B
Now, after the main processing of flattening out the fisheye image, at the end of your code it seemed like the image has to be rotated 90 degrees clockwise or counter-clockwise, depending on whether the fisheye image has its objects projected inwardly or outwardly, respectively.
%%// Rotating image
imP = imrotate(imP,-90); %%// When projected inwardly
imP = imrotate(imP,90); %%// When projected outwardly
Note that the flattened image must have a height equal to half the size of the input square image, that is, the radius of the image. Thus, the final output image must have size(imP,2)/2 rows.
Since you are flattening out a fisheye image, I assumed that the width of the flattened image must be 2*PI times its height. So, I tried this -
imP = imresize(imP,[size(imP,2)/2 pi*size(imP,2)]);
But the results looked too flattened out. So, the next logical experimental
value looked like PI times the height, i.e. -
imP = imresize(imP,[size(imP,2)/2 pi*size(imP,2)/2]);
Results in this case looked good.
I'm not very experienced in the finer points of image processing in MATLAB, but depending on the exact operation of the imP fill mechanism, you may get what you're looking for by doing the following. Change:
M = size(imR, 1);
N = size(imR, 2);
To:
verticalScaleFactor = 0.5;
M = size(imR, 1) * verticalScaleFactor;
N = size(imR, 2);
If my hunch is right, you should be able to tune that scale factor to get the image just right. It may, however, break your code. Let me know if it doesn't work, and edit your post to flesh out exactly what each section of code does. Then we should be able to give it another shot. Good luck!
This is the code which works.
function imP = FISHCOLOR2(imR)
rMin=0.1;
rMax=1;
[Mr, Nr, Dr] = size(imR); % size of rectangular image
xRc = (Mr+1)/2; % co-ordinates of the center of the image
yRc = (Nr+1)/2;
sx = (Mr-1)/2; % scale factors
sy = (Nr-1)/2;
reduced_dim = min(size(imR,1),size(imR,2));
imR = imresize(imR,[reduced_dim reduced_dim]);
M=size(imR,1);N=size(imR,2);
dr = (rMax - rMin)/(M-1);
dth = 2*pi/N;
r=rMin:dr:rMin+(M-1)*dr;
th=(0:dth:(N-1)*dth)';
[r,th]=meshgrid(r,th);
x=r.*cos(th);
y=r.*sin(th);
xR = x*sx + xRc;
yR = y*sy + yRc;
for k=1:Dr % colors
imP(:,:,k) = interp2(imR(:,:,k), xR, yR); % add k channel
end
imP = imresize(imP,[size(imP,1), size(imP,2)/3]);
imP = imrotate(imP,270);

maximum intensity projection matlab with color

Hi all, I have a stack of images of fluorescently labeled particles that are moving through time. The image stack is grayscale.
I computed a maximum intensity projection by taking the maximum of the image stack in the 3rd dimension.
Example:
ImageStack(x,y,N) where N = 31 image frames.
Projection2D = max(ImageStack,[],3);
Now, since the 2D projection image is black and white, I was hoping to assign a color gradient so that I can get a sense of the flow of particles through time. Is there a way that I can overlay this image with color, so that I will know where a particle started, and where it ended up?
Thanks!
You could use the second output of max to get which frame the particular maximum came from. max returns an index matrix which indicates the index of each maximal value, which in your case will be the particular frame in which it occurred. If you use this with the imagesc function, you will be able to plot how the particles move with time. For instance:
% ImageStack is x-by-y-by-N, with N = 31 image frames
[Projection2D,FrameInfo] = max(ImageStack,[],3);
imagesc(FrameInfo);
set(gca,'ydir','normal'); % Otherwise the y-axis would be flipped
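As a small optional addition (not part of the original answer), a colorbar makes the frame-to-color mapping explicit, and pixels that never light up can be hidden; a sketch using the variables above:
FrameInfo(Projection2D == 0) = NaN;                  % blank out pure-background pixels
imagesc(FrameInfo, 'AlphaData', ~isnan(FrameInfo));
set(gca,'ydir','normal');
colorbar;                                            % color now encodes the frame index, i.e. time
caxis([1 size(ImageStack,3)]);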
You can sum up the bright pixels of each image after coloring each one. This way you get mixed colors in overlapping areas, which you would miss using the max function. Although I like the previous answer more than mine.
hStep = 1/N;                             % advance the hue by 1/N per frame
currentH = 0;
resultImage = uint8(zeros(x,y,3));       % x, y, N as in the question
for i = 1 : N
    rgbColor = hsv2rgb([currentH, 1, 0.5]);   % hsv2rgb expects a 1-by-3 [h s v] vector
    resultImage(:,:,1) = resultImage(:,:,1) + im(:,:,i) * rgbColor(1);
    resultImage(:,:,2) = resultImage(:,:,2) + im(:,:,i) * rgbColor(2);
    resultImage(:,:,3) = resultImage(:,:,3) + im(:,:,i) * rgbColor(3);
    currentH = currentH + hStep;
end

Rotation of image manually in matlab

I am trying to rotate the image manually using the following code.
clc;
m1 = imread('owl','pgm'); % a simple gray scale image of order 260 X 200
newImg = zeros(500,500);
newImg = int16(newImg);
rotationMatrix45 = [cos((pi/4)) -sin((pi/4)); sin((pi/4)) cos((pi/4))];
for x = 1:size(m1,1)
for y = 1:size(m1,2)
point =[x;y] ;
product = rotationMatrix45 * point;
product = int16(product);
newx =product(1,1);
newy=product(2,1);
newImg(newx,newy) = m1(x,y);
end
end
imshow(newImg);
Simply put, I am iterating through every pixel of image m1, multiplying the coordinates [x;y] by the rotation matrix to get x' and y', and storing the value of m1(x,y) into newImg(x',y'), BUT it gives the following error:
??? Attempted to access newImg(0,1); index must be a positive integer or logical.
Error in ==> at 18
newImg(newx,newy) = m1(x,y);
I don't know what I am doing wrong.
Part of the rotated image will get negative (or zero) newx and newy values, since the corners rotate out of the original image coordinates. You can't assign a value to newImg if newx or newy is nonpositive; those aren't valid matrix indices. One solution would be to check for this situation and skip such pixels (with continue).
Another solution would be to enlarge the newImg sufficiently, but that will require a slightly more complicated transformation.
This is assuming that you can't just use imrotate because this is homework?
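A minimal sketch of that skip-the-pixel idea, reusing the question's variable names:
for x = 1:size(m1,1)
    for y = 1:size(m1,2)
        product = int16(rotationMatrix45 * [x; y]);
        newx = product(1,1);
        newy = product(2,1);
        if newx < 1 || newy < 1 || newx > size(newImg,1) || newy > size(newImg,2)
            continue;   % rotated coordinate falls outside newImg, skip this pixel
        end
        newImg(newx,newy) = m1(x,y);
    end
end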
The problem is simple, the answer maybe not: MATLAB arrays are indexed from 1 to N (whereas in many programming languages it's from 0 to N-1).
Try newImg( min(max(newx,1), size(newImg,1)) , min(max(newy,1), size(newImg,2)) ) maybe (I don't have MATLAB at work so I can't tell if it's gonna work), but the resulting image will be cropped.
This is an old post so I guess it won't help the OP, but since I was helped by his attempt, I post my corrected code here.
Basically there is some freedom in the implementation regarding how you deal with unassigned pixels, as well as whether you wish to keep the original size of the picture, which forces you to crop areas falling "outside" of it.
The following function rotates the image around its center, leaves unassigned pixels "burned" (white), and crops the edges.
function [h] = rot(A,ang)
rotMat = [cos((pi.*ang/180)) sin((pi.*ang/180)); -sin((pi.*ang/180)) cos((pi.*ang/180))];
centerW = round(size(A,1)/2);
centerH = round(size(A,2)/2);
h=255.* uint8(ones(size(A)));
for x = 1:size(A,1)
for y = 1:size(A,2)
point =[x-centerW;y-centerH] ;
product = rotMat * point;
product = int16(product);
newx =product(1,1);
newy=product(2,1);
if newx+centerW<=size(A,1)&& newx+centerW > 0 && newy+centerH<=size(A,2)&& newy+centerH > 0
h(newx+centerW,newy+centerH) = A(x,y);
end
end
end
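A possible usage example (the file name is only a placeholder; any grayscale image works):
A = imread('cameraman.tif');   % placeholder grayscale test image
h = rot(A, 30);                % rotate 30 degrees about the image center
figure, imshow(h);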