I am working in Simulink to develop my algorithm.
I have a video stream with dimensions 640x360, and I am trying to extract a region of interest (ROI) from each frame. However, my video turns grayscale when I use the following code:
This is the MATLAB Function block I am using for the ROI extraction:
function y = fcn(u)
%some more code
width = 639;
height = 210;
top = 150;
left = 1;
y = u(top:top+height, left:left+width);
Solution
Change the last line as follows:
y = u(top:top+height, left:left+width,:);
Explanation
The dimensions of an RGB image are actually m-by-n-by-3. The m and n are the image height and width, and the third dimension holds the 3 channels: red, green and blue.
When you crop an RGB image, the crop has to be applied to every channel; indexing only the first two dimensions returns a single channel, which is why the output looks grayscale. The trailing : in the code example above crops all three channels at once.
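For completeness, a minimal sketch of the corrected Function block (same ROI values as in the question):
function y = fcn(u)
%u is assumed to be an m-by-n-by-3 RGB frame
width = 639;
height = 210;
top = 150;
left = 1;
y = u(top:top+height, left:left+width, :); %crop every colour channel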
Related
I want to apply a temporal median filter to a depth map video to ensure temporal consistency and prevent the flickering effect.
Thus, I am trying to apply the filter to all video frames at once by:
First loading all frames,
%%% Read video sequence
numfrm = 5;
infile_name = 'depth_map_1920x1088_80fps.yuv';
width = 1920; %xdim
height = 1088; %ydim
fid_in = fopen(infile_name, 'rb');
[Yd, Ud, Vd] = yuv_import(infile_name,[width, height],numfrm);
fclose(fid_in);
then creating a 3-D depth matrix (height x width x number-of-frames),
%%% Build a stack of images from the video sequence
stack = zeros(height, width, numfrm);
for i=1:numfrm
RGB = yuv2rgb(Yd{i}, Ud{i}, Vd{i});
RGB = RGB(:, :, 1); % keep a single channel, since the depth map is grayscale
stack(:,:,i) = RGB;
end
and finally applying the 1-D median filter along the third direction (time)
temp = medfilt1(stack);
However, for some reason this is not working. When I try to view each frame, I get white images.
frame1 = temp(:,:,1);
imshow(frame1);
Any help would be appreciated!
My guess is that this is actually working, but frame1 is of class double and contains values, e.g., between 0 and 255. Since imshow displays double images on a [0,1] scale by default, you obtain a white, saturated image.
I would therefore suggest:
caxis auto
after imshow to fix the display problem.
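For example, a minimal sketch based on the snippet above:
frame1 = temp(:,:,1);
imshow(frame1);
caxis auto % rescale the colour limits to the data range instead of [0,1]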
Best,
I want to crop a face section from an image, but the face is not straight/vertically aligned. I have four pixel points with which to crop it.
The problem is that:
If I transform the image first, the pixel points can no longer be used to crop the facial section out of it.
Alternatively, I do not have an exact bounding box to crop the image directly using imcrop, because the facial sections are somewhat tilted left or right.
The four pixel points are at the forehead, chin and ears of the face to be cropped.
You should look at poly2mask. This function produces a mask image from your given x and y coordinates:
BW = poly2mask(x,y,m,n);
where x and y are your coordinates, and the produced BW image is m by n. You can then use this BW image to mask your original image I by doing
I(~BW) = 0;
(For an RGB image, apply the mask to each channel, e.g. I(repmat(~BW, [1 1 3])) = 0;.)
If you actually want to crop, then you could get the bounding box (either through the regionprops function or the code below):
x1 = round(min(x));
y1 = round(min(y));
x2 = round(max(x));
y2 = round(max(y));
and then crop the image after you have used the BW as a mask.
I2 = I(y1:y2, x1:x2); % rows are indexed by y, columns by x
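Putting the pieces together, a minimal sketch (the coordinates here are hypothetical, and I is assumed to be a grayscale image):
I = imread('cameraman.tif');                % example grayscale image
x = [120 200 210 110];                      % column coordinates of the four points (hypothetical)
y = [ 60  70 190 180];                      % row coordinates of the four points (hypothetical)
BW = poly2mask(x, y, size(I,1), size(I,2)); % mask from the polygon
I(~BW) = 0;                                 % black out everything outside the face
x1 = round(min(x)); x2 = round(max(x));     % bounding box of the polygon
y1 = round(min(y)); y2 = round(max(y));
I2 = I(y1:y2, x1:x2);                       % crop to the bounding box
imshow(I2);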
Hope that helps.
I have an image and a subimage which is cropped out of the original image.
Here's the code I have written so far:
val1 = imread(img);
val2 = imread(img_w);
gray1 = rgb2gray(val1);%grayscaling both images
gray2 = rgb2gray(val2);
matchingval = normxcorr2(gray2,gray1);%normalized cross correlation (the template, i.e. the subimage, goes first)
[max_c,imax]=max(abs(matchingval(:)));
After this I am stuck. I have no idea how to make the whole image grayscale except for the subimage, which should stay in color.
How do I do this?
Thank you.
If you know what the coordinates are for your image, you can always just use the rgb2gray on just the section of interest.
For instance, I tried this on an image just now:
im(500:1045,500:1200,1)=rgb2gray(im(500:1045,500:1200,1:3));
im(500:1045,500:1200,2)=rgb2gray(im(500:1045,500:1200,1:3));
im(500:1045,500:1200,3)=rgb2gray(im(500:1045,500:1200,1:3));
Here I took the rows (500 to 1045), columns (500 to 1200), and the RGB depth (1 to 3) of the image and applied the rgb2gray function to just that section. I did it three times because the output of rgb2gray is a 2-D matrix while a color image is a 3-D matrix, so I needed to write the result into each layer separately.
This worked for me, making only part of the image gray but leaving the rest in color.
The issue you might have, though, is this: a color image has 3 dimensions while a grayscale image needs only 2. Combining them means the grayscale part must be stored in a 3-D matrix.
Depending on what you want to do, this technique may or may not help.
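For example, one way to build such a 3-D grayscale image (a one-line sketch, assuming im holds an RGB image):
imGray3 = repmat(rgb2gray(im), [1 1 3]); % copy the gray values into all three channels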
Judging from your code, you are reading the image and the subimage in MATLAB. What you need to know are the coordinates of where you extracted the subimage. Once you do that, simply take your original colour image, convert that to grayscale, then duplicate this image in the third dimension three times. You need to do this so that you can place colour pixels in this image.
In an RGB image, a grayscale pixel has all three RGB components equal, so duplicating the grayscale image in the third dimension three times creates the RGB version of the grayscale image. Once you do that, simply use the row and column coordinates of where you extracted the subimage and place it into the equivalent RGB grayscale image.
As such, given your colour image stored in img and your subimage stored in imgsub, with the rows and columns of where you extracted the subimage given by row1,col1 (the top left corner) and row2,col2 (the bottom right corner), do this:
img_gray = rgb2gray(img); % grayscale version of the colour image
img_gray = cat(3, img_gray, img_gray, img_gray); % replicate into 3 channels so colour pixels can be placed
img_gray(row1:row2, col1:col2,:) = imgsub; % paste the colour subimage back in
To demonstrate this, let's try this with an image in MATLAB. We'll use the onion.png image that's part of the image processing toolbox in MATLAB. Therefore:
img = imread('onion.png');
Let's also define our row1,col1,row2,col2:
row1 = 50;
row2 = 90;
col1 = 80;
col2 = 150;
Let's get the subimage:
imgsub = img(row1:row2,col1:col2,:);
Running the above code, this is the image we get:
I took the same example as rayryeng's answer and tried to solve it by HSV conversion.
The basic idea is to set the second plane, i.e. the saturation plane, to 0 (so the whole image becomes grayscale), then write the original saturation values back only in the subimage area, so that region alone keeps its color.
Code:
img = imread('onion.png');
img = rgb2hsv(img); % work in HSV space; saturation is plane 2
sPlane = zeros(size(img(:,:,1))); % zero saturation everywhere -> grayscale
sPlane(50:90,80:150) = img(50:90,80:150,2); % keep the original saturation only in the subimage area
img(:,:,2) = sPlane;
img = hsv2rgb(img); % back to RGB
imshow(img);
Output: (Same as rayryeng's output)
Related Answer with more details here
By default, MATLAB's imrotate function rotates an image and fills the exposed regions with black. See this: http://in.mathworks.com/help/examples/images_product/RotationFitgeotransExample_02.png
We can also get a rotated image with a white background.
The question is: can we rotate an image (with or without using imrotate) so that the fill comes from the background of the original image?
Specific to my problem: a colored image with a very small angle of rotation (<= 5 deg.).
Here's a naive approach: apply the same rotation to a mask, take only the parts of the rotated image that correspond to the transformed mask, and superimpose those pixels on the original image.
I ignore possible blending on the boundary.
A = imread('cameraman.tif');
angle = 10;
T = @(I) imrotate(I,angle,'bilinear','crop');
%// Apply transformation
TA = T(A);
mask = T(ones(size(A)))==1;
A(mask) = TA(mask);
%%// Show image
imshow(A);
You can use the padarray() function with the 'replicate' and 'both' options to extend your image, and then use the imrotate() function.
In the code below I used ceil(size(im)/2) as the pad size, but you may want a bigger pad to eliminate the black regions entirely. I also used s and S (writing imR(S(1)-s(1):S(1)+s(1), S(2)-s(2):S(2)+s(2), :)) to crop the image; you can keep a bigger part of the image by simply expanding those index bounds on imR.
Try this:
im = imread('cameraman.tif'); %// You can also read a color image
s = ceil(size(im)/2);
imP = padarray(im, s(1:2), 'replicate', 'both');
imR = imrotate(imP, 45);
S = ceil(size(imR)/2);
imF = imR(S(1)-s(1):S(1)+s(1)-1, S(2)-s(2):S(2)+s(2)-1, :); %// Final form
figure,
subplot(1, 2, 1)
imshow(im);
title('Original Image')
subplot(1, 2, 2)
imshow(imF);
title('Rotated Image')
This gives the output below:
Not perfect, but better than the black fill.
I have some code which takes a fisheye image and converts it to a rectangular image in each RGB channel. I am having trouble with the fact that the output image is square instead of rectangular (this means the image is distorted, compressed horizontally). I have tried changing the output matrix to a more suitable format, without success. Besides this, I have also discovered that for the code to work the input image must be square, like 500x500. Any idea how to solve this issue? This is the code:
The code is inspired by Prakash Manandhar's "Polar To/From Rectangular Transform of Images" submission on the MathWorks File Exchange.
EDIT. Code now works.
function imP = FISHCOLOR2(imR)
rMin=0.1;
rMax=1;
[Mr, Nr, Dr] = size(imR); % size of rectangular image
xRc = (Mr+1)/2; % co-ordinates of the center of the image
yRc = (Nr+1)/2;
sx = (Mr-1)/2; % scale factors
sy = (Nr-1)/2;
reduced_dim = min(size(imR,1),size(imR,2));
imR = imresize(imR,[reduced_dim reduced_dim]);
M=size(imR,1);N=size(imR,2);
dr = (rMax - rMin)/(M-1);
dth = 2*pi/N;
r=rMin:dr:rMin+(M-1)*dr;
th=(0:dth:(N-1)*dth)';
[r,th]=meshgrid(r,th);
x=r.*cos(th);
y=r.*sin(th);
xR = x*sx + xRc;
yR = y*sy + yRc;
for k=1:Dr % colors
imP(:,:,k) = interp2(imR(:,:,k), xR, yR); % add k channel
end
imP = imresize(imP,[size(imP,1), size(imP,2)/3]);
imP = imrotate(imP,270);
SOLVED
Input image <- Image link
Output image <- Image link
PART A
To remove the requirement of a square input image, you may resize the input image into a square one with this -
%%// Resize the input image to make it square
reduced_dim = min(size(imR,1),size(imR,2));
imR = imresize(imR,[reduced_dim reduced_dim]);
A few points I would like to raise about this image resizing to make the image square. It is a quick and dirty approach, and it distorts non-square images, which you may not want if the image is far from square. Many of those non-square images have blackish borders around the image boundary; if you can remove those with some image-processing step or just manual photo editing, that would be ideal. After that, even if the image is not square, imresize could be considered a safe option.
PART B
Now, after the main processing that flattens out the fisheye image, at the end of your code the result has to be rotated 90 degrees clockwise or counter-clockwise, depending on whether the fisheye image has objects projected inwardly or outwardly, respectively.
%%// Rotating image
imP = imrotate(imP,-90); %%// When projected inwardly
imP = imrotate(imP,90); %%// When projected outwardly
Note that the flattened image must have a height equal to half the size of the input square image, that is, the radius of the image. Thus, the final output image must have size(imP,2)/2 rows.
Since you are flattening out a fisheye image, I assumed that the width of the flattened image must be 2*pi times its height. So, I tried this -
imP = imresize(imP,[size(imP,2)/2 pi*size(imP,2)]);
But the results looked too flattened out. So, the next logical experimental value was pi times the height, i.e. -
imP = imresize(imP,[size(imP,2)/2 pi*size(imP,2)/2]);
Results in this case looked good.
I'm not very experienced in the finer points of image processing in MATLAB, but depending on the exact operation of the imP fill mechanism, you may get what you're looking for by doing the following. Change:
M = size(imR, 1);
N = size(imR, 2);
To:
verticalScaleFactor = 0.5;
M = size(imR, 1) * verticalScaleFactor;
N = size(imR, 2);
If my hunch is right, you should be able to tune that scale factor to get the image just right. It may, however, break your code. Let me know if it doesn't work, and edit your post to flesh out exactly what each section of code does. Then we should be able to give it another shot. Good luck!
This is the code which works.
function imP = FISHCOLOR2(imR)
rMin=0.1;
rMax=1;
[Mr, Nr, Dr] = size(imR); % size of rectangular image
xRc = (Mr+1)/2; % co-ordinates of the center of the image
yRc = (Nr+1)/2;
sx = (Mr-1)/2; % scale factors
sy = (Nr-1)/2;
reduced_dim = min(size(imR,1),size(imR,2));
imR = imresize(imR,[reduced_dim reduced_dim]);
M=size(imR,1);N=size(imR,2);
dr = (rMax - rMin)/(M-1);
dth = 2*pi/N;
r=rMin:dr:rMin+(M-1)*dr;
th=(0:dth:(N-1)*dth)';
[r,th]=meshgrid(r,th);
x=r.*cos(th);
y=r.*sin(th);
xR = x*sx + xRc;
yR = y*sy + yRc;
for k=1:Dr % colors
imP(:,:,k) = interp2(imR(:,:,k), xR, yR); % add k channel
end
imP = imresize(imP,[size(imP,1), size(imP,2)/3]);
imP = imrotate(imP,270);