%clear workspace
clear;
clc;
close all;
%read Reference image and convert into single
rgb1= im2single(imread('r1.jpg'));
I1 = rgb2gray(rgb1);
%create mosaic background
sz= size(I1)+300;% Size of the mosaic
h=sz(1);
w=sz(2);
%create a world coordinate system
outputView = imref2d([h,w]);
%affine matrix
xtform = eye(3);
% Warp the current image onto the mosaic image
%using 2D affine geometric transformation
mosaic = imwarp(rgb1, affine2d(xtform),'OutputView', outputView);
figure,imshow(mosaic,'initialmagnification','fit');
%read Target image and convert into single
rgb2= im2single(imread('t1.jpg'));
I2 = rgb2gray(rgb2);
%find SURF features of the reference and target images, then compute a new
%affine matrix
%Detect SURF features in the reference image
points = detectSURFFeatures(I1);
%detectSURFFeatures returns information about SURF features detected
%in the 2-D grayscale input image. The detectSURFFeatures function
%implements the Speeded-Up Robust Features (SURF) algorithm
%to find blob features
%Extract feature vectors, also known as descriptors, and their
%corresponding locations
[featuresPrev, pointsPrev] = extractFeatures(I1,points);
%Detect SURF features in the target image
points = detectSURFFeatures(I2);
%Extract feature vectors and their corresponding locations
[features, points] = extractFeatures(I2,points);
% Match features computed from the reference and the target images
indexPairs = matchFeatures(features, featuresPrev);
% Find corresponding locations in the reference and the target images
matchedPoints = points(indexPairs(:, 1), :);
matchedPointsPrev = pointsPrev(indexPairs(:, 2), :);
%compute a geometric transformation from the corresponding locations
tform=estimateGeometricTransform(matchedPoints,matchedPointsPrev,'affine');
%get affine matrix
xtform = tform.T;
% Warp the current image onto the mosaic image
mosaicnew = imwarp(rgb2, affine2d(xtform), 'OutputView', outputView);
figure,imshow(mosaicnew,'initialmagnification','fit');
%create an object to overlay one image over another
halphablender = vision.AlphaBlender('Operation', 'Binary mask', 'MaskSource', 'Input port');
% Create a mask which specifies the region of the target image
% by applying the geometric transformation to an image of ones
mask= imwarp(ones(size(I2)), affine2d(xtform), 'OutputView', outputView)>=1;
figure,imshow(mask,'initialmagnification','fit');
%overlays one image over another
mosaicfinal = step(halphablender, mosaic,mosaicnew, mask);
%show results
figure,imshow(rgb1,'initialmagnification','fit');
figure,imshow(rgb2,'initialmagnification','fit');
figure,imshow(mosaicfinal,'initialmagnification','fit');
There was an error when using the function 'imref2d'; this is the error that appeared:
Undefined function 'imref2d' for input arguments of type 'double'. Error in immosaic (line 13) outputView = imref2d([h,w]);
The imref2d documentation will help you; you are passing incorrect input to imref2d.
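A quick sanity check (a sketch, not part of the original script) is to confirm that imref2d is available at all and that the size you pass matches the documented form:
% imref2d ships with the Image Processing Toolbox (R2013a or later);
% the "Undefined function" message can also mean it is simply not on the path.
which imref2d                   % should resolve to the Image Processing Toolbox
% imref2d expects the image size as a two-element vector of positive values:
sz = size(I1) + 300;            % mosaic canvas size, as in the script above
outputView = imref2d(sz(1:2));  % spatial reference for the mosaic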
I am trying to get pixel intensity values from regions of interest in RGB images.
I segmented the image and saved the regions of interest (ROI) using regionprops 'PixelList' in MATLAB, as shown below:
In this example I am using the "onion.png" image built into MATLAB. (But in reality I have hundreds of images, and each of them has several ROIs, hence why I'm saving the ROIs separately.)
%SEGMENTATION PROGRAM:
a=imread('C:\Program Files\MATLAB\MATLAB Production Server\R2015a\toolbox\images\imdata\onion.png');warning('off', 'Images:initSize:adjustingMag');
figure; imshow(a,[]);
nrows = size(a,1);ncols = size(a,2);
zr=ones(nrows,ncols); %matrix of ones
r=a(:,:,1);g=a(:,:,2);b=a(:,:,3); %get RGB values
rr=double(r);gg=double(g);bb=double(b);% convert to double to avoid uint8 sums
bgd=(rr+bb)./(2*gg); %calculation RGB ratio of background
zr1=bgd>1.15; %matrix containing background as 1 and ROI as 0
% remake binary image for holes command, which requires a white object to fill
% (this step is not relevant for this example, but it is for my data)
zr2=zr1<0.5;
zr3=imfill(zr2, 'holes');
figure;imshow(zr3); pause;
roi=regionprops(zr3,'Centroid','PixelList','Area');
ar=[roi.Area];
% find sort order , largest first
[as, ia]=sort(ar(1,:),'descend');
for w=1:length(roi); xy(w,:)=roi(w).Centroid;end
% use sort index to put centroid list in same order
xy1=xy(ia,:);
%and pixel id list
for w=1:length(roi)
roi2(w).PixelList=roi(ia(w)).PixelList;
end
%extract centroid positions as two columns
%SAVE PIXEL LIST FOR EACH ROI IN A SEPARATE FILE
for ww=1:w;
k=roi2(ww).PixelList;
save('onion_PL','k');
end
How do I use this pixel list to get the intensity values in the original image? More specifically, I need to get the ratio of pixels in the Green channel over the Red ("grr=rdivide(gg,rr);"), but only in the region of interest labeled with PixelList. Here's my code so far:
%PL is the PixelList output we got above.
a=imread('C:\Program Files\MATLAB\MATLAB Production Server\R2015a\toolbox\images\imdata\onion.png');warning('off', 'Images:initSize:adjustingMag');
PL=dir('*PL.mat'); %list the PixelList files in the directory containing the pixel list files. In this example, we saved "onion_PL.mat"
for m=1:length(PL);
load(PL(m).name);
ex=[]; %empty matrix to hold the extracted values
for mm=1:length(k);
%INSERT ANSWER HERE
end
This next bit is wrong because it's based on the entire image ("a"), but it contains the calculations that I would like to perform in the ROIs
figure; imshow(a,[]);
pause;
nrows = size(a,1);ncols = size(a,2);
zr=ones(nrows,ncols); %matrix of ones
r=a(:,:,1);g=a(:,:,2);b=a(:,:,3); %get RGB values
rr=double(r);gg=double(g);bb=double(b);% convert to double to avoid uint8 sums
grr=rdivide(gg,rr);
I am brand new to MATLAB, so my code is not the greatest... Any suggestions will be greatly appreciated. Thank you in advance!
The loop you are looking for seems simple:
grr = zeros(nrows, ncols); % Initialize grr with zeros.
for mm = 1:length(k)
x = k(mm, 1); % Get the X (column) coordinate.
y = k(mm, 2); % Get the Y (row) coordinate.
grr(y, x) = gg(y, x) / rr(y, x);
end
A more efficient solution is using sub2ind for converting the x,y coordinates to linear indices:
% Convert k to linear indices.
kInd = sub2ind([nrows, ncols], k(:,2), k(:,1));
% Update only the indices in the PixelList.
grr(kInd) = rdivide(gg(kInd), rr(kInd));
In your given code sample there are 5 PixelLists.
I don't know how you want to "arrange" the result.
In my code sample, I am saving the 5 results to 5 mat files.
Here is an executable code sample:
close all
%SEGMENTATION PROGRAM:
a=imread('onion.png');warning('off', 'Images:initSize:adjustingMag');
figure; imshow(a,[]);
nrows = size(a,1);ncols = size(a,2);
zr=ones(nrows,ncols); %matrix of ones
r=a(:,:,1);g=a(:,:,2);b=a(:,:,3); %get RGB values
rr=double(r);gg=double(g);bb=double(b);% convert to double to avoid uint8 sums
bgd=(rr+bb)./(2*gg); %calculation RGB ratio of background
zr1=bgd>1.15; %matrix containing background as 1 and ROI as 0
% remake binary image for holes command, which requires a white object to fill
% (this step is not relevant for this example, but it is for my data)
zr2=zr1<0.5;
zr3=imfill(zr2, 'holes');
figure;imshow(zr3); %pause;
roi=regionprops(zr3,'Centroid','PixelList','Area');
ar=[roi.Area];
% find sort order , largest first
[as, ia]=sort(ar(1,:),'descend');
for w=1:length(roi); xy(w,:)=roi(w).Centroid;end
% use sort index to put centroid list in same order
xy1=xy(ia,:);
%and pixel id list
for w=1:length(roi)
roi2(w).PixelList=roi(ia(w)).PixelList;
end
%extract centroid positions as two columns
%SAVE PIXEL LIST FOR EACH ROI IN A SEPARATE FILE
for ww=1:w
k=roi2(ww).PixelList;
%save('onion_PL', 'k');
save(['onion', num2str(ww), '_PL'], 'k'); % Store in multiple files - onion1_PL.mat, onion2_PL.mat, ... onion5_PL.mat
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
clear % Use clear for testing - the variables are going to be read from the mat file.
%PL is the PixelList output we got above.
a=imread('onion.png');warning('off', 'Images:initSize:adjustingMag');
nrows = size(a,1);ncols = size(a,2);
zr=ones(nrows,ncols); %matrix of ones
r=a(:,:,1);g=a(:,:,2);b=a(:,:,3); %get RGB values
rr=double(r);gg=double(g);bb=double(b);% convert to double to avoid uint8 sums
grr=rdivide(gg,rr);
PL=dir('*PL.mat'); %list the PixelList files in the directory. In this example, we saved onion1_PL.mat ... onion5_PL.mat
for m = 1:length(PL)
load(PL(m).name);
ex=[]; %empty matrix to hold the extracted values
%for mm=1:length(k)
%INSERT ANSWER HERE
grr = zeros(nrows, ncols); % Initialize grr with zeros.
for mm = 1:length(k)
x = k(mm, 1); % Get the X (column) coordinate.
y = k(mm, 2); % Get the Y (row) coordinate.
grr(y, x) = gg(y, x) / rr(y, x);
end
% Instead of using a loop, it's more efficient to use sub2ind
if false
% Convert k to linear indices.
kInd = sub2ind([nrows, ncols], k(:,2), k(:,1));
% Update only the indices in the PixelList.
grr(kInd) = rdivide(gg(kInd), rr(kInd));
end
figure;imshow(grr);title(['grr of m=', num2str(m)]) % Show grr for testing.
save(['grr', num2str(m)], 'grr'); % Save grr for testing.
imwrite(imadjust(grr, stretchlim(grr)), ['grr', num2str(m), '.png']); % Store grr as image for testing
end
First two grr matrices as images (used for testing):
grr1.png:
grr2.png:
I have this problem in image processing and I couldn't find an algorithm that performs well under this condition. It's simple to understand, but I don't know how to implement it in OpenCV or in MATLAB, so any algorithm or function in either of them is helpful.
1. Let's suppose that the image and the background of the scene are like the below.
2. We apply an edge detector to the image, and the current image will be like the below picture.
Now my problem is: how can we find the biggest contour or area, like the one below, in the edge image?
If you want the original picture, it is:
In MATLAB you can get the edge image with the code below.
clc
clear
img = imread('1.png'); % read Image
gray = rgb2gray(img); % Convert RGB to gray-scale
edgeImage = edge(gray,'canny',0.09); % apply canny to gray-scale image
imshow(edgeImage) % Display result in figure(MATLAB)
In OpenCV you can use the code below:
#include <opencv2/opencv.hpp>
using namespace cv;
int main()
{
Mat img = imread("1.png");
Mat gray;
cvtColor(img, //BGR image
gray, //Mat for gray (destination)
CV_BGR2GRAY); //type of transform (here BGR -> gray)
Mat edgeImage;
Canny(gray, //Input Array
edgeImage, //Output Array
40, // Lower threshold
120); //Upper threshold
namedWindow("Edge-Image"); //create a window to display the image
imshow("Edge-Image",edgeImage); //display edgeImage in the window created above
waitKey(0); //keep the display window open and wait for any key
return 0;
}
Here is a solution in MATLAB using imdilate to close the contours and regionprops to get the closed objects' area:
% Your code to start
img = imread('Image.png'); % read Image
gray = rgb2gray(img); % Convert RGB to gray-scale
edgeImage = edge(gray,'canny',0.09); % apply canny to gray-scale image
% First dilate to close contours
BW = imdilate(edgeImage, strel('disk',4,8));
% Then find the regions
R = regionprops(~BW, {'Area', 'PixelIdxList'});
% Find the second biggest region (the biggest is the background)
[~, I] = sort([R(:).Area], 'descend');
Mask = zeros(size(img));
Mask(R(I(2)).PixelIdxList) = 1;
% Display
clf
imshow(Mask)
And the result is:
First close the contour with morphological closing since you can't find it now as it is not really a distinct contour, but a part of the larger one.
After closing, just use the findContours() function and use its output to get the area of each contour and eventually find the maximum one by using the contourArea() function.
How to count circle objects in a bright image using MATLAB?
The input image is:
The imfindcircles function can't find any circles in this image.
Based on well-known image processing techniques, you can write your own processing tool:
img = imread('Mlj6r.jpg'); % read the image
imgGray = rgb2gray(img); % convert to grayscale
sigma = 1;
imgGray = imgaussfilt(imgGray, sigma); % filter the image (we will take derivatives, which are sensitive to noise)
imshow(imgGray) % show the image
[gx, gy] = gradient(double(imgGray)); % take the first derivative
[gxx, gxy] = gradient(gx); % take the second derivatives
[gxy, gyy] = gradient(gy); % take the second derivatives
k = 0.04; %0.04-0.15 (see Wikipedia)
blob = (gxx.*gyy - gxy.*gxy - k*(gxx + gyy).^2); % Harris corner detector (high second derivatives in two perpendicular directions)
blob = blob .* (gxx < 0 & gyy < 0); % keep only bright blob tops (i.e. negative second derivatives in both directions)
figure
imshow(blob) % show the blobs
blobThreshold = 1;
circles = imregionalmax(blob) & blob > blobThreshold; % find local maxima and apply a threshold
figure
imshow(imgGray) % show the original image
hold on
[X, Y] = find(circles); % find the position of the circles
plot(Y, X, 'w.'); % plot the circle positions on top of the original figure
nCircles = length(X)
This code counts 2710 circles, which is probably a slight (but not so bad) overestimation.
The following figure shows the original image with the circle positions indicated as white dots. Some wrong detections are made at the border of the object. You can try to make some adjustments to the constants sigma, k and blobThreshold to obtain better results. In particular, a higher k may be beneficial. See Wikipedia for more information about the Harris corner detector.
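If you want to explore that sensitivity programmatically, a rough sketch (reusing gxx, gyy, gxy and blobThreshold from the code above; the candidate k values are only examples) could be:
% Sweep a few candidate k values and report the resulting circle count.
for kTest = [0.04 0.08 0.15] % example values within the usual 0.04-0.15 range
    blobTest = (gxx.*gyy - gxy.*gxy - kTest*(gxx + gyy).^2) .* (gxx < 0 & gyy < 0);
    nTest = nnz(imregionalmax(blobTest) & blobTest > blobThreshold);
    fprintf('k = %.2f -> %d circles\n', kTest, nTest);
end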
I have a stereo camera system and I am trying this MATLAB's Computer Vision toolbox example (http://www.mathworks.com/help/vision/ug/sparse-3-d-reconstruction-from-multiple-views.html) with my own images and camera calibration files. I used Caltech's camera calibration toolbox (http://www.vision.caltech.edu/bouguetj/calib_doc/).
First I tried each camera separately based on the first example, found the intrinsic camera calibration matrices for each camera, and saved them. I also undistorted the left and right images using the Caltech toolbox. Therefore I commented out the code for that from the MATLAB example.
Here are the instrinsic camera matrices:
K1=[1050 0 630;0 1048 460;0 0 1];
K2=[1048 0 662;0 1047 468;0 0 1];
BTW, these are the right and center lenses from bumblebee XB3 cameras.
Question: aren't they supposed to be the same?
Then I did stereo calibration based on the fifth example. I saved the rotation matrix (R) and translation matrix (T) from that. Therefore I commented out the code for that from the MATLAB example.
Here are the rotation and translation matrices:
R=[0.9999 -0.0080 -0.0086;0.0080 1 0.0048;0.0086 -0.0049 1];
T=[120.14 0.55 1.04];
Then I fed all these images, calibration files, and camera matrices to the MATLAB example and tried to find the 3-D point cloud, but the results are not promising. I am attaching the code here. I think there are two problems:
1. My epipolar constraint values are too large (on the order of 10^16)!
2. I am not sure about the camera matrices and how I calculated them from the R and T of the Caltech toolbox.
P.S. As far as feature extraction goes, that is fine.
It would be great if someone could help.
clear
close all
clc
files = {'Left1.tif';'Right1.tif'};
for i = 1:numel(files)
files{i}=fullfile('...\sparse_matlab', files{i});
images(i).image = imread(files{i});
end
figure;
montage(files); title('Pair of Original Images')
% Intrinsic camera parameters
load('Calib_Results_Left.mat')
K1 = KK;
load('Calib_Results_Right.mat')
K2 = KK;
%Extrinsics using stereo calibration
load('Calib_Results_stereo.mat')
Rotation=R;
Translation=T';
images(1).CameraMatrix=[Rotation; Translation] * K1;
images(2).CameraMatrix=[Rotation; Translation] * K2;
% Detect feature points and extract SURF descriptors in images
for i = 1:numel(images)
%detect SURF feature points
images(i).points = detectSURFFeatures(rgb2gray(images(i).image),...
'MetricThreshold',600);
%extract SURF descriptors
[images(i).featureVectors,images(i).points] = ...
extractFeatures(rgb2gray(images(i).image),images(i).points);
end
% Visualize several extracted SURF features from the Left image
figure; imshow(images(1).image);
title('1500 Strongest Feature Points from Globe01');
hold on;
plot(images(1).points.selectStrongest(1500));
indexPairs = ...
matchFeatures(images(1).featureVectors, images(2).featureVectors,...
'Prenormalized', true,'MaxRatio',0.4) ;
matchedPoints1 = images(1).points(indexPairs(:, 1));
matchedPoints2 = images(2).points(indexPairs(:, 2));
figure;
% Visualize correspondences
showMatchedFeatures(images(1).image,images(2).image,matchedPoints1,matchedPoints2,'montage' );
title('Original Matched Features from Globe01 and Globe02');
% Set a value near zero, It will be used to eliminate matches that
% correspond to points that do not lie on an epipolar line.
epipolarThreshold = .05;
for k = 1:length(matchedPoints1)
% Compute the fundamental matrix using the example helper function
% Evaluate the epipolar constraint
epipolarConstraint =[matchedPoints1.Location(k,:),1]...
*helperCameraMatricesToFMatrix(images(1).CameraMatrix,images(2).CameraMatrix)...
*[matchedPoints2.Location(k,:),1]';
%%%% here my epipolarConstraint results are bad %%%%%%%%%%%%%
% Only consider feature matches where the absolute value of the
% constraint expression is less than the threshold.
valid(k) = abs(epipolarConstraint) < epipolarThreshold;
end
validpts1 = images(1).points(indexPairs(valid, 1));
validpts2 = images(2).points(indexPairs(valid, 2));
figure;
showMatchedFeatures(images(1).image,images(2).image,validpts1,validpts2,'montage');
title('Matched Features After Applying Epipolar Constraint');
% convert image to double format for plotting
doubleimage = im2double(images(1).image);
points3D = ones(length(validpts1),4); % store homogeneous world coordinates
color = ones(length(validpts1),3); % store color information
% For all point correspondences
for i = 1:length(validpts1)
% For all image locations from a list of correspondences build an A
pointInImage1 = validpts1(i).Location;
pointInImage2 = validpts2(i).Location;
P1 = images(1).CameraMatrix'; % Transpose to match the convention in
P2 = images(2).CameraMatrix'; % in [1]
A = [
pointInImage1(1)*P1(3,:) - P1(1,:);...
pointInImage1(2)*P1(3,:) - P1(2,:);...
pointInImage2(1)*P2(3,:) - P2(1,:);...
pointInImage2(2)*P2(3,:) - P2(2,:)];
% Compute the 3-D location using the smallest singular value from the
% singular value decomposition of the matrix A
[~,~,V]=svd(A);
X = V(:,end);
X = X/X(end);
% Store location
points3D(i,:) = X';
% Store pixel color for visualization
y = round(pointInImage1(1));
x = round(pointInImage1(2));
color(i,:) = squeeze(doubleimage(x,y,:))';
end
% add green point representing the origin
points3D(end+1,:) = [0,0,0,1];
color(end+1,:) = [0,1,0];
% show images
figure('units','normalized','outerposition',[0 0 .5 .5])
subplot(1,2,1); montage(files,'Size',[1,2]); title('Original Images')
% plot point-cloud
hAxes = subplot(1,2,2); hold on; grid on;
scatter3(points3D(:,1),points3D(:,2),points3D(:,3),50,color,'fill')
xlabel('x-axis (mm)');ylabel('y-axis (mm)');zlabel('z-axis (mm)')
view(20,24);axis equal;axis vis3d
set(hAxes,'XAxisLocation','top','YAxisLocation','left',...
'ZDir','reverse','Ydir','reverse');
grid on
title('Reconstructed Point Cloud');
First of all, the Computer Vision System Toolbox now includes a Camera Calibrator App for calibrating a single camera, and also support for programmatic stereo camera calibration. It would be easier for you to use those tools, because the example you are using and the Caltech Calibration Toolbox use somewhat different conventions.
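As a rough sketch of the programmatic route (the checkerboard file names and square size below are hypothetical placeholders, not taken from your setup):
% Detect checkerboard corners in corresponding left/right calibration images.
leftFiles  = {'left01.tif','left02.tif','left03.tif'};   % hypothetical file names
rightFiles = {'right01.tif','right02.tif','right03.tif'};
[imagePoints, boardSize] = detectCheckerboardPoints(leftFiles, rightFiles);
squareSize  = 25;                                        % square size in mm (hypothetical)
worldPoints = generateCheckerboardPoints(boardSize, squareSize);
% Estimate intrinsics and extrinsics for the stereo pair in one call.
stereoParams = estimateCameraParameters(imagePoints, worldPoints);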
The example uses the pre-multiply convention, i.e. row vector * matrix, while the Caltech toolbox uses the post-multiply convention (matrix * column vector). That means that if you do use the camera parameters from Caltech, you would have to transpose the intrinsic matrix and the rotation matrices. That could be the main cause of your problems.
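A minimal sketch of that transposition (KK_left and KK_right are hypothetical stand-ins for the intrinsic matrices loaded from your two Caltech files; it also assumes the first camera is the reference at the origin, as in the MathWorks example):
K1 = KK_left';                     % transpose intrinsics (post- to pre-multiply convention)
K2 = KK_right';
Rotation    = R';                  % transpose the stereo rotation matrix
Translation = T(:)';               % translation as a row vector
images(1).CameraMatrix = [eye(3); 0 0 0] * K1;          % reference camera at the origin
images(2).CameraMatrix = [Rotation; Translation] * K2;  % second camera relative to the first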
As far as the intrinsics being different between your two cameras, that is perfectly normal. All cameras are slightly different.
It would also help to see the matched features that you've used for triangulation. Given that you are reconstructing an elongated object, it doesn't seem too surprising to see the reconstructed points form a line in 3D...
You could also try rectifying the images and doing a dense reconstruction, as in the example I've linked to above.
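As a rough sketch of that dense route (assuming a stereoParameters object such as the stereoParams from the calibration sketch above, rather than the Caltech .mat files):
% Rectify the pair, compute a disparity map, and reconstruct a dense point cloud.
[J1, J2] = rectifyStereoImages(images(1).image, images(2).image, stereoParams);
disparityMap = disparity(rgb2gray(J1), rgb2gray(J2));
xyzPoints = reconstructScene(disparityMap, stereoParams);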
I have for example the following image and a corresponding mask.
I would like to weight the pixels inside the white circle with a Gaussian, g = @(x,y,xc,yc) exp(-( ((x-xc)^2)/0.5 + ((y-yc)^2)/0.5 ));, placed at the centroid (xc,yc) of the mask. x, y are the coordinates of the corresponding pixels. Could someone please suggest a way to do that without using for loops?
Thanks.
By "weighting" pixels inside the ellipse, I assume you mean multiply elementwise by a 2D gaussian. If so, here's the code:
% Read images
img = imread('img.jpg');
img = im2double(rgb2gray(img));
mask = imread('mask.jpg');
mask = im2double(rgb2gray(mask)) > 0.9; % JPG Compression resulted in some noise
% Gaussian function
g = @(x,y,xc,yc) exp(-(((x-xc).^2)/500+((y-yc).^2)./200)); % Should be modified to allow variances as parameters
% Use rp to get centroid and mask
rp_mask = regionprops(mask,'Centroid','BoundingBox','Image');
% Form coordinates
centroid = round(rp_mask.Centroid);
[coord_x coord_y] = meshgrid(ceil(rp_mask.BoundingBox(1)):ceil(rp_mask.BoundingBox(1))+rp_mask.BoundingBox(3)-1, ...
ceil(rp_mask.BoundingBox(2)):ceil(rp_mask.BoundingBox(2))+rp_mask.BoundingBox(4)-1);
% Get Gaussian Mask
gaussian_mask = g(coord_x,coord_y,centroid(1),centroid(2));
gaussian_mask(~rp_mask.Image) = 1; % Set values outside ROI to 1, this negates weighting outside ROI
% Apply Gaussian - Can use temp variables to make this shorter
img_g = img;
img_g(ceil(rp_mask.BoundingBox(2)):ceil(rp_mask.BoundingBox(2))+rp_mask.BoundingBox(4)-1, ...
ceil(rp_mask.BoundingBox(1)):ceil(rp_mask.BoundingBox(1))+rp_mask.BoundingBox(3)-1) = ...
img(ceil(rp_mask.BoundingBox(2)):ceil(rp_mask.BoundingBox(2))+rp_mask.BoundingBox(4)-1, ...
ceil(rp_mask.BoundingBox(1)):ceil(rp_mask.BoundingBox(1))+rp_mask.BoundingBox(3)-1) .* gaussian_mask;
% Show
figure, imshow(img_g,[]);
The result:
If you instead want to perform some filtering within that ROI, there's a function called roifilt2 which will allow you to filter the image within that region as well:
img_filt = roifilt2(fspecial('gaussian',[21 21],10),img,mask);
figure, imshow(img_filt,[]);
The result: