Sparse 3D reconstruction MATLAB example

I have a stereo camera system and I am trying out this MATLAB Computer Vision System Toolbox example (http://www.mathworks.com/help/vision/ug/sparse-3-d-reconstruction-from-multiple-views.html) with my own images and camera calibration files. I used Caltech's camera calibration toolbox (http://www.vision.caltech.edu/bouguetj/calib_doc/).
First I calibrated each camera separately, following the first calibration example, to find the intrinsic camera calibration matrix for each camera, and saved them. I also undistorted the left and right images using the Caltech toolbox, so I commented out the corresponding code in the MATLAB example.
Here are the intrinsic camera matrices:
K1=[1050 0 630;0 1048 460;0 0 1];
K2=[1048 0 662;0 1047 468;0 0 1];
BTW, these are the right and center lenses of a Bumblebee XB3 camera.
Question: aren't they supposed to be the same?
Then I did stereo calibration following the fifth calibration example, saved the rotation matrix (R) and translation vector (T) from it, and commented out the corresponding code in the MATLAB example.
Here are the rotation and translation matrices:
R=[0.9999 -0.0080 -0.0086;0.0080 1 0.0048;0.0086 -0.0049 1];
T=[120.14 0.55 1.04];
Then I fed all these images, calibration files, and camera matrices into the MATLAB example and tried to compute the 3-D point cloud, but the results are not promising. I am attaching the code here. I think there are two problems:
1. My epipolar constraint values are far too large (on the order of 10^16)!
2. I am not sure about the camera matrices and how I calculated them from the R and T given by the Caltech toolbox.
P.S. As far as feature extraction goes, that part is fine.
It would be great if someone could help.
clear
close all
clc
files = {'Left1.tif';'Right1.tif'};
for i = 1:numel(files)
    files{i} = fullfile('...\sparse_matlab', files{i});
    images(i).image = imread(files{i});
end
figure;
montage(files); title('Pair of Original Images')
% Intrinsic camera parameters
load('Calib_Results_Left.mat')
K1 = KK;
load('Calib_Results_Right.mat')
K2 = KK;
%Extrinsics using stereo calibration
load('Calib_Results_stereo.mat')
Rotation=R;
Translation=T';
images(1).CameraMatrix=[Rotation; Translation] * K1;
images(2).CameraMatrix=[Rotation; Translation] * K2;
% Detect feature points and extract SURF descriptors in images
for i = 1:numel(images)
    % detect SURF feature points
    images(i).points = detectSURFFeatures(rgb2gray(images(i).image),...
        'MetricThreshold', 600);
    % extract SURF descriptors
    [images(i).featureVectors, images(i).points] = ...
        extractFeatures(rgb2gray(images(i).image), images(i).points);
end
% Visualize several extracted SURF features from the Left image
figure; imshow(images(1).image);
title('1500 Strongest Feature Points from Globe01');
hold on;
plot(images(1).points.selectStrongest(1500));
indexPairs = ...
    matchFeatures(images(1).featureVectors, images(2).featureVectors,...
    'Prenormalized', true, 'MaxRatio', 0.4);
matchedPoints1 = images(1).points(indexPairs(:, 1));
matchedPoints2 = images(2).points(indexPairs(:, 2));
figure;
% Visualize correspondences
showMatchedFeatures(images(1).image,images(2).image,matchedPoints1,matchedPoints2,'montage' );
title('Original Matched Features from Globe01 and Globe02');
% Set a threshold near zero. It will be used to eliminate matches that
% correspond to points that do not lie on an epipolar line.
epipolarThreshold = .05;
for k = 1:length(matchedPoints1)
    % Compute the fundamental matrix using the example helper function
    % Evaluate the epipolar constraint
    epipolarConstraint = [matchedPoints1.Location(k,:),1]...
        *helperCameraMatricesToFMatrix(images(1).CameraMatrix,images(2).CameraMatrix)...
        *[matchedPoints2.Location(k,:),1]';
    %%%% here my epipolarConstraint results are bad %%%%%%%%%%%%%
    % Only consider feature matches where the absolute value of the
    % constraint expression is less than the threshold.
    valid(k) = abs(epipolarConstraint) < epipolarThreshold;
end
validpts1 = images(1).points(indexPairs(valid, 1));
validpts2 = images(2).points(indexPairs(valid, 2));
figure;
showMatchedFeatures(images(1).image,images(2).image,validpts1,validpts2,'montage');
title('Matched Features After Applying Epipolar Constraint');
% convert image to double format for plotting
doubleimage = im2double(images(1).image);
points3D = ones(length(validpts1),4); % store homogeneous world coordinates
color = ones(length(validpts1),3); % store color information
% For all point correspondences
for i = 1:length(validpts1)
    % For all image locations from a list of correspondences build an A
    pointInImage1 = validpts1(i).Location;
    pointInImage2 = validpts2(i).Location;
    P1 = images(1).CameraMatrix'; % Transpose to match the convention in
    P2 = images(2).CameraMatrix'; % in [1]
    A = [
        pointInImage1(1)*P1(3,:) - P1(1,:);...
        pointInImage1(2)*P1(3,:) - P1(2,:);...
        pointInImage2(1)*P2(3,:) - P2(1,:);...
        pointInImage2(2)*P2(3,:) - P2(2,:)];
    % Compute the 3-D location using the smallest singular value from the
    % singular value decomposition of the matrix A
    [~,~,V] = svd(A);
    X = V(:,end);
    X = X/X(end);
    % Store location
    points3D(i,:) = X';
    % Store pixel color for visualization
    y = round(pointInImage1(1));
    x = round(pointInImage1(2));
    color(i,:) = squeeze(doubleimage(x,y,:))';
end
% add green point representing the origin
points3D(end+1,:) = [0,0,0,1];
color(end+1,:) = [0,1,0];
% show images
figure('units','normalized','outerposition',[0 0 .5 .5])
subplot(1,2,1); montage(files,'Size',[1,2]); title('Original Images')
% plot point-cloud
hAxes = subplot(1,2,2); hold on; grid on;
scatter3(points3D(:,1),points3D(:,2),points3D(:,3),50,color,'fill')
xlabel('x-axis (mm)');ylabel('y-axis (mm)');zlabel('z-axis (mm)')
view(20,24);axis equal;axis vis3d
set(hAxes,'XAxisLocation','top','YAxisLocation','left',...
'ZDir','reverse','Ydir','reverse');
grid on
title('Reconstructed Point Cloud');

First of all, the Computer Vision System Toolbox now includes a Camera Calibrator App for calibrating a single camera, and also support for programmatic stereo camera calibration. It would be easier for you to use those tools, because the example you are using and the Caltech Calibration Toolbox use somewhat different conventions.
The example uses the pre-multiply convention, i.e. row vector * matrix, while the Caltech toolbox uses the post-multiply convention (matrix * column vector). That means that if you do use the camera parameters from Caltech, you would have to transpose the intrinsic matrix and the rotation matrices. That could be the main cause of your problems.
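As a minimal sketch, assuming K1, K2, R, and T are the values saved by the Caltech toolbox (with T a 3-by-1 column) and taking camera 1 as the world origin, the conversion might look like:
% Convert Caltech's post-multiply convention (x = K*(R*X + T), column
% vectors) to the example's pre-multiply convention (row vector * matrix)
images(1).CameraMatrix = [eye(3); zeros(1,3)] * K1'; % camera 1 at the origin
images(2).CameraMatrix = [R'; T'] * K2';             % camera 2 posed by R and T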
As far as the intrinsics being different between your two cameras, that is perfectly normal. All cameras are slightly different.
It would also help to see the matched features that you've used for triangulation. Given that you are reconstructing an elongated object, it doesn't seem too surprising to see the reconstructed points form a line in 3D...
You could also try rectifying the images and doing a dense reconstruction, as in the example I've linked to above.
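If you go the dense route, a rough sketch using the toolbox functions (assuming you have a stereoParameters object from stereo calibration; function names as of R2014b) would be:
% Rectify, compute disparity, and reconstruct a dense point cloud
[J1, J2] = rectifyStereoImages(I1, I2, stereoParams);
disparityMap = disparity(rgb2gray(J1), rgb2gray(J2));
xyzPoints = reconstructScene(disparityMap, stereoParams); % same units as the calibration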

Related

Count circle objects in an image using matlab

How to count circle objects in a bright image using MATLAB?
The input image is: [image]
The imfindcircles function can't find any circles in this image.
Based on well-known image-processing techniques, you can write your own processing tool:
img = imread('Mlj6r.jpg'); % read the image
imgGray = rgb2gray(img); % convert to grayscale
sigma = 1;
imgGray = imgaussfilt(imgGray, sigma); % filter the image (we will take derivatives, which are sensitive to noise)
imshow(imgGray) % show the image
[gx, gy] = gradient(double(imgGray)); % take the first derivative
[gxx, gxy] = gradient(gx); % take the second derivatives
[gxy, gyy] = gradient(gy); % take the second derivatives
k = 0.04; %0.04-0.15 (see wikipedia)
blob = (gxx.*gyy - gxy.*gxy - k*(gxx + gyy).^2); % Harris corner detector (high second derivatives in two perpendicular directions)
blob = blob .* (gxx < 0 & gyy < 0); % keep only intensity maxima (negative second derivatives in both directions)
figure
imshow(blob) % show the blobs
blobThreshold = 1;
circles = imregionalmax(blob) & blob > blobThreshold; % find local maxima and apply a threshold
figure
imshow(imgGray) % show the original image
hold on
[X, Y] = find(circles); % find the position of the circles
plot(Y, X, 'w.'); % plot the circle positions on top of the original figure
nCircles = length(X)
This code counts 2710 circles, which is probably a slight (but not so bad) overestimation.
The following figure shows the original image with the circle positions indicated as white dots. Some wrong detections are made at the border of the object. You can try adjusting the constants sigma, k and blobThreshold to obtain better results. In particular, a higher k may be beneficial. See Wikipedia for more information about the Harris corner detector.
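If you want to see how sensitive the count is to k, a quick sweep over the commonly cited range (reusing the gradient images computed above) might look like this:
% Sweep the Harris constant k and report the circle count for each value
for k = [0.04 0.08 0.12 0.15]
    blob = (gxx.*gyy - gxy.*gxy - k*(gxx + gyy).^2) .* (gxx < 0 & gyy < 0);
    circles = imregionalmax(blob) & blob > blobThreshold;
    fprintf('k = %.2f -> %d circles\n', k, nnz(circles));
end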

MATLAB 3D sparse reconstruction issues. Somehow I can't get the final scatter plot of the points to work

I'm trying to reconstruct the shape of a sail. I'm using the 3D sparse reconstruction method. I'm using two cameras with which I took two pictures. I managed to do the calibration of such cameras too. In the pictures it is possible to see the checkerboard and the code I wrote detects it properly.
Now, since my pictures are black and white and the quality of the cameras is quite low, I cannot use the detectFeatures method properly. Problems arise when I try to use matchFeatures. To work around this, I decided to use the cpselect tool instead, which lets me click on the features manually. The matching between points from the two views now seems correct. But when I carry on with the code and try to reconstruct the 3D plot, I get points all over the place; the result looks deformed. The plot clearly does not represent the sail and I don't know why.
The code follows.
Thank you in advance
% % Load precomputed camera parameters
load IP_CalibrationCarlos.mat %Calibration feature
%
I1 = imread('/Users/riccardocamin/Documents/MATLAB/Frames/Scan1.1.jpg');
I2 = imread('/Users/riccardocamin/Documents/MATLAB/Frames/Scan2.1.jpg');
%
[I1, newOrigin1] = undistortImage(I1, cameraParameters, 'OutputView', 'full');
[I2, newOrigin2] = undistortImage(I2, cameraParameters, 'OutputView', 'full');
%
I1 = imcrop(I1, [80 10 1040 1300]); %Necessary so images have same size
I2 = imcrop(I2, [0 10 1067 1300]);
%
squareSize = 82; % checkerboard square size in millimeters
%
[imagePoints, boardSize, pairsUsed] = detectCheckerboardPoints(rgb2gray(I1), rgb2gray(I2));
[refPoints1, boardSize] = detectCheckerboardPoints(rgb2gray(I1));
[refPoints2, boardSize] = detectCheckerboardPoints(rgb2gray(I2));
%
% % Translate detected points back into the original image coordinates
refPoints1 = bsxfun(@plus, refPoints1, newOrigin1);
refPoints2 = bsxfun(@plus, refPoints2, newOrigin2);
%
worldPoints = generateCheckerboardPoints(boardSize, squareSize);
%
[R1, t1] = extrinsics(refPoints1, worldPoints, cameraParameters); %R = r t = translation
[R2, t2] = extrinsics(refPoints2, worldPoints, cameraParameters);
%
% % Calculate camera matrices using the |cameraMatrix| function.
cameraMatrix1 = cameraMatrix(cameraParameters, R1, t1);
cameraMatrix2 = cameraMatrix(cameraParameters, R2, t2);
%
cpselect(I1, I2); % Save them as 'matchedPoints1' and 'matchedPoints2'
%
indexPairs = matchFeatures(matchedPoints1, matchedPoints2);
% Visualize correspondences
figure;
showMatchedFeatures(I1, I2, matchedPoints1, matchedPoints2);
title('Matched Features');
%
[points3D] = triangulate(matchedPoints1, matchedPoints2, ...
cameraMatrix1, cameraMatrix2);
%
x = -points3D(:,1);
y = -points3D(:,2);
z = -points3D(:,3);
figure
scatter3(x,y,z, 25);
xlabel('X');
ylabel('Y');
zlabel('Z');
The first problem you have is that you are cropping the images. Once you do that, all your coordinates are off. You do not need to do that here, because the images do not need to be the same size.
The second question is how precisely did you select the matching points? From the picture you have posted, it seems that your matches can be off by a few pixels, which can result in a large reconstruction error. Can you try finding the centroids of the spots on the sail using regionprops?
Also, are the two cameras stationary relative to each other? If so, then you may be better off calibrating the stereo pair, and doing the dense reconstruction as in this example. In that case, the two images would have to be of the same size.
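As a sketch of the regionprops idea, assuming the spots on the sail are brighter than the background (the automatic threshold is a guess you would need to tune):
% Find spot centroids instead of clicking manually
G1 = I1;
if size(G1,3) == 3, G1 = rgb2gray(G1); end
bw1 = im2bw(G1, graythresh(G1));       % crude global threshold; tune as needed
stats1 = regionprops(bw1, 'Centroid');
centroids1 = vertcat(stats1.Centroid); % N-by-2 list of [x y] spot centers
Do the same for the second image, then pass the two centroid lists (in matching order) to triangulate.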

how can i use lsqcurvefit for image registration?

I have two 3D images and I need to register them using lsqcurvefit. I know that I can use imregister, but I want to implement my own registration with lsqcurvefit in MATLAB. My images follow a Gaussian distribution. The documentation does not make clear how I should set this up; can anyone help me in detail?
Image registration is a repeated process of mapping a source image to a target image using, for example, an affine transform. I want to use intensity-based registration, using all voxels of my image; therefore, I need to fit these two images as closely as possible.
Thanks
Here's an example of how to do point-wise image registration using lsqcurvefit. Basically you make a function that takes an affine matrix and a set of points (we're just going to use the translation and rotation parts, but you can add skew and magnification if desired) and returns a new set of points. There's probably a built-in function for this already, but it's only two lines, so it's easy to write. That function is:
function TformPts = TransformPoints(TransformMatrix, StartCoordinates)
% lsqcurvefit calls this as fcn(parameters, xdata), so the transform matrix
% (the parameters being fitted) comes first
TformPts = TransformMatrix*StartCoordinates;
Here's a script that generates some points, rotates and translates them by a random angle and vector, then uses the TransformPoints function as the input for lsqcurvefit to fit the needed transformation matrix for the registration. Applying the inverse of the fitted matrix then generates the registered set of points. If we did this all right, the red circles (original data) will line up with the black stars (shifted, then registered points) very well when the code below is run.
% 20 random points in x and y between 0 and 100
% row of ones pads out third dimension
pointsList = [100*rand(2, 20); ones(1, 20)];
rotateTheta = pi*rand(1); % add rotation, in radians
translateVector = 10*rand(1,2); % add translation, up to 10 units here
% 2D transformation matrix
% last row pads out third dimension
inputTransMatrix = [cos(rotateTheta), -sin(rotateTheta), translateVector(1);
                    sin(rotateTheta),  cos(rotateTheta), translateVector(2);
                    0, 0, 1];
% Transform starting points by this matrix to make an array of shifted
% points.
% For point-wise registration, pointsList represents points from one image,
% shiftedPoints points from the other image
shiftedPoints = inputTransMatrix*pointsList;
% Add some random noise to the x and y coordinates (the homogeneous row
% of ones must stay untouched)
% Remove these lines if you want the registration to be exact
shiftedPoints(1:2,:) = shiftedPoints(1:2,:) + rand(2, size(shiftedPoints, 2));
% Plot starting sets of points
figure(1)
plot(pointsList(1,:), pointsList(2,:), 'ro');
hold on
plot(shiftedPoints(1,:), shiftedPoints(2,:), 'bx');
hold off
% Fitting routine
% Make some initial, random guesses
initialFitTheta = pi*rand(1);
initialFitTranslate = [2, 2];
guessTransMatrix = [cos(initialFitTheta), -sin(initialFitTheta), initialFitTranslate(1);
                    sin(initialFitTheta),  cos(initialFitTheta), initialFitTranslate(2);
                    0, 0, 1];
% fit = lsqcurvefit(@fcn, initialGuess, xdata, ydata)
fitTransMatrix = lsqcurvefit(@TransformPoints, guessTransMatrix, pointsList, shiftedPoints);
% Un-shift second set of points by fit values
fitShiftPoints = fitTransMatrix\shiftedPoints;
% Plot it up
figure(1)
hold on
plot(fitShiftPoints(1,:), fitShiftPoints(2,:), 'k*');
hold off
% Display start transformation and result fit
disp(inputTransMatrix)
disp(fitTransMatrix)

warped/curved point clouds

I am working on sparse reconstruction using a calibrated stereo pair. This is the approach I have taken step by step:
1. I calibrated my stereo cameras using the Stereo Camera Calibrator app in MATLAB.
2. I took a pair of stereo images and undistorted each image.
3. I detected, extracted, and matched point features.
4. I used the triangulate function in MATLAB to get the 3D coordinates of the matched points by passing the stereoParameters object into triangulate. The resulting 3D coordinates are with respect to the optical center of camera 1 (the right camera) and are in millimeters.
The problem is that the point cloud appears warped and curved towards the edges of the image. At first it looked like barrel distortion of the lenses to me, so I recalibrated the Bumblebee XB3 cameras using the MATLAB Camera Calibrator app, this time with 3 radial distortion coefficients and with tangential and skew parameters included. The results are the same. I also tried Caltech's camera calibration toolbox, and it gave the same results as MATLAB; the radial distortion coefficients are similar in both toolboxes. Another problem is that the Z values in the point cloud are all negative, but I am thinking that might come from the fact that I am using the right camera as camera 1 and the left camera as camera 2, as opposed to MATLAB's coordinate system in the link attached.
I have attached a couple of pictures of the 3D point cloud from both sparse and dense 3D reconstruction. I am not interested in dense 3D, but just wanted to try it to see if the problem still exists, which it does. I believe that means the main problem is with the images and the camera calibration rather than the algorithms.
Now my questions are:
1. What is the main reason (or reasons) for getting warped/curved 3D point clouds? Is it only the camera calibration, or can other steps introduce error as well? How can I check for that?
2. Can you suggest another camera calibration toolbox besides MATLAB's and Caltech's, perhaps one that handles radial distortion better?
Thanks
Links: coordinate system
Code:
clear
close all
clc
load('mystereoparams.mat');
I11 = imread('Right.tif');
I22 = imread('Left.tif');
figure, imshowpair(I11, I22, 'montage');
title('Pair of Original Images');
[I1, newOrigin1] = undistortImage(I11,stereoParams.CameraParameters1);
[I2, newOrigin2] = undistortImage(I22,stereoParams.CameraParameters2);
figure, imshowpair(I1, I2, 'montage');
title('Undistorted Images');
% Detect feature points
imagePoints1 = detectSURFFeatures(rgb2gray(I1), 'MetricThreshold', 600);
imagePoints2 = detectSURFFeatures(rgb2gray(I2), 'MetricThreshold', 600);
% Extract feature descriptors
features1 = extractFeatures(rgb2gray(I1), imagePoints1);
features2 = extractFeatures(rgb2gray(I2), imagePoints2);
% Visualize several extracted SURF features
figure;
imshow(I1);
title('1500 Strongest Feature Points from Image1');
hold on;
plot(selectStrongest(imagePoints1, 1500));
indexPairs = matchFeatures(features1, features2, 'MaxRatio', 0.4);
matchedPoints1 = imagePoints1(indexPairs(:, 1));
matchedPoints2 = imagePoints2(indexPairs(:, 2));
% Visualize correspondences
figure;
showMatchedFeatures(I1, I2, matchedPoints1, matchedPoints2,'montage');
title('Original Matched Features from Globe01 and Globe02');
% Transform matched points to the original image's coordinates
matchedPoints1.Location = bsxfun(@plus, matchedPoints1.Location, newOrigin1);
matchedPoints2.Location = bsxfun(@plus, matchedPoints2.Location, newOrigin2);
[Cloud, reprojErrors] = triangulate(matchedPoints1, matchedPoints2, stereoParams);
figure;plot3(Cloud(:,1),Cloud(:,2),Cloud(:,3),'b.');title('Point Cloud before noisy match removal');
xlabel('X'), ylabel('Y'), zlabel('Depth (Z) in mm')
% Eliminate noisy points
meanmean=mean(sqrt(sum(reprojErrors .^ 2, 2)))
standdev=std(sqrt(sum(reprojErrors .^ 2, 2)))
errorDists = sqrt(sum(reprojErrors .^ 2, 2));
validIdx = errorDists < meanmean+standdev;
tt1=find(Cloud(:,3)>0);
validIdx(tt1)=0;
tt2=find(abs(Cloud(:,3))>1800);
validIdx(tt2)=0;
tt3=find(abs(Cloud(:,3))<1000);
validIdx(tt3)=0;
points3D = Cloud(validIdx, :);
figure;plot3(points3D(:,1),points3D(:,2),points3D(:,3),'b.');title('Point Cloud after noisy match removal');
xlabel('X'), ylabel('Y'), zlabel('Depth (Z) in mm')
validPoints1 = matchedPoints1(validIdx, :);
validPoints2 = matchedPoints2(validIdx, :);
figure;
showMatchedFeatures(I1, I2, validPoints1,validPoints2,'montage');
title('Matched Features After Removing Noisy Matches');
% get the color of each reconstructed point
validPoints1 = round(validPoints1.Location);
numPixels = size(I1, 1) * size(I1, 2);
allColors = reshape(im2double(I1), [numPixels, 3]);
colorIdx = sub2ind([size(I1, 1), size(I1, 2)], validPoints1(:,2), ...
validPoints1(:, 1));
color = allColors(colorIdx, :);
% add green point representing the origin
points3D(end+1,:) = [0,0,0];
color(end+1,:) = [0,1,0];
% show images
figure('units','normalized','outerposition',[0 0 .5 .5])
subplot(1,2,1);
imshowpair(I1, I2, 'montage');
title('Original Images')
% plot point cloud
hAxes = subplot(1,2,2);
showPointCloud(points3D, color, 'Parent', hAxes, ...
'VerticalAxisDir', 'down', 'MarkerSize', 40);
xlabel('x-axis (mm)');
ylabel('y-axis (mm)');
zlabel('z-axis (mm)')
title('Reconstructed Point Cloud');
figure, scatter3(points3D(:,1),points3D(:,2),points3D(:,3),50,color,'fill')
xlabel('x-axis (mm)');ylabel('y-axis (mm)');zlabel('z-axis (mm)')
title('Final colored Reconstructed Point Cloud');
Your code looks right. The problem seems to be in the calibration. The fact that you still get a warped point cloud with 3 radial coefficients tells me that you may not have enough data points close to the edges of the image to estimate the distortion accurately. It is hard to tell from your images, though. If you take a picture of a scene with many straight edges and undistort it, you will get a better idea.
So I would recommend taking more images with the checkerboard as close to the edges of the image as you can get. See if that helps.
Another thing to look at would be estimation errors. In R2014b the Stereo Camera Calibrator app can optionally return the standard error values for each of the estimated parameters. Those can give you confidence intervals and tell you whether you may need more data points. See this example.
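As a sketch, assuming imagePoints and worldPoints come from detectCheckerboardPoints and generateCheckerboardPoints, requesting the errors looks something like this:
% The third output (R2014b and later) holds the standard errors of the estimates
[stereoParams, pairsUsed, estimationErrors] = ...
    estimateCameraParameters(imagePoints, worldPoints);
displayErrors(estimationErrors, stereoParams);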
Oh, and also make sure that your calibration images are not saved as jpeg. Please use a lossless format like tiff or png.

Representing three variables in a three dimension plot

I have a problem creating a three-dimensional plot of three variables.
I have three matrices: Temperature, Humidity and Power. During one year, each of these was measured every hour, so each matrix has 365*24 = 8760 points. Then one average point is taken per day. So,
Tavg = 365 X 1
Havg = 365 X 1
Pavg = 365 X 1
From an electrical point of view, the power depends on the temperature and humidity. I want to discover this relationship using a three-dimensional plot.
I tried using mesh, meshz, surf, plot3, and many other commands in MATLAB, but unfortunately I couldn't get what I want. For example, let us take the first 10 days. Here, every day is represented by its average temperature, average humidity and average power.
Tavg = [18.6275
17.7386
15.4330
15.4404
16.4487
17.4735
19.4582
20.6670
19.8246
16.4810];
Havg = [75.7105
65.0892
40.7025
45.5119
47.9225
62.8814
48.1127
62.1248
73.0119
60.4168];
Pavg = [13.0921
13.7083
13.4703
13.7500
13.7023
10.6311
13.5000
12.6250
13.7083
12.9286];
How do I represent these matrices by three dimension plot?
The challenge is that the 3-D surface plotting functions (mesh, surf, etc.) expect a 2-D matrix of z values, so to use them you need to construct such a matrix from the data.
Currently the data is a sea of points in 3-D space, so you have to map these points onto a surface. A simple approach is to divide the X-Y (temperature-humidity) plane into bins and then take the average of all of the Z (power) data in each bin. Here is some sample code that uses accumarray() to compute the averages for each bin:
% Specify bin sizes
Tbin = 3;
Hbin = 20;
% Create binned average array
% First create a two column array of bin indexes to use as subscripts
subs = [round(Havg/Hbin)+1, round(Tavg/Tbin)+1];
% Now create the Z (power) estimate as the average value in each bin
Pest = accumarray(subs, Pavg, [], @mean);
% And the corresponding X (temp) & Y (humidity) vectors
Tval = Tbin/2:Tbin:size(Pest,2)*Tbin;
Hval = Hbin/2:Hbin:size(Pest,1)*Hbin;
% And create the plot
figure(1)
surf(Tval, Hval, Pest)
xlabel('Temperature')
ylabel('Humidity')
zlabel('Power')
title('Simple binned average')
xlim([14 24])
ylim([40 80])
The graph is a bit coarse (can't post an image yet, since I am new) because we only have a few data points. We can enhance the visualization by removing any empty bins, setting their value to NaN. Also, the binning approach hides any variation in the Z (power) data, so we can overlay the original point cloud using plot3 without drawing connecting lines. (Again no image, because I am new.)
Additional code for the final plot:
%% Expanded Plot
% Remove zeros (useful with enough valid data)
%Pest(Pest == 0) = NaN;
% First the original points
figure(2)
plot3(Tavg, Havg, Pavg, '.')
hold on
% And now our estimate
% The use of 'FaceColor' 'Interp' uses colors that "bleed" down the face
% rather than only coloring the faces away from the origin
surfc(Tval, Hval, Pest, 'FaceColor', 'Interp')
% Make this plot semi-transparent to see the original dots and the back side
alpha(0.5)
xlabel('Temperature')
ylabel('Humidity')
zlabel('Power')
grid on
title('Nicer binned average')
xlim([14 24])
ylim([40 80])
I think you're asking for a surface fit for your data. The Curve Fitting Toolbox handles this nicely:
% Fit model to data.
ft = fittype( 'poly11' );
fitresult = fit( [Tavg, Havg], Pavg, ft);
% Plot fit with data.
plot( fitresult, [Tavg, Havg], Pavg );
legend( 'fit 1', 'Pavg vs. Tavg, Havg', 'Location', 'NorthEast' );
xlabel( 'Tavg' );
ylabel( 'Havg' );
zlabel( 'Pavg' );
grid on
If you don't have the Curve Fitting Toolbox, you can use the backslash operator:
% Find the coefficients.
const = ones(size(Tavg));
coeff = [Tavg Havg const] \ Pavg;
% Plot the original data points
clf
plot3(Tavg,Havg,Pavg,'r.','MarkerSize',20);
hold on
% Plot the surface.
[xx, yy] = meshgrid( ...
linspace(min(Tavg),max(Tavg)) , ...
linspace(min(Havg),max(Havg)) );
zz = coeff(1) * xx + coeff(2) * yy + coeff(3);
surf(xx,yy,zz)
title(sprintf('z=(%f)*x+(%f)*y+(%f)',coeff))
grid on
axis tight
Both of these fit a linear polynomial surface, i.e. a plane, but you'll probably want something more complicated, and both techniques can be adapted to that situation. There's more information on this subject at mathworks.com: How can I determine the equation of the best-fit line, plane, or N-D surface using MATLAB?.
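For example, extending the backslash approach to a full quadratic surface (the hand-rolled equivalent of fittype('poly22')) might look like this:
% Quadratic surface fit via backslash
const = ones(size(Tavg));
A = [Tavg.^2, Havg.^2, Tavg.*Havg, Tavg, Havg, const];
coeff = A \ Pavg;
[xx, yy] = meshgrid(linspace(min(Tavg),max(Tavg)), ...
                    linspace(min(Havg),max(Havg)));
zz = coeff(1)*xx.^2 + coeff(2)*yy.^2 + coeff(3)*xx.*yy + ...
     coeff(4)*xx + coeff(5)*yy + coeff(6);
surf(xx, yy, zz)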
You might want to look at Delaunay triangulation:
tri = delaunay(Tavg, Havg);
trisurf(tri, Tavg, Havg, Pavg);
Using your example data, this code generates an interesting 'surface'. But I believe this is another way of doing what you want.
You might also try the GridFit tool by John D'Errico from MATLAB Central. This tool produces a surface similar to interpolating between the data points (as is done by MATLAB's griddata) but with cleaner results because it smooths the resulting surface. Conceptually multiple datapoints for nearby or overlapping X,Y coordinates are averaged to produce a smooth result rather than noisy "ripples." The tool also allows for some extrapolation beyond the data points. Here is a code example (assuming the GridFit Tool has already been installed):
%Establish points for surface
num_points = 20;
Tval = linspace(min(Tavg),max(Tavg),num_points);
Hval = linspace(min(Havg),max(Havg),num_points);
%Do the fancy fitting with smoothing
Pest = gridfit(Tavg, Havg, Pavg, Tval, Hval);
%Plot results
figure(5)
surfc(Tval, Hval, Pest, 'FaceColor', 'Interp')
To produce an even nicer plot, you can add labels, some transparency and overlay the original points:
alpha(0.5)
hold on
plot3(Tavg,Havg,Pavg,'.')
xlabel('Temperature')
ylabel('Humidity')
zlabel('Power')
grid on
title('GridFit')
PS: @upperBound: Thanks for the Delaunay triangulation tip. That seems like the way to go if you want to go through each of the points. I am a newbie so can't comment yet.
Below is your solution:
Save/write the Myplot3D function
function [x,y,V]=Myplot3D(X,Y,Z)
x=linspace(X(1),X(end),100);
y=linspace(Y(1),Y(end),100);
[Xt,Yt]=meshgrid(x,y);
V=griddata(X,Y,Z,Xt,Yt);
Call the following from your command line (or script)
[Tavg_new,Pavg_new,V]=Myplot3D(Tavg,Pavg,Havg);
surf(Tavg_new,Pavg_new,V)
colormap jet;
xlabel('Temperature')
ylabel('Power/Pressure')
zlabel('Humidity')