I am working on an application where I need to rectify an image taken from a mobile camera platform. The platform measures roll, pitch and yaw angles, and I want to make it look as if the image were taken from directly above, using some sort of transform derived from this information.
In other words, I want a perfect square lying flat on the ground, photographed from afar with some camera orientation, to be transformed, so that the square is perfectly symmetrical afterwards.
I have been trying to do this through OpenCV (C++) and MATLAB, but I seem to be missing something fundamental about how this is done.
In Matlab, I have tried the following:
%% Transform perspective
img = imread('my_favourite_image.jpg');
R = R_z(yaw_angle)*R_y(pitch_angle)*R_x(roll_angle);
tform = projective2d(R);
outputImage = imwarp(img,tform);
figure(1), imshow(outputImage);
where R_z/y/x are the standard rotation matrices (implemented with degrees).
For some yaw-rotation, it all works just fine:
R = R_z(10)*R_y(0)*R_x(0);
Which gives the result:
If I try to rotate the image by the same amount about the X- or Y- axes, I get results like this:
R = R_z(10)*R_y(0)*R_x(10);
However, if I divide the rotation angle by some huge number, it starts to look OK. But then again, this is a result that has no research value whatsoever:
R = R_z(10)*R_y(0)*R_x(10/1000);
Can someone please help me understand why rotating about the X- or Y-axes makes the transformation go wild? Is there any way of solving this without dividing by some random number and other magic tricks? Is this maybe something that can be solved using Euler parameters of some sort? Any help will be highly appreciated!
Update: Full setup and measurements
For completeness, the full test code and initial image have been added, as well as the platform's Euler angles:
Code:
%% Transform perspective
function [] = main()
img = imread('some_image.jpg');
R = R_z(0)*R_y(0)*R_x(10);
tform = projective2d(R);
outputImage = imwarp(img,tform);
figure(1), imshow(outputImage);
end
%% Matrix for Yaw-rotation about the Z-axis
function [R] = R_z(psi)
R = [cosd(psi) -sind(psi) 0;
sind(psi) cosd(psi) 0;
0 0 1];
end
%% Matrix for Pitch-rotation about the Y-axis
function [R] = R_y(theta)
R = [cosd(theta) 0 sind(theta);
0 1 0 ;
-sind(theta) 0 cosd(theta) ];
end
%% Matrix for Roll-rotation about the X-axis
function [R] = R_x(phi)
R = [1 0 0;
0 cosd(phi) -sind(phi);
0 sind(phi) cosd(phi)];
end
The initial image:
Camera platform measurements in the BODY coordinate frame:
Roll: -10
Pitch: -30
Yaw: 166 (angular deviation from north)
From what I understand, the yaw angle is not directly relevant to the transformation. I might, however, be wrong about this.
Additional info:
I would like to specify that the environment in which the setup will be used contains no lines (oceanic photo) that can be reliably used as a reference (the horizon will usually not be in the picture). Also, the square in the initial image is merely used as a measure to see whether the transformation is correct, and will not be there in a real scenario.
So, this is what I ended up doing: I figured that unless you are actually dealing with 3D images, rectifying the perspective of a photo is a 2D operation. With this in mind, I replaced the z-axis values of the transformation matrix with zeros and ones, and applied a 2D Affine transformation to the image.
Rotation of the initial image (see initial post) with measured Roll = -10 and Pitch = -30 was done in the following manner:
R_rot = R_y(-60)*R_x(10);
R_2d = [ R_rot(1,1) R_rot(1,2) 0;
R_rot(2,1) R_rot(2,2) 0;
0 0 1 ];
This implies a rotation of the camera platform to a virtual camera orientation where the camera is placed above the scene, pointing straight downwards. Note the values used for roll and pitch in the matrix above.
Additionally, if the image should be rotated so that it is aligned with the platform heading, a rotation about the z-axis might be added, giving:
R_rot = R_y(-60)*R_x(10)*R_z(some_heading);
R_2d = [ R_rot(1,1) R_rot(1,2) 0;
R_rot(2,1) R_rot(2,2) 0;
0 0 1 ];
Note that this does not change the actual image - it only rotates it.
As a result, the initial image rotated about the Y- and X-axes looks like:
The full code for doing this transformation, as displayed above, was:
% Load image
img = imread('initial_image.jpg');
% Full rotation matrix. Z-axis included, but not used.
R_rot = R_y(-60)*R_x(10)*R_z(0);
% Strip the values related to the Z-axis from R_rot
R_2d = [ R_rot(1,1) R_rot(1,2) 0;
R_rot(2,1) R_rot(2,2) 0;
0 0 1 ];
% Generate transformation matrix, and warp (matlab syntax)
tform = affine2d(R_2d);
outputImage = imwarp(img,tform);
% Display image
figure(1), imshow(outputImage);
%*** Rotation Matrix Functions ***%
%% Matrix for Yaw-rotation about the Z-axis
function [R] = R_z(psi)
R = [cosd(psi) -sind(psi) 0;
sind(psi) cosd(psi) 0;
0 0 1];
end
%% Matrix for Pitch-rotation about the Y-axis
function [R] = R_y(theta)
R = [cosd(theta) 0 sind(theta);
0 1 0 ;
-sind(theta) 0 cosd(theta) ];
end
%% Matrix for Roll-rotation about the X-axis
function [R] = R_x(phi)
R = [1 0 0;
0 cosd(phi) -sind(phi);
0 sind(phi) cosd(phi)];
end
Thank you for the support, I hope this helps someone!
I think you can derive the transformation this way:
1) Suppose you have four 3D points A(-1,-1,0), B(1,-1,0), C(1,1,0) and D(-1,1,0). You can take any 4 non-collinear points; they are not related to the image.
2) You have the transformation matrix, so you can place your camera by multiplying the point coordinates by the transformation matrix. You'll get 3D coordinates relative to the camera position/direction.
3) You need to project your points onto the screen plane. The simplest way is to use an orthographic projection (simply ignore the depth coordinate). At this stage you have 2D projections of the transformed points.
4) Once you have two sets of 2D point coordinates (the set from step 1 without the third coordinate and the set from step 3), you can compute the homography matrix in the standard way.
5) Apply the inverse homography transformation to your image.
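A minimal MATLAB sketch of these steps (assuming R is the 3x3 rotation matrix built from the platform angles, img is the image, and the Image Processing Toolbox function fitgeotrans is available to solve the homography of step 4; fitting from the projected points back to the ground points performs steps 4 and 5 in one go):
pts_ground = [-1 -1; 1 -1; 1 1; -1 1]; % step 1: points A, B, C, D without the zero Z-coordinate
pts_3d = [pts_ground, zeros(4,1)];
pts_cam = (R * pts_3d')'; % step 2: 3D coordinates relative to the camera
pts_proj = pts_cam(:,1:2); % step 3: orthographic projection (drop the depth coordinate)
tform = fitgeotrans(pts_proj, pts_ground, 'projective'); % steps 4-5: homography mapping projected points back to the ground
rectified = imwarp(img, tform);
figure, imshow(rectified);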
You need to estimate a homography. For an off-the-shelf Matlab solution, see function vgg_H_from_x_lin.m from http://www.robots.ox.ac.uk/~vgg/hzbook/code/ .
For the theory, dig into a computer vision textbook, such as the one freely available at http://szeliski.org/Book/ or Chapter 3 of http://programmingcomputervision.com/downloads/ProgrammingComputerVision_CCdraft.pdf
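As a rough sketch, once a 3x3 homography H has been estimated (for example with vgg_H_from_x_lin.m), it could be applied to the image like this (note that projective2d uses the row-vector convention, hence the transpose):
tform = projective2d(H');
rectified = imwarp(img, tform);
figure, imshow(rectified);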
Maybe my answer is not correct due to my misunderstanding of the camera parameters, but I was wondering whether the yaw/pitch/roll is relative to the position of your object. I used the general rotation formula, and my code is below (the rotation functions R_x, R_y, and R_z were copied from yours; I didn't paste them here).
close all
file='http://i.stack.imgur.com/m5e01.jpg'; % original image
I=imread(file);
R_rot = R_x(-10)*R_y(-30)*R_z(166);
R_rot = inv(R_rot);
R_2d = [ R_rot(1,1) R_rot(1,2) 0;
R_rot(2,1) R_rot(2,2) 0;
0 0 1 ];
T = maketform('affine',R_2d);
transformedI = imtransform(I,T);
figure, imshow(I), figure, imshow(transformedI)
The result:
This indicates that you still need some rotation operation to get the 'correct' alignment in your mind (but probably not necessarily the correct position in the camera's mind).
So I changed R_rot = inv(R_rot); to R_rot = inv(R_rot)*R_x(-5)*R_y(25)*R_z(180);, and now it gives me:
This looks more like what you want.
Thanks.
I am facing the same problem as mentioned in this post: Depth as distance to camera plane in GLSL. However, I am not facing it with OpenGL, but simply with MATLAB.
I have a depth image rendered from the Z-buffer in 3ds Max. I was not able to get an orthographic representation of the Z-buffer. For better understanding, I will use the same sketch as in the post linked above:
        *               |--*
       /                |
      /                 |
     C-----*            C-----*
      \                 |
       \                |
        *               |--*
The three asterisks are pixels and C is the camera. The lines from the asterisks are the "depth". In the first case, I get the distance from each pixel to the camera. In the second, I wish to get the distance from each pixel to the camera plane.
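For reference, that conversion can be written directly in MATLAB as a per-pixel rescaling (a sketch only, assuming a pinhole camera, that the FOV below is the horizontal field of view, and that radial_depth holds the radial distances; it is not the projection-matrix attempt described next):
f_px = (WIDTH/2) / tand(FOV/2); % focal length in pixels from the horizontal field of view
[u, v] = meshgrid(1:WIDTH, 1:HEIGHT);
u = u - WIDTH/2; % pixel offsets from the principal point
v = v - HEIGHT/2;
cos_ang = f_px ./ sqrt(f_px^2 + u.^2 + v.^2); % cosine of the angle between each viewing ray and the optical axis
z_plane = double(radial_depth) .* cos_ang; % distance from each pixel to the camera plane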
The settings of my camera are the following:
WIDTH = 512;
HEIGHT = 424;
FOV = 89.971;
aspect_ratio = WIDTH/HEIGHT;
%clipping planes
near = 500;
far = 5000;
I calculate the frustum settings as follows:
%calculate frustums settings
top = tan((FOV/2)*5000)
bottom = -top
right = top*aspect_ratio
left = -top*aspect_ratio
And set the projection matrix like this:
%Generate matrix
O_p = [2/(right-left) 0 0 -((right+left)/(right-left)); ...
0 2/(top-bottom) 0 -((top+bottom)/(top-bottom));...
0 0 -2/(far-near) -(far+near)/(far-near);...
0 0 0 1];
After this I read in the depth image, which was saved as a 48-bit RGB image where each channel is the same, so only one channel has to be used.
%Read in image
img = imread('KinectImage.png');
%Keep only one channel (all hold the same information)
c1 = img(:,:,1);
The pixel values have to be inverted, since the closer they are to the camera, the brighter they are. If a pixel is 0 (no object to render available), it is set to 2^16 so that, after the bit complement, the value is still 0.
%Inverse bits that are not zero, so that the z-image has the correct values
c1(c1 == 0) = 2^16
c1_cmp = bitcmp(c1);
To apply the matrix to each Z-buffer value, I reshape the image into a one-dimensional vector and build a vector of the form [0 0 z 1] for every element.
c1_cmp1d = squeeze(reshape(c1_cmp,[512*424,1]));
converted = double([zeros(WIDTH*HEIGHT,1) zeros(WIDTH*HEIGHT,1) c1_cmp1d zeros(WIDTH*HEIGHT,1)]) * double(O_p);
After that, I pick out the 4th element of each resulting vector and reshape it back into an image:
img_con = converted(:,4);
img_con = reshape(img_con,[424,512]);
However, the effect that the Z-buffer is not orthographic is still there. Did I get something wrong? Is my calculation flawed, or did I make a mistake somewhere else?
Depth Image coming from 3ds max
After the computation (the white is still "0", but the color axis has changed)
It would be great to achieve this in 3ds Max directly, which would resolve the issue; however, I was not able to find such a setting for the Z-buffer. Thus, I want to solve this using MATLAB.
I am using MATLAB. I want to use the Canny method for edge detection, but I need only the edges that are diagonal, i.e. at a 40 to 50 degree angle. How can I do that?
You need to write the Canny edge detector's code on your own (you can find lots of implementations on the internet). In the second step you calculate the gradient magnitudes and gradient directions; there you need to filter out the angles and the corresponding magnitudes.
Hope this helps you.
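A minimal sketch of that filtering idea using the built-in gradient functions (assuming I is a grayscale double image; the angle ranges below are just one convention for what counts as 'diagonal'):
[Gx, Gy] = imgradientxy(I); % horizontal and vertical gradient components
[Gmag, Gdir] = imgradient(Gx, Gy); % gradient magnitude and direction in degrees
keep = (Gdir >= 40 & Gdir <= 50) | (Gdir >= -140 & Gdir <= -130); % roughly 45-degree edges, both polarities
oriented = Gmag .* keep; % suppress every other orientation before thresholding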
I've answered a similar question about how to use Matlab's edge function to find oriented edges with Canny (Orientational Canny Edge Detection), but I also wanted to try out a custom implementation as suggested by Avijit.
Canny Edge Detection steps:
Start with an image; I'll use a built-in demo image.
A = im2double(rgb2gray(imread('peppers.png')));
Gaussian Filter
A_filter = imgaussfilt(A);
Sobel Edge Detection -- We can't use the built-in implementation (edge(A_filter, 'Sobel')), because we want the edge angles, not just the edge locations, so we implement our own operator.
a. Convolution to find oriented gradients
%These filters measure the difference in values between vertically or horizontally adjacent pixels.
%Effectively, this finds vertical and horizontal gradients.
vertical_filter = [-1 0 1; -2 0 2; -1 0 1];
horizontal_filter = [-1 -2 -1; 0 0 0; 1 2 1];
A_vertical = conv2(A_filter, vertical_filter, 'same');
A_horizontal = conv2(A_filter, horizontal_filter, 'same');
%The gradient magnitude is needed later for non-max suppression and thresholding
A_sobel = sqrt(A_vertical.^2 + A_horizontal.^2);
b. Calculate the angles
A_angle = atan(A_vertical./A_horizontal);
At this step, we traditionally bin edges by orientation (0°, 45°, 90°, 135°), but since you only want diagonal edges between 40 and 50 degrees, we will retain those edges and discard the rest.
% I lowered the thresholds to include more pixels
% But for your original post, you would use 40 and 50
lower_angle_threshold = 22.5;
upper_angle_threshold = 67.5;
diagonal_map = false(size(A));
diagonal_map(A_angle>(lower_angle_threshold*pi/180) & A_angle<(upper_angle_threshold*pi/180)) = 1;
Perform non-max suppression on the remaining edges -- This is the most difficult portion to adapt to different angles. To find the exact edge location, you compare two adjacent pixels: for 0° edges, compare east-west, for 45° south-west pixel to north-east pixel, for 90° compare north-south, and for 135° north-west pixel to south-east pixel.
Since your desired angle is close to 45°, I just used south-west, but if you wanted 10° to 20°, for example, you'd have to put some more thought into these comparisons.
non_max = A_sobel;
[n_rows, n_col] = size(A);
%For every pixel
for row = 2:n_rows-1
for col = 2:n_col-1
%If we are at a diagonal edge
if(diagonal_map(row, col))
%Compare north east and south west pixels
if(A_sobel(row, col)<A_sobel(row-1, col-1) || ...
A_sobel(row, col)<A_sobel(row+1, col+1))
non_max(row, col) = 0;
end
else
non_max(row, col) = 0;
end
end
end
Edge tracking with hysteresis -- Decide whether weak edge pixels are close enough (I use a 3x3 window) to strong edge pixels. If they are, include them in the edge. If not, they are noise; remove them.
high_threshold = 0.5; %These thresholds are tunable parameters
low_threshold = 0.01;
weak_edge_pixels = non_max > low_threshold & non_max < high_threshold;
strong_edge_pixels = non_max > high_threshold;
final = strong_edge_pixels;
for row = 2:n_rows-1
for col = 2:n_col-1
window = strong_edge_pixels(row-1:row+1, col-1:col+1);
if(weak_edge_pixels(row, col) && any(window(:)))
final(row, col) = 1;
end
end
end
Here are my results.
As you can see, discarding the other edge orientations has a very negative effect on the hysteresis step because fewer strong pixels are detected. Adjusting the high_threshold would help somewhat. Another option would be to do Steps 5 and 6 using all edge orientations, and then use the diagonal_map to extract the diagonal edges.
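That alternative would look something like this (a sketch; final_all stands for a hypothetical result of running Steps 5 and 6 with all orientations kept):
% mask the all-orientation result down to the diagonal edges only
final_diagonal = final_all & diagonal_map;
figure, imshow(final_diagonal);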
I use camera calibration in MATLAB to detect some checkerboard patterns, and afterwards I run
figure; showExtrinsics(cameraParams, 'CameraCentric');
Now, I want to rotate the checkerboard patterns around the x-axis such that all of them have nearly the same y coordinates in the camera frame.
Method:
I get the positions of all patterns in the camera's frame. Then I do an optimization, where the objective function is to minimize the variance in y and the variable is the rotation about x, ranging from 0 to 360.
Problem:
But when I plot the transformed y-coordinates, they are not even nearly in a line.
Code:
Get the checkerboard points:
%% Get rotation and translation matrices for each image;
T_cw=cell(num_imgs,1); % stores camera to world rotation and translation for each image
pixel_coordinates=zeros(num_imgs,2); % stores the pixel coordinates of each checkerboard origin
for ii=1:num_imgs,
% Calibrate the camera
im=imread(list_imgs_path{ii});
[imagePoints, boardSize] = detectCheckerboardPoints(im);
[r_wc, t_wc] = extrinsics(imagePoints, worldPoints, cameraParams);
T_wc=[r_wc,t_wc';0 0 0 1];
% World to camera matrix
T_cw{ii} = inv(T_wc);
t_cw{ii}=T_cw{ii}(1:3,4); % x,y,z coordinates in camera's frame
end
Data(num_imgs=10):
t_cw
[-1072.01388542262;1312.20387622761;-1853.34408157349]
[-1052.07856598756;1269.03455126794;-1826.73576892251]
[-1091.85978641218;1351.08261414473;-1668.88197803184]
[-1337.56358084648;1373.78548638383;-1396.87603554914]
[-1555.19509876309;1261.60428874489;-1174.63047408086]
[-1592.39596647158;1066.82210015055;-1165.34417772659]
[-1523.84307918660;963.781819272748;-1207.27444716506]
[-1614.00792252030;893.962075837621;-1114.73528985018]
[-1781.83112607964;708.973204727939;-797.185326205240]
[-1781.83112607964;708.973204727939;-797.185326205240]
Main code (Optimization and transformation):
%% Get theta for rotation
f_obj = @(x)var_ycors(x,t_cw);
opt_theta = fminbnd(f_obj,0,360);
%% Plotting (rotate ycor and check to fix theta)
y_rotated=zeros(1,num_imgs);
for ii=1:num_imgs,
y_rotated(ii)=rotate_cor(opt_theta,t_cw{ii});
end
plot(1:numel(y_rotated),y_rotated);
function var_computed=var_ycors(theta,t_cw)
ycor=zeros(1,numel(t_cw));
for ii =1:numel(t_cw),
ycor(ii)=rotate_cor(theta,t_cw{ii});
end
var_computed=var(ycor);
end
function ycor=rotate_cor(theta,mat)
r_x=[1 0 0; 0 cosd(theta) -sind(theta); 0 sind(theta) cosd(theta)];
rotate_mat=mat'*r_x;
ycor=rotate_mat(2);
end
This is a clear eigenvector problem!
Take your centroids:
t_cw=[-1072.01388542262;1312.20387622761;-1853.34408157349
-1052.07856598756;1269.03455126794;-1826.73576892251
-1091.85978641218;1351.08261414473;-1668.88197803184
-1337.56358084648;1373.78548638383;-1396.87603554914
-1555.19509876309;1261.60428874489;-1174.63047408086
-1592.39596647158;1066.82210015055;-1165.34417772659
-1523.84307918660;963.781819272748;-1207.27444716506
-1614.00792252030;893.962075837621;-1114.73528985018
-1781.83112607964;708.973204727939;-797.185326205240
-1781.83112607964;708.973204727939;-797.185326205240];
t_cw=reshape(t_cw,[3,10])';
Compute PCA on them, so we know the principal components:
[R]=pca(t_cw);
And... that's it! R is now the transformation matrix between your original points and the rotated coordinate system. As an example, I will draw the old points in red and the new ones in blue:
hold on
plot3(t_cw(:,1),t_cw(:,2),t_cw(:,3),'ro')
trans=t_cw*R;
plot3(trans(:,1),trans(:,2),trans(:,3),'bo')
You can see that now the blue ones are in a plane, with the best possible fit to the X direction. If you want them in the Y direction, just rotate 90 degrees in Z (I am sure you can figure out how to do this with 2 minutes of Google ;) ).
Note: this is mathematically the best possible fit. I know they are not as "in a row" as one would like, but that is because of the data; this is honestly the best possible fit, as that is what the eigenvectors give you!
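For completeness, a sketch of the 90-degree rotation about Z mentioned above:
Rz90 = [cosd(90) -sind(90) 0; sind(90) cosd(90) 0; 0 0 1];
trans_y = trans * Rz90; % the spread now lies along the Y direction instead of X
plot3(trans_y(:,1), trans_y(:,2), trans_y(:,3), 'go')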
I am trying to make an animation of the trajectory (circular orbit of 7000 km altitude) of a satellite orbiting the Earth. The following vectors x, y, z represent its coordinates (obtained by integrating the acceleration due to the non-spherical gravitational potential) in the reference system.
fh = figure('DoubleBuffer','on');
ah = axes('Parent',fh,'Units','normalized','Position',[0 0 1 1],...
'DataAspectRatio',[1 1 1],'DrawMode','fast');
x = 1.0e+003 * [ 1.293687086462776 1.355010603320554 ...
1.416226136451621 1.477328806662750 1.538313743926646...
1.841302933101510 2.140623861743577 2.435680048370655...
2.725883985836056 3.830393161542639 4.812047393962632...
5.639553477924236 6.285935904692739 6.778445814703028...
6.981534839226300 6.886918327688911 6.496619397538814...
5.886899070860056 5.061708852126299 4.051251943168882...
2.891621923700204 1.551975259009857 0.148687346809817...
-1.259946709379085 -2.614876359324573 -3.789635985368149...
-4.822735075152957 -5.675398819678173 -6.314344260262741...
-6.725008970265510 -6.860046738669579 -6.714044347581475...
-6.291232549137548 -5.646225528669501 -4.790489239458692...
-3.756316068441812 -2.581710448683235 -1.257064527234605...
0.118190083177733 1.488198207705392 2.797262268588749...
3.943218990855596 4.943060241667732 5.760107224604901...
6.363435161221018 6.741208871652011 6.844507242544970...
6.669637491855506 6.222229021788314 5.549112743364572...
4.665587166679964 3.605338508383659 2.407805301565781...
1.076891826523990 -0.297413079432155 -1.658804233546807...
-2.950960371016551 -4.105336427038419 -5.093651475630134...
-5.875676956725480 -6.417825276834068 -6.694317613708315...
-6.702354075060146 -6.441476385534835 -5.920328191821120...
-5.149356931765655 -4.165756794143557 -3.010476122311884...
-1.730623521107957 -0.547981318845428 0.651933236927557...
1.830754553013015 2.950797411065132];
y = 1.0e+003 *[ -6.879416537989226 -6.867600717396513...
-6.855237614338527 -6.842328214064634 -6.828873545169439...
-6.753459997528374 -6.664593892931937 -6.562452270514113...
-6.447238135027323 -5.857768973060929 -5.080802144227667...
-4.141502963266585 -3.069449548231363 -1.712593819793112...
-0.283073212084787 1.157789207734001 2.547934226666446...
3.733185664633135 4.781256997101091 5.653507474532885...
6.316540958291930 6.760480121739906 6.924451844039825...
6.801366712306432 6.393950562012035 5.763652137956600...
4.918852380803697 3.890903548710424 2.717191733101876...
1.385839187748386 -0.001786735280855 -1.388680800030854...
-2.717513794724399 -3.877348086956174 -4.892062889940518...
-5.723943344458780 -6.341064412332522 -6.729295147896739...
-6.844976271597333 -6.684181367561298 -6.252308741323985...
-5.600523241569850 -4.741636145151388 -3.707934368103928...
-2.537101251915556 -1.208445066639178 0.169057351189467...
1.539102816836380 2.845512534980855 3.993289528709769...
4.989150886098799 5.795183343929699 6.379362665363127...
6.723976759736427 6.794165677259719 6.586864956951024...
6.108394444576384 5.387403581100790 4.449452017586583...
3.332306147336086 2.080126804848620 0.757432563194591...
-0.595089763589023 -1.923045482863719 -3.172486599444496...
-4.302442851663575 -5.254127434062967 -5.988250483410006...
-6.472859710456819 -6.675113607083117 -6.664054266658221...
-6.440275312105615 -6.010308893159839];
z = [ -1.348762314964606 -1.416465504571016 -1.484053975854905...
-1.551522350691171 -1.618865254528658 -1.953510294130345...
-2.284215283426580 -2.610320163346533 -2.931177500785390...
-4.153679292291825 -5.242464339076090 -6.162825517200489...
-6.884797354552217 -7.440577139596716 -7.680358197465111...
-7.594616346122523 -7.183952381870657 -6.529293328494871...
-5.637062917332294 -4.540678277777376 -3.279180600545935...
-1.817413221203883 -0.280548741687378 1.268253040429052...
2.764251377698321 4.066975661566477 5.218214283582148...
6.174673504642019 6.899157495671121 7.375688520371054...
7.548875108319217 7.410793523141250 6.965068314483629...
6.271309946313485 5.343254095742233 4.215431448848456...
2.928028129903598 1.469574073877195 -0.048649548535536...
-1.563638474934283 -3.013536101911645 -4.285161526803897...
-5.397128342069014 -6.308837263463213 -6.985946890567337...
-7.415475222950275 -7.542406523585701 -7.363021555333582...
-6.884639818710263 -6.158276823110702 -5.199186592259776...
-4.043958234344444 -2.736923814690622 -1.283388986878655...
0.219908617803070 1.712828428793243 3.135072606759898...
4.411790351254605 5.510842969067953 6.387336537361380...
7.004133661144990 7.332163450286972 7.366696289243980...
7.105258174916579 6.555393588532904 5.727091807637045...
4.660073989309112 3.399622357708514 1.999243120787114...
0.701744421660999 -0.620073499615723 -1.923270654698332...
-3.164705887374677 ];
load('topo.mat','topo','topomap1');
[x1,y1,z1] = sphere(50);
x1 = 6678.14*x1;
y1 = 6678.14*y1;
z1 = 6678.14*z1;
props.AmbientStrength = 0.1;
props.DiffuseStrength = 1;
props.SpecularColorReflectance = .5;
props.SpecularExponent = 20;
props.SpecularStrength = 1;
props.FaceColor= 'texture';
props.EdgeColor = 'none';
props.FaceLighting = 'phong';
props.Cdata = topo;
surface(x1,y1,z1,props);
light('position',[-1 0 1]);
light('position',[-1.5 0.5 -0.5], 'color', [.6 .2 .2]);
view(3);
handles.p1 = line('parent',ah,'XData',x(1),'YData',y(1),'ZData',...
z(1),'Color','red','LineWidth',2);
handles.p2 = line('parent',ah,'XData',x(end),'YData',y(end),...
'ZData',z(end),'Marker','o','MarkerSize',6,'MarkerFaceColor','b');
oaxes([0 0 0],'Arrow','extend','AxisLabelLocation','side',...
'Xcolor','green','Ycolor','green','Zcolor','green');
axis vis3d equal;
handles.XLim = get(gca,'XLim');
handles.YLim = get(gca,'YLim');
handles.ZLim = get(gca,'ZLim');
set([handles.p1,handles.p2],'Visible','off');
xmin = handles.XLim(1);
ymin = handles.YLim(1);
zmin = handles.ZLim(1);
xmax = handles.XLim(2);
ymax = handles.YLim(2);
zmax = handles.ZLim(2);
set(ah, 'XLim', [xmin xmax],'YLim', [ymin ymax],'Zlim',[zmin zmax]);
view(3);
handles.hsat = line('parent',ah,'XData',x(1), 'YData',y(1),...
'ZData',z(1),'Marker','o', 'MarkerSize',6,'MarkerFaceColor','b');
k = uint8(2);
u2 = uint8(length(x));
while k<u2
handles.htray(k) = line([x(k-1) x(k)],[y(k-1) y(k)],[z(k-1) z(k)],...
'Color','red','LineWidth',3);
set(handles.hsat,'XData',x(k),'YData',y(k),'ZData',z(k));
drawnow;
k = k + 1;
end
where oaxes is a File Exchange (FEX) submission that provides an axes located (in this case) at the origin (0,0,0) of the plot box.
I have read the User Guide's Graphics section in the MATLAB Help Browser. It recommends using low-level functions to speed up graphics output (this is why I use the line function instead of plot3) and the painters renderer for line graphics. In my case, I cannot use painters because I have a surface (the Earth) which is not drawn well by it. I want to get something similar to this (I have tried to get in touch with the author but have not received a response). The final result is a slow (it takes 11.4 seconds on my computer with an Intel Core i5) and discontinuous animation (perhaps I need more points to make the blue point's movement look continuous, but the integrator's output points are fixed). I would like to know what I should do to improve it. Thank you for your attention. Cheers.
A couple of things here.
DrawMode=fast probably doesn't do what you think it does. It turns off depth sorting, and I think you really want depth sorting here.
You're creating line objects in the inner loop. You really want to create a small number of graphics objects and reuse them. Could you create a single line object and just set its XData, YData, and ZData in the loop? (See the sketch after this answer.)
You can use hgtransform to avoid modifying the coordinates of hsat (as described here), but that would only make a difference if hsat was much more complex. I don't think it would buy you anything in this case.
You could reduce the resolution of your surface.
You probably want to set the figure's Renderer property to OpenGL.
In any case, I'm getting almost 20 frames per second on my system with your code. After making those changes, I'm getting about 100 frames per second. What sort of frame rate are you shooting for here?
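A rough sketch of that single-line-object restructuring (together with the OpenGL renderer suggestion), reusing x, y, z, ah, fh and handles.hsat from the question:
set(fh, 'Renderer', 'opengl');
handles.htray = line('Parent',ah, 'XData',x(1), 'YData',y(1), 'ZData',z(1),...
    'Color','red', 'LineWidth',3);
for k = 2:numel(x)
    set(handles.htray, 'XData',x(1:k), 'YData',y(1:k), 'ZData',z(1:k)); % grow the trail in place
    set(handles.hsat, 'XData',x(k), 'YData',y(k), 'ZData',z(k)); % move the satellite marker
    drawnow;
end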
I believe the main reason your animation is slow is that you are using the Phong lighting algorithm, which is computationally expensive. To see the effect it has on performance, try specifying Gouraud shading instead:
%#lighting('gouraud');
props.FaceLighting = 'gouraud'; %# faster interpolating method
I am working on video stabilisation (making shaky videos non-shaky) using MATLAB.
One of the steps is to find a smooth camera path given the unstable camera path.
The unstable camera path is one which gives the jittering or shake to the video.
I have camera path specified using camera position which is a 3d-data.
camera path - (cx,cy,cz);
As I plot it in MATLAB, I can visually see the shakiness of the camera motion.
So now I require a least-squares fit to be done on the camera path specified by (cx,cy,cz).
I came across polyfit(), which does fitting for 2-dimensional data.
But what I need is a smooth 3D curve fit to the shaky curve.
Thanks in advance.
Couldn't you just fit three separate 1d curves for cx(t), cy(t), cz(t)?
BTW: I think what you need is a Kalman filter, not a polynomial fit to the camera path. But I'm not sure if MATLAB has built-in support for that.
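A minimal sketch of the per-coordinate idea, assuming cx, cy, cz are column vectors of camera positions sampled at frame indices t and that a quadratic is smooth enough:
t = (1:numel(cx))';
deg = 2; % polynomial degree is a tunable choice
sx = polyval(polyfit(t, cx, deg), t); % smoothed x-path
sy = polyval(polyfit(t, cy, deg), t);
sz = polyval(polyfit(t, cz, deg), t);
plot3(cx, cy, cz, 'r', sx, sy, sz, 'b')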
Approach using a least-squares fit:
t = (1:0.1:5)';
% model
px = [ 5 2 1 ];
x = polyval(px,t);
py = [ -2 1 1 ];
y = polyval(py,t);
pz = [ 1 20 1 ];
z = polyval(pz,t);
% plot model
figure
plot3(x,y,z)
hold all
% simulate measurement
xMeasured = x+2*(rand(length(x),1)-0.5);
yMeasured = y+2*(rand(length(y),1)-0.5);
zMeasured = z+2*(rand(length(z),1)-0.5);
% plot simulated measurements
plot3(xMeasured, yMeasured, zMeasured,'or')
hold off
grid on
% least squares fit
A = [t.^2, t, t./t];
pxEstimated = A\xMeasured;
pyEstimated = A\yMeasured;
pzEstimated = A\zMeasured;
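Since the columns of A are [t.^2, t, 1], the fitted path can be evaluated and overlaid directly (a small illustrative addition):
hold all
plot3(A*pxEstimated, A*pyEstimated, A*pzEstimated, '-b') % evaluated least-squares fit
hold off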
Let me first be grateful to stackoverflow.com, and then give my thanks to zellus and nikie, who got me thinking about the problem more. I have now reached a solution that follows zellus's approach and, as nikie pointed out, uses the parameter 't'.
cx, cy, cz are the coordinates in 3D space, and in my case they are all 343x1 doubles.
My final code, which fits the 3D data set, is shown below:
t = linspace(1,343,343)';
load cx.mat;
load cy.mat;
load cz.mat;
plot3(cx, cy, cz,'r'),title('source Camera Path');
hold all
A = [t.^2, t, t./t];
fx = A\cx;
fy = A\cy;
fz = A\cz;
Xev = polyval(fx,t);
Yev = polyval(fy,t);
Zev = polyval(fz,t);
plot3(Xev,Yev,Zev,'+b'),title('Fitting Line');
I look forward to more interesting discussions on StackOverflow with great helpful people.