Gravity in accelerometer measurements - MATLAB

I have taken from a data set the x, y and z values of an activity (e.g. walking, running) recorded by an accelerometer. Since the collected data also contains gravity, I removed it with the following filter in MATLAB:
fc = 0.3;   % cut-off frequency of the low-pass filter [Hz]
fs = 50;    % sampling rate of the accelerometer [Hz]
x = ...;
y = ...;
z = ...;
[but,att] = butter(6,fc/(fs/2));   % 6th-order low-pass Butterworth
gx = filter(but,att,x);            % gravity estimate per axis
gy = filter(but,att,y);
gz = filter(but,att,z);
new_x = x-gx;                      % linear acceleration = raw - gravity
new_y = y-gy;
new_z = z-gz;
A = magnitude(new_x,new_y,new_z);  % magnitude() is a helper (not built-in) returning the per-sample norm
plot(A)
Then I computed the magnitude of the filtered signal and plotted it.
However, every plot, even after removing gravity, starts at a magnitude of 1 g (9.8 m/s^2). Why? Shouldn't it start at 0, since I removed gravity?

You need to wait for the filter output to ramp up: the low-pass gravity estimate starts near zero, so during the first samples the subtracted signal still contains most of the 1 g offset. Include some additional data at the beginning of the file that you do not graph, for this purpose.
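For illustration, a minimal sketch of discarding the warm-up samples, assuming the same 6th-order Butterworth low-pass as above; the length of the discarded segment (2 s here) is a hypothetical choice, not a fixed rule. filtfilt is a zero-phase alternative, though it has edge effects of its own.
fs = 50;                          % sampling rate [Hz]
fc = 0.3;                         % cut-off for the gravity estimate [Hz]
[b,a] = butter(6, fc/(fs/2));     % 6th-order low-pass Butterworth
g = filter(b,a,x);                % gravity estimate for one axis
settle = 2*fs;                    % discard ~2 s of filter transient (assumed value)
lin = x(settle+1:end) - g(settle+1:end);   % linear acceleration with the transient removed
plot(lin)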
How accurate do your calculations need to be? With walking and running the angle of the accelerometer can change, so the orientation of the gravity vector can change throughout the gait cycle. How much of a change in orientation you can expect to see depends on the sensor location and the particular motion you are trying to capture.

Related

How to perform an orthographic projection on a z-Buffer image in Matlab?

I am facing the same problem as described in this post (Depth as distance to camera plane in GLSL), except not with OpenGL but simply with MATLAB.
I have a depth image rendered from the z-buffer in 3ds Max. I was not able to get an orthographic representation of the z-buffer. For a better understanding, I will use the same sketch as in the previous post:
    *                     |--*
   /                      |
  /                       |
 C-----*              C-----*
  \                       |
   \                      |
    *                     |--*
The 3 asterisks are pixels and the C is the camera. The lines from the
asterisks are the "depth". In the first case, I get the distance from the pixel to the camera. In the second, I wish to get the distance from each pixel to the plane.
The settings of my camera are the following:
WIDTH = 512;
HEIGHT = 424;
FOV = 89.971;
aspect_ratio = WIDTH/HEIGHT;
%clipping planes
near = 500;
far = 5000;
I calculate the frustum settings as follows:
%calculate frustums settings
top = tan((FOV/2)*5000)
bottom = -top
right = top*aspect_ratio
left = -top*aspect_ratio
And set the projection matrix like this:
%Generate matrix
O_p = [2/(right-left) 0 0 -((right+left)/(right-left)); ...
0 2/(top-bottom) 0 -((top+bottom)/(top-bottom));...
0 0 -2/(far-near) -(far+near)/(far-near);...
0 0 0 1];
After this I read in the depth image, which was saved as a 48-bit RGB image where each channel is identical, so only one channel needs to be used.
%Read in image
img = imread('KinectImage.png');
%Throw away, except one channel (all hold the same information)
c1 = img(:,:,1);
The pixel values have to be inverted, since the closer a point is to the camera, the brighter its pixel. If a pixel is 0 (no object to render available) it is set to 2^16, so that after the bit complement the value is still 0.
%Inverse bits that are not zero, so that the z-image has the correct values
c1(c1 == 0) = 2^16
c1_cmp = bitcmp(c1);
To apply the matrix to each z-buffer value, I reshape the image into a one-dimensional vector and build a vector of the form [0 0 z 1] for every element.
c1_cmp1d = squeeze(reshape(c1_cmp,[512*424,1]));
converted = double([zeros(WIDTH*HEIGHT,1) zeros(WIDTH*HEIGHT,1) c1_cmp1d zeros(WIDTH*HEIGHT,1)]) * double(O_p);
After that, I pick out the 4th element of each result vector and reshape it back into an image:
img_con = converted(:,4);
img_con = reshape(img_con,[424,512]);
However, the effect that the z-buffer is not orthographic is still there, so did I get something wrong? Is my calculation flawed, or did I make a mistake here?
Depth Image coming from 3ds max
After the computation (the white is still "0", but the color axis has changed)
It would be great to achieve this directly in 3ds Max, which would resolve the issue, however I was not able to find such a setting for the z-buffer. Thus, I want to solve this using Matlab.

Estimating distance to a point using camera calibration

I want to estimate the distance from the camera to a point on the ground (that is, Yw = 0) given the pixel coordinates of that point. For that I used camera calibration methods,
but the results are not meaningful.
I have the following calibration details:
- focal length x and y, principal point x and y, effective pixel size in meters, yaw and pitch angles, camera height, etc.
- I have entered the focal length, principal point and translation vector in pixels for the calculation.
- I have multiplied the image point by the camera matrix and then by the rotation/translation matrix (R|t) to get the world point.
Is my procedure correct? What could be wrong?
Result:
image_point (x,y) = 400,380  ->  world_point z coordinate (distance) = 12.53
image_point (x,y) = 400,180  ->  world_point z coordinate (distance) = 5.93
Problem: I am getting only a few pixels for the z coordinate, which means the z coordinate is << 1 m (because the effective pixel size is 10^-5 m).
This is my Matlab code:
%positive downward pitch
xR = 0.033;
yR = 0;
zR = pi;
%effective pixel size in meters = 10 ^-5 ; focal_length x & y = 0.012 m
% principal point x & y = 320 and 240
intrinsic_params =[1200,0,320;0,1200,240;0,0,1];
Rx=[1,0,0 ; 0,cos(xR),sin(xR); 0,-sin(xR),cos(xR)];
Ry=[cos(yR),0,-sin(yR) ; 0,1,0 ; sin(yR),0,cos(yR)];
Rz=[cos(zR),sin(zR),0 ; -sin(zR),cos(zR),0 ; 0,0,1];
R= Rx * Ry * Rz ;
% The camera is 1.17m above the ground
t=[0;117000;0];
extrinsic_params = horzcat(R,t);
% extrinsic_params is 3 *4 matrix
P = intrinsic_params * extrinsic_params; % P 3*4 matrix
% make it square ....
P_sq = [P; 0,0,0,1];
%image size is 640 x 480
%An arbitrary pixel (400,380) is entered as input
image_point = [400,380,0,1];
% world point will be in the form X Y Z 1
world_point = P_sq * image_point'
Your procedure is roughly right, but it is going in the wrong direction.
See this link. Using your intrinsic and extrinsic calibration matrices you can find the pixel-space position of a real-world point, NOT the other way around. The exception to this is if your camera is stationary in the global frame and you know the Z position of the feature in global space.
Stationary camera, known feature Z case: (see also this link)
%% First we simulate a camera feature measurement
K = [0.5 0 320;
0 0.5 240;
0 0 1]; % Example intrinsics
R = rotx(0)*roty(0)*rotz(pi/4); % orientation of camera in global frame
c = [1; 1; 1]; %Pos camera in global frame
rwPt = [ 10; 10; 5]; %position of a feature in global frame
imPtH = K*R*(rwPt - c); %Homogeneous image point
imPt = imPtH(1:2)/imPtH(3) %Actual image point
%% Now we use the simulated image point imPt and the knowledge of the
% features Z coordinate to determine the features X and Y coordinates
%% First determine the scaling term lambda
imPtH2 = [imPt; 1];
z = R.' * inv(K) * imPtH2;
lambda = (rwPt(3)-c(3))/z(3);
%% Now the RW position of the feature is:
rwPt2 = c + lambda*R.' * inv(K) * imPtH2 % Reconstructed RW point
Non-stationary camera case:
To find the real-world position or distance from the camera to a particular feature (given on the image plane) you have to employ some method of reconstructing the 3D data from the 2D image.
The two that come to mind immediately are OpenCV's solvePnP and stereo-vision depth estimation.
solvePnP requires 4 co-planar (in RW space) features to be visible in the image, with their positions in RW space known. This may not sound useful since you need to know the RW positions of the features, but you can simply define the 4 features with a known offset rather than a position in the global frame - the result will be the relative position of the camera in the frame the features are defined in. solvePnP gives very accurate pose estimation of the camera. See my example.
Stereo-vision depth estimation requires the same feature to be found in two spatially separated images, and the transformation between the images in RW space must be known very precisely.
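As an illustration of the stereo case, here is a minimal sketch of the disparity-to-depth relation for a rectified pair; the focal length, baseline and matched pixel columns below are made-up placeholder values, not data from this question.
% Disparity-to-depth for a rectified stereo pair (placeholder values)
f = 1200;             % focal length in pixels (assumption)
B = 0.1;              % baseline between the two cameras in meters (assumption)
xL = 400;             % column of the feature in the left image
xR = 370;             % column of the same feature in the right image
disparity = xL - xR;  % disparity in pixels
Z = f * B / disparity % depth of the feature in meters (here 4 m)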
There may be other methods but these are the two I am familiar with.

Transform Image using Roll-Pitch-Yaw angles (Image rectification)

I am working on an application where I need to rectify an image taken from a mobile camera platform. The platform measures roll, pitch and yaw angles, and I want to use this information in some sort of transform to make it look as if the image were taken from directly above.
In other words, I want a perfect square lying flat on the ground, photographed from afar with some camera orientation, to be transformed, so that the square is perfectly symmetrical afterwards.
I have been trying to do this through OpenCV(C++) and Matlab, but I seem to be missing something fundamental about how this is done.
In Matlab, I have tried the following:
%% Transform perspective
img = imread('my_favourite_image.jpg');
R = R_z(yaw_angle)*R_y(pitch_angle)*R_x(roll_angle);
tform = projective2d(R);
outputImage = imwarp(img,tform);
figure(1), imshow(outputImage);
Where R_z/y/x are the standard rotational matrices (implemented with degrees).
For some yaw-rotation, it all works just fine:
R = R_z(10)*R_y(0)*R_x(0);
Which gives the result:
If I try to rotate the image by the same amount about the X- or Y- axes, I get results like this:
R = R_z(10)*R_y(0)*R_x(10);
However, if I rotate by 10 degrees divided by some huge number, it starts to look OK. But then again, this is a result that has no research value whatsoever:
R = R_z(10)*R_y(0)*R_x(10/1000);
Can someone please help me understand why rotating about the X- or Y-axes makes the transformation go wild? Is there any way of solving this without dividing by some random number and other magic tricks? Is this maybe something that can be solved using Euler parameters of some sort? Any help will be highly appreciated!
Update: Full setup and measurements
For completeness, the full test code and the initial image have been added, as well as the platform's Euler angles:
Code:
%% Transform perspective
function [] = main()
img = imread('some_image.jpg');
R = R_z(0)*R_y(0)*R_x(10);
tform = projective2d(R);
outputImage = imwarp(img,tform);
figure(1), imshow(outputImage);
end
%% Matrix for Yaw-rotation about the Z-axis
function [R] = R_z(psi)
R = [cosd(psi) -sind(psi) 0;
sind(psi) cosd(psi) 0;
0 0 1];
end
%% Matrix for Pitch-rotation about the Y-axis
function [R] = R_y(theta)
R = [cosd(theta) 0 sind(theta);
0 1 0 ;
-sind(theta) 0 cosd(theta) ];
end
%% Matrix for Roll-rotation about the X-axis
function [R] = R_x(phi)
R = [1 0 0;
0 cosd(phi) -sind(phi);
0 sind(phi) cosd(phi)];
end
The initial image:
Camera platform measurements in the BODY coordinate frame:
Roll: -10
Pitch: -30
Yaw: 166 (angular deviation from north)
From what I understand the Yaw-angle is not directly relevant to the transformation. I might, however, be wrong about this.
Additional info:
I would like to specify that the environment in which the setup will be used contains no lines (oceanic photo) that can be reliably used as a reference (the horizon will usually not be in the picture). Also, the square in the initial image is merely used as a check that the transformation is correct, and will not be there in a real scenario.
So, this is what I ended up doing: I figured that unless you are actually dealing with 3D images, rectifying the perspective of a photo is a 2D operation. With this in mind, I replaced the z-axis values of the transformation matrix with zeros and ones, and applied a 2D Affine transformation to the image.
Rotation of the initial image (see initial post) with measured Roll = -10 and Pitch = -30 was done in the following manner:
R_rot = R_y(-60)*R_x(10);
R_2d = [ R_rot(1,1) R_rot(1,2) 0;
         R_rot(2,1) R_rot(2,2) 0;
         0          0          1 ]
This implies a rotation of the camera platform to a virtual camera orientation where the camera is placed above the scene, pointing straight downwards. Note the values used for roll and pitch in the matrix above.
Additionally, if the image should be rotated so that it is aligned with the platform heading, a rotation about the z-axis can be added, giving:
R_rot = R_y(-60)*R_x(10)*R_z(some_heading);
R_2d = [ R_rot(1,1) R_rot(1,2) 0;
         R_rot(2,1) R_rot(2,2) 0;
         0          0          1 ]
Note that this does not change the actual image - it only rotates it.
As a result, the initial image rotated about the Y- and X-axes looks like:
The full code for doing this transformation, as displayed above, was:
% Load image
img = imread('initial_image.jpg');
% Full rotation matrix. Z-axis included, but not used.
R_rot = R_y(-60)*R_x(10)*R_z(0);
% Strip the values related to the Z-axis from R_rot
R_2d = [ R_rot(1,1) R_rot(1,2) 0;
R_rot(2,1) R_rot(2,2) 0;
0 0 1 ];
% Generate transformation matrix, and warp (matlab syntax)
tform = affine2d(R_2d);
outputImage = imwarp(img,tform);
% Display image
figure(1), imshow(outputImage);
%*** Rotation Matrix Functions ***%
%% Matrix for Yaw-rotation about the Z-axis
function [R] = R_z(psi)
R = [cosd(psi) -sind(psi) 0;
sind(psi) cosd(psi) 0;
0 0 1];
end
%% Matrix for Pitch-rotation about the Y-axis
function [R] = R_y(theta)
R = [cosd(theta) 0 sind(theta);
0 1 0 ;
-sind(theta) 0 cosd(theta) ];
end
%% Matrix for Roll-rotation about the X-axis
function [R] = R_x(phi)
R = [1 0 0;
0 cosd(phi) -sind(phi);
0 sind(phi) cosd(phi)];
end
Thank you for the support, I hope this helps someone!
I think you can derive the transformation this way:
1) Take four 3D points A(-1,-1,0), B(1,-1,0), C(1,1,0) and D(-1,1,0). You can take any 4 non-collinear points; they are not related to the image.
2) You have the transformation matrix, so you can place your camera by multiplying the point coordinates by it. You will get 3D coordinates relative to the camera position/direction.
3) You need the projection of these points onto the screen plane. The simplest way is to use an orthographic projection (simply ignore the depth coordinate). At this stage you have 2D projections of the transformed points.
4) Once you have two sets of 2D point coordinates (the set from step 1 without the third coordinate, and the set from step 3), you can compute the homography matrix in the standard way.
5) Apply the inverse homography transformation to your image (a sketch follows below).
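A minimal MATLAB sketch of steps 1)-5), assuming the R_y/R_x helper functions from the question are on the path; the use of fitgeotrans/imwarp, the image name and the angles are assumptions for illustration, not part of the answer above.
% Sketch of steps 1)-5); the specific functions (fitgeotrans, imwarp) are an assumed choice
img = imread('some_image.jpg');            % placeholder image name
R_rot = R_y(-30)*R_x(-10);                 % placeholder camera orientation
% 1) four non-collinear reference points on the ground plane (z = 0)
P3 = [-1 -1 0; 1 -1 0; 1 1 0; -1 1 0];
% 2) transform them into the camera frame
Pc = (R_rot * P3')';
% 3) orthographic projection: keep x and y, ignore depth
proj = Pc(:,1:2);
% 4) homography between the untransformed 2D points and their projections
tform = fitgeotrans(P3(:,1:2), proj, 'projective');
% 5) apply the inverse homography to the image
outputImage = imwarp(img, invert(tform));
figure, imshow(outputImage);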
You need to estimate a homography. For an off-the-shelf Matlab solution, see function vgg_H_from_x_lin.m from http://www.robots.ox.ac.uk/~vgg/hzbook/code/ .
For the theory dig into a Computer Vision textbook, such as the one available freely at http://szeliski.org/Book/ or in Chapter 3 of http://programmingcomputervision.com/downloads/ProgrammingComputerVision_CCdraft.pdf
Maybe my answer is not correct due to my misunderstanding of the camera parameters, but I was wondering whether the yaw/pitch/roll angles are relative to the position of your object. I used the general rotation formula, and my code is below (the rotation functions R_x, R_y and R_z were copied from yours; I didn't paste them here).
close all
file='http://i.stack.imgur.com/m5e01.jpg'; % original image
I=imread(file);
R_rot = R_x(-10)*R_y(-30)*R_z(166);
R_rot = inv(R_rot);
R_2d = [ R_rot(1,1) R_rot(1,2) 0;
R_rot(2,1) R_rot(2,2) 0;
0 0 1 ];
T = maketform('affine',R_2d);
transformedI = imtransform(I,T);
figure, imshow(I), figure, imshow(transformedI)
The result:
This indicates that you still need some rotation operation to get the 'correct' alignment in your mind (but probably not necessarily the correct position from the camera's point of view).
So I changed R_rot = inv(R_rot); to R_rot = inv(R_rot)*R_x(-5)*R_y(25)*R_z(180);, and now it gives me:
This looks more like what you want.
Thanks.

Rotation of image manually in matlab

I am trying to rotate the image manually using the following code.
clc;
m1 = imread('owl','pgm'); % a simple gray scale image of order 260 X 200
newImg = zeros(500,500);
newImg = int16(newImg);
rotationMatrix45 = [cos((pi/4)) -sin((pi/4)); sin((pi/4)) cos((pi/4))];
for x = 1:size(m1,1)
    for y = 1:size(m1,2)
        point = [x; y];
        product = rotationMatrix45 * point;
        product = int16(product);
        newx = product(1,1);
        newy = product(2,1);
        newImg(newx,newy) = m1(x,y);
    end
end
imshow(newImg);
Simply put, I am iterating through every pixel of the image m1, multiplying its coordinates (x,y) by the rotation matrix to get (x',y'), and storing the value of m1(x,y) into newImg(x',y'). BUT it is giving the following error:
??? Attempted to access newImg(0,1); index must be a positive integer or logical.
Error in ==> at 18
newImg(newx,newy) = m1(x,y);
I don't know what I am doing wrong.
Part of the rotated image will get negative (or zero) newx and newy values since the corners will rotate out of the original image coordinates. You can't assign a value to newImg if newx or newy is nonpositive; those aren't valid matrix indices. One solution would be to check for this situation and skip such pixels (with continue)
Another solution would be to enlarge the newImg sufficiently, but that will require a slightly more complicated transformation.
This is assuming that you can't just use imrotate because this is homework?
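For illustration, a minimal sketch of the skip-invalid-indices approach inside the OP's loop, assuming m1, newImg and rotationMatrix45 are defined as in the question:
for x = 1:size(m1,1)
    for y = 1:size(m1,2)
        product = int16(rotationMatrix45 * [x; y]);
        newx = product(1,1);
        newy = product(2,1);
        % skip pixels that rotate outside the valid index range of newImg
        if newx < 1 || newy < 1 || newx > size(newImg,1) || newy > size(newImg,2)
            continue;
        end
        newImg(newx,newy) = m1(x,y);
    end
end
imshow(newImg);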
The problem is simple, the answer maybe not: Matlab arrays are indexed from 1 to N (whereas in many programming languages it's from 0 to N-1).
Try clamping the indices, e.g. newImg( min( max(1,newx), size(newImg,1) ), min( max(1,newy), size(newImg,2) ) ) (I don't have Matlab at work so I can't tell if it's going to work), but the resulting image will be cropped.
This is an old post so I guess it won't help the OP, but as I was helped by his attempt, I post my corrected code here.
There is basically some freedom in the implementation regarding how you deal with unassigned pixels, as well as whether you wish to keep the original size of the picture - which forces you to crop areas falling "outside" of it.
The following function rotates the image around its center, leaves unassigned pixels "burned" (white) and crops the edges.
function [h] = rot(A,ang)
rotMat = [cos((pi.*ang/180)) sin((pi.*ang/180)); -sin((pi.*ang/180)) cos((pi.*ang/180))];
centerW = round(size(A,1)/2);
centerH = round(size(A,2)/2);
h = 255.*uint8(ones(size(A)));
for x = 1:size(A,1)
    for y = 1:size(A,2)
        point = [x-centerW; y-centerH];
        product = rotMat * point;
        product = int16(product);
        newx = product(1,1);
        newy = product(2,1);
        if newx+centerW <= size(A,1) && newx+centerW > 0 && newy+centerH <= size(A,2) && newy+centerH > 0
            h(newx+centerW,newy+centerH) = A(x,y);
        end
    end
end

Moving point along a curve (3D-Animation plots)

I am trying to make an animation of the trajectory (a circular orbit at 7000 km altitude) of a satellite orbiting the Earth. The following vectors x, y, z represent its coordinates in the reference system (obtained by integrating the acceleration due to the non-spherical gravitational potential).
fh = figure('DoubleBuffer','on');
ah = axes('Parent',fh,'Units','normalized','Position',[0 0 1 1],...
'DataAspectRatio',[1 1 1],'DrawMode','fast');
x = 1.0e+003 * [ 1.293687086462776 1.355010603320554 ...
1.416226136451621 1.477328806662750 1.538313743926646...
1.841302933101510 2.140623861743577 2.435680048370655...
2.725883985836056 3.830393161542639 4.812047393962632...
5.639553477924236 6.285935904692739 6.778445814703028...
6.981534839226300 6.886918327688911 6.496619397538814...
5.886899070860056 5.061708852126299 4.051251943168882...
2.891621923700204 1.551975259009857 0.148687346809817...
-1.259946709379085 -2.614876359324573 -3.789635985368149...
-4.822735075152957 -5.675398819678173 -6.314344260262741...
-6.725008970265510 -6.860046738669579 -6.714044347581475...
-6.291232549137548 -5.646225528669501 -4.790489239458692...
-3.756316068441812 -2.581710448683235 -1.257064527234605...
0.118190083177733 1.488198207705392 2.797262268588749...
3.943218990855596 4.943060241667732 5.760107224604901...
6.363435161221018 6.741208871652011 6.844507242544970...
6.669637491855506 6.222229021788314 5.549112743364572...
4.665587166679964 3.605338508383659 2.407805301565781...
1.076891826523990 -0.297413079432155 -1.658804233546807...
-2.950960371016551 -4.105336427038419 -5.093651475630134...
-5.875676956725480 -6.417825276834068 -6.694317613708315...
-6.702354075060146 -6.441476385534835 -5.920328191821120...
-5.149356931765655 -4.165756794143557 -3.010476122311884...
-1.730623521107957 -0.547981318845428 0.651933236927557...
1.830754553013015 2.950797411065132];
y = 1.0e+003 *[ -6.879416537989226 -6.867600717396513...
-6.855237614338527 -6.842328214064634 -6.828873545169439...
-6.753459997528374 -6.664593892931937 -6.562452270514113...
-6.447238135027323 -5.857768973060929 -5.080802144227667...
-4.141502963266585 -3.069449548231363 -1.712593819793112...
-0.283073212084787 1.157789207734001 2.547934226666446...
3.733185664633135 4.781256997101091 5.653507474532885...
6.316540958291930 6.760480121739906 6.924451844039825...
6.801366712306432 6.393950562012035 5.763652137956600...
4.918852380803697 3.890903548710424 2.717191733101876...
1.385839187748386 -0.001786735280855 -1.388680800030854...
-2.717513794724399 -3.877348086956174 -4.892062889940518...
-5.723943344458780 -6.341064412332522 -6.729295147896739...
-6.844976271597333 -6.684181367561298 -6.252308741323985...
-5.600523241569850 -4.741636145151388 -3.707934368103928...
-2.537101251915556 -1.208445066639178 0.169057351189467...
1.539102816836380 2.845512534980855 3.993289528709769...
4.989150886098799 5.795183343929699 6.379362665363127...
6.723976759736427 6.794165677259719 6.586864956951024...
6.108394444576384 5.387403581100790 4.449452017586583...
3.332306147336086 2.080126804848620 0.757432563194591...
-0.595089763589023 -1.923045482863719 -3.172486599444496...
-4.302442851663575 -5.254127434062967 -5.988250483410006...
-6.472859710456819 -6.675113607083117 -6.664054266658221...
-6.440275312105615 -6.010308893159839];
z = [ -1.348762314964606 -1.416465504571016 -1.484053975854905...
-1.551522350691171 -1.618865254528658 -1.953510294130345...
-2.284215283426580 -2.610320163346533 -2.931177500785390...
-4.153679292291825 -5.242464339076090 -6.162825517200489...
-6.884797354552217 -7.440577139596716 -7.680358197465111...
-7.594616346122523 -7.183952381870657 -6.529293328494871...
-5.637062917332294 -4.540678277777376 -3.279180600545935...
-1.817413221203883 -0.280548741687378 1.268253040429052...
2.764251377698321 4.066975661566477 5.218214283582148...
6.174673504642019 6.899157495671121 7.375688520371054...
7.548875108319217 7.410793523141250 6.965068314483629...
6.271309946313485 5.343254095742233 4.215431448848456...
2.928028129903598 1.469574073877195 -0.048649548535536...
-1.563638474934283 -3.013536101911645 -4.285161526803897...
-5.397128342069014 -6.308837263463213 -6.985946890567337...
-7.415475222950275 -7.542406523585701 -7.363021555333582...
-6.884639818710263 -6.158276823110702 -5.199186592259776...
-4.043958234344444 -2.736923814690622 -1.283388986878655...
0.219908617803070 1.712828428793243 3.135072606759898...
4.411790351254605 5.510842969067953 6.387336537361380...
7.004133661144990 7.332163450286972 7.366696289243980...
7.105258174916579 6.555393588532904 5.727091807637045...
4.660073989309112 3.399622357708514 1.999243120787114...
0.701744421660999 -0.620073499615723 -1.923270654698332...
-3.164705887374677 ];
load('topo.mat','topo','topomap1');
[x1,y1,z1] = sphere(50);
x1 = 6678.14*x1;
y1 = 6678.14*y1;
z1 = 6678.14*z1;
props.AmbientStrength = 0.1;
props.DiffuseStrength = 1;
props.SpecularColorReflectance = .5;
props.SpecularExponent = 20;
props.SpecularStrength = 1;
props.FaceColor= 'texture';
props.EdgeColor = 'none';
props.FaceLighting = 'phong';
props.Cdata = topo;
surface(x1,y1,z1,props);
light('position',[-1 0 1]);
light('position',[-1.5 0.5 -0.5], 'color', [.6 .2 .2]);
view(3);
handles.p1 = line('parent',ah,'XData',x(1),'YData',y(1),'ZData',...
z(1),'Color','red','LineWidth',2);
handles.p2 = line('parent',ah,'XData',x(end),'YData',y(end),...
'ZData',z(end),'Marker','o','MarkerSize',6,'MarkerFaceColor','b');
oaxes([0 0 0],'Arrow','extend','AxisLabelLocation','side',...
'Xcolor','green','Ycolor','green','Zcolor','green');
axis vis3d equal;
handles.XLim = get(gca,'XLim');
handles.YLim = get(gca,'YLim');
handles.ZLim = get(gca,'ZLim');
set([handles.p1,handles.p2],'Visible','off');
xmin = handles.XLim(1);
ymin = handles.YLim(1);
zmin = handles.ZLim(1);
xmax = handles.XLim(2);
ymax = handles.YLim(2);
zmax = handles.ZLim(2);
set(ah, 'XLim', [xmin xmax],'YLim', [ymin ymax],'Zlim',[zmin zmax]);
view(3);
handles.hsat = line('parent',ah,'XData',x(1), 'YData',y(1),...
'ZData',z(1),'Marker','o', 'MarkerSize',6,'MarkerFaceColor','b');
k = uint8(2);
u2 = uint8(length(x));
while k < u2
    handles.htray(k) = line([x(k-1) x(k)],[y(k-1) y(k)],[z(k-1) z(k)],...
        'Color','red','LineWidth',3);
    set(handles.hsat,'XData',x(k),'YData',y(k),'ZData',z(k));
    drawnow;
    k = k + 1;
end
where oaxes is a File Exchange (FEX) submission that draws an axes located (in this case) at the origin (0,0,0) of the PlotBox.
I have read the User Guide's Graphics section in the Matlab Help Browser. It recommends using low-level functions to speed up graphics output (which is why I use the line function instead of plot3) and the painters renderer for line graphics. In my case I cannot use painters because I have a surface (the Earth) which it does not draw well. I want to get something similar to this (I have tried to get in touch with the author but have not received a response). The final result is slow (it takes 11.4 seconds on my computer with an Intel Core i5 processor) and the animation is discontinuous (perhaps I need more points to make the blue point's movement look continuous, but the integrator's output points are fixed). I would like to know what I should do to improve it. Thank you for your attention. Cheers.
A couple of things here.
DrawMode=fast probably doesn't do what you think it does. It turns off depth sorting, and I think you really want depth sorting here.
You're creating line objects in the inner loop. You really want to create a small number of graphics objects and reuse them. Could you create a single line object and set its XData, YData, and ZData in the loop (see the sketch below)?
You can use hgtransform to avoid modifying the coordinates of hsat (as described here), but that would only make a difference if hsat was much more complex. I don't think it would buy you anything in this case.
You could reduce the resolution of your surface.
You probably want to set the figure's Renderer property to OpenGL.
As it stands, I'm getting almost 20 frames per second on my system with your code. After making those changes, I'm getting about 100 frames per second. What sort of frame rate are you shooting for here?
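As an illustration of the single-line-object suggestion, here is a minimal sketch, assuming the x, y, z vectors and the axes handle ah from the question already exist; it only replaces the trail-drawing loop, everything else stays as in the original code.
% Sketch: one reusable trail object instead of a new line object per frame
handles.htray = line('Parent',ah,'XData',x(1),'YData',y(1),'ZData',z(1),...
    'Color','red','LineWidth',3);
handles.hsat = line('Parent',ah,'XData',x(1),'YData',y(1),'ZData',z(1),...
    'Marker','o','MarkerSize',6,'MarkerFaceColor','b');
for k = 2:numel(x)
    % extend the existing trail by updating its data instead of creating new lines
    set(handles.htray,'XData',x(1:k),'YData',y(1:k),'ZData',z(1:k));
    set(handles.hsat,'XData',x(k),'YData',y(k),'ZData',z(k));
    drawnow;
end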
I believe the main reason your animation is slow is that you are using the Phong lighting algorithm, which is computationally expensive. To see the effect it has on performance, try specifying Gouraud shading instead:
% lighting('gouraud');
props.FaceLighting = 'gouraud';  % faster interpolating method