Why am I not getting slope of a horizontal line = 0? - matlab

I am trying to get the slope of my data in Matlab using the "polyfit" command.
x = 1:38; y = -60*ones(1,38);
p_fit = polyfit(x,y,1);
slope = p_fit(1);
As far as I know, since y has constant values, I expect the slope to be zero, but I am getting a value on the order of 10^-16. Please help me understand where I am going wrong.
The values of y are in the dB domain. Could that be the problem, or is there another reason?
Thanks

MATLAB uses double-precision floating-point arithmetic unless you tell it to do otherwise, and 10^-16 is well within the expected rounding error.
If you want to get into the specifics (and you really should), have a look at "What every computer scientist should know about floating point arithmetic".
Update:
With regard to your comment, the boundaries you mention are at least 10 orders of magnitude larger than the error you are seeing, so as long as that remains the case, you really need not worry about this small error.
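As a quick sanity check (a sketch reusing the values from the question; the tolerance is my own choice, not something prescribed), you can compare the fitted slope to the spacing of double-precision numbers around -60 and treat anything below a small threshold as zero:
x = 1:38;
y = -60*ones(1,38);
p_fit = polyfit(x,y,1);
slope = p_fit(1);                                          % comes out around 1e-16, as in the question
fprintf('slope = %g, eps(-60) = %g\n', slope, eps(-60));   % eps(-60) is about 7.1e-15
isFlat = abs(slope) < 1e-10;                               % tolerance is an assumption; pick one suited to your data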

Related

Matrix Multiplication Simplification not giving same results in code

I want to evaluate the expression x^T A x - 2 y^T A x + y^T A y in Matlab.
If my calculations are correct, this should be equivalent to evaluating (x - y)^T A (x - y).
My main problem right now is that I am not getting the same results from the above two expressions. I'm wondering if it's a precision error or a conceptual error.
I tried a very simple example with some random values to check my math was right. The mini example I used is as follows
A = [1,2,0;2,1,2;0,2,1];
x = [1;2;3];
y = [4;4;4];
(transpose(x)*A*x)-(2*transpose(y)*A*x)+(transpose(y)*A*(y))
transpose(x-y)*A*(x-y)
Both give 46.
I then tried a more realistic example of values for what I'm doing and it failed. For example
A = [2.66666666666667,-0.333333333333333,0,-0.333333333333333,-0.333333333333333,0,0,0,0;
-0.333333333333333,2.66666666666667,-0.333333333333333,-0.333333333333333,-0.333333333333333,-0.333333333333333,0,0,0;
0,-0.333333333333333,2.66666666666667,0,-0.333333333333333,-0.333333333333333,0,0,0;
-0.333333333333333,-0.333333333333333,0,2.66666666666667,-0.333333333333333,0,-0.333333333333333,-0.333333333333333,0;
-0.333333333333333,-0.333333333333333,-0.333333333333333,-0.333333333333333,2.66666666666667,-0.333333333333333,-0.333333333333333,-0.333333333333333,-0.333333333333333;
0,-0.333333333333333,-0.333333333333333,0,-0.333333333333333,2.66666666666667,0,-0.333333333333333,-0.333333333333333;
0,0,0,-0.333333333333333,-0.333333333333333,0,2.66666666666667,-0.333333333333333,0;
0,0,0,-0.333333333333333,-0.333333333333333,-0.333333333333333,-0.333333333333333,2.66666666666667,-0.333333333333333;
0,0,0,0,-0.333333333333333,-0.333333333333333,0,-0.333333333333333,2.66666666666667];
x =[1.21585420370805;
1.00388159497757e-16;
-0.405284734569351;
1.36809776609658e-16;
-1.04796659533634e-17;
-7.52459042423650e-17;
-0.607927101854027;
-8.49163704356314e-17;
0.303963550927013];
v =[0.0319067068305797,0.00786616506360615,0.0709811622828447,0.0719615328671117;
1.26150800194897e-17,5.77584497721720e-18,7.89740111567879e-18,7.14396333930938e-18;
-0.158358815125228,-0.876275686098803,0.0539216716399594,0.0450616819309899;
7.90937837037453e-18,3.24196519177793e-18,3.99402664932776e-18,4.17486202509670e-18;
5.35533279761622e-18,-8.91723780863019e-19,-1.56128480212603e-18,1.84423677629470e-19;
-2.18478057810330e-17,-6.63779320738873e-18,-3.21099714760257e-18,-3.93612240449303e-18;
-0.0213963226092681,-0.0168256143048771,-0.0175695110350900,-0.0128155908603601;
-4.06029341772399e-18,-5.65705978843172e-18,-1.80182480882056e-18,-1.59281757789645e-18;
0.221482525259710,-0.0576644539359728,0.0163934384910353,0.0197432976432437];
u = [1.37058079022968;
1.79302486419321;
69.4330887239959;
-52.3949662214410];
y = v*u;
Gives 0 for the first expression and 7.1387e-28 for the second. Is this a precision issue? Which version is better/more accurate to use, and why? Thank you!
I have not checked your simplification, but floating-point math is inexact in general: even performing the same operations in a different order can give different results. Deviations of the size you are seeing could reasonably come from floating-point round-off, and it is up to you to decide whether they are acceptable for your purpose.
To determine which version is more accurate you would have to compute reference values to compare to, which might prove difficult.
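As an aside, the simplification x^T A x - 2 y^T A x + y^T A y = (x - y)^T A (x - y) does hold whenever A is symmetric (as it is in your example), so a tolerance-based comparison is the natural way to check agreement rather than expecting exact equality. A minimal sketch, assuming A, x and y are defined as in the question and with a threshold that is a judgment call:
q1 = (transpose(x)*A*x) - (2*transpose(y)*A*x) + (transpose(y)*A*y);
q2 = transpose(x-y)*A*(x-y);
tol = 1e-12*max([1, abs(q1), abs(q2)]);   % relative tolerance; threshold is an assumption
agree = abs(q1 - q2) < tol;               % true when the two forms match up to round-off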

How do I get lines to stop extending beyond plot border? (Matlab)

I am writing a report for a class and am having some issues with the lines of an unstable plot going beyond the boundary of the graph and overlapping the title and xlabel. This is despite specifying a ylim from -2 to 2. Is there a good way to solve this issue?
Thanks!
plot(X,u(:,v0),X,u(:,v1),X,u(:,v2),X,u(:,v3),X,u(:,v4))
titlestr = sprintf('Velocity vs. Distance of %s function using %s: C=%g, imax=%g, dx=%gm, dt=%gsec',ICFType,SDType,C,imax,dx,dt);
ttl=title(titlestr);
ylabl=ylabel("u (m/s)");
xlabl=xlabel("x (m)");
ylim([-2 2])
lgnd=legend('t=0','t=1','t=2','t=3','t=4');
ttl.FontSize=18;
ylabl.FontSize=18;
xlabl.FontSize=18;
lgnd.FontSize=18;
EDIT: Minimum reproducible example
mgc=randi([-900*10^10,900*10^10], [1000,2]);
mgc=mgc*1000000;
plot(mgc(:,1),mgc(:,2))
ylim([-1,1])
This is odd. It really looks like a bug... partly.
The reason is probably that the angles of the lines are so shallow that MATLAB runs into rounding errors when calculating the points to draw, given very small limits on very large numbers. You can see that you don't run into this problem when you don't scale the matrix mgc:
mgc = randi([-900*10^10,900*10^10], [1000,2]);
plot(mgc(:,1),mgc(:,2))
ylim([-1,1])
but if you scale it further, you run into this problem...
mgc = randi([-900*10^10,900*10^10], [1000,2]);
plot(mgc(:,1)*1e6,mgc(:,2)*1e6)
ylim([-1,1])
While those numbers are nowhere near the largest number a double can represent (type realmax in the command window: it is on the order of 10^308!), limiting the plot to [-1,1] on one of the axes -- note that you get the same phenomenon on the x-axis -- makes MATLAB run into precision problems.
First of all, you can see that it draws far fewer lines than before (in my case), although I only asked it to zoom in on the y-axis. The thing is that MATLAB does not recalculate the lines for the visible section; it really zooms into the existing ones (I guess this may cause resolution errors at the pixel level?).
Well, let's have a look at the data. (Pro tip: you can get the data of a line from a MATLAB figure with this snippet:
datObj = findobj(gcf,'-property','YData','-property','XData');
X = datObj.XData;
Y = datObj.YData;
xlm = get(gca,'XLim'); % get the current x-limits
) We see that it represents the original data set, which is not surprising, as you can also zoom out again.
Note that this only occurs with such a chaotic, jagged line. If you sort the data, it does not happen.
quick fix:
Now, what happens, if we calculate the exact points for this section?
m = diff(Y)./diff(X);                 % slope of each line segment
n = Y(1:end-1) - m.*X(1:end-1);       % y-intercept of each segment
xMinus1 = (-1 - n)./m;                % x where each segment crosses y = -1
xPlus1  = ( 1 - n)./m;                % x where each segment crosses y = +1
% plot each segment only between its crossings of y = -1 and y = +1
plot([xMinus1;xPlus1],(ones(length(xMinus1),2).*[-1 1]).')
xlim(xlm); % limit to exact same x-scale as before
The different colors indicate that they are now individual lines and not a single wild chaos;)
It seems Max pretty much hit the nail on the head regarding why this error occurs. Per Enrico's advice, I went ahead and submitted a bug report. MathWorks responded saying they were unsure whether it was "unexpected behavior" and would look into it further shortly. They did suggest a temporary workaround (which, in my case, may become permanent).
This workaround is to put
set(gca,'ClippingStyle','rectangle');
directly after the plotting line.
Below is a modified version of the minimum reproducible example with this modification.
mgc=randi([-900*10^10,900*10^10], [1000,2]);
mgc=mgc*1000000;
plot(mgc(:,1),mgc(:,2))
set(gca,'ClippingStyle','rectangle');
ylim([-1,1])

Balloon string inverse kinematics [closed]

Closed. This question needs debugging details and is not currently accepting answers. (Closed 5 years ago.)
I have a simulation where balloons bounce around the screen, each with a string attached that functions like a constraint. The string should obey gravity, so that if the balloon isn't at the string's full length, there should be slack hanging down.
To solve this, I represent the string with line segments. I start with the string hanging straight down and then iterate over each segment, pointing it at the balloon and positioning it so it stays connected. This gets the behavior very close to what I want; however, to make sure the string doesn't leave its anchor, I translate it back to the point where it is rooted. The problem is that the end is then no longer connected. If I didn't have to reset the string to the vertical position this would work, but I have to in order to calculate the slack.
I added a temporary solution of iterating over it again to reconnect it, and the behavior is decent, but it definitely isn't realistic. An example can be seen here: http://www.mikerkoval.com/simulation/balloons
Click to create balloons. I am unsure how to simulate gravity here. Is there a better approach to make this more realistic?
-----EDIT----
I am struggling to solve for a in the catenary function.
I defined my function as:
var k = function (a){
return Math.pow((2 * a[0]*Math.sinh(h/(2 * a[0])) - sqrt(s*s- v*v)), 2)
}
however, when I call
var sol = numeric.uncmin(k, [1]);
it runs for a little while and then throws a NaN error. I am trying to figure out where the problem even is, but I am having very little success, since I am still learning how uncmin works.
I think what you want is to find the parameters of the catenary curve through two points p0 and p1 (the positions of the anchor and the balloon) that has a given arc length s, the length of the string.
The equation of the curve is
y = a cosh (x / a)
where a is a parameter you need to determine. Let v = y1 - y0 and h = x1 - x0. With a bit of algebra you can show that the value of a must satisfy
sqrt(s*s - v*v) = 2 * a * sinh(h / (2a))
This article gives a nice discussion of how to solve this for a iteratively with Newton's method. The author makes a substitution of variables, b = a/h, that makes the solution space close to linear, so Newton's method converges very quickly to a good answer.
Newton's method is a simple iteration that requires the explicit derivative of the curve and a starting point. The article above gives the derivative. Because the curve is so close to linear, you can pick any starting point on that linear portion - b = 0.2 will do. By converging quickly, I mean the number of correct significant digits doubles with each iteration. So in practice you're likely to have all the digits of a double precision floating point number in 6 iterations or less.
Once you have the parameter, it is a simple matter to plot the explicit curve you need.
If Newton doesn't converge in 6 iterations or so, then the instance of the problem is a very ill-conditioned one. Here that means a curve that's very nearly a straight line. So just draw such a line between p0 and p1!
All this should be easily fast enough: less expensive than your current approximation.
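For what it's worth, here is a minimal MATLAB-style sketch of that Newton iteration (the function name, loop limit, and convergence threshold are my own choices, not from the article), using the substitution b = a/h and the starting point b = 0.2 mentioned above. It assumes the string is longer than the straight-line distance between the points (s^2 > h^2 + v^2); a very slack string may need a smaller starting b.
function a = catenaryA(h, v, s)
rhs = sqrt(s*s - v*v)/h;            % right-hand side in the scaled variable b = a/h
b   = 0.2;                          % starting point on the near-linear branch
for iter = 1:20
    f  = 2*b*sinh(1/(2*b)) - rhs;             % residual of the equation above
    fp = 2*sinh(1/(2*b)) - cosh(1/(2*b))/b;   % derivative of the residual w.r.t. b
    bNew = b - f/fp;                          % Newton step
    if abs(bNew - b) < 1e-14*abs(b)
        b = bNew;
        break
    end
    b = bNew;
end
a = b*h;                            % undo the substitution b = a/h
end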

atan(c*tan(x)) does not consider x>pi/2

The Problem:
I have a problem in Matlab with a part of a function that calculates the value of
atan(c*tan(x)).
For some real c and x, where x is an angle. Usually this returns values between -pi/2 and pi/2, but I want it to take into account "how often the angle has gone around": for example, with c=1 we get the same result for x=pi/2 and x=3*pi/2.
This is what I want to avoid; in this case I would want it to return the value 3*pi/2 for x=3*pi/2 (I know that for c=1 atan and tan cancel out, but for arbitrary real c they do not).
In other words, I want to make atan(c*tan(x)) continuous on R in x.
How I tried to fix it:
I simply added a correction that keeps an eye on the argument of the tangent:
atan(c*tan(x))+pi.*(floor((x./pi)+(1/2)))
This works for values that are not too near the poles of the tangent function, but once x lies very near a pole it breaks down (jumps of height pi are observed). This is especially a problem for my calculations, as x is very close to one of the poles all the time.
What still seems to be the problem:
Imho the problem is that the tangent has a much "higher resolution" near its poles than the floor function, which makes the correction jump too early or too late.
My question:
Is there another way to fix this problem that is not sensitive to x lying near the poles of the tangent?
There are multiple possible solutions to this problem. A simple one is to treat x near the poles pi/2 + k*pi as a special case, setting atan(c*tan(x)) = +pi/2 or -pi/2 depending on whether x is slightly smaller or slightly larger than the pole, respectively.
function Out = ContTangents(In,RidgeParam)
InOverPi = In./pi;                               % input expressed in multiples of pi
Rounded = round(InOverPi);                       % nearest multiple of pi, used for the branch offset
CloseRounded = round(InOverPi+1/2)-1/2;          % nearest half-integer multiple of pi, i.e. nearest pole of tan
ArctanResult = zeros(size(In));
IsCloseVal = abs(InOverPi - CloseRounded) < 0.00001;          % inputs lying very near a pole
CloseVal = sign(RidgeParam*(CloseRounded-InOverPi)) * pi/2;   % limiting value +-pi/2 used at the poles
NotCloseVal = atan(RidgeParam*tan(In));                       % ordinary evaluation away from the poles
ArctanResult(IsCloseVal) = CloseVal(IsCloseVal);
ArctanResult(~IsCloseVal) = NotCloseVal(~IsCloseVal);
Out = ArctanResult + pi*(Rounded + 1/2);         % add the branch offset so the result is continuous
This solution appears to produce nice-looking curves, and as far as I can see it avoids discontinuities and singularities. At least, it seems nice when I run
figure, plot(ContTangents(-10:.001*pi:10,2))
Edit: Some parts of the function have been modified. When running ContTangents(pi/2-exp(-36),2) (the problem value from the OP's example), I found this very peculiar behavior. Is this a bug in Matlab?
K>> InOverPi<0.5
ans =
1
K>> round(InOverPi)
ans =
1
This could be worked around by defining your own round function which doesn't fail at the point .5-eps(.25). In fact, due to the entertaining behavior of floating point at the limit, you can't even write that number down directly; you have to proceed as follows.
format hex
(pi/2-exp(-36))/pi
ans =
3fdfffffffffffff
-eps(.25)+.25+.25
ans =
3fdfffffffffffff
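A minimal sketch of such a replacement round (my own suggestion, not the answerer's code) compares the fractional part to 0.5 instead of adding 0.5 and flooring. Note that it rounds halves toward +Inf, so it differs from MATLAB's round for negative half-integers, which does not matter for the positive values used here:
myRound = @(x) floor(x) + (x - floor(x) >= 0.5);
myRound((pi/2-exp(-36))/pi)   % returns 0, as expected, since the value is just below 0.5
round((pi/2-exp(-36))/pi)     % returned 1 in the session shown above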

Matlab: poor accuracy of optimizers/solvers

I am having difficulty achieving sufficient accuracy in a root-finding problem on Matlab. I have a function, Lik(k), and want to find the value of k where Lik(k)=L0. Basically, the problem is that various built-in Matlab solvers (fzero, fminbnd, fmincon) are not getting as close to the solution as I would like or expect.
Lik() is a user-defined function which involves extensive coding to compute a numerical inverse Laplace transform, etc., and I therefore do not include the full code. However, I have used this function extensively and it appears to work properly. Lik() actually takes several input parameters, but for the current step, all of these are fixed except k. So it is really a one-dimensional root-finding problem.
I want to find the value of k >= 165.95 for which Lik(k)-L0 = 0. Lik(165.95) is less than L0 and I expect Lik(k) to increase monotonically from here. In fact, I can evaluate Lik(k)-L0 in the range of interest and it appears to smoothly cross zero: e.g. Lik(165.95)-L0 = -0.7465, ..., Lik(170.5)-L0 = -0.1594, Lik(171)-L0 = -0.0344, Lik(171.5)-L0 = 0.1015, ... Lik(173)-L0 = 0.5730, ..., Lik(200)-L0 = 19.80. So it appears that the function is behaving nicely.
However, I have tried to "automatically" find the root with several different methods and the accuracy is not as good as I would expect...
Using fzero(@(k) Lik(k)-L0): If constrained to the interval (165.95,173), fzero returns k=170.96 with Lik(k)-L0=-0.045. Okay, although not great. And for practical purposes, I would not know such a precise upper bound without a lot of manual trial and error. If I use the interval (165.95,200), fzero returns k=167.19 where Lik(k)-L0 = -0.65, which is rather poor. I have been running these tests with Display set to iter so I can see what's going on, and it appears that fzero hits 167.19 on the 4th iteration and then stays there on the 5th iteration, meaning that the change in k from one iteration to the next is less than TolX (set to 0.001) and thus the procedure ends. The exit flag indicates that it successfully converged to a solution.
I also tried minimizing abs(Lik(k)-L0) using fminbnd (giving upper and lower bounds on k) and fmincon (giving a starting point for k) and ran into similar accuracy issues. In particular, with fmincon one can set both TolX and TolFun, but playing around with these (down to 10^-6, much higher precision than I need) did not make any difference. Confusingly, sometimes the optimizer even finds a k-value on an earlier iteration that is closer to making the objective function zero than the final k-value it returns.
So, it appears that the algorithm is iterating to a certain point, then failing to take any further step of sufficient size to find a better solution. Does anyone know why the algorithm does not take another, larger step? Is there anything I can adjust to change this? (I have looked at the list under optimset but did not come up with anything useful.)
Thanks a lot!
As you seem to have a "wild" function that does appear to be monotone in the region, a fairly small range of interest, and not a very high requirement on precision, I think all criteria are met for recommending the brute-force approach.
Assuming it does not take too much time to evaluate the function in a point, please try this:
Find an upper bound xmax and a lower bound xmin, choose a preferred step size, and evaluate your function at
xmin:stepsize:xmax
If required (and if monotonicity really applies), you can take a tighter upper and lower bound from this sweep and repeat the process for better accuracy.
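A minimal sketch of that sweep (Lik, L0, and the bounds are the poster's; the step size is my own choice):
xmin = 165.95; xmax = 200; stepsize = 0.5;      % bounds from the question, assumed step
k = xmin:stepsize:xmax;
f = arrayfun(@(kk) Lik(kk) - L0, k);            % evaluate the function on the grid
i = find(f(1:end-1) <= 0 & f(2:end) > 0, 1);    % first sign change (monotone case)
% repeat the sweep on [k(i), k(i+1)] with a smaller step, or hand that
% bracketing interval to fzero, for better accuracy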
I also encountered this problem while using fmincon. Here is how I fixed it.
I needed to find the solution of a single-variable function within an optimization loop over multiple variables. Because of this, I needed to provide a large interval for the solution of the single-variable function. The problem is that fmincon (or fzero) does not converge to a solution if the search interval is too large. To get around this, I solve the problem inside a while loop, starting with a huge upper bound (1e200) and putting a condition on the fval value returned by the solver: if the resulting fval is not small enough, I decrease the upper bound by a factor. The code looks something like this:
fval = 1;
factor = 1;
while fval > 1e-7
    UB = factor*1e200;
    [x,fval,exitflag] = fminbnd(@(x) myfun(x,...),LB,UB,options);   % myfun is a placeholder for your objective
    factor = factor * 0.001;
end
The solver exits the while loop when a good solution is found. You can of course also play with the LB by introducing another factor, and/or adjust the factor step.
My 1st language isn't English so I apologize for any mistakes made.
Cheers,
Cristian
Why not use a simple bisection method? You evaluate the function at the midpoint of the current interval and then keep either the left or the right half, so that one bound always gives a negative value and the other a positive value. Since the interval is halved at each step, you can reach arbitrary precision quickly; a short sketch is given below.
I would suspect, however, that there is some other problem with that function, such as discontinuities. It seems strange that fzero would work so badly. It is a deterministic function, right?
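A minimal sketch of that bisection, assuming Lik and L0 are defined as in the question and that Lik(k)-L0 changes sign exactly once on the bracketing interval:
g  = @(k) Lik(k) - L0;
lo = 165.95;  hi = 200;                % bracketing interval from the question
for iter = 1:60                        % 60 halvings is plenty for double precision
    mid = 0.5*(lo + hi);
    if sign(g(mid)) == sign(g(lo))
        lo = mid;                      % the root lies in the upper half
    else
        hi = mid;                      % the root lies in the lower half
    end
end
k_root = 0.5*(lo + hi);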