Why am I getting imaginary numbers instead of a simple answer - MATLAB

This is just the part of the code that matters and needs to be fixed. I don't know what I'm doing wrong here. All the variables are simple numbers; it's true that one is computed from another, but there shouldn't be anything wrong with that. The value for which I'm getting imaginary numbers is supposed to be part of a loop, so it's important that I get it right. Please ignore the variables that are not needed, as I've only posted part of the code.
The answer I get is:
KrInitialFirstPart = 0.000000000000000e+00 - 1.466747615972368e+05i
clear all;
clc;
% the initial position components
rInitial= 10; %kpc
zInitial= 0; %kpc
% the initial velocity components
vrInitial= 0; %km/s
vzInitial= 150; %tangential velocity component
vtInitial= 150; %not used
% the height
h= rInitial*vzInitial; %angular momentum constant
tInitial=0;
Dt=1e-3;
e=0.99;
pc=11613.5;
KrInitialFirstPart= -4*pi*pc*sqrt( 1-(e^2) / (e^3) )*rInitial
format long

Here
sqrt( 1-(e^2) / (e^3) )
you have
e=0.99;
so e < 1, and therefore e^3 is less than e^2. It follows that
(e^2)/(e^3) > 1.
Division binds tighter than subtraction (i.e. it is evaluated first), so you are taking the square root of a negative number. Hence the imaginary component in your result.
Perhaps you require
sqrt( (1-(e^2)) / (e^3) )
which is guaranteed to yield a real number result since
1 - e^2 > 0
for your specified e
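For completeness, a minimal sketch of the corrected line, assuming the intended grouping is (1 - e^2)/e^3 as suggested above; with the question's values it yields a real (negative) number rather than an imaginary one:
e = 0.99;
pc = 11613.5;
rInitial = 10; %kpc
% Parenthesize the subtraction so it is evaluated before the division
KrInitialFirstPart = -4*pi*pc*sqrt( (1 - e^2) / (e^3) )*rInitial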


MatLab using Fixed Point method to find a root

I want to find a root of the following function with an error of less than 0.05%:
3*x*tan(x) = 1
In MATLAB I've written this code to do so:
clc, close all
syms x;
x0 = 3.5
f = 3*x*tan(x) - 1;
df = diff(f,x);
while (1)
    x1 = 1 / 3*tan(x0)
    %DIRV.. z= tan(x0)^2/3 + 1/3
    er = (abs((x1 - x0)/x1))*100
    if ( er <= 0.05)
        break;
    end
    x0 = x1;
    pause(1)
end
But it keeps running in an infinite loop with the error stuck at 200.00, and I don't know why.
Don't use while true, as that's usually uncalled for and prone to getting stuck in infinite loops, like here. Simply set a limit on the while instead:
while er > 0.05
    %//your code
end
Additionally, to prevent getting stuck in an infinite loop you can use an iteration counter and set a maximum number of iterations:
ItCount = 0;
MaxIt = 1e5; %// maximum 100,000 iterations
while er > 0.05 && ItCount < MaxIt
    %//your code
    ItCount = ItCount + 1;
end
I see four points of discussion that I'll address separately:
Why does the error seemingly saturate at 200.0 and the loop continue infinitely?
The fixed-point iterator, as written in your code, is finding the root of f(x) = x - tan(x)/3; in other words, finding a value of x at which the graphs of x and tan(x)/3 cross. The only point where this is true is 0. And, if you look at the value of the iterants, the value of x1 is approaching 0. Good.
The bad news is that you are also dividing by that value converging toward 0. While the value of x1 remains finite, in a floating point arithmetic sense, the division works but may become inaccurate, and er actually goes NaN after enough iterations because x1 underflowed below the smallest denormalized number in the IEEE-754 standard.
Why is er 200 before then? It is approximately 200 because the value of x1 is approximately 1/3 of the value of x0 since tan(x)/3 locally behaves as x/3 a la its Taylor Expansion about 0. And abs(1 - 3)*100 == 200.
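To see this numerically, here is a small sketch (my own, not from the original post) that runs a few iterations of the update actually coded in the question, x1 = tan(x0)/3, from the question's starting point and prints the relative error, which settles near 200:
x0 = 3.5;
for k = 1:8
    x1 = tan(x0)/3;               % the fixed-point update the question's code performs
    er = abs((x1 - x0)/x1)*100;   % the question's relative error measure
    fprintf('iter %d: x1 = %.3e, er = %.2f\n', k, x1, er);
    x0 = x1;
end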
Divisions-by-zero and relative orders-of-magnitude are why it is sometimes best to look at the absolute and relative error measures for both the values of the independent variable and function value. If need be, even putting an extremely (relatively) small finite, constant value in the denominator of the relative calculation isn't entirely a bad thing in my mind (I remember seeing it in some numerical recipe books), but that's just a band-aid for robustness's sake that typically hides a more serious error.
This convergence is far different compared to the Newton-Raphson iterations because it has absolutely no knowledge of slope and the fixed-point iteration will converge to wherever the fixed-point is (forgive the minor tautology), assuming it does converge. Unfortunately, if I remember correctly, fixed-point convergence is only guaranteed if the function is continuous in some measure, and tan(x) is not; therefore, convergence is not guaranteed since those pesky poles get in the way.
The function it appears you want to find the root of is f(x) = 3*x*tan(x)-1. A fixed-point iterator of that function would be x = 1/(3*tan(x)) or x = 1/3*cot(x), which is looking for the intersection of 3*tan(x) and 1/x. However, due to point number (2), those iterators still behave badly since they are discontinuous.
A slightly different iterator x = atan(1/(3*x)) should behave a lot better since small values of x will produce a finite value because atan(x) is continuous along the whole real line. The only drawback is that the domain of x is limited to the interval (-pi/2,pi/2), but if it converges, I think the restriction is worth it.
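As a sketch of that suggestion (assuming the question's start point of 3.5, its 0.05% stopping criterion, and an iteration cap of my own choosing), the atan form can be iterated like this:
x0 = 3.5;
er = Inf;
iter = 0;
iterMax = 1e4;
while er > 0.05 && iter < iterMax
    x1 = atan(1/(3*x0));          % continuous iterator suggested above
    er = abs((x1 - x0)/x1)*100;   % same relative-error measure as the question
    x0 = x1;
    iter = iter + 1;
end
x0   % approximate root of 3*x*tan(x) - 1, lying in (0, pi/2)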
Lastly, for any similar future coding endeavors, I do highly recommend @Adriaan's advice. If you would like a sort of compromise between the styles, most of my iterative functions are written with a semantic variable notDone like this:
iter = 0;
iterMax = 1E4;
tol = 0.05;
notDone = tol < er & iter < iterMax;
while notDone
    %//your code
    iter = iter + 1;
    notDone = tol < er & iter < iterMax;
end
You can add flags and all that jazz, but that format is what I frequently use.
I believe that the code below achieves what you are after using Newton's method for the convergence. Please leave a comment if I have missed something.
% find x: 3*x*tan(x) = 1
f = @(x) 3*x*tan(x)-1;
dfdx = @(x) 3*tan(x)+3*x*sec(x)^2;
tolerance = 0.05; % your value?
perturbation = 1e-2;
converged = 1;
x = 3.5;
f_x = f(x);
% Use Newton's method to find the root
count = 0;
err = 10*tolerance; % something bigger than tolerance to start
while (err >= tolerance)
    count = count + 1;
    if (count > 1e3)
        converged = 0;
        disp('Did not converge.');
        break;
    end
    x0 = x;
    dfdx_x = dfdx(x);
    if (dfdx_x ~= 0)
        % Avoid division by zero
        f_x = f(x);
        x = x - f_x/dfdx_x;
    else
        % Perturb x and go back to top of while loop
        x = x + perturbation;
        continue;
    end
    err = (abs((x - x0)/x))*100;
end
if (converged)
    disp(['Converged to ' num2str(x,'%10.8e') ' in ' num2str(count) ...
          ' iterations.']);
end

Solving for the square root by Newton's Method

y_initial = x
y_n approaches sqrt(x) as n -> infinity
There's an x input and a tol input. As long as |y^2 - x| > tol is true, compute the update y = 0.5*(y + x/y). How would I create a while loop that stops when |y^2 - x| <= tol? Every time through the loop the y value changes. In order to get this answer:
>>sqrtx = sqRoot(25,100)
sqrtx =
7.4615
I wrote this so far:
function [sqrtx] = sqrRoot(x,tol)
    n = 0;
    x = 0; %initialized variables
    if x >= tol %skips all remaining code
        return
    end
    while x <= tol
        %code repeated during each loop
        x = x+1 %counting code
    end
That formula is using a modified version of Newton's method to determine the square root. y_n is the previous iteration and y_{n+1} is the current iteration. You just need to keep a variable for each, and when the tolerance criterion is satisfied, you return the current iteration's output. You are also incrementing the wrong value: it should be n, not x. You also aren't computing the tolerance properly... read the question more carefully. You take the current iteration's output, square it, subtract the desired value x, take the absolute value, and see if the result is less than the tolerance.
Also, you need to make sure the tolerance is small. Specifying the tolerance to be 100 will probably not allow the algorithm to iterate and give you the right answer. It may also be useful to see how long it took to converge to the right answer. As such, return n as a second output to your function:
function [sqrtx,n] = sqrRoot(x,tol) %// Change
    %// Counts total number of iterations
    n = 0;
    %// Initialize the previous and current value to the input
    sqrtx = x;
    sqrtx_prev = x;
    %// Until the tolerance has been met...
    while abs(sqrtx^2 - x) > tol
        %// Compute the next guess of the square root
        sqrtx = 0.5*(sqrtx_prev + (x/sqrtx_prev));
        %// Increment the counter
        n = n + 1;
        %// Set for next iteration
        sqrtx_prev = sqrtx;
    end
Now, when I run this code with x=25 and tol=1e-10, I get this:
>> [sqrtx, n] = sqrRoot(25, 1e-10)
sqrtx =
5
n =
7
The square root of 25 is 5... at least that's what I remember from maths class back in the day. It also took 7 iterations to converge. Not bad.
Yes, that is exactly what you are supposed to do: Iterate using the equation for y_{n+1} over and over again.
In your code you should have a loop like
while abs(y^2 - x) > tol
    %// Calculate new y from the formula
end
Also note that tol should be small, as noted in the other answer. The parameter tol tells you how inaccurate you allow your solution to be. Normally you want an accurate solution, so you set tol to a value near zero.
The correct way to solve this:
function [sqrtx] = sqRoot(x,tol)
    sqrtx = x; % start the estimate at the input x
    while abs((sqrtx.^2) - x) > tol % keep iterating until |y^2 - x| <= tol
        sqrtx = 0.5*((sqrtx) + (x/sqrtx)); % Newton update y = 0.5*(y + x/y)
    end
end
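With the update inside the loop, the call from the question produces the expected result; the loose tolerance of 100 is met after two updates of the estimate:
>> sqrtx = sqRoot(25,100)
sqrtx =
7.4615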

Hints or pointers with this specific situation

First time programming here. So I wrote this function in matlab to find roots of cubic polynomials using iterative processes. The function has to get the number of roots right, so at the end, I used if statements to get rid of a root if it were sufficiently close to any other root I've found, because it's probably a repeated root. However, the problem with this, as I just found out, is that if the coefficients of the polynomial are super small, the roots will all be super close to 0. My code will output that there is only one root, 0, when really it should display 3 solutions of 0.
I feel like this is a rather difficult predicament because the iterative processes will never get the exact numbers twice for a double root, so it isn't a matter of just comparing if the numbers are exactly the same. It could be that they're actually two different roots, just very close to one another. Any suggestions?
Edit: This is the code I wrote to get it to not display double roots twice, but I've realized this could potentially get rid of actual roots.
rts = [root1, root2, root3];
if abs(root1 - root2) < 1*10^(-7)
    rts = [root1, root3];
end
if abs(root1 - root3) < 1*10^(-7)
    rts = [root1, root2];
end
if abs(root2 - root3) < 1*10^(-7)
    rts = [root1, root2];
end
if abs(root1 - root2) < 1*10^(-7) && abs(root1 - root3) < 1*10^(-7)
    rts = root1;
end
This assumes that the roots in your rts array are monotonically increasing. The problem with your code is that you overwrite your rts array depending on which conditions are true. Use a new variable, different_rts, for the distinct roots.
If the difference between the next root and the last stored root is greater than the threshold, add it to the array.
rts = [ 0 1e-8 0.1 ]
nDifferent = 1; % number of different roots
different_rts = rts(1) % initialize with the first value
for ri = 2:numel(rts)
    if( abs(different_rts(nDifferent)-rts(ri)) > 1e-7 ) % if the difference exceeds the threshold, add the next root
        nDifferent = nDifferent + 1;
        different_rts(nDifferent) = rts(ri);
    end
end
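As a side note (my addition, not part of the original answer): if your MATLAB release has uniquetol (introduced in R2015a), the same de-duplication can be done in one call; passing 'DataScale',1 makes the 1e-7 tolerance an absolute difference rather than a relative one:
rts = [ 0 1e-8 0.1 ];
% keep only roots that differ by more than 1e-7 in absolute value
different_rts = uniquetol(rts, 1e-7, 'DataScale', 1)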

Optimization of double loop

I have written code where I have to check whether the position (x,y) (stored in the matrix Mat) is inside a circular object centered at (posx,posy). If so, the point gets a value val; otherwise it stays zero.
My code looks like this, but it is generally advised NOT to use loops in MATLAB. Since I use not one but two loops, I was wondering whether there is a more efficient way of solving my problem.
Mat = zeros(300); %creates my coordinate system with zeros
...
for i = lowlimitx:highlimitx %variable boundary of my object
    for j = lowlimity:highlimity
        helpsqrdstnc = abs(posx-i)^2 + abs(posy-j)^2; %square distance from center
        if helpsqrdstnc < radius^2
            Mat(i,j) = val(helpsqrdstnc);
        end
    end
end
The usual way to optimize MATLAB code is to vectorize the operations, because built-in functions and operators are in general much faster. For your case this would leave you with this code:
Mat = zeros(300); %creates my coordinate system with zeros
...
xSq = abs(posx-(lowlimitx:highlimitx)).^2;
ySq = abs(posy-(lowlimity:highlimity)).^2;
helpsqrdstnc = bsxfun(@plus,xSq,ySq.'); %bsxfun to do [xSq(1)+ySq(1),xSq(2)+ySq(1),...; xSq(1)+ySq(2),xSq(2)+ySq(2)...; ...]
Mat(helpsqrdstnc < radius^2)= val(helpsqrdstnc(helpsqrdstnc < radius^2));
where helpsqrdstnc must be the same size as Mat. It may also be necessary to do a reshape here, but you will notice that yourself if you end up with a column vector.
This does of course assume that radius, posx and posy are constant, but reading the question this seems to be the case. However, I do not know exactly what val looks like, so I have not been able to test the code. I also think that val(helpsqrdstnc) is questionable, since it indexes by the squared distance, which does not necessarily need to be an integer.
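As a follow-up note (assuming a reasonably recent MATLAB, R2016b or newer): implicit expansion lets you drop bsxfun and add the row and column vectors directly; the same caveat about helpsqrdstnc matching the size of Mat still applies:
xSq = abs(posx-(lowlimitx:highlimitx)).^2;   % row vector
ySq = abs(posy-(lowlimity:highlimity)).^2;   % row vector, transposed below
helpsqrdstnc = xSq + ySq.';                  % implicit expansion builds the full matrix of squared distances
inside = helpsqrdstnc < radius^2;
Mat(inside) = val(helpsqrdstnc(inside));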

Matlab -- random walk with boundaries, vectorized

Suppose I have a vector J of jump sizes and an initial starting point X_0. Also I have boundaries 0 and B (assume 0 < X_0 < B). I want to do a random walk where X_i = [min(X_{i-1} + J_i, B)]^+ (the positive part). Basically, if it goes over a boundary, it is set equal to the boundary. Does anyone know a vectorized way to do this? The current way I am doing it consists of doing cumsums, then finding places where a boundary is violated, then starting from there and repeating the cumsum calculation, and so on until I stop violating the boundaries. It works when the boundaries are rarely hit, but if they are hit all the time, it basically becomes a for loop.
In the code below, I am doing this across many samples. To 'fix' the ones that go out of the boundary, I have to loop through the samples to check... (I don't think there is a vectorized 'find').
% X_init is a row vector describing initial resource values to use for
% each sample
% J is matrix where each col is a sequence of Jumps (columns = sample #)
% In this code the jumps are subtracted, but same thing
X_intvl = repmat(X_init,NumJumps,1) - cumsum(J);
X = [X_init; X_intvl];
for sample = 1:NumSamples
    k = find(or(X_intvl(:,sample) > B, X_intvl(:,sample) < 0),1);
    while(~isempty(k))
        change = X_intvl(k-1,sample) - X_intvl(k,sample);
        X_intvl(k:end,sample) = X_intvl(k:end,sample)+change;
        k = find(or(X_intvl(:,sample) > B, X_intvl(:,sample) < 0),1);
    end
end
Interesting question (+1).
I faced a similar problem a while back, although slightly more complex as my lower and upper bound depended on t. I never did work out a fully-vectorized solution. In the end, the fastest solution I found was a single loop which incorporates the constraints at each step. Adapting the code to your situation yields the following:
%# Set the parameters
LB = 0; %# Lower bound
UB = 5; %# Upper bound
T = 100; %# Number of observations
N = 3; %# Number of samples
X0 = (1/2) * (LB + UB); %# Arbitrary start point halfway between LB and UB
%# Generate the jumps
Jump = randn(N, T-1);
%# Build the constrained random walk
X = X0 * ones(N, T);
for t = 2:T
    X(:, t) = max(min(X(:, t-1) + Jump(:, t-1), UB), LB);
end
X = X';
I would be interested in hearing if this method proves faster than what you are currently doing. I suspect it will be for cases where the constraint is binding in more than one or two places. I can't test it myself, as the code you provided is not a "working" example, i.e. I can't just copy and paste it into Matlab and run it, since it depends on several variables for which example (or simulated) values are not provided. I tried adapting it myself, but couldn't get it to work properly.
UPDATE: I just switched the code around so that observations are indexed on columns and samples are indexed on rows, and then I transpose X in the last step. This will make the routine more efficient, since Matlab allocates memory for numeric arrays column-wise - hence it is faster when performing operations down the columns of an array (as opposed to across the rows). Note, you will only notice the speed-up for large N.
FINAL THOUGHT: These days, the JIT accelerator is very good at making single loops in Matlab efficient (double loops are still pretty slow). Therefore personally I'm of the opinion that every time you try and obtain a fully-vectorized solution in Matlab, ie no loops, you should weigh up whether the effort involved in finding a clever solution is worth the slight gains in efficiency to be made over an easier-to-obtain method that utilizes a single loop. And it is important to remember that fully-vectorized solutions are sometimes slower than solutions involving single loops when T and N are small!
I'd like to propose another vectorized solution.
So, first we should set the parameters and generate random jumps. I used a similar set of parameters to Colin T Bowers's:
% Set the parameters
LB = 0; % Lower bound
UB = 20; % Upper bound
T = 1000; % Number of observations
N = 3; % Number of samples
X0 = (1/2) * (UB + LB); % Arbitrary start point halfway between LB and UB
% Generate the jumps
Jump = randn(N, T-1);
But I changed the generation code:
% Generate initial data without bounds
X = cumsum(Jump, 2);
% Apply bounds
Amplitude = UB - LB;
nsteps = ceil( max(abs(X(:))) / Amplitude - 0.5 );
for ii = 1:nsteps
    ind = abs(X) > (1/2) * Amplitude;
    X(ind) = Amplitude * sign(X(ind)) - X(ind);
end
% Shifting X
X = X0 + X;
So, instead of a for loop I'm using the cumsum function with smart post-processing.
N.B. This solution is significantly slower than Colin T Bowers's for tight bounds (Amplitude < 5), but for loose bounds (Amplitude > 20) it is much faster.