MATLAB: using the fixed-point method to find a root

I want to find a root of the following equation with an error of less than 0.05%:
3*x*tan(x) = 1
In MATLAB I've written this code to do so:
clc,close all
syms x;
x0 = 3.5
f= 3*x*tan(x)-1;
df = diff(f,x);
while (1)
x1 = 1 / 3*tan(x0)
%DIRV.. z= tan(x0)^2/3 + 1/3
er = (abs((x1 - x0)/x1))*100
if ( er <= 0.05)
break;
end
x0 = x1;
pause(1)
end
But it keeps running in an infinite loop with the error stuck at 200.00, and I don't know why.

Don't use while true, as that's usually uncalled for and prone to getting stuck in infinite loops, like here. Simply set a limit on the while instead:
while er > 0.05
%//your code
end
Additionally, to prevent getting stuck in an infinite loop you can use an iteration counter and set a maximum number of iterations:
ItCount = 0;
MaxIt = 1e5; %// maximum 100,000 iterations
while er > 0.05 && ItCount < MaxIt
%//your code
ItCount=ItCount+1;
end

I see four points of discussion that I'll address separately:
1) Why does the error seemingly saturate at 200.0 and the loop continue infinitely?
The fixed-point iterator, as written in your code, is finding the root of f(x) = x - tan(x)/3; in other words, it finds a value of x at which the graphs of x and tan(x)/3 cross. The only crossing the iteration can actually converge to is 0 (the other crossings are repelling). And, if you look at the iterates, the value of x1 is indeed approaching 0. Good.
The bad news is that you are also dividing by that value converging toward 0. While the value of x1 remains finite, in a floating point arithmetic sense, the division works but may become inaccurate, and er actually goes NaN after enough iterations because x1 underflowed below the smallest denormalized number in the IEEE-754 standard.
Why is er 200 before then? It is approximately 200 because the value of x1 is approximately 1/3 of the value of x0, since tan(x)/3 locally behaves as x/3 per its Taylor expansion about 0. So er = abs((x1 - x0)/x1)*100 is approximately abs(1 - 3)*100 = 200.
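If it helps to see this numerically, here is a quick throwaway snippet (my own illustrative sketch, not part of the original code) that prints the first few iterates of what the posted loop actually computes:
% What the posted loop really iterates: x1 = (1/3)*tan(x0), starting at 3.5
x0 = 3.5;
for k = 1:6
x1 = tan(x0)/3;                 % same value as the 1 / 3*tan(x0) in the question
er = abs((x1 - x0)/x1)*100;     % relative error in percent
fprintf('k = %d   x1 = %.6e   er = %.2f\n', k, x1, er);
x0 = x1;
end
After the first step, x1 shrinks by roughly a factor of 3 each iteration, so er settles near 200 and never drops below 0.05.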
2) Divisions-by-zero and relative orders-of-magnitude are why it is sometimes best to look at both the absolute and relative error measures, for the independent variable as well as the function value. If need be, even putting an extremely (relatively) small finite, constant value in the denominator of the relative calculation isn't entirely a bad thing in my mind (I remember seeing it in some numerical recipe books), but that's just a band-aid for robustness's sake that typically hides a more serious error.
3) This convergence is far different from Newton-Raphson iterations because the fixed-point iteration has absolutely no knowledge of slope and will converge to wherever the fixed point is (forgive the minor tautology), assuming it does converge. Unfortunately, if I remember correctly, fixed-point convergence is only guaranteed when the iterator is a contraction on an interval that it maps into itself; tan(x), with its poles, offers no such guarantee, so convergence is not assured since those pesky poles get in the way.
4) The function it appears you want to find the root of is f(x) = 3*x*tan(x)-1. A fixed-point iterator for that function would be x = 1/(3*tan(x)), i.e. x = cot(x)/3, which looks for the intersection of 3*tan(x) and 1/x. However, due to point (2), those iterators still behave badly since they are discontinuous.
A slightly different iterator, x = atan(1/(3*x)), should behave a lot better, since small values of x still produce a finite result because atan is continuous along the whole real line. The only drawback is that the iterates are confined to the interval (-pi/2,pi/2), but if it converges, I think the restriction is worth it.
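To make that concrete, here is a minimal sketch of the atan-based iterator (the variable names and the iteration cap are my own choices; the 0.05% tolerance is from the question):
x0 = 3.5;
tol = 0.05;                     % relative error in percent
er = Inf;
iter = 0;
iterMax = 1e4;
while er > tol && iter < iterMax
x1 = atan(1/(3*x0));            % g(x) = atan(1/(3*x)), continuous for x ~= 0
er = abs((x1 - x0)/x1)*100;     % percent relative change between iterates
x0 = x1;
iter = iter + 1;
end
x0                              % approximate root of 3*x*tan(x) - 1
If it converges, it should settle near x = 0.55 or so, where 3*x*tan(x) is indeed close to 1.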
Lastly, for any similar future coding endeavors, I do highly recommend @Adriaan's advice. If you would like a sort of compromise between the styles, most of my iterative functions are written with a semantic variable notDone, like this:
iter = 0;
iterMax = 1E4;
tol = 0.05;
notDone = 0.05 < er && iter < iterMax;
while notDone
%//your code
iter = iter + 1;
notDone = 0.05 < er && iter < iterMax;
end
You can add flags and all that jazz, but that format is what I frequently use.

I believe that the code below achieves what you are after using Newton's method for the convergence. Please leave a comment if I have missed something.
% find x: 3*x*tan(x) = 1
f = @(x) 3*x*tan(x)-1;
dfdx = @(x) 3*tan(x)+3*x*sec(x)^2;
tolerance = 0.05; % your value?
perturbation = 1e-2;
converged = 1;
x = 3.5;
f_x = f(x);
% Use Newton's method to find the root
count = 0;
err = 10*tolerance; % something bigger than tolerance to start
while (err >= tolerance)
count = count + 1;
if (count > 1e3)
converged = 0;
disp('Did not converge.');
break;
end
x0 = x;
dfdx_x = dfdx(x);
if (dfdx_x ~= 0)
% Avoid division by zero
f_x = f(x);
x = x - f_x/dfdx_x;
else
% Perturb x and go back to top of while loop
x = x + perturbation;
continue;
end
err = (abs((x - x0)/x))*100;
end
if (converged)
disp(['Converged to ' num2str(x,'%10.8e') ' in ' num2str(count) ...
' iterations.']);
end


How to setup equation that involves a sum from x=1 to infinity and loops?

I am getting confused about how to properly set up this equation to find a value of V(i,j). The end result would be plotting V over time. I understand that there need to be loops to make this equation work; however, I am lost when it comes to setting it up. Basically I am trying to take the sum from n=1 to infinity of (1-(-1)^n)/(n^4 *pi^4)*sin((n*pi*c*j)/L)*sin((n*pi*i)/L).
I originally thought that I should make it a while loop to increment n by 1 until I reach, say, 10 or so, just to get an idea of what the output would look like. All of the variables were unknown, and values were assigned just to see what the plot would look like.
I have written another code where the equation depends only on i and j. However, with this n term, I am thrown off. Any advice on setting up the equation would be great. Thank you.
L=10;
x=linspace(0,L,30);
t1= 50;
X=30;
p=1
c=t1/1000;
V=zeros(X,t1);
V(1,:)=0;
V(30,:)=0;
R=((4*p*L^3)/c);
n=1;
t=1:50;
while n < 10
for i=1:31
for j=1:50
V(i,j)=R*sum((1-(-1)^n)/(n^4 *pi^4)*sin((n*pi*c*j)/L)*sin((n*pi*i)/L));
end
end
n=n+1;
end
figure(1)
plot(V(i,j),t)
Various ways of doing so:
1) Computing the sum up to one Nmax in one shot:
Nmax = 30;
Vijn = @(i,j,n) R*((1-(-1)^n)/(n^4 *pi^4)*sin((n*pi*c*j)/L)*sin((n*pi*i)/L));
i = 1:31;
j = 1:50;
n = 1:Nmax;
[I,J,N] = ndgrid(i,j,n);
V = arrayfun(Vijn,I,J,N);
Vc = cumsum(V,3);
% now Vc(:,:,k) is sum_{n=1}^{k} V(i,j,n)
figure(1);clf;imagesc(Vc(:,:,end));
2) Looping indefinitely
n = 1;
V = 0;
i = 1:31;
j = 1:50;
[I,J] = meshgrid(i,j);
while true
V = V + R*((1-(-1)^n)/(n^4 *pi^4)*sin((n*pi*c*J)/L).*sin((n*pi*I)/L));
n = n + 1;
figure(1);clf;
imagesc(V);
title(sprintf('N = %d',n))
drawnow;
pause(0.25);
end
Note that in your example you won't need many terms, since:
Every second term is zero (for even n, the term 1-(-1)^n is zero).
The terms decay like 1/n^4. In norms: n=1 contributes ~2e4, n=3 contributes ~4e2, n=5 contributes ~5e1, n=7 contributes ~14, etc. Visually, there is a small difference between the n=1 term alone and n=1 plus n=3, but adding n=5 is barely noticeable.
Given that so few terms are needed, the first approach is probably the better one. Also, skip the even indices, as you don't need them.
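For example, approach (1) restricted to the odd terms might look like this (a sketch that reuses R, c and L from your script; Nmax is an arbitrary cutoff):
Nmax = 30;
Vijn = @(i,j,n) R*((1-(-1)^n)/(n^4 *pi^4)*sin((n*pi*c*j)/L)*sin((n*pi*i)/L));
i = 1:31;
j = 1:50;
n = 1:2:Nmax;                        % odd n only; the even terms are zero anyway
[I,J,N] = ndgrid(i,j,n);
V = sum(arrayfun(Vijn,I,J,N), 3);    % total contribution of the retained terms
figure(1); clf; imagesc(V);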

My Matlab code is not working (very simple yet odd 'if' statement issue)

for j = 1:4
n = (co(e(j,2),:) - co(e(j,1),:));
n = n./norm(n);
F = e(j,5).*e(j,4)./e(j,3).*-(dot(u,n)).*n
theta = acosd(dot(F,n)/(norm(F)*norm(n)))
if theta == 180
F2 = [F2; -sqrt(F(1)^2 + F(2)^2 + F(3)^2)]
else if theta == 0
F2 = [F2; sqrt(F(1)^2 + F(2)^2 + F(3)^2)]
end
end
end
Essentially, what the issue is that the first value of F (for j = 1) in the if statement is being ignored, even though the value of theta is 180. To check, I've done 'whos theta' and even wrote theta == 180 into the code for which it returned 0. For j = 2:4 though, the code works absolutely fine despite j = 2 yielding a theta value of 180 as well. It's almost as if it's skipping the if statement entirely for j = 1. I believe this is linked to the fact that F is the only matrix vector with three non-zero components, but still don't know how to go about solving this.
theta might be a double that is very close to 180 but not exactly equal to it, and 180.00000001 ~= 180. It might be better to compare against a given tolerance.
tol = 1e-3;
if abs(theta-180)<tol
...
end
MATLAB also has an internal floating-point accuracy, eps; it might be an interesting read.
Additionally, your 'else if' statement should be elseif, but that might just be a typo from the copy-paste.
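Putting both points together, the inner block might look something like this (a sketch reusing theta, F and F2 from your loop; tol is an arbitrary value you should pick to suit your data):
tol = 1e-3;
if abs(theta - 180) < tol            % theta effectively 180 degrees
F2 = [F2; -sqrt(F(1)^2 + F(2)^2 + F(3)^2)];
elseif abs(theta) < tol              % theta effectively 0 degrees
F2 = [F2; sqrt(F(1)^2 + F(2)^2 + F(3)^2)];
end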

Matlab : Help in entropy estimation of a discretized time series

This question is a continuation of a previous one: Matlab : Plot of entropy vs digitized code length.
I want to calculate the entropy of a random variable that is the discretized version (0/1) of a continuous random variable x. The random variable denotes the state of a nonlinear dynamical system called the Tent Map. Iterating the Tent Map yields a time series of length N.
The code should exit as soon as the entropy of the discretized time series becomes equal to the entropy of the dynamical system. It is known theoretically that the entropy of the system is log_2(2). The code exits, but the first 3 values of the entropy array are erroneous - entropy(1) = 1, entropy(2) = NaN and entropy(3) = NaN. I am scratching my head as to why this is happening and how I can get rid of it. Please help in correcting the code. Thank you.
clear all
H = log(2)
threshold = 0.5;
x(1) = rand;
lambda(1) = 1;
entropy(1,1) = 1;
j=2;
tol=0.01;
while(~(abs(lambda-H)<tol))
if x(j - 1) < 0.5
x(j) = 2 * x(j - 1);
else
x(j) = 2 * (1 - x(j - 1));
end
s = (x>=threshold);
p_1 = sum(s==1)/length(s);
p_0 = sum(s==0)/length(s);
entropy(:,j) = -p_1*log2(p_1)-(1-p_1)*log2(1-p_1);
lambda = entropy(:,j);
j = j+1;
end
plot( entropy )
It looks like one of your probabilities is zero. In that case you'd be trying to calculate 0*log(0) = 0*-Inf = NaN. The entropy should be zero in this case, so you can just check for this condition explicitly.
A couple of side notes: it looks like you're declaring H = log(2), but your post says the entropy is log_2(2). p_0 is always 1 - p_1, so you don't have to count everything up again. Growing the arrays dynamically is inefficient because MATLAB has to re-copy the entire contents at each step; you can speed things up by pre-allocating them (only worth it if you're going to be running for many timesteps).
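As a sketch of the explicit check (reusing s, entropy and j from your code; the helper name plogp is mine):
% -p*log2(p) with the 0*log2(0) case forced to 0
plogp = @(p) -p .* log2(p + (p == 0));
p_1 = sum(s == 1) / length(s);           % p_0 is simply 1 - p_1
entropy(:,j) = plogp(p_1) + plogp(1 - p_1);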

Solving for the square root by Newton's Method

y_initial = x
y_n approaches sqrt(x) as n -> infinity
There is an x input and a tol input. As long as |y^2 - x| > tol is true, compute y = 0.5*(y + x/y). How would I create a while loop that stops when |y^2 - x| <= tol? Every time through the loop the y value changes. In order to get this answer --->
>>sqrtx = sqRoot(25,100)
sqrtx =
7.4615
I wrote this so far:
function [sqrtx] = sqrRoot(x,tol)
n = 0;
x=0;%initialized variables
if x >=tol %skips all remaining code
return
end
while x <=tol
%code repeated during each loop
x = x+1 %counting code
end
That formula is using a modified version of Newton's method to determine the square root. y_n is the previous iteration and y_{n+1} is the current iteration. You just need to keep two variables for each; then, when the tolerance criterion is satisfied, you return the current iteration's output. You are also incrementing the wrong value: it should be n, not x. You also aren't computing the tolerance properly... read the question more carefully. You take the current iteration's output, square it, subtract the desired value x, take the absolute value and see if the result is less than the tolerance.
Also, you need to make sure the tolerance is small. Specifying the tolerance to be 100 will probably not allow the algorithm to iterate and give you the right answer. It may also be useful to see how long it took to converge to the right answer. As such, return n as a second output to your function:
function [sqrtx,n] = sqrRoot(x,tol) %// Change
%// Counts total number of iterations
n = 0;
%// Initialize the previous and current value to the input
sqrtx = x;
sqrtx_prev = x;
%// Until the tolerance has been met...
while abs(sqrtx^2 - x) > tol
%// Compute the next guess of the square root
sqrtx = 0.5*(sqrtx_prev + (x/sqrtx_prev));
%// Increment the counter
n = n + 1;
%// Set for next iteration
sqrtx_prev = sqrtx;
end
Now, when I run this code with x=25 and tol=1e-10, I get this:
>> [sqrtx, n] = sqrRoot(25, 1e-10)
sqrtx =
5
n =
7
The square root of 25 is 5... at least that's what I remember from maths class back in the day. It also took 7 iterations to converge. Not bad.
Yes, that is exactly what you are supposed to do: Iterate using the equation for y_{n+1} over and over again.
In your code you should have a loop like
while abs(y^2 - x) > tol
%// Calculate new y from the formula
end
Also note that tol should be small, as told in the other answer. The parameter tol actually tells you how inaccurate you want your solution to be. Normally you want more or less accurate solutions, so you set tol to a value near zero.
The correct way to solve this:
function sqrtx = sqRoot(x,tol)
sqrtx = x;                       % start the iteration from the input itself
while abs(sqrtx^2 - x) > tol     % repeat until the tolerance is met
sqrtx = 0.5*(sqrtx + x/sqrtx);   % Newton update for the square root
end
end

Matlab -- random walk with boundaries, vectorized

Suppose I have a vector J of jump sizes and an initial starting point X_0. Also I have boundaries 0, B (assume 0 < X_0 < B). I want to do a random walk where X_i = [min(X_{i-1} + J_i,B)]^+. (positive part). Basically if it goes over a boundary, it is made equal to the boundary. Anyone know a vectorized way to do this? The current way I am doing it consists of doing cumsums and then finding places where it violates a condition, and then starting from there and repeating the cumsum calculation, etc until I find that I stop violating the boundaries. It works when the boundaries are rarely hit, but if they are hit all the time, it basically becomes a for loop.
In the code below, I am doing this across many samples. To 'fix' the ones that go out of the boundary, I have to loop through the samples to check...(don't think there is a vectorized 'find')
% X_init is a row vector describing initial resource values to use for
% each sample
% J is matrix where each col is a sequence of Jumps (columns = sample #)
% In this code the jumps are subtracted, but same thing
X_intvl = repmat(X_init,NumJumps,1) - cumsum(J);
X = [X_init; X_intvl];
for sample = 1:NumSamples
k = find(or(X_intvl(:,sample) > B, X_intvl(:,sample) < 0),1);
while(~isempty(k))
change = X_intvl(k-1,sample) - X_intvl(k,sample);
X_intvl(k:end,sample) = X_intvl(k:end,sample)+change;
k = find(or(X_intvl(:,sample) > B, X_intvl(:,sample) < 0),1);
end
end
Interesting question (+1).
I faced a similar problem a while back, although slightly more complex as my lower and upper bound depended on t. I never did work out a fully-vectorized solution. In the end, the fastest solution I found was a single loop which incorporates the constraints at each step. Adapting the code to your situation yields the following:
%# Set the parameters
LB = 0; %# Lower bound
UB = 5; %# Upper bound
T = 100; %# Number of observations
N = 3; %# Number of samples
X0 = (1/2) * (LB + UB); %# Arbitrary start point halfway between LB and UB
%# Generate the jumps
Jump = randn(N, T-1);
%# Build the constrained random walk
X = X0 * ones(N, T);
for t = 2:T
X(:, t) = max(min(X(:, t-1) + Jump(:, t-1), UB), LB);
end
X = X';
I would be interested in hearing if this method proves faster than what you are currently doing. I suspect it will be for cases where the constraint is binding in more than one or two places. I can't test it myself, as the code you provided is not a "working" example, i.e. I can't just copy and paste it into Matlab and run it, since it depends on several variables for which example (or simulated) values are not provided. I tried adapting it myself, but couldn't get it to work properly.
UPDATE: I just switched the code around so that observations are indexed on columns and samples are indexed on rows, and then I transpose X in the last step. This will make the routine more efficient, since Matlab allocates memory for numeric arrays column-wise - hence it is faster when performing operations down the columns of an array (as opposed to across the rows). Note, you will only notice the speed-up for large N.
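If you want to convince yourself of the column-major point, a quick (machine-dependent) check along these lines should do it:
A = randn(5000);                                         % a reasonably large square matrix
tic; for k = 1:size(A,2), c = A(:,k); end; tcol = toc;   % walk down the columns
tic; for k = 1:size(A,1), r = A(k,:); end; trow = toc;   % walk across the rows
fprintf('columns: %.4f s, rows: %.4f s\n', tcol, trow);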
FINAL THOUGHT: These days, the JIT accelerator is very good at making single loops in Matlab efficient (double loops are still pretty slow). Therefore personally I'm of the opinion that every time you try and obtain a fully-vectorized solution in Matlab, ie no loops, you should weigh up whether the effort involved in finding a clever solution is worth the slight gains in efficiency to be made over an easier-to-obtain method that utilizes a single loop. And it is important to remember that fully-vectorized solutions are sometimes slower than solutions involving single loops when T and N are small!
I'd like to propose another vectorized solution.
So, first we should set the parameters and generate the random jumps. I used the same setup as Colin T Bowers (though with a wider bound and a longer series):
% Set the parameters
LB = 0; % Lower bound
UB = 20; % Upper bound
T = 1000; % Number of observations
N = 3; % Number of samples
X0 = (1/2) * (UB + LB); % Arbitrary start point halfway between LB and UB
% Generate the jumps
Jump = randn(N, T-1);
But I changed the generation code:
% Generate initial data without bounds
X = cumsum(Jump, 2);
% Apply bounds
Amplitude = UB - LB;
nsteps = ceil( max(abs(X(:))) / Amplitude - 0.5 );
for ii = 1:nsteps
ind = abs(X) > (1/2) * Amplitude;
X(ind) = Amplitude * sign(X(ind)) - X(ind);
end
% Shifting X
X = X0 + X;
So, instead of a for loop over time, I'm using the cumsum function with some smart post-processing.
N.B. This solution is significantly slower than Colin T Bowers's for tight bounds (Amplitude < 5), but for loose bounds (Amplitude > 20) it is much faster.
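If anyone wants to check the break-even point on their own machine, a rough comparison (reusing LB, UB, T, N, X0 and Jump from above; timings only, since the two snippets treat the boundary differently) could look like this:
tic;                                      % single-loop version (clamping)
X1 = X0 * ones(N, T);
for t = 2:T
X1(:, t) = max(min(X1(:, t-1) + Jump(:, t-1), UB), LB);
end
tLoop = toc;
tic;                                      % cumsum + fold-back version
X2 = cumsum(Jump, 2);
Amplitude = UB - LB;
nsteps = ceil( max(abs(X2(:))) / Amplitude - 0.5 );
for ii = 1:nsteps
ind = abs(X2) > (1/2) * Amplitude;
X2(ind) = Amplitude * sign(X2(ind)) - X2(ind);
end
X2 = X0 + X2;
tCumsum = toc;
fprintf('loop: %.4f s, cumsum: %.4f s\n', tLoop, tCumsum);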