Usage of roots and solve in MATLAB - matlab

I have an equation which goes like this:
a/k^2 - (w^2 - b)/(w^4 - k^2*c*w^2 - d) - (e*w^2 - f)/(w^4 - k^2*g*w^2 - h) = 0
Here a, b, c, d, e, f, g, h are constants. I want to vary k from, say, 0.6 to 10 with a 0.1 interval and find w. There are two ways to do this in MATLAB.
One way is to convert this equation into a polynomial of the form (something)*w^8 - (something else)*w^6 - ... - (something else again)*w^0 = 0 and then use the 'roots' command in MATLAB (Method 1).
The other way is to define symbolic variables and then solve directly. With this method you do not need to simplify the expression any further; you can use it in its original form (Method 2).
Both ways are shown in the script below:
%%% defining values
clear; clc;
a=0.1500;
b=0.20;
c=0.52;
d=0.5;
e=6;
f=30;
g=18;
h=2;
%% Method 1: varying k using roots
tic
i=0;
for k=.6:.1:10
i=i+1;
t8=a;
t7=0;
t6=-(1+e+a*(c+g))*(k^2) ;
t5=0;
t4=(k^2*(b+f+(c*e+g)*k^2)-a*(d+h-c*g*k^4));
t3=0;
t2=k^2*(d*(e+a*g)+h+a*c*h-(c*f+b*g)*k^2);
t1=0;
t0=(a*d*h)-(d*f+b*h)*k^2;
q=[t8 t7 t6 t5 t4 t3 t2 t1 t0];
r(i,:)=roots(q);
end
krho1(:,1)=.6:.1:10;
r_real=real(r);
r_img=imag(r);
dat1=[krho1 r_real(:,1) r_real(:,2) r_real(:,3) r_real(:,4) r_real(:,5) r_real(:,6) r_real(:,7) r_real(:,8)];
fnameout=('stack_using_roots.dat');
fid1=fopen(fnameout,'w+');
fprintf(fid1,'krho\t RR1\t RR2\t RR3\t RR4\t RR5\t RR6\t RR7\t RR8\t \r');
fprintf(fid1,'%6.4f %7.10f %7.10f %7.10f %7.10f %7.10f %7.10f %7.10f %7.10f \n',dat1');
fclose(fid1);
plot(krho1, r_real(:,1),krho1, r_real(:,2),krho1, r_real(:,3),krho1, r_real(:,4),krho1, r_real(:,5),krho1, r_real(:,6),krho1, r_real(:,7),krho1, r_real(:,8))
toc
%% Method 2: varying k using solve
tic
syms w k
i=0;
for k=.6:.1:10
i=i+1;
first=a/k^2;
second=(w^2-b)/(w^4-k^2*c*w^2-d) ;
third=(e*w^2-f)/(w^4-k^2*g*w^2-h);
n(i,:)=double(solve(first-second-third, w));
end
krho1(:,1)=.6:.1:10;
r_real=real(n);
r_img=imag(n);
dat1=[krho1 r_real(:,1) r_real(:,2) r_real(:,3) r_real(:,4) r_real(:,5) r_real(:,6) r_real(:,7) r_real(:,8)];
fnameout=('stack_using_solve.dat');
fid1=fopen(fnameout,'w+');
fprintf(fid1,'krho\t RR1\t RR2\t RR3\t RR4\t RR5\t RR6\t RR7\t RR8\t \r');
fprintf(fid1,'%6.4f %7.10f %7.10f %7.10f %7.10f %7.10f %7.10f %7.10f %7.10f \n',dat1');
fclose(fid1);
figure;
plot(krho1, r_real(:,1),krho1, r_real(:,2),krho1, r_real(:,3),krho1, r_real(:,4),krho1, r_real(:,5),krho1, r_real(:,6),krho1, r_real(:,7),krho1, r_real(:,8))
toc
Method 1 uses the 'roots' command and Method 2 uses symbolic variables and 'solve'. My questions are as follows:
You can see that the first section produces its plot quickly, whereas the second one takes much longer. Is there any way to increase the speed?
The plots of the two sections look very different, and you might be tempted to believe that I made a mistake while converting (a/k^2)-((w^2-b)/(w^4-k^2*c*w^2-d))-((e*w^2-f)/(w^4-k^2*g*w^2-h)) to the form (something)*w^8-(something else)*w^6-...-(something else again)*w^0. I can assure you that the conversion is correct. You can see what really happens if you compare any particular value of krho in the two dat files (stack_using_roots and stack_using_solve). For, let's say, krho = 3.6, the roots are the same in both dat files, but they are not written in the same order, which is why the plots look awkward. In short, while using the 'roots' command the solutions are given in an ordered format; while using 'solve', on the other hand, the order gets shifted randomly. What is really happening? Is there any way to get around this problem?
I have run the program with
i) syms w along with n(i,:)=double(solve(first-second-third==0, w));
ii) syms w k along with n(i,:)=double(solve(first-second-third==0, w));
iii) syms w k along with n(i,:)=double(solve(first-second-third, w));
In all three cases, the results seem to be the same. So what exactly has to be defined as symbolic? And when should the expression '==0' be used, and when can it be omitted?

Are there any ways to increase the speed?
Several. Some trivial speed improvements would come from preallocating your output arrays before the loop instead of growing them one row at a time. The big bottleneck is solve. Unfortunately, there isn't an obvious analytical solution to your problem without knowing k beforehand, so there's no obvious way to pull solve outside the for loop.
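For the first point, a minimal sketch of what preallocation could look like in Method 1 (same constants as above; kvals is just a name introduced here):
kvals=.6:.1:10;
r=zeros(numel(kvals),8); % preallocate: 8 roots per value of k
krho1=kvals.'; % column of k values, instead of krho1(:,1)=.6:.1:10 after the loop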
In short, while using the 'roots' command the solutions are given in an ordered format; on the other hand, while using 'solve', the order gets shifted randomly. Why is that?
It is not really getting "shifted". Your function is symmetric about w = 0. So, for every root r there is another root at -r. Every time you call solve, it gives you the first, second, third then fourth roots, and then the same thing but this time the roots are multiplied by -1.
Sometimes solve chooses to take out -1 as a common factor. In these cases, it first gives you the roots multiplied by -1, then the positive roots. Why it sometimes takes out -1, sometimes doesn't, I don't know, but in your case (since you don't care about the imaginary part) you can fix this by replacing double(solve(first-second-third, w)) with sort(real(double(solve(first-second-third, w)))). The order of the roots won't be the same as in Method 1, but you won't get the weird switching behaviour.
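Putting that together, a minimal sketch of the fixed Method 2 loop (same constants as above, only w symbolic):
syms w
i=0;
kvals=.6:.1:10;
n=zeros(numel(kvals),8); % preallocate: 8 roots per value of k
for k=kvals
i=i+1;
first=a/k^2;
second=(w^2-b)/(w^4-k^2*c*w^2-d);
third=(e*w^2-f)/(w^4-k^2*g*w^2-h);
n(i,:)=sort(real(double(solve(first-second-third, w)))); % sorting stops the root order from switching
end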
In all three cases, the results seem to be the same. Then what exactly has to be defined as symbolic? And when do we use the expression '==0' and when not?
syms w k vs. syms w doesn't make a difference because you redefine k as a numeric value (0.6, 0.7,... etc). Only w needs to be symbolic.
The reference page for solve dictates how the equation should be specified. If you scroll down to the section regarding the input variable eqns, it states
If any elements of eqns are symbolic expressions (without the right side), solve equates the element to 0.
This is why it makes no difference whether you write first-second-third==0 or first-second-third as the first input to solve.
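A quick way to convince yourself of this (a toy equation, not your system):
syms x
isequal(solve(x^2-4, x), solve(x^2-4==0, x)) % returns logical 1: the two calls give identical results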

Related

MATLAB Initial Objective Function Evaluation Error

I am trying to create a plot in MATLAB by iterating over values of a constant (A) with respect to a system of equations. My code is pasted below:
function F=Func1(X, A)
%X(1) is c
%X(2) is h
%X(3) is lambda
%A is technology (some constant)
%a is alpha
a=1;
F(1)=1/X(1)-X(3)
F(2)=X(3)*(X(1)-A*X(2)^a)
F(3)=-1/(24-X(2))-X(3)*A*a*X(2)^(a-1)
clear, clc
init_guess=[0,0,0]
for countA=1:0.01:10
result(countA,:)=[countA,fsolve(@Func1, init_guess)]
end
display('The Loop Has Ended')
display(result)
%plot(result(:,1),result(:,2))
In the first set of code, I define the system of equations I am attempting to solve. In the second set of lines, I write a loop which iterates through the values of A that I want, [1,10] with an increment of 0.01. Right now, I am just trying to get my loop code to work, but I keep getting this error:
"Failure in initial objective function evaluation. FSOLVE cannot continue."
I am not sure why this is the case. From what I understand, this is the result of my initial guess matrix not being the right size, but I believe it should be of size 3, as I am solving for three variables. Additionally, I'm fairly confident there's nothing wrong with how I'm referencing my Func1 code in my Loop code.
Any advice you all could provide would be sincerely appreciated. I've only been working on MATLAB for a few days, so if this is a rather trivial fix, pardon my ignorance. Thanks.
A couple of issues with your code:
1/X(1) in function Func1 is prone to division-by-zero errors. I would change it to 1/(X(1)+eps).
If countA increments by 0.01, it cannot be used as an index into result. Introduce a separate integer index ind to increment instead.
I have passed your constant A into the function through an anonymous function handle, to make clear which variables are the optimization variables.
Here is the updated code. I don't know if the results are reasonable or not:
init_guess=[0,0,0];
ind = 1;
for countA=1:0.01:10
result(ind,:)=[countA, fsolve(@(X) Func1(X,countA), init_guess)];
ind = ind+1;
end
display('The Loop Has Ended')
display(result)
%plot(result(:,1),result(:,2))
function F=Func1(X,A)
%X(1) is c
%X(2) is h
%X(3) is lambda
%A is technology (some constant)
%a is alpha
a=1;
F(1)=1/(X(1)+eps)-X(3);
F(2)=X(3)*(X(1)-A*X(2)^a);
F(3)=-1/(24-X(2))-X(3)*A*a*X(2)^(a-1);
end

why does a*b*a take longer than (a'*(a*b)')' when using gpuArray in Matlab scripts?

The code below performs the same operation on gpuArrays a and b in two different ways. The first part computes (a'*(a*b)')', while the second part computes a*b*a. The results are then verified to be the same.
%function test
clear
rng('default');rng(1);
a=sprand(3000,3000,0.1);
b=rand(3000,3000);
a=gpuArray(a);
b=gpuArray(b);
tic;
c1=gather(transpose(transpose(a)*transpose(a*b)));
disp(['time for (a''*(a*b)'')'': ' , num2str(toc),'s'])
clearvars -except c1
rng('default');
rng(1)
a=sprand(3000,3000,0.1);
b=rand(3000,3000);
a=gpuArray(a);
b=gpuArray(b);
tic;
c2=gather(a*b*a);
disp(['time for a*b*a: ' , num2str(toc),'s'])
disp(['error = ',num2str(max(max(abs(c1-c2))))])
%end
However, computing (a'*(a*b)')' is roughly 4 times faster than computing a*b*a. Here is the output of the above script in R2018a on an Nvidia K20 (I've tried different versions and different GPUs with similar behaviour).
>> test
time for (a'*(a*b)')': 0.43234s
time for a*b*a: 1.7175s
error = 2.0009e-11
Even more strangely, if the first and last lines of the above script are uncommented (to turn it into a function), then both take the longer amount of time (~1.7s instead of ~0.4s). Below is the output for this case:
>> test
time for (a'*(a*b)')': 1.717s
time for a*b*a: 1.7153s
error = 1.0914e-11
I'd like to know what is causing this behaviour, and how to perform a*b*a or (a'*(a*b)')' or both in the shorter amount of time (i.e. ~0.4s rather than ~1.7s) inside a MATLAB function rather than inside a script.
There seems to be an issue with multiplication of two sparse matrices on the GPU: sparse-by-full multiplication is more than 1000 times faster than sparse-by-sparse. A simple example:
str={'sparse*sparse','sparse*full'};
for ii=1:2
rng(1);
a=sprand(3000,3000,0.1);
b=sprand(3000,3000,0.1);
if ii==2
b=full(b);
end
a=gpuArray(a);
b=gpuArray(b);
tic
c=a*b;
disp(['time for ',str{ii},': ' , num2str(toc),'s'])
end
In your context, it is the last multiplication that causes the slowdown. To demonstrate, I replace the final a with a duplicate c and run the multiplication twice, once with c sparse and once with c converted to full:
str={'a*b*a','a*b*full(a)'};
for ii=1:2
%rng('default');
rng(1)
a=sprand(3000,3000,0.1);
b=rand(3000,3000);
rng(1)
c=sprand(3000,3000,0.1);
if ii==2
c=full(c);
end
a=gpuArray(a);
b=gpuArray(b);
c=gpuArray(c);
tic;
c1{ii}=a*b*c;
disp(['time for ',str{ii},': ' , num2str(toc),'s'])
end
disp(['error = ',num2str(max(max(abs(c1{1}-c1{2}))))])
I may be wrong, but my conclusion is that a*b*a involves multiplication of two sparse matrices (a and a again) and is not handled well, while the transpose() approach splits the process into a two-stage multiplication, neither stage of which multiplies two sparse matrices.
I got in touch with MathWorks tech support and Rylan finally shed some light on this issue. (Thanks Rylan!) His full response is below. The function-vs-script issue appears to be related to certain optimizations MATLAB applies automatically to functions (but not scripts) not working as expected.
Rylan's response:
Thank you for your patience on this issue. I have consulted with the MATLAB GPU computing developers to understand this better.
This issue is caused by internal optimizations done by MATLAB when encountering some specific operations like matrix-matrix multiplication and transpose. Some of these optimizations may be enabled specifically when executing a MATLAB function (or anonymous function) rather than a script.
When your initial code was being executed from a script, a particular matrix transpose optimization is not performed, which results in the 'res2' expression being faster than the 'res1' expression:
n = 2000;
a=gpuArray(sprand(n,n,0.01));
b=gpuArray(rand(n));
tic;res1=a*b*a;wait(gpuDevice);toc % Elapsed time is 0.884099 seconds.
tic;res2=transpose(transpose(a)*transpose(a*b));wait(gpuDevice);toc % Elapsed time is 0.068855 seconds.
However when the above code is placed in a MATLAB function file, an additional matrix transpose-times optimization is done which causes the 'res2' expression to go through a different code path (and different CUDA library function call) compared to the same line being called from a script. Therefore this optimization generates slower results for the 'res2' line when called from a function file.
To avoid this issue from occurring in a function file, the transpose and multiply operations would need to be split in a manner that stops MATLAB from applying this optimization. Separating each clause within the 'res2' statement seems to be sufficient for this:
tic;i1=transpose(a);i2=transpose(a*b);res3=transpose(i1*i2);wait(gpuDevice);toc % Elapsed time is 0.066446 seconds.
In the above line, 'res3' is being generated from two intermediate matrices: 'i1' and 'i2'. The performance (on my system) seems to be on par with that of the 'res2' expression when executed from a script; in addition the 'res3' expression also shows similar performance when executed from a MATLAB function file. Note however that additional memory may be used to store the transposed copy of the initial array. Please let me know if you see different performance behavior on your system, and I can investigate this further.
Additionally, the 'res3' operation shows faster performance when measured with the 'gputimeit' function too. Please refer to the attached 'testscript2.m' file for more information on this. I have also attached 'test_v2.m' which is a modification of the 'test.m' function in your Stack Overflow post.
Thank you for reporting this issue to me. I would like to apologize for any inconvenience caused by this issue. I have created an internal bug report to notify the MATLAB developers about this behavior. They may provide a fix for this in a future release of MATLAB.
Since you had an additional question about comparing the performance of GPU code using 'gputimeit' vs. using 'tic' and 'toc', I just wanted to provide one suggestion which the MATLAB GPU computing developers had mentioned earlier. It is generally good to also call 'wait(gpuDevice)' before the 'tic' statements to ensure that GPU operations from the previous lines don't overlap in the measurement for the next line. For example, in the following lines:
b=gpuArray(rand(n));
tic; res1=a*b*a; wait(gpuDevice); toc
if the 'wait(gpuDevice)' is not called before the 'tic', some of the time taken to construct the 'b' array from the previous line may overlap and get counted in the time taken to execute the 'res1' expression. This would be preferred instead:
b=gpuArray(rand(n));
wait(gpuDevice); tic; res1=a*b*a; wait(gpuDevice); toc
Apart from this, I am not seeing any specific issues in the way that you are using the 'tic' and 'toc' functions. However note that using 'gputimeit' is generally recommended over using 'tic' and 'toc' directly for GPU-related profiling.
I will go ahead and close this case for now, but please let me know if you have any further questions about this.
%testscript2.m
n = 2000;
a = gpuArray(sprand(n, n, 0.01));
b = gpuArray(rand(n));
gputimeit(@()transpose_mult_fun(a, b))
gputimeit(@()transpose_mult_fun_2(a, b))
function out = transpose_mult_fun(in1, in2)
i1 = transpose(in1);
i2 = transpose(in1*in2);
out = transpose(i1*i2);
end
function out = transpose_mult_fun_2(in1, in2)
out = transpose(transpose(in1)*transpose(in1*in2));
end
%test_v2.m
function test_v2
clear
%% transposed expression
n = 2000;
rng('default');rng(1);
a = sprand(n, n, 0.1);
b = rand(n, n);
a = gpuArray(a);
b = gpuArray(b);
tic;
c1 = gather(transpose( transpose(a) * transpose(a * b) ));
disp(['time for (a''*(a*b)'')'': ' , num2str(toc),'s'])
clearvars -except c1
%% non-transposed expression
rng('default');
rng(1)
n = 2000;
a = sprand(n, n, 0.1);
b = rand(n, n);
a = gpuArray(a);
b = gpuArray(b);
tic;
c2 = gather(a * b * a);
disp(['time for a*b*a: ' , num2str(toc),'s'])
disp(['error = ',num2str(max(max(abs(c1-c2))))])
%% sliced equivalent
rng('default');
rng(1)
n = 2000;
a = sprand(n, n, 0.1);
b = rand(n, n);
a = gpuArray(a);
b = gpuArray(b);
tic;
intermediate1 = transpose(a);
intermediate2 = transpose(a * b);
c3 = gather(transpose( intermediate1 * intermediate2 ));
disp(['time for split equivalent: ' , num2str(toc),'s'])
disp(['error = ',num2str(max(max(abs(c1-c3))))])
end
EDIT 2: I might have been right after all, see this other answer.
EDIT: They use MAGMA, which is column major. My answer does not hold; however, I will leave it here for a while in case it can help crack this strange behavior.
The answer below is wrong.
This is my guess; I cannot tell you 100% without knowing the code under MATLAB's hood.
Hypothesis: MATLABs parallel computing code uses CUDA libraries, not their own.
Important information
MATLAB is column major and CUDA is row major.
There is no such thing as a 2D matrix in memory, only a 1D array addressed with 2 indices.
Why does this matter? Well because CUDA is highly optimized code that uses memory structure to maximize cache hits per kernel (the slowest operation on GPUs is reading memory). This means a standard CUDA matrix multiplication code will exploit the order of memory reads to make sure they are adjacent. However, what is adjacent memory in row-major is not in column-major.
So, as someone writing software, there are two solutions to this:
Write your own column-major algebra libraries in CUDA
Take every input/output from MATLAB and transpose it (i.e. convert from column-major to row major)
They have done point 2, and assuming that there is a smart JIT compiler for the MATLAB Parallel Computing Toolbox (a reasonable assumption), in the second case it takes a and b, transposes them, does the maths, and transposes the output when you gather.
In the first case, however, you do not need to transpose the output, as it is internally already transposed, and the JIT catches this: instead of computing gather(transpose( XX )), it just skips the output transposition inside. The same goes for transpose(a*b). Note that transpose(a*b) = transpose(b)*transpose(a), so suddenly no transposes are needed at all (they are all internally skipped). A transposition is a costly operation.
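You can check that identity numerically (a small side sketch, independent of the GPU question):
p=rand(3); q=rand(3);
max(max(abs(transpose(p*q)-transpose(q)*transpose(p)))) % zero up to rounding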
Indeed, there is a weird thing here: making the code a function suddenly makes it slow. My best guess is that, because the JIT behaves differently in different situations, inside a function it doesn't catch all this transpose stuff and just performs all the operations anyway, losing the speed-up.
Interesting observation: on my PC, a*b*a takes the same time on the CPU as on the GPU.

Solving system of equations to gain desired step response

I need to obtain the values of a, b and c in the following equations so that the step response of the system matches that of the figure below.
x_dot = a*x + b*u;
y = c*x;
Where x_dot is the first derivative of x.
I have been trying to do this in Matlab and have so far got the following, using arbitrary values of a, b and c for testing purposes:
clc;
close all;
clear all;
a=1;
b=2;
c=3;
tspan = [0:0.01:12];
% x_dot = a*x + b*u;    % these analytic scribbles are not executable as
% x = (a*x^2)/2 + b*u*x; % written (x and u are undefined at this point);
% y = c*x;               % the ODE below encodes the model instead
f = @(t,x) [a*x(1)+b*x(2); c*x(1)];
[t, xa] = ode45(f,tspan,[0,0]);
plot(t,xa(:,1));
This certainly sounds like a parameter estimation problem, as already hinted. You want to minimise the error between the outcome modelled by your ODE and the values in your graph (fitting your three parameters a, b and c to the data).
A first step is to write an error function that takes the ODE output and measures how close it is to the data values (a sum of squared errors, for instance).
Then you have to search through a range of (a,b,c) values (which may be a large search space) and pick the set that minimises your error function (i.e. gets as close to your graph as possible).
Many search/optimisation strategies exist (e.g. genetic algorithms, etc.).
Please note that the parameters are real numbers (which includes negative values and extremely large or small values); the large search space is usually what makes these problems difficult to solve.
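To make that concrete, here is a minimal sketch of such a fit, assuming the target curve from the figure has been sampled into vectors tdata and ydata (hypothetical names), and using fminsearch as just one of many possible search strategies:
function pbest = fitABC(tdata, ydata)
% Sketch: fit [a; b; c] by minimising the sum of squared errors between
% the simulated response and the sampled target curve (tdata, ydata).
p0 = [-0.5; 0.2; -0.00000001]; % initial guess for [a; b; c]
pbest = fminsearch(@(p) stepError(p, tdata, ydata), p0);
end
function err = stepError(p, tdata, ydata)
f = @(t,x) [p(1)*x(1)+p(2)*x(2); p(3)*x(1)]; % same model as in the example below
[~, xa] = ode45(f, tdata, [0,10]); % same IC as in the example below
err = sum((xa(:,1)-ydata(:)).^2); % sum of squared errors
end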
Also, I think you have to be careful with initial conditions: e.g. [0,0] doesn't seem to lead to interesting results. (Try a = -0.5, b = 0.2 and c = -0.00000001, with an IC of [0,10] as below.)
clc;
close all;
clear all;
a=-0.5;
b=0.2;
c=-0.00000001;
tspan = [0:0.01:12];
f = @(t,x) [a*x(1)+b*x(2); c*x(1)];
[t, xa] = ode45(f,tspan,[0,10]);
plot(t,xa);
hold on
plot(t,4)
Here 10 is the starting point of the green line and the blue line starts at 0. What I would also note is that the ICs change the results, so there are many possible solutions for (a,b,c) given the ICs.
Looks interesting.. good luck.

Jacobi iteration doesn't end

I'm trying to implement the Jacobi iteration in MATLAB but am unable to get it to converge. I have looked online and elsewhere for working code for comparison, but am unable to find anything similar to my code that still works. Here is what I have:
function x = Jacobi(A,b,tol,maxiter)
n = size(A,1);
xp = zeros(n,1);
x = zeros(n,1);
k=0; % number of steps
while(k<=maxiter)
k=k+1;
for i=1:n
xp(i) = 1/A(i,i)*(b(i) - A(i,1:i-1)*x(1:i-1) - A(i,i+1:n)*x(i+1:n));
end
err = norm(A*xp-b);
if(err<tol)
x=xp;
break;
end
x=xp;
end
This just blows up no matter what A and b I use. It's probably a small error I'm overlooking but I would be very grateful if anyone could explain what's wrong because this should be correct but is not so in practice.
Your code is correct. The reason why it may not seem to work is that you are specifying systems that do not converge under Jacobi iteration.
To be specific (thanks to @Saraubh), this method will converge if your matrix A is strictly diagonally dominant. In other words, for each row i of the matrix, the sum of the absolute values of all the off-diagonal entries in that row must be less than the absolute value of the diagonal entry itself: for every row i, |A(i,i)| > sum of |A(i,j)| over all j ~= i.
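A one-line MATLAB check for strict diagonal dominance (a small sketch):
all(abs(diag(A)) > sum(abs(A),2) - abs(diag(A))) % true if every row is strictly diagonally dominant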
However, there are some systems that will converge with Jacobi, even if this condition isn't satisfied, but you should use this as a general rule before trying to use Jacobi for your system. It's actually more stable if you use Gauss-Seidel. The only difference is that you are re-using the solution of x and feeding it into the other variables as you progress down the rows. To make this Gauss-Seidel, all you have to do is change one character within your for loop. Change it from this:
xp(i) = 1/A(i,i)*(b(i) - A(i,1:i-1)*x(1:i-1) - A(i,i+1:n)*x(i+1:n));
To this (note the xp in the second term):
xp(i) = 1/A(i,i)*(b(i) - A(i,1:i-1)*xp(1:i-1) - A(i,i+1:n)*x(i+1:n));
Here are two examples that I will show you:
Where we specify a system that does not converge by Jacobi, but there is a solution. This system is not diagonally dominant.
Where we specify a system that does converge by Jacobi. Specifically, this system is diagonally dominant.
Example #1
A = [1 2 2 3; -1 4 2 7; 3 1 6 0; 1 0 3 4];
b = [0;1;-1;2];
x = Jacobi(A, b, 0.001, 40)
xtrue = A \ b
x =
1.0e+09 *
4.1567
0.8382
1.2380
1.0983
xtrue =
-0.1979
-0.7187
0.0521
0.5104
Now, if I used the Gauss-Seidel solution, this is what I get:
x =
-0.1988
-0.7190
0.0526
0.5103
Woah! It converged for Gauss-Seidel and not for Jacobi, even though the system isn't diagonally dominant. I may have an explanation for that, which I'll provide later.
Example #2
A = [10 -1 2 0; -1 -11 -1 3; 2 -1 10 -1; 0 3 -1 8];
b = [6;25;-11;15];
x = Jacobi(A, b, 0.001, 40);
xtrue = A \ b
x =
0.6729
-1.5936
-1.1612
2.3275
xtrue =
0.6729
-1.5936
-1.1612
2.3274
This is what I get with Gauss-Seidel:
x =
0.6729
-1.5936
-1.1612
2.3274
This certainly converged for both, and the system is diagonally dominant.
As such, there is nothing wrong with your code. You are just specifying a system that can't be solved using Jacobi. It's better to use Gauss-Seidel for iterative methods that revolve around this kind of solving. The reason why is because you are immediately using information from the current iteration and spreading this to the rest of the variables. Jacobi does not do this, which is the reason why it diverges more quickly. For Jacobi, you can see that Example #1 failed to converge, while Example #2 did. Gauss-Seidel converged for both. In fact, when they both converge, they're quite close to the true solution.
Again, you need to make sure that your systems are diagonally dominant so you are guaranteed to have convergence. Not enforcing this rule... well... you'll be taking a risk as it may or may not converge.
Good luck!
Though this does not point out the problem in your code, I believe that you are looking for the Numerical Methods: Jacobi File Exchange Submission.
%JACOBI Jacobi iteration for solving a linear system.
% Sample call
% [X,dX] = jacobi(A,B,P,delta,max1)
% [X,dX,Z] = jacobi(A,B,P,delta,max1)
It seems to do exactly what you describe.
Others have pointed out that not all systems converge with the Jacobi method, but they do not point out why. Actually, only a small subset of systems converges with the Jacobi method.
The convergence criterion is that the sum of the absolute values of all the (non-diagonal) coefficients in a row must be less than the absolute value of the coefficient at the diagonal position in that row. This criterion must be satisfied by all rows. You can read more at: Jacobi Method Convergence
Before you decide to use the Jacobi method, you must check whether this criterion is satisfied by your system. The Gauss-Seidel method has a slightly more relaxed convergence criterion, which allows you to use it for most Finite Difference type numerical methods.

Matlab, on the quest to find a simple way to control the output of a function

I have already asked a similar question here. This time I will try to be a lot more elaborate, considering that what still puzzles me most about Matlab is how to handle/control the output of a function, as in plotting or listing it.
I have the following function
function e = calcEulerSum2(n)
for i=1:n % step size still one, although left blank
e = 1;
e = e + sum(1./factorial(1:n)); % computes a vector of length n and takes
end % the factorial of its entries, then piecewise division
end
to approximate e using the Taylor summation method. I am now asked to compute the absolute error
abserror(n) := abs(calcEulerSum2(n) - exp(1))
for values of n = 10^0, 10^1, ..., 10^16.
My idea was to make the above function accept vector input (if you can call it that; I have read about vectorization on here as well). This seemed like the best approach, because I obviously want to evaluate the function for several values of n and then plot the results in a graph.
However, I believe that my understanding of MATLAB is too rudimentary as of now to find a simple solution to that problem.
Additional: With the help of this website I already managed to solve a similar problem, using a recursive definition of e to get a vector output that grows with each iteration:
function e = calcEulerSum3(n)
for i = n % here the idea is to let n be a continuous vector
e(1)=1; % base case
e(i+1)=e(i)+1/factorial(i); % recursive definition to approx e
end
This function now understands vector input, but only if the vector consists of consecutive integers, for example n = [1 2 3 4 5], so that the recursive iteration through the vector works. This time, however, my vector n as above would break this concept.
Question(s):
Are there simple ways to handle the above output abserror(n)? I have a feeling that I am overdoing it by trying to vectorize my functions.
Update: As suggested by @vish, the following function does a much better job for my purpose:
function e = calcEulerSum2(n) %Updated
for i=1:n % step size still one, although left blank
e = 1;
e = e + cumsum(1./factorial(1:n)); % computes a vector of length n and takes
end % the factorial of its entries, then piecewise division
end
the output of this program for n=3 would be
calcEulerSum2(3)
ans =
2.0000 2.5000 2.6667
This is great and indeed what I am looking for. But let's say I now want to plot the absolute error for several values of n; for the sake of argument I will choose smaller values (because I understand that asking for factorial(10^16) is unreasonable), but still many of them. So if I decide I want to print out the absolute error abserror(n) for the following values of n
n = [1 4 9 11 17 18 19 22 27 29 31 33]
how would I do that?
Not sure you are computing what you think you are computing.
The for loop is totally unnecessary, since you keep overwriting e at every iteration and you only return the last value.
factorial(1:3) == [1,2,6]
then you sum the reciprocals.
The first iteration of the loop factorial(1:1) goes nowhere.
Try
cumsum(1./factorial(1:7))
I think that's what you wanted.
Edit: factorial(10^5) is already far too big to be meaningful; I think going from n = 1 to n = 16 will give you good enough results (not 10^16).
2nd edit
run this
N=0:16;
res = exp(1) - cumsum(1./factorial(N))
see what happens
You can also plot it on a log scale:
semilogy(res)
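Following the same idea, one way to get the absolute error at an arbitrary set of n values (such as the vector above) is to index into the cumulative sum; the partial sum with terms 0..n sits at position n+1:
n = [1 4 9 11 17 18 19 22 27 29 31 33];
partial = cumsum(1./factorial(0:max(n))); % partial(k+1) = sum of terms 0..k
abserror = abs(partial(n+1) - exp(1)); % beyond n of about 17 this underflows to exactly 0
semilogy(n, abserror, 'o-')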