I am trying to solve a system with the Gauss-Seidel iterative method, but I also want to receive, as part of the answer, the iteration matrix that was used. I have this method:
function [x0,iter] = gaussSeidel(A,b,iterMax,tol)
D = diag(diag(A));      % diagonal part of A
Lower = -tril(A,-1);    % strictly lower part, negated
Upper = -triu(A,1);     % strictly upper part, negated
M = D - Lower;
N = Upper;
n = size(A,1);
x0 = ones(n,1);
iter = 1;
for i = 1:iterMax
    iter = i;
    x = M\(N*x0 + b);            % one Gauss-Seidel sweep
    normC = norm(x - x0, inf);
    x0 = x;
    if normC < tol
        break
    end
end
I want to know what is in the iteration matrix ((D-Lower)^(-1))*Upper, but for that I would seemingly have to calculate the inverse, and that is computationally expensive. Is there another way to get it?
"\" is an optimized way of solving linear constant coefficient systems by taking inverse.
If you want to see what is the inverse of the coefficient matrix, then you must use
inv(A)
instead of
A\b
There is no way around it! If you only need the N-th column of the inverse matrix, then you can instead use
I=eye(n);
A\I(:,N);
I only know of the following power iteration, but it needs to create a huge matrix A'*A when both the number of rows and the number of columns are pretty large, and A is a dense matrix as well. Is there any alternative to the power iteration method below? I have heard of the Krylov subspace method, but I am not familiar with it. In any case, I am looking for any method faster than the one below:
B = A'*A; % or B = A*A' if it is smaller
x = B(:,1); % example of starting point; x will converge to the largest eigenvector
x = x/norm(x);
for i = 1:200
    y = B*x;
    y = y/norm(y);
    % norm(x - y); % <- residual; you can use it to stop the iteration
    x = y;
end
n3 = sqrt(mean(B*x./x)) % translate eigenvalue of B to singular value of A
I checked MATLAB's svd command with a 100x100 randomly generated matrix; it is almost 5 times faster than your code.
s = svd(A);
n3 = s(1);
We were asked to define our own differential operators in MATLAB. I did that following a series of steps, and now we are supposed to use those differential operators to solve a boundary value problem:
-y'' + 2y' - y = x,   y(0) = y(1) = 0
My code was as follows; it computes the first and second derivatives:
h = 2;
x = 2:h:50;
y = x.^2;
n = length(x);
uppershift = 1;
U = diag(ones(n-abs(uppershift),1),uppershift);
lowershift = -1;
L = diag(ones(n-abs(lowershift),1),lowershift);
% the code above creates the upper and lower shift matrices
D = (U-L)/(2*h); %first differential operator
D2 = full(gallery('tridiag',n))/-(h^2); %second differential operator
d1 = D*y.'
d2 = D2*y.'
Then I changed it to the following after posting here and getting one response that encouraged the use of the identity matrix; however, I still seem to be getting nowhere.
h = 2;
n = 10;
uppershift = 1;
U = diag(ones(n-abs(uppershift),1),uppershift);
lowershift = -1;
L = diag(ones(n-abs(lowershift),1),lowershift);
D = (U-L)/(2*h); %first differential operator
D2 = full(gallery('tridiag',n))/-(h^2); %second differential operator
I = eye(n);
eqn = (-D2 + 2*D - I)*y == x
solve(eqn,y)
I am not sure how to proceed from here. Should I define y and x, or what exactly? I am clueless!
Because this is a numerical approximation to the solution of the ODE, you are seeking a numerical vector that represents the solution of this ODE from x=0 to x=1. Your boundary conditions mean that the solution is only valid between 0 and 1.
Also, this is now the reverse problem. In the previous post we did together, you knew what the input vector was, and a matrix-vector multiplication produced the output derivative operation on that input vector. Now you are given the output of the derivative and are seeking the original input. This involves solving a linear system of equations.
Essentially, your problem is now this:
YX = F
Y contains the coefficients from the matrix derivative operators you derived (an n x n matrix), X is the solution to the ODE (an n x 1 vector), and F is the function you are associating the ODE with (also an n x 1 vector); in our case, that is x. To form Y, you have pretty much done that already in your code: simply take each matrix operator (first and second derivative) and add them together with the proper signs and scales to respect the left-hand side of the ODE. BTW, your first-derivative and second-derivative matrices are correct. What is left is adding the -y term to the mix, and that is accomplished by -eye(n), as you found out in your code.
Once you formulate your Y and F, you can use the mldivide or \ operator and solve for X and get the solution to this linear system via:
X = Y \ F;
The above essentially solves the linear system of equations formed by Y and F and will be stored in X.
The first thing you need to do is define a vector of points going from x=0 to x=1. linspace is probably the most suitable where you can specify how many points we want. Let's assume 100 points for now:
x = linspace(0,1,100);
Therefore, h in our case is 1/99, since linspace places n points with spacing (b - a)/(n - 1). In general, if you want to solve from the starting point x = a up to the end point x = b with n points, the step size is h = (b - a)/(n - 1).
Now, we have to include the boundary conditions. This simply means that we know the beginning and ending of the solution of the ODE. This means that y(0) = y(1) = 0. As such, we make sure that the first row of Y has only the first column set to 1 and the last row of Y has only the last column set to 1, and we'll set the output position in F to both be 0. This symbolizes that we already know the solution at these points.
Therefore, your final code to solve is just:
%// Setup
a = 0; b = 1; n = 100;
x = linspace(a,b,n);
h = (b-a)/(n-1); %// grid spacing produced by linspace
%// Your code
uppershift = 1;
U = diag(ones(n-abs(uppershift),1),uppershift);
lowershift = -1;
L = diag(ones(n-abs(lowershift),1),lowershift);
D = (U-L)/(2*h); %first differential operator
D2 = full(gallery('tridiag',n))/-(h^2); %second differential operator
%// New code - Create differential equation matrix
Y = (-D2 + 2*D - eye(n));
%// Set boundary conditions on system
Y(1,:) = 0; Y(1,1) = 1;
Y(end,:) = 0; Y(end,end) = 1;
%// New code - Create F vector and set boundary conditions
F = x.';
F(1) = 0; F(end) = 0;
%// Solve system
X = Y \ F;
X should now contain your numerical approximation to the ODE in steps of h = 1/99, starting from x=0 up to x=1.
Now let's see what this looks like:
figure;
plot(x, X);
title('Solution to ODE');
xlabel('x'); ylabel('y');
You can see that y(0) = y(1) = 0 as per the boundary conditions.
Hope this helps, and good luck!
I'd like to compute kernel matrices efficiently for generic kernel functions in MATLAB. This means I need to compute k(x,y) for every row x of X and every row y of Y. Here is some MATLAB code that computes what I'd like, but it is rather slow:
function K = compute_kernel(k_func, X, Y)
    m = size(X,1);
    n = size(Y,1);
    K = zeros(m,n);
    for i = 1:m
        for j = 1:n
            K(i,j) = k_func(X(i,:)', Y(j,:)');
        end
    end
end
Is there any other approach to this problem, e.g. some bsxfun variant that calls a function on every row taken from X and Y?
pdist2(X, Y, dist_func) unfortunately computes dist_func(X, Y(i,:)) instead of dist_func(X(i,:), Y(i,:)), so the actual function I need is:
function K = compute_kernel(k_func, X, Y)
    % Efficient way to compute the kernel: let pdist2 handle one row of Y
    % against all rows of X at a time
    m = size(X,1);
    n = size(Y,1);
    K = zeros(m,n);
    for i = 1:n
        K(:,i) = pdist2(Y(i,:), X, k_func);
    end
end
It's not as nice as just using pdist2 directly, but it is still much more efficient than the previous case.
Have you tried pdist2 with a custom distance function?
P.S.
It is best not to use i and j as variable names in MATLAB, since they shadow the imaginary unit.
I am using the matlab code from this book: http://books.google.com/books/about/Probability_Markov_chains_queues_and_sim.html?id=HdAQdzAjl60C
Here is the Code:
function [pi] = GE(Q)
A = Q';
n = size(A,1);
% forward elimination (no pivoting)
for i = 1:n-1
    for j = i+1:n
        A(j,i) = -A(j,i)/A(i,i);
    end
    for j = i+1:n
        for k = i+1:n
            A(j,k) = A(j,k) + A(j,i)*A(i,k);
        end
    end
end
% back substitution
x(n) = 1;
for i = n-1:-1:1
    for j = i+1:n
        x(i) = x(i) + A(i,j)*x(j);
    end
    x(i) = -x(i)/A(i,i);
end
pi = x/norm(x,1); % normalize to a probability vector
Is there faster code that I am not aware of? I am calling this function millions of times, and it takes too much time.
MATLAB has a whole set of built-in linear algebra routines - type help slash, help lu or help chol to get started with a few of the common ways to efficiently solve linear equations in MATLAB.
Under the hood these functions are generally calling optimised LAPACK/BLAS library routines, which are generally the fastest way to do linear algebra in any programming language. Compared with a "slow" language like MATLAB it would not be unexpected if they were orders of magnitude faster than an m-file implementation.
Hope this helps.
Unless you are specifically looking to implement your own, you should use Matlab's backslash operator (mldivide) or, if you want the factors, lu. Note that mldivide can do more than Gaussian elimination (e.g., it does linear least squares, when appropriate).
The algorithms used by mldivide and lu are from C and Fortran libraries, and your own implementation in Matlab will never be as fast. If, however, you are determined to use your own implementation and want it to be faster, one option is to look for ways to vectorize your implementation (maybe start here).
One other thing to note: the implementation from the question does not do any pivoting, so its numerical stability will generally be worse than an implementation that does pivoting, and it will even fail for some nonsingular matrices.
Different variants of Gaussian elimination exist, but they are all O(n^3) algorithms. Whether one approach is better than another depends on your particular situation and is something you would need to investigate further.
function x = naiv_gauss(A,b)
n = length(b); x = zeros(n,1);
for k = 1:n-1 % forward elimination
    for i = k+1:n
        xmult = A(i,k)/A(k,k);
        for j = k+1:n
            A(i,j) = A(i,j) - xmult*A(k,j);
        end
        b(i) = b(i) - xmult*b(k);
    end
end
% back substitution
x(n) = b(n)/A(n,n);
for i = n-1:-1:1
    s = b(i); % use s rather than sum, which shadows the builtin
    for j = i+1:n
        s = s - A(i,j)*x(j);
    end
    x(i) = s/A(i,i);
end
end
Let's assume Ax=d
Where A and d are known matrices.
We want to represent "A" as "LU" using the LU decomposition function built into MATLAB, thus:
LUx = d
This can be done in matlab following:
[L,U] = lu(A)
which returns an upper triangular matrix U and a permuted lower triangular matrix L such that A = LU. The return value L is a product of lower triangular and permutation matrices. (https://www.mathworks.com/help/matlab/ref/lu.html)
Then, letting y = Ux, we have Ly = d. Since x is unknown, y is unknown too; once we know y, we can find x as follows:
y=L\d;
x=U\y
and the solution is stored in x.
This is the simplest way to solve a system of linear equations, provided the matrix is not singular (i.e., the determinant of A is not zero); otherwise, the quality of the solution degrades and the results might be wrong.
If the matrix is singular, and thus cannot be inverted, another method should be used to solve the system of linear equations.
For the naive approach (aka without row swapping) for an n by n matrix:
function [A, B] = naiveGauss(A, B)
% find the size
n = size(A,1);
% We have 3 steps for a 4x4 matrix, so we have
% n-1 steps for an nxn matrix
for k = 1 : n-1
    for i = k+1 : n
        % step 1: save the multiplier before A(i,k) is overwritten,
        % then zero out everything below the pivot
        % fprintf('multi = %d / %d\n', A(i,k), A(k,k))
        mult = A(i,k)/A(k,k);
        for j = k : n
            A(i,j) = A(i,j) - mult * A(k,j);
        end
        B(i) = B(i) - mult * B(k);
    end
end
end
function Sol = GaussianElimination(A, b)
n = size(A, 1);
Aug = [A b];                 % augmented matrix
for j = 1:n-1                % forward elimination
    for i = j+1:n
        Aug(i,:) = Aug(i,:) - (Aug(i,j)/Aug(j,j))*Aug(j,:);
    end
end
Sol = zeros(n, 1);           % back substitution
for i = n:-1:1
    Sol(i) = (Aug(i,end) - Aug(i,i+1:n)*Sol(i+1:n))/Aug(i,i);
end
disp(Sol);
end
I think you can use the MATLAB function rref:
[R,jb] = rref(A,tol)
It produces a matrix in reduced row echelon form.
In my case it wasn't the fastest solution.
The solution below was faster in my case by about 30 percent.
function C = gauss_elimination(A,B)
i = 1;                          % loop variable
X = [A B];
[nX, mX] = size(X);             % size of the augmented matrix
while i <= nX                   % start of loop
    if X(i,i) == 0              % check whether the diagonal element is zero
        disp('Diagonal element zero')
        return
    end
    X = elimination(X,i,i);     % proceed if the diagonal element is non-zero
    i = i + 1;
end
C = X(:,mX);

function X = elimination(X,i,j)
% Pivoting on the (i,j) element of matrix X and eliminating the other
% column elements to zero
nX = size(X,1);
a = X(i,j);
X(i,:) = X(i,:)/a;
for k = 1:nX                    % loop to reach the reduced form
    if k == i
        continue
    end
    X(k,:) = X(k,:) - X(i,:)*X(k,j);
end
I have a double summation over m = 1:M and n = 1:N for a polar point with coordinates rho, phi, z: the sum of cos(n*z) * besselj(m-1, n*rho) * cos(m*phi) over all m and n.
I have written a vectorized version of it:
N = 10;
M = 10;
n = 1:N;
m = 1:M;
rho = 1;
phi = 1;
z = 1;
summ = cos(n*z) * besselj(m'-1, n*rho) * cos(m*phi)';
Now I need to rewrite this function to accept vectors (columns) of coordinates rho, phi, z. I tried arrayfun, cellfun, and a simple for loop - they all work too slowly for me. I know about "MATLAB array manipulation tips and tricks", but as a MATLAB beginner I can't understand repmat and the other functions.
Can anybody suggest a vectorized solution?
I think your code is already well vectorized (for n and m). If you want this function to also accept an array of rho/phi/z values, I suggest you simply process the values in a for-loop, as I doubt any further vectorization will bring significant improvements (plus the code will be harder to read).
Having said that, in the code below I tried to vectorize the part where you compute the various components being multiplied, {row N} * {matrix NxM} * {col M} = {scalar}, by making a single call to the BESSELJ and COS functions (I stack each of the rows/matrices/columns along the third dimension). Their multiplication is still done in a loop (ARRAYFUN to be exact):
%# parameters
N = 10; M = 10;
n = 1:N; m = 1:M;
num = 50;
rho = 1:num; phi = 1:num; z = 1:num;
%# straightforward FOR-loop
tic
result1 = zeros(1,num);
for i=1:num
    result1(i) = cos(n*z(i)) * besselj(m'-1, n*rho(i)) * cos(m*phi(i))';
end
toc
%# vectorized computation of the components
tic
a = cos( bsxfun(@times, n, permute(z(:),[3 2 1])) );
b = besselj(m'-1, reshape(bsxfun(@times,n,rho(:))',[],1)');
b = permute(reshape(b',[length(m) length(n) length(rho)]), [2 1 3]);
c = cos( bsxfun(@times, m, permute(phi(:),[3 2 1])) );
result2 = arrayfun(@(i) a(:,:,i)*b(:,:,i)*c(:,:,i)', 1:num);
toc
%# make sure the two results are the same
assert( isequal(result1,result2) )
I did another benchmark test using the TIMEIT function (it gives fairer timings). The result agrees with the previous one:
0.0062407   %# elapsed time (seconds) for my solution
0.015677    %# elapsed time (seconds) for the FOR-loop solution
Note that as you increase the size of the input vectors, the two methods will start to have similar timings (the FOR-loop even wins on some occasions).
You need to create two matrices, say m_ and n_ so that by selecting element i,j of each matrix you get the desired index for both m and n.
Most MATLAB functions accept matrices and vectors and compute their results element by element. So to produce a double sum, you compute all elements of the sum in parallel by f(m_, n_) and sum them.
In your case (note that the .* operator performs element-wise multiplication of matrices)
N = 10;
M = 10;
n = 1:N;
m = 1:M;
rho = 1;
phi = 1;
z = 1;
% N rows x M columns for each matrix
% n_ - all columns are identical
% m_ - all rows are identical
n_ = repmat(n', 1, M);
m_ = repmat(m , N, 1);
element_nm = cos(n_*z) .* besselj(m_-1, n_*rho) .* cos(m_*phi);
sum_all = sum( element_nm(:) );