I am trying to use the preconditioned conjugate gradient method (pcg) in MATLAB to speed things up; I am using this iterative method because the backslash operator was too time consuming. However, I am having some issues. For example, I want to solve the system of linear equations
Ax = B
where A is a sparse positive definite matrix and B is a matrix as well. In MATLAB I can do that simply by
x = A\B
However, if I use the pcg function, I have to loop over all the columns of B and solve each system individually:
x(:,i) = pcg(A, B(:,i));
This loop takes more time than x = A\B. If I consider just a single column b instead of the matrix B, then pcg is faster than the backslash operator. However, if I consider the whole matrix B, then pcg is slower than the backslash operator, so there is no point in using pcg.
Any suggestions?
When using the method suggested by MattJ, I get the following error:
Error using iterapp (line 60)
user supplied function ==>
@(x)reshape(repmat(A*x,1,nb),[],1)
failed with the following error:
Inner matrix dimensions must agree.
Error in pcg (line 161)
r = b - iterapp('mtimes',afun,atype,afcnstr,x,varargin{:});
I think we need to see more data on your timing tests and the dimensions/sparsities of A and B, and to better understand why pcg is faster than mldivide. However, you can implement what you're after this way:
[ma,na] = size(A);
[mb,nb] = size(B);
afun = @(x) reshape(A*reshape(x,na,[]),[],1); % applies A to every column at once
X = pcg(afun, B(:));
X = reshape(X, na, nb);
However, if I consider the whole matrix B, then pcg is slower than
backslash operator. So there is no point of using pcg.
That does make a certain amount of sense. When backslash solves the first set of equations A*x = B(:,1), it can recycle pieces of its analysis for the later columns B(:,i), e.g., if it performs an LU decomposition of A.
Conversely, all the work that PCG does for the different B(:,i) is independent, so it may well not make sense to use PCG here. The one exception is if each B(:,i+1) is similar to B(:,i), i.e., if the columns of B change in a gradual, continuous manner. If so, you should run PCG in a loop as you've been doing, but use the i-th solution x(:,i) to initialize PCG in the next iteration of the loop. That will cut down on the total amount of work PCG must perform.
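A minimal sketch of that warm-start loop (assuming A is n-by-n symmetric positive definite; the tolerance and iteration cap are placeholder values you would tune for your problem):

```matlab
[n, nb] = size(B);
X = zeros(n, nb);
tol = 1e-6;          % assumed tolerance
maxit = 200;         % assumed iteration cap
x0 = zeros(n, 1);    % cold start for the first column
for i = 1:nb
    % pcg's 7th argument is the initial guess; reuse the previous solution
    X(:,i) = pcg(A, B(:,i), tol, maxit, [], [], x0);
    x0 = X(:,i);
end
```

The closer consecutive columns of B are to each other, the fewer iterations each warm-started solve should need.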
I encountered some numerical questions when running a simulation in MATLAB. Here are the questions:
I found that A*A' (a matrix times its transpose) is not guaranteed to be symmetric in MATLAB. Can I know what the reason is? Also, I will have A*C*A', where C is a symmetric matrix, and I would like to keep A*C*A' symmetric. Is there any method to fix the numerical difference created by the transpose operation?
I implemented a for loop in MATLAB to compute a set of matrices. A small numerical difference (around 10^(-10)) in each round accumulates into the next run, and the computation finally diverges after around 30 rounds. Is there any method to fix the small error in each run without affecting the result?
Thank you for reading my questions!
"I found that A*A' (a matrix times its transpose) is not guaranteed to be symmetric in MatLab."
I would dispute that statement as written. The MATLAB parser is smart enough to recognize that the operands of A*A' are the same and calls a symmetric BLAS routine in the background to do the work, then copies one triangle into the other, resulting in an exactly symmetric result. Where one usually gets into trouble is by coding something the parser cannot recognize. E.g.,
A = whatever;
B = whatever;
X = A + B;
(A+B) * (A+B)' <-- MATLAB parser will call generic BLAS routine
X * X' <-- MATLAB parser will call symmetric BLAS routine
In the first matrix multiply above, the MATLAB parser may not be smart enough to recognize the symmetry, so a generic matrix-multiply BLAS routine (e.g., dgemm) could be called to do the work, and the result is not guaranteed to be exactly symmetric. But in the second matrix multiply, the MATLAB parser does recognize the symmetry and calls a symmetric BLAS matrix-multiply routine.
For the A*C*A' case, I don't know of any method to force MATLAB to generate an exactly symmetric result. You could manually copy one resulting triangle into the other after the fact. I suppose you could also factor C into two parts X*X' and then regroup, but that seems like too much work for what you are trying to do.
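A minimal sketch of the "copy one triangle into the other" fix mentioned above (A and C here are just illustrative random matrices; averaging with the transpose is an equivalent common alternative):

```matlab
A  = rand(4, 6);
C0 = rand(6);
C  = (C0 + C0')/2;          % make C exactly symmetric to start with

M = A*C*A';                 % may differ from M' in the last few bits
Msym  = (M + M')/2;         % force exact symmetry by averaging
Msym2 = tril(M) + tril(M,-1)';  % or copy the lower triangle into the upper

isequal(Msym, Msym')        % returns logical 1 (true)
```

Since floating-point addition is commutative, (M + M')/2 is exactly equal to its own transpose, so the symmetrized matrix is safe to feed into routines (e.g., chol) that require exact symmetry.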
Hello, I need help plotting the equation below in MATLAB:
v = 10.0004 + 10.229*e^(-3*t)*sin(5.196*t - 257.856)
Here is what I have, but I keep getting an error:
t=[0:0.1:2];
v=10.0004+10.229*exp(t)*sin(5.196*t+257.856);
plot(t,v)
Error using *
Incorrect dimensions for matrix multiplication. Check that the number of columns in the first matrix matches the
number of rows in the second matrix. To perform elementwise multiplication, use '.*'.
Error in example (line 2)
v=10.0004+10.229*exp(t)*sin(5.196*t+257.856);
Because t is a vector, you cannot simply input it as you would a single scalar variable. You have to access each value individually, calculate the corresponding v, store that value, and move on. Rinse and repeat for each element.
This can be done with a for loop. Get the length of your time vector, which determines how many values you need to calculate, then let the loop run for that many elements. Make sure the loop counter is also used to index each element of v.
t = 0:0.1:2;
v = zeros(size(t));  % preallocate
% For each element (n) in t, compute the corresponding element of v.
for n = 1:length(t)
    v(n) = 10.0004 + 10.229*exp(-3*t(n))*sin(5.196*t(n) - 257.856);
end
plot(t,v)
As the loop makes clear, what we need is element-wise multiplication (a good keyword to remember). In other languages you might have to use the loop method; luckily, MATLAB has a dedicated operator for this: '.*'. You could therefore simply modify your code as follows:
t = [0:0.1:2];
v = 10.0004 + 10.229*exp(-3*t).*sin(5.196*t - 257.856);
plot(t,v)
Either method gives you the desired plot. I included the first to illustrate the underlying logic of what you're doing, and the second to simplify it with MATLAB's syntax. Hope this helps guide you in the future.
Best of luck out there.
I am using MATLAB. I have a function f(x) and I want to apply it to a set of values, so I wrote two pieces of code. The first is a simple for loop. At one point x0 inside this loop, I find that f(x0) displays as 1.0000, and f(x0) - 1 = -4.7684e-07.
My second code applies arrayfun to f(x). At the same input value x0, the result also displays as 1.0000, but arrayfun(f,x0) - 1 = 4.7684e-07!
The error 4.7684e-07 looks tiny, but the for loop gives me a number below 1 and arrayfun gives me a number above 1. This is a big difference in my work, as my subsequent computations hinge on whether this number is below or above 1: it is supposed to be a probability.
Now my question is: why does arrayfun have this problem? There are no random numbers in my code, so why does arrayfun produce a different result than the for loop? Which one should I trust? Is there a way to avoid this kind of precision problem? Note that all of my variables are of type single. Is this causing the problem?
Addition and multiplication are not associative under limited precision, so results depend on the order of evaluation. For example, (a+b+c+d) can give a different answer, up to floating-point round-off error, than (a+c+d+b). This is especially an issue in parallel computing, where the order of accumulation across multiple threads is not guaranteed.
You can either force the operation order or use higher precision, like double, to reduce this error.
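A quick sketch of the effect in single precision (the values are chosen purely to make the rounding visible):

```matlab
a = single(1e8);
b = single(-1e8);
c = single(1);

(a + b) + c   % = 1: a and b cancel exactly, then c is added
a + (b + c)   % = 0: b + c rounds back to -1e8 (single has ~7 digits), which cancels a
```

The same mathematical sum, evaluated in two orders, gives two different answers, which is exactly the kind of discrepancy a for loop and arrayfun can exhibit when their internal evaluation orders differ.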
I have a little piece of code that should implement cepstrum deconvolution for minimum-phase FIR filter design, but not being a MATLAB guy I'm struggling to understand it. Can someone help?
wn = [ones(1,m)
2*ones((n+odd)/2-1,m)
ones(1-rem(n,2),m)
zeros((n+odd)/2-1,m)];
y = real(ifft(exp(fft(wn.*real(ifft(log(abs(fft(x)))))))));
Mainly I don't understand the first line and the ".*" symbol in the second line; also, at some point there should probably be a conversion from the real to the complex domain in the second line, but I have no idea where. Any ideas?
In the first line you are constructing the matrix wn row by row: each line inside the brackets is a block of rows, stacked vertically.
The .* operator means element-wise multiplication; * alone would mean matrix multiplication. In fact, you should pay attention to the sizes of x and wn, which must be the same for the element-wise multiplication to make sense.
Actually, there isn't any conversion from real to complex in the second line. The functions log, fft, and ifft may simply return complex values, depending on their input.
You can access the MATLAB help with the commands help or doc (for example, doc ones produces the documentation of the ones function, which creates a matrix of ones of the size specified by its arguments).
To quickly summon the help when you're inspecting some code, you can use MATLAB's inline help by pressing F1 when the cursor is at the end of a function name (just before the parenthesis).
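To make the first line concrete, here is the wn stack for a small hypothetical case (n = 8, m = 1, so odd = 0). This is the usual cepstral weighting for minimum-phase reconstruction: keep the zeroth coefficient, double the positive quefrencies, and zero out the negative ones:

```matlab
n = 8; m = 1; odd = rem(n, 2);   % odd = 0 here
wn = [ones(1,m)                  % 1 row of ones  -> weight for c(0)
      2*ones((n+odd)/2-1,m)      % 3 rows of twos -> double positive quefrencies
      ones(1-rem(n,2),m)         % 1 row of ones  -> Nyquist term (n even)
      zeros((n+odd)/2-1,m)];     % 3 rows of zeros -> discard negative quefrencies
disp(wn')   % prints: 1 2 2 2 1 0 0 0
```

With m > 1 the same column of weights is replicated across all m columns, so wn.*(...) applies the weighting to each channel of x at once.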
I have a huge matrix A of dimension 900000x900000, and I have to solve the linear equation
Ax = b, where b is a column vector of size 900000x1.
I used MATLAB's backslash operator, A\b, to try to get x. However, it freezes and I can't get x; mostly I run out of memory. Even when I run it on a computer with more memory, it makes the system very slow and I have to wait a long time for the answer.
How can I solve this equation? My matrix is pretty sparse: its band is wide, but most of the elements are zero. b is a full vector. Any suggestions?
I did a project where we also worked with such large but fortunately very sparse matrices.
With matrices this large, you are pretty much lost with direct methods: you can never compute the inverse, because it would be a dense matrix that you could never store. Methods such as LU or Cholesky factorization are also quite expensive, because they create significant fill-in, i.e. they destroy zeros.
A viable alternative is to use iterative methods. If you know that your matrix is symmetric and positive definite, try the conjugate gradient method:
x = pcg(A, b); %# Computes a solution to Ax = b, with A symm. pos-def.
I would just give it a try and see whether the method converges. Proving the assumption of positive definiteness is not easy, I'm afraid.
If you do not get a solution, there are many more iterative methods. For example:
bicg - BiConjugate Gradient Method
bicgstab - BiConjugate Gradient Method (stabilized)
lsqr - Least Squares QR Method
gmres - Generalized Minimum Residual Method (I like this a lot)
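If plain pcg converges too slowly, a preconditioner often helps a lot. A hedged sketch using an incomplete Cholesky factorization as the preconditioner (this assumes A is stored as a sparse, symmetric positive-definite matrix; the tolerance and iteration cap are placeholders to tune):

```matlab
L = ichol(A);                 % incomplete Cholesky factor of sparse SPD A
tol = 1e-8;  maxit = 500;     % assumed values; adjust for your problem
[x, flag, relres, iter] = pcg(A, b, tol, maxit, L, L');
% flag == 0 means pcg converged to relative residual tol within maxit iterations
```

The factor L is itself sparse, so this stays within memory limits where a full Cholesky factorization would not; the analogous preconditioner for the nonsymmetric solvers above is ilu.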