I am trying to solve a large DAE system, coupled with the equations that calculate the sensitivity of the variables to a parameter. My problem is the Jacobian of the entire system: its calculation is pretty slow, and I would like to speed it up.
I am using numjac, in this form:
[Jx,FAC,G] = numjac(@(t,y)MODEL(t,y,X),tt,yy,dydt,jac_tol,FAC,0,JPat,G);
I want to vectorize the code, but I can't work out what that means here. As far as I understood, my code is already vectorized: t, y, and X go in and I get back a column vector of dy(i)/dt, i.e. F(t,y(i)). But if I tell numjac that my function is vectorized, I get an error:
Matrix dimensions must agree.
Error in numjac (line 192)
Fdiff = Fdel - Fty(:,ones(1,ng));
How can I properly vectorize it?
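For what it's worth, my understanding is that numjac's vectorized mode expects a function that accepts a matrix Y whose columns are state vectors and returns the matching columns of F, so that many perturbed evaluations happen in one call. A sketch of such a wrapper (my own naming; it still loops internally, so a real speed-up needs MODEL itself rewritten to operate on whole matrices):

% Wrap a one-column-at-a-time MODEL so it accepts a matrix of state
% vectors, the calling convention numjac's vectorized mode expects.
Fvec = @(t,Y) cell2mat(arrayfun(@(j) MODEL(t,Y(:,j),X), 1:size(Y,2), ...
                                'UniformOutput', false));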
I'm using MATLAB to try to solve 2 equations with 2 variables.
I define the two functions, f2(n_1,n_2) and f3(n_1,n_2), which both depend on f1(n_1,n_2); then I define the vectorized function G(n_1,n_2) which contains both of them.
Later I define the desired starting point and try to solve, but when running the code it raises an error which I don't fully understand.
The code:
clear, close all; clc
%Const
N0=25;
G1=1;G2=1;
a1=6;a2=3;
k1=1;k2=4;
%main
syms n_1 n_2
X_0=[-5;5];
f1=N0-a1.*n_1-a2.*n_2;
f2=f1.*G1.*n_1-k1.*n_1;
f3=f1.*G2.*n_2-k2.*n_2;
G=@(n_1,n_2) [f2;f3];
s = fsolve(G,X_0);
The error:
Error using fsolve (line 269)
FSOLVE requires all values returned by functions to be of data type double.
Error in Ex1_Q3_DavidS (line 37)
s = fsolve(G,X_0);
thanks
fsolve is a function that uses numerical methods to find the root of a numerical function.
A numerical function is, for example, f=@(x)x^2;. In MATLAB you can evaluate f() at any number and it will return a number, but there is no higher-order mathematical abstraction to it. This is, however, the fastest way to do maths on a computer, as a computer is not a higher intelligence, just a glorified calculator.
Some people, however, want to give higher intelligence to computers, and have coded very complex symbolic toolboxes that, with sets of rules, try to teach computers to think semi-like humans and solve symbolic equations, as you do on paper. To solve those equations, MATLAB provides a function called solve.
You are doing symbolic math, but using the numeric solver. That does not work; just use the symbolic solver for symbolic math.
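If you do want the numeric route instead, a minimal sketch is to convert the symbolic expressions into a numeric function handle with matlabFunction (names reused from the question):

% Build a numeric G that takes a single 2-element vector, which is the
% calling convention fsolve expects.
Gnum = matlabFunction([f2; f3], 'Vars', {[n_1; n_2]});
s = fsolve(Gnum, X_0);
% Or stay symbolic and use solve directly:
% sol = solve([f2 == 0, f3 == 0], [n_1, n_2]);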
I want to implement the equation
c = a*w*(sin(w*t) + b*sin(2*w*t))
where w is varying and a, b and c are all constants.
I have done it using an Algebraic Constraint block, but I am getting an error:
Trouble solving algebraic loop containing 'trial1/Algebraic Constraint1/Initial Guess' at time 0. Stopping simulation. There may be a singularity in the solution. If the model is correct, try reducing the step size (either by reducing the fixed step size or by tightening the error tolerances)
Please help with what might be wrong, or suggest other ways of solving the equation and getting a graph of w vs t (using a scope).
Try implementing the equation in this manner.
I have taken a=1, b=1 and w=1:
a = 1; b = 1; w = 1;                       % example constants
c = @(t) a*w*(sin(w*t) + b*sin(2*w*t));    % c as a function of t
t = linspace(-pi, pi, 1000);
figure
plot(t, c(t))
I have points for a given polynomial. I would like to integrate, preferably as a definite integral, but I believe the syntax of polyint doesn't allow this without some manipulation. Regardless, if I can just get it to integrate, I'll be able to take it from there.
dpt=coeffvalues(fitresult{4});
ppval=polyval(dpt,xx)
cpdt=coeffvalues(fitresult{2});
cpval=polyval(cpdt,xx)
pint=(ppval./cpval);
intp=polyint(pint);
I've tried doing this a couple of ways, one being fitting the results of the pint curve, finding the coefficients, and then using the polyint function. But no matter which way I do it, I always get the same three errors:
Error using ./
Matrix dimensions must agree.
Error in polyint (line 16)
pi = [p./(length(p):-1:1) k];
Error in ptintegrate97 (line 61)
intp=polyint(ptint);
Usually it's the first error that is causing the problem, but when I do size(ppval) and size(cpval), they are both 837x1, so I'm kind of lost. I'm new to MATLAB; sorry if this is a stupid question.
polyint won't work here, because it is expecting a series of polynomial coefficients, but you are providing a series of numbers that are the output of the previous calculation and have no relation to any polynomial coefficients whatsoever. The error you are getting is because the shape of pint is wrong, but even if it were right, you wouldn't get the answer you want.
You can choose to integrate pint numerically if you want; Simpson's rule on the pint values could certainly get you to a correct answer if your step size between points is small enough. Or you could return to doing a symbolic polynomial division in order to get an exact integral. I am not sure what exactly you are after, or what your requirements are.
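For the numeric route, a minimal sketch using the built-in trapezoidal rule (trapz) rather than Simpson's rule, assuming xx, ppval and cpval from the question are already in the workspace:

pint = ppval ./ cpval;         % pointwise values of the ratio
I_total = trapz(xx, pint);     % definite integral over the whole xx range
I_cum = cumtrapz(xx, pint);    % running integral, if partial values are needed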
I am trying to use the preconditioned conjugate gradient method in MATLAB to speed things up. I am using this iterative method because the backslash operator was too time-consuming. However, I am having some issues. For example, I want to solve the system of linear equations given by
Ax=B
where A is a sparse positive definite matrix and B is a matrix as well. In MATLAB I can do that simply by
x= A\B
However, if I use the pcg function, then I have to loop over all the columns of B and solve each one individually:
x(:,i)=pcg(A,B(:,i))
This loop takes more time than x=A\B. If I consider just a single column b instead of the matrix B, then pcg works faster than the backslash operator. However, for the whole matrix B, pcg is slower than the backslash operator, so there is no point in using pcg.
Any suggestions guys?
When using the method suggested by Mattj, I get the following error:
Error using iterapp (line 60)
user supplied function ==>
@(x)reshape(repmat(A*x,1,nb),[],1)
failed with the following error:
Inner matrix dimensions must agree.
Error in pcg (line 161)
r = b -
iterapp('mtimes',afun,atype,afcnstr,x,varargin{:});
I think we need to see more data on your timing tests and the dimensions/sparsities of A and B, to better understand why pcg is faster than mldivide in the single-column case. However, you can implement what you're after this way:
[ma,na] = size(A);
[mb,nb] = size(B);
% Treat the stacked unknown x as an na-by-nb matrix, multiply every
% column by A in one operation, then stack the result back into a vector.
afun = @(x) reshape(A*reshape(x,na,[]),[],1);
X = pcg(afun,B(:));      % one pcg call for all right-hand sides
X = reshape(X,na,nb);
However, if I consider the whole matrix B, then pcg is slower than the backslash operator. So there is no point of using pcg.
That does make a certain amount of sense. When backslash solves the first set of equations A*x=B(:,1), it can recycle pieces of its analysis for the later columns B(:,i), e.g., if it performs an LU decomposition of A.
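As a sketch of that reuse made explicit (this assumes a MATLAB release that has the decomposition object):

dA = decomposition(A);   % factor A once, e.g. Cholesky for a sparse SPD matrix
X  = dA \ B;             % each column solve then reuses the stored factors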
Conversely, all the work that PCG applies to the different B(:,i) is independent, so it very well might not make sense to use PCG. The one exception is if each B(:,i+1) is similar to B(:,i), in other words, if the columns of B change in a gradual, continuous manner. If so, then you should run PCG in a loop as you've been doing, but use the i-th solution x(:,i) to initialize PCG in the next iteration of the loop, as sketched below. That will cut down on the total amount of work PCG must perform.
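A minimal sketch of that warm-started loop (tol and maxit are assumed settings, adjust them to your problem):

tol = 1e-8; maxit = 500;              % assumed solver settings
nb = size(B,2);
X  = zeros(size(A,2), nb);
x0 = zeros(size(A,2), 1);             % cold start for the first column
for i = 1:nb
    % the previous column's solution seeds the next solve
    x0 = pcg(A, B(:,i), tol, maxit, [], [], x0);
    X(:,i) = x0;
end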
I'm not too familiar with MATLAB or computational mathematics, so I was wondering how I might solve an equation involving a sum of squares, where each term involves two vectors, one known and one unknown. This formula is supposed to represent the error, and I need to minimize that error. I think I'm supposed to use least squares, but I don't know much about it, and I'm wondering which function is best for doing that and which arguments would represent my equation. My teacher also mentioned something about taking derivatives, and he formed a matrix using derivatives, which confused me even more. Am I required to take derivatives?
The problem that you must be trying to solve is
min over beta of u'u = sum_i u_i^2, where u = y - X*beta; u is the error, y is the vector of dependent variables you are trying to explain, X is a matrix of independent variables, and beta is the vector you want to estimate.
Since sum_i u_i^2 is differentiable (and convex), you can find its minimum by taking its derivative and setting it equal to zero.
If you do that, you find that beta = inv(X'*X)*X'*y. This may be calculated using the MATLAB function regress (http://www.mathworks.com/help/stats/regress.html) or by writing this formula in MATLAB. However, you should be careful how you evaluate the inverse of (X'*X); see Most efficient matrix inversion in MATLAB.
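A minimal sketch of both routes, assuming X and y already exist with one row per observation:

beta = X \ y;            % QR-based least squares; avoids forming inv(X'*X) explicitly
% With the Statistics Toolbox:
% beta = regress(y, X);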