I hope I am on topic here. I'm asking since the FAQ page says a question concerning (among other things) a software algorithm is welcome :) So here it goes:
I need to solve a system of ODEs of the form $\dot x = A(t)\,x$. The matrix A may change and is passed by name as a string in the function call (Calc_EDS_v2('Sys_EDS_a',...)).
Then I'm using ode45 in a loop to find my x:
function [intervals, testing] = EDS_calc_v2(smA,options,debug)
[..]
for t=t_start:t_step:t_end
[Te,Qe]=func_int(@intQ_2_v2,[t,t+t_step],q);
q=Qe(end,:);
[..]
end
[..]
with func_int being ode45 and @intQ_2_v2 my m-file. q is fed back into the next call as the starting vector. As you can see, I'm just using ode45 on the interval [t, t+t_step]. That's because my system matrix A can force ode45 to take a lot of steps, making it hit AbsTol or RelTol very quickly.
Now my A is something like B(t)*Q(t), so in the m-file intQ_2_v2.m I need to evaluate both B and Q at time t.
I first did it like this (v1 file, so the function name is different):
function q=intQ_2_v1(t,X)
[..]
B(1)=...; ... B(4)=...;
Q(1)=...; ...
That of course only works under the assumption that A is a 2x2 matrix. With that setup, a basic system took somewhere between 10 and 15 seconds to compute.
Instead of the above, I now use the files B1.m to B4.m and Q1.m to Q4.m. (I know that's not elegant, but I need to use quadgk on B later, and quadgk doesn't support matrix-valued functions.)
function q=intQ_2_v2(t,X)
[..]
global funcnameQ funcnameB d
for k=1:d
Q(k)=feval(str2func([funcnameQ,int2str(k)]),t);
B(k)=feval(str2func([funcnameB,int2str(k)]),t);
end
[..]
Here funcnameQ and funcnameB are strings holding the base names of the Q and B files (the index k is appended), and d is the dimension of the system.
Now, I knew this would cost more time than the first version, but the computing times are ten times as high (150 to 160 seconds)! I do understand that opening 4 files and evaluating them roughly 40 times per ODE loop is costly... and I also can't pre-evaluate B and Q, since ode45 uses adaptive step sizes...
Is there a way to not use that last loop?
Mostly I'm interested in a solution to drive down the computing times. I have a feeling that I'm missing something... but can't really put my finger on it. With the new version taking nearly three minutes instead of 10 seconds, I can get a coffee in between each test run now... (please don't tell me to get a faster computer)
(Sorry for such a long question.)
I'm not sure that I fully understand what you're doing here, but I can offer a few tips.
Use the profiler; it will help you understand exactly where the bottlenecks are.
Using feval is slower than calling function handles directly, especially when you use str2func to rebuild the handle each time. There is also a slowdown from the global variables (and it's a good habit to avoid those unless absolutely necessary). Each of these costs really adds up when repeated as often as they are here. Store function handles to each of your m-files in a cell array and either pass that cell array directly to the function, or use a nested function so that the cell array of handles is visible to the function being integrated. Personally, I prefer the nested method, but passing is better if you will use those m-files elsewhere.
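For illustration, here is a rough sketch of the cell-array-of-handles idea, assuming files named B1.m..B4.m and Q1.m..Q4.m as in the question (the exact names, sizes, and the way A is assembled are placeholders):
% In the calling function, build the handles once, outside the ODE loop:
Bfuns = {@B1, @B2, @B3, @B4};
Qfuns = {@Q1, @Q2, @Q3, @Q4};
% ...and pass them to the integrand through an anonymous function:
% [Te,Qe] = ode45(@(t,x) intQ_2_v2(t, x, Bfuns, Qfuns), [t, t+t_step], q);

% In intQ_2_v2.m:
function q = intQ_2_v2(t, X, Bfuns, Qfuns)
nEl = numel(Bfuns);
B = zeros(1, nEl);
Q = zeros(1, nEl);
for k = 1:nEl
    B(k) = Bfuns{k}(t);   % direct call through the handle, no str2func/feval/globals
    Q(k) = Qfuns{k}(t);
end
% ... assemble A from B and Q and compute q as before ...
q = zeros(size(X));       % placeholder so the sketch is complete; replace with the real A*X
end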
I expect this will get your runtime back to close to what the first method gave. Be sure to tell us if this was the problem or if you found another solution.
I have a differential equation that depends on around 30 constants. The differential equation is a system of N^2+1 equations (where N is typically 4), so solving it produces N^2+1 functions.
Often I want to see how the solution of the differential equation functionally depends on constants. For example, I might want to plot the maximum value of one of the output functions and see how that maximum changes for each solution of the differential equation as I linearly increase one of the input constants.
Is there a particularly clean method of doing this?
Right now I turn my differential-equation-solving script into a large function that returns an array of output functions. (Some of the inputs are vectors & matrices). For example:
for i = 1:N
[OutputArray1(i, :), OutputArray2(i, :), OutputArray3(i, :), OutputArray4(i, :), OutputArray5(i, :)] = DE_Simulation(Parameter1Array(i));
end
Here I call the function in a loop. The function solves the differential equation and returns the set of solution functions for that input parameter, and each one is appended as a row to a matrix.
There are a few issues I have with my method:
If I want to see how the solution depends on a different parameter, I have to redefine the function so that one of the thirty other constants becomes its input. For the sake of code readability, I cannot see myself explicitly writing all of the input parameters as individual inputs. (I've read that structures might be helpful here, but I'm not sure how that would be implemented.)
I typically get lost in parameter space and often have to update the same parameter across multiple scripts. I have a script that runs the differential-equation-solving function and a second script that plots the set of simulated data. (I save the local variables to a file so that I can load them explicitly for plotting, but I often get lost figuring out which file is associated with which set of parameters.) The parameters that are not inputs of the function are defined inside it. I've tried making the parameters global, but doing so drastically slows down my code. Additionally, some of the inputs are arrays I would like to plot and inspect before running the solver. (Some of the inputs are time-dependent boundary conditions, and I often want to see what they look like first.)
I'm trying to figure out a good way to keep track of everything, for example a smart method of saving generated figures with a tag that records all the parameters associated with that figure. I can save the parameters in a text file with a generic tag number that's listed in the figure title, but that feels like an awkward system, particularly because it's not easy to see what's different within a long list of 30+ parameters.
Overall, what I'm doing is fairly simple, yet I feel I don't have a good coding methodology and consequently waste a lot of time saving almost-identical functions and scripts for fairly simple tasks.
It seems like what you really want here is something that deals with N-D arrays instead of splitting up the outputs.
If all of the OutputArray_ variables have the same number of rows, then the line
for i = 1:N
[OutputArray1(i, :), OutputArray2(i, :), OutputArray3(i, :), OutputArray4(i, :), OutputArray5(i, :)] = DE_Simulation(Parameter1Array(i));
end
seems to suggest that what you really want your function to return is an M x K array (where in this case, K = 5), and you want to pack that output into an M x K x N array. That is, it seems like you'd want to refactor your DE_Simulation to give you something like
for i = 1:N
OutputArray(:,:,i) = DE_Simulation(Parameter1Array(i));
end
If they aren't the same size, then a struct or a table is probably the best way to go, as you could assign to one element of the struct array per loop iteration or one row of the table per loop iteration (the table approach would assume that the size of the variables doesn't change from iteration to iteration).
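For instance, a minimal sketch of the struct-array variant, assuming DE_Simulation could be rewritten to return one struct of results per parameter value (the field names here are made up):
results = struct('Output1', {}, 'Output2', {}, 'Output3', {});   % empty struct array with placeholder fields
for i = 1:N
    results(i) = DE_Simulation(Parameter1Array(i));   % one struct element per parameter value
end
plot(results(3).Output1)   % afterwards, access any run by index and field name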
If, for some reason, you really need to have these as separate outputs (and perhaps later as separate inputs), then what you probably want is a cell array. In that case you'd be able to deal with the variable number of outputs by doing something like
for i = 1:N
[OutputArray{i, 1:K}] = DE_Simulation(Parameter1Array(i));
end
I hesitate to even write that, though, because this almost certainly seems like the wrong data structure for what you're trying to do.
Last week I asked the following:
https://stackoverflow.com/questions/32658199/vectorizing-gibbs-sampler-in-matlab
Perhaps it was not clear what I want to do, so this might be clearer.
I would like to vectorize a "for" loop in matlab, where some variables inside of the loop are bidirectionally related. So, here is an example:
A=2;
B=3;
for i=1:10000
A=3*B;
B=exp(A*(-1/2))
end
Thank you once again for your time.
A quick Excel calculation indicates that this quickly converges to 0.483908 (after far fewer than 10000 loops), so one way of speeding it up would be to check for convergence. If A and B always start at 2 and 3 respectively, you could simply replace the loop with this value.
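A sketch of that convergence check on the loop from the question (the tolerance is arbitrary):
A = 2; B = 3;
tol = 1e-12;
for i = 1:10000
    B_old = B;
    A = 3*B;
    B = exp(A*(-1/2));
    if abs(B - B_old) < tol
        break   % converged long before 10000 iterations
    end
end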
Alternatively, using some series analysis you might be able to come up with an analytical expression for B when i is large - although with the nested exponents deriving this is a bit beyond my own abilities!
Edit
A bit of googling reveals this. Wikipedia states that for the infinite tetration of x (i.e. x^x^x^x^x...), the limit y satisfies y = x^y. In your case x = e^(-3/2), and 0.483908 = (e^(-3/2))^0.483908, so 0.483908 is indeed a solution. Not sure how you would exploit this, though.
Wikipedia also gives a convergence condition, which might be of use to you: x must lie between e^(-e) and e^(1/e).
Final Edit (?)
It turns out you need the Lambert W function to solve equations of the form y = x^y. There seems to be no native function for this, but there appears to be something on the File Exchange - see here and here.
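If a symbolic route is overkill, the fixed point y = x^y can also be checked numerically, e.g. with fzero (just a sketch, not a replacement for the analysis above):
x = exp(-3/2);                  % the base from the question's iteration
y = fzero(@(y) y - x^y, 0.5)    % solves y = x^y numerically; gives approximately 0.483908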
I am trying to run code similar to the following. I replaced my original function with a much smaller one to provide a minimal working example:
clear
syms k m
n=2;
symsum(symsum(k*m,m,0,min(k,n-k)),k,0,n)
I receive the following error message:
"Error using sym/min (line 86)
Input arguments must be convertible to floating-point numbers."
I think this means that the min function cannot be used with symbolic arguments. However, I was hoping that MATLAB would substitute in actual numbers as it iterates over k=0:n.
Is there a way to get this to work? Any help is much appreciated. So far the most relevant page I found was here, but I am somewhat hesitant, as I find it difficult to understand what this function does.
EDIT: Following @horchler's suggestion, I messed around putting it in various places to try and make it work, and this one did:
clear
syms k m
n=2;
symsum(symsum(k*m,m,0,feval(symengine, 'min', k,n-k)),k,0,n)
Because I do not really understand this feval function, I was curious whether there was a better, perhaps more commonly used, solution. Although it is a different function, there is plenty of advice online against using the eval function, for example, so I thought this one might carry similar issues.
I agree that Matlab should be able to solve this as you expect, even though the documentation is clear that it won't.
Why the issue occurs
The problem is due to the inner symbolic summation, and the min function itself, being evaluated first:
symsum(k*m,m,0,min(k,n-k))
In this case, the input arguments to sym/min are not "convertible to floating-point numbers" as k is a symbolic variable. It is only after you wrap the above in another symbolic summation that k becomes clearly defined and could conceivably be reduced to numbers, but the inner expression has already generated an error so it's too late.
I think that it's a poor choice for sym/min to return an error. Rather, it should just return itself. This is what the sym/int function does when it can't evaluate an integral symbolically or numerically. MuPAD (see below) and Mathematica 10 also do something like this as well for their min functions.
About the workaround
This directly calls MuPAD's min function. Calling MuPAD functions from Matlab is discussed in more detail in this article from The MathWorks.
If you like, you can wrap it in a function or an anonymous function to make calling it cleaner, e.g.:
symmin = @(x,y)feval(symengine,'min',x,y);
Then, your code would simply be:
syms k m
n = 2;
symsum(symsum(k*m,m,0,symmin(k,n-k)),k,0,n)
If you look at the code for sym/min in the Symbolic Math toolbox (type edit sym/min in your Command Window), you'll see that it's based on a different function: symobj::maxmin. I don't know why it doesn't just call MuPAD's min, other than performance reasons perhaps. You might consider filing a service request with The MathWorks to ask about this issue.
I am trying to vectorise a for loop. I have a set of coordinates listed in a [68x200] matrix called plt2, and I have another set of coordinates listed in a [400x1] matrix called trans1. I want to create a three dimensional array called dist1, where in dist1(:,:,1) I have all of the values of plt2 with the first value of trans1 subtracted, all the way through to the end of trans1. I have a for loop like this which works but is very slow:
for i=1:source_points;
dist1(:,:,i)=plt2-trans1(i,1);
end
Thanks for any help.
If I understood correctly, this can be easily solved with bsxfun:
dist1 = bsxfun(@minus, plt2, shiftdim(trans1,-2));
Or, if speed is important, use this equivalent version (thanks to #chappjc), which seems to be much faster:
dist1 = bsxfun(@minus, plt2, reshape(trans1,1,1,[]));
In general, bsxfun is a very useful function for cases like this. Its behaviour can be summarized as follows: for any singleton dimension of any of its two input arrays, it applies an "implicit" for loop to the other array along the same dimension. See the doc for further details.
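A small self-contained example of that behaviour (sizes chosen arbitrarily, smaller than in the question):
plt2   = rand(3, 4);                                      % 3x4 coordinates
trans1 = rand(5, 1);                                      % 5x1 offsets
dist1  = bsxfun(@minus, plt2, reshape(trans1, 1, 1, []));
size(dist1)                                               % returns [3 4 5]: one 3x4 slice per element of trans1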
Vectorizing is a good first optimization, and is usually much easier than going all in writing your own compiled mex-function (in c).
However, the golden middle-way for power users is Matlab Coder (this also applies to slightly harder problems than the one posted, where vectorization is more or less impossible). First, create a small m-file function around the slow code, in your case:
function dist1 = do_some_stuff(source_points,dist1,plt2,trans1)
for i=1:source_points;
dist1(:,:,i)=plt2-trans1(i,1);
end
Then create a simple wrapper function which calls do_some_stuff as well as defines the inputs. This file should really be only 5 rows, with only the bare essentials needed. Matlab Coder uses the wrapper function to understand what typical proper inputs to do_some_stuff are.
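A possible wrapper along those lines might look like this (the file name and the sizes are just examples matching the dimensions from the question):
function dist1 = do_some_stuff_wrapper()
% Example inputs so Matlab Coder can infer types and sizes:
plt2   = zeros(68, 200);
trans1 = zeros(400, 1);
source_points = 400;
dist1 = zeros(68, 200, source_points);
dist1 = do_some_stuff(source_points, dist1, plt2, trans1);
end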
You can now fire up the Matlab Coder gui from the Apps section and simply add do_some_stuff under Entry-Point Files. Press Autodefine types and select your wrapper function. Go to build and press build, and you are good to go! This approach usually bumps up the execution speed substantially with almost no effort.
BR
Magnus
I have a function that returns a large vector and is called multiple times, with some logic going on between calls that makes vectorization not an option.
An example of the function is
function a=f(X,i)
a=zeros(size(X,1),1);
a(:)=X(:,i);
end
and I am doing
for i=1:n, a=f(X,i); end
When profiling this (with size(X,1) = 5*10^5 and n = 100), the times are 0.12 s for the zeros line and 0.22 s for the second line, a(:)=X(:,i). As expected, memory is allocated at each call of f in the zeros line.
To get rid of that line and its 0.12s, I thought of allocating the returned value just once, and passing it in as return space each time to an appropriate function g like so:
function a=g(X,i,a)
a(:)=X(:,i);
end
and doing
a=zeros(m,1);
for i=1:n, a=g(X,i,a); end
What is surprising to me is that profiling inside g still shows memory being allocated in the same amounts at the a(:)=X(:,i); line, and the time taken is very much like 0.12 + 0.22 s...
1) Is this just lazy copy-on-write because I am writing into a?
2) Going forward, what are the options?
- a global variable for a (messy...)?
- writing a matrix handle class (must I really? see the sketch below)
(The nested-function way means some heavy redesigning to make a nesting function in which X is known (the matrix A, in the notation from that answer)...)
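For what it's worth, here is a very small sketch of what the handle-class option could look like (the class and method names are made up); since handle objects are passed by reference, the object itself is not copied when passed to functions:
% In MatrixBuffer.m:
classdef MatrixBuffer < handle
    properties
        a   % the preallocated column vector
    end
    methods
        function obj = MatrixBuffer(m)
            obj.a = zeros(m, 1);
        end
        function fill(obj, X, i)
            obj.a(:) = X(:, i);   % writes into the existing array held by the object
        end
    end
end
% Usage: buf = MatrixBuffer(size(X,1)); for i = 1:n, buf.fill(X, i); end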
Perhaps this is a bit tangential to your question, but if this is a performance-critical application, I think a good way to go is to rewrite your function as a MEX file. Here is a quote from http://www.mathworks.com/support/tech-notes/1600/1605.html#intro:
The main reasons to write a MEX-file are:...
Speed; you can rewrite bottleneck computations (like for-loops) as a MEX-file for efficiency.
If you are not familiar with mex files, the link above should get you started. Converting your existing function to C/C++ should not be overly difficult. The yprime.c example included with MATLAB is similar to what you're trying to do, since it is iteratively being called to calculate the derivatives inside ode45, etc.