Vectorize for loop in MATLAB

I wanted to vectorize this piece of code. Is it possible to do this? I tried finding a solution, but I was not able to find any good results on Google.
for pos = length1+1:length
    X1(pos) = sim(net1, [demandPred(pos), demand(pos-1), X1(pos-1), X1(pos-2)]')';
    X2(pos) = sim(net1, [demandPred(pos), demand(pos-1), X2(pos-1), X2(pos-2)]')';
end
Thanks in advance. :)
Edit 1:
The model which I am going to simulate is a simple GRNN.
net1 = newgrnn([demand(169:trainElem), demand(169-1:trainElem-1), X1(169 - 1:trainElem - 1), X1(169 - 2:trainElem - 2)]', 0.09);

Can Simulink models be vectorized? Sometimes.
Can your Simulink model be vectorized? It's impossible to tell without seeing the model -- and how it is being called from m-code (as you've shown in your question) is no indication.
An example of vectorization: consider a model with a signal s1 that gets added to a constant K, and assume that you need to run the model for different values of K. You could use a loop (like the m-code you show) and run the model once for each required value of K. Alternatively, you can make K a vector, in which case all the values get added to s1, the result is a vector of signals s1+K(1), s1+K(2), ..., s1+K(n), and the model only needs to be executed once for all of these summations to occur.
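As a rough m-code analogy of that idea (the numbers here are made up purely for illustration):
K = [1 2 3 4 5];           % the different constant values to test
s1 = 10;                   % the signal value at some instant
% looped version: one run per value of K
out = zeros(size(K));
for k = 1:numel(K)
    out(k) = s1 + K(k);
end
% vectorized version: one run, K enters as a vector
out = s1 + K;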
But, whether that sort of thing can be done in your model cannot be determined without seeing the model.

Related

Clean methodology for running a function for a large set of input parameters (in Matlab)

I have a differential equation that's a function of around 30 constants. The differential equation is a system of (N^2+1) equations (where N is typically 4). Solving this system produces N^2+1 functions.
Often I want to see how the solution of the differential equation functionally depends on constants. For example, I might want to plot the maximum value of one of the output functions and see how that maximum changes for each solution of the differential equation as I linearly increase one of the input constants.
Is there a particularly clean method of doing this?
Right now I turn my differential-equation-solving script into a large function that returns an array of output functions. (Some of the inputs are vectors & matrices). For example:
for i = 1:N
[OutputArray1(i, :), OutputArray2(i, :), OutputArray3(i, :), OutputArray4(i, :), OutputArray5(i, :)] = DE_Simulation(Parameter1Array(i));
end
Here I loop over the parameter values; the function solves the differential equation, returns the set of solution functions for that input parameter, and each one is appended as a row to a matrix.
There are a few issues I have with my method:
If I want to see how the solution of the differential equation depends on a different parameter, I have to redefine the function so that that parameter becomes one of its inputs. For the sake of code readability, I cannot see myself explicitly writing all of the input parameters as individual inputs. (I've read that structures might be helpful here, but I'm not sure how that would be implemented.)
I typically get lost in parameter space and often have to update the same parameter across multiple scripts. I have one script that runs the differential-equation-solving function and a second script that plots the set of simulated data. (I save the local variables to a file so that I can load them explicitly for plotting, but I often get lost figuring out which file is associated with which set of parameters.) The remaining parameters that are not inputs to the function are defined inside the function itself. I've tried making the parameters global, but doing so drastically slows down my code. Additionally, some of the inputs are arrays I would like to plot and inspect before running the solver. (Some of the inputs are time-dependent boundary conditions, and I often want to see what they look like first.)
I'm trying to figure out a good method for me to keep track of everything. I'm trying to come up with a smart method of saving generated figures with a file tag that displays all the parameters associated with that figure. I can save such a file as a notepad file with a generic tagging-number that's listed in the title of the figure, but I feel like this is an awkward system. It's particularly awkward because it's not easy to see what's different about a long list of 30+ parameters.
Overall, I feel as though what I'm doing is fairly simple, yet I feel as though I don't have a good coding methodology and consequently end up wasting a lot of time saving almost-identical functions and scripts to solve fairly simple tasks.
It seems like what you really want here is something that deals with N-D arrays instead of splitting up the outputs.
If each of the OutputArray_ variables has the same number of columns (i.e. every output of DE_Simulation has the same length), then the loop
for i = 1:N
    [OutputArray1(i, :), OutputArray2(i, :), OutputArray3(i, :), OutputArray4(i, :), OutputArray5(i, :)] = DE_Simulation(Parameter1Array(i));
end
seems to suggest that what you really want your function to return is an M x K array (where in this case, K = 5), and you want to pack that output into an M x K x N array. That is, it seems like you'd want to refactor your DE_Simulation to give you something like
for i = 1:N
    OutputArray(:,:,i) = DE_Simulation(Parameter1Array(i));
end
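With that layout, the kind of summary mentioned in the question (e.g. the maximum of one output function as the parameter is varied) becomes a one-liner; k below is just a placeholder index for whichever output you care about:
k = 1;                                              % hypothetical: which of the K outputs to summarize
maxVals = squeeze(max(OutputArray(:,k,:), [], 1));  % one maximum per parameter value
plot(Parameter1Array, maxVals);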
If they aren't the same size, then a struct or a table is probably the best way to go, as you could assign to one element of the struct array per loop iteration or one row of the table per loop iteration (the table approach would assume that the size of the variables doesn't change from iteration to iteration).
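For instance, a minimal sketch of the struct-array variant (the field names are just placeholders):
for i = 1:N
    [results(i).out1, results(i).out2, results(i).out3, ...
     results(i).out4, results(i).out5] = DE_Simulation(Parameter1Array(i));
    results(i).param = Parameter1Array(i);  % keep the input stored next to its outputs
end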
If, for some reason, you really need to have these as separate outputs (and perhaps later as separate inputs), then what you probably want is a cell array. In that case you'd be able to deal with the variable number of outputs by doing something like
for i = 1:N
    [OutputArray{i, 1:K}] = DE_Simulation(Parameter1Array(i));
end
I hesitate to even write that, though, because this almost certainly seems like the wrong data structure for what you're trying to do.

Simulink S-Functions - Retrieve Initial Values from another S-Function

I'm trying to model the respective processes of an internal combustion engine. My current modelling approach is to have different sub functions which model the different processes.
Within each sub function is a Level 2 S-Function which solves the ODEs to give the in-cylinder state (pressure, temperature, etc.).
The problem I'm having is that each sub function is enabled depending on the current crank angle, which is computed from the current timestep in Simulink. The first process works fine because I manually set the initial values, but I then can't pass the latest in-cylinder state (the output from the first sub function) to the second sub function to use as its initial conditions (it insists on using the initial values I set at the beginning of the simulation).
Is there any way round this? Currently I'm going down the path of global data stores, but I haven't had any joy so far.
There are a lot of different ways to solve this problem; I'll show some of them as examples.
You can create an additive output with a Unit Delay block, like this:
That way you can get the value of your crank angle from the previous timestep and use it in the formula when solving your equations.
You can also use some code like this:
if (t == 0)
    % equations with your initial values
    sred = 0;
else
    % equations with other values
    y = uOld + myCoeef;
end
Another idea: sometimes I use persistent variables in a MATLAB Function block to save the value of a variable from the previous step, although I think that makes the calculation slower.
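A minimal sketch of that persistent-variable idea inside a MATLAB Function block (the names are only illustrative):
function y = usePrevious(u)
% returns the value u had on the previous call, and 0 on the first call
persistent uOld
if isempty(uOld)
    uOld = 0;   % assumed initial value for the very first step
end
y = uOld;       % use last step's value in this step's equations
uOld = u;       % remember the current value for the next step
end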
One more idea: if you have Stateflow, you can create a chart with two states: the first for the initial moment, with your coefficient, and the second to solve the new equations.
If I've misunderstood you, show your code and we'll offer some new ideas!
P.S. An example of how I use an S-Function:
My S-Function needs 2 values: Q is calculated in Simulink at every step, and ro is an initial value I take from a big matrix that I load from the workspace into a table, picking the necessary value depending on the time.
So there are no initial values inside the S-Function at all: all the values it needs are transmitted into it from Simulink!

Vectorising 3d array

I am trying to vectorise a for loop. I have a set of coordinates listed in a [68x200] matrix called plt2, and another set of coordinates listed in a [400x1] matrix called trans1. I want to create a three-dimensional array called dist1, where dist1(:,:,1) holds all of the values of plt2 with the first value of trans1 subtracted, and so on through to the end of trans1. I have a for loop like this, which works but is very slow:
for i = 1:source_points
    dist1(:,:,i) = plt2 - trans1(i,1);
end
Thanks for any help.
If I understood correctly, this can be easily solved with bsxfun:
dist1 = bsxfun(@minus, plt2, shiftdim(trans1,-2));
Or, if speed is important, use this equivalent version (thanks to @chappjc), which seems to be much faster:
dist1 = bsxfun(@minus, plt2, reshape(trans1,1,1,[]));
In general, bsxfun is a very useful function for cases like this. Its behaviour can be summarized as follows: for any singleton dimension of either of its two input arrays, it applies an "implicit" for loop to the other array along the same dimension. See the doc for further details.
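As a quick self-contained illustration of that behaviour (sizes taken from the question, random data just for the example):
plt2   = rand(68,200);                                  % 68x200 coordinates
trans1 = rand(400,1);                                   % 400 offsets
dist1  = bsxfun(@minus, plt2, reshape(trans1,1,1,[]));  % result is 68x200x400
% In R2016b and newer, implicit expansion allows the even shorter form:
% dist1 = plt2 - reshape(trans1,1,1,[]);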
Vectorizing is a good first optimization, and is usually much easier than going all-in and writing your own compiled MEX function (in C).
However, the golden middle way for power users is MATLAB Coder (this also applies to slightly harder problems than the one posted, where vectorization is more or less impossible). First, create a small m-file function around the slow code; in your case:
function dist1 = do_some_stuff(source_points, dist1, plt2, trans1)
for i = 1:source_points
    dist1(:,:,i) = plt2 - trans1(i,1);
end
Then create a simple wrapper function that calls do_some_stuff and defines the inputs. This file really only needs to be about five lines, containing only the bare essentials. MATLAB Coder uses the wrapper function to understand what typical, proper inputs to do_some_stuff look like.
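Such a wrapper might look something like this (the sizes are only an assumption for illustration; Coder just needs representative inputs):
function wrapper_do_some_stuff()
% defines representative inputs and calls the entry-point function once
source_points = 400;
plt2   = rand(68,200);
trans1 = rand(400,1);
dist1  = zeros(68,200,source_points);
dist1  = do_some_stuff(source_points, dist1, plt2, trans1);
end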
You can now fire up the MATLAB Coder GUI from the Apps section and simply add do_some_stuff under Entry-Point Files. Press Autodefine types and select your wrapper function. Go to Build and press Build, and you are good to go! This approach usually bumps up the execution speed substantially with almost no effort.
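If you prefer the command line to the GUI, the codegen command with example input values (sizes again just illustrative) achieves the same thing; the generated MEX function is named do_some_stuff_mex by default:
codegen do_some_stuff -args {400, zeros(68,200,400), zeros(68,200), zeros(400,1)}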
BR
Magnus

function parameters in matlab wander off after curve fitting

First a little background: I'm a psychology student, so my background in coding isn't on par with you guys :-)
My problem is as follows, and the most important observation is that curve fitting with 2 different programs gives completely different results for my parameters, although my graphs stay the same. The main program we have used to fit my longitudinal data is KaleidaGraph, and this should be seen as kind of the 'gold standard'; the program I'm trying to adjust is MATLAB.
I was trying to be smart and wrote some code (a lot, at least for me), and the goal of that code was the following:
1. Taking an individual longitudinal datafile
2. curve fitting this data on a non-parametric model using lsqcurvefit
3. obtaining figures and the points where f' and f'' are zero
This all worked well (woohoo :-)), but when I started comparing the function parameters the two programs generate, there is a huge difference. KaleidaGraph stays close to its original starting values, whereas MATLAB wanders off and sometimes ends up larger by a factor of 1000. The graphs, however, stay more or less the same in both cases, and both fit the data well. Still, it would be lovely to know how to make the MATLAB curve fitting more 'conservative' and keep it located near its original starting values.
validFitPersons = true(nbValidPersons,1);
for i = 1:nbValidPersons
    personalData = data{validPersons(i),3};
    personalData = personalData(personalData(:,1) >= minAge,:);
    % Fit a specific model for all valid persons
    try
        opts = optimoptions(@lsqcurvefit, 'Algorithm', 'levenberg-marquardt');
        [personalParams,personalRes,personalResidual] = lsqcurvefit(heightModel,initialValues,personalData(:,1),personalData(:,2),[],[],opts);
    catch
        x = 1;
    end
Above is the part of the code I've written to fit the data files to a specific model.
Below is an example of a non-parametric model I use, with its function parameters.
elseif strcmpi(model,'jpa2')
    % y = a.*(1-1/(1+(b_1(t+e))^c_1+(b_2(t+e))^c_2+(b_3(t+e))^c_3))
    heightModel = @(params,ages) abs(params(1).*(1-1./(1+(params(2).*(ages+params(8))).^params(5) + (params(3).*(ages+params(8))).^params(6) + (params(4).*(ages+params(8))).^params(7))));
    modelStrings = {'a','b1','b2','b3','c1','c2','c3','e'};
    % Define initial values
    if strcmpi('male',gender)
        initialValues = [176.76 0.339 0.1199 0.0764 0.42287 2.818 18.52 0.4363];
    else
        initialValues = [161.92 0.4173 0.1354 0.090 0.540 2.87 14.281 0.3701];
    end
I've tried to mimic the curve-fitting process in KaleidaGraph as closely as possible. There I found that it uses the Levenberg-Marquardt algorithm, which is what I've selected. However, the results still vary and I don't have any more clues about how I can change this.
Some extra adjustments:
The idea for this code was the following:
I'm trying to compare different fitting models (they are designed for this purpose). So what I do is: I have 5 models with different parameters and different starting values (the second part of my code), and then I have the general curve-fitting file. Since there are different models, it would be interesting if I could put restrictions on how far my starting values can wander off.
Anyone any idea how this could be done?
Anybody willing to help a psychology student?
Cheers
This is a common issue when dealing with non-linear models.
If I were you, I would check whether you can remove some parameters from the model in order to simplify it.
If you really want to keep your solution from straying too far from the initial point, you can use lower and upper bounds for each variable:
x = lsqcurvefit(fun,x0,xdata,ydata,lb,ub)
defines a set of lower and upper bounds on the design variables in x so that the solution is always in the range lb ≤ x ≤ ub.
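For example, a sketch of constraining each parameter to stay within a band around its starting value (the 50% band is just an illustration, not a recommendation):
lb = initialValues - 0.5*abs(initialValues);  % allow each parameter to drop by at most 50%
ub = initialValues + 0.5*abs(initialValues);  % and to grow by at most 50%
personalParams = lsqcurvefit(heightModel, initialValues, ...
    personalData(:,1), personalData(:,2), lb, ub);
Note that lsqcurvefit's Levenberg-Marquardt algorithm does not handle bound constraints, so when lb and ub are supplied the default trust-region-reflective algorithm is used instead.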
Cheers
You state:
I'm trying to compare different fitting models (they are designed for this purpose). So what I do is I have 5 models with different parameters and different starting values (the second part of my code) and next I have the general curve fitting file.
You will presumably compare the statistics from fits with different models, to see whether reductions in the fitting error are unlikely to be due to chance. You may want to rely on that comparison to pick the model that not only fits your data suitably but is also simplest (which is often referred to as the principle of parsimony).
The problem is really that the model you have shown results in correlated parameters and therefore overfitting, as mentioned by @David. Again, this should be resolved when you compare different models and find that some do just as well (statistically speaking) even though they involve fewer parameters.
Edit:
To drive the point home regarding the problem with the choice of model, here are (1) the results of a trial fit using simulated data and (2) the correlation matrix of the parameters in graphical form:
Note that absolute values of the correlation close to 1 indicate strongly correlated parameters, which is highly undesirable. Note also that the trend in the data is practically linear over a long portion of the dataset, which implies that 2 parameters might suffice over that stretch, so using 8 parameters to describe it seems like overkill.

Using coupled system of PDEs in modelica

Just a few questions; I hope someone will find time to answer :).
What if we have a COUPLED model, for example a system of n independent variables X and n nonlinear partial differential equations PDEf(X, PDEf(X)) with respect to TIME that depend on X and PDEf(X) (partial differential equations depending on the variables X)? Can you give some advice? Here is one example:
Let's say that c is the output, or desired, variable, and that r is the independent variable. The partial differential equation looks like:
∂c/∂t=D*1/r+∂c/∂r+2(D* (∂^2 c)/(∂r^2 ))
D=constant
r = 0:0.1:Rp is MATLAB syntax; how do I represent the same thing in Modelica? (I used an Integrator, but it didn't work.)
Here is the code (it does not work):
model PDEtest
  /* Boundary conditions
     1. delta(c)/delta(r)=0 for r=0
     2. delta(c)/delta(r)=-j*d for r=Rp */
  parameter Real Rp=88*1e-3; // length
  parameter Real initialConc=1000;
  parameter Real Dp=1e-14;
  parameter Integer np=10; // num. of points
  Real cp[np](start=fill(initialConc,np));
  Modelica.Blocks.Continuous.Integrator r(k=1); // independent x1
  Real j;
protected
  parameter Real dr=Rp/np;
  parameter Real ts=0.01; // for using when loop (sample(0,ts) )
algorithm
  j:=sin(time); // this should be independent variable like x2
  r.u:=dr;
  while r.y<=Rp loop
    for i in 2:np-1 loop
      der(cp[i]):=2*Dp/r.y+(cp[i]-cp[i-1])/dr+2*(Dp*(cp[i+1]-2*cp[i]+cp[i-1])/dr^2);
    end for;
    if r.y==Rp then
      cp[np]:=-j*Dp;
    end if;
    cp[1]:=if time >=0 then initialConc else initialConc;
  end while;
  annotation (uses(Modelica(version="3.2")));
end PDEtest;
Here are more questions:
1. This code doesn't work in OpenModelica 1.8.1, and it also doesn't work in the Dymola 2013 demo. How can we get a continuous function of the variable c, rather than an array of functions?
2. Can we place the values of the array cp in a CombiTable? And how?
3. If "algorithm" is replaced by "equation", the code can't be checked successfully. Why? In OpenModelica the error is: could not flatten model :S.
4. Is there any simplified way to use a set of coupled equations (PDEs)? I know about the PDE libraries for Modelica, but I think they are complicated. I want to write a function that solves the PDE and call this function in the "main model", so that the output of the function is a continuous function c. I don't know what to do with an array of functions.
5. Can you give me advice on how to understand the Modelica language if we "speak" like in MATLAB? For example, values of the independent variable r can be specified in MATLAB like r=0:TimeStep:Rp... How do I do the same in Modelica? And please explain how the "equation" section works: is there a similarity with MATLAB, and is a sequential approach necessary?
Cheers :)
It's hard to answer your question, since you are assuming that Modelica ~ MATLAB, but that's not the case. So I won't comment on your code, since it's really wrong. Let me give you an example model for the Burgers equation instead. Maybe you could use it as a starting point.
model burgereqn
  Real u[N+2](start=u0);
  parameter Real h = 1/(N+1);
  parameter Integer N = 10;
  parameter Real v = 234;
  parameter Real Pi = 3.14159265358979;
  parameter Real u0[N+2] = {((sin(2*Pi*x[i])) + 0.5*sin(Pi*x[i])) for i in 1:N+2};
  parameter Real x[N+2] = {h*i for i in 1:N+2};
equation
  der(u[1]) = 0;
  for i in 2:N+1 loop
    der(u[i]) = - ((u[i+1]^2 - u[i-1]^2)/(4*(x[i+1]-x[i-1])))
                + (v/(x[i+1]-x[i-1])^2)*(u[i+1] - 2*u[i] + u[i-1]);
  end for;
  der(u[N+2]) = 0;
end burgereqn;
Your further questions:
1. cp is a continuous variable and the array represents every discretization point.
2. Why would you want to do that? As far as I understand it, cp is your desired solution variable.
3. You should almost always use an equation section; algorithm sections are usually used in functions. I'm pretty sure you can represent your desired behaviour with equations.
4. I don't know that library, but the hard parts of a PDE are the discretization and the solving itself. You may run into issues while solving the PDE with a Modelica tool, since a Modelica tool usually has no specialized solving algorithm for PDEs.
5. Please look at further references for that question. You could start with Modelica.org.