Firstly, I am a relatively new user. I am trying to correlate physical test data with a model I built using Dymola/Modelica. In this model, "variable 1" has an initial value from which "variables 2, 3 and 4" are calculated; these variables (2, 3 & 4) are then used to re-calculate "variable 1", and this new value of "variable 1" has to be used in the next time step for the subsequent recalculations.
I am not sure how to pass this updated "variable 1" as an input to the model at every time step. Can someone please help me with how to approach this problem?
Thanks.
If I understood the question correctly, you have an equation system that you want to solve de-coupled, i.e. solve one set of equations (call it equation set A) using some initial values or the values of the system at the previous time step, then use its results as an input to solve equation set B at the next time step, and so on. Below is an example of a decoupled system that is discrete, in which the decoupling is obtained with a clock period shift:
And here is the same system solved in a coupled way, so that at every time instant all the equations are solved synchronously:
To reply to your comment: you can also implement your model in the equation section inside a when statement, using the pre operator, which refers to the last value of a discrete variable during an event.
model test
  parameter Real timeStep = 0.1;
  Real T_i[4];
  Real K[4];
  Real M[4];
initial equation
  T_i = {1,2,3,4}; //starting value of T
  K = T_i .* 1.1 .+ 4;
  M = K .* 1.1 .+ 4;
equation
  when sample(timeStep,timeStep) then
    K = T_i .* 10 .+ 4;
    M = K .* 10 .+ 4;
    T_i = pre(M) + pre(K);
  end when;
end test;
I hope this helps
I have a differential equation: dx/dt = a * x. Using MATLAB Simulink, I need to solve this equation and output it using a Scope block.
The problem is, I don't know how to specify an initial condition value at t = 0.
So far I have managed to create a solution that looks like this:
I know that inside the Integrator block there is a possibility to set an "Initial condition", but I can't figure out how that affects the final result. How can I simply set the value of x at t = 0, i.e. x(0) = 6?
Let's work this problem through analytically first so we know if the model is correct.
dx/dt = a*x % Separable differential equation
=> (1/x) dx = a dt % Now we can integrate
=> ln(x) = a*t + c % We can determine c using the initial condition x(0)
=> ln(x0) = a*0 + c
=> ln(x0) = c
=> x = exp(a*t + ln(x0)) % Subbing into 3rd line and taking exp of both sides
=> x = x0 * exp(a*t)
So now we have an idea. Let's look at this for t = 0 .. 1, x0 = 6, a = 5:
% Plot x vs t using plain MATLAB
x0 = 6; a = 5;
t = 0:1e-2:1; x = x0*exp(a*t);
plot(t,x)
Now let's make a Simulink model which acts as a numerical integrator. We don't actually need the Integrator block for this application; we simply add the change at each time step!
To run this, we must first set a couple of things up. In Simulation > Model Configuration Parameters, we must set the solver time step to match the time step we've used to convert dx/dt into dx (the 2nd Gain block).
Lastly, we must set the initial condition for x0; this can be done in the Memory block.
Setting the end time to 1s and running the model, we see the expected result in the Scope. Because it matches our analytical solution, we know it is correct.
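For reference, here is a minimal MATLAB sketch of the same computation as a forward-Euler loop (a, x0 and the step size are taken from the plotting example above; the loop itself is an assumption about what the blocks compute):
% forward-Euler integration of dx/dt = a*x, mimicking the Simulink loop:
% at each step we add the change dx = (a*x)*dt, as the Gain/Memory blocks do
a = 5;                   % Gain block
x0 = 6;                  % initial condition (Memory block)
dt = 1e-2;               % fixed solver step size
t = 0:dt:1;
x = zeros(size(t));
x(1) = x0;
for k = 1:numel(t)-1
    dxdt = a*x(k);              % output of the Gain block
    x(k+1) = x(k) + dxdt*dt;    % add the change at this time step
end
plot(t,x,t,x0*exp(a*t),'--')    % compare with the analytical solution
legend('forward Euler','analytic')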
Now that we understand what's going on, we can re-introduce the Integrator block to make the model more flexible. Using the integrator means that dt is calculated automatically, and we don't need to micro-manage the Gain block; in fact we can get rid of it. We still need a Memory block, though. We now also need initial conditions in both the integrator and the memory block. Put Scopes in different locations and step through just the first few time steps to work out why!
Note that the initial conditions are less clear when using the integrator block.
The way to think about the integrator block is either entirely in the Laplace picture, or as representing the equivalent integral equation for the IVP: the problem
y'(t) = f(t, y(t)),   y(0) = y_0
is equivalent to
y(t) = y_0 + int(s=0 to t) f(s, y(s)) ds.
The feedback loop in the block diagram realizes this fixed-point equation for the solution function almost literally.
So there is no need for complicated constructions and extra blocks.
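As a quick numerical check of this integral form (a minimal sketch, reusing a = 5 and x0 = 6 from the example above):
% verify that y = y0*exp(a*t) satisfies y(t) = y0 + int_0^t a*y(s) ds
a = 5; y0 = 6;
t = 0:1e-3:1;
y = y0*exp(a*t);
y_int = y0 + cumtrapz(t,a*y);   % right-hand side of the fixed-point equation
max(abs(y - y_int))             % small residual: the equation holds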
I need a Simulink block or a group of blocks to do peak detection: compare each value of an input stream to its previous and next values. If it's larger than the previous AND larger than the next value, output this value.
I've tried to do it with a MATLAB Function block, but I cannot create the required delay; as far as I have tried, it is not possible to store previous values, for example.
So, what should I do?
Update: Another example
In response to the comments: the suggested solutions would be helpful if I were dealing with discrete values, so here is another example to represent my need:
A Schmitt trigger
I need to write a MATLAB function that implements the given scenario. I can do something like
if u >= 2
    y = 3;
elseif (u < 2)
    y = -3;
end
But this is still not correct, as I need to look at the previous value of the input (hysteresis); otherwise I'll end up with something like the following
PS: I know there is no such thing as a "previous value" in an analog signal, but we all know that Simulink deals with analog values as discrete ones in the end (just with a much higher sampling rate). So I think maybe there is a way to do it.
I think your code is fine except for one minor mistake:
if u > 2
    y = 3;
elseif u < -2
    y = -3;
else
    y = u;
end
The variable u in the elseif branch should be compared with -2 (the upper threshold, but with a negative sign).
Hope it helps!
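For true hysteresis you also need to hold the previous output between the two thresholds. Here is a minimal sketch for a MATLAB Function block, using a persistent variable to store the last output (the thresholds and output levels are taken from the question; the initial state is an assumption):
function y = schmitt(u)
% Schmitt trigger with hysteresis: the persistent variable
% keeps the previous output between calls to the block
persistent y_prev
if isempty(y_prev)
    y_prev = -3;      % assumed initial state
end
if u >= 2             % above the upper threshold: switch high
    y = 3;
elseif u <= -2        % below the lower threshold: switch low
    y = -3;
else
    y = y_prev;       % between thresholds: hold the last output
end
y_prev = y;
end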
I'm implementing the k-means algorithm in MATLAB without using the built-in kmeans function. The stopping criterion is when the new centroids don't change between iterations, but I cannot implement it in MATLAB. Can anybody help?
Thanks
Setting "no change" as a stopping criterion is a bad idea. There are a few main reasons you shouldn't use a zero-change condition:
Even for a well-behaved function, the difference between zero change and a very small change (say 1e-5) could be 1000+ iterations, so you are wasting time trying to get the centroids to be exactly the same. Especially because computers usually keep far more digits than we are interested in: if you only need 1 digit of accuracy, why wait for the computer to find an answer to within 1e-31?
Computers have floating point errors everywhere. Try some easily reversible matrix operations, like a = rand(3,3); b = a*a*inv(a); a-b. Theoretically a-b should be 0, but you will see it isn't. These errors alone could prevent your program from ever stopping.
Dithering. Let's say we have a 1D k-means problem with 3 numbers and we want to split them into 2 groups. In one iteration the grouping can be a,b vs c; the next iteration could be a vs b,c; the next a,b vs c again, and so on. This is of course a simplified example, but there can be instances where a few data points dither between clusters, and you end up with a never-ending algorithm. Since those few points keep being reassigned, the change will never be 0.
The solution is to use a delta threshold: you subtract the current values from the previous ones, and if the difference is less than a threshold you are done. This on its own is powerful, but as with any loop, you need a backup escape plan, and that is setting a max_iterations variable. Look at MATLAB's documentation for kmeans; even they have a MaxIter option (default 100), so even if your k-means doesn't converge, at least it won't run endlessly. Something like this might work:
%problem specific
max_iter = 100;
%choose a small number appropriate to your problem
thresh = 1e-3;
%ensures the loop runs the first time
delta_mu = thresh + 1;
num_iter = 0;
%initialize the centroids before the loop (placeholder value)
curr_mu = initial_values;
%do your kmeans in the loop
while (delta_mu > thresh && num_iter < max_iter)
    %save the current means right away
    old_mu = curr_mu;
    %calculate new means and variances (the standard kmeans iteration),
    %then store the values in curr_mu
    curr_mu = newly_calculated_values;
    %use the two norm to reduce the delta to a single number, no matter
    %what the original dimensionality of mu was; if old_mu - curr_mu is
    %0 the norm is still 0, so it behaves well as a distance measure
    delta_mu = norm(old_mu - curr_mu,2);
    num_iter = num_iter + 1;
end
Edit: if you don't know, the 2-norm is essentially the Euclidean distance.
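For example (a quick illustration of the equivalence):
v = [3 4];
norm(v,2)       % 5
sqrt(sum(v.^2)) % 5: the Euclidean length of v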
I have some code which uses sparse indexing (and there's no way I can get around that). I run this in a function and use it for two problems where the sizes of all the variables involved do not change. However, for one problem the sparse indexing part takes 5 seconds, and for the other it takes 25 seconds.
I checked the size of every variable involved, and they are the same for both problems. I also checked that xv is a full matrix for both problem types.
So, anyone else ever run into something weird like this? Any ideas as to why this would happen? Mainly I am trying to make the code more efficient, and while 5 seconds is ok for my particular application, 25 seconds (especially when I can't explain it) is very bad.
Edit: Here is a link to a photo that profiles this weird behavior. The runtime values were recorded on the third run to ensure that the size of X is also not changing. And I did check that xv is a dense (not sparse) matrix both times.
https://www.dropbox.com/s/i41j6afanzbjdyg/weird_bcd_thing.png?dl=0
Thanks so much for any help!
Code below (it runs in a for loop). If I use ptype = 1, it takes 5 seconds; with ptype = 3, it takes 25 seconds.
clvec = cliques{k};
xcurr = full(X(clvec));
xv = reshape(xcurr - Z(offset_index(k) + 1 : offset_index(k) + ncl^2),ncl,ncl);
%these two functions both take a dense symmetric matrix and return a
%dense symmetric matrix; in both cases the size is the same for a given k
if ptype == 1
    xv = proj_PSD(xv,0,0);
elseif ptype == 3
    xv = proj_Schoenberg(xv,0);
end
Xd = vec(xv) - xcurr;
%THIS IS THE WEIRD LINE
tic
X(clvec) = xv;
toc;
In the 'WEIRD LINE', X(clvec) = xv;, you are using random access into a sparse matrix.
Access into a sparse matrix is not constant-time and depends on its data: the time may depend on the matrix values and on the indices you are trying to access.
This is not the case with a regular (full) matrix, where access time is usually stable, and faster.
To get stable, constant-time access, try to change the implementation based on your specific matrix usage and avoid assigning values by random access.
See the following code as a reference:
% build a sparse 100x100 matrix with 50 random entries
X = sparse(randi(100,50,1),randi(100,50,1),randn(1),100,100);
% generate 100 sets of 100 random linear indices into X
for i = 1:100
    rand_inds{i} = randperm(10000,100);
end
% time random-access assignment into the sparse matrix
for i = 1:100
    ti = tic;
    X(rand_inds{i}) = 3;
    to_X(i) = toc(ti);
end
% repeat with the full version of the same matrix
Xf = full(X);
for i = 1:100
    ti = tic;
    Xf(rand_inds{i}) = 3;
    to_Xf(i) = toc(ti);
end
figure; plot(to_X); hold on; plot(to_Xf,'r');
I solved my problem! I'm posting the answer because I think it's interesting.
One thing I didn't mention in the question is that the loop goes from k = 1 to k = L, and for ptype = 3, we add one more step, and that's assigning all the diagonal indices to 0:
X(diag_index) = 0;
where diag_index is computed ahead of time.
The problem is that instead of just storing the zeros, MATLAB automatically discards these entries from the sparse structure, and on the next loop iteration, when the diagonal indices are accessed again, it has to re-allocate storage for X. So, I changed that line to
X(diag_index) = eps;
and now they both run equally fast! (It's not the best solution, since that's going to be a source of error later, but there's no more mystery!)
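You can see this entry-dropping behavior directly (a minimal sketch, separate from the original code):
% assigning 0 into a sparse matrix removes the stored entry,
% so a later assignment to the same position must re-allocate it
S = speye(5);
nnz(S)         % 5 stored entries
S(1,1) = 0;
nnz(S)         % 4: the explicit zero was discarded
S(1,1) = eps;
nnz(S)         % 5 again: the entry was re-allocated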
The answer is never what you think it would be...
I was reading this post online where the person mentioned that using if statements and abs() functions can have negative repercussions in MATLAB's variable-step ODE solvers (like ode45). According to the OP, they can significantly affect the time step (requiring too small a step) and give poor results when the differential equations are finally integrated. I was wondering whether this is true, and if so, why. Also, how can this problem be mitigated without resorting to fixed-step solvers? I've given an example code below of what I mean:
function [Z,Y] = sauters(We,Re,rhos,nu_G,Uinj,Dinj,theta,ts,SMDs0,Uzs0,...
                         Uts0,Vzs0,zspan,K)

Y0 = [SMDs0;Uzs0;Uts0;Vzs0];                   %Initial Conditions
options = odeset('RelTol',1e-7,'AbsTol',1e-7); %Tolerance Levels
[Z,Y] = ode45(@func,zspan,Y0,options);

    function DY = func(z,y)
        DY = zeros(4,1);

        %Calculate Local Droplet Reynolds Numbers
        Rez = y(1)*abs(y(2)-y(4))*Dinj*Uinj/nu_G;
        Ret = y(1)*abs(y(3))*Dinj*Uinj/nu_G;

        %Calculate Droplet Drag Coefficient
        Cdz = dragcof(Rez);
        Cdt = dragcof(Ret);

        %Calculate Total Relative Velocity and Droplet Reynolds Number
        Utot = sqrt((y(2)-y(4))^2 + y(3)^2);
        Red = y(1)*abs(Utot)*Dinj*Uinj/nu_G;

        %Calculate Derivatives
        %SMD
        if(Red > 1)
            DY(1) = -(We/8)*rhos*y(1)*(Utot*Utot/y(2))*(Cdz*(y(2)-y(4)) + ...
                Cdt*y(3)) + (We/6)*y(1)*y(1)*(y(2)*DY(2) + y(3)*DY(3)) + ...
                (We/Re)*K*(Red^0.5)*Utot*Utot/y(2);
        elseif(Red < 1)
            DY(1) = -(We/8)*rhos*y(1)*(Utot*Utot/y(2))*(Cdz*(y(2)-y(4)) + ...
                Cdt*y(3)) + (We/6)*y(1)*y(1)*(y(2)*DY(2) + y(3)*DY(3)) + ...
                (We/Re)*K*(Red)*Utot*Utot/y(2);
        end

        %Axial Droplet Velocity
        DY(2) = -(3/4)*rhos*(Cdz/y(1))*Utot*(1 - y(4)/y(2));

        %Tangential Droplet Velocity
        DY(3) = -(3/4)*rhos*(Cdt/y(1))*Utot*(y(3)/y(2));

        %Axial Gas Velocity
        DY(4) = (3/8)*((ts - ts^2)/(z^2))*(cos(theta)/(tan(theta)^2))*...
            (Cdz/y(1))*(Utot/y(4))*(1 - y(4)/y(2)) - y(4)/z;
    end
end
Where the function "dragcof" is given by the following:
function Cd = dragcof(Re)
    if(Re <= 0.01)
        Cd = (0.1875) + (24.0/Re);
    elseif(Re > 0.01 && Re <= 260.0)
        Cd = (24.0/Re)*(1.0 + 0.1315*Re^(0.32 - 0.05*log10(Re)));
    else
        Cd = (24.0/Re)*(1.0 + 0.1935*Re^0.6305);
    end
end
This is because derivatives that are computed using if-statements, absolute-value operations (abs()), or things like unit step functions, Dirac deltas, etc., will introduce discontinuities in the value of the solution or its derivative(s), resulting in kinks, jumps, inflection points, etc.
This implies the solution to the ODE has a complete change in behavior at the relevant times. What variable-step integrators will do is:
detect this
recognize that they won't be able to use information directly beyond the "problem point"
decrease the step, and repeat from the top, until the problem point satisfies the accuracy demands
Therefore, there will be many failed steps and reductions in step size near the problem points, negatively affecting the overall integration time.
Variable-step integrators will continue to produce good results, however. Constant-step integrators are not a good remedy for this sort of problem, since they are not able to detect such problems in the first place (there's no error estimation).
What you could do is simply split the problem up into multiple parts. If you know beforehand at what points in time the changes will occur, you just start a new integration for each interval, each time using the output of the previous integration as the initial value for the next one.
If you don't know beforehand where the problems will be, you can use a very nice feature of MATLAB's ODE solvers called event functions (see the documentation). You let one of the solvers detect the event (a change of sign in the derivative, a change of condition in the if-statement, or whatever), and terminate the integration when the event is detected. Then start a new integration from the last time point, with the final conditions of the previous integration as initial conditions, as before, until the final time is reached.
There will still be a slight penalty in overall execution time this way, since Matlab will try to detect the location of the event accurately. However, it is still much better than running the integration blindly when it comes to both execution time and accuracy of the results.
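A minimal sketch of this restart pattern, reusing func, zspan, Y0 and options from the question; the event function crossing and the helper local_Red are hypothetical placeholders for whatever condition you want to detect (here, the Red = 1 crossing):
% integrate piecewise, restarting from each detected event
options = odeset('RelTol',1e-7,'AbsTol',1e-7,'Events',@crossing);
z0 = zspan(1); zf = zspan(end); y0 = Y0;
Z = []; Y = [];
while z0 < zf
    [z,y,ze] = ode45(@func,[z0 zf],y0,options); % run until event or zf
    Z = [Z; z]; Y = [Y; y];
    if isempty(ze), break; end    % no event fired: we reached zf
    z0 = z(end);                  % restart at the event location
    y0 = y(end,:).';              % with the final state as the new IC
end

function [value,isterminal,direction] = crossing(z,y)
    value = local_Red(z,y) - 1;   % zero when Red crosses 1 (placeholder helper)
    isterminal = 1;               % terminate the integration at the event
    direction = 0;                % detect crossings in both directions
end
In practice you may need to nudge z0 slightly past the detected event so the same terminal event does not immediately fire again.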
Yes, it is true, and it happens because your solution is not smooth enough at some points.
Assume you want to integrate y'(t) = f(t,y). Then whatever happens in f gets integrated to become y. Thus, if your definition of f contains
abs(), then f has a kink and y is still smooth, but only once differentiable;
an if, then f has a jump and y a kink, with no further differentiability.
MATLAB's ode45 presumes that your solution is 5 times differentiable and tries to ensure an accuracy of order 4. Non-smooth points of your function are misinterpreted as stiffness, which leads to small step sizes and even to breakdowns.
What you can do: because of the lack of smoothness, you cannot expect high accuracy anyway. Thus, ode23 might be a better choice. In the worst case, you have to stick to first-order schemes.
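Swapping the solver is a one-line change to the call from the question (same arguments, lower-order method):
% ode23 uses a lower-order pair, so non-smooth right-hand sides
% hurt it less than they hurt ode45
[Z,Y] = ode23(@func,zspan,Y0,options);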