I am working on a simple model which includes a derivative dy/dx, but in Modelica I can't write this equation directly. I could use the combination of x = time and der(y), but I think this is a compromise forced by a limitation of the Modelica language.
My question is:
Is there a better method to describe a derivative in Modelica?
Here is the code:
model HowToExpressDerivative "dy/dx=5, how to describe this equation in Modelica?"
  Real x,y;
equation
  x = time;
  der(y) = 5;
end HowToExpressDerivative;
I also tried to use der(y)/der(x) to express dy/dx, but there is an error when x equals time^2.
model HowToExpressDerivative "dy/dx=5, how to describe this equation in Modelica?"
  Real x,y;
equation
  x = time^2;
  der(y)/der(x) = 5;
end HowToExpressDerivative;
Error: The following error was detected at time: 0
Model error - division by zero: (1.0) / (der(x)) = (1) / (0)
Error: Integrator failed to start model.
... "HowToExpressDerivative.mat" creating (simulation result file)
ERROR: The simulation of HowToExpressDerivative FAILED
Given enthalpy h and crank angle phi, you could replace dh/dphi = ... by:
der(h)/der(phi) = ...
However, even if correct, that formula will break down when the engine is standing still (der(phi) = 0), so it is not ideal.
An alternative would be to rewrite the formula. Looking more closely, it seems to be of the form:
dh/dphi = (∂a/∂T)*dT/dphi + ...
which suggests that it could be rewritten as:
der(h) = (∂a/∂T)*der(T) + ...
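The same rewrite applies to the example above with x = time^2: multiplying dy/dx = 5 through by dx/dt gives der(y) = 5*der(x), so nothing is divided by der(x) anymore and the simulation also starts at time = 0. A minimal sketch of that form (same model, only the derivative equation changed):

model HowToExpressDerivative "dy/dx=5 via the chain rule, no division by der(x)"
  Real x,y;
equation
  x = time^2;
  der(y) = 5*der(x); // dy/dt = (dy/dx)*(dx/dt)
end HowToExpressDerivative;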
What is the idea behind double DQN?
The Bellman equation used to calculate the Q values for updating the online network is:
value = reward + discount_factor * target_network.predict(next_state)[argmax(online_network.predict(next_state))]
The Bellman equation used to calculate the Q value updates in the original DQN is:
value = reward + discount_factor * max(target_network.predict(next_state))
but the target network used for evaluating the action is updated with the weights of the online_network, so the value fed into the target is basically the old Q value of the action.
Any ideas how adding another network, based on the weights of the first network, helps?
I really liked the explanation from here:
https://becominghuman.ai/beat-atari-with-deep-reinforcement-learning-part-2-dqn-improvements-d3563f665a2c
"This is actually quite simple: you probably remember from the previous post that we were trying to optimize the Q function defined as follows:
Q(s, a) = r + γ maxₐ’(Q(s’, a’))
Because this definition is recursive (the Q value depends on other Q values), in Q-learning we end up training a network to predict its own output, as we pointed out last time.
The problem of course is that at each minibatch of training, we are changing both Q(s, a) and Q(s’, a’), in other words, we are getting closer to our target but also moving our target! This can make it a lot harder for our network to converge.
It thus seems like we should instead use a fixed target so as to avoid this problem of the network “chasing its own tail”, but of course that isn’t possible since the target Q function should get better and better as we train."
I have a function
b=2.02478;
g=3.45581;
s=0.6;
R=1;
p = @(r) 1 - (b./r).^2 - (g^-2)*((2/15)*(s/R)^9 *(1./(r - 1).^9 - 1./(r + 1).^9 - 9./(8*r).*(1./(r - 1).^8 - 1./(r + 1).^8)) -(s/R)^3 *(1./(r-1).^3 - 1./(r+1).^3 - 3./(2*r).*(1./(r-1).^2 - 1./(r+1).^2)));
options = optimset('Display','off');
tic
r2 = fzero(p,[1.001,100])
toc
tic
r3 = fsolve(p,[1.001,100],options)
toc
and the results are:
r2 =
2.0198
Elapsed time is 0.002342 seconds.
r3 =
2.1648 2.2745
Elapsed time is 0.048991 seconds.
Which one is more reliable? fzero returns different values than fsolve.
You should always look at the exit flag (or output struct) of a function, especially when your result is not as expected.
This is what I get:
fzero(func,[1.00001,100]):
X = 4.9969
FVAL
EXITFLAG = 1 % fzero found a zero X.
OUTPUT.message = 'Zero found in the interval [1.00001, 100]'
fzero(func,1.1):
X = 1
FVAL = 8.2304e+136
EXITFLAG = -5 % fzero may have converged to a singular point.
OUTPUT.message = 'Current point x may be near a singular point. The interval [0.975549, 1.188] reduced to the requested tolerance and the function changes sign in the interval, but f(x) increased in magnitude as the interval reduced.'
The meaning of the exit flag is explained in the MATLAB documentation:
1 Function converged to a solution x.
-5 Algorithm might have converged to a singular point.
-6 fzero did not detect a sign change.
So, based on this information it is clear that the first one gives you the correct result.
Why does fzero fail
As documented in the manual, fzero calculates the zero by finding a sign change:
tries to find a point x where fun(x) = 0. This solution is where fun(x) changes sign—fzero cannot find a root of a function such as x^2.
Therefore, X = 1 is also a solution of your formulation, as the sign changes at this location from +inf to -inf, as can be seen when plotting the function.
Note that it is always a good idea to provide a search range if possible as mentioned in the manual:
Calling fzero with a finite interval guarantees fzero will return a value near a point where FUN changes sign.
Tip: Calling fzero with an interval (x0 with two elements) is often faster than calling it with a scalar x0.
Alternative: fsolve
Note that this method is developed for solving systems of multiple nonlinear equations. Therefore, it is not as efficient as fzero (~20x slower in your case). fsolve uses gradient-based methods (check the manual for more information), which may work better in certain situations, but may get stuck in a local extremum. In this case, the gradient of your function gives the correct direction as long as your initial value is larger than 1. So, for this specific function fsolve is somewhat more robust than fzero with a single initial value, i.e. fsolve(func, 1.1) returns the expected value.
Conclusion: In general, for a single variable use fzero with a search range instead of a single initial value if possible, and use fsolve for multiple variables. If one method fails, you can try another method or another starting point.
As you can read in the documentation:
The algorithm, which was originated by T. Dekker, uses a combination of bisection, secant, and inverse quadratic interpolation methods.
So it is sensitive to the initial point and the interval in which it seeks the solution. Hence, you get different results for different initial values and search ranges.
My goal is to compute the n-fold self-convolution of a function rho(eta), where eta > 0, using MuPAD. (The background is energy densities of systems composed of many identical subsystems.) I tried to start with a simple case, but I'm already getting stuck there:
I define rho(eta) to be constantly 1 for eta > 0, so it is a Heaviside function:
rho := eta -> heaviside(eta)
and I implement the 2-fold self-convolution using a double integral and a Dirac delta function:
int(int(rho(etaA) * rho(etaB) * dirac(etaA + etaB - energy), etaB = 0..infinity), etaA=0..infinity)
with the result
so MuPAD wasn't even able to simplify the integral over a delta function and obtain a normal convolution expression; no idea what's going on with the limit here.
If I just directly enter the normal convolution expression of the function with itself
int(rho(etaA) * rho(energy - etaA), etaA = 0..infinity)
I get
again with a limit (which could be simplified to 0, couldn't it?). The second term actually comes close to the correct answer; the heaviside just accounts for the possibility that energy may be negative. OK, so I tell MuPAD that energy is positive:
int(rho(etaA) * rho(energy - etaA), etaA = 0..infinity) assuming energy > 0
and now MuPAD just gives me back the original unchanged integral:
Well, maybe using heaviside is the problem, and it is not strictly necessary anyway, since I implement the constraint eta > 0 through the integration limits. So I redefine
rho := eta -> 1
and use the formula with the delta function, plus the information that energy is positive:
int(int(rho(etaA) * rho(etaB) * dirac(etaA + etaB - energy), etaB = 0..infinity), etaA=0..infinity) assuming energy > 0
Guess what? Now MuPAD returns a heaviside by itself:
which is correct – but why doesn't it evaluate this integral? It's not that hard, is it?
So please anyone tell me: Why is all this happening? And how can I make it work?
For a prior on a bounded measure, I am trying to stretch a beta distribution to [-1,1], "[a]s described by Barnard, McCulloch & Meng (2000)" (according to this tutorial).
Specifically, I am trying to implement this suggestion:
rho_half_with ~ dbeta(1, 1)
# shifting and stretching rho_half_with from [0,1] to [-1,1]
rho ~ 2 * rho_half_with - 1
However, I always get
syntax error on line (...) near "2"
No entry I found in the JAGS or BUGS manuals deals with manipulations of distributions (as sources of stochastic relation assignments). Is it indeed possible to apply basic arithmetic operations to a BUGS/JAGS stochastic relation (following the ~ operator), and if so, how?
The problem with the code you have posted is that you use a ~ in a non-stochastic relation, where JAGS would want you to use <- instead. The following should work:
rho_half_with ~ dbeta(1, 1)
# shifting and stretching rho_half_with from [0,1] to [-1,1]
rho <- 2 * rho_half_with - 1
Regarding the error message you mention in the comments: you get it because you try to initialize a variable that is not stochastic (rho). Remove that initialization, or switch to initializing rho_half_with, to solve that problem.
Just a few questions, I hope someone will find time to answer :).
What if we have a COUPLED model, for example a system of n independent variables X and n nonlinear partial differential equations PDEf(X, PDEf(X)) with respect to TIME, i.e. partial differential equations that depend on the variables X and on PDEf(X)? Can you give some advice? Here is one example:
Let's say that c is the output, or desired, variable and that r is the independent variable. The partial differential equation looks like:
∂c/∂t = D*1/r + ∂c/∂r + 2*(D*∂²c/∂r²)
D = constant
r = 0:0.1:Rp is Matlab syntax; how do I represent the same thing in Modelica (I used an integrator, but it didn't work)?
Here is the code (it does not work):
model PDEtest
  /* Boundary conditions
     1. delta(c)/delta(r)=0 for r=0
     2. delta(c)/delta(r)=-j*d for r=Rp */
  parameter Real Rp=88*1e-3; // length
  parameter Real initialConc=1000;
  parameter Real Dp=1e-14;
  parameter Integer np=10; // num. of points
  Real cp[np](start=fill(initialConc,np));
  Modelica.Blocks.Continuous.Integrator r(k=1); // independent x1
  Real j;
protected
  parameter Real dr=Rp/np;
  parameter Real ts= 0.01; // for using when loop (sample(0,ts) )
algorithm
  j:=sin(time); // this should be indepedent variable like x2
  r.u:=dr;
  while r.y<=Rp loop
    for i in 2:np-1 loop
      der(cp[i]):=2*Dp/r.y+(cp[i]-cp[i-1])/dr+2*(Dp*(cp[i+1]-2*cp[i]+cp[i-1])/dr^2);
    end for;
    if r.y==Rp then
      cp[np]:=-j*Dp;
    end if;
    cp[1]:=if time >=0 then initialConc else initialConc;
  end while;
  annotation (uses(Modelica(version="3.2")));
end PDEtest;
Here are more questions:
1. This code doesn't work in OpenModelica 1.8.1, and it also doesn't work in the Dymola 2013 demo. How can we have a continuous function of the variable c, not an array of functions?
2. Can we place the values of the array cp in a combiTable? And how?
3. If "equation" is used instead of "algorithm", the code can't be checked successfully. Why? In OpenModelica the error is: could not flatten model.
4. Is there any simplified way to use a set of coupled equations (PDEs)? I know about the PDE libraries for Modelica, but I think they are complicated. I want to write a function for solving the PDE and call this function in a "main model", so that the output of the function is a continuous function c. I don't know what to do with an array of functions.
5. Can you give me advice on how to understand the Modelica language if we "speak" Matlab? For example, in Matlab we can specify the values of the independent variable r like r=0:TimeStep:Rp... How do we do the same in Modelica? And please explain how the "equation" section works: is there a similarity with Matlab, and is a sequential approach necessary?
Cheers :)
It's hard to answer your question, since you are assuming that Modelica ~ Matlab, but that's not the case. So I won't comment on your code, since it's really wrong. Let me give you an example model for the Burgers equation. Maybe you could use it as a starting point.
model burgereqn
  Real u[N+2](start=u0);                 // discretized solution, one value per grid point
  parameter Real h = 1/(N+1);            // grid spacing
  parameter Integer N = 10;              // number of interior points
  parameter Real v = 234;                // viscosity coefficient
  parameter Real Pi = 3.14159265358979;
  parameter Real u0[N+2] = {(sin(2*Pi*x[i]) + 0.5*sin(Pi*x[i])) for i in 1:N+2}; // initial profile
  parameter Real x[N+2] = {h*i for i in 1:N+2};                                  // spatial grid
equation
  der(u[1]) = 0;                         // left boundary held constant
  for i in 2:N+1 loop
    der(u[i]) = - ((u[i+1]^2 - u[i-1]^2)/(4*(x[i+1]-x[i-1])))
                + (v/(x[i+1]-x[i-1])^2)*(u[i+1] - 2*u[i] + u[i-1]); // central second difference
  end for;
  der(u[N+2]) = 0;                       // right boundary held constant
end burgereqn;
Your further questions:
1. cp is a continuous variable, and the array represents every discretization point.
2. Why would you want to do that? As far as I understand, cp is your desired solution variable.
3. You should almost always use an equation section; algorithm sections are usually used in functions. I'm pretty sure you can represent your desired behaviour with equations.
4. I don't know that library, but the hard part of a PDE is the discretization and the solving itself. You may run into issues while solving the PDE with a Modelica tool, since a Modelica tool usually has no specialized solving algorithm for PDEs.
5. Please consider further references for that question. You could start with Modelica.org.
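On the last point, about the Matlab-style range r = 0:TimeStep:Rp: in Modelica a spatial grid is normally just a parameter array built with an array comprehension, like the x vector in the example above, not something produced by an Integrator block. A minimal sketch using the names from your model (Rp, np, dr):

  parameter Real Rp = 88e-3;
  parameter Integer np = 10;
  parameter Real dr = Rp/np;
  parameter Real r[np+1] = {i*dr for i in 0:np}; // grid points 0, dr, 2*dr, ..., Rp

Time is the only independent variable a Modelica tool integrates over (that is what der() refers to), and the equations in an equation section are declarative and unordered, so there is no sequential evaluation as in a Matlab script.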