Discretizing time derivative terms of a PDE in a Modelica model

I am trying to use Modelica to discretize a PDE model, but I am stuck on how to discretize the time derivative terms.
The typical method for a heat conduction PDE model uses the der operator instead of discretizing the time derivative terms.
What I am trying to do is discretize all the derivative terms in the equation, including the time derivative, but I am not sure how to express Q(t+Δt)-Q(t), because I don't know whether there is a mechanism in Modelica that allows me to use the values of a variable at different points in time.
My question is:
Is it possible to discretize the time derivative terms?

There is no simple support for it.
A simple possibility is to use der(Q) = (Q(t+Δt)-Q(t))/Δt, which basically gives the method of lines, https://en.wikipedia.org/wiki/Method_of_lines
To use that, you have to rewrite the equations from Q(t+Δt)-Q(t) = -u·Δt/Δx·(Q(t,i+1)-Q(t,i)) to (Q(t+Δt)-Q(t))/Δt = -u·(Q(t,i+1)-Q(t,i))/Δx, replace the left-hand side by der(Q), and use a normal discretization in the x-direction.
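As a minimal sketch of that approach (the model name, grid size n, velocity, and boundary closure are illustrative assumptions, not from the original answer):

model AdvectionMOL
  "Sketch: method of lines for Q_t = -u*Q_x (assumed example)"
  parameter Integer n = 10 "Number of grid points (assumed)";
  parameter Real u = 1.0 "Velocity";
  parameter Real dx = 1.0/n "Grid spacing";
  Real Q[n](each start = 0, each fixed = true);
equation
  // Same -u*(Q(t,i+1)-Q(t,i))/Δx stencil as in the question; time stays
  // continuous through der(), so any ODE solver can integrate the result.
  for i in 1:n-1 loop
    der(Q[i]) = -u*(Q[i+1] - Q[i])/dx;
  end for;
  der(Q[n]) = -u*(0 - Q[n])/dx "Boundary closure with an assumed outside value of 0";
end AdvectionMOL;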
If you really want it discretized exactly as in the text, there are two options:
Do as above and use Euler with the specific step size as the integration method (or, in more advanced cases, use the synchronous language elements with solverMethod="ExplicitEuler").
Manually write when sample(Δt, Δt) then Q = pre(Q) + Δt/Δx*... so that Q is updated at fixed sample instants, as sketched below.
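A minimal sketch of the second option, under the same illustrative assumptions (the stencil mirrors the question's Q(t,i+1)-Q(t,i) form; for u > 0 an upwind stencil would normally be preferred for stability):

model AdvectionSampledEuler
  "Sketch: fully discretized Q_t = -u*Q_x, updated only at sample instants (assumed example)"
  parameter Integer n = 10 "Number of grid points (assumed)";
  parameter Real u = 1.0 "Velocity";
  parameter Real dx = 1.0/n "Grid spacing";
  parameter Real dt = 0.5*dx/u "Fixed time step, kept within the CFL limit";
  discrete Real Q[n](each start = 0, each fixed = true);
equation
  when sample(dt, dt) then
    // Explicit Euler in time: pre(Q[i]) plays the role of Q(t), Q[i] of Q(t+Δt).
    for i in 1:n-1 loop
      Q[i] = pre(Q[i]) - u*dt/dx*(pre(Q[i+1]) - pre(Q[i]));
    end for;
    Q[n] = pre(Q[n]) "Simple boundary closure (assumed)";
  end when;
end AdvectionSampledEuler;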

Related

What kind of numerical method does MATLAB's 'pdepe' function use?

I'm using MATLAB's pdepe function to solve a problem with some partial differential equations, a parabolic one.
I need to know what kind of numerical method the function uses, because I have to state this in a report.
The description of the function in MathWorks is "Solve initial-boundary value problems for systems of parabolic and elliptic PDEs in one space variable and time". Is it a finite difference method?
Thanks for helping me.
Taken from the MATLAB R2016b documentation for pdepe:
The time integration is done with ode15s. pdepe exploits the
capabilities of ode15s for solving the differential-algebraic
equations that arise when Equation 1-3 contains elliptic equations,
and for handling Jacobians with a specified sparsity pattern.
Also, from the ode15s documentation:
ode15s is a variable-step, variable-order (VSVO) solver based on the
numerical differentiation formulas (NDFs) of orders 1 to 5.
Optionally, it can use the backward differentiation formulas (BDFs,
also known as Gear's method) that are usually less efficient
As indicated by Alessandro Trigilio, ode15s is used to advance the solution forward in time. Exactly what the function is advancing in time is a semi-discrete, second-order Galerkin formulation for non-singular problems or a semi-discrete, second-order Petrov-Galerkin formulation for singular problems (polar or spherical meshes that include the origin). As such, the spatial discretization is finite element in nature.
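For reference, here is a minimal sketch of a pdepe call for the 1-D heat equation u_t = u_xx (an assumed example; the mesh sizes and conditions are illustrative only, not from the answer):

m = 0;                              % slab (Cartesian) geometry
x = linspace(0, 1, 21);             % spatial mesh for the Galerkin discretization
t = linspace(0, 0.1, 11);           % output times; ode15s integrates in between
pdefun = @(x,t,u,dudx) deal(1, dudx, 0);        % c = 1, flux f = u_x, source s = 0
icfun  = @(x) sin(pi*x);                        % initial condition u(x,0)
bcfun  = @(xl,ul,xr,ur,t) deal(ul, 0, ur, 0);   % u = 0 at both boundaries
sol = pdepe(m, pdefun, icfun, bcfun, x, t);     % sol(j,k,1) is u at t(j), x(k)
surf(x, t, sol(:,:,1))                          % visualize the solution surface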

integral or trapz: which one is more appropriate in MATLAB?

I'm computing multiple integrals using MATLAB.
I'm using the integral function to compute them, but I was wondering: is it faster to use trapz instead of integral?
I know that trapz introduces a bit of error into the computation, but despite that, which is the best function to compute integrals in MATLAB?
Short and sweet:
Use trapz for discrete data, or for sampled functional data if you don't care about the (potentially extremely) low accuracy of the integral value
Use integral for integrands that have a functional form, adjusting tolerances as needed for speed.
As mentioned by the MATLAB documentation, trapz is intended "to perform numerical integrations on discrete data sets" and leverages the trapezoidal rule for the integrations. The error between the true integral and the trapz approximation is almost entirely dependent on the input x vector (sometimes called the abscissa in integration parlance) with no automatic adaptability. The good part is that if the underlying function is "nice" (i.e., continuous, smooth, no sharp peaks or excessive oscillations, etc.), trapz will likely be the fastest function to approximate the integral since it
Doesn't have to call a function for values (they're input)
Doesn't automatically adapt (which takes time and can be complex to implement).
However, for general integrals, trapz may also be the most inaccurate and may require a denser x vector to calculate a low-error value.
For discrete data, this is a shortcoming that must be lived with, but if the integrand has a functional form, integral and its family are highly recommended.
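To make the distinction concrete, here is a small comparison sketch (an assumed example, not part of the original answer); the true value of this integral is (sqrt(pi)/2)*erf(2):

f = @(x) exp(-x.^2);                 % smooth integrand with a functional form
x = linspace(0, 2, 50);              % fixed abscissa: trapz accuracy hinges on this
I_trapz = trapz(x, f(x));            % trapezoidal rule on the samples only
I_quad  = integral(f, 0, 2);         % adaptive quadrature on the function itself
I_tight = integral(f, 0, 2, 'RelTol', 1e-12, 'AbsTol', 1e-14);  % tightened tolerances
fprintf('trapz: %.10f  integral: %.10f\n', I_trapz, I_quad)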
The black-box numeric integrators in MATLAB have evolved over the years, and MathWorks co-founder Cleve Moler has a nice blog post going over some of the evolutions. The post discusses the quad, quadl, and quadgk functions and how quadgk became the core for integral and its ilk. The basic breakdown of the three functions is
quad uses a three-point and five-point Simpson's rule
quadl uses a four-seven-thirteen point[1] Lobatto-Kronrod[2] rule
quadgk uses a seven-fifteen point Gauss-Kronrod[2] rule
to acquire both an approximation of the integral and an error approximation for adaptive quadrature. The summary of the history lesson and test problems is that quadgk was written with vectorization incorporated[3], uses a higher-order rule which excludes end-points, and gives extremely accurate answers faster than its competitors. As a result, quadgk is the core of the new and highly-recommended integral family.
[1] Adaptive quadrature usually lists the number of points used to form its approximation of the value and the error. Typically, there are two numbers that indicate the number of points used to form the low-order and high-order approximations. quadl is interesting in that it uses a four-point Gauss-Lobatto rule and seven-point and thirteen-point Kronrod extensions for its error handling.
[2] Gaussian quadrature, which is an integration technique that chooses its abscissae to exactly integrate a family of polynomials over a given interval instead of prescribing them as in Newton-Cotes, has a lot of names associated with it to indicate a lot of "stuff" that's going on without being explicit about it (which can be very annoying to newcomers). "Gauss" refers to the aforementioned method of choosing abscissae and associated weights for the integration. "Lobatto" indicates an extension to Gauss-Legendre integration methods that incorporates end-points (others may not like my link between these two, but I find the parallels pleasing). "Kronrod" indicates an extension to any particular Gauss rule that creates a higher-order rule using a given set of abscissae and adding to it; this creates a "nesting" (the low-order points are part of the high-order point set) that results in fewer function evaluations overall.
[3] Since vectorization is written into integral, integrands or limits that are vector-valued must use the 'ArrayValued' flag to tell the program to make function evaluations differently so as not to create a size-mismatch error. It might be possible to program around this to a certain extent, but MathWorks decided not to.
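As a small illustration of the flag from footnote [3] (an assumed example):

F = @(x) [sin(x); cos(x); exp(-x)];           % vector-valued integrand, scalar x in
I = integral(F, 0, 1, 'ArrayValued', true);   % integrate each component separately
disp(I)                                       % 3x1 vector of component integrals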

Managing Navier-Stokes PDEs by means of SBF in Dymola

Has anyone tried to implement the Navier-Stokes partial differential equations (PDEs) in Modelica?
I found the method of spatial basis functions (SBF), which through numerical manipulation yields ordinary differential equations (ODEs) that can be handled by Dymola.
Regards,
Victor
The aim of the method I mentioned before is to convert PDEs into ODEs, so the issues with the CFL condition would disappear. The problem is that the Modelica.Fluid elements define their equations only in terms of the variables at the two ends of each component,
i.e. dp = port_a.p - port_b.p,
but with that sort of methodology, variables such as pressure, density, and mass flow would also be functions of the surrounding components; it would be a kind of massive interaction between all the components.
I would like to see an example in Modelica, because I have hardly found any information about that topic linked to Modelica.
Modelica is a language for modeling behavior described by DAEs. As such, as long as you can create a system of ODEs, you should be able to express your problem in Modelica.
However, if your PDEs are hyperbolic, the wave dynamics in the equations might cause some issues with simulation. This is because the CFL condition imposes limits on the time step that an ordinary differential equation solver will be unaware of. If the solver includes error control, it will probably manage to get a solution but may run quite slowly, because it won't know how to explicitly limit the simulation step size. If it doesn't include error control and it violates the CFL condition, the system will go unstable. Note that this only applies to systems where the CFL condition applies.
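For reference, for a 1-D problem with wave speed u and grid spacing Δx, the CFL condition takes the form Δt ≤ C·Δx/|u|, with the Courant number C of order 1 for explicit methods; a variable-step solver only discovers this limit indirectly, through its error estimates.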

About backpropagation and the sigmoid function

I have been reading this ebook about ANNs: https://www4.rgu.ac.uk/files/chapter3%20-%20bp.pdf
and I have a doubt about the effect of the sigmoid function in calculating errorB. The text says that if I have a threshold neuron I can use:
Target-Output
but because I have a sigmoid function involved I should add:
Output(1-Output)
and end up with:
ErrorB=OutputB(1-OutputB)(TargetB-OutputB)
I mean, why should I add the O(1-O) part? I have tried with different values, but I really do not get the intuition for why it should be that way.
Any help?
Thanks
As Kelu stated, that part of the equation is based on derivatives of your transfer function (in this case the sigmoid). To understand why you need derivatives, you need to understand how the delta rule works (*):
Your overall goal is to minimize the error in the network's output using gradient descent. Gradient descent itself tries to find a minimum of the error function (E) by taking steps proportional to the negative of the gradient. A gradient is simply the derivative, and the reason you're working with derivatives mathematically is that gradients point in the direction of the greatest rate of increase of the (error) function. Conclusion: since you want to minimize the error, you go the opposite way of the gradient.
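In symbols, each gradient descent step updates every weight as w_new = w_old - η·∂E/∂w, where η is the learning rate.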
This is the intuitive reason for using gradients. If you want the mathematical derivation, you should check this basic wiki article (additional comment, as it's not mentioned anywhere: the g'(x) in the article is the first derivative of g(x)).
Other transfer functions can be used, e.g. linear (in this case there is no g'(x) term as the derivative is simply a constant) or hyperbolic tangent in which case the derivative is something different again.
(*) The equation is derived by starting from the squared error of the output, E = ½·(Target - Output)², and minimizing it with respect to the weights.
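Working that through (a standard derivation, not reproduced from the book): the sigmoid g(x) = 1/(1+e^(-x)) has the convenient derivative g'(x) = g(x)·(1-g(x)). With Output = g(net), the chain rule gives
dE/dnet = -(Target - Output)·g'(net) = -(Target - Output)·Output·(1 - Output),
and the negative of this is exactly ErrorB = OutputB(1-OutputB)(TargetB-OutputB), which is where the O(1-O) factor comes from.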
It is like that because Output(1-Output) is the derivative of the sigmoid function (simplified). In general, this part is based on derivatives; you can try with functions other than the sigmoid, but then you have to use their derivatives too to get proper learning.
If you want, you can take a look at my implementation (it's far from perfect, but maybe you will get some ideas from it ;)); it's a simple project I made at my university - https://github.com/kelostrada/neuron-network

Modelling dynamical systems with MATLAB/Mathematica

Recently I have been performing simulations of a dynamical system in which all the dynamical quantities are interdependent. To simulate the dynamics, I therefore performed loops over small time steps dt << 1 and updated the quantities within each iteration. The simulations were done in Mathematica and MATLAB, respectively.
I got nice results, but the simulations could take quite long due to the slow iteration process. I generally hear that one should avoid for loops like the ones I have used, because they slow down the simulation greatly. On the other hand, however, I am clueless about how to do the simulations without iterating in small time steps. Therefore I ask you: for a dynamical system, where every quantity must be changed in ultra-small time steps, what are the possible methods for simulating the dynamics?
The straightforward approach is to write the problem as a set of differential equations and use the ODE solving capabilities of either system. Both MATLAB and Mathematica have advanced (and customizable) numerical differential equation solvers, and they both support special "events" in the differential equations that can't be expressed using a simple formula (e.g. the event of a ball bouncing back from the floor).
For Mathematica, first check out NDSolve and WhenEvent, then later the Advanced Numerical Differential Equation Solving tutorial.
From your description it sounds like you may be using a naive ODE solving method such as the Euler method. Using a better numerical ODE solving technique can give significant effective speedups (by not forcing you to use "ultra small time steps").
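As a rough sketch of that approach (an assumed example with a made-up right-hand side, not from the question), replacing a hand-rolled Euler loop with one of MATLAB's adaptive solvers looks like this:

f = @(t, y) [y(2); -0.1*y(2) - sin(y(1))];   % e.g. a damped pendulum as the system
[t, y] = ode45(f, [0 100], [1; 0]);          % adaptive steps with error control
plot(t, y(:,1))                              % no hand-written dt loop required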
If performance is paramount, consider re-implementing the simulation in a low-level language like C or C++, and possibly making it callable from Mathematica (LibraryLink) to allow easy data analysis and visualization.