Resolving problems with Algebraic Loops in SIMULINK models

MY PROBLEM
I have a SIMULINK model that has a feedback loop, a.k.a. an algebraic loop, which is causing an error in the simulation. My original solver configuration was Fixed-Step with Dormand-Prince (Order 5). I also tried Fixed-Step with Order 3 (Runge-Kutta). It still has not resolved the issue.
MY QUESTION
Is there any way to resolve the algebraic loop without altering the original performance or design of the circuit too much?
CLARIFICATION FOR THE QUESTION
I am sure that there will be a way to solve this. However, I don't want it to compromise the original performance of the circuit. Moreover, this is customer-supplied data, and it seems that they have managed to get it working fine. I simply need somebody to point me in the right direction regarding how to solve this.
MY Approach so far
I tried to break the loop using:
i) A Switch (if ip = 0, op = 0; if ip = 1, op = 1). I know it is stupid, but it is a different block that breaks the loop.
ii) A logic gate (XORing the feedback signal with 0).
Unfortunately, I don't know how to set up the zero-order hold / unit delay in the loop, which seems to be another commonly prescribed solution for this kind of problem. I also believe it may cause issues with my model's originally intended performance.
I have posted a query on the MathWorks website, but there has been no response so far. So I thought: why not Stack Overflow? Below is the image.

Add a Unit Delay block on the feedback signal (from Logical Operator1 to Logical Operator4).
The unit delay provides the previous value of the output signal; this won't affect most of the circuitry in this scenario.
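If you prefer to make this change programmatically, a minimal MATLAB sketch could look like the following (the model name 'myModel' and the port numbers are assumptions; adjust them to your model):

sys = 'myModel';                                            % hypothetical model name
load_system(sys);
% Remove the direct feedback connection (ports assumed to be 1 -> 1).
delete_line(sys, 'Logical Operator1/1', 'Logical Operator4/1');
% Insert a Unit Delay from the Simulink Discrete library; '-1' inherits the sample time.
add_block('simulink/Discrete/Unit Delay', [sys '/Loop Breaker'], ...
    'SampleTime', '-1');
set_param([sys '/Loop Breaker'], 'InitialCondition', '0');  % output at the first step
% Rewire: Logical Operator1 -> Unit Delay -> Logical Operator4.
add_line(sys, 'Logical Operator1/1', 'Loop Breaker/1', 'autorouting', 'on');
add_line(sys, 'Loop Breaker/1', 'Logical Operator4/1', 'autorouting', 'on');

The initial condition of the delay is what the downstream logic sees at the first time step, so pick it to match the expected idle state of the feedback signal.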

There isn't a "one size fits all" answer when it comes to algebraic loops. Here are a few resources about algebraic loops:
What are algebraic loops in Simulink and how do I solve them?
How can I resolve algebraic loops in my Simulink model in Simulink 6.5 (R2006b)?
Algebraic Loops in the Simulink documentation
and there are many others...
In your case, I would suggest highlighting the algebraic loop (as per the doc in the hyperlink above) and trying to insert a unit delay in the loop; the doc shows how to do this. Other suggestions would be to try the algebraic loop solver or the model parameters related to algebraic loops, or to place an IC or Algebraic Constraint block in the loop. Again, refer to the doc in the hyperlink above for details. I assume you are constrained to a fixed-step solver and cannot switch to a variable-step solver.
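As a rough sketch of the solver-related settings mentioned above (the model name is a placeholder, and the exact parameter values depend on your release):

mdl = 'myModel';                              % hypothetical model name
% Choose the algebraic loop solver ('TrustRegion' or 'LineSearch').
set_param(mdl, 'AlgebraicLoopSolver', 'TrustRegion');
% Control how Simulink reports algebraic loops: 'none', 'warning' or 'error'.
set_param(mdl, 'AlgebraicLoopMsg', 'warning');
% In newer releases you can also list the loops and the blocks in them:
% Simulink.BlockDiagram.getAlgebraicLoops(mdl);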

Related

How to improve the convergence performance of Dymola?

Recently I have been working on fluid modeling with Modelica, but I keep coming across divergence problems in the nonlinear equations, like in the following screenshot.
So I am considering whether it is possible to use the min/max/nominal attributes of variables to improve the model's convergence, especially when a user comes across a nonlinear solver failure. According to the answer to this question on Stack Overflow, min/max attributes won't help convergence, and based on Modelica Specification 4.8.6, nominal attributes are used to determine appropriate tolerances or epsilons, or may be used for scaling.
So my question is:
If I meet this kind of divergence problem caused by the nonlinearity of my model, how can I help the compiler achieve convergence better and more quickly?
Someone might suggest better start values for the variables used as state variables, but when I am dealing with large models, I am not sure how to find the specific state variables whose start values I should modify.
Chapter 2.6.13 "Online diagnostics for non-linear systems" in manual 1B, and the sections that follow it, should help. You can, e.g., list the states that dominate the error: usually these states are a good hint about where to start your improvements.
Adding to the answer by Imke Krueger.
If the model fails after 2917 s, one possibility is that the solution was diverging before that, with, e.g., enthalpy decreasing further and further until the model has left the valid region.
Assuming it happened fairly slowly, it is best to plot the states and other variables in those components, as well as the states dominating the error as indicated in the answer by Imke Krueger, and see if any of them seem to diverge.
If it happened more quickly:
Log events and check whether something important like a flow reversal just happened before that time.
Disable equidistant output, as it is possible that the model diverged between two output points.
An eigenvalue-based analysis of the Jacobian at time = 0 provides a ranking of state variables from the most significant to the least significant. That could be a heuristic for examining the influence of the start values of the most significant state variables.
What could also be helpful is to conduct a similar analysis shortly before the problem occurs.
There is also the possibility of computing dynamic parameter sensitivities of the state variables (before the problem occurs) w.r.t. the start values, see e.g. https://github.com/Mathemodica/DerXP for a suggested approach. This gives you a hint about which start values significantly influence the values of the state variables.
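To illustrate the eigenvalue-based ranking, here is a rough MATLAB sketch. It assumes you have already exported a linearized A matrix and a matching list of state names from your tool (both are placeholders here), and it uses the eigenvector entries as a simplified stand-in for a full participation-factor analysis:

% A:          n-by-n system matrix from a linearization at t = 0
% stateNames: n-by-1 cell array with the corresponding state names
[V, D] = eig(A);
lambda = diag(D);
% Pick the mode of interest, e.g. the fastest / least damped eigenvalue.
[~, k] = max(abs(real(lambda)));
% Rank states by how strongly they participate in that mode.
[weight, order] = sort(abs(V(:, k)), 'descend');
disp(table(stateNames(order), weight, 'VariableNames', {'State', 'Weight'}))

The states at the top of this list are reasonable candidates for tightening start values or checking physical plausibility.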

Issues with the "Chattering" example in Chapter 2 of Modelica by example

Here is the link to the model: https://mbe.modelica.university/behavior/discrete/decay/#chattering
The simulation results in Dymola 2021 for the following two runs would be (result plots not reproduced here):
model WithChatter (stopTime = 1.001 s)
model WithChatter (stopTime = 1.5 s)
As we can see, the noEvent operator does decrease the CPU time, but it also makes the system stiff. It would be easier to understand with more explanation of why noEvent would make the system stiff.
Based on the event logging of model WithChatter, the simulation process actually uses the minimum time step because der(x) is not a continuous function. But why doesn't this approach suit the model WithNoEvents? (https://mbe.modelica.university/behavior/discrete/decay/#speed-vs-accuracy)
If the noEvent operator means using the integrator directly, does it require the functions in the equation system to be continuous? Does this mean that the model used in the chattering example (https://mbe.modelica.university/behavior/discrete/decay/#chattering) isn't appropriate, since the function in this model is not continuous?
The model used in the chattering example isn't appropriate since it is not continuous, and the error message from dassl is just a boilerplate message; the model isn't stiff but discontinuous, as you found.
Markus A has a good point in the related question "When to use noEvent operator in Modelica language?": using noEvent to avoid chattering is in general not a good idea, and one should normally try to rewrite the model instead of adding noEvent.
This specific model is sort of similar to a friction model where you would have
der(v) = (if v >= 0 then -1 else 1) + f_other/m;
The solution for friction is not to introduce noEvent, but to add a stuck mode as in Modelica.Mechanics.Rotational.Components.BearingFriction.

Any suggestion for solving linear equations with two unknowns to be assumed?

I am trying to solve a "linearized" linear system of equations, which requires two parameters to be estimated by iteration because of the linearization. The actual problem is nonlinear, but it is linearized using a Fourier series method.
I have been solving the linear system with matrices and SVDs, which does not take much time, but these matrices depend on the two parameters that are to be solved iteratively. In the end, I just need to make sure that one of the parameters I solve for iteratively matches the response I get from the system. This is the criterion to be minimized.
I have been using fmincon and MultiStart to solve for the two parameters, and I get some results, but it is taking longer than I expect. There is a local-minima issue too, which is why I had to include MultiStart.
Does anyone have an idea whether another method would make this problem easier to solve?
I really appreciate it.
A global optimization method that one may use is simulated annealing.
MATLAB may have a relevant routine.
There is also free simulated annealing software that you may try.
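If the Global Optimization Toolbox is available, simulannealbnd is one such routine; a minimal sketch (reusing the hypothetical residualError objective and assumed bounds from above) would be:

objective = @(p) residualError(p(1), p(2));   % placeholder objective
x0 = [0.5, 0.5];                              % assumed starting point
lb = [0, 0];  ub = [1, 1];                    % assumed bounds
opts = optimoptions('simulannealbnd', 'MaxIterations', 2000);
[bestP, bestErr] = simulannealbnd(objective, x0, lb, ub, opts);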
I got an improvement in my problem. I had only replied in the comments, but I think it is worth putting it here, since what I did revealed something unexpected:
I ran a Monte Carlo simulation over the two variables to be solved iteratively and plotted how the error changes with respect to them. I realized that there are tons of local minima in the error of the response, and that is why fmincon was not able to solve it on its own: it was quickly jumping into one of those local-minima holes, and I needed a very refined MultiStart for fmincon to get the global minimum. This is a very interesting observation, because I wasn't expecting such a rough error distribution with respect to the two parameters.
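A minimal sketch of such a Monte Carlo scan of the error surface (again with a hypothetical residualError and assumed parameter ranges) could be:

N = 2000;                                 % number of random samples
p1 = rand(N, 1);  p2 = rand(N, 1);        % assumed range [0, 1] for both
err = zeros(N, 1);
for k = 1:N
    err(k) = residualError(p1(k), p2(k)); % placeholder objective
end
% Visualise the error surface to see how rough it really is.
scatter3(p1, p2, err, 10, err, 'filled');
xlabel('p_1'); ylabel('p_2'); zlabel('error'); colorbar;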
Is there any efficient solver/optimizer in MATLAB that you know of to get the global minimum in cases where there are many local minima? Or any other method?
Thanks,

MATLAB - output of the algorithm

I have a program using a PSO algorithm with a penalty function for constraint satisfaction. But when I run the program for different iterations, the output of the algorithm is:
"Iteration 1: Best Cost = Inf"
.
Does anyone know why I always get an Inf answer?
There could be many reasons for that, none of which can be pinned down unless you provide an MWE with the code you have already tried, or some context for the function you are analysing.
For instance, while studying the PSO algorithm you might first use it on functions which have analytical solutions. By doing this you can study the behaviour of the algorithm before applying it to a similar problem, and fine-tune its parameters.
My guess is that you are not providing the right function (I have done that already; getting a signal wrong is easy!), or the right constraints (the same logic applies), or that your weights for the penalty function and velocity update are way off.
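One common way to end up with Best Cost = Inf (an assumption here, since no code was posted) is a penalty function that returns Inf for any constraint violation, so that no particle ever reports a finite cost. A finite, weighted quadratic penalty, sketched with a hypothetical objective f(x) and inequality constraints g(x) <= 0, keeps the costs comparable:

% f(x) and g(x) are placeholders for your objective and constraint functions.
penaltyWeight = 1e3;   % tuning knob: too small ignores constraints,
                       % too large swamps the objective
costFcn = @(x) f(x) + penaltyWeight * sum(max(0, g(x)).^2);
% If, instead, the cost is set to Inf whenever a constraint is violated and
% all particles start infeasible, the reported Best Cost stays Inf.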

Modelling a resonator in Simulink

I have been trying to model a Fabry-Perot resonator in Simulink. I am not sure whether Simulink is the right choice for this task, but I have been getting some results, at least. However, I have also been getting an algebraic loop error when I use a different pair of coupling/reflection parameters. It says:
"Simulink cannot solve the algebraic loop containing
'jblock_multi_MR/Meander2b/Subsystem3/Real-Imag to Complex' at time
6.91999999999991 using the LineSearch-based algorithm due to one of the
following reasons: the model is ill-defined i.e., the system equations do
not have a solution; or the nonlinear equation solver failed to converge
due to numerical issues.
To rule out solver convergence as the cause of this error, either
a) switch to TrustRegion-based algorithm using
set_param('jblock_multi_MR','AlgebraicLoopSolver','TrustRegion')
b) reducing the VariableStepDiscrete solver RelTol parameter so that
the solver takes smaller time steps.
If the error persists in spite of the above changes, then the model is
likely ill-defined and requires modification."
Changing the solver does not help. As a note, I implemented the system in terms of electric fields, so the signals are naturally complex.
Thanks for any help.
There is no magic solution for solving algebraic loop issues, as these issues tend to be very model-dependent. Here are a few pointers though:
What are algebraic loops in Simulink and how do I solve them?
How can I resolve algebraic loops in my Simulink model in Simulink 6.5 (R2006b)?
Algebraic Loops in the Simulink documentation
See also this answer to a similar question on SO, with some suggestions for breaking the loop.