Why does putting a filter on an output modify the source signal? Is this a Simulink bug? - MATLAB

I know it sounds strange and that this is a bad way to phrase a question, but let me show you this odd behavior.
As you can see, this signal, r5, is nice and clean, exactly what I expected from my simulation.
Now look at this:
This is EXACTLY the same simulation; the only difference is that the filter is now not connected. I tried for hours to find a reason, but it seems like a bug.
This is my file; you can test it yourself by disconnecting the filter.
---- Edited:
I tried it with Simulink 2014 and on a friend's 2013, on two different computers. If someone can test it on 2015, that would be great.
(Attaching the filter to any other r, r1-r4 included, ''fixes'' the noise on ALL of r1-r8. I tried putting it on other signals, but the noise won't go away.)
The expected result is exactly the smooth one. This file has proven quite robust in other simulations (so I guess the math inside the blocks is good), and this case happens only with one of the two ''link number'' inputs (one input on the top left) set to 4, although a small noise also appears with one ''link number'' set to 3.
Thanks in advance for any help.

It seems to me that the only thing the filter could affect is the time step used in the integration, assuming you are using a variable (dynamic) time step, which is the default. So my guess is that, if this is not a bug, your system is numerically unstable/chaotic. It could also be related to noise caused by differentiation; differentiating noise over a smaller time step mostly makes things even worse.
Solvers such as ode23 and ode45 use a variable time step. ode23 compares a second- and a third-order integration and accepts the third-order result if the difference between the two is not too big. If the difference is too big, it repeats the calculation with a smaller time step. ode45 does the same with a fourth- and fifth-order calculation: more accurate, but more sensitive. Instabilities can occur if a smaller time step makes things worse, which can happen if you differentiate noise.
To overcome the problem, try using a fixed time step, change your precision/solver, or, better, avoid differentiation: use some type of state estimator to obtain derivatives, or calculate them analytically. For example, you can force a fixed-step solver from the MATLAB command line, as sketched below.
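A minimal sketch of switching to a fixed-step solver programmatically; 'mymodel' is a placeholder for your model's name, and ode4 with a 1e-3 step are arbitrary example choices, not a recommendation for your particular system:
mdl = 'mymodel';                                  % placeholder model name
load_system(mdl);                                 % load without opening the GUI
set_param(mdl, 'SolverType', 'Fixed-step', ...    % switch off the variable-step solver
               'Solver', 'ode4', ...              % classic fixed-step Runge-Kutta
               'FixedStep', '1e-3');              % step size in seconds (example value)
sim(mdl);                                         % rerun the simulation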


scipy.integrate.odeint time-dependent step size

I have the following problem:
I have to use an ODE solver to solve a chemical reaction equation. The rate constants are functions of time and can change suddenly (a pulse from an electric discharge).
One way to solve this is to keep the step size very small, hmax < dt. This results in a high computational effort and is time consuming. My question is: is there an efficient way to make this work? I thought about defining hmax(puls_ON) with puls_ON=True during the pulse and puls_ON=False between pulses. However, since dt increases over time, the solver may not even recognize the pulse, because the time interval keeps growing, i.e. hmax=hmax(t).
A time grid would be the best option, I think, but I don't think this is possible with odeint?
Or is it possible to somehow force the solver to integrate at a specific point in time (e.g. t0 ->(hmax=False)->tpuls_1_start->(hmax=dt)->tpuls_1_end->(hmax=False)->puls_2_start.....)?
Thanks.
There is an optional parameter tcrit for odeint that you could try:
Vector of critical points (e.g. singularities) where integration care should be taken.
I don't know what it actually does, but it may help the solver not to simply step over the pulse.
If that does not work, you can of course manually split your integration into different intervals: integrate until tpuls_1_start, then restart the integration using the results from the previous interval as initial values, as sketched below.
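A rough Python sketch of that splitting approach (the rate function rhs, the pulse window, the step caps, and the toy data are all made-up placeholders for your own problem):
import numpy as np
from scipy.integrate import odeint
def rhs(y, t, k):
    return -k(t) * y                          # toy rate equation with a time-dependent rate
k = lambda t: 1.0 + 9.0 * (2.0 <= t <= 2.1)   # rate jumps during the (hypothetical) pulse
y0 = np.array([1.0])
segments = [(0.0, 2.0, None),                 # before the pulse: default step control
            (2.0, 2.1, 1e-4),                 # during the pulse: cap the step with hmax
            (2.1, 5.0, None)]                 # after the pulse: default step control
ts, ys = [], []
for t_start, t_end, hmax in segments:
    t = np.linspace(t_start, t_end, 200)
    kwargs = {'hmax': hmax} if hmax else {}
    y = odeint(rhs, y0, t, args=(k,), **kwargs)
    y0 = y[-1]                                # restart from the last state of this segment
    ts.append(t)
    ys.append(y)
t_all = np.concatenate(ts)                    # stitched time axis
y_all = np.concatenate(ys)                    # stitched solution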

How to analyze scale-free signals and get signal properties

I am new to signal processing. I have the following signals, which I obtained after some pre-processing of the original signals.
You can see that some of them have similarities with others and some don't, but the problem is that they have various ranges (in this example, from 1000 to 3000).
Question
How can I analyze their properties in a scale-free way (by properties I mean statistical properties of the signals, or whatever else is useful)?
Note that I don't want to cross-compare the signals; I just want independent signal signatures that I can run some process on later.
Anything would help.
If you want to make a filter that separates signals that follow this pattern from signals that don't, well, there's tons of things you could do!
Just think practically. As a first shot at it, you could do something like this (in this order):
Check if the signals are all-positive
Check if the first element is close in value to the last element
Check if the maximum lies "in the middle" somewhere
Check if the first value is small, then the signal grows, then shrinks again
Check if the growth rates are gradual. You could for example analyze their derivatives (after smoothing):
a. derivative should be all-positive for a while, then all-negative.
b. derivative should be smooth (no jumps greater than some tolerance)
Without additional knowledge about the signal's nature/origin, it's going to be hard to come up with more meaningful metrics than these... A rough sketch of the checks above in MATLAB follows.
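Here is a hedged MATLAB sketch of those checks for a single signal vector x; the function name looks_like_pattern, the thresholds, and the smoothing window are arbitrary placeholders that you would tune to your data:
function ok = looks_like_pattern(x)
    x = x(:).';                                             % work with a row vector
    spread = max(x) - min(x);                               % scale reference for tolerances
    ok = all(x >= 0);                                       % 1) all-positive
    ok = ok && abs(x(1) - x(end)) < 0.1*spread;             % 2) first and last values are close
    [~, imax] = max(x);
    ok = ok && imax > 0.2*numel(x) && imax < 0.8*numel(x);  % 3) maximum lies "in the middle"
    xs = movmean(x, 9);                                     % smooth before differentiating
    d  = diff(xs);
    ok = ok && all(d(1:imax-1) > -1e-6) && all(d(imax:end) < 1e-6);  % 4/5a) grows, then shrinks
    ok = ok && max(abs(diff(d))) < 0.05*spread;             % 5b) no sudden jumps in the slope
end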

Simulink: PID Controller - difference between back-calculation and clamping for anti-windup?

I need to implement anti-windup (output limitation) for my PID controller. Simulink offers two options: back-calculation and clamping (documentation), which seem to deliver equal results. I know what back-calculation does mathematically. It requires defining the back-calculation gain Kb. This gain depends on how long my controller is saturated, so it is actually a dynamic value (because I may have a high variation of saturation times). Do you see a way to control this value? (In this case it would probably be necessary to build my own PID controller, as shown in the documentation above or in the picture below.)
Which brings me to the question: what is clamping actually doing? And what are the other differences? Which one is faster, and which one is more robust against steep slopes? Does anybody have experience using both?
Not sure if this fully answers the question, but the PID Controller documentation page explains a bit more about clamping:
Clamping: Stops integration when the sum of the block components exceeds the output limits and the integrator output and block input have the same sign. Resumes integration when the sum of the block components exceeds the output limits and the integrator output and block input have opposite sign. The integrator portion of the block is: [equation shown in the documentation]
The clamping circuit implements the logic necessary to determine whether integration continues.
If you select the clamping option and look under the mask, you can probably see the details of the clamping circuit.
In addition to am304's answer, there are some more things to consider.
Clamping
Clamping will always work. It detects when there is integrator overflow and, to avoid windup, sets the integral path of the PID controller to zero using a simple switch (a rough sketch follows below).
Clamping is a commonly used anti-windup method, especially in digital control systems. In serious applications, however, there is also forward clamping involved, which evaluates the controller input as well. This mechanism must be implemented manually.
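As a rough illustration of the clamping idea (not the exact circuit under the Simulink PID mask), one step of a discrete PI update might look like this; Kp, Ki, Ts, u_min, u_max, the error e, and the integral term I are assumed to be given:
u_unsat = Kp*e + I;                              % raw (unsaturated) controller output
u = min(max(u_unsat, u_min), u_max);             % apply the output limits
saturated = (u ~= u_unsat);
same_sign = (sign(e) == sign(I));
if ~(saturated && same_sign)                     % clamp: freeze the integrator while winding up
    I = I + Ts*Ki*e;                             % otherwise keep integrating
end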
Back Calculation
Back-calculation depends heavily on the back-calculation coefficient Kb. If you don't know how to actually calculate the parameter Kb, don't use back-calculation. This method calculates the difference between the actual controller output and the saturated output and subtracts it from the I-gain path, amplified by Kb.
In most cases the default value Kb = 1 will lead to worse results than clamping; it is even possible that it has no effect at all. Kb should be calculated based on the sampling time or, in case a D-gain is involved, based on the D- and I-gains. Appropriate literature should be consulted to calculate the coefficient. Back-calculation with a properly set coefficient enables better dynamics than clamping! A rough sketch of the feedback term follows below.
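A comparable discrete PI step with back-calculation, again only a hedged sketch with Kp, Ki, Kb, Ts, u_min, u_max, e, and I assumed given:
u_unsat = Kp*e + I;                              % raw (unsaturated) controller output
u = min(max(u_unsat, u_min), u_max);             % apply the output limits
I = I + Ts*(Ki*e + Kb*(u - u_unsat));            % (u - u_unsat) bleeds off the windup, scaled by Kb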

diagnostic for MATLAB ODE

I am solving a stiff PDE in MATLAB using ode15s, and it often freezes depending on the initial conditions. I never actually get an error; it just won't finish, even after 10 hours, when it should take around 30 seconds to run. I am experimenting with different spatial and time node intervals, but it is hard because I don't get any feedback.
Is there some sort of equivalent to the diagnostics for fsolve? stats is not useful because it only displays output after fsolve is finished.
Check out the documentation on odeset, and specifically the Stats option. I think you basically just want to set Stats to 'on' and you will get some feedback.
Also, depending on your ODE, you may need a different solver. About halfway down that page there is a list of most of the solvers available in MATLAB. Depending on whether your function is stiff or non-stiff, and how accurate you need to be, one of them might work better for you. Sometimes I just code them all in and comment out all but one until I find the one that runs best for me, but check out the documentation on each if you want to find the "right" one for your application.
Your question is confusing because you refer to both ode15s and fsolve locking up. These are two completely different functions. One does numerical integration and the other solves for roots. Also, fsolve has no option called 'Stats' (see doc fsolve). If you want continuous output from fsolve use:
options = optimset('Display','iter');
[x,fval,exitflag] = fsolve(myfun,x0,options)
This will display the iteration count, number of function evaluations, the function value, and other information depending on which algorithm you use (the algorithm can be adjusted via the 'Algorithm' option). Again, see doc fsolve for full details.
As far as the 'Stats' option with ode15s goes, it's not going to give you very much information. I doubt that it will help you figure out why your system is halting (if it even is ode15s that you have a problem with). What you can try is using an output function via the 'OutputFcn' option of odeset. You can try the simple odeprint first:
options = odeset('OutputFcn',@odeprint)
which will print your state after each integration step. Type edit odeprint to see the code and how you might write your own output function if you need to do more; a minimal sketch of such a function is shown below.
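For example, a custom output function (the name myOutput is a placeholder) that prints the current time after every step, so you can see where ode15s stalls, might look like this:
function status = myOutput(t, y, flag)
    % Called by the solver; flag is 'init' at the start, '' after each step,
    % and 'done' at the end. Return a nonzero status to halt the solver.
    switch flag
        case 'init'
            fprintf('Starting integration over [%g, %g]\n', t(1), t(end));
        case ''
            fprintf('t = %g, max|y| = %g\n', t(end), max(abs(y(:,end))));
        case 'done'
            fprintf('Integration finished\n');
    end
    status = 0;
end
Pass it to the solver with options = odeset('OutputFcn',@myOutput).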

max likelihood fminsearch

I used MATLAB's fminsearch for a negative maximum-likelihood model for a binomially distributed function. I don't get any error notice, but the parameter I want to estimate always ends up at its start value. Apparently there is a mistake. I know I am asking a totally general question, but is it possible that anybody has made the same mistake and knows how to deal with it?
Thanks a lot,
@woodchips, thank you a lot. Step by step, I've tried to do what you advised me. First of all, I am actually working with -log(likelihood), so this is not the problem. I think I found the problem, but I still have some questions, if I may. I have a model(param) to optimize starting at paramstart=p1. This model is built from -log(likelihood(F)), and my F is a vectorized function like F(t,Z,X,T,param,m2,m3,k,l). I have data (tdata,kdata,ldata), X and T are grids, Z is a function on this grid, and (m1,m2,m3) are given parameters. When I evaluate F(tdata,Z,X,T,m1,m2,m3,kdata,ldata), I get a good output. But I think fminsearch treats F(tdata,Z,X,T,p,m2,m3,kdata,ldata) as a constant, and that's why I always get the start value as the estimated parameter. I would be happy if you have any advice on how to fix that.
You have some options you can try to tweak. I'd start with the algorithm.
It is also problematic when the function value practically doesn't change around your start point. Maybe switching to the log-likelihood helps.
I always use fminunc or fmincon. They also allow providing the Hessian (typically better than an estimated one) or 'typical values', so the algorithm doesn't spend time in infeasible regions.
It is virtually always true that you should NEVER maximize a likelihood function, but ALWAYS maximize the log of that function (equivalently, minimize the negative log-likelihood). Floating-point issues will almost always corrupt the problem otherwise. That your optimization starts and stops at the same point is a good indicator that this is the problem. A toy example of this setup is sketched below.
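A hedged toy sketch of that setup: estimating a binomial success probability p by minimizing the negative log-likelihood with fminsearch. The data n and k and the clamping of p into (0,1) are made-up illustration details, not the poster's model:
n = 20;                                   % trials per observation (toy data)
k = [7 9 8 6 10 8];                       % observed successes (toy data)
% Negative log-likelihood; the constant binomial coefficients are dropped.
negloglik = @(p) -sum(k*log(p) + (n - k)*log(1 - p));
p0   = 0.5;                               % start value
phat = fminsearch(@(p) negloglik(min(max(p, 1e-6), 1 - 1e-6)), p0)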
You may well need to dig a little deeper than the above, but even so, this next test is the test I recommend that all users of optimization tools do for every one of their problems, BEFORE they throw a function into an optimizer. Evaluate your objective for several points in the vicinity. Does it yield significantly different values? If not, then look to see why not. Are you creating a non-smooth objective to optimize, or a zero objective? I.e., zero to within the supplied tolerances?
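For instance, with the toy objective from the sketch above (negloglik and p0 are the placeholder names used there), the check could be as simple as:
for dp = [-0.05 -0.01 0 0.01 0.05]
    fprintf('p = %.3f  ->  objective = %.6f\n', p0 + dp, negloglik(p0 + dp));   % values should vary noticeably
end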
If it does yield different values but still not converge, then make sure you know how to call the optimizer correctly. Yeah, right, like nobody has ever made this mistake before. This is actually a very common cause of failure of optimizers.
If it does yield good values that vary, and you ARE calling the optimizer correctly, then think if there are regions into which the optimizer is trying to diverge that yield garbage results. Is the objective generating complex or imaginary results?