I have a data set (around 100 samples) where, for a real SISO system (a DC motor), I know the input and output. With the tfest command, I can fit transfer functions from first order up to nth order using the same data (loaded with the iddata function).
But in real life the system can be either 1st order or nth order.
For example, in MATLAB, using the same iddata object (iddat, containing the sample values), I can generate the following transfer functions:
sys1 = tfest(iddat, 1, 1, 0.5); % 1 pole, 1 zero: 1st-order system
sys1 =
From input "u1" to output "y1":
exp(-0.5*s) * (2.932 s - 0.1862) / (s + 1.082)
sys = tfest(iddat, 3, 2, 0.5); % 3 poles, 2 zeros: 3rd-order system
sys =
From input "u1" to output "y1":
exp(-0.5*s) * (0.1936 s^2 - 0.02193 s + 0.0006905) / ( s^3 + 0.07175 s^2 + 0.05526 s + 1.772e-13)
Can someone explain what is happening here?
Fitting a model to experimental data requires a minimum amount of knowledge about the underlying physical system.
Here you have a DC motor, which probably has no zeros and no DC gain, but you are forcing MATLAB to fit a proper 3rd-order transfer function, so it gives you the closest fit it can (not necessarily the correct model).
Instead, remove the half-second delay and let the function find the time constant for you. So
tfest(iddat,1);
would be sufficient (or try 3 poles if you are suspicious about the motor drive).
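The same principle can be sketched numerically outside MATLAB (a Python/SciPy illustration, not the tfest algorithm; the gain and time-constant values are made up): fitting a plain first-order step response to noisy samples recovers the physical parameters without guessing extra poles and zeros.

```python
import numpy as np
from scipy.optimize import curve_fit

# First-order step response: y(t) = K * (1 - exp(-t / tau))
def first_order_step(t, K, tau):
    return K * (1.0 - np.exp(-t / tau))

# Synthetic "measured" data, ~100 samples with light noise
# (true K = 2.7 and tau = 0.9 are arbitrary illustration values)
rng = np.random.default_rng(1)
t = np.linspace(0.0, 5.0, 100)
y = first_order_step(t, 2.7, 0.9) + rng.normal(scale=0.01, size=t.size)

# Least-squares fit of the two physical parameters
(K_fit, tau_fit), _ = curve_fit(first_order_step, t, y, p0=[1.0, 1.0])
print(K_fit, tau_fit)  # should land close to 2.7 and 0.9
```

A model with the right structure and only two parameters fits cleanly; adding spurious poles and zeros (as in the 3rd-order call above) just gives the optimizer freedom to chase noise.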
So, first of all: I am new to neural networks (NNs).
As part of my PhD, I am trying to solve some problem through NN.
For this, I have created a program that creates some data set made of
a collection of input vectors (each with 63 elements) and its corresponding
output vectors (each with 6 elements).
So, my program looks like this:
Nₜᵣ = 25; # number of inputs in the data set
xtrain, ytrain = dataset_generator(Nₜᵣ); # generates In/Out vectors: xtrain/ytrain
datatrain = zip(xtrain,ytrain); # assemble my data
Now, both xtrain and ytrain are of type Array{Array{Float64,1},1}, meaning that
if (say) Nₜᵣ = 2, they look like:
julia> xtrain #same for ytrain
2-element Array{Array{Float64,1},1}:
[1.0, -0.062, -0.015, -1.0, 0.076, 0.19, -0.74, 0.057, 0.275, ....]
[0.39, -1.0, 0.12, -0.048, 0.476, 0.05, -0.086, 0.85, 0.292, ....]
The first 3 elements of each vector are normalized to unity (they represent x, y, z coordinates), and the following 60 numbers, also normalized to unity, correspond to some measurable attributes.
The program continues like:
layer1 = Dense(length(xtrain[1]),46,tanh); # first of 6 layers
layer2 = Dense(46,36,tanh) ;
layer3 = Dense(36,26,tanh) ;
layer4 = Dense(26,16,tanh) ;
layer5 = Dense(16,6,tanh) ;
layer6 = Dense(6,length(ytrain[1])) ;
m = Chain(layer1,layer2,layer3,layer4,layer5,layer6); # composing the layers
squaredCost(ym,y) = (1/2)*norm(y - ym).^2;
loss(x,y) = squaredCost(m(x),y); # define loss function
ps = Flux.params(m); # initializing mod.param.
opt = ADAM(0.01, (0.9, 0.8)); # ADAM optimiser
and finally:
trainmode!(m,true)
itermax = 700; # set max number of iterations
losses = [];
for iter in 1:itermax
Flux.train!(loss,ps,datatrain,opt);
push!(losses, sum(loss.(xtrain,ytrain)));
end
It runs perfectly. However, I noticed that as I train the model with an increasing data set (Nₜᵣ = 10, 15, 25, etc.), the loss function seems to increase. See the image below:
Where: y1: Nₜᵣ=10, y2: Nₜᵣ=15, y3: Nₜᵣ=25.
So, my main question:
Why is this happening? I cannot see an explanation for this behavior. Is it somehow expected?
Remarks: Note that
All elements from the training data set (input and output) are normalized to [-1,1].
I have not tried changing the activation functions.
I have not tried changing the optimization method.
Considerations: I need a training data set of around 10,000 input vectors, so I am expecting an even worse scenario...
Some personal thoughts:
Am I arranging my training dataset correctly? Say, if every single data vector is made of 63 numbers, is it correct to group them in an array and then pile them into an `Array{Array{Float64,1},1}`? I have no experience using NNs and Flux. How could I build a data set of 10000 I/O vectors differently? Can this be the issue? (I am very inclined to this.)
Can this behavior be related to the chosen activation functions? (I am not inclined to this.)
Can this behavior be related to the optimization algorithm? (I am not inclined to this.)
Am I training my model wrong? Is the iteration loop really iterations, or are they epochs? I am struggling to put into practice (and to differentiate between) the concepts of "epochs" and "iterations".
loss(x,y) = squaredCost(m(x),y); # define loss function
Your losses aren't normalized, so adding more data can only increase this cost function. However, the cost per data point doesn't seem to be increasing. To get rid of this effect, use a normalized cost function, for example the mean squared cost instead of the summed one.
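The effect is easy to demonstrate outside Flux (a minimal Python/NumPy sketch; the toy error values are made up): with a fixed per-sample error, the summed squared cost grows linearly with dataset size, while the mean squared cost stays flat.

```python
import numpy as np

def summed_loss(errors):
    # (1/2) * sum of squared errors: grows with the number of samples
    return np.sum(0.5 * errors ** 2)

def mean_loss(errors):
    # (1/2) * mean of squared errors: comparable across dataset sizes
    return np.mean(0.5 * errors ** 2)

# Same per-sample error (0.1) for every dataset size, mimicking a model
# whose fit quality per sample is unchanged as the dataset grows
e10 = np.full(10, 0.1)
e25 = np.full(25, 0.1)

print(summed_loss(e10), summed_loss(e25))  # 0.05 vs 0.125: total cost grows
print(mean_loss(e10), mean_loss(e25))      # 0.005 vs 0.005: per-sample cost is flat
```

So the rising curves for larger Nₜᵣ are an artifact of summing the per-sample losses, not evidence that the model fits worse.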
I built a simple model in Dymola and chose to use i_R1 to set the initialization condition, as shown in the following code and screenshot.
model circuit1
Real i_gen(unit="A") "Current of the generator";
Real i_R1(start=1,fixed=true,unit="A") "Current of R1";
Real i_R2(unit="A") "Current of R2";
Real i_C(unit="A") "Current of the capacitor";
Real i_D(unit="A") "Current of the diode";
Real u_1(unit="V") "Voltage of generator";
Real u_2(unit="V") "Output voltage";
// Voltage generator
constant Real PI = 3.1415926536;
parameter Real U0( unit="V") = 5;
parameter Real frec( unit="Hz") = 100;
parameter Real w( unit="rad/s") = 2*PI*frec;
parameter Real phi( unit="rad") = 0;
// Resistors
parameter Real R1( unit="Ohm") = 100;
parameter Real R2( unit="Ohm") = 100;
// Capacitor
parameter Real C( unit="F") = 1e-6;
// Diode
parameter Real Is( unit="A") = 1e-9;
parameter Real Vt( unit="V") = 0.025;
equation
// Node equations
i_gen = i_R1;
i_R1 = i_D + i_R2 + i_C;
// Constitutive relationships
u_1 = U0 * sin( w * time + phi);
u_1 - u_2 = i_R1 * R1;
i_D = Is * ( exp(u_2 / Vt) - 1);
u_2 = i_R2 * R2;
C * der(u_2) = i_C;
end circuit1;
But after translation, dsin.txt shows that i_R1 is a free variable while u_2 is fixed.
My question is:
Why does Dymola set u_2 as fixed instead of i_R1?
The first column in dsin.txt is now primarily used for continue simulation in Dymola, and is otherwise sort of ignored.
If you want to know which values are relevant for starting the simulation, i.e. parameters and variables with fixed=true, you should instead look at the 6th column and x&8; that shows that i_R1, U0, frec, phi, R1, R2, C, Is, and Vt will influence a normal simulation.
For continue simulation it is instead x&16 that matters, so u_2 instead of i_R1.
The x above is the value in the 6th column, and &8 denotes a bitwise AND with 8. In Modelica you could write mod(div(x, 8), 2) == 1 to test the bit.
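As a quick sanity check (in Python, not Modelica), the bitwise test x & 8 and the div/mod expression above select exactly the same flag values:

```python
def bit8_bitwise(x):
    # True when bit 3 (value 8) is set in the 6th-column flag
    return (x & 8) != 0

def bit8_modelica_style(x):
    # Equivalent of the Modelica expression mod(div(x, 8), 2) == 1
    return (x // 8) % 2 == 1

# The two tests agree for every flag value
for x in range(256):
    assert bit8_bitwise(x) == bit8_modelica_style(x)
print("x & 8 and mod(div(x, 8), 2) == 1 agree for all tested x")
```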
Better read Hans Olsson's answer, since he knows this better; still, here is what I wrote:
I didn't implement it, so take everything with a grain of salt:
dsmodel.mof for the posted example, contains the following code:
// Initial Section
u_1 := U0*sin(w*time+phi);
u_2 := u_1-i_R1*R1;
Using the values from the example results in u_1 = 0 and u_2 = -100. So it seems the fixed start value of i_R1 is used to compute the initial value of u_2 via the above equations.
This should be why the fixed flags in dsin.txt differ from the original Modelica code: information from the model is used to compute the initial value of the state (u_2) from the start value of an auxiliary variable (i_R1). In the executed code, it is the state that is initialized.
Speculation: u_2 is unknown when creating dsin.txt, so it is set to zero and computed later. This should correspond to the first case described in dsin.txt in
"Initial values are calculated according to the following procedure:"
which is when all states are fixed.
I think it is a bug: even though it is flagged as fixed, the voltage u_2 starts at -100 V instead of 0 V when I simulate the model, and i_R1 starts at 1 A.
Speculation: perhaps the sorting algorithms are allowed, during translation, to move the fixed initial values to more meaningful variables, as long as the condition given by the Modelica code (i_R1 = 1 in your case) is met. If so, it would still count as a bug for me, but it might explain what is going on.
I have been trying to multiply two sets of timeseries data in Simulink, At and Bt, and I expected the result to be like this:
ans = sum(A(1:t).*B(t:-1:1))
For instance, when t = 3, the result should be
ans = At1*Bt1 + (At2*Bt1 + At1*Bt2) + (At3*Bt1 + At2*Bt2 + At1*Bt3)
I got these 2 datasets from one of my Simulink models and I want to continue my simulation with the same model.
To achieve this, I guess I need to flip one of those 2 datasets.
So I tried the MATLAB function flip(), but it doesn't work when the argument is a timeseries.
Then I tried outputting the data to the MATLAB workspace as arrays, flipping them there, and feeding them back into my Simulink model, but this didn't work either, because those arrays contain no column storing the time information.
At last, I found that there is a block called "Flip" in the DSP System Toolbox, but I don't have this toolbox, we probably won't buy it, and I am not sure whether this block works.
If that is what you need, then write a function to do that:
function C = multiply_timeseries(A, B)
    Alen = length(A.Data);
    Blen = length(B.Data);
    if Alen ~= Blen
        error("A and B must have the same length")
    end
    C = timeseries(zeros(1, Alen, 'like', A.Data), A.Time);
    for t = 1:Alen
        % t-th sample of the convolution: A(1)*B(t) + A(2)*B(t-1) + ... + A(t)*B(1)
        C.Data(t) = sum( A.Data(1:t) .* B.Data(t:-1:1) );
    end
end
Modify the above to suit your needs.
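For reference, the same sliding sum can be cross-checked outside Simulink. This Python/NumPy sketch (with made-up sample values) confirms that each output sample equals the corresponding sample of the discrete convolution of the two series:

```python
import numpy as np

def multiply_series(a, b):
    # C[t] = a[1]*b[t] + a[2]*b[t-1] + ... + a[t]*b[1] (1-based notation),
    # i.e. the t-th sample of the discrete convolution of a and b
    assert len(a) == len(b), "A and B must have the same length"
    return np.array([np.sum(a[: t + 1] * b[t::-1]) for t in range(len(a))])

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
c = multiply_series(a, b)
print(c)  # matches the first len(a) samples of np.convolve(a, b)
```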
Take a look at the following simple system
where Kp=7130 and Kd=59.3880. These values are designed so that the system should exhibit an overshoot of 20% and steady-state error less than 0.01. The Simulink model yields correct results whereas tf() doesn't. This is the model
and its result is
Now, implementing the same system with tf as follows:
clear all
clc
kp=7130;
kd=59.3880;
num=[kd kp];
den=[1 18+kd 72+kp];
F=tf(num,den);
step(F)
stepinfo(F)
yields a different overshoot.
Any suggestions as to why the responses are inconsistent? Do I have to put the system in a specific form in order to use tf()?
The error is in considering the Simulink implementation's response correct: step is giving you the correct response.
A pure derivative does not exist in Simulink, and if you try a transfer function block with [kd, kp] as numerator and [1] as denominator you will get an error.
With a fixed-step integrator the derivative is implemented as a filter with a pole; with a variable-step integrator its behavior is quite uncertain and should be avoided. The closed-loop system you get with your controller has relative degree one (1 zero, 2 poles).
If you look at the response, the Simulink implementation starts with dy/dt = 0 at t = 0, and this is not possible for this kind of closed-loop system. The correct response is the one from tf (dy/dt > 0 at t = 0).
Your closed-loop transfer function is correct, and you should consider its response the correct one. Try simulating the transfer function in the image with Simulink; you will see the same response as the step command.
Let's test this with some code:
In the image we have three tests:
the analytic transfer function
an approximation of the derivative
the simulation with your derivative block
Try implementing it and test the value 0.001 in the tf s / (0.001 s + 1): you will see that as you decrease the coefficient towards 0, the response of Transfer Fcn2 approaches that of the analytic closed-loop tf (up to the point where Simulink can no longer evaluate the derivative and stops the simulation).
And finally, the analytic transfer function in Simulink gives the same response as the step command.
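For readers without Simulink, the same closed loop can also be stepped numerically. This is a SciPy sketch (not part of the original answer) of the analytic closed-loop transfer function (kd*s + kp) / (s^2 + (18 + kd)*s + (72 + kp)) with the Kp and Kd values from the question:

```python
import numpy as np
from scipy import signal

kp, kd = 7130.0, 59.3880

# Closed loop of C(s) = kd*s + kp around P(s) = 1/(s^2 + 18 s + 72):
# G(s) = (kd s + kp) / (s^2 + (18 + kd) s + (72 + kp))
sys = signal.TransferFunction([kd, kp], [1.0, 18.0 + kd, 72.0 + kp])

t = np.linspace(0.0, 0.3, 5000)
t, y = signal.step(sys, T=t)

final = y[-1]  # steady-state value, kp / (72 + kp), about 0.99
overshoot = (np.max(y) - final) / final
print(f"overshoot = {overshoot:.1%}")  # clearly above the 20% design target
```

The extra overshoot relative to the 20% design value comes from the closed-loop zero at -kp/kd, which the pure second-order design formulas ignore.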
In the comments you said you evaluated the inverse Laplace transform, so let us check that as well. The Symbolic Math Toolbox will do it for us:
syms s kp kd t
Plant = 1/(s^2 + 18 * s + 72)
Reg = kp + kd * s
L = Plant * Reg
ClosedLoop = simplify(L / (1 + L))
Step = 1/s
ResponseStep = simplify(ilaplace(ClosedLoop * Step))
ResponseStep_f = matlabFunction(simplify( ...
subs( ...
subs(ResponseStep, kp, 7130), kd, 59.3880)));
t_ = linspace(0, 0.15, 200);
y_ = arrayfun(ResponseStep_f, t_);
plot(t_, y_);
As you can see, the inverse Laplace transform shows an overshoot of more than 25%.
EDIT: Evaluating the inverse Laplace transform that you computed at this link:
Again, the overshoot is 25.9%.
I want to determine the linearized transfer function of a nonlinear system built in Simulink. I can see that it should be possible to use the linmod function in MATLAB, but when I try this
[num,den]=linmod('sys')
I don't get the numerator and denominator, but instead the state-space matrices etc. Can anyone help?
Try the function balred instead: documentation
rsys = balred(sys,ORDERS) computes a reduced-order approximation rsys
of the LTI model sys. The desired order (number of states) for rsys is
specified by ORDERS. You can try multiple orders at once by setting
ORDERS to a vector of integers, in which case rsys is a vector of
reduced-order models. balred uses implicit balancing techniques to
compute the reduced-order approximation rsys.
example:
Q = tf([1 2 3 4 5],[5 4 3 2 1])
Q =
s^4 + 2 s^3 + 3 s^2 + 4 s + 5
-------------------------------
5 s^4 + 4 s^3 + 3 s^2 + 2 s + 1
Q_lin = balred(Q,2)
Q_lin =
3.276 s^2 - 2.06 s + 2.394
--------------------------
s^2 - 0.2757 s + 0.4789
balred(Q,1)
is not working for my example, as there are 2 unstable poles, but it may work for your system.
linmod always returns a state-space representation (see the documentation). Use tf to convert your state-space representation to a transfer function:
Conversion to Transfer Function
tfsys = tf(sys) converts the dynamic
system model sys to transfer function form. The output tfsys is a tf
model object representing sys expressed as a transfer function.
BTW, if you have Simulink Control Design, a better alternative to linmod is linearize.
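As an aside for non-MATLAB readers, the same state-space-to-transfer-function conversion exists elsewhere; here is a small SciPy sketch with a made-up second-order model standing in for linmod's output:

```python
import numpy as np
from scipy import signal

# Hypothetical state-space model of 1/(s^2 + 3 s + 2)
# (controllable canonical form), playing the role of linmod's [A, B, C, D]
A = [[0.0, 1.0], [-2.0, -3.0]]
B = [[0.0], [1.0]]
C = [[1.0, 0.0]]
D = [[0.0]]

# ss2tf is the SciPy analogue of MATLAB's tf(ss(A, B, C, D))
num, den = signal.ss2tf(A, B, C, D)
print(num, den)  # den = [1, 3, 2], num ~ [0, 0, 1]
```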