Is there an alternative to the sample function in OpenModelica that accepts arguments which do not have parameter variability? That is, the alternative should permit sampling over a variable range of values during a simulation.
The end goal is to create a class with which I can measure the RMS value of a real signal during a simulation. The RMS value is used as a control variable. The real signal has a continuously changing frequency, so in order to get better measurements I want either to vary the sampling range continuously during the simulation or to adjust it discretely in some sections/periods of the oscillation.
Is it also possible to have a "running RMS" function so that the output is continuous?
In short, I would like to calculate the RMS value over a variable sampling range, where each iteration adds only one new term or value to the sample rather than a completely new set of values.
Some possible solutions (you should probably check my math and use these just for inspiration; also check the RootMeanSquare block in the standard library, which for some reason samples the Mean block):
Running RMS from the beginning of time (no frequency):
model RMS
  Real signal = sin(time);
  // Assume the start time is 0; the denominator can also be integrated with
  // der(denom) = 1 for a portable solution. Guard the first instants against
  // division by zero.
  Real rms = if time < 1e-10 then signal else sqrt(i_sq/time);
  Real i_sq(start=0, fixed=true) "Integrated square of the signal";
equation
  der(i_sq) = signal^2;
end RMS;
With a fixed window, f:
model RMS
  constant Real f = 2*2*asin(1.0) "Window length (= 2*pi)";
  Real signal = sin(time);
  Real rms = if time < f then (if time < 1e-10 then signal else sqrt(i_sq/time)) else sqrt(i_sq_f/f);
  Real i_sq(start=0, fixed=true) "Integrated square of the signal";
  Real i_sq_f = i_sq - delay(i_sq, f) "Integrated square over the last window";
equation
  der(i_sq) = signal^2;
end RMS;
With a variable window, f (limited by f_max):
model RMS
  constant Real f_max = 2*2*asin(1.0) "Upper bound of the window length";
  // f cannot be declared constant since it varies with time; the expression
  // is just an example of a time-varying window (asin(time) limits it to time <= 1)
  Real f = 1 + abs(2*asin(time)) "Time-varying window length";
  Real signal = sin(time);
  Real rms = if time < f then (if time < 1e-10 then signal else sqrt(i_sq/time)) else sqrt(i_sq_f/f);
  Real i_sq(start=0, fixed=true) "Integrated square of the signal";
  Real i_sq_f = i_sq - delay(i_sq, f, f_max) "Integrated square over the last window";
equation
  der(i_sq) = signal^2;
end RMS;
Variable time for sampling in synchronous Modelica: https://trac.modelica.org/Modelica/ticket/2022
Variable time for sampling in older Modelica:
when time >= nextEvent then
  doSampleStuff(...);
  nextEvent = calculateNextSampleTime(...);
end when;
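As a minimal runnable sketch of that pattern (the model name, start values, and the rule for the next sample time are mine, purely for illustration):
model VariableSample "Sketch of event-based sampling with a varying period"
  Real signal = sin(time);
  discrete Real sampled(start=0, fixed=true) "Last sampled value";
  discrete Real nextEvent(start=0.1, fixed=true) "Time of the next sample";
equation
  when time >= pre(nextEvent) then
    sampled = signal;
    // any state-dependent rule for the next sample time works here:
    nextEvent = pre(nextEvent) + 0.1 + 0.05*abs(sampled);
  end when;
end VariableSample;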
Related
I'm trying to create a model where one Modelica variable is a triangular wave of another variable. First I tried the floor() function as below:
model test1
  final constant Real pi = 2*Modelica.Math.asin(1.0);
  parameter Real b = 1;
  parameter Real a = 1;
  Real x, p, u;
equation
  if sign(sin(x*pi/b)) >= 0 then
    p = a*(x - b*floor(x/b));
  else
    p = a*(b - (x - b*floor(x/b)));
  end if;
  x = time;
  u = floor(x/b);
end test1;
(x = time; is arbitrary, just so that the model compiles)
but the result is weird, as you can see below (plots omitted; the second is a zoomed-in view):
Somehow, 0.005 seconds before the next step, the floor function behaves unexpectedly and becomes a linear ramp that ends at the next value.
Then I tried the ceil() function. Everything seemed right until I realised that the same problem happens with ceil() at other values (e.g. x = 13).
I would appreciate it if you could:
help me understand why this "glitch" happens, and whether it is intentional by design or a bug;
tell me how I can fix it;
suggest alternatives for creating a triangular wave function.
P.S. I am using this "wave function" to model the interaction between two jagged bodies.
If you are allowed to use the Modelica Standard Library, you can build a parametrized, time-based zigzag signal using the CombiTimeTable block with linear interpolation and periodic extrapolation. For example:
model Test4
  parameter Real a = 2 "Amplitude";
  parameter Real b = 3 "Period";
  Real y = zigzag.y[1] "Zigzag";
  Modelica.Blocks.Sources.CombiTimeTable zigzag(
    table=[0,0; b/4,a; b/4,a; b/2,0; b/2,0; 3*b/4,-a; 3*b/4,-a; b,0],
    extrapolation=Modelica.Blocks.Types.Extrapolation.Periodic)
    annotation (Placement(transformation(extent={{-80,60},{-60,80}})));
  // Alternative: a Trapezoid source with zero width gives a similar
  // (phase-shifted) zigzag
  Modelica.Blocks.Sources.Trapezoid trapezoid(
    amplitude=2*a,
    rising=b/2,
    width=0,
    falling=b/2,
    period=b,
    offset=-a)
    annotation (Placement(transformation(extent={{-80,25},{-60,45}})));
  annotation (uses(Modelica(version="3.2.2")));
end Test4;
I don't have an explanation for the glitches in your simulation.
However, I would take another approach to the sawtooth function: I see it as an integrator integrating +1 and -1 upwards and downwards. The integration time determines the amplitude and period of the sawtooth function.
The pictures below show an implementation using MSL blocks and one using code. The simulation results below are the same for both implementations.
Block diagram:
Code:
model test3
  parameter Real a = 2 "amplitude";
  parameter Real b = 3 "period";
  Real u, y;
initial equation
  u = 1;
  y = 0;
equation
  4*a/b*u = der(y);
  when y > a then
    u = -1;
  elsewhen y < -a then
    u = 1;
  end when;
end test3;
Simulation result:
The problem, I guess, is due to floating-point representation and events not occurring at exact times.
Consider x - floor(x) and 1 - (x - floor(x)) at time = 0.99: they are 0.99 and 0.01. At time = 1.00 they are 0.0 and 1.0, which is what causes your problems.
For a = b = 1, you can use the following equation for p:
p = min(mod(x, 2), 2 - mod(x, 2));
You can even add noEvent to it, and you can consider the signal continuous (but not differentiable).
model test
  parameter Real b = 1;
  parameter Real a = 3;
  Real x, p;
equation
  p = 2*a*min(1/b*mod(x, b), 1 - 1/b*mod(x, b));
  x = time;
end test;
My first advice would be to remove the sign function, since there is no benefit in doing sign(foo) >= 0 compared to foo >= 0.
Interestingly enough, that seems to fix the problem in Dymola, and I assume also in OpenModelica:
model test1 "almost original"
  final constant Real pi = 2*Modelica.Math.asin(1.0);
  parameter Real b = 1;
  parameter Real a = 1;
  Real x, p, u;
equation
  if sin(x*pi/b) >= 0 then
    p = a*(x - b*floor(x/b));
  else
    p = a*(b - (x - b*floor(x/b)));
  end if;
  x = time;
  u = floor(x/b);
end test1;
Now I only have to explain that. The reason is that sin(x*pi/b) is slightly out of sync with the floor function, but if you use sin(x*pi/b) >= 0, the mismatch stays within the root-finding epsilon and nothing strange happens.
When you use sign(sin(x*pi/b)) >= 0 that is no longer possible: instead of being an epsilon below zero, the expression is now -1, and instead of an epsilon above zero it is 1.
The real solution is thus slightly more complicated:
model test2 "working"
  parameter Real b = 1;
  parameter Real a = 1;
  Real x, p, u;
  Real phase = mod(x, b*2);
equation
  if phase < b then
    p = a/b*phase;
  else
    p = a - a/b*(phase - b);
  end if;
  x = time;
  u = floor(x/b);
end test2;
which was improved based on a suggested solution:
model test3 "almost working"
  parameter Real b = 1;
  parameter Real a = 1;
  Real x, p, u;
equation
  if mod(x, 2*b) < b then
    p = a/b*mod(x, b);
  else
    p = a - a/b*mod(x, b);
  end if;
  x = time;
  u = floor(x/b);
end test3;
The key point in this solution, test2, is that there is only one problematic event-generating expression, mod(x, 2*b), and the < comparison cannot get out of sync with it.
In practice test3 will almost certainly also work, but in unlikely cases the event generation might get out of sync between mod(x, 2*b) and mod(x, b), with unknown consequences.
Note that all three examples are now modified to generate output that looks similar.
I have a regular wave equation to simulate in MATLAB Simulink:
Equation:
Fw(t) = Awave * F(w) * cos(w*t + g)
where
Fw = wave exciting force;
Awave = amplitude of the wave = wave height / 2;
t = time (at 5 AM, 11 AM, 5 PM, 11 PM);
w = corresponding frequency = 2*pi/T;
T = period of the wave;
g = 0;
and Awave, Fw and T vary with t.
Can you please give me an idea, especially using a Simulink MATLAB Function block?
Feed Awave, F(w), T and t as inputs to your MATLAB Function block, which should look something like this if I understood correctly:
function Fw = wave_fun(Awave,F,T,t)
% F is the frequency-dependent factor F(w) from the equation above
w = 2*pi/T;
g = 0; % change if need be
Fw = Awave * F * cos(w*t+g);
end
Note that t is the simulation time in seconds (you can use a Clock block for that). The other variables need to be defined as functions of time in the base workspace; then you can use From Workspace blocks to feed them in.
However, this is so simple that a MATLAB Function block is unnecessarily complicated in my opinion. Simple mathematical blocks should suffice to combine the inputs and compute the desired output.
What is the best way to calculate the mean value and standard deviation (StdDev) of a continuous signal in Modelica? The mean and StdDev should be calculated over a fixed period of time T, i.e., from t - T until t.
Below is a discrete solution to the problem. It is coded as a Modelica block with one continuous input and two continuous output signals. Discretization is accomplished using the Modelica built-in operator sample:
block MeanStdDevDiscr
  "Determines the mean value and standard deviation of a signal for a fixed time interval T."
  extends Modelica.Blocks.Interfaces.BlockIcon;
  import SI = Modelica.SIunits;
  parameter SI.Time T = 0.1
    "Time interval used for calculating mean value and standard deviation";
  parameter Integer n = 10 "Number of increments in T";
  Modelica.Blocks.Interfaces.RealInput u "signal input"
    annotation (Placement(transformation(extent={{-140,-20},{-100,20}})));
  Modelica.Blocks.Interfaces.RealOutput[2] y
    "y[1] = average value; y[2] = standard deviation"
    annotation (Placement(transformation(extent={{100,-10},{120,10}})));
protected
  parameter SI.Time dt = T/n "Precision of monitor";
  Real[n] uArray "Shift register of the n most recent samples";
initial equation
  uArray = ones(n)*u;
equation
  when sample(0, dt) then
    uArray[1] = u;
    for j in 2:n loop
      uArray[j] = pre(uArray[j-1]);
    end for;
  end when;
  y[1] = sum(uArray)/n; // mean value
  y[2] = sqrt(sum((uArray .- y[1]).^2)/n); // standard deviation
  annotation (Diagram(graphics));
end MeanStdDevDiscr;
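For comparison, here is a purely continuous sketch of the same monitor, built on the integrate-and-delay idea from the RMS models above (the block and variable names are mine; check the math before relying on it):
block MeanStdDevCont
  "Sketch: continuous mean and standard deviation over the moving window [t-T, t]"
  extends Modelica.Blocks.Interfaces.BlockIcon;
  import SI = Modelica.SIunits;
  parameter SI.Time T = 0.1 "Window length";
  Modelica.Blocks.Interfaces.RealInput u "signal input"
    annotation (Placement(transformation(extent={{-140,-20},{-100,20}})));
  Modelica.Blocks.Interfaces.RealOutput[2] y
    "y[1] = mean value; y[2] = standard deviation"
    annotation (Placement(transformation(extent={{100,-10},{120,10}})));
protected
  Real i_u(start=0, fixed=true) "Integral of u";
  Real i_u2(start=0, fixed=true) "Integral of u^2";
  Real m2 "Mean of u^2 over the window";
equation
  der(i_u) = u;
  der(i_u2) = u^2;
  // Before t = T the window is not yet full; fall back to a running mean,
  // guarding the very first instants against division by zero.
  y[1] = if time < T then (if time < 1e-10 then u else i_u/time)
         else (i_u - delay(i_u, T))/T;
  m2 = if time < T then (if time < 1e-10 then u^2 else i_u2/time)
       else (i_u2 - delay(i_u2, T))/T;
  y[2] = sqrt(max(m2 - y[1]^2, 0)); // max() guards against negative round-off
end MeanStdDevCont;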
The final goal I am trying to achieve is the generation of a ten-minute time series; to get there I have to perform an FFT operation, and that is the point I have been stumbling on.
Generally, the target time series is assigned as the sum of two terms: a steady component U(t) and a fluctuating component u'(t). That is,
u(t) = U(t) + u'(t);
So, in general, my code follows this procedure:
1) Given data
time = 600 [s];
Nfft = 4096;
L = 340.2 [m];
U = 10 [m/s];
df = 1/600 = 0.00167 Hz;
fn = Nfft/(2*time) = 3.4133 Hz;
This means that my frequency array should be laid out as follows:
f = (-fn+df):df:fn;
But, instead of using the whole f array, I am only making use of the positive half:
fpos = df:df:fn; % i.e. from 0.00167 to 3.4133 Hz
2) Spectrum Definition
I define a certain spectrum shape, applying the following relationship
Su = (6*L*U)./((1 + 6.*fpos.*(L/U)).^(5/3));
3) Random phase generation
I then have to generate a set of complex samples with a given distribution: in my case, the random phase follows a standard Gaussian distribution (mu = 0, sigma = 1).
In MATLAB I call
nn = complex(normrnd(0,1,1,Nfft/2), normrnd(0,1,1,Nfft/2)); % row vector matching fpos
4) Apply random phase
To apply the random phase, I just do this:
Hu = Su.*nn; % element-wise product
At this point my pains start!
So far, I have only generated Nfft/2 = 2048 complex samples, accounting for the fpos content. Therefore, the content corresponding to the negative half of f is still missing. To overcome this, I was thinking of merging the real and imaginary parts of Hu in order to get a signal Huu with Nfft = 4096 samples and all real values.
But with this merging process the 0-th frequency order would not be represented, since the imaginary part of Hu is defined for fpos.
So, how do I account for the 0-th order while keeping a procedure like the one I have proposed so far?
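For what it's worth, here is a sketch of one common way to handle this (assuming Hu covers df:df:fn, so its last element is the Nyquist bin): set the 0-th (DC) bin explicitly to zero, since the mean is carried by the steady component U(t), and mirror the positive half with conjugate (Hermitian) symmetry so that the inverse FFT is real:
Hu = Hu(:);                          % work with a column vector
Hfull = zeros(Nfft, 1);
Hfull(1) = 0;                        % 0th (DC) bin: zero, U(t) carries the mean
Hfull(2:Nfft/2+1) = Hu;              % positive frequencies df..fn
Hfull(Nfft/2+1) = real(Hfull(Nfft/2+1));        % the Nyquist bin must be real
Hfull(Nfft/2+2:end) = conj(Hfull(Nfft/2:-1:2)); % Hermitian symmetry -> real signal
u_fluct = real(ifft(Hfull));         % or ifft(Hfull, 'symmetric')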
I have two signals; let's call them 'a' and 'b'. They are nearly identical (recorded from the same input and containing the same information); however, because I recorded them at two different times, 'b' is time-shifted by an unknown amount. Obviously, there is random noise in each.
Currently I am using cross-correlation to compute the time shift; however, I am still getting improper results.
Here is the code I am using to calculate the time shift:
function [ diff ] = FindDiff( signal1, signal2 )
%FINDDIFF Finds the difference between two signals of equal frequency
%after an appropriate time shift is applied
%   Calculates the time shift between two signals of equal frequency
%   using cross correlation, shifts the second signal and subtracts the
%   shifted signal from the first signal. This difference is returned.
len = length(signal1);      % avoid shadowing the built-ins length/size
if len ~= length(signal2)
    error('Vectors must be equal size');
end
tx = (-len+1):(len-1);      % lag axis of xcorr (2*len-1 points)
x = xcorr(signal1,signal2);
[mx,ix] = max(x);
lag = abs(tx(ix));          % note: abs() discards the sign of the lag
shifted_signal2 = timeshift(signal2,lag);
diff = signal1 - shifted_signal2;
end

function [ shifted ] = timeshift( input_signal, shift_amount )
n = length(input_signal);
shifted = zeros(n,1);       % preallocate the output
for i = 1:n
    if i <= shift_amount
        shifted(i) = 0;
    else
        shifted(i) = input_signal(i-shift_amount);
    end
end
end
plot(FindDiff(a,b));
However, the result from the function is a periodic wave rather than random noise, so the lag must still be off. I would post an image of the plot, but imgur is currently not cooperating.
Is there a more accurate way to calculate the lag than cross-correlation, or is there a way to improve the results from cross-correlation?
Cross-correlation is usually the simplest way to determine the time lag between two signals. The position of the peak value indicates the time offset at which the two signals are most similar.
%// Normalize signals to zero mean and unit variance
s1 = (signal1 - mean(signal1)) / std(signal1);
s2 = (signal2 - mean(signal2)) / std(signal2);
%// Compute time lag between signals
c = xcorr(s1, s2);    %// Cross correlation
[~, ix] = max(c);     %// Find the position of the peak
lag = ix - length(s2) %// Convert the peak index into a signed lag
Note that the two signals have to be normalized first to the same energy level, so that the results are not biased.
By the way, don't use diff as a name for a variable. There's already a built-in function in MATLAB with the same name.
There are now two functions in MATLAB, finddelay and alignsignals, that can do what you want, I believe.
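A minimal usage sketch, assuming the Signal Processing Toolbox and your vectors a and b:
d = finddelay(a, b);            % estimated integer delay of b relative to a
[a2, b2] = alignsignals(a, b);  % prepends zeros so the two signals line up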
xcorr finds a dot product between shifted copies of the vectors v1 and v2. If it works badly with your signal, I'd try to minimize the sum of squares of differences (i.e. of v1 - v2) instead:
signal = sin(1:100);
signal1 = [zeros(1, 10) signal];    % 'signal' delayed by 10 samples
signal2 = [signal zeros(1, 10)];
d2 = zeros(1, length(signal1));     % preallocate
for i = 1:length(signal1)
    signal1shifted = [signal1 zeros(1, i)];
    signal2shifted = [zeros(1, i) signal2];
    d2(i) = sum((signal1shifted - signal2shifted).^2);
end
[fval, lag2] = min(d2);
lag2
It is computationally more expensive than cross-correlation, which can be sped up by using the FFT; as far as I know, you cannot do that with the Euclidean distance.
You can try matched filtering in the frequency domain:
function [corr_out] = pc_corr_processor (target_signal, ref_signal)
N = length(target_signal);
matched_filter = flipud(ref_signal')';   % time-reverse the (row) reference signal
matched_filter_Res = fft(matched_filter, N);
corr_fft = matched_filter_Res.*fft(target_signal);
corr_out = abs(ifft(corr_fft));          % circular cross-correlation via FFT
end
The index of the maximum of corr_out above should give you the lag amount.
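Usage might look like this (assuming row vectors sampled at the same rate; note that the FFT-based correlation is circular, so the peak index gives the lag modulo N):
corr_out = pc_corr_processor(target_signal, ref_signal);
[~, lag_estimate] = max(corr_out);  % position of the peak = lag estimate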