Threshold level check - matlab

How can I build a Simulink model that tells me whether a certain output signal x has reached a threshold level after n seconds of the system run time? I would like to consider the last value of x and, if it has reached the threshold, enable an alarm value of -1.

I assume you want to do the following comparison:
x(n) >= threshold
In Simulink there is a block called "Weighted Sample Time" that you can use to obtain the sample time. Then you can use one comparator to check whether the elapsed time has reached n and another comparator to make sure x is greater than or equal to the threshold.
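If you prefer to express the logic in code rather than with comparator blocks, here is a minimal sketch of what a MATLAB Function block could compute; the threshold, the time n and the alarm value are placeholders, and t is assumed to be wired from a Clock block.
function alarm = check_threshold(x, t)
% x: monitored signal, t: simulation time from a Clock block (assumed wiring)
threshold = 10;   % placeholder threshold level
n = 5;            % placeholder time in seconds
if t >= n && x >= threshold
    alarm = -1;   % raise the alarm value
else
    alarm = 0;    % no alarm
end
end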

Related

System Dynamics simulation - Translating Stella into AnyLogic syntax

I modelled the following logic in Stella:
IF "cause" > 0 THEN MONTECARLO("probabilityofconsequence") ELSE 0
But I'm not getting the correct syntax in AnyLogic:
(cause > 0) ? (uniform() < probabilityofconsequence) ? 1 : 0 : 0
Any ideas?
Disclaimer: what Stella's MONTECARLO function does is generate a series of zeros and ones from a Bernoulli distribution based on the probability provided. The probability is the percentage probability of an event happening per DT, divided by DT (it is similar to, but not the same as, the percent probability of an event per unit time). The probability value can be either a variable or a constant, but should evaluate to a number between 0 and 100/DT (numbers outside that range will be clipped to 0 or 100/DT). The expected value of the stream of numbers, summed over a unit of time, is equal to probability/100.
MONTECARLO is equivalent to the following logic:
IF UNIFORM(0, 100, <seed>) < probability*DT THEN 1 ELSE 0
The equivalent in AnyLogic should be:
(cause > 0 && uniform(0, 100) < probability*DT) ? 1 : 0
You need to create a variable called DT that is equal either to the fixed time step you have chosen in your model configuration, or to whatever value you consider adequate.
Since AnyLogic, depending on how you run the model, does not treat the fixed time step as truly fixed, you need to define DT yourself.
In any case, your results will probably not match Stella's exactly, since the time steps are not necessarily the same, but they should be similar enough.
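To see what that Bernoulli-per-DT stream looks like numerically, here is a small sketch in MATLAB (used only because the rest of this page is MATLAB-centric; the probability and DT values are assumptions):
probability = 20;   % assumed percent probability per unit time
DT = 0.25;          % assumed fixed time step
nSteps = 10000;     % number of DT steps to simulate
stream = double(rand(nSteps,1)*100 < probability*DT);  % 1 with probability*DT percent chance, else 0
mean(stream)/DT     % summed over a unit of time this approaches probability/100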

Index when mean is constant

I am relatively new to MATLAB. I computed the consecutive (cumulative) mean of a set of 1e6 random numbers with a given mean and standard deviation. Initially the calculated mean fluctuates and then converges to a certain value.
I would like to know the index (e.g. the 100th position) at which the mean converges. I have no idea how to do that.
I tried using logical operators, but I have to go through 1e6 data points, and even then I still can't find the index.
Y_c= sigma_c * randn(n_r, 1) + mu_c; %Random number creation
Y_f=sigma_f * randn(n_r, 1) + mu_f;%Random number creation
P_u=gamma*(B*B)/2.*N_gamma+q*B.*N_q + Y_c*B.*N_c; %Calculation of Ultimate load
prog_mu=cumsum(P_u)./cumsum(ones(size(P_u))); %Progressive Cumulative Mean of system response
logical(diff(prog_mu==0)); %Find index
I suspect the issue is that the mean will never truly be constant, but will rather fluctuate around the "true mean". As such, you'll most likely never encounter a situation where the two consecutive values of the cumulative mean are identical. What you should do is determine some threshold value, below which you consider fluctuations in the mean to be approximately equal to zero, and compare the difference of the cumulative mean to that value. For instance:
epsilon = 0.01;
const_ind = find(abs(diff(prog_mu))<epsilon,1,'first');
where epsilon will be the threshold value you choose. The find command will return the index at which the variation in the cumulative mean first drops below this threshold value.
EDIT: As was pointed out, this method may potentially fail if the first few random numbers happen to be so close together that their consecutive differences fall below the epsilon value even though the mean has not yet converged. I would like to suggest a different approach, then.
We calculate the cumulative means, as before, like so:
prog_mu=cumsum(P_u)./cumsum(ones(size(P_u)));
We also calculate the difference in these cumulative means, as before:
df_prog_mu = diff(prog_mu);
Now, to ensure that convergence has really been achieved, we find the first index from which the difference in the cumulative mean stays below the threshold value epsilon for all subsequent samples. To phrase this another way, we want the index just after the last position in the array where the difference in the cumulative mean is above the threshold:
conv_index = find(abs(df_prog_mu) > epsilon, 1, 'last') + 1;
In doing so, we guarantee that the variation at that index, and at all subsequent indices, stays below your predetermined threshold value.
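Putting both steps together, a minimal end-to-end sketch with synthetic data (the mean, standard deviation and epsilon used here are arbitrary assumed values) would be:
P_u = 2*randn(1e6,1) + 5;                        % synthetic stand-in for your load samples
prog_mu = cumsum(P_u)./cumsum(ones(size(P_u)));  % progressive cumulative mean
df_prog_mu = diff(prog_mu);
epsilon = 1e-4;                                  % assumed tolerance
conv_index = find(abs(df_prog_mu) > epsilon, 1, 'last') + 1  % first index after which all variations stay below epsilon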
I wouldn't imagine that the mean would suddenly become constant at a single index. Wouldn't it asymptotically approach a constant value? I would recommend a for loop to calculate the mean (it sounds like maybe you've already done this part?) like this:
avg = zeros(1, length(x)); % preallocate for speed
for k = 1:length(x)
    avg(k) = mean(x(1:k));
end
Then plot the consecutive mean:
plot(avg)
hold on % this will allow us to plot more data on the same figure later
If you're trying to find the point at which the consecutive mean comes within a certain range of the true mean, try this:
Tavg = 5; % or whatever your true mean is
err = 0.01; % the range you want the consecutive mean to reach before we say that it "became constant"
inRange = avg>(Tavg-err) & avg<(Tavg+err); % gives you a binary logical array telling you which values fell within the range
q = 1000; % set this as high as you can while still getting a value for constIndex
constIndex = [];
for k = 1:length(inRange)-q
    if all(inRange(k:k+q)) % this value and the next q values all fall within the range
        constIndex = k;
        break % keep the first such index
    end
end
The other answer posted here takes a similar approach but makes the unsafe assumption that the first value to fall within the range is the point where the function starts to converge; any value could randomly fall within that range, so we need to make sure that the following values also stay within it. In the above code, you can tune "q" and "err" to optimize your result. I would recommend double-checking it by plotting.
plot(constIndex, avg(constIndex), '*') % mark the convergence point on the earlier plot

MATLAB Simple - Linear Predictive Coding and Energy Forecasting

I have a dataset with 274 samples (9 months) of the daily energy (in watt-hours) used in a residential household. I'm not sure if I'm applying the lpc function correctly.
My code is the following:
filename='9-months.csv';
energy = csvread(filename);
C=zeros(5,1);
counter=0;
N=3;
for n=274:-1:31
w2=energy(1:n-1,1);
a=lpc(w2,N);
energy_estimated=0;
for X = 1:N
energy_estimated = energy_estimated + (-a(X+1)*energy(n-X));
end
w_real=energy(n);
error2=abs(w_real-energy_estimated);
counter=counter+1;
C(counter,1)=error2;
end
mean_error=round(mean(C));
Being "n" the sample on analysis, I will use the energy array's values, from 1 to n-1, to calculate the lpc coefficientes (with N=3).
After that, it will apply the calculated coefficients on the "for" cycle presented, in order to calculate the estimated energy.
Finally, error2 outputs the error between the real energy and estimated value.
In the example presented at http://www.mathworks.com/help/signal/ref/lpc.html some filters are used. Do I need to apply any filter here? Is my methodology correct?
Thank you very much in advance!
The lpc function seems to be used correctly, but there are a few other things about your code. I am addressing the part starting at the "for n" loop:
for n=31:274 % for me it would seem more logical to go forward in time
    w2=energy(1:n-1,1);
    a=lpc(w2,N);
    pred=filter([0 -a(2:end)],1,energy(1:n,1)); % one-step-ahead predictions; the leading 0 means energy(n) itself is not used
    estimates(n)=pred(end);                     % prediction of energy(n) from the N previous samples
end
err=energy(31:274)-estimates(31:274)'; % avoid naming this variable "error", which shadows a built-in function
meanerror=mean(err); % you don't really round mean errors
filter does exactly what you are trying to do with the X=1:N loop, but it performs the calculation for the entire vector. If you just want the last value, index it with (end).
There is also no reason to calculate the error for every single value inside the loop and append it to a vector; you can do that faster in one vectorized step after the loop.
If you are trying to estimate future values with LPC it could work like that, but you are implying that every value depends only on the last 3 values. Have you tried something like a polynomial approach, for example as sketched below? I would think that would be closer to reality.
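As a rough illustration of that polynomial idea (only a sketch; the degree and the fit over the sample index are arbitrary choices, and energy is the 274x1 vector loaded from the CSV above):
N_poly = 2;                          % assumed polynomial degree
estimates_poly = zeros(274,1);
for n = 31:274
    t = (1:n-1)';                                       % sample indices of the history
    [p, ~, mu_t] = polyfit(t, energy(1:n-1,1), N_poly); % centre/scale to keep the fit well conditioned
    estimates_poly(n) = polyval(p, n, [], mu_t);        % extrapolate one step ahead
end
err_poly = energy(31:274) - estimates_poly(31:274);
mean(abs(err_poly))                  % mean absolute error, comparable to the question's error2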

Defining a half-open interval in MATLAB

How would I go about setting my tspan vector for solutions to my ode between (1,5]? I've thought of just doing >>tspan = [1:(any amount of steps):5] but is that okay?
You can't numerically integrate over a (half-)open interval. Numerical integration always operates on specific numeric points, i.e. not on an interval at all but on a finite set of numbers. What you specify with the tspan argument are the smallest and largest numbers in that set, and both are therefore included in it. You can put more numbers into tspan to explicitly request integration results at those points too, but however you choose them, this doesn't change the fact that you are not dealing with an interval.
If the motivation of the question is that your equations have a singularity at 1, you might specify a start point that is slightly larger, e.g. [1 + 1e-5, 5].
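For example, a minimal sketch (the offset 1e-5, the toy equation and the initial condition are arbitrary illustrative choices):
odefun = @(t, y) 1./sqrt(t - 1);     % toy right-hand side with a singularity at t = 1
tspan  = [1 + 1e-5, 5];              % start just inside the open end of (1,5]
y0     = 0;                          % assumed initial condition
[t, y] = ode45(odefun, tspan, y0);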
Seems ok, but 2 notes:
A. It should be tspan=[1:(any size of step):5], i.e. a step size, not an amount of steps. For an amount of steps, you can write tspan=linspace(1,5,(any amount of steps)); instead.
B. Both of those options include '1'. If you want the interval (1,5], you should add the step size to '1' in each of the options, for example tspan=[1+(size of step) : (size of step) : 5]; see the sketch below.
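A small sketch of option B with an assumed step size h:
h = 0.1;                         % assumed step size
tspan = (1 + h):h:5;             % grid on (1,5]; the open end 1 is excluded
% if floating-point rounding makes the colon form miss the endpoint 5, use linspace instead:
tspan = linspace(1 + h, 5, 40);  % 40 evenly spaced points from 1+h to 5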

Finding the maximum value of a function under uncertainty

I have three values X, Y and Z, each ranging between 0 and 1 (0 and 1 included).
When I call a function f(X,Y,Z) it returns a value V (also between 0 and 1). My goal is to choose X, Y and Z so that the returned value V is as close as possible to 1.
The selection process should be automated, and the right values for X, Y and Z are unknown.
Due to my use case it is possible to set Y and Z to 1 (the value 1 doesn't have any influence on the output) and search for the best value of X.
After that I can replace X by that value and do the same for Y, then the same procedure for Z.
How can I find the maximum of the function? Is there some kind of gradient descent or hill-climbing algorithm, or something like that?
The whole module is written in Perl, so maybe there is a Perl package that can solve this problem?
You can use simulated annealing. It's a multi-variable optimization technique, also used to get an approximate solution to the Travelling Salesperson problem, and it's one of the search algorithms mentioned in Peter Norvig's Intro to AI book as well.
It's a hill-climbing-style algorithm that depends on random variables, and it won't necessarily give you the 'optimal' answer. You can also vary the number of iterations it runs according to your computational/time needs.
http://en.wikipedia.org/wiki/Simulated_annealing
http://www1bpt.bridgeport.edu/sed/projects/449/Fall_2000/fangmin/chapter2.htm
I suggest you take a look at Math::Amoeba, which implements the Nelder–Mead method for finding stationary points of functions.
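If it helps to see the idea, here is a sketch of the same Nelder–Mead approach in MATLAB (used only because the rest of this page is MATLAB-centric); f is a toy stand-in for your black-box function, and clamping to [0,1] is an assumption about how you want to enforce the bounds:
f = @(x) exp(-((x(1)-0.3)^2 + (x(2)-0.6)^2 + (x(3)-0.9)^2));  % toy function peaking at [0.3 0.6 0.9]
neg_f = @(x) -f(min(max(x, 0), 1));                           % negate (to maximize) and clamp to [0,1]
x0 = [1 1 1];                                                 % start from Y = Z = 1 as described above
[x_best, v_neg] = fminsearch(neg_f, x0);
x_best = min(max(x_best, 0), 1);                              % clamp the reported point to the valid range
v_best = -v_neg                                               % best value found, close to 1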