Matlab Event Finder is occasionally missing events

I've been working on a numerical model that has been giving me some trouble lately. I'm attempting to solve a time series for a system where there is an impact at a certain location. Obviously MATLAB's event detection is designed for exactly this sort of situation, but it's failing me: the solution occasionally crosses the event threshold without the event triggering.
My system is a pendulum with a wall at -30 degrees. It should be pretty simple: solve until the pendulum hits -30, stop, reverse the velocity, and so on. However, as you can see in my picture, it occasionally just swings right past -30 and doesn't stop integrating. The code is mostly a copy-paste job of the ball bounce example with the differential equation changed, but somehow it's not working properly. A plot of the problematic time series is here:
and my code is here: https://gist.github.com/f892a764ca10027a3b89
function matlab_timeser
k1 = .557;
tstart = 0;
tfinal = 1200;
refine = 100;
y0 = [-3.0*pi/180.0; 6.0*pi/180.0];
options = odeset('Events',@events,'Refine',refine);
tout = tstart;
yout = y0.'; % <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
teout = [];
yeout = [];
ieout = [];
while max(tout) < tfinal
[t,y,te,ye,ie] = ode113(@f,[tstart tfinal],y0,options);
% Accumulate output.
nt = length(t);
tout = [tout; t(2:nt)];
yout = [yout; y(2:nt,:)];
teout = [teout; te]; % Events at tstart are never reported.
yeout = [yeout; ye];
ieout = [ieout; ie];
% Set the new initial conditions, with k attenuation.
y0(1) = -30 * pi/180;
y0(2) = -k1*y(nt,2);
% A good guess of a valid first time step is the length of
% the last valid time step, so use it for faster computation.
options = odeset(options,'InitialStep',t(nt)-t(nt-refine),'MaxStep',t(nt)-t(1));
tstart = t(nt);
% plot(tout,yout(:,1)*180/pi)
% hold on
% plot([0 t(nt)],[-30 -30],'r')
% hold off
% pause()
end
plot(tout,yout(:,1)*180/pi)
hold on
plot([0 t(nt)],[-30 -30],'r')
end
function dydt = f(t, y)
wf = 0.8008 * 2 * pi;
param = [6e-4 6.087 2.1e-3 0.236 0.557];
coul1 = param(1);
w1 = param(2);
z1 = param(3);
AL = param(4);
dydt = [y(2); AL*(wf)^2*sin(wf*t)*cos(y(1))-(w1)^2*sin(y(1))-2.0*z1*w1*y(2)-coul1*((y(2)^2)+(w1)^2*cos(y(1)))*sign(y(2))];
end
function [value,isterminal,direction] = events(t,y)
value = y(1) + 30*pi/180; % Detect angle = -30 degrees
isterminal = 1; % Stop the integration
direction = 0; % any direction
end
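One thing I keep wondering about (just a sketch of an alternative, not something I've confirmed fixes it): because each restart puts the pendulum exactly on the wall at -30 degrees, the event value is zero at the start of every integration when direction = 0. An event function that only fires while the angle is decreasing toward the wall would look like this:
function [value,isterminal,direction] = events_decreasing(t,y)
value = y(1) + 30*pi/180; % zero when the pendulum reaches the wall at -30 degrees
isterminal = 1;           % stop the integration
direction = -1;           % only trigger while the angle is decreasing toward the wall
end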
I've tried a few different ODE solvers, and while they give slightly different results, they all seem to miss events here and there.
If anyone has any insights into why event detection may occasionally fail to notice an event, I'd appreciate it! Thanks!

Related

I can't fix the problem in my PID simulation code (I think it's about the discretization of the differentiator)

I'm trying to build a PID controller whose derivative part is multiplied by a low-pass filter; the filter's transfer function is 1/(r*s+1) (with r = 4*Ts).
I use a forward difference to discretize the controller and a state-space model to simulate the plant, and I want the step response of this system.
I used the pid(), feedback(), and step() functions to check whether I'm right.
And yet I was wrong; here is the response.
I also tried PI-only control.
You can see it's almost perfect, so I think the problem is in the derivative part, but I really can't find it. I have already checked the forward difference many times.
I tried a backward difference and got the same curve.
T = 0.05;
Cp = 6;
Ci = 1;
Cd = 7;
P = zeros(1, n+1);
I = zeros(1, n+1);
D = zeros(1, n+1);
r = 4*T;
for i = 1:n
if i == 1
e(i) = U(i)-0; % error of time domain, U is step function
P(i) = Cp*e(i);
I(i) = 0;
D(i) = 0;
else
e(i) = U(i)-y(i-1);
P(i) = Cp*e(i);
I(i) = Ci*T*e(i-1)+I(i-1); % forward
D(i) = (Cd*(e(i)-e(i-1))-D(i-1)*(T-r))/r; % forward
% I(i) = T*Ci*e(i)+I(i-1); % backward
% D(i) = (Cd*(e(i)-e(i-1))+r*D(i-1))/(r+T); % backward
% I(i) = (T*(e(i)+e(i-1))+2*I(i-1))/2; % Bilinear
end
u(i) = P(i) + I(i) + D(i);
x(:,i+1) = x(:,i)+T*(Ao*x(:,i)+Bo*u(i));
y(i) = Co*x(:,i);
end
% this is how I get the right curve.
PIDF = pid(Cp, Ci, Cd, r);
fb = feedback(PIDF*G_sysc, 1); % G_sysc is the transfer function of the plant.
step(fb, N(1:n)*T, 'blue');
I expected it to be almost the same as the Step Response.
Alright... I have figured out by myself why it's different: the feedback() function sets the error to 0 at the first point, and at the next point the error is step input minus output = 1, so the derivative term gets a big positive kick, which makes that output converge more quickly than my PID controller.
Now I only want to know how to edit the pid() and feedback() functions so that the first error is 1, because that makes more sense to me.
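In the meantime, here is a hypothetical tweak to my own loop (not a change to pid()/feedback(), just assuming the pre-step samples are e(0) = 0, I(0) = 0, D(0) = 0) that gives the derivative branch the same first-sample kick by applying the forward-difference formulas at i == 1 as well:
% hypothetical tweak: treat the pre-step samples as e(0) = 0, I(0) = 0, D(0) = 0
% and apply the same forward-difference update at i == 1
e(1) = U(1);        % step input minus zero initial output -> e(1) = 1
P(1) = Cp*e(1);
I(1) = 0;           % I(1) = Ci*T*e(0) + I(0) = 0
D(1) = Cd*e(1)/r;   % D(1) = (Cd*(e(1)-e(0)) - D(0)*(T-r))/r = Cd/r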

Setting a state variable to a given value using ode15s

I'm using ode15s to simulate/solve a set of ODEs. I would like to implement a feature where, upon reaching a given condition during the simulation, a number in the model (e.g., an indicator constant) changes programmatically for a fixed amount of time and then reverts back.
This could be done, for example, using the Lotka-Volterra equations:
dx/dt = alpha*x - beta*x*y
dy/dt = (delta + indicator)*x*y - gamma*y + epsilon*indicator
indicator starts as 0. Let's say that when x reaches 10, I'd like to switch indicator to 1 for 10 time units, and then flip it back to 0.
This can be done in a dirty way by using global variables, however, this is something I'd like to avoid (impossible to parallelize + general avoidance of global variables). Is there a neat alternative way when using ode15s (i.e., I don't know the time step)?
Many thanks for any suggestions!
Edit: As noted correctly by LutzL, wrapping an ODE with non-smooth state without handling events may lead to inaccurate results, "as you can not predict at what time points in what order the ODE function is evaluated" (LutzL).
So the accurate solution is to deal with ODE events. An example for the modified Lotka-Volterra equations is given below, where the event fires if x exceeds 10, and the indicator is then turned on for 10 seconds:
% parameters and times:
params = ones(5,1); % [alpha, ..., epsilon]
z_start = [2, 1];
t_start = 0;
t_end = 30;
options = odeset('Events',@LotkaVolterraModEvents); % set further options here, too.
% wrap ODE function with indicator on and off
LVModODE_indicatorOn = @(t,z)LotkaVolterraModODE(t,z,1, params);
LVModODE_indicatorOff = @(t,z)LotkaVolterraModODE(t,z,0, params);
% storage for simulation values:
data.t = t_start;
data.z = z_start;
data.teout = [];
data.zeout = zeros(0,2);
data.ieout = [];
% temporary loop variables:
z_0 = z_start;
t_0 = t_start;
isIndicatorActive = false;
while data.t(end) < t_end % until the end time is reached
if isIndicatorActive
% integrate for 10 seconds, if indicator is active
active_seconds = 10;
[t, z, te,ze,ie] = ode15s(LVModODE_indicatorOn, [t_0 t_0+active_seconds], z_0, options);
else
% integrate until end or event, if indicator is not active.
[t, z, te,ze,ie] = ode15s(LVModODE_indicatorOff, [t_0 t_end], z_0, options);
isIndicatorActive = true;
end
%append data to storage
t_len = length(t);
data.t = [data.t; t(2:end)];
data.z = [data.z; z(2:end,:)];
data.teout = [data.teout; te];
data.zeout = [data.zeout; ze];
data.ieout = [data.ieout; ie];
% reinitialize start values for next iteration of loop
t_0 = t(end);
z_0 = z(end, :);
% use the length of the last integration step as the initial step for the next call
options = odeset(options,'InitialStep',t(end) - t(end-1));
end
%% plot your results:
figure;
plot(data.t, data.z(:,1), data.t, data.z(:,2));
hold all
plot(data.teout, data.zeout(:,1), 'ok');
legend('x','y', 'Events in x')
%% Function definitions for integration and events:
function z_dot = LotkaVolterraModODE(t, z, indicator, params)
x = z(1); y= z(2);
% state equations: modified Lotka-Volterra system
z_dot = [params(1)*x - params(2)*y;
(params(4) + indicator)*x*y - params(3)*y + params(5)*indicator];
end
function [value, isTerminal, direction] = LotkaVolterraModEvents(t,z)
x = z(1);
value = x-10; % event on rising edge when x passes 10
isTerminal = 1; %stop integration -> has to be reinitialized from outer logic
direction = 1; % only event on rising edge (i.e. x(t_e-)<10 and x(t_e+)>10)
end
The main work is done in the while loop, where the integration takes place.
(Old post) The following solution may lead to inaccurate results; handling events, as explained in the first part, should be preferred.
You could wrap your problem in a class, which is able to hold a state (i.e. its properties). The class should have a method, which is used as odefun for the variable-step integrator. See also here on how to write classes in MATLAB.
The example below demonstrates how it could be achieved for the example you provided:
% file: MyLotkaVolterra.m
classdef MyLotkaVolterra < handle
properties(SetAccess=private)
%define, if the modified equation is active
indicator;
% define the start time, where the condition turned active.
tStart;
% ode parameters [alpha, ..., epsilon]
params;
end
methods
function self = MyLotkaVolterra(alpha, beta, gamma, delta, epsilon)
self.indicator = 0;
self.tStart = 0;
self.params = [alpha, beta, gamma, delta, epsilon];
end
% ODE function for the state z = [x;y] and time t
function z_dot = odefun(self, t, z)
x = z(1); y= z(2);
if (x>=10 && ~self.indicator)
self.indicator = 1;
self.tStart = t;
end
%condition to turn indicator off:
if (self.indicator && t - self.tStart >= 10)
self.indicator = false;
end
% state equations: modified Lotka-Volterra system
z_dot = [self.params(1)*x - self.params(2)*y;
(self.params(4) + self.indicator)*x*y - ...
self.params(3)*y + self.params(5)*self.indicator];
end
end
end
This class could be used as follows:
% your ode using code:
% 1. create an object (`lvObj`) from the class with parameters alpha = ... = epsilon = 1
lvObj = MyLotkaVolterra(1, 1, 1, 1, 1);
% 2. pass its `odefun` method to the integrator (example call with ode15s)
[t,y] = ode15s(@lvObj.odefun, [0,5], [9;1]); % 5 seconds

HOG descriptor for multiple people detection

I am doing real-time people detection using a HOG-LBP descriptor, a sliding-window approach for the detector, and LibSVM for the classifier. However, after classification I never get multiple detected people; sometimes there is only one, or none at all. I guess I have a problem in my classification step. Here is my classification code:
label = ones(length(featureVector),1);
P = cell2mat(featureVector);
% each row of P' corresponds to a window
% classifying each window
[~, predictions] = svmclassify(P', label,model);
% set the threshold for getting multiple detections
% the threshold value is 0.6
get_detect = predictions.*(predictions>0.6);
% find the nonzero (detected) entries and their indices
[r,c,v]= find(get_detect);
%% Creating the bounding box for detection
for ix=1:length(r)
rects{ix}= boxPoint{r(ix)};
end
if (isempty(rects))
rects2=[];
else
rects2 = cv.groupRectangles(rects,3,'EPS',0.35);
end
for i = 1:numel(rects2)
rectangle('Position',[rects2{i}(1),rects2{i}(2),64,128], 'LineWidth',2,'EdgeColor','y');
end
I have posted my whole code here: HOG with SVM (sliding window technique for multiple people detection).
I really need help with this. Thanks.
If you have problems with the sliding window, you can use this code:
topLeftRow = 1;
topLeftCol = 1;
[bottomRightCol bottomRightRow d] = size(im);
fcount = 1;
% these loops scan the entire image and extract features for each sliding window
for y = topLeftCol:bottomRightCol-wSize(2)
for x = topLeftRow:bottomRightRow-wSize(1)
p1 = [x,y];
p2 = [x+(wSize(1)-1), y+(wSize(2)-1)];
po = [p1; p2];
img = imcut(po,im);
featureVector{fcount} = HOG(double(img));
boxPoint{fcount} = [x,y];
fcount = fcount+1;
x = x+1; % note: this has no effect; MATLAB for-loops reset the loop variable on every iteration
end
end
label = ones(length(featureVector),1);
P = cell2mat(featureVector);
% each row of P' corresponds to a window
[~, predictions] = svmclassify(P',label,model); % classifying each window
[a, indx]= max(predictions);
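If multiple detections are wanted instead of just the single best window, a hedged variant (reusing the threshold idea and the boxPoint cell array from the question) would keep every window whose score clears a threshold rather than taking the max:
thr = 0.6;                         % assumed decision threshold
idx = find(predictions > thr);     % all windows classified as a person
rects = cell(1, numel(idx));
for ix = 1:numel(idx)
    rects{ix} = boxPoint{idx(ix)}; % top-left corner of each detected window
end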

Using Improved Euler Method in Matlab

I am trying to solve a 2nd order differential equation in MATLAB. I was able to do this using the forward Euler method, but since that requires quite a small time step to get accurate results, I have looked into some other options, more specifically the Improved Euler method (Heun's method).
I understand the principle of the Improved Euler method: it first takes a forward Euler step as an estimate and then uses the slope at that estimate to correct the step. But I am not totally sure whether what I have written is correct.
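For reference, here is the generic Heun step as I understand it, written for a step size dt and a right-hand side f(t,y) just to fix notation (not my actual system):
% predictor: plain forward Euler estimate at t(ii+1)
y_pred  = y(ii) + dt*f(t(ii), y(ii));
% corrector: average the slopes at the old point and at the prediction
y(ii+1) = y(ii) + (dt/2)*( f(t(ii), y(ii)) + f(t(ii+1), y_pred) );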
1) Can you check whether my code uses the Improved Euler method correctly?
2) In my code, on the last line before the end, should the second B(ii) be B(ii+1)?
I have written a simplified code for both options. Here it is:
t = 0:0.01:100;
dt = t(2)-t(1); % Time step
%Constants%
M = 20000;
m_a = 10000;
c= 15000;
k_spring = 40000;
B = rand(1,length(t)+1);
%% Forward Euler Method %%
x = zeros(1,length(t)+1); % Pre-allocation
u = zeros(1,length(t)+1); % Pre-allocation
x(1) = 1; % Initial condition
u(1) = 0; % Initial condition
for ii = 1:length(t)
x(ii+1) = x(ii) + dt*u(ii);
u(ii+1) = u(ii) + dt * ((1/(M+m_a)) * -(c+k_spring+B(ii))*x(ii));
end
%% Improved Euler Method %%
x1 = zeros(1,length(t)+1); % Pre-allocation
u1 = zeros(1,length(t)+1); % Pre-allocation
x1(1) = 1; % Initial condition
u1(1) = 0; % Initial condition
for ii = 1:length(t)
x1(ii+1) = x1(ii) + dt*u1(ii);
u1(ii+1) = u1(ii) + dt * ((1/(M+m_a)) * -(c+k_spring+B(ii))*x1(ii)); %Estimate
u1(ii+1) = u1(ii) + (dt/2) * ( ((1/(M+m_a)) * -(c+k_spring+B(ii))*x1(ii)) + ((1/(M+m_a)) * -(c+k_spring+B(ii))*x1(ii+1)) ); %Correction
end
Thanks!
You should follow the basic programming principle of turning things that are used repeatedly into separate functions. Suppose you did so, and the function evaluating the ODE right-hand side is called odefunc.
function [dotx, dotu] = odefunc(x,u,B)
% M, m_a, c, and k_spring are assumed to be accessible here
% (e.g., as a nested function or via additional arguments).
dotx = u;
dotu = (1/(M+m_a)) * -(c+k_spring+B)*x;
end
Then
for ii = 1:length(t)
%% Predictor
[dotx1,dotu1] = odefunc(x1(ii), u1(ii), B(ii));
%% One corrector step
[dotx2,dotu2] = odefunc(x1(ii)+dotx1*dt, u1(ii)+dotu1*dt, B(ii+1));
x1(ii+1) = x1(ii) + 0.5*(dotx1+dotx2)*dt;
u1(ii+1) = u1(ii) + 0.5*(dotu1+dotu2)*dt;
end

MATLAB: One Step Ahead Neural Network Timeseries Forecast

Intro: I'm using MATLAB's Neural Network Toolbox in an attempt to forecast time series one step into the future. Currently I'm just trying to forecast a simple sinusoidal function, but hopefully I will be able to move on to something a bit more complex after I obtain satisfactory results.
Problem: Everything seems to work fine; however, the predicted forecast tends to be lagged by one period. Neural network forecasting isn't much use if it just outputs the series delayed by one unit of time, right?
Code:
t = -50:0.2:100;
noise = rand(1,length(t));
y = sin(t)+1/2*sin(t+pi/3);
split = floor(0.9*length(t));
forperiod = length(t)-split;
numinputs = 5;
forecasted = [];
msg = '';
for j = 1:forperiod
fprintf(repmat('\b',1,numel(msg)));
msg = sprintf('forecasting iteration %g/%g...\n',j,forperiod);
fprintf('%s',msg);
estdata = y(1:split+j-1);
estdatalen = size(estdata,2);
signal = estdata;
last = signal(end);
[signal,low,high] = preprocess(signal'); % pre-process
signal = signal';
inputs = signal(rowshiftmat(length(signal),numinputs));
targets = signal(numinputs+1:end);
%% NARNET METHOD
feedbackDelays = 1:4;
hiddenLayerSize = 10;
net = narnet(feedbackDelays,[hiddenLayerSize hiddenLayerSize]);
net.inputs{1}.processFcns = {'removeconstantrows','mapminmax'};
signalcells = mat2cell(signal,[1],ones(1,length(signal)));
[inputs,inputStates,layerStates,targets] = preparets(net,{},{},signalcells);
net.trainParam.showWindow = false;
net.trainParam.showCommandLine = false;
net.trainFcn = 'trainlm'; % Levenberg-Marquardt
net.performFcn = 'mse'; % Mean squared error
[net,tr] = train(net,inputs,targets,inputStates,layerStates);
next = net(inputs(end),inputStates,layerStates);
next = postprocess(next{1}, low, high); % post-process
next = (next+1)*last;
forecasted = [forecasted next];
end
figure(1);
plot(1:forperiod, forecasted, 'b', 1:forperiod, y(end-forperiod+1:end), 'r');
grid on;
Note:
The function 'preprocess' simply converts the data into logged % differences and 'postprocess' converts the logged % differences back for plotting. (Check EDIT for preprocess and postprocess code)
Results:
BLUE: Forecasted Values
RED: Actual Values
Can anyone tell me what I'm doing wrong here? Or perhaps recommend another method to achieve the desired results (lagless prediction of sinusoidal function, and eventually more chaotic timeseries)? Your help is very much appreciated.
EDIT:
It's been a few days now and I hope everyone has enjoyed their weekend. Since no solutions have emerged, I've decided to post the code for the helper functions 'postprocess.m', 'preprocess.m', and their helper function 'normalize.m'. Maybe this will help get the ball rolling.
postprocess.m:
function data = postprocess(x, low, high)
% denormalize
logdata = (x+1)/2*(high-low)+low;
% inverse log data
sign = logdata./abs(logdata);
data = sign.*(exp(abs(logdata))-1);
end
preprocess.m:
function [y, low, high] = preprocess(x)
% differencing
diffs = diff(x);
% calc % changes
chngs = diffs./x(1:end-1,:);
% log data
sign = chngs./abs(chngs);
logdata = sign.*log(abs(chngs)+1);
% normalize logrets
high = max(max(logdata));
low = min(min(logdata));
y=[];
for i = 1:size(logdata,2)
y = [y normalize(logdata(:,i), -1, 1)];
end
end
normalize.m:
function Y = normalize(X,low,high)
%NORMALIZE Linear normalization of X between low and high values.
if length(X) <= 1
error('Length of X input vector must be greater than 1.');
end
mi = min(X);
ma = max(X);
Y = (X-mi)/(ma-mi)*(high-low)+low;
end
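For anyone following along, a quick round-trip sketch with made-up numbers showing what these helpers produce (the levels themselves are only rebuilt afterwards via next = (next+1)*last in the main loop):
x = [100; 110; 99];               % made-up levels
[z, lo, hi] = preprocess(x);      % normalized, signed-log % changes (2 rows)
chngs = postprocess(z, lo, hi);   % back to plain % changes: [0.10; -0.10]
levels = x(1:end-1).*(1 + chngs); % [110; 99], i.e. x(2:end) recovered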
I didn't check your code, but I made a similar test to predict sin() with a NN. The result seems reasonable, without a lag. I think your bug is somewhere in the synchronization of the predicted values with the actual values.
Here is the code:
%% init & params
t = (-50 : 0.2 : 100)';
y = sin(t) + 0.5 * sin(t + pi / 3);
sigma = 0.2;
n_lags = 12;
hidden_layer_size = 15;
%% create net
net = fitnet(hidden_layer_size);
%% train
noise = sigma * randn(size(t));
y_train = y + noise;
out = circshift(y_train, -1);
out(end) = nan;
in = lagged_input(y_train, n_lags);
net = train(net, in', out');
%% test
noise = sigma * randn(size(t)); % new noise
y_test = y + noise;
in_test = lagged_input(y_test, n_lags);
out_test = net(in_test')';
y_test_predicted = circshift(out_test, 1); % sync with actual value
y_test_predicted(1) = nan;
%% plot
figure,
plot(t, [y, y_test, y_test_predicted], 'linewidth', 2);
grid minor; legend('orig', 'noised', 'predicted')
and the lagged_input() function:
function in = lagged_input(in, n_lags)
for k = 2 : n_lags
in = cat(2, in, circshift(in(:, end), 1));
in(1, k) = nan;
end
end
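A small illustration (with made-up values) of what lagged_input builds, since each added column is just the previous column shifted down by one sample:
% lagged_input((1:5)', 3) returns
%     1   NaN   NaN
%     2     1   NaN
%     3     2     1
%     4     3     2
%     5     4     3
% i.e. row i holds [y(i), y(i-1), y(i-2)], so each training input is the
% current sample plus the n_lags-1 most recent past samples.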