Matlab: Execute the loop until the specified conditions are met

I recently came across a simple issue that I could not solve on my own. I have a Simulink model that uses a MATLAB function for some calculations inside the model. The idea is that at some specified moment in time I need to change the voltage of an electric drive, and I need to keep changing it until the rotor’s position reaches another specified value. For instance:
if control_signal == 1                 % command to start the execution
    while Angle ~= 180                 % desired angle is 180
        control_voltage = 5 - 0.1;     % 5 V is the initial value, 0.1 is the increment of the voltage change
    end
end
So what I expected to happen is that the loop would be executed until the angle of 180 is reached, at some value of the control voltage (for instance 4.6). But when I run this code, Simulink cannot execute the model: without any warning or error, the simulation freezes at some stage (when the main condition kicks in). So it looks like the model cannot proceed once the loop starts executing. Can somebody help me with the code? The described behaviour of the model during simulation is definitely caused by the code above.
Thank you in advance.

My guess is that you compute the angle beforehand using trigonometric functions and convert it from radians to degrees, so it will never be exactly 180 degrees. You should choose some tolerance epsilon such that any angle within epsilon of 180 is close enough for your application, and use that tolerance in your loop condition.
Also, based on your description of the problem you are trying to solve, it sounds like you actually want to check whether the angle has not yet reached 180, not whether it equals 180 exactly.
Also, you are incrementing your voltage incorrectly. Every time you enter the loop, your control_voltage variable equals 5 - 0.1, which means it is constant and will equal 4.9 no matter how many times the loop runs. This is the correct way:
control_voltage = 5;                           % initial value
if control_signal == 1                         % command to start the execution
    while abs(Angle - 180) > epsilon           % epsilon is the tolerance of your choice
        control_voltage = control_voltage - 0.1;   % now it is smaller by 0.1 every time the loop runs
    end
end
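Not part of the original answer, but worth noting: if this logic sits inside a Simulink MATLAB Function block, a while loop that waits on the Angle input can never terminate within a single time step, because the input cannot change mid-step; that would match the reported freeze. A minimal per-step sketch using a persistent variable, assuming Angle and control_signal are block inputs and epsilon is chosen as above:
function control_voltage = fcn(control_signal, Angle)
% Sketch only: apply one 0.1 V decrement per simulation step instead of
% looping, so the block returns every step and Simulink can advance time.
persistent v
if isempty(v)
    v = 5;                                     % initial control voltage (V)
end
epsilon = 0.5;                                 % hypothetical tolerance in degrees
if control_signal == 1 && abs(Angle - 180) > epsilon
    v = v - 0.1;                               % keep reducing until the angle is reached
end
control_voltage = v;
end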

Related

Proportional Control Matlab

I am working on a boiler project in matlab with a partner. We are still new to this program and are currently working on the proportional control part of the boiler program. We get the three graphs we are supposed to, but there is a mysterious dip in our center graph, as seen below. We have been trying to figure out what causes the dip and how to get rid of it.
Here is our code:
clear all;
close all;
endtime=60;
time=0;
ResPer=0.05; %declares basic functions
setpoint=7;
flowrate=2.4;
valveposition=0.2;
delay=5;
timeaxis=[1:endtime];
valveaxis=0; %declares arrays related to time and graphing
steamaxis=0;
propconstant=0.25;
RelError=0; %declares arrays regarding the proportional controller
desiredvalve=0.5823;
while time<endtime
    %RelError=SteamRelativeError(flowrate, setpoint);
    %calculates the Relative Error used in the rest of the function
    if (time <= delay)
        valveposition=0.75*valveposition+0.25*(desiredvalve-propconstant*RelError);
        %deals with the valve position before the time exceeds the delay
        %valveaxis(time)=valveposition;
    else
        valveaxis(time-delay)=0.75*valveposition+0.25*(desiredvalve-propconstant*RelError); %calculates and stores valve position after the delay has been passed
    end
    RelError=SteamRelativeError(flowrate, setpoint); %calculates the Relative Error used in the rest of the function
    time=time+1; %increments time
    flowrate=ValvePerToFlowRate(valveposition); %calculates flow rate from valve position
    valveaxis(time)=valveposition; %stores temporary valve position
    steamaxis(time)=flowrate; %stores steam flow rate
    erroraxis(time)=RelError; %stores the Relative Error
end
subplot(3,1,2), plot(timeaxis, valveaxis, 'b'); axis([0 endtime 0.2 0.8]);
xlabel('Time(min)'); ylabel('Valve%');
subplot(3,1,1), plot(timeaxis, steamaxis, 'b'); axis([0 endtime 2 setpoint+2]);
xlabel('Time(min)'); ylabel('Steam mass flow rate (lbm/s)');
title('Proportional Constant=0.25'); %graphs functions
subplot(3,1,3), plot(timeaxis, erroraxis, 'b'); axis([0 endtime -0.6 0.2]);
xlabel('Time(min)'); ylabel('Error');
And here are our graphs:
The dip occurs because you set valveaxis in two different places. In your if-else block you set it to:
valveaxis(time-delay)=0.75*valveposition+0.25*(desiredvalve-propconstant*RelError);
However, outside of that block you set it to:
valveaxis(time)=valveposition;
At each time step, valveaxis is set using the second assignment, only to be overwritten by the first formula delay iterations later. The very last values never get overwritten, since the loop stops at time==60, so valveaxis(54) is the last value set by the first formula. This is why the dip occurs between the 54th and 55th values.
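A sketch of one possible fix (not the asker's actual code; it assumes the intent is to update valveposition every step and write valveaxis in exactly one place, dropping the delayed write):
while time < endtime
    valveposition = 0.75*valveposition + 0.25*(desiredvalve - propconstant*RelError);
    RelError = SteamRelativeError(flowrate, setpoint);
    time = time + 1;
    flowrate = ValvePerToFlowRate(valveposition);
    valveaxis(time) = valveposition;   % the only line that writes valveaxis
    steamaxis(time) = flowrate;
    erroraxis(time) = RelError;
end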

Hold constant signal

I have the model of a dynamic system in Simulink (I cannot change the programming framework). It can be described as an oscillator subject to periodic oscillations. I am trying to control its motion, in particular, to maximize it (for energy generation).
With latching control (a popular control strategy), the idea is to 'latch', i.e. lock in place, the device when its velocity is 0 for a predefined time, and then release it until its velocity reaches 0 again.
So, what I need to do in Simulink is to output a signal 1 once the velocity signal reaches (or is close to) 0, hold it constant for a time period (at 1), then release it (the signal becomes 0), and repeat the process once the velocity reaches 0 again.
I have found a good blog on holding signals constant in Simulink:
http://blogs.mathworks.com/simulink/2014/08/06/how-do-you-hold-the-value-of-a-signal/
However, in my case I have two conditions determining the signal: the magnitude of the velocity and the elapsed time within the holding period. The problem is that as soon as the period is finished and the device is released (signal = 0), the velocity is still very small, which could immediately trigger an incorrect 1 if a simple if condition is used.
I think using an S-function may be the best solution, but then I would have to use a fixed time step. Are there any Simulink-native solutions for this problem?
I ended up using a Matlab function as a temporary solution, and it is very effective. I have taken inspiration from https://uk.mathworks.com/matlabcentral/answers/11323-hold-true-value-for-finite-length-of-time
u is the velocity signal.
function y = fcn(u,nlatch)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% This function is used to determine the latching signal.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Using persistent memory:
persistent tick started sign;
% Initialize variables:
if isempty(tick)
    tick = 0;
    started = 0;
    sign = (u>0);
end
U = 0;      % no latching
s = (u>0);
if s~=sign
    started = 1;
end
if started
    if tick<nlatch
        tick = tick+1;
        U = 1;
    else
        tick = 0;
        started = 0;
        sign = s;
    end
end
y = U;
end
However, as I mentioned, I have to use a fixed-step solver, which is no big deal to me, but it could create problems for other users.
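As a side note (my assumption, not something stated in the original post): with a fixed-step solver the function is called once per step, so nlatch is simply the desired latching duration divided by the step size:
dt      = 0.01;                 % hypothetical fixed step size (s)
T_latch = 0.5;                  % hypothetical latching duration (s)
nlatch  = round(T_latch/dt);    % number of steps to hold the output at 1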
If anyone has a more "Simulink-native" solution, please let me know!
Edit
I have now modified the function: the latching is applied when the velocity signal changes sign, rather than when its magnitude is small (abs(u)<0.005) as before, which was too case-specific.
A colleague of mine has also found a Simulink-native solution.
However, the Matlab function is faster (less computationally intensive) when the same time step is used. Perhaps the least computationally intensive solution would be a C S-function.

K-means Stopping Criteria in Matlab?

I am implementing the k-means algorithm in Matlab without using the built-in kmeans function. The stopping criterion is that the centroids do not change from one iteration to the next, but I cannot implement it in Matlab. Can anybody help?
Thanks
Setting no change as a stopping criterion is a bad idea. There are a few main reasons you shouldn't use a zero-change condition:
1. Even for a well-behaved function, the difference between zero change and a very small change (say 1e-5) could be 1000+ iterations, so you are wasting time trying to get the centroids to be exactly the same. Computers usually keep far more digits than we are interested in; if you only need 1 digit of accuracy, why wait for the computer to find an answer within 1e-31?
2. Computers have floating-point errors everywhere. Try some easily reversible matrix operations like a = rand(3,3); b = a*a*inv(a); a-b. Theoretically this should be 0, but you will see it isn't. These errors alone could prevent your program from ever stopping.
3. Dithering. Let's say we have a 1-D k-means problem with 3 numbers that we want to split into 2 groups. One iteration the grouping can be a,b vs c; the next iteration it could be a vs b,c; the next a,b vs c again, and so on. This is of course a simplified example, but there can be instances where a few data points dither between clusters, and you end up with a never-ending algorithm. Since those few points keep getting reassigned, the change will never be 0.
The solution is to use a delta threshold: subtract the current values from the previous ones, and if the difference is less than a threshold you are done. This on its own is powerful, but as with any loop you need a backup escape plan, and that is a max_iterations variable. Look at Matlab's documentation for kmeans; even they have a MaxIter option (default 100), so even if your k-means doesn't converge, at least it won't run endlessly. Something like this might work:
%problem specific
max_iter = 100;
%choose a small number appropriate to your problem
thresh = 1e-3;
%ensures it runs the first time
delta_mu = thresh + 1;
num_iter = 0;
%initialize curr_mu before the loop, e.g. with randomly chosen data points
curr_mu = initial_centroids;
%do your kmeans in the loop
while (delta_mu > thresh && num_iter < max_iter)
    %save these right away
    old_mu = curr_mu;
    %calculate new means and variances, this is the standard kmeans iteration
    %then store the values in a variable called curr_mu
    curr_mu = newly_calculated_values;
    %use the two norm to find the delta as a single number, no matter what
    %the original dimensionality of mu was. If old_mu - curr_mu is 0,
    %the norm is still 0, so it behaves well as a distance measure.
    delta_mu = norm(old_mu - curr_mu, 2);
    num_iter = num_iter + 1;
end
Edit: in case you don't know, the 2-norm is essentially the Euclidean distance.
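For example (a quick illustrative check, not part of the original answer):
x = [0 0];
y = [3 4];
norm(x - y, 2)    % returns 5, the Euclidean distance between the two points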

Local volatility model Matlab

I am trying to do a Monte Carlo simulation of a local volatility model, i.e.
dSt = sigma(St,t) * St dWt .
Unfortunately the Matlab sde class cannot be applied, as the volatility function is rather complex.
For this reason I am simulating this SDE manually with the Euler-Maruyama method. More specifically, I used Ito's formula to get an SDE for the log-process Xt = log(St):
dXt = -1/2 sigma^2(exp(Xt),t) dt + sigma(exp(Xt),t) dWt
The code for this is the following:
function [S]=geom_bb(sigma,T,N,m)
% T.. Time horizon, sigma.. standard deviation, N.. timesteps, m.. dimensions
X=zeros(N+1,m);
dt=T/N;
t=(0:N)'*dt;
dW=randn(N,m);
for j=1:N
X(j+1,:)=X(j,:) - 1/2* sigma(exp(X(j,:)),t(j))^2 * sqrt(dt) + sigma(exp(X(j,:)),t(j))*dW(j,:);
end
S=exp(X*sqrt(dt));
end
This code works rather well for small sigma; however, for sigma around 10 the process S always tends to zero. This should not happen, as S is a martingale and therefore has expectation 1 (at least for constant sigma).
The process X itself, however, should be simulated correctly, as its mean is exact.
Can anyone help me with this issue? Is this only due to numerical rounding errors? Is there another simulation method that should be preferred to solve this problem?
First, are you sure S=exp(X*sqrt(dt)) outside the loop is doing what you want? Why not compute it inside the loop to start with? You are already using exp(X(j,:)) as the argument of sigma() inside the loop in any case, and there it is missing the sqrt(dt).
Beyond that, suggested ways to improve the behaviour: use the Milstein scheme instead, increase the number of timesteps, and make sure your sigma() value is commensurate with your timestep. A sigma of 10 means 1000% volatility, i.e. moves of around 60% per day. Assuming dt is more than a few minutes, this simply can't be good.
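For concreteness, here is a sketch of the corrected Euler-Maruyama scheme the answer is hinting at (my assumptions: the drift is scaled by dt, the Brownian increments by sqrt(dt), and S is recovered as exp(X) with no extra scaling):
function S = geom_bb_em(sigma,T,N,m)
% T.. time horizon, sigma(S,t).. local volatility function, N.. timesteps, m.. paths
X  = zeros(N+1,m);
dt = T/N;
t  = (0:N)'*dt;
dW = sqrt(dt)*randn(N,m);                    % Brownian increments ~ N(0,dt)
for j = 1:N
    sig = sigma(exp(X(j,:)),t(j));
    X(j+1,:) = X(j,:) - 1/2*sig.^2*dt + sig.*dW(j,:);
end
S = exp(X);                                  % no sqrt(dt) in the exponent
end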

Matlab: if statements and abs() function in variable-step ODE solvers

I was reading a post online where the person mentioned that using "if statements" and "abs()" functions can have negative repercussions in MATLAB's variable-step ODE solvers (like ODE45). According to the OP, they can significantly affect the time step (forcing it to become very small) and give poor results when the differential equations are finally integrated. I was wondering whether this is true, and if so, why. Also, how can this problem be mitigated without resorting to fixed-step solvers? I have given example code below to show what I mean:
function [Z,Y] = sauters(We,Re,rhos,nu_G,Uinj,Dinj,theta,ts,SMDs0,Uzs0,...
Uts0,Vzs0,zspan,K)
Y0 = [SMDs0;Uzs0;Uts0;Vzs0]; %Initial Conditions
options = odeset('RelTol',1e-7,'AbsTol',1e-7); %Tolerance Levels
[Z,Y] = ode45(@func,zspan,Y0,options);
function DY = func(z,y)
DY = zeros(4,1);
%Calculate Local Droplet Reynolds Numbers
Rez = y(1)*abs(y(2)-y(4))*Dinj*Uinj/nu_G;
Ret = y(1)*abs(y(3))*Dinj*Uinj/nu_G;
%Calculate Droplet Drag Coefficient
Cdz = dragcof(Rez);
Cdt = dragcof(Ret);
%Calculate Total Relative Velocity and Droplet Reynolds Number
Utot = sqrt((y(2)-y(4))^2 + y(3)^2);
Red = y(1)*abs(Utot)*Dinj*Uinj/nu_G;
%Calculate Derivatives
%SMD
if(Red > 1)
DY(1) = -(We/8)*rhos*y(1)*(Utot*Utot/y(2))*(Cdz*(y(2)-y(4)) + ...
Cdt*y(3)) + (We/6)*y(1)*y(1)*(y(2)*DY(2) + y(3)*DY(3)) + ...
(We/Re)*K*(Red^0.5)*Utot*Utot/y(2);
elseif(Red < 1)
DY(1) = -(We/8)*rhos*y(1)*(Utot*Utot/y(2))*(Cdz*(y(2)-y(4)) + ...
Cdt*y(3)) + (We/6)*y(1)*y(1)*(y(2)*DY(2) + y(3)*DY(3)) + ...
(We/Re)*K*(Red)*Utot*Utot/y(2);
end
%Axial Droplet Velocity
DY(2) = -(3/4)*rhos*(Cdz/y(1))*Utot*(1 - y(4)/y(2));
%Tangential Droplet Velocity
DY(3) = -(3/4)*rhos*(Cdt/y(1))*Utot*(y(3)/y(2));
%Axial Gas Velocity
DY(4) = (3/8)*((ts - ts^2)/(z^2))*(cos(theta)/(tan(theta)^2))*...
(Cdz/y(1))*(Utot/y(4))*(1 - y(4)/y(2)) - y(4)/z;
end
end
Where the function "dragcof" is given by the following:
function Cd = dragcof(Re)
if(Re <= 0.01)
Cd = (0.1875) + (24.0/Re);
elseif(Re > 0.01 && Re <= 260.0)
Cd = (24.0/Re)*(1.0 + 0.1315*Re^(0.32 - 0.05*log10(Re)));
else
Cd = (24.0/Re)*(1.0 + 0.1935*Re^0.6305);
end
end
This is because derivatives that are computed using if-statements, modulus operations (abs()), or things like unit step functions, Dirac deltas, etc., will introduce discontinuities in the value of the solution or its derivative(s), resulting in kinks, jumps, inflection points, etc.
This implies that the solution to the ODE has a complete change in behavior at the relevant times. What variable-step integrators will do is:
1. detect this,
2. recognize that they won't be able to use information directly beyond the "problem point",
3. decrease the step, and repeat from the top, until the problem point satisfies the accuracy demands.
Therefore, there will be many failed steps and reductions in step size near the problem points, negatively affecting the overall integration time.
Variable step integrators will continue to produce good results, however. Constant step integrators are not a good remedy for this sort of problem, since they are not able to detect such problems in the first place (there's no error estimation).
What you could do is simply split the problem up in multiple parts. If you know beforehand at what points in time the changes will occur, you just start a new integration for each interval, each time using the output of the previous integration as the initial value for the next one.
If you don't know beforehand where the problems will be, you could use a very nice feature of Matlab's ODE solvers called event functions (see the documentation). You let the solver detect the event (a change of sign in the derivative, a change of condition in the if-statement, or whatever), and terminate the integration when such an event is detected. Then start a new integration from the time of the event, using the final state of the previous integration as the initial condition, as before, until the final time is reached.
There will still be a slight penalty in overall execution time this way, since Matlab will try to detect the location of the event accurately. However, it is still much better than running the integration blindly when it comes to both execution time and accuracy of the results.
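A sketch of what the event-function mechanism looks like in code (the event condition here, y(1) crossing a hypothetical threshold y_crit, is only a placeholder; in the real model the switching condition from the if-statement would go there, and the event function would be nested inside sauters so it can see the same parameters as func):
opts = odeset('RelTol',1e-7,'AbsTol',1e-7,'Events',@myEvent);
[Z1,Y1,Ze,Ye] = ode45(@func, zspan, Y0, opts);             % integrate until the event fires
if ~isempty(Ze)                                            % the event was hit before the end of zspan
    [Z2,Y2] = ode45(@func, [Ze(end) zspan(end)], Ye(end,:), opts);
end

function [value,isterminal,direction] = myEvent(z,y)
    y_crit     = 1;             % placeholder threshold
    value      = y(1) - y_crit; % zero when the switching condition is crossed
    isterminal = 1;             % stop this integration leg at the event
    direction  = 0;             % detect crossings in either direction
end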
Yes, it is true, and it happens because your solution is not smooth enough at some points.
Assume you want to integrate y'(t) = f(t,y). Then whatever happens in f gets integrated to become y. Thus, if your definition of f contains
- abs(), then f has a kink and y is still smooth and once differentiable;
- an if, then f has a jump, y has a kink, and there is no further differentiability.
Matlab's ODE45 presumes that your solution is 5 times differentiable, and tries to ensure an accuracy of order 4. Non-smooth points of your function are misinterpreted as stiffness, which leads to small step sizes and even to breakdowns.
What you can do: because of the lack of smoothness you cannot expect high accuracy anyway, so ODE23 might be a better choice. In the worst case, you have to stick to first-order schemes.
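As a small addendum (not in the original answer): ode23 takes the same arguments as ode45, so switching solvers is a one-line change, e.g.
[Z,Y] = ode23(@func, zspan, Y0, options);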