Using the Kharitonov method to design a robust controller for an uncertain model - matlab

The transfer functions describing the system are:
G11 = 1 / (29s^3 + 40s^2 + 30s + 20), G12 = 1 / (50s^3 + 10s^2 + 70s + 2),
G21 = 1 / (80s^3 + 10s^2 + 90s + 5), G22 = 1 / (11s^3 + 4s^2 + 66s + 7).
Can I treat these as one uncertain plant Gij = 1 / (a s^3 + b s^2 + c s + d), where
a min = 11, a max = 80; b min = 4, b max = 40;
c min = 30, c max = 90; d min = 2, d max = 20?
How can we apply Kharitonov's theorem to design a PID controller for an uncertain system described by such interval transfer functions, and how do we plot the stable region?
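For the stability side of this, Kharitonov's theorem reduces the question "is every polynomial a s^3 + b s^2 + c s + d with coefficients in those intervals Hurwitz?" to checking four extreme polynomials. A minimal sketch of that test in Python with numpy (the variable names are mine; the PID design and stable-region plot would be built by repeating this test over a grid of controller gains):

    import numpy as np

    # coefficient intervals, ordered from s^0 upward: d, c, b, a
    lo = [2, 30, 4, 11]    # d min, c min, b min, a min
    hi = [20, 90, 40, 80]  # d max, c max, b max, a max

    # the four Kharitonov polynomials (coefficients of s^0, s^1, s^2, s^3)
    K = [
        [lo[0], lo[1], hi[2], hi[3]],  # low, low, high, high
        [hi[0], hi[1], lo[2], lo[3]],  # high, high, low, low
        [lo[0], hi[1], hi[2], lo[3]],  # low, high, high, low
        [hi[0], lo[1], lo[2], hi[3]],  # high, low, low, high
    ]

    for n, k in enumerate(K, 1):
        roots = np.roots(k[::-1])                 # np.roots expects highest power first
        stable = all(r.real < 0 for r in roots)   # Hurwitz: open left half-plane
        print("K%d stable: %s" % (n, stable))

Note that with this unity-numerator plant and a PID C(s) = kp + ki/s + kd s in unity feedback, the closed-loop characteristic polynomial is a s^4 + b s^3 + (c + kd) s^2 + (d + kp) s + ki, which is again an interval polynomial, so the same style of Kharitonov test (now of degree 4) can be run for each candidate (kp, ki, kd) to map out the stable region.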

Related

How to make the response of the solve function symbolic?

I am solving a fourth order equation in MATLAB using the solve function. My script looks like this:
syms m M I L Bp Bc g x
m = 0.127
M = 1.206
I = 0.001
L = 0.178
Bc = 5.4
Bp = 0.002
g = 9.8
eqn = ((m + M)*(I + m*L^2) - m^2*L^2)*x^4 + ((m + M)*Bp + (I + m*L^2)*Bc)*x^3 + ((m + M)*m*g*L + Bc*Bp)*x^2 + m*g*L*Bc*x == 0
S = solve(eqn, x)
In the answer, I should get 4 roots, but instead I get such strange expressions:
S =
0
root(z^3 + (34351166180215288*z^2)/7131724328013535 + (352922208800606144*z)/7131724328013535 + 1379250971773894912/7131724328013535, z, 1)
root(z^3 + (34351166180215288*z^2)/7131724328013535 + (352922208800606144*z)/7131724328013535 + 1379250971773894912/7131724328013535, z, 2)
root(z^3 + (34351166180215288*z^2)/7131724328013535 + (352922208800606144*z)/7131724328013535 + 1379250971773894912/7131724328013535, z, 3)
The first root, which is 0, is displayed clearly. Is it possible to make the other three roots appear as numbers as well? I looked for something about this in the documentation for the solve function, but did not find it.
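Aside: the three nonzero roots can be cross-checked numerically (in MATLAB itself, vpa(S) or double(S) will render them as numbers). A small sketch in Python with numpy, plugging in the constants from the script above:

    import numpy as np

    m, M, I, L, Bc, Bp, g = 0.127, 1.206, 0.001, 0.178, 5.4, 0.002, 9.8

    # quartic coefficients, highest power first; the constant term is 0,
    # which is why x = 0 is always one of the four roots
    c4 = (m + M)*(I + m*L**2) - m**2*L**2
    c3 = (m + M)*Bp + (I + m*L**2)*Bc
    c2 = (m + M)*m*g*L + Bc*Bp
    c1 = m*g*L*Bc
    print(np.roots([c4, c3, c2, c1, 0.0]))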

Add the constraint of normalization coefficient for matlab curve fitting

I want to use a custom model to fit some data with fit(). The mathematical model has this form:
a*exp(-x*b)+c*exp(-y*d)+e*exp(-z*f)
where a, b, c, d, e, f are the parameters I will estimate and x, y, z are independent variables. (The actual formula is more complicated, but it is nonlinear like this.)
How can I add the constraint a + c + e = 1 (with a, c, e positive or 0) when fitting the curve? I know how to set lower and upper bounds, but I don't know how to add this normalization constraint to the fit. Is it possible to do this with the fit() method?
I think I already posted this somewhere, but I can't find it right now.
As it is a non-linear fit, there is no problem in transforming the parameters. Say we choose the continuously differentiable and monotonic function
a = f(s) = 1/2 ( 1 + s / sqrt( 1 + s^2 ) )
so for s in (-inf, inf) one gets a in (0, 1). (With some simple shifting and scaling we could get any a in (u, v).)
We could do the same for b, but with the additional restriction a + b + c = 1 we know that c >= 0 forces b to be smaller than 1 - f(s) = 1/2 ( 1 - s / sqrt( 1 + s^2 ) ). So this time we scale as well, and set
b = g(t, s) = 1/2 ( 1 - s / sqrt( 1 + s^2 ) ) * 1/2 ( 1 + t / sqrt( 1 + t^2 ) )
again with t in (-inf, inf). The first factor is the scaling due to the already-set value of a; the second factor repeats the procedure from above.
Finally, c is simply 1 - f(s) - g(t, s).
Eventually, the fit function with parameters s and t looks like:
+ 0.50 * ( 1 + s / sqrt( 1 + s^2 ) ) * exp( -x * b )
+ 0.25 * ( 1 - s / sqrt( 1 + s^2 ) ) * ( 1 + t / sqrt( 1 + t^2 ) ) * exp( -y * d )
+ (
    +1.00
    -0.50 * ( 1 + s / sqrt( 1 + s^2 ) )
    -0.25 * ( 1 - s / sqrt( 1 + s^2 ) ) * ( 1 + t / sqrt( 1 + t^2 ) )
) * exp( -z * f )
Getting results for s and t provides a, b, and c via error propagation.
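Here is a minimal sketch of this reparameterization in Python with scipy.optimize.curve_fit (not MATLAB's fit(), but the transformation carries over; squash, amps, and the synthetic data are my own names and assumptions, with the amplitudes called a, c, e as in the question):

    import numpy as np
    from scipy.optimize import curve_fit

    def squash(u):
        # maps u in (-inf, inf) to (0, 1); this is f(s) from above
        return 0.5 * (1.0 + u / np.sqrt(1.0 + u * u))

    def amps(s, t):
        # amplitudes a, c, e, each in (0, 1), with a + c + e = 1 by construction
        a = squash(s)
        c = (1.0 - a) * squash(t)
        e = 1.0 - a - c
        return a, c, e

    def model(X, s, t, b, d, f):
        x, y, z = X
        a, c, e = amps(s, t)
        return a * np.exp(-x * b) + c * np.exp(-y * d) + e * np.exp(-z * f)

    # synthetic data just to show the call signature
    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 5.0, 200)
    y = np.linspace(0.0, 3.0, 200)
    z = np.linspace(0.0, 8.0, 200)
    ydata = model((x, y, z), 0.3, -0.2, 1.0, 0.5, 2.0) + 0.01 * rng.normal(size=x.size)

    popt, pcov = curve_fit(model, (x, y, z), ydata, p0=[0, 0, 1, 1, 1])
    print(amps(popt[0], popt[1]))  # recovered a, c, e, summing to 1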

How can we measure the similarity distance between categorical data ?

Example:
Gender: Male, Female
Numerical values: [0 - 100], [200 - 300]
Strings: Professionals, beginners, etc,...
Thanks in advance.
There are different ways to do this. One of the simplest is as follows.
1) Assign a numeric value to each property so that, where possible, the ordering of the numbers matches the meaning of the property. It is important to order property values from lower to higher if the property can be measured. If that is not possible and the property is purely categorical (like gender, profession, etc.), just assign a number to each possible value.
P1 - Gender
-------------------
0 - Male
1 - Female
P2 - Experience
-----------
0 - Beginner
5 - Average
10 - Professional
P3 - Age
-----------
[0 - 100]
P4 - Body height, cm
-----------
[50 - 250]
2) For each property find a scale factor and offset so that all property values fall into the same chosen range, say [0 - 100]:
Sx = 100 / (Px max - Px min)
Ox = -Sx * Px min
For the sample above you would get:
S1 = 100
O1 = 0
S2 = 10
O2 = 0
S3 = 1
O3 = 0
S4 = 0.5
O4 = -25
3) Now you can create a vector containing all the property values.
V = (S1 * P1 + O1, S2 * P2 + O2, S3 * P3 + O3, S4 * P4 + O4)
For the sample it would be:
V = (100 * P1, 10 * P2, P3, 0.5 * P4 - 25)
4) Now you can compare two vectors V1 and V2 by subtracting one from the other. The length of the resulting vector tells you how different they are:
delta = |V1 - V2|
Vectors are subtracted dimension by dimension; the vector length is the square root of the sum of the squared components.
Imagine we have 3 persons:
John
P1 = 0 (male)
P2 = 0 (beginner)
P3 = 20 (20 years old)
P4 = 190 (body height is 190 cm)
Kevin
P1 = 0 (male)
P2 = 10 (professional)
P3 = 25 (25 years old)
P4 = 186 (body height is 186 cm)
Lea
P1 = 1 (female)
P2 = 10 (professional)
P3 = 40 (40 years old)
P4 = 178 (body height is 178 cm)
Vectors would be:
J = (100 * 0, 10 * 0, 20, 0.5 * 190 - 25) = (0, 0, 20, 70)
K = (100 * 0, 10 * 10, 25, 0.5 * 186 - 25) = (0, 100, 25, 68)
L = (100 * 1, 10 * 10, 40, 0.5 * 178 - 25) = (100, 100, 40, 64)
To compare them we subtract the vectors:
delta JK = |J - K| =
= |(0 - 0, 0 - 100, 20 - 25, 70 - 68)| =
= |(0, -100, -5, 2)| =
= SQRT(0 ^ 2 + (-100) ^ 2 + (-5) ^ 2 + 2 ^ 2) =
= SQRT(10000 + 25 + 4) =
= 100.14
delta KL = |K - L| =
= |(0 - 100, 100 - 100, 25 - 40, 68 - 64)| =
= |(-100, 0, -15, 4)| =
= SQRT((-100) ^ 2 + 0 ^ 2 + (-15) ^ 2 + 4 ^ 2) =
= SQRT(10000 + 225 + 16) =
= 101.20
delta LJ = |L - J| =
= |(100 - 0, 100 - 0, 40 - 20, 64 - 70)| =
= |(100, 100, 20, -6)| =
= SQRT(100 ^ 2 + 100 ^ 2 + 20 ^ 2 + (-6) ^ 2) =
= SQRT(10000 + 10000 + 400 + 36) =
= 142.95
From this you can see that John and Kevin are the most similar pair, as their delta is the smallest.
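The steps above are easy to put into code. A short sketch in Python with numpy, using the example data (the names and array layout are mine):

    import numpy as np

    # raw property vectors: [gender, experience, age, height_cm]
    john  = np.array([0,  0, 20, 190])
    kevin = np.array([0, 10, 25, 186])
    lea   = np.array([1, 10, 40, 178])

    p_min = np.array([0,  0,   0,  50])
    p_max = np.array([1, 10, 100, 250])

    S = 100.0 / (p_max - p_min)   # scale factors
    O = -S * p_min                # offsets

    def normalize(p):
        return S * p + O          # every dimension mapped to [0, 100]

    def delta(p1, p2):
        return np.linalg.norm(normalize(p1) - normalize(p2))

    print(delta(john, kevin))  # ~100.14
    print(delta(kevin, lea))   # ~101.20
    print(delta(lea, john))    # ~142.95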
There are a number of measures for finding similarity between categorical data. The following paper briefly discusses these measures.
https://conservancy.umn.edu/bitstream/handle/11299/215736/07-022.pdf?sequence=1&isAllowed=y
If you're trying to do this in R, there's a package named 'nomclust', which has all these similarity measures readily available.
Hope this helps!
If you are using Python, there is a library that helps in finding the proximity matrix based on similarity measures such as Eskin, overlap, IOF, OF, Lin, Lin1, etc.
After obtaining the proximity matrix, you can go on to clustering using hierarchical cluster analysis.
Check this link to the library named "Categorical_similarity_measures":
https://pypi.org/project/Categorical-similarity-measures/0.4/
Just a thought: we can also apply the Euclidean distance between two variables to find a drift value. If it is 0, there is no drift; otherwise there is. But the vectors should be sorted and of the same length before the calculation.

Get Shrove Tuesday in Lua

How can I get the date of Shrove Tuesday (12/02/2013, 04/03/2014, 17/02/2015, etc.) in Lua from a supplied year? If possible, could it be explained clearly so that it can be adapted for Easter, Mother's Day, and other holidays that change each year? There are scripts available online that get Easter, but they're not explained very clearly and I don't understand how I can change them for Shrove Tuesday and other holidays.
According to Wikipedia, Shrove Tuesday falls exactly 47 days before Easter Sunday. So the key is really just how to calculate Easter, a movable feast; you can modify code that calculates Easter to get Shrove Tuesday.
function shrove_tuesday(year)
    -- Gregorian leap-year test
    local leap_year
    if year % 4 == 0 then
        if year % 100 == 0 then
            leap_year = year % 400 == 0
        else
            leap_year = true
        end
    else
        leap_year = false
    end
    -- anonymous Gregorian algorithm (Computus) for Easter Sunday,
    -- with the usual offset of 114 reduced by 47 days for Shrove Tuesday
    local a = year % 19
    local b = math.floor(year / 100)
    local c = year % 100
    local d = math.floor(b / 4)
    local e = b % 4
    local f = math.floor((b + 8) / 25)
    local g = math.floor((b - f + 1) / 3)
    local h = (19 * a + b - d - g + 15) % 30
    local i = math.floor(c / 4)
    local k = c % 4
    local L = (32 + 2 * e + 2 * i - h - k) % 7
    local m = math.floor((a + 11 * h + 22 * L) / 451)
    local month = math.floor((h + L - 7 * m + 114 - 47) / 31)
    local day = (h + L - 7 * m + 114 - 47) % 31 + 1
    if month == 2 then -- adjust dates that fall in February
        day = leap_year and day - 2 or day - 3
    end
    return day, month
end
The calculation looks complicated because calculating Easter is complicated; this function follows the Computus algorithm.
Test:
print(shrove_tuesday(2012))
print(shrove_tuesday(2013))
print(shrove_tuesday(2014))
print(shrove_tuesday(2015))
Output:
21 2
12 2
4 3
17 2
You can easily turn the day and month into a formatted string using string.format("%02d/%02d/%04d", day, month, year) or whatever you need.
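As an aside (not Lua): if you want to sanity-check the output, the third-party python-dateutil package ships a Computus implementation, so the same dates drop out of a two-liner in Python:

    from datetime import timedelta
    from dateutil.easter import easter  # pip install python-dateutil

    def shrove_tuesday(year):
        # Shrove Tuesday is 47 days before Easter Sunday
        return easter(year) - timedelta(days=47)

    for y in (2012, 2013, 2014, 2015):
        print(shrove_tuesday(y).strftime("%d/%m/%Y"))
    # 21/02/2012, 12/02/2013, 04/03/2014, 17/02/2015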

Python: optimising multiple functions with common variables

I am trying to minimize (globally) 3 functions that use common variables. I tried to combine them into one function and minimize that with L-BFGS-B (I need to set boundaries for the variables), but it has proven very difficult to balance each term with weightings, i.e. when one is minimised the others are not. I also tried the SLSQP method to minimize one of them while setting the others as constraints, but the constraints are often ignored/not met.
Here is what needs to be minimized. All the maths is done in meritscalculation, and meritoflength, meritofROC, meritofproximity and heightorder are returned from the calculations as globals.
def lengthmerit(x0):
    meritscalculation(x0)
    print meritoflength
    return meritoflength

def ROCmerit(x0):
    meritscalculation(x0)
    print meritofROC
    return meritofROC

def proximitymerit(x0):
    meritscalculation(x0)
    print meritofproximity + heightorder
    return meritofproximity + heightorder
I want to minimize all of them using a common x0 (with boundaries) as the independent variable. Is there a way to achieve this?
Is this what you want to do?
minimize a * amerit(x) + b * bmerit(x) + c * cmerit(x)
over a, b, c, x:
a + b + c = 1
a >= 0.1, b >= 0.1, c >= 0.1 (say)
x in xbounds
If x is say [x0 x1 .. x9], set up a new variable abcx = [a b c x0 x1 .. x9],
constrain a + b + c = 1 with a penalty term added to the objective function,
and minimize this:
def fabc( abcx ):
    """ abcx = a, b, c, x
        -> a * amerit(x) + ... + penalty 100 (a + b + c - 1)^2
    """
    a, b, c, x = abcx[0], abcx[1], abcx[2], abcx[3:]  # split
    fa = a * amerit(x)
    fb = b * bmerit(x)
    fc = c * cmerit(x)
    penalty = 100 * (a + b + c - 1) ** 2  # 100 ?
    f = fa + fb + fc + penalty
    print "fabc: %6.2g = %6.2g + %6.2g + %6.2g + %6.2g  a b c: %6.2g %6.2g %6.2g" % (
        f, fa, fb, fc, penalty, a, b, c )
    return f
and bounds = [[0.1, 0.5]] * 3 + xbounds, i.e. each of a, b, c in 0.1 .. 0.5 or so.
The long prints should show you why one of a, b, c approaches 0 --
maybe one of amerit(), bmerit(), cmerit() is way bigger than the others?
Plots instead of prints would be easy too.
Summary:
1) formulate the problem clearly on paper, as at the top
2) translate that into Python.
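Here is a self-contained sketch of that recipe with scipy; the quadratic amerit/bmerit/cmerit below are dummy stand-ins, since the real merit functions live in the asker's meritscalculation:

    import numpy as np
    from scipy.optimize import minimize

    # dummy stand-ins for amerit / bmerit / cmerit, just to make this runnable
    def amerit(x): return np.sum((x - 1.0) ** 2)
    def bmerit(x): return np.sum((x + 0.5) ** 2)
    def cmerit(x): return np.sum(x ** 2)

    def fabc(abcx):
        a, b, c, x = abcx[0], abcx[1], abcx[2], abcx[3:]
        penalty = 100 * (a + b + c - 1) ** 2  # soft constraint a + b + c = 1
        return a * amerit(x) + b * bmerit(x) + c * cmerit(x) + penalty

    nx = 4
    xbounds = [(-2.0, 2.0)] * nx
    bounds = [(0.1, 0.5)] * 3 + xbounds    # a, b, c in [0.1, 0.5], then x
    abcx0 = np.r_[np.full(3, 1.0 / 3.0), np.zeros(nx)]

    res = minimize(fabc, abcx0, method="L-BFGS-B", bounds=bounds)
    print(res.x[:3], res.x[3:])  # weights a, b, c and the optimised x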
Here is the result of some scaling and weighting.
objective function:
merit_function=wa*meritoflength*1e3+wb*meritofROC+wc*meritofproximity+wd*heightorder*10+1000 * (wa+wb+wc+wd-1) ** 2
input:
abcdex=np.array(( 0.5, 0.5, 0.1, 0.3, 0.1...))
output:
fun: array([ 7.79494644])
x: array([ 4.00000000e-01, 2.50000000e-01, 1.00000000e-01,
2.50000000e-01...])
meritoflength: 0.00465499380753  # target 1e-5, usually starts at 0.1
meritofROC: 23.7317956542        # target ~1, range < 33
heightorder: 0                   # target: strictly 0, range < 28
meritofproximity: 0.0            # target: less than 0.02, range < 0.052
I realised after a few runs that all the weightings tend to stay at the minimum values of the bounds, and I'm back to manually tuning the scaling problem I started with.
Is there a possibility that my optimisation function isn't finding the true global minimum?
Here is how I minimised it:
minimizer_kwargs = {"method": "L-BFGS-B", "bounds": bnds, "tol":1e0 }
ret = basinhopping(merit_function, abcdex, minimizer_kwargs=minimizer_kwargs, niter=10)
zoom = ret['x']
res = minimize(merit_function, zoom, method = 'L-BFGS-B', bounds=bnds, tol=1e-6)