I'm trying to do some fitting with lsqcurvefit. I have a function like this:
function F = cdf_3p_model(a, data)
F = 1 - ( (1-a(5)-a(6)) .* exp(-abs(data)./a(1)) ...
        + (1-a(4)-a(6)) .* exp(-abs(data)./a(2)) ...
        + (1-a(4)-a(5)) .* exp(-abs(data)./a(3)) );
and
function [a residual] = cdf_fit_3p(x,y)
a0 = [10 1 0.1 0.3 0.3 0.3];
lb = [0 0 0 0 0 0];
ub = [];
curvefitoptions = optimset('Display','final','MaxFunEvals',100000,'MaxIter',50000);
[a, residual] = lsqcurvefit(@cdf_3p_model,a0,x,y,lb,ub,curvefitoptions);
end
I set the initial parameters a0, lb, and ub, but how do I also declare that:
a(1) > a(2) > a(3)
a(5) + a(6) +a(7) = 1
I think you have a better chance using one of the general minimization routines such as fmincon, which lets you specify constraints you might otherwise be unable to express. You can easily incorporate least squares by minimizing the L2-norm of the difference between the model and the data.
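A minimal sketch of that approach (my addition, not the poster's code). It assumes x and y are the data vectors from cdf_fit_3p, that the weights which must sum to 1 are a(4), a(5), a(6) (the indexing in the question is ambiguous), and it treats the strict inequalities as non-strict, which is standard for fmincon:
% Least-squares objective: squared L2-norm of (model - data)
obj = @(a) sum((cdf_3p_model(a, x) - y).^2);

% Linear inequalities A*a <= b encode a(2) <= a(1) and a(3) <= a(2)
A = [-1  1  0  0  0  0;
      0 -1  1  0  0  0];
b = [0; 0];

% Linear equality Aeq*a = beq encodes a(4) + a(5) + a(6) = 1 (assumed indexing)
Aeq = [0 0 0 1 1 1];
beq = 1;

lb = zeros(1, 6);
ub = [];
a0 = [10 1 0.1 0.3 0.3 0.3];

opts = optimset('Display', 'final', 'MaxFunEvals', 1e5, 'MaxIter', 5e4);
a = fmincon(obj, a0, A, b, Aeq, beq, lb, ub, [], opts);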
Normally I would say, "make clauses in your function that give really terrible 'scores' when those conditions are not met." However, your conditions make the range of allowable parameters such a tiny, tiny subset of the range of possible numbers that I think you would cause lsqcurvefit to never converge if you did that. I would say lsqcurvefit is not the right solution for you.
You will have to calculate the parameters you "want" from a set of parameters that is more usable to MATLAB.
For example, you can rewrite
a(1) > a(2) > a(3)
a(5) + a(6) + a(7) = 1
as
a(3) = p(1)
a(2) = p(1) + p(2)
a(1) = p(1) + p(2) + p(3)
a(4) = p(4)
a(5) = p(5)
a(6) = p(6)
a(7) = 1 - p(5) - p(6)
with
lb = [0 0 0 0 0 0]
ub = [Inf Inf Inf Inf 1 1]
Well, it's not perfect, because it allows a(7) as low as -1 instead of 0. But it includes your other constraints.
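A minimal sketch of how that reparameterisation might be wired into lsqcurvefit (my addition; note the question's model only indexes a(1) through a(6), so a(7) is carried along but unused there; adapt the mapping to your actual parameter layout):
% map the fitted parameters p to the original parameters a
p2a = @(p) [p(1)+p(2)+p(3), p(1)+p(2), p(1), p(4), p(5), p(6), 1-p(5)-p(6)];

model_p = @(p, data) cdf_3p_model(p2a(p), data);   % fit in p-space

p0 = [0.1 0.9 9 0.3 0.3 0.3];   % corresponds roughly to the original a0
lb = [0 0 0 0 0 0];
ub = [Inf Inf Inf Inf 1 1];

p = lsqcurvefit(model_p, p0, x, y, lb, ub);
a = p2a(p);                      % recover the parameters you actually wanted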
I am trying to understand how fminunc (fmincon) works, but I keep getting an error.
When I use documentation example with two variables
fun = @(x)3*x(1)^2 + 2*x(1)*x(2) + x(2)^2 - 4*x(1) + 5*x(2);
x0 = [1,1];
[x,fval] = fminunc(fun,x0);
everything works fine.
However, when I try to fit a plane through 3 points,
the code does not work:
n0 = [ 0  1 -2;
       1  2  1;
      -2 -4 -4]
fun = @(x) [x(1) x(2) x(3)] * n0 - [1 1 1]
The task for fminunc is just an example. I know I can solve it easily analytically.
The cost function must return a scalar. What you have written returns a 1x3 vector. You could try something like this if you want to minimise the Euclidean distance:
fun = @(x) sum(([x(1) x(2) x(3)] * n0 - [1 1 1]).^2);
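For example (the starting point x0 below is my addition; any starting point should work, since this objective is a convex quadratic):
n0 = [ 0  1 -2;
       1  2  1;
      -2 -4 -4];
fun = @(x) sum(([x(1) x(2) x(3)] * n0 - [1 1 1]).^2);
x0 = [0 0 0];
[x, fval] = fminunc(fun, x0);   % fval should come out close to 0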
I'm trying to make a program to solve the following system with ode23:
2y' + z' - y + 2z = 0
y' + 3z' - 3y + z = 0
with initial values:
y(0) = 1
z(0) = 0
and analytic solution:
y= cos(x)
z= sin(x)
but when I rewrite the system in terms of the 4 variables:
function dy = eqdif2(t,y)
%2y' + z' - y + 2z = 0;
%y' + 3z' - 3y + z = 0
% y(0) = 1, z(0) = 0
% y=y(1), z=y(2), y'=y(3), z'=y(4)
dy = [-2*y(3)+y(1)-2*y(2);3*y(1)-y(2)-3*y(4)];
I have a problem with ode23, reporting only 2 solutions:
clc,clear;
yp = [1 0]; %initial values
options = odeset('RelTol', 1e-4);
[t,y]= ode23('eqdif2',[0 20],yp,options);
ya=cos(x);
za=sin(x);
figure;
plot(t,y(:,1),'-');
figure;
plot(t,ya,t,za);
Here we go...
The ODEs are linear, we know that there is an analytic solution, and so on. To use ode23 you need the system in explicit form, as below (note the vector notation: x is the dependent variable and t the independent variable from here on):
x'=f(t,x)
which in this case reduces, after solving the two equations for the derivatives, to the trivial (and admittedly not very interesting) set:
x(1)' = -x(2)
x(2)' = x(1)
This is solved as:
f=@(t,x)([-x(2);x(1)]);
y0=[1 0];
% f(0,y0) %Check IC = y'
[t, x] = ode23(f,[0 pi*2],y0);
plot(t,x)
However, if we need or want (or are forced) to solve the system directly as given, i.e. in implicit form, we define the set as (again, g can be a vector of equations)
g(t,x,x') = 0
and use ode15i (the i stands for "implicit"):
g=@(t,x,dx)([ 2*dx(1)+dx(2)-x(1)+2*x(2); ...
dx(1)+3*dx(2)-3*x(1)+x(2)]);
y0=[1;0];
dy0=[0;1]; % consistent initial slopes: must satisfy g(0,y0,dy0)=0 - kind of magic to find...
% g(0,y0,dy0) %Check IC = 0
[t, x] = ode15i(g, [0 pi*2],y0,dy0);
plot(t,x)
The solution is the same for both options...
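As a quick check (my addition), you can overlay both numerical solutions on the analytic solution y = cos(t), z = sin(t):
[t1, x1] = ode23(f,  [0 2*pi], [1 0]);
[t2, x2] = ode15i(g, [0 2*pi], [1; 0], [0; 1]);
plot(t1, x1, 'o', t2, x2, 'x', t1, [cos(t1) sin(t1)], '-')
legend('y (ode23)', 'z (ode23)', 'y (ode15i)', 'z (ode15i)', 'cos t', 'sin t')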
I want to ask MATLAB to tell me, for example, the greatest common divisor of the polynomials x^4+x^3+2x+2 and x^3+x^2+x+1 over fields like Z_3[x] (where an answer is x+1) and Z_5[x] (where an answer is x^2-x+2).
Any ideas how I would implement this?
Here's a simple implementation. The polynomials are encoded as arrays of coefficients, starting from the lowest degree: so, x^4+x^3+2x+2 is [2 2 0 1 1]. The function takes two polynomials p, q and the modulus k (which should be prime for the algorithm to work properly).
Examples:
gcdpolyff([2 2 0 1 1], [1 1 1 1], 3) returns [1 1] meaning 1+x.
gcdpolyff([2 2 0 1 1], [1 1 1 1], 5) returns [1 3 2] meaning 1+3x+2x^2; this disagrees with your answer but I hand-checked and it seems that yours is wrong.
The function first pads the arrays to be of the same length. As long as they are not equal, it identifies the higher-degree polynomial and subtracts from it the lower-degree polynomial multiplied by an appropriate power of x. That's all.
function g = gcdpolyff(p, q, k)
    % pad the shorter coefficient vector with zeros so both have equal length
    p = [p, zeros(1, numel(q)-numel(p))];
    q = [q, zeros(1, numel(p)-numel(q))];
    while nnz(mod(p-q,k))>0
        dp = find(p,1,'last');   % position of the leading nonzero coefficient of p
        dq = find(q,1,'last');   % position of the leading nonzero coefficient of q
        if (dp>=dq)
            % subtract q, shifted up by x^(dp-dq), from p (mod k)
            p(dp-dq+1:dp) = mod(p(1+dp-dq:dp) - q(1:dq), k);
        else
            q(dq-dp+1:dq) = mod(q(dq-dp+1:dq) - p(1:dp), k);
        end
    end
    g = p(1:find(p,1,'last'));
end
The names of the variables dp and dq are slightly misleading: they are not degrees of p and q, but rather degrees + 1.
Is there a function in MATLAB, or an easy way, to generate the quantile group to which each data point belongs?
Example:
x = [4 0.5 3 5 1.2];
q = quantile(x, 3)
q =
    1.0250    3.0000    4.2500
So I would like to see the following:
result = [2 1 2 3 1]; % The quantile groups
In other words, I am looking for the MATLAB equivalent of this thread.
Thanks!
You can go through all n quantiles in a loop and use logical indexing to assign the quantile group:
n = 3;
q = quantile(x,n);
y = ones(size(x));
for k=2:n
    y(x>=q(k)) = k;
end
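Running that on the data from the question (a quick check, my addition, using the q shown there) gives y = [2 1 2 3 1], matching the desired result:
x = [4 0.5 3 5 1.2];      % data from the question
n = 3;
q = quantile(x, n);       % [1.0250 3.0000 4.2500]
y = ones(size(x));
for k = 2:n
    y(x >= q(k)) = k;     % y ends up as [2 1 2 3 1]
end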
Depending on how you define "quantile group", you could use:
If "quantile group" means how many values in q are less than x:
result = sum(bsxfun(@gt, x(:).', q(:)));
If "quantile group" means how many values in q are less than or equal to x:
result = sum(bsxfun(@ge, x(:).', q(:)));
If "quantile group" means index of the value in q which is closest to each value in x:
[~, result] = min(abs(bsxfun(@minus, x(:).', q(:))));
None of these returns the result given in your example, though: the first gives [2 0 1 3 1], the second [2 0 2 3 1], the third [3 1 2 3 1].
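As a side note (my addition): in R2016b or later, implicit expansion lets you drop bsxfun entirely; the first variant, for example, becomes:
result = sum(x(:).' > q(:), 1);   % same result as the bsxfun(@gt, ...) version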
I have to solve a simple problem using the linprog function from the MATLAB Optimization Toolbox. The problem is that I don't know how to format my equations so this function solves the problem.
This is the function I am trying to minimize (a_i are some given coefficients, x is in R^5):
x = argmax min{a1*x1 + a2*x2, a2*x2 + a3*x3 + a4*x4, a4*x4 + a5*x5}
subject to:
sum(x_i) = 3000
all x_i >= 0
This could be rephrased as:
(x, lambda) = argmin(-lambda)
subject to:
a1*x1 + a2*x2 >= lambda
a2*x2 + a3*x3 + a4*x4 >= lambda
a4*x4 + a5*x5 >= lambda
sum(x_i) = 3000
all x_i >= 0
I could only find examples of minimization of simple linear functions without min/max arguments in it. Could you give me a hint how to make my structures as arguments for linprog function?
Let's try the following.
Your x vector is now
[x1 x2 x3 x4 x5 lambda]
the objective vector
f = [0 0 0 0 0 -1]
equality constraint:
Aeq = [1 1 1 1 1 0];
beq = 3000;
Inequality constraint:
A = [-a1 -a2   0   0   0  1;
       0 -a2 -a3 -a4   0  1;
       0   0   0 -a4 -a5  1];
b = [0; 0; 0];
lower bound:
lb = [0 0 0 0 0 -inf]
now try
linprog( f, A, b, Aeq, beq, lb )
which, up to some transposing of arguments, should do the trick.
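Putting the pieces together, a minimal sketch (the coefficients a1..a5 below are placeholders, not values from the question):
a1 = 1; a2 = 2; a3 = 3; a4 = 4; a5 = 5;    % placeholder coefficients

f   = [0; 0; 0; 0; 0; -1];                 % minimise -lambda
A   = [-a1 -a2   0   0   0  1;
         0 -a2 -a3 -a4   0  1;
         0   0   0 -a4 -a5  1];
b   = [0; 0; 0];
Aeq = [1 1 1 1 1 0];
beq = 3000;
lb  = [0; 0; 0; 0; 0; -inf];

sol    = linprog(f, A, b, Aeq, beq, lb);
x      = sol(1:5);                         % the five original variables
lambda = sol(6);                           % the achieved minimum term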
I don't believe you can pose the question, as you phrased it, as a linprog problem. The "min" operation is the problem, since the objective function can't be phrased as
y = f'*x.
Even though your constraints are linear, your objective function isn't.
Maybe with some trickery you can linearize it. But if so, that's a math problem. See: https://math.stackexchange.com/