I have a problem using MATLAB. I need to fit a dataset with a nonlinear function like:
f = alfa*(1 + beta*zeta)^(1/3)
where alfa and beta are the coefficients to be found. I want to use the least squares method. How can I do this with the command lsqcurvefit? Or are there other ways to solve my problem?
Thanks so much.
Here is the dataset:
zeta val
0.001141174 1.914017718
0.010606563 1.36090774
0.021610291 1.906194276
0.070026172 1.87606762
0.071438139 1.877264055
0.081679327 1.859341737
0.101181292 2.518896436
0.107877774 2.772125094
0.205038829 3.032759627
0.211802706 1.483644094
0.561521724 2.424261001
0.61500615 2.559041397
0.647249191 2.949944577
0.943396226 2.84068921
1.091107474 3.453699422
1.175260761 2.604008404
1.837813003 4.00262983
2.057613169 4.565849247
2.083333333 3.779001445
3.188521323 4.430824069
4.085801839 7.766971568
4.22832981 5.711800741
4.872107186 4.949950059
9.756097561 10.78574156
You have to use the fit function with the 'power2' fitType:
fitobject = fit(zeta2, val, 'power2')
You can also use cftool to determine your coefficients manually, especially if you want to keep the (1/3). Maybe least squares is not the best solution for your data, as woodchips said.
Be aware that you have to substitute your zeta:
zeta2 = 1 + beta*zeta
You can then read off the coefficients as follows:
coeffvalues(fitobject)
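Since the question asks about lsqcurvefit specifically, here is a minimal sketch of how the original model could be fitted with it (this requires the Optimization Toolbox; the starting guess p0 is an arbitrary assumption and may need tuning):
% zeta and val are assumed to be column vectors holding the data above
model = @(p, zeta) p(1).*(1 + p(2).*zeta).^(1/3); % p(1) = alfa, p(2) = beta
p0 = [1 1];                                       % arbitrary starting guess
p = lsqcurvefit(model, p0, zeta, val);
alfa = p(1);
beta = p(2);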
I am trying to solve equations with this code:
a = [-0.0008333 -0.025 -0.6667 -20];
length_OnePart = 7.3248;
xi = -6.4446;
yi = -16.5187;
syms x y
[sol_x,sol_y] = solve(y == poly2sym(a), ((x-xi)^2+(y-yi)^2) == length_OnePart^2,x,y,'Real',true);
sol_x = sym2poly(sol_x);
sol_y = sym2poly(sol_y);
The solutions it gives are (-23.9067, -8.7301) and (11.0333, -24.2209), which do not even satisfy the equation of the circle. How can I rectify this problem?
If you're trying to solve for the intersection of the cubic and the circle, i.e., where y == poly2sym(a) meets (x-xi)^2+(y-yi)^2 == length_OnePart^2, it looks like solve may be getting confused when the circle is represented implicitly rather than as single-valued functions. It might also have to do with the fact that x and y are not independent unknowns; rather, the latter depends on the former. It could also depend on the use of a numeric solver in this case. solve seems to work fine with similar inputs to yours, so you might report this behavior to the MathWorks to see what they think.
In any case, here is a better, more efficient way to tackle this as a root-finding problem (as opposed to simultaneous equations):
a = [-0.0008333 -0.025 -0.6667 -20];
length_OnePart = 7.3248;
xi = -6.4446;
yi = -16.5187;
syms x real
f(x) = poly2sym(a);
sol_x = solve((x-xi)^2+(f(x)-yi)^2==length_OnePart^2,x)
sol_y = f(sol_x)
which returns:
sol_x =
0.00002145831413371390464567553686047
-13.182825373861454619370838716408
sol_y =
-20.000014306269544436430325843024
-13.646590348358951818881695033728
Note that you might get slightly more accurate results (one solution is clearly at (0, -20)) if you represent your coefficients and parameters more precisely than just four decimal places, e.g., a = [-1/1200 -0.025 -2/3 -20]. In fact, solve might be able to find one or more solutions exactly if you provide exact representations.
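For instance, with the Symbolic Math Toolbox the coefficients can be kept exact (this assumes, as suggested above, that -0.0008333 was meant as -1/1200; -0.025 is exactly -1/40):
a = [sym(-1)/1200, sym(-1)/40, sym(-2)/3, sym(-20)]; % exact rational coefficients
Passing this a to poly2sym as before lets solve work with exact values.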
Also, in your code, the calls to sym2poly do nothing other than convert back to floating point (double can be used for this), as the inputs are just symbolic numbers rather than symbolic polynomials.
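For example, the floating-point conversion in the original code could simply be:
sol_x = double(sol_x); % convert the symbolic solutions to double precision
sol_y = double(sol_y);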
I'm trying to write code that will optimize a multivariate function using SciPy's optimize.minimize, but it keeps returning an IndexError, and I'm not sure where to go from here.
The code is this:
revcoeff = coefficients[::-1]
xdot = np.zeros(0)
normfeat1 = normfeat1.reshape(-1,1)
xdot = np.append(normfeat1, normfeat2.reshape(-1,1), axis=1)
a = revcoeff[1:3]
b = xdot[0, :]
seed = np.zeros(5) #does seed need to be the coefficients? not sure
fun = lambda x: np.multiply((1/666), np.power(np.sum(np.dot(a, xdot[x, :])-medianv[x]),2)) #costfunction
optsol = optimize.minimize(fun, seed)
where normfeat1 and normfeat2 are the two features I'm using in my nearest-neighbors algorithm, and the coefficients of the fitted regression model are given in the array "coefficients".
What I'm having trouble understanding is 1) why my code is throwing "IndexError: arrays used as indices must be of integer or boolean type", and 2) the optimize.minimize function itself. It takes two inputs, the function and x0 (an ndarray of initial guesses). What should x0 be, the coefficient values? Or do I pick random values, and how many are necessary?
np.zeros() does not return integers by default. Try, for example, np.zeros(5, dtype=int) instead. It won't solve all of the problems with your code, though. You'll see some other error message.
Also, notice that 1/666 returns 0 instead of 0.00150150 (integer division in Python 2). You probably want 1/666.0.
It would also be helpful if you could clean up your code, as about half of it is unused.
I have this function defined:
% Enter the data that was recorded into two vectors, mass and period
mass = 0 : 200 : 1200;
period = [0.404841 0.444772 0.486921 0.522002 0.558513 0.589238 0.622942];
% Calculate a line of best fit for the data using polyfit()
p = polyfit(mass, period, 1);
fit = @(x) p(1).*x + p(2);
Now I want to solve fit(x) = 0.440086, but I can't find a way to do this. I know I could easily work it out by hand, but I want to know how to do this in the future.
If you want to solve a linear equation of the form A*x = B, you can do it easily in MATLAB as follows:
p = [0.2 0.5];
constValue = 0.440086;
A = p(1);
B = constValue - p(2);
soln = A\B;
If you want to solve a nonlinear equation, you can use fsolve as follows (here I am showing how to use it on the same linear equation):
myFunSO = @(x) 0.2*x + 0.5 - 0.440086; % here you are solving f(x) - 0.440086 = 0
x = fsolve(myFunSO, 0.5) % 0.5 is the initial guess
Both methods should give you the same solution.
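Applied to the original polyfit coefficients (rather than the illustrative p = [0.2 0.5] above), a minimal sketch might look like this; fzero's starting point 0 is an arbitrary choice:
p = polyfit(mass, period, 1);                    % the fitted line from the question
target = 0.440086;
x_direct = (target - p(2))/p(1);                 % closed form: solve p(1)*x + p(2) = target
x_fzero = fzero(@(x) p(1)*x + p(2) - target, 0); % numerical alternative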
I would like to maximize this function in MATLAB - http://goo.gl/C6pYP
maximize 3x + 6y + 9z
subject to 12546975x + 525x^2 + 25314000y + 6000y^2 + 47891250z + 33750z^2 <= 4000000000
But the variables x, y, and z have to be nonnegative integers.
Any ideas how to achieve this in MATLAB?
The fact that you want integer solutions makes this kind of problem very difficult to solve in general (potentially intractable in a reasonable time).
You will have to use some general optimizer and just try many starting conditions by brute force; you will not be guaranteed to find a global maximum. See MATLAB's Optimization Toolbox for further possible optimizers.
You have to formulate the problem as an ILP (integer linear program). To solve an ILP, you need to make a few changes to the input of MATLAB's LP solver. You can also take the solution from the LP solver and round it to integers; the solution may not be optimal, but it will be close.
You can also use the mixed-integer linear programming solver on the File Exchange, which in turn uses the LP solver. For binary variables you can use MATLAB's binary integer programming solver.
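As a rough illustration of the relax-and-round idea, here is a sketch that solves the continuous relaxation and rounds down; fmincon is my substitution here, since the constraint in this problem is quadratic rather than linear, so an LP solver does not apply directly (requires the Optimization Toolbox):
obj = @(v) -(3*v(1) + 6*v(2) + 9*v(3)); % fmincon minimizes, so negate the objective
budget = @(v) 12546975*v(1) + 525*v(1)^2 + 25314000*v(2) + 6000*v(2)^2 + 47891250*v(3) + 33750*v(3)^2 - 4000000000;
nonlcon = @(v) deal(budget(v), []);     % inequality budget(v) <= 0, no equality constraints
v0 = [1 1 1];                           % arbitrary starting point
vrel = fmincon(obj, v0, [], [], [], [], [0 0 0], [], nonlcon);
vint = floor(vrel);                     % feasible integer candidate (not necessarily optimal)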
Well, fortunately the problem size is tiny, so we can just brute-force it.
First get some upper limits; here is how to do it for x:
xmax = 0;
while 12546975*xmax + 525*xmax^2 <= 4000000000
    xmax = xmax + 1;
end
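The y and z bounds can be found the same way (a direct analogue of the loop above, using the y and z terms of the constraint):
ymax = 0;
while 25314000*ymax + 6000*ymax^2 <= 4000000000
    ymax = ymax + 1;
end
zmax = 0;
while 47891250*zmax + 33750*zmax^2 <= 4000000000
    zmax = zmax + 1;
end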
This gives us upper limits for all three variables. The product of these limits is not large, so we can simply try all combinations.
bestval = 0;
for x = 0:xmax
    for y = 0:ymax
        for z = 0:zmax
            val = 3*x + 6*y + 9*z;
            if val > bestval && 12546975*x + 525*x^2 + 25314000*y + 6000*y^2 + 47891250*z + 33750*z^2 <= 4000000000
                bestval = val;
                best = [x y z];
            end
        end
    end
end
best, bestval
This is probably not the most efficient way to do it, but it should be very easy to read.
The maximum values of y and z (152 and 79) are not very high, so we can just check the candidates one by one and find the solution quickly (just 0.040252 seconds on my notebook computer).
My MATLAB code:
function [MAX, x_star, y_star, z_star] = stackoverflow1
% maximize 3x + 6y + 9z
% s.t. 12546975x + 525x^2 + 25314000y + 6000y^2 + 47891250z + 33750z^2 <= 4000000000
MAX = 0;
y_max = solver(6000, 25314000, -4000000000);
z_max = solver(33750, 47891250, -4000000000);
for y = 0:floor(y_max)
    for z = 0:floor(z_max)
        x = solver(525, 12546975, 25314000*y + 6000*y^2 + 47891250*z + 33750*z^2 - 4000000000);
        x = floor(x);
        if isnan(x) || x < 0
            break;
        end
        if 3*x + 6*y + 9*z > MAX
            MAX = 3*x + 6*y + 9*z;
            x_star = x;
            y_star = y;
            z_star = z;
        end
    end
end
end

function val = solver(a, b, c)
% This function solves the equation a*x^2 + b*x + c = 0.
% The equation has two roots; this function returns only the larger one.
if b*b - 4*a*c >= 0
    val = (-b + sqrt(b*b - 4*a*c))/(2*a);
else
    val = nan; % no real root
end
end
The solution is:
MAX =
945
x_star =
287
y_star =
14
z_star =
0
We want to study the error in the forward-difference and central-difference approximations of the derivative, tabulate the error for h = [1e-3 1e-4 1e-5 1e-6 1e-7 1e-8 1e-9 1e-10 1e-11 1e-12 1e-13], and draw a log-log diagram. Any tips on how to do this?
These are our central and forward differences:
centdiff = subs(f, x+h)/(2*h) - subs(f, x-h)/(2*h)
framdiff = (subs(f, x+h) - f)/h
And our function:
f=60*x-(x.^2+x+0.1).^6./(x+1).^6-10*x.*exp(-x);
The error in the approximation is the difference between the result you get from it and the analytical result. Luckily, you have a nice function f, which can (more or less) easily be differentiated. After finding the derivative and creating the corresponding MATLAB expression, you just need to compare the analytical result with the approximate result. The simplest way is probably a for loop over your different values of h.
So, the idea is something like this (not tested, just to give you an idea):
cent_error = zeros(size(h));
forw_error = zeros(size(h));
for idx = 1:numel(h)
    cent_error(idx) = abs(analytical_diff - centdiff(f, h(idx)));
    forw_error(idx) = abs(analytical_diff - forwdiff(f, h(idx)));
end
loglog(h, cent_error, h, forw_error)
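For completeness, here is one way the whole computation might be put together (an untested sketch; the evaluation point x0 = 1 is an arbitrary assumption, and the Symbolic Math Toolbox is assumed to be available):
syms x
f = 60*x - (x^2 + x + 0.1)^6/(x + 1)^6 - 10*x*exp(-x);
df = diff(f, x);                                  % analytical derivative
x0 = 1;                                           % evaluation point (arbitrary choice)
h = [1e-3 1e-4 1e-5 1e-6 1e-7 1e-8 1e-9 1e-10 1e-11 1e-12 1e-13];
exact = double(subs(df, x, x0));
fwd  = (double(subs(f, x, x0 + h)) - double(subs(f, x, x0))) ./ h;          % forward difference
cent = (double(subs(f, x, x0 + h)) - double(subs(f, x, x0 - h))) ./ (2*h);  % central difference
forw_error = abs(fwd - exact);
cent_error = abs(cent - exact);
[h.' forw_error.' cent_error.']                   % tabulate the errors
loglog(h, forw_error, 'o-', h, cent_error, 's-')
xlabel('h'), ylabel('absolute error')
legend('forward difference', 'central difference')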