I have written code in which functions call each other. The working code is as follows:
import numpy as np
from scipy.optimize import leastsq
import RF
func = RF.roots
# residuals = RF.residuals
def residuals(params, x, y):
    return y - func(params, x)

def estimation(x, y):
    p_guess = [1, 2, 0.5, 0]
    params, cov, infodict, mesg, ier = leastsq(residuals, p_guess, args=(x, y), full_output=True)
    return params
x = np.array([2.78e-03, 3.09e-03, 3.25e-03, 3.38e-03, 3.74e-03, 4.42e-03, 4.45e-03, 4.75e-03, 8.05e-03, 1.03e-02, 1.30e-02])
y = np.array([2.16e+02, 2.50e+02, 3.60e+02, 4.48e+02, 5.60e+02, 8.64e+02, 9.00e+02, 1.00e+03, 2.00e+03, 3.00e+03, 4.00e+03])
FIT_params = estimation(x, y)
print(FIT_params)
where the RF file (RF.py) is:
def roots(params, x):
    a, b, c, d = params
    y = a * (b * x) ** c + d
    return y

def residuals(params, x, y):
    return y - func(params, x)
I would like to remove the residuals function from the main code and call it from the RF file instead, i.e. by uncommenting the line residuals = RF.residuals. Doing so raises NameError: name 'func' is not defined. I then added a func argument to RF's residuals function, as def residuals(func, params, x, y):, which raises TypeError: residuals() missing 1 required positional argument: 'y'. The error seems to be related to the fourth argument of the residuals function, because it complains about 'func' instead when the func argument is placed after the y argument. I couldn't find the source of the issue, but I guess it is related to some limitation on function arguments. I would appreciate it if anyone could guide me to understand the error and its solution.
Is it possible to move the residuals function from the main code into the RF file? How?
The problem is that there's no global variable func in your file RF.py, hence it can't be found. A simple solution would be to add an additional parameter to your residuals function:
# RF.py
def roots(params, x):
    a, b, c, d = params
    y = a * (b * x) ** c + d
    return y

def residuals(params, func, x, y):
    return y - func(params, x)
Then, you can use it inside your other file like this:
import numpy as np
from scipy.optimize import leastsq
from RF import residuals, roots as func
def estimation(func, x, y):
    p_guess = [1, 2, 0.5, 0]
    params, cov, infodict, mesg, ier = leastsq(residuals, p_guess, args=(func, x, y), full_output=True)
    return params
x = np.array([2.78e-03, 3.09e-03, 3.25e-03, 3.38e-03, 3.74e-03, 4.42e-03, 4.45e-03, 4.75e-03, 8.05e-03, 1.03e-02, 1.30e-02])
y = np.array([2.16e+02, 2.50e+02, 3.60e+02, 4.48e+02, 5.60e+02, 8.64e+02, 9.00e+02, 1.00e+03, 2.00e+03, 3.00e+03, 4.00e+03])
FIT_params = estimation(func, x, y)
print(FIT_params)
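As a variation (not part of the answer above), functools.partial can bind everything except params, so leastsq needs no args tuple at all. This is only a sketch and assumes the parameterized RF.residuals(params, func, x, y) defined above:
from functools import partial
from scipy.optimize import leastsq
import RF

def estimation(x, y):
    p_guess = [1, 2, 0.5, 0]
    # bind func and the data once; leastsq then only passes params positionally
    fit_fn = partial(RF.residuals, func=RF.roots, x=x, y=y)
    params, cov, infodict, mesg, ier = leastsq(fit_fn, p_guess, full_output=True)
    return params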
Hi, I've been asked to solve the SIR model using the fsolve command in MATLAB and the 3-point backward (BDF) scheme. I'm really confused about how to proceed, please help. This is what I have so far. I created a function for the 3BDF scheme but I'm not sure how to proceed with fsolve to solve the system of nonlinear ODEs. The SIR model is

S' = -(beta*I*S)/(S+I+R),   I' = (beta*I*S)/(S+I+R) - gamma*I,   R' = gamma*I

and the 3BDF scheme is formulated as

y(i+1) = (4*y(i) - y(i-1))/3 + 2*h/3 * f(t(i+1), y(i+1))
clc
clear all
gamma=1/7;
beta=1/3;
ode1 = @(R,S,I) -(beta*I*S)/(S+I+R);
ode2 = @(R,S,I) (beta*I*S)/(S+I+R)-I*gamma;
ode3 = @(I) gamma*I;
f(t,[S,I,R]) = [-(beta*I*S)/(S+I+R); (beta*I*S)/(S+I+R)-I*gamma; gamma*I];
R0=0;
I0=10;
S0=8e6;
odes={ode1;ode2;ode3}
fun = @root2d;
x0 = [0,0];
x = fsolve(fun,x0)
function [xs,yb] = ThreePointBDF(f, x0, xmax, h, y0)
% This function should return the numerical solution of y at x = xmax.
% (It should not return the entire time history of y.)
% TO BE COMPLETED
xs = x0:h:xmax;
y = zeros(1,length(xs));
y(1) = y0;
yb(1) = y0 + f(x0,y0)*h;
for i = 1:length(xs)-1
    R = R0;
    y1(i+1,:) = fsolve(@(u) u - 2*h/3*f(t(i+1),u) - R, y1(i-1,:) + 2*h*F(i,:))
    S = S0;
    y2(i+1,:) = fsolve(@(u) u - 2*h/3*f(t(i+1),u) - S, y2(i-1,:) + 2*h*F(i,:))
    I = I0;
    y3(i+1,:) = fsolve(@(u) u - 2*h/3*f(t(i+1),u) - I, y3(i-1,:) + 2*h*F(i,:))
end
end
You have an implicit equation
y(i+1) - 2*h/3*f(t(i+1),y(i+1)) = G = (4*y(i) - y(i-1))/3
where the right-side term G is constant in the call to fsolve, that is, during the solution of the implicit step equation.
Note that this is for the vector valued system y'(t)=f(t,y(t)) where
f(t,[S,I,R]) = [-(beta*I*S)/(S+I+R); (beta*I*S)/(S+I+R)-I*gamma; gamma*I];
To solve this write
G = (4*y(i,:) - y(i-1,:))/3
y(i+1,:) = fsolve(@(u) u-2*h/3*f(t(i+1),u) - G, y(i-1,:)+2*h*F(i,:))
where a midpoint step is used to get an order-2 approximation as the initial guess, with F(i,:) = f(t(i), y(i,:)). Add solver options for error tolerances as necessary; you want the error in the implicit equation to be smaller than the truncation error O(h^3) of the step. One can also keep only a short array of function values, but then one has to be careful about the correspondence between positions in the short array and the time index.
Using all that together with a reference solution from a higher-order standard solver produces error graphs for the three components in which one can see that the first-order error of a constant first step results in a first-order global error, while a second-order error in the first step, obtained with an explicit Euler step, results in a clear second-order global error.
Implement the method in general terms
import numpy as np
from scipy.optimize import fsolve

def BDF2(f, t, y0, y1):
    N, h = len(t)-1, t[1]-t[0]
    y = (N+1)*[np.asarray(y0)]
    y[1] = y1
    for i in range(1, N):
        t1, G = t[i+1], (4*y[i]-y[i-1])/3
        y[i+1] = fsolve(lambda u: u-2*h/3*f(t1,u)-G, y[i-1]+2*h*f(t[i],y[i]), xtol=1e-3*h**3)
    return np.vstack(y)
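As a sketch of the earlier remark about keeping only a short array of values (my own variant, not from the answer above), the loop can carry just the two most recent states and return only the final one:
import numpy as np
from scipy.optimize import fsolve

def BDF2_last(f, t, y0, y1):
    # same BDF2 step, but only y[i-1] and y[i] are kept in memory
    h = t[1] - t[0]
    y_prev, y_curr = np.asarray(y0), np.asarray(y1)
    for i in range(1, len(t)-1):
        G = (4*y_curr - y_prev)/3
        guess = y_prev + 2*h*f(t[i], y_curr)   # midpoint-step initial guess, as above
        y_next = fsolve(lambda u: u - 2*h/3*f(t[i+1], u) - G, guess, xtol=1e-3*h**3)
        y_prev, y_curr = y_curr, y_next
    return y_curr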
Set up the model to be solved
gamma = 1/7
beta = 1/3
print(beta, gamma)
y0 = np.array([8e6, 10, 0])
P = sum(y0); y0 = y0/P

def f(t, y):
    S, I, R = y
    trns = beta*S*I/(S+I+R)
    recv = gamma*I
    return np.array([-trns, trns-recv, recv])
Compute a reference solution and method solutions for the two initialization variants
from scipy.integrate import odeint
tg = np.linspace(0,120,25*128)
yg = odeint(f,y0,tg,atol=1e-12, rtol=1e-14, tfirst=True)
M = 16; # 8,4
t = tg[::M];
h = t[1]-t[0];
y1 = BDF2(f,t,y0,y0)
e1 = y1-yg[::M]
y2 = BDF2(f,t,y0,y0+h*f(0,y0))
e2 = y2-yg[::M]
Plot the errors. The computation is the same as above but embedded in the plot commands; in principle it could be separated out by first computing a list of solutions.
import matplotlib.pyplot as plt

fig, ax = plt.subplots(3, 2, figsize=(12,6))
for M in [16, 8, 4]:
    t = tg[::M]
    h = t[1]-t[0]
    y = BDF2(f,t,y0,y0)
    e = (y-yg[::M])
    for k in range(3): ax[k,0].plot(t,e[:,k],'-o', ms=1, lw=0.5, label = "h=%.3f"%h)
    y = BDF2(f,t,y0,y0+h*f(0,y0))
    e = (y-yg[::M])
    for k in range(3): ax[k,1].plot(t,e[:,k],'-o', ms=1, lw=0.5, label = "h=%.3f"%h)
for k in range(3):
    for j in range(2): ax[k,j].set_ylabel(["$e_S$","$e_I$","$e_R$"][k]); ax[k,j].legend(); ax[k,j].grid()
ax[0,0].set_title("Errors: first step constant")
ax[0,1].set_title("Errors: first step Euler")
I have some experimental data and need to fit the following function to it to determine one of the variables. A Levenberg–Marquardt least-squares algorithm was used in this procedure.
I used the curve-fitting option in the Igor Pro software: I defined a new fit function and tried to define the independent and dependent variables.
Nevertheless, I don't know why I get this error:
"The fitting function returned INF for at least one X variable"
My function is:
sin(theta) = -1+2*sqrt(alpha/x)*exp(-beta*(x-alpha)^2)
beta = 1.135e-4;
sin(theta) = [-0.81704 -0.67649 -0.83137 -0.73468 -0.66744 -0.43602 0.45368 0.75802 0.96705 0.99717 ]
x = [72.01 59.99 51.13 45.53 36.15 31.66 30.16 29.01 25.62 23.47 ]
Is there any suggestion for finding the alpha variable here?
Is there any handy software or program for nonlinear curve fitting?
In gnuplot, it would look like this. The fit is not great, but that's not the "fault" of gnuplot; apparently this data simply cannot be fitted very well with this function.
Code:
### nonlinear curve fitting
reset session
$Data <<EOD
72.01 -0.81704
59.99 -0.67649
51.13 -0.83137
45.53 -0.73468
36.15 -0.66744
31.66 -0.43602
30.16 0.45368
29.01 0.75802
25.62 0.96705
23.47 0.99717
EOD
f(x) = -1+2*sqrt(alpha/x)*exp(-beta*(x-alpha)**2)
# initial guessed values
alpha = 25
beta = 1
set fit nolog results
fit f(x) $Data u 1:2 via alpha,beta
plot $Data u 1:2 w lp pt 7, \
f(x) lc rgb "red"
print sprintf("alpha=%g, beta=%g",alpha,beta)
### end of code
Result:
alpha=25.818, beta=0.0195229
If it might be of some use, my equation search on your data turned up a good fit to a standard 4-parameter logistic equation, "y = d + (a - d) / (1.0 + pow(x / c, b))", with parameters a = 0.96207949, b = 44.14292256, c = 30.67324939, and d = -0.74830947, yielding RMSE = 0.0565 and R-squared = 0.9943. I have included code for a Python graphical fitter using this equation.
import numpy, scipy, matplotlib
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
theta = [-0.81704, -0.67649, -0.83137, -0.73468, -0.66744, -0.43602, 0.45368, 0.75802, 0.96705, 0.99717]
x = [72.01, 59.99, 51.13, 45.53, 36.15, 31.66, 30.16, 29.01, 25.62, 23.47]
# rename to match previous example code
xData = numpy.array(x)
yData = numpy.array(theta)
# StandardLogistic4Parameter equation from zunzun.com
def func(x, a, b, c, d):
    return d + (a - d) / (1.0 + numpy.power(x / c, b))
# these are the same as the scipy defaults
initialParameters = numpy.array([1.0, 1.0, 1.0, 1.0])
# curve fit the test data
fittedParameters, pcov = curve_fit(func, xData, yData, initialParameters)
modelPredictions = func(xData, *fittedParameters)
absError = modelPredictions - yData
SE = numpy.square(absError) # squared errors
MSE = numpy.mean(SE) # mean squared errors
RMSE = numpy.sqrt(MSE) # Root Mean Squared Error, RMSE
Rsquared = 1.0 - (numpy.var(absError) / numpy.var(yData))
print('Parameters:', fittedParameters)
print('RMSE:', RMSE)
print('R-squared:', Rsquared)
print()
##########################################################
# graphics output section
def ModelAndScatterPlot(graphWidth, graphHeight):
    f = plt.figure(figsize=(graphWidth/100.0, graphHeight/100.0), dpi=100)
    axes = f.add_subplot(111)

    # first the raw data as a scatter plot
    axes.plot(xData, yData, 'D')

    # create data for the fitted equation plot
    xModel = numpy.linspace(min(xData), max(xData))
    yModel = func(xModel, *fittedParameters)

    # now the model as a line plot
    axes.plot(xModel, yModel)

    axes.set_xlabel('X Data') # X axis data label
    axes.set_ylabel('Y Data') # Y axis data label

    plt.show()
    plt.close('all') # clean up after using pyplot
graphWidth = 800
graphHeight = 600
ModelAndScatterPlot(graphWidth, graphHeight)
Matlab
I slightly changed the function: the -1 was changed to -gamma, and the fit also optimizes gamma.
The code is as follows:
ydata = [-0.81704 -0.67649 -0.83137 -0.73468 -0.66744 -0.43602 0.45368...
0.75802 0.96705 0.99717 ];
xdata = [72.01 59.99 51.13 45.53 36.15 31.66 30.16 29.01 25.62 23.47 ];
sin_theta = @(alpha, beta, gamma, xdata) -gamma+2.*sqrt(alpha./xdata).*exp(beta.*(xdata-alpha).^2);
%Fitting function as function of array(x) required by lsqcurvefit
f = @(x,xdata) sin_theta(x(1),x(2), x(3),xdata);
% [alpha, beta, gamma]
x0 = [25, 0, 1] ;
options = optimoptions('lsqcurvefit','Algorithm','levenberg-marquardt', 'FunctionTolerance', 1e-30);
[x,resnorm,residual,exitflag,output] = lsqcurvefit(f,x0,xdata,ydata,[], [], options);
% Accuracy
RMSE = sqrt(sum(residual.^2)/length(residual));
alpha = x(1); beta = x(2); gamma = x(3);
%Plotting data
data = linspace(xdata(1),xdata(end));
plot(xdata,ydata,'ro',data,f(x,data),'b-', 'linewidth', 3)
legend('Data','Fitted exponential')
title('Data and Fitted Curve')
set(gca,'FontSize',20)
Result
alpha = 26.0582, beta = -0.0329, gamma = 0.7881 instead of 1, RMSE = 0.1498
Graph
If the objective function is the regularized (penalized) squared error

E(p) = sum_i (y_i - f(x_i, p))^2 + lambda * sum_j p_j^2

how do I code it in Python? I've already coded the normal (unregularized) one:
import numpy as np
import scipy as sp
from scipy.optimize import leastsq
import pylab as pl

m = 9  # the degree of the polynomial

def real_func(x):
    return np.sin(2*np.pi*x)  # sin(2 pi x)

def fake_func(p, x):
    f = np.poly1d(p)  # polynomial
    return f(x)

def residuals(p, y, x):
    return y - fake_func(p, x)

# randomly choose 9 points as x
x = np.linspace(0, 1, 9)
x_show = np.linspace(0, 1, 1000)
y0 = real_func(x)
# add normally distributed noise
y1 = [np.random.normal(0, 0.1) + y for y in y0]
p0 = np.random.randn(m)
plsq = leastsq(residuals, p0, args=(y1, x))
print('Fitting Parameters:', plsq[0])
pl.plot(x_show, real_func(x_show), label='real')
pl.plot(x_show, fake_func(plsq[0], x_show), label='fitted curve')
pl.plot(x, y1, 'bo', label='with noise')
pl.legend()
pl.show()
Since the penalization term is also just quadratic, you could just stack it together with the squares of the error, using weight 1 for the data rows and lambda for the penalization rows.
scipy.optimize.curve_fit does weighted least squares, if you don't want to code it yourself.
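For illustration, here is a minimal sketch of that stacking idea applied to the polynomial example above (the value of lam is arbitrary); leastsq minimizes the sum of squares of the returned vector, which here becomes the data error plus lam times the squared parameters:
import numpy as np
from scipy.optimize import leastsq

m = 9        # degree of the polynomial, as above
lam = 1e-4   # arbitrary regularization strength

def fake_func(p, x):
    return np.poly1d(p)(x)

def residuals_regularized(p, y, x):
    # data residuals with weight 1, penalization rows with weight sqrt(lam)
    return np.concatenate([y - fake_func(p, x), np.sqrt(lam) * p])

x = np.linspace(0, 1, 9)
y1 = np.sin(2*np.pi*x) + np.random.normal(0, 0.1, x.size)
p0 = np.random.randn(m)
plsq = leastsq(residuals_regularized, p0, args=(y1, x))
print('Regularized fitting parameters:', plsq[0])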
How would one go about plotting a plane in MATLAB or matplotlib from a normal vector and a point?
For all the copy/pasters out there, here is similar code for Python using matplotlib:
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
point = np.array([1, 2, 3])
normal = np.array([1, 1, 2])
# a plane is a*x+b*y+c*z+d=0
# [a,b,c] is the normal. Thus, we have to calculate
# d and we're set
d = -point.dot(normal)
# create x,y
xx, yy = np.meshgrid(range(10), range(10))
# calculate corresponding z
z = (-normal[0] * xx - normal[1] * yy - d) * 1. /normal[2]
# plot the surface
plt3d = plt.figure().gca(projection='3d')
plt3d.plot_surface(xx, yy, z)
plt.show()
For Matlab:
point = [1,2,3];
normal = [1,1,2];
% a plane is a*x+b*y+c*z+d=0
% [a,b,c] is the normal. Thus, we have to calculate
% d and we're set
d = -point*normal'; % dot product for less typing
% create x,y
[xx,yy] = ndgrid(1:10,1:10);
% calculate corresponding z
z = (-normal(1)*xx - normal(2)*yy - d)/normal(3);
% plot the surface
figure
surf(xx,yy,z)
Note: this solution only works as long as normal(3) is not 0. If the plane is parallel to the z-axis, you can rotate the dimensions to keep the same approach:
z = (-normal(3)*xx - normal(1)*yy - d)/normal(2); %% assuming normal(3)==0 and normal(2)~=0
%% plot the surface
figure
surf(xx,yy,z)
%% label the axis to avoid confusion
xlabel('z')
ylabel('x')
zlabel('y')
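The same dimension-swapping idea in Python, as a sketch of my own (not from the original answer): solve the plane equation for whichever coordinate has the largest normal component, so the division is never by zero:
import numpy as np

point = np.array([1.0, 2.0, 3.0])
normal = np.array([1.0, 2.0, 0.0])  # example normal with a zero z-component
d = -point.dot(normal)

# solve for the coordinate whose normal component is largest in magnitude
k = int(np.argmax(np.abs(normal)))
i, j = [ax for ax in (0, 1, 2) if ax != k]

uu, vv = np.meshgrid(range(10), range(10))
ww = (-normal[i]*uu - normal[j]*vv - d) / normal[k]

# reassemble the grids in x, y, z order for plotting
grids = [None, None, None]
grids[i], grids[j], grids[k] = uu, vv, ww
xx, yy, zz = grids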
For copy-pasters wanting a gradient on the surface:
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
import numpy as np
import matplotlib.pyplot as plt
point = np.array([1, 2, 3])
normal = np.array([1, 1, 2])
# a plane is a*x+b*y+c*z+d=0
# [a,b,c] is the normal. Thus, we have to calculate
# d and we're set
d = -point.dot(normal)
# create x,y
xx, yy = np.meshgrid(range(10), range(10))
# calculate corresponding z
z = (-normal[0] * xx - normal[1] * yy - d) * 1. / normal[2]
# plot the surface
plt3d = plt.figure().gca(projection='3d')
Gx, Gy = np.gradient(xx * yy) # gradients with respect to x and y
G = (Gx ** 2 + Gy ** 2) ** .5 # gradient magnitude
N = G / G.max() # normalize 0..1
plt3d.plot_surface(xx, yy, z, rstride=1, cstride=1,
facecolors=cm.jet(N),
linewidth=0, antialiased=False, shade=False
)
plt.show()
The above answers are good enough. One thing to mention is that they all use the same method: calculating the z value for a given (x, y) grid. The drawback is that they meshgrid the plane in the x-y domain, so the patch actually drawn in 3D depends on the plane's orientation; only its projection onto the x-y plane keeps the intended shape. For example, you cannot get an actual square in 3D space this way, only a distorted one.
To avoid this, there is a different way using rotation. If you first generate the data in the x-y plane (it can be any shape) and then rotate it by the appropriate amount (from [0 0 1] to your normal vector), you will get what you want. Simply run the code below for reference.
point = [1,2,3];
normal = [1,2,2];
t=(0:10:360)';
circle0=[cosd(t) sind(t) zeros(length(t),1)];
r=vrrotvec2mat(vrrotvec([0 0 1],normal));
circle=circle0*r'+repmat(point,length(circle0),1);
patch(circle(:,1),circle(:,2),circle(:,3),.5);
axis square; grid on;
%add line
line=[point;point+normr(normal)]
hold on;plot3(line(:,1),line(:,2),line(:,3),'LineWidth',5)
This gives a circle in 3D:
A cleaner Python example that also works for tricky $x, y, z$ situations:
from mpl_toolkits.mplot3d import axes3d
from matplotlib.patches import Circle, PathPatch
import matplotlib.pyplot as plt
from matplotlib.transforms import Affine2D
from mpl_toolkits.mplot3d import art3d
import numpy as np
def plot_vector(fig, orig, v, color='blue'):
    ax = fig.gca(projection='3d')
    orig = np.array(orig); v = np.array(v)
    ax.quiver(orig[0], orig[1], orig[2], v[0], v[1], v[2], color=color)
    ax.set_xlim(0, 10); ax.set_ylim(0, 10); ax.set_zlim(0, 10)
    ax = fig.gca(projection='3d')
    return fig

def rotation_matrix(d):
    sin_angle = np.linalg.norm(d)
    if sin_angle == 0:
        return np.identity(3)
    d /= sin_angle
    eye = np.eye(3)
    ddt = np.outer(d, d)
    skew = np.array([[    0,  d[2], -d[1]],
                     [-d[2],     0,  d[0]],
                     [ d[1], -d[0],     0]], dtype=np.float64)
    M = ddt + np.sqrt(1 - sin_angle**2) * (eye - ddt) + sin_angle * skew
    return M

def pathpatch_2d_to_3d(pathpatch, z, normal):
    if type(normal) is str:  # translate strings to normal vectors
        index = "xyz".index(normal)
        normal = np.roll((1.0, 0, 0), index)

    normal /= np.linalg.norm(normal)  # make sure the vector is normalised

    path = pathpatch.get_path()  # get the path and the associated transform
    trans = pathpatch.get_patch_transform()
    path = trans.transform_path(path)  # apply the transform

    pathpatch.__class__ = art3d.PathPatch3D  # change the class
    pathpatch._code3d = path.codes  # copy the codes
    pathpatch._facecolor3d = pathpatch.get_facecolor()  # get the face color

    verts = path.vertices  # get the vertices in 2D

    d = np.cross(normal, (0, 0, 1))  # obtain the rotation vector
    M = rotation_matrix(d)  # get the rotation matrix

    pathpatch._segment3d = np.array([np.dot(M, (x, y, 0)) + (0, 0, z) for x, y in verts])

def pathpatch_translate(pathpatch, delta):
    pathpatch._segment3d += delta

def plot_plane(ax, point, normal, size=10, color='y'):
    p = Circle((0, 0), size, facecolor=color, alpha=.2)
    ax.add_patch(p)
    pathpatch_2d_to_3d(p, z=0, normal=normal)
    pathpatch_translate(p, (point[0], point[1], point[2]))
o = np.array([5,5,5])
v = np.array([3,3,3])
n = [0.5, 0.5, 0.5]
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.gca(projection='3d')
plot_plane(ax, o, n, size=3)
ax.set_xlim(0,10);ax.set_ylim(0,10);ax.set_zlim(0,10)
plt.show()