scipy.optimize.curve_fit ValueError: The truth value of an array with more than one element is ambiguous

I am trying to use scipy.optimize.curve_fit to fit a sigmoidal curve to my dataset, but I get the following error:
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "Y:\WORK\code\venv\lib\site-packages\scipy\optimize\minpack.py", line 784, in curve_fit
res = leastsq(func, p0, Dfun=jac, full_output=1, **kwargs)
File "Y:\WORK\code\venv\lib\site-packages\scipy\optimize\minpack.py", line 410, in leastsq
shape, dtype = _check_func('leastsq', 'func', func, x0, args, n)
File "Y:\WORK\code\venv\lib\site-packages\scipy\optimize\minpack.py", line 24, in _check_func
res = atleast_1d(thefunc(*((x0[:numinputs],) + args)))
File "Y:\WORK\code\venv\lib\site-packages\scipy\optimize\minpack.py", line 484, in func_wrapped
return func(xdata, *params) - ydata
File "<input>", line 7, in func
File "Y:\WORK\code\venv\lib\site-packages\scipy\integrate\quadpack.py", line 348, in quad
flip, a, b = b < a, min(a, b), max(a, b)
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
The sigmoidal function I'm trying to fit contains an integral from negative infinity to a function of the independent variable. Here is my code:
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
from scipy.integrate import quad
import numpy as np
import math
x_data = np.array([ 2.5, 7.5, 12.5, 17.5, 22.5, 27.5, 32.5, 37.5, 42.5, 47.5, 52.5, 57.5, 62.5, 67.5, 72.5, 77.5])
y_data = np.array([0.05, 0.09, 0.13, 0.15, 0.2, 0.35, 0.45, 0.53, 0.68, 0.8, 0.9, 0.92, 0.99, 1, 0.95, 0.97])
# Construct the sigmoidal function
def integrand(x):
    return np.exp(-np.square(x)/2)

def func(x, a, b):
    t = (x - a)/(b*a)
    integral = quad(integrand, -math.inf, t)[0]
    return math.sqrt(1/(2*math.pi))*integral
# Initial guess for parameters a, b
initialGuess = [35, 0.25]
# Perform the curve-fit
popt, pcov = curve_fit(func, x_data, y_data, initialGuess)
The problem seems to come from the integral part. In other similar posts, the function contains a boolean condition that switches to a different expression when the independent variable exceeds some value, but that is not the case here. I'm very confused...

Thanks to @hpaulj: quad only takes a scalar limit, and my t was an array. I changed the function as follows, and the error went away:
def func(x, a, b):
    sigmoid_arr = np.array([])
    for i in x:
        t = (i - a)/(b*a)
        integral = quad(integrand, -math.inf, t)[0]
        sigmoid = math.sqrt(1/(2*math.pi))*integral
        sigmoid_arr = np.append(sigmoid_arr, sigmoid)
    return sigmoid_arr
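Note that this integral is exactly the standard normal CDF evaluated at t, so the loop can also be avoided entirely with scipy.special.ndtr, which accepts arrays; a minimal vectorized sketch:

from scipy.special import ndtr

def func(x, a, b):
    # ndtr computes the standard normal CDF, i.e. the same integral, for a whole array at once
    t = (x - a)/(b*a)
    return ndtr(t)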

Related

I have a problem implementing the residuals function for scipy's leastsq optimization when importing it from another file

I have written code in which functions call each other. The working code is as follows:
import numpy as np
from scipy.optimize import leastsq
import RF
func = RF.roots
# residuals = RF.residuals
def residuals(params, x, y):
    return y - func(params, x)

def estimation(x, y):
    p_guess = [1, 2, 0.5, 0]
    params, cov, infodict, mesg, ier = leastsq(residuals, p_guess, args=(x, y), full_output=True)
    return params
x = np.array([2.78e-03, 3.09e-03, 3.25e-03, 3.38e-03, 3.74e-03, 4.42e-03, 4.45e-03, 4.75e-03, 8.05e-03, 1.03e-02, 1.30e-02])
y = np.array([2.16e+02, 2.50e+02, 3.60e+02, 4.48e+02, 5.60e+02, 8.64e+02, 9.00e+02, 1.00e+03, 2.00e+03, 3.00e+03, 4.00e+03])
FIT_params = estimation(x, y)
print(FIT_params)
where RF file is:
def roots(params, x):
    a, b, c, d = params
    y = a * (b * x) ** c + d
    return y

def residuals(params, x, y):
    return y - func(params, x)
I would like to remove the residuals function from the main code and call it from the RF file instead, i.e. by activating the line residuals = RF.residuals. Doing so raises NameError: name 'func' is not defined. I tried adding a func argument to RF's residuals function, as def residuals(func, params, x, y):, but that raises TypeError: residuals() missing 1 required positional argument: 'y'; the error seems to be related to the fourth argument of the residuals function, because it complains about 'func' if the func argument is placed after the y argument. I couldn't find the source of the issue, but I guess it must be related to how arguments are passed to the function. I would appreciate it if anyone could guide me to the error and its solution.
Is it possible to move the residuals function from the main code into the RF file? How?
The problem is that there's no global variable func in your file RF.py, so the name can't be found. A simple solution is to add an additional parameter to your residuals function:
# RF.py
def roots(params, x):
    a, b, c, d = params
    y = a * (b * x) ** c + d
    return y

def residuals(params, func, x, y):
    return y - func(params, x)
Then, you can use it inside your other file like this:
import numpy as np
from scipy.optimize import leastsq
from RF import residuals, roots as func
def estimation(func, x, y):
    p_guess = [1, 2, 0.5, 0]
    params, cov, infodict, mesg, ier = leastsq(residuals, p_guess, args=(func, x, y), full_output=True)
    return params
x = np.array([2.78e-03, 3.09e-03, 3.25e-03, 3.38e-03, 3.74e-03, 4.42e-03, 4.45e-03, 4.75e-03, 8.05e-03, 1.03e-02, 1.30e-02])
y = np.array([2.16e+02, 2.50e+02, 3.60e+02, 4.48e+02, 5.60e+02, 8.64e+02, 9.00e+02, 1.00e+03, 2.00e+03, 3.00e+03, 4.00e+03])
FIT_params = estimation(func, x, y)
print(FIT_params)
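If you'd rather keep args=(x, y) as in your original code, another option is a small wrapper that binds func; a sketch using the same RF module:

from RF import residuals, roots as func

def residuals_bound(params, x, y):
    # fix func here so leastsq only needs args=(x, y)
    return residuals(params, func, x, y)

# p_guess, x, y as defined above
params, cov, infodict, mesg, ier = leastsq(residuals_bound, p_guess, args=(x, y), full_output=True)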

Error while executing CrossEntropyLoss() in PyTorch

My dataset contains images of shape [3,28,28]. I have written the following code:
class ConvNet(nn.Module):
    def __init__(self):
        super(ConvNet, self).__init__()
        self.layer1 = nn.Sequential(nn.Conv2d(3, 28, kernel_size=5, stride=1, padding=2), nn.ReLU(), nn.MaxPool2d(kernel_size=2, stride=2))
        self.layer2 = nn.Sequential(nn.Conv2d(28, 56, kernel_size=5, stride=1, padding=2), nn.ReLU(), nn.MaxPool2d(kernel_size=2, stride=2))
        self.drop_out = nn.Dropout()
        self.fc1 = nn.Linear(7 * 7 * 56, 1000)
        self.fc2 = nn.Linear(1000, 10)

    def forward(self, x):
        out = self.layer1(x)
        out = self.layer2(out)
        out = out.reshape(out.size(0), -1)
        out = self.drop_out(out)
        out = self.fc1(out)
        out = self.fc2(out)
        return out
model = ConvNet()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
total_step = len(loader_train)
for e in range(num_epochs):
    print("Epoch ", e+1, ": ")
    for i, (images, labels) in enumerate(loader_train):
        optimizer.zero_grad()
        actual_out = model(images)
        loss = criterion(actual_out, labels)
        loss.backward()
        optimizer.step()
        if (i+1) % 100 == 0:
            print('Epoch [{}/{}], Step [{}/{}], Loss: {:.3f}'.format(e+1, num_epochs, i+1, total_step, loss.item()))
However, I'm getting the following error:
AttributeError Traceback (most recent call last)
<ipython-input> in <module>
8 actual_out = model(images)
9
---> 10 loss = criterion(actual_out, labels)
11 loss.backward()
AttributeError: 'tuple' object has no attribute 'size'
I converted labels into a tensor by the following method:
target_out = torch.empty(batch_size,dtype=torch.long).random_(labels)
loss = criterion(actual_out, target_out)
But that generates:
TypeError Traceback (most recent call last)
<ipython-input> in <module>
---> 11 target_out = torch.empty(batch_size,dtype=torch.long).random_(labels)
12 loss = criterion(actual_out, target_out)
TypeError: random_() received an invalid combination of arguments - got (tuple), but expected one of:
(*, torch.Generator generator)
(int from, int to, *, torch.Generator generator)
(int to, *, torch.Generator generator)
Your labels object is a tuple and you want to convert it to a tensor of dtype long.
You can do this via:
torch.tensor(labels, dtype=torch.long)
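For example, applied inside the training loop above (a sketch; assumes each labels batch is a tuple of Python ints):

for i, (images, labels) in enumerate(loader_train):
    labels = torch.tensor(labels, dtype=torch.long)  # tuple -> LongTensor
    optimizer.zero_grad()
    actual_out = model(images)
    loss = criterion(actual_out, labels)  # CrossEntropyLoss now receives a tensor
    loss.backward()
    optimizer.step()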
Assuming this is how your train_loader is structured, you can reshape in the training loop rather than in the forward() function. I've given an example; change it as you need.
Also, clear the gradients before calling backward; the .to(device) calls are optional and only needed if you have a GPU:
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
                                           batch_size=batch_size,
                                           shuffle=True)

for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(train_loader):
        # origin shape: [100, 1, 28, 28]
        # resized: [100, 784]
        images = images.reshape(-1, 28*28).to(device)
        labels = labels.to(device)

        # Forward pass
        outputs = model(images)
        loss = criterion(outputs, labels)

        # Backward and optimize
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

Python: Fitting non-linear function into surface

I've got a Gauss error function erfc(x) that I need to fit to my surface data.
The whole equation is:
Z = Z_0 * erfc(x / (2*sqrt(D*t)))
I know Z, Z_0, x, and t from the data; the only parameter I am looking for is D. Using curve_fit is fine for single lines, but I need to find a single constant parameter D for the whole surface.
The surface looks like this: [surface plot image]
Any ideas, please? Thanks
I've created an example that demonstrates a multivariate curve_fit. Note that I used a class object to store the parameter Z0, but there are other ways to do this (see this question).
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc
# Class to contain the model and parameters
class fitClass:
    def __init__(self):
        pass

    # Model with unknown parameter D
    def func(self, p, D):
        x, t = p
        # note the parentheses: the intended argument is x / (2*sqrt(D*t))
        Z = self.Z0 * erfc(x / (2*np.sqrt(D*t)))
        return Z
# Instantiate class and define parameters
inst = fitClass()
inst.Z0 = 1.0
D = 10.0
Nx = int(1e2)
Nt = int(1e1)
# Independent variables
x = np.linspace(-1.0, 1.0, Nx)
t = np.linspace(1.0, 5.0, Nt)
X, T = np.meshgrid(x, t)
# Merge independent variables
xdata = np.vstack([X.reshape(-1), T.reshape(-1)])
# Synthetic ydata (noisy measurement)
noise = 0.5*(np.random.rand(Nx*Nt)-0.5)
Z = inst.func(xdata, D)
Z_noisy = Z + noise
# Fit model to data
popt, pcov = curve_fit(inst.func, xdata, Z_noisy)
D_fit = popt[0]
print(D_fit)
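As one of those "other ways", a closure that captures Z0 avoids the class entirely; a minimal sketch under the same setup (same xdata and Z_noisy as above):

def make_func(Z0):
    # return a model function with Z0 fixed, so only D is fitted
    def func(p, D):
        x, t = p
        return Z0 * erfc(x / (2*np.sqrt(D*t)))
    return func

popt, pcov = curve_fit(make_func(1.0), xdata, Z_noisy)
print(popt[0])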

Variable Dependence scipy.special.genlaguerre

I'm new to Python. If I wanted L(n, a, x), where L is the generalized Laguerre polynomial, I could simply use
from scipy.special import genlaguerre
print(genlaguerre(n, a))
However, I am having trouble obtaining something like L(n, a, 2 pi x) since there is no explicit variable dependence in the function genlaguerre.
The object returned by genlaguerre(n, a) is callable; you call it to evaluate it at a given x.
For example,
In [71]: import numpy as np
In [72]: import matplotlib.pyplot as plt
In [73]: from scipy.special import genlaguerre
In [74]: n = 3
In [75]: alpha = 4.5
In [76]: L = genlaguerre(n, alpha)
To get the value of the polynomial at x, call L(x):
In [77]: L(0)
Out[77]: 44.6875
In [78]: L(1)
Out[78]: 23.895833333333332
In [79]: L([2, 2.5, 3])
Out[79]: array([ 9.60416667, 4.58333333, 0.8125 ])
In [80]: x = np.linspace(0, 14, 100)
In [81]: plt.plot(x, L(x))
Out[81]: [<matplotlib.lines.Line2D at 0x11cde42b0>]
In [82]: plt.xlabel('x')
Out[82]: <matplotlib.text.Text at 0x11cddc4a8>
In [83]: plt.ylabel('$L_{%d}^{(%g)}(x)$' % (n, alpha))
Out[83]: <matplotlib.text.Text at 0x11cdce320>
In [84]: plt.grid()
Here's the plot generated by the above code: [plot image]
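In particular, the L(n, a, 2 pi x) from the question is just this callable evaluated at a scaled argument, for example:

vals = L(2 * np.pi * x)  # generalized Laguerre polynomial evaluated at 2*pi*x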

In Scipy LeastSq - How to add the penalty term

If the objective function is the sum of squared errors plus a quadratic penalty term on the coefficients, how do I code it in Python?
I've already coded the normal one:
import numpy as np
import scipy as sp
from scipy.optimize import leastsq
import pylab as pl
m = 9  # number of polynomial coefficients (degree m-1)

def real_func(x):
    return np.sin(2*np.pi*x)  # sin(2 pi x)

def fake_func(p, x):
    f = np.poly1d(p)  # polynomial with coefficients p
    return f(x)

def residuals(p, y, x):
    return y - fake_func(p, x)
# choose 9 evenly spaced points as x
x = np.linspace(0, 1, 9)
x_show = np.linspace(0, 1, 1000)
y0 = real_func(x)
# add Gaussian noise
y1 = [np.random.normal(0, 0.1) + y for y in y0]
p0 = np.random.randn(m)
plsq = leastsq(residuals, p0, args=(y1, x))
print('Fitting Parameters:', plsq[0])
pl.plot(x_show, real_func(x_show), label='real')
pl.plot(x_show, fake_func(plsq[0], x_show), label='fitted curve')
pl.plot(x, y1, 'bo', label='with noise')
pl.legend()
pl.show()
Since the penalization term is also just quadratic, you can stack it together with the squares of the error, using weight 1 for the data rows and lambda for the penalization rows.
scipy.optimize.curve_fit does weighted least squares (via its sigma argument), if you don't want to code it yourself.
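A minimal sketch of that stacking approach for an L2 penalty lambda * sum(p**2), reusing the code above (the lambda value is illustrative):

def residuals_penalized(p, y, x, lam):
    # the extra rows sqrt(lam)*p contribute lam*p**2 to the summed squares
    return np.concatenate([y - fake_func(p, x), np.sqrt(lam) * p])

plsq = leastsq(residuals_penalized, p0, args=(y1, x, 1e-4))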