Variable dependence in scipy.special.genlaguerre - scipy

I'm new to Python. If I wanted L(n, a, x), where L is the generalized Laguerre polynomial, then I could simply use
from scipy.special import genlaguerre
print(genlaguerre(n, a))
However, I am having trouble obtaining something like L(n, a, 2 pi x), since there is no explicit variable dependence in the function genlaguerre.

The object returned by genlaguerre(n, a) is callable; you call it to evaluate the polynomial at a given x.
For example,
In [71]: import numpy as np
In [72]: import matplotlib.pyplot as plt
In [73]: from scipy.special import genlaguerre
In [74]: n = 3
In [75]: alpha = 4.5
In [76]: L = genlaguerre(n, alpha)
To get the value of the polynomial at x, call L(x):
In [77]: L(0)
Out[77]: 44.6875
In [78]: L(1)
Out[78]: 23.895833333333332
In [79]: L([2, 2.5, 3])
Out[79]: array([ 9.60416667, 4.58333333, 0.8125 ])
In [80]: x = np.linspace(0, 14, 100)
In [81]: plt.plot(x, L(x))
Out[81]: [<matplotlib.lines.Line2D at 0x11cde42b0>]
In [82]: plt.xlabel('x')
Out[82]: <matplotlib.text.Text at 0x11cddc4a8>
In [83]: plt.ylabel('$L_{%d}^{(%g)}(x)$' % (n, alpha))
Out[83]: <matplotlib.text.Text at 0x11cdce320>
In [84]: plt.grid()
Here's the plot generated by the above code: [figure: plot of L_3^{(4.5)}(x) for 0 <= x <= 14]
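To get something like L(n, a, 2 pi x), just pass 2*np.pi*x to the callable; the returned polynomial object accepts any scalar or array argument. A minimal sketch (names are from the session above):
import numpy as np
from scipy.special import genlaguerre

n, alpha = 3, 4.5
L = genlaguerre(n, alpha)  # callable polynomial object
x = np.linspace(0, 2, 50)
y = L(2 * np.pi * x)       # this is L(n, alpha, 2*pi*x)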

Related

DCT: Julia equivalent to scipy.fftpack.dct

I wonder how I could compute the following in Julia
import scipy.fftpack
scipy.fftpack.dct([1,2,3], axis=0)
array([ 1.20000000e+01, -3.46410162e+00, -4.44089210e-16])
I have seen that FFTW.jl seems to have the equivalent of
import scipy.fftpack
scipy.fftpack.dct([1,2,3], norm='ortho')
array([ 3.46410162, -1.41421356, 0. ])
which in Julia's FFTW would be
using FFTW
dct([1,2,3])
3-element Vector{Float64}:
3.4641016151377544
-1.414213562373095
9.064933036736789e-17
I don't think FFTW.jl has a built-in equivalent of scipy's default (unnormalized) DCT, but you can certainly build your own normalization. FFTW.jl's dct already returns the orthonormalized DCT-II (scipy's norm='ortho'), so reproducing scipy's default only requires rescaling the output. Defining a separate function also avoids shadowing FFTW's own dct:
using FFTW

# FFTW.jl's dct is the orthonormal DCT-II (scipy's norm="ortho").
# To match scipy.fftpack.dct's default (norm=None) output, undo
# the orthonormal scaling. Sketch for 1-D input.
function scipy_dct(x, dims = 1; norm = nothing)
    res = dct(x, dims)
    if norm != "ortho"
        res[1] *= 2 * sqrt(size(x, dims))
        res[2:end] .*= sqrt(2 * size(x, dims))
    end
    res
end
julia> scipy_dct([1, 2, 3])
3-element Vector{Float64}:
11.999999999999998
-3.464101615137754
2.2204460492503128e-16
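For reference, the rescaling used in scipy_dct can be checked on the Python side; this small sketch just confirms how scipy's two normalizations relate:
import numpy as np
from scipy.fftpack import dct

x = [1, 2, 3]
N = len(x)
unnorm = dct(x)               # scipy's default (norm=None)
ortho = dct(x, norm='ortho')  # orthonormalized variant

# The two conventions differ only by per-element scale factors:
print(np.allclose(unnorm[0], ortho[0] * 2 * np.sqrt(N)))    # True
print(np.allclose(unnorm[1:], ortho[1:] * np.sqrt(2 * N)))  # True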

I have a problem implementing the residuals function for scipy's leastsq optimization when importing it from another file

I have written code in which functions call each other. The working code is as follows:
import numpy as np
from scipy.optimize import leastsq
import RF

func = RF.roots
# residuals = RF.residuals

def residuals(params, x, y):
    return y - func(params, x)

def estimation(x, y):
    p_guess = [1, 2, 0.5, 0]
    params, cov, infodict, mesg, ier = leastsq(residuals, p_guess, args=(x, y), full_output=True)
    return params

x = np.array([2.78e-03, 3.09e-03, 3.25e-03, 3.38e-03, 3.74e-03, 4.42e-03, 4.45e-03, 4.75e-03, 8.05e-03, 1.03e-02, 1.30e-02])
y = np.array([2.16e+02, 2.50e+02, 3.60e+02, 4.48e+02, 5.60e+02, 8.64e+02, 9.00e+02, 1.00e+03, 2.00e+03, 3.00e+03, 4.00e+03])

FIT_params = estimation(x, y)
print(FIT_params)
where the RF file is:
def roots(params, x):
    a, b, c, d = params
    y = a * (b * x) ** c + d
    return y

def residuals(params, x, y):
    return y - func(params, x)
I would like to remove the residuals function from the main code and call it from the RF file instead, i.e. by activating the line residuals = RF.residuals. Doing so raises NameError: name 'func' is not defined. I then added a func parameter to RF's residuals function, as def residuals(func, params, x, y):, which leads to TypeError: residuals() missing 1 required positional argument: 'y'. The error seems to be related to the fourth argument of residuals in this example, because placing func after the y argument produces an error about 'func' instead. I couldn't find the source of the issue, but I guess it is related to how the arguments are passed to the function. I would appreciate it if anyone could help me understand the error and its solution.
Is it possible to move the residuals function from the main code to the RF file? How?
The problem is that there's no global variable func in your file RF.py, hence it can't be found. A simple solution would be to add an additional parameter to your residuals function:
# RF.py
def roots(params, x):
    a, b, c, d = params
    y = a * (b * x) ** c + d
    return y

def residuals(params, func, x, y):
    return y - func(params, x)
Then, you can use it inside your other file like this:
import numpy as np
from scipy.optimize import leastsq
from RF import residuals, roots as func

def estimation(func, x, y):
    p_guess = [1, 2, 0.5, 0]
    params, cov, infodict, mesg, ier = leastsq(residuals, p_guess, args=(func, x, y), full_output=True)
    return params

x = np.array([2.78e-03, 3.09e-03, 3.25e-03, 3.38e-03, 3.74e-03, 4.42e-03, 4.45e-03, 4.75e-03, 8.05e-03, 1.03e-02, 1.30e-02])
y = np.array([2.16e+02, 2.50e+02, 3.60e+02, 4.48e+02, 5.60e+02, 8.64e+02, 9.00e+02, 1.00e+03, 2.00e+03, 3.00e+03, 4.00e+03])

FIT_params = estimation(func, x, y)
print(FIT_params)
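Alternatively, you can pre-bind func with functools.partial so that leastsq again sees the plain (params, x, y) signature. This is only a sketch of the same idea, and it assumes you declare func as the last parameter of RF.residuals:
# RF.py variant, with func moved to the end:
# def residuals(params, x, y, func):
#     return y - func(params, x)

from functools import partial
import RF

residuals = partial(RF.residuals, func=RF.roots)
# leastsq can then be called exactly as in the original code:
# params, cov, infodict, mesg, ier = leastsq(residuals, p_guess, args=(x, y), full_output=True)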

Why does this cubic spline dimension error appear?

import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import CubicSpline

def f(x):
    return 1 / (1 + x**2)

a = -1
b = 1
n = 5
nPts = 101
xArray = np.linspace(a, b, n)
yArray = f(xArray)
x = np.linspace(a, b, nPts)
y = CubicSpline(xArray, yArray, x)
plt.plot(x, y, label="Interpolation, " + str(n) + " points")
I'm wondering what the problem is with using a cubic spline in this way. The error I get says there is a wrong dimension:
ValueError: x and y must have same first dimension, but have shapes (101,) and (1,)
I see your misunderstanding here stems from a misinterpretation of the 'extrapolate' argument; to quote the documentation of CubicSpline:
extrapolate{bool, ‘periodic’, None}, optional
If bool, determines whether to extrapolate to out-of-bounds points
based on first and last intervals, or to return NaNs. If ‘periodic’,
periodic extrapolation is used. If None (default), extrapolate is set
to ‘periodic’ for bc_type='periodic' and to True otherwise.
It is a boolean (or 'periodic' or None), not the list of points at which you want to interpolate or extrapolate.
The correct usage is to fit a CubicSpline first and then call it to interpolate or extrapolate:
from scipy.interpolate import CubicSpline
import numpy as np
import matplotlib.pyplot as plt

def f(x):
    return 1 / (1 + x**2)

a = -1
b = 1
n = 5
xArray = np.linspace(a, b, n)
yArray = f(xArray)
x = np.linspace(a, b, 101)
cs = CubicSpline(xArray, yArray, extrapolate=True)  # fit a cubic spline to the data
y = cs(x)  # evaluate the spline (interpolate/extrapolate) at the points x
plt.plot(x, y, label="Interpolation, " + str(n) + " points")
plt.show()
The above code will work.
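Since the spline was built with extrapolate=True, you can also evaluate it outside the fitted interval; a quick sketch reusing cs from above:
x_outside = np.linspace(1, 2, 5)  # points beyond the data range [-1, 1]
print(cs(x_outside))              # values extrapolated from the boundary polynomial pieces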

Python: Fitting a non-linear function to a surface

I've got a Gauss error function erfc(x) which I need to fit to my surface data.
The whole equation is:
Z = Z_0 * erfc(x / (2*sqrt(D*t)))
I know Z, Z_0, x and t from the data; the only parameter I am looking for is D. Using curve_fit is fine for single lines, but here I need to find a single constant parameter D for the whole surface.
The surface looks like this: [image of the surface omitted].
Any ideas, please? Thanks
I've created an example that demonstrates a multivariate curve_fit. Note that I used a class object to store the parameter Z0, but there are other ways to do this (see this question).
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

# Class to contain the model and its fixed parameters
class fitClass:
    def __init__(self):
        pass

    # Model with unknown parameter D
    def func(self, p, D):
        x, t = p
        # note the parentheses: the argument is x / (2*sqrt(D*t))
        Z = self.Z0 * erfc(x / (2 * np.sqrt(D * t)))
        return Z

# Instantiate the class and define the fixed parameter
inst = fitClass()
inst.Z0 = 1.0
D = 10.0
Nx = int(1e2)
Nt = int(1e1)

# Independent variables
x = np.linspace(-1.0, 1.0, Nx)
t = np.linspace(1.0, 5.0, Nt)
X, T = np.meshgrid(x, t)

# Merge the independent variables into one (2, Nx*Nt) array
xdata = np.vstack([X.reshape(-1), T.reshape(-1)])

# Synthetic ydata (noisy measurement)
noise = 0.5 * (np.random.rand(Nx * Nt) - 0.5)
Z = inst.func(xdata, D)
Z_noisy = Z + noise

# Fit the model to the data
popt, pcov = curve_fit(inst.func, xdata, Z_noisy)
D_fit = popt[0]
print(D_fit)
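curve_fit starts from 1.0 for every parameter by default; if that is far from the true D, you can pass an explicit starting guess (the value below is just illustrative):
popt, pcov = curve_fit(inst.func, xdata, Z_noisy, p0=[5.0])
print(popt[0], np.sqrt(pcov[0, 0]))  # fitted D and its standard error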

In Scipy LeastSq - How to add the penalty term

If the objective function is the sum of squared errors plus a quadratic penalty term,
E(p) = 1/2 * sum_i (f(x_i, p) - y_i)^2 + lambda/2 * ||p||^2,
how do I code it in Python?
I've already coded the unpenalized version:
import numpy as np
import scipy as sp
from scipy.optimize import leastsq
import pylab as pl

m = 9  # number of polynomial coefficients (degree m - 1)

def real_func(x):
    return np.sin(2 * np.pi * x)  # sin(2 pi x)

def fake_func(p, x):
    f = np.poly1d(p)  # polynomial with coefficient vector p
    return f(x)

def residuals(p, y, x):
    return y - fake_func(p, x)

# choose 9 evenly spaced sample points for x
x = np.linspace(0, 1, 9)
x_show = np.linspace(0, 1, 1000)
y0 = real_func(x)

# add normally distributed noise
y1 = [np.random.normal(0, 0.1) + y for y in y0]

p0 = np.random.randn(m)
plsq = leastsq(residuals, p0, args=(y1, x))
print('Fitting Parameters:', plsq[0])

pl.plot(x_show, real_func(x_show), label='real')
pl.plot(x_show, fake_func(plsq[0], x_show), label='fitted curve')
pl.plot(x, y1, 'bo', label='with noise')
pl.legend()
pl.show()
Since the penalization term is also just quadratic, you can simply stack it together with the squares of the error: use weight 1 for the data rows and sqrt(lambda) for the penalization rows (leastsq squares the residuals, so the weight enters as its square root).
scipy.optimize.curve_fit does weighted least squares via its sigma argument, if you don't want to code it yourself.
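A minimal sketch of that idea with leastsq, where the penalty rows sqrt(lambda)*p are appended to the data residuals (the value of lam is just an illustrative choice):
import numpy as np
from scipy.optimize import leastsq

lam = 1e-3  # regularization strength lambda (illustrative)

def fake_func(p, x):
    return np.poly1d(p)(x)

def residuals_penalized(p, y, x):
    # data residuals followed by sqrt(lambda)*p as extra rows;
    # leastsq minimizes the sum of squares of this vector, i.e.
    # sum_i (y_i - f(x_i))^2 + lambda * sum_j p_j^2
    err = y - fake_func(p, x)
    return np.concatenate([err, np.sqrt(lam) * p])

x = np.linspace(0, 1, 9)
y = np.sin(2 * np.pi * x) + np.random.normal(0, 0.1, x.size)
p0 = np.random.randn(9)
p_fit, ier = leastsq(residuals_penalized, p0, args=(y, x))
print('Regularized parameters:', p_fit)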