Curve_Fit Cannot find Covariance of First Order System - scipy

I am trying to fit a simple model of the form y = A(1 - exp(-t/tau)) + A0, yet curve_fit spits out the error
Covariance of the parameters could not be estimated
and gives me an atrocious fit. Are my default parameters not robust enough?
Here is my code and data:
   Time (s)   Ads 1   Ads 2   Ads 3   Des 1   Des 2   Des 3   Des 4
0         0  18.979  18.979  18.979  19.034  19.042  19.026  19.028
1        30  18.997  18.993  18.993  19.023  19.019  19.015     NaN
2        45  19.004  18.997  19.000  19.021  19.018  19.012     NaN
3        60  19.009  19.003  19.007  19.020  19.012  19.012  19.011
4        75  19.013  19.007  19.012  19.019  19.011  19.010  19.009
5        90  19.016  19.010  19.015  19.018  19.009  19.009  19.008
6       300  19.022  19.022  19.028     NaN  18.990  18.989  18.990
7       600     NaN     NaN     NaN     NaN  18.984     NaN     NaN
time = df['Time (s)']
def first_order(t, A, tau=100, A0=1):
    t0 = t[0]
    y = A*(1-np.exp(-(t-t0)/tau))+A0
    return y
parameters, covariance = curve_fit(first_order, time, df["Des 2"])
plt.plot(time,df["Des 2"])
plt.plot(np.linspace(time[0],time[-1:],100),first_order(np.linspace(time[0],time[-1:],100),*parameters),'--')

You need to provide a suitable initial guess for the parameters (p0).
Below are the code and the image of the fitted model:
import numpy as np
from matplotlib import pyplot as plt
from scipy.optimize import curve_fit
time = [0, 30, 45, 60, 75, 90, 300, 600]
des2 = [19.042, 19.019, 19.018, 19.012, 19.011, 19.009, 18.990, 18.984]
def first_order(t, A, tau, A0):
    t0 = t[0]
    y = -(A * (1 - np.exp(-(t - t0) / tau)) + A0)
    return y
parameters, _ = curve_fit(first_order, time, des2, p0=(1, 100, 1))
plt.plot(time, des2)
plt.plot(
    np.linspace(time[0], time[-1], 100),
    first_order(np.linspace(time[0], time[-1], 100), *parameters),
    "--",
)
plt.show()
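As a small extension (my own sketch, not part of the original answer), the initial guess can also be derived from the data instead of being hard-coded, using the non-negated model with a signed amplitude; the arrays below repeat the ones defined above:
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, A, tau, A0):
    # t already starts at 0 here, so no t0 offset is needed
    return A * (1 - np.exp(-t / tau)) + A0

time = np.array([0, 30, 45, 60, 75, 90, 300, 600], dtype=float)
des2 = np.array([19.042, 19.019, 19.018, 19.012, 19.011, 19.009, 18.990, 18.984])

# data-driven starting values: offset ~ first sample, amplitude ~ total change,
# time constant ~ a third of the observed time span (all rough heuristics)
p0 = (des2[-1] - des2[0], (time[-1] - time[0]) / 3.0, des2[0])
parameters, covariance = curve_fit(first_order, time, des2, p0=p0)
print(parameters)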

Related

To fit Linear regression Model with and without intercept in python

I need to fit linear regression Model 1: y = β1x1 + ε and Model 2: y = β0 + β1x1 + ε to the data x1 = [0, 1, 2, 3, 4], y = [1, 2, 3, 2, 1]. My objective is to find the coefficients, the squared error loss, the absolute error loss, and the L1.5 loss for both models.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn import linear_model
import statsmodels.formula.api as smf

x1 = [0, 1, 2, 3, 4]
y = [1, 2, 3, 2, 1]
Would you please show me a way to get these?
This first method doesn't use the formula API.
import statsmodels.api as sm
import numpy as np
x1 = np.array([0,1,2,3,4])
y = np.array([1,2,3,2,1])
x1 = x1[:, None]  # transform into a (5, 1) array
res = sm.OLS(y,x1).fit()
print(res.summary())
If you want to use the formula interface, you need to build a DataFrame; the regression is then "y ~ x1" (the formula API includes a constant automatically, so if you do not want an intercept you need to add - 1 to the right-hand side of the formula).
import statsmodels.formula.api as smf
import pandas as pd
x1 = [0,1,2,3,4]
y = [1,2,3,2,1]
data = pd.DataFrame({"y":y,"x1":x1})
res = smf.ols("y ~ x1", data).fit()
print(res.summary())
This produces:
                            OLS Regression Results
==============================================================================
Dep. Variable:                      y   R-squared:                       0.000
Model:                            OLS   Adj. R-squared:                 -0.333
Method:                 Least Squares   F-statistic:                 4.758e-16
Date:                Wed, 17 Mar 2021   Prob (F-statistic):               1.00
Time:                        22:11:40   Log-Likelihood:                -5.6451
No. Observations:                   5   AIC:                             15.29
Df Residuals:                       3   BIC:                             14.51
Df Model:                           1
Covariance Type:            nonrobust
==============================================================================
                 coef    std err          t      P>|t|      [0.025      0.975]
------------------------------------------------------------------------------
Intercept      1.8000      0.748      2.405      0.095      -0.582       4.182
x1                  0      0.306          0      1.000      -0.972       0.972
==============================================================================
Omnibus:                          nan   Durbin-Watson:                   1.429
Prob(Omnibus):                    nan   Jarque-Bera (JB):                0.375
Skew:                           0.344   Prob(JB):                        0.829
Kurtosis:                       1.847   Cond. No.                         4.74
==============================================================================
Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
To include an intercept in the non-formula API, you can simply use
res_constant = sm.OLS(y, sm.add_constant(x1)).fit()
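Not part of the original answer, but since the question also asks for the loss values: assuming the L1.5 loss means the sum of |residual|^1.5, all three can be read off the fitted residuals, e.g. for the formula fit above:
import numpy as np

# res is the fitted OLSResults object from smf.ols("y ~ x1", data).fit()
resid = res.resid
print("squared error loss:", np.sum(resid**2))
print("absolute error loss:", np.sum(np.abs(resid)))
print("L1.5 loss:", np.sum(np.abs(resid)**1.5))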
You can use sklearn's LinearRegression.
For the model without an intercept (i.e. forcing the fit through the origin), simply set the parameter fit_intercept=False.
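A minimal sketch of both variants with sklearn (my addition, using the same toy data as above):
import numpy as np
from sklearn.linear_model import LinearRegression

x1 = np.array([0, 1, 2, 3, 4]).reshape(-1, 1)  # sklearn expects a 2-D feature matrix
y = np.array([1, 2, 3, 2, 1])

model2 = LinearRegression().fit(x1, y)                     # with intercept: y = b0 + b1*x1
model1 = LinearRegression(fit_intercept=False).fit(x1, y)  # through the origin: y = b1*x1

print(model2.intercept_, model2.coef_)  # ~1.8 and ~0.0
print(model1.coef_)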

stan number of effective sample size

I reproduced the results of a hierarchical model fit with the rethinking package using rstan() directly, and I am curious why n_eff is not closer between the two.
Here is the model with random intercepts for 2 groups (intercept_x2) using the rethinking package:
Code:
response = c(rnorm(500, 0, 1), rnorm(500, 200, 10))
predictor1_continuous = rnorm(1000)
predictor2_categorical = factor(c(rep("A", 500), rep("B", 500)))
data = data.frame(y = response, x1 = predictor1_continuous, x2 = predictor2_categorical)
head(data)
library(rethinking)
m22 <- map2stan(
    alist(
        y ~ dnorm( mu , sigma ),
        mu <- intercept + intercept_x2[x2] + beta*x1,
        intercept ~ dnorm(0, 10),
        intercept_x2[x2] ~ dnorm(0, sigma_2),
        beta ~ dnorm(0, 10),
        sigma ~ dnorm(0, 10),
        sigma_2 ~ dnorm(0, 10)
    ),
    data = data, chains = 1, iter = 5000, warmup = 500)
precis(m22, depth = 2)
                  Mean StdDev lower 0.89 upper 0.89 n_eff Rhat
intercept         9.96   9.59      -5.14      25.84  1368    1
intercept_x2[1]  -9.94   9.59     -25.55       5.43  1371    1
intercept_x2[2] 189.68   9.59     173.28     204.26  1368    1
beta              0.06   0.22      -0.27       0.42  3458    1
sigma             6.94   0.16       6.70       7.20  2927    1
sigma_2          43.16   5.01      35.33      51.19  2757    1
Now here is the same model in rstan():
# create a numeric vector to indicate the categorical groups
data$GROUP_ID = match( data$x2, levels( data$x2 ) )
library(rstan)
standat <- list(
    N = nrow(data),
    y = data$y,
    x1 = data$x1,
    GROUP_ID = data$GROUP_ID,
    nGROUPS = 2
)
stanmodelcode = '
data {
    int<lower=1> N;
    int nGROUPS;
    real y[N];
    real x1[N];
    int<lower=1, upper=nGROUPS> GROUP_ID[N];
}

transformed data {
}

parameters {
    real intercept;
    vector[nGROUPS] intercept_x2;
    real beta;
    real<lower=0> sigma;
    real<lower=0> sigma_2;
}

transformed parameters {  // none needed
}

model {
    real mu;

    // priors
    intercept ~ normal(0, 10);
    intercept_x2 ~ normal(0, sigma_2);
    beta ~ normal(0, 10);
    sigma ~ normal(0, 10);
    sigma_2 ~ normal(0, 10);

    // likelihood
    for(i in 1:N){
        mu = intercept + intercept_x2[ GROUP_ID[i] ] + beta*x1[i];
        y[i] ~ normal(mu, sigma);
    }
}
'
fit22 = stan(model_code=stanmodelcode, data=standat, iter=5000, warmup=500, chains = 1)
fit22
Inference for Stan model: b212ebc67c08c77926c59693aa719288.
1 chains, each with iter=5000; warmup=500; thin=1;
post-warmup draws per chain=4500, total post-warmup draws=4500.
                    mean se_mean   sd     2.5%      25%      50%      75%    97.5% n_eff Rhat
intercept          10.14    0.30 9.72    -8.42     3.56    10.21    16.71    29.19  1060    1
intercept_x2[1]   -10.12    0.30 9.73   -29.09   -16.70   -10.25    -3.50     8.36  1059    1
intercept_x2[2]   189.50    0.30 9.72   170.40   182.98   189.42   196.09   208.05  1063    1
beta                0.05    0.00 0.21    -0.37    -0.10     0.05     0.20     0.47  3114    1
sigma               6.94    0.00 0.15     6.65     6.84     6.94     7.05     7.25  3432    1
sigma_2            43.14    0.09 4.88    34.38    39.71    42.84    46.36    53.26  3248    1
lp__            -2459.75    0.05 1.71 -2463.99 -2460.68 -2459.45 -2458.49 -2457.40  1334    1
Samples were drawn using NUTS(diag_e) at Thu Aug 31 15:53:09 2017.
For each parameter, n_eff is a crude measure of effective sample size,
and Rhat is the potential scale reduction factor on split chains (at
convergence, Rhat=1).
My questions:
1. The n_eff is larger using rethinking(). There are simulation differences, but do you think something else is going on here?
2. Besides n_eff being different, the percentiles of the posterior distributions are different. I was thinking rethinking() and rstan() should return similar results with 5000 iterations, since rethinking is just calling rstan. Are differences like that normal, or is something different between the two implementations?
3. I created data$GROUP_ID to indicate the categorical groupings. Is this the correct way to incorporate categorical variables into a hierarchical model in rstan()? I have 2 groups here; if I had 50 groups, would I use the same kind of data$GROUP_ID vector, and is that the standard way?
Thank you.

Understanding the Jacobian output of scipy.optimize.minimize

I'm working with scipy.optimize.minimize to find the minimum of the RSS for a custom nonlinear function. I'll provide a simple linear example to illustrate what I am doing:
import numpy as np
from scipy import optimize
def response(X, b0, b1, b2):
    return b2 * X[1]**2 + b1 * X[0] + b0

def obj_rss(model_params, y_true, X):
    return np.sum((y_true - response(X, *model_params))**2)
x = np.array([np.arange(0, 10), np.arange(10, 20)])
r = 15. * x[1]**2 - 32. * x[0] + 10.
init_guess = np.array([0., 50., 10.])
res = optimize.minimize(obj_rss, init_guess, args=(r, x))
print(res)
This yields the results:
fun: 3.0218799331864133e-08
hess_inv: array([[ 7.50606278e+00, 2.38939463e+00, -8.33333575e-02],
[ 2.38939463e+00, 8.02462363e-01, -2.74621294e-02],
[ -8.33333575e-02, -2.74621294e-02, 9.46969972e-04]])
jac: array([ -3.31359843e-07, -5.42022462e-08, 2.34304025e-08])
message: 'Optimization terminated successfully.'
nfev: 45
nit: 6
njev: 9
status: 0
success: True
x: array([ 10.00066577, -31.99978062, 14.99999243])
And we see that the fitted parameters 10, -32, and 15 are equal to those used to generate the actual data. That's great. Now my question:
I have the understanding that the Jacobian should be an m x n matrix where m is the number of records from the X input and n is the number of parameters. Clearly I don't have that in the results object. The results object yields an array that is referred to as the Jacobian in the documentation (1 and 2), but is only one-dimensional with a number of elements equal to the number of parameters.
Further confusing the matter, when I use method='SLSQP', the Jacobian that is returned has one more element than that returned by other minimization algorithms.
. . .
My larger goal here is to be able to calculate either confidence intervals or standard errors, t-, and p-values for the fitted parameters, so if you think I'm way off track here, please let me know.
EDIT:
The following is intended to show how the SLSQP minimization algorithm yields a different Jacobian than the default minimization algorithm, which is one of BFGS, L-BFGS-B, or SLSQP, depending on whether the problem has constraints or bounds (as mentioned in the documentation). The SLSQP solver is intended for use with constraints.
import numpy as np
from scipy import optimize
def response(X, b0, b1, b2):
    return b2 * X[1]**2 + b1 * X[0] + b0

def obj_rss(model_params, y_true, X):
    return np.sum((y_true - response(X, *model_params))**2)
x = np.array([np.arange(0, 10), np.arange(10, 20)])
r = 15. * x[1]**2 - 32. * x[0] + 10.
init_guess = np.array([0., 50., 10.])
res = optimize.minimize(obj_rss, init_guess, method='SLSQP', args=(r, x))
print(res)
r_pred = response(x, *res.x)
Yields results:
fun: 7.5269461938291697e-10
jac: array([ 2.94677643e-05, 5.52844499e-04, 2.59870917e-02,
0.00000000e+00])
message: 'Optimization terminated successfully.'
nfev: 58
nit: 10
njev: 10
status: 0
success: True
x: array([ 10.00004495, -31.9999794 , 14.99999938])
One can see that there is an extra element in the Jacobian array returned by the SLSQP solver. I am confused about where this comes from.

scipy optimize minimize: hess_inv strongly depends on initial guess

I am using scipy.optimize.minimize to minimize a simple log likelihood function. The Hessian matrix doesn't seem to behave well.
import numpy as np
import scipy.optimize as op

def lnlike(theta, n, bhat, fhat, sigb, sigf):
    S, b, f = theta
    mu = f*S + b
    scb2 = ((b-bhat)/sigb)**2
    scf2 = ((f-fhat)/sigf)**2
    return n*np.log(mu) - mu - 0.5*(scb2+scf2)
nll = lambda *args: -lnlike(*args)
myargs=(21.0, 20.0, 0.5, 6.0, 0.1)
If the initial guess is at the minimum, the iteration doesn't go anywhere. That is fine in terms of the parameter values, but it doesn't touch the Hessian either (hess_inv stays the identity), so I cannot use it for uncertainty estimation.
x0 = [2.0, 20.0, 0.5] # initial guess is at the minimum
result = op.minimize(nll, x0, args= myargs)
print(result)
status: 0
success: True
njev: 1
nfev: 5
hess_inv: array([[1, 0, 0],
[0, 1, 0],
[0, 0, 1]])
fun: -42.934971192191881
x: array([ 2. , 20. , 0.5])
message: 'Optimization terminated successfully.'
jac: array([ 0.00000000e+00, 0.00000000e+00, 9.53674316e-07])
If I change the initial guess a little bit, it seems to return a sensible hess_inv.
x0 = [2.01, 20.0, 0.5]
result = op.minimize(nll, x0, args= myargs)
print(result)
print(np.sqrt(result.hess_inv[0, 0]))
status: 0
success: True
njev: 15
nfev: 75
hess_inv: array([[ 2.16004477e+02, -7.60588367e+01, -2.94846112e-02],
[ -7.60588367e+01, 3.55748024e+01, 2.74064505e-03],
[ -2.94846112e-02, 2.74064505e-03, 9.98030944e-03]])
fun: -42.934971191969964
x: array([ 1.99984604, 19.9999814 , 0.5000001 ])
message: 'Optimization terminated successfully.'
jac: array([ -2.38418579e-06, -5.24520874e-06, 1.90734863e-06])
14.697090757
However, hess_inv is very sensitive to the initial guess.
x0 = [2.02, 20.0, 0.5]
result = op.minimize(nll, x0, args= myargs)
print(result)
print(np.sqrt(result.hess_inv[0, 0]))
status: 0
success: True
njev: 16
nfev: 80
hess_inv: array([[ 1.82153214e+02, -6.03482772e+01, -2.97458789e-02],
[ -6.03482772e+01, 3.30771459e+01, -2.53811809e-03],
[ -2.97458789e-02, -2.53811809e-03, 9.99052952e-03]])
fun: -42.934971192188634
x: array([ 1.9999702 , 20.00000354, 0.50000001])
message: 'Optimization terminated successfully.'
jac: array([ -9.53674316e-07, -4.76837158e-07, -4.76837158e-07])
13.4964148462
Change the initial guess a bit more
x0 = [2.03, 20.0, 0.5]
result = op.minimize(nll, x0, args= myargs)
print(result)
print(np.sqrt(result.hess_inv[0, 0]))
status: 0
success: True
njev: 14
nfev: 70
hess_inv: array([[ 2.30479371e+02, -7.36087027e+01, -3.79639119e-02],
[ -7.36087027e+01, 3.55785937e+01, 3.54182478e-03],
[ -3.79639119e-02, 3.54182478e-03, 9.97664441e-03]])
fun: -42.93497119204827
x: array([ 1.99975148, 20.00006366, 0.50000009])
message: 'Optimization terminated successfully.'
jac: array([ -9.53674316e-07, -9.53674316e-07, 4.29153442e-06])
15.1815470484
Did I miss something? Is this a bug or a feature?
The way I understand the optimizers, the Hessian is approximated by finite differences. In your case, that does not seem to be the best idea. Perhaps utilizing SymPy (in IPython) will produce more usable results:
import sympy as sy
import numpy as np
import scipy.optimize as sopt
from IPython.display import display # nice printing
sy.init_printing() # LaTeX like printing for IPython
def lnlike(theta, n, bhat, fhat, sigb, sigf):
    S, b, f = theta
    mu = f*S + b
    scb2 = ((b-bhat)/sigb)**2
    scf2 = ((f-fhat)/sigf)**2
    return n*sy.log(mu) - mu - (scb2+scf2) / 2

# declare symbols:
th_S, th_b, th_f = sy.symbols("theta_S, theta_b, theta_f", real=True)
theta = (th_S, th_b, th_f)
n, bhat, fhat = sy.symbols(r"n, \hat{b}, \hat{f}", real=True)
sigb, sigf = sy.symbols("sigma_b, sigma_f", real=True)
# symbolic optimizaton function:
lf = -lnlike(theta, n, bhat, fhat, sigb, sigf)
# Gradient:
dlf = sy.Matrix([lf.diff(th) for th in theta])
# Hessian:
Hlf = sy.Matrix([dlf.T.diff(th) for th in theta])
print("Symbolic Hessian:")
display(Hlf)
# Make numpy functions:
margs = {n:21, bhat:20, fhat:.5, sigb:6, sigf:.1} # parameters
lf_a, dlf_a, Hlf_a = lf.subs(margs), dlf.subs(margs), Hlf.subs(margs)
lf_lam = sy.lambdify(theta, lf_a, modules="numpy")
dlf_lam = sy.lambdify(theta, dlf_a, modules="numpy")
Hlf_lam = sy.lambdify(theta, Hlf_a, modules="numpy")
nlf = lambda xx: np.array(lf_lam(xx[0], xx[1], xx[2])) # function
ndlf = lambda xx: np.array(dlf_lam(xx[0], xx[1], xx[2])).flatten() # gradient
nHlf = lambda xx: np.array(Hlf_lam(xx[0], xx[1], xx[2])) # Hessian
x0 = [2.02, 20.0, 0.5]
rs = sopt.minimize(nlf, x0, jac=ndlf, hess=nHlf, method='Newton-CG')
print(rs)
print("Hessian:")
print(nHlf(rs.x))
If you're using a quasi-Newton method, which from the documentation it appears you are:
Quasi-Newton methods build up a guess at the Hessian inverse by applying a sequence of low-rank updates to a completely naive guess (typically a multiple of the identity). The low-rank updates used are in some sense the "least-change" updates that make a given equation hold, and the meaning of "least-change" varies with the quasi-Newton method chosen. If you start at, or very close to, the minimiser, the optimiser will figure this out very quickly and it won't build up much information in its approximation to the Hessian inverse.
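For intuition (my own sketch, not part of the answer above), this is the standard BFGS rank-two update of the inverse-Hessian approximation; if the optimiser starts at (or numerically indistinguishable from) the minimiser, it takes essentially no steps, so no such updates are applied and hess_inv stays at the naive identity guess:
import numpy as np

def bfgs_inverse_update(H, s, y):
    # H: current inverse-Hessian approximation
    # s: step taken (x_new - x_old), y: change in gradient (g_new - g_old)
    rho = 1.0 / (y @ s)
    I = np.eye(len(s))
    return (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) + rho * np.outer(s, s)

# one illustrative update starting from the naive identity guess
H0 = np.eye(3)
s = np.array([1e-2, 0.0, 0.0])   # hypothetical step
y = np.array([5e-3, 1e-3, 0.0])  # hypothetical gradient change
print(bfgs_inverse_update(H0, s, y))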

Chi squared test

I have written code in MATLAB for a chi-squared test. I wish to obtain p-values like 0.897 or 0.287 and so on, but my results are far too small. Below is my code:
pd = fitdist(sample, 'weibull');
[h,p,st] = chi2gof(sample,'CDF',pd)
I've also tried using the AD test, with similar results:
dist = makedist('Weibull', 'a',A, 'b',B);
[h,p,ad,cv] = adtest(sample, 'Distribution',dist)
Below is a histogram of the data with a fitted Weibull density function (Weibull parameters are A=4.0420 and B=2.0853)
When the p-value is less than a predetermined significance level (default 5%, i.e. 0.05), the null hypothesis is rejected (which in your case means that the sample did not come from a Weibull distribution).
The first output of the chi2gof function, h, denotes the test result, where h=1 means that the test rejects the null hypothesis at the specified significance level.
Example:
sample = rand(1000,1); % sample from Uniform distribution
pd = fitdist(sample, 'weibull');
[h,p,st] = chi2gof(sample, 'CDF',pd, 'Alpha',0.05)
The test clearly rejects H0, and concludes that the data did not come from a Weibull distribution:
h =
1 % 1: H1 (alternate hypo), 0: H0 (null hypo)
p =
2.8597e-27 % note that p << 0.05
st =
chi2stat: 141.1922
df: 7
edges: [0.0041 0.1035 0.2029 0.3023 0.4017 0.5011 0.6005 0.6999 0.7993 0.8987 0.9981]
O: [95 92 92 97 107 110 102 95 116 94]
E: [53.4103 105.6778 130.7911 136.7777 129.1428 113.1017 93.1844 72.8444 54.3360 110.7338]
Next let's try that again with a conforming sample:
>> sample = wblrnd(0.5, 2, [1000,1]); % sample from a Weibull distribution
>> pd = fitdist(sample, 'weibull')
pd =
WeibullDistribution
Weibull distribution
A = 0.496413 [0.481027, 0.512292]
B = 2.07314 [1.97524, 2.17589]
>> [h,p] = chi2gof(sample, 'CDF',pd, 'Alpha',0.05)
h =
0
p =
0.7340
The test now clearly passes with a high p-value.
EDIT:
Looking at the histogram you've shown, it does look like the data follows a Weibull distribution, although there might be some outliers (look at the right side of the histogram), which might explain why you are getting bad p-values. Consider preprocessing your data to handle extreme outliers.
Here is an example where I simulate outlier values:
% 5000 samples from a Weibull distribution
pd = makedist('Weibull', 'a',4.0420, 'b',2.0853);
sample = random(pd, [5000 1]);
%sample = wblrnd(4.0420, 2.0853, [5000 1]);
% add 20 outlier instances
sample(1:20) = [rand(10,1)+15; rand(10,1)+25];
% hypothesis tests using original distribution
[h,p,st] = chi2gof(sample, 'CDF',pd, 'Alpha',0.05)
[h,p,ad,cv] = adtest(sample, 'Distribution',pd)
% hypothesis tests using empirical distribution
[h,p,st] = chi2gof(sample, 'CDF',fitdist(sample,'Weibull'))
[h,p,ad,cv] = adtest(sample, 'Distribution', 'Weibull')
% show histogram
histfit(sample, 20, 'Weibull')
% chi-squared test
h =
1
p =
0.0382
st =
chi2stat: 8.4162
df: 3
edges: [0.1010 2.6835 5.2659 7.8483 25.9252]
O: [1741 2376 764 119]
E: [1.7332e+03 2.3857e+03 788.6020 92.5274]
% AD test
h =
1
p =
1.2000e-07
ad =
Inf
cv =
2.4924
The outliers are causing the distribution tests to fail (null hypothesis rejected). Still, I couldn't reproduce getting a NaN p-value (you might want to check this related question on Stats.SE about getting NaN p-values).