Understanding the Jacobian output of scipy.optimize.minimize - scipy

I'm working with scipy.optimize.minimize to find the minimum of the RSS for a custom nonlinear function. I'll provide a simple linear example to illustrate what I am doing:
import numpy as np
from scipy import optimize
def response(X, b0, b1, b2):
    return b2 * X[1]**2 + b1 * X[0] + b0

def obj_rss(model_params, y_true, X):
    return np.sum((y_true - response(X, *model_params))**2)
x = np.array([np.arange(0, 10), np.arange(10, 20)])
r = 15. * x[1]**2 - 32. * x[0] + 10.
init_guess = np.array([0., 50., 10.])
res = optimize.minimize(obj_rss, init_guess, args=(r, x))
print(res)
This yields the results:
fun: 3.0218799331864133e-08
hess_inv: array([[ 7.50606278e+00, 2.38939463e+00, -8.33333575e-02],
[ 2.38939463e+00, 8.02462363e-01, -2.74621294e-02],
[ -8.33333575e-02, -2.74621294e-02, 9.46969972e-04]])
jac: array([ -3.31359843e-07, -5.42022462e-08, 2.34304025e-08])
message: 'Optimization terminated successfully.'
nfev: 45
nit: 6
njev: 9
status: 0
success: True
x: array([ 10.00066577, -31.99978062, 14.99999243])
And we see that the fitted parameters 10, -32, and 15 are equivalent to those used to generate the actuals data. That's great. Now my question:
I have the understanding that the Jacobian should be an m x n matrix, where m is the number of records from the X input and n is the number of parameters. Clearly I don't have that in the results object. The results object yields an array that is referred to as the Jacobian in the documentation (1 and 2), but it is only one-dimensional, with a number of elements equal to the number of parameters.
Further confusing the matter, when I use method='SLSQP', the Jacobian that is returned has one more element than that returned by other minimization algorithms.
. . .
My larger goal here is to be able to calculate either confidence intervals or standard errors, t-, and p-values for the fitted parameters, so if you think I'm way off track here, please let me know.
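For reference, here is a rough sketch of the m x n object I have in mind (this is my own illustration, not anything returned by minimize): the Jacobian of the residuals built by central finite differences, together with the Gauss-Newton style standard errors I am ultimately after. It reuses response, res, r and x from the code above.
# Hypothetical illustration: build the m x n Jacobian of the residuals by
# central finite differences, then form Gauss-Newton standard errors.
def residuals(params, y_true, X):
    return y_true - response(X, *params)

def residual_jacobian(params, y_true, X, eps=1e-6):
    params = np.asarray(params, dtype=float)
    m = residuals(params, y_true, X).size
    J = np.empty((m, params.size))
    for j in range(params.size):
        step = np.zeros_like(params)
        step[j] = eps
        J[:, j] = (residuals(params + step, y_true, X)
                   - residuals(params - step, y_true, X)) / (2 * eps)
    return J

J = residual_jacobian(res.x, r, x)
print(J.shape)  # (10, 3): m records, n parameters
# s^2 = RSS / (m - n); standard errors from the diagonal of s^2 * (J^T J)^-1
s2 = res.fun / (J.shape[0] - J.shape[1])
se = np.sqrt(np.diag(s2 * np.linalg.inv(J.T @ J)))
print(se)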
EDIT:
The following is intended to show how the SLSQP minimization algorithm yields a different Jacobian than the default minimization algorithm, which is one of BFGS, L-BFGS-B, or SLSQP, depending on whether the problem has constraints or bounds (as mentioned in the documentation). The SLSQP solver is intended for use with constraints.
import numpy as np
from scipy import optimize
def response(X, b0, b1, b2):
    return b2 * X[1]**2 + b1 * X[0] + b0

def obj_rss(model_params, y_true, X):
    return np.sum((y_true - response(X, *model_params))**2)
x = np.array([np.arange(0, 10), np.arange(10, 20)])
r = 15. * x[1]**2 - 32. * x[0] + 10.
init_guess = np.array([0., 50., 10.])
res = optimize.minimize(obj_rss, init_guess, method='SLSQP', args=(r, x))
print(res)
r_pred = response(x, *res.x)
Yields results:
fun: 7.5269461938291697e-10
jac: array([ 2.94677643e-05, 5.52844499e-04, 2.59870917e-02,
0.00000000e+00])
message: 'Optimization terminated successfully.'
nfev: 58
nit: 10
njev: 10
status: 0
success: True
x: array([ 10.00004495, -31.9999794 , 14.99999938])
One can see that the Jacobian array returned from the SLSQP solver contains an extra element. I am confused about where this comes from.

Related

How to correctly set the 'rtol' and 'atol' in scipy integration module 'solve_ivp' for solving a system of ODE with unknown analytic solution?

I was trying to reproduce some results of the ode45 solver in Python using solve_ivp. Though all parameters, initial conditions, step size, and 'atol' and 'rtol' (which are 1e-6 and 1e-3) are the same, I am getting different solutions. Both solutions converge to a periodic solution, but of different kinds. As solve_ivp uses the same RK4(5) method as ode45, this discrepancy in the final result is not quite understandable. How can we know which one is the correct solution?
The code is included below
import sys
import numpy as np
from scipy.integrate import solve_ivp
#from scipy import integrate
import matplotlib.pyplot as plt
from matplotlib.patches import Circle
# Pendulum rod lengths (m), bob masses (kg).
L1, L2, mu, a1 = 1, 1, 1/5, 1
m1, m2, B = 1, 1, 0.1
# The gravitational acceleration (m.s-2).
g = 9.81
# The forcing frequency,forcing amplitude
w, a_m =10, 4.5
A=(a_m*w**2)/g
A1=a_m/g
def deriv(t, y, mu, a1, B, w, A): # beware of the order of the arguments
    """Return the first derivatives of y = theta1, z1, theta2, z2, z3."""
    a, c, b, d, e = y
    #c, s = np.cos(theta1-theta2), np.sin(theta1-theta2)
    adot = c
    cdot = (-(1-A*np.sin(e))*(((1+mu)*np.sin(a))-(mu*np.cos(a-b)*np.sin(b)))-((mu/a1)*((d**2)+(a1*np.cos(a-b)*c**2))*np.sin(a-b))-(2*B*(1+(np.sin(a-b))**2)*c)-((2*B*A/w)*(2*np.sin(a)-(np.cos(a-b)*np.sin(b)))*np.cos(e)))/(1+mu*(np.sin(a-b))**2)
    bdot = d
    ddot = ((-a1*(1+mu)*(1-A*np.sin(e))*(np.sin(b)-(np.cos(a-b)*np.sin(a))))+(((a1*(1+mu)*c**2)+(mu*np.cos(a-b)*d**2))*np.sin(a-b))-((2*B/mu)*(((1+mu*(np.sin(a-b))**2)*d)+(a1*(1-mu)*np.cos(a-b)*c)))-((2*B*a1*A/(w*mu))*(((1+mu)*np.sin(b))-(2*mu*np.cos(a-b)*np.sin(a)))*np.cos(e)))/(1+mu*(np.sin(a-b))**2)
    edot = w
    return adot, cdot, bdot, ddot, edot
# Initial conditions: theta1, dtheta1/dt, theta2, dtheta2/dt.
y0 = np.array([3.15, -0.1, 3.13, 0.1, 0])
# Do the numerical integration of the equations of motion
sol = solve_ivp(deriv, [0, 40000], y0, args=(mu, a1, B, w, A), method='RK45', t_eval=np.arange(0, 40000, 0.005), dense_output=True, rtol=1e-3, atol=1e-6)
T = sol.t
Y = sol.y
I am expecting a similar result from ode45 in MATLAB and solve_ivp in Python. How can I exactly reproduce the ode45 result in Python? What is the reason for the discrepancy?
Even if ode45 and RK45 use the same underlying scheme, they do not necessarily use the same strategy for evolving the time step and adapting it to match the error tolerance. Thus, it is difficult to know which one is better.
The only thing you can do is try lower tolerances, e.g. 1e-10. Then both solutions should end up being virtually identical. Here, your current error tolerances are probably not tight enough, so small discrepancies in the fine details of the two algorithms create a visible difference in the solution.
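A minimal sketch of that suggestion, reusing deriv, y0, sol and the parameters from the question above (this is only a consistency check between the loose and tight runs, not a fix for the underlying algorithmic differences):
# Re-run the integration with much tighter tolerances and compare against the
# original (looser) solution at the final time; if the two agree, the tight
# run is the one to trust.
sol_tight = solve_ivp(deriv, [0, 40000], y0, args=(mu, a1, B, w, A),
                      method='RK45', dense_output=True, rtol=1e-10, atol=1e-10)
print(np.abs(sol.sol(40000) - sol_tight.sol(40000)))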

Using scipy solve_bvp for a nonhomogeneous ODE

I am trying to solve the following 4th order BVP
y'''' = K - C*y
My x variable is a linspace with 100 nodes. As you can see, K is a vector of the same length (100) and makes the equation nonhomogeneous. When I run the solver, however, I get the following error:
Cell In [11], line 18, in fun(x, y)
17 def fun(x, y):
---> 18 ans = vector-np.multiply(C,y[0])
19 return np.vstack((y[1],y[2],y[3],ans))
ValueError: operands could not be broadcast together with shapes (100,) (99,)
Why does the solver suddenly change the length of y by 1 and how can I fix this error?
EDIT: I must add that the solver works fine when K is absent i.e. the equation is homogeneous.
from scipy.integrate import solve_bvp
import numpy as np
L = 10
nodes = 100
A = 1000
B = 1500
C = 0.05
x = np.linspace(0,L,nodes)
vector = np.ones(nodes)
def fun(x, y):
    ans = vector - np.multiply(C, y[0])
    return np.vstack((y[1], y[2], y[3], ans))

def bc(ya, yb):
    return np.array([ya[2], yb[2], ya[3]+A/B, yb[3]])
y_a = np.zeros((4, x.size))
res_a = solve_bvp(fun, bc, x, y_a)
res1 = res_a.sol(x)[0]
res2 = res_a.sol(x)[1]
res3 = B*res_a.sol(x)[2]
res4 = B*res_a.sol(x)[3]
The solver establishes in the first round a system of polynomial approximations over the nodes-1 = 99 segments of the first subdivision.
There is no guarantee that the subdivision remains unchanged in later solver rounds, so your ODE right-hand-side function has to work with arbitrary x arrays. This means that parameters given as a function table need to be interpolated onto the general x array. You can use numpy.interp for point-wise interpolation or scipy.interpolate.interp1d to construct an interpolation function.
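Applied to the code in the question, a minimal sketch of that interpolation (reusing x, vector and C defined above, with the original mesh x serving as the table's abscissa) could look like:
# Interpolate the tabulated right-hand side onto whatever x array solve_bvp
# passes in, instead of assuming it always has `nodes` entries.
def fun(x_eval, y):
    K_eval = np.interp(x_eval, x, vector)
    ans = K_eval - np.multiply(C, y[0])
    return np.vstack((y[1], y[2], y[3], ans))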

BVP4c solve for unknown boundary

I am trying to use bvp4c to solve a system of 4 odes. The issue is that one of the boundaries is unknown.
Can bvp4c handle this? In my code L is the unknown I am solving for.
I get an error message printed below.
function mat4bvp
L = 8;
solinit = bvpinit(linspace(0,L,100),@mat4init);
sol = bvp4c(@mat4ode,@mat4bc,solinit);
sint = linspace(0,L);
Sxint = deval(sol,sint);
end
% ------------------------------------------------------------
function dtdpdxdy = mat4ode(s,y,L)
Lambda = 0.3536;
dtdpdxdy = [y(2)
-sin(y(1)) + Lambda*(L-s)*cos(y(1))
cos(y(1))
sin(y(1))];
end
% ------------------------------------------------------------
function res = mat4bc(ya,yb,L)
res = [ ya(1)
ya(2)
ya(3)
ya(4)
yb(1)];
end
% ------------------------------------------------------------
function yinit = mat4init(s)
yinit = [ cos(s)
0
0
0
];
end
Unfortunately, I get the following error message:
>> mat4bvp
Not enough input arguments.
Error in mat4bvp>mat4ode (line 13)
-sin(y(1)) + Lambda*(L-s)*cos(y(1))
Error in bvparguments (line 105)
testODE = ode(x1,y1,odeExtras{:});
Error in bvp4c (line 130)
bvparguments(solver_name,ode,bc,solinit,options,varargin);
Error in mat4bvp (line 4)
sol = bvp4c(@mat4ode,@mat4bc,solinit);
One trick to transform a variable end point into a fixed one is to change the time scale. If x'(t)=f(t,x(t)) is the differential equation, set t=L*s, s from 0 to 1, and compute the associated differential equation for y(s)=x(L*s)
y'(s)=L*x'(L*s)=L*f(L*s,y(s))
The next trick to employ is to turn the global variable L into part of the differential equation by treating it as a constant function. So the new system is
[ y'(s), L'(s) ] = [ L(s)*f(L(s)*s,y(s)), 0 ]
and the value of L appears as an additional free boundary value (left or right), increasing the number of variables, i.e. the dimension of the state vector, to match the number of boundary conditions.
I do not have Matlab readily available, in Python with the tools in scipy this can be implemented as
from math import sin, cos
import numpy as np
from scipy.integrate import solve_bvp, odeint
import matplotlib.pyplot as plt
# The original function with the interval length as parameter
def fun0(t, y, L):
    Lambda = 0.3536
    # print(t, y, L)
    return np.array([ y[1], -np.sin(y[0]) + Lambda*(L-t)*np.cos(y[0]), np.cos(y[0]), np.sin(y[0]) ])
# Wrapper function to apply both tricks to transform variable interval length to a fixed interval.
def fun1(s, y):
    L = y[-1]
    dydt = np.zeros_like(y)
    dydt[:-1] = L*fun0(L*s, y[:-1], L)
    return dydt
# Implement evaluation of the boundary condition residuals:
def bc(ya, yb):
    return [ ya[0], ya[1], ya[2], ya[3], yb[0] ]
# Define the initial mesh with 3 nodes:
x = np.linspace(0, 1, 3)
# This problem has multiple solutions. Try two initial guesses.
L_a=8
L_b=9
y_a = odeint(lambda y,t: fun1(t,y), [0,0,0,0,L_a], x)
y_b = odeint(lambda y,t: fun1(t,y), [0,0,0,0,L_b], x)
# Now we are ready to run the solver.
res_a = solve_bvp(fun1, bc, x, y_a.T)
res_b = solve_bvp(fun1, bc, x, y_b.T)
L_a = res_a.sol(0)[-1]
L_b = res_b.sol(0)[-1]
print("L_a=%.8f, L_b=%.8f" % (L_a, L_b))
# Plot the two found solutions. The solution are in a spline form, use this to produce a smooth plot.
x_plot = np.linspace(0, 1, 100)
y_plot_a = res_a.sol(x_plot)[0]
y_plot_b = res_b.sol(x_plot)[0]
plt.plot(L_a*x_plot, y_plot_a, label='L=%.8f'%L_a)
plt.plot(L_b*x_plot, y_plot_b, label='L=%.8f'%L_b)
plt.legend()
plt.xlabel("t")
plt.ylabel("y")
plt.grid(); plt.show()
which produces a plot of the two solution curves y(t), labelled with the two recovered values of L.
Trying different initial values for L finds other solutions on quite different scales, among them
L=0.03195111
L=0.05256775
L=0.05846539
L=0.06888907
L=0.08231966
L=4.50411522
L=6.84868060
L=20.01725616
L=22.53189063

scipy optimize minimize: hess_inv strongly depends on initial guess

I am using scipy.optimize.minimize to minimize a simple log-likelihood function. The inverse Hessian (hess_inv) it returns doesn't seem to behave well.
import numpy as np
import scipy.optimize as op

def lnlike(theta, n, bhat, fhat, sigb, sigf):
    S, b, f = theta
    mu = f*S + b
    scb2 = ((b-bhat)/sigb)**2
    scf2 = ((f-fhat)/sigf)**2
    return n*np.log(mu) - mu - 0.5*(scb2+scf2)
nll = lambda *args: -lnlike(*args)
myargs=(21.0, 20.0, 0.5, 6.0, 0.1)
If the initial guess is at the minimum, the iteration doesn't go anywhere. That is fine in terms of the parameter values, but it doesn't touch the Hessian either (it stays the identity), so I cannot use it for uncertainty estimation.
x0 = [2.0, 20.0, 0.5] # initial guess is at the minimum
result = op.minimize(nll, x0, args= myargs)
print(result)
status: 0
success: True
njev: 1
nfev: 5
hess_inv: array([[1, 0, 0],
[0, 1, 0],
[0, 0, 1]])
fun: -42.934971192191881
x: array([ 2. , 20. , 0.5])
message: 'Optimization terminated successfully.'
jac: array([ 0.00000000e+00, 0.00000000e+00, 9.53674316e-07])
If I change the initial guess a little bit, it seems to return a sensible hess_inv.
x0 = [2.01, 20.0, 0.5]
result = op.minimize(nll, x0, args= myargs)
print(result)
print(np.sqrt(result.hess_inv[0,0]))
status: 0
success: True
njev: 15
nfev: 75
hess_inv: array([[ 2.16004477e+02, -7.60588367e+01, -2.94846112e-02],
[ -7.60588367e+01, 3.55748024e+01, 2.74064505e-03],
[ -2.94846112e-02, 2.74064505e-03, 9.98030944e-03]])
fun: -42.934971191969964
x: array([ 1.99984604, 19.9999814 , 0.5000001 ])
message: 'Optimization terminated successfully.'
jac: array([ -2.38418579e-06, -5.24520874e-06, 1.90734863e-06])
14.697090757
However, hess_inv is very sensitive to the initial guess.
x0 = [2.02, 20.0, 0.5]
result = op.minimize(nll, x0, args= myargs)
print(result)
print(np.sqrt(result.hess_inv[0,0]))
status: 0
success: True
njev: 16
nfev: 80
hess_inv: array([[ 1.82153214e+02, -6.03482772e+01, -2.97458789e-02],
[ -6.03482772e+01, 3.30771459e+01, -2.53811809e-03],
[ -2.97458789e-02, -2.53811809e-03, 9.99052952e-03]])
fun: -42.934971192188634
x: array([ 1.9999702 , 20.00000354, 0.50000001])
message: 'Optimization terminated successfully.'
jac: array([ -9.53674316e-07, -4.76837158e-07, -4.76837158e-07])
13.4964148462
Change the initial guess a bit more
x0 = [2.03, 20.0, 0.5]
result = op.minimize(nll, x0, args= myargs)
print(result)
print(np.sqrt(result.hess_inv[0,0]))
status: 0
success: True
njev: 14
nfev: 70
hess_inv: array([[ 2.30479371e+02, -7.36087027e+01, -3.79639119e-02],
[ -7.36087027e+01, 3.55785937e+01, 3.54182478e-03],
[ -3.79639119e-02, 3.54182478e-03, 9.97664441e-03]])
fun: -42.93497119204827
x: array([ 1.99975148, 20.00006366, 0.50000009])
message: 'Optimization terminated successfully.'
jac: array([ -9.53674316e-07, -9.53674316e-07, 4.29153442e-06])
15.1815470484
Did I miss something? Is this a bug or a feature?
The way I understand the optimizers, the Hessian is approximated by finite differences. In your case, that does not seem to be the best idea. Perhaps utilizing Sympy (in IPython) will produce more usable results:
import sympy as sy
import numpy as np
import scipy.optimize as sopt
from IPython.display import display # nice printing
sy.init_printing() # LaTeX like printing for IPython
def lnlike(theta, n, bhat, fhat, sigb, sigf):
    S, b, f = theta
    mu = f*S + b
    scb2 = ((b-bhat)/sigb)**2
    scf2 = ((f-fhat)/sigf)**2
    return n*sy.log(mu) - mu - (scb2+scf2) / 2
# declare symbols:
th_S, th_b, th_f = sy.symbols("theta_S, theta_b, theta_f", real=True)
theta = (th_S, th_b, th_f)
n, bhat, fhat = sy.symbols("n, \hat{b}, \hat{f}", real=True )
sigb, sigf = sy.symbols("sigma_b, sigma_f", real=True)
# symbolic optimization function:
lf = -lnlike(theta, n, bhat, fhat, sigb, sigf)
# Gradient:
dlf = sy.Matrix([lf.diff(th) for th in theta])
# Hessian:
Hlf = sy.Matrix([dlf.T.diff(th) for th in theta])
print("Symbolic Hessian:")
display(Hlf)
# Make numpy functions:
margs = {n:21, bhat:20, fhat:.5, sigb:6, sigf:.1} # parameters
lf_a, dlf_a, Hlf_a = lf.subs(margs), dlf.subs(margs), Hlf.subs(margs)
lf_lam = sy.lambdify(theta, lf_a, modules="numpy")
dlf_lam = sy.lambdify(theta, dlf_a, modules="numpy")
Hlf_lam = sy.lambdify(theta, Hlf_a, modules="numpy")
nlf = lambda xx: np.array(lf_lam(xx[0], xx[1], xx[2])) # function
ndlf = lambda xx: np.array(dlf_lam(xx[0], xx[1], xx[2])).flatten() # gradient
nHlf = lambda xx: np.array(Hlf_lam(xx[0], xx[1], xx[2])) # Hessian
x0 = [2.02, 20.0, 0.5]
rs = sopt.minimize(nlf, x0, jac=ndlf, hess=nHlf, method='Newton-CG')
print(rs)
print("Hessian:")
print(nHlf(rs.x))
If you're using a quasi-Newton method, which from the documentation it appears you are:
Quasi-Newton methods build up a guess at the Hessian inverse by applying a sequence of low-rank updates to a completely naive guess (typically a multiple of the identity). The low-rank updates used are in some sense the "least-change" updates that make a given equation hold, and the meaning of "least-change" varies with the quasi-Newton method chosen. If you start at, or very close to, the minimiser, the optimiser will figure this out very quickly and it won't build up much information in its approximation to the Hessian inverse.
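If all you need is the curvature at the solution, one workaround (a rough sketch of my own, not part of either answer above; it reuses nll, myargs and result from the question) is to estimate the Hessian at result.x by central finite differences and invert that, instead of relying on the BFGS approximation:
import numpy as np

def numerical_hessian(f, x, args=(), eps=1e-4):
    # Central finite-difference Hessian of a scalar function f at x.
    x = np.asarray(x, dtype=float)
    n = x.size
    H = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = eps
            ej = np.zeros(n); ej[j] = eps
            H[i, j] = (f(x + ei + ej, *args) - f(x + ei - ej, *args)
                       - f(x - ei + ej, *args) + f(x - ei - ej, *args)) / (4 * eps**2)
    return H

H = numerical_hessian(nll, result.x, args=myargs)
print(np.sqrt(np.diag(np.linalg.inv(H))))  # parameter uncertainties from the inverse Hessian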

How to estimate goodness-of-fit using scipy.odr?

I am fitting data with weights using scipy.odr but I don't know how to obtain a measure of goodness-of-fit or an R squared. Does anyone have suggestions for how to obtain this measure using the output stored by the function?
The res_var attribute of the Output is the so-called reduced Chi-square value for the fit, a popular choice of goodness-of-fit statistic. It is somewhat problematic for non-linear fitting, though. You can look at the residuals directly (out.delta for the X residuals and out.eps for the Y residuals). Implementing a cross-validation or bootstrap method for determining goodness-of-fit, as suggested in the linked paper, is left as an exercise for the reader.
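For concreteness, a minimal sketch of pulling those quantities out (assuming `out` is the Output object returned by an ODR(...).run() call):
import numpy as np

print(out.res_var)                      # reduced chi-square of the fit
print(np.sqrt(np.mean(out.delta**2)))   # RMS of the X residuals
print(np.sqrt(np.mean(out.eps**2)))     # RMS of the Y residuals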
The output of ODR gives both the estimated parameters beta as well as the standard deviation of those parameters sd_beta. Following p. 76 of the ODRPACK documentation, you can convert these values into a t-statistic with (beta - beta_0) / sd_beta, where beta_0 is the number that you're testing significance with respect to (often zero). From there, you can use the t-distribution to get the p-value.
Here's a working example:
import numpy as np
from scipy import stats, odr
def linear_func(B, x):
    """
    From https://docs.scipy.org/doc/scipy/reference/odr.html
    Linear function y = m*x + b
    """
    # B is a vector of the parameters.
    # x is an array of the current x values.
    # x is in the same format as the x passed to Data or RealData.
    #
    # Return an array in the same format as y passed to Data or RealData.
    return B[0] * x + B[1]
np.random.seed(0)
sigma_x = .1
sigma_y = .15
N = 100
x_star = np.linspace(0, 10, N)
x = np.random.normal(x_star, sigma_x, N)
# the true underlying function is y = 2*x_star + 1
y = np.random.normal(2*x_star + 1, sigma_y, N)
linear = odr.Model(linear_func)
dat = odr.Data(x, y, wd=1./sigma_x**2, we=1./sigma_y**2)
this_odr = odr.ODR(dat, linear, beta0=[1., 0.])
odr_out = this_odr.run()
# degrees of freedom are n_samples - n_parameters
df = N - 2 # equivalently, df = odr_out.iwork[10]
beta_0 = 0 # test if slope is significantly different from zero
t_stat = (odr_out.beta[0] - beta_0) / odr_out.sd_beta[0] # t statistic for the slope parameter
p_val = stats.t.sf(np.abs(t_stat), df) * 2
print('Recovered equation: y={:3.2f}x + {:3.2f}, t={:3.2f}, p={:.2e}'.format(odr_out.beta[0], odr_out.beta[1], t_stat, p_val))
Recovered equation: y=2.00x + 1.01, t=239.63, p=1.76e-137
One note of caution in using this approach on nonlinear problems, from the same ODRPACK docs:
"Note that for nonlinear ordinary least squares, the linearized confidence regions and intervals are asymptotically correct as n → ∞ [Jennrich, 1969]. For the orthogonal distance regression problem, they have been shown to be asymptotically correct as σ∗ → 0 [Fuller, 1987]. The difference between the conditions of asymptotic correctness can be explained by the fact that, as the number of observations increases in the orthogonal distance regression problem one does not obtain additional information for ∆. Note also that Vˆ is dependent upon the weight matrix Ω, which must be assumed to be correct, and cannot be confirmed from the orthogonal distance regression results. Errors in the values of wǫi and wδi that form Ω will have an adverse affect on the accuracy of Vˆ and its component parts. The results of a Monte Carlo experiment examining the accuracy
of the linearized confidence intervals for four different measurement error models is presented in [Boggs and Rogers, 1990b]. Those results indicate that the confidence regions and intervals for ∆ are not as accurate as those for β.
Despite its potential inaccuracy, the covariance matrix is frequently used to construct confidence regions and intervals for both nonlinear ordinary least squares and measurement error models because the resulting regions and intervals are inexpensive to compute, often adequate, and familiar to practitioners. Caution must be exercised when using such regions and intervals, however, since the validity of the approximation will depend on the nonlinearity of the model, the variance and distribution of the errors, and the data itself. When more reliable intervals and regions are required, other more accurate methods should be used. (See, e.g., [Bates and Watts, 1988], [Donaldson and Schnabel, 1987], and [Efron, 1985].)"
As mentioned by R. Ken, chi-square or the variance of the residuals is one of the more commonly used tests of goodness of fit. ODR stores the sum of squared residuals in out.sum_square, and you can verify yourself that out.res_var = out.sum_square/degrees_freedom corresponds to what is commonly called reduced chi-square: i.e. the chi-square test result divided by its expected value.
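A quick sketch of that identity, reusing the odr_out object and N from the linear example above (2 fitted parameters, hence N - 2 degrees of freedom):
df = N - 2
print(odr_out.res_var)           # reduced chi-square as reported by ODR
print(odr_out.sum_square / df)   # should reproduce the same value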
As for the other very popular estimator of goodness of fit in linear regression, R squared and its adjusted version, we can define the functions
import numpy as np
def R_squared(observed, predicted, uncertainty=1):
    """ Returns R square measure of goodness of fit for predicted model. """
    weight = 1./uncertainty
    return 1. - (np.var((observed - predicted)*weight) / np.var(observed*weight))

def adjusted_R(x, y, model, popt, unc=1):
    """
    Returns adjusted R squared test for optimal parameters popt calculated
    according to W-MN formula, other forms have different coefficients:
    Wherry/McNemar : (n - 1)/(n - p - 1)
    Wherry : (n - 1)/(n - p)
    Lord : (n + p - 1)/(n - p - 1)
    Stein : (n - 1)/(n - p - 1) * (n - 2)/(n - p - 2) * (n + 1)/n
    """
    # Assuming you have a model with ODR argument order f(beta, x)
    # otherwise if model is of the form f(x, a, b, c..) you could use
    # R = R_squared(y, model(x, *popt), uncertainty=unc)
    R = R_squared(y, model(popt, x), uncertainty=unc)
    n, p = len(y), len(popt)
    coefficient = (n - 1)/(n - p - 1)
    adj = 1 - (1 - R) * coefficient
    return adj, R
From the output of your ODR run you can find the optimal values for your model's parameters in out.beta and at this point we have everything we need for computing R squared.
from scipy import odr
def lin_model(beta, x):
    """
    Linear function y = m*x + q
    slope m, constant term/y-intercept q
    """
    return beta[0] * x + beta[1]
linear = odr.Model(lin_model)
data = odr.RealData(x, y, sx=sigma_x, sy=sigma_y)
init = odr.ODR(data, linear, beta0=[1, 1])
out = init.run()
adjusted_Rsq, Rsq = adjusted_R(x, y, lin_model, popt=out.beta)
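As a small follow-up, the two statistics can then be reported, for example:
print("R^2 = {:.4f}, adjusted R^2 = {:.4f}".format(Rsq, adjusted_Rsq))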