GLM.fit() in Matlab vs. Python Statsmodels: why the different results? - matlab

In what ways is Matlab's glmfit implemented differently than Python statsmodels' GLM.fit()?
Here is a comparison of their results on my dataset:
The figure (not reproduced here) compares the 209 fitted weights from each implementation, generated by running a GLM fit on:
V: (100000, 209) predictor variables (design matrix)
y: (100000, 1) response variable
Sum of squared errors between the two weight vectors: 18.140615678
A Specific Example
Why are these different? First, here's a specific example in Matlab:
yin = horzcat(y, ones(size(y)));
[weights_mat, d0, st0] = glmfit(V, yin, 'binomial', 'probit', 'off', [], [], 'off');
Let's try the equivalent in Python:
import numpy as np
import statsmodels.api as sm

## Set up the GLM
y = np.concatenate((y, np.ones([len(y), 1])), axis=1)
sm_probit_Link = sm.genmod.families.links.probit
glm_binom = sm.GLM(sm.add_constant(y), sm.add_constant(V_design_matrix), family=sm.families.Binomial(link=sm_probit_Link))
# statsmodels.GLM format: glm_binom = sm.GLM(data.endog, data.exog, family)

## Run the GLM fit
glm_result = glm_binom.fit()
weights_py = glm_result.params

## Compare the difference
weights_mat_import = Matpy.get_output('w_output.mat', 'weights_mat')  # imports the Matlab weights
print(SSE(weights_mat_import, weights_py))
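SSE and Matpy above are the poster's own helpers, not library functions; assuming SSE is simply the sum of squared differences, a minimal stand-in could be:

def SSE(w1, w2):
    # sum of squared differences between two weight vectors (assumed definition)
    return np.sum((np.asarray(w1) - np.asarray(w2))**2)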
Let's Check The Docs
glmfit in Matlab:
[b,dev,stats] = glmfit(X,y,distr)
GLM.fit() setup in Python (documentation):
glm_model = sm.GLM(endog, exog, family=None, offset=None, exposure=None, missing='none', **kwargs)
glm_model.fit(start_params=None, maxiter=100, method='IRLS', tol=1e-08, scale=None, cov_type='nonrobust', cov_kwds=None, use_t=None, **kwargs)
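Two differences stand out when comparing these calls. In the Matlab call the final 'off' argument turns the constant term off, while the Python code adds a constant to both endog and exog. Also, Matlab's two-column binomial response is [successes, trials], whereas statsmodels interprets a two-column endog as [successes, failures]. Here is a hedged sketch of a call that may come closer to the Matlab settings (assuming y is a binary response):

import numpy as np
import statsmodels.api as sm

# Sketch only: intended to mirror glmfit(V, yin, 'binomial', 'probit', 'off', [], [], 'off').
# Assumes y is the (100000, 1) binary response and V the (100000, 209) design matrix.
yin = np.concatenate((y, 1 - y), axis=1)  # [successes, failures], not [successes, trials]
probit_link = sm.families.links.probit()  # instantiate the link class
glm_binom = sm.GLM(yin, V, family=sm.families.Binomial(link=probit_link))  # no add_constant: constant is 'off'
weights_py = glm_binom.fit().params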
How might we get Matlab glmfit results with Statsmodels?
Thank you!

Related

Convert string data in HDF5 File to float Format

I need to convert string data from an HDF5 file to float format to use in an Astropy sky plot with l, b coordinates. The data is available here:
https://wwwmpa.mpa-garching.mpg.de/~ensslin/research/data/faraday2020.html
(Faraday Sky 2020)
The code I have written so far is:
from astropy import units as u
from astropy.coordinates import SkyCoord
import matplotlib.pyplot as plt
import numpy as np
import h5py

dat = []
ggl = []
ggb = []
f1 = h5py.File('/home/nikita/faraday_2020/faraday2020.hdf5', 'r')
data = f1.get('faraday_sky_mean')
faraday_sky_mean = np.array(data)
data1 = f1.get('faraday_sky_std')
faraday_sky_std = np.array(data1)
n1 = 0
for line in f1:
    s = line.split()
    dat.append(s)
    n1 = n1 + 1
#
for i in range(0, n1):
    ggl.append(float(dat[i][0]))  # galactic coordinates input
    ggb.append(float(dat[i][1]))
f1.close()
However I am getting the error:
ggl.append(float(dat[i][0])) # galactic coordinates input
ValueError: could not convert string to float: 'faraday_sky_mean'
Please help with this. Thanks.
What you asked for and what (I think) you need are two different things.
This line is NOT the way to read an HDF5 file: for line in f1:
You need to use an HDF5 API to read it (h5py is one of many).
I think you want to read the datasets faraday_sky_mean and faraday_sky_std and load the arrays into lists ggl and ggb. To do that, use the code below. It creates two lists with 3145728 float64 values each.
with h5py.File('faraday2020.hdf5', 'r') as hdf:
    print(list(hdf.keys()))
    faraday_sky_mean = hdf['faraday_sky_mean'][:]
    faraday_sky_std = hdf['faraday_sky_std'][:]
    print(faraday_sky_mean.shape, faraday_sky_mean.dtype)
    print(f'Max Mean={max(faraday_sky_mean)}, Min Mean={min(faraday_sky_mean)}')
    print(faraday_sky_std.shape, faraday_sky_std.dtype)
    print(f'Max StdDev={max(faraday_sky_std)}, Min StdDev={min(faraday_sky_std)}')
    ggl = faraday_sky_mean.tolist()
    print(len(ggl), type(ggl[0]))
    ggb = faraday_sky_std.tolist()
    print(len(ggb), type(ggb[0]))
The procedure above keeps the data as both NumPy arrays and Python lists. If you only need the lists (not the arrays), you can shorten the code as shown below:
with h5py.File('faraday2020.hdf5', 'r') as hdf:
    ggl = hdf['faraday_sky_mean'][:].tolist()
    print(len(ggl), type(ggl[0]))
    ggb = hdf['faraday_sky_std'][:].tolist()
    print(len(ggb), type(ggb[0]))

How to use a SymPy-generated Jacobian matrix in the solution of an ODE system?

I have a first-order ODE system composed of three differential equations. I want to solve it with scipy.integrate.solve_ivp's BDF method, so I need the Jacobian matrix of the system (which I computed with the help of SymPy).
If I didn't misunderstand, according to the scipy.integrate.solve_ivp documentation, you must supply the Jacobian in the form jac(t, u), where u holds the state variables of your ODE system. To this end I lambdified the Jacobian matrix accordingly.
My problem arises here. Although I can evaluate jac(t, u) at a single point (t, u), such as ((1/800), (150, 1E-6, 3)), I can't pass array arguments to my jac, and when I pass jac(t, u) as an argument to solve_ivp it raises an error. So how should I supply the Jacobian matrix? Or is my lambdify call wrong?
This is my code. I appreciate any help.
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

def cvs(t, u):
    u1, u2, u3 = u
    def Qmi(t):
        return t**2
    u1p = Qmi(t)*u3
    u2p = (u1**2)*np.cos(2*np.pi*200*t)
    u3p = (np.sin(2*np.pi*t))*u2**-1
    return [u1p, u2p, u3p]

def jac_func():
    ######### DEFINE THE ODE SYSTEM #########
    import sympy
    sympy.init_printing()
    t = sympy.symbols("t")
    Q_mi = sympy.Function("Q_mi")(t)
    u1 = sympy.Function("u1")(t)
    u2 = sympy.Function("u2")(t)
    u3 = sympy.Function("u3")(t)
    Q_mi = t**2
    u1p = (u3*Q_mi)
    u2p = (u1**2)*sympy.cos(2*sympy.pi*200*t)
    u3p = sympy.sin(2*sympy.pi*5*t)*u2**-1
    ####### CALCULATE THE JACOBIAN ########
    ode_rhs = sympy.Matrix([u1p, u2p, u3p])
    ode_var = sympy.Matrix([u1, u2, u3])
    jac = sympy.Matrix([[ode.diff(var) for var in ode_var] for ode in ode_rhs])
    u = (u1, u2, u3)
    jac_np = sympy.lambdify((t, u), jac, "numpy")
    return jac_np

jac_np = jac_func()
U_0 = [500, 20, 20]
t = np.linspace(0, 100, 10000)
solf = solve_ivp(cvs, (0, 100), y0=U_0, method='BDF', jac=jac_np(t, U_0), t_eval=t)
error message:
ValueError Traceback (most recent call last)
<ipython-input-1-8b86ffb3a7cf> in <module>()
41 t = np.linspace(0,100,10000)
42
---> 43 solf = solve_ivp(cvs,(0,100),y0=U_0,method = 'BDF',jac=jac_np(t,U_0),t_eval=t)
<lambdifygenerated-1> in _lambdifygenerated(t, _Dummy_188)
1 def _lambdifygenerated(t, _Dummy_188):
2 [_Dummy_185, _Dummy_186, _Dummy_187] = _Dummy_188
----> 3 return (array([[0, 0, t**2], [2*_Dummy_185*cos(400*pi*t), 0, 0], [0, -sin(10*pi*t)/_Dummy_186**2, 0]]))
ValueError: setting an array element with a sequence.
You are getting the error because you are doing exactly what the message says: passing an array where the procedure expects a single number. In
solf = solve_ivp(cvs, (0, 100), y0=U_0, method='BDF', jac=jac_np(t, U_0), t_eval=t)
you are passing the constant matrix jac_np(t, U_0) as the Jacobian argument. However, at that point t contains all the t values you want output samples at, so the generated function tries to build a matrix whose entries are a mix of arrays and scalars, which numpy rejects.
Long story short: remove the arguments and pass the Jacobian as a callable function, as you quite probably intended:
solf = solve_ivp(cvs, (0, 100), y0=U_0, method='BDF', jac=jac_np, t_eval=t)
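For context, a minimal sketch of the corrected call end to end (assuming the cvs and jac_func definitions from the question); solve_ivp then evaluates jac_np(t, y) itself, with a scalar t and the current state vector y, whenever the BDF method needs the Jacobian:

jac_np = jac_func()
U_0 = [500, 20, 20]
t_eval = np.linspace(0, 100, 10000)
solf = solve_ivp(cvs, (0, 100), y0=U_0, method='BDF', jac=jac_np, t_eval=t_eval)
print(solf.success, solf.y.shape)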

How to write a flexible multiple exponential fit

I'd like to write a more or less universal fit function for the general form
$f(t) = \sum_i a_i \exp(-t/\tau_i)$
for some data I have.
Below is example code for a biexponential function, but I would like to be able to fit a monoexponential or a triexponential function with the smallest possible code changes.
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import numpy as np
from scipy.optimize import curve_fit
import matplotlib.pyplot as plt

t = np.linspace(0, 10, 100)
a_1 = 1
a_2 = 1
tau_1 = 5
tau_2 = 1
data = 1*np.exp(-t/5) + 1*np.exp(-t/1)
data += 0.2 * np.random.normal(size=t.size)

def func(t, a_1, tau_1, a_2, tau_2):  # plus more exponential functions
    return a_1*np.exp(-t/tau_1) + a_2*np.exp(-t/tau_2)

popt, pcov = curve_fit(func, t, data)
print(popt)

plt.plot(t, data, label="data")
plt.plot(t, func(t, *popt), label="fit")
plt.legend()
plt.show()
In principle I thought of redefining the function in a general form
def func(t, a, tau):  # with a and tau as lists
    tmp = 0
    for i in range(len(a)):
        tmp += a[i]*np.exp(-t/tau[i])
    return tmp
and passing the arguments to curve_fit as lists or tuples. However, I get a TypeError as shown below.
TypeError: func() takes 4 positional arguments but 7 were given
Is there any way to rewrite the code so that the number of exponential terms is determined solely by the input parameters of curve_fit? So that passing
a = (1,)
results in a monoexponential function, whereas passing
a = (1, 2, 3)
results in a triexponential function?
Regards
Yes, that can be done easily with NumPy broadcasting:
def func(t, a, taus):  # handles any number of exponential terms
    a = np.array(a)[:, None]
    taus = np.array(taus)[:, None]
    return (a*np.exp(-t/taus)).sum(axis=0)
func accepts two lists, converts them into 2-D np.arrays, computes a matrix containing all the exponentials, and then sums over them. Example:
t = np.arange(100).astype(float)
out = func(t, [1, 2], [0.3, 4])
plt.plot(out)
Keep in mind that a and taus must have the same length, so sanitize your inputs as you see fit. You can also pass np.arrays directly instead of lists.
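To wire this into curve_fit, which passes each fit parameter as a separate scalar, one option is a small wrapper that repacks the flat parameter vector. This is a sketch, assuming the parameters are ordered as n amplitudes followed by n decay times:

import numpy as np
from scipy.optimize import curve_fit

def make_fit_func(n):
    # curve_fit will pass 2*n scalars; repack them into the two lists func expects
    def fit_func(t, *params):
        return func(t, params[:n], params[n:])
    return fit_func

n = 2  # number of exponential terms; n = 1 or n = 3 works the same way
popt, pcov = curve_fit(make_fit_func(n), t, data, p0=[1.0]*(2*n))
a_fit, tau_fit = popt[:n], popt[n:]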

updating subset of parameters in dynet

Is there a way to update a subset of parameters in dynet? For instance in the following toy example, first update h1, then h2:
model = ParameterCollection()
h1 = model.add_parameters((hidden_units, dims))
h2 = model.add_parameters((hidden_units, dims))
...
for x in trainset:
    ...
    loss.scalar_value()
    loss.backward()
    trainer.update(h1)
    renew_cg()
for x in trainset:
    ...
    loss.scalar_value()
    loss.backward()
    trainer.update(h2)
    renew_cg()
I know that the update_subset interface exists for this and works based on the given parameter indexes. But it is not documented anywhere how to get the parameter indexes in dynet's Python API.
A solution is to use the flag update = False when creating expressions for parameters (including lookup parameters):
import dynet as dy
import numpy as np

model = dy.Model()
pW = model.add_parameters((2, 4))
pb = model.add_parameters(2)
trainer = dy.SimpleSGDTrainer(model)

def step(update_b):
    dy.renew_cg()
    x = dy.inputTensor(np.ones(4))
    W = pW.expr()
    # update b?
    b = pb.expr(update=update_b)
    loss = dy.pickneglogsoftmax(W * x + b, 0)
    loss.backward()
    trainer.update()
    # dy.renew_cg()

print(pb.as_array())
print(pW.as_array())
step(True)
print(pb.as_array())  # b updated
print(pW.as_array())
step(False)
print(pb.as_array())  # b not updated
print(pW.as_array())
For update_subset, I would guess that the indices are the integers suffixed to the parameter names (.name()).
According to the docs, we are supposed to use a get_index function.
Another option is dy.nobackprop(), which prevents the gradient from propagating beyond a certain node in the graph.
And yet another option is to zero the gradient of the parameters that should not be updated (.scale_gradient(0)).
Note that these methods are equivalent to zeroing the gradient before the update, so the parameter may still change if the optimizer uses momentum from previous training steps (MomentumSGDTrainer, AdamTrainer, ...).

scipy.optimize failure with a "vectorized" implementation

I have a 1-D optimization problem coded in two ways: one using a for loop and another using numpy arrays. The for-loop version works fine, but the numpy one fails.
Actually it is a bit more complicated than that: the numpy version can work with different starting points (!!) or if I choose another optimization algorithm such as CG.
The two versions (functions and gradients) give the same results, and the returned types are also the same as far as I can tell.
Here is my example; what am I missing?
import numpy as np
from scipy.optimize import minimize

# local params
v1 = np.array([1., 1.])
v2 = np.array([1., 2.])

# local functions
def f1(x):
    s = 0
    for i in range(len(v1)):
        s += (v1[i]*x - v2[i])**2
    return 0.5*s/len(v1)

def df1(x):
    g = 0
    for i in range(len(v1)):
        g += v1[i]*(v1[i]*x - v2[i])
    return g/len(v1)

def f2(x):
    return 0.5*np.sum((v1*x - v2)**2)/len(v1)

def df2(x):
    return np.sum(v1*(v1*x - v2))/len(v1)

x0 = 10.  # x0 = 2 works

# tests...
assert np.abs(f1(x0) - f2(x0)) < 1.e-6 and np.abs(df1(x0) - df2(x0)) < 1.e-6 \
    and np.abs((f1(x0 + 1.e-6) - f1(x0))/(1.e-6) - df1(x0)) < 1.e-4

# BFGS for f1: OK
o = minimize(f1, x0, method='BFGS', jac=df1)
if not o.success:
    print('FAILURE', o)
else:
    print('SUCCESS min = %f reached at %f' % (f1(o.x[0]), o.x[0]))

# BFGS for f2: failure
o = minimize(f2, x0, method='BFGS', jac=df2)
if not o.success:
    print('FAILURE', o)
else:
    print('SUCCESS min = %f reached at %f' % (f2(o.x[0]), o.x[0]))
The error I get is
A1 = I - sk[:, numpy.newaxis] * yk[numpy.newaxis, :] * rhok
IndexError: invalid index to scalar variable.
but it doesn't really help me, since the code works with some other starting values.
I am using a fresh python install (python 3.5.2, scipy 0.18.1 and numpy 1.11.3).
The solver expects the return value of the Jacobian df2 to have the same shape as its input x. Even though you passed in a scalar here, it is actually converted into a single-element ndarray. Since you used np.sum, your result became a scalar, and that causes strange things to happen.
Wrap the scalar result of df2 in np.array, and your code should work.
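For example, a sketch of the fix; only df2 changes and the rest of the question's code stays the same:

def df2(x):
    # return a one-element array matching the shape of x, instead of a scalar
    return np.array([np.sum(v1*(v1*x - v2))/len(v1)])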