Root finding using a loop - scipy

I have one equation defined in the function
def fun(x, y, z, v, b):
    Y = (z*(np.sign(x) * (np.abs(x))**(y-1))) - (v*np.sign(b) * (np.abs(b))**(v-1))/(1-b**v)
    return Y.flatten()
that I want to solve for the value of x, given the values of Z0, SS (year 1: Z0=1.2, SS=2, ...) and different combinations of alpha and kappa, for which I am creating a grid.
Z0 = [1.2, 5, 3, 2.5, 4.2]
SS = [2, 3, 2.2, 3.5, 5]
ngrid = 10
kv = np.linspace(0.05, 2, ngrid)
av = np.linspace(1.5, 4, ngrid)
q0 = []
for z in range(len(Z0)):
    zz = Z0[z]
    ss = SS[z]
    for i in range(ngrid):
        for j in range(ngrid):
            kappa = kv[i]
            alpha = av[j]
            res0 = root(lambda x: fun(x, alpha, zz, kappa, ss), x0=np.ones(range(ngrid)))
            q0 = res0.x
            print(q0)
where y = alpha, v = kappa, z = Z0 and b = SS.
I am getting all [], [], ... as results.
I'm not sure what is going on. Thanks for your help.

Before you attempt to use res0.x, check res0.success. In this case, you'll find that it is False for every parameter combination. When res0.success is False, take a look at res0.message for information about why root failed.
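For example, right after the call to root inside your innermost loop you could add a guard roughly like this (a sketch using the variable names from your code):
if not res0.success:
    # The solver did not converge; report why and skip this parameter combination.
    print(f"root failed for alpha={alpha}, kappa={kappa}: {res0.message}")
    continue
q0 = res0.x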
During development and debugging, you might also consider getting the solver working for just one set of parameter values before you embed root in three nested loops. For example, here are a few lines from an ipython session (variables were defined in previous lines, not shown):
In [37]: res0 = root(lambda x: fun(x, av[0], Z0[0], kv[0], SS[0]), x0=np.ones(range(ngrid)))
In [38]: res0.success
Out[38]: False
In [39]: res0.message
Out[39]: 'Improper input parameters were entered.'
The message suggests that something is wrong with the input parameters. You call root like this:
res0 = root(lambda x: fun(x, alpha, zz, kappa, ss), x0=np.ones(range(ngrid)))
A close look at that line shows the problem: the initial guess is np.ones(range(ngrid)):
In [41]: np.ones(range(ngrid))
Out[41]: array([], shape=(0, 1, 2, 3, 4, 5, 6, 7, 8, 9), dtype=float64)
That's not what you want! The use of range looks like a simple typo (or "thinko"). The initial guess should be
x0=np.ones(ngrid)
In ipython, we get:
In [50]: res0 = root(lambda x: fun(x, av[0], Z0[0], kv[0], SS[0]), x0=np.ones(ngrid))
In [51]: res0.success
Out[51]: True
In [52]: res0.x
Out[52]:
array([-0.37405428, -0.37405428, -0.37405428, -0.37405428, -0.37405428,
-0.37405428, -0.37405428, -0.37405428, -0.37405428, -0.37405428])
All the return values are the same (and this happens for other parameter values too), which suggests that you are solving a scalar equation. A closer look at fun shows that you only use x in element-wise operations, so you are in fact solving just a scalar equation. In that case, you can use x0=1:
In [65]: res0 = root(lambda x: fun(x, av[0], Z0[0], kv[0], SS[0]), x0=1)
In [66]: res0.success
Out[66]: True
In [67]: res0.x
Out[67]: array([-0.37405428])
You could also consider using root_scalar instead of root.
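For instance, a minimal root_scalar sketch for a single parameter combination might look like this (the starting points x0 and x1 are just assumptions; fun returns a length-1 array, so its single element is taken to get a scalar residual):

from scipy.optimize import root_scalar

f = lambda x: fun(x, av[0], Z0[0], kv[0], SS[0])[0]

# With two starting points and no bracket, root_scalar uses the secant method.
sol = root_scalar(f, x0=1.0, x1=0.5)
print(sol.converged, sol.root)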

Related

Relative sum of squared error with SciPy least_squares

I am relatively new to model fitting and SciPy; apologies in advance for any ignorance.
I am trying to fit a non-linear model using scipy.optimize.least_squares.
Here's the function:
def growthfunction(theta, t):
    return (theta[0]*np.exp(-np.exp(-theta[1]*(t-theta[2]))))
and some data
t = [1, 2, 3, 4]
observed = [3, 10, 14, 17]
I first define the model
def fun(theta):
    return (myfunction(theta, ts) - observed)
Select some random starting parameters to be optimized below:
theta0 = [1, 1, 1]
Then I utilize least_squares to optimize
res1 = least_squares(fun, theta0)
This works great, except for the fact that least_squares is here optimizing the absolute error. My data changes with time, meaning an error of 5 at time point 1 is proportionally larger than an error of 5 at time point 100. I would like to change this so that instead the relative error is optimized.
I tried doing it manually, but if I divide by the predicted values in fun(theta) like so:
def fun(theta):
    return (myfunction(theta, ts) - observed)/myfunction(theta, ts)
least_squares displays an error that there are too many parameters and cannot optimize
This works by taking the relative error:
from scipy.optimize import least_squares
import numpy as np
def growthfunction(theta, t):
    return (theta[0]*np.exp(-np.exp(-theta[1]*(t-theta[2]))))

t = [1, 2, 3, 4]
observed = [3, 10, 14, 17]

def fun(theta):
    return (growthfunction(theta, t) - observed)/growthfunction(theta, t)

theta0 = [1, 1, 1]
res1 = least_squares(fun, theta0)
print(res1)
Output:
active_mask: array([0., 0., 0.])
cost: 0.0011991963091748607
fun: array([ 0.00255037, -0.0175105 , 0.0397808 , -0.02242228])
grad: array([ 3.15774533e-13, -2.50283465e-08, -1.46139239e-08])
jac: array([[ 0.05617851, -0.92486809, -1.94678829],
[ 0.05730839, 0.28751647, -0.6615416 ],
[ 0.05408162, 0.27956135, -0.20795969],
[ 0.05758503, 0.166258 , -0.07376148]])
message: '`ftol` termination condition is satisfied.'
nfev: 10
njev: 10
optimality: 2.5028346541978996e-08
status: 2
success: True
x: array([17.7550016 , 1.09927597, 1.52223722])
Without a minimal reproducible example it is very hard to help you, but you can try a more traditional version of relative least squares, which is
def fun(theta):
    return (myfunction(theta, ts) - observed)/observed
or, perhaps, to guard against small/zero values,
def fun(theta):
    cutoff = 1e-4
    return (myfunction(theta, ts) - observed)/np.maximum(np.abs(observed), cutoff)
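Put together with the data from the question, a runnable sketch of that observed-relative version might look like this (the cutoff value and the starting guess are just assumptions):

import numpy as np
from scipy.optimize import least_squares

def growthfunction(theta, t):
    # Growth model from the question.
    return theta[0]*np.exp(-np.exp(-theta[1]*(t - theta[2])))

t = np.array([1, 2, 3, 4])
observed = np.array([3, 10, 14, 17])

def fun(theta):
    # Residuals relative to the observed data, guarded against tiny observed values.
    cutoff = 1e-4
    return (growthfunction(theta, t) - observed)/np.maximum(np.abs(observed), cutoff)

res = least_squares(fun, [1, 1, 1])
print(res.success, res.x)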

How to implement A = sparse(I, J, K) (sparse matrix from triplet) in a Fortran mex file?

I'm trying to create a sparse square matrix in Matlab through a mex function (written in Fortran). I want something like A = sparse(I,J,K). My triplets look like this; there are repetitions among the entries:
femi = [1, 2, 3, 2, 2, 4, 5, 5, 4, 6, 6, 5, 5, 2]
femj = [2, 2, 1, 1, 1, 3, 3, 6, 3, 1, 1, 2, 2, 4]
femk = [2, 1, 5, 4, 2, 4, 5, 7, 2, 1, 6, 2, 1, 4]
I've written a rough piece of code; it works for small matrix dimensions, but it's much slower than Matlab's intrinsic sparse. Since I have almost no background in coding, I don't know what I'm doing wrong (wrong way to allocate variables? too many do loops?). Any help is appreciated. Thank you. This is the mex computational subroutine; it returns the pr, ir and jc arrays to hand to the sparse matrix:
subroutine new_sparse(femi, femj, femk, pr, ir, jc, n, m)
    implicit none
    intrinsic :: SUM, COUNT, ANY
    integer :: i, j, k, n, indjc, m
    real*8  :: femi(n), femj(n), femk(n)
    real*8  :: pr(n)
    integer :: ir(n), jc(m+1)
    logical :: indices(n)

    indices = .false.
    k = 1
    indjc = 0
    jc(1) = 0

    do j = 1,m
        do i = 1,m
            indices = [femi==i .and. femj==j]
            if (ANY(indices .eqv. .true.)) then
                ir(k) = i-1
                pr(k) = SUM(femk, indices)
                k = k+1
                indjc = indjc + 1
            end if
        end do
        if (indjc/=0) then
            jc(j+1) = jc(j) + indjc
            indjc = 0
        else
            jc(j+1) = jc(j)
        end if
    end do

    return
end
Edit:
As suggested by users @jack and @veryreverie in the comments below, it's better to sort femi, femj and femk directly. I guess that ranking/sorting femi first (and sorting femj and femk according to femi) and then ranking/sorting femj (and sorting femi and femk according to femj) provides the desired result. The only thing left is to deal with duplicates.
Edit #2 :
I translated, line by line, the serialized version of the C code by Engblom and Lukarski. This document explains very clearly their reasoning and I think it's useful for beginners like me. However, due to my inexperience, I was unable to translate the parallelized version of the code. Maybe that prompts another question.
subroutine new_sparse(ir, jcS, pr, MatI, MatJ, MatK, n, m)
    ! use omp_lib
    implicit none

    integer, parameter :: dp = selected_real_kind(15,300)
    integer,  intent(in) :: n, m
    real(dp), intent(in) :: MatK(n), MatI(n), MatJ(n)
    ! integer*8, intent(out) :: nnew
    integer :: i, k, col, row, c, r !, nthreads
    integer :: hcol(m+1), jcS(m+1), jrS(m+1)
    integer :: ixijs, irank(n), rank(n)
    real*8  :: pr(*)
    integer :: ir(*)

    hcol = 0
    jcS = 0
    jrS = 0

    do i = 1,n
        jrS(MatI(i)+1) = jrS(MatI(i)+1) + 1
    end do
    do r = 2,m+1
        jrS(r) = jrS(r) + jrS(r-1)
    end do
    do i = 1,n
        rank(jrS(MatI(i))+1) = i
        jrS(MatI(i)) = jrS(MatI(i)) + 1
    end do

    k = 1
    do row = 1,m
        do i = k, jrS(row)
            ixijs = rank(i)
            col = MatJ(ixijs)
            if (hcol(col) < row) then
                hcol(col) = row
                jcS(col+1) = jcS(col+1) + 1
            end if
            irank(ixijs) = jcS(col+1)
            k = k+1
        end do
    end do

    do c = 2,m+1
        jcS(c) = jcS(c) + jcS(c-1)
    end do
    do i = 1,n
        irank(i) = irank(i) + jcS(MatJ(i))
    end do

    ir(irank) = MatI - 1
    do i = 1,n
        pr(irank(i)) = pr(irank(i)) + MatK(i)
    end do

    return
end
This should work:
module test
  implicit none

  ! This should probably be whatever floating point format Matlab uses.
  integer, parameter :: dp = selected_real_kind(15,300)
contains

subroutine new_sparse(femi, femj, femk, pr, ir, jc, n, m)
  integer,  intent(in)  :: n        ! The size of femi, femj, femk.
  integer,  intent(in)  :: m        ! The no. of rows (and cols) in the matrix.
  integer,  intent(in)  :: femi(n)  ! The input i indices.
  integer,  intent(in)  :: femj(n)  ! The input j indices.
  real(dp), intent(in)  :: femk(n)  ! The input values.
  real(dp), intent(out) :: pr(n)    ! The output values.
  integer,  intent(out) :: ir(n)    ! The output i indices.
  integer,  intent(out) :: jc(m+1)  ! Column j has jc(j+1)-jc(j) non-zero entries.

  ! Loop indices.
  integer :: a,b

  ! Initialise jc.
  ! All elements of `jc` are `1` as the output initially contains no elements.
  jc = 1

  ! Loop over the input elements.
  do_a : do a=1,n
    associate(i=>femi(a), j=>femj(a), k=>femk(a))
      ! Loop over the stored entries in column j of the output,
      ! looking for element (i,j).
      do b=jc(j),jc(j+1)-1
        ! Element (i,j) is already in the output, update the output and cycle.
        if (ir(b)==i) then
          pr(b) = pr(b) + femk(a)
          cycle do_a
        endif
      enddo

      ! Element (i,j) is not already in the output.
      ! First make room for the new element in ir and pr,
      ! then add the element to ir and pr,
      ! then update jc.
      ir(jc(j+1)+1:jc(m+1)) = ir(jc(j+1):jc(m+1)-1)
      pr(jc(j+1)+1:jc(m+1)) = pr(jc(j+1):jc(m+1)-1)
      ir(jc(j+1)) = i
      pr(jc(j+1)) = k
      jc(j+1:) = jc(j+1:) + 1
    end associate
  enddo do_a
end subroutine
end module
program prog
  use test
  implicit none

  integer, parameter :: n = 14
  integer, parameter :: m = 6

  integer  :: femi(n), femj(n)
  real(dp) :: femk(n)

  real(dp) :: pr(n)
  integer  :: ir(n), jc(m+1)

  integer :: a,b

  femi = [1, 2, 3, 2, 2, 4, 5, 5, 4, 6, 6, 5, 5, 2]
  femj = [2, 2, 1, 1, 1, 3, 3, 6, 3, 1, 1, 2, 2, 4]
  femk = real([2, 1, 5, 4, 2, 4, 5, 7, 2, 1, 6, 2, 1, 4], dp)

  write(*,*) 'Input:'
  do a=1,n
    write(*,'(a,i0,a,i0,a,f2.0)') '(',femi(a),',',femj(a),') : ',femk(a)
  enddo
  write(*,*)

  call new_sparse(femi,femj,femk,pr,ir,jc,n,m)

  write(*,*) 'Output:'
  do a=1,m
    do b=jc(a),jc(a+1)-1
      write(*,'(a,i0,a,i0,a,f2.0)') '(',ir(b),',',a,') : ',pr(b)
    enddo
  enddo
end program
This writes:
Input:
(1,2) : 2.
(2,2) : 1.
(3,1) : 5.
(2,1) : 4.
(2,1) : 2.
(4,3) : 4.
(5,3) : 5.
(5,6) : 7.
(4,3) : 2.
(6,1) : 1.
(6,1) : 6.
(5,2) : 2.
(5,2) : 1.
(2,4) : 4.
Output:
(3,1) : 5.
(2,1) : 6.
(6,1) : 7.
(1,2) : 2.
(2,2) : 1.
(5,2) : 3.
(4,3) : 6.
(5,3) : 5.
(2,4) : 4.
(5,6) : 7.
The bottleneck in your algorithm comes from the instructions indices = [femi==i .and. femj==j], any(indices .eqv. .true.) and sum(femk, indices). These all take O(n) operations, and as these are within a double loop the overall cost of the subroutine is O(m^2*n).
My algorithm works in two stages. The first stage, the do b=jc(j),jc(j+1)-1 loop, compares each element in the input with each element in the matching column of the output, for a maximum cost of O(mn) operations. If the input element is found in the output, then the value is updated and nothing more needs to be done.
If the input element is not found in the output, then it needs to be added to the output. This is handled by the second stage, the code after the do b... loop. Since this needs to move the output elements in order to make space for the new element, this stage has a maximum of O(n'^2) operations, where n' is the number of unique elements in the input, which should satisfy n'<=n and n'<<m^2 for a sparse matrix.
My algorithm should run a lot faster for large m and n, but it certainly has a lot of scope for improvement. I suspect it is worth using an intermediate data structure for storing ir and pr, so that new elements can be inserted without having to re-arrange all the elements to do so.
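As a side note, if you want to sanity-check the resulting pr/ir/jc arrays outside of Matlab, the same triplet-to-CSC conversion can be sketched in Python with scipy.sparse, which also sums duplicate entries (the ordering of rows within a column may differ from the Fortran code above):

import numpy as np
from scipy import sparse

# Triplets from the question (1-based indices, shifted to 0-based for scipy).
femi = np.array([1, 2, 3, 2, 2, 4, 5, 5, 4, 6, 6, 5, 5, 2]) - 1
femj = np.array([2, 2, 1, 1, 1, 3, 3, 6, 3, 1, 1, 2, 2, 4]) - 1
femk = np.array([2, 1, 5, 4, 2, 4, 5, 7, 2, 1, 6, 2, 1, 4], dtype=float)

# COO -> CSC sums duplicate (i, j) entries, just like Matlab's sparse(I, J, K).
A = sparse.coo_matrix((femk, (femi, femj)), shape=(6, 6)).tocsc()
print(A.data)     # corresponds to pr
print(A.indices)  # corresponds to ir (0-based row indices)
print(A.indptr)   # corresponds to jc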

What is a BitVector and how to use it as return in Breeze Scala?

I am comparing two Breeze DenseVectors with a :< b, and what I get back is a BitVector.
I haven't worked with this before, and everything I read about it was not helpful enough.
Can anyone explain to me how they work?
Additionally, by printing the output, I get: {0, 1, 2, 3, 4}. What is this supposed to mean?
You can check BitVectorTest.scala for more detailed usage.
Basically, a :< b gives you a BitVector, which indicates which elements of a are smaller than the corresponding elements of b.
For example, with val a = DenseVector[Int](4, 9, 3) and val b = DenseVector[Int](8, 2, 5), a :< b gives you BitVector(0, 2), meaning that a(0) < b(0) and a(2) < b(2), which is correct.

Multi variable assignment in Scala

I am reading a book on Scala and one of the exercises is as follows:
Come up with one situation where the assignment x = y = 1 is valid in Scala. (Hint: Pick a suitable type for x.)
The 2 solutions I could come up with are:
val x, y : Int = 1
val x, y = (1, 2)
Have I missed another way that the exercise is looking for?
"valid" and "useful" don't necessarily mean the same thing :)
scala> var y = 2
y: Int = 2
scala> val x = y = 1
x: Unit = ()
scala>

Variable associated to "Optimization terminated successfully" in scipy.optimize.fmin_cg?

I am using scipy.optimize.fmin_cg (https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.optimize.fmin_cg.html).
What is the variable associated to "Optimization terminated successfully"?
I need it such that I could write something like:
if "optimization not succesful" then "stop the for loop"
Thank you.
Just follow the docs.
You are interested in warnflag (as mentioned by cel in the comments), the 5th element returned, so just index the result with result[4] (0-indexing in Python!) to obtain your value.
The docs also say that some of these values are only returned when fmin_cg is called with full_output=True, so do this.
Simple example:
import numpy as np

args = (2, 3, 7, 8, 9, 10)  # parameter values

def f(x, *args):
    u, v = x
    a, b, c, d, e, f = args
    return a*u**2 + b*u*v + c*v**2 + d*u + e*v + f

def gradf(x, *args):
    u, v = x
    a, b, c, d, e, f = args
    gu = 2*a*u + b*v + d    # u-component of the gradient
    gv = b*u + 2*c*v + e    # v-component of the gradient
    return np.asarray((gu, gv))

x0 = np.asarray((0, 0))  # Initial guess.

from scipy import optimize
res1 = optimize.fmin_cg(f, x0, fprime=gradf, args=args, full_output=True)  # full_output !!!
print(res1[4])  # index 4 !!!
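To stop a loop when the optimization is not successful, as asked above, you can then check that flag. A rough sketch, reusing f, gradf and args from the example and looping over a hypothetical list of starting points:

for x0 in [np.asarray((0, 0)), np.asarray((5, 5))]:
    res = optimize.fmin_cg(f, x0, fprime=gradf, args=args, full_output=True)
    warnflag = res[4]
    if warnflag != 0:
        # warnflag 0 means success; 1 and 2 indicate different failure modes (see the docs).
        print("optimization not successful, stopping")
        break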