I am relatively new to model fitting and SciPy; apologies in advance for any ignorance.
I am trying to fit a non-linear model using scipy.optimize.least_squares.
Here's the function:
def growthfunction(theta, t):
    return (theta[0]*np.exp(-np.exp(-theta[1]*(t-theta[2]))))
and some data
t = [1, 2, 3, 4]
observed = [3, 10, 14, 17]
I first define the model
def fun(theta):
    return (myfunction(theta, ts) - observed)
Select some random starting parameters to be optimized below:
theta0 = [1, 1, 1]
Then I use least_squares to optimize:
res1 = least_squares(fun, theta0)
This works great, except that least_squares is minimizing the absolute error here. My data changes with time, meaning an error of 5 at time point 1 is proportionally larger than an error of 5 at time point 100. I would like to change this so that the relative error is minimized instead.
I tried doing it manually, but if I divide by the predicted values in fun(theta) like so:
def fun(theta):
    return (myfunction(theta, ts) - observed)/myfunction(theta, ts)
least_squares raises an error saying there are too many parameters and it cannot optimize.
This works, minimizing the relative error:
from scipy.optimize import least_squares
import numpy as np

def growthfunction(theta, t):
    return (theta[0]*np.exp(-np.exp(-theta[1]*(t-theta[2]))))

t = [1, 2, 3, 4]
observed = [3, 10, 14, 17]

def fun(theta):
    return (growthfunction(theta, t) - observed)/growthfunction(theta, t)

theta0 = [1, 1, 1]

res1 = least_squares(fun, theta0)
print(res1)
Output:
active_mask: array([0., 0., 0.])
cost: 0.0011991963091748607
fun: array([ 0.00255037, -0.0175105 , 0.0397808 , -0.02242228])
grad: array([ 3.15774533e-13, -2.50283465e-08, -1.46139239e-08])
jac: array([[ 0.05617851, -0.92486809, -1.94678829],
[ 0.05730839, 0.28751647, -0.6615416 ],
[ 0.05408162, 0.27956135, -0.20795969],
[ 0.05758503, 0.166258 , -0.07376148]])
message: '`ftol` termination condition is satisfied.'
nfev: 10
njev: 10
optimality: 2.5028346541978996e-08
status: 2
success: True
x: array([17.7550016 , 1.09927597, 1.52223722])
Without a minimal reproducible example it is very hard to help you, but you can try a more traditional version of relative least squares, which is
def fun(theta):
    return (myfunction(theta, ts) - observed)/observed
or, perhaps, to guard against small/zero values,
def fun(theta):
    cutoff = 1e-4
    return (myfunction(theta, ts) - observed)/np.maximum(np.abs(observed), cutoff)
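Putting that together with the data from the question gives a minimal runnable sketch (it substitutes the question's growthfunction and t for myfunction and ts):

from scipy.optimize import least_squares
import numpy as np

def growthfunction(theta, t):
    return theta[0]*np.exp(-np.exp(-theta[1]*(t - theta[2])))

t = np.array([1, 2, 3, 4])
observed = np.array([3, 10, 14, 17])

def fun(theta):
    # Divide the residuals by the observations so the relative error is minimized.
    return (growthfunction(theta, t) - observed)/observed

theta0 = [1, 1, 1]
res = least_squares(fun, theta0)
print(res.x)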
I'm trying to create a sparse square matrix in Matlab through a mex function (written in Fortran). I want something like A = sparse(I,J,K). My triplets look like this; there are repetitions among the entries:
femi = [1, 2, 3, 2, 2, 4, 5, 5, 4, 6, 6, 5, 5, 2]
femj = [2, 2, 1, 1, 1, 3, 3, 6, 3, 1, 1, 2, 2, 4]
femk = [2, 1, 5, 4, 2, 4, 5, 7, 2, 1, 6, 2, 1, 4]
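For reference, Matlab's sparse(I,J,K) sums the values of repeated (i,j) pairs; the target result can be cross-checked with a quick Python/SciPy sketch (purely for verification, not part of the mex code):

import numpy as np
from scipy.sparse import csc_matrix

femi = np.array([1, 2, 3, 2, 2, 4, 5, 5, 4, 6, 6, 5, 5, 2])
femj = np.array([2, 2, 1, 1, 1, 3, 3, 6, 3, 1, 1, 2, 2, 4])
femk = np.array([2, 1, 5, 4, 2, 4, 5, 7, 2, 1, 6, 2, 1, 4], dtype=float)

# 0-based indices; duplicate (i, j) pairs are summed, as in Matlab's sparse(I,J,K).
A = csc_matrix((femk, (femi - 1, femj - 1)), shape=(6, 6))
print(A.toarray())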
I've written a rough piece of code; it works for small matrix dimensions, but it's much slower than Matlab's intrinsic sparse. Since I have almost no background in coding, I don't know what I'm doing wrong (wrong way to allocate variables? too many do loops?). Any help is appreciated. Thank you. This is the mex computational subroutine; it returns the pr, ir, and jc arrays that describe the sparse matrix:
subroutine new_sparse(femi, femj, femk, pr, ir, jc, n, m)
  implicit none
  intrinsic :: SUM, COUNT, ANY

  integer :: i, j, k, n, indjc, m
  real*8 :: femi(n), femj(n), femk(n)
  real*8 :: pr(n)
  integer :: ir(n), jc(m+1)
  logical :: indices(n)

  indices = .false.
  k = 1
  indjc = 0
  jc(1) = 0

  do j = 1,m
     do i = 1,m
        indices = [femi==i .and. femj==j]
        if (ANY(indices .eqv. .true.)) then
           ir(k) = i-1
           pr(k) = SUM(femk, indices)
           k = k+1
           indjc = indjc + 1
        end if
     end do
     if (indjc/=0) then
        jc(j+1) = jc(j) + indjc
        indjc = 0
     else
        jc(j+1) = jc(j)
     end if
  end do

  return
end
Edit:
As suggested by users #jack and #veryreverie in the comments below, it's better to sort femi, femj, and femk directly. I guess that ranking/sorting by femi first (and reordering femj and femk accordingly) and then ranking/sorting by femj (and reordering femi and femk accordingly) provides the desired result; see the sketch below. The only thing left is to deal with duplicates.
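Purely to illustrate that idea, here is a Python/NumPy sketch of the sort-then-merge logic (not something that can go into the mex file as-is):

import numpy as np

femi = np.array([1, 2, 3, 2, 2, 4, 5, 5, 4, 6, 6, 5, 5, 2])
femj = np.array([2, 2, 1, 1, 1, 3, 3, 6, 3, 1, 1, 2, 2, 4])
femk = np.array([2, 1, 5, 4, 2, 4, 5, 7, 2, 1, 6, 2, 1, 4], dtype=float)

# Sort by column first, then by row within each column (CSC order).
order = np.lexsort((femi, femj))
si, sj, sk = femi[order], femj[order], femk[order]

# Merge duplicates: consecutive equal (i, j) pairs have their values summed.
new_group = np.concatenate(([True], (si[1:] != si[:-1]) | (sj[1:] != sj[:-1])))
group_id = np.cumsum(new_group) - 1
ir = si[new_group]                       # row index of each stored entry
jcol = sj[new_group]                     # column index of each stored entry
pr = np.bincount(group_id, weights=sk)   # summed values
print(list(zip(ir, jcol, pr)))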
Edit #2:
I translated, line by line, the serial version of the C code by Engblom and Lukarski. This document explains their reasoning very clearly, and I think it's useful for beginners like me. However, due to my inexperience, I was unable to translate the parallelized version of the code. Maybe that prompts another question.
subroutine new_sparse(ir, jcS, pr, MatI, MatJ, MatK, n, m)
  ! use omp_lib
  implicit none

  integer, parameter :: dp = selected_real_kind(15,300)
  integer, intent(in) :: n, m
  ! MatI, MatJ, MatK arrive as Matlab double arrays; the indices are
  ! converted with int() wherever they are used as array subscripts.
  real(dp), intent(in) :: MatK(n), MatI(n), MatJ(n)
  ! integer*8, intent(out) :: nnew
  integer :: i, k, col, row, c, r !, nthreads
  integer :: hcol(m+1), jcS(m+1), jrS(m+1)
  integer :: ixijs, irank(n), rank(n)
  ! pr is assumed to be zero-initialised by the caller (mxCreateSparse does this).
  real(dp) :: pr(*)
  integer :: ir(*)

  hcol = 0
  jcS = 0
  jrS = 0

  ! Count the entries in each row, then turn the counts into offsets.
  do i = 1,n
     jrS(int(MatI(i))+1) = jrS(int(MatI(i))+1) + 1
  end do
  do r = 2,m+1
     jrS(r) = jrS(r) + jrS(r-1)
  end do

  ! rank lists the input entries sorted by row.
  do i = 1,n
     rank(jrS(int(MatI(i)))+1) = i
     jrS(int(MatI(i))) = jrS(int(MatI(i))) + 1
  end do

  ! Count the unique entries per column and record, for every input entry,
  ! its position within its column.
  k = 1
  do row = 1,m
     do i = k, jrS(row)
        ixijs = rank(i)
        col = int(MatJ(ixijs))
        if (hcol(col) < row) then
           hcol(col) = row
           jcS(col+1) = jcS(col+1) + 1
        end if
        irank(ixijs) = jcS(col+1)
        k = k+1
     end do
  end do

  ! Cumulative sum turns the per-column counts into the jc pointer array.
  do c = 2,m+1
     jcS(c) = jcS(c) + jcS(c-1)
  end do

  ! Scatter the entries into their final positions, accumulating duplicates.
  do i = 1,n
     irank(i) = irank(i) + jcS(int(MatJ(i)))
  end do
  ir(irank) = int(MatI) - 1
  do i = 1,n
     pr(irank(i)) = pr(irank(i)) + MatK(i)
  end do

  return
end
This should work:
module test
  implicit none

  ! This should probably be whatever floating point format Matlab uses.
  integer, parameter :: dp = selected_real_kind(15,300)
contains

  subroutine new_sparse(femi, femj, femk, pr, ir, jc, n, m)
    integer,  intent(in)  :: n        ! The size of femi, femj, femk.
    integer,  intent(in)  :: m        ! The no. of rows (and cols) in the matrix.
    integer,  intent(in)  :: femi(n)  ! The input i indices.
    integer,  intent(in)  :: femj(n)  ! The input j indices.
    real(dp), intent(in)  :: femk(n)  ! The input values.
    real(dp), intent(out) :: pr(n)    ! The output values.
    integer,  intent(out) :: ir(n)    ! The output i indices.
    integer,  intent(out) :: jc(m+1)  ! Column j has jc(j+1)-jc(j) non-zero entries.

    ! Loop indices.
    integer :: a,b

    ! Initialise jc.
    ! All elements of `jc` are `1` as the output initially contains no elements.
    jc = 1

    ! Loop over the input elements.
    do_a : do a=1,n
      associate(i=>femi(a), j=>femj(a), k=>femk(a))
        ! Loop over the stored entries in column j of the output,
        ! looking for element (i,j).
        do b=jc(j),jc(j+1)-1
          ! Element (i,j) is already in the output, update the output and cycle.
          if (ir(b)==i) then
            pr(b) = pr(b) + femk(a)
            cycle do_a
          endif
        enddo

        ! Element (i,j) is not already in the output.
        ! First make room for the new element in ir and pr,
        ! then add the element to ir and pr,
        ! then update jc.
        ir(jc(j+1)+1:jc(m+1)) = ir(jc(j+1):jc(m+1)-1)
        pr(jc(j+1)+1:jc(m+1)) = pr(jc(j+1):jc(m+1)-1)
        ir(jc(j+1)) = i
        pr(jc(j+1)) = k
        jc(j+1:) = jc(j+1:) + 1
      end associate
    enddo do_a
  end subroutine
end module

program prog
  use test
  implicit none

  integer, parameter :: n = 14
  integer, parameter :: m = 6

  integer  :: femi(n), femj(n)
  real(dp) :: femk(n)

  real(dp) :: pr(n)
  integer  :: ir(n), jc(m+1)

  integer :: a,b

  femi = [1, 2, 3, 2, 2, 4, 5, 5, 4, 6, 6, 5, 5, 2]
  femj = [2, 2, 1, 1, 1, 3, 3, 6, 3, 1, 1, 2, 2, 4]
  femk = real([2, 1, 5, 4, 2, 4, 5, 7, 2, 1, 6, 2, 1, 4], dp)

  write(*,*) 'Input:'
  do a=1,n
    write(*,'(a,i0,a,i0,a,f2.0)') '(',femi(a),',',femj(a),') : ',femk(a)
  enddo
  write(*,*)

  call new_sparse(femi,femj,femk,pr,ir,jc,n,m)

  write(*,*) 'Output:'
  do a=1,m
    do b=jc(a),jc(a+1)-1
      write(*,'(a,i0,a,i0,a,f2.0)') '(',ir(b),',',a,') : ',pr(b)
    enddo
  enddo
end program
This writes:
Input:
(1,2) : 2.
(2,2) : 1.
(3,1) : 5.
(2,1) : 4.
(2,1) : 2.
(4,3) : 4.
(5,3) : 5.
(5,6) : 7.
(4,3) : 2.
(6,1) : 1.
(6,1) : 6.
(5,2) : 2.
(5,2) : 1.
(2,4) : 4.
Output:
(3,1) : 5.
(2,1) : 6.
(6,1) : 7.
(1,2) : 2.
(2,2) : 1.
(5,2) : 3.
(4,3) : 6.
(5,3) : 5.
(2,4) : 4.
(5,6) : 7.
The bottleneck in your algorithm comes from the instructions indices = [femi==i .and. femj==j], any(indices .eqv. .true.) and sum(femk, indices). These all take O(n) operations, and as these are within a double loop the overall cost of the subroutine is O(m^2*n).
My algorithm works in two stages. The first stage, the do b=jc(j),jc(j+1)-1 loop, compares each element in the input with each element in the matching column of the output, for a maximum cost of O(mn) operations. If the input element is found in the output, then the value is updated and nothing more needs to be done.
If the input element is not found in the output, then it needs to be added to the output. This is handled by the second stage, the code after the do b... loop. Since this needs to move the output elements in order to make space for the new element, this stage has a maximum of O(n'^2) operations, where n' is the number of unique elements in the input, which should satisfy n'<=n and n'<<m^2 for a sparse matrix.
My algorithm should run a lot faster for large m and n, but it certainly has a lot of scope for improvement. I suspect it is worth using an intermediate data structure for storing ir and pr, so that new elements can be inserted without having to re-arrange all the existing elements each time; see the sketch below.
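To make that suggestion concrete, here is a rough illustration in Python (just to show the data-structure idea, not mex-ready code): duplicates are accumulated in a per-column dictionary, and pr/ir/jc are only built once at the end, so nothing ever has to be shifted to make room for an insertion.

from collections import defaultdict

def triplets_to_csc(femi, femj, femk, m):
    cols = defaultdict(dict)          # cols[j][i] = accumulated value for entry (i, j)
    for i, j, k in zip(femi, femj, femk):
        cols[j][i] = cols[j].get(i, 0.0) + k

    pr, ir, jc = [], [], [0]
    for j in range(1, m + 1):         # flatten column by column
        for i in sorted(cols[j]):
            ir.append(i - 1)          # 0-based row indices, as Matlab's ir/jc expect
            pr.append(cols[j][i])
        jc.append(len(ir))
    return pr, ir, jc

femi = [1, 2, 3, 2, 2, 4, 5, 5, 4, 6, 6, 5, 5, 2]
femj = [2, 2, 1, 1, 1, 3, 3, 6, 3, 1, 1, 2, 2, 4]
femk = [2, 1, 5, 4, 2, 4, 5, 7, 2, 1, 6, 2, 1, 4]
print(triplets_to_csc(femi, femj, femk, 6))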
I am using scipy.optimize.fmin_cg (https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.optimize.fmin_cg.html).
Which returned variable is associated with "Optimization terminated successfully"?
I need it so that I can write something like:
if "optimization not successful" then "stop the for loop"
Thank you.
Just follow the docs.
You are interested in warnflag (as mentioned by cel in the comments), the 5th element returned, so just index the result with result[4] (0-indexing in Python!) to obtain your value.
The docs also say that some of these are only returned when called with argument full_output=True, so do this.
Simple example:
import numpy as np
args = (2, 3, 7, 8, 9, 10) # parameter values
def f(x, *args):
    u, v = x
    a, b, c, d, e, f = args
    return a*u**2 + b*u*v + c*v**2 + d*u + e*v + f

def gradf(x, *args):
    u, v = x
    a, b, c, d, e, f = args
    gu = 2*a*u + b*v + d     # u-component of the gradient
    gv = b*u + 2*c*v + e     # v-component of the gradient
    return np.asarray((gu, gv))
x0 = np.asarray((0, 0)) # Initial guess.
from scipy import optimize
res1 = optimize.fmin_cg(f, x0, fprime=gradf, args=args, full_output=True) # full_output !!!
print(res1[4]) # index 4 !!!
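If you then want to abort a surrounding loop when the optimization fails, test that value: warnflag is 0 on success, 1 if the maximum number of iterations was exceeded, and 2 if the gradient and/or function calls stopped changing. A sketch (the outer loop is just a placeholder for your own):

for attempt in range(10):  # placeholder for your own for loop
    res = optimize.fmin_cg(f, x0, fprime=gradf, args=args, full_output=True)
    if res[4] != 0:        # warnflag != 0 means the optimization was not successful
        break              # stop the for loop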