Callback in Benders decomposition

I am learning the Benders decomposition method and want to apply it to a small instance. I started from the "bendersatsp.py" example that ships with CPLEX. When I run it on my problem I get the error shown below. Could you please tell me what the problem is and how I can fix it? Below are my modifications to the lazy-constraint callback. I have two decision variables in the master problem, z_{ik} and u_{k}, whose values enter the worker LP as constants.
class BendersLazyConsCallback(LazyConstraintCallback):

    def __call__(self):
        print("shoma")
        v = self.v
        u = self.u
        z = self.z
        print("u:", u)
        print("z:", z)
        workerLP = self.workerLP
        boxty = len(u)
        # scenarios = self.scenarios2
        ite = len(z)
        print("ite:", ite)
        print("boxty:", boxty)
        # Get the current x solution
        sol1 = []
        sol2 = []
        sol3 = []
        print("okkkk")
        for k in range(1, boxty + 1):
            sol1.append([])
            sol1[k - 1] = [self.get_values(u[k - 1])]
            print("sol1:", sol1[k - 1])
        for i in range(1, ite + 1):
            sol2.append([])
            sol2[i - 1] = self.get_values(z[i - 1])
            print("sol2:", sol2[i - 1])
        for i in range(1, ite + 1):
            sol3.append([])
            sol3[i - 1] = self.get_values(v[i - 1])
            # print("sol3:", sol3[i-1])
        # Benders' cut separation
        if workerLP.separate(sol3, sol1, sol2, v, u, z):
            self.add(cut=workerLP.cutLhs, sense="G", rhs=workerLP.cutRhs)
CPLEX Error 1006: Error during callback.
    benders(sys.argv[1][0], datafile)
    cpx.solve()
    _proc.mipopt(self._env._e, self._lp)
    check_status(env, status)
    raise callback_exception
TypeError: unsupported operand type(s) for +: 'int' and 'list'
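The traceback ends in an int + list addition. One plausible source, though workerLP.separate is not shown: in the CPLEX Python API, self.get_values(u[k-1]) already returns a plain float, so the extra brackets in sol1[k-1] = [self.get_values(u[k-1])] make sol1 a list of one-element lists, and any later arithmetic such as coefficient + sol1[k] inside separate would raise exactly this TypeError. A minimal sketch of the fix, assuming separate expects flat lists of values:

# get_values(index) returns a float; don't wrap it in another list.
sol1 = [self.get_values(u[k]) for k in range(boxty)]
sol2 = [self.get_values(z[i]) for i in range(ite)]
sol3 = [self.get_values(v[i]) for i in range(ite)]

get_values also accepts a sequence, so sol1 = self.get_values(u) is an equivalent one-call form.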

Using the GPU with Lux and NeuralPDE Julia

I am trying to run a model on the GPU; there is no problem on the CPU. I think using measured boundary conditions is somehow causing the issue, but I am not sure. I am following this GPU example: https://docs.sciml.ai/dev/modules/NeuralPDE/tutorials/gpu/ and this example for using measured boundary conditions: https://docs.sciml.ai/dev/modules/MethodOfLines/tutorials/icbc_sampled/
using NeuralPDE, Lux, CUDA, Random
using Optimization
using OptimizationOptimisers
using NNlib
import ModelingToolkit: Interval
using Interpolations

# Measured boundary conditions (arbitrary for this example)
bc1 = 1.0:1:1001.0 .|> Float32
bc2 = 1.0:1:1001.0 .|> Float32
ic1 = zeros(101) .|> Float32
ic2 = zeros(101) .|> Float32;

# Interpolation functions registered as symbolic
itp1 = interpolate(bc1, BSpline(Cubic(Line(OnGrid()))))
up_cond_1_f(t::Float32) = itp1(t)
@register_symbolic up_cond_1_f(t)
itp2 = interpolate(bc2, BSpline(Cubic(Line(OnGrid()))))
up_cond_2_f(t::Float32) = itp2(t)
@register_symbolic up_cond_2_f(t)
itp3 = interpolate(ic1, BSpline(Cubic(Line(OnGrid()))))
init_cond_1_f(x::Float32) = itp3(x)
@register_symbolic init_cond_1_f(x)
itp4 = interpolate(ic2, BSpline(Cubic(Line(OnGrid()))))
init_cond_2_f(x::Float32) = itp4(x)
@register_symbolic init_cond_2_f(x);

# Parameters and differentials
@parameters t, x
@variables u1(..), u2(..)
Dt = Differential(t)
Dx = Differential(x);

# Arbitrary equations
eqs = [Dt(u1(t, x)) + Dx(u2(t, x)) ~ 0.0,
       Dt(u1(t, x)) * u1(t, x) + Dx(u2(t, x)) + 9.81 ~ 0.0]

# Boundary conditions with measured data
bcs = [
    u1(t, 1) ~ up_cond_1_f(t),
    u2(t, 1) ~ up_cond_2_f(t),
    u1(1, x) ~ init_cond_1_f(x),
    u2(1, x) ~ init_cond_2_f(x)
]

# Space and time domains
domains = [t ∈ Interval(1.0, 1001.0),
           x ∈ Interval(1.0, 101.0)];

# Neural network
input_ = length(domains)
n = 10
chain = Chain(Dense(input_, n, NNlib.tanh_fast), Dense(n, n, NNlib.tanh_fast), Dense(n, 4))
strategy = GridTraining(0.25)
ps = Lux.setup(Random.default_rng(), chain)[1]
ps = ps |> Lux.ComponentArray |> gpu .|> Float32
discretization = PhysicsInformedNN(chain, strategy, init_params = ps)

# Model setup
@named pdesystem = PDESystem(eqs, bcs, domains, [t, x], [u1(t, x), u2(t, x)])
prob = discretize(pdesystem, discretization);
sym_prob = symbolic_discretize(pdesystem, discretization);

# Losses and callbacks
pde_inner_loss_functions = sym_prob.loss_functions.pde_loss_functions
bcs_inner_loss_functions = sym_prob.loss_functions.bc_loss_functions
callback = function (p, l)
    println("loss: ", l)
    println("pde_losses: ", map(l_ -> l_(p), pde_inner_loss_functions))
    println("bcs_losses: ", map(l_ -> l_(p), bcs_inner_loss_functions))
    return false
end;

# Train model (throws the error)
res = Optimization.solve(prob, Adam(0.01); callback = callback, maxiters = 5000)
phi = discretization.phi;
I get the following error:

GPU broadcast resulted in non-concrete element type Union{}.
This probably means that the function you are broadcasting contains an error or type instability.

Please advise.
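One plausible culprit, offered as an educated guess rather than a confirmed diagnosis: the interpolation wrappers are restricted to Float32 arguments (up_cond_1_f(t::Float32) and friends), so any part of the pipeline that calls them with a different number type, for example a Float64 training grid, hits a MethodError, and a broadcast whose body errors or cannot be inferred is exactly how GPU broadcasting ends up with element type Union{}. Loosening the signatures to t::Real, or ensuring every input really is Float32, would rule this out.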

SymPy match() doesn't work with multiple integrals?

I want to obtain the parameters of a multiple integral:

from sympy import *

z = symbols('z', real=True)
nu = Function('nu', real=True, positive=True)(z)
xx1 = nu.integrate((z, 0, z))
xx2 = xx1.integrate((z, 0, 1))
xx1
xx2
xx1.cancel
xx2.cancel
[screenshot: the integrals and their expression trees as shown in JupyterLab]
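For reference without the screenshot, printing the two expressions should give (a sketch of the expected output):

print(xx1)   # Integral(nu(z), (z, 0, z))
print(xx2)   # Integral(nu(z), (z, 0, z), (z, 0, 1))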
Then, with Wild variables W1 through W7, I try to obtain the parameters of the integrals:

W1, W2, W3, W4, W5, W6, W7 = symbols('W1:8', cls=Wild)
mm1 = Integral(W1, (W2, W3, W4))
mm2 = Integral(W1, (W2, W3, W4), (W5, W6, W7))
mm1
mm2
rr1 = xx1.match(mm1)
rr2 = xx2.match(mm2)
rr1
type(rr2)   # rr2 is None: the match failed
[screenshot: the matching results]
It works for the single integral but doesn't for the multiple one. Why?
Second question: why is the integration variable z not captured by W2?
Third question: why is the variable z replaced by the symbol _0 in W1: nu(_0)? How do I do this right?
The .list attribute recommended in a comment doesn't work:
[screenshot: the error message]
Every expression has args -- all you are (apparently) trying to do is capture the arguments in variables. Wild symbols are not needed for this:
>>> W1,(W2,W3,W4),(W5,W6,W7) = xx2.args
>>> W1,(W2,W3,W4),(W5,W6,W7)
(nu(z), (z, 0, z), (z, 0, 1))
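If the number of nested limits is not known in advance, the same unpacking works with a splat. A minimal, self-contained sketch that recreates the integrals from the question:

from sympy import symbols, Function

z = symbols('z', real=True)
nu = Function('nu', real=True, positive=True)(z)
xx1 = nu.integrate((z, 0, z))
xx2 = xx1.integrate((z, 0, 1))

# An Integral's args are (integrand, limit_tuple, limit_tuple, ...).
integrand, *limits = xx2.args
print(integrand)    # nu(z)
print(limits)       # [(z, 0, z), (z, 0, 1)]

# Each limit tuple unpacks into (variable, lower, upper):
for var, lo, hi in limits:
    print(var, lo, hi)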

Julia: InexactError when trying to get the integer part of a BigFloat

I am interested in getting the digits of a BigFloat in the form of bytes. I get a very strange error that I cannot debug, so I provide a minimal example where the error appears.
function floatToBytes(x::BigFloat)
    ret = zeros(UInt8, 4)
    xs = significand(x) / 2
    b = UInt8(0)
    for i = 1:4
        xs *= 256
        b = trunc(UInt8, xs)
        ret[i] = b
        xs -= b
    end
    return ret
end

println(floatToBytes(BigFloat(0.9921875001164153)))
println(floatToBytes(BigFloat(0.9960937501164153)))
What I get when running this is
UInt8[0xfe, 0x00, 0x00, 0x00]
ERROR: LoadError: InexactError()
Stacktrace:
[1] trunc(::Type{UInt8}, ::BigFloat) at ./mpfr.jl:201
etc.
It seems that it doesn't want to turn 255 into a UInt8. I can circumvent the problem by defining the function as
function floatToBytes(x::BigFloat)
    ret = zeros(UInt8, 4)
    xs = significand(x) / 2
    b = UInt8(0)
    for i = 1:4
        xs *= 256
        try
            b = trunc(UInt8, xs)
        catch
            b = trunc(UInt8, xs - 1) + UInt8(1)
        end
        ret[i] = b
        xs -= b
    end
    return ret
end
But this is highly unsatisfactory. What is going on here?
This looks like a bug in trunc for BigFloat. The current code does (typemin(T) <= x <= typemax(T)) || throw(InexactError(:trunc, T, x)), which throws because x is (slightly) larger than 255, the typemax of UInt8, even though its truncated value 255 would fit.
It actually needs to do the trunc in the BigFloat domain first and then convert to T (with the conversion checking against typemax).
I've opened an issue regarding this at: https://github.com/JuliaLang/julia/issues/24041
In the meantime, a solution could be to do:
UInt8(trunc(xs))
i.e. trunc first and cast later. For example:
julia> UInt8(trunc(BigFloat(0.9960937501164153)*256))
0xff

"local variable T conflicts with a static parameter" warning

I'm getting a warning that I don't understand.
I first run the following code:
type PDF{T <: Real}
    𝛑::Vector{Float64}     # K
    μs::Matrix{T}          # D x K
    Σs::Vector{Matrix{T}}  # K{D x D}
end

type Q{T <: Real}
    w::Union{Float64, Vector{Float64}}
    μpair::Union{Vector{T}, Matrix{T}}
    Σpair::Union{Matrix{T}, Tuple{Matrix{T}, Matrix{T}}}  # K{D x D}
end

type Smod{T <: Real}
    H::Matrix{T}           # D x D
    Σs::Vector{Matrix{T}}  # K{D x D}
    qs::Vector{Q{T}}
end

type Scale{T <: Real}
    μ::Vector{T}  # D
    Σ::Matrix{T}  # D x D
end

type Parameters{T <: Real}
    scale::Scale{T}
    w::Vector{Float64}
    maxNumCompsBeforeCompression::Integer
    numComponentsAbsorbed::Integer
end

type KDE{T}
    pdf::PDF{T}
    smod::Smod{T}
    params::Parameters{T}
end
And when, after this, I run the following snippet in IJulia:
function initializeKDE{T <: Real}(x::Vector{T})
    d = size(x, 1)
    T = typeof(x)
    𝛑 = ones(Float64, 1)
    μs = Array(T, d, 1)
    μs[:, 1] = x
    Σs = Array(Matrix{T}, 0)
    pdf = PDF(𝛑, μs, Σs)
    H = zeros(T, d, d)
    qs = Array(Q{T}, 0)
    smod = Smod(H, Σs, qs)
    scale = Scale(x, H)
    w = [0.0, 1.0]
    maxNumCompsBeforeCompression = min(10, (0.5d^2 + d))
    numComponentsAbsorbed = 0
    params = Params(scale, w, maxNumCompsBeforeCompression, numComponentsAbsorbed)
    kde = KDE(pdf, smod, params)
    return kde::KDE
end
I get the following warning:
WARNING: local variable T conflicts with a static parameter in initializeKDE at In[4]:3.
where In[4]:3 corresponds to the 3rd line of the 2nd snippet.
Can anyone explain in plain English what this warning is saying?
This is saying that you are trying to use T in two different ways: once as a "static parameter" and once as a local variable.
Firstly, you are using T as the parameter with which you are parametrising the function initializeKDE:
function initializeKDE{T <: Real}(x::Vector{T})
But then you are trying to redefine a new T in that third line:
T = typeof(x)
What are you trying to do here? If you are trying to define T to be the type of the elements that the vector x contains, then you should just delete this line and everything should just work: T will automatically take the element type (eltype) of the vector that you pass to the initializeKDE function. (Note that typeof(x) would in any case be the type of the whole vector, e.g. Vector{Float64}, not the element type.)

class Matrix: AttributeError: Matrix instance has no attribute '__getitem__' in max(self, other) method

class Matrix:
    def __init__(self, nr, nc):
        self.NRows = nr
        self.NCols = nc
        self.data = [[0] * self.NCols for r in range(self.NRows)]

    def max(self, other):
        """Return a matrix with as many rows as the shorter of self and other
        and as many columns as the narrower of self and other.
        Each entry of the returned matrix should be the larger (the max) of
        the corresponding entries in self and other.
        """
        minrows = min(other.NRows, self.NRows)
        mincols = min(other.NCols, self.NCols)
        M = Matrix(minrows, mincols)
        for i in range(minrows):
            for j in range(mincols):
                M.data[i][j] = max(self.data[i][j], other[i][j])
        return M
This code gives a traceback in max when tested; the output says:

...in max
    M.data[i][j] = max(self.data[i][j], other[i][j])
AttributeError: Matrix instance has no attribute '__getitem__'

How do I get rid of this error? Where is my mistake? Can someone please help?
You should use

M.data[i][j] = max(self.data[i][j], other.data[i][j])

instead of

M.data[i][j] = max(self.data[i][j], other[i][j])

other is a Matrix instance, not a list of lists, so other[i] only works if the Matrix class defines __getitem__; the nested lists live in its data attribute.
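Alternatively, if you want other[i][j]-style indexing to work on Matrix itself, you can define __getitem__. A sketch of that variant (not part of the original assignment code):

class Matrix:
    def __init__(self, nr, nc):
        self.NRows = nr
        self.NCols = nc
        self.data = [[0] * nc for _ in range(nr)]

    def __getitem__(self, i):
        # Delegate indexing to the underlying row list so m[i][j] works.
        return self.data[i]

    def max(self, other):
        minrows = min(other.NRows, self.NRows)
        mincols = min(other.NCols, self.NCols)
        M = Matrix(minrows, mincols)
        for i in range(minrows):
            for j in range(mincols):
                M.data[i][j] = max(self[i][j], other[i][j])
        return M

a = Matrix(2, 2); a.data = [[1, 5], [3, 0]]
b = Matrix(2, 2); b.data = [[2, 4], [1, 7]]
print(a.max(b).data)   # [[2, 5], [3, 7]]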