SymPy match() doesn't work with multiple integrals?

I want to obtain the parameters of a multiple integral:
'''
from sympy import *
z = symbols('z', real=True)
nu = Function('nu', real=True, positive=True)(z)
xx1 = nu.integrate((z,0,z))
xx2 = xx1.integrate((z,0,1))
xx1
xx2
xx1.cancel()
xx2.cancel()
'''
The integral and its structure are shown in JupyterLab.
Then, with Wild-type variables Wi, I try to obtain the parameters of the integrals:
'''
W1, W2, W3, W4, W5, W6, W7 = symbols('W1:8', cls=Wild)  # Wild pattern symbols
mm1 = Integral(W1,(W2,W3,W4))
mm2 = Integral(W1,(W2,W3,W4),(W5,W6,W7))
mm1
mm2
rr1 = xx1.match(mm1)
rr2 = xx2.match(mm2)
rr1
type(rr2)
'''
The matching result is shown in JupyterLab.
It works for a single integral but doesn't for a multiple one. Why?
Second question: why is the integration variable z not matched to W2?
Third question: why is the variable z changed to the symbol _0 in W1: nu(z)? How do I do this correctly?
The '.list' recommended in a comment doesn't work: it gives an error message.

Every expression has args -- all you are (apparently) trying to do is capture the arguments in variables. Wild symbols are not needed for this:
>>> W1,(W2,W3,W4),(W5,W6,W7) = xx2.args
>>> W1,(W2,W3,W4),(W5,W6,W7)
(nu(z), (z, 0, z), (z, 0, 1))
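If you do want to go through match, the Wild symbols have to be created explicitly before they can appear in the pattern. A minimal sketch for the single-integral case (the names are mine; note that SymPy reports the bound integration variable as a dummy symbol, which is where the _0 in the question comes from):
from sympy import symbols, Function, Integral, Wild

z = symbols('z', real=True)
nu = Function('nu', real=True, positive=True)(z)
xx1 = nu.integrate((z, 0, z))   # unevaluated Integral(nu(z), (z, 0, z))

# Wild symbols must exist as objects before they are used in a pattern
W1, W2, W3, W4 = symbols('W1:5', cls=Wild)

pattern = Integral(W1, (W2, W3, W4))
print(xx1.match(pattern))
# the integration variable comes back as a dummy symbol (_0), not z,
# which is the behaviour the second and third questions are about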

Related

Using the GPU with Lux and NeuralPDE Julia

I am trying to run a model on the GPU; there is no problem on the CPU. I think using measured boundary conditions is somehow causing the issue, but I am not sure. I am following this example for the GPU: https://docs.sciml.ai/dev/modules/NeuralPDE/tutorials/gpu/ and this example for using measured boundary conditions: https://docs.sciml.ai/dev/modules/MethodOfLines/tutorials/icbc_sampled/
using Random
using NeuralPDE, Lux, CUDA, Random
using Optimization
using OptimizationOptimisers
using NNlib
import ModelingToolkit: Interval
using Interpolations
# Measured Boundary Conditions (Arbitrary For Example)
bc1 = 1.0:1:1001.0 .|> Float32
bc2 = 1.0:1:1001.0 .|> Float32
ic1 = zeros(101) .|> Float32
ic2 = zeros(101) .|> Float32;
# Interpolation Functions Registered as Symbolic
itp1 = interpolate(bc1, BSpline(Cubic(Line(OnGrid()))))
up_cond_1_f(t::Float32) = itp1(t)
@register_symbolic up_cond_1_f(t)
itp2 = interpolate(bc2, BSpline(Cubic(Line(OnGrid()))))
up_cond_2_f(t::Float32) = itp2(t)
@register_symbolic up_cond_2_f(t)
itp3 = interpolate(ic1, BSpline(Cubic(Line(OnGrid()))))
init_cond_1_f(x::Float32) = itp3(x)
@register_symbolic init_cond_1_f(x)
itp4 = interpolate(ic2, BSpline(Cubic(Line(OnGrid()))))
init_cond_2_f(x::Float32) = itp4(x)
@register_symbolic init_cond_2_f(x);
# Parameters and differentials
@parameters t, x
@variables u1(..), u2(..)
Dt = Differential(t)
Dx = Differential(x);
# Arbitrary Equations
eqs = [Dt(u1(t, x)) + Dx(u2(t, x)) ~ 0.,
Dt(u1(t, x)) * u1(t,x) + Dx(u2(t, x)) + 9.81 ~ 0.]
# Boundary Conditions with Measured Data
bcs = [
u1(t,1) ~ up_cond_1_f(t),
u2(t,1) ~ up_cond_2_f(t),
u1(1,x) ~ init_cond_1_f(x),
u2(1,x) ~ init_cond_2_f(x)
]
# Space and time domains
domains = [t ∈ Interval(1.0,1001.0),
x ∈ Interval(1.0,101.0)];
# Neural network
input_ = length(domains)
n = 10
chain = Chain(Dense(input_,n,NNlib.tanh_fast),Dense(n,n,NNlib.tanh_fast),Dense(n,4))
strategy = GridTraining(.25)
ps = Lux.setup(Random.default_rng(), chain)[1]
ps = ps |> Lux.ComponentArray |> gpu .|> Float32
discretization = PhysicsInformedNN(chain,
strategy,
init_params=ps)
# Model Setup
@named pdesystem = PDESystem(eqs,bcs,domains,[t,x],[u1(t, x),u2(t, x)])
prob = discretize(pdesystem,discretization);
sym_prob = symbolic_discretize(pdesystem,discretization);
# Losses and Callbacks
pde_inner_loss_functions = sym_prob.loss_functions.pde_loss_functions
bcs_inner_loss_functions = sym_prob.loss_functions.bc_loss_functions
callback = function (p, l)
println("loss: ", l)
println("pde_losses: ", map(l_ -> l_(p), pde_inner_loss_functions))
println("bcs_losses: ", map(l_ -> l_(p), bcs_inner_loss_functions))
return false
end;
# Train Model (Throws Error)
res = Optimization.solve(prob,Adam(0.01); callback = callback, maxiters=5000)
phi = discretization.phi;
I get the following error:
GPU broadcast resulted in non-concrete element type Union{}.
This probably means that the function you are broadcasting contains an error or type instability.
Please advise.

Why am I getting different P values when using different packages

I am trying to compare categorical data from 2 groups.
        Yes   No
GrpA    152   220
GrpB    187   350
However, I am getting different P value results when using different methods:
count = [152, 220]
nobs = [187, 350]
import statsmodels.stats.proportion
import scipy.stats
# USING STATSMODELS PACKAGE:
res = statsmodels.stats.proportion.proportions_chisquare(count, nobs)
print("P value =", res[1])
res = statsmodels.stats.proportion.proportions_ztest(count, nobs)
print("P value =", res[1])
# USING SCIPY.STATS PACKAGE:
res = scipy.stats.chi2_contingency([count, nobs], correction=True)
print("P value =", res[1])
res = scipy.stats.chi2_contingency([count, nobs], correction=False)
print("P value =", res[1])
Output is:
P value using proportions_chisquare = 1.037221289479458e-05
P value using proportions_ztest= 1.0372212894794536e-05
P value using chi2_contingency with correction= 0.0749218380702875
P value using chi2_contingency without correction= 0.06421435896354544
The first two are identical (and highly significant), but they differ from the last two (non-significant).
Why are the results different? Which is the correct method to do this analysis?
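For what it's worth, the two libraries are not being handed the same data layout here: proportions_chisquare(count, nobs) interprets count as the number of 'Yes' outcomes per group and nobs as the total number of observations per group, while chi2_contingency([count, nobs]) interprets the nested list as a 2x2 table of Yes/No counts. A minimal sketch, assuming the intended comparison is 152 'Yes' out of 372 in GrpA versus 187 'Yes' out of 537 in GrpB:
import numpy as np
import scipy.stats
import statsmodels.stats.proportion

# 2x2 contingency table: rows = groups, columns = Yes/No counts
table = np.array([[152, 220],   # GrpA: 152 Yes, 220 No
                  [187, 350]])  # GrpB: 187 Yes, 350 No

# scipy.stats.chi2_contingency reads the nested list as this table of counts
p_scipy = scipy.stats.chi2_contingency(table, correction=False)[1]

# proportions_chisquare expects the "Yes" counts and the group totals instead
yes_counts = table[:, 0]        # [152, 187]
totals = table.sum(axis=1)      # [372, 537]
p_sm = statsmodels.stats.proportion.proportions_chisquare(yes_counts, totals)[1]

print(p_scipy, p_sm)            # with the same data layout the p-values should agree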

Callback in Benders decomposition

I am learning the Benders decomposition method and want to apply it to a small instance. I started from the "bendersatsp.py" example that ships with CPLEX. When I run this example on my problem, I get the error below. Could you please tell me what the problem is and how I can fix it? The following shows my modifications to the lazy constraint callback. I have two decision variables in the master problem, "z_{ik}" and "u_{k}", which are incorporated as constants in the workerLP.
class BendersLazyConsCallback(LazyConstraintCallback):

    def __call__(self):
        print("shoma")
        v = self.v
        u = self.u
        z = self.z
        print("u:", u)
        print("z:", z)
        workerLP = self.workerLP
        boxty = len(u)
        # scenarios = self.scenarios2
        ite = len(z)
        print("ite:", ite)
        print("boxty:", boxty)
        # Get the current x solution
        sol1 = []
        sol2 = []
        sol3 = []
        print("okkkk")
        for k in range(1, boxty + 1):
            sol1.append([])
            sol1[k-1] = [self.get_values(u[k-1])]
            print("sol1:", sol1[k-1])
        for i in range(1, ite + 1):
            sol2.append([])
            sol2[i-1] = self.get_values(z[i-1])
            print("sol2:", sol2[i-1])
        for i in range(1, ite + 1):
            sol3.append([])
            sol3[i-1] = self.get_values(v[i-1])
            # print("sol3:", sol3[i-1])
        # Benders' cut separation
        if workerLP.separate(sol3, sol1, sol2, v, u, z):
            self.add(cut=workerLP.cutLhs, sense="G", rhs=workerLP.cutRhs)
CPLEX Error 1006: Error during callback.
benders(sys.argv[1][0], datafile)
cpx.solve()
_proc.mipopt(self._env._e, self._lp)
check_status(env, status)
raise callback_exception
TypeError: unsupported operand type(s) for +: 'int' and 'list'
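Judging only from the callback shown above (workerLP.separate is not shown), one asymmetry stands out: each sol1 entry is a one-element list, while sol2 and sol3 entries are plain floats, so any arithmetic inside separate that adds a number to a sol1 entry would raise exactly this TypeError. A minimal illustration with made-up values:
# Each sol1 entry is a one-element list, while sol2/sol3 entries are plain floats
sol1_entry = [0.5]   # like sol1[k-1] = [self.get_values(u[k-1])]
sol2_entry = 0.5     # like sol2[i-1] = self.get_values(z[i-1])

print(1 + sol2_entry)          # fine: 1.5
try:
    print(1 + sol1_entry)
except TypeError as exc:
    print(exc)                 # unsupported operand type(s) for +: 'int' and 'list'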

Cython and Scipy

I'm cythonizing a script that contains the scipy.stats.norm() function for calculating implied volatility.
Instead of scipy.stats.norm() I use scipy.special.ndtr(), since this is somewhat faster. However, when profiling my script, most of the time (50 of 125 s) is still spent within this function, in particular within _distn_infrastructure.py:1610(cdf).
This is the function:
def cdf(self, x, *args, **kwds):
    """
    in class rv_continuous(rv_generic):
    Cumulative distribution function of the given RV.

    Parameters
    ----------
    x : array_like
        quantiles
    arg1, arg2, arg3,... : array_like
        The shape parameter(s) for the distribution (see docstring of the
        instance object for more information)
    loc : array_like, optional
        location parameter (default=0)
    scale : array_like, optional
        scale parameter (default=1)

    Returns
    -------
    cdf : ndarray
        Cumulative distribution function evaluated at `x`
    """
    args, loc, scale = self._parse_args(*args, **kwds)
    x, loc, scale = map(asarray, (x, loc, scale))
    args = tuple(map(asarray, args))
    x = (x - loc) * 1.0 / scale
    cond0 = self._argcheck(*args) & (scale > 0)
    cond1 = (scale > 0) & (x > self.a) & (x < self.b)
    cond2 = (x >= self.b) & cond0
    cond = cond0 & cond1
    output = zeros(shape(cond), 'd')
    place(output, (1 - cond0) + np.isnan(x), self.badvalue)
    place(output, cond2, 1.0)
    if any(cond):  # call only if at least 1 entry
        goodargs = argsreduce(cond, *((x,) + args))
        place(output, cond, self._cdf(*goodargs))
    if output.ndim == 0:
        return output[()]
    return output
However, I neither see any function that does the actual CDF calculation, nor a call to another function that actually does it. I tried to print the output of this function by inserting
print output
before
return output
but the print command is highlighted as a syntax error, and when running the script there is no printed output. How do I proceed from here? I somehow need to speed up the normal-CDF calculation.
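For the speed-up itself, the generic machinery above eventually dispatches to self._cdf, which for the normal distribution is essentially scipy.special.ndtr applied to the standardised argument (x - loc)/scale. A minimal sketch that calls ndtr directly and skips the rv_continuous overhead (the wrapper name and test values are mine):
import numpy as np
from scipy.special import ndtr
from scipy.stats import norm

def norm_cdf(x, loc=0.0, scale=1.0):
    # standard-normal CDF of the standardised argument, without rv_continuous overhead
    x = np.asarray(x, dtype=np.float64)
    return ndtr((x - loc) / scale)

x = np.linspace(-3.0, 3.0, 5)
print(norm_cdf(x, loc=1.0, scale=2.0))
print(norm.cdf(x, loc=1.0, scale=2.0))  # same values, but routed through the generic cdf above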

Matlab function calling basic

I'm new to MATLAB and am learning the basic syntax.
I've written the file GetBin.m:
function res = GetBin(num_bin, bin, val)
if val >= bin(num_bin - 1)
    res = num_bin;
else
    for i = (num_bin - 1) : 1
        if val < bin(i)
            res = i;
        end
    end
end
and I call it with:
num_bin = 5;
bin = [48.4,96.8,145.2,193.6]; % bin stands for the intermediate borders, so there are 5 bins
fea_val = GetBin(num_bin,bin,fea(1,1)) % fea is a pre-defined 280x4096 matrix
It returns error:
Error in GetBin (line 2)
if val >= bin(num_bin - 1)
Output argument "res" (and maybe others) not assigned during call to
"/Users/mac/Documents/MATLAB/GetBin.m>GetBin".
Could anybody tell me what's wrong here? Thanks.
You need to ensure that every possible path through your code assigns a value to res.
In your case, it looks like that's not the case, because you have a loop:
for i = (num_bin - 1) : 1
...
end
That loop will never iterate (so it will never assign a value to res). You need to explicitly specify that it's a decrementing loop:
for i = (num_bin - 1) : -1 : 1
...
end
For more info, see the documentation on the colon operator.
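Putting the pieces together, a corrected version of GetBin could look like the sketch below (the default assignment of res is my addition, so that res is set even when val is below the first border):
function res = GetBin(num_bin, bin, val)
% bin holds the num_bin-1 intermediate borders in increasing order
if val >= bin(num_bin - 1)
    res = num_bin;                    % at or above the last border: last bin
else
    res = 1;                          % default in case val < bin(1)
    for i = (num_bin - 1) : -1 : 1    % decrementing loop needs the -1 step
        if val < bin(i)
            res = i;
        end
    end
end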