Incorrect results when using cufftCallbackLoadR for an R2C transform with CuPy FFT callbacks

Since the callback feature isn't documented beyond [this sample](https://docs.cupy.dev/en/latest/user_guide/fft.html#fft-callbacks), I might just be holding it wrong. I tried to modify that example to do a real-to-complex transform, as attached below.
When run, it gives this output:
Traceback (most recent call last):
  File "/workspace/test/CB-R2C-2D.py", line 28, in <module>
    assert cp.allclose(b, c)
AssertionError
C2R works properly but R2C does not. Is it supported? The documentation does not clearly state whether all possible FFT combinations are supported.
Here is the modified R2C code:
#!/usr/bin/env python3
import cupy as cp

# a load callback that overwrites the input array with 1
code = r'''
__device__ cufftReal CB_ConvertInputR(
    void *dataIn,
    size_t offset,
    void *callerInfo,
    void *sharedPtr)
{
    cufftReal x;
    x = 1.;
    return x;
}

__device__ cufftCallbackLoadR d_loadCallbackPtr = CB_ConvertInputR;
'''

a = cp.random.random((64, 128, 128)).astype(cp.float64)

# this rfftn call uses the callback
with cp.fft.config.set_cufft_callbacks(cb_load=code):
    b = cp.fft.rfftn(a, axes=(1, 2))

# this one does not
c = cp.fft.rfftn(cp.ones(shape=a.shape, dtype=cp.float64), axes=(1, 2))

# the results should agree
assert cp.allclose(b, c)
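One thing worth checking (an observation from the code above, not a confirmed fix): the input array is float64, so cuFFT runs a double-precision D2Z transform, whose load callback type in cufftXt.h is cufftCallbackLoadD returning cufftDoubleReal, not the single-precision cufftCallbackLoadR/cufftReal pair used above. A sketch of the double-precision variant, assuming that mismatch is the culprit:

#!/usr/bin/env python3
import cupy as cp

# load callback for a double-precision (D2Z) transform: note the
# cufftDoubleReal return type and the cufftCallbackLoadD pointer type
code = r'''
__device__ cufftDoubleReal CB_ConvertInputD(
    void *dataIn,
    size_t offset,
    void *callerInfo,
    void *sharedPtr)
{
    return 1.;
}

__device__ cufftCallbackLoadD d_loadCallbackPtr = CB_ConvertInputD;
'''

a = cp.random.random((64, 128, 128)).astype(cp.float64)

with cp.fft.config.set_cufft_callbacks(cb_load=code):
    b = cp.fft.rfftn(a, axes=(1, 2))

c = cp.fft.rfftn(cp.ones(shape=a.shape, dtype=cp.float64), axes=(1, 2))
assert cp.allclose(b, c)

Alternatively, casting the input to cp.float32 would match the cufftCallbackLoadR signature of the original snippet.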

Related

Mibian returning "NameError: name 'norm' is not defined..." when using with xlwings

I was trying to write an Excel UDF with xlwings that returns a finance options calculation from the Mibian library. I tried the code below.
import xlwings as xw
import mibian

@xw.func
def BSPutOptionImpVol(underlyingPrice, strike, interestRate, expiryDays, premium):
    c = mibian.BS([underlyingPrice, strike, interestRate, expiryDays], putPrice=premium)
    return c.impliedVolatility
From Excel, I then call the function with =BSPutOptionImpVol(45,32,1,127,0.95).
It's returning the following error:
"NameError: name 'norm' is not defined
call = self.underlyingPrice * norm.cdf(self.d1) - \
File ""C:\Users...\anaconda3\lib\site-packages\mibian__init__.py"", line 307, in _price
[self.callPrice, self.putPrice] = self._price()
File ""C:\Users...\anaconda3\lib\site-packages\mibian__init__.py"", line 276, in init
estimate = eval(className)(args, volatility=mid, performance=True).putPrice
File ""C:\Users...\anaconda3\lib\site-packages\mibian__init__.py"", line 29, in impliedVolatility
self.class.name, args, putPrice=self.putPrice)
File ""C:\Users...\anaconda3\lib\site-packages\mibian__init__.py"", line 293, in init
c = mibian.BS([underlyingPrice, strike, interestRate, expiryDays], putPrice=premium)
File ""c:\users...\documents\python scripts\BSPutImVol.py"", line 6, in BSPutOptionImpVol
ret = func(*args)
File ""C:\Users...\anaconda3\lib\site-packages\xlwings\udfs.py"", line 298, in call_udf
res = call_udf(script, fname, args, this_workbook, FromVariant(caller))
File ""C:\Users...\anaconda3\lib\site-packages\xlwings\server.py"", line 195, in CallUDF
return func(args)
File ""C:\Users...\anaconda3\lib\site-packages\win32com\server\policy.py"", line 586, in _invokeex_
return S_OK, -1, self._invokeex_(dispid, lcid, wFlags, args, None, None)
File ""C:\Users...\anaconda3\lib\site-packages\win32com\server\policy.py"", line 283, in _invoke_
return self._invoke_(dispid, lcid, wFlags, args)
File ""C:\Users...\anaconda3\lib\site-packages\win32com\server\policy.py"", line 278, in _Invoke_"
I have also tried calling the function without passing the parameters in (i.e. with the input values hard-coded in the Python code), but I still get the same error.
However, if I comment out xlwings and just run the Python code from Spyder as below, it works.
#import xlwings as xw
import mibian

#@xw.func
def BSPutOptionImpVol(underlyingPrice, strike, interestRate, expiryDays, premium):
    c = mibian.BS([underlyingPrice, strike, interestRate, expiryDays], putPrice=premium)
    # return c.impliedVolatility
    print(c.impliedVolatility)
I'm a newbie to Python, so I'd appreciate any help and advice. Thanks.
Try:
from scipy.stats import norm
Try adding
import scipy
to your code. That resolved the 'norm' issue for me.
This worked for me:
pip uninstall numpy scipy
and then
pip install -U numpy scipy
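Putting the import suggestions together with the original UDF, here is a minimal sketch. It assumes the diagnosis implied by the answers: mibian imports norm from scipy.stats inside a try/except at import time, and under the xlwings COM server that import can fail silently, so importing scipy.stats up front makes the name available when mibian needs it.

import scipy.stats  # imported first so mibian's own scipy import succeeds
import xlwings as xw
import mibian

@xw.func
def BSPutOptionImpVol(underlyingPrice, strike, interestRate, expiryDays, premium):
    c = mibian.BS([underlyingPrice, strike, interestRate, expiryDays], putPrice=premium)
    return c.impliedVolatility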

GPflow, bvh: ValueError: mean must be 1 dimensional

I am getting a weird "ValueError: mean must be 1 dimensional" when trying to build a hierarchical GP-LVM model. Basically, I'm trying to reproduce this paper: Hierarchical Gaussian Process Latent Variable Models, using GPflow.
Therefore I implemented my own new model as follows:
class myGPLVM(gpflow.models.BayesianModel):
    def __init__(self, data, latent_data, x_data_mean, kernel):
        super().__init__()
        print("GPLVM")
        self.kernel0 = kernel[0]
        self.kernel1 = kernel[1]
        self.mean_function = Zero()
        self.likelihood0 = gpflow.likelihoods.Gaussian(1.0)
        self.likelihood1 = gpflow.likelihoods.Gaussian(1.0)

        # make some parameters
        self.data = (gpflow.Parameter(x_data_mean), gpflow.Parameter(latent_data), data)

    def hierarchy_ll(self):
        x, h, y = self.data
        K = self.kernel0(x)
        num_data = x.shape[0]
        k_diag = tf.linalg.diag_part(K)
        s_diag = tf.fill([num_data], self.likelihood0.variance)
        ks = tf.linalg.set_diag(K, k_diag + s_diag)
        L = tf.linalg.cholesky(ks)
        m = self.mean_function(x)

        return multivariate_normal(h, m, L)

    def log_likelihood(self):
        """
        Computes the log likelihood.

        .. math::
            \log p(Y | \theta).
        """
        x, h, y = self.data
        K = self.kernel1(h)
        num_data = h.shape[0]
        k_diag = tf.linalg.diag_part(K)
        s_diag = tf.fill([num_data], self.likelihood1.variance)
        ks = tf.linalg.set_diag(K, k_diag + s_diag)
        L = tf.linalg.cholesky(ks)
        m = self.mean_function(h)

        # [R,] log-likelihoods for each independent dimension of Y
        log_prob = multivariate_normal(y, m, L)  # <- throws the error!
        log_prob_h = self.hierarchy_ll()
        log_likelihood = tf.reduce_sum(log_prob) + tf.reduce_sum(log_prob_h)

        return log_likelihood
The model seems to work with a toy example:
from sklearn.datasets.samples_generator import make_blobs
X, y = make_blobs(n_samples=40, centers=3, n_features=12, random_state=2)
Y = tf.convert_to_tensor(X, dtype=default_float())
but it fails and throws the error when I try it with a bvh file (the one from the paper, actually). I also used Lawrence's code to read my bvh file from mocap, which I modified to work with Python 3.
Anyway, it's been a few days and I am out of ideas. I tried multiple ways to force my mean array "m" to be one-dimensional, but nothing worked. I also tried the "three_phase_oil_flow" dataset from the first GPLVM paper, which works as well.
Therefore, I would assume that my model is correct, or at least that I have some optimisation going on, and would think that perhaps the bvh reader is the cause. But the data all seems fine to me... In particular, I don't understand why, when I force the mean like:
m = np.zeros((np.shape(m)[0], 1))
log_prob = multivariate_normal(y, m, L)
or even with the gpflow Zero function
m = Zero(h)
log_prob = multivariate_normal(y, m, L)
it still throws the error. Any help will be highly appreciated.
edited thanks to: Artem Artemev
The rest of the code if anyone wants to try to reproduce:
https://github.com/michaelStettler/h-GPLVM
Error output:
(venv) MacBookMichael2:stackOverflow michaelstettler$ python3 HGPLVM.py
(199, 96)
shape Y (199, 3, 38)
2020-01-26 17:00:48.104029: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-01-26 17:00:48.113609: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7f8dd5ff5410 executing computations on platform Host. Devices:
2020-01-26 17:00:48.113627: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): Host, Default Version
shape Y (199, 38)
Number of points: 199 and Number of dimensions: 38
shape x_mean_latent (199, 8)
shape x_mean_init (199, 2)
HGPLVM
gpr_data (199, 2) (199, 8) (199, 38)
2020-01-26 17:00:48.139003: W tensorflow/python/util/util.cc:299] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
shape m (199, 1)
Traceback (most recent call last):
  File "HGPLVM.py", line 131, in <module>
    _ = opt.minimize(closure, method="bfgs", variables=model.trainable_variables, options=dict(maxiter=maxiter))
  File "/Users/michaelstettler/PycharmProjects/GPflow/venv/lib/python3.6/site-packages/gpflow/optimizers/scipy.py", line 60, in minimize
    **scipy_kwargs)
  File "/Users/michaelstettler/PycharmProjects/GPflow/venv/lib/python3.6/site-packages/scipy/optimize/_minimize.py", line 594, in minimize
    return _minimize_bfgs(fun, x0, args, jac, callback, **options)
  File "/Users/michaelstettler/PycharmProjects/GPflow/venv/lib/python3.6/site-packages/scipy/optimize/optimize.py", line 998, in _minimize_bfgs
    gfk = myfprime(x0)
  File "/Users/michaelstettler/PycharmProjects/GPflow/venv/lib/python3.6/site-packages/scipy/optimize/optimize.py", line 327, in function_wrapper
    return function(*(wrapper_args + args))
  File "/Users/michaelstettler/PycharmProjects/GPflow/venv/lib/python3.6/site-packages/scipy/optimize/optimize.py", line 73, in derivative
    self(x, *args)
  File "/Users/michaelstettler/PycharmProjects/GPflow/venv/lib/python3.6/site-packages/scipy/optimize/optimize.py", line 65, in __call__
    fg = self.fun(x, *args)
  File "/Users/michaelstettler/PycharmProjects/GPflow/venv/lib/python3.6/site-packages/gpflow/optimizers/scipy.py", line 72, in _eval
    loss, grads = _compute_loss_and_gradients(closure, variables)
  File "/Users/michaelstettler/PycharmProjects/GPflow/venv/lib/python3.6/site-packages/gpflow/optimizers/scipy.py", line 116, in _compute_loss_and_gradients
    loss = loss_cb()
  File "HGPLVM.py", line 127, in closure
    return - model.log_marginal_likelihood()
  File "/Users/michaelstettler/PycharmProjects/GPflow/venv/lib/python3.6/site-packages/gpflow/models/model.py", line 45, in log_marginal_likelihood
    return self.log_likelihood(*args, **kwargs) + self.log_prior()
  File "HGPLVM.py", line 62, in log_likelihood
    log_prob = multivariate_normal(y, m, L)
  File "mtrand.pyx", line 3729, in numpy.random.mtrand.RandomState.multivariate_normal
ValueError: mean must be 1 dimensional
I would recommend posting a working MWE. I have tried to use your code snippets, but they give me errors.
I don't have any issues with the multivariate_normal function. If you have localised the issue correctly, you can debug TF 2.0 more thoroughly and find the place that causes the exception. Here is the code which I'm running:
In [2]: from sklearn.datasets.samples_generator import make_blobs
...: X, y = make_blobs(n_samples=40, centers=3, n_features=12, random_state=2)
In [10]: m = np.zeros((np.shape(y)[0], 1))
In [11]: m.shape
Out[11]: (40, 1)
In [12]: y.shape
Out[12]: (40,)
In [13]: L = np.eye(m.shape[0])
In [15]: gpflow.logdensities.multivariate_normal(y, m, L)
Out[15]:
<tf.Tensor: shape=(40,), dtype=float64, numpy=
array([ -56.75754133, ...])>
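One detail worth flagging in the question's traceback (my reading, not stated in the thread): the failing frame is mtrand.pyx in numpy.random.mtrand.RandomState.multivariate_normal, so the bare name multivariate_normal in HGPLVM.py is resolving to NumPy's random sampler (which does raise "mean must be 1 dimensional") rather than to GPflow's log-density, e.g. via a wildcard import. A sketch of the unambiguous import:

# bind the GPflow log-density explicitly so the name cannot be shadowed by
# numpy.random's multivariate_normal sampler
from gpflow.logdensities import multivariate_normal

log_prob = multivariate_normal(y, m, L)  # [R,] log-densities, as in the answer above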

Sympy .all_coeffs() returned list is not readable by scipy

I have a question about the data type of the result returned by Sympy's Poly.all_coeffs(). I have started to use Sympy just recently.
My Sympy transfer function is the following:
Then I run this code:
n,d = fraction(Gs)
num = Poly(n,s)
den = Poly(d,s)
num_c = num.all_coeffs()
den_c = den.all_coeffs()
I get:
Then I run this code:
from scipy import signal
import numpy as np
import matplotlib.pyplot as plt

#nu = [5000000.0]
#de = [4.99, 509000.0]
nu = num_c
de = den_c

sys = signal.lti(nu, de)
w, mag, phase = signal.bode(sys)
plt.plot(w/(2*np.pi), mag)
and the result is:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-131-fb960684259c> in <module>
4 nu = num_c
5 de = den_c
----> 6 sys = signal.lti(nu, de)
But if I use the commented-out plain Python lists 'nu' and 'de' instead, the program works. So what is wrong here?
Why did you show only a bit of the error? Why not the full message, maybe even the full traceback?
In [60]: sys = signal.lti(num_c, den_c)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-60-21f71ecd8884> in <module>
----> 1 sys = signal.lti(num_c, den_c)
/usr/local/lib/python3.6/dist-packages/scipy/signal/ltisys.py in __init__(self, *system, **kwargs)
590 self._den = None
591
--> 592 self.num, self.den = normalize(*system)
593
594 def __repr__(self):
/usr/local/lib/python3.6/dist-packages/scipy/signal/filter_design.py in normalize(b, a)
1609 leading_zeros = 0
1610 for col in num.T:
-> 1611 if np.allclose(col, 0, atol=1e-14):
1612 leading_zeros += 1
1613 else:
<__array_function__ internals> in allclose(*args, **kwargs)
/usr/local/lib/python3.6/dist-packages/numpy/core/numeric.py in allclose(a, b, rtol, atol, equal_nan)
2169
2170 """
-> 2171 res = all(isclose(a, b, rtol=rtol, atol=atol, equal_nan=equal_nan))
2172 return bool(res)
2173
<__array_function__ internals> in isclose(*args, **kwargs)
/usr/local/lib/python3.6/dist-packages/numpy/core/numeric.py in isclose(a, b, rtol, atol, equal_nan)
2267 y = array(y, dtype=dt, copy=False, subok=True)
2268
-> 2269 xfin = isfinite(x)
2270 yfin = isfinite(y)
2271 if all(xfin) and all(yfin):
TypeError: ufunc 'isfinite' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
Now look at the elements of the num_c list (same for den_c):
In [55]: num_c[0]
Out[55]: 500000.000000000
In [56]: type(_)
Out[56]: sympy.core.numbers.Float
The scipy code is doing numpy testing on the inputs, so it first turns the lists into arrays:
In [61]: np.array(num_c)
Out[61]: array([500000.000000000], dtype=object)
This array contains sympy object(s), which numpy can't cast to float under the 'safe' casting rule. But an explicit astype uses 'unsafe' as the default:
In [63]: np.array(num_c).astype(float)
Out[63]: array([500000.])
So let's convert both lists into valid numpy float arrays:
In [64]: sys = signal.lti(np.array(num_c).astype(float), np.array(den_c).astype(float))
In [65]: sys
Out[65]:
TransferFunctionContinuous(
array([100200.4008016]),
array([1.00000000e+00, 1.02004008e+05]),
dt: None
)
Conversion in a list comprehension also works:
sys = signal.lti([float(i) for i in num_c], [float(i) for i in den_c])
In short, you need to convert the sympy objects to floats (or lists of floats) before passing them to scipy.
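For completeness, a self-contained sketch of the whole pipeline; the transfer function here is hypothetical (the question's Gs is not shown) and merely mirrors the commented-out nu/de lists:

import numpy as np
import matplotlib.pyplot as plt
from sympy import symbols, fraction, Poly
from scipy import signal

s = symbols('s')
Gs = 5000000.0 / (4.99 * s + 509000.0)  # hypothetical stand-in

n, d = fraction(Gs)
num_c = Poly(n, s).all_coeffs()
den_c = Poly(d, s).all_coeffs()

# convert the sympy Floats to plain floats before handing them to scipy
sys = signal.lti([float(c) for c in num_c], [float(c) for c in den_c])
w, mag, phase = signal.bode(sys)
plt.plot(w / (2 * np.pi), mag)
plt.show()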

Loading a pretrained model fails when multiple GPU was used for training

I have trained a network model and saved its weights and architecture via the checkpoint = ModelCheckpoint(filepath='weights.hdf5') callback. During training, I use multiple GPUs by calling the function below:
def make_parallel(model, gpu_count):
    def get_slice(data, idx, parts):
        shape = tf.shape(data)
        size = tf.concat([shape[:1] // parts, shape[1:]], axis=0)
        stride = tf.concat([shape[:1] // parts, shape[1:] * 0], axis=0)
        start = stride * idx
        return tf.slice(data, start, size)

    outputs_all = []
    for i in range(len(model.outputs)):
        outputs_all.append([])

    # Place a copy of the model on each GPU, each getting a slice of the batch
    for i in range(gpu_count):
        with tf.device('/gpu:%d' % i):
            with tf.name_scope('tower_%d' % i) as scope:
                inputs = []
                # Slice each input into a piece for processing on this GPU
                for x in model.inputs:
                    input_shape = tuple(x.get_shape().as_list())[1:]
                    slice_n = Lambda(get_slice, output_shape=input_shape, arguments={'idx': i, 'parts': gpu_count})(x)
                    inputs.append(slice_n)

                outputs = model(inputs)
                if not isinstance(outputs, list):
                    outputs = [outputs]

                # Save all the outputs for merging back together later
                for l in range(len(outputs)):
                    outputs_all[l].append(outputs[l])

    # merge outputs on CPU
    with tf.device('/cpu:0'):
        merged = []
        for outputs in outputs_all:
            merged.append(merge(outputs, mode='concat', concat_axis=0))

        return Model(input=model.inputs, output=merged)
With the testing code:
from keras.models import Model, load_model
import numpy as np
import tensorflow as tf
model = load_model('cpm_log/deneme.hdf5')
x_test = np.random.randint(0, 255, (1, 368, 368, 3))
output = model.predict(x = x_test, batch_size=1)
print output[4].shape
I got the error below:
Traceback (most recent call last):
  File "cpm_test.py", line 5, in <module>
    model = load_model('cpm_log/Jun5_1000/deneme.hdf5')
  File "/usr/local/lib/python2.7/dist-packages/keras/models.py", line 240, in load_model
    model = model_from_config(model_config, custom_objects=custom_objects)
  File "/usr/local/lib/python2.7/dist-packages/keras/models.py", line 301, in model_from_config
    return layer_module.deserialize(config, custom_objects=custom_objects)
  File "/usr/local/lib/python2.7/dist-packages/keras/layers/__init__.py", line 46, in deserialize
    printable_module_name='layer')
  File "/usr/local/lib/python2.7/dist-packages/keras/utils/generic_utils.py", line 140, in deserialize_keras_object
    list(custom_objects.items())))
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/topology.py", line 2378, in from_config
    process_layer(layer_data)
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/topology.py", line 2373, in process_layer
    layer(input_tensors[0], **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/topology.py", line 578, in __call__
    output = self.call(inputs, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/keras/layers/core.py", line 659, in call
    return self.function(inputs, **arguments)
  File "/home/muhammed/DEV_LIBS/developments/mocap/pose_estimation/training/cpm/multi_gpu.py", line 12, in get_slice
    def get_slice(data, idx, parts):
NameError: global name 'tf' is not defined
By inspecting the error output, I gather that the problem is with the parallelization code; however, I can't resolve the issue.
You may need to use custom_objects to enable loading of the model:
import tensorflow as tf
model = load_model('model.h5', custom_objects={'tf': tf,})
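For context, this is my reading of the traceback rather than anything stated in the thread: get_slice inside make_parallel calls tf.shape, tf.concat and tf.slice, and when load_model deserializes the Lambda layers it re-executes that function in a namespace where the name tf is not bound, hence the NameError. Passing the module in via custom_objects injects the missing name:

import tensorflow as tf
from keras.models import load_model

# 'tf' is the global name the serialized get_slice Lambda refers to
model = load_model('cpm_log/deneme.hdf5', custom_objects={'tf': tf})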

"LapackError: Parameter a has non-native byte order in lapack_lite.dgesdd" when importing from Matlab files

After importing this data file from Matlab with scipy.io.loadmat, things appeared to work fine until we tried to calculate the condition number of one of the matrices within.
Here's the minimum amount of code that reproduces for us:
import scipy.io
import numpy

stuff = scipy.io.loadmat("dati-esercizio1.mat")
numpy.linalg.cond(stuff["A"])
Here's the extended stack trace, courtesy of IPython:
In [3]: numpy.linalg.cond(A)
---------------------------------------------------------------------------
LapackError Traceback (most recent call last)
/snip/<ipython-input-3-15d9ef00a605> in <module>()
----> 1 numpy.linalg.cond(A)
/snip/python2.7/site-packages/numpy/linalg/linalg.py in cond(x, p)
1409 x = asarray(x) # in case we have a matrix
1410 if p is None:
-> 1411 s = svd(x,compute_uv=False)
1412 return s[0]/s[-1]
1413 else:
/snip/python2.7/site-packages/numpy/linalg/linalg.py in svd(a, full_matrices, compute_uv)
1313 work = zeros((lwork,), t)
1314 results = lapack_routine(option, m, n, a, m, s, u, m, vt, nvt,
-> 1315 work, -1, iwork, 0)
1316 lwork = int(work[0])
1317 work = zeros((lwork,), t)
LapackError: Parameter a has non-native byte order in lapack_lite.dgesdd
All the obvious ideas (like flattening and reshaping the matrix, or recreating it from scratch element by element) failed. How should I massage the data, then, in order to make it more agreeable to numpy?
It's a bug, fixed some time ago: https://github.com/numpy/numpy/pull/235
Workaround:
np.linalg.cond(stuff['A'].newbyteorder('='))
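A caveat on this workaround (mine, not from the thread): ndarray.newbyteorder only relabels the dtype and leaves the underlying bytes untouched, so it preserves the values only if the data was already stored in native order but mislabeled. If the bytes really are foreign-order, swap them as well:

# relabels the dtype AND swaps the bytes, preserving the values
np.linalg.cond(stuff['A'].byteswap().newbyteorder('='))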
This works for me:
In [33]: stuff = loadmat('dati-esercizio1.mat')
In [34]: a = stuff['A']
In [35]: try: np.linalg.cond(a)
....: except: print "Fail!"
Fail!
In [36]: b = np.array(a, dtype='>d')
In [37]: np.linalg.cond(b)
Out[37]: 62493201976.673141
In [38]: np.all(a == b) # Verify they hold the same data.
Out[38]: True
Apparently something is wrong with the byte order (endianness?) of each number in the resulting ndarray, not just with the ndarray object itself.
Something like this, but more elegant, should do the trick:

n, m = A.shape
B = numpy.empty_like(A)
for i in xrange(n):
    for j in xrange(m):
        B[i, j] = float(A[i, j])
A = B

print numpy.linalg.cond(A)  # 62493210091.354507
(For some reason an in-place replacement still gives that error - so there's something wrong with the byte order of the whole object, too.)
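As a shorter alternative to the element-by-element copy (a sketch, not from the thread): astype performs a value-preserving conversion, so casting to native float64 sidesteps the byte-order problem in one line:

B = A.astype(float)  # copies into a fresh native-byte-order float64 array
print numpy.linalg.cond(B)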