_vode.error: failed in processing argument list for call-back f - scipy

I am trying to solve a series of ODEs with the scipy integrate.ode module. What does this error message mean?
create_cb_arglist: Failed to build argument list (siz) with enough arguments (tot-opt) required by user-supplied function (siz,tot,opt=6,7,0).
Traceback (most recent call last):
File "D:/DeepSpillModel/api.py", line 49, in <module>
model.solver(start_time, end_time)
File "D:\DeepSpillModel\far_field.py", line 17, in solver
far_model.simulate(self.parcels, self.initial_location, start_time,
File "D:\DeepSpillModel\single_parcel_model.py", line 15, in simulate
self.t, self.y = calculate_underwater(self.profile, parcel, t0, y0, diff_factor, self.p, delta_t_sub)
File "D:\DeepSpillModel\single_parcel_model.py", line 44, in calculate_underwater
r.integrate(t[-1] + delta_t, step=True)
File "D:\Miniconda3\envs\gnome\lib\site-packages\scipy\integrate\_ode.py", line 433, in integrate
self._y, self.t = mth(self.f, self.jac or (lambda: None),
File "D:\Miniconda3\envs\gnome\lib\site-packages\scipy\integrate\_ode.py", line 1024, in step
r = self.run(*args)
File "D:\Miniconda3\envs\gnome\lib\site-packages\scipy\integrate\_ode.py", line 1009, in run
y1, t, istate = self.runner(*args)
_vode.error: failed in processing argument list for call-back f.
Process finished with exit code 1
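The (siz,tot,opt=6,7,0) part of the message is the clue: VODE built an argument list of size 6 for the callback f, but f requires 7 arguments (0 of them optional). In other words, the signature of the right-hand-side function does not match what the integrator passes, usually because the extra parameters supplied through set_f_params do not line up with the function's parameters. A minimal sketch of the expected wiring, using a hypothetical rhs function:
import numpy as np
from scipy.integrate import ode

def rhs(t, y, k):  # VODE calls f(t, y, *f_args): two fixed args plus one extra here
    return -k * y

r = ode(rhs).set_integrator('vode', method='bdf')
r.set_initial_value(np.array([1.0]), 0.0)
r.set_f_params(0.5)  # must supply exactly the extra arguments rhs expects
while r.successful() and r.t < 1.0:
    r.integrate(r.t + 0.1)
Check that the number of values passed to set_f_params in calculate_underwater matches the parameters of the function handed to ode.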

How to find expected value of np.array using scipy.stats?

I am trying to get the expected value of a NumPy array, but I run into a problem when I pass my array into the function. Here is an example of what is happening:
a = np.ones(10)
stats.rv_continuous.expect(args=a)
I get this error:
Traceback (most recent call last):
File "<pyshell#3>", line 1, in <module>
stats.rv_continuous.expect(args=a)
TypeError: expect() missing 1 required positional argument: 'self'
If I try stats.rv_continuous.expect(a), I get this error:
'numpy.ndarray' object has no attribute '_argcheck'
Can someone tell me how to get scipy.stats to work with an array?
Update:
Following bob's comment, I changed the code to:
st=stats.rv_continuous()
ev = st.expect(args=signal_array)
print(ev)
where signal_array is a NumPy array. However, I now get this error:
Traceback (most recent call last):
File "C:\Users\...\OneDrive\Área de Trabalho\TickingClock\Main.py", line 35, in <module>
ev = st.expect(args=signal_array)
File "C:\Users\...\AppData\Local\Programs\Python\Python39\lib\site-packages\scipy\stats\_distn_infrastructure.py", line 2738, in expect
vals = integrate.quad(fun, lb, ub, **kwds)[0] / invfac
File "C:\Users\...\AppData\Local\Programs\Python\Python39\lib\site-packages\scipy\integrate\quadpack.py", line 351, in quad
retval = _quad(func, a, b, args, full_output, epsabs, epsrel, limit,
File "C:\Users\...\AppData\Local\Programs\Python\Python39\lib\site-packages\scipy\integrate\quadpack.py", line 465, in _quad
return _quadpack._qagie(func,bound,infbounds,args,full_output,epsabs,epsrel,limit)
File "C:\Users\...\AppData\Local\Programs\Python\Python39\lib\site-packages\scipy\stats\_distn_infrastructure.py", line 2722, in fun
return x * self.pdf(x, *args, **lockwds)
File "C:\Users\...\AppData\Local\Programs\Python\Python39\lib\site-packages\scipy\stats\_distn_infrastructure.py", line 1866, in pdf
args, loc, scale = self._parse_args(*args, **kwds)
TypeError: _parse_args() got multiple values for argument 'loc'
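Both errors come from treating rv_continuous as if it took data directly: expect is an instance method (hence the missing 'self'), and its args parameter is the tuple of shape parameters of the distribution, not an array of samples, which is why the array collides with loc inside _parse_args. If the goal is the expected value of sampled data, the sample mean is usually what is wanted; to treat the samples as an empirical distribution instead, rv_histogram works. A minimal sketch, with signal_array standing in for the data:
import numpy as np
from scipy import stats

signal_array = np.random.randn(1000)  # hypothetical stand-in for the data

# The usual estimate of the expected value of sampled data:
print(signal_array.mean())

# Treating the samples as an empirical continuous distribution instead:
dist = stats.rv_histogram(np.histogram(signal_array, bins=50))
print(dist.expect())  # integrates x * pdf(x) over the support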

RuntimeError using Networkx on Example code

Following the examples on https://networkx.github.io/documentation/stable/reference/drawing.html, I tried the following code:
import networkx as nx
G = nx.complete_graph(5)
A = nx.nx_agraph.to_agraph(G)
H = nx.nx_agraph.from_agraph(A)
I get a RuntimeError as follows:
H = nx.nx_agraph.from_agraph(A)
Traceback (most recent call last):
File "/home/nom/anaconda3/envs/wcats/lib/python3.7/site-packages/pygraphviz/agraph.py", line 1750, in iteritems
ah = gv.agnxtattr(self.handle, self.type, ah)
StopIteration: agnxtattr
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<ipython-input-10-19c378da806e>", line 1, in <module>
H = nx.nx_agraph.from_agraph(A)
File "/home/nom/anaconda3/envs/wcats/lib/python3.7/site-packages/networkx/drawing/nx_agraph.py", line 85, in from_agraph
N.graph.update(A.graph_attr)
File "/home/nom/anaconda3/envs/wcats/lib/python3.7/site-packages/pygraphviz/agraph.py", line 1740, in keys
return list(self.__iter__())
File "/home/nom/anaconda3/envs/wcats/lib/python3.7/site-packages/pygraphviz/agraph.py", line 1743, in __iter__
for (k, v) in self.iteritems():
RuntimeError: generator raised StopIteration
This error is so basic that I suspect a problem with the package itself. Any suggestions on how I can troubleshoot this one?
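This looks less like a broken install than like PEP 479: since Python 3.7, a StopIteration raised inside a generator is converted into a RuntimeError, and older pygraphviz releases raised StopIteration from inside iteritems, which is exactly what the top of the traceback shows. Upgrading pygraphviz to a release that returns instead of raising should fix it. A minimal reproduction of the mechanism itself:
def gen():
    raise StopIteration  # before Python 3.7 this silently ended the generator
    yield

try:
    list(gen())
except RuntimeError as e:
    print(e)  # "generator raised StopIteration" on Python 3.7+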

Cannot cast ListType[tuple(float64 x 2)] to list(tuple(float64 x 2)) in numba

Hello, I am trying to use a typed List in numba v0.46.0:
>>> from numba.typed import List
>>> from numba import types
>>> mylist = List.empty_list(item_type=types.Tuple((types.f8, types.f8)))
>>> mylist2 = List.empty_list(item_type=types.List(dtype=types.Tuple((types.f8, types.f8))))
>>> mylist2.append(mylist)
but I get the following error. How can I fix it?
Traceback (most recent call last):
File "", line 1, in <module>
File "/usr/local/lib/python3.7/site-packages/numba/typed/typedlist.py", line 223, in append
_append(self, item)
File "/usr/local/lib/python3.7/site-packages/numba/dispatcher.py", line 401, in _compile_for_args
error_rewrite(e, 'typing')
File "/usr/local/lib/python3.7/site-packages/numba/dispatcher.py", line 344, in error_rewrite
reraise(type(e), e, None)
File "/usr/local/lib/python3.7/site-packages/numba/six.py", line 668, in reraise
raise value.with_traceback(tb)
numba.errors.TypingError: Failed in nopython mode pipeline (step: nopython frontend) Internal error at .
Failed in nopython mode pipeline (step: nopython mode backend)
Cannot cast ListType[tuple(float64 x 2)] to list(tuple(float64 x 2)): %".24" = load {i8*, i8*}, {i8*, i8*}* %"item"
File "../../usr/local/lib/python3.7/site-packages/numba/listobject.py", line 434:
def impl(l, item):
    casteditem = _cast(item, itemty)
The following should work:
mylist2 = List.empty_list(item_type=types.ListType(itemty=types.Tuple((types.f8, types.f8))))
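The key distinction is between types.List, which is the type of the old reflected lists, and types.ListType, which is the type of typed Lists. mylist is a typed List, so the outer container must declare its items with ListType; declaring them with types.List is what makes the cast fail. A minimal sketch of the working version:
from numba import types
from numba.typed import List

inner = types.Tuple((types.f8, types.f8))

mylist = List.empty_list(inner)
mylist.append((1.0, 2.0))

mylist2 = List.empty_list(types.ListType(inner))
mylist2.append(mylist)  # item type now matches ListType[tuple(float64 x 2)]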

Tensorflow gradient shape incompatible when using Convolutional Transpose Layer

I was having an issue when trying to create a convolution-deconvolution network. The original image dimensions are 565 * 584, and I'm trying to produce a segmentation of the same 565 * 584 size.
While I didn't have any issues with this network on 1024*1024 images, I have been having problems with these dimensions. I am getting this error when computing the gradient:
segmentation_result.shape: (?, 565, 584, 1), targets.shape: (?, 565, 584, 1)
Process Process-1:
Traceback (most recent call last):
\Python\Python35\lib\site-packages\tensorflow\python\framework\tensor_shape.py", line 558, in merge_with
self.assert_is_compatible_with(other)
\Python\Python35\lib\site-packages\tensorflow\python\framework\tensor_shape.py", line 106, in assert_is_compatible_with
other))
ValueError: Dimensions 565 and 566 are not compatible
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
\Python\Python35\lib\multiprocessing\process.py", line 249, in _bootstrap
self.run()
\Python\Python35\lib\multiprocessing\process.py", line 93, in run
self._target(*self._args, **self._kwargs)
.py", line 418, in train
network = Network(net_id = count, weight=pos_weight)
.py", line 199, in __init__
self.train_op = tf.train.AdamOptimizer().minimize(self.cost)
\Python\Python35\lib\site-packages\tensorflow\python\training\optimizer.py", line 315, in minimize
grad_loss=grad_loss)
\Python\Python35\lib\site-packages\tensorflow\python\training\optimizer.py", line 386, in compute_gradients
colocate_gradients_with_ops=colocate_gradients_with_ops)
\Python\Python35\lib\site-packages\tensorflow\python\ops\gradients_impl.py", line 560, in gradients
in_grad.set_shape(t_in.get_shape())
\Python\Python35\lib\site-packages\tensorflow\python\framework\ops.py", line 443, in set_shape
self._shape = self._shape.merge_with(shape)
\Python\Python35\lib\site-packages\tensorflow\python\framework\tensor_shape.py", line 561, in merge_with
raise ValueError("Shapes %s and %s are not compatible" % (self, other))
ValueError: Shapes (?, 565, 584, 64) and (?, 566, 584, 64) are not compatible
The entire network has 10 convolutional layers and 10 deconvolutional layers, where each deconvolutional layer is a reversed version of the corresponding forward layer. Here is an example of the code that produces a deconvolutional layer:
def create_layer_reversed(self, input, prev_layer=None):
    net_id = self.net_id
    print(net_id)
    with tf.variable_scope('conv', reuse=False):
        W = tf.get_variable('W{}_{}_'.format(self.name[-3:], net_id),
                            shape=(self.kernel_size, self.kernel_size,
                                   self.input_shape[3], self.output_channels))
        b = tf.Variable(tf.zeros([W.get_shape().as_list()[2]]))
        output = tf.nn.conv2d_transpose(
            input, W,
            tf.stack([tf.shape(input)[0], self.input_shape[1],
                      self.input_shape[2], self.input_shape[3]]),
            strides=[1, 1, 1, 1], padding='SAME')
        Conv2d.layer_index += 1
        output.set_shape([None, self.input_shape[1],
                          self.input_shape[2], self.input_shape[3]])
        output = lrelu(tf.add(tf.contrib.layers.batch_norm(output), b))
    return output
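One likely cause, assuming some layer in the stack uses stride 2: with 'SAME' padding, a strided convolution maps a dimension n to ceil(n/s), and the matching conv2d_transpose maps it back to m*s, so an odd input dimension gains a pixel on the round trip. That is exactly the 565 versus 566 in the error. A quick check of the arithmetic:
import math

n = 565                  # odd input height
down = math.ceil(n / 2)  # 283 after a stride-2 'SAME' convolution
up = down * 2            # 566 after the matching conv2d_transpose
print(down, up)          # the transpose output no longer matches n
The usual fixes are to pad the input to even dimensions (566 * 584, then crop the output) or to pass the shape recorded from the corresponding forward layer as the output_shape of conv2d_transpose instead of recomputing it.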

Pyspark 'tzinfo' error when using the Cassandra connector

I'm reading from Cassandra using
a = sc.cassandraTable("my_keyspace", "my_table").select("timestamp", "value")
and then want to convert it to a dataframe:
a.toDF()
and the schema is correctly inferred:
DataFrame[timestamp: timestamp, value: double]
but then when materializing the dataframe I get the following error:
Py4JJavaError: An error occurred while calling o89372.showString.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 285.0 failed 4 times, most recent failure: Lost task 0.3 in stage 285.0 (TID 5243, kepler8.cern.ch): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/opt/spark-1.6.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/worker.py", line 111, in main
process()
File "/opt/spark-1.6.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/worker.py", line 106, in process
serializer.dump_stream(func(split_index, iterator), outfile)
File "/opt/spark-1.6.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/serializers.py", line 263, in dump_stream
vs = list(itertools.islice(iterator, batch))
File "/opt/spark-1.6.0-bin-hadoop2.6/python/pyspark/sql/types.py", line 541, in toInternal
return tuple(f.toInternal(v) for f, v in zip(self.fields, obj))
File "/opt/spark-1.6.0-bin-hadoop2.6/python/pyspark/sql/types.py", line 541, in <genexpr>
return tuple(f.toInternal(v) for f, v in zip(self.fields, obj))
File "/opt/spark-1.6.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/sql/types.py", line 435, in toInternal
return self.dataType.toInternal(obj)
File "/opt/spark-1.6.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/sql/types.py", line 190, in toInternal
seconds = (calendar.timegm(dt.utctimetuple()) if dt.tzinfo
AttributeError: 'str' object has no attribute 'tzinfo'
which sounds like a string has been given to pyspark.sql.types.TimestampType.
How could I debug this further?
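A first step is to inspect the raw Python types coming out of the RDD before the DataFrame conversion, since the traceback shows TimestampType.toInternal receiving a str. A minimal sketch, assuming the Cassandra rows expose fields as attributes:
row = a.first()
print(type(row.timestamp), row.timestamp)  # str here would confirm the diagnosis

# One workaround under that assumption: parse the string explicitly
# before Spark applies TimestampType.
from dateutil import parser
b = a.map(lambda r: (parser.parse(r.timestamp), r.value))
df = b.toDF(["timestamp", "value"])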