How to find expected value of np.array using scipy.stats? - scipy

I am trying to get the expected value of a NumPy array, but I run into a problem when I pass my array into the function. Here is an example of what is happening:
import numpy as np
from scipy import stats

a = np.ones(10)
stats.rv_continuous.expect(args=a)
I get this error:
Traceback (most recent call last):
File "<pyshell#3>", line 1, in <module>
stats.rv_continuous.expect(args=a)
TypeError: expect() missing 1 required positional argument: 'self'
If I try stats.rv_continuous.expect(a), I get this error:
'numpy.ndarray' object has no attribute '_argcheck'
Can someone tell me how to get scipy.stats to work with an array?
Update:
Following bob's comment, I changed the code to:
st=stats.rv_continuous()
ev = st.expect(args=signal_array)
print(ev)
where signal_array is a NumPy array. However, I now get this error:
Traceback (most recent call last):
File "C:\Users\...\OneDrive\Área de Trabalho\TickingClock\Main.py", line 35, in <module>
ev = st.expect(args=signal_array)
File "C:\Users\...\AppData\Local\Programs\Python\Python39\lib\site-packages\scipy\stats\_distn_infrastructure.py", line 2738, in expect
vals = integrate.quad(fun, lb, ub, **kwds)[0] / invfac
File "C:\Users\...\AppData\Local\Programs\Python\Python39\lib\site-packages\scipy\integrate\quadpack.py", line 351, in quad
retval = _quad(func, a, b, args, full_output, epsabs, epsrel, limit,
File "C:\Users\...\AppData\Local\Programs\Python\Python39\lib\site-packages\scipy\integrate\quadpack.py", line 465, in _quad
return _quadpack._qagie(func,bound,infbounds,args,full_output,epsabs,epsrel,limit)
File "C:\Users\...\AppData\Local\Programs\Python\Python39\lib\site-packages\scipy\stats\_distn_infrastructure.py", line 2722, in fun
return x * self.pdf(x, *args, **lockwds)
File "C:\Users\...\AppData\Local\Programs\Python\Python39\lib\site-packages\scipy\stats\_distn_infrastructure.py", line 1866, in pdf
args, loc, scale = self._parse_args(*args, **kwds)
TypeError: _parse_args() got multiple values for argument 'loc'
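For what it's worth, rv_continuous.expect computes the expectation of a distribution object, not of a data array, so passing the array as args will not work. If the goal is simply the expected value of the sampled values, a minimal sketch (assuming signal_array holds plain samples; the example data below is hypothetical) would be:
import numpy as np
from scipy import stats

signal_array = np.array([1.0, 2.0, 3.0, 4.0])  # hypothetical sample data

# The expected value of a set of samples is just their mean.
ev = np.mean(signal_array)

# Alternatively, build an empirical distribution from the samples and call
# expect() on that distribution (rv_histogram is available in modern SciPy).
hist = np.histogram(signal_array, bins="auto")
dist = stats.rv_histogram(hist)
ev_dist = dist.expect()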

Related

_vode.error: failed in processing argument list for call-back f

I am trying to solve a series of ODEs with the scipy integrate.ode module. What does this error message mean?
create_cb_arglist: Failed to build argument list (siz) with enough arguments (tot-opt) required by user-supplied function (siz,tot,opt=6,7,0).
Traceback (most recent call last):
File "D:/DeepSpillModel/api.py", line 49, in <module>
model.solver(start_time, end_time)
File "D:\DeepSpillModel\far_field.py", line 17, in solver
far_model.simulate(self.parcels, self.initial_location, start_time,
File "D:\DeepSpillModel\single_parcel_model.py", line 15, in simulate
self.t, self.y = calculate_underwater(self.profile, parcel, t0, y0, diff_factor, self.p, delta_t_sub)
File "D:\DeepSpillModel\single_parcel_model.py", line 44, in calculate_underwater
r.integrate(t[-1] + delta_t, step=True)
File "D:\Miniconda3\envs\gnome\lib\site-packages\scipy\integrate\_ode.py", line 433, in integrate
self._y, self.t = mth(self.f, self.jac or (lambda: None),
File "D:\Miniconda3\envs\gnome\lib\site-packages\scipy\integrate\_ode.py", line 1024, in step
r = self.run(*args)
File "D:\Miniconda3\envs\gnome\lib\site-packages\scipy\integrate\_ode.py", line 1009, in run
y1, t, istate = self.runner(*args)
_vode.error: failed in processing argument list for call-back f.
Process finished with exit code 1
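This error usually means that the callback f does not accept the number of arguments being passed to it: its signature must be (t, y, *extra), where the extra arguments match exactly what was given to set_f_params. A minimal sketch of the expected call pattern (the function and parameter below are hypothetical):
from scipy.integrate import ode

def f(t, y, k):                     # one extra parameter
    return -k * y

r = ode(f).set_integrator("vode", method="bdf")
r.set_initial_value([1.0], 0.0)
r.set_f_params(0.5)                 # exactly one extra argument, matching f's signature
while r.successful() and r.t < 1.0:
    r.integrate(r.t + 0.1)
print(r.t, r.y)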

pycairo error: “no current point defined” when trying to use TeeSurface

I'm trying to use a cairo.TeeSurface (pycairo version '1.21.0') to redirect input to multiple surfaces, but I cannot make it work. Here is my code:
>>> import cairo
>>> s1 = cairo.ImageSurface(cairo.FORMAT_ARGB32, 500,500)
>>> s2 = cairo.ImageSurface(cairo.FORMAT_ARGB32, 500,500)
>>> s3 = cairo.ImageSurface(cairo.FORMAT_ARGB32, 500,500)
>>> s = cairo.TeeSurface(s1)
>>> s.add(s2)
>>> s.add(s3)
>>> c = cairo.Context(s)
After the above initialization, if I try to draw a line I get an error:
>>> c.rel_line_to(100,100)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
cairo.Error: no current point defined
I tried to set a point explicitly, but I get the same error:
>>> c.move_to(10,10)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
cairo.Error: no current point defined
I must surely have misunderstood something; any clue would be appreciated.
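As a point of comparison, the same calls do work on a plain ImageSurface, so one hedged workaround (a sketch only, not a fix for TeeSurface itself) is to draw the same path through a separate context on each surface:
import cairo

s1 = cairo.ImageSurface(cairo.FORMAT_ARGB32, 500, 500)
s2 = cairo.ImageSurface(cairo.FORMAT_ARGB32, 500, 500)

# Draw the same path on each surface through its own context.
for surface in (s1, s2):
    ctx = cairo.Context(surface)
    ctx.move_to(10, 10)
    ctx.rel_line_to(100, 100)
    ctx.stroke()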

RuntimeError using Networkx on Example code

Following the examples on https://networkx.github.io/documentation/stable/reference/drawing.html, I tried the following code:
import networkx as nx
G = nx.complete_graph(5)
A = nx.nx_agraph.to_agraph(G)
H = nx.nx_agraph.from_agraph(A)
I get a RuntimeError as follows:
H = nx.nx_agraph.from_agraph(A)
Traceback (most recent call last):
File "/home/nom/anaconda3/envs/wcats/lib/python3.7/site-packages/pygraphviz/agraph.py", line 1750, in iteritems
ah = gv.agnxtattr(self.handle, self.type, ah)
StopIteration: agnxtattr
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<ipython-input-10-19c378da806e>", line 1, in <module>
H = nx.nx_agraph.from_agraph(A)
File "/home/nom/anaconda3/envs/wcats/lib/python3.7/site-packages/networkx/drawing/nx_agraph.py", line 85, in from_agraph
N.graph.update(A.graph_attr)
File "/home/nom/anaconda3/envs/wcats/lib/python3.7/site-packages/pygraphviz/agraph.py", line 1740, in keys
return list(self.__iter__())
File "/home/nom/anaconda3/envs/wcats/lib/python3.7/site-packages/pygraphviz/agraph.py", line 1743, in __iter__
for (k, v) in self.iteritems():
RuntimeError: generator raised StopIteration
This error is so basic that I suspect there's a problem with the package itself. Any suggestions on how I can try to troubleshoot this one?
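One way to start troubleshooting is to confirm the interpreter and package versions in play: on Python 3.7+ a StopIteration escaping a generator is converted into a RuntimeError (PEP 479), which is what older pygraphviz releases trip over here. A minimal check:
import sys
import networkx
import pygraphviz

print(sys.version)
print("networkx", networkx.__version__)
print("pygraphviz", pygraphviz.__version__)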

How to make a correct query from pymongo

I am using pymongo to subset a collection in MongoDB, but I am having problems with the syntax of the query. This is my query:
import pymongo
client=pymongo.MongoClient("localhost",27017)
db=client("publications")
collec=db("zikaVirus2")
pipeline=db.collec.aggregate([
    {"$match":{"fullText":{"$exists": "true"}}},
    {"$out": "pub_fulltext"}
])
db.pub_fulltext.aggregate(pipeline)
I am getting this error:
Traceback (most recent call last):
File "genColFulltext.py", line 3, in <module>
db=client("publications")
TypeError: 'MongoClient' object is not callable
UPDATE 1
If I do
import pymongo
client=pymongo.MongoClient("localhost",27017)
db=client.publications
collec=db.zikaVirus2
pipeline=db.collec.aggregate([
    {"$match":{"fullText":{"$exists": "true"}}},
    {"$out": "pub_fulltext"}
])
db.pub_fulltext.aggregate(pipeline)
I get
Traceback (most recent call last):
File "genColFulltext.py", line 11, in <module>
db.pub_fulltext.aggregate(pipeline)
File "/home/danielo/.local/lib/python3.5/site-packages/pymongo /collection.py", line 2397, in aggregate
**kwargs)
File "/home/danielo/.local/lib/python3.5/site-packages/pymongo/collection.py", line 2242, in _aggregate
common.validate_list('pipeline', pipeline)
File "/home/danielo/.local/lib/python3.5/site-packages/pymongo/common.py", line 428, in validate_list
raise TypeError("%s must be a list" % (option,))
TypeError: pipeline must be a list
UPDATE 2
This code runs without problems, but it does not accomplish its purpose: the pub_fulltext collection ends up empty, when it should contain data.
import pymongo
client=pymongo.MongoClient("localhost",27017)
db=client.publications
collec=db.zikaVirus2
pipeline=db.collec.aggregate([
    {"$match":{"fullText":{"$exists": "true"}}},
    {"$out": "pub_fulltext"}
])
for item in pipeline:
    db.pub_fulltext.aggregate(item)
Then I try this:
import pymongo
client=pymongo.MongoClient("localhost",27017)
db=client.publications
collec=db.zikaVirus2
pipeline=[db.collec.aggregate([
    {"$match":{"fullText":{"$exists": "true"}}},
    {"$out": "pub_fulltext"}
])]
for item in db.collection.aggregate(pipeline):
    db.collection.aggregate(item)
Getting this error:
Traceback (most recent call last):
File "genColFulltext.py", line 10, in <module>
for item in db.collection.aggregate(pipeline):
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pymongo/collection.py", line 2397, in aggregate
**kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pymongo/collection.py", line 2304, in _aggregate
client=self.__database.client)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pymongo/pool.py", line 584, in command
self._raise_connection_failure(error)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pymongo/pool.py", line 745, in _raise_connection_failure
raise error
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pymongo/pool.py", line 579, in command
unacknowledged=unacknowledged)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pymongo/network.py", line 114, in command
codec_options, ctx=compression_ctx)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pymongo/message.py", line 679, in _op_msg
flags, command, identifier, docs, check_keys, opts)
bson.errors.InvalidDocument: Cannot encode object: <pymongo.command_cursor.CommandCursor object at 0x1008e4198>
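For reference, aggregate() expects a plain list of pipeline stages and is called on the collection itself; the $out stage then writes the matching documents into pub_fulltext, while the returned cursor stays empty. A minimal sketch along those lines (assuming the zikaVirus2 collection in the publications database):
import pymongo

client = pymongo.MongoClient("localhost", 27017)
db = client["publications"]
collec = db["zikaVirus2"]

pipeline = [
    {"$match": {"fullText": {"$exists": True}}},
    {"$out": "pub_fulltext"},
]
collec.aggregate(pipeline)                     # results are written to pub_fulltext
print(db["pub_fulltext"].count_documents({}))  # count_documents needs a reasonably recent pymongo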

"InterfaceError: connection already closed" when using multiprocessing.Pool on black box function that queries PostgreSQL database

I've been given a Python (2.7) function that takes 3 strings as arguments and returns a list of dictionaries. Due to the nature of the project, I can't alter the function, which is quite complex, calling several other non-standard Python modules and querying a PostgreSQL database using psycopg2. I think it's the Postgres functionality that's causing me problems.
I want to use the multiprocessing module to speed up calling the function hundreds of times. I've written a "helper" function so that I can use multiprocessing.Pool (which takes only 1 argument) with my function:
from function_script import function
def function_helper(args):
    return function(*args)
And my main code looks like this:
from helper_script import function_helper
from multiprocessing import Pool
argument_a = ['a0', 'a1', ..., 'a99']
argument_b = ['b0', 'b1', ..., 'b99']
argument_c = ['c0', 'c1', ..., 'c99']
input = zip(argument_a, argument_b, argument_c)
p = Pool(4)
results = p.map(function_helper, input)
print results
What I'm expecting is a list of lists of dictionaries; however, I get the following errors:
Traceback (most recent call last):
File "/local/python/2.7/lib/python2.7/site-packages/variantValidator/variantValidator.py", line 898, in validator
vr.validate(input_parses)
File "/local/python/2.7/lib/python2.7/site-packages/hgvs/validator.py", line 33, in validate
return self._ivr.validate(var, strict) and self._evr.validate(var, strict)
File "/local/python/2.7/lib/python2.7/site-packages/hgvs/validator.py", line 69, in validate
(res, msg) = self._ref_is_valid(var)
File "/local/python/2.7/lib/python2.7/site-packages/hgvs/validator.py", line 89, in _ref_is_valid
var_x = self.vm.c_to_n(var) if var.type == "c" else var
File "/local/python/2.7/lib/python2.7/site-packages/hgvs/variantmapper.py", line 223, in c_to_n
tm = self._fetch_TranscriptMapper(tx_ac=var_c.ac, alt_ac=var_c.ac, alt_aln_method="transcript")
File "/local/python/2.7/lib/python2.7/site-packages/hgvs/decorators/lru_cache.py", line 176, in wrapper
result = user_function(*args, **kwds)
File "/local/python/2.7/lib/python2.7/site-packages/hgvs/variantmapper.py", line 372, in _fetch_TranscriptMapper
self.hdp, tx_ac=tx_ac, alt_ac=alt_ac, alt_aln_method=alt_aln_method)
File "/local/python/2.7/lib/python2.7/site-packages/hgvs/transcriptmapper.py", line 69, in __init__
self.tx_identity_info = hdp.get_tx_identity_info(self.tx_ac)
File "/local/python/2.7/lib/python2.7/site-packages/hgvs/decorators/lru_cache.py", line 176, in wrapper
result = user_function(*args, **kwds)
File "/local/python/2.7/lib/python2.7/site-packages/hgvs/dataproviders/uta.py", line 353, in get_tx_identity_info
rows = self._fetchall(self._queries['tx_identity_info'], [tx_ac])
File "/local/python/2.7/lib/python2.7/site-packages/hgvs/dataproviders/uta.py", line 216, in _fetchall
with self._get_cursor() as cur:
File "/local/python/2.7/lib/python2.7/contextlib.py", line 17, in __enter__
return self.gen.next()
File "/local/python/2.7/lib/python2.7/site-packages/hgvs/dataproviders/uta.py", line 529, in _get_cursor
cur.execute("set search_path = " + self.url.schema + ";")
File "/local/python/2.7/lib/python2.7/site-packages/psycopg2/extras.py", line 144, in execute
return super(DictCursor, self).execute(query, vars)
DatabaseError: SSL error: decryption failed or bad record mac
And:
Traceback (most recent call last):
File "/local/python/2.7/lib/python2.7/site-packages/variantValidator/variantValidator.py", line 898, in validator
vr.validate(input_parses)
File "/local/python/2.7/lib/python2.7/site-packages/hgvs/validator.py", line 33, in validate
return self._ivr.validate(var, strict) and self._evr.validate(var, strict)
File "/local/python/2.7/lib/python2.7/site-packages/hgvs/validator.py", line 69, in validate
(res, msg) = self._ref_is_valid(var)
File "/local/python/2.7/lib/python2.7/site-packages/hgvs/validator.py", line 89, in _ref_is_valid
var_x = self.vm.c_to_n(var) if var.type == "c" else var
File "/local/python/2.7/lib/python2.7/site-packages/hgvs/variantmapper.py", line 223, in c_to_n
tm = self._fetch_TranscriptMapper(tx_ac=var_c.ac, alt_ac=var_c.ac, alt_aln_method="transcript")
File "/local/python/2.7/lib/python2.7/site-packages/hgvs/decorators/lru_cache.py", line 176, in wrapper
result = user_function(*args, **kwds)
File "/local/python/2.7/lib/python2.7/site-packages/hgvs/variantmapper.py", line 372, in _fetch_TranscriptMapper
self.hdp, tx_ac=tx_ac, alt_ac=alt_ac, alt_aln_method=alt_aln_method)
File "/local/python/2.7/lib/python2.7/site-packages/hgvs/transcriptmapper.py", line 69, in __init__
self.tx_identity_info = hdp.get_tx_identity_info(self.tx_ac)
File "/local/python/2.7/lib/python2.7/site-packages/hgvs/decorators/lru_cache.py", line 176, in wrapper
result = user_function(*args, **kwds)
File "/local/python/2.7/lib/python2.7/site-packages/hgvs/dataproviders/uta.py", line 353, in get_tx_identity_info
rows = self._fetchall(self._queries['tx_identity_info'], [tx_ac])
File "/local/python/2.7/lib/python2.7/site-packages/hgvs/dataproviders/uta.py", line 216, in _fetchall
with self._get_cursor() as cur:
File "/local/python/2.7/lib/python2.7/contextlib.py", line 17, in __enter__
return self.gen.next()
File "/local/python/2.7/lib/python2.7/site-packages/hgvs/dataproviders/uta.py", line 526, in _get_cursor
conn.autocommit = True
InterfaceError: connection already closed
Does anybody know what might cause the Pool function to behave like this, when it seems so simple to use in other examples that I've tried? If this isn't enough information to go on, can anyone advise me on a way of getting to the bottom of the problem (this is the first time I've worked with someone else's code)? Alternatively, are there any other ways that I could use the multiprocessing module to call the function hundreds of times?
Thanks
I think what may be happening is that your connection object is shared across all the workers. When one worker has completed all of its tasks, it closes the connection while the other workers are still working, so when one of those workers then tries to use the database, the connection is already closed.
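If that is what is happening, one hedged sketch of a workaround (hypothetical, assuming the database connection is opened when function_script is imported at module level) is to defer that import into the worker, so each process opens its own connection after the fork:
from multiprocessing import Pool

def function_helper(args):
    # Import inside the worker so each process creates its own connection.
    from function_script import function
    return function(*args)

if __name__ == '__main__':
    argument_a = ['a0', 'a1']
    argument_b = ['b0', 'b1']
    argument_c = ['c0', 'c1']
    inputs = zip(argument_a, argument_b, argument_c)
    p = Pool(4)
    results = p.map(function_helper, inputs)
    print results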