"ufunc 'hyp2f1' not supported for the input types" error - diff

When I'm trying to run the following code:
import math
from scipy import special as spec
import numpy as np
from sympy import *
y = Symbol('y')
x = spec.hyp2f1(1.5, 2.5, 1, y**2)
ans = x.diff(y)
print ans
I get the error:
Traceback (most recent call last):
File "calc.py", line 74, in <module>
x = spec.hyp2f1(1.5, 2.5, 1, y**2)
TypeError: ufunc 'hyp2f1' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
What is the problem, and are there any other ways to differentiate the function hyp2f1 symbolically?

You cannot mix SymPy and SciPy in the way you want. The error message is saying that scipy.special.hyp2f1 is a numerical ufunc and does not accept symbolic inputs.
SymPy provides hypergeometric functions of its own (sympy.functions.special.hyper.hyper), which do accept symbolic arguments, and the SymPy documentation has examples of how to use them. I am not sure if SymPy can differentiate this symbolically for you in every case.
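A minimal sketch of the symbolic approach, assuming that hyp2f1(1.5, 2.5, 1, y**2) is meant as the Gauss hypergeometric function 2F1(3/2, 5/2; 1; y**2), which in SymPy is written with the generalized hypergeometric function hyper:
from sympy import hyper, Rational, Symbol

y = Symbol('y')
# 2F1(3/2, 5/2; 1; y**2), with exact Rational parameters
x = hyper((Rational(3, 2), Rational(5, 2)), (1,), y**2)

# SymPy applies the chain rule and the standard derivative rule for hyper
ans = x.diff(y)
print(ans)
If you later need numbers, the resulting expression can usually be evaluated with evalf() after substituting a value for y.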

Related

is the factorial2 function in scipy broken?

from scipy.special import factorial2
print(factorial2(-1))
The documentation says that the above code should return 0, but for me it returns +1. In fact, it returns a value for every negative number. Is this a bug? I cannot make sense of this result even when taking into account the analytic continuation of the factorial function, i.e. the various extensions of the factorial function to the negative reals.
Documentation: https://docs.scipy.org/doc/scipy/reference/generated/scipy.special.factorial2.html
Python 3.7.6
Scipy 1.4.1

Matlab typecast transfer to python 3.7

I have an excerpt of Matlab code that I want to convert to Python 3.7, but I cannot find Python equivalents for some Matlab functions, such as typecast and single.
How could I write Python code that gets the same result?
Thank you
Here is the Matlab code:
AA = uint32(3249047552);
DPD = typecast(AA, 'single');
disp(DPD)
% DPD = -21.0664   <== this is the Matlab result
With numpy you can use view():
import numpy as np

# Define your uint32 number
x = np.array(3249047552, dtype=np.uint32)

# Reinterpret the same bits as a 32-bit float (the equivalent of Matlab's typecast)
print(x.view(np.single))
# output: -21.066406
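If you prefer not to depend on numpy, a standard-library sketch of the same bit reinterpretation with struct (assuming IEEE 754 single precision and the same byte order on pack and unpack) is:
import struct

# pack the unsigned 32-bit integer, then unpack the same 4 bytes as a float
dpd = struct.unpack('<f', struct.pack('<I', 3249047552))[0]
print(dpd)   # -21.06640625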

Does KernelDensity.estimate in pyspark.mllib.stat.KernelDensity work when input data is normally distributed?

Does pyspark's KernelDensity.estimate work correctly on a dataset that is normally distributed? I get an error when I try that. I have filed https://issues.apache.org/jira/browse/SPARK-20803 (KernelDensity.estimate in pyspark.mllib.stat.KernelDensity throws net.razorvine.pickle.PickleException when input data is normally distributed (no error when data is not normally distributed))
Example code:
vecRDD = sc.parallelize(colVec)
kd = KernelDensity()
kd.setSample(vecRDD)
kd.setBandwidth(3.0)
# Find density estimates for the given values
densities = kd.estimate(samplePoints)
When the data is NOT Gaussian, I get, for example:
5.6654703477e-05,0.000100010001,0.000100010001,0.000100010001,.....
For reference, here is the Scala code with Gaussian data:
val vecRDD = sc.parallelize(colVec)
val kd = new KernelDensity().setSample(vecRDD).setBandwidth(3.0)
// Find density estimates for the given values
val densities = kd.estimate(samplePoints)
I get:
[0.04113814235801906,1.0994865517293571E-163,0.0,0.0,.....
I faced the same issue and was able to track it down to a very minimal test case. If you're using NumPy in Python to generate the data in the RDD, then that's the problem!
import numpy as np
kd = KernelDensity()
kd.setSample(sc.parallelize([0.0, 1.0, 2.0, 3.0])) # THIS WORKS
# kd.setSample(sc.parallelize([0.0, np.float32(1.0), 2.0, 3.0])) # THIS FAILS
kd.setBandwidth(0.35)
kd.estimate([0.0, 1.0])
If this was your issue as well, simply convert the NumPy data to Python base types until the Spark issue is fixed. You can do that with the np.asscalar function (or a plain float() call).
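A minimal sketch of the conversion (assuming an existing SparkContext sc and that the sample was generated with NumPy):
import numpy as np
from pyspark.mllib.stat import KernelDensity

colVec = np.random.normal(loc=0.0, scale=1.0, size=1000)  # NumPy float64 values

# convert every NumPy scalar to a plain Python float before building the RDD
vecRDD = sc.parallelize([float(v) for v in colVec])

kd = KernelDensity()
kd.setSample(vecRDD)
kd.setBandwidth(3.0)
densities = kd.estimate([0.0, 1.0])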

Why do I get an error within matlab(octave) using the vpi package using rdivide, mrdivide or quotient?

I am using Octave for private use (I'm a student and there is unfortunately no free version of Matlab for students at my university).
Now I am trying to implement an algorithm using big numbers, so I downloaded the vpi package here:
http://www.mathworks.com/matlabcentral/fileexchange/22725-variable-precision-integer-arithmetic
I put the files in the right place and started working with the package, which was really great.
For example, I started with:
a=vpi(12989487973402)
a =
12989487973402
and then
a=a^10
a = 1367477916402329222848766554412698316550418920659968300260110320792
46979577273468682364762841107165766713454228653820366806955009024
which works really well. However, now I am trying to use one of the commands rdivide, mrdivide or quotient of this package, which are made for vpi numbers and should work, yet they fail with bigger numbers.
E.g.
b=vpi(129892);
c=b^2
c =
2191528947700288
rdivide(c,2)
ans =
1095764473850144
However, when the numbers are only a little bigger, I suddenly get error messages (exactly the same ones with rdivide, mrdivide and quotient):
c=b^4
c =
284662078074685808896
rdivide(c,2)
error: 'iszero' undefined near line 49 column 6
error: called from:
error: /home/john/test/#vpi/times.m at line 55, column 3
error: /home/john/test/#vpi/mtimes.m at line 27, column 7
error: evaluating argument list element number 1
error: /home/john/test/#vpi/quotient.m at line 103, column 9
>>>error: /home/john/test/#vpi/rdivide.m at line 43, column 5
Now I'm wondering whether this is a problem with Octave (and I would need the original Matlab), whether it is a bug in the package, or whether I'm simply using it wrong.
Can someone help? Thank you.
EDIT: rdivide, mrdivide and quotient all work very similarly; they are supposed to return a./b for rdivide/mrdivide/quotient(a,b) when a is divisible by b.

Converting a matlab script into python

I'm an undergrad at university participating in a research credit with a professor, so this is pretty much an independent project for me.
I am converting a Matlab script into a Python (3.4) script for easier use in the rest of my project. The 'find' function is employed in the script, like so:
keyindx = find(summags>=cumthresh,1)
keyindx would contain the index of the first value inside summags that is >= cumthresh.
So, as an example:
summags = [1 4 8 16 19]
cumthresh = 5
then keyindx would point at the value 8 (index 3 in MATLAB's 1-based indexing, index 2 in 0-based indexing).
My question is: I am trying to find a similar function in Python (I am also using numpy and can use whatever library I need) that will work the same way. Coming from a background in C, I know how to get everything I need, but I figure there's a better way to do this than just writing some C-style code.
So, any hints about where to look in the Python docs, and about finding useful functions in general?
A quick search led me to the argwhere function, which you can combine with [0] to get the first index satisfying your condition. For example,
>>> import numpy as np
>>> x = np.array(range(1, 10))
>>> np.argwhere(x > 5)[0]
array([5])
This isn't quite the same as saying
find(x > 5, 1)
in MATLAB, since the Python code will throw an IndexError if none of the values satisfy your condition (whereas MATLAB returns an empty array). However, you can catch this and deal with it appropriately, for example
try:
    ind = np.argwhere(x > 5)[0]
except IndexError:
    ind = np.array([1])
np.nonzero(x) gives a tuple of the nonzero indices. That value can then be used to index any array of the matching size.
In [1262]: x=np.arange(6).reshape(2,3)
In [1263]: ind=np.nonzero(x>3)
In [1264]: x[ind]
Out[1264]: array([4, 5])
In [1265]: ind
Out[1265]: (array([1, 1], dtype=int32), array([1, 2], dtype=int32))
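Tying this back to the original example, a minimal sketch of an equivalent of find(summags >= cumthresh, 1) (assuming the summags and cumthresh values from the question) could be:
import numpy as np

summags = np.array([1, 4, 8, 16, 19])
cumthresh = 5

# flatnonzero returns all indices where the condition holds, in order
hits = np.flatnonzero(summags >= cumthresh)

# take the first one if it exists, mirroring find(..., 1); None means "no match"
keyindx = hits[0] if hits.size else None
print(keyindx)   # 2 (0-based), i.e. the value 8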