Is the factorial2 function in scipy broken?

from scipy.special import factorial2
print(factorial2(-1))
The documentation says that the above code should return 0, but for me it returns +1. In fact, it returns a value for every negative number I try. Is this a bug? I cannot make sense of this result even when taking into account the analytic continuation of the factorial function, i.e. the various extensions of the factorial function to the negative reals.
Documentation: https://docs.scipy.org/doc/scipy/reference/generated/scipy.special.factorial2.html
Python 3.7.6
Scipy 1.4.1
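For what it's worth, here is a small probe, just a sketch, that prints what a given scipy installation actually returns for a few negative arguments (the output depends on the installed version). One common convention, obtained by running the recurrence n!! = n*(n-2)!! backwards from 1!! = 1, does give (-1)!! = 1, which may be what the returned value reflects, but whether that is what scipy intends here is exactly the question.
from scipy.special import factorial2

# The linked documentation says n < 0 should give 0; print what this
# installation actually returns for a few negative arguments.
for n in range(-5, 2):
    print(n, factorial2(n))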

Related

Convert symbolic expression into a rational polynomial

I have a lengthy symbolic expression that involves rational polynomials (basic arithmetic and integer powers). I'd like to simplify it into a single (simple) rational polynomial.
numden does it, but it seems to use some expensive optimization, which probably addresses a more general case. When tried on my example below, it crashed after a few hours after running out of memory (32 GB).
I believe something more efficient is possible, even if I don't have C++-level access to MATLAB functionality (e.g. children).
Motivation: I have an objective function that involves polynomials. I manually derived it, and I'd like to verify and compare the derivatives: I subtract the two expressions, and the result should vanish.
Currently my interest in this is academic, since in practice I simply substitute some random values, get zero, and that's enough for me.
I'll try to find the time to play with this at some point, and I'll update here about it, but I am posting in case someone finds it interesting and would like to give it a try before then.
To run my function:
x = sym('x', [1 32], 'real')
e = func(x)
The function (and believe it or not, this is just the Jacobian, and I also have the Hessian) can't be pasted here since the text limit is 30K:
https://drive.google.com/open?id=1imOAa4VS87WDkOwAK0NoFCJPTK_2QIRj

MATLAB: using a minimum function within symsum

I am trying to run code similar to the following; I have replaced the function I had with a much smaller one to provide a minimal working example:
clear
syms k m
n=2;
symsum(symsum(k*m,m,0,min(k,n-k)),k,0,n)
I receive the following error message:
"Error using sym/min (line 86)
Input arguments must be convertible to floating-point numbers."
I think this means that the min function cannot be used with symbolic arguments. However, I was hoping that MATLAB would substitute in actual numbers as it iterates through k=0:n.
Is there a way to get this to work? Any help much appreciated. So far the most relevant page I have found is here, but I am somewhat hesitant, as I find it difficult to understand what this function does.
EDIT: following @horchler, I messed around putting it in various places to try and make it work, and this one did:
clear
syms k m
n=2;
symsum(symsum(k*m,m,0,feval(symengine, 'min', k,n-k)),k,0,n)
Because I do not really understand this feval function, I was curious whether there is a better, perhaps more commonly used, solution. Although it is a different function, there are many posts online advising against the eval function, for example, so I thought this one might carry similar issues.
I agree that Matlab should be able to solve this as you expect, even though the documentation is clear that it won't.
Why the issue occurs
The problem is due to the inner symbolic summation, and the min function itself, being evaluated first:
symsum(k*m,m,0,min(k,n-k))
In this case, the input arguments to sym/min are not "convertible to floating-point numbers" as k is a symbolic variable. It is only after you wrap the above in another symbolic summation that k becomes clearly defined and could conceivably be reduced to numbers, but the inner expression has already generated an error so it's too late.
I think that it's a poor choice for sym/min to return an error. Rather, it should just return itself unevaluated. This is what the sym/int function does when it can't evaluate an integral symbolically or numerically. MuPAD (see below) and Mathematica 10 do something like this for their min functions as well.
About the workaround
This directly calls MuPAD's min function. Calling MuPAD functions from Matlab is discussed in more detail in this article from The MathWorks.
If you like, you can wrap it in a function or an anonymous function to make calling it cleaner, e.g.:
symmin = @(x,y)feval(symengine,'min',x,y);
Then, your code would simply be:
syms k m
n = 2;
symsum(symsum(k*m,m,0,symmin(k,n-k)),k,0,n)
If you look at the code for sym/min in the Symbolic Math toolbox (type edit sym/min in your Command Window), you'll see that it's based on a different function: symobj::maxmin. I don't know why it doesn't just call MuPAD's min, other than performance reasons perhaps. You might consider filing a service request with The MathWorks to ask about this issue.

Trouble with fortran() to generate FORTRAN code from a MATLAB symbolic expression

I did a number of symbolic manipulations and obtained an expression for my variable z. I used the following code to generate the FORTRAN code for z:
fortran(z,'file','FTRN_2Mkt_dfpa1');
The program stops after about 10 min, and I get these error messages:
??? Error using ==> mupadmex
Error in MuPAD command: Recursive definition [See ?MAXDEPTH]; during evaluation of 'generate::CFformatting'
Error in ==> sym.sym>sym.generateCode at 2169 tk = mupadmex(['generate::' lang], expr, 0);
Error in ==> sym.fortran at 43 generateCode(sym(t),'fortran',opts);
I think the problem is that the z expression is too long. The MuPAD software is treating this long expression as an infinite recursive operation. I am guessing that the MAXDEPTH in the fortran() source file is set at a level that is smaller than needed to convert the z expression to FORTRAN. If my guess is correct, is there a way to change MAXDEPTH in the fortran() source code?
If my guess is wrong, what can I do to generate the FORTRAN code for the z expression?
I really need the FORTRAN code for the symbolic expression of z. If you can help me, it would be wonderful. Thanks a million in advance!
Best, Limin

Translating matlab code to python: optimizing numerical functions

I have the following bit of matlab code:
f=@(h)(exp(-2*mun*(h/sigma+1.166))-1+2*mun*(h/sigma+1.166))/(2*mun^2)-ARL0;
The parameters aren't important; they are all just constants at this point. What is important is that I can now evaluate that function for any value of h just by calling f(h). In particular, I can find the zeros, min, max, etc. of the function over any interval I specify.
I am translating this code into python, mostly as an exercise in learning python, and I was wondering if there is anything similar (perhaps in numpy) that I could use, instead of setting up a numpy array with an arbitrary set of h values to process over.
I could do something like (pseudocode):
f = numpy.array([that_function(h) for h in numpy.arange(hmin, hmax, hstep)])
But this commits me to a step size. Is there any way to avoid that and get the full precision like in matlab?
EDIT: what I actually want at the end of the day is to find the zeroes, max, and min locations (not values) of the function f. It looks like scipy might have some functions that are more useful here: http://docs.scipy.org/doc/scipy/reference/optimize.html
The Python equivalent of a function handle in MATLAB (the @ notation) is called a "lambda function" in Python. The equivalent syntax is like so:
Matlab:
func = @(h)(h+2);
Python:
func = lambda h: h+2
For your specific case, you would implement the equivalent of the matlab function like this:
import numpy as np
f = lambda h: (np.exp(-2*mun*(h/sigma+1.166))-1+2*mun*(h/sigma+1.166))/(2*mun**2)-ARL0
f can then be used as a function and applied directly to any numpy array. So this would work, for example:
rarr = np.random.random((100, 20))
frarr = f(rarr)
If you are just looking for the f values at integer values of x, the following will work (range in Python 3, xrange in Python 2):
f = [your_function(x) for x in range(hmin, hmax)]
If you want finer granularity than integer values, range will not accept a non-integer step, so use numpy's arange instead (with numpy imported as np, as above):
f = [your_function(x) for x in np.arange(hmin, hmax, hstep)]
If you want exact solutions to the zeroes, max, and min locations, I agree with your edit: use scipy optimize.
Two important notes about the scipy optimize functions:
1. It seems you want to use the bounded versions.
2. There is no guarantee that you will find the actual (global) min/max with these functions. They are numerical optimization routines that can get stuck in local minima/maxima. If you want to find the mins/maxes symbolically, I suggest looking into Sage.
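For illustration, here is a minimal sketch of what that might look like with scipy.optimize. The constants mun, sigma, ARL0 and the interval [hmin, hmax] below are placeholders chosen only so the example runs; substitute your own values and pick a bracket on which f changes sign.
import numpy as np
from scipy import optimize

# Placeholder constants -- replace with the real values from your problem.
mun, sigma, ARL0 = 0.5, 1.0, 10.0
hmin, hmax = 0.1, 10.0   # interval chosen here so that f changes sign on it

f = lambda h: (np.exp(-2*mun*(h/sigma + 1.166)) - 1
               + 2*mun*(h/sigma + 1.166)) / (2*mun**2) - ARL0

# Zero of f inside [hmin, hmax]; brentq requires f(hmin) and f(hmax)
# to have opposite signs.
h_zero = optimize.brentq(f, hmin, hmax)

# Location of the minimum of f on the interval (bounded scalar minimization).
h_min = optimize.minimize_scalar(f, bounds=(hmin, hmax), method='bounded').x

# Location of the maximum: minimize -f over the same interval.
h_max = optimize.minimize_scalar(lambda h: -f(h), bounds=(hmin, hmax),
                                 method='bounded').x

print(h_zero, h_min, h_max)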

Symbolic Math Toolbox hitting divide by zero error when it used to evaluate to NaN

I've just updated to Matlab 2014a finally. I have loads of scripts that use the Symbolic Math Toolbox that used to work fine, but now hit the following error:
Error using mupadmex
Error in MuPAD command: Division by zero. [_power]
Evaluating: symobj::trysubs
I can't post my actual code here, but here is a simplified example:
syms f x y
f = x/y
results = double(subs(f, {'x','y'}, {1:10,-4:5}))
In my actual script I'm passing two 23x23 grids of values to a complicated function, and I don't know in advance which of these values will result in the divide by zero. Everything I can find on Google just tells me not to attempt an evaluation that will result in a divide by zero. Not helpful! I used to get 'inf' (or 'NaN', I can't specifically remember) for the cases it could not evaluate, and I could easily filter those out in the next steps on this data.
Does anyone know how to force Matlab 2014a back to that behaviour rather than throwing the error? Or am I doomed to running an older version of Matlab forever or going through the significant pain of changing my approach to this to avoid the divide by zero?
You could define a division which has the behaviour you want; the division function below returns inf for division by zero:
mydiv=@(x,y)x/(dirac(y)+y)+dirac(y)
f = mydiv(x,y)
results = double(subs(f, {'x','y'}, {1:10,-4:5}))