Complex special function implementation in python - scipy

I need to run a massive computation involving a large number of calls to the special function scipy.special.exp1 with a complex argument, and I may migrate the code from Python to another language (C). It turns out that only Python has an implementation of this function for complex arguments; the C implementation in the GSL only accepts real arguments. How can I get the code from scipy.special in order to translate it into C? Many thanks.
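As far as I can tell, scipy's complex exp1 is backed by compiled Fortran (the specfun routines), so there is no pure-Python source to lift out. One alternative, sketched below under the usual textbook approach rather than scipy's actual algorithm, is the classic power series plus continued fraction for E1(z); it uses only scalar arithmetic, so it translates to C almost line for line (complex.h supplies cexp/clog/cabs):

```python
import cmath

EULER_GAMMA = 0.5772156649015329

def exp1_complex(z, eps=1e-15, max_iter=500):
    """E1(z) for complex z off the negative real axis.

    Power series for |z| <= 1, modified-Lentz continued fraction
    otherwise. This is a sketch, not scipy's implementation.
    """
    if z == 0:
        raise ValueError("E1 has a logarithmic singularity at z = 0")
    if abs(z) <= 1.0:
        # E1(z) = -gamma - ln z - sum_{k>=1} (-z)^k / (k * k!)
        total = -EULER_GAMMA - cmath.log(z)
        term = 1.0
        for k in range(1, max_iter):
            term *= -z / k              # term = (-z)^k / k!
            delta = -term / k
            total += delta
            if abs(delta) < eps * abs(total):
                return total
    else:
        # E1(z) = exp(-z) / (z + 1 - 1/(z + 3 - 4/(z + 5 - 9/(...))))
        # evaluated with the modified Lentz algorithm.
        b = z + 1.0
        c = 1e300                       # "infinity" seed for Lentz
        d = 1.0 / b
        h = d
        for i in range(1, max_iter):
            a = -float(i * i)
            b += 2.0
            d = 1.0 / (a * d + b)
            c = b + a / c
            delta = c * d
            h *= delta
            if abs(delta - 1.0) < eps:
                return h * cmath.exp(-z)
    raise RuntimeError("exp1_complex did not converge for z = {!r}".format(z))
```

Before trusting a translation of this, it is worth spot-checking a handful of values against scipy.special.exp1 itself.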

Related

Function which one must not shadow in Matlab

I am writing some helper functions for unit testing in Matlab. For that I need to overload Matlab functions, built-ins, etc. The documentation states that one shall not overload the function builtin. I also found out that one must not overload the lt operator, since it seems to get called by the UI all the time.
Is there a list of functions which one shall not overload, or do you know of any particular problems from overloading some specific functions?
Further Information:
My use case: I want to do mutation testing and fault injection.
In other words, sometimes the called functions shall return wrong results or throw an error. If I overload the Matlab functions, I do not have to change the source code to do that.

Interface MiniZinc to other languages

I want to solve a problem for which a score function has been implemented in Prolog. Would it be possible to call Prolog (or another language) from a MiniZinc script in the case where an optimization function is defined in another language?
For instance, MiniZinc can easily be called from Python through the package MiniZinc Python. Does there exist an interface to do the opposite (call Python from MiniZinc)?
There is currently no foreign function interface in MiniZinc. So it is currently not possible to use functionality from another language, like Prolog, in MiniZinc.
Unlike exposing MiniZinc to a programming language, integrating other languages into MiniZinc is not as easy. The problem is that all parts of a MiniZinc instance need to either be resolved by the compiler or be transformed to a solver-level construct. This means that a computation on parameter values is probably relatively easy to do in another language: MiniZinc could just call a compiled version of the computation. Transformations of variables, on the other hand, would require a strict MiniZinc API to perform them. You could compare such an interface to how you can use CPython in C: it would be more like writing a MiniZinc module in another language.
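To make the "computation on parameter values" case concrete, here is a hedged sketch: a Python script (standing in for the Prolog scorer; the `score` function and parameter names are invented for illustration) precomputes a score table and emits it as a .dzn parameter fragment that a MiniZinc model can read. MiniZinc itself never calls out to the other language; the handover happens entirely through parameter data.

```python
# Hypothetical: precompute parameter values outside MiniZinc and hand
# them over as .dzn data. `score` stands in for the external (e.g.
# Prolog) scoring function; the real scorer would be called here.
def score(i, j):
    return (31 * i + 17 * j) % 100

n = 4
values = [score(i, j) for i in range(1, n + 1) for j in range(1, n + 1)]

# The model side would declare:  array[1..n, 1..n] of int: score;
dzn = "n = {};\nscore = array2d(1..{}, 1..{}, [{}]);\n".format(
    n, n, n, ", ".join(map(str, values)))
print(dzn)
```

The generated string can be written to a file and passed to the compiler alongside the model, the same way any other .dzn data file would be.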

Is there any way to quickly expose C/C++ functions as MATLAB MEX files?

I want to automate the generation of C source MEX files to expose C functions to MATLAB. I'm hoping to find something similar to Boost.Python which uses C++ template meta-programming to wrap existing C++ code without having to modify the underlying source. Note that I'm primarily interested in exposing functions right now, not class instances. If there isn't any library out there that does this already is there a better approach to solve this problem than template meta-programming? I've tried MATLAB's loadlibrary function but think its usage isn't intuitive for most end-users that I work with.
Currently I've been writing C source MEX wrappers manually, which is very time consuming and, most importantly, it's difficult to maintain the wrappers. I don't have any issue with getting my MEX wrappers to work, it's just that the time required to expose the function to MATLAB via MEX is orders of magnitude more than what it takes me to expose them to Python.
This example shows the idea of what I'd like to do. I'd like to expose the TimesTwo function to MATLAB which currently requires that I write a mexFunction that checks the input/output argument counts, converts MATLAB array pointers to C data types, calls the C function TimesTwo, then assigns the results to the MATLAB output arguments.
#include "mex.h"
#include "some_template_library.hpp"

// This is the function I want to expose
double TimesTwo(double in)
{
    return 2 * in;
}

// This macro would expand to generate the mexFunction gateway.
// It would also define the data types and perform error checking.
// I realize the actual implementation would need to be more complex than this.
EXPOSEFUNCTION(TimesTwo, double in, double out)
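Short of template metaprogramming, the gateway boilerplate can also be emitted by a small script, since for scalar-double functions only the name and arity vary. The sketch below (the `make_gateway` helper is invented for illustration) generates a mexFunction in C for a function like TimesTwo; the argument-count checks and mxGetScalar conversions are the same lines every time, which is exactly what makes them generatable:

```python
# Hypothetical generator: emit a C mexFunction gateway for a function
# taking n scalar doubles and returning one double. Only the function
# name and arity vary; the rest is fixed MEX boilerplate.
GATEWAY = """\
#include "mex.h"

double {name}({params});

void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{{
    if (nrhs != {n_in})
        mexErrMsgIdAndTxt("wrap:{name}", "Expected {n_in} input(s).");
    if (nlhs > 1)
        mexErrMsgIdAndTxt("wrap:{name}", "Expected at most 1 output.");
    plhs[0] = mxCreateDoubleScalar({name}({args}));
}}
"""

def make_gateway(name, n_in):
    params = ", ".join("double" for _ in range(n_in))
    args = ", ".join("mxGetScalar(prhs[{}])".format(i) for i in range(n_in))
    return GATEWAY.format(name=name, n_in=n_in, params=params, args=args)

src = make_gateway("TimesTwo", 1)
print(src)
```

Each generated file still has to be compiled with mex, but the hand-written part shrinks to a one-line registration per function.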

Why is Matlab Coder slow?

I'm trying to build a Mex function in Matlab-r2015a using Matlab Coder. The entry point function I want to convert is alg.m which is called by main.m.
Following the procedure, I'm at the step in which I'm asked to "define the type of each input for every entry point function". I choose the automatic procedure and enter main.m.
My problem is: in order to define the type of each input, the Matlab Coder takes a very long time; the same problem appears at the next step, when I have to check whether there are issues in the Matlab code. Is that because Matlab has to execute the whole main.m+alg.m?
I suspect this should be the case because when I impose values of parameters that make the computation faster, the input types and issue checks are done immediately. Anyway, I would like to have some more explanations and, if any, suggestions to solve the problem.
You are correct, both steps Define Input Types and Check for Run-Time Issues run main.m which will in turn run alg.m.
If the input data types for the entry-point function don’t change, two test-benches (namely two versions of your main.m) can be written – a shorter one that invokes the entry-point once for defining input types, and a more comprehensive one that thoroughly exercises alg.m. The former can be used to quickly define input types, and the latter should be used when checking for run-time issues.

Is it possible to pass a cython function as argument to a scipy function?

Scipy has many functions that accept a python callable to perform some operation. In particular, I'm working with a mathematical optimization function scipy.optimize.leastsq that accepts a Python callable as objective function argument. This objective function can be called by leastsq lots of times during the minimization process.
My profiling shows that a lot of time is spent on that objective function. I have sped up some parts of the function using Cython. However, the function itself is still a Python function and calling it repeatedly (as leastsq does) has some overhead.
I think I could get a further increase in speed if the function was a Cython function (using cdef instead of def). So I put my call to leastsq inside the Cython extension and pass a Cython objective function to it. But when I do this I get a compile error at the leastsq call:
Cannot convert 'object (object, object, object)' to Python object
Is there any way to pass a Cython function as an argument to these Scipy functions that require python callables?
Or, in my case, is there any way to access the underlying leastsq implementation and pass the Cython objective function to it?
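For reference, here is a minimal setup of the kind described, with a call counter on the objective (the toy fit problem and names are invented for illustration). The per-call cost of crossing the Python layer in `residuals` is what moving the callback into Cython would shave off:

```python
import numpy as np
from scipy.optimize import leastsq

# Invented toy problem: fit a * exp(b * x) to noiseless data generated
# with a = 3, b = -2, and count how often leastsq invokes the objective.
xdata = np.linspace(0.0, 1.0, 50)
ydata = 3.0 * np.exp(-2.0 * xdata)

calls = [0]

def residuals(p):
    calls[0] += 1                    # every call crosses the Python layer
    a, b = p
    return a * np.exp(b * xdata) - ydata

popt, ier = leastsq(residuals, x0=[1.0, -1.0])
print(popt, calls[0])
```

Even on this tiny problem the objective runs dozens of times; on a realistic model the call count, and hence the callback overhead, scales accordingly.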
Passing cdef functions in is not possible currently. Passing callback functions to the underlying Fortran code is also not possible, as it's wrapped with f2py which doesn't know about Cython.
What you can do is:
Write your function as a Cython def function. These can be passed in to all of Scipy's routines. This does not remove the additional function call overhead that comes from using Python's callback mechanism (which might not be significant!), but you can speed up the implementation of your function, and this might well be enough. Just remember to cdef the variables it uses, as usual when writing Cython code.
Copy the MINPACK source codes from Scipy or netlib.org, and use them directly yourself. This gets rid of the remaining function call overhead by replacing Python function callback mechanism by the low-level one.
(There have been discussions on adding a protocol for passing low-level function pointers around exactly for this purpose, which could be adopted by any Python-based system having a need for it, but AFAIK the design is not completed, and not implemented in Cython & Scipy.)