I have the following bit of matlab code:
f=@(h)(exp(-2*mun*(h/sigma+1.166))-1+2*mun*(h/sigma+1.166))/(2*mun^2)-ARL0;
The parameters aren't important, they are all just constants at this point. What is important is that now I can evaluate that function for any value of h just by calling f(h). In particular, I can find the zeros, min, max, etc of the function over any interval I specify.
I am translating this code into python, mostly as an exercise in learning python, and I was wondering if there is anything similar (perhaps in numpy) that I could use, instead of setting up a numpy array with an arbitrary set of h values to process over.
I could do something like (pseudocode):
f = numpy.array([that_function(h) for h in numpy.arange(hmin, hmax, hstep)])
But this commits me to a step size. Is there any way to avoid that and get the full precision like in matlab?
EDIT: what I actually want at the end of the day is to find the zeroes, max, and min locations (not values) of the function f. It looks like scipy might have some functions that are more useful here: http://docs.scipy.org/doc/scipy/reference/optimize.html
The Python equivalent of a function handle in MATLAB (the @ notation) is called a "lambda function" in Python. The equivalent syntax is like so:
Matlab:
func = @(h) (h+2);
Python:
func = lambda h: h+2
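A quick sanity check that the two behave the same:
print(func(3))  # prints 5, matching func(3) in MATLAB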
For your specific case, you would implement the equivalent of the matlab function like this:
import numpy as np
f = lambda h: (np.exp(-2*mun*(h/sigma+1.166))-1+2*mun*(h/sigma+1.166))/(2*mun**2)-ARL0
f can then be used as a function and applied directly to any numpy array. So this would work, for example:
rarr = np.random.random((100, 20))
frarr = f(rarr)
If you are just looking for the f values at integer values of x, the following will work:
f = [your_function(x) for x in range(hmin, hmax)]
If you want finer granularity than integer steps, note that range only accepts integer arguments; use np.arange instead:
f = [your_function(x) for x in np.arange(hmin, hmax, hstep)]
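If your function is built entirely from NumPy operations, like the lambda f defined above, you can also skip the comprehension and evaluate it on a whole array in one call:
f_values = f(np.arange(hmin, hmax, hstep))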
If you want exact solutions to the zeroes, max, and min locations, I agree with your edit: use scipy optimize.
Two important notes about the scipy optimize functions:
It seems you want to use the bounded versions
There is no guarantee that you will find the actual global min/max with these functions. They are local optimizers that can get stuck in local minima/maxima. If you want to find the mins/maxes symbolically, I suggest looking into Sage. A minimal sketch of the bounded approach follows this list.
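For example, a rough sketch using scipy's bracketing and bounded routines (the interval endpoints 0.1 and 10.0 here are made-up placeholders; pick bounds appropriate to your problem):
from scipy import optimize

# Location of a zero: brentq needs a bracket [a, b] with f(a), f(b) of opposite sign
h_zero = optimize.brentq(f, 0.1, 10.0)

# Location of a minimum on a bounded interval
h_min = optimize.minimize_scalar(f, bounds=(0.1, 10.0), method='bounded').x

# Location of a maximum: minimize the negated function
h_max = optimize.minimize_scalar(lambda h: -f(h), bounds=(0.1, 10.0), method='bounded').x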
Related
I have a lengthy symbolic expression that involves rational polynomials (basic arithmetic and integer powers). I'd like to simplify it into a single (simple) rational polynomial.
numden does it, but it seems to use some expensive optimization, which probably addresses a more general case. When tried on my example below, it crashed after a few hours--out of memory (32GB).
I believe something more efficient is possible, even without low-level (C++) access to MATLAB functionality (e.g. children).
Motivation: I have an objective function that involves polynomials. I manually derived it, and I'd like to verify and compare the derivatives: I subtract the two expressions, and the result should vanish.
Currently, my interest in this is academic since practically, I simply substitute some random expression, get zero, and it's enough for me.
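As an illustration of that random spot check, here is a Python/SymPy sketch with toy stand-in expressions (the real expressions are too large to paste, so expr_manual and expr_auto below are hypothetical):
import random
import sympy as sp

xs = sp.symbols('x0:4', real=True)                  # toy stand-in for sym('x', [1 32], 'real')
expr_manual = (xs[0] + xs[1])**2                    # hypothetical hand-derived form
expr_auto = xs[0]**2 + 2*xs[0]*xs[1] + xs[1]**2     # hypothetical machine-derived form
residual = expr_manual - expr_auto
point = {xi: random.uniform(-1, 1) for xi in xs}
print(residual.evalf(subs=point))                   # ~0 if the two derivations agree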
I'll try to find the time to play with this at some point, and I'll update here about it, but I posted in case someone finds it interesting and would like to give it a try before that.
To run my function:
x = sym('x', [1 32], 'real')
e = func(x)
The function (and believe it or not, this is just the Jacobian, and I also have the Hessian) can't be pasted here since the text limit is 30K:
https://drive.google.com/open?id=1imOAa4VS87WDkOwAK0NoFCJPTK_2QIRj
Good evening everyone,
I want to create a function
f(x) = [f1(x), f2(x), ... , fn(x)]
in MatLab, with an arbitrary form and number for the fi. In my current case they are meant to be basis elements for a finite-dimensional function space, so for example a number of multivariable polynomials. I want to be able to set the form (e.g. Hermite/Lagrange polynomials, ...) and the number via arguments in some sort of "function creating" function, so I would like to solve this for arbitrary functions fi.
Assume for now that the fi are fi:R^d -> R, so vector input to scalar output. This means the result from f should be an n-dim vector containing the output of all n functions. The number of functions n could be fairly large, as there is permutation involved. I also need to evaluate the resulting function very often, so I hope to do it as efficiently as possible.
Currently I see two ways to do this:
Create a cell with each fi using a loop, using something like
funcell{i}=matlabFunction(createpoly(degree, x),'vars',{x})
and one of the functions from the symbolic toolbox and a symbolic x (vector). It is then possible to create the desired function with cellfun, e.g.
f=@(x) cellfun(@(v) v(x), funcell)
This is relatively short, easy, and what usually turns up when searching. It even allows extension to vector-valued fi using 'UniformOutput',false and cell2mat. On the downside it is very inefficient, first during creation because of matlabFunction and then during evaluation because of cellfun.
The other idea I had is to create a string and use eval. One way to do this would be
stringcell{i}=[char(createpoly(degree, x)),';']
and then use strjoin. In theory this should yield an efficient function. There are two problems however. The first is the use of eval (mostly on principle), the second is inserting the correct arguments. The symbolic toolbox does not allow symbols of the form x(i), so the resulting string will not contain them either. The only remedy I have so far is some sort of string replacement on the xi that are allowed, but this is also far from elegant.
So I do have ways to do what I need right now, but I would appreciate any ideas for a better solution.
From my understanding of the problem, you could do the straightforward:
Initialization step:
my_fns = cell(n, 1); % where n is the number of functions
my_fns{1} = @f1; % assuming f1 is defined in f1.m, etc.
my_fns{2} = @f2;
Evaluation at x:
z = zeros(n, 1);
for i = 1:n
    z(i) = my_fns{i}(x);
end
For example if you put it in my_evaluate.m:
function z = my_evaluate(my_fns, x)
n = numel(my_fns); % n must be derived inside the function
z = zeros(n, 1);
for i = 1:n
    z(i) = my_fns{i}(x);
end
How might this possibly be sped up?
That depends on whether you have special structure that can be exploited.
Are there calculations common to some subset of f1 through fn that need not be repeated with each function call? E.g. if the common calculation step is costly, you could do y = f_helper(x) and z(i) = fi(x, y) (see the Python sketch at the end of this answer).
Can the functions f1...fn be vector / matrix friendly, allowing evaluation of multiple points with each function call?
The big issue is how fast your function calls f1 through fn are, not how you collect the results from those calls in a vector.
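For illustration, here is a small Python sketch of the shared-helper idea from the first point above (all function names are hypothetical):
import numpy as np

def f_helper(x):
    # hypothetical costly computation shared by every basis function
    return np.linalg.norm(x)

def f1(x, y):
    return y + x[0]

def f2(x, y):
    return y * x[1]

def my_evaluate(fns, x):
    y = f_helper(x)                        # do the common work once
    return np.array([fi(x, y) for fi in fns])

print(my_evaluate([f1, f2], np.array([3.0, 4.0])))  # [ 8. 20.]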
My question regards using the scipy.interpolate.RectBivariateSpline function to interpolate a 2D mesh. I am essentially trying to emulate functionality of Matlab's interp2 function.
For a particular (light) use case, I call RectBivariateSpline on x and y vectors, which are regularly spaced, monotonically increasing vectors, e.g.:
x = np.arange(0., 20., 1.) # shape (20,)
y = np.arange(0., 20., 1.) # shape (20,)
and some 2D field, e.g.:
fld = np.reshape(np.arange(0., 400., 1.), (20, 20)) # shape (20, 20)
i.e.:
fn = sp.interpolate.RectBivariateSpline(x, y, fld)
To evaluate the interpolation at particular xi, yi coordinates (or arrays, obeying numpy broadcasting), the docs advise calling the RectBivariateSpline.ev function, i.e.:
val1 = fn.ev(1, 1) # Ans: 21
val2 = fn.ev([1, 2], [1, 2]) # Ans: [21, 42]
This allows the user to find one interpolated value, for say (xi, yi) = (1.5, 1.5), or many interpolated values, say on a particular domain (regular or irregular grid).
My question is this: for large xi, yi arrays, how can I make the fn.ev call faster? Compared to the Matlab interp2 call, it is pretty slow (by an order of magnitude or worse).
Following the function call in the debugger, I have found that ev is actually a wrapper for RectBivariateSpline.__call__. This again is a wrapper for a call to the fitpack functions (in C / Fortran) on which Scipy's interpolation functionality is based. The __call__ function has an optional keyword grid, defaulted to False and absent from ev, which allows you to pass two vectors defining an orthogonal grid. Calling with grid set to True leads to the call of the fitpack routine bispev which is MUCH faster than the documented ev function which calls fitpack bispeu. (I assume the performance boost is because bispev exploits the regular grid, while bispeu may simply loop through all the index pairs... though I'm not sure).
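To make that concrete, here is what the two call styles look like (a sketch, assuming a SciPy version whose __call__ accepts the grid keyword):
xi = np.linspace(0., 19., 500)
yi = np.linspace(0., 19., 500)
vals_pairs = fn.ev(xi, yi)         # 500 scattered (xi, yi) pairs -> bispeu, shape (500,)
vals_grid = fn(xi, yi, grid=True)  # 500 x 500 tensor-product grid -> bispev, shape (500, 500)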
In any event, I want to call an .ev like function in such a way that I can interpolate a grid that may not be completely regular (but is close) and that will be faster than the current call to bispeu. "Regularizing" the grid and going with bispev IS an option, and the end results are PRETTY close, and the routine is MUCH faster that way (by a factor of 20!)... however, Matlab interp2 allows the slightly irregular grid and computes with the same speed. I have considered trying to write my own C function, but I'm highly doubtful that such a lowly one as I can do any better than what's already in Scipy and written by geniuses. :)
So.. is there a way I can have my cake and eat it too? Is there some awesome little tricky way I can call the spline evaluator? I have thought about making my own bespoke wrapper for the fitpack calls... but the documentation for fitpack is not readily available (certainly not in the Scipy release I have), and I'd like to avoid that extra work if possible. Also note that this problem is especially irksome because I'm having to call it twice, once for the real and imaginary components of my original field (Matlab takes complex meshes). At the end of the day, I want to give Matlab the BOOT... but this speed issue could be a killer.
Thank you for your time.
I'm trying to model the effect of different filter "building blocks" on a system which is a construct based on these filters.
I would like the basic filters to be "modular", i.e. they should be "replaceable", without rewriting the construct which is based upon the basic filters.
For example, I have a system of filters G_0, G_1, which is defined in terms of some basic filters called H_0 and H_1.
I'm trying to do the following:
syms z
syms H_0(z) H_1(z)
G_0(z)=H_0(z^(4))*H_0(z^(2))*H_0(z)
G_1(z)=H_1(z^(4))*H_0(z^(2))*H_0(z)
This declares the z-domain I'd like to work in, and a construct of two filters G_0,G_1, based on the basic filters H_0,H_1.
Now, I'm trying to evaluate the construct in terms of some basic filters:
H_1(z) = 1+z^-1
H_0(z) = 1+0*z^-1
What I would like to get at this point is an expanded polynomial of z.
E.g. for the declarations above, I'd like to see that G_0(z)=1, and that G_1(z)=1+z^(-4).
I've tried stuff like "subs(G_0(z))", "formula(G_0(z))", "formula(subs(subs(G_0(z))))", but I keep getting results in terms of H_0 and H_1.
Any advice? Many thanks in advance.
Edit - some clarifications:
In reality, I have 10-20 transfer functions like G_0 and G_1, so I'm trying to avoid re-declaring all of them every time I change the basic blocks H_0 and H_1. The basic blocks H_0 and H_1 would actually be of a much higher degree than they are in the example here.
G_0 and G_1 will not change after being declared, only H_0 and H_1 will.
H_0(z^2) means using z^2 as an argument for H_0(z). So wherever z appears in the declaration of H_0, z^2 should be plugged in
The desired output is a function in terms of z, not H_0 and H_1.
A workable hack is having an m-File containing the declarations of the construct (G_0 and G_1 in this example), which is run every time H_0 and H_1 are redefined. I was wondering if there's a more elegant way of doing it, along the lines of the (non-working) code shown above.
This seems to work quite nicely, and is very easily extendable. Redefining H_0 as H_1 at the end is only meant as an example.
syms z
H_1(z) = 1+z^-1;
H_0(z) = 1+0*z^-1;
G_0=@(Ha,z) Ha(z^(4))*Ha(z^(2))*Ha(z);
G_1=@(Ha,Hb,z) Hb(z^(4))*Ha(z^(2))*Ha(z);
G_0(H_0,z)
G_1(H_0,H_1,z)
H_0=@(z) H_1(z);
G_0(H_0,z)
G_1(H_0,H_1,z)
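With these definitions, the first two calls return 1 and 1/z^4 + 1, as desired; after H_0 is redefined as H_1, both G_0 and G_1 evaluate to (1/z^4 + 1)*(1/z^2 + 1)*(1/z + 1), and expand can turn that into a single polynomial in z^-1.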
This seems to be a namespace issue. You can't define a symbolic expression or function in terms of arbitrary/abstract symfuns and then later define those symfuns explicitly and expect to recover an explicit form of the original symbolic expression or function (at least not easily). Here's an example of how a symbolic function can be replaced by name:
syms z y(z)
x(z) = y(z);
y(z) = z^2; % Redefines y(z)
subs(x,'y(z)',y)
Unfortunately, this method depends on specifying the function(s) to be substituted exactly – because strings are used, Matlab sees arbitrary/abstract symfuns with different arguments as different functions. So the following example does not work as it returns y(z^2):
syms z y(z)
x(z) = y(z^2); % Function of z^2 instead
y(z) = z^2;
subs(x,'y(z)',y)
But if the last line was changed to subs(x,'y(z^2)',y) it would work.
So one option might be to form strings for each case, but that seems overly complex and inelegant. I think it would make more sense to simply not explicitly (re)define your arbitrary/abstract H_0, H_1, etc. functions and instead use other variables. In terms of the simple example:
syms z y(z)
x(z) = y(z^2);
y_(z) = z^2; % Create new explicit symfun
subs(x,y,y_)
which returns z^4. For your code:
syms z H_0(z) H_1(z)
G_0(z) = H_0(z^4)*H_0(z^2)*H_0(z);
G_1(z) = H_1(z^4)*H_0(z^2)*H_0(z);
H_0_(z) = 1+0*z^-1;
H_1_(z) = 1+z^-1;
subs(G_0, {H_0, H_1}, {H_0_, H_1_})
subs(G_1, {H_0, H_1}, {H_0_, H_1_})
which returns
ans(z) =
1
ans(z) =
1/z^4 + 1
You can then change H_0_ and H_1_, etc. at will and use subs to evaluate G_0 and G_1 again.
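For what it's worth, the same substitution pattern can be sketched in Python with SymPy, where an undefined Function can be substituted with a Lambda (a rough sketch, in the spirit of the MATLAB-to-Python threads above):
import sympy as sp

z = sp.symbols('z')
H0, H1 = sp.Function('H_0'), sp.Function('H_1')

# constructs defined in terms of the abstract filters
G0 = H0(z**4) * H0(z**2) * H0(z)
G1 = H1(z**4) * H0(z**2) * H0(z)

# explicit definitions, kept separate from the abstract H0, H1
H0_ = sp.Lambda(z, 1 + 0*z**-1)
H1_ = sp.Lambda(z, 1 + z**-1)

print(G0.subs({H0: H0_, H1: H1_}))  # 1
print(G1.subs({H0: H0_, H1: H1_}))  # 1 + z**(-4)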
How can one perform computations in MATLAB that involve large numbers? As a simple example, an arbitrary precision calculator would show that ((1/120)^132)*(370!)/(260!) is approximately 1.56, but MATLAB is not able to perform such a computation (power(120,-132)*factorial(370)/factorial(260) returns NaN, since both factorials overflow to Inf and Inf/Inf is NaN).
I have also tried the following, which does not work:
syms a b c d;
a=120; b=-132; c=370; d=260;
f=sym('power(a,b)*gamma(c+1)/gamma(d+1)')
double(f); % produces error that instructs use of `vpa`
vpa(f) % produces (gamma(c + 1.0)*power(a, b))/gamma(d + 1.0)
(The sym('...') call creates fresh symbolic a, b, c, d that are unrelated to the workspace values assigned above, which is why vpa cannot evaluate the expression numerically; subs would be needed first.)
If you just want to calculate the factorial of some large numbers, you can use the Java arbitrary precision tools, like so:
result = java.math.BigDecimal(1);
for ix = 1:300
result = result.multiply(java.math.BigDecimal(ix));
end
disp(result)
306057512216440636035370461297268629388588804173576999416776741259476533176716867465515291422477573349939147888701726368864263907759003154226842927906974559841225476930271954604008012215776252176854255965356903506788725264321896264299365204576448830388909753943489625436053225980776521270822437639449120128678675368305712293681943649956460498166450227716500185176546469340112226034729724066333258583506870150169794168850353752137554910289126407157154830282284937952636580145235233156936482233436799254594095276820608062232812387383880817049600000000000000000000000000000000000000000000000000000000000000000000000000
The value result in this case is a java object. You can see the available methods here: http://docs.oracle.com/javase/6/docs/api/java/math/BigDecimal.html
I'm still not sure that I would trust this method for (1e6)! though. You'll have to experiment and see.
Depending on what you're trying to do, you may be able to evaluate the expression you're interested in using log-space arithmetic:
log_factorial = sum(log(1:300));
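A sketch of the same idea applied to the original expression, for comparison in Python, using math.lgamma (lgamma(n+1) equals log(n!)) so nothing overflows until the final exponential:
import math

# log of (1/120)^132 * 370! / 260!
log_val = -132*math.log(120) + math.lgamma(371) - math.lgamma(261)
print(math.exp(log_val))  # approximately 1.5625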
You can use Stirling's approximation to approximate large factorials and simplify your expression before computing it numerically.
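In log form, Stirling's approximation is log(n!) ≈ n*log(n) - n + 0.5*log(2*pi*n), so each factorial in the expression can be replaced by such a term and everything combined before taking a single final exponential.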
This will work:
vpa('120^-132*370!/260!')
and the result is
1.5625098001612564605522837520443