Calculating logarithm with own base in Basic (LibreOffice Calc Macro) - libreoffice

LibreOffice Calc has the function LOG(x;n), where you can specify your own base.
However, when I write a function in a Basic macro, Log ignores the second parameter and calculates the natural logarithm instead.
How can I calculate a logarithm with my own base in Basic?

There is a simple change-of-base formula: log_b(x) = ln(x) / ln(b), i.e. divide the natural log of x by the natural log of the base. The function LogBase below was taken from Andrew Pitonyak's OpenOffice.org Macros Explained, page 79.
Sub MyLogarithm
    MsgBox(LogBase(256, 4)) ' displays 4
End Sub

Function LogBase(x, b) As Double
    ' Basic's Log() is the natural logarithm, so divide by the Log of the base
    LogBase = Log(x) / Log(b)
End Function
Excel and VBA behave the same way; see: Logarithm is different using VBA and Excel function.

Related

Why does Octave/Matlab use function handles

What benefit does Octave get from treating functions specially? For example,
function n = f(x)
    n = 2*x;
endfunction
f(2)  % outputs 4
f = @f
f(2)  % still outputs 4
If handles can be called the same way as functions, then what benefit do we get from functions being treated specially? By "specially" I mean that variables referring to functions can't be passed as arguments:
function n = m(f,x)
    n = f(x);
end
m(f,2)  % generates an error since f is called without arguments
Why aren't functions first-class values (always pointed to by variables), as in other functional languages?
EDIT:
It seems like my question has been completely misunderstood, so I will rephrase it. Compare the following python code
def f(x):
    return 2*x

def m(f,x):
    return f(x)
m(f,3)
to the octave code
function n = f(x)
    n = 2*x;
end
function n = m(f,x)
    n = f(x);
end
m(@f,2) % note that we need the @
So my question then is: what exactly is a function "object" in Octave? In Python it is simply a value (functions are primitive objects which can be assigned to variables). What benefit does Octave/Matlab get from treating functions differently, rather than as primitive values the way other functional languages do?
What would the following variables point to (what does the internal structure look like?)
x = 2
function n = f(x)
    n = 2*x;
end
g = @f
In Python, you could simply assign g = f (without needing the indirection with @). Why does Octave not also work this way? What do they gain from treating functions specially (and not as primitive values)?
Variables referring to functions can be passed as arguments in Matlab. Create a file called func.m with the following code:
function [ sqr ] = func( x )
    sqr = x.^2;
end
Create a file called 'test.m' like this
function [ output ] = test( f, x )
    output = f(x);
end
Now, try the following
f = @func;
output = test(f, 3);  % output = 9
There's no "why is it different"; it's a design decision. That's just how Matlab/Octave works, which is very similar to how, say, C works.
I do not have intricate knowledge of the inner workings of either, but presumably a function simply becomes a symbol which can be looked up at runtime and used to call the instructions specified in its definition (which could be either interpreted or precompiled instructions). A "function handle", on the other hand, is more comparable to a function pointer which, just as in C, can either be used to call the function it points to or be passed as an argument.
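For illustration, here is a minimal Octave/Matlab sketch of that distinction (the names h1, h2, and apply are hypothetical, chosen just for this example):
h1 = @sin;             % handle to a named function (here a built-in)
h2 = @(x) 2*x;         % anonymous function, stored directly in a variable
apply = @(f, x) f(x);  % a function that receives a handle as an argument
apply(h1, pi/2)        % ans = 1
apply(h2, 3)           % ans = 6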
This allows matlab/octave to do stuff like define a function completely in its own file, and not require that file to be run / imported for the function to be loaded into memory. It just needs to be accessible (i.e. in the path), and when matlab/octave starts a new session, it will create the appropriate table of available functions / symbols that can be used in the session. Whereas with python, when you define a function in a file, you need to 'run' / import that file for the function definition to be loaded into memory, as a series of interpreted instructions in the REPL session itself. It's just a different way of doing things, and one isn't necessarily better than the other. They're just different design / implementation decisions. Both work very well.
As for whether matlab/octave is good for functional programming / designed with functional programming in mind, I would say that it would not be my language of choice for functional programming; you can do some interesting functional programming with it, but it was not the kind of programming that it was designed for; it was primarily designed with scientific programming in mind, and it excels at that.

Is there an inverse factorial expression I can use in Matlab?

I want to edit this to get numberOfCircuits on its own on the left. Is there a possible way to do this in MATLAB?
e1=power(offeredTraffic,numberOfCircuits)/factorial(numberOfCircuits)/sum
The math for this problem is given in https://math.stackexchange.com/questions/61755/is-there-a-way-to-solve-for-an-unknown-in-a-factorial, but it's unclear how to do this with Matlab's functionality.
I'm guessing the easy part is rearranging:
fact_to_invert = power(offeredTraffic,numberOfCircuits)/sum/e1;
Inverting can be done, for instance, by using fzero. First define a continuous factorial based on the gamma function:
fact = @(n) gamma(n+1);
Then use fzero to invert it numerically:
numberOfCircuits_from_inverse = fzero(@(x) fact(x) - fact_to_invert, 1);
Of course you should round the result for good measure, and if it's not close to an integer then something's wrong.
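For example, a small sketch of that sanity check (tol is an assumed tolerance, not part of the original answer):
n = round(numberOfCircuits_from_inverse);
tol = 1e-6; % assumed tolerance
if abs(n - numberOfCircuits_from_inverse) > tol
    warning('inverse of the factorial is not close to an integer; check the inputs');
end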
Note: it's very bad practice (and brings 7 years of bad luck) to give a variable the same name as a built-in function, such as sum in your example.

Translating matlab code to python: optimizing numerical functions

I have the following bit of matlab code:
f = @(h)(exp(-2*mun*(h/sigma+1.166))-1+2*mun*(h/sigma+1.166))/(2*mun^2)-ARL0;
The parameters aren't important, they are all just constants at this point. What is important is that now I can evaluate that function for any value of h just by calling f(h). In particular, I can find the zeros, min, max, etc of the function over any interval I specify.
I am translating this code into python, mostly as an exercise in learning python, and I was wondering if there is anything similar (perhaps in numpy) that I could use, instead of setting up a numpy array with an arbitrary set of h values to process over.
I could do something like (pseudocode):
f = numpy.array(that function for h in numpy.arange(hmin, hmax,hstep))
But this commits me to a step size. Is there any way to avoid that and get the full precision like in matlab?
EDIT: what I actually want at the end of the day is to find the zeroes, max, and min locations (not values) of the function f. It looks like scipy might have some functions that are more useful here: http://docs.scipy.org/doc/scipy/reference/optimize.html
The Python equivalent of a function handle in MATLAB (the @ notation) is called a "lambda function" in Python. The equivalent syntax is like so:
Matlab:
func = @(h)(h+2);
Python:
func = lambda h: h+2
For your specific case, you would implement the equivalent of the matlab function like this:
import numpy as np
f = lambda h: (np.exp(-2*mun*(h/sigma+1.166))-1+2*mun*(h/sigma+1.166))/(2*mun**2)-ARL0
f can then be used as a function and applied directly to any numpy array. So this would work, for example:
rarr = np.random.random((100, 20))
frarr = f(rarr)
If you are just looking for the values of f at integer values of h, the following will work:
f = [your_function(x) for x in range(hmin, hmax)]
If you want finer granularity than integer steps, note that range only accepts integers; use numpy's arange instead:
f = [your_function(x) for x in np.arange(hmin, hmax, hstep)]
If you want exact solutions to the zeroes, max, and min locations, I agree with your edit: use scipy optimize.
Two important notes about the scipy optimize functions:
It seems you want to use the bounded versions.
There is no guarantee that you will find the actual min/max with these functions: they are local optimization routines that can get trapped in local minima/maxima. If you want to find the mins/maxes symbolically, I suggest looking into Sage.

Random variable from pdf in matlab

I want to simulate some random variables distributed as a Variance Gamma.
I know the pdf ( http://en.wikipedia.org/wiki/Variance-gamma_distribution ) but I don't know the inverse of the cumulative distribution function F, so I can't generate a uniform random variable U and compute x = F^(-1)(U).
I have to do this in MATLAB.
Thank you!
Stefano
The next natural alternative to look into is Von Neumann's "acceptance-rejection method".
If you can find a density g defined on the same space as your f such that
you know how to generate samples from g, and
f(x) <= cg(x), for some c, for all x,
then you are good to go.
If you search the literature, people must have done this. The VG is widely used in pricing options.
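As a minimal sketch of the accept-reject loop (the names f, g, gsample, and c are assumptions here: the target density, an envelope density, a sampler for that envelope, and a constant with f(x) <= c*g(x)):
function x = rejsample(f, g, gsample, c)
% Acceptance-rejection: draw Y ~ g and accept it with probability f(Y)/(c*g(Y))
accepted = false;
while ~accepted
    y = gsample();  % candidate from the envelope density g
    u = rand;       % uniform on (0,1)
    accepted = u <= f(y)/(c*g(y));
end
x = y;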
Following @Drake's idea: for the first step you can use Marsaglia and Tsang’s method from here.
This is the code to generate gamma random numbers:
function x = gamrand(alpha, lambda)
% Gamma(alpha,lambda) generator using the Marsaglia and Tsang method
% (Algorithm 4.33)
if alpha > 1
    d = alpha - 1/3; c = 1/sqrt(9*d); flag = 1;
    while flag
        Z = randn;
        if Z > -1/c
            V = (1 + c*Z)^3; U = rand;
            flag = log(U) > (0.5*Z^2 + d - d*V + d*log(V)); % reject while this holds
        end
    end
    x = d*V/lambda;
else
    % boost: draw from Gamma(alpha+1,lambda) and scale by rand^(1/alpha)
    x = gamrand(alpha + 1, lambda);
    x = x*rand^(1/alpha);
end
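gamrand can then feed a normal variance-mean mixture, which is one standard representation of the variance-gamma distribution. A hedged sketch (the parameter names mu, theta, sigma, nu are assumptions; check them against the parameterisation you use):
mu = 0; theta = 0.1; sigma = 0.2; nu = 0.5;  % assumed VG parameters
G = gamrand(1/nu, 1/nu);                     % gamma mixing variable with mean 1, variance nu
X = mu + theta*G + sigma*sqrt(G)*randn;      % one variance-gamma sample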

Can I Expand "Nested" Symbolic Functions in Matlab?

I'm trying to model the effect of different filter "building blocks" on a system which is a construct based on these filters.
I would like the basic filters to be "modular", i.e. they should be "replaceable", without rewriting the construct which is based upon the basic filters.
For example, I have a system of filters G_0, G_1, which is defined in terms of some basic filters called H_0 and H_1.
I'm trying to do the following:
syms z
syms H_0(z) H_1(z)
G_0(z)=H_0(z^(4))*H_0(z^(2))*H_0(z)
G_1(z)=H_1(z^(4))*H_0(z^(2))*H_0(z)
This declares the z-domain I'd like to work in, and a construct of two filters G_0,G_1, based on the basic filters H_0,H_1.
Now, I'm trying to evaluate the construct in terms of some basic filters:
H_1(z) = 1+z^-1
H_0(z) = 1+0*z^-1
What I would like to get at this point is an expanded polynomial of z.
E.g. for the declarations above, I'd like to see that G_0(z)=1, and that G_1(z)=1+z^(-4).
I've tried things like subs(G_0(z)), formula(G_0(z)), and formula(subs(subs(G_0(z)))), but I keep getting results in terms of H_0 and H_1.
Any advice? Many thanks in advance.
Edit - some clarifications:
In reality, I have 10-20 transfer functions like G_0 and G_1, so I'm trying to avoid re-declaring all of them every time I change the basic blocks H_0 and H_1. The basic blocks H_0 and H_1 would actually be of a much higher degree than they are in the example here.
G_0 and G_1 will not change after being declared, only H_0 and H_1 will.
H_0(z^2) means using z^2 as an argument for H_0(z). So wherever z appears in the declaration of H_0, z^2 should be plugged in
The desired output is a function in terms of z, not H_0 and H_1.
A workable hack is having an m-File containing the declarations of the construct (G_0 and G_1 in this example), which is run every time H_0 and H_1 are redefined. I was wondering if there's a more elegant way of doing it, along the lines of the (non-working) code shown above.
This seems to work quite nicely, and is very easily extendable. I redefined H_0 to H_1 as an example only.
syms z
H_1(z) = 1+z^-1;
H_0(z) = 1+0*z^-1;
G_0 = @(Ha,z) Ha(z^(4))*Ha(z^(2))*Ha(z);
G_1 = @(Ha,Hb,z) Hb(z^(4))*Ha(z^(2))*Ha(z);
G_0(H_0,z)
G_1(H_0,H_1,z)
H_0 = @(z) H_1(z);
G_0(H_0,z)
G_1(H_0,H_1,z)
This seems to be a namespace issue. You can't define a symbolic expression or function in terms of arbitrary/abstract symfuns and then later define those symfuns explicitly and use them to obtain an explicit form of the original symbolic expression or function (at least not easily). Here's an example of how a symbolic function can be replaced by name:
syms z y(z)
x(z) = y(z);
y(z) = z^2; % Redefines y(z)
subs(x,'y(z)',y)
Unfortunately, this method depends on specifying the function(s) to be substituted exactly – because strings are used, Matlab sees arbitrary/abstract symfuns with different arguments as different functions. So the following example does not work as it returns y(z^2):
syms z y(z)
x(z) = y(z^2); % Function of z^2 instead
y(z) = z^2;
subs(x,'y(z)',y)
But if the last line was changed to subs(x,'y(z^2)',y) it would work.
So one option might be to form strings for each case, but that seems overly complex and inelegant. I think that it would make more sense to simply not explicitly (re)define your arbitrary/abstract H_0, H_1, etc. functions and instead use other variables. In terms of the simple example:
syms z y(z)
x(z) = y(z^2);
y_(z) = z^2; % Create new explicit symfun
subs(x,y,y_)
which returns z^4. For your code:
syms z H_0(z) H_1(z)
G_0(z) = H_0(z^4)*H_0(z^2)*H_0(z);
G_1(z) = H_1(z^4)*H_0(z^2)*H_0(z);
H_0_(z) = 1+0*z^-1;
H_1_(z) = 1+z^-1;
subs(G_0, {H_0, H_1}, {H_0_, H_1_})
subs(G_1, {H_0, H_1}, {H_0_, H_1_})
which returns
ans(z) =
1
ans(z) =
1/z^4 + 1
You can then change H_0_ and H_1_, etc. at will and use subs to evaluate G_0 and G_1 again.