Scipy optimize: def functions vs lambda

Is it normal to get different results between:
using a lambda expression
and using a def function
to express constraints with the scipy optimizer?
I'll add code later.

They should be identical unless you're (ab)using somewhat esoteric language features, e.g. pulling variables from outer scopes.
So most likely, there's an issue in your code.
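For illustration, the kind of outer-scope capture that can bite here is language-agnostic. Below is a minimal sketch of the pitfall in Scala (all names illustrative); the same late-binding behaviour applies to Python lambdas that capture a loop variable:

object CapturePitfall extends App {
  // Closures created in a loop capture the *variable*, not its value.
  var fns = List.empty[() => Int]
  var i = 0
  while (i < 3) {
    fns = fns :+ (() => i) // every closure sees the same var i
    i += 1
  }
  println(fns.map(_())) // List(3, 3, 3), not List(0, 1, 2)

  // Freezing the current value in a fresh val per iteration fixes it.
  var safe = List.empty[() => Int]
  var j = 0
  while (j < 3) {
    val snapshot = j
    safe = safe :+ (() => snapshot)
    j += 1
  }
  println(safe.map(_())) // List(0, 1, 2)
}

A def with an explicit parameter (or, in Python, a default argument) binds the value at definition time and avoids the surprise.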

Scala compiler: detecting a pure/impure function

In FP languages like Scala, Haskell, etc., pure functions are used, which makes it possible for the compiler to optimize the code. For example:
val x = method1() // a pure function call
val y = method2   // another pure function call
val c = method3(x, y)
Since method1 and method2 are pure functions, their evaluations are independent of each other, so the compiler can parallelize both calls.
A language like Haskell has constructs (like the IO monad) that tell whether a function is pure or performs some IO operation. But how does the Scala compiler detect that a function is pure?
The general approach to classifying a block of code as pure is to define which operations are pure; since purity composes, a composition of pure operations is pure.
Parallelization isn't actually one of the more important benefits of pure code: the benefit is that any evaluation strategy can be used. Evaluation can be reordered or results can be cached etc. Parallelization is another evaluation strategy but without a good sense of the actual execution cost (and note that modern CPUs and memory hierarchies can make it really difficult to get such a sense), it often slows things down relative to other strategies. For modern pure code, laziness and caching repeated values is often more generally effective, while parallelism is controlled by the developer (one benefit of pure code is that you can make arbitrary changes to how you're parallelizing without changing the semantics of the code).
In the case of Scala, the compiler makes no real effort to classify pure/impure code and generally doesn't try alternative evaluation strategies: control of that is left to the programmer (the language helps somewhat by having call-by-name and lazy).
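For reference, a minimal sketch of the two features mentioned; the names are illustrative:

object EvaluationStrategies extends App {
  def expensive(): Int = { println("evaluating"); 42 }

  // Call-by-name: the argument expression is re-evaluated at every use site.
  def twiceByName(x: => Int): Int = x + x

  // lazy val: evaluation is deferred until first use, then cached.
  lazy val cached = expensive()

  println(twiceByName(expensive())) // prints "evaluating" twice, then 84
  println(cached + cached)          // prints "evaluating" once, then 84
}

With by-name parameters and lazy vals it is the programmer, rather than the compiler, who picks the evaluation strategy.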
The JVM's JIT compiler can and does perform some purity analysis on bytecode when deciding what it can safely inline and reorder. This isn't Scala-specific, though final local variables (aka local vals in Scala or final variables in Java) enable some optimizations that can't otherwise be performed. JavaScript runtimes (for Scala.js) can likewise perform that analysis, and in practice do so quite aggressively, as does LLVM (for Scala Native).
In the general case, purity analysis is equivalent to solving the Halting Problem. In other words, it is impossible to statically decide, in the general case, whether a chunk of code is pure or not.
In a language like Haskell, there is no way of writing impure code, so purity analysis is trivial. Here is a simple function that takes a Haskell program as an argument and tells you whether it is pure or not:
isPureProgram :: a -> Bool
isPureProgram _ = True
Note, I am simplifying a couple of things here:
unsafePerformIO and friends allow you to, well, perform unsafe I/O. It is generally assumed that you know what you are doing when you use these functions.
Exceptions are side-effects.
Contrary to popular belief, the IO monad does not allow you to write impure code in Haskell. What the IO monad does is let you write a pure program which returns a list of IO actions, which, when interpreted by the runtime system, results in an impure computation. However, the Haskell program which generates these IO actions is still pure – it is the interpreter which is impure. But of course, the end result will be the same: an impure computation will be performed.
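The "pure description, impure interpreter" split is easier to see with a toy IO type; here is a minimal sketch in Scala (hand-rolled for illustration, not a real library API):

object ToyIO extends App {
  // A value of IO[A] merely *describes* an effect; building and
  // composing descriptions is pure, nothing runs yet.
  final case class IO[A](run: () => A) {
    def map[B](f: A => B): IO[B] = IO(() => f(run()))
    def flatMap[B](f: A => IO[B]): IO[B] = IO(() => f(run()).run())
  }
  def putStrLn(s: String): IO[Unit] = IO(() => println(s))

  // Pure: this only assembles a bigger description of effects.
  val program: IO[Unit] = putStrLn("hello").flatMap(_ => putStrLn("world"))

  // Impure: the "runtime system" finally interprets the description.
  program.run()
}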
However, since Scala is an impure language at its core, the compiler cannot rely on similar restrictions as a Haskell compiler can, and thus cannot perform purity analysis in the general case.

Is the reason we can use val for defining functions in Scala?

Is the reason a val variable can be used to contain a function definition that functions are first-class citizens, and can therefore be stored in variables?
In Scala damn near everything is an expression. From a practical perspective, what that means is that pretty much every bit of syntactically correct Scala code you can write evaluates to an object that you can do more Scala on. Examples of things you can do to these objects are: call a method on it, pass it to a function, or store it in a val. Expressions can be contrasted with statements, which are just instructions to the computer to do something. An example of the use of statements in Scala is import commands. The heavy prevalence of expressions in Scala is a deliberate design choice intended to make the language more flexible and extensible.
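A minimal illustration of both points, storing a function value in a val and using expressions as values:

object ExpressionsDemo extends App {
  // A function is a first-class value, so it can live in a val.
  val double: Int => Int = x => x * 2

  // Even an if/else is an expression that yields a value.
  val parity = if (double(3) % 2 == 0) "even" else "odd"

  // Function values can be passed around like any other object.
  println(List(1, 2, 3).map(double)) // List(2, 4, 6)
  println(parity)                    // even
}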

Power operator in Chisel

I am trying to find an equivalent of the Verilog power operator ** in Chisel. I went through the Chisel cheat sheet and tutorial but I did not find what I was looking for. After going through designs written in Chisel, I found that the log2xx functions are a popular choice, while the power operator is never used. Of course I can always use the shift operator to get a power of 2, but I was hoping that there is a general power operator in Chisel. I tried to use Scala's math functions to do the job but I got a compilation error.
Since you are trying to calculate a bitwidth, which is computed at elaboration time (i.e. when Scala is elaborating the hardware graph), you can use Scala functions. Scala only provides a power function for Doubles, but that works just fine for this case. Try math.pow(base, exp).toInt; note that base and exp can both be Ints and Scala will automatically convert them to Doubles for the function call. You simply need to convert the resulting Double to an Int for use as a bitwidth.
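A minimal sketch of that suggestion in plain Scala (names are illustrative); this runs at elaboration time, not in hardware:

// Ints are implicitly widened to Double for math.pow,
// and the Double result is truncated back to Int.
def pow(base: Int, exp: Int): Int = math.pow(base, exp).toInt

val depth = pow(2, 10) // 1024
val width = pow(3, 4)  // 81

Note that Double only has 53 bits of mantissa, so for very large results an exact integer method (e.g. BigInt(base).pow(exp)) is the safer choice.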

When to use macros functions in Erlang?

I'm currently following the book Learn You Some Erlang for Great Good by Fred Hébert, and one of the sections covers macros.
I understand using macros for variables (mainly constant values); however, I don't understand the use case for macros as functions. For example, Hébert writes:
Defining a "function" macro is similar. Here's a simple macro used to subtract one number from another:
-define(sub(X, Y), X-Y).
Why not just define this as a function elsewhere? Why use a macro? Is there some sort of performance advantage from the compiler, or is this merely a "this function is so simple, let's just define it in one line" type of thing?
I'm not trying to start a debate or preference argument, but after seeing some production Erlang code, I've started noticing lots of function macro usage.
In this case, the one obvious advantage of the macro not being a function (-define(sub(X, Y), X-Y), which would be safer as -define(sub(X, Y), (X-Y))) is that it can be used in a guard, since custom function calls are forbidden there.
In many other cases it would be safer to define it as a regular function and let the compiler inline it (e.g. via -compile({inline, [sub/2]})).
On the other hand, there are other interesting cases, such as assertions in tests or shortcuts where what you want is to keep some local context in the final place.
For example, let's say I want to make a generic call for a test where the objective is 'match a given pattern and return a given value, or fail after M milliseconds'.
I cannot make this generic with code since patterns are not data structures you are allowed to carry around. However, with macros:
-define(wait_for(PAT, Timeout),
    receive
        PAT -> ok
    after Timeout ->
        error(timeout)
    end).
This macro can then be used as:
my_test() ->
    Pid = start_whatever(),
    %% ...
    ?wait_for({'EXIT', Pid, Reason}, 5000),
    ?assertMatch(shutdown, Reason).
By doing this, I'm able to simplify the form of text in some tests without needing a bunch of nesting, and in a way that is not possible with functions.
Do note that the assertion itself as defined by eunit is using a function macro, and does something akin to
-define(assertMatch(PAT, TERM),
    %% funs to avoid leaking bindings into parent scope
    (fun() ->
        try
            PAT = TERM,
            true
        catch _:_ ->
            error({assertion_failed, ?LINE, ...})
        end
    end)()).
This similarly lets you carry patterns and bindings and do fancy forms that couldn't be possible otherwise.
In this last case, you'll notice I used the ?LINE macro. That's another advantage of macros: you preserve information and locality about the call site, such as its module name, line number, and so on. This is useful when such metadata is required, such as when you're reporting test failures.
If you're looking at old code, there might be macros used as a way of inlining small functions under the assumption that function calls are very expensive. I'm not sure if that was ever true, but it's not something you need to worry about today.
Macros can be used to define constants, like
-define(MAX_TIMEOUT, 30 * 1000).
%% ...
gen_server:call(my_server, {do_stuff, Data}, ?MAX_TIMEOUT),
%% ...
I mostly prefer to pass in environment variables for this job, but it's more work to read them on startup and stash them somewhere and write accessors.
Finally, you can do some simple metaprogramming:
-define(MAKE_REQUEST_FUN(Method),
    Method(Request, HTTPOptions, Options) ->
        httpc:request(Method, Request, HTTPOptions, Options)).

?MAKE_REQUEST_FUN(get).
?MAKE_REQUEST_FUN(put).

%% Now we've defined a get/3 that can be called as
%% get(Request, [], []).

How to safely manipulate MATLAB anonymous functions in string form

I have an anonymous function that I would like to manipulate in string form and then use with fsolve.
When I do this, the anonymous function's references to constants are lost and fsolve fails.
The problem is easily illustrated.
The following works:
A=3;
myfun=@(x)sin(A*x);
x = fsolve(@(x)myfun(x),[1 4],optimoptions('fsolve','Display','off'))
The following throws an error:
A=3;
myfun=@(x)sin(A*x);
mystring=func2str(myfun);
% string operations would go here, such as strrep(mystring,'A','A^2') or whatever
myfun2=str2func(mystring);
x = fsolve(@(x)myfun2(x),[1 4],optimoptions('fsolve','Display','off'))
Is there some way I CAN safely manipulate an anonymous function while retaining references to constant parameters?
More info
Specifically I'm writing a simple wrapper to allow fsolve to accept imaginary numbers for simple cases. The following illustrates a working example without a constant parameter:
myeqn=@(x)0.5*x^2-5*x+14.5;
cX0=1+1*1i;
f1=strrep(func2str(myeqn),'@(x)','');
f2=strrep((f1),'x','(x(1)+(x(2))*1i)');
f3=strcat('@(x)[real(',f2,'); imag(',f2,')]');
fc=str2func(f3);
opts=optimoptions('fsolve','Display','off');
result=arrayfun(@(cinput)[1 1i]*(real(fsolve(fc,[real(cinput);imag(cinput)],opts))),cX0)
As in the failed example above, if I include a parameter in my wrapper, the process fails with the same error.
I originally suggested using the Symbolic Math Toolbox, but reading your question again I realized it's just a simple substitution of input parameters. You can achieve this using function handles, without any string processing.
myeqn=@(x)0.5*x^2-5*x+14.5;
cX0=1+1*1i;
wrapper=@(x,f)([real(f(x(1)+x(2)*1i)),imag(f(x(1)+x(2)*1i))])
opts=optimoptions('fsolve','Display','off');
result=arrayfun(@(cinput)[1 1i]*(real(fsolve(@(x)wrapper(x,myeqn),[real(cinput);imag(cinput)],opts))),cX0)
As much as I hate to suggest using the eval function, you could do:
myfun2 = eval(mystring);
Using eval is kinda frowned upon because it makes code hard to analyze (since arbitrary nastiness could be going on in that string), but don't let other people's coding style stop you from doing what works :)
In your longer example, this would correspond to changing the line:
fc=str2func(f3);
to:
fc=eval(f3);
Again, the use of eval is strongly discouraged, so you should consider alternatives to this type of string manipulation of function definitions.