Wrapping TimerOutputs macros

I've got a situation where it would be handy to have a variable, to, which can either be a TimerOutput or nothing. I'm interested in providing a macro that takes the same arguments as @timeit from TimerOutputs (e.g. @timeit to "time spent" s = foo()). Because to is potentially nothing, I can't simply disable a TimerOutput.
If to is set, my preference would be to pass the arguments along to @timeit, but I could live with calling timer_expr(__module__, false, args...).
If to is not set, I'd want to just return the remaining arguments (something like args[3:end], maybe) as an expression.
I've been fussing with this for a day or so, and can handle each of the cases in isolation. For the case where I'm not involving TimerOutput, this seems to work:
macro no_timer(args...)
    args[3:end][1]
end
And for the case where I am using TimerOutput, I can do this:
macro with_timer(args...)
    timer_expr(__module__, false, args...)
end
Not surprising, as that's just what @timeit does.
I haven't figured out how to handle both cases in one macro. I've gotten the closest by wrapping everything in a ternary operator - i.e. return :(isnothing($(args[1])) ? <expression stuff> : <TimerOutput stuff>), but there is some level of abstraction mismatch that I haven't unsnarled.
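Roughly, the combined macro I've been aiming for would look something like the sketch below (the name maybe_timeit is just a placeholder, it assumes TimerOutputs is loaded at the call site, and I haven't convinced myself the escaping is right):
using TimerOutputs

macro maybe_timeit(to, label, ex)
    # Escape the whole block so `to`, `label` and `ex` resolve in the caller's
    # context; only one branch ever runs, so `ex` appearing twice is harmless.
    esc(quote
        if isnothing($to)
            $ex
        else
            TimerOutputs.@timeit $to $label $ex
        end
    end)
end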
Addendum: I've come to the conclusion that my original framing was an "X-Y" problem. I didn't actually need a new macro to solve my problem - hence my accepting the answer I did. That said, I am struck by the fact that both of the answers proffered stayed well away from defining a macro.

You already have this functionality in TimerOutputs.
Just use the disable_timer! and enable_timer! methods.
julia> const to = TimerOutput();
julia> disable_timer!(to);
julia> @timeit to "sleep" sleep(0.02)
julia> to
────────────────────────────────────────────────────────────────────
                           Time                    Allocations
                   ───────────────────────  ────────────────────────
Tot / % measured:      23.6s / 0.0%              464KiB / 0.0%

Section   ncalls     time    %tot     avg      alloc    %tot     avg
────────────────────────────────────────────────────────────────────
────────────────────────────────────────────────────────────────────
julia> enable_timer!(to);
julia> @timeit to "sleep" sleep(0.02)
julia> to
────────────────────────────────────────────────────────────────────
                           Time                    Allocations
                   ───────────────────────  ────────────────────────
Tot / % measured:      37.3s / 0.1%              777KiB / 0.0%

Section   ncalls     time    %tot     avg      alloc    %tot     avg
────────────────────────────────────────────────────────────────────
sleep          1   22.6ms  100.0%  22.6ms       320B  100.0%    320B
────────────────────────────────────────────────────────────────────

You can define your own, more specific version of the timer_expr function that the macro calls, where the returned expression contains a check for the to argument being nothing and then does the appropriate thing.
import TimerOutputs                 # for the TimerOutputs.* internals used below
import TimerOutputs: timer_expr

function timer_expr(m::Module, is_debug::Bool, to::Symbol, label::String, ex::Expr)
    unescaped(ex) = ex.head == :escape ? ex.args[1] : ex
    # this is from the original timer_expr functions,
    # to be used when `to` isn't nothing
    timer_ex = TimerOutputs.is_func_def(ex) ?
        unescaped(TimerOutputs.timer_expr_func(m, is_debug, to, ex, label)) :
        TimerOutputs._timer_expr(m, is_debug, to, label, ex)
    cond_ex = esc(:(if isnothing($to)
        $ex
    else
        placeholder # dummy symbol, to be replaced below
    end))
    # args[3] is the `else` section; the placeholder is args[2] within that
    unescaped(cond_ex).args[3].args[2] = timer_ex
    cond_ex
end
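Assuming the internals referenced above behave as shown, the stock @timeit macro should then tolerate a to that may be nothing. A rough usage sketch (variable names are just illustrative):
using TimerOutputs

to = TimerOutput()
@timeit to "work" s = sum(1:10)   # recorded in the timer

to = nothing
@timeit to "work" s = sum(1:10)   # just runs the expression; nothing is recorded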

Related

Calling macro from within generated function in Julia

I have been messing around with generated functions in Julia, and have come to a weird problem I do not fully understand. My final goal would involve calling a macro (more specifically @tullio) from within a generated function (to perform some tensor contractions that depend on the input tensors). But I have been having problems, which I narrowed down to calling the macro from within the generated function.
To illustrate the problem, let's consider a very simple example that also fails:
macro my_add(a, b)
    return :($a + $b)
end

function add_one_expr(x::T) where T
    y = one(T)
    return :( @my_add($x, $y) )
end

@generated function add_one_gen(x::T) where T
    y = one(T)
    return :( @my_add($x, $y) )
end
With these declarations, I find that eval(add_one_expr(2.0)) works just as expected and returns an expression
:(@my_add 2.0 1.0)
which correctly evaluates to 3.0.
However evaluating add_one_gen(2.0) returns the following error:
MethodError: no method matching +(::Type{Float64}, ::Float64)
Doing some research, I have found that @generated actually produces two codes, and in one only the types of the variables can be used. I think this is what is happening here, but I do not understand what is happening at all. It must be some weird interaction between macros and generated functions.
Can someone explain and/or propose a solution? Thank you!
I find it helpful to think of generated functions as having two components: the body and any generated code (the stuff inside a quote..end). The body is evaluated at compile time, and doesn't "know" the values, only the types. So for a generated function taking x::T as an argument, any references to x in the body will actually point to the type T. This can be very confusing. To make things clearer, I recommend the body only refer to types, never to values.
Here's a little example:
julia> @generated function show_val_and_type(x::T) where {T}
           quote
               println("x is ", x)
               println("\$x is ", $x)
               println("T is ", T)
               println("\$T is ", $T)
           end
       end
show_val_and_type
julia> show_val_and_type(3)
x is 3
$x is Int64
T is Int64
$T is Int64
The interpolated $x means "take the x from the body (which refers to T) and splice it in".
If you follow the approach of never referring to values in the body, you can test generated functions by removing the @generated, like this:
julia> function add_one_gen(x::T) where T
           y = one(T)
           quote
               @my_add(x, $y)
           end
       end
add_one_gen
julia> add_one_gen(3)
quote
    #= REPL[42]:4 =#
    #= REPL[42]:4 =# @my_add x 1
end
That looks reasonable, but when we test it we get
julia> add_one_gen(3)
ERROR: UndefVarError: x not defined
Stacktrace:
 [1] macro expansion
   @ ./REPL[48]:4 [inlined]
 [2] add_one_gen(x::Int64)
   @ Main ./REPL[48]:1
 [3] top-level scope
   @ REPL[49]:1
So let's see what the macro gives us
julia> @macroexpand @my_add x 1
:(Main.x + 1)
It's pointing to Main.x, which doesn't exist. The macro is being too eager, and we need to delay its evaluation. The standard way to do this is with esc. So finally, this works:
julia> macro my_add(a, b)
           return :($(esc(a)) + $(esc(b)))
       end
@my_add

julia> @generated function add_one_gen(x::T) where T
           y = one(T)
           quote
               @my_add(x, $y)
           end
       end
add_one_gen
julia> add_one_gen(3)
4

Fast iteration over unicode string Cython

I have the following cython function.
 01:
+02: cdef int count_char_in_x(unicode x, Py_UCS4 c):
 03:     cdef:
+04:         int count = 0
 05:         Py_UCS4 x_k
 06:
+07:     for x_k in x: ## Yellow
+08:         if x_k == c:
+09:             count += 1
 10:
+11:     return count
Line 07 is not properly optimized.
The annotated HTML code is expanded as:
+07: for x_k in x: ## Yellow
    if (unlikely(__pyx_v_x == Py_None)) {
      PyErr_SetString(PyExc_TypeError, "'NoneType' is not iterable");
      __PYX_ERR(0, 8, __pyx_L1_error)
    }
    __Pyx_INCREF(__pyx_v_x);
    __pyx_t_1 = __pyx_v_x;
    __pyx_t_6 = __Pyx_init_unicode_iteration(__pyx_t_1, (&__pyx_t_3), (&__pyx_t_4), (&__pyx_t_5)); if (unlikely(__pyx_t_6 == ((int)-1))) __PYX_ERR(0, 8, __pyx_L1_error)
    for (__pyx_t_7 = 0; __pyx_t_7 < __pyx_t_3; __pyx_t_7++) {
      __pyx_t_2 = __pyx_t_7;
      __pyx_v_x_k = __Pyx_PyUnicode_READ(__pyx_t_5, __pyx_t_4, __pyx_t_2);
Any tips on how this could be improved?
I think it should be possible to write a cdef/cpdef function that at runtime completely avoids Python None type checks. Any idea how this could be done?
The generated C code looks pretty good to me. The loop overall is an int-iterated for loop (i.e. it's not relying on calling the Python methods __iter__ and __next__).
__Pyx_PyUnicode_READ is translated pretty directly to PyUnicode_READ (depending slightly on the Python version you're using). PyUnicode_READ is a C macro which is as close to a direct array access as you can get.
This is probably as good as it's getting. You might get a small improvement by using bytes rather than unicode (provided you're dealing with ASCII characters). You might just consider whether it's really worth reimplementing unicode.count.
If it were a regular def function you could declare x as unicode not None to remove the None check before the loop. That might make a small difference. However, as @ead points out, that isn't supported for cdef functions. It's likely the cost of a def function call will be slightly larger than the cost of a None-check, but you should time it if you care.
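For what it's worth, here's a sketch of that def variant with the not None clause (untested here, and only worthwhile if the def call overhead doesn't dominate):
def count_char_in_x_def(unicode x not None, Py_UCS4 c):
    # `not None` lets Cython drop the None check before the loop.
    cdef int count = 0
    cdef Py_UCS4 x_k
    for x_k in x:
        if x_k == c:
            count += 1
    return count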

Calculating the e number using Raku

I'm trying to calculate the e constant (AKA Euler's Number) by summing the series e = 1/0! + 1/1! + 1/2! + 1/3! + ⋯
In order to calculate the factorial and division in one shot, I wrote this:
my @e = 1, { state $a = 1; 1 / ($_ * $a++) } ... *;
say reduce * + * , @e[^10];
But it didn't work out. How to do it correctly?
I analyze your code in the section Analyzing your code. Before that I present a couple of fun sections of bonus material.
One liner, one letter1
say e; # 2.718281828459045
"A treatise on multiple ways"2
That's the title of Damian Conway's extraordinary article on computing e in Raku.
The article is a lot of fun (after all, it's Damian). It's a very understandable discussion of computing e. And it's a homage to Raku's bicarbonate reincarnation of the TIMTOWTDI philosophy espoused by Larry Wall.3
As an appetizer, here's a quote from about halfway through the article:
Given that these efficient methods all work the same way—by summing (an initial subset of) an infinite series of terms—maybe it would be better if we had a function to do that for us. And it would certainly be better if the function could work out by itself exactly how much of that initial subset of the series it actually needs to include in order to produce an accurate answer...rather than requiring us to manually comb through the results of multiple trials to discover that.
And, as so often in Raku, it’s surprisingly easy to build just what we need:
sub Σ (Unary $block --> Numeric) {
    (0..∞).map($block).produce(&[+]).&converge
}
Analyzing your code
Here's the first line, generating the series:
my @e = 1, { state $a = 1; 1 / ($_ * $a++) } ... *;
The closure ({ code goes here }) computes a term. A closure has a signature, either implicit or explicit, that determines how many arguments it will accept. In this case there's no explicit signature. The use of $_ (the "topic" variable) results in an implicit signature that requires one argument that's bound to $_.
The sequence operator (...) repeatedly calls the closure on its left, passing the previous term as the closure's argument, to lazily build a series of terms until the endpoint on its right, which in this case is *, shorthand for Inf aka infinity.
The topic in the first call to the closure is 1. So the closure computes and returns 1 / (1 * 1) yielding the first two terms in the series as 1, 1/1.
The topic in the second call is the value of the previous one, 1/1, i.e. 1 again. So the closure computes and returns 1 / (1 * 2), extending the series to 1, 1/1, 1/2. It all looks good.
The next call computes 1 / (1/2 * 3), which is 0.666667. That term should be 1 / (1 * 2 * 3). Oops.
Making your code match the formula
Your code is supposed to match the formula:
In this formula, each term is computed based on its position in the series. The kth term in the series (where k=0 for the first 1) is just factorial k's reciprocal.
(So it's got nothing to do with the value of the prior term. Thus $_, which receives the value of the prior term, shouldn't be used in the closure.)
Let's create a factorial postfix operator:
sub postfix:<!> (\k) { [×] 1 .. k }
(× is an infix multiplication operator, a nicer looking Unicode alias of the usual ASCII infix *.)
That's shorthand for:
sub postfix:<!> (\k) { 1 × 2 × 3 × .... × k }
(I've used pseudo metasyntactic notation inside the braces to denote the idea of including as many terms as required.)
More generally, putting an infix operator op in square brackets at the start of an expression forms a composite prefix operator that is the equivalent of a reduce with &[op]. See Reduction metaoperator for more info.
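For example, reducing a range with × gives a factorial directly:
say [×] 1 .. 5;    # 120, i.e. 5!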
Now we can rewrite the closure to use the new factorial postfix operator:
my @e = 1, { state $a = 1; 1 / $a++! } ... *;
Bingo. This produces the right series.
... until it doesn't, for a different reason. The next problem is numeric accuracy. But let's deal with that in the next section.
A one liner derived from your code
Maybe compress the three lines down to one:
say [+] .[^10] given 1, { 1 / [×] 1 .. ++$ } ... Inf
.[^10] applies to the topic, which is set by the given. (^10 is shorthand for 0..9, so the above code computes the sum of the first ten terms in the series.)
I've eliminated the $a from the closure computing the next term. A lone $ is the same as (state $), an anonymous state scalar. I made it a pre-increment instead of post-increment to achieve the same effect as you did by initializing $a to 1.
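For example, a lone $ keeps its value between calls of the same closure (the name next-index is just for illustration):
my &next-index = { ++$ };
say next-index() for ^3;    # 1, then 2, then 3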
We're now left with the final (big!) problem, pointed out by you in a comment below.
Provided neither of its operands is a Num (a float, and thus approximate), the / operator normally returns a 100% accurate Rat (a limited precision rational). But if the denominator of the result exceeds 64 bits then that result is converted to a Num -- which trades accuracy for performance, a tradeoff we don't want to make. We need to take that into account.
To specify unlimited precision as well as 100% accuracy, simply coerce the operation to use FatRats. To do this correctly, just make (at least) one of the operands be a FatRat (and none others be a Num):
say [+] .[^500] given 1, { 1.FatRat / [×] 1 .. ++$ } ... Inf
I've verified this to 500 decimal digits. I expect it to remain accurate until the program crashes due to exceeding some limit of the Raku language or Rakudo compiler. (See my answer to Cannot unbox 65536 bit wide bigint into native integer for some discussion of that.)
Footnotes
1 Raku has a few important mathematical constants built in, including e, i, and pi (and its alias π). Thus one can write Euler's Identity in Raku somewhat like it looks in math books. With credit to RosettaCode's Raku entry for Euler's Identity:
# There's an invisible character between <> and i⁢π character pairs!
sub infix:<⁢> (\left, \right) is tighter(&infix:<**>) { left * right };
# Raku doesn't have built in symbolic math so use approximate equal
say e**i⁢π + 1 ≅ 0; # True
2 Damian's article is a must read. But it's just one of several admirable treatments that are among the 100+ matches for a google for 'raku "euler's number"'.
3 See TIMTOWTDI vs TSBO-APOO-OWTDI for one of the more balanced views of TIMTOWTDI written by a fan of python. But there are downsides to taking TIMTOWTDI too far. To reflect this latter "danger", the Perl community coined the humorously long, unreadable, and understated TIMTOWTDIBSCINABTE -- There Is More Than One Way To Do It But Sometimes Consistency Is Not A Bad Thing Either, pronounced "Tim Toady Bicarbonate". Strangely enough, Larry applied bicarbonate to Raku's design and Damian applies it to computing e in Raku.
There are fractions in $_. Thus you need 1 / (1/$_ * $a++), or rather $_ / $a++.
In Raku you could do this calculation step by step:
1.FatRat,1,2,3 ... * #1 1 2 3 4 5 6 7 8 9 ...
andthen .produce: &[*] #1 1 2 6 24 120 720 5040 40320 362880
andthen .map: 1/* #1 1 1/2 1/6 1/24 1/120 1/720 1/5040 1/40320 1/362880 ...
andthen .produce: &[+] #1 2 2.5 2.666667 2.708333 2.716667 2.718056 2.718254 2.718279 2.718282 ...
andthen .[50].say #2.71828182845904523536028747135266249775724709369995957496696762772

Does MATLAB let you assign default values for input arguments of a function like Python does?

I am working on a project with many functions to create, and they need lots of debugging, so instead of just hitting the Run button I have to go to the command window and give a function call.
Does MATLAB support assignment of default values to input arguments like Python does?
In Python:
def some_fcn(arg1=a, arg2=b):
    # THE CODE
If you now call it without passing the arguments, it doesn't give errors, but if you try the same in MATLAB it gives an error.
For assigning default values, you might find it easier to use the exist function instead of nargin.
function f(arg1, arg2, arg3)
    if ~exist('arg2', 'var')
        arg2 = arg2Default;
    end
The advantage is that if you change the order of arguments, you don't need to update this part of the code, but when you use nargin you have to start counting and updating numbers.
If you are writing a complex function that requires validation of inputs, default argument values, key-value pairs, passing options as structs, etc., you could use the inputParser object. This solution is probably overkill for simple functions, but you might keep it in mind for your monster function that solves equations, plots results and brings you coffee. It somewhat resembles what you can do with Python's argparse module.
You configure an inputParser like so:
>> p = inputParser();
>> p.addRequired('x', @isfinite)        % validation function
>> p.addOptional('y', 123)              % default value
>> p.addParamValue('label', 'default')  % default value
Inside a function, you would typically call it with p.parse(varargin{:}) and look for your parameters in p.Results. Some quick demonstration on the command line:
>> p.parse(44); disp(p.Results)
label: 'default'
x: 44
y: 123
>> p.parse()
Not enough input arguments.
>> p.parse(Inf)
Argument 'x' failed validation isfinite.
>> p.parse(44, 55); disp(p.Results)
label: 'default'
x: 44
y: 55
>> p.parse(13, 'label', 'hello'); disp(p.Results)
label: 'hello'
x: 13
y: 123
>> p.parse(88, 13, 'option', 12)
Argument 'option' did not match any valid parameter of the parser.
You can kind of do this with nargin
function out = some_fcn(arg1, arg2)
    switch nargin
        case 0
            arg1 = a;
            arg2 = b;
            %// etc
    end
but where are a and b coming from? Are they dynamically assigned? Because that affects the validity of this solution.
After a few seconds of googling I found that as is often the case, Loren Shure has already solved this problem for us. In this article she outlines exactly my method above, why it is ugly and bad and how to do better.
You can use nargin in your function code to detect when no arguments are passed, and assign default values or do whatever you want in that case.
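A minimal sketch of that nargin pattern (the function name and the default are just for illustration):
function y = scaleBy(x, factor)
    if nargin < 2
        factor = 1;   % default when the caller omits the second argument
    end
    y = x * factor;
end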
MathWorks has a new solution for this in R2019b, namely, the arguments block. There are a few rules for the arguments block, naturally, so I would encourage you to learn more by viewing the Function Argument Validation help page. Here is a quick example:
function ret = someFunction(x, y)
%SOMEFUNCTION Calculates some stuff.
    arguments
        x (1, :) double {mustBePositive}
        y (2, 3) logical = true(2, 3)
    end
    % ...stuff is done, ret is defined, etc.
end
Wrapped into this is narginchk, inputParser, validateattributes, varargin, etc. It can be very convenient. Regarding default values, they are simply the arguments that are assigned a value in the arguments block. In the example above, x isn't given an assignment, whereas y = true(2, 3) if no value is given when the function is called. If you wanted x to also have a default value, you could change it to, say, x (1, :) double {mustBePositive} = 0.5 * ones(1, 4).
There is a more in-depth answer at How to deal with name/value pairs of function arguments in MATLAB
that hopefully can spare you some headache in getting acquainted with the new functionality.

Scala while loop returns Unit all the time

I have the following code, but I can't get it to work. As soon as I place a while loop inside the case, it returns Unit, no matter what I change within the braces.
case While(c, body) =>
  while (true) {
    eval(Num(1))
  }
}
How can I make this while loop return a non-Unit type?
I tried adding brackets around my while condition, but still it doesn't do what it's supposed to.
Any pointers?
Update
A little more background information, since I didn't really explain what the code should do, which seems handy if I want to receive some help:
I have defined an eval(exp : Exp). This will evaluate an expression.
Exp is an abstract class, extended by several classes like Plus, Minus (a few more basic operations) and an IfThenElse(cond : Exp, then : Exp, else : Exp). Last but not least, there's the While(cond : Exp, body : Exp).
Example of how it should be used:
eval(Plus(Num(1), Num(4))) would result in NumValue(5). (Evaluation of Num(v : Value) results in NumValue(v). NumValue extends Value, which is another abstract class.)
eval(While(Lt(Num(1),Var("n")), Plus(Num(1), Var("n"))))
Lt(a : Exp, b : Exp) returns NumValue(1) if a < b.
It's probably clear from the other answer that Scala while loops always return Unit. What's nice about Scala is that if it doesn't do what you want, you can always extend it.
Here is the definition of a while-like construct that returns the result of the last iteration (it will throw an exception if the loop is never entered):
def whiley[T](cond: => Boolean)(body: => T): T = {
  @scala.annotation.tailrec
  def loop(previous: T): T = if (cond) loop(body) else previous
  if (cond) loop(body) else throw new Exception("Loop must be entered at least once.")
}
...and you can then use it as a while. (In fact, the @tailrec annotation will make it compile into the exact same thing as a while loop.)
var x = 10
val atExit = whiley(x > 0) {
  val squared = x * x
  println(x)
  x -= 1
  squared
}
println("The last time x was printed, its square was: " + atExit)
(Note that I'm not claiming the construct is useful.)
Which iteration would you expect this loop to return? If you want a Seq of the results of all iterations, use a for expression (also called for comprehension). If you want just the last one, create a var outside the loop, set its value on each iteration, and return that var after the loop. (Also look into other looping constructs that are implemented as functions on different types of collections, like foldLeft and foldRight, which have their own interesting behaviors as far as return value goes.) The Scala while loop returns Unit because there's no sensible one size fits all answer to this question.
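For instance, a minimal sketch of the var-outside-the-loop pattern (names are illustrative, not tied to the asker's eval):
def lastSquare(start: Int): Int = {
  var x = start
  var last = 0           // overwritten on every iteration
  while (x > 0) {
    last = x * x
    x -= 1
  }
  last                   // the value from the final iteration
}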
(By the way, there's no way for the compiler to know this, but the loop you wrote will never return. If the compiler could theoretically be smart enough to figure out that while(true) never terminates, then the expected return type would be Nothing.)
The only purpose of a while loop is to execute a side-effect. Or put another way, it will always evaluate to Unit.
If you want something meaningful back, why don't you consider using an if-else-expression or a for-expression?
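For example, a for expression hands the per-iteration values back directly (a sketch, unrelated to the asker's interpreter):
// Collects a value from every iteration instead of discarding it.
val squares = for (x <- 1 to 10) yield x * x   // Vector(1, 4, 9, ..., 100)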
As everyone else and their mothers said, while loops do not return values in Scala. What no one seems to have mentioned is that there's a reason for that: performance.
Returning a value has an impact on performance, so the compiler would have to be smart about when you do need that return value, and when you don't. There are cases where that can be trivially done, but there are complex cases as well. The compiler would have to be smarter, which means it would be slower and more complex. The cost was deemed not worth the benefit.
Now, there are two looping constructs in Scala (all the others are based on these two): while loops and recursion. Scala can optimize tail recursion, and the result is often faster than while loops. Or, otherwise, you can use while loops and get the result back through side effects.
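As a sketch of that last point, a countdown written with tail recursion (which scalac compiles into a loop) returns its result directly instead of going through a side effect:
import scala.annotation.tailrec

@tailrec
def countdown(x: Int, last: Int = 0): Int =
  if (x <= 0) last                 // the accumulated result is the return value
  else countdown(x - 1, x * x)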