I've been trying to wrap my head around this: I've written four functions that I expect to run the same, and I'm curious why they behave differently.
toEffect :: Tuple Int String -> Effect Unit
toEffect (Tuple i strng) =
  log $ append (show i <> ": ") $
    statefulPuzzleToString $
    selectFirstLadderBruteForce $
    parsePuzzle strng
main1 :: Effect Unit
main1 = (toEffect $ Tuple 1 $ fromMaybe "" $ hardestBoardStringsX11 !! 0) >>=
  (\_ -> toEffect $ Tuple 2 $ fromMaybe "" $ hardestBoardStringsX11 !! 1) >>=
  (\_ -> toEffect $ Tuple 3 $ fromMaybe "" $ hardestBoardStringsX11 !! 2) >>=
  (\_ -> toEffect $ Tuple 4 $ fromMaybe "" $ hardestBoardStringsX11 !! 3)
-- ... Pattern could continue for all 11 boards
main2 :: Effect Unit
main2 = do
  toEffect $ Tuple 1 $ fromMaybe "" $ hardestBoardStringsX11 !! 0
  toEffect $ Tuple 2 $ fromMaybe "" $ hardestBoardStringsX11 !! 1
  toEffect $ Tuple 3 $ fromMaybe "" $ hardestBoardStringsX11 !! 2
  toEffect $ Tuple 4 $ fromMaybe "" $ hardestBoardStringsX11 !! 3
-- ... Pattern could continue for all 11 boards
main3 :: Effect Unit
main3 = foldl
  (\acc new -> acc >>= \_ -> new)
  (pure unit)
  effects
  where
    effects :: Array (Effect Unit)
    effects = map toEffect $ mapWithIndex Tuple hardestBoardStringsX11
main4 :: Effect Unit
main4 = traverse_ toEffect $ mapWithIndex Tuple hardestBoardStringsX11
For the first two, the console appears to display each effect as it happens; there's maybe upwards of a half-second delay between log statements. I'd be extremely surprised to see these behave differently, as I understand that the do notation in main2 is just syntactic sugar for what was written in main1.
The second two appear to log all their statements simultaneously.
I'm not entirely certain about main4, but I feel pretty confident that main3 really ought to behave the same as the first two.
Any insight into what's happening here?
Both main3 and main4 behave that way for the same reason, and the reason is the difference between evaluation and execution.
When you have a value of type Effect a, which represents some effect that produces a, presumably you got that value from somewhere. Let's say:
myEffect = makeMeAnEffect "foo"
This value has been evaluated inside makeMeAnEffect, but hasn't yet been executed. Here, "evaluation" means performing whatever computation is necessary to produce a value of type Effect a. Creating this value may involve some computation - e.g. multiplying numbers, traversing strings, adding matrices. That's all "evaluation".
But the result of evaluation is a "description" of what should happen when the effect is executed. Here "execution" means "running" the effect, making whatever action it describes actually happen.
Evaluation and execution are technically separate concepts. Many languages conflate them, but pure functional languages, such as PureScript and Haskell, maintain a strict separation: first you create the description of what should happen ("evaluation"), and then "run" that description ("execution").
This distinction is very important in practice: "evaluation" is pure, which means it's completely unobservable except for its result, and so the compiler can do whatever it wants with it - e.g. optimize it, roll/unroll it, or even drop it completely - as long as its result stays the same. "Execution", on the other hand, has to be carried out in exactly the way the programmer specified, because its whole point is to produce effects, so messing with it would have observable consequences.
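As a minimal illustration (a hypothetical standalone module, not part of your code): the value below is only a description, and nothing is printed until something actually runs it.
module Sketch where

import Prelude
import Effect (Effect)
import Effect.Console (log)

-- evaluation: this merely builds a description of "print hello"
description :: Effect Unit
description = log "hello"

-- execution: running main executes the description twice,
-- so "hello" is printed twice
main :: Effect Unit
main = description *> description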
In your particular case, in the body of toEffect, evaluation is everything that happens after log $. All those calls to append, selectFirstLadderBruteForce, and so on: all of that is "evaluation". None of it is effectful. You're performing some computation in order to figure out what sort of effect you're going to create.
And then, once you've done all that computation, you pass its result to log, and that gives you an Effect Unit, which is a "description of what should happen". And in this particular case, "what should happen" is very small - just writing a single string to the console.
And now, finally, we can get to the difference between main1/main2 and main3/main4.
In main1 and main2, you're creating each next effect only after the previous one has been executed. So evaluation and execution interleave, so to speak: first you do the first evaluation, creating the first effect, then you run it, and only after it's done running do you move on to the second evaluation and the second effect. And so on. Since the expensive part (in your case) is evaluation, each execution ends up delayed by however long the preceding evaluation takes.
In main3 and main4, on the other hand, you do all the evaluation first, creating all the effects at once by calling map toEffect on an array, and only then do you execute them one by one. And since, again, evaluation (in your case) is the expensive part, and all of it happens at the beginning, the executions are not delayed. Each effect is very small (just print to the console), so they all execute very quickly.
If you really want to prevent the next evaluation from happening until the previous execution is done, you can do this trick: add a pure unit at the beginning of toEffect like so:
toEffect (Tuple i strng) = do
  pure unit
  log $ append (show i <> ": ") $
    statefulPuzzleToString $
    selectFirstLadderBruteForce $
    parsePuzzle strng
This will make sure that the second line doesn't start evaluating until the first line has executed, thus making each evaluation happen only right before its respective execution.
And finally, another fun fact: in Haskell the same program would behave differently, because Haskell is lazy. When asked to do an evaluation, it doesn't do it right away; it just "remembers" that it's been asked to. Only when the evaluation's result is actually needed (which would happen on execution) is it performed.
PureScript, on the other hand, is strict, which means it will always compute everything right away. In this particular case, it means it will compute the whole append etc. series of calls before it can pass their result to log.
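For instance, here is a minimal GHC sketch of that difference (the names are made up):
-- Hypothetical sketch: `expensive` is a thunk. Constructing `action` is cheap;
-- the sum is only computed when the IO action is executed and putStrLn
-- demands the string.
action :: IO ()
action = putStrLn expensive
  where
    expensive = show (sum [1 .. 10000000 :: Integer])

main :: IO ()
main = action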
I'm looking to create a macro in P6 which converts its argument to a string.
Here's my macro:
macro tfilter($expr) {
    quasi {
        my $str = Q ({{{$expr}}});
        filter-sub $str;
    };
}
And here is how I call it:
my @some = tfilter(age < 50);
However, when I run the program, I obtain the error:
Unable to parse expression in quote words; couldn't find final '>'
How do I fix this?
Your use case, converting some code to a string via a macro, is very reasonable. There isn't an established API for this yet (even in my head), although I have come across and thought about the same use case. It would be nice in cases such as:
assert a ** 2 + b ** 2 == c ** 2;
This assert statement macro could evaluate its expression, and if it fails, it could print it out. Printing it out requires stringifying it. (In fact, in this case, having file-and-line information would be a nice touch also.)
(Edit: 007 is a language laboratory to flesh out macros in Perl 6.)
Right now in 007 if you stringify a Q object (an AST), you get a condensed object representation of the AST itself, not the code it represents:
$ bin/007 -e='say(~quasi { 2 + 2 })'
Q::Infix::Addition {
    identifier: Q::Identifier "infix:+",
    lhs: Q::Literal::Int 2,
    rhs: Q::Literal::Int 2
}
This is potentially more meaningful and immediate than outputting source code. Consider also the fact that it's possible to build ASTs that were never source code in the first place. (And people are expected to do this. And to mix such "synthetic Qtrees" with natural ones from programs.)
So maybe what we're looking at is a property on Q nodes called .source or something. Then we'd be able to do this:
$ bin/007 -e='say((quasi { 2 + 2 }).source)'
2 + 2
(Note: doesn't work yet.)
It's an interesting question what .source ought to output for synthetic Qtrees. Should it throw an exception? Or just output <black box source>? Or do a best-effort attempt to turn itself into stringified source?
Coming back to your original code, this line fascinates me:
my $str = Q ({{{$expr}}});
It's actually a really cogent attempt to express what you want to do (turn an AST into its string representation). But I doubt it'll ever work as-is. In the end, it's still kind of based on a source-code-as-strings kind of thinking à la C. The fundamental issue with it is that the place where you put your {{{$expr}}} (inside of a string quote environment) is not a place where an expression AST is able to go. From an AST node type perspective, it doesn't typecheck because expressions are not a subtype of quote environments.
Hope that helps!
(PS: Taking a step back, I think you're doing yourself a disservice by making filter-sub accept a string argument. What will you do with the string inside of this function? Parse it for information? In that case you'd be better off analyzing the AST, not the string.)
(PPS: Moritz++ on #perl6 points out that there's an unrelated syntax error in age < 50 that needs to be addressed. Perl 6 is picky about things being defined before they are used; macros do not change this equation much. Therefore, the Perl 6 parser is going to assume that age is a function you haven't declared yet. Then it's going to consider the < an opening quote character. Eventually it'll be disappointed that there's no >. Again, macros don't rescue you from needing to declare your variables up-front. (Though see #159 for further discussion.))
I am diving into Scala's string interpolation feature and I wonder if it is safe to use. String interpolation allows us to evaluate expressions like:
println(s"Hello World! ${for (i <- 1 to 100) println(s"other values $i")}")
My doubt is whether we should evaluate expressions in interpolated strings. I see a lot of Scala code where other developers use this feature like in the example above, and I don't know whether this is correct and safe.
I would never use it and I would not want my colleagues to use it.
Two things are essentially off:
String interpolation is a nice feature because it makes concatenation of values easy to read... unless you stick an entire Scala program in there :)
Not only is there a complex expression in there, but you're also using a side effect, so in the process of evaluating the interpolation 100 values will be printed. So you get
other values 1
other values 2
...
other values 100
Hello World! ()
Where the () is the return value of the for-comprehension, i.e. Unit.
I would save myself (and my colleagues) tremendous headaches and just do
println(s"Hello World!")
for (i <- 1 to 100) {
  println(s"other values $i")
}
When run on the command line, this
swipl -g "write(42)" -t "halt"
prints 42 to STDOUT as expected.
However, this
swipl -g "X = 42" -t "halt"
does not print anything; it simply returns.
How do I get it to print what it prints in the REPL (that is, X = 42)?
Note: this is in a Windows terminal. Let me know if this actually works in a Linux terminal.
As expected, X = 42 by itself produces no output whatsoever, because (=)/2 is a completely pure predicate that does not yield any side effects by itself. This is the case on Windows, OS X and all other operating systems.
Even if there were a way to obtain and redirect the toplevel output itself, the fact remains that the SWI toplevel is subject to change and you cannot rely on future versions to behave in the same way as it does now. Long term, you will likely be better off to roll your own toplevel and produce exactly the output you want.
It is not so hard to roll your own toplevel. The trick is mainly to use the variable_names/1 option when reading terms, so that you can keep track of the variable names that you want to show in answers. Here is a very simplistic start:
repl :-
    read_line_to_codes(current_input, Codes),
    read_term_from_codes(Codes, Term, [variable_names(NameVars)]),
    call(Term),
    report_bindings(NameVars).
repl :- repl.

report_bindings(NameVars) :-
    phrase(bindings(NameVars), Bs),
    format("~s", [Bs]).

bindings([]) --> [].
bindings([E]) --> name_var(E).
bindings([E1,E2|Rest]) --> name_var(E1), ",\n", bindings([E2|Rest]).

name_var(Name=Var) -->
    format_("~w = ~q", [Name,Var]).

format_(Format, Ls) -->
    call(format_codes(Format, Ls)).

format_codes(Format, Ls, Cs0, Cs) :-
    format(codes(Cs0,Cs), Format, Ls).
Example:
?- repl.
|: X = 4, between(1, 3, Y).
X = 4,
Y = 1
true ;
X = 4,
Y = 2
true ;
X = 4,
Y = 3
true ;
|: X = 7.
X = 7
It is easy to modify this so that it works on terms that are specified as arguments.
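For example, a rough sketch of that modification, assuming the goal arrives as a single command-line argument (how argv is exposed varies between systems and versions, so treat the argument handling here as an assumption):
% Hypothetical sketch: run a goal passed as the first command-line argument,
% reusing report_bindings/1 from above. A full stop is appended so that the
% codes form a complete, terminated term, just like the line read by repl/0.
main :-
    current_prolog_flag(argv, [Arg|_]),
    atom_concat(Arg, ' .', WithStop),
    atom_codes(WithStop, Codes),
    read_term_from_codes(Codes, Term, [variable_names(NameVars)]),
    call(Term),
    report_bindings(NameVars).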
Note that the variable_names/1 option is essential for reading terms in such a way, and thanks to the ISO standardization effort an increasing number of implementations provide it for read_term/2 and related predicates.
This ability to read variable names is a requirement for implementing a portable Prolog toplevel!
The main exercise that I leave for you is to check if the quoting is right in all cases and (if desired) to produce answers in such a way that they can always be pasted back on the terminal. To extend this to residual constraints, use copy_term/3 and call_residue_vars/2 to collect pending constraints that you can append to the bindings.
Having code like this:
var foo: Int = 0
print (foo)
With a breakpoint at the print line... then doing:
(lldb) exp foo = 7
(lldb) p foo
(Int) $R2 = 7
0
print(foo) still prints 0 instead of 7. How is that possible?
More details would help, but in general, I have seen this happen in three cases:
1) When you have a let, the compiler is free to replace the let with its value. This is not your case, but for the sake of argument, given:
let x = 0
print(x)
The compiler is free to rewrite this as
print(0)
leaving the let binding in only for the sake of the debugger. Now, a let is a let, so writing to it is not exactly a well-defined operation. With that said, C compilers seem not to take the same kind of approach with const values, so that failure mode might surprise somebody.
2) If you call a function, the compiler is (essentially) going to emit code to push the arguments on the stack, and then jump to the code for the function. In theory, one would expect the line table in the debug info to guide the debugger to stop before the arguments are pushed. Then,
>> push x
call print
A write to x would actually "mean" something in the context of your print call. But if the line tables actually make us stop at the call, then even if you write to "x", it has already been pushed, and the old value will be used.
3) The compiler might actually be using a copy of "foo" that is in a register, but for the sake of debug info, pointing us to a stack copy of "foo" which is never actually used and is only updated when writes to "foo" happen. If that is the case, LLDB will gladly write to the stack, but that won't affect the program's behavior, as it is not actually reading from the stack.
Fun things to try to figure this out include:
the LLDB disassemble command
(lldb) reg read pc
(lldb) dis -s <value of program counter>
or
(lldb) fr var -L foo
and then read at the memory address (or the register) that LLDB gives out as the location of "foo"
or
in your app, print(foo) twice, and see if the second print shows a different value
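For example, a minimal sketch of that last check (hypothetical top-level code in main.swift):
// Set a breakpoint on the first print, then at the prompt: (lldb) expr foo = 7
// If the write reached the storage the program actually reads from,
// the second print shows 7; otherwise it still shows 0.
var foo: Int = 0
print(foo)
print(foo)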
GCC version 4.6
The Problem: To find a way to feed parameters to the executable, say a.out, from the command line - more specifically, to feed in an array of double-precision numbers.
Attempt: Using the READ(*,*) statement, which has long been part of the standard:
Program test.f -
      PROGRAM MAIN
      REAL(8) :: A,B
      READ(*,*) A,B
      PRINT*, A+B, COMMAND_ARGUMENT_COUNT()
      END PROGRAM MAIN
The execution -
$ gfortran test.f
$ ./a.out 3.D0 1.D0
This did not work. After a bit of soul-searching, I found that
$./a.out
3.d0,1.d0
4.0000000000000000 0
does work, but the second line is an input prompt, and the objective of getting this done in one line is not achieved. Also, COMMAND_ARGUMENT_COUNT() shows that the numbers fed into the input prompt don't really count as command-line arguments, unlike in Perl.
If you want to get the arguments fed to your program on the command line, use the (since Fortran 2003) standard intrinsic subroutine GET_COMMAND_ARGUMENT. Something like this might work
PROGRAM MAIN
  REAL(8) :: A,B
  integer :: num_args, ix
  character(len=12), dimension(:), allocatable :: args

  num_args = command_argument_count()
  allocate(args(num_args))  ! I've omitted checking the return status of the allocation
  do ix = 1, num_args
    call get_command_argument(ix,args(ix))
    ! now parse the argument as you wish
  end do
  PRINT*, A+B, COMMAND_ARGUMENT_COUNT()
END PROGRAM MAIN
Note:
The second argument to the subroutine get_command_argument is a character variable which you'll have to parse to turn into a real (or whatever). Note also that I've allowed only 12 characters in each element of the args array; you may want to fiddle around with that.
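For instance, here is a minimal sketch of that parsing step (a made-up program, not part of the answer above); an internal list-directed read converts each argument string to a number:
program sum_args
  implicit none
  integer :: num_args, ix
  character(len=32), dimension(:), allocatable :: args
  real(8), dimension(:), allocatable :: vals

  num_args = command_argument_count()
  allocate(args(num_args), vals(num_args))
  do ix = 1, num_args
    call get_command_argument(ix, args(ix))
    read(args(ix), *) vals(ix)   ! internal read: parse the string as a real
  end do
  print *, sum(vals)
end program sum_args
Invoked as ./a.out 3.d0 1.d0, this should print the sum, 4.0000000000000000.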
As you've already figured out, read isn't used for reading command-line arguments in Fortran programs.
Since you want to read an array of real numbers, you might also be better off sticking with the approach you've already figured out, that is, reading them from the terminal after the program has started; it's up to you.
The easiest way is to use a library. There is FLAP or f90getopt available. Both are open source and licensed under free licenses.
The latter is written by Mark Gates and me; it is just one module, can be learned in minutes, and contains everything needed to parse GNU- and POSIX-like command-line options. The former is more sophisticated and can be used even in closed-source projects. Check them out.
There are further libraries listed at https://fortranwiki.org/fortran/show/Command-line+arguments
What READ(*,*) does is read from standard input - for example, the characters entered using the keyboard.
As the question shows, COMMAND_ARGUMENT_COUNT() can be used to get the number of command-line arguments.
The accepted answer by High Performance Mark shows how to retrieve the individual command-line arguments separated by blanks as individual character strings using GET_COMMAND_ARGUMENT(). One can also get the whole command line using GET_COMMAND(). One then has to somehow parse that character-based information into the data in your program.
In very simple cases the program just requires, for example, two numbers, so you read one number from argument 1 and another from argument 2. That is simple. Or you can read a triplet of numbers from a single argument if they are comma-separated, like 1,2,3, using a simple read(arg,*) nums(1:3), as in the sketch below.
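A short sketch of that comma-separated case (a hypothetical program name; the single argument is parsed with one internal read):
program triplet
  implicit none
  character(len=64) :: arg
  real :: nums(3)

  call get_command_argument(1, arg)
  read(arg, *) nums(1:3)   ! list-directed read splits on the commas
  print *, nums
end program triplet
Run as ./triplet 1,2,3 it should print 1.00000000 2.00000000 3.00000000.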
For general, complicated command-line parsing one uses libraries such as those mentioned in the answer by Hani. You set them up so that the library knows the expected syntax of the command-line arguments and the data it should fill with the values.
There is a middle ground that is still relatively simple but already allows multiple arguments corresponding to Fortran variables in the program, each of which may or may not be present. In that case one can use a namelist for both the syntax and the parsing.
Here is an example; the main point is the namelist /cmd/ name, point, flag:
implicit none

real :: point(3)
logical :: flag
character(256) :: name
character(1024) :: command_line

call read_command_line
call parse_command_line

print *, point
print *, "'",trim(name),"'"
print *, flag

contains

  subroutine read_command_line
    integer :: exenamelength
    integer :: io, io2

    command_line = ""
    call get_command(command = command_line, status = io)
    if (io==0) then
      call get_command_argument(0, length = exenamelength, status = io2)
      if (io2==0) then
        command_line = "&cmd "//adjustl(trim(command_line(exenamelength+1:)))//" /"
      else
        command_line = "&cmd "//adjustl(trim(command_line))//" /"
      end if
    else
      write(*,*) io, "Error getting command line."
    end if
  end subroutine

  subroutine parse_command_line
    character(256) :: msg
    namelist /cmd/ name, point, flag
    integer :: io

    if (len_trim(command_line)>0) then
      msg = ''
      read(command_line, nml = cmd, iostat = io, iomsg = msg)
      if (io/=0) then
        error stop "Error parsing the command line or cmd.conf " // msg
      end if
    end if
  end subroutine

end
Usage in bash:
> ./command flag=T name=\"data.txt\" point=1.0,2.0,3.0
1.00000000 2.00000000 3.00000000
'data.txt'
T
or
> ./command flag=T name='"data.txt"' point=1.0,2.0,3.0
1.00000000 2.00000000 3.00000000
'data.txt'
T
Escaping the quotes around the string is unfortunately necessary, because bash eats the first set of quotes.