Apart from serious performance problems, Scala is a very powerful language, so I am now using it frequently for scripted tasks inside Bash. Is there a way to just execute a *.scala file the same way I can execute a Python file? As far as I know, Python compiles programs to bytecode before executing them, much like the JVM does, yet there is nothing called pythonc (analogous to scalac or javac) that I need to call to make this happen. Hence I expect Scala to be able to act in a similar manner.
The scala man page provides some examples of how to run Scala code fragments as scripts, for both Windows and non-Windows platforms (the examples below are copied from the man page):
Unix
#!/bin/sh
exec scala "$0" "$#"
!#
Console.println("Hello, world!")
argv.toList foreach Console.println
Windows
::#!
@echo off
call scala %0 %*
goto :eof
::!#
Console.println("Hello, world!")
argv.toList foreach Console.println
To speed up subsequent runs you can cache the compiled fragment with the -savecompiled option:
#!/bin/sh
exec scala -savecompiled "$0" "$@"
!#
Console.println("Hello, world!")
argv.toList foreach Console.println
Update: as of Scala 2.11 (as noted in this similar answer), you can now just do this on Unix:
#!/usr/bin/env scala
println("Hello, world!")
println(args.mkString(" "))
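As an illustration of that shebang style, here is a slightly fuller sketch; the file name, the chmod step, and the argument handling are my own example, not from the man page:
#!/usr/bin/env scala
// sum.scala - a hypothetical example script.
// Make it executable with `chmod +x sum.scala` and run it as `./sum.scala 1 2 3`.

// `args` holds the command-line arguments, as in the example above
val numbers = args.map(_.toInt)
println(s"sum of ${args.mkString(" ")} = ${numbers.sum}")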
I don't use Python, but in Scala, the most scripty thing I can do is this:
thinkpux:~/proj/mini/forum > echo 'println(" 3 + 4 = " + (3 + 4))' | scala
Welcome to Scala version 2.10.2 (Java HotSpot(TM) Server VM, Java 1.7.0_09).
Type in expressions to have them evaluated.
Type :help for more information.
scala> println(" 3 + 4 = " + (3 + 4))
3 + 4 = 7
scala> thinkpux:~/proj/mini/forum >
However, afterwards I don't get any visual feedback in Bash, so I have to call 'clear'.
But there is no problem with writing a script and executing it:
thinkpux:~/proj/mini/forum > echo 'println(" 3 + 4 = " + (3 + 4))' > print7.scala
thinkpux:~/proj/mini/forum > scala print7.scala
3 + 4 = 7
That way there are no issues with the shell.
With an enclosing class, the code is not executed:
thinkpux:~/proj/mini/forum > echo -e 'class Foo {\nprintln(" 3 + 4 = " + (3 + 4))\n}\n' > Foo.scala
thinkpux:~/proj/mini/forum > scala Foo.scala
thinkpux:~/proj/mini/forum > cat Foo.scala
class Foo {
println(" 3 + 4 = " + (3 + 4))
}
But by instantiating the class, you can execute the code in it without using the well-known (I hope) 'main' way:
thinkpux:~/proj/mini/forum > echo -e 'class Foo {\nprintln(" 3 + 4 = " + (3 + 4))\n}\nval foo = new Foo()' > Foo.scala
thinkpux:~/proj/mini/forum > cat Foo.scala
class Foo {
println(" 3 + 4 = " + (3 + 4))
}
val foo = new Foo()
thinkpux:~/proj/mini/forum > scala Foo.scala
3 + 4 = 7
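In the same spirit, you can also define methods on such a class and call them from the top level of the script; a minimal sketch (file and method names are just for illustration, run it with scala Adder.scala):
// Adder.scala - hypothetical script
class Adder {
  def add(a: Int, b: Int): Int = a + b
}

val adder = new Adder()
println(" 3 + 4 = " + adder.add(3, 4))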
Related
In Python, we break a long line of code into multiple lines with a backslash, like this:
a = 5
b = 11
print(str(a) + " plus " + \
str(b) + " is " + \
str(a + b))
# prints "5 plus 11 is 16"
How do we do that in NetLogo?
NetLogo doesn't care about line breaks, except for comments (the comment marker ; only extends to the end of the line). All of these are the same:
to testme
; single line
let a 25 print a
; command per line
let b 20
print b
; unreadable
let
c
15
print
c
end
I am executing an external command from Scala using the ! method, but I am not able to capture the exit code. Below is the REPL output.
scala> import scala.sys.process._
import scala.sys.process._
scala> "ls -lrt CLSTM111.30.SUB#1.D160927.T030108.d.CLSLM001.cls_catg_lkup_EXT.stdout " .!
-rw-r--r-- 1 clsdusr clsdevl 38 Sep 27 03:01 CLSTM111.30.SUB#1.D160927.T030108.d.CLSLM001.cls_catg_lkup_EXT.stdout
res11: Int = 0
scala> println(exitCode)
<console>:35: error: not found: value exitCode
println(exitCode)
^
The exit code is the return value of !. You can do
val exitCode = "ls -lrt CLSTM111.30.SUB#1.D160927.T030108.d.CLSLM001.cls_catg_lkup_EXT.stdout ".!
println(exitCode)
There is no exitCode defined in scala.sys.process; the exit code is simply the return value of !. In your REPL output it is res11. The documentation for ProcessBuilder.! says:
Starts the process represented by this builder, blocks until it exits, and returns the exit code. Standard output and error are sent to the console.
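If you also want to process the command's output programmatically while still getting the exit code, one option (sketched here with a placeholder file name) is to run the command with a ProcessLogger:
import scala.sys.process._

// Collect stdout and stderr while the command runs, then read its exit code.
val out = new StringBuilder
val err = new StringBuilder
val logger = ProcessLogger(line => out.append(line).append('\n'),
                           line => err.append(line).append('\n'))

val exitCode = Seq("ls", "-lrt", "some_file.stdout").!(logger)  // placeholder file name

println(s"exit code: $exitCode")
println(s"stdout:\n$out")
if (exitCode != 0) println(s"stderr:\n$err")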
I wanted a convenience function to concatenate jQuery parent > child selector strings. I can't get the following to work in CoffeeScript 1.10.0 (also tested in 1.7.1). What am I doing wrong?
pcsel = (parent_sel, child_sels...) ->
  ### Utility for forming parent > child selector string ###
  childchain = [" > " + child for child in child_sels]
  parent_sel + childchain.join('')
console.log pcsel("foo", "bar") # OK. yields "foo > bar"
console.log pcsel("foo", "bar", "glop") # BAD. yields "foo > bar, > glop"
# Sanity check
console.log "foo" + [" > bat", " > glop"].join('') # OK. yields "foo > bar > glop"
Thanks!
(I've also posted this as an issue in the CS repository)
A loop comprehension:
expr for e in array
evaluates to an array. That means that this:
[ expr for e in array ]
is actually a single element array whose first (and only) element is the array from the loop. More explicitly:
i for i in [1,2,3]
is [1,2,3] but this:
[ i for i in [1,2,3] ]
is [[1,2,3]].
Your problem is that childchain in pcsel ends up with an extra level of nesting and the stringification from the join call adds unexpected commas.
The solution is to fix pcsel:
childchain = (" > " + child for child in child_sels)
# -----------^-------------------------------------^
You need the parentheses (not brackets) to get around precedence issues; parentheses (()) and brackets ([]) serve entirely different functions so you need to use the right ones.
From what I can tell, the behavior you're seeing is what's to be expected. Here's how your code behaves if you replace the splat with an explicit array:
coffee> ["> " + ['bar']] # => ['> bar']
coffee> ["> " + ['bar', 'baz']] # =>['> bar,baz']
You'll also see the same behavior in node:
> [">" + ['bar']] // => ['>bar']
> ["> " + ['bar', 'baz']] // => ['> bar,baz']
You could achieve what you're after using multiple calls to .join, or by doing something like this:
pcsel = (parent_sel, child_sels...) ->
  child_sels.reduce (memo, sel) ->
    memo + " > #{sel}"
  , parent_sel
console.log pcsel("foo", "bar") # => foo > bar
console.log pcsel("foo", "bar", "glop") # => foo > bar > glop
console.log pcsel("foo", "bar", "glop", "baz") # => foo > bar > glop > baz
I'm trying to use Scala's Process in order to concatenate two files and send the result to a new file.
The code works fine, but when I remove the permissions on the folder, it seems to get stuck.
Here is the code:
val copyCommand = Seq("bash", "-c", "cat \"" + headerPath + "\" \"" + FilePath + "\"")
Process(copyCommand).#>>(new File(FileWithHeader)).!
Maybe something like this can help (without invoking bash)?
import sys.process._
(Seq("cat", "file-1.txt", "file-2.txt") #>> new java.io.File("files-1n2.txt")).!
I performed the concatenation in the same command, without creating the new file from Scala, and it works fine:
val copyCommand = Seq("bash", "-c", "cat \"" + headerPath + "\" \"" + FilePath + "\">FileWithHeader")
Process(copyCommand).!
The problem is fairly simple. I am trying to write a rule that, given the name of the required file, will be able to tailor its dependencies.
Let's say I have two programs, calc_foo and calc_bar, and they generate a file whose output depends on the parameter. My targets have names of the form 'target_*_*'; for example, 'target_foo_1' would be generated by running './calc_foo 1'.
The question is, how to write a makefile that would generate outputs of the two programs for a range of parameters?
If there are just a few programs, you can have a rule for each one:
target_foo_%:
    ./calc_foo $*
If you want to run a program with a list of parameters:
foo_parameter_list = 1 2 green Thursday 23 bismuth
foo_targets = $(addprefix target_foo_,$(foo_parameter_list))
all: $(foo_targets)
If you want a different set of parameters for each program, but with some in common, you can separate the common ones:
common_parameter_list = 1 2 green Thursday
foo_parameter_list = $(common_parameter_list) 23 bismuth
bar_parameter_list = $(common_parameter_list) 46 111
If it turns out you have more programs than you thought, but still want to use this method, you just want to automate it:
# add programs here
PROGRAMS = foo bar baz
# You still have to tailor the parameter lists by hand
foo_parameter_list = 1 2 green Thursday 23 bismuth
# everything from here on can be left alone
define PROGRAM_template
$(1)_targets = $(addprefix target_$(1)_,$($(1)_parameter_list))
target_$(1)_%:
    ./calc_$(1) $$*
all: $$($(1)_targets)
endef
$(foreach prog,$(PROGRAMS),$(eval $(call PROGRAM_template,$(prog))))
This seems to do more or less what you are requesting - assuming you are using GNU Make.
Makefile
BAR_out = target_bar_
BAR_list = 1 2 3 4 5 6 7 8 9
BAR_targets = $(addprefix ${BAR_out},${BAR_list})
FOO_out = target_foo_
FOO_list = 11 12 13 14 15 16 17 18 19
FOO_targets = $(addprefix ${FOO_out},${FOO_list})
all: ${BAR_targets} ${FOO_targets}
${BAR_targets}:
    calc_bar $(subst ${BAR_out},,$@)
${FOO_targets}:
    calc_foo $(subst ${FOO_out},,$@)
It can probably be cleaned up, but I tested it with these commands (calc_bar and calc_foo being simple shell scripts):
calc_bar
echo "BAR $#" | tee target_bar_$#
sleep $#
calc_foo
echo "FOO $#" | tee target_foo_$#
sleep $#
Clearly, if you want a different list, you can specify that on the command line:
make -j4 FOO_list="1 2 3 4 5 6 33"