I am trying to call a function in Javascript from Java/Nashorn (in Scala, but that's not material to the question).
// JS
var foo = function(calculator){ // calculator is a Scala object
return this.num * calculator.calcMult();
}
The context on the Scala side is like this:
case class Thing(
num: Int,
stuff: String
)
case class Worker() { // Scala object to bind to calculator
def calcMult() = { 3 } // presumably some complex computation here
}
I start by getting foo into the JS environment:
jsengine.eval("""var foo = function(calculator){return this.num * calculator.calcMult();}""")
To use this I need two things to be available: 1) the 'this' context to be populated with my Thing object, and 2) the ability to pass a Java/Scala object to my JS function (to call calcMult later). (If needed I can easily JSON-serialize Thing.)
How can I do both and successfully call foo() from Scala?
This may not be the only or cleanest solution, but it does work.
It turns out JavaScript has the ability to bind a given 'this' context to a function, which creates a "bound function" with your 'this' visible inside it. Then you use invokeFunction() on the bound function as you normally would.
val inv = javascript.asInstanceOf[Invocable]
val myThis: String = // JSON serialized Map of stuff you want accessible in the js function
val bindFn = "bind_" + fnName
javascript.eval(s"$bindFn = $fnName.bind($myThis)")
inv.invokeFunction(bindFn, args: _*)
If you passed myThis into the binding to include {"x":"foo"} then when invoked, any access within your function to this.x will resolve to "foo" as you'd expect.
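For completeness, here is a rough end-to-end sketch of the same idea (my own code, assuming a JDK that still ships Nashorn; the Worker type and the hand-rolled JSON stand in for the question's Thing and calculator):

import javax.script.{Invocable, ScriptEngineManager}

case class Worker() {
  def calcMult(): Int = 3 // stand-in for some complex computation
}

object NashornBindSketch extends App {
  val engine = new ScriptEngineManager().getEngineByName("nashorn")
  val inv    = engine.asInstanceOf[Invocable]

  // the JS function expecting a bound `this` and a calculator argument
  engine.eval("""var foo = function(calculator){ return this.num * calculator.calcMult(); }""")

  // JSON-serialized 'this' context, standing in for Thing(num = 7, stuff = "abc")
  val myThis = """{"num": 7, "stuff": "abc"}"""

  // create the bound variant of foo whose `this` is the object literal above
  engine.eval(s"var bind_foo = foo.bind($myThis)")

  // pass the Scala object straight through; Nashorn exposes its public methods to JS
  val result = inv.invokeFunction("bind_foo", Worker())
  println(result) // 7 * 3 = 21
}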
I read the Spark programming guide's section about passing functions and wonder what happens when the function references an outer method parameter or local variable.
For example, I have this object
object Main {
  def main(args: Array[String]): Unit = {
    val ds: Dataset[String] = ???
    ds.map(_ + args(0))
  }
}
Does Spark have to serialize Main? What if args is a local variable inside main?
No, in both these cases Spark won't serialize the Main object. Method arguments and local variables (which are pretty much the same thing from the semantics perspective) do not "belong" to the enclosing object or class, they are associated with a particular method invocation and can therefore be captured by the closure directly.
As a general rule, if you need to have a reference to some object in order to access some value, then this reference will be captured and, therefore, serialized:
class Application(n: Int) {
  val x = "internal state " + n

  def doSomething(ds: Dataset[String], param: String): Unit = {
    ds.map(_ + x + param)
  }
}
Note that here, in order to access x, which is an instance member, the enclosing instance necessarily has to be available, because x depends on the actual parameters that the instance was constructed with. Another way to see it is to remember that when you use x in the example above, it is actually a shortcut for this.x:
ds.map(_ + this.x + param)
Compared to this, the param value has no such dependency - it is passed as is to the method, and there is no need to access any other enclosing object to use it. Therefore, param will be captured and serialized directly.
That's why there is the common advice to copy instance members into local variables in order not to capture the entire object: once the value is in a local variable, accessing it no longer requires the enclosing instance:
val localX = this.x
ds.map(_ + localX + param)
Of course, if you have references to the enclosing instance inside the object that you want to capture, like here:
class Inner(app: Application)

class Application {
  val x = new Inner(this)

  def doSomething(ds: Dataset[String]): Unit = {
    val localX = x
    ds.map(_ + localX.toString)
  }
}
then storing it in a local variable would not help, because Spark would still need to serialize the app field of the Inner instance, which points back to the Application instance. That's why you have to be careful if you have complex object graphs that you use in closures which are going to be sent to executors.
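One way out (a sketch of my own, not from the original answer) is to extract only the data you actually need into a local value before building the closure, so that neither Inner nor Application ends up in the serialized graph:

import org.apache.spark.sql.Dataset

class Inner(app: Application)

class Application {
  val x = new Inner(this)

  def doSomething(ds: Dataset[String]): Unit = {
    // compute the needed String eagerly, on the driver, outside the closure
    val rendered: String = x.toString
    // the closure now captures only a String (assumes an implicit Encoder[String]
    // in scope, e.g. via spark.implicits._, as in the examples above)
    ds.map(_ + rendered)
  }
}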
I'm learning Scala and I don't really understand the following example :
object Test extends App {
  def method1 = println("method 1")

  val x = {
    def method2 = "method 2" // method inside a block
    "this is " + method2
  }

  method1 // statement inside an object
  println(x) // same
}
I mean, it feels inconsistent to me because here I see two different concepts:
Objects/Classes/Traits, which contain members.
Blocks, which contain statements, the last statement being the value of the block.
But here we have a method that is part of a block, and statements that are part of an object. So, does it mean that blocks are objects too? And how are the statements that are part of an object handled? Are they members too?
Thanks.
Does it mean that blocks are objects too?
No, blocks are not objects. Blocks are used for scoping the binding of variables. Scala lets you define not only expressions inside blocks but methods as well. If we take your example and compile it, we can see what the compiler does:
object Test extends Object {
  def method1(): Unit = scala.Predef.println("method 1");
  private[this] val x: String = _;
  <stable> <accessor> def x(): String = Test.this.x;
  final <static> private[this] def method2$1(): String = "method 2";
  def <init>(): tests.Test.type = {
    Test.super.<init>();
    Test.this.x = {
      "this is ".+(Test.this.method2$1())
    };
    Test.this.method1();
    scala.Predef.println(Test.this.x());
    ()
  }
}
What the compiler did is extract method2 into a synthetic method named method2$1 and mark it private[this], which scopes it to the current instance of the type.
And how are the statements that are part of an object handled? Are they members too?
The compiler took method1 and println and called them inside the constructor, which runs when the type is initialized. So you can see that val x and the rest of the method calls are invoked at construction time.
method2 is actually not a method. It is a local function. Scala allows you to create named functions inside local scopes for organizing your code into functions without polluting the namespace.
It is most often used to define local tail-recursive helper functions. Often, when making a function tail-recursive, you need to add an additional parameter to carry the "state" on the call stack, but this additional parameter is a private internal implementation detail and shouldn't be exposed to clients. In languages without local functions, you would make this a private helper alongside the primary method, but then it would still be within the namespace of the class and callable by all other methods of the class, when it is really only useful for that particular method. So, in Scala, you can instead define it locally inside the method:
// non tail-recursive (explicit result type required because length is recursive)
def length[A](ls: List[A]): Int = ls match {
  case Nil => 0
  case x :: xs => length(xs) + 1
}
//transformation to tail-recursive, Java-style:
def length[A](ls: List[A]) = lengthRec(ls, 0)

private def lengthRec[A](ls: List[A], len: Int): Int = ls match {
  case Nil => len
  case x :: xs => lengthRec(xs, len + 1)
}
//tail-recursive, Scala-style:
def length[A](ls: List[A]) = {
  // note: lengthRec is nested, so it doesn't pollute the enclosing namespace,
  // and it could access `ls` (or anything else in scope) directly if it needed to
  def lengthRec(rest: List[A], len: Int): Int = rest match {
    case Nil => len
    case x :: xs => lengthRec(xs, len + 1)
  }
  lengthRec(ls, 0)
}
Now you might say, well, I see the value in defining local functions inside methods, but what's the value in being able to define local functions in blocks? Scala tries to be as simple as possible and have as few corner cases as possible. If you can define local functions inside methods, and local functions inside local functions … then why not simplify that rule and just say that local functions behave just like local fields: you can simply define them in any block scope. Then you don't need different scope rules for local fields and local functions, and you have simplified the language.
The other thing you mentioned, being able to execute code in the body of a template, that's actually the primary constructor (so to speak … it's technically more like an initializer). Remember: the primary constructor's signature is defined with parentheses after the class name … but where would you put the code for the constructor then? Well, you put it in the body of the class!
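A small sketch of that last point (my own example, not from the answer): any statement written directly in a class body runs as part of the primary constructor.

class Greeter(name: String) {                    // primary constructor parameters live here
  println(s"constructing a Greeter for $name")   // constructor code: runs on every `new Greeter(...)`
  val greeting = s"Hello, $name"                 // field initialization is constructor code too
  def greet(): Unit = println(greeting)          // methods are only defined here, not executed
}

new Greeter("world") // prints: constructing a Greeter for world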
I'd like to have some basic knowledge of how deeply my function call is nested. Consider the following:
scala> def decorate(f: => Unit) : Unit = { println("I am decorated") ; f }
decorate: (f: => Unit)Unit
scala> decorate { println("foo") }
I am decorated
foo
scala> decorate { decorate { println("foo") } }
I am decorated
I am decorated
foo
For the last call, I'd like to be able to get the following:
I am decorated 2x
I am decorated 1x
foo
The idea is that the decorate function knows how deeply it's nested. Ideas?
Update: As Nikita had thought, my example doesn't represent what I'm really after. The goal is not to produce the strings so much as to be able to pass some state through a series of calls to the same nested function. I think Régis Jean-Gilles is pointing me in the right direction.
You can use the dynamic scope pattern. More prosaically, this means using a thread-local variable (Scala's DynamicVariable exists for just that) to store the current nesting level. See my answer to this other question for a practical example of this pattern: How to define a function that takes a function literal (with an implicit parameter) as an argument?
This is suitable only if you want to know the nesting level for a very specific method though. If you want a generic mechanism that works for any method then this won't work (as you'd need a distinct variable for each method). In this case the only alternative I can think of is to inspect the stack, but not only is it not very reliable, it is also extremely slow.
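For what it's worth, here is a rough sketch (my own, not from the original answer) of the stack-inspection idea; it is fragile, since synthetic frames, inlining and lambda lifting can all skew the count, and walking the stack is slow:

// count how many frames on the current thread's stack belong to a given method name
def nestingDepth(methodName: String): Int =
  Thread.currentThread().getStackTrace.count(_.getMethodName == methodName)

def decorate(f: => Unit): Unit = {
  println("I am decorated " + nestingDepth("decorate") + "x")
  f
}

decorate { decorate { println("foo") } }
// I am decorated 1x   (outer call: only one `decorate` frame on the stack so far)
// I am decorated 2x
// foo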
UPDATE: actually, there is a way to apply the dynamic scope pattern in a generic way (for any possible method). The important part is to be able to implicitly get a unique id for each method. From there, it is just a matter of using this id as a key to associate a DynamicVariable to the method:
import scala.util.DynamicVariable

object FunctionNestingHelper {
  private type FunctionId = Class[_]

  private def getFunctionId(f: Function1[_, _]): FunctionId = {
    f.getClass // That's it! Beware, implementation dependent.
  }

  private val currentNestings = new DynamicVariable(Map.empty[FunctionId, Int])

  def withFunctionNesting[T](body: Int => T): T = {
    val id = getFunctionId(body)
    val oldNestings = currentNestings.value
    val oldNesting = oldNestings.getOrElse(id, 0)
    val newNesting = oldNesting + 1
    currentNestings.withValue(oldNestings + (id -> newNesting)) {
      body(newNesting)
    }
  }
}
Usage:
import FunctionNestingHelper._
def decorate(f: => Unit) = withFunctionNesting { nesting: Int =>
  println("I am decorated " + nesting + "x"); f
}
To get a unique id for the method, I actually get an id for the closure passed to withFunctionNesting (which you must call in the method where you need to retrieve the current nesting). And that's where I err on the implementation-dependent side: the id is just the class of the function instance. This does work as expected as of now (because every unary function literal is implemented as exactly one class implementing Function1, so the class acts as a unique id), but the reality is that it might well break (although unlikely) in a future version of Scala. So use it at your own risk.
Finally, I suggest that you first evaluate seriously if Nikita Volkov's suggestion of going more functional would not be a better solution overall.
You could return a number from the function and count how many levels deep you are on the way back up the stack. But there is no easy way to count on the way down, which is what your example output requires.
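A minimal sketch of that return-value approach (my own code): each call adds one to whatever depth the wrapped block reports, so the counting happens on the way back up:

// f returns the nesting depth of whatever it wrapped; decorate adds one level
def decorate(f: => Int): Int = {
  val depth = f + 1
  println("I am decorated, depth " + depth)
  depth
}

decorate { decorate { println("foo"); 0 } }
// foo
// I am decorated, depth 1
// I am decorated, depth 2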
Since your question is tagged with "functional programming", the following are functional solutions. Sure, the program logic changes completely, but then your example code was imperative.
The basic principle of functional programming is that there is no state. What you're used to having as shared state in imperative programming, with all the headache involved (multithreading issues, etc.), is achieved in functional programming by passing immutable data as arguments.
So, assuming the "state" data you wanted to pass was the current cycle number, here's how you'd implement a function using recursion:
def decorated(a: String, cycle: Int): String =
  if (cycle <= 0) a
  else "I am decorated " + cycle + "x\n" + decorated(a, cycle - 1)

println(decorated("foo", 3))
Alternatively you could make your worker function non-recursive and "fold" it:
def decorated(a: String, times: Int) =
  "I am decorated " + times + "x\n" + a

println((1 to 3).foldLeft("foo")(decorated))
Both versions above produce the following output:
I am decorated 3x
I am decorated 2x
I am decorated 1x
foo
I never understood it from the contrived unmarshalling and verbing-nouns examples (an AddTwo class has an apply that adds two!).
I understand that it's syntactic sugar, so (I deduced from context) it must have been designed to make some code more intuitive.
What meaning does a class with an apply function have? What is it used for, and for what purposes does it make code better (unmarshalling, verbing nouns, etc.)?
How does it help when used in a companion object?
Mathematicians have their own little funny ways, so instead of saying "then we call function f passing it x as a parameter" as we programmers would say, they talk about "applying function f to its argument x".
In mathematics and computer science, Apply is a function that applies
functions to arguments.
Wikipedia
apply serves the purpose of closing the gap between Object-Oriented and Functional paradigms in Scala. Every function in Scala can be represented as an object. Every function also has an OO type: for instance, a function that takes an Int parameter and returns an Int will have OO type of Function1[Int,Int].
// define a function in scala
(x:Int) => x + 1
// assign an object representing the function to a variable
val f = (x:Int) => x + 1
Since everything is an object in Scala, f can now be treated as a reference to a Function1[Int,Int] object. For example, we can call the toString method inherited from Any, which would have been impossible for a pure function, because functions don't have methods:
f.toString
Or we could define another Function1[Int,Int] object by calling the compose method on f, chaining two different functions together:
val f2 = f.compose((x:Int) => x - 1)
Now if we want to actually execute the function, or as mathematicians say, "apply a function to its arguments", we would call the apply method on the Function1[Int,Int] object:
f2.apply(2)
Writing f.apply(args) every time you want to execute a function represented as an object is the object-oriented way, but it would add a lot of clutter to the code without adding much information, and it would be nice to be able to use the more standard notation f(args). That's where the Scala compiler steps in: whenever we have a reference f to a function object and write f(args) to apply arguments to the represented function, the compiler silently expands f(args) to the method call f.apply(args).
Every function in Scala can be treated as an object and it works the other way too - every object can be treated as a function, provided it has the apply method. Such objects can be used in the function notation:
// we will be able to use this object as a function, as well as an object
object Foo {
  var y = 5
  def apply(x: Int) = x + y
}

Foo(1) // using Foo object in function notation
There are many use cases where we would want to treat an object as a function. The most common scenario is the factory pattern. Instead of adding clutter to the code with a factory method, we can apply the object to a set of arguments to create a new instance of an associated class:
List(1,2,3) // same as List.apply(1,2,3) but less clutter, functional notation
// the way the factory method invocation would have looked
// in other languages with OO notation - needless clutter
List.instanceOf(1,2,3)
So the apply method is just a handy way of closing the gap between functions and objects in Scala.
It comes from the idea that you often want to apply something to an object. The clearest example is factories: when you have a factory, you want to apply parameters to it to create an object.
The Scala designers thought that, as this occurs in many situations, it would be nice to have a shortcut to call apply. Thus when you give parameters directly to an object, it's desugared as if you passed these parameters to the apply function of that object:
class MyAdder(x: Int) {
  def apply(y: Int) = x + y
}

val adder = new MyAdder(2)
val result = adder(4) // equivalent to adder.apply(4)
It's often used in a companion object, to provide a nice factory method for a class or a trait. Here is an example:
trait A {
  val x: Int
  def myComplexStrategy: Int
}

object A {
  def apply(x: Int): A = new MyA(x)

  private class MyA(val x: Int) extends A {
    val myComplexStrategy = 42
  }
}
From the Scala standard library, you might look at how scala.collection.Seq is implemented: Seq is a trait, so new Seq(1, 2) won't compile, but thanks to the companion object and apply, you can call Seq(1, 2) and the implementation is chosen by the companion object.
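For instance (a tiny illustration of that last point, mine, not from the answer):

val s = Seq(1, 2) // sugar for Seq.apply(1, 2); the companion object picks the concrete class
println(s)        // List(1, 2)
// new Seq(1, 2)  // does not compile: Seq is a trait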
Here is a small example for those who want a quick look:
object ApplyExample01 extends App {

  class Greeter1(var message: String) {
    println("A greeter-1 is being instantiated with message " + message)
  }

  class Greeter2 {
    def apply(message: String) = {
      println("A greeter-2 is being instantiated with message " + message)
    }
  }

  val g1: Greeter1 = new Greeter1("hello")
  val g2: Greeter2 = new Greeter2()

  g2("world")
}
output
A greeter-1 is being instantiated with message hello
A greeter-2 is being instantiated with message world
TL;DR for people coming from C++:
It's just an overloaded () (parentheses) operator.
So this in Scala:
class X {
  def apply(param1: Int, param2: Int, param3: Int): Int = {
    // Do something
  }
}
is the same as this in C++:
class X {
  int operator()(int param1, int param2, int param3) {
    // do something
  }
};
1 - Treat functions as objects.
2 - The apply method is similar to __call__ in Python, which allows you to use an instance of a given class as a function.
The apply method is what turns an object into a function. The desire is to be able to use function syntax, such as:
f(args)
But Scala has both functional and object oriented syntax. One or the other needs to be the base of the language. Scala (for a variety of reasons) chooses object oriented as the base form of the language. That means that any function syntax has to be translated into object oriented syntax.
That is where apply comes in. Any object that has the apply method can be used with the syntax:
f(args)
The Scala infrastructure then translates that into
f.apply(args)
f.apply(args) has correct object oriented syntax. Doing this translation would not be possible if the object had no apply method!
In short, having the apply method in an object is what allows Scala to turn the syntax object(args) into the syntax object.apply(args), and object.apply(args) is in a form that can then be executed.
FYI, this implies that all functions in Scala are objects. And it also implies that having the apply method is what makes an object a function!
See the accepted answer for more insight into just how a function is an object, and the tricks that can be played as a result.
To put it crudely, you can just see it as a custom () operator. If a class X has an apply() method, whenever you write parentheses after an instance of X (or after a companion object named X), you are calling that apply() method.
Here is a quote from Programming Scala, chapter 1:
Closures are such a powerful abstraction that object systems and fundamental control structures are often implemented using them
Apparently the statement is not specifically about Scala but about closures in general, but I cannot make much sense of it. Perhaps it is some pearl of wisdom only meant for those mighty compiler writers!
So who uses closures to implement fundamental control structures, and why?
Edit: I remember reading something about custom control structures in Groovy using the "closure as the last parameter of a method call" syntax, and making the structure available to your code using meta-classes or the use keyword with Categories. Could it be something related?
Edit: I found the following reference for the Groovy custom control structures syntax here (slide 38):
Custom control structures
Thanks to closures
When closures are last, they can be put “out” of the parentheses
surrounding parameters
unless(account.balance > 100.euros, { account.debit 100.euros })
unless(account.balance > 100.euros) { account.debit 100.euros }
Signature def unless(boolean b, Closure c)
Apparently what Groovy is offering is syntactic sugar for making closure-based custom control structures look like first-class control structures offered by the language itself.
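For comparison, Scala can get the same effect with by-name parameters and a curried closure argument; a quick sketch (my own names, not from the slides):

// `condition` and `body` are by-name parameters, so neither is evaluated
// until `unless` decides to
def unless(condition: => Boolean)(body: => Unit): Unit =
  if (!condition) body

val balance = 50
unless(balance > 100) {
  println("debit 100") // runs only because the condition is false
}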
I commented on the case of control structures. Let me comment on closures as objects. Consider what happens when you call a method on an object; it has access not only to the argument list, but also to the fields of the object. That is, the method/function closes over the fields. This isn't that different from a "bare" function (i.e., not an object method) that closes over variables in scope. However, the object syntax provides a nice abstraction and modularity mechanism.
For example, I could write
case class Welcome(message: String) {
  def greet(name: String) = println(message + ", " + name)
}
val w = Welcome("Hello")
w.greet("Dean")
vs.
val message = "Hello"
val greet = (name: String) => println(message + ", " + name)
greet("Dean")
Actually, in this example, I could remove the "case" keyword from Welcome, so that message doesn't become a field, but the value is still in scope:
class Welcome2(message: String) { // removed "case"
  def greet(name: String) = println(message + ", " + name)
}
val w = new Welcome2("Hello") // added "new"
w.greet("Dean")
It still works! Now greet closes over the value of the input parameter, not a field.
var welcome = "Hello"
val w2 = new Welcome2(welcome)
w2.greet("Dean") // => "Hello, Dean"
welcome = "Guten tag"
w2.greet("Dean") // => "Hello, Dean" (even though "welcome" changed)
But if the class refers to a variable in the outer scope directly,
class Welcome3 { // removed "message"
  def greet(name: String) = println(welcome + ", " + name) // references "welcome"
}
val w3 = new Welcome3
w3.greet("Dean") // => "Guten tag, Dean"
welcome = "Buon giorno"
w3.greet("Dean") // => "Buon giorno, Dean"
Make sense?
There are three fundamental control structures:
Sequence
a = 1
b = 2
c = a + b
Conditions
if (a != b) {
  c = a + b
} else {
  c = a - b
}
Iterations/loops
for (a <- array) {
  println(a)
}
So, I guess they mean that internally many languages use closures for control structures (look at the last two structures).
As an example:
if (a < b) {
  for (i = a; i < b; i++) {
    println(i)
    c = i * i
  }
} else {
  c = a - b
}
So the for body is a closure inside the if closure, and the else branch is a closure too. That's how I understand it: for the if, the language creates a closure from the code inside the braces and calls it if the condition is true. Then it creates a closure for the for-loop body and calls it repeatedly while the loop condition holds.
And I guess there is no list of languages which use closures internally.
Update:
Just as an example, this is how you can implement your own for loop in Scala (the о in fоr is Cyrillic, so it will compile without clashing with the built-in keyword):
def fоr(start: Unit, condition: => Boolean, increment: => Unit)(body: => Unit): Unit = {
  if (condition) {
    body
    increment
    fоr((), condition, increment)(body)
  }
}

var i = 0
fоr (i = 0, i < 1000, i += 1) {
  print(i + " ")
}
So this is, in essence, how it can be implemented internally in other languages.
I would say that "closures are such a powerful abstraction..." because, unlike standard methods, you have a reference to the calling object (the delegate, in Groovy), regardless of the scope in which the closure is called.
In Groovy, for example, you can add a new method, "capitalize" to String type:
String.metaClass.capitalize = {
  delegate[0].toUpperCase() + delegate[1..-1].toLowerCase()
}
"hello".capitalize() // "Hello"
Or, you can do something more complex, like create a domain specific language (DSL) using closures.
class ClosureProps {
  Map props = [:]

  ClosureProps(Closure c) {
    c.delegate = this // unknown methods/properties in the closure resolve against this object
    c()               // execute the closure, triggering methodMissing()/propertyMissing()
  }

  def methodMissing(String name, args) {
    props[name] = args.collect{ it } // collect the arguments given in the closure
  }

  def propertyMissing(String name) {
    name
  }
}
Example
class Team {
  // the closure
  static schema = {
    table team
    id teamID
    roster column:playerID, cascade:[update,delete]
  }
}
def c = new ClosureProps(Team.schema)
println c.props.id // prints "teamID"
a) Please try at least googling topics before asking questions.
b) Once you have done that, please ask specific questions.
c) Lexical closures are functions that have access to a lexical environment not available where they are invoked. As such, their parameters can be used to select messages, and pass parameters with those messages. For general control structures, they are not sufficient, unless they can affect the call stack, in the manner of continuations.