Io language: what is the difference between a message send, do and doMessage?

Although there is documentation available, I got more confused rather than enlightened. Let's consider an example:
I have a myObject instance, which has a myMethod method, and I call it from the lobby:
myObject myMethod
In this method's body the following is done:
myObject1 anotherMethod //1
msg := message(anotherMethod)
myObject2 do(msg) //2
myObject3 doMessage(msg) //3
So, could anyone explain to me the differences between 1, 2 and 3?
Who is the actual caller in these cases? The locals object of the method, the method object, or myObject? Is there a difference between sender and caller? (I suppose there is one in the case of doMessage, where the sender is the locals object of myMethod, but the "caller" is myObject3.)

Alright, so in order:
The message anotherMethod is received by the instance named myObject1. This is done in the calling context (probably the Lobby, unless it is wrapped inside another do()).
do() introduces a new scope and does nothing with the calling scope. That is to say, you can't reference anything from the calling scope inside a do() unless it happens to be in the Protos hierarchy or is introduced inside the do(). do() also takes a message tree, so what you're doing is effectively sending message(msg) to be evaluated inside the context of myObject2, which doesn't make much sense: msg can't be found, since the calling scope isn't available, and even if it were, it wouldn't make a lot of sense. Generally, you want to write something like msg doInContext(myObject) if you find yourself desiring to write myObject do(msg).
doMessage(msg) is functionally equivalent to #1 above. In fact, if you were to write a compiler for Io code down to messages, this is more or less what you'd get out of that compilation step for #1. They're equivalent in this short example.

Syntax of call to super class constructor

Within a subclass constructor, what is the difference between calling obj@SuperClass(a,b); and obj = obj@SuperClass(a,b);?
Both forms are found in the documentation, for example here.
The canonical syntax for calling a superclass constructor is
obj = obj@SuperClass(args)
(see the documentation).
This is comparable to calling any constructor, where the output of the function call is assigned to a variable:
obj = SuperClass(args)
However, because the superclass constructor must initialize fields inside a (larger) derived object, the obj to be modified must be passed in some way to the function call, hence the (awkward) obj@ syntax.
But because we pass the object to be initialized, which is modified, we don’t really need to capture that output any more. Hence the other form,
obj@SuperClass(args)
does exactly the same thing in every situation I have encountered.
There is no difference that I can see. I would be surprised if the first syntax did any data copying whatsoever; that has most certainly been optimized out, just like obj = obj.method(args) doesn't copy the object.

How do the Scala App trait and main method work internally?

Hi, I'm a newbie in Scala.
As far as I know there are two ways to define an entry point in Scala: one is to define a main method in an object, and the other is to extend the App trait.
I wondered how the App trait works, so I checked its source, but it is full of confusing code...
The code shows that App has an initCodes buffer defined in the App trait, to which code is added in the delayedInit method inherited from DelayedInit. The App trait also has a main method, which will be the entry point.
But what confuses me is:
Who calls delayedInit? Is it called before the main method is called? (I guess yes.)
Why is initCodes a ListBuffer and not a single element? I think there is only one entry point in an application, so I don't see why it should be plural.
Where can I check this? I tried to search the documentation but I couldn't find it.
Who calls delayedInit? Is it called before the main method is called? (I guess yes.)
delayedInit is called automatically, by compiler-generated code, as the initialisation code of the object/class that extends the DelayedInit trait. I expand more on this below.
Why is initCodes a ListBuffer and not a single element? I think there is only one entry point in an application, so I don't see why it should be plural.
Because it is possible to have a hierarchy of classes, where the initialisation code of each class in the hierarchy gets executed as part of running the program. An example is provided below.
Where can I check this? I tried to search the documentation but I couldn't find it.
I got to learn about these dynamics by reading the Scala docs and the links they point to, for example https://github.com/scala/scala/releases/tag/v2.11.0 and https://issues.scala-lang.org/browse/SI-4330?jql=labels%20%3D%20delayedinit%20AND%20resolution%20%3D%20unresolved
I will now try to expand on the answers above by going into more detail on the workings of DelayedInit and on how the JVM specifies entry points to programs.
First of all, we have to understand that when Scala runs on the JVM, it still has to adhere to the JVM's requirement for defining the entry point to a program, which is to provide the JVM with a class that has a main method with the signature public static void main(String[]). Even though when we use the App trait it might appear as if we are getting away from doing this, that is just an illusion: the JVM still needs access to a method with the signature public static void main(String[]). It is just that by extending App, together with the mechanism of DelayedInit, Scala provides this method on our behalf.
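For reference, the first approach the asker mentions, an explicit main method in an object, satisfies this JVM requirement directly. A minimal sketch (HelloExplicit is a made-up name for illustration):
object HelloExplicit {
  // The Scala compiler emits a static forwarder, so the JVM sees
  // public static void main(String[]) on the class HelloExplicit.
  def main(args: Array[String]): Unit =
    println("Hello from an explicit main")
}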
Second, it is also good to reiterate that code found in the body of a class (or object) definition is the initialisation code of that class/object and is executed automatically when it is instantiated. In Java, it is more or less the code you would put in the constructor block.
So for a class:
class Foo {
  // code.
  def method = ???
}
Whatever that code is, it will be executed automatically when you call new Foo.
In the case of an object:
object Foo {
  // code.
  def method = ???
}
the code will be executed automatically without you having to call new, since Scala automatically makes a singleton instance called Foo available for you.
So basically, anything in the body of the definition gets executed automatically. You do not need to execute it explicitly.
Now to the DelayedInit trait. One thing to be aware of is that it provides a mechanism to perform what could be called a compiler trick, where certain parts of our code get rewritten. This is one of the reasons why it can be confusing to reason about: when you use it, what actually gets executed is not the code you are reading but a slight modification of it. To understand what is going on, you then need to understand how the compiler alters the code.
The trick the DelayedInit trait allows us to perform is to take the code that forms the body of a class/object definition and turn it into an argument that is passed by name to the delayedInit method defined on DelayedInit.
Basically it rewrites this:
object Foo {
  // some code
}
into:
object Foo {
  delayedInit({
    // some code
  })
}
This means that instead of // some code being executed automatically, delayedInit is the method that is called automatically, with // some code passed to it as a by-name argument.
So anything that extends DelayedInit has its initialisation code replaced by a call to delayedInit, with that initialisation code passed as an argument. That is why nobody needs to call the delayedInit method explicitly.
Now let us see how this ties in to the App trait, and how the App trait is used to provide the entry point to a Scala application.
As you will notice, the delayedInit method on the DelayedInit trait does not provide any implementation. Hence the actual behaviour of delayedInit when it is called needs to be provided by something else that extends DelayedInit.
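For reference, the DelayedInit trait declares essentially just one abstract method (a sketch, omitting the deprecation annotation and documentation):
trait DelayedInit {
  // Compiler-generated code passes the body of the extending class/object
  // here as a by-name argument.
  def delayedInit(x: => Unit): Unit
}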
The App trait is such an implementation. And what does the App trait do? Two important things related to the topic at hand:
It provides an implementation of delayedInit which takes the initialisation code it is passed and puts it in a ListBuffer.
It provides the main method def main(args: Array[String]), which satisfies the JVM's requirement for a public static void main(String[]) method to serve as the entry point to a program. What this main method does is execute whatever code has been placed in the ListBuffer.
The above characteristics of the App trait mean that any object/class extending it has its initialisation code passed to delayedInit, which adds it to the ListBuffer; the extending object/class also gets a main method which, when called (most of the time by the JVM as the entry point), runs through the code in the ListBuffer and executes it.
Basically it turns this:
object Foo {
  // some code
}
into this:
object Foo {
  // the implementation of delayedInit puts `// some code` into a list buffer
  delayedInit({
    // some code
  })

  def main(args: Array[String]): Unit = {
    // the implementation runs through and executes the code found in the list
    // buffer that was populated by the call(s) to delayedInit
    ???
  }
}
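To make the buffering concrete, here is a minimal sketch of an App-like trait; MiniApp and Demo are made-up names for illustration, not the actual scala.App source:
import scala.collection.mutable.ListBuffer

trait MiniApp extends DelayedInit {
  // Each deferred body ends up here as a thunk.
  private val initCode = new ListBuffer[() => Unit]

  // Called by compiler-generated code with the body of the extending object/class.
  override def delayedInit(body: => Unit): Unit =
    initCode += (() => body)

  // The JVM entry point: replay every buffered initialisation block in order.
  def main(args: Array[String]): Unit =
    for (proc <- initCode) proc()
}

object Demo extends MiniApp {
  println("body code, buffered by delayedInit and run from main")
}
Since DelayedInit is deprecated (see below), compiling this sketch produces deprecation warnings; it is purely illustrative.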
So why have a ListBuffer to store the code to be executed? Because, as I said above, it is possible to have a hierarchy of classes, where the initialisation code of each class in the hierarchy gets executed as part of running the program. To see this in action, take the following code snippet:
class AnotherClass {
  println("Initialising AnotherClass")
}
trait AnotherTrait {
  println("Initialising AnotherTrait")
}
trait YetAnotherTrait {
  println("Initialising YetAnotherTrait")
}
object Runner extends AnotherClass with AnotherTrait with YetAnotherTrait with App {
  println("Hello world")
}
When run, it outputs the following:
Initialising AnotherClass
Initialising AnotherTrait
Initialising YetAnotherTrait
Hello world
So the individual initialisation code in the hierarchy consisting of AnotherClass, AnotherTrait and YetAnotherTrait gets added to the initCode list buffer via the delayedInit method of the App trait, and it then gets executed by the main method, also provided by the App trait.
As you may have noticed by peeking into the source code, the whole DelayedInit mechanism is deprecated and scheduled for removal in the future.
delayedInit:
The init hook. This saves all initialization code for execution within main. This method is normally never called directly from user code. Instead it is called as compiler-generated code for those classes and objects (but not traits) that inherit from the DelayedInit trait and that do not themselves define a delayedInit method.
(from App.scala)
delayedInit has been deprecated since 2.11.0. It also has some outstanding bugs which have not been fixed.
initCodes:
I am not sure about the reason for defining initCodes as a ListBuffer.

Why is 'init' not assignable?

I just read that the init method can't be used as a value. Meaning:
var x = SomeClass.someClassFunction // ok
var y = SomeClass.init // error
Example found on Language reference
Why should it be like that? Is it a way to enforce, at the language level, that no dirty tricks come into play, is it because of some coercion issue, or maybe because it interferes with another feature?
Unlike Obj-C, where the init function can be called multiple times without problems, in Swift there actually is no method called init.
init is just a keyword meaning "the following is a constructor". The constructor is always called via MyClass() during the creation of a new instance; it is never called separately as a method, myInstance.init(). You can't get a reference to the underlying function because it would be impossible to call it.
This is also connected with the fact that constructors cannot be inherited. The code
var y = SomeClass.init
would also break subtyping, because subtypes are not required to have the same initializers.
Why should it be like that?
init is a special member, not a regular method.
Beyond that, there's no reason that you'd ever need to store init in a variable. The only objects that could use that function in a valid way are instances of the class where that particular init is defined, and any such object will have already been initialized before you could possibly make the assignment.
Initializers don't have a return value. In order to assign one to something, it would have to return something, and it doesn't.

Why do we call doesNotRecognizeSelector: method?

I am working with socket programming. I just wanted to clear up a doubt related to some code I downloaded from mobileorchard.com (Chatty). While doing R&D, I saw a function call in the ChatRoomViewController.m file:
[chatRoom broadcastChatMessage:input.text fromUser:[AppConfig getInstance].name];
When I looked in the Room.m file for the implementation of the above call, it was:
- (void)broadcastChatMessage:(NSString*)message fromUser:(NSString*)name
{
    // Crude way to emulate an "abstract" class
    [self doesNotRecognizeSelector:_cmd];
}
I googled "doesNotRecognizeSelector:"; according to Apple it is for error handling, stating "The runtime system invokes this method whenever an object receives an aSelector message it can't respond to or forward." My question is: why does the developer define the broadcastChatMessage:fromUser: function this way if it is of no use there, and whose "selector not found" exception is it meant to handle?
According to Stack Overflow it is used to create an abstract class, and according to this question it is to avoid the "Incomplete implementation" warning.
I am still not getting why that method is used in that Chatty code. Kindly help me understand the reason it is used there.
This is the method that exists on every NSObject-derived object and triggers the path to an exception when a method isn't recognized in a runtime call. For example, if you try to send a message called -foo to an NSString, it'll end up there, since that's not a valid method on NSString.
In this case, the Chatty class Room is a base class that is never used directly. LocalRoom and RemoteRoom derive from it, and both of those classes provide an overriding implementation of -broadcastChatMessage:fromUser:. Nobody ever calls the base class version, but for "completeness" the programmer has guaranteed that a subclasser must override it, by implementing the method and then turning around and calling this to trigger an exception.
Thing is, this isn't specifically idiomatic Objective-C. An "abstract" class is a concept from C++ and other languages; it's a base class that exists only as a "pattern" from which to subclass. (In ObjC, this is often done instead by creating a formal @protocol when there isn't meaningful state, as there (mostly) isn't here.)
Note that the call to -doesNotRecognizeSelector: is arbitrary. It's not necessary for avoiding compiler warnings here (since the method is in fact implemented), and the original author could just as easily have thrown an exception directly, or done nothing at all.
It seems to me that you already answered your own question. There is no way to declare abstract classes in Objective-C, so the closest thing is to have the methods that must be overridden throw exceptions. If you override this method in a subclass, then doesNotRecognizeSelector: will no longer be called. Basically it is a way to make a developer promise to implement this method in their subclass.
Also, as you mentioned, if you don't put this in, the compiler will issue a warning because no implementation exists for a method declared in the header. Calling doesNotRecognizeSelector: behaves the same as not implementing the method at all, but the compiler can see that you are doing it on purpose.

Scala instance value scoping

Note that this question and similar ones have been asked before, such as in Forward References - why does this code compile?, but I found that the answers still leave some questions open, so I'm having another go at this issue.
Within methods and functions, the effect of the val keyword appears to be lexical, i.e.
def foo {
  println(bar)
  val bar = 42
}
yielding
error: forward reference extends over definition of value bar
However, within classes, the scoping rules of val seem to change:
object Foo {
  def foo = bar
  println(bar)
  val bar = 42
}
Not only does this compile, but also the println in the constructor will yield 0 as its output, while calling foo after the instance is fully constructed will result in the expected value 42.
So it appears to be possible for methods to forward-reference instance values, which will, eventually, be initialised before the method can be called (unless, of course, you're calling it from the constructor), and for statements within the constructor to forward-reference values in the same way, accessing them before they've been initialised, resulting in a silly arbitrary value.
From this, a couple of questions arise:
Why doesn't val use its lexical compile-time effect within constructors?
Given that a constructor is really just a method, it seems rather inconsistent to entirely drop val's compile-time effect, giving it only its usual run-time effect.
Why does val, effectively, lose its effect of declaring an immutable value?
Accessing the value at different times may result in different results. To me, it very much seems like a compiler implementation detail leaking out.
What might legitimate use cases for this look like?
I'm having a hard time coming up with an example that absolutely requires the current semantics of val within constructors and wouldn't easily be implementable with a proper, lexical val, possibly in combination with lazy.
How would one work around this behaviour of val, getting back all the guarantees one is used to from using it within other methods?
One could, presumably, declare all instance vals to be lazy in order to get back to a val being immutable and yielding the same result no matter how it is accessed, and to make the compile-time effect observed within regular methods less relevant, but that seems like quite an awful hack for this sort of thing.
Given that this behaviour is unlikely to ever change in the actual language, would a compiler plugin be the right place to fix this, or is it possible to implement a val-like keyword with more sensible semantics within the language, for someone who just spent an hour debugging an issue caused by this oddity?
Only a partial answer:
Given that a constructor is really just a method ...
It isn't.
It doesn't return a result and doesn't declare a return type (or a name)
It can't be called again on an existing object of said class, like "foo".new("bar")
You can't hide it from a derived class
You have to call it with 'new'
Its name is fixed by the name of the class
Ctors look a little like methods from the syntax, in that they take parameters and have a body, but that's about all.
Why does val, effectively, lose its effect of declaring an immutable value?
It doesn't. You have to take an elementary type, which can't be null, to get this illusion; with objects, it looks different:
object Foo {
  def foo = bar
  println(bar.mkString)
  val bar = List(42)
}
// Exiting paste mode, now interpreting.
defined module Foo
scala> val foo=Foo
java.lang.NullPointerException
You can't change a val twice, you can't give it a different value than null or 0, you can't change it back, and a different value is only possible for the elementary types. So it's far away from being a variable: it's a (maybe uninitialized) final value.
What might legitimate use cases for this look like?
I guess working in the REPL with interactive feedback. You execute code without an explicit wrapping object or class. To get this instant feedback, the REPL can't wait until the (implicit) object gets its closing }. Therefore the class/object isn't read in a two-pass fashion where first all declarations and initialisations are performed.
How would one work around this behaviour of val, getting back all the guarantees one is used to from using it within other methods?
Don't read attributes in the ctor, just as in Java you don't read attributes that might get overwritten in subclasses.
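A small sketch of the kind of trap meant here (Base and Derived are made-up names): the base class constructor reads an attribute that a subclass overrides, and sees the JVM default value rather than either declared value.
class Base {
  val size = 10
  // Reading an overridable val in the constructor is fragile: while a Derived
  // instance is being constructed, the accessor `size` is the overridden one,
  // whose field has not been initialised yet, so it still returns 0 here.
  val buffer = new Array[Int](size)
}

class Derived extends Base {
  override val size = 100
}

// new Derived().buffer.length is 0, not 100 (and not 10).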
Update:
Similar problems can occur in Java. Direct access to an uninitialized final attribute is prevented by the compiler, but not if you access it via another method:
public class FinalCheck
{
    final int foo;

    public FinalCheck ()
    {
        // does not compile:
        // variable foo might not have been initialized
        // System.out.println (foo);

        // Does compile -
        bar ();
        foo = 42;
        System.out.println (foo);
    }

    public void bar () {
        System.out.println (foo);
    }

    public static void main (String args[])
    {
        new FinalCheck ();
    }
}
... you see two values for foo.
0
42
I don't want to excuse this behaviour, and I agree that it would be nice if the compiler warned consistently, in both Java and Scala.
So it appears to be possible for methods to forward-reference instance values, which will, eventually, be initialised before the method can be called (unless, of course, you're calling it from the constructor), and for statements within the constructor to forward-reference values in the same way, accessing them before they've been initialised, resulting in a silly arbitrary value.
A constructor is a constructor. You are constructing the object. All of its fields are initialized by the JVM (basically, zeroed), and then the constructor fills in whatever fields need filling in.
Why doesn't val use its lexical compile-time effect within constructors? Given that a constructor is really just a method, it seems rather inconsistent to entirely drop val's compile-time effect, giving it only its usual run-time effect.
I have no idea what you are saying or asking here, but a constructor is not a method.
Why does val, effectively, lose its effect of declaring an immutable value? Accessing the value at different times may result in different results. To me, it very much seems like a compiler implementation detail leaking out.
It doesn't. If you try to modify bar from the constructor, you'll see it is not possible. Accessing the value at different times in the constructor may result in different results, of course.
You are constructing the object: it starts out not constructed, and ends up constructed. For it not to change, it would have to start out with its final value, but how can it do that without someone assigning that value?
Guess who does that? The constructor.
What might legitimate use cases for this look like? I'm having a hard time coming up with an example that absolutely requires the current semantics of val within constructors and wouldn't easily be implementable with a proper, lexical val, possibly in combination with lazy.
There's no use case for accessing the val before its value has been filled in. It's just impossible to find out whether it has been initialized or not. For example:
class Foo {
  println(bar)
  val bar = 10
}
Do you think the compiler can guarantee it has not been initialized? Well, then open the REPL, put in the above class, and then this:
class Bar extends { override val bar = 42 } with Foo
new Bar
And see that bar was initialized when printed.
How would one work around this behaviour of val, getting back all the guarantees one is used to from using it within other methods?
Declare your vals before using them. But note that a constructor is not a method. When you write:
println(bar)
inside a constructor, you are writing:
println(this.bar)
And this, the object of the class you are writing a constructor for, has a bar getter, so it is called.
When you do the same thing in a method where bar is a local definition, there's no this with a bar getter.
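To close with a small sketch of the workarounds mentioned above (SafeFoo and LazyFoo are made-up names): either declare the val before its first use, or make it lazy, as the question itself suggests, so that the access triggers initialisation.
object SafeFoo {
  val bar = 42
  println(bar)   // prints 42: bar is initialised before it is read
}

object LazyFoo {
  println(bar)   // prints 42: accessing the lazy val initialises it on demand
  lazy val bar = 42
}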