Is there a way to get warnings/errors when a return value of a specific type is unused? - scala

Does Scala have some equivalent to Rust's #[must_use] annotation?
I have a type which always needs to have a method called on it after it is returned. There are several methods that return it, and ignoring the return value is always an error. (It makes calling the method that returned it entirely pointless.)
I can't use -Ywarn-value-discard because the codebase is full of other ignored returns which are fine. I only want a warning/error when certain types are discarded.

In 2.11 there is -Ywarn-unused: "Warn when local and private vals, vars, defs, and types are unused."
But that's not exactly what helps in your case.
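If you build with sbt, the flag can be enabled like this (a sketch; the set of available -Ywarn-* options differs between Scala versions):
// build.sbt
scalacOptions += "-Ywarn-unused"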
scala does not warn about unused computation or value
--
To me, this looks like a design issue. Suppose you have init and execute methods.
execute can be invoked only after init... You should force the user to invoke init before execute.
It can be invoked lazily inside execute, or during class construction.
I cannot really think of scenarios where you truly need such warnings.
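A minimal sketch of that design, with hypothetical Task/init/execute names: the required step is performed lazily inside execute, so callers cannot forget it.
class Task {
  private var initialized = false

  private def init(): Unit = {
    // expensive setup goes here
    initialized = true
  }

  def execute(): Unit = {
    if (!initialized) init() // init is forced here; no warning machinery needed
    // ... actual work ...
  }
}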

Related

Why bother casting the return value since the type has been specified when calling the function?

I have been learning Unity editor scripting recently, and I came across a piece of code in the Unity Manual like this:
EditorWindowTest window = (EditorWindowTest)EditorWindow.GetWindow(typeof(EditorWindowTest), true, "My Empty Window");
I don't know why it bothers casting the result with (EditorWindowTest) again, since the type has already been specified in the parameter of GetWindow().
Thanks in advance :)
There are multiple overloads of the EditorWindow.GetWindow method.
The one used in your code snippet is one of the non-generic ones. It accepts a Type argument which it can use at runtime to create the right type of window. However, since it doesn't use generics, it's not possible to know the type of the window at compile time, so the method just returns an EditorWindow, as that's the best it can do.
You can hover over a method in your IDE to see the return type of any method for yourself.
When using one of the generic overloads of the GetWindow method, you don't need to do any manual casting, since the method already knows at compile time the exact type of the window and returns an instance of that type directly.
The generic variants should be used when possible, because they make the code safer by removing the need for casting at runtime, which could cause exceptions.
If you look closely, GetWindow's return type is EditorWindow, not EditorWindowTest, so the cast makes sense.
https://docs.unity3d.com/ScriptReference/EditorWindow.GetWindow.html

.eq causing warning. How do I get rid of it?

I'm using JDO with the DataNucleus typesafe query language in Scala. I therefore have code that looks like this:
val id: Long = // something
val cand: QDbObject = QDbObject.candidate()
pm.query[DbObject].filter(cand.id.eq(id))...
In a nutshell, this runs a query for all the DbObjects whose id field is equal to id. Unfortunately, I get the following warning:
NumericExpression[Long] and Long are unrelated: they will most likely
never compare equal
Clearly, the Scala compiler thinks that NumericExpression[Long] is using the built-in definition of eq(), which is similar to ==, but since this comes from Java, the eq() method has absolutely nothing to do with Scala's eq() method.
Is there any way to get rid of the warning? Obviously, this is going to happen a lot and I'm afraid these non-warnings will hide real warnings.
Update (2013-06-29)
This was fixed in Scala 2.10.2. The warnings are gone.
I was more concerned about whether the Java eq method would actually be called instead of Scala's eq, but it is. I don't think you can get rid of the warning, though. If you are using Scala 2.10, you can create an implicit value class with a differently named method that calls eq -- it will be effectively the same thing, but the warning will be limited to one file.
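A sketch of that workaround; the package name and the exact NumericExpression/BooleanExpression signatures below are assumptions based on the question, not verified against the DataNucleus API.
import org.datanucleus.query.typesafe.{BooleanExpression, NumericExpression}

object QuerySyntax {
  // The Java eq call, and therefore the warning, is confined to this one file.
  implicit class LongExpressionOps(val expr: NumericExpression[java.lang.Long]) extends AnyVal {
    def isEq(value: Long): BooleanExpression = expr.eq(java.lang.Long.valueOf(value))
  }
}

// Usage after `import QuerySyntax._`:
// pm.query[DbObject].filter(cand.id.isEq(id))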

Parameterized logging in slf4j - how does it compare to scala's by-name parameters?

Here are two statements that seem to be generally accepted, but that I can't really get over:
1) Scala's by-name params gracefully replace the ever-so-annoying log4j usage pattern:
if (l.isDebugEnabled() ) {
logger.debug("expensive string representation, eg: godObject.toString()")
}
because the by-name-parameter (a Scala-specific language feature) doesn't get evaluated before the method invocation.
2) However, this problem is solved by parameterized logging in slf4j:
logger.debug("expensive string representation, eg: {}", godObject[.toString()]);
So, how does this work?
Is there some low-level magic involved in the slf4j library that prevents the evaluation of the parameter before the "debug" method execution? (is that even possible? Can a library impact such a fundamental aspect of the language?)
Or is it just the simple fact that an object is passed to the method rather than a String? (And maybe the toString() of that object is invoked in the debug() method itself, if applicable.)
But then, isn't that true for log4j as well? (it does have methods with Object params).
And wouldn't this mean that if you pass a string - as in the code above - it would behave identically to log4j?
I'd really love to have some light shed on this matter.
Thanks!
There is no magic in slf4j. The problem with logging used to be that if you wanted to log, say,
logger.debug("expensive string representation: " + godObject)
then no matter whether the debug level was enabled in the logger or not, you always evaluated godObject.toString(), which can be an expensive operation, and paid for the string concatenation as well. This comes simply from the fact that in Java (and most languages) arguments are evaluated before they're passed to a function.
That's why slf4j introduced logger.debug(String msg, Object arg) (and other variants for more arguments). The whole idea is that you pass cheap arguments to the debug function and it calls toString on them and combines them into a message only if the debug level is on.
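A sketch (in Scala, and not the real slf4j source) of that idea: the argument object is kept as-is and only rendered to a String once the level check has passed.
class SketchLogger(debugEnabled: Boolean) {
  def isDebugEnabled: Boolean = debugEnabled

  def debug(format: String, arg: Any): Unit =
    if (isDebugEnabled) {
      // arg.toString is only reached when the debug level is on
      println(format.replace("{}", arg.toString))
    }
}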
Note that by calling
logger.debug("expensive string representation, eg: {}", godObject.toString());
you drastically reduce this advantage, because this converts godObject to a String every time, before it is passed to debug, regardless of whether the debug level is on. You should use only
logger.debug("expensive string representation, eg: {}", godObject);
However, this still isn't ideal. It only spares calling toString and string concatenation. But if your logging message requires some other expensive processing to create the message, it won't help. Like if you need to call some expensiveMethod to create the message:
logger.debug("expensive method, eg: {}",
godObject.expensiveMethod());
then expensiveMethod is always evaluated before being passed to logger. To make this work efficiently with slf4j, you still have to resort back to
if (logger.isDebugEnabled())
logger.debug("expensive method, eg: {}",
godObject.expensiveMethod());
Scala's call-by-name helps a lot in this matter, because it allows you to wrap an arbitrary piece of code into a function object and evaluate that code only when needed. This is exactly what we need. Let's have a look at slf4s, for example. This library exposes methods like
def debug(msg: => String) { ... }
Why no arguments like in slf4j's Logger? Because we don't need them any more. We can write just
logger.debug("expensive representation, eg: " +
godObject.expensiveMethod())
We don't pass a message and its arguments; we directly pass a piece of code that evaluates to the message, but only if the logger decides to evaluate it. If the debug level isn't on, nothing within logger.debug(...) is ever evaluated; the whole thing is just skipped. Neither expensiveMethod is called, nor do any toString calls or string concatenation happen. So this approach is the most general and flexible one: you can pass any expression that evaluates to a String to debug, no matter how complex it is.
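A sketch of how such a by-name debug method can be implemented and used; nothing inside the call is evaluated when the level is off.
class ByNameLogger(debugEnabled: Boolean) {
  def debug(msg: => String): Unit =
    if (debugEnabled) println(msg) // msg is evaluated here, and only here
}

val logger = new ByNameLogger(debugEnabled = false)
// With debug off, neither expensiveMethod() nor the concatenation below ever runs:
// logger.debug("expensive method, eg: " + godObject.expensiveMethod())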

Why making a difference between methods and functions in Scala?

I have been reading about methods and functions in Scala. Jim's post and Daniel's complement to it do a good job of explaining the differences between the two. Here is what I took away:
functions are objects, methods are not;
as a consequence, functions can be passed as arguments, but methods cannot;
methods can be type-parametrised, functions cannot;
methods are faster.
I also understand the difference between def, val and var.
Now I have actually two questions:
Why can't we parametrise the apply method of a function to parametrise the function? And
Why can't the method be called by the function object so that it runs faster? Or why can't the caller of the function be made to call the original method directly?
Looking forward to your answers and many thanks in advance!
1 - Parameterizing functions.
It is theoretically possible for a compiler to parameterize the type of a function; one could add that as a feature. It isn't entirely trivial, though, because functions are contravariant in their argument and covariant in their return value:
trait Function1[-T,+R] { ... }
which means that another function that can accept a wider range of argument types counts as a subtype (since it can process anything that the original can process), and if it produces a narrower set of results, that's okay (since it will still obey the original contract that way). But how do you encode
def fn[A](a: A) = a
in that framework? The whole point is that the return type is equal to the type passed in, whatever that type has to be. You'd need
Function1[ ThisCanBeAnything, ThisHasToMatch ]
as your function type. "This can be anything" is well-represented by Any if you want a single type, but then you could return anything as the original type is lost. This isn't to say that there is no way to implement it, but it doesn't fit nicely into the existing framework.
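For instance, the closest you can get with a plain Function1 for the fn above loses the type information:
def fn[A](a: A): A = a          // the identity method from above
val typed: String = fn("hi")    // fine: A is inferred as String
val g: Any => Any = fn          // as a Function1 value, A collapses to Any
val untyped: Any = g("hi")      // the result is now only known to be Any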
2 - Speed of functions.
This is really simple: a function is the apply method on another object. You have to have that object in order to call its method. This will always be slower (or at least no faster) than calling your own method, since you already have yourself.
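To make that concrete, a quick sketch of the difference; calling a function value always goes through the apply method of an extra object.
def addMethod(x: Int, y: Int): Int = x + y // a method: no object is allocated for it
val addFunction: (Int, Int) => Int = _ + _ // a Function2 object is allocated here

addFunction(1, 2)       // sugar for addFunction.apply(1, 2)
val lifted = addMethod _ // eta-expansion wraps the method in a new function object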
As a practical matter, JVMs can do a very good job inlining functions these days; there is often no difference in performance as long as you're mostly using your method or function, not creating the function object over and over. If you're deeply nesting very short loops, you may find yourself creating way too many functions; moving them out into vals outside of the nested loops may save time. But don't bother until you've benchmarked and know that there's a bottleneck there; typically the JVM does the right thing.
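A sketch of the hoisting advice (the effect depends on the Scala version and how well the JIT inlines, so treat it as a shape to follow rather than a guaranteed win):
val data = Vector(1, 2, 3, 4)

def countEvensRepeatedly(times: Int): Int = {
  var total = 0
  var i = 0
  while (i < times) {
    total += data.count(x => x % 2 == 0) // a fresh function value may be created each pass
    i += 1
  }
  total
}

def countEvensHoisted(times: Int): Int = {
  val isEven: Int => Boolean = _ % 2 == 0 // created once, outside the loop
  var total = 0
  var i = 0
  while (i < times) {
    total += data.count(isEven)
    i += 1
  }
  total
}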
Think about the type signature of a function. It explicitly says what types it takes. So then type-parameterizing apply() would be inconsistent.
A function is an object, which must be created, initialized, and then garbage-collected. When apply() is called, it has to grab the function object in addition to the parent.

scala - is it possible to force immutability on an object?

I mean, is there some declarative way to prevent an object from changing any of its members?
In the following example
class student(var name:String)
val s = new student("John")
"s" has been declared as a val, so it will always point to the same student.
But is there some way to prevent s.name from being changed, just by declaring s immutable somehow?
Or is the only solution to declare everything as val and manually enforce immutability?
No, it's not possible to declare something immutable. You have to enforce immutability yourself, by not allowing anyone to change it; that is, remove all ways of modifying the class.
Someone can still modify it using reflection, but that's another story.
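A sketch of what enforcing it by hand looks like: expose only vals, and return a modified copy instead of mutating in place.
final class Student(val name: String) {
  def withName(newName: String): Student = new Student(newName) // "modification" builds a new object
}

val s = new Student("John")
// s.name = "Jane"              // does not compile: name is a val
val renamed = s.withName("Jane") // s itself is untouched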
Scala doesn't enforce that, so there is no way to know. There is, however, an interesting compiler-plugin project named pusca (I guess it stands for Pure-Scala). Pure is defined there as not mutating a non-local variable and being side-effect free (e.g. not printing to the console)—so that calling a pure method repeatedly will always yield the same result (what is called referentially transparent).
I haven't tried that plug-in myself, so I can't say whether it's stable or usable yet.
There is no way that Scala could do this generally.
Consider the following hypothetical example:
class Student(var name : String, var course : Course)
def stuff(course : Course) {
  magically_pure_val s = new Student("Fredzilla", course)
  someFunctionOfStudent(s)
  genericHigherOrderFunction(s, someFunctionOfStudent)
  course.someMethod()
}
The pitfalls for any attempt to actually implement that magically_pure_val keyword are:
someFunctionOfStudent takes an arbitrary student, and isn't implemented in this compilation unit. It was written/compiled knowing that Student consists of two mutable fields. How do we know it doesn't actually mutate them?
genericHigherOrderFunction is even worse; it's going to take our Student and a function of Student, but it's written polymorphically. Whether or not it actually mutates s depends on what its other arguments are; determining that at compile time with full generality requires solving the Halting Problem.
Let's assume we could get around that (maybe we could set some secret flags that mean exceptions get raised if the s object is actually mutated, though personally I wouldn't find that good enough). What about that course field? Does course.someMethod() mutate it? That method call isn't invoked from s directly.
Worse than that, we only know that we'll have passed in an instance of Course or some subclass of Course. So even if we are able to analyze a particular implementation of Course and Course.someMethod and conclude that this is safe, someone can always add a new subclass of Course whose implementation of someMethod mutates the Course.
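For instance, a hypothetical subclass like this would defeat any per-class analysis of Course:
class Course {
  def someMethod(): Unit = () // looks harmless in the base class
}

class CountingCourse extends Course {
  var calls = 0
  override def someMethod(): Unit = { calls += 1 } // the override mutates state
}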
There's simply no way for the compiler to check that a given object cannot be mutated. The pusca plugin mentioned by 0__ appears to detect purity the same way Mercury does; by ensuring that every method is known from its signature to be either pure or impure, and by raising a compiler error if the implementation of anything declared to be pure does anything that could cause impurity (unless the programmer promises that the method is pure anyway).[1]
This is quite different from simply declaring a value to be completely (and deeply) immutable and expecting the compiler to notice if any of the code that could touch it could mutate it. It's also not a perfect inference, just a conservative one.
[1]The pusca README claims that it can infer impurity of methods whose last expression is a call to an impure method. I'm not quite sure how it can do this, as checking if that last expression is an impure call requires checking if it's calling a not-declared-impure method that should be declared impure by this rule, and the implementation might not be available to the compiler at that point (and indeed could be changed later even if it is). But all I've done is look at the README and think about it for a few minutes, so I might be missing something.