Let's say I have a non-sealed trait, Foo, and throughout my code I define some Objects that extend Foo.
Is there a way that I can, at compile time, look up all objects that extend Foo and print out some information about them (like a string literal I keep in a val)?
If so, how? If not, why not?
Seems like an obscure feature to have, and it would be problematic, since often only a few pieces of your code are re-compiled (and it only gets worse with libraries and dependencies). If I were you, I would just "find in files" for extends Foo or with Foo and check the results. You could even write a small regex script to extract your val from each result, if you are willing to do that work.
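The "find in files" approach can be sketched as a couple of grep calls. The directory, file, and val names below are made-up examples, not taken from the question:

```shell
# set up a toy source tree to scan (stands in for your real project)
mkdir -p /tmp/foo-scan/src
cat > /tmp/foo-scan/src/Bars.scala <<'EOF'
object Bar1 extends Foo { val info = "first" }
object Bar2 extends SomeBase with Foo { val info = "second" }
EOF

# list every definition that mixes in Foo, with file and line number
grep -rnE 'extends Foo|with Foo' /tmp/foo-scan/src

# extract the string literal held in the (hypothetical) `info` val per match
grep -rhoE 'val info = "[^"]*"' /tmp/foo-scan/src
```

This is a text-level scan, so it will miss indirect subtyping (an object extending a subtrait of Foo), which is part of why a compile-time answer would be nicer for non-sealed traits.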
I am writing unit tests for a class (class A) that takes another class (class B) as its parameter, which in turn takes another class (class C) as its parameter. I need to mock class B in order to test class A, but mock doesn't take class parameters.
So I created a trait (trait BsTrait), and my class B extends BsTrait, following this answer - (ScalaMock. Mock a class that takes arguments).
Class I want to test - class A(b: BsTrait) {}
class B - class B(c: C) {}
class C - class C {}
trait of class B - trait BsTrait {}
My Unit Test -
val mockFactory = mockFunction[C, B]
val mockClient = mock[BsTrait]
mockFactory.expects(new C).returning(mockClient)
Error: /path/to/file/Test.scala:63: type mismatch; found : com.example.BsTrait required: com.example.B
mockFactory.expects(new C).returning(mockClient)
I tried adding a return type for mockFunction as well, but it threw the same error.
I know this is not directly a solution, though maybe it actually is.
In my experience: ditch the mocking framework in 99% of the cases where you want it, and just write mock implementations of traits (or interfaces in other languages). Yes, it will feel like boilerplate, but:
Often it turns out you will write less code. The framework mock code is short to begin with, but it almost always starts accumulating workarounds to allow all sorts of calls, sometimes even calls your test does not care about but that have to be there, and you end up repeating that in a lot of tests, just for the sake of the test.
Once you mock a function/method in your framework with some testing behaviour, people often don't notice when it isn't needed for the test any more and forget to delete it. Ouch: dead code that nothing warns us is dead.
Code you control is easier to extend or modify to your needs. Your own problem is a good example: had your mock just been a class, you wouldn't have needed to ask a question. Instead your mock is hidden in all sorts of reflection, perhaps bytecode manipulation (I don't know if ScalaMock does this, but a lot of frameworks do), and most likely other nasty things that are impossible to debug.
Framework mocks are often hard and unwieldy to maintain. If you add a method to your trait, you can end up needing to update a lot of tests, even though it should not affect them, just because those tests use it in some hidden way.
You don't fall into the mocking-framework pitfall of testing how you implemented your function/method rather than what it does. (E.g. is it important that your REST client specifically calls the method named get, or is it important that it performs a GET request, no matter whether it calls exchange, get or something else?)
So even though it feels like overkill, seriously consider doing it anyway. You will most likely write less code in the long run, and the code you write will be clearer and more readable. If a function or method is no longer used, you remove it from your trait and get a compiler warning on your mock. Your question would have solved itself, because you'd just extend your mock's behavior enough to get it going. And you are more likely to actually write good tests.
Lastly, if you follow this suggestion, just one thing: keep your mocks independent. Do not let B take a parameter C in your mock. Make your mock of B work with lists, maps, or whatever you need. The point is to test A, not B and C :)
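To make this concrete with the question's own names: the traits in the question are empty, so the lookup method below is a hypothetical stand-in for whatever B actually does. A hand-written mock is just an ordinary class:

```scala
// `lookup` is a made-up method; the question's BsTrait is empty
trait BsTrait {
  def lookup(key: String): Option[Int]
}

// the class under test, depending only on the trait
class A(b: BsTrait) {
  def doubled(key: String): Option[Int] = b.lookup(key).map(_ * 2)
}

// a hand-written mock: a plain class, no framework, trivially debuggable,
// and independent of C entirely - it just wraps a Map
class FakeB(data: Map[String, Int]) extends BsTrait {
  def lookup(key: String): Option[Int] = data.get(key)
}
```

A test then reads `new A(new FakeB(Map("x" -> 21))).doubled("x")`, with no expectations DSL and no type mismatch between B and BsTrait to fight.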
It seems that one issue with scala.Symbol is that it allocates two objects: the Symbol and the String it is based on.
Why can this extra object not be eliminated by defining Sym something like:
class Sym private (val name: String) extends AnyVal {
  override def toString = "'" + name
}

object Sym {
  def apply(name: String) = new Sym(name.intern)
}
Admittedly the performance implications of object allocation are likely tiny, but comments from those with a deeper understanding of Scala would be illuminating. In particular, does the above provide efficient maps via equality by reference?
Another advantage of the simple Sym above shows up in map-centric applications with lots of string keys, where the strings name entirely different kinds of things: type-safe Sym classes can be defined so that Maps show definitively, to the programmer, the compiler, and refactoring tools, what the key really is.
(Neither Symbol nor Sym can be extended, the former apparently by choice and the latter because it extends AnyVal, but Sym is trivial enough to just duplicate with an appropriate name.)
It is not possible to implement Symbol as an AnyVal. The main benefit of Symbols over simple Strings is that Symbols are guaranteed to be interned, so you can test equality of symbols using a simple reference comparison instead of an expensive string comparison.
See the source code of Symbol. Equals is overridden and redefined to do a reference comparison using the eq method.
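The interning guarantee is easy to observe directly. Equal symbols are the same instance, so eq (reference comparison) suffices, while equal but separately constructed strings are distinct objects:

```scala
// two symbols built from equal strings are the same interned instance
val a = Symbol("foo")
val b = Symbol("foo")
assert(a eq b)

// equal strings are not in general the same object, so String equality
// must compare characters
val s1 = new String("foo")
val s2 = new String("foo")
assert(s1 == s2 && !(s1 eq s2))
```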
But unfortunately an AnyVal does not allow you to redefine equality. From the SIP-15 for user-defined value classes:
C may not define concrete equals or hashCode methods.
So while it would be extremely useful to have a way to redefine equality without incurring runtime overhead, it is unfortunately not possible.
Edit: never use string.intern in any program where performance is important. The performance of string.intern is horrible compared to even a trivial intern table. See this SO question and answer. See the source code of Symbol above for a simple intern table.
Unfortunately, object allocation for an AnyVal is forced whenever it is put into a collection, like the Map in your example. This is because the value class has to be cast to the type parameter of the collection, and casting to a new type always forces allocation. This eliminates almost any advantage of declaring Sym as a value class. See Allocation Details in the Scala documentation page for value classes.
For an AnyVal, the class is effectively the String. The magically added methods and type safety are just compiler tricks. It's the String that gets transferred all around.
For pattern matching (Symbol's purpose, as I suppose), Scala needs the class of an object. Thus, Symbol extends AnyRef.
I am new to Scala and have heard a lot that everything is an object in Scala. What I don't get is: what's the advantage of "everything is an object"? What are things I could not do if everything were not an object? Examples are welcome. Thanks
The advantage of having "everything" be an object is that you have far fewer cases where abstraction breaks.
For example, methods are not objects in Java. So if I have two strings, I can
String s1 = "one";
String s2 = "two";
static String caps(String s) { return s.toUpperCase(); }
caps(s1); // Works
caps(s2); // Also works
So we have abstracted away string identity in our operation of making something uppercase. But what if we want to abstract away the identity of the operation, that is, we do something to a String that gives back another String, but we want to abstract away what the details are? Now we're stuck, because methods aren't objects in Java.
In Scala, methods can be converted to functions, which are objects. For instance:
def stringop(s: String, f: String => String) = if (s.length > 0) f(s) else s
val s1 = "one"
val s2 = "two"
stringop(s1, _.toUpperCase)
stringop(s2, _.toLowerCase)
Now we have abstracted the idea of performing some string transformation on nonempty strings.
And we can make lists of the operations and such and pass them around, if that's what we need to do.
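The "lists of the operations" idea is literal: because the operations are objects, they go into collections like any other value. A small sketch:

```scala
// string transformations are plain values, so they can be collected,
// passed around, and applied in sequence
val ops: List[String => String] = List(_.toUpperCase, _.reverse, _.trim)
val result = ops.foldLeft("  hello ")((s, f) => f(s))
// result == "OLLEH"
```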
There are other less essential cases (object vs. class, primitive vs. not, value classes, etc.), but the big one is collapsing the distinction between method and object so that passing around and abstracting over functionality is just as easy as passing around and abstracting over data.
The advantage is that you don't have different operators that follow different rules within your language. For example, in Java, operations involving objects use the dot-name technique of calling code (static members still use the dot-name technique, though sometimes the this object or the static class is inferred), while built-in items (not objects) use a different mechanism: built-in operator manipulation.
Number one = Integer.valueOf(1);
Number two = Integer.valueOf(2);
Number three = one.plus(two); // if only such methods existed.
int one = 1;
int two = 2;
int three = one + two;
The main difference is that the dot-name technique is subject to polymorphism, operator overloading, method hiding, and all the good stuff you can do with Java objects. The + technique is predefined and completely inflexible.
Scala circumvents the inflexibility of + by handling it as a dot-name operator, defining a strong one-to-one mapping of such operators to object methods. Hence, in Scala, "everything is an object" means exactly that, so the operation
5 + 7
results in two objects being created (a 5 object and a 7 object), the + method of the 5 object being called with the parameter 7 (if my Scala memory serves me correctly), and a 12 object being returned as the value of the 5 + 7 operation.
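That desugaring can be checked directly: the infix form and the explicit method-call form are the same call.

```scala
// infix operator syntax and explicit method-call syntax are equivalent
val infix  = 5 + 7
val method = (5).+(7)   // calling Int's + method by name
assert(infix == 12 && method == 12)
```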
This "everything is an object" has a lot of benefits in a functional programming environment: for example, blocks of code are now objects too, making it possible to pass blocks of code (without names) back and forth as parameters, yet still be bound to strict type checking (the block of code only returns Long, or a subclass of String, or whatever).
When it boils down to it, this makes some kinds of solutions very easy to implement, and the inefficiencies are often mitigated by not needing the "move into primitives, manipulate, move out of primitives" marshalling code.
One specific advantage that comes to my mind (since you asked for examples) is that what in Java are primitive types (int, boolean, ...) are in Scala objects that you can add functionality to with implicit conversions. For example, if you want to add a toRoman method to ints, you could write an implicit class like:
implicit class RomanInt(i: Int) {
  def toRoman = // some algorithm to convert i to a Roman representation
}

Then you can call this method on any Int literal:

val romanFive = 5.toRoman // V
This way you can 'pimp' basic types to adapt them to your needs
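For completeness, here is one way the elided algorithm could be filled in (a sketch; the answer deliberately leaves the body out, and this greedy table-driven version is just one standard choice):

```scala
// greedy conversion: repeatedly subtract the largest numeral value that fits
implicit class RomanInt(i: Int) {
  private val numerals = List(
    1000 -> "M", 900 -> "CM", 500 -> "D", 400 -> "CD",
    100  -> "C", 90  -> "XC", 50  -> "L", 40  -> "XL",
    10   -> "X", 9   -> "IX", 5   -> "V", 4   -> "IV", 1 -> "I")

  def toRoman: String =
    numerals.foldLeft((i, new StringBuilder)) { case ((n, sb), (v, s)) =>
      (n % v, sb.append(s * (n / v)))   // emit as many copies of s as fit
    }._2.toString
}

val romanFive = 5.toRoman // V
```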
In addition to the points made by others, I always emphasize that the uniform treatment of all values in Scala is in part an illusion. For the most part it is a very welcome illusion. And Scala is very smart to use real JVM primitives as much as possible and to perform automatic transformations (usually referred to as boxing and unboxing) only as much as necessary.
However, if boxing and unboxing happen very frequently at runtime, there can be undesirable costs (both memory and CPU) associated with them. This can be partially mitigated with the use of specialization, which creates special versions of generic classes when particular type parameters are of (programmer-specified) primitive types. This avoids boxing and unboxing, but comes at the cost of more .class files in your running application.
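A minimal sketch of what specialization looks like in source (the class Box is a made-up example): the annotation asks the compiler to emit extra variants of the class for the listed primitive types, so instances parameterized with those types store the primitive directly instead of a boxed wrapper.

```scala
// the compiler generates specialized subclasses of Box for Int and Double,
// at the cost of extra .class files
class Box[@specialized(Int, Double) T](val value: T) {
  def get: T = value
}

val bi = new Box(42)    // can use the Int-specialized variant, no boxing
val bd = new Box(3.14)  // likewise for Double
```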
Not everything is an object in Scala, though more things are objects in Scala than their analogues in Java.
The advantage of objects is that they're bags of state which also have some behavior coupled with them. With the addition of polymorphism, objects give you ways of changing the implicit behavior and state. Enough with the poetry, let's go into some examples.
The if statement is not an object, in either Scala or Java. If it were, you would be able to subclass it, inject another dependency in its place, and use it to do things like logging to a file any time your code uses an if statement. Wouldn't that be magical? In some cases it would help you debug; in others it would turn your hair white before you found a bug caused by someone overriding the behavior of if.
Visiting an objectless, statementful world: imagine your favorite OOP language and the standard library it provides. There are plenty of classes there, right? They offer ways for customization, right? They take parameters that are other objects; they create other objects. You can customize all of these; you have polymorphism. Now imagine that the entire standard library were simply keywords. You wouldn't be able to customize nearly as much, because you can't override keywords. You'd be stuck with whatever cases the language designers decided to implement, and you'd be helpless to customize anything. Such languages exist, and you know them well: the SQL-like languages. You can barely create functions there, and in order to customize the behavior of the SELECT statement, new versions of the language had to appear which included the most-desired features. This is an extreme world, where you can only program by asking the language designers for new features (which you might not get, because someone else more important might require a feature incompatible with what you want).
In conclusion, NOT everything is an object in Scala: classes, expressions, keywords, and packages surely aren't. More things are than in Java, however, like functions.
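The "functions are objects" point can be seen directly: a function value is an ordinary object with an apply method, so it can be called explicitly, stored, and combined like any other value.

```scala
// a function value is an object (an instance of Function1) with an apply method
val inc: Int => Int = _ + 1
assert(inc(2) == 3)
assert(inc.apply(2) == 3)       // the same call, written as an explicit method call

// being objects, functions compose like values
val plusTwo = inc andThen inc
assert(plusTwo(0) == 2)
```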
What's IMHO a nice rule of thumb: more objects equals more flexibility.
P.S. In Python, for example, even more things are objects (such as classes themselves, and the analogous concepts to packages, that is, Python modules and packages). There, black magic is easier to do, which brings both good and bad consequences.
I found this code example in Programming in Scala, 2nd Ed. (Chapter 25, Listing 25.11):
object PrefixMap extends {
def empty[T] = ...
def apply[T](kvs: (String, T)*): PrefixMap[T] = ...
...
}
Why is the extends clause there without a superclass name? It looks like extending an anonymous class, but for what purpose? The accompanying text doesn't explain or even mention this construct anywhere. The code actually compiles and apparently works perfectly with or without it.
OTOH I found the exact same code on several web pages, including this (which looks like the original version of the chapter in the book). I doubt that a typo could have passed under the radar of so many readers up to now... so am I missing something?
I tried to google it, but struggled even to find proper search terms for it. So could someone explain whether this construct has a name and/or practical use in Scala?
Looks like a print error to me. It will work all the same, though, which probably helped hide it all this time.
Anyway, that object is extending a structural type, though it could also be an early initialization if you had with XXX at the end. Hmm. It looks more like an early initialization without any class or trait to be initialized afterwards, actually; structural types do not contain code, I think.
I believe I understand the basics of inline functions: instead of a function call resulting in parameters being placed on the stack and an invoke operation occurring, the definition of the function is copied at compile time to where the invocation was made, saving the invocation overhead at runtime.
So I want to know:
Does scalac use smarts to inline some functions (e.g. private def) without the hints from annotations?
How do I judge when it would be a good idea to hint to scalac that it should inline a function?
Can anyone share examples of functions or invocations that should or shouldn't be inlined?
Never @inline anything whose implementation might reasonably change and which is going to be a public part of a library.
When I say "implementation change" I mean that the logic actually might change. For example:
object TradeComparator extends java.lang.Comparator[Trade] {
  @inline def compare(t1: Trade, t2: Trade): Int = t1.time compare t2.time
}
Let's say the "natural comparison" then changed to be based on an atomic counter. You may find that an application ends up with 2 components, each built and inlined against different versions of the comparison code.
Personally, I use @inline for aliases:
class A(param: Param) {
  @inline def a = param.a
  def a2() = a * a
}
Now, I couldn't find a way to know whether it does anything (I tried to decompile the generated .class with jad, but couldn't conclude anything).
My goal is to make explicit what I want the compiler to do, but to let it decide what's best, or simply do what it's capable of. If it doesn't do it now, maybe a later compiler version will.