Scala scoping brings in types not shown in import

I read an article arguing that Scala's type inference might do too much.
Given this piece of code:

package A1 {
  case class Foo(a: Int)
  package object A2 {
    def bar() = Foo(1)
  }
}
--
import A1.A2._

object Main extends App {
  val a: Foo = bar() // error: not found: type Foo
}
It won't compile as Main can't see Foo unless we also import A1.Foo.
Whereas if the type annotation is taken away, it's fine:
import A1.A2._

object Main extends App {
  val a = bar()
}
The author thinks that, compared to Java, where we have to explicitly import whatever types we use, this reduces readability, as imports no longer carry complete information about the set of types the code uses.
I think what he wants is for every type being used, explicitly or implicitly, to be imported, to make it clear what the code depends on and perhaps to assist static analysis tools.
What do you think about this?
EDIT:
As @flavian points out, this has little to do with type inference and more with how scoping works.
EDIT2:
I have had a second thought on this. Maybe this question is not important if an IDE can automatically add imports (even for types used implicitly) when the developer wants it to.
--

In your first example the compiler sees
val a: Foo = bar()
and doesn't know what Foo is, so it complains.
To fix this code there are three options:

// import Foo
import A1.Foo
val a: Foo = bar()

// use the fully qualified name
val a: A1.Foo = bar()

// let the compiler infer the type
val a = bar()
These all compile the same.
The last option is not available in Java.
The author thinks that, compared to Java, where we have to explicitly import whatever types we use
Not true.
// we can use Foo with no import
useFoo(x.getFoo());
// and we can use fully qualified names
A1.Foo foo = bar();
The compiler will add to the compiled class file a list of all classes that are needed by the class.

I don't think type inference is the question at play here. Members propagate in scope through direct import or inheritance. If you had:

trait A1 {
  case class Foo(a: Int)
}
object A2 extends A1

this would correctly bring Foo into scope (e.g. via import A2._). Again, as far as I know this is not a type inference problem; it has to do with the fact that members and implicits propagate only through imports or inheritance. It's more about how scoping works in Scala than anything else.
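A minimal runnable sketch (reusing the Foo and bar from the question) to make that concrete:

trait A1 {
  case class Foo(a: Int)
  def bar() = Foo(1)
}
object A2 extends A1

object Main extends App {
  import A2._ // a single import now brings in both bar and Foo
  val a: Foo = bar() // compiles: Foo is a member of A2 via inheritance
}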

Related

ficus configuration load generic

Loading a ficus configuration like
def loadConfiguration[T <: Product](): T = {
  import net.ceedubs.ficus.readers.ArbitraryTypeReader._
  import net.ceedubs.ficus.Ficus._
  val config: Config = ConfigFactory.load()
  config.as[T]
}
fails with:
Cannot generate a config value reader for type T, because it has no apply method in a companion object that returns type T, and it doesn't have a primary constructor
whereas when a case class (e.g. SomeClass) is specified directly instead of T, it works just fine. What am I missing here?
Ficus uses the type class pattern, which allows you to constrain generic types by specifying operations that must be available for them. Ficus also provides type class instance "derivation", which in this case is powered by a macro that can inspect the structure of a specific case class-like type and automatically create a type class instance.
The problem in this case is that T isn't a specific case class-like type—it's any old type that extends Product, which could be something nice like this:
case class EasyToDecode(a: String, b: String, c: String)
But it could also be:
trait X extends Product {
  val youWillNeverDecodeMe: String
}
The macro you've imported from ArbitraryTypeReader has no idea at this point, since T is generic here. So you'll need a different approach.
The relevant type class here is ValueReader, and you could minimally change your code to something like the following to make sure T has a ValueReader instance (note that the T: ValueReader syntax here is what's called a "context bound"):
import net.ceedubs.ficus.Ficus._
import net.ceedubs.ficus.readers.ValueReader
import com.typesafe.config.{ Config, ConfigFactory }

def loadConfiguration[T: ValueReader]: T = {
  val config: Config = ConfigFactory.load()
  config.as[T]
}
This specifies that T must have a ValueReader instance (which allows us to use .as[T]) but says nothing else about T, or about where its ValueReader instance needs to come from.
The person calling this method with a concrete type MyType then has several options. Ficus provides instances that are automatically available everywhere for many standard library types, so if MyType is e.g. Int, they're all set:
scala> ValueReader[Int]
res0: net.ceedubs.ficus.readers.ValueReader[Int] = net.ceedubs.ficus.readers.AnyValReaders$$anon$2@6fb00268
If MyType is a custom type, then either they can manually define their own ValueReader[MyType] instance, or they can import one that someone else has defined, or they can use generic derivation (which is what ArbitraryTypeReader does).
The key point here is that the type class pattern allows you as the author of a generic method to specify the operations you need, without saying anything about how those operations will be defined for a concrete type. You just write T: ValueReader, and your caller imports ArbitraryTypeReader as needed.
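For example, here is a hedged caller-side sketch (MyConfig is a hypothetical case class) showing derivation happening where the concrete type is known:

import net.ceedubs.ficus.Ficus._
import net.ceedubs.ficus.readers.ArbitraryTypeReader._

// hypothetical configuration class, assumed to match keys in application.conf
case class MyConfig(host: String, port: Int)

// The ArbitraryTypeReader import lets the macro derive a ValueReader[MyConfig]
// at this call site, where the concrete shape of MyConfig is visible.
val conf: MyConfig = loadConfiguration[MyConfig]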

How to handle different package names in different versions?

I have a 3rd party library with package foo.bar
I normally use it as:
import foo.bar.{Baz => MyBaz}

object MyObject {
  val x = MyBaz.getX // some method defined in Baz
}
The new version of the library has renamed the package from foo.bar to newfoo.newbar. I now have another version of my code with a slight change, as follows:
import newfoo.newbar.{Baz => MyBaz}

object MyObject {
  val x = MyBaz.getX // some method defined in Baz
}
Notice that only the first import is different.
Is there any way I can keep the same version of my code and still switch between different versions of the 3rd party library as and when needed?
I need something like conditional imports, or an alternative way.
The other answer is on the right track but doesn't really get you all the way there. The most common way to do this kind of thing in Scala is to provide a base compatibility trait that has different implementations for each version. In my little abstracted library, for example, I have the following MacrosCompat for Scala 2.10:
package io.travisbrown.abstracted.internal

import scala.reflect.ClassTag

private[abstracted] trait MacrosCompat {
  type Context = scala.reflect.macros.Context

  def resultType(c: Context)(tpe: c.Type)(implicit
    tag: ClassTag[c.universe.MethodType]
  ): c.Type = {
    import c.universe.MethodType

    tpe match {
      case MethodType(_, res) => resultType(c)(res)
      case other => other
    }
  }
}
And this one for 2.11:
package io.travisbrown.abstracted.internal

import scala.reflect.ClassTag

private[abstracted] trait MacrosCompat {
  type Context = scala.reflect.macros.whitebox.Context

  def resultType(c: Context)(tpe: c.Type): c.Type = tpe.finalResultType
}
And then my classes, traits, and objects that use the macro reflection API can just extend MacrosCompat and they'll get the appropriate Context and an implementation of resultType for the version we're currently building (this is necessary because of changes to the macros API between 2.10 and 2.11).
(This isn't originally my idea or pattern, but I'm not sure who to attribute it to. Probably Eugene Burmako?)
If you're using SBT, there's special support for version-specific source trees—you can have a src/main/scala for your shared code and e.g. src/main/scala-2.10 and src/main/scala-2.11 directories for version-specific code, and SBT will take care of the rest.
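If your sbt version doesn't add those directories automatically, a build.sbt sketch along these lines is the usual approach (treat the exact keys as an assumption about your build setup):

// pick up src/main/scala-2.10 or src/main/scala-2.11 depending on the
// Scala version currently being cross-built
unmanagedSourceDirectories in Compile +=
  (sourceDirectory in Compile).value / ("scala-" + scalaBinaryVersion.value)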
You can try to use type aliases:
package myfoo

object mybar {
  type MyBaz = newfoo.newbar.Baz
  // val MyBaz = newfoo.newbar.Baz // if Baz is a case class/object, it needs to be aliased twice: as a type and as a value
}
Then you can simply import myfoo.mybar._ and change only the mybar object to switch to a different version of the library.
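A usage sketch, assuming Baz is an object (the MyBaz.getX call in the question suggests it is), in which case a value alias is enough and client code never mentions the library package directly:

package myfoo

object mybar {
  // value alias; point this at foo.bar.Baz or newfoo.newbar.Baz as needed
  val MyBaz = newfoo.newbar.Baz
}

// client code stays identical across library versions:
import myfoo.mybar._

object MyObject {
  val x = MyBaz.getX
}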

Why is it illegal to put package-level defs/types/implicits at the root-level of a file?

This question has been bugging me ever since I started using Scala. So here's my trail of thought:
- Scala doesn't have the requirement, which Java has, that everything has to be in a class—this is a good thing!
- Scala has functions, i.e. methods without a class/object
- packages are objects, so if you define a class in a package, it'll go into the package object
- some things have to be put into package objects explicitly, whereas some things don't—WHY?
I think it's just ugly to have to write:
package foo.bar

object `package` {
  type Foo = Bar
  def fact(n: Int): Int = ???
}
instead of just
package foo.bar

type Foo = Bar
def fact(n: Int): Int = ???
So: does anybody know when/why it was decided that it's better to require explicit wrapping in (the otherwise implicit) package objects? I constantly find myself annoyed by the extra level of indentation and boilerplate introduced by the obscure object `package` declaration, which is annoying for the same reason this is annoying in Java:
class MyMath {
  public static int fact(int n) { ... }
}

How can I add new methods to a library object?

I've got a class from a library (specifically, com.twitter.finagle.mdns.MDNSResolver). I'd like to extend the class (I want it to return a Future[Set], rather than a Try[Group]).
I know, of course, that I could sub-class it and add my method there. However, I'm trying to learn Scala as I go, and this seems like an opportunity to try something new.
The reason I think this might be possible is the behavior of JavaConverters. The following code:
class Test {
  var lst: Buffer[Nothing] = (new java.util.ArrayList()).asScala
}
does not compile, because there is no asScala method on Java's ArrayList. But if I import some new definitions:
class Test {
  import collection.JavaConverters._
  var lst: Buffer[Nothing] = (new java.util.ArrayList()).asScala
}
then suddenly there is an asScala method. So that looks like the ArrayList class is being extended transparently.
Am I understanding the behavior of JavaConverters correctly? Can I (and should I) duplicate that methodology?
Scala supports something called implicit conversions. Look at the following:
val x: Int = 1
val y: String = x
The second assignment does not work, because String is expected, but Int is found. However, if you add the following into scope (just into scope, can come from anywhere), it works:
implicit def int2String(x: Int): String = "asdf"
Note that the name of the method does not matter.
What is usually done is called the pimp-my-library pattern:
class BetterFoo(x: Foo) {
  def coolMethod() = { ... }
}
implicit def foo2Better(x: Foo) = new BetterFoo(x)
That allows you to call coolMethod on any Foo. This is used so often that, since Scala 2.10, you can write:
implicit class BetterFoo(x: Foo) {
  def coolMethod() = { ... }
}
which does the same thing but is obviously shorter and nicer.
So you can do:
implicit class MyMDNSResolver(x: com.twitter.finagle.mdns.MDNSResolver) {
  def awesomeMethod = { ... }
}
And you'll be able to call awesomeMethod on any MDNSResolver, if MyMDNSResolver is in scope.
This is achieved using implicit conversions; this feature allows you to automatically convert one type to another when a method that isn't defined on the original type is called.
The pattern you're describing in particular is referred to as "enrich my library", after an article Martin Odersky wrote in 2006. It's still an okay introduction to what you want to do: http://www.artima.com/weblogs/viewpost.jsp?thread=179766
The way to do this is with an implicit conversion. These can be used to define views, and their use to enrich an existing library is called "pimp my library".
I'm not sure if you need to write a conversion from Try[Group] to Future[Set], or you can write one from Try to Future and another from Group to Set, and have them compose.
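For the Try-to-Future half specifically, here is a hedged sketch using Twitter's util library; Future.const lifts an already-completed Try into a Future (the Group-to-Set half depends on the actual MDNSResolver API, so it isn't shown):

import com.twitter.util.{Future, Try}

object FutureConversions {
  // enrichment that lifts any completed Try into a Future
  implicit class TryToFuture[A](val underlying: Try[A]) extends AnyVal {
    def asFuture: Future[A] = Future.const(underlying)
  }
}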

Automatically make getters for class parameters (to avoid case classes)?

In Scala, if we have
class Foo(bar: String)

We can create a new object but cannot access bar:

val foo = new Foo("Hello")
foo.bar // gives error
However, if we declare Foo as a case class, it works:

case class Foo(bar: String)

val foo = Foo("hello")
foo.bar // works
I am forced to make many of my classes case classes because of this. Otherwise, I have to write boilerplate code to access bar:

class Foo(bar: String) {
  val getbar = bar
}
So my questions are:
- Is there any way to "fix" this without using case classes or boilerplate code?
- Is using case classes in this context a good idea? (Or what are the disadvantages of case classes?)
I guess the second one deserves a separate question.
Just use the val keyword in the constructor to make the parameter publicly accessible:

class Foo(val bar: String)
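With that change, the failing example from the question compiles:

val foo = new Foo("Hello")
foo.bar // works now: bar is a public val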
As for your question: if this is the only feature you need, don't use case classes; just use a val parameter.
However, it would be good to know why you think case classes are not a good fit.
In case classes all constructor arguments are public by default, whereas in a plain class they're all private. But you can tune this behaviour:
scala> class Foo(val bar: String, baz: String)
defined class Foo

scala> new Foo("public", "private")
res0: Foo = Foo@31d5e2

scala> res0.bar
res1: String = public

scala> res0.baz
<console>:10: error: value baz is not a member of Foo
              res0.baz
And even like this:

class Foo(private[mypackage] val bar: String) {
  // bar will be visible only to code in `mypackage`
}
For case classes (thanks to @JamesIry):
case class Bar(`public`: String, private val `private`: String)
You can use the @BeanProperty annotation to automatically generate Java-like getters:

import scala.reflect.BeanProperty

case class Foo(@BeanProperty bar: String)

Now Foo has a getBar method that returns the bar value.
Note though that this is only useful if you have a good reason to use Java-like getters (a typical reason being that you need your class to be a proper Java bean, so it works with Java libraries that expect beans and use reflection to access their properties).
Otherwise, just access the bar value directly; this is "the Scala way".
See http://www.scala-lang.org/api/current/index.html#scala.reflect.BeanProperty
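A quick sketch contrasting the two access styles:

val foo = Foo("hello")
foo.bar    // idiomatic Scala access
foo.getBar // Java-style getter generated by @BeanProperty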