I am using Intellij 14 and in the module settings, there is an option to export dependencies.
I noticed that when I write objects that extend traits, I need to select export in the module settings before other modules can use these objects.
For example,
object SomeObj extends FileIO
would require me to export the FileIO dependency.
However, if I write a companion class that creates a new instance when the object is called, the exporting is no longer necessary.
object SomeObject {
private val someObject = new SomeObject()
def apply() = someObject
}
private[objectPkg] class SomeObject() extends FileIO {}
This code is more verbose and feels like a hack around the singleton pattern in Scala. Is it good practice to export third-party dependencies with your module? If not, is my pattern the typical solution in Scala?
It all comes down to code design principles in general. Basically, if you may switch the underlying third-party library later, or your system must be flexible enough to be ported to other libraries, then hiding the implementation behind some facade is a must.
Often there is a ready-made set of interfaces in Java/Scala that the third party implements, and you can just use those as part of your facade to the rest of the system; overall this is the Java way. If that is not the case, you will need to derive the interfaces yourself. Whether that is worth it is something everyone has to judge for themselves, in context.
As for your case: keep in mind that in Java/Scala you export names, and if you use your class (which extends FileIO) in any way outside its defining code, that class is publicly accessible and its type is exported/leaked outside as well. Scala should raise a compile error if a private class escapes its visibility scope (so in your second version of SomeObject that may be the case).
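For instance, here is a minimal sketch of such a facade in terms of the question's FileIO (all other names are illustrative): the third-party trait stays an implementation detail and never appears in public signatures.
package objectPkg

// FileIO stands in for the third-party trait from the question.
trait Storage {                            // your facade: all that callers see
  def read(path: String): String
}

private[objectPkg] class FileStorage() extends Storage with FileIO {
  def read(path: String): String = ???     // delegate to FileIO internals here
}

object Storage {
  def apply(): Storage = new FileStorage() // FileIO never leaks into the API
}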
Consider this example: I often use the Typesafe Config library in my applications. It has convenient methods, but I typically leave room for a possible separation later (or rather for my own extension):
import com.typesafe.config.ConfigFactory

package object config {
type TypeSafeConfig = com.typesafe.config.Config // alias for the third-party type
object Config {
private val conf: TypeSafeConfig = ConfigFactory.load()
def toTypeSafeConfig: TypeSafeConfig = conf
}
@inline implicit def confToTypeSafeConfig(conf: Config.type): TypeSafeConfig = conf.toTypeSafeConfig
}
The implicit conversion just lets me call all TypeSafeConfig methods on my Config, and it has a bunch of convenient methods. In theory I could later remove my implicit and implement the methods I used directly in the Config object, but I can hardly imagine why I would spend the time on that. This is an example of a leaked implementation that I don't consider problematic.
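For instance, with that implicit in scope, any TypeSafeConfig method can be called on the Config object directly (the key name here is made up):
import config._

val appName: String = Config.getString("app.name") // resolved via confToTypeSafeConfig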
Related
I'm working on a commons library that includes a config library (https://github.com/kxbmap/configs).
This config library uses "kebab-case" when parsing configuration files by default and it can be overridden by an implicit def in scope.
However, I don't want to force that on the users of my commons library when they get access to the config library transitively.
So without me forcing users to import this implicit, like:
import CommonsConfig._
can I somehow override the naming strategy via an implicit that gets into scope just by having my commons library on the classpath? I'm guessing no, but I just have to ask :)
So if not, is someone aware of another approach?
kxbmap/configs isn't documented well enough to explain this.
Thanks!
Implicits are resolved at compile time, so they cannot magically become available when something is on the classpath and disappear when it isn't.
The closest thing would be something like:
main library
package my.library
// classes, traits, objects but no package object
extension
package my
package object library {
// implicits
}
user's code
import my.library._
However, that would only work if there were no package object in the main library; only one extension library can pull off this trick at a time (Scala doesn't tolerate more than one package object per package), and the user would always have to import everything available in the package.
In theory you could create a wrapper around all your deps, with your own configs:
final case class MyLibConfig(configsCfg: DerivationConfig)
object MyLibConfig {
implicit val default: MyLibConfig = ...
}
and then derive using this wrapper
def parseThings(args...)(implicit myLibConfig: MyLibConfig) = {
implicit val config: DerivationConfig = myLibConfig.configsCfg
// derivation
}
but in practice it would not work (parseThings would have to know the target type already, or would need the already-derived implicits passed in). Unless you are up for writing your own derivation methods... avoid it.
Some way of making the user import all the relevant stuff is the most maintainable strategy. E.g. you could pull off the same thing the library's authors did: add type aliases for all the types you use, do the same for the companion objects, and finally put some implicits there:
package my
package object library {
type MyType = some.library.Type
val MyType = some.library.Type
implicit val derivationConfig: DerivationConfig = ...
}
I created a class extending scala.Immutable:
class SomeThing(var string: String) extends Immutable {
override def toString: String = string
}
As I expected, the Scala compiler should help me prevent changing the state of class SomeThing. But when I run this test
"Test change state of immutable interface" should "not allow" in {
val someThing = new SomeThing("hello")
someThing.string = "hello 1"
println(someThing)
}
The result is hello 1, and the Scala compiler doesn't emit any warning or error.
Why did they add the Immutable trait if it doesn't help us prevent mutation?
There are several aspects to this question.
1. A simple one is that the Scala compiler can't really ensure immutability, for various reasons. For example, the main target platform, the JVM, allows modifying even final fields via reflection. Another reason this is not enforceable is code like this:
/////////////////////////////////////////
//// library v1
package library
class LibraryData(val value:Int)
/////////////////////////////////////////
//// code that uses the library
package app
class UserData(val data:LibraryData) extends Immutable
/////////////////////////////////////////
//// library v2
package library
class LibraryData(var value:Int) //now change it to var!
Since the "library" is compiled independently of the "app" and doesn't even know about the existence of the "app", there is no point in time at which the compiler can catch the broken contract.
2. A more fundamental misunderstanding you seem to have is about what a trait does. In this context a trait (or "interface" in some other languages) represents a contract between the implementation and the user code about how the implementation can and should behave. However, not every kind of contract can be represented as a trait (at least not without making the code super-complicated). For example, for a mutable collection there is a contract that size should return the number of times add (or +=) has been called, but there is no way to represent such a contract as a trait beyond declaring that there are methods size and += with the corresponding signatures. On the other hand, for most contracts there is no way to force an implementation to follow the contract. For example, an implementation of size that always returns 0 technically matches the type signatures but clearly breaks the contract.
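For example, here is an implementation that type-checks against such an interface while clearly breaking the documented contract (a made-up sketch):
trait Counter {
  def add(x: Int): Unit // contract: size returns how many times add was called
  def size: Int
}

class BrokenCounter extends Counter {
  def add(x: Int): Unit = ()  // silently drops everything
  def size: Int = 0           // matches the signature, breaks the contract
}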
Similarly Immutable doc says:
A marker trait for all immutable data structures such as immutable collections.
So it is just a marker trait, which is one of the ways to work around contracts that can't really be represented as types. It says that whatever implements the trait claims to be an immutable object. Your code makes that claim but clearly breaks the contract, so technically it is your fault for not respecting it.
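Note that the compiler can enforce plain reassignment rules regardless of the marker trait: declare the field as a val instead of a var, and the mutation in your test no longer compiles.
class SomeThing(val string: String) extends Immutable {
  override def toString: String = string
}

val someThing = new SomeThing("hello")
// someThing.string = "hello 1"  // compile error: reassignment to val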
Starting with 2.10, -Xlint complains about classes defined inside of package objects. But why? Defining a class inside a package object should be exactly equivalent to defining the classes inside of a separate package with the same name, except a lot more convenient.
In my opinion, one of the serious design flaws in Scala is the inability to put anything other than a class-like entity (e.g. variable declarations, function definitions) at the top level of a file. Instead, you're forced to put them into a separate package object (often in package.scala), separate from the rest of the code they belong with, violating a basic programming rule: conceptually related code should be physically related as well. I don't see any reason why Scala couldn't conceptually allow anything at the top level that it allows at lower levels, with anything non-class-like automatically placed into the package object, so that users never have to worry about it.
For example, in my case I have a util package, and under it I have a number of subpackages (util.io, util.text, util.time, util.os, util.math, util.distances, etc.) that group heterogeneous collections of functions, classes and sometimes variables that are semantically related. I currently store all the various functions, classes, etc. in a package object sitting in a file called io.scala or text.scala or whatever, in the util directory. This works great and it's very convenient because of the way functions and classes can be mixed, e.g. I can do something like:
package object math {
// Coordinates on a sphere
case class SphereCoord(lat: Double, long: Double) { ... }
// great-circle distance between two points
def spheredist(a: SphereCoord, b: SphereCoord) = ...
// Area of rectangle running along latitude/longitude lines
def rectArea(topleft: SphereCoord, botright: SphereCoord) = ...
// ...
// ...
// Exact-decimal functions
class DecimalInexactError extends Exception
// Format floating point value in decimal, error if can't do exactly
def formatDecimalExactly(num: Double) = ...
// ...
// ...
}
Without this, I would have to split the code up inconveniently according to fun vs. class rather than by semantics. The alternative, I suppose, is to put them in a normal object -- kind of defeating the purpose of having package objects in the first place.
But why? Defining a class inside a package object should be exactly equivalent to defining the classes inside of a separate package with the same name,
Precisely. The semantics are (currently) the same, so if you favor defining a class inside a package object, there should be a good reason. But the reality is that there is at least one good reason not to (keep reading).
except a lot more convenient
How is that more convenient?
If you are doing this:
package object mypkg {
class MyClass
}
You can just as well do the following:
package mypkg {
class MyClass
}
You'll even save a few characters in the process :)
Now, a good and concrete reason not to go overboard with package objects is that while packages are open, package objects are not.
A common scenario would be to have your code dispatched among several projects, with each project defining classes in the same package. No problem here.
On the other hand, a package object is (like any object) closed (as the spec puts it, "There can be only one package object per package"). In other words, you will only be able to define a package object in one of your projects. If you attempt to define a package object for the same package in two distinct projects, bad things will happen, as you will effectively end up with two distinct versions of the same JVM class (in our case you would end up with two mypkg/package.class files). Depending on the case, you might end up with the compiler complaining that it cannot find something that you defined in the first version of your package object, or get a "bad symbolic reference" error, or potentially even a runtime error. This is a general limitation of package objects, so you have to be aware of it.
In the case of defining classes inside a package object, the solution is simple: don't do it (given that you won't gain anything substantial compared to just defining the class at the top level of the package).
For type aliases, vals and vars we don't have such a luxury, so in their case it is a matter of weighing whether the syntactic convenience (compared to defining them in a plain object) is worth it, and then taking care not to define duplicate package objects.
I have not found a good answer to why this semantically equivalent operation would generate a lint warning. Methinks this is a lint bug. The only thing that I have found that must not be placed inside a package object (vs inside a plain package) is an object that implements main (or extends App).
Note that -Xlint also complains about implicit classes declared inside package objects, even though they cannot be declared at package scope. (See http://docs.scala-lang.org/overviews/core/implicit-classes.html for the rules on implicit classes.)
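For concreteness, this is the flagged pattern: an implicit class must be nested inside some object, trait, or class, and a package object is the standard way to make one available package-wide (a minimal sketch, names made up):
package object util {
  implicit class RichInt(private val n: Int) extends AnyVal {
    def squared: Int = n * n  // enables 3.squared after `import util._`
  }
}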
I figured out a trick that allows for all the benefits of package objects without the complaints about deprecation. In place of
package object foo {
...
}
you can do
protected class FooPackage {
...
}
package object foo extends FooPackage { }
Works the same but no complaint. Clear sign that the complaint itself is bogus.
I get the coding, in that you basically provide an "object SomeClass" and a "class SomeClass", where the class is the class declaration and the object is a singleton, of which you cannot create an instance. So... my question is mostly about the purpose of the singleton object in this particular instance.
Is this basically just a way to provide class methods in Scala? Like + based methods in Objective-C?
I'm reading the Programming in Scala book and Chapter 4 just talked about singleton objects, but it doesn't get into a lot of detail on why this matters.
I realize I may be getting ahead of myself here and that it might be explained in greater detail later. If so, let me know. The book is reasonably good so far, but it has a lot of "in Java, you do this", and I have so little Java experience that I fear I'm missing some of the points. I don't want this to be one of those situations.
I don't recall reading anywhere on the Programming in Scala website that Java was a prerequisite for reading this book...
Yes, companion singletons provide an equivalent to Java's (and C++'s, C#'s, etc.) static methods.
(indeed, companion object methods are exposed via "static forwarders" for the sake of Java interop)
However, singletons go a fair way beyond this.
A singleton can inherit methods from other classes/traits, which can't be done with statics.
A singleton can be passed as a parameter (perhaps via an inherited interface)
A singleton can exist within the scope of a surrounding class or method, just as Java can have inner classes
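A minimal sketch of those points (all names here are made up):
trait Greeter {
  def greet(name: String): String
}

object EnglishGreeter extends Greeter {       // a singleton inheriting from a trait
  def greet(name: String) = s"Hello, $name"
}

def welcome(g: Greeter) = g.greet("world")    // a singleton passed as a parameter
welcome(EnglishGreeter)

class Session {
  object Counter {                            // a singleton scoped to an enclosing class
    private var n = 0
    def next(): Int = { n += 1; n }
  }
}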
It's also worth noting that a singleton doesn't have to be a companion, it's perfectly valid to define a singleton without also defining a companion class.
This helps make Scala a far more object-oriented language than Java (where static methods don't belong to an object), which is ironic, given that Scala is largely discussed in terms of its functional credentials.
In many cases we need a singleton to stand for a unique object in our software system.
Think about the solar system. We may have the following classes:
class Planet
object Earth extends Planet
object Sun extends Planet
object is a simple way to create a singleton; of course, it is also commonly used to hold class-level methods, like static methods in Java.
In addition to the given answers (and going in the same general direction as jilen), objects play an important role in Scala's implicit mechanism, e.g. allowing type-class-like behavior (as known from Haskell):
trait Monoid[T] {
def zero:T
def sum(t1:T, t2:T):T
}
def fold[T](ts:T*)(implicit m:Monoid[T]) = ts.foldLeft(m.zero)(m.sum(_,_))
Now we have a fold function, which "collapses" a number of Ts together, as long as there is an appropriate Monoid (something that has a neutral element and can somehow be "added" together) for T. In order to use this, we need only one instance of a Monoid for a given type T, which is the perfect job for an object:
implicit object StringMonoid extends Monoid[String] {
def zero = ""
def sum(s1:String, s2:String) = s1 + s2
}
Now this works:
println(fold("a","bc","def")) //--> abcdef
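And since the instance is resolved per type, one more object makes the very same fold work for Ints (a small extension of the example above):
implicit object IntMonoid extends Monoid[Int] {
  def zero = 0
  def sum(i1:Int, i2:Int) = i1 + i2
}

println(fold(1, 2, 3)) //--> 6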
So objects are very useful in their own right.
But wait, there is more! Companion objects can also serve as a kind of "default configuration" when extending their companion class:
trait Config {
def databaseName:String
def userName:String
def password:String
}
object Config extends Config {
def databaseName = "testDB"
def userName = "scott"
def password = "tiger"
}
So on the one hand you have the trait Config, which can be implemented by the user however she wants, and on the other hand there is a ready-made object Config for when you want to go with the default settings.
Yes, it is basically a way of providing class methods when used as a companion object.
Ok, I'll explain why I'm asking this question. I began reading the Lift 2.2 source code recently.
It helps if you happen to have read the Lift source code before.
In Lift, I found that inner classes and inner traits are used very heavily.
object Menu has 2 inner traits and 4 inner classes. object Loc has 18 inner classes, 5 inner traits, 7 inner objects.
There are tons of lines of code written like this, and I want to know why the author wrote it this way.
Is it because of the author's personal taste, or is it a powerful use of a language feature?
Is there any trade-off to this kind of usage?
Before 2.8, you had to choose between packages and objects. The problem with packages is that they cannot contain methods or vals on their own. So you have to put all those inside another object, which can get awkward. Observe:
object Encrypt {
private val magicConstant = 0x12345678
def encryptInt(i: Int) = i ^ magicConstant
class EncryptIterator(ii: Iterator[Int]) extends Iterator[Int] {
def hasNext = ii.hasNext
def next = encryptInt(ii.next)
}
}
Now you can import Encrypt._ and gain access to the method encryptInt as well as the class EncryptIterator. Handy!
In contrast,
package encrypt {
object Encrypt {
private[encrypt] val magicConstant = 0x12345678
def encryptInt(i: Int) = i ^ magicConstant
}
class EncryptIterator(ii: Iterator[Int]) extends Iterator[Int] {
def hasNext = ii.hasNext
def next = Encrypt.encryptInt(ii.next)
}
}
It's not a huge difference, but it makes the user import both encrypt._ and encrypt.Encrypt._ or have to keep writing Encrypt.encryptInt over and over. Why not just use an object instead, as in the first pattern? (There's really no performance penalty, since nested classes aren't actually Java inner classes under the hood; they're just regular classes as far as the JVM knows, but with fancy names that tell you that they're nested.)
In 2.8, you can have your cake and eat it too: call the thing a package object, and the compiler will rewrite the code for you so it actually looks like the second example under the hood (except the object Encrypt is actually called package internally), but behaves like the first example in terms of namespace--the vals and defs are right there without needing an extra import.
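For illustration, here is a sketch of the same Encrypt example reworked with a 2.8 package object (same names as above):
// in encrypt/package.scala
package object encrypt {
  private[encrypt] val magicConstant = 0x12345678
  def encryptInt(i: Int) = i ^ magicConstant
}

// in encrypt/EncryptIterator.scala
package encrypt

class EncryptIterator(ii: Iterator[Int]) extends Iterator[Int] {
  def hasNext = ii.hasNext
  def next = encryptInt(ii.next)   // the package object's members are in scope
}
A single import encrypt._ now brings in both encryptInt and EncryptIterator.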
Thus, projects that were started pre-2.8 often use objects to enclose lots of stuff as if they were a package. Post-2.8, one of the main motivations has been removed. (But just to be clear, using an object still doesn't hurt; it's more that it's conceptually misleading than that it has a negative impact on performance or whatnot.)
(P.S. Please, please don't try to actually encrypt anything that way except as an example or a joke!)
Putting classes, traits and objects inside an object is sometimes required when you want to use abstract type members, see e.g. http://programming-scala.labs.oreilly.com/ch12.html#_parameterized_types_vs_abstract_types
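A brief sketch of what that looks like (names are made up): a nested class can refer to an abstract type member of its enclosing trait, and an object pins the type down.
trait Observing {
  type Observer                         // abstract type member
  class Subject {                       // nested class using the abstract type
    private var observers = List.empty[Observer]
    def subscribe(o: Observer): Unit = { observers ::= o }
  }
}

object StringObserving extends Observing {
  type Observer = String => Unit        // the object fixes the type
}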
It can be both. Among other things, an instance of an inner class/trait has access to the variables of its enclosing instance, and inner classes have to be created through a parent instance, i.e. an instance of the outer type.
In other cases, it's probably just a way of grouping closely related things, as in your object example. Note that the trait LocParam is sealed, which means that all subclasses have to be in the same compilation unit/file.
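For instance (a made-up example), an inner instance reads its outer instance's members and must be created through that instance:
class Outer(val name: String) {
  class Inner {
    def describe = s"inner of $name"  // accesses the outer val
  }
}

val a = new Outer("a")
val i = new a.Inner                   // path-dependent: tied to instance a
println(i.describe)                   // prints: inner of a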
sblundy has a decent answer. One thing to add is that only with Scala 2.8 do you get package objects, which let you group similar things in a package namespace without making a completely separate object. For that reason I will be updating my Lift Modules proposal to use a package object instead of a simple object.