Is it possible to automatically load an implicit def if included as a dependency (no importing) - scala

I'm working on a commons library that includes a config library (https://github.com/kxbmap/configs).
This config library uses "kebab-case" by default when parsing configuration files, and this can be overridden by an implicit def in scope.
However, I don't want to force that on the users of my commons library when they get access to the config library transitively.
So without me forcing users to import this implicit, like:
    import CommonsConfig._
can I somehow override the naming strategy via an implicit that gets into scope merely by having my commons library on the classpath? I'm guessing no, but I just have to ask :)
So if not, is someone aware of another approach?
kxbmap/configs isn't well documented enough to explain this.
Thanks!

Implicits are resolved at compile time, so they cannot magically become present when a dependency is included and disappear when it isn't.
The closest thing would be something like:
main library:

    package my.library
    // classes, traits, objects, but no package object

extension:

    package my

    package object library {
      // implicits
    }

user's code:

    import my.library._
However, that would only work if there were no package object in the main library; only one extension library could pull off this trick at a time (Scala doesn't allow more than one package object per package), and the user would still have to import everything available in the package, always.
In theory you could create a wrapper around all your deps, with your own configs:
    final case class MyLibConfig(configsCfg: DerivationConfig)

    object MyLibConfig {
      implicit val default: MyLibConfig = ...
    }
and then derive using this wrapper:
    def parseThings(args...)(implicit myLibConfig: MyLibConfig) = {
      implicit val config: DerivationConfig = myLibConfig.configsCfg
      // derivation
    }
but in practice it would not work: parseThings would have to already know the target type, or would need the already-derived implicits passed in. Unless you are up for writing your own derivation methods... avoid it.
Having users import all the relevant stuff with a single import is the most maintainable strategy. E.g. you could do the same thing the authors did: add type aliases for all the types you use, do the same for companion objects, and finally put some implicits there:
    package my

    package object library {
      type MyType = some.library.Type
      val MyType = some.library.Type
      implicit val derivationConfig: DerivationConfig = ...
    }
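For illustration, here is a minimal, self-contained sketch of that "one wildcard import" approach. NamingStrategy, CommonsConfig, and currentStrategy are all hypothetical stand-ins for the real configs types, just to show the mechanics:

```scala
// Stand-in for the library's naming-strategy type (hypothetical).
final case class NamingStrategy(name: String)

// Plays the role of the commons library's package object.
object CommonsConfig {
  // The implicit that users get with a single wildcard import.
  implicit val camelCase: NamingStrategy = NamingStrategy("camelCase")
}

// Plays the role of a library method that derives using whatever
// NamingStrategy is implicitly in scope.
def currentStrategy(implicit ns: NamingStrategy): String = ns.name

// User code: one import brings the implicit (and any aliases) into scope.
import CommonsConfig._

println(currentStrategy) // prints "camelCase"
```

The user still has to write one import, but it is a single, predictable line rather than a grab bag of implicits scattered across the library.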

Related

How to use JSImport when writing scalajs facade for javascript modules

I have written a facade using JSImport, and it works. Unfortunately, I arrived at the solution through trial and error, and I don't fully understand why this particular solution works but others I tried did not.
Background: I'm starting with a working project, built with sbt: a single-page application that implements the client side with scala.js and the server side with Scala and the Play framework. The javascript libraries were packaged as webjars and bundled into the client js file using the sbt jsDependencies setting. I wanted to implement some new features which required upgrading a library, which in turn required upgrading some javascript libs that were only available in npm format. So now I am including all the javascript dependencies for the client app using npmDependencies with the scalajs-bundler plugin. This broke some of the scalajs facades, leading to my question.
I'll use the facade to log4javascript as an example for this question.
The variable log4javascript is the top level object used to access the rest of the api.
When the js libs were included as web jars, this is how the facade to log4javascript was implemented:
    @js.native
    @js.annotation.JSGlobalScope
    object Log4JavaScript extends js.Object {
      val log4javascript: Log4JavaScript = js.native
    }
After the change to npm:
    import scala.scalajs.js.annotation.JSImport.Namespace

    @JSImport("log4javascript", Namespace)
    @js.native
    object Log4JavaScript extends js.Object {
      def resetConfiguration(): Unit = js.native
      def getLogger(name: js.UndefOr[String]): JSLogger = js.native
      ...
    }
Following the scala.js docs for importing modules, I expected the object name (Log4JavaScript in this case) would have to match the exported symbol name for the binding to work. However, the top-level symbol in log4javascript.js is log4javascript. After experimenting, it seems the Scala object name makes no difference to the binding: it binds correctly no matter what I name the top-level Scala object.
Can someone explain what relationship exists, if any, between the scala object/class/def/val names and the names in the javascript module when using the 'Namespace' arg to JSImport?
According to the scala.js docs, it seems I should be able to provide the actual name of the js object (I also tried "Log4JavaScript"):
    @JSImport("log4javascript", "log4javascript")
    @js.native
    object SomeOtherName extends js.Object {
      def resetConfiguration(): Unit = js.native
      def getLogger(name: js.UndefOr[String]): JSLogger = js.native
      ...
    }
However, this fails to bind. I will get a runtime error when I try to access any of the member functions.
    Log4JavaScript.resetConfiguration()

Uncaught TypeError: Cannot read property 'resetConfiguration' of undefined
Can someone explain why this doesn't work?
log4javascript also defines some classes inside the scope of log4javascript. When the lib was included as a web jar the definition looked like:
    @js.native
    @JSGlobal("log4javascript.AjaxAppender")
    class AjaxAppender(url: String) extends Appender {
      def addHeader(header: String, value: String): Unit = js.native
    }
After switching to npm I had to put the class definition inside the top level object:
    @js.native
    trait Appender extends js.Object {
      ...
    }

    @JSImport("log4javascript", "log4javascript")
    @js.native
    object Log4JavaScript extends js.Object {
      ...
      class AjaxAppender(url: String) extends Appender {
        def addHeader(name: String, value: String): Unit = js.native
      }
      ...
    }
This seems sensible, but from the scala.js docs it seems like it should have been possible to define it this way, outside of the top-level object:
    @JSImport("log4javascript", "log4javascript.AjaxAppender")
    @js.native
    class AjaxAppender(url: String) extends Appender {
      def addHeader(name: String, value: String): Unit = js.native
    }
However, this also fails to bind. Could someone explain the correct way to define the class as above? Or is the definition nested inside the Log4JavaScript object the only correct way to do it?
Can someone explain what relationship exists, if any, between the scala object/class/def/val names and the names in the javascript module when using the 'Namespace' arg to JSImport?
This is explained in this part of the Scala.js documentation. The name of the Scala object defining the facade does not matter. What matters are the parameters of the @JSImport annotation: the first one indicates which module to import from, and the second one indicates what to import.
In your case, the log4javascript module is in the log4javascript.js file, in the log4javascript package directory. So, your first parameter should be:
    @JSImport("log4javascript/log4javascript.js", ...)
    object Log4JavaScript ...
However, log4javascript is defined as an npm module whose main file refers to the log4javascript.js file. This means that you can just use the package directory name:
    @JSImport("log4javascript", ...)
    object Log4JavaScript ...
(See this article for more information on how Node.js resolves modules.)
The second parameter of the @JSImport annotation indicates what to import. In your case, you want to import the whole module, not just a member of it, so you want to use Namespace:
    @JSImport("log4javascript", Namespace)
    object Log4JavaScript ...
This corresponds to the following EcmaScript import statement:
    import * as Log4JavaScript from 'log4javascript'
Note that, although the Scala object name (Log4JavaScript, here) does not matter, the names of its members do matter, as explained in this part of the Scala.js documentation.
According to the scala.js docs, it seems I should be able to provide the actual name of the js object (I also tried "Log4JavaScript")
    @JSImport("log4javascript", "log4javascript")
    ...
...
However, this fails to bind. I will get a runtime error when I try to access any of the member functions.
When you write that, you try to access the log4javascript member of the log4javascript module. But that module does not have such a member.
it should have been possible to define it this way outside of the top level object
    @JSImport("log4javascript", "log4javascript.AjaxAppender")
    ...
However, this also fails to bind.
Again, this means “import the log4javascript.AjaxAppender member from the log4javascript module”, but that module does not have such a member. The following should work:
    @JSImport("log4javascript", "AjaxAppender")

Is exporting third party library dependencies good programming practice?

I am using Intellij 14 and in the module settings, there is an option to export dependencies.
I noticed that when I write objects that extend traits, I need to select export in the module settings when other modules try to use these objects.
For example,
    object SomeObj extends FileIO
would require me to export the FileIO dependency.
However, if I write a companion class that creates a new instance when the object is called, the exporting is no longer necessary.
    object SomeObject {
      private val someObject = new SomeObject()
      def apply() = someObject
    }

    private[objectPkg] class SomeObject() extends FileIO {}
This code is more verbose and is kind of a hack around the singleton pattern in Scala. Is it good practice to export third-party dependencies with your module? If not, is my pattern the typical Scala solution?
This all comes down to general code design principles. Basically, if you may switch the underlying third-party library later, or your system must be flexible enough to be ported to other libs, then hiding the implementation behind a facade is a must.
Often there is a ready-made set of interfaces in Java/Scala that the third-party library implements, and you can use those as part of your facade to the rest of the system; this is the typical Java way. If that is not the case, you will need to derive the interfaces yourself. Whether that is worth the effort is something everyone has to estimate in context.
As for your case: keep in mind that in Java/Scala you export names, and if you use your class (which extends FileIO) in any way outside its defining code, that class is publicly accessible and its type is exported/leaked as well. Scala should raise a compile error if a private class escapes its visibility scope (which may be the case in your second version of SomeObject).
Consider this example: I often use the typesafe config library in my applications. It has convenient methods, but I typically leave space for a possible separation (or rather my own extension):
    package object config {
      object Config {
        private val conf: TypeSafeConfig = ConfigFactory.load()
        def toTypeSafeConfig: TypeSafeConfig = conf
      }

      @inline implicit def confToTypeSafeConfig(conf: Config.type): TypeSafeConfig =
        conf.toTypeSafeConfig
    }
The implicit conversion just allows me to call all TypeSafeConfig methods on my Config, and it has a bunch of convenient methods. Theoretically, in the future I could remove my implicit and implement the methods I use directly in the Config object, though I can hardly imagine why I would spend time on that. This is an example of a leaked implementation that I don't consider problematic.
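To make the shape of that pattern concrete, here is a runnable sketch with a fake ThirdParty class standing in for TypeSafeConfig. All names here (ThirdParty, getString's behavior, confToThirdParty) are stand-ins, not the real typesafe-config API:

```scala
import scala.language.implicitConversions

// Stand-in for the third-party library's class (e.g. TypeSafeConfig).
class ThirdParty {
  def getString(key: String): String = s"value-of-$key"
}

// The facade: a single module owning the third-party instance.
object Config {
  private val underlying = new ThirdParty
  def toThirdParty: ThirdParty = underlying
}

// The conversion lets all ThirdParty methods be called on Config directly.
implicit def confToThirdParty(c: Config.type): ThirdParty = c.toThirdParty

println(Config.getString("db.host")) // prints "value-of-db.host"
```

If the library ever needs to be swapped out, only Config and the conversion have to change; call sites keep reading `Config.getString(...)`.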

How to (properly) enrich the standard library?

I would like to define an implicit conversion from Iterator[T] to a class that I have defined: ProactiveIterator[A].
The question isn't really how to do it but how to do it properly, i.e. where to place the method so that it is as transparent and unobtrusive as possible; ideally it would work like the implicit conversion from String to StringOps in scala.Predef. If the conversion were from a class in the library to some other class, it could be defined inside that class, but AFAIK that's not possible here.
So far I have considered to add an object containing these conversions, similarly to JavaConversions, but better options may be possible.
You don't really have much of a choice. All implicits must be contained within some sort of object, and imported with a wildcard import (you could import them individually, but I doubt you want that).
So you'll have some sort of implicits object:
    package foo.bar

    object Implicits {
      implicit class ProactiveIterator[A](i: Iterator[A]) {
        ...
      }
    }
Then you must explicitly import it wherever you use it:
    import foo.bar.Implicits._
In my opinion, this is a good thing. Someone reading the code might not understand where your pimped methods are coming from, so the explicit import is very helpful.
You can similarly place your implicits within a package object. You would have to import them the same way into other namespaces, but they would be available to classes within the same package.
For example, using the following, anything within foo.bar will have this implicit class available:
    package foo

    package object bar {
      implicit class ProactiveIterator[A](i: Iterator[A]) {
        ...
      }
    }
Elsewhere you would import foo.bar._ (which may or may not be as clean, depending on what's in bar).
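As a runnable sketch of such an enrichment (the method firstOrElse is made up here, standing in for whatever ProactiveIterator actually adds):

```scala
object Implicits {
  // Enrichment class: wraps an Iterator and adds one extra method.
  implicit class ProactiveIterator[A](it: Iterator[A]) {
    // Hypothetical method: the next element if there is one, else a default.
    def firstOrElse(default: A): A = if (it.hasNext) it.next() else default
  }
}

import Implicits._

// After the import, the method appears to live on Iterator itself.
println(Iterator(1, 2, 3).firstOrElse(0)) // prints 1
println(Iterator.empty[Int].firstOrElse(0)) // prints 0
```

The explicit import at the call site is exactly the breadcrumb that tells a reader where firstOrElse comes from.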

Use function without explicit import in Scala

I like Scala's sys.error function - but I want to distinguish two cases: internal errors (e.g. database problem) and user errors (invalid input).
I tried extending Scala - but it doesn't seem to work:
    package scala

    class UserException(msg: String) extends RuntimeException(msg)

    package object err {
      def internal(message: String): Nothing =
        sys.error(message)

      def usr(message: String): Nothing =
        throw new UserException(message)
    }
How should I define err.usr() to be able to use it without an explicit import?
You can't: only scala.Predef is imported by default, and it's not user-extensible in any useful way.
You could put these definitions in the package object of your own package hierarchy. Then everything in that package will see them without an import.
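Here is a single-file sketch of the two error kinds. A plain object stands in for the package object (which needs a real package directory), so outside the package you would still write one import:

```scala
class UserException(msg: String) extends RuntimeException(msg)

// In a real project this would be `package object err`, so code inside
// the package sees these helpers without any import.
object err {
  def internal(message: String): Nothing = sys.error(message)
  def usr(message: String): Nothing = throw new UserException(message)
}

// The two cases stay distinguishable at the catch site.
val kind =
  try { err.usr("invalid input") }
  catch {
    case _: UserException    => "user error"
    case _: RuntimeException => "internal error"
  }

println(kind) // prints "user error"
```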

In Scala, how can I define a companion object for a class defined in Java?

I'd like to add implicit conversions to Java classes generated by a modeling tool. So I want to add them to the companion object of those classes, so that the compiler automatically finds them. But I cannot add them in a separate file, because the companion has to be defined in the same file. Is there anything I can do about this?
Of course, I can define all my implicit conversions in another object and then bring it into scope, but this requires an extra import. Any other solution?
You can define your own companion object of course, which I often do in my own project-specific Predef-like arrangement. For example:
    object domain {
      type TimeUnit = java.util.concurrent.TimeUnit

      object TimeUnit {
        def valueOf(s: String) = java.util.concurrent.TimeUnit.valueOf(s)
        val Millis = java.util.concurrent.TimeUnit.MILLISECONDS
        // etc.
      }
    }
Then this can be used:
    import my.domain._

    val tu: TimeUnit = TimeUnit.valueOf("MILLISECONDS")
But note that your domain.TimeUnit is a module (i.e. a Scala object), not a true companion object.
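Since the original goal was implicit conversions, note that enrichments for the Java class can live in the same shadow object, so one wildcard import brings in both the alias and the implicits. A small runnable sketch (RichTimeUnit and millisPerUnit are made-up names):

```scala
object domain {
  // Alias so user code never writes the Java package name.
  type TimeUnit = java.util.concurrent.TimeUnit

  // Enrichment for the Java class. Because this is not a true companion,
  // it is only found after the wildcard import below.
  implicit class RichTimeUnit(tu: TimeUnit) {
    def millisPerUnit: Long = tu.toMillis(1)
  }
}

import domain._

println(java.util.concurrent.TimeUnit.SECONDS.millisPerUnit) // prints 1000
```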
With the Scala compiler as it stands now there is no way to define companion objects other than by putting them in the same file. The best you can do is a non-companion object with the same package and name and an extra import.
If you can think of a good way to create post-hoc companionship without breaking assumptions about encapsulation please come post on http://groups.google.com/group/scala-debate because it would clearly be a very useful feature.