Scala: package object vs. singleton object within a package

I want to group a set of similar functions in a Scala library.
Here are two approaches I have seen elsewhere, and I want to understand the
differences between them.
Singleton object defined in a package
// src/main/scala/com/example/toplevel/functions.scala
package com.example.toplevel

object functions {
  def foo: Unit = { ... }
  def bar: Unit = { ... }
}
Package object
// src/main/scala/com/example/toplevel/package.scala
package com.example.toplevel

package object functions {
  def foo: Unit = { ... }
  def bar: Unit = { ... }
}
Comparison
As far as I can tell, the first approach requires explicitly importing
the functions object whenever you want to use its functions, while the package object approach allows anything in the functions package to access those methods without importing them.
That is, com.example.toplevel.functions.MyClass would have access to com.example.toplevel.functions.foo without an import.
Is my understanding correct?
If there are no classes defined within com.example.toplevel.functions,
it seems the approaches would be equivalent. Is this correct?

Answered by terminally-chill in a comment:
Yes, your understanding is correct. Anything defined in your package
object will be available without having to import it. If you use an
object, you will have to import it even within the same package.
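To make the difference concrete, here is a minimal sketch of client code under each approach; the Consumer class is purely illustrative and not part of the question:

// With the plain object: client code in the same package still needs an
// import (or a qualified call such as functions.foo).
package com.example.toplevel

import com.example.toplevel.functions._

class Consumer {
  def run(): Unit = foo
}

// With the package object: anything defined inside the package
// com.example.toplevel.functions sees foo and bar automatically.
package com.example.toplevel.functions

class Consumer {
  def run(): Unit = foo // no import needed
}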

Related

Loading external Scala scripts into a Scala file

I originally made scripts with many functions in two individual Scala worksheets. I got them working and now want to tie these individual scripts together by importing and using them in a third file. From what I have read, you cannot simply import external scripts; you must first make them into a class and put them into a package. So I tried that, but I still couldn't import it.
I know this may be a bit basic for this site, but I'm struggling to find much Scala documentation.
I think my problem might stem from a misunderstanding of how packages work. The example program below might help.
adder.scala
package adder

class adder {
  def add_to_this(AA: Int): Int = {
    var BB = AA + 1
    return BB
  }
}
build.scala
package builder

class build {
  def make_numbers() {
    var a = 0
    var b = 0
  }
}
main.sc
import adder
import builder

object main {
  adder.adder.add_to_this(10)
}
The errors I get are:
object is not a member of package adder
object is not a member of package builder
Classes in Scala differ slightly from classes in Java. If you need something like a singleton, you'll want to use object instead of class, i.e.:
package com.example

object Main extends App {
  object Hide {
    object Adder {
      def addToThis(AA: Int): Int = AA + 1
    }
  }

  object Example {
    import com.example.Main.Hide.Adder
    def run(): Unit = println(Adder.addToThis(10))
  }

  Example.run()
}
Consider objects to be like packages/modules that are also regular values. You can import an object by its full path, i.e. com.example.Main.Hide.Adder; you can also import specific contents of an object by adding .{addToThis}, or import everything from an object by adding ._ after it.
Note that classes, traits and case classes cannot be used as objects; you can't do anything with them unless you have an instance, because there is no static modifier in Scala.
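Applied to the question's layout, a rough sketch could look like the following; the object names and return types here are illustrative, not the asker's originals:

// adder.scala
package adder

object Adder {
  def addToThis(AA: Int): Int = AA + 1
}

// build.scala
package builder

object Builder {
  def makeNumbers(): (Int, Int) = (0, 0)
}

// main.scala
import adder.Adder
import builder.Builder

object Main extends App {
  println(Adder.addToThis(10)) // prints 11
  println(Builder.makeNumbers())
}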

Pass parameters to singleton object in Scala

I've inherited a Scala project that has to be extended, and instead of one monolithic monster it has to be split: some code needs to become a library that's used by all the other components, and there are a few applications that may be included at some later point.
object Shared {
  // vals, defs
}
=====
import Shared._

object Utils1 {
  // vals, defs
}
=====
import Shared._

object Utils2 {
  // vals, defs
}
=====
import Shared._
import Utils1._

class Class1(val ...) {
  // stuff
}
=====
import Shared._
import Utils2._

class Class2(val ...) {
  // more stuff
}
etc.
The problem is that Utils1 and Utils2 (and many other objects and classes) use the values from Shared (the singleton), and that Shared now has to be instantiated, because one of the key things that happens in Shared is setting the application name (through SparkConf) as well as reading configuration files with database connection information.
The idea was to have a multi-project build where I can just pick which component needs to be recompiled and do it. Since the code in Utils is shared by pretty much all applications that currently exist and will come, it's nice to keep it together in one large project, or at least so I thought. It's hard to publish the common code to a local repository because we have no local repository (yet) and getting permission to have one is difficult.
The Utils singletons are needed because they have to be static (because of Spark and serializability).
When I make Shared a proper class, all the import statements become useless and I have to pass an instance around, which means I cannot use singletons, even though I need those. It currently works because there is only a single application, so there really is only one instance of Shared. In future, there will still be only one instance of Shared per application, but there may be multiple applications/services defined in a single project.
I've looked into implicits, package objects, and dependency injection, but I'm not sure that's the right road to go down. Maybe I'm just not seeing what should be obvious.
Long story short: is there a neat way to give Shared a parameter, or otherwise achieve what I hope to achieve?
As one option, you can create a SharedClass class and make the Shared object in each app extend it with different constructor params:
class SharedClass(val param: String) {
  // vals ...
}

object Shared extends SharedClass("value")
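For example, each application module could then declare its own Shared with the parameters it needs (a sketch; the module split, parameter name, and values are assumptions):

// shared library
class SharedClass(val appName: String) {
  // vals, defs that previously lived directly in the Shared object
}

// application A
object Shared extends SharedClass("app-a")

// application B (in a separate module/project)
object Shared extends SharedClass("app-b")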
One hacky way to do it is to use a DynamicVariable and make sure it has a value when Shared is initialized (i.e., the first time anything that refers to Shared's fields or methods is called).
Consider the following:
/* File: Config.scala */
import scala.util.DynamicVariable

object Config {
  val Cfg: DynamicVariable[String] = new DynamicVariable[String](null)
}

/**********************/
/* File: Shared.scala */
import Config._

object Shared {
  final val Str: String = Cfg.value
  def init(): Unit = { }
}

/*********************/
/* File: Utils.scala */
import Shared._

object Utils {
  def Length: Int = Str.length
}

/********************/
/* File: Main.scala */
object Main extends App {
  // Make sure `Shared` is initialized with the desired value
  Config.Cfg.withValue("foobar") {
    Shared.init()
  }

  // Now Shared.Str is fixed for the duration of the program
  println(Shared.Str)   // prints: foobar
  println(Utils.Length) // prints: 6
}
This setup is thread-safe, although not deterministic, of course. The following Main would randomly select one of the 3 strings, and print the selected string and its length 3 times:
import scala.util.Random
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

object Main extends App {
  def random(): Int = Random.nextInt(100)

  def foo(param: String, delay: Int = random()): Future[Unit] = {
    Future {
      Thread.sleep(delay)
      Config.Cfg.withValue(param) {
        Shared.init()
      }
      println(Shared.Str)
      println(Utils.Length)
    }
  }

  foo("foo")
  foo("foobar")
  foo("foobarbaz")

  Thread.sleep(1000)
}
An object in Scala can define an "apply" method, which you can use for this purpose. So your Shared object would look like the one below:
object Shared {
  def apply(param1: String, param2: String) = ???
}
Now each client of Shared can pass different values.
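A sketch of what that could look like in practice; the SharedConfig class, parameter names, and values here are assumptions for illustration, not part of the original answer:

class SharedConfig(val appName: String, val dbUrl: String) {
  // vals, defs previously defined directly in the Shared object
}

object Shared {
  def apply(appName: String, dbUrl: String): SharedConfig =
    new SharedConfig(appName, dbUrl)
}

// each application builds its own instance
object ClientApp extends App {
  val shared = Shared("my-app", "jdbc:postgresql://localhost/mydb")
  println(shared.appName)
}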

How to handle different package names in different versions?

I have a 3rd party library with package foo.bar
I normally use it as:
import foo.bar.{Baz => MyBaz}

object MyObject {
  val x = MyBaz.getX // some method defined in Baz
}
The new version of the library has renamed the package from foo.bar to newfoo.newbar. I have now another version of my code with the slight change as follows:
import newfoo.newbar.{Baz => MyBaz}

object MyObject {
  val x = MyBaz.getX // some method defined in Baz
}
Notice that only the first import is different.
Is there any way I can keep the same version of my code and still switch between different versions of the 3rd party library as and when needed?
I need something like conditional imports, or an alternative way.
The other answer is on the right track but doesn't really get you all the way there. The most common way to do this kind of thing in Scala is to provide a base compatibility trait that has different implementations for each version. In my little abstracted library, for example, I have the following MacrosCompat for Scala 2.10:
package io.travisbrown.abstracted.internal

import scala.reflect.ClassTag

private[abstracted] trait MacrosCompat {
  type Context = scala.reflect.macros.Context

  def resultType(c: Context)(tpe: c.Type)(implicit
    tag: ClassTag[c.universe.MethodType]
  ): c.Type = {
    import c.universe.MethodType

    tpe match {
      case MethodType(_, res) => resultType(c)(res)
      case other => other
    }
  }
}
And this one for 2.11:
package io.travisbrown.abstracted.internal

import scala.reflect.ClassTag

private[abstracted] trait MacrosCompat {
  type Context = scala.reflect.macros.whitebox.Context

  def resultType(c: Context)(tpe: c.Type): c.Type = tpe.finalResultType
}
And then my classes, traits, and objects that use the macro reflection API can just extend MacrosCompat and they'll get the appropriate Context and an implementation of resultType for the version we're currently building (this is necessary because of changes to the macros API between 2.10 and 2.11).
(This isn't originally my idea or pattern, but I'm not sure who to attribute it to. Probably Eugene Burmako?)
If you're using SBT, there's special support for version-specific source trees: you can have a src/main/scala directory for your shared code and e.g. src/main/scala-2.10 and src/main/scala-2.11 directories for version-specific code, and SBT will take care of the rest.
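If you adapt the same idea to switch on the third-party library's version rather than the Scala version, the wiring in build.sbt might look roughly like this; this is only a sketch using sbt 1.x syntax, and the useNewFoo setting and directory names are hypothetical:

// build.sbt
val useNewFoo = settingKey[Boolean]("compile against the newfoo.newbar package")

useNewFoo := true

Compile / unmanagedSourceDirectories += {
  val base = (Compile / sourceDirectory).value
  if (useNewFoo.value) base / "scala-newfoo" else base / "scala-oldfoo"
}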
You can try to use type aliases:
package myfoo

object mybar {
  type MyBaz = newfoo.newbar.Baz
  // val MyBaz = newfoo.newbar.Baz // if Baz is a case class/object, it needs to be aliased twice: as a type and as a value
}
And then you can simply import myfoo.mybar._ and replace the mybar object to switch to a different version of the library.
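Concretely, you would keep two variants of the mybar shim, compile exactly one of them against the library version in use, and leave the client code untouched. In this sketch Baz is assumed to be an object (as the question's MyBaz.getX suggests), so a value alias is used:

// variant compiled against the old library
package myfoo
object mybar {
  val MyBaz = foo.bar.Baz
}

// variant compiled against the new library
package myfoo
object mybar {
  val MyBaz = newfoo.newbar.Baz
}

// client code stays the same for both variants
import myfoo.mybar._
object MyObject {
  val x = MyBaz.getX
}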

Best Practice to Load Class in Scala

I'm new to Scala (and functional programming as well) and I'm developing a plugin-based application to learn and study.
I've created a trait to be the interface of a plugin. So when my app starts, it will load all the classes that implement this trait.
trait Plugin {
  def init(config: Properties)
  def execute(parameters: Map[String, Array[String]])
}
In my learning of Scala, I've read that if I want to program in a functional way, I should avoid using var. Here's my problem:
The init method will be called after the class is loaded, and I will probably want to use the values from the config parameter in the execute method.
How do I store these without using a var? Is there a better practice for what I want to do here?
Thanks
There is more to programming in a functional way than just avoiding vars. One key concept is also to prefer immutable objects. In that respect your Plugin API is already breaking functional principles as both methods are only executed for their side-effects. With such an API using vars inside the implementation does not make a difference.
For an immutable plugin instance you could split plugin creation:
trait PluginFactory {
  def createPlugin(config: Properties): Plugin
}

trait Plugin {
  def execute ...
}
Example:
class MyPluginFactory extends PluginFactory {
  def createPlugin(config: Properties): Plugin = {
    val someValue = ... // extract from config
    new MyPlugin(someValue)
  }
}

class MyPlugin(someValue: String) extends Plugin {
  def execute ... // using someValue
}
You can use a val! It's basically the same thing, but the value of a val field cannot be modified later on. If you were using a class, you could write, for example:
class Plugin(val config: Properties) {
  def init {
    // do init stuff...
  }

  def execute = // ...
}
Unfortunately, a trait cannot have class parameters. If you want to have a config field in your trait, you won't be able to set its value immediately, so it will have to be a var.
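One way around that constraint, sketched here as an assumption rather than part of the original answer, is to declare the configuration as an abstract val on the trait and let each implementing class supply it at construction time:

import java.util.Properties

trait Plugin {
  val config: Properties // supplied by the implementing class
  def execute(parameters: Map[String, Array[String]]): Unit
}

class MyPlugin(val config: Properties) extends Plugin {
  def execute(parameters: Map[String, Array[String]]): Unit = {
    // read whatever is needed from config here
  }
}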

Generating a Scala class automatically from a trait

I want to create a method that generates an implementation of a trait. For example:
trait Foo {
  def a
  def b(i: Int): String
}

object Processor {
  def exec(instance: AnyRef, method: String, params: AnyRef*) = {
    // whatever
  }
}

class Bar {
  def wrap[T] = {
    // Here create a new instance of the implementing class, i.e. if T is Foo,
    // generate a new FooImpl(this)
  }
}
I would like to dynamically generate the FooImpl class like so:
class FooImpl(val wrapped: AnyRef) extends Foo {
  def a = Processor.exec(wrapped, "a")
  def b(i: Int) = Processor.exec(wrapped, "b", i)
}
Manually implementing each of the traits is not something we would like (lots of boilerplate) so I'd like to be able to generate the Impl classes at compile time. I was thinking of annotating the classes and perhaps writing a compiler plugin, but perhaps there's an easier way? Any pointers will be appreciated.
java.lang.reflect.Proxy could do something quite close to what you want:
import java.lang.reflect.{InvocationHandler, Method, Proxy}

class Bar {
  def wrap[T : ClassManifest]: T = {
    val theClass = classManifest[T].erasure.asInstanceOf[Class[T]]
    theClass.cast(
      Proxy.newProxyInstance(
        theClass.getClassLoader(),
        Array(theClass),
        new InvocationHandler {
          def invoke(target: AnyRef, method: Method, params: Array[AnyRef]) =
            Processor.exec(this, method.getName, params: _*)
        }))
  }
}
With that, you have no need to generate FooImpl.
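For example, a caller could obtain a Foo without ever writing FooImpl; this sketch assumes Foo is a pure trait as above and that Processor.exec returns values compatible with the declared result types:

object Demo extends App {
  val bar = new Bar
  val foo: Foo = bar.wrap[Foo]
  foo.b(42) // routed to Processor.exec(..., "b", 42) via the proxy
}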
A limitation is that it will only work for traits in which no methods are implemented. More precisely, if a method is implemented in the trait, calling it will still route to the processor and ignore the implementation.
You can write a macro (macros are officially a part of Scala since 2.10.0-M3), something along the lines of Mixing in a trait dynamically. Unfortunately I don't have time to compose an example for you right now, but feel free to ask questions on our mailing list at http://groups.google.com/group/scala-internals.
You can see three different ways to do this in ScalaMock.
ScalaMock 2 (the current release version, which supports Scala 2.8.x and 2.9.x) uses java.lang.reflect.Proxy to support dynamically typed mocks and a compiler plugin to generate statically typed mocks.
ScalaMock 3 (currently available as a preview release for Scala 2.10.x) uses macros to support statically typed mocks.
Assuming that you can use Scala 2.10.x, I would strongly recommend the macro-based approach over a compiler plugin. You can certainly make the compiler plugin work (as ScalaMock demonstrates) but it's not easy and macros are a dramatically superior approach.