Can I define and use functions outside classes and objects in Scala?

I'm beginning to learn Scala, and I'm curious whether I can define and use functions without any class or object, as in Haskell, where there is no OOP concept. Can I use Scala entirely without any OOP concepts?
P.S. I'm using the IntelliJ plugin for Scala.

Well, you cannot quite do that, but you can get very close by using package objects:
src/main/scala/yourpackage/package.scala:
package object yourpackage {
  def function(x: Int) = x * x
}
src/main/scala/yourpackage/Other.scala:
package yourpackage

object Other {
  def main(args: Array[String]): Unit = {
    println(function(10)) // Prints 100
  }
}
Note how function is used in the Other object, without any qualifier: here function belongs to a package, not to some specific object or class.
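From outside yourpackage, the function is reachable with an ordinary import (a minimal sketch; otherpackage and Main are made up for illustration):
src/main/scala/otherpackage/Main.scala:
package otherpackage

import yourpackage.function

object Main {
  def main(args: Array[String]): Unit = {
    println(function(3)) // Prints 9
  }
}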

As the other answers have indicated, it's not possible to define a function without an object or class, because in Scala functions are objects and (for example) Function1 is a class.
However, it is possible to define a function without an enclosing class or object. You do this by making the function itself the object.
src/main/scala/foo/bar/square.scala:
package foo.bar

object square extends (Int => Int) {
  def apply(x: Int): Int = x * x
}
extends (Int => Int) is here just syntactic sugar for extends Function1[Int, Int].
This can then be imported and used like so:
src/main/scala/foo/bar/somepackage/App.scala:
package foo.bar.somepackage

import foo.bar.square

object App {
  def main(args: Array[String]): Unit = {
    println(square(2)) // prints "4"
  }
}
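A small bonus of this encoding (a sketch): since square is a Function1[Int, Int] value, it can be passed directly to higher-order functions.
List(1, 2, 3).map(square) // List(1, 4, 9)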

No. Functions being "first-class objects" means that they are still objects.
However, it is still easy to construct a Scala application that does nothing but invoke a function that is arbitrarily composed.
One recurring Odersky theme is that Scala's fusion of FP and OOP exploits the modularity of OOP. It's a truism that Scala doesn't strive for "pure FP", but OOP has real consequences for everyday programming, such as that a Map is a Function, as is a Set or a String (by virtue of enrichment to StringOps).
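For instance, a Map can be used directly where a function is expected (a small illustration):
val numerals = Map(1 -> "I", 2 -> "II", 3 -> "III")

// Map[Int, String] is a Function1[Int, String], so it can be passed
// straight to a higher-order function:
List(1, 2, 3).map(numerals) // List(I, II, III)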
So, although it's possible to write a program that just invokes a function and is not structured in an OOP way, the language does not support actively avoiding OOP, in the sense of avoiding exposure to inheritance and other un-FP things like mutable state and side effects.

Related

Understanding companion objects in Scala

While learning Scala, I came across the interesting concept of companion objects. A companion object can be used to define static-like methods in Scala. I need a few clarifications about the Spark Scala code below with regard to companion objects.
class BballStatCounter extends Serializable {
  val stats: StatCounter = new StatCounter()
  var missing: Long = 0

  def add(x: Double): BballStatCounter = {
    if (x.isNaN) {
      missing += 1
    } else {
      stats.merge(x)
    }
    this
  }
}

object BballStatCounter extends Serializable {
  def apply(x: Double) = new BballStatCounter().add(x)
}
The above code is invoked with val stat3 = stats1.map(b => BballStatCounter(b)).
1. What is the nature of the variables stats and missing declared in the class? Are they similar to class attributes in Python?
2. What is the significance of the apply method here?
Here stats and missing are class attributes, and each instance of BballStatCounter will have its own copy of them, just as in Python.
In Scala the method apply serves a special purpose: if an object has an apply method and that object is used with function-call notation, like Obj(), the compiler replaces the call with a call to its apply method, i.e. Obj.apply().
The apply method is generally used as a constructor in a class's companion object.
All the collection classes in Scala have a companion object with an apply method, which is why you can create a list like List(1,2,3,4).
Thus in your code above, BballStatCounter(b) will be compiled to BballStatCounter.apply(b).
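A minimal standalone sketch of the same rewrite (Twice is a made-up example object):
object Twice {
  def apply(x: Int): Int = x * 2
}

Twice(21) // the compiler rewrites this to Twice.apply(21), which returns 42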
stats and missing are members of the class BballStatCounter. stats is a val, so it cannot be changed once it has been defined; missing is a var, so it is more like a traditional variable and can be updated, as it is in the add method. Every instance of BballStatCounter will have these members. (Unlike in Python, you cannot add or remove members from a Scala object.)
The apply method is a shortcut that makes objects look like functions. If you have an object x with an apply method, you write x(...) and the compiler will automatically convert this to x.apply(...). In this case it means that you can call BballStatCounter(1.0) and this will call the apply method on the BballStatCounter object.
Neither of these questions is really about companion objects; this is just the normal Scala class mechanism.

Elegant grouping of implicit value classes

I'm writing a set of implicit Scala wrapper classes for an existing Java library (so that I can decorate that library to make it more convenient for Scala developers).
As a trivial example, let's say that the Java library (which I can't modify) has a class such as the following:
public class Value<T> {
    // Etc.
    public void setValue(T newValue) {...}
    public T getValue() {...}
}
Now let's say I want to decorate this class with Scala-style getters and setters. I can do this with the following implicit class:
final implicit class RichValue[T](private val v: Value[T])
  extends AnyVal {
  // Etc.
  def value: T = v.getValue
  def value_=(newValue: T): Unit = v.setValue(newValue)
}
The implicit keyword tells the Scala compiler that it can implicitly convert instances of Value into instances of RichValue (provided that the latter is in scope). So now I can apply methods defined within RichValue to instances of Value. For example:
def increment(v: Value[Int]): Unit = {
  v.value = v.value + 1
}
(Agreed, this isn't very nice code, and is not exactly functional. I'm just trying to demonstrate a simple use case.)
Unfortunately, Scala does not allow implicit classes to be top-level, so they must be defined within a package object, object, class or trait and not just in a package. (I have no idea why this restriction is necessary, but I assume it's for compatibility with implicit conversion functions.)
However, I'm also extending RichValue from AnyVal to make this a value class. If you're not familiar with them, they allow the Scala compiler to make allocation optimizations. Specifically, the compiler does not always need to create instances of RichValue, and can operate directly on the value class's constructor argument.
In other words, there's very little performance overhead from using a Scala implicit value class as a wrapper, which is nice. :-)
However, a major restriction of value classes is that they cannot be defined within a class or a trait; they can only be members of packages, package objects or objects. (This is so that they do not need to maintain a pointer to the outer class instance.)
An implicit value class must honor both sets of constraints, so it can only be defined within a package object or an object.
And therein lies the problem. The library I'm wrapping contains a deep hierarchy of packages with a huge number of classes and interfaces. Ideally, I want to be able to import my wrapper classes with a single import statement, such as:
import mylib.implicits._
to make using them as simple as possible.
The only way I can currently see of achieving this is to put all of my implicit value class definitions inside a single package object (or object) within a single source file:
package mylib

package object implicits {

  implicit final class RichValue[T](private val v: Value[T])
    extends AnyVal {
    // ...
  }

  // Etc. with hundreds of other such classes.
}
However, that's far from ideal, and I would prefer to mirror the package structure of the target library, yet still bring everything into scope via a single import statement.
Is there a straightforward way of achieving this that doesn't sacrifice any of the benefits of this approach?
(For example, I know that if I forego making these wrappers value classes, then I can define them within a number of different traits - one for each component package - and have my root package object extend all of them, bringing everything into scope through a single import, but I don't want to sacrifice performance for convenience.)
implicit final class RichValue[T](private val v: Value[T]) extends AnyVal
is essentially syntactic sugar for the following two definitions:
import scala.language.implicitConversions // or use a compiler flag

final class RichValue[T](private val v: Value[T]) extends AnyVal

@inline implicit def RichValue[T](v: Value[T]): RichValue[T] = new RichValue(v)
(which, as you might see, is why implicit classes have to be inside traits, objects or classes: they come with a matching def)
There is nothing that requires those two definitions to live together. You can put them into separate objects:
object wrappedLibValues {
  final class RichValue[T](private val v: Value[T]) extends AnyVal {
    // lots of implementation code here
  }
}
object implicits {
  @inline implicit def RichValue[T](v: Value[T]): wrappedLibValues.RichValue[T] =
    new wrappedLibValues.RichValue(v)
}
Or into traits:
object wrappedLibValues {
  final class RichValue[T](private val v: Value[T]) extends AnyVal {
    // implementation here
  }

  trait Conversions {
    @inline implicit def RichValue[T](v: Value[T]): RichValue[T] = new RichValue(v)
  }
}
object implicits extends wrappedLibValues.Conversions
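This also scales to a mirrored package structure: give each wrapped package its own Conversions trait and let one root object aggregate them, so a single import still brings everything into scope. A sketch, where mylib.values and mylib.collections are hypothetical sub-packages:
package mylib

// each sub-package contributes one Conversions trait; the package
// object mixes them all in, so clients only need one import
package object implicits
  extends values.Conversions
  with collections.Conversions

// client code: import mylib.implicits._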

How can I add new methods to a library object?

I've got a class from a library (specifically, com.twitter.finagle.mdns.MDNSResolver). I'd like to extend the class (I want it to return a Future[Set], rather than a Try[Group]).
I know, of course, that I could sub-class it and add my method there. However, I'm trying to learn Scala as I go, and this seems like an opportunity to try something new.
The reason I think this might be possible is the behavior of JavaConverters. The following code:
class Test {
  import scala.collection.mutable.Buffer
  var lst: Buffer[Nothing] = (new java.util.ArrayList()).asScala
}
does not compile, because there is no asScala method on Java's ArrayList. But if I import some new definitions:
class Test {
  import scala.collection.mutable.Buffer
  import collection.JavaConverters._
  var lst: Buffer[Nothing] = (new java.util.ArrayList()).asScala
}
then suddenly there is an asScala method. So it looks as if the ArrayList class is being extended transparently.
Am I understanding the behavior of JavaConverters correctly? Can I (and should I) duplicate that methodology?
Scala supports something called implicit conversions. Look at the following:
val x: Int = 1
val y: String = x
The second assignment does not work, because String is expected, but Int is found. However, if you add the following into scope (just into scope, can come from anywhere), it works:
implicit def int2String(x: Int): String = "asdf"
Note that the name of the method does not matter.
What is usually done is called the "pimp my library" pattern:
class BetterFoo(x: Foo) {
  def coolMethod() = { ... }
}
implicit def foo2Better(x: Foo) = new BetterFoo(x)
That allows you to call coolMethod on Foo. This is used so often that, since Scala 2.10, you can write:
implicit class BetterFoo(x: Foo) {
  def coolMethod() = { ... }
}
which does the same thing but is obviously shorter and nicer.
So you can do:
implicit class MyMDNSResolver(x: com.twitter.finagle.mdns.MDNSResolver) {
  def awesomeMethod = { ... }
}
And you'll be able to call awesomeMethod on any MDNSResolver, if MyMDNSResolver is in scope.
This is achieved using implicit conversions; this feature allows you to automatically convert one type to another when a method that's not recognised is called.
The pattern you're describing in particular is referred to as "enrich my library", after an article Martin Odersky wrote in 2006. It's still an okay introduction to what you want to do: http://www.artima.com/weblogs/viewpost.jsp?thread=179766
The way to do this is with an implicit conversion. These can be used to define views, and their use to enrich an existing library is called "pimp my library".
I'm not sure whether you need to write a conversion from Try[Group] to Future[Set], or whether you can write one from Try to Future and another from Group to Set; note that Scala will not chain two implicit conversions automatically, so in the latter case one of them would have to be applied explicitly.

Scala Objects and the rise of singletons

General style question.
As I become better at writing functional code, more of my methods are becoming pure functions. I find that lots of my "classes" (in the loose sense of a container of code) are becoming state free. Therefore I make them objects instead of classes as there is no need to instantiate them.
Now in the Java world, having a class full of "static" methods would seem rather odd, and is generally only used for "helper" classes, like you see with Guava and Commons-* and so on.
So my question is: in the Scala world, is having lots of logic inside "objects" rather than "classes" quite normal, or is there another preferred idiom?
As your title says, objects are singleton classes, not classes with static methods as suggested in the text of your question.
And there are a few things that make Scala objects better than both statics and singletons in the Java world, so it is quite "normal" to use them in Scala.
For one thing, unlike static methods, object methods are polymorphic, so you can easily inject objects as dependencies:
scala> trait Quack {def quack="quack"}
defined trait Quack
scala> class Duck extends Quack
defined class Duck
scala> object Quacker extends Quack {override def quack="QUAACK"}
defined module Quacker
// MakeItQuack expects something implementing Quack
scala> def MakeItQuack(q: Quack) = q.quack
MakeItQuack: (q: Quack)java.lang.String
// ...it can be a class
scala> MakeItQuack(new Duck)
res0: java.lang.String = quack
// ...or it can be an object
scala> MakeItQuack(Quacker)
res1: java.lang.String = QUAACK
This makes them usable without tight coupling and without promoting global state (which are two of the issues generally attributed to both static methods and singletons).
Then there's the fact that they do away with all the boilerplate that makes singletons so ugly and unidiomatic-looking in Java. This is an often-overlooked point, in my opinion, and part of what makes singletons so frowned upon in Java even when they are stateless and not used as global state.
Also, the boilerplate you have to repeat in every Java singleton gives the class two responsibilities: ensuring there's only one instance of itself, and doing whatever it's supposed to do. The fact that Scala has a declarative way of specifying that something is a singleton relieves the class and the programmer from breaking the single-responsibility principle. In Scala you know an object is a singleton, and you can simply reason about what it does.
You can also use package objects; for example, take a look at the scala.math package object here:
https://lampsvn.epfl.ch/trac/scala/browser/scala/tags/R_2_9_1_final/src//library/scala/math/package.scala
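For example, functions like sqrt live in that package object rather than in any class:
import scala.math._

sqrt(2.0) // resolves to the sqrt defined in the scala.math package object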
Yes, I would say it is normal.
For most of my classes I create a companion object to handle initialization and validation logic. For example, instead of throwing an exception when validation of the parameters fails in a constructor, it is possible to return an Option or an Either from the companion object's apply method:
class X(val x: Int) {
  require(x >= 0)
}

// ==>

object X {
  def apply(x: Int): Option[X] =
    if (x < 0) None else Some(new X(x))
}

class X private (val x: Int)
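A quick sketch of what callers then see with the second version:
X(5)  // Some(...): validation passed and an X is constructed
X(-1) // None: validation failed, and no exception is thrown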
In the companion object one can add a lot of additional logic, such as a cache for immutable objects.
Objects are also good for sending signals between instances if there is no need to also send messages:
import akka.actor.Actor

object X {
  def doSomething(s: String) = ???
}

case class C(s: String)

class A extends Actor {
  var calculateLater: String = ""

  def receive = {
    case X    => X.doSomething(calculateLater) // react to the signal using previously stored state
    case C(s) => calculateLater = s
  }
}
Another use case for objects is to reduce the scope of elements:
// traits with lots of members
trait A
trait B
trait C

trait Trait {
  def validate(s: String) = {
    import validator._
    // use logic of validator
  }

  private object validator extends A with B with C {
    // all members of A, B and C are visible here
  }
}

class Class extends Trait {
  // no unnecessary members and no name conflicts here
}

Generating a Scala class automatically from a trait

I want to create a method that generates an implementation of a trait. For example:
trait Foo {
  def a
  def b(i: Int): String
}

object Processor {
  def exec(instance: AnyRef, method: String, params: AnyRef*) = {
    // whatever
  }
}
class Bar {
  def wrap[T] = {
    // Here create a new instance of the implementing class, i.e. if T is Foo,
    // generate a new FooImpl(this)
  }
}
I would like to dynamically generate the FooImpl class like so:
class FooImpl(val wrapped: AnyRef) extends Foo {
  def a = Processor.exec(wrapped, "a")
  def b(i: Int) = Processor.exec(wrapped, "b", i)
}
Manually implementing each of the traits is not something we would like (lots of boilerplate), so I'd like to be able to generate the Impl classes at compile time. I was thinking of annotating the classes and perhaps writing a compiler plugin, but perhaps there's an easier way? Any pointers would be appreciated.
java.lang.reflect.Proxy can do something quite close to what you want:
import java.lang.reflect.{InvocationHandler, Method, Proxy}

class Bar {
  def wrap[T : ClassManifest]: T = {
    val theClass = classManifest[T].erasure.asInstanceOf[Class[T]]
    theClass.cast(
      Proxy.newProxyInstance(
        theClass.getClassLoader(),
        Array(theClass),
        new InvocationHandler {
          def invoke(proxy: AnyRef, method: Method, params: Array[AnyRef]) =
            Processor.exec(Bar.this, method.getName, params: _*) // route to the wrapped Bar instance, not the handler
        }))
  }
}
With that, you have no need to generate FooImpl.
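Hypothetical usage, assuming the Foo trait and Processor object from the question (a sketch, not verified against a real Processor implementation):
val bar = new Bar
val foo: Foo = bar.wrap[Foo]
foo.b(42) // the proxy routes this call to Processor.exec(bar, "b", 42)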
A limitation is that it will work only for traits in which no methods are implemented: even if a method is implemented in the trait, calling it will still route to the Processor and ignore the implementation.
You can write a macro (macros have officially been part of Scala since 2.10.0-M3), something along the lines of Mixing in a trait dynamically. Unfortunately I don't have time right now to compose an example for you, but feel free to ask questions on our mailing list at http://groups.google.com/group/scala-internals.
You can see three different ways to do this in ScalaMock.
ScalaMock 2 (the current release version, which supports Scala 2.8.x and 2.9.x) uses java.lang.reflect.Proxy to support dynamically typed mocks and a compiler plugin to generate statically typed mocks.
ScalaMock 3 (currently available as a preview release for Scala 2.10.x) uses macros to support statically typed mocks.
Assuming that you can use Scala 2.10.x, I would strongly recommend the macro-based approach over a compiler plugin. You can certainly make the compiler plugin work (as ScalaMock demonstrates) but it's not easy and macros are a dramatically superior approach.