Quick Documentation For Scala Apply Constructor Pattern in IntelliJ IDE - scala

I am wondering if there is a way to get the quick documentation in IntelliJ to work for the class construction pattern many Scala developers use, shown below:
SomeClass(param1, param2)
instead of
new SomeClass(param1, param2)
The direct constructor call made with new obviously works, but many Scala devs use apply to construct objects. When that pattern is used, the IntelliJ documentation lookup fails to find any information on the class.

I don't know whether IntelliJ has documentation for this per se. However, the pattern is fairly easy to explain.
There's a pattern in Java code for having static factory methods (this is a specialization of the Gang of Four Factory Method Pattern), often along the lines of (translated to Scala-ish):
object Foo {
  def barInstance(args...): Bar = ???
}
The main benefit of doing this is that the factory controls object instantiation, in particular:
the particular runtime class to instantiate, possibly based on the arguments to the factory. For example, the generic immutable collections in Scala have factory methods which may create optimized small collections when given a sufficiently small number of elements. For instance, a sequence of length 1 can be implemented with essentially no overhead: a single field referring to the element, and a lookup that returns that field when the index is 0 and throws otherwise.
whether an instance is created at all. One can cache the arguments to the factory and memoize or "hash-cons" the created objects, or pre-create the most common instances and hand them out repeatedly (a small caching sketch follows below).
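For illustration, here is a minimal caching-factory sketch. It is not from the original answer; the Color class and its cache are hypothetical, and the point is only that apply decides whether a new instance is created at all:
class Color private (val name: String) // private constructor: only the factory can call `new`

object Color {
  private val cache = scala.collection.mutable.Map.empty[String, Color]

  // Return the cached instance when one exists, otherwise create and cache it.
  def apply(name: String): Color =
    cache.getOrElseUpdate(name, new Color(name))
}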
A further benefit is that the factory is a function, while new is an operator, which allows the factory to be passed around:
class Foo(x: Int)
object Foo {
  def instance(x: Int) = new Foo(x)
}
Seq(1, 2, 3).map(Foo.instance) // builds a Seq of three Foo instances; the factory is passed as a function value
In Scala, this is combined with two language features. First, any object which defines an apply method can be used syntactically as a function (even if it doesn't extend Function, which would additionally allow the object itself to be passed around as a function value). Second, a class can have a "companion object", which holds the members that in Java would be static. Together they give you something like:
class Foo(constructor_args...)
object Foo {
  def apply(args...): Foo = ???
}
Which can be used like:
Foo(...)
For a case class, the Scala compiler automatically generates a companion object with certain behaviors, one of which is an apply with the same arguments as the constructor (other behaviors include contract-obeying hashCode and equals as well as an unapply method to allow for pattern matching).
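To make that concrete, here is a small runnable sketch of what the generated companion gives you; CaseClassDemo and Person are made-up names used only for illustration:
object CaseClassDemo extends App {
  case class Person(name: String, age: Int)

  val p  = Person("Ada", 36)       // calls the generated Person.apply, no `new` needed
  val p2 = new Person("Ada", 36)   // the direct constructor call still works

  p match {
    case Person(name, _) => println(name)   // pattern matching uses the generated unapply
  }
}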

Related

Why can I update the state of an object extending the Immutable trait in Scala?

I created a class extending scala.Immutable:
class SomeThing(var string: String) extends Immutable {
  override def toString: String = string
}
I expected the Scala compiler to help me prevent changing the state of SomeThing. But when I run this test:
"Test change state of immutable interface" should "not allow" in {
val someThing = new SomeThing("hello")
someThing.string = "hello 1"
println(someThing)
}
The result is hello 1, and the Scala compiler doesn't emit any warning or error.
Why does the Immutable trait exist if it doesn't help us prevent objects from being mutated?
There are several aspects to this question.
1. A simple one is that the Scala compiler can't really ensure immutability, for various reasons. For example, the main target platform, the JVM, allows modifying even final fields using reflection. Another reason this is not enforceable is code like this:
/////////////////////////////////////////
//// library v1
package library
class LibraryData(val value: Int)

/////////////////////////////////////////
//// code that uses the library
package app
class UserData(val data: LibraryData) extends Immutable

/////////////////////////////////////////
//// library v2
package library
class LibraryData(var value: Int) // now changed to var!
Since the "library" is compiled independently of the "app" and doesn't even know about existence of the "app" there is no point in time where compiler can catch the broken contract.
2. A more fundamental misunderstanding you seem to have is about what a trait does. In this context a trait (or "interface" in some other languages) represents a contract between the implementation and the user code about how the implementation can and should behave. However, not every kind of contract can be represented as a trait (at least without making the code super-complicated). For example, for a mutable collection there is a contract that size should return the number of times add (or +=) has been called, but there is no way to represent such a contract as a trait beyond declaring that there are methods size and += with the corresponding signatures. Moreover, for most contracts there is no way to force an implementation to follow them. For example, an implementation of size that always returns 0 technically matches the signature but clearly breaks the contract.
Similarly, the Immutable doc says:
A marker trait for all immutable data structures such as immutable collections.
So it is just a marker trait, which is one of the ways to work around contracts that can't really be represented as types. It says that whoever extends the trait claims the object is immutable. Your code makes that claim but clearly breaks the contract, so technically it is your fault for not respecting it.
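For completeness, a minimal sketch of the usual fix: declare the field as a val so the compiler itself rejects reassignment, independently of the marker trait.
class SomeThing(val string: String) extends Immutable {
  override def toString: String = string
}

val someThing = new SomeThing("hello")
// someThing.string = "hello 1"   // no longer compiles: "reassignment to val"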

Using New Keyword inside apply method in a companion object

I am a little bit confused about using companion objects in Scala. When you want to provide multiple constructors, usually you declare a companion object and overload the apply method. But what is the difference between these two ways of doing it?
case class Node(...)
object Node {
  def apply(...) = new Node(...) // first way
  def apply(...) = Node(...)     // second way
}
Almost all examples I've seen use the first form:
When to use companion object factory versus the new keyword
"new" keyword in Scala
http://alvinalexander.com/scala/how-to-create-scala-object-instances-without-new-apply-case-class
But my code seems to work the same using both forms. Does using the new keyword only make sense when we have a normal class (not a case class)?
When you call
val n = Node(..)
the compiler expands the code into a Node.apply call. One of these apply methods will internally have to call new in order to create an instance of the type. Case classes provide a companion object with such an apply method out of the box, which is what allows the shorter syntax.
When you want to provide multiple constructors, usually you declare a companion object and overload the apply method
This is the case for case classes. You can also provide additional auxiliary constructors using this():
class Foo(i: Int) {
  def this() = this(0)
}
Note this will not provide the syntactic sugar apply does; you'll still need to use new.
When you declare a case class, the compiler generates a companion object with an apply method whose implementation creates an instance of the case class using the new keyword. So you don't need to write such an apply yourself; that work is done by the compiler.
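As an illustration (the names and the zero-argument overload are made up for this sketch), both styles end up calling the constructor; delegating to the generated apply just adds one level of indirection:
case class Node(value: Int)

object Node {
  // Both lines below build the same instance; `Node(0)` simply goes through the
  // compiler-generated apply, which itself calls `new Node(0)`.
  def apply(): Node = new Node(0)   // "first way": call the constructor directly
  // def apply(): Node = Node(0)    // "second way": delegate to the generated apply
}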

Scala type alias with companion object

I'm a relatively new Scala user and I wanted to get an opinion on the current design of my code.
I have a few classes that are all represented as fixed length Vector[Byte] (ultimately they are used in a learning algorithm that requires a byte string), say A, B and C.
I would like these classes to be referred to as A, B and C elsewhere in the package for readability's sake, and I don't need to add any extra methods to Vector for these types. Hence, I don't think the enrich-my-library pattern is useful here.
However, I would like to include all the useful functional methods that come with Vector without having to 'drill' into a wrapper object each time. As efficiency is important here, I also didn't want the added weight of a wrapper.
Therefore I decided to define type aliases in the package object:
package object abc {
  type A = Vector[Byte]
  type B = Vector[Byte]
  type C = Vector[Byte]
}
However, each has its own fixed length, and I would like to include factory methods for their creation. It seems like this is what companion objects are for. This is how my final design looks:
package object abc {
  type A = Vector[Byte]
  object A {
    val LENGTH: Int = ...
    def apply(...): A = {
      Vector.tabulate...
    }
  }
  ...
}
Everything compiles and it allows me to do stuff like this:
val a: A = A(...)
a map {...} mkString(...)
I can't find anything specifically warning against writing companion objects for type aliases, but it seems it goes against how type aliases should be used. It also means that all three of these classes are defined in the same file, when ideally they should be separated.
Are there any hidden problems with this approach?
Is there a better design for this problem?
Thanks.
I guess it is totally ok, because you are not really implementing a companion object.
If you were, you would have access to private fields of immutable.Vector from inside object A (like e.g. private var dirty), which you do not have.
Thus, although it somewhat feels like A is a companion object, it really isn't.
If it were possible to create a true companion object for any type simply by using a type alias, that would make member visibility constraints moot (except maybe for private[this] and protected[this]).
Furthermore, naming the object like the type alias clarifies context and purpose of the object, which is a plus in my book.
Having them all in one file is something that is pretty common in Scala as I know it (e.g. when using the type class pattern).
Thus:
No pitfalls, I know of.
And, imho, no need for a different approach.
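For reference, here is a self-contained version of the design described above; the LENGTH value and the fill logic are hypothetical placeholders rather than the original code:
package object abc {
  type A = Vector[Byte]

  object A {
    val LENGTH: Int = 16
    // Build a fixed-length Vector[Byte] filled with the given value.
    def apply(fill: Byte): A = Vector.tabulate(LENGTH)(_ => fill)
  }
}

// Elsewhere in package abc:
//   val a: A = A(0)
//   a.map(b => (b + 1).toByte).mkString(",")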

Creating an arraylist in Scala

I am a teaching assistant for a class that teaches Scala. As an assignment, I want the students to implement an arraylist class.
In Java I wrote it like:
public class ArrayList<T> implements List<T>{....}
Is there any equivalent List trait that I should use to implement the arraylist?
The name ArrayList suggests that you should mix in IndexedSeq. Actually you probably want to get all the goodies that are provided by IndexedSeqLike, i.e.
class ArrayList[A] extends IndexedSeq[A] with IndexedSeqLike[A, ArrayList[A]]
This gets you concrete implementations of head, tail, take, drop, filter, etc. If you also want map, flatMap, etc. (all the methods that take a type parameter) to work properly (return an ArrayList[A]), you also have to provide a type class instance for CanBuildFrom in your companion object, e.g.
implicit def cbf[A, B]: CanBuildFrom[ArrayList[A], B, ArrayList[B]] =
  new CanBuildFrom[ArrayList[A], B, ArrayList[B]] {
    // TODO: implementation
  }
The Scala collection library is very complex. For an overview of the inheritance hierarchy, take a look at these pictures:
scala.collection.immutable: http://www.scala-lang.org/docu/files/collections-api/collections.immutable.png
scala.collection.mutable: http://www.scala-lang.org/docu/files/collections-api/collections.mutable.png
Also the scaladoc gives a good overview about all the classes and traits of the collection library.
Be aware that in Scala a List is a real linked list, meaning it is a LinearSeq; a Java List is closer to an IndexedSeq in Scala.
In Scala there are many interfaces (traits). First, they are separated into mutable and immutable ones. In Java, ArrayList is based on an array, so it is an indexed sequence. In Scala the interface for this is IndexedSeq[A]. Because ArrayList is also mutable, you can choose scala.collection.mutable.IndexedSeq; otherwise use scala.collection.immutable.IndexedSeq. Instead of mutable.IndexedSeq you can also choose scala.collection.mutable.Buffer, which does not guarantee O(1) access time.
If you want a more functional approach you can prefer Seq[A] as the interface, or Iterable[A] if you want to be able to implement more than sequences.
That would be Seq[T], or maybe IndexedSeq[T] - or even List[T].
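To give students a concrete starting point, here is a minimal sketch of an array-backed implementation built only on the standard library's mutable.IndexedSeq; the class layout, growth strategy, and append method are assumptions for illustration, not a definitive design:
import scala.collection.mutable

class ArrayList[A](initialCapacity: Int = 16) extends mutable.IndexedSeq[A] {
  private var elems = new Array[Any](math.max(initialCapacity, 1))
  private var count = 0

  // The three abstract members of mutable.IndexedSeq:
  def length: Int = count

  def apply(i: Int): A = {
    if (i < 0 || i >= count) throw new IndexOutOfBoundsException(i.toString)
    elems(i).asInstanceOf[A]
  }

  def update(i: Int, value: A): Unit = {
    if (i < 0 || i >= count) throw new IndexOutOfBoundsException(i.toString)
    elems(i) = value
  }

  // Amortized O(1) append, doubling the backing array when it is full.
  def append(value: A): Unit = {
    if (count == elems.length) {
      val bigger = new Array[Any](elems.length * 2)
      Array.copy(elems, 0, bigger, 0, count)
      elems = bigger
    }
    elems(count) = value
    count += 1
  }
}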

Scala - are classes sufficient?

Coming from Java I am confused by the class/object distinction of Scala. Note that I do not ask for the formal difference; there are enough references on the web which explain this, and there are related questions on SO.
My questions are: Why did the designers of Scala choose to make things more complicated (compared to Java or C#)? What disadvantages do I have to expect if I ignore this distinction and declare only classes?
Thanks.
Java classes contain two completely different types of members -- instance members (such as BigDecimal.plus) and static members (such as BigDecimal.valueOf). In Scala, there are only instance members. This is actually a simplification! But it leaves a problem: where do we put methods like valueOf? That's where objects are useful.
class BigDecimal(value: String) {
  def plus(that: BigDecimal): BigDecimal = // ...
}
object BigDecimal {
  def valueOf(i: Int): BigDecimal = // ...
}
You can view this as the declaration of an anonymous class and a single instantiation thereof:
class BigDecimal$object {
  def valueOf(i: Int): BigDecimal = // ...
}
lazy val BigDecimal = new BigDecimal$object
When reading Scala code, it is crucial to distinguish types from values. I've configured IntelliJ to highlight types in blue.
val ls = List.empty[Int] // List is a value, a reference to the object List
ls: List[Int] // List is a type, a reference to class List
Java also has another degree of complexity that was removed in Scala: the distinction between fields and methods. Fields aren't allowed on interfaces, except if they are static and final; methods can be overridden, while fields are merely hidden if redefined in a subclass. Scala does away with this complexity and only exposes methods to the programmer.
Finally, a glib answer to your second question: if you don't declare any objects, your program may never run, since to define the equivalent of public static void main(String[] args) {} in Scala you need at least one object!
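A minimal sketch of that entry point (Main is just an illustrative name):
object Main {
  // The Scala counterpart of Java's public static void main lives in an object.
  def main(args: Array[String]): Unit =
    println("Hello from a Scala object")
}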
Scala doesn't have any notion of static methods with standard classes, so in those scenarios you'll have to use objects. Interesting article here which provides a good intro:
http://www.codecommit.com/blog/scala/scala-for-java-refugees-part-3
(scroll down to Scala’s Sort-of Statics)
One way to look at it is this: an executing program consists of a community of objects and threads. Threads execute code within the context of objects, i.e. there is always a "this" object that a thread is executing within. This is a simplification compared to Java, where there is not always a "this". But now there is a chicken-and-egg problem: if objects are created by threads, and threads are executed within objects, what object is the first thread initially executing within? There has to be a non-empty set of objects that exist at the start of program execution. These are the objects declared with the object keyword.