After watching John De Goes' "FP to the Max" (https://www.youtube.com/watch?v=sxudIMiOo68) I'm wondering about approaches for writing FP programs in the tagless-final pattern.
Say I have some typeclass for modelling a side-effecty thing (taking his Console example):
trait Console[F[_]] {
def putStrLn(str: String): F[Unit]
def getStrLn: F[String]
}
How would you depend on Console?
Implicitly
Like demonstrated in his video:
def inputLength[F[_]: Functor: Console]: F[Int] =
Console[F].getStrLn.map(_.length)
Pros: The function signature is clean, and you can benefit from automatic typeclass derivation
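(One note: the Console[F].getStrLn syntax presupposes a "summoner" apply method, presumably defined alongside the trait as in the video. A minimal sketch:
object Console {
  // Summons the implicit Console[F] instance so Console[F] can be written inline.
  def apply[F[_]](implicit instance: Console[F]): Console[F] = instance
}
)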
Explicitly
By passing the instance to the function directly:
def inputLength[F[_]: Functor](console: Console[F]): F[Int] =
console.getStrLn.map(_.length)
Pros: This allows you to explicitly wire your dependencies according to your needs, and feels less "magical"
Not sure what's the best / most idiomatic way to write this function, would appreciate your opinions.
Thanks!
When you rely on a typeclass instance via an implicit parameter, one thing is certain: the instance is determined at compile time (unless you provide it explicitly, which rather defeats the purpose and takes us back to example 2). Conversely, if the instance can't be determined at compile time, for example when a configuration parameter decides which implementation to use, then an implicit parameter of any kind is no longer suitable.
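A minimal sketch of that runtime case (testMode, live and test are made up for illustration):
// Which instance is used is only known at runtime, so it is passed
// explicitly rather than resolved implicitly.
def pickConsole[F[_]](testMode: Boolean, live: Console[F], test: Console[F]): Console[F] =
  if (testMode) test else live
The chosen instance would then be handed to the explicit variant of inputLength.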
Thus I'd say, to my own opinion and liking, that whenever the instance can be determined at compile time, let the compiler figure out the wiring, because as you said you gain a lot from it, such as automatic typeclass derivation when available.
The "magical" argument, while understandable, signals that whoever says this still has mileage to go in the language they're programming in and needs to learn how things work, which is completely OK, yet it is not a good enough reason to avoid typeclass instances via implicit parameters.
Coming from a Java background, I always mark instance variables as private. I'm learning Scala, and in almost all of the code I have viewed, the val/var members have default (public) access. Why is this the default? Does it not break the information hiding/encapsulation principle?
It would help if you specified which code, but keep in mind that some example code is in a simplified form to highlight whatever it is that the example is supposed to show you. Since the default access is public, the modifiers often get left off for simplicity.
That said, since a val is immutable, there's not much harm in leaving it public as long as you recognize that this is now part of the API for your class. That can be perfectly okay:
class DataThingy(data: Array[Double]) {
val sum = data.sum
}
Or it can be an implementation detail that you shouldn't expose:
class Statistics(data: Array[Double]) {
val sum = data.sum
val sumOfSquares = data.map(x => x*x).sum
val expectationSquared = (sum * sum)/(data.length*data.length)
val expectationOfSquare = sumOfSquares/data.length
val varianceOfSample = expectationOfSquare - expectationSquared
val standardDeviation = math.sqrt(data.length*varianceOfSample/(data.length-1))
}
Here, we've littered our class with all of the intermediate steps for calculating standard deviation. And this is especially foolish given that this is not the most numerically stable way to calculate standard deviation with floating point numbers.
Rather than merely making all of these private, it is better style, if possible, to use local blocks or private[this] defs to perform the intermediate computations:
val sum = data.sum
val standardDeviation = {
val sumOfSquares = ...
...
math.sqrt(...)
}
or
val sum = data.sum
private[this] def findSdFromSquares(s: Double, ssq: Double) = { ... }
val standardDeviation = findSdFromSquares(sum, data.map(x => x*x).sum)
If you need to store a calculation for later use, then private val or private[this] val is the way to go, but if it's just an intermediate step on the computation, the options above are better.
Likewise, there's no harm in exposing a var if it is a part of the interface--a vector coordinate on a mutable vector for instance. But you should make them private (better yet: private[this], if you can!) when it's an implementation detail.
One important difference between Java and Scala here is that in Java you can not replace a public variable with getter and setter methods (or vice versa) without breaking source and binary compatibility. In Scala you can.
So in Java if you have a public variable, the fact that it's a variable will be exposed to the user and if you ever change it, the user has to change his code. In Scala you can replace a public var with a getter and setter method (or a public val with just a getter method) without the user ever knowing the difference. So in that sense no implementation details are exposed.
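A minimal sketch of that uniform-access property (Point is a made-up class); both versions are used as p.x and p.x = 5, so switching between them breaks nothing:
class PointV1 {
  var x: Int = 0                    // a plain public var
}

class PointV2 {                     // same client syntax: p.x and p.x = 5
  private[this] var _x: Int = 0
  def x: Int = _x                   // getter
  def x_=(value: Int): Unit = {     // setter method; enables `p.x = 5`
    require(value >= 0)
    _x = value
  }
}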
As an example, let's consider a rectangle class:
class Rectangle(val width: Int, val height: Int) {
val area = width * height
}
Now what happens if we later decide that we don't want the area to be stored as a variable, but rather it should be calculated each time it's called?
In Java the situation would be like this: If we had used a getter method and a private variable, we could just remove the variable and change the getter method to calculate the area instead of using the variable. No changes to user code needed. But since we've used a public variable, we are now forced to break user code :-(
In Scala it's different: we can just change the val to def and that's it. No changes to user code needed.
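Concretely, that change for the Rectangle above is a single keyword (a minimal sketch):
class Rectangle(val width: Int, val height: Int) {
  def area = width * height   // now recomputed on each call; callers see no difference
}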
Actually, some Scala developers tend to use default access too much. But you can find appropriate examples in famous Scala projects (for example, Twitter's Finagle).
On the other hand, creating objects as immutable values is the standard way in Scala. We don't need to hide all the attributes if they're completely immutable.
I'd like to answer the question with a more generic approach. I think the answer you are looking for has to do with the design paradigms on which Scala is built. Instead of the classical procedural / object-oriented approach, as you see in Java, functional programming is used to a much greater extent. I cannot cover all the code you mention, of course, but in general (well-written) Scala code will not need a lot of mutability.
As pointed out by Rex, vals are immutable, so there are few reasons for them not to be public. But as I see it, immutability is not a goal in itself but a result of functional programming. So if we consider functions as something like x -> function -> y, the function part becomes somewhat of a black box; we don't really care what it does, as long as it does it correctly. As the Haskell Wiki writes:
Purely functional programs typically operate on immutable data. Instead of altering existing values, altered copies are created and the original is preserved.
This also explains the reduced need for information hiding, since the parts we traditionally wanted to hide away are executed inside the functions and are thus hidden anyway.
So, to cut things short, I would argue that mutability and information hiding have become largely redundant in Scala. And why clutter things up with getters and setters when it can be avoided?
So far, implicit parameters in Scala do not look good to me -- they are too close to global variables. However, since Scala seems like a rather strict language, I am starting to doubt my own opinion :-).
Question: could you show a real-life (or close to it) good example of when implicit parameters really work? IOW: something more serious than showPrompt, that would justify such a language design.
Or, on the contrary, could you show a reliable language design (it can be imaginary) that would make implicits unnecessary? I think that even having no mechanism is better than implicits, because the code is clearer and there is no guessing.
Please note, I am asking about parameters, not implicit functions (conversions)!
Updates
Global variables
Thank you for all great answers. Maybe I clarify my "global variables" objection. Consider such function:
max(x: Int, y: Int): Int
you call it
max(5,6);
you could (!) do it like this:
max(x = 5, y = 6);
but in my eyes implicits works like this:
x = 5;
y = 6;
max()
it is not very different from such construct (PHP-like)
max() : Int
{
global x : Int;
global y : Int;
...
}
Derek's answer
This is a great example; however, if you can think of an equally flexible way of sending a message without using implicits, please post a counter-example. I am really curious about purity in language design ;-).
In a sense, yes, implicits represent global state. However, they are not mutable, which is the true problem with global variables -- you don't see people complaining about global constants, do you? In fact, coding standards usually dictate that you transform any magic numbers in your code into constants or enums, which are usually global.
Note also that implicits are not in a flat namespace, which is also a common problem with globals. They are explicitly tied to types and, therefore, to the package hierarchy of those types.
So, take your globals, make them immutable and initialized at the declaration site, and put them on namespaces. Do they still look like globals? Do they still look problematic?
But let's not stop there. Implicits are tied to types, and they are just as much "global" as types are. Does the fact that types are global bother you?
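To make "tied to types" concrete, here is a small sketch (Money is a made-up type): an instance placed in the companion object is found with no import at all, because implicit resolution follows the type rather than a flat namespace:
case class Money(cents: Long)
object Money {
  // Lives in the implicit scope of Money; no import needed at use sites.
  implicit val byCents: Ordering[Money] = Ordering.by(_.cents)
}

List(Money(500), Money(250)).max   // resolves Money.byCents via the companion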
As for use cases, there are many, but we can do a brief review based on their history. Originally, afaik, Scala did not have implicits. What Scala had were view bounds, a feature many other languages had. We can still see that today whenever you write something like T <% Ordered[T], which means the type T can be viewed as a type Ordered[T]. View bounds are a way of making automatic casts available on type parameters (generics).
Scala then generalized that feature with implicits. Automatic casts no longer exist, and, instead, you have implicit conversions -- which are just Function1 values and, therefore, can be passed as parameters. From then on, T <% Ordered[T] meant a value for an implicit conversion would be passed as parameter. Since the cast is automatic, the caller of the function is not required to explicitly pass the parameter -- so those parameters became implicit parameters.
Note that there are two concepts -- implicit conversions and implicit parameters -- that are very close, but do not completely overlap.
Anyway, view bounds became syntactic sugar for implicit conversions being passed implicitly. They would be rewritten like this:
def max[T <% Ordered[T]](a: T, b: T): T = if (a < b) b else a
def max[T](a: T, b: T)(implicit $ev1: Function1[T, Ordered[T]]): T = if ($ev1(a) < b) b else a
The implicit parameters are simply a generalization of that pattern, making it possible to pass any kind of implicit parameter, instead of just Function1. Actual uses for them then followed, and syntactic sugar for those uses came later.
One of them is Context Bounds, used to implement the type class pattern (pattern because it is not a built-in feature, just a way of using the language that provides similar functionality to Haskell's type class). A context bound is used to provide an adapter that implements functionality that is inherent in a class, but not declared by it. It offers the benefits of inheritance and interfaces without their drawbacks. For example:
def max[T](a: T, b: T)(implicit $ev1: Ordering[T]): T = if ($ev1.lt(a, b)) b else a
// later followed by the syntactic sugar
def max[T: Ordering](a: T, b: T): T = if (implicitly[Ordering[T]].lt(a, b)) b else a
You have probably used that already -- there's one common use case that people usually don't notice. It is this:
new Array[Int](size)
That uses a context bound of a class manifest to enable such array initialization. We can see that with this example:
def f[T](size: Int) = new Array[T](size) // won't compile!
You can write it like this:
def f[T: ClassManifest](size: Int) = new Array[T](size)
On the standard library, the context bounds most used are:
Manifest // Provides reflection on a type
ClassManifest // Provides reflection on a type after erasure
Ordering // Total ordering of elements
Numeric // Basic arithmetic of elements
CanBuildFrom // Collection creation
The latter three are mostly used with collections, with methods such as max, sum and map. One library that makes extensive use of context bounds is Scalaz.
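For instance, here is a small sketch of Numeric in action (total is a made-up helper):
// A generic sum using the Numeric context bound.
def total[T: Numeric](xs: List[T]): T = {
  val num = implicitly[Numeric[T]]
  xs.foldLeft(num.zero)(num.plus)
}

total(List(1, 2, 3))    // 6
total(List(1.5, 2.5))   // 4.0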
Another common usage is to decrease boilerplate on operations that must share a common parameter. For example, transactions:
def withTransaction(f: Transaction => Unit) = {
  val txn = new Transaction
  try { f(txn); txn.commit() }
  catch { case ex: Throwable => txn.rollback(); throw ex }
}
withTransaction { txn =>
op1(data)(txn)
op2(data)(txn)
op3(data)(txn)
}
Which is then simplified like this:
withTransaction { implicit txn =>
op1(data)
op2(data)
op3(data)
}
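For that simplified version to compile, each op must declare the transaction as an implicit parameter; a sketch (Data and the bodies are placeholders):
class Data   // placeholder for whatever the ops operate on
def op1(data: Data)(implicit txn: Transaction): Unit = ???
def op2(data: Data)(implicit txn: Transaction): Unit = ???
def op3(data: Data)(implicit txn: Transaction): Unit = ???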
This pattern is used with transactional memory, and I think (but I'm not sure) that the Scala I/O library uses it as well.
The third common usage I can think of is making proofs about the types that are being passed, which makes it possible to detect at compile time things that would, otherwise, result in run time exceptions. For example, see this definition on Option:
def flatten[B](implicit ev: A <:< Option[B]): Option[B]
That makes this possible:
scala> Option(Option(2)).flatten // compiles
res0: Option[Int] = Some(2)
scala> Option(2).flatten // does not compile!
<console>:8: error: Cannot prove that Int <:< Option[B].
Option(2).flatten // does not compile!
^
One library that makes extensive use of that feature is Shapeless.
I don't think the example of the Akka library fits in any of these four categories, but that's the whole point of generic features: people can use them in all sorts of ways, instead of the ways prescribed by the language designer.
If you like being prescribed to (like, say, Python does), then Scala is just not for you.
Sure. Akka's got a great example of it with respect to its Actors. When you're inside an Actor's receive method, you might want to send a message to another Actor. When you do this, Akka will bundle (by default) the current Actor as the sender of the message, like this:
trait ScalaActorRef { this: ActorRef =>
...
def !(message: Any)(implicit sender: ActorRef = null): Unit
...
}
The sender is implicit. In the Actor there is a definition that looks like:
trait Actor {
...
implicit val self = context.self
...
}
This creates the implicit value within the scope of your own code, and it allows you to do easy things like this:
someOtherActor ! SomeMessage
Now, you can do this as well, if you like:
someOtherActor.!(SomeMessage)(self)
or
someOtherActor.!(SomeMessage)(null)
or
someOtherActor.!(SomeMessage)(anotherActorAltogether)
But normally you don't. You just keep the natural usage that's made possible by the implicit value definition in the Actor trait. There are about a million other examples. The collection classes are a huge one. Try wandering around any non-trivial Scala library and you'll find a truckload.
One example would be the comparison operations on Traversable[A]. E.g. max or sort:
def max[B >: A](implicit cmp: Ordering[B]) : A
These can only be sensibly defined when there is an operation < on A. So, without implicits we’d have to supply the context Ordering[B] every time we’d like to use this function. (Or give up static type checking inside max and risk a runtime cast error.)
If however, an implicit comparison type class is in scope, e.g. some Ordering[Int], we can just use it right away or simply change the comparison method by supplying some other value for the implicit parameter.
Of course, implicits may be shadowed and thus there may be situations in which the actual implicit which is in scope is not clear enough. For simple uses of max or sort it might indeed be sufficient to have a fixed ordering trait on Int and use some syntax to check whether this trait is available. But this would mean that there could be no add-on traits and every piece of code would have to use the traits which were originally defined.
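As a small illustration of supplying another value for the implicit parameter:
val xs = List(3, 1, 2)
xs.max                          // 3, uses the implicit Ordering[Int] in scope
xs.max(Ordering[Int].reverse)   // 1, the implicit is overridden explicitly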
Addition:
Response to the global variable comparison.
I think you’re correct that in a code snippet like
implicit val num = 2
implicit val item = "Orange"
def shopping(implicit num: Int, item: String) = {
"I’m buying "+num+" "+item+(if(num==1) "." else "s.")
}
scala> shopping
res: java.lang.String = I’m buying 2 Oranges.
it may smell of rotten and evil global variables. The crucial point, however, is that there may be only one implicit variable per type in scope. Your example with two Ints is not going to work.
Also, this means that practically, implicit variables are employed only when there is a not necessarily unique yet distinct primary instance for a type. The self reference of an actor is a good example for such a thing. The type class example is another example. There may be dozens of algebraic comparisons for any type but there is one which is special.
(On another level, the actual line number in the code itself might also make for a good implicit variable as long as it uses a very distinctive type.)
You normally don’t use implicits for everyday types. And with specialised types (like Ordering[Int]) there is not too much risk in shadowing them.
Based on my experience, there is no really good example of using implicit parameters or implicit conversions.
The small benefit of using implicits (not needing to explicitly write a parameter or a type) is negligible compared to the problems they create.
I have been a developer for 15 years and have been working with Scala for the last 1.5 years.
I have seen many bugs that were caused by the developer not being aware that implicits were in use, and by a specific function actually returning a different type than the one specified, due to an implicit conversion.
I have also heard statements saying that if you don't like implicits, you don't have to use them.
This is not practical in the real world, since external libraries are often used, and a lot of them use implicits, so your code uses implicits whether you are aware of it or not.
You can write a code that has either:
import org.some.common.library.{TypeA, TypeB}
or:
import org.some.common.library._
Both versions will compile and run.
But they will not always produce the same results, since the second version may import implicit conversions that make the code behave differently.
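A contrived sketch of that effect (Lib and stringToInt are made up): the wildcard import silently brings a conversion into scope that changes what compiles and what values mean:
object Lib {
  import scala.language.implicitConversions
  implicit def stringToInt(s: String): Int = s.length   // a surprising conversion
}

// val n: Int = "hello"   // without the wildcard import this does not compile
import Lib._
val n: Int = "hello"      // now it compiles, and n == 5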
The 'bug' caused by this can surface a very long time after the code was written, if the values affected by the conversion were not exercised originally.
Once you encounter the bug, it's not an easy task to find the cause.
You have to do some deep investigation.
Even though you feel like a Scala expert once you have found the bug and fixed it by changing an import statement, you have actually wasted a lot of precious time.
Additional reasons why I am generally against implicits:
They make the code hard to understand (there is less code, but you don't know what it is doing).
Compilation time: Scala code compiles much more slowly when implicits are used.
In practice, it changes the language from statically typed to dynamically typed. It's true that by following very strict coding guidelines you can avoid such situations, but in the real world that's not always the case. Even using the IDE's 'remove unused imports' can leave your code compiling and running, but not behaving the same as before you removed the 'unused' imports.
There is no option to compile Scala without implicits (if there is, please correct me), and if there were such an option, none of the common community Scala libraries would compile.
For all the above reasons, I think that implicits are one of the worst practices that scala language is using.
Scala has many great features, and many not so great.
When choosing a language for a new project, implicits are one of the reasons against scala, not in favour of it. In my opinion.
Another good general usage of implicit parameters is to make the return type of a method depend on the type of some of the parameters passed to it. A good example, mentioned by Jens, is the collections framework, and methods like map, whose full signature usually is:
def map[B, That](f: (A) ⇒ B)(implicit bf: CanBuildFrom[GenSeq[A], B, That]): That
Note that the return type That is determined by the best fitting CanBuildFrom that the compiler can find.
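A small illustration with the pre-2.13 collections (where map still takes a CanBuildFrom): mapping a String can give back a String or a Vector, depending on which CanBuildFrom fits:
val s = "abc".map(_.toUpper)   // s: String = "ABC"
val v = "abc".map(_.toInt)     // v: IndexedSeq[Int] = Vector(97, 98, 99)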
For another example of this, see that answer. There, the return type of the method Arithmetic.apply is determined according to a certain implicit parameter type (BiConverter).
It's easy, just remember:
to declare the variable to be passed in as implicit too
to declare all the implicit params after the non-implicit params in a separate ()
e.g.
def myFunction(): Int = {
implicit val y: Int = 33
implicit val z: Double = 3.3
functionWithImplicit("foo") // calls functionWithImplicit("foo")(y, z)
}
def functionWithImplicit(foo: String)(implicit x: Int, d: Double): Int = x + d.toInt // some computation using the implicits
Implicit parameters are heavily used in the collection API. Many functions get an implicit CanBuildFrom, which ensures that you get the 'best' result collection implementation.
Without implicits you would either have to pass such a thing around all the time, which would make normal usage cumbersome, or use less specialized collections, which would be annoying because it would mean you lose performance/power.
I am commenting on this post a bit late, but I have started learning Scala recently.
Daniel and others have given nice background about the implicit keyword.
I will give my two cents on implicit variables from a practical-usage perspective.
Scala is well suited for writing Apache Spark code. In Spark, we have a spark context and, most likely, a configuration class that fetches the configuration keys/values from a configuration file.
Now, if I have an abstract class and I declare an object of configuration and spark context as follows:
import com.typesafe.config.{Config, ConfigFactory}
import org.apache.spark.{SparkConf, SparkContext}

abstract class MyImplicitClass {
  implicit val config: Config = ConfigFactory.load() // Typesafe Config, matching the Config type below
  val conf = new SparkConf().setMaster("local[*]").setAppName("myApp") // example values
  implicit val sc: SparkContext = new SparkContext(conf)
  def overrideThisMethod(implicit sc: SparkContext, config: Config): Unit
}
class MyClass extends MyImplicitClass {
  override def overrideThisMethod(implicit sc: SparkContext, config: Config): Unit = {
    /* Here I can define any number of methods that take the sc and config
       objects as implicit parameters. */
    def firstFn(firstParam: Int)(implicit sc: SparkContext, config: Config): Unit = {
      /* I can use "sc" and "config" as I wish: making an RDD or getting data from Cassandra, for example. */
      val myRdd = sc.parallelize(List("abc", "123"))
    }
    def secondFn(firstParam: Int)(implicit sc: SparkContext, config: Config): Unit = {
      /* The following are ways we can use "sc" and "config": */
      val keyspace = config.getString("keyspace")
      val tableName = config.getString("table")
      val hostName = config.getString("host")
      val userName = config.getString("username")
      val pswd = config.getString("password")
      implicit val cassandraConnectorObj = CassandraConnector(....)
      val cassandraRdd = sc.cassandraTable(keyspace, tableName)
    }
  }
}
As we can see in the code above, I have two implicit objects in my abstract class, and I have passed those two implicit variables as implicit parameters to the functions/methods.
I think this is a good use case to depict the usage of implicit variables.
This is just one of those "I was wondering..." questions.
Scala has immutable data structures and (optional) lazy vals etc.
How close can a Scala program be to one that is fully pure (in a functional programming sense) and fully lazy (or, as Ingo points out, can it be sufficiently non-strict)? What values are unavoidably mutable, and what evaluation is unavoidably eager?
Regarding laziness - currently, passing a parameter to a method is by default strict:
def square(a: Int) = a * a
but you can use call-by-name parameters:
def square(a: =>Int) = a * a
but this is not lazy in the sense that it computes the value only once when needed:
scala> square({println("calculating");5})
calculating
calculating
res0: Int = 25
There's been some work on adding lazy method parameters, but it hasn't been integrated yet (the declaration below should print "calculating" from above only once):
def square(lazy a: Int) = a * a
This is one piece that is missing, although you could simulate it with a local lazy val:
def square(ap: =>Int) = {
lazy val a = ap
a * a
}
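With this version, the earlier session should print "calculating" only once (a sketch of the expected REPL output):
scala> square({println("calculating");5})
calculating
res1: Int = 25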
Regarding mutability - there is nothing holding you back from writing immutable data structures and avoiding mutation. You can do this in Java or C as well. In fact, some immutable data structures rely on the lazy primitive to achieve better complexity bounds, but the lazy primitive can be simulated in other languages as well - at the cost of extra syntax and boilerplate.
You can always write immutable data structures, lazy computations and fully pure programs in Scala. The problem is that the Scala programming model allows writing non-pure programs as well, so the type checker can't always infer some properties of the program (such as purity) which it could infer if the programming model were more restrictive.
For example, in a language with pure expressions the a * a in the call-by-name definition above (a: =>Int) could be optimized to evaluate a only once, regardless of the call-by-name semantics. If the language allows side-effects, then such an optimization is not always applicable.
Scala can be as pure and lazy as you like, but a) the compiler won't keep you honest with regards to purity and b) it will take a little extra work to make it lazy. There's nothing too profound about this; you can even write lazy and pure Java code if you really want to (see here if you dare; achieving laziness in Java requires eye-bleeding amounts of nested anonymous inner classes).
Purity
Whereas Haskell tracks impurities via the type system, Scala has chosen not to go that route, and it's difficult to tack that sort of thing on when you haven't made it a goal from the beginning (and also when interoperability with a thoroughly impure language like Java is a major goal of the language).
That said, some believe it's possible and worthwhile to make the effort to document effects in Scala's type system. But I think purity in Scala is best treated as a matter of self-discipline, and you must be perpetually skeptical about the supposed purity of third-party code.
Laziness
Haskell is lazy by default but can be made stricter with some annotations sprinkled in your code... Scala is the opposite: strict by default but with the lazy keyword and by-name parameters you can make it as lazy as you like.
Feel free to keep things immutable. On the other hand, there's no side effect tracking, so you can't enforce or verify it.
As for non-strictness, here's the deal... First, if you choose to go completely non-strict, you'll be forsaking all of Scala's classes. Even Scalaz is not non-strict for the most part. If you are willing to build everything yourself, you can make your methods non-strict and your values lazy.
Next, I wonder if implicit parameters can be non-strict or not, or what would be the consequences of making them non-strict. I don't see a problem, but I could be wrong.
But, most problematic of all, function parameters are strict, and so are closure parameters.
So, while it is theoretically possible to go fully non-strict, it will be incredibly inconvenient.
In Scala, if I define a method called apply in a class or a top-level object, that method will be called whenever I append a pair of parentheses to an instance of that class and put the appropriate arguments for apply() between them. For example:
class Foo(x: Int) {
def apply(y: Int) = {
x*x + y*y
}
}
val f = new Foo(3)
f(4) // returns 25
So basically, object(args) is just syntactic sugar for object.apply(args).
How does Scala do this conversion?
Is there a globally defined implicit conversion going on here, similar to the implicit type conversions in the Predef object (but different in kind)? Or is it some deeper magic? I ask because it seems like Scala strongly favors consistent application of a smaller set of rules, rather than many rules with many exceptions. This initially seems like an exception to me.
I don't think there's anything deeper going on than what you have originally said: it's just syntactic sugar whereby the compiler converts f(a) into f.apply(a) as a special syntax case.
This might seem like a special-case rule, but a small number of such rules (for example, update) allow for DSL-like constructs and libraries.
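For example, update is the mirror-image sugar for assignment (a minimal sketch):
val m = scala.collection.mutable.Map[String, Int]()
m("one") = 1    // sugar for m.update("one", 1)
m("one")        // sugar for m.apply("one"); returns 1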
It is actually the other way around: an object or class with an apply method is the normal case, and a function definition is a way to implicitly construct an object with an apply method. In fact, every function you define is an instance of a FunctionN trait (where N is the number of arguments).
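A small sketch of that: a function literal is just an object with an apply method:
val succ = (x: Int) => x + 1           // an instance of Function1[Int, Int]
val succExplicit = new Function1[Int, Int] {
  def apply(x: Int): Int = x + 1       // what the literal desugars to
}
succ(1)           // really succ.apply(1); returns 2
succExplicit(1)   // returns 2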
Refer to section 6.6, "Function Applications", of the Scala Language Specification for more information on the topic.
I ask because it seems like Scala strongly favors consistent application of a smaller set of rules, rather than many rules with many exceptions.
Yes. And this rule belongs to this smaller set.