Why is the global ExecutionContext not a default parameter of the future block? - scala

It's really annoying when you write highly concurrent code with Futures or Actor responses and have to manually import ExecutionContext.Implicits.global. I tried to find a good explanation of why it isn't made a default parameter, like Strategy is in Scalaz Concurrent. That would be very helpful, so I wouldn't have to keep inserting and removing all those imports in the code. Or am I missing something?

The general trend seems to be to require the user to explicitly import things like implicits, extra operators or DSLs. I think this is a good thing since it makes things less "magic" and more understandable.
But nothing stops you from defining a package-wide implicit value for your code. Note that if the implicit ExecutionContext was always imported by default, you wouldn't be able to do this.
In the package object:
package object myawsomeconcurrencylibrary {
  implicit def defaultExecutionContext = scala.concurrent.ExecutionContext.global
}
In any class in the same package:
package myawsomeconcurrencylibrary

object Bla {
  future { ... } // the implicit from the package object is used unless you explicitly provide your own
}

Related

Scala-Lang recommended import statement style: package._ or package.ClassName?

Looking for best Scala-Lang practice for import statements. Don't see any recommendation here:
https://docs.scala-lang.org/style/naming-conventions.html#packages
https://github.com/scala/scala
So what is recommended?
Option (1):
import package._
Option (2):
import package.ClassName
This is generally a matter of taste, but there is an argument for avoiding ._ so that your code is more robust to library changes that might introduce unwanted symbols or cause other subtle changes that are difficult to track down.
Regarding implicits, wildcard imports have a potential advantage in that the name of the implicit can change without you having to fix the import statements. For example, consider:
object Qux {
  object Implicits {
    implicit class ExtraListOps[T](l: List[T]) {
      def shuffle: List[T] = scala.util.Random.shuffle(l)
    }
  }
}

import Qux.Implicits._

List(1, 2, 3, 4, 5).shuffle
Here we could rename ExtraListOps and yet would not have to modify the import statement. On the other hand, Joshua Suereth states:
You can pull implicits in individually without wildcards. Explicit imports have 'priority' over wildcard imports. Because of this, you can shadow wildcard imports with an explicit import.
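As a rough sketch of that shadowing rule (the objects A, B and Demo below are made up for illustration):

import scala.math.Ordering

object A { implicit val ordering: Ordering[Int] = Ordering.Int }
object B { implicit val ordering: Ordering[Int] = Ordering.Int.reverse }

object Demo {
  import A._          // the wildcard brings in A.ordering
  import B.ordering   // the explicit import shadows the wildcard-imported name

  val sorted = List(3, 1, 2).sorted // uses B.ordering, so the result is List(3, 2, 1)
}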
Also consider the discussion in the SIP "Wildcard imports considered harmful", where, for example, Ichoran states:
Superfluous clutter at the top of every file that can only be managed by an IDE, and even then not very well, is something I find a lot more harmful in most cases. If you get conflicting implicits and the compiler doesn't tell you exactly where they came from, that's a compiler issue. Once detected, the fix is easy enough (e.g. import org.foo.{troublesomeImplicit => _, _}).
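To make that last fix concrete, here is a minimal sketch (Helpers, troublesomeImplicit and useful are invented names) of hiding one member while keeping the rest of a wildcard import:

import scala.language.implicitConversions

object Helpers {
  implicit def troublesomeImplicit(s: String): Int = s.length
  def useful(x: Int): Int = x * 2
}

object Demo {
  // everything from Helpers except the conversion we want to keep out of scope
  import Helpers.{troublesomeImplicit => _, _}

  val y = useful(21)      // still available
  // val n: Int = "oops"  // would no longer compile: the implicit conversion is hidden
}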

Writing macros which generate statements

With the intention of reducing boilerplate for the end user when deriving instances of a certain type class (let's take Showable, for example), I aim to write a macro which will autogenerate the instance names. E.g.:
// calling this:
Showable.derive[Int]
Showable.derive[String]
// should expand to this:
// implicit val derivedShowableInstance0 = new Showable[Int] { ... }
// implicit val derivedShowableInstance1 = new Showable[String] { ... }
I tried to approach the problem the following way, but the compiler complained that the expression should return a <no type> instead of Unit:
object Showable {
  def derive[a] = macro Macros.derive[a]

  object Macros {
    private var instanceNameIndex = 0

    def derive[a: c.WeakTypeTag](c: Context) = {
      import c.universe._
      import Flag._

      val newInstanceDeclaration = ...

      c.Expr[Unit](
        ValDef(
          Modifiers(IMPLICIT),
          newTermName("derivedShowableInstance" + instanceNameIndex),
          TypeTree(),
          newInstanceDeclaration
        )
      )
    }
  }
}
I get that a val declaration is not exactly an expression, and hence Unit might not be appropriate, but then how do I make it right?
Is this at all possible? If not then why, will it ever be, and are there any workarounds?
Yes, that's right. Declarations/definitions aren't expressions, so they need to be wrapped in Unit-returning blocks to become expressions. Typically Scala does this automatically, but in this particular case you need to do it yourself.
However, if you wrap a definition in a block, then it becomes invisible from the outside. That's a limitation of the current macro system, which strictly follows the metaphor of "a macro application is very much like a typed function call". Function calls don't bring new members into scope, so neither do def macros - both for technical and philosophical reasons. As shown in example #3 of my recent "What Are Macros Good For?" talk, def macros can work around this limitation by using structural types, but this doesn't look particularly related to your question, so I'll omit the details.
On the other hand, there are some ideas how to overcome this limitation with new macro flavors. Macro annotations show that it's technically feasible for the macro engine to hook into new member creation, but we'd like to get more experience with macro annotations before bringing them into trunk. Some details about this can be found in my "Philosophy of Scala Macros" presentation. This can be useful in your situation, but I still won't go into details, because I think there's a much better solution for this particular case (feel free to ask for elaboration in comments, though!).
What I'd like to recommend you is to use materialization as described in http://docs.scala-lang.org/overviews/macros/implicits.html. With materializing macros you can automatically generate type class instances for the user without having the user write any code at all. Would that fit your use case?
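For reference, a minimal sketch of such a materializer, assuming Scala 2.11+ with quasiquotes (the Showable trait and the toString-based instance body are placeholders, not the real derivation you would write):

import scala.language.experimental.macros
import scala.reflect.macros.whitebox

trait Showable[T] { def show(value: T): String }

object Showable {
  // whenever an implicit Showable[T] is missing, the compiler expands this macro
  implicit def materialize[T]: Showable[T] = macro materializeImpl[T]

  def materializeImpl[T: c.WeakTypeTag](c: whitebox.Context): c.Expr[Showable[T]] = {
    import c.universe._
    val tpe = weakTypeOf[T]
    // placeholder implementation: a real derivation would inspect tpe's members
    c.Expr[Showable[T]](q"""
      new Showable[$tpe] { def show(value: $tpe): String = value.toString }
    """)
  }
}

With this in place, a user writes no code at all: asking for an implicit Showable[Int] (e.g. via implicitly[Showable[Int]]) triggers the materializer.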

how to understand the following scala call

I have quite a puzzling question. I am playing with squeryl, and found that when I used:
package models
import org.squeryl.{Schema, KeyedEntity}
object db extends Schema {
  val userTable = table[User]("User")

  on(userTable)(b => declare(
    b.email is (unique, indexed("idxEmailAddresses"))
  ))
}
I had to import org.squeryl.PrimitiveTypeMode._.
But this does not make sense to me. The is method is defined in org.squeryl.dsl.NonNumericalExpression, so why do I have to include the seemingly irrelevant import org.squeryl.PrimitiveTypeMode._?
Thank you.
I agree with @sschaef that this is due to required implicit conversions. When APIs (like squeryl) decide to build a DSL (domain specific language) in order to get a slick-looking way to code against their API, implicit conversions will be required. The core API probably takes certain types of objects that would be cumbersome or ugly to instantiate directly in your code. Thus, they use implicit conversions to do some of the lifting for you and keep the DSL as clean as possible. If you check out the Scaladoc for the PrimitiveTypeMode object, you can see the many implicit defs that are defined on it. Implicit conversions (used in "pimp my library" style enrichments) 'upconvert' from one type into another to gain access to more functionality on the enriched class. When the code is compiled, those implicit conversions are made explicit in the final compiled code.
http://squeryl.org/api/index.html#org.squeryl.PrimitiveTypeMode$
Also, I believe the implicit conversion you are looking for is:
import org.squeryl.PrimitiveTypeMode.string2ScalarString
which is inherited from org.squeryl.dsl.QueryDsl.
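To illustrate the general pattern (this is a made-up mini-DSL, not squeryl's actual API): the DSL method lives on a wrapper type, and a wildcard import supplies the implicit conversion that lifts plain values into it.

import scala.language.implicitConversions

object MyDsl {
  class RichColumn(name: String) {
    def is(constraint: String): String = name + " IS " + constraint
  }
  // brought into scope by "import MyDsl._", much like PrimitiveTypeMode._ does for squeryl
  implicit def stringToRichColumn(name: String): RichColumn = new RichColumn(name)
}

object Example {
  import MyDsl._

  val declaration = "email" is "unique" // without the import, this line would not compile
}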

When should I make methods with implicit argument in Scala?

I wrote some code using the Play framework in Scala which looks like the following:
object Application extends Controller {
  def hoge = Action { implicit request =>
    val username = MyCookie.getName.get
    Ok("hello " + username)
  }
}

object MyCookie {
  def getName(implicit request: RequestHeader) = {
    request.cookies.get("name").map(_.value)
  }
}
I got a code review from my coworker. He said this code is not readable because of the implicit parameter. I couldn't respond to his opinion. So could you tell me the best way to use implicit parameters? When should I use them?
You should use implicit parameters when there is almost always a "right" way to do things, and you want to ignore those details almost all the time; or when there often is no way to do things, and the implicits provide the functionality for those things that work.
For an example of the first case, in scala.concurrent.Future, almost every method takes an implicit ExecutionContext. You almost never care what your ExecutionContext is from call to call; you just want it to work. But when you need to change the execution context you can supply it as an explicit parameter.
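As a small sketch of that first case (the fixed thread pool and the names below are arbitrary examples, not anything the question requires):

import java.util.concurrent.Executors
import scala.concurrent.{ExecutionContext, Future}

object FutureContexts {
  // most of the time, the default is what you want
  import scala.concurrent.ExecutionContext.Implicits.global
  val everyday: Future[Int] = Future(1 + 1)

  // but you can override it explicitly at a single call site when it matters
  val blockingPool: ExecutionContext =
    ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(4))
  val special: Future[Int] = Future(veryExpensiveComputation())(blockingPool)

  def veryExpensiveComputation(): Int = 42
}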
For an example of the second case, look at the CanBuildFroms in the collections library. You cannot build anything from anything; certain capabilities are supplied, and the lack of an implicit that, say, lets you package up a bunch of Vector[Option[String]]s into a HashSet[Char] is one major way to keep the library powerful and flexible yet sane.
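And a rough sketch of the second case, under Scala 2.12-era collections (the fill helper is invented): a target collection type is accepted only when an implicit CanBuildFrom for it exists.

import scala.collection.generic.CanBuildFrom

object BuildDemo {
  // builds n copies of elem into whatever collection type the caller asks for,
  // provided the compiler can find a suitable CanBuildFrom
  def fill[CC[_]](n: Int, elem: Int)(implicit cbf: CanBuildFrom[Nothing, Int, CC[Int]]): CC[Int] = {
    val builder = cbf()
    (1 to n).foreach(_ => builder += elem)
    builder.result()
  }

  val v = fill[Vector](3, 7) // Vector(7, 7, 7)
  val s = fill[Set](3, 7)    // Set(7)
  // fill[Option](3, 7) would not compile: there is no CanBuildFrom[Nothing, Int, Option[Int]]
}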
You are doing neither: apparently you're just using it to save a little typing in one spot at the expense of the other spot. And, in this case, in doing so you've made it less obvious how things work, as you have to look all over the place to figure out where that implicit request actually gets used. If you want to save typing, you're much better off using short variable names but being explicit about it:
Action{ req => val name = MyCookie.getName(req).get; Ok("hello "+name) }

How should I organize implicits in my Scala application?

Having written a few Scala tools, I'm trying to come to grips with the best way to arrange my code - particularly implicits. I have two goals:
Sometimes, I want to be able to import just the implicits I ask for.
Other times, I want to just import everything.
To avoid duplicating the implicits, I've come up with this structure (similar to the way scalaz is arranged):
case class StringW(s: String) {
  def contrived = s + "?"
}

trait StringWImplicits {
  implicit def To(s: String) = StringW(s)
  implicit def From(sw: StringW) = sw.s
}

object StringW extends StringWImplicits

// Elsewhere on Monkey Island
object World extends StringWImplicits with ListWImplicits with MoreImplicits
This allows me to just
import StringW._ // Selective import
or (in most cases)
import World._ // Import everything
How does everyone else do it?
I think that implicit conversions are dangerous if you don't know where they are coming from. In my case, I put my implicits in a Conversions class and import it as close to the use as possible:
def someMethod(d: Date): Unit = {
  import mydate.Conversions._

  val tz = TimeZone.getDefault
  val timeOfDay = d.getTimeOfDay(tz) // implicit used here
  ...
}
I'm not sure I like "inheriting" implicits from various traits for the same reason it was considered bad Java practice to implement an interface so you could use its constants directly (static imports are preferred instead).
I usually put implicit conversions in an object whose name clearly signals that what is being imported is an implicit conversion.
For example, if I have a class com.foo.bar.FilthyRichString, the implicit conversions would go into com.foo.bar.implicits.FilthyRichStringImplicit. I know the names are a bit long, but that's why we have IDEs (and Scala IDE support is getting better). The reason I do this is that I feel it is important that all the implicit conversions can be clearly seen in a 10-second code review. I could look at the following code:
// other imports
import com.foo.bar.FilthyRichString
import com.foo.bar.util.Logger
import com.foo.bar.util.FileIO
import com.foo.bar.implicits.FilthyRichStringImplicit._
import com.foo.bar.implicits.MyListImplicit._
// other implicits
and at a glance see all the implicit conversions that are active in this source file. They would also all be gathered together if you use the convention that imports are grouped by package, with a blank line between different packages.
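For concreteness, a rough sketch of what one of those conversion objects might contain (FilthyRichString and its method are placeholders, not real library code):

package com.foo.bar.implicits

import com.foo.bar.FilthyRichString // the plain class being enriched
import scala.language.implicitConversions

// one enrichment per object, so a single import line documents exactly what is in scope
object FilthyRichStringImplicit {
  implicit def toFilthyRichString(s: String): FilthyRichString = new FilthyRichString(s)
}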
Along the lines of the same argument, I wouldn't like a catch-all object that holds all of the implicit conversions. In a big project, would you really use all of the implicit conversions in all your source files? I think that doing that means very tight coupling between different parts of your code.
Also, a catch-all object is not very good for documentation. In the case of explicitly writing all the implicit conversions used in a file, one can just look at your import statements and straight away jump to the documentation of the implicit class. In the case of a catch-all object, one would have to look at that object (which in a big project might be huge) and then search for the implicit conversion they are after.
I agree with oxbow_lakes that having implicit conversions in traits is bad because of the temptation to inherit from them, which is, as he said, bad practice. Along those lines, I would make the objects holding the implicit conversions final, just to avoid the temptation altogether. His idea of importing them as close to the use as possible is very nice as well, if implicit conversions are used only sparingly in the code.
-- Flaviu Cipcigan