I have several implicit contexts in my application, like:
import scala.collection.JavaConversions._
import HadoopConversion._
etc.
Right now I have to copy-paste all those imports into each file. Is it possible to combine them in one file and make only one import?
A good technique that some libraries provide by default is bundling implicits into a trait. This lets you compose sets of implicits by defining a trait that extends other implicit-bundling traits. You can then use the bundle at the top of your Scala file with the following:
import MyBundleOfImplicits._
Or be more selective by mixing it in only where you need it:
object Main extends App with MyBundleOfImplicits {
// ...
}
Unfortunately, with something like JavaConversions, to use this method you will need to redefine all the implicits you want to use inside a trait:
trait JavaConversionsImplicits {
  import java.{lang => jl}
  import java.{util => ju}
  import scala.collection.JavaConversions

  implicit def asJavaIterable[A](i: Iterable[A]): jl.Iterable[A] = JavaConversions.asJavaIterable(i)
  implicit def asJavaIterator[A](i: Iterator[A]): ju.Iterator[A] = JavaConversions.asJavaIterator(i)
}
trait MyBundleOfImplicits extends JavaConversionsImplicits with OtherImplicits
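Note that the import MyBundleOfImplicits._ form requires an object, since you cannot import from a trait directly. A minimal sketch, where OtherImplicits is a hypothetical stand-in for whatever other bundles you have:
// Hypothetical stand-in for another implicit bundle
trait OtherImplicits {
  implicit class RichInt(i: Int) {
    def squared: Int = i * i
  }
}

// The trait supports mixing in; the object of the same name supports importing
object MyBundleOfImplicits extends MyBundleOfImplicits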
Scala does not have first-class imports, so the answer to your question is no. There is an exception for the Scala REPL, though: you can put all your imports in a file and tell the REPL where it is located. See this question.
The other answers/comments are already comprehensive. But if you just want to reduce copy-pasting, all mainstream IDEs and text editors support text templating ('live templates' in IntelliJ IDEA, 'templates' in Eclipse, 'snippets' in TextMate, ...), which will definitely make your life easier.
I'm a Scala newbie, using PySpark extensively (on Databricks, FWIW). I'm finding that Protobuf deserialization is too slow for me in Python, so I'm porting my deserialization UDF to Scala.
I've compiled my .proto files to Scala and then to a JAR using ScalaPB, as described here.
When I try to use these instructions to create a UDF like this:
import gnmi.gnmi._
import org.apache.spark.sql.{Dataset, DataFrame, functions => F}
import spark.implicits.StringToColumn
import scalapb.spark.ProtoSQL
// import scalapb.spark.ProtoSQL.implicits._
import scalapb.spark.Implicits._
val deserialize_proto_udf = ProtoSQL.udf { bytes: Array[Byte] => SubscribeResponse.parseFrom(bytes) }
I get the following error:
command-4409173194576223:9: error: could not find implicit value for evidence parameter of type frameless.TypedEncoder[Array[Byte]]
val deserialize_proto_udf = ProtoSQL.udf { bytes: Array[Byte] => SubscribeResponse.parseFrom(bytes) }
I've double-checked that I'm importing the correct implicits, to no avail. I'm pretty fuzzy on implicits, evidence parameters, and Scala in general.
I would really appreciate it if someone could point me in the right direction. I don't even know how to start diagnosing!
Update
It seems like frameless doesn't include an implicit encoder for Array[Byte]?
This works:
frameless.TypedEncoder[Byte]
This does not:
frameless.TypedEncoder[Array[Byte]]
The code for frameless.TypedEncoder seems to include a generic Array encoder, but I'm not sure I'm reading it correctly.
@Dmytro, thanks for the suggestion. That helped.
Does anyone have ideas about what is going on here?
Update
OK, progress: this looks like a Databricks issue. I think the notebook does something like the following on startup:
import spark.implicits._
I'm using ScalaPB, which requires that you don't do that.
I'm now hunting for a way to disable that automatic import, or to "unimport" or shadow those implicits after they get imported.
If spark.implicits._ are already imported, then a way to "unimport" them (hide or shadow them) is to create a duplicate object and import it too:
import org.apache.spark.sql.{SQLContext, SQLImplicits}

object implicitShadowing extends SQLImplicits with Serializable {
  // never invoked; the object exists only so its members shadow spark.implicits._
  protected override def _sqlContext: SQLContext = ???
}

import implicitShadowing._
Testing with case class Person(id: Long, name: String):
// no import
List(Person(1, "a")).toDS() // doesn't compile, value toDS is not a member of List[Person]
import spark.implicits._
List(Person(1, "a")).toDS() // compiles
import spark.implicits._
import implicitShadowing._
List(Person(1, "a")).toDS() // doesn't compile, value toDS is not a member of List[Person]
See also:
How to override an implicit value?
Wildcard Import, then Hide Particular Implicit?
How to override an implicit value, that is imported?
How can an implicit be unimported from the Scala repl?
Not able to hide Scala Class from Import
Please check whether this helps.
A possible problem is that you don't want to just unimport spark.implicits._ (scalapb.spark.Implicits._); you probably want to import scalapb.spark.ProtoSQL.implicits._ too. And I don't know whether implicitShadowing._ shadows some of those as well.
Another possible workaround is to resolve implicits manually and use them explicitly.
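For example, a quick diagnostic (a sketch using only the imports already shown in the question) is to summon the instance explicitly, so the compiler points at exactly the piece that is missing:
import scalapb.spark.Implicits._

// If this fails to compile, the instance really is absent from the import above;
// if it compiles, the problem is a conflicting implicit (e.g. spark.implicits._)
// rather than a missing one.
val enc = implicitly[frameless.TypedEncoder[Array[Byte]]]
A value resolved this way can then be passed by hand wherever an implicit parameter is expected, bypassing the ambiguous implicit scope entirely.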
I noticed that Monocle has implementations of lens laws that are used to test the library's internals. They seem to be nicely generalized and modularized. I tried to use them to test my own lenses, but I am lost in the jungle of dependencies. Has anybody tried to do this, and could you post an example? The documentation does not seem to talk about laws at all. Thank you.
To elaborate, here is what I am trying to do (fumbling; I am not sure if this is the intended way to use the API):
it should "pass the LensLaws" in check {
  forAll { (c: (String, Int), a: String) =>
    new monocle.law.LensLaws(l).setGet(c, a)
  }
}
where l is the Monocle lens, visible in scope. I am receiving the following error message:
No implicit view available from monocle.internal.IsEq[String] => org.scalacheck.Prop
As far as I can see, the setGet law constructs an IsEq object, and I was not able to find out how to turn it into a Prop (or a Boolean).
I can also see that the framework uses a function checkAll to test all LensLaws at the same time, but I could not get this to work in my own code either. Any help appreciated.
The following works for me
import org.scalatest.FunSuite
import org.typelevel.discipline.scalatest.Discipline

class LensesSuite extends FunSuite with Discipline {
  import scalaz._
  import Scalaz._

  // l is your Monocle lens, in scope together with the Arbitrary and Equal
  // instances its laws need
  checkAll("Lens l", monocle.law.discipline.LensTests(l))
}
It turns out that the main problem was my relatively shallow knowledge of ScalaTest. checkAll is a checker provided by org.typelevel.discipline.scalatest.Discipline, which only works in FunSuite (not in the FlatSpec I was using). It took me ages to figure that out...
I still have no idea how to elegantly use this RuleSet (LensTests) in another spec style. It seems strange that Monocle's choice of RuleSet would force the spec style on the projects using these tests.
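One workaround, sketched here under the assumption that discipline's RuleSet exposes its combined properties via all.properties, is to register each law as an individual test from whatever spec style you prefer, using ScalaTest's ScalaCheck support:
import org.scalatest.FlatSpec
import org.scalatest.prop.Checkers

class LensesFlatSpec extends FlatSpec with Checkers {
  import scalaz._
  import Scalaz._

  behavior of "Lens l"

  // one FlatSpec test per law in the rule set; l is assumed in scope as above
  for ((name, prop) <- monocle.law.discipline.LensTests(l).all.properties)
    it should s"satisfy $name" in check(prop)
}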
I see a couple of questions that make use of the scalaz Monad for what looks like a scala.concurrent.Future, here and here. I haven't seen a satisfactory way of providing this as an implicit type class instance without using a global execution context, yet I feel that the import of these type classes shouldn't have to have static knowledge of the context.
Is there something I am missing here?
(I'm assuming they are not using scalaz.concurrent.Future.)
The ExecutionContext just needs to be implicitly available at the call site where your Monad is known to be Future. I agree there is some awkwardness in potentially having multiple different instances of the type class in your program, but there is no need to depend on an implementation of it statically.
import scala.concurrent.{ExecutionContext, Future}
import scalaz._
import Scalaz._

def foo[A, T[_]: Traverse, M[_]: Monad](t: T[M[A]]): M[T[A]] =
  implicitly[Traverse[T]].sequence(t)

// Monad[Future] is only materialized here, where an ExecutionContext is in scope
def bar(l: List[Future[Int]])(implicit ctx: ExecutionContext): Future[List[Int]] =
  foo(l)
https://github.com/scalaz/scalaz/blob/v7.1.0/core/src/main/scala/scalaz/std/Future.scala#L8
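For instance, a call site only has to supply a context (a minimal sketch using the global one):
import scala.concurrent.ExecutionContext.Implicits.global

// the context is captured by the Monad[Future] instance at this call site
val result: Future[List[Int]] = bar(List(Future(1), Future(2)))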
I have a quite puzzling question. I was playing with squeryl and found that when I used:
package models

import org.squeryl.{Schema, KeyedEntity}

object db extends Schema {
  val userTable = table[User]("User")

  on(userTable)(b => declare(
    b.email is (unique, indexed("idxEmailAddresses"))
  ))
}
I had to add import org.squeryl.PrimitiveTypeMode._. But this does not make sense to me: here, is is defined on org.squeryl.dsl.NonNumericalExpression, so why do I have to include the seemingly irrelevant import org.squeryl.PrimitiveTypeMode._?
Thank you.
I agree with @sschaef that this is due to required implicit conversions. When APIs (like squeryl) decide to build a DSL (domain-specific language) in order to get a slick-looking way to code against their API, implicit conversions are required. The core API probably takes certain types of objects that would be cumbersome or ugly to instantiate directly in your code, so it uses implicit conversions to do some of that lifting for you and keep the DSL as clean as possible. If you check out the Scaladoc for the PrimitiveTypeMode object, you can see the many implicit defs that are defined on it. Implicit conversions (used in 'pimp my library' enrichment) 'upconvert' from one type into another to gain access to more functionality on the enriched class. When the code is compiled, the applied implicit conversions are included explicitly in the final compiled code.
http://squeryl.org/api/index.html#org.squeryl.PrimitiveTypeMode$
Also, I believe the implicit conversion you are looking for is:
import org.squeryl.PrimitiveTypeMode.string2ScalarString
which is inherited from org.squeryl.dsl.QueryDsl.
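As a toy illustration of why DSLs force such imports (simplified, and not squeryl's actual code): the method you call lives on a wrapper type, and only the imported conversion can produce that wrapper.
import scala.language.implicitConversions

object ToyDsl {
  class ColumnDecl(name: String) {
    def is(constraint: String): String = s"$name is $constraint"
  }
  // is lives on ColumnDecl, but you call it on a plain String,
  // so this conversion must be in scope at the call site
  implicit def stringToColumnDecl(s: String): ColumnDecl = new ColumnDecl(s)
}

import ToyDsl._
val decl = "email" is "unique" // compiles only with the import above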
Having written a few Scala tools, I'm trying to come to grips with the best way to arrange my code, particularly implicits. I have two goals:
Sometimes, I want to be able to import just the implicits I ask for.
Other times, I want to just import everything.
To avoid duplicating the implicits, I've come up with this structure (similar to the way scalaz is arranged):
case class StringW(s: String) {
  def contrived = s + "?"
}

trait StringWImplicits {
  implicit def To(s: String) = StringW(s)
  implicit def From(sw: StringW) = sw.s
}

object StringW extends StringWImplicits

// Elsewhere on Monkey Island
object World extends StringWImplicits with ListWImplicits with MoreImplicits
This allows me to just
import StringW._ // Selective import
or (in most cases)
import World._ // Import everything
How does everyone else do it?
I think that implicit conversions are dangerous if you don't know where they are coming from. In my case, I put my implicits in a Conversions object and import it as close to the use site as possible:
def someMethod(d: Date): Unit = {
  import mydate.Conversions._
  val tz = TimeZone.getDefault
  val timeOfDay = d.getTimeOfDay(tz) // implicit used here
  ...
}
I'm not sure I like "inheriting" implicits from various traits, for the same reason it was considered bad Java practice to implement an interface just to use its constants directly (static imports are preferred instead).
I usually keep implicit conversions in an object whose name clearly signals that what is imported from it is an implicit conversion.
For example, if I have a class com.foo.bar.FilthyRichString, the implicit conversions would go into com.foo.bar.implicits.FilthyRichStringImplicit. I know the names are a bit long, but that's why we have IDEs (and Scala IDE support is getting better). I do it this way because I feel it is important that all the active implicit conversions can be identified in a ten-second code review. With that convention, I can look at the following code:
// other imports
import com.foo.bar.FilthyRichString
import com.foo.bar.util.Logger
import com.foo.bar.util.FileIO
import com.foo.bar.implicits.FilthyRichStringImplicit._
import com.foo.bar.implicits.MyListImplicit._
// other implicits
and at a glance see all the implicit conversions that are active in this source file. They are also all gathered together if you follow the convention that imports are grouped by package, with a blank line between different packages.
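A sketch of one such file, following the naming convention above (the class is assumed to have a single-String constructor; the conversion body is made up for illustration):
// file: com/foo/bar/implicits/FilthyRichStringImplicit.scala
package com.foo.bar.implicits

import scala.language.implicitConversions
import com.foo.bar.FilthyRichString

object FilthyRichStringImplicit {
  // the only job of this file: make FilthyRichString's methods
  // available on plain Strings
  implicit def toFilthyRichString(s: String): FilthyRichString =
    new FilthyRichString(s)
}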
Along the lines of the same argument, I wouldn't like a catch-all object that holds all of the implicit conversions. In a big project, would you really use all of the implicit conversions in all your source files? I think that doing that means very tight coupling between different parts of your code.
Also, a catch-all object is not very good for documentation. In the case of explicitly writing all the implicit conversions used in a file, one can just look at your import statements and straight away jump to the documentation of the implicit class. In the case of a catch-all object, one would have to look at that object (which in a big project might be huge) and then search for the implicit conversion they are after.
I agree with oxbow_lakes that having implicit conversions in traits is bad because of the temptation to inherit from them, which is, as he said, bad practice. Along those lines, I would make the objects holding the implicit conversions final, just to avoid the temptation altogether. His idea of importing them as close to the use site as possible is very nice as well, if implicit conversions are used only sparingly in the code.
-- Flaviu Cipcigan