I am looking at this akka-http docs example (which links to akka-http test code):
Marshalling
This is actually a test from the GitHub code MarshalSpec.scala. My question is: where is the implicit Marshaller imported here? I looked at the imports and couldn't find it. I tried using IntelliJ to show me implicit imports, but I still couldn't figure it out. Where is the import statement that brings into scope the implicit Marshaller passed to:
val entityFuture = Marshal(string).to[MessageEntity]
at line 21?
It calls
def to[B](implicit m: Marshaller[A, B], ec: ExecutionContext): Future[B] =
in Marshal.scala and passes an implicit m: Marshaller, which I can't pinpoint.
Marshaller extends PredefinedToEntityMarshallers, which provides marshalling from String.
But why does Marshaller come with its own implicits? Because Scala searches for implicits in the implicit scope of the type arguments: Where does Scala look for implicits?
So, Marshaller comes with its own marshallers ready to use :)
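The mechanism can be sketched with a plain type class; Show here is a hypothetical stand-in for Marshaller, showing why no import is needed when the instance lives in the companion object:

```scala
// Hypothetical stand-in for Marshaller: a plain type class whose
// instances live in its companion object.
trait Show[A] { def show(a: A): String }

object Show {
  // In the companion object, so it is part of the implicit scope of
  // Show[Int]: the compiler finds it without any import at the call site.
  implicit val intShow: Show[Int] = new Show[Int] {
    def show(a: Int): String = s"Int($a)"
  }
}

def render[A](a: A)(implicit s: Show[A]): String = s.show(a)

// No import needed; the instance is found via Show's companion object.
val rendered = render(42)
```

This is exactly how Marshal(string).to[MessageEntity] resolves: the compiler looks inside the companion of Marshaller (and of the type arguments) before giving up.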
Related
While trying to understand Akka marshalling/unmarshalling, I found a lot of Scala implicit magic going on in the background and under the hood.
Question:
Is there a way to find out which implicit constructs are in effect during execution?
Things that would be useful to know:
- what implicit declarations and conversions are effective
- where they are declared
Maybe an IDE plugin could do this, to be used while debugging?
I think this would help in understanding Akka marshalling/unmarshalling, but it would also be useful anywhere complex implicit features are used.
Implicits are selected at compile time.
With -Xlog-implicit-conversions:
scala 2.13.0-M5> "42".toInt
                 ^
                 applied implicit conversion from String("42") to ?{def toInt: ?} = implicit def augmentString(x: String): scala.collection.StringOps
res0: Int = 42

scala 2.13.0-M5> "42".toInt //print<TAB>
scala.Predef.augmentString("42").toInt // : Int
-Xlog-implicits explains when implicits do not apply.
IntelliJ has "show implicits".
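You can also verify the REPL's expansion by writing the conversion out by hand; both expressions are the same call once the compiler inserts augmentString for you:

```scala
// The implicit conversion, written out explicitly: the compiler turns the
// first expression into the second one.
val viaImplicit: Int = "42".toInt
val explicit: Int    = scala.Predef.augmentString("42").toInt
```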
Does Scala have a way to get the contained class(es) of a collection? i.e. if I have:
val foo = Vector[Int]()
Is there a way to get back classOf[Int] from it?
(Just checking the first element doesn't work since it might be empty.)
You can use TypeTag:
import scala.reflect.runtime.universe._
def getType[F[_], A: TypeTag](as: F[A]) = typeOf[A]
val foo = Vector[Int]()
getType(foo)
Not from the collection itself, but if you receive it as a parameter to a method, you could add an implicit TypeTag to that method to obtain the type at runtime. E.g.
def mymethod[T](x: Vector[T])(implicit tag: TypeTag[T]) = ...
See https://docs.scala-lang.org/.../typetags-manifests.html for details.
Technically you can do it by using TypeTag, or Typeable/TypeCase from the Shapeless library (see link). But I just want to note that all these tricks are really advanced solutions for cases where there is no better way to get the task done without digging into type parameters.
All type parameters in Scala and Java are subject to type erasure at runtime, and if you catch yourself wanting to extract this information from the class, it might be a good sign that the solution you are trying to implement should be redesigned.
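Putting the answers above together, a minimal runnable sketch (this assumes the scala-reflect module is on the classpath, which it is in the plain scala REPL):

```scala
import scala.reflect.runtime.universe._

// The TypeTag context bound asks the compiler to materialize the element
// type at the call site, so it survives erasure even for an empty Vector.
def elementType[A: TypeTag](xs: Vector[A]): Type = typeOf[A]

val foo = Vector[Int]()
val tpe = elementType(foo)  // works even though foo has no elements
```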
I am trying to write some use cases for Apache Flink. One error I run into pretty often is
could not find implicit value for evidence parameter of type org.apache.flink.api.common.typeinfo.TypeInformation[SomeType]
My problem is that I can't really nail down when they happen and when they don't.
The most recent example of this would be the following
...
val largeJoinDataGen = new LargeJoinDataGen(dataSetSize, dataGen, hitRatio)
val see = StreamExecutionEnvironment.getExecutionEnvironment
val newStreamInput = see.addSource(largeJoinDataGen)
...
where LargeJoinDataGen extends GeneratorSource[(Int, String)] and GeneratorSource[T] extends SourceFunction[T], both defined in separate files.
When trying to build this I get
Error:(22, 39) could not find implicit value for evidence parameter of type org.apache.flink.api.common.typeinfo.TypeInformation[(Int, String)]
val newStreamInput = see.addSource(largeJoinDataGen)
1. Why is there an error in the given example?
2. What would be a general guideline when these errors happen and how to avoid them in the future?
P.S.: this is my first Scala project and my first Flink project, so please be patient.
Instead of defining the implicits yourself, you can add an import:
import org.apache.flink.streaming.api.scala._
That will also help.
This mostly happens when you have user code, i.e. a source or a map function or something of that nature that has a generic parameter. In most cases you can fix that by adding something like
implicit val typeInfo = TypeInformation.of(classOf[(Int, String)])
If your code is inside another method that has a generic parameter you can also try adding a context bound to the generic parameter of the method, as in
def myMethod[T: TypeInformation](input: DataStream[Int]): DataStream[T] = ...
My problem is that I cant really nail down when they happen and when they dont.
They happen when an implicit parameter is required. If we look at the method definition we see:
def addSource[T: TypeInformation](function: SourceFunction[T]): DataStream[T]
But we don't see any implicit parameter defined, where is it?
When you see a polymorphic method where the type parameter has the form
def foo[T : M](param: T)
where T is the type parameter and M is a context bound, it means that the creator of the method is requesting an implicit parameter of type M[T]. It is equivalent to:
def foo[T](param: T)(implicit ev: M[T])
In the case of your method, it is actually expanded to:
def addSource[T](function: SourceFunction[T])(implicit evidence: TypeInformation[T]): DataStream[T]
This is why you see the compiler complaining, as it can't find the implicit parameter the method is requiring.
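The equivalence can be checked with a hypothetical TypeInfo type class standing in for Flink's TypeInformation:

```scala
// Hypothetical stand-in for Flink's TypeInformation.
trait TypeInfo[T]

object TypeInfo {
  implicit val intStringPair: TypeInfo[(Int, String)] =
    new TypeInfo[(Int, String)] {}
}

// The two signatures below are equivalent: the compiler rewrites the
// context bound into the extra implicit parameter list.
def withBound[T: TypeInfo](x: T): T = x
def withParam[T](x: T)(implicit ev: TypeInfo[T]): T = x

// Both compile because TypeInfo[(Int, String)] is in implicit scope;
// with no such instance, both would fail with the same "could not find
// implicit value for evidence parameter" error.
val a = withBound((1, "one"))
val b = withParam((1, "one"))
```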
If we go to the Apache Flink documentation, under Type Information, we can see why this happens:
No Implicit Value for Evidence Parameter Error
In the case where TypeInformation could not be created, programs fail to compile with an error stating “could not find implicit value for evidence parameter of type TypeInformation”.
A frequent reason is that the code that generates the TypeInformation has not been imported. Make sure to import the entire flink.api.scala package.
import org.apache.flink.api.scala._
For generic methods, you'll need to require them to generate a TypeInformation at the call-site as well:
For generic methods, the data types of the function parameters and return type may not be the same for every call and are not known at the site where the method is defined. The code above will result in an error that not enough implicit evidence is available.
In such cases, the type information has to be generated at the invocation site and passed to the method. Scala offers implicit parameters for that.
Note that import org.apache.flink.streaming.api.scala._ may also be necessary.
For your types this means that if the invoking method is generic, it also needs to request the context bound for its type parameter.
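With the same hypothetical TypeInfo stand-in, a generic caller must itself request the bound and pass it along, which is exactly what Flink asks of your generic methods:

```scala
// Hypothetical stand-in for Flink's TypeInformation.
trait TypeInfo[T]
object TypeInfo {
  implicit val forInt: TypeInfo[Int] = new TypeInfo[Int] {}
}

// Plays the role of see.addSource: it requires evidence for T.
def addSourceLike[T: TypeInfo](x: T): T = x

// A generic caller must propagate the context bound; without `: TypeInfo`
// on U, the call to addSourceLike(u) would not compile.
def caller[U: TypeInfo](u: U): U = addSourceLike(u)

val ok = caller(42)  // TypeInfo[Int] is resolved at THIS call site
```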
Also note that Scala minor versions (2.11, 2.12, etc.) are not binary compatible.
The following is a wrong configuration even if you use import org.apache.flink.api.scala._:
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<scala.version>2.12.8</scala.version>
<scala.binary.version>2.11</scala.binary.version>
</properties>
Correct configuration in Maven:
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<scala.version>2.12.8</scala.version>
<scala.binary.version>2.12</scala.binary.version>
</properties>
I see a couple of questions which make use of the scalaz Monad for what looks like a scala Future.
Here and here. I haven't seen a satisfactory way of resolving this as an implicit type class without using a global execution context; however, I feel that importing these type classes shouldn't require static knowledge of the context.
Is there something I am missing here?
(I'm assuming they are not using scalaz.concurrent.Future.)
The ExecutionContext just needs to be implicitly available at the call site where your Monad is known to be Future. I agree there is some awkwardness surrounding potentially multiple different definitions of your type class existing in your program, but there is no need to depend on an implementation of it statically.
import scala.concurrent.{ExecutionContext, Future}
import scalaz._
import Scalaz._

def foo[A, T[_]: Traverse, M[_]: Monad](t: T[M[A]]): M[T[A]] =
  implicitly[Traverse[T]].sequence(t)

def bar(l: List[Future[Int]])(implicit ctx: ExecutionContext): Future[List[Int]] =
  foo(l)
https://github.com/scalaz/scalaz/blob/v7.1.0/core/src/main/scala/scalaz/std/Future.scala#L8
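The pattern scalaz uses there (an implicit def that itself takes an implicit ExecutionContext) can be sketched with a minimal Functor stand-in, so the type-class instance is materialized at whichever call site has an EC in scope:

```scala
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._

// Minimal stand-in for the scalaz type class: the Future instance is an
// implicit def taking an implicit ExecutionContext, so the instance is
// built at each call site where an EC happens to be in scope.
trait Functor[F[_]] { def map[A, B](fa: F[A])(f: A => B): F[B] }

object Functor {
  implicit def futureFunctor(implicit ec: ExecutionContext): Functor[Future] =
    new Functor[Future] {
      def map[A, B](fa: Future[A])(f: A => B): Future[B] = fa.map(f)
    }
}

// Generic code depends only on the type class, not on any EC.
def double[F[_]](fa: F[Int])(implicit F: Functor[F]): F[Int] = F.map(fa)(_ * 2)

// Bringing an EC into scope is all the call site needs to do.
import scala.concurrent.ExecutionContext.Implicits.global
val result = Await.result(double(Future.successful(21)), 1.second)
```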
I am finding myself writing a lot of boilerplate Scala to add implicit class wrappers around modules of functions. For example, if I have this function defined for Seqs:
def takeWhileRight[T](p: T=>Boolean)(s: Seq[T]): Seq[T] = s.reverse.takeWhile(p).reverse
I need to write this (completely deterministic) implicit wrapper:
implicit class EnrichSeq[T](value: Seq[T]) {
def takeWhileRight(p: T=>Boolean): Seq[T] = SeqOps.takeWhileRight(p)(value)
}
This is one example of many. In every case the implicit wrapper ends up being mechanically derivable from the function it forwards to.
Is anyone aware of any tools or code generators that can automate the generation of such wrappers?
You're using Scala 2.10's "implicit classes" already? The whole point (the only point) of that new syntactic sugar is to free you from having to write the implicit conversion method.
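In other words, the EnrichSeq wrapper above is already the short form: the implementation can live directly in the implicit class, with no separate SeqOps module to forward to. For instance:

```scala
object SeqSyntax {
  // The implicit class IS the conversion: the compiler generates an
  // `implicit def EnrichSeq[T](s: Seq[T]): EnrichSeq[T]` for us, so no
  // hand-written forwarding wrapper is needed.
  implicit class EnrichSeq[T](s: Seq[T]) {
    def takeWhileRight(p: T => Boolean): Seq[T] = s.reverse.takeWhile(p).reverse
  }
}

import SeqSyntax._
val suffix = Seq(1, 2, 3, 4).takeWhileRight(_ > 2)  // the trailing run of elements satisfying p
```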