I am using these two imports:
import org.json4s._
import org.json4s.native.JsonMethods._
and I have the following source code:
val json = parse("~~~~~~aklsdjfalksdjfalkdsf")
var abc = (json \\ "something").children map {
  _.extract[POJO]
}
After running it, I saw:
Error:(32, 18) No org.json4s.Formats found. Try to bring an instance of org.json4s.Formats in scope or use the org.json4s.DefaultFormats.
_.extract[POJO]
Error:(32, 18) not enough arguments for method extract: (implicit formats: org.json4s.Formats, implicit mf: scala.reflect.Manifest[POJO])POJO.
Unspecified value parameters formats, mf.
_.extract[POJO]
I know I should be declaring:
implicit val df = DefaultFormats
I have learned how to use 'implicit' in my own Scala code.
However, I need to understand how to use a library that requires developers to define an implicit value in their source code.
It seems the 'implicit' keyword is used in the 'extract' method of the ExtractableJsonAstNode class, as stated in the error message.
def extract[A](implicit formats: Formats, mf: scala.reflect.Manifest[A]): A =
  Extraction.extract(jv)(formats, mf)
I see that this method is looking for an implicit value to be declared in my source code.
The first question is: how do I know when the 'implicit' keyword means I must supply a value for an implicit parameter declared elsewhere (e.g. in a library), as opposed to being a modifier on something I define for my own use (the case where 'implicit' is not declared twice)?
The only clue I have is this: when the library's source code uses the 'implicit' keyword on a parameter whose type is a trait, then I (the dev) need to declare a value whose type is a concrete class that extends that trait. I don't know if that's true.
I also found the following source code in the 'Formats.scala' file within the json4s library:
class CustomSerializer[A: Manifest](
    ser: Formats => (PartialFunction[JValue, A], PartialFunction[Any, JValue])) extends Serializer[A] {

  val Class = implicitly[Manifest[A]].runtimeClass

  def deserialize(implicit format: Formats) = {
    case (TypeInfo(Class, _), json) =>
      if (ser(format)._1.isDefinedAt(json)) ser(format)._1(json)
      else throw new MappingException("Can't convert " + json + " to " + Class)
  }

  def serialize(implicit format: Formats) = ser(format)._2
}
Note that def deserialize(implicit format: Formats) is declared.
Once I write 'implicit val df = DefaultFormats' in my file, will it affect the whole JSON mechanism, not only the 'extract' method? I ask because CustomSerializer is used inside the json4s library itself.
To summarise:
The first question is about one particular usage of the 'implicit' keyword.
The second question is about the scope of the 'implicit' keyword.
When do I use the implicit keyword?
Implicits are used to define the behavior of things you normally do not have control over. In your question, DefaultFormats already provides the Formats instance the library wants; you just need to bring it into implicit scope, e.g. with implicit val formats: Formats = DefaultFormats.
As for knowing when a library you're using requires some implicit in scope, that error is it. It is essentially telling you: "if you're not sure what this error is about, you can just use DefaultFormats".
Will an implicit affect the whole mechanism?
This is a key question that is important to understand.
When you have a function that takes an implicit, your compiler will search the scope for an implicit of that type.
Your function is looking for an org.json4s.Formats. By bringing DefaultFormats into scope, or by writing your own implicit of type Formats, you are telling your function to use that format.
What effect does this have on the rest of your code?
Any other functions you have that rely on an implicit Formats in scope will use the same instance. This is probably fine for you.
If you need to use multiple different Formats, you will want to split those components up from each other. You do not want to define multiple implicits of the same type in the same scope; this is confusing for humans and compilers alike and should just be avoided.
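Putting it together, a minimal sketch (the JSON sample and the POJO field are made up for illustration):

import org.json4s._
import org.json4s.native.JsonMethods._

case class POJO(name: String)

object Demo extends App {
  // One Formats instance in implicit scope satisfies every json4s call
  // that declares `implicit formats: Formats`, not only `extract`.
  implicit val formats: Formats = DefaultFormats

  val json = parse("""{"something": [{"name": "a"}, {"name": "b"}]}""")
  val pojos = (json \\ "something").children map { _.extract[POJO] }
  println(pojos) // List(POJO(a), POJO(b))
}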
Related
I am using avro4s to help with Avro serialization and deserialization.
I have a case class that includes Timestamps, and I need those Timestamps to be converted to nicely formatted strings before I publish the records to Kafka; the default encoder converts my Timestamps to Longs. I read in the avro4s readme that I needed to write a custom decoder and encoder.
Here is my case class:
case class MembershipRecordEvent(id: String,
                                 userHandle: String,
                                 planId: String,
                                 teamId: Option[String] = None,
                                 note: Option[String] = None,
                                 startDate: Timestamp,
                                 endDate: Option[Timestamp] = None,
                                 eventName: Option[String] = None,
                                 eventDate: Timestamp)
I have written the following encoder:
Test.scala
def test() = {
  implicit object MembershipRecordEventEncoder extends Encoder[MembershipRecordEvent] {
    override def encode(t: MembershipRecordEvent, schema: Schema) = {
      val dateFormat = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss")
      val record = new GenericData.Record(schema)
      record.put("id", t.id)
      record.put("userHandle", t.userHandle)
      record.put("teamId", t.teamId.orNull)
      record.put("note", t.note.orNull)
      record.put("startDate", dateFormat.format(t.startDate))
      record.put("endDate", if (t.endDate.isDefined) dateFormat.format(t.endDate.get) else null)
      record.put("eventName", t.eventName.orNull)
      record.put("eventDate", dateFormat.format(t.eventDate))
      record
    }
  }

  val recordInAvro2 = Encoder[MembershipRecordEvent].encode(testRecord, AvroSchema[MembershipRecordEvent]).asInstanceOf[GenericRecord]
  println(recordInAvro2)
}
If I declare my implicit object inline, like I did above, it creates the GenericRecord that I am looking for just fine. I tried to abstract the implicit object to a file, wrapped in an object, and I import Implicits._ to use my custom encoder.
Implicits.scala
object Implicits {
  implicit object MembershipRecordEventEncoder extends Encoder[MembershipRecordEvent] {
    override def encode(t: MembershipRecordEvent, schema: Schema) = {
      val dateFormat = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss")
      val record = new GenericData.Record(schema)
      record.put("id", t.id)
      record.put("userHandle", t.userHandle)
      record.put("teamId", t.teamId.orNull)
      record.put("note", t.note.orNull)
      record.put("startDate", dateFormat.format(t.startDate))
      record.put("endDate", if (t.endDate.isDefined) dateFormat.format(t.endDate.get) else null)
      record.put("eventName", t.eventName.orNull)
      record.put("eventDate", dateFormat.format(t.eventDate))
      record
    }
  }
}
Test.scala
import Implicits._
val recordInAvro2 = Encoder[MembershipRecordEvent].encode(testRecord, AvroSchema[MembershipRecordEvent]).asInstanceOf[GenericRecord]
println(recordInAvro2)
It fails to use my encoder (it doesn't hit my breakpoints). I have tried a myriad of things to see why it fails, to no avail.
How can I correctly import an implicit object?
Is there a simpler solution to encode my case class's Timestamps to Strings without writing an encoder for the entire case class?
TL;DR
As suggested in one of the comments above, you can place it in the companion object.
The longer version:
Probably you have another encoder that is used instead of the encoder you defined in Implicits.
I'll quote some phrases from WHERE DOES SCALA LOOK FOR IMPLICITS?
When a value of a certain name is required, lexical scope is searched for a value with that name. Similarly, when an implicit value of a certain type is required, lexical scope is searched for a value with that type.
Any such value which can be referenced with its “simple” name, without selecting from another value using dotted syntax, is an eligible implicit value.
There may be more than one such value because they have different names.
In that case, overload resolution is used to pick one of them. The algorithm for overload resolution is the same used to choose the reference for a given name, when more than one term in scope has that name. For example, println is overloaded, and each overload takes a different parameter type. An invocation of println requires selecting the correct overloaded method.
In implicit search, overload resolution chooses a value among more than one that have the same required type. Usually this entails selecting a narrower type or a value defined in a subclass relative to other eligible values.
The rule that the value must be accessible using its simple name means that the normal rules for name binding apply.
In summary, a definition for x shadows a definition in an enclosing scope. But a binding for x can also be introduced by local imports. Imported symbols can’t override definitions of the same name in an enclosing scope. Similarly, wildcard imports can’t override an import of a specific name, and names in the current package that are visible from other source files can’t override imports or local definitions.
These are the normal rules for deciding what x means in a given context, and also determine which value x is accessible by its simple name and is eligible as an implicit.
This means that an implicit in scope can be disabled by shadowing it with a term of the same name.
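To illustrate the shadowing rule with a quick made-up example:

object Defaults {
  implicit val width: Int = 80
}

def demo(): Int = {
  import Defaults._
  // The local definition shadows the imported `width`,
  // so implicit search finds 120, not 80.
  implicit val width: Int = 120
  implicitly[Int]
}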
Now I'll state the companion object logic:
Implicit syntax can avoid the import tax, which of course is a “sin tax,” by leveraging “implicit scope”, which depends on the type of the implicit instead of imports in lexical scope.
When an implicit of type T is required, implicit scope includes the companion object of T:
When an F[T] is required, implicit scope includes both the companion of F and the companion of the type argument, e.g., object C for F[C].
In addition, implicit scope includes the companions of the base classes of F and C, including package objects, such as p for p.F.
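Concretely for your case, moving the encoder into the companion object of MembershipRecordEvent puts it in implicit scope everywhere the type is used, with no import needed. A trimmed sketch reusing the signatures from your code, assuming java.sql.Timestamp for the Timestamp fields (I haven't verified this against a specific avro4s version):

import java.sql.Timestamp
import java.text.SimpleDateFormat
import com.sksamuel.avro4s.Encoder
import org.apache.avro.Schema
import org.apache.avro.generic.GenericData

case class MembershipRecordEvent(id: String, startDate: Timestamp) // other fields trimmed

object MembershipRecordEvent {
  // Lives in the companion, so Encoder[MembershipRecordEvent] resolves
  // to this instance at any call site without an import.
  implicit object MembershipRecordEventEncoder extends Encoder[MembershipRecordEvent] {
    override def encode(t: MembershipRecordEvent, schema: Schema) = {
      val dateFormat = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss")
      val record = new GenericData.Record(schema)
      record.put("id", t.id)
      record.put("startDate", dateFormat.format(t.startDate))
      record
    }
  }
}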
I'd like to experiment with the use of a dynamic data model with a reflective library that uses typeOf[].
I've defined a class at runtime with a Scala reflection ToolBox in 2.11:
import scala.tools.reflect.ToolBox
import scala.reflect.runtime.universe._
import scala.reflect.runtime.{ currentMirror => cm }
def cdef() = q"case class C(v: String)"
val tb = cm.mkToolBox()
val csym = tb.define(cdef())
def newc(csym: Symbol) = q"""new ${csym}("hi")"""
val obj = tb.eval(newc(csym))
I'm able to circumvent the typeOf[] call by entering Scala reflection via the ClassSymbol instead, but that requires modifying a library over which I have no immediate control.
Is there any way that I can use it as a type parameter in a library whose entry point is typeOf[]?
I've tried:
The only way I found to go from a value to something that I could use in the type position was to use Java reflection to invoke the companion class's apply method and call .type on the result:
val method_apply = obj.getClass.getMethod("apply", "".getClass)
val typeTemplate = method_apply.invoke(obj, "hello")
type MyType = typeTemplate.type
(Keeping with the naming scheme of @xeno_by's and @travisbrown's menagerie of odd types, I might call this "Frankenstein's Type", because it is made from parts, given life at the wrong time, not quite a substitute for the original, and, given that this is all happening at runtime, should probably be burned with fire.)
This type alias works as a type parameter in some cases. But in the case of typeOf[MyType], the compiler makes a TypeTag before the runtime type is defined, so typeOf[MyType] returns a type member that doesn't correspond to the runtime type/class (e.g. TypeTag[package.Example.MyType] instead of TypeTag[package.C]).
Should I expect the ToolBox to have generated a TypeTag, and if so, how do I use it?
If I have to make a TypeTag at runtime, this question shows me how, but then how do I attach it to whatever I use as a type parameter?
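For reference, the recipe I found for wrapping an already-computed runtime Type in a TypeTag looks roughly like this (a sketch; I haven't verified it in the toolbox scenario above):

import scala.reflect.api
import scala.reflect.runtime.universe._
import scala.reflect.runtime.{currentMirror => cm}

// Manufactures a TypeTag[T] from a runtime Type, bound to the current mirror.
def typeToTypeTag[T](tpe: Type): TypeTag[T] =
  TypeTag(cm, new api.TypeCreator {
    def apply[U <: api.Universe with Singleton](m: api.Mirror[U]): U#Type =
      if (m eq cm) tpe.asInstanceOf[U#Type]
      else throw new IllegalArgumentException("TypeTag is bound to a different mirror")
  })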
Thanks for any ideas,
-Julian
The following code fails for me:
object Message {
  def parse[T](bsonDoc: BSONDocument): Try[T] = {
    implicit val bsonHandler = Macros.handler[T]
    bsonDoc.seeAsTry[T]
  }
}
Message.parse[messages.ClientHello](data)
The error is:
No apply function found for T
implicit val bsonHandler = Macros.handler[T]
^
However, if I hardcode a type (one of my case classes), it's fine:
object Message {
  def parse(bsonDoc: BSONDocument): Try[ClientHello] = {
    implicit val bsonHandler = Macros.handler[ClientHello]
    bsonDoc.seeAsTry[ClientHello]
  }
}
Message.parse(data)
So I presume this is a problem with using generics. Incidentally, I have to import messages.ClientHello; if I just reference it by its qualified name, messages.ClientHello, I get:
not found: value ClientHello
implicit val bsonHandler = Macros.handler[messages.ClientHello]
^
How can I achieve what I'm trying to do, which is to have a single method that will take a BSON document and return an instance of the appropriate case class?
1) Macro applications get expanded immediately when encountered (well, modulo some fine details of type inference that are irrelevant here). This means that when you write handler[T], handler will try to expand with T as a type parameter. This won't lead to anything good, hence the error. To make this work, you need to turn Message.parse into a macro itself.
2) This happens because ReactiveMongo macros are unhygienic. Specifically, https://github.com/ReactiveMongo/ReactiveMongo/blob/v0.10.0/macros/src/main/scala/macros.scala#L142 isn't going to work correctly in situations like yours, because it uses simple name of the class, not a fully qualified name. I think the best way to make the macro work correctly would be using Ident(companion), not Ident(companion.name) - that would ensure that this identifier binds to the companion, not to something in scope having the same name.
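If you'd rather not write a macro, a common workaround is to demand the reader at the call site, where the concrete type is known, and let Macros.handler expand there once per case class. A sketch using the same API as in the question (signatures assumed from your code, not checked against a specific ReactiveMongo version):

object Message {
  // The reader is materialized by the caller, where T is concrete.
  def parse[T](bsonDoc: BSONDocument)(implicit reader: BSONDocumentReader[T]): Try[T] =
    bsonDoc.seeAsTry[T]
}

implicit val clientHelloHandler = Macros.handler[messages.ClientHello]

Message.parse[messages.ClientHello](data)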
If I have a java.util.List and want to iterate over it using Scala syntax, I import:
import scala.collection.JavaConversions._
and the java.util.List is implicitly converted to a Scala collection (a scala.collection.mutable.Buffer)
(http://www.scala-lang.org/api/current/index.html#scala.collection.JavaConversions%24)
But how is this conversion achieved? I'm confused, as this is the first time I've encountered the ability to convert an object's type just by importing a package.
The JavaConversions object contains many implicit conversions between Scala and Java collections, in both directions. When you import all members of JavaConversions, all of those conversions are put in the current scope and are therefore considered whenever the expected collection type isn't directly available.
For example, when the Scala compiler is looking for a collection of type X and cannot find one, it will also try to find a collection of type Y together with an implicit conversion from Y to X in scope.
To understand more about how the conversions are evaluated, see this answer.
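For instance, a minimal sketch of the conversion kicking in (made-up values):

import java.util
import scala.collection.JavaConversions._

object Demo extends App {
  val javaList: util.List[String] = util.Arrays.asList("a", "b", "c")

  // `java.util.List` has no `map` method; the compiler rewrites this to
  // something like `asScalaBuffer(javaList).map(...)` via the imported
  // implicit conversion.
  val upper = javaList.map(_.toUpperCase)
  println(upper) // Buffer(A, B, C)
}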
There is a pattern called "Pimp my library" that allows you to "add" methods to any existing class. See for instance the answer by Daniel C. Sobral, http://www.artima.com/weblogs/viewpost.jsp?thread=179766, or google for other examples.
In short: an implicit method returns a wrapper with the desired methods:
implicit def enrichString(s: String) = new EnrichedString(s)

class EnrichedString(s: String) {
  def hello = "Hello, " + s
}
assert("World".hello === "Hello, World")
It can also be shortened with sugar:
implicit class EnrichedString(s: String) {
  def hello = "Hello, " + s
}
My title is probably not describing the problem very well. I do not need the answer to this question for what I am doing (I have things working correctly now), but while I was working with the Scala combinator parsers I ran into an issue that confused me. I would like to understand the language better (I'm a Scala newbie for the most part), so I thought I would see if anyone can explain this to me.
Here is the code:
package my.example
import scala.io.Source
import scala.util.parsing.input.StreamReader
import scala.util.parsing.combinator.lexical.StdLexical
import scala.util.parsing.combinator.syntactical.StandardTokenParsers
class DummyParser extends StandardTokenParsers {

  def scan(filename: String): Unit = {
    // Read in file
    val input = StreamReader(Source.fromFile(filename).bufferedReader)

    // I want a reference to lexical in StandardTokenParsers
    val mylexical = lexical

    // Even if I put a type on it like the two below, it does not help
    // val mylexical: StdLexical = lexical
    // val mylexical: Tokens = lexical

    val tokensGood: lexical.Scanner = new lexical.Scanner(input)

    /*
      Compile error in following line:
        error: type mismatch;
        found   : mylexical.Scanner
        required: DummyParser.this.lexical.Scanner
    */
    val tokensBad: lexical.Scanner = new mylexical.Scanner(input)
  }
}
The "val tokensBad" line gets the compile error shown in the comments. Isn't mylexical above referencing the exact same object as this.lexical (defined in StandardTokenParsers that the class above is deriving from). Reading "Programming in Scala" I think I sort of understand that the type of lexical.Scanner is path dependent (Section 20.7), but shouldn't lexical.Scanner and mylexical.Scanner be the same type? Isn't lexical and mylexical the same object? Heck, the dog food example in the book on page 426 seems to say the SuitableFood type from two different dogs is the same, and in my case above its the exact same object (I think). What's really going on here?
You would like the compiler to consider lexical.Scanner and mylexical.Scanner equal, based on the fact that at runtime the values of mylexical and lexical are always the same. But that is a run-time property, and the type checker doesn't do any data-flow analysis (that would be far too slow to be practical).
So you need to help the type checker by telling it that the two values are always the same. You can do that by using a singleton type. A singleton type is a type that has exactly one value, and is written as (in this case) lexical.type.
If you change the definition of mylexical to:
val mylexical: lexical.type = lexical
your program type-checks.
What we just did is tell the type checker that mylexical can only have one value at runtime, given by the singleton type.
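A self-contained illustration of the same idea (with made-up Box/Item names):

class Box {
  class Item
}

object Demo extends App {
  val a = new Box
  val b: a.type = a // singleton type: b is statically known to be the same object as a

  // Compiles, because b.Item and a.Item are the same path-dependent type.
  val item: a.Item = new b.Item
  println(item)
}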