Implicit object works inline but not when it is imported - scala

I am using avro4s to help with avro serialization and deserialization.
I have a case class that includes Timestamps and need those Timestamps to be converted to nicely formatted strings before I publish the records to Kafka; the default encoder is converting my Timestamps to Longs. I read that I needed to write a decoder and encoder (from the avro4s readme).
Here is my case class:
case class MembershipRecordEvent(id: String,
userHandle: String,
planId: String,
teamId: Option[String] = None,
note: Option[String] = None,
startDate: Timestamp,
endDate: Option[Timestamp] = None,
eventName: Option[String] = None,
eventDate: Timestamp)
I have written the following encoder:
Test.scala
def test() = {
implicit object MembershipRecordEventEncoder extends Encoder[MembershipRecordEvent] {
override def encode(t: MembershipRecordEvent, schema: Schema) = {
val dateFormat = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss")
val record = new GenericData.Record(schema)
record.put("id", t.id)
record.put("userHandle", t.userHandle)
record.put("teamId", t.teamId.orNull)
record.put("note", t.note.orNull)
record.put("startDate", dateFormat.format(t.startDate))
record.put("endDate", if(t.endDate.isDefined) dateFormat.format(t.endDate.get) else null)
record.put("eventName", t.eventName.orNull)
record.put("eventDate", dateFormat.format(t.eventDate))
record
}
}
val recordInAvro2 = Encoder[MembershipRecordEvent].encode(testRecord, AvroSchema[MembershipRecordEvent]).asInstanceOf[GenericRecord]
println(recordInAvro2)
}
If I declare my implicit object inline, as I did above, it creates the GenericRecord I am looking for just fine. I then tried to move the implicit object out to its own file, wrapped in an object, and import Implicits._ to use my custom encoder.
Implicits.scala
object Implicits {
implicit object MembershipRecordEventEncoder extends Encoder[MembershipRecordEvent] {
override def encode(t: MembershipRecordEvent, schema: Schema) = {
val dateFormat = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss")
val record = new GenericData.Record(schema)
record.put("id", t.id)
record.put("userHandle", t.userHandle)
record.put("teamId", t.teamId.orNull)
record.put("note", t.note.orNull)
record.put("startDate", dateFormat.format(t.startDate))
record.put("endDate", if(t.endDate.isDefined) dateFormat.format(t.endDate.get) else null)
record.put("eventName", t.eventName.orNull)
record.put("eventDate", dateFormat.format(t.eventDate))
record
}
}
}
Test.scala
import Implicits._
val recordInAvro2 = Encoder[MembershipRecordEvent].encode(testRecord, AvroSchema[MembershipRecordEvent]).asInstanceOf[GenericRecord]
println(recordInAvro2)
It fails to use my encoder (it doesn't hit my breakpoints). I have tried a myriad of things to figure out why, to no avail.
How can I correctly import an implicit object?
Is there a simpler solution to encode my case class's Timestamps to Strings without writing an encoder for the entire case class?

TL;DR
As suggested in one of the comments above, you can place the encoder in the companion object of MembershipRecordEvent.
The longer version:
You probably have another encoder in scope that is being used instead of the one you defined in Implicits.
I'll quote some phrases from WHERE DOES SCALA LOOK FOR IMPLICITS?
When a value of a certain name is required, lexical scope is searched for a value with that name. Similarly, when an implicit value of a certain type is required, lexical scope is searched for a value with that type.
Any such value which can be referenced with its “simple” name, without selecting from another value using dotted syntax, is an eligible implicit value.
There may be more than one such value because they have different names.
In that case, overload resolution is used to pick one of them. The algorithm for overload resolution is the same used to choose the reference for a given name, when more than one term in scope has that name. For example, println is overloaded, and each overload takes a different parameter type. An invocation of println requires selecting the correct overloaded method.
In implicit search, overload resolution chooses a value among more than one that have the same required type. Usually this entails selecting a narrower type or a value defined in a subclass relative to other eligible values.
The rule that the value must be accessible using its simple name means that the normal rules for name binding apply.
In summary, a definition for x shadows a definition in an enclosing scope. But a binding for x can also be introduced by local imports. Imported symbols can’t override definitions of the same name in an enclosing scope. Similarly, wildcard imports can’t override an import of a specific name, and names in the current package that are visible from other source files can’t override imports or local definitions.
These are the normal rules for deciding what x means in a given context, and also determine which value x is accessible by its simple name and is eligible as an implicit.
This means that an implicit in scope can be disabled by shadowing it with a term of the same name.
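For illustration, a minimal sketch of such shadowing (my own example, with hypothetical names):
object Implicits { implicit val n: Int = 1 }
def demo(): Unit = {
import Implicits._
val n = "shadow" // a term of the same name shadows the imported implicit
// implicitly[Int] // would no longer compile here: the implicit n is not accessible by its simple name
}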
Now I'll state the companion object logic:
Implicit syntax can avoid the import tax, which of course is a “sin tax,” by leveraging “implicit scope”, which depends on the type of the implicit instead of imports in lexical scope.
When an implicit of type T is required, implicit scope includes the companion object of T:
When an F[T] is required, implicit scope includes both the companion of F and the companion of the type argument, e.g., object C for F[C].
In addition, implicit scope includes the companions of the base classes of F and C, including package objects, such as p for p.F.
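Following the TL;DR, a minimal sketch of the companion-object placement (the encoder body is the same as above, elided here):
object MembershipRecordEvent {
implicit object MembershipRecordEventEncoder extends Encoder[MembershipRecordEvent] {
override def encode(t: MembershipRecordEvent, schema: Schema) = {
// ... same field-by-field GenericData.Record construction as above ...
}
}
}
Because the companion object is part of the implicit scope of MembershipRecordEvent, Encoder[MembershipRecordEvent] finds this instance without any import.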

Related

Doobie cannot find or construct a Read instance for type T

I'm using doobie to query some data and everything works fine, like this:
case class Usuario(var documento: String, var nombre: String, var contrasena: String)
def getUsuario(doc: String) =
sql"""SELECT documento, nombre, contrasena FROM "Usuario" WHERE "documento" = $doc"""
.query[Usuario]
.option
.transact(xa)
.unsafeRunSync()
But if I declare a function with type restriction like this:
def getOption[T](f: Fragment): Option[T] = {
f.query[T]
.option
.transact(xa)
.unsafeRunSync()
}
I got these errors:
Error:(42, 12) Cannot find or construct a Read instance for type:
T
This can happen for a few reasons, but the most common case is that a data
member somewhere within this type doesn't have a Get instance in scope. Here are
some debugging hints:
- For Option types, ensure that a Read instance is in scope for the non-Option
version.
- For types you expect to map to a single column ensure that a Get instance is
in scope.
- For case classes, HLists, and shapeless records ensure that each element
has a Read instance in scope.
- Lather, rinse, repeat, recursively until you find the problematic bit.
You can check that an instance exists for Read in the REPL or in your code:
scala> Read[Foo]
and similarly with Get:
scala> Get[Foo]
And find the missing instance and construct it as needed. Refer to Chapter 12
of the book of doobie for more information.
f.query[T].option.transact(xa).unsafeRunSync()
Error:(42, 12) not enough arguments for method query: (implicit evidence$1: doobie.util.Read[T], implicit h: doobie.LogHandler)doobie.Query0[T].
Unspecified value parameter evidence$1.
f.query[T].option.transact(xa).unsafeRunSync()
Does anyone know how to achieve what I want? I think it's something to do with implicits, but I don't know how to fix it.
In order for doobie to be able to transform the result of an SQL query into your case class, it needs an instance of the Read typeclass in scope.
For example, for Usuario it needs an instance of Read[Usuario]. Fortunately, doobie is able to derive typeclasses from the ones it already knows, like Read[String], so in most cases we don't need to create these explicitly.
In your case, you want to create the method getOption with a type parameter T, which means the compiler doesn't know which type's typeclass to look for while compiling the method itself.
You can fix it very easily by adding a context bound for Read to your type parameter (T: Read), or by adding an implicit parameter. This defers the typeclass resolution to the call site, where the concrete type of T is already known.
So your fixed method will be:
def getOption[T: Read](f: Fragment): Option[T] = {
f.query[T]
.option
.transact(xa)
.unsafeRunSync()
}
or with an implicit parameter:
def getOption[T](f: Fragment)(implicit read: Read[T]): Option[T] = {
f.query[T]
.option
.transact(xa)
.unsafeRunSync()
}
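With the context bound in place, the original query can be routed through the generic helper, e.g. (my own illustration, reusing xa, doc and Usuario from above):
val usuario: Option[Usuario] =
getOption[Usuario](sql"""SELECT documento, nombre, contrasena FROM "Usuario" WHERE "documento" = $doc""")
At this call site the concrete type Usuario is known, so the compiler can derive Read[Usuario] and pass it along.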

What is the meaning of a type declaration without definition in an object?

Scala allows you to define types using the type keyword; these have slightly different meanings and purposes depending on where they are declared.
If you use type inside an object or a package object, you'd define a type alias, i.e. a shorter/clearer name for another type:
package object whatever {
type IntPredicate = Int => Boolean
def checkZero(p: IntPredicate): Boolean = p(0)
}
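Using the alias then looks like this (a hypothetical call site):
import whatever._
val nonNegative: IntPredicate = _ >= 0
checkZero(nonNegative) // true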
Types declared in classes/traits are usually intended to be overridden in subclasses/subtraits, and are also eventually resolved to a concrete type:
trait FixtureSpec {
type FixtureType
def initFixture(f: FixtureType) = ...
}
trait SomeSpec extends FixtureSpec {
override type FixtureType = String
def test(): Unit = {
initFixture("hello")
...
}
}
There are other uses for abstract type declarations, but in any case they are eventually resolved to some concrete types.
However, there is also the option to declare an abstract type (i.e. one without an actual definition) inside an object:
object Example {
type X
}
And this compiles, as opposed to e.g. abstract methods:
object Example {
def method: String // compilation error
}
Because objects cannot be extended, such types can never be resolved to concrete types.
I assumed that such type definitions could be conveniently used as phantom types. For example (using Shapeless' tagged types):
import shapeless.tag.@@
import shapeless.tag
type ++>[-F, +T]
trait Converter
val intStringConverter: Converter @@ (String ++> Int) = tag[String ++> Int](...)
However, it seems that the way the type system treats these types is different from regular types, which causes the above usage of "abstract" types to fail in certain scenarios.
In particular, when looking for implicit parameters, Scala eventually looks into the implicit scope of "associated" types, i.e. types which are present in the type signature of the implicit parameter. However, it seems there is some limitation on the nesting of these associated types when "abstract" types are used. Consider this example setup:
import shapeless.tag.@@
trait Converter
type ++>[-F, +T]
case class DomainType()
object DomainType {
implicit val converter0: Converter @@ DomainType = null
implicit val converter1: Converter @@ Seq[DomainType] = null
implicit val converter2: Converter @@ (Seq[String] ++> Seq[DomainType]) = null
}
// compiles
implicitly[Converter @@ DomainType]
// compiles
implicitly[Converter @@ Seq[DomainType]]
// fails!
implicitly[Converter @@ (Seq[String] ++> Seq[DomainType])]
Here, the first two implicit resolutions compile just fine, while the last one fails with an error about a missing implicit. If I define the implicit in the same scope as the implicitly call, it then compiles:
implicit val converter2: Converter @@ (Seq[String] ++> Seq[DomainType]) = null
// compiles
implicitly[Converter @@ (Seq[String] ++> Seq[DomainType])]
However, if I change the ++> definition to be a trait rather than a type:
trait ++>[-F, +T]
then all implicitly calls above compile just fine.
Therefore, my question is: what exactly is the purpose of such type declarations? What problems are they intended to solve, and why are they not prohibited like other kinds of abstract members in objects?
For a method (or value) there are only 2 options: either it has a body (and then it is "concrete") or it doesn't (and then it is "abstract"). A type X is always some type interval X >: LowerBound <: UpperBound (we call it concrete if LowerBound = UpperBound, and completely abstract if LowerBound = Nothing and UpperBound = Any, but there is a variety of cases in between). So if we wanted to forbid abstract types in objects, we would always need a way to check that the types LowerBound and UpperBound are equal. But they can be defined in some complex way, and in general such a check can be far from easy:
object Example {
type X >: N#Add[N] <: N#Mult[Two] // Do we expect the compiler to prove that n+n = n*2?
}
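To make the interval intuition concrete, a small sketch of my own (not from the answer):
object Bounds {
type X >: String <: AnyRef // an interval: neither fully concrete nor fully abstract
def widen(s: String): X = s // fine: String <: X holds by the lower bound
def use(x: X): AnyRef = x // fine: X <: AnyRef holds by the upper bound
}
The compiler can reason about X through its bounds even though X is never resolved to a single concrete type.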

scala : library is looking for an implicit value to be declared

I am using two imports
import org.json4s._
import org.json4s.native.JsonMethods._
I have the following source code
val json = parse("~~~~~~aklsdjfalksdjfalkdsf")
var abc = (json \\ "something").children map {
_.extract[POJO]
}
After I ran it I saw
Error:(32, 18) No org.json4s.Formats found. Try to bring an instance of org.json4s.Formats in scope or use the org.json4s.DefaultFormats.
_.extract[POJO]
Error:(32, 18) not enough arguments for method extract: (implicit formats: org.json4s.Formats, implicit mf: scala.reflect.Manifest[POJO])POJO.
Unspecified value parameters formats, mf.
_.extract[POJO]
I know I should be declaring:
implicit val df = DefaultFormats
I have learnt how to use 'implicit' in my own Scala code.
However, I need to understand how to use a library that requires developers to define an implicit value in their source code.
It seems the 'implicit' keyword is used in the 'extract' method of the ExtractableJsonAstNode class, as stated in the error message.
def extract[A](implicit formats: Formats, mf: scala.reflect.Manifest[A]): A =
Extraction.extract(jv)(formats, mf)
I can see that it is looking for an 'implicit' value to be declared in my source code.
The first question is how to know when an 'implicit' keyword refers to another implicit that I must supply (e.g. one declared in a library), versus when it is just a modifier on an operation I define myself (the case where 'implicit' does not need to be declared twice).
The only clue I have is this: when the library source code uses the 'implicit' keyword on a parameter whose type is a trait, then I (the developer) need to declare a value of a concrete class that extends that trait. I don't know if that's true.
I also found the following source code in the 'Formats.scala' file within the json4s library.
class CustomSerializer[A: Manifest](
ser: Formats => (PartialFunction[JValue, A], PartialFunction[Any, JValue])) extends Serializer[A] {
val Class = implicitly[Manifest[A]].runtimeClass
def deserialize(implicit format: Formats) = {
case (TypeInfo(Class, _), json) =>
if (ser(format)._1.isDefinedAt(json)) ser(format)._1(json)
else throw new MappingException("Can't convert " + json + " to " + Class)
}
def serialize(implicit format: Formats) = ser(format)._2
}
Note that def deserialize(implicit format: Formats) is declared.
Once I write 'implicit val df = DefaultFormats' in my file, will it affect the whole JSON mechanism and not only the 'extract' method, given that CustomSerializer is used inside the json4s library?
To summarise:
The first question is about one of the usages of the 'implicit' keyword.
The second question is about the scope of the 'implicit' keyword.
When do I use the implicit keyword?
Implicits are used to define the behavior of things you normally do not have control over. In your question, DefaultFormats is already an implicit. You do not need to declare a new implicit using it; you can just import it.
As for knowing when a library you're using requires some implicit in scope, that error message is your clue. It is essentially telling you: "if you're not sure what this error is about, you can just import DefaultFormats".
Will an implicit affect the whole mechanism?
This is a key question that is important to understand.
When you have a function that takes an implicit, your compiler will search the scope for an implicit of that type.
Your function is looking for an org.json4s.Formats. By importing DefaultFormats or writing your own implicit of type Formats, you are telling your function to use that format.
What effect does this have on the rest of your code?
Any other functions you have that rely on an implicit Formats in scope will use the same implicit. This is probably fine for you.
If you need to use multiple different Formats, you will want to split those components up from each other. You do not want to define multiple implicits of the same type in the same scope; that is confusing for humans and computers alike and should just be avoided.
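Putting this together for the code in the question, a minimal sketch (assuming a hypothetical POJO case class that matches the JSON, and using the implicit val form from the question):
import org.json4s._
import org.json4s.native.JsonMethods._
case class POJO(name: String)
implicit val formats: Formats = DefaultFormats // one Formats instance for this scope
val json = parse("""{"something": [{"name": "a"}, {"name": "b"}]}""")
val abc = (json \\ "something").children map { _.extract[POJO] }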

Optional boolean parameters in Scala

I've lately been working on a DSL-style library wrapper over Apache POI functionality and have faced a challenge I can't seem to find a good solution for.
One of the goals of the library is to provide the user with the ability to build a spreadsheet model as a collection of immutable objects, e.g.
val headerStyle = CellStyle(fillPattern = CellFill.Solid, fillForegroundColor = Color.AquaMarine, font = Font(bold = true))
val italicStyle = CellStyle(font = Font(italic = true))
with the following assumptions:
The user can optionally specify any parameter (meaning you can create a CellStyle with no parameters, as well as with the full list of explicitly specified parameters);
If a parameter hasn't been specified explicitly by the user, it is considered undefined and the default environment value (the default value for the format we're converting to) will be used.
The 2nd point is important, as I want to convert this data model into multiple formats, e.g. the default font in Excel doesn't have to be the same as the default font in an HTML browser (and if the user doesn't define the font family explicitly, I'd like them to see the data using those defaults).
To deal with these requirements I've used a variation of the null pattern described here: Pattern for optional-parameters in Scala using null, also suggested here: Scala default parameters and null (below, a simplified example).
object ModelObject {
def apply(modelParam : String = null) : ModelObject = ModelObject(
modelParam = Option(modelParam)
)
}
case class ModelObject private(modelParam : Option[String])
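For example (my own REPL illustration):
scala> ModelObject()
res0: ModelObject = ModelObject(None)
scala> ModelObject("value")
res1: ModelObject = ModelObject(Some(value))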
Since null is used only internally in the companion object, and in a very localized way, I decided to accept the null sacrifice for the sake of the simplicity of the solution. The pattern works well with all reference classes.
However, null cannot be used for Scala's primitive value types. This is an especially big problem with Boolean, for which I effectively consider 3 states (true, false and undefined). Wanting the user to still be able to write bold = true, I decided to reach for the Java wrappers, which do accept nulls.
object ModelObject {
def apply(boolParam : java.lang.Boolean = null) : ModelObject = ModelObject(
boolParam = Option(boolParam).map(_.booleanValue)
)
}
case class ModelObject private(boolParam : Option[Boolean])
This, however, doesn't feel right, and I've been wondering whether there is a better approach to the problem. I've been thinking about defining union types (with an additional object denoting the undefined value): How to define "type disjunction" (union types)? However, since the undefined state shouldn't be used explicitly, the parameter type the IDE exposes to the user would be very confusing (ideally I'd like it to remain Boolean).
Is there any better approach to the problem?
Further information:
More DSL API examples: https://github.com/norbert-radyk/spoiwo/blob/master/examples/com/norbitltd/spoiwo/examples/quickguide/SpoiwoExamples.scala
Sample implementation of the full class: https://github.com/norbert-radyk/spoiwo/blob/master/src/main/scala/com/norbitltd/spoiwo/model/CellStyle.scala
You can use a variation of the pattern I described here: How to provide helper methods to build a Map
To sum it up, you can use a generic helper class to represent optional arguments (much like an Option).
abstract sealed class OptArg[+T] {
def toOption: Option[T]
}
object OptArg{
implicit def autoWrap[T]( value: T ): OptArg[T] = SomeArg(value)
implicit def toOption[T]( arg: OptArg[T] ): Option[T] = arg.toOption
}
case class SomeArg[+T]( value: T ) extends OptArg[T] {
def toOption = Some( value )
}
case object NoArg extends OptArg[Nothing] {
val toOption = None
}
You can simply use it like this:
scala> case class ModelObject(boolParam: OptArg[Boolean] = NoArg)
defined class ModelObject
scala> ModelObject(true)
res12: ModelObject = ModelObject(SomeArg(true))
scala> ModelObject()
res13: ModelObject = ModelObject(NoArg)
However, as you can see, OptArg now leaks into the ModelObject class itself (boolParam is typed as OptArg[Boolean] instead of Option[Boolean]).
Fixing this (if it is important to you) just requires defining a separate factory, as you have done yourself:
scala> :paste
// Entering paste mode (ctrl-D to finish)
case class ModelObject private(boolParam: Option[Boolean])
object ModelObject {
def apply(boolParam: OptArg[Boolean] = NoArg): ModelObject = new ModelObject(
boolParam = boolParam.toOption
)
}
// Exiting paste mode, now interpreting.
defined class ModelObject
defined module ModelObject
scala> ModelObject(true)
res22: ModelObject = ModelObject(Some(true))
scala> ModelObject()
res23: ModelObject = ModelObject(None)
UPDATE: The advantage of using this pattern over simply defining several overloaded apply methods, as shown by @drexin, is that in the latter case the number of overloads grows very fast with the number of arguments (2^N). If ModelObject had 4 parameters, that would mean 16 overloads to write by hand!
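For instance, with just two optional parameters the overload approach already needs four variants (a hypothetical sketch of my own):
case class ModelObject2 private(a: Option[Boolean], b: Option[String])
object ModelObject2 {
def apply(): ModelObject2 = new ModelObject2(None, None)
def apply(a: Boolean): ModelObject2 = new ModelObject2(Some(a), None)
def apply(b: String): ModelObject2 = new ModelObject2(None, Some(b))
def apply(a: Boolean, b: String): ModelObject2 = new ModelObject2(Some(a), Some(b))
}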

Object converting string into "A"

I would like to write a class looking like this:
class Store[+A](dest: Symbol)(implicit c: String => A) extends Action(dest) {
override def update(options: HashMap[Symbol,Any], arg: String): Unit = {
options += ((dest -> c(arg)))
}
}
object Store {
def apply[A](dest: Symbol)(c: String=>A) = new Store[A](dest)(c)
def apply[A](dest: Symbol) = new Store[A](dest)
}
Doing so, I have a few problems:
Using implicit with strings causes no end of trouble
Anyway, the system doesn't find the implicits if they are defined in my module; they would need to be defined in the module creating the class
the second apply method of the Store object just doesn't compile, as A will be erased, so the compiler has no way of finding a conversion from String to A
How would you create such an object that converts a string to some other type? I wouldn't want the user of the library to enter the type twice (i.e. by specifying both the type and the conversion function).
I don't understand what you are trying to do with the second apply. To me, it looks like the first apply should have the implicit keyword, and you'd be done with it. You can either pass the parameter explicitly or leave it out if an implicit is present. Also, you wouldn't need to pass c explicitly, since you'd already have it implicitly in scope within the first apply.
I'd venture the second apply doesn't compile because there's no implicit String => A available in the scope of object Store.
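A minimal sketch of that suggestion (hypothetical code, keeping the question's Action base class and dropping the second apply):
object Store {
// With c implicit, callers may pass the conversion explicitly or let the
// compiler supply one that is in scope at the call site.
def apply[A](dest: Symbol)(implicit c: String => A): Store[A] = new Store[A](dest)
}
implicit val parseInt: String => Int = _.toInt
val store = Store[Int]('count) // picks up parseInt from the call site's scope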