Implicit Map makes Char methods not compile - scala

So with this code, the head and exists methods are not returning a Char but a String, so I can't use any Char methods.
implicit val mapTest: Map[String, Set[String]] = Map(
  "foo" -> Set("foo")
)
val stringTest: String = "test"
// This doesn't compile
stringTest.head.isLetter
stringTest.exists(_.isLetter)
If I remove the implicit, it works fine. It also works if I change the Map definition to Map[String, String], but using any collection as the value type makes it fail to compile.
Is there a way to make it work? I also tried annotating the result of head as Char, but it still fails.

Don't expose mapTest in implicit scope: since a Map[String, Set[String]] offers .apply(key: String): Set[String], it is treated as an implicit conversion String => Set[String].
In any case, implicit conversions should be defined and used with caution, due to readability drawbacks.
Using .head (or .get) is a code smell.
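For instance, a minimal sketch of both suggestions combined (this snippet is not from the original question; the map is kept out of implicit scope and the partial .head is replaced by headOption):
val mapTest: Map[String, Set[String]] = Map("foo" -> Set("foo")) // no longer implicit
val stringTest: String = "test"
stringTest.headOption.exists(_.isLetter) // true, and the element type is Char again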

Implicit conversions may lead to seemingly magical changes to the semantics of the code and a loss of control by the developer, which your example nicely demonstrates. For this reason they are often discouraged, and we are advised to find ways to refactor the code to avoid them if at all possible.
If there is no way around it, consider moving the implicit conversion out of scope by wrapping it in another object, like so:
object FooImplicits {
  implicit def mapTest: Map[String, Set[String]] = Map(
    "foo" -> Set("foo")
  )
}
and then try to avoid importing them at the top level, but instead import FooImplicits._ only where it is absolutely necessary.
Note that implicits are acceptable when defining typeclasses and extension methods.
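For example, an extension method added through an implicit class only enriches the String API and never changes what existing calls resolve to; a minimal sketch (the class and method names are made up for illustration):
implicit class StringLetterOps(s: String) {
  def startsWithLetter: Boolean = s.headOption.exists(_.isLetter)
}
"test".startsWithLetter // true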

Strange NPE with io.circe.Decoder

I have 2 variables declared as follows,
implicit val decodeURL: Decoder[URL] = Decoder.decodeString.emapTry(s => Try(new URL(s))) // #1
implicit val decodeCompleted = Decoder[List[URL]].prepare(_.downField("completed")) // #2
Both lines compile and run.
However, if I annotate #2 with a type, i.e. implicit val decodeCompleted: Decoder[List[URL]] = Decoder[List[URL]].prepare(_.downField("completed")), it still compiles but #2 throws a NullPointerException (NPE) at runtime.
How could this happen? I don't know if this is a Circe issue or just plain Scala. Why is #2 different from #1? Thanks
The issue is that you are supposed to always annotate implicits with an explicit type.
When you leave them unannotated, you get into a sort of undefined/invalid-behavior zone. That is why, with the unannotated version, something like this happens:
I want to use Decoder[List[URL]]
there is no (annotated) Decoder[List[URL]] implicit in the scope
let's derive it normally (no need for generic.auto._ because the instance comes from the Decoder companion object)
once derived, .prepare(_.downField("completed")) is called on it
the final result is of type Decoder[List[URL]], so that is the inferred type of decodeCompleted
Now, what happens if you annotate?
I want to use Decoder[List[URL]]
there is decodeCompleted, declared as something that fulfills that requirement
let's use the decodeCompleted value
but decodeCompleted hasn't been initialized! In fact, we are initializing it right now!
as a result you end up with decodeCompleted = null
This is virtually equal to:
val decodeCompleted = decodeCompleted
except that the layer of indirection gets in the way of the compiler discovering the absurdity of this. (If you replaced val with def, you would end up with infinite recursion and a stack overflow):
# implicit val s: String = implicitly[String]
s: String = null
# implicit def s: String = implicitly[String]
defined function s
# s
java.lang.StackOverflowError
ammonite.$sess.cmd1$.s(cmd1.sc:1)
ammonite.$sess.cmd1$.s(cmd1.sc:1)
ammonite.$sess.cmd1$.s(cmd1.sc:1)
ammonite.$sess.cmd1$.s(cmd1.sc:1)
Yup, it's messed up by the compiler. You did nothing wrong, and in a perfect world it would work.
The Scala community mitigates that by distinguishing between:
auto derivation - when you need an implicit somewhere and it is automatically derived, without you defining a value for it
semiauto derivation - when you derive into a value, and make that value implicit
In the latter case, you usually have some utilities like:
import io.circe.generic.semiauto._
implicit val decodeCompleted: Decoder[List[URL]] = deriveDecoder[List[URL]]
It works because it takes a DerivedDecoder[A] implicit and then extracts a Decoder[A] from it, so you never end up in the implicit val a: A = implicitly[A] scenario.
Indeed, the problem is that you introduce a recursive val, as @MateuszKubuszok explained.
The most straightforward—although slightly ugly—workaround is:
implicit val decodeCompleted: Decoder[List[URL]] = {
  val decodeCompleted = null // shadows the outer implicit inside this block
  Decoder[List[URL]].prepare(_.downField("completed"))
}
By shadowing decodeCompleted in the right-hand side, implicit search will no longer consider it as a candidate inside that code block, because it can no longer be referenced.
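Another way to sidestep the self-reference (a sketch, not part of the original answers) is to construct the list decoder without triggering implicit search at all, by passing decodeURL to circe's Decoder.decodeList explicitly:
implicit val decodeCompleted: Decoder[List[URL]] =
  Decoder.decodeList(decodeURL).prepare(_.downField("completed"))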

Sequencing a StateMonad in Cats

I just had a look at the Cats library in Scala, more specifically the State monad.
As a toy example, I wanted to create some logic that splits off a chunk of a potentially large string (held in a StringBuilder) and returns the split-off String together with the remaining StringBuilder:
import cats.data.State

object Splitter {
  def apply(maxSize: Int, overlap: Int): State[StringBuilder, String] = State(
    builder => {
      val splitPoint = math.min(builder.size, maxSize + overlap)
      (builder.drop(maxSize), builder.substring(0, splitPoint))
    }
  )
}
Running one step of the State monad works fine but I wanted to chain all the steps until the StringBuilder eventually is empty:
val stop = State.inspect((s: StringBuilder) => s.isEmpty)
Splitter(3, 2).untilM[Vector](stop).run(new StringBuilder("tarsntiars"))
However, this doesn't work as untilM is a member of the Monad trait and there are no implicit conversions in scope. What works is:
val monad = StateT.catsDataMonadForStateT[Eval, StringBuilder]
monad.untilM[List, String](Splitter(3, 2))(stop).run(new StringBuilder("tarsntiars"))
However, I think the shorter version is much more readable, so I am wondering why it doesn't work. Why doesn't the usual MonadOps mechanism work here?
After SI-2712 was fixed, the Unapply workaround was removed from Cats: https://github.com/typelevel/cats/pull/1583. Now you need the -Ypartial-unification compiler flag (assuming you are using Scala 2.11 or 2.12) in order for State to be treated as a Monad.
Scalaz still has the Unapply machinery so your code should work with Scalaz without the compiler flag.
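For reference, a minimal sketch of enabling the flag in an sbt build (assuming sbt, which the question does not show):
scalacOptions += "-Ypartial-unification"
On Scala 2.13 the flag no longer exists because partial unification is always enabled.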

What is the best way to create and pass around dictionaries containing multiple types in scala?

By dictionary I mean a lightweight map from names to values that can be used as the return value of a method.
Options that I'm aware of include making case classes, creating anon objects, and making maps from Strings -> Any.
Case classes require mental overhead to create (names), but are strongly typed.
Anon objects don't seem that well documented and it's unclear to me how to use them as arguments since there is no named type.
Maps from String -> Any require casting for retrieval.
Is there anything better?
Ideally these could be built from json and transformed back into it when appropriate.
I don't need static typing (though it would be nice, I can see how it would be impossible) - but I do want to avoid explicit casting.
Here's the fundamental problem with what you want:
def get(key: String): Option[T] = ...
val r = map.get("key")
The type of r will be defined from the return type of get -- so, what should that type be? From where could it be defined? If you make it a type parameter, then it's relatively easy:
import scala.collection.mutable.{Map => MMap}
val map: MMap[String, (Manifest[_], Any)] = MMap.empty
def get[T : Manifest](key: String): Option[T] = map.get(key).filter(_._1 <:< manifest[T]).map(_._2.asInstanceOf[T])
def put[T : Manifest](key: String, obj: T) = map(key) = manifest[T] -> obj
Example:
scala> put("abc", 2)
scala> put("def", true)
scala> get[Boolean]("abc")
res2: Option[Boolean] = None
scala> get[Int]("abc")
res3: Option[Int] = Some(2)
The problem, of course, is that you have to tell the compiler what type you expect to be stored on the map under that key. Unfortunately, there is simply no way around that: the compiler cannot know what type will be stored under that key at compile time.
Any solution you take you'll end up with this same problem: somehow or other, you'll have to tell the compiler what type should be returned.
Now, this shouldn't be a burden in a Scala program. Take that r above... you'll then use that r for something, right? That something you are using it for will have methods appropriate to some type, and since you know what the methods are, then you must also know what the type of r must be.
If this isn't the case, then there's something fundamentally wrong with the code -- or, perhaps, you haven't progressed from wanting the map to knowing what you'll do with it.
So you want to parse JSON and turn it into objects that resemble the JavaScript objects described in the JSON input? If you want static typing, case classes are pretty much your only option, and there are already libraries handling this, for example lift-json.
Another option is to use Scala 2.9's experimental support for dynamic typing. That will give you elegant syntax at the expense of type safety.
You can use an approach I've seen in the Casbah library, where you explicitly pass a type parameter into the get method and cast the actual value inside it. Here is a quick example:
case class MultiTypeDictionary(m: Map[String, Any]) {
  def getAs[T <: Any](k: String)(implicit mf: Manifest[T]): T =
    cast(m.get(k).getOrElse { throw new IllegalArgumentException })(mf)
  private def cast[T <: Any : Manifest](a: Any): T =
    a.asInstanceOf[T]
}
implicit def map2multiTypeDictionary(m: Map[String, Any]) =
  MultiTypeDictionary(m)
val dict: MultiTypeDictionary = Map("1" -> 1, "2" -> 2.0, "3" -> "3")
val a: Int = dict.getAs("1")
val b: Int = dict.getAs("2") // ClassCastException
val c: Int = dict.getAs("4") // IllegalArgumentException
You should note that there are no real compile-time checks, so you have to deal with all the drawbacks of exceptions.
UPD Working MultiTypeDictionary class
If you have only a limited number of types which can occur as values, you can use some kind of union type (a.k.a. disjoint type), having e.g. a Map[Foo, Bar | Baz | Buz | Blargh]. If you have only two possibilities, you can use Either[A,B], giving you a Map[Foo, Either[Bar, Baz]] (see the sketch after the links below). For three types you might cheat and use Map[Foo, Either[Bar, Either[Baz, Buz]]], but this syntax obviously doesn't scale well. If you have more types you can use things like...
http://cleverlytitled.blogspot.com/2009/03/disjoint-bounded-views-redux.html
http://svn.assembla.com/svn/metascala/src/metascala/OneOfs.scala
http://www.chuusai.com/2011/06/09/scala-union-types-curry-howard/
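To make the two-type case concrete, here is a minimal sketch (keys and values invented for the example); pattern matching on the Either recovers the precise type without any casting:
val dict: Map[String, Either[Int, String]] = Map(
  "count" -> Left(42),
  "name" -> Right("Fred")
)
dict("count") match {
  case Left(n) => println(n + 1) // Int branch
  case Right(s) => println(s.length) // String branch
}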

Scala - implicit conversion with unapply

I'd like an extractor to implicitly convert its parameters, but it doesn't seem to work. Consider this very simple case:
case class MyString(s: String) {}
implicit def string2mystring(x: String): MyString = new MyString(x)
implicit def mystring2string(x: MyString) = x.s
object Apply {
  def unapply(s: MyString): Option[String] = Some(s)
}
But I'm not able to use it as I would expect:
val Apply(z) = "a" // error: scrutinee is incompatible with pattern type
Can anyone explain why it fails to convert the parameter from String to MyString? I would expect it to call string2mystring("a") on the fly. Clearly I could work around the issue by saying val Apply(y) = MyString("a"), but it doesn't seem like I should have to do that.
Note: This question is similar to this one, but 1) that one doesn't really have a good answer for why this is happening, 2) the example is more complex than it needs to be.
Implicit conversions are not applied when pattern matching. That's not a bug or a problem with your code, it's simply a design decision of the creators of Scala.
To fix it, you should write another extractor that accepts a String — which in turn can call your implicit conversion.
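A minimal sketch of that idea (the extractor name ApplyStr is made up here): the second extractor accepts the raw String and applies the conversion itself before delegating to Apply:
object ApplyStr {
  def unapply(raw: String): Option[String] = Apply.unapply(string2mystring(raw))
}
val ApplyStr(z) = "a" // z == "a"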
Alternatively, you can try with a view bound, which seems to work as well, and will also work if you later define other implicit conversions to MyString:
object Apply {
  def unapply[S <% MyString](s: S): Option[String] = Some(s.s)
}

Could/should an implicit conversion from T to Option[T] be added/created in Scala?

Is this an opportunity to make things a bit more efficient (for the programmer)? I find it gets a bit tiresome having to wrap things in Some, e.g. Some(5). What about something like this:
implicit def T2OptionT[T](x: T): Option[T] = if (x == null) None else Some(x)
You would lose some type safety and possibly cause confusion.
For example:
val iThinkThisIsAList = 2
for (i <- iThinkThisIsAList) yield { i + 1 }
I (for whatever reason) thought I had a list, and it didn't get caught by the compiler when I iterated over it because it was auto-converted to an Option[Int].
I should add that I think this is a great implicit to have explicitly imported, just probably not a global default.
Note that you could use the explicit implicit pattern which would avoid confusion and keep code terse at the same time.
What I mean by explicit implicit is that, rather than having a direct conversion from T to Option[T], you could have a conversion to a wrapper object which provides the means to do the conversion from T to Option[T].
class Optionable[T <: AnyRef](value: T) {
  def toOption: Option[T] = if (value == null) None else Some(value)
}
implicit def anyRefToOptionable[T <: AnyRef](value: T) = new Optionable(value)
... I might find a better name for it than Optionable, but now you can write code like:
val x: String = "foo"
x.toOption // Some("foo")
val y: String = null
y.toOption // None
I believe that this way is fully transparent and aids in the understanding of the written code - eliminating all checks for null in a nice way.
Note the T <: AnyRef - you should only do this implicit conversion for types that allow null values, which by definition are reference types.
The general guidelines for implicit conversions are as follows:
When you need to add members to a type (a la "open classes"; aka the "pimp my library" pattern), convert to a new type which extends AnyRef and which only defines the members you need.
When you need to "correct" an inheritance hierarchy. Thus, you have some type A which should have subclassed B, but didn't for some reason. In that case, you can define an implicit conversion from A to B.
These are the only cases where it is appropriate to define an implicit conversion. Any other conversion runs into type safety and correctness issues in a hurry.
It really doesn't make any sense for T to extend Option[T], and obviously the purpose of the conversion is not simply the addition of members. Thus, such a conversion would be inadvisable.
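To make the second case concrete, here is a minimal sketch (all names invented for the example): a third-party type that logically should have implemented our trait is bridged with a single conversion, and nothing else leaks through it:
trait Stoppable { def stop(): Unit }
class LegacyTimer { def cancel(): Unit = println("cancelled") } // imagine this comes from a library we cannot modify
implicit def legacyTimerAsStoppable(t: LegacyTimer): Stoppable =
  new Stoppable { def stop(): Unit = t.cancel() }
def shutdown(s: Stoppable): Unit = s.stop()
shutdown(new LegacyTimer) // the conversion supplies the "missing" subtyping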
It would seem that this could be confusing to other developers, as they read your code.
Generally, it seems, implicits work to help convert from one object to another, cutting out confusing conversion code that can clutter a codebase; but if I have some variable and it somehow becomes a Some, that would seem bothersome.
You may want to put some code showing it being used, to see how confusing it would be.
You could also try to overload the method:
def having(key: String) = having(key, None)
def having(key: String, default: String) = having(key, Some(default))
def having(key: String, default: Option[String] = Option.empty): Create = {
  keys += ((key, default))
  this
}
That looks good to me, except it may not work for a primitive T (which can't be null). I guess a non-specialized generic always gets boxed primitives, so probably it's fine.