parTraverse is not a member of NonEmptyList - scala

I am following the instructions on the cats IO website to run a sequence of effects in parallel.
My code looks like:
val maybeNonEmptyList: Option[NonEmptyList[Urls]] = NonEmptyList.fromList(urls)
val maybeDownloads: Option[IO[NonEmptyList[Either[Error, Files]]]] = maybeNonEmptyList map { urls =>
  urls.parTraverse(url => downloader(url))
}
But I get a compile-time error saying:
value parTraverse is not a member of cats.data.NonEmptyList[Urls]
[error] urls.parTraverse(url => downloader(url))
I have imported the following:
import cats.data.{EitherT, NonEmptyList}
import cats.effect.{ContextShift, IO, Timer}
import cats.implicits._
import cats.syntax.parallel._
and I also have the following implicits:
implicit val cs: ContextShift[IO] = IO.contextShift(ExecutionContext.global)
implicit val timer: Timer[IO] = IO.timer(ExecutionContext.global)
Why do I still get this error?

This is caused by the implicit enrichment being imported twice, which makes it ambiguous:
import cats.implicits._
import cats.syntax.parallel._
As of recent versions of cats, the implicits imports are never required, only the syntax ones.
The recommended pattern is to use only import cats.syntax.all._
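For reference, a minimal sketch of the working version with a single syntax import (assuming the same Urls, Files, Error and downloader definitions from the question, and cats-effect 2):
import cats.data.NonEmptyList
import cats.effect.{ContextShift, IO, Timer}
import cats.syntax.all._ // single import; do not also import cats.implicits._ or cats.syntax.parallel._

import scala.concurrent.ExecutionContext

implicit val cs: ContextShift[IO] = IO.contextShift(ExecutionContext.global)
implicit val timer: Timer[IO] = IO.timer(ExecutionContext.global)

val maybeNonEmptyList: Option[NonEmptyList[Urls]] = NonEmptyList.fromList(urls)
val maybeDownloads: Option[IO[NonEmptyList[Either[Error, Files]]]] = maybeNonEmptyList map { urls =>
  urls.parTraverse(url => downloader(url)) // Parallel[IO] is derived from the ContextShift above
}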

Related

Using a package object in Scala?

I have a scala project that uses akka. I want the execution context to be available throughout the project. So I've created a package object like this:
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import com.datastax.driver.core.Cluster

package object connector {
  implicit val system = ActorSystem()
  implicit val mat = ActorMaterializer()
  implicit val executionContext = executionContext
  implicit val session = Cluster
    .builder
    .addContactPoints("localhost")
    .withPort(9042)
    .build()
    .connect()
}
In the same package I have this file:
import akka.stream.alpakka.cassandra.scaladsl.CassandraSource
import akka.stream.scaladsl.Sink
import com.datastax.driver.core.{Row, Session, SimpleStatement}

import scala.collection.immutable
import scala.concurrent.Future

object CassandraService {
  def selectFromCassandra()() = {
    val statement = new SimpleStatement(s"SELECT * FROM animals.alpakka").setFetchSize(20)
    val rows: Future[immutable.Seq[Row]] = CassandraSource(statement).runWith(Sink.seq)
    rows.map { item =>
      print(item)
    }
  }
}
However, I am getting the compiler error that no execution context or session can be found. My understanding of the package keyword was that everything in that object would be available throughout the package, but that does not seem to work. Grateful if this could be explained to me!
Your implementation should look something like this; I hope it helps.
package.scala
package com.app.akka

package object connector {
  // Do some code here..
}
CassandraService.scala
package com.app.akka

import com.app.akka.connector._

object CassandraService {
  def selectFromCassandra() = {
    // Do some code here..
  }
}
You have two issues with your current code.
When you compile your package object connector, it throws the error below:
Error:(14, 35) recursive value executionContext needs type
implicit val executionContext = executionContext
The issue is with the line implicit val executionContext = executionContext.
The solution for this issue would be as below:
implicit val executionContext = ExecutionContext
When we compile CassandraService, it throws the error below:
Error:(17, 13) Cannot find an implicit ExecutionContext. You might pass
an (implicit ec: ExecutionContext) parameter to your method
or import scala.concurrent.ExecutionContext.Implicits.global.
rows.map{item =>
The error clearly says that we either need to pass an ExecutionContext as an implicit parameter or import scala.concurrent.ExecutionContext.Implicits.global. On my system both issues are resolved and it compiles successfully. I have attached the code for your reference.
package com.apache.scala

import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import com.datastax.driver.core.Cluster

import scala.concurrent.ExecutionContext

package object connector {
  implicit val system = ActorSystem()
  implicit val mat = ActorMaterializer()
  implicit val executionContext = ExecutionContext
  implicit val session = Cluster
    .builder
    .addContactPoints("localhost")
    .withPort(9042)
    .build()
    .connect()
}
package com.apache.scala.connector

import akka.stream.alpakka.cassandra.scaladsl.CassandraSource
import akka.stream.scaladsl.Sink
import com.datastax.driver.core.{Row, SimpleStatement}

import scala.collection.immutable
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future

object CassandraService {
  def selectFromCassandra() = {
    val statement = new SimpleStatement(s"SELECT * FROM animals.alpakka").setFetchSize(20)
    val rows: Future[immutable.Seq[Row]] = CassandraSource(statement).runWith(Sink.seq)
    rows.map { item =>
      print(item)
    }
  }
}
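As a side note, if the goal is just to have an ExecutionContext available throughout the package, a common alternative is to expose the actor system's dispatcher instead of the ExecutionContext companion object, which also removes the need for the Implicits.global import in CassandraService. A sketch only, reusing the package name and Cassandra settings from the answer above:
package com.apache.scala

import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import com.datastax.driver.core.Cluster

import scala.concurrent.ExecutionContext

package object connector {
  implicit val system: ActorSystem = ActorSystem()
  implicit val mat: ActorMaterializer = ActorMaterializer()

  // The actor system's dispatcher is an actual ExecutionContext instance,
  // so it can be picked up implicitly by Future.map and friends.
  implicit val executionContext: ExecutionContext = system.dispatcher

  implicit val session = Cluster
    .builder
    .addContactPoints("localhost")
    .withPort(9042)
    .build()
    .connect()
}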

scala json4s, can't convert LocalDate

I'm having an issue with org.json4s (Scala), org.joda.time.LocalDate and org.json4s.ext.JodaTimeSerializers. Given that JodaTimeSerializers.all has a LocalDate conversion in it, I was hoping that I could do the following, but I get the exception shown afterwards:
scala> import org.json4s.JString
import org.json4s.JString
scala> import org.joda.time.LocalDate
import org.joda.time.LocalDate
scala> import org.json4s.ext.JodaTimeSerializers
import org.json4s.ext.JodaTimeSerializers
scala> import org.json4s._
import org.json4s._
scala> implicit val formats: Formats = DefaultFormats ++ JodaTimeSerializers.all
formats: org.json4s.Formats = org.json4s.Formats$$anon$3#693d3d7f
scala> val jDate = JString("2016-01-26")
jDate: org.json4s.JsonAST.JString = JString(2016-01-26)
scala> jDate.extract[LocalDate]
org.json4s.package$MappingException: Can't convert JString(2016-01-26) to class org.joda.time.LocalDate
On the other hand, this works (not surprisingly):
scala> val jodaDate = LocalDate.parse(jDate.values)
I've tried to create a custom Serializer, which never gets called because it seems to fall into the JodaSerializer realm. I have also created a custom Deserializer that works with java.time.LocalDate (ints and bytes from strings), but java.time.LocalDate messes with some other code, which is likely a different question... this one is: I'm looking for clues as to why JodaTimeSerializers.all cannot parse JString(2016-01-26), or any date string.
The top of the exception is: org.json4s.package$MappingException:
Can't convert JString(2016-01-01) to class org.joda.time.LocalDate (JodaTimeSerializers.scala:126)
Edit
This is still biting me, so I dug a bit further, and it's reproducible with the following:
import org.joda.time.LocalDate
import org.json4s.ext.JodaTimeSerializers
import org.json4s._

implicit val formats: Formats = DefaultFormats ++ JodaTimeSerializers.all

case class MyDate(myDate: LocalDate)

val stringyDate =
  """
  {
    "myDate" : "2016-01-01"
  }
  """

import org.json4s.jackson.JsonMethods.parse

parse(stringyDate).extract[MyDate]
org.json4s.package$MappingException: No usable value for myDate
Can't convert JString(2016-01-01) to class org.joda.time.LocalDate
This seems to happen because, on line 125 of JodaTimeSerializers.scala, the value is not a JObject but a JString, so it falls into the value case on line 126, which throws the error.
I'm adding this here in case it bites someone else, and hopefully to get some assistance fixing it... but now I'm late. I have moved the code locally and hope to come up with a fix tomorrow.
This works. I define a custom serializer for LocalDate.
import org.json4s.JString
import org.joda.time.LocalDate
import org.json4s._

case object LocalDateSerializer
    extends CustomSerializer[LocalDate](format =>
      ({
        case JString(s) => LocalDate.parse(s)
      }, Map() /* TO BE IMPLEMENTED */)
    )

implicit val formats: Formats = DefaultFormats + LocalDateSerializer

val jDate = JString("2016-01-26")
jDate.extract[LocalDate] // res173: org.joda.time.LocalDate = 2016-01-26
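If you also need to serialize LocalDate values back to JSON, the Map() /* TO BE IMPLEMENTED */ half can be replaced with a partial function from LocalDate to JString. A possible sketch, assuming ISO-8601 output is acceptable:
import org.joda.time.LocalDate
import org.json4s._

case object LocalDateSerializer
    extends CustomSerializer[LocalDate](format =>
      (
        { case JString(s) => LocalDate.parse(s) },   // JSON string -> LocalDate
        { case d: LocalDate => JString(d.toString) } // LocalDate -> ISO-8601 JSON string, e.g. "2016-01-26"
      )
    )

implicit val formats: Formats = DefaultFormats + LocalDateSerializer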
The new serializers are included in the library, but not in the default formats:
implicit val formats: Formats = DefaultFormats ++ JavaTimeSerializers.all

Scala Queue import error

If I import scala.collection._ and create a queue:
import scala.collection._
var queue = new Queue[Component]();
I get the following error:
error: not found: type Queue
However, if I also add
import scala.collection.mutable.Queue
The error will disappear.
Why is this happening?
Shouldn't scala.collection._ contain scala.collection.mutable.Queue?
You have to know how the Scala collections library is structured. It splits collections based on whether they are mutable or immutable.
Queue lives in both the scala.collection.mutable and scala.collection.immutable packages. You have to specify which one you want, e.g.:
scala> import scala.collection.mutable._
import scala.collection.mutable._
scala> var q = new Queue[Int]()
q: scala.collection.mutable.Queue[Int] = Queue()
scala> import scala.collection.immutable._
import scala.collection.immutable._
scala> var q = Queue[Int]()
q: scala.collection.immutable.Queue[Int] = Queue()
After import scala.collection._ you can use mutable.Queue; you could write just Queue if there were a scala.collection.Queue (or a scala.Queue, java.lang.Queue, or scala.Predef.Queue, since members of scala, java.lang, and scala.Predef are imported by default), but there isn't one.
It should be easy to see why it works this way: otherwise, the compiler (or anyone reading your code) would have no idea where they should look for the type: do you want scala.collection.Queue, scala.collection.mutable.Queue, or scala.collection.some.subpackage.from.a.library.Queue?
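If you ever need both variants side by side, a minimal sketch is to import the packages (or rename on import) and qualify each use:
import scala.collection.mutable
import scala.collection.immutable.{Queue => ImmutableQueue}

val mq = mutable.Queue[Int]()   // scala.collection.mutable.Queue
mq.enqueue(1)                   // mutates in place

val iq = ImmutableQueue[Int]()  // scala.collection.immutable.Queue under another name
val iq2 = iq.enqueue(1)         // returns a new queue; iq is unchanged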

value lookup is not a member of org.apache.spark.rdd.RDD[(String, String)]

I have got a problem when I tried to compile my Scala program with SBT.
I have imported the class I need. Here is part of my code:
import java.io.File
import java.io.FileWriter
import java.io.PrintWriter
import java.io.IOException
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.rdd.PairRDDFunctions
import scala.util.Random

......

val data = sc.textFile(path)
val kv = data.map { s =>
  val a = s.split(",")
  (a(0), a(1))
}.cache()

kv.first()
val start = System.currentTimeMillis()
for (tg <- target) {
  kv.lookup(tg.toString)
}
The error detail is:
value lookup is not a member of org.apache.spark.rdd.RDD[(String, String)]
[error] kv.lookup(tg.toString)
What confuses me is that I have imported org.apache.spark.rdd.PairRDDFunctions, but it doesn't work. And when I run this in the Spark shell, it runs well.
You need
import org.apache.spark.SparkContext._
to have access to the implicits that let you use PairRDDFunctions on an RDD of type (K, V). There is no need to import PairRDDFunctions directly.
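A minimal sketch of the fix, assuming the same sc, path and target values from the question:
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.SparkContext._ // implicit conversion RDD[(K, V)] -> PairRDDFunctions[K, V]

val data = sc.textFile(path)
val kv = data.map { s =>
  val a = s.split(",")
  (a(0), a(1))
}.cache()

for (tg <- target) {
  kv.lookup(tg.toString) // lookup now resolves, since kv is an RDD[(String, String)]
}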

reduceByKey method not being found in IntelliJ

Here is the code I'm trying out for reduceByKey:
import org.apache.spark.rdd.RDD
import org.apache.spark.SparkContext._
import org.apache.spark.SparkContext
import scala.math.random
import org.apache.spark._
import org.apache.spark.storage.StorageLevel

object MapReduce {
  def main(args: Array[String]) {
    val sc = new SparkContext("local[4]", "")
    val file = sc.textFile("c:/data-files/myfile.txt")
    val counts = file.flatMap(line => line.split(" "))
      .map(word => (word, 1))
      .reduceByKey(_ + _)
  }
}
It gives the compiler error: "cannot resolve symbol reduceByKey"
When I hover over the implementation of reduceByKey, it shows three possible implementations, so it appears it is being found?
You need to add the following import to your file:
import org.apache.spark.SparkContext._
Spark documentation:
"In Scala, these operations are automatically available on RDDs containing Tuple2 objects (the built-in
tuples in the language, created by simply writing (a, b)), as long as you import org.apache.spark.SparkContext._ in your program to enable Spark’s implicit conversions. The key-value pair operations are available in the PairRDDFunctions class, which automatically wraps around an RDD of tuples if you import the conversions."
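To make the mechanism concrete, here is a sketch (with a hypothetical app name, reusing the input file from the question) that calls reduceByKey through the explicit PairRDDFunctions wrapper, which is what the implicit conversion applies for you:
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._ // enables the RDD -> PairRDDFunctions conversion
import org.apache.spark.rdd.PairRDDFunctions

object MapReduceSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext("local[4]", "map-reduce-sketch")
    val pairs = sc.textFile("c:/data-files/myfile.txt")
      .flatMap(_.split(" "))
      .map(word => (word, 1))

    // Equivalent to pairs.reduceByKey(_ + _): the import above lets the compiler insert this wrapping for you.
    val counts = new PairRDDFunctions(pairs).reduceByKey(_ + _)

    counts.collect().foreach(println)
    sc.stop()
  }
}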
It seems as if the documented behavior has changed in Spark 1.4.x. To have IntelliJ recognize the implicit conversions you now have to add the following import:
import org.apache.spark.rdd.RDD._
I have noticed that at times IntelliJ is unable to resolve methods that are imported implicitly via PairRDDFunctions (https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/rdd/PairRDDFunctions.scala).
The methods imported implicitly include the reduceByKey* and reduceByKeyAndWindow* methods. I do not have a general solution at this time, except that yes, you can safely ignore the IntelliSense errors.