List[String] does not have a member traverse from cats - scala

I am attempting to convert a List[Either[Int]] to an Either[List[Int]] using traverse from cats.
Error
[error] StringCalculator.scala:19:15: value traverseU is not a member of List[String]
[error] numList.traverseU(x => {
Code
import cats.Semigroup
import cats.syntax.traverse._
import cats.implicits._
val numList = numbers.split(',').toList
numList.traverseU(x => {
  try {
    Right(x.toInt)
  } catch {
    case e: NumberFormatException => Left(0)
  }
})
.fold(
  x => {},
  x => {}
)
I have tried the same with traverse instead of traverseU as well.
Config (for cats)
lazy val root = (project in file(".")).
  settings(
    inThisBuild(List(
      organization := "com.example",
      scalaVersion := "2.12.4",
      scalacOptions += "-Ypartial-unification",
      version := "0.1.0-SNAPSHOT"
    )),
    name := "Hello",
    libraryDependencies += cats,
    libraryDependencies += scalaTest % Test
  )

It should indeed be just traverse, as long as you're using a recent cats version (1.0.x). However, you can't import both cats.syntax.traverse._ and cats.implicits._, as they will conflict.
Unfortunately, whenever the Scala compiler sees conflicting implicits, it gives you a really unhelpful message.
Remove the cats.syntax.traverse._ import and it should work fine.
For further information, check out the import guide.

You only need cats.implicits._. For example:
import cats.implicits._
val numbers = "1,2,3"
val numList = numbers.split(',').toList
val lst = numList.traverse(x => scala.util.Try(x.toInt).toEither.leftMap(_ => 0))
println(lst)
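For reference, here is a short sketch of what that traverse gives you (assuming cats 1.x; the helper name parseAll is purely illustrative, not from the question): traversing with Either short-circuits on the first Left.
import cats.implicits._

// Illustrative helper (not from the question): parse a comma-separated string.
def parseAll(numbers: String): Either[Int, List[Int]] =
  numbers.split(',').toList
    .traverse(x => scala.util.Try(x.toInt).toEither.leftMap(_ => 0))

parseAll("1,2,3") // Right(List(1, 2, 3))
parseAll("1,x,3") // Left(0) -- stops at the first element that fails to parse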

Related

Scala conditional compilation

I'm writing a Scala program and I want it to work with two versions of a big library.
This big library's version 2 changes the API very slightly (only one class constructor signature has an extra parameter).
// Lib v1
class APIClass(a: String, b: Integer) {
  ...
}

// Lib v2
class APIClass(a: String, b: Integer, c: String) {
  ...
}

// And my code extends APIClass... and I have no #IFDEF
class MyClass() extends APIClass("x", 1) { // <-- would be APIClass("x", 1, "y") in library v2
  ...
}
I really don't want to branch my code, because then I'd need to maintain two branches, and tomorrow 3, 4, ... branches for tiny API changes :(
Ideally we'd have a simple preprocessor in Scala, but the idea was rejected long ago by Scala community.
One thing I couldn't really grasp: can Scalameta help simulate a preprocessor in this case, i.e. parse one of two source files conditionally on, say, an environment variable known at compile time?
If not, how would you approach this real life problem?
1. The C preprocessor (cpp) can be used with Java/Scala if you run it before javac or scalac (there is also Manifold).
2. If you really want conditional compilation in Scala, you can use a macro annotation (expanded at compile time):
macros/src/main/scala/extendsAPIClass.scala
import scala.annotation.{StaticAnnotation, compileTimeOnly}
import scala.language.experimental.macros
import scala.reflect.macros.blackbox

@compileTimeOnly("enable macro paradise")
class extendsAPIClass extends StaticAnnotation {
  def macroTransform(annottees: Any*): Any = macro ExtendsAPIClassMacro.impl
}

object ExtendsAPIClassMacro {
  def impl(c: blackbox.Context)(annottees: c.Tree*): c.Tree = {
    import c.universe._
    annottees match {
      case q"$mods class $tpname[..$tparams] $ctorMods(...$paramss) extends { ..$earlydefns } with ..$parents { $self => ..$stats }" :: tail =>
        def updateParents(parents: Seq[Tree], args: Seq[Tree]) =
          q"""${tq"APIClass"}(..$args)""" +: parents.filter { case tq"scala.AnyRef" => false; case _ => true }

        val parents1 = sys.env.get("LIB_VERSION") match {
          case Some("1") => updateParents(parents, Seq(q""" "x" """, q"1"))
          case Some("2") => updateParents(parents, Seq(q""" "x" """, q"1", q""" "y" """))
          case None      => parents
        }

        q"""
          $mods class $tpname[..$tparams] $ctorMods(...$paramss) extends { ..$earlydefns } with ..$parents1 { $self => ..$stats }
          ..$tail
        """
    }
  }
}
core/src/main/scala/MyClass.scala (if LIB_VERSION=2)
@extendsAPIClass
class MyClass

//Warning:scalac: {
//  class MyClass extends APIClass("x", 1, "y") {
//    def <init>() = {
//      super.<init>();
//      ()
//    }
//  };
//  ()
//}
build.sbt
ThisBuild / name := "macrosdemo"

lazy val commonSettings = Seq(
  scalaVersion := "2.13.2",
  organization := "com.example",
  version := "1.0.0",
  scalacOptions ++= Seq(
    "-Ymacro-debug-lite",
    "-Ymacro-annotations",
  ),
)

lazy val macros: Project = (project in file("macros")).settings(
  commonSettings,
  libraryDependencies ++= Seq(
    scalaOrganization.value % "scala-reflect" % scalaVersion.value,
  )
)

lazy val core: Project = (project in file("core")).aggregate(macros).dependsOn(macros).settings(
  commonSettings,
)
3. Alternatively, you can use Scalameta for code generation (run before compile time):
build.sbt
ThisBuild / name := "scalametacodegendemo"

lazy val commonSettings = Seq(
  scalaVersion := "2.13.2",
  organization := "com.example",
  version := "1.0.0",
)

lazy val common = project
  .settings(
    commonSettings,
  )

lazy val in = project
  .dependsOn(common)
  .settings(
    commonSettings,
  )

lazy val out = project
  .dependsOn(common)
  .settings(
    sourceGenerators in Compile += Def.task {
      Generator.gen(
        inputDir = sourceDirectory.in(in, Compile).value,
        outputDir = sourceManaged.in(Compile).value
      )
    }.taskValue,
    commonSettings,
  )
project/build.sbt
libraryDependencies += "org.scalameta" %% "scalameta" % "4.3.10"
project/Generator.scala
import sbt._

object Generator {
  def gen(inputDir: File, outputDir: File): Seq[File] = {
    val finder: PathFinder = inputDir ** "*.scala"
    for (inputFile <- finder.get) yield {
      val inputStr = IO.read(inputFile)
      val outputFile = outputDir / inputFile.toURI.toString.stripPrefix(inputDir.toURI.toString)
      val outputStr = Transformer.transform(inputStr)
      IO.write(outputFile, outputStr)
      outputFile
    }
  }
}
project/Transformer.scala
import scala.meta._

object Transformer {
  def transform(input: String): String = {
    val (v1on, v2on) = sys.env.get("LIB_VERSION") match {
      case Some("1") => (true, false)
      case Some("2") => (false, true)
      case None      => (false, false)
    }
    var v1 = false
    var v2 = false
    input.tokenize.get.filter(_.text match {
      case "// Lib v1" =>
        v1 = true
        false
      case "// End Lib v1" =>
        v1 = false
        false
      case "// Lib v2" =>
        v2 = true
        false
      case "// End Lib v2" =>
        v2 = false
        false
      case _ => (v1on && v1) || (v2on && v2) || (!v1 && !v2)
    }).mkString("")
  }
}
common/src/main/scala/com/api/APIClass.scala
package com.api
class APIClass(a: String, b: Integer, c: String)
in/src/main/scala/com/example/MyClass.scala
package com.example
import com.api.APIClass
// Lib v1
class MyClass extends APIClass("x", 1)
// End Lib v1
// Lib v2
class MyClass extends APIClass("x", 1, "y")
// End Lib v2
out/target/scala-2.13/src_managed/main/scala/com/example/MyClass.scala
(after sbt out/compile if LIB_VERSION=2)
package com.example
import com.api.APIClass
class MyClass extends APIClass("x", 1, "y")
See also:
Macro annotation to override toString of Scala function
How to merge multiple imports in scala?
I see some options, but none of them is "conditional compilation":
you can create 2 modules in your build - they would share a common source directory, and each would also have a source directory for code specific to it; then you would publish 2 versions of your whole library
create 3 modules - one with your library and an abstract class/trait that it talks to/through, and 2 others with version-specific implementations of that trait
The problem is: what if you build the code against v1 and the user provides v2? Or the opposite? You emitted bytecode, but the JVM expects something else, and it all crashes.
Virtually every time you have such compatibility-breaking changes, a library either refuses to update or forks. Not because you couldn't generate 2 versions - you could. The problem is downstream: how would your users deal with this situation? If you are writing an application, you can commit to one of them. If you are writing a library and you don't want to lock users into your choice... you have to publish a separate version for each choice.
Theoretically you could create one project with 2 modules that share the same code and use different branches, like #ifdef macros in C++, via Scala macros or Scalameta - but that is a disaster if you want to use an IDE or publish source code that your users can read in an IDE. No source to look at. No way to jump to a definition's source. Disassembled bytecode at best.
So the solution where you simply have separate source directories for the mismatching versions is much easier to read, write, and maintain in the long run; a minimal sbt sketch of that layout follows.
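A minimal sketch, assuming sbt 1.x; the module names, the shared-src directory, and the biglib coordinates are purely illustrative and not from the question:
// build.sbt (sketch): one module per supported library version; each compiles
// the shared sources plus its own version-specific sources and is published separately.
lazy val coreV1 = (project in file("core-v1")).settings(
  Compile / unmanagedSourceDirectories += baseDirectory.value / ".." / "shared-src",
  libraryDependencies += "com.example" %% "biglib" % "1.0.0"
)

lazy val coreV2 = (project in file("core-v2")).settings(
  Compile / unmanagedSourceDirectories += baseDirectory.value / ".." / "shared-src",
  libraryDependencies += "com.example" %% "biglib" % "2.0.0"
)

// core-v1/src/main/scala and core-v2/src/main/scala then hold only the code that
// differs, e.g. the MyClass that extends APIClass with the right constructor arity.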

SparkStreaming & Kafka: value reduceByKey is not a member of org.apache.spark.streaming.dstream.DStream[Any]

I tried to do ETL on a DStream using a Kafka consumer and Spark Streaming, but got the following error. Could you help me fix this? Thanks.
KafkaCardCount.scala:56:28: value reduceByKey is not a member of org.apache.spark.streaming.dstream.DStream[Any]
[error] val wordCounts = etl.reduceByKey(_ + _)
[error] ^
[error] one error found
[error] (compile:compileIncremental) Compilation failed
[error] Total time: 7 s, completed Jan 14, 2018 2:52:23 PM
I have this sample code. Many articles suggest adding import org.apache.spark.streaming.StreamingContext._, but it does not seem to work for me.
package example

import org.apache.spark.streaming.StreamingContext._
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010._
import org.apache.spark.streaming.{Durations, StreamingContext}

val ssc = new StreamingContext(sparkConf, Durations.seconds(5))

val stream = KafkaUtils.createDirectStream[String, String](
  ssc,
  PreferConsistent,
  Subscribe[String, String](topics, kafkaParams)
)

val etl = stream.map(r => {
  val split = r.value.split("\t")
  val id = split(1)
  val numStr = split(4)
  if (numStr.matches("\\d+")) {
    val num = numStr.toInt
    val tpl = (id, num)
    tpl
  } else {
    ()
  }
})

// Create the counts per game
val wordCounts = etl.reduceByKey(_ + _)
wordCounts.print()
I have this build.sbt.
lazy val root = (project in file(".")).
  settings(
    inThisBuild(List(
      organization := "example",
      scalaVersion := "2.11.8",
      version := "0.1.0-SNAPSHOT"
    )),
    name := "KafkaCardCount",
    libraryDependencies ++= Seq(
      "org.apache.spark" %% "spark-core" % "2.1.0",
      "org.apache.spark" % "spark-streaming_2.11" % "2.1.0",
      "org.apache.spark" %% "spark-streaming-kafka-0-10-assembly" % "2.1.0"
    )
  )

assemblyMergeStrategy in assembly := {
  case PathList("META-INF", xs @ _*) => MergeStrategy.discard
  case x => MergeStrategy.first
}
Your problem is here:
else {
()
}
The common supertype for (String, Int) and Unit is Any.
What you need to do is signal the failure with a value of the same type as your success (if) branch. For example:
else ("-1", -1)
.filter { case (id, res) => id != "-1" && res != -1 }
.reduceByKey(_ + _)
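Putting those pieces together, a sketch of the fixed pipeline (the sentinel value ("-1", -1) is just one possible choice):
// Both branches now return (String, Int), so etl is a DStream[(String, Int)]
// and reduceByKey resolves via the pair-DStream implicits.
val etl = stream.map { r =>
  val split = r.value.split("\t")
  val id = split(1)
  val numStr = split(4)
  if (numStr.matches("\\d+")) (id, numStr.toInt)
  else ("-1", -1) // sentinel instead of (), filtered out below
}

val wordCounts = etl
  .filter { case (id, num) => id != "-1" && num != -1 }
  .reduceByKey(_ + _)

wordCounts.print()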

SBT/Scala: macro implementation not found

I tried my hand at macros, and I keep running into the error
macro implementation not found: W
[error] (the most common reason for that is that you cannot use macro implementations in the same compilation run that defines them)
I believe I've set up a two-pass compilation, with the macro implementation being compiled first and the usage second.
Here is part of the /build.sbt:
lazy val root = (project in file(".")).
  settings(rootSettings: _*).
  settings(name := "Example").
  aggregate(macros, core).
  dependsOn(macros, core)

lazy val macros = (project in file("src/main/com/example/macros")).
  settings(macrosSettings: _*).
  settings(name := "Macros")

lazy val core = (project in file("src/main/com/example/core")).
  settings(coreSettings: _*).
  settings(name := "Core").
  dependsOn(macros)

lazy val commonSettings = Seq(
  organization := Organization,
  version := Version,
  scalaVersion := ScalaVersion
)

lazy val rootSettings = commonSettings ++ Seq(
  libraryDependencies ++= commonDeps ++ rootDeps ++ macrosDeps ++ coreDeps
)

lazy val macrosSettings = commonSettings ++ Seq(
  libraryDependencies ++= commonDeps ++ macrosDeps
)

lazy val coreSettings = commonSettings ++ Seq(
  libraryDependencies ++= commonDeps ++ coreDeps
)
The macro implementation looks like this:
/src/main/com/example/macros/Macros.scala
object Macros {
  object Color {
    def ColorWhite(c: Context): c.Expr[ObjectColor] = c.Expr[ObjectColor](c.universe.reify(ObjectColor(White())).tree)
  }
}
The usage looks like this:
/src/main/com/example/core/Main.scala
object Macros {
  import com.example.macros.Macros._
  def W: ObjectColor = macro Color.ColorWhite
}

object Main extends App {
  import Macros._
  println(W)
}
Scala 2.11.6. SBT 0.13.8.
What am I doing wrong?
Thanks for your advice!
Faulty project:
The project on GitHub
Working project (the projects rearranged into a more correct form):
The cleaned-up working project
Your macros and core projects don't contain any files, so they don't cause the problem. The error happens when sbt compiles root, which contains both Main.scala and Macros.scala by virtue of you saying project in file(".") in the sbt build.
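In other words, the fix is to split the sources so that the macro implementation lives in a sub-project that is compiled before its usage. A minimal sketch of such a layout (the directory names are illustrative, not the exact layout of the linked repository):
// build.sbt (sketch)
lazy val macros = (project in file("macros")) // holds macros/src/main/scala/Macros.scala
  .settings(
    name := "Macros",
    libraryDependencies += scalaOrganization.value % "scala-reflect" % scalaVersion.value
  )

lazy val core = (project in file("core"))     // holds core/src/main/scala/Main.scala
  .settings(name := "Core")
  .dependsOn(macros)                          // the usage now compiles after the implementation

lazy val root = (project in file("."))        // no sources of its own
  .settings(name := "Example")
  .aggregate(macros, core)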

Scala parser cuts last bracket

Welcome to Scala 2.12.1 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_121).
Type in expressions for evaluation. Or try :help.
scala> :paste
// Entering paste mode (ctrl-D to finish)
import scala.reflect.runtime._
import scala.reflect.runtime.universe._
import scala.tools.reflect.ToolBox

val mirror = universe.runtimeMirror(universe.getClass.getClassLoader)
val toolbox = mirror.mkToolBox(options = "-Yrangepos")

val text =
  """
    |libraryDependencies ++= Seq("org.scala-lang" % "scala-compiler" % "2.10.4") map {
    | (dependency) =>{
    | dependency
    | }
    |}
  """.stripMargin

val parsed = toolbox.parse(text)

val parsedTrees = parsed match {
  case Block(stmt, expr) =>
    stmt :+ expr
  case t: Tree =>
    Seq(t)
}

val statements = parsedTrees.map { (t: Tree) =>
  text.substring(t.pos.start, t.pos.end)
}
// Exiting paste mode, now interpreting.
import scala.reflect.runtime._
import scala.reflect.runtime.universe._
import scala.tools.reflect.ToolBox
mirror: reflect.runtime.universe.Mirror = JavaMirror with primordial classloader with boot classpath...
scala> statements.head
res0: String =
libraryDependencies ++= Seq("org.scala-lang" % "scala-compiler" % "2.10.4") map {
(dependency) =>{
dependency
}
The result is:
scala> statements.head
res1: String =
libraryDependencies ++= Seq("org.scala-lang" % "scala-compiler" % "2.10.4") map {
(dependency) =>{
dependency
}
I expected:
libraryDependencies ++= Seq("org.scala-lang" % "scala-compiler" % "2.10.4") map {
(dependency) =>{
dependency
}
}
The last bracket } (and the end of the line) is missing if I use the position from the Tree object: text.substring(t.pos.start, t.pos.end)
Any proposal for how to extract the full text of a scala.reflect.api.Trees#Tree object?
Update
Affected Scala versions:
2.10.6 - needed for sbt 0.13.x
2.11.8
2.12.7
For Scala 2.10.6/2.12.7 the result is the same as in the output above.
I added a project to GitHub:
Example project for searching for the solution
Just to move the question off the unanswered list, one can refer to the issue filed for it:
https://issues.scala-lang.org/browse/SI-8859
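Until that is addressed, one naive workaround (my own sketch, not taken from the ticket) is to extend the reported end offset until the braces in the extracted slice balance; note it ignores braces inside string literals and comments.
// Extend `end` until the slice no longer has more '{' than '}' (naive brace balancing).
def balancedSlice(text: String, start: Int, end: Int): String = {
  def unbalanced(s: String) = s.count(_ == '{') > s.count(_ == '}')
  var e = end
  while (e < text.length && unbalanced(text.substring(start, e))) e += 1
  text.substring(start, e)
}

val statements = parsedTrees.map((t: Tree) => balancedSlice(text, t.pos.start, t.pos.end))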

Idiomatically defining dynamic tasks in SBT 0.13?

I'm moving an SBT plugin from 0.12 over to 0.13. At various points in my plugin I schedule a dynamic set of tasks onto the SBT build graph.
Below is my old code. Is this still the idiomatic way to express this, or is it possible to leverage the macros to make everything prettier?
import sbt._
import Keys._

object Toplevel extends Build
{
  lazy val ordinals = taskKey[Seq[String]]("A list of things")
  lazy val times = taskKey[Int]("Number of times to list things")
  lazy val inParallel = taskKey[Seq[String]]("Strings to log in parallel")

  lazy val Foo = Project( id="Foo", base=file("foo"),
    settings = Defaults.defaultSettings ++ Seq(
      scalaVersion := "2.10.2",
      ordinals := Seq( "First", "Second", "Third", "Four", "Five" ),
      times := 3,
      inParallel <<= (times, ordinals, streams) flatMap
      { case (t, os, s) =>
        os.map( o => toTask( () =>
        {
          (0 until t).map( _ => o ).mkString(",")
        } ) ).join
      }
    )
  )
}
Apologies for the entirely contrived example!
EDIT
So, taking Mark's advice into account, I have the following tidier code:
import sbt._
import Keys._

object Toplevel extends Build
{
  lazy val ordinals = taskKey[Seq[String]]("A list of things")
  lazy val times = taskKey[Int]("Number of times to list things")
  lazy val inParallel = taskKey[Seq[String]]("Strings to log in parallel")

  def parTask = Def.taskDyn
  {
    val t = times.value
    ordinals.value.map(o => ordinalTask(o, t)).join
  }

  def ordinalTask(o: String, t: Int) = Def.task
  {
    (0 until t).map(_ => o).mkString(",")
  }

  lazy val Foo = Project( id="Foo", base=file("foo"),
    settings = Defaults.defaultSettings ++ Seq(
      scalaVersion := "2.10.2",
      ordinals := Seq( "First", "Second", "Third", "Four", "Five" ),
      times := 3,
      inParallel := parTask.value
    )
  )
}
This seems to be nearly there, but fails the build with:
[error] /home/alex.wilson/tmp/sbt0.13/project/build.scala:13: type mismatch;
[error] found : sbt.Def.Initialize[Seq[sbt.Task[String]]]
[error] required: sbt.Def.Initialize[sbt.Task[?]]
[error] ordinals.value.map(o => ordinalTask(o, t)).join
You can use Def.taskDyn, which provides the new syntax for flatMap. The difference from Def.task is that the expected return type is a task (an Initialize[Task[T]]) instead of just T. Translating your example:
inParallel := parTask.value
def parTask = Def.taskDyn {
  val t = times.value
  ordinals.value.map(o => ordinalTask(o, t)).joinWith(_.join)
}

def ordinalTask(o: String, t: Int) = Def.task {
  (0 until t).map(_ => o).mkString(",")
}