Scala parser cuts the last bracket

Welcome to Scala 2.12.1 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_121).
Type in expressions for evaluation. Or try :help.
scala> :paste
// Entering paste mode (ctrl-D to finish)
import scala.reflect.runtime._
import scala.reflect.runtime.universe._
import scala.tools.reflect.ToolBox
val mirror = universe.runtimeMirror(universe.getClass.getClassLoader)
val toolbox = mirror.mkToolBox(options = "-Yrangepos")
val text =
  """
    |libraryDependencies ++= Seq("org.scala-lang" % "scala-compiler" % "2.10.4") map {
    | (dependency) =>{
    | dependency
    | }
    |}
  """.stripMargin

val parsed = toolbox.parse(text)

val parsedTrees = parsed match {
  case Block(stmt, expr) =>
    stmt :+ expr
  case t: Tree =>
    Seq(t)
}

val statements = parsedTrees.map { (t: Tree) =>
  text.substring(t.pos.start, t.pos.end)
}
// Exiting paste mode, now interpreting.
import scala.reflect.runtime._
import scala.reflect.runtime.universe._
import scala.tools.reflect.ToolBox
mirror: reflect.runtime.universe.Mirror = JavaMirror with primordial classloader with boot classpath...
scala> statements.head
res0: String =
libraryDependencies ++= Seq("org.scala-lang" % "scala-compiler" % "2.10.4") map {
(dependency) =>{
dependency
}
The result is:
scala> statements.head
res1: String =
libraryDependencies ++= Seq("org.scala-lang" % "scala-compiler" % "2.10.4") map {
(dependency) =>{
dependency
}
I expected:
libraryDependencies ++= Seq("org.scala-lang" % "scala-compiler" % "2.10.4") map {
(dependency) =>{
dependency
}
}
The last bracket } (and the end of the line) is missing if I use the positions from the Tree object: text.substring(t.pos.start, t.pos.end)
Any proposal for how to extract the full text from a scala.reflect.api.Trees#Tree object?
Update
Affected Scala versions:
2.10.6 - needed for sbt 0.13.x
2.11.8
2.12.7
For Scala 2.10.6/2.12.7 the result is the same as in the output above.
Added the project to GitHub:
Example project for investigating the solution

Just to move the question off the unanswered list, one can refer to the issue filed for it:
https://issues.scala-lang.org/browse/SI-8859
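As a workaround sketch (assuming Scala 2.11+, where showCode is available; it reuses the parsedTrees value from the snippet above), one can pretty-print each parsed statement back to source instead of slicing the original text by positions. The formatting differs from the input, but the closing brace is not lost:

// Workaround sketch: regenerate source text from the tree itself, so the result
// does not depend on the range positions that SI-8859 describes as not covering
// the closing brace.
val statementsFromTrees: Seq[String] = parsedTrees.map(t => universe.showCode(t))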

Running http4s server with ZIO Env

I'm trying to learn the ZIO library, so I decided to create a basic web service app. The idea is pretty basic: use the http4s lib for the server and route endpoints, and print "hello world" on an endpoint call.
With the help of the docs and examples I found, I produced this code:
object Main extends ManagedApp {
  type AppEnvironment = Clock with Console with HelloRepository
  type AppTask[A] = RIO[AppEnvironment, A]

  override def run(args: List[String]): ZManaged[ZEnv, Nothing, Int] = {
    val httpApp: HttpApp[AppTask] = Router[AppTask]("/" -> helloWorldService).orNotFound
    val server = ZIO.runtime[AppEnvironment].flatMap { implicit rts =>
      BlazeServerBuilder[AppTask]
        .bindHttp(8080, "0.0.0.0")
        .withHttpApp(CORS(httpApp))
        .serve
        .compile[AppTask, AppTask, ExitCode]
        .drain
    }

    (for {
      _ <- ZManaged.environment[ZEnv] >>> server.toManaged_
    } yield ())
      .foldM(err => putStrLn(s"Execution failed with: $err").as(1).toManaged_, _ => ZManaged.succeed(0))
  }

  val dsl: Http4sDsl[AppTask] = Http4sDsl[AppTask]
  import dsl._

  val helloWorldService: HttpRoutes[AppTask] = HttpRoutes.of[AppTask] {
    case GET -> Root / "hello" / name => Ok(Repo.getHello(name))
  }
}

trait HelloRepository extends Serializable {
  val helloRepository: HelloRepository.Service[Any]
}

object HelloRepository extends Serializable {
  trait Service[R] extends Serializable {
    def getHello(name: String): ZIO[R, Nothing, String]
  }
}

object Repo extends HelloRepository.Service[HelloRepository] {
  override def getHello(name: String): ZIO[HelloRepository, Nothing, String] = ZIO.succeed(s"Hello $name")
}
I create the router: Router[AppTask]("/" ...
I create the server: ZIO.runtime[AppEnvironment].flatMap ...
Then I try to start the server with the ZIO environment, but I am missing something, as this line:
_ <- ZManaged.environment[ZEnv] >>> server.toManaged_
is incorrect and throws an error on build:
Error:(34, 39) inferred type arguments [touch.Main.AppEnvironment,Throwable,Unit] do not conform to method >>>'s type parameter bounds [R1 >: zio.ZEnv,E1,B]
_ <- ZManaged.environment[ZEnv] >>> server.toManaged_
Error:(34, 39) inferred type arguments [touch.Main.AppEnvironment,Throwable,Unit] do not conform to method >>>'s type parameter bounds [R1 >: zio.ZEnv,E1,B]
Error:(34, 50) type mismatch;
found : zio.ZManaged[touch.Main.AppEnvironment,Throwable,Unit]
(which expands to) zio.ZManaged[zio.clock.Clock with zio.console.Console with touch.HelloRepository,Throwable,Unit]
required: zio.ZManaged[R1,E1,B]
Maybe someone can help me with the correct syntax?
I would also appreciate some explanation, or a link to the docs where this is explained.
I would like to explain more, but I don't know where you got your code sample or what your build.sbt looks like. However, I happen to have some http4s code lying around, so I took the liberty of adding some import statements and simplifying it a bit. You can always add back the complexity I took out.
Here's what worked for me.
/tmp/http4s/test.scala
import org.http4s.implicits._
import org.http4s.server.blaze._
import org.http4s.server.Router
import org.http4s.server.middleware.CORS
import org.http4s._
import org.http4s.dsl.Http4sDsl
import zio._
import zio.clock._
import zio.console._
import zio.interop.catz._

trait HelloRepository {
  def getHello(name: String): ZIO[AppEnvironment, Nothing, String]
}

trait AppEnvironment extends Console with Clock {
  val helloRepository: HelloRepository
}

object Main extends App {
  type AppTask[A] = RIO[AppEnvironment, A]

  val dsl: Http4sDsl[AppTask] = Http4sDsl[AppTask]
  import dsl._

  val httpApp: HttpApp[AppTask] = Router[AppTask](
    "/" -> HttpRoutes.of[AppTask] {
      case GET -> Root / "hello" / name => Ok(ZIO.accessM[AppEnvironment](_.helloRepository.getHello(name)))
    }
  ).orNotFound

  val program = for {
    server <- ZIO.runtime[AppEnvironment]
      .flatMap { implicit rts =>
        BlazeServerBuilder[AppTask]
          .bindHttp(8080, "0.0.0.0")
          .withHttpApp(CORS(httpApp))
          .serve
          .compile
          .drain
      }
  } yield server

  val runEnv = new AppEnvironment with Console.Live with Clock.Live {
    val helloRepository = new HelloRepository {
      def getHello(name: String): ZIO[AppEnvironment, Nothing, String] = ZIO.succeed(s"Hello $name")
    }
  }

  def run(args: List[String]) =
    program
      .provide(runEnv)
      .foldM(err => putStrLn(s"Execution failed with: $err") *> ZIO.succeed(1), _ => ZIO.succeed(0))
}
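The key difference from the >>> approach in the question is that provide hands the whole environment to the program up front, after which the effect needs no environment at all. A minimal, self-contained sketch of that idea (assuming the ZIO 1.0.0-RC13 API; Greeter and ProvideSketch are made-up names used only for illustration):

import zio._

object ProvideSketch {
  // Hypothetical service used only for illustration.
  trait Greeter { def greet(name: String): UIO[String] }

  // An effect that still needs a Greeter in its environment...
  val needsGreeter: ZIO[Greeter, Nothing, String] =
    ZIO.accessM[Greeter](_.greet("world"))

  // ...becomes a plain effect once a concrete Greeter is provided.
  val standalone: UIO[String] =
    needsGreeter.provide(new Greeter {
      def greet(name: String): UIO[String] = ZIO.succeed(s"Hello $name")
    })
}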
/tmp/http4s/build.sbt
val Http4sVersion = "0.20.0"
val CatsVersion = "2.0.0"
val ZioCatsVersion = "2.0.0.0-RC3"
val ZioVersion = "1.0.0-RC13"
val LogbackVersion = "1.2.3"

lazy val root = (project in file("."))
  .settings(
    organization := "example",
    name := "example",
    version := "0.0.1-SNAPSHOT",
    scalaVersion := "2.12.8",
    scalacOptions ++= Seq("-Ypartial-unification"),
    libraryDependencies ++= Seq(
      "org.typelevel"  %% "cats-effect"          % CatsVersion,
      "dev.zio"        %% "zio"                  % ZioVersion,
      "dev.zio"        %% "zio-interop-cats"     % ZioCatsVersion,
      "org.http4s"     %% "http4s-blaze-server"  % Http4sVersion,
      "org.http4s"     %% "http4s-dsl"           % Http4sVersion,
      "ch.qos.logback" %  "logback-classic"      % LogbackVersion,
    ),
    addCompilerPlugin("org.spire-math" %% "kind-projector" % "0.9.6"),
    addCompilerPlugin("com.olegpy" %% "better-monadic-for" % "0.2.4")
  )

scalacOptions ++= Seq(
  "-deprecation",           // Emit warning and location for usages of deprecated APIs.
  "-encoding", "UTF-8",     // Specify character encoding used by source files.
  "-language:higherKinds",  // Allow higher-kinded types
  "-language:postfixOps",   // Allows operator syntax in postfix position (deprecated since Scala 2.10)
  "-feature",               // Emit warning and location for usages of features that should be imported explicitly.
  "-Ypartial-unification",  // Enable partial unification in type constructor inference
  "-Xfatal-warnings",       // Fail the compilation if there are any warnings
)
sample execution
bash-3.2$ cd /tmp/http4s
bash-3.2$ sbt
...
sbt:example> compile
...
[info] Done compiling.
[success] Total time: 5 s, completed Oct 24, 2019 11:20:53 PM
sbt:example> run
...
[info] Running Main
23:21:03.720 [zio-default-async-1-163838348] INFO org.http4s.blaze.channel.nio1.NIO1SocketServerGroup - Service bound to address /0:0:0:0:0:0:0:0:8080
23:21:03.725 [blaze-selector-0] DEBUG org.http4s.blaze.channel.nio1.SelectorLoop - Channel initialized.
23:21:03.732 [zio-default-async-1-163838348] INFO org.http4s.server.blaze.BlazeServerBuilder -
(http4s ASCII-art startup banner)
23:21:03.796 [zio-default-async-1-163838348] INFO org.http4s.server.blaze.BlazeServerBuilder - http4s v0.20.0 on blaze v0.14.0 started at http://[0:0:0:0:0:0:0:0]:8080/
23:21:11.070 [blaze-selector-1] DEBUG org.http4s.blaze.channel.nio1.SelectorLoop - Channel initialized.
23:21:11.070 [blaze-selector-1] DEBUG org.http4s.blaze.channel.nio1.NIO1HeadStage - Starting up.
23:21:11.070 [blaze-selector-1] DEBUG org.http4s.blaze.channel.nio1.NIO1HeadStage - Stage NIO1HeadStage sending inbound command: Connected
23:21:11.070 [blaze-selector-1] DEBUG org.http4s.server.blaze.Http1ServerStage$$anon$1 - Starting HTTP pipeline
23:21:11.072 [blaze-selector-1] DEBUG org.http4s.blazecore.IdleTimeoutStage - Starting idle timeout stage with timeout of 30000 ms
At this point after opening http://localhost:8080/hello/there I observed the expected output in the browser.
Hope this helps.

(run-main-0) scala.ScalaReflectionException: class java.sql.Date in JavaMirror with ClasspathFilter(

Hi, I have a file given to me by my teacher. It is about Scala and Spark.
When I run the code it gives me this exception:
(run-main-0) scala.ScalaReflectionException: class java.sql.Date in
JavaMirror with ClasspathFilter
The file itself looks like this:
import org.apache.spark.ml.feature.Tokenizer
import org.apache.spark.sql.Dataset
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types._

object Main {
  type Embedding = (String, List[Double])
  type ParsedReview = (Integer, String, Double)

  org.apache.log4j.Logger getLogger "org" setLevel (org.apache.log4j.Level.WARN)
  org.apache.log4j.Logger getLogger "akka" setLevel (org.apache.log4j.Level.WARN)

  val spark = SparkSession.builder
    .appName("Sentiment")
    .master("local[9]")
    .getOrCreate

  import spark.implicits._

  val reviewSchema = StructType(Array(
    StructField("reviewText", StringType, nullable = false),
    StructField("overall", DoubleType, nullable = false),
    StructField("summary", StringType, nullable = false)))

  // Read the file and merge the text and summary into a single text column
  def loadReviews(path: String): Dataset[ParsedReview] =
    spark
      .read
      .schema(reviewSchema)
      .json(path)
      .rdd
      .zipWithUniqueId
      .map[(Integer, String, Double)] { case (row, id) => (id.toInt, s"${row getString 2} ${row getString 0}", row getDouble 1) }
      .toDS
      .withColumnRenamed("_1", "id")
      .withColumnRenamed("_2", "text")
      .withColumnRenamed("_3", "overall")
      .as[ParsedReview]

  // Load the GLoVe embeddings file
  def loadGlove(path: String): Dataset[Embedding] =
    spark
      .read
      .text(path)
      .map { _ getString 0 split " " }
      .map(r => (r.head, r.tail.toList.map(_.toDouble))) // yuck!
      .withColumnRenamed("_1", "word")
      .withColumnRenamed("_2", "vec")
      .as[Embedding]

  def main(args: Array[String]) = {
    val glove = loadGlove("Data/glove.6B.50d.txt") // take glove
    val reviews = loadReviews("Data/Electronics_5.json") // FIXME

    // replace the following with the project code
    glove.show
    reviews.show

    spark.stop
  }
}
I need to keep the line
import org.apache.spark.sql.Dataset
because some code depends on it, but it is exactly because of this import that the exception is thrown.
My build.sbt file looks like this:
name := "Sentiment Analysis Project"
version := "1.1"
scalaVersion := "2.11.12"
scalacOptions ++= Seq("-unchecked", "-deprecation")
initialCommands in console :=
  """
  import Main._
  """

libraryDependencies += "org.apache.spark" %% "spark-core" % "2.3.0"
libraryDependencies += "org.apache.spark" %% "spark-mllib" % "2.3.0"
libraryDependencies += "org.scalactic" %% "scalactic" % "3.0.5"
libraryDependencies += "org.scalatest" %% "scalatest" % "3.0.5" % "test"
The Scala guide recommends you compile with Java 8:
We recommend using Java 8 for compiling Scala code. Since the JVM is backward compatible, it is usually safe to use a newer JVM to run your code compiled by the Scala compiler for older JVM versions.
Although it's only a recommendation, I found it to fix the issue you mention.
In order to install Java 8 using Homebrew, it's best to use jenv which will help you handle multiple Java versions should you need to.
brew install jenv
Then run the following to add a tap (repository) of alternative versions of casks, since Java 8 is not in the default tap anymore:
brew tap homebrew/cask-versions
To install Java 8:
brew cask install homebrew/cask-versions/adoptopenjdk8
Run the following to add the previously installed Java version to jenv's list of versions:
jenv add /Library/Java/JavaVirtualMachines/<installed_java_version>/Contents/Home
Finally run
jenv global 1.8
or
jenv local 1.8
to use Java 1.8 globally or locally (in the current folder).
For more information, follow the instructions at jenv's website.
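To double-check that the project directory is actually picking up the 1.8 JVM after switching with jenv, here is a quick sanity check (a sketch; run it from the Scala REPL or an sbt console started in that directory):

// Prints which JVM the current process is running on; after `jenv local 1.8`
// this should report a 1.8.0_xxx version and a JDK 8 home directory.
println(sys.props("java.version"))
println(sys.props("java.home"))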

List[String] does not have a member traverse from cats

I am attempting to convert a List[Either[Int, Int]] to an Either[Int, List[Int]] using traverse from cats.
Error
[error] StringCalculator.scala:19:15: value traverseU is not a member of List[String]
[error] numList.traverseU(x => {
Code
import cats.Semigroup
import cats.syntax.traverse._
import cats.implicits._
val numList = numbers.split(',').toList

numList.traverseU(x => {
  try {
    Right(x.toInt)
  } catch {
    case e: NumberFormatException => Left(0)
  }
})
  .fold(
    x => {},
    x => {}
  )
I have tried the same with traverse instead of traverseU as well.
Config(for cats)
lazy val root = (project in file(".")).
settings(
inThisBuild(List(
organization := "com.example",
scalaVersion := "2.12.4",
scalacOptions += "-Ypartial-unification",
version := "0.1.0-SNAPSHOT"
)),
name := "Hello",
libraryDependencies += cats,
libraryDependencies += scalaTest % Test
)
It should indeed be just traverse, as long as you're using a recent cats version (1.0.x). However, you can't import both cats.syntax.traverse._ and cats.implicits._, as they will conflict.
Unfortunately whenever the Scala compiler sees a conflict for implicits, it will give you a really unhelpful message.
Remove the cats.syntax import and it should work fine.
For further information check out the import guide.
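To make the conflict concrete, here is a sketch of the two import styles that each work on their own with cats 1.x; the point is to pick one and not mix them:

// Style 1: the single catch-all import (instances and syntax in one go)
import cats.implicits._

// Style 2: only the pieces this traverse call needs
// (left commented out here, because combining it with Style 1 re-creates the conflict)
// import cats.instances.list._
// import cats.instances.either._
// import cats.syntax.traverse._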
You only need cats.implicits._, e.g.
import cats.implicits._
val numbers = "1,2,3"
val numList = numbers.split(',').toList
val lst = numList.traverse(x => scala.util.Try(x.toInt).toEither.leftMap(_ => 0))
println(lst)
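For reference, a small sketch of what this prints for a couple of inputs (assuming the cats 1.x instances for List and Either):

import cats.implicits._

// All entries parse: traverse collects the results into a single Right.
println("1,2,3".split(',').toList.traverse(x => scala.util.Try(x.toInt).toEither.leftMap(_ => 0)))
// Right(List(1, 2, 3))

// One entry fails to parse: traverse short-circuits at the first Left.
println("1,a,3".split(',').toList.traverse(x => scala.util.Try(x.toInt).toEither.leftMap(_ => 0)))
// Left(0)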

SBT package scala script

I am trying to use spark-submit with a Scala script, but first I need to create my package.
Here is my sbt file:
name := "Simple Project"
version := "1.0"
scalaVersion := "2.10.4"
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.5.2"
libraryDependencies += "org.apache.spark" %% "spark-sql" % "1.0.0"
When I try sbt package, I am getting these errors:
/home/i329537/Scripts/PandI/SBT/src/main/scala/XML_Script_SBT.scala:3: object functions is not a member of package org.apache.spark.sql
import org.apache.spark.sql.functions._
^
/home/i329537/Scripts/PandI/SBT/src/main/scala/XML_Script_SBT.scala:4: object types is not a member of package org.apache.spark.sql
import org.apache.spark.sql.types._
^
/home/i329537/Scripts/PandI/SBT/src/main/scala/XML_Script_SBT.scala:25: not found: value sc
val hconf = SparkHadoopUtil.get.newConfiguration(sc.getConf)
^
/home/i329537/Scripts/PandI/SBT/src/main/scala/XML_Script_SBT.scala:30: not found: value sqlContext
val df = sqlContext.read.format("xml").option("attributePrefix","").option("rowTag", "project").load(uri.toString())
^
/home/i329537/Scripts/PandI/SBT/src/main/scala/XML_Script_SBT.scala:36: not found: value udf
val sqlfunc = udf(coder)
^
5 errors found
(compile:compileIncremental) Compilation failed
Has anyone faced these errors?
Thanks for helping.
Regards,
Majid
You are trying to use the class org.apache.spark.sql.functions and the package org.apache.spark.sql.types. According to the functions class documentation, it has been available since version 1.3.0, and the types package since version 1.3.1.
Solution: update SBT file to:
libraryDependencies += "org.apache.spark" %% "spark-sql" % "1.3.1"
The other errors ("not found: value sc", "not found: value sqlContext", "not found: value udf") are caused by some missing variables in your XML_Script_SBT.scala file. They can't be fixed without looking into the source code.
Thanks Sergey, your correction fixes 3 of the errors. Below is my script:
object SimpleApp {
  def main(args: Array[String]) {
    val today = Calendar.getInstance.getTime
    val curTimeFormat = new SimpleDateFormat("yyyyMMdd-HHmmss")
    val time = curTimeFormat.format(today)

    val destination = "/3.Data/3.Code_Check_Processed/2.XML/" + time + ".extensive.csv"
    val source = "/3.Data/2.Code_Check_Raw/2.XML/Extensive/"

    val hconf = SparkHadoopUtil.get.newConfiguration(sc.getConf)
    val hdfs = FileSystem.get(hconf)

    val iter = hdfs.listLocatedStatus(new Path(source))
    val uri = iter.next.getPath.toUri

    val df = sqlContext.read.format("xml").option("attributePrefix", "").option("rowTag", "project").load(uri.toString())
    val df2 = df.selectExpr("explode(check) as e").select("e.#VALUE", "e.additionalInformation1", "e.columnNumber", "e.context", "e.errorType", "e.filePath", "e.lineNumber", "e.message", "e.severity")

    val coder: (Long => String) = (arg: Long) => { if (arg > -1) time else "nada" }
    val sqlfunc = udf(coder)
    val df3 = df2.withColumn("TimeStamp", sqlfunc(col("columnNumber")))

    df3.write.format("com.databricks.spark.csv").option("header", "false").save(destination)
    hdfs.delete(new Path(uri.toString()), true)
    sys.exit(0)
  }
}
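For what it's worth, the remaining "not found" errors refer to values that spark-shell defines automatically but a standalone app does not. A sketch of the missing pieces (assuming the Spark 1.x API; the names mirror what the script already uses):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.functions.{col, udf}

// spark-shell creates sc and sqlContext for you; a standalone app must build them itself.
val conf = new SparkConf().setAppName("SimpleApp")
val sc = new SparkContext(conf)
val sqlContext = new SQLContext(sc)
// udf and col come from org.apache.spark.sql.functions, available since spark-sql 1.3.x.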

Why does Scala compiler fail with "object SparkConf in package spark cannot be accessed in package org.apache.spark"?

I cannot access SparkConf in the package, but I have already imported org.apache.spark.SparkConf. My code is:
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf
import org.apache.spark.rdd.RDD
import org.apache.spark._
import org.apache.spark.streaming._
import org.apache.spark.streaming.StreamingContext._

object SparkStreaming {
  def main(arg: Array[String]) = {
    val conf = new SparkConf.setMaster("local[2]").setAppName("NetworkWordCount")
    val ssc = new StreamingContext(conf, Seconds(1))

    val lines = ssc.socketTextStream("localhost", 9999)
    val words = lines.flatMap(_.split(" "))
    val pairs_new = words.map(w => (w, 1))
    val wordsCount = pairs_new.reduceByKey(_ + _)
    wordsCount.print()

    ssc.start()            // Start the computation
    ssc.awaitTermination() // Wait for the computation to terminate
  }
}
The sbt dependencies are:
name := "Spark Streaming"
version := "1.0"
scalaVersion := "2.10.4"
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "1.5.2" % "provided",
  "org.apache.spark" %% "spark-mllib" % "1.5.2",
  "org.apache.spark" %% "spark-streaming" % "1.5.2"
)
But the error shows that SparkConf cannot be accessed.
[error] /home/cliu/Documents/github/Spark-Streaming/src/main/scala/Spark-Streaming.scala:31: object SparkConf in package spark cannot be accessed in package org.apache.spark
[error] val conf = new SparkConf.setMaster("local[2]").setAppName("NetworkWordCount")
[error] ^
It compiles if you add parentheses after SparkConf:
val conf = new SparkConf().setMaster("local[2]").setAppName("NetworkWordCount")
The point is that SparkConf is a class and not a function, so its name can also be used as a prefix for scoping purposes: without parentheses, new SparkConf.setMaster(...) is read as instantiating a type named SparkConf.setMaster, which makes the compiler resolve SparkConf to the (inaccessible) companion object rather than call the class constructor. When you add parentheses after the class name, you make sure you are calling the class constructor and not using the scoping form. Here is an example from the Scala shell illustrating the difference:
scala> class C1 { var age = 0; def setAge(a:Int) = {age = a}}
defined class C1
scala> new C1
res18: C1 = $iwC$$iwC$C1@2d33c200
scala> new C1()
res19: C1 = $iwC$$iwC$C1@30822879
scala> new C1.setAge(30) // this doesn't work
<console>:23: error: not found: value C1
new C1.setAge(30)
^
scala> new C1().setAge(30) // this works
scala>
In this case you cannot omit parentheses so it should be:
val conf = new SparkConf().setMaster("local[2]").setAppName("NetworkWordCount")