I have a problem using ScriptEngine from sbt 0.13.8.
build.sbt
fork in run := true
scalaVersion := "2.11.6"
libraryDependencies ++= Seq(
  "org.scala-lang" % "scala-compiler" % "2.11.6"
)
UseConfig.scala
object UseConfig {
  def main(args: Array[String]) = {
    import javax.script.ScriptEngineManager
    val e = new ScriptEngineManager(null).getEngineByName("scala")
    println(e)
  }
}
and it prints null.
When I run similar code in the Scala 2.11.6 console, a Scala engine is found successfully.
P.S. Are there any other ways to compile Scala code dynamically under sbt?
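One thing worth checking (a sketch, not a confirmed fix for this exact setup): per the ScriptEngineManager javadoc, passing null makes the manager discover only the engines bundled with the platform, so the Scala engine sitting on the forked application classpath is never found. Letting the manager use the application class loader may help:
import javax.script.ScriptEngineManager
object UseConfig {
  def main(args: Array[String]): Unit = {
    // Use this class's loader (the forked JVM's application class loader)
    // instead of null, so engines on the runtime classpath are discovered.
    val manager = new ScriptEngineManager(getClass.getClassLoader)
    val e = manager.getEngineByName("scala")
    println(e)
  }
}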
It works fine when using a regular class, but I get a java.lang.NoClassDefFoundError: scala/Product$class error after changing the class to a case class.
I'm not sure whether this is an sbt packaging problem or a code problem.
I'm using:
sbt
Scala: 2.11.12
Java: 8
sbt assembly to package
package example

import java.util.Properties
import java.nio.charset.StandardCharsets

import org.apache.flink.api.scala._
import org.apache.flink.streaming.util.serialization.{DeserializationSchema, SerializationSchema}
import org.apache.flink.streaming.api.scala.{DataStream, StreamExecutionEnvironment}
import org.apache.flink.streaming.connectors.kafka.{FlinkKafkaConsumer, FlinkKafkaProducer}
import org.apache.flink.streaming.api.watermark.Watermark
import org.apache.flink.streaming.api.functions.AssignerWithPunctuatedWatermarks
import org.apache.flink.api.common.typeinfo.TypeInformation
import Config._

case class Record(
  id: String,
  startTime: Long
)

class RecordDeSerializer extends DeserializationSchema[Record] with SerializationSchema[Record] {
  override def serialize(record: Record): Array[Byte] = {
    "123".getBytes(StandardCharsets.UTF_8)
  }

  override def deserialize(b: Array[Byte]): Record = {
    Record("1", 123)
  }

  override def isEndOfStream(record: Record): Boolean = false

  override def getProducedType: TypeInformation[Record] = {
    createTypeInformation[Record]
  }
}
object RecordConsumer {
  def main(args: Array[String]): Unit = {
    val config: Properties = {
      val p = new Properties()
      p.setProperty("zookeeper.connect", Config.KafkaZookeeperServers)
      p.setProperty("bootstrap.servers", Config.KafkaBootstrapServers)
      p.setProperty("group.id", Config.KafkaGroupID)
      p
    }

    val env = StreamExecutionEnvironment.getExecutionEnvironment
    env.enableCheckpointing(1000)

    val consumer = new FlinkKafkaConsumer[Record](
      Config.KafkaTopic,
      new RecordDeSerializer(),
      config
    )
    consumer.setStartFromEarliest()

    val stream = env.addSource(consumer).print
    env.execute("record consumer")
  }
}
Error
2020-08-05 04:07:33,963 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Discarding checkpoint 1670 of job 4de8831901fa72790d0a9a973cc17dde.
java.lang.NoClassDefFoundError: scala/Product$class
...
build.sbt
My first idea was that maybe a version is not right, but everything works fine if I use a normal class.
Here is build.sbt:
ThisBuild / resolvers ++= Seq(
  "Apache Development Snapshot Repository" at "https://repository.apache.org/content/repositories/snapshots/",
  Resolver.mavenLocal
)

name := "deedee"
version := "0.1-SNAPSHOT"
organization := "dexterlab"

ThisBuild / scalaVersion := "2.11.8"

val flinkVersion = "1.8.2"

val flinkDependencies = Seq(
  "org.apache.flink" %% "flink-scala" % flinkVersion % "provided",
  "org.apache.flink" %% "flink-streaming-scala" % flinkVersion % "provided",
  "org.apache.flink" %% "flink-streaming-java" % flinkVersion % "provided",
  "org.apache.flink" %% "flink-connector-kafka" % flinkVersion,
)

val thirdPartyDependencies = Seq(
  "com.github.nscala-time" %% "nscala-time" % "2.24.0",
  "com.typesafe.play" %% "play-json" % "2.6.14",
)

lazy val root = (project in file(".")).
  settings(
    libraryDependencies ++= flinkDependencies,
    libraryDependencies ++= thirdPartyDependencies,
    libraryDependencies += "org.scala-lang" % "scala-compiler" % scalaVersion.value,
  )

assembly / mainClass := Some("dexterlab.TelecoDataConsumer")

// make the run command include the provided dependencies
Compile / run := Defaults.runTask(
  Compile / fullClasspath,
  Compile / run / mainClass,
  Compile / run / runner
).evaluated

// stays inside the sbt console when we press "ctrl-c" while a Flink programme executes with "run" or "runMain"
Compile / run / fork := true
Global / cancelable := true

// exclude Scala library from assembly
assembly / assemblyOption := (assembly / assemblyOption).value.copy(includeScala = false)

autoCompilerPlugins := true
I finally succeeded after adding this line to build.sbt:
assembly / assemblyOption := (assembly / assemblyOption).value.copy(includeScala = true)
to include the Scala library when running sbt assembly.
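For context: scala/Product$class is a trait implementation class that exists in the Scala 2.11 standard library but is no longer generated in 2.12+, so this error usually means the scala-library available at runtime is either missing or a different major version than the one the job was compiled against. A small diagnostic sketch (ScalaLibCheck is a hypothetical helper, not part of the original job) that can be run the same way as the consumer to see which Scala library actually ends up on the classpath:
object ScalaLibCheck {
  def main(args: Array[String]): Unit = {
    // Prints e.g. "version 2.11.8" for the scala-library on the runtime classpath.
    println(scala.util.Properties.versionString)
    // Shows which jar scala.Product was loaded from (may be null if loaded by the boot class loader).
    println(classOf[scala.Product].getProtectionDomain.getCodeSource)
  }
}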
I am new to Scala and sbt build files. From the introductory tutorials, adding Spark dependencies to a Scala project should be straightforward via the sbt-spark-package plugin, but I am getting the following error:
[error] (run-main-0) java.lang.NoClassDefFoundError: org/apache/spark/SparkContext
Please provide resources to learn more about what could be driving this error, as I want to understand the process more thoroughly.
CODE:
import org.apache.spark.sql.SparkSession

trait SparkSessionWrapper {
  lazy val spark: SparkSession = {
    SparkSession
      .builder()
      .master("local")
      .appName("spark citation graph")
      .getOrCreate()
  }

  val sc = spark.sparkContext
}
import org.apache.spark.graphx.GraphLoader

object Test extends SparkSessionWrapper {
  def main(args: Array[String]): Unit = {
    println("Testing, testing, testing, testing...")
    val filePath = "Desktop/citations.txt"
    val citeGraph = GraphLoader.edgeListFile(sc, filePath)
    println(citeGraph.vertices.take(1))
  }
}
plugins.sbt
resolvers += "bintray-spark-packages" at "https://dl.bintray.com/spark-packages/maven/"
addSbtPlugin("org.spark-packages" % "sbt-spark-package" % "0.2.6")
build.sbt -- WORKING. Why does adding libraryDependencies make run work?
spName := "yewno/citation_graph"
version := "0.1"
scalaVersion := "2.11.12"
sparkVersion := "2.2.0"
sparkComponents ++= Seq("core", "sql", "graphx")
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "2.2.0",
  "org.apache.spark" %% "spark-sql" % "2.2.0",
  "org.apache.spark" %% "spark-graphx" % "2.2.0"
)
build.sbt -- NOT WORKING. I would expect this to compile and run correctly:
spName := "yewno/citation_graph"
version := "0.1"
scalaVersion := "2.11.12"
sparkVersion := "2.2.0"
sparkComponents ++= Seq("core", "sql", "graphx")
Bonus points for an explanation and links to resources to learn more about the sbt build process, jar files, and anything else that can help me get up to speed!
The sbt-spark-package plugin provides its dependencies in provided scope:
sparkComponentSet.map { component =>
  "org.apache.spark" %% s"spark-$component" % sparkVersion.value % "provided"
}.toSeq
We can confirm this by running show libraryDependencies from sbt:
[info] * org.scala-lang:scala-library:2.11.12
[info] * org.apache.spark:spark-core:2.2.0:provided
[info] * org.apache.spark:spark-sql:2.2.0:provided
[info] * org.apache.spark:spark-graphx:2.2.0:provided
The provided scope means:
The dependency will be part of compilation and test, but excluded from
the runtime.
Thus, sbt run throws java.lang.NoClassDefFoundError: org/apache/spark/SparkContext.
If we really want to include the provided dependencies on the run classpath, then @douglaz suggests:
run in Compile := Defaults.runTask(fullClasspath in Compile, mainClass in (Compile, run), runner in (Compile, run)).evaluated
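For reference, the same override in the newer sbt slash syntax (the form already used in the Flink build.sbt earlier on this page) looks like this:
Compile / run := Defaults.runTask(
  Compile / fullClasspath,
  Compile / run / mainClass,
  Compile / run / runner
).evaluated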
We have a build.sbt file like this, which is working fine:
name := "Foo"
version := "0.1"
scalaVersion := "2.12.8"
def aws(module: String): ModuleID = "com.amazonaws" % module % "1.11.250"
lazy val Core = project
  .settings(
    libraryDependencies ++= Seq(
      aws("aws-java-sdk-s3"),
      aws("aws-java-sdk-dynamodb"),
    )
  )
Basically, the project has a few AWS SDK library dependencies and we want to avoid typing the groupID (e.g. "com.amazonaws") and the revision (e.g. "1.11.250") multiple times and that's why we have this line:
def aws(module: String): ModuleID = "com.amazonaws" % module % "1.11.250"
However, since we have many repos like this, we want to move this definition to a custom sbt plugin. To begin with, we tried this:
name := "Foo"
version := "0.1"
scalaVersion := "2.12.8"
val awsVersion = settingKey[String]("The version of aws SDK used for building.") // line 5
def aws(module: String): ModuleID = "com.amazonaws" % module % awsVersion.value // line 6
awsVersion := "1.11.250"
lazy val Core = project
  .settings(
    libraryDependencies ++= Seq(
      aws("aws-java-sdk-s3"),
      aws("aws-java-sdk-dynamodb"),
    )
  )
However, line 6 is producing an error:
error: value can only be used within a task or setting macro, such as :=, +=, ++=, Def.task, or Def.setting.
The idea is that we'll eventually move lines 5 and 6 above into our plugin so that we can use it like this:
name := "Foo"
version := "0.1"
scalaVersion := "2.12.8"
awsVersion := "1.11.250"
lazy val Core = project
  .settings(
    libraryDependencies ++= Seq(
      aws("aws-java-sdk-s3"),
      aws("aws-java-sdk-dynamodb"),
    )
  )
Any solution or workaround for the error above?
We've also tried this:
def aws(module: String, version: String): ModuleID = "com.amazonaws" % module % version
... which is then used like this:
awsVersion := "1.11.250"
lazy val Core = project
  .settings(
    libraryDependencies ++= Seq(
      aws("aws-java-sdk-s3", awsVersion.value),
      aws("aws-java-sdk-dynamodb", awsVersion.value),
    )
  )
That works fine, though it is a bit annoying to use, and it defeats the purpose of using a settingKey to begin with.
You cannot use setting or task values in locally defined methods like aws. The values can be used only within other setting or task definitions, i.e. the constructs listed in the error message: :=, +=, ++=, Def.task, or Def.setting.
This is what you could do.
Create an AutoPlugin in the project folder.
import sbt.{AutoPlugin, Def, ModuleID, settingKey}
import sbt.PluginTrigger.AllRequirements
import sbt._

object AwsPlugin extends AutoPlugin {
  override def trigger = AllRequirements

  type GetAWS = String => ModuleID

  object autoImport {
    val awsVersion =
      settingKey[String]("The version of aws SDK used for building.")
    val awsLibrary = settingKey[GetAWS]("Builds given AWS library")
  }

  import autoImport._

  override def projectSettings: Seq[Def.Setting[_]] = Seq(
    awsLibrary := { id =>
      "com.amazonaws" % id % awsVersion.value
    }
  )
}
Use it in this way in build.sbt
awsVersion in ThisBuild := "1.11.250"
lazy val Core = project
  .settings(
    libraryDependencies ++= Seq(
      awsLibrary.value("aws-java-sdk-s3"),
      awsLibrary.value("aws-java-sdk-dynamodb"),
    )
  )
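A lighter alternative that avoids writing a plugin (a sketch built on the Def.setting construct named in the error message; it has not been tested against this exact build) is to wrap the helper in Def.setting, which gives a context where awsVersion.value is allowed:
val awsVersion = settingKey[String]("The version of aws SDK used for building.")

// Def.setting returns an Initialize[ModuleID]; calling .value on it inside
// .settings(...) is legal, unlike calling awsVersion.value in a plain def.
def aws(module: String): Def.Initialize[ModuleID] =
  Def.setting("com.amazonaws" % module % awsVersion.value)

awsVersion := "1.11.250"

lazy val Core = project
  .settings(
    libraryDependencies ++= Seq(
      aws("aws-java-sdk-s3").value,
      aws("aws-java-sdk-dynamodb").value,
    )
  )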
I'm currently learning how to code in Scala, and I need some help with this:
import play.api.libs.json._

case class Alert(email: String, query: String) {
  def main(args: Array[String]): Unit = { println("Hello from main of class") }
}
I've got an error message:
Alert.scala:2: error: not found: object play
import play.api.libs.json._
One error found
I don't know where the problem comes from. I updated IntelliJ and even added the missing library dependencies in my build.sbt:
name := """alert"""
version := "1.0-SNAPSHOT"
lazy val root = (project in file(".")).enablePlugins(PlayScala)
scalaVersion := "2.11.7"
libraryDependencies ++= Seq(
  jdbc, cache, ws,
  "org.scalatestplus.play" %% "scalatestplus-play" % "1.5.1" % Test,
  "org.tpolecat" %% "doobie-core" % "0.4.1"
)
I'm quite sure they explained it very well in this related question:
object play not found in scala application
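For completeness, a minimal sketch of the dependency line that pulls in play.api.libs.json when the code lives outside a Play-enabled module (the version here is an assumption; inside a PlayScala project the artifact is already on the classpath):
libraryDependencies += "com.typesafe.play" %% "play-json" % "2.6.14"
If the dependency is already present, make sure the file is compiled through sbt (sbt compile / sbt run) rather than with a bare scalac invocation, which would not see the managed dependencies.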
I am having trouble using Scala macros. It keeps telling me to
enable macro paradise to expand macro annotations
from the @compileTimeOnly message I wrote. I followed all the instructions from the macro annotation documentation and the sbt example.
IDE: IntelliJ 14.1
Scala version: 2.11.7
Build.scala under the project folder:
import sbt._
import sbt.Keys._

object Build extends Build {
  val paradiseVersion = "2.1.0-M5"

  lazy val sm = Project(id = "server-modules", base = file(".")).settings(
    version := "1.0",
    logLevel := Level.Warn,
    scalacOptions ++= Seq(),
    scalaVersion := "2.11.7",
    crossScalaVersions := Seq("2.10.2", "2.10.3", "2.10.4", "2.10.5", "2.11.0", "2.11.1", "2.11.2", "2.11.3", "2.11.4", "2.11.5", "2.11.6", "2.11.7"),
    resolvers += Resolver.sonatypeRepo("snapshots"),
    resolvers += Resolver.sonatypeRepo("releases"),
    addCompilerPlugin("org.scalamacros" % "paradise" % paradiseVersion cross CrossVersion.full),
    libraryDependencies <+= (scalaVersion)("org.scala-lang" % "scala-reflect" % _)
  )
}
Code:
import scala.annotation.{StaticAnnotation, compileTimeOnly}
import scala.language.experimental.macros
import scala.reflect.macros.whitebox

@compileTimeOnly("enable macro paradise to expand macro annotations")
class dmCompile extends StaticAnnotation {
  def macroTransform(annottees: Any*): Any = macro DMCompile.impl
}

object DMCompile {
  def impl(c: whitebox.Context)(annottees: c.Expr[Any]*): c.Expr[Any] = {
    import c.universe._
    Log.info("Work work work!")
    c.Expr(q"""var x = y""")
  }
}

@dmCompile class Test {}
What exactly am I missing?
This took me the entire day, but I got it working.
Simply disregard the sbt settings for macro paradise and manually add the plugin in Preferences -> Scala Compiler.
That's it!
For me, the solution was to change the incrementality type from IDEA to sbt in the Settings. This uses sbt's native build engine instead of IDEA's.