SBT keeps failing with an "improper append" error. I'm using the exact format of build files I have seen numerous times.
build.sbt:
lazy val backend = (project in file("backend")).settings(
name := "backend",
libraryDependencies ++= (Dependencies.backend)
).dependsOn(api).aggregate(api)
dependencies.scala:
import sbt._
object Dependencies {
lazy val backend = common ++ metrics
val common = Seq(
"com.typesafe.akka" %% "akka-actor" % Version.akka,
"com.typesafe.akka" %% "akka-cluster" % Version.akka,
"org.scalanlp.breeze" %% "breeze" % Version.breeze,
"com.typesafe.akka" %% "akka-contrib" % Version.akka,
"org.scalanlp.breeze-natives" % Version.breeze,
"com.google.guava" % "guava" % "17.0"
)
val metrics = Seq("org.fusesource" % "sigar" % "1.6.4")
I'm not quite sure why SBT is complaining:
error: No implicit for Append.Values[Seq[sbt.ModuleID], Seq[Object]] found,
so Seq[Object] cannot be appended to Seq[sbt.ModuleID]
libraryDependencies ++= (Dependencies.backend)
^
Short Version (TL;DR)
There's an error in common: you want to replace this line
"org.scalanlp.breeze-natives" % Version.breeze,
with this line
"org.scalanlp" %% "breeze-natives" % Version.beeze,
Long Version
"org.scalanlp.breeze-natives" % Version.breeze is a GroupArtifactID not a ModuleID.
This causes common to become a Seq[Object] instead of a Seq[ModuleID].
And therefore also Dependencies.backend to be a Seq[Object]
Which ultimately can't be appended (via ++=) to libraryDependencies (defined as a SettingKey[Seq[ModuleID]]) because there is no available Append.Values[Seq[sbt.ModuleID], Seq[Object]].
One of common or metrics is not a Seq[sbt.ModuleID]. You could find out which with a type ascription:
val common: Seq[sbt.ModuleID] = ...
val metrics: Seq[sbt.ModuleID] = ...
My money is on common; this line doesn't have enough %s in it:
"org.scalanlp.breeze-natives" % Version.breeze
Related
I thought this would work, but it did not for me:
libraryDependencies += "org.json4s" %% "json4s-core" % "3.6.7" % "test"
libraryDependencies += "org.json4s" %% "json4s-core" % "3.7.0" % "compile"
Any idea?
The Test classpath includes the Compile classpath, so you can't get a different version onto it this way.
Create separate subprojects for the different versions of the dependency if you need that:
lazy val forJson4s370 = project
.settings(
libraryDependencies += "org.json4s" %% "json4s-core" % "3.7.0" % "compile"
)
lazy val forJson4s367 = project
.settings(
libraryDependencies += "org.json4s" %% "json4s-core" % "3.6.7" % "test"
)
If you don't want to create different subprojects, you can try custom sbt configurations:
https://www.scala-sbt.org/1.x/docs/Advanced-Configurations-Example.html
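To make that concrete, here is a rough, untested sketch in the spirit of that page; the Json4s367 name and layout are made up for illustration:
// build.sbt -- untested sketch of a custom configuration holding the old json4s
lazy val Json4s367 = config("json4s367")

lazy val root = (project in file("."))
  .configs(Json4s367)
  .settings(
    inConfig(Json4s367)(Defaults.configSettings),                                // gives the config its own compile/classpath
    libraryDependencies += "org.json4s" %% "json4s-core" % "3.7.0",              // ordinary Compile dependency
    libraryDependencies += "org.json4s" %% "json4s-core" % "3.6.7" % Json4s367   // visible only on the custom config
  )
Sources built against the old version should then live under a directory derived from the configuration name (src/json4s367/scala), keeping the two json4s versions on separate classpaths.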
An exotic solution is to manage dependencies and compile/run code programmatically inside the class that needs the exceptional version. That way you can use dependency versions different from the ones specified in build.sbt.
import java.net.URLClassLoader
import coursier.{Dependency, Module, Organization, ModuleName, Fetch}
import scala.reflect.runtime.universe
import scala.reflect.runtime.universe.Quasiquote
import scala.tools.reflect.ToolBox
val files = Fetch()
.addDependencies(
Dependency(Module(Organization("org.json4s"), ModuleName("json4s-core_2.13")), "3.6.7"),
)
.run()
val depClassLoader = new URLClassLoader(
files.map(_.toURI.toURL).toArray,
/*getClass.getClassLoader*/ null // ignoring current classpath
)
val rm = universe.runtimeMirror(depClassLoader)
val tb = rm.mkToolBox()
tb.eval(q"""
import org.json4s._
// some exceptional json4s 3.6.7 code
println("hi")
""")
// hi
build.sbt
libraryDependencies ++= Seq(
scalaOrganization.value % "scala-compiler" % scalaVersion.value % "test",
"io.get-coursier" %% "coursier" % "2.1.0-M7-39-gb8f3d7532" % "test",
"org.json4s" %% "json4s-core" % "3.7.0" % "compile",
)
We had a vulnerability check in our sbt project using Anchore Engine.
Most of the errors are related to jackson-databind. We are not even using it directly, as we use spray-json for serialization. After searching I found it is used internally by sbt, so I cannot upgrade its version myself. I therefore tried upgrading sbt from 1.2.6 to 1.4.0 to resolve this issue, but it didn't work.
object Versions {
val guice = "4.2.1"
val slick = "3.3.2"
val hikariCP = "3.3.0"
val postgres = "42.2.5"
val rabbitMQClient = "5.5.1"
val logbackClassic = "1.2.3"
val sprayJson = "1.3.5"
val akkaHttp = "10.1.5"
val akkaActor = "2.5.19"
val akkaStream = "2.5.19"
val scalaTest = "3.0.1"
val h2 = "1.4.197"
val rabbitmqMock = "1.0.8"
val mockito = "1.9.5"
}
object CompileDeps {
val guice = "com.google.inject" % "guice" % Versions.guice
val scalaGuice = "net.codingwell" %% "scala-guice" % Versions.guice
val postgresql = "org.postgresql" % "postgresql" % Versions.postgres
val slick = "com.typesafe.slick" %% "slick" % Versions.slick
val hikariCP = "com.typesafe.slick" %% "slick-hikaricp" % Versions.hikariCP
val rabbitMQClient= "com.rabbitmq" % "amqp-client" % Versions.rabbitMQClient exclude("com.fasterxml.jackson.core", "jackson-databind")
val logbackClassic = "ch.qos.logback" % "logback-classic" % Versions.logbackClassic
val sprayJson = "io.spray" %% "spray-json" % Versions.sprayJson
val akkaHttp = "com.typesafe.akka" %% "akka-http" % Versions.akkaHttp
val akkaActor = "com.typesafe.akka" %% "akka-actor" % Versions.akkaActor
val akkaStream = "com.typesafe.akka" %% "akka-stream" % Versions.akkaStream
val akkaHttpSprayJson = "com.typesafe.akka" %% "akka-http-spray-json" % Versions.akkaHttp
}
(dependencyBrowseGraph output of the project attached)
So can anyone please guide me on how I can resolve these security findings?
Thanks
You are fetching Jackson via the RabbitMQ dependency. See the compile dependencies of your version of the RabbitMQ client on the Maven repository.
This dependency is marked as optional, so you could probably safely remove it using exclude("com.fasterxml.jackson.core", "jackson-databind"). Test it! If that doesn't work, add the dependency explicitly to bump it to a newer, safer version, or find a way to suppress the warning.
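If you do end up bumping it explicitly, one way is an override in build.sbt. This is only a sketch and the version below is a placeholder; pick whatever your scanner accepts:
// Sketch: pin a patched jackson-databind for every module that pulls it in transitively
dependencyOverrides += "com.fasterxml.jackson.core" % "jackson-databind" % "2.9.10.8"  // placeholder version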
For the future: use sbt-dependency-graph to generate a visual dependency graph (dependencyBrowseGraph); then you'll be able to see which libraries fetch and evict your dependencies.
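Wiring that plugin up is a one-liner in project/plugins.sbt (the version is approximate; on sbt 1.4+ an equivalent plugin ships with sbt itself):
// project/plugins.sbt -- sketch; check for the latest plugin release
addSbtPlugin("net.virtual-void" % "sbt-dependency-graph" % "0.10.0-RC1")
// on sbt 1.4 and later you can instead enable the bundled equivalent:
// addDependencyTreePlugin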
I am trying to publish messages to a topic in GCP's Pub/Sub from Spark (Scala), using IntelliJ. Here is the code:
GcpPublish.scala
val publisher = Publisher.newBuilder(s"projects/projectid/topics/test")
.setCredentialsProvider(FixedCredentialsProvider
.create(ServiceAccountCredentials
.fromStream(new FileInputStream("gs://credsfiles/projectid.json"))))
.build()
publisher.publish(PubsubMessage
.newBuilder
.setData(ByteString.copyFromUtf8(JSONData.toString()))
.build())
And this is the build.sbt:
name := "TryingSomething"
version := "1.0"
scalaVersion := "2.11.12"
val sparkVersion = "2.3.2"
libraryDependencies ++= Seq(
"org.apache.spark" %% "spark-core" % "2.3.2" % "provided",
"org.apache.spark" %% "spark-sql" % "2.3.2" ,
"com.google.cloud" % "google-cloud-bigquery" % "1.106.0",
"org.apache.beam" % "beam-sdks-java-core" % "2.19.0" ,
"org.apache.beam" % "beam-runners-google-cloud-dataflow-java" % "2.19.0",
"com.typesafe.scala-logging" %% "scala-logging" % "3.1.0" ,
"org.apache.beam" % "beam-sdks-java-extensions-google-cloud-platform-core" % "2.19.0" ,
"org.apache.beam" % "beam-sdks-java-io-google-cloud-platform" % "2.19.0" ,
"com.google.apis" % "google-api-services-bigquery" % "v2-rev456-1.25.0" ,
"com.google.cloud" % "google-cloud-pubsub" % "1.102.1",
"com.google.guava" % "guava" % "28.2-jre",
"org.apache.httpcomponents" % "httpclient" % "4.5.11"
)
assemblyMergeStrategy in assembly := {
case PathList("META-INF", xs # _*) => MergeStrategy.discard
case _ => MergeStrategy.first
}
But when I create the fat jar and run it on the Dataproc cluster, I get the error below:
Exception in thread "main" java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;I)V
at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider$Builder.setPoolSize(InstantiatingGrpcChannelProvider.java:527)
at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider$Builder.setChannelsPerCpu(InstantiatingGrpcChannelProvider.java:546)
at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider$Builder.setChannelsPerCpu(InstantiatingGrpcChannelProvider.java:535)
at com.google.cloud.pubsub.v1.Publisher$Builder.<init>(Publisher.java:633)
at com.google.cloud.pubsub.v1.Publisher$Builder.<init>(Publisher.java:588)
at com.google.cloud.pubsub.v1.Publisher.newBuilder(Publisher.java:584)
I followed the solutions stated here and added the guava and httpcomponents dependencies, but I still get the same exception.
I even changed the code to instantiate the Publisher to:
val publisher = Publisher.newBuilder(s"projects/projectid/topics/test").build()
But this gives the same error as well.
Any suggestions as to what could cause this error?
The problem was that both Spark and Hadoop inject their own versions of Guava, which conflict with the one the Google Pub/Sub package needs. I got around this by adding shade rules to the build.sbt file:
assemblyShadeRules in assembly := Seq(
ShadeRule.rename("com.google.common.**" -> "repackaged.com.google.common.#1").inAll,
ShadeRule.rename("com.google.protobuf.**" -> "repackaged.com.google.protobuf.#1").inAll,
ShadeRule.rename("io.netty.**" -> "repackaged.io.netty.#1").inAll
)
The shade rules for com.google.common and com.google.protobuf are the ones that resolve the Guava conflict. I added the others for further dependency conflicts I encountered along the way.
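For completeness: the ShadeRule API above comes from sbt-assembly, which the build in the question already uses for assemblyMergeStrategy. If you are starting from scratch, it is enabled in project/plugins.sbt roughly like this (the version is a guess, check for the current release):
// project/plugins.sbt -- sketch; use a current sbt-assembly version
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.10")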
I was trying to set up an IntelliJ build for Spark with JanusGraph using gremlin-scala, but I am running into errors.
My build.sbt file is:
version := "1.0"
scalaVersion := "2.11.11"
libraryDependencies += "com.michaelpollmeier" % "gremlin-scala" % "2.3.0"
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.2.1"
// https://mvnrepository.com/artifact/org.apache.spark/spark-sql
libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.2.1"
// https://mvnrepository.com/artifact/org.apache.spark/spark-mllib
libraryDependencies += "org.apache.spark" %% "spark-mllib" % "2.2.1"
// https://mvnrepository.com/artifact/org.apache.spark/spark-hive
libraryDependencies += "org.apache.spark" %% "spark-hive" % "2.2.1"
// https://mvnrepository.com/artifact/org.janusgraph/janusgraph-core
libraryDependencies += "org.janusgraph" % "janusgraph-core" % "0.2.0"
libraryDependencies ++= Seq(
"ch.qos.logback" % "logback-classic" % "1.2.3" % Test,
"org.scalatest" %% "scalatest" % "3.0.3" % Test
)
resolvers ++= Seq(
Resolver.mavenLocal,
"Sonatype OSS" at "https://oss.sonatype.org/content/repositories/public"
)
But I am getting errors when I try to compile code that uses the gremlin-scala library or io.Source. Can someone share their build file or tell me what I should modify to fix it?
Thanks in advance.
So, I was trying to compile this code:
import gremlin.scala._
import org.apache.commons.configuration.BaseConfiguration
import org.janusgraph.core.JanusGraphFactory
class Test1() {
val conf = new BaseConfiguration()
conf.setProperty("storage.backend", "inmemory")
val gr = JanusGraphFactory.open(conf)
val graph = gr.asScala()
graph.close
}
object Test{
def main(args: Array[String]) {
val t = new Test1()
println("in Main")
}
}
The errors I get are:
Error:(1, 8) not found: object gremlin
import gremlin.scala._
Error:(10, 18) value asScala is not a member of org.janusgraph.core.JanusGraph
val graph = gr.asScala()
If you go to the Gremlin-Scala GitHub page you'll see that the current version is "3.3.1.1" and that
Typically you just need to add a dependency on "com.michaelpollmeier" %% "gremlin-scala" % "SOME_VERSION" and one for the graph db of your choice to your build.sbt (this readme assumes tinkergraph). The latest version is displayed at the top of this readme in the maven badge.
It is not a surprise that the API has changed when the major version of the library is different. If I change your first dependency to
//libraryDependencies += "com.michaelpollmeier" % "gremlin-scala" % "2.3.0" //old!
libraryDependencies += "com.michaelpollmeier" %% "gremlin-scala" % "3.3.1.1"
then your example code compiles for me.
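Put together, the relevant dependency lines in build.sbt end up looking like this (versions taken from the question and from this answer):
libraryDependencies += "com.michaelpollmeier" %% "gremlin-scala"   % "3.3.1.1"
libraryDependencies += "org.janusgraph"       %  "janusgraph-core" % "0.2.0"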
I'm having some problems building a simple application with Spark SQL. What I want to do is add a new column to a DataFrame. So I have done:
val sqlContext=new HiveContext(sc)
import sqlContext._
// creating the DataFrame
correctDF.withColumn("COL1", expr("concat('000',COL1)") )
but when I build it with sbt, it fails with the error:
not found: value expr
(and also Eclipse complains about it)
In the spark-shell, on the other hand, it works like a charm.
In my build.sbt file I have:
scalaVersion := "2.10.5"
libraryDependencies += "org.apache.spark" % "spark-core_2.10" % "1.6.0" % "provided"
libraryDependencies += "org.apache.spark" % "spark-sql_2.10" % "1.6.0" % "provided"
libraryDependencies += "org.apache.spark" % "spark-hive_2.10" % "1.6.0" % "provided"
I've added the last line after I read a post, but nothing changed...
Can someone help me?
I found the answer. I was missing this import:
import org.apache.spark.sql.functions._
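With that import in place the snippet compiles; a minimal sketch, using the same Spark 1.6 / HiveContext setup as in the question:
import org.apache.spark.sql.functions._   // brings expr (and friends such as concat and lit) into scope

val result = correctDF.withColumn("COL1", expr("concat('000', COL1)"))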