This is my build.sbt file:
name := "words"
version := "1.0"
scalaVersion := "2.10.4"
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "1.3.0",
  "org.apache.spark" %% "spark-sql" % "1.3.0"
)
and this is my project/build.properties:
sbt.version=0.13.8-RC1
When I compile the program, I get the following error:
[error] D:\projects\bd\words\src\main\scala\test.scala:8: type SqlContext is not a member of package org.apache.spark.sql
[error] val sqlContext = new org.apache.spark.sql.SqlContext(sc)
[error] ^
[error] one error found
[error] (compile:compileIncremental) Compilation failed
It's SQLContext, not SqlContext.
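For reference, a minimal sketch of the corrected line, assuming sc is an existing SparkContext as in the original snippet:
import org.apache.spark.sql.SQLContext

val sqlContext = new SQLContext(sc) // class name is case-sensitive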
There is no caliban.federation for Scala 3 yet.
My question is: what is the correct way to use it alongside Scala 3 libraries?
For now I have the following dependencies in my build.sbt:
lazy val `bookings` =
  project
    .in(file("."))
    .settings(
      scalaVersion := "3.0.1",
      name := "bookings"
    )
    .settings(commonSettings)
    .settings(dependencies)

lazy val dependencies = Seq(
  libraryDependencies ++= Seq(
    "com.github.ghostdogpr" %% "caliban-zio-http" % "1.1.0"
  ),
  libraryDependencies ++= Seq(
    org.scalatest.scalatest,
    org.scalatestplus.`scalacheck-1-15`,
  ).map(_ % Test),
  libraryDependencies +=
    ("com.github.ghostdogpr" %% "caliban-federation" % "1.1.0")
      .cross(CrossVersion.for3Use2_13)
)
But when I try to build it, it fails with:
[error] (update) Conflicting cross-version suffixes in:
dev.zio:zio-query,
org.scala-lang.modules:scala-collection-compat,
dev.zio:zio-stacktracer,
dev.zio:izumi-reflect,
com.github.ghostdogpr:caliban-macros,
dev.zio:izumi-reflect-thirdparty-boopickle-shaded,
dev.zio:zio,
com.github.ghostdogpr:caliban,
dev.zio:zio-streams
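The conflict comes from mixing the Scala 3 artifact of caliban-zio-http with the Scala 2.13 artifact of caliban-federation, so both _3 and _2.13 variants of caliban and zio end up on the classpath. One sketch of a way to keep the suffixes consistent is to pull every caliban module from the 2.13 line; this is only an illustration and assumes the _2.13 artifacts of these modules exist at 1.1.0 and that no other Scala 3 dependency drags the _3 variants back in:

libraryDependencies ++= Seq(
  "com.github.ghostdogpr" %% "caliban-zio-http"   % "1.1.0",
  "com.github.ghostdogpr" %% "caliban-federation" % "1.1.0"
).map(_.cross(CrossVersion.for3Use2_13)) // use the 2.13 builds consistently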
I am attempting to use sbt assembly on a Spark project. sbt compile and package work, but when I attempt sbt assembly I get the following error:
object spark is not a member of package org.apache
I have included the Spark core and Spark SQL libraries and have sbt-assembly in my plugins file. Why is assembly producing these errors?
build.sbt:
name := "redis-record-loader"
scalaVersion := "2.11.8"
val sparkVersion = "2.3.1"
val scalatestVersion = "3.0.3"
val scalatest = "org.scalatest" %% "scalatest" % scalatestVersion
libraryDependencies ++= Seq(
  "com.amazonaws" % "aws-java-sdk-s3" % "1.11.347",
  "com.typesafe" % "config" % "1.3.1",
  "net.debasishg" %% "redisclient" % "3.0",
  "org.slf4j" % "slf4j-log4j12" % "1.7.12",
  "org.apache.commons" % "commons-lang3" % "3.0" % "test,it",
  "org.apache.hadoop" % "hadoop-aws" % "2.8.1" % Provided,
  "org.apache.spark" %% "spark-core" % sparkVersion % Provided,
  "org.apache.spark" %% "spark-sql" % sparkVersion % Provided,
  "org.mockito" % "mockito-core" % "2.21.0" % Test,
  scalatest
)
val integrationTestsKey = "it"
val integrationTestLibs = scalatest % integrationTestsKey
lazy val IntegrationTestConfig = config(integrationTestsKey) extend Test
lazy val root = project.in(file("."))
  .configs(IntegrationTestConfig)
  .settings(inConfig(IntegrationTestConfig)(Defaults.testSettings): _*)
  .settings(libraryDependencies ++= Seq(integrationTestLibs))
test in assembly := Seq(
  (test in Test).value,
  (test in IntegrationTestConfig).value
)

assemblyMergeStrategy in assembly := {
  case PathList("META-INF", xs @ _*) => MergeStrategy.discard
  case x => MergeStrategy.first
}
plugins.sbt:
logLevel := Level.Warn
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.6")
full error message:
/com/elsevier/bos/RedisRecordLoaderIntegrationSpec.scala:11: object spark is not a member of package org.apache
[error] import org.apache.spark.sql.{DataFrame, SaveMode, SparkSession}
[error] ^
[error] /Users/jones8/Work/redis-record-loader/src/it/scala/com/elsevier/bos/RedisRecordLoaderIntegrationSpec.scala:26: not found: type SparkSession
[error] implicit val spark: SparkSession = SparkSession.builder
[error] ^
[error] /Users/jones8/Work/redis-record-loader/src/it/scala/com/elsevier/bos/RedisRecordLoaderIntegrationSpec.scala:26: not found: value SparkSession
[error] implicit val spark: SparkSession = SparkSession.builder
[error] ^
[error] /Users/jones8/Work/redis-record-loader/src/it/scala/com/elsevier/bos/RedisRecordLoaderIntegrationSpec.scala:51: not found: type DataFrame
[error] val testDataframe0: DataFrame = testData0.toDF()
[error] ^
[error] /Users/jones8/Work/redis-record-loader/src/it/scala/com/elsevier/bos/RedisRecordLoaderIntegrationSpec.scala:51: value toDF is not a member of Seq[(String, String)]
[error] val testDataframe0: DataFrame = testData0.toDF()
[error] ^
[error] /Users/jones8/Work/redis-record-loader/src/it/scala/com/elsevier/bos/RedisRecordLoaderIntegrationSpec.scala:52: not found: type DataFrame
[error] val testDataframe1: DataFrame = testData1.toDF()
[error] ^
[error] /Users/jones8/Work/redis-record-loader/src/it/scala/com/elsevier/bos/RedisRecordLoaderIntegrationSpec.scala:52: value toDF is not a member of Seq[(String, String)]
[error] val testDataframe1: DataFrame = testData1.toDF()
[error] ^
[error] missing or invalid dependency detected while loading class file 'RedisRecordLoader.class'.
[error] Could not access term spark in package org.apache,
[error] because it (or its dependencies) are missing. Check your build definition for
[error] missing or conflicting dependencies. (Re-run with `-Ylog-classpath` to see the problematic classpath.)
[error] A full rebuild may help if 'RedisRecordLoader.class' was compiled against an incompatible version of org.apache.
[error] missing or invalid dependency detected while loading class file 'RedisRecordLoader.class'.
[error] Could not access type SparkSession in value org.apache.sql,
[error] because it (or its dependencies) are missing. Check your build definition for
[error] missing or conflicting dependencies. (Re-run with `-Ylog-classpath` to see the problematic classpath.)
[error] A full rebuild may help if 'RedisRecordLoader.class' was compiled against an incompatible version of org.apache.sql.
[error] 9 errors found
I can't comment on the assembly error itself, but I can say I doubt the AWS SDK & hadoop-aws versions are going to work together. You need the exact version of hadoop-aws to match the hadoop-common JAR on your classpath (it's all one project which releases in sync, after all), and the AWS SDK version it was built against was 1.10. The AWS SDK has a habit of (a) breaking APIs on every point release, (b) aggressively pushing new versions of Jackson down, even when they are incompatible, and (c) causing regressions in the hadoop-aws code.
If you really want to work with S3A, it's best to go for Hadoop 2.9, which pulls in a shaded 1.11.x version of the SDK.
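To illustrate the version-matching point, a hypothetical build.sbt fragment that keeps hadoop-aws in lock-step with the hadoop-common that Spark already brings in; the 2.7.7 value is an assumption, so substitute whatever Hadoop version your Spark 2.3.1 build actually ships with:

val hadoopVersion = "2.7.7" // assumption: align with the Hadoop on your Spark classpath

libraryDependencies ++= Seq(
  "org.apache.hadoop" % "hadoop-common" % hadoopVersion % Provided,
  "org.apache.hadoop" % "hadoop-aws"    % hadoopVersion % Provided
)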
I am using the Play Java Starter Example 2.5.x from the Play Framework page. This is my plugins.sbt:
addSbtPlugin("com.typesafe.sbt" % "sbt-play-ebean" % "3.0.0")
This is my build.sbt:
name := """play-java"""
version := "1.0-SNAPSHOT"
lazy val root = (project in file(".")).enablePlugins(PlayJava,PlayEbean)
scalaVersion := "2.11.11"
libraryDependencies += filters
libraryDependencies ++= Seq(
  javaJdbc,
  cache,
  javaWs,
  evolutions
)
In application.conf:
ebean.default = ["models.*"]
When trying to run the application I always get:
error: not found: value PlayEbean
lazy val root = (project in file(".")).enablePlugins(PlayJava, PlayEbean)
^
sbt.compiler.EvalException: Type error in expression
[error] sbt.compiler.EvalException: Type error in expression
Invalid response.
Help is very much appreciated.
After researching I solved it: the problem was that I did not put the
addSbtPlugin("com.typesafe.sbt" % "sbt-play-ebean" % "3.0.0")
line in the correct file.
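In other words, sbt plugin declarations belong in project/plugins.sbt, not in build.sbt, so the working layout looks roughly like this:

// project/plugins.sbt
addSbtPlugin("com.typesafe.sbt" % "sbt-play-ebean" % "3.0.0")

// build.sbt (project root) keeps the regular build definition
lazy val root = (project in file(".")).enablePlugins(PlayJava, PlayEbean)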
I've been trying to use Apache Commons, but compilation fails with the following errors.
I have no idea how to fix it. Maybe I need to add something to build.sbt?
$ sbt
> clean
> compile
[error] hw.scala:3: object compress is not a member of package org.apache.commons
[error] import org.apache.commons.compress.utils.IOUtils
[error] ^
[error] hw.scala:24: not found: value IOUtils
[error] val bytes = IOUtils.toByteArray(new FileInputStream(imgFile))
[error] ^
[error] two errors found
[error] (compile:compileIncremental) Compilation failed
hw.scala
import org.apache.commons.codec.binary.{ Base64 => ApacheBase64 }
import org.apache.commons.compress.utils.IOUtils
...
build.sbt
name := "hello"
version := "1.0"
scalaVersion := "2.11.8"
libraryDependencies ++= Seq(
  "commons-codec" % "commons-codec" % "1.10",
  "commons-io" % "commons-io" % "2.4"
)
Add this to your build.sbt:
// https://mvnrepository.com/artifact/org.apache.commons/commons-compress
libraryDependencies += "org.apache.commons" % "commons-compress" % "1.14"
To find this by yourself in the future, search on https://mvnrepository.com/
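With commons-compress on the classpath, the original imports resolve. A minimal, self-contained sketch of the idea (the file name here is just a placeholder):

import java.io.FileInputStream
import org.apache.commons.codec.binary.{ Base64 => ApacheBase64 }
import org.apache.commons.compress.utils.IOUtils

object Hw {
  def main(args: Array[String]): Unit = {
    val imgFile = "image.png" // placeholder path
    // Read the file into a byte array using commons-compress IOUtils
    val bytes = IOUtils.toByteArray(new FileInputStream(imgFile))
    // Encode it with commons-codec Base64
    val encoded = ApacheBase64.encodeBase64String(bytes)
    println(encoded.take(40))
  }
}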
I'm trying to work with Spark and Scala, compiling a standalone application. I don't know why I'm getting this error:
topicModel.scala:2: ';' expected but 'import' found.
[error] import org.apache.spark.mllib.clustering.LDA
[error] ^
[error] one error found
[error] (compile:compileIncremental) Compilation failed
This is the build.sbt code:
name := "topicModel"
version := "1.0"
scalaVersion := "2.11.6"
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.3.1"
libraryDependencies += "org.apache.spark" %% "spark-graphx" % "1.3.1"
libraryDependencies += "org.apache.spark" %% "spark-mllib" % "1.3.1"
And these are the imports:
import scala.collection.mutable
import org.apache.spark.mllib.clustering.LDA
import org.apache.spark.mllib.linalg.{Vector, Vectors}
import org.apache.spark.rdd.RDD
object Simple {
def main(args: Array[String]) {
This could be because your file has old Macintosh line endings (\r).
See Why do I need semicolons after these imports? for more details.
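If old Macintosh line endings are indeed the culprit, converting the file to Unix newlines fixes the parse error. A small, hypothetical Scala snippet that rewrites the file in place (the file name is an assumption):

import java.nio.file.{Files, Paths}
import java.nio.charset.StandardCharsets

object FixLineEndings {
  def main(args: Array[String]): Unit = {
    val path = Paths.get("topicModel.scala") // assumed file name
    val text = new String(Files.readAllBytes(path), StandardCharsets.UTF_8)
    // Old Mac files use a lone \r as the line separator; normalize everything to \n
    val fixed = text.replace("\r\n", "\n").replace("\r", "\n")
    Files.write(path, fixed.getBytes(StandardCharsets.UTF_8))
  }
}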