I have a project built with sbt 0.11.
I'm trying to create a simple UI with Scala Swing, so first thing is to add a dependency on scala-swing in my build.sbt:
libraryDependencies += "org.scala-lang" % "scala-swing" % "2.9.1-1"
But I have a SettingKey scalaVersion defined:
scalaVersion := "2.9.1-1"
How can I reference that property? If I try to use it like
libraryDependencies += "org.scala-lang" % "scala-swing" % scalaVersion
The compiler complains that it found sbt.SettingKey[String] where a String is expected. There are get(...) and evaluate(...) methods on SettingKey, but they require some Setting[Scope] parameter to be passed in.
What is the simplest way to just reference this property?
You need to tell the system that libraryDependencies now depends on scalaVersion:
libraryDependencies <+= (scalaVersion) { sv => "org.scala-lang" % "scala-swing" % sv }
(that's my preferred formatting; it's actually invoking the apply method on scalaVersion so you could write it a few different ways, e.g., scalaVersion("org.scala-lang" % "scala-swing" % _).)
If you had multiple settings you wanted to depend on simultaneously, you'd apply on the tuple of them:
foo <<= (scalaVersion, organization) { (sv, o) => o + " uses Scala " + sv }
libraryDependencies <+= scalaVersion("org.scala-lang" % "scala-swing" % _)
The < tells SBT that your setting has a dependency on another setting.
The + tells SBT that you want to add another value, not replace the existing ones (it also indicates that the contents of the setting are a sequence, and you are adding one element to it).
The syntax setting(function) applies function to the value of setting, evaluated in the proper context. Writing that dependency out explicitly would be very verbose, so the shortcut is very helpful.
One can also use (setting1, setting2)((a, b) => ...) to depend on multiple settings at once.
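Concretely, the shorthand desugars to an explicit apply call; these two lines should be equivalent (a minimal sketch against the pre-0.13 settings API):
libraryDependencies <+= scalaVersion(sv => "org.scala-lang" % "scala-swing" % sv)
libraryDependencies <+= scalaVersion.apply(sv => "org.scala-lang" % "scala-swing" % sv)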
PS: The following might work as well, and it is a bit shorter, but it uses postfix operator syntax, which is deprecated without special compiler flags as of Scala 2.10.0.
libraryDependencies <+= scalaVersion("org.scala-lang" % "scala-swing" %)
Realising this is old - adding an answer in case anyone else comes across it.
Just add .value to the scalaVersion key to get the String value (available since sbt 0.13):
libraryDependencies += "org.scala-lang" % "scala-swing" % scalaVersion.value
Something like
libraryDependencies <+= scalaVersion { v => "org.scala-lang" % "scala-swing" % v}
should work.
Related
I have a project which depends on https://github.com/typelevel/kind-projector and is currently cross-compiled against Scala 2.12 and 2.13, and I want to add support for Scala 3.0. However, kind-projector is not available for Scala 3.0, since the syntax it enables is part of native Scala 3 syntax.
Before, I used this setting to add the compiler plugin:
addCompilerPlugin("org.typelevel" % "kind-projector" % "0.11.3" cross CrossVersion.full)
Now, I'm trying to disable that setting if the scalaVersion is 3.0.0.
The closest I've got is
Def.setting {
  scalaVersion.value match {
    case "3.0.0" => new Def.SettingList(Nil)
    case _ => Seq(
      addCompilerPlugin("org.typelevel" % "kind-projector" % "0.11.3" cross CrossVersion.full)
    )
  }
}
but the types don't work out (that's an Initialize but it needs to be a Setting).
How can I conditionally disable the compiler plugin based on the scala version?
addCompilerPlugin is a shortcut for libraryDependencies += compilerPlugin(dependency)
Thus, it should work with something like this:
libraryDependencies ++= {
  scalaVersion.value match {
    case "3.0.0" =>
      Nil
    case _ =>
      List(compilerPlugin("org.typelevel" % "kind-projector" % "0.11.3" cross CrossVersion.full))
  }
}
There might be a better way to do it though.
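For instance, matching on the major version with CrossVersion.partialVersion avoids hard-coding an exact release like "3.0.0"; a sketch, assuming the plugin should apply to every Scala 2.x cross-build:
libraryDependencies ++= {
  // Only add kind-projector on Scala 2; Scala 3 has the syntax built in.
  CrossVersion.partialVersion(scalaVersion.value) match {
    case Some((2, _)) =>
      List(compilerPlugin("org.typelevel" % "kind-projector" % "0.11.3" cross CrossVersion.full))
    case _ =>
      Nil
  }
}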
Original answer which doesn't work because scalaVersion.value is not available in this context:
scalaVersion.value match {
  case "3.0.0" =>
    new Def.SettingList(Nil)
  case _ =>
    addCompilerPlugin("org.typelevel" % "kind-projector" % "0.11.3" cross CrossVersion.full)
}
I am trying to import packages in the sbt console, as follows:
scala> import cats.instances.string
<console>:11: warning: Unused import
import cats.instances.string
^
error: No warnings can be incurred under -Xfatal-warnings.
As you can see, I get an error message.
The content of the build.sbt is:
scalaVersion := "2.12.8"
scalacOptions ++= Seq(
  "-encoding", "UTF-8",    // source files are in UTF-8
  "-deprecation",          // warn about use of deprecated APIs
  "-unchecked",            // warn about unchecked type parameters
  "-feature",              // warn about misused language features
  "-language:higherKinds", // allow higher-kinded types without `import scala.language.higherKinds`
  "-Xlint",                // enable handy linter warnings
  "-Xfatal-warnings",      // turn compiler warnings into errors
  "-Ypartial-unification"  // allow the compiler to unify type constructors of different arities
)
libraryDependencies += "org.typelevel" %% "cats-core" % "1.4.0"
libraryDependencies += "org.tpolecat" %% "atto-core" % "0.6.5"
libraryDependencies += "org.tpolecat" %% "atto-refined" % "0.6.5"
addCompilerPlugin("org.spire-math" %% "kind-projector" % "0.9.3")
What am I doing wrong?
The best solution in this situation is to remove -Xlint from the Scala options that are used for the console:
scalaVersion := "2.12.8"
scalacOptions ++= Seq(
  "-Xlint",
  "-Xfatal-warnings"
)
scalacOptions in (Compile, console) ~= {
  _.filterNot(Set("-Xlint"))
}
libraryDependencies += "org.typelevel" %% "cats-core" % "1.6.0"
With this configuration, any source code in your project will be compiled with -Xlint, but any code that's interpreted in the REPL won't be. This is generally exactly what you want: the most thorough safety-checking possible for your project code, but much more flexibility for experimentation in the REPL.
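If you're on sbt 1.x, the same filtering can be written with slash syntax instead of the in (Compile, console) form; a minimal sketch, same behavior assumed:
// Drop -Xlint (and with it the unused-import lint) only for REPL sessions:
Compile / console / scalacOptions ~= {
  _.filterNot(Set("-Xlint"))
}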
I have tried the solution from SBT cross building - choosing a different library version for different Scala versions; however, this results in:
build.sbt:27: error: No implicit for Append.Value[Seq[sbt.librarymanagement.ModuleID], sbt.Def.Initialize[sbt.librarymanagement.ModuleID]] found,
so sbt.Def.Initialize[sbt.librarymanagement.ModuleID] cannot be appended to Seq[sbt.librarymanagement.ModuleID]
libraryDependencies += scalaVersion(jsonDependency(_)),
^
[error] sbt.compiler.EvalException: Type error in expression
[error] sbt.compiler.EvalException: Type error in expression
[error] Use 'last' for the full log.
What is the correct way of forcing library dependencies for different Scala versions in sbt 1.1.1?
build.sbt:
libraryDependencies += scalaVersion(jsonDependency(_))
def jsonDependency(scalaVersion: String) = scalaVersion match {
  case "2.11.7" => "com.typesafe.play" %% "play-json" % "2.4.2"
  case "2.12.4" => "com.typesafe.play" %% "play-json" % "2.6.9"
}
The first line should be:
libraryDependencies += jsonDependency(scalaVersion.value)
As for the rest, it's unnecessarily sensitive to exact Scala version numbers. Consider using CrossVersion.partialVersion to be sensitive to the Scala major version only, as follows:
def jsonDependency(scalaVersion: String) =
  "com.typesafe.play" %% "play-json" %
    (CrossVersion.partialVersion(scalaVersion) match {
      case Some((2, 11)) => "2.4.2"
      case _             => "2.6.9"
    })
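Combined with crossScalaVersions, sbt evaluates scalaVersion.value once per cross-build, so each Scala version picks up the matching play-json automatically; a sketch (the 2.11 patch version here is an assumption):
crossScalaVersions := Seq("2.11.12", "2.12.4")
libraryDependencies += jsonDependency(scalaVersion.value)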
I have an sbt build file that uses one plugin and three dependencies:
scalaVersion := "2.10.4"
val reflect = Def.setting { "org.scala-lang" % "scala-reflect" % "2.10.4" }
val compiler = Def.setting { "org.scala-lang" % "scala-compiler" % "2.10.4" }
lazy val macrosSettings = Project.defaultSettings ++ Seq(
  addCompilerPlugin("org.scala-lang.plugins" % "macro-paradise_2.10.4-SNAPSHOT" % "2.0.0-SNAPSHOT"),
  libraryDependencies ++= {
    import Dependencies._
    Seq(play_json, specs2, reflect.value)
  }
)
lazy val Macros = Project(id="IScala-Macros", base=file("macros"), settings=macrosSettings)
However, the compiler gave me the following error when compiling IScala-Macros:
[warn] :: org.scala-lang#scala-compiler;2.10.4-SNAPSHOT: not found
[warn] :: org.scala-lang#scala-library;2.10.4-SNAPSHOT: not found
[warn] :: org.scala-lang#scala-reflect;2.10.4-SNAPSHOT: not found
This seems like a bug, as I don't want them to resolve to 2.10.4-SNAPSHOT but only to 2.10.4. Is this a bug in sbt? If not, where does the SNAPSHOT come from?
There are a couple of issues in this build.sbt build definition, so I highly recommend reading the Macro Paradise documentation, where you can find a link to a project that serves as an end-to-end example. In a nutshell, working with macro paradise is as easy as adding two lines to your build, granted you've already set up sbt to use macros (see the sketch after the corrected build below).
As to the issues in this build, I don't see a reason to use Def.setting for the dependencies reflect and compiler when the versions are hard-coded, and moreover I'm unsure about the dependency in addCompilerPlugin. Use the version below, where Def.setting is used to refer to the value of the scalaVersion setting. I still think addCompilerPlugin should follow the sample project above.
import Dependencies._
scalaVersion := "2.10.4"
val reflect = Def.setting {
  "org.scala-lang" % "scala-reflect" % scalaVersion.value
}
val compiler = Def.setting {
  "org.scala-lang" % "scala-compiler" % scalaVersion.value
}
lazy val macrosSettings = Project.defaultSettings ++ Seq(
  addCompilerPlugin("org.scala-lang.plugins" % "macro-paradise_2.10.4-SNAPSHOT" % "2.0.0-SNAPSHOT"),
  libraryDependencies ++= Seq(
    play_json,
    specs2,
    reflect.value
  )
)
lazy val Macros = Project(id="IScala-Macros", base=file("macros"), settings=macrosSettings)
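For reference, the "two lines" the macro paradise documentation refers to look roughly like this; the coordinates and version are assumptions based on the snapshot era of the plugin, so check the docs for the exact ones:
resolvers += Resolver.sonatypeRepo("snapshots")
addCompilerPlugin("org.scalamacros" % "paradise" % "2.0.0-SNAPSHOT" cross CrossVersion.full)
Note cross CrossVersion.full, which appends the full Scala version (e.g. _2.10.4) to the artifact name instead of hard-coding it as in macro-paradise_2.10.4-SNAPSHOT above.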
I'm using xsbt-proguard-plugin, which is an SBT plugin for working with Proguard.
I'm trying to come up with a Proguard configuration for a Hive Deserializer I've written, which has the following dependencies:
// project/Dependencies.scala
val hadoop = "org.apache.hadoop" % "hadoop-core" % V.hadoop
val hive = "org.apache.hive" % "hive-common" % V.hive
val serde = "org.apache.hive" % "hive-serde" % V.hive
val httpClient = "org.apache.httpcomponents" % "httpclient" % V.http
val logging = "commons-logging" % "commons-logging" % V.logging
val specs2 = "org.specs2" %% "specs2" % V.specs2 % "test"
Plus an unmanaged dependency:
// lib/UserAgentUtils-1.6.jar
Because most of these are either for local unit testing or are available within a Hadoop/Hive environment anyway, I want my minified jarfile to only include:
The Java classes SnowPlowEventDeserializer.class and SnowPlowEventStruct.class
org.apache.httpcomponents.httpclient
commons-logging
lib/UserAgentUtils-1.6.jar
But I'm really struggling to get the syntax right. Should I start from a whitelist of classes I want to keep, or explicitly filter out the Hadoop/Hive/Serde/Specs2 libraries? I'm aware of this SO question but it doesn't seem to apply here.
If I initially try the whitelist approach:
// Should be equivalent to sbt> package
import ProguardPlugin._
lazy val proguard = proguardSettings ++ Seq(
  proguardLibraryJars := Nil,
  proguardOptions := Seq(
    "-keepattributes *Annotation*,EnclosingMethod",
    "-dontskipnonpubliclibraryclassmembers",
    "-dontoptimize",
    "-dontshrink",
    "-keep class com.snowplowanalytics.snowplow.hadoop.hive.SnowPlowEventDeserializer",
    "-keep class com.snowplowanalytics.snowplow.hadoop.hive.SnowPlowEventStruct"
  )
)
Then I get a Hadoop processing error, so clearly Proguard is still trying to bundle Hadoop:
proguard: java.lang.IllegalArgumentException: Can't find common super class of [[Lorg/apache/hadoop/fs/FileStatus;] and [[Lorg/apache/hadoop/fs/s3/Block;]
Meanwhile if I try Proguard's filtering syntax to build up the blacklist of libraries I don't want to include:
import ProguardPlugin._
lazy val proguard = proguardSettings ++ Seq(
  proguardLibraryJars := Nil,
  proguardOptions := Seq(
    "-keepattributes *Annotation*,EnclosingMethod",
    "-dontskipnonpubliclibraryclassmembers",
    "-dontoptimize",
    "-dontshrink",
    "-injars !*hadoop*.jar"
  )
)
Then this doesn't seem to work either:
proguard: java.io.IOException: Can't read [/home/dev/snowplow-log-deserializers/!*hadoop*.jar] (No such file or directory)
Any help greatly appreciated!
The whitelist is the proper approach: ProGuard should get a complete context, so it can properly shake out classes, fields, and methods that are not needed.
The error "Can't find common super class" suggests that some library is still missing from the input. ProGuard probably warned about it, but the configuration appears to contain the option -ignorewarnings or -dontwarn (which should be avoided). You should add the library with -injars or -libraryjars.
If ProGuard then includes some classes that you weren't expecting in the output, you can get an explanation with "-whyareyoukeeping class somepackage.SomeUnexpectedClass".
Starting from a working configuration, you can still try to filter out classes or entire jars from the input. Filters are added to items in a class path though, not on their own, e.g. "-injars some.jar(!somepackage/**.class)" (cf. the manual). This can be useful if the input contains test classes that drag in other unwanted classes.
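Applied to the Hadoop case above, that would mean passing Hadoop as a library jar rather than trying to filter it out of the inputs; a sketch with hypothetical paths and class names:
proguardOptions ++= Seq(
  // Let ProGuard resolve Hadoop classes without bundling them (path is hypothetical):
  "-libraryjars lib/hadoop-core-1.0.4.jar",
  // Ask ProGuard to explain why an unexpected class ends up in the output:
  "-whyareyoukeeping class com.snowplowanalytics.snowplow.hadoop.hive.SomeUnexpectedClass"
)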
In the end, I couldn't get past duplicate class errors using ProGuard, let alone figure out how to filter out the relevant jars, so I finally switched to a much cleaner sbt-assembly approach:
1. Added the sbt-assembly plugin to my project as per the README
2. Updated the appropriate project dependencies with a "provided" flag to stop them being added into my fat jar:
val hadoop = "org.apache.hadoop" % "hadoop-core" % V.hadoop % "provided"
val hive = "org.apache.hive" % "hive-common" % V.hive % "provided"
val serde = "org.apache.hive" % "hive-serde" % V.hive % "provided"
val httpClient = "org.apache.httpcomponents" % "httpclient" % V.http
val httpCore = "org.apache.httpcomponents" % "httpcore" % V.http
val logging = "commons-logging" % "commons-logging" % V.logging % "provided"
val specs2 = "org.specs2" %% "specs2" % V.specs2 % "test"
3. Added an sbt-assembly configuration like so:
import sbtassembly.Plugin._
import AssemblyKeys._
lazy val sbtAssemblySettings = assemblySettings ++ Seq(
  assembleArtifact in packageScala := false,
  jarName in assembly <<= (name, version) { (name, version) => name + "-" + version + ".jar" },
  mergeStrategy in assembly <<= (mergeStrategy in assembly) {
    (old) => {
      case "META-INF/NOTICE.txt"  => MergeStrategy.discard
      case "META-INF/LICENSE.txt" => MergeStrategy.discard
      case x => old(x)
    }
  }
)
Then typing assembly produced a "fat jar" with just the packages I needed in it, including the unmanaged dependency and excluding Hadoop/Hive etc.
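On current sbt and sbt-assembly versions, the same configuration would use .value and slash syntax instead of <<=; a sketch against the sbt-assembly 1.x key names:
lazy val sbtAssemblySettings = Seq(
  // Don't bundle the Scala library into the fat jar:
  assemblyPackageScala / assembleArtifact := false,
  assembly / assemblyJarName := s"${name.value}-${version.value}.jar",
  assembly / assemblyMergeStrategy := {
    case "META-INF/NOTICE.txt"  => MergeStrategy.discard
    case "META-INF/LICENSE.txt" => MergeStrategy.discard
    case x =>
      val oldStrategy = (assembly / assemblyMergeStrategy).value
      oldStrategy(x)
  }
)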