What does "str" % "str" mean in SBT? - scala

I came across this code:
import sbt._
class AProject(info: ProjectInfo) extends DefaultProject(info) {
  val scalaToolsSnapshots = ScalaToolsSnapshots
  val scalatest = "org.scalatest" % "scalatest" %
    "1.0.1-for-scala-2.8.0.RC1-SNAPSHOT"
}
And I'm quite confused as to what scalatest contains, and what the % does.

It declares a dependency. In particular,
val scalatest = "org.scalatest" % "scalatest" % "1.0.1-for-scala-2.8.0.RC1-SNAPSHOT
refers to a dependency which can be found at
http://scala-tools.org/repo-snapshots/org/scalatest/scalatest/1.0.1-for-scala-2.8.0.RC1-SNAPSHOT/
where everything before org refers to the repository, which is (pre-)defined elsewhere.
It is not easy to find the implicit that enables % on String, but, for the record, it is found on ManagedProject, converting a String into a GroupID. In the same trait there's also another implicit which enables the at method.
At any rate, the implicit will turn the first String into a GroupID, the first % will take a String representing the artifact ID and return a GroupArtifactID, and the second will take a String representing the revision and return a ModuleID, which is what finally gets assigned to scalatest.
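For illustration, here is a simplified model of how such a DSL can be built. This is a sketch only, not sbt's actual source; the real GroupID, GroupArtifactID and ModuleID have many more members:
object DependencyDsl {
  import scala.language.implicitConversions

  case class GroupID(groupId: String) {
    def %(artifactId: String): GroupArtifactID = GroupArtifactID(groupId, artifactId)
  }
  case class GroupArtifactID(groupId: String, artifactId: String) {
    def %(revision: String): ModuleID = ModuleID(groupId, artifactId, revision)
  }
  case class ModuleID(groupId: String, artifactId: String, revision: String)

  // The implicit conversion that puts % on a plain String:
  implicit def stringToGroupID(groupId: String): GroupID = GroupID(groupId)
}

// "org.scalatest" % "scalatest" % "1.0.1-..." then desugars to roughly:
//   stringToGroupID("org.scalatest") % "scalatest" % "1.0.1-..."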

If you've used Maven, this is essentially the same thing, but with a Scala DSL; % works as a separator:
<dependency>
  <groupId>org.scalatest</groupId>
  <artifactId>scalatest</artifactId>
  <version>1.0.1-for-scala-2.8.0.RC1-SNAPSHOT</version>
</dependency>
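For comparison, in a newer sbt build the same dependency and repository would be declared directly in build.sbt, roughly like this (a sketch using the question's old version and the answer's repository URL; note that at is the other implicit-enabled method mentioned above):
// Roughly equivalent build.sbt declaration; a current project would use newer values.
resolvers += "Scala Tools Snapshots" at "http://scala-tools.org/repo-snapshots/"

libraryDependencies +=
  "org.scalatest" % "scalatest" % "1.0.1-for-scala-2.8.0.RC1-SNAPSHOT"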
Read more:
http://code.google.com/p/simple-build-tool/wiki/LibraryManagement

Related

How to exclude logging (like logback-classic) from jar published by sbt

My Scala project has a libraryDependency on slf4j because I use the API for logging. I also want to see the logging output while running from sbt or IntelliJ, both for the Apps that runMain and the unit tests that testOnly from sbt. Therefore there is also a libraryDependency on logback-classic. However, I do not want that second dependency published because of the convention stated below.
When someone uses my published library, the transitive dependency should not be automatically brought in. How should that be done? I don't want to explain to the user how to manually exclude the transitive dependency, because they might be using any number of different tools. The logback-classic should continue to be included in an assembled jar, however, if at all possible. It doesn't seem like exclude() is the answer.
"Embedded components such as libraries or frameworks should not declare a dependency on any SLF4J binding/provider [like logback-classic] but only depend on slf4j-api. When a library declares a transitive dependency on a specific binding, that binding is imposed on the end-user negating the purpose of SLF4J. Note that declaring a non-transitive dependency on a binding, for example for testing, does not affect the end-user."
Publish the jar with slf4j-api but use the sbt Test configuration for logback. Unit tests will then have a concrete implementation but it won't be packaged in your artifact.
libraryDependencies ++= Seq(
  "org.slf4j" % "slf4j-api" % "1.7.36",
  "ch.qos.logback" % "logback-classic" % "1.2.11" % Test
)
Here is how it would look in a build with sub-projects. The sample app uses a concrete logging implementation, but the library itself does not; anyone using the library would provide their own (see the sketch after the build definition).
lazy val root = (project in file("."))
  .settings(
    publish / skip := true,
  )
  .aggregate(sampleApp, theLibrary)

lazy val sampleApp = project
  .settings(
    publish / skip := true,
    libraryDependencies ++= Seq(
      "ch.qos.logback" % "logback-classic" % "1.2.11"
    )
  )
  .dependsOn(theLibrary % "test->test;compile->compile")

lazy val theLibrary = project
  .settings(
    libraryDependencies ++= Seq(
      "org.slf4j" % "slf4j-api" % "1.7.36",
      "ch.qos.logback" % "logback-classic" % "1.2.11" % Test
    )
  )
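A downstream build that depends on the published library then picks its own binding; a sketch with hypothetical coordinates:
// Hypothetical consumer build.sbt: the library only pulls in slf4j-api,
// so the consumer chooses whatever SLF4J binding it wants.
libraryDependencies ++= Seq(
  "your.org"       %% "the-library"     % "1.0.0",   // hypothetical coordinates
  "ch.qos.logback" %  "logback-classic" % "1.2.11"   // or any other binding
)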
My tentative solution is to add this code to an sbt file
import scala.xml.Node
import scala.xml.transform.RuleTransformer
import org.clulab.sbt.{DependencyFilter, DependencyId}

ThisBuild / pomPostProcess := {
  val logback = DependencyId("ch.qos.logback", "logback-classic")
  val rule = DependencyFilter { dependencyId =>
    dependencyId != logback
  }
  (node: Node) => new RuleTransformer(rule).transform(node).head
}
and back it up with this Scala code in the project directory
package org.clulab.sbt

import scala.xml.Node
import scala.xml.NodeSeq
import scala.xml.transform.RewriteRule

case class DependencyId(groupId: String, artifactId: String)

abstract class DependencyTransformer extends RewriteRule {
  override def transform(node: Node): NodeSeq = {
    val name = node.nameToString(new StringBuilder()).toString()
    name match {
      case "dependency" =>
        val groupId = (node \ "groupId").text.trim
        val artifactId = (node \ "artifactId").text.trim
        transform(node, DependencyId(groupId, artifactId))
      case _ => node
    }
  }

  def transform(node: Node, dependencyId: DependencyId): NodeSeq
}

class DependencyFilter(filter: DependencyId => Boolean) extends DependencyTransformer {
  def transform(node: Node, dependencyId: DependencyId): NodeSeq =
    if (filter(dependencyId)) node
    else Nil
}

object DependencyFilter {
  def apply(filter: DependencyId => Boolean): DependencyFilter = new DependencyFilter(filter)
}
I'm still hoping to find a similar solution for editing ivy.xml.
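To sanity-check the rewrite rule, it can be applied to a small POM fragment; this is a hypothetical snippet, not part of the original answer:
import scala.xml.transform.RuleTransformer
import org.clulab.sbt.{DependencyFilter, DependencyId}

// Hypothetical check: the logback-classic node should be removed, slf4j-api kept.
val pom =
  <dependencies>
    <dependency><groupId>org.slf4j</groupId><artifactId>slf4j-api</artifactId><version>1.7.36</version></dependency>
    <dependency><groupId>ch.qos.logback</groupId><artifactId>logback-classic</artifactId><version>1.2.11</version></dependency>
  </dependencies>

val rule = DependencyFilter(_ != DependencyId("ch.qos.logback", "logback-classic"))
val filtered = new RuleTransformer(rule).transform(pom).head

assert((filtered \ "dependency").size == 1)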

SBT Plugin: Where is %%% defined?

I have an SBT plugin which will auto-generate some Scala.js code just before compile time. This code depends on a library which I would like to automatically include when the plugin is enabled.
This compiles and runs, but does not get the Scala.js version of the library:
import sbt._
import Keys.libraryDependencies
object MyPlugin extends AutoPlugin {
  object autoImport {
    lazy val baseSettings: Seq[Def.Setting[_]] = Seq(
      libraryDependencies += "my.lib" %% "library" % "0.1.0"
    )
  }
  import autoImport._
  override lazy val projectSettings = baseSettings
}
When I try to use "my.lib" %%% "library" % "0.1.0", I get:
value %%% is not a member of String
I feel like I'm probably missing an import, but I can't find where this is supposed to be defined.
%%% is defined by the sbt-platform-deps plugin.
Unless your sbt plugin already depends on sbt-scalajs, you'll need to add a dependency on it in your plugin project's settings:
addSbtPlugin("org.portable-scala" % "sbt-platform-deps" % "1.0.0")
The following import will bring it in scope:
import org.portablescala.sbtplatformdeps.PlatformDepsPlugin.autoImport._
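Putting the two together, the plugin from the question could look roughly like this once sbt-platform-deps is on its classpath (a sketch; the library coordinates are the question's placeholders):
import sbt._
import Keys.libraryDependencies
import org.portablescala.sbtplatformdeps.PlatformDepsPlugin.autoImport._

object MyPlugin extends AutoPlugin {
  object autoImport {
    lazy val baseSettings: Seq[Def.Setting[_]] = Seq(
      // %%% picks the platform-appropriate artifact (JVM vs Scala.js)
      libraryDependencies += "my.lib" %%% "library" % "0.1.0"
    )
  }
  import autoImport._
  override lazy val projectSettings = baseSettings
}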
addSbtPlugin("com.lightbend.lagom" % "lagom-sbt-plugin" % "X.Y.Z") // replace 'X.Y.Z' with your preferred version (e.g. '1.2.0-RC2').
You can refer to this one
https://www.lagomframework.com/documentation/1.6.x/java/LagomBuild.html

Why do I get this compilation error: "could not find implicit value for kstream.Consumed" and how could I fix it?

We have these dependencies:
libraryDependencies += "org.apache.kafka" %% "kafka-streams-scala" % kafkaVersion
libraryDependencies += "io.confluent" % "kafka-streams-avro-serde" % confluentVersion
libraryDependencies += "io.confluent" % "kafka-schema-registry-client" % confluentVersion
libraryDependencies += "ch.qos.logback" % "logback-classic" % "1.2.3"
libraryDependencies += "com.typesafe" % "config" % "1.4.0"
libraryDependencies += "com.sksamuel.avro4s" %% "avro4s-core" % "3.0.4"
We use a code generator to generate Scala case classes out of AVRO schema files. One such generated case class has, as one of its fields, an Either value. In the AVRO schema this is expressed with type=[t1,t2], so the generation seems reasonable: it is a sum type that can be either type t1 or type t2.
The question becomes what is missing on the deserialization path from topic to case class (binary -> Avro Map -> case class).
Basically I am getting this error currently:
could not find implicit value for parameter consumed: org.apache.kafka.streams.scala.kstream.Consumed[String, custom.UserEvent]
[error] .stream[String, UserEvent]("schma.avsc")
The first thought was kafka-streams-avro-serde, but it may be that this library only ensures a Serde[GenericRecord] for the AVRO Map, not for case classes. So one of the other dependencies must be helping with the mapping from AVRO GenericRecord to case classes and back. We also have some hand-written code that generates case classes out of schemas, which seems to work directly with spray-json.
I'm thinking that in the (binary <-> Avro GenericRecord <-> case class instance) transformations there is a gap, possibly because the case class has an Either field.
I'm now trying to create a Serde[UserEvent] instance. In my understanding, that would involve converting between UserEvent and an AVRO GenericRecord, similar to Map, and then between the AVRO record and binary, which is likely covered by the kafka-streams-avro-serde dependency, since it should provide a Serde[GenericRecord] or similar.
Imports wise, we have this to import implicits:
import org.apache.kafka.common.serialization.Serde
import org.apache.kafka.streams.Topology
import org.apache.kafka.streams.scala.ImplicitConversions._
import org.apache.kafka.streams.scala.Serdes
import org.apache.kafka.streams.scala.Serdes._
import org.apache.kafka.streams.scala.kstream.Consumed
In fact, an import was missing.
Now it compiles.
Here are the imports:
import org.apache.kafka.streams.Topology
import org.apache.kafka.streams.scala.ImplicitConversions._
import org.apache.kafka.streams.scala.Serdes._
Did you import the corresponding package?
import org.apache.kafka.streams.scala.ImplicitConversions._
Cf. https://kafka.apache.org/24/documentation/streams/developer-guide/dsl-api.html#scala-dsl
For me, I had to follow the directions more carefully and add an implicit Serde implementation. Their example in the link looks like this:
// An implicit Serde implementation for the values we want to
// serialize as avro
implicit val userClicksSerde: Serde[UserClicks] = new AvroSerde
For a fuller example, see the scala tests for their avro lib:
// Make an implicit serde available for GenericRecord, which is required for operations such as `to()` below.
implicit val genericAvroSerde: Serde[GenericRecord] = {
  val gas = new GenericAvroSerde
  val isKeySerde: Boolean = false
  gas.configure(Collections.singletonMap(AbstractKafkaAvroSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG, cluster.schemaRegistryUrl), isKeySerde)
  gas
}
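If you do end up needing a Serde for the case class itself, one possible approach (not from the original answers) is to compose Confluent's GenericAvroSerde with avro4s' RecordFormat. The helper below is a sketch; UserEvent and the registry URL are assumptions, and avro4s' API may differ by version:
import java.util.Collections
import org.apache.kafka.common.serialization.{Serde, Serdes}
import io.confluent.kafka.streams.serdes.avro.GenericAvroSerde
import com.sksamuel.avro4s.RecordFormat

// Hypothetical helper: bridges the GenericRecord serde and a case class via avro4s.
def caseClassSerde[T](format: RecordFormat[T], schemaRegistryUrl: String, isKey: Boolean = false): Serde[T] = {
  val generic = new GenericAvroSerde
  generic.configure(Collections.singletonMap("schema.registry.url", schemaRegistryUrl), isKey)
  Serdes.serdeFrom[T](
    (topic: String, value: T) => generic.serializer.serialize(topic, format.to(value)),
    (topic: String, bytes: Array[Byte]) => format.from(generic.deserializer.deserialize(topic, bytes))
  )
}

// e.g. implicit val userEventSerde: Serde[UserEvent] =
//   caseClassSerde(RecordFormat[UserEvent], "http://localhost:8081")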

Scala enumeration serialization in jersey/jackson is not working for me

I've read the jackson-module-scala page on enumeration handling (https://github.com/FasterXML/jackson-module-scala/wiki/Enumerations). Still I'm not getting it to work. The essential code goes like this:
#Path("/v1/admin")
#Produces(Array(MediaType.APPLICATION_JSON + ";charset=utf-8"))
#Consumes(Array(MediaType.APPLICATION_JSON + ";charset=utf-8"))
class RestService {
#POST
#Path("{type}/abort")
def abortUpload(#PathParam("type") typeName: ResourceTypeHolder) {
...
}
}
object ResourceType extends Enumeration {
type ResourceType = Value
val ssr, roadsegments, tmc, gab, tne = Value
}
class ResourceTypeType extends TypeReference[ResourceType.type]
case class ResourceTypeHolder(
#JsonScalaEnumeration(classOf[ResourceTypeType])
resourceType:ResourceType.ResourceType
)
This is how it's supposed to work, right? Still I get these errors:
Following issues have been detected:
WARNING: No injection source found for a parameter of type public void no.tull.RestService.abortUpload(no.tull.ResourceTypeHolder) at index 0.
unavailable
org.glassfish.jersey.server.model.ModelValidationException: Validation of the application resource model has failed during application initialization.
[[FATAL] No injection source found for a parameter of type public void no.tull.RestService.abortUpload(no.tull.ResourceTypeHolder) at index 0.; source='ResourceMethod{httpMethod=POST, consumedTypes=[application/json; charset=utf-8], producedTypes=[application/json; charset=utf-8], suspended=false, suspendTimeout=0, suspendTimeoutUnit=MILLISECONDS, invocable=Invocable{handler=ClassBasedMethodHandler{handlerClass=class no.tull.RestService, handlerConstructors=[org.glassfish.jersey.server.model.HandlerConstructor#7ffe609f]}, definitionMethod=public void no.tull.RestService.abortUpload(no.tull.ResourceTypeHolder), parameters=[Parameter [type=class no.tull.ResourceTypeHolder, source=type, defaultValue=null]], responseType=void}, nameBindings=[]}']
at org.glassfish.jersey.server.ApplicationHandler.initialize(ApplicationHandler.java:467)
at org.glassfish.jersey.server.ApplicationHandler.access$500(ApplicationHandler.java:163)
at org.glassfish.jersey.server.ApplicationHandler$3.run(ApplicationHandler.java:323)
at org.glassfish.jersey.internal.Errors$2.call(Errors.java:289)
at org.glassfish.jersey.internal.Errors$2.call(Errors.java:286)
I have also assembled a tiny runnable project (while trying to eliminate any other complications) that demonstrates the problem: project.tgz
Update: I created an sbt build to see whether Gradle was producing a strange build. I got the same result; this is the build.sbt:
name := "project"
version := "1.0"
scalaVersion := "2.10.4"
val jacksonVersion = "2.4.1"
val jerseyVersion = "2.13"
libraryDependencies ++= Seq(
  "com.fasterxml.jackson.core" % "jackson-annotations" % jacksonVersion,
  "com.fasterxml.jackson.core" % "jackson-databind" % jacksonVersion,
  "com.fasterxml.jackson.jaxrs" % "jackson-jaxrs-json-provider" % jacksonVersion,
  "com.fasterxml.jackson.jaxrs" % "jackson-jaxrs-base" % jacksonVersion,
  "com.fasterxml.jackson.module" % "jackson-module-scala_2.10" % jacksonVersion,
  "org.glassfish.jersey.containers" % "jersey-container-servlet-core" % jerseyVersion
)
seq(webSettings :_*)
libraryDependencies ++= Seq(
  "org.eclipse.jetty" % "jetty-webapp" % "9.1.0.v20131115" % "container",
  "org.eclipse.jetty" % "jetty-plus" % "9.1.0.v20131115" % "container"
)
... and this is the project/plugins.sbt:
addSbtPlugin("com.earldouglas" % "xsbt-web-plugin" % "0.9.0")
You seem to have a few problems with your tarball.
You need to add some Scala modules to Jackson to be able to use any Scala functionality. That can be done like this:
val jsonObjectMapper = new ObjectMapper()
jsonObjectMapper.registerModule(DefaultScalaModule)
val jsonProvider: JacksonJsonProvider = new JacksonJsonProvider(jsonObjectMapper)
This follows this working jersey-jackson example. You also need to inject org.glassfish.jersey.jackson.JacksonFeature into Jersey, which is found in jersey-media-json-jackson. My RestApplication.scala came out like this:
import javax.ws.rs.core.Application
import javax.ws.rs.ext.{ContextResolver, Provider}
import com.fasterxml.jackson.databind.ObjectMapper
import com.fasterxml.jackson.module.scala.DefaultScalaModule
import com.google.common.collect.ImmutableSet
import org.glassfish.jersey.jackson.JacksonFeature
@Provider
class ObjectMapperProvider extends ContextResolver[ObjectMapper] {
  val defaultObjectMapper = {
    val jsonObjectMapper = new ObjectMapper()
    jsonObjectMapper.registerModule(DefaultScalaModule)
    jsonObjectMapper
  }
  override def getContext(typ: Class[_]): ObjectMapper = {
    defaultObjectMapper
  }
}

class RestApplication extends Application {
  override def getSingletons: java.util.Set[AnyRef] = {
    ImmutableSet.of(
      new RestService,
      new ObjectMapperProvider,
      new JacksonFeature
    )
  }
}
The real issue, though, is the @PathParam annotation. This code path doesn't invoke Jackson at all. However, what's interesting is that Jersey appears to generically support parsing to any type that has a constructor of a single string. So if you modify your ResourceTypeHolder you can get the functionality you want after all.
case class ResourceTypeHolder(@JsonScalaEnumeration(classOf[ResourceTypeType]) resourceType: ResourceType.ResourceType) {
  def this(name: String) = this(ResourceType.withName(name))
}
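To illustrate with a hypothetical value, a request to /v1/admin/ssr/abort now lets Jersey build the parameter from the raw path segment via that single-String constructor:
// What Jersey effectively does with the {type} path segment:
val holder = new ResourceTypeHolder("ssr")  // same value as ResourceTypeHolder(ResourceType.ssr)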
You might be able to add generic support for enum holders to Jersey as an injectable provider. However, that hasn't come up in dropwizard-scala, a project that would suffer the same fate since it uses Jersey too. Thus I imagine it's either impossible, or simply not common enough for anyone to have done the work. When it comes to enums, I tend to keep mine in Java.

How to do unit testing in Scala

I am trying to learn Scala (and also the concept of unit testing).
I have an object
object Foo{
def parse(s:String): Array[String] = {
return s.split(",")
}
}
A very simple code block, but now I want to write a unit test for it.
My code structure is:
src/main/scala/foo.scala
src/test/scala/(empty)
I am using sbt to compile and run.
Thanks
Put this in src/test/scala/FooSpec.scala:
import org.specs2.mutable.Specification

class FooSpec extends Specification {
  "Foo" should {
    "parse a String" in {
      Foo.parse("a,b").toList must beEqualTo(List("a", "b"))
    }
  }
}
Then at the sbt prompt you can run test.
For this to work you will need to add a dependency on specs2 in your build.sbt, as explained in the documentation:
libraryDependencies ++= Seq(
  "org.specs2" %% "specs2" % "2.3.11" % "test"
)
It's a very big topic.
I'm a proponent of Specs2 along with its Mockito and ScalaCheck support. All of these have good documentation, so I recommend you start by looking them up on the Web.
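To give a flavour of the ScalaCheck side of that combination, here is a minimal property for Foo.parse; a sketch that assumes "org.scalacheck" %% "scalacheck" has been added to libraryDependencies with % "test" (sbt runs ScalaCheck Properties directly):
import org.scalacheck.Properties
import org.scalacheck.Prop.forAll

// Property: joining two numbers with a comma and parsing recovers both pieces.
object FooProps extends Properties("Foo") {
  property("parse recovers comma-joined pieces") = forAll { (a: Int, b: Int) =>
    Foo.parse(s"$a,$b").toList == List(a.toString, b.toString)
  }
}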