How to exclude package from Scala SBT - scala

I am at a loss. I am using Scala with SBT, and I had to add the Databricks dependencies (the jar dependencies I get when I install databricks-connect) via unmanagedBase in build.sbt. Now when I try to run the application I get the error org.slf4j.impl.Log4jLoggerFactory cannot be cast to ch.qos.logback.classic.LoggerContext. I have tried excluding packages, but nothing helps; maybe I am doing it wrong. Could someone please help me? Which package should I exclude to make this work?

To understand what is happening (and to prevent this in the future), you probably need to read and understand the SLF4J user manual.
The Simple Logging Facade for Java (SLF4J) serves as a simple facade or abstraction for various logging frameworks, such as java.util.logging, logback and reload4j.
In the section Binding with a logging framework at deployment time, it says:
slf4j-log4j12-1.7.35.jar
Binding/provider for log4j version 1.2, a widely used logging framework.
logback-classic-1.2.10.jar (requires logback-core-1.2.10.jar)
There are also SLF4J bindings/providers external to the SLF4J project, e.g. logback which implements SLF4J natively.
First you'd have to decide whether you want log4j or logback as the logger implementation. The screenshot doesn't show org.slf4j:slf4j-log4j12:something, but is it included?
Assuming you want logback, then the log4j binding (slf4j-log4j12) is the one you need to exclude.
To exclude something from all dependencies in a subproject:
excludeDependencies ++= Seq(
  ExclusionRule("org.slf4j", "slf4j-log4j12")
)
For more details, see Exclude Transitive Dependencies.
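If the binding is pulled in by a single library, you can also exclude it from just that dependency; the coordinates below are placeholders for whichever library drags it in:

```scala
// Exclude the slf4j-log4j12 binding from one (hypothetical) dependency
// instead of from the whole subproject:
libraryDependencies += ("com.example" %% "some-library" % "1.0.0")
  .exclude("org.slf4j", "slf4j-log4j12")
```

Note that neither form helps for jars placed under unmanagedBase: unmanaged jars bypass dependency resolution entirely, so any conflicting logging jar there has to be removed from the directory by hand.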

Related

SBT: modify the order of dependencies in the classpath

I'm currently experiencing a problem with Specs2 + SBT where my tests always fail via command-line because of dependency order in the classpath. Specs2 requires that the Mockito jars come after the Specs2 jars so that Mockito classes can be overridden to fix issues with by-name scala method parameters (see this issue for more information: https://github.com/etorreborre/specs2/issues/428).
In IntelliJ, I can order my dependencies via the Project Structure/Modules/Dependencies window, which fixes my tests when run inside IntelliJ. However, I have not found a way to fix this issue when running my tests on the command line via sbt test.
Does anyone know if it is possible to change the classpath order of dependencies for SBT using settings in build.sbt (or similar)?
To my knowledge, you need to make sure that specs2-mock comes before mockito in your libraryDependencies setting.
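As a sketch in build.sbt (the version numbers are illustrative, not taken from the question):

```scala
// Declaration order matters here: specs2-mock comes before mockito-core,
// so its classes appear first on the test classpath.
libraryDependencies ++= Seq(
  "org.specs2" %% "specs2-mock"  % "2.4.15" % "test",
  "org.mockito" %  "mockito-core" % "1.9.5"  % "test"
)
```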

How to find unused sbt dependencies?

My build.sbt has a lot of dependencies now. How do I know which dependencies are actually being used?
Maven seems to have dependency:analyse http://maven.apache.org/plugins/maven-dependency-plugin/
Is there something similar for sbt?
There is the sbt-explicit-dependencies plugin, which has been developed recently. It has direct commands in the SBT console to:
Enforce explicit direct declaration of dependencies, thus disallowing transitive dependencies.
Detect and remove unneeded dependencies.
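To try it, add the plugin to project/plugins.sbt (the version below is illustrative; check the plugin's README for the current one):

```scala
// project/plugins.sbt
addSbtPlugin("com.github.cb372" % "sbt-explicit-dependencies" % "0.2.16")
```

It then provides tasks such as unusedCompileDependencies and undeclaredCompileDependencies in the sbt console.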
You can use the sbt-dependency-graph plugin; it shows dependencies in different graphical representations. You can also try Tattletale, but it's not integrated with sbt: it requires you to copy the managed dependencies locally (retrieveManaged := true). That tool not only shows the dependency graph but also analyzes class usage and can display unused dependencies (including transitive ones).
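Installing sbt-dependency-graph is likewise a one-liner in project/plugins.sbt (version illustrative):

```scala
// project/plugins.sbt
addSbtPlugin("net.virtual-void" % "sbt-dependency-graph" % "0.9.2")
```

After reloading, sbt dependencyTree prints the resolved dependency tree in the console.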

SBT Scaladoc Configuration

I'm trying to configure the Scaladoc in SBT, specifically the title, output directory and classpath.
I managed to define the title by adding the following to build.sbt:
scalacOptions in (Compile, doc) ++= Opts.doc.title("Scala-Tools")
I can't figure out how to change the doc output directory.
I also can't figure out how to add jars to classpath. The reason I want to edit the classpath is because it appears the standard Scala library is not getting picked up by scaladoc when I refer to its classes, i.e. [[scala.Option]] leads to a warning "Could not find any member to link for "scala.Option"."
Any help, even in the form of an example SBT configuration would be appreciated!
I'm using Scala 2.10-RC3 and SBT 0.12.1.
The Scala library is on the classpath, otherwise scaladoc would bail out with an error pretty quickly. The warning you see means that scaladoc doesn't know how to link to Option. For this, you need to use either the -external-urls option or the -doc-external-doc option coming in 2.10.1. The output of scaladoc -help for the upcoming 2.10.1 shows:
-doc-external-doc:<external-doc> comma-separated list of classpath_entry_path#doc_URL pairs describing external dependencies.
-external-urls:<externalUrl(s)> (deprecated) comma-separated list of package_names=doc_URL for external dependencies, where package names are ':'-separated
The solution until 2.10.1 is out is to use -external-urls:
-external-urls:scala=http://www.scala-lang.org/archives/downloads/distrib/files/nightly/docs/library/
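Put together in build.sbt (sbt 0.12 syntax), that looks like the following. The target in (Compile, doc) line for the output directory is an assumption about the relevant setting key, so verify it in your sbt version with inspect doc:

```scala
// Link scala.* references (e.g. [[scala.Option]]) to the hosted library docs:
scalacOptions in (Compile, doc) +=
  "-external-urls:scala=http://www.scala-lang.org/archives/downloads/distrib/files/nightly/docs/library/"

// Assumed setting for redirecting scaladoc output (verify with `inspect doc`):
target in (Compile, doc) := file("target/custom-api")
```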

Why can't Scala find org.apache.commons.lang package?

I want to use org.apache.commons.lang.NotImplementedException as it seems to be the only NotImplementedException implementation in Java/Scala domain. I can remember I used to use it with Scala 2.8.1 with no hacks. But now it says "object lang is not a member of package org.apache.commons". Where has org.apache.commons.lang gone?
I've just found the answer myself. The problem is that Apache Commons 3 no longer includes lang (it ships lang3 instead, which is different and doesn't contain NotImplementedException), so we need Apache Commons 2.6. And what's non-obvious here is that its Maven group id is not org.apache.commons but commons-lang, the same as its artifact id.
So I had to add "commons-lang" % "commons-lang" % "2.6" dependency and do sbt update to make it work.
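In build.sbt form:

```scala
// Note the group id matches the artifact id, not org.apache.commons:
libraryDependencies += "commons-lang" % "commons-lang" % "2.6"
```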

How can I get HBase to play nicely with sbt's dependency management?

I'm trying to get an sbt project going which uses CDH3's Hadoop and HBase. I'm trying to use a project/build/Project.scala file to declare dependencies on HBase and Hadoop. (I'll admit my grasp of sbt, maven, and ivy is a little weak. Please pardon me if I'm saying or doing something dumb.)
Everything went swimmingly with the Hadoop dependency. Adding the HBase dependency resulted in a dependency on Thrift 0.2.0, for which there doesn't appear to be a repo, or so it sounds from this SO post.
So, really, I have two questions:
1. Honestly, I don't want a dependency on Thrift because I don't want to use HBase's Thrift interface. Is there a way to tell sbt to skip it?
2. Is there some better way to set this up? Should I just dump the HBase jar in the lib directory and move on?
Update This is the sbt 0.10 build.sbt file that accomplished what I wanted:
scalaVersion := "2.9.0-1"
resolvers += "ClouderaRepo" at "https://repository.cloudera.com/content/repositories/releases"
libraryDependencies ++= Seq(
  "org.apache.hadoop" % "hadoop-core" % "0.20.2-cdh3u0",
  "org.apache.hbase" % "hbase" % "0.90.1-cdh3u0"
)

ivyXML :=
  <dependencies>
    <exclude module="thrift"/>
  </dependencies>
Looking at the HBase POM file, Thrift is in the repo at http://people.apache.org/~rawson/repo. You can add that to your project, and it should find Thrift. I thought that SBT would have figured that out, but this is an intersection of SBT, Ivy and Maven, so who can really say what really should happen.
If you really don't need Thrift, you can exclude dependencies using inline Ivy XML, as documented on the SBT wiki.
override def ivyXML =
  <dependencies>
    <exclude module="thrift"/>
  </dependencies>
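In later sbt versions (0.10+ build.sbt), a similar module-only exclusion can be expressed without inline Ivy XML. Using ExclusionRule with only a name, so that it matches any organization, is an assumption to mirror the Ivy XML above; double-check the resolved tree afterwards:

```scala
// Excludes any artifact named "thrift" regardless of organization,
// mirroring <exclude module="thrift"/> from the Ivy XML:
libraryDependencies += ("org.apache.hbase" % "hbase" % "0.90.1-cdh3u0")
  .excludeAll(ExclusionRule(name = "thrift"))
```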
Re: dumping the jar in the lib directory, that would be a short-term gain, long-term loss. It's certainly more expedient, and if this is some proof of concept you're throwing away next week, sure, just drop in the jar and forget about it. But for any project with a lifespan greater than a couple of months, it's worth spending the time to get dependency management right.
While all of these tools have their challenges, the benefits are:
Dependency analysis can tell you when your direct dependencies have conflicting transitive dependencies. Before these tools, this usually resulted in weird runtime behavior or method not found exceptions.
Upgrades are super-simple. Just change the version number, update, and you're done.
It avoids having to commit binaries to version control. They can be problematic when it comes time to merge branches.
Unless you have an explicit policy of how you version the binaries in your lib directory, it's easy to lose track of what versions you have.
I have a very simple example of an sbt project w/ Hadoop on github: https://github.com/deanwampler/scala-hadoop.
Look in project/build/WordCountProject.scala, where I define a variable named ClouderaMavenRepo, which defines the Cloudera repository location, and the variable named hadoopCore, which defines the specific information for the Hadoop jar.
If you go to the Cloudera repo in a browser, you should be able to navigate to the corresponding information for Hive.