We have a giant monolith, and I simply want to run a test that only deals with a handful of classes (or files); i.e., the dependency graph, as seen from the test file, is pretty small.
Is it possible to set the compilation mode to on demand or compile just what's needed?
We use sbt.
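For what it's worth, sbt's incremental compiler already tracks the dependency graph and recompiles only sources affected by changes, and a single suite can be run on its own. A sketch, with a hypothetical test class name (the command was spelled test-only before sbt 0.13):

    sbt "testOnly com.example.MySmallTest"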
Related
When deploying a Scala application, we use SBT on Jenkins. Currently our build action is specified as clean assembly (using the Assembly plugin to produce fat JARs). Our build currently takes between 2-3 minutes, which is sensible, but as the project becomes larger and deployments more frequent, it might become a bottleneck.
I remember that when doing C++ deployment with Visual Studio, clean (Rebuild All) was necessary; otherwise builds were sometimes (say 0.1% of the time) broken (most likely because the build missed some changed dependency in headers).
Is this a concern with SBT? Is clean considered a necessary practice to get reliable builds?
My experience is that SBT sometimes gets mixed up; the most common thing I've seen is that it can't find classes that are part of the project (and were not compiled this time around). I haven't had the inclination to really debug it, since doing a clean fixes it every time, but for a CI server I would go with clean every time.
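To make the CI setup above concrete, the conservative Jenkins build step boils down to the following invocation (assembly is provided by the sbt-assembly plugin mentioned in the question):

    sbt clean assembly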
I am new to Scala and have to learn both Scala and SBT. I read the sbt tutorial, but I am unable to understand what sbt is used for.
Even after reading the tutorial I am still confused. Can anyone explain it in simple words, and also suggest a tutorial for the simple build tool?
When you write small programs that consist of only one, or just two or three source files, then it's easy enough to compile those source files by typing scalac MyProgram.scala in the command line.
But when you start working on a bigger project with dozens or maybe even hundreds of source files, then it becomes too tedious to compile all those source files manually. You will then want to use a build tool to manage compiling all those source files.
sbt is such a tool. There are other tools too; well-known build tools from the Java world include Ant and Maven.
How it works is that you create a project file that describes what your project looks like; when you use sbt, this file is called build.sbt. Rather than listing every source file, it holds information about your project (its name, Scala version, library dependencies, and so on), while sbt finds the source files themselves by convention, under src/main/scala by default. Sbt reads the file and then knows what to do to compile the complete project.
Besides managing your project, some build tools, including sbt, can automatically manage dependencies for you. This means that if you need to use some libraries written by others, sbt can automatically download the right versions of those libraries and include them in your project for you.
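To make this concrete, here is a minimal build.sbt sketch; the project name, versions, and the dependency are illustrative, not taken from the question:

    // build.sbt: sbt finds sources under src/main/scala by convention,
    // so only project metadata and dependencies are declared here.
    name := "my-project"

    version := "0.1.0"

    scalaVersion := "2.10.0"

    // a library written by others; sbt downloads it automatically
    libraryDependencies += "org.scalatest" %% "scalatest" % "1.9.1" % "test"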
I'm trying to figure out how sbt works. I read the sbt online documentation and still have an unanswered question.
How does sbt behave when it discovers multiple .scala files in the <root>/project folder, each one containing a Build trait implementation?
I performed an experiment and discovered that sbt handles this situation correctly as long as there is no interference between the build implementations. I call it "interference", but I have no precise picture of which cases are correct and which are not, nor of how multiple Build definitions actually get flattened into a single sbt project.
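To make the scenario concrete, here is a sketch of two independent Build implementations under project/ (all names are hypothetical; the Build trait belongs to the pre-1.x sbt discussed here). As described above, sbt accepts such a layout as long as the definitions don't interfere:

    // project/CoreBuild.scala
    import sbt._

    object CoreBuild extends Build {
      lazy val core = Project(id = "core", base = file("core"))
    }

    // project/UtilBuild.scala
    import sbt._

    object UtilBuild extends Build {
      lazy val util = Project(id = "util", base = file("util"))
    }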
I occasionally play with Scala forks and sometimes need to debug these forks on SBT projects. In general, scalaHome works great, but there are a few things that I'd like to find better ways to achieve.
1) Is it possible to have SBT pick up custom scalac class files produced by the ant quick build rather than jar files emitted by the ant pack build? The latter implies 5-10 seconds of additional delay per build, so it'd be great to avoid it.
2) Even in big projects, problems exhibited by scalac usually manifest themselves when compiling single files. Is there a way to tell sbt to neglect its change tracking heuristics and recompile just a single file? What I would particularly like to prevent is recompilation of the whole world when I recompile scalaHome or change scalac flags.
3) Would it be possible to have sbt hot reload scalac classes coming from scalaHome, when scalaHome gets recompiled? Currently I have to shutdown and restart sbt to apply the changes.
1) No, this would make sbt depend on the details of the Scala build. If Scala were built with sbt, you might be able to depend on Scala as a source dependency or at least this could probably be supported without too many changes.
2) No, see https://github.com/sbt/sbt/issues/604
3) sbt 0.13 should check the last modified times of the jars coming from scalaHome and use a new class loader. It is a bug if it does not.
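For reference, pointing sbt at a locally built compiler is a single setting; a sketch with a hypothetical path (it should be a distribution directory whose lib/ contains scala-library.jar and scala-compiler.jar, i.e. what ant pack produces):

    // build.sbt: compile the project with a locally built Scala distribution
    scalaHome := Some(file("/home/me/scala/build/pack"))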
I am trying to generate some boilerplate with SBT (a tool that is totally new to me). I am using the shapeless sbt files as my main reference for the task. I have seen that this project generates code from scratch, but my case is slightly different, since I would like to generate some classes from other ones. I intend to use the new Scala 2.10.0-M4 reflection capabilities for doing so. What basic configuration is needed to have reflection available from an SBT build?
So far, sbt is unable to find the scala.reflect.runtime.universe package, and I do not know whether the problem comes from the new Scala jar division or from a bad configuration. Besides, my sbt about says:
    [info] This is sbt 0.13.0-20120530-052139
    [info] The current project is {file:/home/jlg/sandbox/abc/}abc
    [info] The current project is built against Scala 2.10.0-SNAPSHOT
    [info]
    [info] sbt, sbt plugins, and build definitions are using Scala 2.9.2
By the way, does anybody know other projects using SBT to generate source code?
Current SBT releases are based on Scala 2.9, and source code generation runs inside SBT, using the same libraries. There are basically two choices:
Be extremely bleeding-edge: get an SBT release running on Scala 2.10 (not even the 0.13 branch does), or wait for one. The biggest problem is not just that you'd have to recompile SBT yourself; it's recompiling every single SBT plugin you'll need for Scala 2.10. In the long term this is maybe the best strategy for what you ask, but it might be a lot of effort for now. However, beware that you cannot use reflection on your compiled code without evil tricks, since code generation is supposed to happen before compilation. If you need to do that, consider instead generating code at compile time within the program using macros. This excludes SBT and is much more standard, but I'm not sure whether you can generate complete classes in this release (I think that is planned for the future).
Go with the old: stick with Scala 2.9 and use scalap's capabilities (ScalaSigParser) for compile-time reflection. The problem is that the API is different (I'm not sure how deeply) and not really supported for public use, although various people have been using it for ages. For a project I'm running, a colleague implemented this approach and I integrated it within SBT (https://github.com/ps-mr/LinqOnSteroids/); on top of that, I use Scalate to write the templates used for code generation, which is quite powerful.
See in particular build.sbt, which invokes project/Generator.scala and project/src/main/scala/ivm/generation/ScalaSigHelpers.scala (some non-fully-generic wrappers for ScalaSigParser). The Scalate templates for the generated code are in src/main/resources; the most relevant one here is src/main/resources/WrappedClassInlined.ssp.
Even more stuff is involved; I fear you'll practically need to check it out and play with it to see exactly what it does, but feel free to ask questions.
Please note that the code is protected by a BSD license, so you need to keep the original copyright if you copy the code.
Note: all the links (except the license) are to the current HEAD for stability, so that they won't disappear so easily even if the files are moved/removed in future versions.
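Independently of the ScalaSigParser route, the sbt hook for generated sources is sourceGenerators. A minimal sketch in sbt 0.13 syntax (the file name and contents are illustrative):

    // build.sbt: write one managed source file before compilation
    sourceGenerators in Compile += Def.task {
      val out = (sourceManaged in Compile).value / "generated" / "Generated.scala"
      IO.write(out, "package generated\n\nobject Generated { val answer = 42 }\n")
      Seq(out)
    }.taskValue

Generated files land under target/ (the default sourceManaged location), so they are rebuilt and cleaned together with the rest of the project.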
If you're using 2.10.0-SNAPSHOT, then you should go for scala.reflect.runtime.universe. Take a look at http://dcsobral.blogspot.ch/2012/07/json-serialization-with-reflection-in.html for more information.
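For completeness, a minimal runtime-reflection sketch for Scala 2.10; it assumes scala-reflect.jar is on the classpath (in sbt 0.13: libraryDependencies += "org.scala-lang" % "scala-reflect" % scalaVersion.value):

    import scala.reflect.runtime.universe._

    object ReflectionCheck extends App {
      // inspect a type at runtime; prints "List"
      val tpe = typeOf[List[Int]]
      println(tpe.typeSymbol.name)
    }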