I have a parent project with 5 modules. I am trying to figure out how to aggregate the module-level scaladocs into one cohesive site. Any help would be much appreciated.
You can do it easily with SBT by integrating 'Unidoc' into your build:
https://github.com/akka/akka/blob/master/project/Unidoc.scala
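The same approach is also packaged as the standalone sbt-unidoc plugin. Here is a minimal sketch of what its setup looks like - the plugin version and the module names are illustrative, so check the plugin's README for current coordinates:

// project/plugins.sbt
addSbtPlugin("com.eed3si9n" % "sbt-unidoc" % "0.4.3")

// build.sbt: enable unidoc on the root project; running `sbt unidoc`
// then generates one combined scaladoc site for all aggregated modules
lazy val core = project
lazy val api = project // ...and your other modules

lazy val root = (project in file("."))
  .enablePlugins(ScalaUnidocPlugin)
  .aggregate(core, api)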
The Maven plugin for Scala supports aggregation too, but by default only for direct modules; you can change the default behavior (aggregateDirectOnly, forceAggregate):
http://davidb.github.com/scala-maven-plugin/doc-mojo.html
Scaladoc 2 doesn't support aggregation from multiple sources/classpaths (scaladoc runs on top of scalac, so aggregation means a full compilation, as if there were one big project).
And the scala-maven-plugin is mainly a wrapper around the Scala commands.
Aggregation is only available with vscaladoc, which is now abandoned.
Sorry
If aggregating Scala documentation into the overall Java project documentation is an option, see my answer here: https://stackoverflow.com/a/16288487/430128.
I would like to generate XML files during an sbt build based on a higher-level config (let's say YAML), then package them into a tar file (via sbt-native-packager). What would be the simplest way to achieve that?
One way I can think of is to add Twirl to project/build.sbt and then use it to write a custom task. Is there some simpler way to do that?
To use Twirl, you would need to add Twirl as a plugin to your project's build's build - it's a bit meta, and the location of your Twirl files will be a bit unintuitive (project/src/main/twirl). I've done it, but in my opinion it's just not worth it for most use cases.
I would instead just use scala-xml. If you're using sbt 0.13 (i.e., Scala 2.10), then you can just embed the XML directly in your Scala code; otherwise, for sbt 1.0, you may need to add a dependency on scala-xml in your project/plugins.sbt (though possibly sbt 1.0 already depends on scala-xml, not sure).
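If you do end up needing the explicit dependency, a single line in project/plugins.sbt should be enough (the version here is illustrative):

libraryDependencies += "org.scala-lang.modules" %% "scala-xml" % "1.0.6"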
Here's an example of a task that generates XML:
https://github.com/lagom/lagom/blob/4a75ab0773b2cc3f55b6c5fae3f96ba08ddcf4c0/project/SbtMavenPlugin.scala#L47
Scroll down to see examples of embedding XML in Scala:
https://github.com/lagom/lagom/blob/4a75ab0773b2cc3f55b6c5fae3f96ba08ddcf4c0/project/SbtMavenPlugin.scala#L158-L162
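As a rough sketch of the whole approach, here is a hypothetical sbt 0.13 task that embeds an XML literal and writes it out to managed resources. The task name, the hard-coded value standing in for your parsed YAML config, and the output path are all made up for illustration, and mapping the file into the sbt-native-packager tarball would be a separate mappings setting:

// build.sbt (sbt 0.13 / Scala 2.10, where XML literals still work)
import scala.xml._

val generateXml = taskKey[Seq[File]]("Generates XML files from build config")

generateXml := {
  // in a real build this value would be parsed from your YAML config
  val appName = "my-app"
  val descriptor =
    <descriptor>
      <name>{appName}</name>
    </descriptor>
  val out = (resourceManaged in Compile).value / "descriptor.xml"
  // PrettyPrinter(width, indent) renders the literal as readable XML
  IO.write(out, new PrettyPrinter(80, 2).format(descriptor))
  Seq(out)
}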
I'm trying to convert a persistence layer from a plain old database (using ScalaQuery) to MongoDB, and I'm running into an odd issue. I use the Casbah driver, which is a Scala wrapper around the official MongoDB Java driver. Both the Java and Scala drivers define - according to the docs and the overview of the .jar when I open it in Eclipse - a method findOneById that takes a single DBObject as a parameter (with an ID in it).
However, when I try to access it, I get a "missing method" error from the Scala compiler, both in Eclipse and SBT - Scala version 2.9.0-1, SBT 0.10.1.
What might cause this? Is this perhaps a known SBT / Scala compiler bug?
I just removed my entire repository so all dependencies would get downloaded fresh, but this didn't fix the problem.
Are you sure that you're calling findOneById on a MongoCollection instance?
Maybe it's the parameter type that is wrong: as far as I can see in the documentation (http://api.mongodb.org/scala/casbah/2.1.2/scaladoc/com/mongodb/casbah/MongoCollection.html), findOneById should take an id of type AnyRef and, optionally, the fields to return.
You should try something like mongoCollection.findOneByID(1.asInstanceOf[Object]).
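For context, a sketch of what that call looks like end to end (Casbah 2.1.x-style API; the connection setup and names are illustrative):

import com.mongodb.casbah.Imports._

val collection = MongoConnection()("mydb")("mycollection")
// note the capital 'D' in findOneByID: it takes the raw id value as an
// AnyRef, not a DBObject wrapping it
val doc: Option[DBObject] = collection.findOneByID(1.asInstanceOf[Object])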
Regarding DBObject, it seems that it doesn't appear in the parameter list (except as an implicit parameter used to convert the fields that you request into a DBObject). Maybe the signature of the method changed since a previous release.
Hope this will help.
I'm using Groovy for a calculation engine DSL and really like the support we now have in Eclipse with STS and the Groovy-Eclipse plug-in (I'm on STS 2.8.0M2 with the latest milestone of Groovy-Eclipse, 2.5.2).
One issue I have is that I don't know how to get the Groovy editor to 'know' about the automatic imports I've added to my script runner, which means Eclipse gives me a whole bunch of false errors. If you use the Groovy class loader, you can add additional imports for 'free', so you avoid needing to do imports in your script.
I've had a play with the DSLD support in Groovy-Eclipse (which can be used to add auto-completion support) but it's not obvious to me that this is something I could do with it - I don't find the DSLD documentation the simplest to follow.
The inferencing settings for Groovy in Eclipse didn't look like the right thing either.
For example:
def result = new CalculationResult()
gives me an error on the CalculationResult class as it's not imported, but the script will execute correctly in my environment because of the customized imports on the Groovy class loader. I'm using the standard import customization provided by Groovy, for example:
import org.codehaus.groovy.control.customizers.ImportCustomizer
import org.codehaus.groovy.control.CompilerConfiguration
def importCustomizer = new ImportCustomizer()
importCustomizer.addImport 'CalculationResult', 'ch.hedgesphere.core.type.CalculationResult'
def configuration = new CompilerConfiguration()
configuration.addCompilationCustomizers(importCustomizer)
...
Any pointers appreciated.
This seems to be in their bugtracker as coming in the 2.6 release of the plugin.
But the comment from Andrew Eisenberg doesn't bode well:
Unfortunately, this is not something that DSLDs can do. Since a missing import could mean compile errors, we would need a way to augment the compiler lookup for this. There might be a way to specify this information inside of a DSLD, but that would mean hooking into DSLDs in a very different way. More likely, this will have to be specified through an Eclipse plugin (like the gradle tooling).

Another possibility is that we can ensure that certain kinds of AST Transforms are applied during a reconcile and so the editor would just "magically" know about these extra imports. We will have to look into the feasibility of this, however.
Still, maybe a vote on that issue wouldn't go amiss?
The Java compiler provides incremental builds, and so does the javac Ant task. But most other processes don't.
Generally speaking, build processes transform some set of files (sources) into another set of files (targets).
I can distinguish two cases here:
The transformer cannot take a subset of the source files, only the whole set. Here we can only do a lazy build: if no source files were modified, we skip processing entirely.
The transformer can take a subset of the source files and produce a partial result - an incremental build.
What Ant built-ins, third-party extensions, or other tools are there to implement lazy and incremental builds?
Can you provide some typical buildfile examples?
I am particularly interested in making this work with the GWT compiler.
The uptodate task is Ant's generic solution to this problem. It's flexible enough to work in most situations where lazy or incremental compilation is desirable.
I had the same problem as you: I have a GWT module as part of my code, and I don't want to pay the (hefty!) cost of recompiling it when I don't need to. The solution in my case looked something like this:
<uptodate property="gwtCompile.mymodule.notRequired"
          targetfile="www/com.example.MyGwtModule/com.example.MyGwtModule.nocache.js">
  <srcfiles dir="src" includes="**"/>
</uptodate>

<target name="compile-mymodule-gwt" unless="gwtCompile.mymodule.notRequired">
  <compile-gwt-module module="com.example.MyGwtModule"/>
</target>
Regarding GWT specifically, incremental builds aren't possible because the GWT compiler looks at all the source code at once, optimizing and inlining as it goes. This means code that wasn't changed can still be evaluated differently: for example, if you start using a method from a class that wasn't changed, that method was left out in the previous compilation but now needs to be compiled in.
I'm having major problems getting Undercover to work using Maven.
I'm using ScalaTest for unit tests, and this is working perfectly.
When I run Undercover, though, it simply creates empty files.
I think it's probably a problem with the configuration in my pom.xml (but the documentation for Undercover is a little sketchy).
Help :)
Thanks
T
At one point I inquired about Emma on a Scala mailing list, and I was told by some that they had more success with Cobertura. You might want to try that instead.
Up to what stage is this "working correctly", given that empty files are being produced?
Do you have a sample project/POM that demonstrates the problem?