I'm having major problems getting Undercover to work with Maven.
I'm using ScalaTest for unit tests, and that part works perfectly.
When I run Undercover, though, it simply creates empty files.
I think it's probably a problem with the configuration in my pom.xml (the documentation for Undercover is a little sketchy).
Help :)
Thanks
T
At one point I asked about Emma on a Scala mailing list, and I was told by some that they had more success with Cobertura. You might want to try that instead.
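If you do end up trying Cobertura, a minimal sketch of the Maven invocation (assuming the cobertura-maven-plugin is declared in your pom.xml; I haven't verified this against a ScalaTest build) is:

mvn clean cobertura:cobertura

which instruments the classes, runs the tests, and by default writes an HTML report under target/site/cobertura.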
Up to what stage is this "working correctly", given that empty files are being produced?
Do you have a sample project/POM that demonstrates the problem?
I am trying out Scala for the first time and am following this page to set up my first Scala project:
https://docs.scala-lang.org/getting-started-intellij-track/getting-started-with-scala-in-intellij.html
However, when I create a simple worksheet containing println("hello") and evaluate it, no results appear.
What am I doing wrong?
This seems to be an issue that only happens with Scala 2.13 (I have not tested it exhaustively, though).
I have used 2.12.9 successfully.
I opened an issue on the Scala docs project too.
https://github.com/scala/docs.scala-lang/issues/1486
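In the meantime, if your project is sbt-based, a minimal sketch of the workaround (simply pinning the Scala version the worksheet uses) is to set it in build.sbt:

// build.sbt: pin to a 2.12.x release until the 2.13 worksheet issue is resolved
scalaVersion := "2.12.9"

Otherwise, pick 2.12.9 as the Scala SDK when creating the project in the IntelliJ wizard.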
While exploring how to unit test a Kafka Streams topology, I came across ProcessorTopologyTestDriver; unfortunately, this class seems to have been broken since version 0.10.1.0 (KAFKA-4408).
Is there a workaround available for the KTable issue?
I saw the "Mocked Streams" project, but first it uses version 0.10.2.0 while I'm on 0.10.1.1, and second it is Scala while my tests are Java/Groovy.
Any help on how to unit test a stream without having to bootstrap ZooKeeper/Kafka would be great.
Note: I do have integration tests that use embedded servers; this is for unit tests, i.e. fast, simple tests.
EDIT
Thank you to Ramon Garcia
For people arriving here from Google searches, please note that the test driver class is now org.apache.kafka.streams.TopologyTestDriver.
This class lives in the Maven artifact with groupId org.apache.kafka and artifactId kafka-streams-test-utils.
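To illustrate, here is a minimal sketch using the newer driver (this assumes kafka-streams-test-utils 2.4+, where createInputTopic/createOutputTopic exist; the topic names and pass-through topology are made up for the example). It is written in Scala, but TopologyTestDriver is a plain Java class, so the same calls work from Java or Groovy:

import java.util.Properties
import org.apache.kafka.common.serialization.{Serdes, StringDeserializer, StringSerializer}
import org.apache.kafka.streams.{KeyValue, StreamsBuilder, StreamsConfig, TopologyTestDriver}

// Hypothetical pass-through topology: input-topic -> output-topic
val builder = new StreamsBuilder()
builder.stream[String, String]("input-topic").to("output-topic")

val props = new Properties()
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "unit-test")
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "dummy:1234") // never contacted
props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass.getName)
props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass.getName)

// No ZooKeeper/Kafka needed: the driver runs the topology in-process
val driver = new TopologyTestDriver(builder.build(), props)
try {
  val input  = driver.createInputTopic("input-topic", new StringSerializer, new StringSerializer)
  val output = driver.createOutputTopic("output-topic", new StringDeserializer, new StringDeserializer)
  input.pipeInput("key", "value")
  assert(output.readKeyValue() == new KeyValue("key", "value"))
} finally {
  driver.close()
}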
I found a way around this; I'm not sure it is THE answer, especially after the comment from https://stackoverflow.com/users/4953079/matthias-j-sax. In any case, here is what I have so far...
I completely copied ProcessorTopologyTestDriver from the 0.10.1 branch (that's the version I'm using).
To address KAFKA-4408, I made the private final MockConsumer<byte[], byte[]> restoreStateConsumer field accessible and moved the task = new StreamTask(... chunk into a separate method, e.g. bootstrap().
In the setup phase of my test I do the following:
// Recreate the copied driver, then register partition metadata and end offsets for the
// KTable's backing topic so the restore consumer doesn't block (the KAFKA-4408 symptom)
driver = new ProcessorTopologyTestDriver(config, builder)
ArrayList<PartitionInfo> partitionInfos = new ArrayList<PartitionInfo>();
partitionInfos.add(new PartitionInfo('my_ktable', 1, (Node) null, (Node[]) null, (Node[]) null));
driver.restoreStateConsumer.updatePartitions('my_ktable', partitionInfos);
driver.restoreStateConsumer.updateEndOffsets(Collections.singletonMap(new TopicPartition('my_ktable', 1), Long.valueOf(0L)));
// Now that the restore consumer knows about the topic, create the StreamTask
// (the chunk moved out of the constructor into bootstrap())
driver.bootstrap()
And that's it...
Bonus
I also ran into KAFKA-4461; fortunately, since I had copied the whole class, I was able to "cherry-pick" the accepted fix with minor tweaks.
As always, feedback is appreciated. Although apparently not an official test class, this driver has proven super useful!
I am using Play Framework v2.3. The problem I am facing is that any change to an HTML file, followed by a browser refresh, causes recompilation of the whole codebase. Can I avoid this?
Twirl templates are compiled, as stated by the docs:
Templates are compiled as standard Scala functions, following a simple naming convention. If you create a views/Application/index.scala.html template file, it will generate a views.html.Application.index class that has an apply() method.
There is no way to disable this behavior because it works this way by design. My suggestion here is to use ~ (tilde) before sbt commands so things happen as you save the file, for instance:
sbt ~run
This will recompile the changed file (and possibly others) every time you change and save it. Also, sbt has an option that can possibly help you here: withNameHashing.
See the sbt docs to understand how it works. To enable it, add the following line to your build.sbt file:
incOptions := incOptions.value.withNameHashing(nameHashing = true)
I need to define a custom test configuration in sbt which runs the normal test task, but with some extra settings. I've been looking around trying to figure out how to do this, but I can't seem to get it right.
What I would like is something like this: > test would run the normal test task, and > pipelinetest would be exactly the same as test, only with javaOptions += "-Dpipeline.run=run".
I've figured out how to set the javaOptions for test, like this:
javaOptions in test += "-Dpipeline.run=run", so what I would like to be able to do is this: javaOptions in pipelinetest += "-Dpipeline.run=run"
How would I define pipelinetest to achieve this goal? Does this need to be a new task, or would this be a setting in test? I'm very new to sbt and quite confused at the moment, and reading the documentation didn't help, so any help would be greatly appreciated.
I have only a partial answer, but I thought this might be useful info. I was just trying to do something similar for the sbt build in Spark -- I wanted to have a way to run tests with a debugger. Mark Harrah's comment pointed me in the right direction. The change I made was:
lazy val TestDebug = config("testDebug") extend(Test)
...
baseProject
  .configs(TestDebug)
  .settings(inConfig(TestDebug)(Defaults.testTasks): _*)
  .settings(Seq(
    javaOptions in TestDebug ++= "-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005"
      .split(" ").toSeq))
This left my usual invocations of test, testOnly, etc. alone, but now I could also run testDebug:testOnly ..., which would use the extra options defined above. (It probably also created testDebug:test, etc., with those extra options, which aren't useful, but oh well.)
I didn't really understand why, but one important part for me to get this to work was to use inConfig(TestDebug)(Defaults.testTasks), instead of inConfig(TestDebug)(Defaults.testSettings).
In my case, I ran into trouble figuring out (a) how to get it to work for a multi-project build, and (b) how to adapt it to our build, which is even weirder because it's based on a POM file, making the project definitions different from every example.
As usual, my issue with sbt is that I find info which seems related, but my build has some unusual aspects that make me unable to completely cargo-cult the answer; and though it seems like I need only trivial modifications, without a thorough understanding it's hard to adapt the examples.
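For what it's worth, applying the same pattern to the original pipelinetest question, a minimal sketch for a single-project build.sbt might look like this (PipelineTest and the fork setting are my additions; javaOptions only take effect in a forked test JVM):

lazy val PipelineTest = config("pipelinetest") extend(Test)

lazy val root = (project in file("."))
  .configs(PipelineTest)
  .settings(inConfig(PipelineTest)(Defaults.testTasks): _*)
  .settings(
    fork in PipelineTest := true, // javaOptions are only passed to forked JVMs
    javaOptions in PipelineTest += "-Dpipeline.run=run"
  )

With this in place, test behaves as before, and pipelinetest:test runs the same tests with the extra system property.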
I have a parent project with 5 modules. I am trying to figure out how to aggregate the module-level scaladocs into one cohesive site. Any help would be much appreciated.
You can do it easily with sbt by integrating 'Unidoc' into your build:
https://github.com/akka/akka/blob/master/project/Unidoc.scala
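These days the same approach is also packaged as the standalone sbt-unidoc plugin; a minimal sketch (the version number and module names are illustrative placeholders):

// project/plugins.sbt
addSbtPlugin("com.eed3si9n" % "sbt-unidoc" % "0.4.3")

// build.sbt: enable unidoc on the root project that aggregates the five modules
lazy val root = (project in file("."))
  .enablePlugins(ScalaUnidocPlugin)
  .aggregate(moduleA, moduleB, moduleC, moduleD, moduleE)

Running sbt unidoc should then produce one combined Scaladoc site covering all aggregated modules.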
The Maven plugin for Scala supports aggregation too, but by default it only aggregates direct modules; you can change the default behavior (aggregateDirectOnly, forceAggregate):
http://davidb.github.com/scala-maven-plugin/doc-mojo.html
Scaladoc2 doesn't support aggregation across multiple sources/classpaths (scaladoc runs on top of scalac, so aggregation means a full compilation, as if there were one big project).
And the scala-maven-plugin is mainly a wrapper around the scala commands.
Aggregation is only available with vscaladoc, which is now abandoned.
Sorry
If aggregating Scala documentation into the overall Java project documentation is an option, see my answer here: https://stackoverflow.com/a/16288487/430128.