It would be great to improve test-driven development productivity by automatically firing off tests whenever there is a code change.
This is what I'm hoping for.
Whenever a Scala file is saved, SBT (or a shell script) should execute the ScalaTest or Specs2 specifications.
When the tests complete, the system should play a sound indicating success or failure.
I'm using Scala IDE for development and SBT to run my test specs at the moment, so just automating the steps above would save one from having to switch to the console, run the test specs, and wait for the result.
Any ideas on automatically firing off the tests and playing a 'success' or 'fail' sound would be great.
Why not create a custom SBT task that depends on the test task? You could add the code for playing the sound to your build definition.
See here for how to run a custom task after another task.
To automatically re-run the tests, simply prefix the newly defined task with a ~ in the SBT shell before running it.
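For illustration, here's a minimal sketch of what such a task could look like in build.sbt (sbt 1.x slash syntax; the sound file names and the afplay player command are assumptions, so substitute whatever applies on your system):

    import scala.sys.process._

    lazy val testWithSound = taskKey[Unit]("Runs tests, then plays a success or failure sound")

    testWithSound := {
      // .result captures the test outcome instead of aborting the task on failure
      val sound = (Test / test).result.value match {
        case Value(_) => "success.wav" // hypothetical sound file paths
        case Inc(_)   => "failure.wav"
      }
      s"afplay $sound".! // afplay is macOS-specific; use aplay, paplay, etc. on Linux
    }

Then run ~testWithSound in the SBT shell and the tests (and the sound) will fire on every save.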
I have some code that warms up my app's caches that I'd like to run in production or when I start my app with sbt run. However, when I run sbt console, I'd like to skip this code so that I can get to testing on the REPL very quickly without any delays.
Is there a way to detect if my app is being run within sbt console so that I can avoid warming up the caches?
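One way to sidestep the detection problem entirely (a sketch, not necessarily the only approach): guard the warm-up behind a JVM system property that only the run task sets, so the console never triggers it. The property name app.warmup below is made up for illustration, and this relies on forking run so that javaOptions take effect:

    // build.sbt
    run / fork := true                        // javaOptions only apply to forked runs
    run / javaOptions += "-Dapp.warmup=true"  // flag set for `sbt run` only

and in the application's startup code:

    // warmCaches() is a stand-in for your cache warm-up code;
    // `sbt console` never sets the flag, so the REPL starts without the delay
    if (sys.props.get("app.warmup").contains("true")) warmCaches()

Your production startup script would pass the same -D flag to get the warm-up there as well.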
With Eclipse and Spring Tool Suite, when creating a Debug configuration we can check the "Keep JUnit running after a test run when debugging" option. Because we're using the SpringJUnit4ClassRunner and loading the Spring app before running, startup time before these tests can run is significant, so this is a huge time saver for rerunning tests and even hot swapping basic changes.
However, I recently switched to IntelliJ and I'm unable to find an equivalent option for this feature. Can someone tell me where it is?
You can achieve something very similar to Eclipse's "Keep JUnit running after a test run when debugging" by doing the following:
1) Create a new JUnit Run/Debug configuration with Test kind set to Method and Repeat set to Until Stopped
2) Choose your test class and the method you plan to test, and save the configuration
3) Start the test in Debug mode using the configuration you just created. You will notice that the test runs over and over again without reloading the Spring context
4) Hit the sort a-z button in the Debug tab so that the latest JUnit test run is always shown at the top
5) Pause the test from the Debug or Run tab (the || button on the left)
6) Make your changes to the code and then build. Changes will be hot swapped. For the best results, I recommend also using HotSwap Agent or JRebel
7) Resume the test from the Debug or Run tab
8) Rinse and repeat steps 5 to 7 until you are done with the test
Note that pausing the test is optional; changes will be reloaded anyway between test runs.
The only downside of this strategy is that you can only keep one test method alive at a time.
Is there a way to speed up the build time of unit tests for Play Framework in IntelliJ? I am doing TDD. Whenever I execute a test, it takes about 30-60 seconds to compile; even a simple Hello World test takes that long. Rerunning the same test, even without any change, will still start the make process.
I am on IntelliJ 14.1 and Play 2.3.8, written in Scala.
I already tried setting the Java compiler to Eclipse, and also tried setting the Scala compiler to SBT.
In IntelliJ 14.1.2, the workaround I did is to:
1) Remove Make from tests (Edit Configurations -> Defaults -> Scala Test -> Before launch -> (-) Make)
2) Start activator (or play, or sbt) with ~test:compile (e.g. activator ~test:compile or sbt ~test:compile)
This prevents IntelliJ from calling the Play compilation server every time a make is invoked; compilation is delegated to an external sbt/activator/play process that does the continuous compiling. The disadvantage is that if you run your test before the compilation completes, you may get a NoClassDefFoundError. Also, you will need to monitor an extra process. This setup, however, is much faster than the default IntelliJ setup (for now). Hope this helps anyone.
I'm going to assume that you know that the problem is build-time - that the actual run-time for the tests themselves is negligible.
What do you have for hardware? In my experience, 4GB of RAM is not enough for IntelliJ Scala to perform well - it needs a big disk cache (which the OS uses free RAM for), I think. An SSD helps, too. Use Performance Monitor, or the analogous tool for your OS, to see whether the time is spent on disk, CPU, or network. If it's CPU, consider whether heap size may be a problem.
What is your build process like? Are there sbt plugins? How big is your project?
UPDATE
Triggering a full rebuild without changes is wrong. Is there something in your tests that is modifying the project directories? If you run a dummy no-op test, does it do the same thing? Are you maybe writing logs into the project tree, for instance?
In my limited experience, full Play builds under IntelliJ are orders of magnitude slower than a pure Scala build - I'd guess because of all the SBT plugins (the view compiler, the JavaScript compiler, the CSS compiler, etc.) that have to run. But incrementals aren't that painful.
On OSX, read "Activity Monitor" for "Performance Monitor".
UPDATE
See IntelliJ issue SCL-8235 for other folks' experiences with, and workarounds for, slow incremental Play builds. Vote for the issue to raise its priority and get it fixed more quickly.
What about unmarking the existing tests and leaving only yours? Right-click on the test directory (which should be green) and choose Unmark as Test Source Root.
I am currently in the process of setting up Jenkins CI for my Scala/Akka project.
I managed to create a build that is integrated with BitBucket and executes builds when a new pull request is created or an old pull request is updated.
As a testing framework we are using Specs2, which is also integrated with Jenkins via a JUnit post-build action. Now I am wondering how to properly execute e2e tests in my build.
Basically, in the git repository we have 2 projects; let's call them main-project and rest-tests. rest-tests contains e2e tests written using the REST-assured library. To execute them I need to start the main-project application (which uses the Spray library to set up an HTTP server) and then execute the test task in the sbt project of rest-tests.
My idea was to execute the main-project startup script (generated by sbt-native-packager) with something like this:
$WORKSPACE/main-project/target/universal/stage/bin/main-project & echo $! > /tmp/main-project.pid
then execute the test task of the rest-tests project, and finally kill the process whose PID is saved in the /tmp/main-project.pid file.
The last step should be implemented using https://wiki.jenkins-ci.org/display/JENKINS/Post+build+task because if some rest-tests fail, the next steps of the build will not be executed (or at least that is what I am thinking) and I could end up with my instance of the application still running after the build has finished.
This is the first time I have set up a CI system, and my solution seems a little hacky (at least to me). I am wondering if there is a better or more idiomatic way of solving my problem of running e2e tests that require another application to be running.
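For what it's worth, one way to make the start/kill bookkeeping more robust than a PID file is to let a single process own the application's lifecycle, so the app is stopped even when the tests fail. A rough sketch in plain Scala (the paths mirror the ones above; the sleep-based wait is a crude stand-in for polling a health endpoint):

    import scala.sys.process._

    object E2ERunner extends App {
      // start main-project in the background (sbt-native-packager stage layout)
      val server = Process("main-project/target/universal/stage/bin/main-project").run()
      val exitCode =
        try {
          Thread.sleep(5000) // crude wait for the HTTP server to come up
          Process("sbt test", new java.io.File("rest-tests")).! // run the REST-assured suite
        } finally {
          server.destroy() // always stop the app, even when the tests fail
        }
      sys.exit(exitCode) // propagate the test result to Jenkins
    }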
I'm playing around with Microsoft Test Manager 2013 (though it appears it is just MTM2012) to try to get a better understanding of test cases and test suites, as I want to use this at work. I was hoping that I could run a test suite on a build which gets included in this test suite. That is what I WANT to do, but it could very well be wrong, so maybe a better picture of what I'm doing at work might lend itself to a better answer.
My company makes tablet PCs. I write programs for those tablets. For the sake of argument let's just say there are 5 tablets that run a similar array of OSs. Tablets 1, 2, 3, and 4 can run WinXP, WinXP Embedded, Win7, and Win7 Embedded, and Tablet 5 can run Win7, Win7 Embedded, and Win8 Embedded. Let's say I'm making a display test program. Naturally this display test will run differently on each tablet, but the program itself is supposed to be able to handle that without having to worry about the OS. So I wrote out a very simple test: open the program, try to open it again, verify only 1 instance is running, check the display, close the program.
I figured it would be good to make a test suite called "Complete Display Program Test" and put 5 sub test suites under it, one for each tablet. Then I moved the 5 test cases into a single test suite. I configured all test cases to only have the correct tablet/OS configuration. I queued a build and waited for it to finish, then attached that build to the main test suite. When I clicked on running a test for Tablet 1, I didn't see the build attached in the Test Runner. I've looked around a little to see why or how and haven't found anything. The question is: how do I do that? Or, if you are scratching your head and wondering why in the world I am doing it this way, then by all means suggest another way. This is only the second time I have ever looked into MTM, so I might not be doing it right.
Thank you for your time.
When running manual tests from MTM you will not see the build you are using in the Test Runner.
But if you complete the test and set the test outcome, you will be able to check which build you ran the test against.
Just double-click on the test or select "View Results" to display test results:
This column is not visible by default; you will have to right-click on the column header and select the "Build number" column to be displayed.
You will also be able to see the build number in "Analyse Test Runs" area:
Things are slightly different if you are running automated tests.
Consider the following approach:
Automate your Test Cases
See How to: Associate an Automated Test with a Test Case for details.
Create a Build Definition that builds your application under test AND the assemblies containing your tests.
I strongly recommend building the application you want to test and the test assemblies in the same Build Definition. (You will see why a little bit later.)
Run this build definition and deploy the latest version of the application to the environment where you want to run the tests.
This is very important to understand: if you run automated tests, only the test assemblies are deployed automatically to the environment.
It's your job to deploy the right version of the application you are going to test.
Now you can run tests from MTM.
You can do it the way described by @AndrewClear in the comment to this answer: choose "Run with Options" when you're beginning a test run, and select the latest build.
Now the test assemblies containing the tests used to automate the Test Cases will be deployed automatically to the test environment, and the tests will be executed.
That is the point where you should see why it is so important to build the application and the tests with a single Build Definition: since the build number you selected when starting the tests is stored along with the test results on TFS, you will later know which version of your application you were testing (assuming you deployed the right version, of course).
You could go a little bit further if you want even more automation. (This is the way I'm currently running automated tests.)
Use the Build-Deploy-Test template (this is a good place to start reading about Setting Up Automated Build-Deploy-Test Workflows).
Using this approach you will be able to automate deployment of the application you want to test.