Should I skip tests with Maven? - Eclipse

tl;dr
Should I skip tests in maven?
Long version
(I don't know whether this matters.)
I use Eclipse for Java EE, and my tests and code are in the proper Maven directory layout.
Why should I (not) skip the tests during build?

tl;dr
Never make skipping tests a normal part of your build.
Doing it regularly creates a vicious circle that will make your tests useless and your application less robust.
Long version
Automatic builds from the IDE (Eclipse or any other)
IDEs that use Maven as a build tool don't execute unit tests at each build: by default they invoke neither the test phase nor the package phase. Instead, they focus on the compiler plugin (compile and test-compile goals) as well as the resources plugin (resources and test-resources goals).
So unit tests are not executed at each automatic build performed by your IDE.
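For illustration, the IDE's automatic build is roughly equivalent to running only the compilation-related phases; these are standard Maven commands:

    # roughly what the IDE does on each automatic build: no tests are run
    mvn compile test-compile

    # a full build, by contrast, executes the tests via the test phase
    mvn package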
Build with mvn package/install
Why should I (not) skip the tests during build?
Your question refers to Maven, but it could be any other build tool or any programming language, and the consequences would be the same.
First, ask yourself why we write unit tests.
We write them to validate the behavior of our components, and also to be warned when our components no longer behave as expected (regression detection).
Regularly skipping test execution means you accept the risk that components covered by unit tests develop regressions in their behavior without you being aware of them.
Later, other regressions may occur and you will still be unaware of them.
Much later, many of your tests will fail because you never corrected them to stay in line with the expected behavior.
Since a test failure breaks the build, this will very probably lead you to skip/disable/comment/delete unit tests because they fail too often and in too many cases.
Your unit tests will gather dust and you will lose all the benefits unit tests provide.
But then why does the skip-tests option exist?
I recommend/use it only for corner cases such as the following (not exhaustive), and only temporarily (see the sketch after this list):
legacy projects in which many tests were not maintained.
Since correcting them would take too much time, at first you have no choice but to skip them in order to build your component.
As soon as possible, though, you should disable the unreliable tests (@Ignore with JUnit), correct those that can be corrected, or schedule fixes for the rest. That way you can re-enable test execution and let the working tests run regularly.
projects whose tests may fail depending on the execution environment.
Here again, this should be a temporary and isolated workaround.
Indeed, as soon as possible you should find a way to make the tests pass whatever the environment in which they are executed.
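For reference, these are the stock Maven flags for skipping tests in such cases; both are standard Surefire/compiler options:

    # skips test execution but still compiles the test sources
    mvn install -DskipTests

    # skips both compiling and running the tests
    mvn install -Dmaven.test.skip=true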

Related

Running Xcode unit tests during release build

I have a number of logic unit tests (where my project files have a target membership of both the App and AppTests). I want to add a call to xcodebuild test-without-building to my build system so that my unit tests run for each build.
However, the tests cannot run on the release build (because release doesn't build for testing).
Is my only choice to build both the release version and the debug version during my build, so that I can use the debug version just to perform the tests? That's very different from, and very much worse than, the other test frameworks I've used (GTest, Catch). Why can't the tests stand on their own?
The test-without-building command is not actually meant to "run the tests without rebuilding the app"; rather, it's meant to be used in tandem with the build-for-testing command.
Please refer to the "Advanced Testing and Continuous Integration" WWDC 2016 session for more info.
The gist is: use build-for-testing to produce an .xctestrun file, which is then used by test-without-building to run the tests. This is particularly useful for running big suites across different machines, although I have never done it myself.
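A minimal sketch of that two-step flow; the scheme name, destination, and .xctestrun path are placeholders:

    # build the app and its tests once, producing an .xctestrun file in DerivedData
    xcodebuild build-for-testing -scheme MyApp \
        -destination 'platform=iOS Simulator,name=iPhone 8'

    # run the prebuilt tests, possibly on another machine
    xcodebuild test-without-building -xctestrun path/to/MyApp.xctestrun \
        -destination 'platform=iOS Simulator,name=iPhone 8'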
Now that we have established that you can't use test-without-building on its own, the only option to run the tests from the command line and, if they pass, build a Release version, is to use xcodebuild test, which is going to build the app.
As for why it needs to be in Debug, I don't have a precise answer. In iOS land, at least in the teams I've worked with, the difference between Debug and Release builds always comes down to things like the optimization options passed to the compiler, the architectures to build for, and the type of code signing.
This means that the code that runs in Debug vs. Release is exactly the same, and since we have already established that you'll need to build the app twice (once to run the tests, once to generate the releasable version), running the tests in Debug seems acceptable.
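Concretely, the two-build sequence might look like this; the scheme name and destination are again placeholders:

    # run the tests, then build the Release version only if they pass
    xcodebuild test -scheme MyApp \
        -destination 'platform=iOS Simulator,name=iPhone 8' \
        && xcodebuild build -scheme MyApp -configuration Release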

Is it necessary to "clean" SBT before "stage" for production builds?

We are using Jenkins and the SBT plugin to build, test, and deploy our application. I see everywhere that people run the "clean" command before "test" or "stage", which leads to very long compile times.
Is it necessary to clean everything for each build? What is the risk of using the incremental compiler for production builds?
I think it's a matter of taste. For continuous integration builds, it's OK to rely on the incremental compiler; if something fails I can investigate quite quickly.
But if, for whatever reason, the incremental compiler decides to ignore some changes (and that has already happened to me in development), you may get some bugs that are difficult to track down.
So, if you want something that's reproducible from a fresh code checkout, just run "clean" before everything else.
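In sbt terms that amounts to something like the following; stage here comes from the packaging plugin the question already assumes:

    # fast CI build relying on the incremental compiler
    sbt test

    # fully reproducible production build from scratch
    sbt clean test stage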
Most of the time, I configure Jenkins with two build tasks: test builds, triggered by code changes, and packaging builds targeted for deployment, triggered periodically (or manually in some cases, with automatic deployment).
If the build script and/or the sbt command line have not changed, then it should not be necessary to run clean.
I have been using Spark for several years, and it has one of the most complex sbt builds you can find. It works just fine to avoid running clean as long as the build process itself (as embodied in the project/sbt build artifacts and the sbt command line) is the same as in previous runs.

Building tests in Intellij for Play Framework is very slow

Is there a way to speed up the build time of unit tests for Play Framework in IntelliJ? I am doing TDD. Whenever I execute a test, it takes about 30-60 seconds to compile. Even a simple Hello World test takes that long. Rerunning the same test, even without any change, will still start the make process.
I am on IntelliJ 14.1, with Play 2.3.8, written in Scala.
I have already tried setting the Java compiler to Eclipse, and also tried setting the Scala compiler to SBT.
In IntelliJ 14.1.2, the workaround I used is to:
1) Remove Make from tests (Edit Configurations -> Defaults -> Scala Test -> Before launch -> (-) Make)
2) Start activator (or play) with ~test:compile (e.g. activator ~test:compile or sbt ~test:compile)
This prevents IntelliJ from calling the Play compilation server every time a make is invoked; the compilation is delegated to an external sbt/activator/play process that does the continuous compilation. The disadvantage is that if you run your test immediately, before the compilation completes, you may get a NoClassDefFoundError. Also, you will need to monitor an extra process. This setup, however, is much faster than the default IntelliJ setup (for now). Hope this helps anyone.
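Step 2, run from a terminal with whichever launcher the project uses:

    # continuously recompile the test sources (and the main sources they
    # depend on) whenever a file changes
    activator ~test:compile
    # or, with plain sbt
    sbt ~test:compile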
I'm going to assume that you know the problem is build time, i.e. that the actual run time for the tests themselves is negligible.
What do you have for hardware? In my experience, 4GB of RAM is not enough for IntelliJ Scala to perform well; it needs a big disk cache (which the OS uses free RAM for), I think. An SSD helps, too. Use Performance Monitor (or your OS's equivalent) to see whether the time is spent on disk, CPU, or network. If it's CPU, consider whether heap size may be a problem.
What is your build process like? Are there sbt plugins? How big is your project?
UPDATE
Triggering a full rebuild without changes is wrong. Is there something in your tests that modifies the project directories? If you run a dummy no-op test, does it do the same thing? Are you perhaps writing logs into the project tree, for instance?
In my limited experience, full Play builds under IntelliJ are orders of magnitude slower than a pure Scala build; I'd guess because of all the SBT plugins (view compiler, CoffeeScript compiler, LESS compiler, etc.) that have to run. But incremental builds aren't that painful.
On OSX, read "Activity Monitor" for "Performance Monitor".
UPDATE
See IntelliJ issue SCL-8235 for other people's experiences and workarounds for slow incremental Play builds. Vote for the issue to increase its priority and get it fixed more quickly.
What about unmarking existing tests and leaving only yours? Right-click on the test directory (which should be green) and choose Unmark as Test Source Root.

Scala - sbt: Is it safe to compile while running?

I often have to run time-consuming experiments in Scala, and I usually run a second sbt instance for the same project, in which I make changes to the code that is running in the other instance and compile.
The reason I do this is so that I don't have to wait for a long-running process to finish before making progress on my code.
My question is: is it safe to do that, or is there a possibility that recompiling parts of currently running code in sbt/Scala will cause problems in my running process?
What I have observed so far is that most of the time it is fine, but I did once run into a class-not-defined error when refactoring my code while it was running.
As @marcus mentioned, if the compiler writes a .class file that has not yet been loaded by your running JVM, there is a chance that it will be loaded later and not match the other classes already loaded. In many instances you'll be fine, but it could cause problems. There are a few things you can do in this situation:
Compile in separate directories. Check your code out into two completely different directories and do local commits (assuming you're using git) to push/pull from one copy of the repository to the other (see the sketch after this list). This ensures that your running experiment doesn't pick up the compilation changes until you're ready (when you "pull" from the development repository).
Use an automated CI system like Jenkins or Travis to run your tests on each commit. Similarly to #1, this will not conflict with your development work, since it uses a separate checkout of the code.
Use sbt-revolver, which runs the program in a separate JVM with the re-start command and restarts it whenever there are changes. This would interrupt your testing, however.
Use JRebel, which does a better job of reloading classes than the JVM or most IDEs.
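A minimal sketch of the first option, assuming a local git repository; the paths and branch name are hypothetical:

    # second working copy dedicated to the long-running experiment
    git clone /path/to/dev-checkout /path/to/run-checkout
    cd /path/to/run-checkout
    sbt run                                # compiles into its own target/ directory

    # later, once the dev copy is in a known-good state:
    git pull /path/to/dev-checkout master  # bring the changes over, then restart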

NUnit vs MSTest - a fickle TDD novice's experiences with both of them

There are a ton of questions here on SO regarding NUnit vs. MSTest, and I have read quite a few of them. I think my question here is slightly different enough to post separately.
When I started to use C#, I never even considered looking at MSTest because I was so used to not having it available when I was using C++ previously. I basically forgot all about it. :) So I started with NUnit, and loved it. Tests were very easy to set up, and testing wasn't too painful -- just launch the IDE and run the tests!
As many here have pointed out, NUnit has frequent updates, while MSTest is only updated as often as the IDE. That's not necessarily a problem if you don't need to be on the bleeding edge of TDD (which I'm not), but the problem I was having with frequent updates was keeping all of the systems up to date. I use about four or five different PCs daily, and while updating all of them isn't a huge deal, I was hoping for a way to make my code compile properly on systems with an older version of NUnit. Since my project referenced the NUnit install folder, when I upgraded the framework, any computers with the older framework installed would no longer be able to compile my project. I tried to combat the problem by creating a common folder in SVN that had just the NUnit DLLs, but even then it would somehow complain about the version number of the binary. Is there a way to get around this issue? This is what made me stop using it the first time.
Then one day I remembered MSTest, and decided to give it a try. I loved that it was integrated into the IDE. CTRL-R,CTRL-A, all tests run. How simple! But then I saw that the types of tests available in MSTest were pretty limited. I didn't know how many I'd actually really need, but I figured I should go back to NUnit, and I did.
About now I was starting to have to debug unit tests, and the only way I could figure out how to do it in NUnit was to set NUnit as the startup application, then set breakpoints in my tests. Then in the NUnit GUI, I would run the tests to hit the breakpoints. This was a complete PITA. I then looked at the MSTest GUI again, and saw that I could just click Debug there and it would execute my tests! WOW! Now that was the killer feature that swayed me back in favor of MSTest.
Right now, I'm back using MSTest. Unfortunately, today I started to think about daily builds and did some searching on Tinderbox, which is the only tool I had heard of before for this sort of thing. This then opened up my eyes to other tools like buildbot and TFS. So the problem here is that I think MSTest is guaranteed to lock me into TFS for automated daily builds, or continuous integration, or whatever the buzzword is. My company can't afford to get locked into MS-only solutions (other than VS), so I want to examine other choices.
I'm perfectly fine to go back to NUnit. I'm not thrilled about rewriting 100+ unit tests, but that's the way it goes. However, I'd really love for someone to explain how to squash those two issues of mine, which in summary are:
how do I set up NUnit and my project so that I don't have to keep upgrading it on every system to make my project build?
how do I get easier debugging of unit tests? My approach was a pain because I'd have to keep switching between NUnit and the default app to test/run my application. I saw a post here on SO that mentioned NUnitit on CodePlex, but I don't have any experience with it.
UPDATE -- I'm comparing stuff in my development VM, and so far, NUnitit is quite nice. It's easy to install (one click), and I just point it to whatever NUnit binaries are in my SVN externals folder. Not bad! I also went into VS -> Tools -> Options -> Keyboard and changed my mapping for CTRL-R,CTRL-A to map to NUnitit.Connect.DebugGUI. Not perfect since I haven't figured out how to make NUnit automatically run the tests when it's opened, but it's pretty good. And debugging works as it should now!
UPDATE #2 -- I installed TestDriven.Net and gave it a quick run through. Overall, I like it a lot better than NUnitit, but at the moment, NUnitit wins because it's free, and since it also works with NUnit, it will allow me to "upgrade" to TestDriven.Net when the time comes. The thing I like most about TestDriven.Net is that when I double click on the failed test, it takes me right to the line in the test that had failed, while NUnit + NUnitit doesn't seem to be capable of this. Has anyone been able to make this link between the NUnit GUI and the VS IDE happen?
Many projects I've worked on have included a copy of the specific version of NUnit (or xUnit.net, whatever) in a "lib" or "external" or "libraries" folder in their source control, and they reference that location when building all of their tests. This greatly reduces the "upgrade everyone" headache, since you don't really need to install NUnit or xUnit.net to use it.
This approach will still let you use something like TestDriven.Net to execute the tests, run the tests in a debugger, etc.
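A typical layout under this approach; the folder and project names are just an example:

    myproject/
        lib/NUnit/nunit.framework.dll     # the one copy everyone builds against
        src/MyProject/
        src/MyProject.Tests/              # references ..\..\lib\NUnit\nunit.framework.dll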
For easier debugging (and running, too) of unit tests I recommend checking out TestDriven.Net. The "Test With > Debugger" feature is so handy. The personal version is free.
Have you played with the "Specific Version" property on the NUnit.framework reference? We keep ours set to True so that tests coded against a given NUnit version require that specific version to execute.
I'm not sure how it would handle, for example, having 2.5 on your machine while another machine only had 2.4 - would .NET happily bind to the 2.4 version, or will it only bind from earlier versions to later versions of an assembly (e.g. compiled against 2.4, but with 2.5 available at runtime)?