How to prevent MoreUnit from automatically executing tests? - eclipse

I installed "MoreUnit" as an Eclipse plugin, but when Eclipse starts, the tests are launched automatically. This is a problem for me because the tests include CRUD operations, so with this automatic startup the database ends up empty after a while.
How can I prevent MoreUnit from automatically executing the tests?

MoreUnit is a tool to assist in unit testing. If your tests are doing anything with a database, they are not unit tests. The reason for this is that if you test your class with a real database connection, you are also testing the database along with your class.
You should decouple your dependency on the database with a mock (see my answer here for an idea how to do this).
If you are doing data-driven tests, then it would be better to use a tool such as DbUnit to drive your tests rather than relying on a real database connection. With such a tool, you will have control over the data for each test and won't have to worry that tests fail because someone else updated data in the database or that you executed your tests in "the wrong order".
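To make the decoupling concrete, here is a minimal sketch with JUnit 4 and Mockito; UserDao and UserService are hypothetical names standing in for your own class and its database-facing dependency:

    import static org.junit.Assert.assertEquals;
    import static org.mockito.Mockito.*;

    import org.junit.Test;

    public class UserServiceTest {

        // Hypothetical database-facing dependency of the class under test.
        interface UserDao {
            String findName(long id);
        }

        // Hypothetical class under test: it depends only on the interface,
        // never on a real database connection.
        static class UserService {
            private final UserDao dao;
            UserService(UserDao dao) { this.dao = dao; }
            String displayName(long id) { return "User: " + dao.findName(id); }
        }

        @Test
        public void findsUserWithoutTouchingTheDatabase() {
            UserDao dao = mock(UserDao.class);           // no database anywhere
            when(dao.findName(42L)).thenReturn("alice");

            UserService service = new UserService(dao);

            assertEquals("User: alice", service.displayName(42L));
            verify(dao).findName(42L);                   // assert on the collaboration
        }
    }

A test like this can run on every build without ever touching (or emptying) your database, which is exactly what makes automatic execution harmless.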

Should I skip tests with Maven?

tl;dr
Should I skip tests in Maven?
Long version
(I don't know whether this matters.)
I use Eclipse for Java EE, and my tests and code are in the standard Maven directory layout.
Why should I (not) skip the tests during build?
tl;dr
Never make skipping tests a normal part of your build.
Doing it regularly is a vicious circle that will make your tests useless and your application less robust.
Long version
Automatic builds from the IDE (Eclipse or any other)
IDEs that use Maven as a build tool don't execute unit tests at each build: by default they invoke neither the test phase nor the package phase. Instead they focus on the compiler plugin (compile and test-compile goals) as well as the resources plugin (resources and test-resources goals).
So unit tests should not be executed at each automatic build performed by your IDE.
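For reference, the standard Maven lifecycle ties test execution to these invocations (which is why an IDE build that stops at the compile goals never runs the tests):

    mvn compile        # main sources and resources only - no tests executed
    mvn test-compile   # test sources compiled too - still no tests executed
    mvn test           # Surefire executes the unit tests
    mvn package        # executes the tests, then packages the artifact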
Build with mvn package/install
Why should I (not) skip the tests during build?
Your question refers to Maven, but it could be any other build tool or any programming language, and the consequences would be the same.
First, ask yourself why we write unit tests.
We write them to validate the behavior of our components, and also to be warned when our components no longer behave as expected (regression detection).
Regularly ignoring test execution means you accept the risk that components covered by unit tests develop regressions in their behavior without your being aware of them.
Later, other regressions may occur and you will still be unaware of them.
Much later, many of your tests will fail because you never corrected them to stay in line with the expected behavior.
Since a test failure breaks the build, this will very probably lead you to skip/disable/comment out/delete unit tests because they fail too often and in too many cases.
Your unit tests will gather dust and you will lose all the benefits unit tests provide.
So why does the skip-tests option exist at all?
I recommend/use it only for corner cases such as the following (not exhaustive), and only temporarily:
legacy projects in which many tests were not maintained.
Since correcting them all would take too much time, at first you have no choice but to skip them in order to build your component.
As soon as possible, though, you should disable the unreliable tests (@Ignore with JUnit, as sketched after this list), correct those that can be corrected, or schedule the fixes. That way you can re-enable test execution and let the working tests run regularly.
projects where the tests may fail depending on the execution environment.
Here again, this should be a temporary and isolated workaround.
As soon as possible you should find a way to make the tests pass regardless of the environment in which they are executed.
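As for the mechanics: the skip itself is typically done with mvn package -DskipTests (skips test execution) or -Dmaven.test.skip=true (skips compiling and executing the tests). Below is a minimal sketch of the preferable alternative, disabling a single unreliable test with JUnit 4's @Ignore so the rest of the suite keeps running; FlakyFeatureTest is a hypothetical name:

    import org.junit.Ignore;
    import org.junit.Test;

    public class FlakyFeatureTest {

        @Ignore("Fails depending on the execution environment - fix planned")
        @Test
        public void unreliableCase() {
            // temporarily excluded from the build instead of skipping the whole suite
        }

        @Test
        public void reliableCase() {
            // keeps protecting against regressions on every build
        }
    }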

Running an external process to support an automated IntelliJ test

As a simple example, I have some tests which rely on a fresh (read "empty") local Redis instance. My typical workflow has been to fire up the instance from the terminal and just restart or flushdb manually.
If possible, I'd love to wrap this up in the Run configuration of my tests. The configuration dialog allows me to set up "Before launch" tasks, but these appear to run sequentially. I really want something running in another process in the background that can be shut down at the end of the tests.
I have a few other external processes that I'd like to handle in a similar fashion. I'm not sure the Run/Debug configuration is the right approach. I'm using Scala, and I'm open to other tools if they better suit the objective. The end goal is to have as much as possible a single command that will fire up all the dependencies and shut them down at the end of the test run.
I think I would implement a base class for these tests which spins up Redis before the tests run and then shuts it down after they finish.
For example in ScalaTest you would use the BeforeAndAfter trait:
http://doc.scalatest.org/2.2.1/#org.scalatest.BeforeAndAfter
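The BeforeAndAfter trait linked above is the ScalaTest way to express this. Purely as an illustration of the pattern, here is the same idea sketched with JUnit 4 class-level fixtures; it assumes redis-server is on the PATH, and the port number is arbitrary:

    import org.junit.AfterClass;
    import org.junit.BeforeClass;
    import org.junit.Test;

    public class RedisBackedTest {

        private static Process redis;

        @BeforeClass
        public static void startRedis() throws Exception {
            // Assumes redis-server is on the PATH; use a port reserved for tests.
            redis = new ProcessBuilder("redis-server", "--port", "6380").start();
            Thread.sleep(500); // crude wait for the server to accept connections
        }

        @AfterClass
        public static void stopRedis() {
            redis.destroy(); // shut the background process down after the suite
        }

        @Test
        public void runsAgainstAFreshInstance() {
            // test code talks to localhost:6380 here
        }
    }

Because the external process is owned by the test fixture itself, a single run command fires up the dependency and tears it down at the end, which is the "single command" goal described above.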

Change a value in a file and run all tests again

I wrote an integration test suite using NUnit. Since we're talking about integration tests, the test code uses configuration files, the file system, the database and so on.
However, I noticed that it would be nice to change the test environment (e.g. change a value inside a configuration file, which changes the code's behavior in some cases) and then run the full test suite again using this new environment.
Is there a way to automate this using NUnit? I have code that updates the file, so if I can somehow set things up programmatically, great.

Attach Current Build to Test

I'm playing around with Microsoft Test Manager 2013 (though it appears it is just MTM 2012) to try and get a better understanding of test cases and test suites, as I want to use this at work. I was hoping that I could run a test suite on a build which gets included in this test suite. That is what I WANT to do, but it could very well be wrong. So maybe a better description of what I'm doing at work might lead to a better answer.
My company makes tablet PCs. I write programs for those tablets. For the sake of argument, let's just say there are 5 tablets that run a similar array of OSs. Tablets 1, 2, 3 and 4 can run WinXP, WinXP Embedded, Win7, and Win7 Embedded, and Tablet 5 can run Win7, Win7 Embedded, and Win8 Embedded. Let's say I'm making a display test program. Naturally this display test will run differently on each tablet, but the program itself is supposed to handle that, without having to worry about the OS. So I wrote out a very simple test: open the program, try to open it again, verify only 1 instance is running, check the display, close the program.
I figured it would be good to make a test suite called "Complete Display Program Test" with 5 sub test suites under it, one for each tablet. Then I moved the 5 test cases into a single test suite, and configured all test cases to only have the correct tablet/OS configuration. I queued a build and waited for it to finish, then attached that build to the main test suite. When I clicked on run a test for Tablet 1, though, I didn't see the build attached in the Test Runner. I've looked around a little to see why or how and haven't found anything. The question is: how do I do that? Or, if you are scratching your head and wondering why in the world I am doing it this way, then by all means suggest another way. This is only the second time I have ever looked into MTM, so I might not be doing it right.
Thank you for your time.
When running manual tests from MTM you will not see the build you are using in Test Runner.
But if you complete the test and set the test outcome, you will be able to check which build you ran the test against.
Just double-click on the test or select "View Results" to display the test results.
The build column is not visible by default; you will have to right-click on the column header row and select the "Build number" column to display it.
You will also be able to see the build number in the "Analyse Test Runs" area.
Things are slightly different if you are running automated tests.
Consider the following approach:
Automate your Test Cases
See How to: Associate an Automated Test with a Test Case for details.
Create a Build Definition that builds your application under test AND the assemblies containing your tests.
I strongly recommend building the application you want to test and the test assemblies in the same Build Definition (you will see why a little bit later).
Run this build definition and deploy the latest version of the application to the environment where you want to run the tests.
This is very important to understand: when you run automated tests, only the test assemblies are deployed automatically to the environment.
It's your job to deploy the right version of the application you are going to test.
Now you can run tests from MTM.
You can do it the way described by @AndrewClear in the comment to this answer: choose "Run with Options" when you're beginning a test run, and select the latest build.
The test assemblies containing the tests used to automate the Test Cases will then be deployed automatically to the test environment, and the tests will be executed.
This is the point where you should recognize why it is so important to build the application and the tests with a single Build Definition: since the build number you selected when starting the tests is stored along with the test results on TFS, you will later know which version of your application you were testing (assuming you deployed the right version, of course).
You could go a little bit further if you want even more automation (this is the way I'm currently running automated tests).
Use the Build-Deploy-Test template (this is a good place to start reading about Setting Up Automated Build-Deploy-Test Workflows).
Using this approach you will be able to automate the deployment of the application you want to test.

NUnit vs MSTest - a fickle TDD novice's experiences with both of them

There are a ton of questions here on SO regarding NUnit vs. MSTest, and I have read quite a few of them. I think my question here is different enough to post separately.
When I started to use C#, I never even considered looking at MSTest because I was so used to not having it available when I was using C++ previously. I basically forgot all about it. :) So I started with NUnit, and loved it. Tests were very easy to set up, and testing wasn't too painful -- just launch the IDE and run the tests!
As many here have pointed out, NUnit has frequent updates, while MSTest is only updated as often as the IDE. That's not necessarily a problem if you don't need to be on the bleeding edge of TDD (which I'm not), but the problem I was having with frequent updates is keeping all of the systems up to date. I use about four or five different PCs daily, and while updating all of them isn't a huge deal, I was hoping for a way to make my code compile properly on systems with an older version of NUnit. Since my project referenced the NUnit install folder, when I upgraded the framework, any computers with the older framework installed would no longer be able to compile my project. I tried to combat the problem by creating a common folder in SVN that had just the NUnit DLLs, but even then it would somehow complain about the version number of the binary. Is there a way to get around this issue? This is what made me stop using it the first time.
Then one day I remembered MSTest, and decided to give it a try. I loved that it was integrated into the IDE. CTRL-R,CTRL-A, all tests run. How simple! But then I saw that the types of tests available in MSTest were pretty limited. I didn't know how many I'd actually really need, but I figured I should go back to NUnit, and I did.
About now I was starting to have to debug unit tests, and the only way I could figure out how to do it in NUnit was to set NUnit as the startup application, then set breakpoints in my tests. Then in the NUnit GUI, I would run the tests to hit the breakpoints. This was a complete PITA. I then looked at the MSTest GUI again, and saw that I could just click Debug there and it would execute my tests! WOW! Now that was the killer feature that swayed me back in favor of MSTest.
Right now, I'm back using MSTest. Unfortunately, today I started to think about daily builds and did some searching on Tinderbox, which is the only tool I had heard of before for this sort of thing. This then opened up my eyes to other tools like buildbot and TFS. So the problem here is that I think MSTest is guaranteed to lock me into TFS for automated daily builds, or continuous integration, or whatever the buzzword is. My company can't afford to get locked into MS-only solutions (other than VS), so I want to examine other choices.
I'm perfectly fine to go back to NUnit. I'm not thrilled about rewriting 100+ unit tests, but that's the way it goes. However, I'd really love for someone to explain how to squash those two issues of mine, which in summary are:
how do I set up NUnit and my project so that I don't have to keep upgrading it on every system to make my project build?
how do I get easier debugging of unit tests? My approach was a pain because I'd have to keep switching between NUnit and the default app to test/run my application. I saw a post here on SO that mentioned NUnitit on CodePlex, but I don't have any experience with it.
UPDATE -- I'm comparing stuff in my development VM, and so far, NUnitit is quite nice. It's easy to install (one click), and I just point it to whatever NUnit binaries are in my SVN externals folder. Not bad! I also went into VS -> Tools -> Options -> Keyboard and changed my mapping for CTRL-R,CTRL-A to map to NUnitit.Connect.DebugGUI. Not perfect since I haven't figured out how to make NUnit automatically run the tests when it's opened, but it's pretty good. And debugging works as it should now!
UPDATE #2 -- I installed TestDriven.Net and gave it a quick run through. Overall, I like it a lot better than NUnitit, but at the moment, NUnitit wins because it's free, and since it also works with NUnit, it will allow me to "upgrade" to TestDriven.Net when the time comes. The thing I like most about TestDriven.Net is that when I double click on the failed test, it takes me right to the line in the test that had failed, while NUnit + NUnitit doesn't seem to be capable of this. Has anyone been able to make this link between the NUnit GUI and the VS IDE happen?
Many projects I've worked on have included a copy of the specific version of NUnit (or xUnit.net, whatever) in a "lib" or "external" or "libraries" folder in their source control, and reference that location for building all of their tests. This greatly reduces the "upgrade everyone" headache, since you really don't need to install NUnit or xUnit.net to use it.
This approach will still let you use something like TestDriven.Net to execute the tests, run the tests in a debugger, etc.
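As a hedged sketch of what the project file ends up looking like with this layout (the ..\lib path is whatever folder you chose in source control):

    <!-- Hypothetical fragment of a test project's .csproj: reference the
         checked-in copy of NUnit instead of a machine-wide installation. -->
    <ItemGroup>
      <Reference Include="nunit.framework">
        <HintPath>..\lib\nunit.framework.dll</HintPath>
        <SpecificVersion>False</SpecificVersion>
      </Reference>
    </ItemGroup>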
For easier debugging (and running, too) of unit tests I recommend checking out TestDriven.Net. The "Test With > Debugger" feature is so handy. The personal version is free.
Have you played with the "Specific Version" property on the nunit.framework reference? We keep ours set to true so that tests coded against a given NUnit version require that specific version to execute.
I'm not sure how it would handle, for example, having 2.5 on your machine while another machine only had 2.4 - would .NET happily bind to the 2.4 version, or will it only bind from earlier references to later versions of an assembly (e.g. compiled against 2.4, but 2.5 available at runtime)?
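On that binding question: the CLR binds a strong-named assembly to the exact version the project was compiled against, so a machine that only has 2.4 will not silently satisfy a 2.5 reference in either direction. What you can do is add a binding redirect to the test project's configuration file to map a range of versions onto the one actually present. A sketch with illustrative version numbers (verify the public key token yourself, e.g. with sn -T nunit.framework.dll):

    <!-- Hypothetical App.config fragment redirecting older nunit.framework
         references to the single version deployed on this machine. -->
    <configuration>
      <runtime>
        <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
          <dependentAssembly>
            <assemblyIdentity name="nunit.framework"
                              publicKeyToken="96d09a1eb7f44a77" culture="neutral" />
            <bindingRedirect oldVersion="0.0.0.0-2.5.0.0" newVersion="2.5.0.0" />
          </dependentAssembly>
        </assemblyBinding>
      </runtime>
    </configuration>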