As a simple example, I have some tests which rely on a fresh (read "empty") local Redis instance. My typical workflow has been to fire up the instance from the terminal and just restart it or run flushdb manually.
If possible, I'd love to wrap this up in the Run configuration of my tests. The configuration dialog allows me to set up "Before launch" tasks, but these appear to run sequentially. I really want something running in another process in the background that can be shut down at the end of the tests.
I have a few other external processes that I'd like to handle in a similar fashion. I'm not sure the Run/Debug configuration is the right approach. I'm using Scala, and I'm open to other tools if they better suit the objective. The end goal is, as much as possible, a single command that fires up all the dependencies and shuts them down at the end of the test run.
I think I would implement a base class for these tests which spins up Redis before the tests run and shuts it down afterwards.
For example, in ScalaTest you would use the BeforeAndAfter trait:
http://doc.scalatest.org/2.2.1/#org.scalatest.BeforeAndAfter
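A minimal sketch of that idea, assuming redis-server is on your PATH and that spawning it via scala.sys.process is acceptable (the port and the sleep are illustrative, not prescriptive):

import org.scalatest.{BeforeAndAfter, FunSuite}
import scala.sys.process._

class RedisBackedSpec extends FunSuite with BeforeAndAfter {

  private var redis: Process = _

  before {
    // fork a fresh local Redis instance before each test
    redis = Process("redis-server --port 6379").run()
    Thread.sleep(500) // crude wait until the server accepts connections
  }

  after {
    // kill the instance so the next test starts from an empty database
    redis.destroy()
  }

  test("something that needs an empty Redis") {
    // client code against localhost:6379 goes here
  }
}

If starting Redis once per suite is enough, BeforeAndAfterAll works the same way with beforeAll/afterAll.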
Related
I often have to run time-consuming experiments in Scala, and usually I run a second sbt instance for the same project, where I make changes to the code that is running in the other instance and compile.
The reason I do this is so that I don't have to wait for a long-running process to finish before I make progress with my code.
My question is: Is it safe to do that, or is there a possibility that recompiling parts of currently running code in sbt/scala will cause problems in my running process?
What I have observed so far is that most of the time it is fine, but I did run into a class not defined error once when refactoring my code while running.
As #marcus mentioned, if the compiler writes a .class file that your running JVM has not yet loaded, that file may later be loaded and fail to match the classes already in memory. In many instances you'll be fine, but it can cause problems. There are a few things you can do in this situation:
1. Compile in separate directories. Check your code out into two completely different directories and do local commits (assuming you're using git) to push/pull from one copy of the repository to another. This will ensure that your testing doesn't get the compilation changes until you're ready (when you "pull" from the development repository).
2. Use an automated CI system like Jenkins or Travis to run your tests on each commit. This will, similarly to #1, not conflict with your development work since it is a separate checkout of the code.
3. Use sbt-revolver, which runs the program in a separate JVM with the re-start command and will restart it whenever there are changes (see the sketch after this list). This would interrupt your testing, however.
4. Use JRebel, which does a better job of reloading classes than the JVM or most IDEs.
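For option 3, the sbt-revolver setup is roughly the following (the version number is an assumption; check the plugin's README for the release matching your sbt version):

project/plugins.sbt:
addSbtPlugin("io.spray" % "sbt-revolver" % "0.9.1")

Then re-start (reStart in newer releases) forks your app in its own JVM, re-stop shuts it down, and prefixing the command with ~ restarts it on every source change.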
I'm using the xsbt-web-plugin to host my servlet. It works fine, using container:start.
I now need it to run in the background, like a daemon, even if I hang up, and ideally, even if the machine reboots. I'd rather not have to invoke sbt.
I know that the plugin can package a WAR file, but I'm not running tomcat or anything like that. I just want to do what container:start does, but in a more robust (read: noninteractive) way.
(My goal is a dev demo: I'd hate for my SSH session to drop and take sbt down with it, or something like that, while people are using the demo. But we're not ready for production yet and have no servlet infrastructure.)
xsbt-web-plugin is really not meant to act as a production server (with features like automatic restarting, fault recovery, running on boot, etc.); however, I understand the utility of using it this way for small-scale development purposes.
You have a couple of options:
First approach
Run sbt in a screen session, which you can detach from and reattach to at will without interrupting sbt.
Second approach
Override the shutdown function that triggers on sbt's exit hook, so that the container keeps running after sbt stops.
For this approach, add the following setting to your sbt configuration:
build.sbt:
onLoad in Global := { state => state }
Note that this will override the onLoad setting entirely, so in the (unlikely) case that you have it configured to do other important things, they won't happen.
Now you can launch your container either by running container:start from sbt and then exiting sbt, or simply by running sbt container:start from the command line, which will return after forking the container JVM. Give it a few seconds, then you should be able to make requests to localhost:8080.
I have a command-line application that I want to run in a build configuration for the duration of the build, then shut it down at the end when all other build steps have completed.
The application is best thought of as a stub server, which will have a client run against it, then report its results. After the tests, I shut down the server. That's the theory anyway.
What I'm finding is that running my stub server as a command line build step shuts down the stub server immediately before going to the next build step. Since the next build step depends on the server running, the whole thing fails.
I've also tried using the custom script option to run both tools one after another in the same step, but that results in the same thing: the server, launched on the first line, is shut down before invoking the second line of the script.
Is it possible to do what I'm asking in TeamCity? If so, how do I do it? Please list any possibilities, right up to creating a plugin (although the easier, the better).
Yes, you can. Do it in an NAnt script and have TeamCity run the script; look into spawn and the NAntContrib waitforexit.
However, I think you would be much better off creating a mock class that the client uses only when running the tests, instead of round-tripping to the server during the build. That can be a bit problematic: sometimes ports are closed, sometimes the server hangs from the last run, etc. With a mock you can run the tests and make sure the code is doing the right thing when the mock returns whatever it needs to return.
It would be great to improve test-driven development productivity by automatically firing off tests whenever there is a code change.
This is what I'm hoping for.
Whenever a Scala file is saved, SBT (or a shell script) should execute the ScalaTest or Specs2 specifications.
When the tests complete, the system should play a sound indicating success or failure.
I'm using Scala IDE for development and SBT to run my test specs at the moment, so just automating the steps above would save a person from switching to the console, running the test specs, and waiting for the result.
Any ideas on automatically firing off the tests and playing a 'success' or 'failure' sound would be great.
Why not create a custom SBT task that depends on the test task? You could add the code for playing the sound to your build definition.
See here for how to run a custom task after another task.
To automatically re-run tests, simply prefix the newly defined task with a ~ in the SBT shell before running it.
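A rough sketch of such a task (the task name testWithSound and the afplay command are illustrative assumptions; swap in whatever audio player your OS provides):

build.sbt:

lazy val testWithSound = taskKey[Unit]("Run the tests, then play a success or failure sound")

testWithSound := {
  // .result captures the outcome of the test task instead of aborting on failure
  val sound = (test in Test).result.value match {
    case Value(_) => "success.wav"
    case Inc(_)   => "failure.wav"
  }
  // afplay is the macOS player; use aplay or paplay on Linux
  scala.sys.process.Process(Seq("afplay", sound)).!
}

Running ~testWithSound from the SBT shell then re-runs the tests and plays the sound on every source change.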
I install "MoreUnit" as a plugin in eclipse. but, when starting eclipse, tests will be launched automatically. This presents a problem for me, because the tests inclurent of the heads of CRUD. Therefore, because of this automatic startup, the database will be empty after a certain time.
How to ban moreunit for executing automatically the tests ?
MoreUnit is a tool to assist in unit testing. If your tests are doing anything with a database, they are not unit tests. The reason for this is that if you test your class with a real database connection, you are also testing the database along with your class.
You should decouple your dependency on the database with a mock (see my answer here for an idea how to do this).
If you are doing data-driven tests, then it would be better to use a tool such as DbUnit to drive your tests rather than relying on a real database connection. With such a tool, you will have control over the data for each test and won't have to worry that tests fail because someone else updated data in the database or that you executed your tests in "the wrong order".
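One common shape of that decoupling, sketched in Scala for consistency with the rest of this post (the same pattern applies directly in Java; all names here are illustrative):

trait UserRepository {
  def find(id: Long): Option[String]
  def delete(id: Long): Unit
}

// in-memory fake used only by the tests: no real database is touched
class InMemoryUserRepository extends UserRepository {
  private val users = scala.collection.mutable.Map(1L -> "alice")
  def find(id: Long): Option[String] = users.get(id)
  def delete(id: Long): Unit = users -= id
}

// the class under test depends only on the interface, so tests can hand it the fake
class UserService(repo: UserRepository) {
  def remove(id: Long): Boolean =
    repo.find(id) match {
      case Some(_) => repo.delete(id); true
      case None    => false
    }
}

The production code wires in an implementation backed by the real database; the unit tests construct UserService with the fake and never depend on database state.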