If you have NUnit integration tests that exercise the database, how do you run those tests from a build machine when the target database lives on a different server?
Essentially, I want to trigger the integration tests from the build server (using CruiseControl) but have them run on the target server, so I can exercise the database in question.
I think the simplest way is to expose the database over the network.
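If you go that route, the tests themselves only need a connection string that points at the remote server rather than localhost. A minimal sketch (the server and database names here are placeholders, and in practice you would read the connection string from config so the build server can override it):

```csharp
using System.Data.SqlClient;
using NUnit.Framework;

[TestFixture]
public class DatabaseConnectionTests
{
    // Placeholder server/database names -- substitute your own,
    // ideally loaded from a config file per environment.
    private const string ConnectionString =
        "Server=build-db01;Database=AppIntegrationTests;Integrated Security=true;";

    [Test]
    public void CanConnectToTargetDatabase()
    {
        using (var connection = new SqlConnection(ConnectionString))
        {
            connection.Open();
            Assert.AreEqual(System.Data.ConnectionState.Open, connection.State);
        }
    }
}
```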
NUnit tests can be run from C#/VB.NET code.
Based on that, you could create a page for running the tests: the page calls code-behind, which in turn runs your tests, so you can see the test results remotely.
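For example, with NUnit 3 you can embed the NUnitLite runner and kick off a test run from any code path, including a page's code-behind. A minimal sketch (the runner options are assumed from NUnitLite's documented command-line switches):

```csharp
using NUnitLite;

public static class TestLauncher
{
    // Runs every NUnit test in this assembly and returns the number of
    // failed tests (0 means everything passed). A web page's code-behind
    // could call this and render the outcome to the browser.
    public static int RunAllTests()
    {
        return new AutoRun(typeof(TestLauncher).Assembly)
            .Execute(new[] { "--noresult" });
    }
}
```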
This should get you started:
http://nunit.net/blogs/?p=23
Related
Currently, I'm creating a project that incorporates the MEAN stack, Docker, and Travis CI. I'm using Travis CI to automate builds for unit testing, integration testing, etc. I'm using Docker to help create a test environment. I've already successfully created unit tests thanks to resources via Medium. However, I haven't found many resources on writing integration tests for a MEAN application. I want to create tests to see if I get expected values in the Angular application when it connects to the REST API endpoints from Express, and the Express application is connected to a MongoDB server. Does anyone have any resources or advice on how to write these tests, and to execute them in a Dockerized test environment?
Having done something similar myself, just a piece of advice.
Test the services independently: e2e tests for the API server and the mail service, Selenium tests for the frontend web app. If the Selenium tests run fine against the web page/app, and the API endpoint responds on the local machine, then everything looks to be working. There is nothing magic in Docker: your local configs should reflect what you're trying to test. Avoid overcomplicating things, and write the testing yourself.
Tools often take more time to learn than the actual thing you're trying to accomplish would take if you did it yourself. Document it adequately so the consumer of the container can replicate the setup with minimal effort.
It's actually pretty hard, good luck.
The software I work on has both unit tests and system tests. System tests can take minutes to run: they take input values, and we validate the results against expected output. There are hundreds of system tests. The software must be built (that part is done) and tested on both Windows and Linux.
How can I automate this testing with VSTS? I'd like to avoid doing it at the build stage, because it would slow the builds down, but I can't see how to automate it in the Test stage. Do I need additional extensions for this? Everything seems geared toward web development, e.g. Selenium tests. How do we run automated tests for good old binary programs?
I would suggest using Release Management to deploy your application to a test environment and then run your tests as a part of your Release Definition. You can then choose to run tests in parallel to make sure that your system tests don't take days to run.
On a side note, having so many system tests is a code smell. I would suggest looking into building as many fast running unit tests as possible and only using system tests when absolutely necessary.
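For illustration, here is a hedged sketch of what one of those system tests might look like as an NUnit test that the release definition can run: it launches the binary with an input file and compares its stdout against a stored expected output. All paths, case names, and the expected-output convention are assumptions, not anything VSTS-specific:

```csharp
using System.Diagnostics;
using System.IO;
using NUnit.Framework;

[TestFixture]
public class SystemTests
{
    // Hypothetical path; a release definition would typically inject
    // this via an environment variable after deploying the build.
    private const string ExePath = @"bin\MyApp.exe";

    [TestCase("case001")]
    [TestCase("case002")]
    public void OutputMatchesExpected(string caseName)
    {
        var startInfo = new ProcessStartInfo
        {
            FileName = ExePath,
            Arguments = Path.Combine("input", caseName + ".txt"),
            RedirectStandardOutput = true,
            UseShellExecute = false
        };

        using (var process = Process.Start(startInfo))
        {
            string actual = process.StandardOutput.ReadToEnd();
            process.WaitForExit();

            string expected = File.ReadAllText(
                Path.Combine("expected", caseName + ".txt"));
            Assert.AreEqual(expected, actual);
        }
    }
}
```

Because each test case is independent, a parallel run in the release can spread them across agents to keep the total time down.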
I would like to send out an email notification from DBFit/Fitnesse when a test suite execution completes with some failed tests.
I am unable to find a working solution; has anyone got one?
How do you run your tests?
I run tests on a build server and have that e-mail me and my team when one or more tests fail.
I use FitNesse's JUnit runner to execute the tests as a project automatically. Build servers have standard features to detect failed JUnit tests and notify people about them.
I have a question that's very similar to what's discussed here:
Integration Test of REST APIs with Code Coverage
I deployed a war file that exposes the REST APIs to a web server and I'm using TestNG to write test cases for the REST APIs. I'm not unit testing - I'm only end-to-end / integration testing. Currently, I'm running test cases from eclipse in my machine.
My goal is to get coverage reports on the TestNG test cases.
Since the tests are local to my machine and the REST API is deployed in another server, EclEmma doesn't provide any meaningful data when I run the tests cases in my machine.
Is there a way to point EclEmma to the web server instead of my local machine and get the code coverage report?
Would it be better/possible to include the tests in the war file and run the tests from the web server? That should allow me to get the meaningful code coverage report, right?
The easiest way forward in cases like this is normally to start the web server inside your IDE and run the tests with coverage measurement there. Even better, start the web server from within the tests themselves; then a build tool like Maven can also produce the code coverage report.
I'm using TeamCity for my CI builds, and I'd like to set up a second build for running automated UI tests on Windows XP and Windows 7 virtual machines.
I imagine the build working as follows:
Compile, run unit tests, etc.
Prepare MSI using WiX
Copy MSI to target test machines
Remotely execute the MSIs
Copy test harness code to remote machine
Run tests
Build finishes
The automated UI tests are written using NUnit and would need to be run directly on the test virtual machine (they can't run remotely). It's important that if the tests fail, it appears in the TeamCity build log and the build fails. I'd rather not install VS or the TeamCity build agents on either of the test virtual machines.
It seems that most of this should be possible using psexec.exe. Are there any alternative (preferably open source) tools that I should look at?
takes a deep breath
We were looking into something to help us out with our automated UI tests. We use Ranorex to test the UI and TeamCity/MSBuild to execute the tests.
We never found any tools to help us out (I’m constantly keeping an eye out for some so will monitor this thread) but here is what we did instead.
The CI server copies the setup files and test scripts to the Testing Host Server.
The CI server then launches a custom app on the Testing Host Server providing the name of the VM to launch.
The Testing Host Server then launches the VM software, using Virtual PC.exe -singlepc -pc vhdname.vhd -launch, and waits for it to shut down (after it has run its tests).
The VM grabs the setup files and scripts from the network location and executes them.
After the tests are run it then returns the results to a networked location and shuts itself down.
Control is returned to the custom app.
Control is returned to the CI server which determines from the results if it has passed or failed (and updates the UI so developers are made aware of the result).
Results are collected as artifacts in TeamCity and tagged in SVN.
I think that's everything. It's convoluted; however, it works. Hope some of that helps you.
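For illustration only, a rough C# sketch of what that custom app's core loop might look like (the Virtual PC path, the network share, and the results format are all made up):

```csharp
using System;
using System.Diagnostics;
using System.IO;

public static class VmTestLauncher
{
    public static int Main(string[] args)
    {
        string vhdName = args[0]; // e.g. "WinXP-TestRig.vhd" (hypothetical)

        // Launch the VM; it runs its tests and then shuts itself down.
        var vm = Process.Start(
            @"C:\Program Files\Microsoft Virtual PC\Virtual PC.exe",
            "-singlepc -pc " + vhdName + " -launch");
        vm.WaitForExit();

        // Read the results the VM dropped on the network share.
        string results = File.ReadAllText(@"\\fileserver\testruns\results.txt");
        Console.WriteLine(results);

        // A non-zero exit code lets the CI server fail the build.
        return results.Contains("FAILED") ? 1 : 0;
    }
}
```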
Jeff Brown of the Gallio team has been talking about a tool called Archimedes that he's planning to write to support this kind of requirement. It sounds promising, but I don't think there has been much progress on it so far.
In the meantime, though, there is something in the Gallio project called VM Tool that may do what you want. It provides commands to stop, start, and snapshot virtual machines and, more importantly, to copy files back and forth and execute commands.
I presume you have also considered PowerShell Remoting?