FitNesse Test History with Version Control

I'm starting some automated acceptance testing for our company, and have decided to use FitNesse.
I want to have FitNesse under source control - that is, the FitNesse executable plus plugins, the wiki pages, and the test fixture source code.
Then anyone - including a Continuous Integration server - can get everything they need from source control to build and run the acceptance tests locally.
I have read that page versioning can be turned off using the -e 0 parameter. Then we don't have ZIP files in the FitNesse root folder under source control - nice.
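For reference, a minimal startup command would then look something like this (jar name and port are illustrative):
java -jar fitnesse-standalone.jar -p 8080 -e 0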
But what about test history? Do I want the history of locally run tests to be checked in? And when someone gets the latest version, do they want their local test history to be overwritten?
I'm very grateful to anyone who can share their experiences of using FitNesse in similar scenarios to that described above.

Why don't you clear all test history before check-in?
In my current project, there are only two required operations for creating test cases in FitNesse: drawing the table (editing the page) and developing the API (output as .dll files).
We also developed a tool for triggering FitNesse test runs from remote machines automatically. After testing finishes, we get the results by processing the output Excel files.
The structure of our SVN:
-SVN
--FitNess
--- TestLib
--- FitNesse
---- FitNesseBin
---- TestCases
[Update]
Test fixture code should be finished and frozen before testers start writing test cases and running them. Certainly, when a test fixture needs a bug fix or an enhancement, the code can still be changed. In my team, we ask different roles to handle different tasks: developers provide the API for testing use, senior QA wraps the API in test fixtures, and QA writes the tables/wiki. Each role takes only its assigned parts. Before modification, a team member should update their local copy of the FitNesse tree and check out the file, and only check in the modified checked-out file.

Disabling and auto-purging of test history is still a valid requirement for those integrating FitNesse into a standard build (e.g. Maven) and for running FitNesse builds locally, despite the .svnignore/.gitignore options.
Test history slows down the finalisation of each test run, and when it's purged after a few runs you will certainly notice the difference.
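If you only want to keep the history out of version control rather than disable it, a minimal .gitignore sketch along these lines should do - the testResults path assumes the default FitNesseRoot layout:
# test history written by local runs
FitNesseRoot/files/testResults/
# page version zips, if you don't start FitNesse with -e 0
FitNesseRoot/**/*.zip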
Uncle Bob mentioned he was working on an option to keep only the test history for the latest test run (you always need at least one so you can show the results) here: http://tech.groups.yahoo.com/group/fitnesse/message/14306 - but I cannot see such an option in the code. I got a Python error when trying to reply to the post, so unfortunately no answer on that option :-(

Related

VS Code Extension tests not found on Azure Pipeline

I have an extension, initially created using the standard yo code template and successfully uploaded to the marketplace. I have created a test suite, which works correctly when run locally (i.e. pressing F5), and I now want to add CI testing to the GitHub repo.
I followed the instructions on Continuous Integration and created a config file. The extension now builds successfully; however, it appears that no tests are discovered.
For example, in this build I intentionally introduced a failing test, but it still passes.
Is there a step I'm missing, or a good way to debug the problem?
See the Issue I opened for the answer. Currently, the tests fail silently if you do not have the required dependencies listed.

How to build a definition and publish test results for a Java project with Maven, JUnit and Selenium on Visual Studio Team Services (VSTS)

I have an automation script that uses a Maven POM.xml to import all the dependencies needed from Selenium and JUnit. The main test uses Selenium to open a browser, verify some information, and close the browser, and then the test ends.
When run as a JUnit test it works fine, and when run as a Maven test it works fine as well.
In both scenarios, the program opens the browser and navigates through the website as it should do for an automated test.
Now I need to integrate it into VSTS so I can visualize the overall pass/fail results on the VSTS dashboard, but I'm not very familiar with this tool yet.
So far this is what I have managed to do:
Deploy an agent on my Windows PC (I want to execute and deploy the project on an Azure VM or another Azure instance later on). NOTE: this is the same PC on which I'm successfully running the program from Eclipse, as shown above. https://learn.microsoft.com/en-us/vsts/build-release/actions/agents/v2-windows?view=vsts
Create a build definition on VSTS, but when I queue the definition, the build fails.
I don't know why it can't find my config.txt file, since it is located on the same hosted agent in that same directory. I would appreciate it if someone could guide me through this process so I can run the program from VSTS and visualize which tests pass and fail on the VSTS dashboard.
UPDATE: I moved the config.txt file to a public directory and the build was successful (I still need to fix this issue, because I do not want my work in a public folder).
Now the problem I have is that even though the build is successful and it looks like it is running my "3 tests", when I look at my PC nothing is happening. It should open Chrome and take a screenshot, then open Firefox and take another screenshot, and finally open Internet Explorer and take another screenshot, saving each test in a different folder - but it is only generating folders for Chrome and Internet Explorer (and even those folders do not contain the screenshots I'm expecting, probably because the browsers are never being opened on the computer).
Here is the log: https://drive.google.com/open?id=1S_MhAUmzj8i9phPQiqS06s0_1cCRrbF0
(Screenshots: test output report generated on my computer; test output on VSTS.)
Look at the error message - it tells you precisely what the problem is:
java.io.FileNotFoundException: Y:\Automation Team\CopaQA\Architecture\local\config.txt (The system cannot find the path specified)
You need to stop relying on hard-coded paths.
You say you registered a build agent against your VSTS account... but did you change the agent queue for your build? If the agent queue is "Hosted", you're using Microsoft's hosted agent.
I don't know why it can't find my config.txt file, since it is located on the same hosted agent in that same directory.
It turns out that java.io can't read files located on a mapped network drive (a drive letter mapped for your user is generally not visible to the agent's session); I solved this by using the UNC path to that file (//computername/directory/file.txt).
Now the problem I have is that even though the build is successful and it looks like it is running my "3 tests", when I look at my PC, nothing is happening.
It took me a little reading to realize that to perform UI tests, my agent needs to be set up in INTERACTIVE MODE. That can be done by following this guide: https://learn.microsoft.com/en-us/vsts/build-release/actions/agents/v2-windows?view=vsts
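For reference, a rough sketch of configuring the v2 agent for interactive mode (account URL, PAT and agent name are placeholders) - the point is to not configure it as a service and then start it with run.cmd:
config.cmd --url https://youraccount.visualstudio.com --auth pat --token <your-PAT> --pool default --agent MyBuildAgent
run.cmd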

JaCoCo code coverage for a remote machine

I tried to find the answer to this but hardly found it anywhere. I am doing API testing; in the process, I need to call the REST API from my local machine. The local machine contains the Maven project and a framework to call the respective REST API.
I need to check the code coverage of the remote REST API and produce a report based on that coverage. Please help - how do I do that?
Note: I found this link useful, but it does not elaborate clearly on what to do:
http://eclemma.org/jacoco/trunk/doc/agent.html
You will probably have to do a bit of file copying around, depending on the way you run the tests.
JaCoCo runs as a Java agent, so you usually add the javaagent parameter, as mentioned in the docs you linked, to the start script of your application server:
-javaagent:[yourpath/]jacocoagent.jar=[option1]=[value1],[option2]=[value2]
so it would look like:
java -javaagent:[yourpath/]jacocoagent.jar -jar myjar.jar
Using Tomcat, you can add the "-javaagent" part to the JAVA_OPTS or CATALINA_OPTS environment variables. It should be similar for other servers.
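For example, a setenv.sh sketch for Tomcat might look like this (the agent path, output file and package filter are assumptions for illustration):
CATALINA_OPTS="$CATALINA_OPTS -javaagent:/opt/jacoco/jacocoagent.jar=destfile=/tmp/jacoco-it.exec,includes=com.mycompany.*"
export CATALINA_OPTS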
This will create the jacoco*.exec files. You need to copy those back to your build or CI server to show the results (for example, if you use Sonar, you need those files before running the Sonar reporter). It's important to include just the packages you're interested in.
You can also create one .exec file per test flavour (jacoco.exec for unit tests, jacoco-it.exec for integration tests, jacoco-at.exec for application tests).
And I would not mix coverage with performance testing - just to mention that too.
There are some examples on Stack Overflow for JBoss.

Attach Current Build to Test

I'm playing around with Microsoft Test Manager 2013 (though it appears it is really just MTM 2012) to try to get a better understanding of test cases and test suites, as I want to use this at work. I was hoping that I could run a test suite on a build which is attached to that test suite. That is what I WANT to do, but it could very well be wrong, so maybe a better picture of what I'm doing at work will lead to a better answer.
My company makes tablet PCs. I write programs for those tablets. For the sake of argument, let's say there are 5 tablets running a similar array of OSes: Tablets 1-4 can run WinXP, WinXP Embedded, Win7, and Win7 Embedded, and Tablet 5 can run Win7, Win7 Embedded, and Win8 Embedded. Let's say I'm making a display test program. Naturally this display test will run differently on each tablet, but the program itself is supposed to be able to handle that, without having to worry about the OS. So I wrote out a very simple test: open the program, try to open it again, verify only one instance is running, check the display, close the program.
I figured it would be good to make a test suite called "Complete Display Program Test" with 5 sub test suites under it, one for each tablet. Then I moved the 5 test cases into a single test suite. I configured all test cases to only have the correct tablet/OS configuration, queued a build, and waited for it to finish. I then attached that build to the main test suite and started a test run for Tablet 1, but I didn't see the build attached in Test Runner. I've looked around a little to see why or how and haven't found anything. The question is: how do I do that? Or, if you are scratching your head and wondering why in the world I am doing it this way, then by all means suggest another way. This is only the second time I have ever looked into MTM, so I might not be doing it right.
Thank you for your time.
When running manual tests from MTM you will not see the build you are using in Test Runner.
But if you complete the test and set the test outcome, you will be able to check which build you ran the test against.
Just double-click on the test or select "View Results" to display the test results.
The build column is not visible by default; you will have to right-click on the column row and select the "Build number" column to be displayed.
You will also be able to see the build number in the "Analyse Test Runs" area.
Things are slightly different if you are running automated tests.
Consider the following approach:
Automate your Test Cases
See How to: Associate an Automated Test with a Test Case for details.
Create a Build Definition building your application under test AND assemblies containing your tests.
I strongly recommend building the application you want to test and the assemblies containing your tests in the same Build Definition. (You will see why a little bit later.)
Run this build definition and deploy the latest version of the application to the environment where you want to run the tests.
This is very important to understand: when you run automated tests, only the test assemblies are deployed automatically to the environment.
It's your job to deploy the right version of the application you are going to test.
Now you can run tests from MTM.
You can do it the way described by @AndrewClear in the comment to this answer: choose "Run with Options" when you're beginning a test run, and select the latest build.
Now the test assemblies containing the tests used to automate your Test Cases will be deployed automatically to the test environment, and the tests will be executed.
This is the point where you should recognize why it is so important to build the application and the tests with a single Build Definition: since the build number you selected when starting the tests is stored along with the test results on TFS, you will later know which version of your application you were testing (assuming you deployed the right version, of course).
You could go a little bit further if you want even more automation (this is the way I'm currently running automated tests).
Use the Build-Deploy-Test template (this is a good place to start reading about Setting Up Automated Build-Deploy-Test Workflows).
Using this approach you will be able to automate the deployment of the application you want to test.

Storing third-party frameworks/middleware that need to alter your compiler/IDE in source control

I know there are posts that ask how one stores third-party libraries in source control (such as this and this). While those are great answers, I still can't find the answer to this:
How do you store third-party middleware/framework binaries that need to alter your compiler/IDE for the library to work properly? Note: for my needs, I don't need to store the middleware source; I only store header files / libs / JARs, so that everything is ready to be linked.
Typically, you simply link libraries to your app and you are good. But what about middleware/frameworks that need more?
Specific examples:
Qt moc pre-processor.
ZeroC Ice Slice (ice) compiler (similar to CORBA IDL preprocessor).
Basically, these frameworks/middleware need to generate their own code before your application can link against it.
From the developer's point of view, ideally he wants to just check out and have everything ready to go. But then my IDE/compiler will not be set up properly yet, so the compilation will fail.
What do you think?
Back up everything, including the setup of the IDE, the operating system, etc. This is what I do:
1) Store all 3rd-party libraries in source control. I have a branch for all the libraries.
2) Back up the entire tool chain which was used to build. This includes every tool. Each tool is installed into the same directory on each developer's computer, so this makes it simple to set up a developer's machine remotely.
3) This is the most hardcore, but prepare one perfect, clean developer IDE setup, then make a VMware/VirtualPC image out of it. This will be useful when you can't seem to get the installers to work in the future.
I learned this lesson the painful way, because I often have to wade through Visual Studio 6 code which doesn't build properly.
I think that a better solution is to make sure that the build is self-contained and downloads all necessary software for itself unless you tell it otherwise. This is the way Maven works, and it is really handy. The downside is that it sometimes needs to download an application server or similar, which is highly impractical, but at least the build succeeds, and it becomes the new developer's responsibility to improve the build if needed.
This does of course not work well if your software needs attended installs, but I would try to avoid such dependencies in any case. You can add alternative routes (e.g. the Ant script compiles the code if Eclipse hasn't done it yet). If this is not feasible, an alternative is to fail with a clear indication of what went wrong (e.g. 'CORBA_COMPILER_HOME' not set, please set and try again).
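As a sketch of that fail-fast idea, a small Windows batch wrapper could check the variable before calling the real build (the variable name is taken from the example above):
@echo off
if "%CORBA_COMPILER_HOME%"=="" (
    echo 'CORBA_COMPILER_HOME' not set, please set and try again.
    exit /b 1
)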
All that said, the most complete solution is of course to ship everything with your app (i.e. the OS, the IDE, the works), but I doubt that is applicable in the general case - how would you feel about that kind of requirement to build a software product? It also limits people who want to adapt your software to new platforms.
What about adding one step?
A NAnt script which is started with a bat file. The developer would only have to execute one .bat file; the bat file starts NAnt, and the NAnt script can be made to do anything you need.
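A minimal sketch of that one-step entry point (file names are illustrative):
@echo off
rem build.bat - the only thing a developer has to run
nant -buildfile:build\main.build %*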
This is actually a pretty subtle question. You're talking about how to manage features of the environment which are necessary in order to allow your build to proceed. In this case it's the top level of your code toolchain, but the problem can be generalised to include the entire toolchain, and even key aspects of the operating system.
In my place of work, we have various requirements of the underlying operating system before our code will successfully run. This includes machine-specific configurations as well as ensuring correct versions of system libraries and language runtimes are present. We've dealt with this by maintaining a standard generic build machine image which contains the toolchain requirements we need. We can push this out to a virgin machine and get a basic environment that contains the complete toolchain and any auxiliary programs.
We then use fsvs to version control any additional configuration, which can be layered on to specific groups of machines as needed.
Finally, we use custom scripts hooked in to our CI server (we use Hudson) to perform any pre-processing steps required for specific projects.
The main advantages of this approach for us are:
We can build and deploy developer and production machines very easily (and have IT handle this side of the problem).
We can easily replace failed machines.
We have a known environment for testing (we install everything to a simulated 'production server' before going live).
We (the software team) version control critical configuration details and any explicit pre-processing steps.
I would outsource the task of building the middleware to a specialized build server and only include the binary output as regular 3rd-party dependencies under source control.
Whether this strategy can be successfully applied depends on whether all developers need to be able to change the middleware code and recompile it frequently. But this issue could also be solved via a Continuous Integration server like TeamCity that allows you to create private builds.
Your build process would then look like the following:
Middleware repo containing middleware code
Build server, building middleware
Push the middleware build output to the project repository as 3rd-party references
Update: This doesn't really answer how to modify the IDE. It's just a sort of Maven replacement for C++/Python/Java. You shouldn't need to modify the IDE to build stuff; if you do, you need a different IDE or a system that generates/modifies the IDE files for you. (See CMake for a cross-platform C/C++ project file generator.)
I've written a system (first in Ant/BeanShell at two different places, then rewritten in Python at my current job) where third-party libraries are compiled separately (by someone), stored, and shared via HTTP.
A somewhat hurried description follows:
Upon start, the build system looks through all modules in the repo and executes each module's setup target, which downloads the specific version of a third-party lib or app that the current code revision uses. These are then unzipped, PATH/INCLUDE etc. are extended (or, for small libs, the files are copied to a single directory for the current repo), and Visual Studio is then launched with /useenv.
Each module's file checks for the things it needs; anything that requires installation and licensing, such as Visual Studio, Matlab or Maya, must already be on the local computer. If it isn't there, the cmd file will fail with a nice error message. This way, you can also check that the correct version is installed.
So there are a number of directories on the local disk involved. %work% needs to be set via a global environment variable, preferably on a different disk than the system or the source checkout, at least if doing heavy C++:
%work% <- local store for all temp files, unzips, and each working copy's temp files
%work%/_cache <- downloaded zips (2 GB)
%work%/_local <- local zips (for development, or retrieved by other means while travelling)
%work%/_unzip <- unzips of files in _cache (10 GB)
%work%/_content <- textures/3D models and other big files (synchronized manually; this is 5 GB today, not suitable for VC either)
%work%/D_trunk/ <- store for the working copy checked out to d:/trunk
%work%/E_branches/v2 <- store for the working copy checked out to e:/branches/v2
So, if trunk uses Boost 1.37 and branches/v2 uses 1.39, both boost-1.39 and boost-1.37 reside in /_cache/ (as zips) and /_unzip/ (as raw files).
When starting Visual Studio using the bat file d:/trunk/BuildSystem/Visual Studio.cmd, INCLUDE points to /_unzip/boost-1.37, while when running e:/branches/v2/BuildSystem/Visual Studio.cmd, INCLUDE points to /_unzip/boost-1.39.
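A stripped-down sketch of what such a Visual Studio.cmd might do (paths are illustrative; /useenv makes devenv pick up INCLUDE/LIB from the environment instead of its own settings):
@echo off
rem extend the environment with the unzipped third-party libs for this branch
set INCLUDE=%work%\_unzip\boost-1.37\include;%INCLUDE%
set LIB=%work%\_unzip\boost-1.37\lib;%LIB%
devenv MySolution.sln /useenv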
In the repo, only a small set of bootstrap binaries need to be stored (i.e. wget and 7z).
We currently download about 2 GB of packed data, which is unzipped to 10 GB (PDB files are huge!), so keeping this out of source control is essential. This system allows us to keep the repo size small enough to use a DVCS such as Mercurial (or Git) instead of SVN, which is very nice. (I'm thinking of using Mercurial's bigfiles extension, or file sharing, instead of a separately HTTP-served directory.)
It works flawlessly. Developers only need to check out, set an environment variable for their local cache, and then run Visual Studio via a specific batch file in the repo. No unzipping or compiling or anything. A new developer can set up his computer in no time. (Installing Visual Studio takes an order of magnitude more time.)
The first time on a new computer takes a while, but after that it's fast - only a few seconds. Downloads/unzips are shared on the local computer, so checking out additional branches/versions does not occupy more space. Working offline is also possible; you just need to get the zip files manually if new ones have been uploaded. (This mechanism is essential for testing new versions/compilations of third-party libraries.)
The basics are in a repo on Bitbucket, but it needs more work before it's ready for the public. Apart from docs and polish, I plan to:
extend it to use CMake instead of raw vcproj files, to make it more cross-platform;
script the entire process from checkout/download of third-party packages to building and zipping them (including storing the download in a local repo) - currently that's on my dev computer. Not good. Will fix. :)
As for moc, we use Qt's Visual Studio add-in, which stores this in the .vcproj files. It works well. I do think that CMake is one of the best answers for this, though.