JaCoCo code coverage for a remote machine - REST

I tried to find this answer but could hardly find it anywhere. I am doing API testing; in the process I need to call the REST API from my local machine. The local machine contains the Maven project and a framework to call the respective REST API.
I need to check the code coverage of the remote REST API and build a report based on that coverage. Please help: how do I do that?
Note: I found this link useful, but it does not elaborate clearly on what to do:
http://eclemma.org/jacoco/trunk/doc/agent.html

You will probably have to do a bit of file copying around, depending on the way you run the tests.
JaCoCo runs as a Java agent, so you usually add the javaagent parameter mentioned in the docs you linked to the start script of your application server:
-javaagent:[yourpath/]jacocoagent.jar=[option1]=[value1],[option2]=[value2]
So it would look like:
java -javaagent:[yourpath/]jacocoagent.jar -jar myjar.jar
With Tomcat you can add the "-javaagent" part to the JAVA_OPTS or CATALINA_OPTS environment variables. It should be similar for other servers.
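For example, with Tomcat you could add something like the following to bin/setenv.sh; the agent path, destfile location and includes pattern here are placeholders, not values from the question:
export CATALINA_OPTS="$CATALINA_OPTS -javaagent:/opt/jacoco/jacocoagent.jar=destfile=/opt/jacoco/jacoco.exec,includes=com.mycompany.*"
(destfile and includes are standard JaCoCo agent options; includes limits recording to the packages you care about.)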
This will create the jacoco*.exec files. You need to copy those back to your build or CI server to show the results (for example, if you use Sonar, you need those files before running the Sonar reporter). It's important to include only the packages you're interested in.
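To turn a copied .exec file into a report, one option is to run the JaCoCo Maven plugin's report goal inside the Maven project of the remote application (the report needs that application's compiled classes); the file path below is just a placeholder:
mvn org.jacoco:jacoco-maven-plugin:report -Djacoco.dataFile=/path/to/copied/jacoco.exec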
You can also create one jacoco.exec file per test flavour (jacoco.exec for unit tests, jacoco-it.exec for integration tests, jacoco-at.exec for application tests).
And I would not mix coverage with performance testing - just to mention that too.
There are some examples on Stack Overflow for JBoss.

How to build a definition and publish test results for a Java project with Maven, JUnit and Selenium on Visual Studio Team Services (VSTS)

I have an automation script that uses a Maven POM.xml to import all the dependencies needed from Selenium and JUnit. The main test uses Selenium to open a browser, verify some information and close the browser, and then the test ends.
When run as a JUnit test it works fine (screenshot: run as JUnit test).
When run as a Maven test it works fine as well (screenshot: run as Maven test).
In both scenarios, the program opens the browser and navigates through the website as it should for an automated test.
Now I need to integrate it with VSTS so I can visualize the overall pass/fail results on the VSTS dashboard, but I'm not very familiar with this tool yet.
So far this is what I have managed to do:
Deploy an agent on my Windows PC (I want to execute and deploy the project on an Azure VM or another Azure instance later on). NOTE: this is the same PC on which I successfully run the program from Eclipse, as shown in the screenshots above. https://learn.microsoft.com/en-us/vsts/build-release/actions/agents/v2-windows?view=vsts
Create a build definition on VSTS, but when I queue the definition the build fails (screenshots: build definition and the build failure).
I don't know why it can't find my config.txt file, since it is located on the same hosted agent in that same directory. I'd appreciate it if someone could guide me through this process so I can run the program from VSTS and visualize which tests pass and fail on the VSTS dashboard.
UPDATE: I moved the config.txt file to a public directory and the build was successful (I still need to fix this properly, because I do not want my work in a public folder).
Now the problem I have is that even though the build is successful and it looks like it is running my "3 tests", when I look at my PC nothing is happening. It should open Chrome and take a screenshot, then open Firefox and take another screenshot, and finally open Internet Explorer and take another screenshot, saving each test in a different folder. But it only generates folders for Chrome and Internet Explorer, and even those folders do not contain the screenshots I'm expecting, maybe because the browsers are never actually opened on the computer.
Here is the log: https://drive.google.com/open?id=1S_MhAUmzj8i9phPQiqS06s0_1cCRrbF0
(Screenshots: test output report generated on my computer; test output on VSTS.)
Look at the error message. It tells you precisely what the problem is: java.io.FileNotFoundException: Y:\Automation Team\CopaQA\Architecture\local\config.txt (The system cannot find the path specified)
You need to stop relying on hard-coded paths.
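One approach (the config.path property name here is hypothetical, not something VSTS or Maven defines) is to check config.txt into the repository and pass its location to the tests as a system property via the Maven task's options, using the predefined $(Build.SourcesDirectory) variable:
mvn test -Dconfig.path=$(Build.SourcesDirectory)\config\config.txt
The test code would then read the location with System.getProperty("config.path") instead of a hard-coded Y:\ path.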
You say you registered a build agent against your VSTS account... but did you change the agent queue for your build? If the agent queue is "Hosted", you're using Microsoft's hosted agent.
I don't know why it can't find my config.txt file since it is located on the same hosted agent in that same directory.
It turns out that java.io can't read files located on a mapped/shared network drive in this setup; I solved this by using the UNC path to that file (//"computername"/"directory"/"file.txt").
Now the problem I have is that even though the build is successful and it looks like it is running my "3 tests", when I look at my PC, nothing is happening.
It took me a little reading to realize that to perform UI tests my agent needs to be set up in INTERACTIVE MODE. That can be done by following this guide: https://learn.microsoft.com/en-us/vsts/build-release/actions/agents/v2-windows?view=vsts

Does the XML Report Processing work for NUnit3?

I'm currently moving one of our projects to DNX (.NET Core now) and I was forced to update to NUnit 3. Because of other considerations, we compile the test project as a console app with its own entry point, basically self-hosting the NUnit runner.
I now need to report the results to TeamCity via the XML reporter, which doesn't seem to parse NUnit 3 TestResults.xml files.
Any advice on how to work around this?
The NUnit 3 console has the option to produce results formatted in the NUnit 2 style.
Use the option:
--result=[filename];format=nunit2
Docs: https://github.com/nunit/nunit/wiki/Console-Command-Line
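With the NUnit 3 console runner that looks something like the following (the assembly and file names are placeholders); the quotes keep the shell from splitting the argument at the semicolon:
nunit3-console.exe MyTests.dll "--result=TestResult.xml;format=nunit2"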
To add to the answer above:
NUnitLite inherits the --result CLI parameter, which seems to do the trick.
Another option, which I went for in the end, is using the --teamcity CLI parameter:
dotnetbuild --project:<path to project directory> -- --teamcity
which will integrate with TeamCity's service messages. This also gives real-time updates.

How to keep separate dev, test, and prod databases in Play! 2 Framework?

In particular, for test-cases, I want to keep the test database separate so that the test cases don't interfere with development or production databases.
What are some good practices for separating development, test and production environments?
EDIT 1: Some context
In Ruby on Rails there are, by convention, different configuration files for different environments. Does Play! 2 also support that?
Or do I have to create the configuration files myself and then write some glue code that selects the appropriate one?
At the moment, if I run sbt test it uses the development database (configured as "default" in conf/application.conf). However, I would like Play! 2 to use a different test database.
EDIT 2: On the commands that Play provides
For the Play! 2 framework, I observed the following:
$ help play
Welcome to Play 2.2.2!
These commands are available:
-----------------------------
...OUTPUT SKIPPED...
run <port> Run the current application in DEV mode.
test Run Junit tests and/or Specs from the command line
start <port> Start the current application in another JVM in PROD mode.
...OUTPUT SKIPPED...
There are three well-defined commands for the "test", "development" and "production" instances:
test: this runs the test cases, so it should automatically select the test configuration.
run <port>: this runs the development instance on the specified port, so this command should automatically select the development configuration.
start <port>: this runs the production instance on the specified port, so it should automatically select the production configuration.
However, all of these commands pick up the values provided in conf/application.conf. I feel there is a gap to be filled here.
Please do correct me if I am wrong.
EDIT 3: The best approach is using Global.scala
Described here: How to manage application.conf in several environments with play 2.0?
Good practice is keeping separate instances of the application in separate folders and syncing them, e.g. via a git repo.
When you want to keep a single instance, you can use an alternative configuration file for each environment.
In your application.conf file there is an entry (or entries) for your database, e.g. db.default.url=jdbc:mysql://127.0.0.1:3306/devdb
The conf file can read environment variables using the ${?ENV_VAR_NAME} syntax, so change that to something like db.default.url=${?DB_URL} and set the environment variable differently per environment.
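For example, with hypothetical DB_URL, DB_USER and DB_PASSWORD variables (the exact keys depend on your Play version):
db.default.url=${?DB_URL}
db.default.user=${?DB_USER}
db.default.password=${?DB_PASSWORD}
A test run against a separate database then becomes, for example, DB_URL="jdbc:mysql://127.0.0.1:3306/testdb" sbt test; if a variable is not set, the ${?...} substitution is simply skipped and any previously defined value stays in place.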
A simpler way to get this done and to manage your configuration more easily is via GlobalSettings. There is a method you can override named onLoadConfig; try checking its API at API_LINK.
Basically, in your conf/ project folder, you'll set things up similar to below:
conf/application.conf --> configurations common for all environment
conf/dev/application.conf --> configurations for development environment
conf/test/application.conf --> configurations for testing environment
conf/prod/application.conf --> configurations for production environment
So with this, your application knows which configuration to load for your specific environment mode. For a code snippet of my onLoadConfig implementation, check the article at my blog post.
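As a rough sketch of that approach (assuming the conf/dev, conf/test and conf/prod layout above and the Play 2.2 Scala API; this is not the exact code from the blog post), Global.scala could look something like this:
import java.io.File
import com.typesafe.config.ConfigFactory
import play.api.{Configuration, GlobalSettings, Mode}
object Global extends GlobalSettings {
  // Layer conf/<mode>/application.conf on top of the common conf/application.conf
  override def onLoadConfig(config: Configuration, path: File,
                            classloader: ClassLoader, mode: Mode.Mode): Configuration = {
    val modeConfig = Configuration(ConfigFactory.load(mode.toString.toLowerCase + "/application.conf"))
    super.onLoadConfig(config ++ modeConfig, path, classloader, mode)
  }
}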
I hope this is helpful.

FitNesse Test History with Version Control

I'm starting some automated acceptance testing for our company, and have decided to use FitNesse.
I want to have FitNesse under source control - that is, the FitNesse executable plus plugins, the wiki pages and the test fixture source code.
Then anyone can get everything they need from source control to build and run the acceptance tests locally, including a continuous integration server.
I have read that page versioning can be turned off using the -e 0 parameter. Then we don't have ZIP files in the FitNesse root folder under source control - nice.
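For reference, with the standalone jar that would be a start command along these lines (the port here is just an example):
java -jar fitnesse-standalone.jar -p 8080 -e 0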
But what about the test history? Do I want the history of locally run tests to be checked in? And when someone gets the latest version, do they want their local test history to be overwritten?
I'm very grateful to anyone who can share their experiences of using FitNesse in similar scenarios to that described above.
Why not clear all test history before check-in?
In my current project, there are only two operations required to create test cases in FitNesse: drawing the table (editing the page) and developing the API (output as .dll files).
We also developed a tool that triggers FitNesse test runs from remote machines automatically. After the tests finish, we get the results by processing the output Excel files.
The structure of our svn:
-SVN
--FitNesse
--- TestLib
--- FitNesse
---- FitNesseBin
---- TestCases
[Update]
Test fixture code should be finished and frozen before testers start writing and running test cases. Certainly, when a test fixture needs bug fixing or enhancement, the code can still be changed. In my team, we ask different roles to handle different tasks: developers provide the API for testing, senior QA wraps the API in test fixtures, and QA writes the tables/wiki. Each role only takes on their assigned part. Before modifying anything, a team member should update their local copy of FitNesse and check out the file, and only check in the modified checked-out file.
Disabling and auto-purging of test history is still a valid requirement for those integrating FitNesse into a standard build (e.g. Maven) and for running FitNesse builds locally, despite the .svnignore/.gitignore options.
The test history slows down the finalisation of a test run, and once it is purged after a few runs you will certainly notice the difference.
Uncle Bob mentioned he was working on an option to keep only the test history for the latest run (you always need at least one so you can show the results) here: http://tech.groups.yahoo.com/group/fitnesse/message/14306 but I cannot see such an option in the code. I got a Python error trying to reply to the post, so unfortunately no answer on that option. :-(

Advice on creating self contained project, and distributing a web server with the source code

I need some advice on configuring a project so it works in development, staging and production environments:
I have a web app project, MainProject, that contains two sub-projects, ProjectA and ProjectB, as well as some common code, Common. It's in a Subversion repository. It's nearly all HTML, CSS and JavaScript.
In our current development environment we check MainProject out, then set up Apache virtual hosts to point at each of the sub-project's directories, as paths within each project are relative to their root. We also have a build process that then compiles each of the sub-projects into their own deliverable package, with the common code copied into each.
So I'm trying to make development of this project a bit easier. At the moment there is a lot of configuration of file paths in the Apache http.conf files, as well as in the build.xml file and in a couple of other places too.
Ideally I'd like the project to be checked out of SVN onto a fresh computer, with a web server included as part of the project, fully configured, so that it can be run from the checkout directory with very little extra configuration, on either a PC or a Mac. And I'd like anyone to be able to run the build to compile it too.
I'd love to hear from anyone who has done something like this, and any advice you have.
Thanks,
Paul
If you can add Python as a dependency, you can get a minimal HTTP server running in less than ten lines of code. If you have basic server-side code, there is a CGI server as well.
The following snippet is adapted from the BaseHTTPServer documentation (Python 2):
import BaseHTTPServer
def run(server_class=BaseHTTPServer.HTTPServer,
        handler_class=BaseHTTPServer.BaseHTTPRequestHandler):
    # Listen on all interfaces, port 8000
    server_address = ('', 8000)
    httpd = server_class(server_address, handler_class)
    httpd.serve_forever()
if __name__ == '__main__':
    run()
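If all you need is to serve the project's static HTML, CSS and JavaScript from the checkout directory, Python 2 also ships a ready-made file-serving handler; assuming you run it from the project root, something like this serves that directory on port 8000:
python -m SimpleHTTPServer 8000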
I've done this with Jetty, from within Java. Basically you write a simple Java class that starts Jetty (which is a small web server); you can then make this run via an Ant task. (I used it with automated tests: Java code made requests to the server and checked the results in various ways.)
Not sure it's appropriate here because you don't mention Java at all, so apologies if it's not the kind of thing you're looking for.
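As an illustration only (a sketch, not the exact setup described above; it assumes a recent org.eclipse.jetty dependency and is written in Scala, though the same Jetty API is available from Java), an embedded server that serves the checkout directory could look roughly like this:
import org.eclipse.jetty.server.Server
import org.eclipse.jetty.server.handler.ResourceHandler
object DevServer {
  def main(args: Array[String]): Unit = {
    val server = new Server(8080)          // hypothetical dev port
    val resources = new ResourceHandler()
    resources.setResourceBase(".")         // serve files from the checkout directory
    resources.setDirectoriesListed(true)   // allow directory listings for convenience
    server.setHandler(resources)
    server.start()
    server.join()
  }
}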