How to display JUnit test reports in Concourse in a usable/interactive way?

The company where I work is evaluating different CI/CD systems; we tried GoCD (v17.4), Jenkins 2 (v2.7) and Concourse (v3.2.1).
We liked Concourse, but a big downside was that the test reports were not displayed in a usable way. I asked in the Slack chat and was told that Concourse shows the console output, respecting ANSI colors, if any...
...but the thing is, XML test reports contain a lot more information than just a red color for failing tests, and we need to use that information.
I created a failing test, and Jenkins has a nice plugin that groups all tests, shows extra info/metrics and groups the failing tests so you can spot them at once. It also keeps the history of test results.
In Concourse, without a test reporter, one has to scroll through a log to see all the failing tests... my colleagues are concerned about this.
Is there a way in Concourse to parse a JUnit XML test report and show it in the UI in a usable/interactive (clickable) way, as Jenkins does?
Since, as I learnt, Concourse has no plugins and keeps things simple by design, it seems that the answer is: "No, there isn't: you can only see the console logs as they are." But if I'm wrong, please let me know... Thanks

Concourse deliberately doesn't treat any particular type of output specially.
Concourse is designed to be generic, so that there aren't highly specialized, unrepeatable deployments of it.
Jenkins is specialized to solve exactly these kinds of problems, to the extent that it has deep integration for displaying custom output in its UI.
It sounds like Jenkins covers all your use cases. I wouldn't try to hammer Concourse into this one.

Concourse is minimal in that way. It is made to run tasks in a pipeline configuration, and to do so in isolated, repeatable containers. That is also why it does not store build artifacts and so on: it forces you to do the right thing and save everything you need elsewhere, like object-store buckets. Push the XML report to a service, or store it in a bucket for another tool to pick up later.
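For example, a minimal pipeline sketch along those lines could look like the following; it assumes the standard git and s3 resource types, and the repository URL, bucket name, credential variable names and task file are all placeholders:

    resources:
    - name: source-code
      type: git
      source:
        uri: https://github.com/example/app.git        # placeholder repository

    - name: test-reports
      type: s3
      source:
        bucket: my-test-reports                        # placeholder bucket
        regexp: junit-(.*).xml
        access_key_id: ((aws_access_key_id))
        secret_access_key: ((aws_secret_access_key))

    jobs:
    - name: unit-tests
      plan:
      - get: source-code
        trigger: true
      - task: run-tests
        file: source-code/ci/run-tests.yml             # task is assumed to declare an output "reports" containing junit-<n>.xml
      - put: test-reports
        params:
          file: reports/junit-*.xml

A later job, or an external dashboard, can then fetch the versioned reports from the bucket and render them however you like.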

Related

Requirements coverage using pytest

We use LabGrid for our testing, which is based on pytest. I would like to do some requirements-coverage measuring. All my searches for coverage with pytest end up at line coverage for Python code, and that is not what I want.
Actually, I'm not testing Python code at all, but remotely testing features on an embedded target.
My idea was to create a marker for each test function with a URL to a requirement (e.g. in Jira). When a requirement is identified, the first thing to do is to add an empty test case in pytest and mark it as skipped (or as not yet tested).
After running the tests, a report could be generated giving the total coverage, with links. This would require the information being dumped into the junit.xml file.
In this way I get a tight link between the test and the requirement, bound in the test code.
Does anybody know of markers which could help do this, or of projects built on a similar idea?
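In code, the idea would look something like this; the marker name, Jira URL and test name are made up for illustration:

    import pytest

    # Placeholder for a requirement that has been identified but not implemented yet:
    # the marker carries the link, the skip records that it is not tested so far.
    @pytest.mark.requirement("https://example.atlassian.net/browse/REQ-123")
    @pytest.mark.skip(reason="requirement identified, test not written yet")
    def test_power_on_sequence():
        pass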
We are using a marker which we created ourselves:
@pytest.mark.reqcov("JIRA-123")
Afterwards we analyze the test run with a self-written script:
using some pytest hooks, we collect the marker, check Jira via its Python API and create metrics out of it (we are using Testspace).
We have not found a way to add this marker to the JUnit report.
Linking to Jira can be done in different ways:
use Sphinx documentation and link the Jira ID automatically from the test case description to Jira, or
use the Python script which analyzes the requirements coverage and creates a link to Jira.
Hope that helps.
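As a rough sketch of what such a hook-based script can look like (plain pytest, no Jira or Testspace integration; the reqcov marker matches the one above, everything else is illustrative):

    # conftest.py -- collect requirement IDs from the custom reqcov marker and
    # print a simple requirements-coverage summary at the end of the run.
    REQ_TESTS = {}  # requirement id -> list of test node ids

    def pytest_configure(config):
        config.addinivalue_line(
            "markers", "reqcov(req_id): link a test to a requirement in Jira")

    def pytest_collection_modifyitems(config, items):
        for item in items:
            marker = item.get_closest_marker("reqcov")
            if marker and marker.args:
                REQ_TESTS.setdefault(marker.args[0], []).append(item.nodeid)

    def pytest_terminal_summary(terminalreporter, exitstatus, config):
        terminalreporter.write_line("requirements coverage:")
        for req_id, tests in sorted(REQ_TESTS.items()):
            terminalreporter.write_line(
                "  %s covered by %d test(s)" % (req_id, len(tests)))

Running pytest as usual then appends the summary to the terminal output; feeding the same data into Jira or Testspace is where the self-written script comes in.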

Is it possible to specify aggregate code coverage testing when deploying with a list of tests to Salesforce

I am automating deployment and CI to our Salesforce orgs using Ant. In my build.xml, I am specifying the complete list of our tests to run. Salesforce is returning code coverage errors, demanding 75% code coverage per file rather than allowing 75% across the total code base. Some of our old files do not have that level of coverage, and I am trying not to have to go back and create a ton of new tests for old software.
It seems like Salesforce is doing the code coverage calculation based on the quick deploy model rather than the aggregate one.
Does anyone know a way I can tell Salesforce not to use the quick deploy model (if that is what it is doing)? I have checked the Migration Tool docs, but I don't see anything.
Thanks...
Have you tried setting the attribute runAllTests="true" on the sf:deploy task, rather than listing each test out?
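For reference, a minimal target could look like the sketch below; the property names and deployRoot are placeholders, and on newer Migration Tool / API versions the testLevel attribute (e.g. testLevel="RunLocalTests") replaces runAllTests:

    <!-- build.xml excerpt: let Salesforce run the org's tests instead of the
         supplied list, so coverage is evaluated in aggregate rather than per listed test -->
    <target name="deployAll">
      <sf:deploy username="${sf.username}"
                 password="${sf.password}"
                 serverurl="${sf.serverurl}"
                 deployRoot="src"
                 runAllTests="true"
                 maxPoll="200"/>
    </target>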

Automated testing of developer environments

We use gradle as our build tool and use the idea plugin to be able to generate the project/module files. The process for a new developer on the project would look like this:
pull from source control.
run 'gradle idea'.
open idea and be able to develop without any further setup.
This all works nicely, but generally only gets exercised when a new developer joins or someone gets a new machine. I would really like to automate the testing of this more frequently in the same way we automate our unit/integration tests as part of our continuous integration process.
Does anyone know if this is possible, and whether there are any libraries for doing this kind of thing?
You can also substitute 'eclipse' for 'idea', as we have a similar process for those who prefer using Eclipse.
The second step (with or without step one) is easy to smoke test (just execute the task as part of a CI build), the third one less so. However, if you are following best practices and regenerate IDEA files rather than committing them to source control, developers will likely perform both steps more or less regularly (e.g. every time a dependency changes).
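A CI smoke test for the first two steps can be as small as the sketch below (Python here is just glue; the expected project file names are assumptions):

    # ci_smoke_test.py -- regenerate the IDEA project files on the build server
    # and fail the build if the expected files are not produced.
    import pathlib
    import subprocess
    import sys

    subprocess.run(["gradle", "idea"], check=True)

    expected = ["myproject.ipr", "myproject.iml"]      # names depend on the project
    missing = [name for name in expected if not pathlib.Path(name).exists()]
    if missing:
        sys.exit("gradle idea did not generate: " + ", ".join(missing))
    print("IDEA project files generated")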
As Peter noted, the real challenge is step #3; the first two are solved by your SCM plugin and the Gradle task. You could try automating the last step by doing something like this:
identify the proper command-line option, on your platform, that opens a specified IntelliJ project;
find a simple, good-enough scenario that could validate that the generated project is working as it should, e.g. do a clean and then a build, and make sure you can reproduce these steps using keyboard shortcuts only. Validation could be done by checking either produced artifacts or test result reports, etc.;
use an external library, like java.awt.Robot, to script the launch of IntelliJ and the replay of your keyboard shortcuts. Use a dynamic language with a built-in console rather than pure Java for this; it will speed up your scripting a lot. A rough sketch of this step follows below.
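One possible shape for that script, using pyautogui as a Python stand-in for java.awt.Robot (the launcher command, project file, shortcuts, wait times and output directory are all assumptions):

    # ide_build_check.py -- open the generated project in IntelliJ, trigger a build
    # via keyboard shortcut, then verify that class files were produced.
    import pathlib
    import subprocess
    import time

    import pyautogui

    subprocess.Popen(["idea", "myproject.ipr"])    # command-line launcher for IntelliJ
    time.sleep(120)                                # crude wait for the IDE to open and index
    pyautogui.hotkey("ctrl", "f9")                 # "Build Project" in the default Windows/Linux keymap
    time.sleep(60)                                 # crude wait for the build to finish

    if not any(pathlib.Path("out").rglob("*.class")):
        raise SystemExit("IDE build produced no class files")
    print("IDE build produced class files")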
Another idea would be to include a daemon plugin in IntelliJ to receive commands from an external CLI. Otherwise, get in touch with the IntelliJ team; they may have something to ease your work here.
Notes:
beware of false negatives: any failure could be caused by external issues, like project instability. Try to make sure you only build from a validated, working project...
beware of false positives: any assumption or unchecked result code could hide issues. Make sure you properly clean the workspace and installation, so you have a repeatable state and a standard scenario matching first use.
Final thoughts: while interesting from a theoretical angle, this automation exercise may not bring all the required results, i.e. the validation of the platform. Still, it's an interesting learning experience and could serve as material for a nice short talk, especially if you find out interesting stuff. Make it a beer challenge with your team when you have a few idle hours, and see who can implement a working solution fastest ;) Good luck!

How to keep track of who created a JUnit test and when using Eclipse? How to create statistics based on this information?

In a Java project I'm working on (alongside a team of 8 devs), we have a large backlog of features that lack automated tests. We are covering this backlog, and we need to keep track of who wrote a JUnit test and when; we also have to measure how many tests we wrote as a team in a week/month/semester (as you may have figured out already, this information is for management purposes). We figured we'd do this by marking the tests with the information we need (author, creation date) and letting Eclipse do the processing work, showing us the tests we wrote, who wrote them and how far we were from reaching our goals. How would you smart people go about this? What plugins would you use?
I tried to use Eclipse Custom Tags for this, but that's not the purpose of the feature, and the results I got were kind of brittle. I created a TEST tag that was supposed to mark a test method. It looks like this (the date is mm-dd-yyyy):
//TEST my.name 08-06-2011
Since Eclipse processes the tag description by substring matching (contains/doesn't contain), it is, as I said, very brittle. I can timestamp the tag, but it's just a string: Eclipse can't process it as a date, compare dates, filter by date interval, anything like that.
I searched for plugins, but no dice.
My plan B is to create an annotation with the information we need and process our test packages using the Eclipse Annotation Processing Tool. I haven't tried anything on this front yet, though; it's just an idea. Does anyone know a good plugin for this kind of annotation processing, or have any starter tips for dealing with Eclipse APT?
Thanks a bunch, folks
I would not use Eclipse for this.
Your team should be checking the tests into a version control system such as Subversion, Git or Team Foundation Server. From there it should be a fairly straightforward matter to determine the owner and check-in time. You can and should do this sort of metrics calculation during every build. Better yet, make sure that your build script actually runs your tests and uses a tool like EMMA to instrument the code and determine the actual coverage.
As a fallback for measuring coverage, if you choose a naming convention then you may even be able to correlate the test classes by file name back to the feature under test.
Many CI servers, such as CruiseControl, have integration for doing these sorts of things quite nicely.
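As an illustration of that kind of build-time metrics step, here is a rough sketch that walks the Git history of the test tree and counts new test files per author per calendar week; the path, naming convention and Git options are assumptions, and Subversion or TFS equivalents would follow the same idea:

    # test_metrics.py -- count test files added per author per calendar week by
    # reading the Git history of the test source tree.
    import collections
    import subprocess

    log = subprocess.run(
        ["git", "log", "--diff-filter=A", "--name-only",
         "--pretty=format:COMMIT\t%an\t%ad", "--date=format:%Y-%W",
         "--", "src/test"],
        capture_output=True, text=True, check=True).stdout

    counts = collections.Counter()
    author = week = None
    for line in log.splitlines():
        if line.startswith("COMMIT\t"):
            _, author, week = line.split("\t")
        elif line.endswith("Test.java"):           # project naming convention for tests
            counts[(author, week)] += 1

    for (author, week), added in sorted(counts.items()):
        print("%s added %d test(s) in week %s" % (author, added, week))

Hooking something like this into the nightly build gives the week/month numbers management asked for without annotating the tests by hand.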

Are there any good Continuous Testing plugins for Eclipse out right now?

I've used the MIT Continuous Testing plugin in the past, but it has long since fallen out of date and is no longer compatible with anything approaching a modern release of Eclipse.
Does anyone have a good replacement? Free, naturally, is preferred.
I found that Infinitest now has an Eclipse plugin that seems to work pretty well.
There is a list in this Ben Rady article at Object Mentor: Continuous Testing Explained. Unfortunately, the only Eclipse tool appears to be CT-Eclipse, which is not currently maintained either.
There is also Fireworks for IntelliJ, and Infinitest, which is not IDE-specific but also has some IntelliJ integration.
My experience is that continuous testing within the IDE can become unwieldy and distracting, so I prefer to use something like CruiseControl to do this kind of testing. One tool I have found very useful is EclEmma, which gives you a very fast coverage turnaround for your units, helping you to decide when you have finished testing a particular area of the code.
Infinitest decides which tests it wants to run, and often it runs the wrong ones. The green bar is sometimes good, sometimes meaningless.
I've had a good experience with Infinitest on a small and simple project. I've not run into any issues with it and find it fast and helpful.
I also use Infinitest (and voted for one of its answers), but I wanted to add another approach, which relies on the build server. Whenever you want to implement something, create a branch in your VCS, make your changes, and commit to your branch. If you have a build server configured to run the unit tests on every check-in, they are run on the build server without your having polluted the trunk (or HEAD, whatever you call it) and without you waiting for the test run to finish.
I admit that this is not really continuous unit testing in the sense you asked about, but for large projects or large test suites even a "normal" continuous test runner may slow you down way too much.
For small projects I also recommend Infinitest or CT-Eclipse.