Is it possible to specify aggregate code coverage testing when deploying with a list of tests to Salesforce?

I am automating deployment and CI to our Salesforce orgs using Ant. In my build.xml, I am specifying the complete list of our tests to run. Salesforce is returning code coverage errors, demanding 75% code coverage on a per-file basis rather than allowing 75% based on the total code base. Some of our old files do not have that level of coverage, and I am trying to avoid going back and writing a ton of new tests for old software.
It seems like Salesforce is doing the code coverage based on the quick-deploy model rather than the aggregate.
Does anyone know a way I can tell Salesforce not to use the quick-deploy model (if that is what it is doing)? I have checked the Migration Tool docs, but don't see anything.
Thanks...

Have you tried setting the attribute runAllTests="true" on the sf:deploy task, rather than listing each test out?
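As far as I know, when you list individual tests the 75% requirement is enforced per class/trigger, whereas running all tests applies the org-wide aggregate. A minimal sketch of what that might look like in build.xml, assuming your version of the Migration Tool supports the attribute and the antlib:com.salesforce namespace is declared on <project> as in the sample build.xml (the property names and deployRoot are placeholders, not values from your setup):

    <target name="deployAll">
        <!-- runAllTests="true" runs every test in the org instead of a named list -->
        <sf:deploy username="${sf.username}"
                   password="${sf.password}"
                   serverurl="${sf.serverurl}"
                   deployRoot="src"
                   runAllTests="true"/>
    </target>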

Related

Requirements coverage using pytest

We use LabGrid for our testing, which is based on pytest. I would like to do some requirements coverage measuring. All my searches on coverage for pytest end up at line coverage for Python, and that is not what I want.
Actually, I'm not testing Python code at all, but remotely testing features on an embedded target.
My idea was to create a marker for each test function with a URL to a requirement (e.g. in Jira). When a requirement is identified, the first thing to do is to add an empty test case in pytest and mark it as skipped (or as not yet tested).
After running the tests, a report could be generated giving the total coverage, with links. This would require the information to be dumped into the junit.xml file.
In this way I get a tight link between the test and the requirement, bound in the test code.
Does anybody know of markers which could help do this, or of projects that have had a similar idea?
We are using a marker which we created ourselves:

    @pytest.mark.reqcov("JIRA-123")

Afterwards we analyze the test run with a self-written script. It uses some pytest hooks to collect the marker, checks Jira via its Python API, and creates metrics out of it (we are using Testspace).
We have not found a way to add this marker to the JUnit report.
Linking to Jira can be done in different ways:
- use Sphinx documentation and link the Jira id automatically from the test case description to Jira, or
- use the Python script which analyzes the requirements coverage and create a link to Jira.
Hope that helps.
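For what it's worth, a minimal sketch of the hook side of this approach, assuming a marker named reqcov and a JSON dump as the hand-off to the analysis script (both names are assumptions; the Jira/Testspace steps are left out):

    # conftest.py -- collect a custom requirements-coverage marker.
    import json

    _req_map = {}

    def pytest_configure(config):
        # Register the marker so pytest does not warn about an unknown mark.
        config.addinivalue_line(
            "markers", "reqcov(issue): link a test to a requirement, e.g. a Jira id"
        )

    def pytest_collection_modifyitems(items):
        # Remember which requirement each collected test claims to cover.
        for item in items:
            marker = item.get_closest_marker("reqcov")
            if marker and marker.args:
                _req_map[item.nodeid] = marker.args[0]

    def pytest_sessionfinish(session, exitstatus):
        # Dump the test-to-requirement mapping for the post-processing script
        # (the part that queries Jira and builds the metrics).
        with open("reqcov.json", "w") as fh:
            json.dump(_req_map, fh, indent=2)

A test then only needs the @pytest.mark.reqcov("JIRA-123") marker on the test function, and the mapping ends up in reqcov.json after the run.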

How to display junit test reports in concourse in a usable/interactive way?

The company where I work is evaluating different CI/CD systems; we tried GoCD (v17.4), Jenkins 2 (v2.7) and Concourse (v3.2.1).
We liked Concourse, but a big downside was that the test reports were not displayed in a usable way. I asked in the Slack chat and was told that Concourse shows the console output, respecting the ANSI colors, if any...
...but the thing is, XML test reports contain a lot more information than just a red color for failing tests, and we need to use that information.
I created a failing test, and Jenkins has a nice plugin that groups all tests, shows extra info/metrics, and groups the failing tests so you can spot them at once. It also keeps the history of test results.
In Concourse, without a test reporter, one has to scroll through a log to see all the failing tests... my colleagues are concerned about this.
Is there a way in Concourse to parse a JUnit XML test report and show it in the UI in a usable/interactive (clickable) way, as Jenkins does?
From what I have learnt, Concourse has no plugins and is simple by design, so it seems the answer is: "No, there isn't: you can just see the console logs as is." But if I'm wrong, please let me know... Thanks
Concourse deliberately doesn't discriminate between types of output.
Concourse is made to be generic, so that there aren't highly specialized, unrepeatable deployments of it.
Jenkins is specialized to solve these types of issues, to the extent that it has deep integration for displaying custom output in its UI.
It sounds like Jenkins solves all your use cases. I wouldn't try to hammer Concourse into this use case.
Concourse is minimal in that way. It is made to run tasks in a pipeline configuration, and to do so in an atomic container setup. That is also why it does not store build artifacts and so on. It forces you to do the right thing and save everything you need elsewhere, like buckets etc. Push the XML to a service or store it in a bucket for a tool to use later on.
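To illustrate that last point, a rough sketch of a pipeline that pushes the JUnit XML to an S3 bucket after the tests run; the resource names, bucket, task file and output directory are all made up, and the task is assumed to write its XML into an output called reports:

    resources:
    - name: source-code
      type: git
      source:
        uri: https://example.com/my-team/my-app.git

    - name: test-reports
      type: s3
      source:
        bucket: my-ci-test-reports
        regexp: junit-(.*)\.xml
        access_key_id: ((aws-access-key-id))
        secret_access_key: ((aws-secret-access-key))

    jobs:
    - name: unit-tests
      plan:
      - get: source-code
        trigger: true
      - task: run-tests
        file: source-code/ci/run-tests.yml   # assumed to produce reports/junit-<build>.xml
      - put: test-reports
        params:
          file: reports/junit-*.xml

A separate tool can then pick up those files and render them.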

Show only specific Tests or TestFixtures in NUNIT via a configuration file or another way

I have a bunch of NUnit tests in several TestFixtures. Currently, I just display all the tests for everyone. Is there a way to hide some tests and/or test fixtures? I have various "customers" and they don't all need to see every test. For example, I have engineers using low-level tests, and I have a QA department that is using higher-level tests. If I could have a configuration (XML?) file that I distributed with the dll, that would be ideal. Can someone point me to the documentation and an example? I did search the NUnit site and did not see anything.
I am aware of the [Ignore] attribute, and I suppose a somewhat acceptable solution would be to have a configuration file that can apply [Ignore] to various tests or test fixtures. I'd hand out a different version of the configuration file to each customer. At least that way certain customers would not be able to run certain tests.
I'm using version 2.5.5
Ideas?
Thanks,
Dave
Yes - if the tests are in separate assemblies, this can be accomplished by proper configuration of your NUnit projects. However, this is not an option if the tests are in one large test assembly. If this is the case, you may wish to break up the test assembly. Here is the documentation on the NUnit ProjectEditor: http://www.nunit.org/index.php?p=projectEditor&r=2.2.10
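For example, with the low-level and QA tests split into separate assemblies, an NUnit project file (.nunit) can define one configuration per audience; the assembly names here are made up:

    <NUnitProject>
      <Settings activeconfig="QA" />
      <Config name="QA" binpathtype="Auto">
        <assembly path="HighLevelTests.dll" />
      </Config>
      <Config name="Engineering" binpathtype="Auto">
        <assembly path="HighLevelTests.dll" />
        <assembly path="LowLevelTests.dll" />
      </Config>
    </NUnitProject>

You would then ship each customer a project file (or active configuration) listing only the assemblies they should run.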

How to keep track of who created a JUnit test and when using Eclipse? How to create statistics based on this information?

In a Java project I'm working on (alongside a team of 8 devs), we have a large backlog of features that lack automated tests. We are covering this backlog, and we need to keep track of who wrote a JUnit test and when, plus we have to measure how many tests we wrote as a team in a week/month/semester (as you may have figured out already, this information is for management purposes). We figured we'd do this by marking the tests with the information we need (author, creation date) and letting Eclipse do the processing work, showing us the tests we wrote, who wrote them, and how far we were from reaching our goals. How would you smart people go about this? What plugins would you use?
I tried to use Eclipse custom tags for this, but that's not what the feature is for, and the results I got were kind of brittle. I created a TEST tag that was supposed to mark a test method. It looks like this (the date is mm-dd-yyyy):
//TEST my.name 08-06-2011
Since Eclipse processes tag descriptions by substring matching (contains/doesn't contain), it's, as I said, very brittle. I can timestamp the tag, but it's just a string. Eclipse can't process it as a date, compare dates, filter by date interval, or anything like that.
I searched for plugins, but no dice.
My plan B is to create an annotation with the information we need and process our test packages using the Eclipse Annotation Processing Tool. I haven't tried anything on this front yet, though; it's just an idea. Does anyone know a good plugin for this kind of annotation processing, or have any starter tips for dealing with Eclipse APT?
Thanks a bunch, folks
I would not use Eclipse for this.
Your team should be checking the tests into a version control system such as Subversion, Git, Team Foundation Server, etc. From there it should be a fairly straightforward matter to determine the owner and check-in time. You can and should do this sort of metrics calculation during every build. Better yet, make sure that your build script actually runs your tests and uses a tool like EMMA to instrument the code and determine the actual coverage.
As a fallback for measuring coverage, if you choose a naming convention, you may even be able to correlate the test classes by file name back to the feature under test.
Many modern CI systems, such as CruiseControl, integrate these sorts of things quite nicely.

Writing Logs/results or generating reports using Selenium C# API

I am testing a web application using Selenium RC. Everything works fine; I have written many test cases and execute them using NUnit.
The hurdle I am facing now is how to keep track of failures and how to generate reports.
Please advise on the best way to capture this.
Because you're using NUnit, you'll want to use NUnit's reporting facilities. If the GUI runner is enough, that's great. But if you're running from the command line or NAnt, you'll want to use the XML output; take a look at the NAnt documentation for more information.
If you're using NAnt, you'll want to look at NUnit2Report. It's no longer maintained, but it may suit your needs. Alternatively, you could extract its XSLT files and apply them against the XML output.
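As a rough sketch, the NAnt side might look something like this (paths and names are placeholders); the XML the formatter writes is what NUnit2Report, or your own XSLT, would consume:

    <!-- Run the tests and write an XML result file into the results directory. -->
    <nunit2>
        <formatter type="Xml" usefile="true" extension=".xml" outputdir="results" />
        <test assemblyname="bin/MyProject.Tests.dll" />
    </nunit2>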
Selenium itself doesn't produce reports, because it is only a library used from many different languages.
For anyone else happening randomly into this question, the 'nunit2report' task is now available in NAntContrib.
NAntContrib nunit2report task