NUnit: TestName causes tests to go undiscovered

I can't find any related post here, on the NUnit site, or on MS sites reporting problems with the TestName parameter. Without this parameter, my tests are discovered and run. But if I add this parameter to any TestCase, that specific test is not discovered, and the 'tests run' counter displays the reduced number. When I remove the TestName parameter, the tests are discovered again and the counter shows the expected number of tests.
Is this a known bug with VS2022? Or am I entering the values incorrectly?
VS Enterprise 2022 v17.3.6
.NET Framework v4.8.04084
C# Tools 4.3.0-3.22470.13+80a8ce8d5fdb9ceda4101e2acb8e8eb7be4ebcea
NuGet Package Manager 6.3.0
My tests all load and run as shown here; selecting parseAUR and clicking Run Tests shows four tests discovered.
[TestCase(noneAur, ExpectedResult = 0, Description = "Finds nothing")]
[TestCase(oneAur, ExpectedResult = 1, Description = "Finds only one aur in report.")]
[TestCase(oneAurPlus, ExpectedResult = 1, Description = "Find one aur, ignores afr in report")]
[TestCase(twoAURs, ExpectedResult = 2, Description = "Finds two aur's in report")]
public int parseAUR(string source)
{
    return ClaimStatusService.ProcessUndeliverableReport(source);
}
However, if I add TestName to the attribute list, then the tests are not discovered and are never run.
[TestCase(noneAur, ExpectedResult = 0, Description = "Finds nothing", TestName = "parseAUR(None)")]
[TestCase(oneAur, ExpectedResult = 1, Description = "Finds only one aur in report.", TestName = "parseAUR(One)")]
[TestCase(oneAurPlus, ExpectedResult = 1, Description = "Find one aur, ignores afr in report", TestName = "parseAUR(OnePlus)")]
[TestCase(twoAURs, ExpectedResult = 2, Description = "Finds two aur's in report", TestName = "parseAUR(Two)")]
Adding or removing TestName on any TestCase alters that specific test's visibility in the Test Explorer and changes the discovered test count.
UPDATE: moving the TestName parameter away from the end of the argument list 'fixes' the issue. In fact, changing just one TestCase in this way caused all the TestCases to be discovered.
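For illustration, a hypothetical reordering consistent with that update, with TestName no longer the last named argument:
// Hypothetical reordering: TestName moved ahead of the other named arguments.
[TestCase(noneAur, TestName = "parseAUR(None)", ExpectedResult = 0, Description = "Finds nothing")]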

Is it possible that by adding the test name, you're moving it up a level in the hierarchy and so it's there, but not where you might expect it to be?
I tried to recreate this in VS 2022 with various runners.
I took the most basic reproduction I could:
public class Tests
{
    [TestCase("TestString", ExpectedResult = 1)]
    [TestCase("TestString2", ExpectedResult = 1)]
    [TestCase("TestString3", ExpectedResult = 1)]
    public int StringTest(string inputString)
    {
        return 1;
    }
}
My VS and NCrunch runners both show all 3 test cases -- here's an example side by side:
When I add a test name to one of the tests:
public class Tests
{
    [TestCase("TestString", ExpectedResult = 1, TestName = "CheckThisOut(MyThing)")]
    [TestCase("TestString2", ExpectedResult = 1)]
    [TestCase("TestString3", ExpectedResult = 1)]
    public int StringTest(string inputString)
    {
        return 1;
    }
}
Then the runners do show the test:
But -- note the placement of the test on the left-hand side (the VS Test Runner). The test is no longer under StringTest; VS appears to treat it as a sibling of that test. It appears next to it, not within it.
In Visual Studio's Test Explorer window, try searching for your custom test name to see if it appears (in the screenshot below, I typed my custom name in and saw it come up):

After much investigation, I discovered it's a configuration mismatch between the project and the installed NUnit version. Turns out our build master updated the project to use 4.3.1, but I only had 4.2.1 installed. Why this causes the issue is unknown, but updating the NUnit version on my machine resolved several issues with test discovery. A different solution would not load any tests until the version mismatch was resolved.
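As a sketch of the fix, assuming the project uses PackageReference (the question doesn't show the project file), pinning the package to the version the build master chose keeps everyone aligned:
<!-- Hypothetical csproj fragment; package name and layout are assumptions. -->
<ItemGroup>
  <PackageReference Include="NUnit" Version="4.3.1" />
</ItemGroup>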


How can I get a list of all scenarios from a SpecFlow dll?

Or from some other place.
What I want: to get all scenario names at the start of a test run, without running all the tests.
What I tried: scanning the assembly contents via reflection, but that yields only feature names and method names, not scenario names (from this: Get list of tests in nunit library programmatically without having to run tests).
ScenarioContext also exists, but it contains only the current scenario's names, not all of the names in the test suite.
What I am using:
SpecFlow for the descriptions.
NUnit as the runner. VS2019.
TestRail for collecting results. The TestRail test suite contains test names equal to the test descriptions in SpecFlow.
I hope it's possible.
Thanks to all!
OK, I found it!
// Wrapped in a helper method so the braces balance; the method name is illustrative.
public List<string> GetScenarioNames()
{
    List<string> scenLst = new List<string>();
    try
    {
        // SpecFlow emits each scenario name as an NUnit DescriptionAttribute
        // on the generated test method, so collect those.
        var assembly = System.Reflection.Assembly.LoadFrom(
            Path.Combine(
                Path.GetDirectoryName(System.Reflection.Assembly.GetExecutingAssembly().Location),
                "project.dll"));
        var types = assembly.GetTypes();
        foreach (Type t in types)
        {
            foreach (var method in t.GetMethods())
            {
                foreach (var attribute in method.GetCustomAttributes())
                {
                    if (attribute is DescriptionAttribute description)
                    {
                        var d = description.Properties["Description"][0].ToString();
                        if (d != null)
                        {
                            scenLst.Add(d);
                        }
                    }
                }
            }
        }
    }
    catch (Exception ex) { /* ... */ }
    return scenLst;
}
To get tags, use CategoryAttribute (from NUnit), as sketched below.
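A minimal sketch of the same reflection loop collecting tags instead, assuming a tagLst list declared like scenLst above:
// Same loop body, but collecting NUnit CategoryAttribute values (the tags).
foreach (var attribute in method.GetCustomAttributes())
{
    if (attribute is NUnit.Framework.CategoryAttribute category)
    {
        tagLst.Add(category.Name); // CategoryAttribute exposes the tag text via Name
    }
}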

How to read UnitPrice from invoice line in QBO API v3 .NET

The bizarre properties in the .NET SDK continue to baffle me. How do I read the UnitPrice from an invoice line?
If I do this:
sild = (SalesItemLineDetail)line.AnyIntuitObject;
ln = new QBInvoiceLine(); // My internal line item class
ln.Description = line.Description;
ln.ItemRef = new QBRef() { Id = sild.ItemRef.Value, Name = sild.ItemRef.name };
if (sild.QtySpecified)
    ln.Quantity = sild.Qty;
else
    ln.Quantity = 0;
if (sild.ItemElementName == ItemChoiceType.UnitPrice)
    ln.Rate = (decimal)sild.AnyIntuitObject; // Exception thrown here
The last line throws an invalid cast exception, even though the debugger shows that the value is 20. I've tried other types but get the same exception no matter what I do. So I finally punted and am calculating the rate like so:
ln.Rate = line.Amount / ln.Quantity;
(With proper rounding and checking for divide by zero, of course)
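A sketch of that fallback with the guards spelled out (rounding to two decimal places is an assumption about the desired precision):
// Derive the rate from the line amount; guard against a zero quantity
// and round to two decimals (assumed precision).
ln.Rate = ln.Quantity == 0
    ? 0m
    : Math.Round(line.Amount / ln.Quantity, 2, MidpointRounding.AwayFromZero);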
While we're on the subject... I noticed that in many cases ItemElementName == ItemChoiceType.PriceLevelRef. What's up with that? As far as I know, QBO doesn't support price levels, and I certainly wasn't using a price level with this invoice or customer. In this case I was also able to get what I needed from the Amount property.
Try this:
SalesItemLineDetail a1 = (SalesItemLineDetail)invoice11.Line[0].AnyIntuitObject;
object unitprice = a1.AnyIntuitObject;
decimal quantity = a1.Qty;
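If the direct cast still throws, one plausible cause (an assumption, since the question doesn't show the boxed type) is that AnyIntuitObject boxes the value as a type other than decimal; unboxing with a cast requires the exact type, while Convert.ToDecimal does not:
// Unboxing with (decimal) requires the boxed value to actually be a decimal;
// Convert.ToDecimal converts from whatever numeric type is actually boxed.
decimal rate = Convert.ToDecimal(a1.AnyIntuitObject);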
PriceLevelRef as an 'entity' is not supported, which means CRUD operations are not supported on it.
The service might, however, sometimes return read-only values for it in transactions, but since this is not mentioned in the docs, please consider it unsupported.
Check that both request and response use the same format, either JSON or XML.
You can use the following code to set that:
ServiceContext context = new ServiceContext(appToken, realmId, intuitServiceType, reqvalidator);
context.IppConfiguration.Message.Request.SerializationFormat = Intuit.Ipp.Core.Configuration.SerializationFormat.Json;
context.IppConfiguration.Message.Response.SerializationFormat = Intuit.Ipp.Core.Configuration.SerializationFormat.Json;
Also, in the QBO UI, check whether Company -> Sales settings has Track Quantity and Price/rate turned on.

Showing test count in buildbot

I am not particularly happy with the stats that Buildbot provides. I understand that it is for building and not testing - that's why it has a concept of Steps but no concept of Tests. Still, there are many cases where you need test statistics from build results, for example when comparing skipped and failed tests on different platforms to estimate the impact of a change.
So, what is needed to make Buildbot display test count in results?
What is the simplest way, so that a person who doesn't know anything about Buildbot can do this in 15 minutes?
Depending on how you want to process the test results and how they are presented, Buildbot does provide a Test step, buildbot.steps.shell.Test.
An example of how I use it for my build environment:
import os
import xml.etree.ElementTree as etree

from buildbot.steps import shell
from buildbot.status import results

class CustomStepResult(shell.Test):
    description = 'Analyzing results'
    descriptionDone = 'Results analyzed'

    def __init__(self, log_file=None, *args, **kwargs):
        self._log_file = log_file
        shell.Test.__init__(self, *args, **kwargs)
        self.addFactoryArguments(log_file=log_file)

    def start(self):
        if not os.path.exists(self._log_file):
            self.finished(results.FAILURE)
            self.step_status.setText('TestResult XML file not found !')
        else:
            tree = etree.parse(self._log_file)
            root = tree.getroot()
            passing = len(root.findall('./testsuite/testcase/success'))
            skipped = len(root.findall('./testsuite/testcase/skip'))
            fails = (len(root.findall('./testsuite/error')) +
                     len(root.findall('./testsuite/testcase/error')) +
                     len(root.findall('./testsuite/testcase/failure')))
            self.setTestResults(total=fails + passing + skipped,
                                failed=fails, passed=passing)
            # the final status for WARNINGS is green but the step itself will be orange
            self.finished(results.SUCCESS if fails == 0 else results.WARNINGS)
            self.step_status.setText(self.describe(True))
And in the configuration factory I create a step as below:
factory.addStep(CustomStepResult(log_file = log_file))
Basically I override the default Test shell step and pass a custom XML file which contains my test results. I then look for the pass/fail/skip result nodes and accordingly display the results in the waterfall.

Trigger works but test doesn't cover 75% of the code

I have a trigger which works in the sandbox. The trigger checks a field at the campaign level and compares it with a custom setting; if it matches, it writes the target to the DS Target Multiplier field. The trigger looks as follows:
trigger PopulateTarget on Campaign (before insert, before update)
{
    for (Campaign campaign : Trigger.new)
    {
        if (String.isNotBlank(campaign.Apex_Calculator__c))
        {
            DSTargets__c targetInstance = DSTargets__c.getInstance(campaign.Apex_Calculator__c);
            String target = targetInstance.Target__c;
            campaign.DS_Target_Multiplier__c = target;
        }
    }
}
However, I had problems writing a proper test for this and asked for help on the internet. I received this test:
@isTest
private class testPopulateTarget {
    static testMethod void testMethod1() {
        // Load the Custom Settings
        DSTargets__c testSetting = new DSTargets__c(Name = 'Africa - 10 Weeks; CW 10', Target__c = '0.1538', SetupOwnerId = apexCalculatorUserId);
        insert testSetting;
        // Create Campaign. Since it would execute the trigger, put it between startTest and stopTest
        Test.startTest();
        Campaign testCamp = new Campaign();
        // populate all reqd. fields.
        testCamp.Name = 'test DS campaign';
        testCamp.RecordTypeId = '012200000001b3v';
        testCamp.Started_Campaign_weeks_before_Event__c = '12 Weeks';
        testCamp.ParentId = '701g0000000EZRk';
        insert testCamp;
        Test.stopTest();
        testCamp = [SELECT Id, Apex_Calculator__c, DS_Target_Multiplier__c FROM Campaign WHERE Id = :testCamp.Id];
        System.assertEquals(testCamp.DS_Target_Multiplier__c, testSetting.Target__c); // assert that target is populated right
    }
}
That test returns the error "Compile Error: Variable does not exist: apexCalculatorUserId at line 6 column 122". If I remove the apexCalculatorUserId part and the System.assertEquals, the test passes, but it covers only 4/6 of the code (66%).
Could anyone help me amend the code to reach 75% coverage?
Yes, apexCalculatorUserId has not been defined. The code you were given appears to be incomplete. You'll need to look at the DSTargets__c constructor and see what kind of ID it is expecting there.
At a guess, you could try UserInfo.getUserId() to get the ID of the current user, but that may not be the ID that's expected in the constructor. It would be worth trying it to see if the test coverage improves.
1) Replace apexCalculatorUserId with UserInfo.getUserId().
2) I'm not sure what kind of field Apex_Calculator__c is on Campaign. If it's not a formula, you want to insert a new line before "insert testCamp". Something like:
testCamp.Apex_Calculator__c = UserInfo.getUserId();

DataNucleus complains about a class not being enhanced, but works fine when run a 2nd time

I am trying to make DataNucleus work with MongoDB (using JDO). After a successful
mvn clean compile
mvn exec:java
when I run it the first time, it fails with
caused by: org.datanucleus.exceptions.NucleusUserException: Found Meta-Data for class com.samples.jdo.mongodb.Account but this class is either not enhanced or you have multiple copies of jdo-api.jar in your CLASSPATH!! Make sure all persistable classes are enhanced before running DataNucleus and/or the CLASSPATH is correct.
but if I run it again it works fine, and I can see the data getting persisted in MongoDB too. Here is the code:
// Create a PersistenceManagerFactory for this datastore
PersistenceManagerFactory pmf = JDOHelper.getPersistenceManagerFactory("datanucleus.properties", this.getClass().getClassLoader());

// Build JDO metadata for the two persistable classes at runtime
JDOMetadata jdomd = pmf.newMetadata();
PackageMetadata pmd = jdomd.newPackageMetadata("com.samples.jdo.mongodb");
ClassMetadata cmd1 = pmd.newClassMetadata(Account.class);
cmd1.setTable("Account").setIdentityType(javax.jdo.annotations.IdentityType.DATASTORE);
cmd1.setPersistenceModifier(ClassPersistenceModifier.PERSISTENCE_CAPABLE);
ClassMetadata cmd2 = pmd.newClassMetadata(Login.class);
cmd2.setTable("LOGIN").setIdentityType(javax.jdo.annotations.IdentityType.DATASTORE);
cmd2.setPersistenceModifier(ClassPersistenceModifier.PERSISTENCE_CAPABLE);

// Create the objects to persist
PersistenceManager pm = pmf.getPersistenceManager();
Account acct = new Account("firstname", "lastname", 3);
Login login = new Login("flastname", "xxxx");
acct.setLogin(login);

// Enhance the classes at runtime using the metadata built above
final JDOEnhancer enhancer = JDOHelper.getEnhancer();
enhancer.setVerbose(true);
enhancer.registerMetadata(jdomd);
enhancer.setClassLoader(this.getClass().getClassLoader());
String[] classes = {"com.samples.jdo.mongodb.Account", "com.samples.jdo.mongodb.Login"};
enhancer.addClasses(classes);
enhancer.enhance();

// Register the metadata with the PMF and persist
pmf.registerMetadata(jdomd);
pm.makePersistent(acct);
Any ideas?
Thanks