BDD Cucumber issues - Scala

I have recently started working with BDD Cucumber. I am using Scala to write my test cases. I am trying to use a Scenario Outline and pass parameters into the step definitions. My code is as follows:
Scenario Outline: Data is parsed and persisted
  Given Portal is running
  When A data of <type> is received
  Then The data of <type> with <Id> should be parsed and persisted

  Examples:
    | type         | Id |
    | Personal     | 1  |
    | Professional | 2  |
Now, in my When step definition, I am trying to read these parameters as follows:
When("""^A data of \"([^\"]*)\" is received$""") { (`type`: String) =>   // back-quotes needed because `type` is a Scala keyword
  // My code
}
Now, when I run my code, I get the following error every time:
io.cucumber.junit.UndefinedStepException: The step "A data of Personal is received" is undefined. You can implement it using the snippet(s) below:
When("""A data of Personal is received""") { () =>
// Write code here that turns the phrase above into concrete actions
throw new io.cucumber.scala.PendingException()
}
This happens even though I have the corresponding code in my When step definition. Also, if I don't use a Scenario Outline then it works fine, but I want to use a Scenario Outline for my code.
I am using tags in my feature file to run my test cases. When I run them with the command sbt test #tag1, the test cases execute fine, but once they have all finished running I get the following error on the command line:
[error] Expected ';'
[error] #tag1
I tried putting a ";" after the tag but I still get the same error.
What is this issue and how can I resolve it?
I have 4-5 feature files in my application, which means 4-5 tags. As of now, for the test case I want to run, I give the path of the feature file and "glue" it to the step definitions in my Runner class. How can I provide all the tags in my Runner class so that my application runs all the test cases one by one when started?
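A JUnit-based runner class of the kind described above typically looks something like the following sketch (assuming the Cucumber 5+ JUnit runner; the feature path, glue package, and tag expression are placeholders):
import io.cucumber.junit.{Cucumber, CucumberOptions}
import org.junit.runner.RunWith

// Placeholders throughout: adjust the feature path, glue package and tags to your project
@RunWith(classOf[Cucumber])
@CucumberOptions(
  features = Array("classpath:features"),        // directory containing the .feature files
  glue = Array("com.example.stepdefinitions"),   // package with the step definitions
  tags = "@tag1 or @tag2"                        // tag expression selecting the scenarios to run
)
class RunnerTest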

You are missing the double quotes around <type>:
When A data of "<type>" is received
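With that change, a step definition like the following should match (a sketch assuming cucumber-scala's ScalaDsl; the parameter name dataType and the body are placeholders):
When("""^A data of "([^"]*)" is received$""") { (dataType: String) =>
  // dataType receives "Personal" or "Professional" from the Examples table
}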

Just some general advice.
When cuking, keep things as simple as possible; focus on clarity and simplicity, and do not worry about repetition.
Your task would be much simpler if you wrote two simple scenarios:
Scenario: Personal data
  Given Portal is running
  When personal data is received
  Then personal data should be persisted

Scenario: Professional data
  ...
Secondly, don't use tags to run your features; you don't need tags yet.
You can cuke much more effectively if you avoid scenario outlines, regexes, tags, transforms, and so on. The main power of Cucumber is in using natural language to express yourself clearly. Focus on that and keep it simple.
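The matching step definitions for the simple scenarios above can then be plain strings with no regular expressions at all (a sketch assuming cucumber-scala's ScalaDsl; the bodies are placeholders):
import io.cucumber.scala.{EN, ScalaDsl}

class PortalSteps extends ScalaDsl with EN {

  Given("Portal is running") { () =>
    // start or verify the portal (placeholder)
  }

  When("personal data is received") { () =>
    // send a personal data payload to the portal (placeholder)
  }

  Then("personal data should be persisted") { () =>
    // assert that the personal record was stored (placeholder)
  }
}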

Related

Unable to run experiment on Azure ML Studio after copying from different workspace

My simple experiment reads from an Azure Storage Table, selects a few columns, and writes to another Azure Storage Table. This experiment runs fine on the workspace (let's call it WorkSpace1).
Now I need to move this experiment as-is to another workspace (call it WorkSpace2) using PowerShell, and I need to be able to run the experiment there.
I am currently using this library: https://github.com/hning86/azuremlps
Problem:
When I copy the experiment using 'Copy-AmlExperiment' from WorkSpace1 to WorkSpace2, the experiment and all its properties get copied except the Azure Table account key.
This experiment runs fine if I manually enter the account key for the Import/Export modules on studio.azureml.net.
But I am unable to do this via PowerShell. If I export (Export-AmlExperimentGraph) the copied experiment from WorkSpace2 as JSON, insert the AccountKey into the JSON file, and import it (Import-AmlExperiment) into WorkSpace2, the experiment fails to run.
In PowerShell I get an "Internal Server Error : 500".
While running on studio.azureml.net, I get the notification as "Your experiment cannot be run because it has been updated in another session. Please re-open this experiment to see the latest version."
Is there any way to move an experiment with external dependencies to another workspace and run it?
Edit: I think the problem is something to do with how the experiment handles the AccountKey. When I enter it manually, it is converted into a JSON array comprising RecordKey and IndexInRecord. But when I upload the JSON experiment with the AccountKey, it remains unchanged and does not get resolved into RecordKey and IndexInRecord.
For me, publishing the experiment as a private experiment in the Cortana gallery is one of the most useful options. Only people with the link can see and add the experiment from the gallery. At the link below I've explained the steps I followed.
https://naadispeaks.wordpress.com/2017/08/14/copying-migrating-azureml-experiments/
When the experiment is copied, the password is wiped for security reasons. If you want to programmatically inject it back, you have to set another metadata field to signal that what you are setting is a plain-text password, not an encrypted one. If you export the experiment in JSON format, you can easily figure this out.
I think I found the reason why you are unable to export the credentials back.
Export the JSON graph to your local disk, then update whatever parameter has to be updated.
You will also notice that the credentials are stored as 'Placeholders' instead of 'Literals'; hence it makes sense to change them to Literals.
You can do this by traversing the JSON to find the relevant parameters you need to update.
Here is a brief illustration of changing the Placeholder to a Literal.

How to generate junit.xml using minil test?

I am using Minilla to generate my module scaffold. While it works very well for me, I'd like to get a generated junit.xml file with the test results when running minil test.
I found that it is possible to specify tap_harness_args in the minil.toml configuration file. I tried
[tap_harness_args]
formatter_class = "TAP::Formatter::JUnit"
which ensures that the test output is formatted in JUnit format. That works well, but the output is mixed up with Minilla's other output, so I can't easily redirect it to a file. Is there a way to get only the test results into a file? Optimally, I'd still like to see the test results in TAP format in my terminal output at the same time (but I can live without that).
Just a guess: can't you use the a<file.tgz> option in HARNESS_OPTIONS in TAP::Harness::Env?

ScalaCheck generator for web URLs

Wondering if anyone has had to do this using ScalaCheck: create a custom generator that spits out a large number of URLs. There is a caveat: I want to test a service which accepts ONLY valid/working web URLs. I am thinking that collecting a large number of valid external web URLs in a file and somehow feeding them into the custom generator is the only way to make this possible?
Something like:
val genUrls = for {
  url <- Gen.oneOf("URL1", "URL2", "URL3")
} yield url
Does this sound like a reasonable, and more importantly doable, approach?
UrlGen seems to give you that, since it uses a list of top URLs, but I cannot find the artifact in any Maven repo. I raised an issue.
PS
You can always add .suchThat(exists), where exists makes sure the URL exists during the test, or better yet do it once, before the start of the tests, i.e. make sure all of them exist up front.
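A minimal sketch of that idea, assuming the known-good URLs live in a local file (the file name and the exists check are illustrative, and Gen.oneOf needs a non-empty collection):
import org.scalacheck.Gen
import scala.io.Source

// Load the pre-validated URLs once, before any tests run
val urls: Seq[String] = Source.fromFile("valid-urls.txt").getLines().toSeq

// Generator that picks from the known-good list
val genUrls: Gen[String] = Gen.oneOf(urls)

// Optional: re-check liveness at generation time (slower, and discards failing values)
def exists(url: String): Boolean = {
  // placeholder check; a real version might issue an HTTP HEAD request
  url.startsWith("http")
}
val genLiveUrls: Gen[String] = genUrls.suchThat(exists)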

How do I get the SpecFlow Scenario to be reported when the test runs?

I've managed to tune the output from my SpecFlow tests so that it reads nicely, with just the steps reported plus failures. But it's still pretty unreadable without the Feature and Scenario names also being reported.
Looking at the generated code, it appears that the Feature and Scenario names are encoded as NUnit DescriptionAttributes.
Can I configure SpecFlow or NUnit to also report these to stdout, so I get a nicely flowing "story-like" output?
If you define an extra method in your step definition class as follows, then NUnit will report the feature and scenario text:
[BeforeScenario]
public void OutputScenario()
{
    // Runs before every scenario and writes the feature and scenario titles to stdout
    Console.WriteLine("Feature: " + FeatureContext.Current.FeatureInfo.Title);
    Console.WriteLine(FeatureContext.Current.FeatureInfo.Description);
    Console.WriteLine("\r\nScenario: " + ScenarioContext.Current.ScenarioInfo.Title);
}
I hope this helps.

Programmatically Gathering NUnit results

I am running some NUnit tests automatically when my nightly build completes. I have a console application which detects the new build, copies the built MSIs to a local folder, and deploys all of my components to a test server. After that, I have a bunch of tests in NUnit DLLs that I run by executing nunit-console.exe via Process/ProcessStartInfo. My question is: how can I programmatically get the numbers of total/successful/failed tests?
Did you consider using a continuous integration server like CruiseControl.NET?
It builds and runs the tests for you and displays the results in a web page. If you just want a tool, let nunit-console.exe output the results as XML and parse/transform it with an XSLT script like the ones that come with CruiseControl.
Here is an example of such an XSL file; if you run the transformation on the direct output of nunit-console.exe, you will have to adapt the select statements and remove the cruisecontrol-specific parts.
However it sounds like you might be interested in continuous integration.
We had a similar requirement, and what we did was read the test result XML file that is generated by NUnit:
// Requires the System and System.Xml namespaces
XmlDocument testresultxmldoc = new XmlDocument();
testresultxmldoc.Load(this.nunitresultxmlfile);
XmlNode mainresultnode = testresultxmldoc.SelectSingleNode("test-results");
this.MachineName = mainresultnode.SelectSingleNode("environment").Attributes["machine-name"].Value;
// The counts are attributes on the root <test-results> element
int ignoredtests = Convert.ToInt16(mainresultnode.Attributes["ignored"].Value);
int errors = Convert.ToInt16(mainresultnode.Attributes["errors"].Value);
int failures = Convert.ToInt16(mainresultnode.Attributes["failures"].Value);
int totaltests = Convert.ToInt16(mainresultnode.Attributes["total"].Value);
int invalidtests = Convert.ToInt16(mainresultnode.Attributes["invalid"].Value);
int inconclusivetests = Convert.ToInt16(mainresultnode.Attributes["inconclusive"].Value);
We recently had a similar requirement, and wrote a small open source library to combine the results files into one aggregate set of results (as if you had run all of the tests with a single run of nunit-console).
You can find it at https://github.com/15below/NUnitMerger
I'll quote from the release notes for NUnit 2.4.3:
The console runner now uses negative return codes for errors encountered in trying to run the test. Failures or errors in the test themselves give a positive return code equal to the number of such failures or errors.
(emphasis mine). The implication here is that, as is usual in bash, a return of 0 indicates success, and non-zero indicates failure or error (as above).
HTH