Robot Framework: How to add documentation for tests written in BDD + data-driven style

Following is my test written in Robot Framework. It uses the BDD + data-driven approach and runs perfectly fine, but how do I add the [Documentation] setting for each test? I want the documentation displayed for each test case in report.html. How can I achieve this?
*** Settings ***
Resource           …/…/…/resources/high-level-api.robot
Library            Collections
Test Template      this is my test

*** Test Cases ***
TC1    ${data1}    ${data2}
TC2    ${data3}    ${data4}
TC3    ${data5}    ${data6}

*** Keywords ***
this is my test
    [Arguments]    ${valid_data1}    ${valid_data2}
    When perform step1    ${valid_data1}
    And step2    ${valid_data1}
    Then I should get    ${valid_data2}

Use the [Documentation] setting from the user guide.
It should look something like this:
*** Settings ***
Test Template      Log Value

*** Test Cases ***    VALUE
Example    dummy
    [Documentation]    first example
Example 2    Value
    [Documentation]    second example

*** Keywords ***
Log Value
    [Arguments]    ${value}
    Log    ${value}
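Applied to the template from the question (keyword and variable names carried over from it; an untested sketch), the same pattern gives each data-driven test its own documentation:

```robotframework
*** Settings ***
Test Template      this is my test

*** Test Cases ***
TC1
    [Documentation]    First data set.
    ${data1}    ${data2}
TC2
    [Documentation]    Second data set.
    ${data3}    ${data4}

*** Keywords ***
this is my test
    [Arguments]    ${valid_data1}    ${valid_data2}
    When perform step1    ${valid_data1}
    And step2    ${valid_data1}
    Then I should get    ${valid_data2}
```

Each [Documentation] then shows up under its own test case in report.html and log.html.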


Robot Framework Stubborn "Element did not appear in 5 seconds" error

I'm writing a Robot Framework script to fill out a form. It's a simple script using a small dictionary; everything else is working, but I can't seem to interact with this one field. I've tried almost a dozen ways to interact with the element, but none of them work. Can you help me out here? Here's my code.
*** Settings ***
Documentation      Frevvo Form
Library            Zoomba.GUILibrary
Library            Process
Resource           ../../Pages/resource.robot
Suite Setup        Browser Setup    ${url}    ${browser}

*** Keywords ***
Student Form Data
    [Arguments]    ${Frevvo}
    Run Keyword If    '${Frevvo}[TulaneID]'!='${EMPTY}'    Wait For And Input Text    ${TulaneID}    ${Frevvo}[TulaneID]
    Run Keyword If    '${Frevvo}[Term]'!='${EMPTY}'    Wait For And Input Text    ${Term}    ${Frevvo}[Term]

*** Variables ***
${TulaneID}    //*[contains(name(),'TulaneID')]
# Already tried the below - no success:
# //input[@name='TulaneID']
# //input[@id='w11562aab5b4b2']
# //*[@id="w11562aab5b4b2"]
# //*[@name="TulaneID"]
# //*[@id="_5le4NoQ-EeyE6Z5Sq_0wTQ"]

*** Test Cases ***
TC 001 Basic Frevvo
    Set To Dictionary    ${Frevvo}    TulaneID=211003560    Term=2022 Spring
    Login
    Student Form Data    ${Frevvo}
And here is the HTML of the page. The element I need is in the highlighted part
Try the following (note the attribute axis is @, not #):

*** Variables ***
${TulaneID}    xpath=//input[@id='w11562aab5b4b2']

BDD Cucumber issues

I have recently started working with BDD Cucumber. I am using Scala for writing test cases. I am trying to use a Scenario Outline and pass parameters into the step definitions. My code is as follows.
Scenario Outline: Data is parsed and persisted
  Given Portal is running
  When A data of <type> is received
  Then The data of <type> with <Id> should be parsed and persisted

  Examples:
    | type         | Id |
    | Personal     | 1  |
    | Professional | 2  |
Now, in my When condition, I am trying to read these parameters as follows:
When("""^A data of "([^"]*)" is received$""") {
  (dataType: String) =>   // `type` is a reserved word in Scala, so the parameter is renamed
    // My code
}
On running my code I am getting the following error every time:
io.cucumber.junit.UndefinedStepException: The step "A data of Personal is received" is undefined. You can implement it using the snippet(s) below:
When("""A data of Personal is received""") { () =>
// Write code here that turns the phrase above into concrete actions
throw new io.cucumber.scala.PendingException()
}
Though I have my code in the When. Also, if I don't use a Scenario Outline then it works fine, but I want to use a Scenario Outline for my code.
I am using tags in my feature file to run my test cases. When I run them with the command sbt test #tag1, the test cases execute fine, but once they have all finished running, on the cmd I get the following error:
[error] Expected ';'
[error] #tag1
I tried putting ";" after the tag but still get the same error.
What is this issue and how can I resolve it?
I have 4-5 feature files in my application, which means 4-5 tags. As of now, for the test case I want to run, I give the path of the feature file and "glue" it with the step definition in my Runner class. How can I provide all the tags in my Runner class so that my application runs all the test cases one by one when started?
You are missing the double quotes around <type>:
When A data of "<type>" is received
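With the quotes in place, the regex's capture group lines up with the step text. In newer cucumber-scala versions you can also replace the regex with a Cucumber expression; {string} likewise requires the value to be double-quoted in the feature file (a sketch, assuming a cucumber-scala version that supports Cucumber expressions):

```scala
// {string} matches a double-quoted value and passes it in without the quotes.
When("A data of {string} is received") { (dataType: String) =>
  println(s"received $dataType data")
}
```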
Just some general advice.
When cuking, keep things as simple as possible: focus on clarity and simplicity, and do not worry about repetition.
Your task would be much simpler if you wrote two simple scenarios:
Scenario: Personal data
  Given Portal is running
  When personal data is received
  Then personal data should be persisted

Scenario: Professional data
  ...
Secondly, don't use tags to run your features; you don't need tags yet.
You can cuke much more effectively if you avoid scenario outlines, regexes, tags, transforms, and so on. The main power of Cucumber is using natural language to express yourself clearly. Focus on that and keep it simple.

How to produce business rule output that can be examined in the ODM Rule Execution Server Console?

I am new to ODM 8.5 (the successor to JRules), and I am trying to test some rules in the ODM Rule Execution Server Console. At this point, I'm merely trying to confirm that my rule changes have been deployed to the RES successfully. According to ODM's Testing Ruleset Execution help page, I should be able to examine the Output text box to see "strings that are written to print.out" from the web page under Explorer > RuleApps > RuleApp > Ruleset > Test Ruleset. I've deployed a rule containing the following snippet:
However, after executing the rule, I don't see the output of the println in the Output box. Is println what the documentation means by "print.out"? I get syntax errors if I try to replace "System.out.println" with "print.out". How can I get simple debug output to appear in the Output box?
The note method will cause output to go to the Output text box of the ODM Rule Execution Server Console, e.g., use:
note("*** This is the rule modification ***");
You can use the Decision Warehouse (DW) in the RES console.
First you need to activate tracing in the ruleset properties.
Then, after an execution, you can search the DW for execution information such as the rules executed, data values, etc. Check the online documentation for details (look for IBM ODM 8.5).
Please note that this may slow down your decisions, so it is better not to use this feature in production systems. Hope this helps.

Using Buildbot, how do I change "shell_1" to something else?

Before my build starts, each of my ShellCommand steps is labelled shell_\d+. It would be nice if Buildbot used the step description instead of the auto-generated shell label. Also, when we get an email notification it says BUILD FAILED: failed shell_3, but it would be nicer if it said BUILD FAILED: unit test xyz failed.
Is there a way to change this shell ID to something else? Perhaps by creating a custom build step and overriding a function? I'm not sure where this ID comes from exactly.
You give the step a name in the addStep method, for example:
f = buildbot.process.factory.BuildFactory()
f.addStep(buildbot.steps.shell.ShellCommand(name='Hello',
                                            command=['echo', 'Hello World']))
I'd implement the second part as a log observer.
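The log-observer idea can be sketched without any Buildbot imports: the class below just collects failing-test lines and builds a step summary. In a real setup you would feed it from a log observer's per-line hook (e.g. outLineReceived in the 0.8-era LogLineObserver API) inside a custom ShellCommand subclass; the FAIL: marker and that wiring are assumptions for illustration, not Buildbot API.

```python
class FailureScanner:
    """Collects failing-test lines from a step's stdout, line by line."""

    def __init__(self, marker='FAIL:'):
        self.marker = marker
        self.failures = []

    def line_received(self, line):
        # Call this from the observer's per-line hook; remember failing tests.
        if line.startswith(self.marker):
            self.failures.append(line[len(self.marker):].strip())

    def summary(self, step_name):
        # Step text for the build status / notification email.
        if self.failures:
            return 'BUILD FAILED: %s failed: %s' % (
                step_name, ', '.join(self.failures))
        return '%s passed' % step_name
```

For output containing a line FAIL: test_xyz, summary('unit test xyz') yields BUILD FAILED: unit test xyz failed: test_xyz instead of the generic shell_3 label.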

When running multiple tags with NUnit Console Runner and SpecFlow I get incorrect results

This is a follow-up to my earlier questions on setting up tags (Can I use tags in SpecFlow to determine the right environment to use?) and setting up variables from those tags (How to set up a URL variable to be used in NUnit/SpecFlow framework).
I've set up some variables to aid in populating my NUnit tests, but I find that when the NUnit runner finds a test that fits the first tag, it runs it with the settings of the second tag. Since the tags are important to me not only for knowing which test to run but also which variables to use, this is causing me problems.
So if I have the following tags:
#first
#first #second
#second
If I run #second everything is fine. If I run #first, any scenario that has only #first is fine, but for scenarios tagged with both #first #second the scenario is run (because #first is there), yet it uses the parameters for #second. Since I am running the DLL through the NUnit console and the tests are written through SpecFlow, I am not sure where the issue may lie.
Does anyone have advice on setting up tests to run like this?
You've not been very specific, but it sounds like you have a feature file like this:
#first
Scenario: A - Something Specific happens under the first settings
  Given ...etc...

#second
Scenario: B - Something Specific happens under the second settings
  Given ...etc...

#first #second
Scenario: C - Something general happens under the first and second settings
  Given ...etc...
It looks like you are selecting tests to run in NUnit by running all the tests in the "first" category.
If you set up event definitions like this:
[BeforeFeature("first")]
public static void FirstSettings()
{ ... }

[BeforeFeature("second")]
public static void SecondSettings()
{ ... }
then when you execute scenario C, both FirstSettings() and SecondSettings() will be executed before it. This happens regardless of whether you used the #second category to select the test to run under NUnit.
This is almost certainly the reason you are seeing the second settings applied to your test with both tags - I expect the second settings overwrite the first ones, right?
My only advice for setting up tests like this is that binding events and so on to specific tags can be useful, but should be used as little as possible. Instead, make your individual step definitions reusable, and set up your test environment, where possible, with Given steps.
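If the settings really must differ per tag, one workaround is a single hook that inspects the scenario's tags and applies exactly one configuration. ScenarioInfo.Tags is the real SpecFlow API; the Settings helper and the precedence rule below are assumptions for illustration:

```csharp
using System.Linq;
using TechTalk.SpecFlow;

[Binding]
public class SettingsHooks
{
    [BeforeScenario]
    public static void ApplySettings(ScenarioContext scenarioContext)
    {
        string[] tags = scenarioContext.ScenarioInfo.Tags;

        // Decide explicitly which settings win when both tags are present,
        // instead of letting two [BeforeFeature] hooks overwrite each other.
        if (tags.Contains("first"))
            Settings.Apply("first");       // hypothetical helper
        else if (tags.Contains("second"))
            Settings.Apply("second");
    }
}
```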