Why does excluding a tag set from a variable not work in Robot Framework? - tags

I am trying to exclude specific test cases using a tag that comes from a variable. I have added an initialization file, init.robot:
*** Settings ***
Suite Setup    INIT
Test Setup

*** Keywords ***
INIT
    Set Global Variable    ${hw_version}    v1
and a file with the test cases:
*** Test Cases ***
excludetest
    [Tags]    ${hw_version}
    [Setup]
    Log    test passed

includetest
    No Operation
Despite excluding the v1 tag with the command robot -e v1 -s Test-tag ., all tests are executed.

It is because the choice to include or exclude tests happens before the first test is run. The suite setup has not executed at that point, so the tag is not yet set on the test and thus can't be used to include or exclude it.
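If the variable is supplied on the command line instead of in a suite setup, it is available while the files are parsed, so the tag resolves early enough for exclusion to work; a minimal sketch under that assumption:

```
robot --variable hw_version:v1 --exclude v1 .
```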

In VS Code, for a Mix unit test, how to use the `--only` parameter?

The purpose is to debug only one unit test in the .exs file, so it is necessary to ignore the other unit tests in the same file.
My previous solution was to comment out the other unit tests, but the downside is that I can't then find those tests easily through VS Code's outline view.
From the Mix docs, I found that the mix test command has --include and --only options.
I adjusted the launch.json file, updating the task args to --trace --only :external, and updated the .exs file, but when running mix test it gives this error message:
helloworld
(Debugger) Task failed because an exception was raised:
** (Mix.Error) Could not invoke task "test": 1 error found!
--trace --only :external : Unknown option
(mix 1.13.4) lib/mix.ex:515: Mix.raise/2
(elixir_ls_debugger 0.10.0) lib/debugger/server.ex:1119: ElixirLS.Debugger.Server.launch_task/2
Then I changed launch.json to pass "--trace --only :external" as a single string, with a similar error message:
(Debugger) Task failed because an exception was raised:
** (Mix.Error) Could not invoke task "test": 1 error found!
--trace --only :external : Unknown option
(mix 1.13.4) lib/mix.ex:515: Mix.raise/2
(elixir_ls_debugger 0.10.0) lib/debugger/server.ex:1119: ElixirLS.Debugger.Server.launch_task/2
I use a plugin called Elixir Test. It has a few nice features including what you are asking for.
To run a single test place your cursor within the code of the test, then select "Elixir Test: Run test at cursor" from the command palette.
Another helpful command is: "Elixir Test: Jump". If you are editing a module file, this command will jump to the test file corresponding to the module. It will optionally create the skeleton for the test file if you haven't created it yet.
It is caused by a syntax problem: every parameter should be a separate array element, as follows:
"taskArgs": [
"--trace", "--warnings-as-errors", "--only", "external"
],
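For context, taskArgs sits inside a mix_task launch configuration; a hedged sketch of the relevant launch.json entry (the surrounding keys follow common ElixirLS examples and may need adjusting for your setup):

```
{
    "type": "mix_task",
    "name": "mix test",
    "request": "launch",
    "task": "test",
    "taskArgs": ["--trace", "--only", "external"],
    "projectDir": "${workspaceRoot}"
}
```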

Running a sample script using Robot Framework

I am fairly new to Robot Framework. I am trying to run the following code using the RIDE IDE but am facing issues. Could someone kindly help me with how to get this done?
Code:
*** Settings ***
*** Variables ***
*** Test Cases ***
Setting Variables
#| Example of running a python script
${result}= run process | python | C:\Users\name\Desktop\hello.py
#| | Should be equal as integers | ${result.rc} | 0
#| | Should be equal as strings | ${result.stdout} | Hello World
*** Keywords ***
I still think you should include more details in your question, namely:
the content of hello.py
the error message you get
Nevertheless, I think your problem will be somewhere around these:
1/ Your Settings section is empty, but you need the Process library in order to execute the Run Process keyword.
2/ Your hello.py is wrong and doesn't print what you think it does.
3/ Your absolute path is wrong; the python file resides somewhere else.
4/ You're missing some modules you need in order to execute RF scripts. Please search on this site; similar questions about missing modules have been asked many times.
All in all, the whole runnable example (provided you have all the prerequisites installed) would be:
*** Settings ***
Library    Process

*** Test Cases ***
Setting Variables
    ${result}=    Run Process    python    hello.py
    Should Be Equal As Integers    ${result.rc}    0
    Should Be Equal As Strings    ${result.stdout}    Hello World
It's a good practice not to use absolute paths, so I refer to hello.py differently. The content of the file is:
hello.py
print('Hello World')
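For readers more familiar with Python, Run Process behaves much like subprocess.run; a minimal sketch of the same check, inlining the script via -c so the example is self-contained:

```python
import subprocess
import sys

# Equivalent of: Run Process    python    hello.py
result = subprocess.run(
    [sys.executable, "-c", "print('Hello World')"],
    capture_output=True,
    text=True,
)
print(result.returncode)        # rc checked by Should Be Equal As Integers
print(result.stdout.strip())    # stdout checked by Should Be Equal As Strings
```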

Robot Framework: conditional import of resource

Is it possible to do a conditional import of a resource file in robot framework? Depending on the test environment, I want to import a resource file with different variables. The variable could be read from the robot CLI (e.g. robot --variable VAR:production myTestSuite)
Illustrating Example:
*** Settings ***
Resource    variables_url_environment_a.robot
Resource    variables_url_environment_b.robot
Run keyword if    '${VAR}'=='production'    Import resource    variables_url_environment_b.robot
You could use an argument file that holds different environment variables. For example:
QA.args
--variable Enviroment:http://sample.url/QA:1111
--variable USER:John
--variable PASSWORD:John
Then in your Robot.test
*** Test Cases ***
Run Argument File
    Go To    ${Enviroment}
    Login With User    ${USER}    ${PASSWORD}
NOTE: This is just an example of argument file use; Login With User is not an actual keyword.
Then execute the command:
robot --argumentfile "QA.args" tests
You can also overwrite the variables on the command line.
robot --argumentfile "QA.args" --variable Enviroment:http://sample.url/Staging:1111 tests
You could use a variable in the name of import file.
Set the value of the variable from the pom.xml file if you are using Maven.
Something like below, where ${PLATFORM} is a variable:
*Settings*
Resource    ../platforms/settings_${PLATFORM}.tsv
Resource    ../platforms/settings_default.tsv

*Variables*
${PLATFORM}    ${ENV_PLATFORM}
Below is a snippet from the pom.xml:
....
<env.platform>Platform1.</env.platform>
....
<configuration>
  <variables>
    <param>ENV_PLATFORM:${env.platform}</param>
  </variables>
</configuration>
....
Also, this way you can pass the platform value from Jenkins (if used) using -Denv.platform=Platform_5.
I don't think conditional import is possible in Robot Framework in the way you'd like. However, what you can do instead of importing environment files as resources is pass them to your test as a --variablefile.
How I would do it:
variables_url_environment_a.py
msg='env a'
variables_url_environment_b.py
msg='env b'
Test.robot
*** Settings ***

*** Variables ***

*** Test Cases ***
print message to console
    print msg

*** Keywords ***
print msg
    log to console    ${msg}
Now just run your test suite as per the enviroment you need by creating a simple python script.
Python_run_script
import subprocess

var = 'Production'
command_a = 'robot --variablefile variables_url_environment_a.py Test.robot'
command_b = 'robot --variablefile variables_url_environment_b.py Test.robot'
if var == 'Production':
    procId = subprocess.Popen(command_a.split(), stdout=subprocess.PIPE)
else:
    procId = subprocess.Popen(command_b.split(), stdout=subprocess.PIPE)
For more information about how to use --variablefile, you can also refer to this post:
https://automationlab0000.wordpress.com/2018/11/20/how-to-pass-python-variable-file-in-robotframework/
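A related option: a Python variable file can define get_variables, which accepts an argument at import time (robot --variablefile variables.py:b), so one file can serve several environments. A minimal sketch; the file name and URL values here are made up for illustration:

```python
# variables.py - a Robot Framework variable file (hypothetical values).
# Select the environment with: robot --variablefile variables.py:b Test.robot
def get_variables(env="a"):
    urls = {"a": "http://env-a.example", "b": "http://env-b.example"}
    return {"msg": "env %s" % env, "BASE_URL": urls[env]}
```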
Run Keyword If    '${VAR}' == 'iOS'    Import Library    a.py

How can I hide skipped tasks output in Ansible

I have Ansible role, for example
---
- name: Deploy app1
  include: deploy-app1.yml
  when: 'deploy_project == "{{ app1 }}"'

- name: Deploy app2
  include: deploy-app2.yml
  when: 'deploy_project == "{{ app2 }}"'
But I deploy only one app per role call. When I deploy several apps, I call the role several times, and every time there is a lot of skipped-task output (from tasks that do not pass their condition), which I do not want to see. How can I avoid it?
I'm assuming you don't want to see the skipped tasks in the output while running Ansible.
Set this to false in the ansible.cfg file.
display_skipped_hosts = false
Note: it will still output the name of the task, although it will not display "skipped" anymore.
UPDATE: by the way you need to make sure ansible.cfg is in the current working directory.
Taken from the ansible.cfg file.
ansible will read ANSIBLE_CONFIG,
ansible.cfg in the current working directory, .ansible.cfg in
the home directory or /etc/ansible/ansible.cfg, whichever it
finds first.
So ensure you are setting display_skipped_hosts = false in the right ansible.cfg file.
Let me know how you go
Since Ansible 2.4, a callback plugin named full_skip was added to suppress the skipped task names and the "skipping" keyword in the output. You can try the configuration below:
[defaults]
stdout_callback = full_skip
Ansible allows you to control its output by using custom callbacks.
In this case you can simply use the skippy callback which will not output anything on a skipped task.
That said, skippy is now deprecated and will be removed in ansible v2.11.
If you don't mind losing colours you can elide the skipped tasks by piping the output through sed:
ansible-playbook whatever.yml | sed -nr '/^TASK/{h;n;/^skipping:/{n;b};H;x};p'
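The idea behind that sed one-liner, dropping each TASK header that is immediately followed by a skipping line, can be sketched in Python; this is a rough equivalent for illustration, not a byte-for-byte match of the sed program:

```python
def filter_skipped(lines):
    """Drop 'TASK [...]' headers immediately followed by a 'skipping:' line."""
    out, i = [], 0
    while i < len(lines):
        if (lines[i].startswith("TASK")
                and i + 1 < len(lines)
                and lines[i + 1].startswith("skipping:")):
            i += 2  # drop both the TASK header and its skipping line
        else:
            out.append(lines[i])
            i += 1
    return out

sample = [
    "TASK [app1 : deploy]",
    "skipping: [host1]",
    "TASK [app2 : deploy]",
    "ok: [host1]",
]
print(filter_skipped(sample))  # → ['TASK [app2 : deploy]', 'ok: [host1]']
```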
If you are using roles, you can use when to cancel the include in main.yml
# roles/myrole/tasks/main.yml
- include: somefile.yml
  when: somevar is defined

# roles/myrole/tasks/somefile.yml
- name: this task will only run (and be seen in the output) if somevar is defined
  debug:
    msg: "Hello World"

How to get test name and test result during run time in pytest

I want to get the test name and test result during runtime.
I have setup and tearDown methods in my script. In setup, I need to get the test name, and in tearDown I need to get the test result and test execution time.
Is there a way I can do this?
You can, using a hook.
I have these files in my test directory:
./rest/
├── conftest.py
├── __init__.py
└── test_rest_author.py
In test_rest_author.py I have three functions, startup, teardown and test_tc15, but I only want to show the result and name for test_tc15.
Create a conftest.py file if you don't have one yet and add this:
import pytest
from _pytest.runner import runtestprotocol

def pytest_runtest_protocol(item, nextitem):
    reports = runtestprotocol(item, nextitem=nextitem)
    for report in reports:
        if report.when == 'call':
            print('\n%s --- %s' % (item.name, report.outcome))
    return True
The hook pytest_runtest_protocol implements the runtest_setup/call/teardown protocol for the given test item, including capturing exceptions and calling reporting hooks. It runs for every test item, covering its setup, call, and teardown phases.
If you run your script you can see the result and name of the test:
$ py.test ./rest/test_rest_author.py
====== test session starts ======
/test_rest_author.py::TestREST::test_tc15 PASSED
test_tc15 --- passed
======== 1 passed in 1.47 seconds =======
See also the docs on pytest hooks and conftest.py.
unittest.TestCase.id() returns the complete details, including the class name and method name; from this we can extract the test method name.
Getting the result during teardown can be achieved by checking whether executing the test raised an exception: if sys.exc_info() reports no exception the test passed, otherwise it failed.
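A minimal sketch of extracting the test name from id(); the class and test names are made up for illustration:

```python
import unittest

class DemoTest(unittest.TestCase):
    def setUp(self):
        # id() returns e.g. '__main__.DemoTest.test_example';
        # the last dotted component is the test method name.
        self.test_name = self.id().split('.')[-1]

    def test_example(self):
        self.assertEqual(self.test_name, "test_example")

# Run the case programmatically and inspect the outcome.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DemoTest)
result = unittest.TestResult()
suite.run(result)
print(result.wasSuccessful())  # → True
```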
Using pytest_runtest_protocol as suggested, together with a fixture marker, solved my problem. In my case it was enough to use reports = runtestprotocol(item, nextitem=nextitem) within my pytest-html fixture. To sum up, the item element contains the information you need.
Many thanks.