Robot Framework: conditional import of resource

Is it possible to do a conditional import of a resource file in Robot Framework? Depending on the test environment, I want to import a resource file with different variables. The variable could be read from the robot CLI (e.g. robot --variable VAR:production myTestSuite).
Illustrative example:
*** Settings ***
Resource    variables_url_environment_a.robot
Resource    variables_url_environment_b.robot
Run Keyword If    '${VAR}' == 'production'    Import Resource    variables_url_environment_b.robot

You could use an argument file that holds the environment-specific variables. For example:
QA.args
--variable Environment:http://sample.url/QA:1111
--variable USER:John
--variable PASSWORD:John
Then, in your Robot test file:
*** Test Cases ***
Run Argument File
    Go To    ${Environment}
    Login With User    ${USER}    ${PASSWORD}
NOTE: This is just an example of argument file usage; Login With User is not an actual keyword.
Then execute the command:
robot --argumentfile "QA.args" tests
You can also override the variables on the command line.
robot --argumentfile "QA.args" --variable Environment:http://sample.url/Staging:1111 tests

You could use a variable in the name of the imported file.
Set the value of the variable from the pom.xml file if you are using Maven.
Something like below, where ${PLATFORM} is a variable:
*Settings*
Resource    ../platforms/settings_${PLATFORM}.tsv
Resource    ../platforms/settings_default.tsv

*Variables*
${PLATFORM}    ${ENV_PLATFORM}
Below is a snippet from the pom.xml:
....
<env.platform>Platform1.</env.platform>
....
<configuration>
  <variables>
    <param>ENV_PLATFORM:${env.platform}</param>
  </variables>
</configuration>
....
Also, this way you can pass the value of the platform from Jenkins (if used) using -Denv.platform=Platform_5.
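For completeness, the same pass-through can be scripted without Maven. Below is a hedged Python sketch that reads a hypothetical ENV_PLATFORM environment variable (as Jenkins could set it) and forwards it to Robot Framework as the variable used in the Resource path above; the tests directory name is also an assumption:

import os
import subprocess

# Hypothetical launcher mirroring the Maven configuration above:
# the value ends up in ${ENV_PLATFORM}, which selects settings_${PLATFORM}.tsv.
platform = os.environ.get("ENV_PLATFORM", "default")
subprocess.run(
    ["robot", "--variable", f"ENV_PLATFORM:{platform}", "tests"],
    check=True,
)

Because the value arrives via --variable, it is already available when the Resource line is parsed, so the matching settings file gets imported.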

I don't think conditional import is possible in Robot Framework in the way you would like.
However, what you can do instead of importing the environment files as resources is pass them to your tests as a --variablefile.
Here is how I would do it:
variables_url_environment_a.py
msg='env a'
variables_url_environment_b.py
msg='env b'
Test.robot
*** Settings ***

*** Variables ***

*** Test Cases ***
print message to console
    print msg

*** Keywords ***
print msg
    Log To Console    ${msg}
Now just run your test suite for the environment you need by creating a simple Python script:
Python_run_script
import subprocess

var = 'Production'
# Pick the variable file that matches the target environment.
# 'pybot' is the legacy launcher name; newer Robot Framework versions use 'robot'.
command_a = ['pybot', '-V', 'variables_url_environment_a.py', 'Test.robot']
command_b = ['pybot', '-V', 'variables_url_environment_b.py', 'Test.robot']
if var == 'Production':
    procId = subprocess.Popen(command_a, stdout=subprocess.PIPE)
else:
    procId = subprocess.Popen(command_b, stdout=subprocess.PIPE)
For more information about how to use --variablefile, you can also refer to the URL below:
https://automationlab0000.wordpress.com/2018/11/20/how-to-pass-python-variable-file-in-robotframework/
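As a hedged alternative to keeping two separate variable files, Robot Framework also supports dynamic variable files: a single Python file exposing a get_variables function, whose argument can be supplied on the command line after a colon (for example robot --variablefile env_vars.py:production Test.robot). A minimal sketch, where env_vars.py is a hypothetical file name:

# env_vars.py (hypothetical) - dynamic variable file for Robot Framework.
# Robot calls get_variables() with whatever is written after the colon
# in --variablefile env_vars.py:<environment>.
def get_variables(env="test"):
    values = {
        "test": {"msg": "env a"},
        "production": {"msg": "env b"},
    }
    return values[env]

This keeps all environments in one place while still letting the caller choose at run time.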

Run Keyword If    '${VAR}' == 'iOS'    Import Library    a.py
(For resource files, the BuiltIn library provides the analogous Import Resource keyword.)

Related

In VS Code, for mix unit tests, how to use the `--only` parameter?

The purpose is to debug only one unit test in an .exs file, so the other unit tests in the same file need to be ignored.
My previous solution was to comment out the other unit tests, but the downside is that I can no longer find them easily through VS Code's outline view.
From the mix documentation, the mix test task has --include and --only options.
I adjusted the launch.json file, setting the task args to --trace --only :external, and updated the .exs file, but when running mix test it gives this error message:
(Debugger) Task failed because an exception was raised:
** (Mix.Error) Could not invoke task "test": 1 error found!
--trace --only :external : Unknown option
(mix 1.13.4) lib/mix.ex:515: Mix.raise/2
(elixir_ls_debugger 0.10.0) lib/debugger/server.ex:1119: ElixirLS.Debugger.Server.launch_task/2
Then I changed launch.json to "--trace --only :external" (a single string), and got essentially the same error.
I use a plugin called Elixir Test. It has a few nice features, including what you are asking for.
To run a single test, place your cursor within the code of the test, then select "Elixir Test: Run test at cursor" from the command palette.
Another helpful command is: "Elixir Test: Jump". If you are editing a module file, this command will jump to the test file corresponding to the module. It will optionally create the skeleton for the test file if you haven't created it yet.
It is caused by a syntax problem. Every parameter should be a separate element, as follows:
"taskArgs": [
"--trace", "--warnings-as-errors", "--only", "external"
],

Why doesn't excluding a tag with a variable work in Robot Framework?

I am trying to exclude specific test cases using tags with a variable. I have added an initialization file init.robot:
*** Settings ***
Suite Setup    INIT
Test Setup

*** Keywords ***
INIT
    Set Global Variable    ${hw_version}    v1
And the test cases:
*** Test Cases ***
excludetest
    [Tags]    ${hw_version}
    [Setup]
    Log    test passed

includetest
    No Operation
Despite excluding the v1 tag with the command robot -e v1 -s Test-tag ., all tests are executed.
It is because the choice to include or exclude tests happens before the first test is run. At that point the suite setup has not executed, so the tag variable has not been set and therefore can't be used to include or exclude the test. If the tag needs to depend on the environment, pass the value on the command line instead (e.g. robot --variable hw_version:v1 -e v1 -s Test-tag .), since command-line variables are resolved when the test data is parsed.

How can I debug my python unit tests within Tox with PUDB?

I'm trying to debug a python codebase that uses tox for unit tests. One of the failing tests is proving difficult due to figure out, and I'd like to use pudb to step through the code.
At first thought, one would think to just pip install pudb, then in the unit test code add import pudb and pudb.set_trace(). But that results in a ModuleNotFoundError:
> import pudb
>E ModuleNotFoundError: No module named 'pudb'
>tests/mytest.py:130: ModuleNotFoundError
> ERROR: InvocationError for command '/Users/me/myproject/.tox/py3/bin/pytest tests' (exited with code 1)
Noticing the .tox project folder leads one to realize there's a site-packages folder within tox, which makes sense since the point of tox is to manage testing under different virtualenv scenarios. This also means there's a tox.ini configuration file, with a deps section that may look like this:
[tox]
envlist = lint, py3

[testenv]
deps =
    pytest
commands = pytest tests
Adding pudb to the deps list solves the ModuleNotFoundError, but leads to another error:
self = <_pytest.capture.DontReadFromInput object at 0x103bd2b00>
def fileno(self):
> raise UnsupportedOperation("redirected stdin is pseudofile, "
"has no fileno()")
E io.UnsupportedOperation: redirected stdin is pseudofile, has no fileno()
.tox/py3/lib/python3.6/site-packages/_pytest/capture.py:583: UnsupportedOperation
So, I'm stuck at this point. Is it not possible to use pudb instead of pdb within Tox?
There's a package called pytest-pudb which overrides the pudb entry points within an automated test environment like tox to successfully jump into the debugger.
To use it, just make sure your tox.ini file has both pudb and pytest-pudb in its testenv dependencies, similar to this:
[tox]
envlist = lint, py3

[testenv]
deps =
    pytest
    pudb
    pytest-pudb
commands = pytest tests
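With those dependencies in place, a breakpoint can be set directly inside the failing test. A minimal hypothetical example (the test and helper names are made up):

# tests/mytest.py (hypothetical)
import pudb

def tricky_function(x):
    # Stand-in for the real code under test.
    return x * 2

def test_tricky_case():
    pudb.set_trace()                   # pytest-pudb hands the terminal over to pudb here
    assert tricky_function(21) == 42

Running tox (or pytest directly inside the .tox virtualenv) should then stop at the breakpoint with the full pudb UI instead of raising the fileno() error.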
Using the original PDB (not PUDB) could work too. At least it works with the Django and Nose test runners. Without changing tox.ini, simply add a pdb breakpoint wherever you need with:
import pdb; pdb.set_trace()
Then, when it gets to that breakpoint, you can use the regular PDB commands:
w - print stacktrace
s - step into
n - step over
c - continue
p - print an argument value
a - print arguments of current function

Pytest: collecting 0 items even after following the conventions

I created a test module by following all the conventions, but when I run the test, I get the following message:
collecting 0 items
Here's my directory hierarchy:
integration_tests (directory) -> tests (directory) -> test_integration_use_cases.py (Python file)
And this is the content of the file:
import pytest
from some_tests.integration_tests.backbone.SomeIntegrationTestBase import SomeIntegrationTestBase

class TestSomeIntegration(SomeIntegrationTestBase):
    @pytest.mark.p1
    def test_some_integration_use_cases(self):
        print("**** Executing integration tests ****")
        result = self.execute_test(4)
        assert (True == result)
When I run the following command:
pytest test_integration_use_cases.py
I see the following result without any errors:
collecting 0 items
FYI: I am running this on a development machine (like Vagrant).
I had the same problem even after following all the recommended conventions. My application structure was as follows:
Application
-- API
   app.py
-- docs
-- venv
-- tests
   -- unit_test
      test_factory
      ...
...
I, however, resolved the issue by moving the tests directory under the API package, so that my application structure looked as below:
Application
-- API
   app.py
   -- tests
      -- unit_test
         test_factory
         ...
-- docs
-- venv
...
Although pytest is supposed to auto-discover the tests, it seems to do that only when they are placed under the application root. Check out the pytest guide for Flask.
I also found this resource helpful.
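As a quick way to separate discovery problems from problems in the test code itself, it can also help to check collection with a trivial module and pytest's --collect-only flag. A minimal sketch (the file name is arbitrary):

# tests/test_sanity.py (hypothetical) - a trivial module to confirm that
# pytest discovery works at all from the current rootdir.
def test_sanity():
    assert True

If pytest --collect-only lists test_sanity but not the integration test, the problem is more likely in the test class or its base class (for instance, pytest will not collect a test class that defines an __init__) than in the directory layout.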

PredictionIO - getting an error when building and running evaluation metrics

I followed this quickstart:
https://docs.prediction.io/templates/classification/quickstart/
and this document for evaluation metrics
https://docs.prediction.io/evaluation/paramtuning/
Everything seems OK until the step to build and run the evaluation metrics:
pio eval org.template.classification.AccuracyEvaluation \
org.template.classification.EngineParamsList
I am getting the exception:
Exception in thread "main" scala.reflect.internal.MissingRequirementError: object org.template.classification.AccuracyEvaluation not found.
at scala.reflect.internal.MissingRequirementError$.signal(MissingRequirementError.scala:16)
at scala.reflect.internal.MissingRequirementError$.notFound(MissingRequirementError.scala:17)
at scala.reflect.internal.Mirrors$RootsBase.ensureModuleSymbol(Mirrors.scala:126)
at scala.reflect.internal.Mirrors$RootsBase.staticModule(Mirrors.scala:161)
at scala.reflect.internal.Mirrors$RootsBase.staticModule(Mirrors.scala:21)
at io.prediction.workflow.WorkflowUtils$.getEvaluation(WorkflowUtils.scala:103)
at io.prediction.workflow.CreateWorkflow$$anonfun$19.apply(CreateWorkflow.scala:146)
at io.prediction.workflow.CreateWorkflow$$anonfun$19.apply(CreateWorkflow.scala:144)
Could anyone help me with this?
Thank you very much.
Had the exact same problem. Fixed it by doing the following:
For each .scala file in engine_dir/src/main/scala/org/template/engine_name/ you need to change the first line from...
package <SomeTemplateName>
To the following (replacing engine_name with the name of the folder in the path mentioned above):
package org.template.<engine_name>
Then, in engine.json you need to change the following line...
"engineFactory": "<template name>.<template engine>",
To the following (once again replacing engine_name with the name of the folder in the path mentioned above):
"engineFactory": "org.template.<engine name>.<template engine>",
Now re-run...
pio build
pio train
pio deploy
Then you should be able to run the model evaluation without errors.
Simply run it like this:
$ pio eval org.example.classification.AccuracyEvaluation \
org.example.classification.EngineParamsList
You don't have to change anything. The class package from the sample was org.example.classification, not org.template.classification.