Setting cucumber tag via System property - tags

My entire feature stack is divided into @sanity (10 scenarios) and @smoke (2 scenarios), and the whole stack is considered @regression (no tag required; 37 scenarios in total). My question is: how can I pass the tag value via the command line? Please note this is a cucumber-testng project.
Below is how my runner file looks:
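(The runner class is not reproduced here; a representative cucumber-testng runner with the tag expression hard-coded in @CucumberOptions, which is what the answer below suggests removing, would look roughly like this. Package, class and path names are illustrative, not taken from the original project.)

import io.cucumber.testng.AbstractTestNGCucumberTests;
import io.cucumber.testng.CucumberOptions;

@CucumberOptions(
        features = "src/test/resources/features",   // illustrative feature path
        glue = "com.example.steps",                  // illustrative step-definition package
        tags = "@sanity or @smoke"                   // assumed hard-coded tag expression
)
public class TestRunner extends AbstractTestNGCucumberTests {
}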
Please note, I have tried the command lines below, but each still runs both the @smoke and @sanity cases (i.e. 12 scenarios):
./gradlew -i test -Denv=release -D"cucumber.options=--tags @sanity" ---> runs 12 scenarios
./gradlew -i test -Denv=release -Dcucumber.filter.tags="@smoke" ---> runs 12 scenarios

If you want to run the tests from the command line only, try removing the tags=@sanity or @smoke entry from your runner file first, and then try ....-Dcucumber.filter.tags="@smoke" from the command line again.

So this worked. Apparently I had missed adding the required line in my build.gradle file.
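(The exact line isn't quoted here, but the usual missing piece is forwarding the Cucumber system property from the Gradle command line into the forked test JVM, since Gradle's test task does not pass -D properties through on its own. A minimal sketch, assuming a standard test task:)

test {
    def cucumberTags = System.getProperty("cucumber.filter.tags")
    if (cucumberTags) {
        // Gradle runs tests in a separate JVM, so properties passed with -D
        // on the ./gradlew command line must be forwarded explicitly.
        systemProperty "cucumber.filter.tags", cucumberTags
    }
}

With that in place, ./gradlew test -Dcucumber.filter.tags="@smoke" should run only the @smoke scenarios.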

Rundeck: see what is actually executed on the commandline

I'm just getting started with rundeck and trying to find out how it works.
I created a simple Job that should install some packages on the remote node from a pre-selected list (an Option).
When I select more than one option, the command fails. I want to find out why, but (even with debug mode enabled) I can't see anywhere which command is actually being executed on the remote node.
My command looks like yum install -y "${option.package}" and the unexpected response is e.g. no package [selected options] available ... I have selected (space) as the delimiter.
How can I see what is executed on the remote host?
Update:
I have meanwhile found out why my options did not work as expected: I had to use the unquoted variant on the command line. But the main question still stands ...
Right now the only way to see the exact command being executed is to run the job in debug mode. Just select "Run with Debug output" and you can see the command dispatched in the middle of the execution output.

How to test a single failing test when building perl

This issue is usually encountered when running make test and seeing one test fail. The README mentions that each test can be run individually, but it doesn't clearly specify how to do so.
make test uses a script called TEST in the test directory (t). To replicate make test for a single file, use this script as follows:
[.../perl/t]$ ./perl -I../lib TEST op/array.t
t/op/array ... ok
All tests successful.
Elapsed: 0 sec
u=0.01 s=0.00 cu=0.03 cs=0.02 scripts=1 tests=194
If you want to see the raw output of the test script, you can run perl as follows:
[.../perl/t]$ ./perl -I../lib op/array.t
1..194
ok 1
ok 2
ok 3
...
ok 192 - holes passed to sub do not lose their position (multideref, mg)
ok 193 - holes passed to sub do not lose their position (aelem)
ok 194 - holes passed to sub do not lose their position (aelem, mg)
The above information, and more, can be found in perlhack.
This document explains how Perl development works. It includes details about the Perl 5 Porters email list, the Perl repository, the Perlbug bug tracker, patch guidelines, and commentary on Perl development philosophy.
Note that you need to run make test_prep before the above commands work. (If you've run make test, you've effectively run make test_prep already.)
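Putting that together, a typical session from the top of the perl source tree looks like this (assuming ./Configure and make have already been run):

make test_prep          # builds ./perl and prepares the test environment
cd t
./perl -I../lib TEST op/array.t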
Run ./perl harness ../foo/boo.t in the t directory, where foo/boo.t is the failing test.
To run a single test script, use perl, or better, prove. Assuming you are in the module's base directory:
prove -lv t/some-test-script.t
This will run the test script against the libraries in ./lib, with fallback to the libraries available to your install of Perl.
If you want to use the build libraries built by make, then this:
prove -bv t/some-test-script.t
Now the test script will be run against the libraries in ./blib, falling back to libraries installed for your Perl.
The test scripts are typically just Perl scripts that live in a t/ or xt/ or some similar path within the distribution's directory structure. So you can also run them just with Perl:
perl -Mblib t/some-test-script.t
But prove produces nicer test summary information and color coding.
That is about as granular as you can get unless tests are written to allow for targeting specific segments within a test script. If you need to target a specific test within a test script you'll usually have to dig into the test code itself.

How to run a pytest-bdd test?

I don't understand how to properly run a simple test (a feature file and a Python file)
with the library pytest-bdd.
From the official documentation, I can't work out what command to issue to run a test.
I tried using the pytest command, but it reported that no tests ran.
Do I need to use another library, such as behave, to run a feature file?
After trying for 2 days, I figured out that,
to run a pytest-bdd test, there are certain requirements, at least in my view:
put both the feature file and the Python file in the same directory (maybe this can be changed with configuration files)
the Python file name needs to start with test_
the Python file needs to contain a function whose name starts with test_
the function starting with test_ needs to be bound to the scenario via the @scenario decorator
to run the test, issue the pytest command in the same directory (maybe this is also configurable)
After running it you will only see that the function starting with test_ passed, but all the steps actually ran. To verify this, you can assert False in any @when- or @then-decorated function and it will raise an error.
The system contained: pytest-bdd==3.0.2 (copied from pip freeze output).
Feature files and Python files can be placed in different folders using the bdd_features_base_dir hook provided by pytest-bdd; I think it is better to keep the feature files in separate folders anyway.
Here you can see a working example (a simple hello world BDD test):
https://github.com/davidemoro/pytest-play-docker/tree/master/tests
https://github.com/davidemoro/pytest-play-docker/blob/master/tests/pytest.ini (see bdd_features_base_dir in [pytest] section)
https://github.com/davidemoro/pytest-play-docker/tree/master/tests/bdd
If you want to try out pytest-bdd without installing it, you can use Docker. Create a folder containing your pytest-bdd files (and, if you want, a separate features folder targeted by bdd_features_base_dir) and run:
docker run --rm -it -v $(pwd):/src davidemoro/pytest-play:latest
I've found out that in the Python file you don't have to follow this rule:
the function starting with test_ needs to be bound to the scenario via the @scenario decorator
You can instead call scenarios("<path to your feature file or directory>") to generate the tests from the scenarios that use the steps defined in this specific Python file.
Remember to import scenarios: from pytest_bdd import scenarios
Example:
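A minimal sketch, with illustrative file, scenario and step names (not the original poster's code); the feature file is assumed to sit next to the test module:

# calculator.feature (assumed location: same directory as the test module)
#   Feature: Addition
#     Scenario: Add two numbers
#       Given the numbers 2 and 3
#       When I add them
#       Then the result is 5

# test_calculator.py
import pytest
from pytest_bdd import scenarios, given, when, then

# Generates a test_* function for every scenario in the feature file.
scenarios("calculator.feature")

@pytest.fixture
def context():
    return {}

@given("the numbers 2 and 3")
def numbers(context):
    context["a"], context["b"] = 2, 3

@when("I add them")
def add(context):
    context["result"] = context["a"] + context["b"]

@then("the result is 5")
def check_result(context):
    assert context["result"] == 5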
Command:
pytest -v path_to_test_file.py
Things to note here:
Check that the feature file follows the filename.feature naming format
Always add __init__.py files to the modules, otherwise the test runner will not find the test files
Glue the right step definitions to the test function
Add the feature file to the features module
If you are using Python 3, execute the test with python3
So,
python3 -m pytest -v path_to_test_file.py
Documentation
https://pytest-bdd.readthedocs.io/en/stable/#

Building Artifactory fails for Build Stage in Delivery Pipeline

I have created a toolchain which downloads the code from a Bitbucket repository and builds a Docker image in IBM Cloud.
After the code builds the image, the build stage fails while preparing the build artifacts.
Error:
Preparing the build artifacts...
Customer script does not exist for the job, exitting
I have specified the Build archive directory as the folder name. Do I need to write any scripts for archiving?
That particular error occurs when one of our checks -- the existence of /home/pipeline/$TASK_ID/_customer_script.sh -- fails.
Archiving happens automatically, but that file needs to be present, as we use it as part of the traceability around how the artifact was created. Is it possible that file is getting removed? (We will also look into removing the check or making it non-fatal, but that will take time.)
This issue appears to be caused by setting a working directory for the job. _customer_script.sh gets dropped into the working directory, but the script Simon is referring to (/opt/IBM/pipeline/bin/ids-buildables-notify.sh) only checks the top-level directory the code input is at (/home/pipeline/$TASK_ID/).
Three options to fix this, assuming you're doing a container registry job:
Run cp _customer_script.sh /home/pipeline/$TASK_ID in your script. The ids-buildables-notify.sh script does some grepping for your bx cr build call, so make sure that's still in there.
touch /home/pipeline/$TASK_ID/_customer_script.sh and export PIPELINE_IMAGE_URL=<your image url>. If PIPELINE_IMAGE_URL is set, the notify script doesn't bother with being clever, which I prefer.
Don't change the working directory.
A script which works for me:
#!/bin/bash
echo -e "Build environment variables:"
echo "REGISTRY_URL=${REGISTRY_URL}"
echo "REGISTRY_NAMESPACE=${REGISTRY_NAMESPACE}"
echo "IMAGE_NAME=${IMAGE_NAME}"
echo "BUILD_NUMBER=${BUILD_NUMBER}"
echo -e "Building container image"
set -x
export PIPELINE_IMAGE_URL=$REGISTRY_URL/$REGISTRY_NAMESPACE/$IMAGE_NAME:$BUILD_NUMBER
bx cr build -t $PIPELINE_IMAGE_URL .
set +x
touch /home/pipeline/$TASK_ID/_customer_script.sh

mimicking make dependency checking in perl

Not sure if I am explaining this well, but here goes...
I have a perl script/flow that runs various steps. Each step is basically dependent on the output of its previous step in order to run.
For example:
myflow -step1...input is file0, produces file1
myflow -step2...input is file1, produces file2
myflow -stepN...input is fileN-1, produces fileN
Right now users can run myflow -step1 -step2...-stepN to go from start to finish. I would like to somehow have the ability for the user to run myflow -stepN, have myflow check to see which steps need to be run prior to it, and then run stepN. Maybe no steps were run, so myflow -stepN would start from step1 and continue until it finishes stepN or an error occurs. Maybe step1 through step3 ran fine previously, so running -stepN would start from step4. Maybe all steps ran fine, but the user modified/deleted/touched an intermediate file, so running -stepN would detect this and rerun from that previous step.
Is there a CPAN module that essentially mimics this make behavior, i.e., given the steps, the inputs they require, and the outputs they produce, builds a dependency graph and determines which steps need to be run?
I'm thinking you could use make itself instead of trying to simulate it.
The makefile rule for "building" each fileX "target" from the fileX-1 "source file" would invoke your script for the respective step.
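A sketch of such a makefile, using the file names and step flags from the example above (the file name flow.mk and the exact myflow invocations are assumptions):

# flow.mk: each step's output depends on the previous step's output, so
# "make -f flow.mk file3" reruns only the steps whose outputs are missing
# or older than their inputs. Recipe lines must start with a tab.
file1: file0
	myflow -step1

file2: file1
	myflow -step2

file3: file2
	myflow -step3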