How to migrate pytest to bazel py_test?

I want to convert our integration pytests over to bazel, but the tests don't seem to actually run, nor do they produce the junitxml output I'm expecting:
ARGS = [
    "--verbose",
    "--json-report",
    "--junit-xml=junit.xml",
]

py_test(
    name = "test_something",
    srcs = ["test_something.py"],
    args = ARGS,
    tags = [
        "yourit",
    ],
    deps = [
        requirement("pytest"),
        requirement("pytest-json-report"),
        ":lib",
    ],
)
There is a similar question here: How do I use pytest with bazel? But it didn't cover the multitude of issues I ran into.

So there were multiple things to address in order to make bazel's py_test work with pytest.
Of course you need the pytest dependency added: requirement("pytest").
The pytest files need a main hook added (note that sys.argv[1:] is forwarded as well, so py_test args are passed on to the pytest file):
import sys
import pytest

if __name__ == "__main__":
    sys.exit(pytest.main([__file__] + sys.argv[1:]))
Since our pytests are trying to access the host network, we need to tell bazel to allow network traffic outside the sandbox by adding a requires-network tag.
The host network is local, so we need to make sure the test runs locally by adding the local tag.
Bazel py_test will create junitxml under bazel-testlogs, but it only shows whether the py_test rule passed or failed. To get more granular junitxml results, we tell pytest to overwrite bazel's own results file via the XML_OUTPUT_FILE environment variable. Note that $$ is needed to escape Bazel's Make-variable expansion, so that the literal $XML_OUTPUT_FILE reaches the test and is expanded by the shell at run time. See https://docs.bazel.build/versions/main/test-encyclopedia.html and https://github.com/bazelbuild/bazel/blob/master/tools/test/test-setup.sh#L67 for more info.
Attempts to write the json report output to bazel-testlogs using the TEST_UNDECLARED_OUTPUTS_DIR environment variable didn't work. The report is still written, but inside bazel-out. There might be a way to scoop up those files, but solving that problem was not a priority.
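One possible workaround we didn't pursue (an untested sketch; it assumes pytest-json-report's --json-report-file option and resolves the path at runtime inside the test's main hook rather than in the BUILD args):
import os
import sys
import pytest

if __name__ == "__main__":
    # TEST_UNDECLARED_OUTPUTS_DIR is set by Bazel's test runner; fall back
    # to the current directory when running outside Bazel.
    out_dir = os.environ.get("TEST_UNDECLARED_OUTPUTS_DIR", ".")
    report = os.path.join(out_dir, "report.json")
    sys.exit(pytest.main([__file__, "--json-report-file=" + report] + sys.argv[1:]))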
Note that ARGS is defined once to keep things DRY (don't repeat yourself); we use the same arguments across multiple py_test rules.
So the BUILD now looks like this:
ARGS = [
    "--verbose",
    "--json-report",
    "--junit-xml=$$XML_OUTPUT_FILE",
]

py_test(
    name = "test_something",
    srcs = ["test_something.py"],
    args = ARGS,
    tags = [
        "local",
        "requires-network",
        "yourit",
    ],
    deps = [
        requirement("pytest"),
        requirement("pytest-json-report"),
        ":lib",
    ],
)
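As a further DRY step, the shared args and tags can be folded into a small Starlark macro (a sketch only; the file and macro names pytest_test.bzl/pytest_test are our own, and the load path assumes rules_python):
# pytest_test.bzl
load("@rules_python//python:defs.bzl", "py_test")

def pytest_test(name, srcs, args = [], tags = [], **kwargs):
    # Hypothetical wrapper applying our shared pytest defaults; callers
    # still pass requirement("pytest") etc. in deps via **kwargs.
    py_test(
        name = name,
        srcs = srcs,
        args = [
            "--verbose",
            "--json-report",
            "--junit-xml=$$XML_OUTPUT_FILE",
        ] + args,
        tags = ["local", "requires-network"] + tags,
        **kwargs
    )
Each BUILD file then loads pytest_test instead of spelling out the same py_test boilerplate.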

Related

Karate- Gatling: Not able to run scenarios based on tags

I am trying to run a performance test on the scenario tagged as perf from the feature file below:
#tag1 #tag2 #tag3
Background:
  user login

#tag4 #perf
Scenario: scenario1

#tag4
Scenario: scenario2
Below is my .scala file setup:
class PerfTest extends Simulation {
  val protocol = karateProtocol()
  val getTags = scenario("Name goes here").exec(karateFeature("classpath:filepath"))

  setUp(
    getTags.inject(
      atOnceUsers(1)
    ).protocols(protocol)
  )
}
I have tried passing the tags from the command line, as well as passing the tag as an argument to the exec method in the scala setup.
Terminal command:
mvn clean test-compile gatling:test "-Dkarate.env={env}" "-Dkarate.options= --tags #perf"
.scala update: I have also tried passing the tag as an argument in the karateFeature call.
val getTags = scenario("Name goes here").exec(karateFeature("classpath:filepath", "#perf"))
Both scenarios are being executed with either approach. Any pointers on how I can force only the test with the perf tag to run?
I wanted to share my findings here. I realized it works fine when I pass the tag info in the .scala file.
My scenario with the perf tag was a combination of a GET and a POST call, as I needed some data from the GET call to pass to the POST call. That's why I was seeing both calls when running the performance test.
I did not find any reference in the karate-gatling documentation to passing tags in the terminal execution command, so I am assuming that might not be a valid case.

Overriding collection paths via PyTest plugin

With PyTest, you can limit the scope of test collection by passing directories/files/nodeids as command line arguments, e.g., pytest tests, pytest tests/my_tests.py and pytest tests/my_tests.py::test_1. Is it possible to override this behavior from within a plugin, i.e., to set them to something else programmatically?
So far I've attempted setting file_or_dir to my own list within config.option and config.known_args_namespace from the pytest_configure hook, but this appears to have no effect on anything.
You are probably looking for config.args:
# conftest.py
def pytest_configure(config):
    config.args = ['foo', 'bar/baz.py::test_spam']
Running pytest now will be essentially the same as running pytest foo bar/baz.py::test_spam. However, putting the paths in pytest.ini would IMO be a better solution:
# pytest.ini
[pytest]
addopts = foo bar/baz.py::test_spam
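If you need the override to stay dynamic rather than hard-coded, one pattern (a sketch; the --suite flag name is made up for illustration) is to register your own option and rewrite config.args only when it is set:
# conftest.py
def pytest_addoption(parser):
    # Hypothetical custom flag; any name that doesn't clash with pytest's own works.
    parser.addoption("--suite", action="store", default=None)

def pytest_configure(config):
    if config.getoption("--suite") == "smoke":
        # Equivalent to running: pytest tests/smoke
        config.args = ["tests/smoke"]
Invoked as pytest --suite smoke, this collects only that directory, while a plain pytest run is unaffected.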

How to fix PipelineParam from discarding all information except for name in Kubeflow Pipeline

I'm trying to write an application using Kubeflow Pipelines. I'm running into trouble when passing parameters to the pipeline (the main python function decorated with @kfp.dsl.pipeline). The parameters should be automatically converted into a PipelineParam with name, value, etc. info. However, it seems that everything except the name is being discarded. I'm on an Ubuntu server.
I've tried uninstalling/reinstalling and updating Kubeflow, tried installing several of the most recent versions of kfp (0.1.23, 0.1.22, 0.1.20, 0.1.18), as well as installing on my local machine.
import kfp.dsl

def print_pipeline_param():
    return kfp.dsl.PipelineParam("Name", value="Value")

@kfp.dsl.pipeline(
    name='Test Pipeline',
    description='Test pipeline'
)
def test_pipeline(output_file='/output.txt'):
    print(print_pipeline_param())
    print(output_file)

if __name__ == '__main__':
    import kfp.compiler as compiler
    compiler.Compiler().compile(test_pipeline, __file__ + '.tar.gz')
The result of running this is:
{{pipelineparam:op=;name=Name;value=Value;type=;}}
{{pipelineparam:op=;name=output-file;value=;type=;}}
I should be getting '/output.txt' in the "value" field, but the only field populated is the name. This only happens when passing in parameters to the main pipeline function. This also happens when directly passing in a PipelineParam like so:
@kfp.dsl.pipeline(
    name='Test Pipeline',
    description='Test pipeline'
)
def test_pipeline(output_file=kfp.dsl.PipelineParam("Output File", value="/output.txt")):
    print(output_file)
Prints out: {{pipelineparam:op=;name=output-file;value=;type=;}}

How do I run only certain tests in karma?

I have karma set up correctly: config file in place, running in the background, just great. As soon as I change and save a file, it reruns the tests... all 750 of the unit tests. I want to be able to run just a few. Short of manually hacking the config file or commenting out hundreds of tests across many files, is there an easy way to do it?
E.g. when running command-line server tests using, say, mocha, I just use a regexp: mocha -g 'only tests that I want'. Makes it much easier to debug and quickly check.
So now I feel foolish. mocha supports a very narrow version of regexp matching.
This runs all tests:
describe('all tests', function(){
  describe('first tests', function(){
  });
  describe('second tests', function(){
  });
});
This runs just 'first tests':
describe('all tests', function(){
  describe.only('first tests', function(){
  });
  describe('second tests', function(){
  });
});
You can also do it.only()
I should have noticed that. Sigh.
Unfortunately, you can only do that at karma startup time, not at runtime.
If you want to change it dynamically, you have to put in some more effort.
Say you want to focus on a specific set/suite of tests from the beginning: on the karma-mocha plugin page there's this snippet of code to do what you want:
module.exports = function(config) {
  config.set({
    // karma configuration here
    ...
    // this is a mocha configuration object
    client: {
      // The pattern string will be passed to mocha
      args: ['--grep', '<pattern>'],
      ...
    }
  });
};
In order to make the <pattern> parametric, you have to wrap the configuration file in a Configurator that will listen to the CLI and customize the karma configuration for you.
Have a look at this SO answer to see how to set up a very simple Configurator.
I had the same question, and this is my workaround, with a small change to karma.conf.js.
In essence, it takes an argument from the command line and modifies the pattern in "files".
I use minimist to parse the argument list.
In config file:
/* Begin */
var minimist = require('minimist');
var argv = minimist(process.argv);
var testBase = "test/unit";
var testExt = ".spec.js";
var unitTestPattern = testBase + '/**/*' + testExt;
if ("test" in argv) {
  unitTestPattern = testBase + "/" + argv["test"] + testExt;
}
/* End */
module.exports = function(config){
  config.set({
    //....
    files: [
      //....
      unitTestPattern, // place here
      // 'test/unit/**/*.spec.js', // replace this
      //....
    ],
    //....
  });
};
Run in the command prompt:
karma start test/karma.conf.js --single-run --test #TEST_CASE_FILE#
A nice extension that can help here is karma-jasmine-html-reporter-livereload (https://www.npmjs.com/package/karma-jasmine-html-reporter-livereload) or karma-jasmine-html-reporter (https://www.npmjs.com/package/karma-jasmine-html-reporter).
It creates a debug page in which you can run each test individually. Very useful for large projects!
1) In your karma.conf.js get the params from the terminal:
var files = (process.env.npm_config_single_file) ? process.env.npm_config_single_file : 'test/test_index.js';
2) In order to run a single test you will need to set an options object with all your configuration (without files and preprocessors):
var option = {
  webpack: {
    // webpack configuration
  },
  // more configuration......
};
3) Set your files path and preprocessors:
option.files = [
  { pattern: files, watched: false }
];
option.preprocessors = {};
option.preprocessors[files] = ['webpack', 'sourcemap'];

// call config.set function
config.set(option);
4) Run in the terminal:
npm test --single_file=**/my-specific-file-spec.js
For more information check this PR:
https://github.com/webpack/karma-webpack/pull/178
There are different ways to do it:
1) Use the --grep option. The disadvantage is that all the tests are preprocessed before the specific test suite runs.
2) Use the .only method. Same disadvantage as no. 1; using both methods 1 and 2, my node process used to crash often, saying it was out of memory.
3) Limit the files option for processing. This is super fast. Limit preprocessing to a certain folder, like the Unit or Integration folder.
For this I have used a custom CLI option --only, and in the karma config:
const modules = config.only;
and in the files pattern:
files: typeof modules === 'string'
  ? [`tests/**/${modules}/**/*.(test|spec).js`]
  : ['tests/**/*.(test|spec).js'],
Advantage: developers can run only certain tests way faster when they make a small change, by limiting the preprocessing phase.
You can also use a combination of no. 3 with no. 1 or 2.

Erlang: How to access CLI flags (arguments) as application environment variables?

How does one access command line flags (arguments) as application environment variables in Erlang (as flags, not ARGV)? For example:
RabbitMQ cli looks something like:
erl \
...
-sasl errlog_type error \
-sasl sasl_error_logger '{file,"'${RABBITMQ_SASL_LOGS}'"}' \
... # more stuff here
If one looks at sasl.erl, you see the line:
get_sasl_error_logger() ->
    case application:get_env(sasl, sasl_error_logger) of
        % ... etc
By some unknown magic the sasl_error_logger variable becomes an Erlang tuple! I've tried replicating this in my own Erlang application, but I seem to be able to access these values only via init:get_argument, which returns the value as a string.
How does one pass in values via the command line and access them easily as Erlang terms?
UPDATE: Also, for anyone looking to use environment variables the 'regular' way, use os:getenv("THE_VAR").
Make sure you set up an application configuration file:
{application, fred,
 [{description, "Your application"},
  {vsn, "1.0"},
  {modules, []},
  {registered, []},
  {applications, [kernel, stdlib]},
  {env, [
    {param, 'fred'}
  ]
...
and then you can set your command line up like this:
-fred param 'billy'
I think you need to have the parameter in your application configuration to do this - I've never done it any other way...
Some more info (easier than putting it in a comment).
Given this:
{emxconfig, {ets, [{keypos, 2}]}},
I can certainly do this:
{ok, {StorageType, Config}} = application:get_env(emxconfig),
but (and this may be important) my application is started at this time (it may actually just need to be loaded rather than actually started, judging from the application_controller code).