pytest get command line options

I have a couple of questions regarding the command-line options of pytest.
Is it possible to use long names and short names for the same option? For example:
parser.addoption('--server', '-srv', dest='SERVER')
How do I access a command-line option by name, like:
config.option.NAME
def environment_options(parser):
    parser.addoption('--server', dest="SERVER")

@pytest.fixture()
def environment_set_up():
    if config.option.SERVER == 'some value':
        actions
PyCharm shows the reference 'config' as unresolved. Do I need to import something?

As far as I know (I haven't found it in the documentation), it is possible to add a short name, but only a single upper-case letter, e.g.:
def environment_options(parser):
    parser.addoption('-S', '--server', dest="SERVER")
Lowercase letters are reserved for pytest itself, and longer abbreviations are not supported. See also my somewhat related answer.
You can access the option via config in the request fixture:
@pytest.fixture
def environment_set_up(request):
    if request.config.getoption("SERVER") == 'some value':
        actions
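For illustration, here is a minimal, self-contained sketch of how these pieces usually fit together in a conftest.py. The option and fixture names are taken from the question, the pytest_addoption hook name is what pytest itself looks for, and the 'some value' check is just a placeholder:
# conftest.py (sketch): register the option with a long name and a single
# upper-case short name, then read it back through request.config
import pytest

def pytest_addoption(parser):
    parser.addoption('-S', '--server', dest='SERVER', default=None,
                     help='server the tests should run against')

@pytest.fixture
def environment_set_up(request):
    server = request.config.getoption('SERVER')
    if server == 'some value':
        pass  # actions for that server would go here
    return server

# test_example.py (sketch)
def test_server_option(environment_set_up):
    assert environment_set_up is None or isinstance(environment_set_up, str)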

In pytest, how to run against different IPs for cases with different marks or name patterns

I have a question here.
I am using pytest.
I have two test cases with marks and names like this:
@pytest.mark.ip1
def test_ip1_actions():
    ...
@pytest.mark.ip2
def test_ip2_actions():
    ...
What I want is:
if the mark is ip1, the test should run against the 1st IP, 192.168.2.23; if the mark is ip2, the case should run against the 2nd IP, 192.168.2.100.
Or:
based on the name, if the case name contains "ip1", it should run against the 1st IP; if the case name contains "ip2", it should run against the 2nd IP.
In fact I have many cases to run against the two IPs, not only these two, and the IP information (as well as some other information about the two hosts) is written in a JSON file. So I would like to find a general and simple solution.
I tried but couldn't get it to work.
I thought maybe I could do something in the conftest.py file, for example check the mark or the name before the case runs, but I don't know how to handle that.
Looking forward to your advice on my first question on Stack Overflow! :)
Thanks very much!
I don't quite understand, and it's hard to say more without seeing an example test, but how about using parameters to determine which IPs are passed in?
Say you have a test to check that IPs are accessible. This would be one way to do it, but it's not very nice or extensible:
def test_ip_accessible():
    assert is_accessible("192.168.2.23")
    assert is_accessible("192.168.2.100")
Instead you can create helper functions that return specific IPs, or a list of IPs, and then use these as parameters:
from typing import Dict, Tuple

import pytest

ip_map: Dict[int, str] = {  # ip id to address
    1: "192.168.2.23",
    2: "192.168.2.100",
    3: "192.168.2.254",
}

def get_single_ip(id_: int) -> str:
    return ip_map[id_]

def get_multiple_ips(*ids: int) -> Tuple[str, ...]:
    return tuple(ip_map[i] for i in ids)

@pytest.mark.parametrize("ip_address", [get_single_ip(1), get_single_ip(2)])
def test_ip_accessible(ip_address):
    # ip_address will be 192.168.2.23 then 192.168.2.100 in two separate tests
    assert is_accessible(ip_address)

@pytest.mark.parametrize("ip_addresses", [get_multiple_ips(1, 2), get_multiple_ips(2, 3)])
def test_with_multiple_ips(ip_addresses):
    # this test will also be run twice. the first time, ip_addresses will be
    # ("192.168.2.23", "192.168.2.100"); the second time, it will be
    # ("192.168.2.100", "192.168.2.254").
    assert something_with_a_list_of_ips(ip_addresses)
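Since the question says the marks should drive the choice and the IP details live in a JSON file, here is a possible marker-based sketch for conftest.py. The hosts.json file name, its layout, and the target_ip fixture name are assumptions, not part of the original question:
# conftest.py (sketch)
import json

import pytest

with open("hosts.json") as f:
    HOSTS = json.load(f)  # assumed layout: {"ip1": "192.168.2.23", "ip2": "192.168.2.100"}

@pytest.fixture
def target_ip(request):
    # pick the address that matches whichever ip mark the test carries
    for mark_name, address in HOSTS.items():
        if request.node.get_closest_marker(mark_name):
            return address
    pytest.fail("test has no known ip mark")

# test_actions.py (sketch)
@pytest.mark.ip1
def test_ip1_actions(target_ip):
    assert target_ip == "192.168.2.23"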

python click passing multiple key values as options

I am using python-click and I would like to pass values in a format like this:
myprogram mycommand --parameter param1=value1 --parameter param2=value2
I was looking at the click options documentation but I can't find any construct that could help with this; the only solution I can find is invoking a callback function, checking that the string is properly constructed as <key>=<value>, and then processing the values.
Nothing wrong with that solution, but I was wondering if there is a more elegant way to handle this, since the pattern seems common enough.
So, I've had a go at it. I've managed to do it in the following ways.
Firstly, I tried this, though it doesn't satisfy the --parameter requirement:
#click.command("test")
#click.argument("args", nargs=-1)
def test(args):
args = dict([arg.split("=") for arg in args])
print(args)
So when invoking it like test param1=test1 param2=test2, the output is:
{'param1': 'test1', 'param2': 'test2'}
Secondly, I thought about a multiple option, combined with a variadic argument, which seems to be closer to your requirements:
test -p param1=test1 -p param2=test2
#click.command("test")
#click.option('-p', '--parameter', multiple=True)
#click.argument("args", nargs=-1)
def test(*args, **kwargs):
param_args = kwargs['parameter']
param_args = dict([p.split('=') for p in param_args])
print(param_args)
if __name__ == '__main__':
test()
The output will be the same as the previous case.
If you were to print(kwargs['parameter']), you'd get:
('param1=test1', 'param2=test2')
It's a bit cleaner than using a callback, but not by much. Still, hope it helps.
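For comparison, the callback route mentioned in the question can also stay fairly compact. A minimal sketch, assuming the same --parameter option as above (the parse_key_value helper name is made up):
import click

def parse_key_value(ctx, param, values):
    # turn repeated key=value strings into a dict, failing early on bad input
    result = {}
    for item in values:
        if "=" not in item:
            raise click.BadParameter("expected key=value, got %r" % (item,))
        key, value = item.split("=", 1)
        result[key] = value
    return result

@click.command("mycommand")
@click.option("--parameter", multiple=True, callback=parse_key_value)
def mycommand(parameter):
    # parameter is already a dict here, e.g. {'param1': 'value1', 'param2': 'value2'}
    click.echo(parameter)

if __name__ == '__main__':
    mycommand()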

Test name as given with --collect-only

I would like to access the name of the current test as a string in my tests to write some VCD log-files.
Is the name as given when I run pytest --collect-only available as a fixture or equivalent?
Example:
Running pytest --collect-only yields (shortened):
<Class TestFooBar>
<Function test_30_foobar[1-A]>
In my test I would like to access the above string test_30_foobar[1-A].
Is there a (simple) way?
I've found an answer to my own question. It's hidden in the request fixture. See the following fixture derived from it:
import types

import pytest

@pytest.fixture
def name_test(request):
    """Make the name of the test available as a string and escaped as a filename."""
    i = types.SimpleNamespace()
    i.name = request.node.name
    i.filename = i.name.replace('[', '_').replace(']', '')
    return i
The filename attribute is a crudely escaped string that should be a valid filename; however, it has only been tested on POSIX so far.
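For illustration, a hypothetical test that uses this fixture to build a log-file name could look like the following (the parametrization and the tmp_path directory are just for the example; the actual VCD writing is a placeholder):
import pytest

@pytest.mark.parametrize("value", ["A", "B"])
def test_30_foobar(name_test, value, tmp_path):
    # name_test.name would be e.g. "test_30_foobar[A]",
    # name_test.filename would be e.g. "test_30_foobar_A"
    log_file = tmp_path / (name_test.filename + ".vcd")
    log_file.write_text("placeholder instead of real VCD content")
    assert log_file.is_file()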

Apply Command to String-type custom fields with YouTrack REST API

and thanks for looking!
I have an instance of YouTrack with several custom fields, some of which are String-type. I'm implementing a module to create a new issue via the YouTrack REST API's PUT request, and then updating its fields with user-submitted values by applying commands. This works great---most of the time.
I know that I can apply multiple commands to an issue at the same time by concatenating them into the query string, like so:
Type Bug Priority Critical add Fix versions 5.1 tag regression
will result in
Type: Bug
Priority: Critical
Fix versions: 5.1
in their respective fields (as well as adding the regression tag). But, if I try to do the same thing with multiple String-type custom fields, then:
Foo something Example Something else Bar P0001
results in
Foo: something Example Something else Bar P0001
Example:
Bar:
The command only applies to the first field, and the rest of the query string is treated as its string value. I can apply a command individually for each field, but is there an easier way to combine these requests?
Thanks again!
This is the expected result, because the whole string after Foo is considered the value of that field; spaces are valid characters in string custom fields.
If you try to apply this command via the command window in the UI, you will see the same result.
Such a good question.
I encountered the same issue and have spent an unhealthy amount of time in frustration.
Using the command window in the YouTrack UI I noticed it leaves trailing quotation marks, and I was unable to find anything in the documentation that discussed terminating or identifying the end of a string value. I was also unable to find any mention of setting string field values in the command reference, grammar documentation, or examples.
For my solution I am using Python with the requests and urllib modules, though I expect you could port the solution to any language.
The REST API will accept explicitly quoted strings in the POST:
import requests
import urllib
from collections import OrderedDict

URL = 'http://youtrack.your.address:8000/rest/issue/{issue}/execute?'.format(issue='TEST-1234')
# a list of pairs keeps the intended field order (a plain dict literal would not)
params = OrderedDict([
    ('State', 'New'),
    ('Priority', 'Critical'),
    ('String Field', '"Message to submit"'),
    ('Other Details', '"Fold the toilet paper to a point when you are finished."'),
])
str_cmd = ' '.join(' '.join([k, v]) for k, v in params.items())
command_url = URL + urllib.urlencode({'command': str_cmd})
result = requests.post(command_url)

# The resulting command URL:
# http://youtrack.your.address:8000/rest/issue/TEST-1234/execute?command=State+New+Priority+Critical+String+Field+%22Message+to+submit%22+Other+Details+%22Fold+the+toilet+paper+to+a+point+when+you+are+finished.%22
I'm sad to see this one go unanswered for so long. Hope this helps!
Edit:
After continuing my work, I have concluded that sending all the field updates as a single POST is marginally better for the YouTrack server, but requires more effort than it's worth, because you have to:
1) know which fields in the issue hold string values,
2) pre-process all the string values into quoted string literals, and
3) accept that if just one field is missing, fails to set, or has an unexpected value, the entire request fails and you potentially lose all the other information (a sketch of the per-field fallback follows below).
I wish the YouTrack documentation had some mention or discussion of these considerations.
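For illustration, here is a minimal sketch of that per-field fallback: one execute request per field, so a single bad value cannot invalidate the rest. It reuses the same illustrative URL and field names as the code above, with urllib.parse.urlencode as the Python 3 spelling of the urlencode call:
import requests
from urllib.parse import urlencode

URL = 'http://youtrack.your.address:8000/rest/issue/{issue}/execute?'.format(issue='TEST-1234')
fields = {
    'State': 'New',
    'String Field': '"Message to submit"',
}
for name, value in fields.items():
    command_url = URL + urlencode({'command': ' '.join([name, value])})
    response = requests.post(command_url)
    # check each response individually; a failure here only affects one field
    if response.status_code >= 400:
        print('Failed to set {0}: {1}'.format(name, response.text))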

Parsing a delimited multiline string using Scala StandardTokenParsers

I have found a few similar questions but nothing that seems to directly address my needs here.
I am creating a DSL using Scala and have much of it already defined. However, part of the language needs to handle blocks of multi-line textual documentation that are collected and handled as individual entities by the parser. I would like to delimit these blocks in some way (say with something like {{ and }}) and just collect everything between the delimiters and return it as a DocString (a case class in my parser). These blocks will then be used to create additional end-user documentation along with the rest of the parsed file(s).
The parser is already structured as a StandardTokenParsers-derived class. I suppose I could convert it to a RegexParsers-derived class and just use regular expressions but that would be a major change and a lot of my grammar would have to be reworked. I am not sure if there would be any advantage to doing this (other than supporting the desired documentation blocks).
I have seen Using regex in StandardTokenParsers and found this. I am not sure either of those will actually handle what I need, however, or how to begin if they do.
If anyone has any ideas as to a viable way to proceed I would appreciate some pointers.
As an example, here is something I have tried (from Using regex in StandardTokenParsers):
object DModelParser extends StandardTokenParsers {
  ...
  def modelElement: Parser[ModelElement] =
    (other stuff, not important here) | docBlock

  import scala.util.matching.Regex
  import lexical.StringLit

  def regexBlockMatch(r: Regex): Parser[String] = acceptMatch(
    "string block matching regex " + r,
    { case StringLit(s) if r.unapplySeq(s).isDefined => s })

  val bmr = """\{\{((?s).*)\}\}""".r

  def docBlockStr: Parser[String] = regexBlockMatch(bmr)

  def docBlock: Parser[DocString] =
    docBlockStr ^^ { s => new DocString(s) }
  ...
}
However, when passing it even a single line like the following:
{{ A block of docs }}
it fails to match, causing the parser to stop. I think the problem is the case StringLit(s) clause, but I am not sure.
Edit
OK. StringLit was the problem. I forgot that it will only match strings in double quotes. So I tried replacing the string above with:
"{{ A block of docs }}"
and it works fine. However, the multi-line issue still remains. If I replace this with:
"{{ A block
of docs }}"
then it still fails to parse. Again, I think it is StringLit not working across line feeds.
Edit
Another option occurred to me, but I am not sure how to make it work in the parser. If I could read and match a line that contains only the opening delimiter, then collect into a List[String] all the lines until a line that contains only the closing delimiter, that would be sufficient. Is there a way to do this?
Edit 6/22/2015
I went a different direction and this seems to work for the examples I have tried so far:
// https://stackoverflow.com/questions/24771341/scala-regex-multiline-match-with-negative-lookahead
def docBlockRE = regex("""(?s).*?(?=}})""".r)
def docBlock: Parser[DocString] =
  "{{" ~> docBlockRE <~ "}}" ^^ { case str => new DocString(str) }