In pytest, how to run against different IPs for cases with different marks or name patterns

I have a question here.
I am using pytest, and I have two test cases with marks and names as follows:

@pytest.mark.ip1
def test_ip1_actions():
    ...

@pytest.mark.ip2
def test_ip2_actions():
    ...
What I want is: if the mark is ip1, the test should run against the first IP, 192.168.2.23; if the mark is ip2, it should run against the second IP, 192.168.2.100.
Alternatively, based on the name: if the test name contains "ip1", it should run against the first IP; if it contains "ip2", against the second IP.
In fact I have many cases to run against the two IPs, not only two. The IP information (along with some other information about the two hosts) is stored in a JSON file, so I would like a general and simple solution.
I tried but couldn't get it working.
I thought maybe I could do something in conftest.py? For example, check the mark or the name before the test runs? But I don't know how to handle that.
Looking forward to your advice on my first question on Stack Overflow! :)
Thanks very much!

I don't quite understand, and it's hard to say more without seeing an example test, but how about using parameters to determine which IPs are passed in?
Say you have a test that checks whether IPs are accessible. This would be one way to do it, but it's not very nice or extensible:
def test_ip_accessible():
    assert is_accessible("192.168.2.23")
    assert is_accessible("192.168.2.100")
Instead you can create helper functions that return a specific IP, or a list of IPs, and then use these as parameters.
from typing import Dict, Tuple

ip_map: Dict[int, str] = {  # ip id to address
    1: "192.168.2.23",
    2: "192.168.2.100",
    3: "192.168.2.254",
}

def get_single_ip(id_: int) -> str:
    return ip_map[id_]

def get_multiple_ips(*ids: int) -> Tuple[str, ...]:
    return tuple(ip_map[i] for i in ids)
@pytest.mark.parametrize("ip_address", [get_single_ip(1), get_single_ip(2)])
def test_ip_accessible(ip_address):
    # ip_address will be 192.168.2.23, then 192.168.2.100, in two separate tests
    assert is_accessible(ip_address)

@pytest.mark.parametrize("ip_addresses", [get_multiple_ips(1, 2), get_multiple_ips(2, 3)])
def test_with_multiple_ips(ip_addresses):
    # this test will also run twice. The first time, ip_addresses will be
    # ("192.168.2.23", "192.168.2.100"); the second time, it will be
    # ("192.168.2.100", "192.168.2.254").
    assert something_with_a_list_of_ips(ip_addresses)
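As for the conftest.py idea from the question, a fixture can pick the right host by inspecting the marks on the requesting test. Here is a minimal sketch, assuming a hypothetical hosts.json that maps mark names to host details (the file name and layout are illustrative, not from the question):

# conftest.py -- a sketch under the assumptions above, not a definitive implementation.
# hosts.json is assumed to look like:
#   {"ip1": {"ip": "192.168.2.23"}, "ip2": {"ip": "192.168.2.100"}}
import json
import pytest

with open("hosts.json") as f:
    HOSTS = json.load(f)

@pytest.fixture
def target_ip(request):
    # Return the IP for the first mark on the test that matches a hosts.json key.
    for mark in request.node.iter_markers():
        if mark.name in HOSTS:
            return HOSTS[mark.name]["ip"]
    pytest.fail("test has no ip mark")

A test marked @pytest.mark.ip1 that takes the target_ip fixture would then receive "192.168.2.23". You would also want to register ip1 and ip2 under markers in pytest.ini to avoid unknown-mark warnings.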

Related

Pulumi DigitalOcean: use outputs

I want to create some servers on DigitalOcean using Pulumi. I have the following code:
for i in range(0, amount):
    name = f"droplet-{i+1}"
    droplet = digitalocean.Droplet(
        name,
        image=_image,
        region=_region,
        size=_size,
    )
    pulumi.export(f"droplet-ip-{i+1}", droplet.ipv4_address)
This is correctly outputting the IP address of the servers on the console.
However, I would like to use the IP addresses elsewhere in my Python script. Therefore I added the droplets to a list as follows:
droplets = []
for i in range(0, amount):
    name = f"droplet-{i+1}"
    droplet = digitalocean.Droplet(
        name,
        image=_image,
        region=_region,
        size=_size,
    )
    pulumi.export(f"droplet-ip-{i+1}", droplet.ipv4_address)
    droplets.append(droplet)
to then loop over the droplets as follows:
for droplet in droplets:
    print(droplet.ipv4_address)
In the Pulumi output, I see the following:
Diagnostics:
pulumi:pulumi:Stack (Pulumi_DigitalOcean-dev):
<pulumi.output.Output object at 0x105086b50>
<pulumi.output.Output object at 0x1050a5ac0>
I realize that while the droplets are still being created the IP addresses are unknown, but I'm adding the droplets to the list after their creation.
Is there a way to get the IP addresses at some point so they can be used elsewhere in the Python script?
The short answer is that because these values are Outputs, if you want the strings, you'll need to use .apply:
https://www.pulumi.com/docs/intro/concepts/inputs-outputs/#apply
To access the raw value of an output and transform that value into a new value, use apply. This method accepts a callback that will be invoked with the raw value, once that value is available.
You can print these IPs by iterating over the list and calling the apply method on the ipv4_address output value:
...
    pulumi.export(f"droplet-ip-{i+1}", droplet.ipv4_address)
    droplets.append(droplet)
...

for droplet in droplets:
    droplet.ipv4_address.apply(lambda addr: print(addr))
$ pulumi up
...
Diagnostics:
pulumi:pulumi:Stack (so-71888481-dev):
143.110.157.64
137.184.92.205
Outputs:
droplet-ip-1: "137.184.92.205"
droplet-ip-2: "143.110.157.64"
Depending on how you plan to use these strings in your program, this particular approach may or may not be perfect, but in general, if you want the unwrapped value of a pulumi.Output, you'll need to use .apply().
pulumi.Output.all() also comes in handy if you want to wait for several output values to resolve before using them:
https://www.pulumi.com/docs/intro/concepts/inputs-outputs/#all
If you have multiple outputs and need to join them, the all function acts like an apply over many resources. This function joins over an entire list of outputs. It waits for all of them to become available and then provides them to the supplied callback.
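For example, a minimal sketch building on the droplets list above: resolve every droplet's address at once, then use them together inside the callback.

# ips is a plain list of IP strings once all the outputs have resolved
all_ips = pulumi.Output.all(*[d.ipv4_address for d in droplets])
all_ips.apply(lambda ips: print("all droplet IPs:", ips))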
Hope that helps!

Pass string parameter to remote process in kdb

I am trying to pass a variable that is a string to the IPC query. This does not work for me.
Example:
[`EDD.RDB; "?[`tab;enlist(like;`OrderId;",("string Number),");();(?:;`Actions)]"]
I am trying to query this RDB for rows where OrderId is like Number (a string).
Number is a parameter, but when I pass it as a string to the remote process, Number is not a string any more. I tried putting string in front of it, but I still get the same result.
What I want to pass to the remote process is this:
Number:"abc"
"?[`tab;enlist(like;`OrderId;"abc");();(?:;`Actions)]"
EDIT since you have updated your question:
It's hard to give a solid answer here as your example is lacking information.
What you have posted is not a valid IPC call in kdb+. I suspect what you may be trying to run is something like:
h(`EDD.RDB; "?[`tab;enlist(like;`OrderId;",(string Number),");();(?:;`Actions)]")
Assuming Number is an int (e.g. Number:123), you could rewrite it as:
h(`EDD.RDB;"select distinct Actions from tab where OrderId like \"",string[Number],"\"")
which is easier to read and work with. Assuming Number is defined on the client side, the above should return an answer.
If you do want to use the functional form, you could try something like:
"?[`tab;enlist (like;`OrderId;string[",string[Number],"]);1b;(enlist`Actions)!enlist`Actions]"
as your query string.
If Number is already a string on your process, e.g. Number:"123", then you should be able to use either:
h(`EDD.RDB;"select distinct Actions from tab where OrderId like \"",Number,"\"")
OR
h(`EDD.RDB;"?[`tab;enlist (like;`OrderId;string[",Number,"]);1b;(enlist`Actions)!enlist`Actions]")
Does the IPC query have to be a string? Passing parameters would be cleaner using (func;params) syntax for IPC:
handleToRdb ({[number] ?[`tab;enlist(like;`OrderId;number);();(?:;`Actions)]};"abc")

py.test: How to avoid naming every test? Generate the test name dynamically

Occasionally I name tests like test_X1, test_X2, ... because:
- it is always about feature X,
- the tests are small, and
- I don't want a descriptive name that is longer than the test itself.
Especially when things still change a lot, I don't want to think of a name at all.
The line where the test resides identifies the test within the file.
So how can I use the line number for the test name?
Here is a way to name the tests dynamically as test_<line>:
from inspect import currentframe
def namebyline(f):
    line = currentframe().f_back.f_lineno
    globals()['test_' + str(line)] = f
@namebyline
def _():  # becomes test_6
    assert True
@namebyline
def _():  # becomes test_9
    assert False
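A small variant of the same idea (my addition, a sketch rather than part of the original answer): return the function and rename it too, so pytest's failure report shows the generated name instead of _.

from inspect import currentframe

def namebyline(f):
    # Name the test after the caller's line number, as above,
    # but also rewrite __name__ for nicer failure output.
    line = currentframe().f_back.f_lineno
    f.__name__ = 'test_' + str(line)
    globals()[f.__name__] = f
    return f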

yaml safe_load of many different objects

I have a huge YAML file with tag definitions like in this snippet:
- !!python/object:manufacturer.Manufacturer
  name: aaaa
  address: !!python/object:address.BusinessAddress {street: bbbb, number: 123, city: cccc}
I need to load this, first to make sure that the file is correct YAML, and second to extract information at a certain tree depth given a certain context. If I had this all as nested dicts, lists and primitives, that would be straightforward to do. But I cannot load the file, as I don't have the original Python sources and class definitions, so yaml.load() is out.
I have tried yaml.safe_load(), but that throws an exception.
BaseLoader loads the file, so the YAML is correct, but it lumps all primitive information (numbers, datetimes) together as strings.
Then I found "How to deserialize an object with PyYAML using safe_load?", but since the file has over 100 different tags defined, the solution presented there is impractical.
Do I have to use some other tool to strip the !!tag definitions (there is at least one place where !! occurs inside a normal string) so I can use safe_load? Is there a simpler way to solve this that I am not aware of?
If not, I will have to do some string parsing to get the types back, but I thought I'd ask here first.
There is no need to go the cumbersome route of adding any of the classes if you want to use safe_load() on such a file.
You should have gotten a ConstructorError thrown in SafeConstructor.construct_undefined() in constructor.py. That method gets registered for the fall-through case None in the constructor.py file.
If you combine that info with the fact that all such tagged "classes" are mappings (and not lists or scalars), you can just copy the code for mappings into a new function and register it as the fall-through case.
import yaml
from yaml.constructor import SafeConstructor

def my_construct_undefined(self, node):
    data = {}
    yield data
    value = self.construct_mapping(node)
    data.update(value)

SafeConstructor.add_constructor(
    None, my_construct_undefined)

yaml_str = """\
- !!python/object:manufacturer.Manufacturer
  name: aaaa
  address: !!python/object:address.BusinessAddress {street: bbbb, number: 123, city: cccc}
"""

data = yaml.safe_load(yaml_str)
print(data)
should get you:
[{'name': 'aaaa', 'address': {'city': 'cccc', 'street': 'bbbb', 'number': 123}}]
without an exception being thrown, and with number as an integer, not a string.
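If some tags in the real file wrap sequences or scalars rather than mappings, the same idea extends by dispatching on the node type. This is my sketch, not part of the original answer:

import yaml
from yaml.constructor import SafeConstructor
from yaml.nodes import MappingNode, SequenceNode

def any_construct_undefined(self, node):
    # Build mappings and sequences in two steps so anchored/recursive
    # structures still resolve; scalars resolve directly (as strings).
    if isinstance(node, MappingNode):
        data = {}
        yield data
        data.update(self.construct_mapping(node))
    elif isinstance(node, SequenceNode):
        data = []
        yield data
        data.extend(self.construct_sequence(node))
    else:
        yield self.construct_scalar(node)

SafeConstructor.add_constructor(None, any_construct_undefined)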

Treeline.io sanitize inputs

I have just started investigating the treeline.io beta, and I could not find any way in the existing machinepacks that would do the job (sanitizing user inputs). I'm wondering if I can do it in any way, ideally within Treeline.
Treeline automatically does type-checking on all incoming request parameters. If you create a route POST /foo with parameter age and give it 123 as an example, it will automatically display an error message if you try to post to /foo with age set to abc, because it's not a number.
As far as more complex validation goes, you can certainly do it in Treeline: just add more machines to the beginning of your route. The if machine works well for simple tasks; for example, to ensure that age is < 150, you can use if and set the left-hand value to the age parameter, the right-hand value to 150, and the comparison to "<". For more custom validations you can create your own machine using the built-in editor and add pass and fail exits like the if machine has!
The schema-inspector machinepack allows you to sanitize and validate the inputs in Treeline: machinepack-schemainspector
Here is how I'm using it in my Treeline project (the original answer showed screenshots, omitted here): a Sanitize element, followed by a Validate element that uses the Sanitize output.
For the next parts, I'm always using the Sanitize output (email trimmed and in lowercase for this example).