Groovy: Transform Vector from SOAP Response

I'm taking my first steps with Groovy and SoapUI.
My question is how to transform a vector from a SOAP response into a list and then assert values of the vector's elements.
The response looks like this:
<![CDATA[<Vector>
<Object>
<pk>1</pk>
<valueA>B</valueA>
<valueB>20132</valueB>
</Object>
<Object>
<pk>2</pk>
<valueA>C</valueA>
<valueB>666</valueB>
</Object>
</Vector>]]>
I want to transform this Vector into a List to compare it with a local list, maybe something like this:
def localCompare = new ArrayList()
def objekt = [pk: 1, valueA: 'B', valueB: 20132]
localCompare.add(objekt)
objekt = [pk: 2, valueA: 'C', valueB: 666]
localCompare.add(objekt)
assert localCompare.size() == 2
assert localCompare[0]['pk'] == 1
Note: I'll do this in a loop for each entry; I just wanted to show that I want to compare the response vector against local values.

You can use Groovy's XmlParser to deal with the response data.
First, parse your XML. Then access the items of the response like this:
def xml = '''<Vector>
<Object>
<pk>1</pk>
<valueA>B</valueA>
<valueB>20132</valueB>
</Object>
<Object>
<pk>2</pk>
<valueA>C</valueA>
<valueB>666</valueB>
</Object>
</Vector>'''
def records = new XmlParser().parseText(xml)
println records.Object.pk[0].text() // returns 1 from <pk>1</pk>
println records.Object.pk[1].text() // returns 2 from <pk>2</pk>
println records.Object.valueB[0].text() // returns 20132 from <valueB>20132</valueB>
println records.Object.valueB[1].text() // returns 666 from <valueB>666</valueB>
println records.Object.valueA[0].text() // returns B from <valueA>B</valueA>
println records.Object.valueA[1].text() // returns C from <valueA>C</valueA>
Output:
1
2
20132
666
B
C
Additionally, for more on this approach, have a look at http://groovy.codehaus.org/Reading+XML+using+Groovy%27s+XmlParser
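For readers coming from other ecosystems, the same parse-then-compare idea can be sketched in Python with the standard library (illustrative only, not part of the original answer):

```python
import xml.etree.ElementTree as ET

xml = """<Vector>
  <Object><pk>1</pk><valueA>B</valueA><valueB>20132</valueB></Object>
  <Object><pk>2</pk><valueA>C</valueA><valueB>666</valueB></Object>
</Vector>"""

root = ET.fromstring(xml)
# Turn each <Object> into a dict so the whole vector becomes a list of dicts.
records = [{child.tag: child.text for child in obj}
           for obj in root.findall("Object")]

expected = [
    {"pk": "1", "valueA": "B", "valueB": "20132"},
    {"pk": "2", "valueA": "C", "valueB": "666"},
]
assert records == expected
```

Note that element text comes back as strings, so either compare against string values as above or convert (e.g. `int(rec["pk"])`) before asserting.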

Related

Parametrize a pytest test class using a dictionary with individual key/value pair mappings

I have multiple tests in one test class and would like to use a dictionary to parametrize the class.
Dictionary structure: {key1: [val_1, val_2], key2: [val_3, val_4]}
Test:
@pytest.mark.parametrize('key, value', [(k, v) for k, l in some_dict.items() for v in l], scope='class')
# ^^^ the best solution that I've found, but still not as expected, and without ids ^^^
class TestSomething:
    def test_foo(self):
        assert True

    def test_boo(self):
        assert True
Expected order (ids including, both key and values are objects and can provide '.name' property):
<Class TestSomething>
<Function test_foo[key1_name-val1_name]>
<Function test_boo[key1_name-val1_name]>
<Function test_foo[key1_name-val2_name]>
<Function test_boo[key1_name-val2_name]>
<Function test_foo[key2_name-val3_name]>
<Function test_boo[key2_name-val3_name]>
<Function test_foo[key2_name-val4_name]>
<Function test_boo[key2_name-val4_name]>
How can I add ids for this parametrize?
Here is a solution using an external function in charge of formatting ids from the parameter values.
def idfn(val):
    # receives each parameter value,
    # so you can return a custom property as the id
    return val.name

@pytest.mark.parametrize(
    "key, value",
    [(k, v) for k, l in some_dict.items() for v in l],
    scope="class",
    ids=idfn,
)
class TestSomething:
    def test_foo(self, key, value):
        assert True
But the simple solution with a lambda suggested by MrBean also works. In your simple case I would pick that one, and use an external function only when more complex formatting is required.
@pytest.mark.parametrize(
    "key, value",
    [(k, v) for k, l in some_dict.items() for v in l],
    scope="class",
    ids=lambda val: val.name,
)
class TestSomething:
    def test_foo(self, key, value):
        assert True
The available options are presented in the pytest docs.
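To see how the flattening comprehension and the id function interact, here is a self-contained sketch; the Item class with its name attribute is an assumed stand-in for the question's objects:

```python
# Item is an assumed stand-in for the question's objects,
# which expose a .name property.
class Item:
    def __init__(self, name):
        self.name = name

some_dict = {
    Item("key1"): [Item("val1"), Item("val2")],
    Item("key2"): [Item("val3"), Item("val4")],
}

# The flattening comprehension from the answer: one (key, value) pair per test.
params = [(k, v) for k, l in some_dict.items() for v in l]

def idfn(val):
    # called once per parameter value; returns its display id
    return val.name

# pytest joins the per-parameter ids with "-" to build the full test id.
ids = ["-".join(idfn(p) for p in pair) for pair in params]
print(ids)  # ['key1-val1', 'key1-val2', 'key2-val3', 'key2-val4']
```

This matches the expected ordering from the question: each key is paired with each of its values, in insertion order.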

Use generator with ruamel.yaml

I would like to have a bunch of generators in my config dict. So I tried this:
@yaml.register_class
class UniformDistribution:
    yaml_tag = '!uniform'

    @classmethod
    def from_yaml(cls, constructor, node):
        for x in node.value:
            if x[0].value == 'min':
                min_ = float(x[1].value)
            if x[0].value == 'max':
                max_ = float(x[1].value)

        def f():
            while True:
                yield np.random.uniform(min_, max_)

        g = f()
        return g
However, the parser never returns, because generators are used internally to resolve references like &A and *A. Returning (g,) instead is a fairly simple workaround, but I would prefer a solution where I don't need the additional and very confusing index-0 term in next(config['position_generator'][0]).
Any Ideas?
This wrapper, adapted from a different question, did exactly what I was looking for.
from collections.abc import Generator

class GeneratorWrapper(Generator):
    def __init__(self, function, *args):
        self.function = function
        self.args = args

    def send(self, ignored_arg):
        return self.function(*self.args)

    def throw(self, typ=None, val=None, tb=None):
        raise StopIteration

@yaml.register_class
class UniformDistribution:
    yaml_tag = '!uniform'

    @classmethod
    def from_yaml(cls, constructor, node):
        for x in node.value:
            value = float(x[1].value)
            if x[0].value == 'min':
                min_ = value
            if x[0].value == 'max':
                max_ = value
        return GeneratorWrapper(np.random.uniform, min_, max_)
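As a quick standalone check of why this works: the Generator ABC supplies __next__ in terms of send, so next() can be called on the wrapper indefinitely without the YAML machinery ever consuming it. A minimal sketch with a deterministic stand-in for np.random.uniform:

```python
from collections.abc import Generator

class GeneratorWrapper(Generator):
    def __init__(self, function, *args):
        self.function = function
        self.args = args

    def send(self, ignored_arg):
        # __next__ is provided by the Generator ABC as send(None),
        # so every next() call re-invokes the wrapped function
        return self.function(*self.args)

    def throw(self, typ=None, val=None, tb=None):
        raise StopIteration

# Deterministic stand-in for np.random.uniform, so the behaviour is checkable.
def fake_uniform(low, high):
    return low

g = GeneratorWrapper(fake_uniform, 1.0, 5.0)
assert next(g) == 1.0
assert next(g) == 1.0  # never exhausted: each next() is a fresh call
```

Unlike a real generator object, the wrapper holds no frame state, which is why ruamel.yaml's internal reference-resolution machinery does not trip over it.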

How to test a function that depends on another one in Python

I need to test this type of code below:
list = [1, 2, 3, 4]

def getData(list):
    return list[0] + list[1]

def processData():
    data = getData(list)
    multiply = data * data
    return multiply

def test_functions():
    assert getData([0, 1]) == 1
    assert processData() == 1
How do I tell the test that I need data = getData([0, 1]), i.e. basically replace data with my test values?
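One way to do this (a sketch, not from the thread) is to temporarily rebind the module-level getData that processData looks up at call time. In a real project you would typically reach for unittest.mock.patch('your_module.getData', return_value=1), where your_module is a hypothetical module name:

```python
# The question's code, with `list` renamed to `values` so it doesn't
# shadow the built-in.
values = [1, 2, 3, 4]

def getData(lst):
    return lst[0] + lst[1]

def processData():
    data = getData(values)
    return data * data

def test_functions():
    global getData
    assert getData([0, 1]) == 1
    original = getData
    getData = lambda lst: 1   # stub: processData now sees data == 1
    try:
        assert processData() == 1
    finally:
        getData = original    # restore the real implementation

test_functions()
```

This works because processData resolves the name getData in the module's globals on each call, so rebinding (or patching) that name is enough to inject test values.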

Complex custom Matcher

I am writing tests for the Json output of some API calls in my API, written with Play on Scala. In my tests, this pattern keeps appearing, and I would like to deduplicate it.
val response = sut.index()(FakeRequest())
val expected = Json.parse("""{ "channels":[] }""")
status(response) must equalTo(OK)
contentType(response) must beSome.which(_ == "application/json")
contentAsJson(response) mustEqual expected
My first approach was this:
def assertSameJson(response: Future[Result], expected: JsValue): Unit = {
status(response) must equalTo(OK)
contentType(response) must beSome.which(_ == "application/json")
contentAsJson(response) mustEqual expected
}
But this does not feel idiomatic at all. It seems like I am adding xUnit asserts to my specs.
I would like something leading to
response must beSameJson(expected)
The closest thing I managed was
def beSameJson(other: Any) =
  be_==(other) ^^ ((t: Future[Result]) => contentAsJson(t)) and
  be_==(OK) ^^ ((t: Future[Result]) => status(t))
But this does not check for the content-type, and I feel it's just very hard to read.
Is there a better way to write this Matcher?
I don't think there is a better way to do that.
The ^^ operator is there exactly for this purpose, to transform the information before applying another matcher, and the and operator can be used to combine more than two matchers.
So the only thing you could do is to try to write it a bit cleaner:
def beSameJson(data: String) =
  equalTo(OK) ^^ {status(_: Future[Result])} and
  beSome.which(_ == "application/json") ^^ {contentType(_: Future[Result])} and
  be_==(data) ^^ {contentAsJson(_: Future[Result])}
If you need to decompose responses more often, you could try to do this more generically
object Dummy extends Matcher[Any] {
  def apply[S <: Any](s: Expectable[S]) = {
    result(true,
      s.description + " is ignored",
      s.description + " is ignored",
      s)
  }
}

def beResponseWhere(json: Matcher[JsValue] = Dummy, stat: Matcher[Int] = Dummy, tpe: Matcher[Option[String]] = Dummy) =
  stat ^^ {status(_: Future[Result])} and
  tpe ^^ {contentType(_: Future[Result])} and
  json ^^ {contentAsJson(_: Future[Result])}
You should probably use nicer parameter names (I tried to avoid conflict with the methods from your context for this example) and be more complete on the available attributes.
Now you should be able to write something like this:
response must beResponseWhere(
json = equalTo(expected),
tpe = beSome.which(_ == "application/json"),
stat = equalTo(OK)
)
The Dummy matcher allows you to leave some parts out.
I obviously did not try this code, as I do not have your complete setup. I also had to guess some types that are not clear from your code snippet.

How to handle nil in scala XML parsing?

I have an XML document representing my model that I need to parse and save in the db. Some fields may have NULL values indicated by xsi:nil, like so:
<quantity xsi:nil="true"/>
For parsing I use the scala.xml DSL. The problem is that I can't find any way of determining whether something is nil or not. (elem \ "quantity") just returns an empty string, which then blows up when I try to convert it to a number. Wrapping it in an Option doesn't help either.
Is there any way to get None, Nil or even null from that piece of XML?
In this case, you can use the namespace URI with the attribute method to get the text of the xsi:nil attribute.
Here is a working example:
scala> val xml = <quantity xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:nil="true"/>
xml: scala.xml.Elem = <quantity xsi:nil="true" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"></quantity>
scala> xml.attribute("http://www.w3.org/2001/XMLSchema-instance", "nil")
res0: Option[Seq[scala.xml.Node]] = Some(true)
If you consider an empty node to be None, then you don't even need to bother with the attribute. Just filter out nodes without any text inside, and use headOption to get the value.
scala> val s1 = <quantity xsi:nil="true">12</quantity>
s1: scala.xml.Elem = <quantity xsi:nil="true">12</quantity>
scala> val s2 = <quantity xsi:nil="true"/>
s2: scala.xml.Elem = <quantity xsi:nil="true"></quantity>
scala> s1.filterNot(_.text.isEmpty).headOption.map(_.text.toInt)
res10: Option[Int] = Some(12)
scala> s2.filterNot(_.text.isEmpty).headOption.map(_.text.toInt)
res11: Option[Int] = None
If you use xtract, you can do this with a combination of filter and optional:
(__ \ "quantity").read[Node]
.filter(_.attribute("http://www.w3.org/2001/XMLSchema-instance", "nil").isEmpty)
.map(_.text.toDouble).optional
See https://www.lucidchart.com/techblog/2016/07/12/introducing-xtract-a-new-xml-deserialization-library-for-scala/
Disclaimer: I work for Lucid Software and am a contributor to xtract.
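For comparison outside Scala, the same two checks from the answers above (the xsi:nil attribute and the empty-text fallback) can be sketched in Python with the standard library's ElementTree; parse_quantity is a made-up helper for illustration:

```python
import xml.etree.ElementTree as ET

XSI = "http://www.w3.org/2001/XMLSchema-instance"

def parse_quantity(xml):
    elem = ET.fromstring(xml)
    # Treat xsi:nil="true" (or an element with no text) as a missing value.
    # ElementTree exposes namespaced attributes as "{uri}localname" keys.
    if elem.get(f"{{{XSI}}}nil") == "true" or not (elem.text or "").strip():
        return None
    return int(elem.text)

assert parse_quantity(f'<quantity xmlns:xsi="{XSI}" xsi:nil="true"/>') is None
assert parse_quantity('<quantity>12</quantity>') == 12
```

As in the Scala version, checking for empty text alone is often enough; the explicit attribute check matters when an element can legitimately be empty without being nil.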