How to add test-specific user properties to report with pytest-reportlog?

Say I have a test
# contents of test_foo.py
import pytest

@pytest.mark.parametrize("value,expected", [(2, 4), (4, 16)])
def test_square(value: int, expected: int):
    assert value**2 == expected
and I want to produce a JSON report of the test run with pytest-reportlog. How can I add data to the report in each test? Specifically, I would like to add the values of value and expected to the report for each test.
I run the tests with
pytest --report-log report.json
and I have pytest==6.2.1 and pytest-reportlog==0.1.2 installed.

Use the record_property fixture (see examples). This fixture provides a callable with the signature record_property(name: str, value: object) -> None:
import typing as ty

import pytest

@pytest.mark.parametrize("value,expected", [(2, 4), (4, 16)])
def test_square(
    record_property: ty.Callable[[str, ty.Any], None], value: int, expected: int
):
    record_property("value", value)
    record_property("expected", expected)
    assert value**2 == expected
These values can be accessed under the user_properties key of the JSON objects for these tests.
For example:
"user_properties": [["value", 2], ["expected", 4]]

Related

Parametrize a Pytest using a dictionary with key/value individual pairs mapping

I have multiple tests in one test class, and I would like to use a dictionary to parametrize the class.
Dictionary structure: {key1: [val1, val2], key2: [val3, val4]}
Test:
@pytest.mark.parametrize('key, value', [(k, v) for k, l in some_dict.items() for v in l], scope='class')
# ^^^ the best solution that I've found, but still not as expected, and without ids ^^^
class TestSomething:
    def test_foo(self):
        assert True

    def test_boo(self):
        assert True
Expected order (including ids; both keys and values are objects and provide a '.name' property):
<Class TestSomething>
<Function test_foo[key1_name-val1_name]>
<Function test_boo[key1_name-val1_name]>
<Function test_foo[key1_name-val2_name]>
<Function test_boo[key1_name-val2_name]>
<Function test_foo[key2_name-val3_name]>
<Function test_boo[key2_name-val3_name]>
<Function test_foo[key2_name-val4_name]>
<Function test_boo[key2_name-val4_name]>
How can I add ids for this parametrize?
Here is a solution using an external function that is in charge of formatting names from the parameter values.
def idfn(val):
    # receives each val here,
    # so you can return a custom property
    return val.name

@pytest.mark.parametrize(
    "key, value",
    [(k, v) for k, l in some_dict.items() for v in l],
    scope="class",
    ids=idfn,
)
class TestSomething:
    def test_foo(self, key, value):
        assert True
But the simple solution with a lambda suggested by MrBean also works. In your simple case I would pick that one, and use an external function only when more complex formatting is required.
@pytest.mark.parametrize(
    "key, value",
    [(k, v) for k, l in some_dict.items() for v in l],
    scope="class",
    ids=lambda val: val.name,
)
class TestSomething:
    def test_foo(self, key, value):
        assert True
The available options are presented in the docs.

apply/get methods in Scala

If we go by the definition in "Programming in Scala" book:
When you apply parentheses surrounding one or more values to a
variable, Scala will transform the code into an invocation of a method
named apply on that variable
Then what about accessing the elements of an array? E.g. is x(0) transformed to x.apply(0)? (Let's assume that x is an array.) I tried to execute the line above, but it threw an error. I also tried x.get(0), which also threw an error.
Can anyone please help?
() implies apply().
An Array example:
scala> val data = Array(1, 1, 2, 3, 5, 8)
data: Array[Int] = Array(1, 1, 2, 3, 5, 8)
scala> data.apply(0)
res0: Int = 1
scala> data(0)
res1: Int = 1
Not related, but a safer alternative is to use lift:
scala> data.lift(0)
res4: Option[Int] = Some(1)
scala> data.lift(100)
res5: Option[Int] = None
Note: scala.Array can be mutated:
scala> data(0) = 100
scala> data
res7: Array[Int] = Array(100, 1, 2, 3, 5, 8)
Here you cannot use apply; think of apply as a getter, not a mutator:
scala> data.apply(0) = 100
<console>:13: error: missing argument list for method apply in class Array
Unapplied methods are only converted to functions when a function type is expected.
You can make this conversion explicit by writing `apply _` or `apply(_)` instead of `apply`.
data.apply(0) = 100
^
Use .update instead if you want to mutate:
scala> data.update(0, 200)
scala> data
res11: Array[Int] = Array(200, 1, 2, 3, 5, 8)
A user-defined apply method:
scala> object Test {
|
| case class User(name: String, password: String)
|
| object User {
| def apply(): User = User("updupd", "password")
| }
|
| }
defined object Test
scala> Test.User()
res2: Test.User = User(updupd,password)
If you add an apply method to an object, you can apply that object (like you can apply functions). The way to do that is to apply the object as if it were a function, directly with (), without a dot.
val array:Array[Int] = Array(1,2,3,4)
array(0) == array.apply(0)
For
x(1)=200
which you mention in the comment, the answer is different. It also gets translated to a method call, but not to apply; instead it's
x.update(1, 200)
Just like apply, this works with any type that defines a suitable update method.
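As a minimal sketch (the Counters class below is hypothetical, not from the answers above), any type can opt into both syntaxes by defining apply and update:
class Counters {
  private val counts = scala.collection.mutable.Map.empty[String, Int]
  // c("hits") is translated to c.apply("hits")
  def apply(key: String): Int = counts.getOrElse(key, 0)
  // c("hits") = 3 is translated to c.update("hits", 3)
  def update(key: String, value: Int): Unit = counts(key) = value
}

val c = new Counters
c("hits") = 3        // desugars to c.update("hits", 3)
println(c("hits"))   // desugars to c.apply("hits") and prints 3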

spark: value histogram is not a member of org.apache.spark.rdd.RDD[Option[Any]]

I'm new to spark and scala and I've come up with a compile error with scala:
Let's say we have a rdd, which is a map like this:
val rawData = someRDD.map {
  // some ops
  Map(
    "A" -> someInt_var1, // Int
    "B" -> someInt_var2, // Int
    "C" -> somelong_var  // Long
  )
}
Then, I want to get histogram info of these vars. So, here is my code:
rawData.map{row => row.get("A")}.histogram(10)
And the compile error says:
value histogram is not a member of org.apache.spark.rdd.RDD[Option[Any]]
I'm wondering why rawData.map{row => row.get("A")} is an org.apache.spark.rdd.RDD[Option[Any]], and how I can transform it into an RDD[Int]?
I have tried this:
rawData.map{row => row.get("A")}.map{_.toInt}.histogram(10)
But it fails to compile:
value toInt is not a member of Option[Any]
I'm totally confused and looking for help here.
You get an Option because Map.get returns an Option: it is None if the key doesn't exist in the Map. The Any comes from the mixed value types of the Map: you have both Int and Long, so their least upper bound is used (in my example below it is AnyVal instead of Any).
A possible solution is to use getOrElse to get rid of the Option by providing a default value for when the key doesn't exist, and if you are sure A's value is always an Int, to convert it from AnyVal to Int with asInstanceOf[Int].
A simplified example:
val rawData = sc.parallelize(Seq(Map("A" -> 1, "B" -> 2, "C" -> 4L)))
rawData.map(_.get("A"))
// res6: org.apache.spark.rdd.RDD[Option[AnyVal]] = MapPartitionsRDD[9] at map at <console>:27
rawData.map(_.getOrElse("A", 0).asInstanceOf[Int]).histogram(10)
// res7: (Array[Double], Array[Long]) = (Array(1.0, 1.0),Array(1))
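As an aside (a sketch, not part of the answer above), a pattern match avoids the cast entirely, at the price of silently dropping rows where "A" is missing or not an Int:
rawData
  .flatMap(_.get("A"))            // RDD[AnyVal]: drops rows without an "A" key
  .collect { case i: Int => i }   // RDD[Int]: keeps only the Int values
  .histogram(10)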

Scala Array of different types of values

I am writing a function in Scala that accepts an Array/Tuple/Seq of different types of values and sorts it based on the first two values in each:
def sortFunction[T](input: Array[T]) = input(0)+ " " + input(1)
The input values I have are as below:
val data = Array((1, "alpha",88.9), (2, "alpha",77), (2, "beta"), (3, "alpha"), (1, "gamma",99))
Then I call the sortFunction as:
data.sortWith(sortFunction)
It is giving below errors:
- polymorphic expression cannot be instantiated to expected type; found : [T]scala.collection.mutable.Seq[T] ⇒ Int required: ((Int, String)) ⇒ ? Error occurred in an application involving default arguments.
- type mismatch; found : scala.collection.mutable.Seq[T] ⇒ Int required: ((Int, String)) ⇒ ? Error occurred in an application involving default arguments.
What am I doing wrong, or how do I get around this? I would be grateful for any suggestions.
If you know the type of the elements in Array[T], you can use pattern matching (when they share the same type). But if you don't, the program can't decide how to sort your data.
One approach is a plain String comparison, like below.
object Hello {
  def sortFunction[T](input1: T, input2: T): Boolean =
    input1 match {
      case t: Product =>
        val t2 = input2.asInstanceOf[Product]
        t.productElement(0).toString < t2.productElement(0).toString
      case _ =>
        input1.toString < input2.toString
    }

  def main(args: Array[String]): Unit = {
    val data = Array((1, "alpha", 88.9), (2, "alpha", 77), (2, "beta", 99), (3, "alpha"), (1, "gamma", 99))
    println(data.sortWith(sortFunction).mkString)
  }
}
If you want to know more about the Product trait, see http://www.scala-lang.org/api/rc2/scala/Product.html
If you have an Array of tuples that all have the same arity, such as tuples of (Int, String), then your sort function could look something like:
def sortFunction[T](fst: (Int, String), scd: (Int, String)) = fst._1 < scd._1 // sort by first element
However, since you have an Array of tuples of varying arity, the Scala compiler can only unify them under their nearest common supertype, Product. Then you can sort like this:
def sortFunction[T](fst: Product, scd: Product) = fst.productElement(1).toString < scd.productElement(1).toString
val data = Array((1, "alpha", 99), (2, "alpha"), (2, "beta"), (3, "alpha"), (1, "gamma"))
data.sortWith(sortFunction) // List((1,alpha,99), (2,alpha), (3,alpha), (2,beta), (1,gamma))
Note that this is really bad design, though. You should create an abstract data type that encapsulates your data in a more structured way. I can't say exactly what it should look like, since I don't know where and how you are getting this data, but here's an example (called Foo, but you should of course name it meaningfully):
case class Foo(index: Int, name: String, parameters: List[Int])
I just assumed that the first element in each piece of data is an "index" and the second one is a "name". I also assumed that the remaining elements will always be integers and that there may be zero, one or more of them, hence a List (if it's only ever zero or one, an Option would be a better choice).
Then you could sort as:
def sortFunction[T](fst: Foo, scd: Foo) = fst.index < scd.index
or
def sortFunction[T](fst: Foo, scd: Foo) = fst.name < scd.name
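Putting it together, a minimal sketch (the sample Foo values below are made up for illustration):
case class Foo(index: Int, name: String, parameters: List[Int])

val data = Array(
  Foo(1, "alpha", List(99)),
  Foo(2, "alpha", List(77)),
  Foo(2, "beta", Nil),
  Foo(3, "alpha", Nil),
  Foo(1, "gamma", List(99))
)

// With a structured type, sortBy replaces the hand-written comparison:
data.sortBy(f => (f.index, f.name)) // sorts by index, then by name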

`circe` Type-level Json => A Function?

Using circe or argonaut, how can I write a Json => A (note - Json may not be the name of the type) where A is given by the SSN class:
// A USA Social Security Number has exactly 8 digits.
case class SSN(value: Sized[List[Nat], _8])
?
Pseudocode:
// assuming this function is named f
f(JsArray(JsNumber(1))) would fail to become an A since its size is 1, whereas
f(JsArray(JsNumber(1), ..., JsNumber(8))) === SSN(SizedList(1,...,8))
circe doesn't (currently) provide instances for Sized, but it probably should. In any case you can write your own pretty straightforwardly:
import cats.data.Xor
import io.circe.{ Decoder, DecodingFailure }
import shapeless.{ Nat, Sized }
import shapeless.ops.nat.ToInt
import shapeless.syntax.sized._
implicit def decodeSized[L <: Nat, A](implicit
  dl: Decoder[List[A]],
  ti: ToInt[L]
): Decoder[Sized[List[A], L]] = Decoder.instance { c =>
  dl(c).flatMap(as =>
    Xor.fromOption(as.sized[L], DecodingFailure(s"Sized[List[A], _${ti()}]", c.history))
  )
}
I've restricted this to List representations, but you could make it more generic if you wanted.
Now you can write your SSN instance like this (note that I'm using Int instead of Nat for the individual numbers, since once you've got something statically typed as a Nat it's not worth much):
case class SSN(value: Sized[List[Int], Nat._8])
implicit val decodeSSN: Decoder[SSN] = Decoder[Sized[List[Int], Nat._8]].map(SSN(_))
And then:
scala> import io.circe.jawn.decode
import io.circe.jawn.decode
scala> decode[SSN]("[1, 2, 3, 4, 5, 6, 7, 8]")
res0: cats.data.Xor[io.circe.Error,SSN] = Right(SSN(List(1, 2, 3, 4, 5, 6, 7, 8)))
scala> decode[SSN]("[1, 2, 3, 4, 5, 6, 7]")
res1: cats.data.Xor[io.circe.Error,SSN] = Left(DecodingFailure(Sized[List[A], _8], List()))
If you really want a Json => SSN you could do this:
val f: Json => SSN = Decoder[SSN].decodeJson(_).valueOr(throw _)
But that's not really idiomatic use of circe.
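If you do want a first-class function value without throwing, one option (a sketch, relying on this circe version's Xor-based Decoder.Result) is to keep the failure channel:
val g: Json => Xor[DecodingFailure, SSN] = Decoder[SSN].decodeJson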