How can I get a return value from ScalaTest indicating test suite failure?

I'm running a ScalaTest (FlatSpec) suite programmatically, like so:
new MyAwesomeSpec().execute()
Is there some way I can figure out if all tests passed? Suite#execute() returns Unit here, so it does not help. Ideally, I'd like to run the whole suite and then get a return value indicating whether any tests failed; an alternative would be to fail/return immediately on any failed test.
I can probably achieve this by writing a new FlatSpec subclass that overrides the ScalaTest Suite#execute() method to return a value, but is there a better way to do what I want here?

org.scalatest.Suite also has a run method, which returns the Status of a single executed test.
With a little tweaking, we can access the execution result of each test. To run a test, we need to provide a Reporter instance. An ad-hoc empty reporter will be enough in our simple case:
val reporter = new Reporter() {
  override def apply(e: Event) = {}
}
So, let's execute them:

import org.scalatest.events.Event
import org.scalatest.{Args, Reporter}

val testSuite = new MyAwesomeSpec()
val testNames = testSuite.testNames
testNames.foreach { test =>
  val result = testSuite.run(Some(test), Args(reporter))
  val status = if (result.succeeds()) "OK" else "FAILURE!"
  println(s"Test: '$test'\n\tStatus=$status")
}
This will produce output similar to the following:
Test: 'This test should pass'
    Status=OK
Test: 'Another test should fail'
    Status=FAILURE!
Having access to each test case name and its respective execution result, you should have enough data to achieve your goal.
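If all you need is a single pass/fail value for the whole suite, a minimal sketch (reusing the silent reporter from above) could fold the per-test results together; alternatively, run(None, ...) executes every test and returns one composite Status:

// Option 1: run test-by-test and require that every one succeeds.
// Status.succeeds() blocks until the test has completed.
val allPassed: Boolean = testSuite.testNames.forall { test =>
  testSuite.run(Some(test), Args(reporter)).succeeds()
}

// Option 2: run the whole suite at once and inspect the combined Status.
val suitePassed: Boolean = new MyAwesomeSpec().run(None, Args(reporter)).succeeds()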

Related

Scala Unit testing for ProcessAllWindowFunction

After reading the official Flink testing documentation (https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/stream/testing.html),
I was able to develop tests for a ProcessFunction using a test harness, something like this:
pendingPartitionBuilder = new PendingPartitionBuilder(":::some_name", "")
testHarness =
  new OneInputStreamOperatorTestHarness[StaticAdequacyTilePublishedData, PendingPartition](
    new ProcessOperator[StaticAdequacyTilePublishedData, PendingPartition](pendingPartitionBuilder)
  )
testHarness.open()
Now I'm trying to do the same for a ProcessAllWindowFunction, which looks like this:
class MapVersionValidationDistributor(batchSize: Int) extends
    ProcessAllWindowFunction[MapVersionValidation, Seq[StaticAdequacyTilePublishedData], TimeWindow] {

  lazy val state: ValueState[Long] = getRuntimeContext
    .getState(new ValueStateDescriptor[Long]("latestMapVersion", classOf[Long]))

  (...)
First, I realized I can't use a test harness for a ProcessAllWindowFunction, because it doesn't have a processElement method. In this case, what unit test strategy should I follow?
EDIT: At the moment my test code looks like this:
val collector = mock[Collector[Seq[StaticAdequacyTilePublishedData]]]
val mvv = new MapVersionValidationDistributor(1)
val input3 = Iterable(new MapVersionValidation("123",Seq(TileValidation(1,true,Seq(1,3,4)))))
val ctx = mock[mvv.Context]
val streamContext = mock[RuntimeContext]
mvv.setRuntimeContext(streamContext)
mvv.open(mock[Configuration])
mvv.process(ctx,input3,collector)
and I'm getting this error:
Unexpected call: <mock-3> RuntimeContext.getState[T](ValueStateDescriptor{name=latestMapVersion, defaultValue=null, serializer=null}) Expected: inAnyOrder { }
You don't really need a test harness to unit test the process method of a ProcessAllWindowFunction. The process function takes three arguments: Context, Iterable[IN], and Collector[OUT]. You can use a mocking library for your language to mock the Context. You can also easily implement or mock the Collector, depending on your preferences. And the Iterable[IN] is just an Iterable containing the elements of your window that would be passed to the function after the window is triggered.
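For instance, here is a minimal sketch of a hand-rolled Collector that simply records whatever the function emits, which avoids mocking the Collector at all (CollectingSink is an illustrative name, not a Flink class):

import org.apache.flink.util.Collector
import scala.collection.mutable.ArrayBuffer

// Records every emitted element so the test can assert on the output.
class CollectingSink[OUT] extends Collector[OUT] {
  val collected = ArrayBuffer.empty[OUT]
  override def collect(record: OUT): Unit = collected += record
  override def close(): Unit = ()
}

Regarding the "Unexpected call" error in the edit: the lazy state field invokes getRuntimeContext.getState the first time it is touched, and a bare mock of RuntimeContext has no expectation configured for that call. The mock needs to be stubbed to return a (real or mocked) ValueState[Long] before process is invoked.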

Give Pytest fixtures different scopes for different tests

In my test suite, I have certain data-generation fixtures which are used with many parameterized tests. Some of these tests would want these fixtures to run only once per session, while others need them to run every function. For example, I may have a fixture similar to:
@pytest.fixture
def get_random_person():
    return random.choice(list_of_people)
and 2 parameterized tests, one which wants to use the same person for each test condition and one which wants a new person each time. Is there any way for this fixture to have scope="session" for one test and scope="function" for another?
James' answer is okay, but it doesn't help if you yield from your fixture code. This is a better way to do it:
# Built In
from contextlib import contextmanager
# 3rd Party
import pytest

@pytest.fixture(scope='session')
def fixture_session_fruit():
    """Showing how fixtures can still be passed to the different scopes.

    If it is `session` scoped then it can be used by all the different scopes;
    otherwise, it must be the same scope or higher than the one it is used on.
    If this was `module` scoped then this fixture could NOT be used on `fixture_session_scope`.
    """
    return "apple"

@contextmanager
def _context_for_fixture(val_to_yield_after_setup):
    # Rather long and complicated fixture implementation here
    print('SETUP: Running before the test')
    yield val_to_yield_after_setup  # Let the test code run
    print('TEARDOWN: Running after the test')

@pytest.fixture(scope='function')
def fixture_function_scope(fixture_session_fruit):
    with _context_for_fixture(fixture_session_fruit) as result:
        yield result

@pytest.fixture(scope='class')
def fixture_class_scope(fixture_session_fruit):
    with _context_for_fixture(fixture_session_fruit) as result:
        yield result

@pytest.fixture(scope='module')
def fixture_module_scope(fixture_session_fruit):
    with _context_for_fixture(fixture_session_fruit) as result:
        yield result

@pytest.fixture(scope='session')
def fixture_session_scope(fixture_session_fruit):
    with _context_for_fixture(fixture_session_fruit) as result:
        # NOTE if the `_context_for_fixture` just did `yield` without any value,
        # there should still be a `yield` here to keep the fixture
        # inside the context till it is done. Just remove the ` result` part.
        yield result
This way you can still handle contextual fixtures.
Github issue for reference: https://github.com/pytest-dev/pytest/issues/3425
One way to do this is to separate out the implementation and then have two differently-scoped fixtures return it. So something like:
def _random_person():
    return random.choice(list_of_people)

@pytest.fixture(scope='function')
def get_random_person_function_scope():
    return _random_person()

@pytest.fixture(scope='session')
def get_random_person_session_scope():
    return _random_person()
I've been doing this:
def _some_fixture(a_dependency_fixture):
    def __some_fixture(x):
        return x
    yield __some_fixture

some_temp_fixture = pytest.fixture(_some_fixture, scope="function")
some_module_fixture = pytest.fixture(_some_fixture, scope="module")
some_session_fixture = pytest.fixture(_some_fixture, scope="session")
Less verbose than using a context manager.
Actually there is a workaround for this using the request object.
You could do something like:
@pytest.fixture(scope='class')
def get_random_person(request):
    request.scope = getattr(request.cls, 'scope', request.scope)
    return random.choice(list_of_people)
Then back at the test class:
@pytest.mark.usefixtures('get_random_person')
class TestSomething:
    scope = 'function'

    def a_random_test(self):
        ...

    def another_test(self):
        ...
However, this only works properly for choosing between 'function' and 'class' scope, and particularly if the fixture starts as class-scoped (and then changes to 'function' or is left as is).
If I try the other way around (from 'function' to 'class'), funny stuff happens and I still can't figure out why.

Pytest yield fixture usage

I have a use case where I may use a fixture multiple times inside a test in a "context manager" way. See the example code below:
in conftest.py
class SomeYield(object):
    def __enter__(self):
        log.info("SomeYield.__enter__")

    def __exit__(self, exc_type, exc_val, exc_tb):
        log.info("SomeYield.__exit__")


def generate_name():
    name = "{current_time}-{uuid}".format(
        current_time=datetime.now().strftime("%Y-%m-%d-%H-%M-%S"),
        uuid=str(uuid.uuid4())[:4]
    )
    return name


@pytest.yield_fixture
def some_yield():
    name = generate_name()
    log.info("Start: {}".format(name))
    yield SomeYield()
    log.info("End: {}".format(name))
in test_some_yield.py
def test_some_yield(some_yield):
    with some_yield:
        pass
    with some_yield:
        pass
Console output:
INFO:conftest:Start: 2017-12-06-01-50-32-5213
INFO:conftest:SomeYield.__enter__
INFO:conftest:SomeYield.__exit__
INFO:conftest:SomeYield.__enter__
INFO:conftest:SomeYield.__exit__
INFO:conftest:End: 2017-12-06-01-50-32-5213
Questions:
1. If I have some setup code in SomeYield.__enter__ and cleanup code in SomeYield.__exit__, is this the right way to do it using a fixture for multiple calls in my test?
2. Why didn't I see three occurrences of __enter__ and __exit__? Is this expected?

async before in scalatest for scalajs

In the code example below, how can I wait for ajaxCall() to finish before starting test 1 when using ScalaTest to test Scala.js code? I cannot use Await in Scala.js.
class ClientGetEntityDynTest
  extends AsyncFunSuite
  with Matchers
  with BeforeAndAfter {

  implicit override def executionContext =
    scala.scalajs.concurrent.JSExecutionContext.Implicits.queue

  before {
    ajaxCall(...) // returns Future[...]
    ... // I would like to wait for ajaxCall to finish before starting test 1
  }

  test("test 1") {
    ...
    getEntityDyn(...) // returns Future[Assertion]
  }
}
This one-year-old issue seems to be related, but not really resolved.
One simple possibility would be to write my own testWithBefore method that waits for a Future to complete before calling test, but maybe it is possible to do this without such a workaround.
I suspect you need to restructure your tests to not use BeforeAndAfter. I'm not sure of the best solution, but the fall-back would be to create your own higher-order function, called something like beforeAsync(fun: => Future[Any]), and manually use that in your tests.
I suspect it wouldn't be too hard to take BeforeAndAfter.scala and create a variant BeforeAndAfterAsync that has this beforeAsync() function in it, but I haven't tried doing so.
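A minimal sketch of that idea, assuming a ScalaTest 3.x AsyncFunSuite; testWithBefore and the stand-in ajaxCall are illustrative, not a real ScalaTest API:

import scala.concurrent.Future
import org.scalatest.{Assertion, AsyncFunSuite}

class ExampleSpec extends AsyncFunSuite {

  implicit override def executionContext =
    scala.scalajs.concurrent.JSExecutionContext.Implicits.queue

  // Stand-in for the real ajax call; anything returning a Future works.
  def ajaxCall(): Future[Unit] = Future.successful(())

  // Hypothetical helper: chain the async setup in front of the test body,
  // so the body runs only after the setup Future has completed.
  def testWithBefore(name: String)(setup: => Future[Any])(body: => Future[Assertion]): Unit =
    test(name) {
      setup.flatMap(_ => body)
    }

  testWithBefore("test 1")(ajaxCall()) {
    Future.successful(succeed)
  }
}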

How do I test code that requires an Environment Variable?

I have some code that requires an environment variable to run correctly, but when I run my unit tests, it bombs out once it reaches that point unless I specifically export the variable in the terminal. I am using Scala and sbt. My code does something like this:
class Something {
  val envVar = sys.env("ENVIRONMENT_VARIABLE")
  println(envVar)
}
How can I mock this in my unit tests so that whenever sys.env("ENVIRONMENT_VARIABLE") is called, it returns a string or something like that?
If you can't wrap the existing code, you can mutate the unmodifiable map behind System.getenv() for tests:
def setEnv(key: String, value: String) = {
  val field = System.getenv().getClass.getDeclaredField("m")
  field.setAccessible(true)
  val map = field.get(System.getenv()).asInstanceOf[java.util.Map[java.lang.String, java.lang.String]]
  map.put(key, value)
}

setEnv("ENVIRONMENT_VARIABLE", "TEST_VALUE1")
If you need to test console output, you may use a separate PrintStream (you can also implement your own PrintStream):

import java.nio.charset.StandardCharsets

val baos = new java.io.ByteArrayOutputStream
val ps = new java.io.PrintStream(baos)
Console.withOut(ps)(
  // your test code
  println(sys.env("ENVIRONMENT_VARIABLE"))
)

// Get output and verify
val output: String = baos.toString(StandardCharsets.UTF_8.toString)
println("Test Output: [%s]".format(output))
assert(output.contains("TEST_VALUE1"))
Ideally, environment access should be rewritten to retrieve the data in a safe manner. Either with a default value ...
scala> scala.util.Properties.envOrElse("SESSION", "unknown")
res70: String = Lubuntu
scala> scala.util.Properties.envOrElse("SECTION", "unknown")
res71: String = unknown
... or as an option ...
scala> scala.util.Properties.envOrNone("SESSION")
res72: Option[String] = Some(Lubuntu)
scala> scala.util.Properties.envOrNone("SECTION")
res73: Option[String] = None
... or both [see envOrSome()].
I don't know of any way to make it look like any/all random env vars are set without actually setting them before running your tests.
You shouldn't test it in a unit test. Just extract it out:

class Foo(val param: String) {
  ...
}

In your prod code you do:

new Foo(sys.env("ENVIRONMENT_VARIABLE"))
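And in a unit test you then construct the class directly with a fixed string, never touching the environment (the value below is arbitrary):

val foo = new Foo("TEST_VALUE")
assert(foo.param == "TEST_VALUE")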
I would encapsulate the configuration in a contraption which does not expose the implementation, maybe a class ConfigValue.
I would put the implementation in a class ConfigValueInEnvVar extends ConfigValue.
This allows me to test the code that relies on the ConfigValue without having to set or clear environment variables.
It also allows me to test the base implementation of storing a value in an environment variable as a separate feature.
It also allows me to store the configuration in a database, a file, or anything else, without changing my business logic.
I select the implementation in the application layer.
I put the environment variable logic in a supporting domain.
I put the business logic and the traits/interfaces in the core domain.
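A minimal sketch of that layering (every name except sys.env is illustrative):

// Core domain: business logic depends only on this trait.
trait ConfigValue {
  def value: String
}

// Supporting domain: the environment-variable-backed implementation.
class ConfigValueInEnvVar(name: String) extends ConfigValue {
  def value: String = sys.env(name)
}

// Test double: a fixed value, no environment variables involved.
class FixedConfigValue(val value: String) extends ConfigValue

// Application layer: select the implementation when wiring things up.
val config: ConfigValue = new ConfigValueInEnvVar("ENVIRONMENT_VARIABLE")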