AWS Device Farm - Appium Python - Running only collected tests - pytest

I need some help regarding the run of the collected tests. So here is my scenario. I have the tests structured like this:
tests_folder {
    feature01_tests_folder {
        tests
    }
    feature02_tests_folder {
        tests
    }
}
I'm using Appium Python. My issue is that if I collect only one tests folder and then bundle the whole project into the zip, AWS runs all the tests anyway. So what is the point of collecting only some of the tests?
Notes:
Collection itself was fine: after running the py.test --collect-only feature01_tests_folder command, only the specified tests were collected.
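Note that py.test --collect-only only lists tests locally; nothing about that selection travels with the bundle, so Device Farm still runs everything it discovers in the zip. One way to restrict the run is a custom test environment, whose test spec controls the exact command Device Farm executes. A sketch of the relevant fragment, assuming the standard Appium Python test spec layout and the folder name from this question:
phases:
  test:
    commands:
      # Point py.test at the one feature folder instead of the whole bundle
      - py.test tests_folder/feature01_tests_folder --verbose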
Thanks

Related

How to run Specs inside "Suites" in parallel in scalatest?

I have a Gradle project with some tests. These tests are directly written in Spec files.
I want to run a subset of these specs in parallel (tests inside a spec shouldn't run in parallel). So I created a Suite like this:
class MainSuite extends Suites(
  new TestSpec1,
  new TestSpec2,
  new TestSpec3
) with BeforeAndAfterAll {
  // ...
}
The reason for this is to be able to run something right before any of these TestSpecs. How can I run these Specs inside this Suite in parallel?
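One possibility, sketched under the assumption that launching through ScalaTest's Runner is acceptable: the Runner's -P flag executes suites in parallel through a distributor, and Suites hands its nested suites to that distributor while each spec's own tests still run sequentially; beforeAll from BeforeAndAfterAll still fires once before the nested suites are dispatched. The fully qualified suite name below is a placeholder:
// e.g. from a main method or a custom sbt task
org.scalatest.tools.Runner.run(Array("-P", "-s", "com.example.MainSuite"))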

Run an entire folder of tests with Flutter's new integration tests - just like normal unit tests

I have a separate integration test file for each screen, and I want to run all the integration tests with a single command like flutter test. I looked into the docs but was not able to find a way to do this. This also causes an issue with the Firebase Test Lab APK: to create an Android test APK, I can only specify a single test file path.
// flutter build generates files in android/ for building the app
flutter build apk
./gradlew app:assembleAndroidTest
./gradlew app:assembleDebug -Ptarget=integration_test/whattodo_tests.dart
For now, I have found two workarounds.
1. Move all the tests into a single Dart file, grouped with group(). This does not scale well: for 5-10 tests it works fine, but with 50-75 tests a single file becomes a mess to navigate and understand.
2. Create a script that runs the tests one by one. This might work in our own CI pipeline, but it won't work in Firebase Test Lab.
Has anyone been able to solve this, or found a better solution?
I came across a project on GitHub with this kind of structure, which may help: make a common file and import the individual test files, folders, or modules into it.
main.dart
import 'package:integration_test/integration_test.dart';

import 'about_us_page_test.dart' as about;
import 'add_label_page_test.dart' as label;
import 'add_project_page_test.dart' as project;
import 'add_task_page_test.dart' as tasks;
import 'completed_tasks_page_test.dart' as tasks_completed;
import 'home_page_test.dart' as home;
import 'whattodo_tests.dart' as whattodo;

void main() {
  IntegrationTestWidgetsFlutterBinding.ensureInitialized();
  whattodo.main();
  home.main();
  tasks.main();
  tasks_completed.main();
  project.main();
  label.main();
  about.main();
}
To run all these tests:
flutter drive \
--driver=test_driver/integration_test_driver.dart \
--target=integration_test/main.dart
There is now a better way of doing it. Just use the test command like this.
To run all tests:
flutter test integration_test
To run a specific test:
flutter test integration_test/app_test.dart
Reference.

Taurus NUnit runner not finding tests

I have a simple NUnit test that makes a Web API call:
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using NUnit.Framework;

[TestFixture]
public class PerformanceTests
{
    // HttpClient was not shown in the original snippet; declared here so the test compiles
    private static readonly HttpClient client = new HttpClient();
    private const string web_api = "<myapiurl>";

    [Test]
    public async Task PerformanceTest()
    {
        var response = await client.GetAsync(web_api);
        Assert.AreEqual(HttpStatusCode.OK, response.StatusCode);
    }
}
The test runs fine in Visual Studio with the normal test runner and also via ReSharper; it is highlighted as an NUnit test.
I have Taurus installed and have created a simple yml file to run my test:
execution:
- executor: nunit
  iterations: 500
  scenario:
    script: C:\Users\...\tests.dll  # assembly with tests
When I run the yml file in Taurus:
bzt my-nunit-tests.yml
the tests do not run and I get the following output:
Target: C:\Users\...\tests.dll
15:36:51 ERROR: NUnitExecutor STDERR:
Unhandled Exception: System.ArgumentException: Nothing to run, no tests were loaded
at NUnitRunner.NUnitRunner.Main(String[] args)
It would seem that the custom Taurus NUnit test runner is not picking up the tests. I can run them with the standard dotnet test command (the project does have Microsoft.NET.Test.Sdk as a dependency).
As my test project targets .NET Core, I am publishing the project to ensure all dependencies are included; this puts all the files specified here in the same directory as the test assembly.
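For reference, the publish step looks something like this (the output folder name is arbitrary); the yml's script path then points at the DLL inside the publish directory:
dotnet publish -c Release -o publish
# yml then becomes: script: C:\Users\...\publish\tests.dll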
Update: I created the exact same test project in .NET Framework, and Taurus finds the tests. This suggests the custom Taurus NUnit test runner doesn't work with .NET Core projects, but I can't confirm it.
As of June 2018, Taurus' custom NUnit runner does not support .NET Core. This appears to be because NUnit does not allow .NET Core tests to be run via the .NET Framework engine.

ScalaTest and SBT: Reporting progress of test suite?

I am using ScalaTest and have a test suite that contains many tests, and each test can take about a minute to run.
class LargeSuite extends FunSuite {
  test("Name of test 1") { ... }
  ...
  test("Name of test n") { ... }
}
As is usual when running ScalaTest from the SBT console, nothing is reported to the screen until every test in the FunSuite has run. The problem is that when n is large and each test is slow, you do not know what is happening. (Running top or Windows Task Manager and watching Java's CPU usage is not really satisfactory.)
But the real problem is when the build is run by Travis CI: Travis assumes that the build has gone wrong and kills it if 10 minutes pass and nothing is printed to the screen. In my case, Travis is killing my build, even though the tests are still running, and each individual test in a FunSuite does not require 10 minutes (although the entire suite does require more than 10 minutes because of the number of tests).
My first question is therefore this: how can I get ScalaTest to report on progress to the console after each test in FunSuite, in an idiomatic way?
My partial solution is to use the following trait as a mixin, which solves the problem with Travis:
trait ProgressConsolePrinter extends BeforeAndAfterEach with BeforeAndAfterAll {
  self: Suite =>

  override def beforeAll(): Unit = {
    Console.print(s"$suiteName running")
    Console.flush()
  }

  override def afterEach(): Unit = {
    Console.print(".")
    Console.flush()
  }

  override def afterAll(): Unit = {
    Console.println()
    Console.flush()
  }
}
But I understand that using Console to print to the SBT console is not entirely reliable (searching online seems to confirm this from others' experiences). Also, (i) anything printed via Console does not go through SBT's logger and is therefore not prefixed with [info], and (ii) when I tried the above, the Console messages got jumbled up with other SBT output. Neither would happen if I could use the proper logger.
My second question is therefore this: How can I print to an SBT logger from within a test in ScalaTest?
To get ScalaTest to report to the console after each test within SBT, add
logBuffered in Test := false
to your build configuration. (See the SBT docs.)
For general logging purposes within SBT, you can use an instance of sbt.util.Logger obtained from streams.value.log within an SBT task as described here. Wilson's answer below can be used if logging is required from the test code.
(I'm answering my own question two years later, but this is currently the first hit on Google for "ScalaTest sbt progress".)
This may be helpful for your second question.
To log like that, use scala-logging: add it as a test dependency, mix in one of its logging traits (LazyLogging is a good choice), and call logger.info("Your message").
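A minimal sketch of that suggestion (the dependency version is illustrative, and an SLF4J backend such as logback is assumed on the test classpath):
// build.sbt: libraryDependencies +=
//   "com.typesafe.scala-logging" %% "scala-logging" % "3.9.5" % Test
import com.typesafe.scalalogging.LazyLogging
import org.scalatest.FunSuite

class LargeSuite extends FunSuite with LazyLogging {
  test("Name of test 1") {
    logger.info("test 1 running") // goes through SLF4J instead of raw Console
  }
}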

Using simple-build-tool for benchmarks

I'm trying to get sbt to compile and build some benchmarks. I've told it to add the benchmarks to the test path so they're recompiled along with the tests, but I can't figure out how to write an action that lets me actually run them. Is it possible to invoke classes from the Project definition class, or even just from the command line?
Yes, it is.
If you'd like to run them in the same VM that SBT runs in, write a custom task similar to the following in your project definition file:
lazy val benchmark = task {
  // code to run benchmarks goes here
  None // return Some("an error message") instead to fail the task
}
Typing benchmark in the SBT console will run the task above. To actually run the benchmarks, or, for that matter, any other class you've compiled, you can reuse SBT's existing infrastructure, namely the runTask method, which creates a task that runs something for you. It has the following signature:
def runTask(mainClass: => Option[String], classpath: PathFinder, options: String*): Task
Simply add the following to your file:
lazy val benchmark = task { args =>
  // args must be expanded into the String* varargs parameter
  runTask(Some("whatever.your.mainclass.is"), testClasspath, args: _*)
}
When running benchmarks, it is often recommended to run them in a separate JVM invocation to get more reliable results. SBT lets you run external processes by invoking the ! method on a string command. Say you want to run the command java -jar path-to-artifact.jar. Then:
"java -jar path-to-artifact.jar" !
runs the command in SBT. You want to put the snippet above in a separate task, same as earlier.
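Putting that together, a sketch of such a task (the jar path is the placeholder from above; ! returns the process exit code in SBT's process library):
lazy val benchmarkForked = task {
  val exitCode = ("java -jar path-to-artifact.jar").!
  if (exitCode == 0) None else Some("benchmark exited with code " + exitCode)
}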
And don't forget to reload when you change your project definition.
Couldn't you simply write the benchmarks as tests, so they run when you call 'test' in SBT?
You could also run a specific test with 'test-only', or run a main class with 'run' or 'exec' (see http://code.google.com/p/simple-build-tool/wiki/RunningSbt for details).