I am working with some classes that (for some reason) can only be used once within a single VM. My test cases work if I run them individually, with fork := true enabled in my sbt settings.
If I run more than one of these tests, they fail with an exception that has to do with a thread executor rejecting a task (most likely because it has been shut down). It would be very time-consuming to find out what causes the problem, and even if I found it, I might not be able to resolve it (I do not have access to the source code).
I am currently using the specs2 test framework, but any test framework using sbt would be acceptable.
Is there any test framework for sbt that is capable of running each test in a JVM fork?
Thoughts or ideas on possible other solutions are of course welcome.
It turns out this is fairly easy to achieve. The documentation is sufficient and can be found at Testing - Forking tests.
import sbt.Tests.{ Group, SubProcess } // import needed for Group and SubProcess

// Define a method to group tests, in my case a single test per group
def singleTests(tests: Seq[TestDefinition]) =
  tests map { test =>
    new Group(
      name = test.name,
      tests = Seq(test),
      runPolicy = SubProcess(javaOptions = Seq.empty[String]))
  }
// Add the following to the `Project` settings
testGrouping in Test <<= definedTests in Test map singleTests
Using non-deprecated syntax:
testGrouping in Test := (definedTests in Test).value map { test =>
  Tests.Group(name = test.name, tests = Seq(test), runPolicy = Tests.SubProcess(
    ForkOptions(
      javaHome.value,
      outputStrategy.value,
      Nil,
      Some(baseDirectory.value),
      javaOptions.value,
      connectInput.value,
      envVars.value
    )))
}
Proper syntax as of 2023 (sbt 1.x):
Test / testGrouping := (Test / definedTests).value map { test =>
  Tests.Group(name = test.name, tests = Seq(test), runPolicy = Tests.SubProcess(
    ForkOptions(
      javaHome = javaHome.value,
      outputStrategy = outputStrategy.value,
      bootJars = Vector.empty,
      workingDirectory = Some(baseDirectory.value),
      runJVMOptions = javaOptions.value.toVector,
      connectInput = connectInput.value,
      envVars = envVars.value
    )))
}
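If forking one JVM per test launches too many processes at once, sbt's task tags can cap how many forked test groups run concurrently. A minimal sketch (the limit of 4 is an arbitrary choice, not from the original answer):
Global / concurrentRestrictions += Tags.limit(Tags.ForkedTestGroup, 4)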
I have the test_dss.py file below, which is used with pytest:
import dataikuapi
import pytest

def setup_list():
    client = dataikuapi.DSSClient("{DSS_URL}", "{API_KEY}")
    client._session.verify = False
    project = client.get_project("{DSS_PROJECT}")
    # Check that there is at least one scenario TEST_XXXXX & that all test scenarios pass
    scenarios = project.list_scenarios()
    scenarios_filter = [obj for obj in scenarios if obj["name"].startswith("TEST")]
    return scenarios_filter
def test_check_scenario_exist():
    assert len(setup_list()) > 0, "You need at least one test scenario (name starts with 'TEST_')"

@pytest.mark.parametrize("scenario", setup_list())
def test_scenario_run(scenario, params):
    client = dataikuapi.DSSClient(params['host'], params['api'])
    client._session.verify = False
    project = client.get_project(params['project'])
    scenario_id = scenario["id"]
    print("Executing scenario ", scenario["name"])
    scenario_result = project.get_scenario(scenario_id).run_and_wait()
    assert scenario_result.get_details()["scenarioRun"]["result"]["outcome"] == "SUCCESS", \
        "test " + scenario["name"] + " failed"
My issue is with the setup_list function, which is only able to use hard-coded values for {DSS_URL}, {API_KEY}, {DSS_PROJECT}. I'm not able to use params or another mechanism like in test_scenario_run.
Any idea how I can also pass the params to this function?
The parameters in the mark.parametrize marker are read at load time, when the information about the config parameters is not yet available. Therefore you have to parametrize the test at runtime, where you have access to the configuration.
This can be done in pytest_generate_tests (which can live in your test module):
@pytest.hookimpl
def pytest_generate_tests(metafunc):
    if "scenario" in metafunc.fixturenames:
        host = metafunc.config.getoption('--host')
        api = metafunc.config.getoption('--api')
        project = metafunc.config.getoption('--project')
        metafunc.parametrize("scenario", setup_list(host, api, project))
This implies that your setup_list function takes these parameters:
def setup_list(host, api, project):
    client = dataikuapi.DSSClient(host, api)
    client._session.verify = False
    project = client.get_project(project)
    ...
And your test just looks like this (without the parametrize marker, as the parametrization is now done in pytest_generate_tests):
def test_scenario_run(scenario, params):
    scenario_id = scenario["id"]
    ...
The parametrization is now done at run-time, so it behaves the same as if you had placed a parametrize marker in the test.
And the other test, the one that exercises setup_list, now also has to use the params fixture to get the needed arguments:
def test_check_scenario_exist(params):
    assert len(setup_list(params["host"], params["api"], params["project"])) > 0, \
        "You need at least ..."
Is there any way to view the contents of an SCollection when running a unit test (PipelineSpec)?
When running something in production on many machines there would be no way to see the entire collection on one machine, but I wonder whether there is a way to view the contents of an SCollection, for example when running a unit test in debug mode in IntelliJ.
If you want to print debug statements to the console, you can use the debug method, which is part of SCollection. Sample code is shown below:
val stdOutMock = new MockedPrintStream
Console.withOut(stdOutMock) {
  runWithContext { sc =>
    val r = sc.parallelize(1 to 3).debug(prefix = "===")
    r should containInAnyOrder(Seq(1, 2, 3))
  }
}
stdOutMock.message.filterNot(_ == "\n") should contain theSameElementsAs
  Seq("===1", "===2", "===3")
I have a Scala.js project; adding a particular library dependency is affecting the way the project's test cases run. Without the library dependency everything is fine; the moment I add it, the tests don't execute. I want to check all sbt settings to see whether they are affected. Is there any way I can print all settings and check?
BuildStructure.data seems to give access to all the settings by scope. We could access it by defining a custom command printAllTestSettings like so:
def printAllTestSettings = Command.command("printAllTestSettings") { state =>
  val structure = Project.extract(state).structure
  val testScope =
    Scope(
      Select(ProjectRef(new File("/home/mario/sandbox/hello-world-scala/"), "root")),
      Select(ConfigKey("test")),
      Zero,
      Zero
    )
  structure
    .data
    .keys(testScope)
    .foreach(key => println(s"${key.label} = ${structure.data.get(testScope, key).get}"))
  state
}
commands ++= Seq(printAllTestSettings)
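Once registered, the command can be invoked from the sbt shell:
> printAllTestSettings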
Here is an output snippet:
...
managedSourceDirectories = List(/home/mario/sandbox/hello-world-scala/target/scala-2.12/src_managed/test)
managedResourceDirectories = List(/home/mario/sandbox/hello-world-scala/target/scala-2.12/resource_managed/test)
testLoader = Task((taskDefinitionKey: ScopedKey(Scope(Select(ProjectRef(file:/home/mario/sandbox/hello-world-scala/,root)), Select(ConfigKey(test)), Zero, Zero),testLoader)))
packageBin = Task((taskDefinitionKey: ScopedKey(Scope(Select(ProjectRef(file:/home/mario/sandbox/hello-world-scala/,root)), Select(ConfigKey(test)), Zero, Zero),packageBin)))
...
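For a single key, sbt's built-in inspect command is a lighter-weight alternative that needs no custom code, e.g. (sbt 1.x slash syntax):
> inspect Test / testLoader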
Overview
After looking around the internet for a while, I have not found a good way to omit certain folders from being watched by sbt 1.0.x in a Play Framework application.
Solutions posted for older versions of sbt:
How to exclude a folder from compilation
How to not watch a file for changes in Play Framework
There are a few more, but all more or less the same.
And the release notes for 1.0.2 show that the += and ++= behavior was maintained, but everything else was dropped.
https://www.scala-sbt.org/1.x/docs/sbt-1.0-Release-Notes.html
The API docs confirm it: https://www.scala-sbt.org/1.0.4/api/sbt/Watched$.html
Would love to see if anyone using sbt 1.0.x has found a solution or workaround to this issue. Thanks!
Taking the same approach sbt itself uses to exclude managedSources from watchSources, I was able to omit a custom folder from being watched like so:
watchSources := {
  val directoryToExclude = "/Users/mgalic/sandbox/scala/scala-seed-project/src/main/scala/dirToExclude"
  val filesToExclude = (new File(directoryToExclude) ** "*.scala").get.toSet

  val customSourcesFilter = new FileFilter {
    override def accept(pathname: File): Boolean = filesToExclude.contains(pathname)
    override def toString = s"CustomSourcesFilter($filesToExclude)"
  }

  watchSources.value.map { source =>
    new Source(
      source.base,
      source.includeFilter,
      source.excludeFilter || customSourcesFilter,
      source.recursive
    )
  }
},
Here we use PathFinder to get all the *.scala sources from directoryToExclude:
val filesToExclude = (new File(directoryToExclude) ** "*.scala").get.toSet
Then we create customSourcesFilter using filesToExclude, which we then add to every current WatchSource:
watchSources.value.map { source =>
  new Source(
    ...
    source.excludeFilter || customSourcesFilter,
    ...
  )
}
Note: the above solution is just something that worked for me; I do not know the recommended approach to this problem.
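A possible variant, sketched here under the same sbt 1.0.x API as above (dirToExclude and its path are placeholders), is to filter by path prefix instead of enumerating files up front, so files added to the directory later are ignored as well:
watchSources := {
  val directoryToExclude = new File("/path/to/src/main/scala/dirToExclude")
  // Exclude anything whose path lies under the directory, now or in the future
  val prefixFilter = new FileFilter {
    override def accept(pathname: File): Boolean =
      pathname.getAbsolutePath.startsWith(directoryToExclude.getAbsolutePath)
    override def toString = s"PrefixFilter($directoryToExclude)"
  }
  watchSources.value.map { source =>
    new Source(
      source.base,
      source.includeFilter,
      source.excludeFilter || prefixFilter,
      source.recursive
    )
  }
}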
I'd like to add a configuration within a plugin at runtime, and probably also register it in projectConfigurations. Is there a way to manipulate the state to do that? I assume this needs to be done in a command, and that the solution is buried somewhere deep inside the AttributeMap, but I'm not sure how to approach that beast.
def addConfiguration = Command.single("add-configuration") { (state, configuration) =>
  val extracted = Project extract state
  val c = configurations.apply(configuration)
  // Add c to the state ...
  ???
}
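(For reference, one way the ??? might be filled in is by appending a setting to the extracted build and returning the resulting State. A sketch only, assuming the sbt 0.13-era Extracted.append and that registering the configuration in projectConfigurations is all that is needed:
def addConfiguration = Command.single("add-configuration") { (state, name) =>
  val extracted = Project.extract(state)
  val c = config(name) // assumes a plain Configuration is enough here
  // append applies the extra setting and returns a new State
  extracted.append(Seq(projectConfigurations += c), state)
}
)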
Background
I'm working on an sbt-slick-codegen plugin, which has multi-database support as an essential feature. Now, to configure databases in sbt, I took a very simple approach: everything is stuffed into a Map.
databases in SlickCodegen ++= Map(
  "default" -> Database(
    Authentication(
      url = "jdbc:postgresql://localhost:5432/db",
      username = Some("db_user_123"),
      password = Some("p#assw0rd")
    ),
    Driver(
      jdbc = "org.postgresql.Driver",
      slick = slick.driver.PostgresDriver,
      user = Some("com.example.MySlickProfile")
    ),
    container = "Tables",
    generator = new SourceCodeGenerator(_),
    identifier = Some("com.example"),
    excludes = Seq("play_evolutions"),
    cache = true
  ),
  "user_db" -> Database(...)
)
Now, this works, but it defeats the purpose of sbt: it is hard to set up and difficult to maintain and update. I'd prefer configuration to work like this instead:
container in Database := "MyDefaultName",
url in Database("backup") := "jdbc:postgresql://localhost:5432/database",
cache in Database("default") := false
This approach also works on the current development branch, but only for Database(x) configurations that are known at compile time, which leads to the next problem:
On top of that, I'm planning to add a module with Play Framework support. The idea is to have a setting SettingKey[Seq[Config]]("configurations") that accepts Typesafe Config instances, reads the Slick configurations from them, and generates the appropriate Database(x) configurations. But for this step I am out of ideas.