ScalaTest: sharing a resource between tests

For my integration tests I need to create and destroy resources around the tests (for example, starting and stopping Docker images to test against).
Creating a resource can take time, so I'd like to do it once for all the tests that need it and destroy it when it's no longer needed.
For now I've done that by creating the resource lazily and adding a shutdown hook:
import scala.concurrent.Await
import scala.concurrent.duration._

object MyResource {
  // Started lazily on first use; a JVM shutdown hook stops the image at exit.
  lazy val singleton = {
    val docker = Await.result(startImage(), 1.minute)
    sys.addShutdownHook(Await.result(docker.stop(), 1.minute))
    docker
  }
}
I'm looking for a better way to handle this, such as fixtures, but I'm not sure I can start and stop the Docker image only once that way.
My tests are spread across multiple classes, so I can't use BeforeAndAfterAll (and I'd like to avoid using Await.result).
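For reference, the fixture-style setup alluded to above could look roughly like a ScalaTest master suite that nests the individual test classes and wraps them in BeforeAndAfterAll. This is only a sketch with placeholder class names, and whether it fits depends on how the suites are discovered and run, which is exactly the doubt raised here:

import org.scalatest.{BeforeAndAfterAll, Suites}

// Sketch only: DockerSpecA and DockerSpecB stand in for the real test classes.
class IntegrationSuites extends Suites(new DockerSpecA, new DockerSpecB) with BeforeAndAfterAll {
  override def beforeAll(): Unit = {
    // start the Docker image once, before any nested suite runs
  }
  override def afterAll(): Unit = {
    // stop it after the last nested suite has finished
  }
}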

Related

Reset kafka testcontainers without clearing and recreating the testcontainer after every junit test

I am using Kafka testcontainers with JUnit 5. Can someone let me know how I can delete data from the Kafka testcontainer after each test, so that I don't have to destroy and recreate it every time?
Test Container Version - 1.6.2
Docker Kafka Image Name - confluentinc/cp-kafka:5.2.1
Make the container variable static
Containers declared as static fields will be shared between test methods. They will be started only once before any test method is executed and stopped after the last test method has executed.
https://www.testcontainers.org/test_framework_integration/junit_5/
Make sure that you don't share state between tests, though. For example, if you want to test creating a topic, producing to it, then consuming from it, and deleting it, those should all be in one test (although you can call separate non-test methods).
That being said, each test should ideally use a unique topic name, ideally one that describes the test.
Also, as documented, you cannot use the parallel test runner.
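For illustration, a rough Scala sketch of the same shared-container idea (the holder object and shutdown hook are assumptions; KafkaContainer and DockerImageName come from the Testcontainers Kafka module):

import org.testcontainers.containers.KafkaContainer
import org.testcontainers.utility.DockerImageName

// Sketch: one Kafka container for the whole test run, mirroring the
// "static field" sharing described above; stopped via a JVM shutdown hook.
object SharedKafka {
  lazy val container: KafkaContainer = {
    val c = new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:5.2.1"))
    c.start()
    sys.addShutdownHook(c.stop())
    c
  }
}

Each test would then read SharedKafka.container.getBootstrapServers to configure its clients and use its own, uniquely named topic.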

Specify container name using testcontainers in Scala

Using dimafeng's testcontainers for Scala, I can create several containers from the same image:
import com.dimafeng.testcontainers.{ForEachTestContainer, MultipleContainers, PostgreSQLContainer}
import org.scalatest.FunSuite

class TestWithTwoDatabases extends FunSuite with ForEachTestContainer {
  protected val inputContainer: PostgreSQLContainer = PostgreSQLContainer("postgres:13.3")
  protected val outputContainer: PostgreSQLContainer = PostgreSQLContainer("postgres:13.3")

  override val container: MultipleContainers = MultipleContainers(inputContainer, outputContainer)

  // Some integration tests
}
How can I distinguish them in the Docker console? I currently have:
CONTAINER ID IMAGE NAMES
a2bee9488c02 postgres:13.3 youthful_archimedes # I'd like to have inputContainer here
2c645ed1c54c postgres:13.3 agitated_cori # I'd like to have outputContainer here
How can I specify the containers' names when creating them in Scala?
I don't think it is possible, and it is not clear why you would need it.
The idea of Testcontainers is that you use the containers from your code and distinguish them by reference, inputContainer and outputContainer in your case. It is not designed to be used as a booting framework, like Docker Compose, that you control externally, outside of your code.
Also, setting a fixed name on a container would make it impossible to run tests that claim the same container name in a CI environment with a shared Docker engine.
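To illustrate the distinguish-by-reference point, a hypothetical test body in the suite above might look like this (the test itself is made up):

// Each container is addressed through its own reference, so there is no need
// to identify it by its Docker container name.
test("copies rows from the input database to the output database") {
  val inputUrl  = inputContainer.jdbcUrl
  val outputUrl = outputContainer.jdbcUrl
  // run the code under test against inputUrl and outputUrl, then assert on the output database
}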

Is there a way to invoke an Action without sending a request?

I have a Play2 app which serves an API.
In that same app, I added code that runs an ETL based on Alpakka. Under the ETL's hood there is a Future[Done] that never completes. Currently I trigger the ETL process with a web request to a separate route.
To deploy my ETL service, I want to be able to run my Play2 app with a special command that ideally does not open a server, but just runs that single Action of the ETL controller. If that's not achievable and a server has to be opened, I'd like to trigger the ETL process but isolate my ETL box from incoming web connections. I feel that all of this is very hackish and there is probably a better way.
The way I ended up doing it is with scheduled tasks.
A task can be scheduled once, depending on a config variable:
import javax.inject.Inject
import akka.Done
import akka.actor.ActorSystem
import akka.stream.scaladsl.Sink
import scala.concurrent.{ExecutionContext, Future}
import scala.concurrent.duration._

class ProtonConnector @Inject()(
  config: TypeSafeConfig
)(
  implicit actorSystem: ActorSystem,
  ec: ExecutionContext
) {

  // Run the stream (the source and flows are defined elsewhere in the class)
  def listen: Future[Done] =
    itemsProtonSource.via(mapFlow).via(solrIndexFlow).runWith(Sink.ignore)

  // Schedule the ETL execution shortly after startup, if enabled in config
  if (config.scheduleProtonEtlTask) {
    actorSystem.scheduler.scheduleOnce(delay = 1.seconds) {
      logger.debug("Executing proton ETL task...")
      val _ = listen
    }
  }
}
Then config.scheduleProtonEtlTask is picked up from an environment variable (set to true so that the code block inside the if executes).
Then, in our CI/CD pipeline, we define a new deployment where this env variable is supplied by the task configuration. That deployment can also be isolated from the outside world - that way Play2 still spins up a server, but it's inaccessible from outside the local network.

Akka JVM: Test with multiple singletons failing because "name is not unique"

I would like to test my framework, which is based on a "master" singleton and multiple workers. The test I am trying to write should cover the standby recovery case. For that, I would need to be able to start several "masters" in the same JVM, but I get an "akka.actor.InvalidActorNameException: actor name [master] is not unique!".
Is there any way around that?

Execute external script on server with multiple play 2 framework instances

I have a Linux server with three Play 2 Framework instances on it, and I would like to regularly execute an external Scala script that has access to the whole application environment (models) and that runs only once at a time.
I would like to call this script from crontab, but I cannot find any documentation on how to do it. I know that we can schedule asynchronous tasks from the Global object, but I want the script executed only once across the three Play instances.
Essentially I would like to do the same kind of thing as Ruby on Rails rake tasks, for those who know them.
Create a regular action for this task that is accessible via HTTP; then you can use e.g. curl in the Unix crontab to call that action, and it will hit the first available instance.
Another possibility is using the Global object to schedule the task with Akka support. In this case, to make sure that only one instance schedules the task, you need to determine somehow which one it should be. If you start all three instances with a specified port (always the same per instance), you can read http.port to allow or skip the execution, as in the sketch below.
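A sketch of that port check (the "primary" port value and the system-property lookup are assumptions; Play instances are commonly started with -Dhttp.port=...):

// Only the instance started on the designated port schedules the task.
val isPrimary = sys.props.get("http.port").contains("9000")
if (isPrimary) {
  // schedule the Akka task here, e.g. from Global.onStart
}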
Finally, you can use the database to inform the other instances that the task is already being executed: all three instances try to run the Akka scheduler, but before executing the task they check whether it still has its TODO flag. If it does, the instance sets the TODO flag to false and continues the execution; otherwise it just skips the execution this time.
You can also use the filesystem for a similar approach: at the beginning of the execution, create a flag file to inform the other instances that they can skip the task this time, as sketched below.
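A minimal sketch of the flag-file variant (the path is an arbitrary example): Files.createFile is atomic, so only the first instance to reach it wins and the others skip.

import java.nio.file.{FileAlreadyExistsException, Files, Paths}

// Returns true only for the single instance that creates the flag file first.
def tryAcquireFlag(): Boolean =
  try { Files.createFile(Paths.get("/tmp/scheduled-task.flag")); true }
  catch { case _: FileAlreadyExistsException => false }

if (tryAcquireFlag()) {
  // run the task, and delete the flag file once the next run should be allowed again
}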