How can I roll back DAO tests using Scala PlaySpec and Slick (PostgreSQL)?

I'm trying to flesh out my application's abstract DAO test harness to support rolling back any test modifications to the database. I know Slick supports transactions with db.run(<some DBIOAction>.transactionally), but that doesn't work as part of an abstract test class, since the DB actions need to actually be run inside the actual test method.
Currently I'm attempting to wrap the test method with BeforeAndAfter's runTest and to find some Slick method that lets me wrap the test execution in a transaction. That feels like the correct first step, but I'm struggling to figure out how to avoid interfering with regular test creation while still being able to roll back transactions (i.e. I don't want to have to manually add a DBIOAction.failure to every DB test that changes DB state).
I've tried setting autocommit=false around the method, e.g.

db.run(
  SimpleJdbcOperation(_.connection.setAutoCommit(false)) andThen
  DBIOAction.successful(super.runTest) zip
  SimpleJdbcOperation(_.connection.rollback()))
but I think the connection pool is foiling that particular approach: checking the autocommit status inside the test method returns true, and the rollback doesn't do anything.
Is there anything I can do here short of hacky (a manual DBIOAction.failure()) or wasteful (dropping and recreating the table/schema after every test) solutions?

For now I'm going with https://stackoverflow.com/a/34953817/1676006, but I still feel like there should be a better way.

Related

Postgres AUTONOMOUS_TRANSACTION equivalent on the same DB

I'm currently working on a Spring Batch application that should insert some logs when a certain type of error happens. The problem is that if the batch job fails, it automatically rolls back everything, and that's perfect, but it also rolls back the error logs.
I need to achieve something similar to the AUTONOMOUS_TRANSACTION of Oracle while using PostgreSQL (14).
I've looked at dblink, and it seems to be the closest alternative, but I've found some problems:
I need to avoid hard-coding a connection string, because the database host/port/name change between environments. Is that possible? I need to persist the data in the same database, so technically I don't need to connect to any other database at all, just use the calling connection.
Is it possible to create a function/procedure that takes care of all of this, so that I only have to call it from the Java side? Maybe that way I could somehow pass the connection data as a parameter, in case avoiding it entirely is not possible.
In a best case scenario I would be able to do something like:
dblink_exec(text sql);
That, without arguments, would target the same database where it is being executed.
The problem is that this needs to be done without specifying any connection data. It will live inside a function on the executing database, in the same schema. That function will be promoted from one environment to the next, and the code must stay identical, so any hard-coded name/user/password must be avoided, since those change per environment. And since everything happens in the same database and schema, they can technically be inferred.
Thanks in advance!
At the moment I haven't tried anything; I'm trying to gather some information first.

Await statement execution completion in Slick

In my tests, I've got some database actions that aren't exposed as Futures at the test level. Sometimes, my tests run fast enough that close() in my cleanup happens before those database actions complete, and then I get ugly errors. Is there a way to detect how many statements are in-flight or otherwise hold off close()?
When you execute a query you get a Future[A], where A is the result type of the query.
You can compose all your queries using Future.sequence() to get a single future, say composedFuture, which completes when all of your queries have returned results.
Now you can use composedFuture.map(_ => close()) to make sure that all queries have finished executing before you close the resource.
The best option is to expose the actions as futures at the test level and then compose them.
Otherwise you can put a Thread.sleep(someSensibleTime) and hope your futures complete within someSensibleTime, but this will make your tests slow and error-prone.
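The compose-then-close idea above can be sketched language-neutrally. Here is a minimal Java analogue using CompletableFuture.allOf in place of Scala's Future.sequence (the class and method names are illustrative, not part of any real API):

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class CloseAfterAll {
    // Combine every in-flight action into one future, then run the
    // cleanup only after all of them have completed.
    static CompletableFuture<Void> closeWhenDone(List<CompletableFuture<?>> inFlight,
                                                 Runnable close) {
        return CompletableFuture
                .allOf(inFlight.toArray(new CompletableFuture<?>[0]))
                .thenRun(close);
    }
}
```

The key design point is the same in both languages: the cleanup is chained onto the combined future rather than called directly, so it cannot run while any action is still in flight.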
I think it may be database-dependent rather than Slick-dependent.
For example, MySQL lets you see currently running queries with SHOW PROCESSLIST and act accordingly.
If that's not an option, I suppose you could poll the DB to observe a selected side effect, and call close() afterwards?

Dynamic test cases

We are using NUnit to run our integration tests. One of the tests should always do the same thing but take different input parameters. Unfortunately, we cannot use the [TestCase] attribute, because our test cases are stored in external storage. We have dynamic test cases which can be added, removed, or disabled (not removed) by our QA engineers. The QA people do not have the ability to add [TestCase] attributes to our C# code; all they can do is add cases to the storage.
My goal is to read the test cases from the storage into memory, run the test with all enabled cases, and report any case that fails. I cannot use a "foreach" statement, because if test case #1 fails, the remaining test cases will not run at all. We already have a build server (CruiseControl.NET) where the generated NUnit reports are shown, so I would like to keep using NUnit.
Could you point to a way how can I achieve my goal?
Thank you.
You can use [TestCaseSource("PropertyName")], which specifies a property (or method, etc.) to load test cases from.
For example, I have a test case in Noda Time which uses all the BCL time zones - and that could change over time, of course (and is different on Mono), without me changing the code at all.
Just make your property/member load the test data into a collection, and you're away.
(I happen to have always used properties, but it sounds like it should work fine with methods too.)

Play Model save function isn't actually writing to the database

I have a Play model called "JobStatus", and it's just got one property: an enum with a JobState (Running/NotRunning).
The class extends Model and is implemented as a singleton. You call its getInstance() method to get the only record in the underlying table.
I have a job that runs every month and in the job I will toggle the state of the JobStatus object back and forth at various times and call .save().
I've noticed it isn't actually saving.
When the job starts, its first lines of code are:
JobStatus thisJobStatus = JobStatus.getInstance();
// ... exit if already running
thisJobStatus.JobState = JobState.Running;
thisJobStatus.save();
then when the job is done it will change the status back to NotRunning and save again.
The issue is that when I look in the MySql database the actual record value is never changed.
This causes a catastrophic failure because when other nodes try to run the job they check the state and since they're seeing it as NotRunning, they all try to run the job also.
So my clever scheme for managing job state is failing because the actual value isn't getting committed to the DB.
How do I force Play to write to the DB right away when I call .save() on a model?
Thanks
Josh
Try adding this to your JobStatus and call it after save():
public static void commit() {
    JobStatus.em().getTransaction().commit();
    JobStatus.em().getTransaction().begin();
    JobStatus.em().flush();
    JobStatus.em().clear();
}
I suppose you want to mark your job as "running" pretty much as the first thing when the job starts? In that case, you shouldn't have any other ongoing database statements yet...
To commit your changes in the database immediately (instead of after the job has ended), add the following commands after the thisJobStatus.save(); method call:
JPA.em().flush();
JPA.em().getTransaction().commit();
Additionally, since you're using MySQL, you might want to lock the row immediately upon retrieval using the SELECT ... FOR UPDATE clause. (See the MySQL Reference Manual for more information.) Of course, you wouldn't want that in your getInstance() method, otherwise every fetch operation would lock the record.

Salesforce deployment error because of test class failure

We are encountering the deployment error due to some test classes where batch apex class is called. The error occurring is:
"System.unexpectedException:No more than one executeBatch can be called within a test method."
In our test class there are insert and update statements which in turn call the batch Apex from a trigger. We have also tried to limit the batch query using the Test.isRunningTest() method, but we are still facing the same error.
The code works fine in sandbox and the error is coming only at the time of deployment to production.
Also, the test classes causing the error were working fine previously in the production.
Please provide some pointers/solution for the above mentioned error.
Thank you.
I would suggest the best approach is to ensure the trigger doesn't execute the batch when Test.isRunningTest() is true, and then test the batch class with its own test method. I suspect your trigger is fired twice, so batch instances are created and run more than once.
Using a dedicated test method you can execute the batch with a limit on the query, and you should use the optional batch-size parameter to control the number of calls to execute(); i.e., if your limit is 50 but you do this:
Database.executeBatch(myBatchInstance, 25);
It'll still need to call the execute() method twice to cover all the records, and this is where you hit problems like the one you mentioned.
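The arithmetic behind that last point is just a ceiling division: the number of execute() invocations is the record count divided by the batch size, rounded up. A tiny illustration (in Java rather than Apex, purely to show the arithmetic):

```java
public class BatchMath {
    // Number of execute() invocations a batch run needs: one per chunk,
    // i.e. the record count divided by the batch size, rounded up.
    static int executeCalls(int records, int batchSize) {
        return (records + batchSize - 1) / batchSize;
    }
}
```

With 50 records and a batch size of 25 this gives 2 execute() calls; raising the batch size to 50 or more brings it down to a single call, which avoids the "no more than one executeBatch" restriction inside a test method.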