In my tests, I've got some database actions that aren't exposed as Futures at the test level. Sometimes, my tests run fast enough that close() in my cleanup happens before those database actions complete, and then I get ugly errors. Is there a way to detect how many statements are in-flight or otherwise hold off close()?
When you execute a query you get a Future[A], where A is the result type of the query.
You can compose all your queries using Future.sequence() to get a single future, composedFuture, which completes only when all of your queries have returned their results.
Now you can use composedFuture.map(_ => close()) to make sure that all queries have finished executing before you close the resource.
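As a minimal sketch of that composition (runQuery and close() here are hypothetical stand-ins for your real database actions and cleanup):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// Hypothetical stand-ins for db.run(someAction) and your resource cleanup:
def runQuery(i: Int): Future[Int] = Future(i * 2)
def close(): Unit = println("closing")

val queries: Seq[Future[Int]] = (1 to 3).map(runQuery)

// A single future that completes only after every query has finished:
val composedFuture: Future[Seq[Int]] = Future.sequence(queries)

// close() runs strictly after all queries are done:
val done: Future[Unit] = composedFuture.map(_ => close())

// In test cleanup you would typically block on it:
Await.result(done, 10.seconds)
```

One caveat: if any query fails, composedFuture fails and the map callback is skipped; for cleanup you may prefer composedFuture.onComplete(_ => close()) so close() runs either way.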
The best option is to expose the actions as Futures and then compose them.
Otherwise you can insert Thread.sleep(someSensibleTime) and hope your futures complete within someSensibleTime, but this will make your tests slow and error-prone.
I think it may be database-dependent rather than Slick-dependent.
For example, MySQL lets you see currently running queries with SHOW PROCESSLIST, and you can act accordingly.
If that's not an option, I suppose you could poll the DB to observe a chosen side effect, and close() afterwards?
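The polling idea could look like this on MySQL (untested sketch; the information_schema.processlist view requires appropriate privileges and a reasonably recent MySQL version):

```sql
-- List everything the server is currently executing:
SHOW PROCESSLIST;

-- Or poll for active (non-idle) statements until none remain:
SELECT id, time, state, info
FROM information_schema.processlist
WHERE command <> 'Sleep';
```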
As far as I know, we can't use START TRANSACTION within functions, and thus we can't use COMMIT and ROLLBACK in functions either.
But how, then, do we ROLLBACK on some if-condition?
How, then, can we perform a sequence of statements at a specific isolation level? I mean the situation where an application wants to call an SQL (PL/pgSQL) function and that function really needs to run in a transaction with a certain isolation level. What should we do in such a case?
In which cases is it actually practical to run ROLLBACK, then? Only when we write a script by hand, check something, and then ROLLBACK manually if we don't like the result? That is also the only case where I see the practicality of savepoints. Either way, it feels like a serious constraint.
If you want to roll back the complete transaction, RAISE an exception.
If you only want to roll back part of your work, start a new block with a BEGIN at the point to which you want to roll back and add an EXCEPTION clause to the block.
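A sketch of that pattern (the table t and the condition some_condition are placeholders):

```sql
CREATE FUNCTION demo() RETURNS void
LANGUAGE plpgsql AS
$$
BEGIN
   INSERT INTO t VALUES (1);   -- kept if the function ends normally

   BEGIN   -- inner block: work we may want to undo
      INSERT INTO t VALUES (2);
      IF some_condition THEN
         RAISE EXCEPTION 'undo the inner block';
      END IF;
   EXCEPTION
      WHEN OTHERS THEN
         -- only the inner INSERT is rolled back; the first INSERT survives
         NULL;
   END;
END;
$$;
```

Entering a block with an EXCEPTION clause starts a subtransaction, which is why only the inner block's work is undone.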
Since the transaction is started outside the function, the isolation level already has to be set properly when you are in the function.
You can query
SELECT current_setting('transaction_isolation', TRUE);
and throw an error if the setting is not correct.
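Inside the function that might look like this (requiring SERIALIZABLE here is just an example):

```sql
IF current_setting('transaction_isolation', TRUE) <> 'serializable' THEN
   RAISE EXCEPTION 'function must run in a SERIALIZABLE transaction';
END IF;
```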
Your last question is too general to answer precisely.
You roll back a transaction if you have reached a point in your processing where you want to undo everything you have done so far in the transaction.
Often, that happens implicitly rather than explicitly by throwing an error.
I'm trying to flesh out my application's abstract DAO test harness to support rolling back any test modifications to the database. I know Slick supports transactions with db.run(<some DBIOAction>.transactionally), but that doesn't work as part of an abstract test class, as the DB actions need to actually be run in the actual test method.
Currently, I'm attempting to wrap the test method with BeforeAndAfter's runTest and to find some Slick method that lets me wrap the test execution in a transaction. That feels like the correct first step, but I'm struggling to figure out how not to interfere with regular test creation while still being able to roll back transactions (i.e. I don't want to have to manually add a DBIOAction.failure to every DB test that changes DB state).
I've tried setting autocommit=false around the method, e.g.
db.run(
  SimpleJdbcOperation(_.connection.setAutoCommit(false)) andThen
  DBIOAction.successful(super.runTest) zip
  SimpleJdbcOperation(_.connection.rollback()))
but I think the connection pool is foiling that particular method, as getting the autocommit status inside the test method returns true and the rollback doesn't do anything.
Is there anything I can do here short of hacky (manual DBIOAction.failure()) or wasteful (drop and recreate table/schema after every test) solutions?
For now I'm going with https://stackoverflow.com/a/34953817/1676006, but I still feel like there should be a better way.
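One possible shape for a less manual wrapper, sketched against Slick 3 (untested; Rollback, RollbackSignal, and runAndRollback are names made up here, and wiring this into ScalaTest's runTest is left open):

```scala
import scala.concurrent.{ExecutionContext, Future}
import slick.jdbc.H2Profile.api._ // or your own profile's api

object Rollback {
  // Private marker so we can tell "our" forced rollback apart from real failures.
  private case object RollbackSignal extends Exception

  // Run `action` inside a transaction that is always rolled back,
  // while still surfacing the action's result to the caller.
  def runAndRollback[A](db: Database)(action: DBIO[A])
                       (implicit ec: ExecutionContext): Future[A] = {
    var result: Option[A] = None
    val wrapped = action.flatMap { a =>
      result = Some(a)
      DBIO.failed(RollbackSignal) // force the transaction to roll back
    }.transactionally
    db.run(wrapped).failed.map {
      case RollbackSignal => result.get // rollback was ours; return the value
      case other          => throw other // genuine failure from the action
    }
  }
}
```

The idea is the same DBIOAction.failure trick, but hidden in one place so individual tests don't have to know about it.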
I am using Spring Batch 3.0.4 (stable). When submitting a job I add some specific parameters to its execution, say, a tag. Job information is persisted in the DB.
Later on I will need to retrieve all the executions marked with a particular tag.
Currently I see 2 options:
1. Get all job instances with org.springframework.batch.core.explore.JobExplorer#findJobInstancesByJobName, get each instance's executions with org.springframework.batch.core.explore.JobExplorer#getJobExecutions, and filter the resulting collection of executions by checking their JobParameters.
2. Write my own JdbcTemplate-based DAO implementation to run the select query.
While the former option seems pretty inefficient, the latter requires writing extra code against the Spring-specific database table structure.
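For reference, option 1 could be sketched like this (untested; it assumes the jobs were launched with a "tag" job parameter):

```java
import java.util.ArrayList;
import java.util.List;

import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.JobInstance;
import org.springframework.batch.core.explore.JobExplorer;

public class TaggedExecutionFinder {

    // Collect every execution of `jobName` whose "tag" parameter equals `tag`.
    public static List<JobExecution> findByTag(JobExplorer explorer,
                                               String jobName, String tag) {
        List<JobExecution> matches = new ArrayList<>();
        for (JobInstance instance :
                explorer.findJobInstancesByJobName(jobName, 0, Integer.MAX_VALUE)) {
            for (JobExecution execution : explorer.getJobExecutions(instance)) {
                if (tag.equals(execution.getJobParameters().getString("tag"))) {
                    matches.add(execution);
                }
            }
        }
        return matches;
    }
}
```

This loads and filters everything in memory, which is exactly the inefficiency the question describes once the number of instances grows.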
Is there any option I am missing here?
In our webapp, we have lots of queries running. Most of them read data, but occasionally high-priority update queries come in. We'd like to cancel the read queries, but when using KILL, I'd like the cancelled read query to return a particular result set (or execution result) upon receiving the cancel.
My intention is to mimic the behavior of signal in C programs for which a signal handler is invoked upon receiving a kill signal.
Is there any way to define an asynchronous KILL signal handler for SPs?
This is not a fully tested answer. But it is a bit more than just a comment.
One option is to use dirty reads (WITH (NOLOCK)).
This part is tested; I do this all the time.
To build a large, scalable app you may need to resort to this and manage it.
A dirty read will not block an update.
And it gets you a result - a dirty read.
A lot of people think a dirty read may return corrupt data, but it won't: if you are updating smith to johnson, the select is not going to get smison.
The select is going to get smith, and that value is immediately stale.
But how is that worse than taking a read lock?
With a read lock, the select gets smith and blocks the update; only once the read locks are cleared does the update proceed.
I would contend that blocking an update also leaves you with stale data.
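For illustration (dbo.Person is a made-up table):

```sql
-- Dirty read: takes no shared locks, so it never blocks writers.
SELECT name
FROM dbo.Person WITH (NOLOCK)
WHERE id = 42;

-- Equivalent per-session form:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT name FROM dbo.Person WHERE id = 42;
```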
If you are using a reader, I think you could pass the same CancellationToken to each select and then just cancel that one token.
But it may not process the CancellationToken until it reads a row, so it may not cancel a long-running query that has not yet returned any rows.
DbDataReader.ReadAsync Method (CancellationToken)
Or if you are not using reader look at
SqlCommand.Cancel
As for getting a cancel to return alternate data: I doubt SQL Server is going to do that.
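The shared-token idea sketched in code (untested; the connection string and dbo.Person table are hypothetical):

```csharp
using System;
using System.Data.SqlClient;
using System.Threading;
using System.Threading.Tasks;

class ReadCanceller
{
    // Share one CancellationTokenSource's token across all read queries,
    // so a single Cancel() call aborts them together.
    static async Task ReadAsync(string connString, CancellationToken token)
    {
        using (var conn = new SqlConnection(connString))
        {
            await conn.OpenAsync(token);
            using (var cmd = new SqlCommand("SELECT name FROM dbo.Person", conn))
            using (var reader = await cmd.ExecuteReaderAsync(token))
            {
                // ReadAsync observes the token between rows; a statement that
                // has produced no rows yet may not stop here.
                while (await reader.ReadAsync(token))
                {
                    Console.WriteLine(reader.GetString(0));
                }
            }
        }
    }
}

// Usage: create one CancellationTokenSource, start several
// ReadAsync(connString, cts.Token) tasks, then call cts.Cancel().
```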
I am using the Play! framework and have a difficulty with the following scenario.
I have a server process that runs a 'read-only' transaction, to prevent any possible database locks during execution, as it is a complicated procedure. There are one or two records to be stored, but I do that in a Job, as I found that doing them in the main thread could result in a deadlock under higher load.
On one occasion, however, I need to create an object and subsequently use it.
But when I create the object using a Job, wait for the resulting id (returned via a Promise), and then search the database for it, it cannot be found.
Is there an easy way to make JPA search 'afresh' in the DB at this point? I added a 5-second pause as a test, so I am sure it is not because the procedure hadn't finished yet.
Check whether there is a transaction wrapped around your INSERT, and if there is, check that that transaction is COMMITed.