How to handle PostgreSQL timeout in groovy - postgresql

In my Groovy script, I have the following structure:
def sql = Sql.newInstance(connString, "user", "password",
        "org.postgresql.Driver")
sql.withTransaction {
    sql.withBatch {}
    sql.withBatch {}
    sql.withBatch {}
    .........
}
sql.close()
I want to take care of timeout issues here, but the Sql API doesn't have any method for it. So how can I do it? I am using the PostgreSQL driver.
I came across this, but I get the error:
java.sql.SQLFeatureNotSupportedException: Method org.postgresql.jdbc4.Jdbc4Connection.setNetworkTimeout(Executor, int) is not yet implemented.
PS:
int[] modifyCount = sql.withBatch(batchSize, updateQuery) { ps ->
    keyValue.each { k, v ->
        ps.addBatch(keyvalue: k, newvalue: v)
    }
}
In the above code, when I try to add ps.setQueryTimeout(), the error message says no such method is defined.

Low-level timeouts could be defined through connection properties:
https://jdbc.postgresql.org/documentation/head/connect.html
loginTimeout: how long to wait for establishment of a database connection.
connectTimeout: the timeout value used for socket connect operations.
socketTimeout: the timeout value used for socket read operations.
These properties may be specified in either the connection URL or an additional Properties object parameter.
Query timeout: after connecting to the database, you can register a closure that is invoked for each statement, and call setQueryTimeout there:
sql.withStatement { java.sql.Statement stmt ->
    stmt.setQueryTimeout(MY_SQL_TIMEOUT)
}

Related

Is there support for compression in ReactiveMongo?

I am using ReactiveMongo as the connector for an Akka-Http, Akka-Streams project. I am creating the MongoConnection as shown below, but the data in the database is compressed using Snappy. No matter where I look, I can't find any mention of compression support in the ReactiveMongo documentation. When I try to connect to the Mongo database using a URL with the compressors=snappy flag, it returns an exception.
I looked through the source code, and indeed it appears to have no mention of compression support at all. At this point I'm willing to accept a hacky workaround.
Can anyone help me please?
MongoConnection.fromString("mongodb://localhost:27017?compressors=snappy").flatMap(uri => driver.connect(uri))
Exception:
23:09:15.311 [default-akka.actor.default-dispatcher-6] ERROR akka.actor.ActorSystemImpl - Error during processing of request: 'The connection URI contains unsupported options: compressors'. Completing with 500 Internal Server Error response. To change default exception handling behavior, provide a custom ExceptionHandler.
java.lang.IllegalArgumentException: The connection URI contains unsupported options: compressors
at reactivemongo.api.AsyncDriver.connect(AsyncDriver.scala:227)
at reactivemongo.api.AsyncDriver.connect(AsyncDriver.scala:203)
at reactivemongo.api.AsyncDriver.connect(AsyncDriver.scala:252)
If you need a workable example, you can try this:
(You don't actually need a MongoDB container running locally for the error to be thrown)
object ReactiveMongoCompressorIssue extends App {
  import scala.concurrent.{Await, ExecutionContextExecutor}
  import scala.concurrent.duration._
  import akka.actor.ActorSystem
  import reactivemongo.api.{AsyncDriver, MongoConnection}

  implicit val actorSystem = ActorSystem("ReactiveMongoCompressorIssue")
  implicit val dispatcher: ExecutionContextExecutor = actorSystem.dispatcher

  final val driver = AsyncDriver()
  val url = "mongodb://localhost:27017/?compressors=snappy"
  val connection = Await.result(MongoConnection.fromString(url).flatMap(uri => driver.connect(uri)), 3.seconds)
  assert(connection.active)
}
Thanks to what #cchantep said about how compression in MongoDB is handled on the server side (see the MongoDB docs here) I went back through the ReactiveMongo source code to see if there was a way to either bypass the check or remove the flag from the URL myself and connect without it.
Indeed, I found that there is a boolean flag called strictMode which determines whether ignoredOptions such as the compressors flag should cause an exception to be thrown or not. So now my connection looks like this:
MongoConnection.fromString(url).flatMap(uri => driver.connect(uri, None, strictMode = false))
The None refers to the name of a connection pool; the other connect method I was using before doesn't take one either, so this works fine.
Thank you for the help!
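If you would rather keep strict URI parsing, another workaround is to drop the unsupported option from the connection string yourself before handing it to the driver. A rough sketch in plain Java (naive string handling, not a full MongoDB URI parser; the class name is made up for illustration):

```java
public class StripUriOption {
    // Remove a single query-string option (e.g. "compressors") from a
    // MongoDB connection URI before passing it to the driver.
    public static String stripOption(String uri, String option) {
        int q = uri.indexOf('?');
        if (q < 0) return uri;  // no query string, nothing to strip
        String base = uri.substring(0, q);
        StringBuilder kept = new StringBuilder();
        for (String pair : uri.substring(q + 1).split("&")) {
            if (pair.startsWith(option + "=")) continue;  // drop the offending option
            if (kept.length() > 0) kept.append('&');
            kept.append(pair);
        }
        return kept.length() == 0 ? base : base + "?" + kept;
    }
}
```

Note that this only silences the parser, exactly like strictMode = false does: the data on the wire still won't be Snappy-compressed.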

Vertx : cannot read SQLConnection from fillReport

I want to use Vert.x and JasperReports. I create my connection and test it, and everything is OK, but when I try to fill the Jasper report using the fillReport method (whose last parameter is a Connection), it shows the error:
The method fillReport(JasperReport, Map< String,Object >, Connection) in the type JasperFillManager is not applicable for the arguments (JasperReport, null, Class < connection>).
Any idea how I should cast my SQLConnection to a Connection?
Here is my code :
AsyncSQLClient client = MySQLClient.createShared(vertx, mySQLClientConfig);
client.getConnection(res -> {
    if (res.succeeded()) {
        SQLConnection connection = res.result();
        try {
            String report = "C:\\Users\\paths\\Test1.jrxml";
            JasperReport jasp = JasperCompileManager.compileReport(report);
            JasperPrint jaspPrint = JasperFillManager.fillReport(jasp, null, connection);
            JasperViewer.viewReport(jaspPrint);
        } catch (Exception ex) {
            System.out.println(ex);
        }
    }
});
Regards.
The answer is simple. You can't cast a Vert.x io.vertx.ext.sql.SQLConnection to a JDBC java.sql.Connection.
Vert.x heavily relies upon asynchronous calls. JDBC is blocking and so Vert.x wraps it with an asynchronous interface (and a bit more). There is no way to get to the real java.sql.Connection as there is no getter or something like that in the JDBCConnectionImpl or in the SQLConnection interface.
That doesn't mean you can't use Jasper with Vert.x. You need to open your own JDBC connection – but don't block the event loop! So I suggest you have a look at worker verticles, which don't block the event loop because they run on a separate thread.
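The worker-verticle idea can be sketched with plain java.util.concurrent primitives: the blocking work (opening the JDBC connection, compiling and filling the report) runs on its own thread pool, so the calling thread is never tied up. This is only an analogy for what Vert.x's worker verticles / executeBlocking do for you; the helper below is illustrative, not Vert.x API:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class BlockingOffload {
    // Dedicated pool for blocking work (JDBC, Jasper report filling),
    // analogous to the worker pool Vert.x uses for worker verticles.
    private static final ExecutorService WORKER = Executors.newFixedThreadPool(2, r -> {
        Thread t = new Thread(r);
        t.setDaemon(true);  // don't keep the JVM alive just for worker threads
        return t;
    });

    // Submit a blocking task; the caller immediately gets a Future back
    // instead of being blocked, which is the point of a worker verticle.
    public static <T> Future<T> runBlocking(Callable<T> task) {
        return WORKER.submit(task);
    }

    // Convenience variant that waits for the result (only safe to call
    // from a thread that is allowed to block).
    public static <T> T runBlockingJoin(Callable<T> task) {
        try {
            return WORKER.submit(task).get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

In real Vert.x code you would instead deploy the Jasper code in a verticle with the worker deployment option set, or wrap it in vertx.executeBlocking, and open a plain java.sql.Connection inside it.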

spray.io client configuration

I have a basic client which I use to test my server. For the configuration I am using application.json:
"spray": {
  "can": {
    "client": {
      "idle-timeout": "120 s",
      "request-timeout": "180 s"
    },
    "host-connector": {
      "max-retries": "1",
      "max-connections": "64"
    }
  }
}
However, in the sendReceive method I see that the timeout is always 60 seconds; according to the documentation, if I set request-timeout it is supposed to be the implicit value:
def sendReceive(implicit refFactory: ActorRefFactory, executionContext: ExecutionContext,
                futureTimeout: Timeout = 60.seconds): SendReceive =
  sendReceive(IO(Http)(actorSystem))
Do I need explicitly to load the configuration?
This is a confusing aspect of Spray's various timeout values; for a detailed explanation see: Understanding Spray Client Timeout Settings
A couple of points about the method definition above: the timeout is just used to satisfy the ask made to the transport actor; it does not relate to a request timeout for this connection. futureTimeout: Timeout = 60.seconds means that this default value is used if none is provided, not that it is unconditionally used.
You can programmatically configure the requestTimeout by passing a HostConnectorSetup to either the host- or request-level APIs; since you already have this in your spray.can.client configuration, though, you should not need to make further changes.

Table lock timeout when executing REST API functional test in Grails 2.4.4 using an in-memory H2 database

I am trying to create a set of functional tests for my REST API using the funky-spock and rest-client-builder plugins.
My H2 DB connection string looks like this:
url = "jdbc:h2:mem:testDb:MVCC=true;LOCK_TIMEOUT=5000"
First I initialize my H2 database, inserting some records in the setup() method, and everything works fine:
def setup() {
    // Clean elasticsearch index
    elasticSearchService.reinitialiseIndex()
    // Initialize the DB
    // 1st question
    questionService.createQuestionFromOccurrence(
        '181718e6-fd3b-4a1b-8b40-3f83fd2965e5',
        QuestionType.IDENTIFICATION,
        ['kangaroo', 'grey'],
        userMick,
        '1st question 1st comment'
    )
}
But when I execute my test and perform my POST request:
RestResponse response = rest.post("http://localhost:8080/${grailsApplication.metadata.'app.name'}/ws/question") {
    json([
        source      : 'biocache',
        occurrenceId: 'f6f8a9b8-4d52-49c3-9352-155f154fc96c',
        userId      : userKeef.alaUserId,
        tags        : 'octopus, orange',
        comment     : 'whatever'
    ])
}
the process fails on the first DB operation, which in this case is a get(), with the following exception:
ERROR errors.GrailsExceptionResolver - JdbcSQLException occurred when processing request: [POST] /taxon-overflow/ws/question
Timeout trying to lock table "QUESTION"; SQL statement:
It looks like all DB operations within a Grails test are performed in a transaction that is rolled back after each test. Apparently it also locks the DB, and since the REST request is executed in a separate thread from the test, it cannot access the DB. And even if the DB were not locked, the process would not see the data, as it is never committed.
One way to make this work is to make the test non-transactional by adding the attribute to your test:
class RestAPISpec extends IntegrationSpec {
    static transactional = false
    ...
}
One of the problems with this approach is that you have to clean up the database manually after each test. Here is the easiest way I found to do this:
def grailsApplication
def sessionFactory
...
def cleanup() {
    (grailsApplication.getArtefacts("Domain") as List).each {
        it.newInstance().list()*.delete()
    }
    sessionFactory.currentSession.flush()
    sessionFactory.currentSession.clear()
    sourceService.init()
}
Another approach is to exercise your REST API by testing the controller in an integration test, which is a bit cumbersome as well and is not really an e2e test of your web service. On the other hand, it executes a bit faster than a functional test.

Reliably working out if your db is down with Scala Slick

I am using Scala Slick to work with my MySQL DB.
I am wrapping all the calls in scala.util.Try.
I would like to have different behaviour based on the problem:
If the DB is down, I ultimately want my webapp to return a 503.
If a strange query gets through to my DB layer and there is a bug in my code, then I want to return a 500.
After some googling, it seems like you can get a wide array of different exceptions with error codes, and I'm unsure what to look for.
With Slick I am using com.mysql.jdbc.Driver.
Thanks
Slick/MySQL will throw MySQLSyntaxErrorException for bad syntax and CommunicationsException when it's unable to reach the database.
Here's a quick example that will catch both of these types of exceptions:
try {
  Database.forURL("jdbc:mysql://some-host:3306/db-name",
    driver = "com.mysql.jdbc.Driver",
    user = "",
    password = "") withSession { session: Session =>
      implicit val s = session
      ...
  }
} catch {
  case e: MySQLSyntaxErrorException =>
    ... handle the syntax error ...
    // You may want to define your own Exception that wraps the MySQL one
    // and adds more context
  case e: CommunicationsException =>
    ... handle the connection error ...
}
Then, in your webapp code, you'll want to catch your custom exceptions (see the comment in the code) and return the HTTP codes accordingly.
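Rather than matching on driver-specific exception classes, you can also branch on the standard SQLState carried by every java.sql.SQLException: SQLState class "08" covers connection exceptions, while syntax errors fall elsewhere (class "42" under the SQL standard). A minimal sketch of mapping that to HTTP status codes; the class name and the exact status choices are illustrative:

```java
import java.sql.SQLException;

public class SqlStateMapper {
    // Map a JDBC exception to an HTTP status using the standard SQLState:
    // class "08" = connection exceptions -> 503, anything else -> 500.
    public static int toHttpStatus(SQLException e) {
        String state = e.getSQLState();
        if (state != null && state.startsWith("08")) {
            return 503;  // database unreachable: service unavailable
        }
        return 500;      // bad SQL or a bug in our code: internal error
    }
}
```

This has the advantage of surviving a driver upgrade (the MySQL Connector/J exception classes moved packages between versions), since SQLState values are defined by the SQL standard rather than the driver.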