AKKA remote (with SSL) can't find keystore/truststore files on classpath - scala

I'm trying to configure an Akka SSL connection to use my keystore and truststore files, and I want it to be able to find them on the classpath.
I tried to set application.conf to:
...
remote.netty.ssl = {
  enable = on
  key-store = "keystore"
  key-store-password = "passwd"
  trust-store = "truststore"
  trust-store-password = "passwd"
  protocol = "TLSv1"
  random-number-generator = "AES128CounterSecureRNG"
  enabled-algorithms = ["TLS_RSA_WITH_AES_128_CBC_SHA"]
}
...
This works fine if the keystore and truststore files are in the current directory. In my application these files get packaged into WAR and JAR archives, so I'd like to read them from the classpath.
I tried to use getResource("keystore") in application.conf as described here without any luck. Config reads it literally as a string.
I also tried to parse a String config and force it to read the value:
val conf: Config = ConfigFactory parseString (s"""
...
"${getClass.getClassLoader.getResource("keystore").getPath}"
...""")
In this case it finds the proper path on the classpath, file://some_dir/target/scala-2.10/server_2.10-1.1-one-jar.jar!/main/server_2.10-1.1.jar!/keystore, which is exactly where it's located (inside the jar). However, the underlying Netty SSL transport can't open the file given this path, and I get:
Oct 03, 2013 1:02:48 PM org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink
WARNING: Failed to initialize an accepted socket.
45a13eb9-6cb1-46a7-a789-e48da9997f0fakka.remote.RemoteTransportException: Server SSL connection could not be established because key store could not be loaded
at akka.remote.netty.NettySSLSupport$.constructServerContext$1(NettySSLSupport.scala:113)
at akka.remote.netty.NettySSLSupport$.initializeServerSSL(NettySSLSupport.scala:130)
at akka.remote.netty.NettySSLSupport$.apply(NettySSLSupport.scala:27)
at akka.remote.netty.NettyRemoteTransport$PipelineFactory$.defaultStack(NettyRemoteSupport.scala:74)
at akka.remote.netty.NettyRemoteTransport$PipelineFactory$$anon$1.getPipeline(NettyRemoteSupport.scala:67)
at akka.remote.netty.NettyRemoteTransport$PipelineFactory$$anon$1.getPipeline(NettyRemoteSupport.scala:67)
at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink$Boss.registerAcceptedChannel(NioServerSocketPipelineSink.java:277)
at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink$Boss.run(NioServerSocketPipelineSink.java:242)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
Caused by: java.io.FileNotFoundException: file:/some_dir/server/target/scala-2.10/server_2.10-1.1-one-jar.jar!/main/server_2.10-1.1.jar!/keystore (No such file or directory)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
at java.io.FileInputStream.<init>(FileInputStream.java:97)
at akka.remote.netty.NettySSLSupport$.constructServerContext$1(NettySSLSupport.scala:118)
... 10 more
I wonder if there is any way to configure this in Akka without implementing a custom SSL transport. Maybe I should configure Netty in the code?
Obviously I can hardcode the path or read it from an environment variable, but I would prefer a more flexible classpath solution.
I looked at akka.remote.netty.NettySSLSupport, where the exception is thrown from, and here is the code:
def initializeServerSSL(settings: NettySettings, log: LoggingAdapter): SslHandler = {
  log.debug("Server SSL is enabled, initialising ...")

  def constructServerContext(settings: NettySettings, log: LoggingAdapter, keyStorePath: String, keyStorePassword: String, protocol: String): Option[SSLContext] =
    try {
      val rng = initializeCustomSecureRandom(settings.SSLRandomNumberGenerator, settings.SSLRandomSource, log)
      val factory = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm)
      factory.init({
        val keyStore = KeyStore.getInstance(KeyStore.getDefaultType)
        val fin = new FileInputStream(keyStorePath)
        try keyStore.load(fin, keyStorePassword.toCharArray) finally fin.close()
        keyStore
      }, keyStorePassword.toCharArray)
      Option(SSLContext.getInstance(protocol)) map { ctx ⇒ ctx.init(factory.getKeyManagers, null, rng); ctx }
    } catch {
      case e: FileNotFoundException ⇒ throw new RemoteTransportException("Server SSL connection could not be established because key store could not be loaded", e)
      case e: IOException ⇒ throw new RemoteTransportException("Server SSL connection could not be established because: " + e.getMessage, e)
      case e: GeneralSecurityException ⇒ throw new RemoteTransportException("Server SSL connection could not be established because SSL context could not be constructed", e)
    }
It looks like it must be a plain file path (String), because that's what FileInputStream takes.
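The only workaround I can think of so far is to copy the resource out of the jar into a temporary file at startup and point the config at that file. A rough, untested sketch (config keys as in my application.conf above):
import java.io.File
import java.nio.file.{Files, StandardCopyOption}
import com.typesafe.config.ConfigFactory

// Copy a classpath resource to a temp file so FileInputStream can open it.
def extractToTempFile(resource: String): String = {
  val in = getClass.getClassLoader.getResourceAsStream(resource)
  require(in != null, s"$resource not found on classpath")
  val tmp = File.createTempFile(resource, ".tmp")
  tmp.deleteOnExit()
  try Files.copy(in, tmp.toPath, StandardCopyOption.REPLACE_EXISTING)
  finally in.close()
  tmp.getAbsolutePath
}

val conf = ConfigFactory.parseString(
  s"""akka.remote.netty.ssl.key-store = "${extractToTempFile("keystore")}"
     |akka.remote.netty.ssl.trust-store = "${extractToTempFile("truststore")}"
  """.stripMargin).withFallback(ConfigFactory.load())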
Any suggestions would be welcome!

I also got stuck on a similar issue and was getting similar errors. In my case I was trying to hit an HTTPS server with self-signed certificates using akka-http; with the following code I was able to get through:
import akka.http.scaladsl.{AkkaSSLConfig, Http}
import com.typesafe.sslconfig.ssl.{TrustManagerConfig, TrustStoreConfig}

val trustStoreConfig = TrustStoreConfig(None, Some("/etc/Project/keystore/my.cer")).withStoreType("PEM")
val trustManagerConfig = TrustManagerConfig().withTrustStoreConfigs(List(trustStoreConfig))

// Loose settings: accept any certificate and skip hostname verification.
val badSslConfig = AkkaSSLConfig().mapSettings(s => s.withLoose(s.loose
  .withAcceptAnyCertificate(true)
  .withDisableHostnameVerification(true)
).withTrustManagerConfig(trustManagerConfig))

val badCtx = Http().createClientHttpsContext(badSslConfig)
Http().superPool[RequestTracker](badCtx)(httpMat)

At the time of writing this question there was no way to do it, AFAIK. I'm closing this question, but I welcome updates if new versions provide such functionality or if there are other ways to do it.

Related

Trouble connecting to SSL enabled mongo cluster from Spark Application

I'm trying to connect to an SSL-enabled mongo cluster from a Spark application. I'm using a self-signed cert and getting the following error:
Exception in monitor thread while connecting to server CLUSTER_NAME
com.mongodb.MongoSocketWriteException: Exception sending message
at com.mongodb.internal.connection.InternalStreamConnection.translateWriteException(InternalStreamConnection.java:525)
at com.mongodb.internal.connection.InternalStreamConnection.sendMessage(InternalStreamConnection.java:413)
at com.mongodb.internal.connection.InternalStreamConnection.sendCommandMessage(InternalStreamConnection.java:269)
at com.mongodb.internal.connection.InternalStreamConnection.sendAndReceive(InternalStreamConnection.java:253)
at com.mongodb.internal.connection.CommandHelper.sendAndReceive(CommandHelper.java:83)
at com.mongodb.internal.connection.CommandHelper.executeCommand(CommandHelper.java:33)
at com.mongodb.internal.connection.InternalStreamConnectionInitializer.initializeConnectionDescription(InternalStreamConnectionInitializer.java:106)
at com.mongodb.internal.connection.InternalStreamConnectionInitializer.initialize(InternalStreamConnectionInitializer.java:63)
at com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:127)
at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:117)
at java.lang.Thread.run(Thread.java:745)
Caused by: javax.net.ssl.SSLHandshakeException: java.security.cert.CertificateException: No name matching CLUSTER_NAME found
at sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1509)
at sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:216)
My read config uri looks something like this:
val uri: String = "mongodb://" + URLEncoder.encode(Login, "UTF-8") + ":" + URLEncoder.encode(Password, "UTF-8") + "#" + cluster + ":27017/" + database + "." + collection + "?authSource=" + (if (authenticationDatabase != "") authenticationDatabase else "admin") + (if (replicaset == null) "" else "&replicaSet=" + replicaset) + "&ssl=true"
I want to trust the self-signed cert with something like this trust-all manager:
import java.security.cert.X509Certificate
import javax.net.ssl.X509TrustManager

// A trust manager that accepts any certificate chain.
class TrustAllX509TrustManager extends X509TrustManager {
  override def getAcceptedIssuers = new Array[X509Certificate](0)
  override def checkClientTrusted(certs: Array[X509Certificate], authType: String): Unit = {}
  override def checkServerTrusted(certs: Array[X509Certificate], authType: String): Unit = {}
}
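I would presumably install it as the JVM-default SSL context before the Mongo connection is created; a sketch of the wiring I have in mind (untested):
import java.security.SecureRandom
import javax.net.ssl.{SSLContext, TrustManager}

// Install the all-trusting manager as the default SSLContext (dev only!)
val sslContext = SSLContext.getInstance("TLS")
sslContext.init(null, Array[TrustManager](new TrustAllX509TrustManager), new SecureRandom())
SSLContext.setDefault(sslContext)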
The versions of the environment I'm using:
Spark: 2.2.0
Mongo: 3.4
Any help will be appreciated.
Thanks!
This is the same as making any other SSL connection. Import your cert into a keystore and refer to that keystore using the code below:
System.setProperty("javax.net.ssl.trustStore", "keystoreFilefullpath")
System.setProperty("javax.net.ssl.trustStorePassword", "password")
Once these params are set, the Mongo SSL connection should work. If you are connecting from Spark, then the keystore file must be shipped to the driver/executors using the --files option.
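For example, a hypothetical submit command (with --files the truststore lands in each executor's working directory, so the executor-side property can use the bare file name; on the driver you can keep the System.setProperty calls above with the full local path):
spark-submit \
  --files /local/path/truststore.jks \
  --conf "spark.executor.extraJavaOptions=-Djavax.net.ssl.trustStore=truststore.jks -Djavax.net.ssl.trustStorePassword=password" \
  my-app.jar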

How to resolve akka-http issues?

I'm new to akka-http. I added the following route:
path(urlpath / "messages") {
post {
decodeRequest {
withoutSizeLimit {
entity(asSourceOf[Message]) { source =>
val storeToDb = Flow[Message].map[Future[Message]](message => (service ask message).mapTo[Message])
val sendToProviderFlow = Flow[Future[Message]].map[Unit](message => sendToJasminProvider(message))
val futureResponse = source
.via(storeToDb)
.via(sendToProviderFlow)
.runWith(Sink.ignore).map(_ => "Message Received")
complete(futureResponse)
}
}
}
}
}
When I try to run the above route, I receive a couple of errors and I do not know how to fix them.
Issues:
Accept error: could not accept new connection java.io.IOException: Too many open files
[1] dead letters encountered. If this is not an expected behavior,This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
akka.http.scaladsl.model.EntityStreamException: Entity stream truncation
About the first error: it seems that you open too many connections, maybe you simply don't close them somewhere. Unix uses the same kind of descriptor for sockets and files.
About the second error: it looks like your actor (service) is dead. Try to debug your actors' lifecycle, starting by enabling akka.actor.debug.lifecycle = on.
For more details visit this page: https://doc.akka.io/docs/akka/current/logging.html#auxiliary-logging-options
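For example, in application.conf (lifecycle events are logged at DEBUG level, so the loglevel has to be raised as well):
akka {
  loglevel = "DEBUG"
  actor {
    debug {
      lifecycle = on
    }
  }
}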
Does sendToJasminProvider have the type Future[Message] => Unit? If that function has a type like Future[Message] => Future[T], then that's a problem.
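If it does, one option is to resolve the futures inside the stream with mapAsync instead of passing them along unresolved. A sketch, assuming sendToJasminProvider can be adapted to take a plain Message (the parallelism of 4 is an arbitrary choice):
val futureResponse = source
  .mapAsync(4)(message => (service ask message).mapTo[Message]) // resolve the ask here
  .map(message => sendToJasminProvider(message))
  .runWith(Sink.ignore)
  .map(_ => "Message Received")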

How to write to HDFS using spark programming API if I have authentication details?

I need to write to an external HDFS cluster whose authentication details are available for both simple and Kerberos authentication. For the sake of simplicity, let's assume we are dealing with simple authentication.
This is what I have:
External HDFS cluster connection details (host, port)
Authentication details (user for simple auth)
HDFS location where files need to be written (hdfs://host:port/loc)
Also, other details like format, etc.
Please note that the Spark user is not the same as the user specified for HDFS auth.
Now, using the Spark programming API, this is what I am trying to do:
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.io.{NullWritable, Text}
import org.apache.hadoop.mapred.{JobConf, TextOutputFormat}

val hadoopConf = new Configuration()
hadoopConf.set("fs.defaultFS", fileSystemPath)
hadoopConf.set("hadoop.job.ugi", userName)

val jConf = new JobConf(hadoopConf)
jConf.setUser(user)
jConf.set("user.name", user)
jConf.setOutputKeyClass(classOf[NullWritable])
jConf.setOutputValueClass(classOf[Text])
jConf.setOutputFormat(classOf[TextOutputFormat[NullWritable, Text]])

outputDStream.foreachRDD { r =>
  val rdd = r.mapPartitions { iter =>
    val text = new Text()
    iter.map { x =>
      text.set(x.toString)
      println(x.toString)
      (NullWritable.get(), text)
    }
  }
  val rddCount = rdd.count()
  if (rddCount > 0) {
    rdd.saveAsHadoopFile(config.outputPath, classOf[NullWritable], classOf[Text],
      classOf[TextOutputFormat[NullWritable, Text]], jConf)
  }
}
Here, I was assuming that if we pass a JobConf with the correct details, it will be used for authentication, and the write will be done as the user specified in the JobConf.
However, the write still happens as the Spark user ("root"), irrespective of the auth details present in the JobConf ("hdfs" as user). Below is the exception that I get:
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=root, access=WRITE, inode="/spark-deploy/out/_temporary/0":hdfs:supergroup:drwxr-xr-x
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1698)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1682)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1665)
at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3900)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:978)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
at org.apache.hadoop.ipc.Client.call(Client.java:1475)
at org.apache.hadoop.ipc.Client.call(Client.java:1412)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
at com.sun.proxy.$Proxy40.mkdirs(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:558)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy41.mkdirs(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3000)
... 45 more
Please let me know if there are any suggestions.
This is probably more a comment than an answer, but as it is too long I put it here. I haven't tried this because I have no environment to test it. Please try it and let me know if it works (and if it doesn't, I'll remove this answer).
Looking a bit into the code, it looks like DFSClient creates a proxy using createProxyWithClientProtocol that uses UserGroupInformation.getCurrentUser() (I haven't traced the createHAProxy branch down, but I suspect the same logic there). This info is then sent to the server for authentication.
It means that you need to change what UserGroupInformation.getCurrentUser() returns in the context of your particular call. This is what UserGroupInformation.doAs is supposed to do, so you just need to get a proper UserGroupInformation instance. In the case of simple authentication, UserGroupInformation.createRemoteUser might actually work.
So I suggest trying something like this:
import java.security.PrivilegedExceptionAction
import org.apache.hadoop.security.UserGroupInformation

...
val rddCount = rdd.count()
if (rddCount > 0) {
  val remoteUgi = UserGroupInformation.createRemoteUser("hdfsUserName")
  // doAs takes a PrivilegedExceptionAction, so wrap the save in one
  remoteUgi.doAs(new PrivilegedExceptionAction[Unit] {
    override def run(): Unit =
      rdd.saveAsHadoopFile(config.outputPath, classOf[NullWritable], classOf[Text],
        classOf[TextOutputFormat[NullWritable, Text]], jConf)
  })
}

Failed to load data source for config using Play-2.6 and Quill.io

I'm currently getting an error when I try to run my Play app. It says "Failed to load data source", but then it looks like it is indeed loading the data source. I'm very new to Play and Scala, and the rest of my team is also new, so apologies if this is a silly error or if I'm missing some code samples. The database app-users with owner root exists on my local machine, and I don't believe root has a password (it was created using the createuser tool).
Any ideas on what could cause this, or what I'm missing?
Error:
play.api.UnexpectedException: Unexpected exception[IllegalStateException: Failed to load data source for config: 'Config(SimpleConfigObject({"dataSource":"org.postgresql.ds.PGSimpleDataSource","database":"app-users","driver":"org.postgresql.Driver","host":"localhost","password":"","port":5432,"url":"jdbc:postgresql://localhost:5432/app-users","user":"root"}))']
at play.core.server.DevServerStart$$anon$1.reload(DevServerStart.scala:186)
at play.core.server.DevServerStart$$anon$1.get(DevServerStart.scala:124)
at play.core.server.AkkaHttpServer.modelConversion(AkkaHttpServer.scala:183)
at play.core.server.AkkaHttpServer.handleRequest(AkkaHttpServer.scala:189)
at play.core.server.AkkaHttpServer.$anonfun$createServerBinding$3(AkkaHttpServer.scala:106)
at akka.stream.impl.fusing.MapAsync$$anon$24.onPush(Ops.scala:1191)
at akka.stream.impl.fusing.GraphInterpreter.processPush(GraphInterpreter.scala:512)
at akka.stream.impl.fusing.GraphInterpreter.processEvent(GraphInterpreter.scala:475)
at akka.stream.impl.fusing.GraphInterpreter.execute(GraphInterpreter.scala:371)
at akka.stream.impl.fusing.GraphInterpreterShell.runBatch(ActorGraphInterpreter.scala:584)
Caused by: java.lang.IllegalStateException: Failed to load data source for config: 'Config(SimpleConfigObject({"dataSource":"org.postgresql.ds.PGSimpleDataSource","database":"app-users","driver":"org.postgresql.Driver","host":"localhost","password":"","port":5432,"url":"jdbc:postgresql://localhost:5432/app-users","user":"root"}))'
at io.getquill.JdbcContextConfig.dataSource(JdbcContextConfig.scala:24)
at io.getquill.PostgresJdbcContext.<init>(PostgresJdbcContext.scala:17)
at io.getquill.PostgresJdbcContext.<init>(PostgresJdbcContext.scala:18)
at io.getquill.PostgresJdbcContext.<init>(PostgresJdbcContext.scala:19)
at db.db.package$DBContext.<init>(package.scala:6)
at MyComponents.ctx$lzycompute(MyApplicationLoader.scala:19)
at MyComponents.ctx(MyApplicationLoader.scala:19)
at MyComponents.userService$lzycompute(MyApplicationLoader.scala:22)
at MyComponents.userService(MyApplicationLoader.scala:22)
at MyComponents.applicationController$lzycompute(MyApplicationLoader.scala:29)
Caused by: java.lang.RuntimeException: java.lang.IllegalArgumentException: argument type mismatch
at com.zaxxer.hikari.util.PropertyElf.setProperty(PropertyElf.java:154)
at com.zaxxer.hikari.util.PropertyElf.lambda$setTargetFromProperties$0(PropertyElf.java:57)
at java.util.Hashtable.forEach(Hashtable.java:879)
at com.zaxxer.hikari.util.PropertyElf.setTargetFromProperties(PropertyElf.java:52)
at com.zaxxer.hikari.HikariConfig.<init>(HikariConfig.java:132)
at io.getquill.JdbcContextConfig.dataSource(JdbcContextConfig.scala:21)
at io.getquill.PostgresJdbcContext.<init>(PostgresJdbcContext.scala:17)
at io.getquill.PostgresJdbcContext.<init>(PostgresJdbcContext.scala:18)
at io.getquill.PostgresJdbcContext.<init>(PostgresJdbcContext.scala:19)
at db.db.package$DBContext.<init>(package.scala:6)
Caused by: java.lang.IllegalArgumentException: argument type mismatch
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.zaxxer.hikari.util.PropertyElf.setProperty(PropertyElf.java:149)
at com.zaxxer.hikari.util.PropertyElf.lambda$setTargetFromProperties$0(PropertyElf.java:57)
at java.util.Hashtable.forEach(Hashtable.java:879)
at com.zaxxer.hikari.util.PropertyElf.setTargetFromProperties(PropertyElf.java:52)
at com.zaxxer.hikari.HikariConfig.<init>(HikariConfig.java:132)
at io.getquill.JdbcContextConfig.dataSource(JdbcContextConfig.scala:21)
application.conf
play.db {
  config = "db"
  default = "default"
}

db.default {
  driver = "org.postgresql.Driver"
  dataSource = "org.postgresql.ds.PGSimpleDataSource"
  url = "jdbc:postgresql://localhost:5432/app-users"
  user = "root"
  user = ${?DB_USER}
  host = "localhost"
  host = ${?DB_HOST}
  port = 5432
  port = ${?DB_PORT}
  password = ""
  password = ${?DB_PASSWORD}
  database = "app-users"
}
db/package.scala
import io.getquill.{PostgresJdbcContext, SnakeCase}

package object db {
  class DBContext(config: String) extends PostgresJdbcContext(SnakeCase, config)

  trait Repository {
    val ctx: DBContext
  }
}
Using:
Scala 2.12.4
Quill 2.3.2
Play 2.6.6
Postgres JDBC Driver 42.2.1
PostgreSQL 10.2
UPDATE:
Added a password of "root" to the root user and switched to using the same format as the Quill docs, so now appliation.conf looks like this:
db.default {
  dataSourceClassName = org.postgresql.ds.PGSimpleDataSource
  dataSource.user = root
  dataSource.password = root
  dataSource.databaseName = app-users
  dataSource.portNumber = 5432
  dataSource.serverName = host
  connectionTimeout = 30000
}
But the error message is still basically the same:
play.api.UnexpectedException: Unexpected exception[IllegalStateException: Failed to load data source for config: 'Config(SimpleConfigObject({"connectionTimeout":30000,"dataSource":{"databaseName":"app-users","password":"root","portNumber":5432,"serverName":"host","user":"root"},"dataSourceClassName":"org.postgresql.ds.PGSimpleDataSource"}))']
The following worked for me:
db.default {
  dataSourceClassName = org.postgresql.ds.PGSimpleDataSource
  dataSource.user = root
  dataSource.password = root
  dataSource.databaseName = app-users
  dataSource.portNumber = 5432
  dataSource.serverName = localhost
  connectionTimeout = 30000
}
Basically, localhost instead of host. I'm guessing the first iteration didn't work because of the quotes.
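If you still want the environment-variable overrides from the original config, HOCON's optional ${?VAR} substitutions work with the dataSource.* keys as well, e.g.:
db.default {
  dataSourceClassName = org.postgresql.ds.PGSimpleDataSource
  dataSource.user = root
  dataSource.user = ${?DB_USER}
  dataSource.password = root
  dataSource.password = ${?DB_PASSWORD}
  dataSource.serverName = localhost
  dataSource.serverName = ${?DB_HOST}
  connectionTimeout = 30000
}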

Play 2.1 Scala SQLException Connection Timed out waiting for a free available connection

I have been working on this issue for quite a while now and I cannot find a solution...
A web app built with Play Framework 2.2.1 using an h2 db (for dev) and a simple Model package.
I am trying to implement a REST JSON endpoint and the code works... but only once per server instance.
def createOtherModel() = Action(parse.json) { request =>
  request.body \ "name" match {
    case _: JsUndefined =>
      BadRequest(Json.obj("error" -> true,
        "message" -> "Could not match name =(")).as("application/json")
    case name: JsValue =>
      if (name == JsString("content")) {
        request.body \ "value" match {
          case _: JsUndefined =>
            BadRequest(Json.obj("error" -> true,
              "message" -> "Could not match value =(")).as("application/json")
          case value: JsValue =>
            // this breaks the second time
            val session = ThinkingSession.dummy
            val json = Json.obj(
              "content" -> value,
              "thinkingSession" -> session.id
            )
            Ok(Json.obj("content" -> json)).as("application/json")
        }
      } else {
        BadRequest(Json.obj("error" -> true,
          "message" -> "Name was not content =(")).as("application/json")
      }
  }
}
So basically I read the JSON, echo the "value" value, create a model object and send its id.
The ThinkingSession.dummy function does this:
def all(): List[ThinkingSession] = {
  // Tried explicitly closing connection, no difference
  //val conn = DB.getConnection()
  //try {
  //  DB.withConnection { implicit conn =>
  //    SQL("select * from thinking_session").as(ThinkingSession.DBParser *)
  //  }
  //} finally {
  //  conn.close()
  //}
  DB.withConnection { implicit conn =>
    SQL("select * from thinking_session").as(ThinkingSession.DBParser *)
  }
}

def dummy: ThinkingSession = {
  all().head
}
So this should do a SELECT * FROM thinking_session, create a model object list from the result, and return the first element of the list.
This works fine the first time after server start but the second time I get a
play.api.Application$$anon$1: Execution exception[[SQLException: Timed out waiting for a free available connection.]]
at play.api.Application$class.handleError(Application.scala:293) ~[play_2.10.jar:2.2.1]
at play.api.DefaultApplication.handleError(Application.scala:399) [play_2.10.jar:2.2.1]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$2$$anonfun$applyOrElse$3.apply(PlayDefaultUpstreamHandler.scala:261) [play_2.10.jar:2.2.1]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$2$$anonfun$applyOrElse$3.apply(PlayDefaultUpstreamHandler.scala:261) [play_2.10.jar:2.2.1]
at scala.Option.map(Option.scala:145) [scala-library.jar:na]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$2.applyOrElse(PlayDefaultUpstreamHandler.scala:261) [play_2.10.jar:2.2.1]
Caused by: java.sql.SQLException: Timed out waiting for a free available connection.
at com.jolbox.bonecp.DefaultConnectionStrategy.getConnectionInternal(DefaultConnectionStrategy.java:88) ~[bonecp.jar:na]
at com.jolbox.bonecp.AbstractConnectionStrategy.getConnection(AbstractConnectionStrategy.java:90) ~[bonecp.jar:na]
at com.jolbox.bonecp.BoneCP.getConnection(BoneCP.java:553) ~[bonecp.jar:na]
at com.jolbox.bonecp.BoneCPDataSource.getConnection(BoneCPDataSource.java:131) ~[bonecp.jar:na]
at play.api.db.DBApi$class.getConnection(DB.scala:67) ~[play-jdbc_2.10.jar:2.2.1]
at play.api.db.BoneCPApi.getConnection(DB.scala:276) ~[play-jdbc_2.10.jar:2.2.1]
My application.conf (db section)
db.default.driver=org.h2.Driver
db.default.url="jdbc:h2:file:database/[my_db]"
db.default.logStatements=true
db.default.idleConnectionTestPeriod=5 minutes
db.default.connectionTestStatement="SELECT 1"
db.default.maxConnectionAge=0
db.default.connectionTimeout=10000
Initially the only thing set in my config was the connection, and the error occurred; I added all the other settings while reading up on the issue on the web.
What is interesting is that when I use the h2 in-memory db, it works once after server start and after that it fails. When I use the h2 file-system db, it only works once, regardless of the server instance.
Can anyone give me some insight into this issue? I have found some stuff on a BoneCP problem and tried upgrading to 0.8.0-rc1, but nothing changed... I am at a loss =(
Try to set a maxConnectionAge and an idle timeout.
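For example, in application.conf (these are BoneCP-backed settings in Play 2.2; the values are arbitrary):
db.default.maxConnectionAge=10 minutes
db.default.idleMaxAge=5 minutes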
Turns out the error was somewhere else entirely... it was a good ol' stack overflow... have not seen one in a long time. I tried down-voting my question but it's not possible ^^