Is there a way to handle sessions explicitly in Slick 3? I currently have some code that looks like this:
def findUserByEmail(email: String): Option[User] = {
  val users = TableQuery[Users]
  val action = users.filter(_.email === email).result.headOption
  val result = db.run(action.transactionally)
  Await.result(result, Duration.Inf)
}
It works fine the first few times I run it, but then I start running into issues where it looks like connections/sessions are being left open (see below). This code runs inside AWS Lambda functions, and I'm thinking I need to handle sessions more explicitly. How would I do this in Slick 3?
"errorMessage": "Timeout after 5000ms of waiting for a connection.",
"errorType": "java.sql.SQLTimeoutException",
"stackTrace": [
"com.zaxxer.hikari.pool.BaseHikariPool.getConnection(BaseHikariPool.java:233)",
"com.zaxxer.hikari.pool.BaseHikariPool.getConnection(BaseHikariPool.java:183)",
"com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:93)",
"slick.jdbc.hikaricp.HikariCPJdbcDataSource.createConnection(HikariCPJdbcDataSource.scala:18)",
"slick.jdbc.JdbcBackend$BaseSession.<init>(JdbcBackend.scala:424)",
"slick.jdbc.JdbcBackend$DatabaseDef.createSession(JdbcBackend.scala:47)",
"slick.jdbc.JdbcBackend$DatabaseDef.createSession(JdbcBackend.scala:38)",
"slick.basic.BasicBackend$DatabaseDef.acquireSession(BasicBackend.scala:218)",
"slick.basic.BasicBackend$DatabaseDef.acquireSession$(BasicBackend.scala:217)",
"slick.jdbc.JdbcBackend$DatabaseDef.acquireSession(JdbcBackend.scala:38)",
"slick.basic.BasicBackend$DatabaseDef$$anon$2.run(BasicBackend.scala:239)",
"java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)",
"java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)",
"java.lang.Thread.run(Thread.java:745)"
],
"cause": {
"errorMessage": "FATAL: remaining connection slots are reserved for non-replication superuser connections",
"errorType": "org.postgresql.util.PSQLException",
"stackTrace": [
"org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2455)",
"org.postgresql.core.v3.QueryExecutorImpl.readStartupMessages(QueryExecutorImpl.java:2586)",
"org.postgresql.core.v3.QueryExecutorImpl.<init>(QueryExecutorImpl.java:113)",
"org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:222)",
"org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:52)",
"org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:216)",
"org.postgresql.Driver.makeConnection(Driver.java:404)",
"org.postgresql.Driver.connect(Driver.java:272)",
You could try setting a query timeout, like this:
db.run(action.transactionally.withStatementParameters(statementInit = st => st.setQueryTimeout(100)))
You can also set various properties on the HikariCP connection pool, as below:
slick {
  // https://github.com/slick/slick/blob/master/slick-hikaricp/src/main/scala/slick/jdbc/hikaricp/HikariCPJdbcDataSource.scala
  dataSourceClass = "slick.jdbc.DriverDataSource"
  user = ${database.user}
  password = ${database.password}
  url = ${database.url}
  connectionPool = HikariCP
  maxConnections = 50
  numThreads = 10
  queueSize = 5000
  connectionInitSql = "SELECT 1;"
  connectionTestQuery = "SELECT 1;"
  registerMbeans = true
  properties = {
    driver = ${database.driver}
    url = ${database.url}
  }
}
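Since the question is specifically about AWS Lambda, one more option, as a rough sketch of my own (not something the Slick docs prescribe): make sure the Database and its pool are closed when you are done, so connections cannot pile up across invocations. The sketch assumes a PostgresProfile-based schema with the Users table from the question and the "slick" config block above; adapt the names to your setup.

import scala.concurrent.Await
import scala.concurrent.duration.Duration
import slick.jdbc.PostgresProfile.api._

def findUserByEmail(email: String): Option[User] = {
  // Assumption: the "slick" block shown above is on the classpath (application.conf),
  // so Database.forConfig builds the HikariCP pool from it.
  val db = Database.forConfig("slick")
  try {
    val users  = TableQuery[Users]
    val action = users.filter(_.email === email).result.headOption
    // A single read does not need .transactionally.
    Await.result(db.run(action), Duration.Inf)
  } finally {
    db.close() // shuts down the pool so this invocation leaves no connections behind
  }
}

Opening and closing a pool on every call is heavy, so if your Lambda container gets reused, the usual alternative is a single shared Database per container with maxConnections kept small enough that concurrent Lambdas cannot exhaust the Postgres connection slots.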
Using these parameters:
canada {
  hosts = ["dd.weather.gc.ca"]
  username = "anonymous"
  password = "anonymous"
  port = 5671
  exchange = "xpublic"
  queue = "q_anonymous_gsk"
  routingKey = "v02.post.observations.swob-ml.#"
  requestedHeartbeat = 300
  ssl = true
}
I can connect to a weather service in Canada using NewMotion/Akka, but when I try op-rabbit, I get:
ACCESS_REFUSED - access to exchange 'xpublic' in vhost '/' refused for user 'anonymous'
[INFO] [foo-akka.actor.default-dispatcher-7] [akka://foo/user/$a/connection] akka://foo/user/$a/connection connected to amqp://anonymous@{dd.weather.gc.ca:5671}:5671//
[INFO] [foo-op-rabbit.default-channel-dispatcher-6] [akka://foo/user/$a/connection/$a] akka://foo/user/$a/connection/$a connected
[INFO] [foo-akka.actor.default-dispatcher-4] [akka://foo/user/$a/connection/confirmed-publisher-channel] akka://foo/user/$a/connection/confirmed-publisher-channel connected
[INFO] [foo-akka.actor.default-dispatcher-4] [akka://foo/user/$a/connection/$b] akka://foo/user/$a/connection/$b connected
[ERROR] [foo-akka.actor.default-dispatcher-3] [akka://foo/user/$a/subscription-q_anonymous_gsk-1] Connection related error while trying to re-bind a consumer to q_anonymous_gsk. Waiting in anticipating of a new channel.
...
Caused by: com.rabbitmq.client.ShutdownSignalException: channel error; protocol method: #method<channel.close>(reply-code=403, reply-text=ACCESS_REFUSED - access to exchange 'xpublic' in vhost '/' refused for user 'anonymous', class-id=40, method-id=10)
The following works in NewMotion/Akka:
val inQueue = "q_anonymous_gsk"
val inExchange = "xpublic"
val canadaQueue = canadaChannel.queueDeclare(inQueue, false, true, false, null).getQueue
canadaChannel.queueBind(canadaQueue, inExchange, inQueue)
val consumer = new DefaultConsumer(canadaChannel) {
  override def handleDelivery(consumerTag: String, envelope: Envelope, properties: BasicProperties, body: Array[Byte]) {
    val s = fromBytes(body)
    if (republishElsewhere) {
      // ...
    }
  }
}
canadaChannel.basicConsume(canadaQueue, true, consumer)
but using op-rabbit like this:
val inQueue = "q_anonymous_gsk"
val inExchange = "xpublic"
val inRoutingKey = "v02.post.observations.swob-ml.#"
val rabbitCanada: ActorRef = actorSystem.actorOf(Props(classOf[RabbitControl], connParamsCanada))
def runSubscription(): SubscriptionRef = Subscription.run(rabbitCanada) {
  channel(qos = 3) {
    consume(topic(queue(inQueue), List(inRoutingKey))) {
      (body(as[String]) & routingKey) { (msg, key) =>
        ack
      }
    }
  }
}
I get the ACCESS_REFUSED error near the top of this post. Why? How do I fix this if I want to use op-rabbit?
Have you tried using the correct vhost, i.e. one where the anonymous user actually has permissions?
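One way to check this independently of op-rabbit is to connect with the plain RabbitMQ Java client and do a passive declare of the exchange. This is only a diagnostic sketch of mine; the vhost "/" below is an assumption and should be replaced with whatever vhost the broker actually grants to anonymous.

import com.rabbitmq.client.ConnectionFactory

val factory = new ConnectionFactory()
factory.setHost("dd.weather.gc.ca")
factory.setPort(5671)
factory.setUsername("anonymous")
factory.setPassword("anonymous")
factory.setVirtualHost("/") // assumption: swap in the vhost you expect to have rights on
factory.useSslProtocol()    // port 5671 is the TLS port

val connection = factory.newConnection()
val channel = connection.createChannel()
// A passive declare only checks that the exchange exists and that this user may access it;
// if this throws the same 403, the problem is the vhost/permissions, not op-rabbit itself.
channel.exchangeDeclarePassive("xpublic")
channel.close()
connection.close()

If the passive declare succeeds but the op-rabbit subscription still fails, the difference may be that the binding in your subscription declares the exchange actively, which needs configure permission the anonymous user probably does not have.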
I am just trying to write a simple spec like this:
"saves the record on create" in {
val request = FakeRequest(POST, "/countries").withJsonBody(Json.parse("""{ "country": {"title":"Germany", "abbreviation":"GER"} }"""))
val create = route(app, request).get
status(create) mustBe OK
contentType(create) mustBe Some("application/json")
contentAsString(create) must include("country")
}
But on execution it throws this error:
java.util.concurrent.RejectedExecutionException: Task slick.backend.DatabaseComponent$DatabaseDef$$anon$2@f456097 rejected from java.util.concurrent.ThreadPoolExecutor@6265d40c[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 1]
It works fine for the GET request test for the index page. Any ideas how to work around this?
The problem was OneAppPerTest, which caused problems with DB connections: just replacing it with OneAppPerSuite solves the problem.
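For reference, a minimal sketch of what the suite might look like with scalatestplus-play's OneAppPerSuite (the example body and route come from the question; the class name is made up):

import org.scalatestplus.play.{OneAppPerSuite, PlaySpec}
import play.api.libs.json.Json
import play.api.test.FakeRequest
import play.api.test.Helpers._

// One Application (and therefore one database thread pool) is shared across the
// whole suite instead of being shut down after every single test.
class CountriesControllerSpec extends PlaySpec with OneAppPerSuite {
  "saves the record on create" in {
    val request = FakeRequest(POST, "/countries")
      .withJsonBody(Json.parse("""{ "country": {"title":"Germany", "abbreviation":"GER"} }"""))
    val create = route(app, request).get
    status(create) mustBe OK
    contentType(create) mustBe Some("application/json")
    contentAsString(create) must include("country")
  }
}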
After I run the following spec, the table exists. I expected it to never be present as it should only exist within the eventually rolled-back transaction.
import java.util.UUID

import org.specs2.mutable.Specification
import scalikejdbc.{DB, NamedDB}
import scalikejdbc.config.DBs
import scalikejdbc.specs2.mutable.AutoRollback

class MyQuerySpec extends Specification with ArbitraryInput {
  sequential

  DBs.setup('myDB)

  "creating the table" in new AutoRollback {
    override def db(): DB = NamedDB('myDB).toDB()
    private val tableName = s"test_${UUID.randomUUID().toString.replaceAll("-", "_")}"
    private val query = new MyQuery(tableName)
    query.createTable
    ok
  }
}
The line DBs.setup('myDB) is not part of the examples. But if I remove it I get the exception java.lang.IllegalStateException: Connection pool is not yet initialized.(name:'myDB)
The source of MyQuery.createTable:
SQL(s"DROP TABLE IF EXISTS $tableName").execute().apply()
SQL(s"""
|CREATE TABLE $tableName (
| id bigint PRIMARY KEY
|)""".stripMargin).execute().apply()
Config:
db {
  myDB {
    driver = "org.postgresql.Driver"
    url = "****"
    user = "****"
    password = "****"
    poolInitialSize = 1
    poolMaxSize = 300
    poolConnectionTimeoutMillis = 120000
    poolValidationQuery = "select 1 as one"
    poolFactoryName = "commons-dbcp2"
  }
}
ScalikeJDBC v2.2.9
MyQuery#createTable must accept an implicit session parameter, like this:
def createTable(implicit session: DBSession)
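Putting that into the class from the question, as a minimal sketch (the class shape is assumed from the snippets above):

import scalikejdbc._

class MyQuery(tableName: String) {
  // Taking the DBSession implicitly lets AutoRollback pass in its transactional
  // session, so the DDL runs inside the transaction that is rolled back afterwards.
  def createTable(implicit session: DBSession): Unit = {
    SQL(s"DROP TABLE IF EXISTS $tableName").execute().apply()
    SQL(s"""
      |CREATE TABLE $tableName (
      |  id bigint PRIMARY KEY
      |)""".stripMargin).execute().apply()
  }
}

With that in place, query.createTable inside the AutoRollback example picks up the rollback-only session instead of an auto-committing one.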
I am losing messages in my Tornado chat and I do not know how to detect when a message wasn't sent, so that I can send it again.
Is there any way to detect when the connection is lost, and to resend the message once the connection comes back?
This is my code:
def get(self):
    try:
        json.dumps(MessageMixin.cache)
    except KeyError:
        raise tornado.web.HTTPError(404)

class MessageMixin(object):
    waiters = {}
    cache = {}
    cache_size = 200

    def wait_for_messages(self, cursor=None):
        t = self.section_slug
        waiters = self.waiters.setdefault(t, [])
        result_future = Future()
        waiters.append(result_future)
        return result_future

    def cancel_wait(self, future):
        t = self.section_slug
        waiters = self.waiters.setdefault(t, [])
        waiters.remove(future)
        # Set an empty result to unblock any coroutines waiting.
        future.set_result([])

    def new_messages(self, message):
        t = self.section_slug
        #cache = self.cache.setdefault(t, [])
        #print t
        #print self.waiters.setdefault(t, [])
        waiters = self.waiters.setdefault(t, [])
        for future in waiters:
            try:
                if message is not None:
                    future.set_result(message)
            except Exception:
                logging.error("Error in waiter callback", exc_info=True)
        waiters = []
        #self.cache.extend(message)
        #if len(self.cache) > self.cache_size:
        #    self.cache = self.cache[-self.cache_size:]

class MessageNewHandler(MainHandler, MessageMixin):
    def post(self, section_slug):
        self.section_slug = section_slug
        post = self.get_argument("html")
        idThread = self.get_argument("idThread")
        isOpPost = self.get_argument("isOpPost")
        arg_not = self.get_argument("arg")
        type_not = self.get_argument("type")
        redirect_to = self.get_argument("next", None)
        message = {"posts": [post], "idThread": idThread, "isOpPost": isOpPost,
                   "type": type_not, "arg_not": arg_not}
        if redirect_to:
            self.redirect(redirect_to)
        else:
            self.write(post)
        self.new_messages(message)

class MessageUpdatesHandler(MainHandler, MessageMixin):
    @gen.coroutine
    def post(self, section_slug):
        self.section_slug = section_slug
        try:
            self.future = self.wait_for_messages(cursor=self.get_argument("cursor", None))
            data = yield self.future
            if self.request.connection.stream.closed():
                return
            self.write(data)
        except Exception:
            raise tornado.web.HTTPError(404)

    def on_connection_close(self):
        self.cancel_wait(self.future)

class Application(tornado.web.Application):
    def __init__(self):
        handlers = [
            (r"/api/1\.0/stream/(\w+)", MessageUpdatesHandler),
            (r"/api/1\.0/streamp/(\w+)", MessageNewHandler)
        ]
        tornado.web.Application.__init__(self, handlers)

def main():
    tornado.options.parse_command_line()
    app = Application()
    port = int(os.environ.get("PORT", 5000))
    app.listen(port)
    tornado.ioloop.IOLoop.instance().start()

if __name__ == "__main__":
    main()
In the original chatdemo, this is what the cursor parameter to wait_for_messages is for: the browser tells you the last message it got, so you can send it every message since then. You need to buffer messages and potentially re-send them in wait_for_messages. The code you've quoted here will only send messages to those clients that are connected at the time the message comes in. Remember that in long-polling, sending a message puts the client out of the "waiting" state for the duration of the network round-trip, so even when things are working normally clients will constantly enter and leave the waiting state.
I have been working on this issue for quite a while now and I cannot find a solution...
A web app built with Play Framework 2.2.1, using an H2 DB (for dev) and a simple model package.
I am trying to implement a REST JSON endpoint and the code works... but only once per server instance.
def createOtherModel() = Action(parse.json) { request =>
  request.body \ "name" match {
    case _: JsUndefined => BadRequest(Json.obj("error" -> true,
      "message" -> "Could not match name =(")).as("application/json")
    case name: JsValue =>
      request.body \ "value" match {
        case _: JsUndefined => BadRequest(Json.obj("error" -> true,
          "message" -> "Could not match value =(")).as("application/json")
        case value: JsValue =>
          // this breaks the second time
          val session = ThinkingSession.dummy
          val json = Json.obj(
            "content" -> value,
            "thinkingSession" -> session.id
          )
          Ok(Json.obj("content" -> json)).as("application/json")
      }
    } else {
      BadRequest(Json.obj("error" -> true,
        "message" -> "Name was not content =(")).as("application/json")
    }
  }
}
So basically I read the JSON, echo the "value" value, create a model object and send its id.
The ThinkingSession.dummy function does this:
def all(): List[ThinkingSession] = {
  // Tried explicitly closing connection, no difference
  //val conn = DB.getConnection()
  //try {
  //  DB.withConnection { implicit conn =>
  //    SQL("select * from thinking_session").as(ThinkingSession.DBParser *)
  //  }
  //} finally {
  //  conn.close()
  //}
  DB.withConnection { implicit conn =>
    SQL("select * from thinking_session").as(ThinkingSession.DBParser *)
  }
}

def dummy: ThinkingSession = {
  (all() head)
}
So this should do a SELECT * FROM thinking_session, create a list of model objects from the result, and return the first element of that list.
This works fine the first time after server start but the second time I get a
play.api.Application$$anon$1: Execution exception[[SQLException: Timed out waiting for a free available connection.]]
at play.api.Application$class.handleError(Application.scala:293) ~[play_2.10.jar:2.2.1]
at play.api.DefaultApplication.handleError(Application.scala:399) [play_2.10.jar:2.2.1]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$2$$anonfun$applyOrElse$3.apply(PlayDefaultUpstreamHandler.scala:261) [play_2.10.jar:2.2.1]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$2$$anonfun$applyOrElse$3.apply(PlayDefaultUpstreamHandler.scala:261) [play_2.10.jar:2.2.1]
at scala.Option.map(Option.scala:145) [scala-library.jar:na]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$2.applyOrElse(PlayDefaultUpstreamHandler.scala:261) [play_2.10.jar:2.2.1]
Caused by: java.sql.SQLException: Timed out waiting for a free available connection.
at com.jolbox.bonecp.DefaultConnectionStrategy.getConnectionInternal(DefaultConnectionStrategy.java:88) ~[bonecp.jar:na]
at com.jolbox.bonecp.AbstractConnectionStrategy.getConnection(AbstractConnectionStrategy.java:90) ~[bonecp.jar:na]
at com.jolbox.bonecp.BoneCP.getConnection(BoneCP.java:553) ~[bonecp.jar:na]
at com.jolbox.bonecp.BoneCPDataSource.getConnection(BoneCPDataSource.java:131) ~[bonecp.jar:na]
at play.api.db.DBApi$class.getConnection(DB.scala:67) ~[play-jdbc_2.10.jar:2.2.1]
at play.api.db.BoneCPApi.getConnection(DB.scala:276) ~[play-jdbc_2.10.jar:2.2.1]
My application.conf (db section)
db.default.driver=org.h2.Driver
db.default.url="jdbc:h2:file:database/[my_db]"
db.default.logStatements=true
db.default.idleConnectionTestPeriod=5 minutes
db.default.connectionTestStatement="SELECT 1"
db.default.maxConnectionAge=0
db.default.connectionTimeout=10000
Initially the only thing set in my config was the connection, and the error occurred. I added all the other settings while reading up on the issue on the web.
What is interesting is that when I use the H2 in-memory DB, it works once after server start and after that it fails. When I use the H2 file-based DB, it only works once, regardless of the server instance.
Can anyone give me some insight into this issue? I have found some reports of BoneCP problems and tried upgrading to 0.8.0-rc1, but nothing changed... I am at a loss =(
Try setting a maxConnectionAge and an idle timeout.
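For example (just a sketch; the values are arbitrary and these are the BoneCP-related keys I believe Play 2.2 exposes, so double-check them against your Play version):

db.default.maxConnectionAge=30 minutes
db.default.idleMaxAge=10 minutes
db.default.idleConnectionTestPeriod=5 minutes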
Turns out the error was somewhere else entirely... it was a good ol' stack overflow; I have not seen one in a long time. I tried down-voting my own question, but that's not possible ^^