SQLTimeoutException in play-slick - scala

I'm using play-slick with slick 3.0.0 in this way:
I got a connection by
val conn = db.createSession.conn
then got statement:
val statement = conn.prepareStatement(querySQL)
and return ResultSet:
Future{statement.executeQuery()}
But I got a problem: after running this query about 50 times, I got the exception:
SQLTimeoutException: Timeout after 1000ms of waiting for a connection.
I know this may be caused by connections not being closed, and I didn't close the connection or session manually in my code.
I want to know:
Will a connection created this way be closed and returned to the connection pool automatically?
Was my situation caused by connections not being released?
How do I close a connection manually?
Any help would be greatly appreciated!

Remark: it would be most helpful if you posted your full code (including the call that is performed 50 times).
Will a connection created this way be closed and returned to the connection pool automatically?
No. Java 7 (and up) provides the so-called try-with-resources statement (see https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html ) to auto-close resources, but AFAIK this mechanism was not available in Scala at the time (Scala 2.13 later added scala.util.Using for the same purpose).
Still, Scala provides the loan pattern (see https://wiki.scala-lang.org/display/SYGN/Loan , especially using ), which offers an FP way of making sure resources are finally closed.
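For illustration, here is a minimal, self-contained sketch of the Java try-with-resources mechanism mentioned above. The FakeConnection class is a made-up stand-in for a pooled JDBC connection, not a real driver class:

```java
// Stand-in for a pooled JDBC connection (not a real driver class).
class FakeConnection implements AutoCloseable {
    boolean closed = false;

    @Override
    public void close() {
        closed = true;
    }
}

public class TryWithResourcesDemo {
    public static void main(String[] args) {
        FakeConnection saved = null;
        // try-with-resources: close() is called automatically when the
        // block exits, even if an exception is thrown inside it.
        try (FakeConnection conn = new FakeConnection()) {
            saved = conn;
            System.out.println("inside block, closed = " + conn.closed);
        }
        System.out.println("after block, closed = " + saved.closed);
    }
}
```

The loan pattern's using helper achieves the same effect in Scala: the helper acquires the resource, passes it to a function, and closes it in a finally block.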
Was my situation caused by connections not being released?
As long as you don't post your full code, this is only a guess: yes, not closing connections exhausts the connection pool, so that eventually no new connections are available.
How do I close a connection manually?
connection.close()
Ideally call this in a finally block, so the connection is returned to the pool even if the query throws.
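To see why a forgotten close() eventually times out, here is a small self-contained sketch. TinyPool is a made-up stand-in for a real pool (play-slick uses HikariCP by default), but the failure mode is the same: every acquire without a matching release leaves one fewer connection available.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// A made-up, minimal connection pool; real pools behave analogously
// when they run dry.
class TinyPool {
    private final Deque<String> free = new ArrayDeque<>();

    TinyPool(int size) {
        for (int i = 0; i < size; i++) free.push("conn-" + i);
    }

    String acquire() {
        if (free.isEmpty())
            throw new IllegalStateException("timeout waiting for a connection");
        return free.pop();
    }

    void release(String conn) {
        free.push(conn);
    }
}

public class PoolLeakDemo {
    public static void main(String[] args) {
        // Leaking: acquiring without releasing fails once the pool is empty.
        TinyPool leaky = new TinyPool(10);
        try {
            for (int i = 0; i < 50; i++) leaky.acquire();
        } catch (IllegalStateException e) {
            System.out.println("failed as expected: " + e.getMessage());
        }

        // Correct: releasing in finally lets 50 queries share 10 connections.
        TinyPool healthy = new TinyPool(10);
        for (int i = 0; i < 50; i++) {
            String conn = healthy.acquire();
            try {
                // ... run the query with conn here ...
            } finally {
                healthy.release(conn);
            }
        }
        System.out.println("50 queries completed against a pool of 10");
    }
}
```

This matches the symptom in the question: the code works for the first several calls and only starts timing out once the pool is drained.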

Related

Can I reuse a connection in MongoDB? How do these connections actually work?

Trying to do some simple things with MongoDB, my mind got stuck on something that feels kind of strange to me.
client = MongoClient(connection_string)
db = client.database
print(db)
client.close()
I thought that when I make a connection, only that one is used throughout the rest of the code, until the close() method. But it doesn't seem to work that way... I don't know how I ended up having 9 connections when there was supposed to be a single one, and even if each 'request' is a connection, there are too many of them.
For now it's not a big problem; it just bothers me that I don't know exactly how this works!
When you do MongoClient(), you are not establishing just one connection. In fact you are creating the client, which will have a connection pool. When you do one or multiple requests, the driver uses an available connection from the pool. When the use is complete, the connection goes back to the pool.
Calling MongoClient constructor every time you need to talk to the db is a very bad practice and will incur a penalty for the handshake. Use dependency injection or singleton to have MongoClient.
According to the documentation, you should create one client per process.
Your code seems to be the correct way if it is a single-threaded process. If you don't need any more connections to the server, you can limit the pool size by specifying it explicitly:
client = MongoClient(host, port, maxPoolSize=<num>)
On the other hand, if the code might later use the same connection, it is better to simply create the client once in the beginning, and use it across the code.
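A common way to follow the "one client per process" advice is a lazily initialized singleton. Here is a sketch in Java (the Client class below is a made-up stand-in for a real client such as MongoClient, whose constructor performs the handshake and sets up the pool):

```java
// Made-up stand-in for an expensive client such as MongoClient:
// constructing it performs the handshake and sets up the pool.
class Client {
    static int constructed = 0;

    Client() {
        constructed++;
    }
}

public class ClientHolder {
    // Initialization-on-demand holder: the class loader guarantees the
    // instance is created lazily, exactly once, and thread-safely.
    private static class Holder {
        static final Client INSTANCE = new Client();
    }

    public static Client get() {
        return Holder.INSTANCE;
    }

    public static void main(String[] args) {
        // Many "requests", but the handshake happens exactly once.
        Client a = get();
        Client b = get();
        System.out.println("same instance: " + (a == b));
        System.out.println("constructed: " + Client.constructed);
    }
}
```

Every part of the code then asks the holder for the client instead of constructing a new one, so all requests share one pool.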

When to close hbase connection and what will happen if connection not closed

I am new to HBase. I want to perform 2 scan operations on different tables.
So I have written a generic function to scan, and I close the connection at the end, in Scala.
functionHbaseScan(tableName1, scanField)
functionHbaseScan(tableName2, scanField)
When I called the function for tableName1, it worked fine and returned results.
But when I called the same function for tableName2, it said the connection was closed.
Is only one connection established per instance in Scala? Do we need to close the connection at the end of the process, in the driver?
Please help me understand the connection process and how it works.
Note: the connection (HConnectionManager) is established using ConnectionFactory.createConnection, and then connection.getTable.
Not sure based on the info provided, but the connection may have been closed by the first function. You can:
Check whether the connection isClosed, as documented at
https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Connection.html
and if necessary, create another connection with createConnection (see
https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/ConnectionFactory.html )
Avoid closing the connection until you're done with it: move the close call outside of the inner function, and wait until both scans are done to close the connection.
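The suggested structure can be sketched like this. FakeHBaseConnection is a made-up stand-in for org.apache.hadoop.hbase.client.Connection, so the example runs without a cluster; with the real API you would obtain the connection from ConnectionFactory.createConnection:

```java
// Made-up stand-in for org.apache.hadoop.hbase.client.Connection.
class FakeHBaseConnection implements AutoCloseable {
    boolean closed = false;

    int scan(String table) {
        if (closed) throw new IllegalStateException("connection closed");
        return table.length(); // pretend this is a row count
    }

    @Override
    public void close() {
        closed = true;
    }
}

public class ScanTwiceDemo {
    public static void main(String[] args) {
        // Open the connection once in the driver, run BOTH scans,
        // and only then let it be closed.
        try (FakeHBaseConnection conn = new FakeHBaseConnection()) {
            int rows1 = conn.scan("Tablename1");
            int rows2 = conn.scan("Tablename2");
            System.out.println("rows scanned: " + (rows1 + rows2));
        }
        // Closing inside the first scan function is exactly what makes
        // the second call fail with "connection closed".
    }
}
```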

How to debug PSQLException: FATAL: sorry, too many clients already in Play

I'm using Play 2.6 with Scala, and something in my program is eating up the connections to my Postgresql database. I keep getting:
PSQLException: FATAL: sorry, too many clients already
in my console. I've checked my max connections with
show max_connections;
And it's at the default of 100, but I shouldn't be eating up this many. Any time I access the database in the application, I use the suggested:
myDB.withConnection { conn => /* ... do work with SQL here ... */ }
block, which according to the documentation should release the connection once execution leaves the block. I don't think anything is getting stuck in a loop, as otherwise other pieces of my code wouldn't be executing. Unfortunately, the stack trace that's printed only shows the "behind the scenes" stuff and doesn't show which caller is establishing the DB connection. Any ideas on how I can find the offender?
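One practical way to find the offender, assuming you are on Play's default HikariCP pool, is to enable HikariCP's leak detection: the pool then logs a warning, including the stack trace of the call that borrowed the connection, whenever a connection is held longer than the threshold. In application.conf this would look roughly like the following (the key name is from Play's HikariCP config namespace and may differ between Play versions):

```hocon
# Warn (with the borrowing stack trace) when a connection is
# held for more than 10 seconds without being returned.
play.db.default.hikaricp.leakDetectionThreshold = 10 seconds
```

You can also watch the server side with select * from pg_stat_activity; in psql, which lists every open session together with its state and last query, and often points straight at the code path that is hoarding connections.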

Elixir / Elixir-mongo collection find breaks on Enum.to_list

This is my first go around with elixir, and I'm trying to make a simple web scraper that saves into mongodb.
I've installed the elixir-mongo package and am able to insert into the database correctly. Sadly, I'm not able to retrieve the values that I have put into the DB.
Here is the error that I am getting:
** (Mix) Could not start application jobboard: exited in: JB.start(:normal, [])
** (EXIT) an exception was raised:
** (ArgumentError) argument error
(elixir) lib/enum.ex:1266: Enum.reduce/3
(elixir) lib/enum.ex:1798: Enum.to_list/1
(jobboard) lib/scraper.ex:8: JB.Scraper.scrape/0
(jobboard) lib/jobboard.ex:26: JB.start/2
(kernel) application_master.erl:272: :application_master.start_it_old/4
If I understand the source correctly, then the mongo library should implement reduce here:
https://github.com/checkiz/elixir-mongo/blob/13211a0c0c9bb5fed29dd2faf7a01342b4e97eb4/lib/mongo_find.ex#L78
Here are the relevant sections of my code:
# JB.Scraper
def scrape do
  urls = JB.ScrapedUrls.unscraped_urls
end

# JB.ScrapedUrls
def unscraped_urls do
  MongoService.find(%{scraped: false})
end

# MongoService
def find(statement) do
  collection |> Mongo.Collection.find(statement) |> Enum.to_list
end

defp collection do
  mongo = Mongo.connect!
  db = mongo |> Mongo.db("simply_hired_urls")
  db |> Mongo.Db.collection("urls")
end
As a bonus, if anyone can tell me how I can get around connecting to Mongo every time I make a new call, that would be awesome. :) I'm still figuring out FP.
Thanks!
Jon
I haven't used this library, but I just made a simple attempt with a simplified version of your code.
I've started with
Mongo.connect!
|> Mongo.db("test")
|> Mongo.Db.collection("foo")
|> Mongo.Collection.find(%{scraped: true})
|> Enum.to_list
This worked fine. Then I suspected that the problem occurs when too many connections are open, so I ran this test repeatedly, and then it failed with the same error you got. It failed consistently when trying to open the connection for the 2037th time. Looking at the mongodb log, I can tell that it can't open another connection:
[initandlisten] can't create new thread, closing connection
To fix this, I simply closed the connection after converting the results to a list, using Mongo.Server.close/1. That fixed the problem.
As you noticed yourself, this is not an optimal way of communicating with the database, and you'd be better off if you could reuse the connection for multiple queries.
A standard way of doing this is to hold on to the connection in a process, such as GenServer or an Agent. The connection becomes a part of the process state, and you can run multiple queries in that process over the same connection.
Obviously, if multiple client processes use a single database process, all queries will be serialized, and the database process then becomes a performance bottleneck. To deal with this, you could open a pool of processes, each one managing a distinct database connection. This can be done in simple way with the poolboy library.
My suggestion is that you try implementing a single GenServer based process that maintains the connection and runs queries. Then see if your code works correctly, and when it does, try to use poolboy to be able to deal with concurrent requests efficiently.
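The GenServer idea (one process owns the connection, and all queries are serialized through it as messages) is language-agnostic. As an illustrative sketch in Java, with a made-up FakeConn standing in for the database connection, a single-threaded executor plays the role of the owning process:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Made-up stand-in for a database connection that must not be used
// from several threads at once.
class FakeConn {
    int query(int x) {
        return x * 2;
    }
}

public class ConnectionOwnerDemo {
    public static void main(String[] args) throws Exception {
        FakeConn conn = new FakeConn();
        // One thread "owns" the connection; submitted tasks are the
        // messages, executed one at a time, like a GenServer mailbox.
        ExecutorService owner = Executors.newSingleThreadExecutor();
        try {
            Future<Integer> a = owner.submit(() -> conn.query(1));
            Future<Integer> b = owner.submit(() -> conn.query(2));
            System.out.println("results: " + a.get() + ", " + b.get());
        } finally {
            owner.shutdown();
        }
    }
}
```

A pool (like poolboy) is then just several such owners, each with its own connection, so concurrent clients don't all queue behind one of them.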

ADO.NET SqlData Client connections never go away

An ASP.NET application I am working on may have a couple hundred users trying to connect. We get an error that the maximum number of connections in the pool has been reached. I understand the concept of connection pools in ADO.NET, although in testing I've found that a connection is left "sleeping" on the MS SQL 2005 server days after the connection was made and the browser was closed. I have tried to limit the connection lifetime in the connection string, but this has no effect. Should I raise the max number of connections? Or have I completely misdiagnosed the real problem?
All of your database connections must either be wrapped in a try...finally:
SqlConnection myConnection = new SqlConnection(connString);
myConnection.Open();
try
{
    // ... do your DB work here ...
}
finally
{
    myConnection.Close();
}
Or...much better yet, create a DAL (Data Access Layer) that has the Close() call in its Dispose method and wrap all of your DB calls with a using:
using (MyQueryClass myQueryClass = new MyQueryClass())
{
// DB Stuff here...
}
A few notes: ASP.NET relies on you to release your connections. They will be released through GC after they've gone out of scope, but you cannot count on this behavior, as it may take a very long time to kick in - much longer than you may be able to afford before your connection pool runs out. Speaking of which - ADO.NET actually pools your connections to the database as you request them (recycling old connections rather than releasing them completely and then requesting them anew from the database). This doesn't really matter as far as you are concerned: you still must call Close()!
Also, you can control the pooling by using your connection string (e.g. Min/Max pool size, etc.). See this article on MSDN for more information.
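For reference, those pooling knobs live directly in the SqlConnection connection string. A sketch (server and database names are placeholders; the values are illustrative, with Connection Lifetime in seconds):

```
Server=myServer;Database=myDb;Integrated Security=SSPI;
Min Pool Size=5;Max Pool Size=100;Connection Lifetime=300
```

Note that Connection Lifetime only retires a connection when it is returned to the pool, which is another reason leaked connections in the scenario above never go away on their own.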