Kotliquery doesn't close PostgreSQL connections

I'm using Kotlin with the kotliquery JDBC framework.
I just ran into a problem. I'm using a remote PostgreSQL database, and after a number of calls to it I get the following error: Failure: too many clients already. This is caused by 100 connections sitting idle.
I'm trying to create a single place where I do the configuration, which I call my BaseDAO. The relevant code for that class looks like this:
import com.zaxxer.hikari.HikariConfig
import com.zaxxer.hikari.HikariDataSource
import kotliquery.Session
import kotliquery.sessionOf
import javax.sql.DataSource

class BaseDAO {
    companion object {
        var url: String = "jdbc:postgresql://server.local:5432/myDatabase"
        var user: String = "postgres"
        var pass: String = "postgres"
        val config: HikariConfig = HikariConfig()

        private fun dataSource(): DataSource {
            var hikariConfig: HikariConfig = HikariConfig()
            hikariConfig.setDriverClassName("org.postgresql.Driver")
            hikariConfig.setJdbcUrl(url)
            hikariConfig.setUsername(user)
            hikariConfig.setPassword(pass)
            hikariConfig.setMaximumPoolSize(5)
            hikariConfig.setConnectionTestQuery("SELECT 1")
            hikariConfig.setPoolName("springHikariCP")
            hikariConfig.addDataSourceProperty("dataSource.cachePrepStmts", "true")
            hikariConfig.addDataSourceProperty("dataSource.prepStmtCacheSize", "250")
            hikariConfig.addDataSourceProperty("dataSource.prepStmtCacheSqlLimit", "2048")
            hikariConfig.addDataSourceProperty("dataSource.useServerPrepStmts", "true")
            var dataSource: HikariDataSource = HikariDataSource(hikariConfig)
            return dataSource
        }

        @JvmStatic fun getSession(): Session {
            return sessionOf(dataSource())
        }
    }
}
And one of my DAOs:
import kotliquery.Row
import kotliquery.queryOf
import kotliquery.using

class UserDAO {
    val toUser: (Row) -> User = { row ->
        User(
            row.int("id"),
            row.string("username"),
            row.string("usertype")
        )
    }

    fun getAllUsers(): List<User> {
        var returnedList: List<User> = arrayOf<User>().toList()
        using(BaseDAO.getSession()) { session ->
            val allUsersQuery = queryOf("select * from quintor_user").map(toUser).asList
            returnedList = session.run(allUsersQuery)
            session.connection.close()
            session.close()
        }
        return returnedList
    }
}
After looking into Kotliquery's source code I realized that session.connection.close() and session.close() shouldn't even be necessary when using using, since using closes a Closeable, and the retrieved session is one. But without them I got the same error (I had to restart the PostgreSQL database -- 100 idle connections).
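For context, using is essentially the standard try/finally close pattern; a simplified sketch of the idea (not the actual kotliquery source):
fun <R, A : AutoCloseable?> using(closeable: A, body: (A) -> R): R {
    try {
        return body(closeable)
    } finally {
        closeable?.close() // the session is closed here whether body succeeded or failed
    }
}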
I was wondering: is there an error in my code, or is this an error in Kotliquery?
(I also submitted GitHub issue #6, but figured the community here might be bigger than 24 people.)

It seems that each call to BaseDAO.getSession() creates a new HikariDataSource. This means that every Session effectively gets its own database connection pool. To resolve that, you need to maintain a single instance of HikariDataSource instead, i.e.:
class BaseDAO {
    companion object {
        ...
        private val dataSource by lazy {
            var hikariConfig: HikariConfig = HikariConfig()
            ...
            var dataSource: HikariDataSource = HikariDataSource(hikariConfig)
            dataSource
        }

        @JvmStatic fun getSession(): Session {
            return sessionOf(dataSource)
        }
    }
}
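Spelled out in full, that fix might look like this (a sketch; the pool settings are the ones from the question, trimmed to the essentials):
import com.zaxxer.hikari.HikariConfig
import com.zaxxer.hikari.HikariDataSource
import kotliquery.Session
import kotliquery.sessionOf
import javax.sql.DataSource

class BaseDAO {
    companion object {
        // Created once, on first use, and shared by every session afterwards.
        private val dataSource: DataSource by lazy {
            val hikariConfig = HikariConfig()
            hikariConfig.driverClassName = "org.postgresql.Driver"
            hikariConfig.jdbcUrl = "jdbc:postgresql://server.local:5432/myDatabase"
            hikariConfig.username = "postgres"
            hikariConfig.password = "postgres"
            hikariConfig.maximumPoolSize = 5
            HikariDataSource(hikariConfig)
        }

        // Each Session borrows a connection from the shared pool,
        // and using(...) returns it to the pool instead of leaking it.
        @JvmStatic
        fun getSession(): Session = sessionOf(dataSource)
    }
}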

Related

ScriptUtils.executeSqlScript throws "connection is closed" after spring boot upgrade

I was updating Spring Boot from 2.5.1 to 2.7, together with the r2dbc and postgres dependencies. I did not change the application.yml or the test setup. Before the update my repository tests ran fine with Testcontainers, but now I see the exception below, thrown by an @AfterEach that tries to clean the DB:
2022-05-29 10:04:52.447 INFO 16673 --- [tainers-r2dbc-0] 🐳 [postgres:13.2] : Container postgres:13.2 started in PT1.244757S
Failed to execute SQL script statement #1 of InputStream resource [resource loaded through InputStream]: DROP SCHEMA public CASCADE; nested exception is io.r2dbc.postgresql.client.ReactorNettyClient$PostgresConnectionClosedException: Cannot exchange messages because the connection is closed
org.springframework.r2dbc.connection.init.ScriptStatementFailedException: Failed to execute SQL script statement #1 of InputStream resource [resource loaded through InputStream]: DROP SCHEMA public CASCADE; nested exception is io.r2dbc.postgresql.client.ReactorNettyClient$PostgresConnectionClosedException: Cannot exchange messages because the connection is closed
at org.springframework.r2dbc.connection.init.ScriptUtils.lambda$runStatement$9(ScriptUtils.java:571)
This is my abstract RepositoryTest:
@DataR2dbcTest
@ActiveProfiles("test")
internal abstract class RepositoryTest {

    @Autowired
    protected lateinit var connectionFactory: ConnectionFactory

    @AfterEach
    fun clean() {
        runSql(
            """
            DROP SCHEMA public CASCADE;
            CREATE SCHEMA public;
            """
        )
    }

    protected fun runSql(sql: String) {
        runScript(InputStreamResource(sql.byteInputStream()))
    }

    protected fun runScript(sqlScript: Resource) {
        runBlocking {
            val connection = connectionFactory.create().awaitFirst()
            ScriptUtils.executeSqlScript(connection, sqlScript)
                .block() // <---- throws the said exception, but it worked before the update.
        }
    }
}
My actual test looks like this:
internal class MyRepoTest : RepositoryTest() {

    @Autowired
    private lateinit var myRepo: MyRepository

    @Test
    fun someTest() {
        val userId = 3429L
        val myEntities = ...
        runBlocking { myRepo.saveAll(myEntities).collect() }

        val result = myRepo.findAllByUserId(userId).asFlux()

        StepVerifier.create(result)
            .expectNextMatches { it.userId == userId }
            .expectNextMatches { it.userId == userId }
            .verifyComplete()
    }
}
I guess the way I try to execute the SQL commands is not right; how should I do it?
val connection = connectionFactory.create().awaitFirst()
ScriptUtils.executeSqlScript(connection, sqlScript)
    .block() // <---- throws the said exception, but it worked before the update.
EDIT
I figured out that using ResourceDatabasePopulator works fine:
protected fun runScript(sqlScript: Resource) {
    runBlocking {
        ResourceDatabasePopulator(sqlScript).populate(connectionFactory).block()
    }
}
But I still would like to understand why the original implementation now fails.
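For completeness, another variant I may try is to scope the connection explicitly with Reactor's usingWhen, so the connection is closed only after the script has run (a sketch, not verified against the upgraded versions; uses reactor.core.publisher.Mono and kotlinx-coroutines-reactive's awaitFirstOrNull):
protected fun runScript(sqlScript: Resource) {
    runBlocking {
        // Acquire a connection, run the script while it is open,
        // and close it afterwards whether the script succeeded or failed.
        Mono.usingWhen(
            Mono.from(connectionFactory.create()),
            { connection -> ScriptUtils.executeSqlScript(connection, sqlScript) },
            { connection -> Mono.from(connection.close()) }
        ).awaitFirstOrNull()
    }
}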

Upgrade to CSLA 6: ConnectionManager problem

We are trying to upgrade to CSLA 6.
Now we are getting this message:
"ConnectionManager is obsolete, use dependency injection ... use ApplicationContext.LocalContext"
for this code:
using (var ctx = ConnectionManager<OracleConnection>.GetManager("dbEndpoint", true))
We've tried the following code snippet, but all the connections are null.
Could you please help us to correctly get a connection?
var services = new ServiceCollection();
services.AddCsla();
var provider = services.BuildServiceProvider();
DataPortalFactory = provider.GetRequiredService<IDataPortalFactory>();
var appContext = provider.GetRequiredService<Csla.ApplicationContext>();
var conn1 = appContext.LocalContext.GetValueOrNull("dbEndpoint");
var conn2 = appContext.LocalContext.GetValueOrNull("__db:default-dbEndpoint");
var conn3 = appContext.LocalContext["dbEndpoint"];
var conn4 = appContext.LocalContext["__db:default-dbEndpoint"];
Another experiment:
....
var CONNECTION_ORACLE = new OracleConnection(ConfigurationManager.ConnectionStrings["dbEndpoint"].ConnectionString);
services.AddScoped<IDbConnection>(o => CONNECTION_ORACLE);
....
var provider = services.BuildServiceProvider();
...
var connectionResolved = provider.GetRequiredService<IDbConnection>();
appContext.LocalContext.Add("dbEndpoint", connectionResolved);
Then the connection is not null, and inside the factory it is successfully resolved by DI:
public DocFactory(ApplicationContext appContext, IDbConnection connection)
    : base(appContext)
{
    _connection = connection;
}
Then:
[Fetch]
public Doc Fetch(DocCriteria criteria)
{
    bool cancel = false;
    OnFetching(criteria, ref cancel);
    if (cancel) return null;

    Doc item = null;
    OracleConnection connection = _connection as OracleConnection;
The connection is Closed (but NOT null!). It's possible to open it, but if we close it, anyone else consuming it will face a problem, and child objects will also face a problem with the closed connection.
So marking ConnectionManager as obsolete may not be such an obvious way to go. ConnectionManager was very useful for counting open connections, supporting transactions, etc.
Could you please provide a workaround for it?
More attempts:
var connectionString = ConfigurationManager.ConnectionStrings["dbEndpoint"].ConnectionString;
..
appContext.ClientContext.Add("DBConnectionString", connectionString);
...
Factory:
using (var connection = new OracleConnection(ApplicationContext.ClientContext["DBConnectionString"].ToString()))
{
    connection.Open();
Your DAL should require that a database connection be injected.
public class MyDal : IDisposable
{
    public MyDal(OracleConnection connection)
    {
        Connection = connection;
    }

    private OracleConnection Connection { get; set; }

    public MyData GetData()
    {
        // use Connection to get the data
        return data;
    }

    public void Dispose()
    {
        Connection.Dispose();
    }
}
Then in the app server startup code, register your DAL type(s) and also register your connection type.
services.AddScoped(typeof(OracleConnection), serviceProvider =>
{
    // initialize and return the connection here
    return connection;
});
services.AddScoped<MyDal>();
Then, in your data portal operation method (such as create, fetch, etc.), inject your DAL:
[Fetch]
private void Fetch([Inject] MyDal dal)
{
    var data = dal.GetData();
}

ObjectInputStream readObject unexpectedly closing socket

I want to set up a server/client where the client sends a serializable object over a socket to the server. For some reason, I keep getting java.net.SocketException: Socket closed when I try to read the object sent to the server.
Here is my code for the client:
class Client(address: String, port: Int) {
    private val connection: Socket = Socket(address, port)
    private var connected: Boolean = true
    private val writer = ObjectOutputStream(connection.getOutputStream())
    private val reader = ObjectInputStream(connection.getInputStream())

    init {
        println("Connected to server at $address on port $port")
    }

    fun run() {
        var sent = false
        while (connected) {
            try {
                if (!sent) {
                    sent = true
                    writer.use {
                        it.writeObject("Hello")
                        it.flush()
                    }
                    println("Sent")
                } else {
                    println("Didn't send")
                }
                Thread.sleep(1000)
            } catch (ex: Exception) {
                ex.printStackTrace()
                shutdown()
            }
        }
    }

    ...
}
and here is the code for the server:
class ClientHandler(private val client: Socket) {
    private val reader = ObjectInputStream(client.getInputStream())
    private val writer = ObjectOutputStream(client.getOutputStream())
    private var running: Boolean = false

    fun run() {
        running = true
        while (running) {
            try {
                reader.use {
                    val packet = it.readObject()
                    when (packet) {
                        is String -> {
                            println("Received packet with data: ${packet}")
                        }
                    }
                }
            } catch (ex: Exception) {
                ex.printStackTrace()
                shutdown()
            }
        }
    }

    ...
}
The output on my server is
Server is running on port 9999
Client connected: 127.0.0.1
Received packet with data: Hello
java.net.SocketException: Socket closed
at java.base/java.net.SocketInputStream.socketRead0(Native Method)
at java.base/java.net.SocketInputStream.socketRead(SocketInputStream.java:115)
at java.base/java.net.SocketInputStream.read(SocketInputStream.java:168)
...
So, it seems like one instance of my String is making it across, but later calls claim that the socket is closed.
Every other post I've seen related to this problem claims that the sender (client) is closing the socket early. However, I know that the client is not closing the socket through its own means. If I change my client code to:
class Client(address: String, port: Int) {
    ...
    private val writer = connection.getOutputStream() // Regular streams
    private val reader = Scanner(connection.getInputStream())
    ...

    fun run() {
        var sent = false
        while (connected) {
            try {
                if (!sent) {
                    sent = true
                    writer.write("Hello\n".toByteArray()) // Send regular byte array
                    println("Sent")
                } else {
                    println("Didn't send")
                }
                ...
}
and my server code to:
class ClientHandler(private val client: Socket) {
    private val reader = Scanner(client.getInputStream()) // Regular streams
    private val writer = client.getOutputStream()
    private var running: Boolean = false

    fun run() {
        running = true
        while (running) {
            try {
                // Just read lines from stream
                println(reader.nextLine())
            }
            ...
}
then my output is what I expect:
Server is running on port 9999
Client connected: 127.0.0.1
Hello
My only hypothesis is that .readObject() is somehow closing the socket connection, forcing the next readObject() to throw an exception. This doesn't make too much sense to me, though. Why would that happen?
Digging through the code a bit more gave me the answer I needed. It turns out that .use closes its receiver when the block finishes, and closing the ObjectInputStream also closes the underlying socket. Removing the use { } blocks made this work as expected.
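In other words, the streams should live as long as the handler and be closed exactly once, on shutdown. A minimal sketch of the corrected server loop (assuming the same reader and shutdown() as above):
fun run() {
    running = true
    try {
        while (running) {
            // Read objects directly; wrapping the read in use { } would close
            // the ObjectInputStream, and with it the socket, after the first read.
            when (val packet = reader.readObject()) {
                is String -> println("Received packet with data: $packet")
            }
        }
    } catch (ex: Exception) {
        ex.printStackTrace()
    } finally {
        shutdown() // close the stream and socket exactly once, here
    }
}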

Can I use a repository populator bean with Fongo?

I'm using Fongo not only for unit tests but also for integration tests, so I would like to initialize Fongo with some collections. Is that possible?
This is my Java config (based on Oliver G.'s answer):
@EnableAutoConfiguration(exclude = {
        EmbeddedMongoAutoConfiguration.class,
        MongoAutoConfiguration.class,
        MongoDataAutoConfiguration.class
})
@Configuration
@ComponentScan(basePackages = { "com.foo" },
        excludeFilters = { @ComponentScan.Filter(classes = { SpringBootApplication.class }) })
public class ConfigServerWithFongoConfiguration extends AbstractFongoBaseConfiguration {

    private static final Logger log = LoggerFactory.getLogger(ConfigServerWithFongoConfiguration.class);

    @Autowired
    ResourcePatternResolver resourceResolver;

    @Bean
    public Jackson2RepositoryPopulatorFactoryBean repositoryPopulator() {
        Jackson2RepositoryPopulatorFactoryBean factory = new Jackson2RepositoryPopulatorFactoryBean();
        try {
            factory.setResources(resourceResolver.getResources("classpath:static/collections/*.json"));
        } catch (IOException e) {
            log.error("Could not load data", e);
        }
        return factory;
    }
}
When I run my IT tests, the log shows Reading resource: file *.json, but the tests fail because they retrieve nothing (null) from the Fongo database.
Tests are annotated with:
@RunWith(SpringRunner.class)
@SpringBootTest(classes = { ConfigServerWithFongoConfiguration.class })
@AutoConfigureMockMvc
@TestPropertySource(properties = { "spring.data.mongodb.database=fake" })
@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_CLASS)
Lol, I feel so stupid right now. It was a format issue. JSON collections must be formatted like this:
[
    {/*doc1*/},
    {/*doc2*/},
    {/*doc3*/}
]
I was missing the [] and the comma-separated documents.
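Also worth noting: the Jackson-based populator determines the target entity type from a _class attribute in each document by default, so a populator file typically looks like this (the com.foo.User entity and its fields here are hypothetical):
[
    { "_class": "com.foo.User", "_id": "1", "username": "alice", "usertype": "admin" },
    { "_class": "com.foo.User", "_id": "2", "username": "bob", "usertype": "user" }
]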

MongoDB Java driver - resource management with client

Background
I have a remotely hosted server that's running a Java VM with custom server code for a multiplayer real-time quiz game. The server deals with matchmaking, rooms, lobbies, etc. I'm also using a Mongo DB on the same space, which holds all the questions for the mobile phone quiz game.
This is my first attempt at such a project, and although I'm competent in Java, my Mongo skills are novice at best.
Client Singleton
My server contains a static singleton of the Mongo client:
public class ClientSingleton
{
    private static ClientSingleton uniqueInstance;

    // The MongoClient class is designed to be thread safe and shared among threads.
    // We create only 1 instance for our given database cluster and use it across
    // our application.
    private MongoClient mongoClient;
    private MongoClientOptions options;
    private MongoCredential credential;

    private final String password = "xxxxxxxxxxxxxx";
    private final String host = "xx.xx.xx.xx";
    private final int port = 38180;

    private ClientSingleton()
    {
        // Setup client credentials for DB connection (user, db name & password)
        credential = MongoCredential.createCredential("XXXXXX", "DBName", password.toCharArray());

        options = MongoClientOptions.builder()
                .connectTimeout(25000)
                .socketTimeout(60000)
                .connectionsPerHost(100)
                .threadsAllowedToBlockForConnectionMultiplier(5)
                .build();

        try
        {
            // Create client (server address(host, port), credential, options)
            mongoClient = new MongoClient(new ServerAddress(host, port),
                    Collections.singletonList(credential),
                    options);
        }
        catch (UnknownHostException e)
        {
            e.printStackTrace();
        }
    }

    /**
     * Double-checked locking to initialise our client singleton class.
     */
    public static ClientSingleton getInstance()
    {
        if (uniqueInstance == null)
        {
            synchronized (ClientSingleton.class)
            {
                if (uniqueInstance == null)
                {
                    uniqueInstance = new ClientSingleton();
                }
            }
        }
        return uniqueInstance;
    }

    /**
     * @return our mongo client
     */
    public MongoClient getClient() {
        return mongoClient;
    }
}
Notes here:
The Mongo client is new to me, and I understand that failing to properly utilise connection pooling is one major "gotcha" that can greatly impact Mongo DB performance. Also, creating new connections to the DB is expensive, so I should try to re-use existing connections.
I've not left the socket timeout and connect timeout at their defaults (e.g. infinite); if a connection hangs for some reason, I think it would otherwise get stuck forever!
I set the number of milliseconds the driver will wait before a connection attempt is aborted; for connections made through a Platform-as-a-Service (where the server is hosted) it is advised to have a higher timeout (e.g. 25 seconds). I also set the number of milliseconds the driver will wait for a response from the server for all types of requests (queries, writes, commands, authentication, etc.). Finally, I set threadsAllowedToBlockForConnectionMultiplier to 5, which allows up to 500 threads (5 x 100 connections per host) to wait in a FIFO queue for a connection to the DB.
Server Zone
The zone gets a game request from the client and receives the metadata string for the quiz type, in this case "Episode 3". The zone creates a room for the user, or allows the user to join a room with that property.
Server Room
The room then establishes a DB connection to the Mongo collection for the quiz type:
// Get client & collection
mongoDatabase = ClientSingleton.getInstance().getClient().getDB("DBName");
mongoColl = mongoDatabase.getCollection("GOT");
// Query mongo db with meta data string request
queryMetaTags("Episode 3");
Notes here:
Following a game, or I should say after a period of room idle time, the room gets destroyed; this idle time is currently set to 60 minutes. I believe that if connections per host is set to 100, then while this room is idle it would be tying up valuable connection resources.
Question
Is this a good way to manage my client connections?
If I have several hundred concurrently connected games, each accessing the DB to pull questions, then maybe after each request I should free up the client connection for other rooms to use? How should this be done? I'm concerned about possible bottlenecks here!
Mongo Query FYI
// Query our collection documents metaTag elements for a matching string
// @SuppressWarnings("deprecation")
public void queryMetaTags(String query)
{
    // Query to search all documents in current collection
    List<String> continentList = Arrays.asList(new String[]{ query });

    DBObject matchFields = new BasicDBObject("season.questions.questionEntry.metaTags",
            new BasicDBObject("$in", continentList));
    DBObject groupFields = new BasicDBObject("_id", "$_id").append("questions",
            new BasicDBObject("$push", "$season.questions"));
    //DBObject unwindshow = new BasicDBObject("$unwind", "$show");
    DBObject unwindsea = new BasicDBObject("$unwind", "$season");
    DBObject unwindepi = new BasicDBObject("$unwind", "$season.questions");
    DBObject match = new BasicDBObject("$match", matchFields);
    DBObject group = new BasicDBObject("$group", groupFields);

    @SuppressWarnings("deprecation")
    AggregationOutput output = mongoColl.aggregate(unwindsea, unwindepi, match, group);

    String jsonString = null;
    JSONObject jsonObject = null;
    JSONArray jsonArray = null;
    ArrayList<JSONObject> ourResultsArray = new ArrayList<JSONObject>();

    // Loop for each document in our collection
    for (DBObject result : output.results())
    {
        try
        {
            // Parse our results so we can add them to an ArrayList
            jsonString = JSON.serialize(result);
            jsonObject = new JSONObject(jsonString);
            jsonArray = jsonObject.getJSONArray("questions");
            for (int i = 0; i < jsonArray.length(); i++)
            {
                // Put each of our returned questionEntry elements into an ArrayList
                ourResultsArray.add(jsonArray.getJSONObject(i));
            }
        }
        catch (JSONException e1)
        {
            e1.printStackTrace();
        }
    }
    pullOut10Questions(ourResultsArray);
}
The way I've done this is to use Spring to create a MongoClient bean. You can then autowire this bean wherever it is needed.
For example:
MongoConfig.java
import com.mongodb.MongoClient;
import com.mongodb.MongoClientURI;
import com.tescobank.insurance.telematics.data.connector.config.DatabaseProperties;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import java.net.UnknownHostException;
@Configuration
public class MongoConfig {

    private @Autowired DatabaseProperties properties;

    @Bean
    public MongoClient fooClient() throws UnknownHostException {
        return mongo(properties.getFooDatabaseURI());
    }

    // Helper assumed by fooClient() above: builds the client from a connection URI.
    private MongoClient mongo(String databaseURI) throws UnknownHostException {
        return new MongoClient(new MongoClientURI(databaseURI));
    }
}
A class requiring a MongoDB connection:
@Component
public class DatabaseUser {

    private MongoClient mongoClient;
    ....

    @Autowired
    public DatabaseUser(MongoClient mongoClient) {
        this.mongoClient = mongoClient;
    }
}
Spring will then create the connection and wire it in where required. What you've done seems very complex and perhaps tries to recreate the functionality you would get for free by using a tried and tested framework such as Spring. I'd generally try to avoid the use of singletons too if I could. I've had no performance issues using MongoDB connections like this.