ScriptUtils.executeSqlScript throws "connection is closed" after spring boot upgrade - spring-data-r2dbc

I was updating Spring Boot from 2.5.1 to 2.7 together with the R2DBC and Postgres dependencies. I did not change the application.yml or the test setup. Before the update my repository tests ran fine with Testcontainers, but now I see this exception, thrown by an @AfterEach that tries to clean the DB:
2022-05-29 10:04:52.447 INFO 16673 --- [tainers-r2dbc-0] 🐳 [postgres:13.2] : Container postgres:13.2 started in PT1.244757S
Failed to execute SQL script statement #1 of InputStream resource [resource loaded through InputStream]: DROP SCHEMA public CASCADE; nested exception is io.r2dbc.postgresql.client.ReactorNettyClient$PostgresConnectionClosedException: Cannot exchange messages because the connection is closed
org.springframework.r2dbc.connection.init.ScriptStatementFailedException: Failed to execute SQL script statement #1 of InputStream resource [resource loaded through InputStream]: DROP SCHEMA public CASCADE; nested exception is io.r2dbc.postgresql.client.ReactorNettyClient$PostgresConnectionClosedException: Cannot exchange messages because the connection is closed
at org.springframework.r2dbc.connection.init.ScriptUtils.lambda$runStatement$9(ScriptUtils.java:571)
This is my abstract RepositoryTest:
@DataR2dbcTest
@ActiveProfiles("test")
internal abstract class RepositoryTest {

    @Autowired
    protected lateinit var connectionFactory: ConnectionFactory

    @AfterEach
    fun clean() {
        runSql(
            """
            DROP SCHEMA public CASCADE;
            CREATE SCHEMA public;
            """
        )
    }

    protected fun runSql(sql: String) {
        runScript(InputStreamResource(sql.byteInputStream()))
    }

    protected fun runScript(sqlScript: Resource) {
        runBlocking {
            val connection = connectionFactory.create().awaitFirst()
            ScriptUtils.executeSqlScript(connection, sqlScript)
                .block() // <---- throws the said exception, but it worked before the update.
        }
    }
}
My actual test looks like this:
internal class MyRepoTest : RepositoryTest() {

    @Autowired
    private lateinit var myRepo: MyRepository

    @Test
    fun someTest() {
        val userId = 3429L
        val myEntities = ...
        runBlocking { myRepo.saveAll(myEntities).collect() }

        val result = myRepo.findAllByUserId(userId).asFlux()

        StepVerifier.create(result)
            .expectNextMatches { it.userId == userId }
            .expectNextMatches { it.userId == userId }
            .verifyComplete()
    }
}
I guess the way I try to execute the SQL commands is not right. How should I do it?
val connection = connectionFactory.create().awaitFirst()
ScriptUtils.executeSqlScript(connection, sqlScript)
.block() // <---- throws the said exception, but it worked before the update.
EDIT
I figured out that using ResourceDatabasePopulator works fine:
protected fun runScript(sqlScript: Resource) {
    runBlocking {
        ResourceDatabasePopulator(sqlScript).populate(connectionFactory).block()
    }
}
But I still would like to understand why the original implementation now fails.
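For what it's worth, ResourceDatabasePopulator obtains and releases the connection itself, whereas the original runScript never closes the connection it gets from connectionFactory.create(). If you want to keep the plain ScriptUtils approach, one option is to close the connection explicitly. A minimal sketch, assuming kotlinx-coroutines-reactive is on the classpath (the class name is made up for the example, and this is an illustration rather than a confirmed explanation of the regression):

import io.r2dbc.spi.ConnectionFactory
import kotlinx.coroutines.reactive.awaitFirst
import kotlinx.coroutines.reactive.awaitFirstOrNull
import kotlinx.coroutines.runBlocking
import org.springframework.core.io.Resource
import org.springframework.r2dbc.connection.init.ScriptUtils

internal abstract class ConnectionClosingRepositoryTest {

    protected lateinit var connectionFactory: ConnectionFactory

    protected fun runScript(sqlScript: Resource) {
        runBlocking {
            // Open a connection, run the script, and always close the connection again.
            val connection = connectionFactory.create().awaitFirst()
            try {
                ScriptUtils.executeSqlScript(connection, sqlScript).awaitFirstOrNull()
            } finally {
                connection.close().awaitFirstOrNull()
            }
        }
    }
}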

Related

How to handle Attempt to update job execution id=1 with wrong version (0), where current version is 1 for Sybase

I'm facing a critical issue using Spring Batch with Sybase, and I don't know why it occurs only for Sybase.
It seems the INSERT into BATCH_JOB_EXECUTION succeeds but the subsequent UPDATE does not.
This is my stack trace:
2022-08-31 11:06:12.857 DEBUG 7072 --- [ main] o.s.j.d.DataSourceTransactionManager : Releasing JDBC Connection [HikariProxyConnection#59930654 wrapping com.sybase.jdbc4.jdbc.SybConnection#17dad32f] after transaction
2022-08-31 11:06:12.860 ERROR 7072 --- [ main] o.s.batch.core.job.AbstractJob : Encountered fatal error executing job
org.springframework.dao.OptimisticLockingFailureException: Attempt to update job execution id=1 with wrong version (0), where current version is 1
@Configuration
@MapperScan(
        value = "test.store.storebatch.mapper.primary",
        sqlSessionFactoryRef = "primarySqlSessionFactory"
)
public class PrimaryDatabaseConfig {

    @Primary
    @Bean(name = "primaryDataSource")
    @ConfigurationProperties(prefix = "spring.datasource.hikari.primary")
    public DataSource primaryDataSource() {
        return DataSourceBuilder.create().build();
    }

    @Primary
    @Bean(name = "primarySqlSessionFactory")
    public SqlSessionFactory primarySqlSessionFactory(
            @Qualifier("primaryDataSource") DataSource primaryDataSource,
            ApplicationContext applicationContext) throws Exception {
        log.info("primarySqlSessionFactory created");
        SqlSessionFactoryBean sqlSessionFactoryBean = new SqlSessionFactoryBean();
        sqlSessionFactoryBean.setDataSource(primaryDataSource);
        sqlSessionFactoryBean.setMapperLocations(applicationContext.getResources("classpath:mapper/primary/*.xml"));
        sqlSessionFactoryBean.setConfigLocation(applicationContext.getResource("classpath:mybatis-config.xml"));
        sqlSessionFactoryBean.setTransactionFactory(null);
        log.info("sqlSessionFactory = " + sqlSessionFactoryBean.toString());
        return sqlSessionFactoryBean.getObject();
    }

    @Primary
    @Bean(name = "primarySqlSessionTemplate")
    public SqlSessionTemplate primarySqlSessionTemplate(@Qualifier("primarySqlSessionFactory")
            SqlSessionFactory primarySqlSessionFactory) throws Exception {
        return new SqlSessionTemplate(primarySqlSessionFactory);
    }

    @Primary
    @Bean(name = "primaryTransactionManager")
    public PlatformTransactionManager primaryTransactionManager() {
        log.info("primaryTransactionManager created");
        DataSourceTransactionManager transactionManager = new DataSourceTransactionManager();
        transactionManager.setDataSource(primaryDataSource());
        return transactionManager;
    }
}
spring:
  application:
    name: store-batch
  config:
    activate:
      on-profile: local
  main:
    web-application-type: NONE
  datasource:
    hikari:
      primary:
        # tps-dev connection
        driver-class-name: com.sybase.jdbc4.jdbc.SybDriver
        jdbc-url: jdbc:sybase:Tds:127.000.000.1:5000/ibims?CHARSET=eucksc&JAVA_CHARSET_MAPPING=ms949
        username: id
        password: password
        maximum-pool-size: 2
The database connection itself succeeds. When the connection information is changed to MySQL and tested, it works well, but with Sybase it does not.
Has anyone solved this problem?

java.lang.ClassCastException: class .$Proxy143 cannot be cast to class .MessageChannel (... are in unnamed module of loader 'app')

I am writing tests for a Spring Cloud Stream application that has a KStream reading from topicA. In the test I use a KafkaTemplate to publish the messages and wait for the KStream logs to show up.
The tests throw the following exception:
java.lang.ClassCastException: class com.sun.proxy.$Proxy143 cannot be cast to class org.springframework.messaging.MessageChannel (com.sun.proxy.$Proxy143 and org.springframework.messaging.MessageChannel are in unnamed module of loader 'app')
at org.springframework.cloud.stream.test.binder.TestSupportBinder.bindConsumer(TestSupportBinder.java:66) ~[spring-cloud-stream-test-support-3.0.1.RELEASE.jar:3.0.1.RELEASE]
at org.springframework.cloud.stream.binding.BindingService.doBindConsumer(BindingService.java:169) ~[spring-cloud-stream-3.0.2.BUILD-SNAPSHOT.jar:3.0.2.BUILD-SNAPSHOT]
at org.springframework.cloud.stream.binding.BindingService.bindConsumer(BindingService.java:115) ~[spring-cloud-stream-3.0.2.BUILD-SNAPSHOT.jar:3.0.2.BUILD-SNAPSHOT]
at org.springframework.cloud.stream.binding.AbstractBindableProxyFactory.createAndBindInputs(AbstractBindableProxyFactory.java:112) ~[spring-cloud-stream-3.0.2.BUILD-SNAPSHOT.jar:3.0.2.BUILD-SNAPSHOT]
at org.springframework.cloud.stream.binding.InputBindingLifecycle.doStartWithBindable(InputBindingLifecycle.java:58) ~[spring-cloud-stream-3.0.2.BUILD-SNAPSHOT.jar:3.0.2.BUILD-SNAPSHOT]
at java.base/java.util.LinkedHashMap$LinkedValues.forEach(LinkedHashMap.java:608) ~[na:na]
This exception doesn't show up in the normal execution of the application.
KSTREAM:
@Configuration
class MyKStream() {

    private val logger = LoggerFactory.getLogger(javaClass)

    @Bean
    fun processSomething(): Consumer<KStream<XX, XX>> {
        return Consumer { something ->
            something.foreach { key, value ->
                logger.info("--------> Processing xxx key {} - value {}", key, value)
            }
        }
    }
}
TEST:
@TestInstance(PER_CLASS)
@EmbeddedKafka
@SpringBootTest(properties = [
    "spring.profiles.active=local",
    "schema-registry.user=",
    "schema-registry.password=",
    "spring.cloud.stream.bindings.processSomething-in-0.destination=topicA",
    "spring.cloud.stream.bindings.processSomething-in-0.producer.useNativeEncoding=true",
    "spring.cloud.stream.bindings.processSomethingElse-in-0.destination=topicB",
    "spring.cloud.stream.bindings.processSomethingElse-in-0.producer.useNativeEncoding=true",
    "spring.cloud.stream.kafka.streams.binder.configuration.application.server=localhost:8080",
    "spring.cloud.stream.function.definition=processSomething;processSomethingElse"])
class MyKStreamTests {

    private val logger = LoggerFactory.getLogger(javaClass)

    @Autowired
    private lateinit var embeddedKafka: EmbeddedKafkaBroker

    @Autowired
    private lateinit var schemaRegistryMock: SchemaRegistryMock

    @AfterAll
    fun afterAll() {
        embeddedKafka.kafkaServers.forEach { it.shutdown() }
        embeddedKafka.kafkaServers.forEach { it.awaitShutdown() }
    }

    @Test
    fun `should send and process something`() {
        val producer = createProducer()
        logger.debug("**********----> presend")
        val msg = MessageBuilder.withPayload(xxx)
            .setHeader(KafkaHeaders.MESSAGE_KEY, xxx)
            .setHeader(KafkaHeaders.TIMESTAMP, 1L)
            .build()
        producer.send(msg).get()
        logger.debug("**********----> sent")
        Thread.sleep(100000)
    }
}
@Configuration
class KafkaTestConfiguration(private val embeddedKafkaBroker: EmbeddedKafkaBroker) {

    private val schemaRegistryMock = SchemaRegistryMock()

    @PostConstruct
    fun init() {
        System.setProperty("spring.kafka.bootstrap-servers", embeddedKafkaBroker.brokersAsString)
        System.setProperty("spring.cloud.stream.kafka.streams.binder.brokers", embeddedKafkaBroker.brokersAsString)

        schemaRegistryMock.start()
        System.setProperty("spring.cloud.stream.kafka.streams.binder.configuration.schema.registry.url", schemaRegistryMock.url)
    }

    @Bean
    fun schemaRegistryMock(): SchemaRegistryMock {
        return schemaRegistryMock
    }

    @PreDestroy
    fun preDestroy() {
        schemaRegistryMock.stop()
    }
}
You are probably using spring-cloud-stream-test-support as a dependency, and this dependency bypasses some of the core functionality of the binder API, resulting in this error. See the testing section of the reference documentation:
https://cloud.spring.io/spring-cloud-static/spring-cloud-stream/3.0.3.RELEASE/reference/html/spring-cloud-stream.html#_testing
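Concretely, that usually means removing the test-support artifact from the test classpath and testing against the real binder (for example with the embedded Kafka broker already used in the test). A sketch of the dependency change, assuming a Gradle Kotlin DSL build with Spring Boot's dependency management (adjust coordinates and versions to your build):

// build.gradle.kts
dependencies {
    // Remove this: it swaps in a test binder that bypasses the real binder API,
    // which is what leads to the MessageChannel ClassCastException above.
    // testImplementation("org.springframework.cloud:spring-cloud-stream-test-support")

    // Test against the real Kafka binder with an embedded broker instead.
    testImplementation("org.springframework.kafka:spring-kafka-test")
}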

Kafka: Consumer API: Regression test fails if run in a group (sequentially)

I have implemented a Kafka application using the consumer API, and I have two regression tests implemented with the streams API:
To test the happy path: the test produces data into the input topic that the application listens to; the application consumes it and produces data into the output topic, which the test then consumes and validates against the expected output data.
To test the error path: the behavior is the same as above, except that this time the test consumes from the application's error topic and validates against the expected error output.
My code and the regression tests reside in the same project, under the expected directory structure. In both tests the data should be picked up by the same listener on the application side.
The problem is :
When I execute the tests individually (manually), each test passes. However, if I execute them together but sequentially (for example: gradle clean build), only the first test passes. The second test fails after the test-side consumer polls for data and eventually gives up without finding any.
Observation:
From debugging, it looks like the first time everything works perfectly (test-side and application-side producers and consumers). However, during the second test it seems that the application-side consumer is not receiving any data (the test-side producer appears to be producing data, but I cannot say that for sure), and hence no data is being produced into the error topic.
What I have tried so far:
After investigating, my understanding is that we are running into race conditions, and to avoid that I found suggestions like:
use @DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_EACH_TEST_METHOD)
tear down the broker after each test (please see the .destroy() on the brokers)
use different topic names for each test
I applied all of them and still could not resolve the issue.
I am providing the code here for perusal. Any insight is appreciated.
Code for 1st test (Testing error path):
@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_EACH_TEST_METHOD)
@EmbeddedKafka(
        partitions = 1,
        controlledShutdown = false,
        topics = {
                AdapterStreamProperties.Constants.INPUT_TOPIC,
                AdapterStreamProperties.Constants.ERROR_TOPIC
        },
        brokerProperties = {
                "listeners=PLAINTEXT://localhost:9092",
                "port=9092",
                "log.dir=/tmp/data/logs",
                "auto.create.topics.enable=true",
                "delete.topic.enable=true"
        }
)
public class AbstractIntegrationFailurePathTest {

    private final int retryLimit = 0;

    @Autowired
    protected EmbeddedKafkaBroker embeddedFailurePathKafkaBroker;

    //To produce data
    @Autowired
    protected KafkaTemplate<PreferredMediaMsgKey, SendEmailCmd> inputProducerTemplate;

    //To read from output error
    @Autowired
    protected Consumer<PreferredMediaMsgKey, ErrorCmd> outputErrorConsumer;

    //Service to execute notification-preference
    @Autowired
    protected AdapterStreamProperties projectProerties;

    protected void subscribe(Consumer consumer, String topic, int attempt) {
        try {
            embeddedFailurePathKafkaBroker.consumeFromAnEmbeddedTopic(consumer, topic);
        } catch (ComparisonFailure ex) {
            if (attempt < retryLimit) {
                subscribe(consumer, topic, attempt + 1);
            }
        }
    }
}
.
@TestConfiguration
public class AdapterStreamFailurePathTestConfig {

    @Autowired
    private EmbeddedKafkaBroker embeddedKafkaBroker;

    @Value("${spring.kafka.adapter.application-id}")
    private String applicationId;

    @Value("${spring.kafka.adapter.group-id}")
    private String groupId;

    //Producer of records that the program consumes
    @Bean
    public Map<String, Object> sendEmailCmdProducerConfigs() {
        Map<String, Object> results = KafkaTestUtils.producerProps(embeddedKafkaBroker);
        results.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                AdapterStreamProperties.Constants.KEY_SERDE.serializer().getClass());
        results.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                AdapterStreamProperties.Constants.INPUT_VALUE_SERDE.serializer().getClass());
        return results;
    }

    @Bean
    public ProducerFactory<PreferredMediaMsgKey, SendEmailCmd> inputProducerFactory() {
        return new DefaultKafkaProducerFactory<>(sendEmailCmdProducerConfigs());
    }

    @Bean
    public KafkaTemplate<PreferredMediaMsgKey, SendEmailCmd> inputProducerTemplate() {
        return new KafkaTemplate<>(inputProducerFactory());
    }

    //Consumer of the error output, generated by the program
    @Bean
    public Map<String, Object> outputErrorConsumerConfig() {
        Map<String, Object> props = KafkaTestUtils.consumerProps(
                applicationId, Boolean.TRUE.toString(), embeddedKafkaBroker);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                AdapterStreamProperties.Constants.KEY_SERDE.deserializer().getClass()
                        .getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                AdapterStreamProperties.Constants.ERROR_VALUE_SERDE.deserializer().getClass()
                        .getName());
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        return props;
    }

    @Bean
    public Consumer<PreferredMediaMsgKey, ErrorCmd> outputErrorConsumer() {
        DefaultKafkaConsumerFactory<PreferredMediaMsgKey, ErrorCmd> rpf =
                new DefaultKafkaConsumerFactory<>(outputErrorConsumerConfig());
        return rpf.createConsumer(groupId, "notification-failure");
    }
}
.
@RunWith(SpringRunner.class)
@SpringBootTest(classes = AdapterStreamFailurePathTestConfig.class)
@ActiveProfiles(profiles = "errtest")
public class ErrorPath400Test extends AbstractIntegrationFailurePathTest {

    @Autowired
    private DataGenaratorForErrorPath400Test datagen;

    @Mock
    private AdapterHttpClient httpClient;

    @Autowired
    private ErroredEmailCmdDeserializer erroredEmailCmdDeserializer;

    @Before
    public void setup() throws InterruptedException {
        Mockito.when(httpClient.callApi(Mockito.any()))
                .thenReturn(
                        new GenericResponse(
                                400,
                                TestConstants.ERROR_MSG_TO_CHK));
        Mockito.when(httpClient.createURI(Mockito.any(), Mockito.any(), Mockito.any())).thenCallRealMethod();

        inputProducerTemplate.send(
                projectProerties.getInputTopic(),
                datagen.getKey(),
                datagen.getEmailCmdToProduce());
        System.out.println("producer: " + projectProerties.getInputTopic());

        subscribe(outputErrorConsumer, projectProerties.getErrorTopic(), 0);
    }

    @Test
    public void testWithError() throws InterruptedException, InvalidProtocolBufferException, TextFormat.ParseException {
        ConsumerRecords<PreferredMediaMsgKeyBuf.PreferredMediaMsgKey, ErrorCommandBuf.ErrorCmd> records;
        List<ConsumerRecord<PreferredMediaMsgKeyBuf.PreferredMediaMsgKey, ErrorCommandBuf.ErrorCmd>> outputListOfErrors = new ArrayList<>();

        int attempt = 0;
        int expectedRecords = 1;
        do {
            records = KafkaTestUtils.getRecords(outputErrorConsumer);
            records.forEach(outputListOfErrors::add);
            attempt++;
        } while (attempt < expectedRecords && outputListOfErrors.size() < expectedRecords);

        //Verify the recipient event stream size
        Assert.assertEquals(expectedRecords, outputListOfErrors.size());

        //Validate output
    }

    @After
    public void tearDown() {
        outputErrorConsumer.close();
        embeddedFailurePathKafkaBroker.destroy();
    }
}
The second test is almost the same in structure, except that this time the test-side consumer consumes from the application-side output topic (instead of the error topic), and I named the consumers, broker, producer, and topics differently. For example:
@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_EACH_TEST_METHOD)
@EmbeddedKafka(
        partitions = 1,
        controlledShutdown = false,
        topics = {
                AdapterStreamProperties.Constants.INPUT_TOPIC,
                AdapterStreamProperties.Constants.OUTPUT_TOPIC
        },
        brokerProperties = {
                "listeners=PLAINTEXT://localhost:9092",
                "port=9092",
                "log.dir=/tmp/data/logs",
                "auto.create.topics.enable=true",
                "delete.topic.enable=true"
        }
)
public class AbstractIntegrationSuccessPathTest {

    private final int retryLimit = 0;

    @Autowired
    protected EmbeddedKafkaBroker embeddedKafkaBroker;

    //To produce data
    @Autowired
    protected KafkaTemplate<PreferredMediaMsgKey, SendEmailCmd> sendEmailCmdProducerTemplate;

    //To read from output regular topic
    @Autowired
    protected Consumer<PreferredMediaMsgKey, NotifiedEmailCmd> ouputConsumer;

    //Service to execute notification-preference
    @Autowired
    protected AdapterStreamProperties projectProerties;

    protected void subscribe(Consumer consumer, String topic, int attempt) {
        try {
            embeddedKafkaBroker.consumeFromAnEmbeddedTopic(consumer, topic);
        } catch (ComparisonFailure ex) {
            if (attempt < retryLimit) {
                subscribe(consumer, topic, attempt + 1);
            }
        }
    }
}
Please let me know if I should provide any more information.
"port=9092"
Don't use a fixed port; leave that out and the embedded broker will use a random port; the consumer configs are set up in KafkaTestUtils to point to the random port.
You shouldn't need to dirty the context after each test method - use a different group.id for each test and a different topic.
In my case the consumer was not closed properly. I had to do:
@After
public void tearDown() {
    // shutdown hook to correctly close the streams application
    Runtime.getRuntime().addShutdownHook(new Thread(ouputConsumer::close));
}
to resolve.
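Putting the suggestions together: let the embedded broker pick its own port and give each test class its own topics and group id. A small sketch (written in Kotlin for brevity; the topic and group names are placeholders, not from the original project):

import org.springframework.kafka.test.EmbeddedKafkaBroker
import org.springframework.kafka.test.context.EmbeddedKafka
import org.springframework.kafka.test.utils.KafkaTestUtils

// No "listeners"/"port" broker properties: the embedded broker picks a random free
// port, and KafkaTestUtils.producerProps()/consumerProps() point at it automatically.
@EmbeddedKafka(partitions = 1, topics = ["success-input-topic", "success-output-topic"])
class SuccessPathTestSketch {

    // A group id unique to this test class avoids clashing with the consumer group
    // used by the other test when both run in the same build.
    fun outputConsumerConfig(broker: EmbeddedKafkaBroker): Map<String, Any> =
        KafkaTestUtils.consumerProps("success-path-group", "true", broker)
}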

Can I use repository populator bean with fongo?

I'm using Fongo not only for unit tests but also for integration tests, so I would like to initialize Fongo with some collections. Is that possible?
This is my Java config (based on Oliver G.'s answer):
@EnableAutoConfiguration(exclude = {
        EmbeddedMongoAutoConfiguration.class,
        MongoAutoConfiguration.class,
        MongoDataAutoConfiguration.class
})
@Configuration
@ComponentScan(basePackages = { "com.foo" },
        excludeFilters = { @ComponentScan.Filter(classes = { SpringBootApplication.class })
})
public class ConfigServerWithFongoConfiguration extends AbstractFongoBaseConfiguration {

    private static final Logger log = LoggerFactory.getLogger(ConfigServerWithFongoConfiguration.class);

    @Autowired
    ResourcePatternResolver resourceResolver;

    @Bean
    public Jackson2RepositoryPopulatorFactoryBean repositoryPopulator() {
        Jackson2RepositoryPopulatorFactoryBean factory = new Jackson2RepositoryPopulatorFactoryBean();
        try {
            factory.setResources(resourceResolver.getResources("classpath:static/collections/*.json"));
        } catch (IOException e) {
            log.error("Could not load data", e);
        }
        return factory;
    }
}
When I run my IT tests, the log shows Reading resource: file *.json, but the tests fail because they retrieve nothing (null) from the Fongo database.
Tests are annotated with:
@RunWith(SpringRunner.class)
@SpringBootTest(classes={ConfigServerWithFongoConfiguration.class})
@AutoConfigureMockMvc
@TestPropertySource(properties = {"spring.data.mongodb.database=fake"})
@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_CLASS)
Lol, I feel so stupid right now. It was a format issue. JSON collections must be formatted like this:
[
  {/*doc1*/},
  {/*doc2*/},
  {/*doc3*/}
]
I was missing the [] and the comma-separated documents.

Kotliquery doesn't close postgresql connections

I'm using Kotlin with the kotliquery JDBC framework.
I just ran into a problem. I'm using a remote PostgreSQL database, and after calling it for a while I get the following error: Failure: too many clients already, which is caused by 100 connections being idle.
I'm trying to create one place where I do the configuration, which is what I call my BaseDAO. The relevant code for that class looks like this:
import com.zaxxer.hikari.HikariConfig
import com.zaxxer.hikari.HikariDataSource
import kotliquery.Session
import kotliquery.sessionOf
import javax.sql.DataSource

class BaseDAO {
    companion object {
        var url: String = "jdbc:postgresql://server.local:5432/myDatabase"
        var user: String = "postgres"
        var pass: String = "postgres"
        val config: HikariConfig = HikariConfig()

        private fun dataSource(): DataSource {
            var hikariConfig: HikariConfig = HikariConfig();
            hikariConfig.setDriverClassName("org.postgresql.Driver");
            hikariConfig.setJdbcUrl(url);
            hikariConfig.setUsername(user);
            hikariConfig.setPassword(pass);
            hikariConfig.setMaximumPoolSize(5);
            hikariConfig.setConnectionTestQuery("SELECT 1");
            hikariConfig.setPoolName("springHikariCP");

            hikariConfig.addDataSourceProperty("dataSource.cachePrepStmts", "true");
            hikariConfig.addDataSourceProperty("dataSource.prepStmtCacheSize", "250");
            hikariConfig.addDataSourceProperty("dataSource.prepStmtCacheSqlLimit", "2048");
            hikariConfig.addDataSourceProperty("dataSource.useServerPrepStmts", "true");

            var dataSource: HikariDataSource = HikariDataSource(hikariConfig);
            return dataSource;
        }

        @JvmStatic fun getSession(): Session {
            return sessionOf(dataSource())
        }
    }
}
And one of my DAOs:
class UserDAO {

    val toUser: (Row) -> User = { row ->
        User(
            row.int("id"),
            row.string("username"),
            row.string("usertype")
        )
    }

    fun getAllUsers(): List<User> {
        var returnedList: List<User> = arrayOf<User>().toList()
        using(BaseDAO.getSession()) { session ->
            val allUsersQuery = queryOf("select * from quintor_user").map(toUser).asList
            returnedList = session.run(allUsersQuery)
            session.connection.close()
            session.close()
        }
        return returnedList
    }
}
After looking into kotliquery's source code I realized that session.connection.close() and session.close() wouldn't even be necessary when using using (since it closes a Closeable, which the retrieved session is), but without them I got the same error (and had to restart the PostgreSQL database because of 100 idle connections).
I was wondering if there is an error in my code or if this is an error in kotliquery?
(I also submitted GitHub issue #6, but figured the community here might be bigger than 24 people.)
It seems that each call to BaseDAO.getSession() creates a new HikariDataSource. This means that every Session effectively has its own database connection pool. To resolve that, you need to maintain the HikariDataSource instance differently, i.e.:
class BaseDAO {
    companion object {
        ...
        private val dataSource by lazy {
            var hikariConfig: HikariConfig = HikariConfig();
            ...
            var dataSource: HikariDataSource = HikariDataSource(hikariConfig);
            dataSource;
        }

        @JvmStatic fun getSession(): Session {
            return sessionOf(dataSource)
        }
    }
}
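Filling in the elided parts with the Hikari settings from the question, a complete version of that lazy singleton could look like the following sketch. The key point is that the pool is created once and every call to getSession() reuses it:

import com.zaxxer.hikari.HikariConfig
import com.zaxxer.hikari.HikariDataSource
import kotliquery.Session
import kotliquery.sessionOf
import javax.sql.DataSource

class BaseDAO {
    companion object {
        private const val url = "jdbc:postgresql://server.local:5432/myDatabase"
        private const val user = "postgres"
        private const val pass = "postgres"

        // Built lazily on first use; all sessions share this single pool instead of
        // creating a new HikariDataSource (and up to 5 connections) per session.
        private val dataSource: DataSource by lazy {
            val hikariConfig = HikariConfig().apply {
                driverClassName = "org.postgresql.Driver"
                jdbcUrl = url
                username = user
                password = pass
                maximumPoolSize = 5
                connectionTestQuery = "SELECT 1"
                poolName = "springHikariCP"
                addDataSourceProperty("dataSource.cachePrepStmts", "true")
                addDataSourceProperty("dataSource.prepStmtCacheSize", "250")
                addDataSourceProperty("dataSource.prepStmtCacheSqlLimit", "2048")
                addDataSourceProperty("dataSource.useServerPrepStmts", "true")
            }
            HikariDataSource(hikariConfig)
        }

        @JvmStatic
        fun getSession(): Session = sessionOf(dataSource)
    }
}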