PostgreSQL connections still idle after close in JDBC - postgresql

I am new to Spring Boot.
I have a Spring Boot application, the database I am using is PostgreSQL, I am using JdbcTemplate, and I have two datasource connections.
The code works fine, but I observed in PostgreSQL pgAdmin's Server Status dashboard that the connection pool (say, 10 connections) sits in idle mode.
Earlier, when I was working with a single datasource, I observed the same thing, but I solved it by setting some properties in the application.properties file, e.g.:
spring.datasource.hikari.minimum-idle=somevalue
spring.datasource.hikari.idle-timeout=somevalue
How do I achieve the same with multiple datasources?
properties file
spring.datasource.jdbcUrl=jdbc:postgresql://localhost:5432/stsdemo
spring.datasource.username=postgres
spring.datasource.password=********
spring.datasource.driver-class-name=org.postgresql.Driver
spring.seconddatasource.jdbcUrl=jdbc:postgresql://localhost:5432/postgres
spring.seconddatasource.username=postgres
spring.seconddatasource.password=********
spring.seconddatasource.driver-class-name=org.postgresql.Driver
DbConfig
@Configuration
public class DbConfig {

    @Bean(name = "db1")
    @Primary
    @ConfigurationProperties(prefix = "spring.datasource")
    public DataSource firstDatasource() {
        return DataSourceBuilder.create().build();
    }

    @Bean(name = "jdbcTemplate1")
    public JdbcTemplate jdbcTemplate1(@Qualifier("db1") DataSource ds) {
        return new JdbcTemplate(ds);
    }

    @Bean(name = "db2")
    @ConfigurationProperties(prefix = "spring.seconddatasource")
    public DataSource secondDatasource() {
        return DataSourceBuilder.create().build();
    }

    @Bean(name = "jdbcTemplate2")
    public JdbcTemplate jdbcTemplate2(@Qualifier("db2") DataSource ds) {
        return new JdbcTemplate(ds);
    }
}
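Since each @ConfigurationProperties prefix in a setup like this binds directly onto the DataSource bean it decorates (DataSourceBuilder builds a HikariDataSource by default in Spring Boot 2), the Hikari pool settings should bind under each datasource's own prefix, without the .hikari. segment. A sketch, with placeholder values:

```properties
# pool settings for the first datasource (bound onto the "db1" HikariDataSource)
spring.datasource.minimum-idle=2
spring.datasource.idle-timeout=30000
# pool settings for the second datasource (bound onto the "db2" HikariDataSource)
spring.seconddatasource.minimum-idle=2
spring.seconddatasource.idle-timeout=30000
```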

That's to be expected.
"Closing" a connection (i.e. calling close()) that was obtained from a pool only returns it to the pool. The pool will not immediately close the physical connection to the database, in order to avoid costly reconnects (which is the whole point of using a connection pool).
An "idle" connection is also no real problem in Postgres.
Connections that stay "idle in transaction" for a long time, however, would be a problem.
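The difference between the two states can be checked directly in pg_stat_activity; a sketch (the stsdemo database name is taken from the question above):

```sql
-- 'idle' rows are just the pool holding connections open;
-- long-lived 'idle in transaction' rows are the ones to worry about
SELECT state,
       count(*)                  AS connections,
       max(now() - state_change) AS longest_in_state
FROM pg_stat_activity
WHERE datname = 'stsdemo'
GROUP BY state;
```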

Related

Spring Batch very slow when using 2 datasources - one for Spring Batch and another for the App

I modified this sample batch job provided by Spring to use two custom datasources instead of the one auto-configured by Boot. Both datasources point to the same MySQL server, but to different schemas: one schema for the Batch/Task tables and another for the app tables. MySQL is running locally. The performance was much slower compared to the same job running with the default Boot-configured datasource or with one custom datasource.
Here are the timings I got; I can't figure out why #3 takes so long:
1. Default Boot-configured datasource - 1 second
2. One custom datasource (for both Batch/Task and App) - 1 second
3. Two custom datasources (one each for Batch/Task and App) - 90 seconds!
Do I need to set any connection pool settings for the custom datasources when using two of them? I tried a few, but it didn't help.
Here is the properties file:
spring.application.name=fileIngest
spring.datasource.url=jdbc:mysql://localhost:3306/test-scdf?useSSL=false
spring.datasource.username=<user>
spring.datasource.password=<pwd>
spring.datasource.driverClassName=org.mariadb.jdbc.Driver
app.datasource.url=jdbc:mysql://localhost:3306/test?useSSL=false
app.datasource.username=<user>
app.datasource.password=<pwd>
app.datasource.driverClassName=org.mariadb.jdbc.Driver
Here are relevant portions of my datasource config as recommended here.
@Bean(name = "springDataSource") // for Batch/Task tables
public DataSource dataSource(@Qualifier("springDataSourceProperties") DataSourceProperties springDataSourceProperties) {
    return DataSourceBuilder.create()
            .driverClassName(springDataSourceProperties.getDriverClassName())
            .url(springDataSourceProperties.getUrl())
            .password(springDataSourceProperties.getPassword())
            .username(springDataSourceProperties.getUsername())
            .build();
}

@Bean(name = "appDataSource") // for App tables
@Primary
public DataSource appDataSource(@Qualifier("appDataSourceProperties") DataSourceProperties appDataSourceProperties) {
    return DataSourceBuilder.create()
            .driverClassName(appDataSourceProperties.getDriverClassName())
            .url(appDataSourceProperties.getUrl())
            .password(appDataSourceProperties.getPassword())
            .username(appDataSourceProperties.getUsername())
            .build();
}
I just inject the appropriate datasource into the BatchConfiguration as needed.
@Configuration
@EnableBatchProcessing
public class BatchConfiguration extends DefaultBatchConfigurer {
    ...
    @Override
    @Autowired
    public void setDataSource(@Qualifier("springDataSource") DataSource batchDataSource) {
        super.setDataSource(batchDataSource);
    }

    @Bean
    public BatchDataSourceInitializer batchDataSourceInitializer(@Qualifier("springDataSource") DataSource batchDataSource,
            ResourceLoader resourceLoader) {
        BatchProperties batchProperties = new BatchProperties();
        batchProperties.setInitializeSchema(DataSourceInitializationMode.ALWAYS);
        return new BatchDataSourceInitializer(batchDataSource, resourceLoader, batchProperties);
    }
}
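Regarding the connection pool settings asked about above: when a datasource is hand-built, pool properties have to be applied explicitly. A sketch (not from the original post; the pool sizes are placeholders) using DataSourceProperties.initializeDataSourceBuilder() with an explicit Hikari type:

```java
@Bean(name = "appDataSource")
@Primary
public DataSource appDataSource(@Qualifier("appDataSourceProperties") DataSourceProperties appDataSourceProperties) {
    // initializeDataSourceBuilder() carries over url/username/password/driver
    // from the bound DataSourceProperties
    HikariDataSource ds = appDataSourceProperties.initializeDataSourceBuilder()
            .type(HikariDataSource.class)
            .build();
    ds.setMaximumPoolSize(10); // placeholder value
    ds.setMinimumIdle(2);      // placeholder value
    return ds;
}
```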

How to integrate a JAX-RS REST-Service with a JNDI lookup into SpringBoot?

I have a simple JAX-RS REST service that is deployed as a WAR on a WildFly server and uses a JNDI lookup for a datasource configured in standalone.xml. The path for the lookup is read from a datasource.properties file. The service then performs database actions through this datasource.
Now I want to use this REST service in a Spring Boot application deployed to an embedded Tomcat. My implementation uses RESTEasy, and the service can easily be integrated with the resteasy-spring-boot-starter. But the JNDI lookup doesn't work because the datasource is, of course, no longer configured in standalone.xml but in the application.properties file. It is a completely different datasource.
I'm looking for a solution to set the datasource without having to "hard code" it. This is how the connection is currently retrieved in the WAR for WildFly:
private Connection getConnection() {
    Connection connection = null;
    try (InputStream config = OutboxRestServiceJbossImpl.class.getClassLoader().getResourceAsStream("application.properties")) {
        Properties properties = new Properties();
        properties.load(config);
        DataSource ds = (DataSource) new InitialContext().lookup(properties.getProperty("datasource"));
        connection = ds.getConnection();
    } catch (Exception e) {
    }
    return connection;
}
Currently I solved this by having a core module which actually performs the logic, and two implementations: JAX-RS for WildFly and Spring MVC in Spring Boot. They invoke the methods of an instance of the core module, and the connection is handed over to these methods. For WildFly it looks like this:
public String getHelloWorld() {
    RestServiceCoreImpl rsc = new RestServiceCoreImpl();
    String helloWorld = null;
    try (Connection connection = getConnection()) {
        helloWorld = rsc.getHelloWorld(connection);
    } catch (Exception e) {
    }
    return helloWorld;
}

public String getHelloWorld(Connection connection) {
    // database stuff, e.g. connection.execute(SQL);
}
And like this in Spring Boot:
@Autowired
RestServiceCoreImpl rsc;

@Autowired
DataSource restServiceDataSource;

@Override
public String getHelloWorld() {
    try (Connection connection = restServiceDataSource.getConnection()) {
        return rsc.getHelloWorld(connection);
    } catch (SQLException e) {
    }
    return null;
}
Is there any way to solve this datasource issue? I need the Spring MVC solution to be replaced by the JAX-RS solution within Spring Boot.
Okay, I was able to solve this myself. Here is my solution:
I enabled naming in the embedded Tomcat server as follows:
@Bean
public TomcatServletWebServerFactory tomcatFactory() {
    return new TomcatServletWebServerFactory() {
        @Override
        protected TomcatWebServer getTomcatWebServer(org.apache.catalina.startup.Tomcat tomcat) {
            tomcat.enableNaming();
            return super.getTomcatWebServer(tomcat);
        }
    };
}
Then I was able to add the JNDI resource in the server context. Now a JNDI lookup is possible.
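The answer does not show the resource registration itself; a sketch of how it is typically done in the same factory, by also overriding postProcessContext (the resource name jdbc/mydb and the connection properties are hypothetical):

```java
@Override
protected void postProcessContext(org.apache.catalina.Context context) {
    // hypothetical resource name and connection settings
    org.apache.tomcat.util.descriptor.web.ContextResource resource =
            new org.apache.tomcat.util.descriptor.web.ContextResource();
    resource.setName("jdbc/mydb");
    resource.setType(javax.sql.DataSource.class.getName());
    resource.setProperty("driverClassName", "org.postgresql.Driver");
    resource.setProperty("url", "jdbc:postgresql://localhost:5432/mydb");
    context.getNamingResources().addResource(resource);
}
```

With the resource registered this way, the existing new InitialContext().lookup(...) call in the WAR code can resolve the same name on the embedded Tomcat.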

how to give mongodb socketkeepalive in spring boot application?

In Spring Boot, if we want to connect to MongoDB, we can either create a configuration file for MongoDB or put the datasource in application.properties.
I am following the second way.
I am getting this error:
"Timeout while receiving message; nested exception is com.mongodb.MongoSocketReadTimeoutException: Timeout while receiving message".
spring.data.mongodb.uri = mongodb://mongodb0.example.com:27017/admin
I get this error if I have not used my app for 6-7 hours and then try to hit any controller to retrieve data from MongoDB. After one or two tries I am able to get the data.
Question: is this the normal behavior of MongoDB?
So, in my case it is closing the socket after some hours.
I read some blogs saying you can set socket-keep-alive so the connection pool will not be closed.
In a Spring Boot MongoDB connection, we can pass options in the URI, like:
spring.data.mongodb.uri = mongodb://mongodb0.example.com:27017/admin/?replicaSet=test&connectTimeoutMS=300000
So I want to pass a socket-keep-alive option in spring.data.mongodb.uri, like replicaSet here.
I searched the official site but was not able to find any.
You can achieve this by providing a MongoClientOptions bean. Spring Data's MongoAutoConfiguration will pick this MongoClientOptions bean up and use it further on:
@Bean
public MongoClientOptions mongoClientOptions() {
    return MongoClientOptions.builder()
            .socketKeepAlive(true)
            .build();
}
Also note that the socket-keep-alive option is deprecated (and defaults to true) since mongo-driver version 3.5 (used by spring-data-mongodb since version 2.0.0).
You can also pass this option using a MongoClientOptionsFactoryBean:
public MongoClientOptions mongoClientOptions() {
    try {
        final MongoClientOptionsFactoryBean bean = new MongoClientOptionsFactoryBean();
        bean.setSocketKeepAlive(true);
        bean.afterPropertiesSet();
        return bean.getObject();
    } catch (final Exception e) {
        throw new BeanCreationException(e.getMessage(), e);
    }
}
Here is an example of this configuration, extending AbstractMongoConfiguration:
@Configuration
public class DataportalApplicationConfig extends AbstractMongoConfiguration {

    // @Value: inject property values into components
    @Value("${spring.data.mongodb.uri}")
    private String uri;

    @Value("${spring.data.mongodb.database}")
    private String database;

    /**
     * Configure the MongoClient with the uri
     *
     * @return MongoClient.class
     */
    @Override
    public MongoClient mongoClient() {
        return new MongoClient(new MongoClientURI(uri, MongoClientOptions.builder(mongoClientOptions())));
    }
}

How to change connection pool of MyBatis?

I would like to use a custom connection pool with MyBatis and the following questions have arisen:
What connection pool implementation does MyBatis use?
How can I replace the default connection pool with HikariCP or BoneCP?
You can use org.mybatis.spring:
@Bean
public SqlSessionFactoryBean mysqlSessionFactoryBean(@Autowired @Qualifier("mysqlDataSource") DataSource source) throws IOException {
    SqlSessionFactoryBean bean = new SqlSessionFactoryBean();
    bean.setConfigLocation(new ClassPathResource("/mybatis-config.xml"));
    bean.setMapperLocations(new PathMatchingResourcePatternResolver().getResources("classpath:/mysqlmapper/**/*Mapper.xml"));
    bean.setDataSource(source);
    return bean;
}

@Bean
public DataSource mysqlDataSource() {
    return DataSourceBuilder.create()
            .driverClassName("com.mysql.jdbc.Driver")
            .url(mysqlUrl)
            .username(mysqlUser)
            .password(mysqlPassword)
            .type(HikariDataSource.class)
            .build();
}
To use the HikariCP connection pool, set
sql.datasource.driver-class-name=com.mysql.cj.jdbc.Driver
as the SQL datasource driver in application.properties.
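To answer the first question: plain MyBatis (outside Spring) ships with its own pooling, selected by the type attribute of the dataSource element in mybatis-config.xml (POOLED, UNPOOLED, or JNDI). A sketch with placeholder connection values:

```xml
<!-- inside an <environment> block of mybatis-config.xml -->
<dataSource type="POOLED">
  <property name="driver" value="com.mysql.cj.jdbc.Driver"/>
  <property name="url" value="jdbc:mysql://localhost:3306/test"/>
  <property name="username" value="user"/>
  <property name="password" value="pwd"/>
</dataSource>
```

With mybatis-spring, by contrast, the pool is simply whatever DataSource bean you hand to SqlSessionFactoryBean, as shown in the answer above.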

setAutoCommit(false) not working with c3p0

I'm working with PostgreSQL 9.2 and c3p0 0.9.2.1. I created a connection customizer to disable autoCommit and set the transaction mode, but when I do a lookup on InitialContext to retrieve the dataSource, autoCommit is not disabled on the connection (log at bottom). How can I disable auto-commit?
Connection Customizer :
public class IsolationLevelConnectionCustomizer extends AbstractConnectionCustomizer {

    @Override
    public void onAcquire(Connection c, String parentDataSourceIdentityToken) throws Exception {
        super.onAcquire(c, parentDataSourceIdentityToken);
        System.out.println("Connection acquired, set autocommit off and repeatable read transaction mode.");
        c.setAutoCommit(false);
        c.setTransactionIsolation(Connection.TRANSACTION_REPEATABLE_READ);
    }
}
Class to retrieve datasource for DAOs :
public class DAOAcquire {

    private ComboPooledDataSource m_cpdsDataSource = null;
    private static final String LOOKUP_CONNECT = "jdbc/mydb";

    public DAOAcquire() throws NamingException {
        InitialContext context = new InitialContext();
        m_cpdsDataSource = (ComboPooledDataSource) context.lookup(LOOKUP_CONNECT);
        if (m_cpdsDataSource != null) {
            try {
                System.out.println("Autocommit = " + String.valueOf(m_cpdsDataSource.getConnection().getAutoCommit()));
            } catch (SQLException e) {
                System.out.println("Could not get autocommit value : " + e.getMessage());
                e.printStackTrace();
            }
        }
    }

    public ComboPooledDataSource getComboPooledDataSource() {
        return m_cpdsDataSource;
    }

    /**
     * @return the jdbcTemplate
     * @throws NamingException
     */
    public JdbcTemplate getJdbcTemplate() throws NamingException {
        return new JdbcTemplate(m_cpdsDataSource);
    }

    /**
     * Commit transactions
     * @throws SQLException
     */
    public void commit() throws SQLException {
        if (m_cpdsDataSource != null) {
            m_cpdsDataSource.getConnection().commit();
        } else {
            throw new SQLException("Could not commit. Reason : Unable to connect to database, dataSource is null.");
        }
    }

    /**
     * rollback all transactions to previous save point
     * @throws SQLException
     */
    public void rollback() throws SQLException {
        if (m_cpdsDataSource != null) {
            m_cpdsDataSource.getConnection().rollback();
        } else {
            throw new SQLException("Could not rollback. Reason : Unable to connect to database, dataSource is null.");
        }
    }
}
Log :
Connection acquired, set autocommit off and repeatable read transaction mode.
Connection acquired, set autocommit off and repeatable read transaction mode.
Connection acquired, set autocommit off and repeatable read transaction mode.
Autocommit = true
By default, PostgreSQL's auto-commit mode is disabled, so why does c3p0 activate it automatically? Should I set forceIgnoreUnresolvedTransactions to true?
EDIT : whenever I commit a transaction after retrieving the datasource, I get this error :
org.postgresql.util.PSQLException: Cannot commit when autoCommit is enabled.
The JDBC spec states: "The default is for auto-commit mode to be enabled when the Connection object is created." That's a cross-DBMS default, regardless of how the database behaves in other contexts. JDBC programmers may rely on autoCommit being set unless they explicitly call setAutoCommit(false). c3p0 honors this.
c3p0 allows ConnectionCustomizers to persistently override Connection defaults in the onAcquire() method when no single behavior is specified by the spec. For example, the spec states that "the default transaction level for a Connection object is determined by the driver supplying the connection." So, for transactionIsolation, if you reset it in onAcquire(...), c3p0 will remember the default you have chosen and always restore the transactionIsolation back to that default prior to checkout. However, c3p0 explicitly will not permit you to disable autoCommit once in onAcquire(...) and have autoCommit be disabled by default: at the moment of check-out, c3p0 insists that you have a spec-conformant Connection.
You can get the behavior you want by overriding the onCheckOut(...) method. The Connection is already checked out when onCheckOut(...) is called; you can do anything you want there, as c3p0 has exhausted its obligations to the specification gods at that point. If you want your clients to always see non-autoCommit Connections, call setAutoCommit(false) in onCheckOut(...). But beware that this renders your client code unportable: if you leave c3p0 and switch to a different DataSource, you'll need some other library-specific means of always disabling autoCommit, or your application will misbehave. Because even for Postgres, JDBC Connections are autoCommit by default.
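A sketch of that onCheckOut(...) approach, following the shape of the customizer in the question (the class name is hypothetical):

```java
public class AutoCommitOffConnectionCustomizer extends AbstractConnectionCustomizer {

    @Override
    public void onCheckOut(Connection c, String parentDataSourceIdentityToken) throws Exception {
        // runs on every checkout, after c3p0 has restored spec defaults,
        // so clients always receive a Connection with autoCommit disabled
        c.setAutoCommit(false);
    }
}
```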
Note: the Connection properties whose values are not fixed by the spec, and so can be persistently overridden in an onAcquire(...) method, are catalog, holdability, transactionIsolation, readOnly, and typeMap.
P.S.: don't set forceIgnoreUnresolvedTransactions to true. Yuk.