How to handle "Attempt to update job execution id=1 with wrong version (0), where current version is 1" for Sybase (spring-batch)

I'm facing a critical issue using Spring Batch with Sybase, and I don't know why it happens only on Sybase.
It looks as though the INSERT into BATCH_JOB_EXECUTION succeeds but the subsequent UPDATE does not.
This is my stack trace:
2022-08-31 11:06:12.857 DEBUG 7072 --- [ main] o.s.j.d.DataSourceTransactionManager : Releasing JDBC Connection [HikariProxyConnection@59930654 wrapping com.sybase.jdbc4.jdbc.SybConnection@17dad32f] after transaction
2022-08-31 11:06:12.860 ERROR 7072 --- [ main] o.s.batch.core.job.AbstractJob : Encountered fatal error executing job
org.springframework.dao.OptimisticLockingFailureException: Attempt to update job execution id=1 with wrong version (0), where current version is 1
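For context, Spring Batch guards its metadata tables with a versioned ("optimistic locking") update: the UPDATE matches a row only while the VERSION column still holds the value read earlier, and a zero-row result is reported as exactly this exception. A simplified sketch of that pattern (not Spring Batch's actual DAO code; jdbcTemplate and the parameter names are placeholders):

import org.springframework.dao.OptimisticLockingFailureException;
import org.springframework.jdbc.core.JdbcTemplate;

class VersionedUpdateSketch {
    // Illustrates the optimistic-locking UPDATE; the column names follow the
    // BATCH_JOB_EXECUTION schema, everything else is a placeholder.
    void updateExecution(JdbcTemplate jdbcTemplate, String status,
                         long executionId, int expectedVersion) {
        int updated = jdbcTemplate.update(
                "UPDATE BATCH_JOB_EXECUTION SET STATUS = ?, VERSION = VERSION + 1 "
                        + "WHERE JOB_EXECUTION_ID = ? AND VERSION = ?",
                status, executionId, expectedVersion);
        if (updated == 0) {
            // Zero rows matched: either another transaction bumped VERSION,
            // or this connection cannot see the row's latest state.
            throw new OptimisticLockingFailureException(
                    "Attempt to update job execution id=" + executionId
                            + " with wrong version (" + expectedVersion + ")");
        }
    }
}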
My configuration:
@Configuration
@MapperScan(
    value = "test.store.storebatch.mapper.primary",
    sqlSessionFactoryRef = "primarySqlSessionFactory"
)
public class PrimaryDatabaseConfig {

    @Primary
    @Bean(name = "primaryDataSource")
    @ConfigurationProperties(prefix = "spring.datasource.hikari.primary")
    public DataSource primaryDataSource() {
        return DataSourceBuilder.create().build();
    }

    @Primary
    @Bean(name = "primarySqlSessionFactory")
    public SqlSessionFactory primarySqlSessionFactory(
            @Qualifier("primaryDataSource") DataSource primaryDataSource,
            ApplicationContext applicationContext) throws Exception {
        log.info("primarySqlSessionFactory created");
        SqlSessionFactoryBean sqlSessionFactoryBean = new SqlSessionFactoryBean();
        sqlSessionFactoryBean.setDataSource(primaryDataSource);
        sqlSessionFactoryBean.setMapperLocations(applicationContext.getResources("classpath:mapper/primary/*.xml"));
        sqlSessionFactoryBean.setConfigLocation(applicationContext.getResource("classpath:mybatis-config.xml"));
        sqlSessionFactoryBean.setTransactionFactory(null);
        log.info("sqlSessionFactory = " + sqlSessionFactoryBean.toString());
        return sqlSessionFactoryBean.getObject();
    }

    @Primary
    @Bean(name = "primarySqlSessionTemplate")
    public SqlSessionTemplate primarySqlSessionTemplate(
            @Qualifier("primarySqlSessionFactory") SqlSessionFactory primarySqlSessionFactory) throws Exception {
        return new SqlSessionTemplate(primarySqlSessionFactory);
    }

    @Primary
    @Bean(name = "primaryTransactionManager")
    public PlatformTransactionManager primaryTransactionManager() {
        log.info("primaryTransactionManager created");
        DataSourceTransactionManager transactionManager = new DataSourceTransactionManager();
        transactionManager.setDataSource(primaryDataSource());
        return transactionManager;
    }
}
And my application.yml:
spring:
  application:
    name: store-batch
  config:
    activate:
      on-profile: local
  main:
    web-application-type: NONE
  datasource:
    hikari:
      primary:
        # tps-dev connection
        driver-class-name: com.sybase.jdbc4.jdbc.SybDriver
        jdbc-url: jdbc:sybase:Tds:127.000.000.1:5000/ibims?CHARSET=eucksc&JAVA_CHARSET_MAPPING=ms949
        username: id
        password: password
        maximum-pool-size: 2
The database connection itself succeeds, and when I change the connection information to MySQL and test, it works well; only Sybase fails.
Has anyone solved this problem?
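One direction worth checking (an assumption on my part, not a confirmed fix): Spring Batch's JobRepository creates job executions with ISOLATION_SERIALIZABLE by default, which some databases cope with badly, and the repository must also use the very same DataSource and transaction manager as the job, otherwise the INSERT and the later versioned UPDATE can run against different transactional views. A minimal sketch for Spring Batch 4.x that pins both to the primary beans and relaxes the isolation level:

import javax.sql.DataSource;
import org.springframework.batch.core.configuration.annotation.DefaultBatchConfigurer;
import org.springframework.batch.core.repository.JobRepository;
import org.springframework.batch.core.repository.support.JobRepositoryFactoryBean;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Configuration;
import org.springframework.transaction.PlatformTransactionManager;

@Configuration
public class BatchConfig extends DefaultBatchConfigurer {

    private final DataSource dataSource;
    private final PlatformTransactionManager transactionManager;

    public BatchConfig(@Qualifier("primaryDataSource") DataSource dataSource,
                       @Qualifier("primaryTransactionManager") PlatformTransactionManager transactionManager) {
        super(dataSource);
        this.dataSource = dataSource;
        this.transactionManager = transactionManager;
    }

    @Override
    public PlatformTransactionManager getTransactionManager() {
        return transactionManager;
    }

    @Override
    protected JobRepository createJobRepository() throws Exception {
        JobRepositoryFactoryBean factory = new JobRepositoryFactoryBean();
        factory.setDataSource(dataSource);
        factory.setTransactionManager(transactionManager);
        // Default is ISOLATION_SERIALIZABLE; relax it if the database cannot
        // serialize the create/update sequence on BATCH_JOB_EXECUTION.
        factory.setIsolationLevelForCreate("ISOLATION_READ_COMMITTED");
        factory.afterPropertiesSet();
        return factory.getObject();
    }
}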

Related

"WSSecurityException: Cannot find key for alias" of a digital certificate in WS-Security SOAP client with Spring Boot

I am trying to build a client for a SOAP service with Spring Boot. The requests must carry a digital certificate (public key) in the header, but it fails when I try to add it to the securityInterceptor.
I'm deploying the client on a WildFly server; I thought I might have to add the certificate to the server somehow, but I don't know for sure. In principle it is in the resources folder of the project, and it is still there after generating the WAR.
Config:
private static final Resource KEYSTORE_LOCATION = new ClassPathResource("client-keystore.jks");
private static final String KEYSTORE_PASSWORD = "password";
private static final String KEY_ALIAS = "alias";

@Bean
TrustManagersFactoryBean trustManagers() throws Exception {
    TrustManagersFactoryBean factoryBean = new TrustManagersFactoryBean();
    factoryBean.setKeyStore(keyStore().getObject());
    return factoryBean;
}

@Bean
HttpsUrlConnectionMessageSender messageSender() throws Exception {
    HttpsUrlConnectionMessageSender sender = new HttpsUrlConnectionMessageSender();
    KeyManagersFactoryBean keyManagersFactoryBean = new KeyManagersFactoryBean();
    keyManagersFactoryBean.setKeyStore(keyStore().getObject());
    keyManagersFactoryBean.setPassword(KEYSTORE_PASSWORD);
    keyManagersFactoryBean.afterPropertiesSet();
    sender.setKeyManagers(keyManagersFactoryBean.getObject());
    sender.setTrustManagers(trustManagers().getObject());
    return sender;
}

@Bean
KeyStoreFactoryBean keyStore() throws GeneralSecurityException, IOException {
    KeyStoreFactoryBean factoryBean = new KeyStoreFactoryBean();
    factoryBean.setLocation(KEYSTORE_LOCATION);
    factoryBean.setPassword(KEYSTORE_PASSWORD);
    return factoryBean;
}

@Bean
public Jaxb2Marshaller marshaller() {
    Jaxb2Marshaller marshaller = new Jaxb2Marshaller();
    marshaller.setContextPath("contextpath");
    return marshaller;
}

@Bean
Wss4jSecurityInterceptor securityInterceptor() throws Exception {
    Wss4jSecurityInterceptor securityInterceptor = new Wss4jSecurityInterceptor();
    securityInterceptor.setSecurementActions("Signature");
    securityInterceptor.setSecurementUsername(KEY_ALIAS);
    securityInterceptor.setSecurementPassword(KEYSTORE_PASSWORD);
    securityInterceptor.setSecurementSignatureCrypto(cryptoFactoryBean().getObject());
    return securityInterceptor;
}

@Bean
SOAPConnector client() throws Exception {
    SOAPConnector client = new SOAPConnector();
    System.out.println("client(): ");
    client.setInterceptors(new ClientInterceptor[] { securityInterceptor() });
    client.setMessageSender(messageSender());
    client.setMarshaller(marshaller());
    client.setUnmarshaller(marshaller());
    client.afterPropertiesSet();
    return client;
}
Error:
Caused by: org.apache.wss4j.common.ext.WSSecurityException: Error during Signature:
Original Exception was org.apache.wss4j.common.ext.WSSecurityException: Cannot find key for alias: [certificado]
Original Exception was org.apache.wss4j.common.ext.WSSecurityException: Cannot find key for alias: [certificado]
at org.apache.wss4j.dom.action.SignatureAction.execute(SignatureAction.java:174)
at org.apache.wss4j.dom.handler.WSHandler.doSenderAction(WSHandler.java:238)
at org.springframework.ws.soap.security.wss4j2.Wss4jHandler.doSenderAction(Wss4jHandler.java:58)
at org.springframework.ws.soap.security.wss4j2.Wss4jSecurityInterceptor.secureMessage(Wss4jSecurityInterceptor.java:609)
... 80 more
Caused by: org.apache.wss4j.common.ext.WSSecurityException: Cannot find key for alias: [certificado]
Original Exception was org.apache.wss4j.common.ext.WSSecurityException: Cannot find key for alias: [certificado]
at org.apache.wss4j.dom.message.WSSecSignature.computeSignature(WSSecSignature.java:615)
at org.apache.wss4j.dom.action.SignatureAction.execute(SignatureAction.java:166)
... 83 more
Caused by: org.apache.wss4j.common.ext.WSSecurityException: Cannot find key for alias: [certificado]
at org.apache.wss4j.common.crypto.Merlin.getPrivateKey(Merlin.java:696)
at org.apache.wss4j.dom.message.WSSecSignature.computeSignature(WSSecSignature.java:558)
In case it is useful, I am basing myself on this repository to make the client.
I think the problem is that the methods I'm using are for adding certificates with a private key, while I'm trying to do it with a public one; in that case I don't know how to add the public key.
This is the signature of the setSecurementUsername method:
public void setSecurementUsername(String securementUsername)
Sets the username for securement username token or/and the alias of the private key for securement signature
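A signature securement action does need a private key: WSS4J's Merlin.getPrivateKey (the last frame in the trace) can only succeed when the alias refers to a private-key entry, not a plain trusted-certificate entry. A small hedged check, using only standard java.security APIs and the keystore/alias values from the config above, to see what the alias actually holds:

import java.io.InputStream;
import java.security.KeyStore;

public class KeystoreAliasCheck {
    public static void main(String[] args) throws Exception {
        KeyStore keyStore = KeyStore.getInstance("JKS");
        try (InputStream in = KeystoreAliasCheck.class.getClassLoader()
                .getResourceAsStream("client-keystore.jks")) {
            keyStore.load(in, "password".toCharArray());
        }
        // isKeyEntry() is true only when the alias holds a private key;
        // isCertificateEntry() means a certificate without a private key,
        // which is not enough for the "Signature" securement action.
        System.out.println("private key entry: " + keyStore.isKeyEntry("alias"));
        System.out.println("certificate entry: " + keyStore.isCertificateEntry("alias"));
    }
}

If only the certificate entry is present, the keystore needs the matching private key imported (for example from a PKCS#12 file via keytool -importkeystore) before the signature can be computed.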

ScriptUtils.executeSqlScript throws "connection is closed" after spring boot upgrade

I was updating Spring Boot from 2.5.1 to 2.7 together with the r2dbc and postgres dependencies. I did not change the application.yml or the test setup. Before the update my repository tests ran fine with Testcontainers, but now I see this exception, thrown by an @AfterEach method that tries to clean the DB:
2022-05-29 10:04:52.447 INFO 16673 --- [tainers-r2dbc-0] 🐳 [postgres:13.2] : Container postgres:13.2 started in PT1.244757S
Failed to execute SQL script statement #1 of InputStream resource [resource loaded through InputStream]: DROP SCHEMA public CASCADE; nested exception is io.r2dbc.postgresql.client.ReactorNettyClient$PostgresConnectionClosedException: Cannot exchange messages because the connection is closed
org.springframework.r2dbc.connection.init.ScriptStatementFailedException: Failed to execute SQL script statement #1 of InputStream resource [resource loaded through InputStream]: DROP SCHEMA public CASCADE; nested exception is io.r2dbc.postgresql.client.ReactorNettyClient$PostgresConnectionClosedException: Cannot exchange messages because the connection is closed
at org.springframework.r2dbc.connection.init.ScriptUtils.lambda$runStatement$9(ScriptUtils.java:571)
This is my abstract RepositoryTest:
@DataR2dbcTest
@ActiveProfiles("test")
internal abstract class RepositoryTest {

    @Autowired
    protected lateinit var connectionFactory: ConnectionFactory

    @AfterEach
    fun clean() {
        runSql(
            """
            DROP SCHEMA public CASCADE;
            CREATE SCHEMA public;
            """
        )
    }

    protected fun runSql(sql: String) {
        runScript(InputStreamResource(sql.byteInputStream()))
    }

    protected fun runScript(sqlScript: Resource) {
        runBlocking {
            val connection = connectionFactory.create().awaitFirst()
            ScriptUtils.executeSqlScript(connection, sqlScript)
                .block() // <---- throws the said exception, but it worked before the update.
        }
    }
}
My actual test looks like this:
internal class MyRepoTest : RepositoryTest() {

    @Autowired
    private lateinit var myRepo: MyRepository

    @Test
    fun someTest() {
        val userId = 3429L
        val myEntities = ...
        runBlocking { myRepo.saveAll(myEntities).collect() }
        val result = myRepo.findAllByUserId(userId).asFlux()
        StepVerifier.create(result)
            .expectNextMatches { it.userId == userId }
            .expectNextMatches { it.userId == userId }
            .verifyComplete()
    }
}
I guess the way I execute the SQL commands is not right; how should I do it?
val connection = connectionFactory.create().awaitFirst()
ScriptUtils.executeSqlScript(connection, sqlScript)
    .block() // <---- throws the said exception, but it worked before the update.
EDIT
I figured out that using ResourceDatabasePopulator works fine:
protected fun runScript(sqlScript: Resource) {
    runBlocking {
        ResourceDatabasePopulator(sqlScript).populate(connectionFactory).block()
    }
}
But I still would like to understand why the original implementation now fails.
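I can't say definitively what changed across the upgrade, but one pattern that avoids this problem class entirely is to scope the connection inside a single reactive chain, so it is acquired, used, and released together instead of being bridged through awaitFirst() and then block() separately. A hedged sketch (Java, standard spring-r2dbc and Reactor APIs):

import io.r2dbc.spi.Connection;
import io.r2dbc.spi.ConnectionFactory;
import org.springframework.core.io.Resource;
import org.springframework.r2dbc.connection.init.ScriptUtils;
import reactor.core.publisher.Mono;

class ScriptRunner {
    // Acquire, use and release the connection in one chain; usingWhen
    // guarantees close() runs after the script, on success or error.
    static Mono<Void> runScript(ConnectionFactory connectionFactory, Resource sqlScript) {
        Mono<Connection> connection = Mono.from(connectionFactory.create());
        return Mono.usingWhen(
                connection,
                conn -> ScriptUtils.executeSqlScript(conn, sqlScript),
                Connection::close);
    }
}

This mirrors what ResourceDatabasePopulator does internally, which may be why the EDIT version works: it manages the connection lifecycle itself rather than receiving an already-detached connection.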

RabbitTransactionManager not rolling back at ChainedTransactionManager when an error occurs

I'm trying to use one transaction manager (ChainedTransactionManager) for Rabbit and Kafka, chaining RabbitTransactionManager and KafkaTransactionManager. We intend to achieve a best-effort 1-phase commit.
To test it, the transactional method throws an exception after the two operations (sending a message to a Rabbit exchange and publishing an event to Kafka). When running the test, the logs suggest a rollback is initiated, but the message ends up in Rabbit anyway.
Notes:
We're using QPid to simulate in-memory RabbitMQ for testing (version 7.1.12)
We're using an in-memory Kafka for testing (spring-kafka-test)
Other relevant frameworks/libraries: spring-cloud-stream
Here's the method where the problem occurs:
@Transactional
public void processMessageAndEvent() {
    Message<String> message = MessageBuilder
            .withPayload("Message to RabbitMQ")
            .build();
    outputToRabbitMQExchange.output().send(message);

    outputToKafkaTopic.output().send(
            MessageBuilder.withPayload("Message to Kafka")
                    .setHeader(KafkaHeaders.MESSAGE_KEY, "Kafka message key")
                    .build()
    );
    throw new RuntimeException("We want the previous changes to rollback");
}
Here is the main Spring Boot application configuration:
@SpringBootApplication
@EnableTransactionManagement
public class MyApplication {

    public static void main(String[] args) {
        SpringApplication.run(MyApplication.class, args);
    }
}
Here is the TransactionManager configuration:
@Bean
public RabbitTransactionManager rabbitTransactionManager(ConnectionFactory cf) {
    return new RabbitTransactionManager(cf);
}

@Bean(name = "transactionManager")
@Primary
public ChainedTransactionManager chainedTransactionManager(RabbitTransactionManager rtm, BinderFactory binders) {
    ProducerFactory<byte[], byte[]> pf = ((KafkaMessageChannelBinder) binders.getBinder("kafka", MessageChannel.class))
            .getTransactionalProducerFactory();
    KafkaTransactionManager<byte[], byte[]> ktm = new KafkaTransactionManager<>(pf);
    ktm.setTransactionSynchronization(AbstractPlatformTransactionManager.SYNCHRONIZATION_ON_ACTUAL_TRANSACTION);
    return new ChainedKafkaTransactionManager<>(ktm, rtm);
}
And finally, the relevant configuration in the application.yml file:
spring:
  application:
    name: my-application
  main:
    allow-bean-definition-overriding: true
  cloud:
    stream:
      bindings:
        source_outputToRabbitMQExchange:
          content-type: application/json
          destination: outputToRabbitMQExchange
          group: ${spring.application.name}
        sink_outputToKafkaTopic:
          content-type: application/json
          destination: outputToKafkaTopic
          binder: kafka
      rabbit:
        bindings:
          output_outputToRabbitMQExchange:
            producer:
              transacted: true
              routing-key-expression: headers.myKey
      kafka:
        bindings:
          sink_outputToKafkaTopic:
            producer:
              transacted: true
        binder:
          brokers: ${...kafka.hostname}
          transaction:
            transaction-id-prefix: ${CF_INSTANCE_INDEX}.${spring.application.name}.T
      default-binder: rabbit
  kafka:
    producer:
      properties:
        max.block.ms: 3000
        transaction.timeout.ms: 5000
        enable.idempotence: true
        retries: 1
        acks: all
    bootstrap-servers: ${...kafka.hostname}
When we execute the method, we can see the message is still in Rabbit despite the logs saying the transaction is to be rolled back.
Is there anything we could be missing or have misunderstood?
@EnableBinding is deprecated in favor of the newer functional programming model.
That said, I copied your code/config pretty much as-is (transacted is not a kafka producer binding property) and it works fine for me (Boot 2.4.5, cloud 2020.0.2)...
@SpringBootApplication
@EnableTransactionManagement
@EnableBinding(Bindings.class)
public class So67297869Application {

    public static void main(String[] args) {
        SpringApplication.run(So67297869Application.class, args);
    }

    @Bean
    public RabbitTransactionManager rabbitTransactionManager(ConnectionFactory cf) {
        return new RabbitTransactionManager(cf);
    }

    @Bean(name = "transactionManager")
    @Primary
    public ChainedTransactionManager chainedTransactionManager(RabbitTransactionManager rtm, BinderFactory binders) {
        ProducerFactory<byte[], byte[]> pf = ((KafkaMessageChannelBinder) binders.getBinder("kafka",
                MessageChannel.class))
                        .getTransactionalProducerFactory();
        KafkaTransactionManager<byte[], byte[]> ktm = new KafkaTransactionManager<>(pf);
        ktm.setTransactionSynchronization(AbstractPlatformTransactionManager.SYNCHRONIZATION_ON_ACTUAL_TRANSACTION);
        return new ChainedKafkaTransactionManager<>(ktm, rtm);
    }

    @Bean
    public ApplicationRunner runner(Foo foo) {
        return args -> {
            foo.send("test");
        };
    }
}

interface Bindings {

    @Output("source_outputToRabbitMQExchange")
    MessageChannel rabbitOut();

    @Output("sink_outputToKafkaTopic")
    MessageChannel kafkaOut();
}

@Component
class Foo {

    @Autowired
    Bindings bindings;

    @Transactional
    public void send(String in) {
        bindings.rabbitOut().send(MessageBuilder.withPayload(in)
                .setHeader("myKey", "test")
                .build());
        bindings.kafkaOut().send(MessageBuilder.withPayload(in)
                .setHeader(KafkaHeaders.MESSAGE_KEY, "test".getBytes())
                .build());
        throw new RuntimeException("fail");
    }
}
spring:
  application:
    name: my-application
  main:
    allow-bean-definition-overriding: true
  cloud:
    stream:
      bindings:
        source_outputToRabbitMQExchange:
          content-type: application/json
          destination: outputToRabbitMQExchange
          group: ${spring.application.name}
        sink_outputToKafkaTopic:
          content-type: application/json
          destination: outputToKafkaTopic
          binder: kafka
      rabbit:
        bindings:
          source_outputToRabbitMQExchange:
            producer:
              transacted: true
              routing-key-expression: headers.myKey
      kafka:
        binder:
          brokers: localhost:9092
          transaction:
            transaction-id-prefix: foo.${spring.application.name}.T
      default-binder: rabbit
  kafka:
    producer:
      properties:
        max.block.ms: 3000
        transaction.timeout.ms: 5000
        enable.idempotence: true
        retries: 1
        acks: all
    bootstrap-servers: localhost:9092

logging:
  level:
    org.springframework.transaction: debug
    org.springframework.kafka: debug
    org.springframework.amqp.rabbit: debug
2021-04-28 09:35:32.488 DEBUG 53253 --- [ main] o.s.a.r.t.RabbitTransactionManager : Initiating transaction rollback
2021-04-28 09:35:32.489 DEBUG 53253 --- [ main] o.s.a.r.connection.RabbitResourceHolder : Rolling back messages to channel: Cached Rabbit Channel: AMQChannel(amqp://guest@127.0.0.1:5672/,2), conn: Proxy@3c770db4 Shared Rabbit Connection: SimpleConnection@1f736d00 [delegate=amqp://guest@127.0.0.1:5672/, localPort= 63439]
2021-04-28 09:35:32.490 DEBUG 53253 --- [ main] o.s.a.r.t.RabbitTransactionManager : Resuming suspended transaction after completion of inner transaction
2021-04-28 09:35:32.490 DEBUG 53253 --- [ main] o.s.k.t.KafkaTransactionManager : Initiating transaction rollback
2021-04-28 09:35:32.490 DEBUG 53253 --- [ main] o.s.k.core.DefaultKafkaProducerFactory : CloseSafeProducer [delegate=org.apache.kafka.clients.producer.KafkaProducer@38e83838] abortTransaction()
And there is no message in the queue that I bound to the exchange with RK #.
What versions are you using?
EDIT
And here is the equivalent app after removing the deprecations, using the functional model and StreamBridge (same yaml):
@SpringBootApplication
@EnableTransactionManagement
public class So67297869Application {

    public static void main(String[] args) {
        SpringApplication.run(So67297869Application.class, args);
    }

    @Bean
    public RabbitTransactionManager rabbitTransactionManager(ConnectionFactory cf) {
        return new RabbitTransactionManager(cf);
    }

    @Bean(name = "transactionManager")
    @Primary
    public ChainedTransactionManager chainedTransactionManager(RabbitTransactionManager rtm, BinderFactory binders) {
        ProducerFactory<byte[], byte[]> pf = ((KafkaMessageChannelBinder) binders.getBinder("kafka",
                MessageChannel.class))
                        .getTransactionalProducerFactory();
        KafkaTransactionManager<byte[], byte[]> ktm = new KafkaTransactionManager<>(pf);
        ktm.setTransactionSynchronization(AbstractPlatformTransactionManager.SYNCHRONIZATION_ON_ACTUAL_TRANSACTION);
        return new ChainedKafkaTransactionManager<>(ktm, rtm);
    }

    @Bean
    public ApplicationRunner runner(Foo foo) {
        return args -> {
            foo.send("test");
        };
    }
}

@Component
class Foo {

    @Autowired
    StreamBridge bridge;

    @Transactional
    public void send(String in) {
        bridge.send("source_outputToRabbitMQExchange", MessageBuilder.withPayload(in)
                .setHeader("myKey", "test")
                .build());
        bridge.send("sink_outputToKafkaTopic", MessageBuilder.withPayload(in)
                .setHeader(KafkaHeaders.MESSAGE_KEY, "test".getBytes())
                .build());
        throw new RuntimeException("fail");
    }
}
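A side note beyond the original exchange: ChainedKafkaTransactionManager has since been deprecated (spring-kafka 2.7), and the suggested replacement is nesting the two transactions instead of chaining them. A hedged sketch of that pattern; the bean names rabbitTransactionManager and kafkaTransactionManager and the elided send calls are placeholders:

import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Transactional;

// The outer method opens the Rabbit transaction; the inner bean opens the
// Kafka transaction inside it, so an exception aborts Kafka first and then
// rolls back Rabbit -- the same best-effort ordering the chain provided.
@Component
class RabbitThenKafkaSender {

    private final KafkaStep kafkaStep;

    RabbitThenKafkaSender(KafkaStep kafkaStep) {
        this.kafkaStep = kafkaStep;
    }

    @Transactional(transactionManager = "rabbitTransactionManager")
    public void send(String payload) {
        // ... send to the Rabbit exchange here, then nest the Kafka work
        kafkaStep.send(payload);
    }
}

@Component
class KafkaStep {

    @Transactional(transactionManager = "kafkaTransactionManager")
    public void send(String payload) {
        // ... publish to the Kafka topic here
    }
}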

Spring Batch - Create Two Datasources and how to customize them to use other properties

I need quick guidance on creating two relational datasources in a Spring Boot Batch project: Oracle as the source DB and Postgres as the target DB.
Spring Boot version 2.2.5.RELEASE
I want to customize both datasources to use all the properties mentioned here (http://shekup.blogspot.com/2018/05/multiple-data-sources-in-spring-batch.html#:~:text=Multiple%20Data%20sources%20in%20Spring%20batch,such%20as%20ETL%20batch%20job.):
spring.datasource.url=jdbc:postgresql://localhost:5432/postgres?currentSchema=XXXX&useSSL=false
spring.datasource.username=postgres
spring.datasource.password=admin
spring.datasource.driver-class-name=org.postgresql.Driver
# max no. of connections in the pool
spring.datasource.hikari.maximum-pool-size=30
spring.datasource.hikari.minimum-idle=20
spring.datasource.hikari.connection-test-query=SELECT 1
spring.jpa.database-platform=org.hibernate.dialect.PostgreSQLDialect
spring.jpa.properties.hibernate.default_schema=YYYY
spring.batch.initialize-schema=always
spring.batch.table-prefix=YYYY.BATCH_
### Source oracle DS ###
oracle.datasource.url=jdbc:oracle:thin:@//XXXX:1527/XXX
oracle.datasource.username=XXX
oracle.datasource.password=XXX
oracle.datasource.driverClassName=oracle.jdbc.OracleDriver
# max no. of connections in the pool
oracle.spring.datasource.hikari.maximum-pool-size=30
oracle.spring.datasource.hikari.minimum-idle=20
oracle.spring.datasource.hikari.connection-test-query=SELECT 1
DBConfig
@Configuration
public class DataSourceConfig {

    @Autowired
    private Environment env;

    @Primary
    @Bean(name = "postgresDataSource")
    @ConfigurationProperties("spring.datasource")
    public DataSource batchDataSource() {
        return DataSourceBuilder.create().url(env.getProperty("spring.datasource.url"))
                .driverClassName(env.getProperty("spring.datasource.driver-class-name"))
                .username(env.getProperty("spring.datasource.username"))
                .password(env.getProperty("spring.datasource.password")).build();
    }

    @Bean(name = "oracleDataSource")
    @ConfigurationProperties("oracle.datasource")
    public DataSource mysqlBatchDataSource() {
        return DataSourceBuilder.create().url(env.getProperty("oracle.datasource.url"))
                .driverClassName(env.getProperty("oracle.datasource.driver-class-name"))
                .username(env.getProperty("oracle.datasource.username"))
                .password(env.getProperty("oracle.datasource.password")).build();
    }
}
You have to provide the additional information in your own code.
Do not use DataSourceBuilder; here are examples for Tomcat JDBC and Hikari:
@Bean(name = "sourceBatchDataSource")
public DataSource sourceBatchDataSource() {
    HikariDataSource hikariDataSource = new HikariDataSource();
    hikariDataSource.setJdbcUrl(sourceDataSourceProperties.getUrl());
    hikariDataSource.setUsername(sourceDataSourceProperties.getUsername());
    hikariDataSource.setPassword(sourceDataSourceProperties.getPassword());
    hikariDataSource.setDriverClassName(sourceDataSourceProperties.getDriverClassName());
    hikariDataSource.setAutoCommit(from(environment.getProperty("spring.datasource.hikari.auto-commit")));
    hikariDataSource.setConnectionTimeout(environment.getProperty("spring.datasource.hikari.connection-timeout", Integer.class));
    hikariDataSource.setMaximumPoolSize(environment.getProperty("spring.datasource.hikari.maximum-pool-size", Integer.class));
    hikariDataSource.setMaxLifetime(environment.getProperty("spring.datasource.hikari.max-lifetime", Integer.class));
    hikariDataSource.setMinimumIdle(environment.getProperty("spring.datasource.hikari.minimum-idle", Integer.class));
    hikariDataSource.setPoolName("SourceBatchHikariCP");
    return hikariDataSource;
}

@Primary
@Bean(destroyMethod = "close", name = "sourceDataSource")
public DataSource sourceDataSource() {
    DataSourceProperties dataSourceProperties = dataSourceProperties();
    PoolProperties properties = new PoolProperties();
    properties.setUrl(dataSourceProperties.getUrl());
    properties.setDriverClassName(dataSourceProperties.getDriverClassName());
    properties.setUsername(dataSourceProperties.getUsername());
    properties.setPassword(dataSourceProperties.getPassword());
    properties.setInitialSize(environment.getProperty("spring.datasource.tomcat.initial-size", Integer.class));
    properties.setMaxWait(environment.getProperty("spring.datasource.tomcat.max-wait", Integer.class));
    properties.setMaxActive(environment.getProperty("spring.datasource.tomcat.max-active", Integer.class));
    properties.setMaxIdle(environment.getProperty("spring.datasource.tomcat.max-idle", Integer.class));
    properties.setMinIdle(environment.getProperty("spring.datasource.tomcat.min-idle", Integer.class));
    properties.setDefaultAutoCommit(from(environment.getProperty("spring.datasource.tomcat.default-auto-commit")));
    properties.setValidationQuery(environment.getProperty("spring.datasource.tomcat.validation-query"));
    properties.setTestOnBorrow(from(environment.getProperty("spring.datasource.tomcat.test-on-borrow")));
    properties.setTestWhileIdle(from(environment.getProperty("spring.datasource.tomcat.test-while-idle")));
    properties.setTestOnReturn(from(environment.getProperty("spring.datasource.tomcat.test-on-return")));
    properties.setTimeBetweenEvictionRunsMillis(environment.getProperty("spring.datasource.tomcat.time-between-eviction-runs-millis", Integer.class));
    properties.setMinEvictableIdleTimeMillis(environment.getProperty("spring.datasource.tomcat.min-evictable-idle-time-millis", Integer.class));
    properties.setRemoveAbandoned(from(environment.getProperty("spring.datasource.tomcat.remove-abandoned")));
    properties.setRemoveAbandonedTimeout(environment.getProperty("spring.datasource.tomcat.remove-abandoned-timeout", Integer.class));
    properties.setLogAbandoned(from(environment.getProperty("spring.datasource.tomcat.log-abandoned")));
    properties.setLogValidationErrors(from(environment.getProperty("spring.datasource.tomcat.log-validation-errors")));
    properties.setJdbcInterceptors(environment.getProperty("spring.datasource.tomcat.jdbc-interceptors"));
    return new org.apache.tomcat.jdbc.pool.DataSource(properties);
}
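One more piece the question touches on but neither snippet shows: telling Spring Batch which of the two pools holds its BATCH_* metadata tables (the spring.batch.table-prefix above implies Postgres). By default the Boot auto-configuration uses the @Primary DataSource; since Boot 2.2 you can instead mark a dedicated pool with @BatchDataSource. A hedged sketch; the property prefix and bean names are assumptions:

import com.zaxxer.hikari.HikariDataSource;
import javax.sql.DataSource;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.autoconfigure.batch.BatchDataSource;
import org.springframework.boot.autoconfigure.jdbc.DataSourceProperties;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class BatchMetadataDataSourceConfig {

    // Hypothetical: bind the target (Postgres) coordinates from spring.datasource.*
    @Bean(name = "batchDataSourceProperties")
    @ConfigurationProperties("spring.datasource")
    public DataSourceProperties batchDataSourceProperties() {
        return new DataSourceProperties();
    }

    // @BatchDataSource (Boot 2.2+) tells the Batch auto-configuration to keep
    // its BATCH_* metadata in this pool rather than in the @Primary DataSource.
    @BatchDataSource
    @Bean(name = "batchMetadataDataSource")
    public DataSource batchMetadataDataSource(
            @Qualifier("batchDataSourceProperties") DataSourceProperties props) {
        HikariDataSource ds = new HikariDataSource();
        ds.setJdbcUrl(props.getUrl());
        ds.setUsername(props.getUsername());
        ds.setPassword(props.getPassword());
        ds.setDriverClassName(props.getDriverClassName());
        return ds;
    }
}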

The web application [ROOT] appears to have started a thread named [pollingConfigurationSource] but has failed to stop it. Memory leak

Hi, I am getting a memory leak warning while running the project. We are using Spring Boot + Quartz scheduler + Liquibase + PostgreSQL 9.6.
Error:
2018-10-15 11:43:19.005 WARN [billing,,,] 19152 --- [ost-startStop-1] o.a.c.loader.WebappClassLoaderBase : The web application [ROOT] appears to have started a thread named [pollingConfigurationSource] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
java.lang.Thread.run(Thread.java:748)
and my configuration code is:
package com.xyz.api;
#Slf4j
#XyzService
#EnableWebSecurity
#EnableResourceServer
#EnableGlobalMethodSecurity( prePostEnabled = true )
public class BApplication extends ResourceServerConfigurerAdapter implements WebMvcConfigurer
{
public static void main( String[] args )
{
SpringApplication.run( BApplication.class, args );
}
#Override
public void configure( ResourceServerSecurityConfigurer resources )
{
resources.resourceId( "xxxx" );
}
#Override
public void configure( HttpSecurity http ) throws Exception
{
http.authorizeRequests()
.antMatchers( "/" ).permitAll()
.antMatchers( "/docs/**" ).permitAll()
.antMatchers( "/actuator/health" ).permitAll() // can we tighten this up?
.anyRequest().authenticated(); //individual services use annotations
}
#Override
public void addViewControllers( ViewControllerRegistry registry )
{
registry.addViewController( "/" ).setViewName( "forward:/docs/index.html" );
}
#Bean
public Clock clock()
{
return Clock.systemUTC();
}
#Bean
public RestTemplate restTemplate( RestTemplateBuilder builder )
{
return builder.build();
}
}
Actually this worked fine earlier, but suddenly it gives this warning at deployment. Please help me solve this issue.
I also referred to: Memory Leak Issue with spring-cloud-starter-hystrix and spring-cloud-starter-archaius integration.
But I have not found a solution there.