I added ShedLock to my project to prevent a scheduled job from running more than once. I configured it as below, but I'm getting an
"org.postgresql.util.PSQLException: ERROR: relation "shedlock" does not exist" error.
This is the LockProvider bean:
@Bean
public LockProvider lockProvider(DataSource dataSource) {
    return new JdbcTemplateLockProvider(
            JdbcTemplateLockProvider.Configuration.builder()
                    .withJdbcTemplate(new JdbcTemplate(dataSource))
                    .usingDbTime()
                    .build()
    );
}
This is the scheduled job:
@Scheduled(cron = "${cronProperty:0 00 23 * * *}")
@SchedulerLock(name = "schedulerLockName")
public void scheduledJob() {
    // ...
}
I added these annotations to the class that contains the scheduledJob method:
@EnableScheduling
@Component
@Configuration
@EnableSchedulerLock(defaultLockAtMostFor = "2m")
I'm using Spring Data for my database operations, with these properties:
spring.datasource.url = jdbc:postgresql://ip:port/databaseName?currentSchema=schemeName
spring.datasource.driver-class-name = org.postgresql.Driver
spring.jpa.database = postgresql
spring.datasource.platform = postgresql
spring.datasource.hikari.maximum-pool-size=5
spring.jpa.database-platform=org.hibernate.dialect.PostgreSQLDialect
spring.datasource.username = username
spring.datasource.password = password
You have to create the table as described in the documentation.
Maybe this is what you are missing: if you need to specify a schema, you can set it in the table name using the usual dot notation:
new JdbcTemplateLockProvider(datasource, "my_schema.shedlock")
I faced this problem too, even though the shedlock table had been created.
Workarounds for this are:
Setting the Postgres user's default schema with ALTER ROLE YourPgUser SET search_path TO ..., or
Specifying the shedlock schema on the LockProvider bean:
@Bean
public LockProvider getLockProvider(@Autowired JdbcTemplate jdbcTemplate) {
    jdbcTemplate.execute("SET search_path TO domaindbschema");
    return new JdbcTemplateLockProvider(jdbcTemplate);
}
or another style
@Bean
public LockProvider getLockProvider(@Autowired JdbcTemplate jdbcTemplate) {
    return new JdbcTemplateLockProvider(jdbcTemplate, "domaindbschema.shedlock");
}
I modified this sample batch job provided by Spring to use two custom datasources instead of the one autoconfigured by Boot. Both datasources point to the same MySQL server, but to different schemas: one for the Batch/Task tables and another for the app tables. MySQL is running locally. Performance was much, much slower than the same job running with the default Boot-configured datasource or with one custom datasource.
Here are the timings I got; I can't figure out why #3 takes so long:
Default Boot configured datasource - 1 second
One custom datasource (for both Batch/Task and App) - 1 second
Two custom datasources (one each for Batch/Task and App) - 90 seconds !!!
Do I need to set any connection pool settings for the custom datasources when using two of them? I tried a few, but they didn't help.
Here is the properties file:
spring.application.name=fileIngest
spring.datasource.url=jdbc:mysql://localhost:3306/test-scdf?useSSL=false
spring.datasource.username=<user>
spring.datasource.password=<pwd>
spring.datasource.driverClassName=org.mariadb.jdbc.Driver
app.datasource.url=jdbc:mysql://localhost:3306/test?useSSL=false
app.datasource.username=<user>
app.datasource.password=<pwd>
app.datasource.driverClassName=org.mariadb.jdbc.Driver
Here are relevant portions of my datasource config as recommended here.
#Bean(name = "springDataSource") // for Batch/Task tables
public DataSource dataSource(#Qualifier("springDataSourceProperties")DataSourceProperties springDataSourceProperties) {
return DataSourceBuilder.create().driverClassName(springDataSourceProperties.getDriverClassName()).
url(springDataSourceProperties.getUrl()).
password(springDataSourceProperties.getPassword()).
username(springDataSourceProperties.getUsername()).
build();
}
#Bean(name = "appDataSource") // for App tables
#Primary
public DataSource appDataSource(#Qualifier("appDataSourceProperties") DataSourceProperties appDataSourceProperties) {
DataSource ds = DataSourceBuilder.create().driverClassName(appDataSourceProperties.getDriverClassName()).
url(appDataSourceProperties.getUrl()).
password(appDataSourceProperties.getPassword()).
username(appDataSourceProperties.getUsername()).
build();
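The two DataSourceProperties beans referenced by the qualifiers above would typically be bound to the two property prefixes like this (a sketch; the bean and prefix names are assumed from the qualifiers and the properties file):
@Bean("springDataSourceProperties")
@ConfigurationProperties("spring.datasource")
public DataSourceProperties springDataSourceProperties() {
    // Binds spring.datasource.* (Batch/Task schema)
    return new DataSourceProperties();
}

@Bean("appDataSourceProperties")
@ConfigurationProperties("app.datasource")
public DataSourceProperties appDataSourceProperties() {
    // Binds app.datasource.* (app schema)
    return new DataSourceProperties();
}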
I just inject the appropriate datasource into the BatchConfiguration as needed.
@Configuration
@EnableBatchProcessing
public class BatchConfiguration extends DefaultBatchConfigurer {
    ...

    @Override
    @Autowired
    public void setDataSource(@Qualifier("springDataSource") DataSource batchDataSource) {
        super.setDataSource(batchDataSource);
    }

    @Bean
    public BatchDataSourceInitializer batchDataSourceInitializer(@Qualifier("springDataSource") DataSource batchDataSource,
            ResourceLoader resourceLoader) {
        BatchProperties batchProperties = new BatchProperties();
        batchProperties.setInitializeSchema(DataSourceInitializationMode.ALWAYS);
        return new BatchDataSourceInitializer(batchDataSource, resourceLoader, batchProperties);
    }
}
In Spring Boot, if we want to connect to MongoDB, we can create a configuration file for MongoDB or put the datasource in application.properties.
I am following the second way.
For me, I am getting this error:
"Timeout while receiving message; nested exception is com.mongodb.MongoSocketReadTimeoutException: Timeout while receiving message
.
spring.data.mongodb.uri = mongodb://mongodb0.example.com:27017/admin
I am getting this error if I don't use my app for 6-7 hours and then try to hit any controller to retrieve data from MongoDB. After one or two tries I am able to get the data.
Question: is this normal MongoDB behavior? In my case it is closing the socket after some hours.
I read some blogs saying you can set socket-keep-alive so the connection pool will not be closed.
In a Spring Boot MongoDB connection we can pass options in the URI, like:
spring.data.mongodb.uri = mongodb://mongodb0.example.com:27017/admin?replicaSet=test&connectTimeoutMS=300000
So I want to set the socket-keep-alive option for spring.data.mongodb.uri the same way as replicaSet here.
I searched the official site but wasn't able to find any such option.
You can achieve this by providing a MongoClientOptions bean. Spring Boot's MongoAutoConfiguration will pick this MongoClientOptions bean up and use it further on:
@Bean
public MongoClientOptions mongoClientOptions() {
    return MongoClientOptions.builder()
            .socketKeepAlive(true)
            .build();
}
Also note that the socket-keep-alive option is deprecated (and defaults to true) since mongo-driver version 3.5 (used by Spring Data since version 2.0.0 of spring-data-mongodb).
You can also pass this option using MongoClientOptionsFactoryBean:
@Bean
public MongoClientOptions mongoClientOptions() {
    try {
        final MongoClientOptionsFactoryBean bean = new MongoClientOptionsFactoryBean();
        bean.setSocketKeepAlive(true);
        bean.afterPropertiesSet();
        return bean.getObject();
    } catch (final Exception e) {
        throw new BeanCreationException(e.getMessage(), e);
    }
}
Here is an example of this configuration, extending AbstractMongoConfiguration:
@Configuration
public class DataportalApplicationConfig extends AbstractMongoConfiguration {

    // @Value: inject property values into components
    @Value("${spring.data.mongodb.uri}")
    private String uri;

    @Value("${spring.data.mongodb.database}")
    private String database;

    /**
     * Configure the MongoClient with the uri.
     *
     * @return MongoClient
     */
    @Override
    public MongoClient mongoClient() {
        return new MongoClient(new MongoClientURI(uri,
                MongoClientOptions.builder().socketKeepAlive(true)));
    }

    @Override
    protected String getDatabaseName() {
        return database;
    }
}
<dataset>
    <user id="1" created_date='2017-01-01 00:00:00' email="" user_name="root"/>
</dataset>
The XML above gives me an error. The problem is that user is a reserved word. How can I solve this? Any links?
Update
I am using Spring Boot, Spring Data JPA, spring-test-dbunit, DbUnit, and PostgreSQL.
According to this forum thread, https://sourceforge.net/p/dbunit/mailman/message/20643023/, it doesn't seem like DbUnit has a way to quote the table name. But you can configure a DatabaseDataSourceConnectionFactoryBean if you do not want to rename the table for some reason, or are working with a legacy database:
@Configuration
public class Custom.... {

    @Autowired
    private DataSource dataSource;

    @Bean
    public DatabaseConfigBean dbUnitDatabaseConfig() {
        DatabaseConfigBean dbConfigBean = new DatabaseConfigBean();
        // dbConfigBean.setDatatypeFactory(new PostgresqlDataTypeFactory());
        dbConfigBean.setQualifiedTableNames(true);
        return dbConfigBean;
    }

    @Bean
    public DatabaseDataSourceConnectionFactoryBean dbUnitDatabaseConnection() {
        DatabaseDataSourceConnectionFactoryBean databaseDataSourceConnectionFactoryBean = new DatabaseDataSourceConnectionFactoryBean(dataSource);
        databaseDataSourceConnectionFactoryBean.setDatabaseConfig(dbUnitDatabaseConfig());
        return databaseDataSourceConnectionFactoryBean;
    }
}
After setting qualifiedTableNames to true, you should give the fully qualified name for your tables in the XML:
<public.user id="1" created_date='2017-01-01 00:00:00' email="root@demo.io" password="your password" username="root"/>
I tried Cassandra version 2.2.6 (a Docker image) and 3.7 (the latest version, not with Docker). Both of them report the same issue when I create a trigger for a table: an exception when creating the Cassandra trigger.
```
package com.ttData.triggers;

import ...

public class DataTrigger implements ITrigger {

    private Properties properties = loadProperties();

    @Autowired
    private KafkaTemplate<Integer, String> kafkaTemplate;

    private static AtomicInteger index = new AtomicInteger(1);

    @Override
    public Collection<Mutation> augment(Partition update) {
        ...
        return Collections.singletonList(audit.build());
    }

    private static Properties loadProperties()
    {
        ...
        return properties;
    }
}
```
You should use single quotes instead of double quotes for the class name:
cqlsh:test> CREATE TRIGGER myTrigger on mytable using "className";
SyntaxException: <ErrorMessage code=2000 [Syntax error in CQL query] message="line 1:42 mismatched input 'className' expecting STRING_LITERAL (...TRIGGER myTrigger on mytable using ["classNam]e";)">
cqlsh:test>
cqlsh:test> CREATE TRIGGER myTrigger on mytable using 'className';
ConfigurationException: <ErrorMessage code=2300 [Query invalid because of configuration issue] message="Trigger class 'className' doesn't exist">
After debugging the Cassandra source code, I think this is a bug.
Even though the trigger directory and trigger classes are both correct, it can still report that the trigger class doesn't exist.
The reason is that the worker thread creating the trigger is not a secure thread, i.e. one managed by Cassandra's SecurityManager and belonging to a SecurityThreadGroup, so an exception is thrown when the security validation fails.
I'm setting up a new version of my application on a demo server and would love to find a way of resetting the database daily. I guess I could always have a cron job executing drop and create queries, but I'm looking for a cleaner approach. I tried using a special persistence unit with a drop-create approach, but it doesn't work, as the system connects to and disconnects from the server frequently (on demand).
Is there a better approach?
H2 supports a special SQL statement to drop all objects:
DROP ALL OBJECTS [DELETE FILES]
If you don't want to drop all tables, you might want to use truncate table:
TRUNCATE TABLE
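For example, a minimal sketch of running the reset over plain JDBC (the URL and credentials are placeholders):
// Sketch: wipe an H2 database using DROP ALL OBJECTS.
static void resetDatabase() throws SQLException {
    try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:demo", "sa", "");
         Statement stmt = conn.createStatement()) {
        stmt.execute("DROP ALL OBJECTS"); // append DELETE FILES for a file-based database
    }
}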
As this answer is the first Google result for "reset H2 database", I post my solution below.
After each JUnit @Test:
Disable integrity constraints
List all tables in the (default) PUBLIC schema
Truncate all tables
List all sequences in the (default) PUBLIC schema
Reset all sequences
Re-enable the constraints
@After
public void tearDown() {
    try {
        clearDatabase();
    } catch (Exception e) {
        Fail.fail(e.getMessage());
    }
}
public void clearDatabase() throws SQLException {
    Connection c = datasource.getConnection();
    Statement s = c.createStatement();

    // Disable FK
    s.execute("SET REFERENTIAL_INTEGRITY FALSE");

    // Find all tables and truncate them
    Set<String> tables = new HashSet<String>();
    ResultSet rs = s.executeQuery("SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA='PUBLIC'");
    while (rs.next()) {
        tables.add(rs.getString(1));
    }
    rs.close();
    for (String table : tables) {
        s.executeUpdate("TRUNCATE TABLE " + table);
    }

    // Idem for sequences
    Set<String> sequences = new HashSet<String>();
    rs = s.executeQuery("SELECT SEQUENCE_NAME FROM INFORMATION_SCHEMA.SEQUENCES WHERE SEQUENCE_SCHEMA='PUBLIC'");
    while (rs.next()) {
        sequences.add(rs.getString(1));
    }
    rs.close();
    for (String seq : sequences) {
        s.executeUpdate("ALTER SEQUENCE " + seq + " RESTART WITH 1");
    }

    // Enable FK
    s.execute("SET REFERENTIAL_INTEGRITY TRUE");
    s.close();
    c.close();
}
The other solution would be to recreate the database at the beginning of each test, but that might be too slow for a big DB.
There is special syntax in Spring for database manipulation within unit tests:
@Sql(scripts = "classpath:drop_all.sql", executionPhase = Sql.ExecutionPhase.AFTER_TEST_METHOD)
@Sql(scripts = {"classpath:create.sql", "classpath:init.sql"}, executionPhase = Sql.ExecutionPhase.BEFORE_TEST_METHOD)
public class UnitTest {}
In this example we execute the drop_all.sql script (where we drop all required tables) after every test method, and the create.sql script (where we create all required tables) plus the init.sql script (where we initialize those tables) before each test method.
The command: SHUTDOWN
You can execute it using:
RunScript.execute(jdbc_url, user, password, "classpath:shutdown.sql", "UTF8", false);
I run it every time the suite of tests has finished, using @AfterClass.
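Put together, a sketch of what that could look like (assuming shutdown.sql on the test classpath contains just the single SHUTDOWN statement, and the constants hold your own connection settings):
@AfterClass
public static void shutdownDatabase() throws SQLException {
    // Executes shutdown.sql (containing just: SHUTDOWN) against the test database.
    RunScript.execute(JDBC_URL, USER, PASSWORD, "classpath:shutdown.sql", "UTF8", false);
}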
If you are using Spring Boot, see this Stack Overflow question.
Set up your data source. I don't have any special close-on-exit settings:
datasource:
  driverClassName: org.h2.Driver
  url: "jdbc:h2:mem:psptrx"
Use Spring Boot's @DirtiesContext annotation:
@DirtiesContext(classMode = DirtiesContext.ClassMode.BEFORE_EACH_TEST_METHOD)
Use @Before to initialise on each test case.
@DirtiesContext will cause the H2 context to be dropped between each test.
You can add the following to application.properties to reset the tables managed by JPA:
spring.jpa.hibernate.ddl-auto=create
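Note that create drops and recreates the schema on every application startup; create-drop would additionally drop it when the application shuts down.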