Why won't `spring-data-jpa` with `spring-data-cassandra` create Cassandra tables automatically?

I'm using spring-data-cassandra:3.1.9 and my properties look like:
spring:
  data:
    cassandra:
      keyspace-name: general_log
      session-name: general_log
      local-datacenter: datacenter1
      schema-action: CREATE
Cassandra version: apache-cassandra-4.0.1
spring-boot: 2.4.7
spring-data-jpa: 2.4.9
spring-jdbc: 5.3.8
spring-orm: 5.3.8
My entity looks like:
@ApiModel(description = "Audit log")
@Entity
@Table(name = "audit_log")
@org.springframework.data.cassandra.core.mapping.Table("audit_log")
public class AuditLogPO implements Serializable {

    @PrimaryKeyClass
    public static class Id implements Serializable {
        private static final long serialVersionUID = 1L;

        @ApiModelProperty(value = "business key")
        @Column(name = "business_key")
        @PrimaryKeyColumn(ordinal = 1, ordering = Ordering.ASCENDING)
        private String businessKey;

        // setters & getters ...
    }

    @javax.persistence.Id
    @PrimaryKey
    @org.springframework.data.annotation.Id
    @Transient
    private Id id;

    @ApiModelProperty(value = "business partition")
    @Column(name = "business_partition")
    @org.springframework.data.cassandra.core.mapping.Column(value = "business_partition")
    private String businessPartition;

    // getters & setters ...
}
After running the application, the table audit_log is not created automatically.

Actually, after digging into the source code of spring-data-cassandra:3.1.9, you can check the implementation of
org.springframework.data.cassandra.config.SessionFactoryFactoryBean#performSchemaAction,
whose implementation is as follows:
protected void performSchemaAction() throws Exception {

    boolean create = false;
    boolean drop = DEFAULT_DROP_TABLES;
    boolean dropUnused = DEFAULT_DROP_UNUSED_TABLES;
    boolean ifNotExists = DEFAULT_CREATE_IF_NOT_EXISTS;

    switch (this.schemaAction) {
        case RECREATE_DROP_UNUSED:
            dropUnused = true;
        case RECREATE:
            drop = true;
        case CREATE_IF_NOT_EXISTS:
            ifNotExists = SchemaAction.CREATE_IF_NOT_EXISTS.equals(this.schemaAction);
        case CREATE:
            create = true;
        case NONE:
        default:
            // do nothing
    }

    if (create) {
        createTables(drop, dropUnused, ifNotExists);
    }
}
This means you have to assign CREATE to schema-action if the table has never been created, and CREATE_IF_NOT_EXISTS does not work.
Unfortunately, we're not done yet.
SessionFactoryFactoryBean#performSchemaAction will be invoked as expected, yet tables are still not created. Why?
It is because Spring Data registers the entities in org.springframework.data.cassandra.repository.support.CassandraRepositoryFactoryBean#afterPropertiesSet (via org.springframework.data.mapping.context.AbstractMappingContext#addPersistentEntity), while the performSchemaAction method is invoked in SessionFactoryFactoryBean#afterPropertiesSet. These two FactoryBeans have no defined order, so we do not know which will be invoked first.
This means that if SessionFactoryFactoryBean#afterPropertiesSet is invoked first, probably no entity has been registered yet. In that circumstance, no tables will be created automatically, for sure.
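One hedged mitigation is to make sure the mapping context already knows your entities at configuration time, via entity scanning, instead of relying on repository initialization to register them. A minimal sketch, assuming the entities live in a hypothetical package com.example.model:

import org.springframework.context.annotation.Configuration;
import org.springframework.data.cassandra.config.AbstractCassandraConfiguration;

@Configuration
public class CassandraConfig extends AbstractCassandraConfiguration {

    @Override
    protected String getKeyspaceName() {
        return "general_log"; // matches the application properties above
    }

    // getEntityBasePackages() is consulted while the mapping context is
    // built, so entities found here are registered before
    // SessionFactoryFactoryBean#afterPropertiesSet performs the schema action.
    @Override
    public String[] getEntityBasePackages() {
        return new String[] { "com.example.model" }; // hypothetical package
    }
}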
So how do we create these tables automatically?
One solution is to trigger the schema action manually from a bean of type ApplicationRunner.
First of all, let's create a subclass of SessionFactoryFactoryBean that exposes createTables:
public class ExecutableSessionFactoryFactoryBean extends SessionFactoryFactoryBean {

    @Override
    public void createTables(boolean drop, boolean dropUnused, boolean ifNotExists) throws Exception {
        super.createTables(drop, dropUnused, ifNotExists);
    }
}
Next we should override org.springframework.data.cassandra.config.AbstractCassandraConfiguration#cassandraSessionFactory as follows:
@Override
@Bean
@Primary
public SessionFactoryFactoryBean cassandraSessionFactory(CqlSession cqlSession) {
    sessionFactoryFactoryBean = new ExecutableSessionFactoryFactoryBean();
    // Initialize the CqlSession reference first since it is required, or must not be null!
    sessionFactoryFactoryBean.setSession(cqlSession);
    sessionFactoryFactoryBean.setConverter(requireBeanOfType(CassandraConverter.class));
    sessionFactoryFactoryBean.setKeyspaceCleaner(keyspaceCleaner());
    sessionFactoryFactoryBean.setKeyspacePopulator(keyspacePopulator());
    sessionFactoryFactoryBean.setSchemaAction(getSchemaAction());
    return sessionFactoryFactoryBean;
}
Now we can create an ApplicationRunner to perform the schema action:
@Bean
public ApplicationRunner autoCreateCassandraTablesRunner() {
    return args -> {
        if (SchemaAction.CREATE.name().equalsIgnoreCase(requireBeanOfType(CassandraProperties.class).getSchemaAction())) {
            sessionFactoryFactoryBean.createTables(false, false, true);
        }
    };
}
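Note that the bean methods above are assumed to live in a configuration class extending AbstractCassandraConfiguration (which supplies requireBeanOfType, getSchemaAction(), keyspacePopulator() and keyspaceCleaner()), with the factory bean kept in a field so the runner can reach it. A rough skeleton of that surrounding class:

@Configuration
public class CassandraAutoCreateConfig extends AbstractCassandraConfiguration {

    // kept in a field so autoCreateCassandraTablesRunner() can call
    // createTables() on it after the application has started
    private ExecutableSessionFactoryFactoryBean sessionFactoryFactoryBean;

    @Override
    protected String getKeyspaceName() {
        return "general_log";
    }

    // cassandraSessionFactory(CqlSession) and autoCreateCassandraTablesRunner()
    // from the snippets above go here
}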

Please refer to this doc: https://docs.spring.io/spring-data/cassandra/docs/4.0.x/reference/html/#cassandra.schema-management.initializing.config
But you still need to create the keyspace before executing the following code:
@Configuration
public class SessionFactoryInitializerConfiguration extends AbstractCassandraConfiguration {

    @Bean
    SessionFactoryInitializer sessionFactoryInitializer(SessionFactory sessionFactory) {
        SessionFactoryInitializer initializer = new SessionFactoryInitializer();
        initializer.setSessionFactory(sessionFactory);

        ResourceKeyspacePopulator populator = new ResourceKeyspacePopulator();
        populator.setSeparator(";");
        populator.setScripts(new ClassPathResource("com/myapp/cql/db-schema.cql"));
        initializer.setKeyspacePopulator(populator);

        return initializer;
    }

    // ...
}

You can also specify this behavior in your application.yml:
spring:
  data:
    cassandra:
      schema-action: create-if-not-exists
You will, however, need to create the keyspace (with the appropriate data center / replication factor pairs) ahead of time.
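If you configure Cassandra through AbstractCassandraConfiguration, one way to handle that prerequisite is to let Spring create the keyspace itself at startup. A minimal sketch, assuming a local single-node cluster where a replication factor of 1 is acceptable:

import java.util.Collections;
import java.util.List;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.cassandra.config.AbstractCassandraConfiguration;
import org.springframework.data.cassandra.core.cql.keyspace.CreateKeyspaceSpecification;

@Configuration
public class KeyspaceCreatingConfig extends AbstractCassandraConfiguration {

    @Override
    protected String getKeyspaceName() {
        return "general_log";
    }

    // Executed against the system session on startup, so the keyspace
    // exists by the time the schema action and session factory need it.
    @Override
    protected List<CreateKeyspaceSpecification> getKeyspaceCreations() {
        return Collections.singletonList(
                CreateKeyspaceSpecification.createKeyspace("general_log")
                        .ifNotExists()
                        .withSimpleReplication(1)); // replication factor 1: local dev only
    }
}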

Related

Dynamic injection using @SpringBean in Wicket

I have a form that generates a report based on collected information. I have multiple sources from which to generate reports, but the form for them is the same. I tried to implement the strategy pattern using an interface over the report generator services, but that led to Wicket complaining about serialization issues in various parts of the report generator. I would like to solve this without duplicating the code contained in the form, but I have not been able to find information on dynamic injection with @SpringBean.
Here is a rough mock-up of what I have:
public class ReportForm extends Panel {

    private IReportGenerator reportGenerator;

    public ReportForm(String id, IReportGenerator reportGenerator) {
        super(id);
        this.reportGenerator = reportGenerator;

        final Form<Void> form = new Form<Void>("form");
        this.add(form);
        ...
        form.add(new AjaxButton("button1") {
            private static final long serialVersionUID = 1L;

            @Override
            protected void onSubmit(AjaxRequestTarget target)
            {
                byte[] report = reportGenerator.getReport(...);
                ...
            }
        });
    }
}
If I do it this way, Wicket tries to serialize the concrete instance of reportGenerator. If I annotate the reportGenerator property with @SpringBean I receive Concrete bean could not be received from the application context for class: IReportGenerator.
Edit: I have reworked the implementations of IReportGenerator to be able to annotate them with @Component, and now when I use the @SpringBean annotation I get More than one bean of type [IReportGenerator] found, you have to specify the name of the bean (@SpringBean(name="foo")) or (@Named("foo") if using @javax.inject classes) in order to resolve this conflict. Which is exactly what I don't want to do.
I think the behavior you're trying to achieve can be done with a slight workaround, by introducing a Spring bean that holds all IReportGenerator instances:
@Component
public class ReportGeneratorHolder {

    private final List<IReportGenerator> reportGenerators;

    @Autowired
    public ReportGeneratorHolder(List<IReportGenerator> reportGenerators) {
        this.reportGenerators = reportGenerators;
    }

    public Optional<IReportGenerator> getReportGenerator(Class<? extends IReportGenerator> reportGeneratorClass) {
        return reportGenerators.stream()
            .filter(reportGeneratorClass::isInstance)
            .findAny();
    }
}
You can then inject this class into your Wicket page, and pass the desired class as a constructor-parameter. Depending on your Spring configuration you might need to introduce an interface for this as well.
public class ReportForm extends Panel {

    @SpringBean
    private ReportGeneratorHolder reportGeneratorHolder;

    public ReportForm(String id, Class<? extends IReportGenerator> reportGeneratorClass) {
        super(id);
        IReportGenerator reportGenerator = reportGeneratorHolder
            .getReportGenerator(reportGeneratorClass)
            .orElseThrow(IllegalStateException::new);
        // Form logic omitted for brevity
    }
}
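For illustration, a hedged sketch of two hypothetical implementations (PdfReportGenerator and CsvReportGenerator are made-up names, and getReport() is simplified to a no-arg method): Spring collects every IReportGenerator bean into the list injected into ReportGeneratorHolder, so new ReportForm("form", PdfReportGenerator.class) would resolve the PDF variant.

import org.springframework.stereotype.Component;

// Hypothetical implementations; every bean implementing IReportGenerator
// ends up in the List<IReportGenerator> injected into ReportGeneratorHolder.
@Component
class PdfReportGenerator implements IReportGenerator {
    @Override
    public byte[] getReport() {
        return new byte[0]; // real PDF generation omitted
    }
}

@Component
class CsvReportGenerator implements IReportGenerator {
    @Override
    public byte[] getReport() {
        return new byte[0]; // real CSV generation omitted
    }
}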
As far as I am able to find, looking through the documentation and even the source of Wicket's @SpringBean annotation, this isn't possible. The closest I got is explicitly creating a proxy for a Spring bean based on the class passed, as described in 13.2.4 Using proxies from the wicket-spring project chapter in Wicket in Action.
public class ReportForm extends Panel {

    private IReportGenerator reportGenerator;
    private Class<? extends IReportGenerator> classType;

    private static ISpringContextLocator CTX_LOCATOR = new ISpringContextLocator() {
        public ApplicationContext getSpringContext() {
            return ((MyApplication) MyApplication.get()).getApplicationContext();
        }
    };

    public ReportForm(String id, Class<? extends IReportGenerator> classType) {
        super(id);
        this.classType = classType;

        final Form<Void> form = new Form<Void>("form");
        this.add(form);
        ...
        form.add(new AjaxButton("button1") {
            private static final long serialVersionUID = 1L;

            @Override
            protected void onSubmit(AjaxRequestTarget target)
            {
                byte[] report = getReportGenerator().getReport(...);
                ...
            }
        });
    }

    private <T> T createProxy(Class<T> classType) {
        return (T) LazyInitProxyFactory.createProxy(classType,
                new SpringBeanLocator(classType, CTX_LOCATOR));
    }

    private IReportGenerator getReportGenerator() {
        if (reportGenerator == null) {
            reportGenerator = createProxy(classType);
        }
        return reportGenerator;
    }
}
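A hypothetical call site for this version, assuming a PdfReportGenerator implementation is registered in the Spring context:

// somewhere in a containing WebPage or Panel constructor:
add(new ReportForm("reportForm", PdfReportGenerator.class));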

Custom DynamoDb TableNameResolver not being called when using CrudRepository

I am testing DynamoDB tables and want to set up different table names for the prod and dev environments, using the keyword "dev" for development and "prod" for production.
I have a POJO
@DynamoDBTable(tableName = "abc_xy_dev_MyProjectName_Employee")
public class Employee implements Cloneable {
}
On Prod I want its name to be abc_xy_prod_MyProjectName_Employee.
So, I wrote a TableNameResolver
public static class MyTableNameResolver implements TableNameResolver {

    public static final MyTableNameResolver INSTANCE = new MyTableNameResolver();

    @Override
    public String getTableName(Class<?> clazz, DynamoDBMapperConfig config) {
        final TableNameOverride override = config.getTableNameOverride();
        String tableNameToReturn = null;

        if (override != null) {
            final String tableName = override.getTableName();
            if (tableName != null) {
                System.out.println("MyTableNameResolver ==================================");
                return tableName;
            }
        }

        String env = System.getenv("DEPLOYMENT_ENV");
        for (Annotation annotation : clazz.getAnnotations()) {
            if (annotation instanceof DynamoDBTable) {
                DynamoDBTable myAnnotation = (DynamoDBTable) annotation;
                if ("production".equals(env)) {
                    tableNameToReturn = myAnnotation.tableName().replace("_dev_", "_prod_");
                } else {
                    tableNameToReturn = myAnnotation.tableName();
                }
            }
        }
        return tableNameToReturn;
    }
}
This works by creating a table with the name abc_xy_prod_MyProjectName_Employee in production.
However, I have a repository with the following code
@EnableScan
public interface EmployeeRepository extends CrudRepository<Employee, String> {

    @Override
    <S extends Employee> S save(S employee);

    Optional<Employee> findById(String id);

    @Override
    List<Employee> findAll();

    Optional<Employee> findByEmployeeNumber(String EmployeeNumber);
}
Thus, when I try to call the method findAll via an endpoint/test case, I get the exception:
There was an unexpected error (type=Internal Server Error,
status=500). User:
arn:aws:iam::87668976786:user/svc_nac_ps_MyProjectName_prod is not
authorized to perform: dynamodb:Scan on resource:
:table/abc_xy_dev_MyProjectName_Employee (Service: AmazonDynamoDBv2;
Status Code: 400; Error Code: AccessDeniedException; Request ID:
aksdnhLDFL)
i.e., MyTableNameResolver does not get called internally when the repository methods are executed. It still points to the table name abc_xy_dev_MyProjectName_Employee given in the annotation @DynamoDBTable(tableName = "abc_xy_dev_MyProjectName_Employee").
You are using Spring Data JPA-style repositories as the DynamoDB persistence integration.
The configuration below can be used to set a table name override as part of the Spring Boot configuration.
A sample example can be found at https://github.com/ganesara/SpringExamples/tree/master/spring-dynamo
Map the DynamoDB repositories to a user-defined mapper config reference:
@EnableDynamoDBRepositories(basePackages = "home.poc.spring", dynamoDBMapperConfigRef = "dynamoDBMapperConfig")
The mapper config for the table override is as follows:
@Bean
public DynamoDBMapperConfig dynamoDBMapperConfig() {
    DynamoDBMapperConfig mapperConfig = new DynamoDBMapperConfig
        .Builder()
        .withTableNameOverride(DynamoDBMapperConfig.TableNameOverride.withTableNamePrefix("PROD_"))
        .build();
    return mapperConfig;
}
Full configuration for reference
@Configuration
@EnableDynamoDBRepositories(basePackages = "home.poc.spring", dynamoDBMapperConfigRef = "dynamoDBMapperConfig")
public class DynamoDBConfig {

    @Value("${amazon.dynamodb.endpoint}")
    private String amazonDynamoDBEndpoint;

    @Value("${amazon.aws.accesskey}")
    private String amazonAWSAccessKey;

    @Value("${amazon.aws.secretkey}")
    private String amazonAWSSecretKey;

    @Bean
    public AmazonDynamoDB amazonDynamoDB() {
        AmazonDynamoDB amazonDynamoDB = new AmazonDynamoDBClient(amazonAWSCredentials());
        if (!StringUtils.isEmpty(amazonDynamoDBEndpoint)) {
            amazonDynamoDB.setEndpoint(amazonDynamoDBEndpoint);
        }
        return amazonDynamoDB;
    }

    @Bean
    public AWSCredentials amazonAWSCredentials() {
        return new BasicAWSCredentials(amazonAWSAccessKey, amazonAWSSecretKey);
    }

    @Bean
    public DynamoDBMapperConfig dynamoDBMapperConfig() {
        DynamoDBMapperConfig mapperConfig = new DynamoDBMapperConfig
            .Builder()
            .withTableNameOverride(DynamoDBMapperConfig.TableNameOverride.withTableNamePrefix("PROD_"))
            .build();
        return mapperConfig;
    }

    @Bean
    public DynamoDBMapper dynamoDBMapper() {
        return new DynamoDBMapper(amazonDynamoDB(), dynamoDBMapperConfig());
    }
}
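If you want your own MyTableNameResolver to be consulted instead of a fixed prefix, the same dynamoDBMapperConfig hook should work for that too. A hedged sketch, replacing the dynamoDBMapperConfig bean above:

@Bean
public DynamoDBMapperConfig dynamoDBMapperConfig() {
    // Register the custom resolver so repository operations resolve table
    // names through MyTableNameResolver rather than the raw annotation value.
    return new DynamoDBMapperConfig.Builder()
            .withTableNameResolver(MyTableNameResolver.INSTANCE)
            .build();
}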
You are using DynamoDBMapper (the Java SDK). Here is how I use it. Let's say I have a table called Users, with an associated User POJO. In DynamoDB I have DEV_Users and LIVE_Users.
I have an environment variable 'ApplicationEnvironmentName' which is either DEV or LIVE.
I create a custom DynamoDBMapper like this:
public class ApplicationDynamoMapper {

    private static Map<String, DynamoDBMapper> mappers = new HashMap<>();

    private static AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard()
            .withRegion(System.getProperty("DynamoDbRegion")).build();

    protected ApplicationDynamoMapper() {
        // Exists only to defeat instantiation.
    }

    public static DynamoDBMapper getInstance(final String tableName) {
        final ApplicationLogContext LOG = new ApplicationLogContext();
        DynamoDBMapper mapper = mappers.get(tableName);
        if (mapper == null) {
            final String tableNameOverride = System.getProperty("ApplicationEnvironmentName") + "_" + tableName;
            LOG.debug("Creating DynamoDBMapper with overridden tablename {}.", tableNameOverride);
            final DynamoDBMapperConfig mapperConfig = new DynamoDBMapperConfig.Builder()
                    .withTableNameOverride(TableNameOverride.withTableNameReplacement(tableNameOverride))
                    .build();
            mapper = new DynamoDBMapper(client, mapperConfig);
            mappers.put(tableName, mapper);
        }
        return mapper;
    }
}
My Users POJO looks like this:
@DynamoDBTable(tableName = "Users")
public class User {
    ...
}
When I want to use the database I create an application mapper like this:
DynamoDBMapper userMapper = ApplicationDynamoMapper.getInstance(User.DB_TABLE_NAME);
If I wanted to load a User, I would do it like this:
User user = userMapper.load(User.class, userId);
Hope that helps.

Why do lazy collections not work with JavaFX getters / setters?

I experienced poor performance when using em.find(entity, primaryKey).
The reason seems to be that em.find() will also load entity collections, that are annotated with FetchType.LAZY.
This small test case illustrates what I mean:
public class OriginEntityTest4 {
    [..]
    @Test
    public void test() throws Exception {
        final OriginEntity oe = new OriginEntity("o");
        final ReferencePeakEntity rpe = new ReferencePeakEntity();
        oe.getReferencePeaks().add(rpe);

        DatabaseAccess.onEntityManager(em -> {
            em.persist(oe);
            em.persist(rpe);
        });
        System.out.println(rpe.getEntityId());

        DatabaseAccess.onEntityManager(em -> {
            em.find(OriginEntity.class, oe.getEntityId());
        });
    }
}
@Access(AccessType.PROPERTY)
@Entity(name = "Origin")
public class OriginEntity extends NamedEntity {
    [..]
    private final ListProperty<ReferencePeakEntity> referencePeaks =
            new SimpleListProperty<>(FXCollections.observableArrayList(ReferencePeakEntity.extractor()));

    @Override
    @OneToMany(mappedBy = "origin", fetch = FetchType.LAZY)
    public final List<ReferencePeakEntity> getReferencePeaks() {
        return this.referencePeaksProperty().get();
    }

    public final void setReferencePeaks(final List<ReferencePeakEntity> referencePeaks) {
        this.referencePeaksProperty().setAll(referencePeaks);
    }
}
I cannot find any documentation on that; my question is basically how I can prevent the EntityManager from loading the lazy collection.
Why I need em.find()?
I use the following method to decide whether I need to persist a new entity or update an existing one.
public static void mergeOrPersistWithinTransaction(final EntityManager em, final XIdentEntity entity) {
    final XIdentEntity dbEntity = em.find(XIdentEntity.class, entity.getEntityId());
    if (dbEntity == null) {
        em.persist(entity);
    } else {
        em.merge(entity);
    }
}
Note that OriginEntity is a JavaFX bean, where getter and setter delegate to a ListProperty.
Because FetchType.LAZY is only a hint. Depending on the provider implementation and how you configured your entity, it may or may not be honored.
Not an answer to the title's question, but maybe to your problem.
You can also use em.getReference(entityClass, primaryKey) in this case. It should be more efficient, since it just gets a reference to a possibly existing entity.
See When to use EntityManager.find() vs EntityManager.getReference()
On the other hand, I think your check is perhaps not needed. You could just persist or merge without the check?
See JPA EntityManager: Why use persist() over merge()?
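Following that thought, a hedged simplification of mergeOrPersistWithinTransaction that avoids the extra round trip entirely, assuming a new entity can be recognized by a null id (true for generated identifiers):

public static void mergeOrPersistWithinTransaction(final EntityManager em, final XIdentEntity entity) {
    if (entity.getEntityId() == null) {
        em.persist(entity); // no id assigned yet, so it cannot exist in the database
    } else {
        em.merge(entity);   // id present: treat it as an update of the existing row
    }
}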

jpa-derby Boolean merge

I am working with JPA (EclipseLink) and Derby. In my object there is a boolean field. Before a merge operation, the field is set to true, but after the merge, the field still holds the false value.
@Entity
@Access(AccessType.PROPERTY)
public class SleepMeasure extends AbstractEntity {

    private static final long serialVersionUID = 1361849156336265486L;
    ...
    private boolean WeatherDone;

    public boolean isWeatherDone() { // I have already tried with "getWeatherDone()"
        return WeatherDone;
    }

    public void setWeatherDone(boolean weatherDone) {
        WeatherDone = weatherDone;
    }
    ...
}
It doesn't seem to matter whether I use getWeatherDone() or isWeatherDone().
The calling code:
public class WeatherDataCollectorImpl {
    ...
    private void saveMeasures(WeatherResponse mResponse, SleepMeasure sleep) throws Exception {
        AppUser owner = sleep.getOwner();
        ...
        sleep.setWeatherDone(Boolean.TRUE);
        reposService.updateEntity(sleep, SleepMeasure.class);
    }
    ...
}
And here is my repository class
public class RepositoryImpl {
    ...
    public <T extends AbstractEntity> T updateEntity(T entity, Class<T> type) throws RepositoryException {
        openEM();
        EntityTransaction tr = em.getTransaction();
        try {
            tr.begin();
            {
                // entity.weatherdone has the value true
                entity = em.merge(entity);
                // entity.weatherdone has the value false
            }
            tr.commit();
        } catch (Exception e) {
            tr.rollback();
        }
        return entity;
    }
    ...
}
JPA console info: there is no error, no warning, and not even any info that the boolean column will be updated.
--Merge clone with references com.sleepmonitor.persistence.entities.sleep.SleepMeasure#b9025d
...
--Register the existing object // other objects
...
--Register the existing object com.sleepmonitor.persistence.entities.sleep.SleepMeasure#1ba90cc
So how do I solve this small problem?
Note:
Derby defines this field as "SMALLINT".
Thanks.
Oh God! I found my problem. Actually I realised it was not only the boolean field: the whole object could not be updated.
While trying to complete a bidirectional reference, I stupidly did this in a setter property instead of an addMethod():
public void setSleepProperties(SleepProperties sleepProperties) {
    this.sleepProperties = sleepProperties;
    if (!(sleepProperties == null)) {
        this.sleepProperties.setSleepMeasure(this);
    }
}
Instead of:
public void addSleepProperties(SleepProperties sleepProperties) {
    this.sleepProperties = sleepProperties;
    if (!(sleepProperties == null)) {
        this.sleepProperties.setSleepMeasure(this);
    }
}
So I ended up with the referenced entity (sleepProperties.sleepMeasure) overwriting the updates on the owning entity just before a merge. That was very difficult to find, and I think I have learned a big lesson from it. Thanks to all who tried to help me out.
The addMethod() solved my problem.

Spring Data MongoDB No property get found for type at org.springframework.data.mapping.PropertyPath

I am using Spring Data MongoDB version 1.4.2.RELEASE. For Spring Data MongoDB, I have created the custom repository interface and implementation in one location and created a custom query function getUsersName(Users users).
However, I am still getting the exception below:
Caused by: org.springframework.data.mapping.PropertyReferenceException:
No property get found for type Users! at org.springframework.data.mapping.PropertyPath. (PropertyPath.java:75) at
org.springframework.data.mapping.PropertyPath.create(PropertyPath.java:327) at
org.springframework.data.mapping.PropertyPath.create(PropertyPath.java:359) at
org.springframework.data.mapping.PropertyPath.create(PropertyPath.java:359) at
org.springframework.data.mapping.PropertyPath.create(PropertyPath.java:307) at
org.springframework.data.mapping.PropertyPath.from(PropertyPath.java:270) at
org.springframework.data.mapping.PropertyPath.from(PropertyPath.java:241) at
org.springframework.data.repository.query.parser.Part.(Part.java:76) at
org.springframework.data.repository.query.parser.PartTree$OrPart.(PartTree.java:201) at
org.springframework.data.repository.query.parser.PartTree$Predicate.buildTree(PartTree.java:291) at
org.springframework.data.repository.query.parser.PartTree$Predicate.(PartTree.java:271) at
org.springframework.data.repository.query.parser.PartTree.(PartTree.java:80) at
org.springframework.data.mongodb.repository.query.PartTreeMongoQuery.(PartTreeMongoQuery.java:47)
Below is my Spring Data MongoDB structure:
/* Users Domain Object */
@Document(collection = "users")
public class Users {

    @Id
    private ObjectId id;

    @Field("last_name")
    private String last_name;

    @Field("first_name")
    private String first_name;

    public String getLast_name() {
        return last_name;
    }

    public void setLast_name(String last_name) {
        this.last_name = last_name;
    }

    public String getFirst_name() {
        return first_name;
    }

    public void setFirst_name(String first_name) {
        this.first_name = first_name;
    }
}
/* UsersRepository.java main interface */
@Repository
public interface UsersRepository extends MongoRepository<Users, String>, UsersRepositoryCustom {
    List<Users> findUsersById(String id);
}
/* UsersRepositoryCustom.java custom interface */
@Repository
public interface UsersRepositoryCustom {
    List<Users> getUsersName(Users users);
}
/* UsersRepositoryImpl.java custom interface implementation */
@Component
public class UsersRepositoryImpl implements UsersRepositoryCustom {

    @Autowired
    MongoOperations mongoOperations;

    @Override
    public List<Users> getUsersName(Users users) {
        return mongoOperations.find(
            Query.query(Criteria.where("first_name").is(users.getFirst_name())
                .and("last_name").is(users.getLast_name())),
            Users.class);
    }
}
/* Mongo Test function inside Spring JUnit Test class calling custom function with main UsersRepository interface */
@Autowired
private UsersRepository usersRepository;

@Test
public void getUsersName() {
    Users users = new Users();
    users.setFirst_name("James");
    users.setLast_name("Oliver");

    List<Users> usersDetails = usersRepository.getUsersName(users);
    System.out.println("users List" + usersDetails.size());
    Assert.assertTrue(usersDetails.size() > 0);
}
The query method declaration in your repository interface is invalid. As clearly stated in the reference documentation, query methods need to start with find…By, read…By, query…By or get…By.
With custom repositories, there shouldn't be a need for method naming conventions, as Oliver stated. I have mine working with a method named updateMessageCount.
Having said that, I can't see the problem with the code provided here.
I resolved this issue with the help of the post linked below, where I wasn't naming my Impl class correctly:
No property found for type error when try to create custom repository with Spring Data JPA
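For reference, the lookup of the custom implementation class is driven by a configurable postfix on the repository-scanning annotation. A minimal sketch with a hypothetical base package; with the default postfix, the class backing the custom fragment must be named UsersRepositoryImpl (repository interface name plus the postfix):

import org.springframework.context.annotation.Configuration;
import org.springframework.data.mongodb.repository.config.EnableMongoRepositories;

@Configuration
@EnableMongoRepositories(
        basePackages = "com.example.users",          // hypothetical package
        repositoryImplementationPostfix = "Impl")    // "Impl" is the default
public class MongoRepositoryConfig {
}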