EclipseLink Integrity Checker - JPA

I'm using EclipseLink and I'd like to check whether entity and table definitions are consistent.
I found "Integrity Checker" and tried it.
public final class EMF {
    public static class EnableIntegrityChecker implements SessionCustomizer {
        @Override
        public void customize(Session session) throws Exception {
            session.getIntegrityChecker().checkDatabase();
            session.getIntegrityChecker().setShouldCatchExceptions(false);
        }
    }

    private static final EntityManagerFactory INSTANCE;

    static {
        String appId = SystemProperty.applicationId.get();
        Map<String, String> overWriteParam = new HashMap<>();
        overWriteParam.put(
                PersistenceUnitProperties.SESSION_CUSTOMIZER,
                EnableIntegrityChecker.class.getName());
        INSTANCE = Persistence.createEntityManagerFactory("unit", overWriteParam);
    }

    private EMF() {
    }

    public static EntityManager create() {
        return INSTANCE.createEntityManager();
    }
}
In some cases it can detect an inconsistency, but in others it cannot:
1. If the entity has a field A and the table does not have a column A, the Integrity Checker finds the inconsistency.
2. If the table has a column A and the entity does not have a field A, the Integrity Checker does not find the inconsistency.
3. If column A in the table is an int and field A in the entity is a String, the Integrity Checker does not find the inconsistency.
How can I detect the inconsistency in cases 2 and 3?

You can extend IntegrityChecker, override its checkTable method, and install it via session.setIntegrityChecker(customIntegrityChecker). Note that some of the validations are located in ClassDescriptor#checkDatabase, so it is hard to directly reuse them and properly report the exact error cause.
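A minimal sketch of that approach (StrictIntegrityChecker and its extra checks are hypothetical; checkDatabase(), setShouldCatchExceptions(false) and setIntegrityChecker(...) are the EclipseLink calls already used above):

public class StrictIntegrityChecker extends IntegrityChecker {
    // Hypothetical subclass: override checkTable(...) here (check the exact
    // signature in your EclipseLink version) and compare the table's columns
    // against the descriptor's mappings in both directions, including types,
    // to cover cases 2 and 3.
}

public static class EnableStrictIntegrityChecker implements SessionCustomizer {
    @Override
    public void customize(Session session) throws Exception {
        IntegrityChecker checker = new StrictIntegrityChecker();
        checker.checkDatabase();                 // validate the schema during login, as before
        checker.setShouldCatchExceptions(false); // fail fast instead of collecting exceptions
        session.setIntegrityChecker(checker);
    }
}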

Mapping a field using existing target value (MapStruct)

I have a case where some of my DTOs have a field of type X, and I need to map this class to Y by calling a Spring service method (which does a transactional DB operation and returns an instance of Y). But in this scenario I need to use the existing value of the Y field. Let me explain it by example.
// DTO
public class AnnualLeaveRequest {
    private FileInfoDTO annualLeaveFile;
}

// ENTITY
public class AnnualLeave {
    @OneToOne
    private FileReference annualLeaveFile;
}

@Mapper
public abstract class FileMapper {
    @Autowired
    private FileReferenceService fileReferenceService;

    public FileReference toFileReference(@MappingTarget FileReference fileReference, FileInfoDTO fileInfoDTO) {
        return fileReferenceService.updateFile(fileInfoDTO, fileReference);
    }
}
// ACTUAL ENTITY MAPPER
@Mapper(uses = {FileMapper.class})
public interface AnnualLeaveMapper {
    void updateEntity(@MappingTarget AnnualLeave entity, AnnualLeaveRequest dto);
}

// WHAT I'M TRYING TO ACHIEVE
@Component
public class MazeretIzinMapperImpl implements tr.gov.hmb.ikys.personel.izinbilgisi.mazeretizin.mapper.MazeretIzinMapper {
    @Autowired
    private FileMapper fileMapper;

    @Override
    public void updateEntity(AnnualLeave entity, AnnualLeaveUpdateRequest dto) {
        entity.setAnnualLeaveFile(fileMapper.toFileReference(dto.getAnnualLeaveFile(), entity.getAnnualLeaveFile()));
    }
}
But MapStruct ignores the result of "FileReference toFileReference(@MappingTarget FileReference fileReference, FileInfoDTO fileInfoDTO)" and does not map its result to the actual entity's FileReference field. Do you have any idea how to resolve this problem?
Question
How do I replace the annualLeaveFile property while updating the AnnualLeave entity?
Answer
You can use an expression to achieve this. For example:
@Autowired
FileMapper fileMapper;

@Mapping(target = "annualLeaveFile", expression = "java(fileMapper.toFileReference(entity.getAnnualLeaveFile(), dto.getAnnualLeaveFile()))")
abstract void updateEntity(@MappingTarget AnnualLeave entity, AnnualLeaveRequest dto);
MapStruct does not support this without expression usage. See the end of the Old analysis for why.
Alternative without expression
Instead of fixing it in the location where FileMapper is used, we fix it inside the FileMapper itself.
@Mapper
public abstract class FileMapper {
    @Autowired
    private FileReferenceService fileReferenceService;

    public void toFileReference(@MappingTarget FileReference fileReference, FileInfoDTO fileInfoDTO) {
        FileReference wanted = fileReferenceService.updateFile(fileInfoDTO, fileReference);
        updateFileReference(fileReference, wanted);
    }

    // used to copy the content of the service result into the MapStruct-managed instance.
    abstract void updateFileReference(@MappingTarget FileReference fileReferenceTarget, FileReference fileReferenceFromService);
}
Old analysis
The following is what I notice:
(Optional) Your FileMapper class is not a MapStruct mapper. It can just be a normal class annotated with @Component, since it does not have any unimplemented abstract methods. (This does not affect code generation of the MazeretIzinMapper implementation.)
(Optional, since you have this configured project-wide) You do not have componentModel = "spring" in your @Mapper definition; maybe you have this configured project-wide, but that is not mentioned. (It is required for the @Autowired annotation and for @Component on the implementations.)
Without changing anything I already get a working result as you want it to be, but for non-update methods (not listed in your question, but visible on the Gitter page where you also requested help) the FileMapper as-is will not be used. It requires an additional method that takes only one argument: public FileReference toFileReference(FileInfoDTO fileInfoDTO)
(Edit) To get rid of the else statement with null-value handling, you can add nullValuePropertyMappingStrategy = NullValuePropertyMappingStrategy.IGNORE to the @Mapper annotation.
I've run a test with 1.5.0.Beta2 and 1.4.2.Final, and I get the following result with the FileMapper and MazeretIzinMapper classes listed thereafter.
Generated mapper implementation
@Generated(
    value = "org.mapstruct.ap.MappingProcessor",
    date = "2022-03-11T18:01:30+0100",
    comments = "version: 1.4.2.Final, compiler: Eclipse JDT (IDE) 1.4.50.v20210914-1429, environment: Java 17.0.1 (Azul Systems, Inc.)"
)
@Component
public class MazeretIzinMapperImpl implements MazeretIzinMapper {
    @Autowired
    private FileMapper fileMapper;

    @Override
    public AnnualLeave toEntity(AnnualLeaveRequest dto) {
        if ( dto == null ) {
            return null;
        }
        AnnualLeave annualLeave = new AnnualLeave();
        annualLeave.setAnnualLeaveFile( fileMapper.toFileReference( dto.getAnnualLeaveFile() ) );
        return annualLeave;
    }

    @Override
    public void updateEntity(AnnualLeave entity, AnnualLeaveRequest dto) {
        if ( dto == null ) {
            return;
        }
        if ( dto.getAnnualLeaveFile() != null ) {
            if ( entity.getAnnualLeaveFile() == null ) {
                entity.setAnnualLeaveFile( new FileReference() );
            }
            fileMapper.toFileReference( entity.getAnnualLeaveFile(), dto.getAnnualLeaveFile() );
        }
    }
}
Source classes
Mapper
@Mapper(componentModel = "spring", uses = { FileMapper.class }, nullValuePropertyMappingStrategy = NullValuePropertyMappingStrategy.IGNORE)
public interface MazeretIzinMapper {
    AnnualLeave toEntity(AnnualLeaveRequest dto);
    void updateEntity(@MappingTarget AnnualLeave entity, AnnualLeaveRequest dto);
}
FileMapper component
@Mapper
public abstract class FileMapper {
    @Autowired
    private FileReferenceService fileReferenceService;

    public FileReference toFileReference(@MappingTarget FileReference fileReference, FileInfoDTO fileInfoDTO) {
        return fileReferenceService.updateFile( fileInfoDTO, fileReference );
    }

    public FileReference toFileReference(FileInfoDTO fileInfoDTO) {
        return toFileReference( new FileReference(), fileInfoDTO );
    }

    // other abstract methods for MapStruct mapper generation.
}
Why the exact code you want will not be generated
When generating the mapping code, MapStruct uses the most generic way to do this.
An update mapper has the following criteria:
1. The @MappingTarget annotated argument will always be updated.
2. It is allowed to have no return type.
The generic way to update a field is then as follows:
// check if the source has the value.
if (source.getProperty() != null) {
    // Since it is allowed to have a void method for update mappings, the following steps are needed:
    // check if the property exists in the target.
    if (target.getProperty() == null) {
        // if it does not have the value, then create it.
        target.setProperty( new TypeOfProperty() );
    }
    // now we know that the target has the property, so we can call the update method.
    propertyUpdateMappingMethod( target.getProperty(), source.getProperty() );
    // The arguments will match the order specified in the other update method; in this case the @MappingTarget annotated argument is the first one.
} else {
    // default behavior is to set the target property to null; you can influence this with nullValuePropertyMappingStrategy.
    target.setProperty( null );
}

microprofile-config custom ConfigSource using JPA

I am currently trying to set up a custom ConfigSource that reads config values from our DB2. As the ConfigSources are loaded via the ServiceLoader, it looks like there is no way to access the database via JPA, because the ServiceLoader scans for custom ConfigSources very early.
Any ideas?
You can annotate your ConfigSource as a singleton session bean and mark it for eager initialization during the application startup sequence.
You also need to define a static member variable holding your config values.
With this setup you can lazily load your property values from an injected JPA source, or from any other CDI or EJB bean.
See the following example code:
@Startup
@Singleton
public class MyConfigSource implements ConfigSource {
    public static final String NAME = "MyConfigSource";
    public static Map<String, String> properties = null; // note the use of static here!

    @PersistenceContext(unitName = ".....")
    private EntityManager manager;

    @PostConstruct
    void init() {
        // load your data from the JPA source or an EJB
        ....
    }

    @Override
    public int getOrdinal() {
        return 890;
    }

    @Override
    public String getValue(String key) {
        if (properties != null) {
            return properties.get(key);
        } else {
            return null;
        }
    }

    @Override
    public String getName() {
        return NAME;
    }

    @Override
    public Map<String, String> getProperties() {
        return properties;
    }
}
ConfigSources are POJOs, because if a CDI bean expected config to be injected into it at startup based on a ConfigSource that itself had dependencies on CDI beans, you could get into startup looping issues.
For this reason the example ConfigSource is constructed twice - once at the beginning by the Config API, and later by the CDI implementation on @PostConstruct. Through the static variable 'properties' we overwrite the values of the already constructed ConfigSource. Of course you can also separate the code into two classes if you like.
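For completeness: since ConfigSources are discovered via the ServiceLoader, the class above still needs a provider-configuration file on the classpath (the package com.example below is a placeholder for wherever MyConfigSource actually lives):

# META-INF/services/org.eclipse.microprofile.config.spi.ConfigSource
com.example.MyConfigSource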

Why Lazy Collections do not work with JavaFX getters / setters?

I experienced poor performance when using em.find(entity, primaryKey).
The reason seems to be that em.find() will also load entity collections that are annotated with FetchType.LAZY.
This small test case illustrates what I mean:
public class OriginEntityTest4 {
    [..]
    @Test
    public void test() throws Exception {
        final OriginEntity oe = new OriginEntity("o");
        final ReferencePeakEntity rpe = new ReferencePeakEntity();
        oe.getReferencePeaks().add(rpe);
        DatabaseAccess.onEntityManager(em -> {
            em.persist(oe);
            em.persist(rpe);
        });
        System.out.println(rpe.getEntityId());
        DatabaseAccess.onEntityManager(em -> {
            em.find(OriginEntity.class, oe.getEntityId());
        });
    }
}
@Access(AccessType.PROPERTY)
@Entity(name = "Origin")
public class OriginEntity extends NamedEntity {
    [..]
    private final ListProperty<ReferencePeakEntity> referencePeaks =
            new SimpleListProperty<>(FXCollections.observableArrayList(ReferencePeakEntity.extractor()));

    @Override
    @OneToMany(mappedBy = "origin", fetch = FetchType.LAZY)
    public final List<ReferencePeakEntity> getReferencePeaks() {
        return this.referencePeaksProperty().get();
    }

    public final void setReferencePeaks(final List<ReferencePeakEntity> referencePeaks) {
        this.referencePeaksProperty().setAll(referencePeaks);
    }
}
I cannot find any documentation on this; my question is basically: how can I prevent the EntityManager from loading the lazy collection?
Why do I need em.find()?
I use the following method to decide whether I need to persist a new entity or update an existing one.
public static void mergeOrPersistWithinTransaction(final EntityManager em, final XIdentEntity entity) {
    final XIdentEntity dbEntity = em.find(XIdentEntity.class, entity.getEntityId());
    if (dbEntity == null) {
        em.persist(entity);
    } else {
        em.merge(entity);
    }
}
Note that OriginEntity is a JavaFX bean, where getter and setter delegate to a ListProperty.
Because FetchType.LAZY is only a hint. Depending on the implementation and on how you configured your entity, the provider may or may not be able to honor it. Here the JavaFX setter copies the incoming collection via referencePeaksProperty().setAll(...), which likely touches the provider's lazy collection wrapper and thereby forces it to load.
Not an answer to the title's question, but maybe to your problem:
You can also use em.getReference(entityClass, primaryKey) in this case. It should be more efficient for you, since it just obtains a reference to a possibly existing entity.
See When to use EntityManager.find() vs EntityManager.getReference()
On the other hand, I think your check is perhaps not needed. You could just persist or merge without the check, as sketched below.
See JPA EntityManager: Why use persist() over merge()?
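A minimal sketch of that simplification (assuming it is acceptable to always go through merge(), which inserts the entity when it does not exist yet and updates it otherwise):

public static void mergeOrPersistWithinTransaction(final EntityManager em, final XIdentEntity entity) {
    // merge() copies the state into a managed instance either way, so the
    // em.find() round trip - and with it the lazy-collection loading - disappears
    em.merge(entity);
}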

Overriding Getters and Setters in tinkerpop Frames annotated model

I'm working on a new piece of software and I'd like the values in the database to be encrypted. We are using OrientDB and are trying to implement the project using the TinkerPop libraries, and here I'm a little stuck.
For one function, I need to pull a list of all vertices of a type and return them. I have my annotated interface for the person object, and for now I've added methods to encrypt and decrypt the necessary fields. But when I decrypt them, the decrypted values are persisted back to the database.
Is there a way to either override the getters and setters to handle the encryption/decryption at that point, or do I need to detach the models from the DB before performing my decryption?
Here's my code for my interface:
public interface iPerson {
    @Property("firstName")
    public void setFirstName(String firstName);
    @Property("firstName")
    public String getFirstName();
    @Property("lastName")
    public String getLastName();
    @Property("lastName")
    public void setLastName(String lastName);
    @Property("id")
    public String getId();
    @Property("id")
    public void setId(String id);
    @Property("dateOfBirth")
    public String getDateOfBirth();
    @Property("dateOfBirth")
    public void setDateOfBirth(String dateOfBirth);
    @JavaHandler
    public void encryptFields() throws Exception;
    @JavaHandler
    public void decryptFields() throws Exception;

    public abstract class Impl implements JavaHandlerContext<Vertex>, iPerson {
        @Initializer
        public void init() {
            // This will be called when a new framed element is added to the graph.
            setFirstName("");
            setLastName("");
            setDateOfBirth("01-01-1900");
            setPK_Person("-1");
        }

        /**
         * shortcut method to make the class encrypt all of the fields that should be encrypted for data storage
         * @throws Exception
         */
        public void encryptFields() throws Exception {
            setLastName(Crypto.encryptHex(getLastName()));
            setFirstName(Crypto.encryptHex(getFirstName()));
            if (getDateOfBirth() != null) {
                setDateOfBirth(Crypto.encryptHex(getDateOfBirth()));
            }
        }

        /**
         * shortcut method to make the class decrypt all of the fields that should be decrypted for data display and return
         * @throws Exception
         */
        public void decryptFields() throws Exception {
            setLastName(Crypto.decryptHex(getLastName()));
            setFirstName(Crypto.decryptHex(getFirstName()));
            if (getDateOfBirth() != null) {
                setDateOfBirth(Crypto.decryptHex(getDateOfBirth()));
            }
        }
    }
}
(I assume) data is persisted to the database when a vertex's property is set. If you want to store encrypted values in the database, then you need to ensure the value is encrypted at the moment the property is set.
If you want to override the default behaviour of the @Property getter/setter methods (so that you can add en/decryption), I'd recommend using a custom handler (e.g. @JavaHandler).
For example:
IPerson
@JavaHandlerClass(Person.class)
public interface IPerson extends VertexFrame {
    @JavaHandler
    public void setFirstName(String firstName);
    @JavaHandler
    public String getFirstName();
}
Person
abstract class Person implements JavaHandlerContext<Vertex>, IPerson {
    @Override
    void setFirstName(String firstName) {
        asVertex().setProperty('firstName', encrypt(firstName))
    }
    @Override
    String getFirstName() {
        return decrypt(asVertex().getProperty('firstName'))
    }
    static String encrypt(String plain) {
        return plain.toUpperCase(); // <- your own implementation here
    }
    static String decrypt(Object encrypted) {
        return encrypted.toString().toLowerCase(); // <- your own implementation here
    }
}
Usage example (Groovy)
// setup
IPerson nickg = framedGraph.addVertex('PID1', IPerson)
IPerson jspriggs = framedGraph.addVertex('PID2', IPerson)
nickg.setFirstName('nickg')
jspriggs.setFirstName('jspriggs')
// re-retrieve from Frame vertices sometime later...
IPerson nickg2 = framedGraph.getVertex(nickg.asVertex().id, IPerson)
IPerson jspriggs2 = framedGraph.getVertex(jspriggs.asVertex().id, IPerson)
// check encrypted values (these are stored in the DB)...
assert nickg2.asVertex().getProperty('firstName') == 'NICKG'
assert jspriggs2.asVertex().getProperty('firstName') == 'JSPRIGGS'
// check decrypted getters...
assert nickg2.getFirstName() == 'nickg'
assert jspriggs2.getFirstName() == 'jspriggs'
If you're using Groovy, you could intercept calls to these methods programmatically (which would be nice because you could keep using the @Property annotations).
I'm not sure whether there's a TinkerPop solution for intercepting these calls other than writing your own custom handler (maybe try extending JavaHandlerModule?).
Thanks for the comment - I should have gotten back to respond sooner, but I recently found a better answer to my problem. I was looking for a way to make the encryption/decryption happen without overhead and without developers really noticing it happens.
The better way to tackle this issue was to write hooks for before-insert/before-update and after-read, handling it entirely at the database layer. I was able to write them in Java, package a jar file, and install it on our OrientDB instance; it was picked up pretty flawlessly and let us encrypt the necessary fields without any noticeable slowdown.
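A rough sketch of such a hook, assuming OrientDB's ORecordHookAbstract extension point (check the exact callback signatures for your OrientDB version; the field list is hypothetical and Crypto is the helper from the example above):

public class CryptoHook extends ORecordHookAbstract {
    // fields to protect; hypothetical list matching the iPerson example
    private static final String[] PROTECTED = {"firstName", "lastName", "dateOfBirth"};

    @Override
    public RESULT onRecordBeforeCreate(ORecord record) {
        return transform(record, true);
    }

    @Override
    public RESULT onRecordBeforeUpdate(ORecord record) {
        return transform(record, true);
    }

    @Override
    public void onRecordAfterRead(ORecord record) {
        transform(record, false);
    }

    private RESULT transform(ORecord record, boolean encrypt) {
        if (!(record instanceof ODocument)) {
            return RESULT.RECORD_NOT_CHANGED;
        }
        ODocument doc = (ODocument) record;
        for (String name : PROTECTED) {
            String value = doc.field(name);
            if (value != null) {
                try {
                    doc.field(name, encrypt ? Crypto.encryptHex(value) : Crypto.decryptHex(value));
                } catch (Exception e) {
                    throw new IllegalStateException("crypto failure on field " + name, e);
                }
            }
        }
        return RESULT.RECORD_CHANGED;
    }
}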

Morphia converter calling other converters

I want to convert Optional<BigDecimal> in Morphia. I created a BigDecimalConverter, and it works fine. Now I want to create an OptionalConverter.
An Optional can hold any object type. In my OptionalConverter.encode method I can extract the underlying object, and I'd like to pass it on to the default Mongo conversion, so that if there is a string I'll just get the string, and if there is one of my entities I'll get the encoded entity. How can I do this?
There are two questions:
1. How to call other converters?
2. How to create a converter for a generic class whose type parameters are not statically known?
The first one is possible by creating the MappingMongoConverter and the custom converter together:
@Configuration
public class CustomConfig extends AbstractMongoConfiguration {
    @Override
    protected String getDatabaseName() {
        // ...
    }

    @Override
    @Bean
    public Mongo mongo() throws Exception {
        // ...
    }

    @Override
    @Bean
    public MappingMongoConverter mappingMongoConverter() throws Exception {
        MappingMongoConverter mmc = new MappingMongoConverter(
                mongoDbFactory(), mongoMappingContext());
        mmc.setCustomConversions(new CustomConversions(CustomConverters
                .create(mmc)));
        return mmc;
    }
}
public class FooConverter implements Converter<Foo, DBObject> {
    private MappingMongoConverter mmc;

    public FooConverter(MappingMongoConverter mmc) {
        this.mmc = mmc;
    }

    public DBObject convert(Foo foo) {
        // ...
    }
}
public class CustomConverters {
    public static List<Object> create(MappingMongoConverter mmc) {
        // List<Object> rather than List<?>: add() is not callable on a List<?>
        List<Object> list = new ArrayList<>();
        list.add(new FooConverter(mmc));
        return list;
    }
}
The second one is much more difficult due to type erasure. I've tried to create a converter for Scala's Map but haven't found a way: I was unable to get the exact type information for the source Map when writing, or for the target Map when reading.
For very simple cases, e.g. if you don't need to handle all possible parameter types and there is no ambiguity while reading, it may be possible though.
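For the writing side, a rough sketch of such a simple case (hedged: this assumes delegating to MappingMongoConverter's convertToMongoType method, as wired up above; reading back is left out precisely because the element type of the Optional is erased):

public class OptionalConverter implements Converter<Optional<?>, Object> {
    private final MappingMongoConverter mmc;

    public OptionalConverter(MappingMongoConverter mmc) {
        this.mmc = mmc;
    }

    @Override
    public Object convert(Optional<?> source) {
        // empty Optionals are stored as null; present values are delegated to
        // the general mapping machinery, so simple types pass through and
        // entities are encoded like any other mapped object
        return source.isPresent() ? mmc.convertToMongoType(source.get()) : null;
    }
}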