I experienced poor performance when using em.find(entity, primaryKey).
The reason seems to be that em.find() also loads entity collections that are annotated with FetchType.LAZY.
This small test case illustrates what I mean:
public class OriginEntityTest4 {
    [..]
    @Test
    public void test() throws Exception {
        final OriginEntity oe = new OriginEntity("o");
        final ReferencePeakEntity rpe = new ReferencePeakEntity();
        oe.getReferencePeaks().add(rpe);

        DatabaseAccess.onEntityManager(em -> {
            em.persist(oe);
            em.persist(rpe);
        });
        System.out.println(rpe.getEntityId());

        DatabaseAccess.onEntityManager(em -> {
            em.find(OriginEntity.class, oe.getEntityId());
        });
    }
}
@Access(AccessType.PROPERTY)
@Entity(name = "Origin")
public class OriginEntity extends NamedEntity {
    [..]
    private final ListProperty<ReferencePeakEntity> referencePeaks =
        new SimpleListProperty<>(FXCollections.observableArrayList(ReferencePeakEntity.extractor()));

    @Override
    @OneToMany(mappedBy = "origin", fetch = FetchType.LAZY)
    public final List<ReferencePeakEntity> getReferencePeaks() {
        return this.referencePeaksProperty().get();
    }

    public final void setReferencePeaks(final List<ReferencePeakEntity> referencePeaks) {
        this.referencePeaksProperty().setAll(referencePeaks);
    }
}
I cannot find any documentation on this behavior. My question is basically: how can I prevent the EntityManager from loading the lazy collection?
Why do I need em.find()?
I use the following method to decide whether I need to persist a new entity or update an existing one.
public static void mergeOrPersistWithinTransaction(final EntityManager em, final XIdentEntity entity) {
    final XIdentEntity dbEntity = em.find(XIdentEntity.class, entity.getEntityId());
    if (dbEntity == null) {
        em.persist(entity);
    } else {
        em.merge(entity);
    }
}
Note that OriginEntity is a JavaFX bean, where getter and setter delegate to a ListProperty.
FetchType.LAZY is only a hint. Depending on the JPA implementation and on how you configured your entity, the provider may or may not be able to honor it.
Not an answer to the title's question, but maybe to your problem:
You can also use em.getReference(entityClass, primaryKey) here. It should be more efficient in your case, since it only obtains a reference to a possibly existing entity without necessarily loading its state.
See When to use EntityManager.find() vs EntityManager.getReference()
On the other hand, I think your check is perhaps not needed. You could just persist or merge without the check.
See JPA EntityManager: Why use persist() over merge()?
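The existence check in mergeOrPersistWithinTransaction can be sketched independently of JPA. Below is a minimal stand-alone illustration of that decision logic, with the em.find lookup replaced by a plain Function and a Map standing in for the database (both purely hypothetical, not the question's actual classes):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class MergeOrPersistSketch {

    /** Returns "persist" when no entity with this id exists yet, otherwise "merge". */
    public static String decide(Function<Long, Object> findById, Long id) {
        return findById.apply(id) == null ? "persist" : "merge";
    }

    public static void main(String[] args) {
        // A Map stands in for the database, purely for illustration.
        Map<Long, Object> db = new HashMap<>();
        db.put(1L, new Object());

        System.out.println(decide(db::get, 1L)); // existing id -> merge
        System.out.println(decide(db::get, 2L)); // unknown id -> persist
    }
}
```

This makes the answer's point visible: the lookup exists only to feed a null check, so if the id reliably tells you whether the entity is new, the find (and its collection loading) can be skipped entirely.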
I'm using spring-data-cassandra:3.1.9, and the properties look like:
spring:
  data:
    cassandra:
      keyspace-name: general_log
      session-name: general_log
      local-datacenter: datacenter1
      schema-action: CREATE
Cassandra version: apache-cassandra-4.0.1
spring-boot: 2.4.7
spring-data-jpa: 2.4.9
spring-jdbc: 5.3.8
spring-orm: 5.3.8
My entity looks like:
@ApiModel(description = "Audit log")
@Entity
@Table(name = "audit_log")
@org.springframework.data.cassandra.core.mapping.Table("audit_log")
public class AuditLogPO implements Serializable {

    @PrimaryKeyClass
    public static class Id implements Serializable {
        private static final long serialVersionUID = 1L;

        @ApiModelProperty(value = "business key")
        @Column(name = "business_key")
        @PrimaryKeyColumn(ordinal = 1, ordering = Ordering.ASCENDING)
        private String businessKey;

        // setters & getters ...
    }

    @javax.persistence.Id
    @PrimaryKey
    @org.springframework.data.annotation.Id
    @Transient
    private Id id;

    @ApiModelProperty(value = "business partition")
    @Column(name = "business_partition")
    @org.springframework.data.cassandra.core.mapping.Column(value = "business_partition")
    private String businessPartition;

    // getters & setters ...
}
After running this application, table audit_log will not be created automatically.
After digging into the source code of spring-data-cassandra:3.1.9, you can check the implementation of
org.springframework.data.cassandra.config.SessionFactoryFactoryBean#performSchemaAction
which looks like the following:
protected void performSchemaAction() throws Exception {
    boolean create = false;
    boolean drop = DEFAULT_DROP_TABLES;
    boolean dropUnused = DEFAULT_DROP_UNUSED_TABLES;
    boolean ifNotExists = DEFAULT_CREATE_IF_NOT_EXISTS;

    switch (this.schemaAction) {
        case RECREATE_DROP_UNUSED:
            dropUnused = true;
        case RECREATE:
            drop = true;
        case CREATE_IF_NOT_EXISTS:
            ifNotExists = SchemaAction.CREATE_IF_NOT_EXISTS.equals(this.schemaAction);
        case CREATE:
            create = true;
        case NONE:
        default:
            // do nothing
    }

    if (create) {
        createTables(drop, dropUnused, ifNotExists);
    }
}
This means you have to assign CREATE to schema-action if the tables have never been created, and CREATE_IF_NOT_EXISTS does not work.
Unfortunately, we're not done yet.
SessionFactoryFactoryBean#performSchemaAction is invoked as expected, yet the tables are still not created. Why?
It is because Spring Data registers entities in org.springframework.data.cassandra.repository.support.CassandraRepositoryFactoryBean#afterPropertiesSet (via org.springframework.data.mapping.context.AbstractMappingContext#addPersistentEntity(org.springframework.data.util.TypeInformation<?>)), while performSchemaAction is invoked in SessionFactoryFactoryBean. These two FactoryBeans have no defined order, so we do not know which one will be invoked first.
This means that if SessionFactoryFactoryBean#afterPropertiesSet is invoked first, probably no entity has been registered yet, and in that circumstance no tables will be created automatically.
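To see exactly what that fall-through switch resolves to, here is a small stand-alone replica of the flag logic (the enum is a simplified stand-in for Spring's SchemaAction, for illustration only): every action except NONE ends up with create = true, and ifNotExists is true only for CREATE_IF_NOT_EXISTS.

```java
public class SchemaActionFlags {

    public enum Action { NONE, CREATE, CREATE_IF_NOT_EXISTS, RECREATE, RECREATE_DROP_UNUSED }

    /** Replicates the fall-through; returns {create, drop, dropUnused, ifNotExists}. */
    public static boolean[] resolve(Action action) {
        boolean create = false, drop = false, dropUnused = false, ifNotExists = false;
        switch (action) {
            case RECREATE_DROP_UNUSED:
                dropUnused = true;
                // falls through
            case RECREATE:
                drop = true;
                // falls through
            case CREATE_IF_NOT_EXISTS:
                ifNotExists = (action == Action.CREATE_IF_NOT_EXISTS);
                // falls through
            case CREATE:
                create = true;
                // falls through
            case NONE:
            default:
                // do nothing
        }
        return new boolean[] { create, drop, dropUnused, ifNotExists };
    }
}
```

So by this code alone, CREATE_IF_NOT_EXISTS would also reach createTables(...); the problem described below is about when (and whether) performSchemaAction runs at all.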
So how can we create these tables automatically?
One solution is to invoke SessionFactoryFactoryBean#performSchemaAction manually from an ApplicationRunner bean.
First of all, let's create another class that extends SessionFactoryFactoryBean:
public class ExecutableSessionFactoryFactoryBean extends SessionFactoryFactoryBean {

    // widen visibility so createTables(...) can be invoked from outside the bean
    @Override
    public void createTables(boolean drop, boolean dropUnused, boolean ifNotExists) throws Exception {
        super.createTables(drop, dropUnused, ifNotExists);
    }
}
Next we should override org.springframework.data.cassandra.config.AbstractCassandraConfiguration#cassandraSessionFactory as follows:
@Override
@Bean
@Primary
public SessionFactoryFactoryBean cassandraSessionFactory(CqlSession cqlSession) {
    sessionFactoryFactoryBean = new ExecutableSessionFactoryFactoryBean();
    // Initialize the CqlSession reference first since it is required and must not be null
    sessionFactoryFactoryBean.setSession(cqlSession);
    sessionFactoryFactoryBean.setConverter(requireBeanOfType(CassandraConverter.class));
    sessionFactoryFactoryBean.setKeyspaceCleaner(keyspaceCleaner());
    sessionFactoryFactoryBean.setKeyspacePopulator(keyspacePopulator());
    sessionFactoryFactoryBean.setSchemaAction(getSchemaAction());
    return sessionFactoryFactoryBean;
}
Now we can create an ApplicationRunner to perform the schema action:
@Bean
public ApplicationRunner autoCreateCassandraTablesRunner() {
    return args -> {
        if (SchemaAction.CREATE.name().equalsIgnoreCase(requireBeanOfType(CassandraProperties.class).getSchemaAction())) {
            sessionFactoryFactoryBean.createTables(false, false, true);
        }
    };
}
Please refer to this doc: https://docs.spring.io/spring-data/cassandra/docs/4.0.x/reference/html/#cassandra.schema-management.initializing.config
But you still need to create the keyspace before executing the following code:
@Configuration
public class SessionFactoryInitializerConfiguration extends AbstractCassandraConfiguration {

    @Bean
    SessionFactoryInitializer sessionFactoryInitializer(SessionFactory sessionFactory) {
        SessionFactoryInitializer initializer = new SessionFactoryInitializer();
        initializer.setSessionFactory(sessionFactory);

        ResourceKeyspacePopulator populator = new ResourceKeyspacePopulator();
        populator.setSeparator(";");
        populator.setScripts(new ClassPathResource("com/myapp/cql/db-schema.cql"));

        initializer.setKeyspacePopulator(populator);
        return initializer;
    }

    // ...
}
You can also specify this behavior in your application.yml:
spring:
  data:
    cassandra:
      schema-action: create-if-not-exists
You will still need to create the keyspace (with appropriate data center / replication factor pairs) ahead of time, though.
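For illustration, a keyspace matching the question's configuration (keyspace general_log, data center datacenter1) could be created with CQL like this; the replication factor of 1 is only a development-time example and should be chosen per cluster:

```sql
-- Development-time example only: adjust the replication factor for your cluster
CREATE KEYSPACE IF NOT EXISTS general_log
  WITH replication = {'class': 'NetworkTopologyStrategy', 'datacenter1': 1};
```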
I'm using EclipseLink and I'd like to check whether entity and table definitions are consistent.
I found "Integrity Checker" and tried it.
public final class EMF {

    public static class EnableIntegrityChecker implements SessionCustomizer {
        @Override
        public void customize(Session session) throws Exception {
            session.getIntegrityChecker().checkDatabase();
            session.getIntegrityChecker().setShouldCatchExceptions(false);
        }
    }

    private static final EntityManagerFactory INSTANCE;

    static {
        String appId = SystemProperty.applicationId.get();
        Map<String, String> overWriteParam = new HashMap<>();
        overWriteParam.put(
            PersistenceUnitProperties.SESSION_CUSTOMIZER,
            EnableIntegrityChecker.class.getName());
        INSTANCE = Persistence.createEntityManagerFactory("unit", overWriteParam);
    }

    private EMF() {
    }

    public static EntityManager create() {
        return INSTANCE.createEntityManager();
    }
}
In some cases it can detect an inconsistency, but in other cases it cannot:
1. If the entity has a field A and the table does not have a column A, the Integrity Checker finds the inconsistency.
2. If the table has a column A and the entity does not have a field A, the Integrity Checker does not find the inconsistency.
3. If column A in the table is an int and field A in the entity is a String, the Integrity Checker does not find the inconsistency.
How can I detect the inconsistency in cases 2 and 3?
You can extend IntegrityChecker, override its checkTable method, and use it via session.setIntegrityChecker(customIntegrityChecker). Note that some of the validations are located in ClassDescriptor#checkDatabase, so it is hard to directly re-use them and properly report the exact error cause.
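For cases 2 and 3, the heart of such a custom checker is comparing the entity's mapped fields against the table's actual columns and types. A framework-free sketch of that comparison follows; the helper names are illustrative, not EclipseLink API, and in a real checker the inputs would come from the descriptor and database metadata:

```java
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

public class SchemaDiff {

    /** Case 2: columns present in the table but not mapped by the entity. */
    public static Set<String> unmappedColumns(Set<String> tableColumns, Set<String> entityFields) {
        Set<String> diff = new TreeSet<>(tableColumns);
        diff.removeAll(entityFields);
        return diff;
    }

    /** Case 3: names that exist on both sides but whose declared types disagree. */
    public static Set<String> typeMismatches(Map<String, String> tableTypes, Map<String, String> entityTypes) {
        Set<String> mismatches = new TreeSet<>();
        for (Map.Entry<String, String> e : entityTypes.entrySet()) {
            String tableType = tableTypes.get(e.getKey());
            if (tableType != null && !tableType.equals(e.getValue())) {
                mismatches.add(e.getKey());
            }
        }
        return mismatches;
    }
}
```

A custom checkTable override would run comparisons like these and call the usual error-reporting path for each hit.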
I'm trying to implement UoW and Repository pattern, but I get error
An entity object cannot be referenced by multiple instances of IEntityChangeTracker.
I know that I get that error because I have two repositories that create two different DbContexts, but I don't know why that happens.
Here is my code for UoW
public class UnitOfWorkRepositoryRepository : IUnitOfWorkRepository
{
    private readonly IDatabaseFactory _databaseFactory;
    private DatabaseContext _databaseContext;

    public UnitOfWorkRepositoryRepository(IDatabaseFactory databaseFactory)
    {
        _databaseFactory = databaseFactory;
    }

    public DatabaseContext Database
    {
        get { return _databaseContext ?? (_databaseContext = _databaseFactory.GetDatabaseContext()); }
    }

    public void Save()
    {
        _databaseContext.Save();
    }
}
And here sample Repository:
private static readonly DatabaseFactory DatabaseFactory = new DatabaseFactory();
private static readonly UnitOfWorkRepositoryRepository UnitOfWorkRepositoryRepository = new UnitOfWorkRepositoryRepository(DatabaseFactory);

public User GetUserById(int id)
{
    return UnitOfWorkRepositoryRepository.Database.Users.SingleOrDefault(u => u.UserId.Equals(id));
}
What's wrong? How should I implement UoW?
P.S.
I'm not getting any errors in this repository; the other one was too long, and this one serves just as a sample.
Did you try this?
http://www.asp.net/mvc/tutorials/getting-started-with-ef-using-mvc/implementing-the-repository-and-unit-of-work-patterns-in-an-asp-net-mvc-application
I think it is the most descriptive tutorial I have ever seen.
Have a look at this SO answer, where I describe a way to decouple the UoW from the Repository.
What is the best way to handle transactions in this environment?
I have a Transacao class, which has a collection of child Transacao entities:
public class Transacao {
    @OneToMany(fetch = FetchType.LAZY, mappedBy = "pai")
    private List<Transacao> filhos;
}
I load this in JSF from an EJB, something like:
public class TransacaoBean {
    @EJB
    private TransacaoService transacaoService;

    private void edit(Long id) {
        this.transacao = transacaoService.findById(id);
    }
}
However, if I want to access the collection of filhos, I have to do this:
public class TransacaoBean {
    ...
    private void edit(Long id) {
        this.transacao = transacaoService.findById(id);
        log.info(this.transacao.getFilhos().size()); // this throws a LazyInitializationException
    }
}
and I get an Exception.
What is the best way to have this loaded in my JSF? I'm considering creating a Filter and using UserTransaction to keep the transaction open for the request, or fetching the filhos in my EJB. Is there a better solution, and which of these is preferable?
The default value of fetch in @OneToMany is FetchType.LAZY.
You can set it to FetchType.EAGER to be able to use the collection in a non-managed environment.
Or you can add another EJB (or method) for getting the list, or just its size:
public class TransacaoService {

    @PersistenceContext
    private EntityManager entityManager;

    public Transacao findById(final long id) {
        ...
    }

    public long getFilhosSize(final long id) {
        // count the children without initializing the lazy collection
        return entityManager.createQuery(
                "SELECT COUNT(t) FROM Transacao t WHERE t.pai.id = :id", Long.class)
            .setParameter("id", id)
            .getSingleResult();
    }
}
Can someone please provide me with an example of GWT + JPA + Gilead? I can't seem to find anything on Google on this topic.
Thanks
Thanks Maksim,
I'm not using this on an EJB server but on Tomcat. I understand the step you've pointed out above, but I'm not sure how to do the next step, which is to set up the PersistentBeanManager and send my object over the wire.
Here is what I have so far, but I haven't had a chance to test whether it works yet. If you see a problem with it, let me know, thanks.
private HibernateJpaUtil gileadUtil = new HibernateJpaUtil();
private static final EntityManagerFactory factory =
    Persistence.createEntityManagerFactory("MyPersistentUnit");

public MyServlet() {
    gileadUtil.setEntityManagerFactory(factory);
    PersistentBeanManager pbm = new PersistentBeanManager();
    pbm.setPersistenceUtil(gileadUtil);
    pbm.setProxyStore(new StatelessProxyStore());
    setBeanManager(pbm);

    Book book = new Book();
    Book cloned = (Book) pbm.clone(book);
    // send the cloned book over the wire
}
I tried to set up my project very similarly and also ran into the Hibernate exception. I figured out that when using JPA I need to initialize the HibernateJpaUtil with the EntityManagerFactory. When I did this, it worked. This changes your first two lines of code to:
public class MyServiceImpl extends PersistentRemoteService implements MyService {

    public MyServiceImpl() {
        final EntityManagerFactory emf = Persistence.createEntityManagerFactory("MA");
        final PersistentBeanManager persistentBeanManager = new PersistentBeanManager();
        persistentBeanManager.setPersistenceUtil(new HibernateJpaUtil(emf)); // <- needs the EMF here
        persistentBeanManager.setProxyStore(new StatelessProxyStore());
        setBeanManager(persistentBeanManager);
    }

    @Override // from MyService
    public Stuff getStuff() {
        // no need for clone/merge here, as Gilead's GWT PersistentRemoteService does this for us
        ...
        return stuff;
    }
}
Also, I used net.sf.gilead.pojo.java5.legacy.LightEntity as the base class for all my entities (note the java5.legacy package).
Entity:
// imports
@Entity
public class Book extends LightEntity implements Serializable {

    private static final long serialVersionUID = 21L;

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    private String title;

    @Lob
    private String description;

    @ManyToMany(cascade = CascadeType.ALL)
    private List<Author> author;

    // Getters and setters

    @Override
    public int hashCode() {
        int hash = 0;
        hash += (getId() != null ? getId().hashCode() : 0);
        return hash;
    }

    @Override
    public boolean equals(Object object) {
        // TODO: Warning - this method won't work in the case the id fields are not set
        if (!(object instanceof Book)) {
            return false;
        }
        Book other = (Book) object;
        if ((this.getId() == null && other.getId() != null)
                || (this.getId() != null && !this.id.equals(other.id))) {
            return false;
        }
        return true;
    }
}
The Book object looks the same.
Then use it as a regular EJB on your server and as regular DTOs on your client.
Don't forget to add Gilead's libraries to your project.
Hope this blog will help you.
http://zawoad.blogspot.com/2010/06/google-app-engine-jdo-and-gxtext-gwt.html
This is not a direct example of what you want, but the approach should be the same. We followed it in our project with GWT + JPA + EJB: to send your object over the wire you need a Data Transfer Object (DTO). Convert your entity to a DTO, send the DTO to the client, and convert it back to an entity when you want to do something with it on the server.
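The DTO approach above can be sketched in a few lines. The classes and field names here are purely illustrative (not Gilead API and not the question's actual code); the point is that the DTO carries only plain serializable fields, with no lazy proxies attached:

```java
import java.io.Serializable;

// Hypothetical entity and DTO, for illustration only.
public class BookDtoExample {

    public static class Book {
        public Long id;
        public String title;
        public Book(Long id, String title) { this.id = id; this.title = title; }
    }

    // The DTO holds only plain serializable fields, safe to send over the wire.
    public static class BookDto implements Serializable {
        public Long id;
        public String title;
    }

    public static BookDto toDto(Book entity) {
        BookDto dto = new BookDto();
        dto.id = entity.id;
        dto.title = entity.title;
        return dto;
    }

    public static Book toEntity(BookDto dto) {
        return new Book(dto.id, dto.title);
    }
}
```

Gilead automates exactly this clone/merge step, which is why the conversion disappears from the service code once PersistentRemoteService is set up.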