I am following the example provided here by EclipseLink.
When I start my tests, it fails with:
javax.persistence.RollbackException: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.7.1.v20171221-bd47e8f):
org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: org.postgresql.util.PSQLException: ERROR: relation "event_history" does not exist.
The framework isn't creating the table as I would expect. I have the following configuration:
<property name="eclipselink.ddl-generation" value="drop-and-create-tables"/>
From this link, I don't feel it's necessary to add the DescriptorCustomizer class to the persistence.xml file. But I may be wrong.
My question is: do I have to create the table manually? Or am I doing something wrong? The examples I found related to this feature are quite poor.
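For reference, this is roughly how the history policy is registered through a DescriptorCustomizer in the example I'm following (a sketch; the EventHistoryCustomizer name and the table/column names are my guesses based on the error above):
import org.eclipse.persistence.config.DescriptorCustomizer;
import org.eclipse.persistence.descriptors.ClassDescriptor;
import org.eclipse.persistence.history.HistoryPolicy;

public class EventHistoryCustomizer implements DescriptorCustomizer {

    @Override
    public void customize(ClassDescriptor descriptor) {
        // Tell EclipseLink to mirror every write into the event_history table.
        HistoryPolicy policy = new HistoryPolicy();
        policy.addHistoryTableName("event_history");
        policy.addStartFieldName("start_date");
        policy.addEndFieldName("end_date");
        descriptor.setHistoryPolicy(policy);
    }
}
The customizer is attached to the entity with @Customizer(EventHistoryCustomizer.class) rather than listed in persistence.xml.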
Some solutions are discussed in the EclipseLink forum.
Clovis Wichoski, 2016-01-02 15:29:19 EST:
The problem still occurs with 2.6.2
Here is a StringTemplate that can be used to ease creating the table by hand (for a Postgres database):
CREATE TABLE <tableName>_audit (
LIKE <tableName> EXCLUDING ALL,
audit_date_start timestamp,
audit_date_end timestamp
)
Here is another possible solution:
Peter Hansson, 2016-03-25 05:30:42 EDT:
Yes, I've had the same issue.
I've explored a couple of avenues to get EclipseLink to generate the history tables for me (so that they always reflect their base table). I haven't been able to come up with a method that would work, much less one that would be database-agnostic.
I do believe the only way to solve this is in the core of EclipseLink, for example by adding a new annotation, @HistoryTable.
I'm thinking something along the lines of the following:
Suppose you have a base class, Person:
@Entity
public class Person {
    @Id
    private Long personId;
    private String firstName;
    private String lastName;
    ..
}
Then we could define a history entity for that entity as follows:
@Entity
@HistoryTable(base=Person.class, primaryKeyFields="personId,rowStartTime")
public class PersonHist {
    // Add here the extra fields/columns that should exist for the
    // history table.
    private Date rowStartTime;
    private Date rowEndTime;
    ..
}
The @HistoryTable annotation would replicate all fields from the base entity, including most field annotations, except for annotations related to relations, which wouldn't be relevant on the history table.
By definition the history table's primary key will always be a composite of columns in the base table; typically it will be like in the example. In the example the PersonHist entity will think it has an @Id annotation on the personId and rowStartTime fields. (yeah, this area needs more brain work :-))
As far as I can understand from reading this part of the documentation,
https://docs.jboss.org/hibernate/search/6.0/reference/en-US/html_single/#mapper-orm-reindexing-basics
there is no automatic reindexing for @IndexedEmbedded fields that don't have a bidirectional mapping. Am I correct? If so, I'm curious to know what led to this change, because in Hibernate Search 5 automatic reindexing happened when updating a field of an @IndexedEmbedded object. Does this mean that I'm now responsible for updating the index?
Here's an example of my use case, which leads to an index that is not updated:
@Indexed(index = "foo_index")
@Entity
public class Foo {
    @Id
    private Long id;

    @IndexedEmbedded
    @ManyToOne(fetch = LAZY)
    private Bar bar;
}

@Entity
public class Bar {
    @Id
    private Long id;

    @GenericField
    private String barFieldOne;

    @GenericField
    private String barFieldTwo;
}
Then let's say I retrieve the Foo from the db and change a bar field like this:
Foo foo = repository.findById(1);
foo.getBar().setBarFieldOne("newValue");
repository.save(foo);
This does not trigger an update of the foo index, even though I'm working through the @Indexed object (Foo in this case). I have a lot of unidirectional relations and I don't want to make them bidirectional, because I don't need them and they can lead to performance problems. I understand that if I update the Bar entity on its own the index won't be updated, but here I'm updating it through the main @Indexed entity and I expect the index to be updated.
This use case worked flawlessly in Hibernate Search 5, and in my honest opinion it is an important one. Is there a way to make it work here? It would make my life a lot easier.
You understood correctly: Hibernate Search cannot trigger reindexing when there is only a unidirectional association between the modified entity and the indexed entity.
There are plans to address that, maybe, one day, but that will still require some configuration: https://hibernate.atlassian.net/browse/HSEARCH-1937
"This use case worked flawlessly in Hibernate Search 5, and in my honest opinion it is an important one."
I'm going to need a reproducer for that. I would be very, very surprised if you managed to make it work. If it worked, it was probably just a side-effect of something else: you disabled dirty checking, or you had a transient property on your entity that caused it to be reindexed every single time.
All we did in Search 6 was to make sure we throw an error when you try to use @IndexedEmbedded on a unidirectional association, and force you to explicitly disable automatic reindexing for that association.
It didn't work in Hibernate Search 5 either, but Hibernate Search 5 would ignore these problems silently and you would end up thinking it worked, but it did not.
So really, the only change is that you are now aware of the problem. It existed before.
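To make the workaround concrete, here is a sketch of how you can explicitly disable automatic reindexing for that association in Search 6 (the exact package names may vary slightly between versions, so double-check them against your dependency; once you do this, keeping the foo index up to date for Bar changes becomes your responsibility, for example through periodic mass indexing):
import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.Id;
import javax.persistence.ManyToOne;

import org.hibernate.search.mapper.pojo.automaticindexing.ReindexOnUpdate;
import org.hibernate.search.mapper.pojo.mapping.definition.annotation.Indexed;
import org.hibernate.search.mapper.pojo.mapping.definition.annotation.IndexedEmbedded;
import org.hibernate.search.mapper.pojo.mapping.definition.annotation.IndexingDependency;

@Indexed(index = "foo_index")
@Entity
public class Foo {

    @Id
    private Long id;

    // SHALLOW tells Hibernate Search not to track changes made to Bar through
    // this path, so the unidirectional association is accepted without an
    // inverse side. Changes to barFieldOne/barFieldTwo will then NOT trigger
    // reindexing of Foo.
    @IndexedEmbedded
    @IndexingDependency(reindexOnUpdate = ReindexOnUpdate.SHALLOW)
    @ManyToOne(fetch = FetchType.LAZY)
    private Bar bar;
}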
I am working with EclipseLink and having an issue using a secondary table.
I have two tables as below.
Student, with columns student_id (primary key), student_name, etc.
Registration, with columns student_id (FK relationship with the Student table), course_name (with a not-null constraint), etc.
The requirement is that a student may or may not have a registration. If the student has a registration, the data should be persisted to the Registration table as well; otherwise only the Student table should be persisted.
My code snippet is as below.
Student.java
------------
@Entity
@Table(name = "STUDENT")
@SecondaryTable(name = "REGISTRATION")
public class Student {

    @Id
    @Column(name = "STUDENT_ID")
    private long studentId;

    @Basic(optional = true)
    @Column(name = "COURSE_NAME", table = "REGISTRATION")
    private String courseName;
}
I tried the following scenarios.
1. Student with registration - working fine. Data is added to both the Student and Registration tables.
2. Student without registration - getting an error such as 'COURSE_NAME' cannot be null.
Is there a way to prevent persisting into secondary table?
Any help is much appreciated.
Thanks!!!
As @Eelke states, the best solution is to define two classes and a OneToOne relationship.
Potentially you could also use inheritance, having a Student and a RegisteredStudent that adds the additional table. But the relationship is a much better design.
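A rough sketch of that design, with two entities sharing the student's primary key (class and field names are illustrative, each class in its own file; the cascade and orphan removal settings are one possible choice, not a requirement):
import javax.persistence.*;

@Entity
@Table(name = "STUDENT")
public class Student {

    @Id
    @Column(name = "STUDENT_ID")
    private long studentId;

    // Null when the student has no registration; nothing is written to the
    // REGISTRATION table in that case.
    @OneToOne(mappedBy = "student", cascade = CascadeType.ALL, orphanRemoval = true)
    private Registration registration;
}

@Entity
@Table(name = "REGISTRATION")
public class Registration {

    @Id
    @Column(name = "STUDENT_ID")
    private long studentId;

    @Column(name = "COURSE_NAME", nullable = false)
    private String courseName;

    // The student's primary key doubles as this table's primary key and FK.
    @OneToOne
    @MapsId
    @JoinColumn(name = "STUDENT_ID")
    private Student student;
}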
It's possible using a DescriptorEventListener. The aboutToInsert and aboutToUpdate callbacks have access to the DatabaseCalls and may even remove the statements hitting the secondary table.
Register the DescriptorEventListener with the ClassDescriptor of the entity. For registration use a DescriptorCustomizer specified in a Customizer annotation at the entity.
However, you will not succeed in fetching the entities back later on. EclipseLink uses inner joins when selecting from the secondary table, so rows of the primary table that have no matching row in the secondary table will be missing from the results.
I'm trying to audit a collection of @Embeddable objects using hibernate-envers.
According to https://hibernate.atlassian.net/browse/HHH-6613, support for auditing an @ElementCollection was added. This feature doesn't seem to work well: when trying to save several @Embeddable objects with the same revision number, a NonUniqueObjectException is thrown.
Does anyone have a working example of @ElementCollection + @Embeddable audited with Envers?
As of Hibernate 5.2.8, we managed to make it work with these steps (see the sketch after the list):
1. Define the Java type of the collection of embeddable elements as Set.
2. Implement hashCode() and equals() in the class of the embeddable elements.
3. Make sure an int column named SETORDINAL exists in the table that holds the audit log of those elements (or let Hibernate create the tables for you by setting the appropriate configuration key).
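A minimal sketch of that mapping (the entity, table, and column names here are made up for illustration, not taken from the question):
import java.util.LinkedHashSet;
import java.util.Objects;
import java.util.Set;
import javax.persistence.*;
import org.hibernate.envers.Audited;

@Entity
@Audited
public class Contract {

    @Id
    @GeneratedValue
    private Long id;

    // Step 1: the collection of embeddables is mapped as a Set
    @ElementCollection
    @CollectionTable(name = "CONTRACT_PARTY",
                     joinColumns = @JoinColumn(name = "CONTRACT_ID"))
    private Set<Party> parties = new LinkedHashSet<>();
}

@Embeddable
public class Party { // separate file in practice

    private String name;

    // Step 2: equals()/hashCode() based on the embeddable's state
    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Party)) return false;
        return Objects.equals(name, ((Party) o).name);
    }

    @Override
    public int hashCode() {
        return Objects.hash(name);
    }
}

// Step 3: the corresponding audit table (CONTRACT_PARTY_AUD with the default
// suffix) additionally needs an int column named SETORDINAL, unless you let
// Hibernate generate the schema.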
I have a working project that uses JPA to persist data to MySQL (EclipseLink as the provider). Recently I wanted to drop the database and create it again with the tables via Eclipse => EclipseLink 'Generate tables from entities'. I also have persistence.xml updated (first generated automatically, then modified manually to narrow down this problem).
<provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
<class>xxx.entity.options.Difficulty</class>
<class>xxx.entity.options.Options</class>
(I omitted the rest since it is OK otherwise; it works in general.)
The problem is that when I generate the tables I get the error:
Internal Exception: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Table 'xxx.DIFFICULTY' doesn't exist
Error Code: 1146
(The entity classes have table names defined:
@Entity
@Table(name = "DIFFICULTY"))
If I comment out 'Options' from persistence.xml, the 'DIFFICULTY' table gets created OK. Then, uncommenting 'Options' again and re-generating the tables, the 'OPTIONS' table is created as well (Options has a @ManyToOne association with Difficulty).
In the persistence.xml I have
<property name="eclipselink.ddl-generation" value="drop-and-create-tables" />
as I am still developing.
In the Options class I have
@ManyToOne(optional = false, cascade = CascadeType.REFRESH, fetch = FetchType.LAZY)
private Difficulty difficulty = null;
(Many Options are supposed to have the same Difficulty selected, so none really owns it. I hope this is correct?)
After the Difficulty and Options tables were created successfully, I was able to re-create the rest of the database tables.
The question is: should I (be able to) specify the order in which the tables are created?
Is there something wrong with my @ManyToOne association?
I have already spent a couple of hours on this issue but couldn't figure out what the problem is.
Sorry for the long text, I'm just trying to explain the whole situation.
Last time I received a 'tl;dr' answer (I didn't know at the time what it meant) and spent 6 hours looking at the wrong thing, so please don't bother answering in that case.
If you look at the exception's stack, you'll see it is coming from your entity's default constructor. This constructor is trying to issue a query by obtaining an EntityManager, and is failing because the table it needs doesn't exist yet. When you create that one table, the constructor can query it, which allows everything else to proceed.
You should not have business logic in your default constructor. It is used by the provider and gets called during deployment, before DDL generation. Removing that logic will resolve the issue.
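To illustrate what that means (the question doesn't show the constructor, so this is only a guess at the shape of the problem):
@Entity
@Table(name = "DIFFICULTY")
public class Difficulty {

    // Problematic: the persistence provider invokes the no-arg constructor
    // during deployment, before the drop-and-create DDL has run, so any query
    // issued from here hits tables that do not exist yet and produces the
    // "Table 'xxx.DIFFICULTY' doesn't exist" error.
    public Difficulty() {
        // e.g. looking up default values with an EntityManager here
    }

    // Better: keep the default constructor empty and move such lookups into a
    // service or repository that runs after the EntityManagerFactory is created.
}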
Are you changing the classes/schema between runs? It could be that you have added or removed relationships, so EclipseLink is not able to drop the old constraints.
For drop-and-create-tables EclipseLink will basically do the following:
drop known constraints
drop known tables
create known tables
create known constraints
In the latest release EclipseLink will try a couple of drop passes to handle unknown constraints, but the best approach for a rapidly changing development schema is to drop the entire schema between deployments.
I'm sure this has a simple answer, but I cannot find the right source for the details.
I have a ManyToOne relationship. Because of a synchronization system, when a child is removed, a field named 'removed' is set to 'true', and the row is only actually deleted a month later.
In the meantime, however, I would like such children not to appear in the List in the parent. Is there an easy way to specify a select condition in the definition of the field, or something similar?
@OneToMany(mappedBy = "parent")
@OrderBy("level")
public List<MenuItem> children;
As you are using Hibernate, you can use the @Where annotation. I have never used it myself, but it seems quite straightforward. Have a look here: http://docs.jboss.org/hibernate/stable/annotations/reference/en/html_single/#entity-hibspec-collection
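For example, a sketch of what that could look like, assuming 'removed' is a boolean column (adjust the clause to your actual column type; the containing Menu class here is only illustrative, since the question doesn't show it):
import java.util.List;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.OneToMany;
import javax.persistence.OrderBy;
import org.hibernate.annotations.Where;

@Entity
public class Menu {

    @Id
    private Long id;

    // Children whose 'removed' flag is set are filtered out of the collection.
    @OneToMany(mappedBy = "parent")
    @OrderBy("level")
    @Where(clause = "removed = false")
    public List<MenuItem> children;
}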