Owner:
@Entity
public class Strategy implements Serializable {
    @Id
    @GeneratedValue
    private Long id;

    @ManyToMany(fetch = FetchType.EAGER, cascade = {CascadeType.PERSIST})
    @JoinTable(name = "StrategyHost",
            joinColumns = {@JoinColumn(name = "strategyId")},
            inverseJoinColumns = {@JoinColumn(name = "hostId")})
    private Set<Host> hostName;
}
Related entity:
@Entity
public class Host {
    @Id
    @GeneratedValue
    private Long id;

    @Column(unique = true)
    private String name;

    @ManyToMany(mappedBy = "hostName")
    private List<Strategy> strategies;

    public Host(String name) {
        this.name = name;
    }
}
Test:
@Test
@Transactional(propagation = Propagation.NOT_SUPPORTED)
public void testStrategyWithHosts() {
    Strategy s = new Strategy();
    Set<Host> hosts = new HashSet<>();
    hosts.add(Host.builder().name("aaa").build());
    hosts.add(Host.builder().name("bbb").build());
    s.setHostName(hosts);
    Strategy saved = strategyDao.save(s);
    Set<Host> hostName = saved.getHostName();
}
Debugging the persisted saved object shows its Host entries, but where are the name values?
However, if I add MERGE to the cascade type array, the names are populated. Why does an insert (not an update of managed entities) of related entities require the MERGE cascade type? The log shows nothing suspicious:
insert into strategy...
insert into host...
insert into host...
update strategy ...
insert into strategy_host ...
insert into strategy_host ...
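For reference, the cascade change the question describes (adding MERGE alongside PERSIST) would look like the sketch below; this only restates the questioner's own workaround and is not an explanation of why it is required:
@ManyToMany(fetch = FetchType.EAGER,
        cascade = {CascadeType.PERSIST, CascadeType.MERGE}) // MERGE added, which the question reports fixes the missing names
@JoinTable(name = "StrategyHost",
        joinColumns = {@JoinColumn(name = "strategyId")},
        inverseJoinColumns = {@JoinColumn(name = "hostId")})
private Set<Host> hostName;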
Related question:
I have two entities, fileVersion and fileEnvironment, which have a many-to-many relationship. I'm using a junction table, modeled by the fileDeployment entity.
The junction entity:
@Data
@Builder(toBuilder = true)
@NoArgsConstructor
@AllArgsConstructor
@Entity
@Table(name = "file_deployment")
public class FileDeploymentEntity {
    @EmbeddedId
    private FileDeploymentKey id;

    @ToString.Exclude
    @ManyToOne
    @MapsId("fileVersionId")
    @JoinColumn(name = "fileVersionId")
    private FileVersionEntity fileVersion;

    @ToString.Exclude
    @ManyToOne
    @MapsId("fileEnvironmentId")
    @JoinColumn(name = "fileEnvironmentId")
    private FileEnvironmentEntity fileEnvironment;
}
Its composite key:
@Embeddable
@NoArgsConstructor
@AllArgsConstructor
@Data
@Builder(toBuilder = true)
public class FileDeploymentKey implements Serializable {
    @Column
    private UUID fileVersionId;

    @Column
    private UUID fileEnvironmentId;
}
Its JPA repository:
@Repository
public interface FileDeploymentEntityRepository extends
        JpaRepository<FileDeploymentEntity, FileDeploymentKey>,
        JpaSpecificationExecutor<FileDeploymentEntity> {
}
The two entities whose many-to-many relationship the junction entity captures:
@Data
@Builder(toBuilder = true)
@NoArgsConstructor
@AllArgsConstructor
@Entity
@Table(name = "file_environment")
public class FileEnvironmentEntity {
    @Id
    @GeneratedValue(generator = "UUID")
    @GenericGenerator(name = "UUID", strategy = "uuid2")
    private UUID id;

    @ToString.Exclude
    @OneToMany(mappedBy = "fileEnvironment")
    private List<FileDeploymentEntity> fileDeployments;
}
FileVersion is the other:
@Data
@Builder(toBuilder = true)
@NoArgsConstructor
@AllArgsConstructor
@Entity
@Table(name = "file_version")
public class FileVersionEntity {
    @Id
    @GeneratedValue(generator = "UUID")
    @GenericGenerator(name = "UUID", strategy = "uuid2")
    private UUID id;

    @ToString.Exclude
    @OneToMany(mappedBy = "fileVersion")
    private List<FileDeploymentEntity> fileDeployments;
}
The following code executes fine:
var fileDeploymentEntity = FileDeploymentEntity.builder()
        .id(FileDeploymentKey.builder()
                .fileVersionId(existingFileVersion.get().getId())
                .fileEnvironmentId(existingFileEnvironment.get().getId())
                .build())
        .deploymentTime(Instant.now(clock))
        .fileEnvironment(existingFileEnvironment.get())
        .fileVersion(existingFileVersion.get())
        .build();
var result = fileDeploymentEntityRepository.save(fileDeploymentEntity);
But when fileDeploymentEntityRepository.flush() is eventually called, I get the following exception:
could not execute statement; SQL [n/a]; constraint [id]
org.hibernate.exception.ConstraintViolationException: could not
execute statement
org.postgresql.util.PSQLException: ERROR: null value in column "id"
violates not-null constraint Detail: Failing row contains
(7670fec3-3766-4c69-9598-d4e89b5d1845,
b9f6819e-af89-4270-a7b9-ccbd47f62c39, 2019-10-15 20:29:10.384987,
null, null, null, null).
If I also call save for the two entities, it doesn't change the result:
fileVersionEntityRepository
.save(existingFileVersion.get().addFileDeployment(fileDeploymentEntity));
fileEnvironmentEntityRepository
.save(existingFileEnvironment.get().addFileDeployment(fileDeploymentEntity));
Any help would be greatly appreciated!
For me, the issue was that I had accidentally given another entity the same table name, which caused the generated schema to be very different from what I thought it was.
Takeaway lesson: when in doubt, check the schema that is actually generated.
var con = dataSource.getConnection();
var databaseMetaData = con.getMetaData();

// List all tables that were actually created
ResultSet resultSet = databaseMetaData.getTables(null, null, null, new String[]{"TABLE"});
System.out.println("Printing TABLE_TYPE \"TABLE\" ");
System.out.println("----------------------------------");
while (resultSet.next()) {
    System.out.println(resultSet.getString("TABLE_NAME"));
}

// List the columns of a specific table (here: "study")
ResultSet columns = databaseMetaData.getColumns(null, null, "study", null);
while (columns.next()) {
    String columnName = columns.getString("COLUMN_NAME");
    System.out.println(columnName);
}
I have three tables, each mapping to one of these entities. The 'assigned' table acts as the relationship between 'users' and 'roles', with a foreign key to each table. How would I map this on my entities so that I can get a Set of RoleEntity objects from the UserEntity? I can't quite figure out how to make this work. Is this even possible?
@Entity
@Table(name = "users")
public class UserEntity {
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    @Column(name = "user_id")
    private long id;

    @Column(name = "user_username")
    private String username;

    @Column(name = "user_password")
    private String password;

    @Column(name = "user_email")
    private String email;

    // I want to be able to get a set of RoleEntities
    @OneToMany(fetch = FetchType.LAZY, mappedBy = "id")
    private Set<RoleEntity> roles;
}
@Entity
@Table(name = "assigned")
public class AssignedEntity implements Serializable {
    @Id
    //@Column(name = "assigned_role")
    @ManyToOne(targetEntity = RoleEntity.class, fetch = FetchType.LAZY)
    @JoinColumn(name = "fk_role")
    private long roleId;

    @Id
    //@Column(name = "assigned_user")
    @ManyToOne(targetEntity = UserEntity.class, fetch = FetchType.LAZY)
    @JoinColumn(name = "fk_user")
    private long userId;
}
@Entity
@Table(name = "roles")
public class RoleEntity implements Serializable {
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    @Column(name = "role_id")
    @OneToOne(fetch = FetchType.LAZY, mappedBy = "roleId")
    private long id;

    @Column(name = "role_name")
    private String name;
}
You are using an incorrect and inconvenient mapping. Always keep things as simple as possible.
@Entity
@Table(name = "users")
public class User {
    @Id
    @GeneratedValue
    private Long id;

    @ManyToMany(fetch = FetchType.LAZY)
    private List<Role> roles;
}
@Entity
@Table(name = "roles")
public class Role {
    @Id
    private Long id;

    @Column
    private String name;
}
The persistence provider will create a (valid) join table for you. You can specify the name of the join table using the @JoinTable annotation. You will also need to think about how id values are generated for the Role entity: the roles table is essentially a reference data table, so you will probably need to hardcode the id values.
To get a user's roles (within the persistence context):
user.getRoles()
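As a sketch of the explicit join-table naming mentioned above, reusing the assigned table and the fk_user/fk_role column names from the question (these names are taken from the original mapping, not from this answer's code):
@ManyToMany(fetch = FetchType.LAZY)
@JoinTable(name = "assigned",
        joinColumns = @JoinColumn(name = "fk_user"),
        inverseJoinColumns = @JoinColumn(name = "fk_role"))
private List<Role> roles;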
I have two classes in a many-to-many relation.
@Entity
@Table(name = "recipies")
public class Recipie implements Serializable {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
    private String name;
    private String url;
    private String image;

    @ManyToMany
    @JoinTable(
            name = "recipie_ingredients",
            joinColumns = {
                    @JoinColumn(name = "recipie_id", referencedColumnName = "id")},
            inverseJoinColumns = {
                    @JoinColumn(name = "ingredient_id", referencedColumnName = "id")})
    private List<Ingredient> ingredients = new ArrayList<>();
@Entity
@Table(name = "ingredients")
public class Ingredient implements Serializable {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Integer id;
    private String name;

    @ManyToMany(mappedBy = "ingredients")
    private List<Recipie> recipies;
I would like to create a new recipie this way:
List<Ingredient> ingredientsList = new ArrayList<>();
String ingredientName = "example";
Ingredient ingredient = ingredientsDao.findIngredientByName(ingredientName);
if (ingredient == null) {
    ingredient = new Ingredient();
    ingredient.setName(ingredientName);
}
ingredientsList.add(ingredient);
.....
recipie.setIngredients(ingredientsList);
recipiesDao.addRecipie(recipie);
If the ingredient doesn't exist in the database, errors like this occur:
Caused by: java.lang.IllegalStateException: During synchronization a new object was found through a relationship that was not marked cascade PERSIST
Is there any way to have the Ingredient objects created in the table automatically?
I tried adding CascadeType.PERSIST, but that doesn't work either:
@ManyToMany(mappedBy = "ingredients", cascade = CascadeType.PERSIST)
private List<Recipie> recipies;
First of all, for a bidirectional relationship, both sides need to be updated, so:
recipe.getIngredients().add(ingredient);
ingredient.getRecipes().add(recipe);
Then, you can set the cascade to PERSIST on the side of the relationship which you are passing to save(). So if you are saving the recipe, you should mark the Recipe.ingredients with
@ManyToMany(cascade = CascadeType.PERSIST)
(Side note, it's spelled "recipe", not "recipie")
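Put together, the owning side from the question would then carry the cascade; the sketch below combines the question's @JoinTable with the cascade this answer recommends (the cascade the questioner placed on Ingredient.recipies sits on the inverse side and does not help when saving a recipe):
@ManyToMany(cascade = CascadeType.PERSIST)   // cascade declared on the side that is passed to save()
@JoinTable(
        name = "recipie_ingredients",
        joinColumns = {
                @JoinColumn(name = "recipie_id", referencedColumnName = "id")},
        inverseJoinColumns = {
                @JoinColumn(name = "ingredient_id", referencedColumnName = "id")})
private List<Ingredient> ingredients = new ArrayList<>();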
As mentioned by @Gimby, you need to assign both sides of the relationship.
When dealing with @Many... sided relationships, I always initialise the collection (which you've done on one side):
@Entity
@Table(name = "recipies")
public class Recipie implements Serializable {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
    private String name;
    private String url;
    private String image;

    @ManyToMany
    @JoinTable(
            name = "recipie_ingredients",
            joinColumns = {
                    @JoinColumn(name = "recipie_id", referencedColumnName = "id")},
            inverseJoinColumns = {
                    @JoinColumn(name = "ingredient_id", referencedColumnName = "id")})
    private List<Ingredient> ingredients = new ArrayList<>();
    ...
}
@Entity
@Table(name = "ingredients")
public class Ingredient implements Serializable {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Integer id;
    private String name;

    @ManyToMany(mappedBy = "ingredients")
    private List<Recipie> recipies = new ArrayList<>();
    ...
}
And then a slight variation in your logic:
String ingredientName = "example";
Ingredient ingredient = ingredientsDao.findIngredientByName(ingredientName);
if (ingredient == null) {
    ingredient = new Ingredient();
    ingredient.setName(ingredientName);
}
...
// Don't forget to assign both sides of the relationship
recipe.getIngredients().add(ingredient);
ingredient.getRecipies().add(recipe);
recipiesDao.addRecipe(recipe);
This should then cascade persist/update correctly.
The real fun will begin when you try to figure out how to associate a quantity with the ingredient...
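If a quantity per ingredient is eventually needed, a common approach (shown here only as a sketch, not part of either answer; all names are illustrative) is to promote the join table to its own entity with the extra column:
// Hypothetical join entity replacing the plain @ManyToMany, so a quantity can be stored per recipe/ingredient pair
@Entity
@Table(name = "recipie_ingredients")
public class RecipieIngredient implements Serializable {
    @EmbeddedId
    private RecipieIngredientId id;   // @Embeddable key holding recipieId and ingredientId

    @ManyToOne
    @MapsId("recipieId")
    @JoinColumn(name = "recipie_id")
    private Recipie recipie;

    @ManyToOne
    @MapsId("ingredientId")
    @JoinColumn(name = "ingredient_id")
    private Ingredient ingredient;

    @Column(name = "quantity")
    private String quantity;          // e.g. "200 g"; kept as text for simplicity
}
Recipie and Ingredient would then each hold a @OneToMany(mappedBy = ...) collection of RecipieIngredient instead of the direct @ManyToMany.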
I have two tables called SL_DOCUMENT and SL_PROPOSE. SL_DOCUMENT has its own ID (ID_DOCUMENT) and a foreign key to SL_PROPOSE (ID_PROPOSE). The SL_PROPOSE ID column is ID_PROPOSE. The particularity is that the SL_PROPOSE ID value is actually the SL_DOCUMENT.ID_DOCUMENT value; i.e., after a new SL_DOCUMENT is inserted, the related SL_PROPOSE should be inserted with SL_DOCUMENT.ID_DOCUMENT as its ID, and later the same value should be used in the SL_DOCUMENT.ID_PROPOSE column.
I did my JPA mapping as follows:
@Entity
@Table(name = "SL_DOCUMENT")
public class DocumentORM {
    @Id
    @Column(name = "ID_DOCUMENT")
    @SequenceGenerator(name = "SEQ_SL_DOCUMENT", sequenceName = "SEQ_SL_DOCUMENT")
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "SEQ_SL_DOCUMENT")
    private Long id;

    @OneToOne(mappedBy = "document", cascade = { CascadeType.PERSIST })
    // @JoinColumn(name = "ID_PROPOSE", updatable = false)
    private ProposeORM propose;
    // ...
}
@Entity
@Table(name = "SL_PROPOSE")
public class ProposeORM {
    @Id
    @Column(name = "ID_PROPOSE")
    private Long id;

    @MapsId
    @OneToOne
    @JoinColumn(name = "ID_PROPOSE")
    private DocumentORM document;
    // ...

    public ProposeORM(DocumentORM document) {
        super();
        this.document = document;
        this.document.setPropositura(this);
    }
}
To create the new instances of DocumentORM and ProposeORM:
DocumentORM document = new DocumentORM();
ProposeORM propose = new ProposeORM(document);
And finally to insert the new Document with ProposeORM:
this.documentoDAO.insert(document);
When I actually insert a document, according to the snippets above, I see in the console (WebSphere 8.5) the INSERT commands for SL_DOCUMENT and SL_PROPOSE running correctly. However, when I look at the tables, the column SL_DOCUMENT.ID_PROPOSE is still NULL. Even if I uncomment the @JoinColumn annotation over DocumentORM.propose, the SL_DOCUMENT.ID_PROPOSE column is still not filled.
The ideal would be if SL_DOCUMENT had a discriminator column and ProposeORM were a DocumentORM subclass using the JOINED InheritanceType (there are other tables with the same kind of relationship to SL_DOCUMENT). However, these are legacy tables and it is not possible to change them.
So, what is the alternative to fill SL_DOCUMENT.ID_PROPOSE? A workaround I was considering is to fill this column using native SQL. Do you have better ideas?
Thanks,
Rafael Afonso
The solution I see is to make ProposeORM's ID not auto-generated, since you always want it to have the ID of the document it's linked to, AND still have a join column in the document table:
@Entity
@Table(name = "SL_DOCUMENT")
public class DocumentORM {
    @Id
    @Column(name = "ID_DOCUMENT")
    @SequenceGenerator(name = "SEQ_SL_DOCUMENT", sequenceName = "SEQ_SL_DOCUMENT")
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "SEQ_SL_DOCUMENT")
    private Long id;

    @OneToOne
    @JoinColumn(name = "ID_PROPOSE")
    private ProposeORM propose;
    // ...
}
@Entity
@Table(name = "SL_PROPOSE")
public class ProposeORM {
    @Id
    @Column(name = "ID_PROPOSE")
    private Long id;

    @OneToOne(mappedBy = "propose")
    private DocumentORM document;
    // ...

    public ProposeORM(DocumentORM document) {
        super();
        this.id = document.getId();
        this.document = document;
        this.document.setPropositura(this);
    }
}
You'll have to persist the document first, flush the EntityManager to make sure the document has a generated ID, and then persist the propose and set it into the document.
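A minimal sketch of that sequence with a plain EntityManager, following the answer's constructor (transaction and error handling elided; this is an illustration, not the poster's code):
DocumentORM document = new DocumentORM();
em.persist(document);
em.flush();                                       // forces the sequence-generated ID_DOCUMENT to be assigned

ProposeORM propose = new ProposeORM(document);    // constructor copies document.getId() into the propose ID
em.persist(propose);
em.flush();                                       // the owning @JoinColumn on DocumentORM.propose should now be
                                                  // written, so SL_DOCUMENT.ID_PROPOSE receives the same value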
I have three classes: Location, MTFCC, and BorderPoint.
Location has a unidirectional @ManyToOne relationship with MTFCC, which is intended only as a lookup table. No cascading is defined.
Location also has a bidirectional @ManyToOne/@OneToMany with BorderPoint. Since I want all associated BorderPoint objects to be deleted when I delete a Location, I set CascadeType.ALL on the Location side of the relationship.
Unfortunately, an EntityExistsException is being thrown when I attempt to delete a location:
org.apache.openjpa.persistence.EntityExistsException: Cannot delete or update
a parent row: a foreign key constraint fails (`mapmaker`.`BORDERPOINT`,
CONSTRAINT `BORDERPOINT_ibfk_1` FOREIGN KEY (`LOCATIONID`) REFERENCES `LOCATION`
(`LOCATIONID`)) {prepstmnt 21576566 DELETE t0, t1 FROM LOCATION t0 INNER JOIN
MTFCC t1 ON t0.MTFCCID = t1.MTFCCID WHERE (t0.STATEFP = ? AND t1.MTFCCCODE = ?)
[params=?, ?]} [code=1451, state=23000]
[ERROR] FailedObject: DELETE t0, t1 FROM LOCATION t0 INNER JOIN MTFCC t1 ON
t0.MTFCCID = t1.MTFCCID WHERE (t0.STATEFP = ? AND t1.MTFCCCODE = ?)
[java.lang.String]
It looks like it's attempting to delete the associated MTFCC object, which I do NOT want to happen. I do, however, want the associated BorderPoint objects to be deleted.
Here is the code (chopped down a bit):
@SuppressWarnings("unused")
@Entity
@Table(name = "LOCATION")
@DetachedState(enabled = true)
public class Location implements Serializable, IsSerializable, Cloneable {
    private Long id;
    private String stateGeoId;
    private MTFCC mtfcc;
    private List<BorderPoint> borderPointList;

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(name = "LOCATIONID")
    public Long getId() {
        return id;
    }

    @ManyToOne
    @JoinColumn(name = "MTFCCID")
    public MTFCC getMtfcc() {
        return mtfcc;
    }

    @OneToMany(cascade = CascadeType.ALL, mappedBy = "location", fetch = FetchType.EAGER)
    public List<BorderPoint> getBorderPointList() {
        return borderPointList;
    }
}
@Entity
@Table(name = "BORDERPOINT")
@DetachedState(enabled = true)
public class BorderPoint implements Serializable, IsSerializable {
    private Long id;
    private Location location;

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(name = "BORDERID")
    public Long getId() {
        return id;
    }

    @ManyToOne(targetEntity = Location.class)
    @JoinColumn(name = "LOCATIONID")
    public Location getLocation() {
        return location;
    }
}
@Entity
@Table(name = "MTFCC")
public class MTFCC implements Serializable, IsSerializable {
    private Long id;
    private String mtfccCode;

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(name = "MTFCCID")
    public Long getId() {
        return id;
    }
    // etc
}
And, for good measure, here is the deletion code:
@Override
@Transactional
public int removeByStateGeoIdAndMtfcc(String stateGeoId, String mtfccCode) throws RepositoryException {
    EntityManager em = entityManagerFactory.createEntityManager();
    String jpaQuery = "DELETE FROM Location L where L.stateFP = ?1 AND L.mtfcc.mtfccCode = ?2";
    int affectedRows = 0;
    Query query = em.createQuery(jpaQuery).setParameter(1, stateGeoId).setParameter(2, mtfccCode);
    try {
        em.getTransaction().begin();
        affectedRows = query.executeUpdate();
        em.getTransaction().commit();
    } catch (Exception e) {
        //log.debug("Exception: ", e);
        throw new RepositoryException(e);
    }
    em.close();
    return affectedRows;
}
Hopefully I copied all relevant parts... can anyone assist?
You aren't reading the error message correctly. It says that the deletion is forbidden because of the foreign key constraint between BorderPoint and Location.
The cascade delete would work if you used em.remove(location) to delete your Location. Using a delete query like you're doing won't automagically delete the BorderPoints before deleting the location.
Either load them and remove them using em.remove, or execute additional delete queries first to delete the BorderPoints.
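A minimal sketch of the em.remove(...) approach described above, reusing the parameters of the original repository method (an illustration under those assumptions, not the poster's code):
em.getTransaction().begin();
List<Location> locations = em.createQuery(
        "SELECT l FROM Location l WHERE l.stateFP = ?1 AND l.mtfcc.mtfccCode = ?2",
        Location.class)
    .setParameter(1, stateGeoId)
    .setParameter(2, mtfccCode)
    .getResultList();
for (Location location : locations) {
    em.remove(location);   // CascadeType.ALL on borderPointList also removes the BorderPoints
}
em.getTransaction().commit();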