How to handle relations in JPA - jpa

I use Spring Boot 2 with Spring Data.
In a one-to-many relation, when we want to remove the relation in a REST architecture, what is the right way to do it?
The child and the parent continue to exist; only the relation must be removed.
@DeleteMapping(value = "/{id}/child/{childId}")
public void deleteChildRelation(@PathVariable("id") Integer id, @PathVariable("childId") Integer childId) {
    service.deleteChildRelation(id, childId);
}
We can get the parent, remove the child, and save.
Or use the @Query annotation and do something like:
@Modifying
@Query("update Child c set c.parent = null where c.id = :id")
void deleteChildRelation(@Param("id") Long id);

The first approach is the JPA way to do it. It is slower, but it leaves you with a consistent session, employs optimistic locking, and also updates JPA's second-level cache. You should use it if any of that is of value to you.
If you just want the relation to be gone, the second approach is faster and simpler, since it does a single database round trip.
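A minimal sketch of the first approach as a service method, assuming hypothetical Parent/Child entities with a bidirectional mapping and a Spring Data ParentRepository (all names are illustrative, not taken from the question):
import javax.persistence.EntityNotFoundException;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ChildRelationService {

    private final ParentRepository parentRepository;

    public ChildRelationService(ParentRepository parentRepository) {
        this.parentRepository = parentRepository;
    }

    @Transactional
    public void deleteChildRelation(Integer id, Integer childId) {
        // Load the parent together with its children inside the transaction.
        Parent parent = parentRepository.findById(id)
                .orElseThrow(() -> new EntityNotFoundException("Parent " + id + " not found"));

        Child child = parent.getChildren().stream()
                .filter(c -> c.getId().equals(childId))
                .findFirst()
                .orElseThrow(() -> new EntityNotFoundException("Child " + childId + " not found"));

        // Break both sides of the association; the child row itself survives
        // as long as orphanRemoval is not enabled on the @OneToMany mapping.
        parent.getChildren().remove(child);
        child.setParent(null);

        // Dirty checking flushes the FK update at commit; the explicit save just makes the intent visible.
        parentRepository.save(parent);
    }
}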

Related

Is it good practice to use AccessBean or SQL to fetch data from OOTB table in IBM WCS

I want to get data from multiple OOTB WCS tables for which no OOTB REST resource is available. I am using multiple access beans in a data bean to get the data from the tables. Is this good practice, or should we use ServerJDBCHelperAccessBean to hit the database with a single query using joins? I understand that access beans are cached, but there are techniques to cache SQL as well.
Is there any other reason to use AccessBean instead of ServerJDBCHelperAccessBean when fetching data from multiple tables, or should we use ServerJDBCHelperAccessBean and get the data in a single SQL query with joins?
And which of the above approaches will be more expensive?
Thanks
Ankit
There is no hard and fast rule for choosing between the above two methods of database interaction; the developer has to make a logical choice.
AccessBeans
Caching is one of the advantages of access beans. It gives a good performance improvement and is achieved by caching the home objects, since the lookup of home objects is costly. Another point in favour of access beans is the handling of optimistic updates. Your case is to get data (not to update/insert), so you are safe here.
Session Bean
Like access beans, session beans are another way of reading data from the DB when you want to get data from multiple tables. A session bean must extend the BaseJDBCHelper class, for example:
public class TestSessionBean extends com.ibm.commerce.base.helpers.BaseJDBCHelper
        implements SessionBean {

    public Object fetchResults() throws javax.naming.NamingException, SQLException {
        try {
            // get a connection from the WebSphere Commerce data source
            makeConnection();
            PreparedStatement prepStatement = getPreparedStatement("sql to execute");
            ResultSet rs = executeQuery(prepStatement, false);
            // ... read the rows from rs and build the object to return ...
            return null; // placeholder for the mapped result
        } finally {
            closeConnection();
        }
    }
}
Using ServerJDBCHelperAccessBean
This is used when you have to make a DB transaction outside of EJBs. Keep in mind that it is highly recommended to use EJBs for updates/deletes in order to keep overall integrity.
In your case, as far as I understand, it is a select involving multiple tables and you are not keen on the data being strictly in sync (i.e. you are OK with missing an update that happened nanoseconds ago). Hence you can go ahead with the second or third approach.
A good reference:
http://deepakpadmakumar.blogspot.com.au/2012/05/session-beans-and-entity-beans-in-wcs.html

JHipster Role based masking of certain columns

In a JHipster based project, we need to selectively filter out certain columns based on role/user logged in. All users will be able to view/modify most of the columns, but only some privileged users will be able to view/modify certain secure fields/columns.
It looks like the only option to get this done is using EntityListeners. I can use an EntityListener and mask a certain column during PostLoad event. Say for example, I mask the column my_secure_column with XXX and display to the user.
User then changes some other fields/columns (that he has access to) and submits the form. Do I have to again trap the partially filled in entity in PreUpdate event, get the original value for my_secure_column from database and set it before persisting?
All this seems inefficient. I scoured for several hours but couldn't find a specific implementation that best suits this use case.
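For reference, a minimal sketch of the listener approach described above; the Company entity, its mySecureColumn field, and the SecurityUtils role check are stand-ins for whatever the real project uses:
import javax.persistence.PostLoad;

public class SecureColumnListener {

    @PostLoad
    public void maskSecureColumn(Company company) {
        // After load, overwrite the protected value for unprivileged users.
        // Note: if this managed entity is later flushed, the mask itself would be
        // written back unless the original value is restored first, which is
        // exactly the PreUpdate concern raised above.
        if (!SecurityUtils.isCurrentUserInRole("ROLE_ADMIN")) {
            company.setMySecureColumn("XXX");
        }
    }
}
The listener would be registered on the entity with @EntityListeners(SecureColumnListener.class).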
Edit 1: This looks like a first step to achieving this in a slightly better way. Updating Entities with Update Query in Spring Data JPA
I could use specific partial updates like updateAsUserRole, updateAsManagerRole, etc., instead of persisting the whole entity all the time.
@Repository
public interface CompanyRepository extends JpaRepository<Company, Integer> {

    @Modifying(clearAutomatically = true)
    @Query("UPDATE Company c SET c.address = :address WHERE c.id = :companyId")
    int updateAddress(@Param("companyId") int companyId, @Param("address") String address);
}
Column-based security is not an easy problem to solve, especially in combination with JPA.
Ideally you would like to avoid even loading the restricted columns, but since you are selecting entities this is not possible by default, so you have to remove the restricted content by overriding the value after load.
As an alternative you can create a view bean (POJO) and use a JPQL constructor expression. Personally I would use CriteriaBuilder.construct() instead of concatenating a JPQL query, but it is the same principle.
With regard to updating the data, the UI should of course not allow editing of restricted fields. However, you still have to validate on the backend, and I would recommend checking whether the column was modified before calling JPA. Typically you have the modifications in a DTO and would need to load the entity anyway; if a restricted column was modified, you would send an error back. This way you only call JPA after the security check has passed.
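A small sketch of the constructor-expression alternative, assuming a hypothetical CompanyView DTO (including its package name) and a Company entity; the restricted column is simply never selected, so nothing has to be masked afterwards:
public class CompanyView {

    private final Integer id;
    private final String address;

    public CompanyView(Integer id, String address) {
        this.id = id;
        this.address = address;
    }
    // getters omitted for brevity
}

// In a Spring Data repository:
@Query("SELECT new com.example.dto.CompanyView(c.id, c.address) FROM Company c WHERE c.id = :id")
CompanyView findViewById(@Param("id") Integer id);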

Updating the ID of an instance in JPA

I am using JPA annotations (Hibernate implementation), and I want to change the ID of an entity by merging it. Is there any annotation or solution that avoids duplicating and then removing the entity?
This is not possible using JPA, for good reasons:
You have an entity removed from the persistence context and you want to reattach it. How could it possibly be connected to the original row it was modified from if you remove the only way to make that connection? OK, let's assume we store the original id and try to go from there; but since the id is now modifiable, there is zero guarantee that it wasn't also changed by some other process while the entity was detached, making our stored original id useless and causing complete chaos.
You can do workarounds though:
use a native query to modify the row (see the sketch after this list)
don't use this column as your primary key, but instead create a new one with generated sequence values
duplicating and then removing the entity, as you said, is also completely valid and safe, as it happens in the same transaction
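A minimal sketch of the native-query workaround from the first bullet, assuming a hypothetical users table and an injected EntityManager:
@Transactional
public void changeUserId(long oldId, long newId) {
    // A native UPDATE bypasses the entity mapping, so the primary key column can be rewritten directly.
    // (Named parameters in native queries are a Hibernate extension; positional parameters are the portable choice.)
    entityManager.createNativeQuery("UPDATE users SET id = :newId WHERE id = :oldId")
                 .setParameter("newId", newId)
                 .setParameter("oldId", oldId)
                 .executeUpdate();
    // Clear the persistence context so no stale entity with the old id lingers in the first-level cache.
    entityManager.clear();
}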
You can change an entity's id in JPA using JPQL, as in this example:
public void updateUsername(User userToUpdate, String newUserName) {
    EntityManager manager = ConnectionDao.getConnecting();
    manager.getTransaction().begin();
    // A bulk JPQL update bypasses the persistence context and rewrites the key column directly;
    // the WHERE clause restricts the update to the one user being renamed.
    manager.createQuery("update User u set u.username = :newName where u.username = :oldName")
           .setParameter("newName", newUserName)
           .setParameter("oldName", userToUpdate.getUsername())
           .executeUpdate();
    manager.getTransaction().commit();
}

JPA, pattern or anti-pattern: to have both flat and related sets of mappings?

This question concerns using JPA to manage some data where some scenarios benefit from the full object model and others seem to be better implemented by a much flatter model. I'm therefore inclined to create two models. I get the feeling that this is not a good idea but I'm hard-pressed to see exactly why, or what the alternatives may be.
The basic scenario is that there is an entity, let's call it A, which is on the many side of a relationship with entity B. So in the database A has a foreign key field, and in the full object model we see (simplified, getters/setters removed):
public class A {
    public int aKey;
    public B b;
    // more attributes
}

public class B {
    public int bKey;
    public List<A> collectionOfA;
    // and more
}
One particular scenario is handling the arrival of new As into the system. They come from some external source in the form of, say, text files. The insertion code needs to (sketched after this list):
for each CSV record
    get the bKey from the record
    find the B, or manage any error
    create the A, setting the B
    persist
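Sketched below is roughly what that loop looks like against the full object model; the CsvRecord type and the error handling are placeholders:
for (CsvRecord record : records) {
    // find the B, or manage any error
    B b = entityManager.find(B.class, record.getBKey());
    if (b == null) {
        // handle the missing B however the import policy dictates
        continue;
    }
    // create the A, setting the B
    A a = new A();
    a.b = b;
    // ... copy the remaining attributes from the record ...
    entityManager.persist(a);
}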
Now in fact my scenario is more complex: there are several such relationships, so that find/set pairing is repeated several times.
Alternatively, I could create (and in fact have created) a second mapping for the A table:
public class Ainserter {
    public int aKey;
    public int bKey;
    // more attributes
}
Now I just set the two values and persist. This does assume that the DB will have the referential integrity constraints, but with the tooling I'm using that is the case. In this, and in many legacy systems the DB pre-exists and may be accessed from both the new JPA code and other even non-Java code. I therefore don't see a reason to put the referential integrity checking in the JPA code in such simple cases.
I can see that potentially there are opportunities for aspects of the full model to become stale with respect to my insertions, but in a legacy environment there could be insertions happening in the DB itself at any time. So I don't see a new problem here.
I can also see potential for confusion if the same Entity Context were used for both models, but that can be avoided by suitable encapsulation.
Any other thoughts?
Edit:
There is a suggestion from axtavt to use EntityManager.getReference(B.class, bKey) to get the B instance. My understanding is that if I do this, then to conform properly to the JPA programming model I am supposed to set both sides of the relationship, so I would need to visit the "referenced" B object and add my A to its collection.
Edited again:
I was concerned that visiting B would cause a database lookup, so in performance terms I would not get the win. I have it on very good authority that OpenJPA, at least, will in fact not need to "inflate" B if we only access B's key and the collection of As, so getReference() is a good suggestion. It seems reasonable that a well-designed JPA implementation would have such optimisations.
JPA has an EntityManager.getReference() method, which basically combines the approaches you describe.
It takes a primary key and returns a proxy object with that primary key without hitting the database. So you can use that object to initialize the relationship field, exactly as you want to do in your second approach.
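A short sketch of that, reusing the A and B classes from the question; getReference() is standard JPA, everything else is illustrative:
// Obtain a proxy for B with the given primary key; no SELECT is issued at this point.
B b = entityManager.getReference(B.class, bKey);

// Initialise the owning side of the relationship with the proxy.
A a = new A();
a.b = b;

// Strictly conforming code also maintains the inverse side; whether touching the
// collection forces the provider to load B's state depends on the implementation
// (see the OpenJPA remark in the question's second edit).
b.collectionOfA.add(a);

entityManager.persist(a);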

Create new or update existing entity at one go with JPA

I have a JPA entity that has a timestamp field and is distinguished by a complex identifier field. What I need is to update the timestamp in an entity that has already been stored, and otherwise create and store a new entity with the current timestamp.
As it turns out, the task is not as simple as it seems at first sight. The problem is that in a concurrent environment I get a nasty "Unique index or primary key violation" exception. Here's my code:
// Load the existing entity, if any.
Entity e = entityManager.find(Entity.class, id);
if (e == null) {
    // Could not find an entity with the specified id in the database, so create a new one.
    e = entityManager.merge(new Entity(id));
}
// Set the current time...
e.setTimestamp(new Date());
// ...and finally save the entity.
entityManager.flush();
Please note that in this example entity identifier is not generated on insert, it is known in advance.
When two or more threads run this block of code in parallel, they may simultaneously get null from the entityManager.find(Entity.class, id) call, so they will attempt to save two or more entities with the same identifier at the same time, resulting in the error.
I think there are a few solutions to the problem.
Sure I could synchronize this code block with a global lock to prevent concurrent access to the database, but would it be the most efficient way?
Some databases support a very handy MERGE statement that updates an existing row or creates a new one if none exists. But I doubt that OpenJPA (the JPA implementation of my choice) supports it.
Even if JPA does not support SQL MERGE, I could always fall back to plain old JDBC and do whatever I want with the database. But I don't want to leave a comfortable API and mess with a hairy JDBC+SQL combination.
There is a magic trick to fix it using standard JPA API only, but I don't know it yet.
Please help.
You are referring to the transaction isolation of JPA transactions, i.e. what the behaviour of transactions is when they access other transactions' resources.
According to this article:
READ_COMMITTED is the expected default Transaction Isolation level for using [..] EJB3 JPA
This means that - yes, you will have problems with the above code.
But JPA doesn't support custom isolation levels.
This thread discusses the topic more extensively. Depending on whether you use Spring or EJB, I think you can make use of the proper transaction strategy.
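For the Spring case, a hedged sketch of what such a strategy could look like, assuming an injected EntityManager; whether the requested isolation level actually reaches the JDBC connection depends on the transaction manager configuration, and retrying on a violation is only one possible policy:
import java.util.Date;
import org.springframework.transaction.annotation.Isolation;
import org.springframework.transaction.annotation.Transactional;

@Transactional(isolation = Isolation.SERIALIZABLE)
public void createOrUpdateTimestamp(Object id) {
    Entity e = entityManager.find(Entity.class, id);
    if (e == null) {
        // Nothing found: insert a fresh row; a concurrent inserter will make this
        // transaction fail at commit, and the caller can simply retry the call.
        e = new Entity(id);
        entityManager.persist(e);
    }
    // Existing or freshly persisted entity: just bump the timestamp.
    e.setTimestamp(new Date());
}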