I am having a difficult time understanding JPA's @Transient annotation. I assume that fields annotated with @Transient are kept in a local cache and not persisted to the database. What I basically want to know is: when will such a field be cleaned up from that local cache?
I am using it on an entity to hold an intermediate status, and I read this value in a method after calling an external service. Is this an appropriate use case? If so, what is the lifetime of such a transient field?
@Entity
class Sample {
    @Transient
    String fieldOne;

    transient String otherField;
}
fieldOne does not have the transient keyword, so it is serialized (to/from a cache, the network, a file, or another destination). But JPA will not store it in the database, because the @Transient annotation forbids that.
otherField is not serializable because it has the transient keyword (i.e. after coming back from a cache engine or the network it can/will be null), but it is persisted to the database by JPA's default behaviour.
This is not an academic distinction; it is sometimes useful, typically for values computed from other fields, or for hashed/encrypted/hidden fields.
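To make the difference concrete, here is a sketch of my own (not from the original post; the roundTrip helper and the @Id field are assumptions): after a Java serialization round-trip otherField comes back null, while JPA simply never writes fieldOne to its table.

import java.io.*;
import javax.persistence.*;

@Entity
class Sample implements Serializable {
    @Id
    Long id;

    @Transient
    String fieldOne;             // serialized by Java, ignored by JPA

    transient String otherField; // persisted by JPA, skipped by Java serialization

    // Round-trip through Java serialization: fieldOne survives, otherField comes back null.
    static Sample roundTrip(Sample in) throws IOException, ClassNotFoundException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(in);
        }
        try (ObjectInputStream oin = new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()))) {
            return (Sample) oin.readObject();
        }
    }
}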
I'm playing around with spring-data-jdbc and discovered a problem which I can't solve using Google.
No matter what I try, I just can't push a trivial object into the database (Bean1.java:25):
carRepository.save(new Car(2L, "BMW", "5"));
Both without and with a TransactionManager + @Transactional, the database (apparently) does not commit the record.
The code is written against a Postgres database, but you can just as well use H2 and get the same result.
Here is the (minimalistic) source code:
https://github.com/bitmagier/spring-data-jdbc-sandbox/tree/stackoverflow-question
Can somebody tell me why the car is not inserted into the database?
This is not related to transactions not working.
Instead, it's about Spring Data JDBC considering your instance an existing instance that needs updating (instead of inserting).
You can verify this is the problem by activating logging for org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate. You should see an update but no insert.
By default, Spring Data JDBC considers an entity to be new when its id is of an object type and null, or of a primitive type (e.g. int or long) and 0.
If your entity has an attribute annotated with @Version, that attribute will be used to determine whether the instance is new.
You have the following options in order to make it work:
Set the id to null and configure your database schema so that it will automatically create a new value on insert. After the save your entity instance will contain the generated value from the database.
Note: Spring Data JDBC will set the id even if it is final in your entity.
Leave the id null and set it to the desired value in a BeforeConvert listener.
Let your entity implement Persistable. This allows you to control when an entity is considered new. You'll probably need a listener as well so you can let the entity know it is not new any longer (see the sketch after this list).
Beginning with version 1.1 of Spring Data JDBC you'll also be able to use a JdbcAggregateTemplate to do a direct insert without inspecting the id; see https://jira.spring.io/browse/DATAJDBC-282. Of course, you can do that in a custom method of your repository, as is done in this example: https://github.com/spring-projects/spring-data-examples/pull/441
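Here is a minimal sketch of the Persistable option (my own illustration; the Car class, its fields, and the markNotNew helper are assumptions, not taken from the linked project):

import org.springframework.data.annotation.Id;
import org.springframework.data.annotation.Transient;
import org.springframework.data.domain.Persistable;

class Car implements Persistable<Long> {

    @Id
    private Long id;
    private String make;
    private String model;

    @Transient
    private boolean isNew = true; // not stored; only steers insert vs. update

    Car(Long id, String make, String model) {
        this.id = id;
        this.make = make;
        this.model = model;
    }

    @Override
    public Long getId() {
        return id;
    }

    @Override
    public boolean isNew() {
        return isNew; // save() will INSERT while this is true, UPDATE afterwards
    }

    // e.g. call this from a listener after save/load so the entity is no longer treated as new
    void markNotNew() {
        this.isNew = false;
    }
}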
In our project we are using Kotlin with JPA. All of our entities are immutable, so it is not possible to set fields of our entities directly; you have to create a new instance with the copy method. If you want the changes to be reflected in the database, you must persist the newly created entity with an explicit function call.
In the beginning this approach looked perfect to us. However, we are now having problems: some of our instances change unexpectedly in memory.
val instance1 = repository.findById(entityId)
repository.save(instance1.copy(deletedAt = Instant.now()))
..
..
assertNull(instance1.deletedAt)
In the code snippet above, instance1 is retrieved from the database, its deletedAt field is set via the copy method, and the new instance created by copy is passed to the repository's save method. We don't set any field of instance1; we create a new instance for these changes. However, the result at the assert line is unexpectedly non-null.
It seems there is a conflict between the JPA persistence context (first-level cache) and Kotlin's immutability and copy semantics.
Is anyone facing this problem? Any suggestions or best practices for using JPA with immutable Kotlin entities?
I suspect the problem is that you're ignoring the return value from save(). Its docs say:
Saves a given entity. Use the returned instance for further operations as the save operation might have changed the entity instance completely.
But you're not doing that; you're instead continuing to use the original instance which (as that says) may have changed.
Instead, store the return value from save(), and use that thereafter. (Either by making instance1 a var, or creating a new val and not referring to instance1 afterward.)
(This isn't a Kotlin-specific problem; it is exactly the same in Java. JPA, Spring, and friends work their magic by futzing with the bytecode, so they can do things your code can't, such as changing immutable values. Most of the time you can ignore it, but this case makes it obvious.)
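In Java terms, the fix looks like this (a sketch with assumed names; withDeletedAt stands in for Kotlin's copy):

Sample managed = repository.findById(entityId).orElseThrow();          // Sample, repository, entityId are assumed names
Sample saved = repository.save(managed.withDeletedAt(Instant.now()));  // withDeletedAt is a hypothetical copy-style method
// ...use 'saved' from here on; after save() the state of 'managed' is not guaranteed either way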
Immutable types are not compatible with how JPA works.
JPA works around the concept of a unit of work, which means objects retrieved from the database live in a persistence context (1st level cache) and are discarded once the EntityManager is closed (in a web application, at the end of the HTTP request).
When you use the copy method on an entity you just retrieved from the database, the copied object is considered detached from the current session, meaning that changes to it cannot be tracked by JPA, and the underlying implementation (Hibernate / EclipseLink) has a hard time figuring out which SQL statement needs to be fired (insert, update or delete?).
Things get far more complex when you have a complex object graph with OneToMany associations and cascading options.
So my recommendation, unfortunately, is to avoid immutable types when using JPA.
I have a JPA entity that links to others -- something like this:
@Entity
class LinkRec implements Serializable {
    ...
    @OneToOne
    private OtherEntity otherTable;
    ...
}
So my logic can eventually delete this entity (calling the EntityManager.remove method), and then I want to write to a log file what was done, including fields of the referenced otherTable object. Is this a permitted operation in JPA?
Is this a permitted operation in JPA?
Yes.
What JPA (the underlying JPA provider) does when you invoke remove is just to "mark" the instance as one to be deleted/removed. Whether or not the transaction is committed (and the row deleted from the database), the instance object itself remains the same; any changes to its attributes depend on what you do.
Because you marked the entity as removed, you can no longer refresh the instance's state from the database (by calling EntityManager.refresh); you would get an IllegalArgumentException.
Be aware that, in other cases, you could trip yourself up if you refresh the entity before logging what you want.
I quote a passage from the JPA specification (see the Synchronization to the Database section) that may help you understand the JPA behaviour:
Synchronization to the database does not involve a refresh of any managed entities unless the refresh operation is explicitly invoked on those entities or cascaded to them as a result of the specification of the cascade=REFRESH or cascade=ALL annotation element value
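For illustration, a sketch of the pattern described above (the getter names, em, id and logger are assumptions):

LinkRec rec = em.find(LinkRec.class, id);
OtherEntity other = rec.getOtherTable(); // read the relation while the entity is still managed
em.remove(rec);                          // only marks the instance for removal
logger.info("removed LinkRec {} linked to OtherEntity {}", rec.getId(), other.getId());
// em.refresh(rec) would now throw IllegalArgumentException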
The relevant line in the spec is:
After an entity has been removed, its state (except for generated state) will be that of the entity at the point at which the remove operation was called.
Since this is all I can find on the subject in the spec, I would say that it could vary from implementation to implementation. In my opinion, this makes what you are trying to do dangerous. It may work in one JPA implementation and not another, or work in one version and not after an upgrade.
If I had to guess about implementations, I would say that @OneToOne objects will probably work okay. Where I would worry is with things like @OneToMany. In the case of Hibernate, for example, the collection may be hydrated and in memory, but it may also point to a proxy. If it is a proxy and you call the getter, it will go to the database for the collection and fail to load it because the object is gone.
I read this in the EJB/JPA Book:
"Even if you mark the property as LAZY for a #Basic type, the persistence provider is still allowed to load the property eagerly. This is due to the fact that this feature requires class-level instrumentation. It should also be noted that lazy loading is neither really useful nor a significant performance optimization. It is best practice to eagerly load basic properties."
QUESTION 1)
If I mark a property as LAZY, why is the persistence provider still allowed to load it eagerly? When does this happen, and why? Is this for primitives only?
QUESTION 2)
"The #Basic annotation is the simplest form of mapping for a persistent property. This is the default mapping type for properties which are primitives, primitive wrapper types"
If I don't use a primitive or wrapper (for instance, I use my own class as the type), is the persistence provider still allowed to load the property eagerly?
QUESTION 3)
"You do not need to tell your persistence manager explicitly that you're mapping a basic property because it can usually figure out how to map it to JDBC using the property's type."
As I understand it, this happens when I use primitives or wrappers, doesn't it? And how does it figure out the mapping? Is there any obvious rule?
QUESTION 1) If I mark a property as LAZY, why is the persistence provider still allowed to load it eagerly? When does this happen, and why? Is this for primitives only?
Because of performance considerations: the JPA provider has the right (according to the JPA spec) to decide that it is better to fetch the field eagerly. This also applies to wrapper fields and Strings. It is not specified when this happens, which means it can happen whenever the JPA provider considers it necessary.
QUESTION 2)"The #Basic annotation is the simplest form of mapping for
a persistent property. This is the default mapping type for properties
which are primitives, primitive wrapper types"
If I use does not use primitive or wrapper (for instance I use my
class object), will he persistence provider is still allowed to load
the property eagerly?
Actually yes, also for relationships you have the same rule, although almost always the JPA provider will consider your hint. Of course: when you have a field of type YouClass, you are not allowed to annotate it with #Basic and must use #ManyToOne-like annotations. You will read further about them.
QUESTION 3) "You do not need to tell your persistence manager
explicitly that you're mapping a basic property because it can usually
figure out how to map it to JDBC using the property's type."
As I understand this happens when I use primitives or wrappers, don't
I? And how does it figure out how to map? Is there any obvious rule?
That happens will all types listed in the documentation of the #Basic annotation, not only those that you enumerated. The rule is pretty simple: String types are mapped as VARCHAR/CHAR like columns, number-fields like NUMBER (or DECIMAL) and so further.
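For example, a sketch of how this typically plays out (the entity and field names are my own, not from the book):

import javax.persistence.*;

@Entity
class Document {
    @Id
    Long id;

    String title;                  // implicitly @Basic; mapped to a VARCHAR-like column

    int pageCount;                 // primitive; mapped to an integer column, loaded eagerly

    @Basic(fetch = FetchType.LAZY) // only a hint; the provider may still load it eagerly
    @Lob
    String body;                   // large text; lazy loading is mostly worthwhile for fields like this
}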
I have a class User which holds an email address and a password for authenticating users in my web application. This user is mapped to the database via JPA / EclipseLink.
My question is: how can I prevent JPA from loading the password field back from the database? Since I will access the user object in my web app, I'm uncomfortable, security-wise, with sending the password to the browser.
Is there any way I can prevent loading the field in JPA / EclipseLink? Declaring the field transient is not an option, since I want to store it in the database when I call persist() on the user object.
Thanks,
fredddmadison
JB Nizet has a valid point. Retrieving the password and serializing it in the HTTP response are two separate concerns.
I'm not sure what you're using to serialize your data. If this is a REST API, consider Jackson's @JsonIgnore annotation or EclipseLink MOXy's @XmlTransient equivalent. If this uses Java EL (Facelets, JSPs), you should be able to select only the bean properties of interest.
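A minimal sketch of that approach (field names assumed): the password stays persistent for JPA but is skipped when Jackson writes the JSON response.

import javax.persistence.*;
import com.fasterxml.jackson.annotation.JsonIgnore;

@Entity
class User {
    @Id
    Long id;

    String email;

    @JsonIgnore      // Jackson leaves this out of the serialized response
    String password; // still persisted and loaded by JPA
}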
If you really must do this during retrieval, consider JPQL's/Criteria API's constructor functionality. Ensure that the object has a constructor that accepts the specified parameters, and keep in mind that it won't be managed in the persistence context if it's retrieved in this manner.
SELECT NEW my.package.User(u.id, u.name, u.etc) FROM User u
Alternatively, consider the @PostLoad lifecycle callback.
@PostLoad
private void postLoad() {
    this.password = null;
}
Finally, this might not be the case here, but I would like to reinforce the notion that passwords shouldn't be stored in plaintext. I mention this because returning a hashed, salted password produced with a secure algorithm (bcrypt, multiple-iteration SHA-512, etc.) wouldn't be that big a deal (but still isn't ideal).
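For instance, one way to do that is with Spring Security's crypto module (my own illustration, assuming that dependency is available):

import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder;

BCryptPasswordEncoder encoder = new BCryptPasswordEncoder();
String hash = encoder.encode("s3cret");            // store the hash, never the raw password
boolean matches = encoder.matches("s3cret", hash); // verify at login time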
I have a similar problem, but in my case I have many @OneToMany relationships inside the entity class and some of them are EAGER. When I query this entity it loads all of them, although for the web service I need only some of them.
I tried a TupleQuery, but it's not the solution, because to get the needed @OneToMany relationships I have to join them, which produces many duplicate rows of the main query. That makes the result heavier rather than leaner.