I use Spring Boot 2 with Spring Data JPA, Hibernate and Postgres.
I need to reset a sequence on the 1st day of every year, because we use current + '' + sequence id.
Is there a way to reset a sequence with JPA?
A pure JPA way to reset a sequence does not exist, and resetting sequences is not even supported by all databases. That being said, you could do it with a native query (em.createNativeQuery(...).executeUpdate()) or via the stored procedure API if you absolutely must go through JPA for the job.
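As a rough illustration, here is a minimal sketch of that native-query approach wired to Spring's scheduler; the sequence name doc_seq, the class name and the cron expression are placeholders, not taken from the question:

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Transactional;

@Component
public class SequenceResetJob {

    @PersistenceContext
    private EntityManager em;

    // Runs at midnight on January 1st (requires @EnableScheduling on a configuration class).
    @Scheduled(cron = "0 0 0 1 1 *")
    @Transactional
    public void resetSequence() {
        // ALTER SEQUENCE is Postgres-specific; other databases need different SQL.
        em.createNativeQuery("ALTER SEQUENCE doc_seq RESTART WITH 1").executeUpdate();
    }
}

On Postgres, ALTER SEQUENCE ... RESTART WITH 1 resets the counter; if portability matters, the statement has to be adapted per database.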
Related
I am using Spring Boot, specifically Spring Data JPA, to work with the database, and the database manager is Postgres. I need to create a sequence that will be used in the composition of an attribute.
Specifically, my attribute is called classifier (this attribute is not the primary key) and it is formed from a given string plus a consecutive number (which would be the value given by the sequence), for example FMFC-COMP-1234.
Currently I create the sequence directly in Postgres using
CREATE SEQUENCE seq_name START WITH 1;
but I would like to create it from Spring Boot. Any help will be welcome. Thanks in advance.
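One possible direction, sketched under assumptions: the sequence could be created by a schema.sql or Flyway migration that Spring Boot runs at startup, and a small service could then fetch values and build the classifier. The sequence name seq_classifier and the service below are illustrative, not from the question:

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ClassifierService {

    @PersistenceContext
    private EntityManager em;

    @Transactional
    public String nextClassifier(String prefix) {
        // CREATE SEQUENCE IF NOT EXISTS and nextval() are Postgres-specific;
        // the creation could equally live in schema.sql or a Flyway migration.
        em.createNativeQuery("CREATE SEQUENCE IF NOT EXISTS seq_classifier START WITH 1")
          .executeUpdate();
        Number next = (Number) em.createNativeQuery("SELECT nextval('seq_classifier')")
                                 .getSingleResult();
        return prefix + "-" + next.longValue();   // e.g. FMFC-COMP-1234
    }
}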
I have a sequence created using Flyway in Postgres which should start from 10000.
I want to get the next value of the sequence using JPA and not a native query, since I have different database platforms running at different cloud providers.
I'm not able to find a JPA query to get the next value of a sequence; please redirect me to the right page if I am missing something.
Thanks for any help in this area!
P.S.: I found this link, which shows how to do the same with a native query:
postgresql sequence nextval in schema
I don't think this is possible in a direct way.
JPA doesn't know about sequences.
Only the implementation knows about those and utilizes them to create ids.
I see the following options to get it to work anyway:
Create a view in the database with a single row and a single column containing the next value. You can query that with native SQL, which should be the same for all databases since it is a trivial SELECT.
Create a dummy entity that uses the sequence for id generation, save a new instance and let JPA populate the id (see the sketch after this list).
A horrible workaround, but pure JPA.
Bite the bullet and create a simple class that provides the correct native SQL statement for the current environment and executes it via JdbcTemplate.
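A minimal sketch of the dummy-entity option, assuming the sequence from the question exists and a throwaway table backing the entity is acceptable; the class and sequence names are made up for illustration:

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.SequenceGenerator;

@Entity
public class SequenceHolder {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "seqGen")
    // allocationSize = 1 so the provider asks the database for every single value
    // instead of handing out cached blocks.
    @SequenceGenerator(name = "seqGen", sequenceName = "my_seq", allocationSize = 1)
    private Long id;

    public Long getId() {
        return id;
    }
}

Persisting a new SequenceHolder and reading getId() afterwards then yields the next sequence value; the rows themselves can be discarded.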
Context: a Java EE application should minimize the risk of data loss as much as possible. This can come at the cost of performance for now, as the load on the server is not that high.
My understanding is that GenerationType.IDENTITY performs an immediate insert operation into the database when the Object is instantiated, whereas GenerationType.SEQUENCE performs a read operation on the database to retrieve the next value of a database sequence. The source used for this information is from here. Please correct me if my understanding is wrong.
Given my understanding, should GenerationType.IDENTITY be preferred over GenerationType.SEQUENCE in this context since data is persisted to the database earlier?
Hibernate persists data immediately for entities marked with GenerationType.IDENTITY, as Hibernate needs to know the primary key value to maintain the session. But this has a flaw: with this generation type Hibernate cannot use many of its optimization techniques.
For entities marked with GenerationType.SEQUENCE, Hibernate internally fires a select against the sequence when an object is persisted using session.save() or session.persist() and populates the primary key. It can then write the data when the session is flushed/committed.
Moreover, since the entity is already populated with its primary key, Hibernate can internally apply optimizations to these entities, such as JDBC batching.
More information can be seen here - https://www.thoughts-on-java.org/jpa-generate-primary-keys/
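For illustration, a hedged sketch of what a sequence-backed id mapping might look like; the entity, sequence name and allocation size are assumptions, not taken from the question:

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.SequenceGenerator;

@Entity
public class PurchaseOrder {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "orderSeq")
    @SequenceGenerator(name = "orderSeq", sequenceName = "order_seq", allocationSize = 50)
    private Long id;

    // With GenerationType.IDENTITY the same mapping would force an immediate
    // INSERT per entity and prevent Hibernate from batching inserts.
}

Combined with a property such as spring.jpa.properties.hibernate.jdbc.batch_size=50 (in a Spring Boot setup), Hibernate can then group the inserts into JDBC batches.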
We have a Spring Boot project that uses Spring-JPA for data access. We have a couple of tables where we create/update rows once (or a few times, all within minutes). We don't update rows that are older than a day. These tables (like an audit table) can get very large, and we want to use Postgres' table partitioning features to help break up the data by month. So the main table always has the current calendar month's data, but if a query requires retrieval from previous months it would somehow read it from the other partitions.
Two questions:
1) Is this a good idea for archiving older data but still leave it query-able?
2) Does Spring-JPA work with partitioned tables? Or do we have to figure out how to break up the query, do native queries and concatenate the result set?
Thanks.
I have been working with Postgres partitioning with Hibernate & Spring JPA for some time, so I think I can try to answer your questions.
1) Is this a good idea for archiving older data but still leave it query-able?
If you are applying indexes and not re-indexing the table frequently, then partitioning the data may give faster query results.
You can also use the clustered index feature in Postgres to fetch the data faster.
Because the tables with older data will not be updated, a clustered index will improve performance nicely.
2) Does Spring-JPA work with partitioned tables? Or do we have to figure out how to break up the query, do native queries and concatenate the result set?
Spring JPA works out of the box with partitioned tables. It retrieves the data from the master as well as the child tables and returns the combined result set.
Note: issue with partitioned tables
The only issue you will face with a partitioned table is insertion.
Let me explain: when you partition a table, you create a trigger on the master table, and that trigger returns NULL. This is the key behind the insertion issue with partitioned tables in Spring JPA / Hibernate.
When you try to insert a row using Spring JPA or Hibernate, you will see the error below:
Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1
To overcome this issue you need to override the implementation of the batching Batcher.
In Hibernate you can provide a custom implementation of the batcher factory using the configuration below:
hibernate.jdbc.factory_class=path.to.my.batcher.factory.implementation
In Spring JPA you can achieve the same with a custom implementation of the batch builder using the configuration below:
hibernate.jdbc.batch.builder=path.to.my.batch.builder.implementation
References:
Custom Batch Builder/Batch in Spring-JPA
Demo Application
In addition to @Anil Agrawal's answer:
If you are using Spring Boot 2 then you need to define the custom batch builder using the following property:
spring.jpa.properties.hibernate.jdbc.batch.builder=net.xyz.jdbc.CustomBatchBuilder
You do not have to break down the JDBC query with Postgres 11+.
If you execute a select on the main table with plain JDBC, the database returns the aggregated results from the partitions.
In other words, the work is done by the Postgres DB, so Spring JPA will simply get the result and map it to objects as if there were no partitioning.
For inserts to work in a partitioned table you need to make sure that your partitions are already created; I think Spring Data will not create them for you.
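As one possible way to keep partitions ahead of the data, here is a hedged sketch of a scheduled job that pre-creates the next monthly partition of a declaratively partitioned table (Postgres 11+); the table name audit, the naming scheme and the schedule are assumptions:

import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class PartitionMaintenanceJob {

    private final JdbcTemplate jdbcTemplate;

    public PartitionMaintenanceJob(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // Runs on the 25th of every month and creates the partition for the following month.
    @Scheduled(cron = "0 0 1 25 * *")
    public void createNextMonthPartition() {
        LocalDate start = LocalDate.now().plusMonths(1).withDayOfMonth(1);
        LocalDate end = start.plusMonths(1);
        String suffix = start.format(DateTimeFormatter.ofPattern("yyyy_MM"));
        // CREATE TABLE ... PARTITION OF is the declarative partitioning syntax (Postgres 10+).
        jdbcTemplate.execute(String.format(
                "CREATE TABLE IF NOT EXISTS audit_%s PARTITION OF audit "
                        + "FOR VALUES FROM ('%s') TO ('%s')",
                suffix, start, end));
    }
}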
I am supposed to come up with a plan to encrypt our existing personal information and be able to retrieve it and show it back to the user through the application if required. This is not PCI information or passwords, so it requires a two-way journey, i.e. both encrypt and decrypt (and, to keep things simple for the moment, no tokenization and no public/private key encryption either), as well as the ability to query the information through JPA if required.
Here is what I have: application server JBoss 7, database Postgres, Java EE 7 and JPA 2.
What have I thought of? Converting the existing data in Postgres is doable with pgcrypto using
ALTER TABLE table_name ALTER column_name TYPE bytea USING pgp_sym_encrypt(column_name, key, [other options])
If there are any dependencies they can be handled, but there are not a lot. Decryption for searching or displaying would be
SELECT pgp_sym_decrypt(column_name, key) AS column_name FROM table_name WHERE ...
The entity files can be handled by just changing their data type and so on.
Where am I stuck? The system currently uses JPA 2 (Hibernate implementation) for queries that take advantage of the fields being in plain text. If I update the database, the existing queries would fail; they would need to be rewritten anyway to handle encrypted data. But I would have to use native queries in JPA instead of JPQL, which could lead to problems in the future in case we change our database.
So the question: is there any way in JPA or JPQL, other than native calls, to query this data? I did have a look at Jasypt, but the documentation says it is for Hibernate, and it looks like it pertains specifically to encryption and decryption only.
When I insert new data into the table via JPA, where do I encrypt it? Should I encrypt the data on the Java side using some cipher algorithm and then insert the bytes into the table column, or is there an elegant JPA way of doing this (a sketch follows below)? Also note that even if I make sure the encryption algorithm used in pgcrypto and the one from the Java library are the same, will they cause any inconsistency problems when we try to compare that data?
Are there better approaches to the problem? I do not mean in terms of security for now, but in terms of ease of implementation and future robustness. We have a lot of old code that we have recently updated to the JSR 299 specification, and I would like to keep updates to a minimum.
I only seek an answer for the first bullet point; the other two are additional details in case someone experienced wants to chip in.
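On the "where do I encrypt on insert" question above, one commonly used option is a JPA 2.1 AttributeConverter (available if the Java EE 7 / JPA 2.1 stack is really in place). The converter below is a deliberately simplified sketch: the hard-coded key, the AES/ECB mode and the class name are placeholders, and a real setup would need proper key management and an authenticated mode such as GCM.

import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import javax.persistence.AttributeConverter;
import javax.persistence.Converter;

@Converter
public class EncryptedStringConverter implements AttributeConverter<String, String> {

    // Placeholder key; in practice load it from a vault or server configuration.
    private static final SecretKeySpec KEY =
            new SecretKeySpec("0123456789abcdef".getBytes(StandardCharsets.UTF_8), "AES");

    @Override
    public String convertToDatabaseColumn(String attribute) {
        if (attribute == null) {
            return null;
        }
        try {
            Cipher cipher = Cipher.getInstance("AES/ECB/PKCS5Padding");
            cipher.init(Cipher.ENCRYPT_MODE, KEY);
            return Base64.getEncoder()
                    .encodeToString(cipher.doFinal(attribute.getBytes(StandardCharsets.UTF_8)));
        } catch (Exception e) {
            throw new IllegalStateException("Encryption failed", e);
        }
    }

    @Override
    public String convertToEntityAttribute(String dbData) {
        if (dbData == null) {
            return null;
        }
        try {
            Cipher cipher = Cipher.getInstance("AES/ECB/PKCS5Padding");
            cipher.init(Cipher.DECRYPT_MODE, KEY);
            return new String(cipher.doFinal(Base64.getDecoder().decode(dbData)),
                    StandardCharsets.UTF_8);
        } catch (Exception e) {
            throw new IllegalStateException("Decryption failed", e);
        }
    }
}

Annotating the entity field with @Convert(converter = EncryptedStringConverter.class) then makes JPA encrypt on insert and decrypt on load transparently. Note that JPQL equality comparisons on such a column only work if the encryption is deterministic (as the ECB sketch above is, unlike pgp_sym_encrypt, which produces different ciphertext each time), and deterministic encryption is itself a security trade-off.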