I have two tables with identical schemas and data in MySQL: one uses the InnoDB engine and the other uses MyISAM.
I have mapped a JPA entity to the InnoDB table and use a trigger to keep the MyISAM one updated.
When querying records I want JPA to read from the MyISAM version of the table, but at the same time I don't want to create a second entity for the MyISAM one, because that would increase maintenance and complexity.
I would create a @MappedSuperclass and have a subclass for each table.
Otherwise, if you are using EclipseLink, you could map to the table you wish to read from and then override the insert, update, and delete operations using a DescriptorCustomizer through the DescriptorQueryManager.
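A minimal sketch of the @MappedSuperclass approach, with hypothetical class and table names (in a real project each class would live in its own file):

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.MappedSuperclass;
import javax.persistence.Table;

// All shared mapping lives here once, so nothing is duplicated between the two entities.
@MappedSuperclass
public abstract class CustomerBase {

    @Id
    protected Long id;

    protected String name;

    // getters and setters omitted for brevity
}

// Write-side entity bound to the InnoDB table.
@Entity
@Table(name = "customer_innodb")
public class CustomerWrite extends CustomerBase {
}

// Read-side entity bound to the MyISAM copy that the trigger keeps in sync.
@Entity
@Table(name = "customer_myisam")
public class CustomerRead extends CustomerBase {
}

Queries that should hit the MyISAM copy then target CustomerRead, while writes go through CustomerWrite.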
I'm using a DB view that pulls values from multiple tables, using a PostgreSQL DB, mapped by Hibernate and managed by JPA.
I want to make a join with the view right after updating a value in a table column that is mapped into the view.
My question is: how can I guarantee that the view is instantly up to date? Also, does that happen only after Hibernate finishes the transaction?
Is this revinfo table obligatory for Envers to even work properly? Or can I work around it using custom revision entities, without the need to create revinfo in the database at all?
I'm asking because I'm developing a system using Envers with Spring and a Postgres database, but my superiors don't want a table to be created in the public schema.
We have a Spring Boot project that uses Spring-JPA for data access. We have a couple of tables where we create/update rows once (or a few times, all within minutes). We don't update rows that are older than a day. These tables (like an audit table) can get very large, and we want to use Postgres' table partitioning features to break up the data by month. So the main table always holds the current calendar month's data, but if a query requires retrieval from previous months it would somehow read it from the other partitions.
Two questions:
1) Is this a good idea for archiving older data while still leaving it queryable?
2) Does Spring-JPA work with partitioned tables? Or do we have to figure out how to break up the query, run native queries, and concatenate the result sets?
Thanks.
I have been working with Postgres partitioning, Hibernate, and Spring JPA for some time now, so I think I can try to answer your questions.
1) Is this a good idea for archiving older data while still leaving it queryable?
If you are applying indexes and not re-indexing the tables frequently, then partitioning the data may give you faster query results.
You can also use Postgres' clustering feature (the CLUSTER command) to fetch the data faster.
Because the tables holding older data are no longer updated, clustering them on an index can improve performance noticeably.
2) Does Spring-JPA work with partitioned tables? Or do we have to figure out how to break up the query, run native queries, and concatenate the result sets?
Spring JPA works out of the box with partitioned tables. It retrieves the data from the master as well as the child tables and returns the combined result set.
Note: the issue with partitioned tables
The only issue you will face with a partitioned table is insertion.
Let me explain: with trigger-based partitioning, you create a trigger on the master table, and that trigger returns NULL. This is the root of the insertion problem with partitioned tables under Spring JPA / Hibernate.
When you try to insert a row using Spring JPA or Hibernate, you will hit the error below:
Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1
To overcome this issue, you need to override the implementation of the batching batcher.
In Hibernate you can provide a custom batcher factory implementation using the configuration below:
hibernate.jdbc.factory_class=path.to.my.batcher.factory.implementation
In Spring JPA you can achieve the same with a custom batch builder implementation using the configuration below:
hibernate.jdbc.batch.builder=path.to.my.batch.builder.implementation
References:
Custom Batch Builder/Batch in Spring-JPA
Demo Application
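As an aside (separate from the batcher override this answer describes), there is also a lighter per-entity workaround for the same row-count failure: Hibernate's @SQLInsert annotation with the result check disabled. A minimal sketch with hypothetical table and column names:

import org.hibernate.annotations.ResultCheckStyle;
import org.hibernate.annotations.SQLInsert;

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

// Hypothetical audit entity; verify the parameter order against the INSERT
// that Hibernate logs, since the custom SQL must bind columns in that order.
@Entity
@Table(name = "audit_event")
@SQLInsert(
        sql = "insert into audit_event (created_on, payload, id) values (?, ?, ?)",
        // Skip the JDBC row-count verification that the partition trigger
        // defeats by returning NULL (row count 0 instead of the expected 1).
        check = ResultCheckStyle.NONE
)
public class AuditEvent {

    @Id
    private Long id;

    @Column(name = "created_on")
    private java.time.Instant createdOn;

    @Column(name = "payload")
    private String payload;

    // getters and setters omitted for brevity
}

This avoids touching the batcher, at the cost of maintaining the INSERT statement by hand for each affected entity.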
In addition to @Anil Agrawal's answer:
If you are using Spring Boot 2, you need to define the custom batcher using the following property:
spring.jpa.properties.hibernate.jdbc.batch.builder=net.xyz.jdbc.CustomBatchBuilder
With Postgres 11+ you do not have to break up the JDBC query.
If you execute a SELECT on the main table with plain JDBC, the DB returns the aggregated results from the partitions.
In other words, the work is done by the Postgres DB, so Spring JPA simply gets the result and maps it to objects as if there were no partitioning.
For inserts to work in a partitioned table, you need to make sure that your partitions are already created; I think Spring Data will not create them for you.
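To illustrate the point, a minimal sketch (hypothetical entity, table, and repository names, each type in its own file) of reading a Postgres 11+ partitioned table through Spring Data JPA exactly as if it were a single table:

import java.time.Instant;
import java.util.List;

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

import org.springframework.data.jpa.repository.JpaRepository;

// Mapped to the partitioned parent table only; the monthly partitions never
// appear anywhere in the mapping.
@Entity
@Table(name = "audit_event")
public class AuditEvent {

    @Id
    private Long id;

    private Instant createdAt;

    // getters and setters omitted for brevity
}

// A plain derived query: Postgres prunes and scans the relevant monthly
// partitions itself, so nothing partition-specific is needed here.
public interface AuditEventRepository extends JpaRepository<AuditEvent, Long> {

    List<AuditEvent> findByCreatedAtBetween(Instant from, Instant to);
}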
I was used to working with Zend Db Table Relationships with MySQL. I declared $_dependentTables and $_referenceMap in the table classes as described in the manual, and was then able to use functions like findDependentRowset(), findParentRow(), etc.
Now I use PostgreSQL, which can define the relations (REFERENCES) between tables right in the database.
The manual states:
Skip declaration of $_dependentTables if you use referential integrity constraints in the RDBMS server to implement cascading operations
which should be the case with Postgres. Despite this, I am not able to get it working. Unless I declare the $_referenceMap (but this shouldn't be needed!), I keep getting the error:
No reference from table ... to table ...
The question is: is it possible to use references declared in Postgres in Zend Db without (re-)declaring them in $_referenceMap? And how: does ZF load them from Postgres into the reference map? If so, why am I getting the error?
My reading of the linked documentation is that these two features address different things.
The recommendation to use DRI in the DB is about specifying ON UPDATE CASCADE and ON DELETE CASCADE operations in the database rather than telling Zend to cascade.
What you are doing is something different: using the referential integrity mapping to fetch related rows. In that case, it looks like Zend requires that you declare the map yourself.
We have an application that creates new tables at runtime, but always with the same table schema. The only thing that varies from one of these tables to the next is the table name. Is it possible to access these tables using Entity Framework, specifying which table to access by name?
Entity Framework is not designed for DDL; it's an ORM tool for data access. You would want to use a simple ADO.NET query to create/drop the table.
Creating and dropping tables for every user session will make your log file grow very big very fast. I would consider carefully the reasons you think this is necessary. If the data is temporary, why not save the Session ID in each row and truncate the table on a daily basis?
UPDATE:
No, not really. The Entity Data Model is not dynamic; it's a static XML document that describes the structure of the database. If you want to interact with a table with a dynamic name, you're going to have to stick to "classic" ADO.NET.
With LINQ to SQL, I guess it would be possible with a stored procedure taking the table name as a parameter.
A nice post about stored procedures in LINQ to SQL: http://weblogs.asp.net/scottgu/archive/2007/08/16/linq-to-sql-part-6-retrieving-data-using-stored-procedures.aspx
I don't know if that feature exists in EF.