I tried to use the code below to ensure referential integrity in my database, but it does not seem to work with greenDAO: I can still delete all the records. On the other hand, when I try to delete in SQLite Manager, the trigger fires and the delete operation fails.
DevOpenHelper helper = new DaoMaster.DevOpenHelper(this, Common.DBNAME, null) {
    @Override
    public void onCreate(SQLiteDatabase db) {
        super.onCreate(db);
        // Abort the delete if any product still references the group being deleted
        db.execSQL("CREATE TRIGGER grupe_artikli BEFORE DELETE ON groups " +
            "FOR EACH ROW BEGIN " +
            "SELECT CASE " +
            "WHEN ((SELECT id_group FROM products WHERE id_group = OLD._id) IS NOT NULL) " +
            "THEN RAISE(ABORT, 'error') " +
            "END; END;");
        DaoSession session = new DaoMaster(db).newSession();
    }
};
Does greenDAO support triggers, or is there another way to maintain referential integrity in the database?
greenDAO has no built-in trigger support. However, I cannot think of any reason why your approach should not work: greenDAO does not hijack the database in any way, so you should be able to work with it directly, just as if you were not using greenDAO at all.
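If you want to verify that the trigger is actually installed, you can exercise it directly on the SQLiteDatabase that greenDAO wraps. This is only a sketch, assuming the groups/products schema from your trigger and that some product still references group 1:
SQLiteDatabase db = helper.getWritableDatabase();
try {
    // should be rejected by the trigger while a product still references the group
    db.delete("groups", "_id = ?", new String[]{"1"});
} catch (SQLiteConstraintException e) {
    // RAISE(ABORT, 'error') typically surfaces as a constraint exception on Android
    Log.d("TriggerCheck", "Delete was blocked by the trigger", e);
}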
This is the MySQL table.
create table Customer
(
id int auto_increment primary key,
birth date null,
createdTime time null,
updateTime datetime(6) null
);
This is my Java code:
@Before
public void init() {
    this.entityManagerFactory = Persistence.createEntityManagerFactory("jpaLearn");
    this.entityManager = this.entityManagerFactory.createEntityManager();
    this.entityTransaction = this.entityManager.getTransaction();
    this.entityTransaction.begin();
}
@Test
public void persistentTest() {
    this.entityManager.setFlushMode(FlushModeType.COMMIT); // does not work
    for (int i = 0; i < 1000; i++) {
        Customer customer = new Customer();
        customer.setBirth(new Date());
        customer.setCreatedTime(new Date());
        customer.setUpdateTime(new Date());
        this.entityManager.persist(customer);
    }
}
@After
public void destroy() {
    this.entityTransaction.commit();
    this.entityManager.close();
    this.entityManagerFactory.close();
}
When I was reading the JPA wikibook, it said: "This means that when you call persist, merge, or remove, the database DML INSERT, UPDATE, DELETE is not executed until commit, or until a flush is triggered."
But while my code was running I watched the MySQL log, and I found that every call to persist makes MySQL execute the SQL immediately. I also checked with Wireshark, and each call causes a request to the database.
I remember that JPA's saveAll method can send SQL statements to the database in batches. If I want to insert 10000 records, how can I improve the efficiency?
My answer below assumes that you use Hibernate as your JPA implementation. Hibernate does not enable batching by default, which means that it sends a separate SQL statement for each insert/update operation.
You should set the hibernate.jdbc.batch_size property to a number bigger than 0.
It is better to set this property in your persistence.xml file, where you keep your JPA configuration, but since you have not posted it in the question, below it is set directly on the EntityManagerFactory.
@Before
public void init() {
    Properties properties = new Properties();
    properties.put("hibernate.jdbc.batch_size", "5");
    this.entityManagerFactory = Persistence.createEntityManagerFactory("jpaLearn", properties);
    this.entityManager = this.entityManagerFactory.createEntityManager();
    this.entityTransaction = this.entityManager.getTransaction();
    this.entityTransaction.begin();
}
Then by observing your logs you should see that the Customer records are persisted in the database in batches of 5.
For further reading please check: https://www.baeldung.com/jpa-hibernate-batch-insert-update
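Batching also works best when the persistence context does not grow without bound, so it is common to flush and clear periodically inside the loop. A minimal sketch, reusing the loop from your test and assuming the flush interval matches hibernate.jdbc.batch_size:
int batchSize = 5; // keep in sync with hibernate.jdbc.batch_size
for (int i = 0; i < 1000; i++) {
    Customer customer = new Customer();
    customer.setBirth(new Date());
    customer.setCreatedTime(new Date());
    customer.setUpdateTime(new Date());
    this.entityManager.persist(customer);
    if ((i + 1) % batchSize == 0) {
        // push the current batch to the database and detach the entities
        this.entityManager.flush();
        this.entityManager.clear();
    }
}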
You should enable batching for Hibernate:
spring.jpa.properties.hibernate.order_inserts=true
spring.jpa.properties.hibernate.order_updates=true
spring.jpa.properties.hibernate.jdbc.batch_size=20
and add the driver's batch-rewrite flag at the end of your connection string (rewriteBatchedStatements=true for MySQL, reWriteBatchedInserts=true for PostgreSQL).
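For example, a hypothetical MySQL URL for the jpaLearn unit above (host, port, and schema are placeholders):
spring.datasource.url=jdbc:mysql://localhost:3306/jpaLearn?rewriteBatchedStatements=true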
Yesterday I ran into this question: how can JPA run DDL SQL with a dynamic table name?
Usually I only use DQL and DML such as select, insert, update, and delete, for example:
public interface UserRepository extends JpaRepository<User, Integer> {
    @Query(value = "select a.* from user a where a.username = ? and a.password = ?", nativeQuery = true)
    List<User> loginCheck(String username, String password);
}
But when I needed to run the DDL SQL below,
String sql = "create table " + tableName + " as select * from user where login_flag = '1'";
I could not find a way to do this with JPA (or the EntityManager).
In the end I used plain JDBC to run the DDL SQL, but I think it's ugly...
Connection conn = null;
PreparedStatement ps = null;
String sql = "create table " + tableName + " as select * from user where login_flag = '1'";
try {
    Class.forName(drive);
    conn = DriverManager.getConnection(url, username, password);
    ps = conn.prepareStatement(sql);
    ps.executeUpdate();
    ps.close();
    conn.close();
} catch (Exception e) {
    e.printStackTrace();
}
So, can JPA run DDL SQL (such as CREATE/DROP/ALTER) with a dynamic table name in an easy way?
Your question seems to consist of two parts
The first part
can jpa run DDL sql
Sure, just use entityManager.createNativeQuery("CREATE TABLE ...").executeUpdate(). This is probably not the best idea (you should be using a database migration tool like Flyway or Liquibase for DB creation), but it will work.
Note that you might run into some issues, e.g. different RDBMSes have different requirements regarding transactions around DDL statements, but they can be solved quite easily most of the time.
You're probably wondering how to get hold of an EntityManager when using Spring Data. See here for an explanation on how to create custom repository fragments where you can inject virtually anything you need.
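As a minimal sketch (assuming Spring Data JPA; the fragment and method names below are illustrative, not from your code):
// Hypothetical custom repository fragment; requires the javax.persistence and Spring annotations on the classpath.
public interface ArchiveRepositoryCustom {
    void createSnapshotTable(String tableName);
}

public class ArchiveRepositoryCustomImpl implements ArchiveRepositoryCustom {

    @PersistenceContext
    private EntityManager entityManager;

    @Transactional
    public void createSnapshotTable(String tableName) {
        // Identifiers cannot be bound as query parameters, so the name is concatenated;
        // validate it (e.g. against a whitelist) before doing this in production.
        entityManager.createNativeQuery(
                "create table " + tableName + " as select * from user where login_flag = '1'")
            .executeUpdate();
    }
}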
The second part
with dynamic tableName
JPA only supports parameters in certain clauses within the query, and identifiers are not one of them. You'll need to use string concatenation, I'm afraid.
Why dynamic table names, though? It's not like your entity definitions are going to change at runtime. Static DDL scripts are generally less error-prone.
Before I submit the form data (with JSF/PrimeFaces), I have to check whether a record with that name already exists. The view-scoped bean looks like this:
public void updateProfileListener(ActionEvent actionEvent) {
    if (supplierService.isExistSupplierName(supplier.getName(), true)) return;
    // else saveDate();
}
and the database check code looks like this:
userDatabase.createQuery("select c from Supplier c where c.name = :name")
    .setParameter("name", name)
    .getResultList();
It is just a regular SQL query for checking whether the name already exists, but the new data from the form still gets saved anyway. Can anyone tell me what is happening?
BalusC is right!
Now, if you want to find only one supplier, you may try adding other conditions to the where clause (well, I don't know your business logic :D).
But if you really want a result list, I think it is better to use 'like':
select c from Supplier c where c.name like ...
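A minimal sketch of what that check could look like; the signature mirrors isExistSupplierName from the question, but the body is an assumption (userDatabase is treated as an EntityManager):
public boolean isExistSupplierName(String name, boolean ignoreCase) {
    // Hypothetical implementation: match with 'like' and report whether anything came back
    String jpql = ignoreCase
        ? "select c from Supplier c where lower(c.name) like lower(:name)"
        : "select c from Supplier c where c.name like :name";
    List<Supplier> matches = userDatabase.createQuery(jpql, Supplier.class)
        .setParameter("name", "%" + name + "%")
        .getResultList();
    return !matches.isEmpty();
}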
I don't understand how ADO.NET recognizes a concurrency violation unless it's doing something beyond what I'm telling it to do, inside its "black box".
My update query in SQL Server 2000 does something like the following example, which is simplified; if the rowversion passed to the stored proc by the client doesn't match the rowversion in the database, the where-clause will fail, and no rows will be updated:
create proc UpdateFoo
    @rowversion timestamp OUTPUT,
    @id int,
    @foodescription varchar(255)
as
    UPDATE FOO set description = @foodescription
    where id = @id and rowversion = @rowversion;
    if @@ROWCOUNT = 1
        select @rowversion = rowversion from foo where id = @id; -- return the new rowversion to the caller
I create a SqlCommand object and populate the parameters and assign the command object to the SqlDataAdapter's UpdateCommand property. Then I invoke the data adapter's Update method.
There should indeed be a concurrency error because I deliberately change the database row in order to force a new rowversion. But how does ADO.NET know this? Is it doing something more than executing the command?
In the RowUpdated event of the SqlDataAdapter there will be a Concurrency error:
mySqlDataAdapter.RowUpdated += (sender, evt) =>
{
    if ((evt.Status == UpdateStatus.Continue) && (evt.StatementType == StatementType.Update))
    {
        // update succeeded
    }
    else
    {
        // update failed, check evt.Errors
    }
};
Is ADO.NET comparing the rowversions? Is it looking at @@ROWCOUNT?
I'm setting up a new version of my application in a demo server and would love to find a way of resetting the database daily. I guess I can always have a cron job executing drop and create queries but I'm looking for a cleaner approach. I tried using a special persistence unit with drop-create approach but it doesn't work as the system connects and disconnects from the server frequently (on demand).
Is there a better approach?
H2 supports a special SQL statement to drop all objects:
DROP ALL OBJECTS [DELETE FILES]
If you don't want to drop all tables, you might want to use truncate table:
TRUNCATE TABLE
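A minimal sketch of running that statement over plain JDBC (java.sql); the in-memory URL and credentials are placeholders:
try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:demo", "sa", "");
     Statement st = conn.createStatement()) {
    st.execute("DROP ALL OBJECTS"); // add DELETE FILES for a file-based database
}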
As this response is the first Google result for "reset H2 database", I am posting my solution below.
After each JUnit test:
Disable integrity constraints
List all tables in the (default) PUBLIC schema
Truncate all tables
List all sequences in the (default) PUBLIC schema
Reset all sequences
Reenable the constraints.
@After
public void tearDown() {
    try {
        clearDatabase();
    } catch (Exception e) {
        Fail.fail(e.getMessage());
    }
}
public void clearDatabase() throws SQLException {
    Connection c = datasource.getConnection();
    Statement s = c.createStatement();

    // Disable FK
    s.execute("SET REFERENTIAL_INTEGRITY FALSE");

    // Find all tables and truncate them
    Set<String> tables = new HashSet<String>();
    ResultSet rs = s.executeQuery("SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES where TABLE_SCHEMA='PUBLIC'");
    while (rs.next()) {
        tables.add(rs.getString(1));
    }
    rs.close();
    for (String table : tables) {
        s.executeUpdate("TRUNCATE TABLE " + table);
    }

    // Idem for sequences
    Set<String> sequences = new HashSet<String>();
    rs = s.executeQuery("SELECT SEQUENCE_NAME FROM INFORMATION_SCHEMA.SEQUENCES WHERE SEQUENCE_SCHEMA='PUBLIC'");
    while (rs.next()) {
        sequences.add(rs.getString(1));
    }
    rs.close();
    for (String seq : sequences) {
        s.executeUpdate("ALTER SEQUENCE " + seq + " RESTART WITH 1");
    }

    // Enable FK
    s.execute("SET REFERENTIAL_INTEGRITY TRUE");
    s.close();
    c.close();
}
The other solution would be to recreate the database at the beginning of each test, but that might be too slow for a big database.
There is special syntax in Spring for database manipulation within unit tests:
@Sql(scripts = "classpath:drop_all.sql", executionPhase = Sql.ExecutionPhase.AFTER_TEST_METHOD)
@Sql(scripts = {"classpath:create.sql", "classpath:init.sql"}, executionPhase = Sql.ExecutionPhase.BEFORE_TEST_METHOD)
public class UnitTest {}
In this example we execute the drop_all.sql script (where we drop all required tables) after every test method, and we execute the create.sql script (where we create all required tables) and the init.sql script (where we initialize all required tables) before each test method.
The command: SHUTDOWN
You can execute it using
RunScript.execute(jdbc_url, user, password, "classpath:shutdown.sql", "UTF8", false);
I run it every time the test suite is finished, using @AfterClass.
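A sketch of that teardown, assuming shutdown.sql on the classpath contains just the SHUTDOWN statement; JDBC_URL, USER, and PASSWORD are placeholder constants:
@AfterClass
public static void shutdownDatabase() throws SQLException {
    // org.h2.tools.RunScript stops the in-memory instance once the whole suite has run
    RunScript.execute(JDBC_URL, USER, PASSWORD, "classpath:shutdown.sql", "UTF8", false);
}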
If you are using Spring Boot, see this Stack Overflow question.
Set up your data source. I don't have any special close-on-exit settings.
datasource:
  driverClassName: org.h2.Driver
  url: "jdbc:h2:mem:psptrx"
Spring Boot @DirtiesContext annotation:
@DirtiesContext(classMode = DirtiesContext.ClassMode.BEFORE_EACH_TEST_METHOD)
Use @Before to initialise on each test case.
The @DirtiesContext will cause the H2 context to be dropped between each test.
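Putting this together, a sketch of what such a test class could look like (the class, repository, and entity names are placeholders, and the JUnit 4 / Spring test annotations are assumed):
@RunWith(SpringRunner.class)
@SpringBootTest
@DirtiesContext(classMode = DirtiesContext.ClassMode.BEFORE_EACH_TEST_METHOD)
public class CustomerRepositoryTest {

    @Autowired
    private CustomerRepository repository; // hypothetical Spring Data repository

    @Before
    public void setUp() {
        // the in-memory H2 database is fresh here because of @DirtiesContext
        repository.save(new Customer());
    }

    @Test
    public void countsSeededCustomer() {
        assertEquals(1, repository.count());
    }
}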
You can add the following line to application.properties to reset the tables that are managed by JPA:
spring.jpa.hibernate.ddl-auto=create