Hibernate second-level cache can only get the lazy-loaded entity when I set a breakpoint - spring-data-jpa

I use Spring Data JPA and the Hibernate second-level cache via hibernate-redis in my project. I use @Transactional for lazy loading, but the cache misses when I run the application normally. If I debug it and pause at a breakpoint for a while, it works and retrieves the cached entity from Redis. Here is the code:
Entity ItemCategory:
@Entity
@Cacheable
public class ItemCategory extends BaseModel {

    @NotNull
    @Column(updatable = false)
    private String name;

    @JsonBackReference
    @ManyToOne(fetch = FetchType.LAZY)
    private ItemCategory root;
}
Entity Item:
@Entity
@Cacheable
public class Item extends BaseModel {

    @ManyToOne(fetch = FetchType.EAGER)
    private ItemCategory category;
}
Repository:
@Repository
public interface ItemCategoryRepository extends JpaRepository<ItemCategory, Long> {

    @QueryHints(value = {
            @QueryHint(name = "org.hibernate.cacheable", value = "true")
    })
    @Query("select distinct i.category.root from Item i where i.store.id = :id and i.category.parent.id = i.category.root.id")
    List<ItemCategory> findByStoreId(@Param("id") Long id);
}
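The repository is called from inside a transaction so the lazily loaded root association stays usable. A minimal sketch of that calling pattern (the ItemCategoryService class and its method name are placeholders, not the actual project code):
import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ItemCategoryService {

    @Autowired
    private ItemCategoryRepository itemCategoryRepository;

    // The transaction keeps the Hibernate session open, so LAZY associations
    // on the returned entities can still be initialized when accessed.
    @Transactional(readOnly = true)
    public List<ItemCategory> rootCategoriesForStore(Long storeId) {
        return itemCategoryRepository.findByStoreId(storeId);
    }
}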
Cache miss on a normal run:
2017-03-06 14:49:30.105 TRACE 30295 --- [nio-8080-exec-2] o.h.cache.redis.client.RedisClient : retrieve cache item. region=hibernate.org.hibernate.cache.internal.StandardQueryCache, key=sql: select distinct itemcatego2_.id as id1_21_, itemcatego2_.create_by_id as create_b8_21_, itemcatego2_.create_date as create_d2_21_, itemcatego2_.last_modified_by_id as last_mod9_21_, itemcatego2_.last_modified_date as last_mod3_21_, itemcatego2_.background_id as backgro10_21_, itemcatego2_.enabled as enabled4_21_, itemcatego2_.name as name5_21_, itemcatego2_.parent_id as parent_11_21_, itemcatego2_.root_id as root_id12_21_, itemcatego2_.slide as slide6_21_, itemcatego2_.son_number as son_numb7_21_ from item item0_ inner join item_category itemcatego1_ on item0_.category_id=itemcatego1_.id inner join item_category itemcatego2_ on itemcatego1_.root_id=itemcatego2_.id where item0_.store_id=? and itemcatego1_.parent_id=itemcatego1_.root_id; parameters: ; named parameters: {id=4}; transformer: org.hibernate.transform.CacheableResultTransformer#110f2, value=[6098054966726656, 3, 1]
2017-03-06 14:49:30.116 TRACE 30295 --- [nio-8080-exec-2] o.h.cache.redis.client.RedisClient : retrieve cache item. region=hibernate.org.hibernate.cache.spi.UpdateTimestampsCache, key=item, value=null
2017-03-06 14:49:30.127 TRACE 30295 --- [nio-8080-exec-2] o.h.cache.redis.client.RedisClient : retrieve cache item. region=hibernate.org.hibernate.cache.spi.UpdateTimestampsCache, key=item_category, value=null
2017-03-06 14:49:41.971 INFO 30295 --- [nio-8080-exec-2] i.StatisticalLoggingSessionEventListener : Session Metrics {
974551 nanoseconds spent acquiring 1 JDBC connections;
0 nanoseconds spent releasing 0 JDBC connections;
0 nanoseconds spent preparing 0 JDBC statements;
0 nanoseconds spent executing 0 JDBC statements;
0 nanoseconds spent executing 0 JDBC batches;
0 nanoseconds spent performing 0 L2C puts;
19881210 nanoseconds spent performing 1 L2C hits;
24082571 nanoseconds spent performing 2 L2C misses;
0 nanoseconds spent executing 0 flushes (flushing a total of 0 entities and 0 collections);
26331 nanoseconds spent executing 1 partial-flushes (flushing a total of 0 entities and 0 collections)
}
If I debug and pause at a breakpoint for some time (it does not work every time):
2017-03-06 14:50:00.565 TRACE 30295 --- [nio-8080-exec-3] o.h.cache.redis.client.RedisClient : retrieve cache item. region=hibernate.org.hibernate.cache.internal.StandardQueryCache, key=sql: select distinct itemcatego2_.id as id1_21_, itemcatego2_.create_by_id as create_b8_21_, itemcatego2_.create_date as create_d2_21_, itemcatego2_.last_modified_by_id as last_mod9_21_, itemcatego2_.last_modified_date as last_mod3_21_, itemcatego2_.background_id as backgro10_21_, itemcatego2_.enabled as enabled4_21_, itemcatego2_.name as name5_21_, itemcatego2_.parent_id as parent_11_21_, itemcatego2_.root_id as root_id12_21_, itemcatego2_.slide as slide6_21_, itemcatego2_.son_number as son_numb7_21_ from item item0_ inner join item_category itemcatego1_ on item0_.category_id=itemcatego1_.id inner join item_category itemcatego2_ on itemcatego1_.root_id=itemcatego2_.id where item0_.store_id=? and itemcatego1_.parent_id=itemcatego1_.root_id; parameters: ; named parameters: {id=4}; transformer: org.hibernate.transform.CacheableResultTransformer#110f2, value=[6098054966726656, 3, 1]
2017-03-06 14:50:00.584 TRACE 30295 --- [nio-8080-exec-3] o.h.cache.redis.client.RedisClient : retrieve cache item. region=hibernate.org.hibernate.cache.spi.UpdateTimestampsCache, key=item, value=null
2017-03-06 14:50:00.595 TRACE 30295 --- [nio-8080-exec-3] o.h.cache.redis.client.RedisClient : retrieve cache item. region=hibernate.org.hibernate.cache.spi.UpdateTimestampsCache, key=item_category, value=null
2017-03-06 14:50:01.805 TRACE 30295 --- [nio-8080-exec-3] o.h.cache.redis.client.RedisClient : retrieve cache item. region=hibernate.com.foo.bar.model.item.ItemCategory, key=com.foo.bar.model.item.ItemCategory#3, value={parent=null, lastModifiedDate=2016-12-14 09:30:48.0, lastModifiedBy=1, enabled=true, sonNumber=2, _subclass=com.foo.bar.model.item.ItemCategory, createBy=1, children=3, background=1, slide=0, root=3, name=foo, _lazyPropertiesUnfetched=false, _version=null, createDate=2016-12-14 09:29:56.0}
Hibernate: select user0_.id as id1_59_0_, user0_.create_by_id as create_11_59_0_, user0_.create_date as create_d2_59_0_, user0_.last_modified_by_id as last_mo12_59_0_, user0_.last_modified_date as last_mod3_59_0_, user0_.avatar_id as avatar_13_59_0_, user0_.email as email4_59_0_, user0_.enabled as enabled5_59_0_, user0_.gender as gender6_59_0_, user0_.nickname as nickname7_59_0_, user0_.phone as phone8_59_0_, user0_.seller_auth_info_id as seller_14_59_0_, user0_.seller_auth_status as seller_a9_59_0_, user0_.user_ext_id as user_ex15_59_0_, user0_.user_group_id as user_gr16_59_0_, user0_.username as usernam10_59_0_, user1_.id as id1_59_1_, user1_.create_by_id as create_11_59_1_, user1_.create_date as create_d2_59_1_, user1_.last_modified_by_id as last_mo12_59_1_, user1_.last_modified_date as last_mod3_59_1_, user1_.avatar_id as avatar_13_59_1_, user1_.email as email4_59_1_, user1_.enabled as enabled5_59_1_, user1_.gender as gender6_59_1_, user1_.nickname as nickname7_59_1_, user1_.phone as phone8_59_1_, user1_.seller_auth_info_id as seller_14_59_1_, user1_.seller_auth_status as seller_a9_59_1_, user1_.user_ext_id as user_ex15_59_1_, user1_.user_group_id as user_gr16_59_1_, user1_.username as usernam10_59_1_, user2_.id as id1_59_2_, user2_.create_by_id as create_11_59_2_, user2_.create_date as create_d2_59_2_, user2_.last_modified_by_id as last_mo12_59_2_, user2_.last_modified_date as last_mod3_59_2_, user2_.avatar_id as avatar_13_59_2_, user2_.email as email4_59_2_, user2_.enabled as enabled5_59_2_, user2_.gender as gender6_59_2_, user2_.nickname as nickname7_59_2_, user2_.phone as phone8_59_2_, user2_.seller_auth_info_id as seller_14_59_2_, user2_.seller_auth_status as seller_a9_59_2_, user2_.user_ext_id as user_ex15_59_2_, user2_.user_group_id as user_gr16_59_2_, user2_.username as usernam10_59_2_, usergroup3_.id as id1_65_3_, usergroup3_.create_by_id as create_b5_65_3_, usergroup3_.create_date as create_d2_65_3_, usergroup3_.last_modified_by_id as last_mod6_65_3_, usergroup3_.last_modified_date as last_mod3_65_3_, usergroup3_.name as name4_65_3_ from user user0_ left outer join user user1_ on user0_.create_by_id=user1_.id left outer join user user2_ on user1_.last_modified_by_id=user2_.id left outer join user_group usergroup3_ on user1_.user_group_id=usergroup3_.id where user0_.id=?
Hibernate: select usergroup0_.id as id1_65_0_, usergroup0_.create_by_id as create_b5_65_0_, usergroup0_.create_date as create_d2_65_0_, usergroup0_.last_modified_by_id as last_mod6_65_0_, usergroup0_.last_modified_date as last_mod3_65_0_, usergroup0_.name as name4_65_0_, user1_.id as id1_59_1_, user1_.create_by_id as create_11_59_1_, user1_.create_date as create_d2_59_1_, user1_.last_modified_by_id as last_mo12_59_1_, user1_.last_modified_date as last_mod3_59_1_, user1_.avatar_id as avatar_13_59_1_, user1_.email as email4_59_1_, user1_.enabled as enabled5_59_1_, user1_.gender as gender6_59_1_, user1_.nickname as nickname7_59_1_, user1_.phone as phone8_59_1_, user1_.seller_auth_info_id as seller_14_59_1_, user1_.seller_auth_status as seller_a9_59_1_, user1_.user_ext_id as user_ex15_59_1_, user1_.user_group_id as user_gr16_59_1_, user1_.username as usernam10_59_1_, user2_.id as id1_59_2_, user2_.create_by_id as create_11_59_2_, user2_.create_date as create_d2_59_2_, user2_.last_modified_by_id as last_mo12_59_2_, user2_.last_modified_date as last_mod3_59_2_, user2_.avatar_id as avatar_13_59_2_, user2_.email as email4_59_2_, user2_.enabled as enabled5_59_2_, user2_.gender as gender6_59_2_, user2_.nickname as nickname7_59_2_, user2_.phone as phone8_59_2_, user2_.seller_auth_info_id as seller_14_59_2_, user2_.seller_auth_status as seller_a9_59_2_, user2_.user_ext_id as user_ex15_59_2_, user2_.user_group_id as user_gr16_59_2_, user2_.username as usernam10_59_2_, user3_.id as id1_59_3_, user3_.create_by_id as create_11_59_3_, user3_.create_date as create_d2_59_3_, user3_.last_modified_by_id as last_mo12_59_3_, user3_.last_modified_date as last_mod3_59_3_, user3_.avatar_id as avatar_13_59_3_, user3_.email as email4_59_3_, user3_.enabled as enabled5_59_3_, user3_.gender as gender6_59_3_, user3_.nickname as nickname7_59_3_, user3_.phone as phone8_59_3_, user3_.seller_auth_info_id as seller_14_59_3_, user3_.seller_auth_status as seller_a9_59_3_, user3_.user_ext_id as user_ex15_59_3_, user3_.user_group_id as user_gr16_59_3_, user3_.username as usernam10_59_3_, usergroup4_.id as id1_65_4_, usergroup4_.create_by_id as create_b5_65_4_, usergroup4_.create_date as create_d2_65_4_, usergroup4_.last_modified_by_id as last_mod6_65_4_, usergroup4_.last_modified_date as last_mod3_65_4_, usergroup4_.name as name4_65_4_, user5_.id as id1_59_5_, user5_.create_by_id as create_11_59_5_, user5_.create_date as create_d2_59_5_, user5_.last_modified_by_id as last_mo12_59_5_, user5_.last_modified_date as last_mod3_59_5_, user5_.avatar_id as avatar_13_59_5_, user5_.email as email4_59_5_, user5_.enabled as enabled5_59_5_, user5_.gender as gender6_59_5_, user5_.nickname as nickname7_59_5_, user5_.phone as phone8_59_5_, user5_.seller_auth_info_id as seller_14_59_5_, user5_.seller_auth_status as seller_a9_59_5_, user5_.user_ext_id as user_ex15_59_5_, user5_.user_group_id as user_gr16_59_5_, user5_.username as usernam10_59_5_, authoritie6_.user_group_id as user_gro1_66_6_, authoritie6_.authorities as authorit2_66_6_ from user_group usergroup0_ left outer join user user1_ on usergroup0_.create_by_id=user1_.id left outer join user user2_ on user1_.create_by_id=user2_.id left outer join user user3_ on user1_.last_modified_by_id=user3_.id left outer join user_group usergroup4_ on user1_.user_group_id=usergroup4_.id left outer join user user5_ on usergroup0_.last_modified_by_id=user5_.id left outer join user_group_authorities authoritie6_ on usergroup0_.id=authoritie6_.user_group_id where usergroup0_.id=?
2017-03-06 14:50:01.830 TRACE 30295 --- [nio-8080-exec-3] o.h.cache.redis.client.RedisClient : retrieve cache item. region=hibernate.com.foo.bar.model.item.ItemCategory, key=com.foo.bar.model.item.ItemCategory#1, value={parent=null, lastModifiedDate=2016-12-05 09:31:51.0, lastModifiedBy=1, enabled=true, sonNumber=1, _subclass=com.foo.bar.model.item.ItemCategory, createBy=1, children=1, background=1, slide=0, root=1, name=bar, _lazyPropertiesUnfetched=false, _version=null, createDate=2016-12-05 09:31:28.0}
2017-03-06 14:51:02.165 INFO 30295 --- [nio-8080-exec-3] i.StatisticalLoggingSessionEventListener : Session Metrics {
15435533 nanoseconds spent acquiring 1 JDBC connections;
0 nanoseconds spent releasing 0 JDBC connections;
1405433 nanoseconds spent preparing 2 JDBC statements;
2301936 nanoseconds spent executing 2 JDBC statements;
0 nanoseconds spent executing 0 JDBC batches;
0 nanoseconds spent performing 0 L2C puts;
64020073 nanoseconds spent performing 3 L2C hits;
27037450 nanoseconds spent performing 2 L2C misses;
1247578 nanoseconds spent executing 1 flushes (flushing a total of 4 entities and 3 collections);
24403 nanoseconds spent executing 1 partial-flushes (flushing a total of 0 entities and 0 collections)
}
application.yml:
spring:
  profiles: development
  jpa:
    show-sql: true
    properties:
      hibernate.cache.use_second_level_cache: true
      hibernate.cache.region.factory_class: org.hibernate.cache.redis.hibernate5.SingletonRedisRegionFactory
      hibernate.cache.use_query_cache: true
      hibernate.cache.region_prefix: hibernate
      hibernate.generate_statistics: true
      hibernate.cache.use_structured_entries: true
      redisson-config: classpath:redisson.yml
      hibernate.cache.use_reference_entries: true
      javax.persistence.sharedCache.mode: ENABLE_SELECTIVE
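Since hibernate.generate_statistics is enabled, the L2C hit/miss counters from the session metrics above can also be read programmatically. A minimal sketch, assuming the SessionFactory is unwrapped from an injected EntityManagerFactory (the CacheStatsLogger helper is illustrative, not part of the project):
import javax.persistence.EntityManagerFactory;

import org.hibernate.SessionFactory;
import org.hibernate.stat.Statistics;

public class CacheStatsLogger {

    // Dumps the same counters that appear in the
    // StatisticalLoggingSessionEventListener output above.
    public static String summary(EntityManagerFactory emf) {
        Statistics stats = emf.unwrap(SessionFactory.class).getStatistics();
        return "L2C hits=" + stats.getSecondLevelCacheHitCount()
                + ", L2C misses=" + stats.getSecondLevelCacheMissCount()
                + ", query cache hits=" + stats.getQueryCacheHitCount()
                + ", query cache misses=" + stats.getQueryCacheMissCount();
    }
}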

Related

DataNucleus relationship fetch group confusion

All models are here: https://github.com/valentijnscholten/dependency-track/tree/metrics-opt-comp/src/main/java/org/dependencytrack/model
I want to avoid the 1+N queries problem while processing a list of models and accessing a relationship/collection on each of them. This should be easy according to the DataNucleus docs, and is quite common. But it doesn't seem to work as advertised.
When adding the relationship to the fetch group, I can see that DN is doing an extra query to "Bulk Fetch" the relationship. This looks good. But as soon as I actually access the relationship, it does the same query again. But this time specifically for this model instance. So I end up with 1+1+N queries.
Question applies mostly to Component.java:
It has a many-to-many relationship (but I see the same problem with 1-N):
@Persistent(table = "COMPONENTS_VULNERABILITIES")
@Join(column = "COMPONENT_ID")
@Element(column = "VULNERABILITY_ID")
@Order(extensions = @Extension(vendorName = "datanucleus", key = "list-ordering", value = "id ASC"))
private List<Vulnerability> vulnerabilities;

@FetchGroup(name = "METRICS_UPDATE", members = {
    @Persistent(name = "id"),
    @Persistent(name = "lastInheritedRiskScore"),
    @Persistent(name = "uuid"),
    @Persistent(name = "vulnerabilities"),
    @Persistent(name = "analysises"),
    @Persistent(name = "policyViolations")
})
Query<Component> query = pm.newQuery(Component.class);
query.setOrdering("id DESC");
query.setRange(0, 500);
query.getFetchPlan().setGroup(Component.FetchGroup.METRICS_UPDATE.name());
query.getFetchPlan().setFetchSize(FetchPlan.FETCH_SIZE_GREEDY);
query.getFetchPlan().setMaxFetchDepth(10);
components = query.executeList();

LOGGER.debug("Fetched " + components.size() + " components for project " + project.getUuid());
for (final Component component : components) {
    try {
        LOGGER.debug("Printing vulnerabilities count: " + component.getVulnerabilities().stream().count());
FetchSize and MaxDepth are added because I was trying stuff.
Logs
2023-02-11 18:41:36,458 DEBUG [Query] JDOQL Query : Executing "SELECT FROM org.dependencytrack.model.Component WHERE project == :project ORDER BY id DESC RANGE 0,500" ...
2023-02-11 18:41:36,460 DEBUG [Connection] ManagedConnection OPENED : "org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl#3a5359fd [conn=com.zaxxer.hikari.pool.HikariProxyConnection#78a8a548, commitOnRelease=true, closeOnRelease=true, closeOnTxnEnd=true]" on resource "nontx" with isolation level "read-committed" and auto-commit=false
2023-02-11 18:41:36,460 DEBUG [Datastore] Using PreparedStatement "HikariProxyPreparedStatement#289630101 wrapping SELECT 'org.dependencytrack.model.Component' AS "DN_TYPE","A0"."ID" AS "NUCORDER0","A0"."LAST_RISKSCORE","A0"."UUID" FROM "COMPONENT" "A0" WHERE "A0"."PROJECT_ID" = ? ORDER BY "NUCORDER0" DESC FETCH NEXT 500 ROWS ONLY " for connection "com.zaxxer.hikari.pool.HikariProxyConnection#78a8a548"
2023-02-11 18:41:36,463 DEBUG [Native] SELECT 'org.dependencytrack.model.Component' AS "DN_TYPE","A0"."ID" AS "NUCORDER0","A0"."LAST_RISKSCORE","A0"."UUID" FROM "COMPONENT" "A0" WHERE "A0"."PROJECT_ID" = ? ORDER BY "NUCORDER0" DESC FETCH NEXT 500 ROWS ONLY
2023-02-11 18:41:36,466 DEBUG [Retrieve] SQL Execution Time = 3 ms
2
... some other bulk prefetches snipped
2023-02-11 18:41:36,476 DEBUG [Retrieve] JDOQL Bulk-Fetch of org.dependencytrack.model.Component.vulnerabilities
2023-02-11 18:41:36,476 DEBUG [Datastore] Using PreparedStatement "HikariProxyPreparedStatement#555511245 wrapping SELECT 'org.dependencytrack.model.Vulnerability' AS "DN_TYPE","A1"."CREATED","A1"."CREDITS","A1"."CVSSV2BASESCORE","A1"."CVSSV2EXPLOITSCORE","A1"."CVSSV2IMPACTSCORE","A1"."CVSSV2VECTOR","A1"."CVSSV3BASESCORE","A1"."CVSSV3EXPLOITSCORE","A1"."CVSSV3IMPACTSCORE","A1"."CVSSV3VECTOR","A1"."CWES","A1"."DESCRIPTION","A1"."DETAIL","A1"."EPSSPERCENTILE","A1"."EPSSSCORE","A1"."FRIENDLYVULNID","A1"."ID" AS "NUCORDER0","A1"."OWASPRRBUSINESSIMPACTSCORE","A1"."OWASPRRLIKELIHOODSCORE","A1"."OWASPRRTECHNICALIMPACTSCORE","A1"."OWASPRRVECTOR","A1"."PATCHEDVERSIONS","A1"."PUBLISHED","A1"."RECOMMENDATION","A1"."REFERENCES","A1"."SEVERITY","A1"."SOURCE","A1"."SUBTITLE","A1"."TITLE","A1"."UPDATED","A1"."UUID","A1"."VULNID","A1"."VULNERABLEVERSIONS","A0"."COMPONENT_ID" FROM "COMPONENTS_VULNERABILITIES" "A0" INNER JOIN "VULNERABILITY" "A1" ON "A0"."VULNERABILITY_ID" = "A1"."ID" WHERE EXISTS (SELECT 'org.dependencytrack.model.Component' AS "DN_TYPE","A0_SUB"."ID" AS "DN_APPID" FROM "COMPONENT" "A0_SUB" WHERE "A0_SUB"."PROJECT_ID" = ? AND "A0"."COMPONENT_ID" = "A0_SUB"."ID") ORDER BY "NUCORDER0"" for connection "com.zaxxer.hikari.pool.HikariProxyConnection#78a8a548"
2023-02-11 18:41:36,476 DEBUG [Native] SELECT 'org.dependencytrack.model.Vulnerability' AS "DN_TYPE","A1"."CREATED","A1"."CREDITS","A1"."CVSSV2BASESCORE","A1"."CVSSV2EXPLOITSCORE","A1"."CVSSV2IMPACTSCORE","A1"."CVSSV2VECTOR","A1"."CVSSV3BASESCORE","A1"."CVSSV3EXPLOITSCORE","A1"."CVSSV3IMPACTSCORE","A1"."CVSSV3VECTOR","A1"."CWES","A1"."DESCRIPTION","A1"."DETAIL","A1"."EPSSPERCENTILE","A1"."EPSSSCORE","A1"."FRIENDLYVULNID","A1"."ID" AS "NUCORDER0","A1"."OWASPRRBUSINESSIMPACTSCORE","A1"."OWASPRRLIKELIHOODSCORE","A1"."OWASPRRTECHNICALIMPACTSCORE","A1"."OWASPRRVECTOR","A1"."PATCHEDVERSIONS","A1"."PUBLISHED","A1"."RECOMMENDATION","A1"."REFERENCES","A1"."SEVERITY","A1"."SOURCE","A1"."SUBTITLE","A1"."TITLE","A1"."UPDATED","A1"."UUID","A1"."VULNID","A1"."VULNERABLEVERSIONS","A0"."COMPONENT_ID" FROM "COMPONENTS_VULNERABILITIES" "A0" INNER JOIN "VULNERABILITY" "A1" ON "A0"."VULNERABILITY_ID" = "A1"."ID" WHERE EXISTS (SELECT 'org.dependencytrack.model.Component' AS "DN_TYPE","A0_SUB"."ID" AS "DN_APPID" FROM "COMPONENT" "A0_SUB" WHERE "A0_SUB"."PROJECT_ID" = ? AND "A0"."COMPONENT_ID" = "A0_SUB"."ID") ORDER BY "NUCORDER0"
2023-02-11 18:41:36,480 DEBUG [Retrieve] SQL Execution Time = 4 ms
As far as I can see this is using the filters from the original query to fetch the vulnerabilities relationship. So far so good.
The first statement after the query above is to access getVulnerabilities(), and this results in, more or less, the exact same query:
2023-02-11 18:41:36,494 DEBUG [ProjectMetricsUpdateTask] Fetched 5 components for project f04cfba0-7b94-4380-8cbd-aca492f97a7f
2023-02-11 18:41:36,494 DEBUG [Persistence] Object with id "org.dependencytrack.model.Component:45106" has a lifecycle change : "HOLLOW"->"P_NONTRANS"
2023-02-11 18:41:36,508 DEBUG [Persistence] Object "org.dependencytrack.model.Component#68f46ca4" field "vulnerabilities" is replaced by a SCO wrapper of type "org.datanucleus.store.types.wrappers.backed.List" [cache-values=true, lazy-loading=true, allow-nulls=true]
2023-02-11 18:41:36,508 DEBUG [Persistence] Object "org.dependencytrack.model.Component#68f46ca4" field "vulnerabilities" loading contents to SCO wrapper from the datastore
2023-02-11 18:41:36,512 DEBUG [Connection] ManagedConnection OPENED : "org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl#73aeb4aa [conn=com.zaxxer.hikari.pool.HikariProxyConnection#30f8bfe1, commitOnRelease=true, closeOnRelease=true, closeOnTxnEnd=true]" on resource "nontx" with isolation level "read-committed" and auto-commit=false
2023-02-11 18:41:36,512 DEBUG [Datastore] Using PreparedStatement "HikariProxyPreparedStatement#1310502991 wrapping SELECT 'org.dependencytrack.model.Vulnerability' AS "DN_TYPE","A1"."CREATED","A1"."CREDITS","A1"."CVSSV2BASESCORE","A1"."CVSSV2EXPLOITSCORE","A1"."CVSSV2IMPACTSCORE","A1"."CVSSV2VECTOR","A1"."CVSSV3BASESCORE","A1"."CVSSV3EXPLOITSCORE","A1"."CVSSV3IMPACTSCORE","A1"."CVSSV3VECTOR","A1"."CWES","A1"."DESCRIPTION","A1"."DETAIL","A1"."EPSSPERCENTILE","A1"."EPSSSCORE","A1"."FRIENDLYVULNID","A1"."ID" AS "NUCORDER0","A1"."OWASPRRBUSINESSIMPACTSCORE","A1"."OWASPRRLIKELIHOODSCORE","A1"."OWASPRRTECHNICALIMPACTSCORE","A1"."OWASPRRVECTOR","A1"."PATCHEDVERSIONS","A1"."PUBLISHED","A1"."RECOMMENDATION","A1"."REFERENCES","A1"."SEVERITY","A1"."SOURCE","A1"."SUBTITLE","A1"."TITLE","A1"."UPDATED","A1"."UUID","A1"."VULNID","A1"."VULNERABLEVERSIONS" FROM "COMPONENTS_VULNERABILITIES" "A0" INNER JOIN "VULNERABILITY" "A1" ON "A0"."VULNERABILITY_ID" = "A1"."ID" WHERE "A0"."COMPONENT_ID" = ? ORDER BY "NUCORDER0"" for connection "com.zaxxer.hikari.pool.HikariProxyConnection#30f8bfe1"
2023-02-11 18:41:36,515 DEBUG [Native] SELECT 'org.dependencytrack.model.Vulnerability' AS "DN_TYPE","A1"."CREATED","A1"."CREDITS","A1"."CVSSV2BASESCORE","A1"."CVSSV2EXPLOITSCORE","A1"."CVSSV2IMPACTSCORE","A1"."CVSSV2VECTOR","A1"."CVSSV3BASESCORE","A1"."CVSSV3EXPLOITSCORE","A1"."CVSSV3IMPACTSCORE","A1"."CVSSV3VECTOR","A1"."CWES","A1"."DESCRIPTION","A1"."DETAIL","A1"."EPSSPERCENTILE","A1"."EPSSSCORE","A1"."FRIENDLYVULNID","A1"."ID" AS "NUCORDER0","A1"."OWASPRRBUSINESSIMPACTSCORE","A1"."OWASPRRLIKELIHOODSCORE","A1"."OWASPRRTECHNICALIMPACTSCORE","A1"."OWASPRRVECTOR","A1"."PATCHEDVERSIONS","A1"."PUBLISHED","A1"."RECOMMENDATION","A1"."REFERENCES","A1"."SEVERITY","A1"."SOURCE","A1"."SUBTITLE","A1"."TITLE","A1"."UPDATED","A1"."UUID","A1"."VULNID","A1"."VULNERABLEVERSIONS" FROM "COMPONENTS_VULNERABILITIES" "A0" INNER JOIN "VULNERABILITY" "A1" ON "A0"."VULNERABILITY_ID" = "A1"."ID" WHERE "A0"."COMPONENT_ID" = ? ORDER BY "NUCORDER0"
Disabling the L2 cache doesn't help. There are a couple more relationships in the Component class, and they all have the same problem. I also notice that DN only fetches 1 level of relationships, despite the max fetch depth being set to 10.
So two questions:
Why are the prefetched contents of the relationship not used?
Why is only 1 level of relationships prefetched?
I do notice that the vulnerabilities relation is empty in the database. Could that be triggering the repeat of queries?

Entity Framework takes longer to get the response for the first query within the same context

I am having an issue with EF where the first query takes a long time. I thought the query itself was taking a long time, so I used
context.Database.Log = s => System.Diagnostics.Debug.WriteLine(s);
to see what query is being sent. The query itself took only 1 ms, but from opening the connection to closing it, it took 18 seconds. The following is the output from the debug log.
Opened connection at 3/19/2015 9:25:49 PM +06:30
SELECT
[Extent1].[Id] AS [Id],
[Extent1].[ItemId] AS [ItemId],
[Extent1].[SerialNumber] AS [SerialNumber],
[Extent1].[SimNumber] AS [SimNumber],
[Extent1].[ItemStatusId] AS [ItemStatusId],
[Extent1].[StoreId] AS [StoreId]
FROM [dbo].[ItemDetail] AS [Extent1]
-- Executing at 3/19/2015 9:25:49 PM +06:30
-- Completed in 1 ms with result: SqlDataReader
Closed connection at 3/19/2015 9:26:07 PM +06:30
Within the same context, another query similar to the previous one was sent. It took only 1 second from opening the connection to closing it.
Opened connection at 3/19/2015 9:26:10 PM +06:30
SELECT
[Extent1].[Id] AS [Id],
[Extent1].[ItemId] AS [ItemId],
[Extent1].[SerialNumber] AS [SerialNumber],
[Extent1].[SimNumber] AS [SimNumber],
[Extent1].[ItemStatusId] AS [ItemStatusId],
[Extent1].[StoreId] AS [StoreId]
FROM [dbo].[ItemDetail] AS [Extent1]
INNER JOIN [dbo].[Item] AS [Extent2] ON [Extent1].[ItemId] = [Extent2].[Id]
WHERE ([Extent1].[ItemStatusId] = @p__linq__0) AND ([Extent2].[CategoryId] = @p__linq__1) AND ([Extent1].[StoreId] = @p__linq__2)
-- p__linq__0: '1' (Type = Int32, IsNullable = false)
-- p__linq__1: '2' (Type = Int32, IsNullable = false)
-- p__linq__2: '1' (Type = Int32, IsNullable = false)
-- Executing at 3/19/2015 9:26:10 PM +06:30
-- Completed in 1 ms with result: SqlDataReader
Closed connection at 3/19/2015 9:26:11 PM +06:30
Why does the first query take so much longer to close the connection?
I know the first query usually takes time because of loading the metadata, but this is different: opening the connection and executing the query happen almost together, and it is only after getting the results that closing the connection takes a long time in the first query.
Without knowing more details of your code, the only factor I can come up with is that it matters how a LINQ query is processed. One thing in particular that can cause a connection to remain open relatively long is enumerating a LINQ query and executing some time-consuming process in each iteration:
var details = from d in ItemDetail
              where ...
              select new { ... };

foreach (var detail in details)
{
    // DbDataReader reads a record on each iteration
    SomeLengthyProcess(detail);
}
// Connection is closed after the last iteration.
Here's a sample output (leaving out some irrelevant details)
Opened connection at 09:48:33
SELECT statement
-- Executing at 09:48:33
-- Completed in 14 ms with result: SqlDataReader
Record1
Record2
Record3
Record4
Record5
Closed connection at 09:48:37
This shows that the statement itself executes in "no time", but that the reader then spends another 4 seconds producing the records, one per iteration.
The way to make the connection close sooner is:
foreach (var detail in details.ToList())
which builds an in-memory list first and then iterates over it. In this case, the connection closes immediately after the "Completed" logging line.

Inserting data into more than one table in spring-batch using ibatis

I am using IbatisBatchItemWriter to write a complex object into multiple tables.
Here is what my object looks like:
public class SfObject {
    protected List<Person> person;
}

public class Person {
    protected String personId;
    protected XMLGregorianCalendar dateOfBirth;
    protected String countryOfBirth;
    protected String regionOfBirth;
    protected String placeOfBirth;
    protected String birthName;
    protected XMLGregorianCalendar dateOfDeath;
    protected XMLGregorianCalendar lastModifiedOn;
    protected List<EmailInformation> emailInformation;
}

public class EmailInformation {
    protected String emailType;
    protected String emailAddress;
    protected XMLGregorianCalendar lastModifiedOn;
}
And here is my iBATIS configuration to insert the above objects:
<insert id="insertCompoundEmployeeData" parameterClass="com.domain.SfObject">
  <iterate property="person">
    insert into E_Person_Info
      (person_id,
       person_birth_dt,
       person_country_of_birth,
       person_region_of_birth,
       person_place_of_birth,
       person_birth_name,
       person_death_dt,
       last_modified_on
      )
    values (#person[].personId#,
            #person[].dateOfBirth#,
            #person[].countryOfBirth#,
            #person[].regionOfBirth#,
            #person[].placeOfBirth#,
            #person[].birthName#,
            #person[].dateOfDeath#,
            #person[].lastModifiedOn#
           );
    <iterate property="person[].emailInformation">
      insert into E_Email_Info
        (email_info_person_id,
         email_info_email_type,
         email_info_email_address,
         last_modified_on
        )
      values (#person[].personId#,
              #person[].emailInformation[].emailType#,
              #person[].emailInformation[].emailAddress#,
              #person[].emailInformation[].lastModifiedOn#
             );
    </iterate>
  </iterate>
</insert>
I am not sure whether I can use the above config to insert data into more than one table, but when I executed the above code I got the error below for a batch of 10 records. By the way, email information is not mandatory, so it may be null in some Person objects.
Stacktrace
[08.08.2014 17:30:07] DEBUG: WebservicePagingItemReader.doRead() - Reading page 0
[08.08.2014 17:30:09] DEBUG: RepeatTemplate.executeInternal() - Repeat operation about to start at count=1
[08.08.2014 17:30:09] DEBUG: RepeatTemplate.executeInternal() - Repeat operation about to start at count=2
[08.08.2014 17:30:09] DEBUG: RepeatTemplate.executeInternal() - Repeat operation about to start at count=3
[08.08.2014 17:30:09] DEBUG: RepeatTemplate.executeInternal() - Repeat operation about to start at count=4
[08.08.2014 17:30:09] DEBUG: RepeatTemplate.executeInternal() - Repeat operation about to start at count=5
[08.08.2014 17:30:09] DEBUG: RepeatTemplate.executeInternal() - Repeat operation about to start at count=6
[08.08.2014 17:30:09] DEBUG: RepeatTemplate.executeInternal() - Repeat operation about to start at count=7
[08.08.2014 17:30:09] DEBUG: RepeatTemplate.executeInternal() - Repeat operation about to start at count=8
[08.08.2014 17:30:09] DEBUG: RepeatTemplate.executeInternal() - Repeat operation about to start at count=9
[08.08.2014 17:30:09] DEBUG: RepeatTemplate.isComplete() - Repeat is complete according to policy and result value.
[08.08.2014 17:30:09] DEBUG: IbatisBatchItemWriter.write() - Executing batch with 10 items.
[08.08.2014 17:30:09] DEBUG: SqlMapClientTemplate.execute() - Opened SqlMapSession [com.ibatis.sqlmap.engine.impl.SqlMapSessionImpl#168afdd] for iBATIS operation
[08.08.2014 17:30:10] DEBUG: Connection.debug() - {conn-100000} Connection
[08.08.2014 17:30:10] DEBUG: SqlMapClientTemplate.execute() - Obtained JDBC Connection [Transaction-aware proxy for target Connection from DataSource [org.springframework.jdbc.datasource.DriverManagerDataSource#8eae04]] for iBATIS operation
[08.08.2014 17:30:10] DEBUG: Connection.debug() - {conn-100000} Preparing Statement: insert into E_Person_Info (person_id, person_birth_dt, person_country_of_birth, person_region_of_birth, person_place_of_birth, person_birth_name, person_death_dt, last_modified_on ) values (?, ?, ?, ?, ?, ?, ?, ? );
[08.08.2014 17:30:10] DEBUG: Connection.debug() - {conn-100000} Preparing Statement: insert into E_Person_Info (person_id, person_birth_dt, person_country_of_birth, person_region_of_birth, person_place_of_birth, person_birth_name, person_death_dt, last_modified_on ) values (?, ?, ?, ?, ?, ?, ?, ? ); insert into E_Email_Info (email_info_person_id, email_info_email_type, email_info_email_address, last_modified_on ) values (?, ?, ?, ? );
[08.08.2014 17:30:10] DEBUG: Connection.debug() - {conn-100000} Preparing Statement: insert into E_Person_Info (person_id, person_birth_dt, person_country_of_birth, person_region_of_birth, person_place_of_birth, person_birth_name, person_death_dt, last_modified_on ) values (?, ?, ?, ?, ?, ?, ?, ? );
[08.08.2014 17:30:10] DEBUG: TaskletStep.doInChunkContext() - Applying contribution: [StepContribution: read=10, written=0, filtered=0, readSkips=0, writeSkips=0, processSkips=0, exitStatus=EXECUTING]
[08.08.2014 17:30:10] DEBUG: TaskletStep.doInChunkContext() - Rollback for Exception: org.springframework.dao.InvalidDataAccessResourceUsageException: Batch execution returned invalid results. Expected 1 but number of BatchResult objects returned was 3
[08.08.2014 17:30:10] DEBUG: DataSourceTransactionManager.processRollback() - Initiating transaction rollback
[08.08.2014 17:30:10] DEBUG: DataSourceTransactionManager.doRollback() - Rolling back JDBC transaction on Connection [net.sourceforge.jtds.jdbc.ConnectionJDBC3#190d8e1]
[08.08.2014 17:30:10] DEBUG: DataSourceTransactionManager.doCleanupAfterCompletion() - Releasing JDBC Connection [net.sourceforge.jtds.jdbc.ConnectionJDBC3#190d8e1] after transaction
[08.08.2014 17:30:10] DEBUG: DataSourceUtils.doReleaseConnection() - Returning JDBC Connection to DataSource
[08.08.2014 17:30:10] DEBUG: RepeatTemplate.doHandle() - Handling exception: org.springframework.dao.InvalidDataAccessResourceUsageException, caused by: org.springframework.dao.InvalidDataAccessResourceUsageException: Batch execution returned invalid results. Expected 1 but number of BatchResult objects returned was 3
[08.08.2014 17:30:10] DEBUG: RepeatTemplate.executeInternal() - Handling fatal exception explicitly (rethrowing first of 1): org.springframework.dao.InvalidDataAccessResourceUsageException: Batch execution returned invalid results. Expected 1 but number of BatchResult objects returned was 3
[08.08.2014 17:30:10] ERROR: AbstractStep.execute() - Encountered an error executing the step
org.springframework.dao.InvalidDataAccessResourceUsageException: Batch execution returned invalid results. Expected 1 but number of BatchResult objects returned was 3
at org.springframework.batch.item.database.IbatisBatchItemWriter.write(IbatisBatchItemWriter.java:140)
at org.springframework.batch.core.step.item.SimpleChunkProcessor.writeItems(SimpleChunkProcessor.java:156)
at org.springframework.batch.core.step.item.SimpleChunkProcessor.doWrite(SimpleChunkProcessor.java:137)
at org.springframework.batch.core.step.item.SimpleChunkProcessor.write(SimpleChunkProcessor.java:252)
at org.springframework.batch.core.step.item.SimpleChunkProcessor.process(SimpleChunkProcessor.java:178)
at org.springframework.batch.core.step.item.ChunkOrientedTasklet.execute(ChunkOrientedTasklet.java:74)
at org.springframework.batch.core.step.tasklet.TaskletStep$2.doInChunkContext(TaskletStep.java:268)
at org.springframework.batch.core.scope.context.StepContextRepeatCallback.doInIteration(StepContextRepeatCallback.java:76)
at org.springframework.batch.repeat.support.RepeatTemplate.getNextResult(RepeatTemplate.java:367)
at org.springframework.batch.repeat.support.RepeatTemplate.executeInternal(RepeatTemplate.java:215)
at org.springframework.batch.repeat.support.RepeatTemplate.iterate(RepeatTemplate.java:143)
at org.springframework.batch.core.step.tasklet.TaskletStep.doExecute(TaskletStep.java:242)
at org.springframework.batch.core.step.AbstractStep.execute(AbstractStep.java:198)
at org.springframework.batch.core.job.AbstractJob.handleStep(AbstractJob.java:348)
at org.springframework.batch.core.job.flow.FlowJob.access$0(FlowJob.java:1)
at org.springframework.batch.core.job.flow.FlowJob$JobFlowExecutor.executeStep(FlowJob.java:135)
at org.springframework.batch.core.job.flow.support.state.StepState.handle(StepState.java:60)
at org.springframework.batch.core.job.flow.support.SimpleFlow.resume(SimpleFlow.java:144)
at org.springframework.batch.core.job.flow.support.SimpleFlow.start(SimpleFlow.java:124)
at org.springframework.batch.core.job.flow.FlowJob.doExecute(FlowJob.java:103)
at org.springframework.batch.core.job.AbstractJob.execute(AbstractJob.java:250)
at org.springframework.batch.core.launch.support.SimpleJobLauncher$1.run(SimpleJobLauncher.java:110)
at org.springframework.core.task.SyncTaskExecutor.execute(SyncTaskExecutor.java:48)
at org.springframework.batch.core.launch.support.SimpleJobLauncher.run(SimpleJobLauncher.java:105)
at com.CtrlMPojoForBatch.initiateSpringBatchProcess(CtrlMPojoForBatch.java:92)
at com.CtrlMPojoForBatch.main(CtrlMPojoForBatch.java:33)
[08.08.2014 17:30:10] WARN: CustomStepExecutionListner.afterStep() - Failure occured executing the step readWriteExchagnerateConversionData
[08.08.2014 17:30:10] WARN: CustomStepExecutionListner.afterStep() - Initiating the rollback operation...
[08.08.2014 17:30:10] WARN: CustomStepExecutionListner.afterStep() - Rollback completed!
Assuming you're using the IbatisBatchItemWriter provided by Spring Batch (it has been deprecated in favor of the writers provided by the MyBatis project), set the assertUpdates property to false. This will prevent Spring Batch from verifying that exactly one update was made per item.
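For illustration, a minimal sketch of turning that check off on the writer (Java-style configuration; the surrounding bean setup and the sqlMapClient argument are assumptions, not taken from the question):
import org.springframework.batch.item.database.IbatisBatchItemWriter;

import com.domain.SfObject;
import com.ibatis.sqlmap.client.SqlMapClient;

public class CompoundEmployeeWriterConfig {

    // Sketch only: sqlMapClient is assumed to be the already-configured iBATIS client.
    public IbatisBatchItemWriter<SfObject> compoundEmployeeWriter(SqlMapClient sqlMapClient) {
        IbatisBatchItemWriter<SfObject> writer = new IbatisBatchItemWriter<SfObject>();
        writer.setSqlMapClient(sqlMapClient);
        writer.setStatementId("insertCompoundEmployeeData"); // the <insert> id from the sqlmap above
        writer.setAssertUpdates(false); // skip the "exactly one BatchResult per item" verification
        return writer;
    }
}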

Transactional Replication: Column name or number of supplied values does not match table definition

I have set up transactional replication for some tables.
The master and the slave database are identical.
I used this query and compared the results from master and slave to make sure the table is identical:
select * from sys.columns c
join sys.tables t on t.object_id = c.object_id
where t.name = 'customers'
In the Replication Monitor I can find this error:
Column name or number of supplied values does not match table definition.
If I check the details I get this:
Command attempted:
if @@trancount > 0 rollback tran
(Transaction sequence number: 0x0011775200000105007600000000, Command ID: 1)
So I checked in the distribution database using this query to find the command that is failing:
sp_browsereplcmds @xact_seqno_start = '0x0011775200000105007600000000',
                  @xact_seqno_end = '0x0011775200000105007600000000'
This is the command (it is stored across two lines in that table):
{CALL [sp_MSins_dboCustomers] (0,'575',N'todelete','575',N'todelete',118594,118595,118596,N'10T 3% Sk 30T net.',0,'Deutschland',4,24399158193054E-314,4,24399158193054E-314,4,24399158193054E-314,4,24399158193054E-314,2,54639494915833E-313,'','','','','','TGW',N'Liefern LKW',NULL,NULL,0,0,6,79038653108887E-311,NULL,'',0,NULL,NULL,NULL,0,0,0,-1,-1,1900-01-01 00:00:00,0,1,{AEB3D911-36D1-4A8A-B713-6B2F2CCA1641},0,0,2,'de-AT',25,NULL,NULL,0,1,NULL,NULL,2014-03-07 08:57:45.727,-1,NULL,0,'','','','','','','','','','','','',
'','','','','','','','')}
This is what I have in my DB
TypeID CustomerID Name SiteID SiteName AddressID BillAddressID ShipAddressID Terms TaxExempt TaxSchedID TaxPercent TaxPercent1 TaxPercent2 TaxPercent3 TaxPercent4 TaxTitle TaxTitle1 TaxTitle2 TaxTitle3 TaxTitle4 LocationID ShipVia PackingType PackingNoteID CutoffDay UploadAction LeadTime ExpDays Notes SalesPersonID CreditLimit OpenOrders OrderValueScheduleID OAHidePrices DefaultAckType DefaultInvType DefaultPackType UploadEmployee UploadDateTime OAHideImages MfgCustomer CustomerGUID PricingMethod DefaultCustomer EngineeringUnitSetID CurrencyCulture FamilyGroupID InvoiceMinimum InvoiceSurcharge InvoiceGroup InvoiceCopies DeliveryMinimum DeliverySurcharge CreateDate EnteredBy LanguageCulture DropShip UserDef1 UserDef2 UserDef3 UserDef4 UserDef5 UserDef6 UserDef7 UserDef8 UserDef9 UserDef10 UserDef11 UserDef12 UserDef13 UserDef14 UserDef15 UserDef16 UserDef17 UserDef18 UserDef19 UserDef20
0 575 todelete 575 todelete 118594 118595 118596 10T 3% Sk 30T net. 0 Deutschland 0 0 0 0 0 TGW Liefern LKW NULL NULL 0 0 0 NULL 302 NULL NULL NULL 0 0 0 -1 -1 1900-01-01 00:00:00 0 1 AEB3D911-36D1-4A8A-B713-6B2F2CCA1641 0 0 2 de-AT 25 NULL NULL 0 1 NULL NULL 2014-03-07 08:57:45.727 -1 NULL 0 1 2 3 4 0 1 2 3 4
As you can see, the values for the TaxPercent fields (after "Deutschland") are 0 in my DB, while in the command they are really weird (4,24399158193054E-314).
The Datatype is "real"
Maybe this is not the issue but this is the only weird thing I could find.
I found my problem.
In fact, 4,24399158193054E-314 is the value 0 as a real; the problem is that the command used "," rather than "." as the decimal separator, so the procedure call ended up with too many arguments.
What I did was change the statement delivery for insert, update and delete from "CALL <stored procedure>" to INSERT/UPDATE/DELETE statements.
I don't know why this is not selected by default, but now it works.

Firebird: Query execution time in iSQL

I would like to get query execution time in iSQL.
For instance :
SELECT * FROM students;
How do I get the query execution time?
Use SET STATS:
SQL> SET STATS;
SQL> SELECT * FROM RDB$DATABASE;
... query output removed ....
Current memory = 34490656
Delta memory = 105360
Max memory = 34612544
Elapsed time= 0.59 sec
Buffers = 2048
Reads = 17
Writes 0
Fetches = 270
SQL>