Inline queries with @Query(nativeQuery = true, value = "..."): does it work? - spring-data

MariaDB executes the query, but Spring Boot doesn't want to return the tracking numbers.
@Query(nativeQuery = true, value =
    "SELECT dc.tracktracenr " +
    " FROM dist_container as dc LEFT JOIN distribution as d ON dc.distribution_oid = d.oid " +
    " where " +
    " d.provider_id like '04' " +
    " and ( " +
    "   dc.tracktracenr IN (SELECT t.track_trace_code FROM tracking_state as t where t.received is null and t.failed is null) " +
    "   or " +
    "   CAST(d.processed_date as DATE) = cast(sysdate() as DATE) " +
    "   and dc.tracktracenr NOT IN (SELECT t.track_trace_code FROM tracking_state as t) " +
    " )"
)
List<String> findActiveBy();
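Yes, inline native queries like this are supported when the method is declared on a Spring Data repository interface. As a minimal hedged sketch (the repository name, entity type and trimmed-down WHERE clause below are assumptions, not taken from the question), the declaration would typically look like this:

import java.util.List;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Query;

// Hypothetical repository; DistContainer is a placeholder entity mapped to dist_container.
public interface DistContainerRepository extends JpaRepository<DistContainer, Long> {

    // Native query returning a list of scalar values (the tracking numbers).
    @Query(nativeQuery = true, value =
        "SELECT dc.tracktracenr FROM dist_container dc " +
        "LEFT JOIN distribution d ON dc.distribution_oid = d.oid " +
        "WHERE d.provider_id LIKE '04'")
    List<String> findActiveBy();
}

One thing worth double-checking in the original WHERE clause: inside the parentheses, AND binds tighter than OR, so the condition is evaluated as A OR (B AND C); make sure that matches the intent.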

Related

Flink - Cassandra sink - No support for the type of the given DataStream: Row<...>

I have a Flink (1.14, Java, Table API) app that consumes data from Kafka (Confluent Avro format), performs some transformations/aggregations, and writes back to another Kafka topic (upsert-kafka connector).
Now I want to change the sink so that it writes the results to a Cassandra cluster (upsert). There is no Cassandra connector for the Table API, so I'm trying to convert the Table to a DataStream and then use the Apache Cassandra Connector (for the DataStream API). But I'm getting an error.
Any ideas how to make it work?
My code:
public static void main(String[] args) throws Exception {
    EnvironmentSettings settings = EnvironmentSettings.newInstance().inStreamingMode().build();
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    ...
    StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env, settings);
    ...
    tableEnv.executeSql("CREATE TABLE `from_kafka_my_topic` (\n" +
            "  customer_id STRING, \n" +
            "  service_id STRING, \n" +
            "  upload_bytes STRING, \n" +
            "  download_bytes STRING, \n" +
            "  event_time_as_string STRING, \n" +
            "  event_time AS TO_TIMESTAMP(event_time_as_string, 'yyyyMMddHHmmssX'), \n" +
            "  WATERMARK FOR event_time AS event_time - INTERVAL '10' MINUTE\n" +
            ") WITH (\n" +
            "  'connector' = 'kafka',\n" +
            "  'topic' = 'my_topic',\n" +
            "  'format' = 'avro-confluent',\n" +
            "  'scan.startup.mode' = '" + SOURCE__KAFKA_OFFSET_RESET + "-offset',\n" +
            "  'avro-confluent.schema-registry.url' = '" + SOURCE__KAFKA_SCHEMA_REGISTRY_URL + "',\n" +
            "  'properties.group.id' = '" + SOURCE__KAFKA_GROUP_ID + "',\n" +
            "  'properties.bootstrap.servers' = '" + SOURCE__KAFKA_BOOTSTRAP_SERVERS + "'\n" +
            ")");

    Table resultTable = tableEnv.sqlQuery(
            " SELECT \n" +
            "   window_start \n" +
            "   , window_end \n" +
            "   , cast('DATA' as string) as record_type \n" +
            "   , COALESCE(substr(customer_id, 3), '0') customer_id \n" +
            "   , service_id \n" +
            "   , cast(null as bigint) duration \n" +
            "   , sum(cast(upload_bytes as bigint) + cast(download_bytes as bigint)) as volume \n" +
            " FROM TABLE ( \n" +
            "   TUMBLE(TABLE from_kafka_my_topic, DESCRIPTOR(event_time), INTERVAL '5' MINUTES) \n" +
            " ) \n" +
            " WHERE \n" +
            "   service_id is not null \n" +
            "   and coalesce(event_time_as_string, '') <> '' \n" +
            "   and cast(upload_bytes as bigint) + cast(download_bytes as bigint) > 0 \n" +
            " GROUP BY \n" +
            "   window_start \n" +
            "   , window_end \n" +
            "   , COALESCE(substr(customer_id, 3), '0') \n" +
            "   , service_id ");

    DataStream<Row> dataStream = tableEnv.toDataStream(resultTable);

    CassandraSink<Row> sink = CassandraSink
            .addSink(dataStream)
            .setClusterBuilder(new ClusterBuilder() {
                @Override
                protected Cluster buildCluster(Cluster.Builder builder) {
                    return Cluster.builder()
                            .addContactPoints("cassandra01.myhost.com,cassandra02.myhost.com".split(","))
                            .withPort(9042)
                            .withCredentials("LoginCass", "123456")
                            .build();
                }
            })
            .setQuery("INSERT INTO customers_usg.data_usage(window_start, window_end, record_type, customer_id, service_id, duration, volume) values (?, ?, ?, ?, ?, ?, ?);")
            .build();

    sink.name("Sink_to_Cassandra").disableChaining().uid("my_uid");
}
Error:
org.apache.flink.client.program.ProgramInvocationException: The main method caused an error:
No support for the type of the given DataStream:
ROW<window_start TIMESTAMP(3) NOT NULL, window_end TIMESTAMP(3) NOT NULL, record_type STRING NOT NULL,
customer_id STRING NOT NULL, service_id STRING NOT NULL, duration BIGINT, volume BIGINT> NOT NULL
(org.apache.flink.types.Row, org.apache.flink.table.runtime.typeutils.ExternalSerializer)
at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:372)
at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:222)
at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:114)
at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:812)
at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:246)
at org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:1054)
at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1132)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1132)
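One direction that is sometimes suggested for this error, offered here only as a hedged sketch and not as the confirmed fix: the DataStream Cassandra sink accepts Tuple (and POJO) types but rejects the externally-serialized Row coming out of toDataStream, so mapping each Row into a Tuple before sinking can sidestep the type check. Field positions and Java types below are assumptions derived from the SELECT above:

import java.time.LocalDateTime;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple7;
import org.apache.flink.streaming.api.datastream.DataStream;

// Convert each Row into a Tuple7 so the DataStream Cassandra sink sees a supported type.
// Field indices follow the SELECT column order (window_start ... volume).
DataStream<Tuple7<LocalDateTime, LocalDateTime, String, String, String, Long, Long>> tupleStream =
        dataStream
                .map(row -> Tuple7.of(
                        row.<LocalDateTime>getFieldAs(0),  // window_start
                        row.<LocalDateTime>getFieldAs(1),  // window_end
                        row.<String>getFieldAs(2),         // record_type
                        row.<String>getFieldAs(3),         // customer_id
                        row.<String>getFieldAs(4),         // service_id
                        row.<Long>getFieldAs(5),           // duration
                        row.<Long>getFieldAs(6)))          // volume
                .returns(Types.TUPLE(
                        Types.LOCAL_DATE_TIME, Types.LOCAL_DATE_TIME,
                        Types.STRING, Types.STRING, Types.STRING,
                        Types.LONG, Types.LONG));

// The sink itself stays as in the question, just fed with tupleStream instead of dataStream:
// CassandraSink.addSink(tupleStream).setClusterBuilder(...).setQuery(...).build();
// Depending on the Cassandra driver codecs, the LocalDateTime fields may need converting
// (e.g. to java.util.Date) before they can be bound to timestamp columns.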

How to apply "With UR" clause in existing Hql or SQL queries? How can I append this in Java file where my query is in string format

So I have this SQL query in one of the Java files, as follows (I am giving a sample, not the real one):
#Query( "Select T1.ID" +
", T1.CD" +
", T1.Date" +
", T1.Name" +
" From Table1 T1" +
" Join Table2 T2 " +
" On T1.ID = T2.ID" +
" Where T1.CD in
("test1","test2")" +
" And NOT EXISTS" +
" (Select 1 From Table2 T3"
+
" Where T3.ID = T1.ID +
" And T3.Name NOT
IN('P','Q','R')"+
" )")
List<Object[]> methodToRetrieve
(#Param("sequence")String
sequence,#Param("code") code);
Can someone please tell me where I can add "WITH UR" in the above query.
PSA: include your Db2 platform/version in your posts; also consider using a platform-specific db2 tag if applicable (db2-luw or db2i).
The isolation-clause appears after the fullselect in the documentation, so your code should be:
#Query( "Select T1.ID" +
", T1.CD" +
", T1.Date" +
", T1.Name" +
" From Table1 T1" +
" Join Table2 T2 " +
" On T1.ID = T2.ID" +
" Where T1.CD in
("test1","test2")" +
" And NOT EXISTS" +
" (Select 1 From Table2 T3"
+
" Where T3.ID = T1.ID +
" And T3.Name NOT
IN('P','Q','R')"+
" ) WITH UR")
List<Object[]> methodToRetrieve
(#Param("sequence")String
sequence,#Param("code") code);
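As an alternative (not part of the original answer, just a hedged sketch): on Db2 the same uncommitted-read behaviour can often be obtained by lowering the transaction isolation level instead of editing the SQL string, since JDBC's READ_UNCOMMITTED maps to Db2's UR isolation. Whether it takes effect depends on how the DataSource and transaction manager propagate the setting.

import java.util.List;
import org.springframework.transaction.annotation.Isolation;
import org.springframework.transaction.annotation.Transactional;

// Hypothetical service method; 'repository' is an injected bean exposing methodToRetrieve.
@Transactional(isolation = Isolation.READ_UNCOMMITTED, readOnly = true)
public List<Object[]> retrieveWithUr(String sequence, String code) {
    // Statements run inside this transaction use uncommitted read (UR) on Db2.
    return repository.methodToRetrieve(sequence, code);
}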

spring data jpa update of multiple rows doesn't work

Some time ago my query worked fine for a single-row update. Now I have to modify this query to update multiple rows. The query is native and uses PostgreSQL and PostGIS.
The old query:
@Modifying
@Transactional
@Query(value = "WITH tmp AS (SELECT ST_Difference( (SELECT ST_Buffer(ST_Union(ST_Buffer(a.area\\:\\:geometry, 0.002)), -0.002) \n" +
" FROM mydb.city_area a, mydb.dis_city d, mydb.city c \n" +
" where c.id_city=d.id_city and d.id_dis=?1 \n" +
" and c.cod_city=a.cod_city), \n" +
" (ST_Difference( ST_GeomFromGeoJSON(?2)\\:\\:geometry, (SELECT ST_Union(a.area\\:\\:geometry) \n" +
" FROM mydb.city_area a, mydb.dis_city d, mydb.city c \n" +
" where c.id_city=d.id_city and d.id_dis=?1 \n" +
" and c.cod_city=a.cod_city and c.full_area=true) \n" +
" )) \n" +
" ) AS final_area)\n" +
"UPDATE mydb.dis_area SET new_area=(SELECT final_area FROM tmp), " +
"id_type=3 " +
"WHERE id_dis=?1 ",
nativeQuery = true)
Integer insertShape(Integer id, String shapeGeoJson);
In the new query I added some parameters to @Modifying, as stated here:
@Modifying(flushAutomatically = true, clearAutomatically = true)
@Transactional
@Query(value = "WITH tmp AS (SELECT ST_Difference( (SELECT ST_Buffer(ST_Union(ST_Buffer(a.area\\:\\:geometry, 0.002)), -0.002) \n" +
" FROM mydb.city_area a, mydb.dis_city d, mydb.city c \n" +
" where c.id_city=d.id_city and d.id_dis=?1 \n" +
" and c.cod_city=a.cod_city), \n" +
" (ST_Difference( ST_GeomFromGeoJSON(?2)\\:\\:geometry, (SELECT ST_Union(a.area\\:\\:geometry) \n" +
" FROM mydb.city_area a, mydb.dis_city d, mydb.city c \n" +
" where c.id_city=d.id_city and d.id_dis=?1 \n" +
" and c.cod_city=a.cod_city and c.full_area=true) \n" +
" )) \n" +
" ) AS final_area)\n" +
"UPDATE mydb.dis_area SET new_area=(SELECT final_area FROM tmp), " +
"id_type=3 " +
"WHERE id_dis_aree=(select id_dis_aree from dis_area where id_dis=?1) ",
nativeQuery = true)
Integer insertShape(Integer id, String shapeGeoJson);
But sadly this change doesn't have any effect.
(If I launch the query directly in PostgreSQL, it runs perfectly.)
How can I solve this?
Edit: I added the query, but it works in PostgreSQL. The only difference is that the old version, WHERE id_dis=?1, targets a single row, while the new WHERE id_dis_aree=(select id_dis_aree from dis_area where id_dis=?1) targets multiple rows.
The pair id_dis_aree and id_dis is a primary key. Two or more records could have the same id_dis_aree and different id_dis. So with the second query I fetch id_dis_aree from id_dis, in order to affect more rows.
Edit2: I did 2 tests:
Substituting the last subselect directly with a fixed, hard-wired id value: WHERE id_dis_aree=123456. This way it works. It could serve as a workaround: fetching id_dis_aree first and then calling the query.
Substituting the last subselect with this: WHERE id_dis_aree IN (select id_dis_aree from dis_area where id_dis=?1). Doesn't work. (Note: the subselect always returns a single value.)
I didn't find a true solution, just a workaround:
I fetched the value of the subquery (select id_dis_aree from dis_area where id_dis=?1) in the @Service by calling disAreeRepository.findByIdIdDis(idDis).getId().getIdDisAree():
@Transactional
public Integer insertDis(Integer idDis, String shapeGeoJson) {
    return disAreeRepository.insertShape(
            idDis,
            shapeGeoJson,
            disAreeRepository.findByIdIdDis(idDis).getId().getIdDisAree()
    );
}
And then passed it to the @Repository native query as a third parameter:
@Modifying(flushAutomatically = true, clearAutomatically = true)
@Query(value = "WITH tmp AS (SELECT ST_Difference( (SELECT ST_Buffer(ST_Union(ST_Buffer(a.area\\:\\:geometry, 0.002)), -0.002) \n" +
" FROM mydb.city_area a, mydb.dis_city d, mydb.city c \n" +
" where c.id_city=d.id_city and d.id_dis=?1 \n" +
" and c.cod_city=a.cod_city), \n" +
" (ST_Difference( ST_GeomFromGeoJSON(?2)\\:\\:geometry, (SELECT ST_Union(a.area\\:\\:geometry) \n" +
" FROM mydb.city_area a, mydb.dis_city d, mydb.city c \n" +
" where c.id_city=d.id_city and d.id_dis=?1 \n" +
" and c.cod_city=a.cod_city and c.full_area=true) \n" +
" )) \n" +
" ) AS final_area)\n" +
"UPDATE mydb.dis_area SET new_area=(SELECT final_area FROM tmp), " +
"id_type=3 " +
"WHERE id_dis_aree=?3 ",
nativeQuery = true)
Integer insertShape(Integer id, String shapeGeoJson, Integer idDisAree);

Count using native SQL in Spring Data JPA returns 0

I have a problem with one query/interface:
#Query(value = "SELECT count(1) FROM que_table que " +
"WHERE que.CREATED_DATE >= (SELECT CREATED_DATE " +
" FROM que_table " +
" WHERE id = :selectedId " +
" ORDER BY CREATED_DATE DESC LIMIT 1) " +
"AND que.QUEUE_STATUS in (:queueStatuses)", nativeQuery = true)
Long countCurrentPosition(@Param("selectedId") String selectedId, @Param("queueStatuses") Set<QueueStatus> queueStatuses);
I connect to MySQL using Spring Data JPA.
When I run this query in the console, it works perfectly.
What is wrong here?
Thanks in advance.
Use the string names of the QueueStatus enum:
#Query(value = "SELECT count(1) FROM que_table que " +
"WHERE que.CREATED_DATE >= (SELECT CREATED_DATE " +
" FROM que_table " +
" WHERE id = :selectedId " +
" ORDER BY CREATED_DATE DESC LIMIT 1) " +
"AND que.QUEUE_STATUS in (:queueStatuses)", nativeQuery = true)
Long countCurrentPosition(@Param("selectedId") String selectedId, @Param("queueStatuses") Set<String> queueStatuses);
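A minimal sketch of the caller side (the repository bean name is hypothetical): convert the Set<QueueStatus> to the enum names before invoking the method, so the native query binds plain strings that MySQL can compare against QUEUE_STATUS.

import java.util.Set;
import java.util.stream.Collectors;

// Map each enum constant to its name, e.g. a constant WAITING becomes the string "WAITING".
Set<String> statusNames = queueStatuses.stream()
        .map(Enum::name)
        .collect(Collectors.toSet());

// queueRepository is a hypothetical injected repository bean exposing countCurrentPosition.
Long position = queueRepository.countCurrentPosition(selectedId, statusNames);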

Subqueries in a JPA Query

This is my database query:
select v.FK_EQUIPMENT, sum(fuel) as fuel,v1.END_TIME_METER
from vp_reports v ,
(select distinct FK_EQUIPMENT,END_TIME_METER
from vp_reports v
where MODIFIED_ON in (select max(MODIFIED_ON)
from vp_reports
where END_TIME_METER is not null
group by FK_EQUIPMENT )) v1
where v.FK_EQUIPMENT = v1.FK_EQUIPMENT
group by v.FK_EQUIPMENT,v1.END_TIME_METER;
and I am writing the same in JPA:
#Query("select new Reports (v.fkEquipment, sum(fuel) as fuel, v1.endTimeMeter) "
+ " from Reports v , "
+ " (select distinct fkEquipment, endTimeMeter from Reports v "
+ " where modifiedOn in ( select max(modifiedOn) from Reports where endTimeMeter is not null "
+ " group by fkEquipment)) v1 "
+ " where v.fkEquipment = v1.fkEquipment "
//+ " and v.mrfId= ?1 "
//+ " and v.shift in (?2) and v.fkStatusDei= ?3 and v.assignedDate between to_date(?4, 'yyyy-mm-dd') and to_date(?5, 'yyyy-mm-dd') "
+ " GROUP BY v.fkEquipment,v1.endTimeMeter ")
List<Reports> findByMrfIdAndShiftAndFkStatusDeiAndAssignedDate(String mrfId, String shift, long fkStatusDei, String from, String to);
and I am getting the error below:
Caused by: org.hibernate.hql.internal.ast.QuerySyntaxException: unexpected token: ( near line 1, column 107
[select new Reports (v.fkEquipment, sum(fuel) as fuel, v1.endTimeMeter) from com.dei.domain.Reports v ,
(select distinct fkEquipment, endTimeMeter from com.dei.domain.Reports v where modifiedOn in
(select max(modifiedOn) from com.dei.domain.Reports where endTimeMeter is not null group by fkEquipment)) v1
where v.fkEquipment = v1.fkEquipment GROUP BY v.fkEquipment,v1.endTimeMeter]
    at org.hibernate.hql.internal.ast.QuerySyntaxException.convert(QuerySyntaxException.java:91)
    at org.hibernate.hql.internal.ast.ErrorCounter.throwQueryException(ErrorCounter.java:109)
    at org.hibernate.hql.internal.ast.QueryTranslatorImpl.parse(QueryTranslatorImpl.java:304)
    at org.hibernate.hql.internal.ast.QueryTranslatorImpl.doCompile(QueryTranslatorImpl.java:203)
Could you please help me? How do I convert my DB query to JPQL?
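For what it's worth, the parser error occurs because HQL/JPQL (at least in Hibernate 5.x) does not accept a subquery in the FROM clause. One common workaround, sketched here as an assumption rather than a confirmed fix, is to keep the original SQL and declare it as a native query returning raw rows:

// Hypothetical repository method; the SQL is the original database query from above.
@Query(nativeQuery = true, value =
        "select v.FK_EQUIPMENT, sum(fuel) as fuel, v1.END_TIME_METER " +
        "from vp_reports v, " +
        "     (select distinct FK_EQUIPMENT, END_TIME_METER " +
        "      from vp_reports v " +
        "      where MODIFIED_ON in (select max(MODIFIED_ON) " +
        "                            from vp_reports " +
        "                            where END_TIME_METER is not null " +
        "                            group by FK_EQUIPMENT)) v1 " +
        "where v.FK_EQUIPMENT = v1.FK_EQUIPMENT " +
        "group by v.FK_EQUIPMENT, v1.END_TIME_METER")
List<Object[]> findFuelTotals();  // each Object[] holds {FK_EQUIPMENT, fuel, END_TIME_METER}

Note that a constructor expression like new Reports(...) is not available in a native query, so the results come back as Object[] rows (or via @SqlResultSetMapping if typed results are needed).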