How to delete records using JdbcTemplate with a LIKE clause in the WHERE condition - spring-data

I want to delete a huge amount of data (over 100K rows) from a table using Spring JdbcTemplate. The WHERE clause contains a LIKE condition, e.g. DELETE FROM TABLE_NAME WHERE NAME LIKE 'ABC%'. If possible, how can I use batching? Please suggest. Thanks.
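One possible approach (not from the original thread) is to fetch the matching primary keys first and then delete them in fixed-size batches with JdbcTemplate.batchUpdate. The sketch below assumes a numeric primary key column named ID, which the question does not mention, so treat the names as illustrative:

import java.util.List;
import org.springframework.jdbc.core.JdbcTemplate;

public class NameDeleter {

    private final JdbcTemplate jdbcTemplate;

    public NameDeleter(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // Deletes all rows whose NAME matches the pattern, in batches keyed by the (assumed) ID column.
    public void deleteByNamePattern(String pattern, int batchSize) {
        // Fetch only the primary keys of the matching rows.
        List<Long> ids = jdbcTemplate.queryForList(
                "SELECT ID FROM TABLE_NAME WHERE NAME LIKE ?", Long.class, pattern);

        // Delete in batches so each JDBC round trip carries at most batchSize statements.
        jdbcTemplate.batchUpdate(
                "DELETE FROM TABLE_NAME WHERE ID = ?",
                ids,
                batchSize,
                (ps, id) -> ps.setLong(1, id));
    }
}

Called as new NameDeleter(jdbcTemplate).deleteByNamePattern("ABC%", 1000), each batch then deletes up to 1000 rows; for very large key lists the SELECT itself could additionally be paged to keep memory bounded.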

Related

select all columns except two in q kdb historical database

In output I want to select all columns except two columns from a table in q/kdb historical database.
I tried running the query below, but it does not work on the HDB.
delete colid,coltime from table where date=.z.d-1
It fails with the following error:
ERROR: 'par
(trying to update a physically partitioned table)
I referred to https://code.kx.com/wiki/Cookbook/ProgrammingIdioms#How_do_I_select_all_the_columns_of_a_table_except_one.3F but it did not help.
How can I display all columns except two in a kdb historical database?
The reason you are getting the 'par error is that the table is partitioned.
The error is documented here
trying to update a partitioned table
You cannot directly update or delete anything on a partitioned table (there is a separate db maintenance script for that).
The query you have used as a fix basically selects the data into memory first (temporarily) and then deletes the columns, which is why it works:
delete colid,coltime from select from table where date=.z.d-1
You can try the following functional form :
c:cols[t] except `p
?[t;enlist(=;`date;2015.01.01);0b;c!c]
Could try a functional select:
?[table;enlist(=;`date;.z.d);0b;{x!x}cols[table]except`colid`coltime]
Here the last argument is a dictionary mapping output column names to the columns to extract, which tells the query what to return. Instead of deleting the columns you specified, this selects all but those two, which is more or less the same query.
To see what the functional form of a query is you can run something like:
parse"select colid,coltime from table where date=.z.d"
And it will output the arguments to the functional select.
You can read more on functional selects at code.kx.com.
Only select queries work on partitioned tables, which you worked around by structuring your query so that you first select the table into memory and then delete the columns you do not want.
If you have a large number of columns and don't want to create a bulky select query you could use a functional select.
?[table;();0b;{x!x}((cols table) except `colid`coltime)]
This shows all columns except the given subset. The column clause expects a dictionary, hence the function {x!x} is used to convert the list of column names into a dictionary. See more information here:
https://code.kx.com/q/ref/funsql/
As nyi mentioned, if you want to permanently delete columns from a historical database you can use the deleteCol function in the dbmaint tools: https://github.com/KxSystems/kdb/blob/master/utils/dbmaint.md

If a Postgres DB has unique IDs across its tables, how do you find a row using its ID without knowing its table?

Following the blog of Rob Conery, I have a set of unique IDs across the tables of my Postgres DB.
Now, using these unique IDs, is there a way to query a row in the DB without knowing which table it is in? Or can those tables be indexed such that if the row is not in the current table, I simply move on and query the next table?
In short - if you did not prepare for that - then no. You can prepare for it by generating your own UUIDs. For instance, PG has UUIDs that preserve order. Also, UUID v5 has something like namespaces, so you can build a hierarchy. However, that is done by hashing the namespace, and I don't know of a tool to do the opposite inside PG.
If you know all possible tables in advance, you could prepare a query that simply UNIONs a search with a tagged type over all the tables. In the case of two tables named comments and news, you could do something like:
PREPARE type_of_id(uuid) AS
SELECT id, 'comments' AS type
FROM comments
WHERE id = $1
UNION
SELECT id, 'news' AS type
FROM news
WHERE id = $1;
EXECUTE type_of_id('8ecf6bb1-02d1-4c04-8875-f1da62b7f720');
Automatically generating this could probably be done by querying pg_catalog.pg_tables and generating the relevant query on the fly.
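As a rough JDBC sketch of that idea - it assumes every table in the public schema really does have a uuid column named id, as the question implies, and it will fail on any table that does not:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;
import java.util.StringJoiner;
import java.util.UUID;

public class FindRowByUuid {

    // Returns the names of the tables in the public schema that contain a row with the given id.
    public static List<String> tablesContaining(Connection conn, UUID id) throws Exception {
        // Collect all user tables from the catalog.
        List<String> tables = new ArrayList<>();
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(
                     "SELECT tablename FROM pg_catalog.pg_tables WHERE schemaname = 'public'")) {
            while (rs.next()) {
                tables.add(rs.getString(1));
            }
        }

        // Build one UNION ALL query that tags each hit with its table name.
        StringJoiner union = new StringJoiner(" UNION ALL ");
        for (String t : tables) {
            union.add("SELECT '" + t + "' AS tbl FROM \"" + t + "\" WHERE id = ?");
        }

        List<String> hits = new ArrayList<>();
        try (PreparedStatement ps = conn.prepareStatement(union.toString())) {
            for (int i = 1; i <= tables.size(); i++) {
                ps.setObject(i, id); // the PostgreSQL JDBC driver maps java.util.UUID to the uuid type
            }
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    hits.add(rs.getString("tbl"));
                }
            }
        }
        return hits;
    }
}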

postgres db trigger to log query type into another table

This problem scenario may sound strange, but I am trying to write a trigger to log the query type into another table, and so far I haven't been able to find anything on Google.
The database I am using is Postgres.
For example:
if I have two tables, table1 and querylog (which has a string field called querytype),
and a SELECT query is executed on table1, I want to insert a row into the querylog table with the querytype field populated with "select".
Does anyone have any idea how to reference the query type in a function that will be called by a trigger?
Triggers do not get called for SELECT queries, so that won't work.
If you want to audit queries, you can use the PostgreSQL log file or tools like pgaudit that hook into PostgreSQL to retrieve and log the information.

JPA - insert into select - copy large amount of records

I would like to copy records with different key values. What is the best way to do so?
In plain sql I would do:
insert into tableX (x1,x2,x3,x4,x5) select 2,T1.x2,T1.x3,T1.x4,T1.x5 from tableX T1
(x1 is my primary key).
I tried writing this query inside the entity's @NamedQuery, but I got org.eclipse.persistence.exceptions.JPQLException, and after searching for a way to write it I understand that this SQL cannot be written inside a NamedQuery - is that correct?
I also tried looping through the object list representing tableX, and for every object I did em.find() or created a new object and then inserted it with em.persist() - but that seems inefficient. (When using find I issue a select for each object, so if I have a list of 2000 records it doesn't make sense to run 2000 selects and then insert with the new key value.)
So my question is: what is the best way to implement copying all the records?
Also, if I get an exception or something goes wrong, I would like to roll back so that I won't end up with only part of the records in my database table.
Thanks In Advance.
You can use any SQL in JPA through a native query. SQL would be best for this type of insert.
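A minimal sketch of the native-query route, reusing the statement from the question and assuming an application-managed (resource-local) EntityManager; with a container-managed JTA transaction the begin/commit/rollback would be handled by the container instead:

import javax.persistence.EntityManager;
import javax.persistence.EntityTransaction;

public class TableXCopier {

    // Runs the INSERT ... SELECT from the question as one native statement inside one transaction.
    public static int copyAll(EntityManager em) {
        EntityTransaction tx = em.getTransaction();
        tx.begin();
        try {
            int copied = em.createNativeQuery(
                    "INSERT INTO tableX (x1, x2, x3, x4, x5) "
                  + "SELECT 2, T1.x2, T1.x3, T1.x4, T1.x5 FROM tableX T1")
                    .executeUpdate();
            tx.commit();
            return copied;
        } catch (RuntimeException e) {
            // Roll back so the table is not left with only part of the copied rows.
            if (tx.isActive()) {
                tx.rollback();
            }
            throw e;
        }
    }
}

Because the copy is a single statement in a single transaction, either all rows are inserted or none are.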
If you need to do anything in Java on the data before inserting it, then you would query the objects, then insert them. Enable batch writing to improve efficiency.
http://java-persistence-performance.blogspot.com/2013/05/batch-writing-and-dynamic-vs.html

Optimize getting counts of rows grouped by first letter in SQLite?

My current query looks something like this:
SELECT SUBSTR(name,1,1), COUNT(*) FROM files GROUP BY SUBSTR(name,1,1)
But it's taking a pretty long time just to do counts on a table that's already indexed by the name column. I saw from this question that some engines might not use indexes correctly for the SUBSTR function, and in fact, sqlite will not use indexes for SUBSTR(string,1,1).
Is there any other approach that would utilize the index and net me some faster queries?
One strategy that is consistent with your access pattern is to add a new indexed column "first_letter" to your table. Use a trigger to set the value on insert and update. Then your query is a simple GROUP BY on first_letter.
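A minimal sketch of that strategy through JDBC, assuming the xerial sqlite-jdbc driver and an illustrative files.db database; the same statements can be run from any SQLite client:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class FirstLetterIndex {
    public static void main(String[] args) throws Exception {
        // "files.db" is an illustrative database file; requires the sqlite-jdbc driver on the classpath.
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:files.db");
             Statement st = conn.createStatement()) {
            // Add the derived column, backfill it, and index it.
            st.executeUpdate("ALTER TABLE files ADD COLUMN first_letter TEXT");
            st.executeUpdate("UPDATE files SET first_letter = SUBSTR(name, 1, 1)");
            st.executeUpdate("CREATE INDEX idx_files_first_letter ON files(first_letter)");

            // Keep the column in sync when rows are inserted or the name changes.
            st.executeUpdate(
                "CREATE TRIGGER files_set_first_letter_ins AFTER INSERT ON files BEGIN "
              + "UPDATE files SET first_letter = SUBSTR(NEW.name, 1, 1) WHERE rowid = NEW.rowid; END");
            st.executeUpdate(
                "CREATE TRIGGER files_set_first_letter_upd AFTER UPDATE OF name ON files BEGIN "
              + "UPDATE files SET first_letter = SUBSTR(NEW.name, 1, 1) WHERE rowid = NEW.rowid; END");

            // The grouped count can now be served from the index:
            // SELECT first_letter, COUNT(*) FROM files GROUP BY first_letter;
        }
    }
}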
Another strategy is to create a shadow table which contains an aggregation of the mother table. This isn't easy, because it is your job as the developer to keep the shadow table consistent with the mother table: every delete, update or insert in the files table needs to be accompanied by a change in the shadow table.
Databases like Oracle have support for materialized views to achieve this automatically, but SQLite doesn't.