I am querying data from the server to update fields in my local DB.
However, the table has more than 100 columns, so I want to know which columns have a new value (different from the previous one) so that I can use only those fields to build the UPDATE command for my local DB.
EXAMPLE: I already have 100 columns in my DB. The same row on the server has been updated, and I have fetched all 100 columns from the server (stored in a list object to build a prepared statement in Java). Only 10 of the 100 columns have changed, and I want to know which 10 they are.
How can I do this with triggers?
Or is there any way other than triggers, like in Cassandra, where inserting with the same PK acts as an update of the row?
If you have to do the update, you can add a RETURNING clause to get the updated rows back.
You can see more in the Postgres documentation: https://www.postgresql.org/docs/current/dml-returning.html
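For example, a minimal JDBC sketch (the table and column names here are made up) that runs the update and reads the row back through RETURNING, so the returned values can be compared column by column with the previously fetched ones:

import java.sql.*;

public class ReturningExample {
    public static void main(String[] args) throws SQLException {
        // adjust the connection details for your environment
        String url = "jdbc:postgresql://localhost:5432/mydb";
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             PreparedStatement ps = conn.prepareStatement(
                 "UPDATE my_table SET col_a = ?, col_b = ? WHERE id = ? RETURNING *")) {
            ps.setString(1, "new value");
            ps.setInt(2, 42);
            ps.setLong(3, 1L);
            // execute() + getResultSet(), because RETURNING produces a result set
            if (ps.execute()) {
                try (ResultSet rs = ps.getResultSet()) {
                    while (rs.next()) {
                        // the row as it looks after the update; compare these values
                        // with the old ones to find the columns that actually changed
                        System.out.println(rs.getString("col_a") + " | " + rs.getInt("col_b"));
                    }
                }
            }
        }
    }
}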
Related
Is there any way to do this? Basically, I'm trying to do a SQL UPDATE ... SET when a matching record for one or more key fields exists in another dataset.
I tried using joins and Merge. Joins seem to take more steps, and Merge appends records instead of updating the correlated rows.
I have to update multiple records in a table in the most efficient way possible, with the least latency and without heavy CPU utilisation. The number of records to update at a time can range from 1 to 1000.
We do not want to lock the database when this update occurs as other services are utilising it.
Note: There are no dependencies generated from this table towards any other table in the system.
After looking in many places, I've narrowed it down to a few ways to do the task:
simple-update: A simple UPDATE query against the table using the already known ids:
Either multiple UPDATE queries (one query for each individual record), or
Usage of an update ... from clause, as mentioned here, as a single query (one query for all records); a rough sketch of this variant is included further below.
delete-then-insert: First delete the outdated data, then insert the updated data with new ids (since there is no dependency on the records, new ids are acceptable)
insert-then-delete: First insert the updated records with new ids, then delete the outdated data using the old ids (since there is no dependency on the records, new ids are acceptable)
temp-table: First insert the updated records into a temporary table, then update the original table from the temporary table, and finally remove the temporary table.
We must not drop the existing table and create a new one in its place
We must not truncate the existing table because we have a huge number of records that we cannot store in the buffer memory
I'm open to any more suggestions.
Also, what will be the impact of making the update all at once vs doing it in batches of 100, 200 or 500?
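For reference, here is a rough Java/JDBC sketch of the single-query simple-update variant, assuming a made-up table my_table(id bigint, value text); it builds one UPDATE ... FROM (VALUES ...) statement covering all the records:

import java.sql.*;
import java.util.*;

public class BulkUpdateSketch {
    // newValues maps id -> new value for every record that needs updating
    public static void updateAll(Connection conn, Map<Long, String> newValues) throws SQLException {
        if (newValues.isEmpty()) {
            return;
        }
        StringBuilder sql = new StringBuilder(
            "UPDATE my_table AS t SET value = v.value FROM (VALUES ");
        List<Object> params = new ArrayList<>();
        int i = 0;
        for (Map.Entry<Long, String> e : newValues.entrySet()) {
            if (i++ > 0) sql.append(", ");
            sql.append("(?::bigint, ?::text)"); // casts so Postgres can type the VALUES list
            params.add(e.getKey());
            params.add(e.getValue());
        }
        sql.append(") AS v(id, value) WHERE t.id = v.id");
        try (PreparedStatement ps = conn.prepareStatement(sql.toString())) {
            for (int p = 0; p < params.size(); p++) {
                ps.setObject(p + 1, params.get(p));
            }
            ps.executeUpdate();
        }
    }
}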
References:
https://dba.stackexchange.com/questions/75631
https://dba.stackexchange.com/questions/230257
As mentioned by @Frank Heikens in the comments, different people will have different statistics based on their system design. I did some checks and found some insights to share from one of my development systems.
Configurations of the system used:
AWS
Engine: PostgreSQL
Engine version: 12.8
Instance class: db.m6g.xlarge
Instance vCPU: 4
Instance RAM: 16GB
Storage: 1000 GiB
I used a Lambda function and the pg package to write data into a table (default FILLFACTOR) that contains 3,409,304 records.
Both the Lambda function and the database were in the same region.
UPDATE 1000 records into the database with a single query
Run    Time taken
1      143.78ms
2      115.277ms
3      98.358ms
4      98.065ms
5      114.78ms
6      111.261ms
7      107.883ms
8      89.091ms
9      88.42ms
10     88.95ms
UPDATE 1000 records in the database in 2 concurrent batches of 500 records (one query per batch)
Run    Time taken
1      43.786ms
2      48.099ms
3      45.677ms
4      40.578ms
5      41.424ms
6      44.052ms
7      42.155ms
8      37.231ms
9      38.875ms
10     39.231ms
DELETE + INSERT 1000 records into the database
Run    Time taken
1      230.961ms
2      153.159ms
3      157.534ms
4      151.055ms
5      132.865ms
6      153.485ms
7      131.588ms
8      135.99ms
9      287.143ms
10     175.562ms
I did not proceed to check updating records with the help of another buffer table because I had already found my answer.
I looked at the database metrics graphs provided by AWS, and it was clear that DELETE + INSERT was more CPU intensive. From the statistics shared above, DELETE + INSERT also took more time compared to UPDATE.
If updates are done concurrently in batches, yes, updates will be faster, depending on the number of connections (a connection pool is recommended).
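For illustration only, a rough Java sketch of the concurrent-batch approach (my test used a Lambda function with the pg package; this version assumes a pooled DataSource such as HikariCP and a made-up table my_table(id, value)):

import javax.sql.DataSource;
import java.sql.*;
import java.util.*;
import java.util.concurrent.*;

public class ConcurrentBatchUpdate {
    // rows: each element is { id (Long), new value (String) }; batchSize e.g. 500
    public static void updateInBatches(DataSource pool, List<Object[]> rows, int batchSize)
            throws InterruptedException {
        ExecutorService executor = Executors.newFixedThreadPool(4); // keep <= connection pool size
        for (int start = 0; start < rows.size(); start += batchSize) {
            List<Object[]> batch = rows.subList(start, Math.min(start + batchSize, rows.size()));
            executor.submit(() -> {
                // each batch runs on its own connection taken from the pool
                try (Connection conn = pool.getConnection();
                     PreparedStatement ps = conn.prepareStatement(
                         "UPDATE my_table SET value = ? WHERE id = ?")) {
                    for (Object[] row : batch) {
                        ps.setString(1, (String) row[1]);
                        ps.setLong(2, (Long) row[0]);
                        ps.addBatch();
                    }
                    ps.executeBatch();
                } catch (SQLException e) {
                    e.printStackTrace();
                }
            });
        }
        executor.shutdown();
        executor.awaitTermination(1, TimeUnit.MINUTES);
    }
}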
Using a buffer table, truncate, and other methods might be more suitable if you need to update almost all the records in a giant table, though I currently do not have metrics to support this. However, for a limited number of records, UPDATE is a fine choice to proceed with.
Also, be mindful that if DELETE + INSERT fails partway, you might lose records, and if INSERT + DELETE fails, you might end up with duplicate records.
I'm trying to populate a prices and quotes database using AquaQ's TorQ. For this purpose I use the .loader.loadallfiles function. The difference is that prices is daily data while quotes is intraday data, e.g. FX rates.
I do the loading as follows:
/- check the location of database directory
hdbdir:hsym `$getenv[`KDBHDB]
/hdbdir:#[value;`hdbdir;`:hdb]
rawdatadir:hsym `$getenv[`KDBRAWDATA]
target:hdbdir;
rawdatadir:hsym `$("" sv (getenv[`KDBRAWDATA]; "prices"));
.loader.loadallfiles[`headers`types`separator`tablename`dbdir`partitioncol`partitiontype`dataprocessfunc!(`date`sym`open`close`low`high`volume;"DSFFFFF";enlist ",";`prices;target;`date;`year;{[p;t] `date`sym`open`close`low`high`volume xcols update volume:"i"$volume from t}); rawdatadir];
rawdatadir:hsym `$("" sv (getenv[`KDBRAWDATA]; "quotes"));
.loader.loadallfiles[`headers`types`separator`tablename`dbdir`partitioncol`partitiontype`dataprocessfunc!(`date`sym`bid`ask;"ZSFF";enlist ",";`quotes;target;`date;`year;{[p;t] `date`sym`bid`ask`mid xcols update mid:(bid+ask)%2.0 from t}); rawdatadir];
and this works fine. However, when loading the database I get errors attempting to select from either table. The reason is that for some partitions there isn't any prices data or there isn't any quotes data, e.g. attempting to:
quotes::`date`sym xkey select from quotes;
fails with an error saying that the partition for the year, e.g. hdb/2000/, doesn't exist, which is true: there are only prices for the year 2000 and no quotes.
As I see it, there are two possible solutions, but I don't know how to implement either:
Tell .loader.loadallfiles to create an empty schema for quotes and prices in partitions for which there isn't any data.
While loading the database, gracefully handle the case where there is no data for a given partition, i.e. have select from ... where ignore empty partitions.
Try using .Q.chk[`:hdb]
Where `:hdb is the filepath of your HDB
This fills in missing tables, which will then allow you to perform queries.
Alternatively you can use .Q.bv, where the wiki states:
If your table exists in the latest partition (so there is a prototype for the schema), then you could use .Q.bv[] to create empty tables on the fly at run-time without having to create those empties on disk.
I have a list of records, and I want to perform the following tasks using Spring's JdbcTemplate:
(1) Update existing records
(2) Insert new records.
I don't know how to do this using Spring's jdbcTemplate.
Any insight?
You just use one of the various forms of batchUpdate for the update. Then you check the return value, which will contain 1 for each row that was present and 0 otherwise. For the latter, you perform another batchUpdate with the insert statements.
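A minimal sketch of that flow, assuming a made-up table my_table(id, value) and a trivial Record holder (note that some JDBC drivers report Statement.SUCCESS_NO_INFO instead of exact counts when statements are batched):

import org.springframework.jdbc.core.JdbcTemplate;
import java.util.*;

public class UpsertService {
    public static class Record {
        public long id;
        public String value;
    }

    private final JdbcTemplate jdbcTemplate;

    public UpsertService(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    public void saveAll(List<Record> records) {
        // 1. try to UPDATE every record in one batch
        List<Object[]> updateArgs = new ArrayList<>();
        for (Record r : records) {
            updateArgs.add(new Object[] { r.value, r.id });
        }
        int[] updated = jdbcTemplate.batchUpdate(
            "UPDATE my_table SET value = ? WHERE id = ?", updateArgs);

        // 2. rows that reported 0 affected rows did not exist yet -> INSERT them in a second batch
        List<Object[]> insertArgs = new ArrayList<>();
        for (int i = 0; i < updated.length; i++) {
            if (updated[i] == 0) {
                Record r = records.get(i);
                insertArgs.add(new Object[] { r.id, r.value });
            }
        }
        if (!insertArgs.isEmpty()) {
            jdbcTemplate.batchUpdate(
                "INSERT INTO my_table (id, value) VALUES (?, ?)", insertArgs);
        }
    }
}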
In my application, as I release a new version, I alter my existing SQLite database by adding new tables or altering existing ones.
I have written these statements in a script/text file and want to import it as a batch directly into the existing database, where the queries will execute once.
I know that I can do the same by writing each ALTER query separately, but this would also increase execution time and the time spent writing the queries.
Any ideas on how I can achieve this?
One thing that I used to do was keep an array of columns for each table, like
persons = {[ fname, lname, address, zip ]}
Then I also have a version array that tells me that for version 1, persons has 4 columns.
Then, when I update the application and add e.g. gsm to persons, I update the array and the count. Then I run a query against sqlite_master and parse the data;
you can run '.schema persons' to get the CREATE statement. This is work you do only once, and this way you never run ALTER TABLE on tables that are already up to date. You need to be organized.
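A rough Java/JDBC sketch of that idea, assuming the sqlite-jdbc driver and the persons table above; here PRAGMA table_info is used to read the existing columns instead of parsing the CREATE statement from sqlite_master:

import java.sql.*;
import java.util.*;

public class SchemaUpgrader {
    // the columns the current application version expects on persons
    private static final List<String> PERSON_COLUMNS =
        Arrays.asList("fname", "lname", "address", "zip", "gsm");

    public static void upgradePersons(Connection conn) throws SQLException {
        // read the columns that actually exist in the database right now
        Set<String> existing = new HashSet<>();
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("PRAGMA table_info(persons)")) {
            while (rs.next()) {
                existing.add(rs.getString("name"));
            }
        }
        // add whatever is missing; tables that are already up to date are left alone
        try (Statement st = conn.createStatement()) {
            for (String col : PERSON_COLUMNS) {
                if (!existing.contains(col)) {
                    st.executeUpdate("ALTER TABLE persons ADD COLUMN " + col + " TEXT");
                }
            }
        }
    }

    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:app.db")) {
            upgradePersons(conn);
        }
    }
}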