I have one source table and one destination table. I want to copy data from the source table to the destination table. The condition is that while copying data from source to destination, the existing data in the destination table should be erased and the data from the source table should be copied in. I have to do the JPA implementation for this. I am new to JPA. Can someone help me with how I can implement this in JPA with Spring Data JPA?
I think you want to implement something like this with JPA, using native queries on a Spring Data repository.
First, insert the data from the source table into the desired destination table:
@Modifying
@Query(value = "INSERT INTO dest SELECT * FROM src WHERE id = 1", nativeQuery = true)
void copyToDestination();
Then delete that data from the source table:
@Modifying
@Query(value = "DELETE FROM src WHERE id = 1", nativeQuery = true)
void deleteFromSource();
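Note that the question also asks for the existing destination data to be erased before copying. A plain SQL sketch of that variant (assuming both tables have the same structure) could look like the following; each statement can likewise be wrapped in a @Modifying native @Query method:
TRUNCATE TABLE dest;
INSERT INTO dest
SELECT * FROM src;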
I have a table ("file_upload") in a PostgreSQL (11.8) database, which we use for storing the original CSV file that was used for loading some data into our system (I guess the question of best practices is up for debate here, but for now let's just assume it is).
The files are stored in a column ("file") which is of the data type "bytea".
So one row of this table contains
id - file_name - upload_date - uploaded_by - file <-- this being the column in question.
This column then stores the data of a csv file:
item_id;item_type_id;item_date;item_value
11;1;2022-09-22;123.45
12;4;2022-09-20;235.62
13;1;2022-09-21;99.99
14;2;2022-09-19;654.32
What I need to be able to do is query this column, extract the data and store it in a temporary table (note: the structure of these CSV files is always the same, so the table structure can be pre-defined and does not have to be dynamic or anything).
Any help would be greatly appreciated.
Use
COPY (SELECT file FROM file_upload WHERE id =1)
TO '/tmp/blob' (FORMAT 'binary');
to re-export the data to a file. Then create the temporary table and use COPY to read them in again. Make sure to use the proper ENCODING.
You can wrap that in a loop that performs this operation for all rows in your table.
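To make the temporary-table step concrete, here is a minimal sketch, assuming the exported file ends up on disk as the original ';'-delimited CSV text with a header row (the column types are guesses based on the sample data):
CREATE TEMPORARY TABLE file_upload_staging (
    item_id       integer,
    item_type_id  integer,
    item_date     date,
    item_value    numeric
);
COPY file_upload_staging
FROM '/tmp/blob'
WITH (FORMAT csv, DELIMITER ';', HEADER true);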
I have a MANAGED table in Delta format in Databricks and I wanted to change it to EXTERNAL to make sure dropping the table would not affect the data. However, the following code did not change the table TYPE and just added a new table property. How can I correctly convert my managed table to an external table?
%sql
alter table db_delta.table1 SET TBLPROPERTIES('EXTERNAL'='TRUE')
Describe Table:
# Detailed Table Information
Name                db_delta.table1
Location            dbfs:/user/hive/warehouse/db_delta.db/table1
Provider            delta
Type                MANAGED
Table Properties    [EXTERNAL=TRUE,overwriteSchema=true]
I found the following workaround for the above scenario.
1. Copy the managed table location to the external location:
dbutils.fs.cp('dbfs:/user/hive/warehouse/amazon_data_agg','abfss://data@amazondata.dfs.core.windows.net/amzon_aggred/',True)
2. Now drop the managed table:
drop table amazon_data_agg;
3. Now create the external table using the schema of the already created table; if there is a schema mismatch you will get an error.
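A sketch of that step in Databricks SQL, reusing the table name and external location from this example (with Delta, the schema is picked up from the data already written to that location):
CREATE TABLE amazon_data_agg
USING DELTA
LOCATION 'abfss://data@amazondata.dfs.core.windows.net/amzon_aggred/';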
4. Now you can append and do all operations:
df_agg.write.format('delta').mode('append').option('path','abfss://data@amazondata.dfs.core.windows.net/amzon_aggred/').saveAsTable('amazon_data_agg')
In SQL stored procedures, we have the option of creating a temporary table "#temp" whose structure is the same as that of the table it refers to. Here we don't explicitly create and define the structure of the "#temp" table.
Do we have a similar option in an HQL Hive script to create a temp table at run time without actually defining the table structure, so that I can dump data into the temp table and use it? The code below shows an example of a #temp table in SQL.
SELECT name, age, gender
INTO #MaleStudents
FROM student
WHERE gender = 'Male'
Hive has the concept of temporary tables, which are local to a user's session. These tables behave just like any other table, and can be created using CTAS commands too. Hive automatically deletes all temporary tables at the end of the Hive session in which they are created.
Read more about them in the Hive Documentation and on DWGEEK.
You can create a simple temporary table and perform any operation on it.
Once you are done with your work and log out of your session, it will be deleted automatically.
The syntax for a temporary table is:
CREATE TEMPORARY TABLE TABLE_NAME_HERE (key string, value string)
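For the example from the question, a CTAS sketch of the equivalent temporary table (assuming a Hive table named student with the same columns as in the SQL Server example) might look like:
CREATE TEMPORARY TABLE male_students AS
SELECT name, age, gender
FROM student
WHERE gender = 'Male';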
I am trying to write an insert query into a backup database. I am writing the place and entities tables into this database. The issue is that entities is linked to place via the place.id column. I added a column place.original_id to the place table to store its original id. So now that I have entered place into the new database, its id column has changed, but I have the original id stored so I can still link the entities table to it. I am trying to figure out how to write entities so it gets the new id.
So far I am at this point:
insert into entities_backup (id, place_id)
select
nextval('public.entities_backup_id_seq'),
(select id from places where original_id = (select place_id from entities) as place_id
from
entities
I know I am missing something because this does not work. I need to grab the id column from places where entities.place_id = places.original_id. Any help would be great.
I think this is what you want:
insert into entities_backup (id, place_id)
select nextval('public.entities_backup_id_seq'), places.id
from places, entities
where places.original_id = entities.place_id;
I am trying to write an insert query into a backup database. I am writing the place and entities tables into this database. The issue is that entities is linked to place via the place.id column. I added a column place.original_id to the place table to store its original id. So now that I have entered place into the new database, its id column has changed, but I have the original id stored so I can still link the entities table to it.
It would be simpler to not have this problem in the first place.
Rather than trying to fix this up after the fact, the better solution is to dump and load places and entities complete with their primary and foreign keys intact. Oracle's EXPORT or a utility such as ora2pg should be able to do it.
Sorry I can't say more. I know Postgres, not Oracle.
I have a couple of questions regarding SqlBulkCopy.
I need to insert a few million records into a table from my business component. I am considering the following approaches:
Use SqlBulkCopy to insert data directly into the destination table. Because the table has existing data and an index (which I cannot change), I won't get bulk-logging behavior and cannot apply TabLock.
Use SqlBulkCopy to insert data into a heap in temp db in one go (batchsize = 0). Once completed, use a stored procedure to move data from temp table to destination table.
Use SqlBulkCopy to insert data into a heap in temp db but specify a batchsize. Once completed, use a stored procedure to move data from temp table to destination table.
Split data and use multiple SqlBulkCopy to insert into multiple heaps in temp db concurrently. After each chunk of data is uploaded, use a stored procedure to move data from temp table to destination table.
Which approach has the shortest end-to-end time?
If I use SqlBulkCopy to upload data into a table with an index, will I be able to query the table at the same time?