How to use a LOAD cursor in DB2 to insert null entries for the newly created definition of an existing table

I have a requirement where I had to modify the definition of an existing table (my_Table) by adding new columns in the middle of the table.
I took a backup of the table, dropped the existing one, and recreated it with the new definition.
Now I want to load all the old data from the backup into my newly modified table using a cursor load, so that the original columns are populated with the data from the backup and the newly added columns contain ' ' (blank).
How can I achieve this in DB2?
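A minimal sketch of the cursor-load approach, assuming DB2 for LUW, that the backup was saved as a table (my_table_backup here) so a cursor can select from it, and hypothetical column names (col1, col2 for the original columns; new_col for an added one): declare a cursor over the backup selecting only the original columns, then load it into the new table naming only those columns. Nullable columns omitted from the target list are loaded as NULL.
-- run from the DB2 command line processor, in one connected session
DECLARE backup_cur CURSOR FOR SELECT col1, col2 FROM my_table_backup;
LOAD FROM backup_cur OF CURSOR INSERT INTO my_table (col1, col2);
-- optional: turn the NULLs in the new columns into blanks afterwards
-- UPDATE my_table SET new_col = ' ' WHERE new_col IS NULL;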

Related

Change Databricks delta table type to external

I have a MANAGED table in delta format in Databricks, and I want to change it to EXTERNAL so that dropping the table will not affect the data. However, the following code did not change the table TYPE; it just added a new table property. How can I correctly convert my managed table to an external table?
%sql
alter table db_delta.table1 SET TBLPROPERTIES('EXTERNAL'='TRUE')
Describe Table:
# Detailed Table Information
Name              db_delta.table1
Location          dbfs:/user/hive/warehouse/db_delta.db/table1
Provider          delta
Type              MANAGED
Table Properties  [EXTERNAL=TRUE,overwriteSchema=true]
I found the following workaround for the above scenario.
1. Copy the managed table's data to the external location:
dbutils.fs.cp('dbfs:/user/hive/warehouse/amazon_data_agg','abfss://data@amazondata.dfs.core.windows.net/amzon_aggred/',True)
2. Drop the managed table:
drop table amazon_data_agg;
3. Create the external table with the same schema as the dropped table, as sketched below; if there is a schema mismatch you will get an error.
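A sketch of step 3, assuming the Delta files were copied to the abfss path above; with delta the schema is read from the existing transaction log, so no column list is needed:
create table amazon_data_agg
using delta
location 'abfss://data@amazondata.dfs.core.windows.net/amzon_aggred/';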
4. Now you can append and perform all other operations:
df_agg.write.format('delta').mode('append').option('path','abfss://data@amazondata.dfs.core.windows.net/amzon_aggred/').saveAsTable('amazon_data_agg')

Postgres View, after alter table to change table name, View still queries it?

I'm using a Postgres database. I have an existing table, and several existing views that query that table.
Call the table 'contacts'.
I altered the table, changing its name to 'contacts_backup'. I then created a new table with the same name the old table used to have, 'contacts'.
Now it appears that if I query the existing views, the data is still retrieved from the renamed table, 'contacts_backup', and not from the new table, 'contacts'.
How can this be? How can I update the views to query the new table of the same name, and not the renamed 'contacts_backup'?
My new table is actually a foreign table, but shouldn't the principle be the same? I was expecting the existing views to query against the new table, not the old renamed one.
What is an efficient way to update the existing views to query from the new table?
This is because PostgreSQL does not store the view definition as an SQL string, but as a parsed query tree.
These parsed query trees don't contain the names of the referenced objects, but only their object identifier (oid), which does not change when you rename an object.
The same is true for table columns. This all holds for foreign tables as well.
When you examine the view definition, for example with pg_get_viewdef, the parse tree is rendered as text, so you will see the changed names.
If you want to change the table that a view refers to, the only solution is to DROP the view and CREATE it again, or to use CREATE OR REPLACE VIEW.
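A minimal sketch with a hypothetical view name and columns (contact_emails, id, email): recreating the view re-resolves the name 'contacts' to the new table's oid. Note that CREATE OR REPLACE VIEW must keep the view's existing output column names and types.
CREATE OR REPLACE VIEW contact_emails AS
SELECT id, email
FROM contacts;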

How to add a column in the existing table and update all the existing data in postgresql?

I have an existing table. I want to add a new column and populate it across the whole table, so that I do not have to refill the table.
The problem is that my route column is in the format shown below, and I want to add a new column route_name that excludes everything from the 2nd underscore '_' onwards.
How do I do this by running a query?
route                        route_name (should look like)
dehradun_delhi_09:30_am      dehradun_delhi
katra_delhi_07:30_pm         katra_delhi
delhi_katra_08:00_pm         delhi_katra
bangalore_chennai_10:45_pm   bangalore_chennai
delhi_lucknow_09:00_pm       delhi_lucknow
chennai_bangalore_10:30_pm   chennai_bangalore
lucknow_varanasi_10:30_pm    lucknow_varanasi
varanasi_lucknow_09:30_pm    varanasi_lucknow
delhi_katra_08:00_pm         delhi_katra
katra_delhi_07:30_pm         katra_delhi
delhi_jalandhar_10:00_pm     delhi_jalandhar
jalandhar_delhi_11:00_am     jalandhar_delhi
delhi_amritsar_11:00_pm      delhi_amritsar
amritsar_delhi_11:00_pm      amritsar_delhi
Please tell me what query I should run so that the existing data is backfilled and a new column called route_name is populated in the existing table.
You need to do this in two steps.
First you add the column:
alter table route_table add column route_name text;
and then populate it:
update route_table set route_name = split_part(route, '_', 1) || '_' || split_part(route, '_', 2);
Note that split_part(route, '_', 1) alone would return only the first segment ('dehradun'); concatenating the first two segments gives the desired 'dehradun_delhi'.

Postgresql COPY FROM file changed row order

I'm trying to save the contents of a text file into a Postgresql database. First of all I want to copy the file into one table with one column, in order to iterate over it and save values into specific tables.
I'm using Postgresql version 11.5 on Mac. I copied the file into a temp table (one line per row). Then I wrote a plpgsql function that iterates over each row of the temp table and parses the values into other tables. It worked fine on a small dataset, but when I used a bigger one (approximately 6*10^5 lines) the function failed, because it expected a specific row order (the same as in the file). After some investigation it turned out that the row order in the temp table is different from the line order in the file. What's more interesting, the first difference occurs at the 455864th row.
CREATE TABLE "Temp"
(
data_row text
);
COPY "Temp"(data_row) FROM 'PATH_TO_FILE';
I expected the COPY FROM command to copy the data in the same order as in the file.
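A table in SQL has no inherent row order, so a plain SELECT may return rows in any order. A common workaround, sketched here against the table above under that assumption, is to add a bigserial line-number column; COPY processes its input sequentially, so the sequence values follow the file order and can be used in ORDER BY:
CREATE TABLE "Temp"
(
    line_no  bigserial,   -- filled from a sequence, in input order, during COPY
    data_row text
);
COPY "Temp"(data_row) FROM 'PATH_TO_FILE';
SELECT data_row FROM "Temp" ORDER BY line_no;  -- iterate in file order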

How can I copy an IDENTITY field?

I’d like to update some parameters for a table, such as the dist and sort key. In order to do so, I’ve renamed the old version of the table and recreated it with the new parameters (these cannot be changed once a table has been created).
I need to preserve the id field from the old table, which is an IDENTITY field. However, if I try the following query, I get an error:
insert into edw.my_table_new select * from edw.my_table_old;
ERROR: cannot set an identity column to a value [SQL State=0A000]
How can I keep the same id from the old table?
You can't INSERT data that sets the IDENTITY columns, but you can load such data from S3 using the COPY command.
First you will need to create a dump of the source table with UNLOAD.
Then simply use COPY with the EXPLICIT_IDS parameter, as described in Loading default column values:
If an IDENTITY column is included in the column list, the EXPLICIT_IDS option must also be specified in the COPY command, or the COPY command will fail. Similarly, if an IDENTITY column is omitted from the column list, and the EXPLICIT_IDS option is specified, the COPY operation will fail.
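A sketch of the UNLOAD-then-COPY round trip, using the table names from the question but a hypothetical S3 path and IAM role:
-- dump the old table, identity values included
UNLOAD ('select * from edw.my_table_old')
TO 's3://my-bucket/my_table_dump/'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole';
-- reload, telling COPY to accept the supplied identity values
COPY edw.my_table_new
FROM 's3://my-bucket/my_table_dump/'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
EXPLICIT_IDS;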
You can explicitly specify the columns, and ignore the identity column:
insert into existing_table (col1, col2) select col1, col2 from another_table;
Use ALTER TABLE APPEND twice, first time with IGNOREEXTRA and the second time with FILLTARGET.
If the target table contains columns that don't exist in the source table, include FILLTARGET. The command fills the extra columns in the target table with either the default column value or IDENTITY value, if one was defined, or NULL.
It moves the rows from one table to the other extremely quickly; it took me 4s for a 1GB table on a dc1.large node.
Appends rows to a target table by moving data from an existing source table. ... ALTER TABLE APPEND is usually much faster than a similar CREATE TABLE AS or INSERT INTO operation because data is moved, not duplicated.
Faster and simpler than UNLOAD + COPY with EXPLICIT_IDS.
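A sketch with the table names from the question, assuming the new table only adds columns (hence FILLTARGET; use IGNOREEXTRA instead if the source has extra columns):
-- moves the rows, identity values included, out of the old table
ALTER TABLE edw.my_table_new APPEND FROM edw.my_table_old FILLTARGET;
-- the source table is left empty and can then be dropped
DROP TABLE edw.my_table_old;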