Are there any alternatives to the LOAD command in DB2 - db2

We receive 7 million records daily, which we append to the existing target table. The target table is partitioned by date.
We are using the DB2 LOAD command to load data from one DB2 table (stage) to another DB2 table (target):
call SYSPROC.ADMIN_CMD('LOAD FROM (SELECT * FROM stage_table )
OF CURSOR INSERT INTO target_table NONRECOVERABLE INDEXING MODE INCREMENTAL ALLOW READ ACCESS')
As per the IBM documentation, the ALLOW READ ACCESS parameter is deprecated and IBM suggests using the INGEST utility instead:
https://www.ibm.com/docs/en/db2/10.1.0?topic=functionality-fp1-allow-read-access-parameter-load-command
Question:
How can the INGEST utility be used to load data from one DB2 table to another DB2 table?
What other alternatives are there for loading millions of records with better performance?
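For reference, INGEST is a client-side utility that reads from files or pipes rather than from a cursor, and it is run from the command line rather than through SYSPROC.ADMIN_CMD, so a DB2-to-DB2 move typically exports the staging data first. A minimal sketch, where the file path and table names are just placeholders:
-- export the staging rows to a delimited file
-- (a named pipe created with mkfifo can be used instead, to avoid landing the data on disk)
EXPORT TO /tmp/stage_table.del OF DEL SELECT * FROM stage_table
-- load the target with INGEST
-- (add RESTART OFF if the ingest restart table has not been set up)
INGEST FROM FILE /tmp/stage_table.del
   FORMAT DELIMITED
   INSERT INTO target_table
Because INGEST performs ordinary row-level inserts, the target table stays fully readable during the load, which is what replaces the ALLOW READ ACCESS behaviour of LOAD.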

Related

Redshift CDC or delta load

Does anyone know the best way to load delta or CDC data, using any tools?
I have a big table with billions of records and want to update or insert, like MERGE in SQL Server or Oracle, but in Amazon Redshift with S3.
We also have a lot of columns, so we can't compare all of the columns either.
e.g.
TableA
Col1 Col2 Col3 ...
It already contains records.
So when inserting new records, we need to check whether that particular record already exists: if it does and nothing changed, skip the insert; if it doesn't exist, insert it; and if it exists but has changed, update the record.
I do have a key id and date columns, but since the table has 200+ columns it isn't easy to compare all of them, and it takes a lot of time.
Many thanks in advance
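For what it's worth, the usual Redshift approach is a staged merge rather than a row-by-row comparison: COPY the incoming batch into a staging table, delete the matching keys from the target, then insert everything from staging; comparing an MD5 hash of the concatenated columns is a common way to avoid checking 200+ columns individually. A rough sketch, where tablea, stage_tablea and the key column id are hypothetical names:
BEGIN;
-- remove target rows whose keys appear in the incoming batch
DELETE FROM tablea
USING stage_tablea
WHERE tablea.id = stage_tablea.id;
-- re-insert the current version of every incoming row
INSERT INTO tablea
SELECT * FROM stage_tablea;
COMMIT;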

SCD2 Implementation in Redshift using AWS Glue PySpark

I have a requirement to move data from S3 to Redshift. Currently I am using Glue for the work.
Current Requirement:
Compare the primary key of each record in the Redshift table with the incoming file; if a match is found, close the old record (update its end date from the high date to the current date) and insert the new one.
If no primary key match is found, insert the new record.
Implementation:
I have implemented it in Glue using PySpark with the following steps:
Created dataframes which cover three scenarios:
If a match is found, update the existing record's end date to the current date.
Insert the new record into the Redshift table where a PPK match is found.
Insert the new record into the Redshift table where no PPK match is found.
Finally, union all three data frames into one and write the result to the Redshift table.
With this approach, both the old record (which has the high date value) and the new record (which was updated with the current date value) will be present.
Is there a way to delete the old record with the high date value using PySpark? Please advise.
We have successfully implemented the desired functionality using AWS RDS [PostgreSQL] as the database service and Glue as the ETL service. My suggestion would be that, instead of computing the delta in Spark dataframes, it is a far easier and more elegant solution to create stored procedures and call them from the PySpark Glue job.
[for example: S3 bucket -> staging table -> target table]
In addition, if your logic executes in less than 10 minutes, I would suggest using a Python shell job with external libraries such as psycopg2 / SQLAlchemy for the DB operations.
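For illustration, the core of such a stored procedure usually boils down to two statements against a staging table. A rough sketch, assuming hypothetical tables dim(id, attr, start_date, end_date) and stage_dim(id, attr), with '9999-12-31' as the high date:
BEGIN;
-- close the currently open version of every key that arrived in the staging table
UPDATE dim
SET end_date = CURRENT_DATE
FROM stage_dim
WHERE dim.id = stage_dim.id
  AND dim.end_date = '9999-12-31';
-- insert the incoming rows as the new open versions
INSERT INTO dim (id, attr, start_date, end_date)
SELECT id, attr, CURRENT_DATE, '9999-12-31'
FROM stage_dim;
COMMIT;
Done this way, the old version is updated in place, so there is no duplicated high-date row left to delete afterwards.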

Purging of transactional data in DB2

We have an existing table of more than 130 TB from which we have to delete records in DB2. Using a DELETE statement would hang the system. So one option is to partition the table by month and year and then drop the partitions one by one using TRUNCATE or DROP. I am looking for a script which can create the partitions and subsequently drop them.
You can't partition the data within an existing table. You would need to move the data to a new ranged partitioned table.
If you are using Db2 LUW, and depending on your specific requirements, consider using ADMIN_MOVE_TABLE to move your data to a new table while keeping your table "on-line".
ADMIN_MOVE_TABLE has the ability to add Range Partitioning and/or Multi-Dimensional Clustering on the new table during the move.
https://www.ibm.com/support/knowledgecenter/en/SSEPGG_11.5.0/com.ibm.db2.luw.sql.rtn.doc/doc/r0055069.html
Still, a 130 TB table is very large, and you would be well advised to be careful in planning and testing such a move.
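For illustration only, a single-call move that adds range partitioning might look roughly like the following; the schema, table, column and partition names are placeholders, and the parameter positions should be checked against the ADMIN_MOVE_TABLE documentation linked above:
CALL SYSPROC.ADMIN_MOVE_TABLE(
  'MYSCHEMA', 'BIG_TABLE',
  '', '', '',                 -- keep the current data, index and LOB table spaces
  '',                         -- no MDC dimensions
  '',                         -- no change to the distribution key
  'PARTITION BY RANGE (txn_date) (STARTING ''2015-01-01'' ENDING ''2024-12-31'' EVERY 1 MONTH)',
  '',                         -- no column definition changes
  '',                         -- default options
  'MOVE');                    -- run all phases in one operation
-- old ranges can then be removed cheaply (partition names are listed in SYSCAT.DATAPARTITIONS):
ALTER TABLE MYSCHEMA.BIG_TABLE DETACH PARTITION part_2015_01 INTO MYSCHEMA.OLD_2015_01;
DROP TABLE MYSCHEMA.OLD_2015_01;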

How to export data including large objects from Postgres and later import the exported data to Greenplum

I don't want to use pg_dump to export the data into an SQL script, since feeding it to the Greenplum cluster is too slow when I have a large amount of data to import. So it seems that using Greenplum's gpfdist is preferred. Is there any way I can do this?
Or, as an alternative, can I export a particular Postgres table's data into a CSV-format file containing the large objects of that table?
pg_dump will create a file that uses "COPY" to load the data back into a database. When loading into Greenplum, it will load through the Master server, and for very large loads it will become a bottleneck. Yes, the preferred method is to use gpfdist, but you can most certainly use COPY to load data into Greenplum. It won't load at the 10+ TB per hour rate that gpfdist can achieve, but it can still achieve 1 to 2 TB per hour.
Another alternative is to use gpfdist to execute a program to get the data. It would execute the SELECT statement against PostgreSQL and make the result available to an external table in Greenplum. I created a wrapper for this process called "gplink". You can check it out here: http://www.pivotalguru.com/?page_id=982
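The same idea can also be done by hand with an external web table that runs a COPY against the Postgres source from the Greenplum master; a rough sketch, where the host, credentials, columns and table names are placeholders:
-- the EXECUTE command runs psql on the Greenplum master and streams the source rows in as CSV
CREATE EXTERNAL WEB TABLE ext_src (id int, payload text)
EXECUTE 'psql -h pg_host -U pg_user -d pg_db -c "COPY (SELECT id, payload FROM src_table) TO STDOUT WITH CSV"'
ON MASTER
FORMAT 'CSV';
INSERT INTO target_table SELECT * FROM ext_src;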
According to the Greenplum reference:
The simplest data loading method is the SQL INSERT statement...
You can use the COPY command to load the data into a table when the data
is in external text files...
You can use a pair of Greenplum utilities, gpfdist and gpload, to load external data into tables...
Nevertheless, if you want to use CSV to import the data, you can generate a CSV containing the large object data by joining your table against pg_largeobject. E.g.:
b=# create table lo (n text,p oid);
CREATE TABLE
b=# insert into lo values('wheel',lo_import ('/tmp/wheel.PNG'));
INSERT 0 1
b=# copy (select lo.*, pg_largeobject.pageno, pg_largeobject.data from lo join pg_largeobject on lo.p = loid) to '/tmp/lo.csv' WITH (format csv, header);
COPY 20
The generated /tmp/lo.csv will contain the name, oid, page number and bytea data in CSV format.
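To load that CSV through gpfdist rather than through the master, the usual pattern is roughly as follows (host, port and the target table are placeholders, and gpfdist must already be serving the directory, e.g. gpfdist -d /tmp -p 8081):
-- external table definition matching the exported columns
CREATE EXTERNAL TABLE ext_lo (n text, p oid, pageno int, data bytea)
LOCATION ('gpfdist://filehost:8081/lo.csv')
FORMAT 'CSV' (HEADER);
INSERT INTO lo_target SELECT * FROM ext_lo;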

Clearing records in HBase table

We are creating a disaster recovery system for HBase tables. Because of restrictions, we are not able to use the fancy methods to maintain a replica of the table. We are using Export/Import statements to get the data into HDFS and using that to create the tables on the DR servers.
While importing the data into the HBase table, we use the truncate command to clear the table and then load the data fresh. But the truncate statement takes a long time to delete the rows. Are there any other effective statements to clear the entire table?
(truncate takes 33 minutes for ~2,500,000 records)
disable -> drop -> create the table again, maybe? I don't know if drop takes too long.