How to transfer or copy tables from DB2 to an Oracle database - db2

I want to transfer some tables from DB2 to Oracle daily so that they can be accessed from a web page, but I don't know the DB2 commands. How can I do this?
I want this action to be performed on the database daily at a particular time, so is there a tool available to do this operation? And which programming language should I use to write the program for the above task? I am using Windows XP.

I think Change Data Capture (CDC) is used to replicate DML from one database to other databases continuously.
However, what you need is to transfer some data at a particular time each day, so CDC could be too heavy for that.
You could simply do a db2 export, and then import the generated file into Oracle (a sketch follows the links below).
There is also an option in Oracle, Database Gateways, that permits querying DB2 tables through a database link. The opposite direction is called federation in DB2 (InfoSphere Federation Server), which permits querying Oracle tables.
Export http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/topic/com.ibm.db2.luw.admin.cmd.doc/doc/r0008303.html
CMD examples http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/topic/com.ibm.db2.luw.admin.dm.doc/doc/r0004567.html
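For example, a minimal sketch of the export step, run from a DB2 Command Window (the database, schema, table, and file names here are hypothetical):

    db2 connect to SRCDB
    db2 "EXPORT TO c:\xfer\orders.del OF DEL MODIFIED BY datesiso SELECT * FROM myschema.orders"
    db2 connect reset

On the Oracle side, the comma-delimited file could then be loaded with SQL*Loader using a control file along these lines (the column list is hypothetical; the datesiso modifier above exports dates in ISO format, matching the 'YYYY-MM-DD' mask):

    LOAD DATA
    INFILE 'c:\xfer\orders.del'
    TRUNCATE INTO TABLE orders
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    (order_id, customer_id, order_date DATE 'YYYY-MM-DD')

invoked as sqlldr userid=webuser/secret control=orders.ctl. To run this daily at a particular time on Windows XP, you can put the commands in a .bat file and schedule it with the Windows Task Scheduler; no extra programming language is strictly needed.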

Check this link
http://blogs.oracle.com/warehousebuilder/entry/simple_change_data_capture_from_db2_table_to_oracle_table
In the 11.2 release, Change Data Capture (CDC) can be done with a code template mapping. This allows users to capture data changes from heterogeneous data sources and load them into targets across different platforms.

Related

Difference between copy/migrate/export in SQL Developer

I am using Oracle SQL Developer. It has the following tools: Database Copy, Database Export, and Migrate.
I want to move one schema and all the data in it from one server to another.
What is the difference between these options? Does any of them serve what I am looking for?
Database Copy is probably what you want.
Supply two database connections, and we'll take objects and data and copy them from one database to another.
However, if your schema is large, this will be inefficient. The copy routine does row-by-row inserts across the JDBC connections.
Database Export takes the objects and data and offloads them to flat files. These flat files could then be used later to put in another database.
Migrate is used to take a database from SQL Server, Sybase, Teradata, Redshift, DB2, etc. to Oracle. It has an online (JDBC, row-by-row) data copy mode and an offline (flat files for SQL*Loader) data move mode. For SQL Server/Sybase, we can also translate the T-SQL stored procedures to PL/SQL.
Your solution might also lie elsewhere: Data Pump. We have a wizard for that as well, and it works great for very large schemas/databases. You'll just need access to the database OS so you can put the DMP files into a database directory.
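Outside of SQL Developer, the same transfer can be driven from the command line with the Data Pump expdp/impdp utilities, roughly like this (connect strings, schema, and file names are placeholders):

    expdp hr/hrpwd@srcdb schemas=HR directory=DATA_PUMP_DIR dumpfile=hr.dmp logfile=hr_exp.log

then copy hr.dmp into the directory object's path on the target server and run:

    impdp hr/hrpwd@destdb schemas=HR directory=DATA_PUMP_DIR dumpfile=hr.dmp logfile=hr_imp.log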

Mirror SAP internal data to an external system

We would like to mirror data which is inside SAP to an external database.
Up to now there has been a script which exports the data every night.
The customer wants this to happen more often: it should happen every hour.
The export is quite big, and we are searching for a better way to mirror the data inside SAP to an external database.
Based on the tag, I assume that your external database is a PostgreSQL database. In this case, I don't think you will really find a pure SAP, database-independent solution.
The standard solution for this sort of replication is the SAP SLT Server. It supports taking data out of your SAP system to either a SAP target or a non-SAP target. Currently it supports the following non-SAP targets:
DB2
SAP MaxDB
Microsoft SQL Server
Oracle
Sybase ASE
As you can see, PostgreSQL is not included there (yet). In conclusion, I see the following possibilities:
Use SLT in combination with some other external DB that is supported.
Use a third-party replication tool, for example SymmetricDS.
Depending on your source database, you might be able to use some database specific tools (e.g. SAP HANA Smart Data Integration).
Write some custom code to do it. In my opinion, you should build a sort of log table in this case, recording (perhaps using triggers) which rows were inserted / updated / deleted since the last replication; a sketch follows below. IMO, this should really be a last resort, as database replication is a fairly common topic and you should not reinvent the wheel.
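A minimal sketch of such a log table and trigger, written in PostgreSQL syntax purely for illustration (the real syntax depends on the database underneath your SAP system, and the table and column names are hypothetical):

    CREATE TABLE orders_log (
        log_id     bigserial   PRIMARY KEY,
        order_id   integer     NOT NULL,
        change_op  char(1)     NOT NULL,             -- 'I', 'U' or 'D'
        changed_at timestamptz NOT NULL DEFAULT now()
    );

    CREATE FUNCTION log_orders_change() RETURNS trigger AS $$
    BEGIN
        IF TG_OP = 'DELETE' THEN
            INSERT INTO orders_log (order_id, change_op) VALUES (OLD.order_id, 'D');
            RETURN OLD;
        END IF;
        INSERT INTO orders_log (order_id, change_op)
        VALUES (NEW.order_id, CASE TG_OP WHEN 'INSERT' THEN 'I' ELSE 'U' END);
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER orders_log_trg
    AFTER INSERT OR UPDATE OR DELETE ON orders
    FOR EACH ROW EXECUTE PROCEDURE log_orders_change();

The hourly job then reads orders_log, applies the recorded changes to the external database, and deletes the entries it has processed.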

Synchronize between an MS Access (Jet / MADB) database and PostgreSQL DB, is this possible?

Is it possible to set up an MS Access backend database (Microsoft JET or Access Database Engine) so that whenever entries are inserted/updated, those changes are replicated* to a PostgreSQL database?
Two-way synchronization would be nice, but one way would be acceptable.
I know it's popular to link the two and use one as a frontend, but it's essential that both be backend.
Any suggestions?
* i.e. reflected, synchronized, mirrored
Can you use Microsoft SQL Server Express Edition, or do you have to use the Microsoft Access Database Engine? It's possible you'll have more options using MS SQL Express, like more complete triggers and logging.
Either way, you're going to need a way to accumulate a log of changed rows from the source database engine, and a program to sync them to PostgreSQL by reading the log and converting it into suitable PostgreSQL INSERT, UPDATE and DELETE statements.
You could do this by having audit triggers in MADB/Express insert a row into an audit shadow table for every "real" table whenever it changes, including inserting special "row deleted" audit entries (see the sketch below). Then your sync program could connect to both databases, read the audit tables, apply the changes to PostgreSQL, and empty the audit tables.
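For the SQL Server Express route, such an audit trigger could look roughly like this in T-SQL (table and column names are hypothetical):

    CREATE TABLE dbo.customers_audit (
        audit_id    int IDENTITY(1,1) PRIMARY KEY,
        customer_id int       NOT NULL,
        change_op   char(1)   NOT NULL,              -- 'I', 'U' or 'D'
        changed_at  datetime2 NOT NULL DEFAULT SYSUTCDATETIME()
    );
    GO

    CREATE TRIGGER dbo.customers_audit_trg
    ON dbo.customers
    AFTER INSERT, UPDATE, DELETE
    AS
    BEGIN
        SET NOCOUNT ON;
        -- rows only in 'inserted' are inserts; rows in both pseudo-tables are updates
        INSERT INTO dbo.customers_audit (customer_id, change_op)
        SELECT i.customer_id,
               CASE WHEN d.customer_id IS NULL THEN 'I' ELSE 'U' END
        FROM inserted i
        LEFT JOIN deleted d ON d.customer_id = i.customer_id;
        -- rows only in 'deleted' are deletes
        INSERT INTO dbo.customers_audit (customer_id, change_op)
        SELECT d.customer_id, 'D'
        FROM deleted d
        LEFT JOIN inserted i ON i.customer_id = d.customer_id
        WHERE i.customer_id IS NULL;
    END;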
I'll be surprised if you find anything that does this out of the box. It's one area where Microsoft SQL Server has a big advantage, because of all the deep Access and MADB engine integration supporting its synchronisation and integration features.
There are some ETL ("Extract, Transform, Load") tools that might be helpful, like Pentaho and Talend. I don't know if you can achieve the desired degree of automation with them, though.

How to obtain the database schema from a Sybase ASA 11 Database

I am working on a project where I need to programmatically validate and/or compare a database schema between product releases.
I am using Perl and am looking for a cross-platform method to collect the database schema. I am currently able to perform database queries by utilizing the dbisql.exe command and then parsing the results.
I am wondering if there is potentially a stored procedure or set of queries that I can run to collect the database schema.
It appears that the dbunload.exe command could be used to generate a SQL regeneration script; however, I suspect that this output may be difficult to parse.
Any feedback would be greatly appreciated.
If you would like to retrieve the DB schema data at a really low level, you can query the corresponding system tables. They are in the SYS namespace, especially SYSTABLE (for all tables) and SYSCOLUMN (for all columns of those tables).
Check the ASA SQL Reference Handbook for the schema of those system tables.
With Perl's DBI you can easily run queries against those tables (a sample query follows below). But you will have to create some local storage for the schema to compare the query results with.
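As a starting point, a catalog query along these lines should list every column of every base table (this is from memory, so verify the catalog column names against the SQL Reference; the owner name 'DBA' is just an example):

    SELECT t.table_name, c.column_name, d.domain_name,
           c.width, c.scale, c.nulls
    FROM SYS.SYSTABLE t
    JOIN SYS.SYSCOLUMN c ON c.table_id  = t.table_id
    JOIN SYS.SYSDOMAIN d ON d.domain_id = c.domain_id
    JOIN SYS.SYSUSERPERM u ON u.user_id = t.creator
    WHERE u.user_name = 'DBA'
      AND t.table_type = 'BASE'
    ORDER BY t.table_name, c.column_id;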
Sybase Central v3.0 has the ability to export DDL for all DB objects;
however, I think SC v6.0 can't connect to ASA 11 :(

DB2 external tables?

I just heard that Oracle has a feature called External Tables that allows accessing a flat file (for example, a CSV file in the file system) from the database.
I just want to know if there is something similar in DB2 for LUW.
The closest thing I could see is to implement a table function (written in Java, for example) that reads the file and returns a table with the data from the file. However, this procedure takes a long time (create the Java code, compile it, and create the function in DB2 associating the Java class), and the implementation is not dynamic for different files with different numbers of columns (a table function returns a predefined set of columns).
Here is the documentation of Oracle External Tables: http://docs.oracle.com/cd/B28359_01/server.111/b28319/et_concepts.htm
Yes, IBM offers this as part of their InfoSphere Federation Server, which basically allows you to define nicknames inside a database for various data sources (see the product documentation for the list of supported data sources).
IBM Db2 11.5 has support for external tables that allows you to do this.
This functionality was formerly provided only by Netezza and has now made its way into Db2.
See the manual page for CREATE EXTERNAL TABLE here https://www.ibm.com/support/knowledgecenter/en/SSEPGG_11.5.0/com.ibm.db2.luw.sql.ref.doc/doc/r_create_ext_table.html
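A minimal sketch of what this looks like, based on that manual page (file path, table name, and options are placeholders; check the manual for the full option list):

    CREATE EXTERNAL TABLE ext_sales (
        sale_date DATE,
        amount    DECIMAL(10,2)
    )
    USING (DATAOBJECT('/tmp/sales.csv') DELIMITER ',');

    -- the flat file can then be queried like a regular table
    SELECT COUNT(*) FROM ext_sales;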
As mentioned, InfoSphere Federation Server is a good choice. There are two alternatives for DB2 UDB (Universal Database) which may be helpful in specific use cases:
DataLinks: basically another data type that keeps a reference to your external file. It also provides several levels of control over external data, such as referential integrity, access control, coordinated backup and recovery, and transaction consistency.
DB2 Extenders: they extend the functionality of DB2 to operate on specific file formats; e.g., the XML Extender provides a set of features to operate on XML files inside DB2.
There is also:
(a) external table support in the warehousing engine products (Db2 Warehouse, Db2 Warehouse on Cloud);
(b) data virtualization (aka federation / fluid query) in all Db2 products, which may achieve the same thing.