Automated way to see queries in all Oracle Connectors of all jobs in DataStage

Is there a way to see all the queries in the Oracle Connector stages of my DataStage project? I am using DataStage 11.3.

No, not natively. You could export your project and parse the export for all of the SQL statements (this could even be done by a DataStage job, of course), or you might be able to query it if you have IGC (Information Governance Catalog) in place.
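As a rough sketch of the export-and-parse approach, the Java below scans the text of a project export for SQL statements. The exact property names and quoting in a .dsx export vary by stage type and release, so the regular expression here is an assumption to adapt to what you actually see in your own export, not the definitive format.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/**
 * Rough sketch: scan the lines of a DataStage project export (.dsx)
 * for quoted values that look like SQL statements. Adjust the
 * pattern to match the property layout of your own export.
 */
public class DsxSqlScanner {

    // Matches quoted values beginning with a SQL keyword, e.g.
    //   SelectStatement "SELECT * FROM EMP"
    private static final Pattern SQL_PATTERN =
        Pattern.compile("\"\\s*((?:SELECT|INSERT|UPDATE|DELETE)\\b[^\"]*)\"",
                        Pattern.CASE_INSENSITIVE);

    /** Returns every SQL-looking quoted value found in the export lines. */
    public static List<String> extractQueries(List<String> exportLines) {
        List<String> queries = new ArrayList<>();
        for (String line : exportLines) {
            Matcher m = SQL_PATTERN.matcher(line);
            while (m.find()) {
                queries.add(m.group(1).trim());
            }
        }
        return queries;
    }

    public static void main(String[] args) throws Exception {
        List<String> lines =
            java.nio.file.Files.readAllLines(java.nio.file.Path.of(args[0]));
        extractQueries(lines).forEach(System.out::println);
    }
}
```

Run it against the exported .dsx file to get a flat list of the embedded statements; multi-line SQL split across several export lines would need a slightly smarter scanner.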

Related

Is there a way to configure IBM Business Automation Workflow with PostgreSQL?

I have IBM Business Automation Workflow 8.6.1.19002 installed and now I want to change my DBMS, which is currently Oracle. We want to use an open-source DBMS such as PostgreSQL; is there a way to do so? As per IBM's Knowledge Center it only supports Oracle, SQL Server, and DB2, but I am still looking for any available option.
All supported databases are listed here: IBM Business Automation Workflow Detailed System Requirements
There is no point in using another database: first, it is not supported, so your installation would be unsupported; second, DB2 is included in the BAW license and you can install it at no additional cost. So if you no longer want to use Oracle, use the provided DB2.

Selecting appropriate tool for replacement of IBM DataStage ETL tool

We are looking for a replacement for our existing IBM DataStage platform. It has around 1,500+ mappings/DataStage jobs on-premise, and these mappings include some complex transformations.
It is a complete on-premise ETL architecture. If DataStage needs to be replaced with a Microsoft platform (SSIS or Azure Data Factory), what are the options for replacing it with the SSIS ETL tool?
Option:
If the SSIS ETL tool is selected, all DataStage scripts (with their ETL transformations) need to be rewritten in SSIS,
and then the SSIS packages can be run through Azure Data Factory. This will also incur a new license cost for SSIS.
Could we instead rewrite all the DataStage ETL jobs/scripts in Azure Data Factory, and would that be recommended,
given that Azure Data Factory is an orchestration tool mainly used for data-driven movement and is not a traditional ETL tool?
If anyone can throw some light on the DataStage ETL architecture and its rich built-in transformations,
and advise which would be the best option for a DataStage replacement, that would be appreciated.
Regards,
mangesh
mangesh7632#gmail.com
Sorry to hear you're looking for a DataStage replacement. If you're still interested, come take a look at what we're building in our new DataStage SaaS product. Here, you'll get a true subscription pay-as-you-go model (if pricing was the issue).
https://developer.ibm.com/tutorials/getting-started-using-the-new-ibm-datastage-saas-beta-service/
Happy to chat with you. Let me know if you'd like me to email you.
Best wishes with your data integration endeavors.
Kevin Wei - Product Manager, IBM

Execute PostgreSQL statements upon deployment to Elastic Beanstalk

I am working on an application that has its source code stored in GitHub; build and test are done by CodeShip, and hosting is done in Amazon Elastic Beanstalk.
I'm at a point where seed data is needed on the development database (PostgreSQL in Amazon RDS) and it is changing regularly in development.
I'd like to execute several SQL statements that are stored in GitHub when a deployment takes place. I haven't found a way to do this with the tools we're using, so I'm wondering if there are some alternatives.
If these are the same SQL statements each time, then you can simply create an .ebextensions config file (see the documentation) that will execute them after each deploy.
If the SQL statements are dynamic per deploy, then I'd recommend a database migrations management tool. I'm familiar with Rails, which has one built in, but there's also a standalone migrations tool for non-Rails projects. Google can suggest many other options.
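A minimal sketch of the .ebextensions approach, assuming the seed script is committed alongside the source as seed.sql and that the RDS instance is attached to the Beanstalk environment (which is what makes the RDS_* environment variables available); the psql client must also be present on the instance:

```yaml
# .ebextensions/01_seed_db.config -- the file name is arbitrary; any
# *.config file in the .ebextensions folder is picked up on deploy.
container_commands:
  01_seed_database:
    # Runs after the new version is unpacked but before it goes live.
    command: >
      PGPASSWORD="$RDS_PASSWORD" psql
      -h "$RDS_HOSTNAME" -p "$RDS_PORT"
      -U "$RDS_USERNAME" -d "$RDS_DB_NAME"
      -f seed.sql
    # Run on one instance only, so the seed is not applied N times.
    leader_only: true
```

Because container_commands run on every deploy, seed.sql should be written to be idempotent (e.g. INSERT ... ON CONFLICT DO NOTHING).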

IBM DataStage integration with Java

We have datastage jobs and want to use one java class which reads the file and gives some data back. Can someone explain the steps needed to perform this function?
There are Java Transformer and Java Client stages in the Real Time section of the Palette.
You will need to study the API that DataStage uses to work with Java.
Simply write Java code that reads the file, and you can call its class from DataStage.
The Java Integration Stage is a DataStage Connector through which you can call a custom Java application from InfoSphere DataStage and QualityStage parallel jobs. The Java Integration Stage is available from IBM InfoSphere Information Server version 9.1 and higher. The Java Integration Stage can be used in the following topologies: as a source, as a target, as a transformer, and as a lookup stage. For more information on the Java Integration Stage, see Related topics.
The DataStage Java Pack is a collection of two plug-in stages, Java Transformer and Java Client, through which you can call Java applications from DataStage. The Java Pack is available from DataStage version 7.5.x and higher.
The Java Transformer stage is an active stage that can be used to call a Java application that reads incoming data, transforms it, and writes it to an output link defined in a DataStage job. The Java Client stage is a passive stage that can be used as a source, as a target, and as a lookup stage. When used as a Source, the Java Client will be producing data. When used as a target, the Java Client Stage will be consuming data. When used as a lookup, the Java Client Stage will perform lookup functions.
For more information on the Java Pack Stages, see Related topics.
https://www.ibm.com/developerworks/data/library/techarticle/dm-1305handling/index.html
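The "write Java code that reads the file" advice above can be sketched as a small, self-contained class. The class and method names here are illustrative only; the real entry point you expose depends on which stage you use (Java Transformer, Java Client, or Java Integration Stage) and on that stage's API, whose plumbing is omitted here.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

/**
 * Illustrative helper: reads a delimited file and hands the rows back
 * to the caller. A DataStage Java stage would invoke something like
 * this and push each returned row to its output link.
 */
public class FileRowReader {

    /** Reads every line of the file and splits it on the delimiter. */
    public static List<String[]> readRows(String path, String delimiter)
            throws IOException {
        return Files.readAllLines(Path.of(path)).stream()
                    .map(line -> line.split(delimiter, -1))
                    .toList();
    }

    public static void main(String[] args) throws IOException {
        for (String[] row : readRows(args[0], ",")) {
            System.out.println(String.join(" | ", row));
        }
    }
}
```

In a real job the returned rows would be mapped onto the stage's output link columns rather than printed.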

Which ways do I have to import my tables and data from SQL Server to SQL Azure?

I'm looking for the easiest way to import data from SQL Server to SQL Azure.
I'd like to work locally, would there be a way to synchronize my local database to SQL Azure all the time?
The thing is, I wouldn't like to update SQL Azure manually each time I add a table to my local database.
I HIGHLY recommend using the SQL Database Migration Wizard: http://sqlazuremw.codeplex.com/ It is the best free tool I've used so far. Simple, and works much more easily than the SSMS and VS built-in tools. I think the Red Gate tools now work well with SQL Azure too, but I haven't had a chance to use them.
Have you looked at SQL Data Sync? A new October update just came out today.
http://msdn.microsoft.com/en-us/library/hh456371
Microsoft SQL Server Data Tools (SSDT) was developed by Microsoft to make it easy to deploy your DB to Azure.
Here's a detailed explanation: http://msdn.microsoft.com/en-us/library/windowsazure/jj156163.aspx or http://blogs.msdn.com/b/ssdt/archive/2012/04/19/migrating-a-database-to-sql-azure-using-ssdt.aspx
and here's how to automate the process of publishing: http://www.anujchaudhary.com/2012/08/sqlpackageexe-automating-ssdt-deployment.html
Also worth a look: the SQL Server Data Tools Team Blog.
There are a few ways to migrate databases; I would recommend doing it with the Generate Scripts wizard.
Here are the steps to follow
http://msdn.microsoft.com/en-us/library/windowsazure/ee621790.aspx
There are also other tools, like the Microsoft Sync Framework.
Here you'll find more information about it:
http://msdn.microsoft.com/en-us/library/windowsazure/ee730904.aspx