If you use the ejbTimer feature with DashDB in Bluemix, you end up with errors. We analyzed the problem: the ejbTimer feature creates a set of tables (WLPTASK, WLPPART, WLPPROP) in its create phase, and these end up as 'ORGANIZE BY COLUMN' (the default on DashDB).
As a workaround, we found that we can use the feature on a standalone (non-cloud) Liberty server, let that create the tables, take the DDL from it, adjust it to 'ORGANIZE BY ROW', and manually create the tables in DashDB. The feature in Bluemix then no longer needs to create the tables and works with the manually created ones.
I assume this is not expected behavior - is there a fix for it?
What you have done to work around this issue is good. The reason why this doesn't work out of the box is that Liberty uses EclipseLink (ECL) to create the tables for EJB timers and ECL doesn't have full support for DashDB.
ECL works with any compliant SQL database and JDBC driver, but it only supports schema generation for a select set of databases. Unfortunately, DashDB is not on that list.
I suggest that you continue to use this workaround of manually editing the DDL generated for Derby, and in the meantime open a Request For Enhancement (should take 10 mins or less) for IBM to add DashDB schema generation support to ECL.
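For reference, the adjusted DDL is just the generated CREATE TABLE statements with the organization clause appended. The column definitions below are placeholders, not the real persistent-executor schema; take the actual column list from the DDL that the standalone Liberty server generated.

```sql
-- Sketch only: placeholder columns. Copy the real column list from the
-- Liberty-generated DDL and append ORGANIZE BY ROW to each CREATE TABLE.
CREATE TABLE WLPTASK (
    ID     BIGINT       NOT NULL PRIMARY KEY,  -- placeholder column
    TNAME  VARCHAR(254),                       -- placeholder column
    STATE  SMALLINT                            -- placeholder column
) ORGANIZE BY ROW;
-- Repeat the same adjustment for WLPPART and WLPPROP.
```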
I am trying to migrate our Postgres database to Aurora Postgres.
First I created a normal task; it migrates all the tables, but not their constraints.
My attempts to clone our database:
I downloaded AWS SCT (Schema Conversion Tool), then set my configuration to generate a migration report. Here is the report:
We completed the analysis of your PostgreSQL source database and estimate that 100% of the database storage objects and 99.1% of database code objects can be converted automatically or with minimal changes if you select Amazon Aurora (PostgreSQL compatible) as your migration target. Database storage objects include schemas, tables, table constraints, indexes, types, sequences and foreign tables. Database code objects include triggers, views, materialized views, functions, domains, rules, operators, collations, fts configurations, fts dictionaries and aggregates. Based on the source code syntax analysis, we estimate 99.9% (based on # lines of code) of your code can be converted to Amazon Aurora (PostgreSQL compatible) automatically. To complete the migration, we recommend 133 conversion action(s) ranging from simple tasks to medium-complexity actions to complex conversion actions.
My questions:
1- Is there a way to automate including everything in my source database?
2- The report mentions "we recommend 133 conversion action(s)". Where can I find these conversion actions?
3- Is it safe to use ongoing migration? In my case we need to run the migration every day.
Sequences, indexes, and constraints are not migrated, and this is mentioned in the official AWS docs.
You can use this source.
It will help you migrate sequences, indexes, and constraints at once.
P.S.: this doesn't include views and routines.
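If you prefer not to depend on an external script, you can also generate most of the missing DDL straight from the source database's catalog and run the output against Aurora. This is only a sketch: it assumes the objects live in the public schema (adjust the schema filters to your layout), and sequences are still easiest to bring over with a schema-only pg_dump.

```sql
-- Generate CREATE INDEX statements, skipping indexes that back constraints
-- (those are recreated by the constraint DDL below).
SELECT indexdef || ';'
FROM pg_indexes
WHERE schemaname = 'public'                      -- assumption: adjust schema
  AND indexname NOT IN (SELECT conname FROM pg_constraint);

-- Generate ALTER TABLE ... ADD CONSTRAINT statements for primary keys,
-- unique constraints, foreign keys and check constraints.
SELECT 'ALTER TABLE ' || conrelid::regclass
       || ' ADD CONSTRAINT ' || quote_ident(conname)
       || ' ' || pg_get_constraintdef(oid) || ';'
FROM pg_constraint
WHERE connamespace = 'public'::regnamespace      -- assumption: adjust schema
  AND contype IN ('p', 'u', 'f', 'c');
```

Run the primary/unique key statements before the foreign keys so that the referenced keys exist first.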
AFAIK there's no way in AWS to automate everything; if there were, it would already have been added to SCT. However, if there are recurring errors in code/DDL/functions, like certain data type conversions, you can create a script that takes a schema dump and converts all those data types to the desired ones.
Choose the SQL Conversion Actions tab in the SCT tool.
The SQL Conversion Actions tab contains a list of SQL code items that can't be converted automatically. There are also recommendations for how to manually convert the SQL code. You can look into the errors and make changes accordingly.
If you are migrating to the same version of PG in Aurora, you can take a schema-only dump, restore it into the target Aurora, and later set up a full load/ongoing replication with DMS; then you don't have to take SCT into consideration (this has worked for me most of the time). Just make sure you adhere to the Aurora limitations specific to your PG version.
We have been using ongoing migration in our project and it's working great. There are some best practices we have developed, though they will differ from project to project:
DDL changes must be made on the target first; stop replication while doing so and resume once done.
Separate tables with high transaction volume into their own DMS task; this helps with troubleshooting, and the rest of your tables can keep replicating.
Always keep in mind that DMS replicates data, not views/functions/procedures.
Actively monitor your tasks and replication instances.
And I would suggest that if you are performing a homogeneous migration (PG -> PG), you consider pg_dump & pg_restore; it is simple and well suited for same-version migrations, and AWS Aurora supports it.
We would like to mirror data which is inside SAP to an external database.
Up to now there is a script which exports the data every night.
The customer wants this to happen more often. It should happen every hour.
The export is quite big, and we are searching for a better way to mirror the data inside SAP to an external database.
Based on the tag, I assume that your external database is a PostgreSQL database. In this case, I don't think you will really find a pure SAP, database-independent solution.
The standard solution for this sort of replication is the SAP SLT Server. It supports taking data out of your SAP system to either a SAP target or a non-SAP target. Currently it supports the following non-SAP targets:
DB2
SAP MaxDB
Microsoft SQL Server
Oracle
Sybase ASE
As you can see, PostgreSQL is not included there (yet). In conclusion, I see the following possibilities:
Use SLT in combination with some other external DB that is supported.
Use a third party replication tool like for example SymmetricDS.
Depending on your source database, you might be able to use some database specific tools (e.g. SAP HANA Smart Data Integration).
Write some custom code for doing it. In my opinion, you should try to build a sort of log table in this case, to record (maybe using triggers) which rows were inserted / updated / deleted since the last replication. IMO, this should really be a last resort, as database replication is a fairly common topic and you should not reinvent the wheel.
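To make that last option a bit more concrete, here is a rough sketch of such a log table and trigger. It is written in PostgreSQL syntax purely for illustration (the trigger would actually have to live on the database underneath your SAP system, where the syntax will differ), and the table and column names are invented.

```sql
-- Hypothetical change log: records which rows changed since the last export.
CREATE TABLE replication_log (
    log_id      BIGSERIAL  PRIMARY KEY,
    table_name  TEXT       NOT NULL,
    row_key     TEXT       NOT NULL,            -- primary key of the changed row
    operation   CHAR(1)    NOT NULL,            -- 'I', 'U' or 'D'
    changed_at  TIMESTAMP  NOT NULL DEFAULT now()
);

-- Hypothetical trigger on one source table ("orders" with key "order_id").
CREATE OR REPLACE FUNCTION log_orders_change() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'DELETE' THEN
        INSERT INTO replication_log (table_name, row_key, operation)
        VALUES ('orders', OLD.order_id::text, 'D');
        RETURN OLD;
    ELSE
        INSERT INTO replication_log (table_name, row_key, operation)
        VALUES ('orders', NEW.order_id::text, LEFT(TG_OP, 1));  -- 'I' or 'U'
        RETURN NEW;
    END IF;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER orders_replication_log
AFTER INSERT OR UPDATE OR DELETE ON orders
FOR EACH ROW EXECUTE FUNCTION log_orders_change();  -- EXECUTE PROCEDURE on older versions
```

The hourly export job then only has to read replication_log, apply the listed keys to the target database, and clear the log.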
I have a job in Talend Open Studio which is working fine; it connects a tMSSqlInput to a tMap and then a tMysqlOutput, very straightforward. My problem is that I need this job to run on a daily basis, but only when a new record is created or modified... any help is highly appreciated!
It seems that you are searching for a Change Data Capture Tool for Talend.
Unfortunately it is only available in the licensed product.
To implement what you need, you have several options. I want to show the most popular ones.
CDC from Talend
As Corentin said correctly, you could choose to use CDC (Change Data Capture) from Talend if you use the subscription version.
CDC of MSSQL
Alternatively, you can check whether you can activate or use CDC in your MSSQL server. This depends on your license. If it is possible, you can use this feature to identify new and changed rows and process them.
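If your edition includes it, enabling CDC boils down to two system procedures (T-SQL); the database and table names below are placeholders.

```sql
-- Enable CDC on the database (requires sysadmin and an edition that ships CDC).
USE MyDatabase;                    -- placeholder database name
EXEC sys.sp_cdc_enable_db;

-- Enable CDC for one table; SQL Server then maintains a change table you can
-- query (via the cdc.fn_cdc_get_all_changes_* functions) to get the deltas.
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'MyTable',   -- placeholder table name
    @role_name     = NULL;         -- no gating role
```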
Triggers
You can also create triggers on your database (if you have access to it). For example, creating a trigger for INSERT, UPDATE and DELETE would help you get the deltas. Then you could store those records, or just their IDs, separately.
Software driven / API
If your database is connected to an application and you have developers around, you could ask for a service which identifies records on insert / update / delete and exposes them to you. This could be done e.g. via a REST interface.
Delta via ID
If the primary key is an ID and it is set to auto-increment, you could also check your MySQL table for the biggest value and only SELECT those rows from the source which have a bigger ID than the ones you already have. This of course depends on the database layout.
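A rough sketch of that approach (table and column names are invented): read the highest ID already loaded into the MySQL target, then select only newer rows from the source, e.g. by passing the value into the source query through a Talend context variable.

```sql
-- On the MySQL target: the highest ID that has already been loaded.
SELECT COALESCE(MAX(id), 0) AS last_loaded_id FROM target_table;

-- On the MSSQL source: fetch only rows newer than that value. In Talend the
-- value would typically be injected via a context variable in the tMSSqlInput
-- query, e.g. "... WHERE id > " + context.lastLoadedId.
SELECT *
FROM source_table
WHERE id > 12345;   -- placeholder for the value read above

-- Note: this catches new rows only, not updates to existing rows; combine it
-- with a "last modified" timestamp column if you also need modifications.
```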
Is it possible to have an MS Access backend database (Microsoft JET or Access Database Engine) set up so that whenever entries are inserted/updated those changes are replicated* to a PostgreSQL database?
Two-way synchronization would be nice, but one way would be acceptable.
I know it's popular to link the two and use one as a frontend, but it's essential that both be backend.
Any suggestions?
* i.e. reflected, synchronized, mirrored
Can you use Microsoft SQL Server Express Edition? Or do you have to use Microsoft Access Database Engine? It's possible you'll have more options using MS SQL express, like more complete triggers and logging.
Either way, you're going to need a way to accumulate a log of changed rows from the source database engine, and a program to sync them to PostgreSQL by reading the log and converting it into suitable PostgreSQL INSERT, UPDATE and DELETE statements.
You could do this by having audit triggers in MADB/Express insert a row into an audit shadow table for every "real" table whenever it changed, including inserting special "row deleted" audit entries. Then your sync program could connect to both MADB/Express, read the audit tables, apply the changes to PostgreSQL, and empty the audit tables.
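As a sketch of what one such audit trigger could look like, assuming you go with the SQL Server Express option; the table and column names are invented.

```sql
-- Sketch only: one audit shadow table per "real" table.
CREATE TABLE customers_audit (
    audit_id    INT IDENTITY(1,1) PRIMARY KEY,
    customer_id INT        NOT NULL,
    operation   CHAR(1)    NOT NULL,            -- 'I', 'U' or 'D'
    changed_at  DATETIME2  NOT NULL DEFAULT SYSUTCDATETIME()
);
GO

CREATE TRIGGER trg_customers_audit
ON customers
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- Rows present in "inserted" are inserts or updates.
    INSERT INTO customers_audit (customer_id, operation)
    SELECT i.customer_id,
           CASE WHEN EXISTS (SELECT 1 FROM deleted d WHERE d.customer_id = i.customer_id)
                THEN 'U' ELSE 'I' END
    FROM inserted i;

    -- Rows only present in "deleted" are the special "row deleted" entries.
    INSERT INTO customers_audit (customer_id, operation)
    SELECT d.customer_id, 'D'
    FROM deleted d
    WHERE NOT EXISTS (SELECT 1 FROM inserted i WHERE i.customer_id = d.customer_id);
END;
GO
```

The sync program then reads customers_audit, replays the changes against PostgreSQL, and empties the table.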
I'll be surprised if you find anything that does this out of the box. It's one area where Microsoft SQL Server has a big advantage, because of all the deep Access and MADB engine integration to support its synchronization and integration features.
There are some ETL ("Extract, Transform, Load") tools that might be helpful, like Pentaho and Talend. I don't know if you can achieve the desired degree of automation with them though.
I just heard that Oracle has a feature called External Tables that allows accessing a flat file (for example, a CSV file in the file system) from the database.
I just want to know if there is something similar in DB2 for LUW.
The closest thing I could see is to implement a table function (written in Java, for example) that reads the file and returns a table with the data from it. However, this procedure takes a long time (write the Java code, compile it, and create the function in DB2 associating the Java class) and the implementation is not dynamic for different files with different numbers of columns (a table function returns a predefined set of columns).
Here is the documentation for Oracle External Tables: http://docs.oracle.com/cd/B28359_01/server.111/b28319/et_concepts.htm
Yes, IBM offers this as part of their InfoSphere Federation Server, which basically allows you to define nicknames inside a database that point to various data sources. See the list of supported data sources in the product documentation.
IBM Db2 11.5 has support for external tables that will allow you to do this.
This functionality was formerly provided only by Netezza and has now made its way to Db2.
See the manual page for CREATE EXTERNAL TABLE here https://www.ibm.com/support/knowledgecenter/en/SSEPGG_11.5.0/com.ibm.db2.luw.sql.ref.doc/doc/r_create_ext_table.html
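A minimal sketch of what that looks like; the table, column, and file names are placeholders, and the full option list is on the CREATE EXTERNAL TABLE page linked above.

```sql
-- Sketch only: placeholder names and path; check the linked reference for the
-- complete set of options supported by your Db2 11.5 environment.
CREATE EXTERNAL TABLE sales_ext (
    id       INTEGER,
    sold_on  DATE,
    amount   DECIMAL(10,2)
)
USING (
    DATAOBJECT ('/home/db2inst1/sales.csv')   -- flat file on the server
    DELIMITER  ','
);

-- The file can then be queried (or loaded into a regular table) like any table:
SELECT * FROM sales_ext;
```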
As mentioned, InfoSphere Federation Server is a good choice. There are two alternatives for DB2 UDB (Universal Database), which may be helpful in specific use cases:
DataLinks: it is basically another data type that keeps a reference to your external file. It also provides several levels of control over external data such as referential integrity, access control, coordinated backup and recovery, and transaction consistency.
DB2 Extenders: they extend the functionality of DB2 to operate on specific file formats; e.g., the XML Extender provides a set of features for operating on XML files inside DB2.
There is also:
(a) external table support in the warehousing engine products (Db2 Warehouse, Db2 Warehouse on Cloud), and
(b) data virtualization (aka federation / fluid query) in all Db2 products, which may achieve the same thing.