DB2 Warehouse on Cloud default PL/SQL compatibility - db2

How can I find out whether a DB2 Warehouse on Cloud instance is enabled for Oracle compatibility? What is the default compatibility mode for a DB2WoC instance provisioned from IBM Cloud? Is there a way to toggle the mode? Thanks.

Oracle compatibility can only be set at provisioning time; it can't be changed afterwards. If you need to change the setting, you would have to raise a support ticket and see if support will copy your data into a new instance. Alternatively, provision a new instance yourself with the Oracle compatibility option checked, and copy your data over.
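If you go the copy-it-yourself route, here is a minimal sketch of moving a single table with EXPORT/IMPORT through the standard ADMIN_CMD procedure. The path and table name are hypothetical, and on a managed instance you may not have a writable server path at all, in which case the web console or external tables are the usual workaround:
-- on the source instance (path and table name are assumptions)
CALL SYSPROC.ADMIN_CMD('EXPORT TO /tmp/mytable.ixf OF IXF SELECT * FROM myschema.mytable')
-- then, connected to the new Oracle-compatible instance
CALL SYSPROC.ADMIN_CMD('IMPORT FROM /tmp/mytable.ixf OF IXF INSERT INTO myschema.mytable')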
A quick way to check whether compatibility mode is enabled is to look at the following parameters:
select distinct name, value from sysibmadm.dbcfg where name like '%compat'
NAME            VALUE
--------------- -----
date_compat     ON
number_compat   ON
varchar2_compat ON
If all three are ON, your database was created in Oracle compatibility mode.
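If you would rather get a single answer back, here is a small variation on the same query; it is a sketch against the same SYSIBMADM.DBCFG view, and the COUNT(DISTINCT ...) guard is only there because the view can return one row per database member:
select case when count(distinct name) = 3
            then 'Oracle compatibility mode'
            else 'standard mode' end as mode
from sysibmadm.dbcfg
where name in ('date_compat', 'number_compat', 'varchar2_compat')
  and value = 'ON'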

It depends on which plan you are on. IBM documents that Oracle compatibility does not apply to the entry plan of the managed Db2 Warehouse on Cloud service.
Depending on what you've got and on deployment timing, there are configuration options (including ENABLE_ORACLE_COMPATIBILITY); see https://www.ibm.com/support/knowledgecenter/en/SS6NHC/com.ibm.swg.im.dashdb.doc/admin/configuring_Local.html#configuring_Local__config_options


Implement Oracle external-table-like functionality in Azure managed PostgreSQL

Currently we are using Oracle 19c external table functionality on-prem, whereby CSV files are loaded to a specific location on the DB server and automatically appear in an Oracle external table. The file location is specified as part of the table DDL.
We have a requirement to migrate to Azure managed PostgreSQL. From the PostgreSQL documentation, functionality similar to Oracle external tables can be achieved in standalone PostgreSQL using "foreign tables" via the file_fdw extension. But in Azure managed PostgreSQL we cannot use this, since we do not have access to the DB file system.
One option I came across was Azure Data Factory, but that looks like an expensive option. Expected volume is about 1 million record inserts per day.
Could anyone advise on possible alternatives? One option I was thinking of was a scheduled shell script on an Azure VM that loads the files into PostgreSQL using psql commands like \copy (see the sketch below). Would that be a good option for the volume to be supported?
Regards
Jacob
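For reference, a minimal sketch of the \copy approach mentioned above. The table name, file path, and connection details are hypothetical; \copy itself is a standard psql meta-command that streams the file from the client side, so it needs no access to the server's file system:
-- load_daily.sql (file and table names are made up)
\copy staging.daily_records from '/data/incoming/records.csv' with (format csv, header true)
You would then schedule something like psql "host=<yourserver>.postgres.database.azure.com dbname=mydb user=myuser sslmode=require" -f load_daily.sql from cron on the VM. At around a million rows per day, a single \copy per file is comfortably within what PostgreSQL can ingest.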
One last option that could be simple to implement for the migration is EnterpriseDB (EDB), which avoids vendor lock-in and is free of cost. The video linked below walks through the migration procedure steps.
https://www.youtube.com/watch?v=V_AQs8Qelfc

Clarification on Oracle DB Audit Configuration - Settings

I have read information regarding audit configuration in Oracle 12c; however, I am looking for some clarification, as some of what I read led to confusion.
The audit config I am reviewing has the following settings:
audit_sys_operations  TRUE
audit_file_dest       D:\ORACLE\ADMIN\HOSTNAME\ADUMP
audit_trail           DB
My understanding is that the adump directory is the default audit file location on the database server. Also, the AUDIT_TRAIL initialization parameter is set to DB, which I understand directs all audit records to the database audit trail. We have a syslog server configured that collects event logs from various servers, including this particular database server; however, I do not believe it is collecting the database audit trail. My concern here is that the logs are written to the DB and not to an external location. Wouldn't setting AUDIT_TRAIL=OS be more appropriate, security-wise? If the DB becomes inaccessible, won't the audit logs be inaccessible as well? I want to make sure my understanding is correct. I am not the DBA.
In your configuration the "adump" location will contain logs generated by "sysdba" activity, but not the general user audit trail. Setting audit_trail=os will send everything to the OS, but beginning with Oracle 12c, Oracle has implemented a "unified audit trail" in which everything is consolidated into a common database view, and "OS" is no longer an option. Your configuration is the original "core" audit architecture, which is still supported (for now) for backwards compatibility. Ultimately you should move toward unified auditing and use some other tool to export your audit data to syslog or another consolidation service. Check this link for more info: https://docs.oracle.com/en/database/oracle/oracle-database/12.2/upgrd/recommended-and-best-practices-complete-upgrading-oracle-database.html#GUID-EB285325-CA65-41B4-BE58-D3F69CFED789
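If you want to check where you stand, both of these are standard dictionary queries in 12c (nothing site-specific assumed):
-- is unified auditing enabled?
SELECT value FROM v$option WHERE parameter = 'Unified Auditing';
-- sample the unified audit trail
SELECT event_timestamp, dbusername, action_name
FROM unified_audit_trail
ORDER BY event_timestamp DESC
FETCH FIRST 10 ROWS ONLY;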

How to Edit postgresql.conf on Azure PostgreSQL?

I need to change log_hostname to off, in an attempt to fix a performance issue as recommended here. How do I access the postgresql.conf file for an Azure PostgreSQL instance?
log_hostname is NOT one of the parameters made available under Server Parameters in the GUI.
How do I edit it? Is it somehow accessible from pgAdmin?
Edit: Hmm, what I am asking might not be possible:
Not all PostgreSQL parameters are available for you to reconfigure in Azure Database for PostgreSQL. If a PostgreSQL parameter is not listed in your server's Azure portal Server parameters window, then it cannot be reconfigured from the default. To review the current list of configurable parameters, navigate to the Server parameters window in the Azure portal. A few Postgres parameters require you to restart the server for them to take effect. These are indicated by the property 'Static'.
You are correct with your excerpt from the documentation there.
You can also use the az CLI to search through the configuration parameter details:
https://learn.microsoft.com/en-us/azure/postgresql/howto-configure-server-parameters-using-cli
Or through the portal, as you have done:
https://learn.microsoft.com/en-us/azure/postgresql/howto-configure-server-parameters-using-portal
But it appears that log_hostname is not something you can change within Azure at the moment.
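If you want to confirm from SQL what the server is currently using, pg_settings is a standard PostgreSQL view (nothing Azure-specific assumed); its context column also tells you how a parameter could be changed:
-- current value and change context for log_hostname
select name, setting, context
from pg_settings
where name = 'log_hostname';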

IBM Worklight 6.2 Server Deployment error: DB2 Instance not found on server

Environment:
IBM Worklight 6.2,
IBM Liberty 8.5.5.1,
IBM DB2 10.5 &
Windows 2008 Standard Edition.
For high availability of the DB instance [WLDBINST], I have followed this architecture:
Two clustered Windows machines with the IBM DB2 binaries installed, and SAN storage used to share the database files in common.
If one node becomes unavailable, the other node takes over without any loss of data.
I have tested the DB2 instance via the cluster IP, and it works fine.
The following error was logged when I ran the Worklight Server Configuration Tool:
Instance WLDBINST not found on server. Found only [WLDBINST C, :, DB2CLUSTER, DB2]
I have found the reason for the above issue. To list the DB2 instances, we can use the db2ilist command:
C:\>db2ilist
WLDBINST C : DB2CLUSTER
DB2
The above result shows that we have two instances: WLDBINST, which is on the C drive and part of DB2CLUSTER, and DB2.
I guess the Worklight Server Configuration Tool uses a similar DB2 command to list the instances, so it is treating the result as four instances: WLDBINST C, :, DB2CLUSTER, and DB2.
How can I resolve this issue?
If the Server Configuration Tool is not able to create the database for your topology, you should create it manually before running the tool.
For the Administration database, the doc is here:
https://www-01.ibm.com/support/knowledgecenter/SSZH4A_6.2.0/com.ibm.worklight.installconfig.doc/admin/t_creating_the_db2_database_for_wladmin.html
For the Project Runtime databases, the doc is here:
https://www-01.ibm.com/support/knowledgecenter/SSZH4A_6.2.0/com.ibm.worklight.deploy.doc/admin/t_creating_the_db2_databases.html
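For illustration, a hedged sketch of the kind of manual creation those pages describe, run from a DB2 command line processor. The database name, page size, and grants here are assumptions; take the exact options from the linked documentation:
-- names and options below are illustrative, not the documented values
CREATE DATABASE WLADMIN COLLATE USING SYSTEM PAGESIZE 32 K
CONNECT TO WLADMIN
GRANT CONNECT, CREATETAB ON DATABASE TO USER wluser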
The Server Configuration Tool will not do any specific configuration to ensure that Liberty reopens a connection if there is a database node switch. I recommend that you review Liberty's behavior in this case and add settings to server.xml as required.

How do I setup DB2 Express-C Data Federation for a Sybase data source?

I wish to make fields in a remote public Sybase database outlined at http://www.informatics.jax.org/software.shtml#sql appear locally in our DB2 project's schema. To do this, I was going to use data federation; however, I can't seem to be able to install the data source library (the Sybase-specific file libdb2ctlib.so for Linux), because only DB2 and Informix work OOTB with DB2 Express-C v9.5 (the version we're currently running; I also tried the latest v9.7).
From unclear IBM documentation and forum posts, the best I can gather is that we would need to spend $675 on http://www-01.ibm.com/software/data/infosphere/federation-server/ to get support for Sybase, but budget-wise that's a bit out of the question.
So is there a free method using previous tool versions (it seems DB2 Information Integrator was rebranded as InfoSphere Federation Server) to set up DB2 data wrappers for Sybase? Alternatively, is there another non-MySQL approach we could use, such as switching our local DBMS from DB2 to PostgreSQL? Does the latter support data integration/federation?
DB2 Express-C does not allow federated links to any remote database, not even other DB2 databases. You are correct that InfoSphere Federation Server is required to federate DB2 to a Sybase data source. I don't know if PostgreSQL supports federated links to Sybase.
Derek, there are several ways in which one can create a federated database. One is by using the federated database capability that is built into DB2 Express-C. However, DB2 Express-C can only federate data from specific data sources, i.e. other DB2 databases and industry-standard web services. To add Sybase to this list, you must purchase the IBM Federation Server product.
The other way is to leverage DB2's ability to create user-defined functions in DB2 Express-C that use the OLE DB API to access other data sources. Because OLE DB is a Windows-based technology, only DB2 servers running on Windows can do this. You create a table UDF that you can then use anywhere a table result set is expected, e.g. in a view definition. For example, you could define a view that uses your UDF to materialize the results; these results would come from a query (via OLE DB) of your Sybase data (or any other OLE DB compliant data source). A sketch follows the link below.
You can find more information here http://publib.boulder.ibm.com/infocenter/idm/v2r2/index.jsp?topic=/com.ibm.datatools.routines.doc/topics/coledb_cont.html
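Here is a hedged sketch of what such an OLE DB table function and wrapping view might look like. The function name, column list, rowset name, and provider connection string are all made up, so check the linked documentation for the exact EXTERNAL NAME format:
-- table UDF backed by an OLE DB rowset (names and connection string are hypothetical)
CREATE FUNCTION myschema.sybase_markers ()
RETURNS TABLE (marker_key INTEGER, symbol VARCHAR(64))
LANGUAGE OLEDB
EXTERNAL NAME '!markers!Provider=ASEOLEDB;Data Source=myserver:5000;Catalog=mgd';
-- expose it like a local table, e.g. in a view
CREATE VIEW myschema.v_markers AS
SELECT marker_key, symbol FROM TABLE(myschema.sybase_markers()) AS t;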