Connecting to DB2 with HammerDB

I am using a Windows machine to connect to a remote DB2 instance and ran into this issue:
SQL1531N The connection failed because the name specified with the DSN connection string keyword could not be found in either the db2dsdriver.cfg configuration file or the db2cli.ini configuration file. Data source name specified in the connection string: <DSN>
I have configured an ODBC data source using the ODBC Data Source Administrator, and it connects successfully.
Upon further investigation, I am unable to locate db2dsdriver.cfg in the IBM Data Server Driver folder. I can find db2dsdriver.lvl and db2dsdriver.xsd, just not the .cfg file. I am also unsure where HammerDB looks for the config file.
I have looked at the DB2 configuration documentation on the HammerDB website, but I am unable to get any useful information from there. https://www.hammerdb.com/docs/ch04s02.html

For the tiny footprint ODBC and CLI driver (known as clidriver) from IBM, you are responsible for creating and editing the db2dsdriver.cfg configuration file. It is a small XML file, documented in the Db2 Knowledge Centre and related pages. The HammerDB documentation also gives a minimal example, and you linked to that page in your question.
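For reference, here is a minimal db2dsdriver.cfg sketch (the alias, database name, host, and port are placeholders to replace with your own values):

<configuration>
   <dsncollection>
      <dsn alias="MYDSN" name="MYDB" host="dbserver.example.com" port="50000"/>
   </dsncollection>
   <databases>
      <database name="MYDB" host="dbserver.example.com" port="50000"/>
   </databases>
</configuration>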
You can create and edit this file either via command lines to the db2cli tool, or by editing it directly with a text editor (or XML editor). It may be easier to use an editor than to learn the command lines, although the command lines have the advantage that they lend themselves to scripting this activity for larger installations.
On Microsoft Windows you can also use Notepad to create and edit the file db2dsdriver.cfg.
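For the command-line route, the db2cli writecfg command can add a DSN entry; a sketch (the DSN, database, host, and port values are placeholders):

db2cli writecfg add -database MYDB -host dbserver.example.com -port 50000 -dsn MYDSN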
An important step: after editing the file, you must validate its contents before trying any database connections. Validation checks that the syntax of the XML in the file is correct; use the db2cli validate command for this. It must show a successful result before you try to connect to any database. Once validation completes without errors, you can also use db2cli validate -connect -dsn XXX -user YYY -passwd ZZZ to test the connection independently of your application (in this case HammerDB). Once db2cli validate -connect -dsn ... gives you a successful connection, your application (HammerDB) will connect correctly.
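For example (MYDSN, db2user, and the password are placeholders):

db2cli validate -dsn MYDSN
db2cli validate -connect -dsn MYDSN -user db2user -passwd mypassword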
There are many examples of db2dsdriver.cfg contents online, but your first source should be the Db2 Knowledge Centre online, which details the command-line options to the db2cli command, along with giving examples of db2dsdriver.cfg.
If you already have a working Db2 configuration with local and remote databases (but no db2dsdriver.cfg file), you can also use the db2dsdcfgfill tool to populate db2dsdriver.cfg from your existing Db2 configuration. See docs here.
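A sketch of that command, assuming the -i (instance name) and -o (output directory) options described in the docs (both values below are placeholders):

db2dsdcfgfill -i MYINSTANCE -o /path/to/cfg/dir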

Related

Postgresql Copy Function from the Server Computer to the Client Computer

I would like to copy a table from the server computer to a client computer using the COPY command. I know this is a recurring issue for users, but I have not been able to get an answer for this particular scenario, which I believe is a common one.
I used a COPY command to copy a table from the server to the client computer, using the code below:
COPY (Select * from Table_Name) TO 'C:\somedirectory\file.csv' DELIMITER ',' CSV HEADER;
However, I got the following error:
ERROR: relative path not allowed for COPY to file
My question is: how do I use the correct COPY command to copy from the server computer to the client computer in Postgres?
Thank you in anticipation
Please check whether your user has read/write access to the destination folder.
Here are a couple of threads I found; see if they help:
https://dba.stackexchange.com/questions/158466/relative-path-for-psql-copy-file
https://postgrespro.com/list/thread-id/1116997
Alternatively, try writing over the network, using the client's public IP.
How do I use the correct COPY command to copy from the server computer to the client computer in Postgres?
You simply can't.
This is clearly stated in the manual:
COPY with a file name instructs the PostgreSQL server to directly read from or write to a file. The file must be accessible by the PostgreSQL user (the user ID the server runs as) and the name must be specified from the viewpoint of the server
(emphasis mine)
You need to use psql's \copy command or any other export tool that works on the client side.
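For example, run this from psql on the client machine (the table and path are the ones from your question; \copy fetches the query result over the connection and writes the file on the client, and forward slashes are fine on Windows):

\copy (SELECT * FROM Table_Name) TO 'C:/somedirectory/file.csv' WITH (FORMAT csv, HEADER)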

How to import sql file in Google SQL with binary mode enabled?

I have a database dump that gives this error on import:
ASCII '\0' appeared in the statement, but this is not allowed unless option --binary-mode is enabled and mysql is run in non-interactive mode. Set --binary-mode to 1 if ASCII '\0' is expected.
I'm importing the database through the console with gcloud sql import sql mydb gs://my-path/mydb.sql --database=mydb, but I don't see any flags for binary mode in the documentation. Is it possible at all?
Optionally: is there a way to set this flag when importing through MySQL Workbench? I haven't seen anything about it there either, but maybe I'm missing some setting. If there is a way to set that flag, then I can import my database through MySQL Workbench.
Thank you.
Depending on where the source database is hosted, on Cloud SQL or in an on-premises environment, the proper flags must be set during the export so that the dump file is compatible with the target database.
Since you would like to import a file that was exported from an on-premises environment, mysqldump is the suggested way to perform the export.
First, create a dump file as suggested in the documentation. Make sure to pay attention to the following two points (see the example mysqldump invocation after this list):
Do not export customer-created MySQL users; this will cause the import to the new instance to fail. Instead, manually create the MySQL users you wish to keep on the new instance.
Make sure that you have configured the appropriate flags so that the dump file contains all the details you need, e.g. triggers, stored procedures, etc.
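A sketch of such an export (the host, user, and database names are placeholders; the flags follow the Cloud SQL export guidance, and --hex-blob in particular writes binary columns as hex, which avoids the raw '\0' bytes behind the error you hit):

mysqldump --databases mydb -h SOURCE_HOST -u USER -p --hex-blob --single-transaction --set-gtid-purged=OFF --triggers --routines --events > mydb.sql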
Then, create a Cloud Storage Bucket and upload the dump file to the bucket.
Before proceeding with the import, grant the Storage Object Admin role to the service account of the target Cloud SQL instance. You may do that with the following command:
gsutil iam ch serviceAccount:[SERVICE-ACCOUNT]:objectAdmin gs://[BUCKET-NAME]
You may locate the aforementioned Service Account in the Cloud SQL instance Overview, or by running the following command:
gcloud sql instances describe [INSTANCE_NAME]
The service account is shown in the serviceAccountEmailAddress field.
Now you are able to do the import either from the Console, using the gcloud command, or via the REST API.
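For example, with gcloud (the instance, bucket, and database names are placeholders following the pattern from your question):

gcloud sql import sql INSTANCE_NAME gs://BUCKET-NAME/mydb.sql --database=mydb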
More details in the Google documentation: Best Practices for importing/exporting data.

PSQL_HISTORY ignored by PyCharm

I have a Django project connecting to a PostgreSQL database which I develop in PyCharm, and I want to enable PostgreSQL history logging.
There is a PSQL_HISTORY env variable set to /home/user/apps/postgres/logs/.pycharm_log, but when I start the project in PyCharm and update some data via the Django Admin (which certainly hits the database), nothing gets logged and the file is not created at all.
Is there a way to make PyCharm and PSQL_HISTORY work together as I expected?
'psql' is the name of a specific client tool; why would a completely different tool use psql's configuration options? If you want to log every statement sent to the server, you could configure that on the server side with log_statement=all.
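A minimal sketch of enabling that, assuming superuser access (alternatively, set log_statement = 'all' in postgresql.conf and reload):

ALTER SYSTEM SET log_statement = 'all';
SELECT pg_reload_conf();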

Datastage Oracle Importing Table

I get an error when I use DataStage to connect to Oracle and import a table definition. Below is the detailed situation.
Environment:
OS: AIX 6.1, 64-bit, POWER6 processor, LANG=en_US
DataStage version: 8.5
Installation profile:
All three tiers are installed on the same machine; the repository uses DB2 (the default).
Oracle Client 11.2 (64-bit) is also installed on this machine; I can use SQL*Plus to connect to the Oracle server (11.2, 64-bit, AL32UTF8) on another machine.
dsenv settings:
added "/oracle/product/11.2.0-64/lib" to LIBPATH
added "export TNS_ADMIN=/oracle/product/11.2.0-64/network/admin"
Problem:
1. I used the Oracle Connector (parallel) to create a link, then used this link to import metadata. When I press Test Connection, a dialog pops up with "The OCI function OraOCIEnvNlsCreate:OCI_UTF16ID returned status -1. Error code: NULL, Error message: NULL", and the connection fails.
2. I used Oracle Enterprise (parallel) to create a link, then used it to import metadata. When I click the ellipsis button to list all the tables in the target database, a dialog pops up with "cannot get list of table names from database"; after I click OK on this dialog, the detailed error message pops up:
12:37:21(002) Unable to access database oracleLibrary orchoracle could not be loaded; Could not load "orchoracle": 
0509-022 Cannot load module /opt/IBM/InformationServer/Server/DSComponents/bin/orchoracle.o.
0509-150 Dependent module /opt/IBM/InformationServer/Server/DSComponents/bin/libclntsh.so could not be loaded.
0509-103 The module has an invalid magic number.
0509-022 Cannot load module /opt/IBM/InformationServer/Server/DSComponents/bin/orchoracle.o.
0509-150 Dependent module /opt/IBM/InformationServer/Server/DSComponents/bin/orchoracle.o could not be loaded.
From the message I found that DataStage searches for some files in DSComponents/bin, but these files are in the Oracle bin directory. I can't find the error in the dsenv file, so I copied these files into DSComponents/bin; this time the error message changed to "OCI_ERROR: Bad Oracle environment".
I am not sure which environment variable I missed; please tell me.
3. Using the Oracle OCI (Server) stage to create a link and import a table works fine.
So, my question is: why can't I use the Oracle Connector and Oracle Enterprise stages to connect to Oracle? Thanks.
Yes, the PATH variable needs to include $ORACLE_HOME/bin. Adding this variable to the dsenv file and recycling all services fixed the Oracle Connector issue for us. It must be added to the dsenv file, and recycling ASBNode and DataStage is also required. Here are the kinds of directives needed in the dsenv file to use the Oracle Connector (the example is from our system: AIX 6.1, DataStage 8.5, connecting to Oracle 11g Enterprise):
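A sketch of such dsenv entries (the Oracle client path is illustrative, mirroring the TNS_ADMIN path below; adjust it to your installation):

ORACLE_HOME=/opt/oracle/product/11.1.0/client_1; export ORACLE_HOME
PATH=$ORACLE_HOME/bin:$PATH; export PATH
LIBPATH=$ORACLE_HOME/lib:$LIBPATH; export LIBPATH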
We also added the following :
TNS_ADMIN=/opt/oracle/product/11.1.0/client_1/network/admin; export TNS_ADMIN

Can db2 import or load be used to populate DashDB?

I'm looking to bulk load millions of rows into a dashDB database. After connecting using the DB2 CLI, I enter a command like:
db2 import from rowsToImport.csv of del insert into MY_TABLE
with results:
SQL0551N "DASHXXX" does not have the required authorization or privilege to
perform operation "BIND" on object "NULLID.SQLUAJ19". SQLSTATE=42501
Is this an inherent limitation of dashDB, or is something configured incorrectly on my client? I get a similar message when trying db2 load:
SQL2019N An error occurred while utilities were being bound to the database.
P.S. I'm aware of the REST client API for dashDB for loading data; I'm asking specifically how/whether bulk loads can be done with the DB2 command line as an alternative option.
As per the dashDB documentation, you can use the Command Line Processor Plus (CLPPlus). It is included in the dashDB driver package and provides a command-line user interface that you can use to connect to the dashDB database, BLUDB. You can use CLPPlus to define, edit, and run statements, scripts, and commands. Please also take a look at Connecting CLPPlus to the dashDB database to see how to connect and use the CLI.
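For example, a connection typically looks like this (the user and host are placeholders; 50000 is the default port, and -nw keeps CLPPlus in the console):

clpplus -nw myuser@mydashdb-host.example.com:50000/BLUDB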
Please note that in CLPPlus the IMPORT, EXPORT and LOAD commands have a restriction that the processed files must be on the server: see here. So you would have to copy the input load file onto the remote server first with SCP; however, the SSH/SCP protocol is blocked (not accessible) for a normal dashDB user.
Only geospatial data can be loaded from your local machine to dashDB, using the IDA LOADGEOSPATIALDATA command in CLPPlus.
The file to be loaded into dashDB with that command can be in the local file system, accessible to the CLPPlus user.
Alternative ways to do that are:
the dashDB REST API (as you already mentioned). See Load delimited data using the REST API and cURL.
loading the CSV directly from the dashDB dashboard on Bluemix. See Loading data from the desktop into IBM dashDB.
loading the CSV using IBM Data Studio. See dashDB large file load using IBM Data Studio.
According to this technote, the package NULLID.SQLUAJ19 belongs to one of the early DB2 10.1 fix packs, so I suspect your client version is 10.1. When you attempt to execute the IMPORT command, it needs to bind some packages of that older version, since dashDB is DB2 10.5, obviously.
You may want to try installing the latest DB2 client fix pack, as the necessary packages may be already bound in the database.
To verify that, you could run:
SELECT pkgname FROM syscat.packages WHERE pkgschema = 'NULLID' AND pkgname LIKE 'SQLUA%'
You should see "SQLUAK20", which seems to be the corresponding package in DB2 10.5.
If that doesn't work, your other option might be to move to a dedicated dashDB instance, as you won't have sufficient privileges to bind missing packages in the entry-level shared dashDB service.