We have an application that uses DB2 10.1 as its database.
A new requirement came in to interface a few tables to a HOST system running IBM DB2 VSE 7.4.
I tried to execute the LOAD command with the CLIENT option, but it fails with "SQL1325N The remote database environment does not support the command or one of the command options."
The command is:
D:\tempdata>db2 load client from app.tbl of ixf insert into host.tbl
Many posts say that LOAD from 10.1 to a DB2 VSE host is not allowed.
Another option I tried is IMPORT, but it is too slow, and we have to delete the records every time because TRUNCATE is not available.
Replication could be considered, but we would like to avoid it.
Can anyone suggest a way to achieve this? Can LOAD be used or not?
It seems LOAD cannot be used from a remote machine against this host. But then I wonder what the CLIENT option of LOAD is for.
Finally, we decided to use the IMPORT utility after deleting the HOST DB2 records. We have to run the DELETE and IMPORT commands on parts of the table; if we try to delete or import a big table in one go, it fails with a log-full error. A sketch of the approach is below.
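For reference, a minimal sketch of this kind of batched DELETE + IMPORT sequence (HOSTDB, part_col, the ranges and the input file are placeholders; the COMMITCOUNT value needs tuning to your log configuration):

db2 connect to HOSTDB
db2 "DELETE FROM host.tbl WHERE part_col BETWEEN 1 AND 100000"
db2 "DELETE FROM host.tbl WHERE part_col BETWEEN 100001 AND 200000"
db2 import from app.tbl of ixf commitcount 1000 insert into host.tbl

Deleting in ranges keeps each transaction small (the CLP commits each statement by default), and COMMITCOUNT makes IMPORT commit every N rows, so the transaction log does not fill up on large tables.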
Hope this helps.
Let me first state that I am not a DBA, but I do have a question regarding restoring remote databases using pgAdmin.
I have the pgAdmin tool (v4.27) running in a Docker container, and I use this portal to maintain two separate Postgres databases, both also running in Docker containers. I installed pgAgent in both database containers and run scheduled daily backups, defined via pgAdmin and stored in the container of the corresponding database. So far so good.
Now I want to restore one of these databases using the latest daily backup file (*.sql), but the Restore dialog of pgAdmin only looks for files stored locally (in the pgAdmin container).
Whatever I tried or searched for on the internet, it seems it is not possible to show a list of remote backup files in pgAdmin or to run a remote SQL file manually. Is this even possible in pgAdmin? Running psql in the query editor is not possible (duh ...), and since I cannot reach the remote SQL restore file, I have no idea how to run this script from within pgAdmin against the corresponding remote database container.
The only solution I can think of so far is scheduling a restore job that has no calendar and is triggered manually when needed, but that is not the prettiest solution.
Am I missing something, did I overlook the right documentation, or have I created a silly, unmaintainable setup?
Thanks in advance for thinking along and kind regards,
Aad Dijksman
You cannot restore a plain format dump (an SQL script) with pgAdmin. You will have to use psql, the command line client.
COPY statements and data are mixed in such a dump, and that would make pgAdmin choke.
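For example, a minimal restore from the command line could look like this (host, user, database, container and file names are placeholders for your setup):

psql -h <db-container-host> -U postgres -d mydb -f /backups/daily_backup.sql

Since your dumps are stored inside the database containers, you could also run psql directly in the container, e.g. docker exec -it <db-container> psql -U postgres -d mydb -f /backups/daily_backup.sql.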
The solution by @Laurenz Albe points out that it is best to use the command-line psql here, and that would be my first go-to.
However, if for whatever reason you don't have access to the command line and can only connect to this database via pgAdmin, there is another solution, which you can find here:
Export and import table dump (.sql) using pgAdmin
I recommend looking at the solution by Tomas Greif.
I'd like to upgrade my existing Cloud SQL Postgres 9.6 instance to 11 in order to use some new PG 11 features.
I've been trying to figure out a good migration plan, but it seems the only option available is a SQL dump and restore. The database is 100 GB+, so this will take quite some time, and I'd like to avoid downtime as much as possible. Are there any other options? I was considering enabling statement logging (log_statement=mod), creating a dump, importing it into a PG 11 instance, taking down the old DB, and then scraping the logs to replay the latest updates into the PG 11 instance by downloading the logs and writing a script to re-run the inserts. It seems doable, but it doesn't feel nice.
I am wondering if anyone has faced this before and found any other solutions?
Postgres 11 on Cloud SQL is still in Beta. It is not recommended to use a Beta product in a production environment.
However, should you choose to proceed, you must export the data by either creating a SQL dump or putting the data into a .csv file (depending on your needs; see the best practices), create a Postgres 11 instance, and then import the data.
For the data that won’t be in the dump, you can either:
a) Do what you have suggested: log the queries and then re-run the inserts.
b) Create a dump, import it onto the new instance, make it live, and then take another dump of the old one, compare to remove duplicates, and import the differences. This will be difficult if you have auto-incrementing primary keys.
c) Create the schema on the Postgres 11 instance and deploy it, then create the dump and import it at a later time. If you have auto-incrementing primary keys, alter the schema so they start at a value of your choosing (see the sketch below).
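As a rough sketch of the dump-and-import flow and the sequence adjustment from option c) (instance addresses, bucket, database, table and sequence names are all placeholders; Cloud SQL imports are typically staged through a Cloud Storage bucket):

pg_dump -h <old-instance-ip> -U postgres mydb > mydb.sql
gsutil cp mydb.sql gs://my-bucket/mydb.sql
gcloud sql import sql my-pg11-instance gs://my-bucket/mydb.sql --database=mydb
psql -h <new-instance-ip> -U postgres -d mydb -c "ALTER SEQUENCE my_table_id_seq RESTART WITH 1000000;"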
I'm looking to bulk load millions of rows into a dashDB database. After connecting using the DB2 CLI, I enter a command like:
db2 import from rowsToImport.csv of del insert into MY_TABLE
with results:
SQL0551N "DASHXXX" does not have the required authorization or privilege to
perform operation "BIND" on object "NULLID.SQLUAJ19". SQLSTATE=42501
Is this an inherent limitation of dashDB, or is something configured incorrectly on my client? I get a similar message when trying db2 load:
SQL2019N An error occurred while utilities were being bound to the database.
P.S. I'm aware of the REST API for loading data into dashDB - I'm asking specifically how/if bulk loads can be done with the DB2 command line as an alternative option.
As per the dashDB documentation, you can use the Command Line Processor Plus (CLPPlus). It is included in the dashDB driver package and provides a command-line user interface that you can use to connect to the dashDB database, BLUDB. You can use CLPPlus to define, edit, and run statements, scripts, and commands. Please also take a look at Connecting CLPPlus to the dashDB database to see how to connect and use the CLI.
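For example, a CLPPlus connection from a client machine generally looks like this (host name, port and user are placeholders; the actual values come from your dashDB connection settings):

clpplus -nw dashuser@<dashdb-hostname>:50000/BLUDB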
Please note that in CLPPlus the IMPORT, EXPORT and LOAD commands have a restriction that the processed files must be on the server: see here. So you would have to copy the input load file onto the remote server first with SCP; however, the SSH/SCP protocol is normally blocked (not accessible) for a regular dashDB user.
Only geospatial data can be loaded from your local machine into dashDB, using the IDA LOADGEOSPATIALDATA command in CLPPlus.
The file to be loaded into dashDB with the above command can be in the local file system, accessible to the CLPPlus user.
Alternative ways to do that are:
dashDB REST API (as you already mentioned). See Load delimited data using the REST API and cURL.
load the csv directly from the dashDB dashboard on Bluemix. See Loading data from the desktop into IBM dashDB.
load the csv using IBM Data Studio. See dashDB large file load using IBM Data Studio.
According to this technote, the package NULLID.SQLUAJ19 belongs to one of the early DB2 10.1 fix packs, so I suspect your client version is 10.1. When attempting to execute the IMPORT command, it needs to bind some packages of that older version, since dashDB is DB2 10.5, obviously.
You may want to try installing the latest DB2 client fix pack, as the necessary packages may already be bound in the database.
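To see which client level and fix pack you currently have installed, you can run db2level on the client machine; it reports the DB2 product version and fix pack:

db2level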
To verify, you could run the following query -- you should see SQLUAK20, which seems to be the corresponding package in DB2 10.5:
select pkgname from syscat.packages where pkgschema = 'NULLID' and pkgname like 'SQLUA%'
If that doesn't work, your other option might be to move to a dedicated dashDB instance, as you won't have sufficient privileges to bind missing packages in the entry-level shared dashDB service.
While restoring a (pg_dump-produced) database dump, I get the following error:
Cannot execute COPY FROM on a distributed table on master node
How can I work around this?
COPY support was added in Citus 5.1, which was released in May 2016 and is available in the official PostgreSQL Linux package repositories (PGDG).
Are you trying to load data via pg_dump output? Creating distributed tables is slightly different from creating regular tables, and requires picking a partition column and a partitioning method. Take a look at the docs for more information on both.
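A minimal sketch of what that looks like (database, table and column names are placeholders; recent Citus versions provide create_distributed_table, while Citus 5.x used master_create_distributed_table with an explicit partitioning method):

psql -d mydb -c "SELECT create_distributed_table('my_table', 'id');"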
After a painful installation of hadoop_fdw into our running PostgreSQL 9.3.4, I am trying to connect it to a Cloudera cluster 5.2.0 with no luck.
Is there a way to debug the FDW? After creating the foreign table and selecting from it, I just get an error: ERROR: failed to connect to Hive: No more data to read.
btw: some older version of hadoop_fdw was capable of using a URL (jdbc://server:port/args), but not the recent version; there's just address & port.
hadoop_fdw didn't make it. There's probably something wrong/old/obsolete in hive.c. But with even more effort we managed to make jdbc_fdw work with the Cloudera JDBC drivers. The steps were as follows:
1) install the jdbc_fdw extension
2) merge all driver jar files into one (see the sketch below)
3) CREATE SERVER cloudera2 FOREIGN DATA WRAPPER jdbc_fdw OPTIONS(drivername 'com.cloudera.hive.jdbc4.HS2Driver', url 'jdbc:hive2://fqdn:10000;user=hive', querytimeout '15', jarfile '/opt/cloudera/combined.jar');
Mental note: set client_min_messages to debug5; can help you identify where the problem is, e.g. driver not found etc.
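A rough sketch of the jar merge from step 2 (the driver jar directory and output path are placeholders; note that files with the same name across jars will overwrite each other):

mkdir /tmp/combined && cd /tmp/combined
for j in /opt/cloudera/jdbc/*.jar; do jar xf "$j"; done
jar cf /opt/cloudera/combined.jar .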