I am trying to connect to DB2 LUW. I am aware that if I catalog the TCP/IP node and the database, I will be able to connect to the DB. Example:
db2 catalog tcpip node mynode remote 20.40.20.40 server 5555
db2 catalog database mydb as mydb at node mynode
db2 terminate
db2 connect to mydb user myuser using mypassword
However, if I am required to connect to various DBs, does that mean I HAVE TO go through the catalog process every time a new DB is involved? Or is there a way to connect without it? I did find this article in the IBM KB, but it's for DB2 for z/OS. Currently, if I try the following syntax:
db2 connect to 20.40.20.40:5555/mydb user myuser using mypassword
I get an error:
SQL0104N An unexpected token "20.40.20.40:5555/mydb" was found following "TO". Expected tokens may include: "<database-alias>". SQLSTATE=42601
You can do this with CLPPlus, which is written in Java and therefore uses a JDBC driver:
clpplus myuser@20.40.20.40:5555/mydb
but not with the traditional CLP.
You can use the IBM data server driver configuration file (db2dsdriver.cfg), in which you can specify your databases without cataloging them. There is a detailed description of the format and how to do this.
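For example, once such a configuration file exists, pointing the CLP at it and connecting looks roughly like this (the file path here is only an illustration; the wrapper script below generates and exports it on the fly):
# Illustrative path; the wrapper below creates this file automatically
export DB2DSDRIVER_CFG_PATH=/tmp/db2dsdriver.cfg
db2 connect to mydb user myuser using mypassword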
One may use a simple wrapper script which parses the "URL" passed to it and generates such a configuration file on the fly.
It must be invoked in the "dot space filename" (sourced) mode, so that the connection is established in the current shell session.
#!/bin/sh
# Must be sourced (". ./db2connect ...") so that the db2 connection
# persists in the current shell session.
if [ $# -eq 0 ]; then
  echo "Usage: . ./db2connect host:port/dbname USER username [USING password]" >&2
  return 1
fi
DSN=${1}
CFGFILE=./db2dsdriver.cfg.$$
# Split "host:port/dbname" into its parts
dbname=${DSN#*/}
hp=${DSN%/*}
host=${hp%:*}
port=${hp#*:}
# Generate a temporary IBM data server driver configuration file
cat > ${CFGFILE} <<EOF
<configuration>
 <dsncollection>
  <dsn alias="${dbname}" name="${dbname}" host="${host}" port="${port}"/>
 </dsncollection>
 <databases>
  <database name="${dbname}" host="${host}" port="${port}"/>
 </databases>
</configuration>
EOF
# Point the CLP at the generated file, connect, then restore the old setting
cfg_bkp=${DB2DSDRIVER_CFG_PATH}
export DB2DSDRIVER_CFG_PATH=${CFGFILE}
shift
db2 connect to ${dbname} "$@"
export DB2DSDRIVER_CFG_PATH=${cfg_bkp}
rm -f ${CFGFILE}
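With the host, port, and database from the question, an invocation would look like this:
. ./db2connect 20.40.20.40:5555/mydb USER myuser USING mypassword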
Does it work for you?
My goal is to have an automatic database backup that will be sent to my S3 bucket.
Jelastic has good documentation on how to run pg_dump inside the database node/container, but in order to obtain the backup file you have to do it manually using an FTP add-on!
But as I said earlier, my goal is to send the backup file automatically to my S3 bucket. What I tried to do is run pg_dump from my app node instead of the PostgreSQL node (hopefully I can have some control from the app side). The command I run basically looks like this:
PGPASSWORD="my_database_password" pg_dump --host "nodeXXXX-XXXXX.jelastic.XXXXX.net"
-U my_db_username -p "5432" -f sql_backup.sql "database_name" 2> $LOG_FILE
The output of my log file is:
pg_dump: server version: 10.3; pg_dump version: 9.4.10
pg_dump: aborting because of server version mismatch
The issue here is that the database node has a different pg_dump version than the nginx/app node, so the backup can't be performed! I looked around but can't find an easy way to solve this. I'm open to any alternative way that helps achieve my initial goal.
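For reference, assuming the version mismatch can be solved (for example by installing a pg_dump client on the app node that matches the 10.x server), the kind of pipeline I have in mind would look roughly like this, streaming the dump to S3 with the AWS CLI (the bucket name is a placeholder):
# Sketch only: needs a pg_dump matching the server version and an AWS CLI
# configured with credentials; "my-backup-bucket" is a placeholder
PGPASSWORD="my_database_password" pg_dump \
  --host "nodeXXXX-XXXXX.jelastic.XXXXX.net" -U my_db_username -p 5432 \
  "database_name" | gzip | aws s3 cp - "s3://my-backup-bucket/sql_backup.sql.gz"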
Your help will be very much appreciated.
We are using MongoDB version 3.2.9, sharded cluster, on RHEL 7.2.
While trying to restore the admin database via mongorestore we get the following error:
restoring users from /home/mongod/admin/system.users.bson
error: Writes to config servers must have batch size of 1, found 11
Indeed, there are 11 users in the source database.
The system.users collection contains 11 documents.
But why would the restore fail? The error message is not clear to us.
Restore of other databases was successful.
Same result while trying to restore with and without authentication being enabled.
Thanks in advance.
You have to use an additional parameter: --batchSize=1 in your mongorestore command.
mongorestore --host <host url>:<PORT> --db <db name> -u <user name> <path to your local backup> --batchSize=1
tip found here
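Applied to the admin database restore from the question, the call would look something like this (host and port are placeholders; the dump directory comes from the log in the question; add -u/-p if authentication is enabled):
mongorestore --host myhost:27017 --db admin /home/mongod/admin/ --batchSize=1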
I'm fairly new to AWS in general. I'm currently trying to replicate work by another group and therefore am attempting to mimic their setup. I've established an EC2 instance (Amazon Linux AMI) and a PostgreSQL 9.3.5 RDS instance. I've uploaded a 4 GB csv file to EC2 and would like to copy it to a table in my RDS db. I used the following code within the EC2 shell (following 2nd set of instructions here):
psql -h XX.us-west-2.rds.amazonaws.com -U username -d DBname -p 5432 -c "\copy tablename from 'data.csv' with DELIMITER ',';"
After giving my password I get the error "psql: FATAL: could not write init file". I think this psql client may be version 9.2; is that something that matters? Or is this the wrong syntax for this type of transfer? Or could it be related to using free-tier instance sizes, which I believe have a 5 GB limit? I think I should be under that limit, but would it tell me if that were the problem? Any help would be much appreciated.
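For reference, the two versions can be compared like this (the SHOW query only works once a connection succeeds; the RDS instance was created as 9.3.5):
# Client version on the EC2 instance
psql --version
# Server version reported by the RDS instance, if a connection can be made
psql -h XX.us-west-2.rds.amazonaws.com -U username -d DBname -p 5432 -c "SHOW server_version;"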
I'm not sure if I used the right terminology in my question, but here's what I'm trying to do. From the command line, I am used to running the following command:
psql -U postgres
and then I can see a list of all my databases by doing:
postgres=# \l
I'm wondering how to do the same thing programmatically in Lua.
The following function is what I currently use to connect to a specific db:
local luasql = require "luasql.postgres"

local db_env, db_con
local connect_db = function()
  if not db_con then
    db_env = assert (luasql.postgres())
    db_con = assert (db_env:connect(databasename, databaseUser, databasepassword))
  end
end
Just wondering how I would change it to connect to the PostgreSQL server instance and see all the DBs hosted by my server.
Thanks.
Edit 1
Perhaps when I'm running the command
psql -U postgres
it is connecting to a default database?
In your code, you have to connect to the database server (for example to the default postgres database) and query it for a list of databases. The server will return a recordset containing the databases it hosts.
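The list itself lives in the pg_database catalog. As a sketch from the shell (the same SELECT can be passed to db_con:execute() in the function above):
# -t gives tuples-only output; template databases are filtered out
psql -U postgres -t -c "SELECT datname FROM pg_database WHERE NOT datistemplate;"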
A great walk-through of connecting to Postgres and executing queries (like the one that hjpotter92 posted above):
Hitting Postgres From Lua
Hope you find it helpful
It might be a dead simple question, yet I still wanted to ask. I've created a Node.js application and deployed it on Heroku. I've also set up the database connection without any trouble.
However, I cannot load the local data from my MongoDB into the MongoLab instance I use on Heroku. I've searched Google and could not find a useful solution, so I ended up trying these commands:
mongodump
And:
mongorestore -h mydburl:mydbport -d mydbname -u myusername -p mypassword --db Collect.1
Now when I run the mongorestore command, I receive the error:
ERROR: multiple occurrences
Import BSON files into MongoDB.
When I take a look at the MongoDB data directory I specified and used during local development, I see that there are files Collect.0, Collect.1 and Collect.ns. Now I know that my DB name is 'Collect', since when I use the shell I always type `use Collect`. So I specified the db as Collect.1 on the command line, but I still receive the same error. Should I remove all the other Collect files, or is there another way around this?
You can't use 'mongorestore' against the raw database files. 'mongorestore' is meant to work off of a dump file generated by 'mongodump'. First use 'mongodump' to dump your local database, and then use 'mongorestore' to restore that dump file.
If you go to the Tools tab in the MongoLab UI for your database, and click 'Import / Export' you can see an example of each command with the correct params for your database.
Email us at support@mongolab.com if you continue to have trouble.
-will
This can be done in two steps.
1. Dump the database
mongodump -d mylocal_db_name -o dump/
2. Restore the database
mongorestore -h xyz.mongolab.com:12345 -d remote_db_name -u username -p password dump/mylocal_db_name/