Can an EnterpriseDB database be migrated to Cloud SQL on Google Cloud? - postgresql

I'm currently trying to migrate a SQL dump file from an EnterpriseDB server to the Cloud SQL product on Google Cloud.
While searching the logs I found these messages:
"FATAL: pg_hba.conf rejects connection for host "201.xxx.xx.xxx", user "postgres", database "cloudsqladmin", SSL off"
Also:
"ERROR: unrecognized configuration parameter "edb_redwood_date""
The file I'm trying to import is an EnterpriseDB PostgreSQL dump (version 9.6.6.11), and I want to migrate it to Cloud SQL for PostgreSQL (9.6).
As far as I know, pg_hba.conf is a configuration file on the EnterpriseDB server, so what does this file have to do with the import? Do I need a dump from a plain (community) PostgreSQL server to migrate?
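In case it's relevant, I was considering pre-filtering the dump to drop the EDB-only settings before importing, roughly like this (just a sketch; dump.sql stands for my dump file):

grep -n 'edb_' dump.sql                  # list any EDB-specific parameters in the dump
sed '/^SET edb_/d' dump.sql > clean.sql  # drop EDB-only SET lines such as edb_redwood_date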
Thanks in advance.

Related

Postgres ODBC Connection error: FATAL: No pg_hba.conf entry

I am trying to create a linked server between the warehouse and an Amazon cloud service.
The service provider is using a PostgreSQL database.
I have installed the ODBC Driver (12.10) on my server, but I keep getting this error.
I am not sure how to work around this as I have never used Postgres before.
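For context, the rule the error message refers to lives in the server's pg_hba.conf. A typical entry that would accept an SSL connection from one client looks roughly like this (the address is a placeholder, and on a managed cloud service you generally cannot edit this file yourself):

# TYPE   DATABASE   USER     ADDRESS           METHOD
hostssl  all        myuser   203.0.113.10/32   md5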

Connecting to Cloud SQL from Azure Data Studio using an IAM user

Following the instructions here, I'm having problems connecting to the DB from Azure Data Studio using the token I generate. It connects to the DB successfully, but as soon as I run a simple query (I already gave my user read access there), it gives me this connection error; I then need to connect using the token again, and the disconnection happens again randomly after a short while:
FATAL: Cloud SQL IAM user authentication failed for user "user@company.com"
FATAL: pg_hba.conf rejects connection for host "...", user "user@company.com", database "db-name", SSL off
I did some searching and found there is also a way of logging in with IAM database authentication using the Cloud SQL Auth proxy, but the documentation is limited to the PostgreSQL command line and doesn't cover a GUI database tool like Azure Data Studio. Can anyone shed some light on what's needed if you want to connect with a GUI tool in this case?
And about changing the pg_hba.conf file: since I work with a Cloud SQL instance, I'm not sure how to turn sslmode off on the cloud instance. I checked the connections tab of my instance and SSL encryption wasn't checked there (not sure if that's the same thing), and I changed the sslmode to disable in Azure Data Studio for the connection, but it won't allow me to connect after this change:
FATAL: pg_hba.conf rejects connection for host "*.*.*.*", user "user@company.com", database "database", SSL off
Help, anyone?
I've found the answer: we can connect using IAM database authentication via the Cloud SQL Auth proxy. The only remaining step in the GUI DB tool (mine is Azure Data Studio) is to connect to the IP the Cloud SQL Auth proxy listens on (127.0.0.1 is the default, and what I used) after starting the proxy with:
./cloud_sql_proxy -instances=<GCPproject:Region:DBname>=tcp:127.0.0.1:5432
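To sketch the full flow (the instance connection name and the user are placeholders; -enable_iam_login is the v1 proxy flag that makes the proxy handle IAM database authentication itself, if your proxy build supports it):

./cloud_sql_proxy -instances=my-project:us-central1:my-instance=tcp:127.0.0.1:5432 -enable_iam_login
psql "host=127.0.0.1 port=5432 user=user@company.com dbname=db-name"

A GUI tool then simply points at host 127.0.0.1 and port 5432 instead of the instance's public IP.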

DB2 CLP connect to remote DB

I downloaded and installed on Windows the following:
IBM DB2 Runtime Client (64-Bit) 10.5
with the aim of connecting to a remote server database.
It installed here:
C:\Program Files\IBM\SQLLIB
But I don't see any DB2 folders in there.
I tried to catalog the remote db like this:
db2 catalog tcpip node testing remote the.server.com server 446
If I then try to connect to it, I get the following:
SQL1031N The database directory cannot be found on the indicated file system.
There is a wizard installed called the 'Default DB2 and IBM Database Client Interface Selection Wizard'. I ran this and it said it would create a default DB2 copy called DB2COPY1, which would be used by default and installed to C:\Program Files\IBM\SQLLIB.
But I'm not sure what this actually does.
What do I need to do here to connect to the remote DB2?
EDIT:
I have managed to get a bit further based on this article here:
https://www-01.ibm.com/support/docview.wss?uid=swg21008914
My current commands look like this:
db2 catalog tcpip node tstnode remote my.server.com server 446
db2 catalog db db1name as mytstdb at node tstnode authentication server
db2 catalog dcs db db1name as A123456DAT
db2 terminate
db2 connect to mytstdb user <username> using <password>
However, the connect fails with:
SQL30061N The database alias or database name "A123456DAT " was not
found at the remote node. SQLSTATE=08004
Any ideas?
If you are connecting through port 446, I guess you are trying to connect to Db2 for z/OS or Db2 for IBM i. If so, you will need at least Db2 Connect.
Regarding the error "SQL30061N The database alias or database name "A123456DAT " was not found at the remote node. SQLSTATE=08004": it happens to me when the user ID lacks certain privileges on the source system. If it is an IBM i, look at the corresponding spool file. DRDA connections are served by jobs called QRWTSRVR. With the IBM i command WRKSPLF SELECT(USERID) (replacing USERID with the user attempting the DRDA connection) you can see the spool files for the jobs related to your connection. The spool file messages are usually very specific about the cause of the failure.
If you are trying to connect to Db2 for z/OS, I don't have experience there.
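In either case, it is worth double-checking what the client actually has catalogued. These standard CLP commands print the three directories involved (node, database alias, and DCS entry):

db2 list node directory
db2 list db directory
db2 list dcs directory

The DCS entry's target name (A123456DAT above) has to match the location name of the remote database exactly; a mismatch there is a frequent cause of SQL30061N.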

IBM Cloud: Connecting SQuirreL to Databases for PostgreSQL gives SSL error

I provisioned Databases for PostgreSQL on IBM Cloud. Now I'm trying to connect SQuirreL to my database, but my attempts result in this error:
FATAL: no pg_hba.conf entry for host "xx.xx.xx.xx", user "myuser",
database "my-database", SSL off
Is this related to the JDBC driver or to an SSL setting? The credentials say sslmode=verify-full, but I'm not sure how to specify that in SQuirreL.
I was able to connect with the standard JDBC driver for PostgreSQL after changing the driver properties:
Simple, but not secure approach:
- ssl=true
- sslfactory=org.postgresql.ssl.NonValidatingFactory
Secure, more effort:
- download the SSL certificate as provided in the credentials
- add ?sslmode=verify-full&sslrootcert=path-to-certificate to the connection URI
Now SQuirreL connects to my database on IBM Cloud Databases for PostgreSQL. This also works with Hyper Protect DBaaS.
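For example, the full JDBC URL for the verify-full variant would look roughly like this (host, port, database name, and certificate path are placeholders taken from your service credentials):

jdbc:postgresql://my-host.databases.appdomain.cloud:31234/my-database?sslmode=verify-full&sslrootcert=/path/to/ca-certificate.crt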

Creating a mysqldump file

Goal: migrate Google Cloud SQL First Generation to Second Generation
Exporting Data from Cloud SQL is working fine.
https://cloud.google.com/sql/docs/backup-recovery/backing-up
But:
Note: If you are exporting your data for use in a Cloud SQL instance, you must use the instructions provided in Exporting data for Import into Cloud SQL. You cannot use these instructions.
So I get to this page:
Exporting Data for Import into Cloud SQL
https://cloud.google.com/sql/docs/import-export/creating-mysqldump-csv#mysqldump
This page describes how to create a mysqldump or CSV file from a MySQL database that does not reside in Cloud SQL.
The instructions are not working:
mysqldump --databases [DATABASE_NAME] -h [INSTANCE_IP] -u [USERNAME] -p \
--hex-blob --skip-triggers --set-gtid-purged=OFF --default-character-set=utf8 > [DATABASE_FILE].sql
mysqldump: unknown variable 'set-gtid-purged=OFF'
How do I create a mysqldump file for import into Cloud SQL Second Generation?
Thanks in advance,
Sander
Edit:
Using Google Cloud SQL First Generation via the Google Cloud console, I removed set-gtid-purged=OFF.
Result:
Enter password:
mysqldump: Got error: 2013: Lost connection to MySQL server at 'reading initial communication packet', system error: 0 when trying to connect
s@folkloric-alpha-618:~$
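A side note on the new error: 2013 at this point usually means the client never reached MySQL at all, and one common cause is that the instance does not authorize connections from the machine running mysqldump. With gcloud, that authorization looks roughly like this (the instance name and CIDR are placeholders):

gcloud sql instances patch my-instance --authorized-networks=203.0.113.10/32

After that, rerun the mysqldump command against the instance IP.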
Regarding set-gtid-purged: please verify which mysql-client version you have installed. Many OSes ship the MariaDB version, which does not support this flag (since MariaDB's implementation of GTID is different).
The official Oracle mysql client has supported this flag since 5.6.9.
To verify your package run:
mysqldump --version
If you get this, you don't have the official client:
mysqldump Ver 10.16 Distrib 10.1.41-MariaDB, for debian-linux-gnu (x86_64)
The official client would be something like this:
mysqldump Ver 10.13 Distrib 5.7.27, for Linux (x86_64)
If you want to change the version, you can use Oracle's official repository.
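On Debian/Ubuntu, for instance, switching clients typically comes down to this (a sketch, assuming the MySQL APT repository from https://dev.mysql.com/downloads/repo/apt/ has been added):

sudo apt-get update
sudo apt-get install mysql-client   # replaces the MariaDB-provided client
mysqldump --version                 # should now report an Oracle Distrib such as 5.7.x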