Goal: migrate a Google Cloud SQL First Generation instance to Second Generation.
Exporting data from Cloud SQL works fine:
https://cloud.google.com/sql/docs/backup-recovery/backing-up
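For reference, a sketch of the export step that works, using a current gcloud CLI; the instance, bucket, and database names are placeholders:
# Export one database from a Cloud SQL instance to a Cloud Storage bucket
gcloud sql export sql INSTANCE_NAME gs://BUCKET_NAME/export.sql --database=DATABASE_NAME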
But:
Note: If you are exporting your data for use in a Cloud SQL instance, you must use the instructions provided in Exporting data for Import into Cloud SQL. You cannot use these instructions.
So I get to this page:
Exporting Data for Import into Cloud SQL
https://cloud.google.com/sql/docs/import-export/creating-mysqldump-csv#mysqldump
This page describes how to create a mysqldump or CSV file from a MySQL database that does not reside in Cloud SQL.
The instructions are not working:
mysqldump --databases [DATABASE_NAME] -h [INSTANCE_IP] -u [USERNAME] -p \
--hex-blob --skip-triggers --set-gtid-purged=OFF --default-character-set=utf8 > [DATABASE_FILE].sql
mysqldump: unknown variable 'set-gtid-purged=OFF'
How do I create a mysqldump for import into Cloud SQL Second Generation?
Thanks in advance,
Sander
Edit:
Using Google Cloud SQL First Generation via the Google Cloud console, I removed --set-gtid-purged=OFF.
Result:
Enter password:
mysqldump: Got error: 2013: Lost connection to MySQL server at 'reading initial communication packet', system error: 0 when trying to connect
s#folkloric-alpha-618:~$
Regarding set-gtid-purged: please verify which mysql client version you have installed. Many OSes ship the MariaDB version, which does not support this flag (since their implementation of GTID is different).
I know the official Oracle mysql client has supported this flag since 5.6.9.
To verify your package, run:
mysqldump --version
If you get this, you don't have the official client:
mysqldump Ver 10.16 Distrib 10.1.41-MariaDB, for debian-linux-gnu (x86_64)
The official client would be something like this:
mysqldump Ver 10.13 Distrib 5.7.27, for Linux (x86_64)
If you want to change the version, you can use their official repository, as sketched below.
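A minimal sketch for Debian/Ubuntu; the mysql-apt-config package comes from https://dev.mysql.com/downloads/repo/apt/ and its exact file name changes over time:
# Register MySQL's APT repository, then install the Oracle client
sudo dpkg -i mysql-apt-config_*_all.deb
sudo apt-get update
sudo apt-get install mysql-client
mysqldump --version   # should now report an Oracle build (5.6.9+), not MariaDB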
Related
I downloaded the PostgreSQL .dmp file from the ChEMBL database.
I want to import this into GCP Cloud SQL.
When I run the import from the console or with the gcloud command, I get the following error:
Importing data into Cloud SQL instance...failed.
ERROR: (gcloud.sql.import.sql) [ERROR_RDBMS] exit status 1
The input is a PostgreSQL custom-format dump.
Use the pg_restore command-line client to restore this dump to a database.
Can I import custom-format .dmp files without using the pg_restore command?
https://cloud.google.com/sql/docs/postgres/import-export/importing
That page describes pg_restore, but I couldn't get it to work.
For custom-format files, is it necessary to run pg_restore after uploading them to Cloud Shell?
According to the Cloud SQL docs:
Only plain SQL format is supported by the Cloud SQL Admin API.
The custom format is allowed if the dump file is intended for use with pg_restore.
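In other words, a custom-format dump can be restored by pointing pg_restore directly at the instance. A sketch, assuming the instance has a public IP your client is authorized for; the host, user, database, and file names are placeholders:
# Restore the custom-format dump straight into the Cloud SQL database
pg_restore --no-owner -h INSTANCE_IP -U postgres -d target_db chembl.dmp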
If you cannot use pg_restore for some reason, I would spin up a local Postgres instance (e.g., on your laptop) and use pg_restore to restore the database.
After loading into your local database, you can use pg_dump to dump to a file in plain-text format, then load it into Cloud SQL with the console or the gcloud command.
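A sketch of that round trip, assuming a local Postgres installation; the database, file, bucket, and instance names are all placeholders:
# 1. Restore the custom-format dump into a fresh local database
createdb chembl
pg_restore --no-owner -d chembl chembl.dmp
# 2. Re-dump it in plain SQL format
pg_dump --no-owner --format=plain chembl > chembl.sql
# 3. Stage the file in Cloud Storage and import it into Cloud SQL
gsutil cp chembl.sql gs://BUCKET_NAME/chembl.sql
gcloud sql import sql INSTANCE_NAME gs://BUCKET_NAME/chembl.sql --database=chembl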
I have a Postgres 12 database in use on Heroku and Postgres 11 installed on my macOS workstation. When I try to restore the dump file provided to me by Heroku, I get:
$ pg_restore --verbose --no-owner -h localhost -d myapp_development latest-heroku.dump
pg_restore: [archiver] unsupported version (1.14) in file header
According to Heroku's documentation, it sounds like the only option for a Heroku user who wants to access their data locally is to run Postgres 12. That seems silly.
Digging into the Postgres docs on this topic, they say:
pg_dump can also dump from PostgreSQL servers older than its own version. (Currently, servers back to version 8.0 are supported.)
That certainly sounds as if it should be possible to have pg_dump target a specific pg_restore version? But nowhere on the internet does there seem to be an example of this in action, including the Postgres docs themselves, which offer no clue about the syntax that would target dump versions "back to version 8.0".
Has anyone managed to use the pg_restore that ships with Postgres 11 to import a dump produced by the pg_dump that ships with Postgres 12?
The best answer I figured out was to upgrade via brew upgrade libpq. This upgrades psql, pg_dump, and pg_restore to the latest version (to link them I had to use brew link --force libpq). Once that upgrade was in place, I was able to dump from the Postgres 12 database on Heroku and import into my Postgres 11 database locally. I thought I might need to dump to raw SQL for that to work, but thankfully the pg-12-based pg_restore was able to import into my Postgres 11 database without issue.
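The steps, sketched for Homebrew on macOS (libpq is keg-only, hence the forced link; the database and dump names are from the question):
brew upgrade libpq              # or: brew install libpq
brew link --force libpq         # puts the new psql/pg_dump/pg_restore on PATH
pg_restore --version            # should now report the newer client
pg_restore --verbose --no-owner -h localhost -d myapp_development latest-heroku.dump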
pg_restore will refuse to handle a dump with a later version than itself - basically, if it encounters a file "from the future", it cannot know how to handle it.
So if you dump a database with pg_dump from v12, pg_restore from v11 cannot handle it.
The solution, as you have found out, is to upgrade the v11 installation.
I exported a PostgreSQL database from an external server and attempted to import it into my local server, but got this error:
unrecognized configuration parameter "idle_in_transaction_session_timeout"
Does this kind of error mean that the two servers are running different versions of PostgreSQL? I looked into that, and the external server is running:
version
PostgreSQL 9.5.4 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 4.8.2-19ubuntu1) 4.8.2, 64-bit
and my server is running:
version
PostgreSQL 9.5.5 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 5.4.0-6ubuntu1~16.04.2) 5.4.0 20160609, 64-bit
Pretty much the same thing. Is there a site where you can see all of the valid config parameters for each version? And is there a way to sync up two databases like this, so incompatibilities like this get patched up automatically?
According to the PostgreSQL 9.6 Release Notes, the idle_in_transaction_session_timeout parameter was introduced in version 9.6.
E.2.3.1.10. Server Configuration
Allow sessions to be terminated automatically if they are in
idle-in-transaction state for too long (Vik Fearing)
This behavior is controlled by the new configuration parameter
idle_in_transaction_session_timeout. It can be useful to prevent
forgotten transactions from holding locks or preventing vacuum cleanup
for too long.
Since you are using version 9.5 on the server, the parameter is not recognized.
It's possible that you used version 9.6 of the PostgreSQL client to export data from the source 9.5 server, and the parameter was introduced into the dump file. If this was the case, I would recommend using a 9.5 client version to export and import the data.
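A quick way to check both versions, using standard client commands (the host is a placeholder):
pg_dump --version                            # version of the client producing the dump
psql -h SOURCE_HOST -c "SELECT version();"   # version of the source server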
The accepted answer is the way to go, but if for some reason you can not upgrade version, here is a workaround.
Export using the plain-text format. You probably want to use compression too: pg_dump -F p -Z 9 dbname > file.sql.gz (with plain output, -Z gzip-compresses the whole file).
Before importing, we need to remove the offending parameter. To do that we can use zcat and grep: zcat file.sql.gz | grep -vw "idle_in_transaction_session_timeout" | psql -d newdb
Note that there are drawbacks to using psql instead of pg_restore. For instance, you cannot use -j to restore concurrently.
Can you help me with importing a planet.osm file into my PostGIS db? I am new to this and found tutorials only for Linux.
There are some commands, but I do not know how to use them. I would be grateful for a step-by-step tutorial. I'm using GeoServer, if that is important for helping me. Thanks for any advice.
Edit:
I used osm2pgsql -s -U postgres -d nameofdatabase name.osm
but it was unsuccessful: I got an error saying no database was found.
I used OGR2OGR to import OSM data in .pbf format on Windows (Windows 10, Postgres 9.6 with PostGIS 2.3). You can use OGR2OGR from the "OSGeo4W shell", which comes with QGIS, or you can get OSGeo4W separately. The steps are something like this:
Create a new database: CREATE DATABASE db_for_osm
Create the PostGIS extension in your db. In SQL: CREATE EXTENSION postgis (see the psql sketch after the ogr2ogr command below)
Now you can run OGR2OGR. Open the "OSGeo4W shell"; this opens a command window with all the environment variables set. The command will be something like:
ogr2ogr -f PostgreSQL PG:"dbname='db_for_osm' host='localhost' port='5432' user='myuser' password='mypassword'" planet.osm.pbf
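For steps 1 and 2, here is the equivalent from a command line, assuming psql is on the PATH and a postgres superuser; db_for_osm is the placeholder name from above:
psql -U postgres -c "CREATE DATABASE db_for_osm;"
psql -U postgres -d db_for_osm -c "CREATE EXTENSION postgis;"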
My large upload took a couple of days to complete, so be prepared for this to take a long time. I suggest you do a test with a small region first; for the test in this answer I downloaded a city extract from Mapzen.
I have a database file (*.db) that needs to be recovered.
The bad news: the end user has no idea which version of the database it is and does not know the password. The original developer is gone, the computer where it was installed was formatted, and we have no experience with this database software. Yeah, a nightmare.
My guess is that it is an old database. I'm trying to open it in Sybase 11, dev edition.
I followed these steps: http://dcx.sybase.com/1101en/sachanges_en11/unloading-reloading-upgrading-newjasper.html
I tried to use the UNLOAD utility from the command line and from the Sybase Central utility. From the command line I do:
./dbinfo -c "DBF=/Users/mamcx/Downloads/CEMDE_ENDOCRINO_S.A.DB;UID=DBA;PWD=sql"
SQL Anywhere Information Utility Version 11.0.1.2045
Unable to start specified database: '/Users/mamcx/Downloads/CEMDE_ENDOCRINO_S.A.DB' was created by a different version of the software
Ok, I try to unload:
./dbunload -c "DBF=/Users/mamcx/Downloads/CEMDE_ENDOCRINO_S.A.DB;UID=DBA;PWD=sql" -n /Users/mamcx/Desktop/
SQL Anywhere Unload Utility Version 11.0.1.2045
Connecting and initializing
***** SQL error: Unable to start database server
Ok, from the server admin tool:
dbunload -v -c "UID=dba;PWD=***;DBF=/Users/mamcx/Downloads/CEMDE_ENDOCRINO_S.A.DB" -an "/Users/mamcx/Desktop/baba.db" -ap 4096 -ea None -ii -sa -so _sc866192545
Connecting and initializing
***** SQL error: Unable to start database server
An error occurred while attempting to unload the database '/Users/mamcx/Downloads/CEMDE_ENDOCRINO_S.A.DB'.
Is there a way to find out which version of the database server created this file? Is it possible to recover it?
I don't know how to get the version out of the database file if you are not able to start it.
You could get a hint from the (hopefully still existing) client PCs: check the ODBC driver version they have installed.
I had good success with Sybase support. If you or your client has a support contract, you can get them involved.
HTH
Try to simply start a server with that database and capture the output with -z -o server.out. The server.out file should contain a more specific error telling you why it can't start the database. This error can occur if you are trying to start something that is not a SQL Anywhere database.
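A sketch of that diagnostic run, assuming the SQL Anywhere 11 personal server executable dbeng11 that ships with the product (the file path is from the question):
./dbeng11 -z -o server.out "/Users/mamcx/Downloads/CEMDE_ENDOCRINO_S.A.DB"
# then read server.out for the specific startup error
cat server.out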
You may also want to post this question over at http://sqlanywhere-forum.sap.com/.