Sys schema on Cloud SQL MySQL 2nd gen - google-cloud-sql

I enabled the performance_schema flag on my Cloud SQL MySQL 2nd gen instance so that performance_schema becomes available, but I'm still missing the sys schema.
I think that during my experiments I saw it a few times on some instances, but I can't find the specific steps to turn it on in my current production instance.
Can anybody help?

The sys schema is installed by default as of MySQL 5.7.7. However, a 5.7 instance that was not freshly initialized on that version (for example, one upgraded from an earlier release) will not automatically pick up the sys schema from the new version. In any case, if it is not there, there is no problem with installing and enabling it manually.
The documentation indicates that its presence depends on how the database was initialized.
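If it is missing, the sys objects can be loaded manually from the sys_57.sql script in the mysql/mysql-sys project on GitHub, run through the mysql client (this assumes your Cloud SQL user has sufficient privileges to create the schema, which you should verify first). A quick way to check whether it is already there, and to spot-check it afterwards:

    -- Does the sys schema exist, and which version does it report?
    SHOW DATABASES LIKE 'sys';
    SELECT * FROM sys.version;   -- errors out if sys is not installed

    -- After loading the sys_57.sql script, spot-check one of the views:
    SELECT * FROM sys.schema_table_statistics LIMIT 5;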

Related

Any way in GORM to preview the schema DDL that will run during migration before running it?

We are using Golang and GORM for ORM and database schema migrations. Using the default AutoMigrate is nice for adding tables/columns/indexes that are net new. One big issue with Postgres, specifically AWS Aurora Postgres, is that it seems even a simple ALTER TABLE ... ADD COLUMN with no default and no NOT NULL constraint still takes a full ACCESS EXCLUSIVE lock. This caused us downtime in production for what is otherwise infrastructure we can deploy at any time with zero downtime.
Postgres claims this won't cause a heavy lock, but I think Aurora has some storage-engine magic that does cause one, probably for read-replica consistency?
Regardless, in our deploy shell script we are trying to see if we can ask GORM to give us the DDL it would run, like a preview, without actually running it. Any way to do that?
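For reference, the kind of statement AutoMigrate emits for a net-new nullable column looks like the sketch below (table and column names are invented for illustration). On stock Postgres this still takes an ACCESS EXCLUSIVE lock, just a very brief one because no table rewrite is needed, so one way to see what Aurora is actually doing during a deploy is to watch for ungranted locks from a second session while the migration runs:

    -- Illustrative DDL of the kind AutoMigrate generates (names are invented):
    ALTER TABLE orders ADD COLUMN shipped_at timestamptz;

    -- From another session while the migration runs, list lock waits:
    SELECT pid, locktype, mode, granted, relation::regclass AS rel
    FROM pg_locks
    WHERE NOT granted;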

Validating the postgres upgrade using logical replication

Currently I am trying to upgrade my Postgres 9.1 to 10 using logical replication. As 9.1 does not support native logical replication, I tried Slony and made a replica successfully.
P.S. The replica above was created using a sample dump from a year ago, which is only about 800 MB.
Now I have a few questions.
How can I validate whether the replica has all the data replicated successfully? A few people suggested putting the master into maintenance mode (a short downtime) and doing a last-N-items comparison between both databases on all the tables.
I tried with 800 MB. Will there be any problem when I go and try with 100+ GB?
Please share your personal experience here. I have been trying to document the things that could go wrong so I can anticipate the next course of action.
You may use the Data Validator that is shipped with the trial version of EDB Postgres Replication Server to validate the data between the old and new PostgreSQL databases.
You can read the details of the Data Validator in the Data Validator documentation.
To download EDB Replication Server, please follow this link: EDB Replication Server
Disclosure: I work for EnterpriseDB (EDB)
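Alternatively, the last-N / row-count comparison mentioned in the question can be scripted directly in SQL during the maintenance window. A minimal sketch (the table and column names below are placeholders):

    -- Approximate per-table row counts; run on both master and replica and diff
    -- the output (n_live_tup is an estimate, so treat it as a first-pass check):
    SELECT relname, n_live_tup FROM pg_stat_user_tables ORDER BY relname;

    -- Exact count and a simple checksum for one table:
    SELECT count(*) FROM orders;
    SELECT md5(string_agg(id::text || coalesce(updated_at::text, ''), ',' ORDER BY id))
    FROM orders;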

Gcloud SQL upgrade postgres 9.6 to 11

I'd like to be able to upgrade my existing cloudsql postgres 9.6 instance to 11 to use some new pg 11 features.
I've been trying to figure out a good migration plan, but it seems like the only option available is a SQL dump and restore. The database is 100 GB+, so this will take quite some time, and I'd like to avoid downtime as much as possible. Are there any options available? I was considering enabling statement logging (log_statement=mod), creating a dump, importing it into a pg 11 instance, taking down the db, and then scraping the logs to replay the latest updates into the pg 11 instance by downloading the logs and writing a script to re-run the inserts. That seems doable, but doesn't feel nice.
I am wondering if anyone faced this before and has had any other solutions?
Postgres 11 on Cloud SQL is still in Beta. It is not recommended to use a Beta product in a production environment.
However, should you choose to proceed, you must export the data by either creating a SQL dump or putting the data into a .csv file (depending on your needs; see the export best practices), create a Postgres 11 instance, and then import the data.
For the data that won’t be in the dump, you can either:
a) Do what you have suggested by logging the queries and then re-run the inserts
b) Create a dump, import it into the new instance, make it live, and then take another dump of the old one; compare to remove duplicates and import the differences. This will be difficult if you have auto-incrementing primary keys.
c) Create the schema on the Postgres 11 instance and deploy it, then create the dump and import it at a later time. If your primary keys are auto-incrementing, alter the sequences to start at a value of your choosing (see the sketch below).
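For option (c), the sequence adjustment could look something like this (the sequence name and starting value are placeholders; pick a value safely above the highest id the old instance will ever produce):

    -- Move the serial sequence well past the old data so the later import
    -- of historical rows cannot collide with rows written in the meantime:
    ALTER SEQUENCE orders_id_seq RESTART WITH 50000000;
    -- or, equivalently:
    SELECT setval('orders_id_seq', 50000000, false);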

Connection validation error using postgresql jdbc 4.2 driver against a 9.3 database -- apparent case-sensitivity of SEARCH_PATH keyword

Using the JDBC 4.2 implementation contained in postgresql-9.4.1212.jar, I get an error when calling the java.sql.Connection isValid() method on a connection to a PostgreSQL 9.3 database (Java 8 and Postgres both running on Windows 7).
The path to producing the error is complicated but reproducible (I will provide relevant code shortly) and involves a sequence of SQL calls on a single DB connection whose default schema is reconfigured prior to each use via an explicit execution of SET SEARCH_PATH='[some schema]'.
I find that the error occurs if and only if I render the SEARCH_PATH keyword in upper case (that is, the error does not occur if I execute SET search_path='[some schema]', only when I execute SET SEARCH_PATH='[some schema]').
Note that the direct effect of executing either variant is the same: in both cases the default schema associated with the connection is changed to [some schema]. It's just that, eventually, a downstream call to java.sql.Connection.isValid() causes the database to crash if I've used SEARCH_PATH instead of search_path.
I can see that the JDBC driver's implementation of java.sql.Connection.setSchema() uses the lower-case variant, which makes me think this apparent case-sensitivity may be a known issue, but I have found no mention of it anywhere online.
Note that the problem does not occur if I either: (1) use an older jdbc driver (postgresql-9.3.1100.jdbc4.1.jar) with my 9.3 database, or (2) use the latest jdbc driver with a postgresql 9.6 database.
I'm wondering if anyone has run into this specific problem, and also whether there are other known incompatibilities between the 9.3 database and the latest JDBC driver.
The driver is failing to invalidate the prepared statement cache because it only detects SET search_path=... when the config parameter is lower case.
See line 2056 of this commit.
I can't find an issue that describes this. Please have a look yourself and raise one if needed.
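Until the driver behavior changes, the practical workaround is to always spell the parameter name in lower case when switching schemas on a connection that uses server-side prepared statements. The server itself treats configuration parameter names case-insensitively, so both statements below do the same thing server-side; only the driver's client-side check differs:

    SET SEARCH_PATH = 'some_schema';  -- not recognized by the affected driver versions,
                                      -- so the prepared-statement cache is not invalidated
    SET search_path = 'some_schema';  -- recognized by the driver; use this form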

xWiki 6.3: changing database (HSQL to PostgreSQL) and migrating data

I have an instance of xWiki 6.3 running on the default database, i.e. HSQL.
I need to move it to PostgreSQL database.
I have installed PostgreSQL and have followed the following steps from the documentation to point xWiki 6.3 to my new PostgreSQL database.
1 - Copied jdbc drivers at required path in xwiki
2 - In the xwiki.cfg file, I have un-commented the following two lines:
xwiki.store.migration=1 (was already un-commented)
xwiki.store.migration.databases= all
3 - Commented out the HSQL-related section in hibernate.cfg.xml, and un-commented and updated the PostgreSQL-related section with the required information.
After that, once I start my xWiki 6.3 instance, it shows me a home page with an Add button. However, none of the existing content is visible.
I can see that all the tables are present in PostgreSQL if I connect to the database.
Also, I am unable to log in with the admin account that was working when the application was running on HSQL.
Any idea if I am missing something?
Regards,
I don't think your process will port any existing data (i.e. the rows, rather than the tables) from one database to the other, not least because your configuration will only know about one database at a time. I suggest you follow the guidelines to export your content as a XAR while configured for HSQL and then import it again once you've reconfigured for PostgreSQL.