Gcloud SQL upgrade postgres 9.6 to 11 - google-cloud-sql

I'd like to upgrade my existing Cloud SQL Postgres 9.6 instance to 11 so I can use some of the new PG 11 features.
I've been trying to work out a good migration plan, but it seems the only option available is a SQL dump and restore. The database is 100 GB+, so this will take quite some time, and I'd like to avoid downtime as much as possible. Are there any other options? I was considering enabling statement logging (log_statement=mod), creating a dump, importing it into a PG 11 instance, taking down the old database, and then scraping the logs to replay the latest updates into the PG 11 instance by downloading the logs and writing a script to re-run the inserts. Seems doable, but it doesn't feel nice.
I'm wondering if anyone has faced this before and found any other solutions?

Postgres 11 on Cloud SQL is still in beta. It is not recommended to use a beta product in a production environment.
However, should you choose to proceed, you must export the data, either as a SQL dump or as a .csv file depending on your needs (see the export best practices), create a Postgres 11 instance, and then import the data.
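As a sketch, the dump/restore can be driven with gcloud like this (the instance, bucket, and database names below are placeholders):

# export from the 9.6 instance to a Cloud Storage bucket
gcloud sql export sql pg96-instance gs://my-migration-bucket/dump.sql --database=mydb
# import into the new Postgres 11 instance
gcloud sql import sql pg11-instance gs://my-migration-bucket/dump.sql --database=mydb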
For the data that won’t be in the dump, you can either:
a) Do what you suggested: log the queries and then re-run the inserts.
b) Create a dump, import it into the new instance, make it live, then take another dump of the old one, compare to remove duplicates, and import the differences. This will be difficult if you have auto-incrementing primary keys.
c) Create the schema on the Postgres 11 instance and deploy it, then create the dump and import it at a later time. If you have auto-incrementing primary keys, alter the schema so the sequences start at a value you would like (see the sketch below).
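For option (c), bumping a sequence is a one-liner (the sequence name is a placeholder):

-- leave a generous gap above the highest id on the old instance
ALTER SEQUENCE my_table_id_seq RESTART WITH 1000000;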

Related

Any way in GORM to preview the schema DDL that will run during migration before running it?

We are using Golang and GORM for our ORM and database schema migrations. The default AutoMigrate is nice for adding tables/columns/indexes that are net new. One big issue with Postgres, specifically AWS Aurora Postgres, is that even a simple ALTER TABLE ... ADD COLUMN with no default and no NOT NULL constraint still seems to take a full AccessExclusiveLock. This caused us downtime in production for what is otherwise infrastructure we can deploy at any time with zero downtime.
Postgres claims this won't cause a blocking lock, but I think Aurora has some InnoDB-style engine magic that does cause one, probably for read-replica consistency?
Regardless, in our deploy shell script we're trying to see if we can ask GORM to give us the DDL it would run, like a preview, without running it. Any way to do that?
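One avenue worth trying, as a sketch only: GORM has no documented "preview" mode for AutoMigrate, but you can run the migration against a disposable scratch database with statement logging turned up and read the DDL off the logs before letting it anywhere near production. The DSN and model below are placeholders.

package main

import (
	"gorm.io/driver/postgres"
	"gorm.io/gorm"
	"gorm.io/gorm/logger"
)

// User is a placeholder model standing in for your real schema.
type User struct {
	ID   uint
	Name string
}

func main() {
	// Point at a disposable scratch database, never production (placeholder DSN).
	dsn := "host=localhost user=test dbname=scratch sslmode=disable"
	db, err := gorm.Open(postgres.Open(dsn), &gorm.Config{
		// Info level makes GORM log every statement it executes,
		// including the DDL that AutoMigrate emits.
		Logger: logger.Default.LogMode(logger.Info),
	})
	if err != nil {
		panic(err)
	}
	// Run the migration against the scratch DB, capture the logged DDL,
	// and review it before the real deploy runs the same migration.
	if err := db.AutoMigrate(&User{}); err != nil {
		panic(err)
	}
}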

Backing up a Redshift database

I want to back up an entire Redshift cluster, such that I can use the data in other databases like MySQL or Hadoop in the future.
I was looking it up, and creating a manual snapshot seems to be an option, but I guess that won't work across different database systems.
So what would be the detailed steps to back up an entire Redshift cluster?
Cluster backups (snapshots) can be done via the AWS console; however, these can only be restored to another Redshift cluster.
Because Redshift differs from Postgres in many ways, it will be tricky or impossible to use standard tools like pg_dump and pg_restore.
I think that your best option is to:
1. Extract the DDL from the Redshift tables that you wish to create elsewhere; most IDEs have a simple way to do this.
2. Modify the DDL to work with your target database (e.g. Postgres will be easy, MySQL harder).
3. Copy the contents of the Redshift database, one table at a time, to S3 using the UNLOAD command (see the example below).
4. Import the data that you unloaded in step 3 into your target tables.
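For step 3, an UNLOAD looks something like this (the bucket and IAM role ARN are placeholders):

UNLOAD ('SELECT * FROM my_table')
TO 's3://my-backup-bucket/my_table_'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftUnloadRole'
DELIMITER ',' ADDQUOTES ALLOWOVERWRITE;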

PostgreSQL in-memory [duplicate]

I want to run a small PostgreSQL database which runs in memory only, for each unit test I write. For instance:
@Before
void setUp() {
    String port = runPostgresOnRandomPort();
    connectTo("postgres://localhost:" + port + "/in_memory_db");
    // ...
}
Ideally I'll have a single postgres executable checked into version control, which the unit tests will use.
Something like HSQL, but for Postgres. How can I do that?
Where can I get such a Postgres version? How can I instruct it not to use the disk?
(Moving my answer from Using in-memory PostgreSQL and generalizing it):
You can't run Pg in-process, in-memory
I can't figure out how to run in-memory Postgres database for testing. Is it possible?
No, it is not possible. PostgreSQL is implemented in C and compiled to platform code. Unlike H2 or Derby you can't just load the jar and fire it up as a throwaway in-memory DB.
Its storage is filesystem-based, and it doesn't have any built-in storage abstraction that would allow you to use a purely in-memory datastore. You can point it at a ramdisk, tmpfs, or other ephemeral file system storage though.
Unlike SQLite, which is also written in C and compiled to platform code, PostgreSQL can't be loaded in-process either. It requires multiple processes (one per connection) because it's a multiprocessing, not a multithreading, architecture. The multiprocessing requirement means you must launch the postmaster as a standalone process.
Use throwaway containers
Since I originally wrote this the use of containers has become widespread, well understood and easy.
It should be a no-brainer to just configure a throw-away postgres instance in a Docker container for your test uses, then tear it down at the end. You can speed it up with hacks like LD_PRELOADing libeatmydata to disable that pesky "don't corrupt my data horribly on crash" feature ;).
There are a lot of wrappers to automate this for you for any test suite and language or toolchain you would like.
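As a minimal sketch (the image tag, password, and container name are arbitrary):

docker run --rm -d -e POSTGRES_PASSWORD=test -p 5432:5432 --name throwaway-pg postgres:13
# ... run the test suite against localhost:5432 ...
docker rm -f throwaway-pg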
Alternative: preconfigure a connection
(Written before easy containerization; no longer recommended)
I suggest simply writing your tests to expect a particular hostname/username/password to work, and having the test harness CREATE DATABASE a throwaway database, then DROP DATABASE at the end of the run. Get the database connection details from a properties file, build target properties, environment variable, etc.
It's safe to use an existing PostgreSQL instance that already holds databases you care about, so long as the user you supply to your unit tests is not a superuser, only a user with CREATEDB rights. At worst you'll create performance issues in the other databases. I prefer to run a completely isolated PostgreSQL install for testing for that reason.
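The harness SQL amounts to something like this (the database name is a placeholder):

-- run as a user with CREATEDB rights, not a superuser
CREATE DATABASE unittest_db;
-- ... run the test suite against unittest_db ...
DROP DATABASE unittest_db;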
Instead: Launch a throwaway PostgreSQL instance for testing
Alternately, if you're really keen you could have your test harness locate the initdb and postgres binaries, run initdb to create a database, modify pg_hba.conf to trust, run postgres to start it on a random port, create a user, create a DB, and run the tests. You could even bundle the PostgreSQL binaries for multiple architectures in a jar and unpack the ones for the current architecture to a temporary directory before running the tests.
Personally I think that's a major pain that should be avoided; it's way easier to just have a test DB configured. However, it's become a little easier with the advent of include_dir support in postgresql.conf; now you can just append one line, then write a generated config file for all the rest.
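A rough sketch of that sequence as shell commands (the path and port are arbitrary):

initdb -D /tmp/test-pg -A trust      # create a throwaway cluster, trust local connections
postgres -D /tmp/test-pg -p 54321 &  # start it on a non-default port
createdb -p 54321 testdb             # create the throwaway database
# ... run the tests against port 54321, then kill the server and rm -rf /tmp/test-pg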
Faster testing with PostgreSQL
For more information about how to safely improve the performance of PostgreSQL for testing purposes, see a detailed answer I wrote on this topic earlier: Optimise PostgreSQL for fast testing
H2's PostgreSQL dialect is not a true substitute
Some people instead use the H2 database in PostgreSQL dialect mode to run tests. I think that's almost as bad as the Rails people using SQLite for testing and PostgreSQL for production deployment.
H2 supports some PostgreSQL extensions and emulates the PostgreSQL dialect. However, it's just that - an emulation. You'll find areas where H2 accepts a query but PostgreSQL doesn't, where behaviour differs, etc. You'll also find plenty of places where PostgreSQL supports doing something that H2 just can't - like window functions, at the time of writing.
If you understand the limitations of this approach and your database access is simple, H2 might be OK. But in that case you're probably a better candidate for an ORM that abstracts the database because you're not using its interesting features anyway - and in that case, you don't have to care about database compatibility as much anymore.
Tablespaces are not the answer!
Do not use a tablespace to create an "in-memory" database. Not only is it unnecessary (it won't help performance significantly anyway), it's also a great way to disrupt access to any other databases you might care about in the same PostgreSQL install. Because I noticed too many people were doing this and running into trouble, the 9.4 documentation now contains the following warning:
WARNING: Even though located outside the main PostgreSQL data directory, tablespaces are an integral part of the database cluster and cannot be treated as an autonomous collection of data files. They are dependent on metadata contained in the main data directory, and therefore cannot be attached to a different database cluster or backed up individually. Similarly, if you lose a tablespace (file deletion, disk failure, etc.), the database cluster might become unreadable or unable to start. Placing a tablespace on a temporary file system like a RAM disk risks the reliability of the entire cluster.
(If you've done this you can mkdir the missing tablespace directory to get PostgreSQL to start again, then DROP the missing databases, tables etc. It's better to just not do it.)
Or you could create a TABLESPACE in a ramfs / tmpfs and create all your objects there.
I recently was pointed to an article about doing exactly that on Linux. The original link is dead. But it was archived (provided by Arsinclair):
https://web.archive.org/web/20160319031016/http://magazine.redhat.com/2007/12/12/tip-from-an-rhce-memory-storage-on-postgresql/
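The technique from that article boils down to something like this (the mount point is a placeholder; note the warning below before trying it):

-- assumes a tmpfs mounted at /mnt/ramdisk, writable by the postgres user
CREATE TABLESPACE ram_space LOCATION '/mnt/ramdisk/pg';
CREATE TABLE scratch_data (id int) TABLESPACE ram_space;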
Warning
This can endanger the integrity of your whole database cluster.
Read the added warning in the manual.
So this is only an option for expendable data.
For unit-testing it should work just fine. If you are running other databases on the same machine, be sure to use a separate database cluster (which has its own port) to be safe.
This is not possible with Postgres. It does not offer an in-process/in-memory engine like HSQLDB or MySQL.
If you want to create a self-contained environment you can put the Postgres binaries into SVN (but it's more than just a single executable).
You will need to run initdb to set up your test database before you can do anything with this. This can be done from a batch file or by using Runtime.exec(). But note that initdb is not fast, so you will definitely not want to run it for each test. You might get away with running it once before your test suite though.
However, while this can be done, I'd recommend having a dedicated Postgres installation where you simply recreate your test database before running your tests.
You can re-create the test database using a template database, which makes creating it quite fast (a lot faster than running initdb for each test run).
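A sketch of the template approach (the database names are placeholders):

-- one-time: load the test schema/data into the template database
CREATE DATABASE test_template;
-- before each test run: clone it (fails if anyone is still connected to the template)
DROP DATABASE IF EXISTS test_db;
CREATE DATABASE test_db TEMPLATE test_template;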
Now it is possible to run an in-memory instance of PostgreSQL in your JUnit tests via the Embedded PostgreSQL Component from OpenTable: https://github.com/opentable/otj-pg-embedded.
By adding the dependency on the otj-pg-embedded library (https://mvnrepository.com/artifact/com.opentable.components/otj-pg-embedded) you can start and stop your own instance of PostgreSQL in your @Before and @After hooks:
EmbeddedPostgres pg = EmbeddedPostgres.start();
They even offer a JUnit rule to automatically have JUnit starting and stopping your PostgreSQL database server for you:
@Rule
public SingleInstancePostgresRule pg = EmbeddedPostgresRules.singleInstance();
You could use TestContainers to spin up a PostgreSQL Docker container for tests:
http://testcontainers.viewdocs.io/testcontainers-java/usage/database_containers/
TestContainers provides a JUnit @Rule/@ClassRule: this mode starts a database inside a container before your tests and tears it down afterwards.
Example:
import static org.junit.Assert.assertEquals;

import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

import org.junit.Rule;
import org.junit.Test;
import org.testcontainers.containers.PostgreSQLContainer;

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class SimplePostgreSQLTest {

    @Rule
    public PostgreSQLContainer postgres = new PostgreSQLContainer();

    @Test
    public void testSimple() throws SQLException {
        // Point HikariCP at the container's connection details.
        HikariConfig hikariConfig = new HikariConfig();
        hikariConfig.setJdbcUrl(postgres.getJdbcUrl());
        hikariConfig.setUsername(postgres.getUsername());
        hikariConfig.setPassword(postgres.getPassword());
        HikariDataSource ds = new HikariDataSource(hikariConfig);

        Statement statement = ds.getConnection().createStatement();
        statement.execute("SELECT 1");
        ResultSet resultSet = statement.getResultSet();

        resultSet.next();
        int resultSetInt = resultSet.getInt(1);
        assertEquals("A basic SELECT query succeeds", 1, resultSetInt);
    }
}
If you are using NodeJS, you can use pg-mem (disclaimer: I'm the author) to emulate the most common features of a postgres db.
You will have a full in-memory, isolated, platform-agnostic database replicating PG behaviour (it even runs in browsers).
I wrote an article to show how to use it for your unit tests here.
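A minimal sketch of pg-mem usage, based on its README:

import { newDb } from 'pg-mem';

// Everything lives in memory; nothing touches the disk.
const db = newDb();
db.public.query(`CREATE TABLE users (id serial PRIMARY KEY, name text)`);
db.public.query(`INSERT INTO users (name) VALUES ('alice')`);
const rows = db.public.many(`SELECT * FROM users`);
console.log(rows); // [ { id: 1, name: 'alice' } ]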
There is now an in-memory version of PostgreSQL from the Russian search company Yandex: https://github.com/yandex-qatools/postgresql-embedded
It's based on Flapdoodle OSS's embed process.
Example of using (from github page):
// starting Postgres
final EmbeddedPostgres postgres = new EmbeddedPostgres(V9_6);
// predefined data directory
// final EmbeddedPostgres postgres = new EmbeddedPostgres(V9_6, "/path/to/predefined/data/directory");
final String url = postgres.start("localhost", 5432, "dbName", "userName", "password");
// connecting to a running Postgres and feeding up the database
final Connection conn = DriverManager.getConnection(url);
conn.createStatement().execute("CREATE TABLE films (code char(5));");
I've been using it for some time and it works well.
UPDATE: this project is no longer actively maintained.
Please be advised that the main maintainer of this project has successfully migrated to the Test Containers project. That is the best possible alternative nowadays.
If you can use Docker, you can mount the PostgreSQL data directory in memory for testing:
docker run --tmpfs=/data -e PGDATA=/data postgres
You can also use PostgreSQL configuration settings (such as those detailed in the question and accepted answer here) to achieve performance without necessarily resorting to an in-memory database.
If you're using Java, there is a library I've seen used effectively that provides an in-memory "embedded" Postgres environment, used mostly for unit tests.
https://github.com/opentable/otj-pg-embedded
This might be able to solve your use case if you've come to this search result looking for the answer.
If you have full control over your environment, you arguably want to run PostgreSQL on ZFS: filesystem snapshots let you roll the data directory back to a clean state almost instantly between test runs.

How to salvage data from Heroku Postgres

We are using Heroku Postgres with Ruby on Rails 3.2.
A few days ago we deleted important data by mistake using 'heroku run db:load' with a misconfigured data.yml; that is, it dropped the tables and recreated them with almost no data.
The only backup available is from two weeks earlier, so we lost two weeks of data.
So we need to recover not via PG Backups/pg_dump but from PostgreSQL's system data files.
I think the only way to recover the data is to restore it from the xlog or archive files, but of course we don't have the superuser/replication-role permissions needed to copy the Postgres database on Heroku (or Amazon EC2) to a local server.
Has anyone faced such a case and resolved the problem?
Your only option is the backups provided by the PgBackups service (if you had that running). If not, Heroku support might have more options available.
At a minimum, you will have some data loss, but you can guarantee you won't do it again ;)

Slow database restore from SQL file, considering binary replication to rollback tests database

When using Cucumber with Capybara, I have to load the test database from a SQL data dump.
Unfortunately it takes 10s for each scenario, which slows down the tests.
I have found something like: http://wiki.postgresql.org/wiki/Binary_Replication_Tutorial#How_to_Replicate
Do you think binary replication will be quicker than using SQL files?
Is there anything I can do to make the restore quicker (I restore just data, not structure)?
What approaches would you recommend to try?
You could try to put your test data into a "template" database (e.g. mydb_template)
To prepare the test scenario, you simply drop your database using DROP DATABASE mydb and recreate it based on the template: CREATE DATABASE mydb TEMPLATE = mydb_template;.
Of course you'll need to connect to e.g. template0 or the postgres database in order to be able to drop mydb.
I think this could be faster than importing a dump.
I recall a discussion on the PG mailing list regarding this approach and some performance problems with large "templates" that were fixed in 9.0.
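The per-scenario reset then amounts to two psql calls (connecting to the postgres maintenance database so mydb can be dropped):

psql -d postgres -c 'DROP DATABASE IF EXISTS mydb;'
psql -d postgres -c 'CREATE DATABASE mydb TEMPLATE = mydb_template;'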
(I restore just data, not structure)
COPY is always fastest for importing just data. The other answer deals with restoring a whole database.
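A minimal COPY example (the table and path are placeholders; the file must be readable by the server process, or use psql's client-side \copy instead):

COPY my_table FROM '/tmp/my_table.csv' WITH (FORMAT csv);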